[
{
"msg_contents": "As explained in the comments for generate_orderedappend_paths, we don't\ncurrently support parameterized MergeAppend paths, and it doesn't seem\nlike going to change anytime soon. Based on that, we could simplify\ncreate_merge_append_path a bit, such as set param_info to NULL directly\nrather than call get_appendrel_parampathinfo() for it. We already have\nan Assert on that in create_merge_append_plan.\n\nI understand that the change would not make any difference for\nperformance, it's just for clarity's sake.\n\nThanks\nRichard",
"msg_date": "Fri, 11 Aug 2023 10:31:34 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Simplify create_merge_append_path a bit for clarity"
},
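A minimal C sketch of the simplification described in this message (not the committed patch; the body is abbreviated, only the relevant assignment is shown, and the PG16-era signature of create_merge_append_path() is assumed):

```c
/*
 * Sketch only: because parameterized MergeAppend paths are never generated,
 * param_info can be set to NULL directly instead of calling
 * get_appendrel_parampathinfo().  Costing and most field setup are elided.
 */
MergeAppendPath *
create_merge_append_path(PlannerInfo *root,
						 RelOptInfo *rel,
						 List *subpaths,
						 List *pathkeys,
						 Relids required_outer)
{
	MergeAppendPath *pathnode = makeNode(MergeAppendPath);

	/* We don't currently support parameterized MergeAppend paths. */
	Assert(bms_is_empty(required_outer));

	pathnode->path.pathtype = T_MergeAppend;
	pathnode->path.parent = rel;
	pathnode->path.pathtarget = rel->reltarget;
	pathnode->path.param_info = NULL;	/* was get_appendrel_parampathinfo() */
	pathnode->path.pathkeys = pathkeys;
	pathnode->subpaths = subpaths;

	/* ... cost_merge_append() and the rest as before ... */

	return pathnode;
}
```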
{
"msg_contents": "Hi!\n\nOn 11.08.2023 05:31, Richard Guo wrote:\n> As explained in the comments for generate_orderedappend_paths, we don't\n> currently support parameterized MergeAppend paths, and it doesn't seem\n> like going to change anytime soon. Based on that, we could simplify\n> create_merge_append_path a bit, such as set param_info to NULL directly\n> rather than call get_appendrel_parampathinfo() for it. We already have\n> an Assert on that in create_merge_append_plan.\n>\n> I understand that the change would not make any difference for\n> performance, it's just for clarity's sake.\n\nI agree with you, and we can indeed directly set the param_info value to \nNULL, and there are enough comments here to explain.\n\nI didn't find anything else to add in your patch.\n\n-- \nRegards,\nAlena Rybakina\n\n\n\n",
"msg_date": "Tue, 24 Oct 2023 13:00:13 +0300",
"msg_from": "Alena Rybakina <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Simplify create_merge_append_path a bit for clarity"
},
{
"msg_contents": "On Tue, Oct 24, 2023 at 6:00 PM Alena Rybakina <[email protected]>\nwrote:\n\n> I agree with you, and we can indeed directly set the param_info value to\n> NULL, and there are enough comments here to explain.\n>\n> I didn't find anything else to add in your patch.\n\n\nThanks for reviewing this patch!\n\nThanks\nRichard\n\nOn Tue, Oct 24, 2023 at 6:00 PM Alena Rybakina <[email protected]> wrote:\nI agree with you, and we can indeed directly set the param_info value to \nNULL, and there are enough comments here to explain.\n\nI didn't find anything else to add in your patch.Thanks for reviewing this patch!ThanksRichard",
"msg_date": "Wed, 25 Oct 2023 15:23:28 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Simplify create_merge_append_path a bit for clarity"
},
{
"msg_contents": "On Fri, Jul 26, 2024 at 1:28 PM Paul A Jungwirth\n<[email protected]> wrote:\n> Is there a reason you don't want to remove the required_outer\n> parameter altogether? I guess because it is such a common pattern to\n> pass it?\n\nI think it's best to keep this parameter unchanged to maintain\nconsistency with other functions that create path nodes in pathnode.c.\n\n> Do you think it is worth keeping this assertion?:\n>\n> -\n> - /* All child paths must have same parameterization */\n> - Assert(bms_equal(PATH_REQ_OUTER(subpath), required_outer));\n>\n> I understand any failure would trigger one of the prior asserts\n> instead, but it does communicate an extra requirement, and there is no\n> cost.\n\nI don't think it's a good idea to keep this Assert: with this change\nit becomes redundant.\n\n> But I'd be fine with committing this patch as-is.\n\nI've pushed this patch. Thanks for review.\n\nThanks\nRichard\n\n\n",
"msg_date": "Mon, 29 Jul 2024 11:03:23 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Simplify create_merge_append_path a bit for clarity"
}
]
[
{
"msg_contents": "While reviewing other threads I have been looking more closely at the\nthe logicalrep_worker_launch() function. IMO the logic of that\nfunction seems not quite right.\n\nHere are a few things I felt are strange:\n\n1. The function knows exactly what type of worker it is launching, but\nstill, it is calling the worker counting functions\nlogicalrep_sync_worker_count() and logicalrep_pa_worker_count() even\nwhen launching a worker of a *different* type.\n\n1a. I think should only count/check the tablesync worker limit when\ntrying to launch a tablesync worker\n\n1b. I think should only count/check the parallel apply worker limit\nwhen trying to launch a parallel apply worker\n\n~\n\n2. There is some condition for attempting the garbage-collection of workers:\n\n/*\n* If we didn't find a free slot, try to do garbage collection. The\n* reason we do this is because if some worker failed to start up and its\n* parent has crashed while waiting, the in_use state was never cleared.\n*/\nif (worker == NULL || nsyncworkers >= max_sync_workers_per_subscription)\n\nThe inclusion of that nsyncworkers check here has very subtle\nimportance. AFAICT this means that even if we *did* find a free\nworker, we still need to do garbage collection just in case one of\nthose 'in_use' tablesync worker was in error (e.g. crashed after\nmarked in_use). By garbage-collecting (and then re-counting\nnsyncworkers) we might be able to launch the tablesync successfully\ninstead of just returning that it has maxed out.\n\n2a. IIUC that is all fine. The problem is that I think there should be\nexactly the same logic for the parallel apply workers here also.\n\n2b. The comment above should explain more about the reason for the\nnsyncworkers condition -- the existing comment doesn't really cover\nit.\n\n~\n\n3. There is a wrong cut/paste comment in the body of\nlogicalrep_sync_worker_count().\n\nThat comment should be changed to read similarly to the equivalent\ncomment in logicalrep_pa_worker_count.\n\n------\n\nPSA a patch to address all these items.\n\nThis patch is about making the function logically consistent. Removing\nsome of the redundant countings should also be more efficient in\ntheory, but in practice, I think the unnecessary worker loops are too\nshort (max_logical_replication_workers) for any performance\nimprovements to be noticeable.\n\nThoughts?\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia",
"msg_date": "Fri, 11 Aug 2023 18:58:35 +1000",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": true,
"msg_subject": "logicalrep_worker_launch -- counting/checking the worker limits"
},
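A rough sketch of the restructuring proposed in items 1 and 2a above, written as a fragment inside logicalrep_worker_launch() where relid, subworker_dsm, subid and the slot-search result worker are in scope (PG16-era names assumed; is_tablesync/is_parallel_apply are illustrative, and the slot search and garbage collection are only indicated in comments):

```c
/*
 * Sketch only (not the attached patch): count and check each limit only
 * for the worker type actually being launched, and let the "maxed out"
 * condition for either type trigger garbage collection of stale slots.
 */
bool	is_tablesync = OidIsValid(relid);
bool	is_parallel_apply = (subworker_dsm != DSM_HANDLE_INVALID);
int		nsyncworkers = is_tablesync ?
	logicalrep_sync_worker_count(subid) : 0;
int		nparallelapplyworkers = is_parallel_apply ?
	logicalrep_pa_worker_count(subid) : 0;

/*
 * If we found no free slot, or the limit relevant to this worker type
 * appears to be reached, try garbage collection: a worker that crashed
 * after being marked in_use may still be counted above.
 */
if (worker == NULL ||
	(is_tablesync &&
	 nsyncworkers >= max_sync_workers_per_subscription) ||
	(is_parallel_apply &&
	 nparallelapplyworkers >= max_parallel_apply_workers_per_subscription))
{
	/* ... clear stale in_use entries, recount the limits, and retry ... */
}
```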
{
"msg_contents": "The previous patch was accidentally not resetting the boolean limit\nflags to false for retries.\n\nFixed in v2.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia",
"msg_date": "Sat, 12 Aug 2023 09:58:40 +1000",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: logicalrep_worker_launch -- counting/checking the worker limits"
},
{
"msg_contents": "On Fri, Aug 11, 2023 at 2:29 PM Peter Smith <[email protected]> wrote:\n>\n> While reviewing other threads I have been looking more closely at the\n> the logicalrep_worker_launch() function. IMO the logic of that\n> function seems not quite right.\n>\n> Here are a few things I felt are strange:\n>\n...\n>\n> 2. There is some condition for attempting the garbage-collection of workers:\n>\n> /*\n> * If we didn't find a free slot, try to do garbage collection. The\n> * reason we do this is because if some worker failed to start up and its\n> * parent has crashed while waiting, the in_use state was never cleared.\n> */\n> if (worker == NULL || nsyncworkers >= max_sync_workers_per_subscription)\n>\n> The inclusion of that nsyncworkers check here has very subtle\n> importance. AFAICT this means that even if we *did* find a free\n> worker, we still need to do garbage collection just in case one of\n> those 'in_use' tablesync worker was in error (e.g. crashed after\n> marked in_use). By garbage-collecting (and then re-counting\n> nsyncworkers) we might be able to launch the tablesync successfully\n> instead of just returning that it has maxed out.\n>\n> 2a. IIUC that is all fine. The problem is that I think there should be\n> exactly the same logic for the parallel apply workers here also.\n>\n\nDid you try to reproduce this condition, if not, can you please try it\nonce? I wonder if the leader worker crashed, won't it lead to a\nrestart of the server?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Sat, 12 Aug 2023 19:48:51 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: logicalrep_worker_launch -- counting/checking the worker limits"
},
{
"msg_contents": "A rebase was needed due to a recent push [1].\n\nPSA v3.\n\n------\n[1] https://github.com/postgres/postgres/commit/2a8b40e3681921943a2989fd4ec6cdbf8766566c\n\nKind Regards,\nPeter Smith.\nFujitsu Australia",
"msg_date": "Tue, 15 Aug 2023 12:38:28 +1000",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: logicalrep_worker_launch -- counting/checking the worker limits"
},
{
"msg_contents": "On Tue, 15 Aug 2023 at 08:09, Peter Smith <[email protected]> wrote:\n>\n> A rebase was needed due to a recent push [1].\n\nI have changed the status of the patch to \"Waiting on Author\" as\nAmit's queries at [1] have not been verified and concluded. Please\nfeel free to address them and change the status back again.\n\n[1] - https://www.postgresql.org/message-id/CAA4eK1LtFyiMV6e9%2BRr66pe5e-MX7Pk6N3iHd4JgcBW1X4kS6Q%40mail.gmail.com\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Sun, 14 Jan 2024 11:13:43 +0530",
"msg_from": "vignesh C <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: logicalrep_worker_launch -- counting/checking the worker limits"
},
{
"msg_contents": "\n\n> On 15 Aug 2023, at 07:38, Peter Smith <[email protected]> wrote:\n> \n> A rebase was needed due to a recent push [1].\n> \n> PSA v3.\n\n\n> On 14 Jan 2024, at 10:43, vignesh C <[email protected]> wrote:\n> \n> I have changed the status of the patch to \"Waiting on Author\" as\n> Amit's queries at [1] have not been verified and concluded. Please\n> feel free to address them and change the status back again.\n\nHi Peter!\n\nAre you still interested in this thread? If so - please post an answer to Amit's question.\nIf you are not interested - please Withdraw a CF entry [0].\n\nThanks!\n\n\nBest regards, Andrey Borodin.\n\n[0] https://commitfest.postgresql.org/47/4499/\n\n",
"msg_date": "Sun, 31 Mar 2024 14:12:01 +0500",
"msg_from": "\"Andrey M. Borodin\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: logicalrep_worker_launch -- counting/checking the worker limits"
},
{
"msg_contents": "On Sun, Mar 31, 2024 at 8:12 PM Andrey M. Borodin <[email protected]> wrote:\n>\n>\n>\n> > On 15 Aug 2023, at 07:38, Peter Smith <[email protected]> wrote:\n> >\n> > A rebase was needed due to a recent push [1].\n> >\n> > PSA v3.\n>\n>\n> > On 14 Jan 2024, at 10:43, vignesh C <[email protected]> wrote:\n> >\n> > I have changed the status of the patch to \"Waiting on Author\" as\n> > Amit's queries at [1] have not been verified and concluded. Please\n> > feel free to address them and change the status back again.\n>\n> Hi Peter!\n>\n> Are you still interested in this thread? If so - please post an answer to Amit's question.\n> If you are not interested - please Withdraw a CF entry [0].\n>\n> Thanks!\n\nYeah, sorry for the long period of inactivity on this thread. Although\nI still have some interest in it, I don't know when I'll get back to\nit again so meantime I've withdrawn this from the CF as requested.\n\nKind regards,\nPeter Smith\nFujitsu Australia\n\n\n",
"msg_date": "Tue, 23 Apr 2024 13:06:03 +1000",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: logicalrep_worker_launch -- counting/checking the worker limits"
}
]
[
{
"msg_contents": "Hi All,\nPostgreSQL code uses Assert() as a way to\n1. document the assumption or conditions which must be true at a given\nplace in code\n2. make sure that some bug elsewhere does not break these assumptions or rules\n3. conditions which can not be (easily) induced by user actions\n\nE.g. following conditions in adjust_appendrel_attrs()\n/* If there's nothing to adjust, don't call this function. */\nAssert(nappinfos >= 1 && appinfos != NULL);\n\n/* Should never be translating a Query tree. */\nAssert(node == NULL || !IsA(node, Query));\n\nThese conditions do not make it to non-assert builds and thus do not\nmake it to the production binary. That saves some CPU cycles.\n\nWhen an Assert fails, and it fails quite a lot for developers, the\nPostgres backend that caused the Assert is Aborted, restarting the\nserver. So a developer testing code that caused the Assert has to\nstart a psql again, run any session setup before running the faulting\nquery, gdb needs to be reattached to the new backend process. That's\nsome waste of time and frustration, esp. when the Assert does not\ndamage the backend itself e.g. by corrupting memory.\n\nMost of the Asserts are recoverable by rolling back the transaction\nwithout crashing the backend. So an elog(ERROR, ) is enough. But just\nby themselves elogs are compiled into non-debug binary and the\ncondition check can waste CPU cycles esp. conditions are mostly true\nlike the ones we use in Assert.\n\nAttached patch combines Assert and elog(ERROR, ) so that when an\nAssert is triggered in assert-enabled binary, it will throw an error\nwhile keeping the backend intact. Thus it does not affect gdb session\nor psql session. These elogs do not make their way to non-assert\nbinary so do not make it to production like Assert.\n\nI have been using AssertLog for my work for some time. It is quite\nconvenient. With AssertLog I get\n```\n#explain (summary on) select * from a, b, c where a.c1 = b.c1 and b.c1\n= c.c1 and a.c2 < b.c2 and a.c3 + b.c3 < c.c3;\nERROR: failed Assert(\"equal(child_rinfo, adjust_appendrel_attrs(root,\n(Node *) rinfo, nappinfos, appinfos))\"), File: \"relnode.c\", Line: 997,\nPID: 568088\n```\ninstead of\n```\n#explain (summary on) select * from a, b, c where a.c1 = b.c1 and b.c1\n= c.c1 and a.c2 < b.c2 and a.c3 + b.c3 < c.c3;\nserver closed the connection unexpectedly\n This probably means the server terminated abnormally\n before or while processing the request.\nThe connection to the server was lost. Attempting reset: Failed.\nThe connection to the server was lost. Attempting reset: Failed.\n@!#\\q\n```\n\nIf there is an interest in accepting the patch, I will add it to the\ncommitfest and work on it further.\n\n--\nBest Wishes,\nAshutosh Bapat",
"msg_date": "Fri, 11 Aug 2023 17:59:37 +0530",
"msg_from": "Ashutosh Bapat <[email protected]>",
"msg_from_op": true,
"msg_subject": "AssertLog instead of Assert in some places"
},
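The core of the proposal can be sketched as a macro along these lines (a simplified reconstruction, not the attached patch; the real patch may differ in naming and message format):

```c
/*
 * Sketch: in assert-enabled builds a failed check raises a normal ERROR,
 * which rolls back the transaction but keeps the backend (and any attached
 * gdb/psql session) alive; in non-assert builds it compiles away entirely.
 */
#ifdef USE_ASSERT_CHECKING
#define AssertLog(condition) \
	do { \
		if (!(condition)) \
			elog(ERROR, "failed Assert(\"%s\"), File: \"%s\", Line: %d", \
				 #condition, __FILE__, __LINE__); \
	} while (0)
#else
#define AssertLog(condition)	((void) true)
#endif
```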
{
"msg_contents": "Hi,\n\nOn 2023-08-11 17:59:37 +0530, Ashutosh Bapat wrote:\n> Most of the Asserts are recoverable by rolling back the transaction\n> without crashing the backend. So an elog(ERROR, ) is enough. But just\n> by themselves elogs are compiled into non-debug binary and the\n> condition check can waste CPU cycles esp. conditions are mostly true\n> like the ones we use in Assert.\n> \n> Attached patch combines Assert and elog(ERROR, ) so that when an\n> Assert is triggered in assert-enabled binary, it will throw an error\n> while keeping the backend intact. Thus it does not affect gdb session\n> or psql session. These elogs do not make their way to non-assert\n> binary so do not make it to production like Assert.\n\nI am quite strongly against this. This will lead to assertions being hit in\ntests without that being noticed, e.g. because they happen in a background\nprocess that just restarts.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 11 Aug 2023 10:57:23 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AssertLog instead of Assert in some places"
},
{
"msg_contents": "On Fri, Aug 11, 2023 at 10:57 AM Andres Freund <[email protected]> wrote:\n> I am quite strongly against this. This will lead to assertions being hit in\n> tests without that being noticed, e.g. because they happen in a background\n> process that just restarts.\n\nCouldn't you say the same thing about defensive \"can't happen\" ERRORs?\nThey are essentially a form of assertion that isn't limited to\nassert-enabled builds.\n\nI have sometimes thought that it would be handy if there was a variant\nof \"can't happen\" ERRORs that took on the structure of an assertion.\n(This is quite different to what Ashutosh has proposed, though, since\nit would still look like a conventional assertion failure on\nassert-enabled builds.)\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Fri, 11 Aug 2023 11:14:34 -0700",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AssertLog instead of Assert in some places"
},
{
"msg_contents": "Hi,\n\nOn 2023-08-11 11:14:34 -0700, Peter Geoghegan wrote:\n> On Fri, Aug 11, 2023 at 10:57 AM Andres Freund <[email protected]> wrote:\n> > I am quite strongly against this. This will lead to assertions being hit in\n> > tests without that being noticed, e.g. because they happen in a background\n> > process that just restarts.\n> \n> Couldn't you say the same thing about defensive \"can't happen\" ERRORs?\n> They are essentially a form of assertion that isn't limited to\n> assert-enabled builds.\n\nYes. A lot of them I hate them with the passion of a thousand suns ;). \"Oh,\nour transaction state machinery is confused. Yes, let's just continue going\nthrough the same machinery again, that'll resolve it.\". Even worse are the\nones that are just WARNINGS. Like \"Oh, something wrote beyond the length of\nallocated memory, that's something to issue a WARNING about and happily\ncontinue\".\n\nThere've been people arguing in the past that it's good for robustness to do\nstuff like that. I think it's exactly the opposite.\n\nNow, don't get me wrong, there are plenty cases where a \"this shouldn't\nhappen\" elog(ERROR) is appropriate...\n\n\n> I have sometimes thought that it would be handy if there was a variant\n> of \"can't happen\" ERRORs that took on the structure of an assertion.\n\nWhat are you imagining? Basically something that generates an elog(ERROR) with\nthe stringified expression as part of the error message?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 11 Aug 2023 11:23:50 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AssertLog instead of Assert in some places"
},
{
"msg_contents": "On Fri, Aug 11, 2023 at 11:23 AM Andres Freund <[email protected]> wrote:\n> > Couldn't you say the same thing about defensive \"can't happen\" ERRORs?\n> > They are essentially a form of assertion that isn't limited to\n> > assert-enabled builds.\n>\n> Yes. A lot of them I hate them with the passion of a thousand suns ;). \"Oh,\n> our transaction state machinery is confused. Yes, let's just continue going\n> through the same machinery again, that'll resolve it.\".\n\nI am not unsympathetic to Ashutosh's point about conventional ERRORs\nbeing easier to deal with when debugging your own code, during initial\ndevelopment work. But that seems like a problem with the tooling in\nother areas.\n\nFor example, dealing with core dumps left behind by the regression\ntests can be annoying. Don't you also hate it when there's a\nregression.diffs that just shows 20k lines of subtractions? Perhaps\nyou don't -- perhaps your custom setup makes it quick and easy to get\nrelevant information about what actually went wrong. But it seems like\nthat sort of thing could be easier to deal with by default, without\nusing custom shell scripts or anything -- particularly for those of us\nthat haven't been Postgres hackers for eons.\n\n> There've been people arguing in the past that it's good for robustness to do\n> stuff like that. I think it's exactly the opposite.\n>\n> Now, don't get me wrong, there are plenty cases where a \"this shouldn't\n> happen\" elog(ERROR) is appropriate...\n\nRight. They're not bad -- they just need to be backed up by some kind\nof reasoning, which will be particular to each case. The default\napproach should be to crash whenever any invariants are violated,\nbecause all bets are off at that point.\n\n> What are you imagining? Basically something that generates an elog(ERROR) with\n> the stringified expression as part of the error message?\n\nI'd probably start with a new elevel, that implied an assertion\nfailure in assert-enabled builds but otherwise acted just like ERROR.\nI remember multiple cases where I added an \"Assert(false)\" right after\na \"can't happen\" error, which isn't a great approach.\n\nIt might also be useful to offer something along the lines you've\ndescribed, which I was sort of thinking of myself. But now that I've\nthought about it a little more, I think that such an approach might\nend up being overused. If you're going to add what amounts to a \"can't\nhappen\" ERROR then you should really be obligated to write a halfway\nreasonable error message. As I said, you should have to own the fact\nthat you think it's better to not just crash for this one particular\ncallsite, based on some fairly specific chain of reasoning. Ideally\nit'll be backed up by practical experience/user reports.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Fri, 11 Aug 2023 11:56:27 -0700",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AssertLog instead of Assert in some places"
},
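One way to read the "new elevel" idea is sketched below as a hypothetical macro rather than an actual elevel (the name and shape are invented, not an existing PostgreSQL facility): the check and its descriptive message exist in every build, but assert-enabled builds still die hard so the failure cannot be silently swallowed by tests.

```c
/* Hypothetical sketch only; not PostgreSQL code. */
#ifdef USE_ASSERT_CHECKING
#define elog_cant_happen(...) \
	do { \
		elog(LOG, __VA_ARGS__); /* keep the descriptive message */ \
		Assert(false); /* then fail like an assertion */ \
	} while (0)
#else
#define elog_cant_happen(...)	elog(ERROR, __VA_ARGS__)
#endif

/* usage: elog_cant_happen("unexpected transaction state %d", state); */
```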
{
"msg_contents": "Hi,\n\nOn 2023-08-11 11:56:27 -0700, Peter Geoghegan wrote:\n> On Fri, Aug 11, 2023 at 11:23 AM Andres Freund <[email protected]> wrote:\n> > > Couldn't you say the same thing about defensive \"can't happen\" ERRORs?\n> > > They are essentially a form of assertion that isn't limited to\n> > > assert-enabled builds.\n> >\n> > Yes. A lot of them I hate them with the passion of a thousand suns ;). \"Oh,\n> > our transaction state machinery is confused. Yes, let's just continue going\n> > through the same machinery again, that'll resolve it.\".\n> \n> I am not unsympathetic to Ashutosh's point about conventional ERRORs\n> being easier to deal with when debugging your own code, during initial\n> development work.\n\nOh, I am as well - I just don't think it's a good idea to introduce \"log + error\"\nassertions to core postgres, because it seems very likely that they'll end up\ngetting used a lot.\n\n\n> But that seems like a problem with the tooling in other areas.\n\nAgreed.\n\n\n> For example, dealing with core dumps left behind by the regression\n> tests can be annoying.\n\nHm. I don't have a significant problem with that. But I can see it being\nproblematic. Unfortunately, short of preventing core dumps from happening,\nI don't think we really can do much about that - whatever is running the tests\nshouldn't have privileges to change system wide settings about where core\ndumps end up etc.\n\n\n> Don't you also hate it when there's a regression.diffs that just shows 20k\n> lines of subtractions? Perhaps you don't -- perhaps your custom setup makes\n> it quick and easy to get relevant information about what actually went\n> wrong.\n\nI do really hate that. At the very least we should switch to using\nrestart-after-crash by default, and not start new tests once the server has\ncrashed and do a waitpid(postmaster, WNOHANG) after each failing test, to see\nif the reason the test failed is that the backend died.\n\n\n> But it seems like that sort of thing could be easier to deal with by\n> default, without using custom shell scripts or anything -- particularly for\n> those of us that haven't been Postgres hackers for eons.\n\nYes, wholeheartedly agreed.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 11 Aug 2023 12:26:17 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AssertLog instead of Assert in some places"
},
{
"msg_contents": "On Fri, Aug 11, 2023 at 12:26 PM Andres Freund <[email protected]> wrote:\n> > For example, dealing with core dumps left behind by the regression\n> > tests can be annoying.\n>\n> Hm. I don't have a significant problem with that. But I can see it being\n> problematic. Unfortunately, short of preventing core dumps from happening,\n> I don't think we really can do much about that - whatever is running the tests\n> shouldn't have privileges to change system wide settings about where core\n> dumps end up etc.\n\nI was unclear. I wasn't talking about managing core dumps. I was\ntalking about using core dumps to get a simple backtrace, that just\ngives me some very basic information. I probably shouldn't have even\nmentioned core dumps, because what I'm really concerned about is the\nworkflow around getting very basic information about assertion\nfailures. Not around core dumps per se.\n\nThe inconsistent approach needed to get a simple, useful backtrace for\nassertion failures (along with other basic information associated with\nthe failure) is a real problem. Particularly when running the tests.\nMost individual assertion failures that I see are for code that I'm\npractically editing in real time. So shaving cycles here really\nmatters.\n\nFor one thing the symbol mangling that we have around the built-in\nbacktraces makes them significantly less useful. I really hope that\nyour libbacktrace patch gets committed soon, since that looks like it\nwould be a nice quality of life improvement, all on its own.\n\nIt would also be great if the tests spit out information about\nassertion failures that was reasonably complete (backtrace without any\nmangling, query text included, other basic context), reliably and\nuniformly -- it shouldn't matter if it's from TAP or pg_regress test\nSQL scripts. Which kind of test happened to be involved is usually not\ninteresting to me here (even the query text won't usually be\ninteresting), so being forced to think about it slows me down quite a\nlot.\n\n> > Don't you also hate it when there's a regression.diffs that just shows 20k\n> > lines of subtractions? Perhaps you don't -- perhaps your custom setup makes\n> > it quick and easy to get relevant information about what actually went\n> > wrong.\n>\n> I do really hate that. At the very least we should switch to using\n> restart-after-crash by default, and not start new tests once the server has\n> crashed and do a waitpid(postmaster, WNOHANG) after each failing test, to see\n> if the reason the test failed is that the backend died.\n\n+1\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Fri, 11 Aug 2023 13:19:34 -0700",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AssertLog instead of Assert in some places"
},
{
"msg_contents": "Hi,\n\nOn 2023-08-11 13:19:34 -0700, Peter Geoghegan wrote:\n> On Fri, Aug 11, 2023 at 12:26 PM Andres Freund <[email protected]> wrote:\n> > > For example, dealing with core dumps left behind by the regression\n> > > tests can be annoying.\n> >\n> > Hm. I don't have a significant problem with that. But I can see it being\n> > problematic. Unfortunately, short of preventing core dumps from happening,\n> > I don't think we really can do much about that - whatever is running the tests\n> > shouldn't have privileges to change system wide settings about where core\n> > dumps end up etc.\n>\n> I was unclear. I wasn't talking about managing core dumps. I was\n> talking about using core dumps to get a simple backtrace, that just\n> gives me some very basic information. I probably shouldn't have even\n> mentioned core dumps, because what I'm really concerned about is the\n> workflow around getting very basic information about assertion\n> failures. Not around core dumps per se.\n>\n> The inconsistent approach needed to get a simple, useful backtrace for\n> assertion failures (along with other basic information associated with\n> the failure) is a real problem. Particularly when running the tests.\n> Most individual assertion failures that I see are for code that I'm\n> practically editing in real time. So shaving cycles here really\n> matters.\n\nAh, yes. Agreed, obviously.\n\n\n> For one thing the symbol mangling that we have around the built-in\n> backtraces makes them significantly less useful. I really hope that\n> your libbacktrace patch gets committed soon, since that looks like it\n> would be a nice quality of life improvement, all on its own.\n\nI've been hacking a further on it:\nhttps://github.com/anarazel/postgres/tree/backtrace\n\nHaven't yet posted a new version. Doing it properly for fronted tools has some\ndependencies on threading stuff. I'm hoping Thomas' patch for that will go in.\n\n\nNow it also intercepts segfaults and prints\nbacktraces for them, if that's possible (it requires libbacktrace to be async\nsignal safe, which it isn't on all platforms).\n\nWhere supported, a crash (distinguishing from assertion failures) will now\nreport something like:\n\nprocess with pid: 2900527 received signal: SIGSEGV, si_code: 1, si_addr: 0xdeadbeef\n\t[0x5628ec45212f] pg_fatalsig_handler+0x20f: ../../../../home/andres/src/postgresql/src/common/pg_backtrace.c:615\n\t[0x7fc4b743650f] [unknown]\n\t[0x5628ec14897c] check_root+0x19c (inlined): ../../../../home/andres/src/postgresql/src/backend/main/main.c:393\n\t[0x5628ec14897c] main+0x19c: ../../../../home/andres/src/postgresql/src/backend/main/main.c:183\n\nafter I added\n\t*(volatile int*)0xdeadbeef = 1;\nto check_root().\n\n\nFor signals sent by users, it'd show the pid of the process sending the signal\non most OSs. I really would like some generalized infrastructure for that, so\nthat we can report for things like query cancellations.\n\n\nAs the patch stands, the only OS that doesn't yet have useful \"backtrace on\ncrash\" support with that is windows, as libbacktrace unfortunately isn't\nsignal safe on windows. But it'd still provide useful backtraces on\nAssert()s. I haven't yet figured out whether/when it's required to be signal\nsafe on windows though - crashes are intercepted by\nSetUnhandledExceptionFilter() - I don't understand the precise constraints of\nthat. 
Plenty people seem to generate backtraces on crashes on windows using\nthat, without concerns for signal safety like things.\n\n\nCurrently Macos CI doesn't use libbacktrace, but as it turns out\nbacktrace_symbols() on windows is a heck of a lot better than on glibc\nplatforms. CI for windows with visual studio doesn't have libbacktrace\ninstalled yet (and has the aforementioned signal safety issue), I think it's\ninstalled for windows w/ mingw.\n\n\n> It would also be great if the tests spit out information about\n> assertion failures that was reasonably complete (backtrace without any\n> mangling, query text included, other basic context), reliably and\n> uniformly -- it shouldn't matter if it's from TAP or pg_regress test\n> SQL scripts.\n\nHm. What other basic context are you thinking of? Pid is obvious. I guess\nbackend type could be useful too, but normally be inferred from the stack\ntrace pretty easily. Application name could be useful to know which test\ncaused the crash.\n\nI'm a bit wary about trying to print things like query text - what if that\nstring points to memory not terminated by a \\0? I guess we could use an\napproach similar to pgstat_get_crashed_backend_activity().\n\nOne issue with reporting crashes from signal handlers is that the obvious way\nto make that signal safe (lots of small writes) leads to the potential for\ninterspersed lines. It's probably worth having a statically sized buffer that\nwill commonly be large enough to print a whole backtrace. When too small, it\nshould include the pid at the start of every \"chunk\".\n\n\n> Which kind of test happened to be involved is usually not interesting to me\n> here (even the query text won't usually be interesting), so being forced to\n> think about it slows me down quite a lot.\n\nInteresting - I quite often end up spending time trying to dig out which query\nfrom what sql file triggered a crash, so I can try to trigger it in\nisolation. I often wished the server knew the source line associated with the\nquery. Enough that I pondered ways to have psql transport that knowledge to the\nthe server.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 11 Aug 2023 14:04:47 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AssertLog instead of Assert in some places"
},
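The "statically sized buffer" idea for signal-time output mentioned above could look roughly like this (purely illustrative; these names are not from the PostgreSQL tree, and pid-prefixing is assumed to be done by the caller when formatting each line). Only async-signal-safe primitives (write(), memcpy(), strlen()) are used:

```c
#include <string.h>
#include <unistd.h>

#define BT_BUF_SIZE 8192

static char   bt_buf[BT_BUF_SIZE];
static size_t bt_len = 0;

/* Emit everything accumulated so far with a single write(). */
static void
bt_flush(void)
{
	if (bt_len > 0)
		(void) write(STDERR_FILENO, bt_buf, bt_len);
	bt_len = 0;
}

/* Append one pre-formatted line; fall back to chunked output only when forced. */
static void
bt_append(const char *line)
{
	size_t		n = strlen(line);

	if (n >= BT_BUF_SIZE)
	{
		/* Oversized line: flush what we have, then write it out directly. */
		bt_flush();
		(void) write(STDERR_FILENO, line, n);
		return;
	}
	if (bt_len + n > BT_BUF_SIZE)
		bt_flush();
	memcpy(bt_buf + bt_len, line, n);
	bt_len += n;
}
```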
{
"msg_contents": "On Fri, Aug 11, 2023 at 2:04 PM Andres Freund <[email protected]> wrote:\n> Where supported, a crash (distinguishing from assertion failures) will now\n> report something like:\n>\n> process with pid: 2900527 received signal: SIGSEGV, si_code: 1, si_addr: 0xdeadbeef\n> [0x5628ec45212f] pg_fatalsig_handler+0x20f: ../../../../home/andres/src/postgresql/src/common/pg_backtrace.c:615\n> [0x7fc4b743650f] [unknown]\n> [0x5628ec14897c] check_root+0x19c (inlined): ../../../../home/andres/src/postgresql/src/backend/main/main.c:393\n> [0x5628ec14897c] main+0x19c: ../../../../home/andres/src/postgresql/src/backend/main/main.c:183\n>\n> after I added\n> *(volatile int*)0xdeadbeef = 1;\n> to check_root().\n\nIt'll be like living in the future!\n\n> For signals sent by users, it'd show the pid of the process sending the signal\n> on most OSs. I really would like some generalized infrastructure for that, so\n> that we can report for things like query cancellations.\n\nThat sounds great.\n\n> > It would also be great if the tests spit out information about\n> > assertion failures that was reasonably complete (backtrace without any\n> > mangling, query text included, other basic context), reliably and\n> > uniformly -- it shouldn't matter if it's from TAP or pg_regress test\n> > SQL scripts.\n>\n> Hm. What other basic context are you thinking of? Pid is obvious. I guess\n> backend type could be useful too, but normally be inferred from the stack\n> trace pretty easily. Application name could be useful to know which test\n> caused the crash.\n>\n> I'm a bit wary about trying to print things like query text - what if that\n> string points to memory not terminated by a \\0? I guess we could use an\n> approach similar to pgstat_get_crashed_backend_activity().\n\nI agree that being less verbose by default is good. On second thought\neven query text isn't all that important.\n\n> One issue with reporting crashes from signal handlers is that the obvious way\n> to make that signal safe (lots of small writes) leads to the potential for\n> interspersed lines. It's probably worth having a statically sized buffer that\n> will commonly be large enough to print a whole backtrace. When too small, it\n> should include the pid at the start of every \"chunk\".\n\nGood idea.\n\n> > Which kind of test happened to be involved is usually not interesting to me\n> > here (even the query text won't usually be interesting), so being forced to\n> > think about it slows me down quite a lot.\n>\n> Interesting - I quite often end up spending time trying to dig out which query\n> from what sql file triggered a crash, so I can try to trigger it in\n> isolation. I often wished the server knew the source line associated with the\n> query. Enough that I pondered ways to have psql transport that knowledge to the\n> the server.\n\nI actually do plenty of that too. My overall point was this: there is\nlikely some kind of pareto principle here. That should guide the sorts\nof scenarios we optimize for.\n\nIf you actually benchmarked where I spent time while writing code,\nminute to minute, I bet it would show that most of the individual\ndebug-compile cycles were triggered by issues that had a fairly simple\nand immediate cause. Cases where I improve one small thing, and then\nrerun the tests, which show an assertion failure in nearby code. As\nsoon as I see very basic details I immediately think \"duh, of course\"\nin these cases, at which point I'll come up with a likely-good fix in\nseconds. And then I'll rinse and repeat. 
My fix might just work (at\nleast to the extent that all tests now pass), but it also might lead\nto another assertion failure of the same general nature.\n\nThere are also lots of cases where I really do have to think about\nrecreating the details from the test in order to truly understand\nwhat's going on, of course. But there are still way way more\nindividual \"duh, of course\" assertion failures in practice. Those are\nwhere productivity wins are still possible, because the bottleneck\nisn't just that I have an incomplete mental model that I need to work\nto expand.\n\nPerhaps my experiences here aren't universal. But it seems like they\nmight be roughly the same as everybody else that works on Postgres?\nAssuming that they are, then the information that is output should be\noptimized for the \"duh, of course\" scenarios. Not to an absurd degree,\nmind you. But the output shouldn't be too verbose. Ideally there'd be\na still fairly straightforward way of getting extra information, for\nthe cases where debugging is likely to take a few minutes, and require\nreal focus. The extra work in those other cases is relatively\ninsignificant, because the \"startup costs\" are relatively large -- a\nlittle extra indirection (though only a little) can't hurt too much.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Fri, 11 Aug 2023 14:39:45 -0700",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AssertLog instead of Assert in some places"
},
{
"msg_contents": "On Fri, Aug 11, 2023 at 11:27 PM Andres Freund <[email protected]> wrote:\n> >\n> > Attached patch combines Assert and elog(ERROR, ) so that when an\n> > Assert is triggered in assert-enabled binary, it will throw an error\n> > while keeping the backend intact. Thus it does not affect gdb session\n> > or psql session. These elogs do not make their way to non-assert\n> > binary so do not make it to production like Assert.\n>\n> I am quite strongly against this. This will lead to assertions being hit in\n> tests without that being noticed, e.g. because they happen in a background\n> process that just restarts.\n\nFair point. Our regression doesn't check server error logs for\nunwanted errors. How about restricting it to only client backends? I\ndon't know how to identify those from others but there must be a way.\n\n--\nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Mon, 14 Aug 2023 20:07:34 +0530",
"msg_from": "Ashutosh Bapat <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: AssertLog instead of Assert in some places"
},
{
"msg_contents": "On Sat, Aug 12, 2023 at 12:56 AM Andres Freund <[email protected]> wrote:\n> On 2023-08-11 11:56:27 -0700, Peter Geoghegan wrote:\n> > On Fri, Aug 11, 2023 at 11:23 AM Andres Freund <[email protected]> wrote:\n> > > > Couldn't you say the same thing about defensive \"can't happen\" ERRORs?\n> > > > They are essentially a form of assertion that isn't limited to\n> > > > assert-enabled builds.\n> > >\n> > > Yes. A lot of them I hate them with the passion of a thousand suns ;). \"Oh,\n> > > our transaction state machinery is confused. Yes, let's just continue going\n> > > through the same machinery again, that'll resolve it.\".\n> >\n> > I am not unsympathetic to Ashutosh's point about conventional ERRORs\n> > being easier to deal with when debugging your own code, during initial\n> > development work.\n>\n> Oh, I am as well - I just don't think it's a good idea to introduce \"log + error\"\n> assertions to core postgres, because it seems very likely that they'll end up\n> getting used a lot.\n>\n>\n\nI am open to ideas which allow the same backend to recover after\nmeeting an easily recoverable but \"can't happen\" condition rather than\nlosing that backend and starting all over with a new backend. Not all\nAssert'ed conditions are recoverable so a blanket GUC or compile time\noption won't help. Those might make things worse. We need two separate\nincantations for non-recoverable and recoverable Asserts respectively.\n\nI like Peter's idea of having a new elevel, however it still requires\nadding conditional USE_ASSERT, an if testing the condition and then\nwriting an error message. AssertLog() in the patch uses just a few\nmore letters.\n\nIt won't help to expand the scope of the problem since that will\nreduce the chances of getting anything done.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Mon, 14 Aug 2023 20:17:19 +0530",
"msg_from": "Ashutosh Bapat <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: AssertLog instead of Assert in some places"
}
]
[
{
"msg_contents": "Hi,\n\nwhile working on some logical replication stuff, I noticed that on PG16\nI often end up with a completely idle publisher (no user activity), that\nhowever looks like this in top:\n\n ... %CPU COMMAND\n\n\n ... 17.9 postgres: walsender user test ::1(43064) START_REPLICATION\n\n\n ... 16.6 postgres: walsender user test ::1(43128) START_REPLICATION\n\n\n ... 15.0 postgres: walsender user test ::1(43202) START_REPLICATION\n\n\n ... 6.6 postgres: walsender user test ::1(43236) START_REPLICATION\n\n\n ... 5.3 postgres: walsender user test ::1(43086) START_REPLICATION\n\n\n ... 4.3 postgres: walsender user test ::1(43180) START_REPLICATION\n\n\n ... 3.7 postgres: walsender user test ::1(43052) START_REPLICATION\n\n\n ... 3.7 postgres: walsender user test ::1(43158) START_REPLICATION\n\n\n ... 3.0 postgres: walsender user test ::1(43108) START_REPLICATION\n\n\n ... 3.0 postgres: walsender user test ::1(43214) START_REPLICATION\n\n\n\nThat's an awful lot of CPU for a cluster doing essentially \"nothing\"\n(there's no WAL to decode/replicate, etc.). It usually disappears after\na couple seconds, but sometimes it's a rather persistent state.\n\nThe profile from the walsender processes looks like this:\n\n --99.94%--XLogSendLogical\n |\n |--99.23%--XLogReadRecord\n | XLogReadAhead\n | XLogDecodeNextRecord\n | ReadPageInternal\n | logical_read_xlog_page\n | |\n | |--97.80%--WalSndWaitForWal\n | | |\n | | |--68.48%--WalSndWait\n\nIt seems to me the issue is in WalSndWait, which was reworked to use\nConditionVariableCancelSleep() in bc971f4025c. The walsenders end up\nwaking each other in a busy loop, until the timing changes just enough\nto break the cycle.\n\nI've been unable to reproduce this on PG15, and bc971f4025c seems like\nthe most significant change to WalSndWait, which is why I suspect it's\nrelated to the issue.\n\nReproducing this is simple, create a publisher with multiple subscribers\n(could even go to the same subscriber instance) and empty publications.\nIf you trigger a \"noop\" it's likely to cause this high memory usage:\n\n---------------------------------------------------------------------\n# init two clusters\npg_ctl -D data-publisher init\npg_ctl -D data-subscriber init\n\n# change the parameters to allow 10 subscriptions\necho 'wal_level = logical' >> data-publisher/postgresql.conf\necho 'port = 5433' >> data-subscriber/postgresql.conf\necho 'max_worker_processes = 20' >> data-subscriber/postgresql.conf\necho 'max_logical_replication_workers = 20' >>\ndata-subscriber/postgresql.conf\n\n# setup empty publication\ncreatedb test\npsql test -c \"create publication p\";\n\n# setup 10 subscriptions\nfor i in `seq 1 10`; do\n createdb -p 5433 test$i\n psql -p 5433 test$i -c \"create subscription s$i CONNECTION\n'host=localhost port=5432 dbname=test' publication p\";\ndone\n\n# emit logical messages, which are almost noop, 5s apart\nfor i in `seq 1 10`; do\n psql test -c \"select pg_logical_emit_message(false, 'x', 'x')\";\n sleep 5;\ndone;\n---------------------------------------------------------------------\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 11 Aug 2023 15:31:43 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": true,
"msg_subject": "walsender \"wakeup storm\" on PG16, likely because of bc971f4025c\n (Optimize walsender wake up logic using condition variables)"
},
{
"msg_contents": "Hi,\n\nOn 2023-08-11 15:31:43 +0200, Tomas Vondra wrote:\n> That's an awful lot of CPU for a cluster doing essentially \"nothing\"\n> (there's no WAL to decode/replicate, etc.). It usually disappears after\n> a couple seconds, but sometimes it's a rather persistent state.\n\nUgh, that's not great.\n\n> The profile from the walsender processes looks like this:\n> \n> --99.94%--XLogSendLogical\n> |\n> |--99.23%--XLogReadRecord\n> | XLogReadAhead\n> | XLogDecodeNextRecord\n> | ReadPageInternal\n> | logical_read_xlog_page\n> | |\n> | |--97.80%--WalSndWaitForWal\n> | | |\n> | | |--68.48%--WalSndWait\n> \n> It seems to me the issue is in WalSndWait, which was reworked to use\n> ConditionVariableCancelSleep() in bc971f4025c. The walsenders end up\n> waking each other in a busy loop, until the timing changes just enough\n> to break the cycle.\n\nIMO ConditionVariableCancelSleep()'s behaviour of waking up additional\nprocesses can nearly be considered a bug, at least when combined with\nConditionVariableBroadcast(). In that case the wakeup is never needed, and it\ncan cause situations like this, where condition variables basically\ndeteriorate to a busy loop.\n\nI hit this with AIO as well. I've been \"solving\" it by adding a\nConditionVariableCancelSleepEx(), which has a only_broadcasts argument.\n\nI'm inclined to think that any code that needs that needs the forwarding\nbehaviour is pretty much buggy.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 11 Aug 2023 10:51:11 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: walsender \"wakeup storm\" on PG16, likely because of bc971f4025c\n (Optimize walsender wake up logic using condition variables)"
},
{
"msg_contents": "On Sat, Aug 12, 2023 at 5:51 AM Andres Freund <[email protected]> wrote:\n> On 2023-08-11 15:31:43 +0200, Tomas Vondra wrote:\n> > It seems to me the issue is in WalSndWait, which was reworked to use\n> > ConditionVariableCancelSleep() in bc971f4025c. The walsenders end up\n> > waking each other in a busy loop, until the timing changes just enough\n> > to break the cycle.\n>\n> IMO ConditionVariableCancelSleep()'s behaviour of waking up additional\n> processes can nearly be considered a bug, at least when combined with\n> ConditionVariableBroadcast(). In that case the wakeup is never needed, and it\n> can cause situations like this, where condition variables basically\n> deteriorate to a busy loop.\n>\n> I hit this with AIO as well. I've been \"solving\" it by adding a\n> ConditionVariableCancelSleepEx(), which has a only_broadcasts argument.\n>\n> I'm inclined to think that any code that needs that needs the forwarding\n> behaviour is pretty much buggy.\n\nOh, I see what's happening. Maybe commit b91dd9de wasn't the best\nidea, but bc971f4025c broke an assumption, since it doesn't use\nConditionVariableSleep(). That is confusing the signal forwarding\nlogic which expects to find our entry in the wait list in the common\ncase.\n\nWhat do you think about this patch?",
"msg_date": "Sat, 12 Aug 2023 07:51:09 +1200",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: walsender \"wakeup storm\" on PG16, likely because of bc971f4025c\n (Optimize walsender wake up logic using condition variables)"
},
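To make the failure mode easier to follow, here is a much-simplified sketch of the wait loop a PG16 walsender performs (condensed from WalSndWaitForWal()/WalSndWait(); the condition check and the CV pointer are placeholders). The point is that the CV is used only via prepare/cancel around a latch wait, so ConditionVariableSleep() -- where the pre-5ffb7c7750 forwarding logic expected to find the process -- is never reached:

```c
/* Simplified sketch of the PG16-era walsender wait pattern. */
for (;;)
{
	/* Join the CV wait list before re-checking the wakeup condition. */
	ConditionVariablePrepareToSleep(cv);	/* one of WalSndCtl's CVs */

	if (wal_available_beyond(loc))			/* placeholder condition */
		break;

	/* Wait on the latch; the CV broadcast also sets our latch. */
	(void) WaitLatch(MyLatch, WL_LATCH_SET | WL_EXIT_ON_PM_DEATH, -1,
					 WAIT_EVENT_WAL_SENDER_WAIT_WAL);
	ResetLatch(MyLatch);
}

/*
 * Before the fix, cancelling here after a broadcast had already removed us
 * from the wait list would "forward" a signal to the next walsender, which
 * then did the same, producing the wakeup storm reported above.  With the
 * fix, the cancel simply reports (via its bool result) whether we were
 * signalled.
 */
ConditionVariableCancelSleep();
```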
{
"msg_contents": "Hi,\n\nOn 2023-08-12 07:51:09 +1200, Thomas Munro wrote:\n> Oh, I see what's happening. Maybe commit b91dd9de wasn't the best\n> idea, but bc971f4025c broke an assumption, since it doesn't use\n> ConditionVariableSleep(). That is confusing the signal forwarding\n> logic which expects to find our entry in the wait list in the common\n> case.\n\nHm, I guess I got confused by the cv code once more. I thought that\nConditionVariableCancelSleep() would wake us up anyway, because\nonce we return from ConditionVariableSleep(), we'd be off the list. But I now\nrealize (and I think not for the first time), that ConditionVariableSleep()\nalways puts us *back* on the list.\n\n\nLeaving aside the issue in this thread, isn't always adding us back into the\nlist bad from a contention POV alone - it doubles the write traffic on the CV\nand is guaranteed to cause contention for ConditionVariableBroadcast(). I\nwonder if this explains some performance issues I've seen in the past.\n\nWhat if we instead reset cv_sleep_target once we've been taken off the list? I\nthink it'd not be too hard to make it safe to do the proclist_contains()\nwithout the spinlock. Lwlocks have something similar, there we solve it by\nthis sequence:\n\n\t\tproclist_delete(&wakeup, iter.cur, lwWaitLink);\n\n\t\t/*\n\t\t * Guarantee that lwWaiting being unset only becomes visible once the\n\t\t * unlink from the link has completed. Otherwise the target backend\n\t\t * could be woken up for other reason and enqueue for a new lock - if\n\t\t * that happens before the list unlink happens, the list would end up\n\t\t * being corrupted.\n\t\t *\n\t\t * The barrier pairs with the LWLockWaitListLock() when enqueuing for\n\t\t * another lock.\n\t\t */\n\t\tpg_write_barrier();\n\t\twaiter->lwWaiting = LW_WS_NOT_WAITING;\n\t\tPGSemaphoreUnlock(waiter->sem);\n\nI guess this means we'd need something like lwWaiting for CVs as well.\n\n\n> From a85b2827f4500bc2e7c533feace474bc47086dfa Mon Sep 17 00:00:00 2001\n> From: Thomas Munro <[email protected]>\n> Date: Sat, 12 Aug 2023 07:06:08 +1200\n> Subject: [PATCH] De-pessimize ConditionVariableCancelSleep().\n>\n> Commit b91dd9de was concerned with a theoretical problem with our\n> non-atomic condition variable operations. If you stop sleeping, and\n> then cancel the sleep in a separate step, you might be signaled in\n> between, and that could be lost. That doesn't matter for callers of\n> ConditionVariableBroadcast(), but callers of ConditionVariableSignal()\n> might be upset if a signal went missing like this.\n\nFWIW I suspect at least some of the places that'd have a problem without that\nforwarding, might also be racy with it....\n\n\n> New idea: ConditionVariableCancelSleep() can just return true if you've\n> been signaled. Hypothetical users of ConditionVariableSignal() would\n> then still have a way to deal with rare lost signals if they are\n> concerned about that problem.\n\nSounds like a plan to me.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 11 Aug 2023 13:26:07 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: walsender \"wakeup storm\" on PG16, likely because of bc971f4025c\n (Optimize walsender wake up logic using condition variables)"
},
{
"msg_contents": "\n\nOn 8/11/23 21:51, Thomas Munro wrote:\n> On Sat, Aug 12, 2023 at 5:51 AM Andres Freund <[email protected]> wrote:\n>> On 2023-08-11 15:31:43 +0200, Tomas Vondra wrote:\n>>> It seems to me the issue is in WalSndWait, which was reworked to use\n>>> ConditionVariableCancelSleep() in bc971f4025c. The walsenders end up\n>>> waking each other in a busy loop, until the timing changes just enough\n>>> to break the cycle.\n>>\n>> IMO ConditionVariableCancelSleep()'s behaviour of waking up additional\n>> processes can nearly be considered a bug, at least when combined with\n>> ConditionVariableBroadcast(). In that case the wakeup is never needed, and it\n>> can cause situations like this, where condition variables basically\n>> deteriorate to a busy loop.\n>>\n>> I hit this with AIO as well. I've been \"solving\" it by adding a\n>> ConditionVariableCancelSleepEx(), which has a only_broadcasts argument.\n>>\n>> I'm inclined to think that any code that needs that needs the forwarding\n>> behaviour is pretty much buggy.\n> \n> Oh, I see what's happening. Maybe commit b91dd9de wasn't the best\n> idea, but bc971f4025c broke an assumption, since it doesn't use\n> ConditionVariableSleep(). That is confusing the signal forwarding\n> logic which expects to find our entry in the wait list in the common\n> case.\n> \n> What do you think about this patch?\n\nI'm not familiar with the condition variable code enough to have an\nopinion, but the patch seems to resolve the issue for me - I can no\nlonger reproduce the high CPU usage.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 14 Aug 2023 16:23:22 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: walsender \"wakeup storm\" on PG16, likely because of bc971f4025c\n (Optimize walsender wake up logic using condition variables)"
},
{
"msg_contents": "On Tue, Aug 15, 2023 at 2:23 AM Tomas Vondra\n<[email protected]> wrote:\n> I'm not familiar with the condition variable code enough to have an\n> opinion, but the patch seems to resolve the issue for me - I can no\n> longer reproduce the high CPU usage.\n\nThanks, pushed.\n\n\n",
"msg_date": "Tue, 15 Aug 2023 10:55:12 +1200",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: walsender \"wakeup storm\" on PG16, likely because of bc971f4025c\n (Optimize walsender wake up logic using condition variables)"
},
{
"msg_contents": "Thomas Munro <[email protected]> wrote:\n\n> On Tue, Aug 15, 2023 at 2:23 AM Tomas Vondra\n> <[email protected]> wrote:\n> > I'm not familiar with the condition variable code enough to have an\n> > opinion, but the patch seems to resolve the issue for me - I can no\n> > longer reproduce the high CPU usage.\n> \n> Thanks, pushed.\n\nI try to understand this patch (commit 5ffb7c7750) because I use condition\nvariable in an extension. One particular problem occured to me, please\nconsider:\n\nConditionVariableSleep() gets interrupted, so AbortTransaction() calls\nConditionVariableCancelSleep(), but the signal was sent in between. Shouldn't\nat least AbortTransaction() and AbortSubTransaction() check the return value\nof ConditionVariableCancelSleep(), and re-send the signal if needed?\n\nNote that I'm just thinking about such a problem, did not try to reproduce it.\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com\n\n\n",
"msg_date": "Wed, 16 Aug 2023 13:20:23 +0200",
"msg_from": "Antonin Houska <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: walsender \"wakeup storm\" on PG16,\n likely because of bc971f4025c (Optimize walsender wake up logic using\n condition variables)"
},
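For an out-of-core user of ConditionVariableSignal() like the one Antonin describes, the post-5ffb7c7750 API does make this handleable in the caller: ConditionVariableCancelSleep() now returns true when the sleep was ended by a signal, so cleanup code that cares can pass the wakeup on. A hypothetical extension-side sketch (the CV name is invented):

```c
/* Error-cleanup path in a hypothetical extension using a signalled CV. */
if (ConditionVariableCancelSleep())
{
	/*
	 * We consumed a wakeup that was targeted at some waiter.  Re-signal so
	 * another backend waiting on the same CV is not left sleeping.
	 */
	ConditionVariableSignal(&shared->task_done_cv);
}
```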
{
"msg_contents": "On Wed, Aug 16, 2023 at 11:18 PM Antonin Houska <[email protected]> wrote:\n> I try to understand this patch (commit 5ffb7c7750) because I use condition\n> variable in an extension. One particular problem occured to me, please\n> consider:\n>\n> ConditionVariableSleep() gets interrupted, so AbortTransaction() calls\n> ConditionVariableCancelSleep(), but the signal was sent in between. Shouldn't\n> at least AbortTransaction() and AbortSubTransaction() check the return value\n> of ConditionVariableCancelSleep(), and re-send the signal if needed?\n\nI wondered about that in the context of our only in-tree user of\nConditionVariableSignal(), in parallel btree index creation, but since\nit's using the parallel executor infrastructure, any error would be\npropagated everywhere so all waits would be aborted. There is another\nplace where the infrastructure cancels for you and would now eat the\nresult: if you prepare to sleep on one CV, and then prepare to sleep\non another, we''ll just cancel the first one. It think that's\nsemantically OK: we can't really wait for two CVs at once, and if you\ntry you'll miss signals anyway, but you must already have code to cope\nwith that by re-checking your exit conditions.\n\n> Note that I'm just thinking about such a problem, did not try to reproduce it.\n\nHmm. I looked for users of ConditionVariableSignal() in the usual web\ntools and didn't find anything, so I guess your extension is not\nreleased yet or not open source. I'm curious: what does it actually\ndo if there is an error in a CV-wakeup-consuming backend? I guess it\nmight be some kind of work-queue processing system... it seems\ninevitable that if backends are failing with errors, and you don't\nrespond by retrying/respawning, you'll lose or significantly delay\njobs/events/something anyway (imagine only slightly different timing:\nyou consume the signal and start working on a job and then ereport,\nwhich amounts to the same thing in the end now that your transaction\nis rolled back?), and when you retry you'll see whatever condition was\nwaited for anyway. But that's just me imagining what some\nhypothetical strawman system might look like... what does it really\ndo?\n\n(FWIW when I worked on a couple of different work queue-like systems\nand tried to use ConditionVariableSignal() I eventually concluded that\nit was the wrong tool for the job because its wakeup order is\nundefined. It's actually FIFO, but I wanted LIFO so that workers have\na chance to become idle and reduce the pool size, but I started to\nthink that once you want that level of control you really want to\nbuild a bespoke wait list system, so I never got around to proposing\nthat we consider changing that.)\n\nOur condition variables are weird. They're not associated with a\nlock, so we made start-of-wait non-atomic: prepare first, then return\ncontrol and let the caller check its condition, then sleep. Typical\nuser space condition variable APIs force you to acquire some kind of\nlock that protects the condition first, then check the condition, then\natomically release-associated-lock-and-start-sleeping, so there is no\ndata race but also no time where control is returned to the caller but\nthe thread is on the wait list consuming signals. That choice has\nsome pros (you can choose whatever type of lock you want to protect\nyour condition, or none at all if you can get away with memory\nbarriers and magic) and cons.. 
However, as I think Andres was getting\nat, having a non-atomic start-of-wait doesn't seem to require us to\nhave a non-atomic end-of-wait and associated extra contention. So\nmaybe we should figure out how to fix that in 17.\n\n\n",
"msg_date": "Thu, 17 Aug 2023 10:58:45 +1200",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: walsender \"wakeup storm\" on PG16, likely because of bc971f4025c\n (Optimize walsender wake up logic using condition variables)"
},
{
"msg_contents": "Thomas Munro <[email protected]> wrote:\n\n> On Wed, Aug 16, 2023 at 11:18 PM Antonin Houska <[email protected]> wrote:\n> > I try to understand this patch (commit 5ffb7c7750) because I use condition\n> > variable in an extension. One particular problem occured to me, please\n> > consider:\n> >\n> > ConditionVariableSleep() gets interrupted, so AbortTransaction() calls\n> > ConditionVariableCancelSleep(), but the signal was sent in between. Shouldn't\n> > at least AbortTransaction() and AbortSubTransaction() check the return value\n> > of ConditionVariableCancelSleep(), and re-send the signal if needed?\n> \n> I wondered about that in the context of our only in-tree user of\n> ConditionVariableSignal(), in parallel btree index creation, but since\n> it's using the parallel executor infrastructure, any error would be\n> propagated everywhere so all waits would be aborted.\n\nI see, ConditionVariableSignal() is currently used only to signal other\nworkers running in the same transactions. The other parts use\nConditionVariableBroadcast(), so no consumer should miss its signal.\n\n> > Note that I'm just thinking about such a problem, did not try to reproduce it.\n> \n> Hmm. I looked for users of ConditionVariableSignal() in the usual web\n> tools and didn't find anything, so I guess your extension is not\n> released yet or not open source. I'm curious: what does it actually\n> do if there is an error in a CV-wakeup-consuming backend? I guess it\n> might be some kind of work-queue processing system... it seems\n> inevitable that if backends are failing with errors, and you don't\n> respond by retrying/respawning, you'll lose or significantly delay\n> jobs/events/something anyway (imagine only slightly different timing:\n> you consume the signal and start working on a job and then ereport,\n> which amounts to the same thing in the end now that your transaction\n> is rolled back?), and when you retry you'll see whatever condition was\n> waited for anyway. But that's just me imagining what some\n> hypothetical strawman system might look like... what does it really\n> do?\n\nIf you're interested, the extension is pg_squeeze [1]. I think the use case is\nrather special. All the work is done by a background worker, but an user\nfunction can be called to submit a \"task\" for the worker and wait for its\ncompletion. So the function sleeps on a CV and the worker uses the CV to wake\nit up. If this function ends due to ERROR, the user is supposed to find a log\nmessage in the worker output sooner or later. It may sound weird, but that\nfunction exists primarily for regression tests, so ERROR is a problem anyway.\n\n> (FWIW when I worked on a couple of different work queue-like systems\n> and tried to use ConditionVariableSignal() I eventually concluded that\n> it was the wrong tool for the job because its wakeup order is\n> undefined. It's actually FIFO, but I wanted LIFO so that workers have\n> a chance to become idle and reduce the pool size, but I started to\n> think that once you want that level of control you really want to\n> build a bespoke wait list system, so I never got around to proposing\n> that we consider changing that.)\n> \n> Our condition variables are weird. They're not associated with a\n> lock, so we made start-of-wait non-atomic: prepare first, then return\n> control and let the caller check its condition, then sleep. 
Typical\n> user space condition variable APIs force you to acquire some kind of\n> lock that protects the condition first, then check the condition, then\n> atomically release-associated-lock-and-start-sleeping, so there is no\n> data race but also no time where control is returned to the caller but\n> the thread is on the wait list consuming signals. That choice has\n> some pros (you can choose whatever type of lock you want to protect\n> your condition, or none at all if you can get away with memory\n> barriers and magic) and cons.. However, as I think Andres was getting\n> at, having a non-atomic start-of-wait doesn't seem to require us to\n> have a non-atomic end-of-wait and associated extra contention. So\n> maybe we should figure out how to fix that in 17.\n\nThanks for sharing your point of view. I'm fine with this low-level approach:\nit's well documented and there are examples in the PG code showing how it\nshould be used :-)\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com\n\nhttps://github.com/cybertec-postgresql/pg_squeeze/\n\n\n",
"msg_date": "Thu, 17 Aug 2023 14:25:40 +0200",
"msg_from": "Antonin Houska <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: walsender \"wakeup storm\" on PG16,\n likely because of bc971f4025c (Optimize walsender wake up logic using\n condition variables)"
}
] |
[
{
"msg_contents": "Hi\n\nCommit 31966b15 invented a way for functions dealing with relation\nextension to accept a Relation in online code and an SMgrRelation in\nrecovery code (instead of using the earlier FakeRelcacheEntry\nconcept). It seems highly likely that future new bufmgr.c interfaces\nwill face the same problem, and need to do something similar. Let's\ngeneralise the names so that each interface doesn't have to re-invent\nthe wheel? ExtendedBufferWhat is also just not a beautiful name. How\nabout BufferedObjectSelector? That name leads to macros BOS_SMGR()\nand BOS_REL(). Could also be BufMgrObject/BMO, ... etc etc.\n\nThis is from a patch-set that I'm about to propose for 17, which needs\none of these too, hence desire to generalise. But if we rename them\nin 17, then AM authors, who are likely to discover and make use of\nthis interface, would have to use different names for 16 and 17.",
"msg_date": "Sat, 12 Aug 2023 12:29:05 +1200",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": true,
"msg_subject": "Rename ExtendedBufferWhat in 16?"
},
{
"msg_contents": "Hi,\n\nOn 2023-08-12 12:29:05 +1200, Thomas Munro wrote:\n> Commit 31966b15 invented a way for functions dealing with relation\n> extension to accept a Relation in online code and an SMgrRelation in\n> recovery code (instead of using the earlier FakeRelcacheEntry\n> concept). It seems highly likely that future new bufmgr.c interfaces\n> will face the same problem, and need to do something similar. Let's\n> generalise the names so that each interface doesn't have to re-invent\n> the wheel? ExtendedBufferWhat is also just not a beautiful name. How\n> about BufferedObjectSelector? That name leads to macros BOS_SMGR()\n> and BOS_REL(). Could also be BufMgrObject/BMO, ... etc etc.\n\nI like the idea of generalizing it. I somehow don't quite like BOS*, but I\ncan't really put into words why, so...\n\n\n> This is from a patch-set that I'm about to propose for 17, which needs\n> one of these too, hence desire to generalise. But if we rename them\n> in 17, then AM authors, who are likely to discover and make use of\n> this interface, would have to use different names for 16 and 17.\n\nMakes sense to me.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 16 Aug 2023 15:49:49 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Rename ExtendedBufferWhat in 16?"
},
{
"msg_contents": "On Thu, Aug 17, 2023 at 10:49 AM Andres Freund <[email protected]> wrote:\n> On 2023-08-12 12:29:05 +1200, Thomas Munro wrote:\n> > Commit 31966b15 invented a way for functions dealing with relation\n> > extension to accept a Relation in online code and an SMgrRelation in\n> > recovery code (instead of using the earlier FakeRelcacheEntry\n> > concept). It seems highly likely that future new bufmgr.c interfaces\n> > will face the same problem, and need to do something similar. Let's\n> > generalise the names so that each interface doesn't have to re-invent\n> > the wheel? ExtendedBufferWhat is also just not a beautiful name. How\n> > about BufferedObjectSelector? That name leads to macros BOS_SMGR()\n> > and BOS_REL(). Could also be BufMgrObject/BMO, ... etc etc.\n>\n> I like the idea of generalizing it. I somehow don't quite like BOS*, but I\n> can't really put into words why, so...\n\nDo you like BufferManagerRelation, BMR_REL(), BMR_SMGR()?\n\nJust BM_ would clash with the flag namespace.\n\n> > This is from a patch-set that I'm about to propose for 17, which needs\n> > one of these too, hence desire to generalise. But if we rename them\n> > in 17, then AM authors, who are likely to discover and make use of\n> > this interface, would have to use different names for 16 and 17.\n>\n> Makes sense to me.\n\nDoes anyone else want to object? Restating the case in brief: commit\n31966b15's naming is short-sighted and likely to lead to a\nproliferation of similar things or a renaming in later releases.\n\n\n",
"msg_date": "Thu, 17 Aug 2023 11:31:27 +1200",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Rename ExtendedBufferWhat in 16?"
},
{
"msg_contents": "On Wed, Aug 16, 2023 at 4:32 PM Thomas Munro <[email protected]> wrote:\n> Does anyone else want to object? Restating the case in brief: commit\n> 31966b15's naming is short-sighted and likely to lead to a\n> proliferation of similar things or a renaming in later releases.\n\n+1 to proceeding with this change.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 16 Aug 2023 16:33:58 -0700",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Rename ExtendedBufferWhat in 16?"
},
{
"msg_contents": "On 2023-08-17 11:31:27 +1200, Thomas Munro wrote:\n> On Thu, Aug 17, 2023 at 10:49 AM Andres Freund <[email protected]> wrote:\n> > On 2023-08-12 12:29:05 +1200, Thomas Munro wrote:\n> > > Commit 31966b15 invented a way for functions dealing with relation\n> > > extension to accept a Relation in online code and an SMgrRelation in\n> > > recovery code (instead of using the earlier FakeRelcacheEntry\n> > > concept). It seems highly likely that future new bufmgr.c interfaces\n> > > will face the same problem, and need to do something similar. Let's\n> > > generalise the names so that each interface doesn't have to re-invent\n> > > the wheel? ExtendedBufferWhat is also just not a beautiful name. How\n> > > about BufferedObjectSelector? That name leads to macros BOS_SMGR()\n> > > and BOS_REL(). Could also be BufMgrObject/BMO, ... etc etc.\n> >\n> > I like the idea of generalizing it. I somehow don't quite like BOS*, but I\n> > can't really put into words why, so...\n> \n> Do you like BufferManagerRelation, BMR_REL(), BMR_SMGR()?\n> \n> Just BM_ would clash with the flag namespace.\n\nI like BMR better!\n\n\n",
"msg_date": "Wed, 16 Aug 2023 17:42:33 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Rename ExtendedBufferWhat in 16?"
},
{
"msg_contents": "On Thu, Aug 17, 2023 at 12:42 PM Andres Freund <[email protected]> wrote:\n> I like BMR better!\n\nThanks Andres and Peter. Here's a version like that. I hesitated\nabout BufMgrRelation instead, but neither name appears in code\ncurrently and full words are better. In this version I also renamed\nall the 'eb' variables to 'bmr'.\n\nIf there are no more comments or objections, I'd like to push this to\n16 and master in a day or two.",
"msg_date": "Thu, 17 Aug 2023 15:35:15 +1200",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Rename ExtendedBufferWhat in 16?"
}
] |
[
{
"msg_contents": "The attached patch implements a new SEARCH clause for CREATE FUNCTION.\nThe SEARCH clause controls the search_path used when executing\nfunctions that were created without a SET clause.\n\nBackground:\n\nControlling search_path is critical for the correctness and security of\nfunctions. Right now, the author of a function without a SET clause has\nlittle ability to control the function's behavior, because even basic\noperators like \"+\" involve search_path. This is a big problem for, e.g.\nfunctions used in expression indexes which are called by any user with\nwrite privileges on the table.\n\nMotivation:\n\nI'd like to (eventually) get to safe-by-default behavior. In other\nwords, the simplest function declaration should be safe for the most\ncommon use cases.\n\nTo get there, we need some way to explicitly specify the less common\ncases. Right now there's no way for the function author to indicate\nthat a function intends to use the session's search path. We also need\nan easier way to specify that the user wants a safe search_path (\"SET\nsearch_path = pg_catalog, pg_temp\" is arcane).\n\nAnd when we know more about the user's actual intent, then it will be\neasier to either form a transition plan to push users into the safer\nbehavior, or at least warn strongly when the user is doing something\ndangerous (i.e. using a function that depends on the session's search\npath as part of an expression index).\n\nToday, the only information we have about the user's intent is the\npresence or absence of a \"SET search_path\" clause, which is not a\nstrong signal.\n\nProposal:\n\nAdd SEARCH { DEFAULT | SYSTEM | SESSION } clause to CREATE/ALTER\nfunction.\n\n * SEARCH DEFAULT is the same as no SEARCH clause at all, and ends up\nstored in the catalog as prosearch='d'.\n * SEARCH SYSTEM means that we switch to the safe search path of\n\"pg_catalog, pg_temp\" when executing the function. Stored as\nprosearch='y'.\n * SEARCH SESSION means that we don't switch the search_path when\nexecuting the function, and it's inherited from the session. Stored as\nprosearch='e'.\n\nRegardless of the SEARCH clause, a \"SET search_path\" clause will\noverride it. The SEARCH clause only matters when \"SET search_path\" is\nnot there.\n\nAdditionally provide a GUC, defaulting to false for compatibility, that\ncan interpret prosearch='d' as if it were prosearch='y'. It could help\nprovide a transition path. I know there's a strong reluctance to adding\nthese kinds of GUCs; I can remove it and I think the patch will still\nbe worthwhile. Perhaps there are alternatives that could help with\nmigration at pg_dump time instead?\n\nBenefits:\n\n1. The user can be more explicit about their actual intent. Do they\nwant safety and consistency? Or the flexibility of using the session's\nsearch_path?\n\n2. We can more accurately serve the user's intent. For instance, the\nsafe search_path of \"pg_catalog, pg_temp\" is arcane and seems to be\nthere just because we don't have a way to specify that pg_temp be\nexcluded entirely. But perhaps in the future we *do* want to exclude\npg_temp entirely. Knowing that the user just wants \"SEARCH SYSTEM\"\nallows us some freedom to do that.\n\n3. Users can be forward-compatible by specifying the functions that\nreally do need to use the session's search path as SEARCH SESSION, so\nthat they will never be broken in the future. That gives us a cleaner\npath toward making the default behavior safe.\n\n\n-- \nJeff Davis\nPostgreSQL Contributor Team - AWS",
"msg_date": "Fri, 11 Aug 2023 19:35:22 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "CREATE FUNCTION ... SEARCH { DEFAULT | SYSTEM | SESSION }"
},
{
"msg_contents": "On 8/11/23 22:35, Jeff Davis wrote:\n> The attached patch implements a new SEARCH clause for CREATE FUNCTION.\n> The SEARCH clause controls the search_path used when executing\n> functions that were created without a SET clause.\n> \n> Background:\n> \n> Controlling search_path is critical for the correctness and security of\n> functions. Right now, the author of a function without a SET clause has\n> little ability to control the function's behavior, because even basic\n> operators like \"+\" involve search_path. This is a big problem for, e.g.\n> functions used in expression indexes which are called by any user with\n> write privileges on the table.\n> \n> Motivation:\n> \n> I'd like to (eventually) get to safe-by-default behavior. In other\n> words, the simplest function declaration should be safe for the most\n> common use cases.\n\nI agree with the general need.\n\n> Add SEARCH { DEFAULT | SYSTEM | SESSION } clause to CREATE/ALTER\n> function.\n> \n> * SEARCH DEFAULT is the same as no SEARCH clause at all, and ends up\n> stored in the catalog as prosearch='d'.\n> * SEARCH SYSTEM means that we switch to the safe search path of\n> \"pg_catalog, pg_temp\" when executing the function. Stored as\n> prosearch='y'.\n> * SEARCH SESSION means that we don't switch the search_path when\n> executing the function, and it's inherited from the session. Stored as\n> prosearch='e'.\n\nIt isn't clear to me what is the precise difference between DEFAULT and \nSESSION\n\n\n> 2. We can more accurately serve the user's intent. For instance, the\n> safe search_path of \"pg_catalog, pg_temp\" is arcane and seems to be\n> there just because we don't have a way to specify that pg_temp be\n> excluded entirely. But perhaps in the future we *do* want to exclude\n> pg_temp entirely. Knowing that the user just wants \"SEARCH SYSTEM\"\n> allows us some freedom to do that.\n\nPersonally I think having pg_temp in the SYSTEM search path makes sense \nfor temp tables, but I find it easy to forget that functions can be \ncreated by unprivileged users in pg_temp, and therefore having pg_temp \nin the search path for functions is dangerous.\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n",
"msg_date": "Sat, 12 Aug 2023 09:15:55 -0400",
"msg_from": "Joe Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CREATE FUNCTION ... SEARCH { DEFAULT | SYSTEM | SESSION }"
},
{
"msg_contents": "On 8/12/23 09:15, Joe Conway wrote:\n> On 8/11/23 22:35, Jeff Davis wrote:\n>> 2. We can more accurately serve the user's intent. For instance, the\n>> safe search_path of \"pg_catalog, pg_temp\" is arcane and seems to be\n>> there just because we don't have a way to specify that pg_temp be\n>> excluded entirely. But perhaps in the future we *do* want to exclude\n>> pg_temp entirely. Knowing that the user just wants \"SEARCH SYSTEM\"\n>> allows us some freedom to do that.\n> \n> Personally I think having pg_temp in the SYSTEM search path makes sense\n> for temp tables, but I find it easy to forget that functions can be\n> created by unprivileged users in pg_temp, and therefore having pg_temp\n> in the search path for functions is dangerous.\n\nHmm, I guess I was too hasty -- seems we have some magic related to this \nalready.\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n",
"msg_date": "Sat, 12 Aug 2023 09:50:25 -0400",
"msg_from": "Joe Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CREATE FUNCTION ... SEARCH { DEFAULT | SYSTEM | SESSION }"
},
{
"msg_contents": "On Sat, 2023-08-12 at 09:15 -0400, Joe Conway wrote:\n> It isn't clear to me what is the precise difference between DEFAULT\n> and \n> SESSION\n\nThe the current patch, SESSION always gets the search path from the\nsession, while DEFAULT is controlled by the GUC\nsafe_function_search_path. If the GUC is false (the default) then\nDEFAULT and SESSION are the same. If the GUC is true, then DEFAULT and\nSYSTEM are the same.\n\nThere are alternatives to using a GUC to differentiate them. The main\npoint of this patch is to capture what the user intends in a convenient\nand forward-compatible way. If the user specifies nothing at all, they\nget DEFAULT, and we could treat that specially in various ways to move\ntoward safety while minimizing breakage.\n\n> \n> Personally I think having pg_temp in the SYSTEM search path makes\n> sense \n> for temp tables\n\nThe patch doesn't change this behavior -- SYSTEM (without any other\nSET) gives you \"pg_catalog, pg_temp\" and there's no way to exclude\npg_temp entirely.\n\nMy point was that by capturing the user's intent with SEARCH SYSTEM, it\ngives us a bit more freedom to have these kinds of discussions later.\nAnd it's certainly easier for the user to specify SEARCH SYSTEM than\n\"SET search_path = pg_catalog, pg_temp\".\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Sat, 12 Aug 2023 09:07:47 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: CREATE FUNCTION ... SEARCH { DEFAULT | SYSTEM | SESSION }"
},
{
"msg_contents": "On Sat, 2023-08-12 at 09:50 -0400, Joe Conway wrote:\n> Hmm, I guess I was too hasty -- seems we have some magic related to\n> this \n> already.\n\nI was worried after your first email. But yes, the magic is in\nFuncnameGetCandidates(), which simply ignores functions in the temp\nnamespace.\n\nIt would be better if we were obviously safe rather than magically\nsafe, though.\n\nRegards,\n\tJeff Davis\n\n\n\n\n\n",
"msg_date": "Sat, 12 Aug 2023 09:23:31 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: CREATE FUNCTION ... SEARCH { DEFAULT | SYSTEM | SESSION }"
},
{
"msg_contents": "Hi,\n\nOn 2023-08-11 19:35:22 -0700, Jeff Davis wrote:\n> Controlling search_path is critical for the correctness and security of\n> functions. Right now, the author of a function without a SET clause has\n> little ability to control the function's behavior, because even basic\n> operators like \"+\" involve search_path. This is a big problem for, e.g.\n> functions used in expression indexes which are called by any user with\n> write privileges on the table.\n\n> Motivation:\n>\n> I'd like to (eventually) get to safe-by-default behavior. In other\n> words, the simplest function declaration should be safe for the most\n> common use cases.\n\nI'm not sure that anything based, directly or indirectly, on search_path\nreally is a realistic way to get there.\n\n\n> To get there, we need some way to explicitly specify the less common\n> cases. Right now there's no way for the function author to indicate\n> that a function intends to use the session's search path. We also need\n> an easier way to specify that the user wants a safe search_path (\"SET\n> search_path = pg_catalog, pg_temp\" is arcane).\n\nNo disagreement with that. Even if I don't yet agree that your proposal is a\nconvincing path to \"easy security for PLs\" - just making the search path stuff\nless arcane is good.\n\n\n> And when we know more about the user's actual intent, then it will be\n> easier to either form a transition plan to push users into the safer\n> behavior, or at least warn strongly when the user is doing something\n> dangerous (i.e. using a function that depends on the session's search\n> path as part of an expression index).\n\nI think that'd be pretty painful from a UX perspective. Having to write\ne.g. operators as operator(schema, op) just sucks as an experience. And with\nextensions plenty of operators will live outside of pg_catalog, so there is\nplenty things that will need qualifying. And because of things like type\ncoercion search, which prefers \"bettering fitting\" coercions over search path\norder, you can't just put \"less important\" things later in search path.\n\n\nI wonder if we ought to work more on \"fossilizing\" the result of search path\nresolutions at the time functions are created, rather than requiring the user\nto do so explicitly. Most of the problem here comes down to the fact that if\na user creates a function like 'a + b' we'll not resolve the operator, the\npotential type coercions etc, when the function is created - we do so when the\nfunction is executed.\n\nWe can't just store the oids at the time, because that'd end up very fragile -\ntables/functions/... might be dropped and recreated etc and thus change their\noid. But we could change the core PLs to rewrite all the queries (*) so that\nthey schema qualify absolutely everything, including operators and implicit\ntype casts.\n\nThat way objects referenced by functions can still be replaced, but search\npath can't be used to \"inject\" objects in different schemas. Obviously it\ncould lead to errors on some schema changes - e.g. changing a column type\nmight mean that a relevant cast lives in a different place than with the old\ntype - but I think that'll be quite rare. Perhaps we could offer a ALTER\nFUNCTION ... REFRESH REFERENCES; or such?\n\nOne obvious downside of such an approach is that it requires some work with\neach PL. 
I'm not sure that's avoidable - and I suspect that most \"security\nsensitive\" functions are written in just a few languages.\n\n\n(*) Obviously the one thing that doesn't work for is use of EXECUTE in plpgsql\nand similar constructs elsewhere. I'm not sure there's much that can be done\nto make that safe, but it's worth thinking about more.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 12 Aug 2023 11:25:59 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CREATE FUNCTION ... SEARCH { DEFAULT | SYSTEM | SESSION }"
},
{
"msg_contents": "On Sat, 2023-08-12 at 11:25 -0700, Andres Freund wrote:\n> \n> I'm not sure that anything based, directly or indirectly, on\n> search_path\n> really is a realistic way to get there.\n\nCan you explain a little more? I see what you mean generally, that\nsearch_path is an imprecise thing, and that it leaves room for\nambiguity and mistakes.\n\nBut I also think we can do a lot better than we're doing today and\nstill retain the basic concept of search_path, which is good because\nit's deeply integrated into postgres, and it's not clear that we're\ngoing to get away from it any time soon.\n\n> \n> \n> I think that'd be pretty painful from a UX perspective. Having to\n> write\n> e.g. operators as operator(schema, op) just sucks as an experience.\n\nI'm not suggesting that the user fully-qualify everything; I'm\nsuggesting that the include a \"SET search_path\" clause if they depend\non anything other than pg_catalog.\n\n> And with\n> extensions plenty of operators will live outside of pg_catalog, so\n> there is\n> plenty things that will need qualifying.\n\nIn my proposal, that would still involve a \"SET search_path TO\nmyextension, pg_catalog, pg_temp\".\n\nThe main reason that's bad is that adding pg_temp at the end is painful\nUX -- just something that the user needs to remember to do with little\nobvious reason or observable impact; but it has important security\nimplications. Perhaps we should just not implicitly include pg_temp for\na function's search_path (at least for the case of CREATE FUNCTION ...\nSEARCH SYSTEM)?\n\n> And because of things like type\n> coercion search, which prefers \"bettering fitting\" coercions over\n> search path\n> order, you can't just put \"less important\" things later in search\n> path.\n\nI understand this introduces some ambiguity, but you just can't include\nschemas in the search_path that you don't trust, for similar reasons as\n$PATH. If you have a few objects you'd like to access in another user's\nschema, fully-qualify them.\n\n> We can't just store the oids at the time, because that'd end up very\n> fragile -\n> tables/functions/... might be dropped and recreated etc and thus\n> change their\n> oid.\n\nRobert suggested something along those lines[1]. I won't rule it out,\nbut I agree that there are quite a few things left to figure out.\n\n> But we could change the core PLs to rewrite all the queries (*) so\n> that\n> they schema qualify absolutely everything, including operators and\n> implicit\n> type casts.\n\nSo not quite like \"SET search_path FROM CURRENT\": you resolve it to a\nspecific \"schemaname.objectname\", but stop just short of resolving to a\nspecific OID?\n\nAn interesting compromise, but I'm not sure what the benefit is vs. SET\nsearch_path FROM CURRENT (or some defined search_path).\n\n> That way objects referenced by functions can still be replaced, but\n> search\n> path can't be used to \"inject\" objects in different schemas.\n> Obviously it\n> could lead to errors on some schema changes - e.g. changing a column\n> type\n> might mean that a relevant cast lives in a different place than with\n> the old\n> type - but I think that'll be quite rare. Perhaps we could offer a\n> ALTER\n> FUNCTION ... REFRESH REFERENCES; or such?\n\nHmm. I feel like that's making things more complicated. I'd find it\nmore straightforward to use something like Robert's approach of fully\nparsing something, and then have the REFRESH command reparse it when\nsomething needs updating. 
Or perhaps just create all of the dependency\nentries more like a view query and then auto-refresh.\n\n> (*) Obviously the one thing that doesn't work for is use of EXECUTE\n> in plpgsql\n> and similar constructs elsewhere. I'm not sure there's much that can\n> be done\n> to make that safe, but it's worth thinking about more.\n\nI think it would be really nice to have some better control over the\nsearch_path regardless, because it still helps with cases like this. A\nlot of C functions build queries, and I don't think it's reasonable to\nconstantly worry about the ambiguity of the schema for \"=\".\n\nRegards,\n\tJeff Davis\n\n[1]\nhttps://www.postgresql.org/message-id/CA%2BTgmobd%3DeFRGWHhfG4mG2cA%2BdsVuA4jpBvD8N1NS%3DVc9eHFQg%40mail.gmail.com\n\n\n\n",
"msg_date": "Mon, 14 Aug 2023 12:25:30 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: CREATE FUNCTION ... SEARCH { DEFAULT | SYSTEM | SESSION }"
},
{
"msg_contents": "On 12.08.23 04:35, Jeff Davis wrote:\n> The attached patch implements a new SEARCH clause for CREATE FUNCTION.\n> The SEARCH clause controls the search_path used when executing\n> functions that were created without a SET clause.\n\nI don't understand this. This adds a new option for cases where the \nexisting option wasn't specified. Why not specify the existing option \nthen? Is it not good enough? Can we improve it?\n\n\n\n",
"msg_date": "Wed, 16 Aug 2023 08:51:25 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CREATE FUNCTION ... SEARCH { DEFAULT | SYSTEM | SESSION }"
},
{
"msg_contents": "On Wed, 2023-08-16 at 08:51 +0200, Peter Eisentraut wrote:\n> On 12.08.23 04:35, Jeff Davis wrote:\n> > The attached patch implements a new SEARCH clause for CREATE\n> > FUNCTION.\n> > The SEARCH clause controls the search_path used when executing\n> > functions that were created without a SET clause.\n> \n> I don't understand this. This adds a new option for cases where the \n> existing option wasn't specified. Why not specify the existing\n> option \n> then? Is it not good enough? Can we improve it?\n\nSET search_path = '...' not good enough in my opinion.\n\n1. Not specifying a SET clause falls back to the session's search_path,\nwhich is a bad default because it leads to all kinds of inconsistent\nbehavior and security concerns.\n\n2. There's no way to explicitly request that you'd actually like to use\nthe session's search_path, so it makes it very hard to ever change the\ndefault.\n\n3. It's user-unfriendly. A safe search_path that would be suitable for\nmost functions is \"SET search_path = pg_catalog, pg_temp\", which is\narcane, and requires some explanation.\n\n4. search_path for the session is conceptually different than for a\nfunction. A session should be context-sensitive and the same query\nshould (quite reasonably) behave differently for different sessions and\nusers to sort out things like object name conflicts, etc. A function\nshould (ordinarily) be context-insensitive, especially when used in\nsomething like an index expression or constraint. Having different\nsyntax helps separate those concepts.\n\n5. There's no way to prevent pg_temp from being included in the\nsearch_path. This is separately fixable, but having the proposed SEARCH\nsyntax is likely to make for a better user experience in the common\ncases.\n\nI'm open to suggestion about other ways to improve it, but SEARCH is\nwhat I came up with.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Wed, 16 Aug 2023 10:44:53 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: CREATE FUNCTION ... SEARCH { DEFAULT | SYSTEM | SESSION }"
},
{
"msg_contents": "On 16.08.23 19:44, Jeff Davis wrote:\n> On Wed, 2023-08-16 at 08:51 +0200, Peter Eisentraut wrote:\n>> On 12.08.23 04:35, Jeff Davis wrote:\n>>> The attached patch implements a new SEARCH clause for CREATE\n>>> FUNCTION.\n>>> The SEARCH clause controls the search_path used when executing\n>>> functions that were created without a SET clause.\n>>\n>> I don't understand this. This adds a new option for cases where the\n>> existing option wasn't specified. Why not specify the existing\n>> option\n>> then? Is it not good enough? Can we improve it?\n> \n> SET search_path = '...' not good enough in my opinion.\n> \n> 1. Not specifying a SET clause falls back to the session's search_path,\n> which is a bad default because it leads to all kinds of inconsistent\n> behavior and security concerns.\n\nNot specifying SEARCH would have the same issue?\n\n> 2. There's no way to explicitly request that you'd actually like to use\n> the session's search_path, so it makes it very hard to ever change the\n> default.\n\nThat sounds like something that should be fixed independently. I could \nsee this being useful for other GUC settings, like I want to run a \nfunction explicitly with the session's work_mem.\n\n> 3. It's user-unfriendly. A safe search_path that would be suitable for\n> most functions is \"SET search_path = pg_catalog, pg_temp\", which is\n> arcane, and requires some explanation.\n\nTrue, but is that specific to functions? Maybe I want a safe \nsearch_path just in general, for a session or something.\n\n> 4. search_path for the session is conceptually different than for a\n> function. A session should be context-sensitive and the same query\n> should (quite reasonably) behave differently for different sessions and\n> users to sort out things like object name conflicts, etc. A function\n> should (ordinarily) be context-insensitive, especially when used in\n> something like an index expression or constraint. Having different\n> syntax helps separate those concepts.\n\nI'm not sure I follow that. When you say a function should be \ncontext-insensitive, you could also say, a function should be \ncontext-sensitive, but have a separate context. Which is kind of how it \nworks now. Maybe not well enough.\n\n> 5. There's no way to prevent pg_temp from being included in the\n> search_path. This is separately fixable, but having the proposed SEARCH\n> syntax is likely to make for a better user experience in the common\n> cases.\n\nseems related to #3\n\n> I'm open to suggestion about other ways to improve it, but SEARCH is\n> what I came up with.\n\nSome extensions of the current mechanism, like search_path = safe, \nsearch_path = session, search_path = inherit, etc. might work.\n\n\n\n",
"msg_date": "Fri, 18 Aug 2023 14:25:54 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CREATE FUNCTION ... SEARCH { DEFAULT | SYSTEM | SESSION }"
},
{
"msg_contents": "On Fri, 2023-08-18 at 14:25 +0200, Peter Eisentraut wrote:\n> \n> Not specifying SEARCH would have the same issue?\n\nNot specifying SEARCH is equivalent to SEARCH DEFAULT, and that gives\nus some control over what happens. In the proposed patch, a GUC\ndetermines whether it behaves like SEARCH SESSION (the default for\ncompatibility reasons) or SEARCH SYSTEM (safer).\n\n> > 2. There's no way to explicitly request that you'd actually like to\n> > use\n> > the session's search_path, so it makes it very hard to ever change\n> > the\n> > default.\n> \n> That sounds like something that should be fixed independently. I\n> could \n> see this being useful for other GUC settings, like I want to run a \n> function explicitly with the session's work_mem.\n\nI'm confused about how this would work. It doesn't make sense to set a\nGUC to be the session value in postgresql.conf, because there's no\nsession yet. And it doesn't really make sense in a top-level session,\nbecause it would just be a no-op (right?). It maybe makes sense in a\nfunction, but I'm still not totally clear on what that would mean.\n\n> \n> True, but is that specific to functions? Maybe I want a safe \n> search_path just in general, for a session or something.\n\nI agree this is a somewhat orthogonal problem and we should have a way\nto keep pg_temp out of the search_path entirely. We just need to agree\non a string representation of a search path that omits pg_temp. One\nidea would be to have special identifiers \"!pg_temp\" and \"!pg_catalog\"\nthat would cause those to be excluded entirely.\n\n> \n> I'm not sure I follow that. When you say a function should be \n> context-insensitive, you could also say, a function should be \n> context-sensitive, but have a separate context. Which is kind of how\n> it \n> works now. Maybe not well enough.\n\nFor functions called from index expressions or constraints, you want\nthe function's result to only depend on its arguments; otherwise you\ncan easily violate a constraint or cause an index to return wrong\nresults.\n\nYou're right that there is some other context, like the database\ndefault collation, but (a) that's mostly nailed down; and (b) if it\nchanges unexpectedly that also causes problems.\n\n> > I'm open to suggestion about other ways to improve it, but SEARCH\n> > is\n> > what I came up with.\n> \n> Some extensions of the current mechanism, like search_path = safe, \n> search_path = session, search_path = inherit, etc. might work.\n\nI had considered some new special names like this in search path, but I\ndidn't come up with a specific proposal that I liked. Do you have some\nmore details about how this would help get us to a safe default?\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Fri, 18 Aug 2023 13:11:42 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: CREATE FUNCTION ... SEARCH { DEFAULT | SYSTEM | SESSION }"
},
{
"msg_contents": "Hi,\n\nOn 2023-08-14 12:25:30 -0700, Jeff Davis wrote:\n> On Sat, 2023-08-12 at 11:25 -0700, Andres Freund wrote:\n> >\n> > I'm not sure that anything based, directly or indirectly, on\n> > search_path\n> > really is a realistic way to get there.\n>\n> Can you explain a little more? I see what you mean generally, that\n> search_path is an imprecise thing, and that it leaves room for\n> ambiguity and mistakes.\n\nIt just doesn't seem to provide enough control and it's really painful for\nusers to manage. If you install a bunch of extensions into public - very very\ncommon from what I have seen - you can't really remove public from the search\npath. Which then basically makes all approaches of resolving any of the\nsecurity issues via search path pretty toothless.\n\n\n> > I think that'd be pretty painful from a UX perspective. Having to\n> > write\n> > e.g. operators as operator(schema, op) just sucks as an experience.\n>\n> I'm not suggesting that the user fully-qualify everything; I'm\n> suggesting that the include a \"SET search_path\" clause if they depend\n> on anything other than pg_catalog.\n\nI don't think that really works in practice, due to the very common practice\nof installing extensions into the same schema as the application. Then that\nschema needs to be in search path (if you don't want to schema qualify\neverything), which leaves you wide open.\n\n\n> > And with\n> > extensions plenty of operators will live outside of pg_catalog, so\n> > there is\n> > plenty things that will need qualifying.\n>\n> In my proposal, that would still involve a \"SET search_path TO\n> myextension, pg_catalog, pg_temp\".\n\nmyextension is typically public. Which means that there's zero protection due\nto such a search path.\n\n\n> > � And because of things like type\n> > coercion search, which prefers \"bettering fitting\" coercions over\n> > search path\n> > order, you can't just put \"less important\" things later in search\n> > path.\n>\n> I understand this introduces some ambiguity, but you just can't include\n> schemas in the search_path that you don't trust, for similar reasons as\n> $PATH. If you have a few objects you'd like to access in another user's\n> schema, fully-qualify them.\n\nI think the more common attack paths are things like tricking extension\nscripts into evaluating arbitrary code, to gain \"real superuser\" privileges.\n\n\n> > We can't just store the oids at the time, because that'd end up very\n> > fragile -\n> > tables/functions/... might be dropped and recreated etc and thus\n> > change their\n> > oid.\n>\n> Robert suggested something along those lines[1]. I won't rule it out,\n> but I agree that there are quite a few things left to figure out.\n>\n> > But we could change the core PLs to rewrite all the queries (*) so\n> > that\n> > they schema qualify absolutely everything, including operators and\n> > implicit\n> > type casts.\n>\n> So not quite like \"SET search_path FROM CURRENT\": you resolve it to a\n> specific \"schemaname.objectname\", but stop just short of resolving to a\n> specific OID?\n>\n> An interesting compromise, but I'm not sure what the benefit is vs. SET\n> search_path FROM CURRENT (or some defined search_path).\n\nSearch path does not reliably protect things involving \"type matching\". 
If you\nhave a better fitting cast, or a function call with parameters that won't need\ncoercion, later in search path, they'll win, even if there's another fit\nearlier on.\n\nIOW, search path is a bandaid for this kind of thing, at best.\n\nIf we instead store something that avoids the need for such search, the\n\"better fitting cast\" logic wouldn't add these kind of security issues\nanymore.\n\n\n> > That way objects referenced by functions can still be replaced, but\n> > search\n> > path can't be used to \"inject\" objects in different schemas.\n> > Obviously it\n> > could lead to errors on some schema changes - e.g. changing a column\n> > type\n> > might mean that a relevant cast lives in a different place than with\n> > the old\n> > type - but I think that'll be quite rare. Perhaps we could offer a\n> > ALTER\n> > FUNCTION ... REFRESH REFERENCES; or such?\n>\n> Hmm. I feel like that's making things more complicated. I'd find it\n> more straightforward to use something like Robert's approach of fully\n> parsing something, and then have the REFRESH command reparse it when\n> something needs updating. Or perhaps just create all of the dependency\n> entries more like a view query and then auto-refresh.\n\nHm, I'm not quite sure I follow on what exactly you see as different here.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 19 Aug 2023 11:59:51 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CREATE FUNCTION ... SEARCH { DEFAULT | SYSTEM | SESSION }"
},
{
"msg_contents": "On Wed, Aug 16, 2023 at 1:45 PM Jeff Davis <[email protected]> wrote:\n> On Wed, 2023-08-16 at 08:51 +0200, Peter Eisentraut wrote:\n> > On 12.08.23 04:35, Jeff Davis wrote:\n> > > The attached patch implements a new SEARCH clause for CREATE\n> > > FUNCTION.\n> > > The SEARCH clause controls the search_path used when executing\n> > > functions that were created without a SET clause.\n> >\n> > I don't understand this. This adds a new option for cases where the\n> > existing option wasn't specified. Why not specify the existing\n> > option\n> > then? Is it not good enough? Can we improve it?\n>\n> SET search_path = '...' not good enough in my opinion.\n>\n> 1. Not specifying a SET clause falls back to the session's search_path,\n> which is a bad default because it leads to all kinds of inconsistent\n> behavior and security concerns.\n>\n> 2. There's no way to explicitly request that you'd actually like to use\n> the session's search_path, so it makes it very hard to ever change the\n> default.\n>\n> 3. It's user-unfriendly. A safe search_path that would be suitable for\n> most functions is \"SET search_path = pg_catalog, pg_temp\", which is\n> arcane, and requires some explanation.\n>\n> 4. search_path for the session is conceptually different than for a\n> function. A session should be context-sensitive and the same query\n> should (quite reasonably) behave differently for different sessions and\n> users to sort out things like object name conflicts, etc. A function\n> should (ordinarily) be context-insensitive, especially when used in\n> something like an index expression or constraint. Having different\n> syntax helps separate those concepts.\n>\n> 5. There's no way to prevent pg_temp from being included in the\n> search_path. This is separately fixable, but having the proposed SEARCH\n> syntax is likely to make for a better user experience in the common\n> cases.\n>\n> I'm open to suggestion about other ways to improve it, but SEARCH is\n> what I came up with.\n\nThe one thing that I really like about your proposal is that you\nexplicitly included a way of specifying that the prevailing\nsearch_path should be used. If we move to any kind of a system where\nthe default behavior is something other than that, then we need that\nas an option. Another, related thing that I recently discovered would\nbe useful is a way to say \"I'd like to switch the search_path to X,\nbut I'd also like to discover what the prevailing search_path was just\nbefore entering this function.\" For example, if I have a function that\nis SECURITY DEFINER which takes some executable code as an input, I\nmight want to arrange to eventually execute that code with the\ncaller's user ID and search_path, but I can't discover the caller's\nsearch_path unless I don't set it, and that's a dangerous thing to do.\n\nHowever, my overall concern here is that this feels like it's\nreinventing the wheel. We already have a way of setting search_path;\nthis gives us a second one. If we had no existing mechanism for that,\nI think this would definitely be an improvement, and quite possibly\nbetter than the current mechanism. But given that we had a mechanism\nalready, if we added this, we'd then have two, which seems like the\nwrong number.\n\nI'm inclined to think that if there are semantics that we currently\nlack, we should think of extending the current syntax to support them.\nRight now you can SET search_path = 'specific value' or SET\nsearch_path FROM CURRENT or leave it out. 
We could introduce a new way\nof spelling \"leave it out,\" like RESET search_path or whatever. We\ncould introduce a new setting that doesn't set the search_path at all\nbut reverts to the old value on function exit, like SET search_path\nUSING CALL or whatever. And we could think of making SET search_path\nFROM CURRENT or any new semantics we introduce the default in a future\nrelease, or even make the default behavior depend on an evil\nbehavior-changing GUC as you proposed. I'm not quite sure what we\nshould do here conceptually, but I don't see why having a completely\nnew syntax for doing it really helps.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 21 Aug 2023 15:14:48 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CREATE FUNCTION ... SEARCH { DEFAULT | SYSTEM | SESSION }"
},
{
"msg_contents": "On Sat, 2023-08-19 at 11:59 -0700, Andres Freund wrote:\n> If you install a bunch of extensions into public - very very\n> common from what I have seen - you can't really remove public from\n> the search\n> path. Which then basically makes all approaches of resolving any of\n> the\n> security issues via search path pretty toothless.\n\nToothless only if (a) untrusted users have CREATE privileges in the\npublic schema, which is no longer the default; and (b) you're writing a\nfunction that accesses extension objects installed in the public\nschema.\n\nWhile those may be normal things to do, there are a lot of times when\nthose things aren't true. I speculate that it's far more common to\nwrite functions that only use pg_catalog objects (e.g. the \"+\"\noperator, some string manipulation, etc.) and basic control flow.\n\nThere's a lot of value in making those simple cases secure-by-default.\nWe are already moving users towards a readable-but-not-writable public\nschema as a best practice, so if we also move to something like SEARCH\nSYSTEM as a best practice, then that will help a LOT of users.\n\n> > \n> I don't think that really works in practice, due to the very common\n> practice\n> of installing extensions into the same schema as the application.\n> Then that\n> schema needs to be in search path (if you don't want to schema\n> qualify\n> everything), which leaves you wide open.\n\n...\n\n> > \n> myextension is typically public. Which means that there's zero\n> protection due\n> to such a search path.\n\nYou mentioned this three times so I must be missing something. Why is\nit \"wide open\" and \"zero protection\"? If the schema is not world-\nwritable, then aren't attacks a lot harder to pull off?\n\n> > \n> I think the more common attack paths are things like tricking\n> extension\n> scripts into evaluating arbitrary code, to gain \"real superuser\"\n> privileges.\n\nExtension scripts are a separate beast. I do see some potential avenues\nof attack, but I don't see how your approach of resolving schemas early\nwould help.\n\n> Search path does not reliably protect things involving \"type\n> matching\". If you\n> have a better fitting cast, or a function call with parameters that\n> won't need\n> coercion, later in search path, they'll win, even if there's another\n> fit\n> earlier on.\n\nYou need to trust the schemas in your search_path.\n\n> If we instead store something that avoids the need for such search,\n> the\n> \"better fitting cast\" logic wouldn't add these kind of security\n> issues\n> anymore.\n\nI don't disagree, but I don't understand the approach in detail (i.e. I\ncouldn't write it up as a proposal). 
For instance, what would the\npg_dump output look like?\n\nAnd even if we had that in place, I think we'd still want a better way\nto control the search_path.\n\n> > \n> Hm, I'm not quite sure I follow on what exactly you see as different\n> here.\n\n From what I understand, Robert's approach is to fully parse the\ncommands and resolve to specific OIDs (necessitating dependencies,\netc.); while your approach resolves to fully-qualified names but not\nOIDs (and needing no dependencies).\n\nI don't understand either proposal entirely, so perhaps I'm on the\nwrong track here, but I feel like Robert's approach is more \"normal\"\nand easy to document whereas your approach is more \"creative\" and\nperhaps hard to document.\n\nBoth approaches (resolving to names and resolving to OIDs) seem pretty\nfar away, so I'm still very much inclined to nudge users toward safer\nbest practices with search_path. I think SEARCH SYSTEM is a good start\nthere and doable for 17.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Mon, 21 Aug 2023 12:44:55 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: CREATE FUNCTION ... SEARCH { DEFAULT | SYSTEM | SESSION }"
},
{
"msg_contents": "On Mon, 2023-08-21 at 15:14 -0400, Robert Haas wrote:\n> Another, related thing that I recently discovered would\n> be useful is a way to say \"I'd like to switch the search_path to X,\n> but I'd also like to discover what the prevailing search_path was\n> just\n> before entering this function.\"\n\nInteresting, that could probably be accommodated one way or another.\n\n> However, my overall concern here is that this feels like it's\n> reinventing the wheel. We already have a way of setting search_path;\n> this gives us a second one.\n\nIn one sense, you are obviously right. We have a way to set search_path\nfor a function already, just like any other GUC.\n\nBut I don't look at the search_path as \"just another GUC\" when it comes\nto executing a function. The source of the initial value of search_path\nis more like the IMMUTABLE marker.\n\nWe can also do something with the knowledge the SEARCH marker gives us.\nFor instance, issue WARNINGs or ERRORs when someone uses a SEARCH\nSESSION function in an index expression or constraint, or perhaps when\nthey try to declare a function IMMUTABLE in the first place.\n\nIn other words, the SEARCH clause tells us where search_path comes\nfrom, not so much what it is specifically. I believe that tells us\nsomething fundamental about the kind of function it is. If I tell you\nnothing about a function except whether the search path comes from the\nsystem or the session, you can imagine how it should be used (or not\nused, as the case may be).\n\n> I'm inclined to think that if there are semantics that we currently\n> lack, we should think of extending the current syntax to support\n> them.\n> Right now you can SET search_path = 'specific value' or SET\n> search_path FROM CURRENT or leave it out. We could introduce a new\n> way\n> of spelling \"leave it out,\" like RESET search_path or whatever.\n\nThe thought occurred to me but any way I looked at it was messier and\nless user-friendly. It feels like generalizing from search_path to all\nGUCs, and then needing to specialize for search_path anyway.\n\nFor instance, if we want the default search_path to be the safe value\n'pg_catalog, pg_temp', where would that default value come from? Or\ninstead, we could say that the default would be FROM CURRENT, which\nwould seem to generalize; but then we immediately run into the problem\nthat we don't want most GUCs to default to FROM CURRENT (because that\nwould capture the entire GUC state, which seems bad for several\nreasons), and again we'd need to specialize for search_path.\n\n\nIn other words, search_path really *is* special. I don't think it's\ngreat to generalize from it as though it were just like every other\nGUC.\n\nI do recognize that \"SEARCH SYSTEM ... SET search_path = '...'\" is\nredundant, and that's not great. I just see the other options as worse,\nbut if I've misunderstood your approach then please clarify.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Mon, 21 Aug 2023 14:32:05 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: CREATE FUNCTION ... SEARCH { DEFAULT | SYSTEM | SESSION }"
},
{
"msg_contents": "On Mon, Aug 21, 2023 at 5:32 PM Jeff Davis <[email protected]> wrote:\n> But I don't look at the search_path as \"just another GUC\" when it comes\n> to executing a function. The source of the initial value of search_path\n> is more like the IMMUTABLE marker.\n\nI mean I agree and I disagree.\n\nPhilosophically, I agree. Most functions are written with some\nparticular search_path in mind; the author imagines that the function\nwill be executed with, well, probably whatever search path the author\ntypically uses themselves. Now and then, someone may write a function\nthat's intended to run with various different search paths, e.g.\nanything of the form customer_XXXX, pg_catalog, pg_temp. I think that\nis a real thing that people actually do, intentionally varying the\nsearch_path with the idea of rebinding some references. However, cases\nwhere somebody sincerely intends for the caller to be able to make +\nor || mean something different from normal probably do not exist in\npractice. So, if we were designing a system from scratch, then I would\nrecommend against making search_path a GUC, because it's clearly\nshouldn't behave in the same way as a session property like\ndebug_print_plan or enable_seqscan, where you could want to run the\nsame code with various values.\n\nBut practically, I disagree. As things stand today, search_path *is* a\nGUC that dynamically changes the run-time properties of a session, and\nyour proposed patch wouldn't change that. What it would do is layer\nanother mechanism on top of that which, IMHO, makes something that is\nalready complicated and error-prone even more complicated. If we\nwanted to really make seach_path behave like a property of the code\nrather than the session, I think we'd need to change quite a bit more\nstuff, and the price of that in terms of backward-compatibility might\nbe higher than we'd be willing to pay, but if, hypothetically, we\ndecided to pay that price, then at the end of it search_path as a GUC\nwould be gone, and we'd have one way of managing sarch_path that is\ndifferent from the one we have now.\n\nBut with the patch as you have proposed it that's not what happens. We\njust end up with two interconnected mechanisms for managing what,\nright now, is managed by a single mechanism. That mechanism is (and I\nthink we probably mostly all agree on this) bad. Like really really\nbad. But having more than one mechanism, to me, still seems worse.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 18 Sep 2023 12:01:19 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CREATE FUNCTION ... SEARCH { DEFAULT | SYSTEM | SESSION }"
},
{
"msg_contents": "On Mon, 2023-09-18 at 12:01 -0400, Robert Haas wrote:\n> But with the patch as you have proposed it that's not what happens.\n> We\n> just end up with two interconnected mechanisms for managing what,\n> right now, is managed by a single mechanism. That mechanism is (and I\n> think we probably mostly all agree on this) bad. Like really really\n> bad. But having more than one mechanism, to me, still seems worse.\n\nI don't want to make an argument of the form \"the status quo is really\nbad, and therefore my proposal is good\". That line of argument is\nsuspect for good reason.\n\nBut if my proposal isn't good enough, and we don't have a clear\nalternative, we need to think seriously about how much we've\ncollectively over-promised and under-delivered on the concept of\nprivilege separation.\n\nAbsent a better idea, we need to figure out a way to un-promise what we\ncan't do and somehow guide users towards safe practices. For instance,\ndon't grant the INSERT or UPDATE privilege if the table uses functions\nin index expressions or constraints. Also don't touch any table unless\nthe onwer has SET ROLE privileges on your role already, or the\noperation is part of a special carve out (logical replication or a\nmaintenance command). And don't use the predefined role\npg_write_all_data, because that's unsafe for most imaginable use cases.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Mon, 18 Sep 2023 13:50:59 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: CREATE FUNCTION ... SEARCH { DEFAULT | SYSTEM | SESSION }"
},
{
"msg_contents": "On Mon, Sep 18, 2023 at 4:51 PM Jeff Davis <[email protected]> wrote:\n> I don't want to make an argument of the form \"the status quo is really\n> bad, and therefore my proposal is good\". That line of argument is\n> suspect for good reason.\n\n+1.\n\n> But if my proposal isn't good enough, and we don't have a clear\n> alternative, we need to think seriously about how much we've\n> collectively over-promised and under-delivered on the concept of\n> privilege separation.\n>\n> Absent a better idea, we need to figure out a way to un-promise what we\n> can't do and somehow guide users towards safe practices. For instance,\n> don't grant the INSERT or UPDATE privilege if the table uses functions\n> in index expressions or constraints. Also don't touch any table unless\n> the onwer has SET ROLE privileges on your role already, or the\n> operation is part of a special carve out (logical replication or a\n> maintenance command). And don't use the predefined role\n> pg_write_all_data, because that's unsafe for most imaginable use cases.\n\nI agree this is a mess, and that documenting the mess better would be\ngood. But instead of saying not to do something, we need to say what\nwill happen if you do the thing. I'm regularly annoyed when somebody\nreports that \"I tried to do X and it didn't work,\" instead of saying\nwhat happened when they tried, and this situation is another form of\nthe same thing. \"If you do X, then Y will or can occur\" is much better\nthan \"do not do X\". And I think better documentation of this area\nwould be useful regardless of any other improvements that we may or\nmay not make. Indeed, really good documentation of this area might\nfacilitate making further improvements by highlighting some of the\nproblems so that they can more easily be understood by a wider\naudience. I fear it will be hard to come up with something that is\nclear, that highlights the severity of the problems, and that does not\nveer off into useless vitriol against the status quo, but if we can\nget there, that would be good.\n\nBut, leaving that to one side, what technical options do we have on\nthe table, supposing that we want to do something that is useful but\nnot this exact thing?\n\nI think one option is to somehow change the behavior around\nsearch_path but in a different way than you've proposed. The most\nradical option would be to make it not be a GUC any more. I think the\nbackward-compatibility implications of that would likely be\nunpalatable to many, and the details of what we'd actually do are also\nnot clear, at least to me. For a function, I think there is a\nreasonable argument that you just make it a function property, like\nIMMUTABLE, as you said before. But for code that goes directly into\nthe session, where's the search_path supposed to come from? It's got\nto be configured somewhere, and somehow that somewhere feels a lot\nlike a GUC. That leads to a second idea, which is having it continue\nto be a GUC but only affect directly-entered SQL, with all\nindirectly-entered SQL either being stored as a node tree or having a\nsearch_path property attached somewhere. Or, as a third idea, suppose\nwe leave it a GUC but start breaking semantics around where and how\nthat GUC gets set, e.g. 
by changing CREATE FUNCTION to capture the\nprevailing search_path by default unless instructed otherwise.\nPersonally I feel like we'd need pretty broad consensus for any of\nthese kinds of changes because it would break a lot of stuff for a lot\nof people, but if we could get that then I think we could maybe emerge\nin a better spot once the pain of the compatibility break receded.\n\nAnother option is something around sandboxing and/or function trust.\nThe idea here is to not care too much about the search_path behavior\nitself, and instead focus on the consequences, namely what code is\ngetting executed as which user and perhaps what kinds of operations\nit's performing. To me, this seems like a possibly easier answer way\nforward at least in the short to medium term, because I think it will\nbreak fewer things for fewer people, and if somebody doesn't like the\nnew behavior they can just say \"well I trust everyone completely\" and\nit all goes back to the way it was. That said, I think there are\nproblems with my previous proposals on the other thread so I believe\nsome adjustments would be needed there, and then there's the problem\nof actually implementing anything. I'll try to respond to your\ncomments on that thread soon.\n\nAre there any other categories of things we can do? More specific\nkinds of things we can do in each category? I don't really see an\noption other than (1) \"change something in the system design so that\npeople use search_path wrongly less often\" or (2) \"make it so that it\ndoesn't matter as much if people using the wrong search_path\" but\nmaybe I'm missing a clever idea.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 19 Sep 2023 11:41:14 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CREATE FUNCTION ... SEARCH { DEFAULT | SYSTEM | SESSION }"
},
{
"msg_contents": "On Tue, 2023-09-19 at 11:41 -0400, Robert Haas wrote:\n> I agree this is a mess, and that documenting the mess better would be\n> good. But instead of saying not to do something, we need to say what\n> will happen if you do the thing. I'm regularly annoyed when somebody\n> reports that \"I tried to do X and it didn't work,\" instead of saying\n> what happened when they tried, and this situation is another form of\n> the same thing. \"If you do X, then Y will or can occur\" is much\n> better\n> than \"do not do X\".\n\nGood documentation includes some guidance. Sure, it should describe the\nsystem behavior, but without anchoring it to some kind of expected use\ncase, it can be equally frustrating.\n\nAllow me to pick on this example which came up in a recent thread:\n\n\"[password_required] Specifies whether connections to the publisher\nmade as a result of this subscription must use password authentication.\nThis setting is ignored when the subscription is owned by a superuser.\nThe default is true. Only superusers can set this value to false.\"\n -- https://www.postgresql.org/docs/16/sql-createsubscription.html\n\nOnly superusers can set it, and it's ignored for superusers. That does\na good job of describing the actual behavior, but is a bit puzzling.\n\nI guess what the user is supposed to do is one of:\n 1. Create a subscription as a superuser with the right connection\nstring (without a password) and password_required=false, then reassign\nit to a non-superuser; or\n 2. Create a subscription as a non-superuser member of\npg_create_subscription using a bogus connection string, then a\nsuperuser can alter it to set password_required=false, then alter the\nconnection string; or\n 3. Create a superuser, let the new superuser create a subscription\nwith password_required=false, and then remove their superuser status.\n\nso why not just document one of those things as the expected thing to\ndo? Not a whole section or anything, but a sentence to suggest what\nthey should do or where else they should look.\n\nI don't mean to set some major new standard in the documentation that\nshould apply to everything; but for the privilege system, even hackers\nare having trouble keeping up (myself included). A bit of guidance\ntoward supported use cases helps a lot.\n\n> I fear it will be hard to come up with something that is\n> clear, that highlights the severity of the problems, and that does\n> not\n> veer off into useless vitriol against the status quo, but if we can\n> get there, that would be good.\n\nI hope what I'm saying is not useless vitriol. I am offering the best\nsolutions I see in a bad situation. And I believe I've uncovered some\nemergent behaviors that are not well-understood even among prominent\nhackers.\n\n> That leads to a second idea, which is having it continue\n> to be a GUC but only affect directly-entered SQL, with all\n> indirectly-entered SQL either being stored as a node tree or having a\n> search_path property attached somewhere.\n\nThat's not too far from the proposed patch and I'd certainly be\ninterested to hear more and/or adapt my patch towards this idea.\n\n> Or, as a third idea, suppose\n> we leave it a GUC but start breaking semantics around where and how\n> that GUC gets set, e.g. 
by changing CREATE FUNCTION to capture the\n> prevailing search_path by default unless instructed otherwise.\n\nHow would one instruct otherwise?\n\n> Personally I feel like we'd need pretty broad consensus for any of\n> these kinds of changes\n\n+1\n\n> because it would break a lot of stuff for a lot\n> of people, but if we could get that then I think we could maybe\n> emerge\n> in a better spot once the pain of the compatibility break receded.\n\nAre there ways we can soften this a bit? I know compatibility GUCs are\nnot to be added lightly, but perhaps one is justified here?\n\n> Another option is something around sandboxing and/or function trust.\n> The idea here is to not care too much about the search_path behavior\n> itself, and instead focus on the consequences, namely what code is\n> getting executed as which user and perhaps what kinds of operations\n> it's performing.\n\nI'm open to discussing that further, and it certainly may solve some\nproblems, but it does not seem to solve the fundamental problem with\nsearch_path: that the caller can (intentionally or unintentionally)\ncause a function to do unexpected things.\n\nSometimes an unexpected thing is not a the kind of thing that would be\ncaught by a sandbox, e.g. just an unexpected function result. But if\nthat function is used in a constraint or expression index, that\nunexpected result can lead to a violated constraint or a bad index\n(that will later cause wrong results). The owner of the table might\nreasonably consider that a privilege problem, if the user who causes\nthe trouble had only INSERT privileges.\n\n> Are there any other categories of things we can do? More specific\n> kinds of things we can do in each category? I don't really see an\n> option other than (1) \"change something in the system design so that\n> people use search_path wrongly less often\" or (2) \"make it so that it\n> doesn't matter as much if people using the wrong search_path\" but\n> maybe I'm missing a clever idea.\n\nPerhaps there are some clever ideas about maintaining compatibility\nwithin the approaches (1) or (2), which might make one of them more\nappealing.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Tue, 19 Sep 2023 16:56:33 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: CREATE FUNCTION ... SEARCH { DEFAULT | SYSTEM | SESSION }"
},
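[Editorial aside: a minimal sketch of the first of the three sequences guessed at in the message above, with hypothetical names (sub1, pub1, bob) and an illustrative connection string. The syntax is ordinary CREATE/ALTER SUBSCRIPTION; whether each step is accepted for a non-superuser new owner is exactly the kind of nuance the message says the documentation leaves unclear.]

    -- As a superuser: create the subscription with a passwordless
    -- connection string and password_required = false.
    CREATE SUBSCRIPTION sub1
        CONNECTION 'host=publisher dbname=src user=repl'
        PUBLICATION pub1
        WITH (password_required = false);

    -- Then hand it over to the non-superuser who will own it.
    ALTER SUBSCRIPTION sub1 OWNER TO bob;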
{
"msg_contents": "On Tue, Sep 19, 2023 at 5:56 PM Jeff Davis <[email protected]> wrote:\n>...\n> On Tue, 2023-09-19 at 11:41 -0400, Robert Haas wrote:\n> > That leads to a second idea, which is having it continue\n> > to be a GUC but only affect directly-entered SQL, with all\n> > indirectly-entered SQL either being stored as a node tree or having a\n> > search_path property attached somewhere.\n>\n> That's not too far from the proposed patch and I'd certainly be\n> interested to hear more and/or adapt my patch towards this idea.\n\nAs an interested bystander, that's the same thing I was thinking when\nreading this. I reread your original e-mail, Jeff, and I still think\nthat.\n\nI wonder if something like CURRENT (i.e., the search path at function\ncreation time) might be a useful keyword addition. I can see some uses\n(more forgiving than SYSTEM but not as loose as SESSION), but I don't\nknow if it would justify its presence.\n\nThanks for working on this.\n\nThanks,\nMaciek\n\n\n",
"msg_date": "Tue, 19 Sep 2023 20:23:44 -0700",
"msg_from": "Maciek Sakrejda <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CREATE FUNCTION ... SEARCH { DEFAULT | SYSTEM | SESSION }"
},
{
"msg_contents": "On Tue, Sep 19, 2023, 20:23 Maciek Sakrejda <[email protected]> wrote:\n\n> I wonder if something like CURRENT (i.e., the search path at function\n> creation time) might be a useful keyword addition. I can see some uses\n> (more forgiving than SYSTEM but not as loose as SESSION), but I don't\n> know if it would justify its presence.\n\n\nI realize now this is exactly what SET search_path FROM CURRENT does. Sorry\nfor the noise.\n\nRegarding extensions installed in the public schema throwing a wrench in\nthe works, is that still a problem if the public schema is not writable? I\nknow that that's a new default, but it won't be forever.\n\nOn Tue, Sep 19, 2023, 20:23 Maciek Sakrejda <[email protected]> wrote:\nI wonder if something like CURRENT (i.e., the search path at function\ncreation time) might be a useful keyword addition. I can see some uses\n(more forgiving than SYSTEM but not as loose as SESSION), but I don't\nknow if it would justify its presence.I realize now this is exactly what SET search_path FROM CURRENT does. Sorry for the noise.Regarding extensions installed in the public schema throwing a wrench in the works, is that still a problem if the public schema is not writable? I know that that's a new default, but it won't be forever.",
"msg_date": "Tue, 19 Sep 2023 23:25:13 -0700",
"msg_from": "Maciek Sakrejda <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CREATE FUNCTION ... SEARCH { DEFAULT | SYSTEM | SESSION }"
},
{
"msg_contents": "Hi\n\nst 20. 9. 2023 v 9:34 odesílatel Maciek Sakrejda <[email protected]>\nnapsal:\n\n> On Tue, Sep 19, 2023 at 5:56 PM Jeff Davis <[email protected]> wrote:\n> >...\n> > On Tue, 2023-09-19 at 11:41 -0400, Robert Haas wrote:\n> > > That leads to a second idea, which is having it continue\n> > > to be a GUC but only affect directly-entered SQL, with all\n> > > indirectly-entered SQL either being stored as a node tree or having a\n> > > search_path property attached somewhere.\n> >\n> > That's not too far from the proposed patch and I'd certainly be\n> > interested to hear more and/or adapt my patch towards this idea.\n>\n> As an interested bystander, that's the same thing I was thinking when\n> reading this. I reread your original e-mail, Jeff, and I still think\n> that.\n>\n> I wonder if something like CURRENT (i.e., the search path at function\n> creation time) might be a useful keyword addition. I can see some uses\n> (more forgiving than SYSTEM but not as loose as SESSION), but I don't\n> know if it would justify its presence.\n>\n\nPersonally, I dislike this - because the value of the search path is hidden\nin this case.\n\nI agree so it can be comfortable, but it can be confusing for review,\nmigration, ...\n\nRegards\n\nPavel\n\n\n> Thanks for working on this.\n>\n> Thanks,\n> Maciek\n>\n>\n>\n\nHist 20. 9. 2023 v 9:34 odesílatel Maciek Sakrejda <[email protected]> napsal:On Tue, Sep 19, 2023 at 5:56 PM Jeff Davis <[email protected]> wrote:\n>...\n> On Tue, 2023-09-19 at 11:41 -0400, Robert Haas wrote:\n> > That leads to a second idea, which is having it continue\n> > to be a GUC but only affect directly-entered SQL, with all\n> > indirectly-entered SQL either being stored as a node tree or having a\n> > search_path property attached somewhere.\n>\n> That's not too far from the proposed patch and I'd certainly be\n> interested to hear more and/or adapt my patch towards this idea.\n\nAs an interested bystander, that's the same thing I was thinking when\nreading this. I reread your original e-mail, Jeff, and I still think\nthat.\n\nI wonder if something like CURRENT (i.e., the search path at function\ncreation time) might be a useful keyword addition. I can see some uses\n(more forgiving than SYSTEM but not as loose as SESSION), but I don't\nknow if it would justify its presence.Personally, I dislike this - because the value of the search path is hidden in this case. I agree so it can be comfortable, but it can be confusing for review, migration, ...RegardsPavel\n\nThanks for working on this.\n\nThanks,\nMaciek",
"msg_date": "Wed, 20 Sep 2023 10:46:00 +0200",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CREATE FUNCTION ... SEARCH { DEFAULT | SYSTEM | SESSION }"
},
{
"msg_contents": "On Tue, Sep 19, 2023 at 7:56 PM Jeff Davis <[email protected]> wrote:\n> Good documentation includes some guidance. Sure, it should describe the\n> system behavior, but without anchoring it to some kind of expected use\n> case, it can be equally frustrating.\n\nFair.\n\n> I don't mean to set some major new standard in the documentation that\n> should apply to everything; but for the privilege system, even hackers\n> are having trouble keeping up (myself included). A bit of guidance\n> toward supported use cases helps a lot.\n\nYeah, this stuff is complicated, and I agree that it's hard even for\nhackers to keep up with. I don't really have a strong view on the\nconcrete case you mentioned involving password_required. I always\nworry that if there are three cases and we suggest one of them then\nthe others will be viewed negatively when really they're all equally\nfine. On the other hand, that can often be addressed by starting the\nsentence with \"For example, you could....\" or similar, so perhaps\nthere's no problem here at all. I generally agree with the idea that\nexamples can be useful for clarifying points that may otherwise be too\ntheoretical.\n\n> I hope what I'm saying is not useless vitriol. I am offering the best\n> solutions I see in a bad situation. And I believe I've uncovered some\n> emergent behaviors that are not well-understood even among prominent\n> hackers.\n\nYeah, I wasn't really intending to say that you were. I just get\nnervous about statements like \"don't ever do X!\" because I find it\ncompletely unconvincing. In my experience, when you tell people stuff\nlike that, some of them just go off and do it anyway and, especially\nin a case like this, chances are very good that nothing bad will ever\nhappen to them, simply because most PostgreSQL installations don't\nhave malicious local users. When you tell them \"but that's really bad\"\nthey say \"why?\" and if the documentation doesn't have an answer to the\nquestion well then that sucks.\n\n> > because it would break a lot of stuff for a lot\n> > of people, but if we could get that then I think we could maybe\n> > emerge\n> > in a better spot once the pain of the compatibility break receded.\n>\n> Are there ways we can soften this a bit? I know compatibility GUCs are\n> not to be added lightly, but perhaps one is justified here?\n\nI don't know. I'm skeptical. This behavior is so complicated and hard\nto get right. Having it GUC-dependent makes it even more confusing\nthan it is already. But I guess it also depends on what the GUC does.\n\nLet's say we make a rule that every function or procedure has to have\na search_path attached to it as a function property. That is, CREATE\nFUNCTION .. SEARCH something sets pg_proc.prosearch = 'something'. If\nyou omit the SEARCH clause, one is implicitly supplied for you. If you\nsay SEARCH NULL, then the function is executed with the search_path\ntaken from the GUC; SEARCH 'anything_else' specified a literal\nsearch_path to be used.\n\nIn such a world, I can imagine having a GUC that determines whether\nthe implicitly supplied SEARCH clause is SEARCH\n${WHATEVER_THE_SEARCH_PATH_IS_RIGHT_NOW} or SEARCH NULL. Such a GUC\nonly takes effect at CREATE FUNCTION time. However, I cannot imagine\nhaving a GUC that causes the SEARCH clauses attached to all functions\nto be globally ignored at execution time. That seems like a choice we\nwould likely regret bitterly. 
The first thing is already painful, but\nthe second one is exponentially worse, because in the first world, you\nhave to be careful to get your functions defined correctly, but if you\ndo, you know they'll run OK on any PostgreSQL cluster anywhere,\nwhereas in the second world, there's no way to define a function that\nbehaves the same way on every PostgreSQL instance. Imagine being an\nextension author, for example.\n\nI am a little worried that this kind of design might end up reversing\nthe direction of some security problems that we have now. For\ninstance, right now, if you call a function with a SET search_path\nclause, you know that it can't make any changes to search_path that\nsurvive function exit. You'll get your old search_path back. With this\nkind of design, it seems like it would be a lot easier to get back to\nthe SQL toplevel and find the search_path surprisingly changed under\nyou. I think we have that problem in some cases already, though. I'm\nunclear how much worse this makes it.\n\n> > Another option is something around sandboxing and/or function trust.\n> > The idea here is to not care too much about the search_path behavior\n> > itself, and instead focus on the consequences, namely what code is\n> > getting executed as which user and perhaps what kinds of operations\n> > it's performing.\n>\n> I'm open to discussing that further, and it certainly may solve some\n> problems, but it does not seem to solve the fundamental problem with\n> search_path: that the caller can (intentionally or unintentionally)\n> cause a function to do unexpected things.\n\nWell, I think it's meant to solve that problem. How effectively it\ndoes so is a point worth debating.\n\n> Sometimes an unexpected thing is not a the kind of thing that would be\n> caught by a sandbox, e.g. just an unexpected function result. But if\n> that function is used in a constraint or expression index, that\n> unexpected result can lead to a violated constraint or a bad index\n> (that will later cause wrong results). The owner of the table might\n> reasonably consider that a privilege problem, if the user who causes\n> the trouble had only INSERT privileges.\n\nThat's an interesting example. Earlier versions of the function trust\nproposal proposed to block *any* execution of code belonging to an\nuntrusted party. That could potentially block this attack. However, it\nwould also block a lot of other things. For instance, if Alice tries\nto insert into Bob's table and Bob's table has a CHECK constraint or\nan index expression, Alice has to trust Bob or she can't insert\nanything at all. By trusting Bob just enough to allow him do things\nlike CHECK(LENGTH(foo) < 10) or whatever, Alice can operate on Bob's\ntable without a problem in normal cases, but is still protected if Bob\nsuddenly starts doing something sneaky. I think that's a significant\nimprovement, because a system that is so stringent that it blocks even\ncompletely harmless things is likely to get disabled, at which point\nit protects nobody from anything.\n\nHowever, that analysis presumes that what we're trying to do is\nprotect Alice from Bob, and I think you're raising the question of how\nwe protect Bob from Alice. Suppose Bob has got a trigger function but\nhas failed to control search_path for that function. Alice can set\nsearch_path so that Bob's trigger calls some function or operator that\nshe owns instead of the intended call to, say, a system function or\noperator. 
Some sufficiently-rigid function trust system could catch\nthis: Bob doesn't trust Alice, and so the fact that his code is trying\nto call some a function or operator owned by Alice is a red flag. On\nthe basis of the fact that Bob doesn't trust Alice, we should error\nout to protect Bob. Had the search_path been set in the expected way,\nBob would have been trying to call a superuser-owned function, and Bob\nmust trust the superuser, so the operation is permitted.\n\nI wouldn't have a problem with a function-trust proposal that\nincorporated a mode that rigid as a configuration option. I find this\na convincing example of how that could be useful. But such a mode has\npretty serious downsides, too. It makes it very difficult for one user\nto interact with another user's objects in any way without triggering\nsecurity errors.\n\nAlso, in a case like this, I don't think it's unreasonable to ask\nwhether, perhaps, Bob just needs to be a little more careful about\nsetting search_path. I think that there is a big difference between\n(a) defining a SQL-language function that is accessible to multiple\nusers and (b) inserting a row into a table you don't own. When you\ndefine a function, you know people are potentially going to call it.\nAsking you, as the function author, to take some care to secure your\nfunction against a malicious search_path doesn't seem like an\nunsupportable burden. After all, you control the definition of that\nfunction. The problem with inserting a row into a table you don't own\nis that all of the objects involved -- the table itself, its indexes,\nits triggers, its defaults, its constraints -- are owned by somebody\nelse, and that user controls those objects and can change any of them\nat any time. You can't really be expected to verify that all code\nreachable as a result of an INSERT into the table is safe enough\nbefore every INSERT into that table. You can, I think, be expected to\ncheck that functions you define have SET search_path attached.\n\n> > Are there any other categories of things we can do? More specific\n> > kinds of things we can do in each category? I don't really see an\n> > option other than (1) \"change something in the system design so that\n> > people use search_path wrongly less often\" or (2) \"make it so that it\n> > doesn't matter as much if people using the wrong search_path\" but\n> > maybe I'm missing a clever idea.\n>\n> Perhaps there are some clever ideas about maintaining compatibility\n> within the approaches (1) or (2), which might make one of them more\n> appealing.\n\nIndeed.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 21 Sep 2023 14:06:27 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CREATE FUNCTION ... SEARCH { DEFAULT | SYSTEM | SESSION }"
},
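[Editorial aside: to make the Bob-and-Alice scenario in the message above concrete, here is a hedged sketch of the shape of the problem. All names are hypothetical; it assumes a table bob.people(name text) on which Alice has INSERT, and that Alice can create objects in a schema of her own.]

    -- Bob's trigger function, written without SET search_path, calls
    -- an unqualified upper() and expects pg_catalog.upper(text).
    CREATE FUNCTION bob.normalize_name() RETURNS trigger
    LANGUAGE plpgsql AS $$
    BEGIN
        NEW.name := upper(NEW.name);
        RETURN NEW;
    END;
    $$;

    CREATE TRIGGER t_normalize BEFORE INSERT ON bob.people
        FOR EACH ROW EXECUTE FUNCTION bob.normalize_name();

    -- Alice shadows upper() in her own schema and puts that schema first:
    CREATE FUNCTION alice.upper(text) RETURNS text
        LANGUAGE sql AS $$ SELECT 'gotcha'::text $$;

    SET search_path = alice, pg_catalog;
    INSERT INTO bob.people (name) VALUES ('carol');
    -- Bob's trigger now resolves upper() to alice.upper(): when two
    -- candidates have identical argument types, the schema listed
    -- earlier in the caller's search_path wins.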
{
"msg_contents": "On Tue, 2023-09-19 at 20:23 -0700, Maciek Sakrejda wrote:\n> On Tue, Sep 19, 2023 at 5:56 PM Jeff Davis <[email protected]> wrote:\n> > ...\n> > On Tue, 2023-09-19 at 11:41 -0400, Robert Haas wrote:\n> > > That leads to a second idea, which is having it continue\n> > > to be a GUC but only affect directly-entered SQL, with all\n> > > indirectly-entered SQL either being stored as a node tree or\n> > > having a\n> > > search_path property attached somewhere.\n> > \n> > That's not too far from the proposed patch and I'd certainly be\n> > interested to hear more and/or adapt my patch towards this idea.\n> \n> As an interested bystander, that's the same thing I was thinking when\n> reading this. I reread your original e-mail, Jeff, and I still think\n> that.\n\nI have attached an updated patch. Changes:\n\n * Syntax is now: SEARCH FROM { DEFAULT | TRUSTED | SESSION }\n - added \"FROM\" to suggest that it's the source, and only a starting\nplace, rather than a specific and final setting. I don't feel strongly\nabout the FROM one way or another, so I can take it out if it's not\nhelpful.\n - changed \"SYSTEM\" to \"TRUSTED\", which better reflects the purpose,\nand doesn't suggest any connection to ALTER SYSTEM.\n * Removed GUC -- we can reconsider this kind of thing later.\n * ERROR if IMMUTABLE is combined with SEARCH FROM SESSION\n * pg_dump support. Emits \"SEARCH FROM SESSION\" or \"SEARCH FROM\nTRUSTED\" only if explicitly specified; otherwise emits no SEARCH\nclause. Differentiating the unspecified cases may be useful for\nmigration purposes later.\n * psql support.\n * Updated docs to try to better present the concept, and document\nCREATE PROCEDURE as well.\n\n\nThe SEARCH clause declares a new property that will be useful to both\nenforce safety and also to guide users to migrate in a safe direction\nover time.\n\nFor instance, the current patch prohibits the combination of IMMUTABLE\nand SEARCH FROM SESSION; but allows IMMUTABLE if no SEARCH clause is\nspecified at all (to avoid breaking anything). We could extend that\nslowly over several releases ratchet up the pressure (with warnings or\nchanging defaults) until all IMMUTABLE functions require SEARCH FROM\nTRUSTED. Perhaps IMMUTABLE would even imply SEARCH FROM TRUSTED.\n\nThe search property is consistent with other properties, like\nIMMUTABLE, which is both a marker and also enforces some restrictions\n(e.g. you can't CREATE TABLE). It's also a lot nicer to use than a SET\nclause, and provides a nice place to document certain behaviors.\n\n(Aside: the concept of IMMUTABLE is basically broken today, due to\nsearch_path problems.)\n\nSEARCH FROM DEFAULT is just a way to get an object back to the\n\"unspecified search clause\" state. It has the same behavior as SEARCH\nFROM SESSION, except that the former will cause a hard error when\ncombined with IMMUTABLE. I think it's worth differentiating the\nunspecified search clause from the explicit SEARCH FROM SESSION clause\nfor the purposes of migration.\n\nThere were three main complaints:\n\nComaplaint A: That it creates a new mechanism[1].\n\nThe patch doesn't create a new internal mechanism, it almost entirely\nreuses the existing SET clause mechanism. I think complaint A is really\nabout the user-facing mechanics, which is essentially the same as the\ncomplaint B.\n\nComplaint B: That it's overlapping in functionality with the SET\nclause[2][3]. In other words:\n\n CREATE FUNCTION ... SEARCH FROM TRUSTED ...;\n CREATE FUNCTION ... 
SET search_path = pg_catalog, pg_temp ...;\n\ndo similar things. But the latter is much worse:\n\n * it's user-unfriendly (requiring pg_temp is highly unintuitive)\n * it doesn't allow Postgres to warn if the function is being used in\nan unsafe context\n * there's no way to explicitly declare that you want the search path\nto come from the session instead (either to be more clear about your\nintentions, or to be forward-compatible)\n\nIn my opinion, the \"SET search_path = ...\" form should be used when you\nactually want the search_path to contain some specific schema, not in\ncases where you're just using built-in objects.\n\nComplaint C: search_path is hopeless[4].\n\nI think we can make meaningful improvements to the status quo, like\nwith the attached patch, that will reduce a lot of the surface area for\nsecurity risks. Right now our privilege model breaks down very quickly\neven with trivial and typical use cases and we can do better.\n\n\n-- \nJeff Davis\nPostgreSQL Contributor Team - AWS\n\n[1]\nhttps://www.postgresql.org/message-id/CA%2BTgmoaRPJJN%3DAOqC4b9t90vFQX81hKiXNPNhbxR0-Sm8F8nCA%40mail.gmail.com\n[2]\nhttps://www.postgresql.org/message-id/CA%2BTgmoah_bTjUFng-vZnivPQs0kQWUaSwAu49SU5M%2BzTxA%2B3Qw%40mail.gmail.com\n[3]\nhttps://www.postgresql.org/message-id/15464811-18fb-c7d4-4620-873366d367d6%40eisentraut.org\n[4]\nhttps://www.postgresql.org/message-id/20230812182559.d7plqwx3p65ys4i7%40awork3.anarazel.de",
"msg_date": "Thu, 21 Sep 2023 14:33:13 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: CREATE FUNCTION ... SEARCH { DEFAULT | SYSTEM | SESSION }"
},
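[Editorial aside: for concreteness, the two forms compared under complaint B in the message above would look roughly like this. The SEARCH FROM spelling comes from the v2 patch described there and its exact placement is illustrative; the function body is just an example, and the two statements are alternatives, not meant to be run together.]

    -- Proposed syntax from the patch:
    CREATE FUNCTION is_positive(n int) RETURNS boolean
        IMMUTABLE SEARCH FROM TRUSTED
        LANGUAGE sql AS $$ SELECT n > 0 $$;

    -- The closest spelling available today:
    CREATE FUNCTION is_positive(n int) RETURNS boolean
        IMMUTABLE SET search_path = pg_catalog, pg_temp
        LANGUAGE sql AS $$ SELECT n > 0 $$;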
{
"msg_contents": "On Thu, 2023-09-21 at 14:06 -0400, Robert Haas wrote:\n\n> Also, in a case like this, I don't think it's unreasonable to ask\n> whether, perhaps, Bob just needs to be a little more careful about\n> setting search_path.\n\nThat's what this whole thread is about: I wish it was reasonable, but I\ndon't think the tools we provide today make it reasonable. You expect\nBob to do something like:\n\n CREATE FUNCTION ... SET search_path = pg_catalog, pg_temp ...\n\nfor all functions, not just SECURITY DEFINER functions, is that right?\n\nUp until now, we've mostly treated search_path as a problem for\nSECURITY DEFINER, and specifying something like that might be\nreasonable for a small number of SECURITY DEFINER functions.\n\nBut as my example showed, search_path is actually a problem for\nSECURITY INVOKER too: an index expression relies on the function\nproducing the correct results, and it's hard to control that without\ncontrolling the search_path.\n\n> I think that there is a big difference between\n> (a) defining a SQL-language function that is accessible to multiple\n> users and (b) inserting a row into a table you don't own. When you\n> define a function, you know people are potentially going to call it.\n\nIt's a bit problematic that (a) is the default:\n\n CREATE FUNCTION f(INT) RETURNS INT IMMUTABLE\n LANGUAGE plpgsql\n AS $$ BEGIN RETURN 42+$1; END; $$;\n CREATE TABLE x(i INT);\n CREATE INDEX x_idx ON x(f(i));\n GRANT INSERT ON TABLE x TO u2;\n\nIt's not obvious that f() is directly callable by u2 (though it is\ndocumented).\n\nI'm not disagreeing with the principle behind what you say above. My\npoint is that \"accessible to multiple users\" is the ordinary default\ncase, so there's no cue for the user that they need to do something\nspecial to secure function f().\n\n> Asking you, as the function author, to take some care to secure your\n> function against a malicious search_path doesn't seem like an\n> unsupportable burden.\n\nWhat you are suggesting has been possible for quite some time. Do you\nthink users are taking care to do this today? If not, how can we\nencourage them to do so?\n\n> You can, I think, be expected to\n> check that functions you define have SET search_path attached.\n\nWe've already established that even postgres hackers are having\ndifficulty keeping up with these nuances. Even though the SET clause\nhas been there for a long time, our documentation on the subject is\ninsufficient and misleading. And on top of that, it's extra typing and\nnoise for every schema file. Until we make some changes I don't think\nwe can expect users to do as you suggest.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Fri, 22 Sep 2023 13:05:34 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: CREATE FUNCTION ... SEARCH { DEFAULT | SYSTEM | SESSION }"
},
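[Editorial aside: for the f()/x_idx example in the message above, the expectation being debated boils down to something like the following illustrative hardening, which is not part of the original message.]

    -- Pin the function's search_path so u2's session setting cannot
    -- change how the unqualified 42+$1 resolves:
    ALTER FUNCTION f(int) SET search_path = pg_catalog, pg_temp;

    -- Optionally, also withdraw the default public execute privilege
    -- if u2 has no reason to call f() directly:
    REVOKE EXECUTE ON FUNCTION f(int) FROM PUBLIC;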
{
"msg_contents": "On Fri, Sep 22, 2023 at 4:05 PM Jeff Davis <[email protected]> wrote:\n> On Thu, 2023-09-21 at 14:06 -0400, Robert Haas wrote:\n> > Also, in a case like this, I don't think it's unreasonable to ask\n> > whether, perhaps, Bob just needs to be a little more careful about\n> > setting search_path.\n>\n> That's what this whole thread is about: I wish it was reasonable, but I\n> don't think the tools we provide today make it reasonable. You expect\n> Bob to do something like:\n>\n> CREATE FUNCTION ... SET search_path = pg_catalog, pg_temp ...\n>\n> for all functions, not just SECURITY DEFINER functions, is that right?\n\nYes, I do. I think it's self-evident that a SQL function's behavior is\nnot guaranteed to be invariant under all possible values of\nsearch_path. If you care about your function behaving the same way all\nthe time, you have to set the search_path.\n\nTBH, I don't see any reasonable way around that requirement. We can\nperhaps provide some safeguards that will make it less likely that you\nwill get completely hosed if your forget, and we could decide to make\nSET search_path or some mostly-equivalent thing the default at the\nprice of pretty large compatibility break, but you can't have\nfunctions that both resolve object references using the caller's\nsearch path and also reliably do what the author intended.\n\n> > You can, I think, be expected to\n> > check that functions you define have SET search_path attached.\n>\n> We've already established that even postgres hackers are having\n> difficulty keeping up with these nuances. Even though the SET clause\n> has been there for a long time, our documentation on the subject is\n> insufficient and misleading. And on top of that, it's extra typing and\n> noise for every schema file. Until we make some changes I don't think\n> we can expect users to do as you suggest.\n\nRespectfully, I find this position unreasonable, to the point of\nfinding it difficult to take seriously. You said in another part of\nyour email that I didn't quote here that it's a problem that it's a\nproblem that functions and procedures are created with public execute\naccess by default -- but you can work around this by using a schema to\nwhich other users don't have access, or by changing the default\npermissions for functions on the schema where you are creating them,\nor by adjusting permissions on the individual objects. If you don't do\nany of that but don't trust the other users on your system then you at\nleast need to set search_path. If you neither understand how function\npermissions work nor understand the importance of controlling\nsearch_path, you cannot expect to have a secure system with multiple,\nmutually untrusting users. That's just never going to work, regardless\nof what the server behavior is.\n\nI also disagree with the idea that setting the search_path should be\nregarded as noise. I think it's quite the opposite. I don't believe\nthat people want to run their functions under a sanitized search_path\nthat only includes system schemas. That might work for some people,\nbut I think most people will define functions that call other\nfunctions that they themselves defined, or access tables that they\nthemselves created. They will therefore need the search_path to\ninclude the schemas in which they created those objects. 
There's no\nway for the system to magically figure out what the user wants here.\n*Perhaps* if the function is defined interactively the then-current\nvalue could be captured, but in a pg_dump for example that won't work,\nand the configured value, wherever it came from initially, is going to\nhave to be recorded so that it can be recreated when the dump is\nrestored.\n\nMost of the problems that we're dealing with here have analogues in\nthe world of shell scripts. A sql or plpgsql function is like a shell\nscript. If it's setuid, i.e. SECURITY DEFINER, you have to worry about\nthe caller hijacking it by setting PATH or IFS or LD_something. Even\nif it isn't, you have to either trust that the caller has set a\nreasonable PATH, or set one yourself, else your script isn't always\ngoing to work reliably. Nobody really expects to be able to make a\nsetuid shell script secure at all -- that typically requires a wrapper\nexecutable -- but it definitely can't be done by someone who doesn't\nunderstand the importance of setting their PATH and has no idea how to\nuse chmod.\n\nOne thing that is quite different between the shell script situation\nand what we do inside PostgreSQL is that there's a lot more security\nby default. Every user gets a home directory which by default is\naccessible only to them, or at the very least writable only by them,\nand system directories have tightly-controlled permissions. I think\nUNIX had analogues of a lot of the problems we have today 40 years\nago, but they've tightened things up. We've started to move in that\ndirection by, for example, removing public execute access by default.\nIf we want to move further in the direction that UNIX has taken, we\nshould probably get rid of the public schema altogether, and\nauto-create per-user schemas with permissions that allow only that\nuser to access them. But that's only making it easier to not\naccidentally have users accessing each other's stuff. The core problem\nthat, if people do want to access each other's stuff, they either need\nto trust each other or really be on point with all the\nsecurity-related stuff. That's equally true in the shell script case,\nand I think that problem is fundamental. It's just really not possible\nfor people to call other people's code frequently without everyone\ninvolved either being super-careful about security or just not caring.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 25 Sep 2023 11:30:07 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CREATE FUNCTION ... SEARCH { DEFAULT | SYSTEM | SESSION }"
},
{
"msg_contents": "On 9/25/23 11:30, Robert Haas wrote:\n> I don't believe that people want to run their functions under a\n> sanitized search_path that only includes system schemas. That might\n> work for some people, but I think most people will define functions\n> that call other functions that they themselves defined, or access\n> tables that they themselves created. They will therefore need the\n> search_path to include the schemas in which they created those\n> objects.\nWithout diving into all the detailed nuances of this discussion, this \nparticular paragraph made me wonder if at least part of the problem here \nis that the same search_path is used to find \"things that I want to \nexecute\" (functions and operators) and \"things I want to access\" \n(tables, etc).\n\nI think many folks would be well served by only having system schemas in \nthe search_path for the former (augmented by explicit schema qualifying \nof one's own functions), but agree that almost no one wants that for the \nlatter (needing to schema qualify every table reference).\n\nShould there be a way to have a separate \"execution\" search_path?\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n",
"msg_date": "Mon, 25 Sep 2023 12:00:39 -0400",
"msg_from": "Joe Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CREATE FUNCTION ... SEARCH { DEFAULT | SYSTEM | SESSION }"
},
{
"msg_contents": "On Mon, Sep 25, 2023 at 12:00 PM Joe Conway <[email protected]> wrote:\n> Should there be a way to have a separate \"execution\" search_path?\n\nI have heard that idea proposed before, and I don't think it's a\nterrible idea, but I also don't think it fixes anything terribly\nfundamental. I think it's pretty normal for people to define functions\nand procedures and then call them from other functions and procedures,\nand if you do that, then you need that schema in your execution search\npath. Of course, if somebody doesn't do that, or schema-qualifies all\nsuch references, then this becomes useful for defense in depth. But I\nfind it hard to see it as anything more than defense in depth because\nI think a lot of people will need to have use cases where they need to\nput non-system schemas into the execution search path, and such people\nwouldn't really benefit from the existence of this feature.\n\nSlightly off-topic, but I wonder whether, if we do this, we ought to\ndo it by adding some kind of a marker to the existing search_path,\nrather than by creating a new GUC. For example, maybe putting & before\na schema name means that it can be searched, but only for\nnon-executable things. Then you could set search_path = &jconway,\npg_catalog or something of that kind. It could potentially be more\npowerful to have it be a completely separate setting, but if we do\nthat, everyone who currently needs to secure search_path properly\nstarts needing to also secure execution_search_path properly. This is\none of those cases where two is not better than one.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 25 Sep 2023 13:35:16 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CREATE FUNCTION ... SEARCH { DEFAULT | SYSTEM | SESSION }"
},
{
"msg_contents": "On Mon, 2023-09-25 at 11:30 -0400, Robert Haas wrote:\n> On Fri, Sep 22, 2023 at 4:05 PM Jeff Davis <[email protected]> wrote:\n> > You expect\n> > Bob to do something like:\n> > \n> > CREATE FUNCTION ... SET search_path = pg_catalog, pg_temp ...\n> > \n> > for all functions, not just SECURITY DEFINER functions, is that\n> > right?\n> \n> Yes, I do.\n\nDo users like Bob do that today? If not, what causes you to expect them\nto do so in the future?\n\n> I think it's self-evident that a SQL function's behavior is\n> not guaranteed to be invariant under all possible values of\n> search_path.\n\nIt's certainly not self-evident in a literal sense. I think you mean\nthat it's \"obvious\" or something, and perhaps that narrow question is,\nbut it's also not terribly helpful.\n\nIf the important behaviors here were so obvious, how did we end up in\nthis mess in the first place?\n\n> > We've already established that even postgres hackers are having\n> > difficulty keeping up with these nuances. Even though the SET\n> > clause\n> > has been there for a long time, our documentation on the subject is\n> > insufficient and misleading. And on top of that, it's extra typing\n> > and\n> > noise for every schema file. Until we make some changes I don't\n> > think\n> > we can expect users to do as you suggest.\n> \n> Respectfully, I find this position unreasonable, to the point of\n> finding it difficult to take seriously.\n\nWhich part exactly is unreasonable?\n\n * Hackers are having trouble keeping up with the nuances.\n * Our documentation on the subject *is* insufficient and misleading.\n * \"pg_temp\" is noise.\n\nIt seems like you think that users are already doing \"SET search_path =\npg_catalog, pg_temp\" in all the necessary places, and therefore no\nchange is required?\n\n\n> Most of the problems that we're dealing with here have analogues in\n> the world of shell scripts.\n\nI think analogies to unix are what caused a lot of the problems we have\ntoday, because postgres is a very different world. In unix-like\nenvironments, a file is just a file; in postgres, we have all kinds of\ncode attached in interesting ways.\n\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Mon, 25 Sep 2023 10:56:36 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: CREATE FUNCTION ... SEARCH { DEFAULT | SYSTEM | SESSION }"
},
{
"msg_contents": "On Mon, 2023-09-25 at 12:00 -0400, Joe Conway wrote:\n> Should there be a way to have a separate \"execution\" search_path?\n\nI hadn't considered that and I like that idea for a few reasons:\n\n * a lot of the problem cases are for functions that don't need to\naccess tables at all, e.g., in an index expression.\n * it avoids annoyances with pg_temp, because that's not searched for\nfunctions/operators anyway\n * perhaps we could force the object search_path to be empty for\nIMMUTABLE functions?\n\nI haven't thought it through in detail, but it seems like a promising\napproach.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Mon, 25 Sep 2023 11:03:01 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: CREATE FUNCTION ... SEARCH { DEFAULT | SYSTEM | SESSION }"
},
{
"msg_contents": "On 9/25/23 14:03, Jeff Davis wrote:\n> On Mon, 2023-09-25 at 12:00 -0400, Joe Conway wrote:\n>> Should there be a way to have a separate \"execution\" search_path?\n> \n> I hadn't considered that and I like that idea for a few reasons:\n> \n> * a lot of the problem cases are for functions that don't need to\n> access tables at all, e.g., in an index expression.\n> * it avoids annoyances with pg_temp, because that's not searched for\n> functions/operators anyway\n> * perhaps we could force the object search_path to be empty for\n> IMMUTABLE functions?\n> \n> I haven't thought it through in detail, but it seems like a promising\n> approach.\n\n\nRelated to this, it would be useful if you could grant create on schema \nfor only non-executable objects. You may want to allow a user to create \ntheir own tables but not allow them to create their own functions, for \nexample. Right now \"GRANT CREATE ON SCHEMA foo\" gives the grantee the \nability to create \"all the things\".\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n",
"msg_date": "Mon, 25 Sep 2023 16:11:18 -0400",
"msg_from": "Joe Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CREATE FUNCTION ... SEARCH { DEFAULT | SYSTEM | SESSION }"
},
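[Editorial aside: a short illustration of the gap described in the message above. The finer-grained GRANT shown in the comment is hypothetical syntax, not something PostgreSQL accepts today.]

    -- Today this lets alice create tables, but also functions,
    -- operators, types, ... in schema foo:
    GRANT CREATE ON SCHEMA foo TO alice;

    -- There is no narrower form along these (hypothetical) lines:
    --   GRANT CREATE (TABLES) ON SCHEMA foo TO alice;
    -- so restricting what gets created currently means something like
    -- an event trigger that rejects the unwanted CREATE commands.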
{
"msg_contents": "On Mon, Sep 25, 2023 at 1:56 PM Jeff Davis <[email protected]> wrote:\n> Do users like Bob do that today? If not, what causes you to expect them\n> to do so in the future?\n\nWhat I would say is that if there's a reasonable way of securing your\nstuff and you don't make use of it, that's your problem. If securing\nyour stuff is unreasonably difficult, that's a product problem. I\nthink that setting the search_path on your own functions is a basic\nprecaution that you should take if you are worried about multi-user\nsecurity. I do not believe it is realistic to eliminate that\nrequirement, and if people like Bob don't do that today and can't be\nmade to do that in the future, then I think it's just hopeless. In\ncontrast, I think that the precautions that you need to take when\ndoing anything to a table owned by another user are unreasonably\ncomplex and not very realistic for anyone to take on a routine basis.\nEven if you validate that there's nothing malicious before you access\nthe table, the table owner can change that at any time, so it's very\nhard to reliably protect yourself.\n\nIn terms of whether people like Bob actually DO do that today, I'd say\nprobably some do and others don't. I think that the overwhelming\nmajority of PostgreSQL users simply aren't concerned about multi-user\nsecurity. They either have a single user account that is used for\neverything, or say one account for the application and another for\ninteractive access, or they have a bunch of users but basically all of\nthose people are freely accessing each other's stuff and they're not\nreally concerned with firewalling them from each other. Among the\nsmall percentage of users who are really concerned with making sure\nthat users can't get into each others accounts, I would expect that\nknowing that you need to control search_path is fairly common, but\nit's hard to say. I haven't actually met many such users.\n\n> > I think it's self-evident that a SQL function's behavior is\n> > not guaranteed to be invariant under all possible values of\n> > search_path.\n>\n> It's certainly not self-evident in a literal sense. I think you mean\n> that it's \"obvious\" or something, and perhaps that narrow question is,\n> but it's also not terribly helpful.\n>\n> If the important behaviors here were so obvious, how did we end up in\n> this mess in the first place?\n\nI feel like this isn't really responsive to the argument that I was\nand am making, and I'm worried that we're going down a rat-hole here.\n\nI wondered after reading this whether I had misused the term\nself-evident, but when I did a Google search for \"self-evident\" the\ndefinition that comes up is \"not needing to be demonstrated or\nexplained; obvious.\"\n\nI am not saying that everyone is going to realize that you probably\nought to be setting search_path on all of your functions in any kind\nof multi-user environment, and maybe even in a single-user environment\njust to avoid weird failures if you ever change your default\nsearch_path. What I am saying is that if you stop to think about what\nsearch_path does while looking at any SQL function you've ever\nwritten, you should probably realize pretty quickly that the behavior\nof your function in search_path-dependent, and indeed that the\nbehavior of every other SQL function you've ever written is probably\nsearch_path-dependent, too. 
I think the problem here isn't really that\nthis is hard to understand, but that many people have not stopped to\nthink about it.\n\nMaybe it is obvious to you what we ought to do about that, but it is\nnot obvious to me. As I have said, I think that changing the behavior\nof CREATE FUNCTION or CREATE PROCEDURE so that some search_path\ncontrol is the default is worth considering. However, I think that\nsuch a change inevitably breaks backward compatibility, and I don't\nthink we have enough people weighing in on this thread to think that\nwe can just go do that even if everyone agreed on precisely what was\nto be done, and I think it is pretty clear that we do not have\nunanimous agreement.\n\n> > Respectfully, I find this position unreasonable, to the point of\n> > finding it difficult to take seriously.\n>\n> Which part exactly is unreasonable?\n\nI interpreted you to be saying that we can't expect people to set\nsearch_path on their functions. And I just don't buy that. We have\nmade mistakes in that area in PostgreSQL itself and had to fix them\nlater, and we may make more mistakes again in the future, so if you\nthink we need better documentation or better defaults, I think you\nmight be right. But if you think it's a crazy idea for people running\nPostgreSQL in multi-user environments to understand that setting\nsearch_path on all of their functions and procedures is essential, I\ndisagree. They've got to understand that, because it's not that\ncomplicated, and there's no real alternative.\n\n> I think analogies to unix are what caused a lot of the problems we have\n> today, because postgres is a very different world. In unix-like\n> environments, a file is just a file; in postgres, we have all kinds of\n> code attached in interesting ways.\n\nYeah. That's another area where I find it very unclear how to do\nbetter. From a security point of view, I think that the fact that\nthere are so many interesting places to attach code is completely\ninsane. It makes running a secure multi-user environment very\ndifficult, bordering on impossible. But is there any way we can really\nfix that without just removing a whole bunch of functionality? I think\nthat some of the ideas that have been proposed here could help, but\nI'm extremely doubtful that they or anything else are a complete\nsolution, and I'm pretty sure that there is no \"easy button\" here --\ngiven the number of \"interesting\" ways to execute code, I think\nsecurity will always be tough to get right, regardless of what we\nchange.\n\nMy emails on this thread seem to have made you frustrated. For that, I am sorry.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 26 Sep 2023 11:28:40 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CREATE FUNCTION ... SEARCH { DEFAULT | SYSTEM | SESSION }"
},
{
"msg_contents": "On Thu, 2023-09-21 at 14:33 -0700, Jeff Davis wrote:\n> I have attached an updated patch. Changes:\n\nWithdrawing this from CF due to lack of consensus.\n\nI'm happy to resume this discussion if someone sees a path forward to\nmake it easier to secure the search_path; or at least help warn users\nwhen a function without a secured search_path is being used unsafely.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Fri, 27 Oct 2023 14:01:46 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: CREATE FUNCTION ... SEARCH { DEFAULT | SYSTEM | SESSION }"
},
{
"msg_contents": "On Mon, 2023-09-25 at 11:30 -0400, Robert Haas wrote:\n> > That's what this whole thread is about: I wish it was reasonable,\n> > but I\n> > don't think the tools we provide today make it reasonable. You\n> > expect\n> > Bob to do something like:\n> > \n> > CREATE FUNCTION ... SET search_path = pg_catalog, pg_temp ...\n> > \n> > for all functions, not just SECURITY DEFINER functions, is that\n> > right?\n> \n> Yes, I do. I think it's self-evident that a SQL function's behavior\n> is\n> not guaranteed to be invariant under all possible values of\n> search_path. If you care about your function behaving the same way\n> all\n> the time, you have to set the search_path.\n\nAfter adding the search path cache (recent commit f26c2368dc) hopefully\nthat helps to make the above suggestion more reasonable performance-\nwise. I think we can call that progress.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Tue, 14 Nov 2023 20:21:16 -0800",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: CREATE FUNCTION ... SEARCH { DEFAULT | SYSTEM | SESSION }"
},
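For reference, the practice quoted in the message above looks like this when written out in full; the function name and body here are only illustrative:

CREATE FUNCTION add_one(i int) RETURNS int
LANGUAGE sql
SET search_path = pg_catalog, pg_temp   -- applies to plain functions too, not only SECURITY DEFINER
AS $$ SELECT i + 1 $$;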
{
"msg_contents": "On Tue, Nov 14, 2023 at 11:21 PM Jeff Davis <[email protected]> wrote:\n> After adding the search path cache (recent commit f26c2368dc) hopefully\n> that helps to make the above suggestion more reasonable performance-\n> wise. I think we can call that progress.\n\nI agree. Not to burden you, but do you know what the overhead is now,\nand do you have any plans to further reduce it? I don't believe that's\nthe only thing we ought to be doing here, necessarily, but it is one\nthing that we definitely should be doing and probably the least\ncontroversial.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 20 Nov 2023 15:52:41 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CREATE FUNCTION ... SEARCH { DEFAULT | SYSTEM | SESSION }"
},
{
"msg_contents": "On Mon, 2023-11-20 at 15:52 -0500, Robert Haas wrote:\n> I agree. Not to burden you, but do you know what the overhead is now,\n> and do you have any plans to further reduce it? I don't believe\n> that's\n> the only thing we ought to be doing here, necessarily, but it is one\n> thing that we definitely should be doing and probably the least\n> controversial.\n\nRunning the simple test described here:\n\nhttps://www.postgresql.org/message-id/04c8592dbd694e4114a3ed87139a7a04e4363030.camel%40j-davis.com\n\nThe baseline (no \"SET search_path\" clause on the function) is around\n3800ms, and with the clause it shoots up to 8000ms. That's not good,\nbut it is down from about 12000ms before.\n\nThere are a few patches in the queue to bring it down further. Andres\nand I were discussing some GUC hashtable optimizations here:\n\nhttps://www.postgresql.org/message-id/[email protected]\n\nwhich will (if committed) bring it down into the mid 7s.\n\nThere are also a couple other patches I have here (and intend to commit\nsoon):\n\nhttps://www.postgresql.org/message-id/e6fded24cb8a2c53d4ef069d9f69cc7baaafe9ef.camel%40j-davis.com\n\nand those I think will get it into the mid 6s. I think a bit lower\ncombined with the GUC hash table optimizations above.\n\nSo we are still looking at around 50% overhead for a simple function if\nall this stuff gets committed. Not great, but a lot better than before.\n\nOf course I welcome others to profile and see what they can do. There's\na setjmp() call, and a couple allocations, and maybe some other stuff\nto look at. There are also higher-level ideas, like avoiding calling\ninto guc.c in some cases, but that starts to get tricky as you pointed\nout:\n\nhttps://www.postgresql.org/message-id/CA%2BTgmoa8uKQgak5wH0%3D7sL-ukqbwnCPMXA2ZW7Ccdt7tdNGkzg%40mail.gmail.com\n\nIt seems others are also interested in this problem, so I can put some\nmore effort in after this round of patches goes in. I don't have a\nspecific target other than \"low enough overhead that we can reasonably\nrecommend it as a best practice in multi-user environments\".\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Mon, 20 Nov 2023 14:27:34 -0800",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: CREATE FUNCTION ... SEARCH { DEFAULT | SYSTEM | SESSION }"
},
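The measurement discussed above has roughly the following shape (a reconstruction for orientation only; the exact test is in the message linked above, and the row count and function bodies here are guesses):

CREATE FUNCTION inc(i int) RETURNS int LANGUAGE plpgsql AS
$$ BEGIN RETURN i + 1; END $$;

CREATE FUNCTION inc_sp(i int) RETURNS int LANGUAGE plpgsql
SET search_path = pg_catalog, pg_temp AS
$$ BEGIN RETURN i + 1; END $$;

\timing on
SELECT count(inc(g)) FROM generate_series(1, 10000000) g;    -- baseline
SELECT count(inc_sp(g)) FROM generate_series(1, 10000000) g; -- pays the per-call SET overhead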
{
"msg_contents": "On Mon, Nov 20, 2023 at 5:27 PM Jeff Davis <[email protected]> wrote:\n> Of course I welcome others to profile and see what they can do. There's\n> a setjmp() call, and a couple allocations, and maybe some other stuff\n> to look at. There are also higher-level ideas, like avoiding calling\n> into guc.c in some cases, but that starts to get tricky as you pointed\n> out:\n>\n> https://www.postgresql.org/message-id/CA%2BTgmoa8uKQgak5wH0%3D7sL-ukqbwnCPMXA2ZW7Ccdt7tdNGkzg%40mail.gmail.com\n>\n> It seems others are also interested in this problem, so I can put some\n> more effort in after this round of patches goes in. I don't have a\n> specific target other than \"low enough overhead that we can reasonably\n> recommend it as a best practice in multi-user environments\".\n\nThe two things that jump out at me are the setjmp() and the\nhash_search() call inside find_option(). As to the first, could we\nremove the setjmp() and instead have abort-time processing know\nsomething about this? For example, imagine we just push something onto\na stack like we do for ErrorContextCallback, do whatever, and then pop\nit off. But if an error is thrown then the abort path knows to look at\nthat variable and do whatever. As to the second, could we somehow come\nup with an API for guc.c where you can ask for an opaque handle that\nwill later allow you to do a fast-SET of a GUC? The opaque handle\nwould basically be the hashtable entry, perhaps with some kind of\nwrapping or decoration. Then fmgr_security_definer() could obtain the\nopaque handles and cache them in fn_extra.\n\n(I'm just spitballing here.)\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 21 Nov 2023 09:24:22 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CREATE FUNCTION ... SEARCH { DEFAULT | SYSTEM | SESSION }"
},
{
"msg_contents": "On Tue, 2023-11-21 at 09:24 -0500, Robert Haas wrote:\n> As to the second, could we somehow come\n> up with an API for guc.c where you can ask for an opaque handle that\n> will later allow you to do a fast-SET of a GUC?\n\nYes, attached. That provides a significant speedup: my test goes from\naround ~7300ms to ~6800ms.\n\nThis doesn't seem very controversial or complex, so I'll probably\ncommit this soon unless someone else has a comment.\n\n-- \nJeff Davis\nPostgreSQL Contributor Team - AWS",
"msg_date": "Mon, 04 Dec 2023 16:55:40 -0800",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: CREATE FUNCTION ... SEARCH { DEFAULT | SYSTEM | SESSION }"
},
{
"msg_contents": "On Tue, Dec 5, 2023 at 7:55 AM Jeff Davis <[email protected]> wrote:\n>\n> On Tue, 2023-11-21 at 09:24 -0500, Robert Haas wrote:\n> > As to the second, could we somehow come\n> > up with an API for guc.c where you can ask for an opaque handle that\n> > will later allow you to do a fast-SET of a GUC?\n>\n> Yes, attached. That provides a significant speedup: my test goes from\n> around ~7300ms to ~6800ms.\n>\n> This doesn't seem very controversial or complex, so I'll probably\n> commit this soon unless someone else has a comment.\n\n+ * set_config_option_ext: sets option with the given handle to the given\n+ * value.\n\nCopy-paste-o of the other function name.\n\n+config_handle *\n+get_config_handle(const char *name)\n+{\n+ struct config_generic *record;\n+\n+ record = find_option(name, false, false, 0);\n+ if (record == NULL)\n+ return 0;\n\nPart of this code this was copied from a function that returned int,\nbut this one returns a pointer.\n\n\n",
"msg_date": "Tue, 5 Dec 2023 23:22:14 +0700",
"msg_from": "John Naylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CREATE FUNCTION ... SEARCH { DEFAULT | SYSTEM | SESSION }"
},
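A sketch of how a caller could cache such a handle, using the two names visible in the quoted patch (config_handle and get_config_handle); the helper function and the use of fn_extra below are purely illustrative and not the committed code:

static config_handle *
cached_search_path_handle(FmgrInfo *flinfo)
{
    config_handle *handle = (config_handle *) flinfo->fn_extra;

    /* Resolve "search_path" once per FmgrInfo rather than once per call. */
    if (handle == NULL)
    {
        handle = get_config_handle("search_path");
        flinfo->fn_extra = handle;
    }
    return handle;
}

The fast-SET path would then hand the cached handle to guc.c instead of repeating the find_option() hash lookup on the string "search_path" for every function invocation.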
{
"msg_contents": "On Tue, 2023-12-05 at 23:22 +0700, John Naylor wrote:\n> Copy-paste-o of the other function name.\n\n...\n\n> Part of this code this was copied from a function that returned int,\n> but this one returns a pointer.\n\nThank you, fixed.\n\nAlso, I forward-declared config_generic in guc.h to eliminate the cast.\n\nRegards,\n\tJeff Davis",
"msg_date": "Tue, 05 Dec 2023 11:58:08 -0800",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: CREATE FUNCTION ... SEARCH { DEFAULT | SYSTEM | SESSION }"
},
{
"msg_contents": "On Tue, 2023-11-21 at 09:24 -0500, Robert Haas wrote:\n> As to the first, could we\n> remove the setjmp() and instead have abort-time processing know\n> something about this? For example, imagine we just push something\n> onto\n> a stack like we do for ErrorContextCallback, do whatever, and then\n> pop\n> it off. But if an error is thrown then the abort path knows to look\n> at\n> that variable and do whatever.\n\nIf I remove the TRY/CATCH entirely, it shows there's room for ~200ms\nimprovement in my test.\n\nI attached a rough patch, which doesn't quite achieve that much, it's\nmore like ~100ms improvement and starts to fall within the noise. So\nperhaps an improvement, but a bit disappointing. It's not a lot of\ncode, but it's not trivial either because the nesting level needs to be\ntracked (so a subxact abort doesn't reset too much state).\n\nAlso, it's not quite as clean as it could be, because I went to some\neffort to avoid an alloc/free by keeping the stack within the fcache. I\ndidn't pay a lot of attention to correctness in this particular patch;\nI was mostly trying a few different formulations for performance\nmeasurement.\n\nI'm not inclined to commit this in its current form but if someone\nthinks that it's a worthwhile direction, I can clean it up a bit and\nreconsider.\n\nRegards,\n\tJeff Davis",
"msg_date": "Thu, 07 Dec 2023 12:45:27 -0800",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: CREATE FUNCTION ... SEARCH { DEFAULT | SYSTEM | SESSION }"
},
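None of the names below exist in PostgreSQL; this is only a sketch of the ErrorContextCallback-style alternative that the message above experiments with, i.e. a stack that abort processing can walk instead of a setjmp()-based PG_TRY block:

typedef struct GucRestoreFrame
{
    struct GucRestoreFrame *prev;
    int         save_nestlevel;    /* result of NewGUCNestLevel() to roll back to */
    int         xact_nestlevel;    /* so a subxact abort does not reset too much */
} GucRestoreFrame;

static GucRestoreFrame *guc_restore_stack = NULL;

static void
push_guc_restore_frame(GucRestoreFrame *frame, int save_nestlevel)
{
    frame->prev = guc_restore_stack;
    frame->save_nestlevel = save_nestlevel;
    frame->xact_nestlevel = GetCurrentTransactionNestLevel();
    guc_restore_stack = frame;
}

static void
pop_guc_restore_frame(void)
{
    guc_restore_stack = guc_restore_stack->prev;
}

On the normal path the caller pops its own frame after restoring the GUC nest level; on error, (sub)transaction abort would walk the stack down to the current nesting level rather than relying on a longjmp() out of the function manager.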
{
"msg_contents": "On Tue, 2023-12-05 at 11:58 -0800, Jeff Davis wrote:\n> Also, I forward-declared config_generic in guc.h to eliminate the\n> cast.\n\nLooking more closely, I fixed an issue related to placeholder configs.\nWe can't return a handle to a placeholder, because it's not stable, so\nin that case it falls back to using the name.\n\nMy apologies for the churn on this (mostly) simple patch. I think this\nversion is correct; I intend to commit soon.\n\nRegards,\n\tJeff Davis",
"msg_date": "Thu, 07 Dec 2023 14:00:15 -0800",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: CREATE FUNCTION ... SEARCH { DEFAULT | SYSTEM | SESSION }"
}
] |
[
{
"msg_contents": "Hello hackers,\n\nThis issue was discussed some time ago as a possible security problem, but\nit was concluded that it is not something extraordinary from the security\npoint of view and it may be a subject for a public discussion.\n\nThe issue is that during the backend initialization procedure, the function\nInitPostgres() tries to lock the database relation id for the current database\n(LockSharedObject(DatabaseRelationId, MyDatabaseId, 0, RowExclusiveLock))\nand there is a way for any authenticated user to hold a lock for the database\nentry as long as they want. Thus, InitProgress() may be effectively blocked\nby that lock.\n\nTo observe such blocking, you can apply the patch attached on a client side,\nand then do the following with a network-accessible server:\n(echo \"SELECT '=DISABLE_READ_MARKER=';\";\nfor i in {1..200000}; do echo \"ALTER DATABASE postgres SET TABLESPACE xxx;\"; done;\n) >/tmp/ad.sql\n\npsql postgres -h 10.0.2.2 -U user 2>/dev/null\n\npostgres=> \\i /tmp/ad.sql\n ?column?\n-----------------------\n=DISABLE_READ_MARKER=\n(1 row)\n...\n\nSeveral seconds later, try in another console:\npsql postgres -h 10.0.2.2 -c \"SELECT 1\"\nYou'll get:\npsql: FATAL: canceling statement due to lock timeout\n\nIn this case the first client locks the database relation due to:\n1. table_open(DatabaseRelationId, RowExclusiveLock) is called in movedb()\nbefore pg_database_ownercheck(), so every user can acquire that lock;\n2. the transaction is rolled back (and the lock released) in PostgresMain()\nafter EmitErrorReport(), so a client can suspend all the transaction-related\nactivity by blocking a write operation.\n\nPerhaps, that exactly case can be prevented by placing object_ownercheck()\nbefore table_open() in movedb(), but the possibility to stall a transaction\nabort looks dangerous too.\n\nBest regards,\nAlexander",
"msg_date": "Sat, 12 Aug 2023 12:00:00 +0300",
"msg_from": "Alexander Lakhin <[email protected]>",
"msg_from_op": true,
"msg_subject": "Backend initialization may be blocked by locking the database\n relation id"
}
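While the ALTER DATABASE loop from the recipe above is running, both the granted lock and the stuck backend waiting behind it can be watched from another session, for example with:

SELECT locktype, mode, granted, pid
FROM pg_locks
WHERE locktype = 'object'
  AND classid = 'pg_database'::regclass
  AND objid = (SELECT oid FROM pg_database WHERE datname = 'postgres');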
] |
[
{
"msg_contents": "The attached 010_zero.pl, when run as part of the pg_waldump test suite, fails\nat today's master (c36b636) and v15 (1bc19df). It passes at v14 (5a32af3).\nCommand \"pg_waldump --start 0/01000000 --end 0/01000100\" fails as follows:\n\n pg_waldump: error: WAL segment size must be a power of two between 1 MB and 1 GB, but the WAL file \"000000010000000000000002\" header specifies 0 bytes\n\nWhere it fails, the server has created an all-zeros WAL file under that name.\nWhere it succeeds, that file doesn't exist at all. Two decisions to make:\n\n- Should a clean server shutdown ever leave an all-zeros WAL file? I think\n yes, it's okay to let that happen.\n- Should \"pg_waldump --start $X --end $Y\" open files not needed for the\n requested range? I think no.\n\nBisect of master got:\n30a53b7 Wed Mar 8 16:56:37 2023 +0100 Allow tailoring of ICU locales with custom rules\nDoesn't fail at $(git merge-base REL_15_STABLE master). Bisect of v15 got:\n811203d Sat Aug 6 11:50:23 2022 -0400 Fix data-corruption hazard in WAL-logged CREATE DATABASE.\n\nI suspect those are innocent. They changed the exact WAL content, which I\nexpect somehow caused creation of segment 2.\n\nOddly, I find only one other report of this:\nhttps://www.postgresql.org/message-id/CAJ6DU3HiJ5FHbqPua19jAD%3DwLgiXBTjuHfbmv1jCOaNOpB3cCQ%40mail.gmail.com\n\nThanks,\nnm",
"msg_date": "Sat, 12 Aug 2023 20:15:31 -0700",
"msg_from": "Noah Misch <[email protected]>",
"msg_from_op": true,
"msg_subject": "pg_waldump vs. all-zeros WAL files; server creation of such files"
},
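Depending on readdir() order, the complaint can also be reproduced without waiting for the server to leave such a file behind, by manufacturing the all-zeros segment by hand (the paths and the 16MB default segment size here are assumptions):

dd if=/dev/zero of="$PGDATA/pg_wal/000000010000000000000002" bs=1M count=16
pg_waldump --path "$PGDATA/pg_wal" --start 0/01000000 --end 0/01000100

The requested range lies entirely within segment 000000010000000000000001, so the zero-filled segment 2 should never need to be opened.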
{
"msg_contents": "On Sat, Aug 12, 2023 at 08:15:31PM -0700, Noah Misch wrote:\n> The attached 010_zero.pl, when run as part of the pg_waldump test suite, fails\n> at today's master (c36b636) and v15 (1bc19df). It passes at v14 (5a32af3).\n> Command \"pg_waldump --start 0/01000000 --end 0/01000100\" fails as follows:\n> \n> pg_waldump: error: WAL segment size must be a power of two between\n> 1 MB and 1 GB, but the WAL file \"000000010000000000000002\" header\n> specifies 0 bytes\n\nSo this depends on the ordering of the entries retrieved by readdir()\nas much as the segments produced by the backend.\n\n> Where it fails, the server has created an all-zeros WAL file under that name.\n> Where it succeeds, that file doesn't exist at all. Two decisions to make:\n> \n> - Should a clean server shutdown ever leave an all-zeros WAL file? I think\n> yes, it's okay to let that happen.\n\nIt doesn't hurt to leave that around. On the contrary, it makes any\nfollow-up startup cheaper the bigger the segment size.\n\n> - Should \"pg_waldump --start $X --end $Y\" open files not needed for the\n> requested range? I think no.\n\nSo this is a case where identify_target_directory() is called with a\nfname of NULL. Agreed that search_directory could be smarter with the\nfiles it should scan, especially if we have start and/or end LSNs at\nhand to filter out what we'd like to be in the data folder.\n--\nMichael",
"msg_date": "Mon, 14 Aug 2023 10:37:37 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_waldump vs. all-zeros WAL files; server creation of such files"
}
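The filtering suggested here could look roughly like the following; this is illustrative code rather than the pg_waldump implementation, and it only applies once the segment size is known and both endpoints were given:

static bool
segment_is_in_range(const char *fname, XLogRecPtr start_lsn,
                    XLogRecPtr end_lsn, int wal_segsz_bytes)
{
    TimeLineID  tli;
    XLogSegNo   segno;
    XLogSegNo   start_segno;
    XLogSegNo   end_segno;

    XLogFromFileName(fname, &tli, &segno, wal_segsz_bytes);
    XLByteToSeg(start_lsn, start_segno, wal_segsz_bytes);
    XLByteToSeg(end_lsn, end_segno, wal_segsz_bytes);

    return segno >= start_segno && segno <= end_segno;
}

Skipping directory entries outside that range would keep the scan from ever reading the header of the all-zeros segment in the test case above.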
] |
[
{
"msg_contents": "Hello devs,\n\nPgbench is managing clients I/Os manually with select or poll. Much of \nthis could be managed by libevent.\n\nPros:\n\n1. libevent is portable, stable, and widely used (eg Chromium, Memcached, PgBouncer).\n\n2. libevent implements more I/O wait methods, which may be more efficient on some platforms\n (eg FreeBSD kqueue, Windows wepoll in libevent 2.2 alpha), and hides portability issues.\n\n3. it would remove significant portions of unattractive pgbench code, esp. in threadRun,\n and around socket/poll abstraction and portability layer.\n\n4. depending on the number of event loops, the client load could be shared more evenly.\n currently a thread only manages its own clients, some client I/Os may be waiting to be\n processed while other threads could be available to process them.\n\nCons:\n\n1. it adds a libevent dependency to postgres. This may be a no go from the start.\n\n2. this is a significant refactoring project which implies a new internal architecture and adds\n new code to process and generate appropriate events.\n\n3. libevent ability to function efficiently in a highly multithreaded environment\n is unclear. Should there be one event queue which generate a shared work queue?\n or several event queues, one per thread (which would remove the sharing pro)?\n or something in between? Some experiments and configuratibility may be desirable.\n This may also have an impact on pgbench user interface and output depending on the result,\n eg there may be specialized event and worker threads, some statistics may be slightly\n different, new options may be needed…\n\n4. libevent development seems slugish, last bugfix was published 3 years ago, version\n 2.2 has been baking for years, but the development seems lively (+100 contributors).\n\nNeutral?\n\n1. BSD 3 clauses license.\n\nIs pros > cons, or not? Other thoughts, pros, cons?\n\n-- \nFabien.",
"msg_date": "Sun, 13 Aug 2023 11:32:08 +0200 (CEST)",
"msg_from": "Fabien COELHO <[email protected]>",
"msg_from_op": true,
"msg_subject": "pgbench with libevent?"
},
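For concreteness, the smallest possible shape of a libevent-based loop is something like the following; nothing here is pgbench code, and a real client would additionally register EV_READ events on each connection's PQsocket() descriptor:

#include <event2/event.h>

static void
on_client_ready(evutil_socket_t fd, short what, void *arg)
{
    /* pgbench would call PQconsumeInput()/PQisBusy() for the client here;
     * EV_TIMEOUT wakeups would drive --rate style throttling. */
}

int
main(void)
{
    struct event_base *base = event_base_new();
    struct timeval     tick = {0, 100 * 1000};  /* 100 ms */
    /* fd = -1 keeps this compilable without a live connection. */
    struct event *ev = event_new(base, -1, EV_PERSIST, on_client_ready, NULL);

    event_add(ev, &tick);          /* persistent timeout, re-armed automatically */
    event_base_dispatch(base);     /* one loop per thread, or one shared loop */

    event_free(ev);
    event_base_free(base);
    return 0;
}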
{
"msg_contents": "\n> Pgbench is managing clients I/Os manually with select or poll. Much of this \n> could be managed by libevent.\n\nOr maybe libuv (used by nodejs?).\n\n From preliminary testing libevent seems not too good at fine grain time \nmanagement which are used for throttling, whereas libuv advertised that it \nis good at it, although what it does is yet to be seen.\n\nNote: libev had no updates in 8 years.\n\n-- \nFabien.\n\n\n",
"msg_date": "Mon, 14 Aug 2023 02:35:26 +0200 (CEST)",
"msg_from": "Fabien COELHO <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pgbench with libevent?"
},
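To make the timer side of that comparison concrete, a libuv version of a throttling tick is sketched below; it only shows the API shape (millisecond granularity), not a claim about achievable precision, and nothing in it comes from pgbench:

#include <inttypes.h>
#include <stdio.h>
#include <uv.h>

static int ticks = 0;

static void
on_tick(uv_timer_t *timer)
{
    /* a rate-limited client would issue its next transaction here */
    printf("tick %d at %" PRIu64 " ms\n", ++ticks, uv_now(timer->loop));
    if (ticks >= 5)
        uv_timer_stop(timer);
}

int
main(void)
{
    uv_loop_t *loop = uv_default_loop();
    uv_timer_t throttle;

    uv_timer_init(loop, &throttle);
    uv_timer_start(&throttle, on_tick, 0, 10);  /* fire now, then every 10 ms */
    return uv_run(loop, UV_RUN_DEFAULT);
}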
{
"msg_contents": "On Mon, Aug 14, 2023 at 12:35 PM Fabien COELHO <[email protected]> wrote:\n> > Pgbench is managing clients I/Os manually with select or poll. Much of this\n> > could be managed by libevent.\n>\n> Or maybe libuv (used by nodejs?).\n>\n> From preliminary testing libevent seems not too good at fine grain time\n> management which are used for throttling, whereas libuv advertised that it\n> is good at it, although what it does is yet to be seen.\n\nDo you think our WaitEventSet stuff could be good here, if made\nfrontend-friendly?\n\n\n",
"msg_date": "Mon, 14 Aug 2023 14:58:26 +1200",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgbench with libevent?"
},
{
"msg_contents": "> On Mon, Aug 14, 2023 at 12:35 PM Fabien COELHO <[email protected]> wrote:\r\n>> > Pgbench is managing clients I/Os manually with select or poll. Much of this\r\n>> > could be managed by libevent.\r\n>>\r\n>> Or maybe libuv (used by nodejs?).\r\n>>\r\n>> From preliminary testing libevent seems not too good at fine grain time\r\n>> management which are used for throttling, whereas libuv advertised that it\r\n>> is good at it, although what it does is yet to be seen.\r\n> \r\n> Do you think our WaitEventSet stuff could be good here, if made\r\n> frontend-friendly?\r\n\r\nInteresting. In my understanding this also needs to make Latch\r\nfrontend-friendly?\r\n\r\nBest reagards,\r\n--\r\nTatsuo Ishii\r\nSRA OSS LLC\r\nEnglish: http://www.sraoss.co.jp/index_en/\r\nJapanese:http://www.sraoss.co.jp\r\n",
"msg_date": "Mon, 14 Aug 2023 15:06:59 +0900 (JST)",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgbench with libevent?"
},
{
"msg_contents": "On Mon, Aug 14, 2023 at 6:07 PM Tatsuo Ishii <[email protected]> wrote:\n> Interesting. In my understanding this also needs to make Latch\n> frontend-friendly?\n\nIt could be refactored to support a different subset of event types --\nmaybe just sockets, no latches and obviously no 'postmaster death'.\nBut figuring out how to make latches work between threads might also\nbe interesting for future projects...\n\nMaybe Fabien has completion-based I/O in mind (not just \"readiness\").\nThat's something that some of those libraries can do, IIUC. For\nexample, when your thread wakes up, it tells you \"your socket read is\nfinished, the data is already in your target buffer\". As opposed to\n\"you can now call recv() without blocking\", so you avoid another trip\ninto the kernel. But that's also something we'll eventually want to\nfigure out in the server.\n\n\n",
"msg_date": "Mon, 14 Aug 2023 19:15:20 +1200",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgbench with libevent?"
},
{
"msg_contents": "On 2023-Aug-13, Fabien COELHO wrote:\n\n> 4. libevent development seems slugish, last bugfix was published 3 years ago, version\n> 2.2 has been baking for years, but the development seems lively (+100 contributors).\n\nUgh, I would stay away from something like that. Would we become\nhostage to an undelivering group? No thanks.\n\nOn 2023-Aug-14, Fabien COELHO wrote:\n\n> Or maybe libuv (used by nodejs?).\n\n> Note: libev had no updates in 8 years.\n\nlibev or libuv? No updates in 8 years => dead. No way.\n\n\nReworking based on wait events as proposed downthread sounds more\npromising.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"Siempre hay que alimentar a los dioses, aunque la tierra esté seca\" (Orual)\n\n\n",
"msg_date": "Mon, 14 Aug 2023 11:38:07 +0200",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgbench with libevent?"
},
{
"msg_contents": "Hello Thomas,\n\n>>> Pgbench is managing clients I/Os manually with select or poll. Much of this\n>>> could be managed by libevent.\n>>\n>> Or maybe libuv (used by nodejs?).\n>>\n>> From preliminary testing libevent seems not too good at fine grain time\n>> management which are used for throttling, whereas libuv advertised that it\n>> is good at it, although what it does is yet to be seen.\n>\n> Do you think our WaitEventSet stuff could be good here, if made\n> frontend-friendly?\n\nInteresting question.\n\nAFAICS, the answer is that it could indeed probably fit the task, but it \nwould require significant work to make it thread-compatible, and to \nuntangle it from IsUnderPosmaster/postmaster death, memory context, \nelog/ereport, and other back-end specific stuff.\n\nIf you remove all that with a clean abstraction (quite a task), then once \ndone the question could be why not use libevent/libuv/… in the backend \ninstead of maintaining more or less the same thing inside postgres?\n\nSo ISTM that as far as pgbench is concerned it would be much simpler to \nuse libevent/libuv/… directly if the pros are enough and the cons not \nredhibitory, and provided that the needed detailed features are really \nthere.\n\n-- \nFabien.",
"msg_date": "Mon, 14 Aug 2023 12:09:52 +0200 (CEST)",
"msg_from": "Fabien COELHO <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pgbench with libevent?"
},
{
"msg_contents": "\n>> Interesting. In my understanding this also needs to make Latch\n>> frontend-friendly?\n>\n> It could be refactored to support a different subset of event types --\n> maybe just sockets, no latches and obviously no 'postmaster death'.\n> But figuring out how to make latches work between threads might also\n> be interesting for future projects...\n>\n> Maybe Fabien has completion-based I/O in mind (not just \"readiness\").\n\nPgbench is really a primitive client on top of libpq. ISTM that \ncompletion-based I/O would require to enhance libpq asynchronous-ity, not \njust expose its underlying fd to allow asynchronous implementations.\nCurrently pgbench only actuall \"waits\" for results from the server\nand testing PQisBusy to check whether they are there.\n\n> That's something that some of those libraries can do, IIUC. For\n> example, when your thread wakes up, it tells you \"your socket read is\n> finished, the data is already in your target buffer\".\n\nIndeed, libevent has a higher level \"buffer\" oriented API.\n\n> As opposed to \"you can now call recv() without blocking\", so you avoid \n> another trip into the kernel. But that's also something we'll \n> eventually want to figure out in the server.\n\n-- \nFabien.\n\n\n",
"msg_date": "Mon, 14 Aug 2023 12:22:24 +0200 (CEST)",
"msg_from": "Fabien COELHO <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pgbench with libevent?"
},
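Spelled out standalone, the libpq pattern pgbench builds on today is the familiar send/wait/drain loop below (connection string and query are placeholders):

#include <stdio.h>
#include <sys/select.h>
#include <libpq-fe.h>

int
main(void)
{
    PGconn   *conn = PQconnectdb("dbname=postgres");
    PGresult *res;

    if (PQstatus(conn) != CONNECTION_OK || !PQsendQuery(conn, "SELECT 1"))
    {
        fprintf(stderr, "%s", PQerrorMessage(conn));
        PQfinish(conn);
        return 1;
    }

    while (PQisBusy(conn))
    {
        int    sock = PQsocket(conn);
        fd_set rfds;

        FD_ZERO(&rfds);
        FD_SET(sock, &rfds);
        /* this is the wait that pgbench currently manages by hand */
        if (select(sock + 1, &rfds, NULL, NULL, NULL) < 0)
            break;
        if (!PQconsumeInput(conn))
            break;
    }

    while ((res = PQgetResult(conn)) != NULL)
        PQclear(res);

    PQfinish(conn);
    return 0;
}

An event library would replace only the select() call; the PQconsumeInput()/PQisBusy()/PQgetResult() part stays the same.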
{
"msg_contents": "\n>> 4. libevent development seems slugish, last bugfix was published 3 years ago, version\n>> 2.2 has been baking for years, but the development seems lively (+100 contributors).\n>\n> Ugh, I would stay away from something like that. Would we become\n> hostage to an undelivering group? No thanks.\n\nOk.\n\n>> Or maybe libuv (used by nodejs?).\n>\n>> Note: libev had no updates in 8 years.\n>\n> libev or libuv? No updates in 8 years => dead. No way.\n\nSorry, it was not a typo, but the information was not very explicit.\nI have looked at 3 libraries: libevent, libuv and libev.\n\nlibuv is quite lively, last updated 2023-06-30.\n\nlibev is an often cited library, which indeed seems quite dead, so I was \n\"noting\" that I had discarded it, but it looked like a typo.\n\n> Reworking based on wait events as proposed downthread sounds more \n> promising.\n\nThe wait event postgres backend implementation would require a lot of work \nto be usable in a client context.\n\nMy current investigation is that libuv could be the reasonable target, if \nany, especially as it seems to provide a portable thread pool as well.\n\n-- \nFabien.\n\n\n",
"msg_date": "Mon, 14 Aug 2023 12:32:33 +0200 (CEST)",
"msg_from": "Fabien COELHO <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pgbench with libevent?"
},
{
"msg_contents": "> It could be refactored to support a different subset of event types --\n> maybe just sockets, no latches and obviously no 'postmaster death'.\n\nOk.\n\n> But figuring out how to make latches work between threads might also\n> be interesting for future projects...\n\nMaybe. Some people are working on threading PostgreSQL. They may\nalready know...\n\n> Maybe Fabien has completion-based I/O in mind (not just \"readiness\").\n> That's something that some of those libraries can do, IIUC. For\n> example, when your thread wakes up, it tells you \"your socket read is\n> finished, the data is already in your target buffer\". As opposed to\n> \"you can now call recv() without blocking\", so you avoid another trip\n> into the kernel. But that's also something we'll eventually want to\n> figure out in the server.\n\nAgreed.\n--\nTatsuo Ishii\nSRA OSS LLC\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp\n\n\n",
"msg_date": "Mon, 14 Aug 2023 19:45:21 +0900 (JST)",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgbench with libevent?"
}
] |
[
{
"msg_contents": "Suppose that I create the following index on the tenk1 table from the\nregression tests:\n\ncreate index on tenk1 (two, four, hundred, thousand, tenthous);\n\nNow the following query will be able to use index quals for each\ncolumn that appear in my composite index:\n\nselect * from tenk1\nwhere\n two = 1\n and four = 3\n and hundred = 91\n and thousand = 891\n and tenthous = 1891;\n\nThe query returns one row, and touches 3 buffers/pages (according to\nEXPLAIN ANALYZE with buffers). The overheads here make perfect sense:\nthere's one root page access, one leaf page access, and a single heap\npage access. Clearly the nbtree initial positioning code is able to\ndescend to the exact leaf page (and page offset) where the first\npossible match could be found. Pretty standard stuff.\n\nBut if I rewrite this query to use an inequality, the picture changes.\nIf I replace \"four = 3\" with \"four > 2\", I get a query that is very\nsimilar to the original (that returns the same single row):\n\nselect * from tenk1\nwhere\n two = 1\n and four > 2\n and hundred = 91\n and thousand = 891\n and tenthous = 1891;\n\nThis time our query touches 16 buffers/pages. That's a total of 15\nindex pages accessed, of which 14 are leaf pages. We'll needlessly\nplow through an extra 13 leaf pages, before finally arriving at the\nfirst leaf page that might *actually* have a real match for us.\n\nWe can and should find a way for the second query to descend to the\nsame leaf page directly, so that the physical access patterns match\nthose that we saw with the first query. Only the first query can use\nan insertion scan key with all 4 attributes filled in to find its\ninitial scan position. The second query uses an insertion scan key\nwith values set for the first 2 index columns (on two and four) only.\nEXPLAIN offers no hint that this is what happens -- the \"Index Cond:\"\nshown for each query is practically identical. It seems to me that\nMarkus Winand had a very good point when he complained that we don't\nexpose this difference directly (e.g., by identifying which columns\nappear in \"Access predicates\" and which columns are merely \"Index\nfilter predicates\") [1]. That would make these kinds of issues a lot\nmore obvious.\n\nThe nbtree code is already capable of tricks that are close enough to\nwhat I'm thinking of here. Currently, nbtree's initial positioning\ncode will set BTScanInsertData.nextkey=false for the first query\n(where BTScanInsertData.keysz=4), and BTScanInsertData.nextkey=true\nfor the second query (where BTScanInsertData.keysz=2 right now). So\nthe second query I came up with does at least manage to locate the\nleaf page where \"four = 3\" tuples begin, even today -- its \"four > 2\"\ninequality is at least \"handled efficiently\". The inefficiencies come\nfrom how nbtree handles the remaining two index columns when building\nan insertion scan key for our initial descent. nbtree will treat the\ninequality as making it unsafe to include further values for the\nremaining two attributes, which is the real source of the extra leaf\npage scans (though of course the later attributes are still usable as\nsearch-type scan keys). But it's *not* unsafe to position ourselves on\nthe right leaf page from the start. Not really.\n\nAll that it would take to fix the problem is per-attribute\nBTScanInsertData.nextkey values. There is no reason why \"nextkey\"\nsemantics should only work for the last attribute in the insertion\nscan key. 
Under this scheme, _bt_first() would be taught to set up the\ninsertion scan key with (say) nextkey=true for the \"four > 2\"\nattribute, and nextkey=false for the other 3 attributes (since we do\nthat whenever >= or = are used). It would probably also make sense to\ngeneralize this approach to handle (say) a third query that had a\n\"four < 2\" inequality, but otherwise matched the first two queries. So\nwe wouldn't literally use multiple \"nextkey\" fields to do this.\n\nThe most general approach seems to require that we teach insertion\nscan key routines like _bt_compare() about \"goback\" semantics, which\nmust also work at the attribute granularity. So we'd probably replace\nboth \"nextkey\" and \"goback\" with something new and more general. I\nalready wrote a patch (still in the CF queue) to teach nbtree\ninsertion scan keys about \"goback\" semantics [2] (whose use would\nstill be limited to backwards scans), so that we'd avoid needlessly\naccessing extra pages in so-called boundary cases (which seems like a\nmuch less important problem than the one I've highlighted here).\n\nThat existing patch already removed code in _bt_first that handled\n\"stepping back\" once we're on the leaf level. ISTM that the right\nplace to do stuff like that is in routines like _bt_search,\n_bt_binsrch, and _bt_compare -- not in _bt_first. The code around\n_bt_compare seems like it would benefit from having more of this sort\nof context. Having the full picture matters both when searching\ninternal pages and leaf pages.\n\nThoughts? Was this issue discussed at some point in the past?\n\n[1] https://use-the-index-luke.com/sql/explain-plan/postgresql/filter-predicates\n[2] https://www.postgresql.org/message-id/flat/CAH2-Wz=XPzM8HzaLPq278Vms420mVSHfgs9wi5tjFKHcapZCEw@mail.gmail.com\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Sun, 13 Aug 2023 17:50:56 -0700",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Naive handling of inequalities by nbtree initial positioning code"
},
{
"msg_contents": "On Sun, Aug 13, 2023 at 5:50 PM Peter Geoghegan <[email protected]> wrote:\n> select * from tenk1\n> where\n> two = 1\n> and four = 3\n> and hundred = 91\n> and thousand = 891\n> and tenthous = 1891;\n>\n> The query returns one row, and touches 3 buffers/pages (according to\n> EXPLAIN ANALYZE with buffers). The overheads here make perfect sense:\n> there's one root page access, one leaf page access, and a single heap\n> page access. Clearly the nbtree initial positioning code is able to\n> descend to the exact leaf page (and page offset) where the first\n> possible match could be found. Pretty standard stuff.\n\nI probably should have made this first query use \"four >= 3\" instead\nof using \"four = 3\" (while still using \"four > 2\" for the second,\n\"bad\" query). The example works a bit better that way because now the\nqueries are logically equivalent, and yet still have this big\ndisparity. (We get 4 buffer hits for the \"good\" >= query, but 16\nbuffer hits for the equivalent \"bad\" > query.)\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Sun, 13 Aug 2023 18:09:30 -0700",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Naive handling of inequalities by nbtree initial positioning code"
},
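The comparison is easy to reproduce against the regression database; exact buffer counts will vary with the data and build, but the >= form should report far fewer shared hits than the logically equivalent > form:

CREATE INDEX ON tenk1 (two, four, hundred, thousand, tenthous);

EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM tenk1
WHERE two = 1 AND four >= 3 AND hundred = 91
  AND thousand = 891 AND tenthous = 1891;

EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM tenk1
WHERE two = 1 AND four > 2 AND hundred = 91
  AND thousand = 891 AND tenthous = 1891;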
{
"msg_contents": "On Sun, Aug 13, 2023 at 5:50 PM Peter Geoghegan <[email protected]> wrote:\n> All that it would take to fix the problem is per-attribute\n> BTScanInsertData.nextkey values. There is no reason why \"nextkey\"\n> semantics should only work for the last attribute in the insertion\n> scan key. Under this scheme, _bt_first() would be taught to set up the\n> insertion scan key with (say) nextkey=true for the \"four > 2\"\n> attribute, and nextkey=false for the other 3 attributes (since we do\n> that whenever >= or = are used). It would probably also make sense to\n> generalize this approach to handle (say) a third query that had a\n> \"four < 2\" inequality, but otherwise matched the first two queries. So\n> we wouldn't literally use multiple \"nextkey\" fields to do this.\n\nActually, that can't work when there are a huge number of index tuples\nwith the same values for \"four\" (enough to span many internal pages).\nSo we'd need specialized knowledge of the data type (probably\nfrom an opclass support function) to transform \"four > 2\" into \"four\n>= 3\" up front. Alternatively, we could do roughly the same thing via\nan initial index probe to do the same thing. The latter approach would\nbe needed for continuous data types, where the transformation isn't\npossible at all.\n\nThe probing approach could work by finding an initial position in the\nsame way as we currently locate an initial leaf page -- the way that I\ncomplained about earlier on, but with an important twist. Once we'd\nestablished that the first \"four\" value in the index > 2 really was 3\n(or whatever it turned out to be), we could fill that value into a new\ninsertion scan key. It would then be possible to do another descent of\nthe index, skipping over most of the leaf pages that we'll access\nneedlessly right now. (Actually, we'd only do all that when it seemed\nlikely to allow us to skip a significant number of intermediate leaf\npages -- which is what we saw in my test case.)\n\nThis is directly related to skip scan. The former approach is more or\nless what the MDAM paper calls \"dense\" access (which is naturally\nlimited to discrete data types like integer), while the latter probing\napproach is what it calls \"sparse\" access. Skip scan performs this\nprocess repeatedly, most of the time, but we'd only skip once here.\n\nIn fact, if my example had used (say) \"four > 1\" instead, then it\nwould have made sense to skip multiple times -- not just once, after\nan initial descent. Because then we'd have had to consider matches for\nboth \"two=1 and four=2\" and \"two=1 and four=3\" (there aren't any\n\"two=1 and four=4\" matches so we'd then be done).\n\nIn fact, had there been no mention of the \"four\" column in the query\nwhatsoever (which is how we tend to think of skip scan), then a decent\nimplementation of skip scan would effectively behave as if the query\nhad been written \"two=1 and four > -inf and ...\", while following the\nsame general approach. (Or \"two=1 and four < +inf and ...\", if this\nwas a similar looking backwards scan.)\n\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Sun, 13 Aug 2023 21:33:30 -0700",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Naive handling of inequalities by nbtree initial positioning code"
}
] |
[
{
"msg_contents": "Hi,\n\nSMgrRelationData objects don't currently have a defined lifetime, so\nit's hard to know when the result of smgropen() might become a\ndangling pointer. This has caused a few bugs in the past, and the\nusual fix is to just call smgropen() more often and not hold onto\npointers. If you're doing that frequently enough, the hash table\nlookups can show up in profiles. I'm interested in this topic for\nmore than just micro-optimisations, though: in order to be able to\nbatch/merge smgr operations, I'd like to be able to collect them in\ndata structures that survive more than just a few lines of code.\n(Examples to follow in later emails).\n\nThe simplest idea seems to be to tie object lifetime to transactions\nusing the existing AtEOXact_SMgr() mechanism. In recovery, the\nobvious corresponding time would be the commit/abort record that\ndestroys the storage.\n\nThis could be achieved by extending smgrrelease(). That was a\nsolution to the same problem in a narrower context: we didn't want\nCFIs to randomly free SMgrRelations, but we needed to be able to\nforce-close fds in other backends, to fix various edge cases.\n\nThe new idea is to overload smgrrelease(it) so that it also clears the\nowner, which means that AtEOXact_SMgr() will eventually smgrclose(it),\nunless it is re-owned by a relation before then. That choice stems\nfrom the complete lack of information available via sinval in the case\nof an overflow. We must (1) close all descriptors because any file\nmight have been unlinked, (2) keep all pointers valid and yet (3) not\nleak dropped smgr objects forever. In this patch, smgrreleaseall()\nachieves those goals.\n\nProof-of-concept patch attached. Are there holes in this scheme?\nBetter ideas welcome. In terms of spelling, another option would be\nto change the behaviour of smgrclose() to work as described, ie it\nwould call smgrrelease() and then also disown, so we don't have to\nchange most of those callers, and then add a new function\nsmgrdestroy() for the few places that truly need it. Or something\nlike that.\n\nOther completely different ideas I've bounced around with various\nhackers and decided against: references counts, \"holder\" objects that\ncan be an \"owner\" (like Relation, but when you don't have a Relation)\nbut can re-open on demand. Seemed needlessly complicated.\n\nWhile studying this I noticed a minor thinko in smgrrelease() in\n15+16, so here's a fix for that also. I haven't figured out a\nsequence that makes anything bad happen, but we should really\ninvalidate smgr_targblock when a relfilenode is reused, since it might\npoint past the end. This becomes more obvious once smgrrelease() is\nused for truncation, as proposed here.",
"msg_date": "Mon, 14 Aug 2023 14:41:56 +1200",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": true,
"msg_subject": "Extending SMgrRelation lifetimes"
},
{
"msg_contents": "On 14/08/2023 05:41, Thomas Munro wrote:\n> The new idea is to overload smgrrelease(it) so that it also clears the\n> owner, which means that AtEOXact_SMgr() will eventually smgrclose(it),\n> unless it is re-owned by a relation before then. That choice stems\n> from the complete lack of information available via sinval in the case\n> of an overflow. We must (1) close all descriptors because any file\n> might have been unlinked, (2) keep all pointers valid and yet (3) not\n> leak dropped smgr objects forever. In this patch, smgrreleaseall()\n> achieves those goals.\n\nMakes sense.\n\nSome of the smgrclose() calls could perhaps still be smgrclose(). If you \ncan guarantee that no-one is holding the SMgrRelation, it's still OK to \ncall smgrclose(). There's little gain from closing earlier, though.\n\n> Proof-of-concept patch attached. Are there holes in this scheme?\n> Better ideas welcome. In terms of spelling, another option would be\n> to change the behaviour of smgrclose() to work as described, ie it\n> would call smgrrelease() and then also disown, so we don't have to\n> change most of those callers, and then add a new function\n> smgrdestroy() for the few places that truly need it. Or something\n> like that.\n\nIf you change smgrclose() to do what smgrrelease() does now, then it \nwill apply automatically to extensions.\n\nIf an extension is currently using smgropen()/smgrclose() correctly, \nthis patch alone won't make it wrong, so it's not very critical for \nextensions to adopt the change. However, if after this we consider it OK \nto hold a pointer to SMgrRelation for longer, and start doing that in \nthe backend, then extensions need to be adapted too.\n\n> While studying this I noticed a minor thinko in smgrrelease() in\n> 15+16, so here's a fix for that also. I haven't figured out a\n> sequence that makes anything bad happen, but we should really\n> invalidate smgr_targblock when a relfilenode is reused, since it might\n> point past the end. This becomes more obvious once smgrrelease() is\n> used for truncation, as proposed here.\n\n+1. You can move the smgr_targblock clearing out of the loop, though.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Tue, 15 Aug 2023 19:11:38 +0300",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Extending SMgrRelation lifetimes"
},
{
"msg_contents": "On Wed, Aug 16, 2023 at 4:11 AM Heikki Linnakangas <[email protected]> wrote:\n> Makes sense.\n\nThanks for looking!\n\n> If you change smgrclose() to do what smgrrelease() does now, then it\n> will apply automatically to extensions.\n>\n> If an extension is currently using smgropen()/smgrclose() correctly,\n> this patch alone won't make it wrong, so it's not very critical for\n> extensions to adopt the change. However, if after this we consider it OK\n> to hold a pointer to SMgrRelation for longer, and start doing that in\n> the backend, then extensions need to be adapted too.\n\nYeah, that sounds quite compelling. Let's try that way:\n\n * smgrrelease() is removed\n * smgrclose() now releases resources, but doesn't destroy until EOX\n * smgrdestroy() now frees memory, and should rarely be used\n\nStill WIP while I think about edge cases, but so far I think this is\nthe better option.\n\n> > While studying this I noticed a minor thinko in smgrrelease() in\n> > 15+16, so here's a fix for that also. I haven't figured out a\n> > sequence that makes anything bad happen, but we should really\n> > invalidate smgr_targblock when a relfilenode is reused, since it might\n> > point past the end. This becomes more obvious once smgrrelease() is\n> > used for truncation, as proposed here.\n>\n> +1. You can move the smgr_targblock clearing out of the loop, though.\n\nRight, thanks. Pushed.",
"msg_date": "Thu, 17 Aug 2023 17:10:18 +1200",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Extending SMgrRelation lifetimes"
},
{
"msg_contents": "On Thu, Aug 17, 2023 at 1:11 AM Thomas Munro <[email protected]> wrote:\n> Still WIP while I think about edge cases, but so far I think this is\n> the better option.\n\nI think this direction makes a lot of sense. The lack of a defined\nlifetime for SMgrRelation objects makes correct programming difficult,\nmakes efficient programming difficult, and doesn't really have any\nadvantages. I know this is just a WIP patch but for the final version\nI think it would make sense to try to do a bit more work on the\ncomments. For instance:\n\n- In src/include/utils/rel.h, instead of just deleting that comment,\nhow about documenting the new object lifetime? Or maybe that comment\nbelongs elsewhere, but I think it would definitely good to spell it\nout very explicitly at some suitable place.\n\n- When we change smgrcloseall() to smgrdestroyall(), maybe it's worth\nspelling out why destroying is needed and not just closing. For\nexample, the second hunk in bgwriter.c includes a comment that says\n\"After any checkpoint, close all smgr files. This is so we won't hang\nonto smgr references to deleted files indefinitely.\" But maybe it\nshould say something like \"After any checkpoint, close all smgr files\nand destroy the associated smgr objects. This guarantees that we close\nthe actual file descriptors, that we close the File objects as managed\nby fd.c, and that we also destroy the smgr objects. We don't want any\nof these resources to stick around indefinitely after a relation file\nhas been deleted.\"\n\n- Maybe it's worth adding comments around some of the smgrclose[all]\ncalls to mentioning that in those cases we want the file descriptors\n(and File objects?) to get closed but don't want to invalidate\npointers. But I'm less sure that this is necessary. I don't want to\nhave a million comments everywhere, just enough that someone looking\nat this stuff in the future can orient themselves about what's going\non without too much difficulty.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 17 Aug 2023 10:30:17 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Extending SMgrRelation lifetimes"
},
{
"msg_contents": "On Fri, Aug 18, 2023 at 2:30 AM Robert Haas <[email protected]> wrote:\n> I think this direction makes a lot of sense. The lack of a defined\n> lifetime for SMgrRelation objects makes correct programming difficult,\n> makes efficient programming difficult, and doesn't really have any\n> advantages.\n\nThanks for looking!\n\n> I know this is just a WIP patch but for the final version\n> I think it would make sense to try to do a bit more work on the\n> comments. For instance:\n>\n> - In src/include/utils/rel.h, instead of just deleting that comment,\n> how about documenting the new object lifetime? Or maybe that comment\n> belongs elsewhere, but I think it would definitely good to spell it\n> out very explicitly at some suitable place.\n\nRight, let's one find one good place. I think smgropen() would be best.\n\n> - When we change smgrcloseall() to smgrdestroyall(), maybe it's worth\n> spelling out why destroying is needed and not just closing. For\n> example, the second hunk in bgwriter.c includes a comment that says\n> \"After any checkpoint, close all smgr files. This is so we won't hang\n> onto smgr references to deleted files indefinitely.\" But maybe it\n> should say something like \"After any checkpoint, close all smgr files\n> and destroy the associated smgr objects. This guarantees that we close\n> the actual file descriptors, that we close the File objects as managed\n> by fd.c, and that we also destroy the smgr objects. We don't want any\n> of these resources to stick around indefinitely after a relation file\n> has been deleted.\"\n\nThere are several similar comments. I think they can be divided into\ntwo categories:\n\n1. The error-path ones, that we should now just delete along with the\ncode they describe, because the \"various strange errors\" should have\nbeen fixed comprehensively by PROCSIGNAL_BARRIER_SMGRRELEASE. Here is\na separate patch to do that.\n\n2. The per-checkpoint ones that still make sense to avoid unbounded\nresource usage. Here is a new attempt at explaining:\n\n /*\n- * After any checkpoint, close all smgr files.\nThis is so we\n- * won't hang onto smgr references to deleted\nfiles indefinitely.\n+ * After any checkpoint, free all smgr\nobjects. Otherwise we\n+ * would never do so for dropped relations, as\nthe checkpointer\n+ * does not process shared invalidation messages or call\n+ * AtEOXact_SMgr().\n */\n- smgrcloseall();\n+ smgrdestroyall();\n\n> - Maybe it's worth adding comments around some of the smgrclose[all]\n> calls to mentioning that in those cases we want the file descriptors\n> (and File objects?) to get closed but don't want to invalidate\n> pointers. But I'm less sure that this is necessary. I don't want to\n> have a million comments everywhere, just enough that someone looking\n> at this stuff in the future can orient themselves about what's going\n> on without too much difficulty.\n\nI covered that with the following comment for smgrclose():\n\n+ * The object remains valid, but is moved to the unknown list where it will\n+ * be destroyed by AtEOXact_SMgr(). It may be re-owned if it is accessed by a\n+ * relation before then.",
"msg_date": "Wed, 23 Aug 2023 16:54:49 +1200",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Extending SMgrRelation lifetimes"
},
{
"msg_contents": "I think that if you believe 0001 to be correct you should go ahead and\ncommit it sooner rather than later. If you're wrong and something\nweird starts happening we'll then have a chance to notice that before\ntoo much other stuff gets changed on top of this and confuses the\nmatter.\n\nOn Wed, Aug 23, 2023 at 12:55 AM Thomas Munro <[email protected]> wrote:\n> Right, let's one find one good place. I think smgropen() would be best.\n\nI think it would be a good idea to give this comment a bit more oomph.\nIn lieu of this:\n\n+ * This does not attempt to actually open the underlying files. The returned\n+ * object remains valid at least until AtEOXact_SMgr() is called, or until\n+ * smgrdestroy() is called in non-transactional backends.\n\nI would leave the existing \"This does not attempt to actually open the\nunderlying files.\" comment as a separate comment, and add something\nlike this as a new paragraph:\n\nIn versions of PostgreSQL prior to 17, this function returned an\nobject with no defined lifetime. Now, however, the object remains\nvalid for the lifetime of the transaction, up to the point where\nAtEOXact_SMgr() is called, making it much easier for callers to know\nfor how long they can hold on to a pointer to the returned object. If\nthis function is called outside of a transaction, the object remains\nvalid until smgrdestroy() or smgrdestroyall() is called. Background\nprocesses that use smgr but not transactions typically do this once\nper checkpoint cycle.\n\nApart from that, the main thing that is bothering me is that the\njustification for redefining smgrclose() to do what smgrrelease() now\ndoes isn't really spelled out anywhere. You mentioned some reasons and\nHeikki mentioned the benefit to extensions, but I feel like somebody\nshould be able to understand the reasoning clearly from reading the\ncommit message or the comments in the patch, rather than having to\nvisit the mailing list discussion, and I'm not sure we're there yet. I\nfeel like I understood why we were doing this and was convinced it was\na good idea at some point, but now the reasoning has gone out of my\nhead and I can't recover it. If somebody does smgropen() .. stuff ...\nsmgrclose() as in heapam_relation_copy_data() or index_copy_data(),\nthis change has the effect of making the SmgrRelation remain valid\nuntil eoxact whereas before it would have been destroyed instantly. Is\nthat what we want? Presumably yes, or this patch wouldn't be shaped\nlike this, but I don't know why we want that...\n\nAnother thing that seems a bit puzzling is how this is intended to, or\ndoes, interact with the ownership mechanism. Intuitively, I feel like\na Relation owns an SMgrRelation *because* the SMgrRelation has no\ndefined lifetime. But I think that's not quite right. I guess the\nownership mechanism doesn't guarantee anything about the lifetime of\nthe object, just that the pointer in the Relation won't hang around\nany longer than the object to which it's pointing. So does that mean\nthat we're free to redefine the object lifetime to be pretty much\nanything we want and that mechanism doesn't really care? Huh,\ninteresting.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 18 Sep 2023 11:19:36 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Extending SMgrRelation lifetimes"
},
{
"msg_contents": "I spent some more time digging into this, experimenting with different \napproaches. Came up with pretty significant changes; see below:\n\nOn 18/09/2023 18:19, Robert Haas wrote:\n> I think that if you believe 0001 to be correct you should go ahead and\n> commit it sooner rather than later. If you're wrong and something\n> weird starts happening we'll then have a chance to notice that before\n> too much other stuff gets changed on top of this and confuses the\n> matter.\n\n+1\n\n> On Wed, Aug 23, 2023 at 12:55 AM Thomas Munro <[email protected]> wrote:\n>> Right, let's one find one good place. I think smgropen() would be best.\n> \n> I think it would be a good idea to give this comment a bit more oomph.\n> In lieu of this:\n> \n> + * This does not attempt to actually open the underlying files. The returned\n> + * object remains valid at least until AtEOXact_SMgr() is called, or until\n> + * smgrdestroy() is called in non-transactional backends.\n> \n> I would leave the existing \"This does not attempt to actually open the\n> underlying files.\" comment as a separate comment, and add something\n> like this as a new paragraph:\n> \n> In versions of PostgreSQL prior to 17, this function returned an\n> object with no defined lifetime. Now, however, the object remains\n> valid for the lifetime of the transaction, up to the point where\n> AtEOXact_SMgr() is called, making it much easier for callers to know\n> for how long they can hold on to a pointer to the returned object. If\n> this function is called outside of a transaction, the object remains\n> valid until smgrdestroy() or smgrdestroyall() is called. Background\n> processes that use smgr but not transactions typically do this once\n> per checkpoint cycle.\n\n+1\n\n> Apart from that, the main thing that is bothering me is that the\n> justification for redefining smgrclose() to do what smgrrelease() now\n> does isn't really spelled out anywhere. You mentioned some reasons and\n> Heikki mentioned the benefit to extensions, but I feel like somebody\n> should be able to understand the reasoning clearly from reading the\n> commit message or the comments in the patch, rather than having to\n> visit the mailing list discussion, and I'm not sure we're there yet. I\n> feel like I understood why we were doing this and was convinced it was\n> a good idea at some point, but now the reasoning has gone out of my\n> head and I can't recover it. If somebody does smgropen() .. stuff ...\n> smgrclose() as in heapam_relation_copy_data() or index_copy_data(),\n> this change has the effect of making the SmgrRelation remain valid\n> until eoxact whereas before it would have been destroyed instantly. Is\n> that what we want? Presumably yes, or this patch wouldn't be shaped\n> like this, but I don't know why we want that...\n\nFair. I tried to address that by adding an overview comment at top of \nsmgr.c, explaining how this stuff work. I hope that helps.\n\n> Another thing that seems a bit puzzling is how this is intended to, or\n> does, interact with the ownership mechanism. Intuitively, I feel like\n> a Relation owns an SMgrRelation *because* the SMgrRelation has no\n> defined lifetime. But I think that's not quite right. I guess the\n> ownership mechanism doesn't guarantee anything about the lifetime of\n> the object, just that the pointer in the Relation won't hang around\n> any longer than the object to which it's pointing. 
So does that mean\n> that we're free to redefine the object lifetime to be pretty much\n> anything we want and that mechanism doesn't really care? Huh,\n> interesting.\n\nYeah that owner mechanism is weird. It guarantees that the pointer to \nthe SMgrRelation is cleared when the SMgrRelation is destroyed. But it \nalso prevents the SMgrRelation from being destroyed at end of \ntransaction. That's how it is in 'master' too.\n\nBut with this patch, we don't normally call smgrdestroy() on an \nSMgrRelation that came from the relation cache. We do call \nsmgrdestroyall() in the aux processes, but they don't have a relcache. \nSo the real effect of setting the owner now is to prevent the \nSMgrRelation from being destroyed at end of transaction; the mechanism \nof clearing the pointer is unused.\n\nI found two exceptions to that, though, by adding some extra assertions \nand running the regression tests:\n\n1. The smgrdestroyall() in a single-user backend in RequestCheckpoint(). \nIt destroys SMgrRelations belonging to relcache entries, and the owner \nmechanism clears the pointers from the relcache. I think smgrcloseall(), \nor doing nothing, would actually be more appropriate there.\n\n2. A funny case with foreign tables: ANALYZE on a foreign table calls \nvisibilitymap_count(). A foreign table has no visibility map so it \nreturns 0, but before doing so it calls RelationGetSmgr on the foreign \ntable, which has 0/0/0 rellocator. That creates an SMgrRelation for \n0/0/0, and sets the foreign table's relcache entry as its owner. If you \nthen call ANALYZE on another foreign table, it also calls \nRelationGetSmgr with 0/0/0 rellocator, returning the same SMgrRelation \nentry, and changes its owner to the new relcache entry. That doesn't \nmake much sense and it's pretty accidental that it works at all, so \nattached is a patch to avoid calling visibilitymap_count() on foreign \ntables.\n\nI propose that we replace the single owner with a \"pin counter\". One \nSMgrRelation can have more than one pin on it, and the guarantee is that \nas long as the pin counter is non-zero, the SMgrRelation cannot be \ndestroyed and the pointer remains valid. We don't really need the \ncapability for more than one pin at the moment (the regression tests \npass with an extra assertion that pincount <= 1 after fixing the foreign \ntable issue), but it seems more straightforward than tracking an owner.\n\nHere's another reason to do that: I noticed this at the end of \nswap_relation_files():\n\n> \t/*\n> \t * Close both relcache entries' smgr links. We need this kluge because\n> \t * both links will be invalidated during upcoming CommandCounterIncrement.\n> \t * Whichever of the rels is the second to be cleared will have a dangling\n> \t * reference to the other's smgr entry. Rather than trying to avoid this\n> \t * by ordering operations just so, it's easiest to close the links first.\n> \t * (Fortunately, since one of the entries is local in our transaction,\n> \t * it's sufficient to clear out our own relcache this way; the problem\n> \t * cannot arise for other backends when they see our update on the\n> \t * non-transient relation.)\n> \t *\n> \t * Caution: the placement of this step interacts with the decision to\n> \t * handle toast rels by recursion. 
When we are trying to rebuild pg_class\n> \t * itself, the smgr close on pg_class must happen after all accesses in\n> \t * this function.\n> \t */\n> \tRelationCloseSmgrByOid(r1);\n> \tRelationCloseSmgrByOid(r2);\n\nIf RelationCloseSmgrByOid() merely closes the underlying file descriptor \nbut doesn't destroy the SMgrRelation object - as it does with these \npatches - I think we reintroduce the dangling reference problem that the \ncomment mentions. But if we allow the same SMgrRelation to be pinned by \ntwo different relcache entries, the problem goes away and we can remove \nthat kluge.\n\nI think we're missing test coverage for that though. I commented out \nthose calls in 'master' and ran the regression tests, but got no \nfailures. I don't fully understand the problem anyway. Or does it not \nexist anymore? Is there a moment where we have two relcache entries \npointing to the same SMgrRelation? I don't see it. In any case, with a \npin counter mechanism, I believe it would be fine.\n\n\nSummary of the changes to the attached main patch:\n\n* Added an overview comment at top of smgr.c\n\n* Added the comment Robert suggested to smgropen()\n\n* Replaced the single owner with a pin count and smgrpin() / smgrunpin() \nfunctions. smgrdestroyall() now only destroys unpinned entries\n\n* Removed that kluge from swap_relation_files(). It should not be needed \nanymore with the pin counter.\n\n* Changed a few places in bufmgr.c where we called RelationGetSmgr() on \nevery smgr call to keep the SMgrRelation in a local variable. That was \nnot safe before, but is now. I don't think we should go on a spree to \nchange all callers - RelationGetSmgr() is still cheap - but in these few \nplaces it seems worthwhile.\n\n* I kept the separate smgrclose() and smgrrelease() functions. I know I \nsuggested to just change smgrclose() to do what smgrrelease() did, but \non second thoughts keeping them separate seems nicer. However, \nsgmgrclose() just calls smgrrelease() now, so the distinction is just \npro forma. The idea is that if you call smgrclose(), you hint that you \ndon't need that SMgrRelation pointer anymore, although there might be \nother pointers to the same object and they stay valid.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)",
"msg_date": "Wed, 29 Nov 2023 14:41:57 +0200",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Extending SMgrRelation lifetimes"
},
{
"msg_contents": "On 29/11/2023 14:41, Heikki Linnakangas wrote:\n> 2. A funny case with foreign tables: ANALYZE on a foreign table calls\n> visibilitymap_count(). A foreign table has no visibility map so it\n> returns 0, but before doing so it calls RelationGetSmgr on the foreign\n> table, which has 0/0/0 rellocator. That creates an SMgrRelation for\n> 0/0/0, and sets the foreign table's relcache entry as its owner. If you\n> then call ANALYZE on another foreign table, it also calls\n> RelationGetSmgr with 0/0/0 rellocator, returning the same SMgrRelation\n> entry, and changes its owner to the new relcache entry. That doesn't\n> make much sense and it's pretty accidental that it works at all, so\n> attached is a patch to avoid calling visibilitymap_count() on foreign\n> tables.\n\nThis patch seems uncontroversial and independent of the others, so I \ncommitted it.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Fri, 8 Dec 2023 09:20:26 +0200",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Extending SMgrRelation lifetimes"
},
{
"msg_contents": "On Wed, Nov 29, 2023 at 1:42 PM Heikki Linnakangas <[email protected]> wrote:\n> I spent some more time digging into this, experimenting with different\n> approaches. Came up with pretty significant changes; see below:\n\nHi Heikki,\n\nI think this approach is good. As I wrote in the first email, I had\nbriefly considered reference counting, but at the time I figured there\nwasn't much point if it's only ever going to be 0 or 1, so I was\ntrying to find the smallest change. But as you explained, there is\nalready an interesting case where it goes to 2, and modelling it that\nway removes a weird hack, so it's a net improvement over the unusual\n'owner' concept. +1 for your version. Are there any further tidying\nor other improvements you want to make?\n\nTypos in comments:\n\ns/desroyed/destroyed/\ns/receiveing/receiving/\n\n\n",
"msg_date": "Wed, 31 Jan 2024 09:54:58 +0100",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Extending SMgrRelation lifetimes"
},
{
"msg_contents": "On 31/01/2024 10:54, Thomas Munro wrote:\n> On Wed, Nov 29, 2023 at 1:42 PM Heikki Linnakangas <[email protected]> wrote:\n>> I spent some more time digging into this, experimenting with different\n>> approaches. Came up with pretty significant changes; see below:\n> \n> Hi Heikki,\n> \n> I think this approach is good. As I wrote in the first email, I had\n> briefly considered reference counting, but at the time I figured there\n> wasn't much point if it's only ever going to be 0 or 1, so I was\n> trying to find the smallest change. But as you explained, there is\n> already an interesting case where it goes to 2, and modelling it that\n> way removes a weird hack, so it's a net improvement over the unusual\n> 'owner' concept. +1 for your version. Are there any further tidying\n> or other improvements you want to make?\n\nOk, no, this is good to go then. I'll rebase, fix the typos, run the \nregression tests again, and push this shortly. Thanks!\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Wed, 31 Jan 2024 12:37:22 +0200",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Extending SMgrRelation lifetimes"
}
] |
[
{
"msg_contents": "Hi\n\nNow, there is no native functionality for conversion from json(b) value to\nsome array.\n\nhttps://stackoverflow.com/questions/76894960/unable-to-assign-text-value-to-variable-in-pgsql/76896112#76896112\n\nIt should not be too hard to implement native function jsonb_populate_array\n\njsonb_populate_array(anyarray, jsonb) returns anyarray\n\nUsage:\n\nselect jsonb_populate_array(null::text[], '[\"cust_full_name\",\"cust_email\"]')\n\nComments, notes?\n\nRegards\n\nPavel\n\nHiNow, there is no native functionality for conversion from json(b) value to some array.https://stackoverflow.com/questions/76894960/unable-to-assign-text-value-to-variable-in-pgsql/76896112#76896112It should not be too hard to implement native function jsonb_populate_arrayjsonb_populate_array(anyarray, jsonb) returns anyarrayUsage:select jsonb_populate_array(null::text[], '[\"cust_full_name\",\"cust_email\"]')Comments, notes?RegardsPavel",
"msg_date": "Mon, 14 Aug 2023 05:51:57 +0200",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": true,
"msg_subject": "proposal: jsonb_populate_array"
},
{
"msg_contents": "On 2023-Aug-14, Pavel Stehule wrote:\n\n> jsonb_populate_array(anyarray, jsonb) returns anyarray\n> \n> Usage:\n> \n> select jsonb_populate_array(null::text[], '[\"cust_full_name\",\"cust_email\"]')\n\nI don't understand what this does. Can you be more explicit?\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\nMaybe there's lots of data loss but the records of data loss are also lost.\n(Lincoln Yeoh)\n\n\n",
"msg_date": "Mon, 14 Aug 2023 11:32:24 +0200",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: proposal: jsonb_populate_array"
},
{
"msg_contents": "po 14. 8. 2023 v 11:32 odesílatel Alvaro Herrera <[email protected]>\nnapsal:\n\n> On 2023-Aug-14, Pavel Stehule wrote:\n>\n> > jsonb_populate_array(anyarray, jsonb) returns anyarray\n> >\n> > Usage:\n> >\n> > select jsonb_populate_array(null::text[],\n> '[\"cust_full_name\",\"cust_email\"]')\n>\n> I don't understand what this does. Can you be more explicit?\n>\n\nexample\n\n'[\"2023-07-13\",\"2023-07-14\"]'::jsonb --> {2023-07-13,2023-07-14}::date[]\n\nNow, I have to transform to table, casting, and back transformation to\narray, and I cannot to write generic function. I can run just \"slow\" query\n\nselect array_agg(value::date) from\njsonb_array_elements_text('[\"2023-07-13\",\"2023-07-14\"]'::jsonb);\n\nwith proposed function I can write\n\nselect jsonb_populate_array(null:date[],\n'[\"2023-07-13\",\"2023-07-14\"]'::jsonb)\n\nRegards\n\nPavel\n\n\n\n> --\n> Álvaro Herrera 48°01'N 7°57'E —\n> https://www.EnterpriseDB.com/\n> Maybe there's lots of data loss but the records of data loss are also lost.\n> (Lincoln Yeoh)\n>\n\npo 14. 8. 2023 v 11:32 odesílatel Alvaro Herrera <[email protected]> napsal:On 2023-Aug-14, Pavel Stehule wrote:\n\n> jsonb_populate_array(anyarray, jsonb) returns anyarray\n> \n> Usage:\n> \n> select jsonb_populate_array(null::text[], '[\"cust_full_name\",\"cust_email\"]')\n\nI don't understand what this does. Can you be more explicit?example'[\"2023-07-13\",\"2023-07-14\"]'::jsonb --> {2023-07-13,2023-07-14}::date[]Now, I have to transform to table, casting, and back transformation to array, and I cannot to write generic function. I can run just \"slow\" queryselect array_agg(value::date) from jsonb_array_elements_text('[\"2023-07-13\",\"2023-07-14\"]'::jsonb);with proposed function I can writeselect jsonb_populate_array(null:date[], '[\"2023-07-13\",\"2023-07-14\"]'::jsonb)RegardsPavel\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\nMaybe there's lots of data loss but the records of data loss are also lost.\n(Lincoln Yeoh)",
"msg_date": "Mon, 14 Aug 2023 14:51:43 +0200",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: proposal: jsonb_populate_array"
},
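A generic emulation of the proposed jsonb_populate_array(anyarray, jsonb) can be sketched in PL/pgSQL on top of the jsonb_array_elements_text() workaround shown in the message above. This is only an illustration of the behaviour under discussion, not the C implementation proposed in the thread; the dynamic cast through pg_typeof() assumes a plainly named element type.

CREATE FUNCTION jsonb_populate_array(base anyarray, js jsonb)
RETURNS anyarray
LANGUAGE plpgsql
AS $$
DECLARE
    result base%TYPE;   -- resolves to the caller's actual array type
BEGIN
    -- aggregate the elements as text[] and cast the whole array to the caller's type
    EXECUTE format(
        'SELECT array_agg(value)::%s FROM jsonb_array_elements_text($1)',
        pg_typeof(base))
    INTO result
    USING js;
    RETURN result;
END
$$;

-- usage mirroring the example above
SELECT jsonb_populate_array(null::date[], '["2023-07-13","2023-07-14"]'::jsonb);
--> {2023-07-13,2023-07-14}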
{
"msg_contents": "Op 8/14/23 om 14:51 schreef Pavel Stehule:> po 14. 8. 2023 v 11:32 \nodesílatel Alvaro Herrera <[email protected]>\n > with proposed function I can write\n >\n > select jsonb_populate_array(null:date[],\n > '[\"2023-07-13\",\"2023-07-14\"]'::jsonb)\n >\nNot yet committed, but outstanding\nSQL/JSON patches (v11) will let you do:\n\nselect json_query(\n '[\"2023-07-13\", \"2023-07-14\"]'::jsonb\n , '$' returning date[]\n);\n json_query\n-------------------------\n {2023-07-13,2023-07-14}\n(1 row)\n\nThat's (more or less) what you want, no?\n\nLet's hope it gets submitted 17-ish, anyway\n\nErik\n\n\n\n\n\n\n\n",
"msg_date": "Mon, 14 Aug 2023 15:11:27 +0200",
"msg_from": "Erik Rijkers <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: proposal: jsonb_populate_array"
},
{
"msg_contents": "po 14. 8. 2023 v 15:09 odesílatel Erik Rijkers <[email protected]> napsal:\n\n> Op 8/14/23 om 14:51 schreef Pavel Stehule:> po 14. 8. 2023 v 11:32\n> odesílatel Alvaro Herrera <[email protected]>\n> > with proposed function I can write\n> >\n> > select jsonb_populate_array(null:date[],\n> > '[\"2023-07-13\",\"2023-07-14\"]'::jsonb)\n> >\n> Not yet committed, but outstanding\n> SQL/JSON patches (v11) will let you do:\n>\n> select json_query(\n> '[\"2023-07-13\", \"2023-07-14\"]'::jsonb\n> , '$' returning date[]\n> );\n> json_query\n> -------------------------\n> {2023-07-13,2023-07-14}\n> (1 row)\n>\n> That's (more or less) what you want, no?\n>\n\nYes, the functionality is exactly the same, but still maybe for completeness\nthe function json_populate_array can be nice.\n\nIn old API the transformations between json and row/record types is well\ncovered, but for array, only direction array->json is covered\n\nI think so this can be +/- 40 lines of C code\n\n\n\n\n\n> Let's hope it gets submitted 17-ish, anyway\n>\n> Erik\n>\n>\n>\n>\n>\n>\n\npo 14. 8. 2023 v 15:09 odesílatel Erik Rijkers <[email protected]> napsal:Op 8/14/23 om 14:51 schreef Pavel Stehule:> po 14. 8. 2023 v 11:32 \nodesílatel Alvaro Herrera <[email protected]>\n > with proposed function I can write\n >\n > select jsonb_populate_array(null:date[],\n > '[\"2023-07-13\",\"2023-07-14\"]'::jsonb)\n >\nNot yet committed, but outstanding\nSQL/JSON patches (v11) will let you do:\n\nselect json_query(\n '[\"2023-07-13\", \"2023-07-14\"]'::jsonb\n , '$' returning date[]\n);\n json_query\n-------------------------\n {2023-07-13,2023-07-14}\n(1 row)\n\nThat's (more or less) what you want, no?Yes, the functionality is exactly the same, but still maybe for completeness the function json_populate_array can be nice.In old API the transformations between json and row/record types is well covered, but for array, only direction array->json is coveredI think so this can be +/- 40 lines of C code\n\nLet's hope it gets submitted 17-ish, anyway\n\nErik",
"msg_date": "Mon, 14 Aug 2023 15:37:04 +0200",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: proposal: jsonb_populate_array"
},
{
"msg_contents": "On 2023-08-14 09:11, Erik Rijkers wrote:\n> , '$' returning date[]\n\nI certainly like that syntax better.\n\nIt's not that the \"here's a null to tell you the type I want\"\nis terribly unclear, but it seems not to be an idiom I have\nseen a lot of in PostgreSQL before now. Are there other places\nit's currently used that I've overlooked?\n\nRegards,\n-Chap\n\n\n",
"msg_date": "Mon, 14 Aug 2023 09:47:27 -0400",
"msg_from": "Chapman Flack <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: proposal: jsonb_populate_array"
},
{
"msg_contents": "po 14. 8. 2023 v 15:47 odesílatel Chapman Flack <[email protected]>\nnapsal:\n\n> On 2023-08-14 09:11, Erik Rijkers wrote:\n> > , '$' returning date[]\n>\n> I certainly like that syntax better.\n>\n> It's not that the \"here's a null to tell you the type I want\"\n> is terribly unclear, but it seems not to be an idiom I have\n> seen a lot of in PostgreSQL before now. Are there other places\n> it's currently used that I've overlooked?\n>\n\nIt is used only for hstore, json, jsonb function if I remember correctly.\n\nI dislike this idiom too, but SQL cannot use type as parameter. I proposed\nanytype polymorphic pseudotype so instead\n\nfx(null::int, ...) you can write (theoretically) fx('int', ...), but it\ndoesn't look too much better. For composite functions we can dynamically to\nspecify structure as SELECT FROM fx(...) AS (a int, b int), but it cannot\nbe used for scalar functions and cannot be used for static composite types.\n\nI can imagine some special syntax of CAST, that can push type to inside\nfunction, and allows to us to write functions like fx(non polymorphic\ntypes) RETURNS any\n\nfor proposed functionality it can look like SELECT\nCAST(json_populate_array('[]'::jsonb) AS date[])\n\n\n\n\n> Regards,\n> -Chap\n>\n\npo 14. 8. 2023 v 15:47 odesílatel Chapman Flack <[email protected]> napsal:On 2023-08-14 09:11, Erik Rijkers wrote:\n> , '$' returning date[]\n\nI certainly like that syntax better.\n\nIt's not that the \"here's a null to tell you the type I want\"\nis terribly unclear, but it seems not to be an idiom I have\nseen a lot of in PostgreSQL before now. Are there other places\nit's currently used that I've overlooked?It is used only for hstore, json, jsonb function if I remember correctly. I dislike this idiom too, but SQL cannot use type as parameter. I proposed anytype polymorphic pseudotype so insteadfx(null::int, ...) you can write (theoretically) fx('int', ...), but it doesn't look too much better. For composite functions we can dynamically to specify structure as SELECT FROM fx(...) AS (a int, b int), but it cannot be used for scalar functions and cannot be used for static composite types. I can imagine some special syntax of CAST, that can push type to inside function, and allows to us to write functions like fx(non polymorphic types) RETURNS anyfor proposed functionality it can look like SELECT CAST(json_populate_array('[]'::jsonb) AS date[])\n\nRegards,\n-Chap",
"msg_date": "Mon, 14 Aug 2023 16:26:35 +0200",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: proposal: jsonb_populate_array"
},
{
"msg_contents": "\\df jsonb_populate_record\n List of functions\n Schema | Name | Result data type | Argument data\ntypes | Type\n------------+-----------------------+------------------+---------------------+------\n pg_catalog | jsonb_populate_record | anyelement | anyelement,\njsonb | func\n(1 row)\n\nmanual:\n> anyelement Indicates that a function accepts any data type.\n> For the “simple” family of polymorphic types, the matching and deduction rules work like this:\n> Each position (either argument or return value) declared as anyelement is allowed to have any specific actual data type, but in any given call they must all be the same actual type.\n\nSo jsonb_populate_record signature can handle cases like\njsonb_populate_record(anyarray, jsonb)? obviously this is a cast, it\nmay fail.\nalso if input is anyarray, so the output anyarray will have the same\nbase type as input anyarray.\n\n\n",
"msg_date": "Tue, 15 Aug 2023 11:12:16 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: proposal: jsonb_populate_array"
},
{
"msg_contents": "On 8/14/23 15:47, Chapman Flack wrote:\n> On 2023-08-14 09:11, Erik Rijkers wrote:\n>> , '$' returning date[]\n> \n> I certainly like that syntax better.\n> \n> It's not that the \"here's a null to tell you the type I want\"\n> is terribly unclear, but it seems not to be an idiom I have\n> seen a lot of in PostgreSQL before now. Are there other places\n> it's currently used that I've overlooked?\n\nIt has been used since forever in polymorphic aggregate final functions. \n I don't mind it there, but I do not like it in general user-facing \nfunctions.\n\nhttps://www.postgresql.org/docs/current/sql-createaggregate.html\n-- \nVik Fearing\n\n\n\n",
"msg_date": "Tue, 15 Aug 2023 07:44:36 +0200",
"msg_from": "Vik Fearing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: proposal: jsonb_populate_array"
},
{
"msg_contents": "On 8/14/23 15:37, Pavel Stehule wrote:\n> po 14. 8. 2023 v 15:09 odesílatel Erik Rijkers <[email protected]> napsal:\n> \n>> Op 8/14/23 om 14:51 schreef Pavel Stehule:> po 14. 8. 2023 v 11:32\n>> odesílatel Alvaro Herrera <[email protected]>\n>> > with proposed function I can write\n>> >\n>> > select jsonb_populate_array(null:date[],\n>> > '[\"2023-07-13\",\"2023-07-14\"]'::jsonb)\n>> >\n>> Not yet committed, but outstanding\n>> SQL/JSON patches (v11) will let you do:\n>>\n>> select json_query(\n>> '[\"2023-07-13\", \"2023-07-14\"]'::jsonb\n>> , '$' returning date[]\n>> );\n>> json_query\n>> -------------------------\n>> {2023-07-13,2023-07-14}\n>> (1 row)\n>>\n>> That's (more or less) what you want, no?\n>>\n> \n> Yes, the functionality is exactly the same, but still maybe for completeness\n> the function json_populate_array can be nice.\n> \n> In old API the transformations between json and row/record types is well\n> covered, but for array, only direction array->json is covered\n\nI don't think we should be extending the old API when there are Standard \nways of doing the same thing. In fact, I would like to see the old way \nslowly be deprecated.\n\n> I think so this can be +/- 40 lines of C code\n\nIt seems to me like a good candidate for an extension.\n-- \nVik Fearing\n\n\n\n",
"msg_date": "Tue, 15 Aug 2023 07:48:40 +0200",
"msg_from": "Vik Fearing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: proposal: jsonb_populate_array"
},
{
"msg_contents": "út 15. 8. 2023 v 7:48 odesílatel Vik Fearing <[email protected]>\nnapsal:\n\n> On 8/14/23 15:37, Pavel Stehule wrote:\n> > po 14. 8. 2023 v 15:09 odesílatel Erik Rijkers <[email protected]> napsal:\n> >\n> >> Op 8/14/23 om 14:51 schreef Pavel Stehule:> po 14. 8. 2023 v 11:32\n> >> odesílatel Alvaro Herrera <[email protected]>\n> >> > with proposed function I can write\n> >> >\n> >> > select jsonb_populate_array(null:date[],\n> >> > '[\"2023-07-13\",\"2023-07-14\"]'::jsonb)\n> >> >\n> >> Not yet committed, but outstanding\n> >> SQL/JSON patches (v11) will let you do:\n> >>\n> >> select json_query(\n> >> '[\"2023-07-13\", \"2023-07-14\"]'::jsonb\n> >> , '$' returning date[]\n> >> );\n> >> json_query\n> >> -------------------------\n> >> {2023-07-13,2023-07-14}\n> >> (1 row)\n> >>\n> >> That's (more or less) what you want, no?\n> >>\n> >\n> > Yes, the functionality is exactly the same, but still maybe for\n> completeness\n> > the function json_populate_array can be nice.\n> >\n> > In old API the transformations between json and row/record types is well\n> > covered, but for array, only direction array->json is covered\n>\n> I don't think we should be extending the old API when there are Standard\n> ways of doing the same thing. In fact, I would like to see the old way\n> slowly be deprecated.\n>\n> > I think so this can be +/- 40 lines of C code\n>\n> It seems to me like a good candidate for an extension.\n>\n\nUnfortunately, these small extensions have zero chance to be available for\nusers that use some cloud postgres.\n\n\n\n> --\n> Vik Fearing\n>\n>\n\nút 15. 8. 2023 v 7:48 odesílatel Vik Fearing <[email protected]> napsal:On 8/14/23 15:37, Pavel Stehule wrote:\n> po 14. 8. 2023 v 15:09 odesílatel Erik Rijkers <[email protected]> napsal:\n> \n>> Op 8/14/23 om 14:51 schreef Pavel Stehule:> po 14. 8. 2023 v 11:32\n>> odesílatel Alvaro Herrera <[email protected]>\n>> > with proposed function I can write\n>> >\n>> > select jsonb_populate_array(null:date[],\n>> > '[\"2023-07-13\",\"2023-07-14\"]'::jsonb)\n>> >\n>> Not yet committed, but outstanding\n>> SQL/JSON patches (v11) will let you do:\n>>\n>> select json_query(\n>> '[\"2023-07-13\", \"2023-07-14\"]'::jsonb\n>> , '$' returning date[]\n>> );\n>> json_query\n>> -------------------------\n>> {2023-07-13,2023-07-14}\n>> (1 row)\n>>\n>> That's (more or less) what you want, no?\n>>\n> \n> Yes, the functionality is exactly the same, but still maybe for completeness\n> the function json_populate_array can be nice.\n> \n> In old API the transformations between json and row/record types is well\n> covered, but for array, only direction array->json is covered\n\nI don't think we should be extending the old API when there are Standard \nways of doing the same thing. In fact, I would like to see the old way \nslowly be deprecated.\n\n> I think so this can be +/- 40 lines of C code\n\nIt seems to me like a good candidate for an extension.Unfortunately, these small extensions have zero chance to be available for users that use some cloud postgres. \n-- \nVik Fearing",
"msg_date": "Tue, 15 Aug 2023 07:53:33 +0200",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: proposal: jsonb_populate_array"
},
{
"msg_contents": "út 15. 8. 2023 v 5:12 odesílatel jian he <[email protected]>\nnapsal:\n\n> \\df jsonb_populate_record\n> List of functions\n> Schema | Name | Result data type | Argument data\n> types | Type\n>\n> ------------+-----------------------+------------------+---------------------+------\n> pg_catalog | jsonb_populate_record | anyelement | anyelement,\n> jsonb | func\n> (1 row)\n>\n> manual:\n> > anyelement Indicates that a function accepts any data type.\n> > For the “simple” family of polymorphic types, the matching and deduction\n> rules work like this:\n> > Each position (either argument or return value) declared as anyelement\n> is allowed to have any specific actual data type, but in any given call\n> they must all be the same actual type.\n>\n> So jsonb_populate_record signature can handle cases like\n> jsonb_populate_record(anyarray, jsonb)? obviously this is a cast, it\n> may fail.\n> also if input is anyarray, so the output anyarray will have the same\n> base type as input anyarray.\n>\n\nIt fails (what is expected - else be too strange to use function in name\n\"record\" for arrays)\n\n (2023-08-15 07:57:40) postgres=# select\njsonb_populate_record(null::varchar[], '[1,2,3]');\nERROR: first argument of jsonb_populate_record must be a row type\n\nregards\n\nPavel\n\nút 15. 8. 2023 v 5:12 odesílatel jian he <[email protected]> napsal:\\df jsonb_populate_record\n List of functions\n Schema | Name | Result data type | Argument data\ntypes | Type\n------------+-----------------------+------------------+---------------------+------\n pg_catalog | jsonb_populate_record | anyelement | anyelement,\njsonb | func\n(1 row)\n\nmanual:\n> anyelement Indicates that a function accepts any data type.\n> For the “simple” family of polymorphic types, the matching and deduction rules work like this:\n> Each position (either argument or return value) declared as anyelement is allowed to have any specific actual data type, but in any given call they must all be the same actual type.\n\nSo jsonb_populate_record signature can handle cases like\njsonb_populate_record(anyarray, jsonb)? obviously this is a cast, it\nmay fail.\nalso if input is anyarray, so the output anyarray will have the same\nbase type as input anyarray.It fails (what is expected - else be too strange to use function in name \"record\" for arrays) (2023-08-15 07:57:40) postgres=# select jsonb_populate_record(null::varchar[], '[1,2,3]');ERROR: first argument of jsonb_populate_record must be a row typeregardsPavel",
"msg_date": "Tue, 15 Aug 2023 07:59:05 +0200",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: proposal: jsonb_populate_array"
},
{
"msg_contents": "On 8/15/23 07:53, Pavel Stehule wrote:\n> út 15. 8. 2023 v 7:48 odesílatel Vik Fearing <[email protected]>\n> napsal:\n> \n>> On 8/14/23 15:37, Pavel Stehule wrote:\n>>> po 14. 8. 2023 v 15:09 odesílatel Erik Rijkers <[email protected]> napsal:\n>>>\n>>> I think so this can be +/- 40 lines of C code\n>>\n>> It seems to me like a good candidate for an extension.\n> \n> Unfortunately, these small extensions have zero chance to be available for\n> users that use some cloud postgres.\n\nThen those people can use the Standard SQL syntax. I am strongly \nagainst polluting PostgreSQL because of what third party vendors do and \ndo not allow on their platforms.\n-- \nVik Fearing\n\n\n\n",
"msg_date": "Tue, 15 Aug 2023 08:04:54 +0200",
"msg_from": "Vik Fearing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: proposal: jsonb_populate_array"
},
{
"msg_contents": "út 15. 8. 2023 v 8:04 odesílatel Vik Fearing <[email protected]>\nnapsal:\n\n> On 8/15/23 07:53, Pavel Stehule wrote:\n> > út 15. 8. 2023 v 7:48 odesílatel Vik Fearing <[email protected]>\n> > napsal:\n> >\n> >> On 8/14/23 15:37, Pavel Stehule wrote:\n> >>> po 14. 8. 2023 v 15:09 odesílatel Erik Rijkers <[email protected]> napsal:\n> >>>\n> >>> I think so this can be +/- 40 lines of C code\n> >>\n> >> It seems to me like a good candidate for an extension.\n> >\n> > Unfortunately, these small extensions have zero chance to be available\n> for\n> > users that use some cloud postgres.\n>\n> Then those people can use the Standard SQL syntax. I am strongly\n> against polluting PostgreSQL because of what third party vendors do and\n> do not allow on their platforms.\n>\n\nok\n\n\n> --\n> Vik Fearing\n>\n>\n\nút 15. 8. 2023 v 8:04 odesílatel Vik Fearing <[email protected]> napsal:On 8/15/23 07:53, Pavel Stehule wrote:\n> út 15. 8. 2023 v 7:48 odesílatel Vik Fearing <[email protected]>\n> napsal:\n> \n>> On 8/14/23 15:37, Pavel Stehule wrote:\n>>> po 14. 8. 2023 v 15:09 odesílatel Erik Rijkers <[email protected]> napsal:\n>>>\n>>> I think so this can be +/- 40 lines of C code\n>>\n>> It seems to me like a good candidate for an extension.\n> \n> Unfortunately, these small extensions have zero chance to be available for\n> users that use some cloud postgres.\n\nThen those people can use the Standard SQL syntax. I am strongly \nagainst polluting PostgreSQL because of what third party vendors do and \ndo not allow on their platforms.ok \n-- \nVik Fearing",
"msg_date": "Tue, 15 Aug 2023 08:26:29 +0200",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: proposal: jsonb_populate_array"
}
] |
[
{
"msg_contents": "\nHi, hackers\n\nI find when I compile PG 14 with --with-icu, collate.icu.utf8 and foreign_data\nregression tests will failed. However it is OK on REL_15_STABLE and master.\nI also test this on REL_13_STABLE, and it also failed. Here is the regression\ndiffs.\n\ndiff -U3 /home/japin/Codes/postgres/build/../src/test/regress/expected/collate.icu.utf8.out /home/japin/Codes/postgres/build/src/test/regress/results/collate.icu.utf8.out\n--- /home/japin/Codes/postgres/build/../src/test/regress/expected/collate.icu.utf8.out\t2023-08-14 17:37:31.960448245 +0800\n+++ /home/japin/Codes/postgres/build/src/test/regress/results/collate.icu.utf8.out\t2023-08-14 21:30:44.335214886 +0800\n@@ -1035,6 +1035,9 @@\n quote_literal(current_setting('lc_ctype')) || ');';\n END\n $$;\n+ERROR: collations with different collate and ctype values are not supported by ICU\n+CONTEXT: SQL statement \"CREATE COLLATION test1 (provider = icu, lc_collate = 'C.UTF-8', lc_ctype = 'en_US.UTF-8');\"\n+PL/pgSQL function inline_code_block line 3 at EXECUTE\n CREATE COLLATION test3 (provider = icu, lc_collate = 'en_US.utf8'); -- fail, need lc_ctype\n ERROR: parameter \"lc_ctype\" must be specified\n CREATE COLLATION testx (provider = icu, locale = 'nonsense'); /* never fails with ICU */ DROP COLLATION testx;\n@@ -1045,13 +1048,12 @@\n collname \n ----------\n test0\n- test1\n test5\n-(3 rows)\n+(2 rows)\n \n ALTER COLLATION test1 RENAME TO test11;\n+ERROR: collation \"test1\" for encoding \"UTF8\" does not exist\n ALTER COLLATION test0 RENAME TO test11; -- fail\n-ERROR: collation \"test11\" already exists in schema \"collate_tests\"\n ALTER COLLATION test1 RENAME TO test22; -- fail\n ERROR: collation \"test1\" for encoding \"UTF8\" does not exist\n ALTER COLLATION test11 OWNER TO regress_test_role;\n@@ -1059,18 +1061,19 @@\n ERROR: role \"nonsense\" does not exist\n ALTER COLLATION test11 SET SCHEMA test_schema;\n COMMENT ON COLLATION test0 IS 'US English';\n+ERROR: collation \"test0\" for encoding \"UTF8\" does not exist\n SELECT collname, nspname, obj_description(pg_collation.oid, 'pg_collation')\n FROM pg_collation JOIN pg_namespace ON (collnamespace = pg_namespace.oid)\n WHERE collname LIKE 'test%'\n ORDER BY 1;\n collname | nspname | obj_description \n ----------+---------------+-----------------\n- test0 | collate_tests | US English\n test11 | test_schema | \n test5 | collate_tests | \n-(3 rows)\n+(2 rows)\n \n DROP COLLATION test0, test_schema.test11, test5;\n+ERROR: collation \"test0\" for encoding \"UTF8\" does not exist\n DROP COLLATION test0; -- fail\n ERROR: collation \"test0\" for encoding \"UTF8\" does not exist\n DROP COLLATION IF EXISTS test0;\n@@ -1078,10 +1081,17 @@\n SELECT collname FROM pg_collation WHERE collname LIKE 'test%';\n collname \n ----------\n-(0 rows)\n+ test11\n+ test5\n+(2 rows)\n \n DROP SCHEMA test_schema;\n+ERROR: cannot drop schema test_schema because other objects depend on it\n+DETAIL: collation test_schema.test11 depends on schema test_schema\n+HINT: Use DROP ... 
CASCADE to drop the dependent objects too.\n DROP ROLE regress_test_role;\n+ERROR: role \"regress_test_role\" cannot be dropped because some objects depend on it\n+DETAIL: owner of collation test_schema.test11\n -- ALTER\n ALTER COLLATION \"en-x-icu\" REFRESH VERSION;\n NOTICE: version has not changed\ndiff -U3 /home/japin/Codes/postgres/build/../src/test/regress/expected/foreign_data.out /home/japin/Codes/postgres/build/src/test/regress/results/foreign_data.out\n--- /home/japin/Codes/postgres/build/../src/test/regress/expected/foreign_data.out\t2023-08-14 17:37:31.964448260 +0800\n+++ /home/japin/Codes/postgres/build/src/test/regress/results/foreign_data.out\t2023-08-14 21:30:55.571170376 +0800\n@@ -5,10 +5,13 @@\n -- Suppress NOTICE messages when roles don't exist\n SET client_min_messages TO 'warning';\n DROP ROLE IF EXISTS regress_foreign_data_user, regress_test_role, regress_test_role2, regress_test_role_super, regress_test_indirect, regress_unprivileged_role;\n+ERROR: role \"regress_test_role\" cannot be dropped because some objects depend on it\n+DETAIL: owner of collation test_schema.test11\n RESET client_min_messages;\n CREATE ROLE regress_foreign_data_user LOGIN SUPERUSER;\n SET SESSION AUTHORIZATION 'regress_foreign_data_user';\n CREATE ROLE regress_test_role;\n+ERROR: role \"regress_test_role\" already exists\n CREATE ROLE regress_test_role2;\n CREATE ROLE regress_test_role_super SUPERUSER;\n CREATE ROLE regress_test_indirect;\n\nIs it a bug that fixed in REL_15_STABLE? If yes, why not backpatch?\n\n\n-- \nRegrads,\nJapin Li\n\n\n",
"msg_date": "Mon, 14 Aug 2023 23:33:05 +0800",
"msg_from": "Japin Li <[email protected]>",
"msg_from_op": true,
"msg_subject": "Regression test collate.icu.utf8 failed on REL_14_STABLE"
},
{
"msg_contents": ">\n>\n>\n> DROP SCHEMA test_schema;\n> +ERROR: cannot drop schema test_schema because other objects depend on it\n> +DETAIL: collation test_schema.test11 depends on schema test_schema\n> +HINT: Use DROP ... CASCADE to drop the dependent objects too.\n> DROP ROLE regress_test_role;\n> +ERROR: role \"regress_test_role\" cannot be dropped because some objects\n> depend on it\n> +DETAIL: owner of collation test_schema.test11\n> +ERROR: role \"regress_test_role\" cannot be dropped because some objects\n> depend on it\n> +DETAIL: owner of collation test_schema.test11\n>\n> +ERROR: role \"regress_test_role\" already exists\n>\n>\n>\n>\nDid you run 'make installcheck' rather than 'make check' and there\nwas a failure before this round of test? This looks to me that there\nare some objects are not cleaned well before this run. you can try\n'make installcheck' with a pretty clean setup or run 'make check'\ndirectly to verify this.\n\n-- \nBest Regards\nAndy Fan\n\n\n DROP SCHEMA test_schema;\n+ERROR: cannot drop schema test_schema because other objects depend on it\n+DETAIL: collation test_schema.test11 depends on schema test_schema\n+HINT: Use DROP ... CASCADE to drop the dependent objects too.\n DROP ROLE regress_test_role;\n+ERROR: role \"regress_test_role\" cannot be dropped because some objects depend on it\n+DETAIL: owner of collation test_schema.test11\n+ERROR: role \"regress_test_role\" cannot be dropped because some objects depend on it\n+DETAIL: owner of collation test_schema.test11\n+ERROR: role \"regress_test_role\" already exists\nDid you run 'make installcheck' rather than 'make check' and therewas a failure before this round of test? This looks to me that thereare some objects are not cleaned well before this run. you can try'make installcheck' with a pretty clean setup or run 'make check'directly to verify this.-- Best RegardsAndy Fan",
"msg_date": "Tue, 15 Aug 2023 08:49:44 +0800",
"msg_from": "Andy Fan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Regression test collate.icu.utf8 failed on REL_14_STABLE"
},
{
"msg_contents": "\nOn Tue, 15 Aug 2023 at 08:49, Andy Fan <[email protected]> wrote:\n>>\n>>\n>>\n>> DROP SCHEMA test_schema;\n>> +ERROR: cannot drop schema test_schema because other objects depend on it\n>> +DETAIL: collation test_schema.test11 depends on schema test_schema\n>> +HINT: Use DROP ... CASCADE to drop the dependent objects too.\n>> DROP ROLE regress_test_role;\n>> +ERROR: role \"regress_test_role\" cannot be dropped because some objects\n>> depend on it\n>> +DETAIL: owner of collation test_schema.test11\n>> +ERROR: role \"regress_test_role\" cannot be dropped because some objects\n>> depend on it\n>> +DETAIL: owner of collation test_schema.test11\n>>\n>> +ERROR: role \"regress_test_role\" already exists\n>>\n>>\n>>\n>>\n> Did you run 'make installcheck' rather than 'make check' and there\n> was a failure before this round of test? This looks to me that there\n> are some objects are not cleaned well before this run. you can try\n> 'make installcheck' with a pretty clean setup or run 'make check'\n> directly to verify this.\n\nI used `make check` and cleanup the entire build directory. Here is my\ncompile & build script.\n\n$ cat compile.sh\n#!/bin/bash\n\nset -e\nrm -rf $(ls -I '*.sh')\n\n../configure \\\n --prefix=$PWD/pg \\\n --enable-tap-tests \\\n --enable-debug \\\n --enable-cassert \\\n --enable-depend \\\n --enable-dtrace \\\n --with-icu \\\n --with-llvm \\\n --with-openssl \\\n --with-python \\\n --with-libxml \\\n --with-libxslt \\\n --with-lz4 \\\n --with-pam \\\n CFLAGS='-O0 -Wmissing-prototypes -Wincompatible-pointer-types' \\\n >configure.log 2>&1\nmake -j $(nproc) -s && make install -s\n(cd contrib/ && make -j $(nproc) -s && make install -s)\n\n-- \nRegrads,\nJapin Li\n\n\n",
"msg_date": "Tue, 15 Aug 2023 08:54:51 +0800",
"msg_from": "Japin Li <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Regression test collate.icu.utf8 failed on REL_14_STABLE"
},
{
"msg_contents": "\nOn Tue, 15 Aug 2023 at 08:54, Japin Li <[email protected]> wrote:\n> On Tue, 15 Aug 2023 at 08:49, Andy Fan <[email protected]> wrote:\n>>>\n>>>\n>>>\n>>> DROP SCHEMA test_schema;\n>>> +ERROR: cannot drop schema test_schema because other objects depend on it\n>>> +DETAIL: collation test_schema.test11 depends on schema test_schema\n>>> +HINT: Use DROP ... CASCADE to drop the dependent objects too.\n>>> DROP ROLE regress_test_role;\n>>> +ERROR: role \"regress_test_role\" cannot be dropped because some objects\n>>> depend on it\n>>> +DETAIL: owner of collation test_schema.test11\n>>> +ERROR: role \"regress_test_role\" cannot be dropped because some objects\n>>> depend on it\n>>> +DETAIL: owner of collation test_schema.test11\n>>>\n>>> +ERROR: role \"regress_test_role\" already exists\n>>>\n>>>\n>>>\n>>>\n>> Did you run 'make installcheck' rather than 'make check' and there\n>> was a failure before this round of test? This looks to me that there\n>> are some objects are not cleaned well before this run. you can try\n>> 'make installcheck' with a pretty clean setup or run 'make check'\n>> directly to verify this.\n>\n\nThanks Andy, I think I find the root cause. In my environment, LANG has\ndifferent setting from others.\n\n$ locale\nLANG=C.UTF-8\nLANGUAGE=\nLC_CTYPE=\"en_US.UTF-8\"\nLC_NUMERIC=\"en_US.UTF-8\"\nLC_TIME=\"en_US.UTF-8\"\nLC_COLLATE=\"en_US.UTF-8\"\nLC_MONETARY=\"en_US.UTF-8\"\nLC_MESSAGES=\"en_US.UTF-8\"\nLC_PAPER=\"en_US.UTF-8\"\nLC_NAME=\"en_US.UTF-8\"\nLC_ADDRESS=\"en_US.UTF-8\"\nLC_TELEPHONE=\"en_US.UTF-8\"\nLC_MEASUREMENT=\"en_US.UTF-8\"\nLC_IDENTIFICATION=\"en_US.UTF-8\"\nLC_ALL=en_US.UTF-8\n\nThen, I set LANG to en_US.UTF-8, all tests passed. What I'm curious about\nis why PG 15 can pass.\n\n-- \nRegrads,\nJapin Li\n\n\n",
"msg_date": "Tue, 15 Aug 2023 09:51:38 +0800",
"msg_from": "Japin Li <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Regression test collate.icu.utf8 failed on REL_14_STABLE"
}
] |
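For reference, the failing step in the regression diff above can be reproduced by hand on the affected back branches. In Japin's mixed locale environment the server's lc_collate and lc_ctype settings end up differing (C.UTF-8 vs en_US.UTF-8), and the CONTEXT line of the diff shows the statement the test's DO block built from them; it is quoted here only as an illustration of the mismatch, not as a new test case.

CREATE COLLATION test1 (provider = icu, lc_collate = 'C.UTF-8', lc_ctype = 'en_US.UTF-8');
-- ERROR:  collations with different collate and ctype values are not supported by ICU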
[
{
"msg_contents": "Hello, my name is Satwik Sharma, and I'm an enthusiast in the fields of\ndata science and new to open source development. I presently use Python,\nSQL, MongoDB, PowerBI, Tableau, and am currently studying Scala. I came\nacross the repository maintained by your company, and I want to contribute.\nIt would be really useful if you could steer me in the appropriate path\n(which projects/repos to choose) and suggest some good first issues. In\norder to contribute more actively, what technologies should I learn as well?\n\nHello, my name is Satwik Sharma, and I'm an enthusiast in the\nfields of data science and new to open source development. I presently use\nPython, SQL, MongoDB, PowerBI, Tableau, and am currently studying Scala. I came\nacross the repository maintained by your company, and I want to contribute. It\nwould be really useful if you could steer me in the appropriate path (which\nprojects/repos to choose) and suggest some good first issues. In order\nto contribute more actively, what technologies should I learn as well?",
"msg_date": "Tue, 15 Aug 2023 01:24:44 +0530",
"msg_from": "Satwik Sharma <[email protected]>",
"msg_from_op": true,
"msg_subject": "Regarding Contributions"
},
{
"msg_contents": "On Tue, Aug 15, 2023 at 01:24:44AM +0530, Satwik Sharma wrote:\n> Hello, my name is Satwik Sharma, and I'm an enthusiast in the fields of data\n> science and new to open source development. I presently use Python, SQL,\n> MongoDB, PowerBI, Tableau, and am currently studying Scala. I came across the\n> repository maintained by your company, and I want to contribute. It would be\n> really useful if you could steer me in the appropriate path (which projects/\n> repos to choose) and suggest some good first issues. In order to contribute\n> more actively, what technologies should I learn as well?\n\nYou might want to look here:\n\n\thttps://www.postgresql.org/developer/\n\nespecially the developers FAQ.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n",
"msg_date": "Mon, 14 Aug 2023 16:37:54 -0400",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Regarding Contributions"
}
] |
[
{
"msg_contents": "Hi\nI have a modified version of ECPG, to which I gave the ability to do\nsemantic analysis of SQL statements. Where i can share it or with whom can\nI discuss it?\n\nAtte.\nJRBM\n\nHiI have a modified version of ECPG, to which I gave the ability to do semantic analysis of SQL statements. Where i can share it or with whom can I discuss it?Atte.JRBM",
"msg_date": "Mon, 14 Aug 2023 17:00:35 -0400",
"msg_from": "Juan Rodrigo Alejandro Burgos Mella <[email protected]>",
"msg_from_op": true,
"msg_subject": "ECPG Semantic Analysis"
},
{
"msg_contents": "Hi,\n\n> I have a modified version of ECPG, to which I gave the ability to\n> do semantic analysis of SQL statements. Where i can share it or with\n> whom can I discuss it?\n\nFeel free to send it my way. \n\nThanks,\nMichael\n-- \nMichael Meskes\nMichael at Fam-Meskes dot De\nMichael at Meskes dot (De|Com|Net|Org)\nMeskes at (Debian|Postgresql) dot Org\n\n\n",
"msg_date": "Tue, 15 Aug 2023 07:57:26 +0200",
"msg_from": "Michael Meskes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ECPG Semantic Analysis"
},
{
"msg_contents": "Hi,\n\nsorry for the double reply, but it seems my mail client's default is\nnow to send to the list only when hitting group reply. Not good, sigh.\n\n> I have a modified version of ECPG, to which I gave the ability to\n> do semantic analysis of SQL statements. Where i can share it or with\n> whom can I discuss it?\n\nFeel free to send it to me.\n\nThanks,\nMichael\n-- \nMichael Meskes\nMichael at Fam-Meskes dot De\nMichael at Meskes dot (De|Com|Net|Org)\nMeskes at (Debian|Postgresql) dot Org\n\n\n",
"msg_date": "Tue, 15 Aug 2023 07:59:42 +0200",
"msg_from": "Michael Meskes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ECPG Semantic Analysis"
}
] |
[
{
"msg_contents": "Hi all,\n\nWhile playing with pg_logical_emit_message() and WAL replay, I have\nnoticed that LogLogicalMessage() inserts a record but forgets to make\nsure that the record has been flushed. So, for example, if the system\ncrashes the message inserted can get lost.\n\nI was writing some TAP tests for it for the sake of a bug, and I have\nfound this the current behavior annoying because one cannot really\nrely on it when emulating crashes.\n\nThis has been introduced in 3fe3511 (from 2016), and there is no\nmention of that on the original thread that led to this commit:\nhttps://www.postgresql.org/message-id/flat/5685F999.6010202%402ndquadrant.com\n\nThis could be an issue for anybody using LogLogicalMessage() out of\ncore, as well, because it would mean some records lost. So, perhaps\nthis should be treated as a bug, sufficient for a backpatch?\n\nThoughts?\n--\nMichael",
"msg_date": "Tue, 15 Aug 2023 15:38:06 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": true,
"msg_subject": "pg_logical_emit_message() misses a XLogFlush()"
},
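A minimal way to observe the behaviour described above, assuming a scratch cluster that can be crashed (for example with pg_ctl stop -m immediate); the prefix and payload strings are arbitrary:

-- emit a non-transactional message; as discussed above, nothing in this path forces a WAL flush
SELECT pg_logical_emit_message(false, 'crash_test', 'emitted before crash');
-- crash the server right afterwards, e.g.:  pg_ctl -D $PGDATA stop -m immediate
-- after recovery, replay can stop short of this record and the message is gone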
{
"msg_contents": "\n\nOn 8/15/23 08:38, Michael Paquier wrote:\n> Hi all,\n> \n> While playing with pg_logical_emit_message() and WAL replay, I have\n> noticed that LogLogicalMessage() inserts a record but forgets to make\n> sure that the record has been flushed. So, for example, if the system\n> crashes the message inserted can get lost.\n> \n> I was writing some TAP tests for it for the sake of a bug, and I have\n> found this the current behavior annoying because one cannot really\n> rely on it when emulating crashes.\n> \n> This has been introduced in 3fe3511 (from 2016), and there is no\n> mention of that on the original thread that led to this commit:\n> https://www.postgresql.org/message-id/flat/5685F999.6010202%402ndquadrant.com\n> \n> This could be an issue for anybody using LogLogicalMessage() out of\n> core, as well, because it would mean some records lost. So, perhaps\n> this should be treated as a bug, sufficient for a backpatch?\n> \n\nShouldn't the flush be done only for non-transactional messages? The\ntransactional case will be flushed by regular commit flush.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 15 Aug 2023 11:37:32 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_logical_emit_message() misses a XLogFlush()"
},
{
"msg_contents": "On Tue, Aug 15, 2023 at 11:37:32AM +0200, Tomas Vondra wrote:\n> Shouldn't the flush be done only for non-transactional messages? The\n> transactional case will be flushed by regular commit flush.\n\nIndeed, that would be better. I am sending an updated patch.\n\nI'd like to backpatch that, would there be any objections to that?\nThis may depend on how much logical solutions depend on this routine.\n--\nMichael",
"msg_date": "Wed, 16 Aug 2023 06:58:56 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_logical_emit_message() misses a XLogFlush()"
},
{
"msg_contents": "Hi,\n\nOn 2023-08-16 06:58:56 +0900, Michael Paquier wrote:\n> On Tue, Aug 15, 2023 at 11:37:32AM +0200, Tomas Vondra wrote:\n> > Shouldn't the flush be done only for non-transactional messages? The\n> > transactional case will be flushed by regular commit flush.\n> \n> Indeed, that would be better. I am sending an updated patch.\n> \n> I'd like to backpatch that, would there be any objections to that?\n\nYes, I object. This would completely cripple the performance of some uses of\nlogical messages - a slowdown of several orders of magnitude. It's not clear\nto me that flushing would be the right behaviour if it weren't released, but\nit certainly doesn't seem right to make such a change in a minor release.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 15 Aug 2023 17:33:33 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_logical_emit_message() misses a XLogFlush()"
},
{
"msg_contents": "On 8/16/23 02:33, Andres Freund wrote:\n> Hi,\n> \n> On 2023-08-16 06:58:56 +0900, Michael Paquier wrote:\n>> On Tue, Aug 15, 2023 at 11:37:32AM +0200, Tomas Vondra wrote:\n>>> Shouldn't the flush be done only for non-transactional messages? The\n>>> transactional case will be flushed by regular commit flush.\n>>\n>> Indeed, that would be better. I am sending an updated patch.\n>>\n>> I'd like to backpatch that, would there be any objections to that?\n> \n> Yes, I object. This would completely cripple the performance of some uses of\n> logical messages - a slowdown of several orders of magnitude. It's not clear\n> to me that flushing would be the right behaviour if it weren't released, but\n> it certainly doesn't seem right to make such a change in a minor release.\n> \n\nSo are you objecting to adding the flush in general, or just to the\nbackpatching part?\n\nIMHO we either guarantee durability of non-transactional messages, in\nwhich case this would be a clear bug - and I'd say a fairly serious one.\nI'm curious what the workload that'd see order of magnitude of slowdown\ndoes with logical messages, but even if such workload exists, would it\nreally be enough to fix any other durability bug?\n\nOr perhaps we don't want to guarantee durability for such messages, in\nwhich case we don't need to fix it at all (even in master).\n\nThe docs are not very clear on what to expect, unfortunately. It says\nthat non-transactional messages are \"written immediately\" which I could\ninterpret in either way.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 16 Aug 2023 03:20:53 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_logical_emit_message() misses a XLogFlush()"
},
{
"msg_contents": "On Wed, Aug 16, 2023 at 03:20:53AM +0200, Tomas Vondra wrote:\n> So are you objecting to adding the flush in general, or just to the\n> backpatching part?\n> \n> IMHO we either guarantee durability of non-transactional messages, in\n> which case this would be a clear bug - and I'd say a fairly serious one.\n> I'm curious what the workload that'd see order of magnitude of slowdown\n> does with logical messages, but even if such workload exists, would it\n> really be enough to fix any other durability bug?\n\nYes, I also think that this is a pretty serious issue to not ensure\ndurability in the non-transactional case if you have solutions\ndesigned around that.\n\n> Or perhaps we don't want to guarantee durability for such messages, in\n> which case we don't need to fix it at all (even in master).\n\nI mean, just look at the first message of the thread I am mentioning\nat the top of this thread: it lists three cases where something like\npg_logical_emit_message() was wanted. And, it looks like a serious\nissue to me for the first two ones at least (out-of-order messages and\ninter-node communication), because an application may want to send\nsomething from node1 to node2, and node1 may just forget about it\nentirely if it crashes, finishing WAL redo at an earlier point than\nthe record inserted.\n\n> The docs are not very clear on what to expect, unfortunately. It says\n> that non-transactional messages are \"written immediately\" which I could\n> interpret in either way.\n\nAgreed. The original thread never mentions \"flush\", \"sync\" or\n\"durab\".\n\nI won't fight much if there are objections to backpatching, but that's\nnot really wise (no idea how much EDB's close flavor of BDR relies on\nthat).\n--\nMichael",
"msg_date": "Wed, 16 Aug 2023 12:37:21 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_logical_emit_message() misses a XLogFlush()"
},
{
"msg_contents": "Hi,\n\nOn 2023-08-16 03:20:53 +0200, Tomas Vondra wrote:\n> On 8/16/23 02:33, Andres Freund wrote:\n> > Hi,\n> >\n> > On 2023-08-16 06:58:56 +0900, Michael Paquier wrote:\n> >> On Tue, Aug 15, 2023 at 11:37:32AM +0200, Tomas Vondra wrote:\n> >>> Shouldn't the flush be done only for non-transactional messages? The\n> >>> transactional case will be flushed by regular commit flush.\n> >>\n> >> Indeed, that would be better. I am sending an updated patch.\n> >>\n> >> I'd like to backpatch that, would there be any objections to that?\n> >\n> > Yes, I object. This would completely cripple the performance of some uses of\n> > logical messages - a slowdown of several orders of magnitude. It's not clear\n> > to me that flushing would be the right behaviour if it weren't released, but\n> > it certainly doesn't seem right to make such a change in a minor release.\n> >\n>\n> So are you objecting to adding the flush in general, or just to the\n> backpatching part?\n\nBoth, I think. I don't object to adding a way to trigger flushing, but I think\nit needs to be optional.\n\n\n> IMHO we either guarantee durability of non-transactional messages, in which\n> case this would be a clear bug - and I'd say a fairly serious one. I'm\n> curious what the workload that'd see order of magnitude of slowdown does\n> with logical messages I've used it, but even if such workload exists, would\n> it really be enough to fix any other durability bug?\n\nNot sure what you mean with the last sentence?\n\nI've e.g. used non-transactional messages for:\n\n- A non-transactional queuing system. Where sometimes one would dump a portion\n of tables into messages, with something like\n SELECT pg_logical_emit_message(false, 'app:<task>', to_json(r)) FROM r;\n Obviously flushing after every row would be bad.\n\n This is useful when you need to coordinate with other systems in a\n non-transactional way. E.g. letting other parts of the system know that\n files on disk (or in S3 or ...) were created/deleted, since a database\n rollback wouldn't unlink/revive the files.\n\n- Audit logging, when you want to log in a way that isn't undone by rolling\n back transaction - just flushing every pg_logical_emit_message() would\n increase the WAL flush rate many times, because instead of once per\n transaction, you'd now flush once per modified row. It'd basically make it\n impractical to use for such things.\n\n- Optimistic locking. Emitting things that need to be locked on logical\n replicas, to be able to commit on the primary. A pre-commit hook would wait\n for the WAL to be replayed sufficiently - but only once per transaction, not\n once per object.\n\n\n> Or perhaps we don't want to guarantee durability for such messages, in\n> which case we don't need to fix it at all (even in master).\n\nWell, I can see adding an option to flush, or perhaps a separate function to\nflush, to master.\n\n\n> The docs are not very clear on what to expect, unfortunately. It says\n> that non-transactional messages are \"written immediately\" which I could\n> interpret in either way.\n\nYea, the docs certainly should be improved, regardless what we end up with.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 15 Aug 2023 21:13:36 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_logical_emit_message() misses a XLogFlush()"
},
{
"msg_contents": "Hi,\n\nOn 2023-08-16 12:37:21 +0900, Michael Paquier wrote:\n> I won't fight much if there are objections to backpatching, but that's\n> not really wise (no idea how much EDB's close flavor of BDR relies on\n> that).\n\nTo be clear: I don't just object to backpatching, I also object to making\nexisting invocations flush WAL in HEAD. I do not at all object to adding a\nparameter that indicates flushing, or a separate function to do so. The latter\nmight be better, because it allows you to flush a group of messages, rather\nthan a single one.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 15 Aug 2023 21:16:53 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_logical_emit_message() misses a XLogFlush()"
},
{
"msg_contents": "On Tue, Aug 15, 2023 at 09:16:53PM -0700, Andres Freund wrote:\n> To be clear: I don't just object to backpatching, I also object to making\n> existing invocations flush WAL in HEAD. I do not at all object to adding a\n> parameter that indicates flushing, or a separate function to do so. The latter\n> might be better, because it allows you to flush a group of messages, rather\n> than a single one.\n\nFor the latter, am I getting it right that you mean a function\ncompletely outside of the scope of LogLogicalMessage() and\npg_logical_emit_message()? Say, a single pg_wal_flush(lsn)?\n\nI am a bit concerned by that, because anybody calling directly\nLogLogicalMessage() or the existing function would never really think\nabout durability but they may want to ensure a message flush in their\ncode calling it. Adding an argument does not do much about the SQL\nfunction if it has a DEFAULT, still it addresses my first concern\nabout the C part.\n\nAnyway, attached is a patch to add a 4th argument \"flush\" that\ndefaults to false. Thoughts about this version are welcome.\n--\nMichael",
"msg_date": "Wed, 16 Aug 2023 16:51:53 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_logical_emit_message() misses a XLogFlush()"
},
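With the patch proposed in the message above (not committed at this point in the thread), SQL-level usage would look like the following; the fourth argument is the proposed optional "flush" flag, defaulting to false so existing callers keep the current behaviour:

SELECT pg_logical_emit_message(false, 'app:files', 'file 42 removed');        -- not flushed
SELECT pg_logical_emit_message(false, 'app:files', 'file 43 removed', true);  -- XLogFlush() before returning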
{
"msg_contents": "\n\nOn 8/16/23 06:13, Andres Freund wrote:\n> Hi,\n> \n> On 2023-08-16 03:20:53 +0200, Tomas Vondra wrote:\n>> On 8/16/23 02:33, Andres Freund wrote:\n>>> Hi,\n>>>\n>>> On 2023-08-16 06:58:56 +0900, Michael Paquier wrote:\n>>>> On Tue, Aug 15, 2023 at 11:37:32AM +0200, Tomas Vondra wrote:\n>>>>> Shouldn't the flush be done only for non-transactional messages? The\n>>>>> transactional case will be flushed by regular commit flush.\n>>>>\n>>>> Indeed, that would be better. I am sending an updated patch.\n>>>>\n>>>> I'd like to backpatch that, would there be any objections to that?\n>>>\n>>> Yes, I object. This would completely cripple the performance of some uses of\n>>> logical messages - a slowdown of several orders of magnitude. It's not clear\n>>> to me that flushing would be the right behaviour if it weren't released, but\n>>> it certainly doesn't seem right to make such a change in a minor release.\n>>>\n>>\n>> So are you objecting to adding the flush in general, or just to the\n>> backpatching part?\n> \n> Both, I think. I don't object to adding a way to trigger flushing, but I think\n> it needs to be optional.\n> \n> \n>> IMHO we either guarantee durability of non-transactional messages, in which\n>> case this would be a clear bug - and I'd say a fairly serious one. I'm\n>> curious what the workload that'd see order of magnitude of slowdown does\n>> with logical messages I've used it, but even if such workload exists, would\n>> it really be enough to fix any other durability bug?\n> \n> Not sure what you mean with the last sentence?\n> \n\nSorry, I meant to write \"enough not to fix any other durability bug\".\n\nThat is, either we must not lose messages, in which case it's a bug and\nwe should fix that. Or it's acceptable for the intended use cases, in\nwhich case there's nothing to fix.\n\nTo me losing messages seems like a bad thing, but if the users are aware\nof it and are fine with it ... I'm simply arguing that if we conclude\nthis is a durability bug, we should not leave it unfixed because it\nmight have performance impact.\n\n> I've e.g. used non-transactional messages for:\n> \n> - A non-transactional queuing system. Where sometimes one would dump a portion\n> of tables into messages, with something like\n> SELECT pg_logical_emit_message(false, 'app:<task>', to_json(r)) FROM r;\n> Obviously flushing after every row would be bad.\n> \n> This is useful when you need to coordinate with other systems in a\n> non-transactional way. E.g. letting other parts of the system know that\n> files on disk (or in S3 or ...) were created/deleted, since a database\n> rollback wouldn't unlink/revive the files.\n> \n> - Audit logging, when you want to log in a way that isn't undone by rolling\n> back transaction - just flushing every pg_logical_emit_message() would\n> increase the WAL flush rate many times, because instead of once per\n> transaction, you'd now flush once per modified row. It'd basically make it\n> impractical to use for such things.\n> \n> - Optimistic locking. Emitting things that need to be locked on logical\n> replicas, to be able to commit on the primary. A pre-commit hook would wait\n> for the WAL to be replayed sufficiently - but only once per transaction, not\n> once per object.\n> \n\nHow come the possibility of losing messages is not an issue for these\nuse cases? 
I mean, surely auditors would not like that, and I guess\nforgetting locks might be bad too.\n\n> \n>> Or perhaps we don't want to guarantee durability for such messages, in\n>> which case we don't need to fix it at all (even in master).\n> \n> Well, I can see adding an option to flush, or perhaps a separate function to\n> flush, to master.\n>>\n>> The docs are not very clear on what to expect, unfortunately. It says\n>> that non-transactional messages are \"written immediately\" which I could\n>> interpret in either way.\n> \n> Yea, the docs certainly should be improved, regardless what we end up with.\n> \n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 16 Aug 2023 12:01:01 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_logical_emit_message() misses a XLogFlush()"
},
{
"msg_contents": "On Wed, Aug 16, 2023 at 12:01:01PM +0200, Tomas Vondra wrote:\n> To me losing messages seems like a bad thing, but if the users are aware\n> of it and are fine with it ... I'm simply arguing that if we conclude\n> this is a durability bug, we should not leave it unfixed because it\n> might have performance impact.\n\nI've been doing some digging here, and the original bdr repo posted at\n[1] has a concept similar to LogLogicalMessage() called\nLogStandbyMessage(). *All* the non-transactional code paths enforce\nan XLogFlush() after *each* message logged. So the original\nexpectation seems pretty clear to me: flushes were wanted.\n\n[1]: https://github.com/2ndQuadrant/bdr\n\n>> I've e.g. used non-transactional messages for:\n>> \n>> - A non-transactional queuing system. Where sometimes one would dump a portion\n>> of tables into messages, with something like\n>> SELECT pg_logical_emit_message(false, 'app:<task>', to_json(r)) FROM r;\n>> Obviously flushing after every row would be bad.\n>> \n>> This is useful when you need to coordinate with other systems in a\n>> non-transactional way. E.g. letting other parts of the system know that\n>> files on disk (or in S3 or ...) were created/deleted, since a database\n>> rollback wouldn't unlink/revive the files.\n>> \n>> - Audit logging, when you want to log in a way that isn't undone by rolling\n>> back transaction - just flushing every pg_logical_emit_message() would\n>> increase the WAL flush rate many times, because instead of once per\n>> transaction, you'd now flush once per modified row. It'd basically make it\n>> impractical to use for such things.\n>> \n>> - Optimistic locking. Emitting things that need to be locked on logical\n>> replicas, to be able to commit on the primary. A pre-commit hook would wait\n>> for the WAL to be replayed sufficiently - but only once per transaction, not\n>> once per object.\n> \n> How come the possibility of losing messages is not an issue for these\n> use cases? I mean, surely auditors would not like that, and I guess\n> forgetting locks might be bad too.\n\n+1. Now I can also get why one may not want to flush every individual\nmessages if you care only about a queue to be flushed after generating\na series of them, so after sleeping on it I'm OK with the last patch I\nposted where one can just choose what he wants. The default, though,\nmay be better if flush is true compared to false (the patch has kept\nflush at false)..\n\nI am not sure where this is leading yet, so I have registered a CF\nentry to keep track of that:\nhttps://commitfest.postgresql.org/44/4505/\n--\nMichael",
"msg_date": "Thu, 17 Aug 2023 10:15:35 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_logical_emit_message() misses a XLogFlush()"
},
{
"msg_contents": "\n\nOn 2023/08/16 16:51, Michael Paquier wrote:\n> Anyway, attached is a patch to add a 4th argument \"flush\" that\n> defaults to false. Thoughts about this version are welcome.\n\nWhen the \"transactional\" option is set to true, WAL including\nthe record generated by the pg_logical_emit_message() function is flushed\nat the end of the transaction based on the synchronous_commit setting.\nHowever, in the current patch, if \"transactional\" is set to false and\n\"flush\" is true, the function flushes the WAL immediately without\nconsidering synchronous_commit. Is this the intended behavior?\nI'm not sure how the function should work in this case, though.\n\nThough I don't understand the purpose of this option fully yet,\nis flushing the WAL sufficient? Are there scenarios where the function\nshould ensure that the WAL is not only flushed but also replicated\nto the standby?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Mon, 11 Sep 2023 12:54:11 +0900",
"msg_from": "Fujii Masao <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_logical_emit_message() misses a XLogFlush()"
},
{
"msg_contents": "On Mon, Sep 11, 2023 at 12:54:11PM +0900, Fujii Masao wrote:\n> However, in the current patch, if \"transactional\" is set to false and\n> \"flush\" is true, the function flushes the WAL immediately without\n> considering synchronous_commit. Is this the intended behavior?\n> I'm not sure how the function should work in this case, though.\n\nYes, that's the intended behavior. This just offers more options to\nthe toolkit of this function to give more control to applications when\nemitting a message. In this case, like the current non-transactional\ncase, we make the record immediately available to logical decoders but\nalso make sure that it is flushed to disk. If one wants to force the\nrecord's presence to a remote instance, then using the transactional\nmode would be sufficient.\n\nPerhaps you have a point here, though, that we had better make\nentirely independent the flush and transactional parts, and still\ncall XLogFlush() even in transactional mode. One would make sure that\nthe record is on disk before waiting for the commit to do so, but\nthat's also awkward for applications because they would not know the\nend LSN of the emitted message until the internal transaction commits\nthe allocated XID, which would be a few records after the result\ncoming out of pg_logical_emit_message().\n\nThe documentation does not worry about any of that even now in the\ncase of the non-transactional case, and it does not mention that one\nmay need to monitor pg_stat_replication or similar to make sure that\nthe LSN of the message exists on the remote with an application-level\ncheck, either. How about adding an extra paragraph to the\ndocumentation, then? I could think of something like that, but the\ncurrent docs also outline this a bit by telling that the message is\n*not* part of a transaction, which kind of implies, at least to me,\nthat synchonous_commit is moot in this case:\n\"When transactional is false, note that the backend ignores\nsynchronous_commit as the record is not part of a transaction so there\nis no commit to wait for. Ensuring that the record of a message\nemitted exists on standbys requires additional monitoring.\"\n\n> Though I don't understand the purpose of this option fully yet,\n> is flushing the WAL sufficient? Are there scenarios where the function\n> should ensure that the WAL is not only flushed but also replicated\n> to the standby?\n\nThe flush makes sure that the record is durable, but we only care\nabout transaction commits in a synchronous setup, so that's\nindependent, in my opinion. If you look closely, we do some fancy\nstuff in finish_sync_worker(), for example, where a transaction commit\nis enforced to make sure that the internal flush is sensitive to the\nsynchronous commit requirements, but that's just something we expect\nto happen in a sync worker.\n--\nMichael",
"msg_date": "Mon, 11 Sep 2023 14:02:38 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_logical_emit_message() misses a XLogFlush()"
},
{
"msg_contents": "On 2023-09-11 14:02, Michael Paquier wrote:\n> On Mon, Sep 11, 2023 at 12:54:11PM +0900, Fujii Masao wrote:\n>> However, in the current patch, if \"transactional\" is set to false and\n>> \"flush\" is true, the function flushes the WAL immediately without\n>> considering synchronous_commit. Is this the intended behavior?\n>> I'm not sure how the function should work in this case, though.\n> \n> Yes, that's the intended behavior. This just offers more options to\n> the toolkit of this function to give more control to applications when\n> emitting a message. In this case, like the current non-transactional\n> case, we make the record immediately available to logical decoders but\n> also make sure that it is flushed to disk. If one wants to force the\n> record's presence to a remote instance, then using the transactional\n> mode would be sufficient.\n> \n> Perhaps you have a point here, though, that we had better make\n> entirely independent the flush and transactional parts, and still\n> call XLogFlush() even in transactional mode. One would make sure that\n> the record is on disk before waiting for the commit to do so, but\n> that's also awkward for applications because they would not know the\n> end LSN of the emitted message until the internal transaction commits\n> the allocated XID, which would be a few records after the result\n> coming out of pg_logical_emit_message().\n> \n> The documentation does not worry about any of that even now in the\n> case of the non-transactional case, and it does not mention that one\n> may need to monitor pg_stat_replication or similar to make sure that\n> the LSN of the message exists on the remote with an application-level\n> check, either. How about adding an extra paragraph to the\n> documentation, then? I could think of something like that, but the\n> current docs also outline this a bit by telling that the message is\n> *not* part of a transaction, which kind of implies, at least to me,\n> that synchonous_commit is moot in this case:\n> \"When transactional is false, note that the backend ignores\n> synchronous_commit as the record is not part of a transaction so there\n> is no commit to wait for. Ensuring that the record of a message\n> emitted exists on standbys requires additional monitoring.\"\n> \n>> Though I don't understand the purpose of this option fully yet,\n>> is flushing the WAL sufficient? Are there scenarios where the function\n>> should ensure that the WAL is not only flushed but also replicated\n>> to the standby?\n> \n> The flush makes sure that the record is durable, but we only care\n> about transaction commits in a synchronous setup, so that's\n> independent, in my opinion. 
If you look closely, we do some fancy\n> stuff in finish_sync_worker(), for example, where a transaction commit\n> is enforced to make sure that the internal flush is sensitive to the\n> synchronous commit requirements, but that's just something we expect\n> to happen in a sync worker.\n> --\n> Michael\nHi,\n\nWith regard to the patch, the documentation outlines the \npg_logical_emit_message function and its corresponding syntax in the \nfollowing manner.\n\npg_logical_emit_message ( transactional boolean, prefix text, content \ntext ) → pg_lsn\npg_logical_emit_message ( transactional boolean, prefix text, content \nbytea [, flush boolean DEFAULT false] ) → pg_lsn\n\nA minor issue with the description here is that while the description \nfor the new flush argument in pg_logical_emit_message() with bytea type \nis clearly declared, there is no description of flush argument in the \nformer pg_logical_emit_message() with text type at all.\n\nAdditionally, there is a lack of consistency in the third argument names \nbetween the function definition and the description (i.e., \"message \nbytea\" versus \"<parameter>content</parameter> <type>bytea</type>\") as \nfollows.\n----------------\n+CREATE OR REPLACE FUNCTION pg_catalog.pg_logical_emit_message(\n+ transactional boolean,\n+ prefix text,\n+ message bytea,\n+ flush boolean DEFAULT false)\n+RETURNS pg_lsn\n+LANGUAGE INTERNAL\n+VOLATILE STRICT\n+AS 'pg_logical_emit_message_bytea';\n----------------\n...\n----------------\n+ <function>pg_logical_emit_message</function> ( \n<parameter>transactional</parameter> <type>boolean</type>, \n<parameter>prefix</parameter> <type>text</type>, \n<parameter>content</parameter> <type>bytea</type> [, \n<parameter>flush</parameter> <type>boolean</type> \n<literal>DEFAULT</literal> <literal>false</literal>] )\n----------------\nCould you please provide clarification on the reason for this \ndifferentiation?\n\nOn a side note, could you also include a bit more information that \n\"flush is set to false by default\" in the document as well? It could be \nhelpful for the users.\n\nRegards,\nTung Nguyen\n\n\n",
"msg_date": "Mon, 11 Sep 2023 14:42:16 +0900",
"msg_from": "bt23nguyent <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_logical_emit_message() misses a XLogFlush()"
},
{
"msg_contents": "On Mon, Sep 11, 2023 at 02:42:16PM +0900, bt23nguyent wrote:\n> A minor issue with the description here is that while the description for\n> the new flush argument in pg_logical_emit_message() with bytea type is\n> clearly declared, there is no description of flush argument in the former\n> pg_logical_emit_message() with text type at all.\n\nIndeed, I forgot to update the first function signature. Fixed in the\nattached. \n\n> On a side note, could you also include a bit more information that \"flush is\n> set to false by default\" in the document as well? It could be helpful for\n> the users.\n\nWith the function signature saying that, that did not seem stricly\nnecessary to me, but no objections to add a few words about that.\n\nI'll need a bit more input from Fujii-san before doing anything about\nhis comments, still it looks like a doc issue to me that may need a\nbackpatch to clarify how the non-transactional case behaves.\n\nAttaching a v4 with the two doc changes, fow now.\n--\nMichael",
"msg_date": "Mon, 11 Sep 2023 16:24:58 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_logical_emit_message() misses a XLogFlush()"
},
{
"msg_contents": "On Mon, Sep 11, 2023 at 5:13 PM Michael Paquier <[email protected]> wrote:\n>\n> I'll need a bit more input from Fujii-san before doing anything about\n> his comments, still it looks like a doc issue to me that may need a\n> backpatch to clarify how the non-transactional case behaves.\n>\n\nI would prefer to associate the new parameter 'flush' with\nnon-transactional messages as per the proposed patch.\n\nFew points for you to consider:\n1.\n+CREATE OR REPLACE FUNCTION pg_catalog.pg_logical_emit_message(\n+ transactional boolean,\n+ prefix text,\n+ message text,\n+ flush boolean DEFAULT false)\n+RETURNS pg_lsn\n+LANGUAGE INTERNAL\n+VOLATILE STRICT\n+AS 'pg_logical_emit_message_text';\n+\n+CREATE OR REPLACE FUNCTION pg_catalog.pg_logical_emit_message(\n+ transactional boolean,\n+ prefix text,\n+ message bytea,\n+ flush boolean DEFAULT false)\n+RETURNS pg_lsn\n+LANGUAGE INTERNAL\n+VOLATILE STRICT\n\nIs there a reason to make the functions strict now when they were not earlier?\n\n2.\n+ The <parameter>flush</parameter> parameter (default set to\n+ <literal>false</literal>) controls if the message is immediately\n+ flushed to WAL or not. <parameter>flush</parameter> has no effect\n+ with <parameter>transactional</parameter>, as the message's WAL\n+ record is flushed when its transaction is committed.\n\nThe last part of the message sounds a bit too specific (\".. as the\nmessage's WAL record is flushed when its transaction is committed.\")\nbecause sometimes the WAL could be flushed by walwriter even before\nthe commit. Can we say something along the lines: \".. as the message's\nWAL record is flushed along with its transaction.\"?\n\n3.\n+ /*\n+ * Make sure that the message hits disk before leaving if not emitting a\n+ * transactional message, if flush is requested.\n+ */\n+ if (!transactional && flush)\n\nTwo ifs in the above comment sound a bit odd but if we want to keep it\nlike that then adding 'and' before the second if may slightly improve\nit.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 13 Oct 2023 15:20:30 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_logical_emit_message() misses a XLogFlush()"
},
{
"msg_contents": "On Fri, Oct 13, 2023 at 03:20:30PM +0530, Amit Kapila wrote:\n> I would prefer to associate the new parameter 'flush' with\n> non-transactional messages as per the proposed patch.\n\nCheck.\n\n> Is there a reason to make the functions strict now when they were not earlier?\n\nThese two are already STRICT on HEAD:\n=# select proname, provolatile, proisstrict from pg_proc\n where proname ~ 'message';\n proname | provolatile | proisstrict\n-------------------------+-------------+-------------\n pg_logical_emit_message | v | t\n pg_logical_emit_message | v | t\n(2 rows)\n\n> 2.\n> + The <parameter>flush</parameter> parameter (default set to\n> + <literal>false</literal>) controls if the message is immediately\n> + flushed to WAL or not. <parameter>flush</parameter> has no effect\n> + with <parameter>transactional</parameter>, as the message's WAL\n> + record is flushed when its transaction is committed.\n> \n> The last part of the message sounds a bit too specific (\".. as the\n> message's WAL record is flushed when its transaction is committed.\")\n> because sometimes the WAL could be flushed by walwriter even before\n> the commit. Can we say something along the lines: \".. as the message's\n> WAL record is flushed along with its transaction.\"?\n\nFine by me.\n\n> 3.\n> + /*\n> + * Make sure that the message hits disk before leaving if not emitting a\n> + * transactional message, if flush is requested.\n> + */\n> + if (!transactional && flush)\n> \n> Two ifs in the above comment sound a bit odd but if we want to keep it\n> like that then adding 'and' before the second if may slightly improve\n> it.\n\nSure, I've improved this comment.\n\nAn updated version is attached. How does it look?\n--\nMichael",
"msg_date": "Mon, 16 Oct 2023 16:17:23 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_logical_emit_message() misses a XLogFlush()"
},
{
"msg_contents": "On Mon, Oct 16, 2023 at 12:47 PM Michael Paquier <[email protected]> wrote:\n>\n> On Fri, Oct 13, 2023 at 03:20:30PM +0530, Amit Kapila wrote:\n> > I would prefer to associate the new parameter 'flush' with\n> > non-transactional messages as per the proposed patch.\n>\n> Check.\n>\n> > Is there a reason to make the functions strict now when they were not earlier?\n>\n> These two are already STRICT on HEAD:\n> =# select proname, provolatile, proisstrict from pg_proc\n> where proname ~ 'message';\n> proname | provolatile | proisstrict\n> -------------------------+-------------+-------------\n> pg_logical_emit_message | v | t\n> pg_logical_emit_message | v | t\n> (2 rows)\n>\n\noh, I misunderstood the default.\n\n>\n> An updated version is attached. How does it look?\n>\n\nLGTM.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 17 Oct 2023 11:57:33 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_logical_emit_message() misses a XLogFlush()"
},
{
"msg_contents": "On Tue, Oct 17, 2023 at 11:57:33AM +0530, Amit Kapila wrote:\n> LGTM.\n\nThanks, I've applied that, then.\n--\nMichael",
"msg_date": "Wed, 18 Oct 2023 12:50:35 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_logical_emit_message() misses a XLogFlush()"
}
] |
[
{
"msg_contents": "Hi,\nHas anyone tried generating a dynamic memory trace of a backend Postgres\nprocess while it's running a query?\n\nI want to characterize the memory access pattern of the Postgres database\nengine when it's running any given query. The usual way to do this would be\nto attach a dynamic instrumentation tool like DynamoRIO or Intel Pin to the\nbackend process running the query (?). How could I find the exact backend\nprocess to attach to? Also, if there's any prior experience with generating\nsuch memory traces, it would be helpful to know what tools were used.\n\nbest regards,\nMuneeb Khan\n\nHi,Has anyone tried generating a dynamic memory trace of a backend Postgres process while it's running a query?I want to characterize the memory access pattern of the Postgres database engine when it's running any given query. The usual way to do this would be to attach a dynamic instrumentation tool like DynamoRIO or Intel Pin to the backend process running the query (?). How could I find the exact backend process to attach to? Also, if there's any prior experience with generating such memory traces, it would be helpful to know what tools were used.best regards,Muneeb Khan",
"msg_date": "Tue, 15 Aug 2023 10:18:29 +0100",
"msg_from": "Muneeb Anwar <[email protected]>",
"msg_from_op": true,
"msg_subject": "Generating memory trace from Postgres backend process"
}
] |
[
{
"msg_contents": "Hi:\n\nIn the test case of xmlmap.sql, we have the query below\nunder schema_to_xml.\n\nexplain (costs off, verbose)\nSELECT oid FROM pg_catalog.pg_class\nWHERE relnamespace = 28601\nAND relkind IN ('r','m','v')\nAND pg_catalog.has_table_privilege (oid, 'SELECT')\nORDER BY relname;\n\nIf the query is using SeqScan, the execution order of the quals is:\n\nhas_table_privilege(pg_class.oid, 'SELECT'::text) AND\n(pg_class.relnamespace = '28601'::oid) AND (pg_class.relkind = ANY\n('{r,m,v}'::\"char\"[]))\n\nbased on current cost setting and algorithm. With this plan,\nhas_table_privilege(pg_class.oid, 'SELECT'::text) may be executed\nagainst all the relations (not just the given namespace), so if a\ntuple in pg_class is scanned and before has_table_privilege is called,\nthe relation is dropped, then we will get error:\n\nERROR: relation with OID xxx does not exist\n\nTo overcome this, if disabling the seqscan, then only index scan on\nrelnamespace is possible, so relnamespace = '28601'::oid will be filtered\nfirst before calling has_table_privilege. and in this test case, we are\nsure\nthe relation belonging to the current namespace will never be dropped, so\nno error is possible. Here is the plan for reference:\n\nSeq Scan:\n\n Sort\n Output: oid, relname\n Sort Key: pg_class.relname\n -> Seq Scan on pg_catalog.pg_class\n Output: oid, relname\n Filter: (has_table_privilege(pg_class.oid, 'SELECT'::text) AND\n(pg_class.relnamespace = '28601'::oid) AND (pg_class.relkind = ANY\n('{r,m,v}'::\"char\"[])))\n\nenable_seqscan to off\n\n QUERY PLAN\n\n------------------------------------------------------------------------------------------------------------------\n Index Scan using pg_class_relname_nsp_index on pg_catalog.pg_class\n Output: oid, relname\n Index Cond: (pg_class.relnamespace = '28601'::oid)\n Filter: (has_table_privilege(pg_class.oid, 'SELECT'::text) AND\n(pg_class.relkind = ANY ('{r,m,v}'::\"char\"[])))\n\nPatch is attached.\n\n-- \nBest Regards\nAndy Fan",
"msg_date": "Tue, 15 Aug 2023 19:09:32 +0800",
"msg_from": "Andy Fan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Avoid a potential unstable test case: xmlmap.sql"
},
{
"msg_contents": "I overlooked the fact even in the bitmap index scan loose mode, the recheck\nis still executed before the qual, so bitmap index scan is OK in this case.\n\n Sort\n Output: oid, relname\n Sort Key: pg_class.relname\n -> Bitmap Heap Scan on pg_catalog.pg_class\n Output: oid, relname\n Recheck Cond: (pg_class.relnamespace = '28601'::oid)\n Filter: (has_table_privilege(pg_class.oid, 'SELECT'::text) AND\n(pg_class.relkind = ANY ('{r,m,v}'::\"char\"[])))\n -> Bitmap Index Scan on pg_class_relname_nsp_index\n Index Cond: (pg_class.relnamespace = '28601'::oid)\n\nv2 attached.\n\n-- \nBest Regards\nAndy Fan",
"msg_date": "Tue, 15 Aug 2023 19:26:45 +0800",
"msg_from": "Andy Fan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Avoid a potential unstable test case: xmlmap.sql"
},
{
"msg_contents": "Hi Andy,\n\n15.08.2023 14:09, Andy Fan wrote:\n>\n> Hi:\n>\n> In the test case of xmlmap.sql, we have the query below under schema_to_xml.\n>\n\nPlease look at the bug #18014:\nhttps://www.postgresql.org/message-id/flat/18014-28c81cb79d44295d%40postgresql.org\nThere were other aspects of the xmlmap test failure discussed in that thread as well.\n\nBest regards,\nAlexander\n\n\n",
"msg_date": "Tue, 15 Aug 2023 18:00:01 +0300",
"msg_from": "Alexander Lakhin <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Avoid a potential unstable test case: xmlmap.sql"
},
{
"msg_contents": ">\n>\n> Please look at the bug #18014:\n>\n> https://www.postgresql.org/message-id/flat/18014-28c81cb79d44295d%40postgresql.org\n> There were other aspects of the xmlmap test failure discussed in that\n> thread as well.\n>\n\nThank you Alexander for the information, I will go through there for\ndiscussion.\n\n-- \nBest Regards\nAndy Fan\n\n\nPlease look at the bug #18014:\nhttps://www.postgresql.org/message-id/flat/18014-28c81cb79d44295d%40postgresql.org\nThere were other aspects of the xmlmap test failure discussed in that thread as well.Thank you Alexander for the information, I will go through there for discussion. -- Best RegardsAndy Fan",
"msg_date": "Wed, 16 Aug 2023 07:03:01 +0800",
"msg_from": "Andy Fan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Avoid a potential unstable test case: xmlmap.sql"
}
] |
[
{
"msg_contents": "Hi,\n\nWhile working on the join pushdown issue, I noticed this bit in commit\ne4106b252:\n\n--- parameterized remote path\n+-- parameterized remote path for foreign table\n EXPLAIN (VERBOSE, COSTS false)\n- SELECT * FROM ft2 a, ft2 b WHERE a.c1 = 47 AND b.c1 = a.c2;\n+ SELECT * FROM \"S 1\".\"T 1\" a, ft2 b WHERE a.\"C 1\" = 47 AND b.c1 = a.c2;\n SELECT * FROM ft2 a, ft2 b WHERE a.c1 = 47 AND b.c1 = a.c2;\n+\n\nThe first statement was modified to test the intended behavior, but\nthe second one was not. The second one as-is performs a foreign join:\n\nEXPLAIN (VERBOSE, COSTS OFF)\nSELECT * FROM ft2 a, ft2 b WHERE a.c1 = 47 AND b.c1 = a.c2;\n\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Foreign Scan\n Output: a.c1, a.c2, a.c3, a.c4, a.c5, a.c6, a.c7, a.c8, b.c1, b.c2,\nb.c3, b.c4, b.c5, b.c6, b.c7, b.c8\n Relations: (public.ft2 a) INNER JOIN (public.ft2 b)\n Remote SQL: SELECT r1.\"C 1\", r1.c2, r1.c3, r1.c4, r1.c5, r1.c6,\nr1.c7, r1.c8, r2.\"C 1\", r2.c2, r2.c3, r2.c4, r2.c5, r2.c6, r2.c7,\nr2.c8 FROM (\"S 1\".\"T 1\" r1 INNER JOIN \"S 1\".\"T 1\" r2 ON (((r1.c2 =\nr2.\"C 1\")) AND ((r1.\"C 1\" = 47))))\n(4 rows)\n\nSo we should have modified the second one as well? Attached is a\nsmall patch for that.\n\nBest regards,\nEtsuro Fujita",
"msg_date": "Tue, 15 Aug 2023 20:49:59 +0900",
"msg_from": "Etsuro Fujita <[email protected]>",
"msg_from_op": true,
"msg_subject": "Test case for parameterized remote path in postgres_fdw"
},
{
"msg_contents": "On Tue, Aug 15, 2023 at 7:50 PM Etsuro Fujita <[email protected]>\nwrote:\n\n> So we should have modified the second one as well? Attached is a\n> small patch for that.\n\n\nAgreed, nice catch! +1 to the patch.\n\nThanks\nRichard\n\nOn Tue, Aug 15, 2023 at 7:50 PM Etsuro Fujita <[email protected]> wrote:\nSo we should have modified the second one as well? Attached is a\nsmall patch for that.Agreed, nice catch! +1 to the patch.ThanksRichard",
"msg_date": "Wed, 16 Aug 2023 08:41:09 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Test case for parameterized remote path in postgres_fdw"
},
{
"msg_contents": "Hi Richard,\n\nOn Wed, Aug 16, 2023 at 9:41 AM Richard Guo <[email protected]> wrote:\n> On Tue, Aug 15, 2023 at 7:50 PM Etsuro Fujita <[email protected]> wrote:\n>> So we should have modified the second one as well? Attached is a\n>> small patch for that.\n\n> Agreed, nice catch! +1 to the patch.\n\nThanks for looking!\n\nBest regards,\nEtsuro Fujita\n\n\n",
"msg_date": "Wed, 16 Aug 2023 18:45:06 +0900",
"msg_from": "Etsuro Fujita <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Test case for parameterized remote path in postgres_fdw"
},
{
"msg_contents": "On Wed, Aug 16, 2023 at 6:45 PM Etsuro Fujita <[email protected]> wrote:\n> On Wed, Aug 16, 2023 at 9:41 AM Richard Guo <[email protected]> wrote:\n> > On Tue, Aug 15, 2023 at 7:50 PM Etsuro Fujita <[email protected]> wrote:\n> >> So we should have modified the second one as well? Attached is a\n> >> small patch for that.\n>\n> > Agreed, nice catch! +1 to the patch.\n>\n> Thanks for looking!\n\nPushed.\n\nBest regards,\nEtsuro Fujita\n\n\n",
"msg_date": "Wed, 30 Aug 2023 17:50:57 +0900",
"msg_from": "Etsuro Fujita <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Test case for parameterized remote path in postgres_fdw"
}
] |
[
{
"msg_contents": "Hi there\n\nI am trying to convert a SQL Anywhere database to postgres. Within SQL anywhere a field can have a default value of 'last user'. This means that when you perform an update on a table, if the field is not explicitly set then the current user is used. So for instance if I have a field called mod_user in a table, but when I do an update on the table and do not set mod_user then SQL Anywhere sets the field to current_uer. I have tried to replicate this using a postgres trigger in the before update. However, if I do not set the value then it automatically picks up the value that was already in the field. Is there a way to tell the difference between me setting the value to the same as the previous value and postgres automatically picking it up.\n\nIf the field myfield contains the word 'me'. Can I tell the difference between:\nUpdate table1 set field1='something',myfield='me'\nAnd\nUpdate table1 set field1='something'\n\n\n\n\n\n\n\n\n\n\n\nHi there\n \nI am trying to convert a SQL Anywhere database to postgres. Within SQL anywhere a field can have a default value of ‘last user’. This means that when you perform an update on a table, if the field is not explicitly set then the current\n user is used. So for instance if I have a field called mod_user in a table, but when I do an update on the table and do not set mod_user then SQL Anywhere sets the field to current_uer. I have tried to replicate this using a postgres trigger in the before\n update. However, if I do not set the value then it automatically picks up the value that was already in the field. Is there a way to tell the difference between me setting the value to the same as the previous value and postgres automatically picking it up.\n \nIf the field myfield contains the word ‘me’. Can I tell the difference between:\nUpdate table1 set field1=’something’,myfield=’me’\nAnd\nUpdate table1 set field1=’something’",
"msg_date": "Tue, 15 Aug 2023 15:06:16 +0000",
"msg_from": "Russell Rose | Passfield Data Systems <[email protected]>",
"msg_from_op": true,
"msg_subject": "Converting sql anywhere to postgres"
}
] |
[
{
"msg_contents": "Hi everyone,\n\nThis started as a conversation on Discord. Someone asked if Postgres\nlogs which line in pg_hba.conf matched against a certain login\nattempt, and I said no. That's not quite right, as enabling\nlog_connections includes a line like this:\n\n2023-08-15 13:26:03.159 PDT [692166] postgres@snip LOG: connection\nauthenticated: identity=\"postgres\" method=md5\n(/etc/postgresql/15/main/pg_hba.conf:107)\n\nBut I wasn't getting that output. I finally gave up and looked at the\ncode, where I found that this particular output is only generated by\nthe set_authn_id function. So if that function is never called,\nthere's no message saying which line from the pg_hba.conf file matched\na particular login.\n\nThe switch statement that decodes port->hba->auth_method ends by\nsimply setting status = STATUS_OK; with no supplementary output since\nit never calls set_authn_id. So in theory, a malicious user could add\na trust line to pg_hba.conf and have unlimited unlogged access to the\ndatabase. Unless you happen to notice that the \"connection\nauthenticated\" line has disappeared, it would look like normal\nactivity.\n\nWould it make sense to decouple the hba info from set_authn_id so that\nit is always logged even when new auth methods get added in the\nfuture? Or alternatively create a function call specifically for that\noutput so it can be produced from the trust case statement and\nanywhere else that needs to tag the auth line. I personally would love\nto see if someone got in through a trust line, ESPECIALLY if it isn't\nsupposed to be there. Like:\n\n2023-08-15 13:26:03.159 PDT [692166] postgres@snip LOG: connection\nauthenticated: identity=\"postgres\" method=trust\n(/etc/postgresql/15/main/pg_hba.conf:1)\n\nPerhaps I'm being too paranoid; It just seemed to be an odd omission.\nEuler Taveira clued me into the initial patch which introduced the\npg_hba.conf tattling:\n\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=9afffcb833d3c5e59a328a2af674fac7e7334fc1\n\nI read through the discussion, and it doesn't seem like the security\naspect of simply hiding trust auths from the log was considered. Since\nthis is a new capability, I suppose nothing is really different from\nsay Postgres 14 and below. Still, it never hurts to ask.\n\nCheers!\n\n-- \nShaun Thomas\nHigh Availability Architect\nEDB\nwww.enterprisedb.com\n\n\n",
"msg_date": "Tue, 15 Aug 2023 16:49:47 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Logging of matching pg_hba.conf entry during auth skips trust auth,\n potential security issue"
},
{
"msg_contents": "On Tue, Aug 15, 2023 at 04:49:47PM -0500, Shaun Thomas wrote:\n> The switch statement that decodes port->hba->auth_method ends by\n> simply setting status = STATUS_OK; with no supplementary output since\n> it never calls set_authn_id. So in theory, a malicious user could add\n> a trust line to pg_hba.conf and have unlimited unlogged access to the\n> database.\n\nThat's the same as giving access to your data folder. Updating\npg_hba.conf is only the tip of the iceberg if one has write access to\nyour data folder.\n\n> Unless you happen to notice that the \"connection\n> authenticated\" line has disappeared, it would look like normal\n> activity.\n\n\"trust\" is not really an anthentication method because it does\nnothing, it just authorizes things to go through, so you cannot\nreally say that it can have an authn ID (grep for \"pedantic\" around\nhere):\nhttps://www.postgresql.org/message-id/[email protected]\n\n> Would it make sense to decouple the hba info from set_authn_id so that\n> it is always logged even when new auth methods get added in the\n> future? Or alternatively create a function call specifically for that\n> output so it can be produced from the trust case statement and\n> anywhere else that needs to tag the auth line. I personally would love\n> to see if someone got in through a trust line, ESPECIALLY if it isn't\n> supposed to be there. Like:\n>\n> 2023-08-15 13:26:03.159 PDT [692166] postgres@snip LOG: connection\n> authenticated: identity=\"postgres\" method=trust\n> (/etc/postgresql/15/main/pg_hba.conf:1)\n\nYou mean outside the switch/case in ClientAuthentication(). Some\nmethods have an authn that is implementation-dependent, like ldap. I\nam not sure if switching the logic would lead to a gain, like calling\nonce set_authn_id() vs passing up a string to set in a single call of\nset_authn_id().\n\n> I read through the discussion, and it doesn't seem like the security\n> aspect of simply hiding trust auths from the log was considered. Since\n> this is a new capability, I suppose nothing is really different from\n> say Postgres 14 and below. Still, it never hurts to ask.\n\nThe first message from Jacob outlines the idea behind the handling of\ntrust. We could perhaps add one extra set_authn_id() for the uaTrust\ncase (not uaCert!) if that's more helpful.\n--\nMichael",
"msg_date": "Wed, 16 Aug 2023 07:23:42 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Logging of matching pg_hba.conf entry during auth skips trust\n auth, potential security issue"
},
{
"msg_contents": "On Tue, Aug 15, 2023 at 3:24 PM Michael Paquier <[email protected]> wrote:\n> The first message from Jacob outlines the idea behind the handling of\n> trust. We could perhaps add one extra set_authn_id() for the uaTrust\n> case (not uaCert!) if that's more helpful.\n\nI'm not super comfortable with saying \"connection authenticated\" when\nit explicitly hasn't been (nor with switching the meaning of a\nnon-NULL SYSTEM_USER from \"definitely authenticated somehow\" to \"who\nknows; parse it apart to see\"). But adding a log entry (\"connection\ntrusted:\" or some such?) with the pointer to the HBA line that made it\nhappen seems like a useful audit helper to me.\n\n--Jacob\n\n\n",
"msg_date": "Tue, 15 Aug 2023 15:39:10 -0700",
"msg_from": "Jacob Champion <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Logging of matching pg_hba.conf entry during auth skips trust\n auth, potential security issue"
},
{
"msg_contents": "On Tue, Aug 15, 2023 at 03:39:10PM -0700, Jacob Champion wrote:\n> I'm not super comfortable with saying \"connection authenticated\" when\n> it explicitly hasn't been (nor with switching the meaning of a\n> non-NULL SYSTEM_USER from \"definitely authenticated somehow\" to \"who\n> knows; parse it apart to see\"). But adding a log entry (\"connection\n> trusted:\" or some such?) with the pointer to the HBA line that made it\n> happen seems like a useful audit helper to me.\n\nYeah, thanks for confirming. That's also the impression I get after\nreading again the original thread and the idea of how this code path\nis handled in this commit.\n\nWe could do something like a LOG \"connection: method=%s user=%s\n(%s:%d)\", without the \"authenticated\" and \"identity\" terms from\nset_authn_id(). Just to drop an idea.\n--\nMichael",
"msg_date": "Wed, 16 Aug 2023 07:49:31 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Logging of matching pg_hba.conf entry during auth skips trust\n auth, potential security issue"
},
{
"msg_contents": "> We could do something like a LOG \"connection: method=%s user=%s\n> (%s:%d)\", without the \"authenticated\" and \"identity\" terms from\n> set_authn_id(). Just to drop an idea.\n\nThat would be my inclination as well. Heck, just slap a log message\nright in the specific case statements that don't have actual auth as\ndefined by set_authn_id. This assumes anyone really cares about it\nthat much, of course. :D\n\n-- \nShaun\n\n\n",
"msg_date": "Wed, 16 Aug 2023 08:26:55 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Logging of matching pg_hba.conf entry during auth skips trust\n auth, potential security issue"
},
{
"msg_contents": "On Wed, Aug 16, 2023 at 6:27 AM Shaun Thomas\n<[email protected]> wrote:\n>\n> > We could do something like a LOG \"connection: method=%s user=%s\n> > (%s:%d)\", without the \"authenticated\" and \"identity\" terms from\n> > set_authn_id(). Just to drop an idea.\n>\n> That would be my inclination as well. Heck, just slap a log message\n> right in the specific case statements that don't have actual auth as\n> defined by set_authn_id. This assumes anyone really cares about it\n> that much, of course. :D\n\nMaybe something like the attached?\n\n- I made the check more generic, rather than hardcoding it inside the\ntrust statement, because my OAuth proposal would add a method that\nonly calls set_authn_id() some of the time.\n- I used the phrasing \"connection not authenticated\" in the hopes that\nit's a bit more greppable than just \"connection\", especially in\ncombination with the existing \"connection authenticated\" lines.\n\n(I'm reminded that we're reflecting an unauthenticated username as-is\ninto the logs, but I also don't think this makes things any worse than\nthey are today with the \"authorized\" lines.)\n\n--Jacob",
"msg_date": "Wed, 16 Aug 2023 15:11:22 -0700",
"msg_from": "Jacob Champion <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Logging of matching pg_hba.conf entry during auth skips trust\n auth, potential security issue"
},
{
"msg_contents": "Greetings,\n\n* Jacob Champion ([email protected]) wrote:\n> Maybe something like the attached?\n\n> - I used the phrasing \"connection not authenticated\" in the hopes that\n> it's a bit more greppable than just \"connection\", especially in\n> combination with the existing \"connection authenticated\" lines.\n\nThat doesn't seem quite right ... admittedly, 'trust' isn't performing\nauthentication but there can certainly be an argument made that the\nbasic 'matched a line in pg_hba.conf' is a form of authentication, and\nworse really, saying 'not authenticated' would seem to imply that we\ndidn't allow the connection when, really, we did, and that could be\nconfusing to someone.\n\nMaybe 'connection allowed' instead..?\n\nThanks,\n\nStephen",
"msg_date": "Thu, 17 Aug 2023 12:01:26 -0400",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Logging of matching pg_hba.conf entry during auth skips trust\n auth, potential security issue"
},
{
"msg_contents": "On Thu, Aug 17, 2023 at 9:01 AM Stephen Frost <[email protected]> wrote:\n> That doesn't seem quite right ... admittedly, 'trust' isn't performing\n> authentication but there can certainly be an argument made that the\n> basic 'matched a line in pg_hba.conf' is a form of authentication\n\nI'm not personally on board with this argument, but...\n\n> and\n> worse really, saying 'not authenticated' would seem to imply that we\n> didn't allow the connection when, really, we did, and that could be\n> confusing to someone.\n\n...with this one, I agree.\n\n> Maybe 'connection allowed' instead..?\n\nHm. It hasn't really been allowed yet, either. To illustrate what I mean:\n\n LOG: connection received: host=[local]\n LOG: connection allowed: user=\"jacob\" method=trust\n(/home/jacob/src/data/pg16/pg_hba.conf:117)\n LOG: connection authorized: user=jacob database=postgres\napplication_name=psql\n\nMaybe \"unauthenticated connection:\"? \"connection without\nauthentication:\"? \"connection skipped authentication:\"?\n\n--Jacob\n\n\n",
"msg_date": "Thu, 17 Aug 2023 09:42:35 -0700",
"msg_from": "Jacob Champion <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Logging of matching pg_hba.conf entry during auth skips trust\n auth, potential security issue"
},
{
"msg_contents": "Greetings,\n\n* Jacob Champion ([email protected]) wrote:\n> On Thu, Aug 17, 2023 at 9:01 AM Stephen Frost <[email protected]> wrote:\n> > Maybe 'connection allowed' instead..?\n> \n> Hm. It hasn't really been allowed yet, either. To illustrate what I mean:\n> \n> LOG: connection received: host=[local]\n> LOG: connection allowed: user=\"jacob\" method=trust\n> (/home/jacob/src/data/pg16/pg_hba.conf:117)\n> LOG: connection authorized: user=jacob database=postgres\n> application_name=psql\n> \n> Maybe \"unauthenticated connection:\"? \"connection without\n> authentication:\"? \"connection skipped authentication:\"?\n\nDon't like 'skipped' but that feels closer.\n\nHow about 'connection bypassed authentication'?\n\nThanks,\n\nStephen",
"msg_date": "Thu, 17 Aug 2023 12:46:34 -0400",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Logging of matching pg_hba.conf entry during auth skips trust\n auth, potential security issue"
},
{
"msg_contents": "On Thu, Aug 17, 2023 at 9:46 AM Stephen Frost <[email protected]> wrote:\n> Don't like 'skipped' but that feels closer.\n>\n> How about 'connection bypassed authentication'?\n\nWorks for me; see v2.\n\nThanks!\n--Jacob",
"msg_date": "Thu, 17 Aug 2023 09:53:34 -0700",
"msg_from": "Jacob Champion <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Logging of matching pg_hba.conf entry during auth skips trust\n auth, potential security issue"
},
{
"msg_contents": "On Thu, Aug 17, 2023 at 12:54 PM Jacob Champion <[email protected]> wrote:\n> On Thu, Aug 17, 2023 at 9:46 AM Stephen Frost <[email protected]> wrote:\n> > Don't like 'skipped' but that feels closer.\n> >\n> > How about 'connection bypassed authentication'?\n>\n> Works for me; see v2.\n\nFor what it's worth, my vote would be for \"connection authenticated:\n... method=trust\". The only reason we're not doing that is because\nthere's some argument that trusting that the client is who they say\nthey are is not really authentication at all. But this seems silly,\nbecause we put \"trust\" in the \"METHOD\" column of pg_hba.conf, so in\nthat case we already treat it as an authentication method. Also, any\nsuch line in pg_hba.conf still matches against the supplied IP address\nmask, which I suppose could be viewed as a form of authentication. Or\nmaybe not. But I wonder if we're just being too persnickety about\nlanguage here, in a way that maybe isn't consistent with our previous\npractice.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 17 Aug 2023 15:23:11 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Logging of matching pg_hba.conf entry during auth skips trust\n auth, potential security issue"
},
{
"msg_contents": "Greetings,\n\nOn Thu, Aug 17, 2023 at 15:23 Robert Haas <[email protected]> wrote:\n\n> On Thu, Aug 17, 2023 at 12:54 PM Jacob Champion <[email protected]>\n> wrote:\n> > On Thu, Aug 17, 2023 at 9:46 AM Stephen Frost <[email protected]>\n> wrote:\n> > > Don't like 'skipped' but that feels closer.\n> > >\n> > > How about 'connection bypassed authentication'?\n> >\n> > Works for me; see v2.\n>\n> For what it's worth, my vote would be for \"connection authenticated:\n> ... method=trust\".\n\n\nI don’t have any particular objection to this language and agree that it’s\nactually closer to how we talk about the trust auth method in our\ndocumentation.\n\nMaybe if we decided to rework the documentation … or perhaps just ripped\n“trust” out entirely … but those are whole different things from what we\nare trying to accomplish here.\n\nThanks,\n\nStephen\n\nGreetings,On Thu, Aug 17, 2023 at 15:23 Robert Haas <[email protected]> wrote:On Thu, Aug 17, 2023 at 12:54 PM Jacob Champion <[email protected]> wrote:\n> On Thu, Aug 17, 2023 at 9:46 AM Stephen Frost <[email protected]> wrote:\n> > Don't like 'skipped' but that feels closer.\n> >\n> > How about 'connection bypassed authentication'?\n>\n> Works for me; see v2.\n\nFor what it's worth, my vote would be for \"connection authenticated:\n... method=trust\". I don’t have any particular objection to this language and agree that it’s actually closer to how we talk about the trust auth method in our documentation.Maybe if we decided to rework the documentation … or perhaps just ripped “trust” out entirely … but those are whole different things from what we are trying to accomplish here.Thanks,Stephen",
"msg_date": "Thu, 17 Aug 2023 15:29:28 -0400",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Logging of matching pg_hba.conf entry during auth skips trust\n auth, potential security issue"
},
{
"msg_contents": "On Thu, Aug 17, 2023 at 03:29:28PM -0400, Stephen Frost wrote:\n> On Thu, Aug 17, 2023 at 15:23 Robert Haas <[email protected]> wrote:\n>> For what it's worth, my vote would be for \"connection authenticated:\n>> ... method=trust\".\n> \n> I don’t have any particular objection to this language and agree that it’s\n> actually closer to how we talk about the trust auth method in our\n> documentation.\n\nAfter sleeping on it, I think that I'd just agree with Robert's point\nto just use the same language as the message, while also agreeing with\nthe patch to not set MyClientConnectionInfo.authn_id in the uaTrust\ncase, only logging something under log_connections.\n\n+ * No authentication was actually performed; this happens e.g. when the\n+ * trust method is in use.\n\nThis comment should be reworded a bit, say \"No authentication identity\nwas set; blah ..\".\n\n> Maybe if we decided to rework the documentation … or perhaps just ripped\n> “trust” out entirely … but those are whole different things from what we\n> are trying to accomplish here.\n\nNot sure I see any point in doing that these days.\n--\nMichael",
"msg_date": "Fri, 18 Aug 2023 08:49:16 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Logging of matching pg_hba.conf entry during auth skips trust\n auth, potential security issue"
},
{
"msg_contents": "On Fri, Aug 18, 2023 at 08:49:16AM +0900, Michael Paquier wrote:\n> After sleeping on it, I think that I'd just agree with Robert's point\n> to just use the same language as the message, while also agreeing with\n> the patch to not set MyClientConnectionInfo.authn_id in the uaTrust\n> case, only logging something under log_connections.\n> \n> + * No authentication was actually performed; this happens e.g. when the\n> + * trust method is in use.\n> \n> This comment should be reworded a bit, say \"No authentication identity\n> was set; blah ..\".\n\nAttached is a v3 to do these two things, with adjustments for two SSL\ntests. Any objections about it?\n\n(Note: no backpatch)\n--\nMichael",
"msg_date": "Mon, 21 Aug 2023 08:57:55 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Logging of matching pg_hba.conf entry during auth skips trust\n auth, potential security issue"
},
{
"msg_contents": "On Sun, Aug 20, 2023 at 7:58 PM Michael Paquier <[email protected]> wrote:\n> Attached is a v3 to do these two things, with adjustments for two SSL\n> tests. Any objections about it?\n\n+ * No authentication identity was set; this happens e.g. when the\n+ * trust method is in use. For audit purposes, log a breadcrumb to\n+ * explain where in the HBA this happened.\n\nProposed rewrite: \"Normally, if log_connections is set, the call to\nset_authn_id will log the connection. However, if that function is\nnever called, perhaps because the trust method is in use, then we\nhandle the logging here instead.\"\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 21 Aug 2023 09:27:51 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Logging of matching pg_hba.conf entry during auth skips trust\n auth, potential security issue"
},
{
"msg_contents": "On Sun, Aug 20, 2023 at 4:58 PM Michael Paquier <[email protected]> wrote:\n> Attached is a v3 to do these two things, with adjustments for two SSL\n> tests. Any objections about it?\n\n(Sorry for the long weekend delay.) No objections; you may want to\nadjust the comment above the test block in t/001_password.pl, as well.\n\nI will ask -- more as a rhetorical question than something to resolve\nfor this patch, since the topic is going to come back with a vengeance\nfor OAuth -- what purpose the consistency here is serving. If the OP\nwants to notice when a connection that should be using strong\nauthentication is not, is it helpful to make that connection \"look the\nsame\" in the logs? I understand we've been carrying the language\n\"trust authentication method\" for a long time, but is that really the\nonly hang-up, or would there be pushback if I tried to change that\ntoo, sometime in the future?\n\nThanks,\n--Jacob\n\n\n",
"msg_date": "Mon, 21 Aug 2023 10:49:16 -0700",
"msg_from": "Jacob Champion <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Logging of matching pg_hba.conf entry during auth skips trust\n auth, potential security issue"
},
{
"msg_contents": "On Mon, Aug 21, 2023 at 09:27:51AM -0400, Robert Haas wrote:\n> + * No authentication identity was set; this happens e.g. when the\n> + * trust method is in use. For audit purposes, log a breadcrumb to\n> + * explain where in the HBA this happened.\n> \n> Proposed rewrite: \"Normally, if log_connections is set, the call to\n> set_authn_id will log the connection. However, if that function is\n> never called, perhaps because the trust method is in use, then we\n> handle the logging here instead.\"\n\nWFM.\n--\nMichael",
"msg_date": "Tue, 22 Aug 2023 08:04:18 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Logging of matching pg_hba.conf entry during auth skips trust\n auth, potential security issue"
},
{
"msg_contents": "On Mon, Aug 21, 2023 at 10:49:16AM -0700, Jacob Champion wrote:\n> On Sun, Aug 20, 2023 at 4:58 PM Michael Paquier <[email protected]> wrote:\n> > Attached is a v3 to do these two things, with adjustments for two SSL\n> > tests. Any objections about it?\n> \n> (Sorry for the long weekend delay.) No objections; you may want to\n> adjust the comment above the test block in t/001_password.pl, as well.\n\nThere are additionally two more comments in the SSL tests that could\nbe removed, I guess. Here's a v4, with Robert's latest suggestion\nadded.\n\n> I will ask -- more as a rhetorical question than something to resolve\n> for this patch, since the topic is going to come back with a vengeance\n> for OAuth -- what purpose the consistency here is serving. If the OP\n> wants to notice when a connection that should be using strong\n> authentication is not, is it helpful to make that connection \"look the\n> same\" in the logs? I understand we've been carrying the language\n> \"trust authentication method\" for a long time, but is that really the\n> only hang-up, or would there be pushback if I tried to change that\n> too, sometime in the future?\n\nI am not sure that we need to change this historic term, TBH. Perhaps\nit would be shorter to just rip off the trust method from the tree\nwith a deprecation period but that's not something I'm much in favor\noff either (I use it daily for my own stuff, as one example).\nAnother, more conservative approach may be to make it a developer-only\noption and discourage more its use in the docs.\n--\nMichael",
"msg_date": "Tue, 22 Aug 2023 08:22:36 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Logging of matching pg_hba.conf entry during auth skips trust\n auth, potential security issue"
},
{
"msg_contents": "On Mon, 21 Aug 2023 at 19:23, Michael Paquier <[email protected]> wrote:\n\nI am not sure that we need to change this historic term, TBH. Perhaps\n> it would be shorter to just rip off the trust method from the tree\n> with a deprecation period but that's not something I'm much in favor\n> off either (I use it daily for my own stuff, as one example).\n> Another, more conservative approach may be to make it a developer-only\n> option and discourage more its use in the docs.\n>\n\nI hope we're not really considering removing the \"trust\" method. For\ntesting and development purposes it's very handy — just tell the database,\nrunning in a VM, to allow all connections and just believe who they say\nthey are from a client process running in the same or a different VM, with\nno production data anywhere in site and no connection to the real network.\n\nIf people are really getting confused and using it in production, then\nchange the documentation to make it even more clear that it is a\nnon-authenticating setting which is there specifically to bypass security\nin testing contexts. Ultimately, real tools have the ability to cut your\narm off, and our documentation just needs to make clear which parts of\nPostgres are like that.\n\nOn Mon, 21 Aug 2023 at 19:23, Michael Paquier <[email protected]> wrote:\nI am not sure that we need to change this historic term, TBH. Perhaps\nit would be shorter to just rip off the trust method from the tree\nwith a deprecation period but that's not something I'm much in favor\noff either (I use it daily for my own stuff, as one example).\nAnother, more conservative approach may be to make it a developer-only\noption and discourage more its use in the docs.I hope we're not really considering removing the \"trust\" method. For testing and development purposes it's very handy — just tell the database, running in a VM, to allow all connections and just believe who they say they are from a client process running in the same or a different VM, with no production data anywhere in site and no connection to the real network.If people are really getting confused and using it in production, then change the documentation to make it even more clear that it is a non-authenticating setting which is there specifically to bypass security in testing contexts. Ultimately, real tools have the ability to cut your arm off, and our documentation just needs to make clear which parts of Postgres are like that.",
"msg_date": "Mon, 21 Aug 2023 19:43:56 -0400",
"msg_from": "Isaac Morland <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Logging of matching pg_hba.conf entry during auth skips trust\n auth, potential security issue"
},
{
"msg_contents": "On Mon, Aug 21, 2023 at 4:22 PM Michael Paquier <[email protected]> wrote:\n> There are additionally two more comments in the SSL tests that could\n> be removed, I guess. Here's a v4, with Robert's latest suggestion\n> added.\n\nLGTM.\n\n> I am not sure that we need to change this historic term, TBH. Perhaps\n> it would be shorter to just rip off the trust method from the tree\n> with a deprecation period but that's not something I'm much in favor\n> off either (I use it daily for my own stuff, as one example).\n> Another, more conservative approach may be to make it a developer-only\n> option and discourage more its use in the docs.\n\nI don't think we should get rid of anonymous connections; there are\nways to securely authorize a client connection without ever\nauthenticating the entity at the other end. I'd just like the server\nto call them what they are, because I think the distinction is\nvaluable for DBAs who are closely watching their systems.\n\n--Jacob\n\n\n",
"msg_date": "Mon, 21 Aug 2023 16:44:33 -0700",
"msg_from": "Jacob Champion <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Logging of matching pg_hba.conf entry during auth skips trust\n auth, potential security issue"
},
{
"msg_contents": "On Mon, Aug 21, 2023 at 07:43:56PM -0400, Isaac Morland wrote:\n> I hope we're not really considering removing the \"trust\" method. For\n> testing and development purposes it's very handy — just tell the database,\n> running in a VM, to allow all connections and just believe who they say\n> they are from a client process running in the same or a different VM, with\n> no production data anywhere in site and no connection to the real network.\n\nFor some benchmarking scenarios, it can actually be useful when\ntesting cases where new connections are spawned as it bypasses\nentirely the authentication path, moving the bottlenecks to different\nareas one may want to stress.\n--\nMichael",
"msg_date": "Tue, 22 Aug 2023 08:58:42 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Logging of matching pg_hba.conf entry during auth skips trust\n auth, potential security issue"
},
{
"msg_contents": "On Mon, Aug 21, 2023 at 04:44:33PM -0700, Jacob Champion wrote:\n> On Mon, Aug 21, 2023 at 4:22 PM Michael Paquier <[email protected]> wrote:\n>> There are additionally two more comments in the SSL tests that could\n>> be removed, I guess. Here's a v4, with Robert's latest suggestion\n>> added.\n> \n> LGTM.\n\nOkay. Hearing nothing else, I have gone ahead and applied v4.\n--\nMichael",
"msg_date": "Sat, 26 Aug 2023 20:17:19 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Logging of matching pg_hba.conf entry during auth skips trust\n auth, potential security issue"
}
] |
[
{
"msg_contents": "Hi,\n\nI know that this will probably get a staunch \"No\" as an answer, but...\nI'm still going to ask: Would it be possible to backport 28b5726 to\nthe PG16 branch? Even though it's clearly a new feature?\n\nI'm working on named prepared statement support in PgBouncer:\nhttps://github.com/pgbouncer/pgbouncer/pull/845 That PR is pretty\nclose to mergable (IMO) and my intention is to release a PgBouncer\nversion with prepared statement support within a few months.\n\n28b5726 allows sending Close messages from libpq, as opposed to\nsending DEALLOCATE queries to deallocate prepared statements. Without\nsupport for Close messages, libpq based clients won't be able to\ndeallocate prepared statements on PgBouncer, because PgBouncer does\nnot parse SQL queries and only looks at protocol level messages (i.e.\nClose messages for deallocation).\n\nPersonally I think backpatching 28b5726 has a really low risk of\nbreaking anything. And since PgBouncer is used a lot in the Postgres\necosystem, especially with libpq based clients, IMO it might be worth\ndeviating from the rule of not backporting features after a STABLE\nbranch has been cut. Otherwise all libpq based clients will have only\nlimited support for prepared statements with PgBouncer until PG17 is\nreleased.\n\nJelte\n\n\n",
"msg_date": "Wed, 16 Aug 2023 00:14:21 +0200",
"msg_from": "Jelte Fennema <[email protected]>",
"msg_from_op": true,
"msg_subject": "Would it be possible to backpatch Close support in libpq (28b5726) to\n PG16?"
},
{
"msg_contents": "On Wed, Aug 16, 2023 at 12:14:21AM +0200, Jelte Fennema wrote:\n> 28b5726 allows sending Close messages from libpq, as opposed to\n> sending DEALLOCATE queries to deallocate prepared statements. Without\n> support for Close messages, libpq based clients won't be able to\n> deallocate prepared statements on PgBouncer, because PgBouncer does\n> not parse SQL queries and only looks at protocol level messages (i.e.\n> Close messages for deallocation).\n\nThe RMT has the final word on anything related to the release, but we\nare discussing about adding something new to a branch that has gone\nthrough two beta cycles with a GA targetted around the end of\nSeptember ~ beginning of October based on the trends of the recent\nyears. That's out of the picture, IMO. This comes once every year.\n\n> Personally I think backpatching 28b5726 has a really low risk of\n> breaking anything.\n\nI agree about the low-risk argument, though. This is just new code.\n--\nMichael",
"msg_date": "Wed, 16 Aug 2023 07:36:07 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Would it be possible to backpatch Close support in libpq\n (28b5726) to PG16?"
},
{
"msg_contents": "On 2023-Aug-16, Michael Paquier wrote:\n\n> > Personally I think backpatching 28b5726 has a really low risk of\n> > breaking anything.\n> \n> I agree about the low-risk argument, though. This is just new code.\n\nHere's a way to think about it. If 16.1 was already out, would we add\nlibpq support for Close to 16.2?\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"La rebeldía es la virtud original del hombre\" (Arthur Schopenhauer)\n\n\n",
"msg_date": "Wed, 16 Aug 2023 00:39:20 +0200",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Would it be possible to backpatch Close support in libpq\n (28b5726) to PG16?"
},
{
"msg_contents": "On 8/15/23 15:39, Alvaro Herrera wrote:\n> On 2023-Aug-16, Michael Paquier wrote:\n> \n>>> Personally I think backpatching 28b5726 has a really low risk of\n>>> breaking anything.\n>>\n>> I agree about the low-risk argument, though. This is just new code.\n> \n> Here's a way to think about it. If 16.1 was already out, would we add\n> libpq support for Close to 16.2?\n\nSeems pretty clearly a \"no\" to me.\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n",
"msg_date": "Wed, 16 Aug 2023 16:20:09 -0700",
"msg_from": "Joe Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Would it be possible to backpatch Close support in libpq\n (28b5726) to PG16?"
}
] |
[
{
"msg_contents": "Here is a patch set with some straightforward code cleanup in index.c \nand indexcmds.c and some adjacent places.\n\nFirst, I have added const qualifiers to all the function prototypes as \nappropriate. This didn't require any additional casts or tricks.\n\nThen, I have renamed some function arguments for clarity. For example, \nseveral functions had an argument like\n\n Oid *classObjectId\n\nThis is confusing in more than one way: The \"class\" is actually the \noperator class, not the pg_class entry, and the argument is actually an \narray, not a single value as the name would suggest. The amended version\n\n const Oid *opclassIds\n\nshould be much clearer.\n\nAlso, about half the code in these files already used the better naming \nsystem, so this change also makes everything within these files more \nconsistent.\n\nThird, I removed some line breaks around the places that I touched \nanyway. In some cases, with the renaming, the lines didn't seem that \nlong anymore to warrant a line break.",
"msg_date": "Wed, 16 Aug 2023 08:04:46 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "some code cleanup in index.c and indexcmds.c"
}
] |
[
{
"msg_contents": "Hi, \n\nThe Chinese words there are ok, but the `Unix-domian` should be `Unix-domain`.\n\n\nZhang Mingli\nHashData https://www.hashdata.xyz",
"msg_date": "Wed, 16 Aug 2023 15:34:56 +0800",
"msg_from": "Zhang Mingli <[email protected]>",
"msg_from_op": true,
"msg_subject": "Fix typo in src/interfaces/libpq/po/zh_CN.po"
},
{
"msg_contents": "On 16.08.23 09:34, Zhang Mingli wrote:\n> The Chinese words there are ok, but the `Unix-domian` should be \n> `Unix-domain`.\n\nfixed, thanks\n\n\n\n",
"msg_date": "Wed, 16 Aug 2023 16:24:39 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix typo in src/interfaces/libpq/po/zh_CN.po"
},
{
"msg_contents": "> On Aug 16, 2023, at 22:24, Peter Eisentraut <[email protected]> wrote:\n> \n> On 16.08.23 09:34, Zhang Mingli wrote:\n>> The Chinese words there are ok, but the `Unix-domian` should be `Unix-domain`.\n> \n> fixed, thanks\n> \n\n\nHi, Peter, thanks and just want to make sure that it is pushed?\n\n\nZhang Mingli\nHashData https://www.hashdata.xyz\n\n\nOn Aug 16, 2023, at 22:24, Peter Eisentraut <[email protected]> wrote:On 16.08.23 09:34, Zhang Mingli wrote:The Chinese words there are ok, but the `Unix-domian` should be `Unix-domain`.fixed, thanksHi, Peter, thanks and just want to make sure that it is pushed?\nZhang MingliHashData https://www.hashdata.xyz",
"msg_date": "Sat, 19 Aug 2023 19:36:48 +0800",
"msg_from": "Zhang Mingli <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Fix typo in src/interfaces/libpq/po/zh_CN.po"
},
{
"msg_contents": "> On 19 Aug 2023, at 13:36, Zhang Mingli <[email protected]> wrote:\n> \n>> On Aug 16, 2023, at 22:24, Peter Eisentraut <[email protected]> wrote:\n>> \n>> On 16.08.23 09:34, Zhang Mingli wrote:\n>>> The Chinese words there are ok, but the `Unix-domian` should be `Unix-domain`.\n>> \n>> fixed, thanks\n> \n> Hi, Peter, thanks and just want to make sure that it is pushed?\n\nThis was fixed by Peter as mentioned upthread, but the translations are\nmaintained in its own Git repository so the commit is not visible in the main\nGit repo. Translations are synced with the main repo before releases. The\ncommit can be seen here:\n\nhttps://git.postgresql.org/gitweb/?p=pgtranslation/messages.git;a=commitdiff;h=14391f71ca61e90d52502093447fe1ee0080116f\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Mon, 21 Aug 2023 11:59:11 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix typo in src/interfaces/libpq/po/zh_CN.po"
},
{
"msg_contents": "> \n> This was fixed by Peter as mentioned upthread, but the translations are\n> maintained in its own Git repository so the commit is not visible in the main\n> Git repo. Translations are synced with the main repo before releases. The\n> commit can be seen here:\n> \n> https://git.postgresql.org/gitweb/?p=pgtranslation/messages.git;a=commitdiff;h=14391f71ca61e90d52502093447fe1ee0080116f\n> \n> --\n> Daniel Gustafsson\n> \n\nThanks, got it~\n\nZhang Mingli\nHashData https://www.hashdata.xyz\n\n\nThis was fixed by Peter as mentioned upthread, but the translations aremaintained in its own Git repository so the commit is not visible in the mainGit repo. Translations are synced with the main repo before releases. Thecommit can be seen here:https://git.postgresql.org/gitweb/?p=pgtranslation/messages.git;a=commitdiff;h=14391f71ca61e90d52502093447fe1ee0080116f--Daniel GustafssonThanks, got it~\nZhang MingliHashData https://www.hashdata.xyz",
"msg_date": "Mon, 21 Aug 2023 18:54:59 +0800",
"msg_from": "Zhang Mingli <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Fix typo in src/interfaces/libpq/po/zh_CN.po"
}
] |
[
{
"msg_contents": "Hello,\n\nThe following surprised me enough to think it might be a bug:\n(17devel)\n\nselect\n regexp_replace('Abc Def'\n , '([a-z]) ([A-Z])'\n , '\\1 ' || lower('\\2') );\n\nregexp_replace\n----------------\n Abc Def\n(1 row)\n\n-- 'Abc Def' got\n-- 'Abc def' expected\n\nWhat do you think?\n\nThanks,\n\nErik Rijkers\n\n\n\n",
"msg_date": "Wed, 16 Aug 2023 11:09:02 +0200",
"msg_from": "Erik Rijkers <[email protected]>",
"msg_from_op": true,
"msg_subject": "regexp_replace weirdness amounts to a bug?"
},
{
"msg_contents": "Hi Erik,\n\nThe regexp doesn't match your string because you're not allowing for\nany repeat characters, try adding a '+'.\n\nOn Wed, 16 Aug 2023 at 09:07, Erik Rijkers <[email protected]> wrote:\n>\n> Hello,\n>\n> The following surprised me enough to think it might be a bug:\n> (17devel)\n>\n> select\n> regexp_replace('Abc Def'\n> , '([a-z]) ([A-Z])'\n> , '\\1 ' || lower('\\2') );\n>\n> regexp_replace\n> ----------------\n> Abc Def\n> (1 row)\n>\n> -- 'Abc Def' got\n> -- 'Abc def' expected\n>\n> What do you think?\n>\n> Thanks,\n>\n> Erik Rijkers\n>\n>\n>\n\n\n",
"msg_date": "Wed, 16 Aug 2023 09:09:05 +0000",
"msg_from": "Malthe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: regexp_replace weirdness amounts to a bug?"
},
{
"msg_contents": "On 2023-Aug-16, Erik Rijkers wrote:\n\n> Hello,\n> \n> The following surprised me enough to think it might be a bug:\n> (17devel)\n> \n> select\n> regexp_replace('Abc Def'\n> , '([a-z]) ([A-Z])'\n> , '\\1 ' || lower('\\2') );\n> \n> regexp_replace\n> ----------------\n> Abc Def\n\nWhat's happening here is that the lower() is applying to the literal \\2,\nand the expansion of \\2 to 'D' occurs afterwards, when lower() has\nalready executed. Note this other example, where the literal part of\nthe replacement string is correctly lowercased:\n\nselect\n regexp_replace('Abc Def'\n , '([a-z]) ([A-Z])'\n , '\\1 ' || lower('D\\2D'));\n regexp_replace \n────────────────\n Abc dDdef\n(1 fila)\n\nI don't know how to achieve what you want.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n#error \"Operator lives in the wrong universe\"\n (\"Use of cookies in real-time system development\", M. Gleixner, M. Mc Guire)\n\n\n",
"msg_date": "Wed, 16 Aug 2023 12:07:43 +0200",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: regexp_replace weirdness amounts to a bug?"
}
] |
[
{
"msg_contents": "Hi hackers,\n\nEarlier this year I proposed a small change for the pg_stat_subscription view:\n\n------\n...it would be very useful to have an additional \"kind\" attribute for\nthis view. This will save the user from needing to do mental\ngymnastics every time just to recognise what kind of process they are\nlooking at.\n------\n\nAt that time Amit replied [1] that this could be posted as a separate\nenhancement thread.\n\nNow that the LogicalRepWorkerType has been recently pushed [2]\n(something with changes in the same area of the code) it seemed the\nright time to resurrect my pg_stat_subscription proposal.\n\nPSA patch v1.\n\nThoughts?\n\n------\n[1] https://www.postgresql.org/message-id/CAA4eK1JO54%3D3s0KM9iZGSrQmmfzk9PEOKkW8TXjo2OKaKrSGCA%40mail.gmail.com\n[2] https://github.com/postgres/postgres/commit/2a8b40e3681921943a2989fd4ec6cdbf8766566c\n\nKind Regards,\nPeter Smith.\nFujitsu Australia",
"msg_date": "Wed, 16 Aug 2023 19:14:18 +1000",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": true,
"msg_subject": "Add 'worker_type' to pg_stat_subscription"
},
{
"msg_contents": "On Wed, Aug 16, 2023 at 07:14:18PM +1000, Peter Smith wrote:\n> Earlier this year I proposed a small change for the pg_stat_subscription view:\n> \n> ------\n> ...it would be very useful to have an additional \"kind\" attribute for\n> this view. This will save the user from needing to do mental\n> gymnastics every time just to recognise what kind of process they are\n> looking at.\n> ------\n> \n> At that time Amit replied [1] that this could be posted as a separate\n> enhancement thread.\n> \n> Now that the LogicalRepWorkerType has been recently pushed [2]\n> (something with changes in the same area of the code) it seemed the\n> right time to resurrect my pg_stat_subscription proposal.\n\nThis sounds generally reasonable to me.\n\n <row>\n <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n+ <structfield>worker_type</structfield> <type>text</type>\n+ </para>\n+ <para>\n+ Type of the subscription worker process. Possible values are:\n+ <itemizedlist>\n+ <listitem>\n+ <para>\n+ <literal>a</literal>: apply worker\n+ </para>\n+ </listitem>\n+ <listitem>\n+ <para>\n+ <literal>p</literal>: parallel apply worker\n+ </para>\n+ </listitem>\n+ <listitem>\n+ <para>\n+ <literal>t</literal>: tablesync worker\n+ </para>\n+ </listitem>\n+ </itemizedlist>\n+ </para></entry>\n+ </row>\n\nIs there any reason not to spell out the names? I think that would match\nthe other system views better (e.g., backend_type in pg_stat_activity).\nAlso, instead of \"tablesync worker\", I'd suggest using \"synchronization\nworker\" to match the name used elsewhere in this table.\n\nI see that the table refers to \"leader apply workers\". Would those show up\nas parallel apply workers in the view? Can we add another worker type for\nthose?\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 1 Sep 2023 12:41:54 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add 'worker_type' to pg_stat_subscription"
},
{
"msg_contents": "On Sat, Sep 2, 2023 at 7:41 AM Nathan Bossart <[email protected]> wrote:\n\nThanks for your interest in this patch.\n\n> Is there any reason not to spell out the names? I think that would match\n> the other system views better (e.g., backend_type in pg_stat_activity).\n\nI had thought it might be simpler in case someone wanted to query by\ntype. But your suggestion for consistency is probably better, so I\nchanged to do it that way. The help is also simplified to match the\nother 'backend_type' you cited.\n\n> Also, instead of \"tablesync worker\", I'd suggest using \"synchronization\n> worker\" to match the name used elsewhere in this table.\n>\n\nChanged to \"table synchronization worker\".\n\n> I see that the table refers to \"leader apply workers\". Would those show up\n> as parallel apply workers in the view? Can we add another worker type for\n> those?\n\nInternally there are only 3 worker types: A \"leader\" apply worker is\nbasically the same as a regular apply worker, except it has other\nparallel apply workers associated with it.\n\nI felt that pretending there are 4 types in the view would be\nconfusing. Instead, I just removed the word \"leader\". Now there are:\n\"apply worker\"\n\"parallel apply worker\"\n\"table synchronization worker\"\n\nPSA patch v2.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia",
"msg_date": "Wed, 6 Sep 2023 09:02:21 +1200",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add 'worker_type' to pg_stat_subscription"
},
{
"msg_contents": "On Wed, Sep 06, 2023 at 09:02:21AM +1200, Peter Smith wrote:\n> On Sat, Sep 2, 2023 at 7:41 AM Nathan Bossart <[email protected]> wrote:\n>> I see that the table refers to \"leader apply workers\". Would those show up\n>> as parallel apply workers in the view? Can we add another worker type for\n>> those?\n> \n> Internally there are only 3 worker types: A \"leader\" apply worker is\n> basically the same as a regular apply worker, except it has other\n> parallel apply workers associated with it.\n> \n> I felt that pretending there are 4 types in the view would be\n> confusing. Instead, I just removed the word \"leader\". Now there are:\n> \"apply worker\"\n> \"parallel apply worker\"\n> \"table synchronization worker\"\n\nOkay. Should we omit \"worker\" for each of the types? Since these are the\nvalues for the \"worker_type\" column, it seems a bit redundant. For\nexample, we don't add \"backend\" to the end of each value for backend_type\nin pg_stat_activity.\n\nI wonder if we could add the new field to the end of\npg_stat_get_subscription() so that we could simplify this patch a bit. At\nthe moment, a big chunk of it is dedicated to reordering the values.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 5 Sep 2023 14:49:55 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add 'worker_type' to pg_stat_subscription"
},
{
"msg_contents": "On Wed, Sep 6, 2023 at 9:49 AM Nathan Bossart <[email protected]> wrote:\n>\n> On Wed, Sep 06, 2023 at 09:02:21AM +1200, Peter Smith wrote:\n> > On Sat, Sep 2, 2023 at 7:41 AM Nathan Bossart <[email protected]> wrote:\n> >> I see that the table refers to \"leader apply workers\". Would those show up\n> >> as parallel apply workers in the view? Can we add another worker type for\n> >> those?\n> >\n> > Internally there are only 3 worker types: A \"leader\" apply worker is\n> > basically the same as a regular apply worker, except it has other\n> > parallel apply workers associated with it.\n> >\n> > I felt that pretending there are 4 types in the view would be\n> > confusing. Instead, I just removed the word \"leader\". Now there are:\n> > \"apply worker\"\n> > \"parallel apply worker\"\n> > \"table synchronization worker\"\n>\n> Okay. Should we omit \"worker\" for each of the types? Since these are the\n> values for the \"worker_type\" column, it seems a bit redundant. For\n> example, we don't add \"backend\" to the end of each value for backend_type\n> in pg_stat_activity.\n>\n> I wonder if we could add the new field to the end of\n> pg_stat_get_subscription() so that we could simplify this patch a bit. At\n> the moment, a big chunk of it is dedicated to reordering the values.\n>\n\nModified as suggested. PSA v3.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia",
"msg_date": "Thu, 7 Sep 2023 12:36:29 +1200",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add 'worker_type' to pg_stat_subscription"
},
{
"msg_contents": "On Thu, Sep 07, 2023 at 12:36:29PM +1200, Peter Smith wrote:\n> Modified as suggested. PSA v3.\n\nThanks. I've attached v4 with a couple of small changes. Notably, I've\nmoved the worker_type column to before the pid column, as it felt more\nnatural to me to keep the PID columns together. I've also added an\nelog(ERROR, ...) for WORKERTYPE_UNKNOWN, as that seems to be the standard\npractice elsewhere. That being said, are we absolutely confident that this\nreally cannot happen? I haven't looked too closely, but if there is a\nsmall race or something that could cause us to see a worker with this type,\nperhaps it would be better to actually list it as \"unknown\". Thoughts?\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Thu, 7 Sep 2023 15:28:34 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add 'worker_type' to pg_stat_subscription"
},
{
"msg_contents": "On Fri, Sep 8, 2023 at 8:28 AM Nathan Bossart <[email protected]> wrote:\n>\n> On Thu, Sep 07, 2023 at 12:36:29PM +1200, Peter Smith wrote:\n> > Modified as suggested. PSA v3.\n>\n> Thanks. I've attached v4 with a couple of small changes. Notably, I've\n> moved the worker_type column to before the pid column, as it felt more\n> natural to me to keep the PID columns together. I've also added an\n> elog(ERROR, ...) for WORKERTYPE_UNKNOWN, as that seems to be the standard\n> practice elsewhere.\n> That being said, are we absolutely confident that this\n> really cannot happen? I haven't looked too closely, but if there is a\n> small race or something that could cause us to see a worker with this type,\n> perhaps it would be better to actually list it as \"unknown\". Thoughts?\n\nThe type is only assigned during worker process launch, and during\nprocess cleanup [1]. It's only possible to be UNKNOWN after\nlogicalrep_worker_cleanup().\n\nAFAIK the stats can never see a worker with an UNKNOWN type, although\nit was due to excessive caution against something unforeseen that my\noriginal code did below instead of the elog.\n\n+ case WORKERTYPE_UNKNOWN: /* should not be possible */\n+ nulls[9] = true;\n\nAdding \"unknown\" for something that is supposed to be impossible might\nbe slight overkill, but so long as there is no obligation to write\nabout \"unknown\" in the PG DOCS then I agree it is probably better to\ndo that,\n\n------\n[1] https://github.com/search?q=repo%3Apostgres%2Fpostgres%20%20worker-%3Etype&type=code\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Tue, 12 Sep 2023 13:07:51 +1000",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add 'worker_type' to pg_stat_subscription"
},
{
"msg_contents": "On Tue, Sep 12, 2023 at 01:07:51PM +1000, Peter Smith wrote:\n> The type is only assigned during worker process launch, and during\n> process cleanup [1]. It's only possible to be UNKNOWN after\n> logicalrep_worker_cleanup().\n> \n> AFAIK the stats can never see a worker with an UNKNOWN type, although\n> it was due to excessive caution against something unforeseen that my\n> original code did below instead of the elog.\n> \n> + case WORKERTYPE_UNKNOWN: /* should not be possible */\n> + nulls[9] = true;\n> \n> Adding \"unknown\" for something that is supposed to be impossible might\n> be slight overkill, but so long as there is no obligation to write\n> about \"unknown\" in the PG DOCS then I agree it is probably better to\n> do that,\n\nUsing an elog() is OK IMO. pg_stat_get_subscription() holds\nLogicalRepWorkerLock in shared mode, and the only code path setting a\nworker to WORKERTYPE_UNKNOWN requires that this same LWLock is hold in\nexclusive mode while resetting all the shmem fields of the\nsubscription entry cleaned up, which is what\npg_stat_get_subscription() uses to check if a subscription should be\nincluded in its SRF.\n\nShouldn't this patch add or tweak some SQL queries in\nsrc/test/subscription/ to show some worker types, at least?\n--\nMichael",
"msg_date": "Tue, 12 Sep 2023 12:43:54 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add 'worker_type' to pg_stat_subscription"
},
{
"msg_contents": "On Tue, Sep 12, 2023 at 1:44 PM Michael Paquier <[email protected]> wrote:\n>\n> On Tue, Sep 12, 2023 at 01:07:51PM +1000, Peter Smith wrote:\n> > The type is only assigned during worker process launch, and during\n> > process cleanup [1]. It's only possible to be UNKNOWN after\n> > logicalrep_worker_cleanup().\n> >\n> > AFAIK the stats can never see a worker with an UNKNOWN type, although\n> > it was due to excessive caution against something unforeseen that my\n> > original code did below instead of the elog.\n> >\n> > + case WORKERTYPE_UNKNOWN: /* should not be possible */\n> > + nulls[9] = true;\n> >\n> > Adding \"unknown\" for something that is supposed to be impossible might\n> > be slight overkill, but so long as there is no obligation to write\n> > about \"unknown\" in the PG DOCS then I agree it is probably better to\n> > do that,\n>\n> Using an elog() is OK IMO. pg_stat_get_subscription() holds\n> LogicalRepWorkerLock in shared mode, and the only code path setting a\n> worker to WORKERTYPE_UNKNOWN requires that this same LWLock is hold in\n> exclusive mode while resetting all the shmem fields of the\n> subscription entry cleaned up, which is what\n> pg_stat_get_subscription() uses to check if a subscription should be\n> included in its SRF.\n>\n> Shouldn't this patch add or tweak some SQL queries in\n> src/test/subscription/ to show some worker types, at least?\n\nRight. I found just a single test currently using pg_stat_subscription\ncatalog. I added a worker_type check for that.\n\nPSA v5\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia",
"msg_date": "Tue, 12 Sep 2023 19:00:14 +1000",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add 'worker_type' to pg_stat_subscription"
},
{
"msg_contents": "On Tue, Sep 12, 2023 at 07:00:14PM +1000, Peter Smith wrote:\n> Right. I found just a single test currently using pg_stat_subscription\n> catalog. I added a worker_type check for that.\n\nThis looks enough to me, thanks!\n--\nMichael",
"msg_date": "Wed, 13 Sep 2023 12:04:58 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add 'worker_type' to pg_stat_subscription"
},
{
"msg_contents": "Hi!\n\nI did a look at the patch, like the idea. The overall code is in a good\ncondition, implements the described feature.\n\nSide note: this is not a problem of this particular patch, but in\npg_stat_get_subscription and many other places, there\nis a switch with worker types. Can we use a default section there to have\nan explicit error instead of the compiler\nwarnings if somehow we decide to add another one worker type?\n\nSo, should we mark this thread as RfC?\n\n-- \nBest regards,\nMaxim Orlov.\n\nHi!I did a look at the patch, like the idea. The overall code is in a good condition, implements the described feature.Side note: this is not a problem of this particular patch, but in pg_stat_get_subscription and many other places, there is a switch with worker types. Can we use a default section there to have an explicit error instead of the compiler warnings if somehow we decide to add another one worker type? So, should we mark this thread as RfC?-- Best regards,Maxim Orlov.",
"msg_date": "Wed, 13 Sep 2023 17:06:28 +0300",
"msg_from": "Maxim Orlov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add 'worker_type' to pg_stat_subscription"
},
{
"msg_contents": "On Wed, Sep 13, 2023 at 05:06:28PM +0300, Maxim Orlov wrote:\n> I did a look at the patch, like the idea. The overall code is in a good\n> condition, implements the described feature.\n\nThanks for reviewing.\n\n> Side note: this is not a problem of this particular patch, but in\n> pg_stat_get_subscription and many other places, there\n> is a switch with worker types. Can we use a default section there to have\n> an explicit error instead of the compiler\n> warnings if somehow we decide to add another one worker type?\n\n-1. We want such compiler warnings to remind us to adjust the code\naccordingly. If we just rely on an ERROR in the default section, we might\nmiss it if there isn't a relevant test.\n\n> So, should we mark this thread as RfC?\n\nI've done so. Barring additional feedback, I intend to commit this in the\nnext few days.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 13 Sep 2023 09:59:04 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add 'worker_type' to pg_stat_subscription"
},
{
"msg_contents": "On Wed, Sep 13, 2023 at 09:59:04AM -0700, Nathan Bossart wrote:\n> On Wed, Sep 13, 2023 at 05:06:28PM +0300, Maxim Orlov wrote:\n>> So, should we mark this thread as RfC?\n> \n> I've done so. Barring additional feedback, I intend to commit this in the\n> next few days.\n\nNote to self: this needs a catversion bump.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 13 Sep 2023 10:54:49 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add 'worker_type' to pg_stat_subscription"
},
{
"msg_contents": "On Wed, Sep 13, 2023 at 09:59:04AM -0700, Nathan Bossart wrote:\n> On Wed, Sep 13, 2023 at 05:06:28PM +0300, Maxim Orlov wrote:\n>> So, should we mark this thread as RfC?\n> \n> I've done so. Barring additional feedback, I intend to commit this in the\n> next few days.\n\nI did some staging work for the patch (attached). The one code change I\nmade was for the new test. Instead of adding a new test, I figured we\ncould modify the preceding test to check for the expected worker type\ninstead of whether relid is NULL. ISTM this relid check is intended to\nfilter for the apply worker, anyway.\n\nThe only reason I didn't apply this already is because IMHO we should\nadjust the worker types and the documentation for the view to be\nconsistent. For example, the docs say \"leader apply worker\" but the view\njust calls them \"apply\" workers. The docs say \"synchronization worker\" but\nthe view calls them \"table synchronization\" workers. My first instinct is\nto call apply workers \"leader apply\" workers in the view, and to call table\nsynchronization workers \"table synchronization workers\" in the docs.\n\nThoughts?\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Thu, 14 Sep 2023 15:04:19 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add 'worker_type' to pg_stat_subscription"
},
{
"msg_contents": "On Thu, Sep 14, 2023 at 03:04:19PM -0700, Nathan Bossart wrote:\n> The only reason I didn't apply this already is because IMHO we should\n> adjust the worker types and the documentation for the view to be\n> consistent. For example, the docs say \"leader apply worker\" but the view\n> just calls them \"apply\" workers. The docs say \"synchronization worker\" but\n> the view calls them \"table synchronization\" workers. My first instinct is\n> to call apply workers \"leader apply\" workers in the view, and to call table\n> synchronization workers \"table synchronization workers\" in the docs.\n\nConcretely, like this.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Fri, 15 Sep 2023 09:35:38 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add 'worker_type' to pg_stat_subscription"
},
{
"msg_contents": "On Fri, Sep 15, 2023 at 09:35:38AM -0700, Nathan Bossart wrote:\n> Concretely, like this.\n\nThere are two references to \"synchronization worker\" in tablesync.c\n(exit routine and busy loop), and a bit more of \"sync worker\"..\nAnyway, these don't matter much, but there are two errmsgs where the\nterm \"tablesync worker\" is used. Even if they are internal, these\ncould be made more consistent at least?\n\nIn config.sgml, max_sync_workers_per_subscription's description uses\n\"synchronization workers\". In the second case, adding \"table\" makes\nlittle sense, but could it for the two other sentences?\n--\nMichael",
"msg_date": "Sat, 16 Sep 2023 09:13:18 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add 'worker_type' to pg_stat_subscription"
},
{
"msg_contents": "On Sat, Sep 16, 2023 at 1:06 AM Nathan Bossart <[email protected]> wrote:\n>\n> On Thu, Sep 14, 2023 at 03:04:19PM -0700, Nathan Bossart wrote:\n> > The only reason I didn't apply this already is because IMHO we should\n> > adjust the worker types and the documentation for the view to be\n> > consistent. For example, the docs say \"leader apply worker\" but the view\n> > just calls them \"apply\" workers. The docs say \"synchronization worker\" but\n> > the view calls them \"table synchronization\" workers. My first instinct is\n> > to call apply workers \"leader apply\" workers in the view, and to call table\n> > synchronization workers \"table synchronization workers\" in the docs.\n>\n> Concretely, like this.\n>\n\nI think there is a merit in keeping the worker type as 'sync' or\n'synchronization' because these would be used in future for syncing\nother objects like sequences. One more thing that slightly looks odd\nis the 'leader apply' type of worker, won't this be confusing when\nthere is no parallel apply worker in the system? In this regard,\nprobably existing documentation could also be improved.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Sat, 16 Sep 2023 18:09:48 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add 'worker_type' to pg_stat_subscription"
},
{
"msg_contents": "On Sat, Sep 16, 2023 at 06:09:48PM +0530, Amit Kapila wrote:\n> I think there is a merit in keeping the worker type as 'sync' or\n> 'synchronization' because these would be used in future for syncing\n> other objects like sequences. One more thing that slightly looks odd\n> is the 'leader apply' type of worker, won't this be confusing when\n> there is no parallel apply worker in the system? In this regard,\n> probably existing documentation could also be improved.\n\nThese are good points. I went ahead and adjusted the patch back to using\n\"apply\" for [leader] apply workers and to using \"synchronization\" for\nsynchronization workers. I also adjusted a couple of the error messages\nthat Michael pointed out to say \"synchronization worker\" instead of \"table\nsynchronization worker\" or \"tablesync worker\".\n\nThis still leaves the possibility for confusion with the documentation's\nuse of \"leader apply worker\", but I haven't touched that for now.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Sat, 16 Sep 2023 13:40:41 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add 'worker_type' to pg_stat_subscription"
},
{
"msg_contents": "IIUC some future feature syncing of sequences is likely to share a lot\nof the tablesync worker code (maybe it is only differentiated by the\nrelid being for a RELKIND_SEQUENCE?).\n\nThe original intent of this stats worker-type patch was to be able to\neasily know the type of the process without having to dig through\nother attributes (like relid etc.) to infer it. If you feel\ndifferentiating kinds of syncing processes won't be of interest to\nusers then just generically calling it \"synchronization\" is fine by\nme. OTOH, if users might care what 'kind' of syncing it is, perhaps\nleaving the stats attribute as \"table synchronization\" (and some\nfuture patch would add \"sequence synchronization\") is better.\n\nTBH, I am not sure which option is best, so I am happy to go with the consensus.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Mon, 18 Sep 2023 10:40:22 +1000",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add 'worker_type' to pg_stat_subscription"
},
{
"msg_contents": "On Sun, Sep 17, 2023 at 2:10 AM Nathan Bossart <[email protected]> wrote:\n>\n> On Sat, Sep 16, 2023 at 06:09:48PM +0530, Amit Kapila wrote:\n>\n> This still leaves the possibility for confusion with the documentation's\n> use of \"leader apply worker\", but I haven't touched that for now.\n>\n\nWe may want to fix that separately but as you have raised here, I\nfound the following two places in docs which could be a bit confusing.\n\n\"Specifies maximum number of logical replication workers. This\nincludes leader apply workers, parallel apply workers, and table\nsynchronization\"\n\n\"\"OID of the relation that the worker is synchronizing; NULL for the\nleader apply worker and parallel apply workers\"\n\nOne simple idea to reduce confusion could be to use (leader) in the\nabove two places. Do you see any other place which could be confusing\nand what do you suggest to fix it?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 18 Sep 2023 09:13:25 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add 'worker_type' to pg_stat_subscription"
},
{
"msg_contents": "On Mon, Sep 18, 2023 at 6:10 AM Peter Smith <[email protected]> wrote:\n>\n> IIUC some future feature syncing of sequences is likely to share a lot\n> of the tablesync worker code (maybe it is only differentiated by the\n> relid being for a RELKIND_SEQUENCE?).\n>\n> The original intent of this stats worker-type patch was to be able to\n> easily know the type of the process without having to dig through\n> other attributes (like relid etc.) to infer it.\n>\n\nThat makes sense and I think it will probably be helpful in debugging.\nFor example, I am not sure the following and similar changes in the\npatch are a good idea:\nif (am_tablesync_worker())\n ereport(LOG,\n- (errmsg(\"logical replication table synchronization worker for\nsubscription \\\"%s\\\", table \\\"%s\\\" has started\",\n+ (errmsg(\"logical replication synchronization worker for subscription\n\\\"%s\\\", table \\\"%s\\\" has started\",\n\nI think it would be sometimes helpful in debugging to know the type of\nsync worker, so keeping the type in the above message would be\nhelpful.\n\n> If you feel\n> differentiating kinds of syncing processes won't be of interest to\n> users then just generically calling it \"synchronization\" is fine by\n> me. OTOH, if users might care what 'kind' of syncing it is, perhaps\n> leaving the stats attribute as \"table synchronization\" (and some\n> future patch would add \"sequence synchronization\") is better.\n>\n\nEarlier, I thought it would be better to keep it generic but after\nseeing your point and the latest changes in the patch it seems\ndifferentiating between types of sync workers would be a good idea.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 18 Sep 2023 09:31:57 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add 'worker_type' to pg_stat_subscription"
},
{
"msg_contents": "On Mon, Sep 18, 2023 at 1:43 PM Amit Kapila <[email protected]> wrote:\n>\n> On Sun, Sep 17, 2023 at 2:10 AM Nathan Bossart <[email protected]> wrote:\n> >\n> > On Sat, Sep 16, 2023 at 06:09:48PM +0530, Amit Kapila wrote:\n> >\n> > This still leaves the possibility for confusion with the documentation's\n> > use of \"leader apply worker\", but I haven't touched that for now.\n> >\n>\n> We may want to fix that separately but as you have raised here, I\n> found the following two places in docs which could be a bit confusing.\n>\n> \"Specifies maximum number of logical replication workers. This\n> includes leader apply workers, parallel apply workers, and table\n> synchronization\"\n>\n> \"\"OID of the relation that the worker is synchronizing; NULL for the\n> leader apply worker and parallel apply workers\"\n>\n> One simple idea to reduce confusion could be to use (leader) in the\n> above two places. Do you see any other place which could be confusing\n> and what do you suggest to fix it?\n>\n\nIIRC we first encountered this problem with the parallel apply workers\nwere introduced -- \"leader\" was added wherever we needed to\ndistinguish the main apply and the parallel apply worker. Perhaps at\nthat time, we ought to have changed it *everywhere* instead of\nchanging only the ambiguous places. Lately, I've been thinking it\nwould have been easier to have *one* rule and always call the (main)\napply worker the \"leader apply\" worker -- simply because 2 names\n(\"leader apply\" and \"parallel apply\") are easier to explain than 3\nnames.\n\nA \"leader apply\" worker with no \"parallel apply\" workers is a bit like\nthe \"boss\" of a company that has no employees -- IMO it's OK to still\nsay that they are the \"boss\".\n\nRegardless, I think changing this in other docs and other code is\noutside the scope of this simple pg stats patch -- here we can just\nchange the relevant config docs and the stats attribute value to\n\"leader apply\" and leave it at that.\n\nChanging every other place to consistently say \"leader apply\" is a\nbigger task for another thread because we will find lots more places\nto change. For example, there are messages like: \"logical replication\napply worker for subscription \\\"%s\\\" has started\" that perhaps should\nsay \"logical replication leader apply worker for subscription \\\"%s\\\"\nhas started\". Such changes don't belong in this stats patch.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Mon, 18 Sep 2023 16:56:46 +1000",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add 'worker_type' to pg_stat_subscription"
},
{
"msg_contents": "On Mon, Sep 18, 2023 at 04:56:46PM +1000, Peter Smith wrote:\n> On Mon, Sep 18, 2023 at 1:43 PM Amit Kapila <[email protected]> wrote:\n>> One simple idea to reduce confusion could be to use (leader) in the\n>> above two places. Do you see any other place which could be confusing\n>> and what do you suggest to fix it?\n> \n> IIRC we first encountered this problem with the parallel apply workers\n> were introduced -- \"leader\" was added wherever we needed to\n> distinguish the main apply and the parallel apply worker. Perhaps at\n> that time, we ought to have changed it *everywhere* instead of\n> changing only the ambiguous places. Lately, I've been thinking it\n> would have been easier to have *one* rule and always call the (main)\n> apply worker the \"leader apply\" worker -- simply because 2 names\n> (\"leader apply\" and \"parallel apply\") are easier to explain than 3\n> names.\n> \n> A \"leader apply\" worker with no \"parallel apply\" workers is a bit like\n> the \"boss\" of a company that has no employees -- IMO it's OK to still\n> say that they are the \"boss\".\n\n From the latest discussion, it sounds like you (Peter and Amit) are leaning\nmore towards something like the v7 patch [0]. I'm okay with that. Perhaps\nit'd be worth starting a new thread after this one to make the terminology\nconsistent in the docs, error messages, views, etc. Fortunately, we have\nsome time to straighten this out for v17.\n\n> Regardless, I think changing this in other docs and other code is\n> outside the scope of this simple pg stats patch -- here we can just\n> change the relevant config docs and the stats attribute value to\n> \"leader apply\" and leave it at that.\n> \n> Changing every other place to consistently say \"leader apply\" is a\n> bigger task for another thread because we will find lots more places\n> to change. For example, there are messages like: \"logical replication\n> apply worker for subscription \\\"%s\\\" has started\" that perhaps should\n> say \"logical replication leader apply worker for subscription \\\"%s\\\"\n> has started\". Such changes don't belong in this stats patch.\n\n+1\n\n[0] https://postgr.es/m/attachment/150345/v7-0001-Add-worker-type-to-pg_stat_subscription.patch\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 18 Sep 2023 08:20:03 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add 'worker_type' to pg_stat_subscription"
},
{
"msg_contents": "On Tue, Sep 19, 2023 at 1:20 AM Nathan Bossart <[email protected]> wrote:\n>\n> On Mon, Sep 18, 2023 at 04:56:46PM +1000, Peter Smith wrote:\n> > On Mon, Sep 18, 2023 at 1:43 PM Amit Kapila <[email protected]> wrote:\n> >> One simple idea to reduce confusion could be to use (leader) in the\n> >> above two places. Do you see any other place which could be confusing\n> >> and what do you suggest to fix it?\n> >\n> > IIRC we first encountered this problem with the parallel apply workers\n> > were introduced -- \"leader\" was added wherever we needed to\n> > distinguish the main apply and the parallel apply worker. Perhaps at\n> > that time, we ought to have changed it *everywhere* instead of\n> > changing only the ambiguous places. Lately, I've been thinking it\n> > would have been easier to have *one* rule and always call the (main)\n> > apply worker the \"leader apply\" worker -- simply because 2 names\n> > (\"leader apply\" and \"parallel apply\") are easier to explain than 3\n> > names.\n> >\n> > A \"leader apply\" worker with no \"parallel apply\" workers is a bit like\n> > the \"boss\" of a company that has no employees -- IMO it's OK to still\n> > say that they are the \"boss\".\n>\n> From the latest discussion, it sounds like you (Peter and Amit) are leaning\n> more towards something like the v7 patch [0]. I'm okay with that. Perhaps\n> it'd be worth starting a new thread after this one to make the terminology\n> consistent in the docs, error messages, views, etc. Fortunately, we have\n> some time to straighten this out for v17.\n>\n\nYes, the v7 patch looked good to me.\n\n> > Regardless, I think changing this in other docs and other code is\n> > outside the scope of this simple pg stats patch -- here we can just\n> > change the relevant config docs and the stats attribute value to\n> > \"leader apply\" and leave it at that.\n> >\n> > Changing every other place to consistently say \"leader apply\" is a\n> > bigger task for another thread because we will find lots more places\n> > to change. For example, there are messages like: \"logical replication\n> > apply worker for subscription \\\"%s\\\" has started\" that perhaps should\n> > say \"logical replication leader apply worker for subscription \\\"%s\\\"\n> > has started\". Such changes don't belong in this stats patch.\n>\n> +1\n>\n> [0] https://postgr.es/m/attachment/150345/v7-0001-Add-worker-type-to-pg_stat_subscription.patch\n>\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Tue, 19 Sep 2023 07:18:44 +1000",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add 'worker_type' to pg_stat_subscription"
},
{
"msg_contents": "On Mon, Sep 18, 2023 at 8:50 PM Nathan Bossart <[email protected]> wrote:\n>\n> On Mon, Sep 18, 2023 at 04:56:46PM +1000, Peter Smith wrote:\n> > On Mon, Sep 18, 2023 at 1:43 PM Amit Kapila <[email protected]> wrote:\n> >> One simple idea to reduce confusion could be to use (leader) in the\n> >> above two places. Do you see any other place which could be confusing\n> >> and what do you suggest to fix it?\n> >\n> > IIRC we first encountered this problem with the parallel apply workers\n> > were introduced -- \"leader\" was added wherever we needed to\n> > distinguish the main apply and the parallel apply worker. Perhaps at\n> > that time, we ought to have changed it *everywhere* instead of\n> > changing only the ambiguous places. Lately, I've been thinking it\n> > would have been easier to have *one* rule and always call the (main)\n> > apply worker the \"leader apply\" worker -- simply because 2 names\n> > (\"leader apply\" and \"parallel apply\") are easier to explain than 3\n> > names.\n> >\n> > A \"leader apply\" worker with no \"parallel apply\" workers is a bit like\n> > the \"boss\" of a company that has no employees -- IMO it's OK to still\n> > say that they are the \"boss\".\n>\n> From the latest discussion, it sounds like you (Peter and Amit) are leaning\n> more towards something like the v7 patch [0].\n>\n\nI am of the opinion that worker_type should be 'apply' instead of\n'leader apply' because even when it is a leader for parallel apply\nworker, it could perform individual transactions apply. For reference,\nI checked pg_stat_activity.backend_type, there is nothing called main\nor leader backend even when the backend is involved in parallel query.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 19 Sep 2023 08:36:35 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add 'worker_type' to pg_stat_subscription"
},
{
"msg_contents": "On Tue, Sep 19, 2023 at 08:36:35AM +0530, Amit Kapila wrote:\n> I am of the opinion that worker_type should be 'apply' instead of\n> 'leader apply' because even when it is a leader for parallel apply\n> worker, it could perform individual transactions apply. For reference,\n> I checked pg_stat_activity.backend_type, there is nothing called main\n> or leader backend even when the backend is involved in parallel query.\n\nOkay. Here is v9 of the patch with this change.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Wed, 20 Sep 2023 12:30:38 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add 'worker_type' to pg_stat_subscription"
},
{
"msg_contents": "On Thu, Sep 21, 2023 at 5:30 AM Nathan Bossart <[email protected]> wrote:\n>\n> On Tue, Sep 19, 2023 at 08:36:35AM +0530, Amit Kapila wrote:\n> > I am of the opinion that worker_type should be 'apply' instead of\n> > 'leader apply' because even when it is a leader for parallel apply\n> > worker, it could perform individual transactions apply. For reference,\n> > I checked pg_stat_activity.backend_type, there is nothing called main\n> > or leader backend even when the backend is involved in parallel query.\n>\n> Okay. Here is v9 of the patch with this change.\n>\n\nOne question -- the patch comment still says \"Bumps catversion.\", but\ncatversion.h change is missing from the v9 patch?\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Thu, 21 Sep 2023 09:01:01 +1000",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add 'worker_type' to pg_stat_subscription"
},
{
"msg_contents": "On Thu, Sep 21, 2023 at 09:01:01AM +1000, Peter Smith wrote:\n> One question -- the patch comment still says \"Bumps catversion.\", but\n> catversion.h change is missing from the v9 patch?\n\nYeah, previous patches did that, but it is no big deal. My take is\nthat it is a good practice to never do a catversion bump in posted\npatches, and that it is equally a good practice from Nathan to be\nreminded about that with the addition of a note in the commit message\nof the patch posted.\n--\nMichael",
"msg_date": "Thu, 21 Sep 2023 08:14:23 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add 'worker_type' to pg_stat_subscription"
},
{
"msg_contents": "On Thu, Sep 21, 2023 at 08:14:23AM +0900, Michael Paquier wrote:\n> On Thu, Sep 21, 2023 at 09:01:01AM +1000, Peter Smith wrote:\n>> One question -- the patch comment still says \"Bumps catversion.\", but\n>> catversion.h change is missing from the v9 patch?\n> \n> Yeah, previous patches did that, but it is no big deal. My take is\n> that it is a good practice to never do a catversion bump in posted\n> patches, and that it is equally a good practice from Nathan to be\n> reminded about that with the addition of a note in the commit message\n> of the patch posted.\n\nRight, I'll take care of it before committing. I'm trying to make sure I\ndon't forget. :)\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 20 Sep 2023 16:34:51 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add 'worker_type' to pg_stat_subscription"
},
{
"msg_contents": "On Thu, Sep 21, 2023 at 9:34 AM Nathan Bossart <[email protected]> wrote:\n>\n> On Thu, Sep 21, 2023 at 08:14:23AM +0900, Michael Paquier wrote:\n> > On Thu, Sep 21, 2023 at 09:01:01AM +1000, Peter Smith wrote:\n> >> One question -- the patch comment still says \"Bumps catversion.\", but\n> >> catversion.h change is missing from the v9 patch?\n> >\n> > Yeah, previous patches did that, but it is no big deal. My take is\n> > that it is a good practice to never do a catversion bump in posted\n> > patches, and that it is equally a good practice from Nathan to be\n> > reminded about that with the addition of a note in the commit message\n> > of the patch posted.\n>\n> Right, I'll take care of it before committing. I'm trying to make sure I\n> don't forget. :)\n\nOK, all good.\n\n~~~\n\nThis is a bit of a late entry, but looking at the PG DOCS, I felt it\nmight be simpler if we don't always refer to every other worker type\nwhen explaining NULLs. The descriptions are already bigger than they\nneed to be, and if more types ever get added they will keep growing.\n\n~\n\nBEFORE\nleader_pid integer\nProcess ID of the leader apply worker if this process is a parallel\napply worker; NULL if this process is a leader apply worker or a table\nsynchronization worker\n\nSUGGESTION\nleader_pid integer\nProcess ID of the leader apply worker; NULL if this process is not a\nparallel apply worker\n\n~\n\nBEFORE\nrelid oid\nOID of the relation that the worker is synchronizing; NULL for the\nleader apply worker and parallel apply workers\n\nSUGGESTION\nrelid oid\nOID of the relation being synchronized; NULL if this process is not a\ntable synchronization worker\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Thu, 21 Sep 2023 10:06:03 +1000",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add 'worker_type' to pg_stat_subscription"
},
{
"msg_contents": "On Thu, Sep 21, 2023 at 5:36 AM Peter Smith <[email protected]> wrote:\n>\n> On Thu, Sep 21, 2023 at 9:34 AM Nathan Bossart <[email protected]> wrote:\n> >\n> > On Thu, Sep 21, 2023 at 08:14:23AM +0900, Michael Paquier wrote:\n> > > On Thu, Sep 21, 2023 at 09:01:01AM +1000, Peter Smith wrote:\n> > >> One question -- the patch comment still says \"Bumps catversion.\", but\n> > >> catversion.h change is missing from the v9 patch?\n> > >\n> > > Yeah, previous patches did that, but it is no big deal. My take is\n> > > that it is a good practice to never do a catversion bump in posted\n> > > patches, and that it is equally a good practice from Nathan to be\n> > > reminded about that with the addition of a note in the commit message\n> > > of the patch posted.\n> >\n> > Right, I'll take care of it before committing. I'm trying to make sure I\n> > don't forget. :)\n>\n> OK, all good.\n>\n> ~~~\n>\n> This is a bit of a late entry, but looking at the PG DOCS, I felt it\n> might be simpler if we don't always refer to every other worker type\n> when explaining NULLs. The descriptions are already bigger than they\n> need to be, and if more types ever get added they will keep growing.\n>\n> ~\n>\n> BEFORE\n> leader_pid integer\n> Process ID of the leader apply worker if this process is a parallel\n> apply worker; NULL if this process is a leader apply worker or a table\n> synchronization worker\n>\n> SUGGESTION\n> leader_pid integer\n> Process ID of the leader apply worker; NULL if this process is not a\n> parallel apply worker\n>\n> ~\n>\n> BEFORE\n> relid oid\n> OID of the relation that the worker is synchronizing; NULL for the\n> leader apply worker and parallel apply workers\n>\n> SUGGESTION\n> relid oid\n> OID of the relation being synchronized; NULL if this process is not a\n> table synchronization worker\n>\n\nI find the current descriptions better than the proposed. But I am not\nopposed to your proposal if others are okay with it. Personally, I\nfeel even if we want to change these descriptions, we can do it as a\nseparate patch.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 21 Sep 2023 15:55:37 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add 'worker_type' to pg_stat_subscription"
},
{
"msg_contents": "On Thu, Sep 21, 2023 at 1:00 AM Nathan Bossart <[email protected]> wrote:\n>\n> On Tue, Sep 19, 2023 at 08:36:35AM +0530, Amit Kapila wrote:\n> > I am of the opinion that worker_type should be 'apply' instead of\n> > 'leader apply' because even when it is a leader for parallel apply\n> > worker, it could perform individual transactions apply. For reference,\n> > I checked pg_stat_activity.backend_type, there is nothing called main\n> > or leader backend even when the backend is involved in parallel query.\n>\n> Okay. Here is v9 of the patch with this change.\n>\n\nThe changes looks good to me, though I haven't tested it. But feel\nfree to commit if you are fine with this patch.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 21 Sep 2023 16:01:20 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add 'worker_type' to pg_stat_subscription"
},
{
"msg_contents": "On Thu, Sep 21, 2023 at 04:01:20PM +0530, Amit Kapila wrote:\n> The changes looks good to me, though I haven't tested it. But feel\n> free to commit if you are fine with this patch.\n\nCommitted.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 25 Sep 2023 14:16:11 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add 'worker_type' to pg_stat_subscription"
},
{
"msg_contents": "On Tue, Sep 26, 2023 at 7:16 AM Nathan Bossart <[email protected]> wrote:\n>\n> On Thu, Sep 21, 2023 at 04:01:20PM +0530, Amit Kapila wrote:\n> > The changes looks good to me, though I haven't tested it. But feel\n> > free to commit if you are fine with this patch.\n>\n> Committed.\n>\n\nThanks for pushing.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Tue, 26 Sep 2023 11:06:36 +1000",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add 'worker_type' to pg_stat_subscription"
}
] |
[
{
"msg_contents": "This warning comes from parse_expr.c transformJsonValueExpr() and is \ntriggered for example by the following test case:\n\nSELECT JSON_OBJECT('foo': NULL::json FORMAT JSON);\nWARNING: FORMAT JSON has no effect for json and jsonb types\n\nBut I don't see anything in the SQL standard that would require this \nwarning. It seems pretty clear that FORMAT JSON in this case is \nimplicit and otherwise without effect.\n\nAlso, we don't have that warning in the output case (RETURNING json \nFORMAT JSON).\n\nAnyone remember why this is here? Should we remove it?\n\n\n",
"msg_date": "Wed, 16 Aug 2023 15:54:57 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "dubious warning: FORMAT JSON has no effect for json and jsonb types"
},
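For contrast, the output-side form mentioned above, which never produced the warning; a trivial sketch (the key/value pair is arbitrary):

SELECT JSON_OBJECT('foo': 'bar' RETURNING json FORMAT JSON);
-- FORMAT JSON in the RETURNING clause is likewise implicit and has no effect.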
{
"msg_contents": "On Wed, Aug 16, 2023 at 8:55 AM Peter Eisentraut <[email protected]>\nwrote:\n\n> This warning comes from parse_expr.c transformJsonValueExpr() and is\n> triggered for example by the following test case:\n>\n> SELECT JSON_OBJECT('foo': NULL::json FORMAT JSON);\n> WARNING: FORMAT JSON has no effect for json and jsonb types\n>\n> But I don't see anything in the SQL standard that would require this\n> warning. It seems pretty clear that FORMAT JSON in this case is\n> implicit and otherwise without effect.\n>\n> Also, we don't have that warning in the output case (RETURNING json\n> FORMAT JSON).\n>\n> Anyone remember why this is here? Should we remove it?\n\n\n+1 for removing, on the basis that it is not suprising, and would pollute\nlogs for most configurations.\n\nmerlin\n\nOn Wed, Aug 16, 2023 at 8:55 AM Peter Eisentraut <[email protected]> wrote:This warning comes from parse_expr.c transformJsonValueExpr() and is \ntriggered for example by the following test case:\n\nSELECT JSON_OBJECT('foo': NULL::json FORMAT JSON);\nWARNING: FORMAT JSON has no effect for json and jsonb types\n\nBut I don't see anything in the SQL standard that would require this \nwarning. It seems pretty clear that FORMAT JSON in this case is \nimplicit and otherwise without effect.\n\nAlso, we don't have that warning in the output case (RETURNING json \nFORMAT JSON).\n\nAnyone remember why this is here? Should we remove it?+1 for removing, on the basis that it is not suprising, and would pollute logs for most configurations.merlin",
"msg_date": "Wed, 16 Aug 2023 09:59:03 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: dubious warning: FORMAT JSON has no effect for json and jsonb\n types"
},
{
"msg_contents": "On 16.08.23 16:59, Merlin Moncure wrote:\n> On Wed, Aug 16, 2023 at 8:55 AM Peter Eisentraut <[email protected] \n> <mailto:[email protected]>> wrote:\n> \n> This warning comes from parse_expr.c transformJsonValueExpr() and is\n> triggered for example by the following test case:\n> \n> SELECT JSON_OBJECT('foo': NULL::json FORMAT JSON);\n> WARNING: FORMAT JSON has no effect for json and jsonb types\n> \n> But I don't see anything in the SQL standard that would require this\n> warning. It seems pretty clear that FORMAT JSON in this case is\n> implicit and otherwise without effect.\n> \n> Also, we don't have that warning in the output case (RETURNING json\n> FORMAT JSON).\n> \n> Anyone remember why this is here? Should we remove it?\n> \n> \n> +1 for removing, on the basis that it is not suprising, and would \n> pollute logs for most configurations.\n\ndone\n\n\n\n",
"msg_date": "Fri, 18 Aug 2023 07:59:34 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: dubious warning: FORMAT JSON has no effect for json and jsonb\n types"
},
{
"msg_contents": "On Fri, Aug 18, 2023 at 2:59 PM Peter Eisentraut <[email protected]> wrote:\n> On 16.08.23 16:59, Merlin Moncure wrote:\n> > On Wed, Aug 16, 2023 at 8:55 AM Peter Eisentraut <[email protected]\n> > <mailto:[email protected]>> wrote:\n> >\n> > This warning comes from parse_expr.c transformJsonValueExpr() and is\n> > triggered for example by the following test case:\n> >\n> > SELECT JSON_OBJECT('foo': NULL::json FORMAT JSON);\n> > WARNING: FORMAT JSON has no effect for json and jsonb types\n> >\n> > But I don't see anything in the SQL standard that would require this\n> > warning. It seems pretty clear that FORMAT JSON in this case is\n> > implicit and otherwise without effect.\n> >\n> > Also, we don't have that warning in the output case (RETURNING json\n> > FORMAT JSON).\n> >\n> > Anyone remember why this is here? Should we remove it?\n> >\n> >\n> > +1 for removing, on the basis that it is not suprising, and would\n> > pollute logs for most configurations.\n>\n> done\n\n+1 and thanks. May have been there as a debugging aid if anything.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 21 Aug 2023 16:33:10 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: dubious warning: FORMAT JSON has no effect for json and jsonb\n types"
}
] |
[
{
"msg_contents": "Hi,\r\n\r\nThe date for PostgreSQL 16 Release Candidate 1 (RC1) is August 31, 2023. \r\nPlease ensure all open items[1] are completed and committed before \r\nAugust 26, 2023 12:00 UTC.\r\n\r\nThis means the current target date for the PostgreSQL 16 GA release is \r\nSeptember 14, 2023. While this date could change if the release team \r\ndecides the candidate release is not ready, please plan for this date to \r\nbe the GA release.\r\n\r\nThanks,\r\n\r\nJonathan\r\n\r\n[1] https://wiki.postgresql.org/wiki/PostgreSQL_16_Open_Items",
"msg_date": "Wed, 16 Aug 2023 15:48:59 -0400",
"msg_from": "\"Jonathan S. Katz\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "PostgreSQL 16 RC1 + GA release dates"
},
{
"msg_contents": "\"Jonathan S. Katz\" <[email protected]> writes:\n> The date for PostgreSQL 16 Release Candidate 1 (RC1) is August 31, 2023. \n> Please ensure all open items[1] are completed and committed before \n> August 26, 2023 12:00 UTC.\n\nFYI, I moved the \"Oversight in reparameterize_path_by_child leading to\nexecutor crash\" open item to the \"Older bugs affecting stable branches\"\nsection, because it is in fact an old bug: the given test case crashes\nin every branch that has enable_partitionwise_join. I'll still look\nat getting in the fix before RC1, but we should understand what we're\ndealing with.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 20 Aug 2023 10:50:40 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 16 RC1 + GA release dates"
},
{
"msg_contents": "On 8/20/23 10:50 AM, Tom Lane wrote:\r\n> \"Jonathan S. Katz\" <[email protected]> writes:\r\n>> The date for PostgreSQL 16 Release Candidate 1 (RC1) is August 31, 2023.\r\n>> Please ensure all open items[1] are completed and committed before\r\n>> August 26, 2023 12:00 UTC.\r\n> \r\n> FYI, I moved the \"Oversight in reparameterize_path_by_child leading to\r\n> executor crash\" open item to the \"Older bugs affecting stable branches\"\r\n> section, because it is in fact an old bug: the given test case crashes\r\n> in every branch that has enable_partitionwise_join. I'll still look\r\n> at getting in the fix before RC1, but we should understand what we're\r\n> dealing with.\r\n\r\n[RMT hat]\r\n\r\nThanks -- appreciative of the accurate record keeping.\r\n\r\nJonathan",
"msg_date": "Sun, 20 Aug 2023 21:09:56 -0400",
"msg_from": "\"Jonathan S. Katz\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL 16 RC1 + GA release dates"
},
{
"msg_contents": "On 8/16/23 3:48 PM, Jonathan S. Katz wrote:\r\n\r\n> The date for PostgreSQL 16 Release Candidate 1 (RC1) is August 31, 2023. \r\n> Please ensure all open items[1] are completed and committed before \r\n> August 26, 2023 12:00 UTC.\r\n\r\nReminder: the RC1 open item[1] deadline is at August 26, 2023 @ 12:00 UTC.\r\n\r\nThanks,\r\n\r\nJonathan\r\n\r\n[1] https://wiki.postgresql.org/wiki/PostgreSQL_16_Open_Items",
"msg_date": "Thu, 24 Aug 2023 14:59:15 -0400",
"msg_from": "\"Jonathan S. Katz\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL 16 RC1 + GA release dates"
}
] |
[
{
"msg_contents": "Respected Sir/Ma'am,\nI have a doubt regarding the storage of the data directory path of\nPostgreSQL. I'm using PostgreSQL server 15 in which I want to give a path\nto an external drive RAID Memory Storage. Which is on the LAN Network? Is\nit compatible or not? If it is then which file system is suitable: NAS or\nSAN?\n\nIf it is, can you share any documents with me?\n\nRegards\nHarsh\n\nRespected Sir/Ma'am,I have a doubt regarding the storage of the data directory path of PostgreSQL. I'm using PostgreSQL server 15 in which I want to give a path to an external drive RAID Memory Storage. Which is on the LAN Network? Is it compatible or not? If it is then which file system is suitable: NAS or SAN?If it is, can you share any documents with me?RegardsHarsh",
"msg_date": "Thu, 17 Aug 2023 10:51:45 +0530",
"msg_from": "Harsh N Bhatt <[email protected]>",
"msg_from_op": true,
"msg_subject": "Query regarding sharing data directory"
},
{
"msg_contents": "On Thu, Aug 17, 2023 at 10:51:45AM +0530, Harsh N Bhatt wrote:\n> I have a doubt regarding the storage of the data directory path of\n> PostgreSQL. I'm using PostgreSQL server 15 in which I want to give a path\n> to an external drive RAID Memory Storage. Which is on the LAN Network? Is\n> it compatible or not? If it is then which file system is suitable: NAS or\n> SAN?\n> \n> If it is, can you share any documents with me?\n\nIf you can connect to your server, you could use the following query\nto know where your data folder is:\nSHOW data_directory;\n\nThe location of the data directory is something that distributions and\ninstallations set by themselves, so in short it depends on your\nenvironment except if you set up a cluster by yourself.\n--\nMichael",
"msg_date": "Fri, 18 Aug 2023 10:03:17 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query regarding sharing data directory"
}
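The same information is also visible in pg_settings, which additionally shows that the location is fixed at server start (so pointing it at external storage means initializing or moving the cluster there before starting the server, not changing it on the fly); a small sketch:

SELECT name, setting, context
FROM pg_settings
WHERE name = 'data_directory';
-- context = 'postmaster' means the value can only be set at server start.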
] |
[
{
"msg_contents": "Hi hackers,\n\nWhile working on [1] it has been noticed by Masahiro-san that the description field\nin the new pg_wait_event view contains 2 blanks for one row.\n\nIt turns out that it comes from wait_event_names.txt (added in fa88928).\n\nAttached a tiny patch to fix this entry in wait_event_names.txt (I did check that no\nother entries are in the same case).\n\n[1]: https://www.postgresql.org/message-id/735fbd560ae914c96faaa23cc8d9a118%40oss.nttdata.com\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Thu, 17 Aug 2023 07:49:29 +0200",
"msg_from": "\"Drouvot, Bertrand\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Fix an entry in wait_event_names.txt"
},
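The stray double blank can also be spotted from SQL; a quick check against the pg_wait_events view (assuming a build recent enough to have it):

SELECT type, name, description
FROM pg_wait_events
WHERE description LIKE '%  %';
-- Should return no rows once the entry in wait_event_names.txt is fixed.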
{
"msg_contents": "On 2023-08-17 14:49, Drouvot, Bertrand wrote:\n> Hi hackers,\n> \n> While working on [1] it has been noticed by Masahiro-san that the\n> description field\n> in the new pg_wait_event view contains 2 blanks for one row.\n> \n> It turns out that it comes from wait_event_names.txt (added in \n> fa88928).\n> \n> Attached a tiny patch to fix this entry in wait_event_names.txt (I did\n> check that no\n> other entries are in the same case).\n> \n> [1]:\n> https://www.postgresql.org/message-id/735fbd560ae914c96faaa23cc8d9a118%40oss.nttdata.com\n> \n> Regards,\n\n+1. Thanks!\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Thu, 17 Aug 2023 15:25:27 +0900",
"msg_from": "Masahiro Ikeda <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix an entry in wait_event_names.txt"
},
{
"msg_contents": "On Thu, Aug 17, 2023 at 03:25:27PM +0900, Masahiro Ikeda wrote:\n> +1. Thanks!\n\nApplied.\n--\nMichael",
"msg_date": "Fri, 18 Aug 2023 08:19:14 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix an entry in wait_event_names.txt"
}
] |
[
{
"msg_contents": "Hi hackers,\n\nI'm developing an index AM for a specific user defined type. Details\nabout LIMIT clause can lead to different strategy during scan. So\n\"SELECT * FROM tbl ORDER BY col LIMIT 5\" will have a different\ncode path to \"SELECT * FROM tbl ORDER BY col LIMIT 500\".\nIt's still the same IndexScan.\n\nIn planning phase, we have this via amcostestimate. But I don't\nsee a proper way in ambeginscan/amrescan/amgettuple. For\nexample,\nambeginscan_function: we have indexRelation, nkeys and norderbys.\namrescan_function: we have the IndexScanDesc built by beginscan,\nand detailed info about the scan keys.\namgettuple_function: we have IndexScanDesc, and scan direction.\nMaybe I miss some API please point out, thanks.\n\nIn FDW API, BeginForeignScan has ForeignScanState which\nincludes the whole plan. It's possible to find LIMIT clause.\nSo I propose adding a ScanState pointer to IndexScanDesc. In\nIndexNext() populate this in IndexScanDesc after ambeginscan.\nThen amrescan/amgettuple can adjust it's strategy with information\nabout LIMIT cluase, or more generally the whole plan tree. This\nwill make AM scan API on par with FDW API in my opinion. This\napproach should be compatible with existing extensions if we place\nthe newly added pointer at the end of IndexScanDesc.\n\nAnother approach is adding a new API to IndexAmRoutine and\ngive the extension a way to access plan information. But this\ndoesn't seems to provide more benefits compare to the above\napproach.\n\nThoughts?\n\nBest regards,\nPeifeng Qiu\n\n\n",
"msg_date": "Thu, 17 Aug 2023 23:11:47 +0900",
"msg_from": "Peifeng Qiu <[email protected]>",
"msg_from_op": true,
"msg_subject": "Allow Index AM scan API to access information about LIMIT clause"
}
] |
[
{
"msg_contents": "The following bug has been logged on the website:\n\nBug reference: 18059\nLogged by: Pavel Kulakov\nEmail address: [email protected]\nPostgreSQL version: 15.4\nOperating system: Debian GNU/Linux 11\nDescription: \n\nSteps to reproduce:\r\n1. Create stored procedure\r\n\r\ncreate or replace procedure test_proc()\r\nlanguage plpgsql as $procedure$\r\nbegin\r\n commit;\r\n set transaction isolation level repeatable read;\r\n -- here follows some useful code which is omitted for brevity\r\nend\r\n$procedure$;\r\n\r\n2. Open new connection\r\n\r\n3. Execute the following 3 queries one by one:\r\na) call test_proc();\r\nb) create temporary table \"#tmp\"(c int) on commit drop;\r\nc) call test_proc();\r\nOn step c) you'll get an error\r\n[25001]: ERROR: SET TRANSACTION ISOLATION LEVEL must be called before any\nquery\r\n Where: SQL statement \"set transaction isolation level repeatable read\"\r\nPL/pgSQL function test_proc() line 4 at SQL statement\r\n--------------------------------------------\r\nI used 3 different instruments with the same problem everywhere:\r\n1) libpq in my own C++ application\r\n2) DBeaver\r\n3) npgsql in my own C# application\r\n\r\nThe same problem occures on PostgreSQL 14.4 running on Windows 10.",
"msg_date": "Thu, 17 Aug 2023 14:35:23 +0000",
"msg_from": "PG Bug reporting form <[email protected]>",
"msg_from_op": true,
"msg_subject": "BUG #18059: Unexpected error 25001 in stored procedure"
},
{
"msg_contents": "[ redirected to -hackers ]\n\nPG Bug reporting form <[email protected]> writes:\n> Steps to reproduce:\n> 1. Create stored procedure\n\n> create or replace procedure test_proc()\n> language plpgsql as $procedure$\n> begin\n> commit;\n> set transaction isolation level repeatable read;\n> -- here follows some useful code which is omitted for brevity\n> end\n> $procedure$;\n\n> 2. Open new connection\n\n> 3. Execute the following 3 queries one by one:\n> a) call test_proc();\n> b) create temporary table \"#tmp\"(c int) on commit drop;\n> c) call test_proc();\n> On step c) you'll get an error\n> [25001]: ERROR: SET TRANSACTION ISOLATION LEVEL must be called before any\n> query\n\nThanks for the report!\n\nI looked into this. The issue is that the plancache decides it needs\nto revalidate the plan for the SET command, and that causes a\ntransaction start (or at least acquisition of a snapshot), which then\ncauses check_transaction_isolation to complain. The weird sequence\nthat you have to go through to trigger the failure is conditioned by\nthe need to get the plancache entry into the needs-revalidation state\nat the right time. This wasn't really a problem when the plancache\ncode was written, but now that we have procedures it's not good.\n\nWe could imagine trying to terminate the new transaction once we've\nfinished revalidating the plan, but that direction seems silly to me.\nA SET command has no plan to rebuild, while for commands that do need\nthat, terminating and restarting the transaction adds useless overhead.\nSo the right fix seems to be to just do nothing. plancache.c already\nknows revalidation should do nothing for TransactionStmts, but that\namount of knowledge is insufficient, as shown by this report.\n\nOne reasonable precedent is found in PlannedStmtRequiresSnapshot:\nwe could change plancache.c to exclude exactly the same utility\ncommands that does, viz\n\n if (IsA(utilityStmt, TransactionStmt) ||\n IsA(utilityStmt, LockStmt) ||\n IsA(utilityStmt, VariableSetStmt) ||\n IsA(utilityStmt, VariableShowStmt) ||\n IsA(utilityStmt, ConstraintsSetStmt) ||\n /* efficiency hacks from here down */\n IsA(utilityStmt, FetchStmt) ||\n IsA(utilityStmt, ListenStmt) ||\n IsA(utilityStmt, NotifyStmt) ||\n IsA(utilityStmt, UnlistenStmt) ||\n IsA(utilityStmt, CheckPointStmt))\n return false;\n\nHowever, this feels unsatisfying. \"Does it require a snapshot?\" is not\nthe same question as \"does it have a plan that could need rebuilding?\".\nThe vast majority of utility statements do not have any such plan:\nthey are left untouched by parse analysis, rewriting, and planning.\n\nWhat I'm inclined to propose, therefore, is that we make revalidation\nbe a no-op for every statement type for which transformStmt() reaches\nits default: case. (When it does so, the resulting CMD_UTILITY Query\nwill not get any processing from the rewriter or planner either.)\nThat gives us this list of statements requiring revalidation:\n\n case T_InsertStmt:\n case T_DeleteStmt:\n case T_UpdateStmt:\n case T_MergeStmt:\n case T_SelectStmt:\n case T_ReturnStmt:\n case T_PLAssignStmt:\n case T_DeclareCursorStmt:\n case T_ExplainStmt:\n case T_CreateTableAsStmt:\n case T_CallStmt:\n\nFor maintainability's sake I'd suggest writing a new function along\nthe line of RawStmtRequiresParseAnalysis() and putting it beside\ntransformStmt(), rather than allowing plancache.c to know directly\nwhich statement types require analysis.\n\nThoughts?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 19 Aug 2023 13:19:22 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BUG #18059: Unexpected error 25001 in stored procedure"
},
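The check that trips here can be seen in isolation, outside any procedure; a minimal reproduction of the same 25001 error in a fresh session:

BEGIN;
SELECT 1;                                         -- takes a snapshot
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;  -- ERROR: must be called before any query
ROLLBACK;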
{
"msg_contents": "On Sat, Aug 19, 2023 at 1:19 PM Tom Lane <[email protected]> wrote:\n> What I'm inclined to propose, therefore, is that we make revalidation\n> be a no-op for every statement type for which transformStmt() reaches\n> its default: case. (When it does so, the resulting CMD_UTILITY Query\n> will not get any processing from the rewriter or planner either.)\n> That gives us this list of statements requiring revalidation:\n>\n> case T_InsertStmt:\n> case T_DeleteStmt:\n> case T_UpdateStmt:\n> case T_MergeStmt:\n> case T_SelectStmt:\n> case T_ReturnStmt:\n> case T_PLAssignStmt:\n> case T_DeclareCursorStmt:\n> case T_ExplainStmt:\n> case T_CreateTableAsStmt:\n> case T_CallStmt:\n\nThat sounds like the right thing. It is perhaps unfortunate that we\ndon't have a proper parse analysis/execution distinction for other\ntypes of statements, but if that ever changes then this can be\nrevisited.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 21 Aug 2023 09:32:27 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BUG #18059: Unexpected error 25001 in stored procedure"
},
{
"msg_contents": "Robert Haas <[email protected]> writes:\n> On Sat, Aug 19, 2023 at 1:19 PM Tom Lane <[email protected]> wrote:\n>> What I'm inclined to propose, therefore, is that we make revalidation\n>> be a no-op for every statement type for which transformStmt() reaches\n>> its default: case. (When it does so, the resulting CMD_UTILITY Query\n>> will not get any processing from the rewriter or planner either.)\n\n> That sounds like the right thing. It is perhaps unfortunate that we\n> don't have a proper parse analysis/execution distinction for other\n> types of statements, but if that ever changes then this can be\n> revisited.\n\nI started to code this, and immediately noticed that transformStmt()\nalready has a companion function analyze_requires_snapshot() that\nreturns \"true\" in the cases of interest ... except that it does\nnot return true for T_CallStmt. Perhaps that was intentional to\nbegin with, but it is very hard to believe that it isn't a bug now,\nsince transformCallStmt can invoke nearly arbitrary processing via\ntransformExpr(). What semantic anomalies, if any, do we risk if CALL\nprocessing forces a transaction start? (I rather imagine it does\nalready, somewhere later on...)\n\nAnyway, I'm now of two minds whether to use analyze_requires_snapshot()\nas-is for plancache.c's invalidation test, or duplicate it under a\ndifferent name, or have two names but one is just an alias for the\nother. It still seems like \"analyze requires snapshot\" isn't\nnecessarily the exact inverse condition of \"analyze is a no-op\", but\nit is today (assuming we agree that CALL needs a snapshot), and maybe\nmaintaining two duplicate functions is silly. Thoughts?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 22 Aug 2023 17:29:25 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BUG #18059: Unexpected error 25001 in stored procedure"
},
{
"msg_contents": "I wrote:\n> I started to code this, and immediately noticed that transformStmt()\n> already has a companion function analyze_requires_snapshot() that\n> returns \"true\" in the cases of interest ... except that it does\n> not return true for T_CallStmt. Perhaps that was intentional to\n> begin with, but it is very hard to believe that it isn't a bug now,\n> since transformCallStmt can invoke nearly arbitrary processing via\n> transformExpr(). What semantic anomalies, if any, do we risk if CALL\n> processing forces a transaction start? (I rather imagine it does\n> already, somewhere later on...)\n\nI poked around some more, and determined that there should not be any\nnew semantic anomalies if analyze_requires_snapshot starts returning\ntrue for CallStmts, because ExecuteCallStmt already acquires and\nreleases a snapshot before invoking the procedure (at least in the\nnon-atomic case, which is the one of interest here). I spent some\ntime trying to devise a test case showing it's broken, and did not\nsucceed: the fact that we disallow sub-SELECTs in CALL arguments makes\nit a lot harder than I'd expected to reach anyplace that would require\nhaving a transaction snapshot set. Nonetheless, I have very little\nfaith that such a scenario doesn't exist today, and even less that\nwe won't add one in future. The only real reason I can see for not\nsetting a snapshot here is as a micro-optimization. While that's\nnot without value, it seems hard to argue that CALL deserves an\noptimization that SELECT doesn't get.\n\nI also realized that ReturnStmts are likewise missing from\nanalyze_requires_snapshot(). This is probably unreachable, because\nReturnStmt can only appear in a SQL-language function and I can't\nthink of a scenario where we'd be parsing one outside a transaction.\nNonetheless it seems hard to argue that this is an optimization\nwe need to keep.\n\nHence I propose the attached patch, which invents\nstmt_requires_parse_analysis() and makes analyze_requires_snapshot()\ninto an alias for it, so that all these statement types are treated\nalike. I made the adjacent comments a lot more opinionated, too,\nin hopes that future additions won't overlook these concerns.\n\nThe actual bug fix is in plancache.c. I decided to invert the tests\nin plancache.c, because the macro really needed renaming anyway and\nit seemed to read better this way. I also noticed that\nResetPlanCache() already tries to optimize away invalidation of\nutility statements, but that logic seems no longer necessary ---\nwhat's more, it's outright wrong for CALL, because that does need\ninvalidation and won't get it. (I have not tried to build a test\ncase proving that that's broken, but surely it is.)\n\nBarring objections, this needs to be back-patched as far as v11.\n\n\t\t\tregards, tom lane",
"msg_date": "Wed, 23 Aug 2023 16:53:12 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BUG #18059: Unexpected error 25001 in stored procedure"
}
] |
[
{
"msg_contents": "I started digging into a warning I noticed on my FDW builds where \nPostgres is built with meson, e.g. \n<https://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=crake&dt=2023-08-16%2018%3A37%3A25&stg=FileTextArrayFDW-build> \nwhich has this:\n\ncc1: warning: ‘-Wformat-security’ ignored without ‘-Wformat’ \n[-Wformat-security]\n\nI found that the pgxs Makefile.global built under meson is a bit \ndifferent. On debug builds for both this is what I get on HEAD (meson) \nand REL_15_STABLE (autoconf), stripped of the current components:\n\n HEAD: CFLAGS =-Wshadow=compatible-local\nREL_15_STABLE: CFLAGS =-Wall -g\n\nThe warning is apparently due to the missing -Wall.\n\nShouldn't we be aiming for pretty much identical settings?\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\nI started digging into a warning I noticed on my FDW builds where\n Postgres is built with meson, e.g.\n<https://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=crake&dt=2023-08-16%2018%3A37%3A25&stg=FileTextArrayFDW-build>\n which has this:\ncc1: warning: ‘-Wformat-security’ ignored without ‘-Wformat’\n [-Wformat-security]\n\nI found that the pgxs Makefile.global built under meson is a bit\n different. On debug builds for both this is what I get on HEAD\n (meson) and REL_15_STABLE (autoconf), stripped of the current\n components:\n\n HEAD: CFLAGS\n =-Wshadow=compatible-local\n REL_15_STABLE: CFLAGS =-Wall -g \n\nThe warning is apparently due to the missing -Wall.\nShouldn't we be aiming for pretty much identical settings?\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Thu, 17 Aug 2023 15:32:40 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": true,
"msg_subject": "meson: pgxs Makefile.global differences"
},
{
"msg_contents": "On Thu Aug 17, 2023 at 2:32 PM CDT, Andrew Dunstan wrote:\n> I started digging into a warning I noticed on my FDW builds where \n> Postgres is built with meson, e.g. \n> <https://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=crake&dt=2023-08-16%2018%3A37%3A25&stg=FileTextArrayFDW-build> \n> which has this:\n>\n> cc1: warning: ‘-Wformat-security’ ignored without ‘-Wformat’ \n> [-Wformat-security]\n>\n> I found that the pgxs Makefile.global built under meson is a bit \n> different. On debug builds for both this is what I get on HEAD (meson) \n> and REL_15_STABLE (autoconf), stripped of the current components:\n>\n> HEAD: CFLAGS =-Wshadow=compatible-local\n> REL_15_STABLE: CFLAGS =-Wall -g\n>\n> The warning is apparently due to the missing -Wall.\n>\n> Shouldn't we be aiming for pretty much identical settings?\n\nI agree that they should be identical. The meson bild should definitely \nbe aiming for 100% compatibility for the Makefile.global.\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Thu, 17 Aug 2023 14:45:54 -0500",
"msg_from": "\"Tristan Partin\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: meson: pgxs Makefile.global differences"
},
{
"msg_contents": "Hi,\n\nOn 2023-08-17 14:45:54 -0500, Tristan Partin wrote:\n> On Thu Aug 17, 2023 at 2:32 PM CDT, Andrew Dunstan wrote:\n> > I started digging into a warning I noticed on my FDW builds where\n> > Postgres is built with meson, e.g. <https://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=crake&dt=2023-08-16%2018%3A37%3A25&stg=FileTextArrayFDW-build>\n> > which has this:\n> >\n> > cc1: warning: ‘-Wformat-security’ ignored without ‘-Wformat’\n> > [-Wformat-security]\n> >\n> > I found that the pgxs Makefile.global built under meson is a bit\n> > different. On debug builds for both this is what I get on HEAD (meson)\n> > and REL_15_STABLE (autoconf), stripped of the current components:\n\nI assume \"current\" means the flags that are present in both cases?\n\n\n> > HEAD: CFLAGS =-Wshadow=compatible-local\n> > REL_15_STABLE: CFLAGS =-Wall -g\n> >\n> > The warning is apparently due to the missing -Wall.\n> >\n> > Shouldn't we be aiming for pretty much identical settings?\n\nThe difference for -Wshadow=compatible-local is due to changes between 15 and\nHEAD.\n\nWe're indeed not adding -Wall right now (the warning level is handled by\nmeson, so it doesn't show up in our cflags right now).\n\n\n> I agree that they should be identical. The meson bild should definitely be\n> aiming for 100% compatibility for the Makefile.global.\n\nI don't think that's feasible. It was a fair bit of work to get the most\nimportant contents to match, while skipping lots of things that are primarily\nrelevant for building the server (which isn't relevant for pgxs).\n\nThat said, in this specific case, I agree, we should likely emit -Wall to\nMakefile.global in meson as well.\n\nGreetings,\n\nAndres Freund\n\n\nPS: I don't have [email protected] , just .de :)\n\n\n",
"msg_date": "Thu, 17 Aug 2023 13:51:42 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: meson: pgxs Makefile.global differences"
},
{
"msg_contents": "\n\n\n> On Aug 17, 2023, at 4:51 PM, Andres Freund <[email protected]> wrote:\n> \n> Hi,\n> \n>> On 2023-08-17 14:45:54 -0500, Tristan Partin wrote:\n>>> On Thu Aug 17, 2023 at 2:32 PM CDT, Andrew Dunstan wrote:\n>>> I started digging into a warning I noticed on my FDW builds where\n>>> Postgres is built with meson, e.g. <https://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=crake&dt=2023-08-16%2018%3A37%3A25&stg=FileTextArrayFDW-build>\n>>> which has this:\n>>> \n>>> cc1: warning: ‘-Wformat-security’ ignored without ‘-Wformat’\n>>> [-Wformat-security]\n>>> \n>>> I found that the pgxs Makefile.global built under meson is a bit\n>>> different. On debug builds for both this is what I get on HEAD (meson)\n>>> and REL_15_STABLE (autoconf), stripped of the current components:\n> \n> I assume \"current\" means the flags that are present in both cases?\n\n\nYes, sorry, meant to type common.\n\n> \n> \n>>> HEAD: CFLAGS =-Wshadow=compatible-local\n>>> REL_15_STABLE: CFLAGS =-Wall -g\n>>> \n>>> The warning is apparently due to the missing -Wall.\n>>> \n>>> Shouldn't we be aiming for pretty much identical settings?\n> \n> The difference for -Wshadow=compatible-local is due to changes between 15 and\n> HEAD.\n> \n> We're indeed not adding -Wall right now (the warning level is handled by\n> meson, so it doesn't show up in our cflags right now).\n> \n> \n>> I agree that they should be identical. The meson bild should definitely be\n>> aiming for 100% compatibility for the Makefile.global.\n> \n> I don't think that's feasible. It was a fair bit of work to get the most\n> important contents to match, while skipping lots of things that are primarily\n> relevant for building the server (which isn't relevant for pgxs).\n> \n> That said, in this specific case, I agree, we should likely emit -Wall to\n> Makefile.global in meson as well.\n> \n> \n\nCool\n\nCheers \n\nAndrew\n\n",
"msg_date": "Thu, 17 Aug 2023 16:56:02 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: meson: pgxs Makefile.global differences"
},
{
"msg_contents": "On Thu Aug 17, 2023 at 3:51 PM CDT, Andres Freund wrote:\n> PS: I don't have [email protected] , just .de :)\n\nFat fingered a \"v\" somehow.\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Thu, 17 Aug 2023 15:56:37 -0500",
"msg_from": "\"Tristan Partin\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: meson: pgxs Makefile.global differences"
},
{
"msg_contents": "On 2023-08-17 Th 16:51, Andres Freund wrote:\n> Hi,\n>\n> On 2023-08-17 14:45:54 -0500, Tristan Partin wrote:\n>> On Thu Aug 17, 2023 at 2:32 PM CDT, Andrew Dunstan wrote:\n>>> I started digging into a warning I noticed on my FDW builds where\n>>> Postgres is built with meson, e.g.<https://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=crake&dt=2023-08-16%2018%3A37%3A25&stg=FileTextArrayFDW-build>\n>>> which has this:\n>>>\n>>> cc1: warning: ‘-Wformat-security’ ignored without ‘-Wformat’\n>>> [-Wformat-security]\n>>>\n>>> I found that the pgxs Makefile.global built under meson is a bit\n>>> different. On debug builds for both this is what I get on HEAD (meson)\n>>> and REL_15_STABLE (autoconf), stripped of the current components:\n> I assume \"current\" means the flags that are present in both cases?\n>\n>\n>>> HEAD: CFLAGS =-Wshadow=compatible-local\n>>> REL_15_STABLE: CFLAGS =-Wall -g\n>>>\n>>> The warning is apparently due to the missing -Wall.\n>>>\n>>> Shouldn't we be aiming for pretty much identical settings?\n> The difference for -Wshadow=compatible-local is due to changes between 15 and\n> HEAD.\n>\n> We're indeed not adding -Wall right now (the warning level is handled by\n> meson, so it doesn't show up in our cflags right now).\n>\n>\n>> I agree that they should be identical. The meson bild should definitely be\n>> aiming for 100% compatibility for the Makefile.global.\n> I don't think that's feasible. It was a fair bit of work to get the most\n> important contents to match, while skipping lots of things that are primarily\n> relevant for building the server (which isn't relevant for pgxs).\n>\n> That said, in this specific case, I agree, we should likely emit -Wall to\n> Makefile.global in meson as well.\n>\n\nWhere should we do that? And how about the -g that's also missing for \ndebug-enabled builds?\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2023-08-17 Th 16:51, Andres Freund\n wrote:\n\n\nHi,\n\nOn 2023-08-17 14:45:54 -0500, Tristan Partin wrote:\n\n\nOn Thu Aug 17, 2023 at 2:32 PM CDT, Andrew Dunstan wrote:\n\n\nI started digging into a warning I noticed on my FDW builds where\nPostgres is built with meson, e.g. <https://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=crake&dt=2023-08-16%2018%3A37%3A25&stg=FileTextArrayFDW-build>\nwhich has this:\n\ncc1: warning: ‘-Wformat-security’ ignored without ‘-Wformat’\n[-Wformat-security]\n\nI found that the pgxs Makefile.global built under meson is a bit\ndifferent. On debug builds for both this is what I get on HEAD (meson)\nand REL_15_STABLE (autoconf), stripped of the current components:\n\n\n\n\nI assume \"current\" means the flags that are present in both cases?\n\n\n\n\n\n HEAD: CFLAGS =-Wshadow=compatible-local\nREL_15_STABLE: CFLAGS =-Wall -g\n\nThe warning is apparently due to the missing -Wall.\n\nShouldn't we be aiming for pretty much identical settings?\n\n\n\n\nThe difference for -Wshadow=compatible-local is due to changes between 15 and\nHEAD.\n\nWe're indeed not adding -Wall right now (the warning level is handled by\nmeson, so it doesn't show up in our cflags right now).\n\n\n\n\nI agree that they should be identical. The meson bild should definitely be\naiming for 100% compatibility for the Makefile.global.\n\n\n\nI don't think that's feasible. 
It was a fair bit of work to get the most\nimportant contents to match, while skipping lots of things that are primarily\nrelevant for building the server (which isn't relevant for pgxs).\n\nThat said, in this specific case, I agree, we should likely emit -Wall to\nMakefile.global in meson as well.\n\n\n\n\n\nWhere should we do that? And how about the -g that's also missing\n for debug-enabled builds?\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Mon, 21 Aug 2023 11:33:45 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: meson: pgxs Makefile.global differences"
},
{
"msg_contents": "On 21.08.23 17:33, Andrew Dunstan wrote:\n> Where should we do that? And how about the -g that's also missing for \n> debug-enabled builds?\n\nI think it's the options in these two tables that meson handles \ninternally and that we should explicitly reproduce for Makefile.global:\n\nhttps://mesonbuild.com/Builtin-options.html#details-for-buildtype\nhttps://mesonbuild.com/Builtin-options.html#details-for-warning_level\n\n\n\n",
"msg_date": "Mon, 21 Aug 2023 17:43:48 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: meson: pgxs Makefile.global differences"
},
{
"msg_contents": "On Mon Aug 21, 2023 at 10:33 AM CDT, Andrew Dunstan wrote:\n>\n> On 2023-08-17 Th 16:51, Andres Freund wrote:\n> > Hi,\n> >\n> > On 2023-08-17 14:45:54 -0500, Tristan Partin wrote:\n> >> On Thu Aug 17, 2023 at 2:32 PM CDT, Andrew Dunstan wrote:\n> >>> I started digging into a warning I noticed on my FDW builds where\n> >>> Postgres is built with meson, e.g.<https://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=crake&dt=2023-08-16%2018%3A37%3A25&stg=FileTextArrayFDW-build>\n> >>> which has this:\n> >>>\n> >>> cc1: warning: ‘-Wformat-security’ ignored without ‘-Wformat’\n> >>> [-Wformat-security]\n> >>>\n> >>> I found that the pgxs Makefile.global built under meson is a bit\n> >>> different. On debug builds for both this is what I get on HEAD (meson)\n> >>> and REL_15_STABLE (autoconf), stripped of the current components:\n> > I assume \"current\" means the flags that are present in both cases?\n> >\n> >\n> >>> HEAD: CFLAGS =-Wshadow=compatible-local\n> >>> REL_15_STABLE: CFLAGS =-Wall -g\n> >>>\n> >>> The warning is apparently due to the missing -Wall.\n> >>>\n> >>> Shouldn't we be aiming for pretty much identical settings?\n> > The difference for -Wshadow=compatible-local is due to changes between 15 and\n> > HEAD.\n> >\n> > We're indeed not adding -Wall right now (the warning level is handled by\n> > meson, so it doesn't show up in our cflags right now).\n> >\n> >\n> >> I agree that they should be identical. The meson bild should definitely be\n> >> aiming for 100% compatibility for the Makefile.global.\n> > I don't think that's feasible. It was a fair bit of work to get the most\n> > important contents to match, while skipping lots of things that are primarily\n> > relevant for building the server (which isn't relevant for pgxs).\n> >\n> > That said, in this specific case, I agree, we should likely emit -Wall to\n> > Makefile.global in meson as well.\n> >\n>\n> Where should we do that? And how about the -g that's also missing for \n> debug-enabled builds?\n\nLook in src/makefiles/meson.build. You will see a line like\n'CFLAGS': var_cflags. You probably want to do something like:\n\n\tpgxs_cflags = var_cflags + cc.get_supported_arguments('-Wxxx')\n\tif get_option('debug')\n\t\t# Populate for debug flags that aren't -g\n\t\tdebug_flags = {}\n\n\t\tpgxs_cflags += debug_flags.get(cc.get_id(), \n\t\t\tcc.get_supported_arguments('-g')\n\tendif\n\n\t...\n\tCFLAGS: pgxs_cflags,\n\t...\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Mon, 21 Aug 2023 10:48:23 -0500",
"msg_from": "\"Tristan Partin\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: meson: pgxs Makefile.global differences"
}
] |
[
{
"msg_contents": "Hi,\n\nRecently, the API to define custom wait events for extension is \nsupported.\n* Change custom wait events to use dynamic shared hash tables(af720b4c5)\n\nSo, I'd like to rethink the wait event names for modules which use\n\"WAIT_EVENT_EXTENSION\" wait events.\n* postgres_fdw\n* dblink\n* pg_prewarm\n* test_shm_mq\n* worker_spi\n\nI expect that no one will object to changing the names to appropriate\nones. But, we need to discuss that naming convention, the names \nthemselves,\ndocument descriptions and so on.\n\nI made the v1 patch\n* CamelCase naming convention\n* Add document descriptions for each module\n\nI haven't added document descriptions for pg_prewarm and test modules.\nThe reason is that the wait event of autoprewarm is not shown on\npg_stat_activity. It's not an auxiliary-process and doesn't connect to\na database, so pgstat_bestart() isn't be called.\n\nFeedback is always welcome and appreciated.\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION",
"msg_date": "Fri, 18 Aug 2023 12:27:02 +0900",
"msg_from": "Masahiro Ikeda <[email protected]>",
"msg_from_op": true,
"msg_subject": "Rethink the wait event names for postgres_fdw, dblink and etc"
},
{
"msg_contents": "On Fri, Aug 18, 2023 at 12:27:02PM +0900, Masahiro Ikeda wrote:\n> I expect that no one will object to changing the names to appropriate\n> ones. But, we need to discuss that naming convention, the names themselves,\n> document descriptions and so on.\n> \n> I made the v1 patch\n> * CamelCase naming convention\n\nNot sure how others feel about that, but I am OK with camel-case style\nfor the modules.\n\n> I haven't added document descriptions for pg_prewarm and test modules.\n> The reason is that the wait event of autoprewarm is not shown on\n> pg_stat_activity. It's not an auxiliary-process and doesn't connect to\n> a database, so pgstat_bestart() isn't called.\n\nPerhaps we could just leave it out, then, adding a comment instead.\n\n> Feedback is always welcome and appreciated.\n\n+\t\t/* first time, allocate or get the custom wait event */\n+\t\tif (wait_event_info == 0)\n+\t\t\twait_event_info = WaitEventExtensionNew(\"DblinkConnect\");\n[...]\n+\t/* first time, allocate or get the custom wait event */\n+\tif (wait_event_info == 0)\n+\t\twait_event_info = WaitEventExtensionNew(\"DblinkConnect\");\n\nShouldn't dblink use two different strings?\n\n+ if (wait_event_info == 0)\n+ wait_event_info = WaitEventExtensionNew(\"PgPrewarmDumpDelay\");\n\nSame about autoprewarm.c. The same flag is used in two different code\npaths. If removed from the patch, no need to do that, of course.\n\n+static uint32 wait_event_info_connect = 0;\n+static uint32 wait_event_info_receive = 0;\n+static uint32 wait_event_info_cleanup_receive = 0;\n\nPerhaps such variables could be named with shorter names proper to\neach module, like pgfdw_we_receive, etc.\n\n+ <filename>dblink</filename> could show the following wait event under the wait \n\ns/could show/can report/?\n\n+ Waiting for same reason as <literal>PostgresFdwReceive</literal>, except that it's only for\n+ abort.\n\n\"Waiting for transaction abort on remote server\"?\n---\nMichael",
"msg_date": "Fri, 18 Aug 2023 14:11:47 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Rethink the wait event names for postgres_fdw, dblink and etc"
},
{
"msg_contents": "Hi,\n\nThanks for your comments.\n\nI updated the patch to v2.\n* Update a comment instead writing documentation about\n the wait events for pg_prewarm.\n* Make the name of wait events which are different code\n path different. Add DblinkGetConnect and PgPrewarmDumpShutdown.\n* Make variable names shorter like pgfdw_we_receive.\n* Update documents.\n* Add some tests with pg_wait_events view.\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION",
"msg_date": "Mon, 21 Aug 2023 11:04:23 +0900",
"msg_from": "Masahiro Ikeda <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Rethink the wait event names for postgres_fdw, dblink and etc"
},
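Once renamed, the events can be inspected from SQL; a quick look at everything registered under the Extension type (what shows up depends on which modules have been loaded and have registered their events):

SELECT name, description
FROM pg_wait_events
WHERE type = 'Extension'
ORDER BY name;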
{
"msg_contents": "On Mon, Aug 21, 2023 at 11:04:23AM +0900, Masahiro Ikeda wrote:\n> I updated the patch to v2.\n> * Update a comment instead writing documentation about\n> the wait events for pg_prewarm.\n\nRight. It does not seem worth the addition currently, so I am\ndiscarded this part. It's just not worth the extra cycles for the\nmoment.\n\n> * Make the name of wait events which are different code\n> path different. Add DblinkGetConnect and PgPrewarmDumpShutdown.\n> * Make variable names shorter like pgfdw_we_receive.\n> * Update documents.\n> * Add some tests with pg_wait_events view.\n\nSounds like a good idea for postgres_fdw and dblink, still some of\nthem may not be stable? First, PostgresFdwReceive and\nPostgresFdwCleanupReceive would be registered only if the connection\nis busy, but that may not be always the case depending on the timing?\nPostgresFdwConnect is always OK because this code path in\nconnect_pg_server() is always taken. Similarly, DblinkConnect and\nDblinkGetConnect are registered in deterministic code paths, so these\nwill show up all the time.\n\nI am lacking a bit of time now, but I have applied the bits for\ntest_shm_mq and worker_spi. Note that I have not added tests for\ntest_shm_mq as it may be possible that the two events (for the\nbgworker startup and for a message to be queued) are never reached\ndepending on the timing. I'll handle the rest tomorrow, with likely\nsome adjustments to the tests. (I may as well just remove them, this\nAPI is already covered by worker_spi.)\n--\nMichael",
"msg_date": "Wed, 4 Oct 2023 17:19:40 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Rethink the wait event names for postgres_fdw, dblink and etc"
},
{
"msg_contents": "On Wed, Oct 04, 2023 at 05:19:40PM +0900, Michael Paquier wrote:\n> I am lacking a bit of time now, but I have applied the bits for\n> test_shm_mq and worker_spi. Note that I have not added tests for\n> test_shm_mq as it may be possible that the two events (for the\n> bgworker startup and for a message to be queued) are never reached\n> depending on the timing. I'll handle the rest tomorrow, with likely\n> some adjustments to the tests. (I may as well just remove them, this\n> API is already covered by worker_spi.)\n\nAfter sleeping on it, I've taken the decision to remove the tests. As\nfar as I have tested, this was stable, but this does not really\nimprove the test coverage as WaitEventExtensionNew() is covered in\nworker_spi. I have done tweaks to the docs and the variable names,\nand applied that into its own commit.\n\nNote as well that the docs of dblink were wrong for DblinkGetConnect:\nthe wait event could be seen in other functions than dblink() and\ndblink_exec().\n--\nMichael",
"msg_date": "Thu, 5 Oct 2023 10:28:29 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Rethink the wait event names for postgres_fdw, dblink and etc"
},
{
"msg_contents": "On 2023-10-05 10:28, Michael Paquier wrote:\n> On Wed, Oct 04, 2023 at 05:19:40PM +0900, Michael Paquier wrote:\n>> I am lacking a bit of time now, but I have applied the bits for\n>> test_shm_mq and worker_spi. Note that I have not added tests for\n>> test_shm_mq as it may be possible that the two events (for the\n>> bgworker startup and for a message to be queued) are never reached\n>> depending on the timing. I'll handle the rest tomorrow, with likely\n>> some adjustments to the tests. (I may as well just remove them, this\n>> API is already covered by worker_spi.)\n> \n> After sleeping on it, I've taken the decision to remove the tests. As\n> far as I have tested, this was stable, but this does not really\n> improve the test coverage as WaitEventExtensionNew() is covered in\n> worker_spi. I have done tweaks to the docs and the variable names,\n> and applied that into its own commit.\n> \n> Note as well that the docs of dblink were wrong for DblinkGetConnect:\n> the wait event could be seen in other functions than dblink() and\n> dblink_exec().\n\nThanks for modifying and committing. I agree your comments.\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Fri, 06 Oct 2023 11:02:18 +0900",
"msg_from": "Masahiro Ikeda <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Rethink the wait event names for postgres_fdw, dblink and etc"
}
] |
[
{
"msg_contents": "In the following sentence, I believe either 'the' or 'a' should be kept, not\nboth. I here keep the 'the', but feel free to change.\n\n---\n src/backend/storage/ipc/dsm_impl.c | 2 +-\n 1 file changed, 1 insertion(+), 1 deletion(-)\n\ndiff --git a/src/backend/storage/ipc/dsm_impl.c\nb/src/backend/storage/ipc/dsm_impl.c\nindex 6399fa2ad5..19a9cfc8ac 100644\n--- a/src/backend/storage/ipc/dsm_impl.c\n+++ b/src/backend/storage/ipc/dsm_impl.c\n@@ -137,7 +137,7 @@ int min_dynamic_shared_memory;\n * Arguments:\n * op: The operation to be performed.\n * handle: The handle of an existing object, or for DSM_OP_CREATE, the\n- * a new handle the caller wants created.\n+ * new handle the caller wants created.\n * request_size: For DSM_OP_CREATE, the requested size. Otherwise, 0.\n * impl_private: Private, implementation-specific data. Will be a pointer\n * to NULL for the first operation on a shared memory segment within this\n-- \n\n-- \nRegards\nJunwang Zhao\n\n\n",
"msg_date": "Fri, 18 Aug 2023 17:10:10 +0800",
"msg_from": "Junwang Zhao <[email protected]>",
"msg_from_op": true,
"msg_subject": "[dsm] comment typo"
},
{
"msg_contents": "> On 18 Aug 2023, at 11:10, Junwang Zhao <[email protected]> wrote:\n> \n> In the following sentence, I believe either 'the' or 'a' should be kept, not\n> both. I here keep the 'the', but feel free to change.\n\n> * handle: The handle of an existing object, or for DSM_OP_CREATE, the\n> - * a new handle the caller wants created.\n> + * new handle the caller wants created.\n\nSince the handle doesn't exist for DSM_OP_CREATE, both \"a handle\" and \"the\nhandle\" seems a tad misleading, how about \"the identifier for the new handle the\ncaller wants created\"?\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Mon, 21 Aug 2023 11:16:21 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [dsm] comment typo"
},
{
"msg_contents": "On Mon, Aug 21, 2023 at 5:16 PM Daniel Gustafsson <[email protected]> wrote:\n>\n> > On 18 Aug 2023, at 11:10, Junwang Zhao <[email protected]> wrote:\n> >\n> > In the following sentence, I believe either 'the' or 'a' should be kept, not\n> > both. I here keep the 'the', but feel free to change.\n>\n> > * handle: The handle of an existing object, or for DSM_OP_CREATE, the\n> > - * a new handle the caller wants created.\n> > + * new handle the caller wants created.\n>\n> Since the handle doesn't exist for DSM_OP_CREATE, both \"a handle\" and \"the\n> handle\" seems a tad misleading, how about \"the identifier for the new handle the\n> caller wants created\"?\n>\n\nSounds great 👍\n\n> --\n> Daniel Gustafsson\n>\n\n\n-- \nRegards\nJunwang Zhao\n\n\n",
"msg_date": "Mon, 21 Aug 2023 18:15:36 +0800",
"msg_from": "Junwang Zhao <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [dsm] comment typo"
},
{
"msg_contents": "> On 21 Aug 2023, at 12:15, Junwang Zhao <[email protected]> wrote:\n> \n> On Mon, Aug 21, 2023 at 5:16 PM Daniel Gustafsson <[email protected]> wrote:\n>> \n>>> On 18 Aug 2023, at 11:10, Junwang Zhao <[email protected]> wrote:\n>>> \n>>> In the following sentence, I believe either 'the' or 'a' should be kept, not\n>>> both. I here keep the 'the', but feel free to change.\n>> \n>>> * handle: The handle of an existing object, or for DSM_OP_CREATE, the\n>>> - * a new handle the caller wants created.\n>>> + * new handle the caller wants created.\n>> \n>> Since the handle doesn't exist for DSM_OP_CREATE, both \"a handle\" and \"the\n>> handle\" seems a tad misleading, how about \"the identifier for the new handle the\n>> caller wants created\"?\n>> \n> \n> Sounds great\n\nDone that way, thanks!\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Wed, 23 Aug 2023 10:28:00 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [dsm] comment typo"
}
] |
[
{
"msg_contents": "\nHi hackers,\n\nJust a FYI: Started building daily snapshot RPMs using the tarball at:\n\nhttps://download.postgresql.org/pub/snapshot/dev/postgresql-snapshot.tar.bz2\n\nPackages are available on these platforms:\n\n* RHEL 9 (x86_64 & aarch64)\n* RHEL 8 (x86_64, aarch64 and ppc64le)\n* Fedora 38 (x86_64)\n* Fedora 37 (x86_64)\n* SLES 15\n\nThese alpha packages are built with compile options (--enable-debug --\nenable-cassert) which have significant negative effect on performance,\nso packages are not useful for performance testing. \n\nPlease use latest repo rpm for Fedora, RHEL/Rocky from:\n\nhttps://yum.postgresql.org/repopackages/\n\nand run:\n\ndnf config-manager --set-enabled pgdg17-updates-testing\n\nto enable v17 daily repos.\n\nFor SLES 15, run:\n\nzypper addrepo https://download.postgresql.org/pub/repos/zypp/repo/pgdg-sles-15-pg17-devel.repo\n\nand edit repo file to enable v17 repo.\n\n-HTH\n\nRegards,\n-- \nDevrim Gündüz\nOpen Source Solution Architect, PostgreSQL Major Contributor\nTwitter: @DevrimGunduz , @DevrimGunduzTR\n\n\n",
"msg_date": "Fri, 18 Aug 2023 10:46:58 +0100",
"msg_from": "Devrim =?ISO-8859-1?Q?G=FCnd=FCz?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "PostgreSQL 17 alpha RPMs"
}
] |
[
{
"msg_contents": "The attached patch adds some special names to prevent pg_temp and/or\npg_catalog from being included implicitly.\n\nThis is a useful safety feature for functions that don't have any need\nto search pg_temp.\n\nThe current (v16) recommendation is to include pg_temp last, which does\nadd to the safety, but it's confusing to *include* a namespace when\nyour intention is actually to *exclude* it, and it's also not\ncompletely excluding pg_temp.\n\nAlthough the syntax in the attached patch is not much friendlier, at\nleast it's clear that the intent is to exclude pg_temp. Furthermore, it\nwill be friendlier if we adopt the SEARCH SYSTEM syntax proposed in\nanother thread[1].\n\nAdditionally, this patch adds a WARNING when creating a schema that\nuses one of these special names. Previously, there was no warning when\ncreating a schema with the name \"$user\", which could cause confusion.\n\n[1]\nhttps://www.postgresql.org/message-id/flat/[email protected]\n\n\n-- \nJeff Davis\nPostgreSQL Contributor Team - AWS",
"msg_date": "Fri, 18 Aug 2023 14:44:31 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "[17] Special search_path names \"!pg_temp\" and \"!pg_catalog\""
},
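A minimal SQL sketch of the two styles under discussion, using an illustrative admin schema and function name; the "!pg_temp" spelling is only the syntax proposed by this patch (quoted because of the "!"), not something available in released PostgreSQL:

-- v16-style recommendation: list pg_temp explicitly, at the end
CREATE FUNCTION admin.get_one() RETURNS int
LANGUAGE sql
SET search_path = admin, pg_temp
AS 'SELECT 1';

-- proposed form: exclude pg_temp from the effective search path entirely
SET search_path = admin, "!pg_temp";

In both cases the goal is the same, keeping the temporary schema from being searched ahead of trusted schemas, but the second form states the exclusion directly rather than by ordering.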
{
"msg_contents": "Hi\n\npá 18. 8. 2023 v 23:44 odesílatel Jeff Davis <[email protected]> napsal:\n\n> The attached patch adds some special names to prevent pg_temp and/or\n> pg_catalog from being included implicitly.\n>\n> This is a useful safety feature for functions that don't have any need\n> to search pg_temp.\n>\n> The current (v16) recommendation is to include pg_temp last, which does\n> add to the safety, but it's confusing to *include* a namespace when\n> your intention is actually to *exclude* it, and it's also not\n> completely excluding pg_temp.\n>\n> Although the syntax in the attached patch is not much friendlier, at\n> least it's clear that the intent is to exclude pg_temp. Furthermore, it\n> will be friendlier if we adopt the SEARCH SYSTEM syntax proposed in\n> another thread[1].\n>\n> Additionally, this patch adds a WARNING when creating a schema that\n> uses one of these special names. Previously, there was no warning when\n> creating a schema with the name \"$user\", which could cause confusion.\n>\n> [1]\n>\n> https://www.postgresql.org/message-id/flat/[email protected]\n\n\ncannot be better special syntax\n\nCREATE OR REPLACE FUNCTION xxx()\nRETURNS yyy AS $$ ... $$$\nSET SEARCH_PATH DISABLE\n\nwith possible next modification\n\nSET SEARCH_PATH CATALOG .. only for pg_catalog\nSET SEARCH_PATH MINIMAL .. pg_catalog, pg_temp\n\nI question if we should block search path settings when this setting is\nused. Although I set search_path, the search_path can be overwritten in\nfunction of inside some nesting calls\n\n(2023-08-19 07:15:21) postgres=# create or replace function fx()\nreturns text as $$\nbegin\n perform set_config('search_path', 'public', false);\n return current_setting('search_path');\nend;\n$$ language plpgsql set search_path = 'pg_catalog';\nCREATE FUNCTION\n(2023-08-19 07:15:27) postgres=# select fx();\n┌────────┐\n│ fx │\n╞════════╡\n│ public │\n└────────┘\n(1 row)\n\n\n\n\n\n>\n>\n>\n> --\n> Jeff Davis\n> PostgreSQL Contributor Team - AWS\n>\n>\n>\n\nHipá 18. 8. 2023 v 23:44 odesílatel Jeff Davis <[email protected]> napsal:The attached patch adds some special names to prevent pg_temp and/or\npg_catalog from being included implicitly.\n\nThis is a useful safety feature for functions that don't have any need\nto search pg_temp.\n\nThe current (v16) recommendation is to include pg_temp last, which does\nadd to the safety, but it's confusing to *include* a namespace when\nyour intention is actually to *exclude* it, and it's also not\ncompletely excluding pg_temp.\n\nAlthough the syntax in the attached patch is not much friendlier, at\nleast it's clear that the intent is to exclude pg_temp. Furthermore, it\nwill be friendlier if we adopt the SEARCH SYSTEM syntax proposed in\nanother thread[1].\n\nAdditionally, this patch adds a WARNING when creating a schema that\nuses one of these special names. Previously, there was no warning when\ncreating a schema with the name \"$user\", which could cause confusion.\n\n[1]\nhttps://www.postgresql.org/message-id/flat/[email protected] be better special syntaxCREATE OR REPLACE FUNCTION xxx()RETURNS yyy AS $$ ... $$$SET SEARCH_PATH DISABLEwith possible next modificationSET SEARCH_PATH CATALOG .. only for pg_catalogSET SEARCH_PATH MINIMAL .. pg_catalog, pg_tempI question if we should block search path settings when this setting is used. 
Although I set search_path, the search_path can be overwritten in function of inside some nesting calls (2023-08-19 07:15:21) postgres=# create or replace function fx()returns text as $$begin perform set_config('search_path', 'public', false); return current_setting('search_path');end;$$ language plpgsql set search_path = 'pg_catalog';CREATE FUNCTION(2023-08-19 07:15:27) postgres=# select fx();┌────────┐│ fx │╞════════╡│ public │└────────┘(1 row) \n\n\n-- \nJeff Davis\nPostgreSQL Contributor Team - AWS",
"msg_date": "Sat, 19 Aug 2023 07:18:10 +0200",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [17] Special search_path names \"!pg_temp\" and \"!pg_catalog\""
},
{
"msg_contents": "On Sat, 2023-08-19 at 07:18 +0200, Pavel Stehule wrote:\n> cannot be better special syntax\n> \n> CREATE OR REPLACE FUNCTION xxx()\n> RETURNS yyy AS $$ ... $$$\n> SET SEARCH_PATH DISABLE\n> \n> with possible next modification\n> \n> SET SEARCH_PATH CATALOG .. only for pg_catalog\n> SET SEARCH_PATH MINIMAL .. pg_catalog, pg_temp\n\nI agree that we should consider new syntax, and there's a related\ndiscussion here:\n\nhttps://www.postgresql.org/message-id/flat/[email protected]\n\nRegardless, even with syntax changes, we need something to print when\nsomeone does a \"SHOW search_path\", i.e. some representation that\nindicates pg_temp is excluded. That way it can also be saved and\nrestored.\n\n> I question if we should block search path settings when this setting\n> is used. Although I set search_path, the search_path can be\n> overwritten in function of inside some nesting calls \n\nIf so, that should be a separate feature. For the purposes of this\nthread, we just need a way to represent a search path that excludes\npg_temp and/or pg_catalog.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Mon, 21 Aug 2023 09:08:43 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [17] Special search_path names \"!pg_temp\" and \"!pg_catalog\""
},
{
"msg_contents": "On Fri, Aug 18, 2023 at 02:44:31PM -0700, Jeff Davis wrote:\n> + SET search_path = admin, \"!pg_temp\";\n\nI think it's unfortunate that these new identifiers must be quoted. I\nwonder if we could call these something like \"no_pg_temp\". *shrug*\n\n> +\t * Add any implicitly-searched namespaces to the list unless the markers\n> +\t * \"!pg_catalog\" or \"!pg_temp\" are present. Note these go on the front,\n> +\t * not the back; also notice that we do not check USAGE permissions for\n> +\t * these.\n> \t */\n> -\tif (!list_member_oid(oidlist, PG_CATALOG_NAMESPACE))\n> +\tif (implicit_pg_catalog &&\n> +\t\t!list_member_oid(oidlist, PG_CATALOG_NAMESPACE))\n> \t\toidlist = lcons_oid(PG_CATALOG_NAMESPACE, oidlist);\n> \n> -\tif (OidIsValid(myTempNamespace) &&\n> +\tif (implicit_pg_temp &&\n> +\t\tOidIsValid(myTempNamespace) &&\n> \t\t!list_member_oid(oidlist, myTempNamespace))\n> \t\toidlist = lcons_oid(myTempNamespace, oidlist);\n\nShould we disallow including both !pg_temp and pg_temp at the same time? I\nworry that could be a source of confusion. IIUC your patches effectively\nignore !pg_temp if pg_temp is explicitly listed elsewhere in the list.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 26 Oct 2023 16:28:32 -0500",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [17] Special search_path names \"!pg_temp\" and \"!pg_catalog\""
},
{
"msg_contents": "On Thu, 2023-10-26 at 16:28 -0500, Nathan Bossart wrote:\n> On Fri, Aug 18, 2023 at 02:44:31PM -0700, Jeff Davis wrote:\n> > + SET search_path = admin, \"!pg_temp\";\n> \n> I think it's unfortunate that these new identifiers must be quoted. \n> I\n> wonder if we could call these something like \"no_pg_temp\". *shrug*\n\nDo you, overall, find this feature useful?\n\nMost functions don't need pg_temp, so it feels cleaner to exclude it.\nBut pg_temp is ignored for function/op lookup anyway, so functions\nwon't be exposed to search_path risks related to pg_temp unless they\nare accessing tables.\n\nIf my proposal for the SEARCH clause got more support, I'd be more\nexcited about this feature because it could be set implicitly as part\nof a safe search_path. Without the SEARCH clause, the only way to set\n\"!pg_temp\" is by typing it out, and I'm not sure a lot of people will\nactually do that.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Fri, 27 Oct 2023 12:58:47 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [17] Special search_path names \"!pg_temp\" and \"!pg_catalog\""
},
{
"msg_contents": "On Fri, Oct 27, 2023 at 12:58:47PM -0700, Jeff Davis wrote:\n> Do you, overall, find this feature useful?\n> \n> Most functions don't need pg_temp, so it feels cleaner to exclude it.\n> But pg_temp is ignored for function/op lookup anyway, so functions\n> won't be exposed to search_path risks related to pg_temp unless they\n> are accessing tables.\n> \n> If my proposal for the SEARCH clause got more support, I'd be more\n> excited about this feature because it could be set implicitly as part\n> of a safe search_path. Without the SEARCH clause, the only way to set\n> \"!pg_temp\" is by typing it out, and I'm not sure a lot of people will\n> actually do that.\n\nI thought it sounded generally useful, but if we're not going to proceed\nwith the primary use-case for this feature, then perhaps it's not worth\ngoing through this particular one-way door at this time.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 31 Oct 2023 11:31:45 -0500",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [17] Special search_path names \"!pg_temp\" and \"!pg_catalog\""
}
] |
[
{
"msg_contents": "It's entirely possible for a logical slot to have a confirmed_flush\nLSN higher than the last value saved on disk while not being marked as\ndirty. It's currently not a problem to lose that value during a clean\nshutdown / restart cycle but to support the upgrade of logical slots\n[1] (see latest patch at [2]), we seem to rely on that value being\nproperly persisted to disk. During the upgrade, we need to verify that\nall the data prior to shudown_checkpoint for the logical slots has\nbeen consumed, otherwise, the downstream may miss some data. Now, to\nensure the same, we are planning to compare the confirm_flush LSN\nlocation with the latest shudown_checkpoint location which means that\nthe confirm_flush LSN should be updated after restart.\n\nI think this is inefficient even without an upgrade because, after the\nrestart, this may lead to decoding some data again. Say, we process\nsome transactions for which we didn't send anything downstream (the\nchanges got filtered) but the confirm_flush LSN is updated due to\nkeepalives. As we don't flush the latest value of confirm_flush LSN,\nit may lead to processing the same changes again.\n\nThe idea discussed in the thread [1] is to always persist logical\nslots to disk during the shutdown checkpoint. I have extracted the\npatch to achieve the same from that thread and attached it here. This\ncould lead to some overhead during shutdown (checkpoint) if there are\nmany slots but it is probably a one-time work.\n\nI couldn't think of better ideas but another possibility is to mark\nthe slot as dirty when we update the confirm_flush LSN (see\nLogicalConfirmReceivedLocation()). However, that would be a bigger\noverhead in the running server as it could be a frequent operation and\ncould lead to more writes.\n\nThoughts?\n\n[1] - https://www.postgresql.org/message-id/TYAPR01MB58664C81887B3AF2EB6B16E3F5939%40TYAPR01MB5866.jpnprd01.prod.outlook.com\n[2] - https://www.postgresql.org/message-id/TYAPR01MB5866562EF047F2C9DDD1F9DEF51BA%40TYAPR01MB5866.jpnprd01.prod.outlook.com\n\n-- \nWith Regards,\nAmit Kapila.",
"msg_date": "Sat, 19 Aug 2023 11:46:38 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": true,
"msg_subject": "persist logical slots to disk during shutdown checkpoint"
},
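For reference, a sketch of the comparison being described here, with an illustrative slot name; after a clean shutdown and restart, the slot's confirmed flush position can be checked against the location of the shutdown checkpoint:

-- after restarting the server
SELECT slot_name, confirmed_flush_lsn
FROM pg_replication_slots
WHERE slot_name = 'sub';

-- compare the result with the "Latest checkpoint location" reported by
-- pg_controldata for the same data directory; without persisting slots at
-- the shutdown checkpoint, the restored value can lag behind that location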
{
"msg_contents": "On Sat, 19 Aug 2023, 14:16 Amit Kapila, <[email protected]> wrote:\n\n> It's entirely possible for a logical slot to have a confirmed_flush\n> LSN higher than the last value saved on disk while not being marked as\n> dirty. It's currently not a problem to lose that value during a clean\n> shutdown / restart cycle but to support the upgrade of logical slots\n> [1] (see latest patch at [2]), we seem to rely on that value being\n> properly persisted to disk. During the upgrade, we need to verify that\n> all the data prior to shudown_checkpoint for the logical slots has\n> been consumed, otherwise, the downstream may miss some data. Now, to\n> ensure the same, we are planning to compare the confirm_flush LSN\n> location with the latest shudown_checkpoint location which means that\n> the confirm_flush LSN should be updated after restart.\n>\n> I think this is inefficient even without an upgrade because, after the\n> restart, this may lead to decoding some data again. Say, we process\n> some transactions for which we didn't send anything downstream (the\n> changes got filtered) but the confirm_flush LSN is updated due to\n> keepalives. As we don't flush the latest value of confirm_flush LSN,\n> it may lead to processing the same changes again.\n>\n\nIn most cases there shouldn't be a lot of records to decode after restart,\nbut I agree it's better to avoid decoding those again.\n\nThe idea discussed in the thread [1] is to always persist logical\n> slots to disk during the shutdown checkpoint. I have extracted the\n> patch to achieve the same from that thread and attached it here. This\n> could lead to some overhead during shutdown (checkpoint) if there are\n> many slots but it is probably a one-time work.\n>\n> I couldn't think of better ideas but another possibility is to mark\n> the slot as dirty when we update the confirm_flush LSN (see\n> LogicalConfirmReceivedLocation()). However, that would be a bigger\n> overhead in the running server as it could be a frequent operation and\n> could lead to more writes.\n>\n\nYeah I didn't find any better option either at that time. I still think\nthat forcing persistence on shutdown is the best compromise. If we tried to\nalways mark the slot as dirty, we would be sure to add regular overhead but\nwe would probably end up persisting the slot on disk on shutdown anyway\nmost of the time, so I don't think it would be a good compromise.\n\nMy biggest concern was that some switchover scenario might be a bit slower\nin some cases, but if that really is a problem it's hard to imagine what\nworkload would be possible without having to persist them anyway due to\ncontinuous activity needing to be sent just before the shutdown.\n\n>\n\nOn Sat, 19 Aug 2023, 14:16 Amit Kapila, <[email protected]> wrote:It's entirely possible for a logical slot to have a confirmed_flush\nLSN higher than the last value saved on disk while not being marked as\ndirty. It's currently not a problem to lose that value during a clean\nshutdown / restart cycle but to support the upgrade of logical slots\n[1] (see latest patch at [2]), we seem to rely on that value being\nproperly persisted to disk. During the upgrade, we need to verify that\nall the data prior to shudown_checkpoint for the logical slots has\nbeen consumed, otherwise, the downstream may miss some data. 
Now, to\nensure the same, we are planning to compare the confirm_flush LSN\nlocation with the latest shudown_checkpoint location which means that\nthe confirm_flush LSN should be updated after restart.\n\nI think this is inefficient even without an upgrade because, after the\nrestart, this may lead to decoding some data again. Say, we process\nsome transactions for which we didn't send anything downstream (the\nchanges got filtered) but the confirm_flush LSN is updated due to\nkeepalives. As we don't flush the latest value of confirm_flush LSN,\nit may lead to processing the same changes again.In most cases there shouldn't be a lot of records to decode after restart, but I agree it's better to avoid decoding those again. \nThe idea discussed in the thread [1] is to always persist logical\nslots to disk during the shutdown checkpoint. I have extracted the\npatch to achieve the same from that thread and attached it here. This\ncould lead to some overhead during shutdown (checkpoint) if there are\nmany slots but it is probably a one-time work.\n\nI couldn't think of better ideas but another possibility is to mark\nthe slot as dirty when we update the confirm_flush LSN (see\nLogicalConfirmReceivedLocation()). However, that would be a bigger\noverhead in the running server as it could be a frequent operation and\ncould lead to more writes.Yeah I didn't find any better option either at that time. I still think that forcing persistence on shutdown is the best compromise. If we tried to always mark the slot as dirty, we would be sure to add regular overhead but we would probably end up persisting the slot on disk on shutdown anyway most of the time, so I don't think it would be a good compromise. My biggest concern was that some switchover scenario might be a bit slower in some cases, but if that really is a problem it's hard to imagine what workload would be possible without having to persist them anyway due to continuous activity needing to be sent just before the shutdown.",
"msg_date": "Sat, 19 Aug 2023 15:16:15 +0800",
"msg_from": "Julien Rouhaud <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: persist logical slots to disk during shutdown checkpoint"
},
{
"msg_contents": "On Sat, Aug 19, 2023 at 12:46 PM Julien Rouhaud <[email protected]> wrote:\n>\n> On Sat, 19 Aug 2023, 14:16 Amit Kapila, <[email protected]> wrote:\n>>\n>\n>> The idea discussed in the thread [1] is to always persist logical\n>> slots to disk during the shutdown checkpoint. I have extracted the\n>> patch to achieve the same from that thread and attached it here. This\n>> could lead to some overhead during shutdown (checkpoint) if there are\n>> many slots but it is probably a one-time work.\n>>\n>> I couldn't think of better ideas but another possibility is to mark\n>> the slot as dirty when we update the confirm_flush LSN (see\n>> LogicalConfirmReceivedLocation()). However, that would be a bigger\n>> overhead in the running server as it could be a frequent operation and\n>> could lead to more writes.\n>\n>\n> Yeah I didn't find any better option either at that time. I still think that forcing persistence on shutdown is the best compromise. If we tried to always mark the slot as dirty, we would be sure to add regular overhead but we would probably end up persisting the slot on disk on shutdown anyway most of the time, so I don't think it would be a good compromise.\n>\n\nThe other possibility is that we introduce yet another dirty flag for\nslots, say dirty_for_shutdown_checkpoint which will be set when we\nupdate confirmed_flush LSN. The flag will be cleared each time we\npersist the slot but we won't persist if only this flag is set. We can\nthen use it during the shutdown checkpoint to decide whether to\npersist the slot.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Sun, 20 Aug 2023 08:33:46 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: persist logical slots to disk during shutdown checkpoint"
},
{
"msg_contents": "On Sun, Aug 20, 2023 at 8:40 AM Amit Kapila <[email protected]> wrote:\n>\n> On Sat, Aug 19, 2023 at 12:46 PM Julien Rouhaud <[email protected]> wrote:\n> >\n> > On Sat, 19 Aug 2023, 14:16 Amit Kapila, <[email protected]> wrote:\n> >>\n> >\n> >> The idea discussed in the thread [1] is to always persist logical\n> >> slots to disk during the shutdown checkpoint. I have extracted the\n> >> patch to achieve the same from that thread and attached it here. This\n> >> could lead to some overhead during shutdown (checkpoint) if there are\n> >> many slots but it is probably a one-time work.\n> >>\n> >> I couldn't think of better ideas but another possibility is to mark\n> >> the slot as dirty when we update the confirm_flush LSN (see\n> >> LogicalConfirmReceivedLocation()). However, that would be a bigger\n> >> overhead in the running server as it could be a frequent operation and\n> >> could lead to more writes.\n> >\n> >\n> > Yeah I didn't find any better option either at that time. I still think that forcing persistence on shutdown is the best compromise. If we tried to always mark the slot as dirty, we would be sure to add regular overhead but we would probably end up persisting the slot on disk on shutdown anyway most of the time, so I don't think it would be a good compromise.\n> >\n>\n> The other possibility is that we introduce yet another dirty flag for\n> slots, say dirty_for_shutdown_checkpoint which will be set when we\n> update confirmed_flush LSN. The flag will be cleared each time we\n> persist the slot but we won't persist if only this flag is set. We can\n> then use it during the shutdown checkpoint to decide whether to\n> persist the slot.\n\nThere are already two booleans controlling dirty-ness of slot, dirty\nand just_dirty. Adding third will created more confusion.\n\nAnother idea is to record the confirm_flush_lsn at the time of\npersisting the slot. We can use it in two different ways 1. to mark a\nslot dirty and persist if the last confirm_flush_lsn when slot was\npersisted was too far from the current confirm_flush_lsn of the slot.\n2. at shutdown checkpoint, persist all the slots which have these two\nconfirm_flush_lsns different.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Mon, 21 Aug 2023 18:36:04 +0530",
"msg_from": "Ashutosh Bapat <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: persist logical slots to disk during shutdown checkpoint"
},
{
"msg_contents": "On Mon, Aug 21, 2023 at 6:36 PM Ashutosh Bapat\n<[email protected]> wrote:\n>\n> On Sun, Aug 20, 2023 at 8:40 AM Amit Kapila <[email protected]> wrote:\n> >\n> > The other possibility is that we introduce yet another dirty flag for\n> > slots, say dirty_for_shutdown_checkpoint which will be set when we\n> > update confirmed_flush LSN. The flag will be cleared each time we\n> > persist the slot but we won't persist if only this flag is set. We can\n> > then use it during the shutdown checkpoint to decide whether to\n> > persist the slot.\n>\n> There are already two booleans controlling dirty-ness of slot, dirty\n> and just_dirty. Adding third will created more confusion.\n>\n> Another idea is to record the confirm_flush_lsn at the time of\n> persisting the slot. We can use it in two different ways 1. to mark a\n> slot dirty and persist if the last confirm_flush_lsn when slot was\n> persisted was too far from the current confirm_flush_lsn of the slot.\n> 2. at shutdown checkpoint, persist all the slots which have these two\n> confirm_flush_lsns different.\n>\n\nI think using it in the second (2) way sounds advantageous as compared\nto storing another dirty flag because this requires us to update\nlast_persisted_confirm_flush_lsn only while writing the slot info.\nOTOH, having a flag dirty_for_shutdown_checkpoint will require us to\nupdate it each time we update confirm_flush_lsn under spinlock at\nmultiple places. But, I don't see the need of doing what you proposed\nin (1) as the use case for it is very minor, basically this may\nsometimes help us to avoid decoding after crash recovery.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 22 Aug 2023 09:48:15 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: persist logical slots to disk during shutdown checkpoint"
},
{
"msg_contents": "On Tue, Aug 22, 2023 at 9:48 AM Amit Kapila <[email protected]> wrote:\n> >\n> > Another idea is to record the confirm_flush_lsn at the time of\n> > persisting the slot. We can use it in two different ways 1. to mark a\n> > slot dirty and persist if the last confirm_flush_lsn when slot was\n> > persisted was too far from the current confirm_flush_lsn of the slot.\n> > 2. at shutdown checkpoint, persist all the slots which have these two\n> > confirm_flush_lsns different.\n> >\n>\n> I think using it in the second (2) way sounds advantageous as compared\n> to storing another dirty flag because this requires us to update\n> last_persisted_confirm_flush_lsn only while writing the slot info.\n> OTOH, having a flag dirty_for_shutdown_checkpoint will require us to\n> update it each time we update confirm_flush_lsn under spinlock at\n> multiple places. But, I don't see the need of doing what you proposed\n> in (1) as the use case for it is very minor, basically this may\n> sometimes help us to avoid decoding after crash recovery.\n\nOnce we have last_persisted_confirm_flush_lsn, (1) is just an\noptimization on top of that. With that we take the opportunity to\npersist confirmed_flush_lsn which is much farther than the current\npersisted value and thus improving chances of updating restart_lsn and\ncatalog_xmin faster after a WAL sender restart. We need to keep that\nin mind when implementing (2). The problem is if we don't implement\n(1) right now, we might just forget to do that small incremental\nchange in future. My preference is 1. Do both (1) and (2) together 2.\nDo (2) first and then (1) as a separate commit. 3. Just implement (2)\nif we don't have time at all for first two options.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Tue, 22 Aug 2023 14:56:20 +0530",
"msg_from": "Ashutosh Bapat <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: persist logical slots to disk during shutdown checkpoint"
},
{
"msg_contents": "On Tue, Aug 22, 2023 at 2:56 PM Ashutosh Bapat\n<[email protected]> wrote:\n>\n> On Tue, Aug 22, 2023 at 9:48 AM Amit Kapila <[email protected]> wrote:\n> > >\n> > > Another idea is to record the confirm_flush_lsn at the time of\n> > > persisting the slot. We can use it in two different ways 1. to mark a\n> > > slot dirty and persist if the last confirm_flush_lsn when slot was\n> > > persisted was too far from the current confirm_flush_lsn of the slot.\n> > > 2. at shutdown checkpoint, persist all the slots which have these two\n> > > confirm_flush_lsns different.\n> > >\n> >\n> > I think using it in the second (2) way sounds advantageous as compared\n> > to storing another dirty flag because this requires us to update\n> > last_persisted_confirm_flush_lsn only while writing the slot info.\n> > OTOH, having a flag dirty_for_shutdown_checkpoint will require us to\n> > update it each time we update confirm_flush_lsn under spinlock at\n> > multiple places. But, I don't see the need of doing what you proposed\n> > in (1) as the use case for it is very minor, basically this may\n> > sometimes help us to avoid decoding after crash recovery.\n>\n> Once we have last_persisted_confirm_flush_lsn, (1) is just an\n> optimization on top of that. With that we take the opportunity to\n> persist confirmed_flush_lsn which is much farther than the current\n> persisted value and thus improving chances of updating restart_lsn and\n> catalog_xmin faster after a WAL sender restart. We need to keep that\n> in mind when implementing (2). The problem is if we don't implement\n> (1) right now, we might just forget to do that small incremental\n> change in future. My preference is 1. Do both (1) and (2) together 2.\n> Do (2) first and then (1) as a separate commit. 3. Just implement (2)\n> if we don't have time at all for first two options.\n>\n\nI prefer one of (2) or (3). Anyway, it is better to do that\noptimization (persist confirm_flush_lsn at a regular interval) as a\nseparate patch as we need to test and prove its value separately.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 22 Aug 2023 15:42:40 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: persist logical slots to disk during shutdown checkpoint"
},
{
"msg_contents": "On Tue, Aug 22, 2023 at 3:42 PM Amit Kapila <[email protected]> wrote:\n> >\n> > Once we have last_persisted_confirm_flush_lsn, (1) is just an\n> > optimization on top of that. With that we take the opportunity to\n> > persist confirmed_flush_lsn which is much farther than the current\n> > persisted value and thus improving chances of updating restart_lsn and\n> > catalog_xmin faster after a WAL sender restart. We need to keep that\n> > in mind when implementing (2). The problem is if we don't implement\n> > (1) right now, we might just forget to do that small incremental\n> > change in future. My preference is 1. Do both (1) and (2) together 2.\n> > Do (2) first and then (1) as a separate commit. 3. Just implement (2)\n> > if we don't have time at all for first two options.\n> >\n>\n> I prefer one of (2) or (3). Anyway, it is better to do that\n> optimization (persist confirm_flush_lsn at a regular interval) as a\n> separate patch as we need to test and prove its value separately.\n\nFine with me.\n\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Tue, 22 Aug 2023 20:23:39 +0530",
"msg_from": "Ashutosh Bapat <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: persist logical slots to disk during shutdown checkpoint"
},
{
"msg_contents": "Dear hackers,\r\n\r\nThanks for forking the thread! I think we would choose another design, but I wanted\r\nto post the updated version once with the current approach. All comments came\r\nfrom the parent thread [1].\r\n\r\n> 1. GENERAL -- git apply\r\n>\r\n> The patch fails to apply cleanly. There are whitespace warnings.\r\n>\r\n> [postgres(at)CentOS7-x64 oss_postgres_misc]$ git apply\r\n> ../patches_misc/v23-0001-Always-persist-to-disk-logical-slots-during-a-sh.patch\r\n> ../patches_misc/v23-0001-Always-persist-to-disk-logical-slots-during-a-sh.patch:102:\r\n> trailing whitespace.\r\n> # SHUTDOWN_CHECKPOINT record.\r\n> warning: 1 line adds whitespace errors.\r\n\r\nThere was an extra blank, removed.\r\n\r\n> 2. GENERAL -- which patch is the real one and which is the copy?\r\n>\r\n> IMO this patch has become muddled.\r\n>\r\n> Amit recently created a new thread [1] \"persist logical slots to disk\r\n> during shutdown checkpoint\", which I thought was dedicated to the\r\n> discussion/implementation of this 0001 patch. Therefore, I expected any\r\n> 0001 patch changes to would be made only in that new thread from now on,\r\n> (and maybe you would mirror them here in this thread).\r\n>\r\n> But now I see there are v23-0001 patch changes here again. So, now the same\r\n> patch is in 2 places and they are different. It is no longer clear to me\r\n> which 0001 (\"Always persist...\") patch is the definitive one, and which one\r\n> is the copy.\r\n\r\nAttached one in another thread is just copy to make cfbot happy, it could be\r\nignored.\r\n\r\n> contrib/test_decoding/t/002_always_persist.pl\r\n>\r\n> 3.\r\n> +\r\n> +# Copyright (c) 2023, PostgreSQL Global Development Group\r\n> +\r\n> +# Test logical replication slots are always persist to disk during a\r\n> shutdown\r\n> +# checkpoint.\r\n> +\r\n> +use strict;\r\n> +use warnings;\r\n> +\r\n> +use PostgreSQL::Test::Cluster;\r\n> +use PostgreSQL::Test::Utils;\r\n> +use Test::More;\r\n>\r\n> /always persist/always persisted/\r\n\r\nFixed.\r\n\r\n> 4.\r\n> +\r\n> +# Test set-up\r\n> my $node = PostgreSQL::Test::Cluster->new('test');\r\n> $node->init(allows_streaming => 'logical');\r\n> $node->append_conf('postgresql.conf', q{\r\n> autovacuum = off\r\n> checkpoint_timeout = 1h\r\n> });\r\n>\r\n> $node->start;\r\n>\r\n> # Create table\r\n> $node->safe_psql('postgres', \"CREATE TABLE test (id int)\");\r\n>\r\n> Maybe it is better to call the table something different instead of the\r\n> same name as the cluster. e.g. 'test_tbl' would be better.\r\n\r\nChanged to 'test_tbl'.\r\n\r\n> 5.\r\n> +# Shutdown the node once to do shutdown checkpoint\r\n> $node->stop();\r\n>\r\n> SUGGESTION\r\n> # Stop the node to cause a shutdown checkpoint\r\n\r\nFixed.\r\n\r\n> 6.\r\n> +# Fetch checkPoint from the control file itself\r\n> my ($stdout, $stderr) = run_command([ 'pg_controldata', $node->data_dir ]);\r\n> my @control_data = split(\"\\n\", $stdout);\r\n> my $latest_checkpoint = undef;\r\n> foreach (@control_data)\r\n> {\r\n> if ($_ =~ /^Latest checkpoint location:\\s*(.*)$/mg)\r\n> {\r\n> $latest_checkpoint = $1;\r\n> last;\r\n> }\r\n> }\r\n> die \"No checkPoint in control file found\\n\"\r\n> unless defined($latest_checkpoint);\r\n>\r\n> 6a.\r\n> /checkPoint/checkpoint/ (2x)\r\n>\r\n> 6b.\r\n> +die \"No checkPoint in control file found\\n\"\r\n>\r\n> SUGGESTION\r\n> \"No checkpoint found in control file\\n\"\r\n\r\nHmm, these notations were followed the test recovery/t/016_min_consistency.pl,\r\nit uses the word \"minRecoveryPoint\". 
So I preferred current one.\r\n\r\n[1]: https://www.postgresql.org/message-id/CAHut%2BPtb%3DZYTM_awoLy3sJ5m9Oxe%3DJYn6Gve5rSW9cUdThpsVA%40mail.gmail.com\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED",
"msg_date": "Wed, 23 Aug 2023 05:10:13 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: persist logical slots to disk during shutdown checkpoint"
},
{
"msg_contents": "On Tue, 22 Aug 2023 at 15:42, Amit Kapila <[email protected]> wrote:\n>\n> On Tue, Aug 22, 2023 at 2:56 PM Ashutosh Bapat\n> <[email protected]> wrote:\n> >\n> > On Tue, Aug 22, 2023 at 9:48 AM Amit Kapila <[email protected]> wrote:\n> > > >\n> > > > Another idea is to record the confirm_flush_lsn at the time of\n> > > > persisting the slot. We can use it in two different ways 1. to mark a\n> > > > slot dirty and persist if the last confirm_flush_lsn when slot was\n> > > > persisted was too far from the current confirm_flush_lsn of the slot.\n> > > > 2. at shutdown checkpoint, persist all the slots which have these two\n> > > > confirm_flush_lsns different.\n> > > >\n> > >\n> > > I think using it in the second (2) way sounds advantageous as compared\n> > > to storing another dirty flag because this requires us to update\n> > > last_persisted_confirm_flush_lsn only while writing the slot info.\n> > > OTOH, having a flag dirty_for_shutdown_checkpoint will require us to\n> > > update it each time we update confirm_flush_lsn under spinlock at\n> > > multiple places. But, I don't see the need of doing what you proposed\n> > > in (1) as the use case for it is very minor, basically this may\n> > > sometimes help us to avoid decoding after crash recovery.\n> >\n> > Once we have last_persisted_confirm_flush_lsn, (1) is just an\n> > optimization on top of that. With that we take the opportunity to\n> > persist confirmed_flush_lsn which is much farther than the current\n> > persisted value and thus improving chances of updating restart_lsn and\n> > catalog_xmin faster after a WAL sender restart. We need to keep that\n> > in mind when implementing (2). The problem is if we don't implement\n> > (1) right now, we might just forget to do that small incremental\n> > change in future. My preference is 1. Do both (1) and (2) together 2.\n> > Do (2) first and then (1) as a separate commit. 3. Just implement (2)\n> > if we don't have time at all for first two options.\n> >\n>\n> I prefer one of (2) or (3). Anyway, it is better to do that\n> optimization (persist confirm_flush_lsn at a regular interval) as a\n> separate patch as we need to test and prove its value separately.\n\nHere is a patch to persist to disk logical slots during a shutdown\ncheckpoint if the updated confirmed_flush_lsn has not yet been\npersisted.\n\nRegards,\nVignesh",
"msg_date": "Wed, 23 Aug 2023 11:00:11 +0530",
"msg_from": "vignesh C <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: persist logical slots to disk during shutdown checkpoint"
},
{
"msg_contents": "Dear Vignesh,\r\n\r\n> Here is a patch to persist to disk logical slots during a shutdown\r\n> checkpoint if the updated confirmed_flush_lsn has not yet been\r\n> persisted.\r\n\r\nThanks for making the patch with different approach! Here are comments.\r\n\r\n01. RestoreSlotFromDisk\r\n\r\n```\r\n slot->candidate_xmin_lsn = InvalidXLogRecPtr;\r\n slot->candidate_restart_lsn = InvalidXLogRecPtr;\r\n slot->candidate_restart_valid = InvalidXLogRecPtr;\r\n+ slot->last_persisted_confirmed_flush = InvalidXLogRecPtr;\r\n```\r\n\r\nlast_persisted_confirmed_flush was set to InvalidXLogRecPtr, but isn't it better\r\nto use cp.slotdata. confirmed_flush? Assuming that the server is shut down immediately,\r\nyour patch forces to save.\r\n\r\n02. t/002_always_persist.pl\r\n\r\nThe original author of the patch is me, but I found that the test could pass\r\nwithout your patch. This is because pg_logical_slot_get_changes()->\r\npg_logical_slot_get_changes_guts(confirm = true) always mark the slot as dirty.\r\nIIUC we must use the logical replication system to verify the persistence.\r\nAttached test can pass only when patch is applied.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED",
"msg_date": "Wed, 23 Aug 2023 08:51:46 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: persist logical slots to disk during shutdown checkpoint"
},
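To illustrate the last point, a minimal sketch with illustrative object names and a placeholder connection string; a slot driven by a subscription can have its confirmed_flush_lsn advanced by walsender keepalives without being marked dirty, whereas (as noted above) pg_logical_slot_get_changes() dirties the slot itself on every call:

-- on the publisher
CREATE TABLE test_tbl (id int);
CREATE PUBLICATION pub FOR TABLE test_tbl;

-- on the subscriber (connection string is a placeholder)
CREATE TABLE test_tbl (id int);
CREATE SUBSCRIPTION sub CONNECTION 'dbname=postgres' PUBLICATION pub;

-- back on the publisher: observe the slot the subscription created
SELECT slot_name, confirmed_flush_lsn FROM pg_replication_slots;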
{
"msg_contents": "On Wed, 23 Aug 2023 at 14:21, Hayato Kuroda (Fujitsu)\n<[email protected]> wrote:\n>\n> Dear Vignesh,\n>\n> > Here is a patch to persist to disk logical slots during a shutdown\n> > checkpoint if the updated confirmed_flush_lsn has not yet been\n> > persisted.\n>\n> Thanks for making the patch with different approach! Here are comments.\n>\n> 01. RestoreSlotFromDisk\n>\n> ```\n> slot->candidate_xmin_lsn = InvalidXLogRecPtr;\n> slot->candidate_restart_lsn = InvalidXLogRecPtr;\n> slot->candidate_restart_valid = InvalidXLogRecPtr;\n> + slot->last_persisted_confirmed_flush = InvalidXLogRecPtr;\n> ```\n>\n> last_persisted_confirmed_flush was set to InvalidXLogRecPtr, but isn't it better\n> to use cp.slotdata. confirmed_flush? Assuming that the server is shut down immediately,\n> your patch forces to save.\n>\n> 02. t/002_always_persist.pl\n>\n> The original author of the patch is me, but I found that the test could pass\n> without your patch. This is because pg_logical_slot_get_changes()->\n> pg_logical_slot_get_changes_guts(confirm = true) always mark the slot as dirty.\n> IIUC we must use the logical replication system to verify the persistence.\n> Attached test can pass only when patch is applied.\n\nHere are few other comments that I noticed:\n\n1) I too noticed that the test passes both with and without patch:\ndiff --git a/contrib/test_decoding/meson.build\nb/contrib/test_decoding/meson.build\nindex 7b05cc25a3..12afb9ea8c 100644\n--- a/contrib/test_decoding/meson.build\n+++ b/contrib/test_decoding/meson.build\n@@ -72,6 +72,7 @@ tests += {\n 'tap': {\n 'tests': [\n 't/001_repl_stats.pl',\n+ 't/002_always_persist.pl',\n\n2) change checkPoint to checkpoint:\n2.a) checkPoint should be checkpoint to maintain consistency across the file:\n+# Shutdown the node once to do shutdown checkpoint\n+$node->stop();\n+\n+# Fetch checkPoint from the control file itself\n+my ($stdout, $stderr) = run_command([ 'pg_controldata', $node->data_dir ]);\n+my @control_data = split(\"\\n\", $stdout);\n+my $latest_checkpoint = undef;\n\n2.b) similarly here:\n+die \"No checkPoint in control file found\\n\"\n+ unless defined($latest_checkpoint);\n\n2.c) similarly here too:\n+# Compare confirmed_flush_lsn and checkPoint\n+ok($confirmed_flush eq $latest_checkpoint,\n+ \"Check confirmed_flush is same as latest checkpoint location\");\n\n3) change checkpoint to \"Latest checkpoint location\":\n3.a) We should change \"No checkPoint in control file found\\n\" to:\n\"Latest checkpoint location not found in control file\\n\" as there are\nmany checkpoint entries in control data\n\n+foreach (@control_data)\n+{\n+ if ($_ =~ /^Latest checkpoint location:\\s*(.*)$/mg)\n+ {\n+ $latest_checkpoint = $1;\n+ last;\n+ }\n+}\n+die \"No checkPoint in control file found\\n\"\n+ unless defined($latest_checkpoint);\n\n3.b) We should change \"Fetch checkPoint from the control file itself\" to:\n\"Fetch Latest checkpoint location from the control file\"\n\n+# Fetch checkPoint from the control file itself\n+my ($stdout, $stderr) = run_command([ 'pg_controldata', $node->data_dir ]);\n+my @control_data = split(\"\\n\", $stdout);\n+my $latest_checkpoint = undef;\n+foreach (@control_data)\n+{\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Wed, 23 Aug 2023 14:27:11 +0530",
"msg_from": "vignesh C <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: persist logical slots to disk during shutdown checkpoint"
},
{
"msg_contents": "On Wed, 23 Aug 2023 at 14:21, Hayato Kuroda (Fujitsu)\n<[email protected]> wrote:\n>\n> Dear Vignesh,\n>\n> > Here is a patch to persist to disk logical slots during a shutdown\n> > checkpoint if the updated confirmed_flush_lsn has not yet been\n> > persisted.\n>\n> Thanks for making the patch with different approach! Here are comments.\n>\n> 01. RestoreSlotFromDisk\n>\n> ```\n> slot->candidate_xmin_lsn = InvalidXLogRecPtr;\n> slot->candidate_restart_lsn = InvalidXLogRecPtr;\n> slot->candidate_restart_valid = InvalidXLogRecPtr;\n> + slot->last_persisted_confirmed_flush = InvalidXLogRecPtr;\n> ```\n>\n> last_persisted_confirmed_flush was set to InvalidXLogRecPtr, but isn't it better\n> to use cp.slotdata. confirmed_flush? Assuming that the server is shut down immediately,\n> your patch forces to save.\n\nModified\n\n> 02. t/002_always_persist.pl\n>\n> The original author of the patch is me, but I found that the test could pass\n> without your patch. This is because pg_logical_slot_get_changes()->\n> pg_logical_slot_get_changes_guts(confirm = true) always mark the slot as dirty.\n> IIUC we must use the logical replication system to verify the persistence.\n> Attached test can pass only when patch is applied.\n\nUpdate the test based on your another_test with slight modifications.\n\nAttached v4 version patch has the changes for the same.\n\nRegards,\nVignesh",
"msg_date": "Thu, 24 Aug 2023 11:43:53 +0530",
"msg_from": "vignesh C <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: persist logical slots to disk during shutdown checkpoint"
},
{
"msg_contents": "On Sat, 19 Aug 2023 at 11:53, Amit Kapila <[email protected]> wrote:\n>\n> It's entirely possible for a logical slot to have a confirmed_flush\n> LSN higher than the last value saved on disk while not being marked as\n> dirty. It's currently not a problem to lose that value during a clean\n> shutdown / restart cycle but to support the upgrade of logical slots\n> [1] (see latest patch at [2]), we seem to rely on that value being\n> properly persisted to disk. During the upgrade, we need to verify that\n> all the data prior to shudown_checkpoint for the logical slots has\n> been consumed, otherwise, the downstream may miss some data. Now, to\n> ensure the same, we are planning to compare the confirm_flush LSN\n> location with the latest shudown_checkpoint location which means that\n> the confirm_flush LSN should be updated after restart.\n>\n> I think this is inefficient even without an upgrade because, after the\n> restart, this may lead to decoding some data again. Say, we process\n> some transactions for which we didn't send anything downstream (the\n> changes got filtered) but the confirm_flush LSN is updated due to\n> keepalives. As we don't flush the latest value of confirm_flush LSN,\n> it may lead to processing the same changes again.\n\nI was able to test and verify that we were not processing the same\nchanges again.\nNote: The 0001-Add-logs-to-skip-transaction-filter-insert-operation.patch\nhas logs to print if a decode transaction is skipped and also a log to\nmention if any operation is filtered.\nThe test.sh script has the steps for a) setting up logical replication\nfor a table b) perform insert on table that need to be published (this\nwill be replicated to the subscriber) c) perform insert on a table\nthat will not be published (this insert will be filtered, it will not\nbe replicated) d) sleep for 5 seconds e) stop the server f) start the\nserver\nI used the following steps, do the following in HEAD:\na) Apply 0001-Add-logs-to-skip-transaction-filter-insert-operation.patch\npatch in Head and build the binaries b) execute test.sh c) view N1.log\nfile to see that the insert operations were filtered again by seeing\nthe following logs:\nLOG: Filter insert for table tbl2\n...\n===restart===\n...\nLOG: Skipping transaction 0/156AD10 as start decode at is greater 0/156AE40\n...\nLOG: Filter insert for table tbl2\n\nWe can see that the insert operations on tbl2 which was filtered\nbefore server was stopped is again filtered after restart too in HEAD.\n\nLets see that the same changes were not processed again with patch:\na) Apply v4-0001-Persist-to-disk-logical-slots-during-a-shutdown-c.patch\nfrom [1] also apply\n0001-Add-logs-to-skip-transaction-filter-insert-operation.patch patch\nand build the binaries b) execute test.sh c) view N1.log file to see\nthat the insert operations were skipped after restart of server by\nseeing the following logs:\nLOG: Filter insert for table tbl2\n...\n===restart===\n...\nSkipping transaction 0/156AD10 as start decode at is greater 0/156AFB0\n...\nSkipping transaction 0/156AE80 as start decode at is greater 0/156AFB0\n\nWe can see that the insert operations on tbl2 are not processed again\nafter restart with the patch.\n\n[1] - https://www.postgresql.org/message-id/CALDaNm0VrAt24e2FxbOX6eJQ-G_tZ0gVpsFBjzQM99NxG0hZfg%40mail.gmail.com\n\nRegards,\nVignesh",
"msg_date": "Fri, 25 Aug 2023 17:40:36 +0530",
"msg_from": "vignesh C <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: persist logical slots to disk during shutdown checkpoint"
},
{
"msg_contents": "On Fri, 25 Aug 2023 at 17:40, vignesh C <[email protected]> wrote:\n>\n> On Sat, 19 Aug 2023 at 11:53, Amit Kapila <[email protected]> wrote:\n> >\n> > It's entirely possible for a logical slot to have a confirmed_flush\n> > LSN higher than the last value saved on disk while not being marked as\n> > dirty. It's currently not a problem to lose that value during a clean\n> > shutdown / restart cycle but to support the upgrade of logical slots\n> > [1] (see latest patch at [2]), we seem to rely on that value being\n> > properly persisted to disk. During the upgrade, we need to verify that\n> > all the data prior to shudown_checkpoint for the logical slots has\n> > been consumed, otherwise, the downstream may miss some data. Now, to\n> > ensure the same, we are planning to compare the confirm_flush LSN\n> > location with the latest shudown_checkpoint location which means that\n> > the confirm_flush LSN should be updated after restart.\n> >\n> > I think this is inefficient even without an upgrade because, after the\n> > restart, this may lead to decoding some data again. Say, we process\n> > some transactions for which we didn't send anything downstream (the\n> > changes got filtered) but the confirm_flush LSN is updated due to\n> > keepalives. As we don't flush the latest value of confirm_flush LSN,\n> > it may lead to processing the same changes again.\n>\n> I was able to test and verify that we were not processing the same\n> changes again.\n> Note: The 0001-Add-logs-to-skip-transaction-filter-insert-operation.patch\n> has logs to print if a decode transaction is skipped and also a log to\n> mention if any operation is filtered.\n> The test.sh script has the steps for a) setting up logical replication\n> for a table b) perform insert on table that need to be published (this\n> will be replicated to the subscriber) c) perform insert on a table\n> that will not be published (this insert will be filtered, it will not\n> be replicated) d) sleep for 5 seconds e) stop the server f) start the\n> server\n> I used the following steps, do the following in HEAD:\n> a) Apply 0001-Add-logs-to-skip-transaction-filter-insert-operation.patch\n> patch in Head and build the binaries b) execute test.sh c) view N1.log\n> file to see that the insert operations were filtered again by seeing\n> the following logs:\n> LOG: Filter insert for table tbl2\n> ...\n> ===restart===\n> ...\n> LOG: Skipping transaction 0/156AD10 as start decode at is greater 0/156AE40\n> ...\n> LOG: Filter insert for table tbl2\n>\n> We can see that the insert operations on tbl2 which was filtered\n> before server was stopped is again filtered after restart too in HEAD.\n>\n> Lets see that the same changes were not processed again with patch:\n> a) Apply v4-0001-Persist-to-disk-logical-slots-during-a-shutdown-c.patch\n> from [1] also apply\n> 0001-Add-logs-to-skip-transaction-filter-insert-operation.patch patch\n> and build the binaries b) execute test.sh c) view N1.log file to see\n> that the insert operations were skipped after restart of server by\n> seeing the following logs:\n> LOG: Filter insert for table tbl2\n> ...\n> ===restart===\n> ...\n> Skipping transaction 0/156AD10 as start decode at is greater 0/156AFB0\n> ...\n> Skipping transaction 0/156AE80 as start decode at is greater 0/156AFB0\n>\n> We can see that the insert operations on tbl2 are not processed again\n> after restart with the patch.\n\nHere is another way to test using pg_replslotdata approach that was\nproposed earlier at [1].\nI have rebased this on 
top of HEAD and the v5 version for the same is attached.\n\nWe can use the same test as test.sh shared at [2].\nWhen executed with HEAD, it was noticed that confirmed_flush points to\nWAL location before both the transaction:\nslot_name slot_type datoid persistency xmin catalog_xmin\n restart_lsn confirmed_flush two_phase_at two_phase\nplugin\n--------- --------- ------ ---------- ----\n ----------- ----------- ---------------\n ------------ --------- ------\nsub logical 5 persistent 0\n735 0/1531E28 0/1531E60 0/0\n 0 pgoutput\n\nWAL record information generated using pg_walinspect for various\nrecords at and after confirmed_flush WAL 0/1531E60:\n row_number | start_lsn | end_lsn | prev_lsn | xid |\nresource_manager | record_type | record_length |\nmain_data_length | fpi_length |\n description\n |\n block_ref\n------------+-----------+-----------+-----------+-----+------------------+---------------------+---------------+------------------+------------+-------------------------------------------------------------------\n--------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------\n 1 | 0/1531E60 | 0/1531EA0 | 0/1531E28 | 0 | Heap2\n | PRUNE | 57 | 9 |\n0 | snapshotConflictHorizon: 0, nredirected: 0, ndead: 1, nunused: 0,\nredirected: [], dead: [1], unused: []\n |\nblkref #0: rel 1663/5/1255 fork main blk 58\n 2 | 0/1531EA0 | 0/1531EE0 | 0/1531E60 | 735 | Heap\n | INSERT+INIT | 59 | 3 |\n0 | off: 1, flags: 0x08\n\n |\nblkref #0: rel 1663/5/16384 fork main blk 0\n 3 | 0/1531EE0 | 0/1531F20 | 0/1531EA0 | 735 | Heap\n | INSERT | 59 | 3 |\n0 | off: 2, flags: 0x08\n\n |\nblkref #0: rel 1663/5/16384 fork main blk 0\n 4 | 0/1531F20 | 0/1531F60 | 0/1531EE0 | 735 | Heap\n | INSERT | 59 | 3 |\n0 | off: 3, flags: 0x08\n\n |\nblkref #0: rel 1663/5/16384 fork main blk 0\n 5 | 0/1531F60 | 0/1531FA0 | 0/1531F20 | 735 | Heap\n | INSERT | 59 | 3 |\n0 | off: 4, flags: 0x08\n\n |\nblkref #0: rel 1663/5/16384 fork main blk 0\n 6 | 0/1531FA0 | 0/1531FE0 | 0/1531F60 | 735 | Heap\n | INSERT | 59 | 3 |\n0 | off: 5, flags: 0x08\n\n |\nblkref #0: rel 1663/5/16384 fork main blk 0\n 7 | 0/1531FE0 | 0/1532028 | 0/1531FA0 | 735 | Transaction\n | COMMIT | 46 | 20 |\n0 | 2023-08-27 23:22:17.161215+05:30\n\n |\n 8 | 0/1532028 | 0/1532068 | 0/1531FE0 | 736 | Heap\n | INSERT+INIT | 59 | 3 |\n0 | off: 1, flags: 0x08\n\n |\nblkref #0: rel 1663/5/16387 fork main blk 0\n 9 | 0/1532068 | 0/15320A8 | 0/1532028 | 736 | Heap\n | INSERT | 59 | 3 |\n0 | off: 2, flags: 0x08\n\n |\nblkref #0: rel 1663/5/16387 fork main blk 0\n 10 | 0/15320A8 | 0/15320E8 | 0/1532068 | 736 | Heap\n | INSERT | 59 | 3 |\n0 | off: 3, flags: 0x08\n\n |\nblkref #0: rel 1663/5/16387 fork main blk 0\n 11 | 0/15320E8 | 0/1532128 | 0/15320A8 | 736 | Heap\n | INSERT | 59 | 3 |\n0 | off: 4, flags: 0x08\n\n |\nblkref #0: rel 1663/5/16387 fork main blk 0\n 12 | 0/1532128 | 0/1532168 | 0/15320E8 | 736 | Heap\n | INSERT | 59 | 3 |\n0 | off: 5, flags: 0x08\n\n |\nblkref #0: rel 1663/5/16387 fork main blk 0\n 13 | 0/1532168 | 0/1532198 | 0/1532128 | 736 | Transaction\n | COMMIT | 46 | 20 |\n0 | 2023-08-27 23:22:17.174756+05:30\n\n |\n 14 | 0/1532198 | 0/1532210 | 0/1532168 | 0 | XLOG\n | CHECKPOINT_SHUTDOWN | 114 | 88 |\n0 | redo 0/1532198; tli 1; prev tli 1; fpw true; xid 0:737; oid 16399;\n multi 1; offset 0; oldest xid 723 in DB 1; oldest multi 1 in DB 1;\noldest/newest commit timestamp xid: 0/0; oldest running xid 0;\nshutdown |\n\nWhereas the 
same test executed with the patch applied shows that\nconfirmed_flush points to CHECKPOINT_SHUTDOWN record:\nslot_name slot_type datoid persistency xmin catalog_xmin\nrestart_lsn confirmed_flush two_phase_at two_phase\n plugin\n--------- --------- ------ ----------- ---\n----------- ----------- ---------------\n----------- --------- ------\nsub logical 5 persistent 0 735\n 0/1531E28 0/1532198 0/0 0\n pgoutput\n\nWAL record information generated using pg_walinspect for various\nrecords at and after confirmed_flush WAL 0/1532198:\n row_number | start_lsn | end_lsn | prev_lsn | xid |\nresource_manager | record_type | record_length |\nmain_data_length | fpi_length |\n description\n |\nblock_ref\n------------+-----------+-----------+-----------+-----+------------------+---------------------+---------------+------------------+------------+-------------------------------------------------------------------\n--------------------------------------------------------------------------------------------------------------------------------------------+-----------\n 1 | 0/1532198 | 0/1532210 | 0/1532168 | 0 | XLOG\n | CHECKPOINT_SHUTDOWN | 114 | 88 |\n0 | redo 0/1532198; tli 1; prev tli 1; fpw true; xid 0:737; oid 16399;\n multi 1; offset 0; oldest xid 723 in DB 1; oldest multi 1 in DB 1;\noldest/newest commit timestamp xid: 0/0; oldest running xid 0;\nshutdown |\n(1 row)\n\n[1] - https://www.postgresql.org/message-id/flat/CALj2ACW0rV5gWK8A3m6_X62qH%2BVfaq5hznC%3Di0R5Wojt5%2Byhyw%40mail.gmail.com\n[2] - https://www.postgresql.org/message-id/CALDaNm2BboFuFVYxyzP4wkv7%3D8%2B_TwsD%2BugyGhtibTSF4m4XRg%40mail.gmail.com\n\nRegards,\nVignesh",
"msg_date": "Mon, 28 Aug 2023 00:02:28 +0530",
"msg_from": "vignesh C <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: persist logical slots to disk during shutdown checkpoint"
},
{
"msg_contents": "Dear hackers,\r\n\r\nI also tested for logical slots on the physical standby. PSA the script.\r\nconfirmed_flush_lsn for such slots were successfully persisted.\r\n\r\n# Topology\r\n\r\nIn this test nodes are connected each other.\r\n\r\nnode1 --(physical replication)-->node2--(logical replication)-->node3\r\n\r\n# Test method\r\n\r\nAn attached script did following steps\r\n\r\n1. constructed above configurations\r\n2. Inserted data on node1\r\n3. read confirmed_flush_lsn on node2 (a)\r\n4. restarted node2\r\n5. read confirmed_flush_lsn again on node2 (b)\r\n6. compare (a) and (b)\r\n\r\n# result\r\n\r\nBefore patching, (a) and (b) were different value, which meant that logical\r\nslots on physical standby were not saved at shutdown.\r\n\r\n```\r\nslot_name | confirmed_flush_lsn \r\n-----------+---------------------\r\n sub | 0/30003E8\r\n(1 row)\r\n\r\nwaiting for server to shut down.... done\r\nserver stopped\r\nwaiting for server to start.... done\r\nserver started\r\n slot_name | confirmed_flush_lsn \r\n-----------+---------------------\r\n sub | 0/30000D8\r\n(1 row)\r\n```\r\n\r\nAfter patching, (a) and (b) became the same value. The v4 patch worked well even\r\nif the node is physical standby.\r\n\r\n```\r\nslot_name | confirmed_flush_lsn \r\n-----------+---------------------\r\n sub | 0/30003E8\r\n(1 row)\r\n\r\nwaiting for server to shut down.... done\r\nserver stopped\r\nwaiting for server to start.... done\r\nserver started\r\n slot_name | confirmed_flush_lsn \r\n-----------+---------------------\r\n sub | 0/30003E8\r\n(1 row)\r\n```\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED",
"msg_date": "Mon, 28 Aug 2023 11:48:14 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: persist logical slots to disk during shutdown checkpoint"
},
{
"msg_contents": "On Thu, Aug 24, 2023 at 11:44 AM vignesh C <[email protected]> wrote:\n>\n\nThe patch looks mostly good to me. I have made minor changes which are\nas follows: (a) removed the autovacuum =off and\nwal_receiver_status_interval = 0 setting as those doesn't seem to be\nrequired for the test; (b) changed a few comments and variable names\nin the code and test;\n\nShall we change the test file name from always_persist to\nsave_logical_slots_shutdown and move to recovery/t/ as this test is\nabout verification after the restart of the server?\n\n-- \nWith Regards,\nAmit Kapila.",
"msg_date": "Mon, 28 Aug 2023 18:55:57 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: persist logical slots to disk during shutdown checkpoint"
},
{
"msg_contents": "On Mon, 28 Aug 2023 at 18:56, Amit Kapila <[email protected]> wrote:\n>\n> On Thu, Aug 24, 2023 at 11:44 AM vignesh C <[email protected]> wrote:\n> >\n>\n> The patch looks mostly good to me. I have made minor changes which are\n> as follows: (a) removed the autovacuum =off and\n> wal_receiver_status_interval = 0 setting as those doesn't seem to be\n> required for the test; (b) changed a few comments and variable names\n> in the code and test;\n>\n> Shall we change the test file name from always_persist to\n> save_logical_slots_shutdown and move to recovery/t/ as this test is\n> about verification after the restart of the server?\n\nThat makes sense. The attached v6 version has the changes for the\nsame, apart from this I have also fixed a) pgindent issues b) perltidy\nissues c) one variable change (flush_lsn_changed to\nconfirmed_flush_has_changed) d) corrected few comments in the test\nfile. Thanks to Peter for providing few offline comments.\n\nRegards,\nVignesh",
"msg_date": "Tue, 29 Aug 2023 10:16:18 +0530",
"msg_from": "vignesh C <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: persist logical slots to disk during shutdown checkpoint"
},
{
"msg_contents": "On Tue, Aug 29, 2023 at 10:16 AM vignesh C <[email protected]> wrote:\n>\n> That makes sense. The attached v6 version has the changes for the\n> same, apart from this I have also fixed a) pgindent issues b) perltidy\n> issues c) one variable change (flush_lsn_changed to\n> confirmed_flush_has_changed) d) corrected few comments in the test\n> file. Thanks to Peter for providing few offline comments.\n>\n\nThe latest version looks good to me. Julien, Ashutosh, and others,\nunless you have more comments or suggestions, I would like to push\nthis in a day or two.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 29 Aug 2023 14:21:15 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: persist logical slots to disk during shutdown checkpoint"
},
{
"msg_contents": "On Tue, Aug 29, 2023 at 2:21 PM Amit Kapila <[email protected]> wrote:\n>\n> On Tue, Aug 29, 2023 at 10:16 AM vignesh C <[email protected]> wrote:\n> >\n> > That makes sense. The attached v6 version has the changes for the\n> > same, apart from this I have also fixed a) pgindent issues b) perltidy\n> > issues c) one variable change (flush_lsn_changed to\n> > confirmed_flush_has_changed) d) corrected few comments in the test\n> > file. Thanks to Peter for providing few offline comments.\n> >\n>\n> The latest version looks good to me. Julien, Ashutosh, and others,\n> unless you have more comments or suggestions, I would like to push\n> this in a day or two.\n\nI am looking at it. If you can wait till the end of the week, that\nwill be great.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Tue, 29 Aug 2023 17:40:25 +0530",
"msg_from": "Ashutosh Bapat <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: persist logical slots to disk during shutdown checkpoint"
},
{
"msg_contents": "Hi,\n\nOn Tue, Aug 29, 2023 at 02:21:15PM +0530, Amit Kapila wrote:\n> On Tue, Aug 29, 2023 at 10:16 AM vignesh C <[email protected]> wrote:\n> >\n> > That makes sense. The attached v6 version has the changes for the\n> > same, apart from this I have also fixed a) pgindent issues b) perltidy\n> > issues c) one variable change (flush_lsn_changed to\n> > confirmed_flush_has_changed) d) corrected few comments in the test\n> > file. Thanks to Peter for providing few offline comments.\n> >\n>\n> The latest version looks good to me. Julien, Ashutosh, and others,\n> unless you have more comments or suggestions, I would like to push\n> this in a day or two.\n\nUnfortunately I'm currently swamped with some internal escalations so I\ncouldn't keep up closely with the latest activity here.\n\nI think I recall that you wanted to\nchange the timing at which logical slots are shutdown, I'm assuming that this\nchange won't lead to always have a difference between the LSN and latest\npersisted LSN being different? Otherwise saving the latest persisted LSN to\ntry to avoid persisting again all logical slots on shutdown seems reasonable to\nme.\n\n\n",
"msg_date": "Wed, 30 Aug 2023 11:33:00 +0800",
"msg_from": "Julien Rouhaud <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: persist logical slots to disk during shutdown checkpoint"
},
{
"msg_contents": "On Wed, Aug 30, 2023 at 9:03 AM Julien Rouhaud <[email protected]> wrote:\n>\n> On Tue, Aug 29, 2023 at 02:21:15PM +0530, Amit Kapila wrote:\n> > On Tue, Aug 29, 2023 at 10:16 AM vignesh C <[email protected]> wrote:\n> > >\n> > > That makes sense. The attached v6 version has the changes for the\n> > > same, apart from this I have also fixed a) pgindent issues b) perltidy\n> > > issues c) one variable change (flush_lsn_changed to\n> > > confirmed_flush_has_changed) d) corrected few comments in the test\n> > > file. Thanks to Peter for providing few offline comments.\n> > >\n> >\n> > The latest version looks good to me. Julien, Ashutosh, and others,\n> > unless you have more comments or suggestions, I would like to push\n> > this in a day or two.\n>\n> Unfortunately I'm currently swamped with some internal escalations so I\n> couldn't keep up closely with the latest activity here.\n>\n> I think I recall that you wanted to\n> change the timing at which logical slots are shutdown, I'm assuming that this\n> change won't lead to always have a difference between the LSN and latest\n> persisted LSN being different?\n>\n\nI think here by LSN you are referring to confirmed_flush LSN. If so,\nthis doesn't create any new difference between the values for the\nconfirmed_flush LSN in memory and in disk. We just remember the last\npersisted value to avoid writes of slots at shutdown time.\n\n> Otherwise saving the latest persisted LSN to\n> try to avoid persisting again all logical slots on shutdown seems reasonable to\n> me.\n\nThanks for responding.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 30 Aug 2023 09:46:10 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: persist logical slots to disk during shutdown checkpoint"
},
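To make the approach described in the message above concrete, here is a minimal sketch of the mechanism under discussion: the slot remembers the confirmed_flush value that last reached disk, and the shutdown checkpoint re-serializes a logical slot only if the in-memory value has moved since then. The field name last_saved_confirmed_flush comes from the patch being reviewed in this thread; the helper function below and its name are purely illustrative and not part of the proposed code.

```
/*
 * Sketch only: illustrates the idea discussed in this thread, assuming the
 * patch adds one field to ReplicationSlot (replication/slot.h):
 *
 *     XLogRecPtr  last_saved_confirmed_flush;   (value written at last save)
 */
#include "postgres.h"
#include "replication/slot.h"
#include "storage/spin.h"

/* Hypothetical helper: does this logical slot need one more write at shutdown? */
static bool
slot_needs_shutdown_save(ReplicationSlot *slot)
{
    bool        changed;

    if (!SlotIsLogical(slot))
        return false;

    SpinLockAcquire(&slot->mutex);
    changed = (slot->data.confirmed_flush != slot->last_saved_confirmed_flush);
    SpinLockRelease(&slot->mutex);

    return changed;
}
```

At shutdown-checkpoint time the slot would then be written out when this returns true, even though nothing else has marked it dirty since the last save.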
{
"msg_contents": "On Tue, Aug 29, 2023 at 5:40 PM Ashutosh Bapat\n<[email protected]> wrote:\n>\n> I am looking at it. If you can wait till the end of the week, that\n> will be great.\n\n /*\n * Successfully wrote, unset dirty bit, unless somebody dirtied again\n- * already.\n+ * already and remember the confirmed_flush LSN value.\n */\n SpinLockAcquire(&slot->mutex);\n if (!slot->just_dirtied)\n slot->dirty = false;\n+ slot->last_saved_confirmed_flush = slot->data.confirmed_flush;\n\nIf the slot->data.confirmed_flush gets updated concurrently between copying it\nto be written to the disk and when it's written to last_saved_confirmed_flush,\nwe will miss one update. I think we need to update last_saved_confirmed_flush\nbased on the cp.slotdata.confirmed_flush rather than\nslot->data.confirmed_flush.\n\nWe are setting last_saved_confirmed_flush for all types of slots but using it\nonly when the slot is logical. Should we also set it only for logical slots?\n\n /* first check whether there's something to write out */\n SpinLockAcquire(&slot->mutex);\n was_dirty = slot->dirty;\n slot->just_dirtied = false;\n+ confirmed_flush_has_changed = (slot->data.confirmed_flush !=\nslot->last_saved_confirmed_flush);\n\nThe confirmed_flush LSN should always move forward, otherwise there may not be\nenough WAL retained for the slot to work. I am wondering whether we should take\nan opportunity to make sure\nAssert(slot->data.confirmed_flush <= slot->last_saved_confirmed_flush)\n\n- /* and don't do anything if there's nothing to write */\n- if (!was_dirty)\n+ /* Don't do anything if there's nothing to write. See ReplicationSlot. */\n+ if (!was_dirty &&\n+ !(is_shutdown && SlotIsLogical(slot) && confirmed_flush_has_changed))\n\nRather than complicating this condition, I wonder whether it's better to just\nset was_dirty = true when is_shutdown && SlotIsLogical(slot) &&\nconfirmed_flush_has_changed) or even slot->dirty = true. See also the note at\nthe end of the email.\n\n+\n+ /*\n+ * We won't ensure that the slot is persisted after the confirmed_flush\n+ * LSN is updated as that could lead to frequent writes. However, we need\n+ * to ensure that we do persist the slots at the time of shutdown whose\n+ * confirmed_flush LSN is changed since we last saved the slot to disk.\n+ * This will help in avoiding retreat of the confirmed_flush LSN after\n+ * restart. This variable is used to track the last saved confirmed_flush\n+ * LSN value.\n+ */\n\nThis comment makes more sense in SaveSlotToPath() than here. 
We may decide to\nuse last_saved_confirmed_flush for something else in future.\n+\n+sub compare_confirmed_flush\n+{\n+ my ($node, $confirmed_flush_from_log) = @_;\n+\n+ # Fetch Latest checkpoint location from the control file\n+ my ($stdout, $stderr) =\n+ run_command([ 'pg_controldata', $node->data_dir ]);\n+ my @control_data = split(\"\\n\", $stdout);\n+ my $latest_checkpoint = undef;\n+ foreach (@control_data)\n+ {\n+ if ($_ =~ /^Latest checkpoint location:\\s*(.*)$/mg)\n+ {\n+ $latest_checkpoint = $1;\n+ last;\n+ }\n+ }\n+ die \"Latest checkpoint location not found in control file\\n\"\n+ unless defined($latest_checkpoint);\n+\n+ # Is it same as the value read from log?\n+ ok( $latest_checkpoint eq $confirmed_flush_from_log,\n+ \"Check that the slot's confirmed_flush LSN is the same as the\nlatest_checkpoint location\"\n+ );\n\nThis function assumes that the subscriber will receive and confirm WAL upto\ncheckpoint location and publisher's WAL sender will update it in the slot.\nWhere is the code to ensure that? Does the WAL sender process wait for\ncheckpoint\nLSN to be confirmed when shutting down?\n+\n+# Restart the publisher to ensure that the slot will be persisted if required\n+$node_publisher->restart();\n\nCan we add this test comparing LSNs after publisher restart, to an existing\ntest itself - like basic replication. That's the only extra thing that this\ntest does beyond usual replication stuff.\n\n+\n+# Wait until the walsender creates decoding context\n+$node_publisher->wait_for_log(\n+ qr/Streaming transactions committing after\n([A-F0-9]+\\/[A-F0-9]+), reading WAL from ([A-F0-9]+\\/[A-F0-9]+)./,\n+ $offset);\n+\n+# Extract confirmed_flush from the logfile\n+my $log_contents = slurp_file($node_publisher->logfile, $offset);\n+$log_contents =~\n+ qr/Streaming transactions committing after ([A-F0-9]+\\/[A-F0-9]+),\nreading WAL from ([A-F0-9]+\\/[A-F0-9]+)./\n+ or die \"could not get confirmed_flush_lsn\";\n\nWhy are we figuring out the LSN from the log file? Is it not available from\npg_replication_slots view? If we do so, I think this test will fail is the slot\ngets written after the restart because of concurrent activity on the publisher\n(like autovacuum, or other things that cause empty transaction to be\nreplicated) and subscriber. A very rare chance but not 0 probability one. I\nthink we should shut down subscriber, restart publisher and then make this\ncheck based on the contents of the replication slot instead of server log.\nShutting down subscriber will ensure that the subscriber won't send any new\nconfirmed flush location to the publisher after restart.\n\nAll the places which call ReplicationSlotSave() mark the slot as dirty. All\nthe places where SaveSlotToPath() is called, the slot is marked dirty except\nwhen calling from CheckPointReplicationSlots(). So I am wondering whether we\nshould be marking the slot dirty in CheckPointReplicationSlots() and avoid\npassing down is_shutdown flag to SaveSlotToPath().\n\nUnrelated to this patch, I noticed that all the callers of SaveSlotToPath()\nhave the same code to craft replication slot file path. I wonder if that needs\nto be macro'ised or added to some common function or to be pushed into\nSaveSlotToPath() itself to make sure that any changes to the path in future are\nconsistent for all callers of SaveSlotToPath(). Interestingly slot->data.name\nis accessed without a lock here. Name of the slot does not change after\ncreation so this isn't a problem right now. 
But generally against the principle\nof accessing data protected by a mutex.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Wed, 30 Aug 2023 18:32:50 +0530",
"msg_from": "Ashutosh Bapat <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: persist logical slots to disk during shutdown checkpoint"
},
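For the first point in the review above (the possible lost update between copying the slot for the write and recording last_saved_confirmed_flush), a sketch of the suggested fix could look like the following. It assumes it lives in slot.c next to SaveSlotToPath(), where the on-disk struct (the 'cp' variable) is defined; the helper name is made up for illustration only.

```
/* Sketch of the post-write bookkeeping in SaveSlotToPath(), per the review. */
static void
remember_saved_confirmed_flush(ReplicationSlot *slot,
                               const ReplicationSlotOnDisk *cp)
{
    SpinLockAcquire(&slot->mutex);

    /* Successfully wrote; unset dirty bit, unless somebody dirtied again. */
    if (!slot->just_dirtied)
        slot->dirty = false;

    /*
     * Record the confirmed_flush value that was actually written (the copy
     * in *cp), not slot->data.confirmed_flush, which may already have moved
     * forward again while the file was being flushed.
     */
    slot->last_saved_confirmed_flush = cp->slotdata.confirmed_flush;

    SpinLockRelease(&slot->mutex);
}
```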
{
"msg_contents": "On Wed, Aug 30, 2023 at 6:33 PM Ashutosh Bapat\n<[email protected]> wrote:\n>\n> On Tue, Aug 29, 2023 at 5:40 PM Ashutosh Bapat\n> <[email protected]> wrote:\n> >\n> > I am looking at it. If you can wait till the end of the week, that\n> > will be great.\n>\n> /*\n> * Successfully wrote, unset dirty bit, unless somebody dirtied again\n> - * already.\n> + * already and remember the confirmed_flush LSN value.\n> */\n> SpinLockAcquire(&slot->mutex);\n> if (!slot->just_dirtied)\n> slot->dirty = false;\n> + slot->last_saved_confirmed_flush = slot->data.confirmed_flush;\n>\n> If the slot->data.confirmed_flush gets updated concurrently between copying it\n> to be written to the disk and when it's written to last_saved_confirmed_flush,\n> we will miss one update. I think we need to update last_saved_confirmed_flush\n> based on the cp.slotdata.confirmed_flush rather than\n> slot->data.confirmed_flush.\n>\n\nYeah, this appears to be a problem.\n\n> We are setting last_saved_confirmed_flush for all types of slots but using it\n> only when the slot is logical. Should we also set it only for logical slots?\n>\n\nWe can do that but not sure if there is any advantage of it other than\nadding extra condition. BTW, won't even confirmed_flush LSN be used\nonly for logical slots?\n\n> /* first check whether there's something to write out */\n> SpinLockAcquire(&slot->mutex);\n> was_dirty = slot->dirty;\n> slot->just_dirtied = false;\n> + confirmed_flush_has_changed = (slot->data.confirmed_flush !=\n> slot->last_saved_confirmed_flush);\n>\n> The confirmed_flush LSN should always move forward, otherwise there may not be\n> enough WAL retained for the slot to work. I am wondering whether we should take\n> an opportunity to make sure\n> Assert(slot->data.confirmed_flush <= slot->last_saved_confirmed_flush)\n>\n\nTheoretically, what you are saying makes sense to me but we don't have\nsuch a protection while updating the confirmed_flush LSN. It would be\nbetter to first add such a protection for confirmed_flush LSN update\nas a separate patch.\n\n> - /* and don't do anything if there's nothing to write */\n> - if (!was_dirty)\n> + /* Don't do anything if there's nothing to write. See ReplicationSlot. */\n> + if (!was_dirty &&\n> + !(is_shutdown && SlotIsLogical(slot) && confirmed_flush_has_changed))\n>\n> Rather than complicating this condition, I wonder whether it's better to just\n> set was_dirty = true when is_shutdown && SlotIsLogical(slot) &&\n> confirmed_flush_has_changed) or even slot->dirty = true.\n>\n\nI think it is better to keep the slot's dirty property separate. But\nwe can introduce another variable that can be the result of both\nwas_dirty and other checks together, however, that doesn't seem much\nbetter than the current code.\n\n> +\n> + /*\n> + * We won't ensure that the slot is persisted after the confirmed_flush\n> + * LSN is updated as that could lead to frequent writes. However, we need\n> + * to ensure that we do persist the slots at the time of shutdown whose\n> + * confirmed_flush LSN is changed since we last saved the slot to disk.\n> + * This will help in avoiding retreat of the confirmed_flush LSN after\n> + * restart. This variable is used to track the last saved confirmed_flush\n> + * LSN value.\n> + */\n>\n> This comment makes more sense in SaveSlotToPath() than here. We may decide to\n> use last_saved_confirmed_flush for something else in future.\n>\n\nI have kept it here because it contains some information that is not\nspecific to SaveSlotToPath. 
So, it seems easier to follow the whole\ntheory if we keep it at the central place in the structure and then\nadd the reference wherever required but I am fine if you and others\nfeel strongly about moving this to SaveSlotToPath().\n\n> +\n> +sub compare_confirmed_flush\n> +{\n> + my ($node, $confirmed_flush_from_log) = @_;\n> +\n> + # Fetch Latest checkpoint location from the control file\n> + my ($stdout, $stderr) =\n> + run_command([ 'pg_controldata', $node->data_dir ]);\n> + my @control_data = split(\"\\n\", $stdout);\n> + my $latest_checkpoint = undef;\n> + foreach (@control_data)\n> + {\n> + if ($_ =~ /^Latest checkpoint location:\\s*(.*)$/mg)\n> + {\n> + $latest_checkpoint = $1;\n> + last;\n> + }\n> + }\n> + die \"Latest checkpoint location not found in control file\\n\"\n> + unless defined($latest_checkpoint);\n> +\n> + # Is it same as the value read from log?\n> + ok( $latest_checkpoint eq $confirmed_flush_from_log,\n> + \"Check that the slot's confirmed_flush LSN is the same as the\n> latest_checkpoint location\"\n> + );\n>\n> This function assumes that the subscriber will receive and confirm WAL upto\n> checkpoint location and publisher's WAL sender will update it in the slot.\n> Where is the code to ensure that? Does the WAL sender process wait for\n> checkpoint\n> LSN to be confirmed when shutting down?\n>\n\nNote, that we need to compare if all the WAL before the\nshutdown_checkpoint WAL record is sent. Before the clean shutdown, we\ndo ensure that all the pending WAL is confirmed back. See the use of\nWalSndDone() in WalSndLoop().\n\n> +\n> +# Restart the publisher to ensure that the slot will be persisted if required\n> +$node_publisher->restart();\n>\n> Can we add this test comparing LSNs after publisher restart, to an existing\n> test itself - like basic replication. That's the only extra thing that this\n> test does beyond usual replication stuff.\n>\n\nAs this is a test after the restart of the server, I thought to keep\nit with recovery tests. However, I think once the upgrade (of\npublisher nodes) patch is ready, we should keep this test with those\ntests or somehow merge it with those tests but till that patch is\nready, let's keep this as a separate test.\n\n> +\n> +# Wait until the walsender creates decoding context\n> +$node_publisher->wait_for_log(\n> + qr/Streaming transactions committing after\n> ([A-F0-9]+\\/[A-F0-9]+), reading WAL from ([A-F0-9]+\\/[A-F0-9]+)./,\n> + $offset);\n> +\n> +# Extract confirmed_flush from the logfile\n> +my $log_contents = slurp_file($node_publisher->logfile, $offset);\n> +$log_contents =~\n> + qr/Streaming transactions committing after ([A-F0-9]+\\/[A-F0-9]+),\n> reading WAL from ([A-F0-9]+\\/[A-F0-9]+)./\n> + or die \"could not get confirmed_flush_lsn\";\n>\n> Why are we figuring out the LSN from the log file? Is it not available from\n> pg_replication_slots view? If we do so, I think this test will fail is the slot\n> gets written after the restart because of concurrent activity on the publisher\n> (like autovacuum, or other things that cause empty transaction to be\n> replicated) and subscriber. 
A very rare chance but not 0 probability one.\n>\n\nYes, that is a possibility.\n\n> I\n> think we should shut down subscriber, restart publisher and then make this\n> check based on the contents of the replication slot instead of server log.\n> Shutting down subscriber will ensure that the subscriber won't send any new\n> confirmed flush location to the publisher after restart.\n>\n\nBut if we shutdown the subscriber before the publisher there is no\nguarantee that the publisher has sent all outstanding logs up to the\nshutdown checkpoint record (i.e., the latest record). Such a guarantee\ncan only be there if we do a clean shutdown of the publisher before\nthe subscriber.\n\n> All the places which call ReplicationSlotSave() mark the slot as dirty. All\n> the places where SaveSlotToPath() is called, the slot is marked dirty except\n> when calling from CheckPointReplicationSlots(). So I am wondering whether we\n> should be marking the slot dirty in CheckPointReplicationSlots() and avoid\n> passing down is_shutdown flag to SaveSlotToPath().\n>\n\nI feel that will add another spinlock acquire/release pair without\nmuch benefit. Sure, it may not be performance-sensitive but still\nadding another pair of lock/release doesn't seem like a better idea.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 31 Aug 2023 12:09:53 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: persist logical slots to disk during shutdown checkpoint"
},
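For the "protection for confirmed_flush LSN update" that the reply above defers to a separate patch, a purely conceptual sketch might look like the following. The function name and its placement are assumptions made only to illustrate the point being debated; nothing like this is actually proposed by the patch in this thread.

```
#include "postgres.h"
#include "replication/slot.h"
#include "storage/spin.h"

/*
 * Illustrative only: guard against moving a slot's confirmed_flush LSN
 * backwards; per the concern raised earlier in the thread, there may not be
 * enough WAL retained for the slot to work if it retreats.
 */
static void
advance_confirmed_flush(ReplicationSlot *slot, XLogRecPtr lsn)
{
    SpinLockAcquire(&slot->mutex);

    /* confirmed_flush is only expected to move forward */
    Assert(lsn >= slot->data.confirmed_flush);

    if (lsn > slot->data.confirmed_flush)
        slot->data.confirmed_flush = lsn;

    SpinLockRelease(&slot->mutex);
}
```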
{
"msg_contents": "On Thu, Aug 31, 2023 at 12:10 PM Amit Kapila <[email protected]> wrote:\n>\n> > +\n> > + /*\n> > + * We won't ensure that the slot is persisted after the confirmed_flush\n> > + * LSN is updated as that could lead to frequent writes. However, we need\n> > + * to ensure that we do persist the slots at the time of shutdown whose\n> > + * confirmed_flush LSN is changed since we last saved the slot to disk.\n> > + * This will help in avoiding retreat of the confirmed_flush LSN after\n> > + * restart. This variable is used to track the last saved confirmed_flush\n> > + * LSN value.\n> > + */\n> >\n> > This comment makes more sense in SaveSlotToPath() than here. We may decide to\n> > use last_saved_confirmed_flush for something else in future.\n> >\n>\n> I have kept it here because it contains some information that is not\n> specific to SaveSlotToPath. So, it seems easier to follow the whole\n> theory if we keep it at the central place in the structure and then\n> add the reference wherever required but I am fine if you and others\n> feel strongly about moving this to SaveSlotToPath().\n\nSaving slot to disk happens only in SaveSlotToPath, so except the last\nsentence rest of the comment makes sense in SaveSlotToPath().\n\n> >\n> > This function assumes that the subscriber will receive and confirm WAL upto\n> > checkpoint location and publisher's WAL sender will update it in the slot.\n> > Where is the code to ensure that? Does the WAL sender process wait for\n> > checkpoint\n> > LSN to be confirmed when shutting down?\n> >\n>\n> Note, that we need to compare if all the WAL before the\n> shutdown_checkpoint WAL record is sent. Before the clean shutdown, we\n> do ensure that all the pending WAL is confirmed back. See the use of\n> WalSndDone() in WalSndLoop().\n\nOk. Thanks for pointing that out to me.\n\n>\n> > I\n> > think we should shut down subscriber, restart publisher and then make this\n> > check based on the contents of the replication slot instead of server log.\n> > Shutting down subscriber will ensure that the subscriber won't send any new\n> > confirmed flush location to the publisher after restart.\n> >\n>\n> But if we shutdown the subscriber before the publisher there is no\n> guarantee that the publisher has sent all outstanding logs up to the\n> shutdown checkpoint record (i.e., the latest record). Such a guarantee\n> can only be there if we do a clean shutdown of the publisher before\n> the subscriber.\n\nSo the sequence is shutdown publisher node, shutdown subscriber node,\nstart publisher node and carry out the checks.\n\n>\n> > All the places which call ReplicationSlotSave() mark the slot as dirty. All\n> > the places where SaveSlotToPath() is called, the slot is marked dirty except\n> > when calling from CheckPointReplicationSlots(). So I am wondering whether we\n> > should be marking the slot dirty in CheckPointReplicationSlots() and avoid\n> > passing down is_shutdown flag to SaveSlotToPath().\n> >\n>\n> I feel that will add another spinlock acquire/release pair without\n> much benefit. Sure, it may not be performance-sensitive but still\n> adding another pair of lock/release doesn't seem like a better idea.\n\nWe call ReplicatioinSlotMarkDirty() followed by ReplicationSlotSave()\nat all the places, even those which are more frequent than this. So I\nthink it's better to stick to that protocol rather than adding a new\nflag.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Thu, 31 Aug 2023 12:25:17 +0530",
"msg_from": "Ashutosh Bapat <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: persist logical slots to disk during shutdown checkpoint"
},
{
"msg_contents": "On Thu, Aug 31, 2023 at 12:25 PM Ashutosh Bapat\n<[email protected]> wrote:\n>\n> On Thu, Aug 31, 2023 at 12:10 PM Amit Kapila <[email protected]> wrote:\n> >\n> > > +\n> > > + /*\n> > > + * We won't ensure that the slot is persisted after the confirmed_flush\n> > > + * LSN is updated as that could lead to frequent writes. However, we need\n> > > + * to ensure that we do persist the slots at the time of shutdown whose\n> > > + * confirmed_flush LSN is changed since we last saved the slot to disk.\n> > > + * This will help in avoiding retreat of the confirmed_flush LSN after\n> > > + * restart. This variable is used to track the last saved confirmed_flush\n> > > + * LSN value.\n> > > + */\n> > >\n> > > This comment makes more sense in SaveSlotToPath() than here. We may decide to\n> > > use last_saved_confirmed_flush for something else in future.\n> > >\n> >\n> > I have kept it here because it contains some information that is not\n> > specific to SaveSlotToPath. So, it seems easier to follow the whole\n> > theory if we keep it at the central place in the structure and then\n> > add the reference wherever required but I am fine if you and others\n> > feel strongly about moving this to SaveSlotToPath().\n>\n> Saving slot to disk happens only in SaveSlotToPath, so except the last\n> sentence rest of the comment makes sense in SaveSlotToPath().\n>\n> > >\n> > > This function assumes that the subscriber will receive and confirm WAL upto\n> > > checkpoint location and publisher's WAL sender will update it in the slot.\n> > > Where is the code to ensure that? Does the WAL sender process wait for\n> > > checkpoint\n> > > LSN to be confirmed when shutting down?\n> > >\n> >\n> > Note, that we need to compare if all the WAL before the\n> > shutdown_checkpoint WAL record is sent. Before the clean shutdown, we\n> > do ensure that all the pending WAL is confirmed back. See the use of\n> > WalSndDone() in WalSndLoop().\n>\n> Ok. Thanks for pointing that out to me.\n>\n> >\n> > > I\n> > > think we should shut down subscriber, restart publisher and then make this\n> > > check based on the contents of the replication slot instead of server log.\n> > > Shutting down subscriber will ensure that the subscriber won't send any new\n> > > confirmed flush location to the publisher after restart.\n> > >\n> >\n> > But if we shutdown the subscriber before the publisher there is no\n> > guarantee that the publisher has sent all outstanding logs up to the\n> > shutdown checkpoint record (i.e., the latest record). Such a guarantee\n> > can only be there if we do a clean shutdown of the publisher before\n> > the subscriber.\n>\n> So the sequence is shutdown publisher node, shutdown subscriber node,\n> start publisher node and carry out the checks.\n>\n\nThis can probably work but I still prefer the current approach as that\nwill be closer to the ideal values on the disk instead of comparison\nwith a later in-memory value of confirmed_flush LSN. Ideally, if we\nwould have a tool like pg_replslotdata which can read the on-disk\nstate of slots that would be better but missing that, the current one\nsounds like the next best possibility. Do you see any problem with the\ncurrent approach of test?\n\nBTW, I think we can keep autovacuum = off for this test just to avoid\nany extra record generation even though that doesn't matter for the\npurpose of test.\n\n> >\n> > > All the places which call ReplicationSlotSave() mark the slot as dirty. 
All\n> > > the places where SaveSlotToPath() is called, the slot is marked dirty except\n> > > when calling from CheckPointReplicationSlots(). So I am wondering whether we\n> > > should be marking the slot dirty in CheckPointReplicationSlots() and avoid\n> > > passing down is_shutdown flag to SaveSlotToPath().\n> > >\n> >\n> > I feel that will add another spinlock acquire/release pair without\n> > much benefit. Sure, it may not be performance-sensitive but still\n> > adding another pair of lock/release doesn't seem like a better idea.\n>\n> We call ReplicatioinSlotMarkDirty() followed by ReplicationSlotSave()\n> at all the places, even those which are more frequent than this.\n>\n\nAll but one. Normally, the idea of marking dirty is to indicate that\nwe will actually write/flush the contents at a later point (except\nwhen required for correctness) as even indicated in the comments atop\nReplicatioinSlotMarkDirty(). However, I see your point that we use\nthat protocol at all the current places including CreateSlotOnDisk().\nSo, we can probably do it here as well.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 31 Aug 2023 14:52:47 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: persist logical slots to disk during shutdown checkpoint"
},
{
"msg_contents": "On Thu, Aug 31, 2023 at 2:52 PM Amit Kapila <[email protected]> wrote:\n> > > > I\n> > > > think we should shut down subscriber, restart publisher and then make this\n> > > > check based on the contents of the replication slot instead of server log.\n> > > > Shutting down subscriber will ensure that the subscriber won't send any new\n> > > > confirmed flush location to the publisher after restart.\n> > > >\n> > >\n> > > But if we shutdown the subscriber before the publisher there is no\n> > > guarantee that the publisher has sent all outstanding logs up to the\n> > > shutdown checkpoint record (i.e., the latest record). Such a guarantee\n> > > can only be there if we do a clean shutdown of the publisher before\n> > > the subscriber.\n> >\n> > So the sequence is shutdown publisher node, shutdown subscriber node,\n> > start publisher node and carry out the checks.\n> >\n>\n> This can probably work but I still prefer the current approach as that\n> will be closer to the ideal values on the disk instead of comparison\n> with a later in-memory value of confirmed_flush LSN. Ideally, if we\n> would have a tool like pg_replslotdata which can read the on-disk\n> state of slots that would be better but missing that, the current one\n> sounds like the next best possibility. Do you see any problem with the\n> current approach of test?\n\n> + qr/Streaming transactions committing after ([A-F0-9]+\\/[A-F0-9]+),\n> reading WAL from ([A-F0-9]+\\/[A-F0-9]+)./\n\nI don't think the LSN reported in this message is guaranteed to be the\nconfirmed_flush LSN of the slot. It's usually confirmed_flush but not\nalways. It's the LSN that snapshot builder computes based on factors\nincluding confirmed_flush. There's a chance that this test will fail\nsometimes because of this behaviour. Reading directly from\nreplication slot is better that this. pg_replslotdata might help if we\nread replication slot content between shutdown and restart of\npublisher.\n\n>\n> BTW, I think we can keep autovacuum = off for this test just to avoid\n> any extra record generation even though that doesn't matter for the\n> purpose of test.\n\nAutovacuum is one thing, but we can't guarantee the absence of any\nconcurrent activity forever.\n\n>\n> > >\n> > > > All the places which call ReplicationSlotSave() mark the slot as dirty. All\n> > > > the places where SaveSlotToPath() is called, the slot is marked dirty except\n> > > > when calling from CheckPointReplicationSlots(). So I am wondering whether we\n> > > > should be marking the slot dirty in CheckPointReplicationSlots() and avoid\n> > > > passing down is_shutdown flag to SaveSlotToPath().\n> > > >\n> > >\n> > > I feel that will add another spinlock acquire/release pair without\n> > > much benefit. Sure, it may not be performance-sensitive but still\n> > > adding another pair of lock/release doesn't seem like a better idea.\n> >\n> > We call ReplicatioinSlotMarkDirty() followed by ReplicationSlotSave()\n> > at all the places, even those which are more frequent than this.\n> >\n>\n> All but one. Normally, the idea of marking dirty is to indicate that\n> we will actually write/flush the contents at a later point (except\n> when required for correctness) as even indicated in the comments atop\n> ReplicatioinSlotMarkDirty(). However, I see your point that we use\n> that protocol at all the current places including CreateSlotOnDisk().\n> So, we can probably do it here as well.\n\nyes\n\nI didn't see this entry in commitfest. 
Since we are discussing it and\nthe next CF is about to begin, probably it's good to add one there.\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Thu, 31 Aug 2023 18:12:32 +0530",
"msg_from": "Ashutosh Bapat <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: persist logical slots to disk during shutdown checkpoint"
},
{
"msg_contents": "On Thu, Aug 31, 2023 at 6:12 PM Ashutosh Bapat\n<[email protected]> wrote:\n>\n> On Thu, Aug 31, 2023 at 2:52 PM Amit Kapila <[email protected]> wrote:\n> > > > > I\n> > > > > think we should shut down subscriber, restart publisher and then make this\n> > > > > check based on the contents of the replication slot instead of server log.\n> > > > > Shutting down subscriber will ensure that the subscriber won't send any new\n> > > > > confirmed flush location to the publisher after restart.\n> > > > >\n> > > >\n> > > > But if we shutdown the subscriber before the publisher there is no\n> > > > guarantee that the publisher has sent all outstanding logs up to the\n> > > > shutdown checkpoint record (i.e., the latest record). Such a guarantee\n> > > > can only be there if we do a clean shutdown of the publisher before\n> > > > the subscriber.\n> > >\n> > > So the sequence is shutdown publisher node, shutdown subscriber node,\n> > > start publisher node and carry out the checks.\n> > >\n> >\n> > This can probably work but I still prefer the current approach as that\n> > will be closer to the ideal values on the disk instead of comparison\n> > with a later in-memory value of confirmed_flush LSN. Ideally, if we\n> > would have a tool like pg_replslotdata which can read the on-disk\n> > state of slots that would be better but missing that, the current one\n> > sounds like the next best possibility. Do you see any problem with the\n> > current approach of test?\n>\n> > + qr/Streaming transactions committing after ([A-F0-9]+\\/[A-F0-9]+),\n> > reading WAL from ([A-F0-9]+\\/[A-F0-9]+)./\n>\n> I don't think the LSN reported in this message is guaranteed to be the\n> confirmed_flush LSN of the slot. It's usually confirmed_flush but not\n> always. It's the LSN that snapshot builder computes based on factors\n> including confirmed_flush. There's a chance that this test will fail\n> sometimes because of this behaviour.\n>\n\nI think I am missing something here because as per my understanding,\nthe LOG referred by the test is generated in CreateDecodingContext()\nbefore which we shouldn't be changing the slot's confirmed_flush LSN.\nThe LOG [1] refers to the slot's persistent value for confirmed_flush,\nso how it could be different from what the test is expecting.\n\n[1]\nerrdetail(\"Streaming transactions committing after %X/%X, reading WAL\nfrom %X/%X.\",\n LSN_FORMAT_ARGS(slot->data.confirmed_flush),\n\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 31 Aug 2023 19:28:32 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: persist logical slots to disk during shutdown checkpoint"
},
{
"msg_contents": "On Thu, Aug 31, 2023 at 6:12 PM Ashutosh Bapat\n<[email protected]> wrote:\n>\n> On Thu, Aug 31, 2023 at 2:52 PM Amit Kapila <[email protected]> wrote:\n> >\n> > All but one. Normally, the idea of marking dirty is to indicate that\n> > we will actually write/flush the contents at a later point (except\n> > when required for correctness) as even indicated in the comments atop\n> > ReplicatioinSlotMarkDirty(). However, I see your point that we use\n> > that protocol at all the current places including CreateSlotOnDisk().\n> > So, we can probably do it here as well.\n>\n> yes\n>\n\nI think we should also ensure that slots are not invalidated\n(slot.data.invalidated != RS_INVAL_NONE) before marking them dirty\nbecause we don't allow decoding from such slots, so we shouldn't\ninclude those.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 1 Sep 2023 10:06:37 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: persist logical slots to disk during shutdown checkpoint"
},
{
"msg_contents": "On Fri, 1 Sept 2023 at 10:06, Amit Kapila <[email protected]> wrote:\n>\n> On Thu, Aug 31, 2023 at 6:12 PM Ashutosh Bapat\n> <[email protected]> wrote:\n> >\n> > On Thu, Aug 31, 2023 at 2:52 PM Amit Kapila <[email protected]> wrote:\n> > >\n> > > All but one. Normally, the idea of marking dirty is to indicate that\n> > > we will actually write/flush the contents at a later point (except\n> > > when required for correctness) as even indicated in the comments atop\n> > > ReplicatioinSlotMarkDirty(). However, I see your point that we use\n> > > that protocol at all the current places including CreateSlotOnDisk().\n> > > So, we can probably do it here as well.\n> >\n> > yes\n> >\n>\n> I think we should also ensure that slots are not invalidated\n> (slot.data.invalidated != RS_INVAL_NONE) before marking them dirty\n> because we don't allow decoding from such slots, so we shouldn't\n> include those.\n\nAdded this check.\n\nApart from this I have also fixed the following issues that were\nagreed on: a) Setting slots to dirty in CheckPointReplicationSlots\ninstead of setting it in SaveSlotToPath b) The comments were moved\nfrom ReplicationSlot and moved to CheckPointReplicationSlots c) Tests\nwill be run in autovacuum = off d) updating last_saved_confirmed_flush\nbased on cp.slotdata.confirmed_flush rather than\nslot->data.confirmed_flush.\nI have also added the commitfest entry for this at [1].\n\nThanks to Ashutosh/Amit for the feedback.\nAttached v7 version patch has the changes for the same.\n[1] - https://commitfest.postgresql.org/44/4536/\n\nRegards,\nVignesh",
"msg_date": "Fri, 1 Sep 2023 10:50:11 +0530",
"msg_from": "vignesh C <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: persist logical slots to disk during shutdown checkpoint"
},
{
"msg_contents": "On Thu, Aug 31, 2023 at 7:28 PM Amit Kapila <[email protected]> wrote:\n>\n> On Thu, Aug 31, 2023 at 6:12 PM Ashutosh Bapat\n> <[email protected]> wrote:\n> >\n> > On Thu, Aug 31, 2023 at 2:52 PM Amit Kapila <[email protected]> wrote:\n> > > > > > I\n> > > > > > think we should shut down subscriber, restart publisher and then make this\n> > > > > > check based on the contents of the replication slot instead of server log.\n> > > > > > Shutting down subscriber will ensure that the subscriber won't send any new\n> > > > > > confirmed flush location to the publisher after restart.\n> > > > > >\n> > > > >\n> > > > > But if we shutdown the subscriber before the publisher there is no\n> > > > > guarantee that the publisher has sent all outstanding logs up to the\n> > > > > shutdown checkpoint record (i.e., the latest record). Such a guarantee\n> > > > > can only be there if we do a clean shutdown of the publisher before\n> > > > > the subscriber.\n> > > >\n> > > > So the sequence is shutdown publisher node, shutdown subscriber node,\n> > > > start publisher node and carry out the checks.\n> > > >\n> > >\n> > > This can probably work but I still prefer the current approach as that\n> > > will be closer to the ideal values on the disk instead of comparison\n> > > with a later in-memory value of confirmed_flush LSN. Ideally, if we\n> > > would have a tool like pg_replslotdata which can read the on-disk\n> > > state of slots that would be better but missing that, the current one\n> > > sounds like the next best possibility. Do you see any problem with the\n> > > current approach of test?\n> >\n> > > + qr/Streaming transactions committing after ([A-F0-9]+\\/[A-F0-9]+),\n> > > reading WAL from ([A-F0-9]+\\/[A-F0-9]+)./\n> >\n> > I don't think the LSN reported in this message is guaranteed to be the\n> > confirmed_flush LSN of the slot. It's usually confirmed_flush but not\n> > always. It's the LSN that snapshot builder computes based on factors\n> > including confirmed_flush. There's a chance that this test will fail\n> > sometimes because of this behaviour.\n> >\n>\n> I think I am missing something here because as per my understanding,\n> the LOG referred by the test is generated in CreateDecodingContext()\n> before which we shouldn't be changing the slot's confirmed_flush LSN.\n> The LOG [1] refers to the slot's persistent value for confirmed_flush,\n> so how it could be different from what the test is expecting.\n>\n> [1]\n> errdetail(\"Streaming transactions committing after %X/%X, reading WAL\n> from %X/%X.\",\n> LSN_FORMAT_ARGS(slot->data.confirmed_flush),\n\nI was afraid that we may move confirmed_flush while creating the\nsnapshot builder when creating the decoding context. But I don't see\nany code doing that. So may be we are safe. But if the log message\nchanges, this test would fail - depending upon the log message looks a\nbit fragile, esp. when we have a way to access the data directly\nreliably.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Fri, 1 Sep 2023 13:11:27 +0530",
"msg_from": "Ashutosh Bapat <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: persist logical slots to disk during shutdown checkpoint"
},
{
"msg_contents": "On Fri, Sep 1, 2023 at 1:11 PM Ashutosh Bapat\n<[email protected]> wrote:\n>\n> On Thu, Aug 31, 2023 at 7:28 PM Amit Kapila <[email protected]> wrote:\n> >\n> > On Thu, Aug 31, 2023 at 6:12 PM Ashutosh Bapat\n> > <[email protected]> wrote:\n> > >\n> > > > + qr/Streaming transactions committing after ([A-F0-9]+\\/[A-F0-9]+),\n> > > > reading WAL from ([A-F0-9]+\\/[A-F0-9]+)./\n> > >\n> > > I don't think the LSN reported in this message is guaranteed to be the\n> > > confirmed_flush LSN of the slot. It's usually confirmed_flush but not\n> > > always. It's the LSN that snapshot builder computes based on factors\n> > > including confirmed_flush. There's a chance that this test will fail\n> > > sometimes because of this behaviour.\n> > >\n> >\n> > I think I am missing something here because as per my understanding,\n> > the LOG referred by the test is generated in CreateDecodingContext()\n> > before which we shouldn't be changing the slot's confirmed_flush LSN.\n> > The LOG [1] refers to the slot's persistent value for confirmed_flush,\n> > so how it could be different from what the test is expecting.\n> >\n> > [1]\n> > errdetail(\"Streaming transactions committing after %X/%X, reading WAL\n> > from %X/%X.\",\n> > LSN_FORMAT_ARGS(slot->data.confirmed_flush),\n>\n> I was afraid that we may move confirmed_flush while creating the\n> snapshot builder when creating the decoding context. But I don't see\n> any code doing that. So may be we are safe.\n>\n\nWe are safe in that respect. As far as I understand there is no reason\nto be worried.\n\n>\n But if the log message\n> changes, this test would fail - depending upon the log message looks a\n> bit fragile, esp. when we have a way to access the data directly\n> reliably.\n>\n\nThis message is there from the very begining (b89e1510) and I can't\nforsee a reason to change such a message. But even if we change, we\ncan always change the test output or test accordingly, if required. I\nthink it is a matter of preference to which way we can write the test,\nso let's not argue too much on this. I find current way slightly more\nreliable but we can change it if we see any problem.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 1 Sep 2023 14:45:48 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: persist logical slots to disk during shutdown checkpoint"
},
{
"msg_contents": "On Fri, Sep 1, 2023 at 10:50 AM vignesh C <[email protected]> wrote:\n>\n> On Fri, 1 Sept 2023 at 10:06, Amit Kapila <[email protected]> wrote:\n> >\n> > On Thu, Aug 31, 2023 at 6:12 PM Ashutosh Bapat\n> > <[email protected]> wrote:\n> > >\n> > > On Thu, Aug 31, 2023 at 2:52 PM Amit Kapila <[email protected]> wrote:\n> > > >\n> > > > All but one. Normally, the idea of marking dirty is to indicate that\n> > > > we will actually write/flush the contents at a later point (except\n> > > > when required for correctness) as even indicated in the comments atop\n> > > > ReplicatioinSlotMarkDirty(). However, I see your point that we use\n> > > > that protocol at all the current places including CreateSlotOnDisk().\n> > > > So, we can probably do it here as well.\n> > >\n> > > yes\n> > >\n> >\n> > I think we should also ensure that slots are not invalidated\n> > (slot.data.invalidated != RS_INVAL_NONE) before marking them dirty\n> > because we don't allow decoding from such slots, so we shouldn't\n> > include those.\n>\n> Added this check.\n>\n> Apart from this I have also fixed the following issues that were\n> agreed on: a) Setting slots to dirty in CheckPointReplicationSlots\n> instead of setting it in SaveSlotToPath\n>\n\n+ if (is_shutdown && SlotIsLogical(s))\n+ {\n+ SpinLockAcquire(&s->mutex);\n+ if (s->data.invalidated == RS_INVAL_NONE &&\n+ s->data.confirmed_flush != s->last_saved_confirmed_flush)\n+ s->dirty = true;\n\nI think it is better to use ReplicationSlotMarkDirty() as that would\nbe consistent with all other usages.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 4 Sep 2023 15:20:04 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: persist logical slots to disk during shutdown checkpoint"
},
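For reference while reading the next two replies: ReplicationSlotMarkDirty() operates on the backend's own MyReplicationSlot, roughly as sketched below. This is a paraphrase for context (see slot.c for the authoritative version), and it is what makes the suggestion above awkward to apply directly inside a loop over all slots.

```
/* Rough shape of the existing function in slot.c (paraphrased for context). */
void
ReplicationSlotMarkDirty(void)
{
    ReplicationSlot *slot = MyReplicationSlot;

    Assert(MyReplicationSlot != NULL);

    SpinLockAcquire(&slot->mutex);
    slot->just_dirtied = true;
    slot->dirty = true;
    SpinLockRelease(&slot->mutex);
}
```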
{
"msg_contents": "On Mon, 4 Sept 2023 at 15:20, Amit Kapila <[email protected]> wrote:\n>\n> On Fri, Sep 1, 2023 at 10:50 AM vignesh C <[email protected]> wrote:\n> >\n> > On Fri, 1 Sept 2023 at 10:06, Amit Kapila <[email protected]> wrote:\n> > >\n> > > On Thu, Aug 31, 2023 at 6:12 PM Ashutosh Bapat\n> > > <[email protected]> wrote:\n> > > >\n> > > > On Thu, Aug 31, 2023 at 2:52 PM Amit Kapila <[email protected]> wrote:\n> > > > >\n> > > > > All but one. Normally, the idea of marking dirty is to indicate that\n> > > > > we will actually write/flush the contents at a later point (except\n> > > > > when required for correctness) as even indicated in the comments atop\n> > > > > ReplicatioinSlotMarkDirty(). However, I see your point that we use\n> > > > > that protocol at all the current places including CreateSlotOnDisk().\n> > > > > So, we can probably do it here as well.\n> > > >\n> > > > yes\n> > > >\n> > >\n> > > I think we should also ensure that slots are not invalidated\n> > > (slot.data.invalidated != RS_INVAL_NONE) before marking them dirty\n> > > because we don't allow decoding from such slots, so we shouldn't\n> > > include those.\n> >\n> > Added this check.\n> >\n> > Apart from this I have also fixed the following issues that were\n> > agreed on: a) Setting slots to dirty in CheckPointReplicationSlots\n> > instead of setting it in SaveSlotToPath\n> >\n>\n> + if (is_shutdown && SlotIsLogical(s))\n> + {\n> + SpinLockAcquire(&s->mutex);\n> + if (s->data.invalidated == RS_INVAL_NONE &&\n> + s->data.confirmed_flush != s->last_saved_confirmed_flush)\n> + s->dirty = true;\n>\n> I think it is better to use ReplicationSlotMarkDirty() as that would\n> be consistent with all other usages.\n\nReplicationSlotMarkDirty works only on MyReplicationSlot whereas\nCheckpointReplicationSlots loops through all the slots and marks the\nappropriate slot as dirty, we might have to change\nReplicationSlotMarkDirty to take the slot as input parameter and all\ncaller should pass MyReplicationSlot. Another thing is we have already\ntaken spin lock to access last_confirmed_flush_lsn from\nCheckpointReplicationSlots, we could set dirty flag here itself, else\nwe will have to release the lock and call ReplicationSlotMarkDirty\nwhich will take lock again. Instead shall we set just_dirtied also in\nCheckpointReplicationSlots?\nThoughts?\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Mon, 4 Sep 2023 15:45:07 +0530",
"msg_from": "vignesh C <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: persist logical slots to disk during shutdown checkpoint"
},
{
"msg_contents": "On Monday, September 4, 2023 6:15 PM vignesh C <[email protected]> wrote:\r\n> \r\n> On Mon, 4 Sept 2023 at 15:20, Amit Kapila <[email protected]> wrote:\r\n> >\r\n> > On Fri, Sep 1, 2023 at 10:50 AM vignesh C <[email protected]> wrote:\r\n> > >\r\n> > > On Fri, 1 Sept 2023 at 10:06, Amit Kapila <[email protected]> wrote:\r\n> > > >\r\n> > > > On Thu, Aug 31, 2023 at 6:12 PM Ashutosh Bapat\r\n> > > > <[email protected]> wrote:\r\n> > > > >\r\n> > > > > On Thu, Aug 31, 2023 at 2:52 PM Amit Kapila\r\n> <[email protected]> wrote:\r\n> > > > > >\r\n> > > > > > All but one. Normally, the idea of marking dirty is to\r\n> > > > > > indicate that we will actually write/flush the contents at a\r\n> > > > > > later point (except when required for correctness) as even\r\n> > > > > > indicated in the comments atop ReplicatioinSlotMarkDirty().\r\n> > > > > > However, I see your point that we use that protocol at all the current\r\n> places including CreateSlotOnDisk().\r\n> > > > > > So, we can probably do it here as well.\r\n> > > > >\r\n> > > > > yes\r\n> > > > >\r\n> > > >\r\n> > > > I think we should also ensure that slots are not invalidated\r\n> > > > (slot.data.invalidated != RS_INVAL_NONE) before marking them dirty\r\n> > > > because we don't allow decoding from such slots, so we shouldn't\r\n> > > > include those.\r\n> > >\r\n> > > Added this check.\r\n> > >\r\n> > > Apart from this I have also fixed the following issues that were\r\n> > > agreed on: a) Setting slots to dirty in CheckPointReplicationSlots\r\n> > > instead of setting it in SaveSlotToPath\r\n> > >\r\n> >\r\n> > + if (is_shutdown && SlotIsLogical(s)) { SpinLockAcquire(&s->mutex);\r\n> > + if (s->data.invalidated == RS_INVAL_NONE &&\r\n> > + s->data.confirmed_flush != s->last_saved_confirmed_flush) dirty =\r\n> > + s->true;\r\n> >\r\n> > I think it is better to use ReplicationSlotMarkDirty() as that would\r\n> > be consistent with all other usages.\r\n> \r\n> ReplicationSlotMarkDirty works only on MyReplicationSlot whereas\r\n> CheckpointReplicationSlots loops through all the slots and marks the\r\n> appropriate slot as dirty, we might have to change ReplicationSlotMarkDirty to\r\n> take the slot as input parameter and all caller should pass MyReplicationSlot.\r\n\r\nPersonally, I feel if we want to centralize the code of marking dirty into a\r\nfunction, we can introduce a new static function MarkSlotDirty(slot) to mark\r\npassed slot dirty and let ReplicationSlotMarkDirty and\r\nCheckpointReplicationSlots call it. 
Like:\r\n\r\nvoid\r\nReplicationSlotMarkDirty(void)\r\n{\r\n\tMarkSlotDirty(MyReplicationSlot);\r\n}\r\n\r\n+static void\r\n+MarkSlotDirty(ReplicationSlot *slot)\r\n+{\r\n+\tAssert(slot != NULL);\r\n+\r\n+\tSpinLockAcquire(&slot->mutex);\r\n+\tslot->just_dirtied = true;\r\n+\tslot->dirty = true;\r\n+\tSpinLockRelease(&slot->mutex);\r\n+}\r\n\r\nThis is somewhat similar to the relation between ReplicationSlotSave(serialize\r\nmy backend's replications slot) and SaveSlotToPath(save the passed slot).\r\n\r\n> Another thing is we have already taken spin lock to access\r\n> last_confirmed_flush_lsn from CheckpointReplicationSlots, we could set dirty\r\n> flag here itself, else we will have to release the lock and call\r\n> ReplicationSlotMarkDirty which will take lock again.\r\n\r\nYes, this is unavoidable, but maybe it's not a big problem as\r\nwe only do it at shutdown.\r\n\r\n> Instead shall we set just_dirtied also in CheckpointReplicationSlots?\r\n> Thoughts?\r\n\r\nI agree we'd better set just_dirtied to true to ensure we will serialize slot\r\ninfo here, because if some other processes just serialized the slot, the dirty\r\nflag will be reset to false if we don't set just_dirtied to true in\r\nCheckpointReplicationSlots(), this race condition may not exists for now, but\r\nseems better to completely forbid it by setting just_dirtied.\r\n\r\nBest Regards,\r\nHou zj\r\n\r\n",
"msg_date": "Tue, 5 Sep 2023 02:24:09 +0000",
"msg_from": "\"Zhijie Hou (Fujitsu)\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: persist logical slots to disk during shutdown checkpoint"
},
{
"msg_contents": "On Tue, Sep 5, 2023 at 7:54 AM Zhijie Hou (Fujitsu)\n<[email protected]> wrote:\n>\n> On Monday, September 4, 2023 6:15 PM vignesh C <[email protected]> wrote:\n> >\n> > On Mon, 4 Sept 2023 at 15:20, Amit Kapila <[email protected]> wrote:\n> > >\n> > > On Fri, Sep 1, 2023 at 10:50 AM vignesh C <[email protected]> wrote:\n> > > >\n> > > > On Fri, 1 Sept 2023 at 10:06, Amit Kapila <[email protected]> wrote:\n> > > > >\n> > > > > On Thu, Aug 31, 2023 at 6:12 PM Ashutosh Bapat\n> > > > > <[email protected]> wrote:\n> > > > > >\n> > > > > > On Thu, Aug 31, 2023 at 2:52 PM Amit Kapila\n> > <[email protected]> wrote:\n> > > > > > >\n> > > > > > > All but one. Normally, the idea of marking dirty is to\n> > > > > > > indicate that we will actually write/flush the contents at a\n> > > > > > > later point (except when required for correctness) as even\n> > > > > > > indicated in the comments atop ReplicatioinSlotMarkDirty().\n> > > > > > > However, I see your point that we use that protocol at all the current\n> > places including CreateSlotOnDisk().\n> > > > > > > So, we can probably do it here as well.\n> > > > > >\n> > > > > > yes\n> > > > > >\n> > > > >\n> > > > > I think we should also ensure that slots are not invalidated\n> > > > > (slot.data.invalidated != RS_INVAL_NONE) before marking them dirty\n> > > > > because we don't allow decoding from such slots, so we shouldn't\n> > > > > include those.\n> > > >\n> > > > Added this check.\n> > > >\n> > > > Apart from this I have also fixed the following issues that were\n> > > > agreed on: a) Setting slots to dirty in CheckPointReplicationSlots\n> > > > instead of setting it in SaveSlotToPath\n> > > >\n> > >\n> > > + if (is_shutdown && SlotIsLogical(s)) { SpinLockAcquire(&s->mutex);\n> > > + if (s->data.invalidated == RS_INVAL_NONE &&\n> > > + s->data.confirmed_flush != s->last_saved_confirmed_flush) dirty =\n> > > + s->true;\n> > >\n> > > I think it is better to use ReplicationSlotMarkDirty() as that would\n> > > be consistent with all other usages.\n> >\n> > ReplicationSlotMarkDirty works only on MyReplicationSlot whereas\n> > CheckpointReplicationSlots loops through all the slots and marks the\n> > appropriate slot as dirty, we might have to change ReplicationSlotMarkDirty to\n> > take the slot as input parameter and all caller should pass MyReplicationSlot.\n>\n> Personally, I feel if we want to centralize the code of marking dirty into a\n> function, we can introduce a new static function MarkSlotDirty(slot) to mark\n> passed slot dirty and let ReplicationSlotMarkDirty and\n> CheckpointReplicationSlots call it. Like:\n>\n> void\n> ReplicationSlotMarkDirty(void)\n> {\n> MarkSlotDirty(MyReplicationSlot);\n> }\n>\n> +static void\n> +MarkSlotDirty(ReplicationSlot *slot)\n> +{\n> + Assert(slot != NULL);\n> +\n> + SpinLockAcquire(&slot->mutex);\n> + slot->just_dirtied = true;\n> + slot->dirty = true;\n> + SpinLockRelease(&slot->mutex);\n> +}\n>\n> This is somewhat similar to the relation between ReplicationSlotSave(serialize\n> my backend's replications slot) and SaveSlotToPath(save the passed slot).\n>\n> > Another thing is we have already taken spin lock to access\n> > last_confirmed_flush_lsn from CheckpointReplicationSlots, we could set dirty\n> > flag here itself, else we will have to release the lock and call\n> > ReplicationSlotMarkDirty which will take lock again.\n>\n> Yes, this is unavoidable, but maybe it's not a big problem as\n> we only do it at shutdown.\n>\n\nTrue but still it doesn't look elegant. 
I also thought about having a\nprobably inline function that marks both just_dirty and dirty fields.\nHowever, that requires us to assert that the caller has already\nacquired a spinlock. I see a macro SpinLockFree() that might help but\nit didn't seem to be used anywhere in the code so not sure if we can\nrely on it.\n\n> > Instead shall we set just_dirtied also in CheckpointReplicationSlots?\n> > Thoughts?\n>\n> I agree we'd better set just_dirtied to true to ensure we will serialize slot\n> info here, because if some other processes just serialized the slot, the dirty\n> flag will be reset to false if we don't set just_dirtied to true in\n> CheckpointReplicationSlots(), this race condition may not exists for now, but\n> seems better to completely forbid it by setting just_dirtied.\n>\n\nAgreed, and it is better to close any such possibility because we\ncan't say with certainty about manual slots. This seems better than\nthe other ideas we discussed.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 5 Sep 2023 08:58:40 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: persist logical slots to disk during shutdown checkpoint"
},
{
"msg_contents": "On Fri, Sep 1, 2023 at 12:16 PM vignesh C <[email protected]> wrote:\n>\n> On Fri, 1 Sept 2023 at 10:06, Amit Kapila <[email protected]> wrote:\n> >\n> > On Thu, Aug 31, 2023 at 6:12 PM Ashutosh Bapat\n> > <[email protected]> wrote:\n> > >\n> > > On Thu, Aug 31, 2023 at 2:52 PM Amit Kapila <[email protected]> wrote:\n> > > >\n> > > > All but one. Normally, the idea of marking dirty is to indicate that\n> > > > we will actually write/flush the contents at a later point (except\n> > > > when required for correctness) as even indicated in the comments atop\n> > > > ReplicatioinSlotMarkDirty(). However, I see your point that we use\n> > > > that protocol at all the current places including CreateSlotOnDisk().\n> > > > So, we can probably do it here as well.\n> > >\n> > > yes\n> > >\n> >\n> > I think we should also ensure that slots are not invalidated\n> > (slot.data.invalidated != RS_INVAL_NONE) before marking them dirty\n> > because we don't allow decoding from such slots, so we shouldn't\n> > include those.\n>\n> Added this check.\n>\n> Apart from this I have also fixed the following issues that were\n> agreed on: a) Setting slots to dirty in CheckPointReplicationSlots\n> instead of setting it in SaveSlotToPath b) The comments were moved\n> from ReplicationSlot and moved to CheckPointReplicationSlots c) Tests\n> will be run in autovacuum = off d) updating last_saved_confirmed_flush\n> based on cp.slotdata.confirmed_flush rather than\n> slot->data.confirmed_flush.\n> I have also added the commitfest entry for this at [1].\n\nThe overall idea looks fine to me\n\n+\n+ /*\n+ * We won't ensure that the slot is persisted after the\n+ * confirmed_flush LSN is updated as that could lead to frequent\n+ * writes. However, we need to ensure that we do persist the slots at\n+ * the time of shutdown whose confirmed_flush LSN is changed since we\n+ * last saved the slot to disk. This will help in avoiding retreat of\n+ * the confirmed_flush LSN after restart.\n+ */\n+ if (is_shutdown && SlotIsLogical(s))\n+ {\n+ SpinLockAcquire(&s->mutex);\n+ if (s->data.invalidated == RS_INVAL_NONE &&\n+ s->data.confirmed_flush != s->last_saved_confirmed_flush)\n+ s->dirty = true;\n+ SpinLockRelease(&s->mutex);\n+ }\n\nThe comments don't mention anything about why we are just flushing at\nthe shutdown checkpoint. I mean the checkpoint is not that frequent\nand we already perform a lot of I/O during checkpoints so isn't it\nwise to flush during every checkpoint. We may argue that there is no\nextra advantage of that as we are not optimizing for crash recovery\nbut OTOH there is no reason for not doing so for other checkpoints or\nwe are worried about the concurrency with parallel walsender running\nduring non shutdown checkpoint if so then better we explain that as\nwell? If it is already discussed in the thread and we have a\nconclusion on this then maybe we can mention this in comments?\n\n\n/*\n * Flush all replication slots to disk.\n *\n- * This needn't actually be part of a checkpoint, but it's a convenient\n- * location.\n+ * is_shutdown is true in case of a shutdown checkpoint.\n */\n void\n-CheckPointReplicationSlots(void)\n+CheckPointReplicationSlots(bool is_shutdown)\n\nIt seems we have removed two lines from the function header comments,\nis this intentional or accidental?\n\nOther than this patch LGTM.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 5 Sep 2023 09:09:43 +0530",
"msg_from": "Dilip Kumar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: persist logical slots to disk during shutdown checkpoint"
},
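To make the hunk quoted in the review above easier to follow, here is a minimal consolidated sketch of how the shutdown-only marking sits inside the per-slot loop of CheckPointReplicationSlots(). It is assembled from the diff fragments and the just_dirtied discussion in this thread rather than copied from any posted patch version, so treat the exact placement and flag handling as approximate:

void
CheckPointReplicationSlots(bool is_shutdown)
{
	/* keep slots from being created or dropped while we iterate */
	LWLockAcquire(ReplicationSlotAllocationLock, LW_SHARED);

	for (int i = 0; i < max_replication_slots; i++)
	{
		ReplicationSlot *s = &ReplicationSlotCtl->replication_slots[i];
		char		path[MAXPGPATH];

		if (!s->in_use)
			continue;

		sprintf(path, "pg_replslot/%s", NameStr(s->data.name));

		/*
		 * Only at shutdown: if confirmed_flush moved since the slot was last
		 * written out, force a flush so the on-disk value does not retreat
		 * after restart.  Invalidated slots are skipped because decoding
		 * from them is not allowed anyway.
		 */
		if (is_shutdown && SlotIsLogical(s))
		{
			SpinLockAcquire(&s->mutex);
			if (s->data.invalidated == RS_INVAL_NONE &&
				s->data.confirmed_flush != s->last_saved_confirmed_flush)
			{
				s->just_dirtied = true;
				s->dirty = true;
			}
			SpinLockRelease(&s->mutex);
		}

		/* locking for the actual write is handled inside SaveSlotToPath() */
		SaveSlotToPath(s, path, LOG);
	}

	LWLockRelease(ReplicationSlotAllocationLock);
}

The key point is that the dirty marking is confined to the is_shutdown branch, so non-shutdown checkpoints keep their current I/O behavior.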
{
"msg_contents": "On Tue, Sep 5, 2023 at 9:10 AM Dilip Kumar <[email protected]> wrote:\n>\n> On Fri, Sep 1, 2023 at 12:16 PM vignesh C <[email protected]> wrote:\n> >\n> > On Fri, 1 Sept 2023 at 10:06, Amit Kapila <[email protected]> wrote:\n> > >\n> > > On Thu, Aug 31, 2023 at 6:12 PM Ashutosh Bapat\n> > > <[email protected]> wrote:\n> > > >\n> > > > On Thu, Aug 31, 2023 at 2:52 PM Amit Kapila <[email protected]> wrote:\n> > > > >\n> > > > > All but one. Normally, the idea of marking dirty is to indicate that\n> > > > > we will actually write/flush the contents at a later point (except\n> > > > > when required for correctness) as even indicated in the comments atop\n> > > > > ReplicatioinSlotMarkDirty(). However, I see your point that we use\n> > > > > that protocol at all the current places including CreateSlotOnDisk().\n> > > > > So, we can probably do it here as well.\n> > > >\n> > > > yes\n> > > >\n> > >\n> > > I think we should also ensure that slots are not invalidated\n> > > (slot.data.invalidated != RS_INVAL_NONE) before marking them dirty\n> > > because we don't allow decoding from such slots, so we shouldn't\n> > > include those.\n> >\n> > Added this check.\n> >\n> > Apart from this I have also fixed the following issues that were\n> > agreed on: a) Setting slots to dirty in CheckPointReplicationSlots\n> > instead of setting it in SaveSlotToPath b) The comments were moved\n> > from ReplicationSlot and moved to CheckPointReplicationSlots c) Tests\n> > will be run in autovacuum = off d) updating last_saved_confirmed_flush\n> > based on cp.slotdata.confirmed_flush rather than\n> > slot->data.confirmed_flush.\n> > I have also added the commitfest entry for this at [1].\n>\n> The overall idea looks fine to me\n>\n> +\n> + /*\n> + * We won't ensure that the slot is persisted after the\n> + * confirmed_flush LSN is updated as that could lead to frequent\n> + * writes. However, we need to ensure that we do persist the slots at\n> + * the time of shutdown whose confirmed_flush LSN is changed since we\n> + * last saved the slot to disk. This will help in avoiding retreat of\n> + * the confirmed_flush LSN after restart.\n> + */\n> + if (is_shutdown && SlotIsLogical(s))\n> + {\n> + SpinLockAcquire(&s->mutex);\n> + if (s->data.invalidated == RS_INVAL_NONE &&\n> + s->data.confirmed_flush != s->last_saved_confirmed_flush)\n> + s->dirty = true;\n> + SpinLockRelease(&s->mutex);\n> + }\n>\n> The comments don't mention anything about why we are just flushing at\n> the shutdown checkpoint. I mean the checkpoint is not that frequent\n> and we already perform a lot of I/O during checkpoints so isn't it\n> wise to flush during every checkpoint. We may argue that there is no\n> extra advantage of that as we are not optimizing for crash recovery\n> but OTOH there is no reason for not doing so for other checkpoints or\n> we are worried about the concurrency with parallel walsender running\n> during non shutdown checkpoint if so then better we explain that as\n> well? If it is already discussed in the thread and we have a\n> conclusion on this then maybe we can mention this in comments?\n>\n\nThe point is that at the time of non-shutdown checkpoints, it is not\nclear that there is an extra advantage but we will definitely add\nextra I/O for this. Because at other times, we will already be saving\nthe slot from time to time as the replication makes progress. And, we\nalso need to flush such slots during shutdown for correctness for some\nuse cases like upgrades. 
We can probably add something like: \"At other\ntimes, the walsender keeps saving the slot from time to time as the\nreplication progresses, so there is no clear advantage of flushing\nadditional slots at the time of checkpoint\". Will that work for you?\nHaving said that, I am not opposed to doing it for non-shutdown\ncheckpoints if one makes a separate case for it.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 5 Sep 2023 09:58:13 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: persist logical slots to disk during shutdown checkpoint"
},
{
"msg_contents": "On Tue, Sep 5, 2023 at 9:58 AM Amit Kapila <[email protected]> wrote:\n>\n> On Tue, Sep 5, 2023 at 9:10 AM Dilip Kumar <[email protected]> wrote:\n> >\n> > The comments don't mention anything about why we are just flushing at\n> > the shutdown checkpoint. I mean the checkpoint is not that frequent\n> > and we already perform a lot of I/O during checkpoints so isn't it\n> > wise to flush during every checkpoint. We may argue that there is no\n> > extra advantage of that as we are not optimizing for crash recovery\n> > but OTOH there is no reason for not doing so for other checkpoints or\n> > we are worried about the concurrency with parallel walsender running\n> > during non shutdown checkpoint if so then better we explain that as\n> > well? If it is already discussed in the thread and we have a\n> > conclusion on this then maybe we can mention this in comments?\n> >\n>\n> The point is that at the time of non-shutdown checkpoints, it is not\n> clear that there is an extra advantage but we will definitely add\n> extra I/O for this. Because at other times, we will already be saving\n> the slot from time to time as the replication makes progress. And, we\n> also need to flush such slots during shutdown for correctness for some\n> use cases like upgrades. We can probably add something like: \"At other\n> times, the walsender keeps saving the slot from time to time as the\n> replication progresses, so there is no clear advantage of flushing\n> additional slots at the time of checkpoint\". Will that work for you?\n\nYeah that comments will work out, my only concern was because we added\nan explicit condition that it should be synced only during shutdown\ncheckpoint so better comments also explicitly explains the reason.\nAnyway I am fine with either way whether we sync at the shutdown\ncheckpoint or all the checkpoint. Because I/O for slot sync during\ncheckpoint time should not be a real worry and with that if we can\navoid additional code with extra conditions then it's better because\nsuch code branches will be frequently hit and I think for testability\npov we prefer to add code in common path unless there is some overhead\nor it is specifically meant for that branch only.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 5 Sep 2023 10:18:20 +0530",
"msg_from": "Dilip Kumar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: persist logical slots to disk during shutdown checkpoint"
},
{
"msg_contents": "On Tue, 5 Sept 2023 at 09:10, Dilip Kumar <[email protected]> wrote:\n>\n> On Fri, Sep 1, 2023 at 12:16 PM vignesh C <[email protected]> wrote:\n> >\n> > On Fri, 1 Sept 2023 at 10:06, Amit Kapila <[email protected]> wrote:\n> > >\n> > > On Thu, Aug 31, 2023 at 6:12 PM Ashutosh Bapat\n> > > <[email protected]> wrote:\n> > > >\n> > > > On Thu, Aug 31, 2023 at 2:52 PM Amit Kapila <[email protected]> wrote:\n> > > > >\n> > > > > All but one. Normally, the idea of marking dirty is to indicate that\n> > > > > we will actually write/flush the contents at a later point (except\n> > > > > when required for correctness) as even indicated in the comments atop\n> > > > > ReplicatioinSlotMarkDirty(). However, I see your point that we use\n> > > > > that protocol at all the current places including CreateSlotOnDisk().\n> > > > > So, we can probably do it here as well.\n> > > >\n> > > > yes\n> > > >\n> > >\n> > > I think we should also ensure that slots are not invalidated\n> > > (slot.data.invalidated != RS_INVAL_NONE) before marking them dirty\n> > > because we don't allow decoding from such slots, so we shouldn't\n> > > include those.\n> >\n> > Added this check.\n> >\n> > Apart from this I have also fixed the following issues that were\n> > agreed on: a) Setting slots to dirty in CheckPointReplicationSlots\n> > instead of setting it in SaveSlotToPath b) The comments were moved\n> > from ReplicationSlot and moved to CheckPointReplicationSlots c) Tests\n> > will be run in autovacuum = off d) updating last_saved_confirmed_flush\n> > based on cp.slotdata.confirmed_flush rather than\n> > slot->data.confirmed_flush.\n> > I have also added the commitfest entry for this at [1].\n>\n> The overall idea looks fine to me\n>\n> +\n> + /*\n> + * We won't ensure that the slot is persisted after the\n> + * confirmed_flush LSN is updated as that could lead to frequent\n> + * writes. However, we need to ensure that we do persist the slots at\n> + * the time of shutdown whose confirmed_flush LSN is changed since we\n> + * last saved the slot to disk. This will help in avoiding retreat of\n> + * the confirmed_flush LSN after restart.\n> + */\n> + if (is_shutdown && SlotIsLogical(s))\n> + {\n> + SpinLockAcquire(&s->mutex);\n> + if (s->data.invalidated == RS_INVAL_NONE &&\n> + s->data.confirmed_flush != s->last_saved_confirmed_flush)\n> + s->dirty = true;\n> + SpinLockRelease(&s->mutex);\n> + }\n>\n> The comments don't mention anything about why we are just flushing at\n> the shutdown checkpoint. I mean the checkpoint is not that frequent\n> and we already perform a lot of I/O during checkpoints so isn't it\n> wise to flush during every checkpoint. We may argue that there is no\n> extra advantage of that as we are not optimizing for crash recovery\n> but OTOH there is no reason for not doing so for other checkpoints or\n> we are worried about the concurrency with parallel walsender running\n> during non shutdown checkpoint if so then better we explain that as\n> well? If it is already discussed in the thread and we have a\n> conclusion on this then maybe we can mention this in comments?\n\nI felt it is better to do this only during the shutdown checkpoint as\nin other cases it is being saved periodically as and when the\nreplication happens. 
Added comments for the same.\n\n> /*\n> * Flush all replication slots to disk.\n> *\n> - * This needn't actually be part of a checkpoint, but it's a convenient\n> - * location.\n> + * is_shutdown is true in case of a shutdown checkpoint.\n> */\n> void\n> -CheckPointReplicationSlots(void)\n> +CheckPointReplicationSlots(bool is_shutdown)\n>\n> It seems we have removed two lines from the function header comments,\n> is this intentional or accidental?\n\nModified.\nThe updated v8 version patch has the changes for the same.\n\nRegards,\nVignesh",
"msg_date": "Tue, 5 Sep 2023 12:31:37 +0530",
"msg_from": "vignesh C <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: persist logical slots to disk during shutdown checkpoint"
},
{
"msg_contents": "On Tue, Sep 5, 2023 at 10:12 AM Amit Kapila <[email protected]> wrote:\n>\n> On Tue, Sep 5, 2023 at 7:54 AM Zhijie Hou (Fujitsu)\n> <[email protected]> wrote:\n> >\n> > On Monday, September 4, 2023 6:15 PM vignesh C <[email protected]> wrote:\n> > >\n> > > On Mon, 4 Sept 2023 at 15:20, Amit Kapila <[email protected]> wrote:\n> > > >\n> > > > On Fri, Sep 1, 2023 at 10:50 AM vignesh C <[email protected]> wrote:\n> > > > >\n> > > > > On Fri, 1 Sept 2023 at 10:06, Amit Kapila <[email protected]> wrote:\n> > > > > >\n> > > > > > On Thu, Aug 31, 2023 at 6:12 PM Ashutosh Bapat\n> > > > > > <[email protected]> wrote:\n> > > > > > >\n> > > > > > > On Thu, Aug 31, 2023 at 2:52 PM Amit Kapila\n> > > <[email protected]> wrote:\n> > > > > > > >\n> > > > > > > > All but one. Normally, the idea of marking dirty is to\n> > > > > > > > indicate that we will actually write/flush the contents at a\n> > > > > > > > later point (except when required for correctness) as even\n> > > > > > > > indicated in the comments atop ReplicatioinSlotMarkDirty().\n> > > > > > > > However, I see your point that we use that protocol at all the current\n> > > places including CreateSlotOnDisk().\n> > > > > > > > So, we can probably do it here as well.\n> > > > > > >\n> > > > > > > yes\n> > > > > > >\n> > > > > >\n> > > > > > I think we should also ensure that slots are not invalidated\n> > > > > > (slot.data.invalidated != RS_INVAL_NONE) before marking them dirty\n> > > > > > because we don't allow decoding from such slots, so we shouldn't\n> > > > > > include those.\n> > > > >\n> > > > > Added this check.\n> > > > >\n> > > > > Apart from this I have also fixed the following issues that were\n> > > > > agreed on: a) Setting slots to dirty in CheckPointReplicationSlots\n> > > > > instead of setting it in SaveSlotToPath\n> > > > >\n> > > >\n> > > > + if (is_shutdown && SlotIsLogical(s)) { SpinLockAcquire(&s->mutex);\n> > > > + if (s->data.invalidated == RS_INVAL_NONE &&\n> > > > + s->data.confirmed_flush != s->last_saved_confirmed_flush) dirty =\n> > > > + s->true;\n> > > >\n> > > > I think it is better to use ReplicationSlotMarkDirty() as that would\n> > > > be consistent with all other usages.\n> > >\n> > > ReplicationSlotMarkDirty works only on MyReplicationSlot whereas\n> > > CheckpointReplicationSlots loops through all the slots and marks the\n> > > appropriate slot as dirty, we might have to change ReplicationSlotMarkDirty to\n> > > take the slot as input parameter and all caller should pass MyReplicationSlot.\n> >\n> > Personally, I feel if we want to centralize the code of marking dirty into a\n> > function, we can introduce a new static function MarkSlotDirty(slot) to mark\n> > passed slot dirty and let ReplicationSlotMarkDirty and\n> > CheckpointReplicationSlots call it. 
Like:\n> >\n> > void\n> > ReplicationSlotMarkDirty(void)\n> > {\n> > MarkSlotDirty(MyReplicationSlot);\n> > }\n> >\n> > +static void\n> > +MarkSlotDirty(ReplicationSlot *slot)\n> > +{\n> > + Assert(slot != NULL);\n> > +\n> > + SpinLockAcquire(&slot->mutex);\n> > + slot->just_dirtied = true;\n> > + slot->dirty = true;\n> > + SpinLockRelease(&slot->mutex);\n> > +}\n> >\n> > This is somewhat similar to the relation between ReplicationSlotSave(serialize\n> > my backend's replications slot) and SaveSlotToPath(save the passed slot).\n> >\n> > > Another thing is we have already taken spin lock to access\n> > > last_confirmed_flush_lsn from CheckpointReplicationSlots, we could set dirty\n> > > flag here itself, else we will have to release the lock and call\n> > > ReplicationSlotMarkDirty which will take lock again.\n> >\n> > Yes, this is unavoidable, but maybe it's not a big problem as\n> > we only do it at shutdown.\n> >\n>\n> True but still it doesn't look elegant. I also thought about having a\n> probably inline function that marks both just_dirty and dirty fields.\n> However, that requires us to assert that the caller has already\n> acquired a spinlock. I see a macro SpinLockFree() that might help but\n> it didn't seem to be used anywhere in the code so not sure if we can\n> rely on it.\n\nCan't we just have code like this? I mean we will have to make\nReplicationSlotMarkDirty take slot as an argument or have another\nversion which takes slot as an argument and that would be called by us\nas well as by ReplicationSlotMarkDirty(). I mean why do we need these\nchecks (s-(data.invalidated == RS_INVAL_NONE &&\ns->data.confirmed_flush != s->last_saved_confirmed_flush) under the\nmutex? Walsender is shutdown so confirmed flush LSN can not move\nconcurrently and slot can not be invalidated as well because that is\ndone by checkpointer and we are in checkpointer?\n\n+ if (is_shutdown && SlotIsLogical(s))\n+ {\n+ if (s->data.invalidated == RS_INVAL_NONE &&\n+ s->data.confirmed_flush != s->last_saved_confirmed_flush)\n+ {\n+ ReplicationSlotMarkDirty(s);\n+ }\n+\n+ SpinLockRelease(&s->mutex);\n+ }\n\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 5 Sep 2023 13:45:28 +0530",
"msg_from": "Dilip Kumar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: persist logical slots to disk during shutdown checkpoint"
},
{
"msg_contents": "On Tuesday, September 5, 2023 4:15 PM Dilip Kumar <[email protected]> wrote:\r\n\r\nHi,\r\n\r\n> \r\n> On Tue, Sep 5, 2023 at 10:12 AM Amit Kapila <[email protected]>\r\n> wrote:\r\n> >\r\n> > On Tue, Sep 5, 2023 at 7:54 AM Zhijie Hou (Fujitsu)\r\n> > <[email protected]> wrote:\r\n> > >\r\n> > > On Monday, September 4, 2023 6:15 PM vignesh C\r\n> <[email protected]> wrote:\r\n> > > >\r\n> > > > On Mon, 4 Sept 2023 at 15:20, Amit Kapila <[email protected]>\r\n> wrote:\r\n> > > > >\r\n> > > > > On Fri, Sep 1, 2023 at 10:50 AM vignesh C <[email protected]>\r\n> wrote:\r\n> > > > > >\r\n> > > > > > On Fri, 1 Sept 2023 at 10:06, Amit Kapila <[email protected]>\r\n> wrote:\r\n> > > > > > > I think we should also ensure that slots are not invalidated\r\n> > > > > > > (slot.data.invalidated != RS_INVAL_NONE) before marking them\r\n> > > > > > > dirty because we don't allow decoding from such slots, so we\r\n> > > > > > > shouldn't include those.\r\n> > > > > >\r\n> > > > > > Added this check.\r\n> > > > > >\r\n> > > > > > Apart from this I have also fixed the following issues that\r\n> > > > > > were agreed on: a) Setting slots to dirty in\r\n> > > > > > CheckPointReplicationSlots instead of setting it in\r\n> > > > > > SaveSlotToPath\r\n> > > > > >\r\n> > > > >\r\n> > > > > + if (is_shutdown && SlotIsLogical(s)) {\r\n> > > > > + SpinLockAcquire(&s->mutex); if (s->data.invalidated ==\r\n> > > > > + RS_INVAL_NONE &&\r\n> > > > > + s->data.confirmed_flush != s->last_saved_confirmed_flush)\r\n> > > > > + s->dirty = true;\r\n> > > > >\r\n> > > > > I think it is better to use ReplicationSlotMarkDirty() as that\r\n> > > > > would be consistent with all other usages.\r\n> > > >\r\n> > > > ReplicationSlotMarkDirty works only on MyReplicationSlot whereas\r\n> > > > CheckpointReplicationSlots loops through all the slots and marks\r\n> > > > the appropriate slot as dirty, we might have to change\r\n> > > > ReplicationSlotMarkDirty to take the slot as input parameter and all caller\r\n> should pass MyReplicationSlot.\r\n> > >\r\n> > > Personally, I feel if we want to centralize the code of marking\r\n> > > dirty into a function, we can introduce a new static function\r\n> > > MarkSlotDirty(slot) to mark passed slot dirty and let\r\n> > > ReplicationSlotMarkDirty and CheckpointReplicationSlots call it. Like:\r\n> > >\r\n> > > void\r\n> > > ReplicationSlotMarkDirty(void)\r\n> > > {\r\n> > > MarkSlotDirty(MyReplicationSlot); }\r\n> > >\r\n> > > +static void\r\n> > > +MarkSlotDirty(ReplicationSlot *slot) {\r\n> > > + Assert(slot != NULL);\r\n> > > +\r\n> > > + SpinLockAcquire(&slot->mutex);\r\n> > > + slot->just_dirtied = true;\r\n> > > + slot->dirty = true;\r\n> > > + SpinLockRelease(&slot->mutex); }\r\n> > >\r\n> > > This is somewhat similar to the relation between\r\n> > > ReplicationSlotSave(serialize my backend's replications slot) and\r\n> SaveSlotToPath(save the passed slot).\r\n> > >\r\n> > > > Another thing is we have already taken spin lock to access\r\n> > > > last_confirmed_flush_lsn from CheckpointReplicationSlots, we could\r\n> > > > set dirty flag here itself, else we will have to release the lock\r\n> > > > and call ReplicationSlotMarkDirty which will take lock again.\r\n> > >\r\n> > > Yes, this is unavoidable, but maybe it's not a big problem as we\r\n> > > only do it at shutdown.\r\n> > >\r\n> >\r\n> > True but still it doesn't look elegant. 
I also thought about having a\r\n> > probably inline function that marks both just_dirty and dirty fields.\r\n> > However, that requires us to assert that the caller has already\r\n> > acquired a spinlock. I see a macro SpinLockFree() that might help but\r\n> > it didn't seem to be used anywhere in the code so not sure if we can\r\n> > rely on it.\r\n> \r\n> Can't we just have code like this? I mean we will have to make\r\n> ReplicationSlotMarkDirty take slot as an argument or have another version\r\n> which takes slot as an argument and that would be called by us as well as by\r\n> ReplicationSlotMarkDirty(). I mean why do we need these checks\r\n> (s-(data.invalidated == RS_INVAL_NONE &&\r\n> s->data.confirmed_flush != s->last_saved_confirmed_flush) under the\r\n> mutex? Walsender is shutdown so confirmed flush LSN can not move\r\n> concurrently and slot can not be invalidated as well because that is done by\r\n> checkpointer and we are in checkpointer?\r\n\r\nI agree with your analysis that the lock may be unnecessary for now and the\r\ncode will work, but I personally feel we'd better take the spinlock.\r\n\r\nFirstly, considering our discussion on the potential extension of persisting\r\nthe slot for online checkpoints in the future, we anyway need the lock at that\r\ntime, so taking the lock here could avoid overlooking the need to update it\r\nlater. And the lock also won't cause any performance or concurrency issue.\r\n\r\nAdditionally, if we don't take the lock, we rely on the assumption that the\r\nwalsender will exit before the shutdown checkpoint, currently, that's true for\r\nlogical walsender, but physical walsender can exit later than checkpointer. So,\r\nI am slightly worried that if we change the logical walsender's exit timing in\r\nthe future, the assumption may not hold.\r\n\r\nBesides, for non-built-in logical replication, if someone creates their own\r\nwalsender or other processes to send the changes and the process doesn't exit\r\nbefore the shutdown checkpoint, it may also be a problem. Although I don't have\r\nexisting examples of such extensions, I feel taking the lock would make\r\nit more robust.\r\n\r\nBest Regards,\r\nHou zj\r\n",
"msg_date": "Tue, 5 Sep 2023 11:34:14 +0000",
"msg_from": "\"Zhijie Hou (Fujitsu)\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: persist logical slots to disk during shutdown checkpoint"
},
{
"msg_contents": "On Tue, Sep 5, 2023 at 5:04 PM Zhijie Hou (Fujitsu)\n<[email protected]> wrote:\n>\n\n> > Can't we just have code like this? I mean we will have to make\n> > ReplicationSlotMarkDirty take slot as an argument or have another version\n> > which takes slot as an argument and that would be called by us as well as by\n> > ReplicationSlotMarkDirty(). I mean why do we need these checks\n> > (s-(data.invalidated == RS_INVAL_NONE &&\n> > s->data.confirmed_flush != s->last_saved_confirmed_flush) under the\n> > mutex? Walsender is shutdown so confirmed flush LSN can not move\n> > concurrently and slot can not be invalidated as well because that is done by\n> > checkpointer and we are in checkpointer?\n>\n> I agree with your analysis that the lock may be unnecessary for now and the\n> code will work, but I personally feel we'd better take the spinlock.\n>\n> Firstly, considering our discussion on the potential extension of persisting\n> the slot for online checkpoints in the future, we anyway need the lock at that\n> time, so taking the lock here could avoid overlooking the need to update it\n> later. And the lock also won't cause any performance or concurrency issue.\n\nIf we think that we might plan to persist on the online checkpoint as\nwell then better to do it now, because this is not a extension of the\nfeature instead we are thinking that it is wise to just persist on the\nshutdown checkpoint and I think that's what the conclusion at this\npoint and if thats the conclusion then no point to right code in\nassumption that we will change our conclusion in future.\n\n> Additionally, if we don't take the lock, we rely on the assumption that the\n> walsender will exit before the shutdown checkpoint, currently, that's true for\n> logical walsender, but physical walsender can exit later than checkpointer. So,\n> I am slight woirred that if we change the logical walsender's exit timing in\n> the future, the assumption may not hold.\n>\n> Besides, for non-built-in logical replication, if someone creates their own\n> walsender or other processes to send the changes and the process doesn't exit\n> before the shutdown checkpoint, it may also be a problem. Although I don't have\n> exsiting examples about these extensions, but I feel taking the lock would make\n> it more robust.\n\nI think our all logic is based on that the walsender is existed\nalready. If not then even if you check under the mutex that the\nconfirmed flush LSN is not changed then it can changed right after you\nrelease the lock and then we will not be flushing the latest update of\nthe confirmed flush lsn to the disk and our logic of comparing\ncheckpoint.redo with the confirmed flush lsn might not work?\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 5 Sep 2023 18:00:41 +0530",
"msg_from": "Dilip Kumar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: persist logical slots to disk during shutdown checkpoint"
},
{
"msg_contents": "On Tue, Sep 5, 2023 at 6:00 PM Dilip Kumar <[email protected]> wrote:\n>\n> On Tue, Sep 5, 2023 at 5:04 PM Zhijie Hou (Fujitsu)\n> <[email protected]> wrote:\n> >\n>\n> > > Can't we just have code like this? I mean we will have to make\n> > > ReplicationSlotMarkDirty take slot as an argument or have another version\n> > > which takes slot as an argument and that would be called by us as well as by\n> > > ReplicationSlotMarkDirty(). I mean why do we need these checks\n> > > (s-(data.invalidated == RS_INVAL_NONE &&\n> > > s->data.confirmed_flush != s->last_saved_confirmed_flush) under the\n> > > mutex? Walsender is shutdown so confirmed flush LSN can not move\n> > > concurrently and slot can not be invalidated as well because that is done by\n> > > checkpointer and we are in checkpointer?\n> >\n>\n...\n> > Additionally, if we don't take the lock, we rely on the assumption that the\n> > walsender will exit before the shutdown checkpoint, currently, that's true for\n> > logical walsender, but physical walsender can exit later than checkpointer. So,\n> > I am slight woirred that if we change the logical walsender's exit timing in\n> > the future, the assumption may not hold.\n> >\n> > Besides, for non-built-in logical replication, if someone creates their own\n> > walsender or other processes to send the changes and the process doesn't exit\n> > before the shutdown checkpoint, it may also be a problem. Although I don't have\n> > exsiting examples about these extensions, but I feel taking the lock would make\n> > it more robust.\n>\n> I think our all logic is based on that the walsender is existed\n> already. If not then even if you check under the mutex that the\n> confirmed flush LSN is not changed then it can changed right after you\n> release the lock and then we will not be flushing the latest update of\n> the confirmed flush lsn to the disk and our logic of comparing\n> checkpoint.redo with the confirmed flush lsn might not work?\n>\n\nRight, it can change and in that case, the check related to\nconfirm_flush LSN will fail during the upgrade. However, the point is\nthat if we don't take spinlock, we need to properly write comments on\nwhy unlike in other places it is safe here to check these values\nwithout spinlock. We can do that but I feel we have to be careful for\nall future usages of these variables, so, having spinlock makes them\nfollow the normal coding pattern which I feel makes it more robust.\nYes, marking dirty via common function also has merits but personally,\nI find it better to follow the normal coding practice of checking the\nrequired fields under spinlock. The other possibility is to first\ncheck if we need to mark the slot dirty under spinlock, then release\nthe spinlock, and then call the common MarkDirty function, but again\nthat will be under the assumption that these flags won't change.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 6 Sep 2023 09:46:58 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: persist logical slots to disk during shutdown checkpoint"
},
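The "other possibility" mentioned at the end of the previous message, checking under the spinlock and then calling a common mark-dirty routine, would look roughly like the sketch below. ReplicationSlotMarkDirtyExt() is a hypothetical slot-taking variant of ReplicationSlotMarkDirty() (which today operates only on MyReplicationSlot), and, as noted above, the gap between the check and the call relies on the flags not changing concurrently:

		bool		need_dirty = false;

		SpinLockAcquire(&s->mutex);
		if (s->data.invalidated == RS_INVAL_NONE &&
			s->data.confirmed_flush != s->last_saved_confirmed_flush)
			need_dirty = true;
		SpinLockRelease(&s->mutex);

		/* assumes invalidated/confirmed_flush cannot change in between */
		if (need_dirty)
			ReplicationSlotMarkDirtyExt(s);	/* hypothetical helper */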
{
"msg_contents": "On Wed, Sep 6, 2023 at 9:47 AM Amit Kapila <[email protected]> wrote:\n>\n> On Tue, Sep 5, 2023 at 6:00 PM Dilip Kumar <[email protected]> wrote:\n\n>\n> Right, it can change and in that case, the check related to\n> confirm_flush LSN will fail during the upgrade. However, the point is\n> that if we don't take spinlock, we need to properly write comments on\n> why unlike in other places it is safe here to check these values\n> without spinlock.\n\nI agree with that, but now also it is not true that we are alway\nreading this under the spin lock for example[1][2], we can see we are\nreading this without spin lock.\n[1]\nStartLogicalReplication\n{\n/*\n* Report the location after which we'll send out further commits as the\n* current sentPtr.\n*/\nsentPtr = MyReplicationSlot->data.confirmed_flush;\n}\n[2]\nLogicalIncreaseRestartDecodingForSlot\n{\n/* candidates are already valid with the current flush position, apply */\nif (updated_lsn)\nLogicalConfirmReceivedLocation(slot->data.confirmed_flush);\n}\n\n We can do that but I feel we have to be careful for\n> all future usages of these variables, so, having spinlock makes them\n> follow the normal coding pattern which I feel makes it more robust.\n> Yes, marking dirty via common function also has merits but personally,\n> I find it better to follow the normal coding practice of checking the\n> required fields under spinlock. The other possibility is to first\n> check if we need to mark the slot dirty under spinlock, then release\n> the spinlock, and then call the common MarkDirty function, but again\n> that will be under the assumption that these flags won't change.\n\nThats true, but we are already making the assumption because now also\nwe are taking the spinlock and taking a decision of marking the slot\ndirty. And after that we are releasing the spin lock and if we do not\nhave guarantee that it can not concurrently change the many things can\ngo wrong no?\n\nAnyway said that, I do not have any strong objection against what we\nare doing now. There were discussion around making the code so that\nit can use common function and I was suggesting how it could be\nachieved but I am not against the current way either.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 6 Sep 2023 09:57:37 +0530",
"msg_from": "Dilip Kumar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: persist logical slots to disk during shutdown checkpoint"
},
{
"msg_contents": "On Wed, Sep 6, 2023 at 9:57 AM Dilip Kumar <[email protected]> wrote:\n>\n> On Wed, Sep 6, 2023 at 9:47 AM Amit Kapila <[email protected]> wrote:\n> >\n> > On Tue, Sep 5, 2023 at 6:00 PM Dilip Kumar <[email protected]> wrote:\n>\n> >\n> > Right, it can change and in that case, the check related to\n> > confirm_flush LSN will fail during the upgrade. However, the point is\n> > that if we don't take spinlock, we need to properly write comments on\n> > why unlike in other places it is safe here to check these values\n> > without spinlock.\n>\n> I agree with that, but now also it is not true that we are alway\n> reading this under the spin lock for example[1][2], we can see we are\n> reading this without spin lock.\n> [1]\n> StartLogicalReplication\n> {\n> /*\n> * Report the location after which we'll send out further commits as the\n> * current sentPtr.\n> */\n> sentPtr = MyReplicationSlot->data.confirmed_flush;\n> }\n> [2]\n> LogicalIncreaseRestartDecodingForSlot\n> {\n> /* candidates are already valid with the current flush position, apply */\n> if (updated_lsn)\n> LogicalConfirmReceivedLocation(slot->data.confirmed_flush);\n> }\n>\n\nThese are accessed only in walsender and confirmed_flush is always\nupdated by walsender. So, this is clearly okay.\n\n> We can do that but I feel we have to be careful for\n> > all future usages of these variables, so, having spinlock makes them\n> > follow the normal coding pattern which I feel makes it more robust.\n> > Yes, marking dirty via common function also has merits but personally,\n> > I find it better to follow the normal coding practice of checking the\n> > required fields under spinlock. The other possibility is to first\n> > check if we need to mark the slot dirty under spinlock, then release\n> > the spinlock, and then call the common MarkDirty function, but again\n> > that will be under the assumption that these flags won't change.\n>\n> Thats true, but we are already making the assumption because now also\n> we are taking the spinlock and taking a decision of marking the slot\n> dirty. And after that we are releasing the spin lock and if we do not\n> have guarantee that it can not concurrently change the many things can\n> go wrong no?\n>\n\nAlso, note that invalidated field could be updated by startup process\nbut that is only possible on standby, so it is safe but again that\nwould be another assumption.\n\n> Anyway said that, I do not have any strong objection against what we\n> are doing now. There were discussion around making the code so that\n> it can use common function and I was suggesting how it could be\n> achieved but I am not against the current way either.\n>\n\nOkay, thanks for looking into it.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 6 Sep 2023 12:01:00 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: persist logical slots to disk during shutdown checkpoint"
},
{
"msg_contents": "On Wed, Sep 6, 2023 at 12:01 PM Amit Kapila <[email protected]> wrote:\n>\n> On Wed, Sep 6, 2023 at 9:57 AM Dilip Kumar <[email protected]> wrote:\n> >\n> > On Wed, Sep 6, 2023 at 9:47 AM Amit Kapila <[email protected]> wrote:\n> > >\n> > > On Tue, Sep 5, 2023 at 6:00 PM Dilip Kumar <[email protected]> wrote:\n> >\n> > >\n> > > Right, it can change and in that case, the check related to\n> > > confirm_flush LSN will fail during the upgrade. However, the point is\n> > > that if we don't take spinlock, we need to properly write comments on\n> > > why unlike in other places it is safe here to check these values\n> > > without spinlock.\n> >\n> > I agree with that, but now also it is not true that we are alway\n> > reading this under the spin lock for example[1][2], we can see we are\n> > reading this without spin lock.\n> > [1]\n> > StartLogicalReplication\n> > {\n> > /*\n> > * Report the location after which we'll send out further commits as the\n> > * current sentPtr.\n> > */\n> > sentPtr = MyReplicationSlot->data.confirmed_flush;\n> > }\n> > [2]\n> > LogicalIncreaseRestartDecodingForSlot\n> > {\n> > /* candidates are already valid with the current flush position, apply */\n> > if (updated_lsn)\n> > LogicalConfirmReceivedLocation(slot->data.confirmed_flush);\n> > }\n> >\n>\n> These are accessed only in walsender and confirmed_flush is always\n> updated by walsender. So, this is clearly okay.\n\nHmm, that's a valid point.\n\n> > We can do that but I feel we have to be careful for\n> > > all future usages of these variables, so, having spinlock makes them\n> > > follow the normal coding pattern which I feel makes it more robust.\n> > > Yes, marking dirty via common function also has merits but personally,\n> > > I find it better to follow the normal coding practice of checking the\n> > > required fields under spinlock. The other possibility is to first\n> > > check if we need to mark the slot dirty under spinlock, then release\n> > > the spinlock, and then call the common MarkDirty function, but again\n> > > that will be under the assumption that these flags won't change.\n> >\n> > Thats true, but we are already making the assumption because now also\n> > we are taking the spinlock and taking a decision of marking the slot\n> > dirty. And after that we are releasing the spin lock and if we do not\n> > have guarantee that it can not concurrently change the many things can\n> > go wrong no?\n> >\n>\n> Also, note that invalidated field could be updated by startup process\n> but that is only possible on standby, so it is safe but again that\n> would be another assumption.\n\nOkay, so I also agree to go with the current patch. Because as you\nsaid above if we access this without a spin lock outside walsender\nthen we will be making a new exception and I agree with that decision\nof not making the new exception.\n\nOther than that the patch LGTM.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 6 Sep 2023 12:08:57 +0530",
"msg_from": "Dilip Kumar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: persist logical slots to disk during shutdown checkpoint"
},
{
"msg_contents": "On Wed, 6 Sept 2023 at 12:09, Dilip Kumar <[email protected]> wrote:\n>\n> Other than that the patch LGTM.\n\npgindent reported that the new comments added need to be re-adjusted,\nhere is an updated patch for the same.\nI also verified the following: a) patch applies neatly on HEAD b) make\ncheck-world passes c) pgindent looks good d) pgperltiy was fine e)\nmeson test runs were successful. Also checked that CFBot run was fine\nfor the last patch.\n\nRegards,\nVignesh",
"msg_date": "Thu, 7 Sep 2023 10:13:09 +0530",
"msg_from": "vignesh C <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: persist logical slots to disk during shutdown checkpoint"
},
{
"msg_contents": "On Thu, Sep 7, 2023 at 10:13 AM vignesh C <[email protected]> wrote:\n>\n> On Wed, 6 Sept 2023 at 12:09, Dilip Kumar <[email protected]> wrote:\n> >\n> > Other than that the patch LGTM.\n>\n> pgindent reported that the new comments added need to be re-adjusted,\n> here is an updated patch for the same.\n>\n\nThanks, the patch looks good to me as well.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 7 Sep 2023 11:56:28 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: persist logical slots to disk during shutdown checkpoint"
},
{
"msg_contents": "On Thu, Sep 07, 2023 at 11:56:28AM +0530, Amit Kapila wrote:\n> Thanks, the patch looks good to me as well.\n\n+ /* This is used to track the last saved confirmed_flush LSN value */\n+ XLogRecPtr last_saved_confirmed_flush;\n\nThis does not feel sufficient, as the comment explaining what this\nvariable does uses the same terms as the variable name (aka it is the\nlast save of the confirmed_lsn). Why it it here and why it is useful?\nIn which context and/or code paths is it used? Okay, there are some\nexplanations when saving a slot, restoring a slot or when a checkpoint\nprocesses slots, but it seems important to me to document more things\nin ReplicationSlot where this is defined.\n\n(Just passing by, I have not checked the patch logic in details but\nthat looks under-documented to me.)\n--\nMichael",
"msg_date": "Thu, 7 Sep 2023 16:48:37 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: persist logical slots to disk during shutdown checkpoint"
},
{
"msg_contents": "On Thu, Sep 7, 2023 at 1:18 PM Michael Paquier <[email protected]> wrote:\n>\n> On Thu, Sep 07, 2023 at 11:56:28AM +0530, Amit Kapila wrote:\n> > Thanks, the patch looks good to me as well.\n>\n> + /* This is used to track the last saved confirmed_flush LSN value */\n> + XLogRecPtr last_saved_confirmed_flush;\n>\n> This does not feel sufficient, as the comment explaining what this\n> variable does uses the same terms as the variable name (aka it is the\n> last save of the confirmed_lsn). Why it it here and why it is useful?\n> In which context and/or code paths is it used? Okay, there are some\n> explanations when saving a slot, restoring a slot or when a checkpoint\n> processes slots, but it seems important to me to document more things\n> in ReplicationSlot where this is defined.\n>\n\nHmm, this is quite debatable as different people feel differently\nabout this. The patch author kept it where it is now but in one of my\nrevisions, I rewrote and added it in the ReplicationSlot. Then\nAshutosh argued that it is better to keep it near where we are saving\nthe slot (aka where the patch has) [1]. Anyway, as I also preferred\nthe core part of the theory about this variable to be in\nReplicationSlot, so I'll move it there before commit unless someone\nargues against it or has any other comments.\n\n[1] - https://www.postgresql.org/message-id/CAExHW5uXq_CU80XJtmWbPJinRjJx54kbQJ9DT%3DUFySKXpjVwrw%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 7 Sep 2023 13:43:30 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: persist logical slots to disk during shutdown checkpoint"
},
{
"msg_contents": "On Thu, Sep 7, 2023 at 1:43 PM Amit Kapila <[email protected]> wrote:\n>\n> On Thu, Sep 7, 2023 at 1:18 PM Michael Paquier <[email protected]> wrote:\n> >\n> > On Thu, Sep 07, 2023 at 11:56:28AM +0530, Amit Kapila wrote:\n> > > Thanks, the patch looks good to me as well.\n> >\n> > + /* This is used to track the last saved confirmed_flush LSN value */\n> > + XLogRecPtr last_saved_confirmed_flush;\n> >\n> > This does not feel sufficient, as the comment explaining what this\n> > variable does uses the same terms as the variable name (aka it is the\n> > last save of the confirmed_lsn). Why it it here and why it is useful?\n> > In which context and/or code paths is it used? Okay, there are some\n> > explanations when saving a slot, restoring a slot or when a checkpoint\n> > processes slots, but it seems important to me to document more things\n> > in ReplicationSlot where this is defined.\n> >\n>\n> Hmm, this is quite debatable as different people feel differently\n> about this. The patch author kept it where it is now but in one of my\n> revisions, I rewrote and added it in the ReplicationSlot. Then\n> Ashutosh argued that it is better to keep it near where we are saving\n> the slot (aka where the patch has) [1]. Anyway, as I also preferred\n> the core part of the theory about this variable to be in\n> ReplicationSlot, so I'll move it there before commit unless someone\n> argues against it or has any other comments.\n\nIf we want it to be in ReplicationSlot, I suggest we just say, - saves\nlast confirmed flush LSN to detect any divergence in the in-memory and\non-disk confirmed flush LSN cheaply.\n\nWhen to detect that divergence and what to do when there is divergence\nshould be document at relevant places in the code. In future if we\nexpand the When and How we use this variable, the comment in\nReplicationSlot will be insufficient.\n\nWe follow this commenting style at several places e.g.\n/* any outstanding modifications? */\nbool just_dirtied;\nbool dirty;\n\nhow and when these variables are used is commented upon in the relevant code.\n\n * This needn't actually be part of a checkpoint, but it's a convenient\n- * location.\n+ * location. is_shutdown is true in case of a shutdown checkpoint.\n\nRelying on the first sentence, if we decide to not persist the\nreplication slot at the time of checkpoint, would that be OK? It\ndoesn't look like a convenience thing to me any more.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Thu, 7 Sep 2023 15:38:07 +0530",
"msg_from": "Ashutosh Bapat <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: persist logical slots to disk during shutdown checkpoint"
},
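For reference, the suggestion above boils down to a short comment on the new struct member. A sketch of how it could read inside ReplicationSlot, with surrounding members elided and the wording following Ashutosh's phrasing rather than any final committed text:

typedef struct ReplicationSlot
{
	...
	/* any outstanding modifications? */
	bool		just_dirtied;
	bool		dirty;
	...
	/*
	 * Saves the last confirmed_flush LSN that was written to disk, so that
	 * divergence between the in-memory and on-disk confirmed_flush can be
	 * detected cheaply.  When and how it is used is explained at the
	 * relevant call sites (see CheckPointReplicationSlots).
	 */
	XLogRecPtr	last_saved_confirmed_flush;
	...
} ReplicationSlot;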
{
"msg_contents": "On Thu, Sep 7, 2023 at 3:38 PM Ashutosh Bapat\n<[email protected]> wrote:\n>\n> * This needn't actually be part of a checkpoint, but it's a convenient\n> - * location.\n> + * location. is_shutdown is true in case of a shutdown checkpoint.\n>\n> Relying on the first sentence, if we decide to not persist the\n> replication slot at the time of checkpoint, would that be OK? It\n> doesn't look like a convenience thing to me any more.\n>\n\nInstead of removing that comment, how about something like this: \"This\nneedn't actually be part of a checkpoint except for shutdown\ncheckpoint, but it's a convenient location.\"?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 7 Sep 2023 16:11:09 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: persist logical slots to disk during shutdown checkpoint"
},
{
"msg_contents": "On Thu, Sep 7, 2023 at 4:11 PM Amit Kapila <[email protected]> wrote:\n>\n> On Thu, Sep 7, 2023 at 3:38 PM Ashutosh Bapat\n> <[email protected]> wrote:\n> >\n> > * This needn't actually be part of a checkpoint, but it's a convenient\n> > - * location.\n> > + * location. is_shutdown is true in case of a shutdown checkpoint.\n> >\n> > Relying on the first sentence, if we decide to not persist the\n> > replication slot at the time of checkpoint, would that be OK? It\n> > doesn't look like a convenience thing to me any more.\n> >\n>\n> Instead of removing that comment, how about something like this: \"This\n> needn't actually be part of a checkpoint except for shutdown\n> checkpoint, but it's a convenient location.\"?\n>\n\nI find the wording a bit awkward. My version would be \"Checkpoint is a\nconvenient location to persist all the slots. But in a shutdown\ncheckpoint, indicated by is_shutdown = true, we also update\nconfirmed_flush.\" But please feel free to choose whichever version you\nare comfortable with.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Thu, 7 Sep 2023 16:30:43 +0530",
"msg_from": "Ashutosh Bapat <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: persist logical slots to disk during shutdown checkpoint"
},
{
"msg_contents": "On Thu, Sep 7, 2023 at 4:30 PM Ashutosh Bapat\n<[email protected]> wrote:\n>\n> On Thu, Sep 7, 2023 at 4:11 PM Amit Kapila <[email protected]> wrote:\n> >\n> > On Thu, Sep 7, 2023 at 3:38 PM Ashutosh Bapat\n> > <[email protected]> wrote:\n> > >\n> > > * This needn't actually be part of a checkpoint, but it's a convenient\n> > > - * location.\n> > > + * location. is_shutdown is true in case of a shutdown checkpoint.\n> > >\n> > > Relying on the first sentence, if we decide to not persist the\n> > > replication slot at the time of checkpoint, would that be OK? It\n> > > doesn't look like a convenience thing to me any more.\n> > >\n> >\n> > Instead of removing that comment, how about something like this: \"This\n> > needn't actually be part of a checkpoint except for shutdown\n> > checkpoint, but it's a convenient location.\"?\n> >\n>\n> I find the wording a bit awkward. My version would be \"Checkpoint is a\n> convenient location to persist all the slots. But in a shutdown\n> checkpoint, indicated by is_shutdown = true, we also update\n> confirmed_flush.\" But please feel free to choose whichever version you\n> are comfortable with.\n>\n\nI think saying we also update confirmed_flush appears unclear to me.\nSo, I tried another version by changing the entire comment to:\n\"Normally, we can flush dirty replication slots at regular intervals\nby any background process like bgwriter but checkpoint is a convenient\nlocation to persist. Additionally, in case of a shutdown checkpoint,\nwe also identify the slots for which confirmed_flush has been updated\nsince the last time it persisted and flush them.\"\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 7 Sep 2023 18:41:45 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: persist logical slots to disk during shutdown checkpoint"
},
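Rendered as the function header it would replace, the wording proposed in the previous message would read roughly as follows (a sketch of that proposal only; the comment that eventually gets committed may differ):

/*
 * Flush all replication slots to disk.
 *
 * Normally, we can flush dirty replication slots at regular intervals by any
 * background process like bgwriter but checkpoint is a convenient location to
 * persist.  Additionally, in case of a shutdown checkpoint, we also identify
 * the slots for which confirmed_flush has been updated since the last time it
 * was persisted and flush them.
 */
void
CheckPointReplicationSlots(bool is_shutdown)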
{
"msg_contents": "On Thu, Sep 7, 2023 at 3:38 PM Ashutosh Bapat\n<[email protected]> wrote:\n>\n> On Thu, Sep 7, 2023 at 1:43 PM Amit Kapila <[email protected]> wrote:\n> >\n> > On Thu, Sep 7, 2023 at 1:18 PM Michael Paquier <[email protected]> wrote:\n> > >\n> > > On Thu, Sep 07, 2023 at 11:56:28AM +0530, Amit Kapila wrote:\n> > > > Thanks, the patch looks good to me as well.\n> > >\n> > > + /* This is used to track the last saved confirmed_flush LSN value */\n> > > + XLogRecPtr last_saved_confirmed_flush;\n> > >\n> > > This does not feel sufficient, as the comment explaining what this\n> > > variable does uses the same terms as the variable name (aka it is the\n> > > last save of the confirmed_lsn). Why it it here and why it is useful?\n> > > In which context and/or code paths is it used? Okay, there are some\n> > > explanations when saving a slot, restoring a slot or when a checkpoint\n> > > processes slots, but it seems important to me to document more things\n> > > in ReplicationSlot where this is defined.\n> > >\n> >\n> > Hmm, this is quite debatable as different people feel differently\n> > about this. The patch author kept it where it is now but in one of my\n> > revisions, I rewrote and added it in the ReplicationSlot. Then\n> > Ashutosh argued that it is better to keep it near where we are saving\n> > the slot (aka where the patch has) [1]. Anyway, as I also preferred\n> > the core part of the theory about this variable to be in\n> > ReplicationSlot, so I'll move it there before commit unless someone\n> > argues against it or has any other comments.\n>\n> If we want it to be in ReplicationSlot, I suggest we just say, - saves\n> last confirmed flush LSN to detect any divergence in the in-memory and\n> on-disk confirmed flush LSN cheaply.\n>\n\nI have added something on these lines and also changed the other\ncomment pointed out by you. In the passing, I made minor cosmetic\nchanges as well.\n\n-- \nWith Regards,\nAmit Kapila.",
"msg_date": "Fri, 8 Sep 2023 09:04:43 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: persist logical slots to disk during shutdown checkpoint"
},
{
"msg_contents": "On Fri, Sep 08, 2023 at 09:04:43AM +0530, Amit Kapila wrote:\n> I have added something on these lines and also changed the other\n> comment pointed out by you. In the passing, I made minor cosmetic\n> changes as well.\n\n+ * We can flush dirty replication slots at regular intervals by any\n+ * background process like bgwriter but checkpoint is a convenient location.\n\nI don't see a need to refer to the bgwriter here. On the contrary, it\ncan be confusing as one could think that this flush happens in the\nbgwriter, but that's not the case currently as only the checkpointer\ndoes that.\n\n+ * We won't ensure that the slot is persisted after the\n\ns/persisted/flushed/? Or just refer to the \"slot's data being\nflushed\", or refer to \"the slot's data is made durable\" instead? The\nuse of \"persist\" here is confusing, because a slot's persistence\nrefers to it as being a *candidate* for flush (compared to an\nephemeral slot), and it does not refer to the *fact* of flushing its\ndata to make sure that it survives a crash. In the context of this\npatch, the LSN value tracked in the slot's in-memory data refers to\nthe last point where the slot's data has been flushed.\n\n+ /*\n+ * This is used to track the last persisted confirmed_flush LSN value to\n+ * detect any divergence in the in-memory and on-disk values for the same.\n+ */\n\n\"This value tracks is the last LSN where this slot's data has been\nflushed to disk. This is used during a checkpoint shutdown to decide\nif a logical slot's data should be forcibly flushed or not.\"\n\nHmm. WAL senders are shut down *after* the checkpointer and *after*\nthe shutdown checkpoint is finished (see PM_SHUTDOWN and\nPM_SHUTDOWN_2) because we want the WAL senders to acknowledge the\ncheckpoint record before shutting down the primary. In order to limit\nthe number of records to work on after a restart, what this patch is\nproposing is an improvement. Perhaps it would be better to document\nthat we don't care about the potential concurrent activity of logical\nWAL senders in this case and that the LSN we are saving at is a best\neffort, meaning that last_saved_confirmed_flush is just here to reduce\nthe damage on a follow-up restart? The comment added in\nCheckPointReplicationSlots() goes in this direction, but perhaps this\npotential concurrent activity should be mentioned?\n--\nMichael",
"msg_date": "Fri, 8 Sep 2023 13:38:30 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: persist logical slots to disk during shutdown checkpoint"
},
{
"msg_contents": "On Fri, Sep 8, 2023 at 10:08 AM Michael Paquier <[email protected]> wrote:\n>\n>\n> + /*\n> + * This is used to track the last persisted confirmed_flush LSN value to\n> + * detect any divergence in the in-memory and on-disk values for the same.\n> + */\n>\n> \"This value tracks is the last LSN where this slot's data has been\n> flushed to disk.\n>\n\nThis makes the comment vague as this sounds like we are saving a slot\ncorresponding to some LSN which is not the case. If you prefer this\ntone then we can instead say: \"This value tracks the last\nconfirmed_flush LSN flushed which is used during a checkpoint shutdown\nto decide if a logical slot's data should be forcibly flushed or not.\"\n\n>\n> This is used during a checkpoint shutdown to decide\n> if a logical slot's data should be forcibly flushed or not.\"\n>\n> Hmm. WAL senders are shut down *after* the checkpointer and *after*\n> the shutdown checkpoint is finished (see PM_SHUTDOWN and\n> PM_SHUTDOWN_2) because we want the WAL senders to acknowledge the\n> checkpoint record before shutting down the primary.\n>\n\nAs per my understanding, this is not true for logical walsenders. As\nper code, while HandleCheckpointerInterrupts(), we call ShutdownXLOG()\nwhich sends a signal to walsender to stop and waits for it to stop.\nAnd only after that, did it write a shutdown checkpoint WAL record.\nAfter getting the InitStopping signal, walsender sets got_STOPPING\nflag. Then *logical* walsender ensures that it sends all the pending\nWAL and exits. What you have quoted is probably true for physical\nwalsenders.\n\n>\n> In order to limit\n> the number of records to work on after a restart, what this patch is\n> proposing is an improvement. Perhaps it would be better to document\n> that we don't care about the potential concurrent activity of logical\n> WAL senders in this case and that the LSN we are saving at is a best\n> effort, meaning that last_saved_confirmed_flush is just here to reduce\n> the damage on a follow-up restart?\n>\n\nUnless I am wrong, there shouldn't be any concurrent activity for\nlogical walsenders. IIRC, it is a mandatory requirement for logical\nwalsenders to stop before shutdown checkpointer to avoid panic error.\nWe do handle logical walsnders differently because they can generate\nWAL during decoding.\n\n>\n> The comment added in\n> CheckPointReplicationSlots() goes in this direction, but perhaps this\n> potential concurrent activity should be mentioned?\n>\n\nSure, we can change it if required.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 8 Sep 2023 11:50:37 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: persist logical slots to disk during shutdown checkpoint"
},
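For reference, the value under discussion is the one exposed to users as confirmed_flush_lsn in pg_replication_slots; the last_saved_confirmed_flush counterpart added by the patch is in-memory server state only and is not visible at the SQL level. A quick way to watch the user-visible side is a sketch like this (slot names depend on the installation):

SELECT slot_name, slot_type, confirmed_flush_lsn
FROM pg_replication_slots
WHERE slot_type = 'logical';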
{
"msg_contents": "On Fri, Sep 08, 2023 at 11:50:37AM +0530, Amit Kapila wrote:\n> On Fri, Sep 8, 2023 at 10:08 AM Michael Paquier <[email protected]> wrote:\n>>\n>>\n>> + /*\n>> + * This is used to track the last persisted confirmed_flush LSN value to\n>> + * detect any divergence in the in-memory and on-disk values for the same.\n>> + */\n>>\n>> \"This value tracks is the last LSN where this slot's data has been\n>> flushed to disk.\n>>\n> \n> This makes the comment vague as this sounds like we are saving a slot\n> corresponding to some LSN which is not the case. If you prefer this\n> tone then we can instead say: \"This value tracks the last\n> confirmed_flush LSN flushed which is used during a checkpoint shutdown\n> to decide if a logical slot's data should be forcibly flushed or not.\"\n\nOkay, that looks like an improvement over the term \"persisted\".\n\n>> This is used during a checkpoint shutdown to decide\n>> if a logical slot's data should be forcibly flushed or not.\"\n>>\n>> Hmm. WAL senders are shut down *after* the checkpointer and *after*\n>> the shutdown checkpoint is finished (see PM_SHUTDOWN and\n>> PM_SHUTDOWN_2) because we want the WAL senders to acknowledge the\n>> checkpoint record before shutting down the primary.\n>>\n> \n> As per my understanding, this is not true for logical walsenders. As\n> per code, while HandleCheckpointerInterrupts(), we call ShutdownXLOG()\n> which sends a signal to walsender to stop and waits for it to stop.\n> And only after that, did it write a shutdown checkpoint WAL record.\n> After getting the InitStopping signal, walsender sets got_STOPPING\n> flag. Then *logical* walsender ensures that it sends all the pending\n> WAL and exits. What you have quoted is probably true for physical\n> walsenders.\n\nHm, reminding me about this area.. This roots down to the handling of\nWalSndCaughtUp in the send_data callback for logical or physical.\nThis is switched to true for logical WAL senders much earlier than\nphysical WAL senders, aka before the shutdown checkpoint begins in the\nlatter. What was itching me a bit is that the postmaster logic could\nbe made more solid. Logical and physical WAL senders are both marked\nas BACKEND_TYPE_WALSND, but we don't actually check that the WAL\nsenders remaining at the end of PM_SHUTDOWN_2 are *not* connected to a\ndatabase. This would require a new BACKEND_TYPE_* perhaps, or perhaps\nwe're fine with the current state because we'll catch up problems in\nthe checkpointer if any WAL is generated while the shutdown checkpoint\nis running anyway. Just something I got in mind, unrelated to this\npatch.\n\n>> In order to limit\n>> the number of records to work on after a restart, what this patch is\n>> proposing is an improvement. Perhaps it would be better to document\n>> that we don't care about the potential concurrent activity of logical\n>> WAL senders in this case and that the LSN we are saving at is a best\n>> effort, meaning that last_saved_confirmed_flush is just here to reduce\n>> the damage on a follow-up restart?\n> \n> Unless I am wrong, there shouldn't be any concurrent activity for\n> logical walsenders. IIRC, it is a mandatory requirement for logical\n> walsenders to stop before shutdown checkpointer to avoid panic error.\n> We do handle logical walsnders differently because they can generate\n> WAL during decoding.\n\nYeah. 
See above.\n\n>> The comment added in\n>> CheckPointReplicationSlots() goes in this direction, but perhaps this\n>> potential concurrent activity should be mentioned?\n> \n> Sure, we can change it if required.\n\n+ * We can flush dirty replication slots at regular intervals by any\n+ * background process like bgwriter but checkpoint is a convenient location.\n\nI still don't quite see a need to mention the bgwriter at all here..\nThat's just unrelated.\n\nThe comment block in CheckPointReplicationSlots() from v10 uses\n\"persist\", but you mean \"flush\", I guess..\n--\nMichael",
"msg_date": "Mon, 11 Sep 2023 15:38:19 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: persist logical slots to disk during shutdown checkpoint"
},
{
"msg_contents": "On Mon, Sep 11, 2023 at 12:08 PM Michael Paquier <[email protected]> wrote:\n>\n> On Fri, Sep 08, 2023 at 11:50:37AM +0530, Amit Kapila wrote:\n> > On Fri, Sep 8, 2023 at 10:08 AM Michael Paquier <[email protected]> wrote:\n> >>\n> >>\n> >> + /*\n> >> + * This is used to track the last persisted confirmed_flush LSN value to\n> >> + * detect any divergence in the in-memory and on-disk values for the same.\n> >> + */\n> >>\n> >> \"This value tracks is the last LSN where this slot's data has been\n> >> flushed to disk.\n> >>\n> >\n> > This makes the comment vague as this sounds like we are saving a slot\n> > corresponding to some LSN which is not the case. If you prefer this\n> > tone then we can instead say: \"This value tracks the last\n> > confirmed_flush LSN flushed which is used during a checkpoint shutdown\n> > to decide if a logical slot's data should be forcibly flushed or not.\"\n>\n> Okay, that looks like an improvement over the term \"persisted\".\n>\n\nChanged accordingly.\n\n> >> This is used during a checkpoint shutdown to decide\n> >> if a logical slot's data should be forcibly flushed or not.\"\n> >>\n> >> Hmm. WAL senders are shut down *after* the checkpointer and *after*\n> >> the shutdown checkpoint is finished (see PM_SHUTDOWN and\n> >> PM_SHUTDOWN_2) because we want the WAL senders to acknowledge the\n> >> checkpoint record before shutting down the primary.\n> >>\n> >\n> > As per my understanding, this is not true for logical walsenders. As\n> > per code, while HandleCheckpointerInterrupts(), we call ShutdownXLOG()\n> > which sends a signal to walsender to stop and waits for it to stop.\n> > And only after that, did it write a shutdown checkpoint WAL record.\n> > After getting the InitStopping signal, walsender sets got_STOPPING\n> > flag. Then *logical* walsender ensures that it sends all the pending\n> > WAL and exits. What you have quoted is probably true for physical\n> > walsenders.\n>\n> Hm, reminding me about this area.. This roots down to the handling of\n> WalSndCaughtUp in the send_data callback for logical or physical.\n> This is switched to true for logical WAL senders much earlier than\n> physical WAL senders, aka before the shutdown checkpoint begins in the\n> latter. What was itching me a bit is that the postmaster logic could\n> be made more solid. Logical and physical WAL senders are both marked\n> as BACKEND_TYPE_WALSND, but we don't actually check that the WAL\n> senders remaining at the end of PM_SHUTDOWN_2 are *not* connected to a\n> database. This would require a new BACKEND_TYPE_* perhaps, or perhaps\n> we're fine with the current state because we'll catch up problems in\n> the checkpointer if any WAL is generated while the shutdown checkpoint\n> is running anyway. 
Just something I got in mind, unrelated to this\n> patch.\n>\n\nI don't know if the difference is worth inventing a new BACKEND_TYPE_*\nbut if you think so then we can probably discuss this in a new thread.\nI think we may want to improve some comments as a separate patch to\nmake this evident.\n\n>\n> + * We can flush dirty replication slots at regular intervals by any\n> + * background process like bgwriter but checkpoint is a convenient location.\n>\n> I still don't quite see a need to mention the bgwriter at all here..\n> That's just unrelated.\n>\n\nI don't disagree with it, so changed it in the attached patch.\n\n> The comment block in CheckPointReplicationSlots() from v10 uses\n> \"persist\", but you mean \"flush\", I guess..\n>\n\nThis point is not very clear to me. Can you please quote the exact\ncomment if you think something needs to be changed?\n\n-- \nWith Regards,\nAmit Kapila.",
"msg_date": "Mon, 11 Sep 2023 14:49:49 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: persist logical slots to disk during shutdown checkpoint"
},
{
"msg_contents": "On Mon, Sep 11, 2023 at 02:49:49PM +0530, Amit Kapila wrote:\n> I don't know if the difference is worth inventing a new BACKEND_TYPE_*\n> but if you think so then we can probably discuss this in a new thread.\n> I think we may want to improve some comments as a separate patch to\n> make this evident.\n\nThe comments in postmaster.c could be improved, at least. There is no\nneed to discuss that here.\n\n> This point is not very clear to me. Can you please quote the exact\n> comment if you think something needs to be changed?\n\nHmm. Don't think that's it yet..\n\nPlease see the v11 attached, that rewords all the places of the patch\nthat need clarifications IMO. I've found that the comment additions\nin CheckPointReplicationSlots() to be overcomplicated:\n- The key point to force a flush of a slot if its confirmed_lsn has\nmoved ahead of the last LSN where it was saved is to make the follow\nup restart more responsive.\n- Not sure that there is any point to mention the other code paths in\nthe tree where ReplicationSlotSave() can be called, and a slot can be\nsaved in other processes than just WAL senders (like slot\nmanipulations in normal backends, for one). This was the last\nsentence in v10.\n- Persist is incorrect in this context in the tests, slot.c and\nslot.h, as it should refer to the slot's data being flushed, saved or\njust \"made durable\" because this is what the new last saved LSN is\nhere for. Persistence is a slot property, and does not refer to the\nfact of flushing the data IMO.\n\n+ if (s->data.invalidated == RS_INVAL_NONE &&\n+ s->data.confirmed_flush != s->last_saved_confirmed_flush)\n\nActually this is incorrect, no? Shouldn't we make sure that the\nconfirmed_flush is strictly higher than the last saved LSN?\n--\nMichael",
"msg_date": "Tue, 12 Sep 2023 14:25:33 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: persist logical slots to disk during shutdown checkpoint"
},
{
"msg_contents": "On Tue, Sep 12, 2023 at 10:55 AM Michael Paquier <[email protected]> wrote:\n>\n> On Mon, Sep 11, 2023 at 02:49:49PM +0530, Amit Kapila wrote:\n>\n> Please see the v11 attached, that rewords all the places of the patch\n> that need clarifications IMO. I've found that the comment additions\n> in CheckPointReplicationSlots() to be overcomplicated:\n> - The key point to force a flush of a slot if its confirmed_lsn has\n> moved ahead of the last LSN where it was saved is to make the follow\n> up restart more responsive.\n>\n\nI don't think it will become more responsive in any way, not sure what\nmade you think like that. The key idea is that after restart we want\nto ensure that all the WAL data up to the shutdown checkpoint record\nis sent to downstream. As mentioned in the commit message, this will\nhelp in ensuring that upgrades don't miss any data and then there is\nanother small advantage as mentioned in the commit message.\n\n> - Not sure that there is any point to mention the other code paths in\n> the tree where ReplicationSlotSave() can be called, and a slot can be\n> saved in other processes than just WAL senders (like slot\n> manipulations in normal backends, for one). This was the last\n> sentence in v10.\n>\n\nThe point was the earlier sentence is no longer true and keeping it as\nit is could be wrong or at least misleading. For example, earlier it\nis okay to say, \"This needn't actually be part of a checkpoint, ...\"\nbut now that is no longer true as we want to invoke this at the time\nof shutdown checkpoint for correctness. If we want to be precise, we\ncan say, \"It is convenient to flush dirty replication slots at the\ntime of checkpoint. Additionally, ..\"\n\n>\n> + if (s->data.invalidated == RS_INVAL_NONE &&\n> + s->data.confirmed_flush != s->last_saved_confirmed_flush)\n>\n> Actually this is incorrect, no? Shouldn't we make sure that the\n> confirmed_flush is strictly higher than the last saved LSN?\n>\n\nI can't see why it is incorrect. Do you see how (in what scenario) it\ncould go wrong? As per my understanding, confirmed_flush LSN will\nalways be greater than equal to last_saved_confirmed_flush but we\ndon't want to ensure that point here because we just want if the\nlatest value is not the same then we should mark the slot dirty and\nflush it as that will be location we have ensured to update before\nwalsender shutdown. I think it is better to add an assert if you are\nworried about any such case and we had thought of adding it as well\nbut then didn't do it because we don't have matching asserts to ensure\nthat we never assign prior LSN value to consfirmed_flush LSN.\n\n+ /*\n+ * LSN used to track the last confirmed_flush LSN where the slot's data\n+ * has been flushed to disk.\n+ */\n+ XLogRecPtr last_saved_confirmed_flush;\n\nI don't want to argue on such a point because it is a little bit of a\nmatter of personal choice but I find this comment unclear. It seems to\nread that confirmed_flush LSN is some LSN position which is where we\nflushed the slot's data and that is not true. I found the last comment\nin the patch sent by me: \"This value tracks the last confirmed_flush\nLSN flushed which is used during a shutdown checkpoint to decide if\nlogical's slot data should be forcibly flushed or not.\" which I feel\nwe agreed upon is better.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 12 Sep 2023 15:15:44 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: persist logical slots to disk during shutdown checkpoint"
},
{
"msg_contents": "On Tue, Sep 12, 2023 at 03:15:44PM +0530, Amit Kapila wrote:\n> I don't think it will become more responsive in any way, not sure what\n> made you think like that. The key idea is that after restart we want\n> to ensure that all the WAL data up to the shutdown checkpoint record\n> is sent to downstream. As mentioned in the commit message, this will\n> help in ensuring that upgrades don't miss any data and then there is\n> another small advantage as mentioned in the commit message.\n\nGood thing I did not use the term \"responsive\" in the previous patch I\nposted. My apologies if you found that confusing. Let's say, \"to\nprevent unnecessary retreat\", then ;)\n\n>> - Not sure that there is any point to mention the other code paths in\n>> the tree where ReplicationSlotSave() can be called, and a slot can be\n>> saved in other processes than just WAL senders (like slot\n>> manipulations in normal backends, for one). This was the last\n>> sentence in v10.\n> \n> The point was the earlier sentence is no longer true and keeping it as\n> it is could be wrong or at least misleading. For example, earlier it\n> is okay to say, \"This needn't actually be part of a checkpoint, ...\"\n> but now that is no longer true as we want to invoke this at the time\n> of shutdown checkpoint for correctness. If we want to be precise, we\n\nHow so? This is just a reference about the fact that using a\ncheckpoint path for this stuff is useful. A shutdown checkpoint is\nstill a checkpoint, done by the checkpointer. The background writer\nis not concerned by that. \n\n> can say, \"It is convenient to flush dirty replication slots at the\n> time of checkpoint. Additionally, ..\"\n\nOkay by mr to reword the top comment of CheckPointReplicationSlots()\nto use these terms, if you feel strongly about it.\n\n>> + if (s->data.invalidated == RS_INVAL_NONE &&\n>> + s->data.confirmed_flush != s->last_saved_confirmed_flush)\n>>\n>> Actually this is incorrect, no? Shouldn't we make sure that the\n>> confirmed_flush is strictly higher than the last saved LSN?\n> \n> I can't see why it is incorrect. Do you see how (in what scenario) it\n> could go wrong? As per my understanding, confirmed_flush LSN will\n> always be greater than equal to last_saved_confirmed_flush but we\n> don't want to ensure that point here because we just want if the\n> latest value is not the same then we should mark the slot dirty and\n> flush it as that will be location we have ensured to update before\n> walsender shutdown. I think it is better to add an assert if you are\n> worried about any such case and we had thought of adding it as well\n> but then didn't do it because we don't have matching asserts to ensure\n> that we never assign prior LSN value to consfirmed_flush LSN.\n\nBecause that's just safer in the long run, and I don't see why we\ncannot just do that? Imagine, for instance, that a bug doing an\nincorrect manipulation of a logical slot's data does an incorrect\ncomputation of this field, and that we finish with in-memory data\nthat's older than what was previously saved. The code may cause a\nflush at an incorrect, past, position. That's just an assumption from\nmy side, of course.\n\n> I don't want to argue on such a point because it is a little bit of a\n> matter of personal choice but I find this comment unclear. It seems to\n> read that confirmed_flush LSN is some LSN position which is where we\n> flushed the slot's data and that is not true. 
I found the last comment\n> in the patch sent by me: \"This value tracks the last confirmed_flush\n> LSN flushed which is used during a shutdown checkpoint to decide if\n> logical's slot data should be forcibly flushed or not.\" which I feel\n> we agreed upon is better.\n\nOkay, fine by me here.\n--\nMichael",
"msg_date": "Wed, 13 Sep 2023 14:27:37 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: persist logical slots to disk during shutdown checkpoint"
},
{
"msg_contents": "On Wed, Sep 13, 2023 at 10:57 AM Michael Paquier <[email protected]> wrote:\n>\n> On Tue, Sep 12, 2023 at 03:15:44PM +0530, Amit Kapila wrote:\n> >>\n> >> - Not sure that there is any point to mention the other code paths in\n> >> the tree where ReplicationSlotSave() can be called, and a slot can be\n> >> saved in other processes than just WAL senders (like slot\n> >> manipulations in normal backends, for one). This was the last\n> >> sentence in v10.\n> >\n> > The point was the earlier sentence is no longer true and keeping it as\n> > it is could be wrong or at least misleading. For example, earlier it\n> > is okay to say, \"This needn't actually be part of a checkpoint, ...\"\n> > but now that is no longer true as we want to invoke this at the time\n> > of shutdown checkpoint for correctness. If we want to be precise, we\n>\n> How so?\n>\n\nConsider if we move this call to bgwriter (aka flushing slots is no\nlonger part of a checkpoint), Would that be okay? Previously, I think\nit was okay but not now. I see an argument to keep that as it is as\nwell because we have already mentioned the special shutdown checkpoint\ncase. By the way, I have changed this because Ashutosh felt it is no\nlonger correct to keep the first sentence as it is. See his email[1]\n(Relying on the first sentence, ...). It happened previously as well\nthat different reviewers working in this area have different views on\nthis sort of thing. I am trying my best to address the review\ncomments, especially from experienced hackers but personally, I feel\nthis is a minor nitpick and isn't worth too much argument, either way,\nshould be okay.\n\n>\n> >> + if (s->data.invalidated == RS_INVAL_NONE &&\n> >> + s->data.confirmed_flush != s->last_saved_confirmed_flush)\n> >>\n> >> Actually this is incorrect, no? Shouldn't we make sure that the\n> >> confirmed_flush is strictly higher than the last saved LSN?\n> >\n> > I can't see why it is incorrect. Do you see how (in what scenario) it\n> > could go wrong? As per my understanding, confirmed_flush LSN will\n> > always be greater than equal to last_saved_confirmed_flush but we\n> > don't want to ensure that point here because we just want if the\n> > latest value is not the same then we should mark the slot dirty and\n> > flush it as that will be location we have ensured to update before\n> > walsender shutdown. I think it is better to add an assert if you are\n> > worried about any such case and we had thought of adding it as well\n> > but then didn't do it because we don't have matching asserts to ensure\n> > that we never assign prior LSN value to consfirmed_flush LSN.\n>\n> Because that's just safer in the long run, and I don't see why we\n> cannot just do that? Imagine, for instance, that a bug doing an\n> incorrect manipulation of a logical slot's data does an incorrect\n> computation of this field, and that we finish with in-memory data\n> that's older than what was previously saved. The code may cause a\n> flush at an incorrect, past, position. That's just an assumption from\n> my side, of course.\n>\n\nIf you are worried about such bugs, it would be better to have an\nAssert as suggested previously rather than greater than check because\nwe will at least catch such bugs otherwise it can go unnoticed or in\nthe worst case will lead to unknown consequences. I am saying this\nbecause if there are such bugs (or got introduced later) then the slot\ncan be flushed with a prior confirmed_flush location even from other\ncode paths. 
Just for reference, we don't have any check ensuring that\nconfirmed_flush LSN cannot move backward in function\nLogicalConfirmReceivedLocation(), see also another place where we\nupdate it:\nelse\n{\nSpinLockAcquire(&MyReplicationSlot->mutex);\nMyReplicationSlot->data.confirmed_flush = lsn;\nSpinLockRelease(&MyReplicationSlot->mutex);\n}\n\nAs other places don't have an assert, I didn't add one, but we can\nadd one here.\n\n> > I don't want to argue on such a point because it is a little bit of a\n> > matter of personal choice but I find this comment unclear. It seems to\n> > read that confirmed_flush LSN is some LSN position which is where we\n> > flushed the slot's data and that is not true. I found the last comment\n> > in the patch sent by me: \"This value tracks the last confirmed_flush\n> > LSN flushed which is used during a shutdown checkpoint to decide if\n> > logical's slot data should be forcibly flushed or not.\" which I feel\n> > we agreed upon is better.\n>\n> Okay, fine by me here.\n>\n\nThanks, will change once we agree on the remaining points.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 13 Sep 2023 12:07:12 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: persist logical slots to disk during shutdown checkpoint"
},
{
"msg_contents": "On Wed, Sep 13, 2023 at 12:07:12PM +0530, Amit Kapila wrote:\n> Consider if we move this call to bgwriter (aka flushing slots is no\n> longer part of a checkpoint), Would that be okay? Previously, I think\n> it was okay but not now. I see an argument to keep that as it is as\n> well because we have already mentioned the special shutdown checkpoint\n> case. By the way, I have changed this because Ashutosh felt it is no\n> longer correct to keep the first sentence as it is. See his email[1]\n> (Relying on the first sentence, ...).\n\nHmmm.. Okay..\n\n> As other places don't have an assert, I didn't add one here but we can\n> add one here.\n\nI'd be OK with an assertion here at the end, though I'd still choose a\nstricter run-time check if I were to apply that myself.\n--\nMichael",
"msg_date": "Wed, 13 Sep 2023 16:14:56 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: persist logical slots to disk during shutdown checkpoint"
},
{
"msg_contents": "On Wed, Sep 13, 2023 at 12:45 PM Michael Paquier <[email protected]> wrote:\n>\n> On Wed, Sep 13, 2023 at 12:07:12PM +0530, Amit Kapila wrote:\n> > Consider if we move this call to bgwriter (aka flushing slots is no\n> > longer part of a checkpoint), Would that be okay? Previously, I think\n> > it was okay but not now. I see an argument to keep that as it is as\n> > well because we have already mentioned the special shutdown checkpoint\n> > case. By the way, I have changed this because Ashutosh felt it is no\n> > longer correct to keep the first sentence as it is. See his email[1]\n> > (Relying on the first sentence, ...).\n>\n> Hmmm.. Okay..\n>\n\nThe patch is updated as per recent discussion.\n\n-- \nWith Regards,\nAmit Kapila.",
"msg_date": "Wed, 13 Sep 2023 16:20:37 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: persist logical slots to disk during shutdown checkpoint"
},
{
"msg_contents": "On Wed, Sep 13, 2023 at 04:20:37PM +0530, Amit Kapila wrote:\n> The patch is updated as per recent discussion.\n\nWFM. Thanks for the updated version.\n--\nMichael",
"msg_date": "Thu, 14 Sep 2023 10:50:18 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: persist logical slots to disk during shutdown checkpoint"
},
{
"msg_contents": "On Thu, Sep 14, 2023 at 7:20 AM Michael Paquier <[email protected]> wrote:\n>\n> On Wed, Sep 13, 2023 at 04:20:37PM +0530, Amit Kapila wrote:\n> > The patch is updated as per recent discussion.\n>\n> WFM. Thanks for the updated version.\n>\n\nPushed.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 14 Sep 2023 14:41:17 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: persist logical slots to disk during shutdown checkpoint"
},
{
"msg_contents": "On Thu, Sep 14, 2023 at 2:41 PM Amit Kapila <[email protected]> wrote:\n>\n> On Thu, Sep 14, 2023 at 7:20 AM Michael Paquier <[email protected]> wrote:\n> >\n> > On Wed, Sep 13, 2023 at 04:20:37PM +0530, Amit Kapila wrote:\n> > > The patch is updated as per recent discussion.\n> >\n> > WFM. Thanks for the updated version.\n> >\n>\n> Pushed.\n\nCommitfest entry \"https://commitfest.postgresql.org/44/4536/ is in\n\"Ready for committer\" state. Is there something remaining here? We\nshould probably set it as \"committed\".\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Wed, 20 Sep 2023 16:48:00 +0530",
"msg_from": "Ashutosh Bapat <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: persist logical slots to disk during shutdown checkpoint"
},
{
"msg_contents": "On Wed, Sep 20, 2023 at 04:48:00PM +0530, Ashutosh Bapat wrote:\n> Commitfest entry \"https://commitfest.postgresql.org/44/4536/ is in\n> \"Ready for committer\" state. Is there something remaining here? We\n> should probably set it as \"committed\".\n\nThanks, I've switched that to \"Committed\".\n--\nMichael",
"msg_date": "Wed, 20 Sep 2023 20:24:05 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: persist logical slots to disk during shutdown checkpoint"
}
] |
[
{
"msg_contents": "Hi,\r\n\r\nAttached is the first draft of the PostgreSQL 16 release announcement, \r\nauthored by Chelsea Dole & myself.\r\n\r\nTo frame this up, the goal of the GA release announcement is to help \r\nfolks discover the awesome new features of PostgreSQL. It's impossible \r\nto list out every single feature in the release and still have a \r\ncoherent announcement, so we try to target features that have the \r\nbroadest range of impact.\r\n\r\nIt's possible we missed or incorrectly stated something, so please \r\nprovide feedback if we did so.\r\n\r\n(Note I have not added in all of the links etc. to the Markdown yet, as \r\nI want to wait for the first pass of feedback to come through).\r\n\r\n**Please provide feedback by August 26, 12:00 UTC**. After that point, \r\nwe need to freeze all changes so we can begin the release announcement \r\ntranslation effort.\r\n\r\nThanks,\r\n\r\nJonathan",
"msg_date": "Sat, 19 Aug 2023 15:38:37 -0400",
"msg_from": "\"Jonathan S. Katz\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "PostgreSQL 16 release announcement draft"
},
{
"msg_contents": ">>>>>\nAdditionally, this release adds a new field to the pg_stat_all_tables view,\ncapturing a timestamp representing when a table or index was last scanned.\nPostgreSQL also makes auto_explain more readable by logging values passed into\nparameterized statements, and improves accuracy of pg_stat_activity’s\nnormalization algorithm.\n>>>>>\n\nI am not sure if it's \"capturing a timestamp representing\" or\n\"capturing the timestamp representing\".\n\"pg_stat_activity’s normalization algorithm\", I think you are\nreferring to \"pg_stat_statements\"?\n\n\n",
"msg_date": "Sun, 20 Aug 2023 10:18:23 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 16 release announcement draft"
},
{
"msg_contents": ">>>\nPostgreSQL 16 improves the performance of existing PostgreSQL functionality\nthrough new query planner optimizations. In this latest release, the query\nplanner can parallelize `FULL` and `RIGHT` joins, utilize incremental sorts for\n`SELECT DISTINCT` queries, and execute window functions more efficiently. It\nalso introduces `RIGHT` and `OUTER` \"anti-joins\", which enable users to identify\nrows not present in a joined table.\n>>>\n\nI think \"utilize incremental sorts is for\" something like select\nmy_avg(distinct one),my_sum(one) from (values(1),(3)) t(one);\nso it's not the same as `SELECT DISTINCT` queries?\nref: https://git.postgresql.org/cgit/postgresql.git/commit/?id=1349d2790bf48a4de072931c722f39337e72055e\n\nalso\n<<<< \"the query planner ....., and execute window functions more efficiently.\"\nsince the query planner doesn't execute anything. probably \"and\noptimize window functions execution\"?\n\n\n",
"msg_date": "Wed, 23 Aug 2023 18:20:49 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 16 release announcement draft"
},
{
"msg_contents": "On Wed, 23 Aug 2023 at 22:21, jian he <[email protected]> wrote:\n>\n> >>>\n> PostgreSQL 16 improves the performance of existing PostgreSQL functionality\n> through new query planner optimizations. In this latest release, the query\n> planner can parallelize `FULL` and `RIGHT` joins, utilize incremental sorts for\n> `SELECT DISTINCT` queries, and execute window functions more efficiently. It\n> also introduces `RIGHT` and `OUTER` \"anti-joins\", which enable users to identify\n> rows not present in a joined table.\n> >>>\n>\n> I think \"utilize incremental sorts is for\" something like select\n> my_avg(distinct one),my_sum(one) from (values(1),(3)) t(one);\n> so it's not the same as `SELECT DISTINCT` queries?\n> ref: https://git.postgresql.org/cgit/postgresql.git/commit/?id=1349d2790bf48a4de072931c722f39337e72055e\n\nThe incremental sorts for DISTINCT will likely be a reference to\n3c6fc5820, so, not the same thing as 1349d2790. I don't see anything\nthere relating to 1349d2790.\n\n> also\n> <<<< \"the query planner ....., and execute window functions more efficiently.\"\n> since the query planner doesn't execute anything. probably \"and\n> optimize window functions execution\"?\n\nYeah, that or \"and optimize window functions so they execute more\nefficiently\" is likely an improvement there.\n\nDavid\n\n\n",
"msg_date": "Thu, 24 Aug 2023 00:02:51 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 16 release announcement draft"
},
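For illustration, the two items being distinguished here correspond to query shapes like the following sketch (my_avg/my_sum are the user-defined aggregates from the example quoted above; tbl is a hypothetical table):

-- presorted DISTINCT / ORDER BY aggregates (the 1349d2790 item):
select my_avg(distinct one), my_sum(one) from (values (1), (3)) t(one);

-- incremental sort for SELECT DISTINCT (the 3c6fc5820 item), usable when a
-- prefix of the distinct columns is already sorted, e.g. by an index:
SELECT DISTINCT a, b FROM tbl;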
{
"msg_contents": "On 8/23/23 8:02 AM, David Rowley wrote:\r\n> On Wed, 23 Aug 2023 at 22:21, jian he <[email protected]> wrote:\r\n>>\r\n>>>>>\r\n>> PostgreSQL 16 improves the performance of existing PostgreSQL functionality\r\n>> through new query planner optimizations. In this latest release, the query\r\n>> planner can parallelize `FULL` and `RIGHT` joins, utilize incremental sorts for\r\n>> `SELECT DISTINCT` queries, and execute window functions more efficiently. It\r\n>> also introduces `RIGHT` and `OUTER` \"anti-joins\", which enable users to identify\r\n>> rows not present in a joined table.\r\n>>>>>\r\n>>\r\n>> I think \"utilize incremental sorts is for\" something like select\r\n>> my_avg(distinct one),my_sum(one) from (values(1),(3)) t(one);\r\n>> so it's not the same as `SELECT DISTINCT` queries?\r\n>> ref: https://git.postgresql.org/cgit/postgresql.git/commit/?id=1349d2790bf48a4de072931c722f39337e72055e\r\n> \r\n> The incremental sorts for DISTINCT will likely be a reference to\r\n> 3c6fc5820, so, not the same thing as 1349d2790. I don't see anything\r\n> there relating to 1349d2790.\r\n\r\nWe could add something about 1349d2790 -- do you have suggested wording?\r\n\r\n>> also\r\n>> <<<< \"the query planner ....., and execute window functions more efficiently.\"\r\n>> since the query planner doesn't execute anything. probably \"and\r\n>> optimize window functions execution\"?\r\n> \r\n> Yeah, that or \"and optimize window functions so they execute more\r\n> efficiently\" is likely an improvement there.\r\n\r\nModified. See updated announcement, with other incorporated changes.\r\n\r\nReminder that the window to submit changes closes at **August 26, 12:00 \r\nUTC**.\r\n\r\nThanks,\r\n\r\nJonathan",
"msg_date": "Wed, 23 Aug 2023 13:55:26 -0400",
"msg_from": "\"Jonathan S. Katz\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL 16 release announcement draft"
},
{
"msg_contents": "On Thu, 24 Aug 2023 at 05:55, Jonathan S. Katz <[email protected]> wrote:\n> We could add something about 1349d2790 -- do you have suggested wording?\n\nI think it's worth a mention. See the text added in square brackets below:\n\nPostgreSQL 16 improves the performance of existing PostgreSQL functionality\nthrough new query planner optimizations. In this latest release, the query\nplanner can parallelize `FULL` and `RIGHT` joins, [generate more\noptimal plans for\nqueries containing aggregate functions with a `DISTINCT` or `ORDER BY` clause,]\nutilize incremental sorts for `SELECT DISTINCT` queries, and optimize\nwindow function\nexecutions so they execute more efficiently. It also introduces\n`RIGHT` and `OUTER`\n\"anti-joins\", which enable users to identify rows not present in a joined table.\n\nThanks\n\nDavid\n\n\n",
"msg_date": "Thu, 24 Aug 2023 09:07:44 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 16 release announcement draft"
},
{
"msg_contents": "On 8/23/23 5:07 PM, David Rowley wrote:\r\n> On Thu, 24 Aug 2023 at 05:55, Jonathan S. Katz <[email protected]> wrote:\r\n>> We could add something about 1349d2790 -- do you have suggested wording?\r\n> \r\n> I think it's worth a mention. See the text added in square brackets below:\r\n> \r\n> PostgreSQL 16 improves the performance of existing PostgreSQL functionality\r\n> through new query planner optimizations. In this latest release, the query\r\n> planner can parallelize `FULL` and `RIGHT` joins, [generate more\r\n> optimal plans for\r\n> queries containing aggregate functions with a `DISTINCT` or `ORDER BY` clause,]\r\n> utilize incremental sorts for `SELECT DISTINCT` queries, and optimize\r\n> window function\r\n> executions so they execute more efficiently. It also introduces\r\n> `RIGHT` and `OUTER`\r\n> \"anti-joins\", which enable users to identify rows not present in a joined table.\r\n\r\nI added this in mostly verbatim. I'm concerned the sentence is a bit \r\nlong, but we could break it up into two: (1) with the new JOIN \r\ncapabilities and (2) with the optimizations.\r\n\r\nJonathan",
"msg_date": "Thu, 24 Aug 2023 10:32:02 -0400",
"msg_from": "\"Jonathan S. Katz\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL 16 release announcement draft"
},
{
"msg_contents": "hi. Can you check my first email about \"a\" versus \"the\" and \"pg_stat_activity\".\n\nalso:\n\"including the `\\bind` command, which allows\nusers to execute parameterized queries (e.g `SELECT $1 + $2`) then use `\\bind`\nto substitute the variables.\"\n\nThe example actually is very hard to reproduce. (it's not that super intuitive).\nfail case:\ntest16-# SELECT $1 + $2 \\bind 1 2\ntest16-# ;\n\na better example would be (e.g `SELECT $1 , $2`).\nThe semicolon still needed to be in the next line.\n\n\n",
"msg_date": "Thu, 24 Aug 2023 23:16:55 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 16 release announcement draft"
},
{
"msg_contents": "Op 8/24/23 om 16:32 schreef Jonathan S. Katz:\n> On 8/23/23 5:07 PM, David Rowley wrote:\n>> On Thu, 24 Aug 2023 at 05:55, Jonathan S. Katz <[email protected]> \n>> wrote:\n\nHi,\n\nWhen v15 docs have:\n\n\"27.2.7. Cascading Replication\nThe cascading replication feature allows a standby server to accept \nreplication connections and stream WAL records to other standbys, acting \nas a relay. This can be used to reduce the number of direct connections \nto the primary and also to minimize inter-site bandwidth overheads.\"\n\nwhy then, in the release draft, is that capability mentioned as \nsomething that is new for v16?\n\"\nIn PostgreSQL 16, users can perform logical decoding from a standby\ninstance, meaning a standby can publish logical changes to other servers.\n\"\n\nIs there a difference between the two?\n\nThanks,\n\nErik\n\n\n\n\n\n\n\n",
"msg_date": "Thu, 24 Aug 2023 17:17:14 +0200",
"msg_from": "Erik Rijkers <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 16 release announcement draft"
},
{
"msg_contents": "On 2023-Aug-24, Jonathan S. Katz wrote:\n\n> ### Performance Improvements\n> \n> PostgreSQL 16 improves the performance of existing PostgreSQL functionality\n> through new query planner optimizations. In this latest release, the query\n> planner can parallelize `FULL` and `RIGHT` joins, generate better optimized\n> plans for queries that use aggregate functions (e.g. `count`) with a `DISTINCT`\n> or `ORDER BY` clause, utilize incremental sorts for `SELECT DISTINCT` queries,\n> and optimize window function executions so they execute more efficiently.\n\n\"optimize window function executions so that they execute blah\" sounds\nredundant and strange. Maybe just \"optimize execution of window\nfunctions\" is sufficient? Also, using \"e.g.\" there looks somewhat out\nof place; maybe \"(such as `count`)\" is a good replacement?\n\n> It also introduces `RIGHT` and `OUTER` \"anti-joins\", which enable users to\n> identify rows not present in a joined table.\n\nWait. Are you saying we didn't have those already? Looking at\nrelease-16.sgml I think this refers to commit 16dc2703c541, which means\nthis made them more efficient rather than invented them.\n\n\n> This release includes improvements for bulk loading using `COPY` in both single\n> and concurrent operations, with tests showing up to a 300% performance\n> improvement in some cases. PostgreSQL adds support for load balancing in clients\n\nPostgreSQL 16\n\n> that use `libpq`, and improvements to vacuum strategy that reduce the necessity\n> of full-table freezes. Additionally, PostgreSQL 16 introduces CPU acceleration\n> using `SIMD` in both x86 and ARM architectures, resulting in performance gains\n> when processing ASCII and JSON strings, and performing array and subtransaction\n> searches.\n> \n> ### Logical replication \n> \n> Logical replication lets PostgreSQL users stream data to other PostgreSQL\n\n\"L.R. in PostgreSQL lets users\"?\n\n> instances or subscribers that can interpret the PostgreSQL logical replication\n> protocol. In PostgreSQL 16, users can perform logical decoding from a standby\n\ns/decoding/replication/ ? (It seems odd to use \"decoding\" when the\nprevious sentence used \"replication\")\n\n> instance, meaning a standby can publish logical changes to other servers. This\n> provides developers with new workload distribution options – for example, using\n> a standby rather than the busier primary to logically replicate changes to\n> downstream systems.\n> \n> Additionally, there are several performance improvements in PostgreSQL 16 to\n> logical replication. Subscribers can now apply large transactions using parallel\n> workers. For tables that do not have a `PRIMARY KEY`, subscribers can use B-tree\n\n\"a primary key\", no caps.\n\n> indexes instead of sequential scans to find rows. Under certain conditions,\n> users can also speed up initial table synchronization using the binary format.\n> \n> There are several access control improvements to logical replication in\n> PostgreSQL 16, including the new predefined role pg_create_subscription, which\n> grants users the ability to create a new logical subscription. 
Finally, this\n> release begins adding support for bidirectional logical replication, introducing\n> functionality to replicate data between two tables from different publishers.\n\n\"to create a new logical subscription\" -> \"to create new logical subscriptions\"\n\n> ### Developer Experience\n> \n> PostgreSQL 16 adds more syntax from the SQL/JSON standard, including\n> constructors and predicates such as `JSON_ARRAY()`, `JSON_ARRAYAGG()`, and\n> `IS JSON`. This release also introduces the ability to use underscores for\n> thousands separators (e.g. `5_432_000`) and non-decimal integer literals, such\n> as `0x1538`, `0o12470`, and `0b1010100111000`.\n> \n> Developers using PostgreSQL 16 will also benefit from the addition of multiple\n> commands to `psql` client protocol, including the `\\bind` command, which allows\n> users to execute parameterized queries (e.g `SELECT $1 + $2`) then use `\\bind`\n> to substitute the variables. \n\nThis paragraph sounds a bit suspicious. What do you mean with \"multiple\ncommands to psql client protocol\"? Also, I think \"to execute parameterized\nqueries\" should be \"to prepare parameterized queries\", and later \"then\nuse \\bind to execute the query substituting the variables\".\n\n\n\n> ### Monitoring\n> \n> A key aspect of tuning the performance of database workloads is understanding\n> the impact of your I/O operations on your system. PostgreSQL 16 helps simplify\n> how you can analyze this data with the new pg_stat_io view, which tracks key I/O\n> statistics such as shared_buffer hits and I/O latency.\n\nHmm, I think what pg_stat_io gives you is data which wasn't available\npreviously at all. Maybe do something like \"Pg 16 introduces\npg_stat_io, a new source of key I/O metrics that can be used for more\nfine grained something something\".\n\n> Additionally, this release adds a new field to the `pg_stat_all_tables` view \n> that records a timestamp representing when a table or index was last scanned.\n> PostgreSQL also makes auto_explain more readable by logging values passed into\n\nPostgreSQL 16\n\n> parameterized statements, and improves accuracy of pg_stat_activity's\n> normalization algorithm.\n\nI think jian already mentioned that this refers to pg_stat_statement\nquery fingerprinting. I know that the query_id also appears in\npg_stat_activity, but that is much newer, and it's not permanent there\nlike in pss. Maybe it should be \"of the query fingerprinting algorithm\nused by pg_stat_statement and pg_stat_activity\".\n\n> ## Images and Logos\n> \n> Postgres, PostgreSQL, and the Elephant Logo (Slonik) are all registered\n> trademarks of the [PostgreSQL Community Association of Canada](https://www.postgres.ca).\n\nIsn't this just the \"PostgreSQL Community Association\", no Canada?\n\n\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"Ellos andaban todos desnudos como su madre los parió, y también las mujeres,\naunque no vi más que una, harto moza, y todos los que yo vi eran todos\nmancebos, que ninguno vi de edad de más de XXX años\" (Cristóbal Colón)\n\n\n",
"msg_date": "Thu, 24 Aug 2023 17:19:56 +0200",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 16 release announcement draft"
},
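As a concrete sketch of the developer-experience items quoted in the draft above, the new PostgreSQL 16 syntax can be exercised with statements along these lines (output column names are arbitrary):

SELECT JSON_ARRAY(1, 2, 3);                                   -- SQL/JSON constructor
SELECT js, js IS JSON AS is_json
FROM (VALUES ('{"a": 1}'), ('not json')) AS t(js);            -- SQL/JSON predicate
SELECT 5_432_000 AS with_separators;                          -- underscore separators
SELECT 0x1538 AS hex, 0o12470 AS oct, 0b1010100111000 AS bin; -- non-decimal literals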
{
"msg_contents": "On 8/24/23 11:16 AM, jian he wrote:\r\n> hi. Can you check my first email about \"a\" versus \"the\" and \"pg_stat_activity\".\r\n\r\nI did when you first sent it, and did not make any changes.\r\n\r\n> also:\r\n> \"including the `\\bind` command, which allows\r\n> users to execute parameterized queries (e.g `SELECT $1 + $2`) then use `\\bind`\r\n> to substitute the variables.\"\r\n> \r\n> The example actually is very hard to reproduce. (it's not that super intuitive).\r\n> fail case:\r\n> test16-# SELECT $1 + $2 \\bind 1 2\r\n> test16-# ;\r\n> \r\n> a better example would be (e.g `SELECT $1 , $2`).\r\n> The semicolon still needed to be in the next line.\r\n\r\nI agree with updating the example, I'd propose:\r\n\r\nSELECT $1::int + $2::int \\bind 1 2 \\g\r\n\r\nwhich mirrors what's in the docs[1]\r\n\r\nThanks,\r\n\r\nJonathan\r\n\r\n[1] \r\nhttps://www.postgresql.org/docs/16/app-psql.html#APP-PSQL-META-COMMAND-BIND",
"msg_date": "Thu, 24 Aug 2023 11:23:31 -0400",
"msg_from": "\"Jonathan S. Katz\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL 16 release announcement draft"
},
{
"msg_contents": "On 2023-08-24 11:23, Jonathan S. Katz wrote:\n> \n> SELECT $1::int + $2::int \\bind 1 2 \\g\n\nOne cast also works, letting type inference figure out the other.\nSo if I say\n\nSELECT $1::int + $2 \\gdesc\n\nit tells me the result will be int. That made me wonder if there is\na \\gdesc variant to issue the \"statement variant\" Describe message\nand show what the parameter types have been inferred to be. If\nthere's not, obviously it won't be in 16, but it might be useful.\n\nRegards,\n-Chap\n\n\n",
"msg_date": "Thu, 24 Aug 2023 11:38:05 -0400",
"msg_from": "Chapman Flack <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 16 release announcement draft"
},
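Putting the pieces of this subthread together, a psql 16 session exercising \bind and \gdesc would look roughly like this (a sketch, mirroring the examples already given above):

-- execute a parameterized query, supplying values with \bind:
SELECT $1::int + $2::int \bind 1 2 \g

-- \gdesc only describes the query, so it reports the inferred result type
-- without executing it:
SELECT $1::int + $2 \gdesc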
{
"msg_contents": ">\n>\n> > Postgres, PostgreSQL, and the Elephant Logo (Slonik) are all registered\n> > trademarks of the [PostgreSQL Community Association of Canada](\n> https://www.postgres.ca).\n>\n> Isn't this just the \"PostgreSQL Community Association\", no Canada?\n>\n\nCertainly confusing from the website, but in the about section is this\n\"PostgreSQL Community Association is a trade or business name of the PostgreSQL\nCommunity Association of Canada.\"\n\nDave\n\n\n> Postgres, PostgreSQL, and the Elephant Logo (Slonik) are all registered\n> trademarks of the [PostgreSQL Community Association of Canada](https://www.postgres.ca).\n\nIsn't this just the \"PostgreSQL Community Association\", no Canada?Certainly confusing from the website, but in the about section is this\"PostgreSQL Community Association is a trade or business name of the PostgreSQL Community Association of Canada.\"Dave",
"msg_date": "Thu, 24 Aug 2023 12:54:34 -0400",
"msg_from": "Dave Cramer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 16 release announcement draft"
},
{
"msg_contents": "On 8/24/23 12:54 PM, Dave Cramer wrote:\r\n> \r\n> \r\n> \r\n> > Postgres, PostgreSQL, and the Elephant Logo (Slonik) are all\r\n> registered\r\n> > trademarks of the [PostgreSQL Community Association of\r\n> Canada](https://www.postgres.ca <https://www.postgres.ca>).\r\n> \r\n> Isn't this just the \"PostgreSQL Community Association\", no Canada?\r\n> \r\n> \r\n> Certainly confusing from the website, but in the about section is this\r\n> \"PostgreSQL Community Association is a trade or business name of the \r\n> PostgreSQL Community Association of Canada.\"\r\n\r\nThis was something I missed when reviewing the fulltext, and went ahead \r\nand fixed it. Thanks,\r\n\r\nJonathan",
"msg_date": "Fri, 25 Aug 2023 22:50:24 -0400",
"msg_from": "\"Jonathan S. Katz\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL 16 release announcement draft"
},
{
"msg_contents": "On 8/24/23 11:17 AM, Erik Rijkers wrote:\r\n> Op 8/24/23 om 16:32 schreef Jonathan S. Katz:\r\n>> On 8/23/23 5:07 PM, David Rowley wrote:\r\n>>> On Thu, 24 Aug 2023 at 05:55, Jonathan S. Katz <[email protected]> \r\n>>> wrote:\r\n> \r\n> Hi,\r\n> \r\n> When v15 docs have:\r\n> \r\n> \"27.2.7. Cascading Replication\r\n> The cascading replication feature allows a standby server to accept \r\n> replication connections and stream WAL records to other standbys, acting \r\n> as a relay. This can be used to reduce the number of direct connections \r\n> to the primary and also to minimize inter-site bandwidth overheads.\"\r\n> \r\n> why then, in the release draft, is that capability mentioned as \r\n> something that is new for v16?\r\n> \"\r\n> In PostgreSQL 16, users can perform logical decoding from a standby\r\n> instance, meaning a standby can publish logical changes to other servers.\r\n> \"\r\n> \r\n> Is there a difference between the two?\r\n\r\nYes. Those docs refer to **physical** replication, where a standby can \r\ncontinue to replicate WAL records to other standbys. In v16, standbys \r\ncan now publish changes over **logical** replication.\r\n\r\nThanks,\r\n\r\nJonathan",
"msg_date": "Fri, 25 Aug 2023 22:51:43 -0400",
"msg_from": "\"Jonathan S. Katz\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL 16 release announcement draft"
},
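As a concrete sketch of the v16 behavior described here (assuming wal_level=logical on the primary, a hot standby, and the contrib test_decoding plugin; the slot name is arbitrary), logical changes can now be consumed directly on the standby:

-- run on the standby; PostgreSQL 16 accepts this, earlier releases reject it:
SELECT pg_create_logical_replication_slot('standby_decoding_demo', 'test_decoding');
SELECT * FROM pg_logical_slot_get_changes('standby_decoding_demo', NULL, NULL);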
{
"msg_contents": "On 8/24/23 11:19 AM, Alvaro Herrera wrote:\r\n> On 2023-Aug-24, Jonathan S. Katz wrote:\r\n> \r\n>> ### Performance Improvements\r\n>>\r\n>> PostgreSQL 16 improves the performance of existing PostgreSQL functionality\r\n>> through new query planner optimizations. In this latest release, the query\r\n>> planner can parallelize `FULL` and `RIGHT` joins, generate better optimized\r\n>> plans for queries that use aggregate functions (e.g. `count`) with a `DISTINCT`\r\n>> or `ORDER BY` clause, utilize incremental sorts for `SELECT DISTINCT` queries,\r\n>> and optimize window function executions so they execute more efficiently.\r\n> \r\n> \"optimize window function executions so that they execute blah\" sounds\r\n> redundant and strange. Maybe just \"optimize execution of window\r\n> functions\" is sufficient? Also, using \"e.g.\" there looks somewhat out\r\n> of place; maybe \"(such as `count`)\" is a good replacement?\r\n> \r\n>> It also introduces `RIGHT` and `OUTER` \"anti-joins\", which enable users to\r\n>> identify rows not present in a joined table.\r\n> \r\n> Wait. Are you saying we didn't have those already? Looking at\r\n> release-16.sgml I think this refers to commit 16dc2703c541, which means\r\n> this made them more efficient rather than invented them.\r\n> \r\n> \r\n>> This release includes improvements for bulk loading using `COPY` in both single\r\n>> and concurrent operations, with tests showing up to a 300% performance\r\n>> improvement in some cases. PostgreSQL adds support for load balancing in clients\r\n> \r\n> PostgreSQL 16\r\n> \r\n>> that use `libpq`, and improvements to vacuum strategy that reduce the necessity\r\n>> of full-table freezes. Additionally, PostgreSQL 16 introduces CPU acceleration\r\n>> using `SIMD` in both x86 and ARM architectures, resulting in performance gains\r\n>> when processing ASCII and JSON strings, and performing array and subtransaction\r\n>> searches.\r\n>>\r\n>> ### Logical replication\r\n>>\r\n>> Logical replication lets PostgreSQL users stream data to other PostgreSQL\r\n> \r\n> \"L.R. in PostgreSQL lets users\"?\r\n> \r\n>> instances or subscribers that can interpret the PostgreSQL logical replication\r\n>> protocol. In PostgreSQL 16, users can perform logical decoding from a standby\r\n> \r\n> s/decoding/replication/ ? (It seems odd to use \"decoding\" when the\r\n> previous sentence used \"replication\")\r\n> \r\n>> instance, meaning a standby can publish logical changes to other servers. This\r\n>> provides developers with new workload distribution options – for example, using\r\n>> a standby rather than the busier primary to logically replicate changes to\r\n>> downstream systems.\r\n>>\r\n>> Additionally, there are several performance improvements in PostgreSQL 16 to\r\n>> logical replication. Subscribers can now apply large transactions using parallel\r\n>> workers. For tables that do not have a `PRIMARY KEY`, subscribers can use B-tree\r\n> \r\n> \"a primary key\", no caps.\r\n> \r\n>> indexes instead of sequential scans to find rows. Under certain conditions,\r\n>> users can also speed up initial table synchronization using the binary format.\r\n>>\r\n>> There are several access control improvements to logical replication in\r\n>> PostgreSQL 16, including the new predefined role pg_create_subscription, which\r\n>> grants users the ability to create a new logical subscription. 
Finally, this\r\n>> release begins adding support for bidirectional logical replication, introducing\r\n>> functionality to replicate data between two tables from different publishers.\r\n> \r\n> \"to create a new logical subscription\" -> \"to create new logical subscriptions\"\r\n> \r\n>> ### Developer Experience\r\n>>\r\n>> PostgreSQL 16 adds more syntax from the SQL/JSON standard, including\r\n>> constructors and predicates such as `JSON_ARRAY()`, `JSON_ARRAYAGG()`, and\r\n>> `IS JSON`. This release also introduces the ability to use underscores for\r\n>> thousands separators (e.g. `5_432_000`) and non-decimal integer literals, such\r\n>> as `0x1538`, `0o12470`, and `0b1010100111000`.\r\n>>\r\n>> Developers using PostgreSQL 16 will also benefit from the addition of multiple\r\n>> commands to `psql` client protocol, including the `\\bind` command, which allows\r\n>> users to execute parameterized queries (e.g `SELECT $1 + $2`) then use `\\bind`\r\n>> to substitute the variables.\r\n> \r\n> This paragraph sounds a bit suspicious. What do you mean with \"multiple\r\n> commands to psql client protocol\"? Also, I think \"to execute parameterized\r\n> queries\" should be \"to prepare parameterized queries\", and later \"then\r\n> use \\bind to execute the query substituting the variables\".\r\n> \r\n> \r\n> \r\n>> ### Monitoring\r\n>>\r\n>> A key aspect of tuning the performance of database workloads is understanding\r\n>> the impact of your I/O operations on your system. PostgreSQL 16 helps simplify\r\n>> how you can analyze this data with the new pg_stat_io view, which tracks key I/O\r\n>> statistics such as shared_buffer hits and I/O latency.\r\n> \r\n> Hmm, I think what pg_stat_io gives you is data which wasn't available\r\n> previously at all. Maybe do something like \"Pg 16 introduces\r\n> pg_stat_io, a new source of key I/O metrics that can be used for more\r\n> fine grained something something\".\r\n> \r\n>> Additionally, this release adds a new field to the `pg_stat_all_tables` view\r\n>> that records a timestamp representing when a table or index was last scanned.\r\n>> PostgreSQL also makes auto_explain more readable by logging values passed into\r\n> \r\n> PostgreSQL 16\r\n> \r\n>> parameterized statements, and improves accuracy of pg_stat_activity's\r\n>> normalization algorithm.\r\n> \r\n> I think jian already mentioned that this refers to pg_stat_statement\r\n> query fingerprinting. I know that the query_id also appears in\r\n> pg_stat_activity, but that is much newer, and it's not permanent there\r\n> like in pss. Maybe it should be \"of the query fingerprinting algorithm\r\n> used by pg_stat_statement and pg_stat_activity\".\r\n> \r\n>> ## Images and Logos\r\n>>\r\n>> Postgres, PostgreSQL, and the Elephant Logo (Slonik) are all registered\r\n>> trademarks of the [PostgreSQL Community Association of Canada](https://www.postgres.ca).\r\n> \r\n> Isn't this just the \"PostgreSQL Community Association\", no Canada?\r\n\r\nThanks for the feedback. I accepted most of the changes. Please see \r\nrevised text here, which also includes the URL substitutions.\r\n\r\nJonathan",
"msg_date": "Fri, 25 Aug 2023 23:31:01 -0400",
"msg_from": "\"Jonathan S. Katz\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL 16 release announcement draft"
},
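For the monitoring items in the revised text, the new instrumentation can be inspected with queries like this sketch (PostgreSQL 16; column subset chosen for brevity):

SELECT backend_type, object, context, reads, writes, hits
FROM pg_stat_io
WHERE reads > 0 OR writes > 0;

SELECT relname, last_seq_scan, last_idx_scan
FROM pg_stat_all_tables
ORDER BY greatest(last_seq_scan, last_idx_scan) DESC NULLS LAST
LIMIT 10;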
{
"msg_contents": "Op 8/26/23 om 04:51 schreef Jonathan S. Katz:\n> On 8/24/23 11:17 AM, Erik Rijkers wrote:\n>> Op 8/24/23 om 16:32 schreef Jonathan S. Katz:\n>>> On 8/23/23 5:07 PM, David Rowley wrote:\n>>>> On Thu, 24 Aug 2023 at 05:55, Jonathan S. Katz \n>>>> <[email protected]> wrote:\n>>\n>> Hi,\n>>\n>> When v15 docs have:\n>>\n>> \"27.2.7. Cascading Replication\n>> The cascading replication feature allows a standby server to accept \n>> replication connections and stream WAL records to other standbys, \n>> acting as a relay. This can be used to reduce the number of direct \n>> connections to the primary and also to minimize inter-site bandwidth \n>> overheads.\"\n>>\n>> why then, in the release draft, is that capability mentioned as \n>> something that is new for v16?\n>> \"\n>> In PostgreSQL 16, users can perform logical decoding from a standby\n>> instance, meaning a standby can publish logical changes to other servers.\n>> \"\n>>\n>> Is there a difference between the two?\n> \n> Yes. Those docs refer to **physical** replication, where a standby can \n> continue to replicate WAL records to other standbys. In v16, standbys \n> can now publish changes over **logical** replication.\n\nWell, I must assume you are right.\n\nBut why is the attached program, running 3 cascading v15 servers, \nshowing 'logical' in the middle server's (port 6526) \npg_replication_slots.slot_type ? Surely that is not physical but \nlogical replication?\n\n port | svn | slot_name | slot_type\n------+--------+--------------------+-----------\n 6526 | 150003 | pub_6527_from_6526 | logical <--\n(1 row)\n\nI must be confused -- I will be thankful for enlightenment.\n\nErik\n\n> Thanks,\n> \n> Jonathan\n>",
"msg_date": "Sat, 26 Aug 2023 06:28:35 +0200",
"msg_from": "Erik Rijkers <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 16 release announcement draft"
}
] |
[
{
"msg_contents": "Hi,\n\nThread [1] concerns (generalizing slightly) the efficient casting\nto an SQL type of the result of a jsonb extracting operation\n(array indexing, object keying, path evaluation) that has ended\nwith a scalar JsonbValue.\n\nSo far, it can efficiently rewrite casts to boolean or numeric\ntypes.\n\nI notice that, since 6dda292, JsonbValue includes a datetime\nscalar member.\n\nAs far as I can tell, the only jsonb extracting operations\nthat might be capable of producing such a JsonbValue would be\njsonb_path_query(_first)?(_tz)? with a path ending in .datetime().\n\nIf casts existed from jsonb to date/time types, then the same\ntechniques used in [1] would be able to rewrite such casts,\neliding the JsonbValueToJsonb and subsequent reconversion via text.\n\nBut no such casts seem to exist, providing nothing to hang the\noptimization on. (And, after all, 6dda292 says \"These datetime\nvalues are allowed for temporary representation only. During\nserialization datetime values are converted into strings.\")\n\nPerhaps it isn't worth supplying such casts. The value is held\nas text within jsonb, so .datetime() in a jsonpath had to parse\nit. One might lament the extra serialization and reparsing if\nthat path query result goes through ::text::timestamp, but then\nsimply leaving .datetime() off of the jsonpath in the first place\nwould have left the parsing to be done just once by ::timestamp.\n\nOptimizable casts might be of more interest if the jsonpath\nlanguage had more operations on datetimes, so that you might\nefficiently retrieve the result of some arbitrary expression\nin the path, not just a literal datetime value that has to get\nparsed in one place or another anyway.\n\nI haven't looked into SQL/JSON to see what it provides in terms\nof casts to SQL types. I'm more familiar with SQL/XML, which does\nprovide XMLCAST, which can take an XML source and SQL date/time\ntarget, and does the equivalent of an XML Query ending in\n\"cast as xs:dateTime\" and assigns that result to the SQL type\n(with some time zone subtleties rather carefully specified).\nSo I might assume SQL/JSON has something analogous?\n\nOn the other hand, XML Query does offer more operations on\ndate/time values, which may, as discussed above, make such a cast\nmore interesting to have around.\n\nThoughts?\n\nRegards,\n-Chap\n\n[1] \nhttps://www.postgresql.org/message-id/flat/CAKU4AWoqAVya6PBhn+BCbFaBMt3z-2=i5fKO3bW=6HPhbid2Dw@mail.gmail.com\n\n\n",
"msg_date": "Sun, 20 Aug 2023 12:11:26 -0400",
"msg_from": "Chapman Flack <[email protected]>",
"msg_from_op": true,
"msg_subject": "datetime from a JsonbValue"
}
] |
[
{
"msg_contents": "If we have a hash join with an Append node on the outer side, something\nlike\n\n Hash Join\n Hash Cond: (pt.a = t.a)\n -> Append\n -> Seq Scan on pt_p1 pt_1\n -> Seq Scan on pt_p2 pt_2\n -> Seq Scan on pt_p3 pt_3\n -> Hash\n -> Seq Scan on t\n\nWe can actually prune those subnodes of the Append that cannot possibly\ncontain any matching tuples from the other side of the join. To do\nthat, when building the Hash table, for each row from the inner side we\ncan compute the minimum set of subnodes that can possibly match the join\ncondition. When we have built the Hash table and start to execute the\nAppend node, we should have known which subnodes are survived and thus\ncan skip other subnodes.\n\nThis kind of partition pruning can be extended to happen across multiple\njoin levels. For instance,\n\n Hash Join\n Hash Cond: (pt.a = t2.a)\n -> Hash Join\n Hash Cond: (pt.a = t1.a)\n -> Append\n -> Seq Scan on pt_p1 pt_1\n -> Seq Scan on pt_p2 pt_2\n -> Seq Scan on pt_p3 pt_3\n -> Hash\n -> Seq Scan on t1\n -> Hash\n -> Seq Scan on t2\n\nWe can compute the matching subnodes of the Append when building Hash\ntable for 't1' according to the join condition 'pt.a = t1.a', and when\nbuilding Hash table for 't2' according to join condition 'pt.a = t2.a',\nand the final surviving subnodes would be their intersection.\n\nGreenplum [1] has implemented this kind of partition pruning as\n'Partition Selector'. Attached is a patch that refactores Greenplum's\nimplementation to make it work on PostgreSQL master. Here are some\ndetails about the patch.\n\nDuring planning:\n\n1. When creating a hash join plan in create_hashjoin_plan() we first\n collect information required to build PartitionPruneInfos at this\n join, which includes the join's RestrictInfos and the join's inner\n relids, and put this information in a stack.\n\n2. When we call create_append_plan() for an appendrel, for each of the\n joins we check if join partition pruning is possible to take place\n for this appendrel, based on the information collected at that join,\n and if so build a PartitionPruneInfo and add it to the stack entry.\n\n3. After finishing the outer side of the hash join, we should have built\n all the PartitionPruneInfos that can be used to perform join\n partition pruning at this join. So we pop out the stack entry to get\n the PartitionPruneInfos and add them to Hash node.\n\nDuring executing:\n\nWhen building the hash table for a hash join, we perform the partition\nprunning for each row according to each of the JoinPartitionPruneStates\nat this join, and store each result in a special executor parameter to\nmake it available to Append nodes. 
When executing an Append node, we\ncan directly use the pre-computed pruning results to skip subnodes that\ncannot contain any matching rows.\n\nHere is a query that shows the effect of the join partition prunning.\n\nCREATE TABLE pt (a int, b int, c varchar) PARTITION BY RANGE(a);\nCREATE TABLE pt_p1 PARTITION OF pt FOR VALUES FROM (0) TO (250);\nCREATE TABLE pt_p2 PARTITION OF pt FOR VALUES FROM (250) TO (500);\nCREATE TABLE pt_p3 PARTITION OF pt FOR VALUES FROM (500) TO (600);\nINSERT INTO pt SELECT i, i % 25, to_char(i, 'FM0000') FROM\ngenerate_series(0, 599) i WHERE i % 2 = 0;\n\nCREATE TABLE t1 (a int, b int);\nINSERT INTO t1 values (10, 10);\n\nCREATE TABLE t2 (a int, b int);\nINSERT INTO t2 values (300, 300);\n\nANALYZE pt, t1, t2;\n\nSET enable_nestloop TO off;\n\nexplain (analyze, costs off, summary off, timing off)\nselect * from pt join t1 on pt.a = t1.a right join t2 on pt.a = t2.a;\n QUERY PLAN\n------------------------------------------------------------\n Hash Right Join (actual rows=1 loops=1)\n Hash Cond: (pt.a = t2.a)\n -> Hash Join (actual rows=0 loops=1)\n Hash Cond: (pt.a = t1.a)\n -> Append (actual rows=0 loops=1)\n -> Seq Scan on pt_p1 pt_1 (never executed)\n -> Seq Scan on pt_p2 pt_2 (never executed)\n -> Seq Scan on pt_p3 pt_3 (never executed)\n -> Hash (actual rows=1 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 9kB\n -> Seq Scan on t1 (actual rows=1 loops=1)\n -> Hash (actual rows=1 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 9kB\n -> Seq Scan on t2 (actual rows=1 loops=1)\n(14 rows)\n\n\nThere are several points that need more consideration.\n\n1. All the join partition prunning decisions are made in createplan.c\n where the best path tree has been decided. This is not great. Maybe\n it's better to make it happen when we build up the path tree, so that\n we can take the partition prunning into consideration when estimating\n the costs.\n\n2. In order to make the join partition prunning take effect, the patch\n hacks the empty-outer optimization in ExecHashJoinImpl(). Not sure\n if this is a good practice.\n\n3. This patch does not support parallel hash join yet. But it's not\n hard to add the support.\n\n4. Is it possible and worthwhile to extend the join partition prunning\n mechanism to support nestloop and mergejoin also?\n\nAny thoughts or comments?\n\n[1] https://github.com/greenplum-db/gpdb\n\nThanks\nRichard",
"msg_date": "Mon, 21 Aug 2023 11:48:07 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Support run-time partition pruning for hash join"
},
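The proposal above builds, while the hash table is being filled, the set of Append subnodes that any hashed row could join to, so the Append can later skip the rest. Below is a minimal standalone C sketch of that union-of-matching-partitions idea; it is not the patch's actual code, and the range boundaries and key values simply mirror the pt/t1/t2 example in the message.

```c
#include <stdbool.h>
#include <stdio.h>

#define NPARTS 3

/* Toy stand-in for partition routing: pt_p1 is [0,250), pt_p2 is
 * [250,500), pt_p3 is [500,600), as in the example tables above. */
static int
part_for_key(int key)
{
    if (key < 250)
        return 0;
    if (key < 500)
        return 1;
    return 2;
}

int
main(void)
{
    int     hashed_keys[] = {10, 300};  /* join keys of the rows being hashed */
    bool    needed[NPARTS] = {false, false, false};

    /* While building the hash table, mark the partitions each row can match. */
    for (int i = 0; i < 2; i++)
        needed[part_for_key(hashed_keys[i])] = true;

    /* When the Append runs, subnodes whose flag is still false are skipped. */
    for (int p = 0; p < NPARTS; p++)
        printf("pt_p%d: %s\n", p + 1, needed[p] ? "scan" : "skip");

    return 0;
}
```

In the patch, as described above, the per-row results are accumulated into a Bitmapset stored in a special executor parameter rather than a bool array, but the flow is the same: prune while hashing, then consult the result when the Append starts up.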
{
"msg_contents": "On Mon, Aug 21, 2023 at 11:48 AM Richard Guo <[email protected]> wrote:\n\n> If we have a hash join with an Append node on the outer side, something\n> like\n>\n> Hash Join\n> Hash Cond: (pt.a = t.a)\n> -> Append\n> -> Seq Scan on pt_p1 pt_1\n> -> Seq Scan on pt_p2 pt_2\n> -> Seq Scan on pt_p3 pt_3\n> -> Hash\n> -> Seq Scan on t\n>\n> We can actually prune those subnodes of the Append that cannot possibly\n> contain any matching tuples from the other side of the join. To do\n> that, when building the Hash table, for each row from the inner side we\n> can compute the minimum set of subnodes that can possibly match the join\n> condition. When we have built the Hash table and start to execute the\n> Append node, we should have known which subnodes are survived and thus\n> can skip other subnodes.\n>\n\nThis feature looks good, but is it possible to know if we can prune\nany subnodes before we pay the extra effort (building the Hash\ntable, for each row... stuff)? IIUC, looks no. If so, I think this area\nneeds more attention. I can't provide any good suggestions yet.\n\nMaybe at least, if we have found no subnodes can be skipped\nduring the hashing, we can stop doing such work anymore.\n\nThere are several points that need more consideration.\n>\n> 1. All the join partition prunning decisions are made in createplan.c\n> where the best path tree has been decided. This is not great. Maybe\n> it's better to make it happen when we build up the path tree, so that\n> we can take the partition prunning into consideration when estimating\n> the costs.\n>\n\nfwiw, the current master totally ignores the cost reduction for run-time\npartition prune, even for init partition prune. So in some real cases,\npg chooses a hash join just because the cost of nest loop join is\nhighly over estimated.\n\n4. Is it possible and worthwhile to extend the join partition prunning\n> mechanism to support nestloop and mergejoin also?\n>\n\nIn my current knowledge, we have to build the inner table first for this\noptimization? so hash join and sort merge should be OK, but nestloop should\nbe impossible unless I missed something.\n\n-- \nBest Regards\nAndy Fan\n\nOn Mon, Aug 21, 2023 at 11:48 AM Richard Guo <[email protected]> wrote:If we have a hash join with an Append node on the outer side, somethinglike Hash Join Hash Cond: (pt.a = t.a) -> Append -> Seq Scan on pt_p1 pt_1 -> Seq Scan on pt_p2 pt_2 -> Seq Scan on pt_p3 pt_3 -> Hash -> Seq Scan on tWe can actually prune those subnodes of the Append that cannot possiblycontain any matching tuples from the other side of the join. To dothat, when building the Hash table, for each row from the inner side wecan compute the minimum set of subnodes that can possibly match the joincondition. When we have built the Hash table and start to execute theAppend node, we should have known which subnodes are survived and thuscan skip other subnodes. This feature looks good, but is it possible to know if we can pruneany subnodes before we pay the extra effort (building the Hash table, for each row... stuff)? IIUC, looks no. If so, I think this areaneeds more attention. I can't provide any good suggestions yet. Maybe at least, if we have found no subnodes can be skippedduring the hashing, we can stop doing such work anymore. There are several points that need more consideration.1. All the join partition prunning decisions are made in createplan.c where the best path tree has been decided. This is not great. 
Maybe it's better to make it happen when we build up the path tree, so that we can take the partition prunning into consideration when estimating the costs.fwiw, the current master totally ignores the cost reduction for run-time partition prune, even for init partition prune. So in some real cases, pg chooses a hash join just because the cost of nest loop join is highly over estimated. 4. Is it possible and worthwhile to extend the join partition prunning mechanism to support nestloop and mergejoin also?In my current knowledge, we have to build the inner table first for thisoptimization? so hash join and sort merge should be OK, but nestloop shouldbe impossible unless I missed something. -- Best RegardsAndy Fan",
"msg_date": "Mon, 21 Aug 2023 20:34:24 +0800",
"msg_from": "Andy Fan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support run-time partition pruning for hash join"
},
{
"msg_contents": "On Tue, 22 Aug 2023 at 00:34, Andy Fan <[email protected]> wrote:\n>\n> On Mon, Aug 21, 2023 at 11:48 AM Richard Guo <[email protected]> wrote:\n>> 1. All the join partition prunning decisions are made in createplan.c\n>> where the best path tree has been decided. This is not great. Maybe\n>> it's better to make it happen when we build up the path tree, so that\n>> we can take the partition prunning into consideration when estimating\n>> the costs.\n>\n>\n> fwiw, the current master totally ignores the cost reduction for run-time\n> partition prune, even for init partition prune. So in some real cases,\n> pg chooses a hash join just because the cost of nest loop join is\n> highly over estimated.\n\nThis is true about the existing code. It's a very tricky thing to cost\ngiven that the parameter values are always unknown to the planner.\nThe best we have for these today is the various hardcoded constants in\nselfuncs.h. While I do agree that it's not great that the costing code\nknows nothing about run-time pruning, I also think that run-time\npruning during execution with parameterised nested loops is much more\nlikely to be able to prune partitions and save actual work than the\nequivalent with Hash Joins. It's more common for the planner to\nchoose to Nested Loop when there are fewer outer rows, so the pruning\ncode is likely to be called fewer times with Nested Loop than with\nHash Join.\n\nWith Hash Join, it seems to me that the pruning must take place for\nevery row that makes it into the hash table. There will be maybe\ncases where the unioned set of partitions simply yields every\npartition and all the work results in no savings. Pruning on a scalar\nvalue seems much more likely to be able to prune away unneeded\nAppend/MergeAppend subnodes.\n\nPerhaps there can be something adaptive in Hash Join which stops\ntrying to prune when all partitions must be visited. On a quick\nglance of the patch, I don't see any code in ExecJoinPartitionPrune()\nwhich gives up trying to prune when the number of members in\npart_prune_result is equal to the prunable Append/MergeAppend\nsubnodes.\n\nIt would be good to see some performance comparisons of the worst case\nto see how much overhead the pruning code adds to Hash Join. It may\nwell be that we need to consider two Hash Join paths, one with and one\nwithout run-time pruning. It's pretty difficult to meaningfully cost,\nas I already mentioned, however.\n\n>> 4. Is it possible and worthwhile to extend the join partition prunning\n>> mechanism to support nestloop and mergejoin also?\n>\n>\n> In my current knowledge, we have to build the inner table first for this\n> optimization? so hash join and sort merge should be OK, but nestloop should\n> be impossible unless I missed something.\n\nBut run-time pruning already works for Nested Loops... I must be\nmissing something here.\n\nI imagine for Merge Joins a more generic approach would be better by\nimplementing parameterised Merge Joins (a.k.a zigzag merge joins).\nThe Append/MergeAppend node could then select the correct partition(s)\nbased on the current parameter value at rescan. I don't think any code\nchanges would be needed in node[Merge]Append.c for that to work. This\ncould also speed up Merge Joins to non-partitioned tables when an\nindex is providing presorted input to the join.\n\nDavid\n\n\n",
"msg_date": "Tue, 22 Aug 2023 18:38:09 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support run-time partition pruning for hash join"
},
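One concrete form of the adaptive give-up suggested above is sketched below as standalone C. In the executor the check David describes would be bms_num_members() on the accumulated prune result against the number of prunable Append subplans; here a plain counter and flag stand in for that, and the data is made up.

```c
#include <stdbool.h>
#include <stdio.h>

#define NPARTS 4

int
main(void)
{
    /* Partition index each hashed row maps to (toy data). */
    int     rows[] = {0, 2, 1, 3, 2, 0, 1};
    int     nrows = 7;
    bool    needed[NPARTS] = {false};
    int     n_needed = 0;
    bool    still_pruning = true;

    for (int i = 0; i < nrows; i++)
    {
        /* ... row i is inserted into the hash table here ... */
        if (still_pruning && !needed[rows[i]])
        {
            needed[rows[i]] = true;
            if (++n_needed == NPARTS)
                still_pruning = false;  /* all subnodes needed: give up */
        }
    }
    printf("any subnode prunable: %s\n", n_needed < NPARTS ? "yes" : "no");
    return 0;
}
```

Once the flag drops, the remaining rows are hashed without paying any per-row pruning overhead, which is exactly the worst-case cost this thread is worried about.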
{
"msg_contents": "On Mon, Aug 21, 2023 at 8:34 PM Andy Fan <[email protected]> wrote:\n\n> This feature looks good, but is it possible to know if we can prune\n> any subnodes before we pay the extra effort (building the Hash\n> table, for each row... stuff)?\n>\n\nIt might be possible if we take the partition prunning into\nconsideration when estimating costs. But it seems not easy to calculate\nthe costs accurately.\n\n\n> Maybe at least, if we have found no subnodes can be skipped\n> during the hashing, we can stop doing such work anymore.\n>\n\nYeah, this is what we can do.\n\n\n> In my current knowledge, we have to build the inner table first for this\n> optimization? so hash join and sort merge should be OK, but nestloop\n> should\n> be impossible unless I missed something.\n>\n\nFor nestloop and mergejoin, we'd always execute the outer side first.\nSo the Append/MergeAppend nodes need to be on the inner side for the\njoin partition prunning to take effect. For a mergejoin that will\nexplicitly sort the outer side, the sort node would process all the\nouter rows before scanning the inner side, so we can do the join\npartition prunning with that. For a nestloop, if we have a Material\nnode on the outer side, we can do that too, but I wonder if we'd have\nsuch a plan in real world, because we only add Material to the inner\nside of nestloop.\n\nThanks\nRichard\n\nOn Mon, Aug 21, 2023 at 8:34 PM Andy Fan <[email protected]> wrote:This feature looks good, but is it possible to know if we can pruneany subnodes before we pay the extra effort (building the Hash table, for each row... stuff)? It might be possible if we take the partition prunning intoconsideration when estimating costs. But it seems not easy to calculatethe costs accurately. Maybe at least, if we have found no subnodes can be skippedduring the hashing, we can stop doing such work anymore. Yeah, this is what we can do. In my current knowledge, we have to build the inner table first for thisoptimization? so hash join and sort merge should be OK, but nestloop shouldbe impossible unless I missed something. For nestloop and mergejoin, we'd always execute the outer side first.So the Append/MergeAppend nodes need to be on the inner side for thejoin partition prunning to take effect. For a mergejoin that willexplicitly sort the outer side, the sort node would process all theouter rows before scanning the inner side, so we can do the joinpartition prunning with that. For a nestloop, if we have a Materialnode on the outer side, we can do that too, but I wonder if we'd havesuch a plan in real world, because we only add Material to the innerside of nestloop.ThanksRichard",
"msg_date": "Tue, 22 Aug 2023 17:43:26 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Support run-time partition pruning for hash join"
},
{
"msg_contents": "On Tue, Aug 22, 2023 at 2:38 PM David Rowley <[email protected]> wrote:\n\n> With Hash Join, it seems to me that the pruning must take place for\n> every row that makes it into the hash table. There will be maybe\n> cases where the unioned set of partitions simply yields every\n> partition and all the work results in no savings. Pruning on a scalar\n> value seems much more likely to be able to prune away unneeded\n> Append/MergeAppend subnodes.\n\n\nYeah, you're right. If we have 'pt HashJoin t', for a subnode of 'pt'\nto be pruned, it needs every row of 't' to be able to prune that\nsubnode. The situation may improve if we have more than 2-way hash\njoins, because the final surviving subnodes would be the intersection of\nmatching subnodes in each Hash.\n\nWith parameterized nestloop I agree that it's more likely to be able to\nprune subnodes at rescan of Append/MergeAppend nodes based on scalar\nvalues.\n\nSometimes we may just not generate parameterized nestloop as final plan,\nsuch as when there are no indexes and no lateral references in the\nAppend/MergeAppend node. In this case I think it would be great if we\ncan still do some partition prunning. So I think this new 'join\npartition prunning mechanism' (maybe this is not a proper name) should\nbe treated as a supplement to, not a substitute for, the current\nrun-time partition prunning based on parameterized nestloop, and it is\nso implemented in the patch.\n\n\n> Perhaps there can be something adaptive in Hash Join which stops\n> trying to prune when all partitions must be visited. On a quick\n> glance of the patch, I don't see any code in ExecJoinPartitionPrune()\n> which gives up trying to prune when the number of members in\n> part_prune_result is equal to the prunable Append/MergeAppend\n> subnodes.\n\n\nYeah, we can do that.\n\n\n> But run-time pruning already works for Nested Loops... I must be\n> missing something here.\n\n\nHere I mean nestloop with non-parameterized inner path. As I explained\nupthread, we need to have a Material node on the outer side for that to\nwork, which seems not possible in real world.\n\nThanks\nRichard\n\nOn Tue, Aug 22, 2023 at 2:38 PM David Rowley <[email protected]> wrote:\nWith Hash Join, it seems to me that the pruning must take place for\nevery row that makes it into the hash table. There will be maybe\ncases where the unioned set of partitions simply yields every\npartition and all the work results in no savings. Pruning on a scalar\nvalue seems much more likely to be able to prune away unneeded\nAppend/MergeAppend subnodes.Yeah, you're right. If we have 'pt HashJoin t', for a subnode of 'pt'to be pruned, it needs every row of 't' to be able to prune thatsubnode. The situation may improve if we have more than 2-way hashjoins, because the final surviving subnodes would be the intersection ofmatching subnodes in each Hash.With parameterized nestloop I agree that it's more likely to be able toprune subnodes at rescan of Append/MergeAppend nodes based on scalarvalues.Sometimes we may just not generate parameterized nestloop as final plan,such as when there are no indexes and no lateral references in theAppend/MergeAppend node. In this case I think it would be great if wecan still do some partition prunning. So I think this new 'joinpartition prunning mechanism' (maybe this is not a proper name) shouldbe treated as a supplement to, not a substitute for, the currentrun-time partition prunning based on parameterized nestloop, and it isso implemented in the patch. 
\nPerhaps there can be something adaptive in Hash Join which stops\ntrying to prune when all partitions must be visited. On a quick\nglance of the patch, I don't see any code in ExecJoinPartitionPrune()\nwhich gives up trying to prune when the number of members in\npart_prune_result is equal to the prunable Append/MergeAppend\nsubnodes.Yeah, we can do that. \nBut run-time pruning already works for Nested Loops... I must be\nmissing something here.Here I mean nestloop with non-parameterized inner path. As I explainedupthread, we need to have a Material node on the outer side for that towork, which seems not possible in real world.ThanksRichard",
"msg_date": "Tue, 22 Aug 2023 17:51:38 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Support run-time partition pruning for hash join"
},
{
"msg_contents": "On Tue, Aug 22, 2023 at 5:43 PM Richard Guo <[email protected]> wrote:\n\n>\n> On Mon, Aug 21, 2023 at 8:34 PM Andy Fan <[email protected]> wrote:\n>\n>> This feature looks good, but is it possible to know if we can prune\n>> any subnodes before we pay the extra effort (building the Hash\n>> table, for each row... stuff)?\n>>\n>\n> It might be possible if we take the partition prunning into\n> consideration when estimating costs. But it seems not easy to calculate\n> the costs accurately.\n>\n\nThis is a real place I am worried about the future of this patch.\nPersonally, I do like this patch, but not sure what if this issue can't be\nfixed to make everyone happy, and fixing this perfectly looks hopeless\nfor me. However, let's see what will happen.\n\n\n>\n>\n>> Maybe at least, if we have found no subnodes can be skipped\n>> during the hashing, we can stop doing such work anymore.\n>>\n>\n> Yeah, this is what we can do.\n>\n\ncool.\n\n\n>\n>\n>> In my current knowledge, we have to build the inner table first for this\n>> optimization? so hash join and sort merge should be OK, but nestloop\n>> should\n>> be impossible unless I missed something.\n>>\n>\n> For nestloop and mergejoin, we'd always execute the outer side first.\n> So the Append/MergeAppend nodes need to be on the inner side for the\n> join partition prunning to take effect. For a mergejoin that will\n> explicitly sort the outer side, the sort node would process all the\n> outer rows before scanning the inner side, so we can do the join\n> partition prunning with that. For a nestloop, if we have a Material\n> node on the outer side, we can do that too, but I wonder if we'd have\n> such a plan in real world, because we only add Material to the inner\n> side of nestloop.\n>\n\nThis is more interesting than I expected,thanks for the explaination.\n\n-- \nBest Regards\nAndy Fan\n\nOn Tue, Aug 22, 2023 at 5:43 PM Richard Guo <[email protected]> wrote:On Mon, Aug 21, 2023 at 8:34 PM Andy Fan <[email protected]> wrote:This feature looks good, but is it possible to know if we can pruneany subnodes before we pay the extra effort (building the Hash table, for each row... stuff)? It might be possible if we take the partition prunning intoconsideration when estimating costs. But it seems not easy to calculatethe costs accurately.This is a real place I am worried about the future of this patch. Personally, I do like this patch, but not sure what if this issue can't befixed to make everyone happy, and fixing this perfectly looks hopelessfor me. However, let's see what will happen. Maybe at least, if we have found no subnodes can be skippedduring the hashing, we can stop doing such work anymore. Yeah, this is what we can do. cool. In my current knowledge, we have to build the inner table first for thisoptimization? so hash join and sort merge should be OK, but nestloop shouldbe impossible unless I missed something. For nestloop and mergejoin, we'd always execute the outer side first.So the Append/MergeAppend nodes need to be on the inner side for thejoin partition prunning to take effect. For a mergejoin that willexplicitly sort the outer side, the sort node would process all theouter rows before scanning the inner side, so we can do the joinpartition prunning with that. For a nestloop, if we have a Materialnode on the outer side, we can do that too, but I wonder if we'd havesuch a plan in real world, because we only add Material to the innerside of nestloop. 
This is more interesting than I expected,thanks for the explaination. -- Best RegardsAndy Fan",
"msg_date": "Wed, 23 Aug 2023 09:19:34 +0800",
"msg_from": "Andy Fan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support run-time partition pruning for hash join"
},
{
"msg_contents": ">\n> > fwiw, the current master totally ignores the cost reduction for run-time\n> > partition prune, even for init partition prune. So in some real cases,\n> > pg chooses a hash join just because the cost of nest loop join is\n> > highly over estimated.\n>\n> This is true about the existing code. It's a very tricky thing to cost\n> given that the parameter values are always unknown to the planner.\n> The best we have for these today is the various hardcoded constants in\n> selfuncs.h. While I do agree that it's not great that the costing code\n> knows nothing about run-time pruning, I also think that run-time\n> pruning during execution with parameterised nested loops is much more\n> likely to be able to prune partitions and save actual work than the\n> equivalent with Hash Joins. It's more common for the planner to\n> choose to Nested Loop when there are fewer outer rows, so the pruning\n> code is likely to be called fewer times with Nested Loop than with\n> Hash Join.\n>\n\nYes, I agree with this. In my 4 years of PostgresSQL, I just run into\n2 cases of this issue and 1 of them is joining 12+ tables with run-time\npartition prune for every join. But this situation causes more issues than\ngenerating a wrong plan, like for a simple SELECT * FROM p WHERE\npartkey = $1; generic plan will never win so we have to pay the expensive\nplanning cost for partitioned table.\n\nIf we don't require very accurate costing for every case, like we only\ncare about '=' operator which is the most common case, it should be\neasier than the case here since we just need to know if only 1 partition\nwill survive after pruning, but don't care about which one it is. I'd like\nto discuss in another thread, and leave this thread for Richard's patch\nonly.\n\n-- \nBest Regards\nAndy Fan\n\n\n> fwiw, the current master totally ignores the cost reduction for run-time\n> partition prune, even for init partition prune. So in some real cases,\n> pg chooses a hash join just because the cost of nest loop join is\n> highly over estimated.\n\nThis is true about the existing code. It's a very tricky thing to cost\ngiven that the parameter values are always unknown to the planner.\nThe best we have for these today is the various hardcoded constants in\nselfuncs.h. While I do agree that it's not great that the costing code\nknows nothing about run-time pruning, I also think that run-time\npruning during execution with parameterised nested loops is much more\nlikely to be able to prune partitions and save actual work than the\nequivalent with Hash Joins. It's more common for the planner to\nchoose to Nested Loop when there are fewer outer rows, so the pruning\ncode is likely to be called fewer times with Nested Loop than with\nHash Join.Yes, I agree with this. In my 4 years of PostgresSQL, I just run into2 cases of this issue and 1 of them is joining 12+ tables with run-timepartition prune for every join. But this situation causes more issues thangenerating a wrong plan, like for a simple SELECT * FROM p WHEREpartkey = $1; generic plan will never win so we have to pay the expensiveplanning cost for partitioned table. If we don't require very accurate costing for every case, like we onlycare about '=' operator which is the most common case, it should beeasier than the case here since we just need to know if only 1 partitionwill survive after pruning, but don't care about which one it is. I'd liketo discuss in another thread, and leave this thread for Richard's patchonly. -- Best RegardsAndy Fan",
"msg_date": "Wed, 23 Aug 2023 09:41:41 +0800",
"msg_from": "Andy Fan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support run-time partition pruning for hash join"
},
{
"msg_contents": "On Tue, Aug 22, 2023 at 2:38 PM David Rowley <[email protected]> wrote:\n\n> It would be good to see some performance comparisons of the worst case\n> to see how much overhead the pruning code adds to Hash Join. It may\n> well be that we need to consider two Hash Join paths, one with and one\n> without run-time pruning. It's pretty difficult to meaningfully cost,\n> as I already mentioned, however.\n\n\nI performed some performance comparisons of the worst case with two\ntables as below:\n\n1. The partitioned table has 1000 children, and 100,000 tuples in total.\n\n2. The other table is designed that\n a) its tuples occupy every partition of the partitioned table so\n that no partitions can be pruned during execution,\n b) tuples belong to the same partition are placed together so that\n we need to scan all its tuples before we could know that no\n pruning would happen and we could stop trying to prune,\n c) the tuples are unique on the hash key so as to minimize the cost\n of hash probe, so that we can highlight the impact of the pruning\n codes.\n\nHere is the execution time (ms) I get with different sizes of the other\ntable.\n\ntuples unpatched patched\n10000 45.74 53.46 (+0.17)\n20000 54.48 70.18 (+0.29)\n30000 62.57 85.18 (+0.36)\n40000 69.14 99.19 (+0.43)\n50000 76.46 111.09 (+0.45)\n60000 82.68 126.37 (+0.53)\n70000 92.69 137.89 (+0.49)\n80000 94.49 151.46 (+0.60)\n90000 101.53 164.93 (+0.62)\n100000 107.22 178.44 (+0.66)\n\nSo the overhead the pruning code adds to Hash Join is too large to be\naccepted :(. I think we need to solve this problem first before we can\nmake this new partition pruning mechanism some useful in practice, but\nhow? Some thoughts currently in my mind include\n\n1) we try our best to estimate the cost of this partition pruning when\ncreating hash join paths, and decide based on the cost whether to use it\nor not. But this does not seem to be an easy task.\n\n2) we use some heuristics when executing hash join, such as when we\nnotice that a $threshold percentage of the partitions must be visited\nwe just abort the pruning and assume that no partitions can be pruned.\n\nAny thoughts or comments?\n\nThanks\nRichard\n\nOn Tue, Aug 22, 2023 at 2:38 PM David Rowley <[email protected]> wrote:\nIt would be good to see some performance comparisons of the worst case\nto see how much overhead the pruning code adds to Hash Join. It may\nwell be that we need to consider two Hash Join paths, one with and one\nwithout run-time pruning. It's pretty difficult to meaningfully cost,\nas I already mentioned, however.I performed some performance comparisons of the worst case with twotables as below:1. The partitioned table has 1000 children, and 100,000 tuples in total.2. 
The other table is designed that a) its tuples occupy every partition of the partitioned table so that no partitions can be pruned during execution, b) tuples belong to the same partition are placed together so that we need to scan all its tuples before we could know that no pruning would happen and we could stop trying to prune, c) the tuples are unique on the hash key so as to minimize the cost of hash probe, so that we can highlight the impact of the pruning codes.Here is the execution time (ms) I get with different sizes of the othertable.tuples unpatched patched10000 45.74 53.46 (+0.17)20000 54.48 70.18 (+0.29)30000 62.57 85.18 (+0.36)40000 69.14 99.19 (+0.43)50000 76.46 111.09 (+0.45)60000 82.68 126.37 (+0.53)70000 92.69 137.89 (+0.49)80000 94.49 151.46 (+0.60)90000 101.53 164.93 (+0.62)100000 107.22 178.44 (+0.66)So the overhead the pruning code adds to Hash Join is too large to beaccepted :(. I think we need to solve this problem first before we canmake this new partition pruning mechanism some useful in practice, buthow? Some thoughts currently in my mind include1) we try our best to estimate the cost of this partition pruning whencreating hash join paths, and decide based on the cost whether to use itor not. But this does not seem to be an easy task.2) we use some heuristics when executing hash join, such as when wenotice that a $threshold percentage of the partitions must be visitedwe just abort the pruning and assume that no partitions can be pruned.Any thoughts or comments?ThanksRichard",
"msg_date": "Thu, 24 Aug 2023 17:27:21 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Support run-time partition pruning for hash join"
},
{
"msg_contents": "On Thu, 24 Aug 2023 at 21:27, Richard Guo <[email protected]> wrote:\n> I performed some performance comparisons of the worst case with two\n> tables as below:\n>\n> 1. The partitioned table has 1000 children, and 100,000 tuples in total.\n>\n> 2. The other table is designed that\n> a) its tuples occupy every partition of the partitioned table so\n> that no partitions can be pruned during execution,\n> b) tuples belong to the same partition are placed together so that\n> we need to scan all its tuples before we could know that no\n> pruning would happen and we could stop trying to prune,\n> c) the tuples are unique on the hash key so as to minimize the cost\n> of hash probe, so that we can highlight the impact of the pruning\n> codes.\n>\n> Here is the execution time (ms) I get with different sizes of the other\n> table.\n>\n> tuples unpatched patched\n> 10000 45.74 53.46 (+0.17)\n> 20000 54.48 70.18 (+0.29)\n> 30000 62.57 85.18 (+0.36)\n> 40000 69.14 99.19 (+0.43)\n> 50000 76.46 111.09 (+0.45)\n> 60000 82.68 126.37 (+0.53)\n> 70000 92.69 137.89 (+0.49)\n> 80000 94.49 151.46 (+0.60)\n> 90000 101.53 164.93 (+0.62)\n> 100000 107.22 178.44 (+0.66)\n>\n> So the overhead the pruning code adds to Hash Join is too large to be\n> accepted :(.\n\nAgreed. Run-time pruning is pretty fast to execute, but so is\ninserting a row into a hash table.\n\n> I think we need to solve this problem first before we can\n> make this new partition pruning mechanism some useful in practice, but\n> how? Some thoughts currently in my mind include\n>\n> 1) we try our best to estimate the cost of this partition pruning when\n> creating hash join paths, and decide based on the cost whether to use it\n> or not. But this does not seem to be an easy task.\n\nI think we need to consider another Hash Join path when we detect that\nthe outer side of the Hash Join involves scanning a partitioned table.\n\nI'd suggest writing some cost which costs an execution of run-time\npruning. With LIST and RANGE you probably want something like\ncpu_operator_cost * LOG2(nparts) once for each hashed tuple to account\nfor the binary search over the sorted datum array. For HASH\npartitions, something like cpu_operator_cost * npartcols once for each\nhashed tuple.\n\nYou'll need to then come up with some counter costs to subtract from\nthe Append/MergeAppend. This is tricky, as discussed. Just come up\nwith something crude for now.\n\nTo start with, it could just be as crude as:\n\ntotal_costs *= (Min(expected_outer_rows, n_append_subnodes) /\nn_append_subnodes);\n\ni.e assume that every outer joined row will require exactly 1 new\npartition up to the total number of partitions. That's pretty much\nworst-case, but it'll at least allow the optimisation to work for\ncases like where the hash table is expected to contain just a tiny\nnumber of rows (fewer than the number of partitions)\n\nTo make it better, you might want to look at join selectivity\nestimation and see if you can find something there to influence\nsomething better.\n\n> 2) we use some heuristics when executing hash join, such as when we\n> notice that a $threshold percentage of the partitions must be visited\n> we just abort the pruning and assume that no partitions can be pruned.\n\nYou could likely code in something that checks\nbms_num_members(jpstate->part_prune_result) to see if it still remains\nbelow the total Append/MergeAppend subplans whenever, say whenever the\nlower 8 bits of hashtable->totalTuples are all off. 
You can just give\nup doing any further pruning when all partitions are already required.\n\nDavid\n\n\n",
"msg_date": "Fri, 25 Aug 2023 15:03:13 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support run-time partition pruning for hash join"
},
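Read literally, the crude costing suggested above could look like the arithmetic below. This is only a sketch under the stated worst-case assumption: "expected_outer_rows" is taken to mean the rows expected to end up in the hash table, 0.0025 stands in for the default cpu_operator_cost, and the Append cost is made up.

```c
#include <math.h>
#include <stdio.h>

int
main(void)
{
    double  cpu_operator_cost = 0.0025; /* default value of the GUC */
    double  hashed_rows = 10.0;         /* rows expected in the hash table */
    double  nparts = 1000.0;            /* prunable Append subnodes */
    double  append_total_cost = 5000.0; /* made-up cost of the full Append */

    /* LIST/RANGE: one binary search over the bound array per hashed row. */
    double  prune_cost = cpu_operator_cost * log2(nparts) * hashed_rows;

    /* Worst case: every hashed row keeps one more partition, up to all. */
    double  kept_fraction = fmin(hashed_rows, nparts) / nparts;
    double  discounted_append_cost = append_total_cost * kept_fraction;

    printf("pruning adds %.2f, Append drops from %.2f to %.2f\n",
           prune_cost, append_total_cost, discounted_append_cost);
    return 0;
}
```

With 10 hashed rows against 1000 partitions the pruning overhead is tiny and almost the whole Append cost is discounted; with 1000 hashed rows the discount vanishes, which matches the intent that the optimisation only pays off when the hash table is small relative to the partition count.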
{
"msg_contents": "On Fri, Aug 25, 2023 at 11:03 AM David Rowley <[email protected]> wrote:\n\n> On Thu, 24 Aug 2023 at 21:27, Richard Guo <[email protected]> wrote:\n> > I think we need to solve this problem first before we can\n> > make this new partition pruning mechanism some useful in practice, but\n> > how? Some thoughts currently in my mind include\n> >\n> > 1) we try our best to estimate the cost of this partition pruning when\n> > creating hash join paths, and decide based on the cost whether to use it\n> > or not. But this does not seem to be an easy task.\n>\n> I think we need to consider another Hash Join path when we detect that\n> the outer side of the Hash Join involves scanning a partitioned table.\n>\n> I'd suggest writing some cost which costs an execution of run-time\n> pruning. With LIST and RANGE you probably want something like\n> cpu_operator_cost * LOG2(nparts) once for each hashed tuple to account\n> for the binary search over the sorted datum array. For HASH\n> partitions, something like cpu_operator_cost * npartcols once for each\n> hashed tuple.\n>\n> You'll need to then come up with some counter costs to subtract from\n> the Append/MergeAppend. This is tricky, as discussed. Just come up\n> with something crude for now.\n>\n> To start with, it could just be as crude as:\n>\n> total_costs *= (Min(expected_outer_rows, n_append_subnodes) /\n> n_append_subnodes);\n>\n> i.e assume that every outer joined row will require exactly 1 new\n> partition up to the total number of partitions. That's pretty much\n> worst-case, but it'll at least allow the optimisation to work for\n> cases like where the hash table is expected to contain just a tiny\n> number of rows (fewer than the number of partitions)\n>\n> To make it better, you might want to look at join selectivity\n> estimation and see if you can find something there to influence\n> something better.\n\n\nThank you for the suggestion. I will take some time considering it.\n\nWhen we have multiple join levels, it seems the situation becomes even\nmore complex. One Append/MergeAppend node might be pruned by more than\none Hash node, and one Hash node might provide pruning for more than one\nAppend/MergeAppend node. 
For instance, below is the plan from the test\ncase added in the v1 patch:\n\nexplain (analyze, costs off, summary off, timing off)\nselect * from tprt p1\n inner join tprt p2 on p1.col1 = p2.col1\n right join tbl1 t on p1.col1 = t.col1 and p2.col1 = t.col1;\n QUERY PLAN\n-------------------------------------------------------------------------\n Hash Right Join (actual rows=2 loops=1)\n Hash Cond: ((p1.col1 = t.col1) AND (p2.col1 = t.col1))\n -> Hash Join (actual rows=3 loops=1)\n Hash Cond: (p1.col1 = p2.col1)\n -> Append (actual rows=3 loops=1)\n -> Seq Scan on tprt_1 p1_1 (never executed)\n -> Seq Scan on tprt_2 p1_2 (actual rows=3 loops=1)\n -> Seq Scan on tprt_3 p1_3 (never executed)\n -> Seq Scan on tprt_4 p1_4 (never executed)\n -> Seq Scan on tprt_5 p1_5 (never executed)\n -> Seq Scan on tprt_6 p1_6 (never executed)\n -> Hash (actual rows=3 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 9kB\n -> Append (actual rows=3 loops=1)\n -> Seq Scan on tprt_1 p2_1 (never executed)\n -> Seq Scan on tprt_2 p2_2 (actual rows=3 loops=1)\n -> Seq Scan on tprt_3 p2_3 (never executed)\n -> Seq Scan on tprt_4 p2_4 (never executed)\n -> Seq Scan on tprt_5 p2_5 (never executed)\n -> Seq Scan on tprt_6 p2_6 (never executed)\n -> Hash (actual rows=2 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 9kB\n -> Seq Scan on tbl1 t (actual rows=2 loops=1)\n(23 rows)\n\nIn this plan, the Append node of 'p1' is pruned by two Hash nodes: Hash\nnode of 't' and Hash node of 'p2'. Meanwhile, the Hash node of 't'\nprovides pruning for two Append nodes: Append node of 'p1' and Append\nnode of 'p2'.\n\nIn this case, meaningfully costing for the partition pruning seems even\nmore difficult. Do you have any suggestion on that?\n\n\n> > 2) we use some heuristics when executing hash join, such as when we\n> > notice that a $threshold percentage of the partitions must be visited\n> > we just abort the pruning and assume that no partitions can be pruned.\n>\n> You could likely code in something that checks\n> bms_num_members(jpstate->part_prune_result) to see if it still remains\n> below the total Append/MergeAppend subplans whenever, say whenever the\n> lower 8 bits of hashtable->totalTuples are all off. You can just give\n> up doing any further pruning when all partitions are already required.\n\n\nYeah, we can do that. While this may not help in the tests I performed\nfor the worst case because the table in the hash side is designed that\ntuples belong to the same partition are placed together so that we need\nto scan almost all its tuples before we could know that all partitions\nare already required, I think this might help a lot in real world.\n\nThanks\nRichard\n\nOn Fri, Aug 25, 2023 at 11:03 AM David Rowley <[email protected]> wrote:On Thu, 24 Aug 2023 at 21:27, Richard Guo <[email protected]> wrote:\n> I think we need to solve this problem first before we can\n> make this new partition pruning mechanism some useful in practice, but\n> how? Some thoughts currently in my mind include\n>\n> 1) we try our best to estimate the cost of this partition pruning when\n> creating hash join paths, and decide based on the cost whether to use it\n> or not. But this does not seem to be an easy task.\n\nI think we need to consider another Hash Join path when we detect that\nthe outer side of the Hash Join involves scanning a partitioned table.\n\nI'd suggest writing some cost which costs an execution of run-time\npruning. 
With LIST and RANGE you probably want something like\ncpu_operator_cost * LOG2(nparts) once for each hashed tuple to account\nfor the binary search over the sorted datum array. For HASH\npartitions, something like cpu_operator_cost * npartcols once for each\nhashed tuple.\n\nYou'll need to then come up with some counter costs to subtract from\nthe Append/MergeAppend. This is tricky, as discussed. Just come up\nwith something crude for now.\n\nTo start with, it could just be as crude as:\n\ntotal_costs *= (Min(expected_outer_rows, n_append_subnodes) /\nn_append_subnodes);\n\ni.e assume that every outer joined row will require exactly 1 new\npartition up to the total number of partitions. That's pretty much\nworst-case, but it'll at least allow the optimisation to work for\ncases like where the hash table is expected to contain just a tiny\nnumber of rows (fewer than the number of partitions)\n\nTo make it better, you might want to look at join selectivity\nestimation and see if you can find something there to influence\nsomething better.Thank you for the suggestion. I will take some time considering it.When we have multiple join levels, it seems the situation becomes evenmore complex. One Append/MergeAppend node might be pruned by more thanone Hash node, and one Hash node might provide pruning for more than oneAppend/MergeAppend node. For instance, below is the plan from the testcase added in the v1 patch:explain (analyze, costs off, summary off, timing off)select * from tprt p1 inner join tprt p2 on p1.col1 = p2.col1 right join tbl1 t on p1.col1 = t.col1 and p2.col1 = t.col1; QUERY PLAN------------------------------------------------------------------------- Hash Right Join (actual rows=2 loops=1) Hash Cond: ((p1.col1 = t.col1) AND (p2.col1 = t.col1)) -> Hash Join (actual rows=3 loops=1) Hash Cond: (p1.col1 = p2.col1) -> Append (actual rows=3 loops=1) -> Seq Scan on tprt_1 p1_1 (never executed) -> Seq Scan on tprt_2 p1_2 (actual rows=3 loops=1) -> Seq Scan on tprt_3 p1_3 (never executed) -> Seq Scan on tprt_4 p1_4 (never executed) -> Seq Scan on tprt_5 p1_5 (never executed) -> Seq Scan on tprt_6 p1_6 (never executed) -> Hash (actual rows=3 loops=1) Buckets: 1024 Batches: 1 Memory Usage: 9kB -> Append (actual rows=3 loops=1) -> Seq Scan on tprt_1 p2_1 (never executed) -> Seq Scan on tprt_2 p2_2 (actual rows=3 loops=1) -> Seq Scan on tprt_3 p2_3 (never executed) -> Seq Scan on tprt_4 p2_4 (never executed) -> Seq Scan on tprt_5 p2_5 (never executed) -> Seq Scan on tprt_6 p2_6 (never executed) -> Hash (actual rows=2 loops=1) Buckets: 1024 Batches: 1 Memory Usage: 9kB -> Seq Scan on tbl1 t (actual rows=2 loops=1)(23 rows)In this plan, the Append node of 'p1' is pruned by two Hash nodes: Hashnode of 't' and Hash node of 'p2'. Meanwhile, the Hash node of 't'provides pruning for two Append nodes: Append node of 'p1' and Appendnode of 'p2'.In this case, meaningfully costing for the partition pruning seems evenmore difficult. Do you have any suggestion on that? \n> 2) we use some heuristics when executing hash join, such as when we\n> notice that a $threshold percentage of the partitions must be visited\n> we just abort the pruning and assume that no partitions can be pruned.\n\nYou could likely code in something that checks\nbms_num_members(jpstate->part_prune_result) to see if it still remains\nbelow the total Append/MergeAppend subplans whenever, say whenever the\nlower 8 bits of hashtable->totalTuples are all off. 
You can just give\nup doing any further pruning when all partitions are already required.Yeah, we can do that. While this may not help in the tests I performedfor the worst case because the table in the hash side is designed thattuples belong to the same partition are placed together so that we needto scan almost all its tuples before we could know that all partitionsare already required, I think this might help a lot in real world.ThanksRichard",
"msg_date": "Fri, 25 Aug 2023 16:48:35 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Support run-time partition pruning for hash join"
},
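For the multi-level plan shown above, where the same Append is pruned by two different Hash nodes, the surviving subnodes are the intersection of the per-hash results (in the real executor this would presumably be done with Bitmapsets, e.g. bms_int_members(), rather than the plain bitmasks used here). A toy standalone version:

```c
#include <stdio.h>

int
main(void)
{
    /* Toy data: bit p set means subnode tprt_{p+1} may contain matches. */
    unsigned    surviving_from_t = 0x02;    /* pruning by Hash(t):  tprt_2 */
    unsigned    surviving_from_p2 = 0x06;   /* pruning by Hash(p2): tprt_2, tprt_3 */
    unsigned    final_set = surviving_from_t & surviving_from_p2;

    for (int p = 0; p < 6; p++)
        printf("tprt_%d: %s\n", p + 1,
               (final_set & (1u << p)) ? "scan" : "skip");
    return 0;
}
```

Only tprt_2 survives both prunings, matching the "actual rows=3 / never executed" pattern in the EXPLAIN output above.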
{
"msg_contents": "On Fri, Aug 25, 2023 at 11:03 AM David Rowley <[email protected]> wrote:\n\n> I'd suggest writing some cost which costs an execution of run-time\n> pruning. With LIST and RANGE you probably want something like\n> cpu_operator_cost * LOG2(nparts) once for each hashed tuple to account\n> for the binary search over the sorted datum array. For HASH\n> partitions, something like cpu_operator_cost * npartcols once for each\n> hashed tuple.\n>\n> You'll need to then come up with some counter costs to subtract from\n> the Append/MergeAppend. This is tricky, as discussed. Just come up\n> with something crude for now.\n>\n> To start with, it could just be as crude as:\n>\n> total_costs *= (Min(expected_outer_rows, n_append_subnodes) /\n> n_append_subnodes);\n>\n> i.e assume that every outer joined row will require exactly 1 new\n> partition up to the total number of partitions. That's pretty much\n> worst-case, but it'll at least allow the optimisation to work for\n> cases like where the hash table is expected to contain just a tiny\n> number of rows (fewer than the number of partitions)\n>\n> To make it better, you might want to look at join selectivity\n> estimation and see if you can find something there to influence\n> something better.\n\n\nI have a go at writing some costing codes according to your suggestion.\nThat's compute_partprune_cost() in the v2 patch.\n\nFor the hash side, this function computes the pruning cost as\ncpu_operator_cost * LOG2(nparts) * inner_rows for LIST and RANGE, and\ncpu_operator_cost * nparts * inner_rows for HASH.\n\nFor the Append/MergeAppend side, this function first estimates the size\nof outer side that matches, using the same idea as we estimate the\njoinrel size for JOIN_SEMI. Then it assumes that each outer joined row\noccupies one new partition (the worst case) and computes how much cost\ncan be saved from partition pruning.\n\nIf the cost saved from the Append/MergeAppend side is larger than the\npruning cost from the Hash side, then we say that partition pruning is a\nwin.\n\nNote that this costing logic runs for each Append-Hash pair, so it copes\nwith the case where we have multiple join levels.\n\nWith this costing logic added, I performed the same performance\ncomparisons of the worst case as in [1], and here is what I got.\n\ntuples unpatched patched\n10000 44.66 44.37 -0.006493506\n20000 52.41 52.29 -0.002289639\n30000 61.11 61.12 +0.000163639\n40000 67.87 68.24 +0.005451599\n50000 74.51 74.75 +0.003221044\n60000 82.3 81.55 -0.009113001\n70000 87.16 86.98 -0.002065168\n80000 93.49 93.89 +0.004278532\n90000 101.52 100.83 -0.00679669\n100000 108.34 108.56 +0.002030644\n\nSo the costing logic successfully avoids performing the partition\npruning in the worst case.\n\nI also tested the cases where partition pruning is possible with\ndifferent sizes of the hash side.\n\ntuples unpatched patched\n100 36.86 2.4 -0.934888768\n200 35.87 2.37 -0.933928074\n300 35.95 2.55 -0.92906815\n400 36.4 2.63 -0.927747253\n500 36.39 2.85 -0.921681781\n600 36.32 2.97 -0.918226872\n700 36.6 3.23 -0.911748634\n800 36.88 3.44 -0.906724512\n900 37.02 3.46 -0.906537007\n1000 37.25 37.21 -0.001073826\n\nThe first 9 rows show that the costing logic allows the partition\npruning to be performed and the pruning turns out to be a big win. 
The\nlast row shows that the partition pruning is disallowed by the costing\nlogic because it thinks no partition can be pruned (we have 1000\npartitions in total).\n\nSo it seems that the new costing logic is quite crude and tends to be\nvery conservative, but it can help avoid the large overhead in the worst\ncases. I think this might be a good start to push this patch forward.\n\nAny thoughts or comments?\n\n[1]\nhttps://www.postgresql.org/message-id/CAMbWs49%2Bp6hBxXJHFiSwOtPCSkAHwhJj3hTpCR_pmMiUUVLZ1Q%40mail.gmail.com\n\nThanks\nRichard",
"msg_date": "Tue, 29 Aug 2023 18:41:28 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Support run-time partition pruning for hash join"
},
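The decision rule described for compute_partprune_cost() above boils down to comparing the estimated pruning overhead on the Hash side with the Append cost it can save. Below is a hedged standalone sketch of that comparison; the function name, parameters, and the worst-case "one partition kept per matching row" assumption are illustrative, not the patch's actual code.

```c
#include <stdbool.h>
#include <stdio.h>

/*
 * Sketch of the win/lose test: attempt join partition pruning only when
 * the Append cost expected to be saved exceeds the per-row pruning cost
 * paid while building the hash table.
 */
static bool
join_partprune_is_a_win(double prune_cost_per_hashed_row,
                        double hashed_rows,
                        double append_total_cost,
                        double est_matching_rows,   /* JOIN_SEMI-style estimate */
                        double nparts)
{
    double  prune_cost = prune_cost_per_hashed_row * hashed_rows;
    double  kept = est_matching_rows < nparts ? est_matching_rows : nparts;
    double  saved = append_total_cost * (1.0 - kept / nparts);

    return saved > prune_cost;
}

int
main(void)
{
    /* Every partition expected to survive: nothing to save, so skip it. */
    printf("%s\n", join_partprune_is_a_win(0.025, 1000, 5000, 1000, 1000)
           ? "prune" : "don't prune");
    /* Only ~100 of 1000 partitions expected to survive: clear win. */
    printf("%s\n", join_partprune_is_a_win(0.025, 100, 5000, 100, 1000)
           ? "prune" : "don't prune");
    return 0;
}
```

This is deliberately conservative, which is consistent with the benchmark tables above: the worst case is avoided, while the clearly winning cases still get pruned.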
{
"msg_contents": "On Tue, Aug 29, 2023 at 6:41 PM Richard Guo <[email protected]> wrote:\n\n> So it seems that the new costing logic is quite crude and tends to be\n> very conservative, but it can help avoid the large overhead in the worst\n> cases. I think this might be a good start to push this patch forward.\n>\n> Any thoughts or comments?\n>\n\nI rebased this patch over the latest master. Nothing changed except\nthat I revised the new added test case to make it more stable.\n\nHowever, the cfbot indicates that there are test cases that fail on\nFreeBSD [1] (no failure on other platforms). So I set up a FreeBSD-13\nlocally but just cannot reproduce the failure. I must be doing\nsomething wrong. Can anyone give me some hints or suggestions?\n\nFYI. The failure looks like:\n\n explain (costs off)\n select p2.a, p1.c from permtest_parent p1 inner join permtest_parent p2\n on p1.a = p2.a and left(p1.c, 3) ~ 'a1$';\n- QUERY PLAN\n-----------------------------------------------------\n- Hash Join\n- Hash Cond: (p2.a = p1.a)\n- -> Seq Scan on permtest_grandchild p2\n- -> Hash\n- -> Seq Scan on permtest_grandchild p1\n- Filter: (\"left\"(c, 3) ~ 'a1$'::text)\n-(6 rows)\n-\n+ERROR: unrecognized node type: 1130127496\n\n[1]\nhttps://api.cirrus-ci.com/v1/artifact/task/5334808075698176/testrun/build/testrun/regress/regress/regression.diffs\n\nThanks\nRichard",
"msg_date": "Thu, 2 Nov 2023 19:19:34 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Support run-time partition pruning for hash join"
},
{
"msg_contents": "Hello Richard,\n\n02.11.2023 14:19, Richard Guo wrote:\n>\n> However, the cfbot indicates that there are test cases that fail on\n> FreeBSD [1] (no failure on other platforms). So I set up a FreeBSD-13\n> locally but just cannot reproduce the failure. I must be doing\n> something wrong. Can anyone give me some hints or suggestions?\n>\n> FYI. The failure looks like:\n>\n> explain (costs off)\n> select p2.a, p1.c from permtest_parent p1 inner join permtest_parent p2\n> on p1.a = p2.a and left(p1.c, 3) ~ 'a1$';\n> - QUERY PLAN\n> -----------------------------------------------------\n> - Hash Join\n> - Hash Cond: (p2.a = p1.a)\n> - -> Seq Scan on permtest_grandchild p2\n> - -> Hash\n> - -> Seq Scan on permtest_grandchild p1\n> - Filter: (\"left\"(c, 3) ~ 'a1$'::text)\n> -(6 rows)\n> -\n> +ERROR: unrecognized node type: 1130127496\n\nI've managed to reproduce that failure on my Ubuntu with:\nCPPFLAGS=\"-Og -DWRITE_READ_PARSE_PLAN_TREES -DCOPY_PARSE_PLAN_TREES\" ./configure ... make check\n...\n SELECT t1, t2 FROM prt1 t1 LEFT JOIN prt2 t2 ON t1.a = t2.b WHERE t1.b = 0 ORDER BY t1.a, t2.b;\n- QUERY PLAN\n---------------------------------------------------\n- Sort\n- Sort Key: t1.a, t2.b\n- -> Hash Right Join\n- Hash Cond: (t2.b = t1.a)\n- -> Append\n- -> Seq Scan on prt2_p1 t2_1\n- -> Seq Scan on prt2_p2 t2_2\n- -> Seq Scan on prt2_p3 t2_3\n- -> Hash\n- -> Append\n- -> Seq Scan on prt1_p1 t1_1\n- Filter: (b = 0)\n- -> Seq Scan on prt1_p2 t1_2\n- Filter: (b = 0)\n- -> Seq Scan on prt1_p3 t1_3\n- Filter: (b = 0)\n-(16 rows)\n-\n+ERROR: unrecognized node type: -1465804424\n...\n\nAs far as I can see from https://cirrus-ci.com/task/6642692659085312,\nthe FreeBSD host has the following CPPFLAGS specified:\n-DRELCACHE_FORCE_RELEASE\n-DCOPY_PARSE_PLAN_TREES\n-DWRITE_READ_PARSE_PLAN_TREES\n-DRAW_EXPRESSION_COVERAGE_TEST\n-DENFORCE_REGRESSION_TEST_NAME_RESTRICTIONS\n\nBest regards,\nAlexander\n\n\n\n\n\nHello Richard,\n\n 02.11.2023 14:19, Richard Guo wrote:\n\n\n\n\n\n\nHowever, the cfbot indicates that there\n are test cases that fail on\nFreeBSD [1] (no failure on other platforms). So I set up\n a FreeBSD-13\n locally but just cannot reproduce the failure. I must be\n doing\n something wrong. Can anyone give me some hints or\n suggestions?\n\n FYI. The failure looks like:\n\n explain (costs off)\n select p2.a, p1.c from permtest_parent p1 inner join\n permtest_parent p2\n on p1.a = p2.a and left(p1.c, 3) ~ 'a1$';\n - QUERY PLAN\n -----------------------------------------------------\n - Hash Join\n - Hash Cond: (p2.a = p1.a)\n - -> Seq Scan on permtest_grandchild p2\n - -> Hash\n - -> Seq Scan on permtest_grandchild p1\n - Filter: (\"left\"(c, 3) ~ 'a1$'::text)\n -(6 rows)\n -\n +ERROR: unrecognized node type: 1130127496\n\n\n\n\n\n I've managed to reproduce that failure on my Ubuntu with:\n CPPFLAGS=\"-Og -DWRITE_READ_PARSE_PLAN_TREES -DCOPY_PARSE_PLAN_TREES\"\n ./configure ... 
make check\n ...\n SELECT t1, t2 FROM prt1 t1 LEFT JOIN prt2 t2 ON t1.a = t2.b WHERE\n t1.b = 0 ORDER BY t1.a, t2.b;\n - QUERY PLAN \n ---------------------------------------------------\n - Sort\n - Sort Key: t1.a, t2.b\n - -> Hash Right Join\n - Hash Cond: (t2.b = t1.a)\n - -> Append\n - -> Seq Scan on prt2_p1 t2_1\n - -> Seq Scan on prt2_p2 t2_2\n - -> Seq Scan on prt2_p3 t2_3\n - -> Hash\n - -> Append\n - -> Seq Scan on prt1_p1 t1_1\n - Filter: (b = 0)\n - -> Seq Scan on prt1_p2 t1_2\n - Filter: (b = 0)\n - -> Seq Scan on prt1_p3 t1_3\n - Filter: (b = 0)\n -(16 rows)\n -\n +ERROR: unrecognized node type: -1465804424\n ...\n\n As far as I can see from\n https://cirrus-ci.com/task/6642692659085312,\n the FreeBSD host has the following CPPFLAGS specified:\n -DRELCACHE_FORCE_RELEASE\n -DCOPY_PARSE_PLAN_TREES\n -DWRITE_READ_PARSE_PLAN_TREES\n -DRAW_EXPRESSION_COVERAGE_TEST\n -DENFORCE_REGRESSION_TEST_NAME_RESTRICTIONS\n\n Best regards,\n Alexander",
"msg_date": "Sat, 4 Nov 2023 13:00:00 +0300",
"msg_from": "Alexander Lakhin <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support run-time partition pruning for hash join"
},
{
"msg_contents": "On Sat, Nov 4, 2023 at 6:00 PM Alexander Lakhin <[email protected]> wrote:\n\n> 02.11.2023 14:19, Richard Guo wrote:\n>\n> However, the cfbot indicates that there are test cases that fail on\n> FreeBSD [1] (no failure on other platforms). So I set up a FreeBSD-13\n> locally but just cannot reproduce the failure. I must be doing\n> something wrong. Can anyone give me some hints or suggestions?\n>\n> I've managed to reproduce that failure on my Ubuntu with:\n> CPPFLAGS=\"-Og -DWRITE_READ_PARSE_PLAN_TREES -DCOPY_PARSE_PLAN_TREES\"\n> ./configure ... make check\n>\n\nWow, thank you so much. You saved me a lot of time. It turns out that\nit was caused by me not making JoinPartitionPruneInfo a node. The same\nissue can also exist for JoinPartitionPruneCandidateInfo - if you\npprint(root) at some point you'll see 'could not dump unrecognized node\ntype' warning.\n\nFixed this issue in v4.\n\nThanks\nRichard",
"msg_date": "Mon, 6 Nov 2023 11:05:24 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Support run-time partition pruning for hash join"
},
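A side note on the fix described in the message above: in PostgreSQL, "making a struct a node" means giving it a leading NodeTag and declaring it in a nodes/*.h header so that the node support functions (copy/equal/out/read, nowadays generated by gen_node_support.pl) can recognize it; without that, COPY_PARSE_PLAN_TREES / WRITE_READ_PARSE_PLAN_TREES builds read a garbage tag and fail with exactly the kind of "unrecognized node type" error reported here. A minimal sketch of what that requires follows -- the field list is hypothetical, not the contents of the actual patch:

#include "nodes/nodes.h"
#include "nodes/pg_list.h"

/*
 * Sketch only: the real JoinPartitionPruneInfo lives in the patch under
 * discussion.  The leading NodeTag (plus a matching T_JoinPartitionPruneInfo
 * enum value and generated node support) is what lets copyObject(),
 * outNode() and the read functions dispatch on this struct.
 */
typedef struct JoinPartitionPruneInfo
{
	NodeTag		type;			/* must be the first field */
	List	   *prune_infos;	/* hypothetical payload */
} JoinPartitionPruneInfo;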
{
"msg_contents": "Hello Richard,\n\n06.11.2023 06:05, Richard Guo wrote:\n>\n> Fixed this issue in v4.\n>\n\nPlease look at a warning and an assertion failure triggered by the\nfollowing script:\nset parallel_setup_cost = 0;\nset parallel_tuple_cost = 0;\nset min_parallel_table_scan_size = '1kB';\n\ncreate table t1 (i int) partition by range (i);\ncreate table t1_1 partition of t1 for values from (1) to (2);\ncreate table t1_2 partition of t1 for values from (2) to (3);\ninsert into t1 values (1), (2);\n\ncreate table t2(i int);\ninsert into t2 values (1), (2);\nanalyze t1, t2;\n\nselect * from t1 right join t2 on t1.i = t2.i;\n\n2023-11-06 14:11:37.398 UTC|law|regression|6548f419.392cf5|WARNING: Join partition pruning $0 has not been performed yet.\nTRAP: failed Assert(\"node->as_prune_state\"), File: \"nodeAppend.c\", Line: 846, PID: 3747061\n\nBest regards,\nAlexander\n\n\n\n\n\nHello Richard,\n\n 06.11.2023 06:05, Richard Guo wrote:\n\n\n\n\n\n\nFixed this issue in v4.\n\n\n\n\n\n\n Please look at a warning and an assertion failure triggered by the\n following script:\n set parallel_setup_cost = 0;\n set parallel_tuple_cost = 0;\n set min_parallel_table_scan_size = '1kB';\n\n create table t1 (i int) partition by range (i);\n create table t1_1 partition of t1 for values from (1) to (2);\n create table t1_2 partition of t1 for values from (2) to (3);\n insert into t1 values (1), (2);\n\n create table t2(i int);\n insert into t2 values (1), (2);\n analyze t1, t2;\n\n select * from t1 right join t2 on t1.i = t2.i;\n\n 2023-11-06 14:11:37.398 UTC|law|regression|6548f419.392cf5|WARNING: \n Join partition pruning $0 has not been performed yet.\n TRAP: failed Assert(\"node->as_prune_state\"), File:\n \"nodeAppend.c\", Line: 846, PID: 3747061\n\n Best regards,\n Alexander",
"msg_date": "Mon, 6 Nov 2023 18:00:00 +0300",
"msg_from": "Alexander Lakhin <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support run-time partition pruning for hash join"
},
{
"msg_contents": "On Mon, Nov 6, 2023 at 11:00 PM Alexander Lakhin <[email protected]>\nwrote:\n\n> Please look at a warning and an assertion failure triggered by the\n> following script:\n> set parallel_setup_cost = 0;\n> set parallel_tuple_cost = 0;\n> set min_parallel_table_scan_size = '1kB';\n>\n> create table t1 (i int) partition by range (i);\n> create table t1_1 partition of t1 for values from (1) to (2);\n> create table t1_2 partition of t1 for values from (2) to (3);\n> insert into t1 values (1), (2);\n>\n> create table t2(i int);\n> insert into t2 values (1), (2);\n> analyze t1, t2;\n>\n> select * from t1 right join t2 on t1.i = t2.i;\n>\n> 2023-11-06 14:11:37.398 UTC|law|regression|6548f419.392cf5|WARNING: Join\n> partition pruning $0 has not been performed yet.\n> TRAP: failed Assert(\"node->as_prune_state\"), File: \"nodeAppend.c\", Line:\n> 846, PID: 3747061\n>\n\nThanks for the report! I failed to take care of the parallel-hashjoin\ncase, and I have to admit that it's not clear to me yet how we should do\njoin partition pruning in that case.\n\nFor now I think it's better to just avoid performing join partition\npruning for parallel hashjoin, so that the patch doesn't become too\ncomplex for review. We can always extend it in the future.\n\nI have done that in v5. Thanks for testing!\n\nThanks\nRichard",
"msg_date": "Tue, 7 Nov 2023 15:55:16 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Support run-time partition pruning for hash join"
},
{
"msg_contents": "On Tue, 7 Nov 2023 at 13:25, Richard Guo <[email protected]> wrote:\n>\n>\n> On Mon, Nov 6, 2023 at 11:00 PM Alexander Lakhin <[email protected]> wrote:\n>>\n>> Please look at a warning and an assertion failure triggered by the\n>> following script:\n>> set parallel_setup_cost = 0;\n>> set parallel_tuple_cost = 0;\n>> set min_parallel_table_scan_size = '1kB';\n>>\n>> create table t1 (i int) partition by range (i);\n>> create table t1_1 partition of t1 for values from (1) to (2);\n>> create table t1_2 partition of t1 for values from (2) to (3);\n>> insert into t1 values (1), (2);\n>>\n>> create table t2(i int);\n>> insert into t2 values (1), (2);\n>> analyze t1, t2;\n>>\n>> select * from t1 right join t2 on t1.i = t2.i;\n>>\n>> 2023-11-06 14:11:37.398 UTC|law|regression|6548f419.392cf5|WARNING: Join partition pruning $0 has not been performed yet.\n>> TRAP: failed Assert(\"node->as_prune_state\"), File: \"nodeAppend.c\", Line: 846, PID: 3747061\n>\n>\n> Thanks for the report! I failed to take care of the parallel-hashjoin\n> case, and I have to admit that it's not clear to me yet how we should do\n> join partition pruning in that case.\n>\n> For now I think it's better to just avoid performing join partition\n> pruning for parallel hashjoin, so that the patch doesn't become too\n> complex for review. We can always extend it in the future.\n>\n> I have done that in v5. Thanks for testing!\n\nCFBot shows that the patch does not apply anymore as in [1]:\n=== Applying patches on top of PostgreSQL commit ID\n924d046dcf55887c98a1628675a30f4b0eebe556 ===\n=== applying patch\n./v5-0001-Support-run-time-partition-pruning-for-hash-join.patch\n...\npatching file src/include/nodes/plannodes.h\n...\npatching file src/include/optimizer/cost.h\nHunk #1 FAILED at 211.\n1 out of 1 hunk FAILED -- saving rejects to file\nsrc/include/optimizer/cost.h.rej\n\nPlease post an updated version for the same.\n\n[1] - http://cfbot.cputube.org/patch_46_4512.log\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Sat, 27 Jan 2024 08:59:04 +0530",
"msg_from": "vignesh C <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support run-time partition pruning for hash join"
},
{
"msg_contents": "On Sat, Jan 27, 2024 at 11:29 AM vignesh C <[email protected]> wrote:\n\n> CFBot shows that the patch does not apply anymore as in [1]:\n>\n> Please post an updated version for the same.\n\n\nAttached is an updated patch. Nothing else has changed.\n\nThanks\nRichard",
"msg_date": "Tue, 30 Jan 2024 10:33:19 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Support run-time partition pruning for hash join"
},
{
"msg_contents": "On Tue, Jan 30, 2024 at 10:33 AM Richard Guo <[email protected]> wrote:\n\n> Attached is an updated patch. Nothing else has changed.\n>\n\nHere is another rebase over master so it applies again. Nothing else\nhas changed.\n\nThanks\nRichard",
"msg_date": "Tue, 19 Mar 2024 14:12:34 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Support run-time partition pruning for hash join"
},
{
"msg_contents": "On 19/3/2024 07:12, Richard Guo wrote:\n> \n> On Tue, Jan 30, 2024 at 10:33 AM Richard Guo <[email protected] \n> <mailto:[email protected]>> wrote:\n> \n> Attached is an updated patch. Nothing else has changed.\n> \n> \n> Here is another rebase over master so it applies again. Nothing else\n> has changed.\nThe patch doesn't apply to the master now.\nI wonder why this work was suppressed - it looks highly profitable in \nthe case of foreign partitions. And the idea of cost-based enablement \nmakes it a must-have, I think.\nI have just skimmed through the patch and have a couple of questions:\n1. It makes sense to calculate the cost and remember the minimum number \nof pruned partitions when the cost of HJ with probing is still \nprofitable. Why don't we disable this probing in runtime if we see that \nthe number of potentially pruning partitions is already too low?\n2. Maybe I misunderstood the code, but having matched a hashed tuple \nwith a partition, it makes sense for further tuples to reduce the number \nof probing expressions because we already know that the partition will \nnot be pruned.\n\n-- \nregards, Andrei Lepikhov\n\n\n\n",
"msg_date": "Thu, 5 Sep 2024 09:56:55 +0200",
"msg_from": "Andrei Lepikhov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support run-time partition pruning for hash join"
},
{
"msg_contents": "On Tue, 22 Aug 2023 at 21:51, Richard Guo <[email protected]> wrote:\n> Sometimes we may just not generate parameterized nestloop as final plan,\n> such as when there are no indexes and no lateral references in the\n> Append/MergeAppend node. In this case I think it would be great if we\n> can still do some partition running.\n\n(I just read through this thread again to remind myself of where it's at.)\n\nHere are my current thoughts: You've done some costing work which will\nonly prefer the part-prune hash join path in very conservative cases.\nThis is to reduce the risk of performance regressions caused by\nrunning the pruning code too often in cases where it's less likely to\nbe able to prune any partitions.\n\nNow, I'm not saying we shouldn't ever do this pruning hash join stuff,\nbut what I think might be better to do as a first step is to have\npartitioned tables create a parameterized path on their partition key,\nand a prefix thereof for RANGE partitioned tables. This would allow\nparameterized nested loop joins when no index exists on the partition\nkey.\n\nRight now you can get a plan that does this if you do:\n\ncreate table p (col int);\ncreate table pt (partkey int) partition by list(partkey);\ncreate table pt1 partition of pt for values in(1);\ncreate table pt2 partition of pt for values in(2);\ninsert into p values(1);\ninsert into pt values(1);\n\nexplain (analyze, costs off, timing off, summary off)\nSELECT * FROM p, LATERAL (SELECT * FROM pt WHERE p.col = pt.partkey OFFSET 0);\n QUERY PLAN\n----------------------------------------------------------\n Nested Loop (actual rows=0 loops=1)\n -> Seq Scan on p (actual rows=1 loops=1)\n -> Append (actual rows=0 loops=1)\n -> Seq Scan on pt1 pt_1 (actual rows=0 loops=1)\n Filter: (p.col = partkey)\n -> Seq Scan on pt2 pt_2 (never executed)\n Filter: (p.col = partkey)\n\nYou get the parameterized nested loop. Great! But, as soon as you drop\nthe OFFSET 0, the lateral join will be converted to an inner join and\nNested Loop won't look so great when it's not parameterized.\n\nexplain (analyze, costs off, timing off, summary off)\nSELECT * FROM p, LATERAL (SELECT * FROM pt WHERE p.col = pt.partkey);\n QUERY PLAN\n----------------------------------------------------------\n Hash Join (actual rows=1 loops=1)\n Hash Cond: (pt.partkey = p.col)\n -> Append (actual rows=1 loops=1)\n -> Seq Scan on pt1 pt_1 (actual rows=1 loops=1)\n -> Seq Scan on pt2 pt_2 (actual rows=0 loops=1)\n -> Hash (actual rows=1 loops=1)\n Buckets: 4096 Batches: 2 Memory Usage: 32kB\n -> Seq Scan on p (actual rows=1 loops=1)\n\nMaybe instead of inventing a very pessimistic part prune Hash Join, it\nmight be better to make the above work without the LATERAL + OFFSET 0\nby creating the parameterized paths Seq Scan paths. That's going to be\nan immense help when the non-partitioned relation just has a small\nnumber of rows, which I think your costing favoured anyway.\n\nWhat do you think?\n\nDavid\n\n\n",
"msg_date": "Fri, 6 Sep 2024 13:22:28 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support run-time partition pruning for hash join"
},
{
"msg_contents": "On Fri, Sep 6, 2024 at 9:22 AM David Rowley <[email protected]> wrote:\n> Maybe instead of inventing a very pessimistic part prune Hash Join, it\n> might be better to make the above work without the LATERAL + OFFSET 0\n> by creating the parameterized paths Seq Scan paths. That's going to be\n> an immense help when the non-partitioned relation just has a small\n> number of rows, which I think your costing favoured anyway.\n>\n> What do you think?\n\nThis approach seems promising. It reminds me of the discussion about\npushing join clauses into a seqscan [1]. But I think there are two\nproblems that we need to address to make it work.\n\n* Currently, the costing code does not take run-time pruning into\nconsideration. How should we calculate the costs of the parameterized\npaths on partitioned tables?\n\n* This approach generates additional paths at the scan level, which\nmay not be easily compared with regular scan paths. As a result, we\nmight need to retain these paths at every level of the join tree. I'm\nafraid this could lead to a significant increase in planning time in\nsome cases. We need to find a way to avoid regressions in planning\ntime.\n\n[1] https://postgr.es/m/[email protected]\n\nThanks\nRichard\n\n\n",
"msg_date": "Fri, 6 Sep 2024 15:18:57 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Support run-time partition pruning for hash join"
},
{
"msg_contents": "On Fri, 6 Sept 2024 at 19:19, Richard Guo <[email protected]> wrote:\n> * Currently, the costing code does not take run-time pruning into\n> consideration. How should we calculate the costs of the parameterized\n> paths on partitioned tables?\n\nCouldn't we assume total_cost = total_cost / n_apppend_children for\nequality conditions and do something with DEFAULT_INEQ_SEL and\nDEFAULT_RANGE_INEQ_SEL for more complex cases. I understand we\nprobably need to do something about this to have the planner have any\nchance of actually choose these Paths, so hacking something in there\nto test the idea is sound before going to the trouble of refining the\ncost model seems like a good idea.\n\n> * This approach generates additional paths at the scan level, which\n> may not be easily compared with regular scan paths. As a result, we\n> might need to retain these paths at every level of the join tree. I'm\n> afraid this could lead to a significant increase in planning time in\n> some cases. We need to find a way to avoid regressions in planning\n> time.\n\nHow about just creating these Paths for partitioned tables (and\npartitions) when there's an EquivalenceClass containing multiple\nrelids on the partition key? I think those are about the only cases\nthat could benefit, so I think it makes sense to restrict making the\nadditional Paths for that case.\n\nDavid\n\n\n",
"msg_date": "Thu, 12 Sep 2024 09:14:03 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support run-time partition pruning for hash join"
}
] |
[
{
"msg_contents": "Hi all,\n\nCurrently PostgreSQL has three different variants of a 32-bit CRC calculation: CRC-32C, CRC-32(Ethernet polynomial),\nand a legacy CRC-32 version that uses the lookup table. Some ARMv8 (AArch64) CPUs implement the CRC32 extension which\nis equivalent with CRC-32(Ethernet polynomial), so they can also benefit from hardware acceleration.\n\nCan I propose a patch to optimize crc32 calculation with Arm64 specific instructions?\n\nAny comments or feedback are welcome.\nIMPORTANT NOTICE: The contents of this email and any attachments are confidential and may also be privileged. If you are not the intended recipient, please notify the sender immediately and do not disclose the contents to any other person, use it for any purpose, or store or copy the information in any medium. Thank you.\n\n\n\n\n\n\n\n\n\nHi all,\n \nCurrently PostgreSQL has three different variants of a 32-bit CRC calculation: CRC-32C, CRC-32(Ethernet polynomial),\n\nand a legacy CRC-32 version that uses the lookup table. Some ARMv8 (AArch64) CPUs implement the CRC32 extension which\n\nis equivalent with CRC-32(Ethernet polynomial), so they can also benefit from hardware acceleration.\n \nCan I propose a patch to optimize crc32 calculation with Arm64 specific instructions?\n \nAny comments or feedback are welcome.\n\nIMPORTANT NOTICE: The contents of this email and any attachments are confidential and may also be privileged. If you are not the intended recipient, please notify the sender immediately and do not disclose the contents to any other person, use it for any purpose,\n or store or copy the information in any medium. Thank you.",
"msg_date": "Mon, 21 Aug 2023 09:32:42 +0000",
"msg_from": "Xiang Gao <[email protected]>",
"msg_from_op": true,
"msg_subject": "Optimize Arm64 crc32 implementation in PostgreSQL"
},
{
"msg_contents": "On Mon, Aug 21, 2023 at 09:32:42AM +0000, Xiang Gao wrote:\n> Currently PostgreSQL has three different variants of a 32-bit CRC calculation: CRC-32C, CRC-32(Ethernet polynomial),\n> and a legacy CRC-32 version that uses the lookup table. Some ARMv8 (AArch64) CPUs implement the CRC32 extension which\n> is equivalent with CRC-32(Ethernet polynomial), so they can also benefit from hardware acceleration.\n> \n> Can I propose a patch to optimize crc32 calculation with Arm64 specific instructions?\n\nWe have support for ARMv8 CRC instructions for CRC-32C (see\nsrc/port/pg_crc32c_armv8.c), but AFAICT Postgres has no such optimization\nfor CRC-32. The CRC-32 macros have a comment indicating that they are\ncurrently only used in ltree and hstore, so there might not be terribly\nmuch demand for hardware acceleration, though.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 21 Aug 2023 13:04:46 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimize Arm64 crc32 implementation in PostgreSQL"
}
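For readers wondering what the hardware path being discussed looks like: the ARMv8 CRC32 extension is exposed through the ACLE intrinsics in <arm_acle.h>, and src/port/pg_crc32c_armv8.c already uses the __crc32c* forms for CRC-32C. Below is a standalone sketch of the plain CRC-32 (Ethernet polynomial) variant; it is illustrative only -- the function name is made up, it assumes a compiler targeting armv8-a+crc, and a real patch would need to follow the init/final-inversion conventions of the existing CRC macros rather than folding them in here:

#include <stddef.h>
#include <stdint.h>
#include <string.h>
#include <arm_acle.h>		/* __crc32b/__crc32d; requires a +crc target */

/* CRC-32 (Ethernet polynomial) over a buffer; caller starts with crc = 0,
 * the edge inversions are handled inside for this standalone sketch. */
static uint32_t
crc32_armv8_sketch(uint32_t crc, const unsigned char *p, size_t len)
{
	crc = ~crc;
	while (len >= 8)
	{
		uint64_t	chunk;

		memcpy(&chunk, p, 8);
		crc = __crc32d(crc, chunk);	/* __crc32cd here would give CRC-32C */
		p += 8;
		len -= 8;
	}
	while (len-- > 0)
		crc = __crc32b(crc, *p++);
	return ~crc;
}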
] |
[
{
"msg_contents": "Is there any piece of code I could see how to achieve $subject ? \nI haven’t found anything in the standard library or contrib modules.\n\nI’m trying to build ArrayType ** of sorts and return a Datum of those but I can’t seem to manage memory correctly. \n\n\n\n\n\n",
"msg_date": "Mon, 21 Aug 2023 22:31:47 +0300",
"msg_from": "Markur Sens <[email protected]>",
"msg_from_op": true,
"msg_subject": "C function to return double precision[][]"
},
{
"msg_contents": "On 8/21/23 15:31, Markur Sens wrote:\n> Is there any piece of code I could see how to achieve $subject ?\n> I haven’t found anything in the standard library or contrib modules.\n> \n> I’m trying to build ArrayType ** of sorts and return a Datum of those but I can’t seem to manage memory correctly.\n\nThere is an example in PL/R here:\n\nhttps://github.com/postgres-plr/plr/blob/20a1f133bcf2bc8f37ac23da191aea590d612619/pg_conversion.c#L1275\n\nwhich points to here with number of dims == 2:\n\nhttps://github.com/postgres-plr/plr/blob/20a1f133bcf2bc8f37ac23da191aea590d612619/pg_conversion.c#L1493\n\nThis is all generic to the element type (i.e. not specifically float8), \nbut the needed type conversion stuff happens in here:\n\nhttps://github.com/postgres-plr/plr/blob/20a1f133bcf2bc8f37ac23da191aea590d612619/plr.c#L1109\n\nHTH,\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n",
"msg_date": "Mon, 21 Aug 2023 16:04:53 -0400",
"msg_from": "Joe Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: C function to return double precision[][]"
}
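To make the pointers above a bit more concrete, here is a minimal self-contained sketch of a C function that returns a two-dimensional float8 array via construct_md_array(). The function name and the fixed 2x3 shape are invented for illustration; at the SQL level the return type is simply declared float8[] (PostgreSQL does not track dimensionality in the array type), e.g. CREATE FUNCTION demo_matrix() RETURNS float8[] AS 'MODULE_PATHNAME' LANGUAGE C STRICT. All allocations are palloc'd in the current memory context, so nothing needs to be freed by hand, which is usually the answer to the memory-management trouble described in the question.

#include "postgres.h"
#include "fmgr.h"
#include "catalog/pg_type.h"	/* FLOAT8OID */
#include "utils/array.h"

PG_MODULE_MAGIC;

PG_FUNCTION_INFO_V1(demo_matrix);

/* Build and return a 2x3 float8[][] filled with r*10 + c. */
Datum
demo_matrix(PG_FUNCTION_ARGS)
{
	int			dims[2] = {2, 3};
	int			lbs[2] = {1, 1};	/* lower bound of each dimension */
	Datum	   *elems = palloc(sizeof(Datum) * 6);
	ArrayType  *result;

	for (int r = 0; r < 2; r++)
		for (int c = 0; c < 3; c++)
			elems[r * 3 + c] = Float8GetDatum(r * 10 + c);

	/* float8 elements: 8 bytes, pass-by-value on 64-bit builds, 'd' align */
	result = construct_md_array(elems, NULL, 2, dims, lbs,
								FLOAT8OID, sizeof(float8),
								FLOAT8PASSBYVAL, 'd');

	PG_RETURN_ARRAYTYPE_P(result);
}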
] |
[
{
"msg_contents": "This is part of the larger project of allowing all test suites to pass \nin OpenSSL FIPS mode. We had previously integrated several patches that \navoid or isolate use of MD5 in various forms in the tests. Now to \nanother issue.\n\nOpenSSL in FIPS mode rejects several encrypted private keys used in the \ntest suites ssl and ssl_passphrase_callback. The reason for this is \nexplained in [0]:\n\n > Technically you shouldn't use keys created outside FIPS mode in FIPS\n > mode.\n >\n > In FIPS mode the \"traditional\" format is not supported because it used\n > MD5 for key derivation. The more standard PKCS#8 mode using SHA1 for\n > key derivation is use instead. You can convert keys using the pkcs8\n > command outside FIPS mode but again technically you aren't supposed\n > to...\n\n[0]: \nhttps://groups.google.com/g/mailing.openssl.users/c/Sd5E8VY5O2s/m/QYGezoQeo84J\n\nThe affected files are\n\nsrc/test/modules/ssl_passphrase_callback/server.key\nsrc/test/ssl/ssl/client-encrypted-pem.key\nsrc/test/ssl/ssl/server-password.key\n\nA fix is to convert them from their existing PKCS#1 format to the PKCS#8 \nformat, like this:\n\nopenssl pkcs8 -topk8 -in \nsrc/test/modules/ssl_passphrase_callback/server.key -passin pass:FooBaR1 \n-out src/test/modules/ssl_passphrase_callback/server.key.new -passout \npass:FooBaR1\nmv src/test/modules/ssl_passphrase_callback/server.key.new \nsrc/test/modules/ssl_passphrase_callback/server.key\n\netc.\n\n(Fun fact: The above command also doesn't work if your OpenSSL \ninstallation is in FIPS mode because it will refuse to read the old file.)\n\nWe should also update the generation rules to generate the newer format, \nlike this:\n\n- $(OPENSSL) rsa -aes256 -in server.ckey -out server.key -passout \npass:$(PASS)\n+ $(OPENSSL) pkey -aes256 -in server.ckey -out server.key -passout \npass:$(PASS)\n\nI have attached two patches, one to update the generation rules, and one \nwhere I have converted the existing test files. (I didn't generate them \nfrom scratch, so for example \nsrc/test/modules/ssl_passphrase_callback/server.crt that corresponds to \none of the keys does not need to be updated.)\n\nTo check that these new files are backward compatible, I have \nsuccessfully tested them on CentOS 7 with the included version 1.0.2k.\n\nIt's also interesting that if you generate all private keys from scratch \nusing the existing rules on a new OpenSSL version (3+), they will be \ngenerated in PKCS#8 format by default. In those OpenSSL versions, the \nopenssl-rsa command has a -traditional option to get the old format, but \nof course old OpenSSL versions don't have that. As OpenSSL 3 gets more \nwidespread, we might need to rethink these rules anyway to make sure we \nget consistent behavior.",
"msg_date": "Tue, 22 Aug 2023 10:07:05 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Convert encrypted SSL test keys to PKCS#8 format"
},
{
"msg_contents": "On Tue, Aug 22, 2023 at 1:07 AM Peter Eisentraut <[email protected]> wrote:\n> I have attached two patches, one to update the generation rules, and one\n> where I have converted the existing test files. (I didn't generate them\n> from scratch, so for example\n> src/test/modules/ssl_passphrase_callback/server.crt that corresponds to\n> one of the keys does not need to be updated.)\n\nLooks good from here. I don't have a FIPS setup right now, but the new\nfiles pass tests on OpenSSL 1.0.2u, 1.1.1v, 3.0.2-0ubuntu1.10, and\nLibreSSL 3.8. Tests continue to pass after a full clean and rebuild of\nthe sslfiles.\n\n> It's also interesting that if you generate all private keys from scratch\n> using the existing rules on a new OpenSSL version (3+), they will be\n> generated in PKCS#8 format by default. In those OpenSSL versions, the\n> openssl-rsa command has a -traditional option to get the old format, but\n> of course old OpenSSL versions don't have that. As OpenSSL 3 gets more\n> widespread, we might need to rethink these rules anyway to make sure we\n> get consistent behavior.\n\nYeah. Looks like OpenSSL 3 also adds new v3 extensions to the\ncertificates... For now they look benign, but I assume someone's going\nto run into weirdness at some point.\n\nThanks!\n--Jacob\n\n\n",
"msg_date": "Tue, 22 Aug 2023 12:02:02 -0700",
"msg_from": "Jacob Champion <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Convert encrypted SSL test keys to PKCS#8 format"
},
{
"msg_contents": "On 22.08.23 21:02, Jacob Champion wrote:\n> On Tue, Aug 22, 2023 at 1:07 AM Peter Eisentraut <[email protected]> wrote:\n>> I have attached two patches, one to update the generation rules, and one\n>> where I have converted the existing test files. (I didn't generate them\n>> from scratch, so for example\n>> src/test/modules/ssl_passphrase_callback/server.crt that corresponds to\n>> one of the keys does not need to be updated.)\n> \n> Looks good from here. I don't have a FIPS setup right now, but the new\n> files pass tests on OpenSSL 1.0.2u, 1.1.1v, 3.0.2-0ubuntu1.10, and\n> LibreSSL 3.8. Tests continue to pass after a full clean and rebuild of\n> the sslfiles.\n\nCommitted, thanks.\n\n\n\n",
"msg_date": "Mon, 28 Aug 2023 08:44:25 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Convert encrypted SSL test keys to PKCS#8 format"
}
] |
[
{
"msg_contents": "The list of acknowledgments for the PG16 release notes has been \ncommitted. It should show up here sometime: \n<https://www.postgresql.org/docs/16/release-16.html#RELEASE-16-ACKNOWLEDGEMENTS>. \n As usual, please check for problems such as wrong sorting, duplicate \nnames in different variants, or names in the wrong order etc. (Our \nconvention is given name followed by surname.)\n\n\n",
"msg_date": "Tue, 22 Aug 2023 11:33:25 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "list of acknowledgments for PG16"
},
{
"msg_contents": "On 2023-Aug-22, Peter Eisentraut wrote:\n\n> The list of acknowledgments for the PG16 release notes has been committed.\n> It should show up here sometime: <https://www.postgresql.org/docs/16/release-16.html#RELEASE-16-ACKNOWLEDGEMENTS>.\n> As usual, please check for problems such as wrong sorting, duplicate names\n> in different variants, or names in the wrong order etc. (Our convention is\n> given name followed by surname.)\n\nHmm, I think these docs would only regenerate during the RC1 release, so\nit'll be a couple of weeks, unless we manually poke the doc builder.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"Here's a general engineering tip: if the non-fun part is too complex for you\nto figure out, that might indicate the fun part is too ambitious.\" (John Naylor)\nhttps://postgr.es/m/CAFBsxsG4OWHBbSDM%3DsSeXrQGOtkPiOEOuME4yD7Ce41NtaAD9g%40mail.gmail.com\n\n\n",
"msg_date": "Tue, 22 Aug 2023 12:41:30 +0200",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: list of acknowledgments for PG16"
},
{
"msg_contents": "On 8/22/23 11:33, Peter Eisentraut wrote:\n> As usual, please check for problems such as wrong sorting, duplicate names in different variants, or names in the wrong order etc. (Our convention is given name followed by surname.)\n\nNot necessarily for this time around, but I would like to see this \nconvention be a bit more inclusive of other cultures. My proposed \nsolution is to list them the same way we do now, but also have in \nparentheses or something their name in their native order and script.\n-- \nVik Fearing\n\n\n\n",
"msg_date": "Tue, 22 Aug 2023 14:01:19 +0200",
"msg_from": "Vik Fearing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: list of acknowledgments for PG16"
},
{
"msg_contents": "On 2023-Aug-22, Vik Fearing wrote:\n\n> On 8/22/23 11:33, Peter Eisentraut wrote:\n> > As usual, please check for problems such as wrong sorting, duplicate\n> > names in different variants, or names in the wrong order etc. (Our\n> > convention is given name followed by surname.)\n> \n> Not necessarily for this time around, but I would like to see this\n> convention be a bit more inclusive of other cultures. My proposed solution\n> is to list them the same way we do now, but also have in parentheses or\n> something their name in their native order and script.\n\nYeah, I've been proposing this kind of thing for many years; the\nproblem, until not long ago, was that the tooling was unable to process\nnon-Latin1 characters in all the output formats that we use. But\ntooling has changed and the oldest platforms have disappeared, so maybe\nit works now; do you want to inject some Chinese, Cyrillic, Japanese\nnames and give it a spin? At least HTML and PDF need to work correctly.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"Pido que me den el Nobel por razones humanitarias\" (Nicanor Parra)\n\n\n",
"msg_date": "Tue, 22 Aug 2023 15:05:15 +0200",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: list of acknowledgments for PG16"
},
{
"msg_contents": "Alvaro Herrera <[email protected]> writes:\n> Yeah, I've been proposing this kind of thing for many years; the\n> problem, until not long ago, was that the tooling was unable to process\n> non-Latin1 characters in all the output formats that we use. But\n> tooling has changed and the oldest platforms have disappeared, so maybe\n> it works now; do you want to inject some Chinese, Cyrillic, Japanese\n> names and give it a spin? At least HTML and PDF need to work correctly.\n\nI'm pretty sure the PDF toolchain still fails on non-Latin1 characters.\nAt least it does the way I have it installed; maybe adding some\nnon-default dependencies would help?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 22 Aug 2023 09:29:49 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: list of acknowledgments for PG16"
},
{
"msg_contents": "On 8/22/23 15:29, Tom Lane wrote:\n> Alvaro Herrera <[email protected]> writes:\n>> Yeah, I've been proposing this kind of thing for many years; the\n>> problem, until not long ago, was that the tooling was unable to process\n>> non-Latin1 characters in all the output formats that we use. But\n>> tooling has changed and the oldest platforms have disappeared, so maybe\n>> it works now; do you want to inject some Chinese, Cyrillic, Japanese\n>> names and give it a spin? At least HTML and PDF need to work correctly.\n> \n> I'm pretty sure the PDF toolchain still fails on non-Latin1 characters.\n> At least it does the way I have it installed; maybe adding some\n> non-default dependencies would help?\n\nI am struggling to find documentation on how to build the pdfs with \nmeson. Any pointers?\n-- \nVik Fearing\n\n\n\n",
"msg_date": "Tue, 22 Aug 2023 15:40:48 +0200",
"msg_from": "Vik Fearing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: list of acknowledgments for PG16"
},
{
"msg_contents": "Alvaro Herrera <[email protected]> writes:\n> On 2023-Aug-22, Peter Eisentraut wrote:\n>> The list of acknowledgments for the PG16 release notes has been committed.\n>> It should show up here sometime: <https://www.postgresql.org/docs/16/release-16.html#RELEASE-16-ACKNOWLEDGEMENTS>.\n\n> Hmm, I think these docs would only regenerate during the RC1 release, so\n> it'll be a couple of weeks, unless we manually poke the doc builder.\n\nYeah. I could produce a new set of tarballs from the v16 branch tip,\nbut I don't know the process (nor have the admin permissions) to\nextract the HTML docs and put them on the website.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 22 Aug 2023 09:44:28 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: list of acknowledgments for PG16"
},
{
"msg_contents": "On 8/22/23 11:33, Peter Eisentraut wrote:\n> The list of acknowledgments for the PG16 release notes has been \n> committed. It should show up here sometime: \n> <https://www.postgresql.org/docs/16/release-16.html#RELEASE-16-ACKNOWLEDGEMENTS>. As usual, please check for problems such as wrong sorting, duplicate names in different variants, or names in the wrong order etc. (Our convention is given name followed by surname.)\n\nI think these might be the same person:\n\n <member>Zhihong Yu</member>\n <member>Zihong Yu</member>\n\nI did not spot any others.\n-- \nVik Fearing\n\n\n\n",
"msg_date": "Tue, 22 Aug 2023 15:48:51 +0200",
"msg_from": "Vik Fearing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: list of acknowledgments for PG16"
},
{
"msg_contents": "On 8/22/23 09:44, Tom Lane wrote:\n> Alvaro Herrera <[email protected]> writes:\n>> On 2023-Aug-22, Peter Eisentraut wrote:\n>>> The list of acknowledgments for the PG16 release notes has been committed.\n>>> It should show up here sometime: <https://www.postgresql.org/docs/16/release-16.html#RELEASE-16-ACKNOWLEDGEMENTS>.\n> \n>> Hmm, I think these docs would only regenerate during the RC1 release, so\n>> it'll be a couple of weeks, unless we manually poke the doc builder.\n> \n> Yeah. I could produce a new set of tarballs from the v16 branch tip,\n> but I don't know the process (nor have the admin permissions) to\n> extract the HTML docs and put them on the website.\n\n\nThese days the docs update is part of a scripted process for doing an \nentire release.\n\nI'm sure we could figure out how to just release the updated docs, but \nwith RC1 a week away, is it really worthwhile?\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n",
"msg_date": "Tue, 22 Aug 2023 10:03:29 -0400",
"msg_from": "Joe Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: list of acknowledgments for PG16"
},
{
"msg_contents": "On 22.08.23 15:29, Tom Lane wrote:\n> Alvaro Herrera <[email protected]> writes:\n>> Yeah, I've been proposing this kind of thing for many years; the\n>> problem, until not long ago, was that the tooling was unable to process\n>> non-Latin1 characters in all the output formats that we use. But\n>> tooling has changed and the oldest platforms have disappeared, so maybe\n>> it works now; do you want to inject some Chinese, Cyrillic, Japanese\n>> names and give it a spin? At least HTML and PDF need to work correctly.\n> \n> I'm pretty sure the PDF toolchain still fails on non-Latin1 characters.\n> At least it does the way I have it installed; maybe adding some\n> non-default dependencies would help?\n\nSee here: \nhttps://www.postgresql.org/message-id/[email protected]\n\n\n\n",
"msg_date": "Tue, 22 Aug 2023 16:24:09 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: list of acknowledgments for PG16"
},
{
"msg_contents": "On Tue, Aug 22, 2023 at 10:03:29AM -0400, Joe Conway wrote:\n> On 8/22/23 09:44, Tom Lane wrote:\n> > Alvaro Herrera <[email protected]> writes:\n> > > On 2023-Aug-22, Peter Eisentraut wrote:\n> > > > The list of acknowledgments for the PG16 release notes has been committed.\n> > > > It should show up here sometime: <https://www.postgresql.org/docs/16/release-16.html#RELEASE-16-ACKNOWLEDGEMENTS>.\n> > \n> > > Hmm, I think these docs would only regenerate during the RC1 release, so\n> > > it'll be a couple of weeks, unless we manually poke the doc builder.\n> > \n> > Yeah. I could produce a new set of tarballs from the v16 branch tip,\n> > but I don't know the process (nor have the admin permissions) to\n> > extract the HTML docs and put them on the website.\n> \n> \n> These days the docs update is part of a scripted process for doing an entire\n> release.\n> \n> I'm sure we could figure out how to just release the updated docs, but with\n> RC1 a week away, is it really worthwhile?\n\nYou can see the list in my automated build:\n\n\thttps://momjian.us/pgsql_docs/release-16.html#RELEASE-16-ACKNOWLEDGEMENTS\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n",
"msg_date": "Tue, 22 Aug 2023 15:26:04 -0400",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: list of acknowledgments for PG16"
},
{
"msg_contents": "Peter Eisentraut a écrit :\n> The list of acknowledgments for the PG16 release notes has been\n> committed. It should show up here sometime:\n> <https://www.postgresql.org/docs/16/release-16.html#RELEASE-16-ACKNOWLEDGEMENTS>.\n> As usual, please check for problems such as wrong sorting, duplicate\n> names in different variants, or names in the wrong order etc. (Our\n> convention is given name followed by surname.)\n> \n\n\"Gabriele Varrazzo\" is mentioned in commit \n0032a5456708811ca95bd80a538f4fb72ad0dd20 but it should be \"Daniele \nVarrazzo\" (per Discussion link in commit message); the later is already \nin the list.",
"msg_date": "Wed, 23 Aug 2023 09:13:09 +0200",
"msg_from": "Denis Laxalde <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: list of acknowledgments for PG16"
},
{
"msg_contents": "On Tue, Aug 22, 2023 at 6:33 PM Peter Eisentraut <[email protected]> wrote:\n> As usual, please check for problems such as wrong sorting, duplicate\n> names in different variants, or names in the wrong order etc. (Our\n> convention is given name followed by surname.)\n\nI went through Japanese names on the list. I think they are all in\nthe right order (ie, the given-name-followed-by-surname order).\n\nThanks!\n\nBest regards,\nEtsuro Fujita\n\n\n",
"msg_date": "Wed, 23 Aug 2023 20:35:00 +0900",
"msg_from": "Etsuro Fujita <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: list of acknowledgments for PG16"
},
{
"msg_contents": "On Tue, Aug 22, 2023 at 9:41 PM Vik Fearing <[email protected]> wrote:\n>\n>\n> I am struggling to find documentation on how to build the pdfs with\n> meson. Any pointers?\n> --\n> Vik Fearing\n>\n>\n>\n\nninja docs:\nhttps://wiki.postgresql.org/wiki/Meson#Meson_documentation\n\nninja alldocs. which take some time, build all kinds of formats, some may fail.\n\nthere is another tricky usage:\ntype \"ninja doc\" then press Tab for complete twice, you will get all\nthe available options like following:\ndocs doc/src/sgml/man3\ndoc/src/sgml/errcodes-table.sgml doc/src/sgml/man7\ndoc/src/sgml/features-supported.sgml doc/src/sgml/postgres-A4.fo\ndoc/src/sgml/features-unsupported.sgml doc/src/sgml/postgres-A4.pdf\ndoc/src/sgml/html doc/src/sgml/postgres.epub\ndoc/src/sgml/INSTALL doc/src/sgml/postgres-full.xml\ndoc/src/sgml/install-html doc/src/sgml/postgres.html\ndoc/src/sgml/INSTALL.html doc/src/sgml/postgres.txt\ndoc/src/sgml/install-man doc/src/sgml/postgres-US.fo\ndoc/src/sgml/INSTALL.xml doc/src/sgml/postgres-US.pdf\ndoc/src/sgml/keywords-table.sgml doc/src/sgml/wait_event_types.sgml\ndoc/src/sgml/man1\n\n\n",
"msg_date": "Fri, 25 Aug 2023 13:06:03 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: list of acknowledgments for PG16"
},
{
"msg_contents": "On Tue, Aug 22, 2023 at 4:03 PM Joe Conway <[email protected]> wrote:\n>\n> On 8/22/23 09:44, Tom Lane wrote:\n> > Alvaro Herrera <[email protected]> writes:\n> >> On 2023-Aug-22, Peter Eisentraut wrote:\n> >>> The list of acknowledgments for the PG16 release notes has been committed.\n> >>> It should show up here sometime: <https://www.postgresql.org/docs/16/release-16.html#RELEASE-16-ACKNOWLEDGEMENTS>.\n> >\n> >> Hmm, I think these docs would only regenerate during the RC1 release, so\n> >> it'll be a couple of weeks, unless we manually poke the doc builder.\n> >\n> > Yeah. I could produce a new set of tarballs from the v16 branch tip,\n> > but I don't know the process (nor have the admin permissions) to\n> > extract the HTML docs and put them on the website.\n>\n>\n> These days the docs update is part of a scripted process for doing an\n> entire release.\n>\n> I'm sure we could figure out how to just release the updated docs, but\n> with RC1 a week away, is it really worthwhile?\n\nWe've also been pretty strict to say that we don't *want* unreleased\ndocs on the website for any of our stable branches before, so changing\nthat would be a distinct policy change as well. And doing such an\nexception for just one commit seems like it's set up for problems --\nyou'd then have to do another one as soon as an adjustment is made.\nAnd in the end, that would mean changing the policy to say that the\n\"release branches documentation tracks branch tip instead of\nreleases\". Which I generally speaking don't think is a good idea,\nbecause then they don't match what people are running anymore. I think\nit only really makes sense for this one part of the docs -- even other\nchanges to the REL16 docs should be excluded until the next release is\n(this time, RC1).\n\nBottom line is, definite -1 for doing a one-off change that violates\nthe principle we're on.\n\nNow, if we want a *separate* location where we continuously load\nbranch tip docs that's a different thing and certainly something we\ncould consider.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n",
"msg_date": "Fri, 25 Aug 2023 14:22:36 +0200",
"msg_from": "Magnus Hagander <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: list of acknowledgments for PG16"
},
{
"msg_contents": "> On 25 Aug 2023, at 14:22, Magnus Hagander <[email protected]> wrote:\n> On Tue, Aug 22, 2023 at 4:03 PM Joe Conway <[email protected]> wrote:\n\n>> I'm sure we could figure out how to just release the updated docs, but\n>> with RC1 a week away, is it really worthwhile?\n> \n> We've also been pretty strict to say that we don't *want* unreleased\n> docs on the website for any of our stable branches before, so changing\n> that would be a distinct policy change as well. And doing such an\n> exception for just one commit seems like it's set up for problems --\n> you'd then have to do another one as soon as an adjustment is made.\n> And in the end, that would mean changing the policy to say that the\n> \"release branches documentation tracks branch tip instead of\n> releases\". Which I generally speaking don't think is a good idea,\n> because then they don't match what people are running anymore. I think\n> it only really makes sense for this one part of the docs -- even other\n> changes to the REL16 docs should be excluded until the next release is\n> (this time, RC1).\n> \n> Bottom line is, definite -1 for doing a one-off change that violates\n> the principle we're on.\n\nBased on your reasoning above, I agree.\n\n> Now, if we want a *separate* location where we continuously load\n> branch tip docs that's a different thing and certainly something we\n> could consider.\n\nThat could be useful, seeing changes rendered with the full website style is a\ngood way to ensure a doc patch didn't break something subtle. As long as keep\nthem from being indexed by search engines and clearly separated from /docs/ it\nshould be fine.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Fri, 25 Aug 2023 14:32:56 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: list of acknowledgments for PG16"
},
{
"msg_contents": "On 22.08.23 15:48, Vik Fearing wrote:\n> On 8/22/23 11:33, Peter Eisentraut wrote:\n>> The list of acknowledgments for the PG16 release notes has been \n>> committed. It should show up here sometime: \n>> <https://www.postgresql.org/docs/16/release-16.html#RELEASE-16-ACKNOWLEDGEMENTS>. As usual, please check for problems such as wrong sorting, duplicate names in different variants, or names in the wrong order etc. (Our convention is given name followed by surname.)\n> \n> I think these might be the same person:\n> \n> <member>Zhihong Yu</member>\n> <member>Zihong Yu</member>\n> \n> I did not spot any others.\n\nFixed.\n\n\n",
"msg_date": "Sun, 27 Aug 2023 20:34:07 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: list of acknowledgments for PG16"
},
{
"msg_contents": "On 23.08.23 09:13, Denis Laxalde wrote:\n> Peter Eisentraut a écrit :\n>> The list of acknowledgments for the PG16 release notes has been\n>> committed. It should show up here sometime:\n>> <https://www.postgresql.org/docs/16/release-16.html#RELEASE-16-ACKNOWLEDGEMENTS>.\n>> As usual, please check for problems such as wrong sorting, duplicate\n>> names in different variants, or names in the wrong order etc. (Our\n>> convention is given name followed by surname.)\n>>\n> \n> \"Gabriele Varrazzo\" is mentioned in commit \n> 0032a5456708811ca95bd80a538f4fb72ad0dd20 but it should be \"Daniele \n> Varrazzo\" (per Discussion link in commit message); the later is already \n> in the list.\n\nFixed.\n\n\n",
"msg_date": "Sun, 27 Aug 2023 20:34:27 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: list of acknowledgments for PG16"
},
{
"msg_contents": "I'm not completely sure what should be in this list, but maybe also\ntuplesort extensibility [1]? [1]\nhttps://www.postgresql.org/message-id/flat/CALT9ZEHjgO_r2cFr35%3Du9xZa6Ji2e7oVfSEBRBj0Gc%2BtJjTxSg%40mail.gmail.com#201dc4202af38f224a1e3acc78795199\n\n\n",
"msg_date": "Sun, 27 Aug 2023 23:55:47 +0400",
"msg_from": "Pavel Borisov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: list of acknowledgments for PG16"
},
{
"msg_contents": "On 8/22/23 16:24, Peter Eisentraut wrote:\n> On 22.08.23 15:29, Tom Lane wrote:\n>> Alvaro Herrera <[email protected]> writes:\n>>> Yeah, I've been proposing this kind of thing for many years; the\n>>> problem, until not long ago, was that the tooling was unable to process\n>>> non-Latin1 characters in all the output formats that we use. But\n>>> tooling has changed and the oldest platforms have disappeared, so maybe\n>>> it works now; do you want to inject some Chinese, Cyrillic, Japanese\n>>> names and give it a spin? At least HTML and PDF need to work correctly.\n>>\n>> I'm pretty sure the PDF toolchain still fails on non-Latin1 characters.\n>> At least it does the way I have it installed; maybe adding some\n>> non-default dependencies would help?\n> \n> See here: \n> https://www.postgresql.org/message-id/[email protected]\n\nI applied that patch, and it works for Cyrillic text, but not for \nJapanese. I am trying to figure out how to make it use a secondary \nfont, but that might take me a while.\n-- \nVik Fearing\n\n\n\n",
"msg_date": "Mon, 28 Aug 2023 20:36:09 +0200",
"msg_from": "Vik Fearing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: list of acknowledgments for PG16"
},
{
"msg_contents": "On 2023-Aug-27, Peter Eisentraut wrote:\n\n> On 22.08.23 15:48, Vik Fearing wrote:\n\n> > I think these might be the same person:\n> > \n> > <member>Zhihong Yu</member>\n> > <member>Zihong Yu</member>\n> > \n> > I did not spot any others.\n> \n> Fixed.\n\nHm, I noticed we also list Ted Yu, but that's the same person as Zhihong Yu.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Mon, 16 Oct 2023 15:46:40 +0200",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: list of acknowledgments for PG16"
},
{
"msg_contents": "On 16.10.23 15:46, Alvaro Herrera wrote:\n> On 2023-Aug-27, Peter Eisentraut wrote:\n> \n>> On 22.08.23 15:48, Vik Fearing wrote:\n> \n>>> I think these might be the same person:\n>>>\n>>> <member>Zhihong Yu</member>\n>>> <member>Zihong Yu</member>\n>>>\n>>> I did not spot any others.\n>>\n>> Fixed.\n> \n> Hm, I noticed we also list Ted Yu, but that's the same person as Zhihong Yu.\n\nfixed\n\n\n\n",
"msg_date": "Thu, 19 Oct 2023 10:39:45 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: list of acknowledgments for PG16"
},
{
"msg_contents": "Hi,\n\n> On Aug 22, 2023, at 17:33, Peter Eisentraut <[email protected]> wrote:\n> \n> The list of acknowledgments for the PG16 release notes has been committed. It should show up here sometime: <https://www.postgresql.org/docs/16/release-16.html#RELEASE-16-ACKNOWLEDGEMENTS>. As usual, please check for problems such as wrong sorting, duplicate names in different variants, or names in the wrong order etc. (Our convention is given name followed by surname.)\n> \n> \n\nCould you help me with Mingli Zhang -> Zhang Mingli\n\nThanks.\n\nZhang Mingli\nHashData https://www.hashdata.xyz\n\n\nHi,On Aug 22, 2023, at 17:33, Peter Eisentraut <[email protected]> wrote:The list of acknowledgments for the PG16 release notes has been committed. It should show up here sometime: <https://www.postgresql.org/docs/16/release-16.html#RELEASE-16-ACKNOWLEDGEMENTS>. As usual, please check for problems such as wrong sorting, duplicate names in different variants, or names in the wrong order etc. (Our convention is given name followed by surname.)Could you help me with Mingli Zhang -> Zhang MingliThanks.\nZhang MingliHashData https://www.hashdata.xyz",
"msg_date": "Fri, 20 Oct 2023 08:08:35 +0800",
"msg_from": "Zhang Mingli <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: list of acknowledgments for PG16"
}
] |
[
{
"msg_contents": "Hi,\n\nWhile testing the logical snapshot restore functionality, I noticed the\ndata size reported in the error message seems not correct.\n\nI think it's because we used a const value here:\n\nSnapBuildRestoreContents(int fd, char *dest, Size size, const char *path)\n...\n\treadBytes = read(fd, dest, size);\n\tpgstat_report_wait_end();\n\tif (readBytes != size)\n...\n\t\t\tereport(ERROR,\n\t\t\t\t\t(errcode(ERRCODE_DATA_CORRUPTED),\n\t\t\t\t\t errmsg(\"could not read file \\\"%s\\\": read %d of %zu\",\n**\t\t\t\t\t\t\tpath, readBytes, * sizeof(SnapBuild) *)));\n\nI think we need to pass the size here.\n\nAttach a small patch to fix this. BTW, the error message exists in HEAD ~ PG10.\n\nBest Regards,\nHou zj",
"msg_date": "Tue, 22 Aug 2023 12:39:09 +0000",
"msg_from": "\"Zhijie Hou (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Fix the error message when failing to restore the snapshot"
},
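In other words, the correction being proposed is presumably just to report the size that was actually requested instead of the unrelated sizeof(SnapBuild), i.e. something along these lines inside SnapBuildRestoreContents() (readBytes is an int and size is a Size in that function, so the %d/%zu format specifiers stay as they are):

			ereport(ERROR,
					(errcode(ERRCODE_DATA_CORRUPTED),
					 errmsg("could not read file \"%s\": read %d of %zu",
							path, readBytes, size)));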
{
"msg_contents": "On Tue, Aug 22, 2023 at 6:09 PM Zhijie Hou (Fujitsu)\n<[email protected]> wrote:\n>\n> While testing the logical snapshot restore functionality, I noticed the\n> data size reported in the error message seems not correct.\n>\n> I think it's because we used a const value here:\n>\n> SnapBuildRestoreContents(int fd, char *dest, Size size, const char *path)\n> ...\n> readBytes = read(fd, dest, size);\n> pgstat_report_wait_end();\n> if (readBytes != size)\n> ...\n> ereport(ERROR,\n> (errcode(ERRCODE_DATA_CORRUPTED),\n> errmsg(\"could not read file \\\"%s\\\": read %d of %zu\",\n> ** path, readBytes, * sizeof(SnapBuild) *)));\n>\n> I think we need to pass the size here.\n>\n\nGood catch. I'll take care of this.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 22 Aug 2023 18:37:44 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix the error message when failing to restore the snapshot"
}
] |
[
{
"msg_contents": "This started out as a small patch to make pg_controldata use the logging \nAPI instead of printf statements, and then it became a larger patch to \nadjust error and warning messages about invalid WAL segment sizes \n(IsValidWalSegSize()) across the board. I went through and made the \nprimary messages more compact and made the detail messages uniform. In \ninitdb.c and pg_resetwal.c, I use the newish option_parse_int() to \nsimplify some of the option parsing. For the backend GUC \nwal_segment_size, I added a GUC check hook to do the verification \ninstead of coding it in bootstrap.c. This might be overkill, but that \nway the check is in the right place and it becomes more self-documenting.",
"msg_date": "Tue, 22 Aug 2023 15:44:14 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Make error messages about WAL segment size more consistent"
},
{
"msg_contents": "Hi Peter,\n\n> This started out as a small patch to make pg_controldata use the logging\n> API instead of printf statements, and then it became a larger patch to\n> adjust error and warning messages about invalid WAL segment sizes\n> (IsValidWalSegSize()) across the board.\n\nThanks for working on this.\n\n> I went through and made the\n> primary messages more compact and made the detail messages uniform. In\n> initdb.c and pg_resetwal.c, I use the newish option_parse_int() to\n> simplify some of the option parsing. For the backend GUC\n> wal_segment_size, I added a GUC check hook to do the verification\n> instead of coding it in bootstrap.c. This might be overkill, but that\n> way the check is in the right place and it becomes more self-documenting.\n\nI reviewed the code and tested it on Linux and MacOS with Autotools\nand Meson. The patch LGTM.\n\n--\nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Tue, 22 Aug 2023 17:26:04 +0300",
"msg_from": "Aleksander Alekseev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Make error messages about WAL segment size more consistent"
},
{
"msg_contents": "On 22.08.23 16:26, Aleksander Alekseev wrote:\n> Hi Peter,\n> \n>> This started out as a small patch to make pg_controldata use the logging\n>> API instead of printf statements, and then it became a larger patch to\n>> adjust error and warning messages about invalid WAL segment sizes\n>> (IsValidWalSegSize()) across the board.\n> \n> Thanks for working on this.\n> \n>> I went through and made the\n>> primary messages more compact and made the detail messages uniform. In\n>> initdb.c and pg_resetwal.c, I use the newish option_parse_int() to\n>> simplify some of the option parsing. For the backend GUC\n>> wal_segment_size, I added a GUC check hook to do the verification\n>> instead of coding it in bootstrap.c. This might be overkill, but that\n>> way the check is in the right place and it becomes more self-documenting.\n> \n> I reviewed the code and tested it on Linux and MacOS with Autotools\n> and Meson. The patch LGTM.\n\nThanks, committed.\n\n\n\n",
"msg_date": "Mon, 28 Aug 2023 15:26:35 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Make error messages about WAL segment size more consistent"
}
] |
[
{
"msg_contents": "Hi,\n\nA couple of times a day, cfbot reports an error like this:\n\nhttps://cirrus-ci.com/task/6424286882168832\n\nI didn't study it closely but it looks like there might be a second\ndeadlock, after the one that is expected by the test? Examples from\nthe past couple of weeks:\n\ncfbot=> select test.task_id, task.task_name, task.created as time from\ntest join task using (task_id) where suite = 'subscription' and name =\n'015_stream' and result = 'ERROR' and task.created > now() - interval\n'14 days' order by task.created desc;\n task_id | task_name\n------------------+------------------------------------------------\n 6600867550330880 | Windows - Server 2019, VS 2019 - Meson & ninja\n 6222795470798848 | Windows - Server 2019, VS 2019 - Meson & ninja\n 6162088322662400 | macOS - Ventura - Meson\n 5014781862608896 | Windows - Server 2019, VS 2019 - Meson & ninja\n 6424286882168832 | Windows - Server 2019, VS 2019 - Meson & ninja\n 6683705222103040 | Windows - Server 2019, VS 2019 - Meson & ninja\n 5881665076068352 | Windows - Server 2019, VS 2019 - Meson & ninja\n 5865929054093312 | Windows - Server 2019, VS 2019 - Meson & ninja\n 5855144995192832 | Windows - Server 2019, VS 2019 - Meson & ninja\n 6071567994585088 | Windows - Server 2019, VS 2019 - Meson & ninja\n 5311312343859200 | Windows - Server 2019, VS 2019 - Meson & ninja\n 4986100071006208 | FreeBSD - 13 - Meson\n 6302301388800000 | Windows - Server 2019, VS 2019 - Meson & ninja\n 4554627119579136 | Windows - Server 2019, VS 2019 - Meson & ninja\n 6106090807492608 | Windows - Server 2019, VS 2019 - Meson & ninja\n 5190113534148608 | Windows - Server 2019, VS 2019 - Meson & ninja\n 6452324697112576 | Windows - Server 2019, VS 2019 - Meson & ninja\n 4610228927332352 | Windows - Server 2019, VS 2019 - Meson & ninja\n 4928567608344576 | Windows - Server 2019, VS 2019 - Meson & ninja\n 5705451157848064 | FreeBSD - 13 - Meson\n 5952066133164032 | Windows - Server 2019, VS 2019 - Meson & ninja\n 5341101565935616 | Windows - Server 2019, VS 2019 - Meson & ninja\n 6751165837213696 | Windows - Server 2019, VS 2019 - Meson & ninja\n 4624168109473792 | Windows - Server 2019, VS 2019 - Meson & ninja\n 6730863963013120 | Windows - Server 2019, VS 2019 - Meson & ninja\n 6174140269330432 | Windows - Server 2019, VS 2019 - Meson & ninja\n 4637318561136640 | Windows - Server 2019, VS 2019 - Meson & ninja\n 4535300303618048 | Windows - Server 2019, VS 2019 - Meson & ninja\n 5672693542944768 | FreeBSD - 13 - Meson\n 6087381225308160 | Windows - Server 2019, VS 2019 - Meson & ninja\n 6098413217906688 | Windows - Server 2019, VS 2019 - Meson & ninja\n 6130601380544512 | Windows - Server 2019, VS 2019 - Meson & ninja\n 5546054284738560 | Windows - Server 2019, VS 2019 - Meson & ninja\n 6674258676416512 | Windows - Server 2019, VS 2019 - Meson & ninja\n 4571976740634624 | FreeBSD - 13 - Meson\n 6566515328155648 | Windows - Server 2019, VS 2019 - Meson & ninja\n 6576879084240896 | Windows - Server 2019, VS 2019 - Meson & ninja\n 5295804827566080 | Windows - Server 2019, VS 2019 - Meson & ninja\n 6426387188285440 | Windows - Server 2019, VS 2019 - Meson & ninja\n 4763275859066880 | Windows - Server 2019, VS 2019 - Meson & ninja\n 6137227240013824 | Windows - Server 2019, VS 2019 - Meson & ninja\n 5185063273365504 | Windows - Server 2019, VS 2019 - Meson & ninja\n 6542656449282048 | Windows - Server 2019, VS 2019 - Meson & ninja\n 4874919171850240 | Windows - Server 2019, VS 2019 - Meson & ninja\n 6531290556530688 | Windows - Server 2019, 
VS 2019 - Meson & ninja\n 5377848232378368 | FreeBSD - 13 - Meson\n 6436049925177344 | FreeBSD - 13 - Meson\n 6057679748071424 | Windows - Server 2019, VS 2019 - Meson & ninja\n 5534694867992576 | Windows - Server 2019, VS 2019 - Meson & ninja\n 4831311429369856 | Windows - Server 2019, VS 2019 - Meson & ninja\n 4704271531245568 | macOS - Ventura - Meson\n 5297047549509632 | Windows - Server 2019, VS 2019 - Meson & ninja\n 5836487120388096 | macOS - Ventura - Meson\n 6527459915464704 | FreeBSD - 13 - Meson\n 4985483743199232 | FreeBSD - 13 - Meson\n 4583651082502144 | Linux - Debian Bullseye - Meson\n 5498444756811776 | FreeBSD - 13 - Meson\n 5146601035923456 | Windows - Server 2019, VS 2019 - Meson & ninja\n 5709550989344768 | macOS - Ventura - Meson\n 6357936616767488 | FreeBSD - 13 - Meson\n(60 rows)\n\n\n",
"msg_date": "Wed, 23 Aug 2023 08:21:56 +1200",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": true,
"msg_subject": "subscription/015_stream sometimes breaks"
},
{
"msg_contents": "On Wed, Aug 23, 2023 at 8:21 AM Thomas Munro <[email protected]> wrote:\n> I didn't study it closely but it looks like there might be a second\n> deadlock, after the one that is expected by the test? Examples from\n> the past couple of weeks:\n\nI should add, it's not correlated with the patches that cfbot is\ntesting, and it's the most frequent failure for which that is the\ncase.\n\n suite | name | distinct_patches | errors\n--------------+------------+------------------+--------\n subscription | 015_stream | 47 | 61\n\n\n",
"msg_date": "Wed, 23 Aug 2023 08:54:45 +1200",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: subscription/015_stream sometimes breaks"
},
{
"msg_contents": "On Wednesday, August 23, 2023 4:55 AM Thomas Munro <[email protected]> wrote:\r\n> \r\n> On Wed, Aug 23, 2023 at 8:21 AM Thomas Munro\r\n> <[email protected]> wrote:\r\n> > I didn't study it closely but it looks like there might be a second\r\n> > deadlock, after the one that is expected by the test? Examples from\r\n> > the past couple of weeks:\r\n> \r\n> I should add, it's not correlated with the patches that cfbot is testing, and it's\r\n> the most frequent failure for which that is the case.\r\n> \r\n> suite | name | distinct_patches | errors\r\n> --------------+------------+------------------+--------\r\n> subscription | 015_stream | 47 | 61\r\n> \r\n\r\nThanks for reporting !\r\nI am researching the failure and will share my analysis.\r\n\r\nBest Regards,\r\nHou zj\r\n",
"msg_date": "Wed, 23 Aug 2023 02:27:24 +0000",
"msg_from": "\"Zhijie Hou (Fujitsu)\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: subscription/015_stream sometimes breaks"
},
{
"msg_contents": "On Wed, 23 Aug 2023 at 02:25, Thomas Munro <[email protected]> wrote:\n>\n> On Wed, Aug 23, 2023 at 8:21 AM Thomas Munro <[email protected]> wrote:\n> > I didn't study it closely but it looks like there might be a second\n> > deadlock, after the one that is expected by the test? Examples from\n> > the past couple of weeks:\n>\n> I should add, it's not correlated with the patches that cfbot is\n> testing, and it's the most frequent failure for which that is the\n> case.\n>\n> suite | name | distinct_patches | errors\n> --------------+------------+------------------+--------\n> subscription | 015_stream | 47 | 61\n\nI had noticed that it is failing because of a segmentation fault:\n2023-08-22 19:07:22.403 UTC [3823023][logical replication parallel\nworker][4/44:767] FATAL: terminating logical replication worker due\nto administrator command\n2023-08-22 19:07:22.403 UTC [3823023][logical replication parallel\nworker][4/44:767] CONTEXT: processing remote data for replication\norigin \"pg_16397\" during message type \"STREAM STOP\" in transaction 748\n2023-08-22 19:07:22.404 UTC [3819892][postmaster][:0] DEBUG:\nunregistering background worker \"logical replication parallel apply\nworker for subscription 16397\"\n2023-08-22 19:07:22.404 UTC [3819892][postmaster][:0] LOG: background\nworker \"logical replication parallel worker\" (PID 3823455) exited with\nexit code 1\n2023-08-22 19:07:22.404 UTC [3819892][postmaster][:0] DEBUG:\nunregistering background worker \"logical replication parallel apply\nworker for subscription 16397\"\n2023-08-22 19:07:22.404 UTC [3819892][postmaster][:0] LOG: background\nworker \"logical replication parallel worker\" (PID 3823023) exited with\nexit code 1\n2023-08-22 19:07:22.419 UTC [3819892][postmaster][:0] LOG: background\nworker \"logical replication apply worker\" (PID 3822876) was terminated\nby signal 11: Segmentation fault\n\nThe stack trace for the same generated at [1] is:\nCore was generated by `postgres: subscriber: logical replication apply\nworker for subscription 16397 '.\nProgram terminated with signal SIGSEGV, Segmentation fault.\n\nwarning: Section `.reg-xstate/3822876' in core file too small.\n#0 0x00000000007b461e in logicalrep_worker_stop_internal\n(worker=<optimized out>, signo=<optimized out>) at\n/home/bf/bf-build/dragonet/HEAD/pgsql.build/../pgsql/src/backend/replication/logical/launcher.c:583\n583 kill(worker->proc->pid, signo);\n#0 0x00000000007b461e in logicalrep_worker_stop_internal\n(worker=<optimized out>, signo=<optimized out>) at\n/home/bf/bf-build/dragonet/HEAD/pgsql.build/../pgsql/src/backend/replication/logical/launcher.c:583\n#1 0x00000000007b565a in logicalrep_worker_detach () at\n/home/bf/bf-build/dragonet/HEAD/pgsql.build/../pgsql/src/backend/replication/logical/launcher.c:774\n#2 0x00000000007b49ff in logicalrep_worker_onexit (code=<optimized\nout>, arg=<optimized out>) at\n/home/bf/bf-build/dragonet/HEAD/pgsql.build/../pgsql/src/backend/replication/logical/launcher.c:829\n#3 0x00000000008034c5 in shmem_exit (code=<optimized out>) at\n/home/bf/bf-build/dragonet/HEAD/pgsql.build/../pgsql/src/backend/storage/ipc/ipc.c:239\n#4 0x00000000008033dc in proc_exit_prepare (code=1) at\n/home/bf/bf-build/dragonet/HEAD/pgsql.build/../pgsql/src/backend/storage/ipc/ipc.c:194\n#5 0x000000000080333d in proc_exit (code=1) at\n/home/bf/bf-build/dragonet/HEAD/pgsql.build/../pgsql/src/backend/storage/ipc/ipc.c:107\n#6 0x0000000000797068 in StartBackgroundWorker () 
at\n/home/bf/bf-build/dragonet/HEAD/pgsql.build/../pgsql/src/backend/postmaster/bgworker.c:827\n#7 0x000000000079f257 in do_start_bgworker (rw=0x284e750) at\n/home/bf/bf-build/dragonet/HEAD/pgsql.build/../pgsql/src/backend/postmaster/postmaster.c:5734\n#8 0x000000000079b541 in maybe_start_bgworkers () at\n/home/bf/bf-build/dragonet/HEAD/pgsql.build/../pgsql/src/backend/postmaster/postmaster.c:5958\n#9 0x000000000079cb51 in process_pm_pmsignal () at\n/home/bf/bf-build/dragonet/HEAD/pgsql.build/../pgsql/src/backend/postmaster/postmaster.c:5121\n#10 0x000000000079b6bb in ServerLoop () at\n/home/bf/bf-build/dragonet/HEAD/pgsql.build/../pgsql/src/backend/postmaster/postmaster.c:1769\n#11 0x000000000079aaa5 in PostmasterMain (argc=4, argv=<optimized\nout>) at /home/bf/bf-build/dragonet/HEAD/pgsql.build/../pgsql/src/backend/postmaster/postmaster.c:1462\n#12 0x00000000006d82a0 in main (argc=4, argv=0x27e3fd0) at\n/home/bf/bf-build/dragonet/HEAD/pgsql.build/../pgsql/src/backend/main/main.c:198\n$1 = {si_signo = 11, si_errno = 0, si_code = 1, _sifields = {_pad =\n{64, 0 <repeats 27 times>}, _kill = {si_pid = 64, si_uid = 0}, _timer\n= {si_tid = 64, si_overrun = 0, si_sigval = {sival_int = 0, sival_ptr\n= 0x0}}, _rt = {si_pid = 64, si_uid = 0, si_sigval = {sival_int = 0,\nsival_ptr = 0x0}}, _sigchld = {si_pid = 64, si_uid = 0, si_status = 0,\nsi_utime = 0, si_stime = 0}, _sigfault = {si_addr = 0x40, _addr_lsb =\n0, _addr_bnd = {_lower = 0x0, _upper = 0x0}}, _sigpoll = {si_band =\n64, si_fd = 0}, _sigsys = {_call_addr = 0x40, _syscall = 0, _arch =\n0}}}\n\n[1] - https://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=dragonet&dt=2023-08-22%2018%3A56%3A04&stg=subscription-check\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Wed, 23 Aug 2023 08:46:51 +0530",
"msg_from": "vignesh C <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: subscription/015_stream sometimes breaks"
},
{
"msg_contents": "> -----Original Message-----\r\n> From: Zhijie Hou (Fujitsu) <[email protected]>\r\n> Sent: Wednesday, August 23, 2023 10:27 AM\r\n> To: Thomas Munro <[email protected]>\r\n> Cc: Amit Kapila <[email protected]>; pgsql-hackers\r\n> <[email protected]>\r\n> Subject: RE: subscription/015_stream sometimes breaks\r\n> \r\n> On Wednesday, August 23, 2023 4:55 AM Thomas Munro\r\n> <[email protected]> wrote:\r\n> >\r\n> > On Wed, Aug 23, 2023 at 8:21 AM Thomas Munro\r\n> <[email protected]>\r\n> > wrote:\r\n> > > I didn't study it closely but it looks like there might be a second\r\n> > > deadlock, after the one that is expected by the test? Examples from\r\n> > > the past couple of weeks:\r\n> >\r\n> > I should add, it's not correlated with the patches that cfbot is\r\n> > testing, and it's the most frequent failure for which that is the case.\r\n> >\r\n> > suite | name | distinct_patches | errors\r\n> > --------------+------------+------------------+--------\r\n> > subscription | 015_stream | 47 | 61\r\n> >\r\n> \r\n> Thanks for reporting !\r\n> I am researching the failure and will share my analysis.\r\n\r\nHi,\r\n\r\nAfter an off-list discussion with Amit, we figured out the reason.\r\nFrom the crash log, I can see the apply worker crashed when accessing the\r\nworker->proc, so I think it's because the work->proc has been released.\r\n\r\n 577: \t/* Now terminate the worker ... */\r\n> 578: \tkill(worker->proc->pid, signo);\r\n 579: \r\n 580: \t/* ... and wait for it to die. */\r\n\r\nNormally, this should not happen because we take a lock on LogicalRepWorkerLock\r\nwhen shutting all the parallel workers[1] which can prevent concurrent worker\r\nto free the worker info. But in logicalrep_worker_stop_internal(), when\r\nstopping parallel worker #1, we will release the lock shortly. and at this\r\ntiming it's possible that another parallel worker #2 which reported an ERROR\r\nwill shutdown by itself and free the worker->proc. So when we try to stop that\r\nparallel worker #2 in next round, we didn't realize it has been closed,\r\nthus accessing invalid memory(worker->proc).\r\n\r\n[1]--\r\n\t\tLWLockAcquire(LogicalRepWorkerLock, LW_SHARED);\r\n\r\n\t\tworkers = logicalrep_workers_find(MyLogicalRepWorker->subid, true);\r\n\t\tforeach(lc, workers)\r\n\t\t{\r\n\t\t\tLogicalRepWorker *w = (LogicalRepWorker *) lfirst(lc);\r\n\r\n**\t\t\tif (isParallelApplyWorker(w))\r\n\t\t\t\tlogicalrep_worker_stop_internal(w, SIGTERM);\r\n\t\t}\r\n--\r\n\r\nThe bug happens after commit 2a8b40e where isParallelApplyWorker() start to use\r\nthe worker->type to check but we forgot to reset the worker type at worker exit\r\ntime. So, even if the worker #2 has shutdown, the worker_type is still valid\r\nand we try to stop it again.\r\n\r\nPreviously, the isParallelApplyWorker() used the worker->leader_pid which will\r\nbe reset when the worker exits, so the \"if (isParallelApplyWorker(w))\" won't pass\r\nin this case and we don't try to stop the worker #2.\r\n\r\nTo fix it I think we need to reset the worker type at exit as well.\r\nAttach the patch which does the same. I am also testing it locally\r\nto see if there are other issues here.\r\n\r\nBest Regards,\r\nHou Zhijie",
"msg_date": "Wed, 23 Aug 2023 05:56:56 +0000",
"msg_from": "\"Zhijie Hou (Fujitsu)\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: subscription/015_stream sometimes breaks"
},
{
"msg_contents": "On 2023-Aug-23, Zhijie Hou (Fujitsu) wrote:\n\n> [1]--\n> \t\tLWLockAcquire(LogicalRepWorkerLock, LW_SHARED);\n> \n> \t\tworkers = logicalrep_workers_find(MyLogicalRepWorker->subid, true);\n> \t\tforeach(lc, workers)\n> \t\t{\n> \t\t\tLogicalRepWorker *w = (LogicalRepWorker *) lfirst(lc);\n> \n> **\t\t\tif (isParallelApplyWorker(w))\n> \t\t\t\tlogicalrep_worker_stop_internal(w, SIGTERM);\n> \t\t}\n\nHmm, I think if worker->in_use is false, we shouldn't consult the rest\nof the struct at all, so I propose to add the attached 0001 as a minimal\nfix.\n\nIn fact, I'd go further and propose that if we do take that stance, then\nwe don't need clear out the contents of this struct at all, so let's\nnot. That's 0002.\n\nAnd the reason 0002 does not remove the zeroing of ->proc is that the\ntests gets stuck when I do that, and the reason for that looks to be\nsome shoddy coding in WaitForReplicationWorkerAttach, so I propose we\nchange that too, as in 0003.\n\nThoughts?\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/",
"msg_date": "Wed, 23 Aug 2023 10:00:56 +0200",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: subscription/015_stream sometimes breaks"
},
{
"msg_contents": "On 2023-Aug-23, Alvaro Herrera wrote:\n\n> And the reason 0002 does not remove the zeroing of ->proc is that the\n> tests gets stuck when I do that, and the reason for that looks to be\n> some shoddy coding in WaitForReplicationWorkerAttach, so I propose we\n> change that too, as in 0003.\n\nHmm, actually the test got stuck when I ran it repeatedly with this\n0003. I guess there might be other places that depend on ->proc being\nset to NULL on exit.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"Most hackers will be perfectly comfortable conceptualizing users as entropy\n sources, so let's move on.\" (Nathaniel Smith)\n\n\n",
"msg_date": "Wed, 23 Aug 2023 10:44:08 +0200",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: subscription/015_stream sometimes breaks"
},
{
"msg_contents": "On Wed, Aug 23, 2023 at 1:31 PM Alvaro Herrera <[email protected]> wrote:\n>\n> On 2023-Aug-23, Zhijie Hou (Fujitsu) wrote:\n>\n> > [1]--\n> > LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);\n> >\n> > workers = logicalrep_workers_find(MyLogicalRepWorker->subid, true);\n> > foreach(lc, workers)\n> > {\n> > LogicalRepWorker *w = (LogicalRepWorker *) lfirst(lc);\n> >\n> > ** if (isParallelApplyWorker(w))\n> > logicalrep_worker_stop_internal(w, SIGTERM);\n> > }\n>\n> Hmm, I think if worker->in_use is false, we shouldn't consult the rest\n> of the struct at all, so I propose to add the attached 0001 as a minimal\n> fix.\n>\n\nI think that way we may need to add the check for in_use before\naccessing each of the LogicalRepWorker struct fields or form some rule\nabout which fields (or places) are okay to access without checking\nin_use field.\n\n> In fact, I'd go further and propose that if we do take that stance, then\n> we don't need clear out the contents of this struct at all, so let's\n> not. That's 0002.\n>\n> And the reason 0002 does not remove the zeroing of ->proc is that the\n> tests gets stuck when I do that, and the reason for that looks to be\n> some shoddy coding in WaitForReplicationWorkerAttach, so I propose we\n> change that too, as in 0003.\n>\n\nPersonally, I think we should consider this change (0002 and 0002) separately.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 24 Aug 2023 09:15:36 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: subscription/015_stream sometimes breaks"
},
{
"msg_contents": "On 2023-Aug-24, Amit Kapila wrote:\n\n> On Wed, Aug 23, 2023 at 1:31 PM Alvaro Herrera <[email protected]> wrote:\n\n> > Hmm, I think if worker->in_use is false, we shouldn't consult the rest\n> > of the struct at all, so I propose to add the attached 0001 as a minimal\n> > fix.\n> \n> I think that way we may need to add the check for in_use before\n> accessing each of the LogicalRepWorker struct fields or form some rule\n> about which fields (or places) are okay to access without checking\n> in_use field.\n\nAs far as I realize, we have that rule already. It's only a few\nrelatively new places that have broken it. I understand that the in_use\nconcept comes from the one of the same name in ReplicationSlot, except\nthat it is not at all documented in worker_internal.h.\n\nSo I propose we do both: apply Zhijie's patch and my 0001 now; and\nsomebody gets to document the locking design for LogicalRepWorker.\n\n> > In fact, I'd go further and propose that if we do take that stance, then\n> > we don't need clear out the contents of this struct at all, so let's\n> > not. That's 0002.\n> \n> Personally, I think we should consider this change (0002 and 0002) separately.\n\nI agree. I'd maybe even retract them.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Thu, 24 Aug 2023 09:50:44 +0200",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: subscription/015_stream sometimes breaks"
},
{
"msg_contents": "On Thu, Aug 24, 2023 at 1:20 PM Alvaro Herrera <[email protected]> wrote:\n>\n> On 2023-Aug-24, Amit Kapila wrote:\n>\n> > On Wed, Aug 23, 2023 at 1:31 PM Alvaro Herrera <[email protected]> wrote:\n>\n> > > Hmm, I think if worker->in_use is false, we shouldn't consult the rest\n> > > of the struct at all, so I propose to add the attached 0001 as a minimal\n> > > fix.\n> >\n> > I think that way we may need to add the check for in_use before\n> > accessing each of the LogicalRepWorker struct fields or form some rule\n> > about which fields (or places) are okay to access without checking\n> > in_use field.\n>\n> As far as I realize, we have that rule already. It's only a few\n> relatively new places that have broken it. I understand that the in_use\n> concept comes from the one of the same name in ReplicationSlot, except\n> that it is not at all documented in worker_internal.h.\n>\n> So I propose we do both: apply Zhijie's patch and my 0001 now; and\n> somebody gets to document the locking design for LogicalRepWorker.\n>\n\nAgreed.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 24 Aug 2023 15:48:02 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: subscription/015_stream sometimes breaks"
},
{
"msg_contents": "On Thu, Aug 24, 2023 at 8:18 PM Amit Kapila <[email protected]> wrote:\n>\n> On Thu, Aug 24, 2023 at 1:20 PM Alvaro Herrera <[email protected]> wrote:\n> >\n> > On 2023-Aug-24, Amit Kapila wrote:\n> >\n> > > On Wed, Aug 23, 2023 at 1:31 PM Alvaro Herrera <[email protected]> wrote:\n> >\n> > > > Hmm, I think if worker->in_use is false, we shouldn't consult the rest\n> > > > of the struct at all, so I propose to add the attached 0001 as a minimal\n> > > > fix.\n> > >\n> > > I think that way we may need to add the check for in_use before\n> > > accessing each of the LogicalRepWorker struct fields or form some rule\n> > > about which fields (or places) are okay to access without checking\n> > > in_use field.\n> >\n> > As far as I realize, we have that rule already. It's only a few\n> > relatively new places that have broken it. I understand that the in_use\n> > concept comes from the one of the same name in ReplicationSlot, except\n> > that it is not at all documented in worker_internal.h.\n> >\n> > So I propose we do both: apply Zhijie's patch and my 0001 now; and\n> > somebody gets to document the locking design for LogicalRepWorker.\n> >\n>\n> Agreed.\n\nBoth of these patches (Hou-san's expedient resetting of the worker\ntype, Alvaro's 0001 putting the 'in_use' check within the isXXXWorker\ntype macros) appear to be blending the concept of \"type\" with whether\nthe worker is \"alive\" or not, which I am not sure is a good thing. IMO\nthe type is the type forever, so I felt type should get assigned only\nonce when the worker is \"born\". For example, a dead horse is still a\nhorse.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Fri, 25 Aug 2023 13:39:04 +1000",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: subscription/015_stream sometimes breaks"
},
{
"msg_contents": "On Fri, Aug 25, 2023 at 9:09 AM Peter Smith <[email protected]> wrote:\n>\n> On Thu, Aug 24, 2023 at 8:18 PM Amit Kapila <[email protected]> wrote:\n> >\n> > On Thu, Aug 24, 2023 at 1:20 PM Alvaro Herrera <[email protected]> wrote:\n> > >\n> > > On 2023-Aug-24, Amit Kapila wrote:\n> > >\n> > > > On Wed, Aug 23, 2023 at 1:31 PM Alvaro Herrera <[email protected]> wrote:\n> > >\n> > > > > Hmm, I think if worker->in_use is false, we shouldn't consult the rest\n> > > > > of the struct at all, so I propose to add the attached 0001 as a minimal\n> > > > > fix.\n> > > >\n> > > > I think that way we may need to add the check for in_use before\n> > > > accessing each of the LogicalRepWorker struct fields or form some rule\n> > > > about which fields (or places) are okay to access without checking\n> > > > in_use field.\n> > >\n> > > As far as I realize, we have that rule already. It's only a few\n> > > relatively new places that have broken it. I understand that the in_use\n> > > concept comes from the one of the same name in ReplicationSlot, except\n> > > that it is not at all documented in worker_internal.h.\n> > >\n> > > So I propose we do both: apply Zhijie's patch and my 0001 now; and\n> > > somebody gets to document the locking design for LogicalRepWorker.\n> > >\n> >\n> > Agreed.\n>\n> Both of these patches (Hou-san's expedient resetting of the worker\n> type, Alvaro's 0001 putting the 'in_use' check within the isXXXWorker\n> type macros) appear to be blending the concept of \"type\" with whether\n> the worker is \"alive\" or not, which I am not sure is a good thing. IMO\n> the type is the type forever, so I felt type should get assigned only\n> once when the worker is \"born\". For example, a dead horse is still a\n> horse.\n>\n\nI think it is important to have a alive check before accessing the\nworker type as we are doing for some of the other fields. For example,\nsee the usage of in_use flag in the function logicalrep_worker_find().\nThe usage of parallel apply workers doesn't consider the use of in_use\nflag where as other worker types would first check in_use flag.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 25 Aug 2023 09:25:47 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: subscription/015_stream sometimes breaks"
},
{
"msg_contents": "On Thu, Aug 24, 2023 at 3:48 PM Amit Kapila <[email protected]> wrote:\n>\n> On Thu, Aug 24, 2023 at 1:20 PM Alvaro Herrera <[email protected]> wrote:\n> >\n> > On 2023-Aug-24, Amit Kapila wrote:\n> >\n> > > On Wed, Aug 23, 2023 at 1:31 PM Alvaro Herrera <[email protected]> wrote:\n> >\n> > > > Hmm, I think if worker->in_use is false, we shouldn't consult the rest\n> > > > of the struct at all, so I propose to add the attached 0001 as a minimal\n> > > > fix.\n> > >\n> > > I think that way we may need to add the check for in_use before\n> > > accessing each of the LogicalRepWorker struct fields or form some rule\n> > > about which fields (or places) are okay to access without checking\n> > > in_use field.\n> >\n> > As far as I realize, we have that rule already. It's only a few\n> > relatively new places that have broken it. I understand that the in_use\n> > concept comes from the one of the same name in ReplicationSlot, except\n> > that it is not at all documented in worker_internal.h.\n> >\n> > So I propose we do both: apply Zhijie's patch and my 0001 now; and\n> > somebody gets to document the locking design for LogicalRepWorker.\n> >\n>\n> Agreed.\n>\n\nPushed both the patches.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 25 Aug 2023 15:45:17 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: subscription/015_stream sometimes breaks"
},
{
"msg_contents": "On Fri, Aug 25, 2023 at 8:15 PM Amit Kapila <[email protected]> wrote:\n>\n> On Thu, Aug 24, 2023 at 3:48 PM Amit Kapila <[email protected]> wrote:\n> >\n> > On Thu, Aug 24, 2023 at 1:20 PM Alvaro Herrera <[email protected]> wrote:\n> > >\n> > > On 2023-Aug-24, Amit Kapila wrote:\n> > >\n> > > > On Wed, Aug 23, 2023 at 1:31 PM Alvaro Herrera <[email protected]> wrote:\n> > >\n> > > > > Hmm, I think if worker->in_use is false, we shouldn't consult the rest\n> > > > > of the struct at all, so I propose to add the attached 0001 as a minimal\n> > > > > fix.\n> > > >\n> > > > I think that way we may need to add the check for in_use before\n> > > > accessing each of the LogicalRepWorker struct fields or form some rule\n> > > > about which fields (or places) are okay to access without checking\n> > > > in_use field.\n> > >\n> > > As far as I realize, we have that rule already. It's only a few\n> > > relatively new places that have broken it. I understand that the in_use\n> > > concept comes from the one of the same name in ReplicationSlot, except\n> > > that it is not at all documented in worker_internal.h.\n> > >\n> > > So I propose we do both: apply Zhijie's patch and my 0001 now; and\n> > > somebody gets to document the locking design for LogicalRepWorker.\n> > >\n> >\n> > Agreed.\n> >\n>\n> Pushed both the patches.\n>\n\nIMO there are inconsistencies in the second patch that was pushed.\n\n1. In the am_xxx functions, why is there Assert 'in_use' only for the\nAPPLY / PARALLEL_APPLY workers but not for TABLESYNC workers?\n\n2. In the am_xxx functions there is now Assert 'in_use', so why are we\nstill using macros to check again what we already asserted is not\npossible? (Or, if the checking overkill was a deliberate choice then\nwhy is there no isLeaderApplyWorker macro?)\n\n~\n\nPSA a small patch to address these.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia",
"msg_date": "Mon, 28 Aug 2023 10:04:45 +1000",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: subscription/015_stream sometimes breaks"
},
{
"msg_contents": "On Mon, Aug 28, 2023 at 5:35 AM Peter Smith <[email protected]> wrote:\n>\n> On Fri, Aug 25, 2023 at 8:15 PM Amit Kapila <[email protected]> wrote:\n>\n> IMO there are inconsistencies in the second patch that was pushed.\n>\n> 1. In the am_xxx functions, why is there Assert 'in_use' only for the\n> APPLY / PARALLEL_APPLY workers but not for TABLESYNC workers?\n>\n> 2. In the am_xxx functions there is now Assert 'in_use', so why are we\n> still using macros to check again what we already asserted is not\n> possible? (Or, if the checking overkill was a deliberate choice then\n> why is there no isLeaderApplyWorker macro?)\n>\n> ~\n>\n> PSA a small patch to address these.\n>\n\nI find your suggestions reasonable. Alvaro, do you have any comments?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 29 Aug 2023 09:48:11 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: subscription/015_stream sometimes breaks"
},
{
"msg_contents": "On 2023-Aug-29, Amit Kapila wrote:\n\n> On Mon, Aug 28, 2023 at 5:35 AM Peter Smith <[email protected]> wrote:\n> >\n> > On Fri, Aug 25, 2023 at 8:15 PM Amit Kapila <[email protected]> wrote:\n> >\n> > IMO there are inconsistencies in the second patch that was pushed.\n\n> I find your suggestions reasonable. Alvaro, do you have any comments?\n\nWell, my main comment is that at this point I'm not sure these\nisFooWorker() macros are worth their salt. It looks like we could\nreplace their uses with direct type comparisons in their callsites and\nremove them, with no loss of readability. The am_sth_worker() are\nprobably a bit more useful, though some of the callsites could end up\nbetter if replaced with straight type comparison.\n\nAll in all, I don't disagree with Peter's suggestions, but this is\npretty much in \"meh\" territory for me.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Tue, 29 Aug 2023 14:35:08 +0200",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: subscription/015_stream sometimes breaks"
},
{
"msg_contents": "On Tue, Aug 29, 2023 at 10:35 PM Alvaro Herrera <[email protected]> wrote:\n>\n> On 2023-Aug-29, Amit Kapila wrote:\n>\n> > On Mon, Aug 28, 2023 at 5:35 AM Peter Smith <[email protected]> wrote:\n> > >\n> > > On Fri, Aug 25, 2023 at 8:15 PM Amit Kapila <[email protected]> wrote:\n> > >\n> > > IMO there are inconsistencies in the second patch that was pushed.\n>\n> > I find your suggestions reasonable. Alvaro, do you have any comments?\n>\n> Well, my main comment is that at this point I'm not sure these\n> isFooWorker() macros are worth their salt. It looks like we could\n> replace their uses with direct type comparisons in their callsites and\n> remove them, with no loss of readability. The am_sth_worker() are\n> probably a bit more useful, though some of the callsites could end up\n> better if replaced with straight type comparison.\n>\n> All in all, I don't disagree with Peter's suggestions, but this is\n> pretty much in \"meh\" territory for me.\n\nI had written a small non-functional patch [1] to address some macro\ninconsistencies introduced by a prior patch of this thread.\n\nIt received initial feedback from Amit (\"I find your suggestions\nreasonable\") and from Alvaro (\"I don't disagree with Peter's\nsuggestions\") but then nothing further happened. I also created a CF\nentry https://commitfest.postgresql.org/46/4570/ for it.\n\nAFAIK my patch is still valid, but after 4 months of no activity it\nseems there is no interest in pushing it, so I am withdrawing the CF\nentry.\n\n======\n[1] https://www.postgresql.org/message-id/CAHut%2BPuwaF4Sb41pWQk69d2WO_ZJQpj-_2JkQvP%3D1jwozUpcCQ%40mail.gmail.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Wed, 10 Jan 2024 10:24:31 +1100",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: subscription/015_stream sometimes breaks"
}
] |
[
{
"msg_contents": "Hi,\r\n\r\nI Recently encountered a situation on the field in which the message\r\n“could not truncate directory \"pg_serial\": apparent wraparound”\r\nwas logged even through there was no danger of wraparound. This\r\nwas on a brand new cluster and only took a few minutes to see\r\nthe message in the logs.\r\n\r\nReading on some history of this error message, it appears that there\r\nwas work done to improve SLRU truncation and associated wraparound\r\nlog messages [1]. The attached repro on master still shows that this message\r\ncan be logged incorrectly.\r\n\r\nThe repro runs updates with 90 threads in serializable mode and kicks\r\noff a “long running” select on the same table in serializable mode.\r\n\r\nAs soon as the long running select commits, the next checkpoint fails\r\nto truncate the SLRU and logs the error message.\r\n\r\nBesides the confusing log message, there may also also be risk with\r\npg_serial getting unnecessarily bloated and depleting the disk space.\r\n\r\nIs this a bug?\r\n\r\n[1] https://www.postgresql.org/message-id/flat/20190202083822.GC32531%40gust.leadboat.com\r\n\r\nRegards,\r\n\r\nSami Imseih\r\nAmazon Web Services (AWS)",
"msg_date": "Wed, 23 Aug 2023 00:55:51 +0000",
"msg_from": "\"Imseih (AWS), Sami\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "\n =?utf-8?B?RmFsc2UgInBnX3NlcmlhbCI6IGFwcGFyZW50IHdyYXBhcm91bmTigJ0gaW4g?=\n =?utf-8?Q?logs?="
},
{
"msg_contents": "Hi,\r\n\r\nI dug a bit into this and what looks to be happening is the comparison\r\nof the page containing the latest cutoff xid could falsely be reported\r\nas in the future of the last page number because the latest\r\npage number of the Serial slru is only set when the page is\r\ninitialized [1].\r\n\r\nSo under the correct conditions, such as in the repro, the serializable\r\nXID has moved past the last page number, therefore to the next checkpoint\r\nwhich triggers a CheckPointPredicate, it will appear that the slru\r\nhas wrapped around.\r\n\r\nIt seems what may be needed here is to advance the\r\nlatest_page_number during SerialSetActiveSerXmin and if\r\nwe are using the SLRU. See below:\r\n\r\n\r\ndiff --git a/src/backend/storage/lmgr/predicate.c b/src/backend/storage/lmgr/predicate.c\r\nindex 1af41213b4..6946ed21b4 100644\r\n--- a/src/backend/storage/lmgr/predicate.c\r\n+++ b/src/backend/storage/lmgr/predicate.c\r\n@@ -992,6 +992,9 @@ SerialSetActiveSerXmin(TransactionId xid)\r\n\r\n serialControl->tailXid = xid;\r\n\r\n+ if (serialControl->headPage > 0)\r\n+ SerialSlruCtl->shared->latest_page_number = SerialPage(xid);\r\n+\r\n LWLockRelease(SerialSLRULock);\r\n}\r\n\r\n[1] https://github.com/postgres/postgres/blob/master/src/backend/access/transam/slru.c#L306\r\n\r\nRegards,\r\n\r\nSami\r\n\r\nFrom: \"Imseih (AWS), Sami\" <[email protected]>\r\nDate: Tuesday, August 22, 2023 at 7:56 PM\r\nTo: \"[email protected]\" <[email protected]>\r\nSubject: False \"pg_serial\": apparent wraparound” in logs\r\n\r\nHi,\r\n\r\nI Recently encountered a situation on the field in which the message\r\n“could not truncate directory \"pg_serial\": apparent wraparound”\r\nwas logged even through there was no danger of wraparound. This\r\nwas on a brand new cluster and only took a few minutes to see\r\nthe message in the logs.\r\n\r\nReading on some history of this error message, it appears that there\r\nwas work done to improve SLRU truncation and associated wraparound\r\nlog messages [1]. The attached repro on master still shows that this message\r\ncan be logged incorrectly.\r\n\r\nThe repro runs updates with 90 threads in serializable mode and kicks\r\noff a “long running” select on the same table in serializable mode.\r\n\r\nAs soon as the long running select commits, the next checkpoint fails\r\nto truncate the SLRU and logs the error message.\r\n\r\nBesides the confusing log message, there may also also be risk with\r\npg_serial getting unnecessarily bloated and depleting the disk space.\r\n\r\nIs this a bug?\r\n\r\n[1] https://www.postgresql.org/message-id/flat/20190202083822.GC32531%40gust.leadboat.com\r\n\r\nRegards,\r\n\r\nSami Imseih\r\nAmazon Web Services (AWS)\r\n\r\n\r\n\r\n\r\n\r\n\n\n\n\n\n\n\n\n\nHi,\n \nI dug a bit into this and what looks to be happening is the comparison\nof the page containing the latest cutoff xid could falsely be reported\nas in the future of the last page number because the latest\npage number of the Serial slru is only set when the page is\ninitialized [1].\n \nSo under the correct conditions, such as in the repro, the serializable\nXID has moved past the last page number, therefore to the next checkpoint\nwhich triggers a CheckPointPredicate, it will appear that the slru\nhas wrapped around.\n \nIt seems what may be needed here is to advance the \nlatest_page_number during SerialSetActiveSerXmin and if \n\nwe are using the SLRU. 
See below:\n \n \ndiff --git a/src/backend/storage/lmgr/predicate.c b/src/backend/storage/lmgr/predicate.c\nindex 1af41213b4..6946ed21b4 100644\n--- a/src/backend/storage/lmgr/predicate.c\n+++ b/src/backend/storage/lmgr/predicate.c\n@@ -992,6 +992,9 @@ SerialSetActiveSerXmin(TransactionId xid)\n \n serialControl->tailXid = xid;\n \n+ if (serialControl->headPage > 0)\n+ SerialSlruCtl->shared->latest_page_number = SerialPage(xid);\n+\n LWLockRelease(SerialSLRULock);\n}\n \n[1] \r\nhttps://github.com/postgres/postgres/blob/master/src/backend/access/transam/slru.c#L306\n \nRegards,\n \nSami\n \n\nFrom: \"Imseih (AWS), Sami\" <[email protected]>\nDate: Tuesday, August 22, 2023 at 7:56 PM\nTo: \"[email protected]\" <[email protected]>\nSubject: False \"pg_serial\": apparent wraparound” in logs\n\n\n \n\nHi,\n \nI Recently encountered a situation on the field in which the message\n“could not truncate directory \"pg_serial\": apparent wraparound”\nwas logged even through there was no danger of wraparound. This\nwas on a brand new cluster and only took a few minutes to see\nthe message in the logs.\n \nReading on some history of this error message, it appears that there\nwas work done to improve SLRU truncation and associated wraparound\nlog messages [1]. The attached repro on master still shows that this message\r\n\ncan be logged incorrectly.\n \nThe repro runs updates with 90 threads in serializable mode and kicks\noff a “long running” select on the same table in serializable mode.\n \nAs soon as the long running select commits, the next checkpoint fails\nto truncate the SLRU and logs the error message. \n \nBesides the confusing log message, there may also also be risk with\r\n\npg_serial getting unnecessarily bloated and depleting the disk space.\n \nIs this a bug?\n \n[1] \r\nhttps://www.postgresql.org/message-id/flat/20190202083822.GC32531%40gust.leadboat.com\n \nRegards,\n \nSami Imseih\nAmazon Web Services (AWS)",
"msg_date": "Wed, 23 Aug 2023 20:49:29 +0000",
"msg_from": "\"Imseih (AWS), Sami\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "\n =?utf-8?B?UmU6IEZhbHNlICJwZ19zZXJpYWwiOiBhcHBhcmVudCB3cmFwYXJvdW5k4oCd?=\n =?utf-8?Q?_in_logs?="
},
{
"msg_contents": "Attached a patch with a new CF entry: https://commitfest.postgresql.org/44/4516/\r\n\r\nRegards,\r\n\r\nSami Imseih\r\nAmazon Web Services (AWS)",
"msg_date": "Fri, 25 Aug 2023 04:29:53 +0000",
"msg_from": "\"Imseih (AWS), Sami\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "\n =?utf-8?B?UmU6IEZhbHNlICJwZ19zZXJpYWwiOiBhcHBhcmVudCB3cmFwYXJvdW5k4oCd?=\n =?utf-8?Q?_in_logs?="
},
{
"msg_contents": "On 25/08/2023 07:29, Imseih (AWS), Sami wrote:\n> diff --git a/src/backend/storage/lmgr/predicate.c b/src/backend/storage/lmgr/predicate.c\n> index 1af41213b4..7e7be3b885 100644\n> --- a/src/backend/storage/lmgr/predicate.c\n> +++ b/src/backend/storage/lmgr/predicate.c\n> @@ -992,6 +992,13 @@ SerialSetActiveSerXmin(TransactionId xid)\n> \n> \tserialControl->tailXid = xid;\n> \n> +\t/*\n> +\t * If the SLRU is being used, set the latest page number to\n> +\t * the current tail xid.\n> +\t */\n> +\tif (serialControl->headPage > 0)\n> +\t\tSerialSlruCtl->shared->latest_page_number = SerialPage(serialControl->tailXid);\n> +\n> \tLWLockRelease(SerialSLRULock);\n> }\n\nI don't really understand what exactly the problem is, or how this fixes \nit. But this doesn't feel right:\n\nFirstly, isn't headPage == 0 also a valid value? We initialize headPage \nto -1 when it's not in use.\n\nSecondly, shouldn't we set it to the page corresponding to headXid \nrather than tailXid.\n\nThirdly, I don't think this code should have any business setting \nlatest_page_number directly. latest_page_number is set in \nSimpleLruZeroPage(). Are we missing a call to SimpleLruZeroPage() somewhere?\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Tue, 26 Sep 2023 19:29:44 +0300",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "=?UTF-8?Q?Re=3a_False_=22pg=5fserial=22=3a_apparent_wraparound?=\n =?UTF-8?B?4oCdIGluIGxvZ3M=?="
},
{
"msg_contents": "> I don't really understand what exactly the problem is, or how this fixes\r\n> it. But this doesn't feel right:\r\n\r\nAs the repro show, false reports of \"pg_serial\": apparent wraparound”\r\nmessages are possible. For a very busy system which checkpoints frequently\r\nand heavy usage of serializable isolation, this will flood the error logs, and \r\nfalsely cause alarm to the user. It also prevents the SLRU from being\r\ntruncated.\r\n\r\nIn my repro, I end up seeing, even though the SLRU does not wraparound.\r\n\" LOG: could not truncate directory \"pg_serial\": apparent wraparound\"\r\n\r\n> Firstly, isn't headPage == 0 also a valid value? We initialize headPage\r\n> to -1 when it's not in use.\r\n\r\nYes. You are correct. This is wrong.\r\n\r\n> Secondly, shouldn't we set it to the page corresponding to headXid\r\n> rather than tailXid.\r\n\r\n> Thirdly, I don't think this code should have any business setting\r\n> latest_page_number directly. latest_page_number is set in\r\n> SimpleLruZeroPage(). \r\n\r\nCorrect, after checking again, I do realize the patch is wrong.\r\n\r\n> Are we missing a call to SimpleLruZeroPage() somewhere?\r\n\r\nThat is a good point.\r\n\r\nThe initial idea was to advance the latest_page_number \r\nduring SerialSetActiveSerXmin, but the initial approach is \r\nobviously wrong.\r\n\r\nWhen SerialSetActiveSerXmin is called for a new active\r\nserializable xmin, and at that point we don't need to keep any\r\nany earlier transactions, should SimpleLruZeroPage be called\r\nto ensure there is a target page for the xid?\r\n\r\nI tried something like below, which fixes my repro, by calling\r\nSimpleLruZeroPage at the end of SerialSetActiveSerXmin.\r\n\r\n@@ -953,6 +953,8 @@ SerialGetMinConflictCommitSeqNo(TransactionId xid)\r\n static void\r\n SerialSetActiveSerXmin(TransactionId xid)\r\n {\r\n+ int targetPage = SerialPage(xid);\r\n+\r\n LWLockAcquire(SerialSLRULock, LW_EXCLUSIVE);\r\n \r\n /*\r\n@@ -992,6 +994,9 @@ SerialSetActiveSerXmin(TransactionId xid)\r\n \r\n serialControl->tailXid = xid;\r\n \r\n+ if (serialControl->headPage != targetPage)\r\n+ SimpleLruZeroPage(SerialSlruCtl, targetPage);\r\n+\r\n LWLockRelease(SerialSLRULock);\r\n }\r\n\r\nRegards,\r\n\r\nSami\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n",
"msg_date": "Fri, 29 Sep 2023 23:16:03 +0000",
"msg_from": "\"Imseih (AWS), Sami\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "\n =?utf-8?q?Re=3A_False_=22pg=5Fserial=22=3A_apparent_wraparound=E2=80=9D_i?=\n =?utf-8?q?n_logs?="
},
{
"msg_contents": "On 30/09/2023 02:16, Imseih (AWS), Sami wrote:\n> The initial idea was to advance the latest_page_number\n> during SerialSetActiveSerXmin, but the initial approach is\n> obviously wrong.\n\nThat approach at high level could work, a\n\n> When SerialSetActiveSerXmin is called for a new active\n> serializable xmin, and at that point we don't need to keep any\n> any earlier transactions, should SimpleLruZeroPage be called\n> to ensure there is a target page for the xid?\n> \n> I tried something like below, which fixes my repro, by calling\n> SimpleLruZeroPage at the end of SerialSetActiveSerXmin.\n> \n> @@ -953,6 +953,8 @@ SerialGetMinConflictCommitSeqNo(TransactionId xid)\n> static void\n> SerialSetActiveSerXmin(TransactionId xid)\n> {\n> + int targetPage = SerialPage(xid);\n> +\n> LWLockAcquire(SerialSLRULock, LW_EXCLUSIVE);\n> \n> /*\n> @@ -992,6 +994,9 @@ SerialSetActiveSerXmin(TransactionId xid)\n> \n> serialControl->tailXid = xid;\n> \n> + if (serialControl->headPage != targetPage)\n> + SimpleLruZeroPage(SerialSlruCtl, targetPage);\n> +\n> LWLockRelease(SerialSLRULock);\n> }\n\nNo, that's very wrong too. You are zeroing the page containing the \noldest XID that's still needed. That page still contains important \ninformation. It might work if you zero the previous page, but I think \nyou need to do a little more than that. (I wish we had tests that would \ncatch that.)\n\nThe crux of the problem is that 'tailXid' can advance past 'headXid'. I \nwas bit surprised by that, but I think it's by design. I wish it was \ncalled out explicitly in a comment though. The code mostly handles that \nfine, except that it confuses the \"apparent wraparound\" check.\n\n'tailXid' is the oldest XID that we might still need to look up in the \nSLRU, based on the transactions that are still active, and 'headXid' is \nthe newest XID that has been written out to the SLRU. But we only write \nan XID out to the SLRU and advance headXid if the shared memory data \nstructure fills up. So it's possible that as old transactions age out, \nwe advance 'tailXid' past 'headXid'.\n\nSerialAdd() tolerates tailXid > headXid. It will zero out all the pages \nbetween the old headXid and tailXid, even though no lookups can occur on \nthose pages. That's unnecessary but harmless.\n\nI think the smallest fix here would be to change CheckPointPredicate() \nso that if tailPage > headPage, pass headPage to SimpleLruTruncate() \ninstead of tailPage. Or perhaps it should go into the \"The SLRU is no \nlonger needed\" codepath in that case. If tailPage > headPage, the SLRU \nisn't needed at the moment.\n\nIn addition to that, we could change SerialAdd() to not zero out the \npages between old headXid and tailXid unnecessarily, but that's more of \nan optimization than bug fix.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Sun, 1 Oct 2023 21:43:21 +0300",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "=?UTF-8?Q?Re=3a_False_=22pg=5fserial=22=3a_apparent_wraparound?=\n =?UTF-8?B?4oCdIGluIGxvZ3M=?="
},
{
"msg_contents": "On Sun, Oct 01, 2023 at 09:43:21PM +0300, Heikki Linnakangas wrote:\n> I think the smallest fix here would be to change CheckPointPredicate() so\n> that if tailPage > headPage, pass headPage to SimpleLruTruncate() instead of\n> tailPage. Or perhaps it should go into the \"The SLRU is no longer needed\"\n> codepath in that case. If tailPage > headPage, the SLRU isn't needed at the\n> moment.\n\nGood idea. Indeed that should be good and simple enough for the\nback-branches, at quick glance.\n--\nMichael",
"msg_date": "Wed, 4 Oct 2023 09:07:21 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: False \"pg_serial\": =?utf-8?Q?apparent_?=\n =?utf-8?Q?wraparound=E2=80=9D?= in logs"
},
{
"msg_contents": "> I think the smallest fix here would be to change CheckPointPredicate()\r\n> so that if tailPage > headPage, pass headPage to SimpleLruTruncate()\r\n> instead of tailPage. Or perhaps it should go into the \"The SLRU is no\r\n> longer needed\" codepath in that case. If tailPage > headPage, the SLRU\r\n> isn't needed at the moment.\r\n\r\nI spent sometime studying this and it appears to be a good approach. \r\n\r\nPassing the cutoff page as headPage (SLRU not needed code path ) instead of the tailPage to \r\nSimpleLruTruncate is already being done when the tailXid is not a valid XID. \r\nI added an additional condition to make sure that the tailPage proceeds the headPage\r\nas well. \r\n\r\nAttached is v2 of the patch.\r\n\r\n> In addition to that, we could change SerialAdd() to not zero out the\r\n> pages between old headXid and tailXid unnecessarily, but that's more of\r\n> an optimization than bug fix.\r\n\r\nYes, I did notice that in my debugging, but will not address this in the current patch.\r\n\r\n\r\nRegards,\r\n\r\nSami",
"msg_date": "Thu, 5 Oct 2023 23:28:02 +0000",
"msg_from": "\"Imseih (AWS), Sami\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "\n =?utf-8?q?Re=3A_False_=22pg=5Fserial=22=3A_apparent_wraparound=E2=80=9D_i?=\n =?utf-8?q?n_logs?="
},
{
"msg_contents": "Correct a typo in my last message: \r\n\r\nInstead of:\r\n“ I added an additional condition to make sure that the tailPage proceeds the headPage\r\nas well. “\r\n\r\nIt should be:\r\n“ I added an additional condition to make sure that the tailPage precedes the headPage\r\nas well. ”\r\n\r\n\r\nRegards,\r\n\r\nSami\r\n\r\n> On Oct 5, 2023, at 6:29 PM, Imseih (AWS), Sami <[email protected]> wrote:\r\n> \r\n> I added an additional condition to make sure that the tailPage proceeds the headPage\r\n> as well.\r\n",
"msg_date": "Thu, 5 Oct 2023 23:50:01 +0000",
"msg_from": "\"Imseih (AWS), Sami\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "\n =?utf-8?q?Re=3A_False_=22pg=5Fserial=22=3A_apparent_wraparound=E2=80=9D_i?=\n =?utf-8?q?n_logs?="
},
{
"msg_contents": "On Thu, Oct 05, 2023 at 11:28:02PM +0000, Imseih (AWS), Sami wrote:\n> I spent sometime studying this and it appears to be a good approach. \n> \n> Passing the cutoff page as headPage (SLRU not needed code path ) instead of the tailPage to \n> SimpleLruTruncate is already being done when the tailXid is not a valid XID. \n> I added an additional condition to make sure that the tailPage proceeds the headPage\n> as well. \n> \n> Attached is v2 of the patch.\n\nThanks for the updated patch. I have begun looking at what you have\nhere.\n--\nMichael",
"msg_date": "Tue, 10 Oct 2023 16:20:07 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: False \"pg_serial\": =?utf-8?Q?apparent_?=\n =?utf-8?Q?wraparound=E2=80=9D?= in logs"
},
{
"msg_contents": "On Thu, Oct 05, 2023 at 11:28:02PM +0000, Imseih (AWS), Sami wrote:\n> I spent sometime studying this and it appears to be a good approach. \n> \n> Passing the cutoff page as headPage (SLRU not needed code path ) instead of the tailPage to \n> SimpleLruTruncate is already being done when the tailXid is not a valid XID. \n> I added an additional condition to make sure that the tailPage proceeds the headPage\n> as well. \n\nI have been studying the whole area, and these threads from 2011 have\ncome to me, with two separate attempts:\nhttps://www.postgresql.org/message-id/[email protected]\nhttps://www.postgresql.org/message-id/4D8F54E6020000250003BD16%40gw.wicourts.gov\n\nBack then, we were pretty much OK with the amount of space that could\nbe wasted even in this case. Actually, how much space are we talking\nabout here when a failed truncation happens? As this is basically\nharmless, still leads to a confusing message, do we really need a\nbackpatch here?\n\nAnyway, it looks like you're right, we don't really need the SLRU once\nthe tail is ahead of the tail because the SLRU has wrapped around due\nto the effect of transactions aging out, so making the truncation a\nbit smarter should be OK.\n\n+ /*\n+ * Check if the tailXid is valid and that the tailPage is not ahead of\n+ * the headPage, otherwise the SLRU is no longer needed.\n+ */\n\nHmm. This doesn't seem enough. Shouldn't we explain at least in\nwhich scenarios the tail can get ahead of the head (aka at least\nwith long running transactions that make the SLRU wrap-around)?\nExcept if I am missing something, there is no explanation of that in\npredicate.c.\n--\nMichael",
"msg_date": "Wed, 11 Oct 2023 17:37:42 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: False \"pg_serial\": =?utf-8?Q?apparent_?=\n =?utf-8?Q?wraparound=E2=80=9D?= in logs"
},
{
"msg_contents": "Sorry for the delay in response.\r\n\r\n> Back then, we were pretty much OK with the amount of space that could\r\n> be wasted even in this case. Actually, how much space are we talking\r\n> about here when a failed truncation happens? \r\n\r\nIt is a transient waste in space as it will eventually clean up.\r\n\r\n> As this is basically\r\n> harmless, still leads to a confusing message, \r\n\r\nCorrect, and especially because the message has\r\n\"wraparound\" in the text.\r\n\r\n> do we really need a backpatch here?\r\n\r\nNo, I don't think a backpatch is necessary.\r\n\r\n\r\n> Anyway, it looks like you're right, we don't really need the SLRU once\r\n> the tail is ahead of the tail because the SLRU has wrapped around due\r\n> to the effect of transactions aging out, so making the truncation a\r\n> bit smarter should be OK.\r\n\r\nI assume you meant \" the tail is ahead of the head\".\r\n\r\nSummarizeOldestCommittedSxact advances the headXid, but if we\r\ncheckpoint before this is called, then the tail could be ahead. The tail is\r\nadvanced by SerialSetActiveSerXmin whenever there is a new serializable\r\ntransaction.\r\n\r\n\r\n> Hmm. This doesn't seem enough. Shouldn't we explain at least in\r\n> which scenarios the tail can get ahead of the head (aka at least\r\n> with long running transactions that make the SLRU wrap-around)?\r\n> Except if I am missing something, there is no explanation of that in\r\n> predicate.c.\r\n\r\nAfter looking at this a bit more, I don't think the previous rev is correct.\r\nWe should not fall through to the \" The SLRU is no longer needed.\" Which\r\nalso sets the headPage to invalid. We should only truncate up to the\r\nhead page.\r\n\r\nPlease see attached v3.\r\n\r\n\r\nRegards,\r\n\r\nSami",
"msg_date": "Sat, 14 Oct 2023 19:29:54 +0000",
"msg_from": "\"Imseih (AWS), Sami\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "\n =?utf-8?q?Re=3A_False_=22pg=5Fserial=22=3A_apparent_wraparound=E2=80=9D_i?=\n =?utf-8?q?n_logs?="
},
{
"msg_contents": "On Sat, Oct 14, 2023 at 07:29:54PM +0000, Imseih (AWS), Sami wrote:\n>> Anyway, it looks like you're right, we don't really need the SLRU once\n>> the tail is ahead of the tail because the SLRU has wrapped around due\n>> to the effect of transactions aging out, so making the truncation a\n>> bit smarter should be OK.\n> \n> I assume you meant \" the tail is ahead of the head\".\n\nDamn fingers on a keyboard who don't know how to type.\n\n>> Hmm. This doesn't seem enough. Shouldn't we explain at least in\n>> which scenarios the tail can get ahead of the head (aka at least\n>> with long running transactions that make the SLRU wrap-around)?\n>> Except if I am missing something, there is no explanation of that in\n>> predicate.c.\n> \n> After looking at this a bit more, I don't think the previous rev is correct.\n> We should not fall through to the \" The SLRU is no longer needed.\" Which\n> also sets the headPage to invalid. We should only truncate up to the\n> head page.\n\nSeems correct to me. Or this would count as if the SLRU is not in\nuse, but it's being used.\n--\nMichael",
"msg_date": "Mon, 16 Oct 2023 16:58:31 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: False \"pg_serial\": =?utf-8?Q?apparent_?=\n =?utf-8?Q?wraparound=E2=80=9D?= in logs"
},
{
"msg_contents": "On Mon, Oct 16, 2023 at 04:58:31PM +0900, Michael Paquier wrote:\n> On Sat, Oct 14, 2023 at 07:29:54PM +0000, Imseih (AWS), Sami wrote:\n>> After looking at this a bit more, I don't think the previous rev is correct.\n>> We should not fall through to the \" The SLRU is no longer needed.\" Which\n>> also sets the headPage to invalid. We should only truncate up to the\n>> head page.\n> \n> Seems correct to me. Or this would count as if the SLRU is not in\n> use, but it's being used.\n\nSo, I've spent more time on that and applied the simplification today,\ndoing as you have suggested to use the head page rather than the tail\npage when the tail XID is ahead of the head XID, but without disabling\nthe whole. I've simplified a bit the code and the comments, though,\nwhile on it (some renames and a slight refactoring of tailPage, for\nexample).\n--\nMichael",
"msg_date": "Tue, 17 Oct 2023 14:46:37 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: False \"pg_serial\": =?utf-8?Q?apparent_?=\n =?utf-8?Q?wraparound=E2=80=9D?= in logs"
},
{
"msg_contents": "> So, I've spent more time on that and applied the simplification today,\r\n> doing as you have suggested to use the head page rather than the tail\r\n> page when the tail XID is ahead of the head XID, but without disabling\r\n> the whole. I've simplified a bit the code and the comments, though,\r\n> while on it (some renames and a slight refactoring of tailPage, for\r\n> example).\r\n> --\r\n> Michael\r\n\r\nThank you!\r\n\r\nRegards,\r\n\r\nSami\r\n\r\n\r\n\r\n",
"msg_date": "Tue, 17 Oct 2023 13:43:54 +0000",
"msg_from": "\"Imseih (AWS), Sami\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "\n =?utf-8?q?Re=3A_False_=22pg=5Fserial=22=3A_apparent_wraparound=E2=80=9D_i?=\n =?utf-8?q?n_logs?="
}
] |
[
{
"msg_contents": "Hi hackers.\n\nDuring a recent review of nearby code I noticed that there was a shadowing\nof the 'new_cluster' global variable by a function parameter:\n\nHere:\nstatic void check_for_new_tablespace_dir(ClusterInfo *new_cluster);\n\n~~~\n\nIt looks like it has been like this for a couple of years. I guess this\nmight have been found/fixed earlier had the code been compiled differently:\n\ncheck.c: In function ‘check_for_new_tablespace_dir’:\ncheck.c:381:43: warning: declaration of ‘new_cluster’ shadows a global\ndeclaration [-Wshadow]\n check_for_new_tablespace_dir(ClusterInfo *new_cluster)\n ^\nIn file included from check.c:16:0:\npg_upgrade.h:337:4: warning: shadowed declaration is here [-Wshadow]\n new_cluster;\n ^\n\n~~~\n\nPSA a small patch to remove the unnecessary parameter, and so eliminate\nthis shadowing.\n\nThoughts?\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia.",
"msg_date": "Wed, 23 Aug 2023 11:28:25 +1000",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": true,
"msg_subject": "pg_upgrade - a function parameter shadows global 'new_cluster'"
},
{
"msg_contents": "> On 23 Aug 2023, at 03:28, Peter Smith <[email protected]> wrote:\n\n> PSA a small patch to remove the unnecessary parameter, and so eliminate this shadowing.\n\nAgreed, applied. Thanks!\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Wed, 23 Aug 2023 10:00:13 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade - a function parameter shadows global 'new_cluster'"
},
{
"msg_contents": "On Wed, Aug 23, 2023 at 6:00 PM Daniel Gustafsson <[email protected]> wrote:\n\n> > On 23 Aug 2023, at 03:28, Peter Smith <[email protected]> wrote:\n>\n> > PSA a small patch to remove the unnecessary parameter, and so eliminate\n> this shadowing.\n>\n> Agreed, applied. Thanks!\n>\n>\nThanks for pushing!\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\nOn Wed, Aug 23, 2023 at 6:00 PM Daniel Gustafsson <[email protected]> wrote:> On 23 Aug 2023, at 03:28, Peter Smith <[email protected]> wrote:\n\n> PSA a small patch to remove the unnecessary parameter, and so eliminate this shadowing.\n\nAgreed, applied. Thanks!Thanks for pushing!------Kind Regards,Peter Smith.Fujitsu Australia",
"msg_date": "Thu, 24 Aug 2023 09:07:55 +1000",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_upgrade - a function parameter shadows global 'new_cluster'"
}
] |
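A note for readers of the thread above: the -Wshadow warning quoted there is easy to reproduce with a standalone sketch. The snippet below is illustrative only (it borrows the identifier names from the thread but is not the pg_upgrade source); compiling it with gcc -Wshadow produces an equivalent shadowing diagnostic.

/* shadow_demo.c: illustrative only; build with: gcc -Wshadow -c shadow_demo.c */
#include <stdio.h>

typedef struct ClusterInfo
{
    const char *pgdata;
} ClusterInfo;

/* file-scope "global", standing in for the one declared in pg_upgrade.h */
static ClusterInfo new_cluster;

/* the parameter shadows the file-scope variable above; -Wshadow warns here */
static void
check_for_new_tablespace_dir(ClusterInfo *new_cluster)
{
    printf("checking %s\n", new_cluster->pgdata);
}

int
main(void)
{
    new_cluster.pgdata = "/tmp/pgdata";
    check_for_new_tablespace_dir(&new_cluster);
    return 0;
}

The fix applied in the thread simply removes the parameter and lets the function read the global directly, so only one new_cluster remains in scope.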
[
{
"msg_contents": "I've been having some problems running the regression tests using\nmeson on Windows. This seems to be down to initdb being compiled with\na version of pg_config_paths.h left over from the msvc build which had\nbeen used on that source tree previously.\n\nGenerally when there are files left over the meson build script will\ndetect this and ask you to run make maintainer-clean. That's useful\non Linux, but on Windows you're just left to manually remove the\nconflicting files which are listed. Unfortunately, pg_config_paths.h\nwasn't listed, which I think was missed because it's not generated\nduring configure, but during the actual make build. (see\nsrc/port/Makefile)\n\nLinux users are unlikely to experience this issue as they're likely\njust going to run make maintainer-clean as instructed by the meson\nerror message.\n\nThe attached patch adds pg_config_paths.h to the generated_sources_ac\nvariable so that if that file exists, meson will provide an error\nmessage to mention this. i.e:\n\n> meson.build:2953:2: ERROR: Problem encountered:\n> ****\n> Non-clean source code directory detected.\n\n> To build with meson the source tree may not have an in-place, ./configure\n> style, build configured. You can have both meson and ./configure style builds\n> for the same source tree by building out-of-source / VPATH with\n> configure. Alternatively use a separate check out for meson based builds.\n\n\n> Conflicting files in source directory:\n> C:/Users/<user>/pg_src/src/port/pg_config_paths.h\n\n> The conflicting files need to be removed, either by removing the files listed\n> above, or by running configure and then make maintainer-clean.\n\nAre there any objections to the attached being applied?\n\nDavid",
"msg_date": "Thu, 24 Aug 2023 00:52:52 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": true,
"msg_subject": "meson uses stale pg_config_paths.h left over from make"
},
{
"msg_contents": "On Thu, 24 Aug 2023 at 00:52, David Rowley <[email protected]> wrote:\n> Are there any objections to the attached being applied?\n\nPushed.\n\nDavid\n\n\n",
"msg_date": "Thu, 24 Aug 2023 10:35:18 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: meson uses stale pg_config_paths.h left over from make"
},
{
"msg_contents": "On 23.08.23 14:52, David Rowley wrote:\n> Generally when there are files left over the meson build script will\n> detect this and ask you to run make maintainer-clean. That's useful\n> on Linux, but on Windows you're just left to manually remove the\n> conflicting files which are listed. Unfortunately, pg_config_paths.h\n> wasn't listed, which I think was missed because it's not generated\n> during configure, but during the actual make build. (see\n> src/port/Makefile)\n\nHow is this different from any other built file being left in the tree? \nSurely meson should not be required to detect that?\n\n\n\n",
"msg_date": "Thu, 24 Aug 2023 08:18:14 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: meson uses stale pg_config_paths.h left over from make"
},
{
"msg_contents": "Hi,\n\nOn 2023-08-24 08:18:14 +0200, Peter Eisentraut wrote:\n> On 23.08.23 14:52, David Rowley wrote:\n> > Generally when there are files left over the meson build script will\n> > detect this and ask you to run make maintainer-clean. That's useful\n> > on Linux, but on Windows you're just left to manually remove the\n> > conflicting files which are listed. Unfortunately, pg_config_paths.h\n> > wasn't listed, which I think was missed because it's not generated\n> > during configure, but during the actual make build. (see\n> > src/port/Makefile)\n> \n> How is this different from any other built file being left in the tree?\n\nFiles included into other files (i.e. mostly .h, but also a few .c) are\nparticularly problematic, because they will be used from the source tree, if\nthe #include doesn't have directory component - the current directory will\nalways be searched first. In this case initdb on David's machine failed,\nbecause the paths from the wrong pg_config_paths.h was used.\n\n\n> Surely meson should not be required to detect that?\n\nI think we should try to detect included files, due to the nasty and hard to\ndebug issues that creates. I've spent quite a bit of time helping people to\ndebug such issues.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 23 Aug 2023 23:25:47 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: meson uses stale pg_config_paths.h left over from make"
},
{
"msg_contents": "On Thu, 24 Aug 2023 at 18:25, Andres Freund <[email protected]> wrote:\n>\n> On 2023-08-24 08:18:14 +0200, Peter Eisentraut wrote:\n> > Surely meson should not be required to detect that?\n>\n> I think we should try to detect included files, due to the nasty and hard to\n> debug issues that creates. I've spent quite a bit of time helping people to\n> debug such issues.\n\nYeah, I agree. I think it's a fairly trivial thing to do to help avoid\ndevelopers spending hours scratching their heads over some\nhard-to-debug meson issue.\n\nI think lowering the difficulty bar for people transitioning to meson\nis a good thing. The easier we can make that process, the faster\npeople will adopt meson and the faster we can get rid of support for\nthe other build systems. That's likely not too far off into the\ndistant future for MSVC, so I don't think we should go rejecting\npatches from people using that build system where the patch aims to\nhelp that transition go faster and more smoothly.\n\nDavid\n\n\n",
"msg_date": "Fri, 25 Aug 2023 15:24:32 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: meson uses stale pg_config_paths.h left over from make"
}
] |
[
{
"msg_contents": "I was about to push a quick patch to replace the use of heap_getattr()\nin get_primary_key_attnos() with SysCacheGetAttrNotNull(), because that\nmakes the code a few lines shorter and AFAICS there's no downside.\nHowever, I realized that the commit that added the function\n(d435f15fff3c) did not make any such change at all -- it only changed\nSysCacheGetAttr calls to use the new function, but no heap_getattr.\nAnd we don't seem to have added such calls after.\n\nEssentially the possibly contentious point is that the tuple we'd be\ndeforming did not come from syscache, but from a systable scan, so\ncalling a syscache function on it could be seen as breaking some API.\n(Of course, this only works if there is a syscache on the given\nrelation.)\n\nBut we do have precedent: for example RelationGetFKeyList uses a sysscan\nto feed DeconstructFkConstraintRow(), which extracts several attributes\nthat way using the CONSTROID syscache.\n\nDoes anybody think this could be a problem, if we extended it to be more\nwidely used?\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"Los cuentos de hadas no dan al niño su primera idea sobre los monstruos.\nLo que le dan es su primera idea de la posible derrota del monstruo.\"\n (G. K. Chesterton)\n\n\n",
"msg_date": "Wed, 23 Aug 2023 16:43:42 +0200",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": true,
"msg_subject": "using SysCacheGetAttrNotNull in place of heap_getattr"
},
{
"msg_contents": "Alvaro Herrera <[email protected]> writes:\n> I was about to push a quick patch to replace the use of heap_getattr()\n> in get_primary_key_attnos() with SysCacheGetAttrNotNull(), because that\n> makes the code a few lines shorter and AFAICS there's no downside.\n> However, I realized that the commit that added the function\n> (d435f15fff3c) did not make any such change at all -- it only changed\n> SysCacheGetAttr calls to use the new function, but no heap_getattr.\n> And we don't seem to have added such calls after.\n\nSeems to me it'd be more consistent to invent a wrapper function\nheap_getattr_notnull() that adds the same sort of error check,\ninstead of abusing the syscache function as you suggest. For one\nthing, then the functionality could be used whether there's a\nsuitable syscache or not.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 23 Aug 2023 13:04:26 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: using SysCacheGetAttrNotNull in place of heap_getattr"
}
] |
[
{
"msg_contents": "Somewhere at PGCon, I forgot exactly where, maybe in the same meeting \nwhere we talked about getting rid of distprep, we talked about that the \ndocumentation builds are not reproducible (in the sense of \nhttps://reproducible-builds.org/). This is easily fixable, the fix is \navailable upstream \n(https://github.com/docbook/xslt10-stylesheets/issues/54) but not \nreleased. We can backpatch that into our customization layer. The \nattached patch shows it.\n\nI had actually often wanted this during development. When making \ndocumentation tooling changes, it's useful to be able to compare the \noutput before and after, and this will eliminate false positives in that.\n\nThis patch addresses both the HTML and the FO output. The man output is \nalready reproducible AFAICT. Note that the final PDF output is \ncurrently not reproducible; that's a different issue that needs to be \nfixed in FOP. (See \nhttps://wiki.debian.org/ReproducibleBuilds/TimestampsInPDFGeneratedByApacheFOP.)",
"msg_date": "Wed, 23 Aug 2023 21:24:07 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Make documentation builds reproducible"
},
{
"msg_contents": "On Wed Aug 23, 2023 at 2:24 PM CDT, Peter Eisentraut wrote:\n> Somewhere at PGCon, I forgot exactly where, maybe in the same meeting \n> where we talked about getting rid of distprep, we talked about that the \n> documentation builds are not reproducible (in the sense of \n> https://reproducible-builds.org/). This is easily fixable, the fix is \n> available upstream \n> (https://github.com/docbook/xslt10-stylesheets/issues/54) but not \n> released. We can backpatch that into our customization layer. The \n> attached patch shows it.\n\nI am a tiny bit confused here. The commit that solved the issue was \nmerged into the master branch in 2018. GitHub lists the lastest release \nas being in 2020. A quick git command shows this has been in releases \nsince December of 2018.\n\n\t$ git --no-pager tag --contains 0763160\n\tndw-test-001\n\tsnapshot-2018-12-07-01\n\tsnapshot-ndw-test/2019-10-04\n\tsnapshot/2018-09-28-172\n\tsnapshot/2018-09-28-173\n\tsnapshot/2018-09-28-174\n\tsnapshot/2018-09-28-175\n\tsnapshot/2018-09-29-176\n\tsnapshot/2018-09-29-177\n\tsnapshot/2018-09-30-178\n\tsnapshot/2018-09-30-179\n\tsnapshot/2018-10-01-180\n\tsnapshot/2018-10-02-183\n\tsnapshot/2018-10-02-184\n\tsnapshot/2018-10-16-185\n\tsnapshot/2018-10-16-186\n\tsnapshot/2018-10-21-188\n\tsnapshot/2018-11-01-191\n\tsnapshot/2019-10-05-bobs\n\tsnapshot/2020-05-28-pdesjardins\n\tsnapshot/2020-06-03\n\nIs there anything I am missing? Is Postgres relying on releases older \nthan snapshot-2018-12-07-01? If so, is it possible to up the minimum \nversion?\n\n> I had actually often wanted this during development. When making \n> documentation tooling changes, it's useful to be able to compare the \n> output before and after, and this will eliminate false positives in that.\n>\n> This patch addresses both the HTML and the FO output. The man output is \n> already reproducible AFAICT. Note that the final PDF output is \n> currently not reproducible; that's a different issue that needs to be \n> fixed in FOP. (See \n> https://wiki.debian.org/ReproducibleBuilds/TimestampsInPDFGeneratedByApacheFOP.)\n\nI think reproducibility is very important. Thanks for taking this on! \n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Thu, 24 Aug 2023 13:44:34 -0500",
"msg_from": "\"Tristan Partin\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Make documentation builds reproducible"
},
{
"msg_contents": "\"Tristan Partin\" <[email protected]> writes:\n> On Wed Aug 23, 2023 at 2:24 PM CDT, Peter Eisentraut wrote:\n>> Somewhere at PGCon, I forgot exactly where, maybe in the same meeting \n>> where we talked about getting rid of distprep, we talked about that the \n>> documentation builds are not reproducible (in the sense of \n>> https://reproducible-builds.org/). This is easily fixable,\n\n> Is there anything I am missing? Is Postgres relying on releases older \n> than snapshot-2018-12-07-01? If so, is it possible to up the minimum \n> version?\n\nAFAICT the \"latest stable release\" of docbook-xsl is still 1.79.2,\nwhich seems to have been released in 2017, so it's unsurprising that\nit's missing this fix.\n\nIt's kind of hard to argue that developers (much less distro packagers)\nshould install unsupported snapshot releases in order to build our docs.\nHaving said that, maybe we should check whether this patch is compatible\nwith those snapshot releases, just in case somebody is using one.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 24 Aug 2023 15:30:06 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Make documentation builds reproducible"
},
{
"msg_contents": "On Thu Aug 24, 2023 at 2:30 PM CDT, Tom Lane wrote:\n> \"Tristan Partin\" <[email protected]> writes:\n> > On Wed Aug 23, 2023 at 2:24 PM CDT, Peter Eisentraut wrote:\n> >> Somewhere at PGCon, I forgot exactly where, maybe in the same meeting \n> >> where we talked about getting rid of distprep, we talked about that the \n> >> documentation builds are not reproducible (in the sense of \n> >> https://reproducible-builds.org/). This is easily fixable,\n>\n> > Is there anything I am missing? Is Postgres relying on releases older \n> > than snapshot-2018-12-07-01? If so, is it possible to up the minimum \n> > version?\n>\n> AFAICT the \"latest stable release\" of docbook-xsl is still 1.79.2,\n> which seems to have been released in 2017, so it's unsurprising that\n> it's missing this fix.\n>\n> It's kind of hard to argue that developers (much less distro packagers)\n> should install unsupported snapshot releases in order to build our docs.\n> Having said that, maybe we should check whether this patch is compatible\n> with those snapshot releases, just in case somebody is using one.\n\nI agree with you. Thanks for the pointer.\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Thu, 24 Aug 2023 14:52:59 -0500",
"msg_from": "\"Tristan Partin\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Make documentation builds reproducible"
},
{
"msg_contents": "On Fri, 25 Aug 2023 at 01:23, Tristan Partin <[email protected]> wrote:\n>\n> On Thu Aug 24, 2023 at 2:30 PM CDT, Tom Lane wrote:\n> > \"Tristan Partin\" <[email protected]> writes:\n> > > On Wed Aug 23, 2023 at 2:24 PM CDT, Peter Eisentraut wrote:\n> > >> Somewhere at PGCon, I forgot exactly where, maybe in the same meeting\n> > >> where we talked about getting rid of distprep, we talked about that the\n> > >> documentation builds are not reproducible (in the sense of\n> > >> https://reproducible-builds.org/). This is easily fixable,\n> >\n> > > Is there anything I am missing? Is Postgres relying on releases older\n> > > than snapshot-2018-12-07-01? If so, is it possible to up the minimum\n> > > version?\n> >\n> > AFAICT the \"latest stable release\" of docbook-xsl is still 1.79.2,\n> > which seems to have been released in 2017, so it's unsurprising that\n> > it's missing this fix.\n> >\n> > It's kind of hard to argue that developers (much less distro packagers)\n> > should install unsupported snapshot releases in order to build our docs.\n> > Having said that, maybe we should check whether this patch is compatible\n> > with those snapshot releases, just in case somebody is using one.\n>\n> I agree with you. Thanks for the pointer.\n\nI'm seeing that there has been no activity in this thread for nearly 5\nmonths, I'm planning to close this in the current commitfest unless\nsomeone is planning to take it forward.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Sat, 20 Jan 2024 08:03:01 +0530",
"msg_from": "vignesh C <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Make documentation builds reproducible"
},
{
"msg_contents": "On 20.01.24 03:33, vignesh C wrote:\n> On Fri, 25 Aug 2023 at 01:23, Tristan Partin <[email protected]> wrote:\n>>\n>> On Thu Aug 24, 2023 at 2:30 PM CDT, Tom Lane wrote:\n>>> \"Tristan Partin\" <[email protected]> writes:\n>>>> On Wed Aug 23, 2023 at 2:24 PM CDT, Peter Eisentraut wrote:\n>>>>> Somewhere at PGCon, I forgot exactly where, maybe in the same meeting\n>>>>> where we talked about getting rid of distprep, we talked about that the\n>>>>> documentation builds are not reproducible (in the sense of\n>>>>> https://reproducible-builds.org/). This is easily fixable,\n>>>\n>>>> Is there anything I am missing? Is Postgres relying on releases older\n>>>> than snapshot-2018-12-07-01? If so, is it possible to up the minimum\n>>>> version?\n>>>\n>>> AFAICT the \"latest stable release\" of docbook-xsl is still 1.79.2,\n>>> which seems to have been released in 2017, so it's unsurprising that\n>>> it's missing this fix.\n>>>\n>>> It's kind of hard to argue that developers (much less distro packagers)\n>>> should install unsupported snapshot releases in order to build our docs.\n>>> Having said that, maybe we should check whether this patch is compatible\n>>> with those snapshot releases, just in case somebody is using one.\n>>\n>> I agree with you. Thanks for the pointer.\n> \n> I'm seeing that there has been no activity in this thread for nearly 5\n> months, I'm planning to close this in the current commitfest unless\n> someone is planning to take it forward.\n\nI think there was general agreement with what this patch is doing, but I \nguess it's too boring to actually review the patch in detail. Let's \nsay, if there are no objections, I'll go ahead and commit it.\n\n\n\n",
"msg_date": "Sat, 20 Jan 2024 09:32:25 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Make documentation builds reproducible"
},
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n> I think there was general agreement with what this patch is doing, but I \n> guess it's too boring to actually review the patch in detail. Let's \n> say, if there are no objections, I'll go ahead and commit it.\n\nI re-read the thread and have two thoughts:\n\n* We worried about whether this change would be compatible with a\n(presently unreleased) version of docbook that contains the upstreamed\nfix. It seems unlikely that there's a problem, but maybe worth\nchecking?\n\n* I gather that the point here is to change some generated anchor\ntags. Would any of these tags be things people would be likely\nto have bookmarked?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 20 Jan 2024 11:03:07 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Make documentation builds reproducible"
},
{
"msg_contents": "On 20.01.24 17:03, Tom Lane wrote:\n> Peter Eisentraut <[email protected]> writes:\n>> I think there was general agreement with what this patch is doing, but I\n>> guess it's too boring to actually review the patch in detail. Let's\n>> say, if there are no objections, I'll go ahead and commit it.\n> \n> I re-read the thread and have two thoughts:\n> \n> * We worried about whether this change would be compatible with a\n> (presently unreleased) version of docbook that contains the upstreamed\n> fix. It seems unlikely that there's a problem, but maybe worth\n> checking?\n\nThe code in the patch is the same code as upstream, so it would behave \nthe same as a new release.\n\n> * I gather that the point here is to change some generated anchor\n> tags. Would any of these tags be things people would be likely\n> to have bookmarked?\n\nNo, because the problem is that the anchor names are randomly generated \nin each build.\n\n\n\n",
"msg_date": "Sat, 20 Jan 2024 23:44:00 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Make documentation builds reproducible"
},
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n> On 20.01.24 17:03, Tom Lane wrote:\n>> * I gather that the point here is to change some generated anchor\n>> tags. Would any of these tags be things people would be likely\n>> to have bookmarked?\n\n> No, because the problem is that the anchor names are randomly generated \n> in each build.\n\nD'oh. No objection then.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 20 Jan 2024 17:59:48 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Make documentation builds reproducible"
},
{
"msg_contents": "On 20.01.24 23:59, Tom Lane wrote:\n> Peter Eisentraut <[email protected]> writes:\n>> On 20.01.24 17:03, Tom Lane wrote:\n>>> * I gather that the point here is to change some generated anchor\n>>> tags. Would any of these tags be things people would be likely\n>>> to have bookmarked?\n> \n>> No, because the problem is that the anchor names are randomly generated\n>> in each build.\n> \n> D'oh. No objection then.\n\nThanks, committed.\n\n\n\n",
"msg_date": "Mon, 22 Jan 2024 11:18:31 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Make documentation builds reproducible"
},
{
"msg_contents": "Hi,\n\nI usually the HTML documentation locally using command:\n\nmake STYLE=website html\n\n~\n\nThis has been working forever, but seems to have broken due to commit\n[1] having an undeclared variable.\n\ne.g.\n[postgres@CentOS7-x64 sgml]$ make STYLE=website html\n{ \\\n echo \"<!ENTITY version \\\"17devel\\\">\"; \\\n echo \"<!ENTITY majorversion \\\"17\\\">\"; \\\n} > version.sgml\n'/usr/bin/perl' ./mk_feature_tables.pl YES\n../../../src/backend/catalog/sql_feature_packages.txt\n../../../src/backend/catalog/sql_features.txt >\nfeatures-supported.sgml\n'/usr/bin/perl' ./mk_feature_tables.pl NO\n../../../src/backend/catalog/sql_feature_packages.txt\n../../../src/backend/catalog/sql_features.txt >\nfeatures-unsupported.sgml\n'/usr/bin/perl' ./generate-errcodes-table.pl\n../../../src/backend/utils/errcodes.txt > errcodes-table.sgml\n'/usr/bin/perl' ./generate-keywords-table.pl . > keywords-table.sgml\n'/usr/bin/perl' ./generate-targets-meson.pl targets-meson.txt\ngenerate-targets-meson.pl > targets-meson.sgml\n'/usr/bin/perl'\n../../../src/backend/utils/activity/generate-wait_event_types.pl\n--docs ../../../src/backend/utils/activity/wait_event_names.txt\n/usr/bin/xmllint --nonet --path . --path . --output postgres-full.xml\n--noent --valid postgres.sgml\n/usr/bin/xsltproc --nonet --path . --path . --stringparam pg.version\n'17devel' --param website.stylesheet 1 stylesheet.xsl\npostgres-full.xml\nruntime error: file stylesheet-html-common.xsl line 452 element if\nVariable 'autolink.index.see' has not been declared.\nmake: *** [html-stamp] Error 10\n\n======\n[1] https://github.com/postgres/postgres/commit/b0f0a9432d0b6f53634a96715f2666f6d4ea25a1\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Tue, 23 Jan 2024 12:06:34 +1100",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Make documentation builds reproducible"
},
{
"msg_contents": "Peter Smith <[email protected]> writes:\n> I usually the HTML documentation locally using command:\n> make STYLE=website html\n> This has been working forever, but seems to have broken due to commit\n> [1] having an undeclared variable.\n\nInterestingly, that still works fine for me, on RHEL8 with\n\ndocbook-dtds-1.0-69.el8.noarch\ndocbook-style-xsl-1.79.2-9.el8.noarch\ndocbook-style-dsssl-1.79-25.el8.noarch\n\nWhat docbook version are you using?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 22 Jan 2024 20:13:00 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Make documentation builds reproducible"
},
{
"msg_contents": "On Tue, Jan 23, 2024 at 12:13 PM Tom Lane <[email protected]> wrote:\n>\n> Peter Smith <[email protected]> writes:\n> > I usually the HTML documentation locally using command:\n> > make STYLE=website html\n> > This has been working forever, but seems to have broken due to commit\n> > [1] having an undeclared variable.\n>\n> Interestingly, that still works fine for me, on RHEL8 with\n>\n> docbook-dtds-1.0-69.el8.noarch\n> docbook-style-xsl-1.79.2-9.el8.noarch\n> docbook-style-dsssl-1.79-25.el8.noarch\n>\n> What docbook version are you using?\n>\n\n[postgres@CentOS7-x64 sgml]$ sudo yum list installed | grep docbook\ndocbook-dtds.noarch 1.0-60.el7 @anaconda\ndocbook-style-dsssl.noarch 1.79-18.el7 @base\ndocbook-style-xsl.noarch 1.78.1-3.el7 @anaconda\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Tue, 23 Jan 2024 12:32:35 +1100",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Make documentation builds reproducible"
},
{
"msg_contents": "On Tue, Jan 23, 2024 at 12:32 PM Peter Smith <[email protected]> wrote:\n>\n> On Tue, Jan 23, 2024 at 12:13 PM Tom Lane <[email protected]> wrote:\n> >\n> > Peter Smith <[email protected]> writes:\n> > > I usually the HTML documentation locally using command:\n> > > make STYLE=website html\n> > > This has been working forever, but seems to have broken due to commit\n> > > [1] having an undeclared variable.\n> >\n> > Interestingly, that still works fine for me, on RHEL8 with\n> >\n> > docbook-dtds-1.0-69.el8.noarch\n> > docbook-style-xsl-1.79.2-9.el8.noarch\n> > docbook-style-dsssl-1.79-25.el8.noarch\n> >\n> > What docbook version are you using?\n> >\n>\n> [postgres@CentOS7-x64 sgml]$ sudo yum list installed | grep docbook\n> docbook-dtds.noarch 1.0-60.el7 @anaconda\n> docbook-style-dsssl.noarch 1.79-18.el7 @base\n> docbook-style-xsl.noarch 1.78.1-3.el7 @anaconda\n>\n\nIIUC these releases notes [1] say autolink.index.see existed since\nv1.79.1, but unfortunately, that is more recent than my ancient\ninstalled v1.78.1\n\n From the release notes:\n------\nRobert Stayton: autolink.index.see.xml\n\nNew param to control automatic links in index from see and\nseealso to indexterm primary.\n------\n\n======\n[1] https://docbook.sourceforge.net/release/xsl/1.79.1/RELEASE-NOTES.html\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Thu, 25 Jan 2024 09:12:28 +1100",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Make documentation builds reproducible"
},
{
"msg_contents": "On Thu, Jan 25, 2024 at 9:12 AM Peter Smith <[email protected]> wrote:\n>\n> On Tue, Jan 23, 2024 at 12:32 PM Peter Smith <[email protected]> wrote:\n> >\n> > On Tue, Jan 23, 2024 at 12:13 PM Tom Lane <[email protected]> wrote:\n> > >\n> > > Peter Smith <[email protected]> writes:\n> > > > I usually the HTML documentation locally using command:\n> > > > make STYLE=website html\n> > > > This has been working forever, but seems to have broken due to commit\n> > > > [1] having an undeclared variable.\n> > >\n> > > Interestingly, that still works fine for me, on RHEL8 with\n> > >\n> > > docbook-dtds-1.0-69.el8.noarch\n> > > docbook-style-xsl-1.79.2-9.el8.noarch\n> > > docbook-style-dsssl-1.79-25.el8.noarch\n> > >\n> > > What docbook version are you using?\n> > >\n> >\n> > [postgres@CentOS7-x64 sgml]$ sudo yum list installed | grep docbook\n> > docbook-dtds.noarch 1.0-60.el7 @anaconda\n> > docbook-style-dsssl.noarch 1.79-18.el7 @base\n> > docbook-style-xsl.noarch 1.78.1-3.el7 @anaconda\n> >\n>\n> IIUC these releases notes [1] say autolink.index.see existed since\n> v1.79.1, but unfortunately, that is more recent than my ancient\n> installed v1.78.1\n>\n> From the release notes:\n> ------\n> Robert Stayton: autolink.index.see.xml\n>\n> New param to control automatic links in index from see and\n> seealso to indexterm primary.\n> ------\n>\n> ======\n> [1] https://docbook.sourceforge.net/release/xsl/1.79.1/RELEASE-NOTES.html\n>\n\nIs anything going to be changed for this? Since the recent commit [1]\nwhen building the docs now each time I need to first hack (e.g. either\nthe Makefile or stylesheet-html-common.xml) to declare the missing\n‘autolink.index.see’ variable. I know that my old OS is approaching\nEOL but I thought my docbook installation was still valid.\n\n======\n[1] https://github.com/postgres/postgres/commit/b0f0a9432d0b6f53634a96715f2666f6d4ea25a1\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Tue, 30 Jan 2024 10:01:25 +1100",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Make documentation builds reproducible"
},
{
"msg_contents": "Peter Smith <[email protected]> writes:\n>> IIUC these releases notes [1] say autolink.index.see existed since\n>> v1.79.1, but unfortunately, that is more recent than my ancient\n>> installed v1.78.1\n\n> Is anything going to be changed for this?\n\nI assume Peter E. is going to address it, but FOSDEM is this week and\nso a lot of people are going to be busy traveling and conferencing\nrather than hacking. Things might not happen right away.\n\nYou could possibly help move things along if you can propose a\nworkable patch.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 29 Jan 2024 18:36:16 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Make documentation builds reproducible"
},
{
"msg_contents": "On 23.01.24 02:06, Peter Smith wrote:\n> This has been working forever, but seems to have broken due to commit\n> [1] having an undeclared variable.\n\n> runtime error: file stylesheet-html-common.xsl line 452 element if\n> Variable 'autolink.index.see' has not been declared.\n> make: *** [html-stamp] Error 10\n\nI have committed a fix for this. I have successfully tested docbook-xsl \n1.77.1 through 1.79.*.\n\n\n\n",
"msg_date": "Thu, 8 Feb 2024 11:47:23 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Make documentation builds reproducible"
},
{
"msg_contents": "On Thu, Feb 8, 2024 at 9:47 PM Peter Eisentraut <[email protected]> wrote:\n>\n> On 23.01.24 02:06, Peter Smith wrote:\n> > This has been working forever, but seems to have broken due to commit\n> > [1] having an undeclared variable.\n>\n> > runtime error: file stylesheet-html-common.xsl line 452 element if\n> > Variable 'autolink.index.see' has not been declared.\n> > make: *** [html-stamp] Error 10\n>\n> I have committed a fix for this. I have successfully tested docbook-xsl\n> 1.77.1 through 1.79.*.\n>\n\nYes, the latest is working for me now. Thanks!\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia.\n\n\n",
"msg_date": "Fri, 9 Feb 2024 10:22:33 +1100",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Make documentation builds reproducible"
}
] |
[
{
"msg_contents": "Hi\n\nI was playing around with \"pg_stat_get_backend_subxact()\" (commit 10ea0f924)\nand see it emits NULL values for some backends, e.g.:\n\n postgres=# \\pset null NULL\n Null display is \"NULL\".\n\n postgres=# SELECT id, pg_stat_get_backend_pid(id), s.*,\n pg_stat_get_backend_activity (id)\n FROM pg_stat_get_backend_idset() id\n JOIN LATERAL pg_stat_get_backend_subxact(id) AS s ON TRUE;\n id | pg_stat_get_backend_pid | subxact_count |\nsubxact_overflowed | pg_stat_get_backend_activity\n -----+-------------------------+---------------+--------------------+------------------------------------------------------------\n 1 | 3175972 | 0 | f\n | <command string not enabled>\n 2 | 3175973 | 0 | f\n | <command string not enabled>\n 3 | 3177889 | 0 | f\n | SELECT id, pg_stat_get_backend_pid(id), s.*, +\n | | |\n | pg_stat_get_backend_activity (id) +\n | | |\n | FROM pg_stat_get_backend_idset() id +\n | | |\n | JOIN LATERAL pg_stat_get_backend_subxact(id) AS s ON TRUE;\n 4 | 3176027 | 5 | f\n | savepoint s4;\n 256 | 3175969 | NULL | NULL\n | <command string not enabled>\n 258 | 3175968 | NULL | NULL\n | <command string not enabled>\n 259 | 3175971 | NULL | NULL\n | <command string not enabled>\n (7 rows)\n\nReading through the thread [1], it looks like 0/false are intended to be\nreturned for non-backend processes too [2], so it seems odd that NULL/NULL is\ngetting returned in some cases, especially as that's what's returned if a\nnon-existent backend ID is provided.\n\n[1] https://www.postgresql.org/message-id/flat/CAFiTN-uvYAofNRaGF4R%2Bu6_OrABdkqNRoX7V6%2BPP3H_0HuYMwg%40mail.gmail.com\n[2] https://www.postgresql.org/message-id/flat/CAFiTN-ut0uwkRJDQJeDPXpVyTWD46m3gt3JDToE02hTfONEN%3DQ%40mail.gmail.com#821f6f40e91314066390efd06d71d5ac\n\nLooking at the code, this is happening because\n\"pgstat_fetch_stat_local_beentry()\"\nexpects to be passed the backend ID as an integer representing a 1-based index\nreferring to \"localBackendStatusTable\", but \"pg_stat_get_backend_subxact()\"\nis presumably intended to take the actual BackendId , as per other\n\"pg_stat_get_XXX()\"\nfunctions.\n\nAlso, the comment for \"pgstat_fetch_stat_local_beentry()\" says:\n\n Returns NULL if the argument is out of range (no current caller does that).\n\nso the last part is currently incorrect.\n\nAssuming I am not misunderstanding something here (always a\npossibility, apologies\nin advance if this is merely noise), what is actually needed is a function which\naccepts a BackendId (as per \"pgstat_fetch_stat_beentry()\"), but returns a\nLocalPgBackendStatus (as per \"pgstat_fetch_stat_local_beentry()\") like the\nattached, clumsily named \"pgstat_fetch_stat_backend_local_beentry()\".\n\nRegards\n\nIan Barwick",
"msg_date": "Thu, 24 Aug 2023 10:22:49 +0900",
"msg_from": "Ian Lawrence Barwick <[email protected]>",
"msg_from_op": true,
"msg_subject": "pg_stat_get_backend_subxact() and backend IDs?"
},
{
"msg_contents": "On Thu, Aug 24, 2023 at 10:22:49AM +0900, Ian Lawrence Barwick wrote:\n> Looking at the code, this is happening because\n> \"pgstat_fetch_stat_local_beentry()\"\n> expects to be passed the backend ID as an integer representing a 1-based index\n> referring to \"localBackendStatusTable\", but \"pg_stat_get_backend_subxact()\"\n> is presumably intended to take the actual BackendId , as per other\n> \"pg_stat_get_XXX()\"\n> functions.\n\nYes, this was changed in d7e39d7, but 10ea0f9 seems to have missed the\nmemo.\n\n> Assuming I am not misunderstanding something here (always a\n> possibility, apologies\n> in advance if this is merely noise), what is actually needed is a function which\n> accepts a BackendId (as per \"pgstat_fetch_stat_beentry()\"), but returns a\n> LocalPgBackendStatus (as per \"pgstat_fetch_stat_local_beentry()\") like the\n> attached, clumsily named \"pgstat_fetch_stat_backend_local_beentry()\".\n\nI think you are right. The relevant information is only available in\nLocalPgBackendStatus, but there's presently no helper function for\nobtaining the \"local\" status with the BackendId.\n\n> +LocalPgBackendStatus *\n> +pgstat_fetch_stat_backend_local_beentry(BackendId beid)\n> +{\n> +\tLocalPgBackendStatus key;\n> +\n> +\tpgstat_read_current_status();\n> +\n> +\t/*\n> +\t * Since the localBackendStatusTable is in order by backend_id, we can use\n> +\t * bsearch() to search it efficiently.\n> +\t */\n> +\tkey.backend_id = beid;\n> +\n> +\treturn (LocalPgBackendStatus *) bsearch(&key, localBackendStatusTable,\n> +\t\t\t\t\t\t\t\t\t\t\tlocalNumBackends,\n> +\t\t\t\t\t\t\t\t\t\t\tsizeof(LocalPgBackendStatus),\n> +\t\t\t\t\t\t\t\t\t\t\tcmp_lbestatus);\n> +}\n\nWe could probably modify pgstat_fetch_stat_beentry() to use this new\nfunction. I suspect we'll want to work on the naming, too. Maybe we could\nname them pg_stat_fetch_local_beentry_by_index() and\npg_stat_fetch_local_beentry_by_backendid().\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 23 Aug 2023 19:32:06 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_get_backend_subxact() and backend IDs?"
},
{
"msg_contents": "On Wed, Aug 23, 2023 at 07:32:06PM -0700, Nathan Bossart wrote:\n> On Thu, Aug 24, 2023 at 10:22:49AM +0900, Ian Lawrence Barwick wrote:\n>> Looking at the code, this is happening because\n>> \"pgstat_fetch_stat_local_beentry()\"\n>> expects to be passed the backend ID as an integer representing a 1-based index\n>> referring to \"localBackendStatusTable\", but \"pg_stat_get_backend_subxact()\"\n>> is presumably intended to take the actual BackendId , as per other\n>> \"pg_stat_get_XXX()\"\n>> functions.\n> \n> Yes, this was changed in d7e39d7, but 10ea0f9 seems to have missed the\n> memo.\n\nBTW I'd argue that this is a bug in v16 that we should try to fix before\nGA, so I've added an open item [0]. I assigned it to Robert (CC'd) since\nhe was the committer, but I'm happy to pick it up.\n\n[0] https://wiki.postgresql.org/wiki/PostgreSQL_16_Open_Items#Open_Issues\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 23 Aug 2023 19:51:40 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_get_backend_subxact() and backend IDs?"
},
{
"msg_contents": "On Wed, Aug 23, 2023 at 07:51:40PM -0700, Nathan Bossart wrote:\n> On Wed, Aug 23, 2023 at 07:32:06PM -0700, Nathan Bossart wrote:\n>> On Thu, Aug 24, 2023 at 10:22:49AM +0900, Ian Lawrence Barwick wrote:\n>>> Looking at the code, this is happening because\n>>> \"pgstat_fetch_stat_local_beentry()\"\n>>> expects to be passed the backend ID as an integer representing a 1-based index\n>>> referring to \"localBackendStatusTable\", but \"pg_stat_get_backend_subxact()\"\n>>> is presumably intended to take the actual BackendId , as per other\n>>> \"pg_stat_get_XXX()\"\n>>> functions.\n>> \n>> Yes, this was changed in d7e39d7, but 10ea0f9 seems to have missed the\n>> memo.\n> \n> BTW I'd argue that this is a bug in v16 that we should try to fix before\n> GA, so I've added an open item [0]. I assigned it to Robert (CC'd) since\n> he was the committer, but I'm happy to pick it up.\n\nSince RC1 is fast approaching, I put together a revised patch set. 0001\nrenames the existing pgstat_fetch_stat* functions, and 0002 adds\npgstat_get_local_beentry_by_backend_id() and uses it for\npg_stat_get_backend_subxact(). Thoughts?\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Thu, 24 Aug 2023 09:19:13 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_get_backend_subxact() and backend IDs?"
},
{
"msg_contents": "2023年8月25日(金) 1:19 Nathan Bossart <[email protected]>:\n>\n> On Wed, Aug 23, 2023 at 07:51:40PM -0700, Nathan Bossart wrote:\n> > On Wed, Aug 23, 2023 at 07:32:06PM -0700, Nathan Bossart wrote:\n> >> On Thu, Aug 24, 2023 at 10:22:49AM +0900, Ian Lawrence Barwick wrote:\n> >>> Looking at the code, this is happening because\n> >>> \"pgstat_fetch_stat_local_beentry()\"\n> >>> expects to be passed the backend ID as an integer representing a 1-based index\n> >>> referring to \"localBackendStatusTable\", but \"pg_stat_get_backend_subxact()\"\n> >>> is presumably intended to take the actual BackendId , as per other\n> >>> \"pg_stat_get_XXX()\"\n> >>> functions.\n> >>\n> >> Yes, this was changed in d7e39d7, but 10ea0f9 seems to have missed the\n> >> memo.\n> >\n> > BTW I'd argue that this is a bug in v16 that we should try to fix before\n> > GA, so I've added an open item [0]. I assigned it to Robert (CC'd) since\n> > he was the committer, but I'm happy to pick it up.\n>\n> Since RC1 is fast approaching, I put together a revised patch set. 0001\n> renames the existing pgstat_fetch_stat* functions, and 0002 adds\n> pgstat_get_local_beentry_by_backend_id() and uses it for\n> pg_stat_get_backend_subxact(). Thoughts?\n\nThanks for looking at this. In summary we now have these functions:\n\n extern PgBackendStatus *pgstat_get_beentry_by_backend_id(BackendId beid);\n extern LocalPgBackendStatus\n*pgstat_get_local_beentry_by_backend_id(BackendId beid);\n extern LocalPgBackendStatus *pgstat_get_local_beentry_by_index(int beid);\n\nwhich LGTM; patches work as expected and resolve the reported issue.\n\nRegards\n\nIan Barwick\n\n\n",
"msg_date": "Fri, 25 Aug 2023 09:36:18 +0900",
"msg_from": "Ian Lawrence Barwick <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_get_backend_subxact() and backend IDs?"
},
{
"msg_contents": "I tested the patch and it does the correct thing.\r\n\r\nI have a few comments:\r\n\r\n1/ cast the return of bsearch. This was done previously and is the common\r\nconvention in the code.\r\n\r\nSo\r\n\r\n+ return bsearch(&key, localBackendStatusTable, localNumBackends,\r\n+ sizeof(LocalPgBackendStatus), cmp_lbestatus);\r\n\r\nShould be\r\n\r\n+ return (LocalPgBackendStatus *) bsearch(&key, localBackendStatusTable, localNumBackends,\r\n+ sizeof(LocalPgBackendStatus), cmp_lbestatus);\r\n\r\n2/ This will probably be a good time to update the docs for pg_stat_get_backend_subxact [1]\r\nto call out that \"subxact_count\" will \"only increase if a transaction is performing writes\". Also to link\r\nthe reader to the subtransactions doc [2].\r\n\r\n\r\n1. https://www.postgresql.org/docs/16/monitoring-stats.html#WAIT-EVENT-TIMEOUT-TABLE\r\n2. https://www.postgresql.org/docs/16/subxacts.html\r\n\r\n\r\nRegards,\r\n\r\nSami Imseih\r\nAmazon Web Services (AWS)\r\n\r\n",
"msg_date": "Fri, 25 Aug 2023 15:01:40 +0000",
"msg_from": "\"Imseih (AWS), Sami\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_get_backend_subxact() and backend IDs?"
},
{
"msg_contents": "On Fri, Aug 25, 2023 at 09:36:18AM +0900, Ian Lawrence Barwick wrote:\n> Thanks for looking at this. In summary we now have these functions:\n> \n> extern PgBackendStatus *pgstat_get_beentry_by_backend_id(BackendId beid);\n> extern LocalPgBackendStatus\n> *pgstat_get_local_beentry_by_backend_id(BackendId beid);\n> extern LocalPgBackendStatus *pgstat_get_local_beentry_by_index(int beid);\n> \n> which LGTM; patches work as expected and resolve the reported issue.\n\nOn second thought, renaming these exported functions so close to release is\nprobably not a great idea. I should probably skip back-patching that one.\nOr I could have the existing functions call the new ones in v16 for\nbackward compatibility...\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 25 Aug 2023 08:32:51 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_get_backend_subxact() and backend IDs?"
},
{
"msg_contents": "On Fri, Aug 25, 2023 at 03:01:40PM +0000, Imseih (AWS), Sami wrote:\n> 1/ cast the return of bsearch. This was done previously and is the common\n> convention in the code.\n\nWill do.\n\n> 2/ This will probably be a good time to update the docs for pg_stat_get_backend_subxact [1]\n> to call out that \"subxact_count\" will \"only increase if a transaction is performing writes\". Also to link\n> the reader to the subtransactions doc [2].\n\nI'd rather keep this patch focused on fixing the bug, given we are so close\nto the v16 release.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 25 Aug 2023 08:36:21 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_get_backend_subxact() and backend IDs?"
},
{
"msg_contents": "On Fri, Aug 25, 2023 at 08:32:51AM -0700, Nathan Bossart wrote:\n> On second thought, renaming these exported functions so close to release is\n> probably not a great idea. I should probably skip back-patching that one.\n> Or I could have the existing functions call the new ones in v16 for\n> backward compatibility...\n\nHere is a new version of the patch that avoids changing the names of the\nexisting functions. I'm not thrilled about the name\n(pgstat_fetch_stat_local_beentry_by_backend_id), so I am open to\nsuggestions. In any case, I'd like to rename all three of the\npgstat_fetch_stat_* functions in v17.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Fri, 25 Aug 2023 12:29:49 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_get_backend_subxact() and backend IDs?"
},
{
"msg_contents": "> Here is a new version of the patch that avoids changing the names of the\r\n> existing functions. I'm not thrilled about the name\r\n> (pgstat_fetch_stat_local_beentry_by_backend_id), so I am open to\r\n> suggestions. In any case, I'd like to rename all three of the>\r\n> pgstat_fetch_stat_* functions in v17.\r\n\r\nThanks for the updated patch.\r\n\r\nI reviewed/tested the latest version and I don't have any more comments.\r\n\r\nRegards,\r\n\r\nSami\r\n\r\n",
"msg_date": "Fri, 25 Aug 2023 22:56:14 +0000",
"msg_from": "\"Imseih (AWS), Sami\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_get_backend_subxact() and backend IDs?"
},
{
"msg_contents": "On Fri, Aug 25, 2023 at 10:56:14PM +0000, Imseih (AWS), Sami wrote:\n> > Here is a new version of the patch that avoids changing the names of the\n> > existing functions. I'm not thrilled about the name\n> > (pgstat_fetch_stat_local_beentry_by_backend_id), so I am open to\n> > suggestions. In any case, I'd like to rename all three of the>\n> > pgstat_fetch_stat_* functions in v17.\n> \n> Thanks for the updated patch.\n> \n> I reviewed/tested the latest version and I don't have any more comments.\n\nFWIW, I find the new routine introduced by this patch rather\nconfusing. pgstat_fetch_stat_local_beentry() and\npgstat_fetch_stat_local_beentry_by_backend_id() use the same \nargument name for a BackendId or an int. This is not entirely the\nfault of this patch as pg_stat_get_backend_subxact() itself is\nconfused about \"beid\" being a uint32 or a BackendId. However, I think\nthat this makes much harder to figure out that\npgstat_fetch_stat_local_beentry() is only here because it is cheaper \nto do sequential scan of all the local beentries rather than a\nbsearch() for all its callers, while\npgstat_fetch_stat_local_beentry_by_backend_id() is here because we\nwant to retrieve the local beentry matching with the *backend ID* with\nthe binary search().\n\nI understand that this is not a fantastic naming, but renaming\npgstat_fetch_stat_local_beentry() to something like\npgstat_fetch_stat_local_beentry_by_{index|position}_id() would make\nthe difference much easier to grasp, and we should avoid the use of\n\"beid\" when we refer to the *position/index ID* in\nlocalBackendStatusTable, because it is not a BackendId at all, just a\nposition in the local array.\n--\nMichael",
"msg_date": "Mon, 28 Aug 2023 10:53:52 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_get_backend_subxact() and backend IDs?"
},
{
"msg_contents": "On Mon, Aug 28, 2023 at 10:53:52AM +0900, Michael Paquier wrote:\n> I understand that this is not a fantastic naming, but renaming\n> pgstat_fetch_stat_local_beentry() to something like\n> pgstat_fetch_stat_local_beentry_by_{index|position}_id() would make\n> the difference much easier to grasp, and we should avoid the use of\n> \"beid\" when we refer to the *position/index ID* in\n> localBackendStatusTable, because it is not a BackendId at all, just a\n> position in the local array.\n\nThis was my first reaction [0]. I was concerned about renaming the\nexported functions so close to release, so I was suggesting that we hold\noff on that part until v17. If there isn't a concern with renaming these\nfunctions in v16, I can proceed with something more like v2.\n\n[0] https://postgr.es/m/20230824161913.GA1394441%40nathanxps13.lan\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 29 Aug 2023 09:46:55 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_get_backend_subxact() and backend IDs?"
},
{
"msg_contents": "On Tue, Aug 29, 2023 at 09:46:55AM -0700, Nathan Bossart wrote:\n> This was my first reaction [0]. I was concerned about renaming the\n> exported functions so close to release, so I was suggesting that we hold\n> off on that part until v17. If there isn't a concern with renaming these\n> functions in v16, I can proceed with something more like v2.\n\nThanks for the pointer. This version is much better IMO, because it\nremoves entirely the source of the confusion between the difference in\nbackend ID and index ID treatments when fetching the local entries in\nthe array. So I'm okay to rename these functions now, before .0 is\nreleased to get things in a better shape while addressing the issue\nreported.\n\n+extern LocalPgBackendStatus *pgstat_get_local_beentry_by_index(int beid); \n\nStill I would to a bit more of s/beid/id/ for cases where the code\nrefers to an index ID, and not a backend ID, especially for the\ninternal routines.\n--\nMichael",
"msg_date": "Wed, 30 Aug 2023 08:22:27 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_get_backend_subxact() and backend IDs?"
},
{
"msg_contents": "On Wed, Aug 30, 2023 at 08:22:27AM +0900, Michael Paquier wrote:\n> On Tue, Aug 29, 2023 at 09:46:55AM -0700, Nathan Bossart wrote:\n>> This was my first reaction [0]. I was concerned about renaming the\n>> exported functions so close to release, so I was suggesting that we hold\n>> off on that part until v17. If there isn't a concern with renaming these\n>> functions in v16, I can proceed with something more like v2.\n> \n> Thanks for the pointer. This version is much better IMO, because it\n> removes entirely the source of the confusion between the difference in\n> backend ID and index ID treatments when fetching the local entries in\n> the array. So I'm okay to rename these functions now, before .0 is\n> released to get things in a better shape while addressing the issue\n> reported.\n\nOkay.\n\n> +extern LocalPgBackendStatus *pgstat_get_local_beentry_by_index(int beid); \n> \n> Still I would to a bit more of s/beid/id/ for cases where the code\n> refers to an index ID, and not a backend ID, especially for the\n> internal routines.\n\nMakes sense. I did this in v4.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Tue, 29 Aug 2023 19:01:51 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_get_backend_subxact() and backend IDs?"
},
{
"msg_contents": "On Tue, Aug 29, 2023 at 07:01:51PM -0700, Nathan Bossart wrote:\n> On Wed, Aug 30, 2023 at 08:22:27AM +0900, Michael Paquier wrote:\n>> +extern LocalPgBackendStatus *pgstat_get_local_beentry_by_index(int beid); \n>> \n>> Still I would to a bit more of s/beid/id/ for cases where the code\n>> refers to an index ID, and not a backend ID, especially for the\n>> internal routines.\n> \n> Makes sense. I did this in v4.\n\nYep, that looks more consistent, at quick glance.\n--\nMichael",
"msg_date": "Wed, 30 Aug 2023 13:13:31 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_get_backend_subxact() and backend IDs?"
},
{
"msg_contents": "On Wed, Aug 30, 2023 at 12:13 AM Michael Paquier <[email protected]> wrote:\n> Yep, that looks more consistent, at quick glance.\n\nSorry, I'm only just noticing this thread. Thanks, Nathan, Ian, and\nothers, for your work on this. Apart from hoping that the 0002 patch\nwill get a more detailed commit message spelling out the problem very\nexplicitly, I don't have any comments on the proposed patches.\n\n--\nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 30 Aug 2023 09:50:41 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_get_backend_subxact() and backend IDs?"
},
{
"msg_contents": "On Wed, Aug 30, 2023 at 09:50:41AM -0400, Robert Haas wrote:\n> Sorry, I'm only just noticing this thread. Thanks, Nathan, Ian, and\n> others, for your work on this. Apart from hoping that the 0002 patch\n> will get a more detailed commit message spelling out the problem very\n> explicitly, I don't have any comments on the proposed patches.\n\nI'm about to spend way too much time writing the commit message for 0002,\nbut I plan to commit both patches sometime today.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 30 Aug 2023 07:27:55 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_get_backend_subxact() and backend IDs?"
},
{
"msg_contents": "On Wed, Aug 30, 2023 at 10:27 AM Nathan Bossart\n<[email protected]> wrote:\n> On Wed, Aug 30, 2023 at 09:50:41AM -0400, Robert Haas wrote:\n> > Sorry, I'm only just noticing this thread. Thanks, Nathan, Ian, and\n> > others, for your work on this. Apart from hoping that the 0002 patch\n> > will get a more detailed commit message spelling out the problem very\n> > explicitly, I don't have any comments on the proposed patches.\n>\n> I'm about to spend way too much time writing the commit message for 0002,\n> but I plan to commit both patches sometime today.\n\nThanks! I'm glad your committing the patches, and I approve of you\nspending way too much time on the commit message. :-)\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 30 Aug 2023 10:56:22 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_get_backend_subxact() and backend IDs?"
},
{
"msg_contents": "On Wed, Aug 30, 2023 at 10:56:22AM -0400, Robert Haas wrote:\n> On Wed, Aug 30, 2023 at 10:27 AM Nathan Bossart\n> <[email protected]> wrote:\n>> I'm about to spend way too much time writing the commit message for 0002,\n>> but I plan to commit both patches sometime today.\n> \n> Thanks! I'm glad your committing the patches, and I approve of you\n> spending way too much time on the commit message. :-)\n\nCommitted.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 30 Aug 2023 14:56:48 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_get_backend_subxact() and backend IDs?"
},
{
"msg_contents": "On Wed, Aug 30, 2023 at 02:56:48PM -0700, Nathan Bossart wrote:\n> Committed.\n\nCool, thanks!\n--\nMichael",
"msg_date": "Thu, 31 Aug 2023 13:22:53 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_get_backend_subxact() and backend IDs?"
},
{
"msg_contents": "On Thu, Aug 31, 2023 at 4:38 AM Nathan Bossart <[email protected]> wrote:\n>\n> On Wed, Aug 30, 2023 at 10:56:22AM -0400, Robert Haas wrote:\n> > On Wed, Aug 30, 2023 at 10:27 AM Nathan Bossart\n> > <[email protected]> wrote:\n> >> I'm about to spend way too much time writing the commit message for 0002,\n> >> but I plan to commit both patches sometime today.\n> >\n> > Thanks! I'm glad your committing the patches, and I approve of you\n> > spending way too much time on the commit message. :-)\n>\n> Committed.\n\nSorry, I didn't notice this thread earlier. The new behavior looks\nbetter to me, thanks for working on it.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 31 Aug 2023 16:47:21 +0530",
"msg_from": "Dilip Kumar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_get_backend_subxact() and backend IDs?"
},
{
"msg_contents": "2023年8月31日(木) 6:56 Nathan Bossart <[email protected]>:\n>\n> On Wed, Aug 30, 2023 at 10:56:22AM -0400, Robert Haas wrote:\n> > On Wed, Aug 30, 2023 at 10:27 AM Nathan Bossart\n> > <[email protected]> wrote:\n> >> I'm about to spend way too much time writing the commit message for 0002,\n> >> but I plan to commit both patches sometime today.\n> >\n> > Thanks! I'm glad your committing the patches, and I approve of you\n> > spending way too much time on the commit message. :-)\n>\n> Committed.\n\nThanks for taking care of this (saw the commits, then got distracted\nby something else and forgot to follow up).\n\nRegards\n\nIan Barwick\n\n\n",
"msg_date": "Mon, 18 Sep 2023 15:52:53 +0900",
"msg_from": "Ian Lawrence Barwick <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_get_backend_subxact() and backend IDs?"
}
] |
[
{
"msg_contents": "I suggest we rename this setting to something starting with debug_. \nRight now, the name looks much too tempting for users to fiddle with. I \nthink this is similar to force_parallel_mode.\n\nAlso, the descriptions in guc_tables.c could be improved. For example,\n\n gettext_noop(\"Controls when to replicate or apply each change.\"),\n\nis pretty content-free and unhelpful.\n\n\n",
"msg_date": "Thu, 24 Aug 2023 08:14:44 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "logical_replication_mode"
},
{
"msg_contents": "On Thu, Aug 24, 2023 at 12:45 PM Peter Eisentraut <[email protected]> wrote:\n>\n> I suggest we rename this setting to something starting with debug_.\n> Right now, the name looks much too tempting for users to fiddle with. I\n> think this is similar to force_parallel_mode.\n>\n\n+1. How about debug_logical_replication?\n\n> Also, the descriptions in guc_tables.c could be improved. For example,\n>\n> gettext_noop(\"Controls when to replicate or apply each change.\"),\n>\n> is pretty content-free and unhelpful.\n>\n\nThe other possibility I could think of is to change short_desc as:\n\"Allows to replicate each change for large transactions.\". Do you have\nany better ideas?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 25 Aug 2023 09:58:13 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: logical_replication_mode"
},
{
"msg_contents": "On Friday, August 25, 2023 12:28 PM Amit Kapila <[email protected]> wrote:\r\n> \r\n> On Thu, Aug 24, 2023 at 12:45 PM Peter Eisentraut <[email protected]>\r\n> wrote:\r\n> >\r\n> > I suggest we rename this setting to something starting with debug_.\r\n> > Right now, the name looks much too tempting for users to fiddle with.\r\n> > I think this is similar to force_parallel_mode.\r\n> >\r\n> \r\n> +1. How about debug_logical_replication?\r\n> \r\n> > Also, the descriptions in guc_tables.c could be improved. For\r\n> > example,\r\n> >\r\n> > gettext_noop(\"Controls when to replicate or apply each change.\"),\r\n> >\r\n> > is pretty content-free and unhelpful.\r\n> >\r\n> \r\n> The other possibility I could think of is to change short_desc as:\r\n> \"Allows to replicate each change for large transactions.\". Do you have any\r\n> better ideas?\r\n\r\nHow about \"Forces immediate streaming or serialization of changes in large\r\ntransactions.\" which is similar to the description in document.\r\n\r\nI agree that renaming it to debug_xx would be better and\r\nhere is a patch that tries to do this.\r\n\r\nBest Regards,\r\nHou zj",
"msg_date": "Fri, 25 Aug 2023 06:52:14 +0000",
"msg_from": "\"Zhijie Hou (Fujitsu)\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: logical_replication_mode"
},
{
"msg_contents": "On 25.08.23 08:52, Zhijie Hou (Fujitsu) wrote:\n> On Friday, August 25, 2023 12:28 PM Amit Kapila <[email protected]> wrote:\n>>\n>> On Thu, Aug 24, 2023 at 12:45 PM Peter Eisentraut <[email protected]>\n>> wrote:\n>>>\n>>> I suggest we rename this setting to something starting with debug_.\n>>> Right now, the name looks much too tempting for users to fiddle with.\n>>> I think this is similar to force_parallel_mode.\n>>>\n>>\n>> +1. How about debug_logical_replication?\n>>\n>>> Also, the descriptions in guc_tables.c could be improved. For\n>>> example,\n>>>\n>>> gettext_noop(\"Controls when to replicate or apply each change.\"),\n>>>\n>>> is pretty content-free and unhelpful.\n>>>\n>>\n>> The other possibility I could think of is to change short_desc as:\n>> \"Allows to replicate each change for large transactions.\". Do you have any\n>> better ideas?\n> \n> How about \"Forces immediate streaming or serialization of changes in large\n> transactions.\" which is similar to the description in document.\n> \n> I agree that renaming it to debug_xx would be better and\n> here is a patch that tries to do this.\n\nMaybe debug_logical_replication is too general? Something like \ndebug_logical_replication_streaming would be more concrete. (Or \ndebug_logical_streaming.) Is that an appropriate name for what it's doing?\n\n\n\n",
"msg_date": "Fri, 25 Aug 2023 09:08:45 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: logical_replication_mode"
},
{
"msg_contents": "On Fri, Aug 25, 2023 at 12:38 PM Peter Eisentraut <[email protected]> wrote:\n>\n> On 25.08.23 08:52, Zhijie Hou (Fujitsu) wrote:\n> > On Friday, August 25, 2023 12:28 PM Amit Kapila <[email protected]> wrote:\n> >>\n> >> On Thu, Aug 24, 2023 at 12:45 PM Peter Eisentraut <[email protected]>\n> >> wrote:\n> >>>\n> >>> I suggest we rename this setting to something starting with debug_.\n> >>> Right now, the name looks much too tempting for users to fiddle with.\n> >>> I think this is similar to force_parallel_mode.\n> >>>\n> >>\n> >> +1. How about debug_logical_replication?\n> >>\n> >>> Also, the descriptions in guc_tables.c could be improved. For\n> >>> example,\n> >>>\n> >>> gettext_noop(\"Controls when to replicate or apply each change.\"),\n> >>>\n> >>> is pretty content-free and unhelpful.\n> >>>\n> >>\n> >> The other possibility I could think of is to change short_desc as:\n> >> \"Allows to replicate each change for large transactions.\". Do you have any\n> >> better ideas?\n> >\n> > How about \"Forces immediate streaming or serialization of changes in large\n> > transactions.\" which is similar to the description in document.\n> >\n> > I agree that renaming it to debug_xx would be better and\n> > here is a patch that tries to do this.\n>\n> Maybe debug_logical_replication is too general? Something like\n> debug_logical_replication_streaming would be more concrete.\n>\n\nYeah, that sounds better.\n\n> (Or\n> debug_logical_streaming.) Is that an appropriate name for what it's doing?\n>\n\nYes.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 25 Aug 2023 15:25:46 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: logical_replication_mode"
},
{
"msg_contents": "On Friday, August 25, 2023 5:56 PM Amit Kapila <[email protected]> wrote:\r\n> \r\n> On Fri, Aug 25, 2023 at 12:38 PM Peter Eisentraut <[email protected]> wrote:\r\n> >\r\n> > On 25.08.23 08:52, Zhijie Hou (Fujitsu) wrote:\r\n> > > On Friday, August 25, 2023 12:28 PM Amit Kapila\r\n> <[email protected]> wrote:\r\n> > >>\r\n> > >> On Thu, Aug 24, 2023 at 12:45 PM Peter Eisentraut\r\n> > >> <[email protected]>\r\n> > >> wrote:\r\n> > >>>\r\n> > >>> I suggest we rename this setting to something starting with debug_.\r\n> > >>> Right now, the name looks much too tempting for users to fiddle with.\r\n> > >>> I think this is similar to force_parallel_mode.\r\n> > >>>\r\n> > >>\r\n> > >> +1. How about debug_logical_replication?\r\n> > >>\r\n> > >>> Also, the descriptions in guc_tables.c could be improved. For\r\n> > >>> example,\r\n> > >>>\r\n> > >>> gettext_noop(\"Controls when to replicate or apply each\r\n> > >>> change.\"),\r\n> > >>>\r\n> > >>> is pretty content-free and unhelpful.\r\n> > >>>\r\n> > >>\r\n> > >> The other possibility I could think of is to change short_desc as:\r\n> > >> \"Allows to replicate each change for large transactions.\". Do you\r\n> > >> have any better ideas?\r\n> > >\r\n> > > How about \"Forces immediate streaming or serialization of changes in\r\n> > > large transactions.\" which is similar to the description in document.\r\n> > >\r\n> > > I agree that renaming it to debug_xx would be better and here is a\r\n> > > patch that tries to do this.\r\n> >\r\n> > Maybe debug_logical_replication is too general? Something like\r\n> > debug_logical_replication_streaming would be more concrete.\r\n> >\r\n> \r\n> Yeah, that sounds better.\r\n\r\nOK, here is the debug_logical_replication_streaming version.\r\n\r\nBest Regards,\r\nHou zj",
"msg_date": "Sun, 27 Aug 2023 12:05:40 +0000",
"msg_from": "\"Zhijie Hou (Fujitsu)\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: logical_replication_mode"
},
{
"msg_contents": "Hi Hou-san.\n\nI had a look at the patch 0001.\n\nIt looks OK to me, but here are a couple of comments:\n\n======\n\n1. Is this fix intended for PG16?\n\nI found some mention of this GUC old name lurking in the release v16 notes [1].\n\n~~~\n\n2. DebugLogicalRepStreamingMode\n\n-/* possible values for logical_replication_mode */\n+/* possible values for debug_logical_replication_streaming */\n typedef enum\n {\n- LOGICAL_REP_MODE_BUFFERED,\n- LOGICAL_REP_MODE_IMMEDIATE\n-} LogicalRepMode;\n+ DEBUG_LOGICAL_REP_STREAMING_BUFFERED,\n+ DEBUG_LOGICAL_REP_STREAMING_IMMEDIATE\n+} DebugLogicalRepStreamingMode;\n\nShouldn't this typedef name be included in the typedef.list file?\n\n------\n[1] https://www.postgresql.org/docs/16/release-16.html\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Tue, 29 Aug 2023 17:25:40 +1000",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: logical_replication_mode"
},
{
"msg_contents": "On Tue, Aug 29, 2023 at 12:56 PM Peter Smith <[email protected]> wrote:\n>\n> I had a look at the patch 0001.\n>\n> It looks OK to me, but here are a couple of comments:\n>\n> ======\n>\n> 1. Is this fix intended for PG16?\n>\n\nYes.\n\n> I found some mention of this GUC old name lurking in the release v16 notes [1].\n>\n\nThat should be changed as well but we can do that as a separate patch\njust for v16.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 29 Aug 2023 13:51:27 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: logical_replication_mode"
},
{
"msg_contents": "On Tuesday, August 29, 2023 3:26 PM Peter Smith <[email protected]> wrote:\r\n\r\nThanks for reviewing.\r\n\r\n> 2. DebugLogicalRepStreamingMode\r\n> \r\n> -/* possible values for logical_replication_mode */\r\n> +/* possible values for debug_logical_replication_streaming */\r\n> typedef enum\r\n> {\r\n> - LOGICAL_REP_MODE_BUFFERED,\r\n> - LOGICAL_REP_MODE_IMMEDIATE\r\n> -} LogicalRepMode;\r\n> + DEBUG_LOGICAL_REP_STREAMING_BUFFERED,\r\n> + DEBUG_LOGICAL_REP_STREAMING_IMMEDIATE\r\n> +} DebugLogicalRepStreamingMode;\r\n> \r\n> Shouldn't this typedef name be included in the typedef.list file?\r\n\r\nI think it's unnecessary to add this as there currently is no reference to the name.\r\nSee other similar examples like DebugParallelMode, RecoveryPrefetchValue ...\r\nAnd the name is also not included in BF[1]\r\n\r\n[1] https://buildfarm.postgresql.org/cgi-bin/typedefs.pl?branch=HEAD\r\n\r\nBest Regards,\r\nHou zj\r\n",
"msg_date": "Tue, 29 Aug 2023 09:44:54 +0000",
"msg_from": "\"Zhijie Hou (Fujitsu)\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: logical_replication_mode"
},
{
"msg_contents": "On 27.08.23 14:05, Zhijie Hou (Fujitsu) wrote:\n> On Friday, August 25, 2023 5:56 PM Amit Kapila <[email protected]> wrote:\n>>\n>> On Fri, Aug 25, 2023 at 12:38 PM Peter Eisentraut <[email protected]> wrote:\n>>>\n>>> On 25.08.23 08:52, Zhijie Hou (Fujitsu) wrote:\n>>>> On Friday, August 25, 2023 12:28 PM Amit Kapila\n>> <[email protected]> wrote:\n>>>>>\n>>>>> On Thu, Aug 24, 2023 at 12:45 PM Peter Eisentraut\n>>>>> <[email protected]>\n>>>>> wrote:\n>>>>>>\n>>>>>> I suggest we rename this setting to something starting with debug_.\n>>>>>> Right now, the name looks much too tempting for users to fiddle with.\n>>>>>> I think this is similar to force_parallel_mode.\n>>>>>>\n>>>>>\n>>>>> +1. How about debug_logical_replication?\n>>>>>\n>>>>>> Also, the descriptions in guc_tables.c could be improved. For\n>>>>>> example,\n>>>>>>\n>>>>>> gettext_noop(\"Controls when to replicate or apply each\n>>>>>> change.\"),\n>>>>>>\n>>>>>> is pretty content-free and unhelpful.\n>>>>>>\n>>>>>\n>>>>> The other possibility I could think of is to change short_desc as:\n>>>>> \"Allows to replicate each change for large transactions.\". Do you\n>>>>> have any better ideas?\n>>>>\n>>>> How about \"Forces immediate streaming or serialization of changes in\n>>>> large transactions.\" which is similar to the description in document.\n>>>>\n>>>> I agree that renaming it to debug_xx would be better and here is a\n>>>> patch that tries to do this.\n>>>\n>>> Maybe debug_logical_replication is too general? Something like\n>>> debug_logical_replication_streaming would be more concrete.\n>>>\n>>\n>> Yeah, that sounds better.\n> \n> OK, here is the debug_logical_replication_streaming version.\n\ncommitted\n\n\n\n",
"msg_date": "Tue, 29 Aug 2023 15:38:31 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: logical_replication_mode"
}
] |
[
{
"msg_contents": "During some refactoring I noticed that the field \nIndexInfo.ii_OpclassOptions is kind of useless. The IndexInfo struct is \nnotionally an executor support node, but this field is not used in the \nexecutor or by the index AM code. It is really just used in DDL code in \nindex.c and indexcmds.c to pass information around locally. For that, \nit would be clearer to just use local variables, like for other similar \ncases. With that change, we can also remove \nRelationGetIndexRawAttOptions(), which only had one caller left, for \nwhich it was overkill.",
"msg_date": "Thu, 24 Aug 2023 08:57:58 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Remove IndexInfo.ii_OpclassOptions field"
},
{
"msg_contents": "On Thu, Aug 24, 2023 at 08:57:58AM +0200, Peter Eisentraut wrote:\n> During some refactoring I noticed that the field IndexInfo.ii_OpclassOptions\n> is kind of useless. The IndexInfo struct is notionally an executor support\n> node, but this field is not used in the executor or by the index AM code.\n> It is really just used in DDL code in index.c and indexcmds.c to pass\n> information around locally. For that, it would be clearer to just use local\n> variables, like for other similar cases. With that change, we can also\n> remove RelationGetIndexRawAttOptions(), which only had one caller left, for\n> which it was overkill.\n\nI am not so sure. There is a very recent thread where it has been\npointed out that we have zero support for relcache invalidation with\nindex options, causing various problems:\nhttps://www.postgresql.org/message-id/CAGem3qAM7M7B3DdccpgepRxuoKPd2Y74qJ5NSNRjLiN21dPhgg%40mail.gmail.com\n\nPerhaps we'd better settle on the other one before deciding if the\nchange you are proposing here is adapted or not.\n--\nMichael",
"msg_date": "Fri, 25 Aug 2023 10:31:09 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Remove IndexInfo.ii_OpclassOptions field"
},
{
"msg_contents": "On 25.08.23 03:31, Michael Paquier wrote:\n> On Thu, Aug 24, 2023 at 08:57:58AM +0200, Peter Eisentraut wrote:\n>> During some refactoring I noticed that the field IndexInfo.ii_OpclassOptions\n>> is kind of useless. The IndexInfo struct is notionally an executor support\n>> node, but this field is not used in the executor or by the index AM code.\n>> It is really just used in DDL code in index.c and indexcmds.c to pass\n>> information around locally. For that, it would be clearer to just use local\n>> variables, like for other similar cases. With that change, we can also\n>> remove RelationGetIndexRawAttOptions(), which only had one caller left, for\n>> which it was overkill.\n> \n> I am not so sure. There is a very recent thread where it has been\n> pointed out that we have zero support for relcache invalidation with\n> index options, causing various problems:\n> https://www.postgresql.org/message-id/CAGem3qAM7M7B3DdccpgepRxuoKPd2Y74qJ5NSNRjLiN21dPhgg%40mail.gmail.com\n> \n> Perhaps we'd better settle on the other one before deciding if the\n> change you are proposing here is adapted or not.\n\nOk, I'll wait for the resolution of that.\n\nAt a glance, however, I think my patch is (a) not related, and (b) if it \nwere, it would probably *help*, because the change is to not allocate \nany long-lived structures that no one needs and that might get out of date.\n\n\n\n",
"msg_date": "Tue, 29 Aug 2023 10:51:10 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Remove IndexInfo.ii_OpclassOptions field"
},
{
"msg_contents": "On Tue, Aug 29, 2023 at 10:51:10AM +0200, Peter Eisentraut wrote:\n> At a glance, however, I think my patch is (a) not related, and (b) if it\n> were, it would probably *help*, because the change is to not allocate any\n> long-lived structures that no one needs and that might get out of date.\n\nHmm, yeah, perhaps you're right about (b) here. I have a few other\nhigh-priority items for stable branches on my board before being able\nto look at all this in more details, unfortunately, so feel free to\nignore me if you think that this is an improvement anyway even\nregarding the other issue discussed.\n--\nMichael",
"msg_date": "Wed, 30 Aug 2023 09:51:23 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Remove IndexInfo.ii_OpclassOptions field"
},
{
"msg_contents": "On 30.08.23 02:51, Michael Paquier wrote:\n> On Tue, Aug 29, 2023 at 10:51:10AM +0200, Peter Eisentraut wrote:\n>> At a glance, however, I think my patch is (a) not related, and (b) if it\n>> were, it would probably *help*, because the change is to not allocate any\n>> long-lived structures that no one needs and that might get out of date.\n> \n> Hmm, yeah, perhaps you're right about (b) here. I have a few other\n> high-priority items for stable branches on my board before being able\n> to look at all this in more details, unfortunately, so feel free to\n> ignore me if you think that this is an improvement anyway even\n> regarding the other issue discussed.\n\nI have committed this.\n\n\n\n",
"msg_date": "Tue, 3 Oct 2023 17:56:08 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Remove IndexInfo.ii_OpclassOptions field"
}
] |
[
{
"msg_contents": "Hi\n\nIn the function heapgetpage. If a table is not updated very frequently. \nMany actions in tuple loops are superfluous. For all_visible pages, \nloctup does not need to be assigned, nor does the \"valid\" variable. \nCheckForSerializableConflictOutNeeded from \nHeapCheckForSerializableConflictOut function, it only need to inspect at \nthe beginning of the cycle only once. Using vtune you can clearly see \nthe result (attached heapgetpage.jpg).\n\nSo by splitting the loop logic into two parts, the vtune results show \nsignificant improvement (attached heapgetpage-allvis.jpg).\n\nThe test data uses TPC-H's table \"orders\" with a scale=20, 30 million rows.\n\n\nQuan Zongliang",
"msg_date": "Thu, 24 Aug 2023 18:55:28 +0800",
"msg_from": "Quan Zongliang <[email protected]>",
"msg_from_op": true,
"msg_subject": "Improving the heapgetpage function improves performance in common\n scenarios"
},
{
"msg_contents": "On Thu, Aug 24, 2023 at 5:55 PM Quan Zongliang <[email protected]>\nwrote:\n\n> In the function heapgetpage. If a table is not updated very frequently.\n> Many actions in tuple loops are superfluous. For all_visible pages,\n> loctup does not need to be assigned, nor does the \"valid\" variable.\n> CheckForSerializableConflictOutNeeded from\n> HeapCheckForSerializableConflictOut function, it only need to inspect at\n\nThanks for submitting! A few weeks before this, there was another proposal,\nwhich specializes code for all paths, not just one. That patch also does so\nwithout duplicating the loop:\n\nhttps://www.postgresql.org/message-id/[email protected]\n\n> the beginning of the cycle only once. Using vtune you can clearly see\n> the result (attached heapgetpage.jpg).\n>\n> So by splitting the loop logic into two parts, the vtune results show\n> significant improvement (attached heapgetpage-allvis.jpg).\n\nFor future reference, it's not clear at all from the screenshots what the\nimprovement will be for the user. In the above thread, the author shares\ntesting methodology as well as timing measurements. This is useful for\nreproducibilty, as well as convincing others that the change is important.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com\n\nOn Thu, Aug 24, 2023 at 5:55 PM Quan Zongliang <[email protected]> wrote:> In the function heapgetpage. If a table is not updated very frequently.> Many actions in tuple loops are superfluous. For all_visible pages,> loctup does not need to be assigned, nor does the \"valid\" variable.> CheckForSerializableConflictOutNeeded from> HeapCheckForSerializableConflictOut function, it only need to inspect atThanks for submitting! A few weeks before this, there was another proposal, which specializes code for all paths, not just one. That patch also does so without duplicating the loop:https://www.postgresql.org/message-id/[email protected]> the beginning of the cycle only once. Using vtune you can clearly see> the result (attached heapgetpage.jpg).>> So by splitting the loop logic into two parts, the vtune results show> significant improvement (attached heapgetpage-allvis.jpg).For future reference, it's not clear at all from the screenshots what the improvement will be for the user. In the above thread, the author shares testing methodology as well as timing measurements. This is useful for reproducibilty, as well as convincing others that the change is important.--John NaylorEDB: http://www.enterprisedb.com",
"msg_date": "Tue, 5 Sep 2023 15:15:55 +0700",
"msg_from": "John Naylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improving the heapgetpage function improves performance in common\n scenarios"
},
{
"msg_contents": "\n\nOn 2023/9/5 16:15, John Naylor wrote:\n> \n> On Thu, Aug 24, 2023 at 5:55 PM Quan Zongliang <[email protected] \n> <mailto:[email protected]>> wrote:\n> \n> > In the function heapgetpage. If a table is not updated very frequently.\n> > Many actions in tuple loops are superfluous. For all_visible pages,\n> > loctup does not need to be assigned, nor does the \"valid\" variable.\n> > CheckForSerializableConflictOutNeeded from\n> > HeapCheckForSerializableConflictOut function, it only need to inspect at\n> \n> Thanks for submitting! A few weeks before this, there was another \n> proposal, which specializes code for all paths, not just one. That patch \n> also does so without duplicating the loop:\n> \n> https://www.postgresql.org/message-id/[email protected] <https://www.postgresql.org/message-id/[email protected]>\n> \nNice patch. I'm sorry I didn't notice it before.\n\n> > the beginning of the cycle only once. Using vtune you can clearly see\n> > the result (attached heapgetpage.jpg).\n> >\n> > So by splitting the loop logic into two parts, the vtune results show\n> > significant improvement (attached heapgetpage-allvis.jpg).\n> \n> For future reference, it's not clear at all from the screenshots what \n> the improvement will be for the user. In the above thread, the author \n> shares testing methodology as well as timing measurements. This is \n> useful for reproducibilty, as well as convincing others that the change \n> is important.\n> \nHere's how I test it\n EXPLAIN ANALYZE SELECT * FROM orders;\nMaybe the test wasn't good enough. Although the modified optimal result \nlooks good. Because it fluctuates a lot. It's hard to compare. The \nresults of vtune are therefore used.\n\nMy patch is mainly to eliminate:\n1, Assignment of \"loctup\" struct variable (in vtune you can see that \nthese 4 lines have a significant overhead: 0.4 1.0 0.2 0.4).\n2. Assignment of the \"valid\" variable.(overhead 0.6)\n3. HeapCheckForSerializableConflictOut function call.(overhead 0.6)\n\nAlthough these are not the same overhead from test to test. But all are \ntoo obvious to ignore. The screenshots are mainly to show the three \nimprovements mentioned above.\n\nI'll also try Andres Freund's test method next.\n\n> --\n> John Naylor\n> EDB: http://www.enterprisedb.com <http://www.enterprisedb.com>\n\n\n\n",
"msg_date": "Tue, 5 Sep 2023 17:27:03 +0800",
"msg_from": "Quan Zongliang <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Improving the heapgetpage function improves performance in common\n scenarios"
},
{
"msg_contents": "On Tue, Sep 5, 2023 at 4:27 PM Quan Zongliang <[email protected]>\nwrote:\n\n> Here's how I test it\n> EXPLAIN ANALYZE SELECT * FROM orders;\n\nNote that EXPLAIN ANALYZE has quite a bit of overhead, so it's not good for\nthese kinds of tests.\n\n> I'll also try Andres Freund's test method next.\n\nCommit f691f5b80a85 from today removes another source of overhead in this\nfunction, so I suggest testing against that, if you wish to test again.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com\n\nOn Tue, Sep 5, 2023 at 4:27 PM Quan Zongliang <[email protected]> wrote:> Here's how I test it> EXPLAIN ANALYZE SELECT * FROM orders;Note that EXPLAIN ANALYZE has quite a bit of overhead, so it's not good for these kinds of tests.> I'll also try Andres Freund's test method next.Commit f691f5b80a85 from today removes another source of overhead in this function, so I suggest testing against that, if you wish to test again.--John NaylorEDB: http://www.enterprisedb.com",
"msg_date": "Tue, 5 Sep 2023 17:46:44 +0700",
"msg_from": "John Naylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improving the heapgetpage function improves performance in common\n scenarios"
},
{
"msg_contents": "On 2023/9/5 18:46, John Naylor wrote:\n> \n> On Tue, Sep 5, 2023 at 4:27 PM Quan Zongliang <[email protected] \n> <mailto:[email protected]>> wrote:\n> \n> > Here's how I test it\n> > EXPLAIN ANALYZE SELECT * FROM orders;\n> \n> Note that EXPLAIN ANALYZE has quite a bit of overhead, so it's not good \n> for these kinds of tests.\n> \n> > I'll also try Andres Freund's test method next.\n> \n> Commit f691f5b80a85 from today removes another source of overhead in \n> this function, so I suggest testing against that, if you wish to test again.\n> \nTest with the latest code of the master branch, see the attached results.\n\nIf not optimized(--enable-debug CFLAGS='-O0'), there is a clear \ndifference. When the compiler does the optimization, the performance is \nsimilar. I think the compiler does a good enough optimization with \n\"pg_attribute_always_inline\" and the last two constant parameters when \ncalling heapgetpage_collect.\n\n\n> --\n> John Naylor\n> EDB: http://www.enterprisedb.com <http://www.enterprisedb.com>",
"msg_date": "Wed, 6 Sep 2023 15:50:37 +0800",
"msg_from": "Quan Zongliang <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Improving the heapgetpage function improves performance in common\n scenarios"
},
{
"msg_contents": "\n\nOn 2023/9/6 15:50, Quan Zongliang wrote:\n> \n> \n> On 2023/9/5 18:46, John Naylor wrote:\n>>\n>> On Tue, Sep 5, 2023 at 4:27 PM Quan Zongliang <[email protected] \n>> <mailto:[email protected]>> wrote:\n>>\n>> > Here's how I test it\n>> > EXPLAIN ANALYZE SELECT * FROM orders;\n>>\n>> Note that EXPLAIN ANALYZE has quite a bit of overhead, so it's not \n>> good for these kinds of tests.\n>>\n>> > I'll also try Andres Freund's test method next.\n>>\n>> Commit f691f5b80a85 from today removes another source of overhead in \n>> this function, so I suggest testing against that, if you wish to test \n>> again.\n>>\n> Test with the latest code of the master branch, see the attached results.\n> \n> If not optimized(--enable-debug CFLAGS='-O0'), there is a clear \n> difference. When the compiler does the optimization, the performance is \n> similar. I think the compiler does a good enough optimization with \n> \"pg_attribute_always_inline\" and the last two constant parameters when \n> calling heapgetpage_collect.\n> \nAdd a note. The first execution time of an attachment is not calculated \nin the average.\n\n> \n>> -- \n>> John Naylor\n>> EDB: http://www.enterprisedb.com <http://www.enterprisedb.com>\n\n\n\n",
"msg_date": "Wed, 6 Sep 2023 15:55:03 +0800",
"msg_from": "Quan Zongliang <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Improving the heapgetpage function improves performance in common\n scenarios"
},
{
"msg_contents": "On Wed, Sep 6, 2023 at 2:50 PM Quan Zongliang <[email protected]>\nwrote:\n\n> If not optimized(--enable-debug CFLAGS='-O0'), there is a clear\n> difference. When the compiler does the optimization, the performance is\n> similar. I think the compiler does a good enough optimization with\n> \"pg_attribute_always_inline\" and the last two constant parameters when\n> calling heapgetpage_collect.\n\nSo as we might expect, more specialization (Andres' patch) has no apparent\ndownsides in this workload. (While I'm not sure of the point of testing at\n-O0, I think we can conclude that less-bright compilers will show some\nimprovement with either patch.)\n\nIf you agree, do you want to withdraw your patch from the commit fest?\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com\n\nOn Wed, Sep 6, 2023 at 2:50 PM Quan Zongliang <[email protected]> wrote:> If not optimized(--enable-debug CFLAGS='-O0'), there is a clear> difference. When the compiler does the optimization, the performance is> similar. I think the compiler does a good enough optimization with> \"pg_attribute_always_inline\" and the last two constant parameters when> calling heapgetpage_collect.So as we might expect, more specialization (Andres' patch) has no apparent downsides in this workload. (While I'm not sure of the point of testing at -O0, I think we can conclude that less-bright compilers will show some improvement with either patch.)If you agree, do you want to withdraw your patch from the commit fest?--John NaylorEDB: http://www.enterprisedb.com",
"msg_date": "Wed, 6 Sep 2023 16:07:37 +0700",
"msg_from": "John Naylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improving the heapgetpage function improves performance in common\n scenarios"
},
{
"msg_contents": "\n\nOn 2023/9/6 17:07, John Naylor wrote:\n> \n> On Wed, Sep 6, 2023 at 2:50 PM Quan Zongliang <[email protected] \n> <mailto:[email protected]>> wrote:\n> \n> > If not optimized(--enable-debug CFLAGS='-O0'), there is a clear\n> > difference. When the compiler does the optimization, the performance is\n> > similar. I think the compiler does a good enough optimization with\n> > \"pg_attribute_always_inline\" and the last two constant parameters when\n> > calling heapgetpage_collect.\n> \n> So as we might expect, more specialization (Andres' patch) has no \n> apparent downsides in this workload. (While I'm not sure of the point of \n> testing at -O0, I think we can conclude that less-bright compilers will \n> show some improvement with either patch.)\n> \n> If you agree, do you want to withdraw your patch from the commit fest?\n> \nOk.\n> --\n> John Naylor\n> EDB: http://www.enterprisedb.com <http://www.enterprisedb.com>\n\n\n\n",
"msg_date": "Wed, 6 Sep 2023 17:56:28 +0800",
"msg_from": "Quan Zongliang <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Improving the heapgetpage function improves performance in common\n scenarios"
}
] |
[
{
"msg_contents": "Hi!\n\nRecently, I've been playing around with pg_lists and realize how annoying\n(maybe, I was a bit tired) some stuff related to the lists.\nFor an example, see this code\nList *l1 = list_make4(1, 2, 3, 4),\n *l2 = list_make4(5, 6, 7, 8),\n *l3 = list_make4(9, 0, 1, 2);\nListCell *lc1, *lc2, *lc3;\n\nforthree(lc1, l1, lc2, l2, lc3, l3) {\n...\n}\n\nlist_free(l1);\nlist_free(l2);\nlist_free(l3);\n\nThere are several questions:\n1) Why do I need to specify the number of elements in the list in the\nfunction name?\n Compiler already knew how much arguments do I use.\n2) Why I have to call free for every list? I don't know how to call it\nright, for now I call it vectorization.\n Why not to use simple wrapper to \"vectorize\" function args?\n\nSo, my proposal is:\n1) Add a simple macro to \"vectorize\" functions.\n2) Use this macro to \"vectorize\" list_free and list_free_deep functions.\n3) Use this macro to \"vectorize\" bms_free function.\n4) \"Vectorize\" list_makeN functions.\n\nFor this V1 version, I do not remove all list_makeN calls in order to\nreduce diff, but I'll address\nthis in future, if it will be needed.\n\nIn my view, one thing still waiting to be improved if foreach loop. It is\nnot very handy to have a bunch of\nsimilar calls foreach, forboth, forthree and etc. It will be ideal to have\nsingle foreach interface, but I don't know how\nto do it without overall interface of the loop.\n\nAny opinions are very welcome!\n\n-- \nBest regards,\nMaxim Orlov.",
"msg_date": "Thu, 24 Aug 2023 17:07:29 +0300",
"msg_from": "Maxim Orlov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Vectorization of some functions and improving pg_list interface"
},
{
"msg_contents": "On 2023-08-24 10:07, Maxim Orlov wrote:\n> 1) Why do I need to specify the number of elements in the list in the\n> function name?\n\nThis is reminding me of something someone (Tom?) worked on sort of\nrecently.\n\nAh, yes: \nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=1cff1b9\n\nI wasn't following closely, but that and the discussion link\nmay answer some questions.\n\nRegards,\n-Chap\n\n\n",
"msg_date": "Thu, 24 Aug 2023 10:19:29 -0400",
"msg_from": "Chapman Flack <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Vectorization of some functions and improving pg_list interface"
},
{
"msg_contents": "24.08.2023 17:07, Maxim Orlov wrote:\n> Hi!\n> \n> Recently, I've been playing around with pg_lists and realize how \n> annoying (maybe, I was a bit tired) some stuff related to the lists.\n> For an example, see this code\n> List *l1 = list_make4(1, 2, 3, 4),\n> *l2 = list_make4(5, 6, 7, 8),\n> *l3 = list_make4(9, 0, 1, 2);\n> ListCell *lc1, *lc2, *lc3;\n> \n> forthree(lc1, l1, lc2, l2, lc3, l3) {\n> ...\n> }\n> \n> list_free(l1);\n> list_free(l2);\n> list_free(l3);\n> \n> There are several questions:\n> 1) Why do I need to specify the number of elements in the list in the \n> function name?\n> Compiler already knew how much arguments do I use.\n> 2) Why I have to call free for every list? I don't know how to call it \n> right, for now I call it vectorization.\n> Why not to use simple wrapper to \"vectorize\" function args?\n> \n> So, my proposal is:\n> 1) Add a simple macro to \"vectorize\" functions.\n> 2) Use this macro to \"vectorize\" list_free and list_free_deep functions.\n> 3) Use this macro to \"vectorize\" bms_free function.\n> 4) \"Vectorize\" list_makeN functions.\n> \n> For this V1 version, I do not remove all list_makeN calls in order to \n> reduce diff, but I'll address\n> this in future, if it will be needed.\n> \n> In my view, one thing still waiting to be improved if foreach loop. It \n> is not very handy to have a bunch of\n> similar calls foreach, forboth, forthree and etc. It will be ideal to \n> have single foreach interface, but I don't know how\n> to do it without overall interface of the loop.\n> \n> Any opinions are very welcome!\n\nGiven use case doesn't assume \"zero\" arguments, it is possible to \nimplement \"lists_free\" with just macro expansion (following code is not \nchecked, but close to valid):\n\n#define VA_FOR_EACH(invoke, join, ...) \\\n\tCppConcat(VA_FOR_EACH_, VA_ARGS_NARGS(__VA_ARGS__))( \\\n\t\tinvoke, join, __VA_ARGS__)\n#define VA_FOR_EACH_1(invoke, join, a1) \\\n\tinvoke(a1)\n#define VA_FOR_EACH_2(invoke, join, a1, a2) \\\n\tinvoke(a1) join() invoke(a2)\n#define VA_FOR_EACH_3(invoke, join, a1, a2, a3) \\\n\tinvoke(a1) join() invoke(a2) join() invoke(a3)\n... up to 63 args\n\n#define VA_SEMICOLON() ;\n\n#define lists_free(...) \\\n\tVA_FOR_EACH(list_free, VA_SEMICOLON, __VA_ARGS__)\n\n#define lists_free_deep(...) \\\n\tVA_FOR_EACH(list_free_deep, VA_SEMICOLON, __VA_ARGS__)\n\nThere could be couple of issues with msvc, but they are solvable.\n\n------\n\nRegards,\nYura\n\n\n",
"msg_date": "Wed, 6 Sep 2023 13:24:20 +0300",
"msg_from": "Yura Sokolov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Vectorization of some functions and improving pg_list interface"
},
{
"msg_contents": "06.09.2023 13:24, Yura Sokolov wrote:\n> 24.08.2023 17:07, Maxim Orlov wrote:\n>> Hi!\n>>\n>> Recently, I've been playing around with pg_lists and realize how \n>> annoying (maybe, I was a bit tired) some stuff related to the lists.\n>> For an example, see this code\n>> List *l1 = list_make4(1, 2, 3, 4),\n>> *l2 = list_make4(5, 6, 7, 8),\n>> *l3 = list_make4(9, 0, 1, 2);\n>> ListCell *lc1, *lc2, *lc3;\n>>\n>> forthree(lc1, l1, lc2, l2, lc3, l3) {\n>> ...\n>> }\n>>\n>> list_free(l1);\n>> list_free(l2);\n>> list_free(l3);\n>>\n>> There are several questions:\n>> 1) Why do I need to specify the number of elements in the list in the \n>> function name?\n>> Compiler already knew how much arguments do I use.\n>> 2) Why I have to call free for every list? I don't know how to call it \n>> right, for now I call it vectorization.\n>> Why not to use simple wrapper to \"vectorize\" function args?\n>>\n>> So, my proposal is:\n>> 1) Add a simple macro to \"vectorize\" functions.\n>> 2) Use this macro to \"vectorize\" list_free and list_free_deep functions.\n>> 3) Use this macro to \"vectorize\" bms_free function.\n>> 4) \"Vectorize\" list_makeN functions.\n>>\n>> For this V1 version, I do not remove all list_makeN calls in order to \n>> reduce diff, but I'll address\n>> this in future, if it will be needed.\n>>\n>> In my view, one thing still waiting to be improved if foreach loop. It \n>> is not very handy to have a bunch of\n>> similar calls foreach, forboth, forthree and etc. It will be ideal to \n>> have single foreach interface, but I don't know how\n>> to do it without overall interface of the loop.\n>>\n>> Any opinions are very welcome!\n> \n> Given use case doesn't assume \"zero\" arguments, it is possible to \n> implement \"lists_free\" with just macro expansion (following code is not \n> checked, but close to valid):\n> \n> #define VA_FOR_EACH(invoke, join, ...) \\\n> CppConcat(VA_FOR_EACH_, VA_ARGS_NARGS(__VA_ARGS__))( \\\n> invoke, join, __VA_ARGS__)\n> #define VA_FOR_EACH_1(invoke, join, a1) \\\n> invoke(a1)\n> #define VA_FOR_EACH_2(invoke, join, a1, a2) \\\n> invoke(a1) join() invoke(a2)\n> #define VA_FOR_EACH_3(invoke, join, a1, a2, a3) \\\n> invoke(a1) join() invoke(a2) join() invoke(a3)\n> ... up to 63 args\n> \n> #define VA_SEMICOLON() ;\n> \n> #define lists_free(...) \\\n> VA_FOR_EACH(list_free, VA_SEMICOLON, __VA_ARGS__)\n> \n> #define lists_free_deep(...) \\\n> VA_FOR_EACH(list_free_deep, VA_SEMICOLON, __VA_ARGS__)\n> \n> There could be couple of issues with msvc, but they are solvable.\n\nGiven we could use C99 compound literals, list contruction could be \nimplemented without C vaarg functions as well\n\n List *\n list_make_impl(NodeTag t, int n, ListCell *datums)\n {\n List\t *list = new_list(t, n);\n memcpy(list->elements, datums, sizeof(ListCell)*n);\n return list;\n }\n\n #define VA_COMMA() ,\n\n #define list_make__m(Tag, type, ...) \\\n list_make_impl(Tag, VA_ARGS_NARGS(__VA_ARGS__), \\\n ((ListCell[]){ \\\n VA_FOR_EACH(list_make_##type##_cell, VA_COMMA, __VA_ARGS__) \\\n }))\n\n\n #define list_make(...) list_make__m(T_List, ptr, __VA_ARGS__)\n #define list_make_int(...) list_make__m(T_IntList, int, __VA_ARGS__)\n #define list_make_oid(...) list_make__m(T_OidList, oid, __VA_ARGS__)\n #define list_make_xid(...) list_make__m(T_XidList, xid, __VA_ARGS__)\n\n(code is not checked)\n\nIf zero arguments (no arguments) should be supported, it is tricky \nbecause of mvsc, but solvable.\n\n\n",
"msg_date": "Wed, 6 Sep 2023 13:40:44 +0300",
"msg_from": "Yura Sokolov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Vectorization of some functions and improving pg_list interface"
}
] |
[
{
"msg_contents": "The first patch on my \"Refactoring backend fork+exec code\" thread [0] \nchanges the allocations of BackgroundWorkerList from plain malloc() to \nMemoryContextAlloc(PostmasterContext). However, that actually caused a \nsegfault in worker_spi tests in EXEC_BACKEND mode.\n\nBackgroundWorkerList is a postmaster-private data structure and should \nnot be accessed in backends. That assumption failed in \nRegisterBackgroundWorker(). When you put worker_spi in \nshared_preload_libraries, its _PG_init() function calls \nRegisterBackgroundWorker(), as expected. But in EXEC_BACKEND mode, the \nlibrary is loaded *again* in each backend process, and each of those \nloads also call RegisterBackgroundWorker(). It's too late to correctly \nregister any static background workers at that stage, but \nRegisterBackgroundWorker() still goes through the motions and adds the \nelement to BackgroundWorkerList. If you change the malloc() to \nMemoryContextAlloc(PostmasterContext), it segfaults because \nPostmasterContext == NULL in a backend process.\n\nIn summary, RegisterBackgroundWorker() is doing some questionable and \nuseless work, when a shared preload library is loaded to a backend \nprocess in EXEC_BACKEND mode.\n\nAttached patches:\n\n1. Tighten/clarify those checks. See also commit message for details.\n2. The patch from the other thread to switch to \nMemoryContextAlloc(PostmasterContext)\n3. A fix for a highly misleading comment in the same file.\n\nAny comments?\n\n[0] \nhttps://www.postgresql.org/message-id/flat/7a59b073-5b5b-151e-7ed3-8b01ff7ce9ef%40iki.fi\n\nP.S. In addition to those, I also considered these changes but didn't \nimplement them yet:\n\n- Change RegisterBackgroundWorker() to return true/false to indicate \nwhether the registration succeeded. Currently, the caller has no way of \nknowing. In many cases, I think even an ERROR and refusing to start up \nthe server would be appropriate. But at least we should let the caller \nknow and decide.\n\n- Add \"Assert(am_postmaster)\" assertions to the functions in bgworker.c \nthat are only supposed to be called in postmaster process. The functions \nhave good explicit comments on that, but wouldn't hurt to also assert. \n(There is no 'am_postmaster' flag, the equivalent is actually \n(IsPostmasterEnvironment && !IsUnderPostmaster), but perhaps we should \ndefine a macro or flag for that)\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)",
"msg_date": "Thu, 24 Aug 2023 18:15:33 +0300",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Checks in RegisterBackgroundWorker.()"
},
{
"msg_contents": "On Fri, Aug 25, 2023 at 3:15 AM Heikki Linnakangas <[email protected]> wrote:\n> In summary, RegisterBackgroundWorker() is doing some questionable and\n> useless work, when a shared preload library is loaded to a backend\n> process in EXEC_BACKEND mode.\n\nYeah. When I was working on 7389aad6 (\"Use WaitEventSet API for\npostmaster's event loop.\"), I also tried to move all of the\npostmaster's state variables into PostmasterContext (since the only\nreason for that scope was the signal handler code that is now gone),\nand I hit a variant of this design problem. I wonder if that would be\nunblocked by this...\n\nhttps://www.postgresql.org/message-id/CA+hUKGKH_RPAo=NgPfHKj--565aL1qiVpUGdWt1_pmJehY+dmw@mail.gmail.com\n\n\n",
"msg_date": "Fri, 25 Aug 2023 09:00:27 +1200",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Checks in RegisterBackgroundWorker.()"
},
{
"msg_contents": "On 25/08/2023 00:00, Thomas Munro wrote:\n> On Fri, Aug 25, 2023 at 3:15 AM Heikki Linnakangas <[email protected]> wrote:\n>> In summary, RegisterBackgroundWorker() is doing some questionable and\n>> useless work, when a shared preload library is loaded to a backend\n>> process in EXEC_BACKEND mode.\n> \n> Yeah. When I was working on 7389aad6 (\"Use WaitEventSet API for\n> postmaster's event loop.\"), I also tried to move all of the\n> postmaster's state variables into PostmasterContext (since the only\n> reason for that scope was the signal handler code that is now gone),\n> and I hit a variant of this design problem. I wonder if that would be\n> unblocked by this...\n> \n> https://www.postgresql.org/message-id/CA+hUKGKH_RPAo=NgPfHKj--565aL1qiVpUGdWt1_pmJehY+dmw@mail.gmail.com\n\nA-ha, yes I believe this patch will unblock that. \nRegisterBackgroundWorker() has no legit reason to access \nBackgroundWorkerList in child processes, and with these patches, it no \nlonger does.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Tue, 19 Sep 2023 18:47:40 +0300",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Checks in RegisterBackgroundWorker.()"
},
{
"msg_contents": "Here's a new version of these patches. I fixed one comment and ran \npgindent, no other changes.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)",
"msg_date": "Wed, 27 Sep 2023 23:46:20 +0300",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Checks in RegisterBackgroundWorker.()"
},
{
"msg_contents": "On Thu, Sep 28, 2023 at 9:46 AM Heikki Linnakangas <[email protected]> wrote:\n> Here's a new version of these patches. I fixed one comment and ran\n> pgindent, no other changes.\n\n> Subject: [PATCH v2 1/3] Clarify the checks in RegisterBackgroundWorker.\n\nLGTM. I see it passes on CI, and I also tested locally with\nEXEC_BACKEND, with shared_preload_libraries=path/to/pg_prewarm.so\nwhich works fine.\n\n> Subject: [PATCH v2 2/3] Allocate Backend structs in PostmasterContext.\n\nLGTM. I checked that you preserve the behaviour on OOM (LOG), and you\nconverted free() to pfree() in code that runs in the postmaster, but\ndropped it in the code that runs in the child because all children\nshould delete PostmasterContext, making per-object pfree redundant.\nGood.\n\n> Subject: [PATCH v2 3/3] Fix misleading comment on StartBackgroundWorker().\n\nLGTM. Hmm, maybe I would have called that function\n\"BackgroundWorkerMain()\" like several other similar things, but that's\nnot important.\n\nThis doesn't quite fix the problem I was complaining about earlier,\nbut it de-confuses things. (Namely that if BackgroundWorkerList\nweren't a global variable, RegisterWorkerMain() wouldn't be able to\nfind it, and if it took some kind of context pointer as an argument,\n_PG_init() functions wouldn't be able to provide it, unless we changed\n_PG_init() to take an argument, which we can't really do. Oh well.)\n\n\n",
"msg_date": "Fri, 6 Oct 2023 23:13:56 +1300",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Checks in RegisterBackgroundWorker.()"
},
{
"msg_contents": "On 06/10/2023 13:13, Thomas Munro wrote:\n> On Thu, Sep 28, 2023 at 9:46 AM Heikki Linnakangas <[email protected]> wrote:\n>> Subject: [PATCH v2 3/3] Fix misleading comment on StartBackgroundWorker().\n> \n> LGTM. Hmm, maybe I would have called that function\n> \"BackgroundWorkerMain()\" like several other similar things, but that's\n> not important.\n\nThat's a good idea. I renamed it to BackgroundWorkerMain().\n\nPushed with that change, thanks for the review!\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Mon, 9 Oct 2023 11:52:58 +0300",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Checks in RegisterBackgroundWorker.()"
}
] |
[
{
"msg_contents": "Hi, i am trying to mutate a plan for a T_CreateTableAsStmt before it executes.\n\nI've created a planner_hook_type hook, where i can see the plan that gets executed as part of the AS of the query, where the CmdType is CMD_SELECT, and its subplans are the plans for the actual select itself.\n\nI've also created a ProcessUtility_hook_type, which correctly shows the T_CreateTableAsStmt, but the hook is called after the actual CREATE TABLE is called, and the plan is no longer mutable for the part that i need.\n\nIs there another hook i should be looking at, or another way i should be approaching this? I need to be able to alter the plan specifically for the SELECT portion of a CREATE TABLE AS query, but only in the case of the SELECT TABLE AS, and in no other SELECTs.\n\nAlternately, I can look at the query string in the CMD_SELECT planner hook and search specifically for CREATE, TABLE and AS, but I feel that there has to be a better way, and look forward to some guidance.\n\nThanks!\n\n",
"msg_date": "Thu, 24 Aug 2023 08:42:57 -0700",
"msg_from": "Jerry Sievert <[email protected]>",
"msg_from_op": true,
"msg_subject": "Altering the SELECT portion of a CREATE TABLE AS plan"
}
] |
[
{
"msg_contents": "Hi,\n\nnFreeBlocks stores the number of free blocks and\nyour type is *long*.\n\nAt Function ltsGetFreeBlock is locally stored in\nheapsize wich type is *int*\n\nWith Windows both *long* and *int* are 4 bytes.\nBut with Linux *long* is 8 bytes and *int* are 4 bytes.\n\npatch attached.\n\nbest regards,\nRanier Vilela",
"msg_date": "Thu, 24 Aug 2023 14:46:42 -0300",
"msg_from": "Ranier Vilela <[email protected]>",
"msg_from_op": true,
"msg_subject": "Avoid a possible overflow (src/backend/utils/sort/logtape.c)"
},
{
"msg_contents": "On Thu, Aug 24, 2023 at 02:46:42PM -0300, Ranier Vilela wrote:\n> With Windows both *long* and *int* are 4 bytes.\n> But with Linux *long* is 8 bytes and *int* are 4 bytes.\n\nAnd I recall that WIN32 is the only place where we treat long as 4\nbytes.\n\n> patch attached.\n\nYeah, it looks like you're right here. Will do something about that.\n--\nMichael",
"msg_date": "Fri, 25 Aug 2023 08:47:39 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Avoid a possible overflow (src/backend/utils/sort/logtape.c)"
},
{
"msg_contents": "On Thu, Aug 24, 2023 at 4:47 PM Michael Paquier <[email protected]> wrote:\n> > patch attached.\n>\n> Yeah, it looks like you're right here. Will do something about that.\n\nThis is a known issue. It has been discussed before.\n\nI am in favor of fixing the problem. I don't quite recall what it was\nthat made the discussion stall last time around.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 24 Aug 2023 17:33:15 -0700",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Avoid a possible overflow (src/backend/utils/sort/logtape.c)"
},
{
"msg_contents": "On Thu, Aug 24, 2023 at 05:33:15PM -0700, Peter Geoghegan wrote:\n> I am in favor of fixing the problem. I don't quite recall what it was\n> that made the discussion stall last time around.\n\nI think that you mean this one:\nhttps://www.postgresql.org/message-id/CAH2-WznCscXnWmnj=STC0aSa7QG+BRedDnZsP=Jo_R9GUZvUrg@mail.gmail.com\n\nStill that looks entirely different to me. Here we have a problem\nwhere the number of free blocks stored may cause an overflow in the\ninternal routine retrieving a free block, but your other thread\nis about long being not enough on Windows. I surely agree that\nthere's an argument for improving this interface and remove its use of\nlong in the long-term but that would not be backpatched. I also don't\nsee why we cannot do the change proposed here until then, and it's\nbackpatchable.\n\nThere is a second thread related to logtape.c here, but that's still\ndifferent:\nhttps://www.postgresql.org/message-id/flat/CAH2-Wzn5PCBLUrrds%3DhD439LtWP%2BPD7ekRTd%3D8LdtqJ%2BKO5D1Q%40mail.gmail.com\n--\nMichael",
"msg_date": "Fri, 25 Aug 2023 10:18:48 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Avoid a possible overflow (src/backend/utils/sort/logtape.c)"
},
{
"msg_contents": "On Fri, 25 Aug 2023 at 13:19, Michael Paquier <[email protected]> wrote:\n> Still that looks entirely different to me. Here we have a problem\n> where the number of free blocks stored may cause an overflow in the\n> internal routine retrieving a free block, but your other thread\n> is about long being not enough on Windows. I surely agree that\n> there's an argument for improving this interface and remove its use of\n> long in the long-term but that would not be backpatched. I also don't\n> see why we cannot do the change proposed here until then, and it's\n> backpatchable.\n\nI agree with this. I think Ranier's patch is good and we should apply\nit and backpatch it.\n\nWe shouldn't delay fixing this simple bug because we have some future\nambitions to swap the use of longs to int64.\n\nDavid\n\n\n",
"msg_date": "Fri, 25 Aug 2023 13:43:47 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Avoid a possible overflow (src/backend/utils/sort/logtape.c)"
},
{
"msg_contents": "On Thu, Aug 24, 2023 at 6:18 PM Michael Paquier <[email protected]> wrote:\n> Still that looks entirely different to me. Here we have a problem\n> where the number of free blocks stored may cause an overflow in the\n> internal routine retrieving a free block, but your other thread\n> is about long being not enough on Windows.\n\nI must have seen logtape.c, windows, and long together on this thread,\nand incorrectly surmised that it was exactly the same issue as before.\nI now see that the only sense in which Windows is relevant is that\nWindows happens to not have the same inconsistency. Windows is\nconsistently wrong.\n\nSo, yeah, I guess it's a different issue. Practically speaking it\nshould be treated as a separate issue, in any case. Since, as you\npointed out, there is no reason to not just fix this while\nbackpatching.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 24 Aug 2023 18:55:00 -0700",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Avoid a possible overflow (src/backend/utils/sort/logtape.c)"
},
{
"msg_contents": "On Thu, Aug 24, 2023 at 6:44 PM David Rowley <[email protected]> wrote:\n> I agree with this. I think Ranier's patch is good and we should apply\n> it and backpatch it.\n\nFWIW I'm pretty sure that it's impossible to run into problems here in\npractice -- the minheap is allocated by palloc(), and the high\nwatermark number of free pages is pretty small. Even still, I agree\nwith your conclusion. There is really no reason to not be consistent\nhere.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 24 Aug 2023 19:02:40 -0700",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Avoid a possible overflow (src/backend/utils/sort/logtape.c)"
},
{
"msg_contents": "On Thu, Aug 24, 2023 at 07:02:40PM -0700, Peter Geoghegan wrote:\n> FWIW I'm pretty sure that it's impossible to run into problems here in\n> practice -- the minheap is allocated by palloc(), and the high\n> watermark number of free pages is pretty small. Even still, I agree\n> with your conclusion. There is really no reason to not be consistent\n> here.\n\nPostgres 16 RC1 is now tagged, so applied down to 13.\n--\nMichael",
"msg_date": "Wed, 30 Aug 2023 08:05:57 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Avoid a possible overflow (src/backend/utils/sort/logtape.c)"
},
{
"msg_contents": "Em ter., 29 de ago. de 2023 às 20:06, Michael Paquier <[email protected]>\nescreveu:\n\n> On Thu, Aug 24, 2023 at 07:02:40PM -0700, Peter Geoghegan wrote:\n> > FWIW I'm pretty sure that it's impossible to run into problems here in\n> > practice -- the minheap is allocated by palloc(), and the high\n> > watermark number of free pages is pretty small. Even still, I agree\n> > with your conclusion. There is really no reason to not be consistent\n> > here.\n>\n> Postgres 16 RC1 is now tagged, so applied down to 13.\n>\nThank you, Michael.\n\nbest regards,\nRanier Vilela\n\nEm ter., 29 de ago. de 2023 às 20:06, Michael Paquier <[email protected]> escreveu:On Thu, Aug 24, 2023 at 07:02:40PM -0700, Peter Geoghegan wrote:\n> FWIW I'm pretty sure that it's impossible to run into problems here in\n> practice -- the minheap is allocated by palloc(), and the high\n> watermark number of free pages is pretty small. Even still, I agree\n> with your conclusion. There is really no reason to not be consistent\n> here.\n\nPostgres 16 RC1 is now tagged, so applied down to 13.\nThank you, Michael.best regards,Ranier Vilela",
"msg_date": "Wed, 30 Aug 2023 07:40:10 -0300",
"msg_from": "Ranier Vilela <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Avoid a possible overflow (src/backend/utils/sort/logtape.c)"
}
] |
[
{
"msg_contents": "Currently, psql exits if a database connection is not established when\npsql is launched.\n\nSometimes it may be useful to launch psql without connecting to the\ndatabase. For example, a user may choose to start psql and then pipe\ncommands via stdin, one of which may eventually perform the \\connect\ncommand. Or the user may be interested in performing operations that\npsql can perform, like setting variables etc., before optionally\ninitiating a connection.\n\nThe attached patch introduces the --no-connect option, which allows\npsql to continue operation in absence of connection options.\n\nThis patch is nowhere close to finished, but I'm posting it here to\ngauge interest in the feature. For example, this patch results in an\nundesirable output (0.0 server version), as seen below.\n\n$ psql --no-connect\npsql (17devel, server 0.0.0)\nWARNING: psql major version 17, server major version 0.0.\n Some psql features might not work.\nType \"help\" for help.\n\n!?>\n\nThe patch needs many improvements, like not expecting to inherit\nconnection options from a previous connection, etc., but mostly in\nidentifying the conflicting options; an example of this is included in\nthe patch where psql throws an error if --no-connect and --list are\nspecified together.\n\n$ psql --no-connect --list\npsql: error: --no-connect cannot be specified with --list\n\nBest regards,\nGurjeet\nhttp://Gurje.et",
"msg_date": "Thu, 24 Aug 2023 12:55:30 -0700",
"msg_from": "Gurjeet Singh <[email protected]>",
"msg_from_op": true,
"msg_subject": "psql --no-connect option"
},
{
"msg_contents": "On Thu Aug 24, 2023 at 2:55 PM CDT, Gurjeet Singh wrote:\n> Currently, psql exits if a database connection is not established when\n> psql is launched.\n>\n> Sometimes it may be useful to launch psql without connecting to the\n> database. For example, a user may choose to start psql and then pipe\n> commands via stdin, one of which may eventually perform the \\connect\n> command. Or the user may be interested in performing operations that\n> psql can perform, like setting variables etc., before optionally\n> initiating a connection.\n\nSpeaking for myself, but you do bring up a fairly interesting usecase. \nIs this a feature you have found yourself wanting in the past?\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Thu, 24 Aug 2023 21:19:22 -0500",
"msg_from": "\"Tristan Partin\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: psql --no-connect option"
},
{
"msg_contents": "Hi,\n\nOn Thu, Aug 24, 2023 at 12:55:30PM -0700, Gurjeet Singh wrote:\n> Currently, psql exits if a database connection is not established when\n> psql is launched.\n>\n> Sometimes it may be useful to launch psql without connecting to the\n> database. For example, a user may choose to start psql and then pipe\n> commands via stdin, one of which may eventually perform the \\connect\n> command. Or the user may be interested in performing operations that\n> psql can perform, like setting variables etc., before optionally\n> initiating a connection.\n>\n> The attached patch introduces the --no-connect option, which allows\n> psql to continue operation in absence of connection options.\n\nFTR this has been discussed in the past, see at least [1].\n\nI was interested in this feature, suggesting the exact same \"--no-connect\"\nname, so still +1 for this patch (note that I haven't read it).\n\n[1]: https://www.postgresql.org/message-id/flat/CAFe70G7iATwCMrymVwSVz7NajxCw3552TzFFHvkJqL_3L6gDTA%40mail.gmail.com\n\n\n",
"msg_date": "Fri, 25 Aug 2023 12:20:58 +0800",
"msg_from": "Julien Rouhaud <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: psql --no-connect option"
},
{
"msg_contents": "pá 25. 8. 2023 v 6:21 odesílatel Julien Rouhaud <[email protected]> napsal:\n\n> Hi,\n>\n> On Thu, Aug 24, 2023 at 12:55:30PM -0700, Gurjeet Singh wrote:\n> > Currently, psql exits if a database connection is not established when\n> > psql is launched.\n> >\n> > Sometimes it may be useful to launch psql without connecting to the\n> > database. For example, a user may choose to start psql and then pipe\n> > commands via stdin, one of which may eventually perform the \\connect\n> > command. Or the user may be interested in performing operations that\n> > psql can perform, like setting variables etc., before optionally\n> > initiating a connection.\n> >\n> > The attached patch introduces the --no-connect option, which allows\n> > psql to continue operation in absence of connection options.\n>\n> FTR this has been discussed in the past, see at least [1].\n>\n> I was interested in this feature, suggesting the exact same \"--no-connect\"\n> name, so still +1 for this patch (note that I haven't read it).\n>\n> [1]:\n> https://www.postgresql.org/message-id/flat/CAFe70G7iATwCMrymVwSVz7NajxCw3552TzFFHvkJqL_3L6gDTA%40mail.gmail.com\n\n\n+1\n\nPavel\n\npá 25. 8. 2023 v 6:21 odesílatel Julien Rouhaud <[email protected]> napsal:Hi,\n\nOn Thu, Aug 24, 2023 at 12:55:30PM -0700, Gurjeet Singh wrote:\n> Currently, psql exits if a database connection is not established when\n> psql is launched.\n>\n> Sometimes it may be useful to launch psql without connecting to the\n> database. For example, a user may choose to start psql and then pipe\n> commands via stdin, one of which may eventually perform the \\connect\n> command. Or the user may be interested in performing operations that\n> psql can perform, like setting variables etc., before optionally\n> initiating a connection.\n>\n> The attached patch introduces the --no-connect option, which allows\n> psql to continue operation in absence of connection options.\n\nFTR this has been discussed in the past, see at least [1].\n\nI was interested in this feature, suggesting the exact same \"--no-connect\"\nname, so still +1 for this patch (note that I haven't read it).\n\n[1]: https://www.postgresql.org/message-id/flat/CAFe70G7iATwCMrymVwSVz7NajxCw3552TzFFHvkJqL_3L6gDTA%40mail.gmail.com+1Pavel",
"msg_date": "Fri, 25 Aug 2023 06:26:47 +0200",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: psql --no-connect option"
}
] |
[
{
"msg_contents": "Hi\n\ntoday build is broken on my Fedora 39\n\nRegards\n\nPavel\n\nmake[2]: Opouští se adresář\n„/home/pavel/src/postgresql.master/src/bin/initdb“\nmake -C pg_amcheck check\nmake[2]: Vstupuje se do adresáře\n„/home/pavel/src/postgresql.master/src/bin/pg_amcheck“\necho \"# +++ tap check in src/bin/pg_amcheck +++\" && rm -rf\n'/home/pavel/src/postgresql.master/src/bin/pg_amcheck'/tmp_check &&\n/usr/bin/mkdir -p\n'/home/pavel/src/postgresql.master/src/bin/pg_amcheck'/tmp_check && cd . &&\nTESTLOGDIR='/home/pavel/src/postgresql.master/src/bin/pg_amcheck/tmp_check/log'\nTESTDATADIR='/home/pavel/src/postgresql.master/src/bin/pg_amcheck/tmp_check'\nPATH=\"/home/pavel/src/postgresql.master/tmp_install/usr/local/pgsql/master/bin:/home/pavel/src/postgresql.master/src/bin/pg_amcheck:$PATH\"\nLD_LIBRARY_PATH=\"/home/pavel/src/postgresql.master/tmp_install/usr/local/pgsql/master/lib\"\nINITDB_TEMPLATE='/home/pavel/src/postgresql.master'/tmp_install/initdb-template\n PGPORT='65432'\ntop_builddir='/home/pavel/src/postgresql.master/src/bin/pg_amcheck/../../..'\nPG_REGRESS='/home/pavel/src/postgresql.master/src/bin/pg_amcheck/../../../src/test/regress/pg_regress'\n/usr/bin/prove -I ../../../src/test/perl/ -I . t/*.pl\n# +++ tap check in src/bin/pg_amcheck +++\nt/001_basic.pl ........... ok\nt/002_nonesuch.pl ........ ok\nt/003_check.pl ........... 1/?\n# Failed test 'pg_amcheck all schemas, tables and indexes in database db1\nstdout /(?^:could not open file \".*\": No such file or directory)/'\n# at t/003_check.pl line 345.\n# 'btree index \"db1.s1.t1_btree\":\n# ERROR: index \"t1_btree\" lacks a main relation fork\n# btree index \"db1.s1.t2_btree\":\n# ERROR: could not open file \"base/16384/16477.1\" (target block\n2862699856): previous segment is only 2 blocks\n# btree index \"db1.s3.t1_btree\":\n# ERROR: index \"t1_btree\" lacks a main relation fork\n# btree index \"db1.s3.t2_btree\":\n# ERROR: could not open file \"base/16384/16601.1\" (target block\n2862699856): previous segment is only 2 blocks\n# heap table \"db1.s2.t1\":\n# ERROR: could not open file \"base/16384/16491\": Adresář nebo soubor\nneexistuje\n# heap table \"db1.s2.t2\", block 0, offset 3:\n# line pointer redirection to item at offset 21840 exceeds maximum\noffset 4\n# heap table \"db1.s2.t2\", block 0, offset 4:\n# line pointer to page offset 21840 with length 21840 ends beyond\nmaximum page offset 8192\n# heap table \"db1.s3.t1\":\n# ERROR: could not open file \"base/16384/16553\": Adresář nebo soubor\nneexistuje\n# heap table \"db1.s3.t2\", block 0, offset 3:\n# line pointer redirection to item at offset 21840 exceeds maximum\noffset 4\n# heap table \"db1.s3.t2\", block 0, offset 4:\n# line pointer to page offset 21840 with length 21840 ends beyond\nmaximum page offset 8192\n# heap table \"db1.s4.t2\":\n# ERROR: could not open file \"base/16384/16623\": Adresář nebo soubor\nneexistuje\n# heap table \"db1.s3.t1_mv\":\n# ERROR: could not open file \"base/16384/16570\": Adresář nebo soubor\nneexistuje\n# heap table \"db1.s3.t2_mv\", block 0, offset 3:\n# line pointer redirection to item at offset 21840 exceeds maximum\noffset 4\n# heap table \"db1.s3.t2_mv\", block 0, offset 4:\n# line pointer to page offset 21840 with length 21840 ends beyond\nmaximum page offset 8192\n# heap table \"db1.s3.p1_1\":\n# ERROR: could not open file \"base/16384/16588\": Adresář nebo soubor\nneexistuje\n# heap table \"db1.pg_toast.pg_toast_16620\":\n# ERROR: could not open file \"base/16384/16623\": Adresář nebo soubor\nneexistuje\n# '\n# 
doesn't match '(?^:could not open file \".*\": No such file or\ndirectory)'\nt/003_check.pl ........... 9/?\n# Failed test 'pg_amcheck all schemas, tables and indexes in databases\ndb1, db2, and db3 stdout /(?^:could not open file \".*\": No such file or\ndirectory)/'\n# at t/003_check.pl line 355.\n# 'btree index \"db1.s1.t1_btree\":\n# ERROR: index \"t1_btree\" lacks a main relation fork\n# btree index \"db1.s1.t2_btree\":\n# ERROR: could not open file \"base/16384/16477.1\" (target block\n2862699856): previous segment is only 2 blocks\n# btree index \"db1.s3.t1_btree\":\n# ERROR: index \"t1_btree\" lacks a main relation fork\n# btree index \"db1.s3.t2_btree\":\n# ERROR: could not open file \"base/16384/16601.1\" (target block\n2862699856): previous segment is only 2 blocks\n# heap table \"db1.s2.t1\":\n# ERROR: could not open file \"base/16384/16491\": Adresář nebo soubor\nneexistuje\n# heap table \"db1.s2.t2\", block 0, offset 3:\n# line pointer redirection to item at offset 21840 exceeds maximum\noffset 4\n# heap table \"db1.s2.t2\", block 0, offset 4:\n# line pointer to page offset 21840 with length 21840 ends beyond\nmaximum page offset 8192\n# heap table \"db1.s3.t1\":\n# ERROR: could not open file \"base/16384/16553\": Adresář nebo soubor\nneexistuje\n# heap table \"db1.s3.t2\", block 0, offset 3:\n# line pointer redirection to item at offset 21840 exceeds maximum\noffset 4\n# heap table \"db1.s3.t2\", block 0, offset 4:\n# line pointer to page offset 21840 with length 21840 ends beyond\nmaximum page offset 8192\n# heap table \"db1.s4.t2\":\n# ERROR: could not open file \"base/16384/16623\": Adresář nebo soubor\nneexistuje\n# heap table \"db1.s3.t1_mv\":\n# ERROR: could not open file \"base/16384/16570\": Adresář nebo soubor\nneexistuje\n# heap table \"db1.s3.t2_mv\", block 0, offset 3:\n# line pointer redirection to item at offset 21840 exceeds maximum\noffset 4\n# heap table \"db1.s3.t2_mv\", block 0, offset 4:\n# line pointer to page offset 21840 with length 21840 ends beyond\nmaximum page offset 8192\n# heap table \"db1.s3.p1_1\":\n# ERROR: could not open file \"base/16384/16588\": Adresář nebo soubor\nneexistuje\n# heap table \"db1.pg_toast.pg_toast_16620\":\n# ERROR: could not open file \"base/16384/16623\": Adresář nebo soubor\nneexistuje\n# btree index \"db2.s1.t1_btree\":\n# ERROR: index \"t1_btree\" lacks a main relation fork\n# heap table \"db2.s1.t1\":\n# ERROR: could not open file \"base/16736/16781\": Adresář nebo soubor\nneexistuje\n# '\n# doesn't match '(?^:could not open file \".*\": No such file or\ndirectory)'\n\n# Failed test 'pg_amcheck of db2.s1 excluding indexes stdout /(?^:could\nnot open file \".*\": No such file or directory)/'\n# at t/003_check.pl line 402.\n# 'heap table \"db2.s1.t1\":\n# ERROR: could not open file \"base/16736/16781\": Adresář nebo soubor\nneexistuje\n# '\n# doesn't match '(?^:could not open file \".*\": No such file or\ndirectory)'\n\n# Failed test 'pg_amcheck schema s3 reports table and index errors stdout\n/(?^:could not open file \".*\": No such file or directory)/'\n# at t/003_check.pl line 410.\n# 'btree index \"db1.s3.t1_btree\":\n# ERROR: index \"t1_btree\" lacks a main relation fork\n# btree index \"db1.s3.t2_btree\":\n# ERROR: could not open file \"base/16384/16601.1\" (target block\n2862699856): previous segment is only 2 blocks\n# heap table \"db1.s3.t1\":\n# ERROR: could not open file \"base/16384/16553\": Adresář nebo soubor\nneexistuje\n# heap table \"db1.s3.t2\", block 0, offset 3:\n# line pointer redirection to item at 
offset 21840 exceeds maximum\noffset 4\n# heap table \"db1.s3.t2\", block 0, offset 4:\n# line pointer to page offset 21840 with length 21840 ends beyond\nmaximum page offset 8192\n# heap table \"db1.s3.t1_mv\":\n# ERROR: could not open file \"base/16384/16570\": Adresář nebo soubor\nneexistuje\n# heap table \"db1.s3.t2_mv\", block 0, offset 3:\n# line pointer redirection to item at offset 21840 exceeds maximum\noffset 4\n# heap table \"db1.s3.t2_mv\", block 0, offset 4:\n# line pointer to page offset 21840 with length 21840 ends beyond\nmaximum page offset 8192\n# heap table \"db1.s3.p1_1\":\n# ERROR: could not open file \"base/16384/16588\": Adresář nebo soubor\nneexistuje\n# '\n# doesn't match '(?^:could not open file \".*\": No such file or\ndirectory)'\n\n# Failed test 'pg_amcheck in schema s4 reports toast corruption stdout\n/(?^:could not open file \".*\": No such file or directory)/'\n# at t/003_check.pl line 423.\n# 'heap table \"db1.s4.t2\":\n# ERROR: could not open file \"base/16384/16623\": Adresář nebo soubor\nneexistuje\n# heap table \"db1.pg_toast.pg_toast_16620\":\n# ERROR: could not open file \"base/16384/16623\": Adresář nebo soubor\nneexistuje\n# '\n# doesn't match '(?^:could not open file \".*\": No such file or\ndirectory)'\nt/003_check.pl ........... 40/? # Looks like you failed 5 tests of 63.\nt/003_check.pl ........... Dubious, test returned 5 (wstat 1280, 0x500)\nFailed 5/63 subtests\nt/004_verify_heapam.pl ... ok\nt/005_opclass_damage.pl .. ok\n\nTest Summary Report\n-------------------\nt/003_check.pl (Wstat: 1280 (exited 5) Tests: 63 Failed: 5)\n Failed tests: 7, 12, 24, 29, 32\n Non-zero exit status: 5\nFiles=5, Tests=214, 9 wallclock secs ( 0.09 usr 0.01 sys + 1.65 cusr\n 1.59 csys = 3.34 CPU)\nResult: FAIL\nmake[2]: *** [Makefile:48: check] Chyba 1\nmake[2]: Opouští se adresář\n„/home/pavel/src/postgresql.master/src/bin/pg_amcheck“\nmake[1]: *** [Makefile:43: check-pg_amcheck-recurse] Chyba 2\nmake[1]: Opouští se adresář „/home/pavel/src/postgresql.master/src/bin“\nmake: *** [GNUmakefile:71: check-world-src/bin-recurse] Chyba 2\n\nHitoday build is broken on my Fedora 39RegardsPavelmake[2]: Opouští se adresář „/home/pavel/src/postgresql.master/src/bin/initdb“make -C pg_amcheck checkmake[2]: Vstupuje se do adresáře „/home/pavel/src/postgresql.master/src/bin/pg_amcheck“echo \"# +++ tap check in src/bin/pg_amcheck +++\" && rm -rf '/home/pavel/src/postgresql.master/src/bin/pg_amcheck'/tmp_check && /usr/bin/mkdir -p '/home/pavel/src/postgresql.master/src/bin/pg_amcheck'/tmp_check && cd . && TESTLOGDIR='/home/pavel/src/postgresql.master/src/bin/pg_amcheck/tmp_check/log' TESTDATADIR='/home/pavel/src/postgresql.master/src/bin/pg_amcheck/tmp_check' PATH=\"/home/pavel/src/postgresql.master/tmp_install/usr/local/pgsql/master/bin:/home/pavel/src/postgresql.master/src/bin/pg_amcheck:$PATH\" LD_LIBRARY_PATH=\"/home/pavel/src/postgresql.master/tmp_install/usr/local/pgsql/master/lib\" INITDB_TEMPLATE='/home/pavel/src/postgresql.master'/tmp_install/initdb-template PGPORT='65432' top_builddir='/home/pavel/src/postgresql.master/src/bin/pg_amcheck/../../..' PG_REGRESS='/home/pavel/src/postgresql.master/src/bin/pg_amcheck/../../../src/test/regress/pg_regress' /usr/bin/prove -I ../../../src/test/perl/ -I . t/*.pl# +++ tap check in src/bin/pg_amcheck +++t/001_basic.pl ........... ok t/002_nonesuch.pl ........ ok t/003_check.pl ........... 1/? 
# Failed test 'pg_amcheck all schemas, tables and indexes in database db1 stdout /(?^:could not open file \".*\": No such file or directory)/'# at t/003_check.pl line 345.# 'btree index \"db1.s1.t1_btree\":# ERROR: index \"t1_btree\" lacks a main relation fork# btree index \"db1.s1.t2_btree\":# ERROR: could not open file \"base/16384/16477.1\" (target block 2862699856): previous segment is only 2 blocks# btree index \"db1.s3.t1_btree\":# ERROR: index \"t1_btree\" lacks a main relation fork# btree index \"db1.s3.t2_btree\":# ERROR: could not open file \"base/16384/16601.1\" (target block 2862699856): previous segment is only 2 blocks# heap table \"db1.s2.t1\":# ERROR: could not open file \"base/16384/16491\": Adresář nebo soubor neexistuje# heap table \"db1.s2.t2\", block 0, offset 3:# line pointer redirection to item at offset 21840 exceeds maximum offset 4# heap table \"db1.s2.t2\", block 0, offset 4:# line pointer to page offset 21840 with length 21840 ends beyond maximum page offset 8192# heap table \"db1.s3.t1\":# ERROR: could not open file \"base/16384/16553\": Adresář nebo soubor neexistuje# heap table \"db1.s3.t2\", block 0, offset 3:# line pointer redirection to item at offset 21840 exceeds maximum offset 4# heap table \"db1.s3.t2\", block 0, offset 4:# line pointer to page offset 21840 with length 21840 ends beyond maximum page offset 8192# heap table \"db1.s4.t2\":# ERROR: could not open file \"base/16384/16623\": Adresář nebo soubor neexistuje# heap table \"db1.s3.t1_mv\":# ERROR: could not open file \"base/16384/16570\": Adresář nebo soubor neexistuje# heap table \"db1.s3.t2_mv\", block 0, offset 3:# line pointer redirection to item at offset 21840 exceeds maximum offset 4# heap table \"db1.s3.t2_mv\", block 0, offset 4:# line pointer to page offset 21840 with length 21840 ends beyond maximum page offset 8192# heap table \"db1.s3.p1_1\":# ERROR: could not open file \"base/16384/16588\": Adresář nebo soubor neexistuje# heap table \"db1.pg_toast.pg_toast_16620\":# ERROR: could not open file \"base/16384/16623\": Adresář nebo soubor neexistuje# '# doesn't match '(?^:could not open file \".*\": No such file or directory)'t/003_check.pl ........... 9/? 
# Failed test 'pg_amcheck all schemas, tables and indexes in databases db1, db2, and db3 stdout /(?^:could not open file \".*\": No such file or directory)/'# at t/003_check.pl line 355.# 'btree index \"db1.s1.t1_btree\":# ERROR: index \"t1_btree\" lacks a main relation fork# btree index \"db1.s1.t2_btree\":# ERROR: could not open file \"base/16384/16477.1\" (target block 2862699856): previous segment is only 2 blocks# btree index \"db1.s3.t1_btree\":# ERROR: index \"t1_btree\" lacks a main relation fork# btree index \"db1.s3.t2_btree\":# ERROR: could not open file \"base/16384/16601.1\" (target block 2862699856): previous segment is only 2 blocks# heap table \"db1.s2.t1\":# ERROR: could not open file \"base/16384/16491\": Adresář nebo soubor neexistuje# heap table \"db1.s2.t2\", block 0, offset 3:# line pointer redirection to item at offset 21840 exceeds maximum offset 4# heap table \"db1.s2.t2\", block 0, offset 4:# line pointer to page offset 21840 with length 21840 ends beyond maximum page offset 8192# heap table \"db1.s3.t1\":# ERROR: could not open file \"base/16384/16553\": Adresář nebo soubor neexistuje# heap table \"db1.s3.t2\", block 0, offset 3:# line pointer redirection to item at offset 21840 exceeds maximum offset 4# heap table \"db1.s3.t2\", block 0, offset 4:# line pointer to page offset 21840 with length 21840 ends beyond maximum page offset 8192# heap table \"db1.s4.t2\":# ERROR: could not open file \"base/16384/16623\": Adresář nebo soubor neexistuje# heap table \"db1.s3.t1_mv\":# ERROR: could not open file \"base/16384/16570\": Adresář nebo soubor neexistuje# heap table \"db1.s3.t2_mv\", block 0, offset 3:# line pointer redirection to item at offset 21840 exceeds maximum offset 4# heap table \"db1.s3.t2_mv\", block 0, offset 4:# line pointer to page offset 21840 with length 21840 ends beyond maximum page offset 8192# heap table \"db1.s3.p1_1\":# ERROR: could not open file \"base/16384/16588\": Adresář nebo soubor neexistuje# heap table \"db1.pg_toast.pg_toast_16620\":# ERROR: could not open file \"base/16384/16623\": Adresář nebo soubor neexistuje# btree index \"db2.s1.t1_btree\":# ERROR: index \"t1_btree\" lacks a main relation fork# heap table \"db2.s1.t1\":# ERROR: could not open file \"base/16736/16781\": Adresář nebo soubor neexistuje# '# doesn't match '(?^:could not open file \".*\": No such file or directory)'# Failed test 'pg_amcheck of db2.s1 excluding indexes stdout /(?^:could not open file \".*\": No such file or directory)/'# at t/003_check.pl line 402.# 'heap table \"db2.s1.t1\":# ERROR: could not open file \"base/16736/16781\": Adresář nebo soubor neexistuje# '# doesn't match '(?^:could not open file \".*\": No such file or directory)'# Failed test 'pg_amcheck schema s3 reports table and index errors stdout /(?^:could not open file \".*\": No such file or directory)/'# at t/003_check.pl line 410.# 'btree index \"db1.s3.t1_btree\":# ERROR: index \"t1_btree\" lacks a main relation fork# btree index \"db1.s3.t2_btree\":# ERROR: could not open file \"base/16384/16601.1\" (target block 2862699856): previous segment is only 2 blocks# heap table \"db1.s3.t1\":# ERROR: could not open file \"base/16384/16553\": Adresář nebo soubor neexistuje# heap table \"db1.s3.t2\", block 0, offset 3:# line pointer redirection to item at offset 21840 exceeds maximum offset 4# heap table \"db1.s3.t2\", block 0, offset 4:# line pointer to page offset 21840 with length 21840 ends beyond maximum page offset 8192# heap table \"db1.s3.t1_mv\":# ERROR: could not open file 
\"base/16384/16570\": Adresář nebo soubor neexistuje# heap table \"db1.s3.t2_mv\", block 0, offset 3:# line pointer redirection to item at offset 21840 exceeds maximum offset 4# heap table \"db1.s3.t2_mv\", block 0, offset 4:# line pointer to page offset 21840 with length 21840 ends beyond maximum page offset 8192# heap table \"db1.s3.p1_1\":# ERROR: could not open file \"base/16384/16588\": Adresář nebo soubor neexistuje# '# doesn't match '(?^:could not open file \".*\": No such file or directory)'# Failed test 'pg_amcheck in schema s4 reports toast corruption stdout /(?^:could not open file \".*\": No such file or directory)/'# at t/003_check.pl line 423.# 'heap table \"db1.s4.t2\":# ERROR: could not open file \"base/16384/16623\": Adresář nebo soubor neexistuje# heap table \"db1.pg_toast.pg_toast_16620\":# ERROR: could not open file \"base/16384/16623\": Adresář nebo soubor neexistuje# '# doesn't match '(?^:could not open file \".*\": No such file or directory)'t/003_check.pl ........... 40/? # Looks like you failed 5 tests of 63.t/003_check.pl ........... Dubious, test returned 5 (wstat 1280, 0x500)Failed 5/63 subtests t/004_verify_heapam.pl ... ok t/005_opclass_damage.pl .. ok Test Summary Report-------------------t/003_check.pl (Wstat: 1280 (exited 5) Tests: 63 Failed: 5) Failed tests: 7, 12, 24, 29, 32 Non-zero exit status: 5Files=5, Tests=214, 9 wallclock secs ( 0.09 usr 0.01 sys + 1.65 cusr 1.59 csys = 3.34 CPU)Result: FAILmake[2]: *** [Makefile:48: check] Chyba 1make[2]: Opouští se adresář „/home/pavel/src/postgresql.master/src/bin/pg_amcheck“make[1]: *** [Makefile:43: check-pg_amcheck-recurse] Chyba 2make[1]: Opouští se adresář „/home/pavel/src/postgresql.master/src/bin“make: *** [GNUmakefile:71: check-world-src/bin-recurse] Chyba 2",
"msg_date": "Fri, 25 Aug 2023 05:53:38 +0200",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": true,
"msg_subject": "broken master regress tests"
},
{
"msg_contents": "On Fri, Aug 25, 2023 at 3:54 PM Pavel Stehule <[email protected]> wrote:\n> today build is broken on my Fedora 39\n\nHas commit 252dcb32 somehow upset some kind of bleeding edge btrfs\nfilesystem? That's a wild guess and I can't really imagine how but\napparently your database files are totally messed up...\n\n\n",
"msg_date": "Fri, 25 Aug 2023 17:37:29 +1200",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: broken master regress tests"
},
{
"msg_contents": "Hi\n\npá 25. 8. 2023 v 7:38 odesílatel Thomas Munro <[email protected]>\nnapsal:\n\n> On Fri, Aug 25, 2023 at 3:54 PM Pavel Stehule <[email protected]>\n> wrote:\n> > today build is broken on my Fedora 39\n>\n> Has commit 252dcb32 somehow upset some kind of bleeding edge btrfs\n> filesystem? That's a wild guess and I can't really imagine how but\n> apparently your database files are totally messed up...\n>\n\nI use only ext4\n\n[pavel@nemesis ~]$ mount | grep home\n/dev/mapper/luks-feb21fdf-c7aa-4373-b25e-fb26d4b28216 on /home type ext4\n(rw,relatime,seclabel)\n\nbut the kernel is fresh\n\n[pavel@nemesis ~]$ uname -a\nLinux nemesis 6.5.0-0.rc6.43.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Aug 14\n17:17:41 UTC 2023 x86_64 GNU/Linux\n\nHipá 25. 8. 2023 v 7:38 odesílatel Thomas Munro <[email protected]> napsal:On Fri, Aug 25, 2023 at 3:54 PM Pavel Stehule <[email protected]> wrote:\n> today build is broken on my Fedora 39\n\nHas commit 252dcb32 somehow upset some kind of bleeding edge btrfs\nfilesystem? That's a wild guess and I can't really imagine how but\napparently your database files are totally messed up...I use only ext4[pavel@nemesis ~]$ mount | grep home/dev/mapper/luks-feb21fdf-c7aa-4373-b25e-fb26d4b28216 on /home type ext4 (rw,relatime,seclabel)but the kernel is fresh[pavel@nemesis ~]$ uname -aLinux nemesis 6.5.0-0.rc6.43.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Aug 14 17:17:41 UTC 2023 x86_64 GNU/Linux",
"msg_date": "Fri, 25 Aug 2023 08:10:07 +0200",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: broken master regress tests"
},
{
"msg_contents": "pá 25. 8. 2023 v 8:10 odesílatel Pavel Stehule <[email protected]>\nnapsal:\n\n> Hi\n>\n> pá 25. 8. 2023 v 7:38 odesílatel Thomas Munro <[email protected]>\n> napsal:\n>\n>> On Fri, Aug 25, 2023 at 3:54 PM Pavel Stehule <[email protected]>\n>> wrote:\n>> > today build is broken on my Fedora 39\n>>\n>> Has commit 252dcb32 somehow upset some kind of bleeding edge btrfs\n>> filesystem? That's a wild guess and I can't really imagine how but\n>> apparently your database files are totally messed up...\n>>\n>\n> I use only ext4\n>\n> [pavel@nemesis ~]$ mount | grep home\n> /dev/mapper/luks-feb21fdf-c7aa-4373-b25e-fb26d4b28216 on /home type ext4\n> (rw,relatime,seclabel)\n>\n> but the kernel is fresh\n>\n> [pavel@nemesis ~]$ uname -a\n> Linux nemesis 6.5.0-0.rc6.43.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Aug 14\n> 17:17:41 UTC 2023 x86_64 GNU/Linux\n>\n>\nI tested it on another comp with fresh fedora 38 installation with same\nresult\n\nagain I use only ext4 there\n\n\n\n>\n>\n>\n\npá 25. 8. 2023 v 8:10 odesílatel Pavel Stehule <[email protected]> napsal:Hipá 25. 8. 2023 v 7:38 odesílatel Thomas Munro <[email protected]> napsal:On Fri, Aug 25, 2023 at 3:54 PM Pavel Stehule <[email protected]> wrote:\n> today build is broken on my Fedora 39\n\nHas commit 252dcb32 somehow upset some kind of bleeding edge btrfs\nfilesystem? That's a wild guess and I can't really imagine how but\napparently your database files are totally messed up...I use only ext4[pavel@nemesis ~]$ mount | grep home/dev/mapper/luks-feb21fdf-c7aa-4373-b25e-fb26d4b28216 on /home type ext4 (rw,relatime,seclabel)but the kernel is fresh[pavel@nemesis ~]$ uname -aLinux nemesis 6.5.0-0.rc6.43.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Aug 14 17:17:41 UTC 2023 x86_64 GNU/LinuxI tested it on another comp with fresh fedora 38 installation with same resultagain I use only ext4 there",
"msg_date": "Fri, 25 Aug 2023 08:17:14 +0200",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: broken master regress tests"
},
{
"msg_contents": "On Fri, 25 Aug 2023, 05:54 Pavel Stehule, <[email protected]> wrote:\n\n> Hi\n>\n> today build is broken on my Fedora 39\n>\n> Regards\n>\n> Pavel\n>\n> make[2]: Opouští se adresář\n> „/home/pavel/src/postgresql.master/src/bin/initdb“\n> make -C pg_amcheck check\n> make[2]: Vstupuje se do adresáře\n> „/home/pavel/src/postgresql.master/src/bin/pg_amcheck“\n> [...]\n> # ERROR: could not open file \"base/16736/16781\": Adresář nebo soubor\n> neexistuje\n> # '\n> # doesn't match '(?^:could not open file \".*\": No such file or\n> directory)'\n>\n\nIt looks like the error message matcher doesn't account for the localized\nversion of \"No such file or directory\", might that be the issue?\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\nOn Fri, 25 Aug 2023, 05:54 Pavel Stehule, <[email protected]> wrote:Hitoday build is broken on my Fedora 39RegardsPavelmake[2]: Opouští se adresář „/home/pavel/src/postgresql.master/src/bin/initdb“make -C pg_amcheck checkmake[2]: Vstupuje se do adresáře „/home/pavel/src/postgresql.master/src/bin/pg_amcheck“[...]# ERROR: could not open file \"base/16736/16781\": Adresář nebo soubor neexistuje# '# doesn't match '(?^:could not open file \".*\": No such file or directory)'It looks like the error message matcher doesn't account for the localized version of \"No such file or directory\", might that be the issue?Kind regards,Matthias van de MeentNeon (https://neon.tech)",
"msg_date": "Fri, 25 Aug 2023 08:22:24 +0200",
"msg_from": "Matthias van de Meent <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: broken master regress tests"
},
{
"msg_contents": "pá 25. 8. 2023 v 8:22 odesílatel Matthias van de Meent <\[email protected]> napsal:\n\n>\n>\n> On Fri, 25 Aug 2023, 05:54 Pavel Stehule, <[email protected]> wrote:\n>\n>> Hi\n>>\n>> today build is broken on my Fedora 39\n>>\n>> Regards\n>>\n>> Pavel\n>>\n>> make[2]: Opouští se adresář\n>> „/home/pavel/src/postgresql.master/src/bin/initdb“\n>> make -C pg_amcheck check\n>> make[2]: Vstupuje se do adresáře\n>> „/home/pavel/src/postgresql.master/src/bin/pg_amcheck“\n>> [...]\n>> # ERROR: could not open file \"base/16736/16781\": Adresář nebo soubor\n>> neexistuje\n>> # '\n>> # doesn't match '(?^:could not open file \".*\": No such file or\n>> directory)'\n>>\n>\n> It looks like the error message matcher doesn't account for the localized\n> version of \"No such file or directory\", might that be the issue?\n>\n\nyes\n\nLANG=C maje check-world\n\nfixed this issue\n\nRegards\n\nPavel\n\n>\n> Kind regards,\n>\n> Matthias van de Meent\n> Neon (https://neon.tech)\n>\n\npá 25. 8. 2023 v 8:22 odesílatel Matthias van de Meent <[email protected]> napsal:On Fri, 25 Aug 2023, 05:54 Pavel Stehule, <[email protected]> wrote:Hitoday build is broken on my Fedora 39RegardsPavelmake[2]: Opouští se adresář „/home/pavel/src/postgresql.master/src/bin/initdb“make -C pg_amcheck checkmake[2]: Vstupuje se do adresáře „/home/pavel/src/postgresql.master/src/bin/pg_amcheck“[...]# ERROR: could not open file \"base/16736/16781\": Adresář nebo soubor neexistuje# '# doesn't match '(?^:could not open file \".*\": No such file or directory)'It looks like the error message matcher doesn't account for the localized version of \"No such file or directory\", might that be the issue?yesLANG=C maje check-worldfixed this issueRegardsPavel Kind regards,Matthias van de MeentNeon (https://neon.tech)",
"msg_date": "Fri, 25 Aug 2023 09:12:20 +0200",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: broken master regress tests"
},
{
"msg_contents": "Hi\n\npá 25. 8. 2023 v 9:12 odesílatel Pavel Stehule <[email protected]>\nnapsal:\n\n>\n>\n> pá 25. 8. 2023 v 8:22 odesílatel Matthias van de Meent <\n> [email protected]> napsal:\n>\n>>\n>>\n>> On Fri, 25 Aug 2023, 05:54 Pavel Stehule, <[email protected]>\n>> wrote:\n>>\n>>> Hi\n>>>\n>>> today build is broken on my Fedora 39\n>>>\n>>> Regards\n>>>\n>>> Pavel\n>>>\n>>> make[2]: Opouští se adresář\n>>> „/home/pavel/src/postgresql.master/src/bin/initdb“\n>>> make -C pg_amcheck check\n>>> make[2]: Vstupuje se do adresáře\n>>> „/home/pavel/src/postgresql.master/src/bin/pg_amcheck“\n>>> [...]\n>>> # ERROR: could not open file \"base/16736/16781\": Adresář nebo\n>>> soubor neexistuje\n>>> # '\n>>> # doesn't match '(?^:could not open file \".*\": No such file or\n>>> directory)'\n>>>\n>>\n>> It looks like the error message matcher doesn't account for the localized\n>> version of \"No such file or directory\", might that be the issue?\n>>\n>\n> yes\n>\n> LANG=C maje check-world\n>\n>\n>\nI tried to fix this issue, but there is some strange\n\nregress tests are initialized with\n\n<-->delete $ENV{LANGUAGE};\n<-->delete $ENV{LC_ALL};\n<-->$ENV{LC_MESSAGES} = 'C';\n\nso the environment should be correct\n\nI checked this setting before\n\n<-->IPC::Run::run($cmd, '>', \\$stdout, '2>', \\$stderr);\n\nand it looks correct. But the tests fails\n\nOnly when I use `LC_MESSAGES=C make check` the tests are ok\n\nMy environment has only `LANG=cs_CZ.UTF-8`\n\nSo it looks so IPC::Run::run is ignore parent environment\n\n\n\n>\n> Regards\n>\n> Pavel\n>\n>>\n>> Kind regards,\n>>\n>> Matthias van de Meent\n>> Neon (https://neon.tech)\n>>\n>\n\nHipá 25. 8. 2023 v 9:12 odesílatel Pavel Stehule <[email protected]> napsal:pá 25. 8. 2023 v 8:22 odesílatel Matthias van de Meent <[email protected]> napsal:On Fri, 25 Aug 2023, 05:54 Pavel Stehule, <[email protected]> wrote:Hitoday build is broken on my Fedora 39RegardsPavelmake[2]: Opouští se adresář „/home/pavel/src/postgresql.master/src/bin/initdb“make -C pg_amcheck checkmake[2]: Vstupuje se do adresáře „/home/pavel/src/postgresql.master/src/bin/pg_amcheck“[...]# ERROR: could not open file \"base/16736/16781\": Adresář nebo soubor neexistuje# '# doesn't match '(?^:could not open file \".*\": No such file or directory)'It looks like the error message matcher doesn't account for the localized version of \"No such file or directory\", might that be the issue?yesLANG=C maje check-worldI tried to fix this issue, but there is some strange regress tests are initialized with<-->delete $ENV{LANGUAGE};<-->delete $ENV{LC_ALL};<-->$ENV{LC_MESSAGES} = 'C';so the environment should be correctI checked this setting before<-->IPC::Run::run($cmd, '>', \\$stdout, '2>', \\$stderr);and it looks correct. But the tests failsOnly when I use `LC_MESSAGES=C make check` the tests are okMy environment has only `LANG=cs_CZ.UTF-8`So it looks so IPC::Run::run is ignore parent environment RegardsPavel Kind regards,Matthias van de MeentNeon (https://neon.tech)",
"msg_date": "Sat, 26 Aug 2023 17:02:29 +0200",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: broken master regress tests"
},
{
"msg_contents": "On Sun, Aug 27, 2023 at 3:03 AM Pavel Stehule <[email protected]> wrote:\n> So it looks so IPC::Run::run is ignore parent environment\n\nI guess the new initdb template captures lc_messages in\npostgresql.conf, when it runs earlier? I guess if you put\n$node->append_conf('postgresql.conf', 'lc_messages=C'); into\nsrc/bin/pg_amcheck/t/003_check.pl then it will work. I'm not sure\nwhat the correct fix should be, ie if the template mechanism should\nnotice this difference and not use the template, or if tests that\ndepend on the message locale should explicitly say so with\nlc_messages=C or similar (why is this the only one?), or ...\n\n\n",
"msg_date": "Sun, 27 Aug 2023 09:52:17 +1200",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: broken master regress tests"
},
{
"msg_contents": "Hi\n\nso 26. 8. 2023 v 23:52 odesílatel Thomas Munro <[email protected]>\nnapsal:\n\n> On Sun, Aug 27, 2023 at 3:03 AM Pavel Stehule <[email protected]>\n> wrote:\n> > So it looks so IPC::Run::run is ignore parent environment\n>\n> I guess the new initdb template captures lc_messages in\n> postgresql.conf, when it runs earlier? I guess if you put\n> $node->append_conf('postgresql.conf', 'lc_messages=C'); into\n> src/bin/pg_amcheck/t/003_check.pl then it will work. I'm not sure\n> what the correct fix should be, ie if the template mechanism should\n> notice this difference and not use the template, or if tests that\n> depend on the message locale should explicitly say so with\n> lc_messages=C or similar (why is this the only one?), or ...\n>\n\ndiff --git a/src/bin/pg_amcheck/t/003_check.pl b/src/bin/pg_amcheck/t/\n003_check.pl\nindex d577cffa30..ba7c713adc 100644\n--- a/src/bin/pg_amcheck/t/003_check.pl\n+++ b/src/bin/pg_amcheck/t/003_check.pl\n@@ -122,6 +122,7 @@ sub perform_all_corruptions()\n $node = PostgreSQL::Test::Cluster->new('test');\n $node->init;\n $node->append_conf('postgresql.conf', 'autovacuum=off');\n+$node->append_conf('postgresql.conf', 'lc_messages=C');\n $node->start;\n $port = $node->port;\n\nit fixes this issue\n\nRegards\n\nPavel\n\nHiso 26. 8. 2023 v 23:52 odesílatel Thomas Munro <[email protected]> napsal:On Sun, Aug 27, 2023 at 3:03 AM Pavel Stehule <[email protected]> wrote:\n> So it looks so IPC::Run::run is ignore parent environment\n\nI guess the new initdb template captures lc_messages in\npostgresql.conf, when it runs earlier? I guess if you put\n$node->append_conf('postgresql.conf', 'lc_messages=C'); into\nsrc/bin/pg_amcheck/t/003_check.pl then it will work. I'm not sure\nwhat the correct fix should be, ie if the template mechanism should\nnotice this difference and not use the template, or if tests that\ndepend on the message locale should explicitly say so with\nlc_messages=C or similar (why is this the only one?), or ...diff --git a/src/bin/pg_amcheck/t/003_check.pl b/src/bin/pg_amcheck/t/003_check.plindex d577cffa30..ba7c713adc 100644--- a/src/bin/pg_amcheck/t/003_check.pl+++ b/src/bin/pg_amcheck/t/003_check.pl@@ -122,6 +122,7 @@ sub perform_all_corruptions() $node = PostgreSQL::Test::Cluster->new('test'); $node->init; $node->append_conf('postgresql.conf', 'autovacuum=off');+$node->append_conf('postgresql.conf', 'lc_messages=C'); $node->start; $port = $node->port;it fixes this issueRegardsPavel",
"msg_date": "Sun, 27 Aug 2023 05:37:29 +0200",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: broken master regress tests"
},
{
"msg_contents": "On 2023-Aug-27, Thomas Munro wrote:\n\n> On Sun, Aug 27, 2023 at 3:03 AM Pavel Stehule <[email protected]> wrote:\n> > So it looks so IPC::Run::run is ignore parent environment\n> \n> I guess the new initdb template captures lc_messages in\n> postgresql.conf, when it runs earlier? I guess if you put\n> $node->append_conf('postgresql.conf', 'lc_messages=C'); into\n> src/bin/pg_amcheck/t/003_check.pl then it will work. I'm not sure\n> what the correct fix should be, ie if the template mechanism should\n> notice this difference and not use the template, or if tests that\n> depend on the message locale should explicitly say so with\n> lc_messages=C or similar (why is this the only one?), or ...\n\nSo I tried this technique, but it gest old pretty fast: apparently\nthere's a *ton* of tests that depend on the locale. I gave up after\npatching the first five files, and noticing that in a second run there\nanother half a dozen failing tests that hadn't failed the first time\naround. (Not sure why this happened.)\n\nSo I think injecting --no-locale to the initdb line that creates the\ntemplate is a better approach; something like the attached.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/",
"msg_date": "Tue, 29 Aug 2023 17:54:24 +0200",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: broken master regress tests"
},
{
"msg_contents": "út 29. 8. 2023 v 17:54 odesílatel Alvaro Herrera <[email protected]>\nnapsal:\n\n> On 2023-Aug-27, Thomas Munro wrote:\n>\n> > On Sun, Aug 27, 2023 at 3:03 AM Pavel Stehule <[email protected]>\n> wrote:\n> > > So it looks so IPC::Run::run is ignore parent environment\n> >\n> > I guess the new initdb template captures lc_messages in\n> > postgresql.conf, when it runs earlier? I guess if you put\n> > $node->append_conf('postgresql.conf', 'lc_messages=C'); into\n> > src/bin/pg_amcheck/t/003_check.pl then it will work. I'm not sure\n> > what the correct fix should be, ie if the template mechanism should\n> > notice this difference and not use the template, or if tests that\n> > depend on the message locale should explicitly say so with\n> > lc_messages=C or similar (why is this the only one?), or ...\n>\n> So I tried this technique, but it gest old pretty fast: apparently\n> there's a *ton* of tests that depend on the locale. I gave up after\n> patching the first five files, and noticing that in a second run there\n> another half a dozen failing tests that hadn't failed the first time\n> around. (Not sure why this happened.)\n>\n> So I think injecting --no-locale to the initdb line that creates the\n> template is a better approach; something like the attached.\n>\n\nok\n\nthank you for fixing it\n\nRegards\n\nPavel\n\n\n>\n> --\n> Álvaro Herrera PostgreSQL Developer —\n> https://www.EnterpriseDB.com/\n>\n\nút 29. 8. 2023 v 17:54 odesílatel Alvaro Herrera <[email protected]> napsal:On 2023-Aug-27, Thomas Munro wrote:\n\n> On Sun, Aug 27, 2023 at 3:03 AM Pavel Stehule <[email protected]> wrote:\n> > So it looks so IPC::Run::run is ignore parent environment\n> \n> I guess the new initdb template captures lc_messages in\n> postgresql.conf, when it runs earlier? I guess if you put\n> $node->append_conf('postgresql.conf', 'lc_messages=C'); into\n> src/bin/pg_amcheck/t/003_check.pl then it will work. I'm not sure\n> what the correct fix should be, ie if the template mechanism should\n> notice this difference and not use the template, or if tests that\n> depend on the message locale should explicitly say so with\n> lc_messages=C or similar (why is this the only one?), or ...\n\nSo I tried this technique, but it gest old pretty fast: apparently\nthere's a *ton* of tests that depend on the locale. I gave up after\npatching the first five files, and noticing that in a second run there\nanother half a dozen failing tests that hadn't failed the first time\naround. (Not sure why this happened.)\n\nSo I think injecting --no-locale to the initdb line that creates the\ntemplate is a better approach; something like the attached.okthank you for fixing itRegardsPavel \n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/",
"msg_date": "Tue, 29 Aug 2023 18:55:05 +0200",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: broken master regress tests"
},
{
"msg_contents": "Hi,\n\nOn 2023-08-29 17:54:24 +0200, Alvaro Herrera wrote:\n> On 2023-Aug-27, Thomas Munro wrote:\n> \n> > On Sun, Aug 27, 2023 at 3:03 AM Pavel Stehule <[email protected]> wrote:\n> > > So it looks so IPC::Run::run is ignore parent environment\n> > \n> > I guess the new initdb template captures lc_messages in\n> > postgresql.conf, when it runs earlier? I guess if you put\n> > $node->append_conf('postgresql.conf', 'lc_messages=C'); into\n> > src/bin/pg_amcheck/t/003_check.pl then it will work. I'm not sure\n> > what the correct fix should be, ie if the template mechanism should\n> > notice this difference and not use the template, or if tests that\n> > depend on the message locale should explicitly say so with\n> > lc_messages=C or similar (why is this the only one?), or ...\n> \n> So I tried this technique, but it gest old pretty fast: apparently\n> there's a *ton* of tests that depend on the locale. I gave up after\n> patching the first five files, and noticing that in a second run there\n> another half a dozen failing tests that hadn't failed the first time\n> around. (Not sure why this happened.)\n> \n> So I think injecting --no-locale to the initdb line that creates the\n> template is a better approach; something like the attached.\n\nMakes sense, thanks for taking care of this.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 11 Sep 2023 19:23:58 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: broken master regress tests"
},
{
"msg_contents": "On Mon, 2023-09-11 at 19:23 -0700, Andres Freund wrote:\n> > So I think injecting --no-locale to the initdb line that creates\n> > the\n> > template is a better approach; something like the attached.\n> \n> Makes sense, thanks for taking care of this.\n\nAfter this, it seems \"make check\" no longer picks up the locale from\nthe system environment by default.\n\nWhat is the new way to run the regression tests with an actual locale?\nIf it's no longer done by default, won't that dramatically reduce the\ncoverage of non-C locales?\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Tue, 10 Oct 2023 17:08:25 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: broken master regress tests"
},
{
"msg_contents": "Hi,\n\nOn 2023-10-10 17:08:25 -0700, Jeff Davis wrote:\n> On Mon, 2023-09-11 at 19:23 -0700, Andres Freund wrote:\n> > > So I think injecting --no-locale to the initdb line that creates\n> > > the\n> > > template is a better approach; something like the attached.\n> > \n> > Makes sense, thanks for taking care of this.\n> \n> After this, it seems \"make check\" no longer picks up the locale from\n> the system environment by default.\n\nYea. I wonder if the better fix would have been to copy setenv(\"LC_MESSAGES\", \"C\", 1);\nto the initdb template creation. That afaict also fixes the issue, with a\nsmaller blast radius?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 10 Oct 2023 17:54:34 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: broken master regress tests"
},
{
"msg_contents": "On Tue, 2023-10-10 at 17:54 -0700, Andres Freund wrote:\n> Yea. I wonder if the better fix would have been to copy\n> setenv(\"LC_MESSAGES\", \"C\", 1);\n> to the initdb template creation. That afaict also fixes the issue,\n> with a\n> smaller blast radius?\n\nSounds good to me. Is there anything else we should do to notice that\ntests are unexpectedly skipped, or is this a rare enough problem to not\nworry about other cases?\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Mon, 16 Oct 2023 20:10:05 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: broken master regress tests"
},
{
"msg_contents": "On Tue, Oct 10, 2023 at 05:54:34PM -0700, Andres Freund wrote:\n> On 2023-10-10 17:08:25 -0700, Jeff Davis wrote:\n> > After this, it seems \"make check\" no longer picks up the locale from\n> > the system environment by default.\n> \n> Yea. I wonder if the better fix would have been to copy setenv(\"LC_MESSAGES\", \"C\", 1);\n> to the initdb template creation. That afaict also fixes the issue, with a\n> smaller blast radius?\n\n+1, that would restore the testing semantics known from v16-. I think the\nintent of the template was to optimize without changing semantics, and the\nabove proposal aligns with that. Currently, for v17 alone, one needs\ninstallcheck or build file hacks to reproduce a locale-specific test failure.\n\nAn alternative would be to declare that the tests are supported in one\nencoding+locale only, then stop testing others in the buildfarm. Even if we\ndid that, I'm fairly sure we'd standardize on UTF8, not SQL_ASCII, as the one\ntestable encoding.\n\n\n",
"msg_date": "Tue, 12 Dec 2023 18:56:19 -0800",
"msg_from": "Noah Misch <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: broken master regress tests"
},
{
"msg_contents": "Noah Misch <[email protected]> writes:\n> An alternative would be to declare that the tests are supported in one\n> encoding+locale only, then stop testing others in the buildfarm.\n\nSurely that's not even a thinkably acceptable choice.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 12 Dec 2023 21:59:22 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: broken master regress tests"
},
{
"msg_contents": "On Tue, 2023-12-12 at 18:56 -0800, Noah Misch wrote:\n> > Yea. I wonder if the better fix would have been to copy\n> > setenv(\"LC_MESSAGES\", \"C\", 1);\n> > to the initdb template creation. That afaict also fixes the issue,\n> > with a\n> > smaller blast radius?\n> \n> +1, that would restore the testing semantics known from v16-. I\n> think the\n> intent of the template was to optimize without changing semantics,\n> and the\n> above proposal aligns with that. Currently, for v17 alone, one needs\n> installcheck or build file hacks to reproduce a locale-specific test\n> failure.\n\nAttached.\n\nI just changed --no-locale to --lc-messages=C, which I think solves it\nin the right place with minimal blast radius. Andres, did you literally\nmean C setenv() somewhere, or is this what you had in mind?\n\nI also noticed that collate.linux.utf8.sql seems to be skipped on my\nmachine because of the \"version() !~ 'linux-gnu'\" check, even though\nI'm running Ubuntu. Is that test getting run often enough?\n\nAnd relatedly, is it worth thinking about extending pg_regress to\nreport skipped tests so it's easier to find these kinds of problems?\n\nRegards,\n\tJeff Davis",
"msg_date": "Wed, 20 Dec 2023 17:48:55 -0800",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: broken master regress tests"
},
{
"msg_contents": "On Wed, 2023-12-20 at 17:48 -0800, Jeff Davis wrote:\n> Attached.\n\nIt appears to increase the coverage. I committed it and I'll see how\nthe buildfarm reacts.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Thu, 21 Dec 2023 15:17:32 -0800",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: broken master regress tests"
},
{
"msg_contents": "Hi\n\npá 22. 12. 2023 v 0:17 odesílatel Jeff Davis <[email protected]> napsal:\n\n> On Wed, 2023-12-20 at 17:48 -0800, Jeff Davis wrote:\n> > Attached.\n>\n> It appears to increase the coverage. I committed it and I'll see how\n> the buildfarm reacts.\n>\n\nI tested it locally ( LANG=cs_CZ.UTF-8 ) without problems\n\nRegards\n\nPavel\n\n\n> Regards,\n> Jeff Davis\n>\n>\n\nHipá 22. 12. 2023 v 0:17 odesílatel Jeff Davis <[email protected]> napsal:On Wed, 2023-12-20 at 17:48 -0800, Jeff Davis wrote:\n> Attached.\n\nIt appears to increase the coverage. I committed it and I'll see how\nthe buildfarm reacts.I tested it locally ( LANG=cs_CZ.UTF-8 ) without problemsRegardsPavel\n\nRegards,\n Jeff Davis",
"msg_date": "Sat, 23 Dec 2023 21:35:55 +0100",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: broken master regress tests"
},
{
"msg_contents": "Hello Jeff,\n\n22.12.2023 02:17, Jeff Davis wrote:\n> On Wed, 2023-12-20 at 17:48 -0800, Jeff Davis wrote:\n>> Attached.\n> It appears to increase the coverage. I committed it and I'll see how\n> the buildfarm reacts.\n\nStarting from the commit 8793c6005, I observe a failure of test\ncollate.windows.win1252 on Windows Server 2016:\nmeson test regress/regress\n1/1 postgresql:regress / regress/regress ERROR 24.47s exit status 1\n\nregression.diffs contains:\n@@ -993,6 +993,8 @@\n -- nondeterministic collations\n -- (not supported with libc provider)\n CREATE COLLATION ctest_det (locale = 'en_US', deterministic = true);\n+ERROR: could not create locale \"en_US\": No such file or directory\n+DETAIL: The operating system could not find any locale data for the locale name \"en_US\".\n CREATE COLLATION ctest_nondet (locale = 'en_US', deterministic = false);\n ERROR: nondeterministic collations not supported with this provider\n -- cleanup\n\nThough\nCREATE COLLATION ctest_det (locale = 'English_United States', deterministic = true);\nexecuted successfully on this OS.\n\nAFAICS, before that commit SELECT getdatabaseencoding() in the test\nreturned SQL_ASCII, hence the test was essentially skipped, but now it\nreturns WIN1252, so problematic CREATE COLLATION(locale = 'en_US', ...)\nis reached.\n\nBest regards,\nAlexander\n\n\n",
"msg_date": "Thu, 28 Dec 2023 18:00:00 +0300",
"msg_from": "Alexander Lakhin <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: broken master regress tests"
},
{
"msg_contents": "On Thu, 2023-12-28 at 18:00 +0300, Alexander Lakhin wrote:\n> AFAICS, before that commit SELECT getdatabaseencoding() in the test\n> returned SQL_ASCII, hence the test was essentially skipped, but now\n> it\n> returns WIN1252, so problematic CREATE COLLATION(locale = 'en_US',\n> ...)\n> is reached.\n\nWe do want that test to run though, right?\n\nI suspect that test line never worked reliably. The skip_test check at\nthe top guarantees that the collation named \"en_US\" exists, but that\ndoesn't mean that the OS understands the locale 'en_US'.\n\nPerhaps we can change that line to use a similar trick as what's used\nelsewhere in the file:\n\n do $$\n BEGIN\n EXECUTE 'CREATE COLLATION ctest_det (locale = ' ||\n quote_literal((SELECT collcollate FROM pg_collation WHERE\ncollname = ''en_US'')) || ', deterministic = true);';\n END\n $$;\n\nThe above may need some adjustment, but perhaps you can try it out?\nAnother option might be to use \\gset to assign it to a variable, which\nmight be more readable, but I think it's better to just follow what the\nrest of the file is doing.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Thu, 28 Dec 2023 09:36:42 -0800",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: broken master regress tests"
},
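A hedged sketch of the dollar-quoted form proposed above, with the quoting adjusted for dollar quoting (inside a $$ ... $$ body the literal is written with plain single quotes, not doubled ones). The collation and locale names come from the discussion; this is only an illustration, not necessarily the wording that was ultimately committed:

    do $$
    BEGIN
      -- Look up whichever libc locale the "en_US" collation was actually
      -- created with (it may be spelled 'en-US' on Windows) and build the
      -- CREATE COLLATION statement dynamically from it.
      EXECUTE 'CREATE COLLATION ctest_det (locale = ' ||
              quote_literal((SELECT collcollate FROM pg_collation
                             WHERE collname = 'en_US')) ||
              ', deterministic = true);';
    END
    $$;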
{
"msg_contents": "28.12.2023 20:36, Jeff Davis wrote:\n> We do want that test to run though, right?\n\nYes, I think so.\n\n> I suspect that test line never worked reliably. The skip_test check at\n> the top guarantees that the collation named \"en_US\" exists, but that\n> doesn't mean that the OS understands the locale 'en_US'.\n>\n> Perhaps we can change that line to use a similar trick as what's used\n> elsewhere in the file:\n>\n> do $$\n> BEGIN\n> EXECUTE 'CREATE COLLATION ctest_det (locale = ' ||\n> quote_literal((SELECT collcollate FROM pg_collation WHERE\n> collname = ''en_US'')) || ', deterministic = true);';\n> END\n> $$;\n>\n> The above may need some adjustment, but perhaps you can try it out?\n\nYes, this trick resolves the issue, it gives locale 'en-US' on that OS,\nwhich works there. Please see the attached patch.\n\nBut looking at the result with the comment above that \"do\" block, I wonder\nwhether this successful CREATE COLLATION command is so important to perform\nit that tricky way, if we want to demonstrate that nondeterministic\ncollations not supported.\nSo in case you decide just to remove this command, please see the second\npatch.\n\nBest regards,\nAlexander",
"msg_date": "Thu, 28 Dec 2023 22:00:00 +0300",
"msg_from": "Alexander Lakhin <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: broken master regress tests"
},
{
"msg_contents": "On Thu, 2023-12-28 at 22:00 +0300, Alexander Lakhin wrote:\n> But looking at the result with the comment above that \"do\" block, I\n> wonder\n> whether this successful CREATE COLLATION command is so important to\n> perform\n> it that tricky way, if we want to demonstrate that nondeterministic\n> collations not supported.\n\nThank you, pushed this version. There are other similar commands in the\nfile, so I think it's fine. It exercises a specific locale that might\nbe different from datcollate.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Fri, 29 Dec 2023 12:10:26 -0800",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: broken master regress tests"
}
] |
[
{
"msg_contents": "Hi,\n\nI've just noticed this warning when building on Debian 12:\n\nIn file included from /usr/lib/llvm-14/include/llvm/Analysis/ModuleSummaryAnalysis.h:17,\n from llvmjit_inline.cpp:51:\n/usr/lib/llvm-14/include/llvm/IR/ModuleSummaryIndex.h: In constructor ‘llvm::ModuleSummaryIndex::ModuleSummaryIndex(bool, bool)’:\n/usr/lib/llvm-14/include/llvm/IR/ModuleSummaryIndex.h:1175:73: warning: member ‘llvm::ModuleSummaryIndex::Alloc’ is used uninitialized [-Wuninitialized]\n 1175 | : HaveGVs(HaveGVs), EnableSplitLTOUnit(EnableSplitLTOUnit), Saver(Alloc),\n | \n\ncat /etc/debian_version \n12.1\n\nRegards\nDaniel\n\n",
"msg_date": "Fri, 25 Aug 2023 05:36:12 +0000",
"msg_from": "\"Daniel Westermann (DWE)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Compiler warning on Debian 12, PostgreSQL 16 Beta3"
},
{
"msg_contents": "On 2023-Aug-25, Daniel Westermann (DWE) wrote:\n\n> I've just noticed this warning when building on Debian 12:\n> \n> In file included from /usr/lib/llvm-14/include/llvm/Analysis/ModuleSummaryAnalysis.h:17,\n> from llvmjit_inline.cpp:51:\n> /usr/lib/llvm-14/include/llvm/IR/ModuleSummaryIndex.h: In constructor ‘llvm::ModuleSummaryIndex::ModuleSummaryIndex(bool, bool)’:\n> /usr/lib/llvm-14/include/llvm/IR/ModuleSummaryIndex.h:1175:73: warning: member ‘llvm::ModuleSummaryIndex::Alloc’ is used uninitialized [-Wuninitialized]\n> 1175 | : HaveGVs(HaveGVs), EnableSplitLTOUnit(EnableSplitLTOUnit), Saver(Alloc),\n> | \n\nYeah, I get this one too. I thought commit 37d5babb5cfa (\"jit: Support\nopaque pointers in LLVM 16.\") was going to silence it, but I was quite\nmistaken. I gave that code a quick look and could not understand what\nit was complaining about. Is it a bug in the LLVM headers?\n\nAdding Andres and Thomas to CC, because they're the ones touching the\nLLVM / JIT code.\n\nAny clues?\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Tue, 7 Nov 2023 16:46:53 +0100",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Compiler warning on Debian 12, PostgreSQL 16 Beta3"
},
{
"msg_contents": "On Wed, Nov 8, 2023 at 4:46 AM Alvaro Herrera <[email protected]> wrote:\n> On 2023-Aug-25, Daniel Westermann (DWE) wrote:\n> > I've just noticed this warning when building on Debian 12:\n> >\n> > In file included from /usr/lib/llvm-14/include/llvm/Analysis/ModuleSummaryAnalysis.h:17,\n> > from llvmjit_inline.cpp:51:\n> > /usr/lib/llvm-14/include/llvm/IR/ModuleSummaryIndex.h: In constructor ‘llvm::ModuleSummaryIndex::ModuleSummaryIndex(bool, bool)’:\n> > /usr/lib/llvm-14/include/llvm/IR/ModuleSummaryIndex.h:1175:73: warning: member ‘llvm::ModuleSummaryIndex::Alloc’ is used uninitialized [-Wuninitialized]\n> > 1175 | : HaveGVs(HaveGVs), EnableSplitLTOUnit(EnableSplitLTOUnit), Saver(Alloc),\n> > |\n>\n> Yeah, I get this one too. I thought commit 37d5babb5cfa (\"jit: Support\n> opaque pointers in LLVM 16.\") was going to silence it, but I was quite\n> mistaken. I gave that code a quick look and could not understand what\n> it was complaining about. Is it a bug in the LLVM headers?\n\nI found the commit where they fixed that in 15+:\n\nhttps://github.com/llvm/llvm-project/commit/1d9086bf054c2e734940620d02d4451156b424e6\n\nThey don't seem to back-patch fixes, generally.\n\n\n",
"msg_date": "Wed, 8 Nov 2023 07:56:07 +1300",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Compiler warning on Debian 12, PostgreSQL 16 Beta3"
},
{
"msg_contents": "On 2023-Nov-08, Thomas Munro wrote:\n\n> On Wed, Nov 8, 2023 at 4:46 AM Alvaro Herrera <[email protected]> wrote:\n> > On 2023-Aug-25, Daniel Westermann (DWE) wrote:\n> >\n> > Yeah, I get this one too. I thought commit 37d5babb5cfa (\"jit: Support\n> > opaque pointers in LLVM 16.\") was going to silence it, but I was quite\n> > mistaken. I gave that code a quick look and could not understand what\n> > it was complaining about. Is it a bug in the LLVM headers?\n> \n> I found the commit where they fixed that in 15+:\n> \n> https://github.com/llvm/llvm-project/commit/1d9086bf054c2e734940620d02d4451156b424e6\n> \n> They don't seem to back-patch fixes, generally.\n\nAh yeah, I can silence the warning by patching that file locally.\n\nAnnoying :-(\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"El sabio habla porque tiene algo que decir;\nel tonto, porque tiene que decir algo\" (Platon).\n\n\n",
"msg_date": "Tue, 7 Nov 2023 20:13:52 +0100",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Compiler warning on Debian 12, PostgreSQL 16 Beta3"
},
{
"msg_contents": "On Wed, Nov 8, 2023 at 8:13 AM Alvaro Herrera <[email protected]> wrote:\n> On 2023-Nov-08, Thomas Munro wrote:\n> > On Wed, Nov 8, 2023 at 4:46 AM Alvaro Herrera <[email protected]> wrote:\n> > > On 2023-Aug-25, Daniel Westermann (DWE) wrote:\n> > >\n> > > Yeah, I get this one too. I thought commit 37d5babb5cfa (\"jit: Support\n> > > opaque pointers in LLVM 16.\") was going to silence it, but I was quite\n> > > mistaken. I gave that code a quick look and could not understand what\n> > > it was complaining about. Is it a bug in the LLVM headers?\n> >\n> > I found the commit where they fixed that in 15+:\n> >\n> > https://github.com/llvm/llvm-project/commit/1d9086bf054c2e734940620d02d4451156b424e6\n> >\n> > They don't seem to back-patch fixes, generally.\n>\n> Ah yeah, I can silence the warning by patching that file locally.\n\nSince LLVM only seems to maintain one branch at a time as a matter of\npolicy (I don't see were that is written down but I do see for example\ntheir backport request format[1] which strictly goes from main to\n(currently) release/17.x, and see how the commit history of each\nrelease branch ends as a new branch is born), I suppose another angle\nwould be to check if the Debian maintainers carry extra patches for\nstuff like that. They're the ones creating the dependency on an 'old'\nLLVM after all. Unlike the RHEL/etc maintainers' fast rolling version\npolicy (that we learned about in the thread for CF #4640). Who wants\nto ship zombie unmaintained code for years? On the other hand, Debian\nitself rolls faster than RHEL.\n\n[1] https://llvm.org/docs/GitHub.html\n\n\n",
"msg_date": "Wed, 8 Nov 2023 10:00:59 +1300",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Compiler warning on Debian 12, PostgreSQL 16 Beta3"
}
] |
[
{
"msg_contents": "I noticed another missing fsync() with unlogged tables, similar to the \none at [1].\n\nRelationCopyStorage does this:\n\n> \t/*\n> \t * When we WAL-logged rel pages, we must nonetheless fsync them. The\n> \t * reason is that since we're copying outside shared buffers, a CHECKPOINT\n> \t * occurring during the copy has no way to flush the previously written\n> \t * data to disk (indeed it won't know the new rel even exists). A crash\n> \t * later on would replay WAL from the checkpoint, therefore it wouldn't\n> \t * replay our earlier WAL entries. If we do not fsync those pages here,\n> \t * they might still not be on disk when the crash occurs.\n> \t */\n> \tif (use_wal || copying_initfork)\n> \t\tsmgrimmedsync(dst, forkNum);\n\nThat 'copying_initfork' condition is wrong. The first hint of that is \nthat 'use_wal' is always true for an init fork. I believe this was meant \nto check for 'relpersistence == RELPERSISTENCE_UNLOGGED'. Otherwise, \nthis bad thing can happen:\n\n1. Create an unlogged table\n2. ALTER TABLE unlogged_tbl SET TABLESPACE ... -- This calls \nRelationCopyStorage\n3. a checkpoint happens while the command is running\n4. After the ALTER TABLE has finished, shut down postgres cleanly.\n5. Lose power\n\nWhen you recover, the unlogged table is not reset, because it was a \nclean postgres shutdown. But the part of the file that was copied after \nthe checkpoint at step 3 was never fsync'd, so part of the file is lost. \nI was able to reproduce with a VM that I kill to simulate step 5.\n\nThis is the same scenario that the smgrimmedsync() call above protects \nfrom for WAL-logged relations. But we got the condition wrong: instead \nof worrying about the init fork, we need to call smgrimmedsync() for all \n*other* forks of an unlogged relation.\n\nFortunately the fix is trivial, see attached. I suspect we have similar \nproblems in a few other places, though. end_heap_rewrite(), _bt_load(), \nand gist_indexsortbuild have a similar-looking smgrimmedsync() calls, \nwith no consideration for unlogged relations at all. I haven't tested \nthose, but they look wrong to me.\n\nI'm also attaching the scripts I used to reproduce this, although they \nwill require some manual fiddling to run.\n\n[1] \nhttps://www.postgresql.org/message-id/d47d8122-415e-425c-d0a2-e0160829702d%40iki.fi\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)",
"msg_date": "Fri, 25 Aug 2023 15:47:27 +0300",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Unlogged relation copy is not fsync'd"
},
{
"msg_contents": "On Fri, Aug 25, 2023 at 03:47:27PM +0300, Heikki Linnakangas wrote:\n> That 'copying_initfork' condition is wrong. The first hint of that is that\n> 'use_wal' is always true for an init fork. I believe this was meant to check\n> for 'relpersistence == RELPERSISTENCE_UNLOGGED'. Otherwise, this bad thing\n> can happen:\n> \n> 1. Create an unlogged table\n> 2. ALTER TABLE unlogged_tbl SET TABLESPACE ... -- This calls\n> RelationCopyStorage\n> 3. a checkpoint happens while the command is running\n> 4. After the ALTER TABLE has finished, shut down postgres cleanly.\n> 5. Lose power\n> \n> When you recover, the unlogged table is not reset, because it was a clean\n> postgres shutdown. But the part of the file that was copied after the\n> checkpoint at step 3 was never fsync'd, so part of the file is lost. I was\n> able to reproduce with a VM that I kill to simulate step 5.\n\nOops.\n\nThe comment at the top of smgrimmedsync() looks incorrect now to me\nnow. When copying the data from something else than an init fork, the\nrelation pages are not WAL-logged for an unlogged relation.\n\n> Fortunately the fix is trivial, see attached. I suspect we have similar\n> problems in a few other places, though. end_heap_rewrite(), _bt_load(), and\n> gist_indexsortbuild have a similar-looking smgrimmedsync() calls, with no\n> consideration for unlogged relations at all. I haven't tested those, but\n> they look wrong to me.\n\nOops ^ N. And end_heap_rewrite() mentions directly\nRelationCopyStorage().. \n--\nMichael",
"msg_date": "Mon, 4 Sep 2023 16:12:32 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Unlogged relation copy is not fsync'd"
},
{
"msg_contents": "On Fri, Aug 25, 2023 at 8:47 AM Heikki Linnakangas <[email protected]> wrote:\n>\n> I noticed another missing fsync() with unlogged tables, similar to the\n> one at [1].\n>\n> RelationCopyStorage does this:\n>\n> > /*\n> > * When we WAL-logged rel pages, we must nonetheless fsync them. The\n> > * reason is that since we're copying outside shared buffers, a CHECKPOINT\n> > * occurring during the copy has no way to flush the previously written\n> > * data to disk (indeed it won't know the new rel even exists). A crash\n> > * later on would replay WAL from the checkpoint, therefore it wouldn't\n> > * replay our earlier WAL entries. If we do not fsync those pages here,\n> > * they might still not be on disk when the crash occurs.\n> > */\n> > if (use_wal || copying_initfork)\n> > smgrimmedsync(dst, forkNum);\n>\n> That 'copying_initfork' condition is wrong. The first hint of that is\n> that 'use_wal' is always true for an init fork. I believe this was meant\n> to check for 'relpersistence == RELPERSISTENCE_UNLOGGED'. Otherwise,\n> this bad thing can happen:\n>\n> 1. Create an unlogged table\n> 2. ALTER TABLE unlogged_tbl SET TABLESPACE ... -- This calls\n> RelationCopyStorage\n> 3. a checkpoint happens while the command is running\n> 4. After the ALTER TABLE has finished, shut down postgres cleanly.\n> 5. Lose power\n>\n> When you recover, the unlogged table is not reset, because it was a\n> clean postgres shutdown. But the part of the file that was copied after\n> the checkpoint at step 3 was never fsync'd, so part of the file is lost.\n> I was able to reproduce with a VM that I kill to simulate step 5.\n>\n> This is the same scenario that the smgrimmedsync() call above protects\n> from for WAL-logged relations. But we got the condition wrong: instead\n> of worrying about the init fork, we need to call smgrimmedsync() for all\n> *other* forks of an unlogged relation.\n>\n> Fortunately the fix is trivial, see attached. I suspect we have similar\n> problems in a few other places, though. end_heap_rewrite(), _bt_load(),\n> and gist_indexsortbuild have a similar-looking smgrimmedsync() calls,\n> with no consideration for unlogged relations at all. I haven't tested\n> those, but they look wrong to me.\n\nThe patch looks reasonable to me. Is this [1] case in hash index build\nthat I reported but didn't take the time to reproduce similar?\n\n- Melanie\n\n[1] https://www.postgresql.org/message-id/CAAKRu_bPc81M121pOEU7W%3D%2BwSWEebiLnrie4NpaFC%2BkWATFtSA%40mail.gmail.com\n\n\n",
"msg_date": "Mon, 4 Sep 2023 09:59:14 -0400",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Unlogged relation copy is not fsync'd"
},
{
"msg_contents": "On Fri, Aug 25, 2023 at 8:47 AM Heikki Linnakangas <[email protected]> wrote:\n> 1. Create an unlogged table\n> 2. ALTER TABLE unlogged_tbl SET TABLESPACE ... -- This calls\n> RelationCopyStorage\n> 3. a checkpoint happens while the command is running\n> 4. After the ALTER TABLE has finished, shut down postgres cleanly.\n> 5. Lose power\n>\n> When you recover, the unlogged table is not reset, because it was a\n> clean postgres shutdown. But the part of the file that was copied after\n> the checkpoint at step 3 was never fsync'd, so part of the file is lost.\n> I was able to reproduce with a VM that I kill to simulate step 5.\n>\n> This is the same scenario that the smgrimmedsync() call above protects\n> from for WAL-logged relations. But we got the condition wrong: instead\n> of worrying about the init fork, we need to call smgrimmedsync() for all\n> *other* forks of an unlogged relation.\n>\n> Fortunately the fix is trivial, see attached.\n\nThe general rule throughout the system is that the init-fork of an\nunlogged relation is treated the same as a permanent relation: it is\nWAL-logged and fsyncd. But the other forks of an unlogged relation are\nneither WAL-logged nor fsync'd ... except in the case of a clean\nshutdown, when we fsync even that data.\n\nIn other words, somehow it feels like we ought to be trying to defer\nthe fsync here until a clean shutdown actually occurs, instead of\nperforming it immediately. Admittedly, the bookkeeping seems like a\nproblem, so maybe this is the best we can do, but it's clearly worse\nthan what we do in other cases.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 5 Sep 2023 14:20:18 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Unlogged relation copy is not fsync'd"
},
{
"msg_contents": "On Tue, Sep 05, 2023 at 02:20:18PM -0400, Robert Haas wrote:\n> The general rule throughout the system is that the init-fork of an\n> unlogged relation is treated the same as a permanent relation: it is\n> WAL-logged and fsyncd. But the other forks of an unlogged relation are\n> neither WAL-logged nor fsync'd ... except in the case of a clean\n> shutdown, when we fsync even that data.\n> \n> In other words, somehow it feels like we ought to be trying to defer\n> the fsync here until a clean shutdown actually occurs, instead of\n> performing it immediately. Admittedly, the bookkeeping seems like a\n> problem, so maybe this is the best we can do, but it's clearly worse\n> than what we do in other cases.\n\nThat's where we usually rely more on RegisterSyncRequest() and\nregister_dirty_segment() so as the flush of the dirty segments can\nhappen when they should, but we don't go through the shared buffers\nwhen copying all the forks of a relation file across tablespaces..\n--\nMichael",
"msg_date": "Thu, 7 Sep 2023 13:36:30 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Unlogged relation copy is not fsync'd"
},
{
"msg_contents": "On 04/09/2023 16:59, Melanie Plageman wrote:\n> The patch looks reasonable to me. Is this [1] case in hash index build\n> that I reported but didn't take the time to reproduce similar?\n> \n> [1] https://www.postgresql.org/message-id/CAAKRu_bPc81M121pOEU7W%3D%2BwSWEebiLnrie4NpaFC%2BkWATFtSA%40mail.gmail.com\n\nYes, I think you're right. Any caller of smgrwrite() must either:\n\na) Call smgrimmedsync(), after smgrwrite() and before the relation is \nvisible to other transactions. Regardless of the 'skipFsync' parameter! \nI don't think this is ever completely safe unless it's a new relation. \nLike you pointed out with the hash index case. Or:\n\nb) Hold the buffer lock, so that if a checkpoint happens, it cannot \n\"race past\" the page without seeing the sync request.\n\nThe comment on smgwrite() doesn't make this clear. It talks about \n'skipFsync', and gives the impression that as long as you pass \n'skipFsync=false', you don't need to worry about fsyncing. But that is \nnot true. Even in these bulk loading cases where we currently call \nsmgrimmedsync(), we would *still* need to call smgrimmedsync() if we \nused 'skipFsync=false'.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Fri, 15 Sep 2023 14:22:18 +0300",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Unlogged relation copy is not fsync'd"
},
{
"msg_contents": "On 05/09/2023 21:20, Robert Haas wrote:\n> In other words, somehow it feels like we ought to be trying to defer\n> the fsync here until a clean shutdown actually occurs, instead of\n> performing it immediately.\n\n+1\n\n> Admittedly, the bookkeeping seems like a problem, so maybe this is\n> the best we can do, but it's clearly worse than what we do in other\n> cases.\nI think we can do it, I have been working on a patch along those lines \non the side. But I want to focus on a smaller, backpatchable fix in this \nthread.\n\n\nThinking about this some more, I think this is still not 100% correct, \neven with the patch I posted earlier:\n\n> \t/*\n> \t * When we WAL-logged rel pages, we must nonetheless fsync them. The\n> \t * reason is that since we're copying outside shared buffers, a CHECKPOINT\n> \t * occurring during the copy has no way to flush the previously written\n> \t * data to disk (indeed it won't know the new rel even exists). A crash\n> \t * later on would replay WAL from the checkpoint, therefore it wouldn't\n> \t * replay our earlier WAL entries. If we do not fsync those pages here,\n> \t * they might still not be on disk when the crash occurs.\n> \t */\n> \tif (use_wal || relpersistence == RELPERSISTENCE_UNLOGGED)\n> \t\tsmgrimmedsync(dst, forkNum);\n\nLet's walk through each case one by one:\n\n1. Temporary table. No fsync() needed. This is correct.\n\n2. Unlogged rel, main fork. Needs to be fsync'd, because we skipped WAL, \nand also because we bypassed the buffer manager. Correct.\n\n3. Unlogged rel, init fork. Needs to be fsync'd, even though we \nWAL-logged it, because we bypassed the buffer manager. Like the comment \nexplains. This is now correct with the patch.\n\n4. Permanent rel, use_wal == true. Needs to be fsync'd, even though we \nWAL-logged it, because we bypassed the buffer manager. Like the comment \nexplains. Correct.\n\n5. Permanent rel, use_wal == false. We skip fsync, because it means that \nit's a new relation, so we have a sync request pending for it. (An \nassertion for that would be nice). At the end of transaction, in \nsmgrDoPendingSyncs(), we will either fsync it or we will WAL-log all the \npages if it was a small relation. I believe this is *wrong*. If \nsmgrDoPendingSyncs() decides to WAL-log the pages, we have the same race \ncondition that's explained in the comment, because we bypassed the \nbuffer manager:\n\n1. RelationCopyStorage() copies some of the pages.\n2. Checkpoint happens, which fsyncs the relation (smgrcreate() \nregistered a dirty segment when the relation was created)\n3. RelationCopyStorage() copies the rest of the pages.\n4. smgrDoPendingSyncs() WAL-logs all the pages.\n5. Another checkpoint happens. It does *not* fsync the relation.\n6. Crash.\n\nWAL replay will not see the WAL-logged pages, because they were \nWAL-logged before the last checkpoint. And the contents were not fsync'd \neither.\n\nIn other words, we must do smgrimmedsync() here for permanent relations, \neven if use_wal==false, because we bypassed the buffer manager. Same \nreason we need to do it with use_wal==true.\n\nFor a backportable fix, I think we should change this to only exempt \ntemporary tables, and call smgrimmedsync() for all other cases. Maybe \nwould be safe to skip it in some cases, but it feels too dangerous to be \nclever here. The other similar callers of smgrimmedsync() in nbtsort.c, \ngistbuild.c, and rewriteheap.c, need similar treatment.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Fri, 15 Sep 2023 14:47:45 +0300",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Unlogged relation copy is not fsync'd"
},
{
"msg_contents": "On Fri, Sep 15, 2023 at 02:47:45PM +0300, Heikki Linnakangas wrote:\n> On 05/09/2023 21:20, Robert Haas wrote:\n\n> Thinking about this some more, I think this is still not 100% correct, even\n> with the patch I posted earlier:\n> \n> >\t/*\n> >\t * When we WAL-logged rel pages, we must nonetheless fsync them. The\n> >\t * reason is that since we're copying outside shared buffers, a CHECKPOINT\n> >\t * occurring during the copy has no way to flush the previously written\n> >\t * data to disk (indeed it won't know the new rel even exists). A crash\n> >\t * later on would replay WAL from the checkpoint, therefore it wouldn't\n> >\t * replay our earlier WAL entries. If we do not fsync those pages here,\n> >\t * they might still not be on disk when the crash occurs.\n> >\t */\n> >\tif (use_wal || relpersistence == RELPERSISTENCE_UNLOGGED)\n> >\t\tsmgrimmedsync(dst, forkNum);\n> \n> Let's walk through each case one by one:\n> \n> 1. Temporary table. No fsync() needed. This is correct.\n> \n> 2. Unlogged rel, main fork. Needs to be fsync'd, because we skipped WAL, and\n> also because we bypassed the buffer manager. Correct.\n\nAgreed.\n\n> 3. Unlogged rel, init fork. Needs to be fsync'd, even though we WAL-logged\n> it, because we bypassed the buffer manager. Like the comment explains. This\n> is now correct with the patch.\n\nThis has two subtypes:\n\n3a. Unlogged rel, init fork, use_wal (wal_level!=minimal). Matches what\nyou wrote.\n\n3b. Unlogged rel, init fork, !use_wal (wal_level==minimal). Needs to be\nfsync'd because we didn't WAL-log it and RelationCreateStorage() won't call\nAddPendingSync(). (RelationCreateStorage() could start calling\nAddPendingSync() for this case. I think we chose not to do that because there\nwill never be additional writes to the init fork, and smgrDoPendingSyncs()\nwould send the fork to FlushRelationsAllBuffers() even though the fork will\nnever appear in shared buffers. On the other hand, grouping the sync with the\nother end-of-xact syncs could improve efficiency for some filesystems. Also,\nthe number of distinguishable cases is unpleasantly high.)\n\n> 4. Permanent rel, use_wal == true. Needs to be fsync'd, even though we\n> WAL-logged it, because we bypassed the buffer manager. Like the comment\n> explains. Correct.\n> \n> 5. Permanent rel, use_wal == false. We skip fsync, because it means that\n> it's a new relation, so we have a sync request pending for it. (An assertion\n> for that would be nice). At the end of transaction, in smgrDoPendingSyncs(),\n> we will either fsync it or we will WAL-log all the pages if it was a small\n> relation. I believe this is *wrong*. If smgrDoPendingSyncs() decides to\n> WAL-log the pages, we have the same race condition that's explained in the\n> comment, because we bypassed the buffer manager:\n> \n> 1. RelationCopyStorage() copies some of the pages.\n> 2. Checkpoint happens, which fsyncs the relation (smgrcreate() registered a\n> dirty segment when the relation was created)\n> 3. RelationCopyStorage() copies the rest of the pages.\n> 4. smgrDoPendingSyncs() WAL-logs all the pages.\n\nsmgrDoPendingSyncs() delegates to log_newpage_range(). log_newpage_range()\nloads each page into the buffer manager and calls MarkBufferDirty() on each.\nHence, ...\n\n> 5. Another checkpoint happens. It does *not* fsync the relation.\n\n... I think this will fsync the relation. No?\n\n> 6. Crash.\n\n\n",
"msg_date": "Wed, 20 Sep 2023 23:22:10 -0700",
"msg_from": "Noah Misch <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Unlogged relation copy is not fsync'd"
},
{
"msg_contents": "On Wed, Sep 20, 2023 at 11:22:10PM -0700, Noah Misch wrote:\n> On Fri, Sep 15, 2023 at 02:47:45PM +0300, Heikki Linnakangas wrote:\n> > On 05/09/2023 21:20, Robert Haas wrote:\n> \n> > Thinking about this some more, I think this is still not 100% correct, even\n> > with the patch I posted earlier:\n> > \n> > >\t/*\n> > >\t * When we WAL-logged rel pages, we must nonetheless fsync them. The\n> > >\t * reason is that since we're copying outside shared buffers, a CHECKPOINT\n> > >\t * occurring during the copy has no way to flush the previously written\n> > >\t * data to disk (indeed it won't know the new rel even exists). A crash\n> > >\t * later on would replay WAL from the checkpoint, therefore it wouldn't\n> > >\t * replay our earlier WAL entries. If we do not fsync those pages here,\n> > >\t * they might still not be on disk when the crash occurs.\n> > >\t */\n> > >\tif (use_wal || relpersistence == RELPERSISTENCE_UNLOGGED)\n> > >\t\tsmgrimmedsync(dst, forkNum);\n> > \n> > Let's walk through each case one by one:\n> > \n> > 1. Temporary table. No fsync() needed. This is correct.\n> > \n> > 2. Unlogged rel, main fork. Needs to be fsync'd, because we skipped WAL, and\n> > also because we bypassed the buffer manager. Correct.\n> \n> Agreed.\n> \n> > 3. Unlogged rel, init fork. Needs to be fsync'd, even though we WAL-logged\n> > it, because we bypassed the buffer manager. Like the comment explains. This\n> > is now correct with the patch.\n> \n> This has two subtypes:\n> \n> 3a. Unlogged rel, init fork, use_wal (wal_level!=minimal). Matches what\n> you wrote.\n> \n> 3b. Unlogged rel, init fork, !use_wal (wal_level==minimal). Needs to be\n> fsync'd because we didn't WAL-log it and RelationCreateStorage() won't call\n> AddPendingSync(). (RelationCreateStorage() could start calling\n> AddPendingSync() for this case. I think we chose not to do that because there\n> will never be additional writes to the init fork, and smgrDoPendingSyncs()\n> would send the fork to FlushRelationsAllBuffers() even though the fork will\n> never appear in shared buffers. On the other hand, grouping the sync with the\n> other end-of-xact syncs could improve efficiency for some filesystems. Also,\n> the number of distinguishable cases is unpleasantly high.)\n> \n> > 4. Permanent rel, use_wal == true. Needs to be fsync'd, even though we\n> > WAL-logged it, because we bypassed the buffer manager. Like the comment\n> > explains. Correct.\n> > \n> > 5. Permanent rel, use_wal == false. We skip fsync, because it means that\n> > it's a new relation, so we have a sync request pending for it. (An assertion\n> > for that would be nice). At the end of transaction, in smgrDoPendingSyncs(),\n> > we will either fsync it or we will WAL-log all the pages if it was a small\n> > relation. I believe this is *wrong*. If smgrDoPendingSyncs() decides to\n> > WAL-log the pages, we have the same race condition that's explained in the\n> > comment, because we bypassed the buffer manager:\n> > \n> > 1. RelationCopyStorage() copies some of the pages.\n> > 2. Checkpoint happens, which fsyncs the relation (smgrcreate() registered a\n> > dirty segment when the relation was created)\n> > 3. RelationCopyStorage() copies the rest of the pages.\n> > 4. smgrDoPendingSyncs() WAL-logs all the pages.\n> \n> smgrDoPendingSyncs() delegates to log_newpage_range(). log_newpage_range()\n> loads each page into the buffer manager and calls MarkBufferDirty() on each.\n> Hence, ...\n> \n> > 5. Another checkpoint happens. 
It does *not* fsync the relation.\n> \n> ... I think this will fsync the relation. No?\n> \n> > 6. Crash.\n\nWhile we're cataloging gaps, I think the middle sentence is incorrect in the\nfollowing heapam_relation_set_new_filelocator() comment:\n\n\t/*\n\t * If required, set up an init fork for an unlogged table so that it can\n\t * be correctly reinitialized on restart. Recovery may remove it while\n\t * replaying, for example, an XLOG_DBASE_CREATE* or XLOG_TBLSPC_CREATE\n\t * record. Therefore, logging is necessary even if wal_level=minimal.\n\t */\n\tif (persistence == RELPERSISTENCE_UNLOGGED)\n\t{\n\t\tAssert(rel->rd_rel->relkind == RELKIND_RELATION ||\n\t\t\t rel->rd_rel->relkind == RELKIND_MATVIEW ||\n\t\t\t rel->rd_rel->relkind == RELKIND_TOASTVALUE);\n\t\tsmgrcreate(srel, INIT_FORKNUM, false);\n\t\tlog_smgrcreate(newrlocator, INIT_FORKNUM);\n\t}\n\nXLOG_DBASE_CREATE_FILE_COPY last had that problem before fbcbc5d (2005-06)\nmade it issue a checkpoint. XLOG_DBASE_CREATE_WAL_LOG never had that problem.\nXLOG_TBLSPC_CREATE last had that problem before 97ddda8a82 (2021-08). In\ngeneral, if we reintroduced such a bug, it would affect all new rels under\nwal_level=minimal, not just init forks. Having said all that,\nlog_smgrcreate() calls are never conditional on wal_level=minimal; the above\ncode is correct.\n\n\n",
"msg_date": "Sat, 7 Oct 2023 19:22:04 -0700",
"msg_from": "Noah Misch <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Unlogged relation copy is not fsync'd"
},
{
"msg_contents": "On Fri, Sep 15, 2023 at 7:47 AM Heikki Linnakangas <[email protected]> wrote:\n> Thinking about this some more, I think this is still not 100% correct,\n> even with the patch I posted earlier:\n\nThis is marked as needing review, but that doesn't appear to be\ncorrect, because there's this comment, indicating that the patch\nrequires re-work, and there's also two emails from Noah on the thread\nproviding further feedback. So it seems this has been waiting for\nHeikki or someone else to have time to work it for the last 3 months.\n\nHence, marking RwF for now; if someone gets back to it, please reopen.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 5 Jan 2024 14:40:05 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Unlogged relation copy is not fsync'd"
}
] |
[
{
"msg_contents": "I propose to reformat the catalog lists in src/backend/catalog/Makefile \nto be more vertical, one per line. This makes it easier to keep that \nlist in sync with src/include/catalog/meson.build, and visually compare \nboth lists. Also, it's easier to read and edit in general.\n\nIn passing, I'd also copy over some relevant comments from the makefile \nto meson.build. For the hypothetical future when we delete the \nmakefiles, these comments seem worth keeping. (For fun, I tested \nwhether the comments are still true, and yes, the order still matters.)",
"msg_date": "Fri, 25 Aug 2023 15:12:51 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Format list of catalog files in makefile vertically"
},
{
"msg_contents": "Hi, \n\nOn August 25, 2023 9:12:51 AM EDT, Peter Eisentraut <[email protected]> wrote:\n>I propose to reformat the catalog lists in src/backend/catalog/Makefile to be more vertical, one per line. This makes it easier to keep that list in sync with src/include/catalog/meson.build, and visually compare both lists. Also, it's easier to read and edit in general.\n>\n>In passing, I'd also copy over some relevant comments from the makefile to meson.build. For the hypothetical future when we delete the makefiles, these comments seem worth keeping. (For fun, I tested whether the comments are still true, and yes, the order still matters.)\n\nMakes sense to me.\n\nAndres \n-- \nSent from my Android device with K-9 Mail. Please excuse my brevity.\n\n\n",
"msg_date": "Fri, 25 Aug 2023 09:25:22 -0400",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Format list of catalog files in makefile vertically"
},
{
"msg_contents": "On 2023-Aug-25, Peter Eisentraut wrote:\n\n> I propose to reformat the catalog lists in src/backend/catalog/Makefile to\n> be more vertical, one per line. This makes it easier to keep that list in\n> sync with src/include/catalog/meson.build, and visually compare both lists.\n> Also, it's easier to read and edit in general.\n\n+1\n\n> In passing, I'd also copy over some relevant comments from the makefile to\n> meson.build. For the hypothetical future when we delete the makefiles,\n> these comments seem worth keeping. (For fun, I tested whether the comments\n> are still true, and yes, the order still matters.)\n\nSure.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"La experiencia nos dice que el hombre peló millones de veces las patatas,\npero era forzoso admitir la posibilidad de que en un caso entre millones,\nlas patatas pelarían al hombre\" (Ijon Tichy)\n\n\n",
"msg_date": "Sat, 26 Aug 2023 18:38:15 +0200",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Format list of catalog files in makefile vertically"
}
] |
[
{
"msg_contents": "The cached plan for a prepared statements can get invalidated when DDL\nchanges the tables used in the query, or when search_path changes. When\nthis happens the prepared statement can still be executed, but it will\nbe replanned in the new context. This means that the prepared statement\nwill do something different e.g. in case of search_path changes it will\nselect data from a completely different table. This won't throw an\nerror, because it is considered the responsibility of the operator and\nquery writers that the query will still do the intended thing.\n\nHowever, we would throw an error if the the result of the query is of a\ndifferent type than it was before:\nERROR: cached plan must not change result type\n\nThis requirement was not documented anywhere and it\ncan thus be a surprising error to hit. But it's actually not needed for\nthis to be an error, as long as we send the correct RowDescription there\ndoes not have to be a problem for clients when the result types or\ncolumn counts change.\n\nThis patch starts to allow a prepared statement to continue to work even\nwhen the result type changes.\n\nWithout this change all clients that automatically prepare queries as a\nperformance optimization will need to handle or avoid the error somehow,\noften resulting in deallocating and re-preparing queries when its\nusually not necessary. With this change connection poolers can also\nsafely prepare the same query only once on a connection and share this\none prepared query across clients that prepared that exact same query.\n\nSome relevant previous discussions:\n[1]: https://www.postgresql.org/message-id/flat/CAB%3DJe-GQOW7kU9Hn3AqP1vhaZg_wE9Lz6F4jSp-7cm9_M6DyVA%40mail.gmail.com\n[2]: https://stackoverflow.com/questions/2783813/postgres-error-cached-plan-must-not-change-result-type\n[3]: https://stackoverflow.com/questions/42119365/how-to-avoid-cached-plan-must-not-change-result-type-error\n[4]: https://github.com/pgjdbc/pgjdbc/pull/451\n[5]: https://github.com/pgbouncer/pgbouncer/pull/845#discussion_r1305295551\n[6]: https://github.com/jackc/pgx/issues/927\n[7]: https://elixirforum.com/t/postgrex-errors-with-cached-plan-must-not-change-result-type-during-migration/51235/2\n[8]: https://github.com/rails/rails/issues/12330",
"msg_date": "Fri, 25 Aug 2023 19:57:32 +0200",
"msg_from": "Jelte Fennema <[email protected]>",
"msg_from_op": true,
"msg_subject": "Support prepared statement invalidation when result types change"
},
{
"msg_contents": "On Sat, Aug 26, 2023 at 1:58 AM Jelte Fennema <[email protected]> wrote:\n>\n> The cached plan for a prepared statements can get invalidated when DDL\n> changes the tables used in the query, or when search_path changes. When\n> this happens the prepared statement can still be executed, but it will\n> be replanned in the new context. This means that the prepared statement\n> will do something different e.g. in case of search_path changes it will\n> select data from a completely different table. This won't throw an\n> error, because it is considered the responsibility of the operator and\n> query writers that the query will still do the intended thing.\n>\n> However, we would throw an error if the the result of the query is of a\n> different type than it was before:\n> ERROR: cached plan must not change result type\n>\n> This requirement was not documented anywhere and it\n> can thus be a surprising error to hit. But it's actually not needed for\n> this to be an error, as long as we send the correct RowDescription there\n> does not have to be a problem for clients when the result types or\n> column counts change.\n>\n> This patch starts to allow a prepared statement to continue to work even\n> when the result type changes.\n>\n> Without this change all clients that automatically prepare queries as a\n> performance optimization will need to handle or avoid the error somehow,\n> often resulting in deallocating and re-preparing queries when its\n> usually not necessary. With this change connection poolers can also\n> safely prepare the same query only once on a connection and share this\n> one prepared query across clients that prepared that exact same query.\n>\n> Some relevant previous discussions:\n> [1]: https://www.postgresql.org/message-id/flat/CAB%3DJe-GQOW7kU9Hn3AqP1vhaZg_wE9Lz6F4jSp-7cm9_M6DyVA%40mail.gmail.com\n> [2]: https://stackoverflow.com/questions/2783813/postgres-error-cached-plan-must-not-change-result-type\n> [3]: https://stackoverflow.com/questions/42119365/how-to-avoid-cached-plan-must-not-change-result-type-error\n> [4]: https://github.com/pgjdbc/pgjdbc/pull/451\n> [5]: https://github.com/pgbouncer/pgbouncer/pull/845#discussion_r1305295551\n> [6]: https://github.com/jackc/pgx/issues/927\n> [7]: https://elixirforum.com/t/postgrex-errors-with-cached-plan-must-not-change-result-type-during-migration/51235/2\n> [8]: https://github.com/rails/rails/issues/12330\n\nprepared statement with no parameters, tested many cases (add column,\nchange column data type, rename column, set default, set not null), it\nworked as expected.\nWith parameters, it also works, only a tiny issue with error reporting.\n\nprepstmt2 | PREPARE prepstmt2(bigint) AS SELECT * FROM pcachetest\nWHERE q1 = $1; | {bigint} | {bigint,bigint,bigint}\nERROR: column \"q1\" does not exist at character 61\nHINT: Perhaps you meant to reference the column \"pcachetest.x1\".\nSTATEMENT: execute prepstmt2(1);\n\nI think \"character 61\" refer to \"PREPARE prepstmt2(bigint) AS SELECT *\nFROM pcachetest WHERE q1 = $1;\"\nso maybe the STATEMENT is slightly misleading.\n\n\n",
"msg_date": "Mon, 28 Aug 2023 17:27:27 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support prepared statement invalidation when result types change"
},
{
"msg_contents": "\n\nOn 25.08.2023 8:57 PM, Jelte Fennema wrote:\n> The cached plan for a prepared statements can get invalidated when DDL\n> changes the tables used in the query, or when search_path changes. When\n> this happens the prepared statement can still be executed, but it will\n> be replanned in the new context. This means that the prepared statement\n> will do something different e.g. in case of search_path changes it will\n> select data from a completely different table. This won't throw an\n> error, because it is considered the responsibility of the operator and\n> query writers that the query will still do the intended thing.\n>\n> However, we would throw an error if the the result of the query is of a\n> different type than it was before:\n> ERROR: cached plan must not change result type\n>\n> This requirement was not documented anywhere and it\n> can thus be a surprising error to hit. But it's actually not needed for\n> this to be an error, as long as we send the correct RowDescription there\n> does not have to be a problem for clients when the result types or\n> column counts change.\n>\n> This patch starts to allow a prepared statement to continue to work even\n> when the result type changes.\n>\n> Without this change all clients that automatically prepare queries as a\n> performance optimization will need to handle or avoid the error somehow,\n> often resulting in deallocating and re-preparing queries when its\n> usually not necessary. With this change connection poolers can also\n> safely prepare the same query only once on a connection and share this\n> one prepared query across clients that prepared that exact same query.\n>\n> Some relevant previous discussions:\n> [1]: https://www.postgresql.org/message-id/flat/CAB%3DJe-GQOW7kU9Hn3AqP1vhaZg_wE9Lz6F4jSp-7cm9_M6DyVA%40mail.gmail.com\n> [2]: https://stackoverflow.com/questions/2783813/postgres-error-cached-plan-must-not-change-result-type\n> [3]: https://stackoverflow.com/questions/42119365/how-to-avoid-cached-plan-must-not-change-result-type-error\n> [4]: https://github.com/pgjdbc/pgjdbc/pull/451\n> [5]: https://github.com/pgbouncer/pgbouncer/pull/845#discussion_r1305295551\n> [6]: https://github.com/jackc/pgx/issues/927\n> [7]: https://elixirforum.com/t/postgrex-errors-with-cached-plan-must-not-change-result-type-during-migration/51235/2\n> [8]: https://github.com/rails/rails/issues/12330\n\n\nThe following assignment of format is not corrects:\n\n /* Do nothing if portal won't return tuples */\n if (portal->tupDesc == NULL)\n+ {\n+ /*\n+ * For SELECT like queries we delay filling in the tupDesc \nuntil after\n+ * PortalRunUtility, because we don't know what rows an EXECUTE\n+ * command will return. Because we delay setting tupDesc, we \nalso need\n+ * to delay setting formats. 
We do this in a pretty hacky way, by\n+ * temporarily setting the portal formats to the passed in formats.\n+ * Then once we fill in tupDesc, we call PortalSetResultFormat \nagain\n+ * with portal->formats to fill in the final formats value.\n+ */\n+ if (portal->strategy == PORTAL_UTIL_SELECT)\n+ portal->formats = formats;\n return;\n\n\nbecause it is create in other memory context:\n\npostgres.c:\n /* Done storing stuff in portal's context */\n MemoryContextSwitchTo(oldContext);\n ...\n /* Get the result format codes */\n numRFormats = pq_getmsgint(input_message, 2);\n if (numRFormats > 0)\n {\n rformats = palloc_array(int16, numRFormats);\n for (int i = 0; i < numRFormats; i++)\n rformats[i] = pq_getmsgint(input_message, 2);\n }\n\n\n\nIt has to be copied as below:\n\n portal->formats = (int16 *)\n MemoryContextAlloc(portal->portalContext,\n natts * sizeof(int16));\n memcpy(portal->formats, formats, natts * sizeof(int16));\n\n\nor alternatively MemoryContextSwitchTo(oldContext) should be moved \nafter initialization of rformat\n\n\n",
"msg_date": "Mon, 28 Aug 2023 16:05:28 +0300",
"msg_from": "Konstantin Knizhnik <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support prepared statement invalidation when result types change"
},
{
"msg_contents": "On Mon, 28 Aug 2023 at 15:05, Konstantin Knizhnik <[email protected]> wrote:\n> The following assignment of format is not corrects:\n>\n> It has to be copied as below:\n>\n> portal->formats = (int16 *)\n> MemoryContextAlloc(portal->portalContext,\n> natts * sizeof(int16));\n> memcpy(portal->formats, formats, natts * sizeof(int16));\n\nI attached a new version of the patch where I now did this. But I also\nmoved the code around quite a bit, since all this tupDesc/format\ndelaying is only needed for exec_simple_query. The original changes\nactually broke some prepared statements that were using protocol level\nBind messages.",
"msg_date": "Mon, 28 Aug 2023 18:47:59 +0200",
"msg_from": "Jelte Fennema <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Support prepared statement invalidation when result types change"
},
{
"msg_contents": "On Mon, 28 Aug 2023 at 11:27, jian he <[email protected]> wrote:\n> With parameters, it also works, only a tiny issue with error reporting.\n>\n> prepstmt2 | PREPARE prepstmt2(bigint) AS SELECT * FROM pcachetest\n> WHERE q1 = $1; | {bigint} | {bigint,bigint,bigint}\n> ERROR: column \"q1\" does not exist at character 61\n> HINT: Perhaps you meant to reference the column \"pcachetest.x1\".\n> STATEMENT: execute prepstmt2(1);\n>\n> I think \"character 61\" refer to \"PREPARE prepstmt2(bigint) AS SELECT *\n> FROM pcachetest WHERE q1 = $1;\"\n> so maybe the STATEMENT is slightly misleading.\n\nCould you share the full set of commands that cause the reporting\nissue? I don't think my changes should impact this reporting, so I'm\ncurious if this is a new issue, or an already existing one.\n\n\n",
"msg_date": "Mon, 28 Aug 2023 18:51:02 +0200",
"msg_from": "Jelte Fennema <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Support prepared statement invalidation when result types change"
},
{
"msg_contents": "On Tue, Aug 29, 2023 at 12:51 AM Jelte Fennema <[email protected]> wrote:\n>\n> Could you share the full set of commands that cause the reporting\n> issue? I don't think my changes should impact this reporting, so I'm\n> curious if this is a new issue, or an already existing one.\n\nI didn't apply your v2 patch.\nfull set of commands:\n--------------------\nregression=# CREATE TEMP TABLE pcachetest AS SELECT * FROM int8_tbl;\nSELECT 5\nregression=# PREPARE prepstmt2(bigint) AS SELECT * FROM pcachetest\nWHERE q1 = $1;'\nPREPARE\nregression=# alter table pcachetest rename q1 to x;\nALTER TABLE\nregression=# EXECUTE prepstmt2(123);\n2023-08-29 17:23:59.148 CST [1382074] ERROR: column \"q1\" does not\nexist at character 61\n2023-08-29 17:23:59.148 CST [1382074] HINT: Perhaps you meant to\nreference the column \"pcachetest.q2\".\n2023-08-29 17:23:59.148 CST [1382074] STATEMENT: EXECUTE prepstmt2(123);\nERROR: column \"q1\" does not exist\nHINT: Perhaps you meant to reference the column \"pcachetest.q2\".\n--------------------------\n\n\n",
"msg_date": "Tue, 29 Aug 2023 17:29:43 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support prepared statement invalidation when result types change"
},
{
"msg_contents": "On Tue, 29 Aug 2023 at 11:29, jian he <[email protected]> wrote:\n> regression=# CREATE TEMP TABLE pcachetest AS SELECT * FROM int8_tbl;\n> SELECT 5\n> regression=# PREPARE prepstmt2(bigint) AS SELECT * FROM pcachetest\n> WHERE q1 = $1;'\n> PREPARE\n> regression=# alter table pcachetest rename q1 to x;\n> ALTER TABLE\n> regression=# EXECUTE prepstmt2(123);\n> 2023-08-29 17:23:59.148 CST [1382074] ERROR: column \"q1\" does not\n> exist at character 61\n> 2023-08-29 17:23:59.148 CST [1382074] HINT: Perhaps you meant to\n> reference the column \"pcachetest.q2\".\n> 2023-08-29 17:23:59.148 CST [1382074] STATEMENT: EXECUTE prepstmt2(123);\n> ERROR: column \"q1\" does not exist\n> HINT: Perhaps you meant to reference the column \"pcachetest.q2\".\n\nThank you for the full set of commands. In that case the issue you're\ndescribing is completely separate from this patch. The STATEMENT: part\nof the log will always show the query that was received by the server.\nThis behaviour was already present even without my patch (I double\nchecked with PG15.3).\n\n\n",
"msg_date": "Tue, 29 Aug 2023 13:19:43 +0200",
"msg_from": "Jelte Fennema <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Support prepared statement invalidation when result types change"
},
{
"msg_contents": "Another similar issue was reported on the PgBouncer prepared statement PR[1].\n\nIt's related to an issue that was already reported on the\nmailinglist[2]. It turns out that invalidation of the argument types\nis also important in some cases. The newly added 3rd patch in this\nseries addresses that issue.\n\n[1]: https://github.com/pgbouncer/pgbouncer/pull/845#discussion_r1309454695\n[2]: https://www.postgresql.org/message-id/flat/CA%2Bmi_8YAGf9qibDFTRNKgaTwaBa1OUcteKqLAxfMmKFbo3GHZg%40mail.gmail.com",
"msg_date": "Fri, 8 Sep 2023 11:54:58 +0200",
"msg_from": "Jelte Fennema <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Support prepared statement invalidation when result types change"
},
{
"msg_contents": "When running the Postgres JDBC tests with this patchset I found dumb\nmistake in my last patch where I didn't initialize the contents of\norig_params correctly. This new patchset fixes that.",
"msg_date": "Tue, 12 Sep 2023 15:17:09 +0200",
"msg_from": "Jelte Fennema <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Support prepared statement invalidation when result types change"
},
{
"msg_contents": "On Tue, Sep 12, 2023, at 10:17 AM, Jelte Fennema wrote:\n> When running the Postgres JDBC tests with this patchset I found dumb\n> mistake in my last patch where I didn't initialize the contents of\n> orig_params correctly. This new patchset fixes that.\n> \n0001:\n\nDon't you want to execute this code path only for EXECUTE command?\nPORTAL_UTIL_SELECT includes other commands such as CALL, FETCH, SHOW, and\nEXPLAIN. If so, check if commandTag is CMDTAG_EXECUTE.\n\nRegarding tupDesc, you don't need to call FreeTupleDesc instead you can modify\nPortalStart as\n\n case PORTAL_UTIL_SELECT:\n\n /*\n * We don't set snapshot here, because PortalRunUtility will\n * take care of it if needed.\n */\n {\n PlannedStmt *pstmt = PortalGetPrimaryStmt(portal);\n\n Assert(pstmt->commandType == CMD_UTILITY);\n /*\n * tupDesc will be filled by FillPortalStore later because\n * it might change due to replanning when ExecuteQuery calls\n * GetCachedPlan.\n */\n if (portal->commandTag != CMDTAG_EXECUTE)\n portal->tupDesc = UtilityTupleDescriptor(pstmt->utilityStmt);\n }\n\nRegarding the commit message, ...if the the result... should be fixed. The\nsentence \"it's actually not needed...\" could be \"It doesn't need to be an error\nas long as it sends the RowDescription...\". The sentence \"This patch starts to\nallow a prepared ...\" could be \"This patch allows a prepared ...\".\n\n0002:\n\nYou should remove this comment because it refers to the option you are\nremoving.\n\n- plan->cursor_options,\n- false); /* not fixed result */\n+ plan->cursor_options); /* not fixed result */\n\nYou should also remove the sentence that refers to fixed_result in\nCompleteCachedPlan.\n\n* cursor_options: options bitmask to pass to planner\n* fixed_result: true to disallow future changes in query's result tupdesc\n*/\nvoid\nCompleteCachedPlan(CachedPlanSource *plansource,\n List *querytree_list,\n MemoryContext querytree_context,\n\n0003:\n\nYou should initialize the new parameters (orig_param_types and orig_num_params)\nin CreateCachedPlan. One suggestion is to move the following code to\nCompleteCachedPlan because plansource->param_types are assigned there.\n\n@@ -108,6 +108,10 @@ PrepareQuery(ParseState *pstate, PrepareStmt *stmt,\n\n argtypes[i++] = toid;\n }\n+\n+ plansource->orig_num_params = nargs;\n+ plansource->orig_param_types = MemoryContextAlloc(plansource->context, nargs * sizeof(Oid));\n+ memcpy(plansource->orig_param_types, argtypes, nargs * sizeof(Oid));\n }\n\nThis comment is confusing. Since the new function\n(GetCachedPlanFromRevalidated) contains almost all code from GetCachedPlan, its\ncontent is the same as the *previous* GetCachedPlan function. You could expand\nthis comment a bit to make it clear that it contains the logic to decide\nbetween generic x custom plan. I don't like the function name but have only a\nnot-so-good suggestion: GetCachedPlanAfterRevalidate. I also don't like the\nrevalidationResult as a variable name. Why don't you keep qlist? Or use a name\nnear query-tree list (query_list? qtlist?). s/te caller/the caller/\n\n+ * GetCachedPlanFromRevalidated: is the same as get GetCachedPlan, but requires\n+ * te caller to first revalidate the query. This is needed for callers that\n+ * need to use the revalidated plan to generate boundParams.\n+ */\n+CachedPlan *\n+GetCachedPlanFromRevalidated(CachedPlanSource *plansource,\n+ ParamListInfo boundParams,\n+ ResourceOwner owner,\n+ QueryEnvironment *queryEnv,\n+ List *revalidationResult)\n\n\nAre these names accurate? 
The original are the current ones; new ones are\n\"runtime\" data. It would be good to explain why a new array is required.\n\n Oid *param_types; /* array of parameter type OIDs, or NULL */\n int num_params; /* length of param_types array */\n+ Oid *orig_param_types; /* array of original parameter type OIDs,\n+ * or NULL */\n+ int orig_num_params; /* length of orig_param_types array */\n\nYou should expand the commit message a bit. Explain this feature. Inform the\nbehavior change.\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/",
"msg_date": "Wed, 13 Sep 2023 18:38:08 -0300",
"msg_from": "\"Euler Taveira\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support prepared statement invalidation when result types change"
},
{
"msg_contents": "Hi,\n\n\n>\n> This requirement was not documented anywhere and it\n> can thus be a surprising error to hit. But it's actually not needed for\n> this to be an error, as long as we send the correct RowDescription there\n> does not have to be a problem for clients when the result types or\n> column counts change.\n>\n\nWhat if a client has *cached* an old version of RowDescription\nand the server changed it to something new and sent resultdata\nwith the new RowDescription. Will the client still be able to work\nexpectly?\n\nI don't hope my concern is right since I didn't go through any of\nthe drivers in detail, but I hope my concern is expressed correctly.\n\n\n-- \nBest Regards\nAndy Fan\n\nHi, \nThis requirement was not documented anywhere and it\ncan thus be a surprising error to hit. But it's actually not needed for\nthis to be an error, as long as we send the correct RowDescription there\ndoes not have to be a problem for clients when the result types or\ncolumn counts change.What if a client has *cached* an old version of RowDescriptionand the server changed it to something new and sent resultdatawith the new RowDescription. Will the client still be able to workexpectly? I don't hope my concern is right since I didn't go through any ofthe drivers in detail, but I hope my concern is expressed correctly. -- Best RegardsAndy Fan",
"msg_date": "Fri, 15 Sep 2023 07:41:46 +0800",
"msg_from": "Andy Fan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support prepared statement invalidation when result types change"
},
{
"msg_contents": "@Euler thanks for the review. I addressed the feedback.\n\nOn Fri, 15 Sept 2023 at 01:41, Andy Fan <[email protected]> wrote:\n> What if a client has *cached* an old version of RowDescription\n> and the server changed it to something new and sent resultdata\n> with the new RowDescription. Will the client still be able to work\n> expectly?\n\nIt depends a bit on the exact change. For instance a column being\nadded to the end of the resultdata shouldn't be a problem. And that is\nactually quite a common case for this issue:\n1. PREPARE p as (SELECT * FROM t);\n2. ALTER TABLE t ADD COLUMN ...\n3. EXECUTE p\n\nBut type changes of existing columns might cause issues when the\nRowDescription is cached. But such changes also cause issues now.\nCurrently the prepared statement becomes unusable when this happens\n(returning errors every time). With this patch it's at least possible\nto have prepared statements continue working in many cases.\nFurthermore caching RowDescription is also not super useful, most\nclients request it every time because it does not require an extra\nround trip, so there's almost no overhead in requesting it.\n\nClients caching ParameterDescription seems more useful because\nfetching the parameter types does require an extra round trip. So\ncaching it could cause errors with 0003. But right now if the argument\ntypes need to change it gives an error every time when executing the\nprepared statement. So I believe 0003 is still an improvement over the\nstatus quo, because there are many cases where the client knows that\nthe types might have changed and it thus needs to re-fetch the\nParameterDescription: the most common case is changing the\nsearch_path. And there's also cases where even a cached\nParamaterDescription will work fine: e.g. the type is changed but the\nencoding stays the same (e.g. drop + create an enum, or text/varchar,\nor the text encoding of int and bigint)",
"msg_date": "Mon, 18 Sep 2023 13:30:48 +0200",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support prepared statement invalidation when result types change"
},
{
"msg_contents": "On Mon, 18 Sept 2023 at 18:01, Jelte Fennema-Nio <[email protected]> wrote:\n>\n> @Euler thanks for the review. I addressed the feedback.\n>\n> On Fri, 15 Sept 2023 at 01:41, Andy Fan <[email protected]> wrote:\n> > What if a client has *cached* an old version of RowDescription\n> > and the server changed it to something new and sent resultdata\n> > with the new RowDescription. Will the client still be able to work\n> > expectly?\n>\n> It depends a bit on the exact change. For instance a column being\n> added to the end of the resultdata shouldn't be a problem. And that is\n> actually quite a common case for this issue:\n> 1. PREPARE p as (SELECT * FROM t);\n> 2. ALTER TABLE t ADD COLUMN ...\n> 3. EXECUTE p\n>\n> But type changes of existing columns might cause issues when the\n> RowDescription is cached. But such changes also cause issues now.\n> Currently the prepared statement becomes unusable when this happens\n> (returning errors every time). With this patch it's at least possible\n> to have prepared statements continue working in many cases.\n> Furthermore caching RowDescription is also not super useful, most\n> clients request it every time because it does not require an extra\n> round trip, so there's almost no overhead in requesting it.\n>\n> Clients caching ParameterDescription seems more useful because\n> fetching the parameter types does require an extra round trip. So\n> caching it could cause errors with 0003. But right now if the argument\n> types need to change it gives an error every time when executing the\n> prepared statement. So I believe 0003 is still an improvement over the\n> status quo, because there are many cases where the client knows that\n> the types might have changed and it thus needs to re-fetch the\n> ParameterDescription: the most common case is changing the\n> search_path. And there's also cases where even a cached\n> ParamaterDescription will work fine: e.g. the type is changed but the\n> encoding stays the same (e.g. drop + create an enum, or text/varchar,\n> or the text encoding of int and bigint)\n\nOne of the test has aborted in CFBot at [1] with:\n[05:26:16.214] Core was generated by `postgres: postgres\nregression_pg_stat_statements [local] EXECUTE '.\n[05:26:16.214] Program terminated with signal SIGABRT, Aborted.\n[05:26:16.214] #0 __GI_raise (sig=sig@entry=6) at\n../sysdeps/unix/sysv/linux/raise.c:50\n[05:26:16.214] Download failed: Invalid argument. 
Continuing without\nsource file ./signal/../sysdeps/unix/sysv/linux/raise.c.\n[05:26:16.392]\n[05:26:16.392] Thread 1 (Thread 0x7fbe1d997a40 (LWP 28738)):\n[05:26:16.392] #0 __GI_raise (sig=sig@entry=6) at\n../sysdeps/unix/sysv/linux/raise.c:50\n....\n....\n[05:26:36.911] #5 0x000055c5aa523e71 in RevalidateCachedQuery\n(plansource=0x55c5ac811cf0, queryEnv=queryEnv@entry=0x0) at\n../src/backend/utils/cache/plancache.c:730\n[05:26:36.911] num_params = 0\n[05:26:36.911] param_types = 0x55c5ac860438\n[05:26:36.911] snapshot_set = false\n[05:26:36.911] rawtree = 0x55c5ac859f08\n[05:26:36.911] tlist = <optimized out>\n[05:26:36.911] qlist = <optimized out>\n[05:26:36.911] resultDesc = <optimized out>\n[05:26:36.911] querytree_context = <optimized out>\n[05:26:36.911] oldcxt = <optimized out>\n[05:26:36.911] #6 0x000055c5a9de0262 in ExplainExecuteQuery\n(execstmt=0x55c5ac6d9d38, into=0x0, es=0x55c5ac859648,\nqueryString=0x55c5ac6d91e0 \"EXPLAIN (VERBOSE, COSTS OFF) EXECUTE\nst6;\", params=0x0, queryEnv=0x0) at\n../src/backend/commands/prepare.c:594\n[05:26:36.911] entry = 0x55c5ac839ba8\n[05:26:36.911] query_string = <optimized out>\n[05:26:36.911] cplan = <optimized out>\n[05:26:36.911] plan_list = <optimized out>\n[05:26:36.911] p = <optimized out>\n[05:26:36.911] paramLI = 0x0\n[05:26:36.911] estate = 0x0\n[05:26:36.911] planstart = {ticks = <optimized out>}\n[05:26:36.911] planduration = {ticks = 1103806595203}\n[05:26:36.911] bufusage_start = {shared_blks_hit = 1,\nshared_blks_read = 16443, shared_blks_dirtied = 16443,\nshared_blks_written = 8, local_blks_hit = 94307489783240,\nlocal_blks_read = 94307451894117, local_blks_dirtied = 94307489783240,\nlocal_blks_written = 140727004487184, temp_blks_read = 0,\ntemp_blks_written = 94307489783416, shared_blk_read_time = {ticks =\n0}, shared_blk_write_time = {ticks = 94307489780192},\nlocal_blk_read_time = {ticks = 0}, local_blk_write_time = {ticks =\n94307491025040}, temp_blk_read_time = {ticks = 0}, temp_blk_write_time\n= {ticks = 0}}\n[05:26:36.911] bufusage = {shared_blks_hit = 140727004486192,\nshared_blks_read = 140061866319832, shared_blks_dirtied = 8,\nshared_blks_written = 94307447196988, local_blks_hit = 34359738376,\nlocal_blks_read = 94307489783240, local_blks_dirtied = 70622147264512,\nlocal_blks_written = 94307491357144, temp_blks_read = 140061866319832,\ntemp_blks_written = 94307489783240, shared_blk_read_time = {ticks =\n140727004486192}, shared_blk_write_time = {ticks = 140061866319832},\nlocal_blk_read_time = {ticks = 8}, local_blk_write_time = {ticks =\n94307489783240}, temp_blk_read_time = {ticks = 0}, temp_blk_write_time\n= {ticks = 94307447197163}}\n[05:26:36.911] revalidationResult = <optimized out>\n[05:26:36.911] #7 0x000055c5a9daa387 in ExplainOneUtility\n(utilityStmt=0x55c5ac6d9d38, into=into@entry=0x0,\nes=es@entry=0x55c5ac859648,\nqueryString=queryString@entry=0x55c5ac6d91e0 \"EXPLAIN (VERBOSE, COSTS\nOFF) EXECUTE st6;\", params=params@entry=0x0,\nqueryEnv=queryEnv@entry=0x0) at ../src/backend/commands/explain.c:495\n[05:26:36.911] __func__ = \"ExplainOneUtility\"\n\n[1] - https://cirrus-ci.com/task/5770112389611520?logs=cores#L71\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Sun, 7 Jan 2024 12:25:24 +0530",
"msg_from": "vignesh C <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support prepared statement invalidation when result types change"
},
{
"msg_contents": "On Mon, Sep 18, 2023 at 1:31 PM Jelte Fennema-Nio <[email protected]> wrote:\n\n> Furthermore caching RowDescription is also not super useful, most\n> clients request it every time because it does not require an extra\n> round trip, so there's almost no overhead in requesting it.\n\nJust to point out, FWIW, that the .NET Npgsql driver does indeed cache\nRowDescriptions... The whole point of preparation is to optimize things as\nmuch as possible for repeated execution of the query; I get that the value\nthere is much lower than e.g. doing another network roundtrip, but that's\nstill extra work that's better off being cut if it can be.\n\nOn Mon, Sep 18, 2023 at 1:31 PM Jelte Fennema-Nio <[email protected]> wrote:> Furthermore caching RowDescription is also not super useful, most> clients request it every time because it does not require an extra> round trip, so there's almost no overhead in requesting it.Just to point out, FWIW, that the .NET Npgsql driver does indeed cache RowDescriptions... The whole point of preparation is to optimize things as much as possible for repeated execution of the query; I get that the value there is much lower than e.g. doing another network roundtrip, but that's still extra work that's better off being cut if it can be.",
"msg_date": "Sun, 7 Jan 2024 09:16:48 +0100",
"msg_from": "Shay Rojansky <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support prepared statement invalidation when result types change"
},
{
"msg_contents": "On Sun, 7 Jan 2024 at 07:55, vignesh C <[email protected]> wrote:\n> One of the test has aborted in CFBot at [1] with:\n\nRebased the patchset and removed patch 0003 since it was causing the\nCI issue reported by vignesh and it seems much less useful and more\ncontroversial to me anyway (due to the extra required roundtrip).\n\n\nOn Sun, 7 Jan 2024 at 09:17, Shay Rojansky <[email protected]> wrote:\n> Just to point out, FWIW, that the .NET Npgsql driver does indeed cache RowDescriptions... The whole point of preparation is to optimize things as much as possible for repeated execution of the query; I get that the value there is much lower than e.g. doing another network roundtrip, but that's still extra work that's better off being cut if it can be.\n\nHmm, interesting. I totally agree that it's always good to do less\nwhen possible. The problem is that in the face of server side prepared\nstatement invalidations due to DDL changes to the table or search path\nchanges, the row types might change. Or the server needs to constantly\nthrow an error, like it does now, but that seems worse imho.\n\nI'm wondering though if we can create a middleground, where a client\ncan still cache the RowDescription client side when no DDL or\nsearch_patch changes are happening. But still tell the client about a\nnew RowDescription when they do happen.\n\nThe only way of doing that I can think of is changing the Postgres\nprotocol in a way similar to this: Allow Execute to return a\nRowDescription too, but have the server only do so when the previously\nreceived RowDescription for this prepared statement is now invalid.\n\nThis would definitely require some additional tracking at PgBouncer to\nmake it work, i.e. per client and per server it should now keep track\nof the last RowDescription for each prepared statement. But that's\ndefinitely something we could do.\n\nThis would make this patch much more involved though, as now it would\nsuddenly involve an actual protocol change, and that basically depends\non this patch moving forward[1].\n\n[1]: https://www.postgresql.org/message-id/flat/CAGECzQTg2hcmb5GaU53uuWcdC7gCNJFLL6mnW0WNhWHgq9UTgw@mail.gmail.com",
"msg_date": "Wed, 3 Apr 2024 12:48:02 +0200",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support prepared statement invalidation when result types change"
},
{
"msg_contents": "Jelte Fennema <[email protected]> writes:\n> The cached plan for a prepared statements can get invalidated when DDL\n> changes the tables used in the query, or when search_path changes.\n> ...\n> However, we would throw an error if the the result of the query is of a\n> different type than it was before:\n> ERROR: cached plan must not change result type\n\nYes, this is intentional.\n\n> This patch starts to allow a prepared statement to continue to work even\n> when the result type changes.\n\nWhat this is is a wire protocol break. What if the client has\npreviously done a Describe Statement on the prepared statement?\nWe have no mechanism for notifying it that that information is\nnow falsified. The error is thrown to prevent us from getting\ninto a situation where we'd need to do that.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 24 Jul 2024 11:39:56 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support prepared statement invalidation when result types change"
},
{
"msg_contents": "On Wed, 24 Jul 2024 at 17:39, Tom Lane <[email protected]> wrote:\n> > This patch starts to allow a prepared statement to continue to work even\n> > when the result type changes.\n>\n> What this is is a wire protocol break.\n\nYeah that's what I realised as well in my latest email. I withdrew\nthis patch from the commitfest now to reflect that. Until we get the\nlogic for protocol bumps in:\nhttps://www.postgresql.org/message-id/flat/CAGECzQQPQO9K1YeBKe%2BE8yfpeG15cZGd3bAHexJ%2B6dpLP-6jWw%40mail.gmail.com#2386179bc970ebaf1786501f687a7bb2\n\n> What if the client has\n> previously done a Describe Statement on the prepared statement?\n> We have no mechanism for notifying it that that information is\n> now falsified. The error is thrown to prevent us from getting\n> into a situation where we'd need to do that.\n\nHowever, this makes me think of an intermediary solution. In some\nsense it's only really a protocol break if the result type changes\nbetween the last Describe and the current Execute. So would it be okay\nif a Describe triggers the proposed invalidation?\n\n\n",
"msg_date": "Thu, 25 Jul 2024 10:30:09 +0200",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support prepared statement invalidation when result types change"
}
] |
[
{
"msg_contents": "Hi Hackers,\r\n\r\nWhen I tried to select a big amount of rows, psql complains a error \"Cannot add cell to table content: total cell count of 905032704 exceeded.\"\r\n\r\nHere are the reproduce steps:\r\n```\r\ninterma=# select version();\r\n version\r\n-----------------------------------------------------------------------------------------\r\n PostgreSQL 12.13 on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit\r\n(1 row)\r\n\r\ninterma=# create table t26(a int,b int,c int,d int,e int,f int,g int,h int,i int,j int,k int,l int,m int,n int,o int,p int,q int,r int,s int,t int ,u int,v int,w int,x int,y int,z int);\r\nCREATE TABLE\r\ninterma=# insert into t26 select generate_series(1,200000000);\r\nINSERT 0 200000000\r\ninterma=# select * from t26;\r\nCannot add cell to table content: total cell count of 905032704 exceeded.\r\n```\r\n\r\nI checked the related code, and root cause is clear:\r\n```\r\n// in printTableAddCell()\r\nif (content->cellsadded >= content->ncolumns * content->nrows)\r\n report this error and exit\r\n\r\n// cellsadded is long type, but ncolumns and nrows are int\r\n// so, it's possible overflow the int value here.\r\n\r\n// using a test program to verify:\r\nint rows = 200000000;\r\nint cols = 26;\r\nprintf(\"%d*%d = %d\\n\", rows,cols, rows*cols);\r\n\r\noutput:\r\n 2,0000,0000*26 = 9,0503,2704 // overflow and be truncated into int value here\r\n```\r\n\r\nBased on it, I think it's a bug. We should use long for ncolumns and nrows and give a more obvious error message here.\r\n\r\nMy version is 12.13, and I think the latest code also exists this issue: issue: https://github.com/postgres/postgres/blob/1a4fd77db85abac63e178506335aee74625f6499/src/fe_utils/print.c#L3259\r\n\r\nAny thoughts? or some other hidden reasons?\r\nThanks.\r\n\r\n\r\n\n\n\n\n\n\n\n\r\nHi Hackers,\n\n\n\n\r\nWhen I tried to select a big amount of rows, psql complains a error \"Cannot add cell to table content: total cell count of 905032704 exceeded.\"\n\n\n\n\r\nHere are the reproduce steps:\n\r\n```\n\r\ninterma=# select version();\r\n version\n-----------------------------------------------------------------------------------------\n PostgreSQL 12.13 on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit\n(1 row)\n\n\ninterma=# create table t26(a int,b int,c int,d int,e int,f int,g int,h int,i int,j int,k int,l int,m int,n int,o int,p int,q int,r int,s int,t int ,u int,v int,w int,x int,y int,z int);\nCREATE TABLE\ninterma=# insert into t26 select generate_series(1,200000000);\nINSERT 0 200000000\ninterma=# select * from t26;\r\nCannot add cell to table content: total cell count of 905032704 exceeded.\n\n\r\n```\n\n\n\n\r\nI checked the related code, and root cause is clear:\n\r\n```\n\r\n// in printTableAddCell()\n\r\nif (content->cellsadded >= content->ncolumns * content->nrows)\n\n\n report this error and exit\n\n\n\n\r\n// cellsadded is long type, but ncolumns and nrows\r\n are int\n\n//\r\n so, it's possible overflow the int value here.\n\n\n\n\n//\r\n using a test program to verify:\n\nint\r\n rows = 200000000;\r\nint cols = 26;\nprintf(\"%d*%d = %d\\n\", rows,cols, rows*cols);\n\n\noutput:\r\n 2,0000,0000*26 = 9,0503,2704 // overflow and be truncated into int value here\n\r\n```\n\n\n\n\r\nBased on it, I think it's a bug. 
We should use long for ncolumns\r\n and nrows and give a more obvious error message here.\n\n\n\n\r\nMy version is 12.13, and I think the latest code also exists this issue: issue: https://github.com/postgres/postgres/blob/1a4fd77db85abac63e178506335aee74625f6499/src/fe_utils/print.c#L3259\n\n\n\n\r\nAny thoughts? or some other hidden reasons?\n\r\nThanks.",
"msg_date": "Sat, 26 Aug 2023 03:29:44 +0000",
"msg_from": "Hongxu Ma <[email protected]>",
"msg_from_op": true,
"msg_subject": "PSQL error: total cell count of XXX exceeded"
},
{
"msg_contents": "On Friday, August 25, 2023, Hongxu Ma <[email protected]> wrote:\n>\n>\n> When I tried to select a big amount of rows, psql complains a error \"Cannot\n> add cell to table content: total cell count of 905032704 exceeded.\"\n>\n> We should use long for ncolumns and nrows and give a more obvious error\n> message here.\n>\n> Any thoughts? or some other hidden reasons?\n>\n\n9 millions cells seems more than realistic a limit for a psql query result\noutput. In any case it isn’t a bug, the code demonstrates that fact by\nproducing an explicit error.\n\nI wouldn’t be adverse to an improved error message, and possibly\ndocumenting said limit.\n\nDavid J.\n\nOn Friday, August 25, 2023, Hongxu Ma <[email protected]> wrote:\n\n\n\n\nWhen I tried to select a big amount of rows, psql complains a error \"Cannot add cell to table content: total cell count of 905032704 exceeded.\"\n We should use long for ncolumns\n and nrows and give a more obvious error message here.\n\n\n\nAny thoughts? or some other hidden reasons?9 millions cells seems more than realistic a limit for a psql query result output. In any case it isn’t a bug, the code demonstrates that fact by producing an explicit error.I wouldn’t be adverse to an improved error message, and possibly documenting said limit.David J.",
"msg_date": "Fri, 25 Aug 2023 21:09:28 -0700",
"msg_from": "\"David G. Johnston\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PSQL error: total cell count of XXX exceeded"
},
{
"msg_contents": "Thank you David.\n\n From the code logic, I don't think this check is meant to check the limit:\nIf it enters the double-loop (cont.nrows * cont.ncolumns) in printQuery(), the check should be always false (except overflow happened). So, if want to check the limit, we could have done this check before the double-loop: just checking PGresult and reports error earlier.\n\n> I wouldn’t be adverse to an improved error message, and possibly documenting said limit.\n\nAgreed with you, current error message may even report a negative value, it's very confusing for user. It's better to introduce a limit here. Or using a bigger integer type (e.g. long) for them, but it's also have the theoretical upbound.\n\nThanks.\n\n________________________________\nFrom: David G. Johnston <[email protected]>\nSent: Saturday, August 26, 2023 12:09\nTo: Hongxu Ma <[email protected]>\nCc: PostgreSQL Hackers <[email protected]>\nSubject: Re: PSQL error: total cell count of XXX exceeded\n\nOn Friday, August 25, 2023, Hongxu Ma <[email protected]<mailto:[email protected]>> wrote:\n\nWhen I tried to select a big amount of rows, psql complains a error \"Cannot add cell to table content: total cell count of 905032704 exceeded.\"\n\n We should use long for ncolumns and nrows and give a more obvious error message here.\n\nAny thoughts? or some other hidden reasons?\n\n9 millions cells seems more than realistic a limit for a psql query result output. In any case it isn’t a bug, the code demonstrates that fact by producing an explicit error.\n\nI wouldn’t be adverse to an improved error message, and possibly documenting said limit.\n\nDavid J.\n\n\n\n\n\n\n\n\n\nThank you David.\n\n\n\n\n From the code logic, I don't think this check is meant to check the limit:\n\nIf it enters the double-loop (cont.nrows * cont.ncolumns) in printQuery(), the check should be always false (except overflow happened). So, if want to check the\n limit, we could have done this check before the double-loop: just checking\nPGresult and reports error earlier.\n\n\n\n\n> I\n wouldn’t be adverse to an improved error message, and possibly documenting said limit.\n\n\n\n\nAgreed with you, current error message may even report a negative value, it's very confusing for user. It's better to introduce a limit here. Or using a bigger integer type (e.g.\n long) for them, but it's also have the theoretical upbound.\n\n\n\n\nThanks.\n\n\n\n\nFrom: David G. Johnston <[email protected]>\nSent: Saturday, August 26, 2023 12:09\nTo: Hongxu Ma <[email protected]>\nCc: PostgreSQL Hackers <[email protected]>\nSubject: Re: PSQL error: total cell count of XXX exceeded\n \n\nOn Friday, August 25, 2023, Hongxu Ma <[email protected]> wrote:\n\n\n\n\n\n\nWhen I tried to select a big amount of rows, psql complains a error \"Cannot add cell to table content: total cell count of 905032704 exceeded.\"\n\n\n\n\n We should use long for ncolumns and nrows and give a more obvious error message here.\n\n\n\n\n\nAny thoughts? or some other hidden reasons?\n\n\n\n\n\n\n9 millions cells seems more than realistic a limit for a psql query result output. In any case it isn’t a bug, the code demonstrates that fact by producing an explicit error.\n\n\nI wouldn’t be adverse to an improved error message, and possibly documenting said limit.\n\n\nDavid J.",
"msg_date": "Sat, 26 Aug 2023 11:19:03 +0000",
"msg_from": "Hongxu Ma <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PSQL error: total cell count of XXX exceeded"
},
{
"msg_contents": "I created a patch to fix it.\nReally appreciate to anyone can help to review it.\nThanks.\n\n________________________________\nFrom: Hongxu Ma <[email protected]>\nSent: Saturday, August 26, 2023 19:19\nTo: David G. Johnston <[email protected]>\nCc: PostgreSQL Hackers <[email protected]>\nSubject: Re: PSQL error: total cell count of XXX exceeded\n\nThank you David.\n\n From the code logic, I don't think this check is meant to check the limit:\nIf it enters the double-loop (cont.nrows * cont.ncolumns) in printQuery(), the check should be always false (except overflow happened). So, if want to check the limit, we could have done this check before the double-loop: just checking PGresult and reports error earlier.\n\n> I wouldn’t be adverse to an improved error message, and possibly documenting said limit.\n\nAgreed with you, current error message may even report a negative value, it's very confusing for user. It's better to introduce a limit here. Or using a bigger integer type (e.g. long) for them, but it's also have the theoretical upbound.\n\nThanks.\n\n________________________________\nFrom: David G. Johnston <[email protected]>\nSent: Saturday, August 26, 2023 12:09\nTo: Hongxu Ma <[email protected]>\nCc: PostgreSQL Hackers <[email protected]>\nSubject: Re: PSQL error: total cell count of XXX exceeded\n\nOn Friday, August 25, 2023, Hongxu Ma <[email protected]<mailto:[email protected]>> wrote:\n\nWhen I tried to select a big amount of rows, psql complains a error \"Cannot add cell to table content: total cell count of 905032704 exceeded.\"\n\n We should use long for ncolumns and nrows and give a more obvious error message here.\n\nAny thoughts? or some other hidden reasons?\n\n9 millions cells seems more than realistic a limit for a psql query result output. In any case it isn’t a bug, the code demonstrates that fact by producing an explicit error.\n\nI wouldn’t be adverse to an improved error message, and possibly documenting said limit.\n\nDavid J.",
"msg_date": "Mon, 11 Sep 2023 06:50:39 +0000",
"msg_from": "Hongxu Ma <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PSQL error: total cell count of XXX exceeded"
},
{
"msg_contents": "On Mon, 11 Sept 2023 at 08:51, Hongxu Ma <[email protected]> wrote:\n>\n> I created a patch to fix it.\n> Really appreciate to anyone can help to review it.\n> Thanks.\n\nI think \"product\" as a variable name isn't very descriptive. Let's\ncall it total_cells (or something similar instead).\n\nOther than that I think it's a good change. content->cellsadded is\nalso a long, So I agree that I don't think the limit of int cells was\nintended here.\n\n\n",
"msg_date": "Mon, 11 Sep 2023 09:04:22 +0200",
"msg_from": "Jelte Fennema <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PSQL error: total cell count of XXX exceeded"
},
{
"msg_contents": "Thank you for your advice, Jelte.\nI have refactored my code, please see the attached patch. (and I put it into https://commitfest.postgresql.org/45/ for trace)\n\nThanks.\n\n________________________________\nFrom: Jelte Fennema <[email protected]>\nSent: Monday, September 11, 2023 15:04\nTo: Hongxu Ma <[email protected]>\nCc: David G. Johnston <[email protected]>; PostgreSQL Hackers <[email protected]>\nSubject: Re: PSQL error: total cell count of XXX exceeded\n\nOn Mon, 11 Sept 2023 at 08:51, Hongxu Ma <[email protected]> wrote:\n>\n> I created a patch to fix it.\n> Really appreciate to anyone can help to review it.\n> Thanks.\n\nI think \"product\" as a variable name isn't very descriptive. Let's\ncall it total_cells (or something similar instead).\n\nOther than that I think it's a good change. content->cellsadded is\nalso a long, So I agree that I don't think the limit of int cells was\nintended here.",
"msg_date": "Tue, 12 Sep 2023 02:39:55 +0000",
"msg_from": "Hongxu Ma <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PSQL error: total cell count of XXX exceeded"
},
{
"msg_contents": "On Tue, Sep 12, 2023 at 02:39:55AM +0000, Hongxu Ma wrote:\n> Thank you for your advice, Jelte.\n> I have refactored my code, please see the attached patch. (and I put\n> it into https://commitfest.postgresql.org/45/ for trace)\n\n {\n+ long total_cells;\n\nlong is 4 bytes on Windows, and 8 bytes basically elsewhere. So you\nwould still have the same problem on Windows, no?\n--\nMichael",
"msg_date": "Tue, 12 Sep 2023 12:55:44 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PSQL error: total cell count of XXX exceeded"
},
{
"msg_contents": "Michael Paquier <[email protected]> writes:\n> On Tue, Sep 12, 2023 at 02:39:55AM +0000, Hongxu Ma wrote:\n> + long total_cells;\n\n> long is 4 bytes on Windows, and 8 bytes basically elsewhere. So you\n> would still have the same problem on Windows, no?\n\nMore to the point: what about the multiplication in printTableInit?\nThe cat's been out of the bag for quite some time before we get to\nprintTableAddCell.\n\nI'm more than a bit skeptical about trying to do something about this,\nsimply because this range of query result sizes is far past what is\npractical. The OP clearly hasn't tested his patch on actually\noverflowing query results, and I don't care to either.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 12 Sep 2023 00:19:53 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PSQL error: total cell count of XXX exceeded"
},
{
"msg_contents": "Thanks for pointing that, I did miss some other \"ncols * nrows\" places. Uploaded v3 patch to fix them.\r\n\r\nAs for the Windows, I didn't test it before but I think it should also have the issue (and happens more possible since `cellsadded` is also a long type).\r\nMy fix idea is simple: define a common long64 type for it.\r\nI referred MSDN: only `LONGLONG` and `LONG64` are 64 bytes. And I assume Postgres should already have a similar type, but only found `typedef long int int64` in src/include/c.h, looks it's not a proper choose.\r\n@Michael Paquier<mailto:[email protected]>, could you help to give some advices here (which type should be used? or should define a new one?). Thank you very much.\r\n\r\n________________________________\r\nFrom: Tom Lane <[email protected]>\r\nSent: Tuesday, September 12, 2023 12:19\r\nTo: Michael Paquier <[email protected]>\r\nCc: Hongxu Ma <[email protected]>; Jelte Fennema <[email protected]>; David G. Johnston <[email protected]>; PostgreSQL Hackers <[email protected]>\r\nSubject: Re: PSQL error: total cell count of XXX exceeded\r\n\r\nMichael Paquier <[email protected]> writes:\r\n> On Tue, Sep 12, 2023 at 02:39:55AM +0000, Hongxu Ma wrote:\r\n> + long total_cells;\r\n\r\n> long is 4 bytes on Windows, and 8 bytes basically elsewhere. So you\r\n> would still have the same problem on Windows, no?\r\n\r\nMore to the point: what about the multiplication in printTableInit?\r\nThe cat's been out of the bag for quite some time before we get to\r\nprintTableAddCell.\r\n\r\nI'm more than a bit skeptical about trying to do something about this,\r\nsimply because this range of query result sizes is far past what is\r\npractical. The OP clearly hasn't tested his patch on actually\r\noverflowing query results, and I don't care to either.\r\n\r\n regards, tom lane",
"msg_date": "Wed, 13 Sep 2023 02:22:48 +0000",
"msg_from": "Hongxu Ma <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PSQL error: total cell count of XXX exceeded"
},
{
"msg_contents": "After double check, looks `int64` of src/include/c.h is the proper type for it.\r\nUploaded the v4 patch to fix it.\r\nThanks.\r\n\r\n________________________________\r\nFrom: Hongxu Ma <[email protected]>\r\nSent: Wednesday, September 13, 2023 10:22\r\nTo: Tom Lane <[email protected]>; Michael Paquier <[email protected]>\r\nCc: PostgreSQL Hackers <[email protected]>\r\nSubject: Re: PSQL error: total cell count of XXX exceeded\r\n\r\nThanks for pointing that, I did miss some other \"ncols * nrows\" places. Uploaded v3 patch to fix them.\r\n\r\nAs for the Windows, I didn't test it before but I think it should also have the issue (and happens more possible since `cellsadded` is also a long type).\r\nMy fix idea is simple: define a common long64 type for it.\r\nI referred MSDN: only `LONGLONG` and `LONG64` are 64 bytes. And I assume Postgres should already have a similar type, but only found `typedef long int int64` in src/include/c.h, looks it's not a proper choose.\r\n@Michael Paquier<mailto:[email protected]>, could you help to give some advices here (which type should be used? or should define a new one?). Thank you very much.\r\n\r\n________________________________\r\nFrom: Tom Lane <[email protected]>\r\nSent: Tuesday, September 12, 2023 12:19\r\nTo: Michael Paquier <[email protected]>\r\nCc: Hongxu Ma <[email protected]>; Jelte Fennema <[email protected]>; David G. Johnston <[email protected]>; PostgreSQL Hackers <[email protected]>\r\nSubject: Re: PSQL error: total cell count of XXX exceeded\r\n\r\nMichael Paquier <[email protected]> writes:\r\n> On Tue, Sep 12, 2023 at 02:39:55AM +0000, Hongxu Ma wrote:\r\n> + long total_cells;\r\n\r\n> long is 4 bytes on Windows, and 8 bytes basically elsewhere. So you\r\n> would still have the same problem on Windows, no?\r\n\r\nMore to the point: what about the multiplication in printTableInit?\r\nThe cat's been out of the bag for quite some time before we get to\r\nprintTableAddCell.\r\n\r\nI'm more than a bit skeptical about trying to do something about this,\r\nsimply because this range of query result sizes is far past what is\r\npractical. The OP clearly hasn't tested his patch on actually\r\noverflowing query results, and I don't care to either.\r\n\r\n regards, tom lane",
"msg_date": "Wed, 13 Sep 2023 07:31:54 +0000",
"msg_from": "Hongxu Ma <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PSQL error: total cell count of XXX exceeded"
},
{
"msg_contents": "On 2023-Sep-12, Tom Lane wrote:\n\n> I'm more than a bit skeptical about trying to do something about this,\n> simply because this range of query result sizes is far past what is\n> practical. The OP clearly hasn't tested his patch on actually\n> overflowing query results, and I don't care to either.\n\nI think we're bound to hit this limit at some point in the future, and\nit seems easy enough to solve. I propose the attached, which is pretty\nmuch what Hongxu last submitted, with some minor changes.\n\nHaving this make a difference requires some 128GB of RAM, so it's not a\npiece of cake, but it's an amount that can be reasonably expected to be\nphysically installed in real machines nowadays.\n\n(I first thought we could just use pg_mul_s32_overflow during\nprintTableInit and raise an error if that returns true, but that just\npostpones the problem.)\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\nSubversion to GIT: the shortest path to happiness I've ever heard of\n (Alexey Klyukin)",
"msg_date": "Mon, 20 Nov 2023 21:48:35 +0100",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PSQL error: total cell count of XXX exceeded"
},
{
"msg_contents": "Alvaro Herrera <[email protected]> writes:\n> I think we're bound to hit this limit at some point in the future, and\n> it seems easy enough to solve. I propose the attached, which is pretty\n> much what Hongxu last submitted, with some minor changes.\n\nThis bit needs more work:\n\n-\tcontent->cells = pg_malloc0((ncolumns * nrows + 1) * sizeof(*content->cells));\n+\ttotal_cells = (int64) ncolumns * nrows;\n+\tcontent->cells = pg_malloc0((total_cells + 1) * sizeof(*content->cells));\n\nYou've made the computation of total_cells reliable, but there's\nnothing stopping the subsequent computation of the malloc argument\nfrom overflowing (especially on 32-bit machines). I think we need\nan explicit test along the lines of\n\n\tif (total_cells >= SIZE_MAX / sizeof(*content->cells))\n\t\tthrow error;\n\n(\">=\" allows not needing to add +1.)\n\nAlso, maybe total_cells should be uint64? We don't want\nnegative values to pass this test. Alternatively, add a separate\ncheck that total_cells >= 0.\n\nIt should be sufficient to be paranoid about this in printTableInit,\nsince after that we know the product of ncolumns * nrows isn't\ntoo big.\n\nThe rest of this passes an eyeball check.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 20 Nov 2023 17:29:03 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PSQL error: total cell count of XXX exceeded"
},
{
"msg_contents": "On 2023-Sep-13, Hongxu Ma wrote:\n\n> After double check, looks `int64` of src/include/c.h is the proper type for it.\n> Uploaded the v4 patch to fix it.\n\nRight. I made a few more adjustments, including the additional overflow\ncheck in printTableInit that Tom Lane suggested, and pushed this.\n\nIt's a bit annoying that the error recovery decision of this code is to\nexit the process with an error. If somebody were to be interested in a\nfun improvement exercise, it may be worth redoing the print.c API so\nthat it returns errors that psql can report and recover from, instead of\njust closing the process.\n\nTBH though, I've never hit that code in real usage.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Tue, 21 Nov 2023 15:33:03 +0100",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PSQL error: total cell count of XXX exceeded"
},
{
"msg_contents": "Alvaro Herrera <[email protected]> writes:\n> Right. I made a few more adjustments, including the additional overflow\n> check in printTableInit that Tom Lane suggested, and pushed this.\n\nCommitted patch LGTM.\n\n> It's a bit annoying that the error recovery decision of this code is to\n> exit the process with an error. If somebody were to be interested in a\n> fun improvement exercise, it may be worth redoing the print.c API so\n> that it returns errors that psql can report and recover from, instead of\n> just closing the process.\n> TBH though, I've never hit that code in real usage.\n\nYeah, I think the reason it's stayed like that for 25 years is that\nnobody's hit the case in practice. Changing the API would be a bit\ntroublesome, too, because we don't know if anybody besides psql\nuses it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 21 Nov 2023 09:43:16 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PSQL error: total cell count of XXX exceeded"
},
{
"msg_contents": "On 2023-Nov-21, Tom Lane wrote:\n\n> Alvaro Herrera <[email protected]> writes:\n> > Right. I made a few more adjustments, including the additional overflow\n> > check in printTableInit that Tom Lane suggested, and pushed this.\n> \n> Committed patch LGTM.\n\nThanks for looking!\n\n> > It's a bit annoying that the error recovery decision of this code is to\n> > exit the process with an error. [...]\n> > TBH though, I've never hit that code in real usage.\n> \n> Yeah, I think the reason it's stayed like that for 25 years is that\n> nobody's hit the case in practice. Changing the API would be a bit\n> troublesome, too, because we don't know if anybody besides psql\n> uses it.\n\nTrue.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Tue, 21 Nov 2023 17:14:22 +0100",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PSQL error: total cell count of XXX exceeded"
},
{
"msg_contents": "Alvaro Herrera <[email protected]> writes:\n> On 2023-Nov-21, Tom Lane wrote:\n>> Alvaro Herrera <[email protected]> writes:\n>>> It's a bit annoying that the error recovery decision of this code is to\n>>> exit the process with an error. [...]\n>>> TBH though, I've never hit that code in real usage.\n\n>> Yeah, I think the reason it's stayed like that for 25 years is that\n>> nobody's hit the case in practice. Changing the API would be a bit\n>> troublesome, too, because we don't know if anybody besides psql\n>> uses it.\n\n> True.\n\nIt strikes me that perhaps a workable compromise behavior could be\n\"report the error to wherever we would have printed the table, and\nreturn normally\". I'm still not excited about doing anything about\nit, but ...\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 21 Nov 2023 13:11:02 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PSQL error: total cell count of XXX exceeded"
}
] |
[
{
"msg_contents": "Hello Pgperffarm Community,\n\nI'm Anil, a GSoC'23 contributor to Pgperffarm. I'm excited to share our\nprogress as the program approaches its culmination.\n\nYou can view the latest benchmark results at http://140.211.168.145/\n\nNow, I'm inviting you to be a part of our effort. Could you test and run\nthe benchmark on your system? Let me know, and I'll give you a unique\nmachine ID for seamless contributions.\n\nAccess the repository at https://github.com/PGPerfFarm/pgperffarm\nI want you to know that your involvement matters greatly.\nPlease feel free to reach out if you have any questions or to get started.\nIf you encounter any bug, please report it.\n\nBest,\nAnil\nGSoC'23 Contributor, Pgperffarm\n\nHello Pgperffarm Community,I'm Anil, a GSoC'23 contributor to Pgperffarm. I'm excited to share our progress as the program approaches its culmination.You can view the latest benchmark results at http://140.211.168.145/Now, I'm inviting you to be a part of our effort. Could you test and run the benchmark on your system? Let me know, and I'll give you a unique machine ID for seamless contributions.Access the repository at https://github.com/PGPerfFarm/pgperffarmI want you to know that your involvement matters greatly.Please feel free to reach out if you have any questions or to get started. If you encounter any bug, please report it.Best,AnilGSoC'23 Contributor, Pgperffarm",
"msg_date": "Sat, 26 Aug 2023 21:37:16 +0530",
"msg_from": "Anil <[email protected]>",
"msg_from_op": true,
"msg_subject": "Invitation to Test and Contribute to Pgperffarm"
},
{
"msg_contents": "---------- Forwarded message ---------\nFrom: Anil <[email protected]>\nDate: Sat, Aug 26, 2023 at 9:37 PM\nSubject: Invitation to Test and Contribute to Pgperffarm\nTo: <[email protected]>\n\n\nHello Pgperffarm Community,\n\nI'm Anil, a GSoC'23 contributor to Pgperffarm. I'm excited to share our\nprogress as the program approaches its culmination.\n\nYou can view the latest benchmark results at http://140.211.168.145/\n\nNow, I'm inviting you to be a part of our effort. Could you test and run\nthe benchmark on your system? Let me know, and I'll give you a unique\nmachine ID for seamless contributions.\n\nAccess the repository at https://github.com/PGPerfFarm/pgperffarm\nI want you to know that your involvement matters greatly.\nPlease feel free to reach out if you have any questions or to get started.\nIf you encounter any bug, please report it.\n\nBest,\nAnil\nGSoC'23 Contributor, Pgperffarm\n\n---------- Forwarded message ---------From: Anil <[email protected]>Date: Sat, Aug 26, 2023 at 9:37 PMSubject: Invitation to Test and Contribute to PgperffarmTo: <[email protected]>Hello Pgperffarm Community,I'm Anil, a GSoC'23 contributor to Pgperffarm. I'm excited to share our progress as the program approaches its culmination.You can view the latest benchmark results at http://140.211.168.145/Now, I'm inviting you to be a part of our effort. Could you test and run the benchmark on your system? Let me know, and I'll give you a unique machine ID for seamless contributions.Access the repository at https://github.com/PGPerfFarm/pgperffarmI want you to know that your involvement matters greatly.Please feel free to reach out if you have any questions or to get started. If you encounter any bug, please report it.Best,AnilGSoC'23 Contributor, Pgperffarm",
"msg_date": "Sat, 26 Aug 2023 21:45:10 +0530",
"msg_from": "Anil <[email protected]>",
"msg_from_op": true,
"msg_subject": "Fwd: Invitation to Test and Contribute to Pgperffarm"
}
] |
[
{
"msg_contents": "Hi,\n\nThis is reopening of thread:\nhttps://www.postgresql.org/message-id/flat/2ef6b491-1946-b606-f064-d9ea79d91463%40gmail.com#14e0bdb6872c0b26023d532eeb943d3e\n\nThis is a PoC patch which implements distinct operation in window\naggregates (without order by and for single column aggregation, final\nversion may vary wrt these limitations). Purpose of this PoC is to get\nfeedback on the approach used and corresponding implementation, any\nnitpicking as deemed reasonable.\n\nDistinct operation is mirrored from implementation in nodeAgg. Existing\npartitioning logic determines if row is in partition and when distinct is\nrequired, all tuples for the aggregate column are stored in tuplesort. When\nfinalize_windowaggregate gets called, tuples are sorted and duplicates are\nremoved, followed by calling the transition function on each tuple.\nWhen distinct is not required, the above process is skipped and the\ntransition function gets called directly and nothing gets inserted into\ntuplesort.\nNote: For each partition, in tuplesort_begin and tuplesort_end is involved\nto rinse tuplesort, so at any time, max tuples in tuplesort is equal to\ntuples in a particular partition.\n\nI have verified it for interger and interval column aggregates (to rule out\nobvious issues related to data types).\n\nSample cases:\n\ncreate table mytable(id int, name text);\ninsert into mytable values(1, 'A');\ninsert into mytable values(1, 'A');\ninsert into mytable values(5, 'B');\ninsert into mytable values(3, 'A');\ninsert into mytable values(1, 'A');\n\nselect avg(distinct id) over (partition by name) from mytable;\n avg\n--------------------\n2.0000000000000000\n2.0000000000000000\n2.0000000000000000\n2.0000000000000000\n5.0000000000000000\n\nselect avg(id) over (partition by name) from mytable;\n avg\n--------------------\n 1.5000000000000000\n 1.5000000000000000\n 1.5000000000000000\n 1.5000000000000000\n 5.0000000000000000\n\nselect avg(distinct id) over () from mytable;\n avg\n--------------------\n 3.0000000000000000\n 3.0000000000000000\n 3.0000000000000000\n 3.0000000000000000\n 3.0000000000000000\n\nselect avg(distinct id) from mytable;\n avg\n--------------------\n 3.0000000000000000\n\nThis is my first-time contribution. Please let me know if anything can be\nimproved as I`m eager to learn.\n\nRegards,\nAnkit Kumar Pandey",
"msg_date": "Sun, 27 Aug 2023 17:27:20 +0530",
"msg_from": "Ankit Pandey <[email protected]>",
"msg_from_op": true,
"msg_subject": "[PoC] Implementation of distinct in Window Aggregates: take two"
},
{
"msg_contents": "Hi,\n\nI went through the Cfbot, and some of the test cases are failing for\nthis patch. It seems like some tests are crashing:\nhttps://api.cirrus-ci.com/v1/artifact/task/6291153444667392/crashlog/crashlog-postgres.exe_03b0_2023-11-07_10-41-39-624.txt\n\n[10:46:56.546] Summary of Failures:\n[10:46:56.546]\n[10:46:56.546] 87/270 postgresql:postgres_fdw / postgres_fdw/regress\nERROR 11.10s exit status 1\n[10:46:56.546] 82/270 postgresql:regress / regress/regress ERROR\n248.55s exit status 1\n[10:46:56.546] 99/270 postgresql:recovery /\nrecovery/027_stream_regress ERROR 161.40s exit status 29\n[10:46:56.546] 98/270 postgresql:pg_upgrade /\npg_upgrade/002_pg_upgrade ERROR 253.31s exit status 29\n\nlink of tests failing:\nhttps://cirrus-ci.com/task/6642997165555712\nhttps://cirrus-ci.com/task/4602303584403456\nhttps://cirrus-ci.com/task/5728203491246080\nhttps://cirrus-ci.com/task/5165253537824768?logs=test_world#L511\nhttps://cirrus-ci.com/task/6291153444667392\n\nThanks\nShlok Kumar Kyal\n\n\n",
"msg_date": "Wed, 8 Nov 2023 11:46:08 +0530",
"msg_from": "Shlok Kyal <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Implementation of distinct in Window Aggregates: take two"
},
{
"msg_contents": "On Wed, 8 Nov 2023 at 11:46, Shlok Kyal <[email protected]> wrote:\n>\n> Hi,\n>\n> I went through the Cfbot, and some of the test cases are failing for\n> this patch. It seems like some tests are crashing:\n> https://api.cirrus-ci.com/v1/artifact/task/6291153444667392/crashlog/crashlog-postgres.exe_03b0_2023-11-07_10-41-39-624.txt\n>\n> [10:46:56.546] Summary of Failures:\n> [10:46:56.546]\n> [10:46:56.546] 87/270 postgresql:postgres_fdw / postgres_fdw/regress\n> ERROR 11.10s exit status 1\n> [10:46:56.546] 82/270 postgresql:regress / regress/regress ERROR\n> 248.55s exit status 1\n> [10:46:56.546] 99/270 postgresql:recovery /\n> recovery/027_stream_regress ERROR 161.40s exit status 29\n> [10:46:56.546] 98/270 postgresql:pg_upgrade /\n> pg_upgrade/002_pg_upgrade ERROR 253.31s exit status 29\n>\n> link of tests failing:\n> https://cirrus-ci.com/task/6642997165555712\n> https://cirrus-ci.com/task/4602303584403456\n> https://cirrus-ci.com/task/5728203491246080\n> https://cirrus-ci.com/task/5165253537824768?logs=test_world#L511\n> https://cirrus-ci.com/task/6291153444667392\n\nThe patch which you submitted has been awaiting your attention for\nquite some time now. As such, we have moved it to \"Returned with\nFeedback\" and removed it from the reviewing queue. Depending on\ntiming, this may be reversible. Kindly address the feedback you have\nreceived, and resubmit the patch to the next CommitFest.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Thu, 1 Feb 2024 16:19:50 +0530",
"msg_from": "vignesh C <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Implementation of distinct in Window Aggregates: take two"
}
] |
[
{
"msg_contents": "Hi,\n\nHere are a couple of changes that got FreeBSD down to 4:29 total, 2:40\nin test_world in my last run (over 2x speedup), using a RAM disk\nbacked by a swap partition, and more CPUs. It's still a regular UFS\nfile system but FreeBSD is not as good at avoiding I/O around short\nlived files and directories as Linux: it can get hung up on a bunch of\nsynchronous I/O, and also flushes disk caches for those writes,\nwithout an off switch.\n\nI don't know about Windows, but I suspect the same applies there, ie\nsynchronous I/O blocking system calls around our blizzard of file\ncreations and unlinks. Anyone know how to try it?",
"msg_date": "Mon, 28 Aug 2023 10:29:39 +1200",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": true,
"msg_subject": "CI speed improvements for FreeBSD"
},
{
"msg_contents": "And after adding this to the commitfest, here's the first cfbot run.\nThe gain was due to \"test_world\" which shows a greater-than-2x speedup\n(~4:30 -> ~2:08) from 2x CPUs. That is nice for humans who want the\nanswer as soon as possible, but note that the resource usage cost\nmight go up because of the non-parallel parts now wasting more idle\nCPUs: git clone, meson configure etc (as they do on every platform).\n\nhttps://cirrus-ci.com/build/6060109692928000\n\n\n",
"msg_date": "Mon, 28 Aug 2023 11:24:39 +1200",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: CI speed improvements for FreeBSD"
},
{
"msg_contents": "Hi!\n\nI looked at the changes and I liked them. Here are my thoughts:\n\n0001:\n1. I think, this is a good idea to use RAM. Since, it's still a UFS, and\nwe lose nothing in terms of testing, but win in speed significantly.\n2. Change from \"swapoff -a || true\" to \"swapoff -a\" is legit in my view,\nsince it's better to explicitly fail than silent any possible problem.\n3. Man says that lowercase suffixes should be used for the mdconfig. But in\nfact, you can use either lowercase or an appercase. Yep, it's in\n the mdconfig.c: \"else if (*p == 'g' || *p == 'G')\".\n\n0002:\n1. The resource usage should be a bit higher, this is for sure. But, if\nI'm not missing something, not drastically. Anyway, I do not know\n how to measure this increase to get concrete values.\n2. And think of a potential benefits of increasing the number of test jobs:\nmore concurrent processes, more interactions, better test coverage.\n\nHere are my runs:\nFreeBSD @master\nhttps://cirrus-ci.com/task/4934701194936320\nRun test_world 05:56\n\nFreeBSD @master + 0001\nhttps://cirrus-ci.com/task/5921385306914816\nRun test_world 05:06\n\nFreeBSD @master + 0001, + 0002\nhttps://cirrus-ci.com/task/5635288945393664\nRun test_world 02:20\n\nFor comparison\nDebian @master\nhttps://cirrus-ci.com/task/5143705577848832\nRun test_world 02:38\n\nIn the overall, I consider this changes useful. CI run faster, with better\ntest coverage in exchange for presumably slight increase\nin resource usage, but I don't think this increase should be significant.\n\n-- \nBest regards,\nMaxim Orlov.\n\nHi!I looked at the changes and I liked them. Here are my thoughts:0001:1. I think, this is a good idea to use RAM. Since, it's still a UFS, and we lose nothing in terms of testing, but win in speed significantly.2. Change from \"swapoff -a || true\" to \"swapoff -a\" is legit in my view, since it's better to explicitly fail than silent any possible problem.3. Man says that lowercase suffixes should be used for the mdconfig. But in fact, you can use either lowercase or an appercase. Yep, it's in the mdconfig.c: \"else if (*p == 'g' || *p == 'G')\".0002:1. The resource usage should be a bit higher, this is for sure. But, if I'm not missing something, not drastically. Anyway, I do not know how to measure this increase to get concrete values.2. And think of a potential benefits of increasing the number of test jobs: more concurrent processes, more interactions, better test coverage.Here are my runs:FreeBSD @masterhttps://cirrus-ci.com/task/4934701194936320Run test_world 05:56FreeBSD @master + 0001https://cirrus-ci.com/task/5921385306914816Run test_world 05:06FreeBSD @master + 0001, + 0002https://cirrus-ci.com/task/5635288945393664Run test_world 02:20For comparisonDebian @masterhttps://cirrus-ci.com/task/5143705577848832Run test_world 02:38In the overall, I consider this changes useful. CI run faster, with better test coverage in exchange for presumably slight increase in resource usage, but I don't think this increase should be significant.-- Best regards,Maxim Orlov.",
"msg_date": "Tue, 12 Mar 2024 18:50:06 +0300",
"msg_from": "Maxim Orlov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CI speed improvements for FreeBSD"
},
{
"msg_contents": "On Wed, Mar 13, 2024 at 4:50 AM Maxim Orlov <[email protected]> wrote:\n> I looked at the changes and I liked them. Here are my thoughts:\n\nThanks for looking! Pushed.\n\n\n",
"msg_date": "Wed, 13 Mar 2024 15:03:01 +1300",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: CI speed improvements for FreeBSD"
}
] |
[
{
"msg_contents": "Hi all,\n(Heikki in CC.)\n\nAfter b0bea38705b2, I have noticed that the syslogger is generating a\nlot of dummy LOG entries:\n2023-08-28 09:40:52.565 JST [24554]\nLOG: could not close client or listen socket: Bad file descriptor \n\nThe only reason why I have noticed this issue is because I enable the\nlogging collector in my development scripts. Note that the pg_ctl\ntest 004_logrotate.pl, the only one with logging_collector set, is\nequally able to reproduce the issue.\n\nThe root of the problem is ClosePostmasterPorts() in syslogger.c,\nwhere we close the postmaster ports, but all of them are still set at\n0, leading to these spurious logs.\n\nFrom what I can see, this is is a rather old issue, because\nListenSocket[] is filled with PGINVALID_SOCKET *after* starting the\nsyslogger. It seems to me that we should just initialize the array\nbefore starting the syslogger, so as we don't get these incorrect\nlogs?\n\nThoughts? Please see the attached.\n--\nMichael",
"msg_date": "Mon, 28 Aug 2023 09:52:09 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Logger process and \"LOG: could not close client or listen socket:\n Bad file descriptor\""
},
{
"msg_contents": "On Sun, Aug 27, 2023 at 5:52 PM Michael Paquier <[email protected]> wrote:\n> From what I can see, this is is a rather old issue, because\n> ListenSocket[] is filled with PGINVALID_SOCKET *after* starting the\n> syslogger. It seems to me that we should just initialize the array\n> before starting the syslogger, so as we don't get these incorrect\n> logs?\n>\n> Thoughts? Please see the attached.\n\nAgreed, this is very annoying. I'm going to start using your patch\nwith the feature branch I'm working on. Hopefully that won't be\nnecessary for too much longer.\n\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 28 Aug 2023 20:18:22 -0700",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Logger process and \"LOG: could not close client or listen socket:\n Bad file descriptor\""
},
{
"msg_contents": "On 29/08/2023 06:18, Peter Geoghegan wrote:\n> On Sun, Aug 27, 2023 at 5:52 PM Michael Paquier <[email protected]> wrote:\n>> From what I can see, this is is a rather old issue, because\n>> ListenSocket[] is filled with PGINVALID_SOCKET *after* starting the\n>> syslogger. It seems to me that we should just initialize the array\n>> before starting the syslogger, so as we don't get these incorrect\n>> logs?\n>>\n>> Thoughts? Please see the attached.\n> \n> Agreed, this is very annoying. I'm going to start using your patch\n> with the feature branch I'm working on. Hopefully that won't be\n> necessary for too much longer.\n\nJust to close the loop on this thread: I committed and backpatched \nMichael's fix.\n\nDiscussion on other thread at \nhttps://www.postgresql.org/message-id/9caed67f-f93e-5701-8c25-265a2b139ed0%40iki.fi.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Tue, 29 Aug 2023 09:27:48 +0300",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Logger process and \"LOG: could not close client or listen socket:\n Bad file descriptor\""
},
{
"msg_contents": "On Tue, Aug 29, 2023 at 09:27:48AM +0300, Heikki Linnakangas wrote:\n> Just to close the loop on this thread: I committed and backpatched Michael's\n> fix.\n\nThanks!\n--\nMichael",
"msg_date": "Tue, 29 Aug 2023 15:30:46 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Logger process and \"LOG: could not close client or listen\n socket: Bad file descriptor\""
}
] |
[
{
"msg_contents": "Hi,\n\nEvery time we run a SQL query, we fork a new psql process and a new\ncold backend process. It's not free on Unix, and quite a lot worse on\nWindows, at around 70ms per query. Take amcheck/001_verify_heapam for\nexample. It runs 272 subtests firing off a stream of queries, and\ncompletes in ~51s on Windows (!), and ~6-9s on the various Unixen, on\nCI.\n\nHere are some timestamps I captured from CI by instrumenting various\nPerl and C bits:\n\n0.000s: IPC::Run starts\n0.023s: postmaster socket sees connection\n0.025s: postmaster has created child process\n0.033s: backend starts running main()\n0.039s: backend has reattached to shared memory\n0.043s: backend connection authorized message\n0.046s: backend has executed and logged query\n0.070s: IPC::Run returns\n\nI expected process creation to be slow on that OS, but it seems like\nsomething happening at the end is even slower. CI shows Windows\nconsuming 4 CPUs at 100% for a full 10 minutes to run a test suite\nthat finishes in 2-3 minutes everywhere else with the same number of\nCPUs. Could there be an event handling snafu in IPC::Run or elsewhere\nnearby? It seems like there must be either a busy loop or a busted\nsleep/wakeup... somewhere? But even if there's a weird bug here\nwaiting to be discovered and fixed, I guess it'll always be too slow\nat ~10ms per process spawned, with two processes to spawn, and it's\nbad enough on Unix.\n\nAs an experiment, I hacked up a not-good-enough-to-share experiment\nwhere $node->safe_psql() would automatically cache a BackgroundPsql\nobject and reuse it, and the times for that test dropped ~51 -> ~9s on\nWindows, and ~7 -> ~2s on the Unixen. But even that seems non-ideal\n(well it's certainly non-ideal the way I hacked it up anyway...). I\nsuppose there are quite a few ways we could do better:\n\n1. Don't fork anything at all: open (and cache) a connection directly\nfrom Perl.\n1a. Write xsub or ffi bindings for libpq. Or vendor (parts) of the\npopular Perl xsub library?\n1b. Write our own mini pure-perl pq client module. Or vendor (parts)\nof some existing one.\n2. Use long-lived psql sessions.\n2a. Something building on BackgroundPsql.\n2b. Maybe give psql or a new libpq-wrapper a new low level stdio/pipe\nprotocol that is more fun to talk to from Perl/machines?\n\nIn some other languages one can do FFI pretty easily so we could use\nthe in-tree libpq without extra dependencies:\n\n>>> import ctypes\n>>> libpq = ctypes.cdll.LoadLibrary(\"/path/to/libpq.so\")\n>>> libpq.PQlibVersion()\n170000\n\n... but it seems you can't do either static C bindings or runtime FFI\nfrom Perl without adding a new library/package dependency. I'm not\nmuch of a Perl hacker so I don't have any particular feeling. What\nwould be best?\n\nThis message brought to you by the Lorax.\n\n\n",
"msg_date": "Mon, 28 Aug 2023 17:29:56 +1200",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": true,
"msg_subject": "Query execution in Perl TAP tests needs work"
},
{
"msg_contents": "On 2023-08-28 Mo 01:29, Thomas Munro wrote:\n> Hi,\n>\n> Every time we run a SQL query, we fork a new psql process and a new\n> cold backend process. It's not free on Unix, and quite a lot worse on\n> Windows, at around 70ms per query. Take amcheck/001_verify_heapam for\n> example. It runs 272 subtests firing off a stream of queries, and\n> completes in ~51s on Windows (!), and ~6-9s on the various Unixen, on\n> CI.\n>\n> Here are some timestamps I captured from CI by instrumenting various\n> Perl and C bits:\n>\n> 0.000s: IPC::Run starts\n> 0.023s: postmaster socket sees connection\n> 0.025s: postmaster has created child process\n> 0.033s: backend starts running main()\n> 0.039s: backend has reattached to shared memory\n> 0.043s: backend connection authorized message\n> 0.046s: backend has executed and logged query\n> 0.070s: IPC::Run returns\n>\n> I expected process creation to be slow on that OS, but it seems like\n> something happening at the end is even slower. CI shows Windows\n> consuming 4 CPUs at 100% for a full 10 minutes to run a test suite\n> that finishes in 2-3 minutes everywhere else with the same number of\n> CPUs. Could there be an event handling snafu in IPC::Run or elsewhere\n> nearby? It seems like there must be either a busy loop or a busted\n> sleep/wakeup... somewhere? But even if there's a weird bug here\n> waiting to be discovered and fixed, I guess it'll always be too slow\n> at ~10ms per process spawned, with two processes to spawn, and it's\n> bad enough on Unix.\n>\n> As an experiment, I hacked up a not-good-enough-to-share experiment\n> where $node->safe_psql() would automatically cache a BackgroundPsql\n> object and reuse it, and the times for that test dropped ~51 -> ~9s on\n> Windows, and ~7 -> ~2s on the Unixen. But even that seems non-ideal\n> (well it's certainly non-ideal the way I hacked it up anyway...). I\n> suppose there are quite a few ways we could do better:\n>\n> 1. Don't fork anything at all: open (and cache) a connection directly\n> from Perl.\n> 1a. Write xsub or ffi bindings for libpq. Or vendor (parts) of the\n> popular Perl xsub library?\n> 1b. Write our own mini pure-perl pq client module. Or vendor (parts)\n> of some existing one.\n> 2. Use long-lived psql sessions.\n> 2a. Something building on BackgroundPsql.\n> 2b. Maybe give psql or a new libpq-wrapper a new low level stdio/pipe\n> protocol that is more fun to talk to from Perl/machines?\n>\n> In some other languages one can do FFI pretty easily so we could use\n> the in-tree libpq without extra dependencies:\n>\n>>>> import ctypes\n>>>> libpq = ctypes.cdll.LoadLibrary(\"/path/to/libpq.so\")\n>>>> libpq.PQlibVersion()\n> 170000\n>\n> ... but it seems you can't do either static C bindings or runtime FFI\n> from Perl without adding a new library/package dependency. I'm not\n> much of a Perl hacker so I don't have any particular feeling. What\n> would be best?\n>\n> This message brought to you by the Lorax.\n\nThanks for raising this. Windows test times have bothered me for ages.\n\nThe standard perl DBI library has a connect_cached method. Of course we \ndon't want to be dependent on it, especially if we might have changed \nlibpq in what we're testing, and it would place a substantial new burden \non testers like buildfarm owners.\n\nI like the idea of using a pure perl pq implementation, not least \nbecause it could expand our ability to test things at the protocol \nlevel. Not sure how much work it would be. 
I'm willing to help if \nwe want to go that way.\n\nYes you need an external library to use FFI in perl, but there's one \nthat's pretty tiny. See <https://metacpan.org/pod/FFI::Library>. There \nis also FFI::Platypus, but it involves building a library. OTOH, that's \nthe one that's available standard on my Fedora and Ubuntu systems. I \nhaven't tried using either. Maybe we could use some logic that would use \nthe FFI interface if it's available, and fall back on current usage.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com",
"msg_date": "Mon, 28 Aug 2023 09:23:16 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query execution in Perl TAP tests needs work"
},
{
"msg_contents": "On Mon, Aug 28, 2023 at 05:29:56PM +1200, Thomas Munro wrote:\n> CI shows Windows\n> consuming 4 CPUs at 100% for a full 10 minutes to run a test suite\n> that finishes in 2-3 minutes everywhere else with the same number of\n> CPUs. Could there be an event handling snafu in IPC::Run or elsewhere\n> nearby? It seems like there must be either a busy loop or a busted\n> sleep/wakeup... somewhere?\n\nThat smells like this one:\nhttps://github.com/cpan-authors/IPC-Run/issues/166#issuecomment-1288190929\n\n> As an experiment, I hacked up a not-good-enough-to-share experiment\n> where $node->safe_psql() would automatically cache a BackgroundPsql\n> object and reuse it, and the times for that test dropped ~51 -> ~9s on\n> Windows, and ~7 -> ~2s on the Unixen.\n\nNice!\n\n> suppose there are quite a few ways we could do better:\n> \n> 1. Don't fork anything at all: open (and cache) a connection directly\n> from Perl.\n> 1a. Write xsub or ffi bindings for libpq. Or vendor (parts) of the\n> popular Perl xsub library?\n> 1b. Write our own mini pure-perl pq client module. Or vendor (parts)\n> of some existing one.\n> 2. Use long-lived psql sessions.\n> 2a. Something building on BackgroundPsql.\n> 2b. Maybe give psql or a new libpq-wrapper a new low level stdio/pipe\n> protocol that is more fun to talk to from Perl/machines?\n\n(2a) seems adequate and easiest to achieve. If DBD::Pg were under a\ncompatible license, I'd say use it as the vendor for (1a). Maybe we can get\nit relicensed? Building a separate way of connecting from Perl would be sad.\n\n\n",
"msg_date": "Mon, 28 Aug 2023 18:48:19 -0700",
"msg_from": "Noah Misch <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query execution in Perl TAP tests needs work"
},
{
"msg_contents": "On Tue, Aug 29, 2023 at 1:48 PM Noah Misch <[email protected]> wrote:\n> https://github.com/cpan-authors/IPC-Run/issues/166#issuecomment-1288190929\n\nInteresting. But that shows a case with no pipes connected, using\nselect() as a dumb sleep and ignoring SIGCHLD. In our usage we have\npipes connected, and I think select() should return when the child's\noutput pipes become readable due to EOF. I guess something about that\nmight be b0rked on Windows? I see there is an extra helper process\ndoing socket<->pipe conversion (hah, that explains an extra ~10ms at\nthe start in my timestamps)... I can't really follow that code, but\nperhaps the parent forgot to close the far end of the socket pair, so\nthere is no EOF?\n\n\n",
"msg_date": "Tue, 29 Aug 2023 16:25:24 +1200",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query execution in Perl TAP tests needs work"
},
{
"msg_contents": "On Tue, Aug 29, 2023 at 1:23 AM Andrew Dunstan <[email protected]> wrote:\n> I like the idea of using a pure perl pq implementation, not least because it could expand our ability to test things at the protocol level. Not sure how much work it would be. I'm willing to help if we want to go that way.\n\nCool. Let's see what others think.\n\nAnd assuming we can pick *something* vaguely efficient and find a Perl\nhacker to implement it, a related question is how to expose it to our\ntest suites.\n\nShould we figure out how to leave all our tests unchanged, by teaching\n$node->psql() et al to do caching implicitly? Or should we make it\nexplicit, with $conn = $node->connect(), and $conn->do_stuff()? And\nif the latter, should do_stuff be DBI style or something that suits us\nbetter?\n\n\n",
"msg_date": "Tue, 29 Aug 2023 16:33:48 +1200",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query execution in Perl TAP tests needs work"
},
{
"msg_contents": "On Tue, Aug 29, 2023 at 1:48 PM Noah Misch <[email protected]> wrote:\n> On Mon, Aug 28, 2023 at 05:29:56PM +1200, Thomas Munro wrote:\n> > 1. Don't fork anything at all: open (and cache) a connection directly\n> > from Perl.\n> > 1a. Write xsub or ffi bindings for libpq. Or vendor (parts) of the\n> > popular Perl xsub library?\n> > 1b. Write our own mini pure-perl pq client module. Or vendor (parts)\n> > of some existing one.\n> > 2. Use long-lived psql sessions.\n> > 2a. Something building on BackgroundPsql.\n> > 2b. Maybe give psql or a new libpq-wrapper a new low level stdio/pipe\n> > protocol that is more fun to talk to from Perl/machines?\n>\n> (2a) seems adequate and easiest to achieve. If DBD::Pg were under a\n> compatible license, I'd say use it as the vendor for (1a). Maybe we can get\n> it relicensed? Building a separate way of connecting from Perl would be sad.\n\nHere's my minimal POC of 2a. It only changes ->safe_psql() and not\nthe various other things like ->psql() and ->poll_query_until().\nHence use of amcheck/001_verify_heapam as an example: it runs a lot of\nsafe_psql() queries. It fails in all kinds of ways if enabled\ngenerally, which would take some investigation (some tests require\nthere to be no connections at various times, so we'd probably need to\ninsert disable/re-enable calls at various places).",
"msg_date": "Tue, 29 Aug 2023 18:41:59 +1200",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query execution in Perl TAP tests needs work"
},
{
"msg_contents": "On Tue, Aug 29, 2023 at 04:25:24PM +1200, Thomas Munro wrote:\n> On Tue, Aug 29, 2023 at 1:48 PM Noah Misch <[email protected]> wrote:\n> > https://github.com/cpan-authors/IPC-Run/issues/166#issuecomment-1288190929\n> \n> Interesting. But that shows a case with no pipes connected, using\n> select() as a dumb sleep and ignoring SIGCHLD. In our usage we have\n> pipes connected, and I think select() should return when the child's\n> output pipes become readable due to EOF. I guess something about that\n> might be b0rked on Windows? I see there is an extra helper process\n> doing socket<->pipe conversion (hah, that explains an extra ~10ms at\n> the start in my timestamps)...\n\nIn that case, let's assume it's not the same issue.\n\n\n",
"msg_date": "Tue, 29 Aug 2023 06:49:26 -0700",
"msg_from": "Noah Misch <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query execution in Perl TAP tests needs work"
},
{
"msg_contents": "On Wed, Aug 30, 2023 at 1:49 AM Noah Misch <[email protected]> wrote:\n> On Tue, Aug 29, 2023 at 04:25:24PM +1200, Thomas Munro wrote:\n> > On Tue, Aug 29, 2023 at 1:48 PM Noah Misch <[email protected]> wrote:\n> > > https://github.com/cpan-authors/IPC-Run/issues/166#issuecomment-1288190929\n> >\n> > Interesting. But that shows a case with no pipes connected, using\n> > select() as a dumb sleep and ignoring SIGCHLD. In our usage we have\n> > pipes connected, and I think select() should return when the child's\n> > output pipes become readable due to EOF. I guess something about that\n> > might be b0rked on Windows? I see there is an extra helper process\n> > doing socket<->pipe conversion (hah, that explains an extra ~10ms at\n> > the start in my timestamps)...\n>\n> In that case, let's assume it's not the same issue.\n\nYeah, I think it amounts to the same thing, if EOF never arrives.\n\nI suspect that we could get ->safe_psql() down to about ~25ms baseline\nif someone could fix the posited IPC::Run EOF bug and change its\ninternal helper process to a helper thread. Even if we fix tests to\nreuse backends, I expect that'd help CI measurably. (The native way\nwould be to use pipes directly, changing select() to\nWaitForMultipleObjects(), but I suspect IPC::Run might have painted\nitself into a corner by exposing the psuedo-pipes and documenting that\nthey can be used with select(). Oh well.)\n\n\n",
"msg_date": "Wed, 30 Aug 2023 09:48:42 +1200",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query execution in Perl TAP tests needs work"
},
{
"msg_contents": "Hi,\n\nOn 2023-08-28 17:29:56 +1200, Thomas Munro wrote:\n> Every time we run a SQL query, we fork a new psql process and a new\n> cold backend process. It's not free on Unix, and quite a lot worse on\n> Windows, at around 70ms per query. Take amcheck/001_verify_heapam for\n> example. It runs 272 subtests firing off a stream of queries, and\n> completes in ~51s on Windows (!), and ~6-9s on the various Unixen, on\n> CI.\n\nWhoa.\n\n> Here are some timestamps I captured from CI by instrumenting various\n> Perl and C bits:\n> \n> 0.000s: IPC::Run starts\n> 0.023s: postmaster socket sees connection\n> 0.025s: postmaster has created child process\n> 0.033s: backend starts running main()\n> 0.039s: backend has reattached to shared memory\n> 0.043s: backend connection authorized message\n> 0.046s: backend has executed and logged query\n> 0.070s: IPC::Run returns\n> \n> I expected process creation to be slow on that OS, but it seems like\n> something happening at the end is even slower. CI shows Windows\n> consuming 4 CPUs at 100% for a full 10 minutes to run a test suite\n> that finishes in 2-3 minutes everywhere else with the same number of\n> CPUs.\n\nIt finishes in that time on linux, even with sanitizers enabled...\n\n\n> As an experiment, I hacked up a not-good-enough-to-share experiment\n> where $node->safe_psql() would automatically cache a BackgroundPsql\n> object and reuse it, and the times for that test dropped ~51 -> ~9s on\n> Windows, and ~7 -> ~2s on the Unixen. But even that seems non-ideal\n> (well it's certainly non-ideal the way I hacked it up anyway...). I\n> suppose there are quite a few ways we could do better:\n\nThat's a really impressive win.\n\nEven if we \"just\" converted some of the safe_psql() cases and converted\npoll_query_until() to this, we'd win a lot.\n\n\n> 1. Don't fork anything at all: open (and cache) a connection directly\n> from Perl.\n\nOne advantage of that is that the socket is entirely controlled by perl, so\nwaiting for IO should be easy...\n\n\n> 2b. Maybe give psql or a new libpq-wrapper a new low level stdio/pipe\n> protocol that is more fun to talk to from Perl/machines?\n\nThat does also seem promising - a good chunk of the complexity around some of\nthe IPC::Run uses is that we end up parsing psql input/output...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 30 Aug 2023 15:19:26 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query execution in Perl TAP tests needs work"
},
{
"msg_contents": "On 2023-08-28 Mo 09:23, Andrew Dunstan wrote:\n>\n>\n> On 2023-08-28 Mo 01:29, Thomas Munro wrote:\n>> Hi,\n>>\n>> Every time we run a SQL query, we fork a new psql process and a new\n>> cold backend process. It's not free on Unix, and quite a lot worse on\n>> Windows, at around 70ms per query. Take amcheck/001_verify_heapam for\n>> example. It runs 272 subtests firing off a stream of queries, and\n>> completes in ~51s on Windows (!), and ~6-9s on the various Unixen, on\n>> CI.\n>>\n>> Here are some timestamps I captured from CI by instrumenting various\n>> Perl and C bits:\n>>\n>> 0.000s: IPC::Run starts\n>> 0.023s: postmaster socket sees connection\n>> 0.025s: postmaster has created child process\n>> 0.033s: backend starts running main()\n>> 0.039s: backend has reattached to shared memory\n>> 0.043s: backend connection authorized message\n>> 0.046s: backend has executed and logged query\n>> 0.070s: IPC::Run returns\n>>\n>> I expected process creation to be slow on that OS, but it seems like\n>> something happening at the end is even slower. CI shows Windows\n>> consuming 4 CPUs at 100% for a full 10 minutes to run a test suite\n>> that finishes in 2-3 minutes everywhere else with the same number of\n>> CPUs. Could there be an event handling snafu in IPC::Run or elsewhere\n>> nearby? It seems like there must be either a busy loop or a busted\n>> sleep/wakeup... somewhere? But even if there's a weird bug here\n>> waiting to be discovered and fixed, I guess it'll always be too slow\n>> at ~10ms per process spawned, with two processes to spawn, and it's\n>> bad enough on Unix.\n>>\n>> As an experiment, I hacked up a not-good-enough-to-share experiment\n>> where $node->safe_psql() would automatically cache a BackgroundPsql\n>> object and reuse it, and the times for that test dropped ~51 -> ~9s on\n>> Windows, and ~7 -> ~2s on the Unixen. But even that seems non-ideal\n>> (well it's certainly non-ideal the way I hacked it up anyway...). I\n>> suppose there are quite a few ways we could do better:\n>>\n>> 1. Don't fork anything at all: open (and cache) a connection directly\n>> from Perl.\n>> 1a. Write xsub or ffi bindings for libpq. Or vendor (parts) of the\n>> popular Perl xsub library?\n>> 1b. Write our own mini pure-perl pq client module. Or vendor (parts)\n>> of some existing one.\n>> 2. Use long-lived psql sessions.\n>> 2a. Something building on BackgroundPsql.\n>> 2b. Maybe give psql or a new libpq-wrapper a new low level stdio/pipe\n>> protocol that is more fun to talk to from Perl/machines?\n>>\n>> In some other languages one can do FFI pretty easily so we could use\n>> the in-tree libpq without extra dependencies:\n>>\n>>>>> import ctypes\n>>>>> libpq = ctypes.cdll.LoadLibrary(\"/path/to/libpq.so\")\n>>>>> libpq.PQlibVersion()\n>> 170000\n>>\n>> ... but it seems you can't do either static C bindings or runtime FFI\n>> from Perl without adding a new library/package dependency. I'm not\n>> much of a Perl hacker so I don't have any particular feeling. What\n>> would be best?\n>>\n>> This message brought to you by the Lorax.\n>\n> Thanks for raising this. Windows test times have bothered me for ages.\n>\n> The standard perl DBI library has a connect_cached method. 
Of course \n> we don't want to be dependent on it, especially if we might have \n> changed libpq in what we're testing, and it would place a substantial \n> new burden on testers like buildfarm owners.\n>\n> I like the idea of using a pure perl pq implementation, not least \n> because it could expand our ability to test things at the protocol \n> level. Not sure how much work it would be. I'm willing to help if we \n> want to go that way.\n>\n> Yes you need an external library to use FFI in perl, but there's one \n> that's pretty tiny. See <https://metacpan.org/pod/FFI::Library>. There \n> is also FFI::Platypus, but it involves building a library. OTOH, \n> that's the one that's available standard on my Fedora and Ubuntu \n> systems. I haven't tried using either Maybe we could use some logic \n> that would use the FFI interface if it's available, and fall back on \n> current usage.\n>\n>\n>\n\nI had a brief play with this. Here's how easy it was to wrap libpq in perl:\n\n\n#!/usr/bin/perl\n\nuse strict; use warnings;\n\nuse FFI::Platypus;\n\nmy $ffi = FFI::Platypus->new(api=>1);\n$ffi->lib(\"inst/lib/libpq.so\");\n\n\n$ffi->type('opaque' => 'PGconn');\n$ffi->attach(PQconnectdb => [ 'string' ] => 'PGconn');\n$ffi->attach(PQfinish => [ 'PGconn' ] => 'void');\n\n$ffi->type('opaque' => 'PGresult');\n$ffi->attach(PQexec => [ 'PGconn', 'string' ] => 'PGresult');\n$ffi->attach(PQgetvalue => [ 'PGresult', 'int', 'int' ] => 'string');\n\nmy $pgconn = PQconnectdb(\"dbname=postgres host=/tmp\");\nmy $res = PQexec($pgconn, \"select count(*) from pg_class\");\nmy $count = PQgetvalue( $res, 0, 0);\n\nprint \"count: $count\\n\";\n\nPQfinish($pgconn);\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com",
"msg_date": "Wed, 30 Aug 2023 18:32:30 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query execution in Perl TAP tests needs work"
},
{
"msg_contents": "On Thu, Aug 31, 2023 at 10:32 AM Andrew Dunstan <[email protected]> wrote:\n> #!/usr/bin/perl\n>\n> use strict; use warnings;\n>\n> use FFI::Platypus;\n>\n> my $ffi = FFI::Platypus->new(api=>1);\n> $ffi->lib(\"inst/lib/libpq.so\");\n>\n>\n> $ffi->type('opaque' => 'PGconn');\n> $ffi->attach(PQconnectdb => [ 'string' ] => 'PGconn');\n> $ffi->attach(PQfinish => [ 'PGconn' ] => 'void');\n>\n> $ffi->type('opaque' => 'PGresult');\n> $ffi->attach(PQexec => [ 'PGconn', 'string' ] => 'PGresult');\n> $ffi->attach(PQgetvalue => [ 'PGresult', 'int', 'int' ] => 'string');\n>\n> my $pgconn = PQconnectdb(\"dbname=postgres host=/tmp\");\n> my $res = PQexec($pgconn, \"select count(*) from pg_class\");\n> my $count = PQgetvalue( $res, 0, 0);\n>\n> print \"count: $count\\n\";\n>\n> PQfinish($pgconn);\n\nIt looks very promising so far. How hard would it be for us to add\nthis dependency? Mostly pinging build farm owners?\n\nI'm still on the fence, but the more I know about IPC::Run, the better\nthe various let's-connect-directly-from-Perl options sound...\n\n\n",
"msg_date": "Thu, 31 Aug 2023 13:29:37 +1200",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query execution in Perl TAP tests needs work"
},
{
"msg_contents": "On 2023-08-30 We 21:29, Thomas Munro wrote:\n> On Thu, Aug 31, 2023 at 10:32 AM Andrew Dunstan<[email protected]> wrote:\n>> #!/usr/bin/perl\n>>\n>> use strict; use warnings;\n>>\n>> use FFI::Platypus;\n>>\n>> my $ffi = FFI::Platypus->new(api=>1);\n>> $ffi->lib(\"inst/lib/libpq.so\");\n>>\n>>\n>> $ffi->type('opaque' => 'PGconn');\n>> $ffi->attach(PQconnectdb => [ 'string' ] => 'PGconn');\n>> $ffi->attach(PQfinish => [ 'PGconn' ] => 'void');\n>>\n>> $ffi->type('opaque' => 'PGresult');\n>> $ffi->attach(PQexec => [ 'PGconn', 'string' ] => 'PGresult');\n>> $ffi->attach(PQgetvalue => [ 'PGresult', 'int', 'int' ] => 'string');\n>>\n>> my $pgconn = PQconnectdb(\"dbname=postgres host=/tmp\");\n>> my $res = PQexec($pgconn, \"select count(*) from pg_class\");\n>> my $count = PQgetvalue( $res, 0, 0);\n>>\n>> print \"count: $count\\n\";\n>>\n>> PQfinish($pgconn);\n> It looks very promising so far. How hard would it be for us to add\n> this dependency? Mostly pinging build farm owners?\n>\n> I'm still on the fence, but the more I know about IPC::Run, the better\n> the various let's-connect-directly-from-Perl options sound...\n\n\nHere's some progress. I have put it all in a perl module, which I have \ntested on Windows (both mingw and MSVC) as well as Ubuntu. I think this \nis probably something worth having in itself. I wrapped a substantial \nportion of libpq, but left out things to do with large objects, async \nprocessing, pipelining, SSL and some other things. We can fill in the \ngaps in due course.\n\nThe test program now looks like this:\n\n use strict;\n use warnings;\n\n use lib \".\";\n use PqFFI;\n\n PqFFI::setup(\"inst/lib\");\n\n my $conn = PQconnectdb(\"dbname=postgres host=/tmp\");\n my $res = PQexec($conn, 'select count(*) from pg_class');\n my $count = PQgetvalue($res,0,0);\n print \"$count rows in pg_class\\n\";\n PQfinish($conn);\n\nI guess the next thing would be to test it on a few more platforms and \nalso to see if we need to expand the coverage of libpq for the intended \nuses.\n\nI confess I'm a little reluctant to impose this burden on buildfarm \nowners. We should think about some sort of fallback in case this isn't \nsupported on some platform, either due to technological barriers or \nbuildfarm owner reluctance.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com",
"msg_date": "Sat, 2 Sep 2023 14:42:42 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query execution in Perl TAP tests needs work"
},
{
"msg_contents": "On Sun, Sep 3, 2023 at 6:42 AM Andrew Dunstan <[email protected]> wrote:\n> I guess the next thing would be to test it on a few more platforms and also to see if we need to expand the coverage of libpq for the intended uses.\n\nNice. It works fine on my FreeBSD battlestation after \"sudo pkg\ninstall p5-FFI-Platypus\" and adjusting that lib path. I wonder if\nthere is a nice way to extract those constants from our headers...\n\nIt's using https://sourceware.org/libffi/ under the covers (like most\nother scripting language FFI things), and that knows calling\nconventions for everything we care about including weird OSes and\narchitectures. It might be a slight pain to build it on systems that\nhave no package manager, if cpan can't do it for you? I guess AIX\nwould be the most painful?\n\n(Huh, while contemplating trying that, I just noticed that the GCC\nbuild farm's AIX 7.2 system seems to have given up the ghost a few\nweeks ago. I wonder if it'll come back online with the current\nrelease, or if that's the end... There is still the\noverloaded-to-the-point-of-being-hard-to-interact-with AIX 7.1 (=EOL)\nmachine.)\n\n> I confess I'm a little reluctant to impose this burden on buildfarm owners. We should think about some sort of fallback in case this isn't supported on some platform, either due to technological barriers or buildfarm owner reluctance.\n\nI guess you're thinking that it could be done in such a way that it is\nautomatically used for $node->safe_psql() and various other things if\nPlatypus is detected, but otherwise forking psql as now, for a\ntransition period? Then we could nag build farm owners, and\neventually turn off the fallback stuff after N months. After that it\nwould begin to be possible to use this in more interesting and\nadvanced ways.\n\n\n",
"msg_date": "Sun, 3 Sep 2023 12:17:56 +1200",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query execution in Perl TAP tests needs work"
},
{
"msg_contents": "On 2023-09-02 Sa 20:17, Thomas Munro wrote:\n> On Sun, Sep 3, 2023 at 6:42 AM Andrew Dunstan<[email protected]> wrote:\n>\n>> I confess I'm a little reluctant to impose this burden on buildfarm owners. We should think about some sort of fallback in case this isn't supported on some platform, either due to technological barriers or buildfarm owner reluctance.\n> I guess you're thinking that it could be done in such a way that it is\n> automatically used for $node->safe_psql() and various other things if\n> Platypus is detected, but otherwise forking psql as now, for a\n> transition period? Then we could nag build farm owners, and\n> eventually turn off the fallback stuff after N months. After that it\n> would begin to be possible to use this in more interesting and\n> advanced ways.\n\n\nYeah, that would be ideal. I'll prep a version of the module that \ndoesn't fail if FFI::Platypus isn't available, but instead sets a flag \nwe can test.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2023-09-02 Sa 20:17, Thomas Munro\n wrote:\n\n\nOn Sun, Sep 3, 2023 at 6:42 AM Andrew Dunstan <[email protected]> wrote:\n\n\n\nI confess I'm a little reluctant to impose this burden on buildfarm owners. We should think about some sort of fallback in case this isn't supported on some platform, either due to technological barriers or buildfarm owner reluctance.\n\n\n\nI guess you're thinking that it could be done in such a way that it is\nautomatically used for $node->safe_psql() and various other things if\nPlatypus is detected, but otherwise forking psql as now, for a\ntransition period? Then we could nag build farm owners, and\neventually turn off the fallback stuff after N months. After that it\nwould begin to be possible to use this in more interesting and\nadvanced ways.\n\n\n\nYeah, that would be ideal. I'll prep a version of the module that\n doesn't fail if FFI::Platypus isn't available, but instead sets a\n flag we can test.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Mon, 4 Sep 2023 11:25:39 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query execution in Perl TAP tests needs work"
},
{
"msg_contents": "On Sun, Sep 3, 2023 at 12:17 PM Thomas Munro <[email protected]> wrote:\n> (Huh, while contemplating trying that, I just noticed that the GCC\n> build farm's AIX 7.2 system seems to have given up the ghost a few\n> weeks ago. I wonder if it'll come back online with the current\n> release, or if that's the end... There is still the\n> overloaded-to-the-point-of-being-hard-to-interact-with AIX 7.1 (=EOL)\n> machine.)\n\nFTR it (gcc119) appears to have come back online, now upgraded to AIX\n7.3. No reports from \"hoverfly\" (I think it was on that host?). It\nprobably needs some attention to start working again after the\nupgrade.\n\n\n",
"msg_date": "Tue, 12 Sep 2023 08:52:50 +1200",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query execution in Perl TAP tests needs work"
},
{
"msg_contents": "Hi,\n\nOn 2023-08-28 17:29:56 +1200, Thomas Munro wrote:\n> As an experiment, I hacked up a not-good-enough-to-share experiment\n> where $node->safe_psql() would automatically cache a BackgroundPsql\n> object and reuse it, and the times for that test dropped ~51 -> ~9s on\n> Windows, and ~7 -> ~2s on the Unixen. But even that seems non-ideal\n> (well it's certainly non-ideal the way I hacked it up anyway...). I\n> suppose there are quite a few ways we could do better:\n> \n> 1. Don't fork anything at all: open (and cache) a connection directly\n> from Perl.\n> 1a. Write xsub or ffi bindings for libpq. Or vendor (parts) of the\n> popular Perl xsub library?\n> 1b. Write our own mini pure-perl pq client module. Or vendor (parts)\n> of some existing one.\n> 2. Use long-lived psql sessions.\n> 2a. Something building on BackgroundPsql.\n> 2b. Maybe give psql or a new libpq-wrapper a new low level stdio/pipe\n> protocol that is more fun to talk to from Perl/machines?\n\nWhile we can't easily use plain long-lived psql everywhere, due to tests\ndepending on no additional connections being present, we could at least\npartially address that by adding a \\disconnect to psql. Based on your numbers\nthe subprocess that IPC::Run wraps around psql on windows is a substantial\npart of the overhead. Even if we default to reconnecting after every\n->psql(), just saving the fork of the wrapper process and psql should reduce\ncosts substantially.\n\nFamous last words, but it seems like that it should be quite doable to add\nthat to psql and use it in Cluster->{psql,safe_psql,poll_query_until}? There\nmight be a few cases that expect the full connection error message, but I\ncan't imagine that to be too many?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 12 Sep 2023 21:14:12 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query execution in Perl TAP tests needs work"
},
{
"msg_contents": "On Sat, Sep 2, 2023 at 2:42 PM Andrew Dunstan <[email protected]> wrote:\n> I confess I'm a little reluctant to impose this burden on buildfarm owners. We should think about some sort of fallback in case this isn't supported on some platform, either due to technological barriers or buildfarm owner reluctance.\n\nHow much burden is it? Would anyone actually mind?\n\nI definitely don't want to put ourselves in a situation where we add a\nbunch of annoying dependencies that are required to be able to run the\ntests, not just because it will inconvenience buildfarm owners, but\nalso because it will potentially inconvenience developers, and in\nparticular, me. At the same time, fallbacks can be a problem too,\nbecause then you can end up with things that work one way on one\ndeveloper's machine (or BF machine) and another way on another\ndeveloper's machine (or BF machine) and it's not obvious that the\nreason for the difference is that one machine is using a fallback and\nthe other is not. I feel like this tends to create so much aggravation\nin practice that it's best not to have fallbacks in this kind of\nsituation - my vote is that we either stick with the current method\nand live with the performance characteristics thereof, or we put in\nplace something that is faster and better and that new thing becomes a\nhard dependency for anyone who wants to be able to run the TAP tests.\n\nIn terms of what that faster and better thing might be, AFAICS, there\nare two main options. First, we could proceed with the approach you've\ntried here, namely requiring everybody to get Platypus::FFI. I find\nthat it's included in MacPorts on my machine, which is at least\nsomewhat of a good sign that perhaps this is fairly widely available.\nThat might need more investigation, though. Second, we could write a\npure-Perl implementation, as you proposed earlier. That would be more\nwork to write and maintain, but would avoid needing FFI. Personally, I\nfeel like either an FFI-based approach or a pure-Perl approach would\nbe pretty reasonable, as long as Platypus::FFI is widely\navailable/usable. If we go with pure Perl, the hard part might be\nmanaging the authentication methods, but as Thomas pointed out to me\nyesterday, we now have UNIX sockets on Windows, and thus everywhere,\nso maybe we could get to a point where the pure-Perl implementation\nwouldn't need to do any non-trivial authentication.\n\nAnother thing, also already mentioned, that we can do is cache psql\nconnections instead of continuously respawing psql. That doesn't\nrequire any fundamentally new mechanism, and in some sense it's\nindependent of the approaches above, because they could be implemented\nwithout caching connections, but they would benefit from caching\nconnections, as the currently psql-based approach also does. I think\nit would be good to introduce new syntax for this, e.g.:\n\n$conn_primary = $node_primary->connect();\n$conn_primary->simple_query('whatever');\n$conn_primary->simple_query('whatever 2');\n$conn_primary->disconnect();\n\nSomething like this would require a fairly large amount of mechanical\nwork to implement across all of our TAP test cases, but I think it\nwould be effort well spent. If we try to introduce connection caching\n\"transparently,\" I think it will turn into another foot-gun that\npeople keep getting wrong because they don't realize there is magic\nunder the hood, or forget how it works.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 18 Oct 2023 10:03:06 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query execution in Perl TAP tests needs work"
},
{
"msg_contents": "Robert Haas <[email protected]> writes:\n> On Sat, Sep 2, 2023 at 2:42 PM Andrew Dunstan <[email protected]> wrote:\n>>> How much burden is it? Would anyone actually mind?\n\n> ... At the same time, fallbacks can be a problem too,\n> because then you can end up with things that work one way on one\n> developer's machine (or BF machine) and another way on another\n> developer's machine (or BF machine) and it's not obvious that the\n> reason for the difference is that one machine is using a fallback and\n> the other is not.\n\nI agree with this worry.\n\n> In terms of what that faster and better thing might be, AFAICS, there\n> are two main options. First, we could proceed with the approach you've\n> tried here, namely requiring everybody to get Platypus::FFI. I find\n> that it's included in MacPorts on my machine, which is at least\n> somewhat of a good sign that perhaps this is fairly widely available.\n\nI did a bit of research on this on my favorite platforms, and did\nnot like the results:\n\nRHEL8: does not seem to be packaged at all.\n\nFedora 37: available, but the dependencies are a bit much:\n\n$ sudo yum install perl-FFI-Platypus\nLast metadata expiration check: 2:07:42 ago on Wed Oct 18 08:05:40 2023.\nDependencies resolved.\n================================================================================\n Package Architecture Version Repository Size\n================================================================================\nInstalling:\n perl-FFI-Platypus x86_64 2.08-1.fc37 updates 417 k\nInstalling dependencies:\n libgfortran x86_64 12.3.1-1.fc37 updates 904 k\n libquadmath x86_64 12.3.1-1.fc37 updates 206 k\n libquadmath-devel x86_64 12.3.1-1.fc37 updates 48 k\n perl-FFI-CheckLib noarch 0.29-2.fc37 updates 29 k\nInstalling weak dependencies:\n gcc-gfortran x86_64 12.3.1-1.fc37 updates 12 M\n\nTransaction Summary\n================================================================================\nInstall 6 Packages\n\nTotal download size: 14 M\nInstalled size: 37 M\nIs this ok [y/N]: \n\ngfortran? Really?? I mean, I don't care about the disk space,\nbut this is not promising for anyone who has to build it themselves.\n\nSo I'm afraid that requiring Platypus::FFI might be a bridge too\nfar for a lot of our older buildfarm machines.\n\n> Another thing, also already mentioned, that we can do is cache psql\n> connections instead of continuously respawing psql.\n\nThis seems like it's worth thinking about. I agree with requiring\nthe re-use to be explicit within a TAP test, else we might have\nmysterious behavioral changes (akin to connection-pooler-induced\nbugs).\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 18 Oct 2023 10:27:59 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query execution in Perl TAP tests needs work"
},
{
"msg_contents": "Tom Lane <[email protected]> writes:\n\n> Robert Haas <[email protected]> writes:\n>> On Sat, Sep 2, 2023 at 2:42 PM Andrew Dunstan <[email protected]> wrote:\n>>>> How much burden is it? Would anyone actually mind?\n>\n>> ... At the same time, fallbacks can be a problem too,\n>> because then you can end up with things that work one way on one\n>> developer's machine (or BF machine) and another way on another\n>> developer's machine (or BF machine) and it's not obvious that the\n>> reason for the difference is that one machine is using a fallback and\n>> the other is not.\n>\n> I agree with this worry.\n>\n>> In terms of what that faster and better thing might be, AFAICS, there\n>> are two main options. First, we could proceed with the approach you've\n>> tried here, namely requiring everybody to get Platypus::FFI. I find\n>> that it's included in MacPorts on my machine, which is at least\n>> somewhat of a good sign that perhaps this is fairly widely available.\n>\n> I did a bit of research on this on my favorite platforms, and did\n> not like the results:\n>\n> RHEL8: does not seem to be packaged at all.\n>\n> Fedora 37: available, but the dependencies are a bit much:\n>\n> $ sudo yum install perl-FFI-Platypus\n> Last metadata expiration check: 2:07:42 ago on Wed Oct 18 08:05:40 2023.\n> Dependencies resolved.\n> ================================================================================\n> Package Architecture Version Repository Size\n> ================================================================================\n> Installing:\n> perl-FFI-Platypus x86_64 2.08-1.fc37 updates 417 k\n> Installing dependencies:\n> libgfortran x86_64 12.3.1-1.fc37 updates 904 k\n> libquadmath x86_64 12.3.1-1.fc37 updates 206 k\n> libquadmath-devel x86_64 12.3.1-1.fc37 updates 48 k\n> perl-FFI-CheckLib noarch 0.29-2.fc37 updates 29 k\n> Installing weak dependencies:\n> gcc-gfortran x86_64 12.3.1-1.fc37 updates 12 M\n>\n> Transaction Summary\n> ================================================================================\n> Install 6 Packages\n>\n> Total download size: 14 M\n> Installed size: 37 M\n> Is this ok [y/N]: \n>\n> gfortran? Really?? I mean, I don't care about the disk space,\n> but this is not promising for anyone who has to build it themselves.\n\nThe Fortran support for FFI::Platypus is in a separate CPAN distribution\n(FFI-Platypus-Lang-Fortran), so that must be some quirk of the Fedora\npackaging and not a problem for people building it themselves. They\njust need libffi and a working Perl/CPAN setup.\n\nOn Debian the only things besides Perl and core perl modules it\n(build-)depends on are libffi, Capture::Tiny, FFI::Checklib (which\ndepends on File::Which), Test2::Suite and pkg-config.\n\n- ilmari\n\n\n",
"msg_date": "Wed, 18 Oct 2023 16:06:59 +0100",
"msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query execution in Perl TAP tests needs work"
},
{
"msg_contents": "On Wed, Oct 18, 2023 at 10:28 AM Tom Lane <[email protected]> wrote:\n> I did a bit of research on this on my favorite platforms, and did\n> not like the results:\n\nHmm. That's unfortunate. Is perl -MCPAN -e 'install Platypus::FFI' a\nviable alternative?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 18 Oct 2023 11:41:28 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query execution in Perl TAP tests needs work"
},
{
"msg_contents": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <[email protected]> writes:\n> Tom Lane <[email protected]> writes:\n>> gfortran? Really?? I mean, I don't care about the disk space,\n>> but this is not promising for anyone who has to build it themselves.\n\n> The Fortran support for FFI::Platypus is in a separate CPAN distribution\n> (FFI-Platypus-Lang-Fortran), so that must be some quirk of the Fedora\n> packaging and not a problem for people building it themselves.\n\nAh, they must've decided to combine FFI-Platypus-Lang-Fortran into the\nsame RPM. Not quite as bad then, since clearly we don't need that.\n\nAnother thing to worry about is whether it runs on the oldest\nperl versions we support. I tried it on a 5.14.0 installation,\nand it at least compiles and passes its self-test, so that's\npromising. \"cpanm install FFI::Platypus\" took about 5 minutes\n(on a 2012 mac mini, not the world's fastest machine). The\nlist of dependencies it pulled in is still kinda long:\n\nCapture-Tiny-0.48\nExtUtils-ParseXS-3.51\nTest-Simple-1.302195\nconstant-1.33\nScalar-List-Utils-1.63\nTerm-Table-0.017\nTest2-Suite-0.000156\nFile-Which-1.27\nFFI-CheckLib-0.31\nTry-Tiny-0.31\nTest-Fatal-0.017\nTest-Needs-0.002010\nTest-Warnings-0.032\nURI-5.21\nAlgorithm-Diff-1.201\nText-Diff-1.45\nSpiffy-0.46\nTest-Base-0.89\nTest-YAML-1.07\nTest-Deep-1.204\nYAML-1.30\nFile-chdir-0.1011\nPath-Tiny-0.144\nAlien-Build-2.80\nAlien-Build-Plugin-Download-GitHub-0.10\nNet-SSLeay-1.92\nHTTP-Tiny-0.088\nMozilla-CA-20230821\nIO-Socket-SSL-2.083\nMojo-DOM58-3.001\nAlien-FFI-0.27\nFFI-Platypus-2.08\n\nA couple of these are things a buildfarm animal would need anyway,\nbut on the whole it seems like this is pretty far up the food chain\ncompared to our previous set of TAP dependencies (only three\nnon-core-Perl modules).\n\nStill, writing our own equivalent is probably more work than it's\nworth, if this is a suitable solution in all other respects.\n\nHaving said that ... I read the man page for FFI::Platypus,\nand I'm failing to draw a straight line between what it can do\nand what we need. Aren't we going to need a big chunk of new\nPerl code anyway? If so, why not write a big chunk of new Perl\nthat doesn't have new external dependencies?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 18 Oct 2023 11:43:29 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query execution in Perl TAP tests needs work"
},
{
"msg_contents": "Robert Haas <[email protected]> writes:\n> On Wed, Oct 18, 2023 at 10:28 AM Tom Lane <[email protected]> wrote:\n>> I did a bit of research on this on my favorite platforms, and did\n>> not like the results:\n\n> Hmm. That's unfortunate. Is perl -MCPAN -e 'install Platypus::FFI' a\n> viable alternative?\n\nProbably, see my followup.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 18 Oct 2023 11:47:17 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query execution in Perl TAP tests needs work"
},
{
"msg_contents": "On Wed, Oct 18, 2023 at 11:43 AM Tom Lane <[email protected]> wrote:\n> Having said that ... I read the man page for FFI::Platypus,\n> and I'm failing to draw a straight line between what it can do\n> and what we need. Aren't we going to need a big chunk of new\n> Perl code anyway? If so, why not write a big chunk of new Perl\n> that doesn't have new external dependencies?\n\nI think that the question here is what exactly the Perl code that we'd\nhave to write would be doing.\n\nIf we use FFI::Platypus, our Perl code can directly call libpq\nfunctions. The Perl code would just be a wrapper around those function\ncalls, basically. I'm sure there's some work to be done there but a\nlot of it is probably boilerplate.\n\nWithout FFI::Platypus, we have to write Perl code that can speak the\nwire protocol directly. Basically, we're writing our own PostgreSQL\ndriver for Perl, though we might need only a subset of the things a\nreal driver would need to handle, and we might add some extra things,\nlike code that can send intentionally botched protocol messages.\n\nI think it's a judgement call which is better. Depending on\nFFI::Platypus is annoying, because nobody likes dependencies. But\nwriting a new implementation of the wire protocol is probably more\nwork, and once we wrote it, we'd also need to maintain it and debug it\nand stuff. We would probably be able to gain some test coverage of\nsituations that libpq won't let you create, but we would also perhaps\nlose some test coverage for libpq itself.\n\nI feel like either way is a potentially viable way forward, and where\nwe end up might end up depending on who is willing to do the work and\nwhat that person would prefer to do.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 18 Oct 2023 11:53:21 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query execution in Perl TAP tests needs work"
},
{
"msg_contents": "On 2023-Oct-18, Robert Haas wrote:\n\n> Without FFI::Platypus, we have to write Perl code that can speak the\n> wire protocol directly. Basically, we're writing our own PostgreSQL\n> driver for Perl, though we might need only a subset of the things a\n> real driver would need to handle, and we might add some extra things,\n> like code that can send intentionally botched protocol messages.\n\nWe could revive the old src/interfaces/perl5 code, which was a libpq\nwrapper -- at least the subset of it that the tests need. It was moved\nto gborg by commit 9a0b4d7f8474 and a couple more versions were made\nthere, which can be seen at\nhttps://www.postgresql.org/ftp/projects/gborg/pgperl/stable/,\nversion 2.1.1 being apparently the latest. The complete driver was\nabout 3000 lines, judging by the commit that removed it. Presumably we\ndon't need the whole of that.\n\nApparently the project was migrated from gborg to pgFoundry at some\npoint, because this exists\nhttps://www.postgresql.org/ftp/projects/pgFoundry/pgperl/\nand maybe they did some additional changes there, but at least\nour FTP site doesn't show anything. Perhaps there were never any\nreleases, and we don't have the CVSROOT. But I doubt any changes at\nthat point would have been critical.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Wed, 18 Oct 2023 18:25:01 +0200",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query execution in Perl TAP tests needs work"
},
{
"msg_contents": "On 2023-10-18 We 11:47, Tom Lane wrote:\n> Robert Haas <[email protected]> writes:\n>> On Wed, Oct 18, 2023 at 10:28 AM Tom Lane <[email protected]> wrote:\n>>> I did a bit of research on this on my favorite platforms, and did\n>>> not like the results:\n>> Hmm. That's unfortunate. Is perl -MCPAN -e 'install Platypus::FFI' a\n>> viable alternative?\n> Probably, see my followup.\n>\n> \t\t\t\n\n\nInteresting. OK, here's an attempt to push the cart a bit further down \nthe road. The attached module wraps quite a lot of libpq, at least \nenough for most of the cases we would be interested in, I think. It also \nexports some constants such as connection status values, query status \nvalues, transaction status values and type Oids. It also makes the \nabsence of FFI::Platypus not instantly fatal, but any attempt to use one \nof the wrapped functions will die with a message about the module being \nmissing if it's not found.\n\nI guess the next step would be for someone to locate some of the \nhotspots in the TAP tests and try to convert them to using persistent \nconnections with this gadget or similar and see how much faster we can \nmake them.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Tue, 24 Oct 2023 08:48:58 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query execution in Perl TAP tests needs work"
},
{
"msg_contents": "On Wed, 18 Oct 2023 18:25:01 +0200\nAlvaro Herrera <[email protected]> wrote:\n\n> On 2023-Oct-18, Robert Haas wrote:\n> \n> > Without FFI::Platypus, we have to write Perl code that can speak the\n> > wire protocol directly. Basically, we're writing our own PostgreSQL\n> > driver for Perl, though we might need only a subset of the things a\n> > real driver would need to handle, and we might add some extra things,\n> > like code that can send intentionally botched protocol messages. \n> \n> We could revive the old src/interfaces/perl5 code, which was a libpq\n> wrapper -- at least the subset of it that the tests need. It was moved\n> to gborg by commit 9a0b4d7f8474 and a couple more versions were made\n> there, which can be seen at\n> https://www.postgresql.org/ftp/projects/gborg/pgperl/stable/,\n> version 2.1.1 being apparently the latest. The complete driver was\n> about 3000 lines, judging by the commit that removed it. Presumably we\n> don't need the whole of that.\n\n+1 to test this. I can give it some time to revive it and post results here if\nyou agree and no one think of some show stopper.\n\n\n",
"msg_date": "Mon, 6 Nov 2023 11:14:25 +0100",
"msg_from": "Jehan-Guillaume de Rorthais <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query execution in Perl TAP tests needs work"
},
{
"msg_contents": "On Wed, Aug 30, 2023 at 09:48:42AM +1200, Thomas Munro wrote:\n> On Wed, Aug 30, 2023 at 1:49 AM Noah Misch <[email protected]> wrote:\n> > On Tue, Aug 29, 2023 at 04:25:24PM +1200, Thomas Munro wrote:\n> > > On Tue, Aug 29, 2023 at 1:48 PM Noah Misch <[email protected]> wrote:\n> > > > https://github.com/cpan-authors/IPC-Run/issues/166#issuecomment-1288190929\n> > >\n> > > Interesting. But that shows a case with no pipes connected, using\n> > > select() as a dumb sleep and ignoring SIGCHLD. In our usage we have\n> > > pipes connected, and I think select() should return when the child's\n> > > output pipes become readable due to EOF. I guess something about that\n> > > might be b0rked on Windows? I see there is an extra helper process\n> > > doing socket<->pipe conversion (hah, that explains an extra ~10ms at\n> > > the start in my timestamps)...\n> >\n> > In that case, let's assume it's not the same issue.\n> \n> Yeah, I think it amounts to the same thing, if EOF never arrives.\n> \n> I suspect that we could get ->safe_psql() down to about ~25ms baseline\n> if someone could fix the posited IPC::Run EOF bug\n\nI pushed optimizations in https://github.com/cpan-authors/IPC-Run/pull/172\nthat make the TAP portion of \"make check-world\" 7% faster on my GNU/Linux\nmachine. I didn't confirm an EOF bug, but that change also reduces Windows\nidle time in simple tests. I didn't run Windows check-world with it. For\nnon-Windows, we can get almost all the benefit from the attached one-liner.\n(The relative benefit is probably lower for parallel check-world, where idle\nthreads matter less, and for slower machines.)",
"msg_date": "Sat, 30 Mar 2024 22:03:10 -0700",
"msg_from": "Noah Misch <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query execution in Perl TAP tests needs work"
}
] |
[
{
"msg_contents": "I normally pull from\n https://git.postgresql.org/git/postgresql.git/\n\nbut for a few hours now it's been failing (while other git repo's are \nstill reachable).\n\nIs it me or is there a hiccup there?\n\nthanks,\n\nErik Rijkers\n\n\n\n",
"msg_date": "Mon, 28 Aug 2023 08:09:44 +0200",
"msg_from": "Erik Rijkers <[email protected]>",
"msg_from_op": true,
"msg_subject": "https://git.postgresql.org/git/postgresql.git/ fails"
},
{
"msg_contents": "On 28.08.23 08:09, Erik Rijkers wrote:\n> I normally pull from\n> https://git.postgresql.org/git/postgresql.git/\n> \n> but for a few hours now it's been failing (while other git repo's are \n> still reachable).\n> \n> Is it me or is there a hiccup there?\n\nI see the same problem. Also, the buildfarm hasn't reported anything \nfor a few hours.\n\n\n\n",
"msg_date": "Mon, 28 Aug 2023 08:45:39 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: https://git.postgresql.org/git/postgresql.git/ fails"
}
] |
[
{
"msg_contents": "While translating a message, I found a questionable behavior in \\d+,\nintroduced by a recent commit b0e96f3119. In short, the current code\nhides the constraint's origin when \"NO INHERIT\" is used.\n\nFor these tables:\n\ncreate table p (a int, b int not null default 0);\ncreate table c1 (a int, b int not null default 1) inherits (p);\n\nThe output from \"\\d+ c1\" contains the lines:\n\n> Not-null constraints:\n> \"c1_b_not_null\" NOT NULL \"b\" *(local, inherited)*\n\nBut with these tables:\n\ncreate table p (a int, b int not null default 0);\ncreate table c1 (a int, b int not null NO INHERIT default 1) inherits (p);\n\nI get:\n\n> Not-null constraints:\n> \"c1_b_not_null\" NOT NULL \"b\" *NO INHERIT*\n\nHere, \"NO INHERIT\" is mapped from connoinherit, and conislocal and\n\"coninhcount <> 0\" align with \"local\" and \"inherited\". For a clearer\npicuture, those values for c1 are as follows.\n\n=# SELECT co.conname, at.attname, co.connoinherit, co.conislocal, co.coninhcount FROM pg_catalog.pg_constraint co JOIN pg_catalog.pg_attribute at ON (at.attnum = co.conkey[1]) WHERE co.contype = 'n' AND co.conrelid = 'c1'::pg_catalog.regclass AND at.attrelid = 'c1'::pg_catalog.regclass ORDER BY at.attnum;\n conname | attname | connoinherit | conislocal | coninhcount \n---------------+---------+--------------+------------+-------------\n c1_b_not_null | b | t | t | 1\n\nIt feels off to me, but couldn't find any discussion about it. Is it\nthe intended behavior? I believe it's more appropriate to show the\norigins even when specifed as NO INHERIT.\n\n======\nIf not so, the following change might be possible, which is quite simple.\n\n> Not-null constraints:\n> \"c1_b_not_null\" NOT NULL \"b\" NO INHERIT(local, inherited)\n\nHowever, it looks somewhat strange as the information in parentheses\nis not secondary to \"NO INHERIT\". Thus, perhaps a clearer or more\nproper representation would be:\n\n> \"c1_b_not_null\" NOT NULL \"b\" (local, inherited, not inheritable)\n\nThat being said, I don't come up with a simple way to do this for now..\n(Note that we need to translate the puctuations and the words.)\n\nThere's no need to account for all combinations. \"Local\" and\n\"inherited\" don't be false at the same time and the combination (local\n& !inherited) is not displayed. Given these factors, we're left with 6\npossible combinations, which I don't think aren't worth the hassle:\n\n(local, inherited, not inheritable)\n(inherited, not inheritable) # I didn't figure out how to cause this.\n(not inheritable)\n(local, inherited)\n(inherited)\n\"\" (empty string, means local)\n\nA potential solution that comes to mind is presenting the attributes\nin a space sparated list after a colon as attached. (Honestly, I'm not\nfond of the format and the final term, though.)\n\n> \"c1_b_not_null\" NOT NULL \"b\": local inherited uninheritable\n\nIn 0001, I did wonder about hiding \"local\" when it's not inherited,\nbut this behavior rfollows existing code.\n\nIn 0002, I'm not completely satisfied with the location, but standard\nregression test suite seems more suitable for this check than the TAP\ntest suite used for testing psql.\n\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Mon, 28 Aug 2023 16:16:58 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <[email protected]>",
"msg_from_op": true,
"msg_subject": "Strange presentaion related to inheritance in \\d+"
},
{
"msg_contents": "On 2023-Aug-28, Kyotaro Horiguchi wrote:\n\n> But with these tables:\n> \n> create table p (a int, b int not null default 0);\n> create table c1 (a int, b int not null NO INHERIT default 1) inherits (p);\n> \n> I get:\n> \n> > Not-null constraints:\n> > \"c1_b_not_null\" NOT NULL \"b\" *NO INHERIT*\n> \n> Here, \"NO INHERIT\" is mapped from connoinherit, and conislocal and\n> \"coninhcount <> 0\" align with \"local\" and \"inherited\". For a clearer\n> picuture, those values for c1 are as follows.\n\nHmm, I think the bug here is that we let you create a constraint in c1\nthat is NOINHERIT. If the parent already has one INHERIT constraint\nin that column, then the child must have that one also; it's not\npossible to have both a constraint that inherits and one that doesn't.\n\nI understand that there are only three possibilities for a NOT NULL\nconstraint in a column:\n\n- There's a NO INHERIT constraint. A NO INHERIT constraint is always\n defined locally in that table. In this case, if there is a parent\n relation, then it must either not have a NOT NULL constraint in that\n column, or it may also have a NO INHERIT one. Therefore, it's\n correct to print NO INHERIT and nothing else. We could also print\n \"(local)\" but I see no point in doing that.\n\n- A constraint comes inherited from one or more parent tables and has no\n local definition. In this case, the constraint always inherits\n (otherwise, the parent wouldn't have given it to this table). So\n printing \"(inherited)\" and nothing else is correct.\n\n- A constraint can have a local definition and also be inherited. In\n this case, printing \"(local, inherited)\" is correct.\n\nHave I missed other cases?\n\n\nThe NO INHERIT bit is part of the syntax, which is why I put it in\nuppercase and not marked it for translation. The other two are\ninformational, so they are translatable.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"The important things in the world are problems with society that we don't\nunderstand at all. The machines will become more complicated but they won't\nbe more complicated than the societies that run them.\" (Freeman Dyson)\n\n\n",
"msg_date": "Mon, 28 Aug 2023 13:36:00 +0200",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Strange presentaion related to inheritance in \\d+"
},
{
"msg_contents": "At Mon, 28 Aug 2023 13:36:00 +0200, Alvaro Herrera <[email protected]> wrote in \n> On 2023-Aug-28, Kyotaro Horiguchi wrote:\n> \n> > But with these tables:\n> > \n> > create table p (a int, b int not null default 0);\n> > create table c1 (a int, b int not null NO INHERIT default 1) inherits (p);\n> > \n> > I get:\n> > \n> > > Not-null constraints:\n> > > \"c1_b_not_null\" NOT NULL \"b\" *NO INHERIT*\n> > \n> > Here, \"NO INHERIT\" is mapped from connoinherit, and conislocal and\n> > \"coninhcount <> 0\" align with \"local\" and \"inherited\". For a clearer\n> > picuture, those values for c1 are as follows.\n> \n> Hmm, I think the bug here is that we let you create a constraint in c1\n> that is NOINHERIT. If the parent already has one INHERIT constraint\n> in that column, then the child must have that one also; it's not\n> possible to have both a constraint that inherits and one that doesn't.\n\nYeah, I had the same question about the coexisting of the two.\n\n> I understand that there are only three possibilities for a NOT NULL\n> constraint in a column:\n> \n> - There's a NO INHERIT constraint. A NO INHERIT constraint is always\n> defined locally in that table. In this case, if there is a parent\n> relation, then it must either not have a NOT NULL constraint in that\n> column, or it may also have a NO INHERIT one. Therefore, it's\n> correct to print NO INHERIT and nothing else. We could also print\n> \"(local)\" but I see no point in doing that.\n> \n> - A constraint comes inherited from one or more parent tables and has no\n> local definition. In this case, the constraint always inherits\n> (otherwise, the parent wouldn't have given it to this table). So\n> printing \"(inherited)\" and nothing else is correct.\n> \n> - A constraint can have a local definition and also be inherited. In\n> this case, printing \"(local, inherited)\" is correct.\n> \n> Have I missed other cases?\n\nSeems correct. I don't see another case given that NO INHERIT is\ninhibited when a table has an inherited constraint.\n\n> The NO INHERIT bit is part of the syntax, which is why I put it in\n> uppercase and not marked it for translation. The other two are\n> informational, so they are translatable.\n\nGiven the conditions above, I agree with you.\n\nAttached is the initial version of the patch. It prevents \"CREATE\nTABLE\" from executing if there is an inconsisntent not-null\nconstraint. Also I noticed that \"ALTER TABLE t ADD NOT NULL c NO\nINHERIT\" silently ignores the \"NO INHERIT\" part and fixed it.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Tue, 29 Aug 2023 13:53:34 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Strange presentaion related to inheritance in \\d+"
},
{
"msg_contents": "On 2023-Aug-29, Kyotaro Horiguchi wrote:\n\n> Attached is the initial version of the patch. It prevents \"CREATE\n> TABLE\" from executing if there is an inconsisntent not-null\n> constraint. Also I noticed that \"ALTER TABLE t ADD NOT NULL c NO\n> INHERIT\" silently ignores the \"NO INHERIT\" part and fixed it.\n\nGreat, thank you. I pushed it after modifying it a bit -- instead of\nthrowing the error in MergeAttributes, I did it in\nAddRelationNotNullConstraints(). It seems cleaner this way, mostly\nbecause we already have to match these two constraints there. (I guess\nyou could argue that we waste catalog-insertion work before the error is\nreported and the whole thing is aborted; but I don't think this is a\nserious problem in practice.)\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"En las profundidades de nuestro inconsciente hay una obsesiva necesidad\nde un universo lógico y coherente. Pero el universo real se halla siempre\nun paso más allá de la lógica\" (Irulan)\n\n\n",
"msg_date": "Tue, 29 Aug 2023 19:28:28 +0200",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Strange presentaion related to inheritance in \\d+"
},
{
"msg_contents": "At Tue, 29 Aug 2023 19:28:28 +0200, Alvaro Herrera <[email protected]> wrote in \n> On 2023-Aug-29, Kyotaro Horiguchi wrote:\n> \n> > Attached is the initial version of the patch. It prevents \"CREATE\n> > TABLE\" from executing if there is an inconsisntent not-null\n> > constraint. Also I noticed that \"ALTER TABLE t ADD NOT NULL c NO\n> > INHERIT\" silently ignores the \"NO INHERIT\" part and fixed it.\n> \n> Great, thank you. I pushed it after modifying it a bit -- instead of\n> throwing the error in MergeAttributes, I did it in\n> AddRelationNotNullConstraints(). It seems cleaner this way, mostly\n> because we already have to match these two constraints there. (I guess\n\nI agree that it is cleaner.\n\n> you could argue that we waste catalog-insertion work before the error is\n> reported and the whole thing is aborted; but I don't think this is a\n> serious problem in practice.)\n\nGiven the rarity and the speed required, I agree that early-catching\nis not that crucial here. Thanks for clearing that up.\n\nregardes.\n\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 30 Aug 2023 09:46:58 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Strange presentaion related to inheritance in \\d+"
}
] |
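For illustration, here is a short sketch of the behavior the committed fix aims for, based on the discussion in this thread; the table names follow the thread's own example and the exact error wording depends on the server version, so it is not reproduced here:

```
-- The parent carries an inheritable NOT NULL constraint on "b".
create table p (a int, b int not null default 0);

-- Before the fix this was accepted and led to the confusing "NO INHERIT"
-- display in \d+; with the fix, declaring a NO INHERIT not-null on a
-- column that must inherit a not-null constraint is rejected with an error.
create table c1 (a int, b int not null no inherit default 1) inherits (p);
```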
[
{
"msg_contents": "PQputCopyEnd returns 1 or -1, never 0, I guess the comment was\ncopy/paste from PQputCopyData's comment, this should be fixed.\n\n-- \nRegards\nJunwang Zhao",
"msg_date": "Mon, 28 Aug 2023 17:58:43 +0800",
"msg_from": "Junwang Zhao <[email protected]>",
"msg_from_op": true,
"msg_subject": "[PATCH v1] PQputCopyEnd never returns 0, fix the inaccurate comment"
},
{
"msg_contents": "Hi Junwang,\n\n> PQputCopyEnd returns 1 or -1, never 0, I guess the comment was\n> copy/paste from PQputCopyData's comment, this should be fixed.\n\nThe patch LGTM but I wonder whether we should also change all the\nexisting calls of PQputCopyEnd() from:\n\n```\nPQputCopyEnd(...) <= 0\n```\n\n... to:\n\n```\nPQputCopyEnd(...) < 0\n```\n\nGiven the circumstances, checking for equality to zero seems to be at\nleast strange.\n\n\nOn top of that, none of the PQputCopyData() callers cares whether the\nfunction returns 0 or -1, both are treated the same way. I suspect the\nfunction does some extra work no one asked to do and no one cares\nabout. Perhaps this function should be refactored too for consistency.\n\nThoughts?\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Mon, 28 Aug 2023 14:48:05 +0300",
"msg_from": "Aleksander Alekseev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH v1] PQputCopyEnd never returns 0,\n fix the inaccurate comment"
},
{
"msg_contents": "On Mon, Aug 28, 2023 at 7:48 PM Aleksander Alekseev\n<[email protected]> wrote:\n>\n> Hi Junwang,\n>\n> > PQputCopyEnd returns 1 or -1, never 0, I guess the comment was\n> > copy/paste from PQputCopyData's comment, this should be fixed.\n>\n> The patch LGTM but I wonder whether we should also change all the\n> existing calls of PQputCopyEnd() from:\n>\n> ```\n> PQputCopyEnd(...) <= 0\n> ```\n>\n> ... to:\n>\n> ```\n> PQputCopyEnd(...) < 0\n> ```\n>\n> Given the circumstances, checking for equality to zero seems to be at\n> least strange.\n>\n\nYeah, it makes sense to me, or maybe just `PQputCopyEnd(...) == -1`,\nlet's wait for some other opinions.\n\n>\n> On top of that, none of the PQputCopyData() callers cares whether the\n> function returns 0 or -1, both are treated the same way. I suspect the\n> function does some extra work no one asked to do and no one cares\n> about. Perhaps this function should be refactored too for consistency.\n>\n> Thoughts?\n>\n> --\n> Best regards,\n> Aleksander Alekseev\n\n\n\n-- \nRegards\nJunwang Zhao\n\n\n",
"msg_date": "Mon, 28 Aug 2023 21:46:07 +0800",
"msg_from": "Junwang Zhao <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH v1] PQputCopyEnd never returns 0,\n fix the inaccurate comment"
},
{
"msg_contents": "On Mon, Aug 28, 2023 at 09:46:07PM +0800, Junwang Zhao wrote:\n> Yeah, it makes sense to me, or maybe just `PQputCopyEnd(...) == -1`,\n> let's wait for some other opinions.\n\nOne can argue that PQputCopyEnd() returning 0 could be possible in an\nolder version of libpq these callers are linking to, but this has\nnever existed from what I can see (just checked down to 8.2 now).\nAnyway, changing these callers may create some backpatching conflicts,\nso I'd let them as they are, and just fix the comment.\n--\nMichael",
"msg_date": "Tue, 29 Aug 2023 07:40:11 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH v1] PQputCopyEnd never returns 0, fix the inaccurate\n comment"
},
{
"msg_contents": "On Tue, Aug 29, 2023 at 6:40 AM Michael Paquier <[email protected]> wrote:\n>\n> On Mon, Aug 28, 2023 at 09:46:07PM +0800, Junwang Zhao wrote:\n> > Yeah, it makes sense to me, or maybe just `PQputCopyEnd(...) == -1`,\n> > let's wait for some other opinions.\n>\n> One can argue that PQputCopyEnd() returning 0 could be possible in an\n> older version of libpq these callers are linking to, but this has\n> never existed from what I can see (just checked down to 8.2 now).\n> Anyway, changing these callers may create some backpatching conflicts,\n> so I'd let them as they are, and just fix the comment.\n\nsure, thanks for the explanation.\n\n> --\n> Michael\n\n\n\n-- \nRegards\nJunwang Zhao\n\n\n",
"msg_date": "Tue, 29 Aug 2023 17:45:30 +0800",
"msg_from": "Junwang Zhao <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH v1] PQputCopyEnd never returns 0,\n fix the inaccurate comment"
},
{
"msg_contents": "Hi,\n\n> > On Mon, Aug 28, 2023 at 09:46:07PM +0800, Junwang Zhao wrote:\n> > > Yeah, it makes sense to me, or maybe just `PQputCopyEnd(...) == -1`,\n> > > let's wait for some other opinions.\n> >\n> > One can argue that PQputCopyEnd() returning 0 could be possible in an\n> > older version of libpq these callers are linking to, but this has\n> > never existed from what I can see (just checked down to 8.2 now).\n> > Anyway, changing these callers may create some backpatching conflicts,\n> > so I'd let them as they are, and just fix the comment.\n>\n> sure, thanks for the explanation.\n\nThe patch was applied in 8bf7db02 [1] and I assume it's safe to close\nthe corresponding CF entry [2].\n\nThanks, everyone.\n\n[1]: https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=8bf7db0285dfbc4b505c8be4c34ab7386eb6297f\n[2]: https://commitfest.postgresql.org/44/4521/\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Wed, 30 Aug 2023 13:28:53 +0300",
"msg_from": "Aleksander Alekseev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH v1] PQputCopyEnd never returns 0,\n fix the inaccurate comment"
}
] |
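As a companion to the thread above, here is a minimal, self-contained sketch of a COPY caller written against the libpq API; the connection string and the table name "t" are placeholders, and the sketch simply treats -1 as the only failure value of PQputCopyEnd, per the discussion:

```
#include <stdio.h>
#include <stdlib.h>
#include <libpq-fe.h>

int
main(void)
{
    /* Placeholder connection string; adjust for your environment. */
    PGconn     *conn = PQconnectdb("dbname=postgres");
    PGresult   *res;

    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        PQfinish(conn);
        return EXIT_FAILURE;
    }

    /* "t" is a hypothetical two-column table. */
    res = PQexec(conn, "COPY t (a, b) FROM STDIN");
    if (PQresultStatus(res) != PGRES_COPY_IN)
    {
        fprintf(stderr, "COPY failed: %s", PQerrorMessage(conn));
        PQclear(res);
        PQfinish(conn);
        return EXIT_FAILURE;
    }
    PQclear(res);

    /* PQputCopyData returns 1 (queued), 0 (retry in nonblocking mode), or -1 (error). */
    if (PQputCopyData(conn, "1\t2\n", 4) == -1)
        fprintf(stderr, "PQputCopyData failed: %s", PQerrorMessage(conn));

    /*
     * Per the thread above, PQputCopyEnd only ever returns 1 or -1 in
     * practice, so treating -1 as the sole failure case is enough.
     */
    if (PQputCopyEnd(conn, NULL) == -1)
        fprintf(stderr, "PQputCopyEnd failed: %s", PQerrorMessage(conn));

    /* Collect the final command status of the COPY. */
    while ((res = PQgetResult(conn)) != NULL)
    {
        if (PQresultStatus(res) != PGRES_COMMAND_OK)
            fprintf(stderr, "COPY did not complete: %s", PQerrorMessage(conn));
        PQclear(res);
    }

    PQfinish(conn);
    return EXIT_SUCCESS;
}
```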
[
{
"msg_contents": "Hi\n\nI workin with protocol and reading related code.\n\nI found in routine EndCommand one strange thing\n\nvoid\nEndCommand(const QueryCompletion *qc, CommandDest dest, bool\nforce_undecorated_output)\n{\n<-->char<--><-->completionTag[COMPLETION_TAG_BUFSIZE];\n<-->Size<--><-->len;\n\n<-->switch (dest)\n<-->{\n<--><-->case DestRemote:\n<--><-->case DestRemoteExecute:\n<--><-->case DestRemoteSimple:\n\n<--><--><-->len = BuildQueryCompletionString(completionTag, qc,\n<--><--><--><--><--><--><--><--><--><--><--> force_undecorated_output);\n<--><--><-->pq_putmessage(PqMsg_Close, completionTag, len + 1);\n\n<--><-->case DestNone:\n\nThere is message PqMsgClose, but this should be used from client side. Here\nshould be used PqMsg_CommandComplete instead?\n\nRegards\n\nPavel\n\nHiI workin with protocol and reading related code.I found in routine EndCommand one strange thingvoidEndCommand(const QueryCompletion *qc, CommandDest dest, bool force_undecorated_output){<-->char<--><-->completionTag[COMPLETION_TAG_BUFSIZE];<-->Size<--><-->len;<-->switch (dest)<-->{<--><-->case DestRemote:<--><-->case DestRemoteExecute:<--><-->case DestRemoteSimple:<--><--><-->len = BuildQueryCompletionString(completionTag, qc,<--><--><--><--><--><--><--><--><--><--><--> force_undecorated_output);<--><--><-->pq_putmessage(PqMsg_Close, completionTag, len + 1);<--><-->case DestNone:There is message PqMsgClose, but this should be used from client side. Here should be used PqMsg_CommandComplete instead?RegardsPavel",
"msg_date": "Mon, 28 Aug 2023 13:38:57 +0200",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": true,
"msg_subject": "Wrong usage of pqMsg_Close message code?"
},
{
"msg_contents": "Hi Pavel,\n\n> There is message PqMsgClose, but this should be used from client side. Here should be used PqMsg_CommandComplete instead?\n\nIt seems so. This change was introduced in f4b54e1ed98 [1]:\n\n```\n--- a/src/backend/tcop/dest.c\n+++ b/src/backend/tcop/dest.c\n@@ -176,7 +176,7 @@ EndCommand(const QueryCompletion *qc, CommandDest\ndest, bool force_undecorated_o\n\n len = BuildQueryCompletionString(completionTag, qc,\n\n force_undecorated_output);\n- pq_putmessage('C', completionTag, len + 1);\n+ pq_putmessage(PqMsg_Close, completionTag, len + 1);\n\n case DestNone:\n case DestDebug\n```\n\nIt should have been PqMsg_CommandComplete.\n\n[1]: https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=f4b54e1ed98\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Mon, 28 Aug 2023 15:00:26 +0300",
"msg_from": "Aleksander Alekseev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Wrong usage of pqMsg_Close message code?"
},
{
"msg_contents": "Hi\n\npo 28. 8. 2023 v 14:00 odesílatel Aleksander Alekseev <\[email protected]> napsal:\n\n> Hi Pavel,\n>\n> > There is message PqMsgClose, but this should be used from client side.\n> Here should be used PqMsg_CommandComplete instead?\n>\n> It seems so. This change was introduced in f4b54e1ed98 [1]:\n>\n> ```\n> --- a/src/backend/tcop/dest.c\n> +++ b/src/backend/tcop/dest.c\n> @@ -176,7 +176,7 @@ EndCommand(const QueryCompletion *qc, CommandDest\n> dest, bool force_undecorated_o\n>\n> len = BuildQueryCompletionString(completionTag, qc,\n>\n> force_undecorated_output);\n> - pq_putmessage('C', completionTag, len + 1);\n> + pq_putmessage(PqMsg_Close, completionTag, len + 1);\n>\n> case DestNone:\n> case DestDebug\n> ```\n>\n> It should have been PqMsg_CommandComplete.\n>\n> [1]:\n> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=f4b54e1ed98\n\n\nhere is a patch - all tests passed\n\nRegards\n\nPavel\n\n>\n>\n> --\n> Best regards,\n> Aleksander Alekseev\n>",
"msg_date": "Mon, 28 Aug 2023 16:01:29 +0200",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Wrong usage of pqMsg_Close message code?"
},
{
"msg_contents": "Hi,\n\n> here is a patch - all tests passed\n\nLGTM and added to the nearest CF just in case [1].\n\n[1]: https://commitfest.postgresql.org/44/4523/\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Mon, 28 Aug 2023 17:26:29 +0300",
"msg_from": "Aleksander Alekseev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Wrong usage of pqMsg_Close message code?"
},
{
"msg_contents": ">> Hi Pavel,\n>>\n>> > There is message PqMsgClose, but this should be used from client side.\n>> Here should be used PqMsg_CommandComplete instead?\n>>\n>> It seems so. This change was introduced in f4b54e1ed98 [1]:\n>>\n>> ```\n>> --- a/src/backend/tcop/dest.c\n>> +++ b/src/backend/tcop/dest.c\n>> @@ -176,7 +176,7 @@ EndCommand(const QueryCompletion *qc, CommandDest\n>> dest, bool force_undecorated_o\n>>\n>> len = BuildQueryCompletionString(completionTag, qc,\n>>\n>> force_undecorated_output);\n>> - pq_putmessage('C', completionTag, len + 1);\n>> + pq_putmessage(PqMsg_Close, completionTag, len + 1);\n>>\n>> case DestNone:\n>> case DestDebug\n>> ```\n>>\n>> It should have been PqMsg_CommandComplete.\n>>\n>> [1]:\n>> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=f4b54e1ed98\n> \n> \n> here is a patch - all tests passed\n\nI think EndReplicationCommand needs to be fixed as well.\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS LLC\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp\n\n\n",
"msg_date": "Tue, 29 Aug 2023 06:12:00 +0900 (JST)",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Wrong usage of pqMsg_Close message code?"
},
{
"msg_contents": "On Tue, Aug 29, 2023 at 06:12:00AM +0900, Tatsuo Ishii wrote:\n> I think EndReplicationCommand needs to be fixed as well.\n\nYeah, both of you are right here. Anyway, it seems to me that there\nis a bit more going on in protocol.h. I have noticed two more things\nthat are incorrect:\n- HandleParallelMessage is missing a message for 'P', but I think that\nwe should have a code for it as well as part of the parallel query\nprotocol.\n- PqMsg_Terminate can be sent by the frontend *and* the backend, see\nfe-connect.c and parallel.c. However, protocol.h documents it as a\nfrontend-only code.\n--\nMichael",
"msg_date": "Tue, 29 Aug 2023 07:53:37 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Wrong usage of pqMsg_Close message code?"
},
{
"msg_contents": "> Yeah, both of you are right here. Anyway, it seems to me that there\n> is a bit more going on in protocol.h. I have noticed two more things\n> that are incorrect:\n> - HandleParallelMessage is missing a message for 'P', but I think that\n> we should have a code for it as well as part of the parallel query\n> protocol.\n\nI did not know this. Why is this not explained in the frontend/backend\nprotocol document?\n\n> - PqMsg_Terminate can be sent by the frontend *and* the backend, see\n> fe-connect.c and parallel.c. However, protocol.h documents it as a\n> frontend-only code.\n\nI do not blame protocol.h because our frontend/backend protocol\ndocument also stats that it's a frontend only message. Someone who\nstarted to use 'X' in backend should have added that in the\ndocumentation.\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS LLC\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp\n\n\n",
"msg_date": "Tue, 29 Aug 2023 10:04:24 +0900 (JST)",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Wrong usage of pqMsg_Close message code?"
},
{
"msg_contents": "On Tue, Aug 29, 2023 at 10:04:24AM +0900, Tatsuo Ishii wrote:\n>> Yeah, both of you are right here. Anyway, it seems to me that there\n>> is a bit more going on in protocol.h. I have noticed two more things\n>> that are incorrect:\n>> - HandleParallelMessage is missing a message for 'P', but I think that\n>> we should have a code for it as well as part of the parallel query\n>> protocol.\n> \n> I did not know this. Why is this not explained in the frontend/backend\n> protocol document?\n\nHmm. Thinking more about it, I am actually not sure that we need to\ndo that in this case, so perhaps things are OK as they stand for this\none.\n\n>> - PqMsg_Terminate can be sent by the frontend *and* the backend, see\n>> fe-connect.c and parallel.c. However, protocol.h documents it as a\n>> frontend-only code.\n> \n> I do not blame protocol.h because our frontend/backend protocol\n> document also stats that it's a frontend only message. Someone who\n> started to use 'X' in backend should have added that in the\n> documentation.\n\nActually, this may be OK as well as it stands. One can also say that\nthe parallel processing is out of this scope, being used only\ninternally. I cannot keep wondering whether we should put more\nefforts in documenting the parallel worker/leader protocol. That's\ninternal to the backend and out of the scope of this thread, still..\n--\nMichael",
"msg_date": "Tue, 29 Aug 2023 16:39:51 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Wrong usage of pqMsg_Close message code?"
},
{
"msg_contents": "Michael Paquier <[email protected]> writes:\n> Actually, this may be OK as well as it stands. One can also say that\n> the parallel processing is out of this scope, being used only\n> internally. I cannot keep wondering whether we should put more\n> efforts in documenting the parallel worker/leader protocol. That's\n> internal to the backend and out of the scope of this thread, still..\n\nYeah. I'm not sure whether the leader/worker protocol needs better\ndocumentation, but the parts of it that are not common with the\nfrontend protocol should NOT be documented in protocol.sgml.\nThat would just confuse authors of frontend code.\n\nI don't mind having constants for the leader/worker protocol in\nprotocol.h, as long as they're in a separate section that's clearly\nmarked as relevant only for server-internal parallelism.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 29 Aug 2023 10:01:47 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Wrong usage of pqMsg_Close message code?"
},
{
"msg_contents": "Hi everyone,\n\nThanks for the report. I'll get this fixed up. My guess is that this was\nleftover from an earlier version of the patch that used the same macro for\nidentical protocol characters.\n\nOn Tue, Aug 29, 2023 at 10:01:47AM -0400, Tom Lane wrote:\n> Michael Paquier <[email protected]> writes:\n>> Actually, this may be OK as well as it stands. One can also say that\n>> the parallel processing is out of this scope, being used only\n>> internally. I cannot keep wondering whether we should put more\n>> efforts in documenting the parallel worker/leader protocol. That's\n>> internal to the backend and out of the scope of this thread, still..\n> \n> Yeah. I'm not sure whether the leader/worker protocol needs better\n> documentation, but the parts of it that are not common with the\n> frontend protocol should NOT be documented in protocol.sgml.\n> That would just confuse authors of frontend code.\n> \n> I don't mind having constants for the leader/worker protocol in\n> protocol.h, as long as they're in a separate section that's clearly\n> marked as relevant only for server-internal parallelism.\n\n+1, I left the parallel stuff (and a couple other things) out in the first\nround to avoid prolonging the naming discussion, but we can continue to add\nto protocol.h.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 29 Aug 2023 09:15:55 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Wrong usage of pqMsg_Close message code?"
},
{
"msg_contents": "On Tue, Aug 29, 2023 at 09:15:55AM -0700, Nathan Bossart wrote:\n> Thanks for the report. I'll get this fixed up. My guess is that this was\n> leftover from an earlier version of the patch that used the same macro for\n> identical protocol characters.\n\nI plan to commit the attached patch shortly.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Tue, 29 Aug 2023 14:11:06 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Wrong usage of pqMsg_Close message code?"
},
{
"msg_contents": "On Tue, Aug 29, 2023 at 02:11:06PM -0700, Nathan Bossart wrote:\n> I plan to commit the attached patch shortly.\n\nWFM.\n--\nMichael",
"msg_date": "Wed, 30 Aug 2023 07:56:33 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Wrong usage of pqMsg_Close message code?"
},
{
"msg_contents": "On Wed, Aug 30, 2023 at 07:56:33AM +0900, Michael Paquier wrote:\n> On Tue, Aug 29, 2023 at 02:11:06PM -0700, Nathan Bossart wrote:\n>> I plan to commit the attached patch shortly.\n> \n> WFM.\n\nCommitted.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 29 Aug 2023 18:35:06 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Wrong usage of pqMsg_Close message code?"
}
] |
[
{
"msg_contents": "When looking at pg_regress I noticed that the --use-existing support didn't\nseem to work. ISTM that the removal/creation of test databases and roles\ndoesn't run since the conditional is reversed. There is also no support for\nusing a non-default socket directory with PG_REGRESS_SOCK_DIR. The attached\nhack fixes these and allows the tests to execute for me, but even with that the\ntest_setup suite fails due to the tablespace not being dropped and recreated\nlike databases and roles.\n\nIs it me who is too thick to get it working, or is it indeed broken? If it's\nthe latter, it's been like that for a long time which seems to indicate that it\nisn't really used and should probably be removed rather than fixed?\n\nDoes anyone here use it?\n\n--\nDaniel Gustafsson",
"msg_date": "Mon, 28 Aug 2023 15:11:15 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": true,
"msg_subject": "Is pg_regress --use-existing used by anyone or is it broken?"
},
{
"msg_contents": "On Mon, Aug 28, 2023 at 03:11:15PM +0200, Daniel Gustafsson wrote:\n> When looking at pg_regress I noticed that the --use-existing support didn't\n> seem to work. ISTM that the removal/creation of test databases and roles\n> doesn't run since the conditional is reversed. There is also no support for\n> using a non-default socket directory with PG_REGRESS_SOCK_DIR. The attached\n> hack fixes these and allows the tests to execute for me, but even with that the\n> test_setup suite fails due to the tablespace not being dropped and recreated\n> like databases and roles.\n> \n> Is it me who is too thick to get it working, or is it indeed broken? If it's\n> the latter, it's been like that for a long time which seems to indicate that it\n> isn't really used and should probably be removed rather than fixed?\n> \n> Does anyone here use it?\n\nI don't think I've ever used it. AFAICT it was added with hot standby mode\n(efc16ea) to support 'make standbycheck', which was removed last year\n(4483b2cf). Unless someone claims to be using it, it's probably fine to\njust remove it.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 29 Aug 2023 14:38:27 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Is pg_regress --use-existing used by anyone or is it broken?"
},
{
"msg_contents": "> On 29 Aug 2023, at 23:38, Nathan Bossart <[email protected]> wrote:\n> On Mon, Aug 28, 2023 at 03:11:15PM +0200, Daniel Gustafsson wrote:\n\n>> Does anyone here use it?\n> \n> I don't think I've ever used it. AFAICT it was added with hot standby mode\n> (efc16ea) to support 'make standbycheck', which was removed last year\n> (4483b2cf). Unless someone claims to be using it, it's probably fine to\n> just remove it.\n\nHaving looked a bit more on it I have a feeling that plain removing it would be\nthe best option. Unless someone chimes in as a user of it I'll propose a patch\nto remove it.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Tue, 29 Aug 2023 23:52:52 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Is pg_regress --use-existing used by anyone or is it broken?"
},
{
"msg_contents": "On Tue, Aug 29, 2023 at 2:53 PM Daniel Gustafsson <[email protected]> wrote:\n> Having looked a bit more on it I have a feeling that plain removing it would be\n> the best option. Unless someone chimes in as a user of it I'll propose a patch\n> to remove it.\n\n-1. I use it.\n\nIt's handy when using pg_regress with a custom test suite, where I\ndon't want to be nagged about disconnecting from the database every\ntime.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Tue, 29 Aug 2023 15:33:27 -0700",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Is pg_regress --use-existing used by anyone or is it broken?"
},
{
"msg_contents": "> On 30 Aug 2023, at 00:33, Peter Geoghegan <[email protected]> wrote:\n> \n> On Tue, Aug 29, 2023 at 2:53 PM Daniel Gustafsson <[email protected]> wrote:\n>> Having looked a bit more on it I have a feeling that plain removing it would be\n>> the best option. Unless someone chimes in as a user of it I'll propose a patch\n>> to remove it.\n> \n> -1. I use it.\n\nThanks for confirming!\n\n> It's handy when using pg_regress with a custom test suite, where I\n> don't want to be nagged about disconnecting from the database every\n> time.\n\nI'm curious about your workflow around it, it seems to me that it's kind of\nbroken so I wonder if we instead then should make it an equal citizen with temp\ninstance?\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Wed, 30 Aug 2023 00:37:03 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Is pg_regress --use-existing used by anyone or is it broken?"
},
{
"msg_contents": "On Tue, Aug 29, 2023 at 3:37 PM Daniel Gustafsson <[email protected]> wrote:\n> > It's handy when using pg_regress with a custom test suite, where I\n> > don't want to be nagged about disconnecting from the database every\n> > time.\n>\n> I'm curious about your workflow around it, it seems to me that it's kind of\n> broken so I wonder if we instead then should make it an equal citizen with temp\n> instance?\n\nI'm confused. You seem to think that it's a problem that\n--use-existing doesn't create databases and roles. But that's the\nwhole point, at least for me.\n\nI don't use --use-existing to run the standard regression tests, or\nanything like that. I use it to run my own custom test suite, often\nwhile relying upon the database having certain data already. Sometimes\nit's a nontrivial amount of data. I don't want to have to set up and\ntear down the data every time, since it isn't usually necessary.\n\nI usually have a relatively small and fast running read-only test\nsuite, and a larger test suite that does indeed need to do various\nsetup and teardown steps. It isn't possible to run the smaller test\nsuite without having first run the larger one at least once. But this\nis just for me, during development. Right now, with my SAOP nbtree\nproject, the smaller test suite takes me about 50ms to run, while the\nlarger one takes almost 10 seconds.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Tue, 29 Aug 2023 15:55:23 -0700",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Is pg_regress --use-existing used by anyone or is it broken?"
},
{
"msg_contents": "> On 30 Aug 2023, at 00:55, Peter Geoghegan <[email protected]> wrote:\n> \n> On Tue, Aug 29, 2023 at 3:37 PM Daniel Gustafsson <[email protected]> wrote:\n>>> It's handy when using pg_regress with a custom test suite, where I\n>>> don't want to be nagged about disconnecting from the database every\n>>> time.\n>> \n>> I'm curious about your workflow around it, it seems to me that it's kind of\n>> broken so I wonder if we instead then should make it an equal citizen with temp\n>> instance?\n> \n> I'm confused. You seem to think that it's a problem that\n> --use-existing doesn't create databases and roles. But that's the\n> whole point, at least for me.\n\nWell, I think it's problematic that it doesn't handle database and role\ncreation due to it being buggy. I'll have another look at fixing the issues to\nsee if there is more than what I posted upthread, while at the same time making\nsure it will still support your use-case.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Wed, 30 Aug 2023 11:03:58 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Is pg_regress --use-existing used by anyone or is it broken?"
}
] |
[
{
"msg_contents": "I would like to be the commitfest manager for CF 2023-09.\n\n\n",
"msg_date": "Mon, 28 Aug 2023 15:46:35 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Commitfest manager for September"
},
{
"msg_contents": "Hi Peter,\n\n> I would like to be the commitfest manager for CF 2023-09.\n\nMany thanks for volunteering! If at some point you will require a bit\nof help please let me know.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Mon, 28 Aug 2023 17:31:58 +0300",
"msg_from": "Aleksander Alekseev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Commitfest manager for September"
},
{
"msg_contents": "On Mon, Aug 28, 2023 at 11:36 AM Aleksander Alekseev\n<[email protected]> wrote:\n>\n> Hi Peter,\n>\n> > I would like to be the commitfest manager for CF 2023-09.\n>\n> Many thanks for volunteering! If at some point you will require a bit\n> of help please let me know.\n\nI too had planned to volunteer to help. I volunteer to do a\ntriage/summary of patch statuses, as has been done occasionally in the\npast [1].\nHave folks found this helpful in the past?\n\n[1] https://www.postgresql.org/message-id/CAM-w4HOFOUNuOZSpsCfH_ir7dqJNdA1pxkxfaVEvLk5sn6HhsQ%40mail.gmail.com\n\n\n",
"msg_date": "Mon, 28 Aug 2023 11:42:22 -0400",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Commitfest manager for September"
},
{
"msg_contents": "Hi Melanie,\n\nOn 8/28/23 11:42, Melanie Plageman wrote:\n> I too had planned to volunteer to help. I volunteer to do a\n> triage/summary of patch statuses, as has been done occasionally in the\n> past [1].\n> Have folks found this helpful in the past?\n\n\nHaving a summary to begin with is very helpful for reviewers.\n\n\nBest regards,\n\n Jesper\n\n\n\n\n",
"msg_date": "Mon, 28 Aug 2023 12:11:09 -0400",
"msg_from": "Jesper Pedersen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Commitfest manager for September"
},
{
"msg_contents": "On Mon, Aug 28, 2023 at 11:42:22AM -0400, Melanie Plageman wrote:\n> I too had planned to volunteer to help. I volunteer to do a\n> triage/summary of patch statuses, as has been done occasionally in the\n> past [1].\n> Have folks found this helpful in the past?\n\nWith the number of patches in place, getting any help with triaging is\nmuch welcome, but I cannot speak for Peter. FWIW, with your\nexperience, I think that you are a good fit, as is Aleksander.\n--\nMichael",
"msg_date": "Tue, 29 Aug 2023 07:32:59 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Commitfest manager for September"
},
{
"msg_contents": "On 28.08.23 15:46, Peter Eisentraut wrote:\n> I would like to be the commitfest manager for CF 2023-09.\n\nI think my community account needs to have some privilege change to be \nable to act as CF manager in the web interface. Could someone make that \nhappen?\n\n\n\n",
"msg_date": "Wed, 30 Aug 2023 14:49:45 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Commitfest manager for September"
},
{
"msg_contents": "On Wed, Aug 30, 2023 at 2:50 PM Peter Eisentraut <[email protected]> wrote:\n>\n> On 28.08.23 15:46, Peter Eisentraut wrote:\n> > I would like to be the commitfest manager for CF 2023-09.\n>\n> I think my community account needs to have some privilege change to be\n> able to act as CF manager in the web interface. Could someone make that\n> happen?\n\nDone!\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n",
"msg_date": "Wed, 30 Aug 2023 15:01:24 +0200",
"msg_from": "Magnus Hagander <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Commitfest manager for September"
}
] |
[
{
"msg_contents": "Hi,\r\n\r\nAttached is the PostgreSQL 16 RC1 release announcement draft.\r\n\r\nCurrently there is only one item in it, as there was only one open item \r\nmarked as closed. If there are any other fixes for the RC1 that were \r\nspecific to v16 and should be included in the announcement, please let \r\nme know.\r\n\r\nPlease provide all feedback no later than August 31, 2023 @ 12:00 UTC \r\n(and preferably before that).\r\n\r\nThanks,\r\n\r\nJonathan",
"msg_date": "Mon, 28 Aug 2023 13:51:18 -0400",
"msg_from": "\"Jonathan S. Katz\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "PostgreSQL 16 RC1 release announcement draft"
}
] |
[
{
"msg_contents": "On Debian 12, gcc version 12.2.0 (Debian 12.2.0-14) generates a warning\non PG 13 to current, but only with -O1 optimization level, and not at\n-O0/-O2/-O3:\n\n\tclauses.c: In function ‘recheck_cast_function_args’:\n\tclauses.c:4293:19: warning: ‘actual_arg_types’ may be used uninitialized [-Wmaybe-uninitialized]\n\t 4293 | rettype = enforce_generic_type_consistency(actual_arg_types,\n\t | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\t 4294 | declared_arg_types,\n\t | ~~~~~~~~~~~~~~~~~~~\n\t 4295 | nargs,\n\t | ~~~~~~\n\t 4296 | funcform->prorettype,\n\t | ~~~~~~~~~~~~~~~~~~~~~\n\t 4297 | false);\n\t | ~~~~~~\n\tIn file included from clauses.c:45:\n\t../../../../src/include/parser/parse_coerce.h:82:17: note: by argument 1 of type ‘const Oid *’ {aka ‘const unsigned int *’} to ‘enforce_generic_type_consistency’ declared here\n\t 82 | extern Oid enforce_generic_type_consistency(const Oid *actual_arg_types,\n\t | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\tclauses.c:4279:33: note: ‘actual_arg_types’ declared here\n\t 4279 | Oid actual_arg_types[FUNC_MAX_ARGS];\n\t | ^~~~~~~~~~~~~~~~\n\nThe code is:\n\n\tstatic void\n\trecheck_cast_function_args(List *args, Oid result_type,\n\t Oid *proargtypes, int pronargs,\n\t HeapTuple func_tuple)\n\t{\n\t Form_pg_proc funcform = (Form_pg_proc) GETSTRUCT(func_tuple);\n\t int nargs;\n\t Oid actual_arg_types[FUNC_MAX_ARGS];\n\t Oid declared_arg_types[FUNC_MAX_ARGS];\n\t Oid rettype;\n\t ListCell *lc;\n\t\n\t if (list_length(args) > FUNC_MAX_ARGS)\n\t elog(ERROR, \"too many function arguments\");\n\t nargs = 0;\n\t foreach(lc, args)\n\t {\n\t actual_arg_types[nargs++] = exprType((Node *) lfirst(lc));\n\t }\n\t Assert(nargs == pronargs);\n\t memcpy(declared_arg_types, proargtypes, pronargs * sizeof(Oid));\n-->\t rettype = enforce_generic_type_consistency(actual_arg_types,\n\t declared_arg_types,\n\t nargs,\n\t funcform->prorettype,\n\t false);\n\t /* let's just check we got the same answer as the parser did ... */\n\nI don't see a clean way of avoiding the warning except by initializing\nthe array, which seems wasteful.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n",
"msg_date": "Mon, 28 Aug 2023 15:37:20 -0400",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Debian 12 gcc warning"
},
{
"msg_contents": "On Mon, Aug 28, 2023 at 03:37:20PM -0400, Bruce Momjian wrote:\n> I don't see a clean way of avoiding the warning except by initializing\n> the array, which seems wasteful.\n\nOr just initialize the array with a {0}?\n--\nMichael",
"msg_date": "Tue, 29 Aug 2023 07:30:15 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Debian 12 gcc warning"
},
{
"msg_contents": "On Tue, Aug 29, 2023 at 07:30:15AM +0900, Michael Paquier wrote:\n> On Mon, Aug 28, 2023 at 03:37:20PM -0400, Bruce Momjian wrote:\n> > I don't see a clean way of avoiding the warning except by initializing\n> > the array, which seems wasteful.\n> \n> Or just initialize the array with a {0}?\n\nUh, doesn't that set all elements to zero? See:\n\n\thttps://stackoverflow.com/questions/2589749/how-to-initialize-array-to-0-in-c\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n",
"msg_date": "Mon, 28 Aug 2023 19:10:38 -0400",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Debian 12 gcc warning"
},
{
"msg_contents": "On Tue, 29 Aug 2023 at 07:37, Bruce Momjian <[email protected]> wrote:\n> nargs = 0;\n> foreach(lc, args)\n> {\n> actual_arg_types[nargs++] = exprType((Node *) lfirst(lc));\n> }\n\nDoes it still produce the warning if you form the above more like?\n\nnargs = list_length(args);\nfor (int i = 0; i < nargs; i++)\n actual_arg_types[i] = exprType((Node *) list_nth(args, i));\n\nI'm just not sure if it's unable to figure out if at least nargs\nelements is set or if it won't be happy until all 100 elements are\nset.\n\nDavid\n\n\n",
"msg_date": "Tue, 29 Aug 2023 11:55:48 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Debian 12 gcc warning"
},
{
"msg_contents": "On Tue, Aug 29, 2023 at 11:55:48AM +1200, David Rowley wrote:\n> On Tue, 29 Aug 2023 at 07:37, Bruce Momjian <[email protected]> wrote:\n> > nargs = 0;\n> > foreach(lc, args)\n> > {\n> > actual_arg_types[nargs++] = exprType((Node *) lfirst(lc));\n> > }\n> \n> Does it still produce the warning if you form the above more like?\n> \n> nargs = list_length(args);\n> for (int i = 0; i < nargs; i++)\n> actual_arg_types[i] = exprType((Node *) list_nth(args, i));\n> \n> I'm just not sure if it's unable to figure out if at least nargs\n> elements is set or if it won't be happy until all 100 elements are\n> set.\n\nI applied the attached patch but got the same warning:\n\n\tclauses.c: In function ‘recheck_cast_function_args’:\n\tclauses.c:4297:19: warning: ‘actual_arg_types’ may be used uninitialized [-Wmaybe-uninitialized]\n\t 4297 | rettype = enforce_generic_type_consistency(actual_arg_types,\n\t | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\t 4298 | declared_arg_types,\n\t | ~~~~~~~~~~~~~~~~~~~\n\t 4299 | nargs,\n\t | ~~~~~~\n\t 4300 | funcform->prorettype,\n\t | ~~~~~~~~~~~~~~~~~~~~~\n\t 4301 | false);\n\t | ~~~~~~\n\tIn file included from clauses.c:45:\n\t../../../../src/include/parser/parse_coerce.h:82:17: note: by argument 1 of type ‘const Oid *’ {aka ‘const unsigned int *’} to ‘enforce_generic_type_consistency’ declared here\n\t 82 | extern Oid enforce_generic_type_consistency(const Oid *actual_arg_types,\n\t | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\tclauses.c:4279:33: note: ‘actual_arg_types’ declared here\n\t 4279 | Oid actual_arg_types[FUNC_MAX_ARGS];\n\t | ^~~~~~~~~~~~~~~~\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.",
"msg_date": "Mon, 28 Aug 2023 20:44:15 -0400",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Debian 12 gcc warning"
},
{
"msg_contents": "On Mon, Aug 28, 2023 at 07:10:38PM -0400, Bruce Momjian wrote:\n> On Tue, Aug 29, 2023 at 07:30:15AM +0900, Michael Paquier wrote:\n> > On Mon, Aug 28, 2023 at 03:37:20PM -0400, Bruce Momjian wrote:\n> > > I don't see a clean way of avoiding the warning except by initializing\n> > > the array, which seems wasteful.\n> > \n> > Or just initialize the array with a {0}?\n> \n> Uh, doesn't that set all elements to zero? See:\n> \n> \thttps://stackoverflow.com/questions/2589749/how-to-initialize-array-to-0-in-c\n\nFYI, that does stop the warning.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n",
"msg_date": "Mon, 28 Aug 2023 21:00:37 -0400",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Debian 12 gcc warning"
},
{
"msg_contents": "On Tue, Aug 29, 2023 at 6:56 AM David Rowley <[email protected]> wrote:\n>\n> I'm just not sure if it's unable to figure out if at least nargs\n> elements is set or if it won't be happy until all 100 elements are\n> set.\n\nIt looks like the former, since I can silence it on gcc 13 / -O1 by doing:\n\n/* keep compiler quiet */\nactual_arg_types[0] = InvalidOid;\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com\n\nOn Tue, Aug 29, 2023 at 6:56 AM David Rowley <[email protected]> wrote:>> I'm just not sure if it's unable to figure out if at least nargs> elements is set or if it won't be happy until all 100 elements are> set.It looks like the former, since I can silence it on gcc 13 / -O1 by doing:/* keep compiler quiet */actual_arg_types[0] = InvalidOid;--John NaylorEDB: http://www.enterprisedb.com",
"msg_date": "Tue, 29 Aug 2023 10:26:27 +0700",
"msg_from": "John Naylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Debian 12 gcc warning"
},
{
"msg_contents": "On Mon Aug 28, 2023 at 2:37 PM CDT, Bruce Momjian wrote:\n> I don't see a clean way of avoiding the warning except by initializing\n> the array, which seems wasteful.\n\nFor what it's worth, we recently committed a patch[0] that initialized \nan array due to a similar warning being generated on Fedora 38 (gcc \n(GCC) 13.2.1 20230728 (Red Hat 13.2.1-1)).\n\n[0]: https://github.com/postgres/postgres/commit/4a8fef0d733965c1a1836022f8a42ab1e83a721f\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Tue, 29 Aug 2023 00:17:29 -0500",
"msg_from": "\"Tristan Partin\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Debian 12 gcc warning"
},
{
"msg_contents": "On Tue, Aug 29, 2023 at 10:26:27AM +0700, John Naylor wrote:\n> \n> On Tue, Aug 29, 2023 at 6:56 AM David Rowley <[email protected]> wrote:\n> >\n> > I'm just not sure if it's unable to figure out if at least nargs\n> > elements is set or if it won't be happy until all 100 elements are\n> > set.\n> \n> It looks like the former, since I can silence it on gcc 13 / -O1 by doing:\n> \n> /* keep compiler quiet */\n> actual_arg_types[0] = InvalidOid;\n\nAgreed, that fixes it for me too. In fact, assigning to only element 99 or\n200 also prevents the warning, and considering the array is defined for\n100 elements, the fact is accepts 200 isn't a good thing. Patch attached.\n\nI think the question is whether we add this to silence a common compiler\nbut non-default optimization level. It is the only such case in our\nsource code right now.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.",
"msg_date": "Tue, 29 Aug 2023 09:27:23 -0400",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Debian 12 gcc warning"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> On Tue, Aug 29, 2023 at 10:26:27AM +0700, John Naylor wrote:\n>> It looks like the former, since I can silence it on gcc 13 / -O1 by doing:\n>> /* keep compiler quiet */\n>> actual_arg_types[0] = InvalidOid;\n\n> Agreed, that fixes it for me too. In fact, assigning to only element 99 or\n> 200 also prevents the warning, and considering the array is defined for\n> 100 elements, the fact is accepts 200 isn't a good thing. Patch attached.\n\nThat seems like a pretty clear compiler bug, particularly since it just\nappears in this one version. Rather than contorting our code, I'd\nsuggest filing a gcc bug.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 29 Aug 2023 10:18:36 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Debian 12 gcc warning"
},
{
"msg_contents": "On Tue, Aug 29, 2023 at 10:18:36AM -0400, Tom Lane wrote:\n> Bruce Momjian <[email protected]> writes:\n> > On Tue, Aug 29, 2023 at 10:26:27AM +0700, John Naylor wrote:\n> >> It looks like the former, since I can silence it on gcc 13 / -O1 by doing:\n> >> /* keep compiler quiet */\n> >> actual_arg_types[0] = InvalidOid;\n> \n> > Agreed, that fixes it for me too. In fact, assigning to only element 99 or\n> > 200 also prevents the warning, and considering the array is defined for\n> > 100 elements, the fact is accepts 200 isn't a good thing. Patch attached.\n> \n> That seems like a pretty clear compiler bug, particularly since it just\n> appears in this one version. Rather than contorting our code, I'd\n> suggest filing a gcc bug.\n\nI assume I have to create a test case to report this to the gcc team. I\ntried to create such a test case on gcc 12 but it doesn't generate the\nwarning. Attached is my attempt. Any ideas? I assume we can't just\ntell them to download our software and compile it.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.",
"msg_date": "Tue, 29 Aug 2023 22:52:06 -0400",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Debian 12 gcc warning"
},
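One thing a reduced test case may need to control for is that the real callee is an extern function whose body gcc cannot see at the call site. Below is a sketch of how a two-file reproducer could be laid out; all names are invented, and there is no guarantee that any particular gcc version actually emits the warning for it:

    /* consume.h -- callee lives in another translation unit */
    typedef unsigned int Oid;

    extern Oid consume(const Oid *actual_arg_types,
                       const Oid *declared_arg_types, int nargs);

    /* repro.c -- compile with e.g. "gcc -O1 -Wall -c repro.c" */
    #include "consume.h"

    #define MAX_ARGS 100

    Oid
    recheck(const Oid *args, int nargs)
    {
        Oid actual_arg_types[MAX_ARGS];
        Oid declared_arg_types[MAX_ARGS];

        for (int i = 0; i < nargs; i++)
        {
            actual_arg_types[i] = args[i];
            declared_arg_types[i] = args[i];
        }

        /*
         * If the optimizer threads a path on which nargs == 0, the loop
         * above is skipped on that path and the arrays reach the call
         * without any store the compiler can prove happened.
         */
        return consume(actual_arg_types, declared_arg_types, nargs);
    }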
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> On Tue, Aug 29, 2023 at 10:18:36AM -0400, Tom Lane wrote:\n>> That seems like a pretty clear compiler bug, particularly since it just\n>> appears in this one version. Rather than contorting our code, I'd\n>> suggest filing a gcc bug.\n\n> I assume I have to create a test case to report this to the gcc team. I\n> tried to create such a test case on gcc 12 but it doesn't generate the\n> warning. Attached is my attempt. Any ideas? I assume we can't just\n> tell them to download our software and compile it.\n\nIIRC, they'll accept preprocessed compiler input for the specific file;\nyou don't need to provide a complete source tree. Per\nhttps://gcc.gnu.org/bugs/\n\n Please include all of the following items, the first three of which can be obtained from the output of gcc -v:\n\n the exact version of GCC;\n the system type;\n the options given when GCC was configured/built;\n the complete command line that triggers the bug;\n the compiler output (error messages, warnings, etc.); and\n the preprocessed file (*.i*) that triggers the bug, generated by adding -save-temps to the complete compilation command, or, in the case of a bug report for the GNAT front end, a complete set of source files (see below).\n\nObviously, if you can trim the input it's good, but it doesn't\nhave to be a minimal reproducer.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 29 Aug 2023 23:30:06 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Debian 12 gcc warning"
},
{
"msg_contents": "On Tue, Aug 29, 2023 at 11:30:06PM -0400, Tom Lane wrote:\n> Bruce Momjian <[email protected]> writes:\n> > On Tue, Aug 29, 2023 at 10:18:36AM -0400, Tom Lane wrote:\n> >> That seems like a pretty clear compiler bug, particularly since it just\n> >> appears in this one version. Rather than contorting our code, I'd\n> >> suggest filing a gcc bug.\n> \n> > I assume I have to create a test case to report this to the gcc team. I\n> > tried to create such a test case on gcc 12 but it doesn't generate the\n> > warning. Attached is my attempt. Any ideas? I assume we can't just\n> > tell them to download our software and compile it.\n> \n> IIRC, they'll accept preprocessed compiler input for the specific file;\n> you don't need to provide a complete source tree. Per\n> https://gcc.gnu.org/bugs/\n> \n> Please include all of the following items, the first three of which can be obtained from the output of gcc -v:\n> \n> the exact version of GCC;\n> the system type;\n> the options given when GCC was configured/built;\n> the complete command line that triggers the bug;\n> the compiler output (error messages, warnings, etc.); and\n> the preprocessed file (*.i*) that triggers the bug, generated by adding -save-temps to the complete compilation command, or, in the case of a bug report for the GNAT front end, a complete set of source files (see below).\n> \n> Obviously, if you can trim the input it's good, but it doesn't\n> have to be a minimal reproducer.\n\nBug submitted, thanks for th preprocessed file tip.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n",
"msg_date": "Wed, 30 Aug 2023 11:16:48 -0400",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Debian 12 gcc warning"
},
{
"msg_contents": "On Wed, Aug 30, 2023 at 11:16:48AM -0400, Bruce Momjian wrote:\n> On Tue, Aug 29, 2023 at 11:30:06PM -0400, Tom Lane wrote:\n> > Bruce Momjian <[email protected]> writes:\n> > > On Tue, Aug 29, 2023 at 10:18:36AM -0400, Tom Lane wrote:\n> > >> That seems like a pretty clear compiler bug, particularly since it just\n> > >> appears in this one version. Rather than contorting our code, I'd\n> > >> suggest filing a gcc bug.\n> > \n> > > I assume I have to create a test case to report this to the gcc team. I\n> > > tried to create such a test case on gcc 12 but it doesn't generate the\n> > > warning. Attached is my attempt. Any ideas? I assume we can't just\n> > > tell them to download our software and compile it.\n> > \n> > IIRC, they'll accept preprocessed compiler input for the specific file;\n> > you don't need to provide a complete source tree. Per\n> > https://gcc.gnu.org/bugs/\n> > \n> > Please include all of the following items, the first three of which can be obtained from the output of gcc -v:\n> > \n> > the exact version of GCC;\n> > the system type;\n> > the options given when GCC was configured/built;\n> > the complete command line that triggers the bug;\n> > the compiler output (error messages, warnings, etc.); and\n> > the preprocessed file (*.i*) that triggers the bug, generated by adding -save-temps to the complete compilation command, or, in the case of a bug report for the GNAT front end, a complete set of source files (see below).\n> > \n> > Obviously, if you can trim the input it's good, but it doesn't\n> > have to be a minimal reproducer.\n> \n> Bug submitted, thanks for th preprocessed file tip.\n\nGood news, I was able to prevent the bug by causing compiling of\nclauses.c to use -O2 by adding this to src/Makefile.custom:\n\n\tclauses.o : CFLAGS+=-O2\n\nHere is my submitted bug report:\n\n\thttps://gcc.gnu.org/bugzilla/show_bug.cgi?id=111240\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n",
"msg_date": "Wed, 30 Aug 2023 11:34:22 -0400",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Debian 12 gcc warning"
},
{
"msg_contents": "On Wed, Aug 30, 2023 at 11:34:22AM -0400, Bruce Momjian wrote:\n> Good news, I was able to prevent the bug by causing compiling of\n> clauses.c to use -O2 by adding this to src/Makefile.custom:\n> \n> \tclauses.o : CFLAGS+=-O2\n> \n> Here is my submitted bug report:\n> \n> \thttps://gcc.gnu.org/bugzilla/show_bug.cgi?id=111240\n\nI got this reply on the bug report:\n\n\thttps://gcc.gnu.org/bugzilla/show_bug.cgi?id=111240#c5\n\n\tRichard Biener 2023-08-31 09:46:44 UTC\n\t\n\tConfirmed.\n\t\n\trettype_58 = enforce_generic_type_consistency (&actual_arg_types, &declared_arg_types, 0, _56, 0);\n\t\n\tand we reach this on the args == 0 path where indeed actual_arg_types\n\tis uninitialized and our heuristic says that a const qualified pointer\n\tis an input and thus might be read. So you get a maybe-uninitialized\n\tdiagnostic at the call.\n\t\n\tGCC doesn't know that the 'nargs' argument relates to the array and\n\tthat at most 'nargs' (zero here) arguments are inspected.\n\t\n\tSo I think it works as designed, we have some duplicate bugreports\n\tcomplaining about this \"heuristic\".\n\t\n\tWe are exposing this to ourselves by optimizing the args == 0 case\n\t(skipping the initialization loop and constant propagating the\n\tnargs argument). Aka jump-threading.\n\t\nI think we just have to assume this incorrect warning will be around for\na while.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n",
"msg_date": "Wed, 6 Sep 2023 12:33:31 -0400",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Debian 12 gcc warning"
}
]
[
{
"msg_contents": "While working on a set of patches to combine the freeze and visibility\nmap WAL records into the prune record, I wrote the attached patches\nreusing the tuple visibility information collected in heap_page_prune()\nback in lazy_scan_prune().\n\nheap_page_prune() collects the HTSV_Result for every tuple on a page\nand saves it in an array used by heap_prune_chain(). If we make that\narray available to lazy_scan_prune(), it can use it when collecting\nstats for vacuum and determining whether or not to freeze tuples.\nThis avoids calling HeapTupleSatisfiesVacuum() again on every tuple in\nthe page.\n\nIt also gets rid of the retry loop in lazy_scan_prune().\n\n- Melanie",
"msg_date": "Mon, 28 Aug 2023 19:49:27 -0400",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Eliminate redundant tuple visibility check in vacuum"
},
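Roughly, the idea looks like the excerpt below. This is illustrative only; the array and variable names are placeholders rather than the exact ones used in the patches:

    /* in pruneheap.c: remember the verdict for every offset as it is
     * computed during pruning (one slot per line pointer, -1 meaning
     * "no visibility check was done for this item") */
    prstate.htsv[offnum] = heap_prune_satisfies_vacuum(&prstate, &tup, buffer);

    /* later, in lazy_scan_prune(): consult the saved verdict instead of
     * calling HeapTupleSatisfiesVacuum() a second time for the tuple;
     * htsv here stands for that saved array, however it is handed over */
    switch ((HTSV_Result) htsv[offnum])
    {
        case HEAPTUPLE_DEAD:
            /* collect the item for index vacuuming */
            break;
        case HEAPTUPLE_LIVE:
        case HEAPTUPLE_RECENTLY_DEAD:
        case HEAPTUPLE_INSERT_IN_PROGRESS:
        case HEAPTUPLE_DELETE_IN_PROGRESS:
            /* gather live/dead counts and freezing decisions as before */
            break;
    }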
{
"msg_contents": "Hi Melanie,\n\nOn 8/29/23 01:49, Melanie Plageman wrote:\n> While working on a set of patches to combine the freeze and visibility\n> map WAL records into the prune record, I wrote the attached patches\n> reusing the tuple visibility information collected in heap_page_prune()\n> back in lazy_scan_prune().\n>\n> heap_page_prune() collects the HTSV_Result for every tuple on a page\n> and saves it in an array used by heap_prune_chain(). If we make that\n> array available to lazy_scan_prune(), it can use it when collecting\n> stats for vacuum and determining whether or not to freeze tuples.\n> This avoids calling HeapTupleSatisfiesVacuum() again on every tuple in\n> the page.\n>\n> It also gets rid of the retry loop in lazy_scan_prune().\n\nHow did you test this change?\n\nCould you measure any performance difference?\n\nIf so could you provide your test case?\n\n-- \nDavid Geier\n(ServiceNow)\n\n\n\n",
"msg_date": "Tue, 29 Aug 2023 11:07:35 +0200",
"msg_from": "David Geier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Eliminate redundant tuple visibility check in vacuum"
},
{
"msg_contents": "Hi David,\nThanks for taking a look!\n\nOn Tue, Aug 29, 2023 at 5:07 AM David Geier <[email protected]> wrote:\n>\n> Hi Melanie,\n>\n> On 8/29/23 01:49, Melanie Plageman wrote:\n> > While working on a set of patches to combine the freeze and visibility\n> > map WAL records into the prune record, I wrote the attached patches\n> > reusing the tuple visibility information collected in heap_page_prune()\n> > back in lazy_scan_prune().\n> >\n> > heap_page_prune() collects the HTSV_Result for every tuple on a page\n> > and saves it in an array used by heap_prune_chain(). If we make that\n> > array available to lazy_scan_prune(), it can use it when collecting\n> > stats for vacuum and determining whether or not to freeze tuples.\n> > This avoids calling HeapTupleSatisfiesVacuum() again on every tuple in\n> > the page.\n> >\n> > It also gets rid of the retry loop in lazy_scan_prune().\n>\n> How did you test this change?\n\nI didn't add a new test, but you can confirm some existing test\ncoverage if you, for example, set every HTSV_Result in the array to\nHEAPTUPLE_LIVE and run the regression tests. Tests relying on vacuum\nremoving the right tuples may fail. For example, select count(*) from\ngin_test_tbl where i @> array[1, 999]; in src/test/regress/sql/gin.sql\nfails for me since it sees a tuple it shouldn't.\n\n> Could you measure any performance difference?\n>\n> If so could you provide your test case?\n\nI created a large table and then updated a tuple on every page in the\nrelation and vacuumed it. I saw a consistent slight improvement in\nvacuum execution time. I profiled a bit with perf stat as well. The\ndifference is relatively small for this kind of example, but I can\nwork on a more compelling, realistic example. I think eliminating the\nretry loop is also useful, as I have heard that users have had to\ncancel vacuums which were in this retry loop for too long.\n\n- Melanie\n\n\n",
"msg_date": "Tue, 29 Aug 2023 09:21:37 -0400",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Eliminate redundant tuple visibility check in vacuum"
},
{
"msg_contents": "> On Tue, Aug 29, 2023 at 5:07 AM David Geier <[email protected]> wrote:\n> > Could you measure any performance difference?\n> >\n> > If so could you provide your test case?\n>\n> I created a large table and then updated a tuple on every page in the\n> relation and vacuumed it. I saw a consistent slight improvement in\n> vacuum execution time. I profiled a bit with perf stat as well. The\n> difference is relatively small for this kind of example, but I can\n> work on a more compelling, realistic example. I think eliminating the\n> retry loop is also useful, as I have heard that users have had to\n> cancel vacuums which were in this retry loop for too long.\n\nJust to provide a specific test case, if you create a small table like this\n\ncreate table foo (a int, b int, c int) with(autovacuum_enabled=false);\ninsert into foo select i, i, i from generate_series(1, 10000000);\n\nAnd then vacuum it. I find that with my patch applied I see a\nconsistent ~9% speedup (averaged across multiple runs).\n\nmaster: ~533ms\npatch: ~487ms\n\nAnd in the profile, with my patch applied, you notice less time spent\nin HeapTupleSatisfiesVacuumHorizon()\n\nmaster:\n 11.83% postgres postgres [.] heap_page_prune\n 11.59% postgres postgres [.] heap_prepare_freeze_tuple\n 8.77% postgres postgres [.] lazy_scan_prune\n 8.01% postgres postgres [.] HeapTupleSatisfiesVacuumHorizon\n 7.79% postgres postgres [.] heap_tuple_should_freeze\n\npatch:\n 13.41% postgres postgres [.] heap_prepare_freeze_tuple\n 9.88% postgres postgres [.] heap_page_prune\n 8.61% postgres postgres [.] lazy_scan_prune\n 7.00% postgres postgres [.] heap_tuple_should_freeze\n 6.43% postgres postgres [.] HeapTupleSatisfiesVacuumHorizon\n\n- Melanie\n\n\n",
"msg_date": "Wed, 30 Aug 2023 20:59:07 -0400",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Eliminate redundant tuple visibility check in vacuum"
},
{
"msg_contents": "Hi Melanie,\n\nOn 8/31/23 02:59, Melanie Plageman wrote:\n>> I created a large table and then updated a tuple on every page in the\n>> relation and vacuumed it. I saw a consistent slight improvement in\n>> vacuum execution time. I profiled a bit with perf stat as well. The\n>> difference is relatively small for this kind of example, but I can\n>> work on a more compelling, realistic example. I think eliminating the\n>> retry loop is also useful, as I have heard that users have had to\n>> cancel vacuums which were in this retry loop for too long.\n> Just to provide a specific test case, if you create a small table like this\n>\n> create table foo (a int, b int, c int) with(autovacuum_enabled=false);\n> insert into foo select i, i, i from generate_series(1, 10000000);\n>\n> And then vacuum it. I find that with my patch applied I see a\n> consistent ~9% speedup (averaged across multiple runs).\n>\n> master: ~533ms\n> patch: ~487ms\n>\n> And in the profile, with my patch applied, you notice less time spent\n> in HeapTupleSatisfiesVacuumHorizon()\n>\n> master:\n> 11.83% postgres postgres [.] heap_page_prune\n> 11.59% postgres postgres [.] heap_prepare_freeze_tuple\n> 8.77% postgres postgres [.] lazy_scan_prune\n> 8.01% postgres postgres [.] HeapTupleSatisfiesVacuumHorizon\n> 7.79% postgres postgres [.] heap_tuple_should_freeze\n>\n> patch:\n> 13.41% postgres postgres [.] heap_prepare_freeze_tuple\n> 9.88% postgres postgres [.] heap_page_prune\n> 8.61% postgres postgres [.] lazy_scan_prune\n> 7.00% postgres postgres [.] heap_tuple_should_freeze\n> 6.43% postgres postgres [.] HeapTupleSatisfiesVacuumHorizon\n\nThanks a lot for providing additional information and the test case.\n\nI tried it on a release build and I also see a 10% speed-up. I reset the \nvisibility map between VACUUM runs, see:\n\nCREATE EXTENSION pg_visibility; CREATE TABLE foo (a INT, b INT, c INT) \nWITH(autovacuum_enabled=FALSE); INSERT INTO foo SELECT i, i, i from \ngenerate_series(1, 10000000) i; VACUUM foo; SELECT \npg_truncate_visibility_map('foo'); VACUUM foo; SELECT \npg_truncate_visibility_map('foo'); VACUUM foo; ...\n\nThe first patch, which refactors the code so we can pass the result of \nthe visibility checks to the caller, looks good to me.\n\nRegarding the 2nd patch (disclaimer: I'm not too familiar with that area \nof the code): I don't completely understand why the retry loop is not \nneeded anymore and how you now detect/handle the possible race \ncondition? It can still happen that an aborting transaction changes the \nstate of a row after heap_page_prune() looked at that row. Would that \ncase now not be ignored?\n\n-- \nDavid Geier\n(ServiceNow)\n\n\n\n",
"msg_date": "Thu, 31 Aug 2023 11:39:49 +0200",
"msg_from": "David Geier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Eliminate redundant tuple visibility check in vacuum"
},
{
"msg_contents": "On Mon, Aug 28, 2023 at 7:49 PM Melanie Plageman\n<[email protected]> wrote:\n> While working on a set of patches to combine the freeze and visibility\n> map WAL records into the prune record, I wrote the attached patches\n> reusing the tuple visibility information collected in heap_page_prune()\n> back in lazy_scan_prune().\n>\n> heap_page_prune() collects the HTSV_Result for every tuple on a page\n> and saves it in an array used by heap_prune_chain(). If we make that\n> array available to lazy_scan_prune(), it can use it when collecting\n> stats for vacuum and determining whether or not to freeze tuples.\n> This avoids calling HeapTupleSatisfiesVacuum() again on every tuple in\n> the page.\n>\n> It also gets rid of the retry loop in lazy_scan_prune().\n\nIn general, I like these patches. I think it's a clever approach, and\nI don't really see any down side. It should just be straight-up better\nthan what we have now, and if it's not better, it still shouldn't be\nany worse.\n\nI have a few suggestions:\n\n- Rather than removing the rather large comment block at the top of\nlazy_scan_prune() I'd like to see it rewritten to explain how we now\ndeal with the problem. I'd suggest leaving the first paragraph (\"Prior\nto...\") just as it is and replace all the words following \"The\napproach we take now is\" with a description of the approach that this\npatch takes to the problem.\n\n- I'm not a huge fan of the caller of heap_page_prune() having to know\nhow to initialize the PruneResult. Can heap_page_prune() itself do\nthat work, so that presult is an out parameter rather than an in-out\nparameter? Or, if not, can it be moved to a new helper function, like\nheap_page_prune_init(), rather than having that logic in 2+ places?\n\n- int ndeleted,\n- nnewlpdead;\n-\n- ndeleted = heap_page_prune(relation, buffer, vistest, limited_xmin,\n- limited_ts, &nnewlpdead, NULL);\n+ int ndeleted = heap_page_prune(relation, buffer, vistest, limited_xmin,\n+ limited_ts, &presult, NULL);\n\n- I don't particularly like merging the declaration with the\nassignment unless the call is narrow enough that we don't need a line\nbreak in there, which is not the case here.\n\n- I haven't thoroughly investigated yet, but I wonder if there are any\nother places where comments need updating. As a general note, I find\nit desirable for a function's header comment to mention whether any\npointer parameters are in parameters, out parameters, or in-out\nparameters, and what the contract between caller and callee actually\nis.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 31 Aug 2023 14:03:10 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Eliminate redundant tuple visibility check in vacuum"
},
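A sketch of what that suggestion could look like (an illustration, not the actual patch): the result struct carries only pruning outputs, and heap_page_prune() itself initializes every field, so callers can treat it as a pure out parameter:

    typedef struct PruneResult
    {
        int     ndeleted;       /* tuples deleted from the page */
        int     nnewlpdead;     /* items newly set LP_DEAD */
    } PruneResult;

    void
    heap_page_prune(Relation relation, Buffer buffer,
                    GlobalVisState *vistest,
                    PruneResult *presult,
                    OffsetNumber *off_loc)
    {
        /* the callee owns initialization; callers never pre-fill it */
        presult->ndeleted = 0;
        presult->nnewlpdead = 0;

        /* ... pruning proper, updating presult as it goes ... */
    }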
{
"msg_contents": "On Thu, Aug 31, 2023 at 2:03 PM Robert Haas <[email protected]> wrote:\n>\n> I have a few suggestions:\n>\n> - Rather than removing the rather large comment block at the top of\n> lazy_scan_prune() I'd like to see it rewritten to explain how we now\n> deal with the problem. I'd suggest leaving the first paragraph (\"Prior\n> to...\") just as it is and replace all the words following \"The\n> approach we take now is\" with a description of the approach that this\n> patch takes to the problem.\n\nGood idea. I've updated the comment. I also explain why this new\napproach works in the commit message and reference the commit which\nadded the previous approach.\n\n> - I'm not a huge fan of the caller of heap_page_prune() having to know\n> how to initialize the PruneResult. Can heap_page_prune() itself do\n> that work, so that presult is an out parameter rather than an in-out\n> parameter? Or, if not, can it be moved to a new helper function, like\n> heap_page_prune_init(), rather than having that logic in 2+ places?\n\nAh, yes. Now that it has two callers, and since it is exclusively an\noutput parameter, it is quite ugly to initialize it in both callers.\nFixed in the attached.\n\n> - int ndeleted,\n> - nnewlpdead;\n> -\n> - ndeleted = heap_page_prune(relation, buffer, vistest, limited_xmin,\n> - limited_ts, &nnewlpdead, NULL);\n> + int ndeleted = heap_page_prune(relation, buffer, vistest, limited_xmin,\n> + limited_ts, &presult, NULL);\n>\n> - I don't particularly like merging the declaration with the\n> assignment unless the call is narrow enough that we don't need a line\n> break in there, which is not the case here.\n\nI have changed this.\n\n> - I haven't thoroughly investigated yet, but I wonder if there are any\n> other places where comments need updating. As a general note, I find\n> it desirable for a function's header comment to mention whether any\n> pointer parameters are in parameters, out parameters, or in-out\n> parameters, and what the contract between caller and callee actually\n> is.\n\nI've investigated vacuumlazy.c and pruneheap.c and looked at the\ncommit that added the retry loop (8523492d4e349) to see everywhere it\nadded comments and don't see anywhere else that needs updating.\n\nI have updated lazy_scan_prune()'s function header comment to describe\nthe nature of the in-out and output parameters and the contract.\n\n- Melanie",
"msg_date": "Thu, 31 Aug 2023 18:29:29 -0400",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Eliminate redundant tuple visibility check in vacuum"
},
{
"msg_contents": "On Thu, Aug 31, 2023 at 5:39 AM David Geier <[email protected]> wrote:\n> Regarding the 2nd patch (disclaimer: I'm not too familiar with that area\n> of the code): I don't completely understand why the retry loop is not\n> needed anymore and how you now detect/handle the possible race\n> condition? It can still happen that an aborting transaction changes the\n> state of a row after heap_page_prune() looked at that row. Would that\n> case now not be ignored?\n\nThanks for asking. I've updated the comment in the code and the commit\nmessage about this, as it seems important to be clear.\n\nAny inserting transaction which aborts after heap_page_prune()'s\nvisibility check will now be of no concern to lazy_scan_prune(). Since\nwe don't do the visibility check again, we won't find the tuple\nHEAPTUPLE_DEAD and thus won't have the problem of adding the tuple to\nthe array of tuples for vacuum to reap. This does mean that we won't\nreap and remove tuples whose inserting transactions abort right after\nheap_page_prune()'s visibility check. But, this doesn't seem like an\nissue. They may not be removed until the next vacuum, but ISTM it is\nactually worse to pay the cost of re-pruning the whole page just to\nclean up that one tuple. Maybe if that renders the page all visible\nand we can mark it as such in the visibility map -- but that seems\nlike a relatively narrow use case.\n\n- Melanie\n\n\n",
"msg_date": "Thu, 31 Aug 2023 18:35:19 -0400",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Eliminate redundant tuple visibility check in vacuum"
},
{
"msg_contents": "On Thu, Aug 31, 2023 at 3:35 PM Melanie Plageman\n<[email protected]> wrote:\n> Any inserting transaction which aborts after heap_page_prune()'s\n> visibility check will now be of no concern to lazy_scan_prune(). Since\n> we don't do the visibility check again, we won't find the tuple\n> HEAPTUPLE_DEAD and thus won't have the problem of adding the tuple to\n> the array of tuples for vacuum to reap. This does mean that we won't\n> reap and remove tuples whose inserting transactions abort right after\n> heap_page_prune()'s visibility check. But, this doesn't seem like an\n> issue.\n\nIt's definitely not an issue.\n\nThe loop is essentially a hacky way of getting a consistent picture of\nwhich tuples should be treated as HEAPTUPLE_DEAD, and which tuples\nneed to be left behind (consistent at the level of each page and each\nHOT chain, at least). Making that explicit seems strictly better.\n\n> They may not be removed until the next vacuum, but ISTM it is\n> actually worse to pay the cost of re-pruning the whole page just to\n> clean up that one tuple. Maybe if that renders the page all visible\n> and we can mark it as such in the visibility map -- but that seems\n> like a relatively narrow use case.\n\nThe chances of actually hitting the retry are microscopic anyway. It\nhas nothing to do with making sure that dead tuples from aborted\ntuples get removed for its own sake, or anything. Rather, the retry is\nall about making sure that all TIDs that get removed from indexes can\nonly point to LP_DEAD stubs. Prior to Postgres 14, HEAPTUPLE_DEAD\ntuples with storage would very occasionally be left behind, which made\nlife difficult in a bunch of other places -- for no good reason.\n\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 31 Aug 2023 18:25:05 -0700",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Eliminate redundant tuple visibility check in vacuum"
},
{
"msg_contents": "Hi,\n\nOn 9/1/23 03:25, Peter Geoghegan wrote:\n> On Thu, Aug 31, 2023 at 3:35 PM Melanie Plageman\n> <[email protected]> wrote:\n>> Any inserting transaction which aborts after heap_page_prune()'s\n>> visibility check will now be of no concern to lazy_scan_prune(). Since\n>> we don't do the visibility check again, we won't find the tuple\n>> HEAPTUPLE_DEAD and thus won't have the problem of adding the tuple to\n>> the array of tuples for vacuum to reap. This does mean that we won't\n>> reap and remove tuples whose inserting transactions abort right after\n>> heap_page_prune()'s visibility check. But, this doesn't seem like an\n>> issue.\n> It's definitely not an issue.\n>\n> The loop is essentially a hacky way of getting a consistent picture of\n> which tuples should be treated as HEAPTUPLE_DEAD, and which tuples\n> need to be left behind (consistent at the level of each page and each\n> HOT chain, at least). Making that explicit seems strictly better.\n>\n>> They may not be removed until the next vacuum, but ISTM it is\n>> actually worse to pay the cost of re-pruning the whole page just to\n>> clean up that one tuple. Maybe if that renders the page all visible\n>> and we can mark it as such in the visibility map -- but that seems\n>> like a relatively narrow use case.\n> The chances of actually hitting the retry are microscopic anyway. It\n> has nothing to do with making sure that dead tuples from aborted\n> tuples get removed for its own sake, or anything. Rather, the retry is\n> all about making sure that all TIDs that get removed from indexes can\n> only point to LP_DEAD stubs. Prior to Postgres 14, HEAPTUPLE_DEAD\n> tuples with storage would very occasionally be left behind, which made\n> life difficult in a bunch of other places -- for no good reason.\n>\nThat makes sense and seems like a much better compromise. Thanks for the \nexplanations. Please update the comment to document the corner case and \nhow we handle it.\n\n-- \nDavid Geier\n(ServiceNow)\n\n\n\n",
"msg_date": "Fri, 1 Sep 2023 10:47:10 +0200",
"msg_from": "David Geier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Eliminate redundant tuple visibility check in vacuum"
},
{
"msg_contents": "On Thu, Aug 31, 2023 at 6:29 PM Melanie Plageman\n<[email protected]> wrote:\n> I have changed this.\n\nI spent a bunch of time today looking at this, thinking maybe I could\ncommit it. But then I got cold feet.\n\nWith these patches applied, PruneResult ends up being declared in\nheapam.h, with a comment that says /* State returned from pruning */.\nBut that comment isn't accurate. The two new members that get added to\nthe structure by 0002, namely nnewlpdead and htsv, are in fact state\nthat is returned from pruning. But the other 5 members aren't. They're\njust initialized to constant values by pruning and then filled in for\nreal by the vacuum logic. That's extremely weird. It would be fine if\nheap_page_prune() just grew a new output argument that only returned\nthe HTSV results, or perhaps it could make sense to bundle any\nexisting out parameters together into a struct and then add new things\nto that struct instead of adding even more parameters to the function\nitself. But there doesn't seem to be any good reason to muddle\ntogether the new output parameters for heap_page_prune() with a bunch\nof state that is currently internal to vacuumlazy.c.\n\nI realize that the shape of the patches probably stems from the fact\nthat they started out life as part of a bigger patch set. But to be\ncommitted independently, they need to be shaped in a way that makes\nsense independently, and I don't think this qualifies. On the plus\nside, it seems to me that it's probably not that much work to fix this\nissue and that the result would likely be a smaller patch than what\nyou have now, which is something.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 6 Sep 2023 13:04:22 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Eliminate redundant tuple visibility check in vacuum"
},
{
"msg_contents": "On Wed, Sep 6, 2023 at 1:04 PM Robert Haas <[email protected]> wrote:\n>\n> On Thu, Aug 31, 2023 at 6:29 PM Melanie Plageman\n> <[email protected]> wrote:\n> > I have changed this.\n>\n> I spent a bunch of time today looking at this, thinking maybe I could\n> commit it. But then I got cold feet.\n>\n> With these patches applied, PruneResult ends up being declared in\n> heapam.h, with a comment that says /* State returned from pruning */.\n> But that comment isn't accurate. The two new members that get added to\n> the structure by 0002, namely nnewlpdead and htsv, are in fact state\n> that is returned from pruning. But the other 5 members aren't. They're\n> just initialized to constant values by pruning and then filled in for\n> real by the vacuum logic. That's extremely weird. It would be fine if\n> heap_page_prune() just grew a new output argument that only returned\n> the HTSV results, or perhaps it could make sense to bundle any\n> existing out parameters together into a struct and then add new things\n> to that struct instead of adding even more parameters to the function\n> itself. But there doesn't seem to be any good reason to muddle\n> together the new output parameters for heap_page_prune() with a bunch\n> of state that is currently internal to vacuumlazy.c.\n>\n> I realize that the shape of the patches probably stems from the fact\n> that they started out life as part of a bigger patch set. But to be\n> committed independently, they need to be shaped in a way that makes\n> sense independently, and I don't think this qualifies. On the plus\n> side, it seems to me that it's probably not that much work to fix this\n> issue and that the result would likely be a smaller patch than what\n> you have now, which is something.\n\nYeah, I think this is a fair concern. I have addressed it in the\nattached patches.\n\nI thought a lot about whether or not adding a PruneResult which\ncontains only the output parameters and result of heap_page_prune() is\nannoying since we have so many other *Prune* data structures. I\ndecided it's not annoying. In some cases, four outputs don't merit a\nnew structure. In this case, I think it declutters the code a bit --\nindependent of any other patches I may be writing :)\n\n- Melanie",
"msg_date": "Wed, 6 Sep 2023 17:21:22 -0400",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Eliminate redundant tuple visibility check in vacuum"
},
{
"msg_contents": "On Wed, Sep 6, 2023 at 5:21 PM Melanie Plageman\n<[email protected]> wrote:\n> Yeah, I think this is a fair concern. I have addressed it in the\n> attached patches.\n>\n> I thought a lot about whether or not adding a PruneResult which\n> contains only the output parameters and result of heap_page_prune() is\n> annoying since we have so many other *Prune* data structures. I\n> decided it's not annoying. In some cases, four outputs don't merit a\n> new structure. In this case, I think it declutters the code a bit --\n> independent of any other patches I may be writing :)\n\nI took a look at 0001 and I think that it's incorrect. In the existing\ncode, *off_loc gets updated before each call to\nheap_prune_satisfies_vacuum(). This means that if an error occurs in\nheap_prune_satisfies_vacuum(), *off_loc will as of that moment contain\nthe relevant offset number. In your version, the relevant offset\nnumber will only be stored in some local structure to which the caller\ndoesn't yet have access. The difference is meaningful. lazy_scan_prune\npasses off_loc as vacrel->offnum, which means that if an error\nhappens, vacrel->offnum will have the right value, and so when the\nerror context callback installed by heap_vacuum_rel() fires, namely\nvacuum_error_callback(), it can look at vacrel->offnum and know where\nthe error happened. With your patch, I think that would no longer\nwork.\n\nI haven't run the regression suite with 0001 applied. I'm guessing\nthat you did, and that they passed, which if true means that we don't\nhave a test for this, which is a shame, although writing such a test\nmight be a bit tricky. If there is a test case for this and you didn't\nrun it, woops. This is also why I think it's *extremely* important for\nthe header comment of a function that takes pointer parameters to\ndocument the semantics of those pointers. Normally they are in\nparameters or out parameters or in-out parameters, but here it's\nsomething even more complicated. The existing header comment says\n\"off_loc is the offset location required by the caller to use in error\ncallback,\" which I didn't really understand until I actually looked at\nwhat the code is doing, so I consider that somebody could've done a\nbetter job writing this comment, but in theory you could've also\nnoticed that, at least AFAICS, there's no way for the function to\nreturn with *off_loc set to anything other than InvalidOffsetNumber.\nThat means that the statements which set *off_loc to other values must\nhave some other purpose.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 7 Sep 2023 13:37:35 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Eliminate redundant tuple visibility check in vacuum"
},
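For reference, a simplified fragment of the pattern being described here (not the literal pruneheap.c source): the caller's location variable has to be advanced before each visibility check, because vacuum_error_callback() reports whatever vacrel->offnum holds if an ERROR escapes partway through the page:

    /* inside heap_page_prune()'s per-item loop */
    if (off_loc)
        *off_loc = offnum;      /* VACUUM passes &vacrel->offnum here */

    res = heap_prune_satisfies_vacuum(&prstate, &tup, buffer);

    /* ... */

    /* only once the whole page has been processed successfully: */
    if (off_loc)
        *off_loc = InvalidOffsetNumber;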
{
"msg_contents": "On Thu, Sep 7, 2023 at 1:37 PM Robert Haas <[email protected]> wrote:\n>\n> On Wed, Sep 6, 2023 at 5:21 PM Melanie Plageman\n> <[email protected]> wrote:\n> > Yeah, I think this is a fair concern. I have addressed it in the\n> > attached patches.\n> >\n> > I thought a lot about whether or not adding a PruneResult which\n> > contains only the output parameters and result of heap_page_prune() is\n> > annoying since we have so many other *Prune* data structures. I\n> > decided it's not annoying. In some cases, four outputs don't merit a\n> > new structure. In this case, I think it declutters the code a bit --\n> > independent of any other patches I may be writing :)\n>\n> I took a look at 0001 and I think that it's incorrect. In the existing\n> code, *off_loc gets updated before each call to\n> heap_prune_satisfies_vacuum(). This means that if an error occurs in\n> heap_prune_satisfies_vacuum(), *off_loc will as of that moment contain\n> the relevant offset number. In your version, the relevant offset\n> number will only be stored in some local structure to which the caller\n> doesn't yet have access. The difference is meaningful. lazy_scan_prune\n> passes off_loc as vacrel->offnum, which means that if an error\n> happens, vacrel->offnum will have the right value, and so when the\n> error context callback installed by heap_vacuum_rel() fires, namely\n> vacuum_error_callback(), it can look at vacrel->offnum and know where\n> the error happened. With your patch, I think that would no longer\n> work.\n\nYou are right. That is a major problem. Thank you for finding that. I\nwas able to confirm the breakage by patching in an error to\nheap_page_prune() after we have set off_loc and confirming that the\nerror context has the offset in master and is missing it with my patch\napplied.\n\nI can fix it by changing the type of PruneResult->off_loc to be an\nOffsetNumber pointer. This does mean that PruneResult will be\ninitialized partially by heap_page_prune() callers. I wonder if you\nthink that undermines the case for making a new struct.\n\nI still want to eliminate the NULL check of off_loc in\nheap_page_prune() by making it a required parameter. Even though\non-access pruning does not have an error callback mechanism which uses\nthe offset, it seems better to have a pointless local variable in\nheap_page_prune_opt() than to do a NULL check for every tuple pruned.\n\n> I haven't run the regression suite with 0001 applied. I'm guessing\n> that you did, and that they passed, which if true means that we don't\n> have a test for this, which is a shame, although writing such a test\n> might be a bit tricky. If there is a test case for this and you didn't\n> run it, woops.\n\nThere is no test coverage for the vacuum error callback context\ncurrently (tests passed for me). I looked into how we might add such a\ntest. First, I investigated what kind of errors might occur during\nheap_prune_satisfies_vacuum(). Some of the multixact functions called\nby HeapTupleSatisfiesVacuumHorizon() could error out -- for example\nGetMultiXactIdMembers(). It seems difficult to trigger the errors in\nGetMultiXactIdMembers(), as we would have to cause wraparound. It\nwould be even more difficult to ensure that we hit those specific\nerrors from a call stack containing heap_prune_satisfies_vacuum(). As\nsuch, I'm not sure I can think of a way to protect future developers\nfrom repeating my mistake--apart from improving the comment like you\nmentioned.\n\n- Melanie\n\n\n",
"msg_date": "Thu, 7 Sep 2023 15:09:54 -0400",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Eliminate redundant tuple visibility check in vacuum"
},
{
"msg_contents": "On Thu, Sep 7, 2023 at 3:10 PM Melanie Plageman\n<[email protected]> wrote:\n> I can fix it by changing the type of PruneResult->off_loc to be an\n> OffsetNumber pointer. This does mean that PruneResult will be\n> initialized partially by heap_page_prune() callers. I wonder if you\n> think that undermines the case for making a new struct.\n\nI think that it undermines the case for including that particular\nargument in the struct. It's not really a Prune*Result* if the caller\ninitializes it in part. It seems fairly reasonable to still have a\nPruneResult struct for the other stuff, though, at least to me. How do\nyou see it?\n\n(One could also argue that this is a somewhat more byzantine way of\ndoing error reporting than would be desirable, but fixing that problem\ndoesn't seem too straightforward so perhaps it's prudent to leave it\nwell enough alone.)\n\n> I still want to eliminate the NULL check of off_loc in\n> heap_page_prune() by making it a required parameter. Even though\n> on-access pruning does not have an error callback mechanism which uses\n> the offset, it seems better to have a pointless local variable in\n> heap_page_prune_opt() than to do a NULL check for every tuple pruned.\n\nIt doesn't seem important to me unless it improves performance. If\nit's just stylistic, I don't object, but I also don't really see a\nreason to care.\n\n> > I haven't run the regression suite with 0001 applied. I'm guessing\n> > that you did, and that they passed, which if true means that we don't\n> > have a test for this, which is a shame, although writing such a test\n> > might be a bit tricky. If there is a test case for this and you didn't\n> > run it, woops.\n>\n> There is no test coverage for the vacuum error callback context\n> currently (tests passed for me). I looked into how we might add such a\n> test. First, I investigated what kind of errors might occur during\n> heap_prune_satisfies_vacuum(). Some of the multixact functions called\n> by HeapTupleSatisfiesVacuumHorizon() could error out -- for example\n> GetMultiXactIdMembers(). It seems difficult to trigger the errors in\n> GetMultiXactIdMembers(), as we would have to cause wraparound. It\n> would be even more difficult to ensure that we hit those specific\n> errors from a call stack containing heap_prune_satisfies_vacuum(). As\n> such, I'm not sure I can think of a way to protect future developers\n> from repeating my mistake--apart from improving the comment like you\n> mentioned.\n\n004_verify_heapam.pl has some tests that intentionally corrupt pages\nand then use pg_amcheck to detect the corruption. Such an approach\ncould probably also be used here. But it's a pain to get such tests\nright, because any change to the page format due to endianness,\ndifferent block size, or whatever can make carefully-written tests go\nboom.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 7 Sep 2023 15:29:48 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Eliminate redundant tuple visibility check in vacuum"
},
{
"msg_contents": "On Thu, Sep 7, 2023 at 3:30 PM Robert Haas <[email protected]> wrote:\n>\n> On Thu, Sep 7, 2023 at 3:10 PM Melanie Plageman\n> <[email protected]> wrote:\n> > I can fix it by changing the type of PruneResult->off_loc to be an\n> > OffsetNumber pointer. This does mean that PruneResult will be\n> > initialized partially by heap_page_prune() callers. I wonder if you\n> > think that undermines the case for making a new struct.\n>\n> I think that it undermines the case for including that particular\n> argument in the struct. It's not really a Prune*Result* if the caller\n> initializes it in part. It seems fairly reasonable to still have a\n> PruneResult struct for the other stuff, though, at least to me. How do\n> you see it?\n\nYes, I think off_loc probably didn't belong in PruneResult to begin with.\nIt is inherently not a result of pruning since it will only be used in\nthe event that pruning doesn't complete (it errors out).\n\nIn the attached v4 patch set, I have both PruneResult and off_loc as\nparameters to heap_page_prune(). I have also added a separate commit\nwhich adds comments both above heap_page_prune()'s call site in\nlazy_scan_prune() and in the heap_page_prune() function header\nclarifying the various points we discussed.\n\n> > I still want to eliminate the NULL check of off_loc in\n> > heap_page_prune() by making it a required parameter. Even though\n> > on-access pruning does not have an error callback mechanism which uses\n> > the offset, it seems better to have a pointless local variable in\n> > heap_page_prune_opt() than to do a NULL check for every tuple pruned.\n>\n> It doesn't seem important to me unless it improves performance. If\n> it's just stylistic, I don't object, but I also don't really see a\n> reason to care.\n\nI did some performance testing but, as expected, I couldn't concoct a\nscenario where the overhead was noticeable in a profile. So much else\nis happening in that code, the NULL check basically doesn't matter\n(even though it isn't optimized out).\n\nI mostly wanted to remove the NULL checks because I found them\ndistracting (so, a stylistic complaint). However, upon further\nreflection, I actually think it is better if heap_page_prune_opt()\npasses NULL. heap_page_prune() has no error callback mechanism that\ncould use it, and passing a valid value implies otherwise. Also, as\nyou said, off_loc will always be InvalidOffsetNumber when\nheap_page_prune() returns normally, so having heap_page_prune_opt()\npass a dummy value might actually be more confusing for future\nprogrammers.\n\n> > > I haven't run the regression suite with 0001 applied. I'm guessing\n> > > that you did, and that they passed, which if true means that we don't\n> > > have a test for this, which is a shame, although writing such a test\n> > > might be a bit tricky. If there is a test case for this and you didn't\n> > > run it, woops.\n> >\n> > There is no test coverage for the vacuum error callback context\n> > currently (tests passed for me). I looked into how we might add such a\n> > test. First, I investigated what kind of errors might occur during\n> > heap_prune_satisfies_vacuum(). Some of the multixact functions called\n> > by HeapTupleSatisfiesVacuumHorizon() could error out -- for example\n> > GetMultiXactIdMembers(). It seems difficult to trigger the errors in\n> > GetMultiXactIdMembers(), as we would have to cause wraparound. 
It\n> > would be even more difficult to ensure that we hit those specific\n> > errors from a call stack containing heap_prune_satisfies_vacuum(). As\n> > such, I'm not sure I can think of a way to protect future developers\n> > from repeating my mistake--apart from improving the comment like you\n> > mentioned.\n>\n> 004_verify_heapam.pl has some tests that intentionally corrupt pages\n> and then use pg_amcheck to detect the corruption. Such an approach\n> could probably also be used here. But it's a pain to get such tests\n> right, because any change to the page format due to endianness,\n> different block size, or whatever can make carefully-written tests go\n> boom.\n\nCool! I hadn't examined how these tests work until now. I took a stab\nat writing a test in the existing 0004_verify_heapam.pl. The simplest\nthing would be if we could just vacuum the corrupted table (\"test\")\nafter running pg_amcheck and compare the error context to our\nexpectation. I found that this didn't work, though. In an assert\nbuild, vacuum trips an assert before it hits an error while vacuuming\na corrupted tuple in the \"test\" table. There might be a way of\nmodifying the existing test code to avoid this, but I tried the next\neasiest thing -- corrupting a tuple in the other existing table in the\nfile, \"junk\". This is possible to do, but we have to repeat a lot of\nthe setup code done for the \"test\" table to get the line pointer array\nand loop through and corrupt a tuple. In order to do this well, I\nwould want to refactor some of the boilerplate into a function. There\nare other fiddly bits as well that I needed to consider. I think a\ntest like this could be useful coverage of the some of the possible\nerrors that could happen in heap_prune_satisfies_vacuum(), but it\ndefinitely isn't coverage of pg_amcheck (and thus shouldn't go in that\nfile) and a whole new test which spins up a Postgres to cover\nvacuum_error_callback() seemed like a bit much.\n\n- Melanie",
"msg_date": "Thu, 7 Sep 2023 18:23:22 -0400",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Eliminate redundant tuple visibility check in vacuum"
},
{
"msg_contents": "On Thu, Sep 7, 2023 at 6:23 PM Melanie Plageman\n<[email protected]> wrote:\n> I mostly wanted to remove the NULL checks because I found them\n> distracting (so, a stylistic complaint). However, upon further\n> reflection, I actually think it is better if heap_page_prune_opt()\n> passes NULL. heap_page_prune() has no error callback mechanism that\n> could use it, and passing a valid value implies otherwise. Also, as\n> you said, off_loc will always be InvalidOffsetNumber when\n> heap_page_prune() returns normally, so having heap_page_prune_opt()\n> pass a dummy value might actually be more confusing for future\n> programmers.\n\nI'll look at the new patches more next week, but I wanted to comment\non this point. I think this is kind of six of one, half a dozen of the\nother. It's not that hard to spot a variable that's only used in a\nfunction call and never initialized beforehand or used afterward, and\nif someone really feels the need to hammer home the point, they could\nalways name it dummy or dummy_loc or whatever. So this point doesn't\nreally carry a lot of weight with me. I actually think that the\nproposed change is probably better, but it seems like such a minor\nimprovement that I get slightly reluctant to make it, only because\nchurning the source code for arguable points sometimes annoys other\ndevelopers.\n\nBut I also had the thought that maybe it wouldn't be such a terrible\nidea if heap_page_prune_opt() actually used off_loc for some error\nreporting goodness. I mean, if HOT pruning fails, and we don't get the\ndetail as to which tuple caused the failure, we can always run VACUUM\nand it will give us that information, assuming of course that the same\nfailure happens again. But is there any reason why HOT pruning\nshouldn't include that error detail? If it did, then off_loc would be\npassed by all callers, at which point we surely would want to get rid\nof the branches.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 8 Sep 2023 11:06:04 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Eliminate redundant tuple visibility check in vacuum"
},
{
"msg_contents": "On Fri, Sep 8, 2023 at 11:06 AM Robert Haas <[email protected]> wrote:\n>\n> On Thu, Sep 7, 2023 at 6:23 PM Melanie Plageman\n> <[email protected]> wrote:\n> > I mostly wanted to remove the NULL checks because I found them\n> > distracting (so, a stylistic complaint). However, upon further\n> > reflection, I actually think it is better if heap_page_prune_opt()\n> > passes NULL. heap_page_prune() has no error callback mechanism that\n> > could use it, and passing a valid value implies otherwise. Also, as\n> > you said, off_loc will always be InvalidOffsetNumber when\n> > heap_page_prune() returns normally, so having heap_page_prune_opt()\n> > pass a dummy value might actually be more confusing for future\n> > programmers.\n>\n> I'll look at the new patches more next week, but I wanted to comment\n> on this point. I think this is kind of six of one, half a dozen of the\n> other. It's not that hard to spot a variable that's only used in a\n> function call and never initialized beforehand or used afterward, and\n> if someone really feels the need to hammer home the point, they could\n> always name it dummy or dummy_loc or whatever. So this point doesn't\n> really carry a lot of weight with me. I actually think that the\n> proposed change is probably better, but it seems like such a minor\n> improvement that I get slightly reluctant to make it, only because\n> churning the source code for arguable points sometimes annoys other\n> developers.\n>\n> But I also had the thought that maybe it wouldn't be such a terrible\n> idea if heap_page_prune_opt() actually used off_loc for some error\n> reporting goodness. I mean, if HOT pruning fails, and we don't get the\n> detail as to which tuple caused the failure, we can always run VACUUM\n> and it will give us that information, assuming of course that the same\n> failure happens again. But is there any reason why HOT pruning\n> shouldn't include that error detail? If it did, then off_loc would be\n> passed by all callers, at which point we surely would want to get rid\n> of the branches.\n\nThis is a good idea. I will work on a separate patch set to add an\nerror context callback for on-access HOT pruning.\n\n- Melanie\n\n\n",
"msg_date": "Mon, 11 Sep 2023 08:04:22 -0400",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Eliminate redundant tuple visibility check in vacuum"
},
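A hypothetical sketch of what such a callback could look like for on-access pruning follows; none of this is from an actual patch, and the struct and function names are invented:

    typedef struct PruneErrInfo
    {
        Relation    rel;
        BlockNumber blkno;
        OffsetNumber offnum;
    } PruneErrInfo;

    static void
    prune_error_callback(void *arg)
    {
        PruneErrInfo *errinfo = (PruneErrInfo *) arg;

        if (OffsetNumberIsValid(errinfo->offnum))
            errcontext("while pruning block %u offset %u of relation \"%s\"",
                       errinfo->blkno, errinfo->offnum,
                       RelationGetRelationName(errinfo->rel));
        else
            errcontext("while pruning block %u of relation \"%s\"",
                       errinfo->blkno,
                       RelationGetRelationName(errinfo->rel));
    }

    /* heap_page_prune_opt() would push this via an ErrorContextCallback
     * and keep errinfo.offnum current through off_loc as pruning proceeds */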
{
"msg_contents": "Hi,\n\nOn 2023-09-07 18:23:22 -0400, Melanie Plageman wrote:\n> From e986940e546171d1f1d06f62a101d695a8481e7a Mon Sep 17 00:00:00 2001\n> From: Melanie Plageman <[email protected]>\n> Date: Wed, 6 Sep 2023 14:57:20 -0400\n> Subject: [PATCH v4 2/3] Move heap_page_prune output parameters into struct\n> \n> Add PruneResult, a structure containing the output parameters and result\n> of heap_page_prune(). Reorganizing the results of heap_page_prune() into\n> a struct simplifies the function signature and provides a location for\n> future commits to store additional output parameters.\n> \n> Discussion: https://postgr.es/m/CAAKRu_br124qsGJieuYA0nGjywEukhK1dKBfRdby_4yY3E9SXA%40mail.gmail.com\n> ---\n> src/backend/access/heap/pruneheap.c | 33 +++++++++++++---------------\n> src/backend/access/heap/vacuumlazy.c | 17 +++++---------\n> src/include/access/heapam.h | 13 +++++++++--\n> src/tools/pgindent/typedefs.list | 1 +\n> 4 files changed, 33 insertions(+), 31 deletions(-)\n> \n> diff --git a/src/backend/access/heap/pruneheap.c b/src/backend/access/heap/pruneheap.c\n> index 392b54f093..5ac286e152 100644\n> --- a/src/backend/access/heap/pruneheap.c\n> +++ b/src/backend/access/heap/pruneheap.c\n> @@ -155,15 +155,13 @@ heap_page_prune_opt(Relation relation, Buffer buffer)\n> \t\t */\n> \t\tif (PageIsFull(page) || PageGetHeapFreeSpace(page) < minfree)\n> \t\t{\n> -\t\t\tint\t\t\tndeleted,\n> -\t\t\t\t\t\tnnewlpdead;\n> +\t\t\tPruneResult presult;\n> \n> -\t\t\tndeleted = heap_page_prune(relation, buffer, vistest,\n> -\t\t\t\t\t\t\t\t\t &nnewlpdead, NULL);\n> +\t\t\theap_page_prune(relation, buffer, vistest, &presult, NULL);\n> \n> \t\t\t/*\n> \t\t\t * Report the number of tuples reclaimed to pgstats. This is\n> -\t\t\t * ndeleted minus the number of newly-LP_DEAD-set items.\n> +\t\t\t * presult.ndeleted minus the number of newly-LP_DEAD-set items.\n> \t\t\t *\n> \t\t\t * We derive the number of dead tuples like this to avoid totally\n> \t\t\t * forgetting about items that were set to LP_DEAD, since they\n> @@ -175,9 +173,9 @@ heap_page_prune_opt(Relation relation, Buffer buffer)\n> \t\t\t * tracks ndeleted, since it will set the same LP_DEAD items to\n> \t\t\t * LP_UNUSED separately.\n> \t\t\t */\n> -\t\t\tif (ndeleted > nnewlpdead)\n> +\t\t\tif (presult.ndeleted > presult.nnewlpdead)\n> \t\t\t\tpgstat_update_heap_dead_tuples(relation,\n> -\t\t\t\t\t\t\t\t\t\t\t ndeleted - nnewlpdead);\n> +\t\t\t\t\t\t\t\t\t\t\t presult.ndeleted - presult.nnewlpdead);\n> \t\t}\n> \n> \t\t/* And release buffer lock */\n> @@ -204,24 +202,22 @@ heap_page_prune_opt(Relation relation, Buffer buffer)\n> * (see heap_prune_satisfies_vacuum and\n> * HeapTupleSatisfiesVacuum).\n> *\n> - * Sets *nnewlpdead for caller, indicating the number of items that were\n> - * newly set LP_DEAD during prune operation.\n> + * presult contains output parameters needed by callers such as the number of\n> + * tuples removed and the number of line pointers newly marked LP_DEAD.\n> + * heap_page_prune() is responsible for initializing it.\n> *\n> * off_loc is the current offset into the line pointer array while pruning.\n> * This is used by vacuum to populate the error context message. On-access\n> * pruning has no such callback mechanism for populating the error context, so\n> * it passes NULL. 
When provided by the caller, off_loc is set exclusively by\n> * heap_page_prune().\n> - *\n> - * Returns the number of tuples deleted from the page during this call.\n> */\n> -int\n> +void\n> heap_page_prune(Relation relation, Buffer buffer,\n> \t\t\t\tGlobalVisState *vistest,\n> -\t\t\t\tint *nnewlpdead,\n> +\t\t\t\tPruneResult *presult,\n> \t\t\t\tOffsetNumber *off_loc)\n> {\n> -\tint\t\t\tndeleted = 0;\n> \tPage\t\tpage = BufferGetPage(buffer);\n> \tBlockNumber blockno = BufferGetBlockNumber(buffer);\n> \tOffsetNumber offnum,\n> @@ -247,6 +243,9 @@ heap_page_prune(Relation relation, Buffer buffer,\n> \tprstate.nredirected = prstate.ndead = prstate.nunused = 0;\n> \tmemset(prstate.marked, 0, sizeof(prstate.marked));\n> \n> +\tpresult->ndeleted = 0;\n> +\tpresult->nnewlpdead = 0;\n> +\n> \tmaxoff = PageGetMaxOffsetNumber(page);\n> \ttup.t_tableOid = RelationGetRelid(prstate.rel);\n> \n> @@ -321,7 +320,7 @@ heap_page_prune(Relation relation, Buffer buffer,\n> \t\t\tcontinue;\n> \n> \t\t/* Process this item or chain of items */\n> -\t\tndeleted += heap_prune_chain(buffer, offnum, &prstate);\n> +\t\tpresult->ndeleted += heap_prune_chain(buffer, offnum, &prstate);\n> \t}\n> \n> \t/* Clear the offset information once we have processed the given page. */\n> @@ -422,9 +421,7 @@ heap_page_prune(Relation relation, Buffer buffer,\n> \tEND_CRIT_SECTION();\n> \n> \t/* Record number of newly-set-LP_DEAD items for caller */\n> -\t*nnewlpdead = prstate.ndead;\n> -\n> -\treturn ndeleted;\n> +\tpresult->nnewlpdead = prstate.ndead;\n> }\n> \n> \n> diff --git a/src/backend/access/heap/vacuumlazy.c b/src/backend/access/heap/vacuumlazy.c\n> index 102cc97358..7ead9cfe9d 100644\n> --- a/src/backend/access/heap/vacuumlazy.c\n> +++ b/src/backend/access/heap/vacuumlazy.c\n> @@ -1544,12 +1544,11 @@ lazy_scan_prune(LVRelState *vacrel,\n> \tItemId\t\titemid;\n> \tHeapTupleData tuple;\n> \tHTSV_Result res;\n> -\tint\t\t\ttuples_deleted,\n> -\t\t\t\ttuples_frozen,\n> +\tPruneResult presult;\n> +\tint\t\t\ttuples_frozen,\n> \t\t\t\tlpdead_items,\n> \t\t\t\tlive_tuples,\n> \t\t\t\trecently_dead_tuples;\n> -\tint\t\t\tnnewlpdead;\n> \tHeapPageFreeze pagefrz;\n> \tint64\t\tfpi_before = pgWalUsage.wal_fpi;\n> \tOffsetNumber deadoffsets[MaxHeapTuplesPerPage];\n> @@ -1572,7 +1571,6 @@ retry:\n> \tpagefrz.FreezePageRelminMxid = vacrel->NewRelminMxid;\n> \tpagefrz.NoFreezePageRelfrozenXid = vacrel->NewRelfrozenXid;\n> \tpagefrz.NoFreezePageRelminMxid = vacrel->NewRelminMxid;\n> -\ttuples_deleted = 0;\n> \ttuples_frozen = 0;\n> \tlpdead_items = 0;\n> \tlive_tuples = 0;\n> @@ -1581,9 +1579,8 @@ retry:\n> \t/*\n> \t * Prune all HOT-update chains in this page.\n> \t *\n> -\t * We count tuples removed by the pruning step as tuples_deleted. Its\n> -\t * final value can be thought of as the number of tuples that have been\n> -\t * deleted from the table. It should not be confused with lpdead_items;\n> +\t * We count the number of tuples removed from the page by the pruning step\n> +\t * in presult.ndeleted. 
It should not be confused with lpdead_items;\n> \t * lpdead_items's final value can be thought of as the number of tuples\n> \t * that were deleted from indexes.\n> \t *\n> @@ -1591,9 +1588,7 @@ retry:\n> \t * current offset when populating the error context message, so it is\n> \t * imperative that we pass its location to heap_page_prune.\n> \t */\n> -\ttuples_deleted = heap_page_prune(rel, buf, vacrel->vistest,\n> -\t\t\t\t\t\t\t\t\t &nnewlpdead,\n> -\t\t\t\t\t\t\t\t\t &vacrel->offnum);\n> +\theap_page_prune(rel, buf, vacrel->vistest, &presult, &vacrel->offnum);\n> \n> \t/*\n> \t * Now scan the page to collect LP_DEAD items and check for tuples\n> @@ -1933,7 +1928,7 @@ retry:\n> \t}\n> \n> \t/* Finally, add page-local counts to whole-VACUUM counts */\n> -\tvacrel->tuples_deleted += tuples_deleted;\n> +\tvacrel->tuples_deleted += presult.ndeleted;\n> \tvacrel->tuples_frozen += tuples_frozen;\n> \tvacrel->lpdead_items += lpdead_items;\n> \tvacrel->live_tuples += live_tuples;\n> diff --git a/src/include/access/heapam.h b/src/include/access/heapam.h\n> index 6598c4d7d8..2d3f149e4f 100644\n> --- a/src/include/access/heapam.h\n> +++ b/src/include/access/heapam.h\n> @@ -191,6 +191,15 @@ typedef struct HeapPageFreeze\n> \n> } HeapPageFreeze;\n> \n> +/*\n> + * Per-page state returned from pruning\n> + */\n> +typedef struct PruneResult\n> +{\n> +\tint\t\t\tndeleted;\t\t/* Number of tuples deleted from the page */\n> +\tint\t\t\tnnewlpdead;\t\t/* Number of newly LP_DEAD items */\n> +} PruneResult;\n\nI think it might be worth making the names a bit less ambiguous than they\nwere. It's a bit odd that one has \"new\" in the name, the other doesn't,\ndespite both being about newly marked things. And \"deleted\" seems somewhat\nambiguous, it could also be understood as marking things LP_DEAD. Maybe\nnnewunused?\n\n\n> static int\theap_prune_chain(Buffer buffer,\n> \t\t\t\t\t\t\t OffsetNumber rootoffnum,\n> +\t\t\t\t\t\t\t int8 *htsv,\n> \t\t\t\t\t\t\t PruneState *prstate);\n\nHm, do we really want to pass this explicitly to a bunch of functions? Seems\nlike it might be better to either pass the PruneResult around or to have a\npointer in PruneState?\n\n\n> \t\t/*\n> \t\t * The criteria for counting a tuple as live in this block need to\n> @@ -1682,7 +1664,7 @@ retry:\n> \t\t * (Cases where we bypass index vacuuming will violate this optimistic\n> \t\t * assumption, but the overall impact of that should be negligible.)\n> \t\t */\n> -\t\tswitch (res)\n> +\t\tswitch ((HTSV_Result) presult.htsv[offnum])\n> \t\t{\n> \t\t\tcase HEAPTUPLE_LIVE:\n\nI think we should assert that we have a valid HTSV_Result here, i.e. not\n-1. You could wrap the cast and Assert into an inline funciton as well.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 13 Sep 2023 10:29:43 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Eliminate redundant tuple visibility check in vacuum"
},
{
"msg_contents": "[ sorry for the delay getting back to this ]\n\nOn Wed, Sep 13, 2023 at 1:29 PM Andres Freund <[email protected]> wrote:\n> I think it might be worth making the names a bit less ambiguous than they\n> were. It's a bit odd that one has \"new\" in the name, the other doesn't,\n> despite both being about newly marked things. And \"deleted\" seems somewhat\n> ambiguous, it could also be understood as marking things LP_DEAD. Maybe\n> nnewunused?\n\nI like it the better the way Melanie did it. The current name may not\nbe for the best, but that can be changed some other time, in a\nseparate patch, if someone likes. For now, changing the name seems\nlike a can of worms we don't need to open; the existing names have\nprecedent on their side if nothing else.\n\n> > static int heap_prune_chain(Buffer buffer,\n> > OffsetNumber rootoffnum,\n> > + int8 *htsv,\n> > PruneState *prstate);\n>\n> Hm, do we really want to pass this explicitly to a bunch of functions? Seems\n> like it might be better to either pass the PruneResult around or to have a\n> pointer in PruneState?\n\nAs far as I can see, 0002 adds it to one function (heap_page_pune) and\n0003 adds it to one more (heap_prune_chain). That's not much of a\nbunch.\n\n> > /*\n> > * The criteria for counting a tuple as live in this block need to\n> > @@ -1682,7 +1664,7 @@ retry:\n> > * (Cases where we bypass index vacuuming will violate this optimistic\n> > * assumption, but the overall impact of that should be negligible.)\n> > */\n> > - switch (res)\n> > + switch ((HTSV_Result) presult.htsv[offnum])\n> > {\n> > case HEAPTUPLE_LIVE:\n>\n> I think we should assert that we have a valid HTSV_Result here, i.e. not\n> -1. You could wrap the cast and Assert into an inline funciton as well.\n\nThis isn't a bad idea, although I don't find it completely necessary either.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 21 Sep 2023 15:53:13 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Eliminate redundant tuple visibility check in vacuum"
},
{
"msg_contents": "On Thu, Sep 21, 2023 at 3:53 PM Robert Haas <[email protected]> wrote:\n> > > static int heap_prune_chain(Buffer buffer,\n> > > OffsetNumber rootoffnum,\n> > > + int8 *htsv,\n> > > PruneState *prstate);\n> >\n> > Hm, do we really want to pass this explicitly to a bunch of functions? Seems\n> > like it might be better to either pass the PruneResult around or to have a\n> > pointer in PruneState?\n>\n> As far as I can see, 0002 adds it to one function (heap_page_pune) and\n> 0003 adds it to one more (heap_prune_chain). That's not much of a\n> bunch.\n\nI didn't read this carefully enough. Actually, heap_prune_chain() is\nthe *only* function that gets int8 *htsv as an argument. I don't\nunderstand how that's a bunch ... unless there are later patches not\nshown here that you're worried abot. What happens in 0002 is a\nfunction getting PruneResult * as an argument, not int8 *htsv.\n\nHonestly I think 0002 and 0003 are ready to commit, if you're not too\nopposed to them, or if we can find some relatively small changes that\nwould address your objections.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 21 Sep 2023 16:07:27 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Eliminate redundant tuple visibility check in vacuum"
},
{
"msg_contents": "On Thu, Sep 21, 2023 at 3:53 PM Robert Haas <[email protected]> wrote:\n>\n> On Wed, Sep 13, 2023 at 1:29 PM Andres Freund <[email protected]> wrote:\n>\n> > > /*\n> > > * The criteria for counting a tuple as live in this block need to\n> > > @@ -1682,7 +1664,7 @@ retry:\n> > > * (Cases where we bypass index vacuuming will violate this optimistic\n> > > * assumption, but the overall impact of that should be negligible.)\n> > > */\n> > > - switch (res)\n> > > + switch ((HTSV_Result) presult.htsv[offnum])\n> > > {\n> > > case HEAPTUPLE_LIVE:\n> >\n> > I think we should assert that we have a valid HTSV_Result here, i.e. not\n> > -1. You could wrap the cast and Assert into an inline funciton as well.\n>\n> This isn't a bad idea, although I don't find it completely necessary either.\n\nAttached v5 does this. Even though a value of -1 would hit the default\nswitch case and error out, I can see the argument for this validation\n-- as all other places switching on an HTSV_Result are doing so on a\nvalue which was always an HTSV_Result.\n\nOnce I started writing the function comment, however, I felt a bit\nawkward. In order to make the function available to both pruneheap.c\nand vacuumlazy.c, I had to put it in a header file. Writing a\nfunction, available to anyone including heapam.h, which takes an int\nand returns an HTSV_Result feels a bit odd. Do we want it to be common\npractice to use an int value outside the valid enum range to store\n\"status not yet computed\" for HTSV_Results?\n\nAnyway, on a tactical note, I added the inline function to heapam.h\nbelow the PruneResult definition since it is fairly tightly coupled to\nthe htsv array in PruneResult. All of the function prototypes are\nunder a comment that says \"function prototypes for heap access method\"\n-- which didn't feel like an accurate description of this function. I\nwonder if it makes sense to have pruneheap.c include vacuum.h and move\npruning specific stuff like this helper and PruneResult over there? I\ncan't remember why I didn't do this before, but maybe there is a\nreason not to? I also wasn't sure if I needed to forward declare the\ninline function or not.\n\nOh, and, one more note. I've dropped the former patch 0001 which\nchanged the function comment about off_loc above heap_page_prune(). I\nhave plans to write a separate patch adding an error context callback\nfor HOT pruning with the offset number and would include such a change\nin that patch.\n\n- Melanie",
"msg_date": "Thu, 28 Sep 2023 08:46:46 -0400",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Eliminate redundant tuple visibility check in vacuum"
},
{
"msg_contents": "On Thu, Sep 28, 2023 at 8:46 AM Melanie Plageman\n<[email protected]> wrote:\n> Once I started writing the function comment, however, I felt a bit\n> awkward. In order to make the function available to both pruneheap.c\n> and vacuumlazy.c, I had to put it in a header file. Writing a\n> function, available to anyone including heapam.h, which takes an int\n> and returns an HTSV_Result feels a bit odd. Do we want it to be common\n> practice to use an int value outside the valid enum range to store\n> \"status not yet computed\" for HTSV_Results?\n\nI noticed the awkwardness of that return convention when I reviewed\nthe first version of this patch, but I decided it wasn't worth\nspending time discussing. To avoid it, we'd either need to add a new\nHTSV_Result that is only used here, or add a new type\nHTSV_Result_With_An_Extra_Value and translate between the two, or pass\nback a boolean + an enum instead of an array of int8. And all of those\nseem to me to suck -- the first two are messy and the third would make\nthe return value much wider. So, no, I don't really like this, but\nalso, what would actually be any better? Also, IMV at least, it's more\nof an issue of it being sort of ugly than of anything becoming common\npractice, because how many callers of heap_page_prune() are there ever\ngoing to be? AFAIK, we've only ever had two since forever, and even if\nwe grow one or two more at some point, that's still not that many.\n\nI went ahead and committed 0001. If Andres still wants to push for\nmore renaming there, that can be a follow-up patch. And we can see if\nhe or anyone else has any comments on this new version of 0002. To me\nwe're down into the level of details that probably don't matter very\nmuch one way or the other, but others may disagree.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 28 Sep 2023 11:25:04 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Eliminate redundant tuple visibility check in vacuum"
},
{
"msg_contents": "Hi,\n\nOn 2023-09-28 11:25:04 -0400, Robert Haas wrote:\n> I went ahead and committed 0001. If Andres still wants to push for\n> more renaming there, that can be a follow-up patch.\n\nAgreed.\n\n> And we can see if he or anyone else has any comments on this new version of\n> 0002. To me we're down into the level of details that probably don't matter\n> very much one way or the other, but others may disagree.\n\nThe only thought I have is that it might be worth to amend the comment in\nlazy_scan_prune() to mention that such a tuple won't need to be frozen,\nbecause it was visible to another session when vacuum started.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 30 Sep 2023 10:02:43 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Eliminate redundant tuple visibility check in vacuum"
},
{
"msg_contents": "On Sat, Sep 30, 2023 at 1:02 PM Andres Freund <[email protected]> wrote:\n> The only thought I have is that it might be worth to amend the comment in\n> lazy_scan_prune() to mention that such a tuple won't need to be frozen,\n> because it was visible to another session when vacuum started.\n\nI revised the comment a bit, incorporating that language, and committed.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 2 Oct 2023 12:04:31 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Eliminate redundant tuple visibility check in vacuum"
}
] |
[
{
"msg_contents": "Hello,\n\nThe Postgresql docs on object privileges,\n https://www.postgresql.org/docs/14/ddl-priv.html\nsay this in regard to the output of the psql \\dp command:\n\n | If the “Access privileges” column is empty for a given object, it\n | means the object has default privileges (that is, its privileges\n | entry in the relevant system catalog is null). [...] The first GRANT\n | or REVOKE on an object will instantiate the default privileges\n | (producing, for example, miriam=arwdDxt/miriam) and then modify them\n | per the specified request.\n\nIf I've done a GRANT or REVOKE on some of the tables, how do I restore\nthe default privileges so that the “Access privileges” appears empty\nagain? I re-granted what I think are the default privileges but the\n\"Access privileges\" column for that table contains \"user1=arwdDxt/user1\"\nrather than being blank. This is Postgresql-14.\n\nThanks for any suggestions!\n\n\n",
"msg_date": "Mon, 28 Aug 2023 19:23:48 -0600",
"msg_from": "Stuart McGraw <[email protected]>",
"msg_from_op": true,
"msg_subject": "Restoring default privileges on objects"
},
{
"msg_contents": "> On 29/08/2023 03:23 CEST Stuart McGraw <[email protected]> wrote:\n>\n> If I've done a GRANT or REVOKE on some of the tables, how do I restore\n> the default privileges so that the “Access privileges” appears empty\n> again? I re-granted what I think are the default privileges but the\n> \"Access privileges\" column for that table contains \"user1=arwdDxt/user1\"\n> rather than being blank. This is Postgresql-14.\n\nYes, \"user1=arwdDxt/user1\" matches the default privileges if user1 is the table\nowner. Function acldefault('r', 'user1'::regrole) [1] gives you the default\nprivileges for tables.\n\nYou could set pg_class.relacl to NULL to restore the default privileges, but\nmessing with pg_catalog is at your own risk. Besides that I don't know of any\nway to restore the default privileges other than revoking all privileges before\ngranting whatever acldefault gives you. Changing the table owner will then\nalso change the grantee and grantor in pg_class.relacl to the new owner.\n\n[1] https://www.postgresql.org/docs/14/functions-info.html#FUNCTIONS-ACLITEM-FN-TABLE\n\n--\nErik\n\n\n",
"msg_date": "Tue, 29 Aug 2023 13:22:31 +0200 (CEST)",
"msg_from": "Erik Wienhold <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Restoring default privileges on objects"
},
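A minimal sketch of the check described above, assuming a placeholder table t1; acldefault()'s first argument 'r' selects the rules for relations:

    -- NULL relacl means the object still has default privileges;
    -- acldefault() shows what those defaults expand to for the owner.
    SELECT c.relacl                    AS stored_acl,
           acldefault('r', c.relowner) AS default_acl
    FROM pg_catalog.pg_class AS c
    WHERE c.relname = 't1';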
{
"msg_contents": "Erik Wienhold <[email protected]> writes:\n> On 29/08/2023 03:23 CEST Stuart McGraw <[email protected]> wrote:\n>> If I've done a GRANT or REVOKE on some of the tables, how do I restore\n>> the default privileges so that the “Access privileges” appears empty\n>> again? I re-granted what I think are the default privileges but the\n>> \"Access privileges\" column for that table contains \"user1=arwdDxt/user1\"\n>> rather than being blank. This is Postgresql-14.\n\n> Yes, \"user1=arwdDxt/user1\" matches the default privileges if user1 is the table\n> owner.\n\nRight. There is no (supported) way to cause the ACL entry to go back\nto null. It starts life that way as an ancient hack to save a step\nduring object creation. But the moment you do anything to the object's\nprivileges, the NULL is replaced by an explicit representation of the\ndefault privileges, which is then modified per whatever command you\nare giving. After that the privileges will always be explicit.\n\nThere's been occasional discussion of changing this behavior, but\nit'd take work and it'd likely add about as much surprise as it\nremoves. People have been used to this quirk for a long time.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 29 Aug 2023 10:14:45 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Restoring default privileges on objects"
},
{
"msg_contents": "On 8/29/23 08:14, Tom Lane wrote:\n> Erik Wienhold <[email protected]> writes:\n>> On 29/08/2023 03:23 CEST Stuart McGraw <[email protected]> wrote:\n>>> If I've done a GRANT or REVOKE on some of the tables, how do I restore\n>>> the default privileges so that the “Access privileges” appears empty\n>>> again? I re-granted what I think are the default privileges but the\n>>> \"Access privileges\" column for that table contains \"user1=arwdDxt/user1\"\n>>> rather than being blank. This is Postgresql-14.\n> \n>> Yes, \"user1=arwdDxt/user1\" matches the default privileges if user1 is the table\n>> owner.\n> \n> Right. There is no (supported) way to cause the ACL entry to go back\n> to null. It starts life that way as an ancient hack to save a step\n> during object creation. But the moment you do anything to the object's\n> privileges, the NULL is replaced by an explicit representation of the\n> default privileges, which is then modified per whatever command you\n> are giving. After that the privileges will always be explicit.\n> \n> There's been occasional discussion of changing this behavior, but\n> it'd take work and it'd likely add about as much surprise as it\n> removes. People have been used to this quirk for a long time.\n\nThank you Erik and Tom for the explanations. I guess it's a it-is-\nwhat-it-is situation :-). But while trying to figure it out myself\nI found the following:\n\n test=# CREATE ROLE user1;\n test=# SET ROLE user1;\n test=> CREATE TABLE t1(x int);\n test=> \\dp\n Access privileges\n Schema | Name | Type | Access privileges | Column privileges | Policies\n --------+------+-------+-------------------+-------------------+----------\n public | t1 | table | | |\n\n test=> SELECT FROM t1;\n (0 rows)\n\n test=> SET ROLE postgres;\n test=# REVOKE ALL ON t1 FROM user1;\n test=# SET ROLE user1;\n test=> \\dp\n Schema | Name | Type | Access privileges | Column privileges | Policies\n --------+------+-------+-------------------+-------------------+----------\n public | t1 | table | | |\n\n test=> SELECT FROM t1;\n ERROR: permission denied for table t1\n\nHow does one distinguish between (blank)=(default privileges)\nand (blank)=(no privileges)?\n\nShouldn't psql put *something* (like \"(default)\" or \"-\") in the\n\"Access privileges\" column to indicate that? Or conversely,\nsomething (like \"(none)\"?) in the revoked case?\n\nIt doesn't seem like a good idea to use the same visual\nrepresentation for two nearly opposite conditions. It confused\nthe heck out of me anyway... :-)\n\n\n\n",
"msg_date": "Tue, 29 Aug 2023 10:43:45 -0600",
"msg_from": "Stuart McGraw <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Restoring default privileges on objects"
},
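One way to tell the two blank states apart from SQL, since \dp renders both the same. This is a sketch only, with t1 as a placeholder table name:

    -- NULL relacl  -> default privileges (never explicitly granted/revoked)
    -- empty relacl -> every privilege has been revoked
    SELECT relname,
           CASE
               WHEN relacl IS NULL          THEN 'default privileges'
               WHEN cardinality(relacl) = 0 THEN 'no privileges'
               ELSE array_to_string(relacl, E'\n')
           END AS access_privileges
    FROM pg_catalog.pg_class
    WHERE relname = 't1';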
{
"msg_contents": "> On 29/08/2023 18:43 CEST Stuart McGraw <[email protected]> wrote:\n>\n> How does one distinguish between (blank)=(default privileges)\n> and (blank)=(no privileges)?\n>\n> Shouldn't psql put *something* (like \"(default)\" or \"-\") in the\n> \"Access privileges\" column to indicate that? Or conversely,\n> something (like \"(none)\"?) in the revoked case?\n>\n> It doesn't seem like a good idea to use the same visual\n> representation for two nearly opposite conditions. It confused\n> the heck out of me anyway... :-)\n\nIndeed, that's confusing. Command \\dp always prints null as empty string [1].\nSo \\pset null '(null)' has no effect.\n\nThe docs don't mention that edge case [2] (the second to last paragraph):\n\n\t\"If the “Access privileges” column is empty for a given object, it\n\t means the object has default privileges (that is, its privileges\n\t entry in the relevant system catalog is null).\"\n\n[1] https://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=src/bin/psql/describe.c;h=bac94a338cfbc497200f0cf960cbabce2dadaa33;hb=9b581c53418666205938311ef86047aa3c6b741f#l1149\n[2] https://www.postgresql.org/docs/14/ddl-priv.html\n\n--\nErik\n\n\n",
"msg_date": "Tue, 29 Aug 2023 21:04:53 +0200 (CEST)",
"msg_from": "Erik Wienhold <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Restoring default privileges on objects"
},
{
"msg_contents": "Erik Wienhold <[email protected]> writes:\n> On 29/08/2023 18:43 CEST Stuart McGraw <[email protected]> wrote:\n>> Shouldn't psql put *something* (like \"(default)\" or \"-\") in the\n>> \"Access privileges\" column to indicate that? Or conversely,\n>> something (like \"(none)\"?) in the revoked case?\n\n> Indeed, that's confusing. Command \\dp always prints null as empty string [1].\n> So \\pset null '(null)' has no effect.\n\nYeah, perhaps. The reason it so seldom comes up is that a state of\nzero privileges is extremely rare (because it's useless in practice).\n\nThat being the case, if we were to do something about this, I'd vote\nfor changing the display of zero-privileges to \"(none)\" or something\nalong that line, rather than changing the display of NULL, which\npeople are accustomed to.\n\nFixing \\dp to honor \"\\pset null\" for this might be a reasonable\nthing to do too. I'm actually a bit surprised that that doesn't\nwork already.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 29 Aug 2023 15:27:18 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Restoring default privileges on objects"
},
{
"msg_contents": "> On 29/08/2023 21:27 CEST Tom Lane <[email protected]> wrote:\n>\n> Yeah, perhaps. The reason it so seldom comes up is that a state of\n> zero privileges is extremely rare (because it's useless in practice).\n>\n> That being the case, if we were to do something about this, I'd vote\n> for changing the display of zero-privileges to \"(none)\" or something\n> along that line, rather than changing the display of NULL, which\n> people are accustomed to.\n\n+1\n\n> Fixing \\dp to honor \"\\pset null\" for this might be a reasonable\n> thing to do too. I'm actually a bit surprised that that doesn't\n> work already.\n\nLooks like all commands in src/bin/psql/describe.c set nullPrint = NULL. Has\nbeen that way since at least 1999.\n\n--\nErik\n\n\n",
"msg_date": "Tue, 29 Aug 2023 21:50:30 +0200 (CEST)",
"msg_from": "Erik Wienhold <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Restoring default privileges on objects"
},
{
"msg_contents": "On 8/29/23 13:27, Tom Lane wrote:\n> Erik Wienhold <[email protected]> writes:\n>> On 29/08/2023 18:43 CEST Stuart McGraw <[email protected]> wrote:\n>>> Shouldn't psql put *something* (like \"(default)\" or \"-\") in the\n>>> \"Access privileges\" column to indicate that? Or conversely,\n>>> something (like \"(none)\"?) in the revoked case?\n> \n>> Indeed, that's confusing. Command \\dp always prints null as empty string [1].\n>> So \\pset null '(null)' has no effect.\n> \n> Yeah, perhaps. The reason it so seldom comes up is that a state of\n> zero privileges is extremely rare (because it's useless in practice).\n> \n> That being the case, if we were to do something about this, I'd vote\n> for changing the display of zero-privileges to \"(none)\" or something\n> along that line, rather than changing the display of NULL, which\n> people are accustomed to.\n> \n> Fixing \\dp to honor \"\\pset null\" for this might be a reasonable\n> thing to do too. I'm actually a bit surprised that that doesn't\n> work already.\n> \n> \t\t\tregards, tom lane\n\nThat change would still require someone using \\dp to realize that\nthe \"Access privileges\" value could be either '' or NULL (I guess\nthat could be pointed out more obviously in the psql doc), and then\ndo a '\\pset null' before doing \\dp? That seems a little inconvenient.\n\nAs a possible alternative, in the query that \\dp sends, what about\nreplacing the line:\n\n select ...,\n pg_catalog.array_to_string(c.relacl, E'\\n') as \"Access privileges\"\n ...\n\nwith something like:\n\n CASE array_length(c.relacl,1) WHEN 0 THEN '(none)' ELSE pg_catalog.array_to_string(c.relacl, E'\\n') END as \"Access privileges\"\n\nI realize that removes the ability to control with pset what is\ndisplayed, but maybe a little more foolproof for naive users like\nmyself?\n\n\n\n",
"msg_date": "Tue, 29 Aug 2023 14:44:48 -0600",
"msg_from": "Stuart McGraw <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Restoring default privileges on objects"
},
{
"msg_contents": "> On 29/08/2023 22:44 CEST Stuart McGraw <[email protected]> wrote:\n>\n> That change would still require someone using \\dp to realize that\n> the \"Access privileges\" value could be either '' or NULL (I guess\n> that could be pointed out more obviously in the psql doc), and then\n> do a '\\pset null' before doing \\dp? That seems a little inconvenient.\n\nRight.\n\n> As a possible alternative, in the query that \\dp sends, what about\n> replacing the line:\n>\n> select ...,\n> pg_catalog.array_to_string(c.relacl, E'\\n') as \"Access privileges\"\n> ...\n>\n> with something like:\n>\n> CASE array_length(c.relacl,1) WHEN 0 THEN '(none)' ELSE pg_catalog.array_to_string(c.relacl, E'\\n') END as \"Access privileges\"\n>\n> I realize that removes the ability to control with pset what is\n> displayed, but maybe a little more foolproof for naive users like\n> myself?\n\nI think hardcoding '(none)' is what Tom meant (at least how I read it). Also\n'(none)' should probably be localizable like the table header.\n\nThe \\pset change would be separate.\n\n--\nErik\n\n\n",
"msg_date": "Wed, 30 Aug 2023 02:59:14 +0200 (CEST)",
"msg_from": "Erik Wienhold <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Restoring default privileges on objects"
},
{
"msg_contents": "On 2023-08-29 14:44:48 -0600, Stuart McGraw wrote:\n> On 8/29/23 13:27, Tom Lane wrote:\n> > Fixing \\dp to honor \"\\pset null\" for this might be a reasonable\n> > thing to do too. I'm actually a bit surprised that that doesn't\n> > work already.\n> \n> That change would still require someone using \\dp to realize that\n> the \"Access privileges\" value could be either '' or NULL (I guess\n> that could be pointed out more obviously in the psql doc), and then\n> do a '\\pset null' before doing \\dp? That seems a little inconvenient.\n\nOr just always do a \\pset null. For me printing NULL the same as an\nempty string is just as confusing in normal tables, so that's the first\nline in my ~/.psqlrc. YMMV, of course.\n\nBut I guess the point is that people who do \\pset null expect to be able\nto distinguish '' and NULL visually and might be surprised if that\ndoesn't work everywhere, while people who don't \\pset null know that ''\nand NULL are visually indistinguishable and that they may need some\nother way to distinguish them if the difference matters.\n\nSo +1 for me fixing \\dp to honor \"\\pset null\".\n\n hp\n\n-- \n _ | Peter J. Holzer | Story must make more sense than reality.\n|_|_) | |\n| | | [email protected] | -- Charles Stross, \"Creative writing\n__/ | http://www.hjp.at/ | challenge!\"",
"msg_date": "Wed, 30 Aug 2023 12:00:30 +0200",
"msg_from": "\"Peter J. Holzer\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Restoring default privileges on objects"
},
{
"msg_contents": "On Wed, 2023-08-30 at 12:00 +0200, Peter J. Holzer wrote:\n> On 2023-08-29 14:44:48 -0600, Stuart McGraw wrote:\n> > On 8/29/23 13:27, Tom Lane wrote:\n> > > Fixing \\dp to honor \"\\pset null\" for this might be a reasonable\n> > > thing to do too. I'm actually a bit surprised that that doesn't\n> > > work already.\n> > \n> > That change would still require someone using \\dp to realize that\n> > the \"Access privileges\" value could be either '' or NULL (I guess\n> > that could be pointed out more obviously in the psql doc), and then\n> > do a '\\pset null' before doing \\dp? That seems a little inconvenient.\n> \n> Or just always do a \\pset null. For me printing NULL the same as an\n> empty string is just as confusing in normal tables, so that's the first\n> line in my ~/.psqlrc. YMMV, of course.\n> \n> But I guess the point is that people who do \\pset null expect to be able\n> to distinguish '' and NULL visually and might be surprised if that\n> doesn't work everywhere, while people who don't \\pset null know that ''\n> and NULL are visually indistinguishable and that they may need some\n> other way to distinguish them if the difference matters.\n> \n> So +1 for me fixing \\dp to honor \"\\pset null\".\n\n+1\n\nHere is a patch that does away with the special handling of NULL values\nin psql backslash commands.\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Fri, 06 Oct 2023 22:16:28 +0200",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Restoring default privileges on objects"
},
{
"msg_contents": "On Fri, 2023-10-06 at 22:16 +0200, Laurenz Albe wrote:\n> Here is a patch that does away with the special handling of NULL values\n> in psql backslash commands.\n\nErm, I forgot to attach the patch.\n\nYours,\nLaurenz Albe",
"msg_date": "Fri, 06 Oct 2023 22:18:20 +0200",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Restoring default privileges on objects"
},
{
"msg_contents": "On Fri, 2023-10-06 at 22:18 +0200, Laurenz Albe wrote:\n> On Fri, 2023-10-06 at 22:16 +0200, Laurenz Albe wrote:\n> > Here is a patch that does away with the special handling of NULL values\n> > in psql backslash commands.\n> \n> Erm, I forgot to attach the patch.\n\nI just realize that there is a conflicting proposal. I'll reply to that thread.\nSorry for the noise.\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Fri, 06 Oct 2023 22:28:20 +0200",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Restoring default privileges on objects"
},
{
"msg_contents": "On Fri, Oct 6, 2023 at 1:29 PM Laurenz Albe <[email protected]>\nwrote:\n\n> On Fri, 2023-10-06 at 22:18 +0200, Laurenz Albe wrote:\n> > On Fri, 2023-10-06 at 22:16 +0200, Laurenz Albe wrote:\n> > > Here is a patch that does away with the special handling of NULL values\n> > > in psql backslash commands.\n> >\n> > Erm, I forgot to attach the patch.\n>\n> I just realize that there is a conflicting proposal. I'll reply to that\n> thread.\n> Sorry for the noise.\n>\n>\nThis thread seems officially closed and the discussion moved to [1].\n\nOver there, after reading both threads, I am seeing enough agreement that\nchanging these queries to always print \"(none)\" (translating the word none)\nwhere today they output null, and thus plan to move forward with the v1\npatch on that thread proposing to do just that. Please chime in over there\non this specific option - whether you wish to support or reject it. Should\nit be rejected the plan is to have these queries respect the user's\npreference in \\pset null. Please make it clear if you would rather\nmaintain the status quo against either of those two options.\n\nThanks!\n\nDavid J.\n\n[1]\nhttps://www.postgresql.org/message-id/ab67c99bfb5dea7bae18c77f96442820d19b5448.camel%40cybertec.at\n\nOn Fri, Oct 6, 2023 at 1:29 PM Laurenz Albe <[email protected]> wrote:On Fri, 2023-10-06 at 22:18 +0200, Laurenz Albe wrote:\n> On Fri, 2023-10-06 at 22:16 +0200, Laurenz Albe wrote:\n> > Here is a patch that does away with the special handling of NULL values\n> > in psql backslash commands.\n> \n> Erm, I forgot to attach the patch.\n\nI just realize that there is a conflicting proposal. I'll reply to that thread.\nSorry for the noise.This thread seems officially closed and the discussion moved to [1].Over there, after reading both threads, I am seeing enough agreement that changing these queries to always print \"(none)\" (translating the word none) where today they output null, and thus plan to move forward with the v1 patch on that thread proposing to do just that. Please chime in over there on this specific option - whether you wish to support or reject it. Should it be rejected the plan is to have these queries respect the user's preference in \\pset null. Please make it clear if you would rather maintain the status quo against either of those two options.Thanks!David J.[1] https://www.postgresql.org/message-id/ab67c99bfb5dea7bae18c77f96442820d19b5448.camel%40cybertec.at",
"msg_date": "Tue, 17 Oct 2023 09:19:01 -0700",
"msg_from": "\"David G. Johnston\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Restoring default privileges on objects"
}
] |
[
{
"msg_contents": "Hi,\n\nAttached various patches to implement a few more jsonpath item methods.\n\nFor context, PostgreSQL already has some item methods, such as .double()\nand\n.datetime(). The above new methods are just added alongside these.\n\nHere are the brief descriptions for the same.\n\n---\n\nv1-0001-Implement-jsonpath-.bigint-.integer-and-.number-m.patch\n\nThis commit implements jsonpath .bigint(), .integer(), and .number()\nmethods. The JSON string or a numeric value is converted to the\nbigint, int4, and numeric type representation.\n\n---\n\nv1-0002-Implement-.date-.time-.time_tz-.timestamp-and-.ti.patch\n\nThis commit implements jsonpath .date(), .time(), .time_tz(),\n.timestamp(), .timestamp_tz() methods. The JSON string representing\na valid date/time is converted to the specific date or time type\nrepresentation.\n\nThe changes use the infrastructure of the .datetime() method and\nperform the datatype conversion as appropriate. All these methods\naccept no argument and use ISO datetime formats.\n\n---\n\nv1-0003-Implement-jsonpath-.boolean-and-.string-methods.patch\n\nThis commit implements jsonpath .boolean() and .string() methods.\n\n.boolean() method converts the given JSON string, numeric, or boolean\nvalue to the boolean type representation. In the numeric case, only\nintegers are allowed, whereas we use the parse_bool() backend function\nto convert a string to a bool.\n\n.string() method uses the datatype's out function to convert numeric\nand various date/time types to the string representation.\n\n---\n\nv1-0004-Implement-jasonpath-.decimal-precision-scale-meth.patch\n\nThis commit implements jsonpath .decimal() method with optional\nprecision and scale. If precision and scale are provided, then\nit is converted to the equivalent numerictypmod and applied to the\nnumeric number.\n\n---\n\nSuggestions/feedback/comments, please...\n\nThanks\n\n-- \nJeevan Chalke\n\n*Senior Staff SDE, Database Architect, and ManagerProduct Development*\n\n\n\nedbpostgres.com",
"msg_date": "Tue, 29 Aug 2023 12:35:07 +0530",
"msg_from": "Jeevan Chalke <[email protected]>",
"msg_from_op": true,
"msg_subject": "More new SQL/JSON item methods"
},
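Roughly how the proposed methods would be invoked once the patch series is applied; a sketch only, with illustrative literals (of these, only .double() exists without the patches):

    SELECT jsonb_path_query('"1234"',        '$.bigint()');       -- proposed
    SELECT jsonb_path_query('"12.34"',       '$.number()');       -- proposed
    SELECT jsonb_path_query('"2023-08-29"',  '$.date()');         -- proposed
    SELECT jsonb_path_query('"12.3456"',     '$.decimal(6, 2)');  -- proposed, precision/scale
    SELECT jsonb_path_query('1.23',          '$.string()');       -- proposed
    SELECT jsonb_path_query('"1.23"',        '$.double()');       -- already in core, for comparison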
{
"msg_contents": "Hi,\n\nOn 2023-08-29 03:05, Jeevan Chalke wrote:\n> This commit implements jsonpath .bigint(), .integer(), and .number()\n> ---\n> This commit implements jsonpath .date(), .time(), .time_tz(),\n> .timestamp(), .timestamp_tz() methods.\n> ---\n> This commit implements jsonpath .boolean() and .string() methods.\n\nWriting as an interested outsider to the jsonpath spec, my first\nquestion would be, is there a published jsonpath spec independent\nof PostgreSQL, and are these methods in it, and are the semantics\nidentical?\n\nThe question comes out of my experience on a PostgreSQL integration\nof XQuery/XPath, which was nontrivial because the w3 specs for those\nlanguages give rigorous definitions of their data types, independently\nof SQL, and a good bit of the work was squinting at those types and at\nthe corresponding PostgreSQL types to see in what ways they were\ndifferent, and what the constraints on converting them were. (Some of\nthat squinting was already done by the SQL committee in the SQL/XML\nspec, which has plural pages on how those conversions have to happen,\nespecially for the date/time types.)\n\nIf I look in [1], am I looking in the right place for the most\ncurrent jsonpath draft?\n\n(I'm a little squeamish reading as a goal \"cover only essential\nparts of XPath 1.0\", given that XPath 1.0 is the one w3 threw away\nso XPath 2.0 wouldn't have the same problems.)\n\nOn details of the patch itself, I only have quick first impressions,\nlike:\n\n- surely there's a more direct way to make boolean from numeric\n than to serialize the numeric and parse an int?\n\n- I notice that .bigint() and .integer() finish up by casting the\n value to numeric so the existing jbv->val.numeric can hold it.\n That may leave some opportunity on the table: there is another\n patch under way [2] that concerns quickly getting such result\n values from json operations to the surrounding SQL query. That\n could avoid the trip through numeric completely if the query\n wants a bigint, if there were a val.bigint in JsonbValue.\n\n But of course that would complicate everything else that\n touches JsonbValue. Is there a way for a jsonpath operator to\n determine that it's the terminal operation in the path, and\n leave a value in val.bigint if it is, or build a numeric if\n it's not? Then most other jsonpath code could go on expecting\n a numeric value is always in val.numeric, and the only code\n checking for a val.bigint would be code involved with\n getting the result value out to the SQL caller.\n\nRegards,\n-Chap\n\n\n[1] \nhttps://www.ietf.org/archive/id/draft-goessner-dispatch-jsonpath-00.html\n[2] https://commitfest.postgresql.org/44/4476/\n\n\n",
"msg_date": "Wed, 30 Aug 2023 11:18:23 -0400",
"msg_from": "Chapman Flack <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: More new SQL/JSON item methods"
},
{
"msg_contents": "On 2023-08-30 11:18, Chapman Flack wrote:\n> If I look in [1], am I looking in the right place for the most\n> current jsonpath draft?\n\nMy bad, I see that it is not. Um if I look in [1'], am I then looking\nat the same spec you are?\n\n[1'] https://www.ietf.org/archive/id/draft-ietf-jsonpath-base-20.html\n\nRegards,\n-Chap\n\n\n",
"msg_date": "Wed, 30 Aug 2023 11:21:30 -0400",
"msg_from": "Chapman Flack <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: More new SQL/JSON item methods"
},
{
"msg_contents": "On 2023-Aug-30, Chapman Flack wrote:\n\n> Hi,\n> \n> On 2023-08-29 03:05, Jeevan Chalke wrote:\n> > This commit implements jsonpath .bigint(), .integer(), and .number()\n> > ---\n> > This commit implements jsonpath .date(), .time(), .time_tz(),\n> > .timestamp(), .timestamp_tz() methods.\n> > ---\n> > This commit implements jsonpath .boolean() and .string() methods.\n> \n> Writing as an interested outsider to the jsonpath spec, my first\n> question would be, is there a published jsonpath spec independent\n> of PostgreSQL, and are these methods in it, and are the semantics\n> identical?\n\nLooking at the SQL standard itself, in the 2023 edition section 9.46\n\"SQL/JSON path language: syntax and semantics\", it shows this:\n\n<JSON method> ::=\ntype <left paren> <right paren>\n| size <left paren> <right paren>\n| double <left paren> <right paren>\n| ceiling <left paren> <right paren>\n| floor <left paren> <right paren>\n| abs <left paren> <right paren>\n| datetime <left paren> [ <JSON datetime template> ] <right paren>\n| keyvalue <left paren> <right paren>\n| bigint <left paren> <right paren>\n| boolean <left paren> <right paren>\n| date <left paren> <right paren>\n| decimal <left paren> [ <precision> [ <comma> <scale> ] ] <right paren>\n| integer <left paren> <right paren>\n| number <left paren> <right paren>\n| string <left paren> <right paren>\n| time <left paren> [ <time precision> ] <right paren>\n| time_tz <left paren> [ <time precision> ] <right paren>\n| timestamp <left paren> [ <timestamp precision> ] <right paren>\n| timestamp_tz <left paren> [ <timestamp precision> ] <right paren>\n\nand then details, for each of those, rules like\n\nIII) If JM specifies <double>, then:\n 1) For all j, 1 (one) ≤ j ≤ n,\n Case:\n\ta) If I_j is not a number or character string, then let ST be data\n exception — non-numeric SQL/JSON item (22036).\n b) Otherwise, let X be an SQL variable whose value is I_j.\n Let V_j be the result of\n CAST (X AS DOUBLE PRECISION)\n If this conversion results in an exception condition, then\n let ST be that exception condition.\n 2) Case:\n a) If ST is not successful completion, then the result of JAE\n is ST.\n b) Otherwise, the result of JAE is the SQL/JSON sequence V_1,\n ..., V_n.\n\nso at least superficially our implementation is constrained by what the\nSQL standard says to do, and we should verify that this implementation\nmatches those rules. We don't necessarily need to watch what do other\nspecs such as jsonpath itself.\n\n> The question comes out of my experience on a PostgreSQL integration\n> of XQuery/XPath, which was nontrivial because the w3 specs for those\n> languages give rigorous definitions of their data types, independently\n> of SQL, and a good bit of the work was squinting at those types and at\n> the corresponding PostgreSQL types to see in what ways they were\n> different, and what the constraints on converting them were.\n\nYeah, I think the experience of the SQL committee with XML was pretty\nbad, as you carefully documented. I hope they don't make such a mess\nwith JSON.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Wed, 30 Aug 2023 18:28:36 +0200",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: More new SQL/JSON item methods"
},
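For reference, the existing .double() method already follows the rule quoted above; a quick check (illustrative, not from the thread):

    SELECT jsonb_path_query('"1.5e2"', '$.double()');  -- numeric string is cast (150)
    SELECT jsonb_path_query('true',    '$.double()');  -- raises a non-numeric-item error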
{
"msg_contents": "On 2023-08-30 12:28, Alvaro Herrera wrote:\n> Yeah, I think the experience of the SQL committee with XML was pretty\n> bad, as you carefully documented. I hope they don't make such a mess\n> with JSON.\n\nI guess the SQL committee was taken by surprise after basing something\non Infoset and XPath 1.0 for 2003, and then w3 deciding those things\nneeded to be scrapped and redone with the lessons learned. So the\nSQL committee had to come out with a rather different SQL/XML for 2006,\nbut I'd say the 2003-2006 difference is the only real 'mess', and other\nthan going back in time to unpublish 2003, I'm not sure how they'd have\ndone better.\n\n> b) Otherwise, the result of JAE is the SQL/JSON sequence V_1,\n> ..., V_n.\n\nThis has my Spidey sense tingling, as it seems very parallel to SQL/XML\nwhere the result of XMLQUERY is to have type XML(SEQUENCE), which is a\ntype we do not have, and I'm not sure we have a type for \"JSON sequence\"\neither, unless SQL/JSON makes it equivalent to a JSON array (which\nI guess is conceivable, more easily than with XML). What does SQL/JSON\nsay about this SQL/JSON sequence type and how it should behave?\n\nRegards,\n-Chap\n\n\n",
"msg_date": "Wed, 30 Aug 2023 13:20:49 -0400",
"msg_from": "Chapman Flack <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: More new SQL/JSON item methods"
},
{
"msg_contents": "On 8/30/23 19:20, Chapman Flack wrote:\n> On 2023-08-30 12:28, Alvaro Herrera wrote:\n>> b) Otherwise, the result of JAE is the SQL/JSON sequence V_1,\n>> ..., V_n.\n> \n> This has my Spidey sense tingling, as it seems very parallel to SQL/XML\n> where the result of XMLQUERY is to have type XML(SEQUENCE), which is a\n> type we do not have, and I'm not sure we have a type for \"JSON sequence\"\n> either, unless SQL/JSON makes it equivalent to a JSON array (which\n> I guess is conceivable, more easily than with XML). What does SQL/JSON\n> say about this SQL/JSON sequence type and how it should behave?\n\nThe SQL/JSON data model comprises SQL/JSON items and SQL/JSON sequences. \nThe components of the SQL/JSON data model are:\n\n — An SQL/JSON item is defined recursively as any of the following:\n • An SQL/JSON scalar, defined as a non-null value of any of the\n following predefined (SQL) types: character string with character\n set Unicode, numeric, Boolean, or datetime.\n • An SQL/JSON null, defined as a value that is distinct from any\n value of any SQL type. NOTE 109 — An SQL/JSON null is distinct\n from the SQL null value.\n • An SQL/JSON array, defined as an ordered list of zero or more\n SQL/JSON items, called the SQL/JSON elements of the SQL/JSON\n array.\n • An SQL/JSON object, defined as an unordered collection of zero or\n more SQL/JSON members, where an SQL/JSON member is a pair whose\n first value is a character string with character set Unicode and\n whose second value is an SQL/JSON item. The first value of an\n SQL/JSON member is called the key and the second value is called\n the bound value.\n\n — An SQL/JSON sequence is an ordered list of zero or more SQL/JSON\n items.\n\n-- \nVik Fearing\n\n\n\n",
"msg_date": "Fri, 1 Sep 2023 02:50:43 +0200",
"msg_from": "Vik Fearing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: More new SQL/JSON item methods"
},
{
"msg_contents": "On 2023-08-31 20:50, Vik Fearing wrote:\n> — An SQL/JSON item is defined recursively as any of the following:\n> ...\n> • An SQL/JSON array, defined as an ordered list of zero or more\n> SQL/JSON items, called the SQL/JSON elements of the SQL/JSON\n> array.\n> ...\n> — An SQL/JSON sequence is an ordered list of zero or more SQL/JSON\n> items.\n\nAs I was thinking, because \"an ordered list of zero or more SQL/JSON\nitems\" is also exactly what an SQL/JSON array is, it seems at least\npossible to implement things that are specified to return \"SQL/JSON\nsequence\" by having them return an SQL/JSON array (the kind of thing\nthat isn't possible for XML(SEQUENCE), because there isn't any other\nXML construct that can subsume it).\n\nStill, it seems noteworthy that both terms are used in the spec, rather\nthan saying the function in question should return a JSON array. Makes\nme wonder if there are some other details that make the two distinct.\n\nRegards,\n-Chap\n\n\n",
"msg_date": "Thu, 31 Aug 2023 21:41:40 -0400",
"msg_from": "Chapman Flack <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: More new SQL/JSON item methods"
},
{
"msg_contents": ">\n> Looking at the SQL standard itself, in the 2023 edition section 9.46\n> \"SQL/JSON path language: syntax and semantics\", it shows this:\n>\n> <JSON method> ::=\n> type <left paren> <right paren>\n> | size <left paren> <right paren>\n> | double <left paren> <right paren>\n> | ceiling <left paren> <right paren>\n> | floor <left paren> <right paren>\n> | abs <left paren> <right paren>\n> | datetime <left paren> [ <JSON datetime template> ] <right paren>\n> | keyvalue <left paren> <right paren>\n> | bigint <left paren> <right paren>\n> | boolean <left paren> <right paren>\n> | date <left paren> <right paren>\n> | decimal <left paren> [ <precision> [ <comma> <scale> ] ] <right paren>\n> | integer <left paren> <right paren>\n> | number <left paren> <right paren>\n> | string <left paren> <right paren>\n> | time <left paren> [ <time precision> ] <right paren>\n> | time_tz <left paren> [ <time precision> ] <right paren>\n> | timestamp <left paren> [ <timestamp precision> ] <right paren>\n> | timestamp_tz <left paren> [ <timestamp precision> ] <right paren>\n>\n> and then details, for each of those, rules like\n>\n> III) If JM specifies <double>, then:\n> 1) For all j, 1 (one) ≤ j ≤ n,\n> Case:\n> a) If I_j is not a number or character string, then let ST be data\n> exception — non-numeric SQL/JSON item (22036).\n> b) Otherwise, let X be an SQL variable whose value is I_j.\n> Let V_j be the result of\n> CAST (X AS DOUBLE PRECISION)\n> If this conversion results in an exception condition, then\n> let ST be that exception condition.\n> 2) Case:\n> a) If ST is not successful completion, then the result of JAE\n> is ST.\n> b) Otherwise, the result of JAE is the SQL/JSON sequence V_1,\n> ..., V_n.\n>\n> so at least superficially our implementation is constrained by what the\n> SQL standard says to do, and we should verify that this implementation\n> matches those rules. We don't necessarily need to watch what do other\n> specs such as jsonpath itself.\n>\n\nI believe our current implementation of the .double() method is in line with\nthis. And these new methods are following the same suit.\n\n\n\n> > - surely there's a more direct way to make boolean from numeric\n> > than to serialize the numeric and parse an int?\n>\n\nYeah, we can directly check the value = 0 for false, true otherwise.\nBut looking at the PostgreSQL conversion to bool, it doesn't allow floating\npoint values to be converted to boolean and only accepts int4. 
That's why I\ndid the int4 conversion.\n\nThanks\n\n-- \nJeevan Chalke\n\n*Senior Staff SDE, Database Architect, and ManagerProduct Development*\n\n\n\nedbpostgres.com\n\nLooking at the SQL standard itself, in the 2023 edition section 9.46\n\"SQL/JSON path language: syntax and semantics\", it shows this:\n\n<JSON method> ::=\ntype <left paren> <right paren>\n| size <left paren> <right paren>\n| double <left paren> <right paren>\n| ceiling <left paren> <right paren>\n| floor <left paren> <right paren>\n| abs <left paren> <right paren>\n| datetime <left paren> [ <JSON datetime template> ] <right paren>\n| keyvalue <left paren> <right paren>\n| bigint <left paren> <right paren>\n| boolean <left paren> <right paren>\n| date <left paren> <right paren>\n| decimal <left paren> [ <precision> [ <comma> <scale> ] ] <right paren>\n| integer <left paren> <right paren>\n| number <left paren> <right paren>\n| string <left paren> <right paren>\n| time <left paren> [ <time precision> ] <right paren>\n| time_tz <left paren> [ <time precision> ] <right paren>\n| timestamp <left paren> [ <timestamp precision> ] <right paren>\n| timestamp_tz <left paren> [ <timestamp precision> ] <right paren>\n\nand then details, for each of those, rules like\n\nIII) If JM specifies <double>, then:\n 1) For all j, 1 (one) ≤ j ≤ n,\n Case:\n a) If I_j is not a number or character string, then let ST be data\n exception — non-numeric SQL/JSON item (22036).\n b) Otherwise, let X be an SQL variable whose value is I_j.\n Let V_j be the result of\n CAST (X AS DOUBLE PRECISION)\n If this conversion results in an exception condition, then\n let ST be that exception condition.\n 2) Case:\n a) If ST is not successful completion, then the result of JAE\n is ST.\n b) Otherwise, the result of JAE is the SQL/JSON sequence V_1,\n ..., V_n.\n\nso at least superficially our implementation is constrained by what the\nSQL standard says to do, and we should verify that this implementation\nmatches those rules. We don't necessarily need to watch what do other\nspecs such as jsonpath itself.I believe our current implementation of the .double() method is in line withthis. And these new methods are following the same suit. > - surely there's a more direct way to make boolean from numeric\n> than to serialize the numeric and parse an int? Yeah, we can directly check the value = 0 for false, true otherwise.But looking at the PostgreSQL conversion to bool, it doesn't allow floatingpoint values to be converted to boolean and only accepts int4. That's why Idid the int4 conversion.Thanks-- Jeevan ChalkeSenior Staff SDE, Database Architect, and ManagerProduct Developmentedbpostgres.com",
"msg_date": "Mon, 4 Sep 2023 15:51:22 +0530",
"msg_from": "Jeevan Chalke <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: More new SQL/JSON item methods"
},
{
"msg_contents": "On 29.08.23 09:05, Jeevan Chalke wrote:\n> v1-0001-Implement-jsonpath-.bigint-.integer-and-.number-m.patch\n> \n> This commit implements jsonpath .bigint(), .integer(), and .number()\n> methods. The JSON string or a numeric value is converted to the\n> bigint, int4, and numeric type representation.\n\nA comment that applies to all of these: These add various keywords, \nswitch cases, documentation entries in some order. Are we happy with \nthat? Should we try to reorder all of that for better maintainability \nor readability?\n\n> v1-0002-Implement-.date-.time-.time_tz-.timestamp-and-.ti.patch\n> \n> This commit implements jsonpath .date(), .time(), .time_tz(),\n> .timestamp(), .timestamp_tz() methods. The JSON string representing\n> a valid date/time is converted to the specific date or time type\n> representation.\n> \n> The changes use the infrastructure of the .datetime() method and\n> perform the datatype conversion as appropriate. All these methods\n> accept no argument and use ISO datetime formats.\n\nThese should accept an optional precision argument. Did you plan to add \nthat?\n\n> v1-0003-Implement-jsonpath-.boolean-and-.string-methods.patch\n> \n> This commit implements jsonpath .boolean() and .string() methods.\n\nThis contains a compiler warning:\n\n../src/backend/utils/adt/jsonpath_exec.c: In function \n'executeItemOptUnwrapTarget':\n../src/backend/utils/adt/jsonpath_exec.c:1162:86: error: 'tmp' may be \nused uninitialized [-Werror=maybe-uninitialized]\n\n> v1-0004-Implement-jasonpath-.decimal-precision-scale-meth.patch\n> \n> This commit implements jsonpath .decimal() method with optional\n> precision and scale. If precision and scale are provided, then\n> it is converted to the equivalent numerictypmod and applied to the\n> numeric number.\n\nThis also contains compiler warnings:\n\n../src/backend/utils/adt/jsonpath_exec.c: In function \n'executeItemOptUnwrapTarget':\n../src/backend/utils/adt/jsonpath_exec.c:1403:53: error: declaration of \n'numstr' shadows a previous local [-Werror=shadow=compatible-local]\n../src/backend/utils/adt/jsonpath_exec.c:1442:54: error: declaration of \n'elem' shadows a previous local [-Werror=shadow=compatible-local]\n\nThere is a typo in the commit message: \"Implement jasonpath\"\n\nAny reason this patch is separate from 0002? Isn't number() and \ndecimal() pretty similar?\n\nYou could also update src/backend/catalog/sql_features.txt in each patch \n(features T865 through T878).\n\n\n\n",
"msg_date": "Fri, 6 Oct 2023 13:43:44 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: More new SQL/JSON item methods"
},
{
"msg_contents": "On Fri, Oct 6, 2023 at 7:47 PM Peter Eisentraut <[email protected]> wrote:\n>\n> On 29.08.23 09:05, Jeevan Chalke wrote:\n> > v1-0001-Implement-jsonpath-.bigint-.integer-and-.number-m.patch\n> >\n> > This commit implements jsonpath .bigint(), .integer(), and .number()\n> > methods. The JSON string or a numeric value is converted to the\n> > bigint, int4, and numeric type representation.\n>\n> A comment that applies to all of these: These add various keywords,\n> switch cases, documentation entries in some order. Are we happy with\n> that? Should we try to reorder all of that for better maintainability\n> or readability?\n>\n> > v1-0002-Implement-.date-.time-.time_tz-.timestamp-and-.ti.patch\n> >\n> > This commit implements jsonpath .date(), .time(), .time_tz(),\n> > .timestamp(), .timestamp_tz() methods. The JSON string representing\n> > a valid date/time is converted to the specific date or time type\n> > representation.\n> >\n> > The changes use the infrastructure of the .datetime() method and\n> > perform the datatype conversion as appropriate. All these methods\n> > accept no argument and use ISO datetime formats.\n>\n> These should accept an optional precision argument. Did you plan to add\n> that?\n\ncompiler warnings issue resolved.\n\nI figured out how to use the precision argument.\nBut I don't know how to get the precision argument in the parse stage.\n\nattached is my attempt to implement: select\njsonb_path_query('\"2017-03-10 11:11:01.123\"', '$.timestamp(2)');\nnot that familiar with src/backend/utils/adt/jsonpath_gram.y. imitate\ndecimal method failed. decimal has precision and scale two arguments.\nhere only one argument.\n\nlooking for hints.",
"msg_date": "Wed, 18 Oct 2023 19:19:54 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: More new SQL/JSON item methods"
},
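For reference, the behaviour being attempted here mirrors the existing .datetime() method, with the optional precision rounding the fractional seconds. A sketch of the intended call; the result shown is an assumption based on how .datetime() renders timestamps:

    -- intended: the precision argument rounds fractional seconds to 2 digits
    select jsonb_path_query('"2017-03-10 11:11:01.123"', '$.timestamp(2)');
    -- expected: "2017-03-10T11:11:01.12"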
{
"msg_contents": "Thanks, Peter for the comments.\n\nOn Fri, Oct 6, 2023 at 5:13 PM Peter Eisentraut <[email protected]>\nwrote:\n\n> On 29.08.23 09:05, Jeevan Chalke wrote:\n> > v1-0001-Implement-jsonpath-.bigint-.integer-and-.number-m.patch\n> >\n> > This commit implements jsonpath .bigint(), .integer(), and .number()\n> > methods. The JSON string or a numeric value is converted to the\n> > bigint, int4, and numeric type representation.\n>\n> A comment that applies to all of these: These add various keywords,\n> switch cases, documentation entries in some order. Are we happy with\n> that? Should we try to reorder all of that for better maintainability\n> or readability?\n>\n\nYeah, that's the better suggestion. While implementing these methods, I was\nconfused about where to put them exactly and tried keeping them in some\nlogical place.\nI think once these methods get in, we can have a follow-up patch\nreorganizing all of these.\n\n\n>\n> > v1-0002-Implement-.date-.time-.time_tz-.timestamp-and-.ti.patch\n> >\n> > This commit implements jsonpath .date(), .time(), .time_tz(),\n> > .timestamp(), .timestamp_tz() methods. The JSON string representing\n> > a valid date/time is converted to the specific date or time type\n> > representation.\n> >\n> > The changes use the infrastructure of the .datetime() method and\n> > perform the datatype conversion as appropriate. All these methods\n> > accept no argument and use ISO datetime formats.\n>\n> These should accept an optional precision argument. Did you plan to add\n> that?\n>\n\nYeah, will add that.\n\n\n>\n> > v1-0003-Implement-jsonpath-.boolean-and-.string-methods.patch\n> >\n> > This commit implements jsonpath .boolean() and .string() methods.\n>\n> This contains a compiler warning:\n>\n> ../src/backend/utils/adt/jsonpath_exec.c: In function\n> 'executeItemOptUnwrapTarget':\n> ../src/backend/utils/adt/jsonpath_exec.c:1162:86: error: 'tmp' may be\n> used uninitialized [-Werror=maybe-uninitialized]\n>\n> > v1-0004-Implement-jasonpath-.decimal-precision-scale-meth.patch\n> >\n> > This commit implements jsonpath .decimal() method with optional\n> > precision and scale. If precision and scale are provided, then\n> > it is converted to the equivalent numerictypmod and applied to the\n> > numeric number.\n>\n> This also contains compiler warnings:\n>\n\nThanks, for reporting these warnings. I don't get those on my machine, thus\nmissed them. Will fix them.\n\n\n>\n> ../src/backend/utils/adt/jsonpath_exec.c: In function\n> 'executeItemOptUnwrapTarget':\n> ../src/backend/utils/adt/jsonpath_exec.c:1403:53: error: declaration of\n> 'numstr' shadows a previous local [-Werror=shadow=compatible-local]\n> ../src/backend/utils/adt/jsonpath_exec.c:1442:54: error: declaration of\n> 'elem' shadows a previous local [-Werror=shadow=compatible-local]\n>\n> There is a typo in the commit message: \"Implement jasonpath\"\n>\n\nWill fix.\n\n\n>\n> Any reason this patch is separate from 0002? Isn't number() and\n> decimal() pretty similar?\n>\n\nSince DECIMAL has precision and scale arguments, I have implemented that at\nthe end. I tried merging that with 0001, but other patches ended up with\nthe conflicts and thus I didn't merge that and kept it as a separate patch.\nBut yes, logically it belongs to the 0001 group. My bad that I haven't put\nin that extra effort. Will do that in the next version. 
Sorry for the same.\n\n\n>\n> You could also update src/backend/catalog/sql_features.txt in each patch\n> (features T865 through T878).\n>\n\nOK.\n\nThanks\n\n-- \nJeevan Chalke\n\n*Senior Staff SDE, Database Architect, and ManagerProduct Development*\n\n\n\nedbpostgres.com\n\nThanks, Peter for the comments.On Fri, Oct 6, 2023 at 5:13 PM Peter Eisentraut <[email protected]> wrote:On 29.08.23 09:05, Jeevan Chalke wrote:\n> v1-0001-Implement-jsonpath-.bigint-.integer-and-.number-m.patch\n> \n> This commit implements jsonpath .bigint(), .integer(), and .number()\n> methods. The JSON string or a numeric value is converted to the\n> bigint, int4, and numeric type representation.\n\nA comment that applies to all of these: These add various keywords, \nswitch cases, documentation entries in some order. Are we happy with \nthat? Should we try to reorder all of that for better maintainability \nor readability?Yeah, that's the better suggestion. While implementing these methods, I was confused about where to put them exactly and tried keeping them in some logical place.I think once these methods get in, we can have a follow-up patch reorganizing all of these. \n\n> v1-0002-Implement-.date-.time-.time_tz-.timestamp-and-.ti.patch\n> \n> This commit implements jsonpath .date(), .time(), .time_tz(),\n> .timestamp(), .timestamp_tz() methods. The JSON string representing\n> a valid date/time is converted to the specific date or time type\n> representation.\n> \n> The changes use the infrastructure of the .datetime() method and\n> perform the datatype conversion as appropriate. All these methods\n> accept no argument and use ISO datetime formats.\n\nThese should accept an optional precision argument. Did you plan to add \nthat?Yeah, will add that. \n\n> v1-0003-Implement-jsonpath-.boolean-and-.string-methods.patch\n> \n> This commit implements jsonpath .boolean() and .string() methods.\n\nThis contains a compiler warning:\n\n../src/backend/utils/adt/jsonpath_exec.c: In function \n'executeItemOptUnwrapTarget':\n../src/backend/utils/adt/jsonpath_exec.c:1162:86: error: 'tmp' may be \nused uninitialized [-Werror=maybe-uninitialized]\n\n> v1-0004-Implement-jasonpath-.decimal-precision-scale-meth.patch\n> \n> This commit implements jsonpath .decimal() method with optional\n> precision and scale. If precision and scale are provided, then\n> it is converted to the equivalent numerictypmod and applied to the\n> numeric number.\n\nThis also contains compiler warnings:Thanks, for reporting these warnings. I don't get those on my machine, thus missed them. Will fix them. \n\n../src/backend/utils/adt/jsonpath_exec.c: In function \n'executeItemOptUnwrapTarget':\n../src/backend/utils/adt/jsonpath_exec.c:1403:53: error: declaration of \n'numstr' shadows a previous local [-Werror=shadow=compatible-local]\n../src/backend/utils/adt/jsonpath_exec.c:1442:54: error: declaration of \n'elem' shadows a previous local [-Werror=shadow=compatible-local]\n\nThere is a typo in the commit message: \"Implement jasonpath\"Will fix. \n\nAny reason this patch is separate from 0002? Isn't number() and \ndecimal() pretty similar?Since DECIMAL has precision and scale arguments, I have implemented that at the end. I tried merging that with 0001, but other patches ended up with the conflicts and thus I didn't merge that and kept it as a separate patch. But yes, logically it belongs to the 0001 group. My bad that I haven't put in that extra effort. Will do that in the next version. Sorry for the same. 
\n\nYou could also update src/backend/catalog/sql_features.txt in each patch \n(features T865 through T878).OK.Thanks -- Jeevan ChalkeSenior Staff SDE, Database Architect, and ManagerProduct Developmentedbpostgres.com",
"msg_date": "Thu, 19 Oct 2023 11:36:37 +0530",
"msg_from": "Jeevan Chalke <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: More new SQL/JSON item methods"
},
{
"msg_contents": "On Wed, Oct 18, 2023 at 4:50 PM jian he <[email protected]> wrote:\n\n> On Fri, Oct 6, 2023 at 7:47 PM Peter Eisentraut <[email protected]>\n> wrote:\n> >\n> > On 29.08.23 09:05, Jeevan Chalke wrote:\n> > > v1-0001-Implement-jsonpath-.bigint-.integer-and-.number-m.patch\n> > >\n> > > This commit implements jsonpath .bigint(), .integer(), and .number()\n> > > methods. The JSON string or a numeric value is converted to the\n> > > bigint, int4, and numeric type representation.\n> >\n> > A comment that applies to all of these: These add various keywords,\n> > switch cases, documentation entries in some order. Are we happy with\n> > that? Should we try to reorder all of that for better maintainability\n> > or readability?\n> >\n> > > v1-0002-Implement-.date-.time-.time_tz-.timestamp-and-.ti.patch\n> > >\n> > > This commit implements jsonpath .date(), .time(), .time_tz(),\n> > > .timestamp(), .timestamp_tz() methods. The JSON string representing\n> > > a valid date/time is converted to the specific date or time type\n> > > representation.\n> > >\n> > > The changes use the infrastructure of the .datetime() method and\n> > > perform the datatype conversion as appropriate. All these methods\n> > > accept no argument and use ISO datetime formats.\n> >\n> > These should accept an optional precision argument. Did you plan to add\n> > that?\n>\n> compiler warnings issue resolved.\n>\n\nThanks for pitching in, Jian.\nI was slightly busy with other stuff and thus could not spend time on this.\n\nI will start looking into it and expect a patch in a couple of days.\n\n\n> I figured out how to use the precision argument.\n> But I don't know how to get the precision argument in the parse stage.\n>\n> attached is my attempt to implement: select\n> jsonb_path_query('\"2017-03-10 11:11:01.123\"', '$.timestamp(2)');\n> not that familiar with src/backend/utils/adt/jsonpath_gram.y. imitate\n> decimal method failed. decimal has precision and scale two arguments.\n> here only one argument.\n>\n> looking for hints.\n>\n\nYou may refer to how .datetime(<format>) is implemented.\n\nThanks\n\n\n-- \nJeevan Chalke\n\n*Senior Staff SDE, Database Architect, and ManagerProduct Development*\n\n\n\nedbpostgres.com\n\nOn Wed, Oct 18, 2023 at 4:50 PM jian he <[email protected]> wrote:On Fri, Oct 6, 2023 at 7:47 PM Peter Eisentraut <[email protected]> wrote:\n>\n> On 29.08.23 09:05, Jeevan Chalke wrote:\n> > v1-0001-Implement-jsonpath-.bigint-.integer-and-.number-m.patch\n> >\n> > This commit implements jsonpath .bigint(), .integer(), and .number()\n> > methods. The JSON string or a numeric value is converted to the\n> > bigint, int4, and numeric type representation.\n>\n> A comment that applies to all of these: These add various keywords,\n> switch cases, documentation entries in some order. Are we happy with\n> that? Should we try to reorder all of that for better maintainability\n> or readability?\n>\n> > v1-0002-Implement-.date-.time-.time_tz-.timestamp-and-.ti.patch\n> >\n> > This commit implements jsonpath .date(), .time(), .time_tz(),\n> > .timestamp(), .timestamp_tz() methods. The JSON string representing\n> > a valid date/time is converted to the specific date or time type\n> > representation.\n> >\n> > The changes use the infrastructure of the .datetime() method and\n> > perform the datatype conversion as appropriate. All these methods\n> > accept no argument and use ISO datetime formats.\n>\n> These should accept an optional precision argument. 
Did you plan to add\n> that?\n\ncompiler warnings issue resolved.Thanks for pitching in, Jian.I was slightly busy with other stuff and thus could not spend time on this.I will start looking into it and expect a patch in a couple of days.\n\nI figured out how to use the precision argument.\nBut I don't know how to get the precision argument in the parse stage.\n\nattached is my attempt to implement: select\njsonb_path_query('\"2017-03-10 11:11:01.123\"', '$.timestamp(2)');\nnot that familiar with src/backend/utils/adt/jsonpath_gram.y. imitate\ndecimal method failed. decimal has precision and scale two arguments.\nhere only one argument.\n\nlooking for hints.You may refer to how .datetime(<format>) is implemented.Thanks -- Jeevan ChalkeSenior Staff SDE, Database Architect, and ManagerProduct Developmentedbpostgres.com",
"msg_date": "Thu, 19 Oct 2023 11:37:58 +0530",
"msg_from": "Jeevan Chalke <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: More new SQL/JSON item methods"
},
{
"msg_contents": "On Thu, Oct 19, 2023 at 11:36 AM Jeevan Chalke <\[email protected]> wrote:\n\n> Thanks, Peter for the comments.\n>\n> On Fri, Oct 6, 2023 at 5:13 PM Peter Eisentraut <[email protected]>\n> wrote:\n>\n>> On 29.08.23 09:05, Jeevan Chalke wrote:\n>> > v1-0001-Implement-jsonpath-.bigint-.integer-and-.number-m.patch\n>> >\n>> > This commit implements jsonpath .bigint(), .integer(), and .number()\n>> > methods. The JSON string or a numeric value is converted to the\n>> > bigint, int4, and numeric type representation.\n>>\n>> A comment that applies to all of these: These add various keywords,\n>> switch cases, documentation entries in some order. Are we happy with\n>> that? Should we try to reorder all of that for better maintainability\n>> or readability?\n>>\n>\n> Yeah, that's the better suggestion. While implementing these methods, I\n> was confused about where to put them exactly and tried keeping them in some\n> logical place.\n> I think once these methods get in, we can have a follow-up patch\n> reorganizing all of these.\n>\n>\n>>\n>> > v1-0002-Implement-.date-.time-.time_tz-.timestamp-and-.ti.patch\n>> >\n>> > This commit implements jsonpath .date(), .time(), .time_tz(),\n>> > .timestamp(), .timestamp_tz() methods. The JSON string representing\n>> > a valid date/time is converted to the specific date or time type\n>> > representation.\n>> >\n>> > The changes use the infrastructure of the .datetime() method and\n>> > perform the datatype conversion as appropriate. All these methods\n>> > accept no argument and use ISO datetime formats.\n>>\n>> These should accept an optional precision argument. Did you plan to add\n>> that?\n>>\n>\n> Yeah, will add that.\n>\n>\n>>\n>> > v1-0003-Implement-jsonpath-.boolean-and-.string-methods.patch\n>> >\n>> > This commit implements jsonpath .boolean() and .string() methods.\n>>\n>> This contains a compiler warning:\n>>\n>> ../src/backend/utils/adt/jsonpath_exec.c: In function\n>> 'executeItemOptUnwrapTarget':\n>> ../src/backend/utils/adt/jsonpath_exec.c:1162:86: error: 'tmp' may be\n>> used uninitialized [-Werror=maybe-uninitialized]\n>>\n>> > v1-0004-Implement-jasonpath-.decimal-precision-scale-meth.patch\n>> >\n>> > This commit implements jsonpath .decimal() method with optional\n>> > precision and scale. If precision and scale are provided, then\n>> > it is converted to the equivalent numerictypmod and applied to the\n>> > numeric number.\n>>\n>> This also contains compiler warnings:\n>>\n>\n> Thanks, for reporting these warnings. I don't get those on my machine,\n> thus missed them. Will fix them.\n>\n>\n>>\n>> ../src/backend/utils/adt/jsonpath_exec.c: In function\n>> 'executeItemOptUnwrapTarget':\n>> ../src/backend/utils/adt/jsonpath_exec.c:1403:53: error: declaration of\n>> 'numstr' shadows a previous local [-Werror=shadow=compatible-local]\n>> ../src/backend/utils/adt/jsonpath_exec.c:1442:54: error: declaration of\n>> 'elem' shadows a previous local [-Werror=shadow=compatible-local]\n>>\n>> There is a typo in the commit message: \"Implement jasonpath\"\n>>\n>\n> Will fix.\n>\n>\n>>\n>> Any reason this patch is separate from 0002? Isn't number() and\n>> decimal() pretty similar?\n>>\n>\n> Since DECIMAL has precision and scale arguments, I have implemented that\n> at the end. I tried merging that with 0001, but other patches ended up with\n> the conflicts and thus I didn't merge that and kept it as a separate patch.\n> But yes, logically it belongs to the 0001 group. My bad that I haven't put\n> in that extra effort. 
Will do that in the next version. Sorry for the same.\n>\n>\n>>\n>> You could also update src/backend/catalog/sql_features.txt in each patch\n>> (features T865 through T878).\n>>\n>\n> OK.\n>\n\nAttached are all three patches fixing the above comments.\n\nThanks\n\n\n>\n> Thanks\n>\n> --\n> Jeevan Chalke\n>\n> *Senior Staff SDE, Database Architect, and ManagerProduct Development*\n>\n>\n>\n> edbpostgres.com\n>\n\n\n-- \nJeevan Chalke\n\n*Senior Staff SDE, Database Architect, and ManagerProduct Development*\n\n\n\nedbpostgres.com",
"msg_date": "Mon, 23 Oct 2023 12:50:55 +0530",
"msg_from": "Jeevan Chalke <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: More new SQL/JSON item methods"
},
{
"msg_contents": "On Mon, Oct 23, 2023 at 3:29 PM Jeevan Chalke\n<[email protected]> wrote:\n>\n> Attached are all three patches fixing the above comments.\n>\n\nminor issue:\n/src/backend/utils/adt/jsonpath_exec.c\n2531: Timestamp result;\n2532: ErrorSaveContext escontext = {T_ErrorSaveContext};\n2533:\n2534: /* Get a warning when precision is reduced */\n2535: time_precision = anytimestamp_typmod_check(false,\n2536: time_precision);\n2537: result = DatumGetTimestamp(value);\n2538: AdjustTimestampForTypmod(&result, time_precision,\n2539: (Node *) &escontext);\n2540: if (escontext.error_occurred)\n2541: RETURN_ERROR(ereport(ERROR,\n2542: (errcode(ERRCODE_INVALID_ARGUMENT_FOR_SQL_JSON_DATETIME_FUNCTION),\n2543: errmsg(\"numeric argument of jsonpath item method .%s() is out\nof range for type integer\",\n2544: jspOperationName(jsp->type)))));\n\nyou already did anytimestamp_typmod_check. So this \"if\n(escontext.error_occurred)\" is unnecessary?\nA similar case applies to another function called anytimestamp_typmod_check.\n\n/src/backend/utils/adt/jsonpath_exec.c\n1493: /* Convert numstr to Numeric with typmod */\n1494: Assert(numstr != NULL);\n1495: noerr = DirectInputFunctionCallSafe(numeric_in, numstr,\n1496: InvalidOid, dtypmod,\n1497: (Node *) &escontext,\n1498: &numdatum);\n1499:\n1500: if (!noerr || escontext.error_occurred)\n1501: RETURN_ERROR(ereport(ERROR,\n1502: (errcode(ERRCODE_NON_NUMERIC_SQL_JSON_ITEM),\n1503: errmsg(\"string argument of jsonpath item method .%s() is not a\nvalid representation of a decimal or number\",\n1504: jspOperationName(jsp->type)))));\n\ninside DirectInputFunctionCallSafe already \"if (SOFT_ERROR_OCCURRED(escontext))\"\nso \"if (!noerr || escontext.error_occurred)\" change to \"if (!noerr)\"\nshould be fine?\n\n\n",
"msg_date": "Tue, 24 Oct 2023 11:16:29 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: More new SQL/JSON item methods"
},
{
"msg_contents": "On 2023-10-19 Th 02:06, Jeevan Chalke wrote:\n> Thanks, Peter for the comments.\n>\n> On Fri, Oct 6, 2023 at 5:13 PM Peter Eisentraut <[email protected]> \n> wrote:\n>\n> On 29.08.23 09:05, Jeevan Chalke wrote:\n> > v1-0001-Implement-jsonpath-.bigint-.integer-and-.number-m.patch\n> >\n> > This commit implements jsonpath .bigint(), .integer(), and .number()\n> > methods. The JSON string or a numeric value is converted to the\n> > bigint, int4, and numeric type representation.\n>\n> A comment that applies to all of these: These add various keywords,\n> switch cases, documentation entries in some order. Are we happy with\n> that? Should we try to reorder all of that for better\n> maintainability\n> or readability?\n>\n>\n> Yeah, that's the better suggestion. While implementing these methods, \n> I was confused about where to put them exactly and tried keeping them \n> in some logical place.\n> I think once these methods get in, we can have a follow-up patch \n> reorganizing all of these.\n\n\nI think it would be better to organize things how we want them before \nadding in more stuff.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2023-10-19 Th 02:06, Jeevan Chalke\n wrote:\n\n\n\n\nThanks, Peter for the comments.\n\n\nOn Fri, Oct 6, 2023 at\n 5:13 PM Peter Eisentraut <[email protected]>\n wrote:\n\nOn 29.08.23 09:05, Jeevan\n Chalke wrote:\n >\n v1-0001-Implement-jsonpath-.bigint-.integer-and-.number-m.patch\n > \n > This commit implements jsonpath .bigint(), .integer(),\n and .number()\n > methods. The JSON string or a numeric value is\n converted to the\n > bigint, int4, and numeric type representation.\n\n A comment that applies to all of these: These add various\n keywords, \n switch cases, documentation entries in some order. Are we\n happy with \n that? Should we try to reorder all of that for better\n maintainability \n or readability?\n\n\n\n Yeah, that's the better suggestion. While implementing these\n methods, I was confused about where to put them exactly and\n tried keeping them in some logical place.\nI think once these methods get in, we can have a\n follow-up patch reorganizing all of these.\n\n\n\n\n\nI think it would be better to organize things how we want them\n before adding in more stuff.\n\n\ncheers\n\n\nandrew\n\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Tue, 24 Oct 2023 09:11:47 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: More new SQL/JSON item methods"
},
{
"msg_contents": "Hello,\n\nOn Tue, Oct 24, 2023 at 6:41 PM Andrew Dunstan <[email protected]> wrote:\n\n>\n> On 2023-10-19 Th 02:06, Jeevan Chalke wrote:\n>\n> Thanks, Peter for the comments.\n>\n> On Fri, Oct 6, 2023 at 5:13 PM Peter Eisentraut <[email protected]>\n> wrote:\n>\n>> On 29.08.23 09:05, Jeevan Chalke wrote:\n>> > v1-0001-Implement-jsonpath-.bigint-.integer-and-.number-m.patch\n>> >\n>> > This commit implements jsonpath .bigint(), .integer(), and .number()\n>> > methods. The JSON string or a numeric value is converted to the\n>> > bigint, int4, and numeric type representation.\n>>\n>> A comment that applies to all of these: These add various keywords,\n>> switch cases, documentation entries in some order. Are we happy with\n>> that? Should we try to reorder all of that for better maintainability\n>> or readability?\n>>\n>\n> Yeah, that's the better suggestion. While implementing these methods, I\n> was confused about where to put them exactly and tried keeping them in some\n> logical place.\n> I think once these methods get in, we can have a follow-up patch\n> reorganizing all of these.\n>\n>\n> I think it would be better to organize things how we want them before\n> adding in more stuff.\n>\n\nI have tried reordering all the jsonpath Operators and Methods\nconsistently. With this patch, they all appear in the same order when\ntogether in the group.\n\nIn some switch cases, they are still divided, like in\nflattenJsonPathParseItem(), where 2-arg, 1-arg, and no-arg cases are\nclubbed together. But I have tried to keep them in order in those subgroups.\n\nI will rebase my patches for this task on this patch, but before doing so,\nI would like to get your views on this reordering.\n\nThanks\n\n\n>\n> cheers\n>\n>\n> andrew\n>\n>\n> --\n> Andrew Dunstan\n> EDB: https://www.enterprisedb.com\n>\n>\n\n-- \nJeevan Chalke\n\n*Senior Staff SDE, Database Architect, and ManagerProduct Development*\n\n\n\nedbpostgres.com",
"msg_date": "Wed, 1 Nov 2023 12:30:02 +0530",
"msg_from": "Jeevan Chalke <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: More new SQL/JSON item methods"
},
{
"msg_contents": "On 2023-11-01 We 03:00, Jeevan Chalke wrote:\n> Hello,\n>\n> On Tue, Oct 24, 2023 at 6:41 PM Andrew Dunstan <[email protected]> \n> wrote:\n>\n>\n> On 2023-10-19 Th 02:06, Jeevan Chalke wrote:\n>> Thanks, Peter for the comments.\n>>\n>> On Fri, Oct 6, 2023 at 5:13 PM Peter Eisentraut\n>> <[email protected]> wrote:\n>>\n>> On 29.08.23 09:05, Jeevan Chalke wrote:\n>> > v1-0001-Implement-jsonpath-.bigint-.integer-and-.number-m.patch\n>> >\n>> > This commit implements jsonpath .bigint(), .integer(), and\n>> .number()\n>> > methods. The JSON string or a numeric value is converted\n>> to the\n>> > bigint, int4, and numeric type representation.\n>>\n>> A comment that applies to all of these: These add various\n>> keywords,\n>> switch cases, documentation entries in some order. Are we\n>> happy with\n>> that? Should we try to reorder all of that for better\n>> maintainability\n>> or readability?\n>>\n>>\n>> Yeah, that's the better suggestion. While implementing these\n>> methods, I was confused about where to put them exactly and tried\n>> keeping them in some logical place.\n>> I think once these methods get in, we can have a follow-up patch\n>> reorganizing all of these.\n>\n>\n> I think it would be better to organize things how we want them\n> before adding in more stuff.\n>\n>\n> I have tried reordering all the jsonpath Operators and Methods \n> consistently. With this patch, they all appear in the same order when \n> together in the group.\n>\n> In some switch cases, they are still divided, like in \n> flattenJsonPathParseItem(), where 2-arg, 1-arg, and no-arg cases are \n> clubbed together. But I have tried to keep them in order in those \n> subgroups.\n>\n> I will rebase my patches for this task on this patch, but before doing \n> so, I would like to get your views on this reordering.\n>\n>\n\nThis appears to be reasonable. Maybe we need to add a note in one or two \nplaces about maintaining the consistency?\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2023-11-01 We 03:00, Jeevan Chalke\n wrote:\n\n\n\n\nHello,\n\n\nOn Tue, Oct 24, 2023 at\n 6:41 PM Andrew Dunstan <[email protected]>\n wrote:\n\n\n\n\n\nOn 2023-10-19 Th 02:06, Jeevan Chalke wrote:\n\n\n\nThanks, Peter for the comments.\n\n\nOn Fri, Oct 6,\n 2023 at 5:13 PM Peter Eisentraut <[email protected]>\n wrote:\n\nOn 29.08.23\n 09:05, Jeevan Chalke wrote:\n >\n v1-0001-Implement-jsonpath-.bigint-.integer-and-.number-m.patch\n > \n > This commit implements jsonpath .bigint(),\n .integer(), and .number()\n > methods. The JSON string or a numeric value\n is converted to the\n > bigint, int4, and numeric type\n representation.\n\n A comment that applies to all of these: These add\n various keywords, \n switch cases, documentation entries in some\n order. Are we happy with \n that? Should we try to reorder all of that for\n better maintainability \n or readability?\n\n\n\n Yeah, that's the better suggestion. While\n implementing these methods, I was confused about\n where to put them exactly and tried keeping them in\n some logical place.\nI think once these methods get in, we can have\n a follow-up patch reorganizing all of these.\n\n\n\n\n\nI think it would be better to organize things how we\n want them before adding in more stuff.\n\n\n\n\n I have tried reordering all the jsonpath Operators and Methods\n consistently. 
With this patch, they all appear in the same\n order when together in the group.\n\n In some switch cases, they are still divided, like in\n flattenJsonPathParseItem(), where 2-arg, 1-arg, and no-arg\n cases are clubbed together. But I have tried to keep them in\n order in those subgroups.\n\n I will rebase my patches for this task on this patch, but\n before doing so, I would like to get your views on this\n reordering.\n\n\n\n\n\n\n\nThis appears to be reasonable. Maybe we need to add a note in one\n or two places about maintaining the consistency?\n\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Wed, 1 Nov 2023 06:19:31 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: More new SQL/JSON item methods"
},
{
"msg_contents": "On Wed, Nov 1, 2023 at 3:49 PM Andrew Dunstan <[email protected]> wrote:\n\n>\n> On 2023-11-01 We 03:00, Jeevan Chalke wrote:\n>\n> Hello,\n>\n> On Tue, Oct 24, 2023 at 6:41 PM Andrew Dunstan <[email protected]>\n> wrote:\n>\n>>\n>> On 2023-10-19 Th 02:06, Jeevan Chalke wrote:\n>>\n>> Thanks, Peter for the comments.\n>>\n>> On Fri, Oct 6, 2023 at 5:13 PM Peter Eisentraut <[email protected]>\n>> wrote:\n>>\n>>> On 29.08.23 09:05, Jeevan Chalke wrote:\n>>> > v1-0001-Implement-jsonpath-.bigint-.integer-and-.number-m.patch\n>>> >\n>>> > This commit implements jsonpath .bigint(), .integer(), and .number()\n>>> > methods. The JSON string or a numeric value is converted to the\n>>> > bigint, int4, and numeric type representation.\n>>>\n>>> A comment that applies to all of these: These add various keywords,\n>>> switch cases, documentation entries in some order. Are we happy with\n>>> that? Should we try to reorder all of that for better maintainability\n>>> or readability?\n>>>\n>>\n>> Yeah, that's the better suggestion. While implementing these methods, I\n>> was confused about where to put them exactly and tried keeping them in some\n>> logical place.\n>> I think once these methods get in, we can have a follow-up patch\n>> reorganizing all of these.\n>>\n>>\n>> I think it would be better to organize things how we want them before\n>> adding in more stuff.\n>>\n>\n> I have tried reordering all the jsonpath Operators and Methods\n> consistently. With this patch, they all appear in the same order when\n> together in the group.\n>\n> In some switch cases, they are still divided, like in\n> flattenJsonPathParseItem(), where 2-arg, 1-arg, and no-arg cases are\n> clubbed together. But I have tried to keep them in order in those subgroups.\n>\n> I will rebase my patches for this task on this patch, but before doing so,\n> I would like to get your views on this reordering.\n>\n>\n>\n> This appears to be reasonable. Maybe we need to add a note in one or two\n> places about maintaining the consistency?\n>\n+1\nAdded a note in jsonpath.h where enums are defined.\n\nI have rebased all three patches over this reordering patch making 4\npatches in the set.\n\nLet me know your views on the same.\n\nThanks\n\n\n\n>\n> cheers\n>\n>\n> andrew\n>\n> --\n> Andrew Dunstan\n> EDB: https://www.enterprisedb.com\n>\n>\n\n-- \nJeevan Chalke\n\n*Senior Staff SDE, Database Architect, and ManagerProduct Development*\n\n\n\nedbpostgres.com",
"msg_date": "Mon, 6 Nov 2023 18:53:06 +0530",
"msg_from": "Jeevan Chalke <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: More new SQL/JSON item methods"
},
{
"msg_contents": "On 2023-11-06 Mo 08:23, Jeevan Chalke wrote:\n>\n>\n> On Wed, Nov 1, 2023 at 3:49 PM Andrew Dunstan <[email protected]> wrote:\n>\n>\n> On 2023-11-01 We 03:00, Jeevan Chalke wrote:\n>> Hello,\n>>\n>> On Tue, Oct 24, 2023 at 6:41 PM Andrew Dunstan\n>> <[email protected]> wrote:\n>>\n>>\n>> On 2023-10-19 Th 02:06, Jeevan Chalke wrote:\n>>> Thanks, Peter for the comments.\n>>>\n>>> On Fri, Oct 6, 2023 at 5:13 PM Peter Eisentraut\n>>> <[email protected]> wrote:\n>>>\n>>> On 29.08.23 09:05, Jeevan Chalke wrote:\n>>> >\n>>> v1-0001-Implement-jsonpath-.bigint-.integer-and-.number-m.patch\n>>> >\n>>> > This commit implements jsonpath .bigint(), .integer(),\n>>> and .number()\n>>> > methods. The JSON string or a numeric value is\n>>> converted to the\n>>> > bigint, int4, and numeric type representation.\n>>>\n>>> A comment that applies to all of these: These add\n>>> various keywords,\n>>> switch cases, documentation entries in some order. Are\n>>> we happy with\n>>> that? Should we try to reorder all of that for better\n>>> maintainability\n>>> or readability?\n>>>\n>>>\n>>> Yeah, that's the better suggestion. While implementing these\n>>> methods, I was confused about where to put them exactly and\n>>> tried keeping them in some logical place.\n>>> I think once these methods get in, we can have a follow-up\n>>> patch reorganizing all of these.\n>>\n>>\n>> I think it would be better to organize things how we want\n>> them before adding in more stuff.\n>>\n>>\n>> I have tried reordering all the jsonpath Operators and Methods\n>> consistently. With this patch, they all appear in the same order\n>> when together in the group.\n>>\n>> In some switch cases, they are still divided, like in\n>> flattenJsonPathParseItem(), where 2-arg, 1-arg, and no-arg cases\n>> are clubbed together. But I have tried to keep them in order in\n>> those subgroups.\n>>\n>> I will rebase my patches for this task on this patch, but before\n>> doing so, I would like to get your views on this reordering.\n>>\n>>\n>\n> This appears to be reasonable. Maybe we need to add a note in one\n> or two places about maintaining the consistency?\n>\n> +1\n> Added a note in jsonpath.h where enums are defined.\n>\n> I have rebased all three patches over this reordering patch making 4 \n> patches in the set.\n>\n> Let me know your views on the same.\n>\n> Thanks\n>\n>\n\n\nHi Jeevan,\n\n\nI think these are in reasonably good shape, but there are a few things \nthat concern me:\n\n\nandrew@~=# select jsonb_path_query_array('[1.2]', '$[*].bigint()');\nERROR: numeric argument of jsonpath item method .bigint() is out of \nrange for type bigint\n\nI'm ok with this being an error, but I think the error message is wrong. \nIt should be the \"invalid input\" message.\n\nandrew@~=# select jsonb_path_query_array('[1.0]', '$[*].bigint()');\nERROR: numeric argument of jsonpath item method .bigint() is out of \nrange for type bigint\n\nShould we trim trailing dot+zeros from numeric values before trying to \nconvert to bigint/int? If not, this too should be an \"invalid input\" case.\n\nandrew@~=# select jsonb_path_query_array('[1.0]', '$[*].boolean()');\nERROR: numeric argument of jsonpath item method .boolean() is out of \nrange for type boolean\n\nIt seems odd that any non-zero integer is true but not any non-zero \nnumeric. Is that in the spec? 
If not I'd avoid trying to convert it to \nan integer first, and just check for infinity/nan before looking to see \nif it's zero.\n\nThe code for integer() and bigint() seems a bit duplicative, but I'm not \nsure there's a clean way of avoiding that.\n\nThe items for datetime types and string look OK.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2023-11-06 Mo 08:23, Jeevan Chalke\n wrote:\n\n\n\n\n\n\n\n\nOn Wed, Nov 1, 2023 at\n 3:49 PM Andrew Dunstan <[email protected]>\n wrote:\n\n\n\n\n\nOn 2023-11-01 We 03:00, Jeevan Chalke wrote:\n\n\n\nHello,\n\n\nOn Tue, Oct 24,\n 2023 at 6:41 PM Andrew Dunstan <[email protected]>\n wrote:\n\n\n\n\n\nOn 2023-10-19 Th 02:06, Jeevan Chalke\n wrote:\n\n\n\nThanks, Peter for the comments.\n\n\nOn Fri,\n Oct 6, 2023 at 5:13 PM Peter Eisentraut\n <[email protected]>\n wrote:\n\nOn\n 29.08.23 09:05, Jeevan Chalke wrote:\n >\n v1-0001-Implement-jsonpath-.bigint-.integer-and-.number-m.patch\n > \n > This commit implements jsonpath\n .bigint(), .integer(), and .number()\n > methods. The JSON string or a\n numeric value is converted to the\n > bigint, int4, and numeric type\n representation.\n\n A comment that applies to all of these:\n These add various keywords, \n switch cases, documentation entries in\n some order. Are we happy with \n that? Should we try to reorder all of\n that for better maintainability \n or readability?\n\n\n\n Yeah, that's the better suggestion. While\n implementing these methods, I was confused\n about where to put them exactly and tried\n keeping them in some logical place.\nI think once these methods get in, we\n can have a follow-up patch reorganizing\n all of these.\n\n\n\n\n\nI think it would be better to organize things\n how we want them before adding in more stuff.\n\n\n\n\n I have tried reordering all the jsonpath Operators\n and Methods consistently. With this patch, they all\n appear in the same order when together in the group.\n\n In some switch cases, they are still divided, like\n in flattenJsonPathParseItem(), where 2-arg, 1-arg,\n and no-arg cases are clubbed together. But I have\n tried to keep them in order in those subgroups.\n\n I will rebase my patches for this task on this\n patch, but before doing so, I would like to get\n your views on this reordering.\n\n\n\n\n\n\n\nThis appears to be reasonable. Maybe we need to add a\n note in one or two places about maintaining the\n consistency?\n\n\n\n+1\n\nAdded a note in jsonpath.h where enums are defined.\n\n\nI have rebased all three patches over this reordering\n patch making 4 patches in the set.\n\n\nLet me know your views on the same.\n\n\nThanks\n\n\n \n\n\n\n\n\n\n\n\nHi Jeevan,\n\n\nI think these are in reasonably good shape, but there are a few\n things that concern me:\n\n\nandrew@~=# select jsonb_path_query_array('[1.2]',\n '$[*].bigint()');\n ERROR: numeric argument of jsonpath item method .bigint() is out\n of range for type bigint\nI'm ok with this being an error, but I think the error message is\n wrong. It should be the \"invalid input\" message.\nandrew@~=# select jsonb_path_query_array('[1.0]',\n '$[*].bigint()');\n ERROR: numeric argument of jsonpath item method .bigint() is out\n of range for type bigint\nShould we trim trailing dot+zeros from numeric values before\n trying to convert to bigint/int? 
If not, this too should be an\n \"invalid input\" case.\n\nandrew@~=# select jsonb_path_query_array('[1.0]',\n '$[*].boolean()');\n ERROR: numeric argument of jsonpath item method .boolean() is out\n of range for type boolean\nIt seems odd that any non-zero integer is true but not any\n non-zero numeric. Is that in the spec? If not I'd avoid trying to\n convert it to an integer first, and just check for infinity/nan\n before looking to see if it's zero.\n\nThe code for integer() and bigint() seems a bit duplicative, but\n I'm not sure there's a clean way of avoiding that.\nThe items for datetime types and string look OK.\n\n\n\ncheers\n\n\nandrew\n\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Sun, 3 Dec 2023 11:14:15 -0500",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: More new SQL/JSON item methods"
},
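The trailing-".0" question above comes down to whether .bigint() should match PostgreSQL's own numeric-to-bigint cast, which rounds fractional input rather than rejecting it. For comparison, in plain SQL (these are ordinary casts, independent of the patch):

    select 1.0::numeric::bigint;   -- 1
    select 1.2::numeric::bigint;   -- 1, rounded
    select 1.5::numeric::bigint;   -- 2, rounds half away from zero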
{
"msg_contents": "On Sun, Dec 3, 2023 at 9:44 PM Andrew Dunstan <[email protected]> wrote:\n\n> Hi Jeevan,\n>\n>\n> I think these are in reasonably good shape, but there are a few things\n> that concern me:\n>\n>\n> andrew@~=# select jsonb_path_query_array('[1.2]', '$[*].bigint()');\n> ERROR: numeric argument of jsonpath item method .bigint() is out of range\n> for type bigint\n>\n> I'm ok with this being an error, but I think the error message is wrong.\n> It should be the \"invalid input\" message.\n>\n> andrew@~=# select jsonb_path_query_array('[1.0]', '$[*].bigint()');\n> ERROR: numeric argument of jsonpath item method .bigint() is out of range\n> for type bigint\n>\n> Should we trim trailing dot+zeros from numeric values before trying to\n> convert to bigint/int? If not, this too should be an \"invalid input\" case.\n>\n\nWe have the same issue with integer conversion and need a fix.\n\nUnfortunately, I was using int8in() for the conversion of numeric values.\nWe should be using numeric_int8() instead. However, there is no opt_error\nversion of the same.\n\nSo, I have introduced a numeric_int8_opt_error() version just like we have\none for int4, i.e. numeric_int4_opt_error(), to suppress the error. These\nchanges are in the 0001 patch. (All other patch numbers are now increased\nby 1)\n\nI have used this new function to fix this reported issue and used\nnumeric_int4_opt_error() for integer conversion.\n\n\n> andrew@~=# select jsonb_path_query_array('[1.0]', '$[*].boolean()');\n> ERROR: numeric argument of jsonpath item method .boolean() is out of\n> range for type boolean\n>\n> It seems odd that any non-zero integer is true but not any non-zero\n> numeric. Is that in the spec? If not I'd avoid trying to convert it to an\n> integer first, and just check for infinity/nan before looking to see if\n> it's zero.\n>\nPostgreSQL doesn’t cast a numeric to boolean. So maybe we should keep this\nbehavior as is.\n\n# select 1.0::boolean;\nERROR: cannot cast type numeric to boolean\nLINE 1: select 1.0::boolean;\n\n\n> The code for integer() and bigint() seems a bit duplicative, but I'm not\n> sure there's a clean way of avoiding that.\n>\n> The items for datetime types and string look OK.\n>\nThanks.\n\nSuggestions?\n\n\n>\n> cheers\n>\n>\n> andrew\n>\n>\n> --\n> Andrew Dunstan\n> EDB: https://www.enterprisedb.com\n>\n>\n\n-- \nJeevan Chalke\n\n*Senior Staff SDE, Database Architect, and ManagerProduct Development*\n\n\n\nedbpostgres.com",
"msg_date": "Thu, 7 Dec 2023 18:54:31 +0530",
"msg_from": "Jeevan Chalke <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: More new SQL/JSON item methods"
},
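The boolean point rests on what plain SQL allows today: int4 casts to boolean, numeric does not, which is why the patch routes numeric input through int4 first. For comparison (ordinary casts, not the jsonpath method itself):

    select 1::int4::boolean;              -- t
    select 1.0::numeric::boolean;         -- ERROR: cannot cast type numeric to boolean
    select 1.0::numeric::int4::boolean;   -- t, the two-step route the patch's int4 conversion corresponds to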
{
"msg_contents": "On 07.12.23 14:24, Jeevan Chalke wrote:\n> We have the same issue with integer conversion and need a fix.\n> \n> Unfortunately, I was using int8in() for the conversion of numeric \n> values. We should be using numeric_int8() instead. However, there is no \n> opt_error version of the same.\n> \n> So, I have introduced a numeric_int8_opt_error() version just like we \n> have one for int4, i.e. numeric_int4_opt_error(), to suppress the error. \n> These changes are in the 0001 patch. (All other patch numbers are now \n> increased by 1)\n> \n> I have used this new function to fix this reported issue and used \n> numeric_int4_opt_error() for integer conversion.\n\nI have committed the 0001 and 0002 patches for now.\n\nThe remaining patches look reasonable to me, but I haven't reviewed them \nin detail.\n\n\n",
"msg_date": "Wed, 3 Jan 2024 13:01:23 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: More new SQL/JSON item methods"
},
{
"msg_contents": "On 03.01.24 13:01, Peter Eisentraut wrote:\n> On 07.12.23 14:24, Jeevan Chalke wrote:\n>> We have the same issue with integer conversion and need a fix.\n>>\n>> Unfortunately, I was using int8in() for the conversion of numeric \n>> values. We should be using numeric_int8() instead. However, there is \n>> no opt_error version of the same.\n>>\n>> So, I have introduced a numeric_int8_opt_error() version just like we \n>> have one for int4, i.e. numeric_int4_opt_error(), to suppress the \n>> error. These changes are in the 0001 patch. (All other patch numbers \n>> are now increased by 1)\n>>\n>> I have used this new function to fix this reported issue and used \n>> numeric_int4_opt_error() for integer conversion.\n> \n> I have committed the 0001 and 0002 patches for now.\n> \n> The remaining patches look reasonable to me, but I haven't reviewed them \n> in detail.\n\nThe 0002 patch had to be reverted, because we can't change the order of \nthe enum values in JsonPathItemType. I have instead committed a \ndifferent patch that adjusts the various switch cases to observe the \ncurrent order of the enum. That also means that the remaining patches \nthat add new item methods need to add the new enum values at the end and \nadjust the rest of their code accordingly.\n\n\n\n",
"msg_date": "Wed, 3 Jan 2024 22:04:52 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: More new SQL/JSON item methods"
},
{
"msg_contents": "On Thu, Jan 4, 2024 at 2:34 AM Peter Eisentraut <[email protected]>\nwrote:\n\n> On 03.01.24 13:01, Peter Eisentraut wrote:\n> > On 07.12.23 14:24, Jeevan Chalke wrote:\n> >> We have the same issue with integer conversion and need a fix.\n> >>\n> >> Unfortunately, I was using int8in() for the conversion of numeric\n> >> values. We should be using numeric_int8() instead. However, there is\n> >> no opt_error version of the same.\n> >>\n> >> So, I have introduced a numeric_int8_opt_error() version just like we\n> >> have one for int4, i.e. numeric_int4_opt_error(), to suppress the\n> >> error. These changes are in the 0001 patch. (All other patch numbers\n> >> are now increased by 1)\n> >>\n> >> I have used this new function to fix this reported issue and used\n> >> numeric_int4_opt_error() for integer conversion.\n> >\n> > I have committed the 0001 and 0002 patches for now.\n> >\n> > The remaining patches look reasonable to me, but I haven't reviewed them\n> > in detail.\n>\n> The 0002 patch had to be reverted, because we can't change the order of\n> the enum values in JsonPathItemType. I have instead committed a\n> different patch that adjusts the various switch cases to observe the\n> current order of the enum. That also means that the remaining patches\n> that add new item methods need to add the new enum values at the end and\n> adjust the rest of their code accordingly.\n>\n\nThanks, Peter.\n\nI will work on rebasing and reorganizing the remaining patches.\n\nThanks\n\n-- \nJeevan Chalke\n\n*PrincipalProduct Development*\n\n\n\nedbpostgres.com\n\nOn Thu, Jan 4, 2024 at 2:34 AM Peter Eisentraut <[email protected]> wrote:On 03.01.24 13:01, Peter Eisentraut wrote:\n> On 07.12.23 14:24, Jeevan Chalke wrote:\n>> We have the same issue with integer conversion and need a fix.\n>>\n>> Unfortunately, I was using int8in() for the conversion of numeric \n>> values. We should be using numeric_int8() instead. However, there is \n>> no opt_error version of the same.\n>>\n>> So, I have introduced a numeric_int8_opt_error() version just like we \n>> have one for int4, i.e. numeric_int4_opt_error(), to suppress the \n>> error. These changes are in the 0001 patch. (All other patch numbers \n>> are now increased by 1)\n>>\n>> I have used this new function to fix this reported issue and used \n>> numeric_int4_opt_error() for integer conversion.\n> \n> I have committed the 0001 and 0002 patches for now.\n> \n> The remaining patches look reasonable to me, but I haven't reviewed them \n> in detail.\n\nThe 0002 patch had to be reverted, because we can't change the order of \nthe enum values in JsonPathItemType. I have instead committed a \ndifferent patch that adjusts the various switch cases to observe the \ncurrent order of the enum. That also means that the remaining patches \nthat add new item methods need to add the new enum values at the end and \nadjust the rest of their code accordingly.Thanks, Peter.I will work on rebasing and reorganizing the remaining patches. Thanks-- Jeevan ChalkePrincipalProduct Developmentedbpostgres.com",
"msg_date": "Mon, 8 Jan 2024 12:30:51 +0530",
"msg_from": "Jeevan Chalke <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: More new SQL/JSON item methods"
},
{
"msg_contents": "On Mon, Jan 8, 2024 at 12:30 PM Jeevan Chalke <\[email protected]> wrote:\n\n>\n>\n> On Thu, Jan 4, 2024 at 2:34 AM Peter Eisentraut <[email protected]>\n> wrote:\n>\n>> On 03.01.24 13:01, Peter Eisentraut wrote:\n>> > On 07.12.23 14:24, Jeevan Chalke wrote:\n>> >> We have the same issue with integer conversion and need a fix.\n>> >>\n>> >> Unfortunately, I was using int8in() for the conversion of numeric\n>> >> values. We should be using numeric_int8() instead. However, there is\n>> >> no opt_error version of the same.\n>> >>\n>> >> So, I have introduced a numeric_int8_opt_error() version just like we\n>> >> have one for int4, i.e. numeric_int4_opt_error(), to suppress the\n>> >> error. These changes are in the 0001 patch. (All other patch numbers\n>> >> are now increased by 1)\n>> >>\n>> >> I have used this new function to fix this reported issue and used\n>> >> numeric_int4_opt_error() for integer conversion.\n>> >\n>> > I have committed the 0001 and 0002 patches for now.\n>> >\n>> > The remaining patches look reasonable to me, but I haven't reviewed\n>> them\n>> > in detail.\n>>\n>> The 0002 patch had to be reverted, because we can't change the order of\n>> the enum values in JsonPathItemType. I have instead committed a\n>> different patch that adjusts the various switch cases to observe the\n>> current order of the enum. That also means that the remaining patches\n>> that add new item methods need to add the new enum values at the end and\n>> adjust the rest of their code accordingly.\n>>\n>\n> Thanks, Peter.\n>\n> I will work on rebasing and reorganizing the remaining patches.\n>\n\nAttached are rebased patches.\n\nThanks\n\n\n>\n>\n> Thanks\n>\n> --\n> Jeevan Chalke\n>\n> *PrincipalProduct Development*\n>\n>\n>\n> edbpostgres.com\n>\n\n\n-- \nJeevan Chalke\n\n*Principal, ManagerProduct Development*\n\n\n\nedbpostgres.com",
"msg_date": "Wed, 10 Jan 2024 13:19:04 +0530",
"msg_from": "Jeevan Chalke <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: More new SQL/JSON item methods"
},
{
"msg_contents": "Attached are two small fixup patches for your patch set.\n\nIn the first one, I simplified the grammar for the .decimal() method. \nIt seemed a bit overkill to build a whole list structure when all we \nneed are 0, 1, or 2 arguments.\n\nPer SQL standard, the precision and scale arguments are unsigned \nintegers, so unary plus and minus signs are not supported. So my patch \nremoves that support, but I didn't adjust the regression tests for that.\n\nAlso note that in your 0002 patch, the datetime precision is similarly \nunsigned, so that's consistent.\n\nBy the way, in your 0002 patch, don't see the need for the separate \ndatetime_method grammar rule. You can fold that into accessor_op.\n\nOverall, I think it would be better if you combined all three of these \npatches into one. Right now, you have arranged these as incremental \nfeatures, and as a result of that, the additions to the JsonPathItemType \nenum and the grammar keywords etc. are ordered in the way you worked on \nthese features, I guess. It would be good to maintain a bit of sanity \nto put all of this together and order all the enums and everything else \nfor example in the order they are in the sql_features.txt file (which is \nalphabetical, I suppose). At this point I suspect we'll end up \ncommitting this whole feature set together anyway, so we might as well \norganize it that way.",
"msg_date": "Mon, 15 Jan 2024 15:10:57 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: More new SQL/JSON item methods"
},
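To make the argument handling concrete: .decimal(precision, scale) is meant to apply the equivalent numeric typmod to the value, so the usual numeric(precision, scale) rounding and overflow rules carry over. A hedged sketch of the expected usage, with results written as comments since they are assumptions about the patched behaviour:

    select jsonb_path_query('1234.5678', '$.decimal()');       -- 1234.5678
    select jsonb_path_query('1234.5678', '$.decimal(6, 2)');   -- 1234.57
    select jsonb_path_query('1234.5678', '$.decimal(4, 2)');   -- fails, just as 1234.5678::numeric(4,2) would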
{
"msg_contents": "On Mon, Jan 15, 2024 at 7:41 PM Peter Eisentraut <[email protected]>\nwrote:\n\n> Attached are two small fixup patches for your patch set.\n>\n\nThanks, Peter.\n\n\n>\n> In the first one, I simplified the grammar for the .decimal() method.\n> It seemed a bit overkill to build a whole list structure when all we\n> need are 0, 1, or 2 arguments.\n>\n\nAgree.\nI added unary '+' and '-' support as well and thus thought of having\nseparate rules altogether rather than folding those in.\n\n\n> Per SQL standard, the precision and scale arguments are unsigned\n> integers, so unary plus and minus signs are not supported. So my patch\n> removes that support, but I didn't adjust the regression tests for that.\n>\n\nHowever, PostgreSQL numeric casting does support a negative scale. Here is\nan example:\n\n# select '12345'::numeric(4,-2);\n numeric\n---------\n 12300\n(1 row)\n\nAnd thus thought of supporting those.\nDo we want this JSON item method to behave differently here?\n\n\n>\n> Also note that in your 0002 patch, the datetime precision is similarly\n> unsigned, so that's consistent.\n>\n> By the way, in your 0002 patch, don't see the need for the separate\n> datetime_method grammar rule. You can fold that into accessor_op.\n>\n\nSure.\n\n\n>\n> Overall, I think it would be better if you combined all three of these\n> patches into one. Right now, you have arranged these as incremental\n> features, and as a result of that, the additions to the JsonPathItemType\n> enum and the grammar keywords etc. are ordered in the way you worked on\n> these features, I guess. It would be good to maintain a bit of sanity\n> to put all of this together and order all the enums and everything else\n> for example in the order they are in the sql_features.txt file (which is\n> alphabetical, I suppose). At this point I suspect we'll end up\n> committing this whole feature set together anyway, so we might as well\n> organize it that way.\n>\n\nOK.\nI will merge them all into one and will try to keep them in the order\nspecified in sql_features.txt.\nHowever, for documentation, it makes more sense to keep them in logical\norder than the alphabetical one. What are your views on this?\n\n\nThanks\n-- \nJeevan Chalke\n\n*Principal, ManagerProduct Development*\n\n\n\nedbpostgres.com\n\nOn Mon, Jan 15, 2024 at 7:41 PM Peter Eisentraut <[email protected]> wrote:Attached are two small fixup patches for your patch set.Thanks, Peter. \n\nIn the first one, I simplified the grammar for the .decimal() method. \nIt seemed a bit overkill to build a whole list structure when all we \nneed are 0, 1, or 2 arguments.Agree.I added unary '+' and '-' support as well and thus thought of having separate rules altogether rather than folding those in. \nPer SQL standard, the precision and scale arguments are unsigned \nintegers, so unary plus and minus signs are not supported. So my patch \nremoves that support, but I didn't adjust the regression tests for that.However, PostgreSQL numeric casting does support a negative scale. Here is an example:# select '12345'::numeric(4,-2); numeric --------- 12300(1 row)And thus thought of supporting those.Do we want this JSON item method to behave differently here? \n\nAlso note that in your 0002 patch, the datetime precision is similarly \nunsigned, so that's consistent.\n\nBy the way, in your 0002 patch, don't see the need for the separate \ndatetime_method grammar rule. You can fold that into accessor_op.Sure. 
\n\nOverall, I think it would be better if you combined all three of these \npatches into one. Right now, you have arranged these as incremental \nfeatures, and as a result of that, the additions to the JsonPathItemType \nenum and the grammar keywords etc. are ordered in the way you worked on \nthese features, I guess. It would be good to maintain a bit of sanity \nto put all of this together and order all the enums and everything else \nfor example in the order they are in the sql_features.txt file (which is \nalphabetical, I suppose). At this point I suspect we'll end up \ncommitting this whole feature set together anyway, so we might as well \norganize it that way.OK.I will merge them all into one and will try to keep them in the order specified in sql_features.txt.However, for documentation, it makes more sense to keep them in logical order than the alphabetical one. What are your views on this?Thanks-- Jeevan ChalkePrincipal, ManagerProduct Developmentedbpostgres.com",
"msg_date": "Wed, 17 Jan 2024 14:33:36 +0530",
"msg_from": "Jeevan Chalke <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: More new SQL/JSON item methods"
},
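If the jsonpath method follows the cast behaviour shown above, a negative scale would round to the left of the decimal point as well. A sketch of the proposed, not yet settled, behaviour:

    -- proposed: mirror '12345'::numeric(4,-2)
    select jsonb_path_query('12345', '$.decimal(4, -2)');
    -- expected: 12300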
{
"msg_contents": "On 2024-01-17 We 04:03, Jeevan Chalke wrote:\n>\n>\n> On Mon, Jan 15, 2024 at 7:41 PM Peter Eisentraut \n> <[email protected]> wrote:\n>\n>\n> Overall, I think it would be better if you combined all three of\n> these\n> patches into one. Right now, you have arranged these as incremental\n> features, and as a result of that, the additions to the\n> JsonPathItemType\n> enum and the grammar keywords etc. are ordered in the way you\n> worked on\n> these features, I guess. It would be good to maintain a bit of\n> sanity\n> to put all of this together and order all the enums and everything\n> else\n> for example in the order they are in the sql_features.txt file\n> (which is\n> alphabetical, I suppose). At this point I suspect we'll end up\n> committing this whole feature set together anyway, so we might as\n> well\n> organize it that way.\n>\n>\n> OK.\n> I will merge them all into one and will try to keep them in the order \n> specified in sql_features.txt.\n> However, for documentation, it makes more sense to keep them in \n> logical order than the alphabetical one. What are your views on this?\n>\n\nI agree that we should order the documentation logically. Users don't \ncare how we organize the code etc, but they do care about docs have \nsensible structure.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2024-01-17 We 04:03, Jeevan Chalke\n wrote:\n\n\n\n\n\n\n\n\nOn Mon, Jan 15, 2024 at\n 7:41 PM Peter Eisentraut <[email protected]>\n wrote:\n\n\n\n\n Overall, I think it would be better if you combined all\n three of these \n patches into one. Right now, you have arranged these as\n incremental \n features, and as a result of that, the additions to the\n JsonPathItemType \n enum and the grammar keywords etc. are ordered in the way\n you worked on \n these features, I guess. It would be good to maintain a bit\n of sanity \n to put all of this together and order all the enums and\n everything else \n for example in the order they are in the sql_features.txt\n file (which is \n alphabetical, I suppose). At this point I suspect we'll end\n up \n committing this whole feature set together anyway, so we\n might as well \n organize it that way.\n\n\n\n OK.\n I will merge them all into one and will try to keep them in\n the order specified in sql_features.txt.\n However, for documentation, it makes more sense to keep them\n in logical order than the alphabetical one. What are your\n views on this?\n\n\n\n\n\n\n\nI agree that we should order the documentation logically. Users\n don't care how we organize the code etc, but they do care about\n docs have sensible structure.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Wed, 17 Jan 2024 11:53:23 -0500",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: More new SQL/JSON item methods"
},
{
"msg_contents": "On 17.01.24 10:03, Jeevan Chalke wrote:\n> I added unary '+' and '-' support as well and thus thought of having \n> separate rules altogether rather than folding those in.\n> \n> Per SQL standard, the precision and scale arguments are unsigned\n> integers, so unary plus and minus signs are not supported. So my patch\n> removes that support, but I didn't adjust the regression tests for that.\n> \n> \n> However, PostgreSQL numeric casting does support a negative scale. Here \n> is an example:\n> \n> # select '12345'::numeric(4,-2);\n> numeric\n> ---------\n> 12300\n> (1 row)\n> \n> And thus thought of supporting those.\n> Do we want this JSON item method to behave differently here?\n\nOk, it would make sense to support this in SQL/JSON as well.\n\n> I will merge them all into one and will try to keep them in the order \n> specified in sql_features.txt.\n> However, for documentation, it makes more sense to keep them in logical \n> order than the alphabetical one. What are your views on this?\n\nThe documentation can be in a different order.\n\n\n\n",
"msg_date": "Wed, 17 Jan 2024 20:33:14 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: More new SQL/JSON item methods"
},
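A minimal psql sketch of the negative-scale behavior discussed above. The plain numeric cast is the example from the message; the .decimal(precision, scale) call is an assumption about how the committed jsonpath item method mirrors that cast, so the exact output may differ:

-- numeric cast with a negative scale (example from the message above)
select '12345'::numeric(4,-2);                        -- 12300
-- assumed analogous jsonpath call; .decimal(4,-2) is expected to round the same way
select jsonb_path_query('12345', '$.decimal(4,-2)');  -- expected: 12300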
{
"msg_contents": "On Thu, Jan 18, 2024 at 1:03 AM Peter Eisentraut <[email protected]>\nwrote:\n\n> On 17.01.24 10:03, Jeevan Chalke wrote:\n> > I added unary '+' and '-' support as well and thus thought of having\n> > separate rules altogether rather than folding those in.\n> >\n> > Per SQL standard, the precision and scale arguments are unsigned\n> > integers, so unary plus and minus signs are not supported. So my\n> patch\n> > removes that support, but I didn't adjust the regression tests for\n> that.\n> >\n> >\n> > However, PostgreSQL numeric casting does support a negative scale. Here\n> > is an example:\n> >\n> > # select '12345'::numeric(4,-2);\n> > numeric\n> > ---------\n> > 12300\n> > (1 row)\n> >\n> > And thus thought of supporting those.\n> > Do we want this JSON item method to behave differently here?\n>\n> Ok, it would make sense to support this in SQL/JSON as well.\n>\n\nOK. So with this, we don't need changes done in your 0001 patches.\n\n\n>\n> > I will merge them all into one and will try to keep them in the order\n> > specified in sql_features.txt.\n> > However, for documentation, it makes more sense to keep them in logical\n> > order than the alphabetical one. What are your views on this?\n>\n> The documentation can be in a different order.\n>\n\nThanks, Andrew and Peter for the confirmation.\n\nAttached merged single patch along these lines.\n\nPeter, I didn't understand why the changes you did in your 0002 patch were\nrequired here. I did run the pgindent, and it didn't complain to me. So,\njust curious to know more about the changes. I have not merged those\nchanges in this single patch. However, if needed it can be cleanly applied\non top of this single patch.\n\nThanks\n\n\n-- \nJeevan Chalke\n\n*Principal, ManagerProduct Development*\n\n\n\nedbpostgres.com",
"msg_date": "Thu, 18 Jan 2024 19:55:28 +0530",
"msg_from": "Jeevan Chalke <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: More new SQL/JSON item methods"
},
{
"msg_contents": "On 18.01.24 15:25, Jeevan Chalke wrote:\n> Peter, I didn't understand why the changes you did in your 0002 patch \n> were required here. I did run the pgindent, and it didn't complain to \n> me. So, just curious to know more about the changes. I have not merged \n> those changes in this single patch. However, if needed it can be cleanly \n> applied on top of this single patch.\n\nI just thought it was a bit wasteful with vertical space. It's not \nessential.\n\n\n",
"msg_date": "Thu, 18 Jan 2024 16:49:11 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: More new SQL/JSON item methods"
},
{
"msg_contents": "On 2024-01-18 Th 09:25, Jeevan Chalke wrote:\n>\n>\n> On Thu, Jan 18, 2024 at 1:03 AM Peter Eisentraut \n> <[email protected]> wrote:\n>\n> On 17.01.24 10:03, Jeevan Chalke wrote:\n> > I added unary '+' and '-' support as well and thus thought of\n> having\n> > separate rules altogether rather than folding those in.\n> >\n> > Per SQL standard, the precision and scale arguments are unsigned\n> > integers, so unary plus and minus signs are not supported. \n> So my patch\n> > removes that support, but I didn't adjust the regression\n> tests for that.\n> >\n> >\n> > However, PostgreSQL numeric casting does support a negative\n> scale. Here\n> > is an example:\n> >\n> > # select '12345'::numeric(4,-2);\n> > numeric\n> > ---------\n> > 12300\n> > (1 row)\n> >\n> > And thus thought of supporting those.\n> > Do we want this JSON item method to behave differently here?\n>\n> Ok, it would make sense to support this in SQL/JSON as well.\n>\n>\n> OK. So with this, we don't need changes done in your 0001 patches.\n>\n>\n> > I will merge them all into one and will try to keep them in the\n> order\n> > specified in sql_features.txt.\n> > However, for documentation, it makes more sense to keep them in\n> logical\n> > order than the alphabetical one. What are your views on this?\n>\n> The documentation can be in a different order.\n>\n>\n> Thanks, Andrew and Peter for the confirmation.\n>\n> Attached merged single patch along these lines.\n\n\nThanks, I have pushed this.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2024-01-18 Th 09:25, Jeevan Chalke\n wrote:\n\n\n\n\n\n\n\n\nOn Thu, Jan 18, 2024 at\n 1:03 AM Peter Eisentraut <[email protected]>\n wrote:\n\nOn 17.01.24 10:03, Jeevan\n Chalke wrote:\n > I added unary '+' and '-' support as well and thus\n thought of having \n > separate rules altogether rather than folding those in.\n > \n > Per SQL standard, the precision and scale arguments\n are unsigned\n > integers, so unary plus and minus signs are not\n supported. So my patch\n > removes that support, but I didn't adjust the\n regression tests for that.\n > \n > \n > However, PostgreSQL numeric casting does support a\n negative scale. Here \n > is an example:\n > \n > # select '12345'::numeric(4,-2);\n > numeric\n > ---------\n > 12300\n > (1 row)\n > \n > And thus thought of supporting those.\n > Do we want this JSON item method to behave differently\n here?\n\n Ok, it would make sense to support this in SQL/JSON as well.\n\n\n\nOK. So with this, we don't need changes done in your 0001\n patches.\n \n\n\n > I will merge them all into one and will try to keep\n them in the order \n > specified in sql_features.txt.\n > However, for documentation, it makes more sense to keep\n them in logical \n > order than the alphabetical one. What are your views on\n this?\n\n The documentation can be in a different order.\n\n\n\nThanks, Andrew and Peter for the confirmation.\n\n\nAttached merged single patch along these lines.\n\n\n\n\n\n\n\nThanks, I have pushed this.\n\n\ncheers\n\n\nandrew\n\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Thu, 25 Jan 2024 10:40:26 -0500",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: More new SQL/JSON item methods"
},
{
"msg_contents": "Andrew Dunstan <[email protected]> writes:\n> Thanks, I have pushed this.\n\nThe buildfarm is pretty widely unhappy, mostly failing on\n\nselect jsonb_path_query('1.23', '$.string()');\n\nOn a guess, I tried running that under valgrind, and behold it said\n\n==00:00:00:05.637 435530== Conditional jump or move depends on uninitialised value(s)\n==00:00:00:05.637 435530== at 0x8FD131: executeItemOptUnwrapTarget (jsonpath_exec.c:1547)\n==00:00:00:05.637 435530== by 0x8FED03: executeItem (jsonpath_exec.c:626)\n==00:00:00:05.637 435530== by 0x8FED03: executeNextItem (jsonpath_exec.c:1604)\n==00:00:00:05.637 435530== by 0x8FCA58: executeItemOptUnwrapTarget (jsonpath_exec.c:956)\n==00:00:00:05.637 435530== by 0x8FFDE4: executeItem (jsonpath_exec.c:626)\n==00:00:00:05.637 435530== by 0x8FFDE4: executeJsonPath.constprop.30 (jsonpath_exec.c:612)\n==00:00:00:05.637 435530== by 0x8FFF8C: jsonb_path_query_internal (jsonpath_exec.c:438)\n\nIt's fairly obviously right about that:\n\n JsonbValue jbv;\n ...\n jb = &jbv;\n Assert(tmp != NULL); /* We must have set tmp above */\n jb->val.string.val = (jb->type == jbvString) ? tmp : pstrdup(tmp);\n ^^^^^^^^^^^^^^^^^^^^^\n\nPresumably, this is a mistaken attempt to test the type\nof the thing previously pointed to by \"jb\".\n\nOn the whole, what I'd be inclined to do here is get rid\nof this test altogether and demand that every path through\nthe preceding \"switch\" deliver a value that doesn't need\npstrdup(). The only path that doesn't do that already is\n\n case jbvBool:\n tmp = (jb->val.boolean) ? \"true\" : \"false\";\n break;\n\nand TBH I'm not sure that we really need a pstrdup there\neither. The constants are immutable enough. Is something\nlikely to try to pfree the pointer later? I tried\n\n@@ -1544,7 +1544,7 @@ executeItemOptUnwrapTarget(JsonPathExecContext *cxt, JsonPathItem *jsp,\n \n jb = &jbv;\n Assert(tmp != NULL); /* We must have set tmp above */\n- jb->val.string.val = (jb->type == jbvString) ? tmp : pstrdup(tmp);\n+ jb->val.string.val = tmp;\n jb->val.string.len = strlen(jb->val.string.val);\n jb->type = jbvString;\n \nand that quieted valgrind for this particular query and still\npasses regression.\n\n(The reported crashes seem to be happening later during a\nrecursive invocation, seemingly because JsonbType(jb) is\nreturning garbage. So there may be another bug after this one.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 25 Jan 2024 14:31:09 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: More new SQL/JSON item methods"
},
{
"msg_contents": "\nOn 2024-01-25 Th 14:31, Tom Lane wrote:\n> Andrew Dunstan <[email protected]> writes:\n>> Thanks, I have pushed this.\n> The buildfarm is pretty widely unhappy, mostly failing on\n>\n> select jsonb_path_query('1.23', '$.string()');\n>\n> On a guess, I tried running that under valgrind, and behold it said\n>\n> ==00:00:00:05.637 435530== Conditional jump or move depends on uninitialised value(s)\n> ==00:00:00:05.637 435530== at 0x8FD131: executeItemOptUnwrapTarget (jsonpath_exec.c:1547)\n> ==00:00:00:05.637 435530== by 0x8FED03: executeItem (jsonpath_exec.c:626)\n> ==00:00:00:05.637 435530== by 0x8FED03: executeNextItem (jsonpath_exec.c:1604)\n> ==00:00:00:05.637 435530== by 0x8FCA58: executeItemOptUnwrapTarget (jsonpath_exec.c:956)\n> ==00:00:00:05.637 435530== by 0x8FFDE4: executeItem (jsonpath_exec.c:626)\n> ==00:00:00:05.637 435530== by 0x8FFDE4: executeJsonPath.constprop.30 (jsonpath_exec.c:612)\n> ==00:00:00:05.637 435530== by 0x8FFF8C: jsonb_path_query_internal (jsonpath_exec.c:438)\n>\n> It's fairly obviously right about that:\n>\n> JsonbValue jbv;\n> ...\n> jb = &jbv;\n> Assert(tmp != NULL); /* We must have set tmp above */\n> jb->val.string.val = (jb->type == jbvString) ? tmp : pstrdup(tmp);\n> ^^^^^^^^^^^^^^^^^^^^^\n>\n> Presumably, this is a mistaken attempt to test the type\n> of the thing previously pointed to by \"jb\".\n>\n> On the whole, what I'd be inclined to do here is get rid\n> of this test altogether and demand that every path through\n> the preceding \"switch\" deliver a value that doesn't need\n> pstrdup(). The only path that doesn't do that already is\n>\n> case jbvBool:\n> tmp = (jb->val.boolean) ? \"true\" : \"false\";\n> break;\n>\n> and TBH I'm not sure that we really need a pstrdup there\n> either. The constants are immutable enough. Is something\n> likely to try to pfree the pointer later? I tried\n>\n> @@ -1544,7 +1544,7 @@ executeItemOptUnwrapTarget(JsonPathExecContext *cxt, JsonPathItem *jsp,\n> \n> jb = &jbv;\n> Assert(tmp != NULL); /* We must have set tmp above */\n> - jb->val.string.val = (jb->type == jbvString) ? tmp : pstrdup(tmp);\n> + jb->val.string.val = tmp;\n> jb->val.string.len = strlen(jb->val.string.val);\n> jb->type = jbvString;\n> \n> and that quieted valgrind for this particular query and still\n> passes regression.\n>\n> (The reported crashes seem to be happening later during a\n> recursive invocation, seemingly because JsonbType(jb) is\n> returning garbage. So there may be another bug after this one.)\n>\n> \t\t\t\n\n\nArgh! Will look, thanks.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Thu, 25 Jan 2024 14:41:07 -0500",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: More new SQL/JSON item methods"
},
{
"msg_contents": "\nOn 2024-01-25 Th 14:31, Tom Lane wrote:\n> Andrew Dunstan <[email protected]> writes:\n>> Thanks, I have pushed this.\n> The buildfarm is pretty widely unhappy, mostly failing on\n>\n> select jsonb_path_query('1.23', '$.string()');\n>\n> On a guess, I tried running that under valgrind, and behold it said\n>\n> ==00:00:00:05.637 435530== Conditional jump or move depends on uninitialised value(s)\n> ==00:00:00:05.637 435530== at 0x8FD131: executeItemOptUnwrapTarget (jsonpath_exec.c:1547)\n> ==00:00:00:05.637 435530== by 0x8FED03: executeItem (jsonpath_exec.c:626)\n> ==00:00:00:05.637 435530== by 0x8FED03: executeNextItem (jsonpath_exec.c:1604)\n> ==00:00:00:05.637 435530== by 0x8FCA58: executeItemOptUnwrapTarget (jsonpath_exec.c:956)\n> ==00:00:00:05.637 435530== by 0x8FFDE4: executeItem (jsonpath_exec.c:626)\n> ==00:00:00:05.637 435530== by 0x8FFDE4: executeJsonPath.constprop.30 (jsonpath_exec.c:612)\n> ==00:00:00:05.637 435530== by 0x8FFF8C: jsonb_path_query_internal (jsonpath_exec.c:438)\n>\n> It's fairly obviously right about that:\n>\n> JsonbValue jbv;\n> ...\n> jb = &jbv;\n> Assert(tmp != NULL); /* We must have set tmp above */\n> jb->val.string.val = (jb->type == jbvString) ? tmp : pstrdup(tmp);\n> ^^^^^^^^^^^^^^^^^^^^^\n>\n> Presumably, this is a mistaken attempt to test the type\n> of the thing previously pointed to by \"jb\".\n>\n> On the whole, what I'd be inclined to do here is get rid\n> of this test altogether and demand that every path through\n> the preceding \"switch\" deliver a value that doesn't need\n> pstrdup(). The only path that doesn't do that already is\n>\n> case jbvBool:\n> tmp = (jb->val.boolean) ? \"true\" : \"false\";\n> break;\n>\n> and TBH I'm not sure that we really need a pstrdup there\n> either. The constants are immutable enough. Is something\n> likely to try to pfree the pointer later? I tried\n>\n> @@ -1544,7 +1544,7 @@ executeItemOptUnwrapTarget(JsonPathExecContext *cxt, JsonPathItem *jsp,\n> \n> jb = &jbv;\n> Assert(tmp != NULL); /* We must have set tmp above */\n> - jb->val.string.val = (jb->type == jbvString) ? tmp : pstrdup(tmp);\n> + jb->val.string.val = tmp;\n> jb->val.string.len = strlen(jb->val.string.val);\n> jb->type = jbvString;\n> \n> and that quieted valgrind for this particular query and still\n> passes regression.\n\n\n\nYour fix looks sane. I also don't see why we need the pstrdup.\n\n\n>\n> (The reported crashes seem to be happening later during a\n> recursive invocation, seemingly because JsonbType(jb) is\n> returning garbage. So there may be another bug after this one.)\n>\n> \t\t\t\n\n\nI don't think so. AIUI The first call deals with the '$' and the second \none deals with the '.string()', which is why we see the error on the \nsecond call.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Thu, 25 Jan 2024 15:25:28 -0500",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: More new SQL/JSON item methods"
},
{
"msg_contents": "Andrew Dunstan <[email protected]> writes:\n> On 2024-01-25 Th 14:31, Tom Lane wrote:\n>> (The reported crashes seem to be happening later during a\n>> recursive invocation, seemingly because JsonbType(jb) is\n>> returning garbage. So there may be another bug after this one.)\n\n> I don't think so. AIUI The first call deals with the '$' and the second \n> one deals with the '.string()', which is why we see the error on the \n> second call.\n\nThere's something else going on, because I'm still getting the\nassertion failure on my Mac with this fix in place. Annoyingly,\nit goes away if I compile with -O0, so it's kind of hard to\nidentify what's going wrong.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 25 Jan 2024 15:33:51 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: More new SQL/JSON item methods"
},
{
"msg_contents": "I wrote:\n> There's something else going on, because I'm still getting the\n> assertion failure on my Mac with this fix in place. Annoyingly,\n> it goes away if I compile with -O0, so it's kind of hard to\n> identify what's going wrong.\n\nNo, belay that: I must've got confused about which version I was\ntesting. It's very unclear to me why the undefined reference\ncauses the preceding Assert to misbehave, but that is clearly\nwhat's happening. Compiler bug maybe? My Mac has clang 15.0.0,\nand the unhappy buildfarm members are also late-model clang.\n\nAnyway, I did note that the preceding line\n\n\tres = jperOk;\n\nis dead code and might as well get removed while you're at it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 25 Jan 2024 15:58:20 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: More new SQL/JSON item methods"
},
{
"msg_contents": "\nOn 2024-01-25 Th 15:33, Tom Lane wrote:\n> Andrew Dunstan <[email protected]> writes:\n>> On 2024-01-25 Th 14:31, Tom Lane wrote:\n>>> (The reported crashes seem to be happening later during a\n>>> recursive invocation, seemingly because JsonbType(jb) is\n>>> returning garbage. So there may be another bug after this one.)\n>> I don't think so. AIUI The first call deals with the '$' and the second\n>> one deals with the '.string()', which is why we see the error on the\n>> second call.\n> There's something else going on, because I'm still getting the\n> assertion failure on my Mac with this fix in place. Annoyingly,\n> it goes away if I compile with -O0, so it's kind of hard to\n> identify what's going wrong.\n>\n> \t\t\t\n\n\nCuriouser and curiouser. On my Mac the error is manifest but the fix you \nsuggested cures it. Built with -O2 -g, clang 15.0.0, Apple Silicon.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Thu, 25 Jan 2024 16:01:39 -0500",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: More new SQL/JSON item methods"
},
{
"msg_contents": "\nOn 2024-01-25 Th 15:58, Tom Lane wrote:\n> I wrote:\n>> There's something else going on, because I'm still getting the\n>> assertion failure on my Mac with this fix in place. Annoyingly,\n>> it goes away if I compile with -O0, so it's kind of hard to\n>> identify what's going wrong.\n> No, belay that: I must've got confused about which version I was\n> testing. It's very unclear to me why the undefined reference\n> causes the preceding Assert to misbehave, but that is clearly\n> what's happening. Compiler bug maybe? My Mac has clang 15.0.0,\n> and the unhappy buildfarm members are also late-model clang.\n>\n> Anyway, I did note that the preceding line\n>\n> \tres = jperOk;\n>\n> is dead code and might as well get removed while you're at it.\n>\n> \t\t\t\n\n\nOK, pushed those. Thanks.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Thu, 25 Jan 2024 16:27:41 -0500",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: More new SQL/JSON item methods"
},
{
"msg_contents": "On Fri, Jan 26, 2024 at 2:57 AM Andrew Dunstan <[email protected]> wrote:\n\n>\n> On 2024-01-25 Th 15:58, Tom Lane wrote:\n> > I wrote:\n> >> There's something else going on, because I'm still getting the\n> >> assertion failure on my Mac with this fix in place. Annoyingly,\n> >> it goes away if I compile with -O0, so it's kind of hard to\n> >> identify what's going wrong.\n> > No, belay that: I must've got confused about which version I was\n> > testing. It's very unclear to me why the undefined reference\n> > causes the preceding Assert to misbehave, but that is clearly\n> > what's happening. Compiler bug maybe? My Mac has clang 15.0.0,\n> > and the unhappy buildfarm members are also late-model clang.\n> >\n> > Anyway, I did note that the preceding line\n> >\n> > res = jperOk;\n> >\n> > is dead code and might as well get removed while you're at it.\n> >\n> >\n>\n>\n> OK, pushed those. Thanks.\n>\n\nThank you all.\n\n\n>\n>\n> cheers\n>\n>\n> andrew\n>\n> --\n> Andrew Dunstan\n> EDB: https://www.enterprisedb.com\n>\n>\n\n-- \nJeevan Chalke\n\n*Principal, ManagerProduct Development*\n\n\n\nedbpostgres.com\n\nOn Fri, Jan 26, 2024 at 2:57 AM Andrew Dunstan <[email protected]> wrote:\nOn 2024-01-25 Th 15:58, Tom Lane wrote:\n> I wrote:\n>> There's something else going on, because I'm still getting the\n>> assertion failure on my Mac with this fix in place. Annoyingly,\n>> it goes away if I compile with -O0, so it's kind of hard to\n>> identify what's going wrong.\n> No, belay that: I must've got confused about which version I was\n> testing. It's very unclear to me why the undefined reference\n> causes the preceding Assert to misbehave, but that is clearly\n> what's happening. Compiler bug maybe? My Mac has clang 15.0.0,\n> and the unhappy buildfarm members are also late-model clang.\n>\n> Anyway, I did note that the preceding line\n>\n> res = jperOk;\n>\n> is dead code and might as well get removed while you're at it.\n>\n> \n\n\nOK, pushed those. Thanks.Thank you all. \n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n-- Jeevan ChalkePrincipal, ManagerProduct Developmentedbpostgres.com",
"msg_date": "Fri, 26 Jan 2024 18:07:15 +0530",
"msg_from": "Jeevan Chalke <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: More new SQL/JSON item methods"
},
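A few illustrative queries for the item methods discussed in this thread, as they would be run after the commit and the follow-up fix above. The .string() call is the exact query that exposed the buildfarm failure; the other outputs are assumptions about the committed behavior and are indicative only:

select jsonb_path_query('1.23', '$.string()');   -- expected: "1.23"
select jsonb_path_query('"123"', '$.bigint()');  -- expected: 123
select jsonb_path_query('1.1', '$.boolean()');   -- expected to fail: 1.1 cannot be converted to a boolean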
{
"msg_contents": "I have two possible issues in a recent commit.\n\nCommit 66ea94e8e6 has introduced the following messages:\n(Aplogizies in advance if the commit is not related to this thread.)\n\njsonpath_exec.c:1287\n> if (numeric_is_nan(num) || numeric_is_inf(num))\n> RETURN_ERROR(ereport(ERROR,\n> (errcode(ERRCODE_NON_NUMERIC_SQL_JSON_ITEM),\n> errmsg(\"numeric argument of jsonpath item method .%s() is out of range for type decimal or number\",\n> jspOperationName(jsp->type)))));\n\n:1387\n> noerr = DirectInputFunctionCallSafe(numeric_in, numstr,\n...\n> if (!noerr || escontext.error_occurred)\n> RETURN_ERROR(ereport(ERROR,\n> (errcode(ERRCODE_NON_NUMERIC_SQL_JSON_ITEM),\n> errmsg(\"string argument of jsonpath item method .%s() is not a valid representation of a decimal or number\",\n\nThey seem to be suggesting that PostgreSQL has the types \"decimal\" and\n\"number\". I know of the former, but I don't think PostgreSQL has the\n latter type. Perhaps the \"number\" was intended to refer to \"numeric\"?\n(And I think it is largely helpful if the given string were shown in\nthe error message, but it would be another issue.)\n\n\nThe same commit has introduced the following set of messages:\n\n> %s format is not recognized: \"%s\"\n> date format is not recognized: \"%s\"\n> time format is not recognized: \"%s\"\n> time_tz format is not recognized: \"%s\"\n> timestamp format is not recognized: \"%s\"\n> timestamp_tz format is not recognized: \"%s\"\n\nI believe that the first line was intended to cover all the others:p\n\nThey are attached to this message separately.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Mon, 29 Jan 2024 12:12:00 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: More new SQL/JSON item methods"
},
{
"msg_contents": "Kyotaro Horiguchi <[email protected]> writes:\n> I have two possible issues in a recent commit.\n> Commit 66ea94e8e6 has introduced the following messages:\n\n>> errmsg(\"numeric argument of jsonpath item method .%s() is out of range for type decimal or number\",\n\n> They seem to be suggesting that PostgreSQL has the types \"decimal\" and\n> \"number\". I know of the former, but I don't think PostgreSQL has the\n> latter type. Perhaps the \"number\" was intended to refer to \"numeric\"?\n\nProbably. But I would write just \"type numeric\". We do not generally\nacknowledge \"decimal\" as a separate type, because for us it's only an\nalias for numeric (there is not a pg_type entry for it).\n\nAlso, that leads to the thought that \"numeric argument ... is out of\nrange for type numeric\" seems either redundant or contradictory\ndepending on how you look at it. So I suggest wording like\n\nargument \"...input string here...\" of jsonpath item method .%s() is out of range for type numeric\n\n> (And I think it is largely helpful if the given string were shown in\n> the error message, but it would be another issue.)\n\nAgreed, so I suggest the above.\n\n> The same commit has introduced the following set of messages:\n\n>> %s format is not recognized: \"%s\"\n>> date format is not recognized: \"%s\"\n>> time format is not recognized: \"%s\"\n>> time_tz format is not recognized: \"%s\"\n>> timestamp format is not recognized: \"%s\"\n>> timestamp_tz format is not recognized: \"%s\"\n\n> I believe that the first line was intended to cover all the others:p\n\n+1\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 28 Jan 2024 22:47:02 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: More new SQL/JSON item methods"
},
{
"msg_contents": "At Sun, 28 Jan 2024 22:47:02 -0500, Tom Lane <[email protected]> \nwrote in\n> Kyotaro Horiguchi <[email protected]> writes:\n> > They seem to be suggesting that PostgreSQL has the types \n> \"decimal\" and\n> > \"number\". I know of the former, but I don't think PostgreSQL has \n> the\n> > latter type. Perhaps the \"number\" was intended to refer to \n> \"numeric\"?\n>\n> Probably. But I would write just \"type numeric\". We do not \n> generally\n> acknowledge \"decimal\" as a separate type, because for us it's only \n> an\n> alias for numeric (there is not a pg_type entry for it).\n>\n> Also, that leads to the thought that \"numeric argument ... is out of\n> range for type numeric\" seems either redundant or contradictory\n> depending on how you look at it. So I suggest wording like\n>\n> argument \"...input string here...\" of jsonpath item method .%s() is \n> out of range for type numeric\n>\n> > (And I think it is largely helpful if the given string were shown \n> in\n> > the error message, but it would be another issue.)\n>\n> Agreed, so I suggest the above.\n\nHaving said that, I'm a bit concerned about the case where an overly\nlong string is given there. However, considering that we already have\n\"invalid input syntax for type xxx: %x\" messages (including for the\nnumeric), this concern might be unnecessary.\n\nAnother concern is that the input value is already a numeric, not a\nstring. This situation occurs when the input is NaN of +-Inf. Although\nnumeric_out could be used, it might cause another error. Therefore,\nsimply writing out as \"argument NaN and Infinity..\" would be better.\n\nOnce we resolve these issues, another question arises regarding on of\nthe messages. In the case of NaN of Infinity, the message will be as\nthe follows:\n\n\"argument NaN or Infinity of jsonpath item method .%s() is out of \nrange for type numeric\"\n\nThis message appears quite strange and puzzling. I suspect that the\nintended message was somewhat different.\n\n\n> > The same commit has introduced the following set of messages:\n>\n> >> %s format is not recognized: \"%s\"\n> >> date format is not recognized: \"%s\"\n> >> time format is not recognized: \"%s\"\n> >> time_tz format is not recognized: \"%s\"\n> >> timestamp format is not recognized: \"%s\"\n> >> timestamp_tz format is not recognized: \"%s\"\n>\n> > I believe that the first line was intended to cover all the \n> others:p\n>\n> +1\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Mon, 29 Jan 2024 14:12:40 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: More new SQL/JSON item methods"
},
{
"msg_contents": "Kyotaro Horiguchi <[email protected]> writes:\n> Having said that, I'm a bit concerned about the case where an overly\n> long string is given there. However, considering that we already have\n> \"invalid input syntax for type xxx: %x\" messages (including for the\n> numeric), this concern might be unnecessary.\n\nYeah, we have not worried about that in the past.\n\n> Another concern is that the input value is already a numeric, not a\n> string. This situation occurs when the input is NaN of +-Inf. Although\n> numeric_out could be used, it might cause another error. Therefore,\n> simply writing out as \"argument NaN and Infinity..\" would be better.\n\nOh! I'd assumed that we were discussing a string that we'd failed to\nconvert to numeric. If the input is already numeric, then either\nthe error is unreachable or what we're really doing is rejecting\nspecial values such as NaN on policy grounds. I would ask first\nif that policy is sane at all. (I'd lean to \"not\" --- if we allow\nit in the input JSON, why not in the output?) If it is sane, the\nerror message needs to be far more specific.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 29 Jan 2024 00:33:22 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: More new SQL/JSON item methods"
},
{
"msg_contents": "On Mon, Jan 29, 2024 at 11:03 AM Tom Lane <[email protected]> wrote:\n\n> Kyotaro Horiguchi <[email protected]> writes:\n> > Having said that, I'm a bit concerned about the case where an overly\n> > long string is given there. However, considering that we already have\n> > \"invalid input syntax for type xxx: %x\" messages (including for the\n> > numeric), this concern might be unnecessary.\n>\n> Yeah, we have not worried about that in the past.\n>\n> > Another concern is that the input value is already a numeric, not a\n> > string. This situation occurs when the input is NaN of +-Inf. Although\n> > numeric_out could be used, it might cause another error. Therefore,\n> > simply writing out as \"argument NaN and Infinity..\" would be better.\n>\n> Oh! I'd assumed that we were discussing a string that we'd failed to\n> convert to numeric. If the input is already numeric, then either\n> the error is unreachable or what we're really doing is rejecting\n> special values such as NaN on policy grounds. I would ask first\n> if that policy is sane at all. (I'd lean to \"not\" --- if we allow\n> it in the input JSON, why not in the output?) If it is sane, the\n> error message needs to be far more specific.\n>\n> regards, tom lane\n>\n\n*Consistent error message related to type:*\n\nAgree that the number is not a PostgreSQL type and needs a change. As Tom\nsuggested, we can say \"type numeric\" here. However, I have seen two\nvariants of error messages here: (1) When the input is numeric and (2) when\nthe input is string. For first, we have error messages like:\nnumeric argument of jsonpath item method .%s() is out of range for type\ndouble precision\n\nand for the second, it is like:\nstring argument of jsonpath item method .%s() is not a valid representation\nof a double precision number\n\nThe second form says \"double precision number\". 
So, in the decimal/number\ncase, should we use \"numeric number\" and then similarly \"big integer\nnumber\"?\n\nWhat if we use phrases like \"for type double precision\" at both places,\nlike:\nnumeric argument of jsonpath item method .%s() is out of range for type\ndouble precision\nstring argument of jsonpath item method .%s() is not a valid representation\nfor type double precision\n\nWith this, the rest will be like:\nfor type numeric\nfor type bigint\nfor type integer\n\nSuggestions?\n\n---\n\n*Showing input string in the error message:*\n\nargument \"...input string here...\" of jsonpath item method .%s() is out of\nrange for type numeric\n\nIf we add the input string in the error, then for some errors, it will be\ntoo big, for example:\n\n-ERROR: numeric argument of jsonpath item method .double() is out of range\nfor type double precision\n+ERROR: argument\n\"10000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000\"\nof jsonpath item method .double() is out of range for type double precision\n\nAlso, for non-string input, we need to convert numeric to string just for\nthe error message, which seems overkill.\n\nOn another note, irrespective of these changes, is it good to show the\ngiven input in the error messages? Error messages are logged and may leak\nsome details.\n\nI think the existing way seems ok.\n\n---\n\n*NaN and Infinity restrictions:*\n\nI am not sure why NaN and Infinity are not allowed in conversion to double\nprecision (.double() method). I have used the same restriction for\n.decimal() and .number(). However, as you said, we should have error\nmessages more specific. I tried that in the attached patch; please have\nyour views. I have the following wordings for that error message:\n\"NaN or Infinity is not allowed for jsonpath item method .%s()\"\n\nSuggestions...\n\n\nThanks\n-- \nJeevan Chalke\n\n*Principal, ManagerProduct Development*\n\n\n\nedbpostgres.com",
"msg_date": "Tue, 30 Jan 2024 13:46:17 +0530",
"msg_from": "Jeevan Chalke <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: More new SQL/JSON item methods"
},
{
"msg_contents": "Thank you for the fix!\r\n\r\nAt Tue, 30 Jan 2024 13:46:17 +0530, Jeevan Chalke <[email protected]> wrote in \r\n> On Mon, Jan 29, 2024 at 11:03 AM Tom Lane <[email protected]> wrote:\r\n> \r\n> > Kyotaro Horiguchi <[email protected]> writes:\r\n> > > Having said that, I'm a bit concerned about the case where an overly\r\n> > > long string is given there. However, considering that we already have\r\n> > > \"invalid input syntax for type xxx: %x\" messages (including for the\r\n> > > numeric), this concern might be unnecessary.\r\n> >\r\n> > Yeah, we have not worried about that in the past.\r\n> >\r\n> > > Another concern is that the input value is already a numeric, not a\r\n> > > string. This situation occurs when the input is NaN of +-Inf. Although\r\n> > > numeric_out could be used, it might cause another error. Therefore,\r\n> > > simply writing out as \"argument NaN and Infinity..\" would be better.\r\n> >\r\n> > Oh! I'd assumed that we were discussing a string that we'd failed to\r\n> > convert to numeric. If the input is already numeric, then either\r\n> > the error is unreachable or what we're really doing is rejecting\r\n> > special values such as NaN on policy grounds. I would ask first\r\n> > if that policy is sane at all. (I'd lean to \"not\" --- if we allow\r\n> > it in the input JSON, why not in the output?) If it is sane, the\r\n> > error message needs to be far more specific.\r\n> >\r\n> > regards, tom lane\r\n> >\r\n> \r\n> *Consistent error message related to type:*\r\n...\r\n> What if we use phrases like \"for type double precision\" at both places,\r\n> like:\r\n> numeric argument of jsonpath item method .%s() is out of range for type\r\n> double precision\r\n> string argument of jsonpath item method .%s() is not a valid representation\r\n> for type double precision\r\n> \r\n> With this, the rest will be like:\r\n> for type numeric\r\n> for type bigint\r\n> for type integer\r\n> \r\n> Suggestions?\r\n\r\nFWIW, I prefer consistently using \"for type hoge\".\r\n\r\n> ---\r\n> \r\n> *Showing input string in the error message:*\r\n> \r\n> argument \"...input string here...\" of jsonpath item method .%s() is out of\r\n> range for type numeric\r\n> \r\n> If we add the input string in the error, then for some errors, it will be\r\n> too big, for example:\r\n> \r\n> -ERROR: numeric argument of jsonpath item method .double() is out of range\r\n> for type double precision\r\n> +ERROR: argument\r\n> \"100000<many zeros follow>\"\r\n> of jsonpath item method .double() is out of range for type double precision\r\n\r\nAs Tom suggested, given that similar situations have already been\r\ndisregarded elsewhere, worrying about excessively long input strings\r\nin this specific instance won't notably improve safety in total.\r\n\r\n> Also, for non-string input, we need to convert numeric to string just for\r\n> the error message, which seems overkill.\r\n\r\nAs I suggested and you seem to agree, using literally \"Nan or\r\nInfinity\" would be sufficient.\r\n\r\n> On another note, irrespective of these changes, is it good to show the\r\n> given input in the error messages? Error messages are logged and may leak\r\n> some details.\r\n> \r\n> I think the existing way seems ok.\r\n\r\nIn my opinion, it is quite common to include the error-causing value\r\nin error messages. Also, we have already many functions that impliy\r\nthe possibility for revealing input values when converting text\r\nrepresentation into internal format, such as with int4in. 
However, I\r\ndon't stick to that way.\r\n\r\n> ---\r\n> \r\n> *NaN and Infinity restrictions:*\r\n> \r\n> I am not sure why NaN and Infinity are not allowed in conversion to double\r\n> precision (.double() method). I have used the same restriction for\r\n> .decimal() and .number(). However, as you said, we should have error\r\n> messages more specific. I tried that in the attached patch; please have\r\n> your views. I have the following wordings for that error message:\r\n> \"NaN or Infinity is not allowed for jsonpath item method .%s()\"\r\n> \r\n> Suggestions...\r\n\r\nThey seem good to *me*.\r\n\r\nBy the way, while playing with this feature, I noticed the following\r\nerror message:\r\n\r\n> select jsonb_path_query('1.1' , '$.boolean()');\r\n> ERROR: numeric argument of jsonpath item method .boolean() is out of range for type boolean\r\n\r\nThe error message seems a bit off to me. For example, \"argument '1.1'\r\nis invalid for type [bB]oolean\" seems more appropriate for this\r\nspecific issue. (I'm not ceratin about our policy on the spelling of\r\nBoolean..)\r\n\r\nregards.\r\n\r\n-- \r\nKyotaro Horiguchi\r\nNTT Open Source Software Center\r\n",
"msg_date": "Thu, 01 Feb 2024 10:49:57 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: More new SQL/JSON item methods"
},
{
"msg_contents": "At Thu, 01 Feb 2024 10:49:57 +0900 (JST), Kyotaro Horiguchi <[email protected]> wrote in \n> By the way, while playing with this feature, I noticed the following\n> error message:\n> \n> > select jsonb_path_query('1.1' , '$.boolean()');\n> > ERROR: numeric argument of jsonpath item method .boolean() is out of range for type boolean\n> \n> The error message seems a bit off to me. For example, \"argument '1.1'\n> is invalid for type [bB]oolean\" seems more appropriate for this\n> specific issue. (I'm not ceratin about our policy on the spelling of\n> Boolean..)\n\nOr, following our general convention, it would be spelled as:\n\n'invalid argument for type Boolean: \"1.1\"'\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 01 Feb 2024 10:53:57 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: More new SQL/JSON item methods"
},
{
"msg_contents": "On Thu, Feb 1, 2024 at 7:20 AM Kyotaro Horiguchi <[email protected]>\nwrote:\n\n> Thank you for the fix!\n>\n> At Tue, 30 Jan 2024 13:46:17 +0530, Jeevan Chalke <\n> [email protected]> wrote in\n> > On Mon, Jan 29, 2024 at 11:03 AM Tom Lane <[email protected]> wrote:\n> >\n> > > Kyotaro Horiguchi <[email protected]> writes:\n> > > > Having said that, I'm a bit concerned about the case where an overly\n> > > > long string is given there. However, considering that we already have\n> > > > \"invalid input syntax for type xxx: %x\" messages (including for the\n> > > > numeric), this concern might be unnecessary.\n> > >\n> > > Yeah, we have not worried about that in the past.\n> > >\n> > > > Another concern is that the input value is already a numeric, not a\n> > > > string. This situation occurs when the input is NaN of +-Inf.\n> Although\n> > > > numeric_out could be used, it might cause another error. Therefore,\n> > > > simply writing out as \"argument NaN and Infinity..\" would be better.\n> > >\n> > > Oh! I'd assumed that we were discussing a string that we'd failed to\n> > > convert to numeric. If the input is already numeric, then either\n> > > the error is unreachable or what we're really doing is rejecting\n> > > special values such as NaN on policy grounds. I would ask first\n> > > if that policy is sane at all. (I'd lean to \"not\" --- if we allow\n> > > it in the input JSON, why not in the output?) If it is sane, the\n> > > error message needs to be far more specific.\n> > >\n> > > regards, tom lane\n> > >\n> >\n> > *Consistent error message related to type:*\n> ...\n> > What if we use phrases like \"for type double precision\" at both places,\n> > like:\n> > numeric argument of jsonpath item method .%s() is out of range for type\n> > double precision\n> > string argument of jsonpath item method .%s() is not a valid\n> representation\n> > for type double precision\n> >\n> > With this, the rest will be like:\n> > for type numeric\n> > for type bigint\n> > for type integer\n> >\n> > Suggestions?\n>\n> FWIW, I prefer consistently using \"for type hoge\".\n>\n\nOK.\n\n\n>\n> > ---\n> >\n> > *Showing input string in the error message:*\n> >\n> > argument \"...input string here...\" of jsonpath item method .%s() is out\n> of\n> > range for type numeric\n> >\n> > If we add the input string in the error, then for some errors, it will be\n> > too big, for example:\n> >\n> > -ERROR: numeric argument of jsonpath item method .double() is out of\n> range\n> > for type double precision\n> > +ERROR: argument\n> > \"100000<many zeros follow>\"\n> > of jsonpath item method .double() is out of range for type double\n> precision\n>\n> As Tom suggested, given that similar situations have already been\n> disregarded elsewhere, worrying about excessively long input strings\n> in this specific instance won't notably improve safety in total.\n>\n> > Also, for non-string input, we need to convert numeric to string just for\n> > the error message, which seems overkill.\n>\n> As I suggested and you seem to agree, using literally \"Nan or\n> Infinity\" would be sufficient.\n>\n\nI am more concerned about .bigint() and .integer(). We can have errors when\nthe numeric input is out of range, but not NaN or Infinity. 
At those\nplaces, we need to convert numeric to string to put that value into the\nerror.\nDo you mean we should still put \"Nan or Infinity\" there?\n\nThis is the case:\n select jsonb_path_query('12345678901', '$.integer()');\n ERROR: numeric argument of jsonpath item method .integer() is out of\nrange for type integer\n\n\n>\n> > On another note, irrespective of these changes, is it good to show the\n> > given input in the error messages? Error messages are logged and may leak\n> > some details.\n> >\n> > I think the existing way seems ok.\n>\n> In my opinion, it is quite common to include the error-causing value\n> in error messages. Also, we have already many functions that impliy\n> the possibility for revealing input values when converting text\n> representation into internal format, such as with int4in. However, I\n> don't stick to that way.\n>\n> > ---\n> >\n> > *NaN and Infinity restrictions:*\n> >\n> > I am not sure why NaN and Infinity are not allowed in conversion to\n> double\n> > precision (.double() method). I have used the same restriction for\n> > .decimal() and .number(). However, as you said, we should have error\n> > messages more specific. I tried that in the attached patch; please have\n> > your views. I have the following wordings for that error message:\n> > \"NaN or Infinity is not allowed for jsonpath item method .%s()\"\n> >\n> > Suggestions...\n>\n> They seem good to *me*.\n>\n\nThanks\n\n\n>\n> By the way, while playing with this feature, I noticed the following\n> error message:\n>\n> > select jsonb_path_query('1.1' , '$.boolean()');\n> > ERROR: numeric argument of jsonpath item method .boolean() is out of\n> range for type boolean\n>\n> The error message seems a bit off to me. For example, \"argument '1.1'\n> is invalid for type [bB]oolean\" seems more appropriate for this\n> specific issue. (I'm not ceratin about our policy on the spelling of\n> Boolean..)\n>\n> regards.\n>\n> --\n> Kyotaro Horiguchi\n> NTT Open Source Software Center\n>\n\n\n-- \nJeevan Chalke\n\n*Principal, ManagerProduct Development*\n\n\n\nedbpostgres.com\n\nOn Thu, Feb 1, 2024 at 7:20 AM Kyotaro Horiguchi <[email protected]> wrote:Thank you for the fix!\n\nAt Tue, 30 Jan 2024 13:46:17 +0530, Jeevan Chalke <[email protected]> wrote in \n> On Mon, Jan 29, 2024 at 11:03 AM Tom Lane <[email protected]> wrote:\n> \n> > Kyotaro Horiguchi <[email protected]> writes:\n> > > Having said that, I'm a bit concerned about the case where an overly\n> > > long string is given there. However, considering that we already have\n> > > \"invalid input syntax for type xxx: %x\" messages (including for the\n> > > numeric), this concern might be unnecessary.\n> >\n> > Yeah, we have not worried about that in the past.\n> >\n> > > Another concern is that the input value is already a numeric, not a\n> > > string. This situation occurs when the input is NaN of +-Inf. Although\n> > > numeric_out could be used, it might cause another error. Therefore,\n> > > simply writing out as \"argument NaN and Infinity..\" would be better.\n> >\n> > Oh! I'd assumed that we were discussing a string that we'd failed to\n> > convert to numeric. If the input is already numeric, then either\n> > the error is unreachable or what we're really doing is rejecting\n> > special values such as NaN on policy grounds. I would ask first\n> > if that policy is sane at all. (I'd lean to \"not\" --- if we allow\n> > it in the input JSON, why not in the output?) 
If it is sane, the\n> > error message needs to be far more specific.\n> >\n> > regards, tom lane\n> >\n> \n> *Consistent error message related to type:*\n...\n> What if we use phrases like \"for type double precision\" at both places,\n> like:\n> numeric argument of jsonpath item method .%s() is out of range for type\n> double precision\n> string argument of jsonpath item method .%s() is not a valid representation\n> for type double precision\n> \n> With this, the rest will be like:\n> for type numeric\n> for type bigint\n> for type integer\n> \n> Suggestions?\n\nFWIW, I prefer consistently using \"for type hoge\".OK. \n\n> ---\n> \n> *Showing input string in the error message:*\n> \n> argument \"...input string here...\" of jsonpath item method .%s() is out of\n> range for type numeric\n> \n> If we add the input string in the error, then for some errors, it will be\n> too big, for example:\n> \n> -ERROR: numeric argument of jsonpath item method .double() is out of range\n> for type double precision\n> +ERROR: argument\n> \"100000<many zeros follow>\"\n> of jsonpath item method .double() is out of range for type double precision\n\nAs Tom suggested, given that similar situations have already been\ndisregarded elsewhere, worrying about excessively long input strings\nin this specific instance won't notably improve safety in total.\n\n> Also, for non-string input, we need to convert numeric to string just for\n> the error message, which seems overkill.\n\nAs I suggested and you seem to agree, using literally \"Nan or\nInfinity\" would be sufficient.I am more concerned about .bigint() and .integer(). We can have errors when the numeric input is out of range, but not NaN or Infinity. At those places, we need to convert numeric to string to put that value into the error.Do you mean we should still put \"Nan or Infinity\" there?This is the case: select jsonb_path_query('12345678901', '$.integer()'); ERROR: numeric argument of jsonpath item method .integer() is out of range for type integer \n\n> On another note, irrespective of these changes, is it good to show the\n> given input in the error messages? Error messages are logged and may leak\n> some details.\n> \n> I think the existing way seems ok.\n\nIn my opinion, it is quite common to include the error-causing value\nin error messages. Also, we have already many functions that impliy\nthe possibility for revealing input values when converting text\nrepresentation into internal format, such as with int4in. However, I\ndon't stick to that way.\n\n> ---\n> \n> *NaN and Infinity restrictions:*\n> \n> I am not sure why NaN and Infinity are not allowed in conversion to double\n> precision (.double() method). I have used the same restriction for\n> .decimal() and .number(). However, as you said, we should have error\n> messages more specific. I tried that in the attached patch; please have\n> your views. I have the following wordings for that error message:\n> \"NaN or Infinity is not allowed for jsonpath item method .%s()\"\n> \n> Suggestions...\n\nThey seem good to *me*.Thanks \n\nBy the way, while playing with this feature, I noticed the following\nerror message:\n\n> select jsonb_path_query('1.1' , '$.boolean()');\n> ERROR: numeric argument of jsonpath item method .boolean() is out of range for type boolean\n\nThe error message seems a bit off to me. For example, \"argument '1.1'\nis invalid for type [bB]oolean\" seems more appropriate for this\nspecific issue. 
(I'm not ceratin about our policy on the spelling of\nBoolean..)\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n-- Jeevan ChalkePrincipal, ManagerProduct Developmentedbpostgres.com",
"msg_date": "Thu, 1 Feb 2024 09:19:40 +0530",
"msg_from": "Jeevan Chalke <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: More new SQL/JSON item methods"
},
{
"msg_contents": "On Thu, Feb 1, 2024 at 7:24 AM Kyotaro Horiguchi <[email protected]>\nwrote:\n\n> At Thu, 01 Feb 2024 10:49:57 +0900 (JST), Kyotaro Horiguchi <\n> [email protected]> wrote in\n> > By the way, while playing with this feature, I noticed the following\n> > error message:\n> >\n> > > select jsonb_path_query('1.1' , '$.boolean()');\n> > > ERROR: numeric argument of jsonpath item method .boolean() is out of\n> range for type boolean\n> >\n> > The error message seems a bit off to me. For example, \"argument '1.1'\n> > is invalid for type [bB]oolean\" seems more appropriate for this\n> > specific issue. (I'm not ceratin about our policy on the spelling of\n> > Boolean..)\n>\n> Or, following our general convention, it would be spelled as:\n>\n> 'invalid argument for type Boolean: \"1.1\"'\n>\n\njsonpath way:\n\nERROR: argument of jsonpath item method .boolean() is invalid for type\nboolean\n\nor, if we add input value, then\n\nERROR: argument \"1.1\" of jsonpath item method .boolean() is invalid for\ntype boolean\n\nAnd this should work for all the error types, like out of range, not valid,\ninvalid input, etc, etc. Also, we don't need separate error messages for\nstring input as well, which currently has the following form:\n\n\"string argument of jsonpath item method .%s() is not a valid\nrepresentation..\"\n\n\nThanks\n\n\n> regards.\n>\n> --\n> Kyotaro Horiguchi\n> NTT Open Source Software Center\n>\n\n\n-- \nJeevan Chalke\n\n*Principal, ManagerProduct Development*\n\n\n\nedbpostgres.com\n\nOn Thu, Feb 1, 2024 at 7:24 AM Kyotaro Horiguchi <[email protected]> wrote:At Thu, 01 Feb 2024 10:49:57 +0900 (JST), Kyotaro Horiguchi <[email protected]> wrote in \n> By the way, while playing with this feature, I noticed the following\n> error message:\n> \n> > select jsonb_path_query('1.1' , '$.boolean()');\n> > ERROR: numeric argument of jsonpath item method .boolean() is out of range for type boolean\n> \n> The error message seems a bit off to me. For example, \"argument '1.1'\n> is invalid for type [bB]oolean\" seems more appropriate for this\n> specific issue. (I'm not ceratin about our policy on the spelling of\n> Boolean..)\n\nOr, following our general convention, it would be spelled as:\n\n'invalid argument for type Boolean: \"1.1\"'jsonpath way:ERROR: argument of jsonpath item method .boolean() is invalid for type booleanor, if we add input value, thenERROR: argument \"1.1\" of jsonpath item method .boolean() is invalid for type booleanAnd this should work for all the error types, like out of range, not valid, invalid input, etc, etc. Also, we don't need separate error messages for string input as well, which currently has the following form:\"string argument of jsonpath item method .%s() is not a valid representation..\"Thanks\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n-- Jeevan ChalkePrincipal, ManagerProduct Developmentedbpostgres.com",
"msg_date": "Thu, 1 Feb 2024 09:22:22 +0530",
"msg_from": "Jeevan Chalke <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: More new SQL/JSON item methods"
},
{
"msg_contents": "At Thu, 1 Feb 2024 09:19:40 +0530, Jeevan Chalke <[email protected]> wrote in \n> > As Tom suggested, given that similar situations have already been\n> > disregarded elsewhere, worrying about excessively long input strings\n> > in this specific instance won't notably improve safety in total.\n> >\n> > > Also, for non-string input, we need to convert numeric to string just for\n> > > the error message, which seems overkill.\n> >\n> > As I suggested and you seem to agree, using literally \"Nan or\n> > Infinity\" would be sufficient.\n> >\n> \n> I am more concerned about .bigint() and .integer(). We can have errors when\n> the numeric input is out of range, but not NaN or Infinity. At those\n> places, we need to convert numeric to string to put that value into the\n> error.\n> Do you mean we should still put \"Nan or Infinity\" there?\n> \n> This is the case:\n> select jsonb_path_query('12345678901', '$.integer()');\n> ERROR: numeric argument of jsonpath item method .integer() is out of\n> range for type integer\n\nAh.. Understood. \"NaN or Infinity\" cannot be used in those\ncases. Additionally, for jpiBoolean and jpiBigint, we lack the text\nrepresentation of the value.\n\nBy a quick grepping, I found that the following functions call\nnumeric_out to convert the jbvNumeric values back into text\nrepresentation.\n\nJsonbValueAstext, populate_scalar, iterate_jsonb_values,\nexecuteItemOptUnrwapTarget, jsonb_put_escaped_value\n\nThe function iterate_jsonb_values(), in particular, iterates over a\nvalues array, calling numeric_out on each iteration.\n\nThe following functions re-converts the converted numeric into another type.\n\njsonb_int[248]() converts the numeric value into int2 using numeric_int[248]().\njsonb_float[48]() converts it into float4 using numeric_float[48]().\n\nGiven these facts, it seems more efficient for jbvNumber to retain the\noriginal scalar value, converting it only when necessary. If needed,\nwe could also add a numeric struct member as a cache for better\nperformance. I'm not sure we refer the values more than once, though.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 01 Feb 2024 14:53:57 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: More new SQL/JSON item methods"
},
{
"msg_contents": "At Thu, 1 Feb 2024 09:22:22 +0530, Jeevan Chalke <[email protected]> wrote in \r\n> On Thu, Feb 1, 2024 at 7:24 AM Kyotaro Horiguchi <[email protected]>\r\n> wrote:\r\n> \r\n> > At Thu, 01 Feb 2024 10:49:57 +0900 (JST), Kyotaro Horiguchi <\r\n> > [email protected]> wrote in\r\n> > > By the way, while playing with this feature, I noticed the following\r\n> > > error message:\r\n> > >\r\n> > > > select jsonb_path_query('1.1' , '$.boolean()');\r\n> > > > ERROR: numeric argument of jsonpath item method .boolean() is out of\r\n> > range for type boolean\r\n> > >\r\n> > > The error message seems a bit off to me. For example, \"argument '1.1'\r\n> > > is invalid for type [bB]oolean\" seems more appropriate for this\r\n> > > specific issue. (I'm not ceratin about our policy on the spelling of\r\n> > > Boolean..)\r\n> >\r\n> > Or, following our general convention, it would be spelled as:\r\n> >\r\n> > 'invalid argument for type Boolean: \"1.1\"'\r\n> >\r\n> \r\n> jsonpath way:\r\n\r\nHmm. I see.\r\n\r\n> ERROR: argument of jsonpath item method .boolean() is invalid for type\r\n> boolean\r\n> \r\n> or, if we add input value, then\r\n> \r\n> ERROR: argument \"1.1\" of jsonpath item method .boolean() is invalid for\r\n> type boolean\r\n> \r\n> And this should work for all the error types, like out of range, not valid,\r\n> invalid input, etc, etc. Also, we don't need separate error messages for\r\n> string input as well, which currently has the following form:\r\n> \r\n> \"string argument of jsonpath item method .%s() is not a valid\r\n> representation..\"\r\n\r\nAgreed.\r\n\r\nregards.\r\n\r\n-- \r\nKyotaro Horiguchi\r\nNTT Open Source Software Center\r\n",
"msg_date": "Thu, 01 Feb 2024 14:55:28 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: More new SQL/JSON item methods"
},
{
"msg_contents": "Sorry for a minor correction, but..\n\nAt Thu, 01 Feb 2024 14:53:57 +0900 (JST), Kyotaro Horiguchi <[email protected]> wrote in \n> Ah.. Understood. \"NaN or Infinity\" cannot be used in those\n> cases. Additionally, for jpiBoolean and jpiBigint, we lack the text\n> representation of the value.\n\nThis \"Additionally\" was merely left in by mistake.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 01 Feb 2024 15:08:27 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: More new SQL/JSON item methods"
},
{
"msg_contents": "On Thu, Feb 1, 2024 at 11:25 AM Kyotaro Horiguchi <[email protected]>\nwrote:\n\n> At Thu, 1 Feb 2024 09:22:22 +0530, Jeevan Chalke <\n> [email protected]> wrote in\n> > On Thu, Feb 1, 2024 at 7:24 AM Kyotaro Horiguchi <\n> [email protected]>\n> > wrote:\n> >\n> > > At Thu, 01 Feb 2024 10:49:57 +0900 (JST), Kyotaro Horiguchi <\n> > > [email protected]> wrote in\n> > > > By the way, while playing with this feature, I noticed the following\n> > > > error message:\n> > > >\n> > > > > select jsonb_path_query('1.1' , '$.boolean()');\n> > > > > ERROR: numeric argument of jsonpath item method .boolean() is out\n> of\n> > > range for type boolean\n> > > >\n> > > > The error message seems a bit off to me. For example, \"argument '1.1'\n> > > > is invalid for type [bB]oolean\" seems more appropriate for this\n> > > > specific issue. (I'm not ceratin about our policy on the spelling of\n> > > > Boolean..)\n> > >\n> > > Or, following our general convention, it would be spelled as:\n> > >\n> > > 'invalid argument for type Boolean: \"1.1\"'\n> > >\n> >\n> > jsonpath way:\n>\n> Hmm. I see.\n>\n> > ERROR: argument of jsonpath item method .boolean() is invalid for type\n> > boolean\n> >\n> > or, if we add input value, then\n> >\n> > ERROR: argument \"1.1\" of jsonpath item method .boolean() is invalid for\n> > type boolean\n> >\n> > And this should work for all the error types, like out of range, not\n> valid,\n> > invalid input, etc, etc. Also, we don't need separate error messages for\n> > string input as well, which currently has the following form:\n> >\n> > \"string argument of jsonpath item method .%s() is not a valid\n> > representation..\"\n>\n> Agreed.\n>\n\nAttached are patches based on the discussion.\n\n\n>\n> regards.\n>\n> --\n> Kyotaro Horiguchi\n> NTT Open Source Software Center\n>\n\n\n-- \nJeevan Chalke\n\n*Principal, ManagerProduct Development*\n\n\n\nedbpostgres.com",
"msg_date": "Fri, 2 Feb 2024 11:01:31 +0530",
"msg_from": "Jeevan Chalke <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: More new SQL/JSON item methods"
},
{
"msg_contents": "On 2024-02-02 Fr 00:31, Jeevan Chalke wrote:\n>\n>\n> On Thu, Feb 1, 2024 at 11:25 AM Kyotaro Horiguchi \n> <[email protected]> wrote:\n>\n> At Thu, 1 Feb 2024 09:22:22 +0530, Jeevan Chalke\n> <[email protected]> wrote in\n> > On Thu, Feb 1, 2024 at 7:24 AM Kyotaro Horiguchi\n> <[email protected]>\n> > wrote:\n> >\n> > > At Thu, 01 Feb 2024 10:49:57 +0900 (JST), Kyotaro Horiguchi <\n> > > [email protected]> wrote in\n> > > > By the way, while playing with this feature, I noticed the\n> following\n> > > > error message:\n> > > >\n> > > > > select jsonb_path_query('1.1' , '$.boolean()');\n> > > > > ERROR: numeric argument of jsonpath item method\n> .boolean() is out of\n> > > range for type boolean\n> > > >\n> > > > The error message seems a bit off to me. For example,\n> \"argument '1.1'\n> > > > is invalid for type [bB]oolean\" seems more appropriate for this\n> > > > specific issue. (I'm not ceratin about our policy on the\n> spelling of\n> > > > Boolean..)\n> > >\n> > > Or, following our general convention, it would be spelled as:\n> > >\n> > > 'invalid argument for type Boolean: \"1.1\"'\n> > >\n> >\n> > jsonpath way:\n>\n> Hmm. I see.\n>\n> > ERROR: argument of jsonpath item method .boolean() is invalid\n> for type\n> > boolean\n> >\n> > or, if we add input value, then\n> >\n> > ERROR: argument \"1.1\" of jsonpath item method .boolean() is\n> invalid for\n> > type boolean\n> >\n> > And this should work for all the error types, like out of range,\n> not valid,\n> > invalid input, etc, etc. Also, we don't need separate error\n> messages for\n> > string input as well, which currently has the following form:\n> >\n> > \"string argument of jsonpath item method .%s() is not a valid\n> > representation..\"\n>\n> Agreed.\n>\n>\n> Attached are patches based on the discussion.\n\n\n\nThanks, I combined these and pushed the result.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2024-02-02 Fr 00:31, Jeevan Chalke\n wrote:\n\n\n\n\n\n\n\n\nOn Thu, Feb 1, 2024 at\n 11:25 AM Kyotaro Horiguchi <[email protected]>\n wrote:\n\nAt Thu, 1 Feb 2024\n 09:22:22 +0530, Jeevan Chalke <[email protected]>\n wrote in \n > On Thu, Feb 1, 2024 at 7:24 AM Kyotaro Horiguchi <[email protected]>\n > wrote:\n > \n > > At Thu, 01 Feb 2024 10:49:57 +0900 (JST), Kyotaro\n Horiguchi <\n > > [email protected]>\n wrote in\n > > > By the way, while playing with this feature,\n I noticed the following\n > > > error message:\n > > >\n > > > > select jsonb_path_query('1.1' ,\n '$.boolean()');\n > > > > ERROR: numeric argument of jsonpath\n item method .boolean() is out of\n > > range for type boolean\n > > >\n > > > The error message seems a bit off to me. For\n example, \"argument '1.1'\n > > > is invalid for type [bB]oolean\" seems more\n appropriate for this\n > > > specific issue. (I'm not ceratin about our\n policy on the spelling of\n > > > Boolean..)\n > >\n > > Or, following our general convention, it would be\n spelled as:\n > >\n > > 'invalid argument for type Boolean: \"1.1\"'\n > >\n > \n > jsonpath way:\n\n Hmm. I see.\n\n > ERROR: argument of jsonpath item method .boolean() is\n invalid for type\n > boolean\n > \n > or, if we add input value, then\n > \n > ERROR: argument \"1.1\" of jsonpath item method\n .boolean() is invalid for\n > type boolean\n > \n > And this should work for all the error types, like out\n of range, not valid,\n > invalid input, etc, etc. 
Also, we don't need separate\n error messages for\n > string input as well, which currently has the following\n form:\n > \n > \"string argument of jsonpath item method .%s() is not a\n valid\n > representation..\"\n\n Agreed.\n\n\n\nAttached are patches based on the discussion.\n \n\n\n\n\n\n\n\nThanks, I combined these and pushed the result.\n\n\ncheers\n\n\nandrew\n\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Tue, 27 Feb 2024 02:10:12 -0500",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: More new SQL/JSON item methods"
},
{
"msg_contents": "On Tue, Feb 27, 2024 at 12:40 PM Andrew Dunstan <[email protected]> wrote:\n\n>\n> On 2024-02-02 Fr 00:31, Jeevan Chalke wrote:\n>\n>\n>\n> On Thu, Feb 1, 2024 at 11:25 AM Kyotaro Horiguchi <[email protected]>\n> wrote:\n>\n>> At Thu, 1 Feb 2024 09:22:22 +0530, Jeevan Chalke <\n>> [email protected]> wrote in\n>> > On Thu, Feb 1, 2024 at 7:24 AM Kyotaro Horiguchi <\n>> [email protected]>\n>> > wrote:\n>> >\n>> > > At Thu, 01 Feb 2024 10:49:57 +0900 (JST), Kyotaro Horiguchi <\n>> > > [email protected]> wrote in\n>> > > > By the way, while playing with this feature, I noticed the following\n>> > > > error message:\n>> > > >\n>> > > > > select jsonb_path_query('1.1' , '$.boolean()');\n>> > > > > ERROR: numeric argument of jsonpath item method .boolean() is\n>> out of\n>> > > range for type boolean\n>> > > >\n>> > > > The error message seems a bit off to me. For example, \"argument\n>> '1.1'\n>> > > > is invalid for type [bB]oolean\" seems more appropriate for this\n>> > > > specific issue. (I'm not ceratin about our policy on the spelling of\n>> > > > Boolean..)\n>> > >\n>> > > Or, following our general convention, it would be spelled as:\n>> > >\n>> > > 'invalid argument for type Boolean: \"1.1\"'\n>> > >\n>> >\n>> > jsonpath way:\n>>\n>> Hmm. I see.\n>>\n>> > ERROR: argument of jsonpath item method .boolean() is invalid for type\n>> > boolean\n>> >\n>> > or, if we add input value, then\n>> >\n>> > ERROR: argument \"1.1\" of jsonpath item method .boolean() is invalid for\n>> > type boolean\n>> >\n>> > And this should work for all the error types, like out of range, not\n>> valid,\n>> > invalid input, etc, etc. Also, we don't need separate error messages for\n>> > string input as well, which currently has the following form:\n>> >\n>> > \"string argument of jsonpath item method .%s() is not a valid\n>> > representation..\"\n>>\n>> Agreed.\n>>\n>\n> Attached are patches based on the discussion.\n>\n>\n>\n>\n> Thanks, I combined these and pushed the result.\n>\n\nThank you, Andrew.\n\n\n>\n> cheers\n>\n>\n> andrew\n>\n>\n> --\n> Andrew Dunstan\n> EDB: https://www.enterprisedb.com\n>\n>\n\n-- \nJeevan Chalke\n\n*Principal, ManagerProduct Development*\n\n\n\nedbpostgres.com\n\nOn Tue, Feb 27, 2024 at 12:40 PM Andrew Dunstan <[email protected]> wrote:\n\n\n\nOn 2024-02-02 Fr 00:31, Jeevan Chalke\n wrote:\n\n\n\n\n\n\n\nOn Thu, Feb 1, 2024 at\n 11:25 AM Kyotaro Horiguchi <[email protected]>\n wrote:\n\nAt Thu, 1 Feb 2024\n 09:22:22 +0530, Jeevan Chalke <[email protected]>\n wrote in \n > On Thu, Feb 1, 2024 at 7:24 AM Kyotaro Horiguchi <[email protected]>\n > wrote:\n > \n > > At Thu, 01 Feb 2024 10:49:57 +0900 (JST), Kyotaro\n Horiguchi <\n > > [email protected]>\n wrote in\n > > > By the way, while playing with this feature,\n I noticed the following\n > > > error message:\n > > >\n > > > > select jsonb_path_query('1.1' ,\n '$.boolean()');\n > > > > ERROR: numeric argument of jsonpath\n item method .boolean() is out of\n > > range for type boolean\n > > >\n > > > The error message seems a bit off to me. For\n example, \"argument '1.1'\n > > > is invalid for type [bB]oolean\" seems more\n appropriate for this\n > > > specific issue. (I'm not ceratin about our\n policy on the spelling of\n > > > Boolean..)\n > >\n > > Or, following our general convention, it would be\n spelled as:\n > >\n > > 'invalid argument for type Boolean: \"1.1\"'\n > >\n > \n > jsonpath way:\n\n Hmm. 
I see.\n\n > ERROR: argument of jsonpath item method .boolean() is\n invalid for type\n > boolean\n > \n > or, if we add input value, then\n > \n > ERROR: argument \"1.1\" of jsonpath item method\n .boolean() is invalid for\n > type boolean\n > \n > And this should work for all the error types, like out\n of range, not valid,\n > invalid input, etc, etc. Also, we don't need separate\n error messages for\n > string input as well, which currently has the following\n form:\n > \n > \"string argument of jsonpath item method .%s() is not a\n valid\n > representation..\"\n\n Agreed.\n\n\n\nAttached are patches based on the discussion.\n \n\n\n\n\n\n\n\nThanks, I combined these and pushed the result.Thank you, Andrew. \n\n\ncheers\n\n\nandrew\n\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n-- Jeevan ChalkePrincipal, ManagerProduct Developmentedbpostgres.com",
"msg_date": "Tue, 27 Feb 2024 18:36:06 +0530",
"msg_from": "Jeevan Chalke <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: More new SQL/JSON item methods"
}
] |
[
{
"msg_contents": "Hello.\n\npg_resetwal and initdb has an error message like this:\n\nmsgid \"argument of --wal-segsize must be a power of 2 between 1 and 1024\"\n\nIn other parts in the tree, however, we spell it as \"power of two\". I\nthink it would make sense to standardize the spelling for\nconsistency. See the attached.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Tue, 29 Aug 2023 17:56:15 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <[email protected]>",
"msg_from_op": true,
"msg_subject": "Standardize spelling of \"power of two\""
},
{
"msg_contents": "> On 29 Aug 2023, at 10:56, Kyotaro Horiguchi <[email protected]> wrote:\n\n> pg_resetwal and initdb has an error message like this:\n> \n> msgid \"argument of --wal-segsize must be a power of 2 between 1 and 1024\"\n> \n> In other parts in the tree, however, we spell it as \"power of two\". I\n> think it would make sense to standardize the spelling for\n> consistency. See the attached.\n\nAgreed. While we have numerous \"power of 2\" these were the only ones in\ntranslated user-facing messages, so I've pushed this to master (it didn't seem\nworth disrupting translations for 16 as we are so close to wrapping it, if\nothers disagree I can backpatch).\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Tue, 29 Aug 2023 11:26:13 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Standardize spelling of \"power of two\""
},
{
"msg_contents": "On 2023-Aug-29, Daniel Gustafsson wrote:\n\n> Agreed. While we have numerous \"power of 2\" these were the only ones\n> in translated user-facing messages, so I've pushed this to master (it\n> didn't seem worth disrupting translations for 16 as we are so close to\n> wrapping it, if others disagree I can backpatch).\n\nI'd rather backpatch it. There's only 5 translations that are 100% for\ninitdb.po, and they have two weeks to make the change from \"2\" to \"two\".\nI don't think this is a problem.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Tue, 29 Aug 2023 13:11:30 +0200",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Standardize spelling of \"power of two\""
},
{
"msg_contents": "> On 29 Aug 2023, at 13:11, Alvaro Herrera <[email protected]> wrote:\n> \n> On 2023-Aug-29, Daniel Gustafsson wrote:\n> \n>> Agreed. While we have numerous \"power of 2\" these were the only ones\n>> in translated user-facing messages, so I've pushed this to master (it\n>> didn't seem worth disrupting translations for 16 as we are so close to\n>> wrapping it, if others disagree I can backpatch).\n> \n> I'd rather backpatch it. There's only 5 translations that are 100% for\n> initdb.po, and they have two weeks to make the change from \"2\" to \"two\".\n> I don't think this is a problem.\n\nFair enough, backpatched to v16.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Tue, 29 Aug 2023 14:39:42 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Standardize spelling of \"power of two\""
},
{
"msg_contents": "At Tue, 29 Aug 2023 14:39:42 +0200, Daniel Gustafsson <[email protected]> wrote in \n> > On 29 Aug 2023, at 13:11, Alvaro Herrera <[email protected]> wrote:\n> > \n> > On 2023-Aug-29, Daniel Gustafsson wrote:\n> > \n> >> Agreed. While we have numerous \"power of 2\" these were the only ones\n> >> in translated user-facing messages, so I've pushed this to master (it\n> >> didn't seem worth disrupting translations for 16 as we are so close to\n> >> wrapping it, if others disagree I can backpatch).\n> > \n> > I'd rather backpatch it. There's only 5 translations that are 100% for\n> > initdb.po, and they have two weeks to make the change from \"2\" to \"two\".\n> > I don't think this is a problem.\n> \n> Fair enough, backpatched to v16.\n\nThank you for committing this.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 30 Aug 2023 09:13:11 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Standardize spelling of \"power of two\""
}
] |
[
{
"msg_contents": "Hi,\n\nThe VS and MinGW Windows images are merged on Andres' pg-vm-images\nrepository now [1]. So, the old pg-ci-windows-ci-vs-2019 and\npg-ci-windows-ci-mingw64 images will not be updated from now on. This\nnew merged image (pg-ci-windows-ci) needs to be used on both VS and\nMinGW tasks.\nI attached a patch for using pg-ci-windows-ci Windows image on VS and\nMinGW tasks.\n\nCI run when pg-ci-windows-ci is used:\nhttps://cirrus-ci.com/build/6063036847357952\n\n[1]: https://github.com/anarazel/pg-vm-images/commit/6747f676b97348d47f041b05aa9b36cde43c33fe\n\nRegards,\nNazir Bilal Yavuz\nMicrosoft",
"msg_date": "Tue, 29 Aug 2023 15:18:29 +0300",
"msg_from": "Nazir Bilal Yavuz <[email protected]>",
"msg_from_op": true,
"msg_subject": "Use the same Windows image on both VS and MinGW tasks"
},
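A rough sketch of the kind of .cirrus.yml change being described, switching both Windows tasks to the single merged image. The task name, resource settings, and repository variable below are assumptions for illustration and may not match the attached patch:

    task:
      name: Windows - Server 2019, VS 2019
      windows_container:
        # previously pg-ci-windows-ci-vs-2019 here,
        # and pg-ci-windows-ci-mingw64 in the MinGW task
        image: $CONTAINER_REPO/pg-ci-windows-ci:latest
        cpu: 4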
{
"msg_contents": "Hi,\n\nOn 2023-08-29 15:18:29 +0300, Nazir Bilal Yavuz wrote:\n> The VS and MinGW Windows images are merged on Andres' pg-vm-images\n> repository now [1]. So, the old pg-ci-windows-ci-vs-2019 and\n> pg-ci-windows-ci-mingw64 images will not be updated from now on. This\n> new merged image (pg-ci-windows-ci) needs to be used on both VS and\n> MinGW tasks.\n> I attached a patch for using pg-ci-windows-ci Windows image on VS and\n> MinGW tasks.\n\nThanks! Pushed to 15, 16 and master.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 3 Jun 2024 19:44:04 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Use the same Windows image on both VS and MinGW tasks"
}
] |
[
{
"msg_contents": "The -Wshadow compiler option reported 3 shadow warnings within the\nlogical replication files. (These are all in old code)\n\nPSA a patch to address those.\n\n======\n\nlogicalfuncs.c:184:13: warning: declaration of ‘name’ shadows a\nprevious local [-Wshadow]\n char *name = TextDatumGetCString(datum_opts[i]);\n ^\nlogicalfuncs.c:105:8: warning: shadowed declaration is here [-Wshadow]\n Name name;\n ^\n\n~~~\n\nreorderbuffer.c:4843:10: warning: declaration of ‘isnull’ shadows a\nprevious local [-Wshadow]\n bool isnull;\n ^\nreorderbuffer.c:4734:11: warning: shadowed declaration is here [-Wshadow]\n bool *isnull;\n ^\n\n~~~\n\nwalsender.c:3543:14: warning: declaration of ‘sentPtr’ shadows a\nglobal declaration [-Wshadow]\n XLogRecPtr sentPtr;\n ^\nwalsender.c:155:19: warning: shadowed declaration is here [-Wshadow]\n static XLogRecPtr sentPtr = InvalidXLogRecPtr;\n ^\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia",
"msg_date": "Wed, 30 Aug 2023 09:16:38 +1000",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": true,
"msg_subject": "Fix shadow warnings in logical replication code"
},
{
"msg_contents": "On Wed, Aug 30, 2023 at 09:16:38AM +1000, Peter Smith wrote:\n> logicalfuncs.c:184:13: warning: declaration of ‘name’ shadows a\n> previous local [-Wshadow]\n> char *name = TextDatumGetCString(datum_opts[i]);\n> ^\n> logicalfuncs.c:105:8: warning: shadowed declaration is here [-Wshadow]\n> Name name;\n\nA bit confusing here, particularly as the name is reused with\nReplicationSlotAcquire() at the end of\npg_logical_slot_get_changes_guts() once again.\n\n> reorderbuffer.c:4843:10: warning: declaration of ‘isnull’ shadows a\n> previous local [-Wshadow]\n> bool isnull;\n> ^\n> reorderbuffer.c:4734:11: warning: shadowed declaration is here [-Wshadow]\n> bool *isnull;\n> ^\n\nAgreed as well about this one.\n\n> walsender.c:3543:14: warning: declaration of ‘sentPtr’ shadows a\n> global declaration [-Wshadow]\n> XLogRecPtr sentPtr;\n> ^\n> walsender.c:155:19: warning: shadowed declaration is here [-Wshadow]\n> static XLogRecPtr sentPtr = InvalidXLogRecPtr;\n> ^\n\nThis one looks pretty serious to me, particularly as the static\nsentPtr is used quite a bit. It is fortunate that the impact is\nlimited to the WAL sender stat function.\n\nFixing all these seems like a good thing in the long term, so OK for\nme. Like all the fixes similar to this one, I don't see a need for a\nbackpatch based on their locality, even if sentPtr makes me a bit\nnervous to keep even in stable branches.\n\nThere is much more going on with -Wshadow, but let's do things\nincrementally, case by case.\n--\nMichael",
"msg_date": "Wed, 30 Aug 2023 09:48:48 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix shadow warnings in logical replication code"
},
{
"msg_contents": "On Wed, Aug 30, 2023 at 8:49 AM Michael Paquier <[email protected]> wrote:\n\n> There is much more going on with -Wshadow, but let's do things\n> incrementally, case by case.\n\n\nYeah, IIRC the source tree currently is able to be built without any\nshadow-related warnings with -Wshadow=compatible-local. But with\n-Wshadow or -Wshadow=local, you can still see a lot of warnings.\n\nThanks\nRichard\n\nOn Wed, Aug 30, 2023 at 8:49 AM Michael Paquier <[email protected]> wrote:\nThere is much more going on with -Wshadow, but let's do things\nincrementally, case by case.Yeah, IIRC the source tree currently is able to be built without anyshadow-related warnings with -Wshadow=compatible-local. But with-Wshadow or -Wshadow=local, you can still see a lot of warnings.ThanksRichard",
"msg_date": "Wed, 30 Aug 2023 09:50:17 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix shadow warnings in logical replication code"
},
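One possible way to reproduce such warning counts with a configure-based build; the exact flags and pipeline are illustrative, not taken from the thread:

    ./configure CFLAGS='-O2 -Wshadow=local'
    make -s 2>&1 | tee make.txt | grep -c 'warning: declaration'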
{
"msg_contents": "On Wed, Aug 30, 2023 at 09:50:17AM +0800, Richard Guo wrote:\n> Yeah, IIRC the source tree currently is able to be built without any\n> shadow-related warnings with -Wshadow=compatible-local. But with\n> -Wshadow or -Wshadow=local, you can still see a lot of warnings.\n\nYep. I've addressed on HEAD the ones proposed on this thread.\n--\nMichael",
"msg_date": "Thu, 31 Aug 2023 08:39:10 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix shadow warnings in logical replication code"
},
{
"msg_contents": "On Thu, Aug 31, 2023 at 9:39 AM Michael Paquier <[email protected]> wrote:\n>\n> On Wed, Aug 30, 2023 at 09:50:17AM +0800, Richard Guo wrote:\n> > Yeah, IIRC the source tree currently is able to be built without any\n> > shadow-related warnings with -Wshadow=compatible-local. But with\n> > -Wshadow or -Wshadow=local, you can still see a lot of warnings.\n>\n> Yep. I've addressed on HEAD the ones proposed on this thread.\n> --\n\nThanks for pushing!\n\n3 gone, ~150 remaining :)\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n~\n\n[postgres@CentOS7-x64 oss_postgres_misc]$ cat make.txt | grep\n'warning: declaration'\ncontroldata_utils.c:52:29: warning: declaration of ‘DataDir’ shadows a\nglobal declaration [-Wshadow]\ncontroldata_utils.c:145:32: warning: declaration of ‘DataDir’ shadows\na global declaration [-Wshadow]\nbrin.c:485:16: warning: declaration of ‘tmp’ shadows a previous local [-Wshadow]\ngistbuild.c:1225:23: warning: declaration of ‘splitinfo’ shadows a\nprevious local [-Wshadow]\nxlogreader.c:108:24: warning: declaration of ‘wal_segment_size’\nshadows a global declaration [-Wshadow]\nxlogrecovery.c:1170:13: warning: declaration of ‘backupEndRequired’\nshadows a global declaration [-Wshadow]\nxlogrecovery.c:1832:33: warning: declaration of ‘xlogreader’ shadows a\nglobal declaration [-Wshadow]\nxlogrecovery.c:3047:28: warning: declaration of ‘xlogprefetcher’\nshadows a global declaration [-Wshadow]\nxlogrecovery.c:3051:19: warning: declaration of ‘xlogreader’ shadows a\nglobal declaration [-Wshadow]\nxlogrecovery.c:3214:31: warning: declaration of ‘xlogreader’ shadows a\nglobal declaration [-Wshadow]\nxlogrecovery.c:3965:38: warning: declaration of ‘xlogprefetcher’\nshadows a global declaration [-Wshadow]\nobjectaddress.c:2173:12: warning: declaration of ‘nulls’ shadows a\nprevious local [-Wshadow]\nobjectaddress.c:2190:12: warning: declaration of ‘nulls’ shadows a\nprevious local [-Wshadow]\nobjectaddress.c:2227:12: warning: declaration of ‘nulls’ shadows a\nprevious local [-Wshadow]\npg_constraint.c:811:22: warning: declaration of ‘cooked’ shadows a\nparameter [-Wshadow]\nparse_target.c:1647:13: warning: declaration of ‘levelsup’ shadows a\nparameter [-Wshadow]\nextension.c:1079:13: warning: declaration of ‘schemaName’ shadows a\nparameter [-Wshadow]\nschemacmds.c:208:12: warning: declaration of ‘stmt’ shadows a\nparameter [-Wshadow]\nstatscmds.c:291:16: warning: declaration of ‘attnums’ shadows a\nprevious local [-Wshadow]\ntablecmds.c:14034:20: warning: declaration of ‘cmd’ shadows a\nparameter [-Wshadow]\ntablecmds.c:14107:20: warning: declaration of ‘cmd’ shadows a\nparameter [-Wshadow]\ntrigger.c:1170:13: warning: declaration of ‘qual’ shadows a previous\nlocal [-Wshadow]\nexecExprInterp.c:407:27: warning: declaration of ‘dispatch_table’\nshadows a global declaration [-Wshadow]\nnodeAgg.c:3978:20: warning: declaration of ‘phase’ shadows a previous\nlocal [-Wshadow]\nnodeValuesscan.c:145:16: warning: declaration of ‘estate’ shadows a\nprevious local [-Wshadow]\nbe-secure-common.c:110:44: warning: declaration of ‘ssl_key_file’\nshadows a global declaration [-Wshadow]\nmain.c:217:27: warning: declaration of ‘progname’ shadows a global\ndeclaration [-Wshadow]\nmain.c:327:18: warning: declaration of ‘progname’ shadows a global\ndeclaration [-Wshadow]\nmain.c:386:24: warning: declaration of ‘progname’ shadows a global\ndeclaration [-Wshadow]\nequivclass.c:727:16: warning: declaration of ‘rel’ shadows a parameter\n[-Wshadow]\ncreateplan.c:1245:12: 
warning: declaration of ‘plan’ shadows a\nprevious local [-Wshadow]\ncreateplan.c:2560:12: warning: declaration of ‘plan’ shadows a\nprevious local [-Wshadow]\npartdesc.c:218:16: warning: declaration of ‘key’ shadows a previous\nlocal [-Wshadow]\nlogicalfuncs.c:184:13: warning: declaration of ‘name’ shadows a\nprevious local [-Wshadow]\nreorderbuffer.c:4843:10: warning: declaration of ‘isnull’ shadows a\nprevious local [-Wshadow]\nwalsender.c:3543:14: warning: declaration of ‘sentPtr’ shadows a\nglobal declaration [-Wshadow]\ndependencies.c:377:23: warning: declaration of ‘DependencyGenerator’\nshadows a global declaration [-Wshadow]\ndependencies.c:1194:14: warning: declaration of ‘expr’ shadows a\nparameter [-Wshadow]\ndependencies.c:1228:22: warning: declaration of ‘expr’ shadows a\nparameter [-Wshadow]\nextended_stats.c:1047:10: warning: declaration of ‘isnull’ shadows a\nprevious local [-Wshadow]\ndate.c:506:9: warning: declaration of ‘days’ shadows a global\ndeclaration [-Wshadow]\ndate.c:530:9: warning: declaration of ‘days’ shadows a global\ndeclaration [-Wshadow]\ndate.c:2015:9: warning: declaration of ‘days’ shadows a global\ndeclaration [-Wshadow]\ndatetime.c:637:8: warning: declaration of ‘days’ shadows a global\ndeclaration [-Wshadow]\nformatting.c:3181:25: warning: declaration of ‘months’ shadows a\nglobal declaration [-Wshadow]\njsonpath_exec.c:410:17: warning: declaration of ‘found’ shadows a\nprevious local [-Wshadow]\njsonpath_exec.c:1328:24: warning: declaration of ‘res’ shadows a\nprevious local [-Wshadow]\njsonpath_exec.c:1339:24: warning: declaration of ‘res’ shadows a\nprevious local [-Wshadow]\njsonpath_exec.c:1515:17: warning: declaration of ‘res’ shadows a\nprevious local [-Wshadow]\npg_upgrade_support.c:220:13: warning: declaration of ‘extName’ shadows\na previous local [-Wshadow]\ntimestamp.c:1503:9: warning: declaration of ‘months’ shadows a global\ndeclaration [-Wshadow]\ntimestamp.c:1505:9: warning: declaration of ‘days’ shadows a global\ndeclaration [-Wshadow]\ntimestamp.c:2401:9: warning: declaration of ‘days’ shadows a global\ndeclaration [-Wshadow]\ntimestamp.c:5345:11: warning: declaration of ‘val’ shadows a previous\nlocal [-Wshadow]\nvarlena.c:4177:13: warning: declaration of ‘chunk_start’ shadows a\nprevious local [-Wshadow]\nvarlena.c:5764:14: warning: declaration of ‘str’ shadows a previous\nlocal [-Wshadow]\ninval.c:361:31: warning: declaration of ‘msg’ shadows a previous local\n[-Wshadow]\ninval.c:361:31: warning: declaration of ‘msg’ shadows a previous local\n[-Wshadow]\ninval.c:377:31: warning: declaration of ‘msgs’ shadows a parameter [-Wshadow]\ninval.c:377:31: warning: declaration of ‘msgs’ shadows a parameter [-Wshadow]\ninval.c:377:31: warning: declaration of ‘msgs’ shadows a parameter [-Wshadow]\ninval.c:377:31: warning: declaration of ‘msgs’ shadows a parameter [-Wshadow]\nfreepage.c:1589:9: warning: declaration of ‘result’ shadows a previous\nlocal [-Wshadow]\nfe-connect.c:3572:11: warning: declaration of ‘res’ shadows a previous\nlocal [-Wshadow]\nfe-secure-openssl.c:1527:15: warning: declaration of ‘err’ shadows a\nprevious local [-Wshadow]\nexecute.c:111:19: warning: declaration of ‘text’ shadows a global\ndeclaration [-Wshadow]\nprepare.c:104:26: warning: declaration of ‘text’ shadows a global\ndeclaration [-Wshadow]\nprepare.c:270:12: warning: declaration of ‘text’ shadows a global\ndeclaration [-Wshadow]\nc_keywords.c:36:32: warning: declaration of ‘text’ shadows a global\ndeclaration [-Wshadow]\ndescriptor.c:76:34: warning: 
declaration of ‘connection’ shadows a\nglobal declaration [-Wshadow]\ndescriptor.c:99:35: warning: declaration of ‘connection’ shadows a\nglobal declaration [-Wshadow]\ndescriptor.c:131:37: warning: declaration of ‘connection’ shadows a\nglobal declaration [-Wshadow]\necpg_keywords.c:39:35: warning: declaration of ‘text’ shadows a global\ndeclaration [-Wshadow]\npreproc.y:244:46: warning: declaration of ‘cur’ shadows a global\ndeclaration [-Wshadow]\npreproc.y:535:48: warning: declaration of ‘initializer’ shadows a\nglobal declaration [-Wshadow]\nprint.c:956:10: warning: declaration of ‘curr_nl_line’ shadows a\nprevious local [-Wshadow]\ninitdb.c:750:14: warning: declaration of ‘username’ shadows a global\ndeclaration [-Wshadow]\ninitdb.c:980:11: warning: declaration of ‘conf_file’ shadows a global\ndeclaration [-Wshadow]\ninitdb.c:2069:31: warning: declaration of ‘locale’ shadows a global\ndeclaration [-Wshadow]\ninitdb.c:2132:45: warning: declaration of ‘locale’ shadows a global\ndeclaration [-Wshadow]\ninitdb.c:2193:35: warning: declaration of ‘locale’ shadows a global\ndeclaration [-Wshadow]\ninitdb.c:2430:19: warning: declaration of ‘progname’ shadows a global\ndeclaration [-Wshadow]\ninitdb.c:2509:33: warning: declaration of ‘authmethodlocal’ shadows a\nglobal declaration [-Wshadow]\ninitdb.c:2509:62: warning: declaration of ‘authmethodhost’ shadows a\nglobal declaration [-Wshadow]\npg_amcheck.c:1136:18: warning: declaration of ‘progname’ shadows a\nglobal declaration [-Wshadow]\npg_basebackup.c:987:25: warning: declaration of ‘conn’ shadows a\nglobal declaration [-Wshadow]\npg_basebackup.c:1257:30: warning: declaration of ‘conn’ shadows a\nglobal declaration [-Wshadow]\npg_basebackup.c:1572:24: warning: declaration of ‘conn’ shadows a\nglobal declaration [-Wshadow]\npg_basebackup.c:1671:31: warning: declaration of ‘conn’ shadows a\nglobal declaration [-Wshadow]\npg_basebackup.c:1708:39: warning: declaration of ‘conn’ shadows a\nglobal declaration [-Wshadow]\nreceivelog.c:337:22: warning: declaration of ‘conn’ shadows a global\ndeclaration [-Wshadow]\nreceivelog.c:375:40: warning: declaration of ‘conn’ shadows a global\ndeclaration [-Wshadow]\nreceivelog.c:453:27: warning: declaration of ‘conn’ shadows a global\ndeclaration [-Wshadow]\nreceivelog.c:745:26: warning: declaration of ‘conn’ shadows a global\ndeclaration [-Wshadow]\nreceivelog.c:870:24: warning: declaration of ‘conn’ shadows a global\ndeclaration [-Wshadow]\nreceivelog.c:932:27: warning: declaration of ‘conn’ shadows a global\ndeclaration [-Wshadow]\nreceivelog.c:986:29: warning: declaration of ‘conn’ shadows a global\ndeclaration [-Wshadow]\nreceivelog.c:1040:28: warning: declaration of ‘conn’ shadows a global\ndeclaration [-Wshadow]\nreceivelog.c:1171:31: warning: declaration of ‘conn’ shadows a global\ndeclaration [-Wshadow]\nreceivelog.c:1211:29: warning: declaration of ‘conn’ shadows a global\ndeclaration [-Wshadow]\nstreamutil.c:268:28: warning: declaration of ‘conn’ shadows a global\ndeclaration [-Wshadow]\nstreamutil.c:346:35: warning: declaration of ‘conn’ shadows a global\ndeclaration [-Wshadow]\nstreamutil.c:400:27: warning: declaration of ‘conn’ shadows a global\ndeclaration [-Wshadow]\nstreamutil.c:481:28: warning: declaration of ‘conn’ shadows a global\ndeclaration [-Wshadow]\nstreamutil.c:575:31: warning: declaration of ‘conn’ shadows a global\ndeclaration [-Wshadow]\nstreamutil.c:683:29: warning: declaration of ‘conn’ shadows a global\ndeclaration [-Wshadow]\npg_receivewal.c:280:11: warning: declaration of 
‘tli’ shadows a\nparameter [-Wshadow]\npg_recvlogical.c:126:22: warning: declaration of ‘conn’ shadows a\nglobal declaration [-Wshadow]\npg_recvlogical.c:1025:30: warning: declaration of ‘conn’ shadows a\nglobal declaration [-Wshadow]\npg_recvlogical.c:1042:28: warning: declaration of ‘conn’ shadows a\nglobal declaration [-Wshadow]\npg_recvlogical.c:1042:45: warning: declaration of ‘endpos’ shadows a\nglobal declaration [-Wshadow]\npg_controldata.c:73:24: warning: declaration of ‘wal_level’ shadows a\nglobal declaration [-Wshadow]\npg_ctl.c:868:36: warning: declaration of ‘argv0’ shadows a global\ndeclaration [-Wshadow]\npg_dump.c:1054:18: warning: declaration of ‘progname’ shadows a global\ndeclaration [-Wshadow]\npg_dump.c:1380:13: warning: declaration of ‘strict_names’ shadows a\nglobal declaration [-Wshadow]\npg_dump.c:1439:16: warning: declaration of ‘strict_names’ shadows a\nglobal declaration [-Wshadow]\npg_dump.c:1543:15: warning: declaration of ‘strict_names’ shadows a\nglobal declaration [-Wshadow]\npg_dump.c:9879:15: warning: declaration of ‘comments’ shadows a global\ndeclaration [-Wshadow]\npg_dump.c:9880:8: warning: declaration of ‘ncomments’ shadows a global\ndeclaration [-Wshadow]\npg_dump.c:9992:15: warning: declaration of ‘comments’ shadows a global\ndeclaration [-Wshadow]\npg_dump.c:9993:8: warning: declaration of ‘ncomments’ shadows a global\ndeclaration [-Wshadow]\npg_dump.c:11661:15: warning: declaration of ‘comments’ shadows a\nglobal declaration [-Wshadow]\npg_dump.c:11662:8: warning: declaration of ‘ncomments’ shadows a\nglobal declaration [-Wshadow]\npg_restore.c:430:19: warning: declaration of ‘progname’ shadows a\nglobal declaration [-Wshadow]\npg_dumpall.c:1846:11: warning: declaration of ‘connstr’ shadows a\nglobal declaration [-Wshadow]\npg_resetwal.c:709:25: warning: declaration of ‘guessed’ shadows a\nglobal declaration [-Wshadow]\nfile_ops.c:189:8: warning: declaration of ‘dstpath’ shadows a global\ndeclaration [-Wshadow]\nfile_ops.c:208:8: warning: declaration of ‘dstpath’ shadows a global\ndeclaration [-Wshadow]\nfile_ops.c:231:8: warning: declaration of ‘dstpath’ shadows a global\ndeclaration [-Wshadow]\nfile_ops.c:245:8: warning: declaration of ‘dstpath’ shadows a global\ndeclaration [-Wshadow]\nfile_ops.c:259:8: warning: declaration of ‘dstpath’ shadows a global\ndeclaration [-Wshadow]\nfile_ops.c:273:8: warning: declaration of ‘dstpath’ shadows a global\ndeclaration [-Wshadow]\npg_rewind.c:90:19: warning: declaration of ‘progname’ shadows a global\ndeclaration [-Wshadow]\npg_rewind.c:541:51: warning: declaration of ‘source’ shadows a global\ndeclaration [-Wshadow]\nxlogreader.c:108:24: warning: declaration of ‘wal_segment_size’\nshadows a global declaration [-Wshadow]\npg_test_fsync.c:621:29: warning: declaration of ‘start_t’ shadows a\nglobal declaration [-Wshadow]\npg_test_fsync.c:621:53: warning: declaration of ‘stop_t’ shadows a\nglobal declaration [-Wshadow]\nxlogreader.c:108:24: warning: declaration of ‘wal_segment_size’\nshadows a global declaration [-Wshadow]\npgbench.c:4536:11: warning: declaration of ‘skipped’ shadows a\nparameter [-Wshadow]\ncommon.c:1912:9: warning: declaration of ‘buf’ shadows a previous\nlocal [-Wshadow]\ndescribe.c:1719:17: warning: declaration of ‘myopt’ shadows a previous\nlocal [-Wshadow]\ndescribe.c:2243:13: warning: declaration of ‘schemaname’ shadows a\nparameter [-Wshadow]\nprompt.c:340:12: warning: declaration of ‘p’ shadows a previous local [-Wshadow]\ntab-complete.c:1630:29: warning: declaration of ‘text’ 
shadows a\nglobal declaration [-Wshadow]\ntab-complete.c:4979:46: warning: declaration of ‘text’ shadows a\nglobal declaration [-Wshadow]\ntab-complete.c:5008:38: warning: declaration of ‘text’ shadows a\nglobal declaration [-Wshadow]\ntab-complete.c:5017:36: warning: declaration of ‘text’ shadows a\nglobal declaration [-Wshadow]\ntab-complete.c:5026:37: warning: declaration of ‘text’ shadows a\nglobal declaration [-Wshadow]\ntab-complete.c:5037:33: warning: declaration of ‘text’ shadows a\nglobal declaration [-Wshadow]\ntab-complete.c:5045:43: warning: declaration of ‘text’ shadows a\nglobal declaration [-Wshadow]\ntab-complete.c:5061:40: warning: declaration of ‘text’ shadows a\nglobal declaration [-Wshadow]\ntab-complete.c:5069:50: warning: declaration of ‘text’ shadows a\nglobal declaration [-Wshadow]\ntab-complete.c:5131:19: warning: declaration of ‘text’ shadows a\nglobal declaration [-Wshadow]\ntab-complete.c:5502:32: warning: declaration of ‘text’ shadows a\nglobal declaration [-Wshadow]\ntab-complete.c:5582:33: warning: declaration of ‘text’ shadows a\nglobal declaration [-Wshadow]\ntab-complete.c:5630:37: warning: declaration of ‘text’ shadows a\nglobal declaration [-Wshadow]\ntab-complete.c:5675:33: warning: declaration of ‘text’ shadows a\nglobal declaration [-Wshadow]\ntab-complete.c:5807:27: warning: declaration of ‘text’ shadows a\nglobal declaration [-Wshadow]\n\n\n",
"msg_date": "Thu, 31 Aug 2023 10:26:42 +1000",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Fix shadow warnings in logical replication code"
}
] |
[
{
"msg_contents": "Recently \\watch got the following help message.\n\n> \\watch [[i=]SEC] [c=N] [m=MIN]\n> execute query every SEC seconds, up to N times\n> stop if less than MIN rows are returned\n\nThe \"m=MIN\" can be a bit misleading. It may look like it's about\ninterval or counts, but it actually refers to the row number, which is\nnot spelled out in the line.\n\nWould it make sense to change it to MINROWS? There's enough room in\nthe line for that change and the doc already uses min_rows.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Wed, 30 Aug 2023 10:21:26 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <[email protected]>",
"msg_from_op": true,
"msg_subject": "A propose to revise \\watch help message"
},
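A hypothetical psql session using the options from the help text quoted above (the query and values are arbitrary):

    SELECT count(*) AS active FROM pg_stat_activity
    \watch i=2 c=10 m=1
    -- re-runs the query every 2 seconds, at most 10 times,
    -- stopping early once it returns fewer than 1 row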
{
"msg_contents": "At Wed, 30 Aug 2023 10:21:26 +0900 (JST), Kyotaro Horiguchi <[email protected]> wrote in \n> Recently \\watch got the following help message.\n> \n> > \\watch [[i=]SEC] [c=N] [m=MIN]\n> > execute query every SEC seconds, up to N times\n> > stop if less than MIN rows are returned\n> \n> The \"m=MIN\" can be a bit misleading. It may look like it's about\n> interval or counts, but it actually refers to the row number, which is\n> not spelled out in the line.\n> \n> Would it make sense to change it to MINROWS? There's enough room in\n> the line for that change and the doc already uses min_rows.\n\nMmm. I noticed the continuation lines are indented too much, probably\nbecause of the backslash escape in the main line. The attached\nincludes the fix for that.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Wed, 30 Aug 2023 10:33:56 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: A propose to revise \\watch help message"
}
] |
[
{
"msg_contents": "Synopsis:\n\n Publisher:\n\n CREATE TABLE x(i INT);\n CREATE TABLE y(i INT);\n INSERT INTO x VALUES(1);\n INSERT INTO y VALUES(-1);\n CREATE PUBLICATION pub1 FOR TABLE x;\n CREATE PUBLICATION pub2 FOR TABLE y;\n\n Subscriber:\n\n CREATE SERVER myserver FOR CONNECTION ONLY OPTIONS (\n host '...', dbname '...'\n );\n CREATE USER MAPPING FOR PUBLIC SERVER myserver OPTIONS (\n user '...', password '...'\n );\n\n CREATE TABLE x(i INT);\n CREATE TABLE y(i INT);\n CREATE SUBSCRIPTION sub1 SERVER myserver PUBLICATION pub1;\n CREATE SUBSCRIPTION sub2 SERVER myserver PUBLICATION pub2;\n\nMotivation:\n\n * Allow managing connections separately from managing the\n subscriptions themselves. For instance, if you update an\n authentication method or the location of the publisher, updating\n the server alone will update all subscriptions at once.\n * Enable separating the privileges to create a subscription from the\n privileges to create a connection string. (By default\n pg_create_subscription has both privileges for compatibility with\n v16, but the connection privilege can be revoked from\n pg_create_subscription, see below.)\n * Enable changing of single connection parameters without pasting\n the rest of the connection string as well. E.g. \"ALTER SERVER\n ... OPTIONS (SET ... '...');\".\n * Benefit from user mappings and ACLs on foreign server object if\n you have multiple roles creating subscriptions.\n\nDetails:\n\nThe attached patch implements \"CREATE SUBSCRIPTION ... SERVER myserver\"\nas an alternative to \"CREATE SUBSCRIPTION ... CONNECTION '...'\". The\nuser must be a member of pg_create_subscription and have USAGE\nprivileges on the server.\n\nThe server \"myserver\" must have been created with the new syntax:\n\n CREATE SERVER myserver FOR CONNECTION ONLY\n\ninstead of specifying FOREIGN DATA WRAPPER. In other words, a server\nFOR CONNECTION ONLY doesn't have a real FDW, it's a special server just\nused for the postgres connection options. To create a server FOR\nCONNECTION ONLY, the user must be a member of the new predefined role\npg_create_connection. A server FOR CONNECTION ONLY still uses ACLs and\nuser mappings the same way as other foreign servers, but cannot be used\nto create foreign tables.\n\nThe predefined role pg_create_subscription is also a member of the role\npg_create_connection, so that existing members of the\npg_create_subscription role may continue to create subscriptions using\nCONNECTION just like in v16 without any additional grant.\n\nSecurity:\n\nOne motivation of this patch is to enable separating the privileges to\ncreate a subscription from the privileges to create a connection\nstring, because each have their own security implications and may be\ndone through separate processes. To separate the privileges, simply\nrevoke pg_create_connection from pg_create_subscription; then you can\ngrant each one independently as you see fit.\n\nFor instance, there may be an administrator that controls what\npostgres instances are available, and what connections may be\nreasonable between those instances. That admin will need the\npg_create_connection role, and can proactively create all the servers\n(using FOR CONNECTION ONLY) and user mappings that may be useful, and\nmanage and update those as necessary without breaking\nsubscriptions. 
Another role may be used to manage the subscriptions\nthemselves, and they would need to be a member of\npg_create_subscription but do not need the privileges to create raw\nconnection strings.\n\nNote: the ability to revoke pg_create_connection from\npg_create_subscription avoids some risks in some environments; but\ncreating a subcription should still be considered a highly privileged\noperation whether using SERVER or CONNECTION.\n\nRemaining work:\n\nThe code for options handling needs some work. It's similar to\npostgres_fdw in behavior, but I didn't spend as much time on it because\nI suspect we will want to refactor the various ways connection strings\nare handled (in CREATE SUBSCRIPTION ... CONNECTION, postgres_fdw, and\ndblink) to make them more consistent.\n\nAlso, there are some nuances in handling connection options that I\ndon't fully understand. postgres_fdw makes a lot of effort: it\noverrides client_encoding, it does a\npost-connection security check, and allows GSS instead of a password\noption for non-superusers. But CREATE SUBSCRIPTION ... CONNECTION makes\nlittle effort, only checking whether the password is specified or not.\nI'd like to understand why they are different and what we can unify.\n\nAlso, right now dblink has it's own dblink_fdw, and perhaps a server\nFOR CONNECTION ONLY should become the preferred method instead.\n\n\n-- \nJeff Davis\nPostgreSQL Contributor Team - AWS",
"msg_date": "Tue, 29 Aug 2023 23:42:00 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "[17] CREATE SUBSCRIPTION ... SERVER"
},
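A short sketch of the privilege separation described above, using the predefined roles proposed in this patch (pg_create_connection is part of the proposal, not of released PostgreSQL; the two admin roles are invented for the example):

    -- run as superuser
    CREATE ROLE connection_admin;
    CREATE ROLE replication_admin;
    REVOKE pg_create_connection FROM pg_create_subscription;
    GRANT pg_create_connection TO connection_admin;     -- may create servers FOR CONNECTION ONLY
    GRANT pg_create_subscription TO replication_admin;  -- may CREATE SUBSCRIPTION ... SERVER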
{
"msg_contents": "Hi Jeff,\n\nOn Wed, Aug 30, 2023 at 2:12 PM Jeff Davis <[email protected]> wrote:\n>\n> The server \"myserver\" must have been created with the new syntax:\n>\n> CREATE SERVER myserver FOR CONNECTION ONLY\n>\n> instead of specifying FOREIGN DATA WRAPPER. In other words, a server\n> FOR CONNECTION ONLY doesn't have a real FDW, it's a special server just\n> used for the postgres connection options. To create a server FOR\n> CONNECTION ONLY, the user must be a member of the new predefined role\n> pg_create_connection. A server FOR CONNECTION ONLY still uses ACLs and\n> user mappings the same way as other foreign servers, but cannot be used\n> to create foreign tables.\n\nAre you suggesting that SERVERs created with FDW can not be used as\npublishers? I think there's value in knowing that the publisher which\ncontains a replica of a table is the same as the foreign server which\nis referenced by another foreign table. We can push down a join\nbetween a replicated table and foreign table down to the foreign\nserver. A basic need for sharding with replicated tables. Of course\nthere's a lot work that we have to do in order to actually achieve\nsuch a push down but by restricting this feature to only CONNECTION\nONLY, we are restricting the possibility of such a push down.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Wed, 30 Aug 2023 19:11:59 +0530",
"msg_from": "Ashutosh Bapat <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [17] CREATE SUBSCRIPTION ... SERVER"
},
{
"msg_contents": "Jeff Davis <[email protected]> writes:\n> The server \"myserver\" must have been created with the new syntax:\n> CREATE SERVER myserver FOR CONNECTION ONLY\n> instead of specifying FOREIGN DATA WRAPPER. In other words, a server\n> FOR CONNECTION ONLY doesn't have a real FDW, it's a special server just\n> used for the postgres connection options.\n\nThis seems like it requires a whole lot of new mechanism (parser\nand catalog infrastructure) that could be done far more easily\nin other ways. In particular, how about inventing a built-in\ndummy FDW to serve the purpose? That could have some use for\nother testing as well.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 30 Aug 2023 09:49:45 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [17] CREATE SUBSCRIPTION ... SERVER"
},
{
"msg_contents": "On Wed, 2023-08-30 at 19:11 +0530, Ashutosh Bapat wrote:\n> Are you suggesting that SERVERs created with FDW can not be used as\n> publishers?\n\nCorrect. Without that, how would the subscription know that the FDW\ncontains valid postgres connection information? I suppose it could\ncreate a connection string out of the options itself and do another\nround of validation, is that what you had in mind?\n\n> We can push down a join\n> between a replicated table and foreign table down to the foreign\n> server.\n\nInteresting idea.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Wed, 30 Aug 2023 08:30:45 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [17] CREATE SUBSCRIPTION ... SERVER"
},
{
"msg_contents": "On Wed, 2023-08-30 at 09:49 -0400, Tom Lane wrote:\n> This seems like it requires a whole lot of new mechanism (parser\n> and catalog infrastructure) that could be done far more easily\n> in other ways. In particular, how about inventing a built-in\n> dummy FDW to serve the purpose?\n\nThat was my initial approach, but it was getting a bit messy.\n\nFDWs don't have a schema, so we can't put it in pg_catalog, and names\nbeginning with \"pg_\" aren't restricted now. Should I retroactively\nrestrict FDW names that begin with \"pg_\"? Or just use special cases in\npg_dump and elsewhere? Also I didn't see a great place to document it.\n\nAdmittedly, I didn't complete the dummy-FDW approach, so perhaps it\nworks out better overall. I can give it a try.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Wed, 30 Aug 2023 09:09:43 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [17] CREATE SUBSCRIPTION ... SERVER"
},
{
"msg_contents": "On Wed, Aug 30, 2023 at 9:00 PM Jeff Davis <[email protected]> wrote:\n>\n> On Wed, 2023-08-30 at 19:11 +0530, Ashutosh Bapat wrote:\n> > Are you suggesting that SERVERs created with FDW can not be used as\n> > publishers?\n>\n> Correct. Without that, how would the subscription know that the FDW\n> contains valid postgres connection information? I suppose it could\n> create a connection string out of the options itself and do another\n> round of validation, is that what you had in mind?\n\nThe server's FDW has to be postgres_fdw. So we have to handle the\nawkward dependency between core and postgres_fdw (an extension). The\nconnection string should be created from options itself. A special\nuser mapping for replication may be used. That's how I see it at a\nhigh level.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Thu, 31 Aug 2023 10:59:03 +0530",
"msg_from": "Ashutosh Bapat <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [17] CREATE SUBSCRIPTION ... SERVER"
},
{
"msg_contents": "On Wed, Aug 30, 2023 at 1:19 PM Jeff Davis <[email protected]> wrote:\n> On Wed, 2023-08-30 at 09:49 -0400, Tom Lane wrote:\n> > This seems like it requires a whole lot of new mechanism (parser\n> > and catalog infrastructure) that could be done far more easily\n> > in other ways. In particular, how about inventing a built-in\n> > dummy FDW to serve the purpose?\n>\n> That was my initial approach, but it was getting a bit messy.\n>\n> FDWs don't have a schema, so we can't put it in pg_catalog, and names\n> beginning with \"pg_\" aren't restricted now. Should I retroactively\n> restrict FDW names that begin with \"pg_\"? Or just use special cases in\n> pg_dump and elsewhere? Also I didn't see a great place to document it.\n>\n> Admittedly, I didn't complete the dummy-FDW approach, so perhaps it\n> works out better overall. I can give it a try.\n\nWhat I feel is kind of weird about this syntax is that it seems like\nit's entangled with the FDW mechanism but doesn't really overlap with\nit. You could have something that is completely separate (CREATE\nSUBSCRIPTION CONNECTION) or something that truly does have some\noverlap (no new syntax and a dummy fdw, as Tom proposes, or somehow\nknowing that postgres_fdw is special, as Ashutosh proposes). But this\nseems like sort of an odd middle ground.\n\nI also think that the decision to make pg_create_connection a member\nof pg_create_subscription by default, but encouraging users to think\nabout revoking it, is kind of strange. I don't think we really want to\nencourage users to tinker with predefined roles in this kind of way. I\nthink there are better ways of achieving the goals here.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 31 Aug 2023 08:37:49 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [17] CREATE SUBSCRIPTION ... SERVER"
},
{
"msg_contents": "On Wed, 2023-08-30 at 09:09 -0700, Jeff Davis wrote:\n> Admittedly, I didn't complete the dummy-FDW approach, so perhaps it\n> works out better overall. I can give it a try.\n\nWe need to hide the dummy FDW from pg_dump. And we need to hide it from\npsql's \\dew, because that's used in tests and prints the owner's name,\nand the bootstrap superuser doesn't have a consistent name. But I\ndidn't find a good way to hide it because it doesn't have a schema.\n\nThe best I could come up with is special-casing by the name, but that\nseems like a pretty bad hack. For other built-in objects, psql is\nwilling to print them out if you just specify something like \"\\dT\npg_catalog.*\", but that wouldn't work here. We could maybe do something\nbased on the \"pg_\" prefix, but we'd have to retroactively restrict FDWs\nwith that prefix, which sounds like a bad idea.\n\nSuggestions?\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Thu, 31 Aug 2023 09:50:45 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [17] CREATE SUBSCRIPTION ... SERVER"
},
{
"msg_contents": "On Thu, 2023-08-31 at 10:59 +0530, Ashutosh Bapat wrote:\n> The server's FDW has to be postgres_fdw. So we have to handle the\n> awkward dependency between core and postgres_fdw (an extension).\n\nThat sounds more than just \"awkward\". I can't think of any precedent\nfor that and it seems to violate the idea of an \"extension\" entirely.\n\nCan you explain more concretely how we might resolve that?\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Thu, 31 Aug 2023 09:52:59 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [17] CREATE SUBSCRIPTION ... SERVER"
},
{
"msg_contents": "On Thu, 2023-08-31 at 08:37 -0400, Robert Haas wrote:\n> What I feel is kind of weird about this syntax is that it seems like\n> it's entangled with the FDW mechanism but doesn't really overlap with\n> it.\n\nI like the fact that it works with user mappings and benefits from the\nother thinking that's gone into that system. I would call that a\n\"feature\" not an \"entanglement\".\n\n> You could have something that is completely separate (CREATE\n> SUBSCRIPTION CONNECTION)\n\nI thought about that but it would be a new object type with a new\ncatalog and I didn't really see an upside. It would open up questions\nabout permissions, raw string vs individual options, whether we need\nuser mappings or not, etc., and those have all been worked out already\nwith foreign servers.\n\n> or something that truly does have some\n> overlap (no new syntax and a dummy fdw, as Tom proposes, or somehow\n> knowing that postgres_fdw is special, as Ashutosh proposes).\n\nI ran into a (perhaps very minor?) challenge[1] with the dummy FDW:\n\nhttps://www.postgresql.org/message-id/[email protected]\n\nsuggestions welcome there, of course.\n\nRegarding core code depending on postgres_fdw: how would that work?\nWould that be acceptable?\n\n> But this\n> seems like sort of an odd middle ground.\n\nI assume here that you're talking about the CREATE SERVER ... FOR\nCONNECTION ONLY syntax. I don't think it's odd. We have lots of objects\nthat are a lot like another object but treated differently for various\nreasons. A foreign table is an obvious example.\n\n> I also think that the decision to make pg_create_connection a member\n> of pg_create_subscription by default, but encouraging users to think\n> about revoking it, is kind of strange. I don't think we really want\n> to\n> encourage users to tinker with predefined roles in this kind of way.\n> I\n> think there are better ways of achieving the goals here.\n\nSuch as?\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Thu, 31 Aug 2023 10:28:42 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [17] CREATE SUBSCRIPTION ... SERVER"
},
{
"msg_contents": "On 8/31/23 12:52, Jeff Davis wrote:\n> On Thu, 2023-08-31 at 10:59 +0530, Ashutosh Bapat wrote:\n>> The server's FDW has to be postgres_fdw. So we have to handle the\n>> awkward dependency between core and postgres_fdw (an extension).\n> \n> That sounds more than just \"awkward\". I can't think of any precedent\n> for that and it seems to violate the idea of an \"extension\" entirely.\n> \n> Can you explain more concretely how we might resolve that?\n\n\nMaybe move postgres_fdw to be a first class built in feature instead of \nan extension?\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n",
"msg_date": "Thu, 31 Aug 2023 17:17:29 -0400",
"msg_from": "Joe Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [17] CREATE SUBSCRIPTION ... SERVER"
},
{
"msg_contents": "On Fri, Sep 1, 2023 at 2:47 AM Joe Conway <[email protected]> wrote:\n>\n> On 8/31/23 12:52, Jeff Davis wrote:\n> > On Thu, 2023-08-31 at 10:59 +0530, Ashutosh Bapat wrote:\n> >> The server's FDW has to be postgres_fdw. So we have to handle the\n> >> awkward dependency between core and postgres_fdw (an extension).\n> >\n> > That sounds more than just \"awkward\". I can't think of any precedent\n> > for that and it seems to violate the idea of an \"extension\" entirely.\n> >\n> > Can you explain more concretely how we might resolve that?\n>\n>\n> Maybe move postgres_fdw to be a first class built in feature instead of\n> an extension?\n\nYes, that's one way.\n\nThinking larger, how about we allow any FDW to be used here. We might\nas well, allow extensions to start logical receivers which accept\nchanges from non-PostgreSQL databases. So we don't have to make an\nexception for postgres_fdw. But I think there's some value in bringing\ntogether these two subsystems which deal with foreign data logically\n(as in logical vs physical view of data).\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Fri, 1 Sep 2023 12:28:43 +0530",
"msg_from": "Ashutosh Bapat <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [17] CREATE SUBSCRIPTION ... SERVER"
},
{
"msg_contents": "On Fri, 2023-09-01 at 12:28 +0530, Ashutosh Bapat wrote:\n> Thinking larger, how about we allow any FDW to be used here.\n\nThat's a possibility, but I think that means the subscription would\nneed to constantly re-check the parameters rather than relying on the\nFDW's validator.\n\nOtherwise it might be the wrong kind of FDW, and the user might be able\nto circumvent the password_required protection. It might not even be a\npostgres-related FDW at all, which would be a bit strange.\n\nIf it's constantly re-checking the parameters then it raises the\npossibility that some \"ALTER SERVER\" or \"ALTER USER MAPPING\" succeeds\nbut then subscriptions to that foreign server start failing, which\nwould not be ideal. But I could be fine with that.\n\n> But I think there's some value in bringing\n> together these two subsystems which deal with foreign data logically\n> (as in logical vs physical view of data).\n\nI still don't understand how a core dependency on an extension would\nwork.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Fri, 01 Sep 2023 11:54:44 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [17] CREATE SUBSCRIPTION ... SERVER"
},
{
"msg_contents": "On Thu, 2023-08-31 at 17:17 -0400, Joe Conway wrote:\n> Maybe move postgres_fdw to be a first class built in feature instead\n> of \n> an extension?\n\nThat could make sense, but we still have to solve the problem of how to\npresent a built-in FDW.\n\nFDWs don't have a schema, so it can't be inside pg_catalog. So we'd\nneed some special logic somewhere to make pg_dump and psql \\dew work as\nexpected, and I'm not quite sure what to do there.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Fri, 01 Sep 2023 11:57:01 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [17] CREATE SUBSCRIPTION ... SERVER"
},
{
"msg_contents": "On Fri, Sep 1, 2023 at 4:04 PM Jeff Davis <[email protected]> wrote:\n> On Thu, 2023-08-31 at 17:17 -0400, Joe Conway wrote:\n> > Maybe move postgres_fdw to be a first class built in feature instead\n> > of\n> > an extension?\n>\n> That could make sense, but we still have to solve the problem of how to\n> present a built-in FDW.\n>\n> FDWs don't have a schema, so it can't be inside pg_catalog. So we'd\n> need some special logic somewhere to make pg_dump and psql \\dew work as\n> expected, and I'm not quite sure what to do there.\n\nI'm worried that an approach based on postgres_fdw would have security\nproblems. I think that we don't want postgres_fdw installed in every\nPostgreSQL cluster for security reasons. And I think that the set of\npeople who should be permitted to manage connection strings for\nlogical replication subscriptions could be different from the set of\npeople who are entitled to use postgres_fdw.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 1 Sep 2023 16:11:30 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [17] CREATE SUBSCRIPTION ... SERVER"
},
{
"msg_contents": "On Sat, Sep 2, 2023 at 12:24 AM Jeff Davis <[email protected]> wrote:\n>\n> On Fri, 2023-09-01 at 12:28 +0530, Ashutosh Bapat wrote:\n> > Thinking larger, how about we allow any FDW to be used here.\n>\n> That's a possibility, but I think that means the subscription would\n> need to constantly re-check the parameters rather than relying on the\n> FDW's validator.\n>\n> Otherwise it might be the wrong kind of FDW, and the user might be able\n> to circumvent the password_required protection. It might not even be a\n> postgres-related FDW at all, which would be a bit strange.\n>\n> If it's constantly re-checking the parameters then it raises the\n> possibility that some \"ALTER SERVER\" or \"ALTER USER MAPPING\" succeeds\n> but then subscriptions to that foreign server start failing, which\n> would not be ideal. But I could be fine with that.\n\nWhy do we need to re-check parameters constantly? We will need to\nrestart subscriptions which are using the user mapping of FDW when\nuser mapping or server options change. If that mechanism isn't there,\nwe will need to build it. But that's doable.\n\nI didn't understand your worry about circumventing password_required protection.\n\n>\n> > But I think there's some value in bringing\n> > together these two subsystems which deal with foreign data logically\n> > (as in logical vs physical view of data).\n>\n> I still don't understand how a core dependency on an extension would\n> work.\n\nWe don't need to if we allow any FDW (even if non-postgreSQL) to be\nspecified there. For non-postgresql FDW the receiver will need to\nconstruct the appropriate command and use appropriate protocol to get\nthe changes and apply locally. The server at the other end may not\neven have logical replication capability. The extension or \"input\nplugin\" (as against output plugin) would decide whether it can deal\nwith the foreign server specific logical replication protocol. We add\nanother callback to FDW which can return whether the given foreign\nserver supports logical replication or not. If the users\nmisconfigured, their subscriptions will throw errors.\n\nBut with this change we open a built-in way to \"replicate in\" as we\nhave today to \"replicate out\".\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Mon, 4 Sep 2023 18:01:57 +0530",
"msg_from": "Ashutosh Bapat <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [17] CREATE SUBSCRIPTION ... SERVER"
},
{
"msg_contents": "On Sat, Sep 2, 2023 at 1:41 AM Robert Haas <[email protected]> wrote:\n>\n> On Fri, Sep 1, 2023 at 4:04 PM Jeff Davis <[email protected]> wrote:\n> > On Thu, 2023-08-31 at 17:17 -0400, Joe Conway wrote:\n> > > Maybe move postgres_fdw to be a first class built in feature instead\n> > > of\n> > > an extension?\n> >\n> > That could make sense, but we still have to solve the problem of how to\n> > present a built-in FDW.\n> >\n> > FDWs don't have a schema, so it can't be inside pg_catalog. So we'd\n> > need some special logic somewhere to make pg_dump and psql \\dew work as\n> > expected, and I'm not quite sure what to do there.\n>\n> I'm worried that an approach based on postgres_fdw would have security\n> problems. I think that we don't want postgres_fdw installed in every\n> PostgreSQL cluster for security reasons. And I think that the set of\n> people who should be permitted to manage connection strings for\n> logical replication subscriptions could be different from the set of\n> people who are entitled to use postgres_fdw.\n\nIf postgres_fdw was the only way to specify a connection to be used\nwith subscriptions, what you are saying makes sense. But it's not. We\nwill continue to support current mechanism which doesn't require\npostgres_fdw to be installed on every PostgreSQL cluster.\n\nWhat security problems do you foresee if postgres_fdw is used in\naddition to the current mechanism?\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Mon, 4 Sep 2023 18:04:28 +0530",
"msg_from": "Ashutosh Bapat <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [17] CREATE SUBSCRIPTION ... SERVER"
},
{
"msg_contents": "On Mon, 2023-09-04 at 18:01 +0530, Ashutosh Bapat wrote:\n> Why do we need to re-check parameters constantly? We will need to\n> restart subscriptions which are using the user mapping of FDW when\n> user mapping or server options change.\n\n\"Constantly\" was an exaggeration, but the point is that it's a separate\nvalidation step after the ALTER SERVER or ALTER USER MAPPING has\nalready happened, so the subscription would start failing.\n\nPerhaps this is OK, but it's not the ideal user experience. Ideally,\nthe user would get some indication from the ALTER SERVER or ALTER USER\nMAPPING that it's about to break a subscription that depends on it.\n\n> I didn't understand your worry about circumventing password_required\n> protection.\n\nIf the subscription doesn't do its own validation, and if the FDW\ndoesn't ensure that the password is set, then it could end up creating\na creating a connection string without supplying the password.\n\n> We don't need to if we allow any FDW (even if non-postgreSQL) to be\n> specified there.\n\nOK, so we could have a built-in FDW called pg_connection that would do\nthe right kinds of validation; and then also allow other FDWs but the\nsubscription would have to do its own validation.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Tue, 05 Sep 2023 12:08:52 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [17] CREATE SUBSCRIPTION ... SERVER"
},
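A small sketch of the failure mode described above: the DDL on the server or user mapping succeeds on its own, and a subscription built on it only fails afterwards, once its worker picks up the change (object names and option values are hypothetical):

    -- both statements succeed immediately, even if they break subscriptions
    -- that connect through pubserver
    ALTER SERVER pubserver OPTIONS (SET host 'replica.example.com');
    ALTER USER MAPPING FOR CURRENT_USER SERVER pubserver
        OPTIONS (SET password 'wrong-secret');
    -- only when the apply worker rebuilds its connection string and restarts
    -- does the subscription start reporting connection failures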
{
"msg_contents": "On Tue, 2023-09-05 at 12:08 -0700, Jeff Davis wrote:\n> OK, so we could have a built-in FDW called pg_connection that would\n> do\n> the right kinds of validation; and then also allow other FDWs but the\n> subscription would have to do its own validation.\n\nWhile working on this, I found a minor bug and there's another\ndiscussion happening here:\n\nhttps://www.postgresql.org/message-id/e5892973ae2a80a1a3e0266806640dae3c428100.camel%40j-davis.com\n\nIt looks like that's going in the direction of checking for the\npresence of a password in the connection string at connection time.\n\nAshutosh, that's compatible with your suggestion that CREATE\nSUBSCRIPTION ... SERVER works for any FDW that supplies the right\ninformation, because we need to validate it at connection time anyway.\nI'll wait to see how that discussion gets resolved, and then I'll post\nthe next version.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Tue, 12 Sep 2023 15:55:52 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [17] CREATE SUBSCRIPTION ... SERVER"
},
{
"msg_contents": "On Tue, 2023-09-05 at 12:08 -0700, Jeff Davis wrote:\n> OK, so we could have a built-in FDW called pg_connection that would\n> do\n> the right kinds of validation; and then also allow other FDWs but the\n> subscription would have to do its own validation.\n\nAttached a rough rebased version implementing the above with a\npg_connection_fdw foreign data wrapper built in.\n\nRegards,\n\tJeff Davis",
"msg_date": "Fri, 29 Dec 2023 15:22:23 -0800",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [17] CREATE SUBSCRIPTION ... SERVER"
},
{
"msg_contents": "On Fri, 2023-12-29 at 15:22 -0800, Jeff Davis wrote:\n> On Tue, 2023-09-05 at 12:08 -0700, Jeff Davis wrote:\n> > OK, so we could have a built-in FDW called pg_connection that would\n> > do\n> > the right kinds of validation; and then also allow other FDWs but\n> > the\n> > subscription would have to do its own validation.\n> \n> Attached a rough rebased version.\n\nAttached a slightly better version which fixes a pg_dump issue and\nimproves the documentation.\n\nRegards,\n\tJeff Davis",
"msg_date": "Sun, 31 Dec 2023 10:59:23 -0800",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [17] CREATE SUBSCRIPTION ... SERVER"
},
{
"msg_contents": "On Mon, Jan 1, 2024 at 12:29 AM Jeff Davis <[email protected]> wrote:\n>\n> On Fri, 2023-12-29 at 15:22 -0800, Jeff Davis wrote:\n> > On Tue, 2023-09-05 at 12:08 -0700, Jeff Davis wrote:\n> > > OK, so we could have a built-in FDW called pg_connection that would\n> > > do\n> > > the right kinds of validation; and then also allow other FDWs but\n> > > the\n> > > subscription would have to do its own validation.\n> >\n> > Attached a rough rebased version.\n>\n> Attached a slightly better version which fixes a pg_dump issue and\n> improves the documentation.\n\nHi, I spent some time today reviewing the v4 patch and below are my\ncomments. BTW, the patch needs a rebase due to commit 9a17be1e2.\n\n1.\n+ /*\n+ * We don't want to allow unprivileged users to be able to trigger\n+ * attempts to access arbitrary network destinations, so\nrequire the user\n+ * to have been specifically authorized to create connections.\n+ */\n+ if (!has_privs_of_role(owner, ROLE_PG_CREATE_CONNECTION))\n\nCan the pg_create_connection predefined role related code be put into\na separate 0001 patch? I think this can go in a separate commit.\n\n2. Can one use {FDW, user_mapping, foreign_server} combo other than\nthe built-in pg_connection_fdw? If yes, why to allow say oracle_fdw\nforeign server and user mapping with logical replication? Isn't this a\nsecurity concern?\n\n3. I'd like to understand how the permission model works with this\nfeature amidst various users a) subscription owner b) table owner c)\nFDW owner d) user mapping owner e) foreign server owner f) superuser\ng) user with which logical replication bg workers (table sync,\n{parallel} apply workers) are started up and running.\nWhat if foreign server owner doesn't have permissions on the table\nbeing applied by logical replication bg workers?\nWhat if foreign server owner is changed with ALTER SERVER ... OWNER TO\nwhen logical replication is in-progress?\nWhat if the owner of {FDW, user_mapping, foreign_server} is different\nfrom a subscription owner with USAGE privilege granted? Can the\nsubscription still use the foreign server?\n\n4. How does the invalidation of {FDW, user_mapping, foreign_server}\naffect associated subscription and vice-versa?\n\n5. What if the password is changed in user mapping with ALTER USER\nMAPPING? Will it refresh the subscription so that all the logical\nreplication workers get restarted with new connection info?\n\n6. How does this feature fit if a subscription is created with\nrun_as_owner? 
Will it check if the table owner has permissions to use\n{FDW, user_mapping, foreign_server} comob?\n\n7.\n+ if (strcmp(d->defname, \"user\") == 0 ||\n+ strcmp(d->defname, \"password\") == 0 ||\n+ strcmp(d->defname, \"sslpassword\") == 0 ||\n+ strcmp(d->defname, \"password_required\") == 0)\n+ ereport(ERROR,\n+ (errmsg(\"invalid option \\\"%s\\\" for pg_connection_fdw\",\n\n+ ereport(ERROR,\n+ (errmsg(\"invalid user mapping option \\\"%s\\\"\nfor pg_connection_fdw\",\n+ d->defname)));\n\nCan we emit an informative error message and hint using\ninitClosestMatch, updateClosestMatch, getClosestMatch similar to other\nFDWs elsewhere in the code?\n\n8.\n+ errmsg(\"password is required\"),\n+ errdetail(\"Non-superusers must provide a\npassword in the connection string.\")));\n\nThe error message and detail look generic, can it be improved to\ninclude something about pg_connection_fdw?\n\n9.\n+{ oid => '6015', oid_symbol => 'PG_CONNECTION_FDW',\n+ descr => 'Pseudo FDW for connections to Postgres',\n+ fdwname => 'pg_connection_fdw', fdwowner => 'POSTGRES',\n\nWhat if the database cluster is initialized with an owner different\nthan 'POSTGRES' at the time of initdb? Will the fdwowner be correct in\nthat case?\n\n10.\n+# src/include/catalog/pg_foreign_data_wrapper.dat\n+{ oid => '6015', oid_symbol => 'PG_CONNECTION_FDW',\n\nDo we want to REVOKE USAGE ON FOREIGN DATA WRAPPER pg_connection_fdw\nFROM PUBLIC and REVOKE EXECUTE ON its handler functions? With this,\nthe permissions are granted explicitly to the foreign server/user\nmapping creators.\n\n11. How about splitting patches in the following manner for better\nmanageability (all of which can go as separate commits) of this\nfeature?\n0001 for pg_create_connection predefined role per comment #1.\n0002 for introducing in-built FDW pg_connection_fdw.\n0003 utilizing in-built FDW for logical replication to provide CREATE\nSUBSCRIPTION ... SERVER.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 2 Jan 2024 15:14:36 +0530",
"msg_from": "Bharath Rupireddy <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [17] CREATE SUBSCRIPTION ... SERVER"
},
{
"msg_contents": "On Tue, 2024-01-02 at 15:14 +0530, Bharath Rupireddy wrote:\n> Can the pg_create_connection predefined role related code be put into\n> a separate 0001 patch? I think this can go in a separate commit.\n\nDone (see below for details).\n\n> 2. Can one use {FDW, user_mapping, foreign_server} combo other than\n> the built-in pg_connection_fdw?\n\nYes, you can use any FDW for which you have USAGE privileges, passes\nthe validations, and provides enough of the expected fields to form a\nconnection string.\n\nThere was some discussion on this point already. Initially, I\nimplemented it with more catalog and grammar support, which improved\nerror checking, but others objected that the grammar wasn't worth it\nand that it was too inflexible. See:\n\nhttps://www.postgresql.org/message-id/172273.1693403385%40sss.pgh.pa.us\nhttps://www.postgresql.org/message-id/CAExHW5unvpDv6yMSmqurHP7Du1PqoJFWVxeK-4YNm5EnoNJiSQ%40mail.gmail.com\n\n> If yes, why to allow say oracle_fdw\n> foreign server and user mapping with logical replication? Isn't this\n> a\n> security concern?\n\nA user would need USAGE privileges on that other FDW and also must be a\nmember of pg_create_subscription.\n\nIn v16, a user with such privileges would already be able to create\nsuch connection by specifying the raw connection string, so that's not\na new risk with my proposal.\n\n> 3. I'd like to understand how the permission model works with this\n> feature amidst various users a) subscription owner b) table owner c)\n> FDW owner d) user mapping owner e) foreign server owner f) superuser\n> g) user with which logical replication bg workers (table sync,\n> {parallel} apply workers) are started up and running.\n\n(a) The subscription owner is only relevant if the subscription is\ncreated with run_as_owner=true, in which case the logical worker\napplies the changes with the privileges of the subscription owner. [No\nchange.]\n(b) The table owner is only relevant if the subscription is created\nwith run_as_owner=false (default), in which case the logical worker\napplies the changes with the privileges of the table owner. [No\nchange.]\n(c) The FDW owner is irrelevant, though the creator of a foreign server\nmust have USAGE privileges on it. [No change.]\n(d) User mappings do not have owners. [No change.]\n(e) The foreign server owner is irrelevant, but USAGE privileges on the\nforeign server are needed to create a subscription to it. [New\nbehavior.]\n(f) Not sure what you mean here, but superusers can do anything. [No\nchange.]\n(g) The actual user the process runs as is still the subscription\nowner. If run_as_owner=false, the actions are performed as the table\nowner; if run_as_owner=true, the actions are performed as the\nsubscription owner. [No change.]\n\nThere are only two actual changes to the model:\n\n1. Users with USAGE privileges on a foreign server can create\nsubscriptions using that foreign server instead of a connection string\n(they still need to be a member of pg_create_subscription).\n\n2. 
I created a conceptual separation of privileges between\npg_create_subscription and pg_create_connection; though by default\npg_create_subscription has exactly the same capabilities as before.\nThere is no behavior change unless the administrator revokes\npg_create_connection from pg_create_subscription.\n\nI'd like to also add the capability for subscriptions to a server to\nuse a passwordless connection as long as the server is trusted somehow.\nThe password_required subscription option is already fairly complex, so\nwe'd need to come up with a sensible way for those options to interact.\n\n> What if foreign server owner doesn't have permissions on the table\n> being applied by logical replication bg workers?\n\nThe owner of the foreign server is irrelevant -- only the USAGE\nprivileges on that foreign server matter, and only when it comes to\ncreating subscriptions.\n\n> What if foreign server owner is changed with ALTER SERVER ... OWNER\n> TO\n> when logical replication is in-progress?\n\nThat should have no effect as long as the USAGE priv is still present.\n\nNote that if the owner of the *subscription* changes, it may find a\ndifferent user mapping.\n\n> What if the owner of {FDW, user_mapping, foreign_server} is\n> different\n> from a subscription owner with USAGE privilege granted? Can the\n> subscription still use the foreign server?\n\nYes.\n\n> 4. How does the invalidation of {FDW, user_mapping, foreign_server}\n> affect associated subscription and vice-versa?\n\nIf the user mapping or foreign server change, it causes the apply\nworker to re-build the connection string from those objects and restart\nif something important changed.\n\nIf the FDW changes I don't think that matters.\n\n> 5. What if the password is changed in user mapping with ALTER USER\n> MAPPING? Will it refresh the subscription so that all the logical\n> replication workers get restarted with new connection info?\n\nYes. Notice the subscription_change_cb.\n\nThat's actually one of the nice features -- if your connection info\nchanges, update it in one place to affect all subscriptions to that\nserver.\n\n> 6. How does this feature fit if a subscription is created with\n> run_as_owner? Will it check if the table owner has permissions to use\n> {FDW, user_mapping, foreign_server} comob?\n\nSee above.\n\n> Can we emit an informative error message and hint using\n> initClosestMatch, updateClosestMatch, getClosestMatch similar to\n> other\n> FDWs elsewhere in the code?\n\nDone.\n\n> 8.\n> + errmsg(\"password is required\"),\n> + errdetail(\"Non-superusers must provide a\n> password in the connection string.\")));\n> \n> The error message and detail look generic, can it be improved to\n> include something about pg_connection_fdw?\n\nI believe this is addressed after some refactoring -- the FDW itself\ndoesn't try to validate that a password exists, because we can't rely\non that anyway (someone can use an FDW with no validation or different\nvalidation). 
Instead, the subscription does this validation.\n\nNote that there is an unrelated hole in the way the subscription does\nthe validation of password_required, which will be addressed separately\nas a part of this other thread:\n\nhttps://www.postgresql.org/message-id/e5892973ae2a80a1a3e0266806640dae3c428100.camel%40j-davis.com\n\n> 9.\n> +{ oid => '6015', oid_symbol => 'PG_CONNECTION_FDW',\n> + descr => 'Pseudo FDW for connections to Postgres',\n> + fdwname => 'pg_connection_fdw', fdwowner => 'POSTGRES',\n> \n> What if the database cluster is initialized with an owner different\n> than 'POSTGRES' at the time of initdb? Will the fdwowner be correct\n> in\n> that case?\n\nThank you, I changed it to use the conventional BKI_DEFAULT(POSTGRES)\ninstead. (The previous way worked, but was not consistent with existing\npatterns.)\n\n> 10.\n> +# src/include/catalog/pg_foreign_data_wrapper.dat\n> +{ oid => '6015', oid_symbol => 'PG_CONNECTION_FDW',\n> \n> Do we want to REVOKE USAGE ON FOREIGN DATA WRAPPER pg_connection_fdw\n> FROM PUBLIC\n\nThe FDW doesn't have USAGE privileges by default so we don't need to\nrevoke them.\n\n> and REVOKE EXECUTE ON its handler functions?\n\nIt has no handler function.\n\nI don't see a reason to restrict privileges on\npostgresql_fdw_validator(); it seems useful for testing/debugging.\n\n> 11. How about splitting patches in the following manner for better\n> manageability (all of which can go as separate commits) of this\n> feature?\n> 0001 for pg_create_connection predefined role per comment #1.\n> 0002 for introducing in-built FDW pg_connection_fdw.\n> 0003 utilizing in-built FDW for logical replication to provide CREATE\n> SUBSCRIPTION ... SERVER.\n\nGood suggestion, though I split it a bit differently:\n\n0001: fix postgresql_fdw_validator to use libpq options via walrcv\nmethod. This is appropriate for looser validation that doesn't try to\ncheck for password_required or that a password is set -- that's left up\nto the subscription.\n\n0002: built-in pg_connection_fdw, also includes code for validation and\ntransforming into a connection string. This creates a lot of test diffs\nin foreign_data.out because I need to exclude the built in FDW (it's\nowned by the bootstrap supseruser which is not a stable username). It\nwould be nice if there was a way to use a negative-matching regex in a\npsql \\dew+ command -- something like \"(?!pg_)*\" -- but I couldn't find\na way to do that because \"(?...)\" seems to not work in psql. Let me\nknow if you know a trick to do so.\n\n0003: CREATE SUBSCRIPTION... SERVER.\n\n0004: Add pg_create_connection role.\n\nRegards,\n\tJeff Davis",
"msg_date": "Thu, 04 Jan 2024 16:56:11 -0800",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [17] CREATE SUBSCRIPTION ... SERVER"
},
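A short sketch of the privilege split described in the reply above; pg_create_connection is the role proposed in patch 0004 rather than an existing predefined role, and the user and server names are hypothetical:

    -- by default pg_create_subscription keeps its old capabilities; an
    -- administrator can opt into the separation
    REVOKE pg_create_connection FROM pg_create_subscription;

    -- alice can now create subscriptions, but only to foreign servers she
    -- has USAGE on; she cannot supply a raw connection string of her own
    GRANT pg_create_subscription TO alice;
    GRANT USAGE ON FOREIGN SERVER pubserver TO alice;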
{
"msg_contents": "On Fri, Jan 5, 2024 at 6:26 AM Jeff Davis <[email protected]> wrote:\n\n>\n> > 2. Can one use {FDW, user_mapping, foreign_server} combo other than\n> > the built-in pg_connection_fdw?\n>\n> Yes, you can use any FDW for which you have USAGE privileges, passes\n> the validations, and provides enough of the expected fields to form a\n> connection string.\n>\n> There was some discussion on this point already. Initially, I\n> implemented it with more catalog and grammar support, which improved\n> error checking, but others objected that the grammar wasn't worth it\n> and that it was too inflexible. See:\n>\n> https://www.postgresql.org/message-id/172273.1693403385%40sss.pgh.pa.us\n> https://www.postgresql.org/message-id/CAExHW5unvpDv6yMSmqurHP7Du1PqoJFWVxeK-4YNm5EnoNJiSQ%40mail.gmail.com\n>\n\nCan you please provide an example using postgres_fdw to create a\nsubscription using this patch. I think we should document it in\npostgres_fdw and add a test for the same.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Fri, 5 Jan 2024 12:49:20 +0530",
"msg_from": "Ashutosh Bapat <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [17] CREATE SUBSCRIPTION ... SERVER"
},
{
"msg_contents": "On Fri, 2024-01-05 at 12:49 +0530, Ashutosh Bapat wrote:\n> Can you please provide an example using postgres_fdw to create a\n> subscription using this patch. I think we should document it in\n> postgres_fdw and add a test for the same.\n\nThere's a basic test for postgres_fdw in patch 0003, just testing the\nsyntax and validation.\n\nA manual end-to-end test is pretty straightforward:\n\n -- on publisher\n create table foo(i int primary key);\n create publication pub1 for table foo;\n insert into foo values(42);\n\n -- on subscriber\n create extension postgres_fdw;\n create table foo(i int primary key);\n create server server1\n foreign data wrapper postgres_fdw\n options (host '/tmp', port '5432', dbname 'postgres');\n create user mapping for u1 server server1\n options (user 'u1');\n select pg_conninfo_from_server('server1','u1',true);\n create subscription sub1 server server1 publication pub1;\n\nI don't think we need to add an end-to-end test for each FDW, because\nit's just using the assembled connection string. To see if it's\nassembling the connection string properly, we can unit test with\npg_conninfo_from_server().\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Fri, 05 Jan 2024 00:04:26 -0800",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [17] CREATE SUBSCRIPTION ... SERVER"
},
{
"msg_contents": "On Fri, Jan 5, 2024 at 1:34 PM Jeff Davis <[email protected]> wrote:\n>\n> On Fri, 2024-01-05 at 12:49 +0530, Ashutosh Bapat wrote:\n> > Can you please provide an example using postgres_fdw to create a\n> > subscription using this patch. I think we should document it in\n> > postgres_fdw and add a test for the same.\n>\n> There's a basic test for postgres_fdw in patch 0003, just testing the\n> syntax and validation.\n>\n> A manual end-to-end test is pretty straightforward:\n>\n> -- on publisher\n> create table foo(i int primary key);\n> create publication pub1 for table foo;\n> insert into foo values(42);\n>\n> -- on subscriber\n> create extension postgres_fdw;\n> create table foo(i int primary key);\n> create server server1\n> foreign data wrapper postgres_fdw\n> options (host '/tmp', port '5432', dbname 'postgres');\n> create user mapping for u1 server server1\n> options (user 'u1');\n> select pg_conninfo_from_server('server1','u1',true);\n> create subscription sub1 server server1 publication pub1;\n>\n> I don't think we need to add an end-to-end test for each FDW, because\n> it's just using the assembled connection string. To see if it's\n> assembling the connection string properly, we can unit test with\n> pg_conninfo_from_server().\n\nThanks for the steps.\n\nI don't think we need to add a test for every FDW. E.g. adding a test\nin file_fdw would be pointless. But postgres_fdw is special. The test\ncould further create a foreign table ftab_foo on subscriber\nreferencing foo on publisher and then compare the data from foo and\nftab_foo to make sure that the replication is happening. This will\nserve as a good starting point for replicated tables setup in a\nsharded cluster.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Fri, 5 Jan 2024 16:11:59 +0530",
"msg_from": "Ashutosh Bapat <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [17] CREATE SUBSCRIPTION ... SERVER"
},
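A rough sketch of the comparison test suggested above, continuing the earlier manual example (server1 and table foo); the foreign table name and options are illustrative only:

    -- on the subscriber: point a foreign table at the publisher's foo
    CREATE FOREIGN TABLE ftab_foo (i int)
        SERVER server1 OPTIONS (table_name 'foo');
    -- once the initial sync has finished, the replicated copy and the remote
    -- table should agree; an empty result here (and with the operands
    -- swapped) means they match
    SELECT i FROM ftab_foo EXCEPT SELECT i FROM foo;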
{
"msg_contents": "On Fri, 2024-01-05 at 16:11 +0530, Ashutosh Bapat wrote:\n> I don't think we need to add a test for every FDW. E.g. adding a test\n> in file_fdw would be pointless. But postgres_fdw is special. The test\n> could further create a foreign table ftab_foo on subscriber\n> referencing foo on publisher and then compare the data from foo and\n> ftab_foo to make sure that the replication is happening. This will\n> serve as a good starting point for replicated tables setup in a\n> sharded cluster.\n> \n\nAttached updated patch set with added TAP test for postgres_fdw, which\nuses a postgres_fdw server as the source for subscription connection\ninformation.\n\nI think 0004 needs a bit more work, so I'm leaving it off for now, but\nI'll bring it back in the next patch set.\n\nRegards,\n\tJeff Davis",
"msg_date": "Fri, 12 Jan 2024 17:17:26 -0800",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [17] CREATE SUBSCRIPTION ... SERVER"
},
{
"msg_contents": "On 1/12/24 20:17, Jeff Davis wrote:\n> On Fri, 2024-01-05 at 16:11 +0530, Ashutosh Bapat wrote:\n>> I don't think we need to add a test for every FDW. E.g. adding a test\n>> in file_fdw would be pointless. But postgres_fdw is special. The test\n>> could further create a foreign table ftab_foo on subscriber\n>> referencing foo on publisher and then compare the data from foo and\n>> ftab_foo to make sure that the replication is happening. This will\n>> serve as a good starting point for replicated tables setup in a\n>> sharded cluster.\n>> \n> \n> Attached updated patch set with added TAP test for postgres_fdw, which\n> uses a postgres_fdw server as the source for subscription connection\n> information.\n> \n> I think 0004 needs a bit more work, so I'm leaving it off for now, but\n> I'll bring it back in the next patch set.\n\nI took a quick scan through the patch. The only thing that jumped out at \nme was that it seems like it might make sense to use \nquote_literal_cstr() rather than defining your own appendEscapedValue() \nfunction?\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n",
"msg_date": "Mon, 15 Jan 2024 15:53:04 -0500",
"msg_from": "Joe Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [17] CREATE SUBSCRIPTION ... SERVER"
},
{
"msg_contents": "On Mon, 2024-01-15 at 15:53 -0500, Joe Conway wrote:\n> I took a quick scan through the patch. The only thing that jumped out\n> at \n> me was that it seems like it might make sense to use \n> quote_literal_cstr() rather than defining your own\n> appendEscapedValue() \n> function?\n\nThe rules are slightly different. Libpq expects a connection string to\nescape only single-quote and backslash, and the escape character is\nalways backslash:\n\nhttps://www.postgresql.org/docs/16/libpq-connect.html#LIBPQ-CONNSTRING-KEYWORD-VALUE\n\nquote_literal_cstr() has more complicated rules. If there's a backslash\nanywhere in the string, it uses the E'' form. If it encounters a\nbackslash it escapes it with backslash, but if it encounters a single-\nquote it escapes it with single-quote. See:\n\nhttps://www.postgresql.org/docs/16/sql-syntax-lexical.html#SQL-SYNTAX-STRINGS\nhttps://www.postgresql.org/docs/16/sql-syntax-lexical.html#SQL-SYNTAX-STRINGS-ESCAPE\n\nI'll include some tests and a better comment for it in the next patch\nset.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Mon, 15 Jan 2024 13:34:18 -0800",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [17] CREATE SUBSCRIPTION ... SERVER"
},
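To make the difference between the two escaping rules concrete, a small illustration using an invented value that contains both a single quote and a backslash (expected results shown in comments for orientation only):

    -- SQL-literal rules (quote_literal/quote_literal_cstr): quotes are
    -- doubled and the E'' form is used once a backslash is present
    SELECT quote_literal(E'O''Brien\\x');
    --  result: E'O''Brien\\x'
    -- libpq keyword/value rules (the patch's appendEscapedValue): both
    -- characters are simply escaped with a backslash
    --  user = 'O\'Brien\\x'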
{
"msg_contents": "On Fri, 2024-01-12 at 17:17 -0800, Jeff Davis wrote:\n> I think 0004 needs a bit more work, so I'm leaving it off for now,\n> but\n> I'll bring it back in the next patch set.\n\nHere's the next patch set. 0001 - 0003 are mostly the same with some\nimproved error messages and some code fixes. I am looking to start\ncommitting 0001 - 0003 soon, as they have received some feedback\nalready and 0004 isn't required for the earlier patches to be useful.\n\n0004 could use more discussion. The purpose is to split the privileges\nof pg_create_subscription into two: pg_create_subscription, and\npg_create_connection. By separating the privileges, it's possible to\nallow someone to create/manage subscriptions to a predefined set of\nforeign servers (on which they have USAGE privileges) without allowing\nthem to write an arbitrary connection string.\n\nThe reasoning behind the separation is that creating a connection\nstring has different and more nuanced security implications than\ncreating a subscription (cf. extensive discussion[1] related to the\npassword_required setting on a subscription).\n\nBy default, pg_create_subscription is a member of pg_create_connection,\nso there's no change/break of the default behavior. But administrators\nwho want the privileges to be separated can simply \"REVOKE\npg_create_connection FROM pg_create_subscription\".\n\nGiven that CREATE SUBSCRIPTION ... SERVER works on a server of any FDW,\nwe would also need to protect against someone making using an\nunexpected FDW (with no validation or different validation) to\nconstruct a foreign server with malicious connection settings. To do\nso, I added to the grammar \"CREATE SERVER ... FOR SUBSCRIPTION\" (and a\nboolean catalog entry in pg_foreign_server) that can only be set by a\nmember of pg_create_connection.\n\nThere was some resistance[2] to adding more grammar/catalog impact than\nnecessary, so I'm not sure if others think this is the right approach.\nThe earlier patches are still worth it without 0004, but I do think the\nidea of separating the privileges is useful and it would be nice to\nfind an agreeable solution to do so. At least with the 0004, the\napproach is a bit more direct.\n\nRegards,\n\tJeff Davis\n\n[1]\nhttps://www.postgresql.org/message-id/9DFC88D3-1300-4DE8-ACBC-4CEF84399A53%40enterprisedb.com\n\n[2]\nhttps://www.postgresql.org/message-id/172273.1693403385%40sss.pgh.pa.us",
"msg_date": "Mon, 15 Jan 2024 17:55:44 -0800",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [17] CREATE SUBSCRIPTION ... SERVER"
},
{
"msg_contents": "On Tue, Jan 16, 2024 at 7:25 AM Jeff Davis <[email protected]> wrote:\n>\n> On Fri, 2024-01-12 at 17:17 -0800, Jeff Davis wrote:\n> > I think 0004 needs a bit more work, so I'm leaving it off for now,\n> > but\n> > I'll bring it back in the next patch set.\n>\n> Here's the next patch set. 0001 - 0003 are mostly the same with some\n> improved error messages and some code fixes. I am looking to start\n> committing 0001 - 0003 soon, as they have received some feedback\n> already and 0004 isn't required for the earlier patches to be useful.\n\nThanks. Here are some comments on 0001. I'll look at other patches very soon.\n\n1.\n+ /* Load the library providing us libpq calls. */\n+ load_file(\"libpqwalreceiver\", false);\n\nAt first glance, it looks odd that libpqwalreceiver library is being\nlinked to every backend that uses postgresql_fdw_validator. After a\nbit of grokking, this feels/is a better and easiest way to not link\nlibpq to the main postgresql executable as specified at the beginning\nof libpqwalreceiver.c file comments. May be a more descriptive note is\nworth here instead of just saying \"Load the library providing us libpq\ncalls.\"?\n\n2. Why not typedef keyword before the ConnectionOption structure? This\nway all the \"struct ConnectionOption\" can be remvoed, no? I know the\npreviously there is no typedef, but we can add it now so that the code\nlooks cleaner.\n\ntypedef struct ConnectionOption\n{\n const char *optname;\n bool issecret; /* is option for a password? */\n bool isdebug; /* is option a debug option? */\n} ConnectionOption;\n\nFWIW, with the above change and removal of struct before every use of\nConnectionOption, the code compiles cleanly for me.\n\n3.\n+static const struct ConnectionOption *\n+libpqrcv_conninfo_options(void)\n\nWhy is libpqrcv_conninfo_options returning the const ConnectionOption?\nIs it that we don't expect callers to modify the result? I think it's\nnot needed given the fact that PQconndefaults doesn't constify the\nreturn value.\n\n4.\n+ /* skip options that must be overridden */\n+ if (strcmp(option, \"client_encoding\") == 0)\n+ return false;\n+\n\nOptions that must be overriden or disallow specifiing\n\"client_encoding\" in the SERVER/USER MAPPING definition (just like the\ndblink)?\n\n /* Disallow \"client_encoding\" */\n if (strcmp(opt->keyword, \"client_encoding\") == 0)\n return false;\n\n5.\n\"By using the correct libpq options, it no longer needs to be\ndeprecated, and can be used by the upcoming pg_connection_fdw.\"\n\nUse of postgresql_fdw_validator for pg_connection_fdw seems a bit odd\nto me. I don't mind pg_connection_fdw having its own validator\npg_connection_fdw_validator even if it duplicates the code. To avoid\ncode duplication we can move the guts to an internal function in\nforeign.c so that both postgresql_fdw_validator and\npg_connection_fdw_validator can use it. This way the code is cleaner\nand we can just leave postgresql_fdw_validator as deprecated.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 16 Jan 2024 09:23:13 +0530",
"msg_from": "Bharath Rupireddy <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [17] CREATE SUBSCRIPTION ... SERVER"
},
{
"msg_contents": "On Tue, 2024-01-16 at 09:23 +0530, Bharath Rupireddy wrote:\n> 1.\n> May be a more descriptive note is\n> worth here instead of just saying \"Load the library providing us\n> libpq calls.\"?\n\nOK, will be in the next patch set.\n\n> 2. Why not typedef keyword before the ConnectionOption structure?\n\nAgreed. An earlier unpublished iteration had the struct more localized,\nbut here it makes more sense to be typedef'd.\n\n> 3.\n> +static const struct ConnectionOption *\n> +libpqrcv_conninfo_options(void)\n> \n> Why is libpqrcv_conninfo_options returning the const\n> ConnectionOption?\n\nI did that so I could save the result, and each subsequent call would\nbe free (just returning the same pointer). That also means that the\ncaller doesn't need to free the result, which would require another\nentry point in the API.\n\n> Is it that we don't expect callers to modify the result? I think it's\n> not needed given the fact that PQconndefaults doesn't constify the\n> return value.\n\nThe result of PQconndefaults() can change from call to call when the\ndefaults change. libpqrcv_conninfo_options() only depends on the\navailable option names (and dispchar), which should be a static list.\n\n> 4.\n> + /* skip options that must be overridden */\n> + if (strcmp(option, \"client_encoding\") == 0)\n> + return false;\n> +\n> \n> Options that must be overriden or disallow specifiing\n> \"client_encoding\" in the SERVER/USER MAPPING definition (just like\n> the\n> dblink)?\n\nI'm not quite sure of your question, but I'll try to improve the\ncomment.\n\n> 5.\n> \"By using the correct libpq options, it no longer needs to be\n> deprecated, and can be used by the upcoming pg_connection_fdw.\"\n> \n> Use of postgresql_fdw_validator for pg_connection_fdw seems a bit odd\n> to me. I don't mind pg_connection_fdw having its own validator\n> pg_connection_fdw_validator even if it duplicates the code. To avoid\n> code duplication we can move the guts to an internal function in\n> foreign.c so that both postgresql_fdw_validator and\n> pg_connection_fdw_validator can use it. This way the code is cleaner\n> and we can just leave postgresql_fdw_validator as deprecated.\n\nWill do so in the next patch set.\n\nThank you for taking a look.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Wed, 17 Jan 2024 23:17:01 -0800",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [17] CREATE SUBSCRIPTION ... SERVER"
},
{
"msg_contents": "Hi Jeff,\n\nOn Tue, Jan 16, 2024 at 7:25 AM Jeff Davis <[email protected]> wrote:\n>\n> On Fri, 2024-01-12 at 17:17 -0800, Jeff Davis wrote:\n> > I think 0004 needs a bit more work, so I'm leaving it off for now,\n> > but\n> > I'll bring it back in the next patch set.\n>\n> Here's the next patch set. 0001 - 0003 are mostly the same with some\n> improved error messages and some code fixes. I am looking to start\n> committing 0001 - 0003 soon, as they have received some feedback\n> already and 0004 isn't required for the earlier patches to be useful.\n>\n\nI am reviewing the patches. Here are some random comments.\n\n0002 adds a prefix \"regress_\" to almost every object that is created\nin foreign_data.sql. The commit message doesn't say why it's doing so.\nBut more importantly, the new tests added are lost in all the other\nchanges. It will be good to have prefix adding changes into its own\npatch explaining the reason. The new tests may stay in 0002.\nInterestingly the foreign server created in the new tests doesn't have\n\"regress_\" prefix. Why?\n\nDummy FDW makes me nervous. The way it's written, it may grow into a\nfull-fledged postgres_fdw and in the process might acquire the same\nconcerns that postgres_fdw has today. But I will study the patches and\ndiscussion around it more carefully.\n\nI enhanced the postgres_fdw TAP test to use foreign table. Please see\nthe attached patch. It works as expected. Of course a follow-on work\nwill require linking the local table and its replica on the publisher\ntable so that push down will work on replicated tables. But the\nconcept at least works with your changes. Thanks for that.\n\nI am not sure we need a full-fledged TAP test for testing\nsubscription. I wouldn't object to it, but TAP tests are heavy. It\nshould be possible to write the same test as a SQL test by creating\ntwo databases and switching between them. Do you think it's worth\ntrying that way?\n\n> 0004 could use more discussion. The purpose is to split the privileges\n> of pg_create_subscription into two: pg_create_subscription, and\n> pg_create_connection. By separating the privileges, it's possible to\n> allow someone to create/manage subscriptions to a predefined set of\n> foreign servers (on which they have USAGE privileges) without allowing\n> them to write an arbitrary connection string.\n\nHaven't studied this patch yet. Will continue reviewing the patches.\n\n--\nBest Wishes,\nAshutosh Bapat",
"msg_date": "Mon, 22 Jan 2024 18:41:07 +0530",
"msg_from": "Ashutosh Bapat <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [17] CREATE SUBSCRIPTION ... SERVER"
},
{
"msg_contents": "On Mon, 2024-01-22 at 18:41 +0530, Ashutosh Bapat wrote:\n> 0002 adds a prefix \"regress_\" to almost every object that is created\n> in foreign_data.sql.\n\npsql \\dew outputs the owner, which in the case of a built-in FDW is the\nbootstrap superuser, which is not a stable name. I used the prefix to\nexclude the built-in FDW -- if you have a better suggestion, please let\nme know. (Though reading below, we might not even want a built-in FDW.)\n\n> Dummy FDW makes me nervous. The way it's written, it may grow into a\n> full-fledged postgres_fdw and in the process might acquire the same\n> concerns that postgres_fdw has today. But I will study the patches\n> and\n> discussion around it more carefully.\n\nI introduced that based on this comment[1].\n\nI also thought it fit with your previous suggestion to make it work\nwith postgres_fdw, but I suppose it's not required. We could just not\noffer the built-in FDW, and expect users to either use postgres_fdw or\ncreate their own dummy FDW.\n\n> I enhanced the postgres_fdw TAP test to use foreign table. Please see\n> the attached patch. It works as expected. Of course a follow-on work\n> will require linking the local table and its replica on the publisher\n> table so that push down will work on replicated tables. But the\n> concept at least works with your changes. Thanks for that.\n\nThank you, I'll include it in the next patch set.\n\n> I am not sure we need a full-fledged TAP test for testing\n> subscription. I wouldn't object to it, but TAP tests are heavy. It\n> should be possible to write the same test as a SQL test by creating\n> two databases and switching between them. Do you think it's worth\n> trying that way?\n\nI'm not entirely sure what you mean here, but I am open to test\nsimplifications if you see an opportunity.\n\nRegards,\n\tJeff Davis\n> \n\n[1] \nhttps://www.postgresql.org/message-id/172273.1693403385%40sss.pgh.pa.us\n\n\n\n\n",
"msg_date": "Mon, 22 Jan 2024 11:03:50 -0800",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [17] CREATE SUBSCRIPTION ... SERVER"
},
{
"msg_contents": "On Tue, Jan 23, 2024 at 12:33 AM Jeff Davis <[email protected]> wrote:\n>\n> On Mon, 2024-01-22 at 18:41 +0530, Ashutosh Bapat wrote:\n> > 0002 adds a prefix \"regress_\" to almost every object that is created\n> > in foreign_data.sql.\n>\n> psql \\dew outputs the owner, which in the case of a built-in FDW is the\n> bootstrap superuser, which is not a stable name. I used the prefix to\n> exclude the built-in FDW -- if you have a better suggestion, please let\n> me know. (Though reading below, we might not even want a built-in FDW.)\n\nI am with the prefix. The changes it causes make review difficult. If\nyou can separate those changes into a patch that will help.\n\n>\n> > Dummy FDW makes me nervous. The way it's written, it may grow into a\n> > full-fledged postgres_fdw and in the process might acquire the same\n> > concerns that postgres_fdw has today. But I will study the patches\n> > and\n> > discussion around it more carefully.\n>\n> I introduced that based on this comment[1].\n>\n> I also thought it fit with your previous suggestion to make it work\n> with postgres_fdw, but I suppose it's not required. We could just not\n> offer the built-in FDW, and expect users to either use postgres_fdw or\n> create their own dummy FDW.\n\nI am fine with this.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Tue, 23 Jan 2024 15:21:37 +0530",
"msg_from": "Ashutosh Bapat <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [17] CREATE SUBSCRIPTION ... SERVER"
},
{
"msg_contents": "On Tue, 2024-01-23 at 15:21 +0530, Ashutosh Bapat wrote:\n> I am with the prefix. The changes it causes make review difficult. If\n> you can separate those changes into a patch that will help.\n\nI ended up just removing the dummy FDW. Real users are likely to want\nto use postgres_fdw, and if not, it's easy enough to issue a CREATE\nFOREIGN DATA WRAPPER. Or I can bring it back if desired.\n\nUpdated patch set (patches are renumbered):\n\n * removed dummy FDW and test churn\n * made a new pg_connection_validator function which leaves\npostgresql_fdw_validator in place. (I didn't document the new function\n-- should I?)\n * included your tests improvements\n * removed dependency from the subscription to the user mapping -- we\ndon't depend on the user mapping for foreign tables, so we shouldn't\ndepend on them here. Of course a change to a user mapping still\ninvalidates the subscription worker and it will restart.\n * general cleanup\n\nOverall it's simpler and hopefully easier to review. The patch to\nintroduce the pg_create_connection role could use some more discussion,\nbut I believe 0001 and 0002 are nearly ready.\n\nRegards,\n\tJeff Davis",
"msg_date": "Tue, 23 Jan 2024 17:45:07 -0800",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [17] CREATE SUBSCRIPTION ... SERVER"
},
{
"msg_contents": "On Wed, Jan 24, 2024 at 7:15 AM Jeff Davis <[email protected]> wrote:\n>\n> On Tue, 2024-01-23 at 15:21 +0530, Ashutosh Bapat wrote:\n> > I am with the prefix. The changes it causes make review difficult. If\n> > you can separate those changes into a patch that will help.\n>\n> I ended up just removing the dummy FDW. Real users are likely to want\n> to use postgres_fdw, and if not, it's easy enough to issue a CREATE\n> FOREIGN DATA WRAPPER. Or I can bring it back if desired.\n>\n> Updated patch set (patches are renumbered):\n>\n> * removed dummy FDW and test churn\n> * made a new pg_connection_validator function which leaves\n> postgresql_fdw_validator in place. (I didn't document the new function\n> -- should I?)\n> * included your tests improvements\n> * removed dependency from the subscription to the user mapping -- we\n> don't depend on the user mapping for foreign tables, so we shouldn't\n> depend on them here. Of course a change to a user mapping still\n> invalidates the subscription worker and it will restart.\n> * general cleanup\n>\n> Overall it's simpler and hopefully easier to review. The patch to\n> introduce the pg_create_connection role could use some more discussion,\n> but I believe 0001 and 0002 are nearly ready.\n\nThanks for the patches. I have some comments on v9-0001:\n\n1.\n+SELECT pg_conninfo_from_server('testserver1', CURRENT_USER, false);\n+ pg_conninfo_from_server\n+-----------------------------------\n+ user = 'value' password = 'value'\n\nIsn't this function an unsafe one as it shows the password? I don't\nsee its access being revoked from the public. If it seems important\nfor one to understand how the server forms a connection string by\ngathering bits and pieces from foreign server and user mapping, why\ncan't it look for the password in the result string and mask it before\nreturning it as output?\n\n2.\n+ */\n+typedef const struct ConnectionOption *(*walrcv_conninfo_options_fn) (void);\n+\n\nstruct here is unnecessary as the structure definition of\nConnectionOption is typedef-ed already.\n\n3.\n+ OPTIONS (user 'publicuser', password $pwd$'\\\"$# secret'$pwd$);\n\nIs pwd here present working directory name? If yes, isn't it going to\nbe different on BF animals making test output unstable?\n\n4.\n-struct ConnectionOption\n+struct TestConnectionOption\n {\n\nHow about say PgFdwConnectionOption instead of TestConnectionOption?\n\n5. Comment #4 makes me think - why not get rid of\npostgresql_fdw_validator altogether and use pg_connection_validator\ninstead for testing purposes? The tests don't complain much, see the\npatch Remove-deprecated-postgresql_fdw_validator.diff created on top\nof v9-0001.\n\nI'll continue to review the other patches.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 29 Jan 2024 23:11:41 +0530",
"msg_from": "Bharath Rupireddy <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [17] CREATE SUBSCRIPTION ... SERVER"
},
{
"msg_contents": "On Mon, Jan 29, 2024 at 11:11 PM Bharath Rupireddy\n<[email protected]> wrote:\n>\n> On Wed, Jan 24, 2024 at 7:15 AM Jeff Davis <[email protected]> wrote:\n> >\n> > On Tue, 2024-01-23 at 15:21 +0530, Ashutosh Bapat wrote:\n> > > I am with the prefix. The changes it causes make review difficult. If\n> > > you can separate those changes into a patch that will help.\n> >\n> > I ended up just removing the dummy FDW. Real users are likely to want\n> > to use postgres_fdw, and if not, it's easy enough to issue a CREATE\n> > FOREIGN DATA WRAPPER. Or I can bring it back if desired.\n> >\n> > Updated patch set (patches are renumbered):\n> >\n> > * removed dummy FDW and test churn\n> > * made a new pg_connection_validator function which leaves\n> > postgresql_fdw_validator in place. (I didn't document the new function\n> > -- should I?)\n> > * included your tests improvements\n> > * removed dependency from the subscription to the user mapping -- we\n> > don't depend on the user mapping for foreign tables, so we shouldn't\n> > depend on them here. Of course a change to a user mapping still\n> > invalidates the subscription worker and it will restart.\n> > * general cleanup\n> >\n> > Overall it's simpler and hopefully easier to review. The patch to\n> > introduce the pg_create_connection role could use some more discussion,\n> > but I believe 0001 and 0002 are nearly ready.\n>\n> Thanks for the patches. I have some comments on v9-0001:\n>\n> 1.\n> +SELECT pg_conninfo_from_server('testserver1', CURRENT_USER, false);\n> + pg_conninfo_from_server\n> +-----------------------------------\n> + user = 'value' password = 'value'\n>\n> Isn't this function an unsafe one as it shows the password? I don't\n> see its access being revoked from the public. If it seems important\n> for one to understand how the server forms a connection string by\n> gathering bits and pieces from foreign server and user mapping, why\n> can't it look for the password in the result string and mask it before\n> returning it as output?\n>\n> 2.\n> + */\n> +typedef const struct ConnectionOption *(*walrcv_conninfo_options_fn) (void);\n> +\n>\n> struct here is unnecessary as the structure definition of\n> ConnectionOption is typedef-ed already.\n>\n> 3.\n> + OPTIONS (user 'publicuser', password $pwd$'\\\"$# secret'$pwd$);\n>\n> Is pwd here present working directory name? If yes, isn't it going to\n> be different on BF animals making test output unstable?\n>\n> 4.\n> -struct ConnectionOption\n> +struct TestConnectionOption\n> {\n>\n> How about say PgFdwConnectionOption instead of TestConnectionOption?\n>\n> 5. Comment #4 makes me think - why not get rid of\n> postgresql_fdw_validator altogether and use pg_connection_validator\n> instead for testing purposes? The tests don't complain much, see the\n> patch Remove-deprecated-postgresql_fdw_validator.diff created on top\n> of v9-0001.\n>\n> I'll continue to review the other patches.\n\nI forgot to attach the diff patch as specified in comment #5, please\nfind the attached. Sorry for the noise.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Mon, 29 Jan 2024 23:17:40 +0530",
"msg_from": "Bharath Rupireddy <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [17] CREATE SUBSCRIPTION ... SERVER"
},
{
"msg_contents": "On Wed, Jan 24, 2024 at 7:15 AM Jeff Davis <[email protected]> wrote:\n>\n> On Tue, 2024-01-23 at 15:21 +0530, Ashutosh Bapat wrote:\n> > I am with the prefix. The changes it causes make review difficult. If\n> > you can separate those changes into a patch that will help.\n>\n> I ended up just removing the dummy FDW. Real users are likely to want\n> to use postgres_fdw, and if not, it's easy enough to issue a CREATE\n> FOREIGN DATA WRAPPER. Or I can bring it back if desired.\n>\n> Updated patch set (patches are renumbered):\n>\n> * removed dummy FDW and test churn\n> * made a new pg_connection_validator function which leaves\n> postgresql_fdw_validator in place. (I didn't document the new function\n> -- should I?)\n> * included your tests improvements\n> * removed dependency from the subscription to the user mapping -- we\n> don't depend on the user mapping for foreign tables, so we shouldn't\n> depend on them here. Of course a change to a user mapping still\n> invalidates the subscription worker and it will restart.\n> * general cleanup\n>\n\nThanks.\n\n> Overall it's simpler and hopefully easier to review. The patch to\n> introduce the pg_create_connection role could use some more discussion,\n> but I believe 0001 and 0002 are nearly ready.\n\n0001 commit message says \"in preparation of CREATE SUBSCRIPTION\" but I\ndo not see the function being used anywhere except in testcases. Am I\nmissing something? Is this function necessary for this feature?\n\nBut more importantly this function and its minions are closely tied\nwith libpq and not an FDW. Converting a server and user mapping to\nconninfo should be delegated to the FDW being used since that FDW\nknows best how to use those options. Similarly options_to_conninfo()\nshould be delegated to the FDW. I imagine that the FDWs which want to\nsupport subscriptions will need to implement hooks in\nWalReceiverFunctionsType which seems to be designed to be pluggable.\n--- quote\nThis API should be considered internal at the moment, but we could open it\nup for 3rd party replacements of libpqwalreceiver in the future, allowing\npluggable methods for receiving WAL.\n--- unquote\nNot all of those hooks are applicable to every FDW since the publisher\nmay be different and may not provide all the functionality. So we\nmight need to rethink WalReceiverFunctionsType interface eventually.\nBut for now, we will need to change postgres_fdw to implement it.\n\nWe should mention something about the user mapping that will be used\nto connect to SERVER when subscription specifies SERVER. I am not sure\nwhere to mention this. May be we can get some clue from foreign server\ndocumentation.\n\n--\nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Tue, 30 Jan 2024 16:17:55 +0530",
"msg_from": "Ashutosh Bapat <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [17] CREATE SUBSCRIPTION ... SERVER"
},
{
"msg_contents": "On Tue, 2024-01-30 at 16:17 +0530, Ashutosh Bapat wrote:\n> Converting a server and user mapping to\n> conninfo should be delegated to the FDW being used since that FDW\n> knows best how to use those options.\n\nIf I understand you correctly, you mean that there would be a new\noptional function associated with an FDW (in addition to the HANDLER\nand VALIDATOR) like \"CONNECTION\", which would be able to return the\nconninfo from a server using that FDW. Is that right?\n\nI like the idea -- it further decouples the logic from the core server.\nI suspect it will make postgres_fdw the primary way (though not the\nonly possible way) to use this feature. There would be little need to\ncreate a new builtin FDW to make this work.\n\nTo get the subscription invalidation right, we'd need to make the\n(reasonable) assumption that the connection information is based only\non the FDW, server, and user mapping. A FDW wouldn't be able to use,\nfor example, some kind of configuration table or GUC to control how the\nconnection string gets created. That's easy enough to solve with\ndocumentation.\n\nI'll work up a new patch for this.\n\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Tue, 30 Jan 2024 12:45:39 -0800",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [17] CREATE SUBSCRIPTION ... SERVER"
},
{
"msg_contents": "On Wed, Jan 31, 2024 at 2:16 AM Jeff Davis <[email protected]> wrote:\n>\n> On Tue, 2024-01-30 at 16:17 +0530, Ashutosh Bapat wrote:\n> > Converting a server and user mapping to\n> > conninfo should be delegated to the FDW being used since that FDW\n> > knows best how to use those options.\n>\n> If I understand you correctly, you mean that there would be a new\n> optional function associated with an FDW (in addition to the HANDLER\n> and VALIDATOR) like \"CONNECTION\", which would be able to return the\n> conninfo from a server using that FDW. Is that right?\n\nI am not sure whether it fits {HANDLER,VALIDATOR} set or should be\npart of FdwRoutine or a new set of hooks similar to FdwRoutine. But\nsomething like that. Since the hooks for query planning and execution\nhave different characteristics from the ones used for replication, it\nmight make sense to create a new set of hooks similar to FdwRoutine,\nsay FdwReplicationRoutines and rename FdwRoutines to FdwQueryRoutines.\nThis way, we know whether an FDW can handle subscription connections\nor not. A SERVER whose FDW does not support replication routines\nshould not be used with a subscription.\n\n>\n> I like the idea -- it further decouples the logic from the core server.\n> I suspect it will make postgres_fdw the primary way (though not the\n> only possible way) to use this feature. There would be little need to\n> create a new builtin FDW to make this work.\n\nThat's what I see as well. I am glad that we are on the same page.\n\n>\n> To get the subscription invalidation right, we'd need to make the\n> (reasonable) assumption that the connection information is based only\n> on the FDW, server, and user mapping. A FDW wouldn't be able to use,\n> for example, some kind of configuration table or GUC to control how the\n> connection string gets created. That's easy enough to solve with\n> documentation.\n>\n\nI think that's true for postgres_fdw as well right? But I think it's\nmore important for a subscription since it's expected to live very\nlong almost as long as the server itself does. So I agree. But that's\nFDW's responsibility.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Wed, 31 Jan 2024 11:10:00 +0530",
"msg_from": "Ashutosh Bapat <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [17] CREATE SUBSCRIPTION ... SERVER"
},
{
"msg_contents": "On Wed, 2024-01-31 at 11:10 +0530, Ashutosh Bapat wrote:\n> > I like the idea -- it further decouples the logic from the core\n> > server.\n> > I suspect it will make postgres_fdw the primary way (though not the\n> > only possible way) to use this feature. There would be little need\n> > to\n> > create a new builtin FDW to make this work.\n> \n> That's what I see as well. I am glad that we are on the same page.\n\nImplemented in v11, attached.\n\nIs this what you had in mind? It leaves a lot of the work to\npostgres_fdw and it's almost unusable without postgres_fdw.\n\nThat's not a bad thing, but it makes the core functionality a bit\nharder to test standalone. I can work on the core tests some more. The\npostgres_fdw tests passed without modification, though, and offer a\nsimple example of how to use it.\n\nRegards,\n\tJeff Davis",
"msg_date": "Fri, 08 Mar 2024 00:20:32 -0800",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [17] CREATE SUBSCRIPTION ... SERVER"
}
] |
[
{
"msg_contents": "Dear hackers,\n\nWhile testing pg_upgrade for [1], I found a bug related with logical replication\nslots. \n\n# Found bug\n\nStatus of logical replication slots are still \"reserved\", but they are not usable.\n\n```\ntmp=# SELECT slot_name, slot_type, restart_lsn, confirmed_flush_lsn, wal_status FROM pg_replication_slots;\n slot_name | slot_type | restart_lsn | confirmed_flush_lsn | wal_status \n------------+-----------+-------------+---------------------+------------\n new_on_tmp | logical | 0/196C7B0 | 0/196C7E8 | reserved\n(1 row)\n\ntmp=# SELECT * FROM pg_logical_slot_get_changes('new_on_tmp', NULL, NULL);\nERROR: requested WAL segment pg_wal/000000010000000000000001 has already been removed\n```\n\nI did not check about physical slots, but it may also similar problem.\n\n# Condition\n\nThis happens when logical slots exist on new cluster before doing pg_upgrade.\nIt happened for HEAD and REL_16_STABLE branches, but I think it would happen\nall supported versions.\n\n## How to reproduce\n\nYou can get same ERROR with below steps. Also I attached the script for\nreproducing the bug, \n\n1. do initdb for old and new cluster\n2. create logical replication slots only on new cluster. Note that it must be\n done aother database than \"postgres\".\n3. do pg_upgrade.\n4. boot new cluster and executed pg_logical_slot_get_changes()\n\n# My analysis\n\nThe immediate cause is that pg_resetwal removes WALs required by logical\nreplication slots, it cannot be skipped.\nTherefore, I think it is better not to allow upgrade when replication slots are\ndefined on the new cluster. I was not sure the case for physical replication,\nso I want to hear your opinion.\n\nI will create a patch if it is real problem. Any comments for that are very\nwelcome.\n\n[1]: https://www.postgresql.org/message-id/flat/TYAPR01MB58664C81887B3AF2EB6B16E3F5939@TYAPR01MB5866.jpnprd01.prod.outlook.com\n\nBest Regards,\nHayato Kuroda\nFUJITSU LIMITED",
"msg_date": "Wed, 30 Aug 2023 10:57:33 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "pg_upgrade bug: pg_upgrade successes even if the slots are defined,\n but they becomes unusable"
}
] |
[
{
"msg_contents": "I am seeing a new gcc 12.2.0 compiler warning from\nsrc/backend/commands/sequence.c:\n\n\tsequence.c: In function ‘DefineSequence’:\n\tsequence.c:196:35: warning: ‘coldef’ may be used uninitialized [-Wmaybe-uninitialized]\n\t 196 | stmt->tableElts = lappend(stmt->tableElts, coldef);\n\t | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\tsequence.c:175:29: note: ‘coldef’ was declared here\n\t 175 | ColumnDef *coldef;\n\t | ^~~~~~\n\nThe code is:\n\n\tfor (i = SEQ_COL_FIRSTCOL; i <= SEQ_COL_LASTCOL; i++)\n\t{\n-->\t ColumnDef *coldef;\n\n\t switch (i)\n\t {\n\t\tcase SEQ_COL_LASTVAL:\n\t\t coldef = makeColumnDef(\"last_value\", INT8OID, -1, InvalidOid);\n\t\t value[i - 1] = Int64GetDatumFast(seqdataform.last_value);\n\t\t break;\n\t\tcase SEQ_COL_LOG:\n\t\t coldef = makeColumnDef(\"log_cnt\", INT8OID, -1, InvalidOid);\n\t\t value[i - 1] = Int64GetDatum((int64) 0);\n\t\t break;\n\t\tcase SEQ_COL_CALLED:\n\t\t coldef = makeColumnDef(\"is_called\", BOOLOID, -1, InvalidOid);\n\t\t value[i - 1] = BoolGetDatum(false);\n\t\t break;\n\t }\n\n\t coldef->is_not_null = true;\n\t null[i - 1] = false;\n\n-->\t stmt->tableElts = lappend(stmt->tableElts, coldef);\n\t}\n\nand I think it is caused by this commit:\n\n\tcommit 1fa9241bdd\n\tAuthor: Peter Eisentraut <[email protected]>\n\tDate: Tue Aug 29 08:41:04 2023 +0200\n\t\n\t Make more use of makeColumnDef()\n\t\n\t Since we already have it, we might as well make full use of it,\n\t instead of assembling ColumnDef by hand in several places.\n\t\n\t Reviewed-by: Alvaro Herrera <[email protected]>\n\t Discussion: https://www.postgresql.org/message-id/flat/[email protected]\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n",
"msg_date": "Wed, 30 Aug 2023 07:55:28 -0400",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "New compiler warning"
},
{
"msg_contents": "Hi,\n\n> I am seeing a new gcc 12.2.0 compiler warning from\n> src/backend/commands/sequence.c:\n\nYep, the compiler is just not smart enough to derive that this\nactually is not going to happen.\n\nHere is a proposed fix.\n\n-- \nBest regards,\nAleksander Alekseev",
"msg_date": "Wed, 30 Aug 2023 15:10:20 +0300",
"msg_from": "Aleksander Alekseev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New compiler warning"
},
{
"msg_contents": "On 8/30/23 08:10, Aleksander Alekseev wrote:\n> \n>> I am seeing a new gcc 12.2.0 compiler warning from\n>> src/backend/commands/sequence.c:\n> \n> Yep, the compiler is just not smart enough to derive that this\n> actually is not going to happen.\n> \n> Here is a proposed fix.\n\nHere's an alternate way to deal with this which is a bit more efficient \n(code not tested):\n\n-\t\tcase SEQ_COL_CALLED:\n-\t\t coldef = makeColumnDef(\"is_called\", BOOLOID, -1, InvalidOid);\n-\t\t value[i - 1] = BoolGetDatum(false);\n-\t\t break;\n+\t\tdefault:\n+ Assert(i == SEQ_COL_CALLED);\n+\t\t coldef = makeColumnDef(\"is_called\", BOOLOID, -1, InvalidOid);\n+\t\t value[i - 1] = BoolGetDatum(false);\n+\t\t break;\n\nThe downside is that any garbage in i will lead to processing as \nSEQ_COL_CALLED. But things are already pretty bad in that case, ISTM, \neven with the proposed patch (or the original code for that matter).\n\nRegards,\n-David\n\n\n",
"msg_date": "Wed, 30 Aug 2023 10:07:24 -0400",
"msg_from": "David Steele <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New compiler warning"
},
{
"msg_contents": "\nPeter Eisentraut has applied a patch to fix this.\n\n---------------------------------------------------------------------------\n\nOn Wed, Aug 30, 2023 at 10:07:24AM -0400, David Steele wrote:\n> On 8/30/23 08:10, Aleksander Alekseev wrote:\n> > \n> > > I am seeing a new gcc 12.2.0 compiler warning from\n> > > src/backend/commands/sequence.c:\n> > \n> > Yep, the compiler is just not smart enough to derive that this\n> > actually is not going to happen.\n> > \n> > Here is a proposed fix.\n> \n> Here's an alternate way to deal with this which is a bit more efficient\n> (code not tested):\n> \n> -\t\tcase SEQ_COL_CALLED:\n> -\t\t coldef = makeColumnDef(\"is_called\", BOOLOID, -1, InvalidOid);\n> -\t\t value[i - 1] = BoolGetDatum(false);\n> -\t\t break;\n> +\t\tdefault:\n> + Assert(i == SEQ_COL_CALLED);\n> +\t\t coldef = makeColumnDef(\"is_called\", BOOLOID, -1, InvalidOid);\n> +\t\t value[i - 1] = BoolGetDatum(false);\n> +\t\t break;\n> \n> The downside is that any garbage in i will lead to processing as\n> SEQ_COL_CALLED. But things are already pretty bad in that case, ISTM, even\n> with the proposed patch (or the original code for that matter).\n> \n> Regards,\n> -David\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n",
"msg_date": "Wed, 30 Aug 2023 12:06:42 -0400",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: New compiler warning"
}
] |
[
{
"msg_contents": "Hi everyone,\n\nI just release a logfmt log collector for PostgreSQL :\nhttps://pgxn.org/dist/logfmt/1.0.0/ . This works quite well but I have\na few issues I would like to share with hackers.\n\nFirst, what do you think of having logfmt output along json and CSV ?\nPostgreSQL internal syslogger has builtin support for the different\nLOG_DESTINATION_*. Thus logfmt does not send log collector headers\nusing write_syslogger_file or write_pipe_chunks but plain log line with\nwrite_console. Do you have some hint about this ? The consequences ?\nHow much is it a good bet to write a custom log collector in a shared\npreload library ?\n\nSecond issue, logfmt provides a guc called\n`logfmt.application_context`. The purpose of application_context is the\nsame as `application_name` but for a more varying value like request\nUUID, task ID, etc. What do you think of this ? Would it be cool to\nhave this GUC in PostgreSQL and available in log_line_prefix ?\n\nAnyway, it's my first attempt at writing C code for PostgreSQL, with\nthe help of Guillaume LELARGE and Jehan-Guillaume de RORTHAIS and it's\na pleasure ! PostgreSQL C code is very readable. Thanks everyone for\nthis !\n\nRegards,\nÉtienne BERSAC\nDeveloper at Dalibo\n\n\n\n\n",
"msg_date": "Wed, 30 Aug 2023 14:36:39 +0200",
"msg_from": "=?ISO-8859-1?Q?=C9tienne?= BERSAC <[email protected]>",
"msg_from_op": true,
"msg_subject": "logfmt and application_context"
},
{
"msg_contents": "> On 30 Aug 2023, at 14:36, Étienne BERSAC <[email protected]> wrote:\n\n> ..what do you think of having logfmt output along json and CSV ?\n\nlogfmt is widely supported by log ingestion and analysis tools, and have been\nfor a long enoug time (IMHO) to be called mature, which is good. Less ideal is\nthat there is no official formal definition of what logfmt is, some consumers\nof it (like Splunk) even support it while not calling it logfmt. If we add\nsupport for it, can we reasonably expect that what we emit is what consumers of\nit assume it will look like? Given the simplicity of it I think it can be\nargued, but I'm far from an expert in this area.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Tue, 5 Sep 2023 11:35:02 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: logfmt and application_context"
},
{
"msg_contents": "Hi Daniel,\n\nThanks for the feedback.\n\nLe mardi 05 septembre 2023 à 11:35 +0200, Daniel Gustafsson a écrit :\n> > On 30 Aug 2023, at 14:36, Étienne BERSAC <[email protected]> wrote:\n> \n> > ..what do you think of having logfmt output along json and CSV ?\n> \n> Less ideal is\n> that there is no official formal definition of what logfmt is [...] If we add\n> support for it, can we reasonably expect that what we emit is what consumers of\n> it assume it will look like?\n\nI didn't know logfmt had variation. Do you have a case of\nincompatibility ?\n\nAnyway, I think that logfmt will be better handled inside Postgres\nrather than in an extension due to limitation in syslogger\nextendability. I could send a patch if more people are interested in\nthis.\n\n\nWhat do you think about application_context as a way to render e.g. a\nweb request UUID to all backend log messages ?\n\n\nRegards,\nÉtienne\n\n\n\n",
"msg_date": "Tue, 26 Sep 2023 09:56:27 +0200",
"msg_from": "=?ISO-8859-1?Q?=C9tienne?= BERSAC <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: logfmt and application_context"
},
{
"msg_contents": "> On 26 Sep 2023, at 09:56, Étienne BERSAC <[email protected]> wrote:\n> Le mardi 05 septembre 2023 à 11:35 +0200, Daniel Gustafsson a écrit :\n>>> On 30 Aug 2023, at 14:36, Étienne BERSAC <[email protected]> wrote:\n>> \n>>> ..what do you think of having logfmt output along json and CSV ?\n>> \n>> Less ideal is\n>> that there is no official formal definition of what logfmt is [...] If we add\n>> support for it, can we reasonably expect that what we emit is what consumers of\n>> it assume it will look like?\n> \n> I didn't know logfmt had variation. Do you have a case of\n> incompatibility ?\n\nLike I said upthread, it might be reasonable to assume that the format is\nfairly stable, but without a formal definition there is no way of being\ncertain. Formats without specifications that become popular tend to diverge,\nMarkdown being the textbook example.\n\nBeing a common format in ingestion tools makes it interesting though, but I\nwonder if those tools aren't alreday supporting CSV such that adding logfmt\nwon't move the compatibility markers much?\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Wed, 27 Sep 2023 10:14:48 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: logfmt and application_context"
},
{
"msg_contents": "Hi,\n\nLe mercredi 27 septembre 2023 à 10:14 +0200, Daniel Gustafsson a écrit :\n> Being a common format in ingestion tools makes it interesting though, but I\n> wonder if those tools aren't alreday supporting CSV such that adding logfmt\n> won't move the compatibility markers much?\n\nCompared to CSV, logfmt has explicit named fields. This helps tools to\napply generic rules like : ok this is pid, this is timestamp, etc.\nwithout any configuration. Loki and Grafana indexes a subset of known\nfields. This is harder to achieve with a bunch a semi-colon separated\nvalues.\n\nCompared to JSON, logfmt is terser and easier for human eyes and\nfingers.\n\nThis is why I think logfmt for PostgreSQL could be a good option.\n\nRegards,\nÉtienne\n\n\n",
"msg_date": "Wed, 27 Sep 2023 13:57:50 +0200",
"msg_from": "=?ISO-8859-1?Q?=C9tienne?= BERSAC <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: logfmt and application_context"
}
] |
[
{
"msg_contents": "I noticed that pg_resetwal has poor test coverage. There are some TAP \ntests, but they all run with -n, so they don't actually test the full \nfunctionality. (There is a non-dry-run call of pg_resetwal in the \nrecovery test suite, but that is incidental.)\n\nSo I added a bunch of more tests to test all the different options and \nscenarios. That also led to adding more documentation about more \ndetails how some of the options behave, and some tweaks in the program \noutput.\n\nIt's attached as one big patch, but it could be split into smaller \npieces, depending what gets agreement.",
"msg_date": "Wed, 30 Aug 2023 14:45:30 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "pg_resetwal tests, logging, and docs update"
},
{
"msg_contents": "Hi,\n\n> I noticed that pg_resetwal has poor test coverage. There are some TAP\n> tests, but they all run with -n, so they don't actually test the full\n> functionality. (There is a non-dry-run call of pg_resetwal in the\n> recovery test suite, but that is incidental.)\n>\n> So I added a bunch of more tests to test all the different options and\n> scenarios. That also led to adding more documentation about more\n> details how some of the options behave, and some tweaks in the program\n> output.\n>\n> It's attached as one big patch, but it could be split into smaller\n> pieces, depending what gets agreement.\n\nAll in all the patch looks OK but I have a couple of nitpicks.\n\n```\n+ working on a data directory in an unclean shutdown state or with a corrupt\n+ control file.\n```\n\n```\n+ After running this command on a data directory with corrupted WAL or a\n+ corrupt control file,\n```\n\nI'm not a native English speaker but shouldn't it be \"corruptED control file\"?\n\n```\n+ Force <command>pg_resetwal</command> to proceed even in situations where\n+ it could be dangerous,\n```\n\n\"where\" is probably fine but wouldn't \"WHEN it could be dangerous\" be better?\n\n```\n+ // FIXME: why 2?\n if (set_oldest_commit_ts_xid < 2 &&\n```\n\nShould we rewrite this to < FrozenTransactionId ?\n\n```\n+$mult = 32 * $blcksz * 4; # FIXME\n```\n\nUnless I'm missing something this $mult value is not used. Is it\nreally needed here?\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Wed, 13 Sep 2023 17:36:21 +0300",
"msg_from": "Aleksander Alekseev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_resetwal tests, logging, and docs update"
},
{
"msg_contents": "On 13.09.23 16:36, Aleksander Alekseev wrote:\n> ```\n> + // FIXME: why 2?\n> if (set_oldest_commit_ts_xid < 2 &&\n> ```\n> \n> Should we rewrite this to < FrozenTransactionId ?\n\nThat's what I suspect, but we should confirm that.\n\n> \n> ```\n> +$mult = 32 * $blcksz * 4; # FIXME\n> ```\n> \n> Unless I'm missing something this $mult value is not used. Is it\n> really needed here?\n\nThe FIXME is that I think a multiplier *should* be applied somehow. See \nalso the FIXME in the documentation for the -c option.\n\n\n\n\n",
"msg_date": "Wed, 13 Sep 2023 20:34:42 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_resetwal tests, logging, and docs update"
},
{
"msg_contents": "On 13.09.23 16:36, Aleksander Alekseev wrote:\n> All in all the patch looks OK but I have a couple of nitpicks.\n> \n> ```\n> + working on a data directory in an unclean shutdown state or with a corrupt\n> + control file.\n> ```\n> \n> ```\n> + After running this command on a data directory with corrupted WAL or a\n> + corrupt control file,\n> ```\n> \n> I'm not a native English speaker but shouldn't it be \"corruptED control file\"?\n\nfixed\n\n> \n> ```\n> + Force <command>pg_resetwal</command> to proceed even in situations where\n> + it could be dangerous,\n> ```\n> \n> \"where\" is probably fine but wouldn't \"WHEN it could be dangerous\" be better?\n\nHmm, I think I like \"where\" better.\n\nAttached is an updated patch set where I have split the changes into \nsmaller pieces. The last two patches still have some open questions \nabout what certain constants mean etc. The other patches should be settled.",
"msg_date": "Tue, 19 Sep 2023 16:31:35 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_resetwal tests, logging, and docs update"
},
{
"msg_contents": "Hi,\n\n> Hmm, I think I like \"where\" better.\n\nOK.\n\n> Attached is an updated patch set where I have split the changes into\n> smaller pieces. The last two patches still have some open questions\n> about what certain constants mean etc. The other patches should be settled.\n\nThe patches 0001..0005 seem to be ready and rather independent. I\nsuggest merging them and continue discussing the rest of the patches.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Tue, 26 Sep 2023 18:19:23 +0300",
"msg_from": "Aleksander Alekseev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_resetwal tests, logging, and docs update"
},
{
"msg_contents": "On 26.09.23 17:19, Aleksander Alekseev wrote:\n>> Attached is an updated patch set where I have split the changes into\n>> smaller pieces. The last two patches still have some open questions\n>> about what certain constants mean etc. The other patches should be settled.\n> \n> The patches 0001..0005 seem to be ready and rather independent. I\n> suggest merging them and continue discussing the rest of the patches.\n\nI have committed 0001..0005, and also posted a separate patch to discuss \nand correct the behavior of the -c option. I expect that we will carry \nover this patch set to the next commit fest.\n\n\n\n",
"msg_date": "Fri, 29 Sep 2023 10:02:28 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_resetwal tests, logging, and docs update"
},
{
"msg_contents": "On 29.09.23 10:02, Peter Eisentraut wrote:\n> On 26.09.23 17:19, Aleksander Alekseev wrote:\n>>> Attached is an updated patch set where I have split the changes into\n>>> smaller pieces. The last two patches still have some open questions\n>>> about what certain constants mean etc. The other patches should be \n>>> settled.\n>>\n>> The patches 0001..0005 seem to be ready and rather independent. I\n>> suggest merging them and continue discussing the rest of the patches.\n> \n> I have committed 0001..0005, and also posted a separate patch to discuss \n> and correct the behavior of the -c option. I expect that we will carry \n> over this patch set to the next commit fest.\n\nHere are updated versions of the remaining patches. I took out the \n\"FIXME\" notes about the multipliers applying to the -c option and \nreplaced them by gentler comments. I don't intend to resolve those \nissues here.",
"msg_date": "Sun, 29 Oct 2023 18:23:00 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_resetwal tests, logging, and docs update"
},
{
"msg_contents": "Hi,\n\n> Here are updated versions of the remaining patches. I took out the\n> \"FIXME\" notes about the multipliers applying to the -c option and\n> replaced them by gentler comments. I don't intend to resolve those\n> issues here.\n\nThe patch LGTM. However, postgresql:pg_resetwal test suite doesn't\npass on Windows according to cfbot. Seems to be a matter of picking a\nmore generic regular expression:\n\n```\nat C:/cirrus/src/bin/pg_resetwal/t/001_basic.pl line 54.\n 'pg_resetwal: error: could not change directory to\n\"foo\": No such file or directory\n doesn't match '(?^:error: could not read permissions of directory)'\n```\n\nShould we simply use something like:\n\n```\nqr/error: could not (read|change).* directory/\n```\n\n... instead?\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Mon, 30 Oct 2023 14:55:12 +0300",
"msg_from": "Aleksander Alekseev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_resetwal tests, logging, and docs update"
},
{
"msg_contents": "On 30.10.23 12:55, Aleksander Alekseev wrote:\n> The patch LGTM. However, postgresql:pg_resetwal test suite doesn't\n> pass on Windows according to cfbot. Seems to be a matter of picking a\n> more generic regular expression:\n> \n> ```\n> at C:/cirrus/src/bin/pg_resetwal/t/001_basic.pl line 54.\n> 'pg_resetwal: error: could not change directory to\n> \"foo\": No such file or directory\n> doesn't match '(?^:error: could not read permissions of directory)'\n> ```\n> \n> Should we simply use something like:\n> \n> ```\n> qr/error: could not (read|change).* directory/\n> ```\n\nHmm. I think maybe we should fix the behavior of \nGetDataDirectoryCreatePerm() to be more consistent between Windows and \nnon-Windows. This is usually the first function a program uses on the \nproposed data directory, so it's also responsible for reporting if the \ndata directory does not exist. But then on Windows, because the \nfunction does nothing, those error scenarios end up on quite different \ncode paths, and I'm not sure if those are really checked that carefully. \n I think we can make this more robust if we have \nGetDataDirectoryCreatePerm() still run the stat() call on the proposed \ndata directory and report the error. See attached patch.",
"msg_date": "Tue, 31 Oct 2023 07:50:23 -0400",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_resetwal tests, logging, and docs update"
},
{
"msg_contents": "Hi,\n\n> Hmm. I think maybe we should fix the behavior of\n> GetDataDirectoryCreatePerm() to be more consistent between Windows and\n> non-Windows. This is usually the first function a program uses on the\n> proposed data directory, so it's also responsible for reporting if the\n> data directory does not exist. But then on Windows, because the\n> function does nothing, those error scenarios end up on quite different\n> code paths, and I'm not sure if those are really checked that carefully.\n> I think we can make this more robust if we have\n> GetDataDirectoryCreatePerm() still run the stat() call on the proposed\n> data directory and report the error. See attached patch.\n\nYep, that would be much better.\n\nAttaching all three patches together in order to make sure cfbot is\nstill happy with them while the `master` branch is evolving.\n\nAssuming cfbot will have no complaints I suggest merging them.\n\n-- \nBest regards,\nAleksander Alekseev",
"msg_date": "Wed, 1 Nov 2023 14:12:30 +0300",
"msg_from": "Aleksander Alekseev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_resetwal tests, logging, and docs update"
},
{
"msg_contents": "On 01.11.23 12:12, Aleksander Alekseev wrote:\n> Hi,\n> \n>> Hmm. I think maybe we should fix the behavior of\n>> GetDataDirectoryCreatePerm() to be more consistent between Windows and\n>> non-Windows. This is usually the first function a program uses on the\n>> proposed data directory, so it's also responsible for reporting if the\n>> data directory does not exist. But then on Windows, because the\n>> function does nothing, those error scenarios end up on quite different\n>> code paths, and I'm not sure if those are really checked that carefully.\n>> I think we can make this more robust if we have\n>> GetDataDirectoryCreatePerm() still run the stat() call on the proposed\n>> data directory and report the error. See attached patch.\n> \n> Yep, that would be much better.\n> \n> Attaching all three patches together in order to make sure cfbot is\n> still happy with them while the `master` branch is evolving.\n> \n> Assuming cfbot will have no complaints I suggest merging them.\n\nDone, thanks.\n\n\n\n",
"msg_date": "Mon, 6 Nov 2023 09:38:36 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_resetwal tests, logging, and docs update"
}
] |
[
{
"msg_contents": "Hi,\n\nI have inspected the performance of the GROUP BY and DISTINCT queries for the sorted data streams and found out, that Group node (produced by GROUP BY) works faster then the Unique node (produced by DISTINCT). The flame graph should out the reason - Unique palloc`s tuples for the result slot while the Group node doesn’t.\n\nI wonder, why do we use minimal tuples for the Unique node instead of the virtual ones? It looks like there is no actual reason for that as Unique doesn’t make any materialization.",
"msg_date": "Wed, 30 Aug 2023 23:32:02 +0700",
"msg_from": "=?utf-8?B?0JTQtdC90LjRgSDQodC80LjRgNC90L7Qsg==?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Use virtual tuple slot for Unique node"
},
{
"msg_contents": "On Thu, 31 Aug 2023 at 05:37, Денис Смирнов <[email protected]> wrote:\n> I have inspected the performance of the GROUP BY and DISTINCT queries for the sorted data streams and found out, that Group node (produced by GROUP BY) works faster then the Unique node (produced by DISTINCT). The flame graph should out the reason - Unique palloc`s tuples for the result slot while the Group node doesn’t.\n>\n> I wonder, why do we use minimal tuples for the Unique node instead of the virtual ones? It looks like there is no actual reason for that as Unique doesn’t make any materialization.\n\nIt would be good to see example queries and a demonstration of the\nperformance increase. I'm not disputing your claims, but showing some\nperformance numbers might catch the eye of a reviewer more quickly.\n\nYou should also add this to the September commitfest at\nhttps://commitfest.postgresql.org/44/\n\nDavid\n\n\n",
"msg_date": "Thu, 31 Aug 2023 09:59:59 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Use virtual tuple slot for Unique node"
},
{
"msg_contents": "Here is the example (checked on the current master branch with release build + I've made about 10 runs for each explain analyze to get repeatable results)\n\nBefore the patch:\n\nadb=# create table t(a int, primary key(a));\n\nadb=# insert into t select random() * 5000000\nfrom generate_series(1, 5000000)\non conflict do nothing;\n\nadb=# explain analyze select a from t group by a;\n QUERY PLAN \n---------------------------------------------------------------------------------------------------------------------------------------\n Group (cost=0.43..98761.06 rows=3160493 width=4) (actual time=0.085..1225.139 rows=3160493 loops=1)\n Group Key: a\n -> Index Only Scan using t_pkey on t (cost=0.43..90859.82 rows=3160493 width=4) (actual time=0.081..641.567 rows=3160493 loops=1)\n Heap Fetches: 0\n Planning Time: 0.188 ms\n Execution Time: 1370.027 ms\n(6 rows)\n\n\n\n\nadb=# explain analyze select distinct a from t;\n QUERY PLAN \n---------------------------------------------------------------------------------------------------------------------------------------\n Unique (cost=0.43..98761.06 rows=3160493 width=4) (actual time=0.135..1525.704 rows=3160493 loops=1)\n -> Index Only Scan using t_pkey on t (cost=0.43..90859.82 rows=3160493 width=4) (actual time=0.130..635.742 rows=3160493 loops=1)\n Heap Fetches: 0\n Planning Time: 0.273 ms\n Execution Time: 1660.857 ms\n(5 rows)\n\n\n\nWe can see that ExecCopySlot occupies 24% of the CPU inside ExecUnique function (thanks to palloc in Unique’s minimal tuples). On the other hand ExecCopySlot is only 6% of the ExecGroup function (we use virtual tuples in Group node).\n\nAfter the patch Unique node works a little bit faster then the Group node:\n\nadb=# explain analyze select distinct a from t;\n QUERY PLAN \n---------------------------------------------------------------------------------------------------------------------------------------\n Unique (cost=0.43..98761.06 rows=3160493 width=4) (actual time=0.094..1072.007 rows=3160493 loops=1)\n -> Index Only Scan using t_pkey on t (cost=0.43..90859.82 rows=3160493 width=4) (actual time=0.092..592.619 rows=3160493 loops=1)\n Heap Fetches: 0\n Planning Time: 0.203 ms\n Execution Time: 1209.940 ms\n(5 rows)\n\nadb=# explain analyze select a from t group by a;\n QUERY PLAN \n---------------------------------------------------------------------------------------------------------------------------------------\n Group (cost=0.43..98761.06 rows=3160493 width=4) (actual time=0.074..1140.644 rows=3160493 loops=1)\n Group Key: a\n -> Index Only Scan using t_pkey on t (cost=0.43..90859.82 rows=3160493 width=4) (actual time=0.070..591.930 rows=3160493 loops=1)\n Heap Fetches: 0\n Planning Time: 0.193 ms\n Execution Time: 1276.026 ms\n(6 rows)\n\nI have added current patch to the commitfest.\n\n\n\n> 31 авг. 2023 г., в 04:59, David Rowley <[email protected]> написал(а):\n> \n> On Thu, 31 Aug 2023 at 05:37, Денис Смирнов <[email protected]> wrote:\n>> I have inspected the performance of the GROUP BY and DISTINCT queries for the sorted data streams and found out, that Group node (produced by GROUP BY) works faster then the Unique node (produced by DISTINCT). The flame graph should out the reason - Unique palloc`s tuples for the result slot while the Group node doesn’t.\n>> \n>> I wonder, why do we use minimal tuples for the Unique node instead of the virtual ones? 
It looks like there is no actual reason for that as Unique doesn’t make any materialization.\n> \n> It would be good to see example queries and a demonstration of the\n> performance increase. I'm not disputing your claims, but showing some\n> performance numbers might catch the eye of a reviewer more quickly.\n> \n> You should also add this to the September commitfest at\n> https://commitfest.postgresql.org/44/\n> \n> David",
"msg_date": "Thu, 31 Aug 2023 10:06:56 +0700",
"msg_from": "Denis Smirnov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Use virtual tuple slot for Unique node"
},
{
"msg_contents": "It looks like my patch was not analyzed by the hackers mailing list due to incorrect mime type, so I duplicate it here.\n\n\n\n\n> 31 авг. 2023 г., в 10:06, Denis Smirnov <[email protected]> написал(а):\n> \n> <v2-use-virtual-slots-for-unique-node.patch.txt>",
"msg_date": "Thu, 31 Aug 2023 10:28:20 +0700",
"msg_from": "Denis Smirnov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Use virtual tuple slot for Unique node"
},
{
"msg_contents": "I have made a small research and found out that though the patch itself is correct (i.e. we can benefit from replacing TTSOpsMinimalTuple with TTSOpsVirtual for the Unique node), my explanation WHY was wrong.\n\n1. We always materialize the new unique tuple in the slot, never mind what type of tuple table slots do we use.\n2. But the virtual tuple materialization (tts_virtual_copyslot) have performance benefits over the minimal tuple one (tts_minimal_copyslot):\n - tts_minimal_copyslot always allocates zeroed memory with palloc0 (expensive according to the flame graph);\n - tts_minimal_copyslot() doesn’t allocate additional memory if the tuples are constructed from the passed by value column (but for the variable-size columns we still need memory allocation);\n - if tts_minimal_copyslot() need allocation it doesn’t need to zero the memory;\n\nSo as a result we seriously benefit from virtual TTS for the tuples constructed from the fixed-sized columns when get a Unique node in the plan.\n\n\n\n> 31 авг. 2023 г., в 10:28, Denis Smirnov <[email protected]> написал(а):\n> \n> It looks like my patch was not analyzed by the hackers mailing list due to incorrect mime type, so I duplicate it here.\n> <v2-use-virtual-slots-for-unique-node.patch.txt>\n> \n>> 31 авг. 2023 г., в 10:06, Denis Smirnov <[email protected]> написал(а):\n>> \n>> <v2-use-virtual-slots-for-unique-node.patch.txt>\n>",
"msg_date": "Fri, 1 Sep 2023 00:12:14 +0700",
"msg_from": "Denis Smirnov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Use virtual tuple slot for Unique node"
},
{
"msg_contents": "Again the new patch hasn't been attached to the thread, so resend it.",
"msg_date": "Fri, 1 Sep 2023 00:44:41 +0700",
"msg_from": "=?UTF-8?B?0JTQtdC90LjRgSDQodC80LjRgNC90L7Qsg==?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Use virtual tuple slot for Unique node"
},
{
"msg_contents": "I did a little more perf testing with this. I'm seeing the same benefit \nwith the query you posted. But can we find a case where it's not \nbeneficial? If I understand correctly, when the input slot is a virtual \nslot, it's cheaper to copy it to another virtual slot than to form a \nminimal tuple. Like in your test case. What if the input is a minimial \ntuple?\n\nOn master:\n\npostgres=# set enable_hashagg=off;\nSET\npostgres=# explain analyze select distinct g::text, 'a', 'b', 'c','d', \n'e','f','g','h' from generate_series(1, 5000000) g;\n \nQUERY PLAN \n\n---------------------------------------------------------------------------------------------------------------------------------------------------\n Unique (cost=2630852.42..2655852.42 rows=200 width=288) (actual \ntime=4525.212..6576.992 rows=5000000 loops=1)\n -> Sort (cost=2630852.42..2643352.42 rows=5000000 width=288) \n(actual time=4525.211..5960.967 rows=5000000 loops=1)\n Sort Key: ((g)::text)\n Sort Method: external merge Disk: 165296kB\n -> Function Scan on generate_series g (cost=0.00..75000.00 \nrows=5000000 width=288) (actual time=518.914..1194.702 rows=5000000 loops=1)\n Planning Time: 0.036 ms\n JIT:\n Functions: 5\n Options: Inlining true, Optimization true, Expressions true, \nDeforming true\n Timing: Generation 0.242 ms (Deform 0.035 ms), Inlining 63.457 ms, \nOptimization 29.764 ms, Emission 20.592 ms, Total 114.056 ms\n Execution Time: 6766.399 ms\n(11 rows)\n\n\nWith the patch:\n\npostgres=# set enable_hashagg=off;\nSET\npostgres=# explain analyze select distinct g::text, 'a', 'b', 'c','d', \n'e','f','g','h' from generate_series(1, 5000000) g;\n \nQUERY PLAN \n\n---------------------------------------------------------------------------------------------------------------------------------------------------\n Unique (cost=2630852.42..2655852.42 rows=200 width=288) (actual \ntime=4563.639..7362.467 rows=5000000 loops=1)\n -> Sort (cost=2630852.42..2643352.42 rows=5000000 width=288) \n(actual time=4563.637..6069.000 rows=5000000 loops=1)\n Sort Key: ((g)::text)\n Sort Method: external merge Disk: 165296kB\n -> Function Scan on generate_series g (cost=0.00..75000.00 \nrows=5000000 width=288) (actual time=528.060..1191.483 rows=5000000 loops=1)\n Planning Time: 0.720 ms\n JIT:\n Functions: 5\n Options: Inlining true, Optimization true, Expressions true, \nDeforming true\n Timing: Generation 0.406 ms (Deform 0.065 ms), Inlining 68.385 ms, \nOptimization 21.656 ms, Emission 21.033 ms, Total 111.480 ms\n Execution Time: 7585.306 ms\n(11 rows)\n\n\nSo not a win in this case. Could you peek at the outer slot type, and \nuse the same kind of slot for the Unique's result? Or some more \ncomplicated logic, like use a virtual slot if all the values are \npass-by-val? I'd also like to keep this simple, though...\n\nWould this kind of optimization make sense elsewhere?\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Fri, 22 Sep 2023 15:36:44 +0300",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Use virtual tuple slot for Unique node"
},
{
"msg_contents": "On Sat, 23 Sept 2023 at 03:15, Heikki Linnakangas <[email protected]> wrote:\n> So not a win in this case. Could you peek at the outer slot type, and\n> use the same kind of slot for the Unique's result? Or some more\n> complicated logic, like use a virtual slot if all the values are\n> pass-by-val? I'd also like to keep this simple, though...\n>\n> Would this kind of optimization make sense elsewhere?\n\nThere are a few usages of ExecGetResultSlotOps(). e.g ExecInitHashJoin().\n\nIf I adjust the patch to:\n\n- ExecInitResultTupleSlotTL(&uniquestate->ps, &TTSOpsMinimalTuple);\n+ ExecInitResultTupleSlotTL(&uniquestate->ps,\n+\nExecGetResultSlotOps(outerPlanState(uniquestate),\n+\n NULL));\n\nThen I get the following performance on my Zen2 machine.\n\nTest 1\n\ndrop table if exists t;\ncreate table t(a int, b int);\ninsert into t select x,x from generate_series(1,1000000)x;\ncreate index on t (a,b);\nvacuum analyze t;\n\nexplain (analyze, timing off) select distinct a,b from t;\n\nMaster:\nExecution Time: 149.669 ms\nExecution Time: 149.019 ms\nExecution Time: 151.240 ms\n\nPatched:\nExecution Time: 96.950 ms\nExecution Time: 94.509 ms\nExecution Time: 93.498 ms\n\nTest 2\n\ndrop table if exists t;\ncreate table t(a text, b text);\ninsert into t select x::text,x::text from generate_series(1,1000000)x;\ncreate index on t (a,b);\nvacuum analyze t;\n\nexplain (analyze, timing off) select distinct a,b from t;\n\nMaster:\nExecution Time: 185.282 ms\nExecution Time: 178.948 ms\nExecution Time: 179.217 ms\n\nPatched:\nExecution Time: 141.031 ms\nExecution Time: 141.136 ms\nExecution Time: 142.163 ms\n\nTest 3\n\nset enable_hashagg=off;\nexplain (analyze, timing off) select distinct g::text, 'a', 'b',\n'c','d', 'e','f','g','h' from generate_series(1, 50000) g;\n\nMaster:\nExecution Time: 87.599 ms\nExecution Time: 87.721 ms\nExecution Time: 87.635 ms\n\nPatched:\nExecution Time: 83.449 ms\nExecution Time: 84.314 ms\nExecution Time: 86.239 ms\n\nDavid\n\n\n",
"msg_date": "Wed, 27 Sep 2023 20:01:06 +1300",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Use virtual tuple slot for Unique node"
},
{
"msg_contents": "On Wed, 27 Sept 2023 at 20:01, David Rowley <[email protected]> wrote:\n>\n> On Sat, 23 Sept 2023 at 03:15, Heikki Linnakangas <[email protected]> wrote:\n> > So not a win in this case. Could you peek at the outer slot type, and\n> > use the same kind of slot for the Unique's result? Or some more\n> > complicated logic, like use a virtual slot if all the values are\n> > pass-by-val? I'd also like to keep this simple, though...\n> >\n> > Would this kind of optimization make sense elsewhere?\n>\n> There are a few usages of ExecGetResultSlotOps(). e.g ExecInitHashJoin().\n>\n> If I adjust the patch to:\n>\n> - ExecInitResultTupleSlotTL(&uniquestate->ps, &TTSOpsMinimalTuple);\n> + ExecInitResultTupleSlotTL(&uniquestate->ps,\n> +\n> ExecGetResultSlotOps(outerPlanState(uniquestate),\n> +\n> NULL));\n\nJust to keep this from going cold, here's that in patch form for\nanyone who wants to test.\n\nI spent a bit more time running some more benchmarks and I don't see\nany workload where it slows things down. I'd be happy if someone else\nhad a go at finding a regression.\n\nDavid",
"msg_date": "Tue, 10 Oct 2023 21:48:51 +1300",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Use virtual tuple slot for Unique node"
},
{
"msg_contents": "On Tue, Oct 10, 2023 at 2:23 PM David Rowley <[email protected]> wrote:\n>\n> On Wed, 27 Sept 2023 at 20:01, David Rowley <[email protected]> wrote:\n> >\n> > On Sat, 23 Sept 2023 at 03:15, Heikki Linnakangas <[email protected]> wrote:\n> > > So not a win in this case. Could you peek at the outer slot type, and\n> > > use the same kind of slot for the Unique's result? Or some more\n> > > complicated logic, like use a virtual slot if all the values are\n> > > pass-by-val? I'd also like to keep this simple, though...\n> > >\n> > > Would this kind of optimization make sense elsewhere?\n> >\n> > There are a few usages of ExecGetResultSlotOps(). e.g ExecInitHashJoin().\n> >\n> > If I adjust the patch to:\n> >\n> > - ExecInitResultTupleSlotTL(&uniquestate->ps, &TTSOpsMinimalTuple);\n> > + ExecInitResultTupleSlotTL(&uniquestate->ps,\n> > +\n> > ExecGetResultSlotOps(outerPlanState(uniquestate),\n> > +\n> > NULL));\n>\n> Just to keep this from going cold, here's that in patch form for\n> anyone who wants to test.\n\nThanks.\n\nI don't recollect why we chose MinimalTupleSlot here - may be because\nwe expected the underlying node to always produce a minimal tupe. But\nUnique node copies the tuple returned by the underlying node. This\ncopy is carried out by the TupleTableSlot specific copy function\ncopyslot. Every implementation of this function first converts the\nsource slot tuple into the required form and then copies it. Having\nboth the TupleTableSlots, ouput slot from the underlying node and the\noutput slot of Unique node, of the same type avoids the first step and\njust copies the slot. It makes sense that it performs better. The code\nlooks fine to me.\n\n>\n> I spent a bit more time running some more benchmarks and I don't see\n> any workload where it slows things down. 
I'd be happy if someone else\n> had a go at finding a regression.\n\nI built on your experiments and I might have found a minor regression.\n\nSetup\n=====\ndrop table if exists t_int;\ncreate table t_int(a int, b int);\ninsert into t_int select x, x from generate_series(1,1000000)x;\ncreate index on t_int (a,b);\nvacuum analyze t_int;\n\ndrop table if exists t_text;\ncreate table t_text(a text, b text);\ninsert into t_text select lpad(x::text, 1000, '0'), x::text from\ngenerate_series(1,1000000)x;\ncreate index on t_text (a,b);\nvacuum analyze t_text;\n\ndrop table if exists t_mixed; -- this one is new but it doesn't matter much\ncreate table t_mixed(a text, b int);\ninsert into t_mixed select lpad(x::text, 1000, '0'), x from\ngenerate_series(1,1000000)x;\ncreate index on t_mixed (a,b);\nvacuum analyze t_mixed;\n\nQueries and measurements (average execution time from 3 runs - on my\nThinkpad T490)\n======================\nQ1 select distinct a,b from t_int';\nHEAD: 544.45 ms\npatched: 381.55 ms\n\nQ2 select distinct a,b from t_text\nHEAD: 609.90 ms\npatched: 513.42 ms\n\nQ3 select distinct a,b from t_mixed\nHEAD: 626.80 ms\npatched: 468.22 ms\n\nThe more the pass by ref data, more memory is allocated which seems to\nreduce the gain by this patch.\nAbove nodes use Buffer or HeapTupleTableSlot.\nTry some different nodes which output minimal or virtual TTS.\n\nset enable_hashagg to off;\nQ4 select distinct a,b from (select sum(a) over (order by a rows 2\npreceding) a, b from t_int) q\nHEAD: 2529.58 ms\npatched: 2332.23\n\nQ5 select distinct a,b from (select sum(a) over (order by a rows 2\npreceding) a, b from t_int order by a, b) q\nHEAD: 2633.69 ms\npatched: 2255.99 ms\n\n\nQ6 select distinct a,b from (select string_agg(a, ', ') over (order by\na rows 2 preceding) a, b from t_text) q\nHEAD: 108589.85 ms\npatched: 107226.82 ms\n\nQ7 select distinct a,b from (select string_agg(left(a, 100), ', ')\nover (order by a rows 2 preceding) a, b from t_text) q\nHEAD: 16070.62 ms\npatched: 16182.16 ms\n\nThis one is surprising though. May be the advantage of using the same\ntuple table slot is so narrow when large data needs to be copied that\nthe execution times almost match. The patched and unpatched execution\ntimes differ by the margin of error either way.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Thu, 12 Oct 2023 15:35:53 +0530",
"msg_from": "Ashutosh Bapat <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Use virtual tuple slot for Unique node"
},
{
"msg_contents": "On Thu, 12 Oct 2023 at 23:06, Ashutosh Bapat\n<[email protected]> wrote:\n> Q7 select distinct a,b from (select string_agg(left(a, 100), ', ')\n> over (order by a rows 2 preceding) a, b from t_text) q\n> HEAD: 16070.62 ms\n> patched: 16182.16 ms\n\nDid you time the SELECT or EXPLAIN ANALYZE?\n\nWith SELECT, I'm unable to recreate this slowdown. Using your setup:\n\n$ cat bench.sql\nset enable_hashagg=0;\nset work_mem='10GB';\nselect distinct a,b from (select string_agg(left(a, 100), ', ') over\n(order by a rows 2 preceding) a, b from t_text) q;\n\nMaster @ 13d00729d\n$ pgbench -n -f bench.sql -T 300 postgres | grep latency\nlatency average = 7739.250 ms\n\nMaster + use_subnode_slot_type_for_nodeunique.patch\n$ pgbench -n -f bench.sql -T 300 postgres | grep latency\nlatency average = 7718.007 ms\n\nIt's hard to imagine why there would be a slowdown as this query uses\na TTSOpsMinimalTuple slot type in the patch and the unpatched version.\n\nDavid\n\n\n",
"msg_date": "Thu, 19 Oct 2023 22:29:17 +1300",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Use virtual tuple slot for Unique node"
},
{
"msg_contents": "On Thu, 19 Oct 2023 at 22:29, David Rowley <[email protected]> wrote:\n> It's hard to imagine why there would be a slowdown as this query uses\n> a TTSOpsMinimalTuple slot type in the patch and the unpatched version.\n\nI shrunk down your table sizes to 10k rows instead of 1 million rows\nto reduce the CPU cache pressure on the queries.\n\nI ran pgbench for 1 minute on each query and did pg_prewarm on each\ntable. Here are the times I got in milliseconds:\n\nQuery master Master + 0001 compare\nQ1 2.576 1.979 130.17%\nQ2 9.546 9.941 96.03%\nQ3 9.069 9.536 95.10%\nQ4 7.285 7.208 101.07%\nQ5 7.585 6.904 109.86%\nQ6 162.253 161.434 100.51%\nQ7 62.507 58.922 106.08%\n\nI also noted down the slot type that nodeUnique.c is using in each of\nthe queries:\n\nQ1 TTSOpsVirtual\nQ2 TTSOpsVirtual\nQ3 TTSOpsVirtual\nQ4 TTSOpsMinimalTuple\nQ5 TTSOpsVirtual\nQ6 TTSOpsMinimalTuple\nQ7 TTSOpsMinimalTuple\n\nSo, I'm not really expecting Q4, Q6 or Q7 to change much. However, Q7\ndoes seem to be above noise level faster and I'm not sure why.\n\nWe can see that Q2 and Q3 become a bit slower. This makes sense as\ntts_virtual_materialize() is quite a bit more complex than\nheap_copy_minimal_tuple() which is a simple palloc/memcpy.\n\nWe'd likely see Q2 and Q3 do better with the patched version if there\nwere more duplicates as there'd be less tuple deforming going on\nbecause of the virtual slots.\n\nOverall, the patched version is 5.55% faster than master. However,\nit's pretty hard to say if we should do this or not. Q3 has a mix of\nvarlena and byval types and that came out slower with the patched\nversion.\n\nI've attached the script I used to get the results and the setup,\nwhich is just your tables shrunk down to 10k rows.\n\nDavid",
"msg_date": "Thu, 19 Oct 2023 23:56:14 +1300",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Use virtual tuple slot for Unique node"
},
{
"msg_contents": "On Thu, Oct 19, 2023 at 4:26 PM David Rowley <[email protected]> wrote:\n>\n> On Thu, 19 Oct 2023 at 22:29, David Rowley <[email protected]> wrote:\n> > It's hard to imagine why there would be a slowdown as this query uses\n> > a TTSOpsMinimalTuple slot type in the patch and the unpatched version.\n>\n> I shrunk down your table sizes to 10k rows instead of 1 million rows\n> to reduce the CPU cache pressure on the queries.\n>\n> I ran pgbench for 1 minute on each query and did pg_prewarm on each\n> table. Here are the times I got in milliseconds:\n>\n> Query master Master + 0001 compare\n> Q1 2.576 1.979 130.17%\n> Q2 9.546 9.941 96.03%\n> Q3 9.069 9.536 95.10%\n> Q4 7.285 7.208 101.07%\n> Q5 7.585 6.904 109.86%\n> Q6 162.253 161.434 100.51%\n> Q7 62.507 58.922 106.08%\n>\n> I also noted down the slot type that nodeUnique.c is using in each of\n> the queries:\n>\n> Q1 TTSOpsVirtual\n> Q2 TTSOpsVirtual\n> Q3 TTSOpsVirtual\n> Q4 TTSOpsMinimalTuple\n> Q5 TTSOpsVirtual\n> Q6 TTSOpsMinimalTuple\n> Q7 TTSOpsMinimalTuple\n>\n> So, I'm not really expecting Q4, Q6 or Q7 to change much. However, Q7\n> does seem to be above noise level faster and I'm not sure why.\n\nI ran my experiments again. It seems on my machine the execution times\ndo vary a bit. I ran EXPLAIN ANALYZE on the query 5 times and took\naverage of execution times. I did this three times. For each run the\nstandard deviation was within 2%. Here are the numbers\nmaster: 13548.33, 13878.88, 14572.52\nmaster + 0001: 13734.58, 14193.83, 14574.73\n\nSo for me, I would say, this particular query performs the same with\nor without patch.\n\n>\n> We can see that Q2 and Q3 become a bit slower. This makes sense as\n> tts_virtual_materialize() is quite a bit more complex than\n> heap_copy_minimal_tuple() which is a simple palloc/memcpy.\n>\n\nIf the source slot is a materialized virtual slot,\ntts_virtual_copyslot() could perform a memcpy of the materialized data\nitself rather than materialising from datums. That might be more\nefficient.\n\n> We'd likely see Q2 and Q3 do better with the patched version if there\n> were more duplicates as there'd be less tuple deforming going on\n> because of the virtual slots.\n>\n> Overall, the patched version is 5.55% faster than master. However,\n> it's pretty hard to say if we should do this or not. Q3 has a mix of\n> varlena and byval types and that came out slower with the patched\n> version.\n\nTheoretically using the same slot type is supposed to be faster. We\nuse same slot types for input and output in other places where as\nwell. May be we should fix the above said inefficiency in\ntt_virtual_copyslot()?\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Fri, 20 Oct 2023 15:00:13 +0530",
"msg_from": "Ashutosh Bapat <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Use virtual tuple slot for Unique node"
},
{
"msg_contents": "On Fri, 20 Oct 2023 at 22:30, Ashutosh Bapat\n<[email protected]> wrote:\n> I ran my experiments again. It seems on my machine the execution times\n> do vary a bit. I ran EXPLAIN ANALYZE on the query 5 times and took\n> average of execution times. I did this three times. For each run the\n> standard deviation was within 2%. Here are the numbers\n> master: 13548.33, 13878.88, 14572.52\n> master + 0001: 13734.58, 14193.83, 14574.73\n>\n> So for me, I would say, this particular query performs the same with\n> or without patch.\n\nI'm not really sure which of the 7 queries you're referring to here.\nThe times you're quoting seem to align best to Q7 from your previous\nresults, so I'll assume you mean Q7.\n\nI'm not really concerned with Q7 as both patched and unpatched use\nTTSOpsMinimalTuple.\n\nI also think you need to shrink the size of your benchmark down. With\n1 million tuples, you're more likely to be also measuring the time it\ntakes to get cache lines from memory into the CPU. A smaller scale\ntest will make this less likely. Also, you'd be better timing SELECT\nrather than the time it takes to EXPLAIN ANALYZE. They're not the same\nthing. EXPLAIN ANALYZE has additional timing going on and we may end\nup not de-toasting toasted Datums.\n\n> On Thu, Oct 19, 2023 at 4:26 PM David Rowley <[email protected]> wrote:\n> > We can see that Q2 and Q3 become a bit slower. This makes sense as\n> > tts_virtual_materialize() is quite a bit more complex than\n> > heap_copy_minimal_tuple() which is a simple palloc/memcpy.\n> >\n>\n> If the source slot is a materialized virtual slot,\n> tts_virtual_copyslot() could perform a memcpy of the materialized data\n> itself rather than materialising from datums. That might be more\n> efficient.\n\nI think you're talking about just performing a memcpy() of the\nVirtualTupleTableSlot->data field. Unfortunately, you'd not be able\nto do just that as you'd also need to repoint the non-byval Datums in\ntts_values at the newly memcpy'd memory. If you skipped that part,\nthose would remain pointing to the original memory. If that memory\ngoes away, then bad things will happen. I think you'd still need to do\nthe 2nd loop in tts_virtual_materialize()\n\n> > We'd likely see Q2 and Q3 do better with the patched version if there\n> > were more duplicates as there'd be less tuple deforming going on\n> > because of the virtual slots.\n> >\n> > Overall, the patched version is 5.55% faster than master. However,\n> > it's pretty hard to say if we should do this or not. Q3 has a mix of\n> > varlena and byval types and that came out slower with the patched\n> > version.\n>\n> Theoretically using the same slot type is supposed to be faster. We\n> use same slot types for input and output in other places where as\n> well.\n\nWhich theory?\n\n> May be we should fix the above said inefficiency in\n> tt_virtual_copyslot()?\n\nI don't have any bright ideas on how to make tts_virtual_materialize()\nitself faster. If there were some way to remember !attbyval\nattributes for the 2nd loop, that might be good, but creating\nsomewhere to store that might result in further overheads.\n\ntts_virtual_copyslot() perhaps could be sped up a little by doing a\nmemcpy of the values/isnull arrays when the src and dst descriptors\nhave the same number of attributes. 
aka, something like:\n\nif (srcdesc->natts == dstslot->tts_tupleDescriptor->natts)\n memcpy(dstslot->tts_values, srcslot->tts_values,\n MAXALIGN(srcdesc->natts * sizeof(Datum)) +\n MAXALIGN(srcdesc->natts * sizeof(bool)));\nelse\n{\n for (int natt = 0; natt < srcdesc->natts; natt++)\n {\n dstslot->tts_values[natt] = srcslot->tts_values[natt];\n dstslot->tts_isnull[natt] = srcslot->tts_isnull[natt];\n }\n}\n\nI imagine we'd only start to notice gains by doing that for larger\nnatts values. Each of the 7 queries here, I imagine, wouldn't have\nenough columns for it to make much of a difference.\n\nThat seems valid enough to do based on how MakeTupleTableSlot()\nallocates those arrays. ExecSetSlotDescriptor() does not seem on board\nwith the single allocation method, however. (Those pfree's in\nExecSetSlotDescriptor() do look a bit fragile.\nhttps://coverage.postgresql.org/src/backend/executor/execTuples.c.gcov.html\nsays they're never called)\n\nDavid\n\n\n",
"msg_date": "Tue, 24 Oct 2023 11:59:52 +1300",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Use virtual tuple slot for Unique node"
},
{
"msg_contents": "On Tue, Oct 24, 2023 at 4:30 AM David Rowley <[email protected]> wrote:\n>\n> On Fri, 20 Oct 2023 at 22:30, Ashutosh Bapat\n> <[email protected]> wrote:\n> > I ran my experiments again. It seems on my machine the execution times\n> > do vary a bit. I ran EXPLAIN ANALYZE on the query 5 times and took\n> > average of execution times. I did this three times. For each run the\n> > standard deviation was within 2%. Here are the numbers\n> > master: 13548.33, 13878.88, 14572.52\n> > master + 0001: 13734.58, 14193.83, 14574.73\n> >\n> > So for me, I would say, this particular query performs the same with\n> > or without patch.\n>\n> I'm not really sure which of the 7 queries you're referring to here.\n> The times you're quoting seem to align best to Q7 from your previous\n> results, so I'll assume you mean Q7.\n>\n> I'm not really concerned with Q7 as both patched and unpatched use\n> TTSOpsMinimalTuple.\n\nIt's Q7. Yes. I was responding to your statement: \" However, Q7 does\nseem to be above noise level faster and I'm not sure why.\". Anyway, we\ncan set that aside.\n\n>\n> I also think you need to shrink the size of your benchmark down. With\n> 1 million tuples, you're more likely to be also measuring the time it\n> takes to get cache lines from memory into the CPU. A smaller scale\n> test will make this less likely. Also, you'd be better timing SELECT\n> rather than the time it takes to EXPLAIN ANALYZE. They're not the same\n> thing. EXPLAIN ANALYZE has additional timing going on and we may end\n> up not de-toasting toasted Datums.\n\nI ran experiments with 10K rows and measured timing using \\timing in\npsql. The measurements are much more flaky than a larger set of rows\nand EXPLAIN ANALYZE. But I think your observations are good enough.\n\n>\n> > On Thu, Oct 19, 2023 at 4:26 PM David Rowley <[email protected]> wrote:\n> > > We can see that Q2 and Q3 become a bit slower. This makes sense as\n> > > tts_virtual_materialize() is quite a bit more complex than\n> > > heap_copy_minimal_tuple() which is a simple palloc/memcpy.\n> > >\n> >\n> > If the source slot is a materialized virtual slot,\n> > tts_virtual_copyslot() could perform a memcpy of the materialized data\n> > itself rather than materialising from datums. That might be more\n> > efficient.\n>\n> I think you're talking about just performing a memcpy() of the\n> VirtualTupleTableSlot->data field. Unfortunately, you'd not be able\n> to do just that as you'd also need to repoint the non-byval Datums in\n> tts_values at the newly memcpy'd memory. If you skipped that part,\n> those would remain pointing to the original memory. If that memory\n> goes away, then bad things will happen. I think you'd still need to do\n> the 2nd loop in tts_virtual_materialize()\n\nYes, we will need repoint non-byval Datums ofc.\n\n>\n> > May be we should fix the above said inefficiency in\n> > tt_virtual_copyslot()?\n>\n> I don't have any bright ideas on how to make tts_virtual_materialize()\n> itself faster. If there were some way to remember !attbyval\n> attributes for the 2nd loop, that might be good, but creating\n> somewhere to store that might result in further overheads.\n\nWe may save the size of data in VirtualTupleTableSlot, thus avoiding\nthe first loop. I assume that when allocating\nVirtualTupleTableSlot->data, we always know what size we are\nallocating so it should be just a matter of saving it in\nVirtualTupleTableSlot->size. This should avoid the first loop in\ntts_virtual_materialize() and give some speed up. 
We will need a loop\nto repoint non-byval Datums. I imagine that the pointers to non-byval\nDatums can be computed as dest_slot->tts_values[natts] =\ndest_vslot->data + (src_slot->tts_values[natts] - src_vslot->data).\nThis would work as long as all the non-byval datums in the source slot\nare all stored flattened in source slot's data. I am assuming that\nthat would be true in a materialized virtual slot. The byval datums\nare copied as is. I think, this will avoid multiple memcpy calls, one\nper non-byval attribute and hence some speedup. I may be wrong though.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Wed, 25 Oct 2023 15:18:01 +0530",
"msg_from": "Ashutosh Bapat <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Use virtual tuple slot for Unique node"
},
{
"msg_contents": "On Wed, 25 Oct 2023 at 22:48, Ashutosh Bapat\n<[email protected]> wrote:\n> We may save the size of data in VirtualTupleTableSlot, thus avoiding\n> the first loop. I assume that when allocating\n> VirtualTupleTableSlot->data, we always know what size we are\n> allocating so it should be just a matter of saving it in\n> VirtualTupleTableSlot->size. This should avoid the first loop in\n> tts_virtual_materialize() and give some speed up. We will need a loop\n> to repoint non-byval Datums. I imagine that the pointers to non-byval\n> Datums can be computed as dest_slot->tts_values[natts] =\n> dest_vslot->data + (src_slot->tts_values[natts] - src_vslot->data).\n> This would work as long as all the non-byval datums in the source slot\n> are all stored flattened in source slot's data. I am assuming that\n> that would be true in a materialized virtual slot. The byval datums\n> are copied as is. I think, this will avoid multiple memcpy calls, one\n> per non-byval attribute and hence some speedup. I may be wrong though.\n\nhmm, do you think it's common enough that we copy an already\nmaterialised virtual slot?\n\nI tried adding the following code totts_virtual_copyslot and didn't\nsee the NOTICE message when running each of your test queries. \"make\ncheck\" also worked without anything failing after adjusting\nnodeUnique.c to always use a TTSOpsVirtual slot.\n\n+ if (srcslot->tts_ops == &TTSOpsVirtual && TTS_SHOULDFREE(srcslot))\n+ elog(NOTICE, \"We copied a materialized virtual slot!\");\n\nI did get a failure in postgres_fdw's tests:\n\n server loopback options (table_name 'tab_batch_sharded_p1_remote');\n insert into tab_batch_sharded select * from tab_batch_local;\n+NOTICE: We copied a materialized virtual slot!\n+NOTICE: We copied a materialized virtual slot!\n\nso I think it's probably not that common that we'd gain from that optimisation.\n\nWhat might buy us a bit more would be to get rid of the for loop\ninside tts_virtual_copyslot() and copy the guts of\ntts_virtual_materialize() into tts_virtual_copyslot() and set the\ndstslot tts_isnull and tts_values arrays in the same loop that we use\nto calculate the size.\n\nI tried that in the attached patch and then tested it alongside the\npatch that changes the slot type.\n\nmaster = 74604a37f\n1 = [1]\n2 = optimize_tts_virtual_copyslot.patch\n\nUsing the script from [2] and the setup from [3] but reduced to 10k\ntuples instead of 1 million.\n\nTimes the average query time in milliseconds for a 60 second pgbench run.\n\nquery master master+1 master+1+2 m/m+1 m/m+1+2\nQ1 2.616 2.082 1.903 125.65%\n 137.47%\nQ2 9.479 10.593 10.361 89.48%\n 91.49%\nQ3 10.329 10.357 10.627 99.73%\n 97.20%\nQ4 7.272 7.306 6.941 99.53%\n 104.77%\nQ5 7.597 7.043 6.645 107.87%\n 114.33%\nQ6 162.177 161.037 162.807 100.71% 99.61%\nQ7 59.288 59.43 61.294 99.76%\n 96.73%\n\n 103.25% 105.94%\n\nI only expect Q2 and Q3 to gain from this. Q1 shouldn't improve but\ndid, so the results might not be stable enough to warrant making any\ndecisions from.\n\nI was uncertain if the old behaviour of when srcslot contains fewer\nattributes than dstslot was intended or not. What happens there is\nthat we'd leave the additional old dstslot tts_values in place and\nonly overwrite up to srcslot->natts but then we'd go on and\nmaterialize all the dstslot attributes. I think this might not be\nneeded as we do dstslot->tts_nvalid = srcdesc->natts. I suspect we may\nbe ok just to materialize the srcslot attributes and ignore the\nprevious additional dstslot attributes. 
Changing it to that would\nmake the attached patch more simple.\n\nDavid\n\n[1] https://www.postgresql.org/message-id/attachment/151110/use_subnode_slot_type_for_nodeunique.patch\n[2] https://www.postgresql.org/message-id/attachment/151342/uniquebench.sh.txt\n[3] https://www.postgresql.org/message-id/CAExHW5uhTMdkk26oJg9f2ZVufbi5J4Lquj79MdSO%2BipnGJ_muw%40mail.gmail.com",
"msg_date": "Fri, 27 Oct 2023 16:18:10 +1300",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Use virtual tuple slot for Unique node"
},
{
"msg_contents": "On Fri, Oct 27, 2023 at 8:48 AM David Rowley <[email protected]> wrote:\n>\n> On Wed, 25 Oct 2023 at 22:48, Ashutosh Bapat\n> <[email protected]> wrote:\n> > We may save the size of data in VirtualTupleTableSlot, thus avoiding\n> > the first loop. I assume that when allocating\n> > VirtualTupleTableSlot->data, we always know what size we are\n> > allocating so it should be just a matter of saving it in\n> > VirtualTupleTableSlot->size. This should avoid the first loop in\n> > tts_virtual_materialize() and give some speed up. We will need a loop\n> > to repoint non-byval Datums. I imagine that the pointers to non-byval\n> > Datums can be computed as dest_slot->tts_values[natts] =\n> > dest_vslot->data + (src_slot->tts_values[natts] - src_vslot->data).\n> > This would work as long as all the non-byval datums in the source slot\n> > are all stored flattened in source slot's data. I am assuming that\n> > that would be true in a materialized virtual slot. The byval datums\n> > are copied as is. I think, this will avoid multiple memcpy calls, one\n> > per non-byval attribute and hence some speedup. I may be wrong though.\n>\n> hmm, do you think it's common enough that we copy an already\n> materialised virtual slot?\n>\n> I tried adding the following code totts_virtual_copyslot and didn't\n> see the NOTICE message when running each of your test queries. \"make\n> check\" also worked without anything failing after adjusting\n> nodeUnique.c to always use a TTSOpsVirtual slot.\n>\n> + if (srcslot->tts_ops == &TTSOpsVirtual && TTS_SHOULDFREE(srcslot))\n> + elog(NOTICE, \"We copied a materialized virtual slot!\");\n>\n> I did get a failure in postgres_fdw's tests:\n>\n> server loopback options (table_name 'tab_batch_sharded_p1_remote');\n> insert into tab_batch_sharded select * from tab_batch_local;\n> +NOTICE: We copied a materialized virtual slot!\n> +NOTICE: We copied a materialized virtual slot!\n>\n> so I think it's probably not that common that we'd gain from that optimisation.\n\nThanks for this analysis. If we aren't copying a materialized virtual\nslot often, no point in adding that optimization.\n\n>\n> What might buy us a bit more would be to get rid of the for loop\n> inside tts_virtual_copyslot() and copy the guts of\n> tts_virtual_materialize() into tts_virtual_copyslot() and set the\n> dstslot tts_isnull and tts_values arrays in the same loop that we use\n> to calculate the size.\n>\n> I tried that in the attached patch and then tested it alongside the\n> patch that changes the slot type.\n>\n> master = 74604a37f\n> 1 = [1]\n> 2 = optimize_tts_virtual_copyslot.patch\n>\n> Using the script from [2] and the setup from [3] but reduced to 10k\n> tuples instead of 1 million.\n>\n> Times the average query time in milliseconds for a 60 second pgbench run.\n>\n> query master master+1 master+1+2 m/m+1 m/m+1+2\n> Q1 2.616 2.082 1.903 125.65%\n> 137.47%\n> Q2 9.479 10.593 10.361 89.48%\n> 91.49%\n> Q3 10.329 10.357 10.627 99.73%\n> 97.20%\n> Q4 7.272 7.306 6.941 99.53%\n> 104.77%\n> Q5 7.597 7.043 6.645 107.87%\n> 114.33%\n> Q6 162.177 161.037 162.807 100.71% 99.61%\n> Q7 59.288 59.43 61.294 99.76%\n> 96.73%\n>\n> 103.25% 105.94%\n>\n> I only expect Q2 and Q3 to gain from this. Q1 shouldn't improve but\n> did, so the results might not be stable enough to warrant making any\n> decisions from.\n\nI am actually surprised to see that eliminating loop is showing\nimprovements. There aren't hundreds of attributes involved in those\nqueries. So I share your stability concerns. 
And even with these\npatches, Q2 and Q3 are still slower.\n\nQ1 is consistently giving performance > 25% for both of us. But other\nqueries aren't showing a whole lot improvement. So I am wondering\nwhether it's worth pursuing this change; similar to the opinion you\nexpressed a few emails earlier.\n\n>\n> I was uncertain if the old behaviour of when srcslot contains fewer\n> attributes than dstslot was intended or not. What happens there is\n> that we'd leave the additional old dstslot tts_values in place and\n> only overwrite up to srcslot->natts but then we'd go on and\n> materialize all the dstslot attributes. I think this might not be\n> needed as we do dstslot->tts_nvalid = srcdesc->natts. I suspect we may\n> be ok just to materialize the srcslot attributes and ignore the\n> previous additional dstslot attributes. Changing it to that would\n> make the attached patch more simple.\n\nWe seem to use both tt_nvalid and tts_tupleDescriptor->natts. I forgot\nwhat's the difference. If we do what you say, we might end up trying\nto access unmaterialized values beyond tts_nvalid. Better to\ninvestigate whether such a hazard exists.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Fri, 27 Oct 2023 14:35:12 +0530",
"msg_from": "Ashutosh Bapat <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Use virtual tuple slot for Unique node"
},
{
"msg_contents": "On Fri, 27 Oct 2023 at 22:05, Ashutosh Bapat\n<[email protected]> wrote:\n>\n> On Fri, Oct 27, 2023 at 8:48 AM David Rowley <[email protected]> wrote:\n> > I was uncertain if the old behaviour of when srcslot contains fewer\n> > attributes than dstslot was intended or not. What happens there is\n> > that we'd leave the additional old dstslot tts_values in place and\n> > only overwrite up to srcslot->natts but then we'd go on and\n> > materialize all the dstslot attributes. I think this might not be\n> > needed as we do dstslot->tts_nvalid = srcdesc->natts. I suspect we may\n> > be ok just to materialize the srcslot attributes and ignore the\n> > previous additional dstslot attributes. Changing it to that would\n> > make the attached patch more simple.\n>\n> We seem to use both tt_nvalid and tts_tupleDescriptor->natts. I forgot\n> what's the difference. If we do what you say, we might end up trying\n> to access unmaterialized values beyond tts_nvalid. Better to\n> investigate whether such a hazard exists.\n\nThe TupleDesc's natts is the number of attributes in the tuple\ndescriptor. tts_nvalid is the greatest attribute number that's been\ndeformed in the tuple slot. For slot types other than virtual slots,\nwe'll call slot_getsomeattrs() to deform more attributes from the\ntuple.\n\nThe reason the code in question looks suspicious to me is that we do\n\"dstslot->tts_nvalid = srcdesc->natts;\" and there's no way to deform\nmore attributes in a virtual slot. Note that\ntts_virtual_getsomeattrs() unconditionally does elog(ERROR,\n\"getsomeattrs is not required to be called on a virtual tuple table\nslot\");. We shouldn't ever be accessing tts_values elements above\nwhat tts_nvalid is set to, so either we should be setting\ndstslot->tts_nvalid = to the dstdesc->natts so that we can access\ntts_values elements above srcdesc->natts or we're needlessly\nmaterialising too many attributes in tts_virtual_copyslot().\n\nDavid\n\n\n\nDavid\n\n\n",
"msg_date": "Sun, 29 Oct 2023 23:30:44 +1300",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Use virtual tuple slot for Unique node"
},
{
"msg_contents": "I have taken a look at this discussion, at the code and I am confused how we choose tuple table slot (TTS) type in PG. May be you can clarify this topic or me. \n\n1. Brief intro. There are four types of TTS. Plan tree «leaves»:\n- buffer heap (produced by index and table scans, has system columns and keeps shared buffer pins)\n- heap (produced by FDW: has system columns, but doesn’t keep any pins)\n- minimal (produced by values and materializations nodes like sort, agg, etc.)\nPlan «branches»:\n- virtual (keeps datum references to the columns of the tuples in the child nodes)\n\nVirtual TTS is cheeper to copy among the plan (as we copy datum references), but more expensive to materialize (we have to construct a tuple from pieces).\n\nLeaves are cheeper to materialize (as we make a memcmp under hood), but very expensive to copy (we copy the value, not the datum reference).\n\n2. If we take a look at the materialize nodes in the plan, they produce different result TTS.\n- Outer child TTS type: gater, gather merge, lock rows, limit;\n- Minimal: material, sort, incremental sort, memoize, unique, hash, setup (can be heap as well);\n- Virtual: group, agg, window agg.\n\nFrom my point of view, the materialization node should preserve the incoming TTS type. For the sort node (that materializes incoming tuples as minimal) it is ok to output minimal result as well. Looks that unique should use the outer child’d TTS (instead of hardcoded minimal). But can anyone explain me why do group, agg and window agg return the virtual instead of the same TTS type as outer child has? Do we expect that the parent node exists and requires exactly virtual tuples (but what if the parent node is sort and benefits from minimal TTS)? So, it looks like we need to take a look not only at the unique, but also inspect all the materialization nodes.\n\n\n\n\n",
"msg_date": "Sun, 29 Oct 2023 23:30:06 +0700",
"msg_from": "Denis Smirnov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Use virtual tuple slot for Unique node"
},
{
"msg_contents": "\n\n> On 29 Oct 2023, at 21:30, Denis Smirnov <[email protected]> wrote:\n> \n> I have taken a look at this discussion, at the code and I am confused how we choose tuple table slot (TTS) type in PG. \n\nAfter offline discussion with Denis, we decided to withdraw this patch from CF for now. If anyone is willing to revive working on this, please register a new entry in next commitfest.\nThanks!\n\n\nBest regards, Andrey Borodin.\n\n",
"msg_date": "Thu, 28 Mar 2024 17:18:57 +0500",
"msg_from": "\"Andrey M. Borodin\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Use virtual tuple slot for Unique node"
}
] |
[
{
"msg_contents": "Hi,\n\ncstring_to_text has a small overhead, because call strlen for\npointer to char parameter.\n\nIs it worth the effort to avoid this, where do we know the size of the\nparameter?\n\nbest regards,\nRanier Vilela",
"msg_date": "Wed, 30 Aug 2023 15:00:13 -0300",
"msg_from": "Ranier Vilela <[email protected]>",
"msg_from_op": true,
"msg_subject": "Replace some cstring_to_text to cstring_to_text_with_len"
},
{
"msg_contents": "On Wed, Aug 30, 2023 at 03:00:13PM -0300, Ranier Vilela wrote:\n> cstring_to_text has a small overhead, because call strlen for\n> pointer to char parameter.\n> \n> Is it worth the effort to avoid this, where do we know the size of the\n> parameter?\n\nAre there workloads where this matters?\n--\nMichael",
"msg_date": "Thu, 31 Aug 2023 12:21:47 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Replace some cstring_to_text to cstring_to_text_with_len"
},
{
"msg_contents": "Em qui., 31 de ago. de 2023 às 00:22, Michael Paquier <[email protected]>\nescreveu:\n\n> On Wed, Aug 30, 2023 at 03:00:13PM -0300, Ranier Vilela wrote:\n> > cstring_to_text has a small overhead, because call strlen for\n> > pointer to char parameter.\n> >\n> > Is it worth the effort to avoid this, where do we know the size of the\n> > parameter?\n>\n> Are there workloads where this matters?\n>\nNone, but note this change has the same spirit of 8b26769bc\n<https://github.com/postgres/postgres/commit/8b26769bc441fffa8aad31dddc484c2f4043d2c9>\n.\n\nbest regards,\nRanier Vilela\n\nEm qui., 31 de ago. de 2023 às 00:22, Michael Paquier <[email protected]> escreveu:On Wed, Aug 30, 2023 at 03:00:13PM -0300, Ranier Vilela wrote:\n> cstring_to_text has a small overhead, because call strlen for\n> pointer to char parameter.\n> \n> Is it worth the effort to avoid this, where do we know the size of the\n> parameter?\n\nAre there workloads where this matters?None, but note this change has the same spirit of 8b26769bc.best regards,Ranier Vilela",
"msg_date": "Thu, 31 Aug 2023 08:06:42 -0300",
"msg_from": "Ranier Vilela <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Replace some cstring_to_text to cstring_to_text_with_len"
},
{
"msg_contents": "On Thu, Aug 31, 2023 at 6:07 PM Ranier Vilela <[email protected]> wrote:\n>\n> Em qui., 31 de ago. de 2023 às 00:22, Michael Paquier <[email protected]>\nescreveu:\n>>\n>> On Wed, Aug 30, 2023 at 03:00:13PM -0300, Ranier Vilela wrote:\n>> > cstring_to_text has a small overhead, because call strlen for\n>> > pointer to char parameter.\n>> >\n>> > Is it worth the effort to avoid this, where do we know the size of the\n>> > parameter?\n>>\n>> Are there workloads where this matters?\n>\n> None, but note this change has the same spirit of 8b26769bc.\n\n- return cstring_to_text(\"\");\n+ return cstring_to_text_with_len(\"\", 0);\n\nThis looks worse, so we'd better be getting something in return.\n\n@@ -360,7 +360,7 @@ pg_tablespace_location(PG_FUNCTION_ARGS)\n sourcepath)));\n targetpath[rllen] = '\\0';\n\n- PG_RETURN_TEXT_P(cstring_to_text(targetpath));\n+ PG_RETURN_TEXT_P(cstring_to_text_with_len(targetpath, rllen));\n\nThis could be a worthwhile cosmetic improvement if the nul-terminator (and\nspace reserved for it, and comment explaining that) is taken out as well,\nbut the patch didn't bother to do that.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com\n\nOn Thu, Aug 31, 2023 at 6:07 PM Ranier Vilela <[email protected]> wrote:>> Em qui., 31 de ago. de 2023 às 00:22, Michael Paquier <[email protected]> escreveu:>>>> On Wed, Aug 30, 2023 at 03:00:13PM -0300, Ranier Vilela wrote:>> > cstring_to_text has a small overhead, because call strlen for>> > pointer to char parameter.>> >>> > Is it worth the effort to avoid this, where do we know the size of the>> > parameter?>>>> Are there workloads where this matters?>> None, but note this change has the same spirit of 8b26769bc.-\t\t\t\treturn cstring_to_text(\"\");+\t\t\t\treturn cstring_to_text_with_len(\"\", 0);This looks worse, so we'd better be getting something in return.@@ -360,7 +360,7 @@ pg_tablespace_location(PG_FUNCTION_ARGS) \t\t\t\t\t\tsourcepath))); \ttargetpath[rllen] = '\\0'; -\tPG_RETURN_TEXT_P(cstring_to_text(targetpath));+\tPG_RETURN_TEXT_P(cstring_to_text_with_len(targetpath, rllen));This could be a worthwhile cosmetic improvement if the nul-terminator (and space reserved for it, and comment explaining that) is taken out as well, but the patch didn't bother to do that.--John NaylorEDB: http://www.enterprisedb.com",
"msg_date": "Thu, 31 Aug 2023 18:41:14 +0700",
"msg_from": "John Naylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Replace some cstring_to_text to cstring_to_text_with_len"
},
{
"msg_contents": "On 2023-08-31 Th 07:41, John Naylor wrote:\n>\n> On Thu, Aug 31, 2023 at 6:07 PM Ranier Vilela <[email protected]> wrote:\n> >\n> > Em qui., 31 de ago. de 2023 às 00:22, Michael Paquier \n> <[email protected]> escreveu:\n> >>\n> >> On Wed, Aug 30, 2023 at 03:00:13PM -0300, Ranier Vilela wrote:\n> >> > cstring_to_text has a small overhead, because call strlen for\n> >> > pointer to char parameter.\n> >> >\n> >> > Is it worth the effort to avoid this, where do we know the size \n> of the\n> >> > parameter?\n> >>\n> >> Are there workloads where this matters?\n> >\n> > None, but note this change has the same spirit of 8b26769bc.\n>\n> - return cstring_to_text(\"\");\n> + return cstring_to_text_with_len(\"\", 0);\n>\n> This looks worse, so we'd better be getting something in return.\n\n\nI agree this is a bit ugly. I wonder if we'd be better off creating a \nfunction that returned an empty text value, so we'd just avoid \nconverting the empty cstring altogether and say:\n\n return empty_text();\n\n\ncheers\n\n\nandrew\n\n\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2023-08-31 Th 07:41, John Naylor\n wrote:\n\n\n\n\n On Thu, Aug 31, 2023 at 6:07 PM Ranier Vilela <[email protected]>\n wrote:\n >\n > Em qui., 31 de ago. de 2023 às 00:22, Michael Paquier <[email protected]>\n escreveu:\n >>\n >> On Wed, Aug 30, 2023 at 03:00:13PM -0300, Ranier Vilela\n wrote:\n >> > cstring_to_text has a small overhead, because call\n strlen for\n >> > pointer to char parameter.\n >> >\n >> > Is it worth the effort to avoid this, where do we\n know the size of the\n >> > parameter?\n >>\n >> Are there workloads where this matters?\n >\n > None, but note this change has the same spirit of\n 8b26769bc.\n\n - return cstring_to_text(\"\");\n + return cstring_to_text_with_len(\"\", 0);\n\n This looks worse, so we'd better be getting something in return.\n\n\n\n\nI agree this is a bit ugly. I wonder if we'd be better off\n creating a function that returned an empty text value, so we'd\n just avoid converting the empty cstring altogether and say:\n return empty_text();\n\n\ncheers\n\n\nandrew\n\n\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Thu, 31 Aug 2023 08:51:02 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Replace some cstring_to_text to cstring_to_text_with_len"
},
{
"msg_contents": "Andrew Dunstan <[email protected]> writes:\n\n> On 2023-08-31 Th 07:41, John Naylor wrote:\n>>\n>> On Thu, Aug 31, 2023 at 6:07 PM Ranier Vilela <[email protected]> wrote:\n>> >\n>> > Em qui., 31 de ago. de 2023 às 00:22, Michael Paquier\n>> <[email protected]> escreveu:\n>> >>\n>> >> On Wed, Aug 30, 2023 at 03:00:13PM -0300, Ranier Vilela wrote:\n>> >> > cstring_to_text has a small overhead, because call strlen for\n>> >> > pointer to char parameter.\n>> >> >\n>> >> > Is it worth the effort to avoid this, where do we know the size\n>> of the\n>> >> > parameter?\n>> >>\n>> >> Are there workloads where this matters?\n>> >\n>> > None, but note this change has the same spirit of 8b26769bc.\n>>\n>> - return cstring_to_text(\"\");\n>> + return cstring_to_text_with_len(\"\", 0);\n>>\n>> This looks worse, so we'd better be getting something in return.\n>\n>\n> I agree this is a bit ugly. I wonder if we'd be better off creating a\n> function that returned an empty text value, so we'd just avoid \n> converting the empty cstring altogether and say:\n>\n> return empty_text();\n\nOr we could generalise it for any string literal (of which there are\nslightly more¹ non-empty than empty in calls to\ncstring_to_text(_with_len)):\n\n#define literal_to_text(str) cstring_to_text_with_len(\"\" str \"\", sizeof(str)-1)\n\n[1]: \n\n~/src/postgresql $ rg 'cstring_to_text(_with_len)?\\(\"[^\"]+\"' | wc -l\n17\n~/src/postgresql $ rg 'cstring_to_text(_with_len)?\\(\"\"' | wc -l\n15\n\n- ilmari\n\n\n",
"msg_date": "Thu, 31 Aug 2023 14:12:20 +0100",
"msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Replace some cstring_to_text to cstring_to_text_with_len"
},
{
"msg_contents": "Em qui., 31 de ago. de 2023 às 08:41, John Naylor <\[email protected]> escreveu:\n\n>\n> On Thu, Aug 31, 2023 at 6:07 PM Ranier Vilela <[email protected]> wrote:\n> >\n> > Em qui., 31 de ago. de 2023 às 00:22, Michael Paquier <\n> [email protected]> escreveu:\n> >>\n> >> On Wed, Aug 30, 2023 at 03:00:13PM -0300, Ranier Vilela wrote:\n> >> > cstring_to_text has a small overhead, because call strlen for\n> >> > pointer to char parameter.\n> >> >\n> >> > Is it worth the effort to avoid this, where do we know the size of the\n> >> > parameter?\n> >>\n> >> Are there workloads where this matters?\n> >\n> > None, but note this change has the same spirit of 8b26769bc.\n>\n> - return cstring_to_text(\"\");\n> + return cstring_to_text_with_len(\"\", 0);\n>\n> This looks worse, so we'd better be getting something in return.\n>\nPer suggestion by Andrew Dustan, I provided a new function.\n\n>\n> @@ -360,7 +360,7 @@ pg_tablespace_location(PG_FUNCTION_ARGS)\n> sourcepath)));\n> targetpath[rllen] = '\\0';\n>\n> - PG_RETURN_TEXT_P(cstring_to_text(targetpath));\n> + PG_RETURN_TEXT_P(cstring_to_text_with_len(targetpath, rllen));\n>\n> This could be a worthwhile cosmetic improvement if the nul-terminator (and\n> space reserved for it, and comment explaining that) is taken out as well,\n> but the patch didn't bother to do that.\n>\nThanks for the tip.\n\nPlease see a new version of the patch in the Andrew Dunstan, reply.\n\nbest regards,\nRanier Vilela\n\nEm qui., 31 de ago. de 2023 às 08:41, John Naylor <[email protected]> escreveu:On Thu, Aug 31, 2023 at 6:07 PM Ranier Vilela <[email protected]> wrote:>> Em qui., 31 de ago. de 2023 às 00:22, Michael Paquier <[email protected]> escreveu:>>>> On Wed, Aug 30, 2023 at 03:00:13PM -0300, Ranier Vilela wrote:>> > cstring_to_text has a small overhead, because call strlen for>> > pointer to char parameter.>> >>> > Is it worth the effort to avoid this, where do we know the size of the>> > parameter?>>>> Are there workloads where this matters?>> None, but note this change has the same spirit of 8b26769bc.-\t\t\t\treturn cstring_to_text(\"\");+\t\t\t\treturn cstring_to_text_with_len(\"\", 0);This looks worse, so we'd better be getting something in return.Per suggestion by Andrew Dustan, I provided a new function. @@ -360,7 +360,7 @@ pg_tablespace_location(PG_FUNCTION_ARGS) \t\t\t\t\t\tsourcepath))); \ttargetpath[rllen] = '\\0'; -\tPG_RETURN_TEXT_P(cstring_to_text(targetpath));+\tPG_RETURN_TEXT_P(cstring_to_text_with_len(targetpath, rllen));This could be a worthwhile cosmetic improvement if the nul-terminator (and space reserved for it, and comment explaining that) is taken out as well, but the patch didn't bother to do that.Thanks for the tip.Please see a new version of the patch in the Andrew Dunstan, reply.best regards,Ranier Vilela",
"msg_date": "Thu, 31 Aug 2023 11:09:48 -0300",
"msg_from": "Ranier Vilela <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Replace some cstring_to_text to cstring_to_text_with_len"
},
{
"msg_contents": "Em qui., 31 de ago. de 2023 às 09:51, Andrew Dunstan <[email protected]>\nescreveu:\n\n>\n> On 2023-08-31 Th 07:41, John Naylor wrote:\n>\n>\n> On Thu, Aug 31, 2023 at 6:07 PM Ranier Vilela <[email protected]> wrote:\n> >\n> > Em qui., 31 de ago. de 2023 às 00:22, Michael Paquier <\n> [email protected]> escreveu:\n> >>\n> >> On Wed, Aug 30, 2023 at 03:00:13PM -0300, Ranier Vilela wrote:\n> >> > cstring_to_text has a small overhead, because call strlen for\n> >> > pointer to char parameter.\n> >> >\n> >> > Is it worth the effort to avoid this, where do we know the size of the\n> >> > parameter?\n> >>\n> >> Are there workloads where this matters?\n> >\n> > None, but note this change has the same spirit of 8b26769bc.\n>\n> - return cstring_to_text(\"\");\n> + return cstring_to_text_with_len(\"\", 0);\n>\n> This looks worse, so we'd better be getting something in return.\n>\n>\n> I agree this is a bit ugly. I wonder if we'd be better off creating a\n> function that returned an empty text value, so we'd just avoid converting\n> the empty cstring altogether and say:\n>\n> return empty_text();\n>\nHi,\nThanks for the suggestion, I agreed.\n\nNew patch is attached.\n\nbest regards,\nRanier Vilela\n\n>",
"msg_date": "Thu, 31 Aug 2023 11:10:55 -0300",
"msg_from": "Ranier Vilela <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Replace some cstring_to_text to cstring_to_text_with_len"
},
{
"msg_contents": "Em qui., 31 de ago. de 2023 às 10:12, Dagfinn Ilmari Mannsåker <\[email protected]> escreveu:\n\n> Andrew Dunstan <[email protected]> writes:\n>\n> > On 2023-08-31 Th 07:41, John Naylor wrote:\n> >>\n> >> On Thu, Aug 31, 2023 at 6:07 PM Ranier Vilela <[email protected]>\n> wrote:\n> >> >\n> >> > Em qui., 31 de ago. de 2023 às 00:22, Michael Paquier\n> >> <[email protected]> escreveu:\n> >> >>\n> >> >> On Wed, Aug 30, 2023 at 03:00:13PM -0300, Ranier Vilela wrote:\n> >> >> > cstring_to_text has a small overhead, because call strlen for\n> >> >> > pointer to char parameter.\n> >> >> >\n> >> >> > Is it worth the effort to avoid this, where do we know the size\n> >> of the\n> >> >> > parameter?\n> >> >>\n> >> >> Are there workloads where this matters?\n> >> >\n> >> > None, but note this change has the same spirit of 8b26769bc.\n> >>\n> >> - return cstring_to_text(\"\");\n> >> + return cstring_to_text_with_len(\"\", 0);\n> >>\n> >> This looks worse, so we'd better be getting something in return.\n> >\n> >\n> > I agree this is a bit ugly. I wonder if we'd be better off creating a\n> > function that returned an empty text value, so we'd just avoid\n> > converting the empty cstring altogether and say:\n> >\n> > return empty_text();\n>\n> Or we could generalise it for any string literal (of which there are\n> slightly more¹ non-empty than empty in calls to\n> cstring_to_text(_with_len)):\n>\n> #define literal_to_text(str) cstring_to_text_with_len(\"\" str \"\",\n> sizeof(str)-1)\n>\nI do not agree, I think this will get worse.\n\nbest regards,\nRanier Vilela\n\nEm qui., 31 de ago. de 2023 às 10:12, Dagfinn Ilmari Mannsåker <[email protected]> escreveu:Andrew Dunstan <[email protected]> writes:\n\n> On 2023-08-31 Th 07:41, John Naylor wrote:\n>>\n>> On Thu, Aug 31, 2023 at 6:07 PM Ranier Vilela <[email protected]> wrote:\n>> >\n>> > Em qui., 31 de ago. de 2023 às 00:22, Michael Paquier\n>> <[email protected]> escreveu:\n>> >>\n>> >> On Wed, Aug 30, 2023 at 03:00:13PM -0300, Ranier Vilela wrote:\n>> >> > cstring_to_text has a small overhead, because call strlen for\n>> >> > pointer to char parameter.\n>> >> >\n>> >> > Is it worth the effort to avoid this, where do we know the size\n>> of the\n>> >> > parameter?\n>> >>\n>> >> Are there workloads where this matters?\n>> >\n>> > None, but note this change has the same spirit of 8b26769bc.\n>>\n>> - return cstring_to_text(\"\");\n>> + return cstring_to_text_with_len(\"\", 0);\n>>\n>> This looks worse, so we'd better be getting something in return.\n>\n>\n> I agree this is a bit ugly. I wonder if we'd be better off creating a\n> function that returned an empty text value, so we'd just avoid \n> converting the empty cstring altogether and say:\n>\n> return empty_text();\n\nOr we could generalise it for any string literal (of which there are\nslightly more¹ non-empty than empty in calls to\ncstring_to_text(_with_len)):\n\n#define literal_to_text(str) cstring_to_text_with_len(\"\" str \"\", sizeof(str)-1)I do not agree, I think this will get worse.best regards,Ranier Vilela",
"msg_date": "Thu, 31 Aug 2023 11:11:44 -0300",
"msg_from": "Ranier Vilela <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Replace some cstring_to_text to cstring_to_text_with_len"
},
{
"msg_contents": "Ranier Vilela <[email protected]> writes:\n\n> Em qui., 31 de ago. de 2023 às 10:12, Dagfinn Ilmari Mannsåker <\n> [email protected]> escreveu:\n>\n>> Andrew Dunstan <[email protected]> writes:\n>>\n>> > On 2023-08-31 Th 07:41, John Naylor wrote:\n>> >>\n>> >> On Thu, Aug 31, 2023 at 6:07 PM Ranier Vilela <[email protected]>\n>> wrote:\n>> >> >\n>> >> > Em qui., 31 de ago. de 2023 às 00:22, Michael Paquier\n>> >> <[email protected]> escreveu:\n>> >> >>\n>> >> >> On Wed, Aug 30, 2023 at 03:00:13PM -0300, Ranier Vilela wrote:\n>> >> >> > cstring_to_text has a small overhead, because call strlen for\n>> >> >> > pointer to char parameter.\n>> >> >> >\n>> >> >> > Is it worth the effort to avoid this, where do we know the size\n>> >> of the\n>> >> >> > parameter?\n>> >> >>\n>> >> >> Are there workloads where this matters?\n>> >> >\n>> >> > None, but note this change has the same spirit of 8b26769bc.\n>> >>\n>> >> - return cstring_to_text(\"\");\n>> >> + return cstring_to_text_with_len(\"\", 0);\n>> >>\n>> >> This looks worse, so we'd better be getting something in return.\n>> >\n>> >\n>> > I agree this is a bit ugly. I wonder if we'd be better off creating a\n>> > function that returned an empty text value, so we'd just avoid\n>> > converting the empty cstring altogether and say:\n>> >\n>> > return empty_text();\n>>\n>> Or we could generalise it for any string literal (of which there are\n>> slightly more¹ non-empty than empty in calls to\n>> cstring_to_text(_with_len)):\n>>\n>> #define literal_to_text(str) cstring_to_text_with_len(\"\" str \"\",\n>> sizeof(str)-1)\n>>\n> I do not agree, I think this will get worse.\n\nHow exactly will it get worse? It's exactly equivalent to\ncstring_to_text_with_len(\"\", 0), since sizeof() is a compile-time\nconstruct, and the string concatenation makes it fail if the argument is\nnot a literal string.\n\nWhether we want an even-more-optimised version for an empty text value\nis another matter, but I doubt it'd be worth it. Another option would\nbe to make cstring_to_text(_with_len) static inline functions, which\nlets the compiler eliminate the memcpy() call when len == 0.\n\nIn fact, after playing around a bit (https://godbolt.org/z/x51aYGadh),\nit seems like GCC, Clang and MSVC all eliminate the strlen() and\nmemcpy() calls for cstring_to_text(\"\") under -O2 if it's static inline.\n\n- ilmari\n\n\n",
"msg_date": "Thu, 31 Aug 2023 16:12:14 +0100",
"msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Replace some cstring_to_text to cstring_to_text_with_len"
},
{
"msg_contents": "On 31.08.23 16:10, Ranier Vilela wrote:\n> Em qui., 31 de ago. de 2023 às 09:51, Andrew Dunstan \n> <[email protected] <mailto:[email protected]>> escreveu:\n> \n> \n> On 2023-08-31 Th 07:41, John Naylor wrote:\n>>\n>> On Thu, Aug 31, 2023 at 6:07 PM Ranier Vilela <[email protected]\n>> <mailto:[email protected]>> wrote:\n>> >\n>> > Em qui., 31 de ago. de 2023 às 00:22, Michael Paquier\n>> <[email protected] <mailto:[email protected]>> escreveu:\n>> >>\n>> >> On Wed, Aug 30, 2023 at 03:00:13PM -0300, Ranier Vilela wrote:\n>> >> > cstring_to_text has a small overhead, because call strlen for\n>> >> > pointer to char parameter.\n>> >> >\n>> >> > Is it worth the effort to avoid this, where do we know the\n>> size of the\n>> >> > parameter?\n>> >>\n>> >> Are there workloads where this matters?\n>> >\n>> > None, but note this change has the same spirit of 8b26769bc.\n>>\n>> - return cstring_to_text(\"\");\n>> + return cstring_to_text_with_len(\"\", 0);\n>>\n>> This looks worse, so we'd better be getting something in return.\n> \n> \n> I agree this is a bit ugly. I wonder if we'd be better off creating\n> a function that returned an empty text value, so we'd just avoid\n> converting the empty cstring altogether and say:\n> \n> return empty_text();\n> \n> Hi,\n> Thanks for the suggestion, I agreed.\n> \n> New patch is attached.\n\nI think these patches make the code uniformly uglier and harder to \nunderstand.\n\nIf a performance benefit could be demonstrated, then making \ncstring_to_text() an inline function could be sensible. But I wouldn't \ngo beyond that.\n\n\n",
"msg_date": "Thu, 31 Aug 2023 17:28:57 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Replace some cstring_to_text to cstring_to_text_with_len"
},
{
"msg_contents": "Em qui., 31 de ago. de 2023 às 12:12, Dagfinn Ilmari Mannsåker <\[email protected]> escreveu:\n\n> Ranier Vilela <[email protected]> writes:\n>\n> > Em qui., 31 de ago. de 2023 às 10:12, Dagfinn Ilmari Mannsåker <\n> > [email protected]> escreveu:\n> >\n> >> Andrew Dunstan <[email protected]> writes:\n> >>\n> >> > On 2023-08-31 Th 07:41, John Naylor wrote:\n> >> >>\n> >> >> On Thu, Aug 31, 2023 at 6:07 PM Ranier Vilela <[email protected]>\n> >> wrote:\n> >> >> >\n> >> >> > Em qui., 31 de ago. de 2023 às 00:22, Michael Paquier\n> >> >> <[email protected]> escreveu:\n> >> >> >>\n> >> >> >> On Wed, Aug 30, 2023 at 03:00:13PM -0300, Ranier Vilela wrote:\n> >> >> >> > cstring_to_text has a small overhead, because call strlen for\n> >> >> >> > pointer to char parameter.\n> >> >> >> >\n> >> >> >> > Is it worth the effort to avoid this, where do we know the size\n> >> >> of the\n> >> >> >> > parameter?\n> >> >> >>\n> >> >> >> Are there workloads where this matters?\n> >> >> >\n> >> >> > None, but note this change has the same spirit of 8b26769bc.\n> >> >>\n> >> >> - return cstring_to_text(\"\");\n> >> >> + return cstring_to_text_with_len(\"\", 0);\n> >> >>\n> >> >> This looks worse, so we'd better be getting something in return.\n> >> >\n> >> >\n> >> > I agree this is a bit ugly. I wonder if we'd be better off creating a\n> >> > function that returned an empty text value, so we'd just avoid\n> >> > converting the empty cstring altogether and say:\n> >> >\n> >> > return empty_text();\n> >>\n> >> Or we could generalise it for any string literal (of which there are\n> >> slightly more¹ non-empty than empty in calls to\n> >> cstring_to_text(_with_len)):\n> >>\n> >> #define literal_to_text(str) cstring_to_text_with_len(\"\" str \"\",\n> >> sizeof(str)-1)\n> >>\n> > I do not agree, I think this will get worse.\n>\n> How exactly will it get worse? It's exactly equivalent to\n> cstring_to_text_with_len(\"\", 0), since sizeof() is a compile-time\n> construct, and the string concatenation makes it fail if the argument is\n> not a literal string.\n>\nI think that concatenation makes the strings twice bigger, doesn't it?\n\n\n>\n> Whether we want an even-more-optimised version for an empty text value\n> is another matter, but I doubt it'd be worth it. Another option would\n> be to make cstring_to_text(_with_len) static inline functions, which\n> lets the compiler eliminate the memcpy() call when len == 0.\n>\n\n> In fact, after playing around a bit (https://godbolt.org/z/x51aYGadh),\n> it seems like GCC, Clang and MSVC all eliminate the strlen() and\n> memcpy() calls for cstring_to_text(\"\") under -O2 if it's static inline.\n>\nIn that case, it seems to me that would be good too.\nCompilers removing memcpy would have the same as empty_text.\nWithout the burden of a new function and all its future maintenance.\n\nbest regards,\nRanier Vilela\n\nEm qui., 31 de ago. de 2023 às 12:12, Dagfinn Ilmari Mannsåker <[email protected]> escreveu:Ranier Vilela <[email protected]> writes:\n\n> Em qui., 31 de ago. de 2023 às 10:12, Dagfinn Ilmari Mannsåker <\n> [email protected]> escreveu:\n>\n>> Andrew Dunstan <[email protected]> writes:\n>>\n>> > On 2023-08-31 Th 07:41, John Naylor wrote:\n>> >>\n>> >> On Thu, Aug 31, 2023 at 6:07 PM Ranier Vilela <[email protected]>\n>> wrote:\n>> >> >\n>> >> > Em qui., 31 de ago. 
de 2023 às 00:22, Michael Paquier\n>> >> <[email protected]> escreveu:\n>> >> >>\n>> >> >> On Wed, Aug 30, 2023 at 03:00:13PM -0300, Ranier Vilela wrote:\n>> >> >> > cstring_to_text has a small overhead, because call strlen for\n>> >> >> > pointer to char parameter.\n>> >> >> >\n>> >> >> > Is it worth the effort to avoid this, where do we know the size\n>> >> of the\n>> >> >> > parameter?\n>> >> >>\n>> >> >> Are there workloads where this matters?\n>> >> >\n>> >> > None, but note this change has the same spirit of 8b26769bc.\n>> >>\n>> >> - return cstring_to_text(\"\");\n>> >> + return cstring_to_text_with_len(\"\", 0);\n>> >>\n>> >> This looks worse, so we'd better be getting something in return.\n>> >\n>> >\n>> > I agree this is a bit ugly. I wonder if we'd be better off creating a\n>> > function that returned an empty text value, so we'd just avoid\n>> > converting the empty cstring altogether and say:\n>> >\n>> > return empty_text();\n>>\n>> Or we could generalise it for any string literal (of which there are\n>> slightly more¹ non-empty than empty in calls to\n>> cstring_to_text(_with_len)):\n>>\n>> #define literal_to_text(str) cstring_to_text_with_len(\"\" str \"\",\n>> sizeof(str)-1)\n>>\n> I do not agree, I think this will get worse.\n\nHow exactly will it get worse? It's exactly equivalent to\ncstring_to_text_with_len(\"\", 0), since sizeof() is a compile-time\nconstruct, and the string concatenation makes it fail if the argument is\nnot a literal string.I think that concatenation makes the strings twice bigger, doesn't it? \n\nWhether we want an even-more-optimised version for an empty text value\nis another matter, but I doubt it'd be worth it. Another option would\nbe to make cstring_to_text(_with_len) static inline functions, which\nlets the compiler eliminate the memcpy() call when len == 0.\n\nIn fact, after playing around a bit (https://godbolt.org/z/x51aYGadh),\nit seems like GCC, Clang and MSVC all eliminate the strlen() and\nmemcpy() calls for cstring_to_text(\"\") under -O2 if it's static inline.In that case, it seems to me that would be good too.Compilers removing memcpy would have the same as empty_text.Without the burden of a new function and all its future maintenance.best regards,Ranier Vilela",
"msg_date": "Thu, 31 Aug 2023 13:57:24 -0300",
"msg_from": "Ranier Vilela <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Replace some cstring_to_text to cstring_to_text_with_len"
},
{
"msg_contents": "Ranier Vilela <[email protected]> writes:\n\n> Em qui., 31 de ago. de 2023 às 12:12, Dagfinn Ilmari Mannsåker <\n> [email protected]> escreveu:\n>\n>> Ranier Vilela <[email protected]> writes:\n>>\n>> > Em qui., 31 de ago. de 2023 às 10:12, Dagfinn Ilmari Mannsåker <\n>> > [email protected]> escreveu:\n>> >\n>> >> Andrew Dunstan <[email protected]> writes:\n>> >>\n>> >> > On 2023-08-31 Th 07:41, John Naylor wrote:\n>> >> >>\n>> >> >> On Thu, Aug 31, 2023 at 6:07 PM Ranier Vilela <[email protected]>\n>> >> wrote:\n>> >> >> >\n>> >> >> > Em qui., 31 de ago. de 2023 às 00:22, Michael Paquier\n>> >> >> <[email protected]> escreveu:\n>> >> >> >>\n>> >> >> >> On Wed, Aug 30, 2023 at 03:00:13PM -0300, Ranier Vilela wrote:\n>> >> >> >> > cstring_to_text has a small overhead, because call strlen for\n>> >> >> >> > pointer to char parameter.\n>> >> >> >> >\n>> >> >> >> > Is it worth the effort to avoid this, where do we know the size\n>> >> >> of the\n>> >> >> >> > parameter?\n>> >> >> >>\n>> >> >> >> Are there workloads where this matters?\n>> >> >> >\n>> >> >> > None, but note this change has the same spirit of 8b26769bc.\n>> >> >>\n>> >> >> - return cstring_to_text(\"\");\n>> >> >> + return cstring_to_text_with_len(\"\", 0);\n>> >> >>\n>> >> >> This looks worse, so we'd better be getting something in return.\n>> >> >\n>> >> >\n>> >> > I agree this is a bit ugly. I wonder if we'd be better off creating a\n>> >> > function that returned an empty text value, so we'd just avoid\n>> >> > converting the empty cstring altogether and say:\n>> >> >\n>> >> > return empty_text();\n>> >>\n>> >> Or we could generalise it for any string literal (of which there are\n>> >> slightly more¹ non-empty than empty in calls to\n>> >> cstring_to_text(_with_len)):\n>> >>\n>> >> #define literal_to_text(str) cstring_to_text_with_len(\"\" str \"\",\n>> >> sizeof(str)-1)\n>> >>\n>> > I do not agree, I think this will get worse.\n>>\n>> How exactly will it get worse? It's exactly equivalent to\n>> cstring_to_text_with_len(\"\", 0), since sizeof() is a compile-time\n>> construct, and the string concatenation makes it fail if the argument is\n>> not a literal string.\n>>\n> I think that concatenation makes the strings twice bigger, doesn't it?\n\nNo, it's just taking advantage of the fact that C string literals can be\nsplit into multiple pieces separated by whitespace (like in SQL, but\nwithout requiring a newline between them). E.g. \"foo\" \"bar\" is exactly\nequivalent to \"foobar\" after parsing. The reason to use it in the macro\nis to make it a syntax error if the argument is not a literal string but\ninstead a string variable, becuause in that case the sizeof() would\nreturn the size of the pointer, not the string.\n\n- ilmari\n\n\n",
"msg_date": "Thu, 31 Aug 2023 18:23:54 +0100",
"msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Replace some cstring_to_text to cstring_to_text_with_len"
},
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n\n> On 31.08.23 16:10, Ranier Vilela wrote:\n>> Em qui., 31 de ago. de 2023 às 09:51, Andrew Dunstan\n>> <[email protected] <mailto:[email protected]>> escreveu:\n>> \n>> On 2023-08-31 Th 07:41, John Naylor wrote:\n>>>\n>>> On Thu, Aug 31, 2023 at 6:07 PM Ranier Vilela <[email protected]\n>>> <mailto:[email protected]>> wrote:\n>>> >\n>>> > Em qui., 31 de ago. de 2023 às 00:22, Michael Paquier\n>>> <[email protected] <mailto:[email protected]>> escreveu:\n>>> >>\n>>> >> On Wed, Aug 30, 2023 at 03:00:13PM -0300, Ranier Vilela wrote:\n>>> >> > cstring_to_text has a small overhead, because call strlen for\n>>> >> > pointer to char parameter.\n>>> >> >\n>>> >> > Is it worth the effort to avoid this, where do we know the\n>>> size of the\n>>> >> > parameter?\n>>> >>\n>>> >> Are there workloads where this matters?\n>>> >\n>>> > None, but note this change has the same spirit of 8b26769bc.\n>>>\n>>> - return cstring_to_text(\"\");\n>>> + return cstring_to_text_with_len(\"\", 0);\n>>>\n>>> This looks worse, so we'd better be getting something in return.\n>> \n>> I agree this is a bit ugly. I wonder if we'd be better off creating\n>> a function that returned an empty text value, so we'd just avoid\n>> converting the empty cstring altogether and say:\n>> return empty_text();\n>> Hi,\n>> Thanks for the suggestion, I agreed.\n>> New patch is attached.\n>\n> I think these patches make the code uniformly uglier and harder to\n> understand.\n>\n> If a performance benefit could be demonstrated, then making\n> cstring_to_text() an inline function could be sensible. But I wouldn't \n> go beyond that.\n\nI haven't benchmarked it yet, but here's a patch that inlines them and\nchanges callers of cstring_to_text_with_len() with a aliteral string and\nconstant length to cstring_to_text().\n\nOn an x86-64 Linux build (meson with -Dbuildtype=debugoptimized\n-Dcassert=true), the inlining increases the size of the text section of\nthe postgres binary from 9719722 bytes to 9750557, i.e. an increase of\n30KiB or 0.3%, while the change to cstring_to_text() makes zero\ndifference (as expected from my investigation).\n\n- ilmari",
"msg_date": "Thu, 31 Aug 2023 19:28:38 +0100",
"msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Replace some cstring_to_text to cstring_to_text_with_len"
}
] |
[
{
"msg_contents": "Hi,\n\nCurrently PostgreSQL reads (and writes) data files 8KB at a time.\nThat's because we call ReadBuffer() one block at a time, with no\nopportunity for lower layers to do better than that. This thread is\nabout a model where you say which block you'll want next with a\ncallback, and then you pull the buffers out of a \"stream\". That way,\nthe streaming infrastructure can look as far into the future as it\nwants, and then:\n\n * systematically issue POSIX_FADV_WILLNEED for random access,\nreplacing patchy ad hoc advice\n * build larger vectored I/Os; eg one preadv() call can replace 16 pread() calls\n\nThat's more efficient, and it goes faster. It's better even on\nsystems without 'advice' and/or vectored I/O support, because some\nI/Os can be merged into wider simple pread/pwrite calls, and various\nother small efficiencies come from batching.\n\nThe real goal, though, is to make it easier for later work to replace\nthe I/O subsystem with true asynchronous and concurrent I/O, as\nrequired to get decent performance with direct I/O (and, at a wild\nguess, the magic network smgr replacements that many of our colleagues\non this list work on). Client code such as access methods wouldn't\nneed to change again to benefit from that, as it would be fully\ninsulated by the streaming abstraction.\n\nThere are more kinds of streaming I/O that would be useful, such as\nraw unbuffered files, and of course writes, and I've attached some\nearly incomplete demo code for writes (just for fun), but the main\nidea I want to share in this thread is the idea of replacing lots of\nReadBuffer() calls with the streaming model. That's the thing with\nthe most potential users throughout the source tree and AMs, and I've\nattached some work-in-progress examples of half a dozen use cases.\n\n=== 1. Vectored I/O through the layers ===\n\n * Provide vectored variants of FileRead() and FileWrite().\n * Provide vectored variants of smgrread() and smgrwrite().\n * Provide vectored variant of ReadBuffer().\n * Provide multi-block smgrprefetch().\n\n=== 2. Streaming read API ===\n\n * Give SMgrRelation pointers a well-defined lifetime.\n * Provide basic streaming read API.\n\n=== 3. Example users of streaming read API ===\n\n * Use streaming reads in pg_prewarm. [TM]\n * WIP: Use streaming reads in heapam scans. [AF]\n * WIP: Use streaming reads in vacuum. [AF]\n * WIP: Use streaming reads in nbtree vacuum scan. [AF]\n * WIP: Use streaming reads in bitmap heapscan. [MP]\n * WIP: Use streaming reads in recovery. [TM]\n\n=== 4. Some less developed work on vectored writes ===\n\n * WIP: Provide vectored variant of FlushBuffer().\n * WIP: Use vectored writes in checkpointer.\n\nAll of these are WIP; those marked WIP above are double-WIP. But\nthere's enough to demo the concept and discuss. Here are some\nassorted notes:\n\n * probably need to split block-count and I/O-count in stats system?\n * streaming needs to \"ramp up\", instead of going straight to big reads\n * the buffer pin limit is somewhat primitive\n * more study of buffer pool correctness required\n * 16 block/128KB size limit is not exactly arbitrary but not well\nresearched (by me at least)\n * various TODOs in user patches\n\nA bit about where this code came from and how it relates to the \"AIO\"\nproject[1]: The idea and terminology 'streaming I/O' are due to\nAndres Freund. This implementation of it is mine, and to keep this\nmailing list fun, he hasn't reviewed it yet. 
The example user patches\nare by Andres, Melanie Plageman and myself, and were cherry picked\nfrom the AIO branch, where they originally ran on top of Andres's\ntruly asynchronous 'streaming read', which is completely different\ncode. It has (or will have) exactly the same API, but it does much\nmore, with much more infrastructure. But the AIO branch is far too\nmuch to propose at once.\n\nWe might have been a little influenced by a recent discussion on\npgsql-performance[2] that I could summarise as \"why do you guys need\nto do all this fancy AIO stuff, just give me bigger reads!\". That was\nactually a bit of a special case, I think (something is wrong with\nbtrfs's prefetch heuristics?), but in conversation we realised that\nconverting parts of PostgreSQL over to a stream-oriented model could\nbe done independently of AIO, and could offer some nice incremental\nbenefits already. So I worked on producing this code with an\nidentical API that just maps on to old fashioned synchronous I/O\ncalls, except bigger and better.\n\nThe \"example user\" patches would be proposed separately in their own\nthreads after some more work, but I wanted to demonstrate the wide\napplicability of this style of API in this preview. Some of these\nmake use of the ability to attach a bit of extra data to each buffer\n-- see Melanie's bitmap heapscan patch, for example. In later\nrevisions I'll probably just pick one or two examples to work with for\na smaller core patch set, and then the rest can be developed\nseparately. (We thought about btree scans too as a nice high value\narea to tackle, but Tomas Vondra is hacking in that area and we didn't\nwant to step on his toes.)\n\n[1] https://wiki.postgresql.org/wiki/AIO\n[2] https://www.postgresql.org/message-id/flat/218fa2e0-bc58-e469-35dd-c5cb35906064%40gmx.net",
"msg_date": "Thu, 31 Aug 2023 16:00:13 +1200",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": true,
"msg_subject": "Streaming I/O, vectored I/O (WIP)"
},
{
"msg_contents": "On 31/08/2023 07:00, Thomas Munro wrote:\n> Currently PostgreSQL reads (and writes) data files 8KB at a time.\n> That's because we call ReadBuffer() one block at a time, with no\n> opportunity for lower layers to do better than that. This thread is\n> about a model where you say which block you'll want next with a\n> callback, and then you pull the buffers out of a \"stream\".\n\nI love this idea! Makes it a lot easier to perform prefetch, as \nevidenced by the 011-WIP-Use-streaming-reads-in-bitmap-heapscan.patch:\n\n 13 files changed, 289 insertions(+), 637 deletions(-)\n\nI'm a bit disappointed and surprised by \nv1-0009-WIP-Use-streaming-reads-in-vacuum.patch though:\n\n 4 files changed, 244 insertions(+), 78 deletions(-)\n\nThe current prefetching logic in vacuumlazy.c is pretty hairy, so I \nhoped that this would simplify it. I didn't look closely at that patch, \nso maybe it's simpler even though it's more code.\n\n> There are more kinds of streaming I/O that would be useful, such as\n> raw unbuffered files, and of course writes, and I've attached some\n> early incomplete demo code for writes (just for fun), but the main\n> idea I want to share in this thread is the idea of replacing lots of\n> ReadBuffer() calls with the streaming model.\n\nAll this makes sense. Some random comments on the patches:\n\n> +\t/* Avoid a slightly more expensive kernel call if there is no benefit. */\n> +\tif (iovcnt == 1)\n> +\t\treturnCode = pg_pread(vfdP->fd,\n> +\t\t\t\t\t\t\t iov[0].iov_base,\n> +\t\t\t\t\t\t\t iov[0].iov_len,\n> +\t\t\t\t\t\t\t offset);\n> +\telse\n> +\t\treturnCode = pg_preadv(vfdP->fd, iov, iovcnt, offset);\n\nHow about pushing down this optimization to pg_preadv() itself? \npg_readv() is currently just a macro if the system provides preadv(), \nbut it could be a \"static inline\" that does the above dance. I think \nthat optimization is platform-dependent anyway, pread() might not be any \nfaster on some OSs. In particular, if the system doesn't provide \npreadv() and we use the implementation in src/port/preadv.c, it's the \nsame kernel call anyway.\n\n> v1-0002-Provide-vectored-variants-of-smgrread-and-smgrwri.patch\n\nNo smgrextendv()? I guess none of the patches here needed it.\n\n> /*\n> * Prepare to read a block. The buffer is pinned. If this is a 'hit', then\n> * the returned buffer can be used immediately. Otherwise, a physical read\n> * should be completed with CompleteReadBuffers(). PrepareReadBuffer()\n> * followed by CompleteReadBuffers() is equivalent ot ReadBuffer(), but the\n> * caller has the opportunity to coalesce reads of neighboring blocks into one\n> * CompleteReadBuffers() call.\n> *\n> * *found is set to true for a hit, and false for a miss.\n> *\n> * *allocated is set to true for a miss that allocates a buffer for the first\n> * time. If there are multiple calls to PrepareReadBuffer() for the same\n> * block before CompleteReadBuffers() or ReadBuffer_common() finishes the\n> * read, then only the first such call will receive *allocated == true, which\n> * the caller might use to issue just one prefetch hint.\n> */\n> Buffer\n> PrepareReadBuffer(BufferManagerRelation bmr,\n> \t\t\t\t ForkNumber forkNum,\n> \t\t\t\t BlockNumber blockNum,\n> \t\t\t\t BufferAccessStrategy strategy,\n> \t\t\t\t bool *found,\n> \t\t\t\t bool *allocated)\n> \n\nIf you decide you don't want to perform the read, after all, is there a \nway to abort it without calling CompleteReadBuffers()? 
Looking at the \nlater patch that introduces the streaming read API, seems that it \nfinishes all the reads, so I suppose we don't need an abort function. \nDoes it all get cleaned up correctly on error?\n\n> /*\n> * Convert an array of buffer address into an array of iovec objects, and\n> * return the number that were required. 'iov' must have enough space for up\n> * to PG_IOV_MAX elements.\n> */\n> static int\n> buffers_to_iov(struct iovec *iov, void **buffers, int nblocks)\n The comment is a bit inaccurate. There's an assertion that If nblocks \n<= PG_IOV_MAX, so while it's true that 'iov' must have enough space for \nup to PG_IOV_MAX elements, that's only because we also assume that \nnblocks <= PG_IOV_MAX.\n\nI don't see anything in the callers (mdreadv() and mdwritev()) to \nprevent them from passing nblocks > PG_IOV_MAX.\n\nin streaming_read.h:\n\n> typedef bool (*PgStreamingReadBufferDetermineNextCB) (PgStreamingRead *pgsr,\n> uintptr_t pgsr_private,\n> void *per_io_private,\n> BufferManagerRelation *bmr,\n> ForkNumber *forkNum,\n> BlockNumber *blockNum,\n> ReadBufferMode *mode);\n\nI was surprised that 'bmr', 'forkNum' and 'mode' are given separately on \neach read. I see that you used that in the WAL replay prefetching, so I \nguess that makes sense.\n\n> extern void pg_streaming_read_prefetch(PgStreamingRead *pgsr);\n> extern Buffer pg_streaming_read_buffer_get_next(PgStreamingRead *pgsr, void **per_io_private);\n> extern void pg_streaming_read_reset(PgStreamingRead *pgsr);\n> extern void pg_streaming_read_free(PgStreamingRead *pgsr);\n\nDo we need to expose pg_streaming_read_prefetch()? It's only used in the \nWAL replay prefetching patch, and only after calling \npg_streaming_read_reset(). Could pg_streaming_read_reset() call \npg_streaming_read_prefetch() directly? Is there any need to \"reset\" the \nstream, without also starting prefetching?\n\nIn v1-0012-WIP-Use-streaming-reads-in-recovery.patch:\n\n> @@ -1978,6 +1979,9 @@ XLogRecGetBlockTag(XLogReaderState *record, uint8 block_id,\n> * If the WAL record contains a block reference with the given ID, *rlocator,\n> * *forknum, *blknum and *prefetch_buffer are filled in (if not NULL), and\n> * returns true. Otherwise returns false.\n> + *\n> + * If prefetch_buffer is not NULL, the buffer is already pinned, and ownership\n> + * of the pin is transferred to the caller.\n> */\n> bool\n> XLogRecGetBlockTagExtended(XLogReaderState *record, uint8 block_id,\n> @@ -1998,7 +2002,15 @@ XLogRecGetBlockTagExtended(XLogReaderState *record, uint8 block_id,\n> \tif (blknum)\n> \t\t*blknum = bkpb->blkno;\n> \tif (prefetch_buffer)\n> +\t{\n> \t\t*prefetch_buffer = bkpb->prefetch_buffer;\n> +\n> +\t\t/*\n> +\t\t * Clear this flag is so that we can assert that redo records take\n> +\t\t * ownership of all buffers pinned by xlogprefetcher.c.\n> +\t\t */\n> +\t\tbkpb->prefetch_buffer = InvalidBuffer;\n> +\t}\n> \treturn true;\n> }\n\nCould these changes be committed independently of all the other changes?\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Wed, 27 Sep 2023 21:33:15 +0300",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Streaming I/O, vectored I/O (WIP)"
},
{
"msg_contents": "Hi,\n\nOn 2023-09-27 21:33:15 +0300, Heikki Linnakangas wrote:\n> I'm a bit disappointed and surprised by\n> v1-0009-WIP-Use-streaming-reads-in-vacuum.patch though:\n> \n> 4 files changed, 244 insertions(+), 78 deletions(-)\n> \n> The current prefetching logic in vacuumlazy.c is pretty hairy, so I hoped\n> that this would simplify it. I didn't look closely at that patch, so maybe\n> it's simpler even though it's more code.\n\nA good chunk of the changes is pretty boring stuff. A good chunk of the\nremainder could be simplified a lot - it's partially there because vacuumlazy\nchanged a lot over the last couple years and because a bit more refactoring is\nneeded. I do think it's actually simpler in some ways - besides being more\nefficient...\n\n\n> > v1-0002-Provide-vectored-variants-of-smgrread-and-smgrwri.patch\n> \n> No smgrextendv()? I guess none of the patches here needed it.\n\nI can't really imagine needing it anytime soon - due to the desire to avoid\nENOSPC for pages in the buffer pool the common pattern is to extend relations\nwith zeroes on disk, then populate those buffers in memory. It's possible that\nyou could use something like smgrextendv() when operating directly on the smgr\nlevel - but then I suspect you're going to be better off to extend the\nrelation to the right size in one operation and then just use smgrwritev() to\nwrite out the contents.\n\n\n> > /*\n> > * Prepare to read a block. The buffer is pinned. If this is a 'hit', then\n> > * the returned buffer can be used immediately. Otherwise, a physical read\n> > * should be completed with CompleteReadBuffers(). PrepareReadBuffer()\n> > * followed by CompleteReadBuffers() is equivalent ot ReadBuffer(), but the\n> > * caller has the opportunity to coalesce reads of neighboring blocks into one\n> > * CompleteReadBuffers() call.\n> > *\n> > * *found is set to true for a hit, and false for a miss.\n> > *\n> > * *allocated is set to true for a miss that allocates a buffer for the first\n> > * time. If there are multiple calls to PrepareReadBuffer() for the same\n> > * block before CompleteReadBuffers() or ReadBuffer_common() finishes the\n> > * read, then only the first such call will receive *allocated == true, which\n> > * the caller might use to issue just one prefetch hint.\n> > */\n> > Buffer\n> > PrepareReadBuffer(BufferManagerRelation bmr,\n> > \t\t\t\t ForkNumber forkNum,\n> > \t\t\t\t BlockNumber blockNum,\n> > \t\t\t\t BufferAccessStrategy strategy,\n> > \t\t\t\t bool *found,\n> > \t\t\t\t bool *allocated)\n> > \n> \n> If you decide you don't want to perform the read, after all, is there a way\n> to abort it without calling CompleteReadBuffers()?\n\nWhen would that be needed?\n\n\n> Looking at the later patch that introduces the streaming read API, seems\n> that it finishes all the reads, so I suppose we don't need an abort\n> function. Does it all get cleaned up correctly on error?\n\nI think it should. The buffer error handling is one of the areas where I\nreally would like to have some way of testing the various cases, it's easy to\nget things wrong, and basically impossible to write reliable tests for with\nour current infrastructure.\n\n\n> > typedef bool (*PgStreamingReadBufferDetermineNextCB) (PgStreamingRead *pgsr,\n> > uintptr_t pgsr_private,\n> > void *per_io_private,\n> > BufferManagerRelation *bmr,\n> > ForkNumber *forkNum,\n> > BlockNumber *blockNum,\n> > ReadBufferMode *mode);\n> \n> I was surprised that 'bmr', 'forkNum' and 'mode' are given separately on\n> each read. 
I see that you used that in the WAL replay prefetching, so I\n> guess that makes sense.\n\nYea, that's the origin - I don't like it, but I don't really have a better\nidea.\n\n\n> > extern void pg_streaming_read_prefetch(PgStreamingRead *pgsr);\n> > extern Buffer pg_streaming_read_buffer_get_next(PgStreamingRead *pgsr, void **per_io_private);\n> > extern void pg_streaming_read_reset(PgStreamingRead *pgsr);\n> > extern void pg_streaming_read_free(PgStreamingRead *pgsr);\n> \n> Do we need to expose pg_streaming_read_prefetch()? It's only used in the WAL\n> replay prefetching patch, and only after calling pg_streaming_read_reset().\n> Could pg_streaming_read_reset() call pg_streaming_read_prefetch() directly?\n> Is there any need to \"reset\" the stream, without also starting prefetching?\n\nHeh, I think this is a discussion Thomas and I were having before...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 27 Sep 2023 13:13:33 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Streaming I/O, vectored I/O (WIP)"
},
{
"msg_contents": "On Thu, Sep 28, 2023 at 9:13 AM Andres Freund <[email protected]> wrote:\n> On 2023-09-27 21:33:15 +0300, Heikki Linnakangas wrote:\n> > Looking at the later patch that introduces the streaming read API, seems\n> > that it finishes all the reads, so I suppose we don't need an abort\n> > function. Does it all get cleaned up correctly on error?\n>\n> I think it should. The buffer error handling is one of the areas where I\n> really would like to have some way of testing the various cases, it's easy to\n> get things wrong, and basically impossible to write reliable tests for with\n> our current infrastructure.\n\nOne thing to highlight is that this patch doesn't create a new state\nin that case. In master, we already have the concept of a buffer with\nBM_TAG_VALID but not BM_VALID and not BM_IO_IN_PROGRESS, reachable if\nthere is an I/O error. Eventually another reader will try the I/O\nagain, or the buffer will fall out of the pool. With this patch it's\nthe same, it's just a wider window: more kinds of errors might be\nthrown in code between Prepare() and Complete() before we even have\nBM_IO_IN_PROGRESS. So there is nothing extra to clean up. Right?\n\nYeah, it would be nice to test buffer pool logic directly. Perhaps\nwith a C unit test framework[1] and pluggable smgr[2] we could mock up\ncases like I/O errors...\n\n> > > typedef bool (*PgStreamingReadBufferDetermineNextCB) (PgStreamingRead *pgsr,\n> > > uintptr_t pgsr_private,\n> > > void *per_io_private,\n> > > BufferManagerRelation *bmr,\n> > > ForkNumber *forkNum,\n> > > BlockNumber *blockNum,\n> > > ReadBufferMode *mode);\n> >\n> > I was surprised that 'bmr', 'forkNum' and 'mode' are given separately on\n> > each read. I see that you used that in the WAL replay prefetching, so I\n> > guess that makes sense.\n>\n> Yea, that's the origin - I don't like it, but I don't really have a better\n> idea.\n\nAnother idea I considered was that streams could be associated with a\nsingle relation, but recovery could somehow manage a set of them.\n From a certain point of view, that makes sense (we could be redoing\nwork that was created by multiple concurrent streams at 'do' time, and\nwith the approach shown here some clustering opportunities available\nat do time are lost at redo time), but it's not at all clear that it's\nworth the overheads or complexity, and I couldn't immediately figure\nout how to do it. But I doubt there would ever be any other users of\na single stream with multiple relations, and I agree that this is\nsomehow not quite satisfying... Perhaps we should think about that\nsome more...\n\n[1] https://www.postgresql.org/message-id/flat/CA%2BhUKG%2BajSQ_8eu2AogTncOnZ5me2D-Cn66iN_-wZnRjLN%2Bicg%40mail.gmail.com\n[2] https://www.postgresql.org/message-id/flat/CAEze2WgMySu2suO_TLvFyGY3URa4mAx22WeoEicnK=PCNWEMrA@mail.gmail.com\n\n\n",
"msg_date": "Thu, 28 Sep 2023 10:30:10 +1300",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Streaming I/O, vectored I/O (WIP)"
},
{
"msg_contents": "On Thu, Sep 28, 2023 at 7:33 AM Heikki Linnakangas <[email protected]> wrote:\n> > + /* Avoid a slightly more expensive kernel call if there is no benefit. */\n> > + if (iovcnt == 1)\n> > + returnCode = pg_pread(vfdP->fd,\n> > + iov[0].iov_base,\n> > + iov[0].iov_len,\n> > + offset);\n> > + else\n> > + returnCode = pg_preadv(vfdP->fd, iov, iovcnt, offset);\n>\n> How about pushing down this optimization to pg_preadv() itself?\n> pg_readv() is currently just a macro if the system provides preadv(),\n> but it could be a \"static inline\" that does the above dance. I think\n> that optimization is platform-dependent anyway, pread() might not be any\n> faster on some OSs. In particular, if the system doesn't provide\n> preadv() and we use the implementation in src/port/preadv.c, it's the\n> same kernel call anyway.\n\nDone. I like it, I just feel a bit bad about moving the p*v()\nreplacement functions around a couple of times already! I figured it\nmight as well be static inline even if we use the fallback (= Solaris\nand Windows).\n\n> I don't see anything in the callers (mdreadv() and mdwritev()) to\n> prevent them from passing nblocks > PG_IOV_MAX.\n\nThe outer loop in md*v() copes with segment boundaries and also makes\nsure lengthof(iov) AKA PG_IOV_MAX isn't exceeded (though that couldn't\nhappen with the current caller).\n\n> in streaming_read.h:\n>\n> > typedef bool (*PgStreamingReadBufferDetermineNextCB) (PgStreamingRead *pgsr,\n> > uintptr_t pgsr_private,\n> > void *per_io_private,\n> > BufferManagerRelation *bmr,\n> > ForkNumber *forkNum,\n> > BlockNumber *blockNum,\n> > ReadBufferMode *mode);\n>\n> I was surprised that 'bmr', 'forkNum' and 'mode' are given separately on\n> each read. I see that you used that in the WAL replay prefetching, so I\n> guess that makes sense.\n\nIn this version I have introduced an alternative simple callback.\nIt's approximately what we had already tried out in an earlier version\nbefore I started streamifying recovery, but in this version you can\nchoose, so recovery can opt for the wider callback.\n\nI've added some ramp-up logic. The idea is that after we streamify\neverything in sight, we don't want to penalise users that don't really\nneed more than one or two blocks, but don't know that yet. Here is\nhow the system calls look when you do pg_prewarm():\n\npread64(32, ..., 8192, 0) = 8192 <--- start with just one block\npread64(32, ..., 16384, 8192) = 16384\npread64(32, ..., 32768, 24576) = 32768\npread64(32, ..., 65536, 57344) = 65536\npread64(32, ..., 131072, 122880) = 131072 <--- soon reading 16\nblocks at a time\npread64(32, ..., 131072, 253952) = 131072\npread64(32, ..., 131072, 385024) = 131072\n\nI guess it could be done in quite a few different ways and I'm open to\nbetter ideas. This way inserts prefetching stalls but ramps up\nquickly and is soon out of the way. I wonder if we would want to make\nthat a policy that a caller can disable, if you want to skip the\nramp-up and go straight for the largest possible I/O size? Then I\nthink we'd need a 'flags' argument to the streaming read constructor\nfunctions.\n\nA small detour: While contemplating how this interacts with parallel\nsequential scan, which also has a notion of ramping up, I noticed\nanother problem. 
One parallel seq scan process does this:\n\nfadvise64(32, 35127296, 131072, POSIX_FADV_WILLNEED) = 0\npreadv(32, [...], 2, 35127296) = 131072\npreadv(32, [...], 2, 35258368) = 131072\nfadvise64(32, 36175872, 131072, POSIX_FADV_WILLNEED) = 0\npreadv(32, [...], 2, 36175872) = 131072\npreadv(32, [...], 2, 36306944) = 131072\n...\n\nWe don't really want those fadvise() calls. We don't get them with\nparallelism disabled, because streaming_read.c is careful not to\ngenerate advice for sequential workloads based on ancient wisdom from\nthis mailing list, re-confirmed on recent Linux: WILLNEED hints\nactually get in the way of Linux's own prefetching and slow you down,\nso we only want them for truly random access. But the logic can't see\nthat another process is making holes in this process's sequence. The\ntwo obvious solutions are (1) pass in a flag at the start saying \"I\npromise this is sequential even if it doesn't look like it, no hints\nplease\" and (2) invent \"shared\" (cross-process) streaming reads, and\nteach all the parallel seq scan processes to get their buffers from\nthere.\n\nIdea (2) is interesting to think about but even if it is a useful idea\n(not sure) it is certainly overkill just to solve this little problem\nfor now. So perhaps I should implement (1), which would be another\nreason to add a flags argument. It's not a perfect solution though\nbecause some more 'data driven' parallel scans (indexes, bitmaps, ...)\nhave a similar problem that is less amenable to top-down kludgery.\n\nI've included just the pg_prewarm example user for now while we\ndiscuss the basic infrastructure. The rest are rebased and in my\npublic Github branch streaming-read (repo macdice/postgres) if anyone\nis interested (don't mind the red CI failures, they're just saying I\nran out of monthly CI credits on the 29th, so close...)",
"msg_date": "Wed, 29 Nov 2023 01:17:19 +1300",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Streaming I/O, vectored I/O (WIP)"
},
{
"msg_contents": "On 28/11/2023 14:17, Thomas Munro wrote:\n> On Thu, Sep 28, 2023 at 7:33 AM Heikki Linnakangas <[email protected]> wrote:\n>>> + /* Avoid a slightly more expensive kernel call if there is no benefit. */\n>>> + if (iovcnt == 1)\n>>> + returnCode = pg_pread(vfdP->fd,\n>>> + iov[0].iov_base,\n>>> + iov[0].iov_len,\n>>> + offset);\n>>> + else\n>>> + returnCode = pg_preadv(vfdP->fd, iov, iovcnt, offset);\n>>\n>> How about pushing down this optimization to pg_preadv() itself?\n>> pg_readv() is currently just a macro if the system provides preadv(),\n>> but it could be a \"static inline\" that does the above dance. I think\n>> that optimization is platform-dependent anyway, pread() might not be any\n>> faster on some OSs. In particular, if the system doesn't provide\n>> preadv() and we use the implementation in src/port/preadv.c, it's the\n>> same kernel call anyway.\n> \n> Done. I like it, I just feel a bit bad about moving the p*v()\n> replacement functions around a couple of times already! I figured it\n> might as well be static inline even if we use the fallback (= Solaris\n> and Windows).\n\nLGTM. I think this 0001 patch is ready for commit, independently of the \nrest of the patches.\n\nIn v2-0002-Provide-vectored-variants-of-FileRead-and-FileWri-1.patch, fd.h:\n\n> +/* Filename components */\n> +#define PG_TEMP_FILES_DIR \"pgsql_tmp\"\n> +#define PG_TEMP_FILE_PREFIX \"pgsql_tmp\"\n> +\n\nThese seem out of place, we already have them in common/file_utils.h. \nOther than that, \nv2-0002-Provide-vectored-variants-of-FileRead-and-FileWri-1.patch and \nv2-0003-Provide-vectored-variants-of-smgrread-and-smgrwri.patch look \ngood to me.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Tue, 28 Nov 2023 14:44:25 +0200",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Streaming I/O, vectored I/O (WIP)"
},
{
"msg_contents": "On 28/11/2023 14:17, Thomas Munro wrote:\n> On Thu, Sep 28, 2023 at 7:33 AM Heikki Linnakangas <[email protected]> wrote:\n>> in streaming_read.h:\n>>\n>>> typedef bool (*PgStreamingReadBufferDetermineNextCB) (PgStreamingRead *pgsr,\n>>> uintptr_t pgsr_private,\n>>> void *per_io_private,\n>>> BufferManagerRelation *bmr,\n>>> ForkNumber *forkNum,\n>>> BlockNumber *blockNum,\n>>> ReadBufferMode *mode);\n>>\n>> I was surprised that 'bmr', 'forkNum' and 'mode' are given separately on\n>> each read. I see that you used that in the WAL replay prefetching, so I\n>> guess that makes sense.\n> \n> In this version I have introduced an alternative simple callback.\n> It's approximately what we had already tried out in an earlier version\n> before I started streamifying recovery, but in this version you can\n> choose, so recovery can opt for the wider callback.\n\nOk. Two APIs is a bit redundant, but because most callers would prefer \nthe simpler API, that's probably a good tradeoff.\n\n> I've added some ramp-up logic. The idea is that after we streamify\n> everything in sight, we don't want to penalise users that don't really\n> need more than one or two blocks, but don't know that yet. Here is\n> how the system calls look when you do pg_prewarm():\n> \n> pread64(32, ..., 8192, 0) = 8192 <--- start with just one block\n> pread64(32, ..., 16384, 8192) = 16384\n> pread64(32, ..., 32768, 24576) = 32768\n> pread64(32, ..., 65536, 57344) = 65536\n> pread64(32, ..., 131072, 122880) = 131072 <--- soon reading 16\n> blocks at a time\n> pread64(32, ..., 131072, 253952) = 131072\n> pread64(32, ..., 131072, 385024) = 131072\n\n> I guess it could be done in quite a few different ways and I'm open to\n> better ideas. This way inserts prefetching stalls but ramps up\n> quickly and is soon out of the way. I wonder if we would want to make\n> that a policy that a caller can disable, if you want to skip the\n> ramp-up and go straight for the largest possible I/O size? Then I\n> think we'd need a 'flags' argument to the streaming read constructor\n> functions.\n\nI think a 'flags' argument and a way to opt-out of the slow start would \nmake sense. pg_prewarm in particular knows that it will read the whole \nrelation.\n\n> A small detour: While contemplating how this interacts with parallel\n> sequential scan, which also has a notion of ramping up, I noticed\n> another problem. One parallel seq scan process does this:\n> \n> fadvise64(32, 35127296, 131072, POSIX_FADV_WILLNEED) = 0\n> preadv(32, [...], 2, 35127296) = 131072\n> preadv(32, [...], 2, 35258368) = 131072\n> fadvise64(32, 36175872, 131072, POSIX_FADV_WILLNEED) = 0\n> preadv(32, [...], 2, 36175872) = 131072\n> preadv(32, [...], 2, 36306944) = 131072\n> ...\n> \n> We don't really want those fadvise() calls. We don't get them with\n> parallelism disabled, because streaming_read.c is careful not to\n> generate advice for sequential workloads based on ancient wisdom from\n> this mailing list, re-confirmed on recent Linux: WILLNEED hints\n> actually get in the way of Linux's own prefetching and slow you down,\n> so we only want them for truly random access. But the logic can't see\n> that another process is making holes in this process's sequence.\n\nHmm, aside from making the sequential pattern invisible to this process, \nare we defeating Linux's logic too, just by performing the reads from \nmultiple processes? 
The processes might issue the reads to the kernel \nout-of-order.\n\nHow bad is the slowdown when you issue WILLNEED hints on sequential access?\n\n> The two obvious solutions are (1) pass in a flag at the start saying\n> \"I promise this is sequential even if it doesn't look like it, no\n> hints please\" and (2) invent \"shared\" (cross-process) streaming\n> reads, and teach all the parallel seq scan processes to get their\n> buffers from there.\n> \n> Idea (2) is interesting to think about but even if it is a useful idea\n> (not sure) it is certainly overkill just to solve this little problem\n> for now. So perhaps I should implement (1), which would be another\n> reason to add a flags argument. It's not a perfect solution though\n> because some more 'data driven' parallel scans (indexes, bitmaps, ...)\n> have a similar problem that is less amenable to top-down kludgery.\n\n(1) seems fine to me.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Tue, 28 Nov 2023 15:06:06 +0200",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Streaming I/O, vectored I/O (WIP)"
},
{
"msg_contents": "On Wed, Nov 29, 2023 at 01:17:19AM +1300, Thomas Munro wrote:\n\nThanks for posting a new version. I've included a review of 0004.\n\n> I've included just the pg_prewarm example user for now while we\n> discuss the basic infrastructure. The rest are rebased and in my\n> public Github branch streaming-read (repo macdice/postgres) if anyone\n> is interested (don't mind the red CI failures, they're just saying I\n> ran out of monthly CI credits on the 29th, so close...)\n\nI agree it makes sense to commit the interface with just prewarm as a\nuser. Then we can start new threads for the various streaming read users\n(e.g. vacuum, sequential scan, bitmapheapscan).\n\n> From db5de8ab5a1a804f41006239302fdce954cab331 Mon Sep 17 00:00:00 2001\n> From: Thomas Munro <[email protected]>\n> Date: Sat, 22 Jul 2023 17:31:54 +1200\n> Subject: [PATCH v2 4/8] Provide vectored variant of ReadBuffer().\n> \n> diff --git a/src/backend/storage/buffer/bufmgr.c b/src/backend/storage/buffer/bufmgr.c\n> index f7c67d504c..8ae3a72053 100644\n> --- a/src/backend/storage/buffer/bufmgr.c\n> +++ b/src/backend/storage/buffer/bufmgr.c\n> @@ -1046,175 +1048,326 @@ ReadBuffer_common(SMgrRelation smgr, char relpersistence, ForkNumber forkNum,\n> \t\tif (mode == RBM_ZERO_AND_LOCK || mode == RBM_ZERO_AND_CLEANUP_LOCK)\n> \t\t\tflags |= EB_LOCK_FIRST;\n> \n> -\t\treturn ExtendBufferedRel(BMR_SMGR(smgr, relpersistence),\n> -\t\t\t\t\t\t\t\t forkNum, strategy, flags);\n> +\t\t*hit = false;\n> +\n> +\t\treturn ExtendBufferedRel(bmr, forkNum, strategy, flags);\n> \t}\n> \n> -\tTRACE_POSTGRESQL_BUFFER_READ_START(forkNum, blockNum,\n> -\t\t\t\t\t\t\t\t\t smgr->smgr_rlocator.locator.spcOid,\n> -\t\t\t\t\t\t\t\t\t smgr->smgr_rlocator.locator.dbOid,\n> -\t\t\t\t\t\t\t\t\t smgr->smgr_rlocator.locator.relNumber,\n> -\t\t\t\t\t\t\t\t\t smgr->smgr_rlocator.backend);\n> +\tbuffer = PrepareReadBuffer(bmr,\n> +\t\t\t\t\t\t\t forkNum,\n> +\t\t\t\t\t\t\t blockNum,\n> +\t\t\t\t\t\t\t strategy,\n> +\t\t\t\t\t\t\t hit,\n> +\t\t\t\t\t\t\t &allocated);\n> +\n> +\t/* At this point we do NOT hold any locks. */\n> +\n> +\tif (mode == RBM_ZERO_AND_CLEANUP_LOCK || mode == RBM_ZERO_AND_LOCK)\n> +\t{\n> +\t\t/* if we just want zeroes and a lock, we're done */\n> +\t\tZeroBuffer(buffer, mode);\n> +\t}\n> +\telse if (!*hit)\n> +\t{\n> +\t\t/* we might need to perform I/O */\n> +\t\tCompleteReadBuffers(bmr,\n> +\t\t\t\t\t\t\t&buffer,\n> +\t\t\t\t\t\t\tforkNum,\n> +\t\t\t\t\t\t\tblockNum,\n> +\t\t\t\t\t\t\t1,\n> +\t\t\t\t\t\t\tmode == RBM_ZERO_ON_ERROR,\n> +\t\t\t\t\t\t\tstrategy);\n> +\t}\n> +\n> +\treturn buffer;\n> +}\n> +\n> +/*\n> + * Prepare to read a block. The buffer is pinned. If this is a 'hit', then\n> + * the returned buffer can be used immediately. Otherwise, a physical read\n> + * should be completed with CompleteReadBuffers(). PrepareReadBuffer()\n> + * followed by CompleteReadBuffers() is equivalent ot ReadBuffer(), but the\n\not -> to\n\n> + * caller has the opportunity to coalesce reads of neighboring blocks into one\n> + * CompleteReadBuffers() call.\n> + *\n> + * *found is set to true for a hit, and false for a miss.\n> + *\n> + * *allocated is set to true for a miss that allocates a buffer for the first\n> + * time. 
If there are multiple calls to PrepareReadBuffer() for the same\n> + * block before CompleteReadBuffers() or ReadBuffer_common() finishes the\n> + * read, then only the first such call will receive *allocated == true, which\n> + * the caller might use to issue just one prefetch hint.\n> + */\n> +Buffer\n> +PrepareReadBuffer(BufferManagerRelation bmr,\n> +\t\t\t\t ForkNumber forkNum,\n> +\t\t\t\t BlockNumber blockNum,\n> +\t\t\t\t BufferAccessStrategy strategy,\n> +\t\t\t\t bool *found,\n> +\t\t\t\t bool *allocated)\n> +{\n> +\tBufferDesc *bufHdr;\n> +\tbool\t\tisLocalBuf;\n> +\tIOContext\tio_context;\n> +\tIOObject\tio_object;\n> \n> +\tAssert(blockNum != P_NEW);\n> +\n> +\tif (bmr.rel)\n> +\t{\n> +\t\tbmr.smgr = RelationGetSmgr(bmr.rel);\n> +\t\tbmr.relpersistence = bmr.rel->rd_rel->relpersistence;\n> +\t}\n> +\n> +\tisLocalBuf = SmgrIsTemp(bmr.smgr);\n> \tif (isLocalBuf)\n> \t{\n> -\t\t/*\n> -\t\t * We do not use a BufferAccessStrategy for I/O of temporary tables.\n> -\t\t * However, in some cases, the \"strategy\" may not be NULL, so we can't\n> -\t\t * rely on IOContextForStrategy() to set the right IOContext for us.\n> -\t\t * This may happen in cases like CREATE TEMPORARY TABLE AS...\n> -\t\t */\n> \t\tio_context = IOCONTEXT_NORMAL;\n> \t\tio_object = IOOBJECT_TEMP_RELATION;\n> -\t\tbufHdr = LocalBufferAlloc(smgr, forkNum, blockNum, &found);\n> -\t\tif (found)\n> -\t\t\tpgBufferUsage.local_blks_hit++;\n> -\t\telse if (mode == RBM_NORMAL || mode == RBM_NORMAL_NO_LOG ||\n> -\t\t\t\t mode == RBM_ZERO_ON_ERROR)\n> -\t\t\tpgBufferUsage.local_blks_read++;\n> \t}\n> \telse\n> \t{\n> -\t\t/*\n> -\t\t * lookup the buffer. IO_IN_PROGRESS is set if the requested block is\n> -\t\t * not currently in memory.\n> -\t\t */\n> \t\tio_context = IOContextForStrategy(strategy);\n> \t\tio_object = IOOBJECT_RELATION;\n> -\t\tbufHdr = BufferAlloc(smgr, relpersistence, forkNum, blockNum,\n> -\t\t\t\t\t\t\t strategy, &found, io_context);\n> -\t\tif (found)\n> -\t\t\tpgBufferUsage.shared_blks_hit++;\n> -\t\telse if (mode == RBM_NORMAL || mode == RBM_NORMAL_NO_LOG ||\n> -\t\t\t\t mode == RBM_ZERO_ON_ERROR)\n> -\t\t\tpgBufferUsage.shared_blks_read++;\n\nYou've lost this test in your new version. You can do the same thing\n(avoid counting zeroed buffers as blocks read) by moving this\npgBufferUsage.shared/local_blks_read++ back into ReadBuffer_common()\nwhere you know if you called ZeroBuffer() or CompleteReadBuffers().\n\n> \t}\n> \n> -\t/* At this point we do NOT hold any locks. 
*/\n> +\tTRACE_POSTGRESQL_BUFFER_READ_START(forkNum, blockNum,\n> +\t\t\t\t\t\t\t\t\t bmr.smgr->smgr_rlocator.locator.spcOid,\n> +\t\t\t\t\t\t\t\t\t bmr.smgr->smgr_rlocator.locator.dbOid,\n> +\t\t\t\t\t\t\t\t\t bmr.smgr->smgr_rlocator.locator.relNumber,\n> +\t\t\t\t\t\t\t\t\t bmr.smgr->smgr_rlocator.backend);\n> \n> -\t/* if it was already in the buffer pool, we're done */\n> -\tif (found)\n> +\tResourceOwnerEnlarge(CurrentResourceOwner);\n> +\tif (isLocalBuf)\n> +\t{\n> +\t\tbufHdr = LocalBufferAlloc(bmr.smgr, forkNum, blockNum, found, allocated);\n> +\t\tif (*found)\n> +\t\t\tpgBufferUsage.local_blks_hit++;\n> +\t\telse\n> +\t\t\tpgBufferUsage.local_blks_read++;\n\nSee comment above.\n\n> +\t}\n> +\telse\n> +\t{\n> +\t\tbufHdr = BufferAlloc(bmr.smgr, bmr.relpersistence, forkNum, blockNum,\n> +\t\t\t\t\t\t\t strategy, found, allocated, io_context);\n> +\t\tif (*found)\n> +\t\t\tpgBufferUsage.shared_blks_hit++;\n> +\t\telse\n> +\t\t\tpgBufferUsage.shared_blks_read++;\n> +\t}\n> +\tif (bmr.rel)\n> +\t{\n> +\t\tpgstat_count_buffer_read(bmr.rel);\n\nThis is double-counting reads. You've left the call in\nReadBufferExtended() as well as adding this here. It should be fine to\nremove it from ReadBufferExtended(). Because you test bmr.rel, leaving\nthe call here in PrepareReadBuffer() wouldn't have an effect on\nReadBuffer_common() callers who don't pass a relation (like recovery).\nThe other current callers of ReadBuffer_common() (by way of\nExtendBufferedRelTo()) who do pass a relation are visibility map and\nfreespace map extension, and I don't think we track relation stats for\nthe VM and FSM.\n\nThis does continue the practice of counting zeroed buffers as reads in\ntable-level stats. But, that is the same as master.\n\n> -\t * if we have gotten to this point, we have allocated a buffer for the\n> -\t * page but its contents are not yet valid. IO_IN_PROGRESS is set for it,\n> -\t * if it's a shared buffer.\n> -\t */\n> -\tAssert(!(pg_atomic_read_u32(&bufHdr->state) & BM_VALID));\t/* spinlock not needed */\n> +/*\n> + * Complete a set reads prepared with PrepareReadBuffers(). The buffers must\n> + * cover a cluster of neighboring block numbers.\n> + *\n> + * Typically this performs one physical vector read covering the block range,\n> + * but if some of the buffers have already been read in the meantime by any\n> + * backend, zero or multiple reads may be performed.\n> + */\n> +void\n> +CompleteReadBuffers(BufferManagerRelation bmr,\n> +\t\t\t\t\tBuffer *buffers,\n> +\t\t\t\t\tForkNumber forknum,\n> +\t\t\t\t\tBlockNumber blocknum,\n> +\t\t\t\t\tint nblocks,\n> +\t\t\t\t\tbool zero_on_error,\n> +\t\t\t\t\tBufferAccessStrategy strategy)\n> +{\n...\n> -\t\tpgstat_count_io_op_time(io_object, io_context,\n> -\t\t\t\t\t\t\t\tIOOP_READ, io_start, 1);\n> +\t\t/* We found a buffer that we have to read in. */\n> +\t\tio_buffers[0] = buffers[i];\n> +\t\tio_pages[0] = BufferGetBlock(buffers[i]);\n> +\t\tio_first_block = blocknum + i;\n> +\t\tio_buffers_len = 1;\n> \n> -\t\t/* check for garbage data */\n> -\t\tif (!PageIsVerifiedExtended((Page) bufBlock, blockNum,\n> -\t\t\t\t\t\t\t\t\tPIV_LOG_WARNING | PIV_REPORT_STAT))\n> +\t\t/*\n> +\t\t * How many neighboring-on-disk blocks can we can scatter-read into\n> +\t\t * other buffers at the same time?\n> +\t\t */\n> +\t\twhile ((i + 1) < nblocks &&\n> +\t\t\t CompleteReadBuffersCanStartIO(buffers[i + 1]))\n> +\t\t{\n> +\t\t\t/* Must be consecutive block numbers. 
*/\n> +\t\t\tAssert(BufferGetBlockNumber(buffers[i + 1]) ==\n> +\t\t\t\t BufferGetBlockNumber(buffers[i]) + 1);\n> +\n> +\t\t\tio_buffers[io_buffers_len] = buffers[++i];\n> +\t\t\tio_pages[io_buffers_len++] = BufferGetBlock(buffers[i]);\n> +\t\t}\n> +\n> +\t\tio_start = pgstat_prepare_io_time();\n> +\t\tsmgrreadv(bmr.smgr, forknum, io_first_block, io_pages, io_buffers_len);\n> +\t\tpgstat_count_io_op_time(io_object, io_context, IOOP_READ, io_start, 1);\n\nI'd pass io_buffers_len as cnt to pgstat_count_io_op_time(). op_bytes\nwill be BLCKSZ and multiplying that by the number of reads should\nproduce the number of bytes read.\n\n> diff --git a/src/backend/storage/buffer/localbuf.c b/src/backend/storage/buffer/localbuf.c\n> index 4efb34b75a..ee9307b612 100644\n> --- a/src/backend/storage/buffer/localbuf.c\n> +++ b/src/backend/storage/buffer/localbuf.c\n> @@ -116,7 +116,7 @@ PrefetchLocalBuffer(SMgrRelation smgr, ForkNumber forkNum,\n> */\n> BufferDesc *\n> LocalBufferAlloc(SMgrRelation smgr, ForkNumber forkNum, BlockNumber blockNum,\n> -\t\t\t\t bool *foundPtr)\n> +\t\t\t\t bool *foundPtr, bool *allocPtr)\n> {\n> \tBufferTag\tnewTag;\t\t\t/* identity of requested block */\n> \tLocalBufferLookupEnt *hresult;\n> @@ -144,6 +144,7 @@ LocalBufferAlloc(SMgrRelation smgr, ForkNumber forkNum, BlockNumber blockNum,\n> \t\tAssert(BufferTagsEqual(&bufHdr->tag, &newTag));\n> \n> \t\t*foundPtr = PinLocalBuffer(bufHdr, true);\n> +\t\t*allocPtr = false;\n> \t}\n> \telse\n> \t{\n> @@ -170,6 +171,7 @@ LocalBufferAlloc(SMgrRelation smgr, ForkNumber forkNum, BlockNumber blockNum,\n> \t\tpg_atomic_unlocked_write_u32(&bufHdr->state, buf_state);\n> \n> \t\t*foundPtr = false;\n> +\t\t*allocPtr = true;\n> \t}\n\nI would prefer you use consistent naming for\nallocPtr/allocatedPtr/allocated. I also think that all the functions\ntaking it as an output argument should explain what it is\n(BufferAlloc()/LocalBufferAlloc(), etc). I found myself doing a bit of\ndigging around to figure it out. You have a nice comment about it above\nPrepareReadBuffer(). I think you may need to resign yourself to\nrestating that bit (or some version of it) for all of the functions\ntaking it as an argument.\n\n> \n> diff --git a/src/include/storage/bufmgr.h b/src/include/storage/bufmgr.h\n> index 41e26d3e20..e29ca85077 100644\n> --- a/src/include/storage/bufmgr.h\n> +++ b/src/include/storage/bufmgr.h\n> @@ -14,6 +14,8 @@\n> #ifndef BUFMGR_H\n> #define BUFMGR_H\n> \n> +#include \"pgstat.h\"\n\nI don't know what we are supposed to do, but I would have included this\nin bufmgr.c (where I actually needed it) instead of including it here.\n\n> +#include \"port/pg_iovec.h\"\n> #include \"storage/block.h\"\n> #include \"storage/buf.h\"\n> #include \"storage/bufpage.h\"\n> @@ -47,6 +49,8 @@ typedef enum\n> \tRBM_ZERO_AND_CLEANUP_LOCK,\t/* Like RBM_ZERO_AND_LOCK, but locks the page\n> \t\t\t\t\t\t\t\t * in \"cleanup\" mode */\n> \tRBM_ZERO_ON_ERROR,\t\t\t/* Read, but return an all-zeros page on error */\n\n> +\tRBM_WILL_ZERO,\t\t\t\t/* Don't read from disk, caller will call\n> +\t\t\t\t\t\t\t\t * ZeroBuffer() */\n\nIt's confusing that this (RBM_WILL_ZERO) is part of this commit since it\nisn't used in this commit.\n\n> \tRBM_NORMAL_NO_LOG,\t\t\t/* Don't log page as invalid during WAL\n> \t\t\t\t\t\t\t\t * replay; otherwise same as RBM_NORMAL */\n> } ReadBufferMode;\n\n- Melanie\n\n\n",
"msg_date": "Tue, 28 Nov 2023 20:21:28 -0500",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Streaming I/O, vectored I/O (WIP)"
},
{
"msg_contents": "On Wed, Nov 29, 2023 at 1:44 AM Heikki Linnakangas <[email protected]> wrote:\n> LGTM. I think this 0001 patch is ready for commit, independently of the\n> rest of the patches.\n\nDone.\n\n> In v2-0002-Provide-vectored-variants-of-FileRead-and-FileWri-1.patch, fd.h:\n>\n> > +/* Filename components */\n> > +#define PG_TEMP_FILES_DIR \"pgsql_tmp\"\n> > +#define PG_TEMP_FILE_PREFIX \"pgsql_tmp\"\n> > +\n>\n> These seem out of place, we already have them in common/file_utils.h.\n\nYeah, they moved from there in f39b2658 and I messed up the rebase. Fixed.\n\n> Other than that,\n> v2-0002-Provide-vectored-variants-of-FileRead-and-FileWri-1.patch and\n> v2-0003-Provide-vectored-variants-of-smgrread-and-smgrwri.patch look\n> good to me.\n\nOne thing I wasn't 100% happy with was the treatment of ENOSPC. A few\ncallers believe that short writes set errno: they error out with a\nmessage including %m. We have historically set errno = ENOSPC inside\nFileWrite() if the write size was unexpectedly small AND the kernel\ndidn't set errno to a non-zero value (having set it to zero ourselves\nearlier). In FileWriteV(), I didn't want to do that because it is\nexpensive to compute the total write size from the vector array and we\nmanaged to measure an effect due to that in some workloads.\n\nNote that the smgr patch actually handles short writes by continuing,\ninstead of raising an error. Short writes do already occur in the\nwild on various systems for various rare technical reasons other than\nENOSPC I have heard (imagine transient failure to acquire some\ntemporary memory that the kernel chooses not to wait for, stuff like\nthat, though certainly many people and programs believe they should\nnot happen[1]), and it seems like a good idea to actually handle them\nas our write sizes increase and the probability of short writes might\npresumably increase.\n\nWith the previous version of the patch, we'd have to change a couple\nof other callers not to believe that short writes are errors and set\nerrno (callers are inconsistent on this point). I don't really love\nthat we have \"fake\" system errors but I also want to stay focused\nhere, so in this new version V3 I tried a new approach: I realised I\ncan just always set errno without needing the total size, so that\n(undocumented) aspect of the interface doesn't change. The point\nbeing that it doesn't matter if you clobber errno with a bogus value\nwhen the write was non-short. Thoughts?\n\n[1] https://utcc.utoronto.ca/~cks/space/blog/unix/WritesNotShortOften",
"msg_date": "Thu, 30 Nov 2023 08:39:44 +1300",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Streaming I/O, vectored I/O (WIP)"
},
{
"msg_contents": "On 29/11/2023 21:39, Thomas Munro wrote:\n> One thing I wasn't 100% happy with was the treatment of ENOSPC. A few\n> callers believe that short writes set errno: they error out with a\n> message including %m. We have historically set errno = ENOSPC inside\n> FileWrite() if the write size was unexpectedly small AND the kernel\n> didn't set errno to a non-zero value (having set it to zero ourselves\n> earlier). In FileWriteV(), I didn't want to do that because it is\n> expensive to compute the total write size from the vector array and we\n> managed to measure an effect due to that in some workloads.\n> \n> Note that the smgr patch actually handles short writes by continuing,\n> instead of raising an error. Short writes do already occur in the\n> wild on various systems for various rare technical reasons other than\n> ENOSPC I have heard (imagine transient failure to acquire some\n> temporary memory that the kernel chooses not to wait for, stuff like\n> that, though certainly many people and programs believe they should\n> not happen[1]), and it seems like a good idea to actually handle them\n> as our write sizes increase and the probability of short writes might\n> presumably increase.\n\nMaybe we should bite the bullet and always retry short writes in \nFileWriteV(). Is that what you meant by \"handling them\"?\n\nIf the total size is expensive to calculate, how about passing it as an \nextra argument? Presumably it is cheap for the callers to calculate at \nthe same time that they build the iovec array?\n\n> With the previous version of the patch, we'd have to change a couple\n> of other callers not to believe that short writes are errors and set\n> errno (callers are inconsistent on this point). I don't really love\n> that we have \"fake\" system errors but I also want to stay focused\n> here, so in this new version V3 I tried a new approach: I realised I\n> can just always set errno without needing the total size, so that\n> (undocumented) aspect of the interface doesn't change. The point\n> being that it doesn't matter if you clobber errno with a bogus value\n> when the write was non-short. Thoughts?\n\nFeels pretty ugly, but I don't see anything outright wrong with that.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Thu, 30 Nov 2023 01:16:30 +0200",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Streaming I/O, vectored I/O (WIP)"
},
{
"msg_contents": "On Thu, Nov 30, 2023 at 12:16 PM Heikki Linnakangas <[email protected]> wrote:\n> On 29/11/2023 21:39, Thomas Munro wrote:\n> > One thing I wasn't 100% happy with was the treatment of ENOSPC. A few\n> > callers believe that short writes set errno: they error out with a\n> > message including %m. We have historically set errno = ENOSPC inside\n> > FileWrite() if the write size was unexpectedly small AND the kernel\n> > didn't set errno to a non-zero value (having set it to zero ourselves\n> > earlier). In FileWriteV(), I didn't want to do that because it is\n> > expensive to compute the total write size from the vector array and we\n> > managed to measure an effect due to that in some workloads.\n> >\n> > Note that the smgr patch actually handles short writes by continuing,\n> > instead of raising an error. Short writes do already occur in the\n> > wild on various systems for various rare technical reasons other than\n> > ENOSPC I have heard (imagine transient failure to acquire some\n> > temporary memory that the kernel chooses not to wait for, stuff like\n> > that, though certainly many people and programs believe they should\n> > not happen[1]), and it seems like a good idea to actually handle them\n> > as our write sizes increase and the probability of short writes might\n> > presumably increase.\n>\n> Maybe we should bite the bullet and always retry short writes in\n> FileWriteV(). Is that what you meant by \"handling them\"?\n> If the total size is expensive to calculate, how about passing it as an\n> extra argument? Presumably it is cheap for the callers to calculate at\n> the same time that they build the iovec array?\n\nIt's cheap for md.c, because it already has nblocks_this_segment.\nThat's one reason I chose to put the retry there. If we push it down\nto fd.c in order to be able to help other callers, you're right that\nwe could pass in the total size (and I guess assert that it's\ncorrect), but that is sort of annoyingly redundant and further from\nthe interface we're wrapping.\n\nThere is another problem with pushing it down to fd.c, though.\nSuppose you try to write 8192 bytes, and the kernel says \"you wrote\n4096 bytes\" so your loop goes around again with the second half the\ndata and now the kernel says \"-1, ENOSPC\". What are you going to do?\nfd.c doesn't raise errors for I/O failure, it fails with -1 and errno,\nso you'd either have to return -1, ENOSPC (converting short writes\ninto actual errors, a lie because you did write some data), or return\n4096 (and possibly also set errno = ENOSPC as we have always done).\nSo you can't really handle this problem at this level, can you?\nUnless you decide that fd.c should get into the business of raising\nerrors for I/O failures, which would be a bit of a departure.\n\nThat's why I did the retry higher up in md.c.\n\n> > With the previous version of the patch, we'd have to change a couple\n> > of other callers not to believe that short writes are errors and set\n> > errno (callers are inconsistent on this point). I don't really love\n> > that we have \"fake\" system errors but I also want to stay focused\n> > here, so in this new version V3 I tried a new approach: I realised I\n> > can just always set errno without needing the total size, so that\n> > (undocumented) aspect of the interface doesn't change. The point\n> > being that it doesn't matter if you clobber errno with a bogus value\n> > when the write was non-short. 
Thoughts?\n>\n> Feels pretty ugly, but I don't see anything outright wrong with that.\n\nCool. I would consider cleaning up all the callers and getting rid of\nthis ENOSPC stuff in independent work, but I didn't want discussion of\nthat (eg what external/extension code knows about this API?) to derail\nTHIS project, hence desire to preserve existing behaviour.\n\n\n",
"msg_date": "Thu, 30 Nov 2023 13:01:46 +1300",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Streaming I/O, vectored I/O (WIP)"
},
{
"msg_contents": "Hi,\n\nOn 2023-11-30 13:01:46 +1300, Thomas Munro wrote:\n> On Thu, Nov 30, 2023 at 12:16 PM Heikki Linnakangas <[email protected]> wrote:\n> > On 29/11/2023 21:39, Thomas Munro wrote:\n> > > One thing I wasn't 100% happy with was the treatment of ENOSPC. A few\n> > > callers believe that short writes set errno: they error out with a\n> > > message including %m. We have historically set errno = ENOSPC inside\n> > > FileWrite() if the write size was unexpectedly small AND the kernel\n> > > didn't set errno to a non-zero value (having set it to zero ourselves\n> > > earlier). In FileWriteV(), I didn't want to do that because it is\n> > > expensive to compute the total write size from the vector array and we\n> > > managed to measure an effect due to that in some workloads.\n> > >\n> > > Note that the smgr patch actually handles short writes by continuing,\n> > > instead of raising an error. Short writes do already occur in the\n> > > wild on various systems for various rare technical reasons other than\n> > > ENOSPC I have heard (imagine transient failure to acquire some\n> > > temporary memory that the kernel chooses not to wait for, stuff like\n> > > that, though certainly many people and programs believe they should\n> > > not happen[1]), and it seems like a good idea to actually handle them\n> > > as our write sizes increase and the probability of short writes might\n> > > presumably increase.\n> >\n> > Maybe we should bite the bullet and always retry short writes in\n> > FileWriteV(). Is that what you meant by \"handling them\"?\n> > If the total size is expensive to calculate, how about passing it as an\n> > extra argument? Presumably it is cheap for the callers to calculate at\n> > the same time that they build the iovec array?\n> \n> It's cheap for md.c, because it already has nblocks_this_segment.\n> That's one reason I chose to put the retry there. If we push it down\n> to fd.c in order to be able to help other callers, you're right that\n> we could pass in the total size (and I guess assert that it's\n> correct), but that is sort of annoyingly redundant and further from\n> the interface we're wrapping.\n\n> There is another problem with pushing it down to fd.c, though.\n> Suppose you try to write 8192 bytes, and the kernel says \"you wrote\n> 4096 bytes\" so your loop goes around again with the second half the\n> data and now the kernel says \"-1, ENOSPC\". What are you going to do?\n> fd.c doesn't raise errors for I/O failure, it fails with -1 and errno,\n> so you'd either have to return -1, ENOSPC (converting short writes\n> into actual errors, a lie because you did write some data), or return\n> 4096 (and possibly also set errno = ENOSPC as we have always done).\n> So you can't really handle this problem at this level, can you?\n> Unless you decide that fd.c should get into the business of raising\n> errors for I/O failures, which would be a bit of a departure.\n> \n> That's why I did the retry higher up in md.c.\n\nI think that's the right call. I think for AIO we can't do retry handling\npurely in fd.c, or at least it'd be quite awkward. It doesn't seem like it'd\nbuy us that much in md.c anyway, we still need to handle the cross segment\ncase and such, from what I can tell?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 8 Dec 2023 10:25:54 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Streaming I/O, vectored I/O (WIP)"
},
{
"msg_contents": "On Sat, Dec 9, 2023 at 7:25 AM Andres Freund <[email protected]> wrote:\n> On 2023-11-30 13:01:46 +1300, Thomas Munro wrote:\n> > On Thu, Nov 30, 2023 at 12:16 PM Heikki Linnakangas <[email protected]> wrote:\n> > > Maybe we should bite the bullet and always retry short writes in\n> > > FileWriteV(). Is that what you meant by \"handling them\"?\n> > > If the total size is expensive to calculate, how about passing it as an\n> > > extra argument? Presumably it is cheap for the callers to calculate at\n> > > the same time that they build the iovec array?\n\n> > There is another problem with pushing it down to fd.c, though.\n> > Suppose you try to write 8192 bytes, and the kernel says \"you wrote\n> > 4096 bytes\" so your loop goes around again with the second half the\n> > data and now the kernel says \"-1, ENOSPC\". What are you going to do?\n> > fd.c doesn't raise errors for I/O failure, it fails with -1 and errno,\n> > so you'd either have to return -1, ENOSPC (converting short writes\n> > into actual errors, a lie because you did write some data), or return\n> > 4096 (and possibly also set errno = ENOSPC as we have always done).\n> > So you can't really handle this problem at this level, can you?\n> > Unless you decide that fd.c should get into the business of raising\n> > errors for I/O failures, which would be a bit of a departure.\n> >\n> > That's why I did the retry higher up in md.c.\n>\n> I think that's the right call. I think for AIO we can't do retry handling\n> purely in fd.c, or at least it'd be quite awkward. It doesn't seem like it'd\n> buy us that much in md.c anyway, we still need to handle the cross segment\n> case and such, from what I can tell?\n\nHeikki, what do you think about this: we could go with the v3 fd.c\nand md.c patches, but move adjust_iovec_for_partial_transfer() into\nsrc/common/file_utils.c, so that at least that slightly annoying part\nof the job is available for re-use by future code that faces the same\nproblem?\n\nNote that in file_utils.c we already have pg_pwritev_with_retry(),\nwhich is clearly related to all this: that is a function that\nguarantees to either complete the full pwritev() or throw an ERROR,\nbut leaves it undefined whether any data has been written on ERROR.\nIt has to add up the size too, and it adjusts the iovec array at the\nsame time, so it wouldn't use adjust_iovec_for_partial_transfer().\nThis is essentially the type of interface that I declined to put into\nfd.c's FileWrite() and FileRead() because I feel like it doesn't fit\nwith the existing functions' primary business of adding vfd support to\nwell known basic I/O functions that return bytes transferred and set\nerrno. Perhaps someone might later want to introduce File*WithRetry()\nwrappers or something if that proves useful? I wouldn't want them for\nmd.c though because I already know the size.\n\n\n",
"msg_date": "Sat, 9 Dec 2023 13:41:31 +1300",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Streaming I/O, vectored I/O (WIP)"
},
{
"msg_contents": "On 09/12/2023 02:41, Thomas Munro wrote:\n> On Sat, Dec 9, 2023 at 7:25 AM Andres Freund <[email protected]> wrote:\n>> On 2023-11-30 13:01:46 +1300, Thomas Munro wrote:\n>>> On Thu, Nov 30, 2023 at 12:16 PM Heikki Linnakangas <[email protected]> wrote:\n>>>> Maybe we should bite the bullet and always retry short writes in\n>>>> FileWriteV(). Is that what you meant by \"handling them\"?\n>>>> If the total size is expensive to calculate, how about passing it as an\n>>>> extra argument? Presumably it is cheap for the callers to calculate at\n>>>> the same time that they build the iovec array?\n> \n>>> There is another problem with pushing it down to fd.c, though.\n>>> Suppose you try to write 8192 bytes, and the kernel says \"you wrote\n>>> 4096 bytes\" so your loop goes around again with the second half the\n>>> data and now the kernel says \"-1, ENOSPC\". What are you going to do?\n>>> fd.c doesn't raise errors for I/O failure, it fails with -1 and errno,\n>>> so you'd either have to return -1, ENOSPC (converting short writes\n>>> into actual errors, a lie because you did write some data), or return\n>>> 4096 (and possibly also set errno = ENOSPC as we have always done).\n>>> So you can't really handle this problem at this level, can you?\n>>> Unless you decide that fd.c should get into the business of raising\n>>> errors for I/O failures, which would be a bit of a departure.\n>>>\n>>> That's why I did the retry higher up in md.c.\n>>\n>> I think that's the right call. I think for AIO we can't do retry handling\n>> purely in fd.c, or at least it'd be quite awkward. It doesn't seem like it'd\n>> buy us that much in md.c anyway, we still need to handle the cross segment\n>> case and such, from what I can tell?\n> \n> Heikki, what do you think about this: we could go with the v3 fd.c\n> and md.c patches, but move adjust_iovec_for_partial_transfer() into\n> src/common/file_utils.c, so that at least that slightly annoying part\n> of the job is available for re-use by future code that faces the same\n> problem?\n\nOk, works for me.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Sat, 9 Dec 2023 11:23:00 +0200",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Streaming I/O, vectored I/O (WIP)"
},
{
"msg_contents": "On Sat, Dec 9, 2023 at 10:23 PM Heikki Linnakangas <[email protected]> wrote:\n> Ok, works for me.\n\nI finished up making a few more improvements:\n\n1. I eventually figured out how to generalise\ncompute_remaining_iovec() (as I now call it) so that the existing\npg_pwritev_with_retry() in file_utils.c could also use it, so that's\nnow done in a patch of its own.\n\n2. FileReadV/FileWriteV patch:\n\n * further simplification of the traditional ENOSPC 'guess'\n * unconstify() changed to raw cast (pending [1])\n * fixed the DO_DB()-wrapped debugging code\n\n3. smgrreadv/smgrwritev patch:\n\n * improved ENOSPC handling\n * improve description of EOF and ENOSPC handling\n * fixed the sizes reported in dtrace static probes\n * fixed some words in the docs about that\n * changed error messages to refer to \"blocks %u..%u\"\n\n4. smgrprefetch-with-nblocks patch has no change, hasn't drawn any\ncomments hopefully because it is uncontroversial.\n\nI'm planning to commit these fairly soon.\n\n[1] https://www.postgresql.org/message-id/flat/CA%2BhUKGK3OXFjkOyZiw-DgL2bUqk9by1uGuCnViJX786W%2BfyDSw%40mail.gmail.com",
"msg_date": "Mon, 11 Dec 2023 22:12:05 +1300",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Streaming I/O, vectored I/O (WIP)"
},
{
"msg_contents": "On 11/12/2023 11:12, Thomas Munro wrote:\n> 1. I eventually figured out how to generalise\n> compute_remaining_iovec() (as I now call it) so that the existing\n> pg_pwritev_with_retry() in file_utils.c could also use it, so that's\n> now done in a patch of its own.\n\nIn compute_remaining_iovec():\n> 'source' and 'destination' may point to the same array, in which\n> case it is adjusted in-place; otherwise 'destination' must have enough\n> space for 'iovcnt' elements.\nIs there any use case for not adjusting it in place? \npg_pwritev_with_retry() takes a const iovec array, but maybe just remove \nthe 'const' and document that it scribbles on it?\n\n> I'm planning to commit these fairly soon.\n\n+1\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Mon, 11 Dec 2023 11:27:58 +0200",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Streaming I/O, vectored I/O (WIP)"
},
{
"msg_contents": "On Mon, Dec 11, 2023 at 10:28 PM Heikki Linnakangas <[email protected]> wrote:\n> On 11/12/2023 11:12, Thomas Munro wrote:\n> > 1. I eventually figured out how to generalise\n> > compute_remaining_iovec() (as I now call it) so that the existing\n> > pg_pwritev_with_retry() in file_utils.c could also use it, so that's\n> > now done in a patch of its own.\n>\n> In compute_remaining_iovec():\n> > 'source' and 'destination' may point to the same array, in which\n> > case it is adjusted in-place; otherwise 'destination' must have enough\n> > space for 'iovcnt' elements.\n> Is there any use case for not adjusting it in place?\n> pg_pwritev_with_retry() takes a const iovec array, but maybe just remove\n> the 'const' and document that it scribbles on it?\n\nI guess I just wanted to preserve pg_pwritev_with_retry()'s existing\nprototype, primarily because it matches standard pwritev()/writev().\n\n\n",
"msg_date": "Mon, 11 Dec 2023 22:37:13 +1300",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Streaming I/O, vectored I/O (WIP)"
},
{
"msg_contents": "Le 11/12/2023 à 10:12, Thomas Munro a écrit :\n> 3. smgrreadv/smgrwritev patch:\n> \n> * improved ENOSPC handling\n> * improve description of EOF and ENOSPC handling\n> * fixed the sizes reported in dtrace static probes\n> * fixed some words in the docs about that\n> * changed error messages to refer to \"blocks %u..%u\"\n> \n> 4. smgrprefetch-with-nblocks patch has no change, hasn't drawn any\n> comments hopefully because it is uncontroversial.\n> \n> I'm planning to commit these fairly soon.\n\n\nThanks, very useful additions.\nNot sure what you have already done to come next...\n\nI have 2 smalls patches here:\n* to use range prefetch in pg_prewarm (smgrprefetch only at the moment, \nusing smgrreadv to come next).\n* to support nblocks=0 in smgrprefetch (posix_fadvise supports a len=0 \nto apply flag from offset to end of file).\n\nShould I add to commitfest ?\n---\nCédric Villemain +33 (0)6 20 30 22 52\nhttps://Data-Bene.io\nPostgreSQL Expertise, Support, Training, R&D\n\n\n",
"msg_date": "Sat, 30 Dec 2023 13:01:04 +0100",
"msg_from": "=?UTF-8?Q?C=C3=A9dric_Villemain?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Streaming I/O, vectored I/O (WIP)"
},
{
"msg_contents": "I've written a new version of the vacuum streaming read user on top of\nthe rebased patch set [1]. It differs substantially from Andres' and\nincludes several refactoring patches that can apply on top of master.\nAs such, I've proposed those in a separate thread [2]. I noticed mac\nand windows fail to build on CI for my branch with the streaming read\ncode. I haven't had a chance to investigate -- but I must have done\nsomething wrong on rebase.\n\n- Melanie\n\n[1] https://github.com/melanieplageman/postgres/tree/stepwise_vac_streaming_read\n[2] https://www.postgresql.org/message-id/CAAKRu_Yf3gvXGcCnqqfoq0Q8LX8UM-e-qbm_B1LeZh60f8WhWA%40mail.gmail.com\n\n\n",
"msg_date": "Sun, 31 Dec 2023 13:36:28 -0500",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Streaming I/O, vectored I/O (WIP)"
},
{
"msg_contents": "On Wed, Nov 29, 2023 at 2:21 PM Melanie Plageman\n<[email protected]> wrote:\n> Thanks for posting a new version. I've included a review of 0004.\n\nThanks! I committed the patches up as far as smgr.c before the\nholidays. The next thing to commit would be the bufmgr.c one for\nvectored ReadBuffer(), v5-0001. Here's my response to your review of\nthat, which triggered quite a few changes.\n\nSee also new version of the streaming_read.c patch, with change list\nat end. (I'll talk about v5-0002, the SMgrRelation lifetime one, over\non the separate thread about that where Heikki posted a better\nversion.)\n\n> ot -> to\n\nFixed.\n\n> > - if (found)\n> > - pgBufferUsage.shared_blks_hit++;\n> > - else if (mode == RBM_NORMAL || mode == RBM_NORMAL_NO_LOG ||\n> > - mode == RBM_ZERO_ON_ERROR)\n> > - pgBufferUsage.shared_blks_read++;\n>\n> You've lost this test in your new version. You can do the same thing\n> (avoid counting zeroed buffers as blocks read) by moving this\n> pgBufferUsage.shared/local_blks_read++ back into ReadBuffer_common()\n> where you know if you called ZeroBuffer() or CompleteReadBuffers().\n\nYeah, right.\n\nAfter thinking about that some more, that's true and that placement\nwould be good, but only if we look just at this patch, where we are\nchopping ReadBuffer() into two parts (PrepareReadBuffer() and\nCompleteReadBuffers()), and then putting them back together again.\nHowever, soon we'll want to use the two functions separately, and we\nwon't call ReadBuffer[_common]().\n\nNew idea: PrepareReadBuffer() can continue to be in charge of bumping\n{local,shared}_blks_hit, but {local,shared}_blks_read++ can happen in\nCompleteReadBuffers(). There is no counter for zeroed buffers, but if\nthere ever is in the future, it can go into ZeroBuffer().\n\nIn this version, I've moved that into CompleteReadBuffers(), along\nwith a new comment to explain a pre-existing deficiency in the whole\nscheme: there is a race where you finish up counting a read but\nsomeone else actually does the read, and also counts it. I'm trying\nto preserve the existing bean counting logic to the extent possible\nacross this refactoring.\n\n> > + }\n> > + else\n> > + {\n> > + bufHdr = BufferAlloc(bmr.smgr, bmr.relpersistence, forkNum, blockNum,\n> > + strategy, found, allocated, io_context);\n> > + if (*found)\n> > + pgBufferUsage.shared_blks_hit++;\n> > + else\n> > + pgBufferUsage.shared_blks_read++;\n> > + }\n> > + if (bmr.rel)\n> > + {\n> > + pgstat_count_buffer_read(bmr.rel);\n>\n> This is double-counting reads. You've left the call in\n> ReadBufferExtended() as well as adding this here. It should be fine to\n> remove it from ReadBufferExtended(). Because you test bmr.rel, leaving\n> the call here in PrepareReadBuffer() wouldn't have an effect on\n> ReadBuffer_common() callers who don't pass a relation (like recovery).\n> The other current callers of ReadBuffer_common() (by way of\n> ExtendBufferedRelTo()) who do pass a relation are visibility map and\n> freespace map extension, and I don't think we track relation stats for\n> the VM and FSM.\n\nOh yeah. Right. Fixed.\n\n> This does continue the practice of counting zeroed buffers as reads in\n> table-level stats. But, that is the same as master.\n\nRight. It is a little strange that pgstast_count_buffer_read()\nfinishes up in a different location than\npgBufferUsage.{local,shared}_blks_read++, but that's precisely due to\nthis pre-existing difference in accounting policy. 
That generally\nseems like POLA failure, so I've added a comment to help us remember\nabout that, for another day.\n\n> > + io_start = pgstat_prepare_io_time();\n> > + smgrreadv(bmr.smgr, forknum, io_first_block, io_pages, io_buffers_len);\n> > + pgstat_count_io_op_time(io_object, io_context, IOOP_READ, io_start, 1);\n>\n> I'd pass io_buffers_len as cnt to pgstat_count_io_op_time(). op_bytes\n> will be BLCKSZ and multiplying that by the number of reads should\n> produce the number of bytes read.\n\nOK, thanks, fixed.\n\n> > BufferDesc *\n> > LocalBufferAlloc(SMgrRelation smgr, ForkNumber forkNum, BlockNumber blockNum,\n> > - bool *foundPtr)\n> > + bool *foundPtr, bool *allocPtr)\n> > {\n> > BufferTag newTag; /* identity of requested block */\n> > LocalBufferLookupEnt *hresult;\n> > @@ -144,6 +144,7 @@ LocalBufferAlloc(SMgrRelation smgr, ForkNumber forkNum, BlockNumber blockNum,\n> > Assert(BufferTagsEqual(&bufHdr->tag, &newTag));\n> >\n> > *foundPtr = PinLocalBuffer(bufHdr, true);\n> > + *allocPtr = false;\n> ...\n>\n> I would prefer you use consistent naming for\n> allocPtr/allocatedPtr/allocated. I also think that all the functions\n> taking it as an output argument should explain what it is\n> (BufferAlloc()/LocalBufferAlloc(), etc). I found myself doing a bit of\n> digging around to figure it out. You have a nice comment about it above\n> PrepareReadBuffer(). I think you may need to resign yourself to\n> restating that bit (or some version of it) for all of the functions\n> taking it as an argument.\n\nI worked on addressing your complaints, but while doing so I had\nsecond thoughts about even having that argument. The need for it came\nup while working on streaming recovery. Without it, whenever there\nwere repeated references to the same block (as happens very often in\nthe WAL), it'd issue extra useless POSIX_FADV_WILLNEED syscalls, so I\nwanted to find a way to distinguish the *first* miss for a given\nblock, to be able to issue the advice just once before the actual read\nhappens.\n\nBut now I see that other users of streaming reads probably won't\nrepeatedly stream the same block, and I am not proposing streaming\nrecovery for PG 17. Time to simplify. I decided to kick allocatedPtr\nout to think about some more, but I really hope we'll be able to start\na real background I/O instead of issuing advice in PG 18 proposals,\nand that'll be negotiated via IO_IN_PROGRESS, so then we'd never need\nallocatedPtr.\n\nThus, removed.\n\n> > #ifndef BUFMGR_H\n> > #define BUFMGR_H\n> >\n> > +#include \"pgstat.h\"\n>\n> I don't know what we are supposed to do, but I would have included this\n> in bufmgr.c (where I actually needed it) instead of including it here.\n\nFixed.\n\n> > +#include \"port/pg_iovec.h\"\n> > #include \"storage/block.h\"\n> > #include \"storage/buf.h\"\n> > #include \"storage/bufpage.h\"\n> > @@ -47,6 +49,8 @@ typedef enum\n> > RBM_ZERO_AND_CLEANUP_LOCK, /* Like RBM_ZERO_AND_LOCK, but locks the page\n> > * in \"cleanup\" mode */\n> > RBM_ZERO_ON_ERROR, /* Read, but return an all-zeros page on error */\n>\n> > + RBM_WILL_ZERO, /* Don't read from disk, caller will call\n> > + * ZeroBuffer() */\n>\n> It's confusing that this (RBM_WILL_ZERO) is part of this commit since it\n> isn't used in this commit.\n\nYeah. 
Removed.\n\nBikeshedding call: I am open to better suggestions for the names\nPrepareReadBuffer() and CompleteReadBuffers(), they seem a little\ngrammatically clumsy.\n\nI now also have a much simplified version of the streaming read patch.\nThe short version is that it has all advanced features removed, so\nthat now it *only* does the clustering required to build up large\nCompleteReadBuffers() calls. That's short and sweet, and enough for\npg_prewarm to demonstrate 128KB reads, a good first step.\n\nThen WILLNEED advice, and then ramp-up are added as separate patches,\nfor easier review. I've got some more patches in development that\nwould re-add \"extended\" multi-relation mode with wider callback that\ncan also stream zeroed buffers, as required so far only by recovery --\nbut I can propose that later.\n\nOther improvements:\n\n* Melanie didn't like the overloaded term \"cluster\" (off-list\nfeedback). Now I use \"block range\" to describe a range of contiguous\nblocks (blocknum, nblocks).\n* KK didn't like \"prefetch\" being used with various meanings (off-list\nfeedback). Better words found.\n* pgsr_private might as well be void *; uintptr_t is theoretically\nmore general but doesn't seem to buy much for realistic use.\n* per_io_data might as well be called per_buffer_data (it's not per\n\"I/O\" and that isn't the level of abstraction for this API anyway\nwhich is about blocks and buffers).\n* I reordered some function arguments that jumped out after the above changes.\n* Once I started down the path of using a flags field to control\nvarious policy stuff as discussed up-thread, it started to seem\nclearer that callers probably shouldn't directly control I/O depth,\nwhich was all a bit ad-hoc and unfinished before. I think we'd likely\nwant that to be centralised. New idea: it should be enough to be able\nto specify policies as required now and in future with flags.\nThoughts?\n\nI've also included 4 of the WIP patches from earlier (based on an\nobsolete version of the vacuum thing, sorry I know you have a much\nbetter one now), which can be mostly ignored. Here I just wanted to\nshare working code to drive vectored reads with different access\npatterns, and also show the API interactions and how they might each\nset flag bits. For example, parallel seq scan uses\nPGSR_FLAG_SEQUENTIAL to insist that its scans are sequential despite\nappearances, pg_prewarm uses PGSR_FLAG_FULL to declare that it'll read\nthe whole relation so there is no point in ramping up, and the\nvacuum-related stuff uses PGSR_FLAG_MAINTENANCE to select tuning based\non maintenance_io_concurrency instead of effective_io_concurrency.",
"msg_date": "Wed, 10 Jan 2024 17:13:37 +1300",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Streaming I/O, vectored I/O (WIP)"
},
{
"msg_contents": "On 10/01/2024 06:13, Thomas Munro wrote:\n> Bikeshedding call: I am open to better suggestions for the names\n> PrepareReadBuffer() and CompleteReadBuffers(), they seem a little\n> grammatically clumsy.\n\nHow will these functions work in the brave new async I/O world? I assume \nPrepareReadBuffer() will initiate the async I/O, and \nCompleteReadBuffers() will wait for it to complete. How about \nStartReadBuffer() and WaitReadBuffer()? Or StartBufferRead() and \nWaitBufferRead()?\n\nAbout the signature of those functions: Does it make sense for \nCompleteReadBuffers() (or WaitReadBuffers()) function to take a vector \nof buffers? If StartReadBuffer() initiates the async I/O immediately, is \nthere any benefit to batching the waiting?\n\nIf StartReadBuffer() starts the async I/O, the idea that you can call \nZeroBuffer() instead of WaitReadBuffer() doesn't work. I think \nStartReadBuffer() needs to take ReadBufferMode, and do the zeroing for \nyou in RBM_ZERO_* modes.\n\n\nPutting all that together, I propose:\n\n/*\n * Initiate reading a block from disk to the buffer cache.\n *\n * XXX: Until we have async I/O, this just allocates the buffer in the \nbuffer\n * cache. The actual I/O happens in WaitReadBuffer().\n */\nBuffer\nStartReadBuffer(BufferManagerRelation bmr,\n\t\t\t\tForkNumber forkNum,\n\t\t\t\tBlockNumber blockNum,\n\t\t\t\tBufferAccessStrategy strategy,\n\t\t\t\tReadBufferMode mode,\n\t\t\t\tbool *foundPtr);\n\n/*\n * Wait for a read that was started earlier with StartReadBuffer() to \nfinish.\n *\n * XXX: Until we have async I/O, this is the function that actually \nperforms\n * the I/O. If multiple I/Os have been started with StartReadBuffer(), this\n * will try to perform all of them in one syscall. Subsequent calls to\n * WaitReadBuffer(), for those other buffers, will finish quickly.\n */\nvoid\nWaitReadBuffer(Buffer buf);\n\n\nI'm not sure how well this fits with the streaming read API. The \nstreaming read code performs grouping of adjacent blocks to one \nCompleteReadBuffers() call. If WaitReadBuffer() does the batching, \nthat's not really required. But does that make sense with async I/O? \nWith async I/O, will you need a vectorized version of StartReadBuffer() too?\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Wed, 10 Jan 2024 21:58:24 +0200",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Streaming I/O, vectored I/O (WIP)"
},
{
"msg_contents": "On Thu, Jan 11, 2024 at 8:58 AM Heikki Linnakangas <[email protected]> wrote:\n> On 10/01/2024 06:13, Thomas Munro wrote:\n> > Bikeshedding call: I am open to better suggestions for the names\n> > PrepareReadBuffer() and CompleteReadBuffers(), they seem a little\n> > grammatically clumsy.\n>\n> How will these functions work in the brave new async I/O world? I assume\n> PrepareReadBuffer() will initiate the async I/O, and\n> CompleteReadBuffers() will wait for it to complete. How about\n> StartReadBuffer() and WaitReadBuffer()? Or StartBufferRead() and\n> WaitBufferRead()?\n\nWhat we have imagined so far is that the asynchronous version would\nprobably have three steps, like this:\n\n * PrepareReadBuffer() -> pins one buffer, reports if found or IO/zeroing needed\n * StartReadBuffers() -> starts the I/O for n contiguous !found buffers\n * CompleteReadBuffers() -> waits, completes as necessary\n\nIn the proposed synchronous version, the middle step is missing, but\nstreaming_read.c directly calls smgrprefetch() instead. I thought\nabout shoving that inside a prosthetic StartReadBuffers() function,\nbut I backed out of simulating asynchronous I/O too fancifully. The\nstreaming read API is where we really want to stabilise a nice API, so\nwe can moves things around behind it if required.\n\nA bit of analysis of the one block -> nblocks change and the\nsynchronous -> asynchronous change:\n\nTwo things are new in a world with nblocks > 1. (1) We needed to be\nable to set BM_IO_IN_PROGRESS on more than one block at a time, but\ncommit 12f3867f already provided that, and (2) someone else might come\nalong and read a block in the middle of our range, effectively\nchopping our range into subranges. That's true also in master but\nwhen nblocks === 1 that's all-or-nothing, and now we have partial\ncases. In the proposed synchronous code, CompleteReadBuffers() claims\nas many contiguous BM_IO_IN_PROGRESS flags as it in the range, and\nthen loops process the rest, skipping over any blocks that are already\ndone. Further down in md.c, you might also cross a segment boundary.\nSo that's two different reasons while a single call to\nCompleteReadBuffers() might finish up generating zero or more than one\nI/O system call, though very often it's one.\n\nHmm, while spelling that out, I noticed an obvious problem and\nimprovement to make to that part of v5. If backend #1 is trying to\nread blocks 101..103 and acquires BM_IO_IN_PROGRESS for 101, but\nbackend #2 comes along and starts reading block 102 first, backend\n#1's StartBufferIO() call would wait for 102's I/O CV while it still\nholds BM_IO_IN_PROGRESS for block 101, potentially blocking a third\nbackend #3 that wants to read block 101 even though no I/O is in\nprogress for that block yet! At least that's deadlock free (because\nalways in block order), but it seems like undesirable lock chaining.\nHere is my proposed improvement: StartBufferIO() gains a nowait flag.\nFor the head block we wait, but while trying to build a larger range\nwe don't. We'll try 102 again in the next loop, with a wait. Here is\na small fixup for that.\n\nIn an asynchronous version, that BM_IO_IN_PROGRESS negotiation would\ntake place in StartReadBuffers() instead, which would be responsible\nfor kicking off asynchronous I/Os (= asking a background worker to\ncall pread(), or equivalent fancy kernel async APIs). One question is\nwhat it does if it finds a block in the middle that chops the read up,\nor for that matter a segment boundary. 
I don't think we have to\ndecide now, but the two options seem to be that it starts one single\nI/O and reports its size, making it the client's problem to call again\nwith the rest, or that it starts more than one I/O and they are\nsomehow chained together so that the caller doesn't know about that\nand can later wait for all of them to finish using just one <handle\nthing>.\n\n(The reason why I'm not 100% certain how it will look is that the\nreal, working code in the aio branch right now doesn't actually expose\na vector/nblocks bufmgr interface at all, yet. Andres's original\nprototype had a single-block Start*(), Complete*() design, but a lower\nlevel of the AIO system notices if pending read operations are\nadjacent and could be merged. While discussing all this we decided it\nwas a bit strange to have lower code deal with allocating, chaining\nand processing lots of separate I/O objects in shared memory, when\nhigher level code could often work in bigger ranges up front, and then\ninteract with the AIO subsystem with many fewer objects and steps.\nAlso, the present simple and lightweight synchronous proposal that\nlacks the whole subsystem that could do that by magic.)\n\n> About the signature of those functions: Does it make sense for\n> CompleteReadBuffers() (or WaitReadBuffers()) function to take a vector\n> of buffers? If StartReadBuffer() initiates the async I/O immediately, is\n> there any benefit to batching the waiting?\n>\n> If StartReadBuffer() starts the async I/O, the idea that you can call\n> ZeroBuffer() instead of WaitReadBuffer() doesn't work. I think\n> StartReadBuffer() needs to take ReadBufferMode, and do the zeroing for\n> you in RBM_ZERO_* modes.\n\nYeah, good thoughts, and topics that have occupied me for some time\nnow. I also thought that StartReadBuffer() should take\nReadBufferMode, but I came to the idea that it probably shouldn't like\nthis:\n\nI started working on all this by trying to implement the most\ncomplicated case I could imagine, streaming recovery, and then working\nback to the easy cases that just do scans with RBM_NORMAL. In\nrecovery, we can predict that a block will be zeroed using WAL flags,\nand pre-existing cross-checks at redo time that enforce that the flags\nand redo code definitely agree on that, but we can't predict which\nexact ReadBufferMode the redo code will use, RBM_ZERO_AND_LOCK or\nRBM_ZERO_AND_CLEANUP_LOCK (or mode=RBM_NORMAL and\nget_cleanup_lock=true, as the comment warns them not to, but I\ndigress).\n\nThat's OK, because we can't take locks while looking ahead in recovery\nanyway (the redo routine carefully controls lock order/protocol), so\nthe code to actually do the locking needs to be somewhere near the\noutput end of the stream when the redo code calls\nXLogReadBufferForRedoExtended(). But if you try to use RBM_XXX in\nthese interfaces, it begins to look pretty funny: the streaming\ncallback needs to be able to say which ReadBufferMode, but anywhere\nnear Prepare*(), Start*() or even Complete*() is too soon, so maybe we\nneed to invent a new value RBM_WILL_ZERO that doesn't yet say which of\nthe zero modes to use, and then the redo routine needs to pass in the\nRBM_ZERO_AND_{LOCK,CLEANUP_LOCK} value to\nXLogReadBufferForRedoExtended() and it does it in a separate step\nanyway, so we are ignoring ReadBufferMode. 
But that feels just wrong\n-- we'd be using RBM but implementing them only partially.\n\nAnother way to put it is that ReadBufferMode actually conflates a\nbunch of different behaviour that applies at different times that we\nare now separating, and recovery reveals this most clearly because it\ndoesn't have all the information needed while looking ahead. It might\nbe possible to shove more information in the WAL to fix the\ninformation problem, but it seemed more natural to me to separate the\naspects of ReadBufferMode, because that isn't the only problem:\ngroupwise-processing of lock doesn't even make sense.\n\nSo I teased a couple of those aspects out into separate flags, for example:\n\nThe streaming read interface has two variants: the \"simple\" implicit\nRBM_NORMAL, single relation, single fork version that is used in most\nclient examples and probably everything involving the executor, and\nthe \"extended\" version (like the earlier versions in this thread,\nremoved for now based on complaints that most early uses don't use it,\nwill bring it back separately later with streaming recovery patches).\nIn the extended version, the streaming callback can set *will_zero =\ntrue, which is about all the info that recovery can figure out from\nthe WAL anyway, and then XLogReadBufferForRedoExtended() will later\ncall ZeroBuffer() because at that time we have the ReadBufferMode.\n\nThe _ZERO_ON_ERROR aspect is a case where CompleteReadBuffers() is the\nright time and makes sense to process as a batch, so it becomes a\nflag.\n\n> Putting all that together, I propose:\n>\n> /*\n> * Initiate reading a block from disk to the buffer cache.\n> *\n> * XXX: Until we have async I/O, this just allocates the buffer in the\n> buffer\n> * cache. The actual I/O happens in WaitReadBuffer().\n> */\n> Buffer\n> StartReadBuffer(BufferManagerRelation bmr,\n> ForkNumber forkNum,\n> BlockNumber blockNum,\n> BufferAccessStrategy strategy,\n> ReadBufferMode mode,\n> bool *foundPtr);\n>\n> /*\n> * Wait for a read that was started earlier with StartReadBuffer() to\n> finish.\n> *\n> * XXX: Until we have async I/O, this is the function that actually\n> performs\n> * the I/O. If multiple I/Os have been started with StartReadBuffer(), this\n> * will try to perform all of them in one syscall. Subsequent calls to\n> * WaitReadBuffer(), for those other buffers, will finish quickly.\n> */\n> void\n> WaitReadBuffer(Buffer buf);\n\nI'm confused about where the extra state lives that would allow the\ncommunication required to build a larger I/O. In the AIO branch, it\ndoes look a little more like that, but there is more magic state and\nmachinery hiding behind the curtain: the backend's pending I/O list\nbuilds up a chain of I/Os, and when you try to wait, if it hasn't\nalready been submitted to the kernel/bgworkers yet it will be, and\nbefore that merging will happen. So you get bigger I/Os without\nhaving to say so explicitly.\n\nFor this synchronous version (and hopefully soon a more efficient\nimproved version in the AIO branch), we want to take advantage of the\nclient's pre-existing and more cheaply obtained knowledge of ranges.\n\n> I'm not sure how well this fits with the streaming read API. The\n> streaming read code performs grouping of adjacent blocks to one\n> CompleteReadBuffers() call. If WaitReadBuffer() does the batching,\n> that's not really required. But does that make sense with async I/O?\n\n> With async I/O, will you need a vectorized version of StartReadBuffer() too?\n\nI think so, yes.",
"msg_date": "Thu, 11 Jan 2024 16:19:48 +1300",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Streaming I/O, vectored I/O (WIP)"
},
{
"msg_contents": "On 11/01/2024 05:19, Thomas Munro wrote:\n> On Thu, Jan 11, 2024 at 8:58 AM Heikki Linnakangas <[email protected]> wrote:\n>> On 10/01/2024 06:13, Thomas Munro wrote:\n>>> Bikeshedding call: I am open to better suggestions for the names\n>>> PrepareReadBuffer() and CompleteReadBuffers(), they seem a little\n>>> grammatically clumsy.\n>>\n>> How will these functions work in the brave new async I/O world? I assume\n>> PrepareReadBuffer() will initiate the async I/O, and\n>> CompleteReadBuffers() will wait for it to complete. How about\n>> StartReadBuffer() and WaitReadBuffer()? Or StartBufferRead() and\n>> WaitBufferRead()?\n> \n> What we have imagined so far is that the asynchronous version would\n> probably have three steps, like this:\n> \n> * PrepareReadBuffer() -> pins one buffer, reports if found or IO/zeroing needed\n> * StartReadBuffers() -> starts the I/O for n contiguous !found buffers\n> * CompleteReadBuffers() -> waits, completes as necessary\n\nOk. It feels surprising to have three steps. I understand that you need \ntwo steps, one to start the I/O and another to wait for them to finish, \nbut why do you need separate Prepare and Start steps? What can you do in \nbetween them? (You explained that. I'm just saying that that's my \ninitial reaction when seeing that API. It is surprising.)\n\n>> If StartReadBuffer() starts the async I/O, the idea that you can call\n>> ZeroBuffer() instead of WaitReadBuffer() doesn't work. I think\n>> StartReadBuffer() needs to take ReadBufferMode, and do the zeroing for\n>> you in RBM_ZERO_* modes.\n> \n> Yeah, good thoughts, and topics that have occupied me for some time\n> now. I also thought that StartReadBuffer() should take\n> ReadBufferMode, but I came to the idea that it probably shouldn't like\n> this:\n> \n> I started working on all this by trying to implement the most\n> complicated case I could imagine, streaming recovery, and then working\n> back to the easy cases that just do scans with RBM_NORMAL. In\n> recovery, we can predict that a block will be zeroed using WAL flags,\n> and pre-existing cross-checks at redo time that enforce that the flags\n> and redo code definitely agree on that, but we can't predict which\n> exact ReadBufferMode the redo code will use, RBM_ZERO_AND_LOCK or\n> RBM_ZERO_AND_CLEANUP_LOCK (or mode=RBM_NORMAL and\n> get_cleanup_lock=true, as the comment warns them not to, but I\n> digress).\n> \n> That's OK, because we can't take locks while looking ahead in recovery\n> anyway (the redo routine carefully controls lock order/protocol), so\n> the code to actually do the locking needs to be somewhere near the\n> output end of the stream when the redo code calls\n> XLogReadBufferForRedoExtended(). But if you try to use RBM_XXX in\n> these interfaces, it begins to look pretty funny: the streaming\n> callback needs to be able to say which ReadBufferMode, but anywhere\n> near Prepare*(), Start*() or even Complete*() is too soon, so maybe we\n> need to invent a new value RBM_WILL_ZERO that doesn't yet say which of\n> the zero modes to use, and then the redo routine needs to pass in the\n> RBM_ZERO_AND_{LOCK,CLEANUP_LOCK} value to\n> XLogReadBufferForRedoExtended() and it does it in a separate step\n> anyway, so we are ignoring ReadBufferMode. But that feels just wrong\n> -- we'd be using RBM but implementing them only partially.\n\nI see. When you're about to zero the page, there's not much point in \nsplitting the operation into Prepare/Start/Complete stages anyway. \nYou're not actually doing any I/O. 
Perhaps it's best to have a separate \n\"Buffer ZeroBuffer(Relation, ForkNumber, BlockNumber, lockmode)\" \nfunction that does the same as \nReadBuffer(RBM_ZERO_AND_[LOCK|CLEANUP_LOCK]) today.\n\n> The _ZERO_ON_ERROR aspect is a case where CompleteReadBuffers() is the\n> right time and makes sense to process as a batch, so it becomes a\n> flag.\n\n+1\n\n>> Putting all that together, I propose:\n>>\n>> /*\n>> * Initiate reading a block from disk to the buffer cache.\n>> *\n>> * XXX: Until we have async I/O, this just allocates the buffer in the\n>> buffer\n>> * cache. The actual I/O happens in WaitReadBuffer().\n>> */\n>> Buffer\n>> StartReadBuffer(BufferManagerRelation bmr,\n>> ForkNumber forkNum,\n>> BlockNumber blockNum,\n>> BufferAccessStrategy strategy,\n>> ReadBufferMode mode,\n>> bool *foundPtr);\n>>\n>> /*\n>> * Wait for a read that was started earlier with StartReadBuffer() to\n>> finish.\n>> *\n>> * XXX: Until we have async I/O, this is the function that actually\n>> performs\n>> * the I/O. If multiple I/Os have been started with StartReadBuffer(), this\n>> * will try to perform all of them in one syscall. Subsequent calls to\n>> * WaitReadBuffer(), for those other buffers, will finish quickly.\n>> */\n>> void\n>> WaitReadBuffer(Buffer buf);\n> \n> I'm confused about where the extra state lives that would allow the\n> communication required to build a larger I/O. In the AIO branch, it\n> does look a little more like that, but there is more magic state and\n> machinery hiding behind the curtain: the backend's pending I/O list\n> builds up a chain of I/Os, and when you try to wait, if it hasn't\n> already been submitted to the kernel/bgworkers yet it will be, and\n> before that merging will happen. So you get bigger I/Os without\n> having to say so explicitly.\n\nYeah, I was thinking that there would be a global variable that holds a \nlist of operations started with StartReadBuffer().\n\n> For this synchronous version (and hopefully soon a more efficient\n> improved version in the AIO branch), we want to take advantage of the\n> client's pre-existing and more cheaply obtained knowledge of ranges.\n\nOk.\n\n> In an asynchronous version, that BM_IO_IN_PROGRESS negotiation would\n> take place in StartReadBuffers() instead, which would be responsible\n> for kicking off asynchronous I/Os (= asking a background worker to\n> call pread(), or equivalent fancy kernel async APIs). One question is\n> what it does if it finds a block in the middle that chops the read up,\n> or for that matter a segment boundary. I don't think we have to\n> decide now, but the two options seem to be that it starts one single\n> I/O and reports its size, making it the client's problem to call again\n> with the rest, or that it starts more than one I/O and they are\n> somehow chained together so that the caller doesn't know about that\n> and can later wait for all of them to finish using just one <handle\n> thing>.\n> \n> (The reason why I'm not 100% certain how it will look is that the\n> real, working code in the aio branch right now doesn't actually expose\n> a vector/nblocks bufmgr interface at all, yet. Andres's original\n> prototype had a single-block Start*(), Complete*() design, but a lower\n> level of the AIO system notices if pending read operations are\n> adjacent and could be merged. 
While discussing all this we decided it\n> was a bit strange to have lower code deal with allocating, chaining\n> and processing lots of separate I/O objects in shared memory, when\n> higher level code could often work in bigger ranges up front, and then\n> interact with the AIO subsystem with many fewer objects and steps.\n> Also, the present simple and lightweight synchronous proposal that\n> lacks the whole subsystem that could do that by magic.)\n\nHmm, let's sketch out what this would look like with the approach that \nyou always start one I/O and report the size:\n\n/*\n * Initiate reading a range of blocks from disk to the buffer cache.\n *\n * If the pages were already found in the buffer cache, returns true.\n * Otherwise false, and the caller must call WaitReadBufferRange() to\n * wait for the I/O to finish, before accessing the buffers.\n *\n * 'buffers' is a caller-supplied array large enough to hold (endBlk -\n * startBlk) buffers. It is filled with the buffers that the pages are\n * read into.\n *\n * This always starts a read of at least one block, and tries to\n * initiate one read I/O for the whole range if possible. But if the\n * read cannot be performed as a single I/O, a partial read is started,\n * and *endBlk is updated to reflect the range for which the read was\n * started. The caller can make another call to read the rest of the\n * range. A partial read can occur if some, but not all, of the pages\n * are already in the buffer cache, or because the range crosses a\n * segment boundary.\n *\n * XXX: Until we have async I/O, this just allocates the buffers in the\n * buffer cache. And perhaps calls smgrprefetch(). The actual I/O\n * happens in WaitReadBufferRange().\n */\nbool\nStartReadBufferRange(BufferManagerRelation bmr,\n ForkNumber forkNum,\n BlockNumber startBlk,\n BlockNumber *endBlk,\n BufferAccessStrategy strategy,\n Buffer *buffers);\n\n/*\n * Wait for a read that was started earlier with StartReadBufferRange()\n * to finish.\n *\n * XXX: Until we have async I/O, this is the function that actually\n * performs\n * the I/O. StartReadBufferRange already checked that the pages can be\n * read in one preadv() syscall. However, it's possible that another\n * backend performed the read for some of the pages in between. In that\n * case this will perform multiple syscalls, after all.\n */\nvoid\nWaitReadBufferRange(Buffer *buffers, int nbuffers, bool zero_on_error);\n\n/*\n * Allocates a buffer for the given page in the buffer cache, and locks\n * the page. No I/O is initiated. The caller must initialize it and mark\n * the buffer dirty before releasing the lock.\n *\n * This is equivalent to ReadBuffer(RBM_ZERO_AND_LOCK) or\n * ReadBuffer(RBM_ZERO_AND_CLEANUP_LOCK).\n */\nBuffer\nZeroBuffer(BufferManagerRelation bmr,\n\t\t ForkNumber forkNum,\n\t\t BlockNumber blockNum,\n\t\t BufferAccessStrategy strategy,\n\t\t bool cleanup_lock);\n\nThis range-oriented API fits the callers pretty well: the streaming read \nAPI works with block ranges already. There is no need for separate \nPrepare and Start steps.\n\nOne weakness is that if StartReadBufferRange() finds that the range is \n\"chopped up\", it needs to return and throw away the work it had to do to \nlook up the next buffer. So in the extreme case that every other block \nin the range is in the buffer cache, each call would look up two buffers \nin the buffer cache, startBlk and startBlk + 1, but only return one \nbuffer to the caller.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Thu, 11 Jan 2024 16:31:22 +0200",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Streaming I/O, vectored I/O (WIP)"
},
{
"msg_contents": "On Fri, Jan 12, 2024 at 3:31 AM Heikki Linnakangas <[email protected]> wrote:\n> Ok. It feels surprising to have three steps. I understand that you need\n> two steps, one to start the I/O and another to wait for them to finish,\n> but why do you need separate Prepare and Start steps? What can you do in\n> between them? (You explained that. I'm just saying that that's my\n> initial reaction when seeing that API. It is surprising.)\n\nActually I don't think I explained that very well. First, some more\ndetail about how a two-step version would work:\n\n* we only want to start one I/O in an StartReadBuffers() call, because\notherwise it is hard/impossible for the caller to cap concurrency I/O\n* therefore StartReadBuffers() can handle sequences matching /^H*I*H*/\n(\"H\" = hit, \"I\" = miss, I/O) in one call\n* in the asynchronous version, \"I\" in that pattern means we got\nBM_IO_IN_PROGRESS\n* in the synchronous version, \"I\" means that it's not valid, not\nBM_IN_IN_PROGRESS, but we won't actually try to get BM_IO_IN_PROGRESS\nuntil the later Complete/Wait call (and then the answer might chagne,\nbut we'll just deal with that by looping in the synchronous version)\n* streaming_read.c has to deal with buffering up work that\nStartReadBuffers() didn't accept\n* that's actually quite easy, you just use the rest to create a new\nrange in the next slot\n\nPreviously I thought the requirement to deal with buffering future\nstuff that StartReadBuffers() couldn't accept yet was a pain, and life\nbecame so much simpler once I deleted all that and exposed\nPrepareReadBuffer() to the calling code. Perhaps I just hadn't done a\ngood enough job of that.\n\nThe other disadvantage you reminded me of was the duplicate buffer\nlookup in certain unlucky patterns, which I had forgotten about in my\nprevious email. But I guess it's not fatal to the idea and there is a\npotential partial mitigation. (See below).\n\nA third thing was the requirement for communication between\nStartReadBuffers() and CompleteReadBuffers() which I originally had an\n\"opaque\" object that the caller has to keep around that held private\nstate. It seemed nice to go back to talking just about buffer\nnumbers, but that's not really an argument for anything...\n\nOK, I'm going to try the two-step version (again) with interfaces\nalong the lines you sketched out... more soon.\n\n> I see. When you're about to zero the page, there's not much point in\n> splitting the operation into Prepare/Start/Complete stages anyway.\n> You're not actually doing any I/O. Perhaps it's best to have a separate\n> \"Buffer ZeroBuffer(Relation, ForkNumber, BlockNumber, lockmode)\"\n> function that does the same as\n> ReadBuffer(RBM_ZERO_AND_[LOCK|CLEANUP_LOCK]) today.\n\nThat makes sense, but... hmm, sometimes just allocating a page\ngenerates I/O if it has to evict a dirty buffer. Nothing in this code\ndoes anything fancy about that, but imagine some hypothetical future\nthing that manages to do that asynchronously -- then we might want to\ntake advantage of the ability to stream even a zeroed page, ie doing\nsomething ahead of time? Just a thought for another day, and perhaps\nthat is just an argument for including it in the streaming read API,\nbut it doesn't mean that the bufmgr.c API can't be as you say.\n\n> One weakness is that if StartReadBufferRange() finds that the range is\n> \"chopped up\", it needs to return and throw away the work it had to do to\n> look up the next buffer. 
So in the extreme case that every other block\n> in the range is in the buffer cache, each call would look up two buffers\n> in the buffer cache, startBlk and startBlk + 1, but only return one\n> buffer to the caller.\n\nYeah, right. This was one of the observations that influenced my\nPrepareReadBuffer() three-step thing that I'd forgotten. To spell\nthat out with an example, suppose the buffer pool contains every odd\nnumbered block. Successive StartReadBuffers() calls would process\n\"HMHm\", \"MHm\", \"MHm\"... where \"m\" represents a miss that we can't do\nanything with for a block we'll look up in the buffer pool again in\nthe next call. With the PrepareReadBuffer() design, that miss just\nstarts a new range and we don't have to look it up again. Hmm, I\nsuppose that could be mitigated somewhat with ReadRecentBuffer() if we\ncan find somewhere decent to store it.\n\nBTW it was while thinking about and testing cases like that that I\nfound Palak Chaturvedi's https://commitfest.postgresql.org/46/4426/\nextremely useful. It can kick out every second page or any other\nrange-chopping scenario you can express in a WHERE clause. I would\nquite like to get that tool into the tree...\n\n\n",
"msg_date": "Fri, 12 Jan 2024 12:32:21 +1300",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Streaming I/O, vectored I/O (WIP)"
},
{
"msg_contents": "Hi,\n\nThanks for working on this!\n\nOn Wed, 10 Jan 2024 at 07:14, Thomas Munro <[email protected]> wrote:\n\n> Thanks! I committed the patches up as far as smgr.c before the\n> holidays. The next thing to commit would be the bufmgr.c one for\n> vectored ReadBuffer(), v5-0001. Here's my response to your review of\n> that, which triggered quite a few changes.\n>\n> See also new version of the streaming_read.c patch, with change list\n> at end. (I'll talk about v5-0002, the SMgrRelation lifetime one, over\n> on the separate thread about that where Heikki posted a better\n> version.)\n\nI have a couple of comments / questions.\n\n0001-Provide-vectored-variant-of-ReadBuffer:\n\n- Do we need to pass the hit variable to ReadBuffer_common()? I think\nit can be just declared in the ReadBuffer_common() now.\n\n\n0003-Provide-API-for-streaming-reads-of-relations:\n\n- Do we need to re-think about getting a victim buffer logic?\nStrategyGetBuffer() function errors if it can not find any unpinned\nbuffers, this can be more common in the async world since we pin\nbuffers before completing the read (while looking ahead).\n\n- If the returned block from the callback is an invalid block,\npg_streaming_read_look_ahead() sets pgsr->finished = true. Could there\nbe cases like the returned block being an invalid block but we should\ncontinue to read after this invalid block?\n\n- max_pinned_buffers and pinned_buffers_trigger variables are set in\nthe initialization part (in the\npg_streaming_read_buffer_alloc_internal() function) then they do not\nchange. In some cases there could be no acquirable buffers to pin\nwhile initializing the pgsr (LimitAdditionalPins() set\nmax_pinned_buffers to 1) but while the read is continuing there could\nbe chances to create larger reads (other consecutive reads are\nfinished while this read is continuing). Do you think that trying to\nreset max_pinned_buffers and pinned_buffers_trigger to have higher\nvalues after the initialization to have larger reads make sense?\n\n+ /* Is there a head range that we can't extend? */\n+ head_range = &pgsr->ranges[pgsr->head];\n+ if (head_range->nblocks > 0 &&\n+ (!need_complete ||\n+ !head_range->need_complete ||\n+ head_range->blocknum + head_range->nblocks != blocknum))\n+ {\n+ /* Yes, time to start building a new one. */\n+ head_range = pg_streaming_read_new_range(pgsr);\n\n- I think if both need_complete and head_range->need_complete are\nfalse, we can extend the head range regardless of the consecutiveness\nof the blocks.\n\n\n0006-Allow-streaming-reads-to-ramp-up-in-size:\n\n- ramp_up_pin_limit variable is declared as an int but we do not check\nthe overflow while doubling it. This could be a problem in longer\nreads.\n\n-- \nRegards,\nNazir Bilal Yavuz\nMicrosoft\n\n\n",
"msg_date": "Wed, 7 Feb 2024 13:54:26 +0300",
"msg_from": "Nazir Bilal Yavuz <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Streaming I/O, vectored I/O (WIP)"
},
{
"msg_contents": "On Wed, Feb 7, 2024 at 11:54 PM Nazir Bilal Yavuz <[email protected]> wrote:\n> 0001-Provide-vectored-variant-of-ReadBuffer:\n>\n> - Do we need to pass the hit variable to ReadBuffer_common()? I think\n> it can be just declared in the ReadBuffer_common() now.\n\nRight, thanks! Done, in the version I'll post shortly.\n\n> 0003-Provide-API-for-streaming-reads-of-relations:\n>\n> - Do we need to re-think about getting a victim buffer logic?\n> StrategyGetBuffer() function errors if it can not find any unpinned\n> buffers, this can be more common in the async world since we pin\n> buffers before completing the read (while looking ahead).\n\nHmm, well that is what the pin limit machinery is supposed to be the\nsolution to. It has always been possible to see that error if\nshared_buffers is too small, so that your backends can't pin what they\nneed to make progress because there are too many other backends, the\nwhole buffer pool is pinned and there is nothing available to\nsteal/evict. Here, sure, we pin more stuff per backend, but not more\nthan a \"fair share\", that is, Buffers / max backends, so it's not any\nworse, is it? Well maybe it's marginally worse in some case, for\nexample if a query that uses many streams has one pinned buffer per\nstream (which we always allow) where before we'd have acquired and\nreleased pins in a slightly different sequence or whatever, but there\nis always going to be a minimum shared_buffers that will work at all\nfor a given workload and we aren't changing it by much if at all here.\nIf you're anywhere near that limit, your performance must be so bad\nthat it'd only be a toy setting anyway. Does that sound reasonable?\n\nNote that this isn't the first to use multi-pin logic or that limit\nmechanism: that was the new extension code that shipped in 16. This\nwill do that more often, though.\n\n> - If the returned block from the callback is an invalid block,\n> pg_streaming_read_look_ahead() sets pgsr->finished = true. Could there\n> be cases like the returned block being an invalid block but we should\n> continue to read after this invalid block?\n\nYeah, I think there will be, and I think we should do it with some\nkind of reset/restart function. I don't think we need it for the\ncurrent users so I haven't included it yet (there is a maybe-related\ndiscussion about reset for another reasons, I think Melanie has an\nidea about that), but I think something like that will useful for\nfuture stuff like streaming recovery, where you can run out of WAL to\nread but more will come via the network soon.\n\n> - max_pinned_buffers and pinned_buffers_trigger variables are set in\n> the initialization part (in the\n> pg_streaming_read_buffer_alloc_internal() function) then they do not\n> change. In some cases there could be no acquirable buffers to pin\n> while initializing the pgsr (LimitAdditionalPins() set\n> max_pinned_buffers to 1) but while the read is continuing there could\n> be chances to create larger reads (other consecutive reads are\n> finished while this read is continuing). Do you think that trying to\n> reset max_pinned_buffers and pinned_buffers_trigger to have higher\n> values after the initialization to have larger reads make sense?\n\nThat sounds hard! 
You're right that in the execution of a query there\nmight well be cases like that (inner and outer scan of a hash join\ndon't actually run at the same time, likewise for various other plan\nshapes), and something that would magically and dynamically balance\nresource usage might be ideal, but I don't know where to begin.\nConcretely, as long as your buffer pool is measured in gigabytes and\nyour max backends is measured in hundreds, the per backend pin limit\nshould actually be fairly hard to hit anyway, as it would be in the\nthousands. So I don't think it is as important as other resource\nusage balance problems that we also don't attempt (memory, CPU, I/O\nbandwidth).\n\n> + /* Is there a head range that we can't extend? */\n> + head_range = &pgsr->ranges[pgsr->head];\n> + if (head_range->nblocks > 0 &&\n> + (!need_complete ||\n> + !head_range->need_complete ||\n> + head_range->blocknum + head_range->nblocks != blocknum))\n> + {\n> + /* Yes, time to start building a new one. */\n> + head_range = pg_streaming_read_new_range(pgsr);\n>\n> - I think if both need_complete and head_range->need_complete are\n> false, we can extend the head range regardless of the consecutiveness\n> of the blocks.\n\nYeah, I think we can experiment with ideas like that. Not done yet\nbut I'm thinking about it -- more shortly.\n\n> 0006-Allow-streaming-reads-to-ramp-up-in-size:\n>\n> - ramp_up_pin_limit variable is declared as an int but we do not check\n> the overflow while doubling it. This could be a problem in longer\n> reads.\n\nBut it can't get above very high, because eventually it exceeds\nmax_pinned_buffers, which is anchored to the ground by various small\nlimits.\n\n\n",
"msg_date": "Tue, 27 Feb 2024 16:21:20 +1300",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Streaming I/O, vectored I/O (WIP)"
},
{
"msg_contents": "On Fri, Jan 12, 2024 at 12:32 PM Thomas Munro <[email protected]> wrote:\n> On Fri, Jan 12, 2024 at 3:31 AM Heikki Linnakangas <[email protected]> wrote:\n> > Ok. It feels surprising to have three steps. I understand that you need\n> > two steps, one to start the I/O and another to wait for them to finish,\n> > but why do you need separate Prepare and Start steps? What can you do in\n> > between them? (You explained that. I'm just saying that that's my\n> > initial reaction when seeing that API. It is surprising.)\n[...]\n> OK, I'm going to try the two-step version (again) with interfaces\n> along the lines you sketched out... more soon.\n\nHere's the 2 step version. The streaming_read.c API is unchanged, but\nthe bugmfr.c API now has only the following extra functions:\n\n bool StartReadBuffers(..., int *nblocks, ..., ReadBuffersOperation *op)\n WaitReadBuffers(ReadBuffersOperation *op)\n\nThat is, the PrepareReadBuffer() step is gone.\n\nStartReadBuffers() updates *nblocks to the number actually processed,\nwhich is always at least one. If it returns true, then you must call\nWaitReadBuffers(). When it finds a 'hit' (doesn't need I/O), then\nthat one final (or only) buffer is processed, but no more.\nStartReadBuffers() always conceptually starts 0 or 1 I/Os. Example:\nif you ask for 16 blocks, and it finds two misses followed by a hit,\nit'll set *nblocks = 3, smgrprefetch(2 blocks), and smgrreadv(2\nblocks) them in WaitReadBuffer(). The caller can't really tell that\nthe third block was a hit. The only case it can distinguish is if the\nfirst one was a hit, and then it returns false and sets *nblocks = 1.\n\nThis arrangement, where the results include the 'boundary' block that\nends the readable range, avoids the double-lookup problem we discussed\nupthread. I think it should probably also be able to handle multiple\nconsecutive 'hits' at the start of a sequence, but in this version I\nkept it simpler. It couldn't ever handle more than one after an I/O\nrange though, because it can't guess if the one after will be a hit or\na miss. If it turned out to be a miss, we don't want to start a\nsecond I/O, so unless we decide that we're happy unpinning and\nre-looking-up next time, it's better to give up then. Hence the idea\nof including the hit as a bonus block on the end.\n\nIt took me a long time but I eventually worked my way around to\npreferring this way over the 3 step version. streaming_read.c now has\nto do a bit more work including sometimes 'ungetting' a block (ie\ndeferring one that the callback has requested until next time), to\nresolve some circularities that come up with flow control. But I\nsuspect you'd probably finish up having to deal with 'short' writes\nanyway, because in the asynchronous future, in a three-step version,\nthe StartReadBuffers() (as 2nd step) might also be short when it fails\nto get enough BM_IO_IN_PROGRESS flags, so you have to deal with some\nversion of these problems anyway. Thoughts?\n\nI am still thinking about how to improve the coding in\nstreaming_read.c, ie to simplify and beautify the main control loop\nand improve the flow control logic. And looking for interesting test\ncases to hit various conditions in it and try to break it. And trying\nto figure out how this read-coalescing and parallel seq scan's block\nallocator might interfere with each other to produce non-idea patterns\nof system calls.\n\nHere are some example strace results generated by a couple of simple\nqueries. 
See CF #4426 for pg_buffercache_invalidate().\n\n=== Sequential scan example ===\n\ncreate table big as select generate_series(1, 10000000);\n\nselect count(*) from big;\n\npread64(81, ...) = 8192 <-- starts small\npread64(81, ...) = 16384\npread64(81, ...) = 32768\npread64(81, ...) = 65536\npread64(81, ...) = 131072 <-- fully ramped up size reached\npreadv(81, ...) = 131072 <-- more Vs seen as buffers fill up/fragments\npreadv(81, ...) = 131072\n...repeating...\npreadv(81, ...) = 131072\npreadv(81, ...) = 131072\npread64(81, ...) = 8192 <-- end fragment\n\ncreate table small as select generate_series(1, 100000);\n\nselect bool_and(pg_buffercache_invalidate(bufferid))\n from pg_buffercache\n where relfilenode = pg_relation_filenode('small')\n and relblocknumber % 3 != 0; -- <-- kick out every 3rd block\n\nselect count(*) from small;\n\npreadv(88, ...) = 16384 <-- just the 2-block fragments we need to load\npreadv(88, ...) = 16384\npreadv(88, ...) = 16384\n\n=== Bitmap heapscan example ===\n\ncreate table heap (i int primary key);\ninsert into heap select generate_series(1, 1000000);\n\nselect bool_and(pg_buffercache_invalidate(bufferid))\n from pg_buffercache\nwhere relfilenode = pg_relation_filenode('heap');\n\nselect count(i) from heap where i in (10, 1000, 10000, 100000) or i in\n(20, 200, 2000, 200000);\n\npread64(75, ..., 8192, 0) = 8192\nfadvise64(75, 32768, 8192, POSIX_FADV_WILLNEED) = 0\nfadvise64(75, 65536, 8192, POSIX_FADV_WILLNEED) = 0\npread64(75, ..., 8192, 32768) = 8192\npread64(75, ..., 8192, 65536) = 8192\nfadvise64(75, 360448, 8192, POSIX_FADV_WILLNEED) = 0\nfadvise64(75, 3620864, 8192, POSIX_FADV_WILLNEED) = 0\nfadvise64(75, 7241728, 8192, POSIX_FADV_WILLNEED) = 0\npread64(75, ..., 8192, 360448) = 8192\npread64(75, ..., 8192, 3620864) = 8192\npread64(75, ..., 8192, 7241728) = 8192",
"msg_date": "Tue, 27 Feb 2024 16:54:36 +1300",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Streaming I/O, vectored I/O (WIP)"
},
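To make the two-step flow described in the message above concrete, here is a minimal caller sketch in C. The thread only shows abbreviated signatures, so the argument order, the relation/fork parameters and the helper name below are assumptions for illustration; only the *nblocks in/out convention, the boolean return value and the StartReadBuffers()/WaitReadBuffers() pairing come from the description above.

    /*
     * Hypothetical caller of the two-step API sketched above.  The exact
     * parameter list is abbreviated in the thread, so the arguments shown
     * here are assumptions for illustration only.
     */
    static void
    read_block_range(Relation rel, ForkNumber forknum,
                     BlockNumber first, int count)
    {
        BlockNumber blocknum = first;

        while (count > 0)
        {
            Buffer      buffers[16];    /* up to MAX_BUFFERS_PER_TRANSFER */
            int         nblocks = Min(count, 16);  /* in: wanted, out: processed */
            ReadBuffersOperation op;

            /* Conceptually starts 0 or 1 I/Os and adjusts nblocks (always >= 1). */
            if (StartReadBuffers(rel, buffers, forknum, blocknum,
                                 &nblocks, NULL, 0, &op))
                WaitReadBuffers(&op);   /* must be called if an I/O was started */

            /* ... use, then release, buffers[0 .. nblocks - 1] ... */

            blocknum += nblocks;
            count -= nblocks;
        }
    }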
{
"msg_contents": "On Tue, Feb 27, 2024 at 9:25 AM Thomas Munro <[email protected]> wrote:\n> Here's the 2 step version. The streaming_read.c API is unchanged, but\n> the bugmfr.c API now has only the following extra functions:\n>\n> bool StartReadBuffers(..., int *nblocks, ..., ReadBuffersOperation *op)\n> WaitReadBuffers(ReadBuffersOperation *op)\n\nI wonder if there are semi-standard names that people use for this\nkind of API. Somehow I like \"start\" and \"finish\" or \"start\" and\n\"complete\" better than \"start\" and \"wait\". But I don't really know\nwhat's best. If there's a usual practice, it'd be good to adhere to\nit.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 27 Feb 2024 09:32:48 +0530",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Streaming I/O, vectored I/O (WIP)"
},
{
"msg_contents": "On Tue, Feb 27, 2024 at 5:03 PM Robert Haas <[email protected]> wrote:\n> On Tue, Feb 27, 2024 at 9:25 AM Thomas Munro <[email protected]> wrote:\n> > Here's the 2 step version. The streaming_read.c API is unchanged, but\n> > the bugmfr.c API now has only the following extra functions:\n> >\n> > bool StartReadBuffers(..., int *nblocks, ..., ReadBuffersOperation *op)\n> > WaitReadBuffers(ReadBuffersOperation *op)\n>\n> I wonder if there are semi-standard names that people use for this\n> kind of API. Somehow I like \"start\" and \"finish\" or \"start\" and\n> \"complete\" better than \"start\" and \"wait\". But I don't really know\n> what's best. If there's a usual practice, it'd be good to adhere to\n> it.\n\nI think \"complete\" has a subtly different meaning, which is why I\nliked Heikki's suggestion. One way to look at it is that only one\nbackend can \"complete\" a read, but any number of backends can \"wait\".\nBut I don't have strong views on this. It feels like this API is\nrelatively difficult to use directly anyway, so almost all users will\ngo through the streaming read API.\n\nHere's a new version with two main improvements. (Note that I'm only\ntalking about 0001-0003 here, the rest are useful for testing but are\njust outdated versions of patches that have their own threads.)\n\n1. Melanie discovered small regressions in all-cached simple scans.\nHere's a better look-ahead distance control algorithm that addresses\nthat. First let me state the updated goals of the algorithm:\n\n A) for all-hit scans, pin very few buffers, since that can't help if\nwe're not doing any I/O\n B) for all-miss sequential scans, pin only as many buffers as it\ntakes to build full-sized I/Os, since fully sequential scans are left\nto the OS to optimise for now (ie no \"advice\")\n C) for all-miss random scans, pin as many buffers as it takes to\nreach our I/O concurrency level\n\nIn all cases, respect the per-backend pin limit as a last resort limit\n(roughly NBuffers / max_connections), but we're now actively trying\n*not* to use so many.\n\nFor patterns in between the A, B, C extremes, do something in between.\nThe way the new algorithm attempts to classify the scan adaptively\nover time is as follows:\n\n * look ahead distance starts out at one block (behaviour A)\n * every time we start an I/O, we double the distance until we reach\nthe max pin limit (behaviour C), or if we're not issuing \"advice\"\nbecause sequential access is detected, until we reach the\nMAX_TRANSFER_BUFFERS (behaviour B)\n * every time we get a hit, we decrement the distance by one (we move\nslowly back to behaviour A)\n\nQuery to observe a system transitioning A->B->A->B, when doing a full\nscan that has ~50 contiguous blocks already in shared buffers\nsomewhere in the middle:\n\ncreate extension pg_prewarm;\ncreate extension pg_buffercache;\nset max_parallel_workers_per_gather = 0;\n\ncreate table t (i int);\ninsert into t select generate_series(1, 100000);\nselect pg_prewarm('t');\nselect bool_and(pg_buffercache_invalidate(bufferid))\n from pg_buffercache\n where relfilenode = pg_relation_filenode('t')\n and (relblocknumber between 0 and 100 or relblocknumber > 150);\n\nselect count(*) from t;\n\npread(31,...) = 8192 (0x2000) <--- start small (A)\npreadv(31,...) = 16384 (0x4000) <--- ramp up...\npreadv(31,...) = 32768 (0x8000)\npreadv(31,...) = 65536 (0x10000)\npreadv(31,...) = 131072 (0x20000) <--- full size (B)\npreadv(31,...) = 131072 (0x20000)\n...\npreadv(31,...) = 131072 (0x20000)\npreadv(31,...) 
= 49152 (0xc000) <--- end of misses, decay to A\npread(31,...) = 8192 (0x2000) <--- start small again (A)\npreadv(31,...) = 16384 (0x4000)\npreadv(31,...) = 32768 (0x8000)\npreadv(31,...) = 65536 (0x10000)\npreadv(31,...) = 131072 (0x20000) <--- full size (B)\npreadv(31,...) = 131072 (0x20000)\n...\npreadv(31,...) = 131072 (0x20000)\npreadv(31,...) = 122880 (0x1e000) <-- end of relation\n\nNote that this never pins more than 16 buffers, because it's case B,\nnot case C in the description above. There is no benefit in looking\nfurther ahead if you're relying on the kernel's sequential\nprefetching.\n\nThe fully cached regression Melanie reported now stays entirely in A.\nThe previous coding would ramp up to high look-ahead distance for no\ngood reason and never ramp down, so it was wasteful and slightly\nslower than master.\n\n2. Melanie and I noticed while discussing the pre-existing ad hoc\nbitmap heap scan that this thing should respect the\n{effective,maintenance}_io_concurrency setting of the containing\ntablespace. Note special cases to avoid problems while stream-reading\npg_tablespace itself and pg_database in backend initialisation.\n\nThere is a third problem that I'm still working on: the behaviour for\nvery small values of effective_io_concurrency isn't quite right, as\ndiscussed in detail by Tomas and Melanie on the bitmap heapscan\nthread. The attached makes 0 do approximately the right thing (though\nI'm hoping to make it less special), but other small numbers aren't\nquite right yet -- 1 is still issuing a useless fadvise at the wrong\ntime, and 2 is working in groups of N at a time instead of\ninterleaving as you might perhaps expect. These are probably\nside-effects of my focusing on coalescing large reads and losing sight\nof the small cases. I need a little more adaptivity and generality in\nthe algorithm at the small end, not least because 1 is the default\nvalue. I'll share a patch to improve that very soon.",
"msg_date": "Sat, 9 Mar 2024 11:24:15 +1300",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Streaming I/O, vectored I/O (WIP)"
},
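The A/B/C look-ahead rule described in the message above amounts to a few lines of arithmetic. The sketch below restates that rule in C; the struct and variable names are invented for illustration and this is not the patch's actual code.

    /*
     * Paraphrase of the adaptive look-ahead rule: double the distance on
     * every I/O started, decay it by one on every hit.  Names are invented.
     */
    static void
    adjust_lookahead_distance(StreamState *stream, bool started_io,
                              bool issued_advice)
    {
        if (started_io)
        {
            /* A miss: ramp up towards behaviour C (random) or B (sequential). */
            int     limit = issued_advice ? stream->max_pinned_buffers
                                          : MAX_BUFFERS_PER_TRANSFER;

            stream->distance = Min(stream->distance * 2, limit);
        }
        else if (stream->distance > 1)
        {
            /* A hit: move slowly back towards behaviour A. */
            stream->distance--;
        }
    }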
{
"msg_contents": "On Sat, Mar 9, 2024 at 3:55 AM Thomas Munro <[email protected]> wrote:\n>\nHi Thomas,\n\nI am planning to review this patch set, so started going through 0001,\nI have a question related to how we are issuing smgrprefetch in\nStartReadBuffers()\n\n+ if (operation->io_buffers_len > 0)\n+ {\n+ if (flags & READ_BUFFERS_ISSUE_ADVICE)\n {\n- if (mode == RBM_ZERO_ON_ERROR || zero_damaged_pages)\n- {\n- ereport(WARNING,\n- (errcode(ERRCODE_DATA_CORRUPTED),\n- errmsg(\"invalid page in block %u of relation %s; zeroing out page\",\n- blockNum,\n- relpath(smgr->smgr_rlocator, forkNum))));\n- MemSet((char *) bufBlock, 0, BLCKSZ);\n- }\n- else\n- ereport(ERROR,\n- (errcode(ERRCODE_DATA_CORRUPTED),\n- errmsg(\"invalid page in block %u of relation %s\",\n- blockNum,\n- relpath(smgr->smgr_rlocator, forkNum))));\n+ /*\n+ * In theory we should only do this if PrepareReadBuffers() had to\n+ * allocate new buffers above. That way, if two calls to\n+ * StartReadBuffers() were made for the same blocks before\n+ * WaitReadBuffers(), only the first would issue the advice.\n+ * That'd be a better simulation of true asynchronous I/O, which\n+ * would only start the I/O once, but isn't done here for\n+ * simplicity. Note also that the following call might actually\n+ * issue two advice calls if we cross a segment boundary; in a\n+ * true asynchronous version we might choose to process only one\n+ * real I/O at a time in that case.\n+ */\n+ smgrprefetch(bmr.smgr, forkNum, blockNum, operation->io_buffers_len);\n }\n\n This is always issuing smgrprefetch starting with the input blockNum,\nshouldn't we pass the first blockNum which we did not find in the\n Buffer pool? So basically in the loop above this call where we are\ndoing PrepareReadBuffer() we should track the first blockNum for which\n the found is not true and pass that blockNum into the smgrprefetch()\nas a first block right?\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 12 Mar 2024 11:45:33 +0530",
"msg_from": "Dilip Kumar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Streaming I/O, vectored I/O (WIP)"
},
{
"msg_contents": "On Tue, Mar 12, 2024 at 7:15 PM Dilip Kumar <[email protected]> wrote:\n> I am planning to review this patch set, so started going through 0001,\n> I have a question related to how we are issuing smgrprefetch in\n> StartReadBuffers()\n\nThanks!\n\n> + /*\n> + * In theory we should only do this if PrepareReadBuffers() had to\n> + * allocate new buffers above. That way, if two calls to\n> + * StartReadBuffers() were made for the same blocks before\n> + * WaitReadBuffers(), only the first would issue the advice.\n> + * That'd be a better simulation of true asynchronous I/O, which\n> + * would only start the I/O once, but isn't done here for\n> + * simplicity. Note also that the following call might actually\n> + * issue two advice calls if we cross a segment boundary; in a\n> + * true asynchronous version we might choose to process only one\n> + * real I/O at a time in that case.\n> + */\n> + smgrprefetch(bmr.smgr, forkNum, blockNum, operation->io_buffers_len);\n> }\n>\n> This is always issuing smgrprefetch starting with the input blockNum,\n> shouldn't we pass the first blockNum which we did not find in the\n> Buffer pool? So basically in the loop above this call where we are\n> doing PrepareReadBuffer() we should track the first blockNum for which\n> the found is not true and pass that blockNum into the smgrprefetch()\n> as a first block right?\n\nI think you'd be right if StartReadBuffers() were capable of\nprocessing a sequence consisting of a hit followed by misses, but\ncurrently it always gives up after the first hit. That is, it always\nprocesses some number of misses (0-16) and then at most one hit. So\nfor now the variable would always turn out to be the same as blockNum.\n\nThe reason is that I wanted to allows \"full sized\" read system calls\nto form. If you said \"hey please read these 16 blocks\" (I'm calling\nthat \"full sized\", AKA MAX_BUFFERS_PER_TRANSFER), and it found 2 hits,\nthen it could only form a read of 14 blocks, but there might be more\nblocks that could be read after those. We would have some arbitrary\nshorter read system calls, when we wanted to make them all as big as\npossible. So in the current patch you say \"hey please read these 16\nblocks\" and it returns saying \"only read 1\", you call again with 15\nand it says \"only read 1\", and you call again and says \"read 16!\"\n(assuming 2 more were readable after the original range we started\nwith). Then physical reads are maximised. Maybe there is some nice\nway to solve that, but I thought this way was the simplest (and if\nthere is some instruction-cache-locality/tight-loop/perf reason why we\nshould work harder to find ranges of hits, it could be for later).\nDoes that make sense?\n\n\n",
"msg_date": "Tue, 12 Mar 2024 19:40:00 +1300",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Streaming I/O, vectored I/O (WIP)"
},
{
"msg_contents": "On Tue, Mar 12, 2024 at 7:40 PM Thomas Munro <[email protected]> wrote:\n> possible. So in the current patch you say \"hey please read these 16\n> blocks\" and it returns saying \"only read 1\", you call again with 15\n\nOops, typo worth correcting: s/15/16/. Point being that the caller is\ninterested in more blocks after the original 16, so it uses 16 again\nwhen it calls back (because that's the size of the Buffer array it\nprovides).\n\n\n",
"msg_date": "Tue, 12 Mar 2024 19:45:00 +1300",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Streaming I/O, vectored I/O (WIP)"
},
{
"msg_contents": "On Tue, Mar 12, 2024 at 12:10 PM Thomas Munro <[email protected]> wrote:\n>\n> I think you'd be right if StartReadBuffers() were capable of\n> processing a sequence consisting of a hit followed by misses, but\n> currently it always gives up after the first hit. That is, it always\n> processes some number of misses (0-16) and then at most one hit. So\n> for now the variable would always turn out to be the same as blockNum.\n>\nOkay, then shouldn't this \"if (found)\" block immediately break the\nloop so that when we hit the block we just return that block? So it\nmakes sense what you explained but with the current code if there are\nthe first few hits followed by misses then we will issue the\nsmgrprefetch() for the initial hit blocks as well.\n\n+ if (found)\n+ {\n+ /*\n+ * Terminate the read as soon as we get a hit. It could be a\n+ * single buffer hit, or it could be a hit that follows a readable\n+ * range. We don't want to create more than one readable range,\n+ * so we stop here.\n+ */\n+ actual_nblocks = operation->nblocks = *nblocks = i + 1; (Dilip: I\nthink we should break after this?)\n+ }\n+ else\n+ {\n+ /* Extend the readable range to cover this block. */\n+ operation->io_buffers_len++;\n+ }\n+ }\n\n> The reason is that I wanted to allows \"full sized\" read system calls\n> to form. If you said \"hey please read these 16 blocks\" (I'm calling\n> that \"full sized\", AKA MAX_BUFFERS_PER_TRANSFER), and it found 2 hits,\n> then it could only form a read of 14 blocks, but there might be more\n> blocks that could be read after those. We would have some arbitrary\n> shorter read system calls, when we wanted to make them all as big as\n> possible. So in the current patch you say \"hey please read these 16\n> blocks\" and it returns saying \"only read 1\", you call again with 15\n> and it says \"only read 1\", and you call again and says \"read 16!\"\n> (assuming 2 more were readable after the original range we started\n> with). Then physical reads are maximised. Maybe there is some nice\n> way to solve that, but I thought this way was the simplest (and if\n> there is some instruction-cache-locality/tight-loop/perf reason why we\n> should work harder to find ranges of hits, it could be for later).\n> Does that make sense?\n\nUnderstood, I think this makes sense.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 12 Mar 2024 16:09:01 +0530",
"msg_from": "Dilip Kumar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Streaming I/O, vectored I/O (WIP)"
},
{
"msg_contents": "On Tue, Mar 12, 2024 at 11:39 PM Dilip Kumar <[email protected]> wrote:\n> + actual_nblocks = operation->nblocks = *nblocks = i + 1;\n> (Dilip: I think we should break after this?)\n\nIn the next loop, i < actual_nblocks is false so the loop terminates.\nBut yeah that was a bit obscure, so I have added an explicit break.\n\nHere also is a new version also of the streaming_read.c patch. This\nis based on feedback from the bitmap heap scan thread, where Tomas and\nMelanie noticed some problems when comparing effective_io_concurrency\n= 0 and 1 against master. The attached random.sql exercises a bunch\nof random scans with different settings, and random.txt shows the\nresulting system calls, unpatched vs patched. Looking at it that way,\nI was able to tweak the coding until I had the same behaviour as\nmaster, except with fewer system calls wherever possible due to\ncoalescing or suppressing sequential advice.",
"msg_date": "Wed, 13 Mar 2024 02:02:35 +1300",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Streaming I/O, vectored I/O (WIP)"
},
{
"msg_contents": "I am planning to push the bufmgr.c patch soon. At that point the new\nAPI won't have any direct callers yet, but the traditional\nReadBuffer() family of functions will internally reach\nStartReadBuffers(nblocks=1) followed by WaitReadBuffers(),\nZeroBuffer() or nothing as appropriate. Any more thoughts or\nobjections? Naming, semantics, correctness of buffer protocol,\nsufficiency of comments, something else?\n\n\n",
"msg_date": "Sat, 16 Mar 2024 12:52:59 +1300",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Streaming I/O, vectored I/O (WIP)"
},
{
"msg_contents": "Hi,\n\nOn Sat, 16 Mar 2024 at 02:53, Thomas Munro <[email protected]> wrote:\n>\n> I am planning to push the bufmgr.c patch soon. At that point the new\n> API won't have any direct callers yet, but the traditional\n> ReadBuffer() family of functions will internally reach\n> StartReadBuffers(nblocks=1) followed by WaitReadBuffers(),\n> ZeroBuffer() or nothing as appropriate. Any more thoughts or\n> objections? Naming, semantics, correctness of buffer protocol,\n> sufficiency of comments, something else?\n\n+ if (StartReadBuffers(bmr,\n+ &buffer,\n+ forkNum,\n+ blockNum,\n+ &nblocks,\n+ strategy,\n+ flags,\n+ &operation))\n+ WaitReadBuffers(&operation);\n\nI think we need to call WaitReadBuffers when 'mode !=\nRBM_ZERO_AND_CLEANUP_LOCK && mode != RBM_ZERO_AND_LOCK' or am I\nmissing something?\n\nCouple of nitpicks:\n\nIt would be nice to explain what the PrepareReadBuffer function does\nwith a comment.\n\n+ if (nblocks == 0)\n+ return; /* nothing to do */\nIt is guaranteed that nblocks will be bigger than 0. Can't we just use\nAssert(operation->io_buffers_len > 0);?\n\n-- \nRegards,\nNazir Bilal Yavuz\nMicrosoft\n\n\n",
"msg_date": "Mon, 18 Mar 2024 18:32:44 +0300",
"msg_from": "Nazir Bilal Yavuz <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Streaming I/O, vectored I/O (WIP)"
},
{
"msg_contents": "Some quick comments:\n\nOn 12/03/2024 15:02, Thomas Munro wrote:\n> src/backend/storage/aio/streaming_read.c\n> src/include/storage/streaming_read.h\n\nStandard file header comments missing.\n\nIt would be nice to have a comment at the top of streaming_read.c, \nexplaining at a high level how the circular buffer, lookahead and all \nthat works. Maybe even some diagrams.\n\nFor example, what is head and what is tail? Before reading the code, I \nassumed that 'head' was the next block range to return in \npg_streaming_read_buffer_get_next(). But I think it's actually the other \nway round?\n\n> /*\n> * Create a new streaming read object that can be used to perform the\n> * equivalent of a series of ReadBuffer() calls for one fork of one relation.\n> * Internally, it generates larger vectored reads where possible by looking\n> * ahead.\n> */\n> PgStreamingRead *\n> pg_streaming_read_buffer_alloc(int flags,\n> \t\t\t\t\t\t\t void *pgsr_private,\n> \t\t\t\t\t\t\t size_t per_buffer_data_size,\n> \t\t\t\t\t\t\t BufferAccessStrategy strategy,\n> \t\t\t\t\t\t\t BufferManagerRelation bmr,\n> \t\t\t\t\t\t\t ForkNumber forknum,\n> \t\t\t\t\t\t\t PgStreamingReadBufferCB next_block_cb)\n\nI'm not a fan of the name, especially the 'alloc' part. Yeah, most of \nthe work it does is memory allocation. But I'd suggest something like \n'pg_streaming_read_begin' instead.\n\nDo we really need the pg_ prefix in these?\n\n> Buffer\n> pg_streaming_read_buffer_get_next(PgStreamingRead *pgsr, void **per_buffer_data)\n\nMaybe 'pg_streaming_read_next_buffer' or just 'pg_streaming_read_next', \nfor a shorter name.\n\n\n> \n> \t/*\n> \t * pgsr->ranges is a circular buffer. When it is empty, head == tail.\n> \t * When it is full, there is an empty element between head and tail. Head\n> \t * can also be empty (nblocks == 0), therefore we need two extra elements\n> \t * for non-occupied ranges, on top of max_pinned_buffers to allow for the\n> \t * maxmimum possible number of occupied ranges of the smallest possible\n> \t * size of one.\n> \t */\n> \tsize = max_pinned_buffers + 2;\n\nI didn't understand this explanation for why it's + 2.\n\n> \t/*\n> \t * Skip the initial ramp-up phase if the caller says we're going to be\n> \t * reading the whole relation. This way we start out doing full-sized\n> \t * reads.\n> \t */\n> \tif (flags & PGSR_FLAG_FULL)\n> \t\tpgsr->distance = Min(MAX_BUFFERS_PER_TRANSFER, pgsr->max_pinned_buffers);\n> \telse\n> \t\tpgsr->distance = 1;\n\nShould this be \"Max(MAX_BUFFERS_PER_TRANSFER, \npgsr->max_pinned_buffers)\"? max_pinned_buffers cannot be smaller than \nMAX_BUFFERS_PER_TRANSFER though, given how it's initialized earlier. So \nperhaps just 'pgsr->distance = pgsr->max_pinned_buffers' ?\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Tue, 19 Mar 2024 17:04:53 +0200",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Streaming I/O, vectored I/O (WIP)"
},
{
"msg_contents": "On Wed, Mar 20, 2024 at 4:04 AM Heikki Linnakangas <[email protected]> wrote:\n> On 12/03/2024 15:02, Thomas Munro wrote:\n> > src/backend/storage/aio/streaming_read.c\n> > src/include/storage/streaming_read.h\n>\n> Standard file header comments missing.\n\nFixed.\n\n> It would be nice to have a comment at the top of streaming_read.c,\n> explaining at a high level how the circular buffer, lookahead and all\n> that works. Maybe even some diagrams.\n\nDone.\n\n> For example, what is head and what is tail? Before reading the code, I\n> assumed that 'head' was the next block range to return in\n> pg_streaming_read_buffer_get_next(). But I think it's actually the other\n> way round?\n\nYeah. People seem to have different natural intuitions about head vs\ntail in this sort of thing, so I've switched to descriptive names:\nstream->{oldest,next}_buffer_index (see also below).\n\n> > /*\n> > * Create a new streaming read object that can be used to perform the\n> > * equivalent of a series of ReadBuffer() calls for one fork of one relation.\n> > * Internally, it generates larger vectored reads where possible by looking\n> > * ahead.\n> > */\n> > PgStreamingRead *\n> > pg_streaming_read_buffer_alloc(int flags,\n> > void *pgsr_private,\n> > size_t per_buffer_data_size,\n> > BufferAccessStrategy strategy,\n> > BufferManagerRelation bmr,\n> > ForkNumber forknum,\n> > PgStreamingReadBufferCB next_block_cb)\n>\n> I'm not a fan of the name, especially the 'alloc' part. Yeah, most of\n> the work it does is memory allocation. But I'd suggest something like\n> 'pg_streaming_read_begin' instead.\n\nI like it. Done.\n\n> Do we really need the pg_ prefix in these?\n\nGood question. My understanding of our convention is that pg_ is\nneeded for local replacements/variants/extensions of things with well\nknown names (pg_locale_t, pg_strdup(), yada yada), and perhaps also in\na few places where the word is very common/short and we want to avoid\ncollisions and make sure it's obviously ours (pg_popcount?), and I\nguess places that reflect the name of a SQL identifier with a prefix,\nbut this doesn't seem to qualify for any of those things. It's a new\nthing, our own thing entirely, and sufficiently distinctive and\nunconfusable with standard stuff. So, prefix removed.\n\nLots of other patches on top of this one are using \"pgsr\" as a\nvariable name, ie containing that prefix; perhaps they would use \"sr\"\nor \"streaming_read\" or \"stream\". I used \"stream\" in a few places in\nthis version.\n\nOther names improved in this version IMHO: pgsr_private ->\ncallback_private. I find it clearer, as a way to indicate that the\nprovider of the callback \"owns\" it. I also reordered the arguments:\nnow it's streaming_read_buffer_begin(..., callback, callback_private,\nper_buffer_data_size), to keep those three things together.\n\n> > Buffer\n> > pg_streaming_read_buffer_get_next(PgStreamingRead *pgsr, void **per_buffer_data)\n>\n> Maybe 'pg_streaming_read_next_buffer' or just 'pg_streaming_read_next',\n> for a shorter name.\n\nHmm. The idea of 'buffer' appearing in a couple of names is that\nthere are conceptually other kinds of I/O that we might want to\nstream, like raw files or buffers other than the buffer pool, maybe\neven sockets, so this would be part of a family of similar interfaces.\nI think it needs to be clear that this variant gives you buffers. I'm\nOK with removing \"get\" but I guess it would be better to keep the\nwords in the same order across the three functions? 
What about these?\n\nstreaming_read_buffer_begin();\nstreaming_read_buffer_next();\nstreaming_read_buffer_end();\n\nTried like that in this version. Other ideas would be to make\n\"stream\" the main noun, buffered_read_stream_begin() or something.\nIdeas welcome.\n\nIt's also a bit grammatically weird to say StartReadBuffers() and\nWaitReadBuffers() in the bufmgr API... Hmm. Perhaps we should just\ncall it ReadBuffers() and WaitForBufferIO()? Maybe surprising because\nthe former isn't just like ReadBuffer() ... but on the other hand no\none said it has to be, and sometimes it even is (when it gets a hit).\nI suppose there could even be a flag READ_BUFFERS_WAIT or the opposite\nto make the asynchrony optional or explicit if someone has a problem\nwith that.\n\n(Hmm, that'd be a bit like the Windows native file API, where\nReadFile() is synchronous or asynchronous depending on flags.)\n\n> >\n> > /*\n> > * pgsr->ranges is a circular buffer. When it is empty, head == tail.\n> > * When it is full, there is an empty element between head and tail. Head\n> > * can also be empty (nblocks == 0), therefore we need two extra elements\n> > * for non-occupied ranges, on top of max_pinned_buffers to allow for the\n> > * maxmimum possible number of occupied ranges of the smallest possible\n> > * size of one.\n> > */\n> > size = max_pinned_buffers + 2;\n>\n> I didn't understand this explanation for why it's + 2.\n\nI think the logic was right but the explanation was incomplete, sorry.\nIt needed one gap between head and tail because head == tail means\nempty, and another because the tail item was still 'live' for one\nextra call until you examined it one more time to notice that all the\nbuffers had been extracted. In any case I have now deleted all that.\nIn this new version I don't need extra space between head and tail at\nall, because empty is detected with stream->pinned_buffers == 0, and\ntail moves forwards immediately when you consume buffers.\n\n> > /*\n> > * Skip the initial ramp-up phase if the caller says we're going to be\n> > * reading the whole relation. This way we start out doing full-sized\n> > * reads.\n> > */\n> > if (flags & PGSR_FLAG_FULL)\n> > pgsr->distance = Min(MAX_BUFFERS_PER_TRANSFER, pgsr->max_pinned_buffers);\n> > else\n> > pgsr->distance = 1;\n>\n> Should this be \"Max(MAX_BUFFERS_PER_TRANSFER,\n> pgsr->max_pinned_buffers)\"? max_pinned_buffers cannot be smaller than\n> MAX_BUFFERS_PER_TRANSFER though, given how it's initialized earlier. So\n> perhaps just 'pgsr->distance = pgsr->max_pinned_buffers' ?\n\nRight, done.\n\nHere are some other changes:\n\n* I'm fairly happy with the ABC adaptive distance algorithm so far, I\nthink, but I spent more time tidying up the way it is implemented. I\ndidn't like the way each 'range' had buffer[MAX_BUFFERS_PER_TRANSFER],\nso I created a new dense array stream->buffers that behaved as a\nsecond circular queue.\n\n* The above also made it trivial for MAX_BUFFERS_PER_TRANSFER to\nbecome the GUC that it always wanted to be: buffer_io_size defaulting\nto 128kB. Seems like a reasonable thing to have? Could also\ninfluence things like bulk write? (The main problem I have with the\nGUC currently is choosing a category, async resources is wrong....)\n\n* By analogy, it started to look a bit funny that each range had room\nfor a ReadBuffersOperation, and we had enough ranges for\nmax_pinned_buffers * 1 block range. 
So I booted that out to another\ndense array, of size max_ios.\n\n* At the same time, Bilal and Andres had been complaining privately\nabout 'range' management overheads showing up in perf and creating a\nregression against master on fully cached scans that do nothing else\n(eg pg_prewarm, where we lookup, pin, unpin every page and do no I/O\nand no CPU work with the page, a somewhat extreme case but a\nreasonable way to isolate the management costs); having made the above\nchange, it suddenly seemed obvious that I should make the buffers\narray the 'main' circular queue, pointing off to another place for\ninformation required for dealing with misses. In this version, there\nare no more range objects. This feels better and occupies and touches\nless memory. See pictures below.\n\n* The 'head range' is replaced by the pending_read_{blocknum,nblocks}\nvariables, which seems easier to understand. Essentially the\ncallback's block numbers are run-length compressed into there until\nthey can't be, at which point we start a read and start forming a new\npending read.\n\n* A micro-optimisation arranges for the zero slots to be reused over\nand over again if we're doing distance = 1, to avoid rotating through\nmemory for no benefit; I don't yet know if that pays for itself, it's\njust a couple of lines...\n\n* Various indexes and sizes that couldn't quite fit in uint8_t but\ncouldn't possibly exceed a few thousand because they are bounded by\nnumbers deriving from range-limited GUCs are now int16_t (while I was\nlooking for low hanging opportunities to reduce memory usage...)\n\nIn pictures, the circular queue arrangement changed from\nmax_pinned_buffers * fat range objects like this, where only the\nper-buffer data was outside the range object (because its size is\nvariable), and in the worst case all-hit case there were a lot of\nranges with only one buffer in them:\n\n ranges buf/data\n\n +--------+--------------+----+ +-----+\n | | | | | |\n +--------+--------------+----+ +-----+\n tail -> | 10..10 | buffers[MAX] | op +----->| ? |\n +--------+--------------+----+ +-----+\n | 42..44 | buffers[MAX] | op +----->| ? |\n +--------+--------------+----+ +-----+\n | 60..60 | buffers[MAX] | op +--+ | ? |\n +--------+--------------+----+ | +-----+\n head -> | | | | | | ? |\n +--------+--------------+----+ | +-----+\n | | | | +-->| ? |\n +--------+--------------+----+ +-----+\n | | | | | |\n +--------+--------------+----+ +-----+\n\n... to something that you might call a \"columnar\" layout, where ops\nare kicked out to their own array of size max_ios (a much smaller\nnumber), and the buffers, per-buffer data and indexes pointing to\noptional I/O objects are in parallel arrays of size\nmax_pinned_buffers, like this:\n\n buffers buf/data buf/io ios (= ops)\n\n +----+ +-----+ +---+ +--------+\n | | | | | | +---->| 42..44 |\n +----+ +-----+ +---+ | +--------+\n oldest_buffer_index -> | 10 | | ? | | | | +-->| 60..60 |\n +----+ +-----+ +---+ | | +--------+\n | 42 | | ? | | 0 +--+ | | |\n +----+ +-----+ +---+ | +--------+\n | 43 | | ? | | | | | |\n +----+ +-----+ +---+ | +--------+\n | 44 | | ? | | | | | |\n +----+ +-----+ +---+ | +--------+\n | 60 | | ? | | 1 +----+\n +----+ +-----+ +---+\n next_buffer_index -> | | | | | |\n +----+ +-----+ +---+\n\nIn other words, there is essentially no waste/padding now, and in the\nall-hit case we stay in the zero'th elements of those arrays so they\ncan stay red hot. Still working on validating this refactoring with\nother patches and test scenarios. 
I hope it's easier to understand,\nand does a better job of explaining itself.\n\nI'm also still processing a bunch of performance-related fixups mostly\nfor bufmgr.c sent by Andres off-list (things like: StartReadBuffer()\nargument list is too wide, some things need inline, we should only\ninitialise the op if it will be needed, oh I squashed that last one\ninto the patch already), after he and Bilal studied some regressions\nin cases with no I/O. And thinking about Bilal's earlier message\n(extra read even when we're going to zero, oops, he's quite right\nabout that) and a patch he sent me for that. More on those soon.",
"msg_date": "Mon, 25 Mar 2024 02:02:12 +1300",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Streaming I/O, vectored I/O (WIP)"
},
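Expressed as a data structure, the "columnar" arrangement in the second diagram above corresponds roughly to the sketch below; the field names and types are guesses for illustration, not the patch's exact definition.

    /*
     * Rough shape of the "columnar" circular queue described above.
     * Field names and types are guesses for illustration only.
     */
    struct StreamingRead
    {
        int16       oldest_buffer_index;    /* next buffer to hand to the caller */
        int16       next_buffer_index;      /* where newly pinned buffers go */
        int16       max_pinned_buffers;
        int16       pinned_buffers;
        int16       distance;               /* current look-ahead distance */

        /* One element per possible concurrent read (size max_ios). */
        ReadBuffersOperation *ios;

        /* Parallel circular arrays of size max_pinned_buffers. */
        Buffer     *buffers;                /* the "main" queue */
        int16      *buffer_io_indexes;      /* -1, or an index into ios[] */
        char       *per_buffer_data;        /* optional variable-size elements */
    };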
{
"msg_contents": "On Sun, Mar 24, 2024 at 9:02 AM Thomas Munro <[email protected]> wrote:\n>\n> On Wed, Mar 20, 2024 at 4:04 AM Heikki Linnakangas <[email protected]> wrote:\n> > On 12/03/2024 15:02, Thomas Munro wrote:\n> > > src/backend/storage/aio/streaming_read.c\n> > > src/include/storage/streaming_read.h\n> >\n> > Standard file header comments missing.\n>\n> Fixed.\n>\n> > It would be nice to have a comment at the top of streaming_read.c,\n> > explaining at a high level how the circular buffer, lookahead and all\n> > that works. Maybe even some diagrams.\n>\n> Done.\n>\n> > For example, what is head and what is tail? Before reading the code, I\n> > assumed that 'head' was the next block range to return in\n> > pg_streaming_read_buffer_get_next(). But I think it's actually the other\n> > way round?\n>\n> Yeah. People seem to have different natural intuitions about head vs\n> tail in this sort of thing, so I've switched to descriptive names:\n> stream->{oldest,next}_buffer_index (see also below).\n>\n> > > /*\n> > > * Create a new streaming read object that can be used to perform the\n> > > * equivalent of a series of ReadBuffer() calls for one fork of one relation.\n> > > * Internally, it generates larger vectored reads where possible by looking\n> > > * ahead.\n> > > */\n> > > PgStreamingRead *\n> > > pg_streaming_read_buffer_alloc(int flags,\n> > > void *pgsr_private,\n> > > size_t per_buffer_data_size,\n> > > BufferAccessStrategy strategy,\n> > > BufferManagerRelation bmr,\n> > > ForkNumber forknum,\n> > > PgStreamingReadBufferCB next_block_cb)\n> >\n> > I'm not a fan of the name, especially the 'alloc' part. Yeah, most of\n> > the work it does is memory allocation. But I'd suggest something like\n> > 'pg_streaming_read_begin' instead.\n>\n> I like it. Done.\n>\n> > Do we really need the pg_ prefix in these?\n>\n> Good question. My understanding of our convention is that pg_ is\n> needed for local replacements/variants/extensions of things with well\n> known names (pg_locale_t, pg_strdup(), yada yada), and perhaps also in\n> a few places where the word is very common/short and we want to avoid\n> collisions and make sure it's obviously ours (pg_popcount?), and I\n> guess places that reflect the name of a SQL identifier with a prefix,\n> but this doesn't seem to qualify for any of those things. It's a new\n> thing, our own thing entirely, and sufficiently distinctive and\n> unconfusable with standard stuff. So, prefix removed.\n>\n> Lots of other patches on top of this one are using \"pgsr\" as a\n> variable name, ie containing that prefix; perhaps they would use \"sr\"\n> or \"streaming_read\" or \"stream\". I used \"stream\" in a few places in\n> this version.\n>\n> Other names improved in this version IMHO: pgsr_private ->\n> callback_private. I find it clearer, as a way to indicate that the\n> provider of the callback \"owns\" it. I also reordered the arguments:\n> now it's streaming_read_buffer_begin(..., callback, callback_private,\n> per_buffer_data_size), to keep those three things together.\n\nI haven't reviewed the whole patch, but as I was rebasing\nbitmapheapscan streaming read user, I found callback_private confusing\nbecause it seems like it is a private callback, not private data\nbelonging to the callback. Perhaps call it callback_private_data? Also\nmaybe mention what it is for in the comment above\nstreaming_read_buffer_begin() and in the StreamingRead structure\nitself.\n\n- Melanie\n\n\n",
"msg_date": "Sun, 24 Mar 2024 13:29:56 -0400",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Streaming I/O, vectored I/O (WIP)"
},
{
"msg_contents": "On Mon, Mar 25, 2024 at 6:30 AM Melanie Plageman\n<[email protected]> wrote:\n> I haven't reviewed the whole patch, but as I was rebasing\n> bitmapheapscan streaming read user, I found callback_private confusing\n> because it seems like it is a private callback, not private data\n> belonging to the callback. Perhaps call it callback_private_data? Also\n\nWFM.\n\n> maybe mention what it is for in the comment above\n> streaming_read_buffer_begin() and in the StreamingRead structure\n> itself.\n\nYeah. I've tried to improve the comments on all three public\nfunctions. I also moved the three public functions _begin(), _next(),\n_end() to be next to each other after the static helper functions.\n\nWorking on perf regression/tuning reports today, more soon...",
"msg_date": "Mon, 25 Mar 2024 12:02:46 +1300",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Streaming I/O, vectored I/O (WIP)"
},
{
"msg_contents": "On 24/03/2024 15:02, Thomas Munro wrote:\n> On Wed, Mar 20, 2024 at 4:04 AM Heikki Linnakangas <[email protected]> wrote:\n>> Maybe 'pg_streaming_read_next_buffer' or just 'pg_streaming_read_next',\n>> for a shorter name.\n> \n> Hmm. The idea of 'buffer' appearing in a couple of names is that\n> there are conceptually other kinds of I/O that we might want to\n> stream, like raw files or buffers other than the buffer pool, maybe\n> even sockets, so this would be part of a family of similar interfaces.\n> I think it needs to be clear that this variant gives you buffers. I'm\n> OK with removing \"get\" but I guess it would be better to keep the\n> words in the same order across the three functions? What about these?\n> \n> streaming_read_buffer_begin();\n> streaming_read_buffer_next();\n> streaming_read_buffer_end();\n> \n> Tried like that in this version. Other ideas would be to make\n> \"stream\" the main noun, buffered_read_stream_begin() or something.\n> Ideas welcome.\n\nWorks for me, although \"streaming_read_buffer\" is a pretty long prefix. \nThe flags like \"STREAMING_READ_MAINTENANCE\" probably ought to be \n\"STREAMING_READ_BUFFER_MAINTENANCE\" as well.\n\nMaybe \"buffer_stream_next()\"?\n\n> Here are some other changes:\n> \n> * I'm fairly happy with the ABC adaptive distance algorithm so far, I\n> think, but I spent more time tidying up the way it is implemented. I\n> didn't like the way each 'range' had buffer[MAX_BUFFERS_PER_TRANSFER],\n> so I created a new dense array stream->buffers that behaved as a\n> second circular queue.\n> \n> * The above also made it trivial for MAX_BUFFERS_PER_TRANSFER to\n> become the GUC that it always wanted to be: buffer_io_size defaulting\n> to 128kB. Seems like a reasonable thing to have? Could also\n> influence things like bulk write? (The main problem I have with the\n> GUC currently is choosing a category, async resources is wrong....)\n> \n> * By analogy, it started to look a bit funny that each range had room\n> for a ReadBuffersOperation, and we had enough ranges for\n> max_pinned_buffers * 1 block range. So I booted that out to another\n> dense array, of size max_ios.\n> \n> * At the same time, Bilal and Andres had been complaining privately\n> about 'range' management overheads showing up in perf and creating a\n> regression against master on fully cached scans that do nothing else\n> (eg pg_prewarm, where we lookup, pin, unpin every page and do no I/O\n> and no CPU work with the page, a somewhat extreme case but a\n> reasonable way to isolate the management costs); having made the above\n> change, it suddenly seemed obvious that I should make the buffers\n> array the 'main' circular queue, pointing off to another place for\n> information required for dealing with misses. In this version, there\n> are no more range objects. This feels better and occupies and touches\n> less memory. See pictures below.\n\n+1 for all that. Much better!\n\n> * Various indexes and sizes that couldn't quite fit in uint8_t but\n> couldn't possibly exceed a few thousand because they are bounded by\n> numbers deriving from range-limited GUCs are now int16_t (while I was\n> looking for low hanging opportunities to reduce memory usage...)\n\nIs int16 enough though? 
It seems so, because:\n\n max_pinned_buffers = Max(max_ios * 4, buffer_io_size);\n\nand max_ios is constrained by the GUC's maximum MAX_IO_CONCURRENCY, and \nbuffer_io_size is constrained by MAX_BUFFER_IO_SIZE == PG_IOV_MAX == 32.\n\nIf someone changes those constants though, int16 might overflow and fail \nin weird ways. I'd suggest being more careful here and explicitly clamp \nmax_pinned_buffers at PG_INT16_MAX or have a static assertion or \nsomething. (I think it needs to be somewhat less than PG_INT16_MAX, \nbecause of the extra \"overflow buffers\" stuff and some other places \nwhere you do arithmetic.)\n\n> \t/*\n> \t * We gave a contiguous range of buffer space to StartReadBuffers(), but\n> \t * we want it to wrap around at max_pinned_buffers. Move values that\n> \t * overflowed into the extra space. At the same time, put -1 in the I/O\n> \t * slots for the rest of the buffers to indicate no I/O. They are covered\n> \t * by the head buffer's I/O, if there is one. We avoid a % operator.\n> \t */\n> \toverflow = (stream->next_buffer_index + nblocks) - stream->max_pinned_buffers;\n> \tif (overflow > 0)\n> \t{\n> \t\tmemmove(&stream->buffers[0],\n> \t\t\t\t&stream->buffers[stream->max_pinned_buffers],\n> \t\t\t\tsizeof(stream->buffers[0]) * overflow);\n> \t\tfor (int i = 0; i < overflow; ++i)\n> \t\t\tstream->buffer_io_indexes[i] = -1;\n> \t\tfor (int i = 1; i < nblocks - overflow; ++i)\n> \t\t\tstream->buffer_io_indexes[stream->next_buffer_index + i] = -1;\n> \t}\n> \telse\n> \t{\n> \t\tfor (int i = 1; i < nblocks; ++i)\n> \t\t\tstream->buffer_io_indexes[stream->next_buffer_index + i] = -1;\n> \t}\n\nInstead of clearing buffer_io_indexes here, it might be cheaper/simpler \nto initialize the array to -1 in streaming_read_buffer_begin(), and \nreset buffer_io_indexes[io_index] = -1 in streaming_read_buffer_next(), \nafter the WaitReadBuffers() call. In other words, except when an I/O is \nin progress, keep all the elements at -1, even the elements that are not \ncurrently in use.\n\nAlternatively, you could remember the first buffer that the I/O applies \nto in the 'ios' array. In other words, instead of pointing from buffer \nto the I/O that it depends on, point from the I/O to the buffer that \ndepends on it. The last attached patch implements that approach. I'm not \nwedded to it, but it feels a little simpler.\n\n\n> \t\tif (stream->ios[io_index].flags & READ_BUFFERS_ISSUE_ADVICE)\n> \t\t{\n> \t\t\t/* Distance ramps up fast (behavior C). */\n> \t\t\t...\n> \t\t}\n> \t\telse\n> \t\t{\n> \t\t\t/* No advice; move towards full I/O size (behavior B). */\n> \t\t\t...\n> \t\t}\n\nThe comment on ReadBuffersOperation says \"Declared in public header only \nto allow inclusion in other structs, but contents should not be \naccessed\", but here you access the 'flags' field.\n\nYou also mentioned that the StartReadBuffers() argument list is too \nlong. Perhaps the solution is to redefine ReadBuffersOperation so that \nit consists of two parts: 1st part is filled in by the caller, and \ncontains the arguments, and 2nd part is private to bufmgr.c. The \nsignature for StartReadBuffers() would then be just:\n\nbool StartReadBuffers(ReadBuffersOperation *operation);\n\nThat would make it OK to read the 'flags' field. It would also allow \nreusing the same ReadBuffersOperation struct for multiple I/Os for the \nsame relation; you only need to change the changing parts of the struct \non each operation.\n\n\nIn the attached patch set, the first three patches are your v9 with no \nchanges. 
The last patch refactors away 'buffer_io_indexes' like I \nmentioned above. The others are fixes for some other trivial things that \ncaught my eye.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)",
"msg_date": "Tue, 26 Mar 2024 14:40:03 +0200",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Streaming I/O, vectored I/O (WIP)"
},
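The two-part ReadBuffersOperation suggested in the review above could look roughly like the sketch below; the field list and names are guesses for illustration, and only the caller-filled/bufmgr-private split and the one-argument StartReadBuffers() signature come from the message.

    /*
     * Illustration of the suggested split: the caller fills in the first
     * group of fields once and can reuse the struct for successive reads
     * on the same relation; bufmgr.c owns everything after that.  Field
     * names are guesses.
     */
    struct ReadBuffersOperation
    {
        /* Caller-supplied "arguments". */
        BufferManagerRelation bmr;
        ForkNumber      forknum;
        BufferAccessStrategy strategy;
        Buffer         *buffers;
        BlockNumber     blocknum;
        int             nblocks;
        int             flags;

        /* Private to bufmgr.c from here on. */
        int             io_buffers_len;
        /* ... */
    };

    extern bool StartReadBuffers(ReadBuffersOperation *operation);
    extern void WaitReadBuffers(ReadBuffersOperation *operation);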
{
"msg_contents": "On Wed, Mar 27, 2024 at 1:40 AM Heikki Linnakangas <[email protected]> wrote:\n> Is int16 enough though? It seems so, because:\n>\n> max_pinned_buffers = Max(max_ios * 4, buffer_io_size);\n>\n> and max_ios is constrained by the GUC's maximum MAX_IO_CONCURRENCY, and\n> buffer_io_size is constrained by MAX_BUFFER_IO_SIZE == PG_IOV_MAX == 32.\n>\n> If someone changes those constants though, int16 might overflow and fail\n> in weird ways. I'd suggest being more careful here and explicitly clamp\n> max_pinned_buffers at PG_INT16_MAX or have a static assertion or\n> something. (I think it needs to be somewhat less than PG_INT16_MAX,\n> because of the extra \"overflow buffers\" stuff and some other places\n> where you do arithmetic.)\n\nClamp added.\n\n> > /*\n> > * We gave a contiguous range of buffer space to StartReadBuffers(), but\n> > * we want it to wrap around at max_pinned_buffers. Move values that\n> > * overflowed into the extra space. At the same time, put -1 in the I/O\n> > * slots for the rest of the buffers to indicate no I/O. They are covered\n> > * by the head buffer's I/O, if there is one. We avoid a % operator.\n> > */\n> > overflow = (stream->next_buffer_index + nblocks) - stream->max_pinned_buffers;\n> > if (overflow > 0)\n> > {\n> > memmove(&stream->buffers[0],\n> > &stream->buffers[stream->max_pinned_buffers],\n> > sizeof(stream->buffers[0]) * overflow);\n> > for (int i = 0; i < overflow; ++i)\n> > stream->buffer_io_indexes[i] = -1;\n> > for (int i = 1; i < nblocks - overflow; ++i)\n> > stream->buffer_io_indexes[stream->next_buffer_index + i] = -1;\n> > }\n> > else\n> > {\n> > for (int i = 1; i < nblocks; ++i)\n> > stream->buffer_io_indexes[stream->next_buffer_index + i] = -1;\n> > }\n>\n> Instead of clearing buffer_io_indexes here, it might be cheaper/simpler\n> to initialize the array to -1 in streaming_read_buffer_begin(), and\n> reset buffer_io_indexes[io_index] = -1 in streaming_read_buffer_next(),\n> after the WaitReadBuffers() call. In other words, except when an I/O is\n> in progress, keep all the elements at -1, even the elements that are not\n> currently in use.\n\nYeah that wasn't nice and I had already got as far as doing exactly\nthat ↑ on my own, but your second idea ↓ is better!\n\n> Alternatively, you could remember the first buffer that the I/O applies\n> to in the 'ios' array. In other words, instead of pointing from buffer\n> to the I/O that it depends on, point from the I/O to the buffer that\n> depends on it. The last attached patch implements that approach. I'm not\n> wedded to it, but it feels a little simpler.\n\nYeah, nice improvement.\n\n> > if (stream->ios[io_index].flags & READ_BUFFERS_ISSUE_ADVICE)\n> > {\n> > /* Distance ramps up fast (behavior C). */\n> > ...\n> > }\n> > else\n> > {\n> > /* No advice; move towards full I/O size (behavior B). */\n> > ...\n> > }\n>\n> The comment on ReadBuffersOperation says \"Declared in public header only\n> to allow inclusion in other structs, but contents should not be\n> accessed\", but here you access the 'flags' field.\n>\n> You also mentioned that the StartReadBuffers() argument list is too\n> long. Perhaps the solution is to redefine ReadBuffersOperation so that\n> it consists of two parts: 1st part is filled in by the caller, and\n> contains the arguments, and 2nd part is private to bufmgr.c. The\n> signature for StartReadBuffers() would then be just:\n>\n> bool StartReadBuffers(ReadBuffersOperation *operation);\n\nYeah. 
I had already got as far as doing this on the regression\nhunting expedition, but I kept some arguments for frequently changing\nthings, eg blocknum. It means that the stuff that never changes is in\nthere, and the stuff that changes each time doesn't have to be written\nto memory at all.\n\n> That would make it OK to read the 'flags' field. It would also allow\n> reusing the same ReadBuffersOperation struct for multiple I/Os for the\n> same relation; you only need to change the changing parts of the struct\n> on each operation.\n\nRight. Done.\n\n> In the attached patch set, the first three patches are your v9 with no\n> changes. The last patch refactors away 'buffer_io_indexes' like I\n> mentioned above. The others are fixes for some other trivial things that\n> caught my eye.\n\nThanks, all squashed into the patch.\n\nIn an offline chat with Robert and Andres, we searched for a better\nname for the GUC. We came up with \"io_combine_limit\". It's easier to\ndocument a general purpose limit than to explain what \"buffer_io_size\"\ndoes (particularly since the set of affected features will grow over\ntime but starts so small). I'll feel better about using it to control\nbulk writes too, with that name.\n\nI collapsed the various memory allocations into one palloc. The\nbuffer array is now a buffers[FLEXIBLE_ARRAY_MEMBER].\n\nI got rid of \"finished\" (now represented by distance == 0, I was\nremoving branches and variables). I got rid of \"started\", which can\nnow be deduced (used for suppressing advice when you're calling\n_next() because you need a block and we need to read it immediately),\nsee the function argument suppress_advice.\n\nHere is a new proposal for the names, updated in v10:\n\nread_stream_begin_relation()\nread_stream_next_buffer()\nvoid read_stream_end()\n\nI think we'll finish up with different 'constructor' functions for\ndifferent kinds of streams. For example I already want one that can\nprovide a multi-relation callback for use by recovery (shown in v1).\nOthers might exist for raw file access, etc. The defining\ncharacteristic of this one is that it accesses one specific\nrelation/fork. Well, _relfork() might be more accurate but less easy\non the eye. I won't be surprised if people not following this thread\nhave ideas after commit; it's certainly happened before something gets\nrenamed in beta and I won't mind a bit if that happens...\n\nI fixed a thinko in the new ReadBuffer() implementation mentioned\nbefore (thanks to Bilal for pointing this out): it didn't handle the\nRBM_ZERO_XXX flags properly. Well, it did work and the tests passed,\nbut it performed a useless read first. I probably got mixed up when I\nremoved the extended interface which was capable of streaming zeroed\nbuffers but this simple one isn't and it wasn't suppressing the read\nas it would need to. Later I'll propose to add that back in for\nrecovery.\n\nI fixed a recently added thinko in the circular queue: I mistakenly\nthought I didn't need a spare gap between head and tail anymore\nbecause now we never compare them, since we track the number of pinned\nbuffers instead, but after read_stream_next_buffer() returns, users of\nper-buffer data need that data to remain valid until the next call.\nSo the recent refactoring work didn't survive contact with Melanie's\nBHS work, which uses that. 
We need to \"wipe\" the previous (spare)\none, which can't possibly be in use yet, and if anyone ever sees 0x7f\nin their per-buffer data, it will mean that they illegally accessed\nthe older value after a new call to read_stream_next_buffer(). Fixed,\nwith comments to clarify.\n\nRetesting with Melanie's latest BHS patch set and my random.sql\n(upthread) gives the same system call trace as before. The intention\nof that was to demonstrate what exact sequences\neffective_io_concurrency values give. Now you can also run that with\ndifferent values of the new io_combine_limit. If you run it with\nio_combine_limit = '8kB', it looks almost exactly like master ie no\nbig reads allowed; the only difference is that read_stream.c refuses\nto issue advice for strictly sequential reads:\n\neffective_io_concurrency = 1, range size = 2\nunpatched patched\n==============================================================================\npread(93,...,8192,0x58000) = 8192 pread(84,...,8192,0x58000) = 8192\nposix_fadvise(93,0x5a000,0x2000,...) pread(84,...,8192,0x5a000) = 8192\npread(93,...,8192,0x5a000) = 8192 posix_fadvise(84,0xb0000,0x2000,...)\nposix_fadvise(93,0xb0000,0x2000,...) pread(84,...,8192,0xb0000) = 8192\npread(93,...,8192,0xb0000) = 8192 pread(84,...,8192,0xb2000) = 8192\nposix_fadvise(93,0xb2000,0x2000,...) posix_fadvise(84,0x108000,0x2000,...)\npread(93,...,8192,0xb2000) = 8192 pread(84,...,8192,0x108000) = 8192\nposix_fadvise(93,0x108000,0x2000,...) pread(84,...,8192,0x10a000) = 8192\n\nYou wouldn't normally see that though as the default io_combine_limit\nwould just merge those adjacent reads after the first one triggers\nramp-up towards behaviour C:\n\n effective_io_concurrency = 1, range size = 2\nunpatched patched\n==============================================================================\npread(93,...,8192,0x58000) = 8192 pread(80,...,8192,0x58000) = 8192\nposix_fadvise(93,0x5a000,0x2000,...) pread(80,...,8192,0x5a000) = 8192\npread(93,...,8192,0x5a000) = 8192 posix_fadvise(80,0xb0000,0x4000,...)\nposix_fadvise(93,0xb0000,0x2000,...) preadv(80,...,2,0xb0000) = 16384\npread(93,...,8192,0xb0000) = 8192 posix_fadvise(80,0x108000,0x4000,...)\nposix_fadvise(93,0xb2000,0x2000,...) preadv(80,...,2,0x108000) = 16384\npread(93,...,8192,0xb2000) = 8192 posix_fadvise(80,0x160000,0x4000,...)\n\nI spent most of the past few days trying to regain some lost\nperformance. Thanks to Andres for some key observations and help!\nThat began with reports from Bilal and Melanie (possibly related to\nthings Tomas had seen too, not sure) of regressions in all-cached\nworkloads, which I already improved a bit with the ABC algorithm that\nminimised pinning for this case. That is, if there's no recent I/O so\nwe reach what I call behaviour A, it should try to do as little magic\nas possible. But it turns out that wasn't enough! It is very hard to\nbeat a tight loop that just does ReadBuffer(), ReleaseBuffer() over\nmillions of already-cached blocks, if you have to do exactly the same\nwork AND extra instructions for management.\n\nThere were two layers to the solution in this new version: First, I\nnow have a special case in read_stream_next_buffer(), a sort of\nopen-coded specialisation for behaviour A with no per-buffer data, and\nthat got rid of most of the regression, but some remained. 
Next,\nAndres pointed out that ReadBuffer() itself, even though it is now\nimplemented on top of StartReadBuffers(nblocks = 1), was still beating\nmy special case code that calls StartReadBuffers(nblocks = 1), even\nthough it looks about the same, because bufmgr.c was able to inline\nand specialise the latter for one block. To give streaming_read.c\nthat power from its home inside another translation units, we needed\nto export a special case singular StartReadBuffer() (investigation and\npatch by Andres, added as co-author). It just calls the plural\nfunction with nblocks = 1, but it gets inlined. So now the special\ncase for behaviour A drills through both layers, and hopefully now\nthere is no measurable regression.. need to test a bitt more. Of\ncourse we can't *beat* the old code in this case, yet, but...\n\n(We speculate that a future tree-based buffer mapping table might\nallow efficient lookup for a range of block numbers in one go, and\nthen it could be worth paying the book-keeping costs to find ranges.\nPerhaps behaviour A and the associated special case code could then be\ndeleted, as you'd probably want to use multi-block magic all the time,\nfor both for I/O and mapping table lookups. Or something like that?)",
"msg_date": "Thu, 28 Mar 2024 03:10:50 +1300",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Streaming I/O, vectored I/O (WIP)"
},
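As a usage illustration of the read_stream_begin_relation() / read_stream_next_buffer() / read_stream_end() names proposed above, a consumer could look like the sketch below. The callback and begin() parameter lists are not spelled out in full in the message, so the signatures, the ReadStream type name and the MyScanState helper here are assumptions; the begin/next/end structure and the block-number callback idea are as described.

    /* Invented per-scan state handed to the stream as callback_private data. */
    typedef struct MyScanState
    {
        BlockNumber next_block;
        BlockNumber nblocks;
    } MyScanState;

    /* Block-number callback: return InvalidBlockNumber at end of stream. */
    static BlockNumber
    my_next_block(ReadStream *stream, void *callback_private_data,
                  void *per_buffer_data)
    {
        MyScanState *scan = (MyScanState *) callback_private_data;

        if (scan->next_block >= scan->nblocks)
            return InvalidBlockNumber;
        return scan->next_block++;
    }

    /* Consumer: buffers come back in the order the callback produced them. */
    static void
    scan_with_stream(Relation rel, BufferAccessStrategy strategy, MyScanState *scan)
    {
        ReadStream *stream;
        Buffer      buf;

        stream = read_stream_begin_relation(0, strategy, rel, MAIN_FORKNUM,
                                            my_next_block, scan,
                                            0 /* per_buffer_data_size */);
        while ((buf = read_stream_next_buffer(stream, NULL)) != InvalidBuffer)
        {
            /* ... inspect the page in buf ... */
            ReleaseBuffer(buf);
        }
        read_stream_end(stream);
    }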
{
"msg_contents": "On Wed, Mar 27, 2024 at 10:11 AM Thomas Munro <[email protected]> wrote:\n>\n> I got rid of \"finished\" (now represented by distance == 0, I was\n> removing branches and variables). I got rid of \"started\", which can\n> now be deduced (used for suppressing advice when you're calling\n> _next() because you need a block and we need to read it immediately),\n> see the function argument suppress_advice.\n\nI started rebasing the sequential scan streaming read user over this\nnew version, and this change (finished now represented with distance\n== 0) made me realize that I'm not sure what to set distance to on\nrescan.\n\nFor sequential scan, I added a little reset function to the streaming\nread API (read_stream_reset()) that just releases all the buffers.\nPreviously, it set finished to true before releasing the buffers (to\nindicate it was done) and then set it back to false after. Now, I'll\nset distance to 0 before releasing the buffers and !0 after. I could\njust restore whatever value distance had before I set it to 0. Or I\ncould set it to 1. But, thinking about it, are we sure we want to ramp\nup in the same way on rescans? Maybe we want to use some information\nfrom the previous scan to determine what to set distance to? Maybe I'm\novercomplicating it...\n\n> Here is a new proposal for the names, updated in v10:\n>\n> read_stream_begin_relation()\n> read_stream_next_buffer()\n> void read_stream_end()\n\nPersonally, I'm happy with these.\n\n- Melanie\n\n\n",
"msg_date": "Wed, 27 Mar 2024 16:43:41 -0400",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Streaming I/O, vectored I/O (WIP)"
},
{
"msg_contents": "On Thu, Mar 28, 2024 at 9:43 AM Melanie Plageman\n<[email protected]> wrote:\n> For sequential scan, I added a little reset function to the streaming\n> read API (read_stream_reset()) that just releases all the buffers.\n> Previously, it set finished to true before releasing the buffers (to\n> indicate it was done) and then set it back to false after. Now, I'll\n> set distance to 0 before releasing the buffers and !0 after. I could\n> just restore whatever value distance had before I set it to 0. Or I\n> could set it to 1. But, thinking about it, are we sure we want to ramp\n> up in the same way on rescans? Maybe we want to use some information\n> from the previous scan to determine what to set distance to? Maybe I'm\n> overcomplicating it...\n\nI think 1 is good, as a rescan is even more likely to find the pages\nin cache, and if that turns out to be wrong it'll very soon adjust.\n\n\n",
"msg_date": "Thu, 28 Mar 2024 10:52:01 +1300",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Streaming I/O, vectored I/O (WIP)"
},
{
"msg_contents": "On Thu, Mar 28, 2024 at 10:52 AM Thomas Munro <[email protected]> wrote:\n> I think 1 is good, as a rescan is even more likely to find the pages\n> in cache, and if that turns out to be wrong it'll very soon adjust.\n\nHmm, no I take that back, it probably won't be due to the\nstrategy/ring... I see your point now... when I had a separate flag,\nthe old distance was remembered across but now I'm zapping it. I was\ntrying to minimise the number of variables that have to be tested in\nthe fast path by consolidating. Hmm, it is signed -- would it be too\nweird if we used a negative number for \"finished\", so we can just flip\nit on reset?\n\n\n",
"msg_date": "Thu, 28 Mar 2024 11:07:19 +1300",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Streaming I/O, vectored I/O (WIP)"
},
{
"msg_contents": "On Mon, Mar 25, 2024 at 2:02 AM Thomas Munro <[email protected]> wrote:\n> On Wed, Mar 20, 2024 at 4:04 AM Heikki Linnakangas <[email protected]> wrote:\n> > > /*\n> > > * Skip the initial ramp-up phase if the caller says we're going to be\n> > > * reading the whole relation. This way we start out doing full-sized\n> > > * reads.\n> > > */\n> > > if (flags & PGSR_FLAG_FULL)\n> > > pgsr->distance = Min(MAX_BUFFERS_PER_TRANSFER, pgsr->max_pinned_buffers);\n> > > else\n> > > pgsr->distance = 1;\n> >\n> > Should this be \"Max(MAX_BUFFERS_PER_TRANSFER,\n> > pgsr->max_pinned_buffers)\"? max_pinned_buffers cannot be smaller than\n> > MAX_BUFFERS_PER_TRANSFER though, given how it's initialized earlier. So\n> > perhaps just 'pgsr->distance = pgsr->max_pinned_buffers' ?\n>\n> Right, done.\n\nBTW I forgot to mention that in v10 I changed my mind and debugged my\nway back to the original coding, which now looks like this:\n\n /*\n * Skip the initial ramp-up phase if the caller says we're going to be\n * reading the whole relation. This way we start out assuming we'll be\n * doing full io_combine_limit sized reads (behavior B).\n */\n if (flags & READ_STREAM_FULL)\n stream->distance = Min(max_pinned_buffers, io_combine_limit);\n else\n stream->distance = 1;\n\nIt's not OK for distance to exceed max_pinned_buffers. But if\nmax_pinned_buffers is huge, remember that the goal here is to access\n'behavior B' meaning wide read calls but no unnecessary extra\nlook-ahead beyond what is needed for that, so we also don't want to\nexceed io_combine_limit. Therefore we want the minimum of those two\nnumbers. In practice on a non-toy system, that's always going to be\nio_combine_limit. But I'm not sure how many users of READ_STREAM_FULL\nthere will be, and I am starting to wonder if it's a good name for the\nflag, or even generally very useful. It's sort of saying \"I expect to\ndo I/O, and it'll be sequential, and I won't give up until the end\".\nBut how many users can really make those claims? pg_prewarm is unsual\nin that it contains an explicit assumption that the cache is cold and\nwe want to warm it up. But maybe we should just let the adaptive\nalgorithm do its thing. It only takes a few reads to go from 1 ->\nio_combine_limit.\n\nThinking harder, if we're going to keep this and not just be fully\nadaptive, perhaps there should be a flag READ_STREAM_COLD, where you\nhint that the data is not expected to be cached, and you'd combine\nthat with the _SEQUENTIAL hint. pg_prewarm hints _COLD | _SEQUENTIAL.\nThen the initial distance would be something uses the flag\ncombinations to select initial behavior A, B, C (and we'll quickly\nadjust if you're wrong):\n\n if (!(flags & READ_STREAM_COLD))\n stream->distance = 1;\n else if (flags & READ_STREAM_SEQUENTIAL)\n stream->distance = Min(max_pinned_buffers, io_combine_limit);\n else\n stream->distance = max_pinned_buffers;\n\nBut probably almost all users especially in the executor haven't\nreally got much of a clue what they're going to do so they'd use the\ninitial starting position of 1 (A) and we'd soo figure it out. Maybe\noverengineering for pg_prewarm is a waste of time and we should just\ndelete the flag instead and hard code 1.\n\n\n",
"msg_date": "Thu, 28 Mar 2024 14:02:38 +1300",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Streaming I/O, vectored I/O (WIP)"
},
{
"msg_contents": "On Thu, Mar 28, 2024 at 2:02 PM Thomas Munro <[email protected]> wrote:\n> ... In practice on a non-toy system, that's always going to be\n> io_combine_limit. ...\n\nAnd to be more explicit about that: you're right that we initialise\nmax_pinned_buffers such that it's usually at least io_combine_limit,\nbut then if you have a very small buffer pool it gets clobbered back\ndown again by LimitAdditionalBins() and may finish up as low as 1.\nYou're not allowed to pin more than 1/Nth of the whole buffer pool,\nwhere N is approximately max connections (well it's not exactly that\nbut that's the general idea). So it's a degenerate case, but it can\nhappen that max_pinned_buffers is lower than io_combine_limit and then\nit's important not to set distance higher or you'd exceed the allowed\nlimits (or more likely the circular data structure would implode).\n\n\n",
"msg_date": "Thu, 28 Mar 2024 14:20:38 +1300",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Streaming I/O, vectored I/O (WIP)"
},
{
"msg_contents": "New version with some cosmetic/comment changes, and Melanie's\nread_stream_reset() function merged, as required by her sequential\nscan user patch. I tweaked it slightly: it might as well share code\nwith read_stream_end(). I think setting distance = 1 is fine for now,\nand we might later want to adjust that as we learn more about more\ninteresting users of _reset().",
"msg_date": "Thu, 28 Mar 2024 18:12:10 +1300",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Streaming I/O, vectored I/O (WIP)"
},
{
"msg_contents": "Small bug fix: the condition in the final test at the end of\nread_stream_look_ahead() wasn't quite right. In general when looking\nahead, we don't need to start a read just because the pending read\nwould bring us up to stream->distance if submitted now (we'd prefer to\nbuild it all the way up to size io_combine_limit if we can), but if\nthat condition is met AND we have nothing pinned yet, then there is no\nchance for the read to grow bigger by a pinned buffer being consumed.\nFixed, comment updated.",
"msg_date": "Fri, 29 Mar 2024 00:06:44 +1300",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Streaming I/O, vectored I/O (WIP)"
},
{
"msg_contents": "On Fri, Mar 29, 2024 at 12:06 AM Thomas Munro <[email protected]> wrote:\n> Small bug fix: the condition in the final test at the end of\n> read_stream_look_ahead() wasn't quite right. In general when looking\n> ahead, we don't need to start a read just because the pending read\n> would bring us up to stream->distance if submitted now (we'd prefer to\n> build it all the way up to size io_combine_limit if we can), but if\n> that condition is met AND we have nothing pinned yet, then there is no\n> chance for the read to grow bigger by a pinned buffer being consumed.\n> Fixed, comment updated.\n\nOops, I sent the wrong/unfixed version. This version has the fix\ndescribed above.",
"msg_date": "Fri, 29 Mar 2024 00:16:47 +1300",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Streaming I/O, vectored I/O (WIP)"
},
{
"msg_contents": "> I spent most of the past few days trying to regain some lost\n> performance. Thanks to Andres for some key observations and help!\n> That began with reports from Bilal and Melanie (possibly related to\n> things Tomas had seen too, not sure) of regressions in all-cached\n> workloads, which I already improved a bit with the ABC algorithm that\n> minimised pinning for this case. That is, if there's no recent I/O so\n> we reach what I call behaviour A, it should try to do as little magic\n> as possible. But it turns out that wasn't enough! It is very hard to\n> beat a tight loop that just does ReadBuffer(), ReleaseBuffer() over\n> millions of already-cached blocks, if you have to do exactly the same\n> work AND extra instructions for management.\n\nI got a little nerd-sniped by that, and did some micro-benchmarking of \nmy own. I tested essentially this, with small values of 'nblocks' so \nthat all pages are in cache:\n\n\tfor (int i = 0; i < niters; i++)\n\t{\n\t\tfor (BlockNumber blkno = 0; blkno < nblocks; blkno++)\n\t\t{\n\t\t\tbuf = ReadBuffer(rel, blkno);\n\t\t\tReleaseBuffer(buf);\n\t\t}\n\t}\n\nThe results look like this (lower is better, test program and script \nattached):\n\nmaster (213c959a29):\t\t8.0 s\nstreaming-api v13:\t\t9.5 s\n\nThis test exercises just the ReadBuffer() codepath, to check if there is \na regression there. It does not exercise the new streaming APIs.\n\nSo looks like the streaming API patches add some overhead to the simple \nnon-streaming ReadBuffer() case. This is a highly narrow \nmicro-benchmark, of course, so even though this is a very \nperformance-sensitive codepath, we could perhaps accept a small \nregression there. In any real workload, you'd at least need to take the \nbuffer lock and read something from the page.\n\nBut can we do better? Aside from performance, I was never quite happy \nwith the BMR_REL/BMR_SMGR stuff we introduced in PG v16. I like having \none common struct like BufferManagerRelation that is used in all the \nfunctions, instead of having separate Relation and SMgrRelation variants \nof every function. But those macros feel a bit hacky and we are not \nconsistently using them in all the functions. Why is there no \nReadBuffer() variant that takes a BufferManagerRelation?\n\nThe attached patch expands the use of BufferManagerRelations. The \nprinciple now is that before calling any bufmgr function, you first \ninitialize a BufferManagerRelation struct, and pass that to the \nfunction. The initialization is done by the InitBMRForRel() or \nInitBMRForSmgr() function, which replace the BMR_REL/BMR_SMGR macros. \nThey are full-blown functions now because they do more setup upfront \nthan BMR_REL/BMR_SMGR. For example, InitBMRForRel() always initializes \nthe 'smgr' field, so that you don't need to repeat this pattern in all \nthe other functions:\n\n- /* Make sure our bmr's smgr and persistent are populated. */\n- if (bmr.smgr == NULL)\n- {\n- bmr.smgr = RelationGetSmgr(bmr.rel);\n- bmr.relpersistence = bmr.rel->rd_rel->relpersistence;\n- }\n\nInitializing the BufferManagerRelation is still pretty cheap, so it's \nfeasible to call it separately for every ReadBuffer() call. But you can \nalso reuse it across calls, if you read multiple pages in a loop, for \nexample. 
That saves a few cycles.\n\nThe microbenchmark results with these changes:\n\nmaster (213c959a29):\t\t8.0 s\nstreaming-api v13:\t\t9.5 s\nbmr-refactor\t\t\t8.4 s\nbmr-refactor, InitBMR once\t7.7 s\n\nThe difference between the \"bmr-refactor\" and \"initBMR once\" is that in \nthe \"initBMR once\" test, I modified the benchmark to call \nInitBMRForRel() just once, outside the loop. So that shows the benefit \nof reusing the BufferManagerRelation. This refactoring seems to make \nperformance regression smaller, even if you don't take advantage of \nreusing the BufferManagerRelation.\n\nThis also moves things around a little in ReadBuffer_common() (now \ncalled ReadBufferBMR). Instead of calling StartReadBuffer(), it calls \nPinBufferForBlock() directly. I tried doing that before the other \nrefactorings, but that alone didn't seem to make much difference. Not \nsure if it's needed, it's perhaps an orthogonal refactoring, but it's \nincluded here nevertheless.\n\nWhat do you think? The first three attached patches are your v13 patches \nunchanged. The fourth is the micro-benchmark I used. The last patch is \nthe interesting one.\n\n\nPS. To be clear, I'm happy with your v13 streaming patch set as it is. I \ndon't think this BufferManagerRelation refactoring is a show-stopper.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)",
"msg_date": "Thu, 28 Mar 2024 22:45:08 +0200",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Streaming I/O, vectored I/O (WIP)"
},
{
"msg_contents": "On Fri, Mar 29, 2024 at 9:45 AM Heikki Linnakangas <[email protected]> wrote:\n> master (213c959a29): 8.0 s\n> streaming-api v13: 9.5 s\n\nHmm, that's not great, and I think I know one factor that has\nconfounded my investigation and the conflicting reports I have\nreceived from a couple of people: some are using meson, which is\ndefaulting to -O3 by default, and others are using make which gives\nyou -O2 by default, but at -O2, GCC doesn't inline that\nStartReadBuffer specialisation that is used in the \"fast path\", and\npossibly more. Some of that gap is closed by using\npg_attribute_inline_always. Clang fails to inline at any level. So I\nshould probably use the \"always\" macro there because that is the\nintention. Still processing the rest of your email...\n\n\n",
"msg_date": "Fri, 29 Mar 2024 20:01:25 +1300",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Streaming I/O, vectored I/O (WIP)"
},
{
"msg_contents": "On 29/03/2024 09:01, Thomas Munro wrote:\n> On Fri, Mar 29, 2024 at 9:45 AM Heikki Linnakangas <[email protected]> wrote:\n>> master (213c959a29): 8.0 s\n>> streaming-api v13: 9.5 s\n> \n> Hmm, that's not great, and I think I know one factor that has\n> confounded my investigation and the conflicting reports I have\n> received from a couple of people: some are using meson, which is\n> defaulting to -O3 by default, and others are using make which gives\n> you -O2 by default, but at -O2, GCC doesn't inline that\n> StartReadBuffer specialisation that is used in the \"fast path\", and\n> possibly more. Some of that gap is closed by using\n> pg_attribute_inline_always. Clang fails to inline at any level. So I\n> should probably use the \"always\" macro there because that is the\n> intention. Still processing the rest of your email...\n\nAh yeah, I also noticed that the inlining didn't happen with some \ncompilers and flags. I use a mix of gcc and clang and meson and autoconf \nin my local environment.\n\nThe above micro-benchmarks were with meson and gcc -O3. GCC version:\n\n$ gcc --version\ngcc (Debian 12.2.0-14) 12.2.0\nCopyright (C) 2022 Free Software Foundation, Inc.\nThis is free software; see the source for copying conditions. There is NO\nwarranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.\n\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Fri, 29 Mar 2024 18:28:29 +0200",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Streaming I/O, vectored I/O (WIP)"
},
{
"msg_contents": "1. I tried out Tomas's suggestion ALTER TABLESPACE ts SET\n(io_combine_limit = ...). I like it, it's simple and works nicely.\nUnfortunately we don't have support for units like '128kB' in\nreloptions.c, so for now it requires a number of blocks. That's not\ngreat, so we should probably fix that before merging it, so I'm\nleaving that patch (v14-0004) separate, hopefully for later.\n\n2. I also tried Tomas's suggestion of inventing a way to tell\nPostgreSQL what the OS readahead window size is. That allows for\nbetter modelling of whether the kernel is going to consider an access\npattern to be sequential. Again, the ALTER TABLESPACE version would\nprobably need unit support. This is initially just for\nexperimentation as it came up in discussions of BHS behaviour. I was\nagainst this idea originally as it seemed like more complexity to have\nto explain and tune and I'm not sure if it's worth the trouble... but\nin fact we are already in the business of second guessing kernel\nlogic, so there doesn't seem to be any good reason not to do it a bit\nbetter if we can. Do you think I have correctly understood what Linux\nis doing? The name I came up with is: effective_io_readahead_window.\nI wanted to make clear that it's a property of the system we are\ntelling it about, not to be confused with our own look-ahead concept\nor whatever. Better names very welcome. This is also in a separate\npatch (v14-0005), left for later.\n\n3. Another question I wondered about while retesting: does this need\nto be so low? I don't think so, so I've added a patch for that.\n\nsrc/include/port/pg_iovec.h:#define PG_IOV_MAX Min(IOV_MAX, 32)\n\nNow that I'm not using an array full of arrays of that size, I don't\ncare so much how big we make that 32 (= 256kB @ 8kB), which clamps\nio_combine_limit. I think 128 (= 1MB @ 8kB) might be a decent\narbitrary number. Sometimes we use it to size stack arrays, so I\ndon't want to make it insanely large, but 128 should be fine. I think\nit would be good to be able to at least experiment with up to 1MB (I'm\nnot saying it's a good idea to do it, who knows?, just that there\nisn't a technical reason why not to allow it AFAIK). FWIW every\nsystem on our target list that has p{read,write}v has IOV_MAX == 1024\n(I checked {Free,Net,Open}BSD, macOS, illumos and Linux), so the\nMin(IOV_MAX, ...) really only clamps the systems where\npg_{read,write}v fall back to loop-based emulation (Windows, Solaris)\nwhich is fine.\n\nPG_IOV_MAX also affects the routine that initialises new WAL files. I\ndon't currently see a downside to doing that in 1MB chunks, as there\nwas nothing sacred about the previous arbitrary number and the code\ndeals with short writes by retrying as it should.\n\n4. I agree with Heikki's complaints about the BMR interface. It\nshould be made more consistent and faster. I didn't want to make all\nof those changes touching AMs etc a dependency though, so I spent some\ntime trying to squeeze out regressions using some of these clues about\ncalling conventions, likely hints, memory access and batching. I'm\ntotally open to later improvements and refactoring of that stuff\nlater!\n\nAttached is the version with the best results I've managed to get. My\ntest is GCC -O3, pg_prewarm of a table of 220_000_000 integers =\n7.6GB, which sometimes comes out around the same ~250ms on master and\nstreaming pg_prewarm v14 on a random cloud ARM box I'm testing with,\nbut not always, sometimes it's ~5-7ms more. 
(Unfortunately I don't\nhave access to good benchmarking equipment right now, better numbers\nwelcome.) Two new ideas:\n\n* give fast path mode a single flag, instead of testing all the\nconditions for every block\n* give fast path mode a little buffer of future block numbers, so it\ncan call the callback in batches\n\nI'd tried that batch-calling thing before, and results were\ninconclusive, but I think sometimes it helps a bit. Note that it\nreplaces the 'unget' thing from before and it is possibly a tiny bit\nnicer anyway.\n\nI'm a bit stumped about how to improve this further -- if anyone has\nany ideas for further improvements I'm all ears.\n\nZooming back out of micro-benchmark mode, it must be pretty hard to\nsee in a real workload that actually does something with the buffers,\nlike a sequential scan. Earlier complaints about all-cached\nsequential scan regressions were resolved many versions ago AFAIK by\nminimising pin count in that case. I just tried Melanie's streaming\nsequential scan patch, with a simple SELECT COUNT(*) WHERE i = -1,\nwith the same all-cached table of 220 million integers. Patched\nconsistently comes out ahead for all-in-kernel-cache none-in-PG-cache:\n~14.7-> ~14.4, and all-in-PG-cache ~13.5s -> ~13.3s (which I don't\nhave an explanation for). I don't claim any of that is particularly\nscientific, I just wanted to note that single digit numbers of\nmilliseconds of regression while pinning a million pages is clearly\nlost in the noise of other effects once you add in real query\nexecution. That's half a dozen nanoseconds per page if I counted\nright.\n\nSo, I am finally starting to think we should commit this, and decide\nwhich user patches are candidates.",
"msg_date": "Mon, 1 Apr 2024 14:01:05 +1300",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Streaming I/O, vectored I/O (WIP)"
},
{
"msg_contents": "I had been planning to commit v14 this morning but got cold feet with\nthe BMR-based interface. Heikki didn't like it much, and in the end,\nneither did I. I have now removed it, and it seems much better. No\nother significant changes, just parameter types and inlining details.\nFor example:\n\n * read_stream_begin_relation() now takes a Relation, likes its name says\n * StartReadBuffers()'s operation takes smgr and optional rel\n * ReadBuffer_common() takes smgr and optional rel\n\nReadBuffer() (which calls ReadBuffer_common() which calls\nStartReadBuffer() as before) now shows no regression in a tight loop\nover ~1 million already-in-cache pages (something Heikki had observed\nbefore and could only completely fix with a change that affected all\ncallers). The same test using read_stream.c is still slightly slower,\n~1 million pages -in-cache pages 301ms -> 308ms, which seems\nacceptable to me and could perhaps be chased down with more study of\ninlining/specialisation. As mentioned before, it doesn't seem to be\nmeasurable once you actually do something with the pages.\n\nIn some ways BMR was better than the \"fake RelationData\" concept\n(another attempt at wrestling with the relation vs storage duality,\nthat is, the online vs recovery duality). But in other ways it was\nworse: a weird inconsistent mixture of pass-by-pointer and\npass-by-value interfaces that required several code paths to handle it\nbeing only partially initialised, which turned out to be wasted cycles\nimplicated in regressions, despite which it is not even very nice to\nuse anyway. I'm sure it could be made to work better, but I'm not yet\nsure it's really needed. In later work for recovery I will need to\nadd a separate constructor read_stream_begin_smgr_something() anyway\nfor other reasons (multi-relation streaming, different callback) and\nperhaps also a separate StartReadBuffersSmgr() if it saves measurable\ncycles to strip out branches. Maybe it was all just premature\npessimisation.\n\nSo this is the version I'm going to commit shortly, barring objections.",
"msg_date": "Tue, 2 Apr 2024 21:39:49 +1300",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Streaming I/O, vectored I/O (WIP)"
},
{
"msg_contents": "On Tue, Apr 2, 2024 at 9:39 PM Thomas Munro <[email protected]> wrote:\n> So this is the version I'm going to commit shortly, barring objections.\n\nAnd done, after fixing a small snafu with smgr-only reads coming from\nCreateAndCopyRelationData() (BM_PERMANENT would be\nincorrectly/unnecessarily set for unlogged tables).\n\nHere are the remaining patches discussed in this thread. They give\ntablespace-specific io_combine_limit, effective_io_readahead_window\n(is this useful?), and up-to-1MB io_combine_limit (is this useful?).\nI think the first two would probably require teaching reloption.c how\nto use guc.c's parse_int() and unit flags, but I won't have time to\nlook at that for this release so I'll just leave these here.\n\nOn the subject of guc.c, this is a terrible error message... did I do\nsomething wrong?\n\npostgres=# set io_combine_limit = '42MB';\nERROR: 5376 8kB is outside the valid range for parameter\n\"io_combine_limit\" (1 .. 32)",
"msg_date": "Wed, 3 Apr 2024 13:31:11 +1300",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Streaming I/O, vectored I/O (WIP)"
},
{
"msg_contents": "On Tue, Apr 2, 2024 at 8:32 PM Thomas Munro <[email protected]> wrote:\n>\n> Here are the remaining patches discussed in this thread. They give\n> tablespace-specific io_combine_limit, effective_io_readahead_window\n> (is this useful?), and up-to-1MB io_combine_limit (is this useful?).\n> I think the first two would probably require teaching reloption.c how\n> to use guc.c's parse_int() and unit flags, but I won't have time to\n> look at that for this release so I'll just leave these here.\n>\n> On the subject of guc.c, this is a terrible error message... did I do\n> something wrong?\n>\n> postgres=# set io_combine_limit = '42MB';\n> ERROR: 5376 8kB is outside the valid range for parameter\n> \"io_combine_limit\" (1 .. 32)\n\nWell, GUC_UNIT_BLOCKS interpolates the block limit into the error\nmessage string (get_config_unit_name()). But, I can't imagine this\nerror message is clear for any of the GUCs using GUC_UNIT_BLOCKS. I\nwould think some combination of the two would be helpful, like \"43008\nkB (5376 blocks) is outside of the valid range for parameter\". The\nuser can check what their block size is. I don't think we need to\ninterpolate and print the block size in the error message.\n\nOn another note, since io_combine_limit, when specified in size,\nrounds up to the nearest multiple of blocksize, it might be worth\nmentioning this in the io_combine_limit docs at some point. I checked\ndocs for another GUC_UNIT_BLOCKS guc, backend_flush_after, and it\nalludes to this.\n\n- Melanie\n\n\n",
"msg_date": "Wed, 3 Apr 2024 08:23:53 -0400",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Streaming I/O, vectored I/O (WIP)"
},
{
"msg_contents": "Hi,\n\nOn Tue, 2 Apr 2024 at 11:40, Thomas Munro <[email protected]> wrote:\n>\n> I had been planning to commit v14 this morning but got cold feet with\n> the BMR-based interface. Heikki didn't like it much, and in the end,\n> neither did I. I have now removed it, and it seems much better. No\n> other significant changes, just parameter types and inlining details.\n> For example:\n>\n> * read_stream_begin_relation() now takes a Relation, likes its name says\n> * StartReadBuffers()'s operation takes smgr and optional rel\n> * ReadBuffer_common() takes smgr and optional rel\n\nRead stream objects can be created only using Relations now. There\ncould be read stream users which do not have a Relation but\nSMgrRelations. So, I created another constructor for the read streams\nwhich use SMgrRelations instead of Relations. Related patch is\nattached.\n\n-- \nRegards,\nNazir Bilal Yavuz\nMicrosoft",
"msg_date": "Sun, 7 Apr 2024 20:33:34 +0300",
"msg_from": "Nazir Bilal Yavuz <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Streaming I/O, vectored I/O (WIP)"
},
{
"msg_contents": "On Sun, Apr 7, 2024 at 1:33 PM Nazir Bilal Yavuz <[email protected]> wrote:\n>\n> Hi,\n>\n> On Tue, 2 Apr 2024 at 11:40, Thomas Munro <[email protected]> wrote:\n> >\n> > I had been planning to commit v14 this morning but got cold feet with\n> > the BMR-based interface. Heikki didn't like it much, and in the end,\n> > neither did I. I have now removed it, and it seems much better. No\n> > other significant changes, just parameter types and inlining details.\n> > For example:\n> >\n> > * read_stream_begin_relation() now takes a Relation, likes its name says\n> > * StartReadBuffers()'s operation takes smgr and optional rel\n> > * ReadBuffer_common() takes smgr and optional rel\n>\n> Read stream objects can be created only using Relations now. There\n> could be read stream users which do not have a Relation but\n> SMgrRelations. So, I created another constructor for the read streams\n> which use SMgrRelations instead of Relations. Related patch is\n> attached.\n\nThis patch LGTM\n\n- Melanie\n\n\n",
"msg_date": "Sun, 7 Apr 2024 14:00:24 -0400",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Streaming I/O, vectored I/O (WIP)"
},
{
"msg_contents": "Hi,\n\nOn Sun, 7 Apr 2024 at 20:33, Nazir Bilal Yavuz <[email protected]> wrote:\n>\n> Hi,\n>\n> On Tue, 2 Apr 2024 at 11:40, Thomas Munro <[email protected]> wrote:\n> >\n> > I had been planning to commit v14 this morning but got cold feet with\n> > the BMR-based interface. Heikki didn't like it much, and in the end,\n> > neither did I. I have now removed it, and it seems much better. No\n> > other significant changes, just parameter types and inlining details.\n> > For example:\n> >\n> > * read_stream_begin_relation() now takes a Relation, likes its name says\n> > * StartReadBuffers()'s operation takes smgr and optional rel\n> > * ReadBuffer_common() takes smgr and optional rel\n>\n> Read stream objects can be created only using Relations now. There\n> could be read stream users which do not have a Relation but\n> SMgrRelations. So, I created another constructor for the read streams\n> which use SMgrRelations instead of Relations. Related patch is\n> attached.\n\nAfter sending this, I realized that I forgot to add persistence value\nto the new constructor. While working on it I also realized that\ncurrent code sets persistence in PinBufferForBlock() function and this\nfunction is called for each block, which can be costly. So, I moved\nsetting persistence to the out of PinBufferForBlock() function.\n\nSetting persistence outside of the PinBufferForBlock() function (0001)\nand creating the new constructor that uses SMgrRelations (0002) are\nattached.\n\n-- \nRegards,\nNazir Bilal Yavuz\nMicrosoft",
"msg_date": "Mon, 8 Apr 2024 00:01:26 +0300",
"msg_from": "Nazir Bilal Yavuz <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Streaming I/O, vectored I/O (WIP)"
},
{
"msg_contents": "Hi,\n\nOn Mon, 8 Apr 2024 at 00:01, Nazir Bilal Yavuz <[email protected]> wrote:\n>\n> Hi,\n>\n> On Sun, 7 Apr 2024 at 20:33, Nazir Bilal Yavuz <[email protected]> wrote:\n> >\n> > Hi,\n> >\n> > On Tue, 2 Apr 2024 at 11:40, Thomas Munro <[email protected]> wrote:\n> > >\n> > > I had been planning to commit v14 this morning but got cold feet with\n> > > the BMR-based interface. Heikki didn't like it much, and in the end,\n> > > neither did I. I have now removed it, and it seems much better. No\n> > > other significant changes, just parameter types and inlining details.\n> > > For example:\n> > >\n> > > * read_stream_begin_relation() now takes a Relation, likes its name says\n> > > * StartReadBuffers()'s operation takes smgr and optional rel\n> > > * ReadBuffer_common() takes smgr and optional rel\n> >\n> > Read stream objects can be created only using Relations now. There\n> > could be read stream users which do not have a Relation but\n> > SMgrRelations. So, I created another constructor for the read streams\n> > which use SMgrRelations instead of Relations. Related patch is\n> > attached.\n>\n> After sending this, I realized that I forgot to add persistence value\n> to the new constructor. While working on it I also realized that\n> current code sets persistence in PinBufferForBlock() function and this\n> function is called for each block, which can be costly. So, I moved\n> setting persistence to the out of PinBufferForBlock() function.\n>\n> Setting persistence outside of the PinBufferForBlock() function (0001)\n> and creating the new constructor that uses SMgrRelations (0002) are\n> attached.\n\nMelanie noticed there was a 'sgmr -> smgr' typo in 0002. Fixed in attached.\n\n-- \nRegards,\nNazir Bilal Yavuz\nMicrosoft",
"msg_date": "Mon, 8 Apr 2024 00:30:18 +0300",
"msg_from": "Nazir Bilal Yavuz <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Streaming I/O, vectored I/O (WIP)"
},
{
"msg_contents": "I've attached a patch with a few typo fixes and what looks like an\nincorrect type for max_ios. It's an int16 and I think it needs to be\nan int. Doing \"max_ios = Min(max_ios, PG_INT16_MAX);\" doesn't do\nanything when max_ios is int16.\n\nDavid",
"msg_date": "Wed, 24 Apr 2024 14:32:09 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Streaming I/O, vectored I/O (WIP)"
},
{
"msg_contents": "On Wed, 24 Apr 2024 at 14:32, David Rowley <[email protected]> wrote:\n> I've attached a patch with a few typo fixes and what looks like an\n> incorrect type for max_ios. It's an int16 and I think it needs to be\n> an int. Doing \"max_ios = Min(max_ios, PG_INT16_MAX);\" doesn't do\n> anything when max_ios is int16.\n\nNo feedback, so I'll just push this in a few hours unless anyone has anything.\n\nDavid\n\n\n",
"msg_date": "Wed, 1 May 2024 14:50:57 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Streaming I/O, vectored I/O (WIP)"
},
{
"msg_contents": "On Wed, May 1, 2024 at 2:51 PM David Rowley <[email protected]> wrote:\n> On Wed, 24 Apr 2024 at 14:32, David Rowley <[email protected]> wrote:\n> > I've attached a patch with a few typo fixes and what looks like an\n> > incorrect type for max_ios. It's an int16 and I think it needs to be\n> > an int. Doing \"max_ios = Min(max_ios, PG_INT16_MAX);\" doesn't do\n> > anything when max_ios is int16.\n>\n> No feedback, so I'll just push this in a few hours unless anyone has anything.\n\nPatch looks correct, thanks. Please do. (Sorry, running a bit behind\non email ATM... I also have a few more typos around here from an\noff-list email from Mr Lakhin, will get to that soon...)\n\n\n",
"msg_date": "Wed, 1 May 2024 15:12:41 +1200",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Streaming I/O, vectored I/O (WIP)"
},
{
"msg_contents": "On Wed, Nov 29, 2023 at 1:17 AM Thomas Munro <[email protected]> wrote:\n> Done. I like it, I just feel a bit bad about moving the p*v()\n> replacement functions around a couple of times already! I figured it\n> might as well be static inline even if we use the fallback (= Solaris\n> and Windows).\n\nJust for the record, since I'd said things like the above a few times\nwhile writing about this stuff: Solaris 11.4.69 has gained preadv()\nand pwritev(). That's interesting because it means that there will\nsoon be no liive Unixoid operating systems left without them, and the\nfallback code in src/include/port/pg_iovec.h will, in practice, be\nonly for Windows. I wondered if that might have implications for how\nwe code or comment stuff like that, but it still seems to make sense\nas we have it.\n\n(I don't think Windows can have a real synchronous implementation; the\nkernel knows how to do scatter/gather, a feature implemented\nspecifically for databases, but only in asynchronous (\"overlapped\") +\ndirect I/O mode, a difference I don't know how to hide at this level.\nIn later AIO work we should be able to use it as intended, but not by\npretending to be Unix like this.)\n\n\n",
"msg_date": "Sat, 25 May 2024 10:00:21 +1200",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Streaming I/O, vectored I/O (WIP)"
},
{
"msg_contents": "Hi,\n\nIt seems that Heikki's 'v9.heikki-0007-Trivial-comment-fixes.patch'\n[1] is partially applied, the top comment is not updated. The attached\npatch just updates it.\n\n[1] https://www.postgresql.org/message-id/289a1c0e-8444-4009-a8c2-c2d77ced6f07%40iki.fi\n\n-- \nRegards,\nNazir Bilal Yavuz\nMicrosoft",
"msg_date": "Wed, 10 Jul 2024 19:21:59 +0300",
"msg_from": "Nazir Bilal Yavuz <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Streaming I/O, vectored I/O (WIP)"
},
{
"msg_contents": "On Wed, Jul 10, 2024 at 07:21:59PM +0300, Nazir Bilal Yavuz wrote:\n> Hi,\n> \n> It seems that Heikki's 'v9.heikki-0007-Trivial-comment-fixes.patch'\n> [1] is partially applied, the top comment is not updated. The attached\n> patch just updates it.\n> \n> [1] https://www.postgresql.org/message-id/289a1c0e-8444-4009-a8c2-c2d77ced6f07%40iki.fi\n\nThanks, patch applied to master.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n",
"msg_date": "Fri, 16 Aug 2024 21:12:08 -0400",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Streaming I/O, vectored I/O (WIP)"
}
] |
[
{
"msg_contents": "pg_stats_export is a view that aggregates pg_statistic data by relation\noid and stores all of the column statistical data in a system-indepdent\n(i.e.\nno oids, collation information removed, all MCV values rendered as text)\njsonb format, along with the relation's relname, reltuples, and relpages\nfrom pg_class, as well as the schemaname from pg_namespace.\n\npg_import_rel_stats is a function which takes a relation oid,\nserver_version_num, num_tuples, num_pages, and a column_stats jsonb in\na format matching that of pg_stats_export, and applies that data to\nthe specified pg_class and pg_statistics rows for the relation\nspecified.\n\nThe most common use-case for such a function is in upgrades and\ndump/restore, wherein the upgrade process would capture the output of\npg_stats_export into a regular table, perform the upgrade, and then\njoin that data to the existing pg_class rows, updating statistics to be\na close approximation of what they were just prior to the upgrade. The\nhope is that these statistics are better than the early stages of\n--analyze-in-stages and can be applied faster, thus reducing system\ndowntime.\n\nThe values applied to pg_class are done inline, which is to say\nnon-transactionally. The values applied to pg_statitics are applied\ntransactionally, as if an ANALYZE operation was reading from a\ncheat-sheet.\n\nThis function and view will need to be followed up with corresponding\nones for pg_stastitic_ext and pg_stastitic_ext_data, and while we would\nlikely never backport the import functions, we can have user programs\ndo the same work as the export views such that statistics can be brought\nforward from versions as far back as there is jsonb to store it.\n\nWhile the primary purpose of the import function(s) are to reduce downtime\nduring an upgrade, it is not hard to see that they could also be used to\nfacilitate tuning and development operations, asking questions like \"how\nmight\nthis query plan change if this table has 1000x rows in it?\", without\nactually\nputting those rows into the table.",
"msg_date": "Thu, 31 Aug 2023 02:47:31 -0400",
"msg_from": "Corey Huinker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Statistics Import and Export"
},
{
"msg_contents": "On Thu, Aug 31, 2023 at 12:17 PM Corey Huinker <[email protected]> wrote:\n>\n> While the primary purpose of the import function(s) are to reduce downtime\n> during an upgrade, it is not hard to see that they could also be used to\n> facilitate tuning and development operations, asking questions like \"how might\n> this query plan change if this table has 1000x rows in it?\", without actually\n> putting those rows into the table.\n\nThanks. I think this may be used with postgres_fdw to import\nstatistics directly from the foreigns server, whenever possible,\nrather than fetching the rows and building it locally. If it's known\nthat the stats on foreign and local servers match for a foreign table,\nwe will be one step closer to accurately estimating the cost of a\nforeign plan locally rather than through EXPLAIN.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Thu, 31 Aug 2023 12:37:12 +0530",
"msg_from": "Ashutosh Bapat <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": ">\n>\n> Thanks. I think this may be used with postgres_fdw to import\n> statistics directly from the foreigns server, whenever possible,\n> rather than fetching the rows and building it locally. If it's known\n> that the stats on foreign and local servers match for a foreign table,\n> we will be one step closer to accurately estimating the cost of a\n> foreign plan locally rather than through EXPLAIN.\n>\n>\nYeah, that use makes sense as well, and if so then postgres_fdw would\nlikely need to be aware of the appropriate query for several versions back\n- they change, not by much, but they do change. So now we'd have each query\ntext in three places: a system view, postgres_fdw, and the bin/scripts\npre-upgrade program. So I probably should consider the best way to share\nthose in the codebase.\n\nThanks. I think this may be used with postgres_fdw to import\nstatistics directly from the foreigns server, whenever possible,\nrather than fetching the rows and building it locally. If it's known\nthat the stats on foreign and local servers match for a foreign table,\nwe will be one step closer to accurately estimating the cost of a\nforeign plan locally rather than through EXPLAIN.Yeah, that use makes sense as well, and if so then postgres_fdw would likely need to be aware of the appropriate query for several versions back - they change, not by much, but they do change. So now we'd have each query text in three places: a system view, postgres_fdw, and the bin/scripts pre-upgrade program. So I probably should consider the best way to share those in the codebase.",
"msg_date": "Thu, 31 Aug 2023 17:18:32 -0400",
"msg_from": "Corey Huinker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": ">\n>\n> Yeah, that use makes sense as well, and if so then postgres_fdw would\n> likely need to be aware of the appropriate query for several versions back\n> - they change, not by much, but they do change. So now we'd have each query\n> text in three places: a system view, postgres_fdw, and the bin/scripts\n> pre-upgrade program. So I probably should consider the best way to share\n> those in the codebase.\n>\n>\nAttached is v2 of this patch.\n\nNew features:\n* imports index statistics. This is not strictly accurate: it re-computes\nindex statistics the same as ANALYZE does, which is to say it derives those\nstats entirely from table column stats, which are imported, so in that\nsense we're getting index stats without touching the heap.\n* now support extended statistics except for MCV, which is currently\nserialized as an difficult-to-decompose bytea field.\n* bare-bones CLI script pg_export_stats, which extracts stats on databases\nback to v12 (tested) and could work back to v10.\n* bare-bones CLI script pg_import_stats, which obviously only works on\ncurrent devel dbs, but can take exports from older versions.",
"msg_date": "Tue, 31 Oct 2023 03:25:17 -0400",
"msg_from": "Corey Huinker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": "\nOn 10/31/23 08:25, Corey Huinker wrote:\n>\n> Attached is v2 of this patch.\n> \n> New features:\n> * imports index statistics. This is not strictly accurate: it \n> re-computes index statistics the same as ANALYZE does, which is to\n> say it derives those stats entirely from table column stats, which\n> are imported, so in that sense we're getting index stats without\n> touching the heap.\n\nMaybe I just don't understand, but I'm pretty sure ANALYZE does not\nderive index stats from column stats. It actually builds them from the\nrow sample.\n\n> * now support extended statistics except for MCV, which is currently \n> serialized as an difficult-to-decompose bytea field.\n\nDoesn't pg_mcv_list_items() already do all the heavy work?\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 1 Nov 2023 21:07:27 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": ">\n>\n>\n> Maybe I just don't understand, but I'm pretty sure ANALYZE does not\n> derive index stats from column stats. It actually builds them from the\n> row sample.\n>\n\nThat is correct, my error.\n\n\n>\n> > * now support extended statistics except for MCV, which is currently\n> > serialized as an difficult-to-decompose bytea field.\n>\n> Doesn't pg_mcv_list_items() already do all the heavy work?\n>\n\nThanks! I'll look into that.\n\nThe comment below in mcv.c made me think there was no easy way to get\noutput.\n\n/*\n * pg_mcv_list_out - output routine for type pg_mcv_list.\n *\n * MCV lists are serialized into a bytea value, so we simply call byteaout()\n * to serialize the value into text. But it'd be nice to serialize that into\n * a meaningful representation (e.g. for inspection by people).\n *\n * XXX This should probably return something meaningful, similar to what\n * pg_dependencies_out does. Not sure how to deal with the deduplicated\n * values, though - do we want to expand that or not?\n */\n\n\nMaybe I just don't understand, but I'm pretty sure ANALYZE does not\nderive index stats from column stats. It actually builds them from the\nrow sample.That is correct, my error. \n\n> * now support extended statistics except for MCV, which is currently \n> serialized as an difficult-to-decompose bytea field.\n\nDoesn't pg_mcv_list_items() already do all the heavy work?Thanks! I'll look into that.The comment below in mcv.c made me think there was no easy way to get output./* * pg_mcv_list_out - output routine for type pg_mcv_list. * * MCV lists are serialized into a bytea value, so we simply call byteaout() * to serialize the value into text. But it'd be nice to serialize that into * a meaningful representation (e.g. for inspection by people). * * XXX This should probably return something meaningful, similar to what * pg_dependencies_out does. Not sure how to deal with the deduplicated * values, though - do we want to expand that or not? */",
"msg_date": "Thu, 2 Nov 2023 01:01:49 -0400",
"msg_from": "Corey Huinker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": "On 11/2/23 06:01, Corey Huinker wrote:\n> \n> \n> Maybe I just don't understand, but I'm pretty sure ANALYZE does not\n> derive index stats from column stats. It actually builds them from the\n> row sample.\n> \n> \n> That is correct, my error.\n> \n> \n> \n> > * now support extended statistics except for MCV, which is currently\n> > serialized as an difficult-to-decompose bytea field.\n> \n> Doesn't pg_mcv_list_items() already do all the heavy work?\n> \n> \n> Thanks! I'll look into that.\n> \n> The comment below in mcv.c made me think there was no easy way to get\n> output.\n> \n> /*\n> * pg_mcv_list_out - output routine for type pg_mcv_list.\n> *\n> * MCV lists are serialized into a bytea value, so we simply call byteaout()\n> * to serialize the value into text. But it'd be nice to serialize that into\n> * a meaningful representation (e.g. for inspection by people).\n> *\n> * XXX This should probably return something meaningful, similar to what\n> * pg_dependencies_out does. Not sure how to deal with the deduplicated\n> * values, though - do we want to expand that or not?\n> */\n> \n\nYeah, that was the simplest output function possible, it didn't seem\nworth it to implement something more advanced. pg_mcv_list_items() is\nmore convenient for most needs, but it's quite far from the on-disk\nrepresentation.\n\nThat's actually a good question - how closely should the exported data\nbe to the on-disk format? I'd say we should keep it abstract, not tied\nto the details of the on-disk format (which might easily change between\nversions).\n\nI'm a bit confused about the JSON schema used in pg_statistic_export\nview, though. It simply serializes stakinds, stavalues, stanumbers into\narrays ... which works, but why not to use the JSON nesting? I mean,\nthere could be a nested document for histogram, MCV, ... with just the\ncorrect fields.\n\n {\n ...\n histogram : { stavalues: [...] },\n mcv : { stavalues: [...], stanumbers: [...] },\n ...\n }\n\nand so on. Also, what does TRIVIAL stand for?\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 2 Nov 2023 14:52:20 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": "On Mon, Nov 6, 2023 at 4:16 PM Corey Huinker <[email protected]> wrote:\n>>\n>>\n>> Yeah, that use makes sense as well, and if so then postgres_fdw would likely need to be aware of the appropriate query for several versions back - they change, not by much, but they do change. So now we'd have each query text in three places: a system view, postgres_fdw, and the bin/scripts pre-upgrade program. So I probably should consider the best way to share those in the codebase.\n>>\n>\n> Attached is v2 of this patch.\n\nWhile applying Patch, I noticed few Indentation issues:\n1) D:\\Project\\Postgres>git am v2-0003-Add-pg_import_rel_stats.patch\n.git/rebase-apply/patch:1265: space before tab in indent.\n errmsg(\"invalid statistics\nformat, stxndeprs must be array or null\");\n.git/rebase-apply/patch:1424: trailing whitespace.\n errmsg(\"invalid statistics format,\nstxndistinct attnums elements must be strings, but one is %s\",\n.git/rebase-apply/patch:1315: new blank line at EOF.\n+\nwarning: 3 lines add whitespace errors.\nApplying: Add pg_import_rel_stats().\n\n2) D:\\Project\\Postgres>git am v2-0004-Add-pg_export_stats-pg_import_stats.patch\n.git/rebase-apply/patch:282: trailing whitespace.\nconst char *export_query_v14 =\n.git/rebase-apply/patch:489: trailing whitespace.\nconst char *export_query_v12 =\n.git/rebase-apply/patch:648: trailing whitespace.\nconst char *export_query_v10 =\n.git/rebase-apply/patch:826: trailing whitespace.\n\n.git/rebase-apply/patch:1142: trailing whitespace.\n result = PQexec(conn,\nwarning: squelched 4 whitespace errors\nwarning: 9 lines add whitespace errors.\nApplying: Add pg_export_stats, pg_import_stats.\n\nThanks and Regards,\nShubham Khanna.\n\n\n",
"msg_date": "Mon, 6 Nov 2023 16:19:41 +0530",
"msg_from": "Shubham Khanna <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": "On Tue, Oct 31, 2023 at 12:55 PM Corey Huinker <[email protected]> wrote:\n>>\n>>\n>> Yeah, that use makes sense as well, and if so then postgres_fdw would likely need to be aware of the appropriate query for several versions back - they change, not by much, but they do change. So now we'd have each query text in three places: a system view, postgres_fdw, and the bin/scripts pre-upgrade program. So I probably should consider the best way to share those in the codebase.\n>>\n>\n> Attached is v2 of this patch.\n>\n> New features:\n> * imports index statistics. This is not strictly accurate: it re-computes index statistics the same as ANALYZE does, which is to say it derives those stats entirely from table column stats, which are imported, so in that sense we're getting index stats without touching the heap.\n> * now support extended statistics except for MCV, which is currently serialized as an difficult-to-decompose bytea field.\n> * bare-bones CLI script pg_export_stats, which extracts stats on databases back to v12 (tested) and could work back to v10.\n> * bare-bones CLI script pg_import_stats, which obviously only works on current devel dbs, but can take exports from older versions.\n>\n\nI did a small experiment with your patches. In a separate database\n\"fdw_dst\" I created a table t1 and populated it with 100K rows\n#create table t1 (a int, b int);\n#insert into t1 select i, i + 1 from generate_series(1, 100000) i;\n#analyse t1;\n\nIn database \"postgres\" on the same server, I created a foreign table\npointing to t1\n#create server fdw_dst_server foreign data wrapper postgres_fdw\nOPTIONS ( dbname 'fdw_dst', port '5432');\n#create user mapping for public server fdw_dst_server ;\n#create foreign table t1 (a int, b int) server fdw_dst_server;\n\nThe estimates are off\n#explain select * from t1 where a = 100;\n QUERY PLAN\n-----------------------------------------------------------\n Foreign Scan on t1 (cost=100.00..142.26 rows=13 width=8)\n(1 row)\n\nExport and import stats for table t1\n$ pg_export_stats -d fdw_dst | pg_import_stats -d postgres\n\ngives accurate estimates\n#explain select * from t1 where a = 100;\n QUERY PLAN\n-----------------------------------------------------------\n Foreign Scan on t1 (cost=100.00..1793.02 rows=1 width=8)\n(1 row)\n\nIn this simple case it's working like a charm.\n\nThen I wanted to replace all ANALYZE commands in postgres_fdw.sql with\nimport and export of statistics. But I can not do that since it\nrequires table names to match. Foreign table metadata stores the\nmapping between local and remote table as well as column names. Import\ncan use that mapping to install the statistics appropriately. We may\nwant to support a command or function in postgres_fdw to import\nstatistics of all the tables that point to a given foreign server.\nThat may be some future work based on your current patches.\n\nI have not looked at the code though.\n\n--\nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Tue, 7 Nov 2023 14:53:54 +0530",
"msg_from": "Ashutosh Bapat <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": ">\n> Yeah, that was the simplest output function possible, it didn't seem\n>\nworth it to implement something more advanced. pg_mcv_list_items() is\n> more convenient for most needs, but it's quite far from the on-disk\n> representation.\n>\n\nI was able to make it work.\n\n\n>\n> That's actually a good question - how closely should the exported data\n> be to the on-disk format? I'd say we should keep it abstract, not tied\n> to the details of the on-disk format (which might easily change between\n> versions).\n>\n\nFor the most part, I chose the exported data json types and formats in a\nway that was the most accommodating to cstring input functions. So, while\nso many of the statistic values are obviously only ever integers/floats,\nthose get stored as a numeric data type which lacks direct\nnumeric->int/float4/float8 functions (though we could certainly create\nthem, and I'm not against that), casting them to text lets us leverage\npg_strtoint16, etc.\n\n\n>\n> I'm a bit confused about the JSON schema used in pg_statistic_export\n> view, though. It simply serializes stakinds, stavalues, stanumbers into\n> arrays ... which works, but why not to use the JSON nesting? I mean,\n> there could be a nested document for histogram, MCV, ... with just the\n> correct fields.\n>\n> {\n> ...\n> histogram : { stavalues: [...] },\n> mcv : { stavalues: [...], stanumbers: [...] },\n> ...\n> }\n>\n\nThat's a very good question. I went with this format because it was fairly\nstraightforward to code in SQL using existing JSON/JSONB functions, and\nthat's what we will need if we want to export statistics on any server\ncurrently in existence. I'm certainly not locked in with the current\nformat, and if it can be shown how to transform the data into a superior\nformat, I'd happily do so.\n\nand so on. Also, what does TRIVIAL stand for?\n>\n\nIt's currently serving double-duty for \"there are no stats in this slot\"\nand the situations where the stats computation could draw no conclusions\nabout the data.\n\nAttached is v3 of this patch. Key features are:\n\n* Handles regular pg_statistic stats for any relation type.\n* Handles extended statistics.\n* Export views pg_statistic_export and pg_statistic_ext_export to allow\ninspection of existing stats and saving those values for later use.\n* Import functions pg_import_rel_stats() and pg_import_ext_stats() which\ntake Oids as input. This is intentional to allow stats from one object to\nbe imported into another object.\n* User scripts pg_export_stats and pg_import stats, which offer a primitive\nway to serialize all the statistics of one database and import them into\nanother.\n* Has regression test coverage for both with a variety of data types.\n* Passes my own manual test of extracting all of the stats from a v15\nversion of the popular \"dvdrental\" example database, as well as some\nadditional extended statistics objects, and importing them into a\ndevelopment database.\n* Import operations never touch the heap of any relation outside of\npg_catalog. As such, this should be significantly faster than even the most\ncursory analyze operation, and therefore should be useful in upgrade\nsituations, allowing the database to work with \"good enough\" stats more\nquickly, while still allowing for regular autovacuum to recalculate the\nstats \"for real\" at some later point.\n\nThe relation statistics code was adapted from similar features in\nanalyze.c, but is now done in a query context. 
As before, the\nrowcount/pagecount values are updated on pg_class in a non-transactional\nfashion to avoid table bloat, while the updates to pg_statistic and\npg_statistic_ext_data are done transactionally.\n\nThe existing statistics _store() functions were leveraged wherever\npractical, so much so that the extended statistics import is mostly just\nadapting the existing _build() functions into _import() functions which\npull their values from JSON rather than computing the statistics.\n\nCurrent concerns are:\n\n1. I had to code a special-case exception for MCELEM stats on array data\ntypes, so that the array_in() call uses the element type rather than the\narray type. I had assumed that the existing examine_attribute() functions\nwould have properly derived the typoid for that column, but it appears to\nnot be the case, and I'm clearly missing how the existing code gets it\nright.\n2. This hasn't been tested with external custom datatypes, but if they have\na custom typanalyze function things should be ok.\n3. While I think I have cataloged all of the schema-structural changes to\npg_statistic[_ext[_data]] since version 10, I may have missed a case where\nthe schema stayed the same, but the values are interpreted differently.\n4. I don't yet have a complete vision for how these tools will be used by\npg_upgrade and pg_dump/restore, the places where these will provide the\nbiggest win for users.",
"msg_date": "Wed, 13 Dec 2023 05:26:04 -0500",
"msg_from": "Corey Huinker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": "On 13/12/2023 17:26, Corey Huinker wrote:> 4. I don't yet have a \ncomplete vision for how these tools will be used\n> by pg_upgrade and pg_dump/restore, the places where these will provide \n> the biggest win for users.\n\nSome issues here with docs:\n\nfunc.sgml:28465: parser error : Opening and ending tag mismatch: sect1 \nline 26479 and sect2\n </sect2>\n ^\n\nAlso, as I remember, we already had some attempts to invent dump/restore \nstatistics [1,2]. They were stopped with the problem of type \nverification. What if the definition of the type has changed between the \ndump and restore? As I see in the code, Importing statistics you just \ncheck the column name and don't see into the type.\n\n[1] Backup and recovery of pg_statistic\nhttps://www.postgresql.org/message-id/flat/724322880.K8vzik8zPz%40abook\n[2] Re: Ideas about a better API for postgres_fdw remote estimates\nhttps://www.postgresql.org/message-id/7a40707d-1758-85a2-7bb1-6e5775518e64%40postgrespro.ru\n\n-- \nregards,\nAndrei Lepikhov\nPostgres Professional\n\n\n\n",
"msg_date": "Fri, 15 Dec 2023 15:36:08 +0700",
"msg_from": "Andrei Lepikhov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": "On Fri, Dec 15, 2023 at 3:36 AM Andrei Lepikhov <[email protected]>\nwrote:\n\n> On 13/12/2023 17:26, Corey Huinker wrote:> 4. I don't yet have a\n> complete vision for how these tools will be used\n> > by pg_upgrade and pg_dump/restore, the places where these will provide\n> > the biggest win for users.\n>\n> Some issues here with docs:\n>\n> func.sgml:28465: parser error : Opening and ending tag mismatch: sect1\n> line 26479 and sect2\n> </sect2>\n> ^\n>\n\nApologies, will fix.\n\n\n>\n> Also, as I remember, we already had some attempts to invent dump/restore\n> statistics [1,2]. They were stopped with the problem of type\n> verification. What if the definition of the type has changed between the\n> dump and restore? As I see in the code, Importing statistics you just\n> check the column name and don't see into the type.\n>\n\nWe look up the imported statistics via column name, that is correct.\n\nHowever, the values in stavalues and mcv and such are stored purely as\ntext, so they must be casted using the input functions for that particular\ndatatype. If that column definition changed, or the underlying input\nfunction changed, the stats import of that particular table would fail. It\nshould be noted, however, that those same input functions were used to\nbring the data into the table via restore, so it would have already failed\non that step. Either way, the structure of the table has effectively\nchanged, so failure to import those statistics would be a good thing.\n\n\n>\n> [1] Backup and recovery of pg_statistic\n> https://www.postgresql.org/message-id/flat/724322880.K8vzik8zPz%40abook\n\n\nThat proposal sought to serialize enough information on the old server such\nthat rows could be directly inserted into pg_statistic on the new server.\nAs was pointed out at the time, version N of a server cannot know what the\nformat of pg_statistic will be in version N+1.\n\nThis patch avoids that problem by inspecting the structure of the object to\nbe faux-analyzed, and using that to determine what parts of the JSON to\nfetch, and what datatype to cast values to in cases like mcv and\nstavaluesN. The exported JSON has no oids in it whatseover, all elements\nsubject to casting on import have already been cast to text, and the record\nreturned has the server version number of the producing system, and the\nimport function can use that to determine how it interprets the data it\nfinds.\n\n\n>\n> [2] Re: Ideas about a better API for postgres_fdw remote estimates\n>\n> https://www.postgresql.org/message-id/7a40707d-1758-85a2-7bb1-6e5775518e64%40postgrespro.ru\n>\n>\nThis one seems to be pulling oids from the remote server, and we can't\nguarantee their stability across systems, especially for objects and\noperators from extensions. I tried to go the route of extracting the full\ntext name of an operator, but discovered that the qualified names, in\naddition to being unsightly, were irrelevant because we can't insert stats\nthat disagree about type with the attribute/expression. 
So it didn't matter\nwhat type the remote system thought it had, the local system was going to\ncoerce it into the expected data type or ereport() trying.\n\nI think there is hope for having do_analyze() run a remote query fetching\nthe remote table's exported stats and then storing them locally, possibly\nafter some modification, and that would save us from having to sample a\nremote table.",
"msg_date": "Fri, 15 Dec 2023 23:30:25 -0500",
"msg_from": "Corey Huinker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": "Hi,\n\nI finally had time to look at the last version of the patch, so here's a\ncouple thoughts and questions in somewhat random order. Please take this\nas a bit of a brainstorming and push back if you disagree some of my\ncomments.\n\nIn general, I like the goal of this patch - not having statistics is a\ncommon issue after an upgrade, and people sometimes don't even realize\nthey need to run analyze. So, it's definitely worth improving.\n\nI'm not entirely sure about the other use case - allowing people to\ntweak optimizer statistics on a running cluster, to see what would be\nthe plan in that case. Or more precisely - I agree that would be an\ninteresting and useful feature, but maybe the interface should not be\nthe same as for the binary upgrade use case?\n\n\ninterfaces\n----------\n\nWhen I thought about the ability to dump/load statistics in the past, I\nusually envisioned some sort of DDL that would do the export and import.\nSo for example we'd have EXPORT STATISTICS / IMPORT STATISTICS commands,\nor something like that, and that'd do all the work. This would mean\nstats are \"first-class citizens\" and it'd be fairly straightforward to\nadd this into pg_dump, for example. Or at least I think so ...\n\nAlternatively we could have the usual \"functional\" interface, with a\nfunctions to export/import statistics, replacing the DDL commands.\n\nUnfortunately, none of this works for the pg_upgrade use case, because\nexisting cluster versions would not support this new interface, of\ncourse. That's a significant flaw, as it'd make this useful only for\nupgrades of future versions.\n\nSo I think for the pg_upgrade use case, we don't have much choice other\nthan using \"custom\" export through a view, which is what the patch does.\n\nHowever, for the other use case (tweaking optimizer stats) this is not\nreally an issue - that always happens on the same instance, so no issue\nwith not having the \"export\" function and so on. I'd bet there are more\nconvenient ways to do this than using the export view. I'm sure it could\nshare a lot of the infrastructure, ofc.\n\nI suggest we focus on the pg_upgrade use case for now. In particular, I\nthink we really need to find a good way to integrate this into\npg_upgrade. I'm not against having custom CLI commands, but it's still a\nmanual thing - I wonder if we could extend pg_dump to dump stats, or\nmake it built-in into pg_upgrade in some way (possibly disabled by\ndefault, or something like that).\n\n\nJSON format\n-----------\n\nAs for the JSON format, I wonder if we need that at all? Isn't that an\nunnecessary layer of indirection? Couldn't we simply dump pg_statistic\nand pg_statistic_ext_data in CSV, or something like that? The amount of\nnew JSONB code seems to be very small, so it's OK I guess.\n\nI'm still a bit unsure about the \"right\" JSON schema. I find it a bit\ninconvenient that the JSON objects mimic the pg_statistic schema very\nclosely. In particular, it has one array for stakind values, another\narray for stavalues, array for stanumbers etc. I understand generating\nthis JSON in SQL is fairly straightforward, and for the pg_upgrade use\ncase it's probably OK. 
But my concern is it's not very convenient for\nthe \"manual tweaking\" use case, because the \"related\" fields are\nscattered in different parts of the JSON.\n\nThat's pretty much why I envisioned a format \"grouping\" the arrays for a\nparticular type of statistics (MCV, histogram) into the same object, as\nfor example in\n\n {\n \"mcv\" : {\"values\" : [...], \"frequencies\" : [...]}\n \"histogram\" : {\"bounds\" : [...]}\n }\n\nBut that's probably much harder to generate from plain SQL (at least I\nthink so, I haven't tried).\n\n\ndata missing in the export\n--------------------------\n\nI think the data needs to include more information. Maybe not for the\npg_upgrade use case, where it's mostly guaranteed not to change, but for\nthe \"manual tweak\" use case it can change. And I don't think we want two\ndifferent formats - we want one, working for everything.\n\nConsider for example about the staopN and stacollN fields - if we clone\nthe stats from one table to the other, and the table uses different\ncollations, will that still work? Similarly, I think we should include\nthe type of each column, because it's absolutely not guaranteed the\nimport function will fail if the type changes. For example, if the type\nchanges from integer to text, that will work, but the ordering will\nabsolutely not be the same. And so on.\n\nFor the extended statistics export, I think we need to include also the\nattribute names and expressions, because these can be different between\nthe statistics. And not only that - the statistics values reference the\nattributes by positions, but if the two tables have the attributes in a\ndifferent order (when ordered by attnum), that will break stuff.\n\n\nmore strict checks\n------------------\n\nI think the code should be a bit more \"defensive\" when importing stuff,\nand do at least some sanity checks. For the pg_upgrade use case this\nshould be mostly non-issue (except for maybe helping to detect bugs\nearlier), but for the \"manual tweak\" use case it's much more important.\n\nBy this I mean checks like:\n\n* making sure the frequencies in MCV lists are not obviously wrong\n (outside [0,1], sum exceeding > 1.0, etc.)\n\n* cross-checking that stanumbers/stavalues make sense (e.g. that MCV has\n both arrays while histogram has only stavalues, that the arrays have\n the same length for MCV, etc.)\n\n* checking there are no duplicate stakind values (e.g. two MCV lists)\n\nThis is another reason why I was thinking the current JSON format may be\na bit inconvenient, because it loads the fields separately, making the\nchecks harder. But I guess it could be done after loading everything, as\na separate phase.\n\nNot sure if all the checks need to be regular elog(ERROR), perhaps some\ncould/should be just asserts.\n\n\nminor questions\n---------------\n\n1) Should the views be called pg_statistic_export or pg_stats_export?\nPerhaps pg_stats_export is better, because the format is meant to be\nhuman-readable (rather than 100% internal).\n\n2) It's not very clear what \"non-transactional update\" of pg_class\nfields actually means. Does that mean we update the fields in-place,\ncan't be rolled back, is not subject to MVCC or what? I suspect users\nwon't know unless the docs say that explicitly.\n\n3) The \"statistics.c\" code should really document the JSON structure. Or\nmaybe if we plan to use this for other purposes, it should be documented\nin the SGML?\n\nActually, this means that the use supported cases determine if the\nexpected JSON structure is part of the API. 
For pg_upgrade we could keep\nit as \"internal\" and maybe change it as needed, but for \"manual tweak\"\nit'd become part of the public API.\n\n4) Why do we need the separate \"replaced\" flags in import_stakinds? Can\nit happen that collreplaces/opreplaces differ from kindreplaces?\n\n5) What happens in we import statistics for a table that already has\nsome statistics? Will this discard the existing statistics, or will this\nmerge them somehow? (I think we should always discard the existing\nstats, and keep only the new version.)\n\n6) What happens if we import extended stats with mismatching definition?\nFor example, what if the \"new\" statistics object does not have \"mcv\"\nenabled, but the imported data do include MCV? What if the statistics do\nhave the same number of \"dimensions\" but not the same number of columns\nand expressions?\n\n7) The func.sgml additions in 0007 seems a bit strange, particularly the\nfirst sentence of the paragraph.\n\n8) While experimenting with the patch, I noticed this:\n\n create table t (a int, b int, c text);\n create statistics s on a, b, c, (a+b), (a-b) from t;\n\n create table t2 (a text, b text, c text);\n create statistics s2 on a, c from t2;\n\n select pg_import_ext_stats(\n (select oid from pg_statistic_ext where stxname = 's2'),\n (select server_version_num from pg_statistic_ext_export\n where ext_stats_name = 's'),\n (select stats from pg_statistic_ext_export\n where ext_stats_name = 's'));\n\nWARNING: statistics import has 5 mcv dimensions, but the expects 2.\nSkipping excess dimensions.\nERROR: statistics import has 5 mcv dimensions, but the expects 2.\nSkipping excess dimensions.\n\nI guess we should not trigger WARNING+ERROR with the same message.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 26 Dec 2023 02:18:56 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": "On Tue, Dec 26, 2023 at 02:18:56AM +0100, Tomas Vondra wrote:\n> interfaces\n> ----------\n> \n> When I thought about the ability to dump/load statistics in the past, I\n> usually envisioned some sort of DDL that would do the export and import.\n> So for example we'd have EXPORT STATISTICS / IMPORT STATISTICS commands,\n> or something like that, and that'd do all the work. This would mean\n> stats are \"first-class citizens\" and it'd be fairly straightforward to\n> add this into pg_dump, for example. Or at least I think so ...\n> \n> Alternatively we could have the usual \"functional\" interface, with a\n> functions to export/import statistics, replacing the DDL commands.\n> \n> Unfortunately, none of this works for the pg_upgrade use case, because\n> existing cluster versions would not support this new interface, of\n> course. That's a significant flaw, as it'd make this useful only for\n> upgrades of future versions.\n> \n> So I think for the pg_upgrade use case, we don't have much choice other\n> than using \"custom\" export through a view, which is what the patch does.\n> \n> However, for the other use case (tweaking optimizer stats) this is not\n> really an issue - that always happens on the same instance, so no issue\n> with not having the \"export\" function and so on. I'd bet there are more\n> convenient ways to do this than using the export view. I'm sure it could\n> share a lot of the infrastructure, ofc.\n> \n> I suggest we focus on the pg_upgrade use case for now. In particular, I\n> think we really need to find a good way to integrate this into\n> pg_upgrade. I'm not against having custom CLI commands, but it's still a\n> manual thing - I wonder if we could extend pg_dump to dump stats, or\n> make it built-in into pg_upgrade in some way (possibly disabled by\n> default, or something like that).\n\nI have some thoughts on this too. I understand the desire to add\nsomething that can be used for upgrades _to_ PG 17, but I am concerned\nthat this will give us a cumbersome API that will hamper future\ndevelopment. I think we should develop the API we want, regardless of\nhow useful it is for upgrades _to_ PG 17, and then figure out what\nshort-term hacks we can add to get it working for upgrades _to_ PG 17; \nthese hacks can eventually be removed. Even if they can't be removed,\nthey are export-only and we can continue developing the import SQL\ncommand cleanly, and I think import is going to need the most long-term\nmaintenance.\n\nI think we need a robust API to handle two cases:\n\n* changes in how we store statistics\n* changes in how how data type values are represented in the statistics\n\nWe have had such changes in the past, and I think these two issues are\nwhat have prevented import/export of statistics up to this point.\nDeveloping an API that doesn't cleanly handle these will cause long-term\npain.\n\nIn summary, I think we need an SQL-level command for this. I think we\nneed to embed the Postgres export version number into the statistics\nexport file (maybe in the COPY header), and then load the file via COPY\ninternally (not JSON) into a temporary table that we know matches the\nexported Postgres version. We then need to use SQL to make any\nadjustments to it before loading it into pg_statistic. Doing that\ninternally in JSON just isn't efficient. 
If people want JSON for such\ncases, I suggest we add a JSON format to COPY.\n\nI think we can then look at pg_upgrade to see if we can simulate the\nexport action which can use the statistics import SQL command.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n",
"msg_date": "Tue, 26 Dec 2023 13:15:14 -0500",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> I think we need a robust API to handle two cases:\n\n> * changes in how we store statistics\n> * changes in how how data type values are represented in the statistics\n\n> We have had such changes in the past, and I think these two issues are\n> what have prevented import/export of statistics up to this point.\n> Developing an API that doesn't cleanly handle these will cause long-term\n> pain.\n\nAgreed.\n\n> In summary, I think we need an SQL-level command for this.\n\nI think a SQL command is an actively bad idea. It'll just add development\nand maintenance overhead that we don't need. When I worked on this topic\nyears ago at Salesforce, I had things set up with simple functions, which\npg_dump would invoke by writing more or less\n\n\tSELECT pg_catalog.load_statistics(....);\n\nThis has a number of advantages, not least of which is that an extension\ncould plausibly add compatible functions to older versions. The trick,\nas you say, is to figure out what the argument lists ought to be.\nUnfortunately I recall few details of what I wrote for Salesforce,\nbut I think I had it broken down in a way where there was a separate\nfunction call occurring for each pg_statistic \"slot\", thus roughly\n\nload_statistics(table regclass, attname text, stakind int, stavalue ...);\n\nI might have had a separate load_statistics_xxx function for each\nstakind, which would ease the issue of deciding what the datatype\nof \"stavalue\" is. As mentioned already, we'd also need some sort of\nversion identifier, and we'd expect the load_statistics() functions\nto be able to transform the data if the old version used a different\nrepresentation. I agree with the idea that an explicit representation\nof the source table attribute's type would be wise, too.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 26 Dec 2023 14:19:04 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": "On 12/26/23 20:19, Tom Lane wrote:\n> Bruce Momjian <[email protected]> writes:\n>> I think we need a robust API to handle two cases:\n> \n>> * changes in how we store statistics\n>> * changes in how how data type values are represented in the statistics\n> \n>> We have had such changes in the past, and I think these two issues are\n>> what have prevented import/export of statistics up to this point.\n>> Developing an API that doesn't cleanly handle these will cause long-term\n>> pain.\n> \n> Agreed.\n> \n\nI agree the format is important - we don't want to end up with a format\nthat's cumbersome or inconvenient to use. But I don't think the proposed\nformat is somewhat bad in those respects - it mostly reflects how we\nstore statistics and if I was designing a format for humans, it might\nlook a bit differently. But that's not the goal here, IMHO.\n\nI don't quite understand the two cases above. Why should this affect how\nwe store statistics? Surely, making the statistics easy to use for the\noptimizer is much more important than occasional export/import.\n\n>> In summary, I think we need an SQL-level command for this.\n> \n> I think a SQL command is an actively bad idea. It'll just add development\n> and maintenance overhead that we don't need. When I worked on this topic\n> years ago at Salesforce, I had things set up with simple functions, which\n> pg_dump would invoke by writing more or less\n> \n> \tSELECT pg_catalog.load_statistics(....);\n> \n> This has a number of advantages, not least of which is that an extension\n> could plausibly add compatible functions to older versions. The trick,\n> as you say, is to figure out what the argument lists ought to be.\n> Unfortunately I recall few details of what I wrote for Salesforce,\n> but I think I had it broken down in a way where there was a separate\n> function call occurring for each pg_statistic \"slot\", thus roughly\n> \n> load_statistics(table regclass, attname text, stakind int, stavalue ...);\n> \n> I might have had a separate load_statistics_xxx function for each\n> stakind, which would ease the issue of deciding what the datatype\n> of \"stavalue\" is. As mentioned already, we'd also need some sort of\n> version identifier, and we'd expect the load_statistics() functions\n> to be able to transform the data if the old version used a different\n> representation. I agree with the idea that an explicit representation\n> of the source table attribute's type would be wise, too.\n> \n\nYeah, this is pretty much what I meant by \"functional\" interface. But if\nI said maybe the format implemented by the patch is maybe too close to\nhow we store the statistics, then this has exactly the same issue. And\nit has other issues too, I think - it breaks down the stats into\nmultiple function calls, so ensuring the sanity/correctness of whole\nsets of statistics gets much harder, I think.\n\nI'm not sure about the extension idea. Yes, we could have an extension\nproviding such functions, but do we have any precedent of making\npg_upgrade dependent on an external extension? I'd much rather have\nsomething built-in that just works, especially if we intend to make it\nthe default behavior (which I think should be our aim here).\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 27 Dec 2023 13:08:47 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": "On Wed, Dec 27, 2023 at 01:08:47PM +0100, Tomas Vondra wrote:\n> On 12/26/23 20:19, Tom Lane wrote:\n> > Bruce Momjian <[email protected]> writes:\n> >> I think we need a robust API to handle two cases:\n> > \n> >> * changes in how we store statistics\n> >> * changes in how how data type values are represented in the statistics\n> > \n> >> We have had such changes in the past, and I think these two issues are\n> >> what have prevented import/export of statistics up to this point.\n> >> Developing an API that doesn't cleanly handle these will cause long-term\n> >> pain.\n> > \n> > Agreed.\n> > \n> \n> I agree the format is important - we don't want to end up with a format\n> that's cumbersome or inconvenient to use. But I don't think the proposed\n> format is somewhat bad in those respects - it mostly reflects how we\n> store statistics and if I was designing a format for humans, it might\n> look a bit differently. But that's not the goal here, IMHO.\n> \n> I don't quite understand the two cases above. Why should this affect how\n> we store statistics? Surely, making the statistics easy to use for the\n> optimizer is much more important than occasional export/import.\n\nThe two items above were to focus on getting a solution that can easily\nhandle future statistics storage changes. I figured we would want to\nmanipulate the data as a table internally so I am confused why we would\nexport JSON instead of a COPY format. I didn't think we were changing\nhow we internall store or use the statistics.\n\n> >> In summary, I think we need an SQL-level command for this.\n> > \n> > I think a SQL command is an actively bad idea. It'll just add development\n> > and maintenance overhead that we don't need. When I worked on this topic\n> > years ago at Salesforce, I had things set up with simple functions, which\n> > pg_dump would invoke by writing more or less\n> > \n> > \tSELECT pg_catalog.load_statistics(....);\n> > \n> > This has a number of advantages, not least of which is that an extension\n> > could plausibly add compatible functions to older versions. The trick,\n> > as you say, is to figure out what the argument lists ought to be.\n> > Unfortunately I recall few details of what I wrote for Salesforce,\n> > but I think I had it broken down in a way where there was a separate\n> > function call occurring for each pg_statistic \"slot\", thus roughly\n> > \n> > load_statistics(table regclass, attname text, stakind int, stavalue ...);\n> > \n> > I might have had a separate load_statistics_xxx function for each\n> > stakind, which would ease the issue of deciding what the datatype\n> > of \"stavalue\" is. As mentioned already, we'd also need some sort of\n> > version identifier, and we'd expect the load_statistics() functions\n> > to be able to transform the data if the old version used a different\n> > representation. I agree with the idea that an explicit representation\n> > of the source table attribute's type would be wise, too.\n> > \n> \n> Yeah, this is pretty much what I meant by \"functional\" interface. But if\n> I said maybe the format implemented by the patch is maybe too close to\n> how we store the statistics, then this has exactly the same issue. 
And\n> it has other issues too, I think - it breaks down the stats into\n> multiple function calls, so ensuring the sanity/correctness of whole\n> sets of statistics gets much harder, I think.\n\nI was suggesting an SQL command because this feature is going to need a\nlot of options and do a lot of different things, I am afraid, and a\nsingle function might be too complex to manage.\n\n> I'm not sure about the extension idea. Yes, we could have an extension\n> providing such functions, but do we have any precedent of making\n> pg_upgrade dependent on an external extension? I'd much rather have\n> something built-in that just works, especially if we intend to make it\n> the default behavior (which I think should be our aim here).\n\nUh, an extension seems nice to allow people in back branches to install\nit, but not for normal usage.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n",
"msg_date": "Wed, 27 Dec 2023 10:29:01 -0500",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": "On Mon, Dec 25, 2023 at 8:18 PM Tomas Vondra <[email protected]>\nwrote:\n\n> Hi,\n>\n> I finally had time to look at the last version of the patch, so here's a\n> couple thoughts and questions in somewhat random order. Please take this\n> as a bit of a brainstorming and push back if you disagree some of my\n> comments.\n>\n> In general, I like the goal of this patch - not having statistics is a\n> common issue after an upgrade, and people sometimes don't even realize\n> they need to run analyze. So, it's definitely worth improving.\n>\n> I'm not entirely sure about the other use case - allowing people to\n> tweak optimizer statistics on a running cluster, to see what would be\n> the plan in that case. Or more precisely - I agree that would be an\n> interesting and useful feature, but maybe the interface should not be\n> the same as for the binary upgrade use case?\n>\n>\n> interfaces\n> ----------\n>\n> When I thought about the ability to dump/load statistics in the past, I\n> usually envisioned some sort of DDL that would do the export and import.\n> So for example we'd have EXPORT STATISTICS / IMPORT STATISTICS commands,\n> or something like that, and that'd do all the work. This would mean\n> stats are \"first-class citizens\" and it'd be fairly straightforward to\n> add this into pg_dump, for example. Or at least I think so ...\n>\n> Alternatively we could have the usual \"functional\" interface, with a\n> functions to export/import statistics, replacing the DDL commands.\n>\n> Unfortunately, none of this works for the pg_upgrade use case, because\n> existing cluster versions would not support this new interface, of\n> course. That's a significant flaw, as it'd make this useful only for\n> upgrades of future versions.\n>\n\nThis was the reason I settled on the interface that I did: while we can\ncreate whatever interface we want for importing the statistics, we would\nneed to be able to extract stats from databases using only the facilities\navailable in those same databases, and then store that in a medium that\ncould be conveyed across databases, either by text files or by saving them\noff in a side table prior to upgrade. JSONB met the criteria.\n\n\n>\n> So I think for the pg_upgrade use case, we don't have much choice other\n> than using \"custom\" export through a view, which is what the patch does.\n>\n> However, for the other use case (tweaking optimizer stats) this is not\n> really an issue - that always happens on the same instance, so no issue\n> with not having the \"export\" function and so on. I'd bet there are more\n> convenient ways to do this than using the export view. I'm sure it could\n> share a lot of the infrastructure, ofc.\n>\n\nSo, there is a third use case - foreign data wrappers. When analyzing a\nforeign table, at least one in the postgresql_fdw family of foreign\nservers, we should be able to send a query specific to the version and\ndialect of that server, get back the JSONB, and import those results. That\nuse case may be more tangible to you than the tweak/tuning case.\n\n\n>\n>\n>\n> JSON format\n> -----------\n>\n> As for the JSON format, I wonder if we need that at all? Isn't that an\n> unnecessary layer of indirection? Couldn't we simply dump pg_statistic\n> and pg_statistic_ext_data in CSV, or something like that? The amount of\n> new JSONB code seems to be very small, so it's OK I guess.\n>\n\nI see a few problems with dumping pg_statistic[_ext_data]. 
The first is\nthat the importer now has to understand all of the past formats of those\ntwo tables. The next is that the tables are chock full of Oids that don't\nnecessarily carry forward. I could see us having a text-ified version of\nthose two tables, but we'd need that for all previous iterations of those\ntable formats. Instead, I put the burden on the stats export to de-oid the\ndata and make it *_in() function friendly.\n\n\n> That's pretty much why I envisioned a format \"grouping\" the arrays for a\n> particular type of statistics (MCV, histogram) into the same object, as\n> for example in\n>\n> {\n> \"mcv\" : {\"values\" : [...], \"frequencies\" : [...]}\n> \"histogram\" : {\"bounds\" : [...]}\n> }\n>\n\nI agree that would be a lot more readable, and probably a lot more\ndebuggable. But I went into this unsure if there could be more than one\nstats slot of a given kind per table. Knowing that they must be unique\nhelps.\n\n\n> But that's probably much harder to generate from plain SQL (at least I\n> think so, I haven't tried).\n>\n\nI think it would be harder, but far from impossible.\n\n\n\n> data missing in the export\n> --------------------------\n>\n> I think the data needs to include more information. Maybe not for the\n> pg_upgrade use case, where it's mostly guaranteed not to change, but for\n> the \"manual tweak\" use case it can change. And I don't think we want two\n> different formats - we want one, working for everything.\n>\n\nI\"m not against this at all, and I started out doing that, but the\nqualified names of operators got _ugly_, and I quickly realized that what I\nwas generating wouldn't matter, either the input data would make sense for\nthe attribute's stats or it would fail trying.\n\n\n> Consider for example about the staopN and stacollN fields - if we clone\n> the stats from one table to the other, and the table uses different\n> collations, will that still work? Similarly, I think we should include\n> the type of each column, because it's absolutely not guaranteed the\n> import function will fail if the type changes. For example, if the type\n> changes from integer to text, that will work, but the ordering will\n> absolutely not be the same. And so on.\n>\n\nI can see including the type of the column, that's a lot cleaner than the\noperator names for sure, and I can see us rejecting stats or sections of\nstats in certain situations. Like in your example, if the collation\nchanged, then reject all \"<\" op stats but keep the \"=\" ones.\n\n\n> For the extended statistics export, I think we need to include also the\n> attribute names and expressions, because these can be different between\n> the statistics. And not only that - the statistics values reference the\n> attributes by positions, but if the two tables have the attributes in a\n> different order (when ordered by attnum), that will break stuff.\n>\n\nCorrect me if I'm wrong, but I thought expression parse trees change _a\nlot_ from version to version?\n\nAttribute reordering is a definite vulnerability of the current\nimplementation, so an attribute name export might be a way to mitigate that.\n\n\n>\n> * making sure the frequencies in MCV lists are not obviously wrong\n> (outside [0,1], sum exceeding > 1.0, etc.)\n>\n\n+1\n\n\n>\n> * cross-checking that stanumbers/stavalues make sense (e.g. 
that MCV has\n> both arrays while histogram has only stavalues, that the arrays have\n> the same length for MCV, etc.)\n>\n\nTo this end, there's an edge-case hack in the code where I have to derive\nthe array elemtype. I had thought that examine_attribute() or\nstd_typanalyze() was going to do that for me, but it didn't. Very much want\nyour input there.\n\n\n>\n> * checking there are no duplicate stakind values (e.g. two MCV lists)\n>\n\nPer previous comment, it's good to learn these restrictions.\n\n\n> Not sure if all the checks need to be regular elog(ERROR), perhaps some\n> could/should be just asserts.\n>\n\nFor this first pass, all errors were one-size fits all, safe for the\nWARNING vs ERROR.\n\n\n>\n>\n> minor questions\n> ---------------\n>\n> 1) Should the views be called pg_statistic_export or pg_stats_export?\n> Perhaps pg_stats_export is better, because the format is meant to be\n> human-readable (rather than 100% internal).\n>\n\nI have no opinion on what the best name would be, and will go with\nconsensus.\n\n\n>\n> 2) It's not very clear what \"non-transactional update\" of pg_class\n> fields actually means. Does that mean we update the fields in-place,\n> can't be rolled back, is not subject to MVCC or what? I suspect users\n> won't know unless the docs say that explicitly.\n>\n\nCorrect. Cannot be rolled back, not subject to MVCC.\n\n\n\n> 3) The \"statistics.c\" code should really document the JSON structure. Or\n> maybe if we plan to use this for other purposes, it should be documented\n> in the SGML?\n>\n\nI agree, but I also didn't expect the format to survive first contact with\nreviewers, so I held back.\n\n\n>\n> 4) Why do we need the separate \"replaced\" flags in import_stakinds? Can\n> it happen that collreplaces/opreplaces differ from kindreplaces?\n>\n\nThat was initially done to maximize the amount of code that could be copied\nfrom do_analyze(). In retrospect, I like how extended statistics just\ndeletes all the pg_statistic_ext_data rows and replaces them and I would\nlike to do the same for pg_statistic before this is all done.\n\n\n>\n> 5) What happens in we import statistics for a table that already has\n> some statistics? Will this discard the existing statistics, or will this\n> merge them somehow? (I think we should always discard the existing\n> stats, and keep only the new version.)\n>\n\nIn the case of pg_statistic_ext_data, the stats are thrown out and replaced\nby the imported ones.\n\nIn the case of pg_statistic, it's basically an upsert, and any values that\nwere missing in the JSON are not updated on the existing row. That's\nappealing in a tweak situation where you want to only alter one or two bits\nof a stat, but not really useful in other situations. Per previous comment,\nI'd prefer a clean slate and forcing tweaking use cases to fill in all the\nblanks.\n\n\n>\n> 6) What happens if we import extended stats with mismatching definition?\n> For example, what if the \"new\" statistics object does not have \"mcv\"\n> enabled, but the imported data do include MCV? What if the statistics do\n> have the same number of \"dimensions\" but not the same number of columns\n> and expressions?\n>\n\nThe importer is currently driven by the types of stats to be expected for\nthat pg_attribute/pg_statistic_ext. 
It only looks for things that are\npossible for that stat type, and any extra JSON values are ignored.",
"msg_date": "Wed, 27 Dec 2023 21:41:31 -0500",
"msg_from": "Corey Huinker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": ">\n>\n> As mentioned already, we'd also need some sort of\n> version identifier, and we'd expect the load_statistics() functions\n> to be able to transform the data if the old version used a different\n> representation. I agree with the idea that an explicit representation\n> of the source table attribute's type would be wise, too.\n\n\nThere is a version identifier currently (its own column not embedded in the\nJSON), but I discovered that I was able to put the burden on the export\nqueries to spackle-over the changes in the table structures over time.\nStill, I knew that we'd need the version number in there eventually.",
"msg_date": "Wed, 27 Dec 2023 21:44:55 -0500",
"msg_from": "Corey Huinker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": ">\n> Yeah, this is pretty much what I meant by \"functional\" interface. But if\n> I said maybe the format implemented by the patch is maybe too close to\n> how we store the statistics, then this has exactly the same issue. And\n> it has other issues too, I think - it breaks down the stats into\n> multiple function calls, so ensuring the sanity/correctness of whole\n> sets of statistics gets much harder, I think.\n>\n\nExport functions was my original plan, for simplicity, maintenance, etc,\nbut it seemed like I'd be adding quite a few functions, so the one view\nmade more sense for an initial version. Also, I knew that pg_dump or some\nother stats exporter would have to inline the guts of those functions into\nqueries for older versions, and adapting a view definition seemed more\nstraightforward for the reader than function definitions.",
"msg_date": "Wed, 27 Dec 2023 21:49:23 -0500",
"msg_from": "Corey Huinker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": "Corey Huinker <[email protected]> writes:\n> Export functions was my original plan, for simplicity, maintenance, etc,\n> but it seemed like I'd be adding quite a few functions, so the one view\n> made more sense for an initial version. Also, I knew that pg_dump or some\n> other stats exporter would have to inline the guts of those functions into\n> queries for older versions, and adapting a view definition seemed more\n> straightforward for the reader than function definitions.\n\nHmm, I'm not sure we are talking about the same thing at all.\n\nWhat I am proposing is *import* functions. I didn't say anything about\nhow pg_dump obtains the data it prints; however, I would advocate that\nwe keep that part as simple as possible. You cannot expect export\nfunctionality to know the requirements of future server versions,\nso I don't think it's useful to put much intelligence there.\n\nSo I think pg_dump should produce a pretty literal representation of\nwhat it finds in the source server's catalog, and then rely on the\nimport functions in the destination server to make sense of that\nand do whatever slicing-n-dicing is required.\n\nThat being the case, I don't see a lot of value in a view -- especially\nnot given the requirement to dump from older server versions.\n(Conceivably we could just say that we won't dump stats from server\nversions predating the introduction of this feature, but that's hardly\na restriction that supports doing this via a view.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 27 Dec 2023 22:10:25 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": "On Wed, Dec 27, 2023 at 09:41:31PM -0500, Corey Huinker wrote:\n> When I thought about the ability to dump/load statistics in the past, I\n> usually envisioned some sort of DDL that would do the export and import.\n> So for example we'd have EXPORT STATISTICS / IMPORT STATISTICS commands,\n> or something like that, and that'd do all the work. This would mean\n> stats are \"first-class citizens\" and it'd be fairly straightforward to\n> add this into pg_dump, for example. Or at least I think so ...\n> \n> Alternatively we could have the usual \"functional\" interface, with a\n> functions to export/import statistics, replacing the DDL commands.\n> \n> Unfortunately, none of this works for the pg_upgrade use case, because\n> existing cluster versions would not support this new interface, of\n> course. That's a significant flaw, as it'd make this useful only for\n> upgrades of future versions.\n> \n> \n> This was the reason I settled on the interface that I did: while we can create\n> whatever interface we want for importing the statistics, we would need to be\n> able to extract stats from databases using only the facilities available in\n> those same databases, and then store that in a medium that could be conveyed\n> across databases, either by text files or by saving them off in a side table\n> prior to upgrade. JSONB met the criteria.\n\nUh, it wouldn't be crazy to add this capability to pg_upgrade/pg_dump in\na minor version upgrade if it wasn't enabled by default, and if we were\nvery careful.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n",
"msg_date": "Wed, 27 Dec 2023 22:11:23 -0500",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": "On Wed, Dec 27, 2023 at 10:10 PM Tom Lane <[email protected]> wrote:\n\n> Corey Huinker <[email protected]> writes:\n> > Export functions was my original plan, for simplicity, maintenance, etc,\n> > but it seemed like I'd be adding quite a few functions, so the one view\n> > made more sense for an initial version. Also, I knew that pg_dump or some\n> > other stats exporter would have to inline the guts of those functions\n> into\n> > queries for older versions, and adapting a view definition seemed more\n> > straightforward for the reader than function definitions.\n>\n> Hmm, I'm not sure we are talking about the same thing at all.\n>\n\nRight, I was conflating two things.\n\n\n>\n> What I am proposing is *import* functions. I didn't say anything about\n> how pg_dump obtains the data it prints; however, I would advocate that\n> we keep that part as simple as possible. You cannot expect export\n> functionality to know the requirements of future server versions,\n> so I don't think it's useful to put much intelligence there.\n>\n\nTrue, but presumably you'd be using the pg_dump/pg_upgrade of that future\nversion to do the exporting, so the export format would always be tailored\nto the importer's needs.\n\n\n>\n> So I think pg_dump should produce a pretty literal representation of\n> what it finds in the source server's catalog, and then rely on the\n> import functions in the destination server to make sense of that\n> and do whatever slicing-n-dicing is required.\n>\n\nObviously it can't be purely literal, as we have to replace the oid values\nwith whatever text representation we feel helps us carry forward. In\naddition, we're setting the number of tuples and number of pages directly\nin pg_class, and doing so non-transactionally just like ANALYZE does. We\ncould separate that out into its own import function, but then we're\nlocking every relation twice, once for the tuples/pages and once again for\nthe pg_statistic import.\n\nMy current line of thinking was that the stats import call, if enabled,\nwould immediately follow the CREATE statement of the object itself, but\nthat requires us to have everything we need to know for the import passed\ninto the import function, so we'd be needing a way to serialize _that_. If\nyou're thinking that we have one big bulk stats import, that might work,\nbut it also means that we're less tolerant of failures in the import step.\n\nOn Wed, Dec 27, 2023 at 10:10 PM Tom Lane <[email protected]> wrote:Corey Huinker <[email protected]> writes:\n> Export functions was my original plan, for simplicity, maintenance, etc,\n> but it seemed like I'd be adding quite a few functions, so the one view\n> made more sense for an initial version. Also, I knew that pg_dump or some\n> other stats exporter would have to inline the guts of those functions into\n> queries for older versions, and adapting a view definition seemed more\n> straightforward for the reader than function definitions.\n\nHmm, I'm not sure we are talking about the same thing at all.Right, I was conflating two things. \n\nWhat I am proposing is *import* functions. I didn't say anything about\nhow pg_dump obtains the data it prints; however, I would advocate that\nwe keep that part as simple as possible. 
You cannot expect export\nfunctionality to know the requirements of future server versions,\nso I don't think it's useful to put much intelligence there.True, but presumably you'd be using the pg_dump/pg_upgrade of that future version to do the exporting, so the export format would always be tailored to the importer's needs. \n\nSo I think pg_dump should produce a pretty literal representation of\nwhat it finds in the source server's catalog, and then rely on the\nimport functions in the destination server to make sense of that\nand do whatever slicing-n-dicing is required.Obviously it can't be purely literal, as we have to replace the oid values with whatever text representation we feel helps us carry forward. In addition, we're setting the number of tuples and number of pages directly in pg_class, and doing so non-transactionally just like ANALYZE does. We could separate that out into its own import function, but then we're locking every relation twice, once for the tuples/pages and once again for the pg_statistic import.My current line of thinking was that the stats import call, if enabled, would immediately follow the CREATE statement of the object itself, but that requires us to have everything we need to know for the import passed into the import function, so we'd be needing a way to serialize _that_. If you're thinking that we have one big bulk stats import, that might work, but it also means that we're less tolerant of failures in the import step.",
"msg_date": "Thu, 28 Dec 2023 12:28:06 -0500",
"msg_from": "Corey Huinker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": "On Thu, Dec 28, 2023 at 12:28:06PM -0500, Corey Huinker wrote:\n> What I am proposing is *import* functions. I didn't say anything about\n> how pg_dump obtains the data it prints; however, I would advocate that\n> we keep that part as simple as possible. You cannot expect export\n> functionality to know the requirements of future server versions,\n> so I don't think it's useful to put much intelligence there.\n> \n> True, but presumably you'd be using the pg_dump/pg_upgrade of that future\n> version to do the exporting, so the export format would always be tailored to\n> the importer's needs.\n\nI think the question is whether we will have the export functionality in\nthe old cluster, or if it will be queries run by pg_dump and therefore\nalso run by pg_upgrade calling pg_dump.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n",
"msg_date": "Thu, 28 Dec 2023 12:37:16 -0500",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": "On 12/13/23 11:26, Corey Huinker wrote:\n> Yeah, that was the simplest output function possible, it didn't seem\n>\n> worth it to implement something more advanced. pg_mcv_list_items() is\n> more convenient for most needs, but it's quite far from the on-disk\n> representation.\n>\n>\n> I was able to make it work.\n>\n>\n>\n> That's actually a good question - how closely should the exported data\n> be to the on-disk format? I'd say we should keep it abstract, not tied\n> to the details of the on-disk format (which might easily change\nbetween\n> versions).\n>\n>\n> For the most part, I chose the exported data json types and formats in a\n> way that was the most accommodating to cstring input functions. So,\n> while so many of the statistic values are obviously only ever\n> integers/floats, those get stored as a numeric data type which lacks\n> direct numeric->int/float4/float8 functions (though we could certainly\n> create them, and I'm not against that), casting them to text lets us\n> leverage pg_strtoint16, etc.\n>\n>\n>\n> I'm a bit confused about the JSON schema used in pg_statistic_export\n> view, though. It simply serializes stakinds, stavalues, stanumbers\ninto\n> arrays ... which works, but why not to use the JSON nesting? I mean,\n> there could be a nested document for histogram, MCV, ... with just the\n> correct fields.\n>\n> {\n> ...\n> histogram : { stavalues: [...] },\n> mcv : { stavalues: [...], stanumbers: [...] },\n> ...\n> }\n>\n>\n> That's a very good question. I went with this format because it was\n> fairly straightforward to code in SQL using existing JSON/JSONB\n> functions, and that's what we will need if we want to export statistics\n> on any server currently in existence. I'm certainly not locked in with\n> the current format, and if it can be shown how to transform the data\n> into a superior format, I'd happily do so.\n>\n> and so on. Also, what does TRIVIAL stand for?\n>\n>\n> It's currently serving double-duty for \"there are no stats in this slot\"\n> and the situations where the stats computation could draw no conclusions\n> about the data.\n>\n> Attached is v3 of this patch. Key features are:\n>\n> * Handles regular pg_statistic stats for any relation type.\n> * Handles extended statistics.\n> * Export views pg_statistic_export and pg_statistic_ext_export to allow\n> inspection of existing stats and saving those values for later use.\n> * Import functions pg_import_rel_stats() and pg_import_ext_stats() which\n> take Oids as input. This is intentional to allow stats from one object\n> to be imported into another object.\n> * User scripts pg_export_stats and pg_import stats, which offer a\n> primitive way to serialize all the statistics of one database and import\n> them into another.\n> * Has regression test coverage for both with a variety of data types.\n> * Passes my own manual test of extracting all of the stats from a v15\n> version of the popular \"dvdrental\" example database, as well as some\n> additional extended statistics objects, and importing them into a\n> development database.\n> * Import operations never touch the heap of any relation outside of\n> pg_catalog. 
As such, this should be significantly faster than even the\n> most cursory analyze operation, and therefore should be useful in\n> upgrade situations, allowing the database to work with \"good enough\"\n> stats more quickly, while still allowing for regular autovacuum to\n> recalculate the stats \"for real\" at some later point.\n>\n> The relation statistics code was adapted from similar features in\n> analyze.c, but is now done in a query context. As before, the\n> rowcount/pagecount values are updated on pg_class in a non-transactional\n> fashion to avoid table bloat, while the updates to pg_statistic are\n> pg_statistic_ext_data are done transactionally.\n>\n> The existing statistics _store() functions were leveraged wherever\n> practical, so much so that the extended statistics import is mostly just\n> adapting the existing _build() functions into _import() functions which\n> pull their values from JSON rather than computing the statistics.\n>\n> Current concerns are:\n>\n> 1. I had to code a special-case exception for MCELEM stats on array data\n> types, so that the array_in() call uses the element type rather than the\n> array type. I had assumed that the existing exmaine_attribute()\n> functions would have properly derived the typoid for that column, but it\n> appears to not be the case, and I'm clearly missing how the existing\n> code gets it right.\nHmm, after looking at this, I'm not sure it's such an ugly hack ...\n\nThe way this works for ANALYZE is that examine_attribute() eventually\ncalls the typanalyze function:\n\n if (OidIsValid(stats->attrtype->typanalyze))\n ok = DatumGetBool(OidFunctionCall1(stats->attrtype->typanalyze,\n PointerGetDatum(stats)));\n\nwhich for arrays is array_typanalyze, and this sets stats->extra_data to\nArrayAnalyzeExtraData with all the interesting info about the array\nelement type, and then also std_extra_data with info about the array\ntype itself.\n\n stats -> extra_data -> std_extra_data\n\ncompute_array_stats then \"restores\" std_extra_data to compute standard\nstats for the whole array, and then uses the ArrayAnalyzeExtraData to\ncalculate stats for the elements.\n\nIt's not exactly pretty, because there are global variables and so on.\n\nAnd examine_rel_attribute() does the same thing - calls typanalyze, so\nif I break after it returns, I see this for int[] column:\n\n(gdb) p * (ArrayAnalyzeExtraData *) stat->extra_data\n\n$1 = {type_id = 23, eq_opr = 96, coll_id = 0, typbyval = true, typlen =\n4, typalign = 105 'i', cmp = 0x2e57920, hash = 0x2e57950,\nstd_compute_stats = 0x6681b8 <compute_scalar_stats>, std_extra_data =\n0x2efe670}\n\nI think the \"problem\" will be how to use this in import_stavalues(). You\ncan't just do this for any array type, I think. I could create an array\ntype (with ELEMENT=X) but with a custom analyze function, in which case\nthe extra_data may be something entirely different.\n\nI suppose the correct solution would be to add an \"import\" function into\nthe pg_type catalog (next to typanalyze). Or maybe it'd be enough to set\nit from the typanalyze? After all, that's what sets compute_stats.\n\nBut maybe it's enough to just do what you did - if we get an MCELEM\nslot, can it ever contain anything else than array of elements of the\nattribute array type? I'd bet that'd cause all sorts of issues, no?\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 29 Dec 2023 00:55:08 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": ">\n> But maybe it's enough to just do what you did - if we get an MCELEM\n> slot, can it ever contain anything else than array of elements of the\n> attribute array type? I'd bet that'd cause all sorts of issues, no?\n>\n>\nThanks for the explanation of why it wasn't working for me. Knowing that\nthe case of MCELEM + is-array-type is the only case where we'd need to do\nthat puts me at ease.\n\nBut maybe it's enough to just do what you did - if we get an MCELEM\nslot, can it ever contain anything else than array of elements of the\nattribute array type? I'd bet that'd cause all sorts of issues, no?Thanks for the explanation of why it wasn't working for me. Knowing that the case of MCELEM + is-array-type is the only case where we'd need to do that puts me at ease.",
"msg_date": "Fri, 29 Dec 2023 11:27:50 -0500",
"msg_from": "Corey Huinker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": "On 12/29/23 17:27, Corey Huinker wrote:\n> But maybe it's enough to just do what you did - if we get an MCELEM\n> slot, can it ever contain anything else than array of elements of the\n> attribute array type? I'd bet that'd cause all sorts of issues, no?\n> \n> \n> Thanks for the explanation of why it wasn't working for me. Knowing that\n> the case of MCELEM + is-array-type is the only case where we'd need to\n> do that puts me at ease.\n> \n\nBut I didn't claim MCELEM is the only slot where this might be an issue.\nI merely asked if a MCELEM slot can ever contain an array with element\ntype different from the \"original\" attribute.\n\nAfter thinking about this a bit more, and doing a couple experiments\nwith a trivial custom data type, I think this is true:\n\n1) MCELEM slots for \"real\" array types are OK\n\nI don't think we allow \"real\" arrays created by users directly, all\narrays are created implicitly by the system. Those types always have\narray_typanalyze, which guarantees MCELEM has the correct element type.\n\nI haven't found a way to either inject my custom array type or alter the\ntypanalyze to some custom function. So I think this is OK.\n\n\n2) I'm not sure we can extend this regular data types / other slots\n\nFor example, I think I can implement a data type with custom typanalyze\nfunction (and custom compute_stats function) that fills slots with some\nother / strange stuff. For example I might build MCV with hashes of the\noriginal data, a CountMin sketch, or something like that.\n\nYes, I don't think people do that often, but as long as the type also\nimplements custom selectivity functions for the operators, I think this\nwould work.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 29 Dec 2023 21:14:34 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": "2024-01 Commitfest.\n\nHi, This patch has a CF status of \"Needs Review\" [1], but it seems\nthere were CFbot test failures last time it was run [2]. Please have a\nlook and post an updated version if necessary.\n\n======\n[1] https://commitfest.postgresql.org/46/4538/\n[2] https://cirrus-ci.com/github/postgresql-cfbot/postgresql/commitfest/46/4538\n\nKind Regards,\nPeter Smith.\n\n\n",
"msg_date": "Mon, 22 Jan 2024 17:09:24 +1100",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": "On Mon, Jan 22, 2024 at 1:09 AM Peter Smith <[email protected]> wrote:\n\n> 2024-01 Commitfest.\n>\n> Hi, This patch has a CF status of \"Needs Review\" [1], but it seems\n> there were CFbot test failures last time it was run [2]. Please have a\n> look and post an updated version if necessary.\n>\n> ======\n> [1] https://commitfest.postgresql.org/46/4538/\n> [2]\n> https://cirrus-ci.com/github/postgresql-cfbot/postgresql/commitfest/46/4538\n>\n> Kind Regards,\n> Peter Smith.\n>\n\nAttached is v4 of the statistics export/import patch.\n\nThis version has been refactored to match the design feedback received\npreviously.\n\nThe system views are gone. These were mostly there to serve as a baseline\nfor what an export query would look like. That role is temporarily\nreassigned to pg_export_stats.c, but hopefully they will be integrated into\npg_dump in the next version. The regression test also contains the version\nof each query suitable for the current server version.\n\nThe export format is far closer to the raw format of pg_statistic and\npg_statistic_ext_data, respectively. This format involves exporting oid\nvalues for types, collations, operators, and attributes - values which are\nspecific to the server they were created on. To make sense of those values,\na subset of the columns of pg_type, pg_attribute, pg_collation, and\npg_operator are exported as well, which allows pg_import_rel_stats() and\npg_import_ext_stats() to reconstitute the data structure as it existed on\nthe old server, and adapt it to the modern structure and local schema\nobjects.\n\npg_import_rel_stats matches up local columns with the exported stats by\ncolumn name, not attnum. This allows for stats to be imported when columns\nhave been dropped, added, or reordered.\n\npg_import_ext_stats can also handle column reordering, though it currently\nwould get confused by changes in expressions that maintain the same result\ndata type. I'm not yet brave enough to handle importing nodetrees, nor do I\nthink it's wise to try. I think we'd be better off validating that the\ndestination extended stats object is identical in structure, and to fail\nthe import of that one object if it isn't perfect.\n\nExport formats go back to v10.\n\nOn Mon, Jan 22, 2024 at 1:09 AM Peter Smith <[email protected]> wrote:2024-01 Commitfest.\n\nHi, This patch has a CF status of \"Needs Review\" [1], but it seems\nthere were CFbot test failures last time it was run [2]. Please have a\nlook and post an updated version if necessary.\n\n======\n[1] https://commitfest.postgresql.org/46/4538/\n[2] https://cirrus-ci.com/github/postgresql-cfbot/postgresql/commitfest/46/4538\n\nKind Regards,\nPeter Smith.Attached is v4 of the statistics export/import patch.This version has been refactored to match the design feedback received previously.The system views are gone. These were mostly there to serve as a baseline for what an export query would look like. That role is temporarily reassigned to pg_export_stats.c, but hopefully they will be integrated into pg_dump in the next version. The regression test also contains the version of each query suitable for the current server version.The export format is far closer to the raw format of pg_statistic and pg_statistic_ext_data, respectively. This format involves exporting oid values for types, collations, operators, and attributes - values which are specific to the server they were created on. 
To make sense of those values, a subset of the columns of pg_type, pg_attribute, pg_collation, and pg_operator are exported as well, which allows pg_import_rel_stats() and pg_import_ext_stats() to reconstitute the data structure as it existed on the old server, and adapt it to the modern structure and local schema objects.pg_import_rel_stats matches up local columns with the exported stats by column name, not attnum. This allows for stats to be imported when columns have been dropped, added, or reordered.pg_import_ext_stats can also handle column reordering, though it currently would get confused by changes in expressions that maintain the same result data type. I'm not yet brave enough to handle importing nodetrees, nor do I think it's wise to try. I think we'd be better off validating that the destination extended stats object is identical in structure, and to fail the import of that one object if it isn't perfect.Export formats go back to v10.",
"msg_date": "Fri, 2 Feb 2024 03:33:08 -0500",
"msg_from": "Corey Huinker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": "(hit send before attaching patches, reposting message as well)\n\nAttached is v4 of the statistics export/import patch.\n\nThis version has been refactored to match the design feedback received\npreviously.\n\nThe system views are gone. These were mostly there to serve as a baseline\nfor what an export query would look like. That role is temporarily\nreassigned to pg_export_stats.c, but hopefully they will be integrated into\npg_dump in the next version. The regression test also contains the version\nof each query suitable for the current server version.\n\nThe export format is far closer to the raw format of pg_statistic and\npg_statistic_ext_data, respectively. This format involves exporting oid\nvalues for types, collations, operators, and attributes - values which are\nspecific to the server they were created on. To make sense of those values,\na subset of the columns of pg_type, pg_attribute, pg_collation, and\npg_operator are exported as well, which allows pg_import_rel_stats() and\npg_import_ext_stats() to reconstitute the data structure as it existed on\nthe old server, and adapt it to the modern structure and local schema\nobjects.\n\npg_import_rel_stats matches up local columns with the exported stats by\ncolumn name, not attnum. This allows for stats to be imported when columns\nhave been dropped, added, or reordered.\n\npg_import_ext_stats can also handle column reordering, though it currently\nwould get confused by changes in expressions that maintain the same result\ndata type. I'm not yet brave enough to handle importing nodetrees, nor do I\nthink it's wise to try. I think we'd be better off validating that the\ndestination extended stats object is identical in structure, and to fail\nthe import of that one object if it isn't perfect.\n\nExport formats go back to v10.\n\n\nOn Mon, Jan 22, 2024 at 1:09 AM Peter Smith <[email protected]> wrote:\n\n> 2024-01 Commitfest.\n>\n> Hi, This patch has a CF status of \"Needs Review\" [1], but it seems\n> there were CFbot test failures last time it was run [2]. Please have a\n> look and post an updated version if necessary.\n>\n> ======\n> [1] https://commitfest.postgresql.org/46/4538/\n> [2]\n> https://cirrus-ci.com/github/postgresql-cfbot/postgresql/commitfest/46/4538\n>\n> Kind Regards,\n> Peter Smith.\n>",
"msg_date": "Fri, 2 Feb 2024 03:37:10 -0500",
"msg_from": "Corey Huinker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": "Hi,\n\nI took a quick look at the v4 patches. I haven't done much testing yet,\nso only some basic review.\n\n0001\n\n- The SGML docs for pg_import_rel_stats may need some changes. It starts\nwith description of what gets overwritten (non-)transactionally (which\ngets repeated twice), but that seems more like an implementation detail.\nBut it does not really say which pg_class fields get updated. Then it\nspeculates about the possible use case (pg_upgrade). I think it'd be\nbetter to focus on the overall goal of updating statistics, explain what\ngets updated/how, and only then maybe mention the pg_upgrade use case.\n\nAlso, it says \"statistics are replaced\" but it's quite clear if that\napplies only to matching statistics or if all stats are deleted first\nand then the new stuff is inserted. (FWIW remove_pg_statistics clearly\ndeletes all pre-existing stats).\n\n\n- import_pg_statistics: I somewhat dislike that we're passing arguments\nas datum[] array - it's hard to say what the elements are expected to\nbe, etc. Maybe we should expand this, to make it clear. How do we even\nknow the array is large enough?\n\n- I don't quite understand why we need examine_rel_attribute. It sets a\nlot of fields in the VacAttrStats struct, but then we only use attrtypid\nand attrtypmod from it - so why bother and not simply load just these\ntwo fields? Or maybe I miss something.\n\n- examine_rel_attribute can return NULL, but get_attrinfo does not check\nfor NULL and just dereferences the pointer. Surely that can lead to\nsegfaults?\n\n- validate_no_duplicates and the other validate functions would deserve\na better docs, explaining what exactly is checked (it took me a while to\nrealize we check just for duplicates), what the parameters do etc.\n\n- Do we want to make the validate_ functions part of the public API? I\nrealize we want to use them from multiple places (regular and extended\nstats), but maybe it'd be better to have an \"internal\" header file, just\nlike we have extended_stats_internal?\n\n- I'm not sure we do \"\\set debug f\" elsewhere. It took me a while to\nrealize why the query outputs are empty ...\n\n\n0002\n\n- I'd rename create_stat_ext_entry to statext_create_entry.\n\n- Do we even want to include OIDs from the source server? Why not to\njust have object names and resolve those? Seems safer - if the target\nserver has the OID allocated to a different object, that could lead to\nconfusing / hard to detect issues.\n\n- What happens if we import statistics which includes data for extended\nstatistics object which does not exist on the target machine?\n\n- pg_import_ext_stats seems to not use require_match_oids - bug?\n\n\n0003\n\n- no SGML docs for the new tools?\n\n- The help() seems to be wrong / copied from \"clusterdb\" or something\nlike that, right?\n\n\nOn 2/2/24 09:37, Corey Huinker wrote:\n> (hit send before attaching patches, reposting message as well)\n> \n> Attached is v4 of the statistics export/import patch.\n> \n> This version has been refactored to match the design feedback received\n> previously.\n> \n> The system views are gone. These were mostly there to serve as a baseline\n> for what an export query would look like. That role is temporarily\n> reassigned to pg_export_stats.c, but hopefully they will be integrated into\n> pg_dump in the next version. 
The regression test also contains the version\n> of each query suitable for the current server version.\n> \n\nOK\n\n> The export format is far closer to the raw format of pg_statistic and\n> pg_statistic_ext_data, respectively. This format involves exporting oid\n> values for types, collations, operators, and attributes - values which are\n> specific to the server they were created on. To make sense of those values,\n> a subset of the columns of pg_type, pg_attribute, pg_collation, and\n> pg_operator are exported as well, which allows pg_import_rel_stats() and\n> pg_import_ext_stats() to reconstitute the data structure as it existed on\n> the old server, and adapt it to the modern structure and local schema\n> objects.\n\nI have no opinion on the proposed format - still JSON, but closer to the\noriginal data. Works for me, but I wonder what Tom thinks about it,\nconsidering he suggested making it closer to the raw data.\n\n> \n> pg_import_rel_stats matches up local columns with the exported stats by\n> column name, not attnum. This allows for stats to be imported when columns\n> have been dropped, added, or reordered.\n> \n\nMakes sense. What will happen if we try to import data for extended\nstatistics (or index) that does not exist on the target server?\n\n> pg_import_ext_stats can also handle column reordering, though it currently\n> would get confused by changes in expressions that maintain the same result\n> data type. I'm not yet brave enough to handle importing nodetrees, nor do I\n> think it's wise to try. I think we'd be better off validating that the\n> destination extended stats object is identical in structure, and to fail\n> the import of that one object if it isn't perfect.\n> \n\nYeah, column reordering is something we probably need to handle. The\nstats order them by attnum, so if we want to allow import on a system\nwhere the attributes were dropped/created in a different way, this is\nnecessary. I haven't tested this - is there a regression test for this?\n\nI agree expressions are hard. I don't think it's feasible to import\nnodetree from other server versions, but why don't we simply deparse the\nexpression on the source, and either parse it on the target (and then\ncompare the two nodetrees), or deparse the target too and compare the\ntwo deparsed expressions? I suspect the deparsing may produce slightly\ndifferent results on the two versions (causing false mismatches), but\nperhaps the deparse on source + parse on target + compare nodetrees\nwould work? Haven't tried, though.\n\n> Export formats go back to v10.\n> \n\nDo we even want/need to go beyond 12? All earlier versions are EOL.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 7 Feb 2024 22:46:52 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": ">\n> Also, it says \"statistics are replaced\" but it's quite clear if that\n> applies only to matching statistics or if all stats are deleted first\n> and then the new stuff is inserted. (FWIW remove_pg_statistics clearly\n> deletes all pre-existing stats).\n>\n\nAll are now deleted first, both in the pg_statistic and\npg_statistic_ext_data tables. The previous version was taking a more\n\"replace it if we find a new value\" approach, but that's overly\ncomplicated, so following the example set by extended statistics seemed\nbest.\n\n\n\n> - import_pg_statistics: I somewhat dislike that we're passing arguments\n> as datum[] array - it's hard to say what the elements are expected to\n> be, etc. Maybe we should expand this, to make it clear. How do we even\n> know the array is large enough?\n>\n\nCompletely fair. Initially that was done with the expectation that the\narray would be the same for both regular stats and extended stats, but that\nwas no longer the case.\n\n\n> - I don't quite understand why we need examine_rel_attribute. It sets a\n> lot of fields in the VacAttrStats struct, but then we only use attrtypid\n> and attrtypmod from it - so why bother and not simply load just these\n> two fields? Or maybe I miss something.\n>\n\nI think you're right, we don't need it anymore for regular statistics. We\nstill need it in extended stats because statext_store() takes a subset of\nthe vacattrstats rows as an input.\n\nWhich leads to a side issue. We currently have 3 functions:\nexamine_rel_attribute and the two varieties of examine_attribute (one in\nanalyze.c and the other in extended stats). These are highly similar\nbut just different enough that I didn't feel comfortable refactoring them\ninto a one-size-fits-all function, and I was particularly reluctant to\nmodify existing code for the ANALYZE path.\n\n\n>\n> - examine_rel_attribute can return NULL, but get_attrinfo does not check\n> for NULL and just dereferences the pointer. Surely that can lead to\n> segfaults?\n>\n\nGood catch, and it highlights how little we need VacAttrStats for regular\nstatistics.\n\n\n>\n> - validate_no_duplicates and the other validate functions would deserve\n> a better docs, explaining what exactly is checked (it took me a while to\n> realize we check just for duplicates), what the parameters do etc.\n>\n\nThose functions are in a fairly formative phase - I expect a conversation\nabout what sort of validations we want to do to ensure that the statistics\nbeing imported make sense, and under what circumstances we would forego\nsome of those checks.\n\n\n>\n> - Do we want to make the validate_ functions part of the public API? I\n> realize we want to use them from multiple places (regular and extended\n> stats), but maybe it'd be better to have an \"internal\" header file, just\n> like we have extended_stats_internal?\n>\n\nI see no need to have them be a part of the public API. Will move.\n\n\n>\n> - I'm not sure we do \"\\set debug f\" elsewhere. It took me a while to\n> realize why the query outputs are empty ...\n>\n\nThat was an experiment that rose out of the difficulty in determining\n_where_ a difference was when the set-difference checks failed. So far I\nlike it, and I'm hoping it catches on.\n\n\n\n>\n>\n> 0002\n>\n> - I'd rename create_stat_ext_entry to statext_create_entry.\n>\n> - Do we even want to include OIDs from the source server? Why not to\n> just have object names and resolve those? 
Seems safer - if the target\n> server has the OID allocated to a different object, that could lead to\n> confusing / hard to detect issues.\n>\n\nThe import functions would obviously never use the imported oids to look up\nobjects on the destination system. Rather, they're there to verify that the\nlocal object oid matches the exported object oid, which is true in the case\nof a binary upgrade.\n\nThe export format is an attempt to export the pg_statistic[_ext_data] for\nthat object as-is, and, as Tom suggested, let the import function do the\ntransformations. We can of course remove them if they truly have no purpose\nfor validation.\n\n\n>\n> - What happens if we import statistics which includes data for extended\n> statistics object which does not exist on the target machine?\n>\n\nThe import function takes an oid of the object (relation or extstat\nobject), and the json payload is supposed to be the stats for ONE\ncorresponding object. Multiple objects of data really don't fit into the\njson format, and statistics exported for an object that does not exist on\nthe destination system would have no meaningful invocation. I envision the\ndump file looking like this\n\n CREATE TABLE public.foo (....);\n\n SELECT pg_import_rel_stats('public.foo'::regclass, <json blob>, option\nflag, option flag);\n\nSo a call against a nonexistent object would fail on the regclass cast.\n\n\n>\n> - pg_import_ext_stats seems to not use require_match_oids - bug?\n>\n\nI haven't yet seen a good way to make use of matching oids in extended\nstats. Checking matching operator/collation oids would make sense, but\nlittle else.\n\n\n>\n>\n> 0003\n>\n> - no SGML docs for the new tools?\n>\n\nCorrect. I foresee the export tool being folded into pg_dump(), and the\nimport tool going away entirely as psql could handle it.\n\n\n>\n> - The help() seems to be wrong / copied from \"clusterdb\" or something\n> like that, right?\n>\n\nCorrect, for the reason above.\n\n\n\n> >\n> > pg_import_rel_stats matches up local columns with the exported stats by\n> > column name, not attnum. This allows for stats to be imported when\n> columns\n> > have been dropped, added, or reordered.\n> >\n>\n> Makes sense. What will happen if we try to import data for extended\n> statistics (or index) that does not exist on the target server?\n>\n\nOne of the parameters to the function is the oid of the object that is the\ntarget of the stats. The importer will not seek out objects with matching\nnames and each JSON payload is limited to holding one object, though\nclearly someone could encapsulate the existing format in a format that has\na manifest of objects to import.\n\n\n>\n> > pg_import_ext_stats can also handle column reordering, though it\n> currently\n> > would get confused by changes in expressions that maintain the same\n> result\n> > data type. I'm not yet brave enough to handle importing nodetrees, nor\n> do I\n> > think it's wise to try. I think we'd be better off validating that the\n> > destination extended stats object is identical in structure, and to fail\n> > the import of that one object if it isn't perfect.\n> >\n>\n> Yeah, column reordering is something we probably need to handle. The\n> stats order them by attnum, so if we want to allow import on a system\n> where the attributes were dropped/created in a different way, this is\n> necessary. I haven't tested this - is there a regression test for this?\n>\n\nThe overlong transformation SQL starts with the object to be imported (the\nlocal oid was specified) and it\n\n1. 
grabs all the attributes (or exprs, for extended stats) of that object.\n2. looks for columns/exprs in the exported json for an attribute with a\nmatching name\n3. takes the exported attnum of that exported attribute for use in things\nlike stdexprs\n4. looks up the type, collation, and operators for the exported attribute.\n\nSo we get a situation where there might not be importable stats for an\nattribute of the destination table, and we'd import nothing for that\ncolumn. Stats for exported columns with no matching local column would\nnever be referenced.\n\nYes, there should be a test of this.\n\n\n> I agree expressions are hard. I don't think it's feasible to import\n> nodetree from other server versions, but why don't we simply deparse the\n> expression on the source, and either parse it on the target (and then\n> compare the two nodetrees), or deparse the target too and compare the\n> two deparsed expressions? I suspect the deparsing may produce slightly\n> different results on the two versions (causing false mismatches), but\n> perhaps the deparse on source + parse on target + compare nodetrees\n> would work? Haven't tried, though.\n>\n> > Export formats go back to v10.\n> >\n>\n> Do we even want/need to go beyond 12? All earlier versions are EOL.\n>\n\nTrue, but we had pg_dump and pg_restore stuff back to 7.x until fairly\nrecently, and a major friction point in getting customers to upgrade their\ninstances off of unsupported versions is the downtime caused by an upgrade,\nwhy wouldn't we make it easier for them?
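\n\nGoing back to the column-matching question above, a simplified illustration of what the name-based matching step boils down to (the JSON keys and the inline example payload below are invented purely for illustration and are not the actual export format):\n\n    SELECT a.attnum, a.atttypid, col_stats\n    FROM pg_attribute AS a\n    JOIN jsonb_array_elements('{\"columns\": [{\"attname\": \"id\", \"stanullfrac\": 0}]}'::jsonb -> 'columns') AS col_stats\n      ON col_stats ->> 'attname' = a.attname\n    WHERE a.attrelid = 'public.foo'::regclass\n      AND a.attnum > 0\n      AND NOT a.attisdropped;\n\nColumns that exist on only one side simply fall out of the join, which is the behavior described above.",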
"msg_date": "Tue, 13 Feb 2024 00:07:26 -0500",
"msg_from": "Corey Huinker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": "Posting v5 updates of pg_import_rel_stats() and pg_import_ext_stats(),\nwhich address many of the concerns listed earlier.\n\nLeaving the export/import scripts off for the time being, as they haven't\nchanged and the next likely change is to fold them into pg_dump.",
"msg_date": "Thu, 15 Feb 2024 04:09:41 -0500",
"msg_from": "Corey Huinker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": "On Thu, Feb 15, 2024 at 4:09 AM Corey Huinker <[email protected]>\nwrote:\n\n> Posting v5 updates of pg_import_rel_stats() and pg_import_ext_stats(),\n> which address many of the concerns listed earlier.\n>\n> Leaving the export/import scripts off for the time being, as they haven't\n> changed and the next likely change is to fold them into pg_dump.\n>\n>\n>\nv6 posted below.\n\nChanges:\n\n- Additional documentation about the overall process.\n- Rewording of SGML docs.\n- removed a fair number of columns from the transformation queries.\n- enabled require_match_oids in extended statistics, but I'm having my\ndoubts about the value of that.\n- moved stats extraction functions to an fe_utils file stats_export.c that\nwill be used by both pg_export_stats and pg_dump.\n- pg_export_stats now generates SQL statements rather than a tsv, and has\nboolean flags to set the validate and require_match_oids parameters in the\ncalls to pg_import_(rel|ext)_stats.\n- pg_import_stats is gone, as importing can now be done with psql.\n\nI'm hoping to get feedback on a few areas.\n\n1. The checks for matching oids. On the one hand, in a binary upgrade\nsituation, we would of course want the oid of the relation to match what\nwas exported, as well as all of the atttypids of the attributes to match\nthe type ids exported, same for collations, etc. However, the binary\nupgrade is the one place where there are absolutely no middle steps that\ncould have altered either the stats jsons or the source tables. Given that\nand that oid simply will never match in any situation other than a binary\nupgrade, it may be best to discard those checks.\n\n2. The checks for relnames matching, and typenames of attributes matching\n(they are already matched by name, so the column order can change without\nthe import missing a beat) seem so necessary that there shouldn't be an\noption to enable/disable them. But if that's true, then the initial\nrelation parameter becomes somewhat unnecessary, and anyone using these\nfunctions for tuning or FDW purposes could easily transform the JSON using\nSQL to put in the proper relname.\n\n3. The data integrity validation functions may belong in a separate\nfunction rather than being a parameter on the existing import functions.\n\n4. Lastly, pg_dump. Each relation object and extended statistics object\nwill have a statistics import statement. From my limited experience with\npg_dump, it seems like we would add an additional Stmt variable (statsStmt)\nto the TOC entry for each object created, and the restore process would\ncheck the value of --with-statistics and in cases where the statistics flag\nwas set AND a stats import statement exists, then execute that stats\nstatement immediately after the creation of the object. This assumes that\nthere is no case where additional attributes are added to a relation after\nit's initial CREATE statement. Indexes are independent relations in this\nregard.",
"msg_date": "Tue, 20 Feb 2024 02:24:52 -0500",
"msg_from": "Corey Huinker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": "Greetings,\n\n* Corey Huinker ([email protected]) wrote:\n> On Thu, Feb 15, 2024 at 4:09 AM Corey Huinker <[email protected]>\n> wrote:\n> > Posting v5 updates of pg_import_rel_stats() and pg_import_ext_stats(),\n> > which address many of the concerns listed earlier.\n> >\n> > Leaving the export/import scripts off for the time being, as they haven't\n> > changed and the next likely change is to fold them into pg_dump.\n\n> v6 posted below.\n> \n> Changes:\n> \n> - Additional documentation about the overall process.\n> - Rewording of SGML docs.\n> - removed a fair number of columns from the transformation queries.\n> - enabled require_match_oids in extended statistics, but I'm having my\n> doubts about the value of that.\n> - moved stats extraction functions to an fe_utils file stats_export.c that\n> will be used by both pg_export_stats and pg_dump.\n> - pg_export_stats now generates SQL statements rather than a tsv, and has\n> boolean flags to set the validate and require_match_oids parameters in the\n> calls to pg_import_(rel|ext)_stats.\n> - pg_import_stats is gone, as importing can now be done with psql.\n\nHaving looked through this thread and discussed a bit with Corey\noff-line, the approach that Tom laid out up-thread seems like it would\nmake the most sense overall- that is, eliminate the JSON bits and the\nSPI and instead export the stats data by running queries from the new\nversion of pg_dump/server (in the FDW case) against the old server\nwith the intelligence of how to transform the data into the format\nneeded for the current pg_dump/server to accept, through function calls\nwhere the function calls generally map up to the rows/information being\nupdated- a call to update the information in pg_class for each relation\nand then a call for each attribute to update the information in\npg_statistic.\n\nPart of this process would include mapping from OIDs/attrnum's to names\non the source side and then from those names to the appropriate\nOIDs/attrnum's on the destination side.\n\nAs this code would be used by both pg_dump and the postgres_fdw, it\nseems logical that it would go into the common library. Further, it\nwould make sense to have this code be able to handle multiple major\nversions for the foreign side, such as how postgres_fdw and pg_dump\nalready do.\n\nIn terms of working to ensure that newer versions support loading from\nolder dumps (that is, that v18 would be able to load a dump file created\nby a v17 pg_dump against a v17 server in the face of changes having been\nmade to the statistics system in v18), we could have the functions take\na version parameter (to handle cases where the data structure is the\nsame but the contents have to be handled differently), use overloaded\nfunctions, or have version-specific names for the functions. I'm also\ngenerally supportive of the idea that we, perhaps initially, only\nsupport dumping/loading stats with pg_dump when in binary-upgrade mode,\nwhich removes our need to be concerned with this (perhaps that would be\na good v1 of this feature?) 
as the version of pg_dump needs to match\nthat of pg_upgrade and the destination server for various other reasons.\nIncluding a switch to exclude stats on restore might also be an\nacceptable answer, or even simply excluding them by default when going\nbetween major versions except in binary-upgrade mode.\n\nAlong those same lines when it comes to a 'v1', I'd say that we may wish\nto consider excluding extended statistics, which I am fairly confident\nCorey's heard a number of times previously already but thought I would\nadd my own support for that. To the extent that we do want to make\nextended stats work down the road, we should probably have some\npre-patches to flush out the missing _in/_recv functions for those types\nwhich don't have them today- and that would include modifying the _out\nof those types to use names instead of OIDs/attrnums. In thinking about\nthis, I was reviewing specifically pg_dependencies. To the extent that\nthere are people who depend on the current output, I would think that\nthey'd actually appreciate this change.\n\nI don't generally feel like we need to be checking that the OIDs between\nthe old server and the new server match- I appreciate that that should\nbe the case in a binary-upgrade situation but it still feels unnecessary\nand complicated and clutters up the output and the function calls.\n\nOverall, I definitely think this is a good project to work on as it's an\noften, rightfully, complained about issue when it comes to pg_upgrade\nand the amount of downtime required for it before the upgraded system\ncan be reasonably used again.\n\nThanks,\n\nStephen",
"msg_date": "Thu, 29 Feb 2024 15:23:07 -0500",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Statistics Import and Export"
},
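A minimal sketch of the extraction query implied by the message above, as the new pg_dump might run it against the old server, keyed by names rather than OIDs/attnums. The column list is illustrative only (just the first statistics slot is shown) and is not a settled interface:

SELECT n.nspname, c.relname, a.attname, s.stainherit,
       s.stanullfrac, s.stawidth, s.stadistinct,
       s.stakind1, s.stanumbers1, s.stavalues1
FROM pg_catalog.pg_statistic s
JOIN pg_catalog.pg_class c ON c.oid = s.starelid
JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace
JOIN pg_catalog.pg_attribute a
     ON a.attrelid = s.starelid AND a.attnum = s.staattnum
WHERE c.oid = 'schema.relation'::regclass
  AND NOT a.attisdropped;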
{
"msg_contents": ">\n>\n>\n> Having looked through this thread and discussed a bit with Corey\n> off-line, the approach that Tom laid out up-thread seems like it would\n> make the most sense overall- that is, eliminate the JSON bits and the\n> SPI and instead export the stats data by running queries from the new\n> version of pg_dump/server (in the FDW case) against the old server\n> with the intelligence of how to transform the data into the format\n> needed for the current pg_dump/server to accept, through function calls\n> where the function calls generally map up to the rows/information being\n> updated- a call to update the information in pg_class for each relation\n> and then a call for each attribute to update the information in\n> pg_statistic.\n>\n\nThanks for the excellent summary of our conversation, though I do add that\nwe discussed a problem with per-attribute functions: each function would be\nacquiring locks on both the relation (so it doesn't go away) and\npg_statistic, and that lock thrashing would add up. Whether that overhead\nis judged significant or not is up for discussion. If it is significant, it\nmakes sense to package up all the attributes into one call, passing in an\narray of some new pg_statistic-esque special type....the very issue that\nsent me down the JSON path.\n\nI certainly see the flexibility in having a per-attribute functions, but am\nconcerned about non-binary-upgrade situations where the attnums won't line\nup, and if we're passing them by name then the function has dig around\nlooking for the right matching attnum, and that's overhead too. In the\nwhole-table approach, we just iterate over the attributes that exist, and\nfind the matching parameter row.\n\n\nHaving looked through this thread and discussed a bit with Corey\noff-line, the approach that Tom laid out up-thread seems like it would\nmake the most sense overall- that is, eliminate the JSON bits and the\nSPI and instead export the stats data by running queries from the new\nversion of pg_dump/server (in the FDW case) against the old server\nwith the intelligence of how to transform the data into the format\nneeded for the current pg_dump/server to accept, through function calls\nwhere the function calls generally map up to the rows/information being\nupdated- a call to update the information in pg_class for each relation\nand then a call for each attribute to update the information in\npg_statistic.Thanks for the excellent summary of our conversation, though I do add that we discussed a problem with per-attribute functions: each function would be acquiring locks on both the relation (so it doesn't go away) and pg_statistic, and that lock thrashing would add up. Whether that overhead is judged significant or not is up for discussion. If it is significant, it makes sense to package up all the attributes into one call, passing in an array of some new pg_statistic-esque special type....the very issue that sent me down the JSON path.I certainly see the flexibility in having a per-attribute functions, but am concerned about non-binary-upgrade situations where the attnums won't line up, and if we're passing them by name then the function has dig around looking for the right matching attnum, and that's overhead too. In the whole-table approach, we just iterate over the attributes that exist, and find the matching parameter row.",
"msg_date": "Thu, 29 Feb 2024 17:47:59 -0500",
"msg_from": "Corey Huinker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": "Greetings,\n\nOn Thu, Feb 29, 2024 at 17:48 Corey Huinker <[email protected]> wrote:\n\n> Having looked through this thread and discussed a bit with Corey\n>> off-line, the approach that Tom laid out up-thread seems like it would\n>> make the most sense overall- that is, eliminate the JSON bits and the\n>> SPI and instead export the stats data by running queries from the new\n>> version of pg_dump/server (in the FDW case) against the old server\n>> with the intelligence of how to transform the data into the format\n>> needed for the current pg_dump/server to accept, through function calls\n>> where the function calls generally map up to the rows/information being\n>> updated- a call to update the information in pg_class for each relation\n>> and then a call for each attribute to update the information in\n>> pg_statistic.\n>>\n>\n> Thanks for the excellent summary of our conversation, though I do add that\n> we discussed a problem with per-attribute functions: each function would be\n> acquiring locks on both the relation (so it doesn't go away) and\n> pg_statistic, and that lock thrashing would add up. Whether that overhead\n> is judged significant or not is up for discussion. If it is significant, it\n> makes sense to package up all the attributes into one call, passing in an\n> array of some new pg_statistic-esque special type....the very issue that\n> sent me down the JSON path.\n>\n> I certainly see the flexibility in having a per-attribute functions, but\n> am concerned about non-binary-upgrade situations where the attnums won't\n> line up, and if we're passing them by name then the function has dig around\n> looking for the right matching attnum, and that's overhead too. In the\n> whole-table approach, we just iterate over the attributes that exist, and\n> find the matching parameter row.\n>\n\nThat’s certainly a fair point and my initial reaction (which could\ncertainly be wrong) is that it’s unlikely to be an issue- but also, if you\nfeel you could make it work with an array and passing all the attribute\ninfo in with one call, which I suspect would be possible but just a bit\nmore complex to build, then sure, go for it. If it ends up being overly\nunwieldy then perhaps the per-attribute call would be better and we could\nperhaps acquire the lock before the function calls..? 
Doing a check to see\nif we have already locked it would be cheaper than trying to acquire a new\nlock, I’m fairly sure.\n\nAlso per our prior discussion- this makes sense to include in post-data\nsection, imv, and also because then we have the indexes we may wish to load\nstats for, but further that also means it’ll be in the paralleliziable part\nof the process, making me a bit less concerned overall about the individual\ntiming.\n\nThanks!\n\nStephen\n\n>\n\nGreetings,On Thu, Feb 29, 2024 at 17:48 Corey Huinker <[email protected]> wrote:\nHaving looked through this thread and discussed a bit with Corey\noff-line, the approach that Tom laid out up-thread seems like it would\nmake the most sense overall- that is, eliminate the JSON bits and the\nSPI and instead export the stats data by running queries from the new\nversion of pg_dump/server (in the FDW case) against the old server\nwith the intelligence of how to transform the data into the format\nneeded for the current pg_dump/server to accept, through function calls\nwhere the function calls generally map up to the rows/information being\nupdated- a call to update the information in pg_class for each relation\nand then a call for each attribute to update the information in\npg_statistic.Thanks for the excellent summary of our conversation, though I do add that we discussed a problem with per-attribute functions: each function would be acquiring locks on both the relation (so it doesn't go away) and pg_statistic, and that lock thrashing would add up. Whether that overhead is judged significant or not is up for discussion. If it is significant, it makes sense to package up all the attributes into one call, passing in an array of some new pg_statistic-esque special type....the very issue that sent me down the JSON path.I certainly see the flexibility in having a per-attribute functions, but am concerned about non-binary-upgrade situations where the attnums won't line up, and if we're passing them by name then the function has dig around looking for the right matching attnum, and that's overhead too. In the whole-table approach, we just iterate over the attributes that exist, and find the matching parameter row.That’s certainly a fair point and my initial reaction (which could certainly be wrong) is that it’s unlikely to be an issue- but also, if you feel you could make it work with an array and passing all the attribute info in with one call, which I suspect would be possible but just a bit more complex to build, then sure, go for it. If it ends up being overly unwieldy then perhaps the per-attribute call would be better and we could perhaps acquire the lock before the function calls..? Doing a check to see if we have already locked it would be cheaper than trying to acquire a new lock, I’m fairly sure.Also per our prior discussion- this makes sense to include in post-data section, imv, and also because then we have the indexes we may wish to load stats for, but further that also means it’ll be in the paralleliziable part of the process, making me a bit less concerned overall about the individual timing. Thanks!Stephen",
"msg_date": "Thu, 29 Feb 2024 18:17:14 -0500",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": ">\n>\n> That’s certainly a fair point and my initial reaction (which could\n> certainly be wrong) is that it’s unlikely to be an issue- but also, if you\n> feel you could make it work with an array and passing all the attribute\n> info in with one call, which I suspect would be possible but just a bit\n> more complex to build, then sure, go for it. If it ends up being overly\n> unwieldy then perhaps the per-attribute call would be better and we could\n> perhaps acquire the lock before the function calls..? Doing a check to see\n> if we have already locked it would be cheaper than trying to acquire a new\n> lock, I’m fairly sure.\n>\n\nWell the do_analyze() code was already ok with acquiring the lock once for\nnon-inherited stats and again for inherited stats, so the locks were\nalready not the end of the world. However, that's at most a 2x of the\nlocking required, and this would natts * x, quite a bit more. Having the\nprocedures check for a pre-existing lock seems like a good compromise.\n\n\n> Also per our prior discussion- this makes sense to include in post-data\n> section, imv, and also because then we have the indexes we may wish to load\n> stats for, but further that also means it’ll be in the paralleliziable part\n> of the process, making me a bit less concerned overall about the individual\n> timing.\n>\n\nThe ability to parallelize is pretty persuasive. But is that per-statement\nparallelization or do we get transaction blocks? i.e. if we ended up\nimporting stats like this:\n\nBEGIN;\nLOCK TABLE schema.relation IN SHARE UPDATE EXCLUSIVE MODE;\nLOCK TABLE pg_catalog.pg_statistic IN ROW UPDATE EXCLUSIVE MODE;\nSELECT pg_import_rel_stats('schema.relation', ntuples, npages);\nSELECT pg_import_pg_statistic('schema.relation', 'id', ...);\nSELECT pg_import_pg_statistic('schema.relation', 'name', ...);\nSELECT pg_import_pg_statistic('schema.relation', 'description', ...);\n...\nCOMMIT;\n\nThat’s certainly a fair point and my initial reaction (which could certainly be wrong) is that it’s unlikely to be an issue- but also, if you feel you could make it work with an array and passing all the attribute info in with one call, which I suspect would be possible but just a bit more complex to build, then sure, go for it. If it ends up being overly unwieldy then perhaps the per-attribute call would be better and we could perhaps acquire the lock before the function calls..? Doing a check to see if we have already locked it would be cheaper than trying to acquire a new lock, I’m fairly sure.Well the do_analyze() code was already ok with acquiring the lock once for non-inherited stats and again for inherited stats, so the locks were already not the end of the world. However, that's at most a 2x of the locking required, and this would natts * x, quite a bit more. Having the procedures check for a pre-existing lock seems like a good compromise. Also per our prior discussion- this makes sense to include in post-data section, imv, and also because then we have the indexes we may wish to load stats for, but further that also means it’ll be in the paralleliziable part of the process, making me a bit less concerned overall about the individual timing. The ability to parallelize is pretty persuasive. But is that per-statement parallelization or do we get transaction blocks? i.e. 
if we ended up importing stats like this:BEGIN;LOCK TABLE schema.relation IN SHARE UPDATE EXCLUSIVE MODE;LOCK TABLE pg_catalog.pg_statistic IN ROW UPDATE EXCLUSIVE MODE;SELECT pg_import_rel_stats('schema.relation', ntuples, npages);SELECT pg_import_pg_statistic('schema.relation', 'id', ...);SELECT pg_import_pg_statistic('schema.relation', 'name', ...);SELECT pg_import_pg_statistic('schema.relation', 'description', ...);...COMMIT;",
"msg_date": "Thu, 29 Feb 2024 22:55:20 -0500",
"msg_from": "Corey Huinker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": "On Thu, Feb 29, 2024 at 10:55:20PM -0500, Corey Huinker wrote:\n>> That’s certainly a fair point and my initial reaction (which could\n>> certainly be wrong) is that it’s unlikely to be an issue- but also, if you\n>> feel you could make it work with an array and passing all the attribute\n>> info in with one call, which I suspect would be possible but just a bit\n>> more complex to build, then sure, go for it. If it ends up being overly\n>> unwieldy then perhaps the per-attribute call would be better and we could\n>> perhaps acquire the lock before the function calls..? Doing a check to see\n>> if we have already locked it would be cheaper than trying to acquire a new\n>> lock, I’m fairly sure.\n> \n> Well the do_analyze() code was already ok with acquiring the lock once for\n> non-inherited stats and again for inherited stats, so the locks were\n> already not the end of the world. However, that's at most a 2x of the\n> locking required, and this would natts * x, quite a bit more. Having the\n> procedures check for a pre-existing lock seems like a good compromise.\n\nI think this is a reasonable starting point. If the benchmarks show that\nthe locking is a problem, we can reevaluate, but otherwise IMHO we should\ntry to keep it as simple/flexible as possible.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 1 Mar 2024 11:13:57 -0600",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": "Greetings,\n\nOn Fri, Mar 1, 2024 at 12:14 Nathan Bossart <[email protected]>\nwrote:\n\n> On Thu, Feb 29, 2024 at 10:55:20PM -0500, Corey Huinker wrote:\n> >> That’s certainly a fair point and my initial reaction (which could\n> >> certainly be wrong) is that it’s unlikely to be an issue- but also, if\n> you\n> >> feel you could make it work with an array and passing all the attribute\n> >> info in with one call, which I suspect would be possible but just a bit\n> >> more complex to build, then sure, go for it. If it ends up being overly\n> >> unwieldy then perhaps the per-attribute call would be better and we\n> could\n> >> perhaps acquire the lock before the function calls..? Doing a check to\n> see\n> >> if we have already locked it would be cheaper than trying to acquire a\n> new\n> >> lock, I’m fairly sure.\n> >\n> > Well the do_analyze() code was already ok with acquiring the lock once\n> for\n> > non-inherited stats and again for inherited stats, so the locks were\n> > already not the end of the world. However, that's at most a 2x of the\n> > locking required, and this would natts * x, quite a bit more. Having the\n> > procedures check for a pre-existing lock seems like a good compromise.\n>\n> I think this is a reasonable starting point. If the benchmarks show that\n> the locking is a problem, we can reevaluate, but otherwise IMHO we should\n> try to keep it as simple/flexible as possible.\n\n\nYeah, this was my general feeling as well. If it does become an issue, it\ncertainly seems like we would have ways to improve it in the future. Even\nwith this locking it is surely going to be better than having to re-analyze\nthe entire database which is where we are at now.\n\nThanks,\n\nStephen\n\n>\n\nGreetings,On Fri, Mar 1, 2024 at 12:14 Nathan Bossart <[email protected]> wrote:On Thu, Feb 29, 2024 at 10:55:20PM -0500, Corey Huinker wrote:\n>> That’s certainly a fair point and my initial reaction (which could\n>> certainly be wrong) is that it’s unlikely to be an issue- but also, if you\n>> feel you could make it work with an array and passing all the attribute\n>> info in with one call, which I suspect would be possible but just a bit\n>> more complex to build, then sure, go for it. If it ends up being overly\n>> unwieldy then perhaps the per-attribute call would be better and we could\n>> perhaps acquire the lock before the function calls..? Doing a check to see\n>> if we have already locked it would be cheaper than trying to acquire a new\n>> lock, I’m fairly sure.\n> \n> Well the do_analyze() code was already ok with acquiring the lock once for\n> non-inherited stats and again for inherited stats, so the locks were\n> already not the end of the world. However, that's at most a 2x of the\n> locking required, and this would natts * x, quite a bit more. Having the\n> procedures check for a pre-existing lock seems like a good compromise.\n\nI think this is a reasonable starting point. If the benchmarks show that\nthe locking is a problem, we can reevaluate, but otherwise IMHO we should\ntry to keep it as simple/flexible as possible.Yeah, this was my general feeling as well. If it does become an issue, it certainly seems like we would have ways to improve it in the future. Even with this locking it is surely going to be better than having to re-analyze the entire database which is where we are at now.Thanks,Stephen",
"msg_date": "Fri, 1 Mar 2024 12:16:51 -0500",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": "Hi,\n\nOn Tue, Feb 20, 2024 at 02:24:52AM -0500, Corey Huinker wrote:\n> On Thu, Feb 15, 2024 at 4:09 AM Corey Huinker <[email protected]>\n> wrote:\n> \n> > Posting v5 updates of pg_import_rel_stats() and pg_import_ext_stats(),\n> > which address many of the concerns listed earlier.\n> >\n> > Leaving the export/import scripts off for the time being, as they haven't\n> > changed and the next likely change is to fold them into pg_dump.\n> >\n> >\n> >\n> v6 posted below.\n\nThanks!\n\nI had in mind to look at it but it looks like a rebase is needed.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 4 Mar 2024 14:39:40 +0000",
"msg_from": "Bertrand Drouvot <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": "On Fri, 1 Mar 2024, 04:55 Corey Huinker, <[email protected]> wrote:\n>> Also per our prior discussion- this makes sense to include in post-data section, imv, and also because then we have the indexes we may wish to load stats for, but further that also means it’ll be in the paralleliziable part of the process, making me a bit less concerned overall about the individual timing.\n>\n>\n> The ability to parallelize is pretty persuasive. But is that per-statement parallelization or do we get transaction blocks? i.e. if we ended up importing stats like this:\n>\n> BEGIN;\n> LOCK TABLE schema.relation IN SHARE UPDATE EXCLUSIVE MODE;\n> LOCK TABLE pg_catalog.pg_statistic IN ROW UPDATE EXCLUSIVE MODE;\n> SELECT pg_import_rel_stats('schema.relation', ntuples, npages);\n> SELECT pg_import_pg_statistic('schema.relation', 'id', ...);\n> SELECT pg_import_pg_statistic('schema.relation', 'name', ...);\n\nHow well would this simplify to the following:\n\nSELECT pg_import_statistic('schema.relation', attname, ...)\nFROM (VALUES ('id', ...), ...) AS relation_stats (attname, ...);\n\nOr even just one VALUES for the whole statistics loading?\n\nI suspect the main issue with combining this into one statement\n(transaction) is that failure to load one column's statistics implies\nyou'll have to redo all the other statistics (or fail to load the\nstatistics at all), which may be problematic at the scale of thousands\nof relations with tens of columns each.\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Wed, 6 Mar 2024 11:06:39 +0100",
"msg_from": "Matthias van de Meent <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Statistics Import and Export"
},
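For concreteness, the single-statement VALUES form suggested above might look roughly like this. pg_import_pg_statistic and its parameter list are placeholders carried over from the discussion, not an existing function, and the numbers are invented:

SELECT pg_import_pg_statistic('schema.relation', attname,
                              stanullfrac, stawidth, stadistinct)
FROM (VALUES ('id',          0.0::float4,  4, -1.0::float4),
             ('name',        0.0::float4, 32, -0.5::float4),
             ('description', 0.1::float4, 64, 250.0::float4))
     AS relation_stats (attname, stanullfrac, stawidth, stadistinct);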
{
"msg_contents": "Greetings,\n\nOn Wed, Mar 6, 2024 at 11:07 Matthias van de Meent <\[email protected]> wrote:\n\n> On Fri, 1 Mar 2024, 04:55 Corey Huinker, <[email protected]> wrote:\n> >> Also per our prior discussion- this makes sense to include in post-data\n> section, imv, and also because then we have the indexes we may wish to load\n> stats for, but further that also means it’ll be in the paralleliziable part\n> of the process, making me a bit less concerned overall about the individual\n> timing.\n> >\n> >\n> > The ability to parallelize is pretty persuasive. But is that\n> per-statement parallelization or do we get transaction blocks? i.e. if we\n> ended up importing stats like this:\n> >\n> > BEGIN;\n> > LOCK TABLE schema.relation IN SHARE UPDATE EXCLUSIVE MODE;\n> > LOCK TABLE pg_catalog.pg_statistic IN ROW UPDATE EXCLUSIVE MODE;\n> > SELECT pg_import_rel_stats('schema.relation', ntuples, npages);\n> > SELECT pg_import_pg_statistic('schema.relation', 'id', ...);\n> > SELECT pg_import_pg_statistic('schema.relation', 'name', ...);\n>\n> How well would this simplify to the following:\n>\n> SELECT pg_import_statistic('schema.relation', attname, ...)\n> FROM (VALUES ('id', ...), ...) AS relation_stats (attname, ...);\n\n\nUsing a VALUES construct for this does seem like it might make it cleaner,\nso +1 for investigating that idea.\n\nOr even just one VALUES for the whole statistics loading?\n\n\nI don’t think we’d want to go beyond one relation at a time as then it can\nbe parallelized, we won’t be trying to lock a whole bunch of objects at\nonce, and any failures would only impact that one relation’s stats load.\n\nI suspect the main issue with combining this into one statement\n> (transaction) is that failure to load one column's statistics implies\n> you'll have to redo all the other statistics (or fail to load the\n> statistics at all), which may be problematic at the scale of thousands\n> of relations with tens of columns each.\n\n\nI’m pretty skeptical that “stats fail to load and lead to a failed\ntransaction” is a likely scenario that we have to spend a lot of effort\non. I’m pretty bullish on the idea that this simply won’t happen except in\nvery exceptional cases under a pg_upgrade (where the pg_dump that’s used\nmust match the target server version) and where it happens under a pg_dump\nit’ll be because it’s an older pg_dump’s output and the answer will likely\nneed to be “you’re using a pg_dump file generated using an older version of\npg_dump and need to exclude stats entirely from the load and instead run\nanalyze on the data after loading it.”\n\nWhat are the cases where we would be seeing stats reloads failing where it\nwould make sense to re-try on a subset of columns, or just generally, if we\nknow that the pg_dump version matches the target server version?\n\nThanks!\n\nStephen\n\n>\n\nGreetings,On Wed, Mar 6, 2024 at 11:07 Matthias van de Meent <[email protected]> wrote:On Fri, 1 Mar 2024, 04:55 Corey Huinker, <[email protected]> wrote:\n>> Also per our prior discussion- this makes sense to include in post-data section, imv, and also because then we have the indexes we may wish to load stats for, but further that also means it’ll be in the paralleliziable part of the process, making me a bit less concerned overall about the individual timing.\n>\n>\n> The ability to parallelize is pretty persuasive. But is that per-statement parallelization or do we get transaction blocks? i.e. 
if we ended up importing stats like this:\n>\n> BEGIN;\n> LOCK TABLE schema.relation IN SHARE UPDATE EXCLUSIVE MODE;\n> LOCK TABLE pg_catalog.pg_statistic IN ROW UPDATE EXCLUSIVE MODE;\n> SELECT pg_import_rel_stats('schema.relation', ntuples, npages);\n> SELECT pg_import_pg_statistic('schema.relation', 'id', ...);\n> SELECT pg_import_pg_statistic('schema.relation', 'name', ...);\n\nHow well would this simplify to the following:\n\nSELECT pg_import_statistic('schema.relation', attname, ...)\nFROM (VALUES ('id', ...), ...) AS relation_stats (attname, ...);Using a VALUES construct for this does seem like it might make it cleaner, so +1 for investigating that idea.\nOr even just one VALUES for the whole statistics loading?I don’t think we’d want to go beyond one relation at a time as then it can be parallelized, we won’t be trying to lock a whole bunch of objects at once, and any failures would only impact that one relation’s stats load.\nI suspect the main issue with combining this into one statement\n(transaction) is that failure to load one column's statistics implies\nyou'll have to redo all the other statistics (or fail to load the\nstatistics at all), which may be problematic at the scale of thousands\nof relations with tens of columns each.I’m pretty skeptical that “stats fail to load and lead to a failed transaction” is a likely scenario that we have to spend a lot of effort on. I’m pretty bullish on the idea that this simply won’t happen except in very exceptional cases under a pg_upgrade (where the pg_dump that’s used must match the target server version) and where it happens under a pg_dump it’ll be because it’s an older pg_dump’s output and the answer will likely need to be “you’re using a pg_dump file generated using an older version of pg_dump and need to exclude stats entirely from the load and instead run analyze on the data after loading it.”What are the cases where we would be seeing stats reloads failing where it would make sense to re-try on a subset of columns, or just generally, if we know that the pg_dump version matches the target server version?Thanks!Stephen",
"msg_date": "Wed, 6 Mar 2024 11:33:02 +0100",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": "On Wed, 6 Mar 2024 at 11:33, Stephen Frost <[email protected]> wrote:\n> On Wed, Mar 6, 2024 at 11:07 Matthias van de Meent <[email protected]> wrote:\n>> Or even just one VALUES for the whole statistics loading?\n>\n>\n> I don’t think we’d want to go beyond one relation at a time as then it can be parallelized, we won’t be trying to lock a whole bunch of objects at once, and any failures would only impact that one relation’s stats load.\n\nThat also makes sense.\n\n>> I suspect the main issue with combining this into one statement\n>> (transaction) is that failure to load one column's statistics implies\n>> you'll have to redo all the other statistics (or fail to load the\n>> statistics at all), which may be problematic at the scale of thousands\n>> of relations with tens of columns each.\n>\n>\n> I’m pretty skeptical that “stats fail to load and lead to a failed transaction” is a likely scenario that we have to spend a lot of effort on.\n\nAgreed on the \"don't have to spend a lot of time on it\", but I'm not\nso sure on the \"unlikely\" part while the autovacuum deamon is\ninvolved, specifically for non-upgrade pg_restore. I imagine (haven't\nchecked) that autoanalyze is disabled during pg_upgrade, but\npg_restore doesn't do that, while it would have to be able to restore\nstatistics of a table if it is included in the dump (and the version\nmatches).\n\n> What are the cases where we would be seeing stats reloads failing where it would make sense to re-try on a subset of columns, or just generally, if we know that the pg_dump version matches the target server version?\n\nLast time I checked, pg_restore's default is to load data on a\nrow-by-row basis without --single-transaction or --exit-on-error. Of\ncourse, pg_upgrade uses it's own set of flags, but if a user is\nrestoring stats with pg_restore, I suspect they'd rather have some\ncolumn's stats loaded than no stats at all; so I would assume this\nrequires one separate pg_import_pg_statistic()-transaction for every\ncolumn.\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Wed, 6 Mar 2024 12:06:28 +0100",
"msg_from": "Matthias van de Meent <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": "Greetings,\n\n* Matthias van de Meent ([email protected]) wrote:\n> On Wed, 6 Mar 2024 at 11:33, Stephen Frost <[email protected]> wrote:\n> > On Wed, Mar 6, 2024 at 11:07 Matthias van de Meent <[email protected]> wrote:\n> >> Or even just one VALUES for the whole statistics loading?\n> > I don’t think we’d want to go beyond one relation at a time as then it can be parallelized, we won’t be trying to lock a whole bunch of objects at once, and any failures would only impact that one relation’s stats load.\n> \n> That also makes sense.\n\nGreat, thanks.\n\n> >> I suspect the main issue with combining this into one statement\n> >> (transaction) is that failure to load one column's statistics implies\n> >> you'll have to redo all the other statistics (or fail to load the\n> >> statistics at all), which may be problematic at the scale of thousands\n> >> of relations with tens of columns each.\n> >\n> >\n> > I’m pretty skeptical that “stats fail to load and lead to a failed transaction” is a likely scenario that we have to spend a lot of effort on.\n> \n> Agreed on the \"don't have to spend a lot of time on it\", but I'm not\n> so sure on the \"unlikely\" part while the autovacuum deamon is\n> involved, specifically for non-upgrade pg_restore. I imagine (haven't\n> checked) that autoanalyze is disabled during pg_upgrade, but\n> pg_restore doesn't do that, while it would have to be able to restore\n> statistics of a table if it is included in the dump (and the version\n> matches).\n\nEven if autovacuum was running and it kicked off an auto-analyze of the\nrelation at just the time that we were trying to load the stats, there\nwould be appropriate locking happening to keep them from causing an\noutright ERROR and transaction failure, or if not, that's a lack of\nlocking and should be fixed. With the per-attribute-function-call\napproach, that could lead to a situation where some stats are from the\nauto-analyze and some are from the stats being loaded but I'm not sure\nif that's a big concern or not.\n\nFor users of this, I would think we'd generally encourage them to\ndisable autovacuum on the tables they're loading as otherwise they'll\nend up with the stats going back to whatever an auto-analyze ends up\nfinding. That may be fine in some cases, but not in others.\n\nA couple questions to think about though: Should pg_dump explicitly ask\nautovacuum to ignore these tables while we're loading them? \nShould these functions only perform a load when there aren't any\nexisting stats? Should the latter be an argument to the functions to\nallow the caller to decide?\n\n> > What are the cases where we would be seeing stats reloads failing where it would make sense to re-try on a subset of columns, or just generally, if we know that the pg_dump version matches the target server version?\n> \n> Last time I checked, pg_restore's default is to load data on a\n> row-by-row basis without --single-transaction or --exit-on-error. Of\n> course, pg_upgrade uses it's own set of flags, but if a user is\n> restoring stats with pg_restore, I suspect they'd rather have some\n> column's stats loaded than no stats at all; so I would assume this\n> requires one separate pg_import_pg_statistic()-transaction for every\n> column.\n\nHaving some discussion around that would be useful. Is it better to\nhave a situation where there are stats for some columns but no stats for\nother columns? 
There would be a good chance that this would lead to a\nset of queries that were properly planned out and a set which end up\nwith unexpected and likely poor query plans due to lack of stats.\nArguably that's better overall, but either way an ANALYZE needs to be\ndone to address the lack of stats for those columns and then that\nANALYZE is going to blow away whatever stats got loaded previously\nanyway and all we did with a partial stats load was maybe have a subset\nof queries have better plans in the interim, after having expended the\ncost to try and individually load the stats and dealing with the case of\nsome of them succeeding and some failing.\n\nOverall, I'd suggest we wait to see what Corey comes up with in terms of\ndoing the stats load for all attributes in a single function call,\nperhaps using the VALUES construct as you suggested up-thread, and then\nwe can contemplate if that's clean enough to work or if it's so grotty\nthat the better plan would be to do per-attribute function calls. If it\nends up being the latter, then we can revisit this discussion and try to\nanswer some of the questions raised above.\n\nThanks!\n\nStephen",
"msg_date": "Wed, 6 Mar 2024 13:28:20 -0500",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Statistics Import and Export"
},
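As a concrete illustration of the suggestion above to disable autovacuum on tables whose statistics are being loaded, a restore script could wrap the load in the existing autovacuum_enabled storage parameter; the table name is a placeholder:

ALTER TABLE schema.relation SET (autovacuum_enabled = false);
-- load data and call the statistics import functions here
ALTER TABLE schema.relation RESET (autovacuum_enabled);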
{
"msg_contents": ">\n>\n> > BEGIN;\n> > LOCK TABLE schema.relation IN SHARE UPDATE EXCLUSIVE MODE;\n> > LOCK TABLE pg_catalog.pg_statistic IN ROW UPDATE EXCLUSIVE MODE;\n> > SELECT pg_import_rel_stats('schema.relation', ntuples, npages);\n> > SELECT pg_import_pg_statistic('schema.relation', 'id', ...);\n> > SELECT pg_import_pg_statistic('schema.relation', 'name', ...);\n>\n> How well would this simplify to the following:\n>\n> SELECT pg_import_statistic('schema.relation', attname, ...)\n> FROM (VALUES ('id', ...), ...) AS relation_stats (attname, ...);\n>\n> Or even just one VALUES for the whole statistics loading?\n>\n\nI'm sorry, I don't quite understand what you're suggesting here. I'm about\nto post the new functions, so perhaps you can rephrase this in the context\nof those functions.\n\n\n> I suspect the main issue with combining this into one statement\n> (transaction) is that failure to load one column's statistics implies\n> you'll have to redo all the other statistics (or fail to load the\n> statistics at all), which may be problematic at the scale of thousands\n> of relations with tens of columns each.\n\n\nYes, that is is a concern, and I can see value to having it both ways (one\nfailure fails the whole table's worth of set_something() functions, but I\ncan also see emitting a warning instead of error and returning false. I'm\neager to get feedback on which the community would prefer, or perhaps even\nmake it a parameter.\n\n> BEGIN;\n> LOCK TABLE schema.relation IN SHARE UPDATE EXCLUSIVE MODE;\n> LOCK TABLE pg_catalog.pg_statistic IN ROW UPDATE EXCLUSIVE MODE;\n> SELECT pg_import_rel_stats('schema.relation', ntuples, npages);\n> SELECT pg_import_pg_statistic('schema.relation', 'id', ...);\n> SELECT pg_import_pg_statistic('schema.relation', 'name', ...);\n\nHow well would this simplify to the following:\n\nSELECT pg_import_statistic('schema.relation', attname, ...)\nFROM (VALUES ('id', ...), ...) AS relation_stats (attname, ...);\n\nOr even just one VALUES for the whole statistics loading?I'm sorry, I don't quite understand what you're suggesting here. I'm about to post the new functions, so perhaps you can rephrase this in the context of those functions. I suspect the main issue with combining this into one statement\n(transaction) is that failure to load one column's statistics implies\nyou'll have to redo all the other statistics (or fail to load the\nstatistics at all), which may be problematic at the scale of thousands\nof relations with tens of columns each.Yes, that is is a concern, and I can see value to having it both ways (one failure fails the whole table's worth of set_something() functions, but I can also see emitting a warning instead of error and returning false. I'm eager to get feedback on which the community would prefer, or perhaps even make it a parameter.",
"msg_date": "Fri, 8 Mar 2024 01:09:10 -0500",
"msg_from": "Corey Huinker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": ">\n>\n>\n> Having some discussion around that would be useful. Is it better to\n> have a situation where there are stats for some columns but no stats for\n> other columns? There would be a good chance that this would lead to a\n> set of queries that were properly planned out and a set which end up\n> with unexpected and likely poor query plans due to lack of stats.\n> Arguably that's better overall, but either way an ANALYZE needs to be\n> done to address the lack of stats for those columns and then that\n> ANALYZE is going to blow away whatever stats got loaded previously\n> anyway and all we did with a partial stats load was maybe have a subset\n> of queries have better plans in the interim, after having expended the\n> cost to try and individually load the stats and dealing with the case of\n> some of them succeeding and some failing.\n>\n\nIt is my (incomplete and entirely second-hand) understanding is that\npg_upgrade doesn't STOP autovacuum, but sets a delay to a very long value\nand then resets it on completion, presumably because analyzing a table\nbefore its data is loaded and indexes are created would just be a waste of\ntime.\n\n\n\n>\n> Overall, I'd suggest we wait to see what Corey comes up with in terms of\n> doing the stats load for all attributes in a single function call,\n> perhaps using the VALUES construct as you suggested up-thread, and then\n> we can contemplate if that's clean enough to work or if it's so grotty\n> that the better plan would be to do per-attribute function calls. If it\n> ends up being the latter, then we can revisit this discussion and try to\n> answer some of the questions raised above.\n>\n\nIn the patch below, I ended up doing per-attribute function calls, mostly\nbecause it allowed me to avoid creating a custom data type for the portable\nversion of pg_statistic. This comes at the cost of a very high number of\nparameters, but that's the breaks.\n\nI am a bit concerned about the number of locks on pg_statistic and the\nrelation itself, doing CatalogOpenIndexes/CatalogCloseIndexes once per\nattribute rather than once per relation. But I also see that this will\nmostly get used at a time when no other traffic is on the machine, and\nwhatever it costs, it's still faster than the smallest table sample (insert\njoke about \"don't have to be faster than the bear\" here).\n\nThis raises questions about whether a failure in one attribute update\nstatement should cause the others in that relation to roll back or not, and\nI can see situations where both would be desirable.\n\nI'm putting this out there ahead of the pg_dump / fe_utils work, mostly\nbecause what I do there heavily depends on how this is received.\n\nAlso, I'm still seeking confirmation that I can create a pg_dump TOC entry\nwith a chain of commands (e.g. BEGIN; ... COMMIT; ) or if I have to fan\nthem out into multiple entries.\n\nAnyway, here's v7. Eagerly awaiting feedback.",
"msg_date": "Fri, 8 Mar 2024 01:35:40 -0500",
"msg_from": "Corey Huinker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": "Greetings,\n\n* Corey Huinker ([email protected]) wrote:\n> > Having some discussion around that would be useful. Is it better to\n> > have a situation where there are stats for some columns but no stats for\n> > other columns? There would be a good chance that this would lead to a\n> > set of queries that were properly planned out and a set which end up\n> > with unexpected and likely poor query plans due to lack of stats.\n> > Arguably that's better overall, but either way an ANALYZE needs to be\n> > done to address the lack of stats for those columns and then that\n> > ANALYZE is going to blow away whatever stats got loaded previously\n> > anyway and all we did with a partial stats load was maybe have a subset\n> > of queries have better plans in the interim, after having expended the\n> > cost to try and individually load the stats and dealing with the case of\n> > some of them succeeding and some failing.\n> \n> It is my (incomplete and entirely second-hand) understanding is that\n> pg_upgrade doesn't STOP autovacuum, but sets a delay to a very long value\n> and then resets it on completion, presumably because analyzing a table\n> before its data is loaded and indexes are created would just be a waste of\n> time.\n\nNo, pg_upgrade starts the postmaster with -b (undocumented\nbinary-upgrade mode) which prevents autovacuum (and logical replication\nworkers) from starting, so we don't need to worry about autovacuum\ncoming in and causing issues during binary upgrade. That doesn't\nentirely eliminate the concerns around pg_dump vs. autovacuum because we\nmay restore a dump into a non-binary-upgrade-in-progress system though,\nof course.\n\n> > Overall, I'd suggest we wait to see what Corey comes up with in terms of\n> > doing the stats load for all attributes in a single function call,\n> > perhaps using the VALUES construct as you suggested up-thread, and then\n> > we can contemplate if that's clean enough to work or if it's so grotty\n> > that the better plan would be to do per-attribute function calls. If it\n> > ends up being the latter, then we can revisit this discussion and try to\n> > answer some of the questions raised above.\n> \n> In the patch below, I ended up doing per-attribute function calls, mostly\n> because it allowed me to avoid creating a custom data type for the portable\n> version of pg_statistic. This comes at the cost of a very high number of\n> parameters, but that's the breaks.\n\nPerhaps it's just me ... but it doesn't seem like it's all that many\nparameters.\n\n> I am a bit concerned about the number of locks on pg_statistic and the\n> relation itself, doing CatalogOpenIndexes/CatalogCloseIndexes once per\n> attribute rather than once per relation. But I also see that this will\n> mostly get used at a time when no other traffic is on the machine, and\n> whatever it costs, it's still faster than the smallest table sample (insert\n> joke about \"don't have to be faster than the bear\" here).\n\nI continue to not be too concerned about this until and unless it's\nactually shown to be an issue. 
Keeping things simple and\nstraight-forward has its own value.\n\n> This raises questions about whether a failure in one attribute update\n> statement should cause the others in that relation to roll back or not, and\n> I can see situations where both would be desirable.\n> \n> I'm putting this out there ahead of the pg_dump / fe_utils work, mostly\n> because what I do there heavily depends on how this is received.\n> \n> Also, I'm still seeking confirmation that I can create a pg_dump TOC entry\n> with a chain of commands (e.g. BEGIN; ... COMMIT; ) or if I have to fan\n> them out into multiple entries.\n\nIf we do go with this approach, we'd certainly want to make sure to do\nit in a manner which would allow pg_restore's single-transaction mode to\nstill work, which definitely complicates this some.\n\nGiven that and the other general feeling that the locking won't be a big\nissue, I'd suggest the simple approach on the pg_dump side of just\ndumping the stats out without the BEGIN/COMMIT.\n\n> Anyway, here's v7. Eagerly awaiting feedback.\n\n> Subject: [PATCH v7] Create pg_set_relation_stats, pg_set_attribute_stats.\n\n> diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat\n> index 291ed876fc..d12b6e3ca3 100644\n> --- a/src/include/catalog/pg_proc.dat\n> +++ b/src/include/catalog/pg_proc.dat\n> @@ -8818,7 +8818,6 @@\n> { oid => '3813', descr => 'generate XML text node',\n> proname => 'xmltext', proisstrict => 't', prorettype => 'xml',\n> proargtypes => 'text', prosrc => 'xmltext' },\n> -\n> { oid => '2923', descr => 'map table contents to XML',\n> proname => 'table_to_xml', procost => '100', provolatile => 's',\n> proparallel => 'r', prorettype => 'xml',\n> @@ -12163,8 +12162,24 @@\n> \n> # GiST stratnum implementations\n> { oid => '8047', descr => 'GiST support',\n> - proname => 'gist_stratnum_identity', prorettype => 'int2',\n> + proname => 'gist_stratnum_identity',prorettype => 'int2',\n> proargtypes => 'int2',\n> prosrc => 'gist_stratnum_identity' },\n\nRandom whitespace hunks shouldn't be included \n\n> diff --git a/src/backend/statistics/statistics.c b/src/backend/statistics/statistics.c\n> new file mode 100644\n> index 0000000000..999aebdfa9\n> --- /dev/null\n> +++ b/src/backend/statistics/statistics.c\n> @@ -0,0 +1,360 @@\n> +/*------------------------------------------------------------------------- * * statistics.c *\n> + * IDENTIFICATION\n> + * src/backend/statistics/statistics.c\n> + *\n> + *-------------------------------------------------------------------------\n> + */\n\nTop-of-file comment should be cleaned up.\n\n> +/*\n> + * Set statistics for a given pg_class entry.\n> + *\n> + * pg_set_relation_stats(relation Oid, reltuples double, relpages int)\n> + *\n> + * This does an in-place (i.e. non-transactional) update of pg_class, just as\n> + * is done in ANALYZE.\n> + *\n> + */\n> +Datum\n> +pg_set_relation_stats(PG_FUNCTION_ARGS)\n> +{\n> +\tconst char *param_names[] = {\n> +\t\t\"relation\",\n> +\t\t\"reltuples\",\n> +\t\t\"relpages\",\n> +\t};\n> +\n> +\tOid\t\t\t\trelid;\n> +\tRelation\t\trel;\n> +\tHeapTuple\t\tctup;\n> +\tForm_pg_class\tpgcform;\n> +\n> +\tfor (int i = 0; i <= 2; i++)\n> +\t\tif (PG_ARGISNULL(i))\n> +\t\t\tereport(ERROR,\n> +\t\t\t\t\t(errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> +\t\t\t\t\t errmsg(\"%s cannot be NULL\", param_names[i])));\n\nWhy not just mark this function as strict..? Or perhaps we should allow\nNULLs to be passed in and just not update the current value in that\ncase? 
Also, in some cases we allow the function to be called with a\nNULL but then make it a no-op rather than throwing an ERROR (eg, if the\nOID ends up being NULL). Not sure if that makes sense here or not\noffhand but figured I'd mention it as something to consider.\n\n> +\tpgcform = (Form_pg_class) GETSTRUCT(ctup);\n> +\tpgcform->reltuples = PG_GETARG_FLOAT4(1);\n> +\tpgcform->relpages = PG_GETARG_INT32(2);\n\nShouldn't we include relallvisible?\n\nAlso, perhaps we should use the approach that we have in ANALYZE, and\nonly actually do something if the values are different rather than just\nalways doing an update.\n\n> +/*\n> + * Import statistics for a given relation attribute\n> + *\n> + * pg_set_attribute_stats(relation Oid, attname name, stainherit bool,\n> + * stanullfrac float4, stawidth int, stadistinct float4,\n> + * stakind1 int2, stakind2 int2, stakind3 int3,\n> + * stakind4 int2, stakind5 int2, stanumbers1 float4[],\n> + * stanumbers2 float4[], stanumbers3 float4[],\n> + * stanumbers4 float4[], stanumbers5 float4[],\n> + * stanumbers1 float4[], stanumbers2 float4[],\n> + * stanumbers3 float4[], stanumbers4 float4[],\n> + * stanumbers5 float4[], stavalues1 text,\n> + * stavalues2 text, stavalues3 text,\n> + * stavalues4 text, stavalues5 text);\n> + *\n> + *\n> + */\n\nDon't know that it makes sense to just repeat the function declaration\ninside a comment like this- it'll just end up out of date.\n\n> +Datum\n> +pg_set_attribute_stats(PG_FUNCTION_ARGS)\n\n> +\t/* names of columns that cannot be null */\n> +\tconst char *required_param_names[] = {\n> +\t\t\"relation\",\n> +\t\t\"attname\",\n> +\t\t\"stainherit\",\n> +\t\t\"stanullfrac\",\n> +\t\t\"stawidth\",\n> +\t\t\"stadistinct\",\n> +\t\t\"stakind1\",\n> +\t\t\"stakind2\",\n> +\t\t\"stakind3\",\n> +\t\t\"stakind4\",\n> +\t\t\"stakind5\",\n> +\t};\n\nSame comment here as above wrt NULL being passed in.\n\n> +\tfor (int k = 0; k < 5; k++)\n\nShouldn't we use STATISTIC_NUM_SLOTS here?\n\nThanks!\n\nStephen",
"msg_date": "Fri, 8 Mar 2024 07:05:04 -0500",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Statistics Import and Export"
},
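For reference while reading the review above, a usage sketch of the v7 pg_set_relation_stats entry point, assuming the patch is installed. The call mirrors the regression test quoted later in the thread; pg_set_attribute_stats is not shown because its slot-by-slot argument list (stakind1..5, stanumbers1..5, stavalues1..5) is still being discussed:

SELECT pg_set_relation_stats('stats_export_import.test'::regclass,
                             3.6::float4,   -- reltuples
                             15000);        -- relpages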
{
"msg_contents": ">\n> Perhaps it's just me ... but it doesn't seem like it's all that many\n>\nparameters.\n>\n\nIt's more than I can memorize, so that feels like a lot to me. Clearly it's\nnot insurmountable.\n\n\n\n> > I am a bit concerned about the number of locks on pg_statistic and the\n> > relation itself, doing CatalogOpenIndexes/CatalogCloseIndexes once per\n> > attribute rather than once per relation. But I also see that this will\n> > mostly get used at a time when no other traffic is on the machine, and\n> > whatever it costs, it's still faster than the smallest table sample\n> (insert\n> > joke about \"don't have to be faster than the bear\" here).\n>\n> I continue to not be too concerned about this until and unless it's\n> actually shown to be an issue. Keeping things simple and\n> straight-forward has its own value.\n>\n\nOk, I'm sold on that plan.\n\n\n>\n> > +/*\n> > + * Set statistics for a given pg_class entry.\n> > + *\n> > + * pg_set_relation_stats(relation Oid, reltuples double, relpages int)\n> > + *\n> > + * This does an in-place (i.e. non-transactional) update of pg_class,\n> just as\n> > + * is done in ANALYZE.\n> > + *\n> > + */\n> > +Datum\n> > +pg_set_relation_stats(PG_FUNCTION_ARGS)\n> > +{\n> > + const char *param_names[] = {\n> > + \"relation\",\n> > + \"reltuples\",\n> > + \"relpages\",\n> > + };\n> > +\n> > + Oid relid;\n> > + Relation rel;\n> > + HeapTuple ctup;\n> > + Form_pg_class pgcform;\n> > +\n> > + for (int i = 0; i <= 2; i++)\n> > + if (PG_ARGISNULL(i))\n> > + ereport(ERROR,\n> > +\n> (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> > + errmsg(\"%s cannot be NULL\",\n> param_names[i])));\n>\n> Why not just mark this function as strict..? Or perhaps we should allow\n> NULLs to be passed in and just not update the current value in that\n> case?\n\n\nStrict could definitely apply here, and I'm inclined to make it so.\n\n\n\n> Also, in some cases we allow the function to be called with a\n> NULL but then make it a no-op rather than throwing an ERROR (eg, if the\n> OID ends up being NULL).\n\n\nThoughts on it emitting a WARN or NOTICE before returning false?\n\n\n\n> Not sure if that makes sense here or not\n> offhand but figured I'd mention it as something to consider.\n>\n> > + pgcform = (Form_pg_class) GETSTRUCT(ctup);\n> > + pgcform->reltuples = PG_GETARG_FLOAT4(1);\n> > + pgcform->relpages = PG_GETARG_INT32(2);\n>\n> Shouldn't we include relallvisible?\n>\n\nYes. No idea why I didn't have that in there from the start.\n\n\n> Also, perhaps we should use the approach that we have in ANALYZE, and\n> only actually do something if the values are different rather than just\n> always doing an update.\n>\n\nThat was how it worked back in v1, more for the possibility that there was\nno matching JSON to set values.\n\nLooking again at analyze.c (currently lines 1751-1780), we just check if\nthere is a row in the way, and if so we replace it. 
I don't see where we\ncompare existing values to new values.\n\n\n>\n> > +/*\n> > + * Import statistics for a given relation attribute\n> > + *\n> > + * pg_set_attribute_stats(relation Oid, attname name, stainherit bool,\n> > + * stanullfrac float4, stawidth int, stadistinct\n> float4,\n> > + * stakind1 int2, stakind2 int2, stakind3 int3,\n> > + * stakind4 int2, stakind5 int2, stanumbers1\n> float4[],\n> > + * stanumbers2 float4[], stanumbers3 float4[],\n> > + * stanumbers4 float4[], stanumbers5 float4[],\n> > + * stanumbers1 float4[], stanumbers2 float4[],\n> > + * stanumbers3 float4[], stanumbers4 float4[],\n> > + * stanumbers5 float4[], stavalues1 text,\n> > + * stavalues2 text, stavalues3 text,\n> > + * stavalues4 text, stavalues5 text);\n> > + *\n> > + *\n> > + */\n>\n> Don't know that it makes sense to just repeat the function declaration\n> inside a comment like this- it'll just end up out of date.\n>\n\nHistorical artifact - previous versions had a long explanation of the JSON\nformat.\n\n\n\n>\n> > +Datum\n> > +pg_set_attribute_stats(PG_FUNCTION_ARGS)\n>\n> > + /* names of columns that cannot be null */\n> > + const char *required_param_names[] = {\n> > + \"relation\",\n> > + \"attname\",\n> > + \"stainherit\",\n> > + \"stanullfrac\",\n> > + \"stawidth\",\n> > + \"stadistinct\",\n> > + \"stakind1\",\n> > + \"stakind2\",\n> > + \"stakind3\",\n> > + \"stakind4\",\n> > + \"stakind5\",\n> > + };\n>\n> Same comment here as above wrt NULL being passed in.\n>\n\nIn this case, the last 10 params (stanumbersN and stavaluesN) can be null,\nand are NULL more often than not.\n\n\n>\n> > + for (int k = 0; k < 5; k++)\n>\n> Shouldn't we use STATISTIC_NUM_SLOTS here?\n>\n\nYes, I had in the past. Not sure why I didn't again.\n\nPerhaps it's just me ... but it doesn't seem like it's all that many\nparameters.It's more than I can memorize, so that feels like a lot to me. Clearly it's not insurmountable. > I am a bit concerned about the number of locks on pg_statistic and the\n> relation itself, doing CatalogOpenIndexes/CatalogCloseIndexes once per\n> attribute rather than once per relation. But I also see that this will\n> mostly get used at a time when no other traffic is on the machine, and\n> whatever it costs, it's still faster than the smallest table sample (insert\n> joke about \"don't have to be faster than the bear\" here).\n\nI continue to not be too concerned about this until and unless it's\nactually shown to be an issue. Keeping things simple and\nstraight-forward has its own value.Ok, I'm sold on that plan. \n> +/*\n> + * Set statistics for a given pg_class entry.\n> + *\n> + * pg_set_relation_stats(relation Oid, reltuples double, relpages int)\n> + *\n> + * This does an in-place (i.e. non-transactional) update of pg_class, just as\n> + * is done in ANALYZE.\n> + *\n> + */\n> +Datum\n> +pg_set_relation_stats(PG_FUNCTION_ARGS)\n> +{\n> + const char *param_names[] = {\n> + \"relation\",\n> + \"reltuples\",\n> + \"relpages\",\n> + };\n> +\n> + Oid relid;\n> + Relation rel;\n> + HeapTuple ctup;\n> + Form_pg_class pgcform;\n> +\n> + for (int i = 0; i <= 2; i++)\n> + if (PG_ARGISNULL(i))\n> + ereport(ERROR,\n> + (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> + errmsg(\"%s cannot be NULL\", param_names[i])));\n\nWhy not just mark this function as strict..? Or perhaps we should allow\nNULLs to be passed in and just not update the current value in that\ncase? Strict could definitely apply here, and I'm inclined to make it so. 
Also, in some cases we allow the function to be called with a\nNULL but then make it a no-op rather than throwing an ERROR (eg, if the\nOID ends up being NULL).Thoughts on it emitting a WARN or NOTICE before returning false? Not sure if that makes sense here or not\noffhand but figured I'd mention it as something to consider.\n\n> + pgcform = (Form_pg_class) GETSTRUCT(ctup);\n> + pgcform->reltuples = PG_GETARG_FLOAT4(1);\n> + pgcform->relpages = PG_GETARG_INT32(2);\n\nShouldn't we include relallvisible?Yes. No idea why I didn't have that in there from the start. Also, perhaps we should use the approach that we have in ANALYZE, and\nonly actually do something if the values are different rather than just\nalways doing an update.That was how it worked back in v1, more for the possibility that there was no matching JSON to set values.Looking again at analyze.c (currently lines 1751-1780), we just check if there is a row in the way, and if so we replace it. I don't see where we compare existing values to new values. \n\n> +/*\n> + * Import statistics for a given relation attribute\n> + *\n> + * pg_set_attribute_stats(relation Oid, attname name, stainherit bool,\n> + * stanullfrac float4, stawidth int, stadistinct float4,\n> + * stakind1 int2, stakind2 int2, stakind3 int3,\n> + * stakind4 int2, stakind5 int2, stanumbers1 float4[],\n> + * stanumbers2 float4[], stanumbers3 float4[],\n> + * stanumbers4 float4[], stanumbers5 float4[],\n> + * stanumbers1 float4[], stanumbers2 float4[],\n> + * stanumbers3 float4[], stanumbers4 float4[],\n> + * stanumbers5 float4[], stavalues1 text,\n> + * stavalues2 text, stavalues3 text,\n> + * stavalues4 text, stavalues5 text);\n> + *\n> + *\n> + */\n\nDon't know that it makes sense to just repeat the function declaration\ninside a comment like this- it'll just end up out of date.Historical artifact - previous versions had a long explanation of the JSON format. \n\n> +Datum\n> +pg_set_attribute_stats(PG_FUNCTION_ARGS)\n\n> + /* names of columns that cannot be null */\n> + const char *required_param_names[] = {\n> + \"relation\",\n> + \"attname\",\n> + \"stainherit\",\n> + \"stanullfrac\",\n> + \"stawidth\",\n> + \"stadistinct\",\n> + \"stakind1\",\n> + \"stakind2\",\n> + \"stakind3\",\n> + \"stakind4\",\n> + \"stakind5\",\n> + };\n\nSame comment here as above wrt NULL being passed in.In this case, the last 10 params (stanumbersN and stavaluesN) can be null, and are NULL more often than not. \n\n> + for (int k = 0; k < 5; k++)\n\nShouldn't we use STATISTIC_NUM_SLOTS here?Yes, I had in the past. Not sure why I didn't again.",
"msg_date": "Fri, 8 Mar 2024 14:17:31 -0500",
"msg_from": "Corey Huinker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": "On Fri, Mar 8, 2024 at 12:06 PM Corey Huinker <[email protected]> wrote:\n>\n> Anyway, here's v7. Eagerly awaiting feedback.\n\nThanks for working on this. It looks useful to have the ability to\nrestore the stats after upgrade and restore. But, the exported stats\nare valid only until the next ANALYZE is run on the table. IIUC,\npostgres collects stats during VACUUM, autovacuum and ANALYZE, right?\n Perhaps there are other ways to collect stats. I'm thinking what\nproblems does the user face if they are just asked to run ANALYZE on\nthe tables (I'm assuming ANALYZE doesn't block concurrent access to\nthe tables) instead of automatically exporting stats.\n\nHere are some comments on the v7 patch. I've not dived into the whole\nthread, so some comments may be of type repeated or need\nclarification. Please bear with me.\n\n1. The following two are unnecessary changes in pg_proc.dat, please remove them.\n\n--- a/src/include/catalog/pg_proc.dat\n+++ b/src/include/catalog/pg_proc.dat\n@@ -8818,7 +8818,6 @@\n { oid => '3813', descr => 'generate XML text node',\n proname => 'xmltext', proisstrict => 't', prorettype => 'xml',\n proargtypes => 'text', prosrc => 'xmltext' },\n-\n { oid => '2923', descr => 'map table contents to XML',\n proname => 'table_to_xml', procost => '100', provolatile => 's',\n proparallel => 'r', prorettype => 'xml',\n@@ -12163,8 +12162,24 @@\n\n # GiST stratnum implementations\n { oid => '8047', descr => 'GiST support',\n- proname => 'gist_stratnum_identity', prorettype => 'int2',\n+ proname => 'gist_stratnum_identity',prorettype => 'int2',\n proargtypes => 'int2',\n prosrc => 'gist_stratnum_identity' },\n\n2.\n+ they are replaced by the next auto-analyze. This function is used by\n+ <command>pg_upgrade</command> and <command>pg_restore</command> to\n+ convey the statistics from the old system version into the new one.\n+ </para>\n\nIs there any demonstration of pg_set_relation_stats and\npg_set_attribute_stats being used either in pg_upgrade or in\npg_restore? Perhaps, having them as 0002, 0003 and so on patches might\nshow real need for functions like this. It also clarifies how these\nfunctions pull stats from tables on the old cluster to the tables on\nthe new cluster.\n\n3. pg_set_relation_stats and pg_set_attribute_stats seem to be writing\nto pg_class and might affect the plans as stats can get tampered. Can\nwe REVOKE the execute permissions from the public out of the box in\nsrc/backend/catalog/system_functions.sql? This way one can decide who\nto give permissions to.\n\n4.\n+SELECT pg_set_relation_stats('stats_export_import.test'::regclass,\n3.6::float4, 15000);\n+ pg_set_relation_stats\n+-----------------------\n+ t\n+(1 row)\n+\n+SELECT reltuples, relpages FROM pg_class WHERE oid =\n'stats_export_import.test'::regclass;\n+ reltuples | relpages\n+-----------+----------\n+ 3.6 | 15000\n\nIsn't this test case showing a misuse of these functions? Table\nactually has no rows, but we are lying to the postgres optimizer on\nstats. I think altering stats of a table mustn't be that easy for the\nend user. As mentioned in comment #3, permissions need to be\ntightened. In addition, we can also mark the functions pg_upgrade only\nwith CHECK_IS_BINARY_UPGRADE, but that might not work for pg_restore\n(or I don't know if we have a way to know within the server that the\nserver is running for pg_restore).\n\n5. In continuation to the comment #2, is pg_dump supposed to generate\npg_set_relation_stats and pg_set_attribute_stats statements for each\ntable? 
When pg_dump does that , pg_restore can automatically load the\nstats.\n\n6.\n+/*-------------------------------------------------------------------------\n* * statistics.c *\n+ * IDENTIFICATION\n+ * src/backend/statistics/statistics.c\n+ *\n+ *-------------------------------------------------------------------------\n\nA description of what the new file statistics.c does is missing.\n\n7. pgindent isn't happy with new file statistics.c, please check.\n\n8.\n+/*\n+ * Import statistics for a given relation attribute\n+ *\n+ * pg_set_attribute_stats(relation Oid, attname name, stainherit bool,\n+ * stanullfrac float4, stawidth int, stadistinct float4,\n\nHaving function definition in the function comment isn't necessary -\nit's hard to keep it consistent with pg_proc.dat in future. If\nrequired, one can either look at pg_proc.dat or docs.\n\n9. Isn't it good to add a test case where the plan of a query on table\nafter exporting the stats would remain same as that of the original\ntable from which the stats are exported? IMO, this is a more realistic\nthan just comparing pg_statistic of the tables because this is what an\nend-user wants eventually.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Sun, 10 Mar 2024 21:27:22 +0530",
"msg_from": "Bharath Rupireddy <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": "On Sun, Mar 10, 2024 at 11:57 AM Bharath Rupireddy <\[email protected]> wrote:\n\n> On Fri, Mar 8, 2024 at 12:06 PM Corey Huinker <[email protected]>\n> wrote:\n> >\n> > Anyway, here's v7. Eagerly awaiting feedback.\n>\n> Thanks for working on this. It looks useful to have the ability to\n> restore the stats after upgrade and restore. But, the exported stats\n> are valid only until the next ANALYZE is run on the table. IIUC,\n> postgres collects stats during VACUUM, autovacuum and ANALYZE, right?\n> Perhaps there are other ways to collect stats. I'm thinking what\n> problems does the user face if they are just asked to run ANALYZE on\n> the tables (I'm assuming ANALYZE doesn't block concurrent access to\n> the tables) instead of automatically exporting stats.\n>\n\nCorrect. These are just as temporary as any other analyze of the table.\nAnother analyze will happen later, probably through autovacuum and wipe out\nthese values. This is designed to QUICKLY get stats into a table to enable\nthe database to be operational sooner. This is especially important after\nan upgrade/restore, when all stats were wiped out. Other uses could be\nadapting this for use the postgres_fdw so that we don't have to do table\nsampling on the remote table, and of course statistics injection to test\nthe query planner.\n\n\n> 2.\n> + they are replaced by the next auto-analyze. This function is used\n> by\n> + <command>pg_upgrade</command> and <command>pg_restore</command> to\n> + convey the statistics from the old system version into the new\n> one.\n> + </para>\n>\n> Is there any demonstration of pg_set_relation_stats and\n> pg_set_attribute_stats being used either in pg_upgrade or in\n> pg_restore? Perhaps, having them as 0002, 0003 and so on patches might\n> show real need for functions like this. It also clarifies how these\n> functions pull stats from tables on the old cluster to the tables on\n> the new cluster.\n>\n\nThat code was adapted from do_analyze(), and yes, there is a patch for\npg_dump, but as I noted earlier it is on hold pending feedback.\n\n\n>\n> 3. pg_set_relation_stats and pg_set_attribute_stats seem to be writing\n> to pg_class and might affect the plans as stats can get tampered. Can\n> we REVOKE the execute permissions from the public out of the box in\n> src/backend/catalog/system_functions.sql? This way one can decide who\n> to give permissions to.\n>\n\nYou'd have to be the table owner to alter the stats. I can envision these\nfunctions getting a special role, but they could also be fine as\nsuperuser-only.\n\n\n>\n> 4.\n> +SELECT pg_set_relation_stats('stats_export_import.test'::regclass,\n> 3.6::float4, 15000);\n> + pg_set_relation_stats\n> +-----------------------\n> + t\n> +(1 row)\n> +\n> +SELECT reltuples, relpages FROM pg_class WHERE oid =\n> 'stats_export_import.test'::regclass;\n> + reltuples | relpages\n> +-----------+----------\n> + 3.6 | 15000\n>\n> Isn't this test case showing a misuse of these functions? Table\n> actually has no rows, but we are lying to the postgres optimizer on\n> stats.\n\n\nConsider this case. You want to know at what point the query planner will\nstart using a given index. 
You can generate dummy data for a thousand, a\nmillion, a billion rows, and wait for that to complete, or you can just\ntell the table \"I say you have a billion rows, twenty million pages, etc\"\nand see when it changes.\n\nBut again, in most cases, you're setting the values to the same values the\ntable had on the old database just before the restore/upgrade.\n\n\n> I think altering stats of a table mustn't be that easy for the\n> end user.\n\n\nOnly easy for the end users that happen to be the table owner or a\nsuperuser.\n\n\n> As mentioned in comment #3, permissions need to be\n> tightened. In addition, we can also mark the functions pg_upgrade only\n> with CHECK_IS_BINARY_UPGRADE, but that might not work for pg_restore\n> (or I don't know if we have a way to know within the server that the\n> server is running for pg_restore).\n>\n\nI think they will have usage both in postgres_fdw and for tuning.\n\n\n>\n> 5. In continuation to the comment #2, is pg_dump supposed to generate\n> pg_set_relation_stats and pg_set_attribute_stats statements for each\n> table? When pg_dump does that , pg_restore can automatically load the\n> stats.\n>\n\nCurrent plan is to have one TOC entry in the post-data section with a\ndependency on the table/index/matview. That let's us leverage existing\nfilters. The TOC entry will have a series of statements in it, one\npg_set_relation_stats() and one pg_set_attribute_stats() per attribute.\n\n\n> 9. Isn't it good to add a test case where the plan of a query on table\n> after exporting the stats would remain same as that of the original\n> table from which the stats are exported? IMO, this is a more realistic\n> than just comparing pg_statistic of the tables because this is what an\n> end-user wants eventually.\n>\n\nI'm sure we can add something like that, but query plan formats change a\nlot and are greatly dependent on database configuration, so maintaining\nsuch a test would be a lot of work.\n\nOn Sun, Mar 10, 2024 at 11:57 AM Bharath Rupireddy <[email protected]> wrote:On Fri, Mar 8, 2024 at 12:06 PM Corey Huinker <[email protected]> wrote:\n>\n> Anyway, here's v7. Eagerly awaiting feedback.\n\nThanks for working on this. It looks useful to have the ability to\nrestore the stats after upgrade and restore. But, the exported stats\nare valid only until the next ANALYZE is run on the table. IIUC,\npostgres collects stats during VACUUM, autovacuum and ANALYZE, right?\n Perhaps there are other ways to collect stats. I'm thinking what\nproblems does the user face if they are just asked to run ANALYZE on\nthe tables (I'm assuming ANALYZE doesn't block concurrent access to\nthe tables) instead of automatically exporting stats.Correct. These are just as temporary as any other analyze of the table. Another analyze will happen later, probably through autovacuum and wipe out these values. This is designed to QUICKLY get stats into a table to enable the database to be operational sooner. This is especially important after an upgrade/restore, when all stats were wiped out. Other uses could be adapting this for use the postgres_fdw so that we don't have to do table sampling on the remote table, and of course statistics injection to test the query planner. 2.\n+ they are replaced by the next auto-analyze. 
This function is used by\n+ <command>pg_upgrade</command> and <command>pg_restore</command> to\n+ convey the statistics from the old system version into the new one.\n+ </para>\n\nIs there any demonstration of pg_set_relation_stats and\npg_set_attribute_stats being used either in pg_upgrade or in\npg_restore? Perhaps, having them as 0002, 0003 and so on patches might\nshow real need for functions like this. It also clarifies how these\nfunctions pull stats from tables on the old cluster to the tables on\nthe new cluster.That code was adapted from do_analyze(), and yes, there is a patch for pg_dump, but as I noted earlier it is on hold pending feedback. \n\n3. pg_set_relation_stats and pg_set_attribute_stats seem to be writing\nto pg_class and might affect the plans as stats can get tampered. Can\nwe REVOKE the execute permissions from the public out of the box in\nsrc/backend/catalog/system_functions.sql? This way one can decide who\nto give permissions to.You'd have to be the table owner to alter the stats. I can envision these functions getting a special role, but they could also be fine as superuser-only. \n\n4.\n+SELECT pg_set_relation_stats('stats_export_import.test'::regclass,\n3.6::float4, 15000);\n+ pg_set_relation_stats\n+-----------------------\n+ t\n+(1 row)\n+\n+SELECT reltuples, relpages FROM pg_class WHERE oid =\n'stats_export_import.test'::regclass;\n+ reltuples | relpages\n+-----------+----------\n+ 3.6 | 15000\n\nIsn't this test case showing a misuse of these functions? Table\nactually has no rows, but we are lying to the postgres optimizer on\nstats.Consider this case. You want to know at what point the query planner will start using a given index. You can generate dummy data for a thousand, a million, a billion rows, and wait for that to complete, or you can just tell the table \"I say you have a billion rows, twenty million pages, etc\" and see when it changes.But again, in most cases, you're setting the values to the same values the table had on the old database just before the restore/upgrade. I think altering stats of a table mustn't be that easy for the\nend user.Only easy for the end users that happen to be the table owner or a superuser. As mentioned in comment #3, permissions need to be\ntightened. In addition, we can also mark the functions pg_upgrade only\nwith CHECK_IS_BINARY_UPGRADE, but that might not work for pg_restore\n(or I don't know if we have a way to know within the server that the\nserver is running for pg_restore).I think they will have usage both in postgres_fdw and for tuning. \n\n5. In continuation to the comment #2, is pg_dump supposed to generate\npg_set_relation_stats and pg_set_attribute_stats statements for each\ntable? When pg_dump does that , pg_restore can automatically load the\nstats.Current plan is to have one TOC entry in the post-data section with a dependency on the table/index/matview. That let's us leverage existing filters. The TOC entry will have a series of statements in it, one pg_set_relation_stats() and one pg_set_attribute_stats() per attribute. 9. Isn't it good to add a test case where the plan of a query on table\nafter exporting the stats would remain same as that of the original\ntable from which the stats are exported? 
IMO, this is a more realistic\nthan just comparing pg_statistic of the tables because this is what an\nend-user wants eventually.I'm sure we can add something like that, but query plan formats change a lot and are greatly dependent on database configuration, so maintaining such a test would be a lot of work.",
"msg_date": "Sun, 10 Mar 2024 15:52:51 -0400",
"msg_from": "Corey Huinker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": "Hi,\n\nOn Fri, Mar 08, 2024 at 01:35:40AM -0500, Corey Huinker wrote:\n> Anyway, here's v7. Eagerly awaiting feedback.\n\nThanks!\n\nA few random comments:\n\n1 ===\n\n+ The purpose of this function is to apply statistics values in an\n+ upgrade situation that are \"good enough\" for system operation until\n\nWorth to add a few words about \"influencing\" the planner use case?\n\n2 ===\n\n+#include \"catalog/pg_type.h\"\n+#include \"fmgr.h\"\n\nAre those 2 needed?\n\n3 ===\n\n+ if (!HeapTupleIsValid(ctup))\n+ elog(ERROR, \"pg_class entry for relid %u vanished during statistics import\",\n\ns/during statistics import/when setting statistics/?\n\n4 ===\n\n+Datum\n+pg_set_relation_stats(PG_FUNCTION_ARGS)\n+{\n.\n.\n+ table_close(rel, ShareUpdateExclusiveLock);\n+\n+ PG_RETURN_BOOL(true);\n\nWhy returning a bool? (I mean we'd throw an error or return true).\n\n5 ===\n\n+ */\n+Datum\n+pg_set_attribute_stats(PG_FUNCTION_ARGS)\n+{\n\nThis function is not that simple, worth to explain its logic in a comment above?\n\n6 ===\n\n+ if (!HeapTupleIsValid(tuple))\n+ {\n+ relation_close(rel, NoLock);\n+ PG_RETURN_BOOL(false);\n+ }\n+\n+ attr = (Form_pg_attribute) GETSTRUCT(tuple);\n+ if (attr->attisdropped)\n+ {\n+ ReleaseSysCache(tuple);\n+ relation_close(rel, NoLock);\n+ PG_RETURN_BOOL(false);\n+ }\n\nWhy is it returning \"false\" and not throwing an error? (if ok, then I think\nwe can get rid of returning a bool).\n\n7 ===\n\n+ * If this relation is an index and that index has expressions in\n+ * it, and the attnum specified\n\ns/is an index and that index has/is an index that has/?\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 11 Mar 2024 08:50:33 +0000",
"msg_from": "Bertrand Drouvot <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": "Greetings,\n\n* Corey Huinker ([email protected]) wrote:\n> > > +/*\n> > > + * Set statistics for a given pg_class entry.\n> > > + *\n> > > + * pg_set_relation_stats(relation Oid, reltuples double, relpages int)\n> > > + *\n> > > + * This does an in-place (i.e. non-transactional) update of pg_class,\n> > just as\n> > > + * is done in ANALYZE.\n> > > + *\n> > > + */\n> > > +Datum\n> > > +pg_set_relation_stats(PG_FUNCTION_ARGS)\n> > > +{\n> > > + const char *param_names[] = {\n> > > + \"relation\",\n> > > + \"reltuples\",\n> > > + \"relpages\",\n> > > + };\n> > > +\n> > > + Oid relid;\n> > > + Relation rel;\n> > > + HeapTuple ctup;\n> > > + Form_pg_class pgcform;\n> > > +\n> > > + for (int i = 0; i <= 2; i++)\n> > > + if (PG_ARGISNULL(i))\n> > > + ereport(ERROR,\n> > > +\n> > (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> > > + errmsg(\"%s cannot be NULL\",\n> > param_names[i])));\n> >\n> > Why not just mark this function as strict..? Or perhaps we should allow\n> > NULLs to be passed in and just not update the current value in that\n> > case?\n> \n> Strict could definitely apply here, and I'm inclined to make it so.\n\nHaving thought about it a bit more, I generally like the idea of being\nable to just update one stat instead of having to update all of them at\nonce (and therefore having to go look up what the other values currently\nare...). That said, per below, perhaps making it strict is the better\nplan.\n\n> > Also, in some cases we allow the function to be called with a\n> > NULL but then make it a no-op rather than throwing an ERROR (eg, if the\n> > OID ends up being NULL).\n> \n> Thoughts on it emitting a WARN or NOTICE before returning false?\n\nEh, I don't think so?\n\nWhere this is coming from is that we can often end up with functions\nlike these being called inside of larger queries, and having them spit\nout WARN or NOTICE will just make them noisy.\n\nThat leads to my general feeling of just returning NULL if called with a\nNULL OID, as we would get with setting the function strict.\n\n> > Not sure if that makes sense here or not\n> > offhand but figured I'd mention it as something to consider.\n> >\n> > > + pgcform = (Form_pg_class) GETSTRUCT(ctup);\n> > > + pgcform->reltuples = PG_GETARG_FLOAT4(1);\n> > > + pgcform->relpages = PG_GETARG_INT32(2);\n> >\n> > Shouldn't we include relallvisible?\n> \n> Yes. No idea why I didn't have that in there from the start.\n\nOk.\n\n> > Also, perhaps we should use the approach that we have in ANALYZE, and\n> > only actually do something if the values are different rather than just\n> > always doing an update.\n> \n> That was how it worked back in v1, more for the possibility that there was\n> no matching JSON to set values.\n> \n> Looking again at analyze.c (currently lines 1751-1780), we just check if\n> there is a row in the way, and if so we replace it. I don't see where we\n> compare existing values to new values.\n\nWell, that code is for pg_statistic while I was looking at pg_class (in\nvacuum.c:1428-1443, where we track if we're actually changing anything\nand only make the pg_class change if there's actually something\ndifferent):\n\nvacuum.c:1531\n /* If anything changed, write out the tuple. */\n if (dirty)\n heap_inplace_update(rd, ctup);\n\nNot sure why we don't treat both the same way though ... 
although it's\nprobably the case that it's much less likely to have an entire\npg_statistic row be identical than the few values in pg_class.\n\n> > > +Datum\n> > > +pg_set_attribute_stats(PG_FUNCTION_ARGS)\n> >\n> > > + /* names of columns that cannot be null */\n> > > + const char *required_param_names[] = {\n> > > + \"relation\",\n> > > + \"attname\",\n> > > + \"stainherit\",\n> > > + \"stanullfrac\",\n> > > + \"stawidth\",\n> > > + \"stadistinct\",\n> > > + \"stakind1\",\n> > > + \"stakind2\",\n> > > + \"stakind3\",\n> > > + \"stakind4\",\n> > > + \"stakind5\",\n> > > + };\n> >\n> > Same comment here as above wrt NULL being passed in.\n> \n> In this case, the last 10 params (stanumbersN and stavaluesN) can be null,\n> and are NULL more often than not.\n\nHmm, that's a valid point, so a NULL passed in would need to set that\nvalue actually to NULL, presumably. Perhaps then we should have\npg_set_relation_stats() be strict and have pg_set_attribute_stats()\nhandles NULLs passed in appropriately, and return NULL if the relation\nitself or attname, or other required (not NULL'able) argument passed in\ncause the function to return NULL.\n\n(What I'm trying to drive at here is a consistent interface for these\nfunctions, but one which does a no-op instead of returning an ERROR on\nvalues being passed in which aren't allowable; it can be quite\nfrustrating trying to get a query to work where one of the functions\ndecides to return ERROR instead of just ignoring things passed in which\naren't valid.)\n\n> > > + for (int k = 0; k < 5; k++)\n> >\n> > Shouldn't we use STATISTIC_NUM_SLOTS here?\n> \n> Yes, I had in the past. Not sure why I didn't again.\n\nNo worries.\n\nThanks!\n\nStephen",
"msg_date": "Mon, 11 Mar 2024 06:00:32 -0400",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": ">\n>\n>\n>\n> Having thought about it a bit more, I generally like the idea of being\n> able to just update one stat instead of having to update all of them at\n> once (and therefore having to go look up what the other values currently\n> are...). That said, per below, perhaps making it strict is the better\n> plan.\n>\n\nv8 has it as strict.\n\n\n>\n> > > Also, in some cases we allow the function to be called with a\n> > > NULL but then make it a no-op rather than throwing an ERROR (eg, if the\n> > > OID ends up being NULL).\n> >\n> > Thoughts on it emitting a WARN or NOTICE before returning false?\n>\n> Eh, I don't think so?\n>\n> Where this is coming from is that we can often end up with functions\n> like these being called inside of larger queries, and having them spit\n> out WARN or NOTICE will just make them noisy.\n>\n> That leads to my general feeling of just returning NULL if called with a\n> NULL OID, as we would get with setting the function strict.\n>\n\nIn which case we're failing nearly silently, yes, there is a null returned,\nbut we have no idea why there is a null returned. If I were using this\nfunction manually I'd want to know what I did wrong, what parameter I\nskipped, etc.\n\n\n> Well, that code is for pg_statistic while I was looking at pg_class (in\n> vacuum.c:1428-1443, where we track if we're actually changing anything\n> and only make the pg_class change if there's actually something\n> different):\n>\n\nI can do that, especially since it's only 3 tuples of known types, but my\nreservations are summed up in the next comment.\n\n\n\n> Not sure why we don't treat both the same way though ... although it's\n> probably the case that it's much less likely to have an entire\n> pg_statistic row be identical than the few values in pg_class.\n>\n\nThat would also involve comparing ANYARRAY values, yuk. Also, a matched\nrecord will never be the case when used in primary purpose of the function\n(upgrades), and not a big deal in the other future cases (if we use it in\nANALYZE on foreign tables instead of remote table samples, users\nexperimenting with tuning queries under hypothetical workloads).\n\n\n\n\n> Hmm, that's a valid point, so a NULL passed in would need to set that\n> value actually to NULL, presumably. 
Perhaps then we should have\n> pg_set_relation_stats() be strict and have pg_set_attribute_stats()\n> handles NULLs passed in appropriately, and return NULL if the relation\n> itself or attname, or other required (not NULL'able) argument passed in\n> cause the function to return NULL.\n>\n\nThat's how I have relstats done in v8, and could make it do that for attr\nstats.\n\n(What I'm trying to drive at here is a consistent interface for these\n> functions, but one which does a no-op instead of returning an ERROR on\n> values being passed in which aren't allowable; it can be quite\n> frustrating trying to get a query to work where one of the functions\n> decides to return ERROR instead of just ignoring things passed in which\n> aren't valid.)\n>\n\nI like the symmetry of a consistent interface, but we've already got an\nasymmetry in that the pg_class update is done non-transactionally (like\nANALYZE does).\n\nOne persistent problem is that there is no _safe equivalent to ARRAY_IN, so\nthat can always fail on us, though it should only do so if the string\npassed in wasn't a valid array input format, or the values in the array\ncan't coerce to the attribute's basetype.\n\nI should also point out that we've lost the ability to check if the export\nvalues were of a type, and if the destination column is also of that type.\nThat's a non-issue in binary upgrades, but of course if a field changed\nfrom integers to text the histograms would now be highly misleading.\nThoughts on adding a typname parameter that the function uses as a cheap\nvalidity check?\n\nv8 attached, incorporating these suggestions plus those of Bharath and\nBertrand. Still no pg_dump.\n\nAs for pg_dump, I'm currently leading toward the TOC entry having either a\nseries of commands:\n\n SELECT pg_set_relation_stats('foo.bar'::regclass, ...);\npg_set_attribute_stats('foo.bar'::regclass, 'id'::name, ...); ...\n\nOr one compound command\n\n SELECT pg_set_relation_stats(t.oid, ...)\n pg_set_attribute_stats(t.oid, 'id'::name, ...),\n pg_set_attribute_stats(t.oid, 'last_name'::name, ...),\n ...\n FROM (VALUES('foo.bar'::regclass)) AS t(oid);\n\nThe second one has the feature that if any one attribute fails, then the\nwhole update fails, except, of course, for the in-place update of pg_class.\nThis avoids having an explicit transaction block, but we could get that\nback by having restore wrap the list of commands in a transaction block\n(and adding the explicit lock commands) when it is safe to do so.",
"msg_date": "Mon, 11 Mar 2024 14:20:36 -0400",
"msg_from": "Corey Huinker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": "Greetings,\n\n* Corey Huinker ([email protected]) wrote:\n> > Having thought about it a bit more, I generally like the idea of being\n> > able to just update one stat instead of having to update all of them at\n> > once (and therefore having to go look up what the other values currently\n> > are...). That said, per below, perhaps making it strict is the better\n> > plan.\n> \n> v8 has it as strict.\n\nOk.\n\n> > > > Also, in some cases we allow the function to be called with a\n> > > > NULL but then make it a no-op rather than throwing an ERROR (eg, if the\n> > > > OID ends up being NULL).\n> > >\n> > > Thoughts on it emitting a WARN or NOTICE before returning false?\n> >\n> > Eh, I don't think so?\n> >\n> > Where this is coming from is that we can often end up with functions\n> > like these being called inside of larger queries, and having them spit\n> > out WARN or NOTICE will just make them noisy.\n> >\n> > That leads to my general feeling of just returning NULL if called with a\n> > NULL OID, as we would get with setting the function strict.\n> \n> In which case we're failing nearly silently, yes, there is a null returned,\n> but we have no idea why there is a null returned. If I were using this\n> function manually I'd want to know what I did wrong, what parameter I\n> skipped, etc.\n\nI can see it both ways and don't feel super strongly about it ... I just\nknow that I've had some cases where we returned an ERROR or otherwise\nwere a bit noisy on NULL values getting passed into a function and it\nwas much more on the annoying side than on the helpful side; to the\npoint where we've gone back and pulled out ereport(ERROR) calls from\nfunctions before because they were causing issues in otherwise pretty\nreasonable queries (consider things like functions getting pushed down\nto below WHERE clauses and such...).\n\n> > Well, that code is for pg_statistic while I was looking at pg_class (in\n> > vacuum.c:1428-1443, where we track if we're actually changing anything\n> > and only make the pg_class change if there's actually something\n> > different):\n> \n> I can do that, especially since it's only 3 tuples of known types, but my\n> reservations are summed up in the next comment.\n\n> > Not sure why we don't treat both the same way though ... although it's\n> > probably the case that it's much less likely to have an entire\n> > pg_statistic row be identical than the few values in pg_class.\n> \n> That would also involve comparing ANYARRAY values, yuk. Also, a matched\n> record will never be the case when used in primary purpose of the function\n> (upgrades), and not a big deal in the other future cases (if we use it in\n> ANALYZE on foreign tables instead of remote table samples, users\n> experimenting with tuning queries under hypothetical workloads).\n\nSure. Not a huge deal either way, was just pointing out the difference.\nI do think it'd be good to match what ANALYZE does here, so checking if\nthe values in pg_class are different and only updating if they are,\nwhile keeping the code for pg_statistic where it'll just always update.\n\n> > Hmm, that's a valid point, so a NULL passed in would need to set that\n> > value actually to NULL, presumably. 
Perhaps then we should have\n> > pg_set_relation_stats() be strict and have pg_set_attribute_stats()\n> > handles NULLs passed in appropriately, and return NULL if the relation\n> > itself or attname, or other required (not NULL'able) argument passed in\n> > cause the function to return NULL.\n> >\n> \n> That's how I have relstats done in v8, and could make it do that for attr\n> stats.\n\nThat'd be my suggestion, at least, but as I mention above, it's not a\nposition I hold very strongly.\n\n> > (What I'm trying to drive at here is a consistent interface for these\n> > functions, but one which does a no-op instead of returning an ERROR on\n> > values being passed in which aren't allowable; it can be quite\n> > frustrating trying to get a query to work where one of the functions\n> > decides to return ERROR instead of just ignoring things passed in which\n> > aren't valid.)\n> \n> I like the symmetry of a consistent interface, but we've already got an\n> asymmetry in that the pg_class update is done non-transactionally (like\n> ANALYZE does).\n\nDon't know that I really consider that to be the same kind of thing when\nit comes to talking about the interface as the other aspects we're\ndiscussing ...\n\n> One persistent problem is that there is no _safe equivalent to ARRAY_IN, so\n> that can always fail on us, though it should only do so if the string\n> passed in wasn't a valid array input format, or the values in the array\n> can't coerce to the attribute's basetype.\n\nThat would happen before we even get to being called and there's not\nmuch to do about it anyway.\n\n> I should also point out that we've lost the ability to check if the export\n> values were of a type, and if the destination column is also of that type.\n> That's a non-issue in binary upgrades, but of course if a field changed\n> from integers to text the histograms would now be highly misleading.\n> Thoughts on adding a typname parameter that the function uses as a cheap\n> validity check?\n\nSeems reasonable to me.\n\n> v8 attached, incorporating these suggestions plus those of Bharath and\n> Bertrand. Still no pg_dump.\n> \n> As for pg_dump, I'm currently leading toward the TOC entry having either a\n> series of commands:\n> \n> SELECT pg_set_relation_stats('foo.bar'::regclass, ...);\n> pg_set_attribute_stats('foo.bar'::regclass, 'id'::name, ...); ...\n\nI'm guessing the above was intended to be SELECT ..; SELECT ..;\n\n> Or one compound command\n> \n> SELECT pg_set_relation_stats(t.oid, ...)\n> pg_set_attribute_stats(t.oid, 'id'::name, ...),\n> pg_set_attribute_stats(t.oid, 'last_name'::name, ...),\n> ...\n> FROM (VALUES('foo.bar'::regclass)) AS t(oid);\n> \n> The second one has the feature that if any one attribute fails, then the\n> whole update fails, except, of course, for the in-place update of pg_class.\n> This avoids having an explicit transaction block, but we could get that\n> back by having restore wrap the list of commands in a transaction block\n> (and adding the explicit lock commands) when it is safe to do so.\n\nHm, I like this approach as it should essentially give us the\ntransaction block we had been talking about wanting but without needing\nto explicitly do a begin/commit, which would add in some annoying\ncomplications. 
This would hopefully also reduce the locking concern\nmentioned previously, since we'd get the lock needed in the first\nfunction call and then the others would be able to just see that we've\nalready got the lock pretty quickly.\n\n> Subject: [PATCH v8] Create pg_set_relation_stats, pg_set_attribute_stats.\n\n[...]\n\n> +Datum\n> +pg_set_relation_stats(PG_FUNCTION_ARGS)\n\n[...]\n\n> +\tctup = SearchSysCacheCopy1(RELOID, ObjectIdGetDatum(relid));\n> +\tif (!HeapTupleIsValid(ctup))\n> +\t\telog(ERROR, \"pg_class entry for relid %u vanished during statistics import\",\n> +\t\t\t relid);\n\nMaybe drop the 'during statistics import' part of this message? Also\nwonder if maybe we should make it a regular ereport() instead, since it\nmight be possible for a user to end up seeing this?\n\n> +\tpgcform = (Form_pg_class) GETSTRUCT(ctup);\n> +\n> +\treltuples = PG_GETARG_FLOAT4(P_RELTUPLES);\n> +\trelpages = PG_GETARG_INT32(P_RELPAGES);\n> +\trelallvisible = PG_GETARG_INT32(P_RELALLVISIBLE);\n> +\n> +\t/* Do not update pg_class unless there is no meaningful change */\n\nThis comment doesn't seem quite right. Maybe it would be better if it\nwas in the positive, eg: Only update pg_class if there is a meaningful\nchange.\n\nRest of it looks pretty good to me, at least.\n\nThanks!\n\nStephen",
"msg_date": "Mon, 11 Mar 2024 14:48:47 -0400",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": ">\n> > In which case we're failing nearly silently, yes, there is a null\n> returned,\n> > but we have no idea why there is a null returned. If I were using this\n> > function manually I'd want to know what I did wrong, what parameter I\n> > skipped, etc.\n>\n> I can see it both ways and don't feel super strongly about it ... I just\n> know that I've had some cases where we returned an ERROR or otherwise\n> were a bit noisy on NULL values getting passed into a function and it\n> was much more on the annoying side than on the helpful side; to the\n> point where we've gone back and pulled out ereport(ERROR) calls from\n> functions before because they were causing issues in otherwise pretty\n> reasonable queries (consider things like functions getting pushed down\n> to below WHERE clauses and such...).\n>\n\nI don't have strong feelings either. I think we should get more input on\nthis. Regardless, it's easy to change...for now.\n\n\n\n>\n> Sure. Not a huge deal either way, was just pointing out the difference.\n> I do think it'd be good to match what ANALYZE does here, so checking if\n> the values in pg_class are different and only updating if they are,\n> while keeping the code for pg_statistic where it'll just always update.\n>\n\nI agree that mirroring ANALYZE wherever possible is the ideal.\n\n\n\n> > I like the symmetry of a consistent interface, but we've already got an\n> > asymmetry in that the pg_class update is done non-transactionally (like\n> > ANALYZE does).\n>\n> Don't know that I really consider that to be the same kind of thing when\n> it comes to talking about the interface as the other aspects we're\n> discussing ...\n>\n\nFair.\n\n\n\n\n>\n> > One persistent problem is that there is no _safe equivalent to ARRAY_IN,\n> so\n> > that can always fail on us, though it should only do so if the string\n> > passed in wasn't a valid array input format, or the values in the array\n> > can't coerce to the attribute's basetype.\n>\n> That would happen before we even get to being called and there's not\n> much to do about it anyway.\n>\n\nNot sure I follow you here. the ARRAY_IN function calls happen once for\nevery non-null stavaluesN parameter, and it's done inside the function\nbecause the result type could be the base type for a domain/array type, or\ncould be the type itself. 
I suppose we could move that determination to the\ncaller, but then we'd need to call get_base_element_type() inside a client,\nand that seems wrong if it's even possible.\n\n\n> > I should also point out that we've lost the ability to check if the\n> export\n> > values were of a type, and if the destination column is also of that\n> type.\n> > That's a non-issue in binary upgrades, but of course if a field changed\n> > from integers to text the histograms would now be highly misleading.\n> > Thoughts on adding a typname parameter that the function uses as a cheap\n> > validity check?\n>\n> Seems reasonable to me.\n>\n\nI'd like to hear what Tomas thinks about this, as he was the initial\nadvocate for it.\n\n\n> > As for pg_dump, I'm currently leading toward the TOC entry having either\n> a\n> > series of commands:\n> >\n> > SELECT pg_set_relation_stats('foo.bar'::regclass, ...);\n> > pg_set_attribute_stats('foo.bar'::regclass, 'id'::name, ...); ...\n>\n> I'm guessing the above was intended to be SELECT ..; SELECT ..;\n>\n\nYes.\n\n\n>\n> > Or one compound command\n> >\n> > SELECT pg_set_relation_stats(t.oid, ...)\n> > pg_set_attribute_stats(t.oid, 'id'::name, ...),\n> > pg_set_attribute_stats(t.oid, 'last_name'::name, ...),\n> > ...\n> > FROM (VALUES('foo.bar'::regclass)) AS t(oid);\n> >\n> > The second one has the feature that if any one attribute fails, then the\n> > whole update fails, except, of course, for the in-place update of\n> pg_class.\n> > This avoids having an explicit transaction block, but we could get that\n> > back by having restore wrap the list of commands in a transaction block\n> > (and adding the explicit lock commands) when it is safe to do so.\n>\n> Hm, I like this approach as it should essentially give us the\n> transaction block we had been talking about wanting but without needing\n> to explicitly do a begin/commit, which would add in some annoying\n> complications. This would hopefully also reduce the locking concern\n> mentioned previously, since we'd get the lock needed in the first\n> function call and then the others would be able to just see that we've\n> already got the lock pretty quickly.\n>\n\nTrue, we'd get the lock needed in the first function call, but wouldn't we\nalso release that very lock before the subsequent call? Obviously we'd be\nshrinking the window in which another process could get in line and take a\nsuperior lock, and the universe of other processes that would even want a\nlock that blocks us is nil in the case of an upgrade, identical to existing\nbehavior in the case of an FDW ANALYZE, and perfectly fine in the case of\nsomeone tinkering with stats.\n\n\n>\n> > Subject: [PATCH v8] Create pg_set_relation_stats, pg_set_attribute_stats.\n>\n> [...]\n>\n> > +Datum\n> > +pg_set_relation_stats(PG_FUNCTION_ARGS)\n>\n> [...]\n>\n> > + ctup = SearchSysCacheCopy1(RELOID, ObjectIdGetDatum(relid));\n> > + if (!HeapTupleIsValid(ctup))\n> > + elog(ERROR, \"pg_class entry for relid %u vanished during\n> statistics import\",\n> > + relid);\n>\n> Maybe drop the 'during statistics import' part of this message? Also\n> wonder if maybe we should make it a regular ereport() instead, since it\n> might be possible for a user to end up seeing this?\n>\n\nAgreed and agreed. It was copypasta from ANALYZE.\n\n\n\n>\n> This comment doesn't seem quite right. 
Maybe it would be better if it\n> was in the positive, eg: Only update pg_class if there is a meaningful\n> change.\n>\n\n+1\n\n> In which case we're failing nearly silently, yes, there is a null returned,\n> but we have no idea why there is a null returned. If I were using this\n> function manually I'd want to know what I did wrong, what parameter I\n> skipped, etc.\n\nI can see it both ways and don't feel super strongly about it ... I just\nknow that I've had some cases where we returned an ERROR or otherwise\nwere a bit noisy on NULL values getting passed into a function and it\nwas much more on the annoying side than on the helpful side; to the\npoint where we've gone back and pulled out ereport(ERROR) calls from\nfunctions before because they were causing issues in otherwise pretty\nreasonable queries (consider things like functions getting pushed down\nto below WHERE clauses and such...).I don't have strong feelings either. I think we should get more input on this. Regardless, it's easy to change...for now. \nSure. Not a huge deal either way, was just pointing out the difference.\nI do think it'd be good to match what ANALYZE does here, so checking if\nthe values in pg_class are different and only updating if they are,\nwhile keeping the code for pg_statistic where it'll just always update.I agree that mirroring ANALYZE wherever possible is the ideal. > I like the symmetry of a consistent interface, but we've already got an\n> asymmetry in that the pg_class update is done non-transactionally (like\n> ANALYZE does).\n\nDon't know that I really consider that to be the same kind of thing when\nit comes to talking about the interface as the other aspects we're\ndiscussing ...Fair. \n\n> One persistent problem is that there is no _safe equivalent to ARRAY_IN, so\n> that can always fail on us, though it should only do so if the string\n> passed in wasn't a valid array input format, or the values in the array\n> can't coerce to the attribute's basetype.\n\nThat would happen before we even get to being called and there's not\nmuch to do about it anyway.Not sure I follow you here. the ARRAY_IN function calls happen once for every non-null stavaluesN parameter, and it's done inside the function because the result type could be the base type for a domain/array type, or could be the type itself. I suppose we could move that determination to the caller, but then we'd need to call get_base_element_type() inside a client, and that seems wrong if it's even possible. > I should also point out that we've lost the ability to check if the export\n> values were of a type, and if the destination column is also of that type.\n> That's a non-issue in binary upgrades, but of course if a field changed\n> from integers to text the histograms would now be highly misleading.\n> Thoughts on adding a typname parameter that the function uses as a cheap\n> validity check?\n\nSeems reasonable to me.I'd like to hear what Tomas thinks about this, as he was the initial advocate for it. > As for pg_dump, I'm currently leading toward the TOC entry having either a\n> series of commands:\n> \n> SELECT pg_set_relation_stats('foo.bar'::regclass, ...);\n> pg_set_attribute_stats('foo.bar'::regclass, 'id'::name, ...); ...\n\nI'm guessing the above was intended to be SELECT ..; SELECT ..;Yes. 
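\n\nSpelled out, the corrected series form would look something like this (argument lists elided):\n\n  SELECT pg_set_relation_stats('foo.bar'::regclass, ...);\n  SELECT pg_set_attribute_stats('foo.bar'::regclass, 'id'::name, ...);\n  SELECT pg_set_attribute_stats('foo.bar'::regclass, 'last_name'::name, ...);\n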
\n\n> Or one compound command\n> \n> SELECT pg_set_relation_stats(t.oid, ...)\n> pg_set_attribute_stats(t.oid, 'id'::name, ...),\n> pg_set_attribute_stats(t.oid, 'last_name'::name, ...),\n> ...\n> FROM (VALUES('foo.bar'::regclass)) AS t(oid);\n> \n> The second one has the feature that if any one attribute fails, then the\n> whole update fails, except, of course, for the in-place update of pg_class.\n> This avoids having an explicit transaction block, but we could get that\n> back by having restore wrap the list of commands in a transaction block\n> (and adding the explicit lock commands) when it is safe to do so.\n\nHm, I like this approach as it should essentially give us the\ntransaction block we had been talking about wanting but without needing\nto explicitly do a begin/commit, which would add in some annoying\ncomplications. This would hopefully also reduce the locking concern\nmentioned previously, since we'd get the lock needed in the first\nfunction call and then the others would be able to just see that we've\nalready got the lock pretty quickly.True, we'd get the lock needed in the first function call, but wouldn't we also release that very lock before the subsequent call? Obviously we'd be shrinking the window in which another process could get in line and take a superior lock, and the universe of other processes that would even want a lock that blocks us is nil in the case of an upgrade, identical to existing behavior in the case of an FDW ANALYZE, and perfectly fine in the case of someone tinkering with stats. \n\n> Subject: [PATCH v8] Create pg_set_relation_stats, pg_set_attribute_stats.\n\n[...]\n\n> +Datum\n> +pg_set_relation_stats(PG_FUNCTION_ARGS)\n\n[...]\n\n> + ctup = SearchSysCacheCopy1(RELOID, ObjectIdGetDatum(relid));\n> + if (!HeapTupleIsValid(ctup))\n> + elog(ERROR, \"pg_class entry for relid %u vanished during statistics import\",\n> + relid);\n\nMaybe drop the 'during statistics import' part of this message? Also\nwonder if maybe we should make it a regular ereport() instead, since it\nmight be possible for a user to end up seeing this?Agreed and agreed. It was copypasta from ANALYZE. \nThis comment doesn't seem quite right. Maybe it would be better if it\nwas in the positive, eg: Only update pg_class if there is a meaningful\nchange.+1",
"msg_date": "Mon, 11 Mar 2024 16:08:05 -0400",
"msg_from": "Corey Huinker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": "Greetings,\n\n* Corey Huinker ([email protected]) wrote:\n> > > One persistent problem is that there is no _safe equivalent to ARRAY_IN,\n> > so\n> > > that can always fail on us, though it should only do so if the string\n> > > passed in wasn't a valid array input format, or the values in the array\n> > > can't coerce to the attribute's basetype.\n> >\n> > That would happen before we even get to being called and there's not\n> > much to do about it anyway.\n> \n> Not sure I follow you here. the ARRAY_IN function calls happen once for\n> every non-null stavaluesN parameter, and it's done inside the function\n> because the result type could be the base type for a domain/array type, or\n> could be the type itself. I suppose we could move that determination to the\n> caller, but then we'd need to call get_base_element_type() inside a client,\n> and that seems wrong if it's even possible.\n\nAh, yeah, ok, I see what you're saying here and sure, there's a risk\nthose might ERROR too, but that's outright invalid data then as opposed\nto a NULL getting passed in.\n\n> > > Or one compound command\n> > >\n> > > SELECT pg_set_relation_stats(t.oid, ...)\n> > > pg_set_attribute_stats(t.oid, 'id'::name, ...),\n> > > pg_set_attribute_stats(t.oid, 'last_name'::name, ...),\n> > > ...\n> > > FROM (VALUES('foo.bar'::regclass)) AS t(oid);\n> > >\n> > > The second one has the feature that if any one attribute fails, then the\n> > > whole update fails, except, of course, for the in-place update of\n> > pg_class.\n> > > This avoids having an explicit transaction block, but we could get that\n> > > back by having restore wrap the list of commands in a transaction block\n> > > (and adding the explicit lock commands) when it is safe to do so.\n> >\n> > Hm, I like this approach as it should essentially give us the\n> > transaction block we had been talking about wanting but without needing\n> > to explicitly do a begin/commit, which would add in some annoying\n> > complications. This would hopefully also reduce the locking concern\n> > mentioned previously, since we'd get the lock needed in the first\n> > function call and then the others would be able to just see that we've\n> > already got the lock pretty quickly.\n> \n> True, we'd get the lock needed in the first function call, but wouldn't we\n> also release that very lock before the subsequent call? Obviously we'd be\n> shrinking the window in which another process could get in line and take a\n> superior lock, and the universe of other processes that would even want a\n> lock that blocks us is nil in the case of an upgrade, identical to existing\n> behavior in the case of an FDW ANALYZE, and perfectly fine in the case of\n> someone tinkering with stats.\n\nNo, we should be keeping the lock until the end of the transaction\n(which in this case would be just the one statement, but it would be the\nwhole statement and all of the calls in it). See analyze.c:268 or\nso, where we call relation_close(onerel, NoLock); meaning we're closing\nthe relation but we're *not* releasing the lock on it- it'll get\nreleased at the end of the transaction.\n\nThanks!\n\nStephen",
"msg_date": "Tue, 12 Mar 2024 04:51:34 -0400",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": ">\n> No, we should be keeping the lock until the end of the transaction\n> (which in this case would be just the one statement, but it would be the\n> whole statement and all of the calls in it). See analyze.c:268 or\n> so, where we call relation_close(onerel, NoLock); meaning we're closing\n> the relation but we're *not* releasing the lock on it- it'll get\n> released at the end of the transaction.\n>\n>\nIf that's the case, then changing the two table_close() statements to\nNoLock should resolve any remaining concern.\n\nNo, we should be keeping the lock until the end of the transaction\n(which in this case would be just the one statement, but it would be the\nwhole statement and all of the calls in it). See analyze.c:268 or\nso, where we call relation_close(onerel, NoLock); meaning we're closing\nthe relation but we're *not* releasing the lock on it- it'll get\nreleased at the end of the transaction.If that's the case, then changing the two table_close() statements to NoLock should resolve any remaining concern.",
"msg_date": "Tue, 12 Mar 2024 12:15:13 -0400",
"msg_from": "Corey Huinker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": "Greetings,\n\n* Corey Huinker ([email protected]) wrote:\n> > No, we should be keeping the lock until the end of the transaction\n> > (which in this case would be just the one statement, but it would be the\n> > whole statement and all of the calls in it). See analyze.c:268 or\n> > so, where we call relation_close(onerel, NoLock); meaning we're closing\n> > the relation but we're *not* releasing the lock on it- it'll get\n> > released at the end of the transaction.\n>\n> If that's the case, then changing the two table_close() statements to\n> NoLock should resolve any remaining concern.\n\nNote that there's two different things we're talking about here- the\nlock on the relation that we're analyzing and then the lock on the\npg_statistic (or pg_class) catalog itself. Currently, at least, it\nlooks like in the three places in the backend that we open\nStatisticRelationId, we release the lock when we close it rather than\nwaiting for transaction end. I'd be inclined to keep it that way in\nthese functions also. I doubt that one lock will end up causing much in\nthe way of issues to acquire/release it multiple times and it would keep\nthe code consistent with the way ANALYZE works.\n\nIf it can be shown to be an issue then we could certainly revisit this.\n\nThanks,\n\nStephen",
"msg_date": "Wed, 13 Mar 2024 08:10:37 -0400",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": ">\n> Note that there's two different things we're talking about here- the\n> lock on the relation that we're analyzing and then the lock on the\n> pg_statistic (or pg_class) catalog itself. Currently, at least, it\n> looks like in the three places in the backend that we open\n> StatisticRelationId, we release the lock when we close it rather than\n> waiting for transaction end. I'd be inclined to keep it that way in\n> these functions also. I doubt that one lock will end up causing much in\n> the way of issues to acquire/release it multiple times and it would keep\n> the code consistent with the way ANALYZE works.\n>\n\nANALYZE takes out one lock StatisticRelationId per relation, not per\nattribute like we do now. If we didn't release the lock after every\nattribute, and we only called the function outside of a larger transaction\n(as we plan to do with pg_restore) then that is the closest we're going to\nget to being consistent with ANALYZE.\n\nNote that there's two different things we're talking about here- the\nlock on the relation that we're analyzing and then the lock on the\npg_statistic (or pg_class) catalog itself. Currently, at least, it\nlooks like in the three places in the backend that we open\nStatisticRelationId, we release the lock when we close it rather than\nwaiting for transaction end. I'd be inclined to keep it that way in\nthese functions also. I doubt that one lock will end up causing much in\nthe way of issues to acquire/release it multiple times and it would keep\nthe code consistent with the way ANALYZE works.ANALYZE takes out one lock StatisticRelationId per relation, not per attribute like we do now. If we didn't release the lock after every attribute, and we only called the function outside of a larger transaction (as we plan to do with pg_restore) then that is the closest we're going to get to being consistent with ANALYZE.",
"msg_date": "Wed, 13 Mar 2024 18:33:14 -0400",
"msg_from": "Corey Huinker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": ">\n> ANALYZE takes out one lock StatisticRelationId per relation, not per\n> attribute like we do now. If we didn't release the lock after every\n> attribute, and we only called the function outside of a larger transaction\n> (as we plan to do with pg_restore) then that is the closest we're going to\n> get to being consistent with ANALYZE.\n>\n\nv9 attached. This adds pg_dump support. It works in tests against existing\ndatabases such as dvdrental, though I was surprised at how few indexes have\nattribute stats there.\n\nStatistics are preserved by default, but this can be disabled with the\noption --no-statistics. This follows the prevailing option pattern in\npg_dump, etc.\n\nThere are currently several failing TAP tests around\npg_dump/pg_restore/pg_upgrade. I'm looking at those, but in the mean\ntime I'm seeking feedback on the progress so far.",
"msg_date": "Fri, 15 Mar 2024 03:55:13 -0400",
"msg_from": "Corey Huinker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": "On Fri, 2024-03-15 at 03:55 -0400, Corey Huinker wrote:\n> \n> Statistics are preserved by default, but this can be disabled with\n> the option --no-statistics. This follows the prevailing option\n> pattern in pg_dump, etc.\n\nI'm not sure if saving statistics should be the default in 17. I'm\ninclined to make it opt-in.\n\n> There are currently several failing TAP tests around\n> pg_dump/pg_restore/pg_upgrade.\n\nIt is a permissions problem. When user running pg_dump is not the\nsuperuser, they don't have permission to access pg_statistic. That\ncauses an error in exportRelationStatsStmt(), which returns NULL, and\nthen the caller segfaults.\n\n> I'm looking at those, but in the mean time I'm seeking feedback on\n> the progress so far.\n\nStill looking, but one quick comment is that the third argument of\ndumpRelationStats() should be const, which eliminates a warning.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Fri, 15 Mar 2024 15:30:51 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Statistics Import and Export"
},
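The permissions gap described here is easy to see with plain SQL, independent of the patch: pg_statistic is superuser-only, while the pg_stats view is readable but filtered to columns the caller is allowed to see. The role name below is made up for illustration.

    SET ROLE regress_unprivileged;   -- any ordinary, non-superuser role

    SELECT count(*) FROM pg_catalog.pg_statistic;  -- ERROR: permission denied
    SELECT count(*) FROM pg_catalog.pg_stats;      -- allowed, but row-filtered

    RESET ROLE;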
{
"msg_contents": "On Fri, 2024-03-15 at 15:30 -0700, Jeff Davis wrote:\n> Still looking, but one quick comment is that the third argument of\n> dumpRelationStats() should be const, which eliminates a warning.\n\nA few other comments:\n\n* pg_set_relation_stats() needs to do an ACL check so you can't set the\nstats on someone else's table. I suggest honoring the new MAINTAIN\nprivilege as well.\n\n* If possible, reading from pg_stats (instead of pg_statistic) would be\nideal because pg_stats already does the right checks at read time, so a\nnon-superuser can export stats, too.\n\n* If reading from pg_stats, should you change the signature of\npg_set_relation_stats() to have argument names matching the columns of\npg_stats (e.g. most_common_vals instead of stakind/stavalues)?\n\nIn other words, make this a slightly higher level: conceptually\nexporting/importing pg_stats rather than pg_statistic. This may also\nmake the SQL export queries simpler.\n\nAlso, I'm wondering about error handling. Is some kind of error thrown\nby pg_set_relation_stats() going to abort an entire restore? That might\nbe easy to prevent with pg_restore, because it can just omit the stats,\nbut harder if it's in a SQL file.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Fri, 15 Mar 2024 16:43:03 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Statistics Import and Export"
},
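For reference, the pg_stats columns being proposed as parameter names can be pulled per attribute with an ordinary query like the one below; no patch is required, and the schema and table names are placeholders.

    SELECT schemaname, tablename, attname, inherited,
           null_frac, avg_width, n_distinct,
           most_common_vals, most_common_freqs,
           histogram_bounds, correlation
    FROM pg_catalog.pg_stats
    WHERE schemaname = 'public'
      AND tablename = 'my_table'
    ORDER BY attname;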
{
"msg_contents": ">\n>\n> * pg_set_relation_stats() needs to do an ACL check so you can't set the\n> stats on someone else's table. I suggest honoring the new MAINTAIN\n> privilege as well.\n>\n\nAdded.\n\n\n>\n> * If possible, reading from pg_stats (instead of pg_statistic) would be\n> ideal because pg_stats already does the right checks at read time, so a\n> non-superuser can export stats, too.\n>\n\nDone. That was sorta how it was originally, so returning to that wasn't too\nhard.\n\n\n>\n> * If reading from pg_stats, should you change the signature of\n> pg_set_relation_stats() to have argument names matching the columns of\n> pg_stats (e.g. most_common_vals instead of stakind/stavalues)?\n>\n\nDone.\n\n\n>\n> In other words, make this a slightly higher level: conceptually\n> exporting/importing pg_stats rather than pg_statistic. This may also\n> make the SQL export queries simpler.\n>\n\nEh, about the same.\n\n\n> Also, I'm wondering about error handling. Is some kind of error thrown\n> by pg_set_relation_stats() going to abort an entire restore? That might\n> be easy to prevent with pg_restore, because it can just omit the stats,\n> but harder if it's in a SQL file.\n>\n\nAside from the oid being invalid, there's not a whole lot that can go wrong\nin set_relation_stats(). The error checking I did closely mirrors that in\nanalyze.c.\n\nAside from the changes you suggested, as well as the error reporting change\nyou suggested for pg_dump, I also filtered out attempts to dump stats on\nviews.\n\nA few TAP tests are still failing and I haven't been able to diagnose why,\nthough the failures in parallel dump seem to be that it tries to import\nstats on indexes that haven't been created yet, which is odd because I sent\nthe dependency.\n\nAll those changes are available in the patches attached.",
"msg_date": "Sun, 17 Mar 2024 23:33:57 -0400",
"msg_from": "Corey Huinker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Statistics Import and Export"
},
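On the ACL point: the MAINTAIN privilege mentioned upthread is grantable like any other table privilege, so delegating stats restoration could look roughly like this. The role and table names are invented, and whether the posted functions honor MAINTAIN is exactly what is under review here.

    CREATE ROLE stats_restorer LOGIN;
    GRANT MAINTAIN ON TABLE public.my_table TO stats_restorer;
    -- With the ACL check added in this version, stats_restorer could then
    -- call the proposed pg_set_*_stats() functions for public.my_table.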
{
"msg_contents": "On Sun, 2024-03-17 at 23:33 -0400, Corey Huinker wrote:\n> \n> A few TAP tests are still failing and I haven't been able to diagnose\n> why, though the failures in parallel dump seem to be that it tries to\n> import stats on indexes that haven't been created yet, which is odd\n> because I sent the dependency.\n\n From testrun/pg_dump/002_pg_dump/log/regress_log_002_pg_dump, search\nfor the \"not ok\" and then look at what it tried to do right before\nthat. I see:\n\npg_dump: error: prepared statement failed: ERROR: syntax error at or\nnear \"%\"\nLINE 1: ..._histogram => %L::real[]) coalesce($2, format('%I.%I',\na.nsp...\n\n> All those changes are available in the patches attached.\n\nHow about if you provided \"get\" versions of the functions that return a\nset of rows that match what the \"set\" versions expect? That would make\n0001 essentially a complete feature itself.\n\nI think it would also make the changes in pg_dump simpler, and the\ntests in 0001 a lot simpler.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Mon, 18 Mar 2024 09:50:42 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Statistics Import and Export"
},
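A "get" counterpart could be little more than a wrapper over pg_stats. The sketch below is hypothetical (the name, signature, and shape are invented here and are not part of the posted patches), but it shows the idea of returning rows in the same shape the "set" side expects.

    CREATE FUNCTION pg_get_attribute_stats(p_rel regclass)
    RETURNS SETOF pg_catalog.pg_stats
    LANGUAGE sql STABLE
    AS $$
        SELECT s.*
        FROM pg_catalog.pg_stats AS s
        WHERE format('%I.%I', s.schemaname, s.tablename)::regclass = p_rel;
    $$;

    -- Usage:
    --   SELECT attname, n_distinct, most_common_vals
    --   FROM pg_get_attribute_stats('public.my_table');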
{
"msg_contents": ">\n>\n>\n>\n> From testrun/pg_dump/002_pg_dump/log/regress_log_002_pg_dump, search\n> for the \"not ok\" and then look at what it tried to do right before\n> that. I see:\n>\n> pg_dump: error: prepared statement failed: ERROR: syntax error at or\n> near \"%\"\n> LINE 1: ..._histogram => %L::real[]) coalesce($2, format('%I.%I',\n> a.nsp...\n>\n\nThanks. Unfamiliar turf for me.\n\n\n>\n> > All those changes are available in the patches attached.\n>\n> How about if you provided \"get\" versions of the functions that return a\n> set of rows that match what the \"set\" versions expect? That would make\n> 0001 essentially a complete feature itself.\n>\n\nThat's tricky. At the base level, those functions would just be an\nencapsulation of \"SELECT * FROM pg_stats WHERE schemaname = $1 AND\ntablename = $2\" which isn't all that much of a savings. Perhaps we can make\nthe documentation more explicit about the source and nature of the\nparameters going into the pg_set_ functions.\n\nPer conversation, it would be trivial to add a helper functions that\nreplace the parameters after the initial oid with a pg_class rowtype, and\nthat would dissect the values needed and call the more complex function:\n\npg_set_relation_stats( oid, pg_class)\npg_set_attribute_stats( oid, pg_stats)\n\n\n>\n> I think it would also make the changes in pg_dump simpler, and the\n> tests in 0001 a lot simpler.\n>\n\nI agree. The tests are currently showing that a fidelity copy can be made\nfrom one table to another, but to do so we have to conceal the actual stats\nvalues because those are 1. not deterministic/known and 2. subject to\nchange from version to version.\n\nI can add some sets to arbitrary values like was done for\npg_set_relation_stats().\n\n\n\n From testrun/pg_dump/002_pg_dump/log/regress_log_002_pg_dump, search\nfor the \"not ok\" and then look at what it tried to do right before\nthat. I see:\n\npg_dump: error: prepared statement failed: ERROR: syntax error at or\nnear \"%\"\nLINE 1: ..._histogram => %L::real[]) coalesce($2, format('%I.%I',\na.nsp...Thanks. Unfamiliar turf for me. \n\n> All those changes are available in the patches attached.\n\nHow about if you provided \"get\" versions of the functions that return a\nset of rows that match what the \"set\" versions expect? That would make\n0001 essentially a complete feature itself.That's tricky. At the base level, those functions would just be an encapsulation of \"SELECT * FROM pg_stats WHERE schemaname = $1 AND tablename = $2\" which isn't all that much of a savings. Perhaps we can make the documentation more explicit about the source and nature of the parameters going into the pg_set_ functions.Per conversation, it would be trivial to add a helper functions that replace the parameters after the initial oid with a pg_class rowtype, and that would dissect the values needed and call the more complex function:pg_set_relation_stats( oid, pg_class)pg_set_attribute_stats( oid, pg_stats) \n\nI think it would also make the changes in pg_dump simpler, and the\ntests in 0001 a lot simpler.I agree. The tests are currently showing that a fidelity copy can be made from one table to another, but to do so we have to conceal the actual stats values because those are 1. not deterministic/known and 2. subject to change from version to version.I can add some sets to arbitrary values like was done for pg_set_relation_stats().",
"msg_date": "Mon, 18 Mar 2024 14:25:40 -0400",
"msg_from": "Corey Huinker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Statistics Import and Export"
},
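For the relation-side helper, the numbers involved come straight from pg_class (presumably the planner-visible counters relpages, reltuples and relallvisible), which anyone can query today; the table name below is a placeholder.

    SELECT c.oid::regclass AS relation,
           c.relpages, c.reltuples, c.relallvisible
    FROM pg_catalog.pg_class AS c
    WHERE c.oid = 'public.my_table'::regclass;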
{
"msg_contents": "v11 attached.\n\n- TAP tests passing (the big glitch was that indexes that are used in\nconstraints should have their stats dependent on the constraint, not the\nindex, thanks Jeff)\n- The new range-specific statistics types are now supported. I'm not happy\nwith the typid machinations I do to get them to work, but it is working so\nfar. These are stored out-of-stakind-order (7 before 6), which is odd\nbecause all other types seem store stakinds in ascending order. It\nshouldn't matter, it was just odd.\n- regression tests now make simpler calls with arbitrary stats to\ndemonstrate the function usage more cleanly\n- pg_set_*_stats function now have all of their parameters in the same\norder as the table/view they pull from",
"msg_date": "Tue, 19 Mar 2024 05:16:29 -0400",
"msg_from": "Corey Huinker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": "On Tue, 2024-03-19 at 05:16 -0400, Corey Huinker wrote:\n> v11 attached.\n\nThank you.\n\nComments on 0001:\n\nThis test:\n\n +SELECT\n + format('SELECT pg_catalog.pg_set_attribute_stats( '\n ...\n\nseems misplaced. It's generating SQL that can be used to restore or\ncopy the stats -- that seems like the job of pg_dump, and shouldn't be\ntested within the plain SQL regression tests.\n\nAnd can the other tests use pg_stats rather than pg_statistic?\n\nThe function signature for pg_set_attribute_stats could be more\nfriendly -- how about there are a few required parameters, and then it\nonly sets the stats that are provided and the other ones are either\nleft to the existing value or get some reasonable default?\n\nMake sure all error paths ReleaseSysCache().\n\nWhy are you calling checkCanModifyRelation() twice?\n\nI'm confused about when the function should return false and when it\nshould throw an error. I'm inclined to think the return type should be\nvoid and all failures should be reported as ERROR.\n\nreplaces[] is initialized to {true}, which means only the first element\nis initialized to true. Try following the pattern in AlterDatabase (or\nsimilar) which reads the catalog tuple first, then updates a few fields\nselectively, setting the corresponding element of replaces[] along the\nway.\n\nThe test also sets the most_common_freqs in an ascending order, which\nis weird.\n\nRelatedly, I got worried recently about the idea of plain users\nupdating statistics. In theory, that should be fine, and the planner\nshould be robust to whatever pg_statistic contains; but in practice\nthere's some risk of mischief there until everyone understands that the\ncontents of pg_stats should not be trusted. Fortunately I didn't find\nany planner crashes or even errors after a brief test.\n\nOne thing we can do is some extra validation for consistency, like\nchecking that the arrays are properly sorted, check for negative\nnumbers in the wrong place, or fractions larger than 1.0, etc.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Wed, 20 Mar 2024 23:29:48 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Statistics Import and Export"
},
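The kind of consistency validation being asked for can be prototyped as plain SQL against pg_stats before it is hard-coded in C. A rough sketch that flags most_common_freqs arrays that are not non-increasing, or that contain values outside [0, 1]; it should return zero rows on a healthy catalog.

    SELECT s.schemaname, s.tablename, s.attname, s.most_common_freqs
    FROM pg_catalog.pg_stats AS s
    WHERE s.most_common_freqs IS NOT NULL
      AND (
            -- not sorted in non-increasing order
            s.most_common_freqs <> (SELECT array_agg(f ORDER BY f DESC)
                                    FROM unnest(s.most_common_freqs) AS f)
            -- or a frequency outside the valid range
         OR EXISTS (SELECT 1 FROM unnest(s.most_common_freqs) AS f
                    WHERE f < 0 OR f > 1)
          );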
{
"msg_contents": "On Thu, Mar 21, 2024 at 2:29 AM Jeff Davis <[email protected]> wrote:\n\n> On Tue, 2024-03-19 at 05:16 -0400, Corey Huinker wrote:\n> > v11 attached.\n>\n> Thank you.\n>\n> Comments on 0001:\n>\n> This test:\n>\n> +SELECT\n> + format('SELECT pg_catalog.pg_set_attribute_stats( '\n> ...\n>\n> seems misplaced. It's generating SQL that can be used to restore or\n> copy the stats -- that seems like the job of pg_dump, and shouldn't be\n> tested within the plain SQL regression tests.\n>\n\nFair enough.\n\n\n>\n> And can the other tests use pg_stats rather than pg_statistic?\n>\n\nThey can, but part of what I wanted to show was that the values that aren't\ndirectly passed in as parameters (staopN, stacollN) get set to the correct\nvalues, and those values aren't guaranteed to match across databases, hence\ntesting them in the regression test rather than in a TAP test. I'd still\nlike to be able to test that.\n\n\n>\n> The function signature for pg_set_attribute_stats could be more\n> friendly -- how about there are a few required parameters, and then it\n> only sets the stats that are provided and the other ones are either\n> left to the existing value or get some reasonable default?\n>\n\nThat would be problematic.\n\n1. We'd have to compare the stats provided against the stats that are\nalready there, make that list in-memory, and then re-order what remains\n2. There would be no way to un-set statistics of a given stakind, unless we\nadded an \"actually set it null\" boolean for each parameter that can be\nnull.\n3. I tried that with the JSON formats, it made the code even messier than\nit already was.\n\nMake sure all error paths ReleaseSysCache().\n>\n\n+1\n\n\n>\n> Why are you calling checkCanModifyRelation() twice?\n>\n\nOnce for the relation itself, and once for pg_statistic.\n\n\n> I'm confused about when the function should return false and when it\n> should throw an error. I'm inclined to think the return type should be\n> void and all failures should be reported as ERROR.\n>\n\nI go back and forth on that. I can see making it void and returning an\nerror for everything that we currently return false for, but if we do that,\nthen a statement with one pg_set_relation_stats, and N\npg_set_attribute_stats (which we lump together in one command for the\nlocking benefits and atomic transaction) would fail entirely if one of the\nset_attributes named a column that we had dropped. It's up for debate\nwhether that's the right behavior or not.\n\nreplaces[] is initialized to {true}, which means only the first element\n> is initialized to true. Try following the pattern in AlterDatabase (or\n> similar) which reads the catalog tuple first, then updates a few fields\n> selectively, setting the corresponding element of replaces[] along the\n> way.\n>\n\n+1.\n\n\n>\n> The test also sets the most_common_freqs in an ascending order, which\n> is weird.\n>\n\nI pulled most of the hardcoded values from pg_stats itself. The sample set\nis trivially small, and the values inserted were in-order-ish. So maybe\nthat's why.\n\n\n> Relatedly, I got worried recently about the idea of plain users\n> updating statistics. In theory, that should be fine, and the planner\n> should be robust to whatever pg_statistic contains; but in practice\n> there's some risk of mischief there until everyone understands that the\n> contents of pg_stats should not be trusted. 
Fortunately I didn't find\n> any planner crashes or even errors after a brief test.\n>\n\nMaybe we could have the functions restricted to a role or roles:\n\n1. pg_write_all_stats (can modify stats on ANY table)\n2. pg_write_own_stats (can modify stats on tables owned by user)\n\nI'm iffy on the need for the first one, I list it first purely to show how\nI derived the name for the second.\n\n\n> One thing we can do is some extra validation for consistency, like\n> checking that the arrays are properly sorted, check for negative\n> numbers in the wrong place, or fractions larger than 1.0, etc.\n>\n\n+1. All suggestions of validation checks welcome.\n\nOn Thu, Mar 21, 2024 at 2:29 AM Jeff Davis <[email protected]> wrote:On Tue, 2024-03-19 at 05:16 -0400, Corey Huinker wrote:\n> v11 attached.\n\nThank you.\n\nComments on 0001:\n\nThis test:\n\n +SELECT\n + format('SELECT pg_catalog.pg_set_attribute_stats( '\n ...\n\nseems misplaced. It's generating SQL that can be used to restore or\ncopy the stats -- that seems like the job of pg_dump, and shouldn't be\ntested within the plain SQL regression tests.Fair enough. \n\nAnd can the other tests use pg_stats rather than pg_statistic?They can, but part of what I wanted to show was that the values that aren't directly passed in as parameters (staopN, stacollN) get set to the correct values, and those values aren't guaranteed to match across databases, hence testing them in the regression test rather than in a TAP test. I'd still like to be able to test that. \n\nThe function signature for pg_set_attribute_stats could be more\nfriendly -- how about there are a few required parameters, and then it\nonly sets the stats that are provided and the other ones are either\nleft to the existing value or get some reasonable default?That would be problematic.1. We'd have to compare the stats provided against the stats that are already there, make that list in-memory, and then re-order what remains2. There would be no way to un-set statistics of a given stakind, unless we added an \"actually set it null\" boolean for each parameter that can be null. 3. I tried that with the JSON formats, it made the code even messier than it already was.Make sure all error paths ReleaseSysCache().+1 \n\nWhy are you calling checkCanModifyRelation() twice?Once for the relation itself, and once for pg_statistic. I'm confused about when the function should return false and when it\nshould throw an error. I'm inclined to think the return type should be\nvoid and all failures should be reported as ERROR.I go back and forth on that. I can see making it void and returning an error for everything that we currently return false for, but if we do that, then a statement with one pg_set_relation_stats, and N pg_set_attribute_stats (which we lump together in one command for the locking benefits and atomic transaction) would fail entirely if one of the set_attributes named a column that we had dropped. It's up for debate whether that's the right behavior or not.replaces[] is initialized to {true}, which means only the first element\nis initialized to true. Try following the pattern in AlterDatabase (or\nsimilar) which reads the catalog tuple first, then updates a few fields\nselectively, setting the corresponding element of replaces[] along the\nway.+1. \n\nThe test also sets the most_common_freqs in an ascending order, which\nis weird.I pulled most of the hardcoded values from pg_stats itself. The sample set is trivially small, and the values inserted were in-order-ish. So maybe that's why. 
Relatedly, I got worried recently about the idea of plain users\nupdating statistics. In theory, that should be fine, and the planner\nshould be robust to whatever pg_statistic contains; but in practice\nthere's some risk of mischief there until everyone understands that the\ncontents of pg_stats should not be trusted. Fortunately I didn't find\nany planner crashes or even errors after a brief test.Maybe we could have the functions restricted to a role or roles:1. pg_write_all_stats (can modify stats on ANY table)2. pg_write_own_stats (can modify stats on tables owned by user)I'm iffy on the need for the first one, I list it first purely to show how I derived the name for the second. One thing we can do is some extra validation for consistency, like\nchecking that the arrays are properly sorted, check for negative\nnumbers in the wrong place, or fractions larger than 1.0, etc.+1. All suggestions of validation checks welcome.",
"msg_date": "Thu, 21 Mar 2024 03:27:47 -0400",
"msg_from": "Corey Huinker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": "On Thu, 2024-03-21 at 03:27 -0400, Corey Huinker wrote:\n> \n> They can, but part of what I wanted to show was that the values that\n> aren't directly passed in as parameters (staopN, stacollN) get set to\n> the correct values, and those values aren't guaranteed to match\n> across databases, hence testing them in the regression test rather\n> than in a TAP test. I'd still like to be able to test that.\n\nOK, that's fine.\n\n> > The function signature for pg_set_attribute_stats could be more\n> > friendly \n...\n> 1. We'd have to compare the stats provided against the stats that are\n> already there, make that list in-memory, and then re-order what\n> remains\n> 2. There would be no way to un-set statistics of a given stakind,\n> unless we added an \"actually set it null\" boolean for each parameter\n> that can be null. \n> 3. I tried that with the JSON formats, it made the code even messier\n> than it already was.\n\nHow about just some defaults then? Many of them have a reasonable\ndefault, like NULL or an empty array. Some are parallel arrays and\neither both should be specified or neither (e.g.\nmost_common_vals+most_common_freqs), but you can check for that.\n\n> > Why are you calling checkCanModifyRelation() twice?\n> \n> Once for the relation itself, and once for pg_statistic.\n\nNobody has the privileges to modify pg_statistic except superuser,\nright? I thought the point of a privilege check is that users could\nmodify statistics for their own tables, or the tables they maintain.\n\n> \n> I can see making it void and returning an error for everything that\n> we currently return false for, but if we do that, then a statement\n> with one pg_set_relation_stats, and N pg_set_attribute_stats (which\n> we lump together in one command for the locking benefits and atomic\n> transaction) would fail entirely if one of the set_attributes named a\n> column that we had dropped. It's up for debate whether that's the\n> right behavior or not.\n\nI'd probably make the dropped column a WARNING with a message like\n\"skipping dropped column whatever\". Regardless, have some kind of\nexplanatory comment.\n\n> \n> I pulled most of the hardcoded values from pg_stats itself. The\n> sample set is trivially small, and the values inserted were in-order-\n> ish. So maybe that's why.\n\nIn my simple test, most_common_freqs is descending:\n\n CREATE TABLE a(i int);\n INSERT INTO a VALUES(1);\n INSERT INTO a VALUES(2);\n INSERT INTO a VALUES(2);\n INSERT INTO a VALUES(3);\n INSERT INTO a VALUES(3);\n INSERT INTO a VALUES(3);\n INSERT INTO a VALUES(4);\n INSERT INTO a VALUES(4);\n INSERT INTO a VALUES(4);\n INSERT INTO a VALUES(4);\n ANALYZE a;\n SELECT most_common_vals, most_common_freqs\n FROM pg_stats WHERE tablename='a';\n most_common_vals | most_common_freqs \n ------------------+-------------------\n {4,3,2} | {0.4,0.3,0.2}\n (1 row)\n\nCan you show an example where it's not?\n\n> \n> Maybe we could have the functions restricted to a role or roles:\n> \n> 1. pg_write_all_stats (can modify stats on ANY table)\n> 2. pg_write_own_stats (can modify stats on tables owned by user)\n\nIf we go that route, we are giving up on the ability for users to\nrestore stats on their own tables. Let's just be careful about\nvalidating data to mitigate this risk.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Thu, 21 Mar 2024 10:27:53 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": ">\n> How about just some defaults then? Many of them have a reasonable\n> default, like NULL or an empty array. Some are parallel arrays and\n> either both should be specified or neither (e.g.\n> most_common_vals+most_common_freqs), but you can check for that.\n>\n\n+1\nDefault NULL has been implemented for all parameters after n_distinct.\n\n\n>\n> > > Why are you calling checkCanModifyRelation() twice?\n> >\n> > Once for the relation itself, and once for pg_statistic.\n>\n> Nobody has the privileges to modify pg_statistic except superuser,\n> right? I thought the point of a privilege check is that users could\n> modify statistics for their own tables, or the tables they maintain.\n>\n\nIn which case wouldn't the checkCanModify on pg_statistic would be a proxy\nfor is_superuser/has_special_role_we_havent_created_yet.\n\n\n\n>\n> >\n> > I can see making it void and returning an error for everything that\n> > we currently return false for, but if we do that, then a statement\n> > with one pg_set_relation_stats, and N pg_set_attribute_stats (which\n> > we lump together in one command for the locking benefits and atomic\n> > transaction) would fail entirely if one of the set_attributes named a\n> > column that we had dropped. It's up for debate whether that's the\n> > right behavior or not.\n>\n> I'd probably make the dropped column a WARNING with a message like\n> \"skipping dropped column whatever\". Regardless, have some kind of\n> explanatory comment.\n>\n\nThat's certainly do-able.\n\n\n\n\n>\n> >\n> > I pulled most of the hardcoded values from pg_stats itself. The\n> > sample set is trivially small, and the values inserted were in-order-\n> > ish. So maybe that's why.\n>\n> In my simple test, most_common_freqs is descending:\n>\n> CREATE TABLE a(i int);\n> INSERT INTO a VALUES(1);\n> INSERT INTO a VALUES(2);\n> INSERT INTO a VALUES(2);\n> INSERT INTO a VALUES(3);\n> INSERT INTO a VALUES(3);\n> INSERT INTO a VALUES(3);\n> INSERT INTO a VALUES(4);\n> INSERT INTO a VALUES(4);\n> INSERT INTO a VALUES(4);\n> INSERT INTO a VALUES(4);\n> ANALYZE a;\n> SELECT most_common_vals, most_common_freqs\n> FROM pg_stats WHERE tablename='a';\n> most_common_vals | most_common_freqs\n> ------------------+-------------------\n> {4,3,2} | {0.4,0.3,0.2}\n> (1 row)\n>\n> Can you show an example where it's not?\n>\n\nNot off hand, no.\n\n\n\n>\n> >\n> > Maybe we could have the functions restricted to a role or roles:\n> >\n> > 1. pg_write_all_stats (can modify stats on ANY table)\n> > 2. pg_write_own_stats (can modify stats on tables owned by user)\n>\n> If we go that route, we are giving up on the ability for users to\n> restore stats on their own tables. Let's just be careful about\n> validating data to mitigate this risk.\n>\n\nA great many test cases coming in the next patch.\n\nHow about just some defaults then? Many of them have a reasonable\ndefault, like NULL or an empty array. Some are parallel arrays and\neither both should be specified or neither (e.g.\nmost_common_vals+most_common_freqs), but you can check for that.+1Default NULL has been implemented for all parameters after n_distinct. \n\n> > Why are you calling checkCanModifyRelation() twice?\n> \n> Once for the relation itself, and once for pg_statistic.\n\nNobody has the privileges to modify pg_statistic except superuser,\nright? 
I thought the point of a privilege check is that users could\nmodify statistics for their own tables, or the tables they maintain.In which case wouldn't the checkCanModify on pg_statistic would be a proxy for is_superuser/has_special_role_we_havent_created_yet. \n\n> \n> I can see making it void and returning an error for everything that\n> we currently return false for, but if we do that, then a statement\n> with one pg_set_relation_stats, and N pg_set_attribute_stats (which\n> we lump together in one command for the locking benefits and atomic\n> transaction) would fail entirely if one of the set_attributes named a\n> column that we had dropped. It's up for debate whether that's the\n> right behavior or not.\n\nI'd probably make the dropped column a WARNING with a message like\n\"skipping dropped column whatever\". Regardless, have some kind of\nexplanatory comment.That's certainly do-able. \n\n> \n> I pulled most of the hardcoded values from pg_stats itself. The\n> sample set is trivially small, and the values inserted were in-order-\n> ish. So maybe that's why.\n\nIn my simple test, most_common_freqs is descending:\n\n CREATE TABLE a(i int);\n INSERT INTO a VALUES(1);\n INSERT INTO a VALUES(2);\n INSERT INTO a VALUES(2);\n INSERT INTO a VALUES(3);\n INSERT INTO a VALUES(3);\n INSERT INTO a VALUES(3);\n INSERT INTO a VALUES(4);\n INSERT INTO a VALUES(4);\n INSERT INTO a VALUES(4);\n INSERT INTO a VALUES(4);\n ANALYZE a;\n SELECT most_common_vals, most_common_freqs\n FROM pg_stats WHERE tablename='a';\n most_common_vals | most_common_freqs \n ------------------+-------------------\n {4,3,2} | {0.4,0.3,0.2}\n (1 row)\n\nCan you show an example where it's not?Not off hand, no. \n\n> \n> Maybe we could have the functions restricted to a role or roles:\n> \n> 1. pg_write_all_stats (can modify stats on ANY table)\n> 2. pg_write_own_stats (can modify stats on tables owned by user)\n\nIf we go that route, we are giving up on the ability for users to\nrestore stats on their own tables. Let's just be careful about\nvalidating data to mitigate this risk.A great many test cases coming in the next patch.",
"msg_date": "Thu, 21 Mar 2024 15:10:42 -0400",
"msg_from": "Corey Huinker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": "On Thu, 2024-03-21 at 15:10 -0400, Corey Huinker wrote:\n> \n> In which case wouldn't the checkCanModify on pg_statistic would be a\n> proxy for is_superuser/has_special_role_we_havent_created_yet.\n\nSo if someone pg_dumps their table and gets the statistics in the SQL,\nthen they will get errors loading it unless they are a member of a\nspecial role?\n\nIf so we'd certainly need to make --no-statistics the default, and have\nsome way of skipping stats during reload of the dump (perhaps make the\nset function a no-op based on a GUC?).\n\nBut ideally we'd just make it safe to dump and reload stats on your own\ntables, and then not worry about it.\n\n> Not off hand, no.\n\nTo me it seems like inconsistent data to have most_common_freqs in\nanything but descending order, and we should prevent it.\n\n> > \nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Thu, 21 Mar 2024 12:26:44 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": ">\n>\n>\n> But ideally we'd just make it safe to dump and reload stats on your own\n> tables, and then not worry about it.\n>\n\nThat is my strong preference, yes.\n\n\n>\n> > Not off hand, no.\n>\n> To me it seems like inconsistent data to have most_common_freqs in\n> anything but descending order, and we should prevent it.\n>\n\nSorry, I misunderstood, I thought we were talking about values, not the\nfrequencies. Yes, the frequencies should only be monotonically\nnon-increasing (i.e. it can go down or flatline from N->N+1). I'll add a\ntest case for that.\n\n\nBut ideally we'd just make it safe to dump and reload stats on your own\ntables, and then not worry about it.That is my strong preference, yes. \n\n> Not off hand, no.\n\nTo me it seems like inconsistent data to have most_common_freqs in\nanything but descending order, and we should prevent it.Sorry, I misunderstood, I thought we were talking about values, not the frequencies. Yes, the frequencies should only be monotonically non-increasing (i.e. it can go down or flatline from N->N+1). I'll add a test case for that.",
"msg_date": "Thu, 21 Mar 2024 15:33:29 -0400",
"msg_from": "Corey Huinker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": "v12 attached.\n\n0001 -\n\nThe functions pg_set_relation_stats() and pg_set_attribute_stats() now\nreturn void. There just weren't enough conditions where a condition was\nconsidered recoverable to justify having it. This may mean that combining\nmultiple pg_set_attribute_stats calls into one compound statement may no\nlonger be desirable, but that's just one of the places where I'd like\nfeedback on how pg_dump/pg_restore use these functions.\n\nThe function pg_set_attribute_stats() now has NULL defaults for all\nstakind-based statistics types. Thus, you can set statistics on a more\nterse basis, like so:\n\nSELECT pg_catalog.pg_set_attribute_stats(\n relation => 'stats_export_import.test'::regclass,\n attname => 'id'::name,\n inherited => false::boolean,\n null_frac => 0.5::real,\n avg_width => 2::integer,\n n_distinct => -0.1::real,\n most_common_vals => '{2,1,3}'::text,\n most_common_freqs => '{0.3,0.25,0.05}'::real[]\n );\n\nThis would generate a pg_statistic row with exactly one stakind in it, and\nreplaces whatever statistics previously existed for that attribute.\n\nIt now checks for many types of data inconsistencies, and most (35) of\nthose have test coverage in the regression. There's a few areas still\nuncovered, mostly surrounding histograms where the datatype is dependent on\nthe attribute.\n\nThe functions both require that the caller be the owner of the table/index.\n\nThe function pg_set_relation_stats is largely unchanged from previous\nversions.\n\nKey areas where I'm seeking feedback:\n\n- What additional checks can be made to ensure validity of statistics?\n- What additional regression tests would be desirable?\n- What extra information can we add to the error messages to give the user\nan idea of how to fix the error?\n- What are some edge cases we should test concerning putting bad stats in a\ntable to get an undesirable outcome?\n\n\n0002 -\n\nThis patch concerns invoking the functions in 0001 via\npg_restore/pg_upgrade. Little has changed here. Dumping statistics is\ncurrently the default for pg_dump/pg_restore/pg_upgrade, and can be\nswitched off with the switch --no-statistics. Some have expressed concern\nabout whether stats dumping should be the default. I have a slight\npreference for making it the default, for the following reasons:\n\n- The existing commandline switches are all --no-something based, and this\nfollows the pattern.\n- Complaints about poor performance post-upgrade are often the result of\nthe user not knowing about vacuumdb --analyze-in-stages or the need to\nmanually ANALYZE. If they don't know about that, how can we expect them to\nknow about about new switches in pg_upgrade?\n- The failure condition means that the user has a table with no stats in it\n(or possibly partial stats, if we change how we make the calls), which is\nexactly where they were before they made the call.\n- Any performance regressions will be remedied with the next autovacuum or\nmanual ANALYZE.\n- If we had a positive flag (e.g. 
--with-statistics or just --statistics),\nand we then changed the default, that could be considered a POLA violation.\n\n\nKey areas where I'm seeking feedback:\n\n- What level of errors in a restore will a user tolerate, and what should\nbe done to the error messages to indicate that the data itself is fine, but\na manual operation to update stats on that particular table is now\nwarranted?\n- To what degree could pg_restore/pg_upgrade take that recovery action\nautomatically?\n- Should the individual attribute/class set function calls be grouped by\nrelation, so that they all succeed/fail together, or should they be called\nseparately, each able to succeed or fail on their own?\n- Any other concerns about how to best use these new functions.",
"msg_date": "Fri, 22 Mar 2024 21:51:01 -0400",
"msg_from": "Corey Huinker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Statistics Import and Export"
},
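To confirm which slot(s) a terse call like the example above actually populated, the raw pg_statistic row can be inspected directly (as superuser, since this is the catalog rather than the pg_stats view). The column names below are the real pg_statistic ones; the table name follows the regression-test schema used in this thread.

    SELECT staattnum, stainherit, stanullfrac, stawidth, stadistinct,
           stakind1, stakind2, stakind3, stakind4, stakind5
    FROM pg_catalog.pg_statistic
    WHERE starelid = 'stats_export_import.test'::regclass;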
{
"msg_contents": "On Fri, Mar 22, 2024 at 9:51 PM Corey Huinker <[email protected]>\nwrote:\n\n> v12 attached.\n>\n>\nv13 attached. All the same features as v12, but with a lot more type\nchecking, bounds checking, value inspection, etc. Perhaps the most notable\nfeature is that we're now ensuring that histogram values are in ascending\norder. This could come in handy for detecting when we are applying stats to\na column of the wrong type, or the right type but with a different\ncollation. It's not a guarantee of validity, of course, but it would detect\negregious changes in sort order.",
"msg_date": "Mon, 25 Mar 2024 04:27:38 -0400",
"msg_from": "Corey Huinker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": "Hi Corey,\n\n\nOn Sat, Mar 23, 2024 at 7:21 AM Corey Huinker <[email protected]>\nwrote:\n\n> v12 attached.\n>\n> 0001 -\n>\n>\nSome random comments\n\n+SELECT\n+ format('SELECT pg_catalog.pg_set_attribute_stats( '\n+ || 'relation => %L::regclass::oid, attname => %L::name, '\n+ || 'inherited => %L::boolean, null_frac => %L::real, '\n+ || 'avg_width => %L::integer, n_distinct => %L::real, '\n+ || 'most_common_vals => %L::text, '\n+ || 'most_common_freqs => %L::real[], '\n+ || 'histogram_bounds => %L::text, '\n+ || 'correlation => %L::real, '\n+ || 'most_common_elems => %L::text, '\n+ || 'most_common_elem_freqs => %L::real[], '\n+ || 'elem_count_histogram => %L::real[], '\n+ || 'range_length_histogram => %L::text, '\n+ || 'range_empty_frac => %L::real, '\n+ || 'range_bounds_histogram => %L::text) ',\n+ 'stats_export_import.' || s.tablename || '_clone', s.attname,\n+ s.inherited, s.null_frac,\n+ s.avg_width, s.n_distinct,\n+ s.most_common_vals, s.most_common_freqs, s.histogram_bounds,\n+ s.correlation, s.most_common_elems, s.most_common_elem_freqs,\n+ s.elem_count_histogram, s.range_length_histogram,\n+ s.range_empty_frac, s.range_bounds_histogram)\n+FROM pg_catalog.pg_stats AS s\n+WHERE s.schemaname = 'stats_export_import'\n+AND s.tablename IN ('test', 'is_odd')\n+\\gexec\n\nWhy do we need to construct the command and execute? Can we instead execute\nthe function directly? That would also avoid ECHO magic.\n\n+ <table id=\"functions-admin-statsimport\">\n+ <title>Database Object Statistics Import Functions</title>\n+ <tgroup cols=\"1\">\n+ <thead>\n+ <row>\n+ <entry role=\"func_table_entry\"><para role=\"func_signature\">\n+ Function\n+ </para>\n+ <para>\n+ Description\n+ </para></entry>\n+ </row>\n+ </thead>\n\nCOMMENT: The functions throw many validation errors. Do we want to list the\nacceptable/unacceptable input values in the documentation corresponding to\nthose? I don't expect one line per argument validation. Something like\n\"these, these and these arguments can not be NULL\" or \"both arguments in\neach of the pairs x and y, a and b, and c and d should be non-NULL or NULL\nrespectively\".\n\n\n\n> The functions pg_set_relation_stats() and pg_set_attribute_stats() now\n> return void. There just weren't enough conditions where a condition was\n> considered recoverable to justify having it. This may mean that combining\n> multiple pg_set_attribute_stats calls into one compound statement may no\n> longer be desirable, but that's just one of the places where I'd like\n> feedback on how pg_dump/pg_restore use these functions.\n>\n>\n> 0002 -\n>\n> This patch concerns invoking the functions in 0001 via\n> pg_restore/pg_upgrade. Little has changed here. Dumping statistics is\n> currently the default for pg_dump/pg_restore/pg_upgrade, and can be\n> switched off with the switch --no-statistics. Some have expressed concern\n> about whether stats dumping should be the default. I have a slight\n> preference for making it the default, for the following reasons:\n>\n>\n+ /* Statistics are dependent on the definition, not the data */\n+ /* Views don't have stats */\n+ if ((tbinfo->dobj.dump & DUMP_COMPONENT_STATISTICS) &&\n+ (tbinfo->relkind == RELKIND_VIEW))\n+ dumpRelationStats(fout, &tbinfo->dobj, reltypename,\n+ tbinfo->dobj.dumpId);\n+\n\nStatistics are about data. Whenever pg_dump dumps some filtered data, the\nstatistics collected for the whole table are uselss. We should avoide\ndumping\nstatistics in such a case. E.g. when only schema is dumped what good is\nstatistics? 
Similarly the statistics on a partitioned table may not be\nuseful\nif some its partitions are not dumped. Said that dumping statistics on\nforeign\ntable makes sense since they do not contain data but the statistics still\nmakes sense.\n\n\n>\n> Key areas where I'm seeking feedback:\n>\n> - What level of errors in a restore will a user tolerate, and what should\n> be done to the error messages to indicate that the data itself is fine, but\n> a manual operation to update stats on that particular table is now\n> warranted?\n> - To what degree could pg_restore/pg_upgrade take that recovery action\n> automatically?\n> - Should the individual attribute/class set function calls be grouped by\n> relation, so that they all succeed/fail together, or should they be called\n> separately, each able to succeed or fail on their own?\n> - Any other concerns about how to best use these new functions.\n>\n>\n>\nWhether or not I pass --no-statistics, there is no difference in the dump\noutput. Am I missing something?\n$ pg_dump -d postgres > /tmp/dump_no_arguments.out\n$ pg_dump -d postgres --no-statistics > /tmp/dump_no_statistics.out\n$ diff /tmp/dump_no_arguments.out /tmp/dump_no_statistics.out\n$\n\nIIUC, pg_dump includes statistics by default. That means all our pg_dump\nrelated tests will have statistics output by default. That's good since the\nfunctionality will always be tested. 1. We need additional tests to ensure\nthat the statistics is installed after restore. 2. Some of those tests\ncompare dumps before and after restore. In case the statistics is changed\nbecause of auto-analyze happening post-restore, these tests will fail.\n\nI believe, in order to import statistics through IMPORT FOREIGN SCHEMA,\npostgresImportForeignSchema() will need to add SELECT commands invoking\npg_set_relation_stats() on each imported table and pg_set_attribute_stats()\non each of its attribute. Am I right? Do we want to make that happen in the\nfirst cut of the feature? How do you expect these functions to be used to\nupdate statistics of foreign tables?\n\n-- \nBest Wishes,\nAshutosh Bapat\n\nHi Corey,On Sat, Mar 23, 2024 at 7:21 AM Corey Huinker <[email protected]> wrote:v12 attached.0001 - Some random comments +SELECT+ format('SELECT pg_catalog.pg_set_attribute_stats( '+ || 'relation => %L::regclass::oid, attname => %L::name, '+ || 'inherited => %L::boolean, null_frac => %L::real, '+ || 'avg_width => %L::integer, n_distinct => %L::real, '+ || 'most_common_vals => %L::text, '+ || 'most_common_freqs => %L::real[], '+ || 'histogram_bounds => %L::text, '+ || 'correlation => %L::real, '+ || 'most_common_elems => %L::text, '+ || 'most_common_elem_freqs => %L::real[], '+ || 'elem_count_histogram => %L::real[], '+ || 'range_length_histogram => %L::text, '+ || 'range_empty_frac => %L::real, '+ || 'range_bounds_histogram => %L::text) ',+ 'stats_export_import.' || s.tablename || '_clone', s.attname,+ s.inherited, s.null_frac,+ s.avg_width, s.n_distinct,+ s.most_common_vals, s.most_common_freqs, s.histogram_bounds,+ s.correlation, s.most_common_elems, s.most_common_elem_freqs,+ s.elem_count_histogram, s.range_length_histogram,+ s.range_empty_frac, s.range_bounds_histogram)+FROM pg_catalog.pg_stats AS s+WHERE s.schemaname = 'stats_export_import'+AND s.tablename IN ('test', 'is_odd')+\\gexecWhy do we need to construct the command and execute? Can we instead execute the function directly? 
That would also avoid ECHO magic.+ <table id=\"functions-admin-statsimport\">+ <title>Database Object Statistics Import Functions</title>+ <tgroup cols=\"1\">+ <thead>+ <row>+ <entry role=\"func_table_entry\"><para role=\"func_signature\">+ Function+ </para>+ <para>+ Description+ </para></entry>+ </row>+ </thead>COMMENT: The functions throw many validation errors. Do we want to list the acceptable/unacceptable input values in the documentation corresponding to those? I don't expect one line per argument validation. Something like \"these, these and these arguments can not be NULL\" or \"both arguments in each of the pairs x and y, a and b, and c and d should be non-NULL or NULL respectively\". The functions pg_set_relation_stats() and pg_set_attribute_stats() now return void. There just weren't enough conditions where a condition was considered recoverable to justify having it. This may mean that combining multiple pg_set_attribute_stats calls into one compound statement may no longer be desirable, but that's just one of the places where I'd like feedback on how pg_dump/pg_restore use these functions.0002 -This patch concerns invoking the functions in 0001 via pg_restore/pg_upgrade. Little has changed here. Dumping statistics is currently the default for pg_dump/pg_restore/pg_upgrade, and can be switched off with the switch --no-statistics. Some have expressed concern about whether stats dumping should be the default. I have a slight preference for making it the default, for the following reasons:+\t/* Statistics are dependent on the definition, not the data */+\t/* Views don't have stats */+\tif ((tbinfo->dobj.dump & DUMP_COMPONENT_STATISTICS) &&+\t\t(tbinfo->relkind == RELKIND_VIEW))+\t\tdumpRelationStats(fout, &tbinfo->dobj, reltypename,+\t\t\t\t\t\t tbinfo->dobj.dumpId);+Statistics are about data. Whenever pg_dump dumps some filtered data, thestatistics collected for the whole table are uselss. We should avoide dumpingstatistics in such a case. E.g. when only schema is dumped what good isstatistics? Similarly the statistics on a partitioned table may not be usefulif some its partitions are not dumped. Said that dumping statistics on foreigntable makes sense since they do not contain data but the statistics still makes sense. Key areas where I'm seeking feedback:- What level of errors in a restore will a user tolerate, and what should be done to the error messages to indicate that the data itself is fine, but a manual operation to update stats on that particular table is now warranted?- To what degree could pg_restore/pg_upgrade take that recovery action automatically?- Should the individual attribute/class set function calls be grouped by relation, so that they all succeed/fail together, or should they be called separately, each able to succeed or fail on their own?- Any other concerns about how to best use these new functions.\n\nWhether or not I pass --no-statistics, there is no difference in the dump output. Am I missing something?$ pg_dump -d postgres > /tmp/dump_no_arguments.out$ pg_dump -d postgres --no-statistics > /tmp/dump_no_statistics.out$ diff /tmp/dump_no_arguments.out /tmp/dump_no_statistics.out$IIUC, pg_dump includes statistics by default. That means all our pg_dump related tests will have statistics output by default. That's good since the functionality will always be tested. 1. We need additional tests to ensure that the statistics is installed after restore. 2. Some of those tests compare dumps before and after restore. 
In case the statistics is changed because of auto-analyze happening post-restore, these tests will fail.I believe, in order to import statistics through IMPORT FOREIGN SCHEMA, postgresImportForeignSchema() will need to add SELECT commands invoking pg_set_relation_stats() on each imported table and pg_set_attribute_stats() on each of its attribute. Am I right? Do we want to make that happen in the first cut of the feature? How do you expect these functions to be used to update statistics of foreign tables?-- Best Wishes,Ashutosh Bapat",
"msg_date": "Mon, 25 Mar 2024 15:38:32 +0530",
"msg_from": "Ashutosh Bapat <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": "On 3/25/24 09:27, Corey Huinker wrote:\n> On Fri, Mar 22, 2024 at 9:51 PM Corey Huinker <[email protected]>\n> wrote:\n> \n>> v12 attached.\n>>\n>>\n> v13 attached. All the same features as v12, but with a lot more type\n> checking, bounds checking, value inspection, etc. Perhaps the most notable\n> feature is that we're now ensuring that histogram values are in ascending\n> order. This could come in handy for detecting when we are applying stats to\n> a column of the wrong type, or the right type but with a different\n> collation. It's not a guarantee of validity, of course, but it would detect\n> egregious changes in sort order.\n> \n\nHi,\n\nI did take a closer look at v13 today. I have a bunch of comments and\nsome minor whitespace fixes in the attached review patches.\n\n0001\n----\n\n1) The docs say this:\n\n <para>\n The purpose of this function is to apply statistics values in an\n upgrade situation that are \"good enough\" for system operation until\n they are replaced by the next <command>ANALYZE</command>, usually via\n <command>autovacuum</command> This function is used by\n <command>pg_upgrade</command> and <command>pg_restore</command> to\n convey the statistics from the old system version into the new one.\n </para>\n\nI find this a bit confusing, considering the pg_dump/pg_restore changes\nare only in 0002, not in this patch.\n\n2) Also, I'm not sure about this:\n\n <parameter>relation</parameter>, the parameters in this are all\n derived from <structname>pg_stats</structname>, and the values\n given are most often extracted from there.\n\nHow do we know where do the values come from \"most often\"? I mean, where\nelse would it come from?\n\n3) The function pg_set_attribute_stats() is veeeeery long - 1000 lines\nor so, that's way too many for me to think about. I agree the flow is\npretty simple, but I still wonder if there's a way to maybe split it\ninto some smaller \"meaningful\" steps.\n\n4) It took me *ages* to realize the enums at the beginning of some of\nthe functions are actually indexes of arguments in PG_FUNCTION_ARGS.\nThat'd surely deserve a comment explaining this.\n\n5) The comment for param_names in pg_set_attribute_stats says this:\n\n /* names of columns that cannot be null */\n const char *param_names[] = { ... }\n\nbut isn't that actually incorrect? I think that applies only to a couple\ninitial arguments, but then other fields (MCV, mcelem stats, ...) can be\nNULL, right?\n\n6) There's a couple minor whitespace fixes or comments etc.\n\n\n0002\n----\n\n1) I don't understand why we have exportExtStatsSupported(). Seems\npointless - nothing calls it, even if it did we don't know how to export\nthe stats.\n\n2) I think this condition in dumpTableSchema() is actually incorrect:\n\n if ((tbinfo->dobj.dump & DUMP_COMPONENT_STATISTICS) &&\n (tbinfo->relkind == RELKIND_VIEW))\n dumpRelationStats(fout, &tbinfo->dobj, reltypename,\n\nAren't indexes pretty much exactly the thing for which we don't want to\ndump statistics? In fact this skips dumping statistics for table - if\nyou dump a database with a single table (-Fc), pg_restore -l will tell\nyou this:\n\n217; 1259 16385 TABLE public t user\n3403; 0 16385 TABLE DATA public t user\n\nWhich is not surprising, because table is not a view. 
With an expression\nindex you get this:\n\n217; 1259 16385 TABLE public t user\n3404; 0 16385 TABLE DATA public t user\n3258; 1259 16418 INDEX public t_expr_idx user\n3411; 0 0 STATS IMPORT public INDEX t_expr_idx\n\nUnfortunately, fixing the condition does not work:\n\n $ pg_dump -Fc test > test.dump\n pg_dump: warning: archive items not in correct section order\n\nThis happens for a very simple reason - the statistics are marked as\nSECTION_POST_DATA, which for the index works, because indexes are in\npost-data section. But the table stats are dumped right after data,\nstill in the \"data\" section.\n\nIMO that's wrong, the statistics should be delayed to the post-data\nsection. Which probably means there needs to be a separate dumpable\nobject for statistics on table/index, with a dependency on the object.\n\n3) I don't like the \"STATS IMPORT\" description. For extended statistics\nwe dump the definition as \"STATISTICS\" so why to shorten it to \"STATS\"\nhere? And \"IMPORT\" seems more like the process of loading data, not the\ndata itself. So I suggest \"STATISTICS DATA\".\n\n\nregards\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Tue, 26 Mar 2024 00:16:48 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": ">\n>\n>\n> +\\gexec\n>\n> Why do we need to construct the command and execute? Can we instead\n> execute the function directly? That would also avoid ECHO magic.\n>\n\nWe don't strictly need it, but I've found the set-difference operation to\nbe incredibly useful in diagnosing problems. Additionally, the values are\nsubject to change due to changes in test data, no guarantee that the output\nof ANALYZE is deterministic, etc. But most of all, because the test cares\nabout the correct copying of values, not the values themselves.\n\n\n>\n> + <table id=\"functions-admin-statsimport\">\n> + <title>Database Object Statistics Import Functions</title>\n> + <tgroup cols=\"1\">\n> + <thead>\n> + <row>\n> + <entry role=\"func_table_entry\"><para role=\"func_signature\">\n> + Function\n> + </para>\n> + <para>\n> + Description\n> + </para></entry>\n> + </row>\n> + </thead>\n>\n> COMMENT: The functions throw many validation errors. Do we want to list\n> the acceptable/unacceptable input values in the documentation corresponding\n> to those? I don't expect one line per argument validation. Something like\n> \"these, these and these arguments can not be NULL\" or \"both arguments in\n> each of the pairs x and y, a and b, and c and d should be non-NULL or NULL\n> respectively\".\n>\n\nYes. It should.\n\n\n> Statistics are about data. Whenever pg_dump dumps some filtered data, the\n> statistics collected for the whole table are uselss. We should avoide\n> dumping\n> statistics in such a case. E.g. when only schema is dumped what good is\n> statistics? Similarly the statistics on a partitioned table may not be\n> useful\n> if some its partitions are not dumped. Said that dumping statistics on\n> foreign\n> table makes sense since they do not contain data but the statistics still\n> makes sense.\n>\n\nGood points, but I'm not immediately sure how to enforce those rules.\n\n\n>\n>\n>>\n>> Key areas where I'm seeking feedback:\n>>\n>> - What level of errors in a restore will a user tolerate, and what should\n>> be done to the error messages to indicate that the data itself is fine, but\n>> a manual operation to update stats on that particular table is now\n>> warranted?\n>> - To what degree could pg_restore/pg_upgrade take that recovery action\n>> automatically?\n>> - Should the individual attribute/class set function calls be grouped by\n>> relation, so that they all succeed/fail together, or should they be called\n>> separately, each able to succeed or fail on their own?\n>> - Any other concerns about how to best use these new functions.\n>>\n>>\n>>\n> Whether or not I pass --no-statistics, there is no difference in the dump\n> output. Am I missing something?\n> $ pg_dump -d postgres > /tmp/dump_no_arguments.out\n> $ pg_dump -d postgres --no-statistics > /tmp/dump_no_statistics.out\n> $ diff /tmp/dump_no_arguments.out /tmp/dump_no_statistics.out\n> $\n>\n> IIUC, pg_dump includes statistics by default. That means all our pg_dump\n> related tests will have statistics output by default. That's good since the\n> functionality will always be tested. 1. We need additional tests to ensure\n> that the statistics is installed after restore. 2. Some of those tests\n> compare dumps before and after restore. 
In case the statistics is changed\n> because of auto-analyze happening post-restore, these tests will fail.\n>\n\n+1\n\n\n> I believe, in order to import statistics through IMPORT FOREIGN SCHEMA,\n> postgresImportForeignSchema() will need to add SELECT commands invoking\n> pg_set_relation_stats() on each imported table and pg_set_attribute_stats()\n> on each of its attribute. Am I right? Do we want to make that happen in the\n> first cut of the feature? How do you expect these functions to be used to\n> update statistics of foreign tables?\n>\n\nI don't think there's time to get it into this release. I think we'd want\nto extend this functionality to both IMPORT FOREIGN SCHEMA and ANALYZE for\nforeign tables, in both cases with a server/table option to do regular\nremote sampling. In both cases, they'd do a remote query very similar to\nwhat pg_dump does (hence putting it in fe_utils), with some filters on\nwhich columns/tables it believes it can trust. The remote table might\nitself be a view (in which case they query would turn up nothing) or column\ndata types may change across the wire, and in those cases we'd have to fall\nback to sampling.\n\n+\\gexecWhy do we need to construct the command and execute? Can we instead execute the function directly? That would also avoid ECHO magic.We don't strictly need it, but I've found the set-difference operation to be incredibly useful in diagnosing problems. Additionally, the values are subject to change due to changes in test data, no guarantee that the output of ANALYZE is deterministic, etc. But most of all, because the test cares about the correct copying of values, not the values themselves. + <table id=\"functions-admin-statsimport\">+ <title>Database Object Statistics Import Functions</title>+ <tgroup cols=\"1\">+ <thead>+ <row>+ <entry role=\"func_table_entry\"><para role=\"func_signature\">+ Function+ </para>+ <para>+ Description+ </para></entry>+ </row>+ </thead>COMMENT: The functions throw many validation errors. Do we want to list the acceptable/unacceptable input values in the documentation corresponding to those? I don't expect one line per argument validation. Something like \"these, these and these arguments can not be NULL\" or \"both arguments in each of the pairs x and y, a and b, and c and d should be non-NULL or NULL respectively\".Yes. It should. Statistics are about data. Whenever pg_dump dumps some filtered data, thestatistics collected for the whole table are uselss. We should avoide dumpingstatistics in such a case. E.g. when only schema is dumped what good isstatistics? Similarly the statistics on a partitioned table may not be usefulif some its partitions are not dumped. Said that dumping statistics on foreigntable makes sense since they do not contain data but the statistics still makes sense.Good points, but I'm not immediately sure how to enforce those rules. 
Key areas where I'm seeking feedback:- What level of errors in a restore will a user tolerate, and what should be done to the error messages to indicate that the data itself is fine, but a manual operation to update stats on that particular table is now warranted?- To what degree could pg_restore/pg_upgrade take that recovery action automatically?- Should the individual attribute/class set function calls be grouped by relation, so that they all succeed/fail together, or should they be called separately, each able to succeed or fail on their own?- Any other concerns about how to best use these new functions.\n\nWhether or not I pass --no-statistics, there is no difference in the dump output. Am I missing something?$ pg_dump -d postgres > /tmp/dump_no_arguments.out$ pg_dump -d postgres --no-statistics > /tmp/dump_no_statistics.out$ diff /tmp/dump_no_arguments.out /tmp/dump_no_statistics.out$IIUC, pg_dump includes statistics by default. That means all our pg_dump related tests will have statistics output by default. That's good since the functionality will always be tested. 1. We need additional tests to ensure that the statistics is installed after restore. 2. Some of those tests compare dumps before and after restore. In case the statistics is changed because of auto-analyze happening post-restore, these tests will fail.+1 I believe, in order to import statistics through IMPORT FOREIGN SCHEMA, postgresImportForeignSchema() will need to add SELECT commands invoking pg_set_relation_stats() on each imported table and pg_set_attribute_stats() on each of its attribute. Am I right? Do we want to make that happen in the first cut of the feature? How do you expect these functions to be used to update statistics of foreign tables?I don't think there's time to get it into this release. I think we'd want to extend this functionality to both IMPORT FOREIGN SCHEMA and ANALYZE for foreign tables, in both cases with a server/table option to do regular remote sampling. In both cases, they'd do a remote query very similar to what pg_dump does (hence putting it in fe_utils), with some filters on which columns/tables it believes it can trust. The remote table might itself be a view (in which case they query would turn up nothing) or column data types may change across the wire, and in those cases we'd have to fall back to sampling.",
"msg_date": "Wed, 27 Mar 2024 02:20:41 -0400",
"msg_from": "Corey Huinker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Statistics Import and Export"
},
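[Editor's note: a minimal SQL sketch of the set-difference check described in the message above, comparing the pg_stats rows of a source table with those of a table whose statistics were imported; the table names and the reduced column list are illustrative assumptions, not the actual regression-test query.]

    SELECT attname, null_frac, avg_width, n_distinct
    FROM pg_stats
    WHERE schemaname = 'public' AND tablename = 'stats_source'
    EXCEPT
    SELECT attname, null_frac, avg_width, n_distinct
    FROM pg_stats
    WHERE schemaname = 'public' AND tablename = 'stats_target';
    -- An empty result means the imported statistics match the originals;
    -- any row that comes back identifies a column whose copied values differ.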
{
"msg_contents": ">\n> 1) The docs say this:\n>\n> <para>\n> The purpose of this function is to apply statistics values in an\n> upgrade situation that are \"good enough\" for system operation until\n> they are replaced by the next <command>ANALYZE</command>, usually via\n> <command>autovacuum</command> This function is used by\n> <command>pg_upgrade</command> and <command>pg_restore</command> to\n> convey the statistics from the old system version into the new one.\n> </para>\n>\n> I find this a bit confusing, considering the pg_dump/pg_restore changes\n> are only in 0002, not in this patch.\n>\n\nTrue, I'll split the docs.\n\n\n>\n> 2) Also, I'm not sure about this:\n>\n> <parameter>relation</parameter>, the parameters in this are all\n> derived from <structname>pg_stats</structname>, and the values\n> given are most often extracted from there.\n>\n> How do we know where do the values come from \"most often\"? I mean, where\n> else would it come from?\n>\n\nThe next most likely sources would be 1. stats from another similar table\nand 2. the imagination of a user testing hypothetical query plans.\n\n\n>\n> 3) The function pg_set_attribute_stats() is veeeeery long - 1000 lines\n> or so, that's way too many for me to think about. I agree the flow is\n> pretty simple, but I still wonder if there's a way to maybe split it\n> into some smaller \"meaningful\" steps.\n>\n\nI wrestle with that myself. I think there's some pieces that can be\nfactored out.\n\n\n> 4) It took me *ages* to realize the enums at the beginning of some of\n> the functions are actually indexes of arguments in PG_FUNCTION_ARGS.\n> That'd surely deserve a comment explaining this.\n>\n\nMy apologies, it definitely deserves a comment.\n\n\n>\n> 5) The comment for param_names in pg_set_attribute_stats says this:\n>\n> /* names of columns that cannot be null */\n> const char *param_names[] = { ... }\n>\n> but isn't that actually incorrect? I think that applies only to a couple\n> initial arguments, but then other fields (MCV, mcelem stats, ...) can be\n> NULL, right?\n>\n\nYes, that is vestigial, I'll remove it.\n\n\n>\n> 6) There's a couple minor whitespace fixes or comments etc.\n>\n>\n> 0002\n> ----\n>\n> 1) I don't understand why we have exportExtStatsSupported(). Seems\n> pointless - nothing calls it, even if it did we don't know how to export\n> the stats.\n>\n\nIt's not strictly necessary.\n\n\n>\n> 2) I think this condition in dumpTableSchema() is actually incorrect:\n>\n> IMO that's wrong, the statistics should be delayed to the post-data\n> section. Which probably means there needs to be a separate dumpable\n> object for statistics on table/index, with a dependency on the object.\n>\n\nGood points.\n\n\n>\n> 3) I don't like the \"STATS IMPORT\" description. For extended statistics\n> we dump the definition as \"STATISTICS\" so why to shorten it to \"STATS\"\n> here? And \"IMPORT\" seems more like the process of loading data, not the\n> data itself. 
So I suggest \"STATISTICS DATA\".\n>\n\n+1\n\n1) The docs say this:\n\n <para>\n The purpose of this function is to apply statistics values in an\n upgrade situation that are \"good enough\" for system operation until\n they are replaced by the next <command>ANALYZE</command>, usually via\n <command>autovacuum</command> This function is used by\n <command>pg_upgrade</command> and <command>pg_restore</command> to\n convey the statistics from the old system version into the new one.\n </para>\n\nI find this a bit confusing, considering the pg_dump/pg_restore changes\nare only in 0002, not in this patch.True, I'll split the docs. \n\n2) Also, I'm not sure about this:\n\n <parameter>relation</parameter>, the parameters in this are all\n derived from <structname>pg_stats</structname>, and the values\n given are most often extracted from there.\n\nHow do we know where do the values come from \"most often\"? I mean, where\nelse would it come from?The next most likely sources would be 1. stats from another similar table and 2. the imagination of a user testing hypothetical query plans. \n\n3) The function pg_set_attribute_stats() is veeeeery long - 1000 lines\nor so, that's way too many for me to think about. I agree the flow is\npretty simple, but I still wonder if there's a way to maybe split it\ninto some smaller \"meaningful\" steps.I wrestle with that myself. I think there's some pieces that can be factored out.\n\n4) It took me *ages* to realize the enums at the beginning of some of\nthe functions are actually indexes of arguments in PG_FUNCTION_ARGS.\nThat'd surely deserve a comment explaining this.My apologies, it definitely deserves a comment. \n\n5) The comment for param_names in pg_set_attribute_stats says this:\n\n /* names of columns that cannot be null */\n const char *param_names[] = { ... }\n\nbut isn't that actually incorrect? I think that applies only to a couple\ninitial arguments, but then other fields (MCV, mcelem stats, ...) can be\nNULL, right?Yes, that is vestigial, I'll remove it. \n\n6) There's a couple minor whitespace fixes or comments etc.\n\n\n0002\n----\n\n1) I don't understand why we have exportExtStatsSupported(). Seems\npointless - nothing calls it, even if it did we don't know how to export\nthe stats.It's not strictly necessary. \n\n2) I think this condition in dumpTableSchema() is actually incorrect:\nIMO that's wrong, the statistics should be delayed to the post-data\nsection. Which probably means there needs to be a separate dumpable\nobject for statistics on table/index, with a dependency on the object.Good points. \n\n3) I don't like the \"STATS IMPORT\" description. For extended statistics\nwe dump the definition as \"STATISTICS\" so why to shorten it to \"STATS\"\nhere? And \"IMPORT\" seems more like the process of loading data, not the\ndata itself. So I suggest \"STATISTICS DATA\".+1",
"msg_date": "Wed, 27 Mar 2024 02:27:17 -0400",
"msg_from": "Corey Huinker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": "Hi Tom,\n\nComparing the current patch set to your advice below:\n\nOn Tue, 2023-12-26 at 14:19 -0500, Tom Lane wrote:\n> I had things set up with simple functions, which\n> pg_dump would invoke by writing more or less\n> \n> SELECT pg_catalog.load_statistics(....);\n> \n> This has a number of advantages, not least of which is that an\n> extension\n> could plausibly add compatible functions to older versions.\n\nCheck.\n\n> The trick,\n> as you say, is to figure out what the argument lists ought to be.\n> Unfortunately I recall few details of what I wrote for Salesforce,\n> but I think I had it broken down in a way where there was a separate\n> function call occurring for each pg_statistic \"slot\", thus roughly\n> \n> load_statistics(table regclass, attname text, stakind int, stavalue\n> ...);\n\nThe problem with basing the function on pg_statistic directly is that\nit can only be exported by the superuser.\n\nThe current patches instead base it on the pg_stats view, which already\ndoes the privilege checking. Technically, information about which\nstakinds go in which slots is lost, but I don't think that's a problem\nas long as the stats make it in, right? It's also more user-friendly to\nhave nice names for the function arguments. The only downside I see is\nthat it's slightly asymmetric: exporting from pg_stats and importing\ninto pg_statistic.\n\nI do have some concerns about letting non-superusers import their own\nstatistics: how robust is the rest of the code to handle malformed\nstats once they make it into pg_statistic? Corey has addressed that\nwith basic input validation, so I think it's fine, but perhaps I'm\nmissing something.\n\n> As mentioned already, we'd also need some sort of\n> version identifier, and we'd expect the load_statistics() functions\n> to be able to transform the data if the old version used a different\n> representation.\n\nYou mean a version argument to the function, which would appear in the\nexported stats data? That's not in the current patch set.\n\nIt's relying on the new version of pg_dump understanding the old\nstatistics data, and dumping it out in a form that the new server will\nunderstand.\n\n> I agree with the idea that an explicit representation\n> of the source table attribute's type would be wise, too.\n\nThat's not in the current patch set, either. \n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Wed, 27 Mar 2024 23:32:09 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": "On Tue, 2024-03-26 at 00:16 +0100, Tomas Vondra wrote:\n> I did take a closer look at v13 today. I have a bunch of comments and\n> some minor whitespace fixes in the attached review patches.\n\nI also attached a patch implementing a different approach to the\npg_dump support. Instead of trying to create a query that uses SQL\n\"format()\" to create more SQL, I did all the formatting in C. It turned\nout to be about 30% fewer lines, and I find it more understandable and\nconsistent with the way other stuff in pg_dump happens.\n\nThe attached patch is pretty rough -- not many comments, and perhaps\nsome things should be moved around. I only tested very basic\ndump/reload in SQL format.\n\nRegards,\n\tJeff Davis",
"msg_date": "Thu, 28 Mar 2024 23:25:38 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": "On Fri, Mar 29, 2024 at 2:25 AM Jeff Davis <[email protected]> wrote:\n\n> I also attached a patch implementing a different approach to the\n> pg_dump support. Instead of trying to create a query that uses SQL\n> \"format()\" to create more SQL, I did all the formatting in C. It turned\n> out to be about 30% fewer lines, and I find it more understandable and\n> consistent with the way other stuff in pg_dump happens.\n>\n\nThat is fairly close to what I came up with per our conversation (attached\nbelow), but I really like the att_stats_arginfo construct and I definitely\nwant to adopt that and expand it to a third dimension that flags the fields\nthat cannot be null. I will incorporate that into v15.\n\nAs for v14, here are the highlights:\n\n0001:\n- broke up pg_set_attribute_stats() into many functions. Every stat kind\ngets its own validation function. Type derivation is now done in its own\nfunction.\n- removed check on inherited stats flag that required the table be\npartitioned. that was in error\n- added check for most_common_values to be unique in ascending order, and\ntests to match\n- no more mention of pg_dump in the function documentation\n- function documentation cites pg-stats-view as reference for the\nparameter's data requirements\n\n0002:\n- All relstats and attrstats calls are now their own statement instead of a\ncompound statement\n- moved the archive TOC entry from post-data back to SECTION_NONE (as it\nwas modeled on object COMMENTs), which seems to work better.\n- remove meta-query in favor of more conventional query building\n- removed all changes to fe_utils/",
"msg_date": "Fri, 29 Mar 2024 05:32:40 -0400",
"msg_from": "Corey Huinker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": "On Fri, 2024-03-29 at 05:32 -0400, Corey Huinker wrote:\n> That is fairly close to what I came up with per our conversation\n> (attached below), but I really like the att_stats_arginfo construct\n> and I definitely want to adopt that and expand it to a third\n> dimension that flags the fields that cannot be null. I will\n> incorporate that into v15.\n\nSounds good. I think it cuts down on the boilerplate.\n\n> 0002:\n> - All relstats and attrstats calls are now their own statement\n> instead of a compound statement\n> - moved the archive TOC entry from post-data back to SECTION_NONE (as\n> it was modeled on object COMMENTs), which seems to work better.\n> - remove meta-query in favor of more conventional query building\n> - removed all changes to fe_utils/\n\nCan we get a consensus on whether the default should be with stats or\nwithout? That seems like the most important thing remaining in the\npg_dump changes.\n\nThere's still a failure in the pg_upgrade TAP test. One seems to be\nordering, so perhaps we need to ORDER BY the attribute number. Others\nseem to be missing relstats and I'm not sure why yet. I suggest doing\nsome manual pg_upgrade tests and comparing the before/after dumps to\nsee if you can reproduce a smaller version of the problem.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Fri, 29 Mar 2024 08:05:00 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": "Greetings,\n\nOn Fri, Mar 29, 2024 at 11:05 Jeff Davis <[email protected]> wrote:\n\n> On Fri, 2024-03-29 at 05:32 -0400, Corey Huinker wrote:\n> > 0002:\n> > - All relstats and attrstats calls are now their own statement\n> > instead of a compound statement\n> > - moved the archive TOC entry from post-data back to SECTION_NONE (as\n> > it was modeled on object COMMENTs), which seems to work better.\n> > - remove meta-query in favor of more conventional query building\n> > - removed all changes to fe_utils/\n>\n> Can we get a consensus on whether the default should be with stats or\n> without? That seems like the most important thing remaining in the\n> pg_dump changes.\n\n\nI’d certainly think “with stats” would be the preferred default of our\nusers.\n\nThanks!\n\nStephen\n\nGreetings,On Fri, Mar 29, 2024 at 11:05 Jeff Davis <[email protected]> wrote:On Fri, 2024-03-29 at 05:32 -0400, Corey Huinker wrote:\n> 0002:\n> - All relstats and attrstats calls are now their own statement\n> instead of a compound statement\n> - moved the archive TOC entry from post-data back to SECTION_NONE (as\n> it was modeled on object COMMENTs), which seems to work better.\n> - remove meta-query in favor of more conventional query building\n> - removed all changes to fe_utils/\n\nCan we get a consensus on whether the default should be with stats or\nwithout? That seems like the most important thing remaining in the\npg_dump changes.I’d certainly think “with stats” would be the preferred default of our users.Thanks!Stephen",
"msg_date": "Fri, 29 Mar 2024 18:02:27 -0400",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": ">\n>\n> There's still a failure in the pg_upgrade TAP test. One seems to be\n> ordering, so perhaps we need to ORDER BY the attribute number. Others\n> seem to be missing relstats and I'm not sure why yet. I suggest doing\n> some manual pg_upgrade tests and comparing the before/after dumps to\n> see if you can reproduce a smaller version of the problem.\n>\n\nThat's fixed in my current working version, as is a tsvector-specific\nissue. Working on the TAP issue.\n\nThere's still a failure in the pg_upgrade TAP test. One seems to be\nordering, so perhaps we need to ORDER BY the attribute number. Others\nseem to be missing relstats and I'm not sure why yet. I suggest doing\nsome manual pg_upgrade tests and comparing the before/after dumps to\nsee if you can reproduce a smaller version of the problem.That's fixed in my current working version, as is a tsvector-specific issue. Working on the TAP issue.",
"msg_date": "Fri, 29 Mar 2024 19:31:19 -0400",
"msg_from": "Corey Huinker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": "On Fri, 2024-03-29 at 18:02 -0400, Stephen Frost wrote:\n> I’d certainly think “with stats” would be the preferred default of\n> our users.\n\nI'm concerned there could still be paths that lead to an error. For\npg_restore, or when loading a SQL file, a single error isn't fatal\n(unless -e is specified), but it still could be somewhat scary to see\nerrors during a reload.\n\nAlso, it's new behavior, so it may cause some minor surprises, or there\nmight be minor interactions to work out. For instance, dumping stats\ndoesn't make a lot of sense if pg_upgrade (or something else) is just\ngoing to run analyze anyway.\n\nWhat do you think about starting off with it as non-default, and then\nswitching it to default in 18?\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Fri, 29 Mar 2024 16:34:48 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": "Jeff Davis <[email protected]> writes:\n> On Fri, 2024-03-29 at 18:02 -0400, Stephen Frost wrote:\n>> I’d certainly think “with stats” would be the preferred default of\n>> our users.\n\n> What do you think about starting off with it as non-default, and then\n> switching it to default in 18?\n\nI'm with Stephen: I find it very hard to imagine that there's any\nusers who wouldn't want this as default. If we do what you suggest,\nthen there will be three historical behaviors to cope with not two.\nThat doesn't sound like it'll make anyone's life better.\n\nAs for the \"it might break\" argument, that could be leveled against\nany nontrivial patch. You're at least offering an opt-out switch,\nwhich is something we more often don't do.\n\n(I've not read the patch yet, but I assume the switch works like\nother pg_dump filters in that you can apply it on the restore\nside?)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 29 Mar 2024 19:47:42 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": "On Fri, Mar 29, 2024 at 7:34 PM Jeff Davis <[email protected]> wrote:\n\n> On Fri, 2024-03-29 at 18:02 -0400, Stephen Frost wrote:\n> > I’d certainly think “with stats” would be the preferred default of\n> > our users.\n>\n> I'm concerned there could still be paths that lead to an error. For\n> pg_restore, or when loading a SQL file, a single error isn't fatal\n> (unless -e is specified), but it still could be somewhat scary to see\n> errors during a reload.\n>\n\nTo that end, I'm going to be modifying the \"Optimizer statistics are not\ntransferred by pg_upgrade...\" message when stats _were_ transferred,\nwidth additional instructions that the user should treat any stats-ish\nerror messages encountered as a reason to manually analyze that table. We\nshould probably say something about extended stats as well.\n\nOn Fri, Mar 29, 2024 at 7:34 PM Jeff Davis <[email protected]> wrote:On Fri, 2024-03-29 at 18:02 -0400, Stephen Frost wrote:\n> I’d certainly think “with stats” would be the preferred default of\n> our users.\n\nI'm concerned there could still be paths that lead to an error. For\npg_restore, or when loading a SQL file, a single error isn't fatal\n(unless -e is specified), but it still could be somewhat scary to see\nerrors during a reload.To that end, I'm going to be modifying the \"Optimizer statistics are not transferred by pg_upgrade...\" message when stats _were_ transferred, width additional instructions that the user should treat any stats-ish error messages encountered as a reason to manually analyze that table. We should probably say something about extended stats as well.",
"msg_date": "Fri, 29 Mar 2024 20:26:16 -0400",
"msg_from": "Corey Huinker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": ">\n> (I've not read the patch yet, but I assume the switch works like\n> other pg_dump filters in that you can apply it on the restore\n> side?)\n>\n\nCorrect. It follows the existing --no-something pattern.\n\n(I've not read the patch yet, but I assume the switch works like\nother pg_dump filters in that you can apply it on the restore\nside?)Correct. It follows the existing --no-something pattern.",
"msg_date": "Fri, 29 Mar 2024 20:28:12 -0400",
"msg_from": "Corey Huinker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Statistics Import and Export"
},
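[Editor's note: a hedged usage sketch of the opt-out switch discussed in the preceding messages, assuming the flag keeps the --no-statistics spelling used earlier in this thread and that it is accepted on both the pg_dump and pg_restore sides; database and file names are placeholders.]

    $ pg_dump -Fc -d mydb -f mydb.dump                # statistics are dumped by default
    $ pg_restore -d newdb mydb.dump                   # restores schema, data, and statistics
    $ pg_restore --no-statistics -d newdb mydb.dump   # same archive, statistics skipped on the restore side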
{
"msg_contents": "Greetings,\n\nOn Fri, Mar 29, 2024 at 19:35 Jeff Davis <[email protected]> wrote:\n\n> On Fri, 2024-03-29 at 18:02 -0400, Stephen Frost wrote:\n> > I’d certainly think “with stats” would be the preferred default of\n> > our users.\n>\n> I'm concerned there could still be paths that lead to an error. For\n> pg_restore, or when loading a SQL file, a single error isn't fatal\n> (unless -e is specified), but it still could be somewhat scary to see\n> errors during a reload.\n\n\nI understand that point.\n\nAlso, it's new behavior, so it may cause some minor surprises, or there\n> might be minor interactions to work out. For instance, dumping stats\n> doesn't make a lot of sense if pg_upgrade (or something else) is just\n> going to run analyze anyway.\n\n\nBut we don’t expect anything to run analyze … do we? So I’m not sure why\nit makes sense to raise this as a concern.\n\nWhat do you think about starting off with it as non-default, and then\n> switching it to default in 18?\n\n\nWhat’s different, given the above arguments, in making the change with 18\ninstead of now? I also suspect that if we say “we will change the default\nlater” … that later won’t ever come and we will end up making our users\nalways have to remember to say “with-stats” instead.\n\nThe stats are important which is why the effort is being made in the first\nplace. If just doing an analyze after loading the data was good enough then\nthis wouldn’t be getting worked on.\n\nIndependently, I had a thought around doing an analyze as the data is being\nloaded .. but we can’t do that for indexes (but we could perhaps analyze\nthe indexed values as we build the index..). This works when we do a\ntruncate or create the table in the same transaction, so we would tie into\nsome of the existing logic that we have around that. Would also adjust\nCOPY to accept an option that specifies the anticipated number of rows\nbeing loaded (which we can figure out during the dump phase reasonably..).\nPerhaps this would lead to a pg_dump option to do the data load as a\ntransaction with a truncate before the copy (point here being to be able to\nstill do parallel load while getting the benefits from knowing that we are\ncompletely reloading the table). Just some other thoughts- which I don’t\nintend to take away from the current effort at all, which I see as valuable\nand should be enabled by default.\n\nThanks!\n\nStephen\n\n>\n\nGreetings,On Fri, Mar 29, 2024 at 19:35 Jeff Davis <[email protected]> wrote:On Fri, 2024-03-29 at 18:02 -0400, Stephen Frost wrote:\n> I’d certainly think “with stats” would be the preferred default of\n> our users.\n\nI'm concerned there could still be paths that lead to an error. For\npg_restore, or when loading a SQL file, a single error isn't fatal\n(unless -e is specified), but it still could be somewhat scary to see\nerrors during a reload.I understand that point.\nAlso, it's new behavior, so it may cause some minor surprises, or there\nmight be minor interactions to work out. For instance, dumping stats\ndoesn't make a lot of sense if pg_upgrade (or something else) is just\ngoing to run analyze anyway.But we don’t expect anything to run analyze … do we? So I’m not sure why it makes sense to raise this as a concern. \nWhat do you think about starting off with it as non-default, and then\nswitching it to default in 18?What’s different, given the above arguments, in making the change with 18 instead of now? 
I also suspect that if we say “we will change the default later” … that later won’t ever come and we will end up making our users always have to remember to say “with-stats” instead.The stats are important which is why the effort is being made in the first place. If just doing an analyze after loading the data was good enough then this wouldn’t be getting worked on.Independently, I had a thought around doing an analyze as the data is being loaded .. but we can’t do that for indexes (but we could perhaps analyze the indexed values as we build the index..). This works when we do a truncate or create the table in the same transaction, so we would tie into some of the existing logic that we have around that. Would also adjust COPY to accept an option that specifies the anticipated number of rows being loaded (which we can figure out during the dump phase reasonably..). Perhaps this would lead to a pg_dump option to do the data load as a transaction with a truncate before the copy (point here being to be able to still do parallel load while getting the benefits from knowing that we are completely reloading the table). Just some other thoughts- which I don’t intend to take away from the current effort at all, which I see as valuable and should be enabled by default.Thanks!Stephen",
"msg_date": "Fri, 29 Mar 2024 20:54:20 -0400",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": "On Fri, 2024-03-29 at 20:54 -0400, Stephen Frost wrote:\n> What’s different, given the above arguments, in making the change\n> with 18 instead of now?\n\nAcknowledged. You, Tom, and Corey (and perhaps everyone else) seem to\nbe aligned here, so that's consensus enough for me. Default is with\nstats, --no-statistics to disable them.\n\n> Independently, I had a thought around doing an analyze as the data is\n> being loaded ..\n\nRight, I think there are some interesting things to pursue here. I also\nhad an idea to use logical decoding to get a streaming sample, which\nwould be better randomness than block sampling. At this point that's\njust an idea, I haven't looked into it seriously.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Fri, 29 Mar 2024 18:02:40 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": ">\n> Right, I think there are some interesting things to pursue here. I also\n> had an idea to use logical decoding to get a streaming sample, which\n> would be better randomness than block sampling. At this point that's\n> just an idea, I haven't looked into it seriously.\n>\n> Regards,\n> Jeff Davis\n>\n>\n\nv15 attached\n\n0001:\n- fixed an error involving tsvector types\n- only derive element type if element stats available\n- general cleanup\n\n0002:\n\n- 002pg_upgrade.pl now dumps before/after databases with --no-statistics. I\ntried to find out why some tables were getting their relstats either not\nset, or set and reset, never affecting the attribute stats. I even tried\nturning autovacuum off for both instances, but nothing seemed to change the\nfact that the same tables were having their relstats reset.\n\nTODO list:\n\n- decision on whether suppressing stats in the pg_upgrade TAP check is for\nthe best\n- pg_upgrade option to suppress stats import, there is no real pattern to\nfollow there\n- what message text to convey to the user about the potential stats import\nerrors and their remediation, and to what degree that replaces the \"you\nought to run vacuumdb\" message.\n- what additional error context we want to add to the array_in() imports of\nanyarray strings",
"msg_date": "Sat, 30 Mar 2024 01:34:16 -0400",
"msg_from": "Corey Huinker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": "On Sat, Mar 30, 2024 at 1:26 AM Corey Huinker <[email protected]>\nwrote:\n\n>\n>\n> On Fri, Mar 29, 2024 at 7:34 PM Jeff Davis <[email protected]> wrote:\n>\n>> On Fri, 2024-03-29 at 18:02 -0400, Stephen Frost wrote:\n>> > I’d certainly think “with stats” would be the preferred default of\n>> > our users.\n>>\n>> I'm concerned there could still be paths that lead to an error. For\n>> pg_restore, or when loading a SQL file, a single error isn't fatal\n>> (unless -e is specified), but it still could be somewhat scary to see\n>> errors during a reload.\n>>\n>\n> To that end, I'm going to be modifying the \"Optimizer statistics are not\n> transferred by pg_upgrade...\" message when stats _were_ transferred,\n> width additional instructions that the user should treat any stats-ish\n> error messages encountered as a reason to manually analyze that table. We\n> should probably say something about extended stats as well.\n>\n>\n\nI'm getting late into this discussion and I apologize if I've missed this\nbeing discussed before. But.\n\nPlease don't.\n\nThat will make it *really* hard for any form of automation or drivers of\nthis. The information needs to go somewhere where such tools can easily\nconsume it, and an informational message during runtime (which is also\nlikely to be translated in many environments) is the exact opposite of that.\n\nSurely we can come up with something better. Otherwise, I think all those\ntools are just going ot have to end up assuming that it always failed and\nproceed based on that, and that would be a shame.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>\n\nOn Sat, Mar 30, 2024 at 1:26 AM Corey Huinker <[email protected]> wrote:On Fri, Mar 29, 2024 at 7:34 PM Jeff Davis <[email protected]> wrote:On Fri, 2024-03-29 at 18:02 -0400, Stephen Frost wrote:\n> I’d certainly think “with stats” would be the preferred default of\n> our users.\n\nI'm concerned there could still be paths that lead to an error. For\npg_restore, or when loading a SQL file, a single error isn't fatal\n(unless -e is specified), but it still could be somewhat scary to see\nerrors during a reload.To that end, I'm going to be modifying the \"Optimizer statistics are not transferred by pg_upgrade...\" message when stats _were_ transferred, width additional instructions that the user should treat any stats-ish error messages encountered as a reason to manually analyze that table. We should probably say something about extended stats as well. \nI'm getting late into this discussion and I apologize if I've missed this being discussed before. But.Please don't.That will make it *really* hard for any form of automation or drivers of this. The information needs to go somewhere where such tools can easily consume it, and an informational message during runtime (which is also likely to be translated in many environments) is the exact opposite of that.Surely we can come up with something better. Otherwise, I think all those tools are just going ot have to end up assuming that it always failed and proceed based on that, and that would be a shame.-- Magnus Hagander Me: https://www.hagander.net/ Work: https://www.redpill-linpro.com/",
"msg_date": "Sat, 30 Mar 2024 12:26:59 +0100",
"msg_from": "Magnus Hagander <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": "On Sat, 2024-03-30 at 01:34 -0400, Corey Huinker wrote:\n> \n> - 002pg_upgrade.pl now dumps before/after databases with --no-\n> statistics. I tried to find out why some tables were getting their\n> relstats either not set, or set and reset, never affecting the\n> attribute stats. I even tried turning autovacuum off for both\n> instances, but nothing seemed to change the fact that the same tables\n> were having their relstats reset.\n\nI think I found out why this is happening: a schema-only dump first\ncreates the table, then sets the relstats, then creates indexes. The\nindex creation updates the relstats, but because the dump was schema-\nonly, it overwrites the relstats with zeros.\n\nThat exposes an interesting dependency, which is that relstats must be\nset after index creation, otherwise they will be lost -- at least in\nthe case of pg_upgrade.\n\nThis re-raises the question of whether stats are part of a schema-only\ndump or not. Did we settle conclusively that they are?\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Sat, 30 Mar 2024 10:01:54 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": "Jeff Davis <[email protected]> writes:\n> This re-raises the question of whether stats are part of a schema-only\n> dump or not. Did we settle conclusively that they are?\n\nSurely they are data, not schema. It would make zero sense to restore\nthem if you aren't restoring the data they describe.\n\nHence, it'll be a bit messy if we can't put them in the dump's DATA\nsection. Maybe we need to revisit CREATE INDEX's behavior rather\nthan assuming it's graven in stone?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 30 Mar 2024 13:18:54 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": "On Sat, 2024-03-30 at 13:18 -0400, Tom Lane wrote:\n> Surely they are data, not schema. It would make zero sense to\n> restore\n> them if you aren't restoring the data they describe.\n\nThe complexity is that pg_upgrade does create the data, but relies on a\nschema-only dump. So we'd need to at least account for that somehow,\neither with a separate stats-only dump, or make a special case in\nbinary upgrade mode that dumps schema+stats (and resolves the CREATE\nINDEX issue).\n\n> Maybe we need to revisit CREATE INDEX's behavior rather\n> than assuming it's graven in stone?\n\nWould there be a significant cost to just not doing that? Or are you\nsuggesting that we special-case the behavior, or turn it off during\nrestore with a GUC?\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Sat, 30 Mar 2024 10:29:43 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": "Jeff Davis <[email protected]> writes:\n> On Sat, 2024-03-30 at 13:18 -0400, Tom Lane wrote:\n>> Surely they are data, not schema. It would make zero sense to\n>> restore them if you aren't restoring the data they describe.\n\n> The complexity is that pg_upgrade does create the data, but relies on a\n> schema-only dump. So we'd need to at least account for that somehow,\n> either with a separate stats-only dump, or make a special case in\n> binary upgrade mode that dumps schema+stats (and resolves the CREATE\n> INDEX issue).\n\nAh, good point. But binary-upgrade mode is special in tons of ways\nalready. I don't see a big problem with allowing it to dump stats\neven though --schema-only would normally imply not doing that.\n\n(You could also imagine an explicit positive --stats switch that would\noverride --schema-only, but I don't see that it's worth the trouble.)\n\n>> Maybe we need to revisit CREATE INDEX's behavior rather\n>> than assuming it's graven in stone?\n\n> Would there be a significant cost to just not doing that? Or are you\n> suggesting that we special-case the behavior, or turn it off during\n> restore with a GUC?\n\nI didn't have any specific proposal in mind, was just trying to think\noutside the box.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 30 Mar 2024 13:39:40 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": "On Sat, 2024-03-30 at 13:39 -0400, Tom Lane wrote:\n> (You could also imagine an explicit positive --stats switch that\n> would\n> override --schema-only, but I don't see that it's worth the trouble.)\n\nThat would have its own utility for reproducing planner problems\noutside of production systems.\n\n(That could be a separate feature, though, and doesn't need to be a\npart of this patch set.)\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Sat, 30 Mar 2024 11:20:30 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": ">\n> I'm getting late into this discussion and I apologize if I've missed this\n> being discussed before. But.\n>\n> Please don't.\n>\n> That will make it *really* hard for any form of automation or drivers of\n> this. The information needs to go somewhere where such tools can easily\n> consume it, and an informational message during runtime (which is also\n> likely to be translated in many environments) is the exact opposite of that.\n>\n\nThat makes a lot of sense. I'm not sure what form it would take (file,\npseudo-table, something else?). Open to suggestions.\n\nI'm getting late into this discussion and I apologize if I've missed this being discussed before. But.Please don't.That will make it *really* hard for any form of automation or drivers of this. The information needs to go somewhere where such tools can easily consume it, and an informational message during runtime (which is also likely to be translated in many environments) is the exact opposite of that.That makes a lot of sense. I'm not sure what form it would take (file, pseudo-table, something else?). Open to suggestions.",
"msg_date": "Sat, 30 Mar 2024 19:11:30 -0400",
"msg_from": "Corey Huinker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": ">\n> I didn't have any specific proposal in mind, was just trying to think\n> outside the box.\n>\n\nWhat if we added a separate resection SECTION_STATISTICS which is run\nfollowing post-data?\n\nI didn't have any specific proposal in mind, was just trying to think\noutside the box.What if we added a separate resection SECTION_STATISTICS which is run following post-data?",
"msg_date": "Sat, 30 Mar 2024 19:14:21 -0400",
"msg_from": "Corey Huinker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": "Corey Huinker <[email protected]> writes:\n>> I didn't have any specific proposal in mind, was just trying to think\n>> outside the box.\n\n> What if we added a separate resection SECTION_STATISTICS which is run\n> following post-data?\n\nMaybe, but that would have a lot of side-effects on pg_dump's API\nand probably on some end-user scripts. I'd rather not.\n\nI haven't looked at the details, but I'm really a bit surprised\nby Jeff's assertion that CREATE INDEX destroys statistics on the\nbase table. That seems wrong from here, and maybe something we\ncould have it not do. (I do realize that it recalculates reltuples\nand relpages, but so what? If it updates those, the results should\nbe perfectly accurate.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 30 Mar 2024 20:08:56 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": ">\n> That will make it *really* hard for any form of automation or drivers of\n> this. The information needs to go somewhere where such tools can easily\n> consume it, and an informational message during runtime (which is also\n> likely to be translated in many environments) is the exact opposite of that.\n>\n\nHaving given this some thought, I'd be inclined to create a view,\npg_stats_missing, with the same security barrier as pg_stats, but looking\nfor tables that lack stats on at least one column, or lack stats on an\nextended statistics object.\n\nTable structure would be\n\nschemaname name\ntablename name\nattnames text[]\next_stats text[]\n\n\nThe informational message, if it changes at all, could reference this new\nview as the place to learn about how well the stats import went.\n\nvacuumdb might get a --missing-only option in addition to\n--analyze-in-stages.\n\nFor that matter, we could add --analyze-missing options to pg_restore and\npg_upgrade to do the mopping up themselves.\n\nThat will make it *really* hard for any form of automation or drivers of this. The information needs to go somewhere where such tools can easily consume it, and an informational message during runtime (which is also likely to be translated in many environments) is the exact opposite of that.Having given this some thought, I'd be inclined to create a view, pg_stats_missing, with the same security barrier as pg_stats, but looking for tables that lack stats on at least one column, or lack stats on an extended statistics object.Table structure would be schemaname nametablename nameattnames text[]ext_stats text[]The informational message, if it changes at all, could reference this new view as the place to learn about how well the stats import went.vacuumdb might get a --missing-only option in addition to --analyze-in-stages.For that matter, we could add --analyze-missing options to pg_restore and pg_upgrade to do the mopping up themselves.",
"msg_date": "Sun, 31 Mar 2024 07:17:26 -0400",
"msg_from": "Corey Huinker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Statistics Import and Export"
},
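[Editor's note: a rough SQL sketch of the pg_stats_missing idea proposed above, limited to the per-column case; the view name and the schemaname/tablename/attnames columns follow the proposal, while the relkind filter and the omission of the ext_stats column and of the security barrier are simplifying assumptions of this sketch, not a committed design.]

    CREATE VIEW pg_stats_missing AS
    SELECT n.nspname AS schemaname,
           c.relname AS tablename,
           array_agg(a.attname ORDER BY a.attnum) AS attnames
    FROM pg_class c
    JOIN pg_namespace n ON n.oid = c.relnamespace
    JOIN pg_attribute a ON a.attrelid = c.oid
    WHERE c.relkind IN ('r', 'm')          -- assumption: plain tables and materialized views only
      AND a.attnum > 0
      AND NOT a.attisdropped
      AND NOT EXISTS (SELECT 1
                      FROM pg_statistic s
                      WHERE s.starelid = c.oid
                        AND s.staattnum = a.attnum)
    GROUP BY n.nspname, c.relname;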
{
"msg_contents": "Corey Huinker <[email protected]> writes:\n> Having given this some thought, I'd be inclined to create a view,\n> pg_stats_missing, with the same security barrier as pg_stats, but looking\n> for tables that lack stats on at least one column, or lack stats on an\n> extended statistics object.\n\nThe week before feature freeze is no time to be designing something\nlike that, unless you've abandoned all hope of getting this into v17.\n\nThere's a bigger issue though: AFAICS this patch set does nothing\nabout dumping extended statistics. I surely don't want to hold up\nthe patch insisting that that has to happen before we can commit the\nfunctionality proposed here. But we cannot rip out pg_upgrade's\nsupport for post-upgrade ANALYZE processing before we do something\nabout extended statistics, and that means it's premature to be\ndesigning any changes to how that works. So I'd set that whole\ntopic on the back burner.\n\nIt's possible that we could drop the analyze-in-stages recommendation,\nfiguring that this functionality will get people to the\nable-to-limp-along level immediately and that all that is needed is a\nsingle mop-up ANALYZE pass. But I think we should leave that till we\nhave a bit more than zero field experience with this feature.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 31 Mar 2024 14:41:00 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": "My apologies for having paid so little attention to this thread for\nmonths. I got around to reading the v15 patches today, and while\nI think they're going in more or less the right direction, there's\na long way to go IMO.\n\nI concur with the plan of extracting data from pg_stats not\npg_statistics, and with emitting a single \"set statistics\"\ncall per attribute. (I think at one point I'd suggested a call\nper stakind slot, but that would lead to a bunch of UPDATEs on\nexisting pg_attribute tuples and hence a bunch of dead tuples\nat the end of an import, so it's not the way to go. A series\nof UPDATEs would likely also play poorly with a background\nauto-ANALYZE happening concurrently.)\n\nI do not like the current design for pg_set_attribute_stats' API\nthough: I don't think it's at all future-proof. What happens when\nsomebody adds a new stakind (and hence new pg_stats column)?\nYou could try to add an overloaded pg_set_attribute_stats\nversion with more parameters, but I'm pretty sure that would\nlead to \"ambiguous function call\" failures when trying to load\nold dump files containing only the original parameters. The\npresent design is also fragile in that an unrecognized parameter\nwill lead to a parse-time failure and no function call happening,\nwhich is less robust than I'd like. As lesser points,\nthe relation argument ought to be declared regclass not oid for\nconvenience of use, and I really think that we need to provide\nthe source server's major version number --- maybe we will never\nneed that, but if we do and we don't have it we will be sad.\n\nSo this leads me to suggest that we'd be best off with a VARIADIC\nANY signature, where the variadic part consists of alternating\nparameter labels and values:\n\npg_set_attribute_stats(table regclass, attribute name,\n inherited bool, source_version int,\n variadic \"any\") returns void\n\nwhere a call might look like\n\nSELECT pg_set_attribute_stats('public.mytable', 'mycolumn',\n false, -- not inherited\n\t\t\t 16, -- source server major version\n -- pairs of labels and values follow\n 'null_frac', 0.4,\n 'avg_width', 42,\n 'histogram_bounds',\n array['a', 'b', 'c']::text[],\n ...);\n\nNote a couple of useful things here:\n\n* AFAICS we could label the function strict and remove all those ad-hoc\nnull checks. If you don't have a value for a particular stat, you\njust leave that pair of arguments out (exactly as the existing code\nin 0002 does, just using a different notation). This also means that\nwe don't need any default arguments and so no need for hackery in\nsystem_functions.sql.\n\n* If we don't recognize a parameter label at runtime, we can treat\nthat as a warning rather than a hard error, and press on. This case\nwould mostly be useful in major version downgrades I suppose, but\nthat will be something people will want eventually.\n\n* We can require the calling statement to cast arguments, particularly\narrays, to the proper type, removing the need for conversions within\nthe stats-setting function. (But instead, it'd need to check that the\nnext \"any\" argument is the type it ought to be based on the type of\nthe target column.)\n\nIf we write the labels as undecorated string literals as I show\nabove, I think they will arrive at the function as \"unknown\"-type\nconstants, which is a little weird but doesn't seem like it's\nreally a big problem. 
The alternative is to cast them all to text\nexplicitly, but that's adding notation to no great benefit IMO.\n\npg_set_relation_stats is simpler in that the set of stats values\nto be set will probably remain fairly static, and there seems little\nreason to allow only part of them to be supplied (so personally I'd\ndrop the business about accepting nulls there too). If we do grow\nanother value or values for it to set there shouldn't be much problem\nwith overloading it with another version with more arguments.\nStill needs to take regclass not oid though ...\n\nI've not read the patches in great detail, but I did note a\nfew low-level issues:\n\n* why is check_relation_permissions looking up the pg_class row?\nThere's already a copy of that in the Relation struct. Likewise\nfor the other caller of can_modify_relation (but why is that\ncaller not using check_relation_permissions?) That all looks\noverly complicated and duplicative. I think you don't need two\nlayers of function there.\n\n* I find the stuff with enums and \"const char *param_names\" to\nbe way too cute and unlike anything we do elsewhere. Please\ndon't invent your own notations for coding patterns that have\nhundreds of existing instances. pg_set_relation_stats, for\nexample, has absolutely no reason not to look like the usual\n\n\tOid\trelid = PG_GETARG_OID(0);\n\tfloat4\trelpages = PG_GETARG_FLOAT4(1);\n\t... etc ...\n\n* The array manipulations seem to me to be mostly not well chosen.\nThere's no reason to use expanded arrays here, since you won't be\nmodifying the arrays in-place; all that's doing is wasting memory.\nI'm also noting a lack of defenses against nulls in the arrays.\nI'd suggest using deconstruct_array to disassemble the arrays,\nif indeed they need disassembled at all. (Maybe they don't, see\nnext item.)\n\n* I'm dubious that we can fully vet the contents of these arrays,\nand even a little dubious that we need to try. As an example,\nwhat's the worst that's going to happen if a histogram array isn't\nsorted precisely? You might get bogus selectivity estimates\nfrom the planner, but that's no worse than you would've got with\nno stats at all. (It used to be that selfuncs.c would use a\nhistogram even if its contents didn't match the query's collation.\nThe comments justifying that seem to be gone, but I think it's\nstill the case that the code isn't *really* dependent on the sort\norder being exactly so.) The amount of hastily-written code in the\npatch for checking this seems a bit scary, and it's well within the\nrealm of possibility that it introduces more bugs than it prevents.\nWe do need to verify data types, lack of nulls, and maybe\n1-dimensional-ness, which could break the accessing code at a fairly\nlow level; but I'm not sure that we need more than that.\n\n* There's a lot of ERROR cases that maybe we ought to downgrade\nto WARN-and-press-on, in the service of not breaking the restore\ncompletely in case of trouble.\n\n* 0002 is confused about whether the tag for these new TOC\nentries is \"STATISTICS\" or \"STATISTICS DATA\". I also think\nthey need to be in SECTION_DATA not SECTION_NONE, and I'd be\ninclined to make them dependent on the table data objects\nnot the table declarations. We don't really want a parallel\nrestore to load them before the data is loaded: that just\nincreases the risk of bad interactions with concurrent\nauto-analyze.\n\n* It'd definitely not be OK to put BEGIN/COMMIT into the commands\nin these TOC entries. 
But I don't think we need to.\n\n* dumpRelationStats seems to be dumping the relation-level\nstats twice.\n\n* Why exactly are you suppressing testing of statistics upgrade\nin 002_pg_upgrade??\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 31 Mar 2024 14:48:19 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": "On Sun, Mar 31, 2024 at 2:41 PM Tom Lane <[email protected]> wrote:\n\n> Corey Huinker <[email protected]> writes:\n> > Having given this some thought, I'd be inclined to create a view,\n> > pg_stats_missing, with the same security barrier as pg_stats, but looking\n> > for tables that lack stats on at least one column, or lack stats on an\n> > extended statistics object.\n>\n> The week before feature freeze is no time to be designing something\n> like that, unless you've abandoned all hope of getting this into v17.\n>\n\nIt was a response to the suggestion that there be some way for\ntools/automation to read the status of stats. I would view it as a separate\npatch, as such a view would be useful now for knowing which tables to\nANALYZE, regardless of whether this patch goes in or not.\n\n\n> There's a bigger issue though: AFAICS this patch set does nothing\n> about dumping extended statistics. I surely don't want to hold up\n> the patch insisting that that has to happen before we can commit the\n> functionality proposed here. But we cannot rip out pg_upgrade's\n> support for post-upgrade ANALYZE processing before we do something\n> about extended statistics, and that means it's premature to be\n> designing any changes to how that works. So I'd set that whole\n> topic on the back burner.\n>\n\nSo Extended Stats _were_ supported by earlier versions where the medium of\ncommunication was JSON. However, there were several problems with adapting\nthat to the current model where we match params to stat types:\n\n* Several of the column types do not have functional input functions, so we\nmust construct the data structure internally and pass them to\nstatext_store().\n* The output functions for some of those column types have lists of\nattnums, with negative values representing positional expressions in the\nstat definition. This information is not translatable to another system\nwithout also passing along the attnum/attname mapping of the source system.\n\nAt least three people told me \"nobody uses extended stats\" and to just drop\nthat from the initial version. Unhappy with this assessment, I inquired as\nto whether my employer (AWS) had some internal databases that used extended\nstats so that I could get good test data, and came up with nothing, nor did\nanyone know of customers who used the feature. So when the fourth person\ntold me that nobody uses extended stats, and not to let a rarely-used\nfeature get in the way of a feature that would benefit nearly 100% of\nusers, I dropped it.\n\n\n> It's possible that we could drop the analyze-in-stages recommendation,\n> figuring that this functionality will get people to the\n> able-to-limp-along level immediately and that all that is needed is a\n> single mop-up ANALYZE pass. 
But I think we should leave that till we\n> have a bit more than zero field experience with this feature.\n\n\nIt may be that we leave the recommendation exactly as it is.\n\nPerhaps we enhance the error messages in pg_set_*_stats() to indicate what\ncommand would remediate the issue.\n\nOn Sun, Mar 31, 2024 at 2:41 PM Tom Lane <[email protected]> wrote:Corey Huinker <[email protected]> writes:\n> Having given this some thought, I'd be inclined to create a view,\n> pg_stats_missing, with the same security barrier as pg_stats, but looking\n> for tables that lack stats on at least one column, or lack stats on an\n> extended statistics object.\n\nThe week before feature freeze is no time to be designing something\nlike that, unless you've abandoned all hope of getting this into v17.It was a response to the suggestion that there be some way for tools/automation to read the status of stats. I would view it as a separate patch, as such a view would be useful now for knowing which tables to ANALYZE, regardless of whether this patch goes in or not. There's a bigger issue though: AFAICS this patch set does nothing\nabout dumping extended statistics. I surely don't want to hold up\nthe patch insisting that that has to happen before we can commit the\nfunctionality proposed here. But we cannot rip out pg_upgrade's\nsupport for post-upgrade ANALYZE processing before we do something\nabout extended statistics, and that means it's premature to be\ndesigning any changes to how that works. So I'd set that whole\ntopic on the back burner.So Extended Stats _were_ supported by earlier versions where the medium of communication was JSON. However, there were several problems with adapting that to the current model where we match params to stat types:* Several of the column types do not have functional input functions, so we must construct the data structure internally and pass them to statext_store().* The output functions for some of those column types have lists of attnums, with negative values representing positional expressions in the stat definition. This information is not translatable to another system without also passing along the attnum/attname mapping of the source system.At least three people told me \"nobody uses extended stats\" and to just drop that from the initial version. Unhappy with this assessment, I inquired as to whether my employer (AWS) had some internal databases that used extended stats so that I could get good test data, and came up with nothing, nor did anyone know of customers who used the feature. So when the fourth person told me that nobody uses extended stats, and not to let a rarely-used feature get in the way of a feature that would benefit nearly 100% of users, I dropped it. It's possible that we could drop the analyze-in-stages recommendation,\nfiguring that this functionality will get people to the\nable-to-limp-along level immediately and that all that is needed is a\nsingle mop-up ANALYZE pass. But I think we should leave that till we\nhave a bit more than zero field experience with this feature.It may be that we leave the recommendation exactly as it is.Perhaps we enhance the error messages in pg_set_*_stats() to indicate what command would remediate the issue.",
"msg_date": "Sun, 31 Mar 2024 18:37:28 -0400",
"msg_from": "Corey Huinker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Statistics Import and Export"
},
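A minimal sketch of the kind of query such a "missing stats" view could wrap, assuming the criterion is simply "analyzable columns with no pg_stats row"; the view name pg_stats_missing and the exact rules are only the suggestion above, and extended statistics objects are not covered here:

    SELECT n.nspname AS schemaname, c.relname AS tablename, a.attname
    FROM pg_catalog.pg_class c
    JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace
    JOIN pg_catalog.pg_attribute a
         ON a.attrelid = c.oid AND a.attnum > 0 AND NOT a.attisdropped
    LEFT JOIN pg_catalog.pg_stats s
         ON s.schemaname = n.nspname
        AND s.tablename  = c.relname
        AND s.attname    = a.attname
    WHERE c.relkind IN ('r', 'm')      -- plain tables and materialized views
      AND s.attname IS NULL;           -- no statistics row for this column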
{
"msg_contents": "Corey Huinker <[email protected]> writes:\n> On Sun, Mar 31, 2024 at 2:41 PM Tom Lane <[email protected]> wrote:\n>> There's a bigger issue though: AFAICS this patch set does nothing\n>> about dumping extended statistics. I surely don't want to hold up\n>> the patch insisting that that has to happen before we can commit the\n>> functionality proposed here. But we cannot rip out pg_upgrade's\n>> support for post-upgrade ANALYZE processing before we do something\n>> about extended statistics, and that means it's premature to be\n>> designing any changes to how that works. So I'd set that whole\n>> topic on the back burner.\n\n> So Extended Stats _were_ supported by earlier versions where the medium of\n> communication was JSON. However, there were several problems with adapting\n> that to the current model where we match params to stat types:\n\n> * Several of the column types do not have functional input functions, so we\n> must construct the data structure internally and pass them to\n> statext_store().\n> * The output functions for some of those column types have lists of\n> attnums, with negative values representing positional expressions in the\n> stat definition. This information is not translatable to another system\n> without also passing along the attnum/attname mapping of the source system.\n\nI wonder if the right answer to that is \"let's enhance the I/O\nfunctions for those types\". But whether that helps or not, it's\nv18-or-later material for sure.\n\n> At least three people told me \"nobody uses extended stats\" and to just drop\n> that from the initial version.\n\nI can't quibble with that view of what has priority. I'm just\nsuggesting that redesigning what pg_upgrade does in this area\nshould come later than doing something about extended stats.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 31 Mar 2024 18:44:13 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": ">\n> I wonder if the right answer to that is \"let's enhance the I/O\n> functions for those types\". But whether that helps or not, it's\n> v18-or-later material for sure.\n>\n\nThat was Stephen's take as well, and I agreed given that I had to throw the\nkitchen-sink of source-side oid mappings (attname, types, collatons,\noperators) into the JSON to work around the limitation.\n\n\n> I can't quibble with that view of what has priority. I'm just\n> suggesting that redesigning what pg_upgrade does in this area\n> should come later than doing something about extended stats.\n>\n\nI mostly agree, with the caveat that pg_upgrade's existing message saying\nthat optimizer stats were not carried over wouldn't be 100% true anymore.\n\nI wonder if the right answer to that is \"let's enhance the I/O\nfunctions for those types\". But whether that helps or not, it's\nv18-or-later material for sure.That was Stephen's take as well, and I agreed given that I had to throw the kitchen-sink of source-side oid mappings (attname, types, collatons, operators) into the JSON to work around the limitation. \nI can't quibble with that view of what has priority. I'm just\nsuggesting that redesigning what pg_upgrade does in this area\nshould come later than doing something about extended stats.I mostly agree, with the caveat that pg_upgrade's existing message saying that optimizer stats were not carried over wouldn't be 100% true anymore.",
"msg_date": "Sun, 31 Mar 2024 18:58:48 -0400",
"msg_from": "Corey Huinker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": "Corey Huinker <[email protected]> writes:\n>> I can't quibble with that view of what has priority. I'm just\n>> suggesting that redesigning what pg_upgrade does in this area\n>> should come later than doing something about extended stats.\n\n> I mostly agree, with the caveat that pg_upgrade's existing message saying\n> that optimizer stats were not carried over wouldn't be 100% true anymore.\n\nI think we can tweak the message wording. I just don't want to be\ndoing major redesign of the behavior, nor adding fundamentally new\nmonitoring capabilities.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 31 Mar 2024 19:04:47 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": ">\n>\n> I concur with the plan of extracting data from pg_stats not\n> pg_statistics, and with emitting a single \"set statistics\"\n> call per attribute. (I think at one point I'd suggested a call\n> per stakind slot, but that would lead to a bunch of UPDATEs on\n> existing pg_attribute tuples and hence a bunch of dead tuples\n> at the end of an import, so it's not the way to go. A series\n> of UPDATEs would likely also play poorly with a background\n> auto-ANALYZE happening concurrently.)\n>\n\nThat was my reasoning as well.\n\n\n\n> I do not like the current design for pg_set_attribute_stats' API\n> though: I don't think it's at all future-proof. What happens when\n> somebody adds a new stakind (and hence new pg_stats column)?\n> You could try to add an overloaded pg_set_attribute_stats\n> version with more parameters, but I'm pretty sure that would\n> lead to \"ambiguous function call\" failures when trying to load\n> old dump files containing only the original parameters.\n\n\nI don't think we'd overload, we'd just add new parameters to the function\nsignature.\n\n\n> The\n> present design is also fragile in that an unrecognized parameter\n> will lead to a parse-time failure and no function call happening,\n> which is less robust than I'd like.\n\n\nThere was a lot of back-and-forth about what sorts of failures were\nerror-worthy, and which were warn-worthy. I'll discuss further below.\n\n\n> As lesser points,\n> the relation argument ought to be declared regclass not oid for\n> convenience of use,\n\n\n+1\n\n\n> and I really think that we need to provide\n> the source server's major version number --- maybe we will never\n> need that, but if we do and we don't have it we will be sad.\n>\n\nThe JSON had it, and I never did use it. Not against having it again.\n\n\n>\n> So this leads me to suggest that we'd be best off with a VARIADIC\n> ANY signature, where the variadic part consists of alternating\n> parameter labels and values:\n>\n> pg_set_attribute_stats(table regclass, attribute name,\n> inherited bool, source_version int,\n> variadic \"any\") returns void\n>\n> where a call might look like\n>\n> SELECT pg_set_attribute_stats('public.mytable', 'mycolumn',\n> false, -- not inherited\n> 16, -- source server major version\n> -- pairs of labels and values follow\n> 'null_frac', 0.4,\n> 'avg_width', 42,\n> 'histogram_bounds',\n> array['a', 'b', 'c']::text[],\n> ...);\n>\n> Note a couple of useful things here:\n>\n> * AFAICS we could label the function strict and remove all those ad-hoc\n> null checks. If you don't have a value for a particular stat, you\n> just leave that pair of arguments out (exactly as the existing code\n> in 0002 does, just using a different notation). This also means that\n> we don't need any default arguments and so no need for hackery in\n> system_functions.sql.\n>\n\nI'm not aware of how strict works with variadics. Would the lack of any\nvariadic parameters trigger it?\n\nAlso going with strict means that an inadvertent explicit NULL in one\nparameter would cause the entire attribute import to fail silently. I'd\nrather fail loudly.\n\n\n\n> * If we don't recognize a parameter label at runtime, we can treat\n> that as a warning rather than a hard error, and press on. 
This case\n> would mostly be useful in major version downgrades I suppose, but\n> that will be something people will want eventually.\n>\n\nInteresting.\n\n* We can require the calling statement to cast arguments, particularly\n> arrays, to the proper type, removing the need for conversions within\n> the stats-setting function. (But instead, it'd need to check that the\n> next \"any\" argument is the type it ought to be based on the type of\n> the target column.)\n>\n\nSo, that's tricky. The type of the values is not always the attribute type,\nfor expression indexes, we do call exprType() and exprCollation(), in which\ncase we always use the expression type over the attribute type, but only\nuse the collation type if the attribute had no collation. This mimics the\nbehavior of ANALYZE.\n\nThen, for the MCELEM and DECHIST stakinds we have to find the type's\nelement type, and that has special logic for tsvectors, ranges, and other\nnon-scalars, borrowing from the various *_typanalyze() functions. For that\nmatter, the existing typanalyze functions don't grab the < operator, which\nI need for later data validations, so using examine_attribute() was\nsimultaneously overkill and insufficient.\n\nNone of this functionality is accessible from a client program, so we'd\nhave to pull in a lot of backend stuff to pg_dump to make it resolve the\ntypecasts correctly. Text and array_in() was just easier.\n\n\n> pg_set_relation_stats is simpler in that the set of stats values\n> to be set will probably remain fairly static, and there seems little\n> reason to allow only part of them to be supplied (so personally I'd\n> drop the business about accepting nulls there too). If we do grow\n> another value or values for it to set there shouldn't be much problem\n> with overloading it with another version with more arguments.\n> Still needs to take regclass not oid though ...\n>\n\nI'm still iffy about the silent failures of strict.\n\nI looked it up, and the only change needed for changing oid to regclass is\nin the pg_proc.dat. (and the docs, of course). So I'm already on board.\n\n\n> * why is check_relation_permissions looking up the pg_class row?\n> There's already a copy of that in the Relation struct. Likewise\n> for the other caller of can_modify_relation (but why is that\n> caller not using check_relation_permissions?) That all looks\n> overly complicated and duplicative. I think you don't need two\n> layers of function there.\n>\n\nTo prove that the caller is the owner (or better) of the table.\n\n\n>\n> * The array manipulations seem to me to be mostly not well chosen.\n> There's no reason to use expanded arrays here, since you won't be\n> modifying the arrays in-place; all that's doing is wasting memory.\n> I'm also noting a lack of defenses against nulls in the arrays.\n>\n\nEasily remedied in light of the deconstruct_array() suggestion below, but I\ndo want to add that value_not_null_array_len() does check for nulls, and\nthat function is used to generate all but one of the arrays (and that one\nwe're just verifying that it's length matches the length of the other\narray).There's even a regression test that checks it (search for:\n\"elem_count_histogram null element\").\n\n\n> I'd suggest using deconstruct_array to disassemble the arrays,\n> if indeed they need disassembled at all. (Maybe they don't, see\n> next item.)\n>\n\n+1\n\n\n>\n> * I'm dubious that we can fully vet the contents of these arrays,\n> and even a little dubious that we need to try. 
As an example,\n> what's the worst that's going to happen if a histogram array isn't\n> sorted precisely? You might get bogus selectivity estimates\n> from the planner, but that's no worse than you would've got with\n> no stats at all. (It used to be that selfuncs.c would use a\n> histogram even if its contents didn't match the query's collation.\n> The comments justifying that seem to be gone, but I think it's\n> still the case that the code isn't *really* dependent on the sort\n> order being exactly so.) The amount of hastily-written code in the\n> patch for checking this seems a bit scary, and it's well within the\n> realm of possibility that it introduces more bugs than it prevents.\n> We do need to verify data types, lack of nulls, and maybe\n> 1-dimensional-ness, which could break the accessing code at a fairly\n> low level; but I'm not sure that we need more than that.\n>\n\nA lot of the feedback I got on this patch over the months concerned giving\ninaccurate, nonsensical, or malicious data to the planner. Surely the\nplanner does do *some* defensive programming when fetching these values,\nbut this is the first time those values were potentially set by a user, not\nby our own internal code. We can try to match types, collations, etc from\nsource to dest, but even that would fall victim to another glibc-level\ncollation change. Verifying that the list the source system said was sorted\nis actually sorted when put on the destination system is the truest test\nwe're ever going to get, albeit for sampled elements.\n\n\n>\n> * There's a lot of ERROR cases that maybe we ought to downgrade\n> to WARN-and-press-on, in the service of not breaking the restore\n> completely in case of trouble.\n>\n\nAll cases were made error precisely to spark debate about which cases we'd\nwant to continue from and which we'd want to error from. Also, I was under\nthe impression it was bad form to follow up NOTICE/WARN with an ERROR in\nthe same function call.\n\n\n\n> * 0002 is confused about whether the tag for these new TOC\n> entries is \"STATISTICS\" or \"STATISTICS DATA\". I also think\n> they need to be in SECTION_DATA not SECTION_NONE, and I'd be\n> inclined to make them dependent on the table data objects\n> not the table declarations. We don't really want a parallel\n> restore to load them before the data is loaded: that just\n> increases the risk of bad interactions with concurrent\n> auto-analyze.\n>\n\nSECTION_NONE works the best, but we're getting some situations where the\nrelpages/reltuples/relallvisible gets reset to 0s in pg_class. Hence the\ntemporary --no-statistics in the pg_upgrade TAP test.\n\nSECTION_POST_DATA (a previous suggestion) causes something weird to happen\nwhere certain GRANT/REVOKEs happen outside of their expected section.\n\nIn work I've done since v15, I tried giving the table stats archive entry a\ndependency on every index (and index constraint) as well as the table\nitself, thinking that would get us past all resets of pg_class, but it\nhasn't worked.\n\n\n> * It'd definitely not be OK to put BEGIN/COMMIT into the commands\n> in these TOC entries. But I don't think we need to.\n>\n\nAgreed. Don't need to, each function call now sinks or swims on its own.\n\n\n>\n> * dumpRelationStats seems to be dumping the relation-level\n> stats twice.\n>\n\n+1\n\n* Why exactly are you suppressing testing of statistics upgrade\n> in 002_pg_upgrade??\n>\n\nTemporary. 
Related to the pg_class overwrite issue above.\n\nI concur with the plan of extracting data from pg_stats not\npg_statistics, and with emitting a single \"set statistics\"\ncall per attribute. (I think at one point I'd suggested a call\nper stakind slot, but that would lead to a bunch of UPDATEs on\nexisting pg_attribute tuples and hence a bunch of dead tuples\nat the end of an import, so it's not the way to go. A series\nof UPDATEs would likely also play poorly with a background\nauto-ANALYZE happening concurrently.)That was my reasoning as well. I do not like the current design for pg_set_attribute_stats' API\nthough: I don't think it's at all future-proof. What happens when\nsomebody adds a new stakind (and hence new pg_stats column)?\nYou could try to add an overloaded pg_set_attribute_stats\nversion with more parameters, but I'm pretty sure that would\nlead to \"ambiguous function call\" failures when trying to load\nold dump files containing only the original parameters.I don't think we'd overload, we'd just add new parameters to the function signature. The\npresent design is also fragile in that an unrecognized parameter\nwill lead to a parse-time failure and no function call happening,\nwhich is less robust than I'd like.There was a lot of back-and-forth about what sorts of failures were error-worthy, and which were warn-worthy. I'll discuss further below. As lesser points,\nthe relation argument ought to be declared regclass not oid for\nconvenience of use,+1 and I really think that we need to provide\nthe source server's major version number --- maybe we will never\nneed that, but if we do and we don't have it we will be sad.The JSON had it, and I never did use it. Not against having it again. \n\nSo this leads me to suggest that we'd be best off with a VARIADIC\nANY signature, where the variadic part consists of alternating\nparameter labels and values:\n\npg_set_attribute_stats(table regclass, attribute name,\n inherited bool, source_version int,\n variadic \"any\") returns void\n\nwhere a call might look like\n\nSELECT pg_set_attribute_stats('public.mytable', 'mycolumn',\n false, -- not inherited\n 16, -- source server major version\n -- pairs of labels and values follow\n 'null_frac', 0.4,\n 'avg_width', 42,\n 'histogram_bounds',\n array['a', 'b', 'c']::text[],\n ...);\n\nNote a couple of useful things here:\n\n* AFAICS we could label the function strict and remove all those ad-hoc\nnull checks. If you don't have a value for a particular stat, you\njust leave that pair of arguments out (exactly as the existing code\nin 0002 does, just using a different notation). This also means that\nwe don't need any default arguments and so no need for hackery in\nsystem_functions.sql.I'm not aware of how strict works with variadics. Would the lack of any variadic parameters trigger it?Also going with strict means that an inadvertent explicit NULL in one parameter would cause the entire attribute import to fail silently. I'd rather fail loudly. \n* If we don't recognize a parameter label at runtime, we can treat\nthat as a warning rather than a hard error, and press on. This case\nwould mostly be useful in major version downgrades I suppose, but\nthat will be something people will want eventually.Interesting.* We can require the calling statement to cast arguments, particularly\narrays, to the proper type, removing the need for conversions within\nthe stats-setting function. 
(But instead, it'd need to check that the\nnext \"any\" argument is the type it ought to be based on the type of\nthe target column.)So, that's tricky. The type of the values is not always the attribute type, for expression indexes, we do call exprType() and exprCollation(), in which case we always use the expression type over the attribute type, but only use the collation type if the attribute had no collation. This mimics the behavior of ANALYZE.Then, for the MCELEM and DECHIST stakinds we have to find the type's element type, and that has special logic for tsvectors, ranges, and other non-scalars, borrowing from the various *_typanalyze() functions. For that matter, the existing typanalyze functions don't grab the < operator, which I need for later data validations, so using examine_attribute() was simultaneously overkill and insufficient.None of this functionality is accessible from a client program, so we'd have to pull in a lot of backend stuff to pg_dump to make it resolve the typecasts correctly. Text and array_in() was just easier. pg_set_relation_stats is simpler in that the set of stats values\nto be set will probably remain fairly static, and there seems little\nreason to allow only part of them to be supplied (so personally I'd\ndrop the business about accepting nulls there too). If we do grow\nanother value or values for it to set there shouldn't be much problem\nwith overloading it with another version with more arguments.\nStill needs to take regclass not oid though ...I'm still iffy about the silent failures of strict. I looked it up, and the only change needed for changing oid to regclass is in the pg_proc.dat. (and the docs, of course). So I'm already on board. * why is check_relation_permissions looking up the pg_class row?\nThere's already a copy of that in the Relation struct. Likewise\nfor the other caller of can_modify_relation (but why is that\ncaller not using check_relation_permissions?) That all looks\noverly complicated and duplicative. I think you don't need two\nlayers of function there.To prove that the caller is the owner (or better) of the table. \n\n* The array manipulations seem to me to be mostly not well chosen.\nThere's no reason to use expanded arrays here, since you won't be\nmodifying the arrays in-place; all that's doing is wasting memory.\nI'm also noting a lack of defenses against nulls in the arrays.Easily remedied in light of the deconstruct_array() suggestion below, but I do want to add that value_not_null_array_len() does check for nulls, and that function is used to generate all but one of the arrays (and that one we're just verifying that it's length matches the length of the other array).There's even a regression test that checks it (search for: \"elem_count_histogram null element\"). \nI'd suggest using deconstruct_array to disassemble the arrays,\nif indeed they need disassembled at all. (Maybe they don't, see\nnext item.)+1 \n\n* I'm dubious that we can fully vet the contents of these arrays,\nand even a little dubious that we need to try. As an example,\nwhat's the worst that's going to happen if a histogram array isn't\nsorted precisely? You might get bogus selectivity estimates\nfrom the planner, but that's no worse than you would've got with\nno stats at all. (It used to be that selfuncs.c would use a\nhistogram even if its contents didn't match the query's collation.\nThe comments justifying that seem to be gone, but I think it's\nstill the case that the code isn't *really* dependent on the sort\norder being exactly so.) 
The amount of hastily-written code in the\npatch for checking this seems a bit scary, and it's well within the\nrealm of possibility that it introduces more bugs than it prevents.\nWe do need to verify data types, lack of nulls, and maybe\n1-dimensional-ness, which could break the accessing code at a fairly\nlow level; but I'm not sure that we need more than that.A lot of the feedback I got on this patch over the months concerned giving inaccurate, nonsensical, or malicious data to the planner. Surely the planner does do *some* defensive programming when fetching these values, but this is the first time those values were potentially set by a user, not by our own internal code. We can try to match types, collations, etc from source to dest, but even that would fall victim to another glibc-level collation change. Verifying that the list the source system said was sorted is actually sorted when put on the destination system is the truest test we're ever going to get, albeit for sampled elements. \n\n* There's a lot of ERROR cases that maybe we ought to downgrade\nto WARN-and-press-on, in the service of not breaking the restore\ncompletely in case of trouble.All cases were made error precisely to spark debate about which cases we'd want to continue from and which we'd want to error from. Also, I was under the impression it was bad form to follow up NOTICE/WARN with an ERROR in the same function call. * 0002 is confused about whether the tag for these new TOC\nentries is \"STATISTICS\" or \"STATISTICS DATA\". I also think\nthey need to be in SECTION_DATA not SECTION_NONE, and I'd be\ninclined to make them dependent on the table data objects\nnot the table declarations. We don't really want a parallel\nrestore to load them before the data is loaded: that just\nincreases the risk of bad interactions with concurrent\nauto-analyze.SECTION_NONE works the best, but we're getting some situations where the relpages/reltuples/relallvisible gets reset to 0s in pg_class. Hence the temporary --no-statistics in the pg_upgrade TAP test.SECTION_POST_DATA (a previous suggestion) causes something weird to happen where certain GRANT/REVOKEs happen outside of their expected section. In work I've done since v15, I tried giving the table stats archive entry a dependency on every index (and index constraint) as well as the table itself, thinking that would get us past all resets of pg_class, but it hasn't worked. * It'd definitely not be OK to put BEGIN/COMMIT into the commands\nin these TOC entries. But I don't think we need to.Agreed. Don't need to, each function call now sinks or swims on its own. \n\n* dumpRelationStats seems to be dumping the relation-level\nstats twice.+1 * Why exactly are you suppressing testing of statistics upgrade\nin 002_pg_upgrade??Temporary. Related to the pg_class overwrite issue above.",
"msg_date": "Sun, 31 Mar 2024 20:10:19 -0400",
"msg_from": "Corey Huinker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Statistics Import and Export"
},
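For the expression-index point above, a small illustration (table and index names are invented) of why the stored statistics values follow exprType() rather than the underlying column type:

    CREATE TABLE events (payload jsonb);
    CREATE INDEX events_len_idx ON events (length(payload::text));
    -- After ANALYZE, the pg_stats row for events_len_idx describes integer
    -- values (the result type of length()), not jsonb, so an import has to
    -- resolve the expression's type rather than the column's.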
{
"msg_contents": "Corey Huinker <[email protected]> writes:\n>> and I really think that we need to provide\n>> the source server's major version number --- maybe we will never\n>> need that, but if we do and we don't have it we will be sad.\n\n> The JSON had it, and I never did use it. Not against having it again.\n\nWell, you don't need it now seeing that the definition of pg_stats\ncolumns hasn't changed in the past ... but there's no guarantee we\nwon't want to change them in the future.\n\n>> So this leads me to suggest that we'd be best off with a VARIADIC\n>> ANY signature, where the variadic part consists of alternating\n>> parameter labels and values:\n>> pg_set_attribute_stats(table regclass, attribute name,\n>> inherited bool, source_version int,\n>> variadic \"any\") returns void\n\n> I'm not aware of how strict works with variadics. Would the lack of any\n> variadic parameters trigger it?\n\nIIRC, \"variadic any\" requires having at least one variadic parameter.\nBut that seems fine --- what would be the point, or even the\nsemantics, of calling pg_set_attribute_stats with no data fields?\n\n> Also going with strict means that an inadvertent explicit NULL in one\n> parameter would cause the entire attribute import to fail silently. I'd\n> rather fail loudly.\n\nNot really convinced that that is worth any trouble...\n\n> * We can require the calling statement to cast arguments, particularly\n>> arrays, to the proper type, removing the need for conversions within\n>> the stats-setting function. (But instead, it'd need to check that the\n>> next \"any\" argument is the type it ought to be based on the type of\n>> the target column.)\n\n> So, that's tricky. The type of the values is not always the attribute type,\n\nHmm. You would need to have enough smarts in pg_set_attribute_stats\nto identify the appropriate array type in any case: as coded, it needs\nthat for coercion, whereas what I'm suggesting would only require it\nfor checking, but either way you need it. I do concede that pg_dump\n(or other logic generating the calls) needs to know more under my\nproposal than before. I had been thinking that it would not need to\nhard-code that because it could look to see what the actual type is\nof the array it's dumping. However, I see that pg_typeof() doesn't\nwork for that because it just returns anyarray. Perhaps we could\ninvent a new backend function that extracts the actual element type\nof a non-null anyarray argument.\n\nAnother way we could get to no-coercions is to stick with your\nsignature but declare the relevant parameters as anyarray instead of\ntext. I still think though that we'd be better off to leave the\nparameter matching to runtime, so that we-don't-recognize-that-field\ncan be a warning not an error.\n\n>> * why is check_relation_permissions looking up the pg_class row?\n>> There's already a copy of that in the Relation struct.\n\n> To prove that the caller is the owner (or better) of the table.\n\nI think you missed my point: you're doing that inefficiently,\nand maybe even with race conditions. Use the relcache's copy\nof the pg_class row.\n\n>> * I'm dubious that we can fully vet the contents of these arrays,\n>> and even a little dubious that we need to try.\n\n> A lot of the feedback I got on this patch over the months concerned giving\n> inaccurate, nonsensical, or malicious data to the planner. 
Surely the\n> planner does do *some* defensive programming when fetching these values,\n> but this is the first time those values were potentially set by a user, not\n> by our own internal code. We can try to match types, collations, etc from\n> source to dest, but even that would fall victim to another glibc-level\n> collation change.\n\nThat sort of concern is exactly why I think the planner has to, and\ndoes, defend itself. Even if you fully vet the data at the instant\nof loading, we might have the collation change under us later.\n\nIt could be argued that feeding bogus data to the planner for testing\npurposes is a valid use-case for this feature. (Of course, as\nsuperuser we could inject bogus data into pg_statistic manually,\nso it's not necessary to have this feature for that purpose.)\nI guess I'm a great deal more sanguine than other people about the\nplanner's ability to tolerate inconsistent data; but in any case\nI don't have a lot of faith in relying on checks in\npg_set_attribute_stats to substitute for that ability. That idea\nmainly leads to having a whole lot of code that has to be kept in\nsync with other code that's far away from it and probably isn't\ncoded in a parallel fashion either.\n\n>> * There's a lot of ERROR cases that maybe we ought to downgrade\n>> to WARN-and-press-on, in the service of not breaking the restore\n>> completely in case of trouble.\n\n> All cases were made error precisely to spark debate about which cases we'd\n> want to continue from and which we'd want to error from.\n\nWell, I'm here to debate it if you want, but I'll just note that *one*\nerror will be enough to abort a pg_upgrade entirely, and most users\nthese days get scared by errors during manual dump/restore too. So we\nhad better not be throwing errors except for cases that we don't think\npg_dump could ever emit.\n\n> Also, I was under\n> the impression it was bad form to follow up NOTICE/WARN with an ERROR in\n> the same function call.\n\nSeems like nonsense to me. WARN then ERROR about the same condition\nwould be annoying, but that's not what we are talking about here.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 31 Mar 2024 20:47:32 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": ">\n> IIRC, \"variadic any\" requires having at least one variadic parameter.\n> But that seems fine --- what would be the point, or even the\n> semantics, of calling pg_set_attribute_stats with no data fields?\n>\n\nIf my pg_dump run emitted a bunch of stats that could never be imported,\nI'd want to know. With silent failures, I don't.\n\n\n\n> Perhaps we could\n> invent a new backend function that extracts the actual element type\n> of a non-null anyarray argument.\n>\n\nA backend function that we can't guarantee exists on the source system. :(\n\n\n> Another way we could get to no-coercions is to stick with your\n> signature but declare the relevant parameters as anyarray instead of\n> text. I still think though that we'd be better off to leave the\n> parameter matching to runtime, so that we-don't-recognize-that-field\n> can be a warning not an error.\n>\n\nI'm a bit confused here. AFAIK we can't construct an anyarray in SQL:\n\n# select '{1,2,3}'::anyarray;\nERROR: cannot accept a value of type anyarray\n\n\n> I think you missed my point: you're doing that inefficiently,\n> and maybe even with race conditions. Use the relcache's copy\n> of the pg_class row.\n>\n\nRoger Wilco.\n\n\n> Well, I'm here to debate it if you want, but I'll just note that *one*\n> error will be enough to abort a pg_upgrade entirely, and most users\n> these days get scared by errors during manual dump/restore too. So we\n> had better not be throwing errors except for cases that we don't think\n> pg_dump could ever emit.\n>\n\nThat's pretty persuasive. It also means that we need to trap for error in\nthe array_in() calls, as that function does not yet have a _safe() mode.\n\nIIRC, \"variadic any\" requires having at least one variadic parameter.\nBut that seems fine --- what would be the point, or even the\nsemantics, of calling pg_set_attribute_stats with no data fields?If my pg_dump run emitted a bunch of stats that could never be imported, I'd want to know. With silent failures, I don't. Perhaps we could\ninvent a new backend function that extracts the actual element type\nof a non-null anyarray argument.A backend function that we can't guarantee exists on the source system. :( \nAnother way we could get to no-coercions is to stick with your\nsignature but declare the relevant parameters as anyarray instead of\ntext. I still think though that we'd be better off to leave the\nparameter matching to runtime, so that we-don't-recognize-that-field\ncan be a warning not an error.I'm a bit confused here. AFAIK we can't construct an anyarray in SQL:# select '{1,2,3}'::anyarray;ERROR: cannot accept a value of type anyarray I think you missed my point: you're doing that inefficiently,\nand maybe even with race conditions. Use the relcache's copy\nof the pg_class row.Roger Wilco.Well, I'm here to debate it if you want, but I'll just note that *one*\nerror will be enough to abort a pg_upgrade entirely, and most users\nthese days get scared by errors during manual dump/restore too. So we\nhad better not be throwing errors except for cases that we don't think\npg_dump could ever emit.That's pretty persuasive. It also means that we need to trap for error in the array_in() calls, as that function does not yet have a _safe() mode.",
"msg_date": "Sun, 31 Mar 2024 21:32:17 -0400",
"msg_from": "Corey Huinker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": "Hi Corey,\n\n\nOn Mon, Mar 25, 2024 at 3:38 PM Ashutosh Bapat <[email protected]>\nwrote:\n\n> Hi Corey,\n>\n>\n> On Sat, Mar 23, 2024 at 7:21 AM Corey Huinker <[email protected]>\n> wrote:\n>\n>> v12 attached.\n>>\n>> 0001 -\n>>\n>>\n> Some random comments\n>\n> +SELECT\n> + format('SELECT pg_catalog.pg_set_attribute_stats( '\n> + || 'relation => %L::regclass::oid, attname => %L::name, '\n> + || 'inherited => %L::boolean, null_frac => %L::real, '\n> + || 'avg_width => %L::integer, n_distinct => %L::real, '\n> + || 'most_common_vals => %L::text, '\n> + || 'most_common_freqs => %L::real[], '\n> + || 'histogram_bounds => %L::text, '\n> + || 'correlation => %L::real, '\n> + || 'most_common_elems => %L::text, '\n> + || 'most_common_elem_freqs => %L::real[], '\n> + || 'elem_count_histogram => %L::real[], '\n> + || 'range_length_histogram => %L::text, '\n> + || 'range_empty_frac => %L::real, '\n> + || 'range_bounds_histogram => %L::text) ',\n> + 'stats_export_import.' || s.tablename || '_clone', s.attname,\n> + s.inherited, s.null_frac,\n> + s.avg_width, s.n_distinct,\n> + s.most_common_vals, s.most_common_freqs, s.histogram_bounds,\n> + s.correlation, s.most_common_elems, s.most_common_elem_freqs,\n> + s.elem_count_histogram, s.range_length_histogram,\n> + s.range_empty_frac, s.range_bounds_histogram)\n> +FROM pg_catalog.pg_stats AS s\n> +WHERE s.schemaname = 'stats_export_import'\n> +AND s.tablename IN ('test', 'is_odd')\n> +\\gexec\n>\n> Why do we need to construct the command and execute? Can we instead\n> execute the function directly? That would also avoid ECHO magic.\n>\n\nAddressed in v15\n\n\n>\n> + <table id=\"functions-admin-statsimport\">\n> + <title>Database Object Statistics Import Functions</title>\n> + <tgroup cols=\"1\">\n> + <thead>\n> + <row>\n> + <entry role=\"func_table_entry\"><para role=\"func_signature\">\n> + Function\n> + </para>\n> + <para>\n> + Description\n> + </para></entry>\n> + </row>\n> + </thead>\n>\n> COMMENT: The functions throw many validation errors. Do we want to list\n> the acceptable/unacceptable input values in the documentation corresponding\n> to those? I don't expect one line per argument validation. Something like\n> \"these, these and these arguments can not be NULL\" or \"both arguments in\n> each of the pairs x and y, a and b, and c and d should be non-NULL or NULL\n> respectively\".\n>\n\nAddressed in v15.\n\n\n> + /* Statistics are dependent on the definition, not the data */\n> + /* Views don't have stats */\n> + if ((tbinfo->dobj.dump & DUMP_COMPONENT_STATISTICS) &&\n> + (tbinfo->relkind == RELKIND_VIEW))\n> + dumpRelationStats(fout, &tbinfo->dobj, reltypename,\n> + tbinfo->dobj.dumpId);\n> +\n>\n> Statistics are about data. Whenever pg_dump dumps some filtered data, the\n> statistics collected for the whole table are uselss. We should avoide\n> dumping\n> statistics in such a case. E.g. when only schema is dumped what good is\n> statistics? Similarly the statistics on a partitioned table may not be\n> useful\n> if some its partitions are not dumped. Said that dumping statistics on\n> foreign\n> table makes sense since they do not contain data but the statistics still\n> makes sense.\n>\n\nDumping statistics without data is required for pg_upgrade. This is being\ndiscussed in the same thread. But I don't see some of the suggestions e.g.\nusing binary-mode switch being used in v15.\n\nAlso, should we handle sequences, composite types the same way? 
THe latter\nis probably not dumped, but in case.\n\n\n>\n> Whether or not I pass --no-statistics, there is no difference in the dump\n> output. Am I missing something?\n> $ pg_dump -d postgres > /tmp/dump_no_arguments.out\n> $ pg_dump -d postgres --no-statistics > /tmp/dump_no_statistics.out\n> $ diff /tmp/dump_no_arguments.out /tmp/dump_no_statistics.out\n> $\n>\n> IIUC, pg_dump includes statistics by default. That means all our pg_dump\n> related tests will have statistics output by default. That's good since the\n> functionality will always be tested. 1. We need additional tests to ensure\n> that the statistics is installed after restore. 2. Some of those tests\n> compare dumps before and after restore. In case the statistics is changed\n> because of auto-analyze happening post-restore, these tests will fail.\n>\n\nFixed.\n\nThanks for addressing those comments.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\nHi Corey,On Mon, Mar 25, 2024 at 3:38 PM Ashutosh Bapat <[email protected]> wrote:Hi Corey,On Sat, Mar 23, 2024 at 7:21 AM Corey Huinker <[email protected]> wrote:v12 attached.0001 - Some random comments +SELECT+ format('SELECT pg_catalog.pg_set_attribute_stats( '+ || 'relation => %L::regclass::oid, attname => %L::name, '+ || 'inherited => %L::boolean, null_frac => %L::real, '+ || 'avg_width => %L::integer, n_distinct => %L::real, '+ || 'most_common_vals => %L::text, '+ || 'most_common_freqs => %L::real[], '+ || 'histogram_bounds => %L::text, '+ || 'correlation => %L::real, '+ || 'most_common_elems => %L::text, '+ || 'most_common_elem_freqs => %L::real[], '+ || 'elem_count_histogram => %L::real[], '+ || 'range_length_histogram => %L::text, '+ || 'range_empty_frac => %L::real, '+ || 'range_bounds_histogram => %L::text) ',+ 'stats_export_import.' || s.tablename || '_clone', s.attname,+ s.inherited, s.null_frac,+ s.avg_width, s.n_distinct,+ s.most_common_vals, s.most_common_freqs, s.histogram_bounds,+ s.correlation, s.most_common_elems, s.most_common_elem_freqs,+ s.elem_count_histogram, s.range_length_histogram,+ s.range_empty_frac, s.range_bounds_histogram)+FROM pg_catalog.pg_stats AS s+WHERE s.schemaname = 'stats_export_import'+AND s.tablename IN ('test', 'is_odd')+\\gexecWhy do we need to construct the command and execute? Can we instead execute the function directly? That would also avoid ECHO magic.Addressed in v15 + <table id=\"functions-admin-statsimport\">+ <title>Database Object Statistics Import Functions</title>+ <tgroup cols=\"1\">+ <thead>+ <row>+ <entry role=\"func_table_entry\"><para role=\"func_signature\">+ Function+ </para>+ <para>+ Description+ </para></entry>+ </row>+ </thead>COMMENT: The functions throw many validation errors. Do we want to list the acceptable/unacceptable input values in the documentation corresponding to those? I don't expect one line per argument validation. Something like \"these, these and these arguments can not be NULL\" or \"both arguments in each of the pairs x and y, a and b, and c and d should be non-NULL or NULL respectively\".Addressed in v15. +\t/* Statistics are dependent on the definition, not the data */+\t/* Views don't have stats */+\tif ((tbinfo->dobj.dump & DUMP_COMPONENT_STATISTICS) &&+\t\t(tbinfo->relkind == RELKIND_VIEW))+\t\tdumpRelationStats(fout, &tbinfo->dobj, reltypename,+\t\t\t\t\t\t tbinfo->dobj.dumpId);+Statistics are about data. Whenever pg_dump dumps some filtered data, thestatistics collected for the whole table are uselss. We should avoide dumpingstatistics in such a case. E.g. 
when only schema is dumped what good isstatistics? Similarly the statistics on a partitioned table may not be usefulif some its partitions are not dumped. Said that dumping statistics on foreigntable makes sense since they do not contain data but the statistics still makes sense.Dumping statistics without data is required for pg_upgrade. This is being discussed in the same thread. But I don't see some of the suggestions e.g. using binary-mode switch being used in v15.Also, should we handle sequences, composite types the same way? THe latter is probably not dumped, but in case. Whether or not I pass --no-statistics, there is no difference in the dump output. Am I missing something?$ pg_dump -d postgres > /tmp/dump_no_arguments.out$ pg_dump -d postgres --no-statistics > /tmp/dump_no_statistics.out$ diff /tmp/dump_no_arguments.out /tmp/dump_no_statistics.out$IIUC, pg_dump includes statistics by default. That means all our pg_dump related tests will have statistics output by default. That's good since the functionality will always be tested. 1. We need additional tests to ensure that the statistics is installed after restore. 2. Some of those tests compare dumps before and after restore. In case the statistics is changed because of auto-analyze happening post-restore, these tests will fail.Fixed.Thanks for addressing those comments. -- Best Wishes,Ashutosh Bapat",
"msg_date": "Mon, 1 Apr 2024 16:56:39 +0530",
"msg_from": "Ashutosh Bapat <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Statistics Import and Export"
},
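For reference, the "execute the function directly" suggestion amounts to something like the following sketch (signature and parameter names as proposed in this thread, relation shown as regclass per the change agreed upthread, and only a few of the statistics fields included):

    SELECT pg_catalog.pg_set_attribute_stats(
             relation   => ('stats_export_import.' || s.tablename || '_clone')::regclass,
             attname    => s.attname,
             inherited  => s.inherited,
             null_frac  => s.null_frac,
             avg_width  => s.avg_width,
             n_distinct => s.n_distinct)
    FROM pg_catalog.pg_stats AS s
    WHERE s.schemaname = 'stats_export_import'
      AND s.tablename IN ('test', 'is_odd');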
{
"msg_contents": "Hi Corey,\n\nSome more comments on v15.\n\n+/*\n+ * A more encapsulated version of can_modify_relation for when the the\n+ * HeapTuple and Form_pg_class are not needed later.\n+ */\n+static void\n+check_relation_permissions(Relation rel)\n\nThis function is used exactly at one place, so usually won't make much\nsense to write a separate function. But given that the caller is so long,\nthis seems ok. If this function returns the cached tuple when permission\nchecks succeed, it can be used at the other place as well. The caller will\nbe responsible to release the tuple Or update it.\n\nAttached patch contains a test to invoke this function on a view. ANALYZE\nthrows a WARNING when a view is passed to it. Similarly this function\nshould refuse to update the statistics on relations for which ANALYZE\nthrows a warning. A warning instead of an error seems fine.\n\n+\n+ const float4 min = 0.0;\n+ const float4 max = 1.0;\n\nWhen reading the validation condition, I have to look up variable values.\nThat can be avoided by directly using the values in the condition itself?\nIf there's some dependency elsewhere in the code, we can use macros. But I\nhave not seen using constant variables in such a way elsewhere in the code.\n\n+ values[Anum_pg_statistic_starelid - 1] = ObjectIdGetDatum(relid);\n+ values[Anum_pg_statistic_staattnum - 1] = Int16GetDatum(attnum);\n+ values[Anum_pg_statistic_stainherit - 1] = PG_GETARG_DATUM(P_INHERITED);\n\nFor a partitioned table this value has to be true. For a normal table when\nsetting this value to true, it should at least make sure that the table has\nat least one child. Otherwise it should throw an error. Blindly accepting\nthe given value may render the statistics unusable. Prologue of the\nfunction needs to be updated accordingly.\n\nI have fixed a documentation error in the patch as well. Please incorporate\nit in your next patchset.\n-- \nBest Wishes,\nAshutosh Bapat",
"msg_date": "Mon, 1 Apr 2024 17:01:02 +0530",
"msg_from": "Ashutosh Bapat <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Statistics Import and Export"
},
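The child-existence check suggested for stainherit could be as simple as this sketch (the relation name is a placeholder):

    SELECT EXISTS (
        SELECT 1
        FROM pg_catalog.pg_inherits
        WHERE inhparent = 'public.some_table'::regclass
    ) AS has_children;   -- stainherit = true only makes sense when this is true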
{
"msg_contents": "Corey Huinker <[email protected]> writes:\n>> IIRC, \"variadic any\" requires having at least one variadic parameter.\n>> But that seems fine --- what would be the point, or even the\n>> semantics, of calling pg_set_attribute_stats with no data fields?\n\n> If my pg_dump run emitted a bunch of stats that could never be imported,\n> I'd want to know. With silent failures, I don't.\n\nWhat do you think would be silent about that? If there's a complaint\nto be made, it's that it'd be a hard failure (\"no such function\").\n\nTo be clear, I'm ok with emitting ERROR for something that pg_dump\nclearly did wrong, which in this case would be emitting a\nset_statistics call for an attribute it had exactly no stats values\nfor. What I think needs to be WARN is conditions that the originating\npg_dump couldn't have foreseen, for example cross-version differences.\nIf we do try to check things like sort order, that complaint obviously\nhas to be WARN, since it's checking something potentially different\nfrom what was correct at the source server.\n\n>> Perhaps we could\n>> invent a new backend function that extracts the actual element type\n>> of a non-null anyarray argument.\n\n> A backend function that we can't guarantee exists on the source system. :(\n\n[ shrug... ] If this doesn't work for source servers below v17, that\nwould be a little sad, but it wouldn't be the end of the world.\nI see your point that that is an argument for finding another way,\nthough.\n\n>> Another way we could get to no-coercions is to stick with your\n>> signature but declare the relevant parameters as anyarray instead of\n>> text.\n\n> I'm a bit confused here. AFAIK we can't construct an anyarray in SQL:\n\n> # select '{1,2,3}'::anyarray;\n> ERROR: cannot accept a value of type anyarray\n\nThat's not what I suggested at all. The function parameters would\nbe declared anyarray, but the values passed to them would be coerced\nto the correct concrete array types. So as far as the coercion rules\nare concerned this'd be equivalent to the variadic-any approach.\n\n> That's pretty persuasive. It also means that we need to trap for error in\n> the array_in() calls, as that function does not yet have a _safe() mode.\n\nWell, the approach I'm advocating for would have the array input and\ncoercion done by the calling query before control ever reaches\npg_set_attribute_stats, so that any incorrect-for-the-data-type values\nwould result in hard errors. I think that's okay for the same reason\nyou probably figured you didn't have to trap array_in: it's the fault\nof the originating pg_dump if it offers a value that doesn't coerce to\nthe datatype it claims the value is of. My formulation is a bit safer\nthough in that it's the originating pg_dump, not the receiving server,\nthat is in charge of saying which type that is. (If that type doesn't\nagree with what the receiving server thinks it should be, that's a\ncondition that pg_set_attribute_stats itself will detect, and then it\ncan WARN and move on to the next thing.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 01 Apr 2024 11:10:09 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Statistics Import and Export"
},
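To make the anyarray-parameter idea concrete, a minimal sketch (the demo function is invented, not part of the patch): while a literal cannot be cast to anyarray directly, any concrete array argument coerces into an anyarray-declared parameter, so the caller's cast determines the element type the function sees.

    CREATE FUNCTION demo_anyarray_len(a anyarray) RETURNS integer
        LANGUAGE sql IMMUTABLE
        AS $$ SELECT cardinality(a) $$;

    SELECT demo_anyarray_len('{1,2,3}'::int[]);        -- 3
    SELECT demo_anyarray_len(ARRAY['a','b']::text[]);  -- 2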
{
"msg_contents": "On Sat, 2024-03-30 at 20:08 -0400, Tom Lane wrote:\n> I haven't looked at the details, but I'm really a bit surprised\n> by Jeff's assertion that CREATE INDEX destroys statistics on the\n> base table. That seems wrong from here, and maybe something we\n> could have it not do. (I do realize that it recalculates reltuples\n> and relpages, but so what? If it updates those, the results should\n> be perfectly accurate.)\n\nIn the v15 of the patch I was looking at, \"pg_dump -s\" included the\nstatistics. The stats appeared first in the dump, followed by the\nCREATE INDEX commands. The latter overwrote the relpages/reltuples set\nby the former.\n\nWhile zeros are the right answers for a schema-only dump, it defeated\nthe purpose of including relpages/reltuples stats in the dump, and\ncaused the pg_upgrade TAP test to fail.\n\nYou're right that there are a number of ways this could be resolved --\nI don't think it's an inherent problem.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Mon, 01 Apr 2024 10:06:45 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Statistics Import and Export"
},
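The ordering problem described above boils down to something like this sketch (function and parameter names as proposed in this thread; the values are placeholders):

    -- emitted by pg_dump first: restore the relation-level stats
    SELECT pg_set_relation_stats(relation  => 'public.t'::regclass,
                                 relpages  => 100,
                                 reltuples => 10000);
    -- emitted later: the index build recomputes the heap's relpages/reltuples,
    -- and on an empty schema-only restore writes them back as zeros
    CREATE INDEX t_idx ON public.t (x);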
{
"msg_contents": "Reality check --- are we still targeting this feature for PG 17?\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n",
"msg_date": "Mon, 1 Apr 2024 13:11:06 -0400",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": "Jeff Davis <[email protected]> writes:\n> On Sat, 2024-03-30 at 20:08 -0400, Tom Lane wrote:\n>> I haven't looked at the details, but I'm really a bit surprised\n>> by Jeff's assertion that CREATE INDEX destroys statistics on the\n>> base table. That seems wrong from here, and maybe something we\n>> could have it not do. (I do realize that it recalculates reltuples\n>> and relpages, but so what? If it updates those, the results should\n>> be perfectly accurate.)\n\n> In the v15 of the patch I was looking at, \"pg_dump -s\" included the\n> statistics. The stats appeared first in the dump, followed by the\n> CREATE INDEX commands. The latter overwrote the relpages/reltuples set\n> by the former.\n\n> While zeros are the right answers for a schema-only dump, it defeated\n> the purpose of including relpages/reltuples stats in the dump, and\n> caused the pg_upgrade TAP test to fail.\n\n> You're right that there are a number of ways this could be resolved --\n> I don't think it's an inherent problem.\n\nI'm inclined to call it not a problem at all. While I do agree there\nare use-cases for injecting false statistics with these functions,\nI do not think that pg_dump has to cater to such use-cases.\n\nIn any case, I remain of the opinion that stats are data and should\nnot be included in a -s dump (with some sort of exception for\npg_upgrade). If the data has been loaded, then a subsequent\noverwrite by CREATE INDEX should not be a problem.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 01 Apr 2024 13:18:46 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> Reality check --- are we still targeting this feature for PG 17?\n\nI'm not sure. I think if we put our heads down we could finish\nthe changes I'm suggesting and resolve the other issues this week.\nHowever, it is starting to feel like the sort of large, barely-ready\npatch that we often regret cramming in at the last minute. Maybe\nwe should agree that the first v18 CF would be a better time to\ncommit it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 01 Apr 2024 13:21:53 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": "On Mon, 2024-04-01 at 13:11 -0400, Bruce Momjian wrote:\n> Reality check --- are we still targeting this feature for PG 17?\n\nI see a few useful pieces here:\n\n1. Support import of statistics (i.e.\npg_set_{relation|attribute}_stats()).\n\n2. Support pg_dump of stats\n\n3. Support pg_upgrade with stats\n\nIt's possible that not all of them make it, but let's not dismiss the\nentire feature yet.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Mon, 01 Apr 2024 10:31:03 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": "On Sun, Mar 31, 2024 at 07:04:47PM -0400, Tom Lane wrote:\n> Corey Huinker <[email protected]> writes:\n> >> I can't quibble with that view of what has priority. I'm just\n> >> suggesting that redesigning what pg_upgrade does in this area\n> >> should come later than doing something about extended stats.\n> \n> > I mostly agree, with the caveat that pg_upgrade's existing message saying\n> > that optimizer stats were not carried over wouldn't be 100% true anymore.\n> \n> I think we can tweak the message wording. I just don't want to be\n> doing major redesign of the behavior, nor adding fundamentally new\n> monitoring capabilities.\n\nI think pg_upgrade could check for the existence of extended statistics\nin any database and adjust the analyze recommdnation wording\naccordingly.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n",
"msg_date": "Mon, 1 Apr 2024 13:33:28 -0400",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Statistics Import and Export"
},
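Such a check could be as lightweight as the following sketch, run once per database during pg_upgrade (how the recommendation wording is then adjusted is left aside):

    SELECT EXISTS (SELECT 1 FROM pg_catalog.pg_statistic_ext) AS has_extended_stats;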
{
"msg_contents": "On Sun, 2024-03-31 at 14:48 -0400, Tom Lane wrote:\n> What happens when\n> somebody adds a new stakind (and hence new pg_stats column)?\n> You could try to add an overloaded pg_set_attribute_stats\n> version with more parameters, but I'm pretty sure that would\n> lead to \"ambiguous function call\" failures when trying to load\n> old dump files containing only the original parameters.\n\nWhy would you need to overload in this case? Wouldn't we just define a\nnew function with more optional named parameters?\n\n> The\n> present design is also fragile in that an unrecognized parameter\n> will lead to a parse-time failure and no function call happening,\n> which is less robust than I'd like.\n\nI agree on this point; I found this annoying when testing the feature.\n\n> So this leads me to suggest that we'd be best off with a VARIADIC\n> ANY signature, where the variadic part consists of alternating\n> parameter labels and values:\n\nI didn't consider this and I think it has a lot of advantages. It's\nslightly unfortunate that we can't make them explicitly name/value\npairs, but pg_dump can use whitespace or even SQL comments to make it\nmore readable.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Mon, 01 Apr 2024 10:39:10 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Statistics Import and Export"
},
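As an illustration of that last point, a dump-emitted call under the variadic proposal could be laid out like this (purely hypothetical formatting; the labels mirror pg_stats columns):

    SELECT pg_set_attribute_stats('public.mytable', 'mycolumn',
                                  false, /* inherited */
                                  16,    /* source server major version */
                                  'null_frac',        0.25::real,
                                  'avg_width',        8,
                                  'n_distinct',       -0.5::real,
                                  'histogram_bounds', array['a','b','c']::text[]);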
{
"msg_contents": "Jeff Davis <[email protected]> writes:\n> On Mon, 2024-04-01 at 13:11 -0400, Bruce Momjian wrote:\n>> Reality check --- are we still targeting this feature for PG 17?\n\n> I see a few useful pieces here:\n\n> 1. Support import of statistics (i.e.\n> pg_set_{relation|attribute}_stats()).\n\n> 2. Support pg_dump of stats\n\n> 3. Support pg_upgrade with stats\n\n> It's possible that not all of them make it, but let's not dismiss the\n> entire feature yet.\n\nThe unresolved questions largely have to do with the interactions\nbetween these pieces. I think we would seriously regret setting\nany one of them in stone before all three are ready to go.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 01 Apr 2024 13:56:13 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": "Jeff Davis <[email protected]> writes:\n> On Sun, 2024-03-31 at 14:48 -0400, Tom Lane wrote:\n>> What happens when\n>> somebody adds a new stakind (and hence new pg_stats column)?\n\n> Why would you need to overload in this case? Wouldn't we just define a\n> new function with more optional named parameters?\n\nAh, yeah, you could change the function to have more parameters,\ngiven the assumption that all calls will be named-parameter style.\nI still suggest that my proposal is more robust for the case where\nthe dump lists parameters that the receiving system doesn't have.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 01 Apr 2024 14:09:37 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Statistics Import and Export"
},
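Under the named-parameter style, forward compatibility looks roughly like this sketch (parameter names follow the patch's current signature; the extra parameter mentioned in the comment is purely hypothetical):

    -- a call written against today's signature...
    SELECT pg_set_relation_stats(relation      => 'public.t'::regclass,
                                 relpages      => 100,
                                 reltuples     => 10000,
                                 relallvisible => 90);
    -- ...keeps working if a later release adds, say, a hypothetical
    -- relallfrozen parameter with a default, since omitted named
    -- parameters simply take their defaults.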
{
"msg_contents": ">\n> That's not what I suggested at all. The function parameters would\n> be declared anyarray, but the values passed to them would be coerced\n> to the correct concrete array types. So as far as the coercion rules\n> are concerned this'd be equivalent to the variadic-any approach.\n>\n\n+1\n\n\n\n>\n> > That's pretty persuasive. It also means that we need to trap for error in\n> > the array_in() calls, as that function does not yet have a _safe() mode.\n>\n> Well, the approach I'm advocating for would have the array input and\n> coercion done by the calling query before control ever reaches\n> pg_set_attribute_stats, so that any incorrect-for-the-data-type values\n> would result in hard errors. I think that's okay for the same reason\n> you probably figured you didn't have to trap array_in: it's the fault\n> of the originating pg_dump if it offers a value that doesn't coerce to\n> the datatype it claims the value is of.\n\n\n+1\n\nThat's not what I suggested at all. The function parameters would\nbe declared anyarray, but the values passed to them would be coerced\nto the correct concrete array types. So as far as the coercion rules\nare concerned this'd be equivalent to the variadic-any approach.+1 \n\n> That's pretty persuasive. It also means that we need to trap for error in\n> the array_in() calls, as that function does not yet have a _safe() mode.\n\nWell, the approach I'm advocating for would have the array input and\ncoercion done by the calling query before control ever reaches\npg_set_attribute_stats, so that any incorrect-for-the-data-type values\nwould result in hard errors. I think that's okay for the same reason\nyou probably figured you didn't have to trap array_in: it's the fault\nof the originating pg_dump if it offers a value that doesn't coerce to\nthe datatype it claims the value is of.+1",
"msg_date": "Mon, 1 Apr 2024 14:46:15 -0400",
"msg_from": "Corey Huinker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": ">\n> I think pg_upgrade could check for the existence of extended statistics\n> in any database and adjust the analyze recommdnation wording\n> accordingly.\n>\n\n+1\n\nI think pg_upgrade could check for the existence of extended statistics\nin any database and adjust the analyze recommdnation wording\naccordingly.+1",
"msg_date": "Mon, 1 Apr 2024 14:49:44 -0400",
"msg_from": "Corey Huinker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": ">\n> Ah, yeah, you could change the function to have more parameters,\n> given the assumption that all calls will be named-parameter style.\n> I still suggest that my proposal is more robust for the case where\n> the dump lists parameters that the receiving system doesn't have.\n>\n\nSo what's the behavior when the user fails to supply a parameter that is\ncurrently NOT NULL checked (example: avg_witdth)? Is that a WARN-and-exit?\n\nAh, yeah, you could change the function to have more parameters,\ngiven the assumption that all calls will be named-parameter style.\nI still suggest that my proposal is more robust for the case where\nthe dump lists parameters that the receiving system doesn't have.So what's the behavior when the user fails to supply a parameter that is currently NOT NULL checked (example: avg_witdth)? Is that a WARN-and-exit?",
"msg_date": "Mon, 1 Apr 2024 14:53:43 -0400",
"msg_from": "Corey Huinker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": "Corey Huinker <[email protected]> writes:\n> So what's the behavior when the user fails to supply a parameter that is\n> currently NOT NULL checked (example: avg_witdth)? Is that a WARN-and-exit?\n\nI still think that we could just declare the function strict, if we\nuse the variadic-any approach. Passing a null in any position is\nindisputable caller error. However, if you're allergic to silently\ndoing nothing in such a case, we could have pg_set_attribute_stats\ncheck each argument and throw an error. (Or warn and keep going;\nbut according to the design principle I posited earlier, this'd be\nthe sort of thing we don't need to tolerate.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 01 Apr 2024 15:24:15 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": ">\n>\n> I still think that we could just declare the function strict, if we\n> use the variadic-any approach. Passing a null in any position is\n> indisputable caller error. However, if you're allergic to silently\n> doing nothing in such a case, we could have pg_set_attribute_stats\n> check each argument and throw an error. (Or warn and keep going;\n> but according to the design principle I posited earlier, this'd be\n> the sort of thing we don't need to tolerate.)\n>\n\nAny thoughts about going back to having a return value, a caller could then\nsee that the function returned NULL rather than whatever the expected value\nwas (example: TRUE)?\n\nI still think that we could just declare the function strict, if we\nuse the variadic-any approach. Passing a null in any position is\nindisputable caller error. However, if you're allergic to silently\ndoing nothing in such a case, we could have pg_set_attribute_stats\ncheck each argument and throw an error. (Or warn and keep going;\nbut according to the design principle I posited earlier, this'd be\nthe sort of thing we don't need to tolerate.)Any thoughts about going back to having a return value, a caller could then see that the function returned NULL rather than whatever the expected value was (example: TRUE)?",
"msg_date": "Mon, 1 Apr 2024 15:54:30 -0400",
"msg_from": "Corey Huinker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": "Corey Huinker <[email protected]> writes:\n> Any thoughts about going back to having a return value, a caller could then\n> see that the function returned NULL rather than whatever the expected value\n> was (example: TRUE)?\n\nIf we are envisioning that the function might emit multiple warnings\nper call, a useful definition could be to return the number of\nwarnings (so zero is good, not-zero is bad). But I'm not sure that's\nreally better than a boolean result. pg_dump/pg_restore won't notice\nanyway, but perhaps other programs using these functions would care.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 01 Apr 2024 17:09:05 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": ">\n> If we are envisioning that the function might emit multiple warnings\n> per call, a useful definition could be to return the number of\n> warnings (so zero is good, not-zero is bad). But I'm not sure that's\n> really better than a boolean result. pg_dump/pg_restore won't notice\n> anyway, but perhaps other programs using these functions would care.\n>\n\nA boolean is what we had before, I'm quite comfortable with that, and it\naddresses my silent-failure concerns.\n\nIf we are envisioning that the function might emit multiple warnings\nper call, a useful definition could be to return the number of\nwarnings (so zero is good, not-zero is bad). But I'm not sure that's\nreally better than a boolean result. pg_dump/pg_restore won't notice\nanyway, but perhaps other programs using these functions would care.A boolean is what we had before, I'm quite comfortable with that, and it addresses my silent-failure concerns.",
"msg_date": "Mon, 1 Apr 2024 17:15:25 -0400",
"msg_from": "Corey Huinker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": "Corey Huinker <[email protected]> writes:\n> A boolean is what we had before, I'm quite comfortable with that, and it\n> addresses my silent-failure concerns.\n\nWFM.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 01 Apr 2024 17:47:58 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": "Here's a one-liner patch for disabling update of pg_class\nrelpages/reltuples/relallviible during a binary upgrade.\n\nThis was causting pg_upgrade tests to fail in the existing stats import\nwork.",
"msg_date": "Tue, 2 Apr 2024 05:38:53 -0400",
"msg_from": "Corey Huinker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": "On Tue, 2024-04-02 at 05:38 -0400, Corey Huinker wrote:\n> Here's a one-liner patch for disabling update of pg_class\n> relpages/reltuples/relallviible during a binary upgrade.\n\nThis change makes sense to me regardless of the rest of the work.\nUpdating the relpages/reltuples/relallvisible during pg_upgrade before\nthe data is there will store the wrong stats.\n\nIt could use a brief comment, though.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Tue, 02 Apr 2024 08:10:50 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": "I have refactored pg_set_relation_stats to be variadic, and I'm working on\npg_set_attribute_sttats, but I'm encountering an issue with the anyarray\nvalues.\n\nJeff suggested looking at anyarray_send as a way of extracting the type,\nand with some extra twiddling we can get and cast the type. However, some\nof the ANYARRAYs have element types that are themselves arrays, and near as\nI can tell, such a construct is not expressible in SQL. So, rather than\ngetting an anyarray of an array type, you instead get an array of one\nhigher dimension. Like so:\n\n# select schemaname, tablename, attname,\n\n substring(substring(anyarray_send(histogram_bounds) from 9 for\n4)::text,2)::bit(32)::integer::regtype,\n\n\n substring(substring(anyarray_send(histogram_bounds::text::text[][]) from 9\nfor 4)::text,2)::bit(32)::integer::regtype\nfrom pg_stats where histogram_bounds is not null\n\nand tablename = 'pg_proc' and attname = 'proargnames'\n\n\n ;\n\n schemaname | tablename | attname | substring | substring\n\n------------+-----------+-------------+-----------+-----------\n\n pg_catalog | pg_proc | proargnames | text[] | text\n\nLuckily, passing in such a value would have done all of the element\ntypechecking for us, so we would just move the data to an array of one less\ndimension typed elem[]. If there's an easy way to do that, I don't know of\nit.\n\nWhat remains is just checking the input types against the expected type of\nthe array, stepping down the dimension if need be, and skipping if the type\ndoesn't meet expectations.\n\nI have refactored pg_set_relation_stats to be variadic, and I'm working on pg_set_attribute_sttats, but I'm encountering an issue with the anyarray values.Jeff suggested looking at anyarray_send as a way of extracting the type, and with some extra twiddling we can get and cast the type. However, some of the ANYARRAYs have element types that are themselves arrays, and near as I can tell, such a construct is not expressible in SQL. So, rather than getting an anyarray of an array type, you instead get an array of one higher dimension. Like so:\n# select schemaname, tablename, attname, substring(substring(anyarray_send(histogram_bounds) from 9 for 4)::text,2)::bit(32)::integer::regtype, substring(substring(anyarray_send(histogram_bounds::text::text[][]) from 9 for 4)::text,2)::bit(32)::integer::regtypefrom pg_stats where histogram_bounds is not nulland tablename = 'pg_proc' and attname = 'proargnames' ;\n schemaname | tablename | attname | substring | substring \n------------+-----------+-------------+-----------+-----------\n pg_catalog | pg_proc | proargnames | text[] | textLuckily, passing in such a value would have done all of the element typechecking for us, so we would just move the data to an array of one less dimension typed elem[]. If there's an easy way to do that, I don't know of it.What remains is just checking the input types against the expected type of the array, stepping down the dimension if need be, and skipping if the type doesn't meet expectations.",
"msg_date": "Tue, 2 Apr 2024 12:59:08 -0400",
"msg_from": "Corey Huinker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": "On Tue, 2024-04-02 at 12:59 -0400, Corey Huinker wrote:\n> However, some of the ANYARRAYs have element types that are\n> themselves arrays, and near as I can tell, such a construct is not\n> expressible in SQL. So, rather than getting an anyarray of an array\n> type, you instead get an array of one higher dimension.\n\nFundamentally, you want to recreate the exact same anyarray values on\nthe destination system as they existed on the source. There's some\ncomplexity to that on both the export side as well as the import side,\nbut I believe the problems are solvable.\n\nOn the export side, the problem is that the element type (and\ndimensionality and maybe hasnull) is an important part of the anyarray\nvalue, but it's not part of the output of anyarray_out(). For new\nversions, we can add a scalar function that simply outputs the\ninformation we need. For old versions, we can hack it by parsing the\noutput of anyarray_send(), which contains the information we need\n(binary outputs are under-specified, but I believe they are specified\nenough in this case). There may be other hacks to get the information\nfrom the older systems; that's just an idea. To get the actual data,\ndoing histogram_bounds::text::text[] seems to be enough: that seems to\nalways give a one-dimensional array with element type \"text\", even if\nthe element type is an array. (Note: this means we need the function's\nAPI to also include this extra information about the anyarray values,\nso it might be slightly more complex than name/value pairs).\n\nOn the import side, the problem is that there may not be an input\nfunction to go from a 1-D array of text to a 1-D array of any element\ntype we want. For example, there's no input function that will create a\n1-D array with element type float4[] (that's because Postgres doesn't\nreally have arrays-of-arrays, it has multi-dimensional arrays).\nInstead, don't use the input function, pass each element of the 1-D\ntext array to the element type's input function (which may be scalar or\nnot) and then construct a 1-D array out of that with the appropriate\nelement type (which may be scalar or not).\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Tue, 02 Apr 2024 14:13:57 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": "Jeff Davis <[email protected]> writes:\n> On the export side, the problem is that the element type (and\n> dimensionality and maybe hasnull) is an important part of the anyarray\n> value, but it's not part of the output of anyarray_out(). For new\n> versions, we can add a scalar function that simply outputs the\n> information we need. For old versions, we can hack it by parsing the\n> output of anyarray_send(), which contains the information we need\n> (binary outputs are under-specified, but I believe they are specified\n> enough in this case).\n\nYeah, I was thinking yesterday about pulling the anyarray columns in\nbinary and looking at the header fields. However, I fear there is a\nshowstopper problem: anyarray_send will fail if the element type\ndoesn't have a typsend function, which is entirely possible for\nuser-defined types (and I'm not even sure we've provided them for\nevery type in the core distro). I haven't thought of a good answer\nto that other than a new backend function. However ...\n\n> On the import side, the problem is that there may not be an input\n> function to go from a 1-D array of text to a 1-D array of any element\n> type we want. For example, there's no input function that will create a\n> 1-D array with element type float4[] (that's because Postgres doesn't\n> really have arrays-of-arrays, it has multi-dimensional arrays).\n> Instead, don't use the input function, pass each element of the 1-D\n> text array to the element type's input function (which may be scalar or\n> not) and then construct a 1-D array out of that with the appropriate\n> element type (which may be scalar or not).\n\nYup. I had hoped that we could avoid doing any array-munging inside\npg_set_attribute_stats, but this array-of-arrays problem seems to\nmean we have to. In turn, that means that the whole idea of\ndeclaring the function inputs as anyarray rather than text[] is\nprobably pointless. And that means that we don't need the sending\nside to know the element type anyway. So, I apologize for sending\nus down a useless side path. We may as well stick to the function\nsignature as shown in the v15 patch --- although maybe variadic\nany is still worthwhile so that an unrecognized field name doesn't\nneed to be a hard error?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 02 Apr 2024 17:31:07 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": ">\n> side to know the element type anyway. So, I apologize for sending\n> us down a useless side path. We may as well stick to the function\n> signature as shown in the v15 patch --- although maybe variadic\n> any is still worthwhile so that an unrecognized field name doesn't\n> need to be a hard error?\n>\n\nVariadic is nearly done. This issue was the main blocking point. I can go\nback to array_in() as we know that code works.\n\nside to know the element type anyway. So, I apologize for sending\nus down a useless side path. We may as well stick to the function\nsignature as shown in the v15 patch --- although maybe variadic\nany is still worthwhile so that an unrecognized field name doesn't\nneed to be a hard error?Variadic is nearly done. This issue was the main blocking point. I can go back to array_in() as we know that code works.",
"msg_date": "Tue, 2 Apr 2024 17:36:30 -0400",
"msg_from": "Corey Huinker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": "On Tue, 2024-04-02 at 17:31 -0400, Tom Lane wrote:\n> And that means that we don't need the sending\n> side to know the element type anyway.\n\nWe need to get the original element type on the import side somehow,\nright? Otherwise it will be hard to tell whether '{1, 2, 3, 4}' has\nelement type \"int4\" or \"text\", which affects the binary representation\nof the anyarray value in pg_statistic.\n\nEither we need to get it at export time (which seems the most reliable\nin principle, but problematic for older versions) and pass it as an\nargument to pg_set_attribute_stats(); or we need to derive it reliably\nfrom the table schema on the destination side, right?\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Tue, 02 Apr 2024 14:59:12 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": "Jeff Davis <[email protected]> writes:\n> We need to get the original element type on the import side somehow,\n> right? Otherwise it will be hard to tell whether '{1, 2, 3, 4}' has\n> element type \"int4\" or \"text\", which affects the binary representation\n> of the anyarray value in pg_statistic.\n\nYeah, but that problem exists no matter what. I haven't read enough\nof the patch to find where it's determining that, but I assume there's\ncode in there to intuit the statistics storage type depending on the\ntable column's data type and the statistics kind.\n\n> Either we need to get it at export time (which seems the most reliable\n> in principle, but problematic for older versions) and pass it as an\n> argument to pg_set_attribute_stats(); or we need to derive it reliably\n> from the table schema on the destination side, right?\n\nWe could not trust the exporting side to tell us the correct answer;\nfor one reason, it might be different across different releases.\nSo \"derive it reliably on the destination\" is really the only option.\n\nI think that it's impossible to do this in the general case, since\ntype-specific typanalyze functions can store pretty nearly whatever\nthey like. However, the pg_stats view isn't going to show nonstandard\nstatistics kinds anyway, so we are going to be lossy for custom\nstatistics kinds.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 02 Apr 2024 18:18:53 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": ">\n> Yeah, but that problem exists no matter what. I haven't read enough\n> of the patch to find where it's determining that, but I assume there's\n> code in there to intuit the statistics storage type depending on the\n> table column's data type and the statistics kind.\n>\n\nCorrect. It borrows a lot from examine_attribute() and the *_typanalyze()\nfunctions. Actually using VacAttrStats proved problematic, but that can be\nrevisited at some point.\n\n\n> We could not trust the exporting side to tell us the correct answer;\n> for one reason, it might be different across different releases.\n> So \"derive it reliably on the destination\" is really the only option.\n>\n\n+1\n\n\n> I think that it's impossible to do this in the general case, since\n> type-specific typanalyze functions can store pretty nearly whatever\n> they like. However, the pg_stats view isn't going to show nonstandard\n> statistics kinds anyway, so we are going to be lossy for custom\n> statistics kinds.\n>\n\nSadly true.\n\nYeah, but that problem exists no matter what. I haven't read enough\nof the patch to find where it's determining that, but I assume there's\ncode in there to intuit the statistics storage type depending on the\ntable column's data type and the statistics kind.Correct. It borrows a lot from examine_attribute() and the *_typanalyze() functions. Actually using VacAttrStats proved problematic, but that can be revisited at some point. We could not trust the exporting side to tell us the correct answer;\nfor one reason, it might be different across different releases.\nSo \"derive it reliably on the destination\" is really the only option.+1 I think that it's impossible to do this in the general case, since\ntype-specific typanalyze functions can store pretty nearly whatever\nthey like. However, the pg_stats view isn't going to show nonstandard\nstatistics kinds anyway, so we are going to be lossy for custom\nstatistics kinds.Sadly true.",
"msg_date": "Tue, 2 Apr 2024 18:35:55 -0400",
"msg_from": "Corey Huinker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": "v16 attached.\n\n- both functions now use variadics for anything that can be considered a\nstat.\n- most consistency checks removed, null element tests remain\n- functions strive to not ERROR unless absolutely necessary. The biggest\nexposure is the call to array_in().\n- docs have not yet been updated, pending general acceptance of the\nvariadic over the named arg version.\n\nHaving variant arguments is definitely a little bit more work to manage,\nand the shift from ERROR to WARN removes a lot of the easy exits that it\npreviously had, as well as having to do some extra type checking that we\ngot for free with fixed arguments. Still, I don't think the readability\nsuffers too much, and we are now able to work for downgrades as well as\nupgrades.",
"msg_date": "Wed, 3 Apr 2024 00:59:10 -0400",
"msg_from": "Corey Huinker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": "Corey Huinker <[email protected]> writes:\n> - functions strive to not ERROR unless absolutely necessary. The biggest\n> exposure is the call to array_in().\n\nAs far as that goes, it shouldn't be that hard to deal with, at least\nnot for \"soft\" errors which hopefully cover most input-function\nfailures these days. You should be invoking array_in via\nInputFunctionCallSafe and passing a suitably-set-up ErrorSaveContext.\n(Look at pg_input_error_info() for useful precedent.)\n\nThere might be something to be said for handling all the error\ncases via an ErrorSaveContext and use of ereturn() instead of\nereport(). Not sure if it's worth the trouble or not.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 03 Apr 2024 13:18:07 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": ">\n>\n> As far as that goes, it shouldn't be that hard to deal with, at least\n> not for \"soft\" errors which hopefully cover most input-function\n> failures these days. You should be invoking array_in via\n> InputFunctionCallSafe and passing a suitably-set-up ErrorSaveContext.\n> (Look at pg_input_error_info() for useful precedent.)\n>\n\nAh, my understanding may be out of date. I was under the impression that\nthat mechanism relied on the the cooperation of the per-element input\nfunction, so even if we got all the builtin datatypes to play nice with\n*Safe(), we were always going to be at risk with a user-defined input\nfunction.\n\n\n> There might be something to be said for handling all the error\n> cases via an ErrorSaveContext and use of ereturn() instead of\n> ereport(). Not sure if it's worth the trouble or not.\n>\n\nIt would help us tailor the user experience. Right now we have several\nendgames. To recap:\n\n1. NULL input => Return NULL. (because strict).\n2. Actual error (permissions, cache lookup not found, etc) => Raise ERROR\n(thus ruining binary upgrade)\n3. Call values are so bad (examples: attname not found, required stat\nmissing) that nothing can recover => WARN, return FALSE.\n4. At least one stakind-stat is wonky (impossible for datatype, missing\nstat pair, wrong type on input parameter), but that's the worst of it => 1\nto N WARNs, write stats that do make sense, return TRUE.\n5. Hunky-dory. => No warns. Write all stats. return TRUE.\n\nWhich of those seem like good ereturn candidates to you?\n\nAs far as that goes, it shouldn't be that hard to deal with, at least\nnot for \"soft\" errors which hopefully cover most input-function\nfailures these days. You should be invoking array_in via\nInputFunctionCallSafe and passing a suitably-set-up ErrorSaveContext.\n(Look at pg_input_error_info() for useful precedent.)Ah, my understanding may be out of date. I was under the impression that that mechanism relied on the the cooperation of the per-element input function, so even if we got all the builtin datatypes to play nice with *Safe(), we were always going to be at risk with a user-defined input function. There might be something to be said for handling all the error\ncases via an ErrorSaveContext and use of ereturn() instead of\nereport(). Not sure if it's worth the trouble or not.It would help us tailor the user experience. Right now we have several endgames. To recap:1. NULL input => Return NULL. (because strict).2. Actual error (permissions, cache lookup not found, etc) => Raise ERROR (thus ruining binary upgrade)3. Call values are so bad (examples: attname not found, required stat missing) that nothing can recover => WARN, return FALSE.4. At least one stakind-stat is wonky (impossible for datatype, missing stat pair, wrong type on input parameter), but that's the worst of it => 1 to N WARNs, write stats that do make sense, return TRUE.5. Hunky-dory. => No warns. Write all stats. return TRUE.Which of those seem like good ereturn candidates to you?",
"msg_date": "Wed, 3 Apr 2024 14:13:04 -0400",
"msg_from": "Corey Huinker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": "Corey Huinker <[email protected]> writes:\n>> As far as that goes, it shouldn't be that hard to deal with, at least\n>> not for \"soft\" errors which hopefully cover most input-function\n>> failures these days. You should be invoking array_in via\n>> InputFunctionCallSafe and passing a suitably-set-up ErrorSaveContext.\n>> (Look at pg_input_error_info() for useful precedent.)\n\n> Ah, my understanding may be out of date. I was under the impression that\n> that mechanism relied on the the cooperation of the per-element input\n> function, so even if we got all the builtin datatypes to play nice with\n> *Safe(), we were always going to be at risk with a user-defined input\n> function.\n\nThat's correct, but it's silly not to do what we can. Also, I imagine\nthat there is going to be high evolutionary pressure on UDTs to\nsupport soft error mode for COPY, so over time the problem will\ndecrease --- as long as we invoke the soft error mode.\n\n> 1. NULL input => Return NULL. (because strict).\n> 2. Actual error (permissions, cache lookup not found, etc) => Raise ERROR\n> (thus ruining binary upgrade)\n> 3. Call values are so bad (examples: attname not found, required stat\n> missing) that nothing can recover => WARN, return FALSE.\n> 4. At least one stakind-stat is wonky (impossible for datatype, missing\n> stat pair, wrong type on input parameter), but that's the worst of it => 1\n> to N WARNs, write stats that do make sense, return TRUE.\n> 5. Hunky-dory. => No warns. Write all stats. return TRUE.\n\n> Which of those seem like good ereturn candidates to you?\n\nI'm good with all those behaviors. On reflection, the design I was\nvaguely imagining wouldn't cope with case 4 (multiple WARNs per call)\nso never mind that.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 03 Apr 2024 16:02:47 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": "On Mon, Apr 01, 2024 at 01:21:53PM -0400, Tom Lane wrote:\n> I'm not sure. I think if we put our heads down we could finish\n> the changes I'm suggesting and resolve the other issues this week.\n> However, it is starting to feel like the sort of large, barely-ready\n> patch that we often regret cramming in at the last minute. Maybe\n> we should agree that the first v18 CF would be a better time to\n> commit it.\n\nThere are still 4 days remaining, so there's still time, but my\noverall experience on the matter with my RMT hat on is telling me that\nwe should not rush this patch set. Redesigning portions close to the\nend of a dev cycle is not a good sign, I am afraid, especially if the\nsub-parts of the design don't fit well in the global picture as that\ncould mean more maintenance work on stable branches in the long term.\nStill, it is very good to be aware of the problems because you'd know\nwhat to tackle to reach the goals of this patch set.\n--\nMichael",
"msg_date": "Thu, 4 Apr 2024 10:14:52 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": ">\n>\n> I'm good with all those behaviors. On reflection, the design I was\n> vaguely imagining wouldn't cope with case 4 (multiple WARNs per call)\n> so never mind that.\n>\n> regards, tom lane\n>\n\nv17\n\n0001\n- array_in now repackages cast errors as warnings and skips the stat, test\nadded\n- version parameter added, though it's mostly for future compatibility,\ntests modified\n- both functions delay object/attribute locking until absolutely necessary\n- general cleanup\n\n0002\n- added version parameter to dumps\n- --schema-only will not dump stats unless in binary upgrade mode\n- stats are dumped SECTION_NONE\n- general cleanup\n\nI think that covers the outstanding issues.",
"msg_date": "Thu, 4 Apr 2024 00:30:18 -0400",
"msg_from": "Corey Huinker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": ">\n> For a partitioned table this value has to be true. For a normal table when\n> setting this value to true, it should at least make sure that the table has\n> at least one child. Otherwise it should throw an error. Blindly accepting\n> the given value may render the statistics unusable. Prologue of the\n> function needs to be updated accordingly.\n>\n\nI can see rejecting non-inherited stats for a partitioned table. The\nreverse, however, isn't true, because a table may end up being inherited by\nanother, so those statistics may be legit. Having said that, a great deal\nof the data validation I was doing was seen as unnecessary, so I' not sure\nwhere this check would fall on that line. It's a trivial check if we do add\nit.\n\nFor a partitioned table this value has to be true. For a normal table when setting this value to true, it should at least make sure that the table has at least one child. Otherwise it should throw an error. Blindly accepting the given value may render the statistics unusable. Prologue of the function needs to be updated accordingly.I can see rejecting non-inherited stats for a partitioned table. The reverse, however, isn't true, because a table may end up being inherited by another, so those statistics may be legit. Having said that, a great deal of the data validation I was doing was seen as unnecessary, so I' not sure where this check would fall on that line. It's a trivial check if we do add it.",
"msg_date": "Thu, 4 Apr 2024 21:30:32 -0400",
"msg_from": "Corey Huinker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": "On Fri, Apr 5, 2024 at 7:00 AM Corey Huinker <[email protected]>\nwrote:\n\n> For a partitioned table this value has to be true. For a normal table when\n>> setting this value to true, it should at least make sure that the table has\n>> at least one child. Otherwise it should throw an error. Blindly accepting\n>> the given value may render the statistics unusable. Prologue of the\n>> function needs to be updated accordingly.\n>>\n>\n> I can see rejecting non-inherited stats for a partitioned table. The\n> reverse, however, isn't true, because a table may end up being inherited by\n> another, so those statistics may be legit. Having said that, a great deal\n> of the data validation I was doing was seen as unnecessary, so I' not sure\n> where this check would fall on that line. It's a trivial check if we do add\n> it.\n>\n\nI read that discussion, and it may be ok for pg_upgrade/pg_dump usecase and\nmaybe also for IMPORT foreign schema where the SQL is generated by\nPostgreSQL itself. But not for simulating statistics. In that case, if the\nfunction happily installs statistics cooked by the user and those aren't\nused anywhere, users may be misled by the plans that are generated\nsubsequently. Thus negating the very purpose of simulating statistics. Once\nthe feature is out there, we won't be able to restrict its usage unless we\ndocument the possible anomalies.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\nOn Fri, Apr 5, 2024 at 7:00 AM Corey Huinker <[email protected]> wrote:For a partitioned table this value has to be true. For a normal table when setting this value to true, it should at least make sure that the table has at least one child. Otherwise it should throw an error. Blindly accepting the given value may render the statistics unusable. Prologue of the function needs to be updated accordingly.I can see rejecting non-inherited stats for a partitioned table. The reverse, however, isn't true, because a table may end up being inherited by another, so those statistics may be legit. Having said that, a great deal of the data validation I was doing was seen as unnecessary, so I' not sure where this check would fall on that line. It's a trivial check if we do add it.\nI read that discussion, and it may be ok for pg_upgrade/pg_dump usecase and maybe also for IMPORT foreign schema where the SQL is generated by PostgreSQL itself. But not for simulating statistics. In that case, if the function happily installs statistics cooked by the user and those aren't used anywhere, users may be misled by the plans that are generated subsequently. Thus negating the very purpose of simulating statistics. Once the feature is out there, we won't be able to restrict its usage unless we document the possible anomalies.-- Best Wishes,Ashutosh Bapat",
"msg_date": "Fri, 5 Apr 2024 09:48:50 +0530",
"msg_from": "Ashutosh Bapat <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": "Ashutosh Bapat <[email protected]> writes:\n> I read that discussion, and it may be ok for pg_upgrade/pg_dump usecase and\n> maybe also for IMPORT foreign schema where the SQL is generated by\n> PostgreSQL itself. But not for simulating statistics. In that case, if the\n> function happily installs statistics cooked by the user and those aren't\n> used anywhere, users may be misled by the plans that are generated\n> subsequently. Thus negating the very purpose of simulating\n> statistics.\n\nI'm not sure what you think the \"purpose of simulating statistics\" is,\nbut it seems like you have an extremely narrow-minded view of it.\nI think we should allow injecting any stats that won't actively crash\nthe backend. Such functionality could be useful for stress-testing\nthe planner, for example, or even just to see what it would do in\na situation that is not what you have.\n\nNote that I don't think pg_dump or pg_upgrade need to support\ninjection of counterfactual statistics. But direct calls of the\nstats insertion functions should be able to do so.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 05 Apr 2024 00:37:32 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": "On Fri, Apr 5, 2024 at 10:07 AM Tom Lane <[email protected]> wrote:\n\n> Ashutosh Bapat <[email protected]> writes:\n> > I read that discussion, and it may be ok for pg_upgrade/pg_dump usecase\n> and\n> > maybe also for IMPORT foreign schema where the SQL is generated by\n> > PostgreSQL itself. But not for simulating statistics. In that case, if\n> the\n> > function happily installs statistics cooked by the user and those aren't\n> > used anywhere, users may be misled by the plans that are generated\n> > subsequently. Thus negating the very purpose of simulating\n> > statistics.\n>\n> I'm not sure what you think the \"purpose of simulating statistics\" is,\n> but it seems like you have an extremely narrow-minded view of it.\n> I think we should allow injecting any stats that won't actively crash\n> the backend. Such functionality could be useful for stress-testing\n> the planner, for example, or even just to see what it would do in\n> a situation that is not what you have.\n>\n\nMy reply was in the following context\n\n> For a partitioned table this value has to be true. For a normal table when\n>> setting this value to true, it should at least make sure that the table has\n>> at least one child. Otherwise it should throw an error. Blindly accepting\n>> the given value may render the statistics unusable. Prologue of the\n>> function needs to be updated accordingly.\n>>\n>\n> I can see rejecting non-inherited stats for a partitioned table. The\n> reverse, however, isn't true, because a table may end up being inherited by\n> another, so those statistics may be legit. Having said that, a great deal\n> of the data validation I was doing was seen as unnecessary, so I' not sure\n> where this check would fall on that line. It's a trivial check if we do add\n> it.\n>\n\nIf a user installs inherited stats for a non-inherited table by accidently\npassing true to the corresponding argument, those stats won't be even used.\nThe user wouldn't know that those stats are not used. Yet, they would think\nthat any change in the plans is the result of their stats. So whatever\nsimulation experiment they are running would lead to wrong conclusions.\nThis could be easily avoided by raising an error. Similarly for installing\nnon-inherited stats for a partitioned table. There might be other scenarios\nwhere the error won't be required.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\nOn Fri, Apr 5, 2024 at 10:07 AM Tom Lane <[email protected]> wrote:Ashutosh Bapat <[email protected]> writes:\n> I read that discussion, and it may be ok for pg_upgrade/pg_dump usecase and\n> maybe also for IMPORT foreign schema where the SQL is generated by\n> PostgreSQL itself. But not for simulating statistics. In that case, if the\n> function happily installs statistics cooked by the user and those aren't\n> used anywhere, users may be misled by the plans that are generated\n> subsequently. Thus negating the very purpose of simulating\n> statistics.\n\nI'm not sure what you think the \"purpose of simulating statistics\" is,\nbut it seems like you have an extremely narrow-minded view of it.\nI think we should allow injecting any stats that won't actively crash\nthe backend. Such functionality could be useful for stress-testing\nthe planner, for example, or even just to see what it would do in\na situation that is not what you have.My reply was in the following contextFor\n a partitioned table this value has to be true. 
For a normal table when \nsetting this value to true, it should at least make sure that the table \nhas at least one child. Otherwise it should throw an error. Blindly \naccepting the given value may render the statistics unusable. Prologue \nof the function needs to be updated accordingly.I\n can see rejecting non-inherited stats for a partitioned table. The \nreverse, however, isn't true, because a table may end up being inherited\n by another, so those statistics may be legit. Having said that, a great\n deal of the data validation I was doing was seen as unnecessary, so I' \nnot sure where this check would fall on that line. It's a trivial check \nif we do add it.\nIf a user installs inherited stats for a non-inherited table by accidently passing true to the corresponding argument, those stats won't be even used. The user wouldn't know that those stats are not used. Yet, they would think that any change in the plans is the result of their stats. So whatever simulation experiment they are running would lead to wrong conclusions. This could be easily avoided by raising an error. Similarly for installing non-inherited stats for a partitioned table. There might be other scenarios where the error won't be required.-- Best Wishes,Ashutosh Bapat",
"msg_date": "Fri, 5 Apr 2024 11:39:57 +0530",
"msg_from": "Ashutosh Bapat <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": "On Thu, 2024-04-04 at 00:30 -0400, Corey Huinker wrote:\n> \n> v17\n> \n> 0001\n> - array_in now repackages cast errors as warnings and skips the stat,\n> test added\n> - version parameter added, though it's mostly for future\n> compatibility, tests modified\n> - both functions delay object/attribute locking until absolutely\n> necessary\n> - general cleanup\n> \n> 0002\n> - added version parameter to dumps\n> - --schema-only will not dump stats unless in binary upgrade mode\n> - stats are dumped SECTION_NONE\n> - general cleanup\n> \n> I think that covers the outstanding issues. \n\nThank you, this has improved a lot and the fundamentals are very close.\n\nI think it could benefit from a bit more time to settle on a few\nissues:\n\n1. SECTION_NONE. Conceptually, stats are more like data, and so\nintuitively I would expect this in the SECTION_DATA or\nSECTION_POST_DATA. However, the two most important use cases (in my\nopinion) don't involve dumping the data: pg_upgrade (data doesn't come\nfrom the dump) and planner simulations/repros. Perhaps the section we\nplace it in is not a critical decision, but we will need to stick with\nit for a long time, and I'm not sure that we have consensus on that\npoint.\n\n2. We changed the stats import function API to be VARIADIC very\nrecently. After we have a bit of time to think on it, I'm not 100% sure\nwe will want to stick with that new API. It's not easy to document,\nwhich is something I always like to consider.\n\n3. The error handling also changed recently to change soft errors (i.e.\ntype input errors) to warnings. I like this change but I'd need a bit\nmore time to get comfortable with how this is done, there is not a lot\nof precedent for doing this kind of thing. This is connected to the\nreturn value, as well as the machine-readability concern that Magnus\nraised.\n\nAdditionally, a lot of people are simply very busy around this time,\nand may not have had a chance to opine on all the recent changes yet.\n\nRegards,\n\tJeff Davis\n\n\n\n\n",
"msg_date": "Fri, 05 Apr 2024 20:47:40 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": "Jeff Davis <[email protected]> writes:\n> Thank you, this has improved a lot and the fundamentals are very close.\n> I think it could benefit from a bit more time to settle on a few\n> issues:\n\nYeah ... it feels like we aren't quite going to manage to get this\nover the line for v17. We could commit with the hope that these\nlast details will get sorted later, but that path inevitably leads\nto a mess.\n\n> 1. SECTION_NONE. Conceptually, stats are more like data, and so\n> intuitively I would expect this in the SECTION_DATA or\n> SECTION_POST_DATA. However, the two most important use cases (in my\n> opinion) don't involve dumping the data: pg_upgrade (data doesn't come\n> from the dump) and planner simulations/repros. Perhaps the section we\n> place it in is not a critical decision, but we will need to stick with\n> it for a long time, and I'm not sure that we have consensus on that\n> point.\n\nI think it'll be a serious, serious error for this not to be\nSECTION_DATA. Maybe POST_DATA is OK, but even that seems like\nan implementation compromise not \"the way it ought to be\".\n\n> 2. We changed the stats import function API to be VARIADIC very\n> recently. After we have a bit of time to think on it, I'm not 100% sure\n> we will want to stick with that new API. It's not easy to document,\n> which is something I always like to consider.\n\nPerhaps. I think the argument of wanting to be able to salvage\nsomething even in the presence of unrecognized stats types is\nstronger, but I agree this could use more time in the oven.\nUnlike many other things in this patch, this would be nigh\nimpossible to reconsider later.\n\n> 3. The error handling also changed recently to change soft errors (i.e.\n> type input errors) to warnings. I like this change but I'd need a bit\n> more time to get comfortable with how this is done, there is not a lot\n> of precedent for doing this kind of thing.\n\nI don't think there's much disagreement that that's the right thing,\nbut yeah there could be bugs or some more to do in this area.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 06 Apr 2024 00:05:28 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": ">\n>\n>\n> I think it'll be a serious, serious error for this not to be\n> SECTION_DATA. Maybe POST_DATA is OK, but even that seems like\n> an implementation compromise not \"the way it ought to be\".\n>\n\nWe'd have to split them on account of when the underlying object is\ncreated. Index statistics would be SECTION_POST_DATA, and everything else\nwould be SECTION_DATA. Looking ahead, statistics data for extended\nstatistics objects would also be POST. That's not a big change, but my\nfirst attempt at that resulted in a bunch of unrelated grants dumping in\nthe wrong section.\n\n\nI think it'll be a serious, serious error for this not to be\nSECTION_DATA. Maybe POST_DATA is OK, but even that seems like\nan implementation compromise not \"the way it ought to be\".We'd have to split them on account of when the underlying object is created. Index statistics would be SECTION_POST_DATA, and everything else would be SECTION_DATA. Looking ahead, statistics data for extended statistics objects would also be POST. That's not a big change, but my first attempt at that resulted in a bunch of unrelated grants dumping in the wrong section.",
"msg_date": "Sat, 6 Apr 2024 17:23:43 -0400",
"msg_from": "Corey Huinker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": "On Sat, Apr 6, 2024 at 5:23 PM Corey Huinker <[email protected]>\nwrote:\n\n>\n>>\n>> I think it'll be a serious, serious error for this not to be\n>> SECTION_DATA. Maybe POST_DATA is OK, but even that seems like\n>> an implementation compromise not \"the way it ought to be\".\n>>\n>\n> We'd have to split them on account of when the underlying object is\n> created. Index statistics would be SECTION_POST_DATA, and everything else\n> would be SECTION_DATA. Looking ahead, statistics data for extended\n> statistics objects would also be POST. That's not a big change, but my\n> first attempt at that resulted in a bunch of unrelated grants dumping in\n> the wrong section.\n>\n\nAt the request of a few people, attached is an attempt to move stats to\nDATA/POST-DATA, and the TAP test failure that results from that.\n\nThe relevant errors are confusing, in that they all concern GRANT/REVOKE,\nand the fact that I made no changes to the TAP test itself.\n\n$ grep 'not ok' build/meson-logs/testlog.txt\nnot ok 9347 - section_data: should not dump GRANT INSERT(col1) ON TABLE\ntest_second_table\nnot ok 9348 - section_data: should not dump GRANT SELECT (proname ...) ON\nTABLE pg_proc TO public\nnot ok 9349 - section_data: should not dump GRANT SELECT ON TABLE\nmeasurement\nnot ok 9350 - section_data: should not dump GRANT SELECT ON TABLE\nmeasurement_y2006m2\nnot ok 9351 - section_data: should not dump GRANT SELECT ON TABLE test_table\nnot ok 9379 - section_data: should not dump REVOKE SELECT ON TABLE pg_proc\nFROM public\nnot ok 9788 - section_pre_data: should dump CREATE TABLE test_table\nnot ok 9837 - section_pre_data: should dump GRANT INSERT(col1) ON TABLE\ntest_second_table\nnot ok 9838 - section_pre_data: should dump GRANT SELECT (proname ...) ON\nTABLE pg_proc TO public\nnot ok 9839 - section_pre_data: should dump GRANT SELECT ON TABLE\nmeasurement\nnot ok 9840 - section_pre_data: should dump GRANT SELECT ON TABLE\nmeasurement_y2006m2\nnot ok 9841 - section_pre_data: should dump GRANT SELECT ON TABLE test_table\nnot ok 9869 - section_pre_data: should dump REVOKE SELECT ON TABLE pg_proc\nFROM public",
"msg_date": "Thu, 11 Apr 2024 15:54:07 -0400",
"msg_from": "Corey Huinker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": "On Thu, Apr 11, 2024 at 03:54:07PM -0400, Corey Huinker wrote:\n> At the request of a few people, attached is an attempt to move stats to\n> DATA/POST-DATA, and the TAP test failure that results from that.\n> \n> The relevant errors are confusing, in that they all concern GRANT/REVOKE,\n> and the fact that I made no changes to the TAP test itself.\n> \n> $ grep 'not ok' build/meson-logs/testlog.txt\n> not ok 9347 - section_data: should not dump GRANT INSERT(col1) ON TABLE\n> test_second_table\n\nIt looks like the problem is that the ACLs are getting dumped in the data\nsection when we are also dumping stats. I'm able to get the tests to pass\nby moving the call to dumpRelationStats() that's in dumpTableSchema() to\ndumpTableData(). I'm not entirely sure why that fixes it yet, but if we're\ntreating stats as data, then it intuitively makes sense for us to dump it\nin dumpTableData(). However, that seems to prevent the stats from getting\nexported in the --schema-only/--binary-upgrade scenario, which presents a\nproblem for pg_upgrade. ISTM we'll need some extra hacks to get this to\nwork as desired.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 17 Apr 2024 11:50:53 -0500",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": "On Wed, 2024-04-17 at 11:50 -0500, Nathan Bossart wrote:\n> It looks like the problem is that the ACLs are getting dumped in the\n> data\n> section when we are also dumping stats. I'm able to get the tests to\n> pass\n> by moving the call to dumpRelationStats() that's in dumpTableSchema()\n> to\n> dumpTableData(). I'm not entirely sure why that fixes it yet, but if\n> we're\n> treating stats as data, then it intuitively makes sense for us to\n> dump it\n> in dumpTableData().\n\nWould it make sense to have a new SECTION_STATS?\n\n> However, that seems to prevent the stats from getting\n> exported in the --schema-only/--binary-upgrade scenario, which\n> presents a\n> problem for pg_upgrade. ISTM we'll need some extra hacks to get this\n> to\n> work as desired.\n\nPhilosophically, I suppose stats are data, but I still don't understand\nwhy considering stats to be data is so important in pg_dump.\n\nPractically, I want to dump stats XOR data. That's because, if I dump\nthe data, it's so costly to reload and rebuild indexes that it's not\nvery important to avoid a re-ANALYZE.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Mon, 22 Apr 2024 10:56:54 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": "Jeff Davis <[email protected]> writes:\n> Would it make sense to have a new SECTION_STATS?\n\nPerhaps, but the implications for pg_dump's API would be nontrivial,\neg would we break any applications that know about the current\noptions for --section. And you still have to face up to the question\n\"does --data-only include this stuff?\".\n\n> Philosophically, I suppose stats are data, but I still don't understand\n> why considering stats to be data is so important in pg_dump.\n> Practically, I want to dump stats XOR data. That's because, if I dump\n> the data, it's so costly to reload and rebuild indexes that it's not\n> very important to avoid a re-ANALYZE.\n\nHmm, interesting point. But the counterargument to that is that\nthe cost of building indexes will also dwarf the cost of installing\nstats, so why not do so? Loading data without stats, and hoping\nthat auto-analyze will catch up sooner not later, is exactly the\ncurrent behavior that we're doing all this work to get out of.\nI don't really think we want it to continue to be the default.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 22 Apr 2024 16:19:13 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": "On Mon, 2024-04-22 at 16:19 -0400, Tom Lane wrote:\n> Loading data without stats, and hoping\n> that auto-analyze will catch up sooner not later, is exactly the\n> current behavior that we're doing all this work to get out of.\n\nThat's the disconnect, I think. For me, the main reason I'm excited\nabout this work is as a way to solve the bad-plans-after-upgrade\nproblem and to repro planner issues outside of production. Avoiding the\nneed to ANALYZE at the end of a data load is also a nice convenience,\nbut not a primary driver (for me).\n\nShould we just itemize some common use cases for pg_dump, and then\nchoose the defaults that are least likely to cause surprise?\n\nAs for the section, I'm not sure what to do about that. Based on this\nthread it seems that SECTION_NONE (or a SECTION_STATS?) is easiest to\nimplement, but I don't understand the long-term consequences of that\nchoice.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Mon, 22 Apr 2024 19:48:02 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": "Jeff Davis <[email protected]> writes:\n> On Mon, 2024-04-22 at 16:19 -0400, Tom Lane wrote:\n>> Loading data without stats, and hoping\n>> that auto-analyze will catch up sooner not later, is exactly the\n>> current behavior that we're doing all this work to get out of.\n\n> That's the disconnect, I think. For me, the main reason I'm excited\n> about this work is as a way to solve the bad-plans-after-upgrade\n> problem and to repro planner issues outside of production. Avoiding the\n> need to ANALYZE at the end of a data load is also a nice convenience,\n> but not a primary driver (for me).\n\nOh, I don't doubt that there are use-cases for dumping stats without\ndata. I'm just dubious about the reverse. I think data+stats should\nbe the default, even if only because pg_dump's default has always\nbeen to dump everything. Then there should be a way to get stats\nonly, and maybe a way to get data only. Maybe this does argue for a\nfour-section definition, despite the ensuing churn in the pg_dump API.\n\n> Should we just itemize some common use cases for pg_dump, and then\n> choose the defaults that are least likely to cause surprise?\n\nPer above, I don't find any difficulty in deciding what should be the\ndefault. What I think we need to consider is what the pg_dump and\npg_restore switch sets should be. There's certainly a few different\nways we could present that; maybe we should sketch out the details for\na couple of ways.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 22 Apr 2024 23:52:21 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": "On Tue, 23 Apr 2024, 05:52 Tom Lane, <[email protected]> wrote:\n> Jeff Davis <[email protected]> writes:\n> > On Mon, 2024-04-22 at 16:19 -0400, Tom Lane wrote:\n> >> Loading data without stats, and hoping\n> >> that auto-analyze will catch up sooner not later, is exactly the\n> >> current behavior that we're doing all this work to get out of.\n>\n> > That's the disconnect, I think. For me, the main reason I'm excited\n> > about this work is as a way to solve the bad-plans-after-upgrade\n> > problem and to repro planner issues outside of production. Avoiding the\n> > need to ANALYZE at the end of a data load is also a nice convenience,\n> > but not a primary driver (for me).\n>\n> Oh, I don't doubt that there are use-cases for dumping stats without\n> data. I'm just dubious about the reverse. I think data+stats should\n> be the default, even if only because pg_dump's default has always\n> been to dump everything. Then there should be a way to get stats\n> only, and maybe a way to get data only. Maybe this does argue for a\n> four-section definition, despite the ensuing churn in the pg_dump API.\n\nI've heard of use cases where dumping stats without data would help\nwith production database planner debugging on a non-prod system.\n\nSure, some planner inputs would have to be taken into account too, but\nhaving an exact copy of production stats is at least a start and can\nhelp build models and alerts for what'll happen when the tables grow\nlarger with the current stats.\n\nAs for other planner inputs: table size is relatively easy to shim\nwith sparse files; cumulative statistics can be copied from a donor\nreplica if needed, and btree indexes only really really need to\ncontain their highest and lowest values (and need their height set\ncorrectly).\n\nKind regards,\n\nMatthias van de Meent\n\n\n",
"msg_date": "Tue, 23 Apr 2024 18:33:48 +0200",
"msg_from": "Matthias van de Meent <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": ">\n> I've heard of use cases where dumping stats without data would help\n> with production database planner debugging on a non-prod system.\n>\n\n\nSo far, I'm seeing these use cases:\n\n1. Binary upgrade. (schema: on, data: off, stats: on)\n2. Dump to file/dir and restore elsewhere. (schema: on, data: on, stats: on)\n3. Dump stats for one or more objects, either to directly apply those stats\nto a remote database, or to allow a developer to edit/experiment with those\nstats. (schema: off, data: off, stats: on)\n4. restore situations where stats are not wanted and/or not trusted\n(whatever: on, stats: off)\n\nCase #1 is handled via pg_upgrade and special case flags in pg_dump.\nCase #2 uses the default pg_dump options, so that's covered.\nCase #3 would require a --statistics-only option mutually exclusive with\n--data-only and --schema-only. Alternatively, I could reanimate the script\npg_export_statistics, but we'd end up duplicating a lot of filtering\noptions that pg_dump already has solved. Similarly, we may want server-side\nfunctions that generate the statements for us (pg_get_*_stats paired with\neach pg_set_*_stats)\nCase #4 is handled via --no-statistics.\n\n\nAttached is v19, which attempts to put table stats in SECTION_DATA and\nmatview/index stats in SECTION_POST_DATA. It's still failing one TAP test\n(004_pg_dump_parallel: parallel restore as inserts). I'm still unclear as\nto why using SECTION_NONE is a bad idea, but I'm willing to go along with\nDATA/POST_DATA, assuming we can make it work.",
"msg_date": "Wed, 24 Apr 2024 06:18:30 -0400",
"msg_from": "Corey Huinker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": "On Tue, Apr 23, 2024 at 06:33:48PM +0200, Matthias van de Meent wrote:\n> I've heard of use cases where dumping stats without data would help\n> with production database planner debugging on a non-prod system.\n> \n> Sure, some planner inputs would have to be taken into account too, but\n> having an exact copy of production stats is at least a start and can\n> help build models and alerts for what'll happen when the tables grow\n> larger with the current stats.\n> \n> As for other planner inputs: table size is relatively easy to shim\n> with sparse files; cumulative statistics can be copied from a donor\n> replica if needed, and btree indexes only really really need to\n> contain their highest and lowest values (and need their height set\n> correctly).\n\nIs it possible to prevent stats from being updated by autovacuum and\nother methods?\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n",
"msg_date": "Wed, 24 Apr 2024 15:31:49 -0400",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": "On Wed, 24 Apr 2024 at 21:31, Bruce Momjian <[email protected]> wrote:\n>\n> On Tue, Apr 23, 2024 at 06:33:48PM +0200, Matthias van de Meent wrote:\n> > I've heard of use cases where dumping stats without data would help\n> > with production database planner debugging on a non-prod system.\n> >\n> > Sure, some planner inputs would have to be taken into account too, but\n> > having an exact copy of production stats is at least a start and can\n> > help build models and alerts for what'll happen when the tables grow\n> > larger with the current stats.\n> >\n> > As for other planner inputs: table size is relatively easy to shim\n> > with sparse files; cumulative statistics can be copied from a donor\n> > replica if needed, and btree indexes only really really need to\n> > contain their highest and lowest values (and need their height set\n> > correctly).\n>\n> Is it possible to prevent stats from being updated by autovacuum\n\nYou can set autovacuum_analyze_threshold and *_scale_factor to\nexcessively high values, which has the effect of disabling autoanalyze\nuntil it has had similarly excessive tuple churn. But that won't\nguarantee autoanalyze won't run; that guarantee only exists with\nautovacuum = off.\n\n> and other methods?\n\nNo nice ways. AFAIK there is no command (or command sequence) that can\n\"disable\" only ANALYZE and which also guarantee statistics won't be\nupdated until ANALYZE is manually \"re-enabled\" for that table. An\nextension could maybe do this, but I'm not aware of any extension\npoints where this would hook into PostgreSQL in a nice way.\n\nYou can limit maintenance access on the table to only trusted roles\nthat you know won't go in and run ANALYZE for those tables, or even\nonly your superuser (so only they can run ANALYZE, and have them\npromise they won't). Alternatively, you can also constantly keep a\nlock on the table that conflicts with ANALYZE. The last few are just\nworkarounds though, and not all something I'd suggest running on a\nproduction database.\n\nKind regards,\n\nMatthias van de Meent\n\n\n",
"msg_date": "Wed, 24 Apr 2024 21:56:15 +0200",
"msg_from": "Matthias van de Meent <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": ">\n>\n> You can set autovacuum_analyze_threshold and *_scale_factor to\n> excessively high values, which has the effect of disabling autoanalyze\n> until it has had similarly excessive tuple churn. But that won't\n> guarantee autoanalyze won't run; that guarantee only exists with\n> autovacuum = off.\n>\n\nI'd be a bit afraid to set to those values so high, for fear that they\nwouldn't get reset when normal operations resumed, and nobody would notice\nuntil things got bad.\n\nv20 is attached. It resolves the dependency issue in v19, so while I'm\nstill unclear as to why we want it this way vs the simplicity of\nSECTION_NONE, I'm going to roll with it.\n\nNext up for question is how to handle --statistics-only or an equivalent.\nThe option would be mutually exclusive with --schema-only and --data-only,\nand it would be mildly incongruous if it didn't have a short option like\nthe others, so I'm suggested -P for Probablity / Percentile / ρ:\ncorrelation / etc.\n\nOne wrinkle with having three mutually exclusive options instead of two is\nthat the existing code was able to assume that one of the options being\ntrue meant that we could bail out of certain dumpXYZ() functions, and now\nthose tests have to compare against two, which makes me think we should add\nthree new DumpOptions that are the non-exclusive positives (yesSchema,\nyesData, yesStats) and set those in addition to the schemaOnly, dataOnly,\nand statsOnly flags. Thoughts?",
"msg_date": "Thu, 25 Apr 2024 23:27:08 -0400",
"msg_from": "Corey Huinker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": ">\n>\n> Next up for question is how to handle --statistics-only or an equivalent.\n> The option would be mutually exclusive with --schema-only and --data-only,\n> and it would be mildly incongruous if it didn't have a short option like\n> the others, so I'm suggested -P for Probablity / Percentile / ρ:\n> correlation / etc.\n>\n> One wrinkle with having three mutually exclusive options instead of two is\n> that the existing code was able to assume that one of the options being\n> true meant that we could bail out of certain dumpXYZ() functions, and now\n> those tests have to compare against two, which makes me think we should add\n> three new DumpOptions that are the non-exclusive positives (yesSchema,\n> yesData, yesStats) and set those in addition to the schemaOnly, dataOnly,\n> and statsOnly flags. Thoughts?\n>\n\nv21 attached.\n\n0001 is the same.\n\n0002 is a preparatory change to pg_dump introducing\nDumpOption/RestoreOption variables dumpSchema and dumpData. The current\ncode makes heavy use of the fact that schemaOnly and dataOnly are mutually\nexclusive and logically opposite. That will not be the case when\nstatisticsOnly is introduced, so I decided to add the new variables whose\nvalue is entirely derivative of the existing command flags, but resolves\nthe complexities of those interactions in one spot, as those complexities\nare about to jump with the new options.\n\n0003 is the statistics changes to pg_dump, adding the options -X /\n--statistics-only, and the derivative boolean statisticsOnly. The -P option\nis already used by pg_restore, so instead I chose -X because of the passing\nresemblance to Chi as in the chi-square statistics test makes it vaguely\nstatistics-ish. If someone has a better letter, I'm listening.\n\nWith that change, people should be able to use pg_dump -X --table=foo to\ndump existing stats for a table and its dependent indexes, and then tweak\nthose calls to do tuning work. Have fun with it. If this becomes a common\nuse-case then it may make sense to get functions to fetch\nrelation/attribute stats for a given relation, either as a formed SQL\nstatement or as the parameter values.",
"msg_date": "Mon, 6 May 2024 23:43:50 -0400",
"msg_from": "Corey Huinker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": "On Mon, 2024-05-06 at 23:43 -0400, Corey Huinker wrote:\n> \n> v21 attached.\n> \n> 0003 is the statistics changes to pg_dump, adding the options -X / --\n> statistics-only, and the derivative boolean statisticsOnly. The -P\n> option is already used by pg_restore, so instead I chose -X because\n> of the passing resemblance to Chi as in the chi-square statistics\n> test makes it vaguely statistics-ish. If someone has a better letter,\n> I'm listening.\n> \n> With that change, people should be able to use pg_dump -X --table=foo\n> to dump existing stats for a table and its dependent indexes, and\n> then tweak those calls to do tuning work. Have fun with it. If this\n> becomes a common use-case then it may make sense to get functions to\n> fetch relation/attribute stats for a given relation, either as a\n> formed SQL statement or as the parameter values.\n\nCan you explain what you did with the\nSECTION_NONE/SECTION_DATA/SECTION_POST_DATA over v19-v21 and why?\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Wed, 15 May 2024 17:02:57 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": ">\n> Can you explain what you did with the\n> SECTION_NONE/SECTION_DATA/SECTION_POST_DATA over v19-v21 and why?\n>\n\nInitially, I got things to work by having statistics import behave like\nCOMMENTs, which meant that they were run immediately after the\ntable/matview/index/constraint that created the pg_class/pg_attribute\nentries, but they could be suppressed with a --noX flag\n\nPer previous comments, it was suggested by others that:\n\n- having them in SECTION_NONE was a grave mistake\n- Everything that could belong in SECTION_DATA should, and the rest should\nbe in SECTION_POST_DATA\n- This would almost certainly require the statistics import commands to be\nTOC objects (one object per pg_class entry, not one object per function\ncall)\n\nTurning them into TOC objects was a multi-phase process.\n\n1. the TOC entries are generated with dependencies (the parent pg_class\nobject as well as the potential unique/pk constraint in the case of\nindexes), but no statements are generated (in case the stats are filtered\nout or the parent object is filtered out). This TOC entry must have\neverything we'll need to later generate the function calls. So far, that\ninformation is the parent name, parent schema, and relkind of the parent\nobject.\n\n2. The TOC entries get sorted by dependencies, and additional dependencies\nare added which enforce the PRE/DATA/POST boundaries. This is where knowing\nthe parent object's relkind is required, as that determines the DATA/POST\nsection.\n\n3. Now the TOC entry is able to stand on its own, and generate the\nstatements if they survive the dump/restore filters. Most of the later\nversions of the patch were efforts to get the objects to fall into the\nright PRE/DATA/POST sections, and the central bug was that the dependencies\npassed into ARCHIVE_OPTS were incorrect, as the dependent object passed in\nwas now the new TOC object, not the parent TOC object. Once that was\nresolved, things fell into place.\n\nCan you explain what you did with the\nSECTION_NONE/SECTION_DATA/SECTION_POST_DATA over v19-v21 and why?Initially, I got things to work by having statistics import behave like COMMENTs, which meant that they were run immediately after the table/matview/index/constraint that created the pg_class/pg_attribute entries, but they could be suppressed with a --noX flagPer previous comments, it was suggested by others that:- having them in SECTION_NONE was a grave mistake- Everything that could belong in SECTION_DATA should, and the rest should be in SECTION_POST_DATA- This would almost certainly require the statistics import commands to be TOC objects (one object per pg_class entry, not one object per function call)Turning them into TOC objects was a multi-phase process.1. the TOC entries are generated with dependencies (the parent pg_class object as well as the potential unique/pk constraint in the case of indexes), but no statements are generated (in case the stats are filtered out or the parent object is filtered out). This TOC entry must have everything we'll need to later generate the function calls. So far, that information is the parent name, parent schema, and relkind of the parent object.2. The TOC entries get sorted by dependencies, and additional dependencies are added which enforce the PRE/DATA/POST boundaries. This is where knowing the parent object's relkind is required, as that determines the DATA/POST section.3. Now the TOC entry is able to stand on its own, and generate the statements if they survive the dump/restore filters. 
Most of the later versions of the patch were efforts to get the objects to fall into the right PRE/DATA/POST sections, and the central bug was that the dependencies passed into ARCHIVE_OPTS were incorrect, as the dependent object passed in was now the new TOC object, not the parent TOC object. Once that was resolved, things fell into place.",
"msg_date": "Thu, 16 May 2024 05:25:58 -0400",
"msg_from": "Corey Huinker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": "On Thu, 2024-05-16 at 05:25 -0400, Corey Huinker wrote:\n> \n> Per previous comments, it was suggested by others that:\n> \n> - having them in SECTION_NONE was a grave mistake\n> - Everything that could belong in SECTION_DATA should, and the rest\n> should be in SECTION_POST_DATA\n\nI don't understand the gravity of the choice here: what am I missing?\n\nTo be clear: I'm not arguing against it, but I'd like to understand it\nbetter. Perhaps it has to do with the relationship between the sections\nand the dependencies?\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Thu, 16 May 2024 11:26:08 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": "On Thu, May 16, 2024 at 2:26 PM Jeff Davis <[email protected]> wrote:\n\n> On Thu, 2024-05-16 at 05:25 -0400, Corey Huinker wrote:\n> >\n> > Per previous comments, it was suggested by others that:\n> >\n> > - having them in SECTION_NONE was a grave mistake\n> > - Everything that could belong in SECTION_DATA should, and the rest\n> > should be in SECTION_POST_DATA\n>\n> I don't understand the gravity of the choice here: what am I missing?\n>\n> To be clear: I'm not arguing against it, but I'd like to understand it\n> better. Perhaps it has to do with the relationship between the sections\n> and the dependencies?\n>\n\nI'm with you, I don't understand the choice and would like to, but at the\nsame time it now works in the way others strongly suggested that it should,\nso I'm still curious about the why.\n\nThere were several people expressing interest in this patch at pgconf.dev,\nso I thought I'd post a rebase and give a summary of things to date.\n\nTHE INITIAL GOAL\n\nThe initial goal of this effort was to reduce upgrade downtimes by\neliminating the need for the vacuumdb --analyze-in-stages call that is\nrecommended (but often not done) after a pg_upgrade. The analyze-in-stages\nsteps is usually by far the longest part of a binary upgrade and is a\nsignificant part of a restore from dump, so eliminating this step will save\nusers time, and eliminate or greatly reduce a potential pitfall to\nupgrade...and thus reduce upgrade friction (read: excuses to not upgrade).\n\nTHE FUNCTIONS\n\nThese patches introduce two functions, pg_set_relation_stats() and\npg_set_attribute_stats(), which allow the caller to modify the statistics\nof any relation, provided that they own that relation or have maintainer\nprivilege.\n\nThe function pg_set_relation_stats looks like this:\n\nSELECT pg_set_relation_stats('stats_export_import.test'::regclass,\n 150000::integer,\n 'relpages', 17::integer,\n 'reltuples', 400.0::real,\n 'relallvisible', 4::integer);\n\nThe function takes an oid of the relation to have stats imported, a version\nnumber (SERVER_VERSION_NUM) for the source of the statistics, and then a\nseries of varargs organized as name-value pairs. Currently, three arg pairs\nare required to properly set (relpages, reltuples, and relallvisible). If\nall three are not present, the function will issue a warning, and the row\nwill not be updated.\n\nThe choice of varargs is a defensive one, basically ensuring that a\npgdump that includes statistics import calls will not fail on a future\nversion that does not have one or more of these values. The call itself\nwould fail to modify the relation row, but it wouldn't cause the whole\nrestore to fail. I'm personally not against having a fixed arg version of\nthis function, nor am I against having both at the same time, the varargs\nversion basically teeing up the fixed-param call appropriate for the\ndestination server version.\n\nThis function does an in-place update of the pg_class row to avoid bloat\npg_class, just like ANALYZE does. 
This means that this function call is\nNON-transactional.\n\nThe function pg_set_attribute_stats looks like this:\n\nSELECT pg_catalog.pg_set_attribute_stats(\n 'stats_export_import.test'::regclass,\n 'id'::name,\n false::boolean,\n 150000::integer,\n 'null_frac', 0.5::real,\n 'avg_width', 2::integer,\n 'n_distinct', -0.1::real,\n 'most_common_vals', '{2,1,3}'::text,\n 'most_common_freqs', '{0.3,0.25,0.05}'::real[]\n );\n\nLike the first function, it takes a relation oid and a source server\nversion though that is in the 4th position. It also takes the name of an\nattribute, and a boolean as to whether these stats are for inherited\nstatistics (true) or regular (false). Again what follows is a vararg list\nof name-value pairs, each name corresponding to an attribute of pg_stats,\nand expecting a value appropriate for said attribute of pg_stats. Note that\nANYARRAY values are passed in as text. This is done for a few reasons.\nFirst, if the attribute is an array type, then the most_common_elements\nvalue will be an array of that array type, and there is no way to represent\nthat in SQL (it instead gives a higher order array of the same base type).\nSecond, it allows us to import the values with a simple array_in() call.\nLast, it allows for situations where the type name changed from source\nsystem to destination (example: a schema-qualified extension type gets\nmoved to core).\n\nThere are lots of ways that this function call can go wrong. An invalid\nattribute name, an invalid parameter name in a name-value pair, invalid\ndata type of parameter being passed in the value of a name-value pair, or\ntype coercion errors in array_in() to name just a few. All of these errors\nresult in a warning and the import failing, but the function completes\nnormally. Internal typecasting and array_in are all done with the _safe()\nequivalents, and any such errors are re-emitted as warnings. The central\ngoal here is to not make a restore fail just because the statistics are\nwonky.\n\nCalls to pg_set_attribute_stats() are transactional. This wouldn't warrant\nmentioning if not for pg_set_relation_stats() being non-transactional.\n\nDUMP / RESTORE / UPGRADE\n\nThe code for pg_dump/restore/upgrade has been modified to allow for\nstatistics to be exported/imported by default. There are flags to prevent\nthis (--no-statistics) and there are flags to ONLY do statistics\n(--statistics-only) the utility of which will be discussed later.\n\npg_dump will make queries of the source database, adjusting the syntax to\nreflect the version of the source system. There is very little variance in\nthose queries, so it should be possible to query as far back as 9.2 and get\nusable stats. The output of these calls will be a series of SELECT\nstatements, each one making a call to either pg_set_relation_stats (one per\ntable/index/matview) or pg_set_attribute_stats (one per attribute that had\na matching pg_statistic row).\n\nThe positioning of these calls in the restore sequence was originally set\nup as SECTION_NONE, but it was strongly suggested that SECTION_DATA /\nSECTION_POST_DATA was the right spot instead, and that's where they\ncurrently reside.\n\nThe end result will be that the new database now has the stats identical\n(or at least close to) the source system. Those statistics might be good or\nbad, but they're almost certainly better than no stats at all. Even if they\nare bad, they will be overwritten by the next ANALYZE or autovacuum.\n\n\nWHAT IS NOT DONE\n\n1. 
Extended Statistics, which are considerably more complex than regular\nstats (stxdexprs is itself an array of pg_statistic rows) and thus more\ndifficult to express in a simple function call. They are also used fairly\nrarely in customer installations, so leaving them out of the v1 patch\nseemed like an easy trade-off.\n\n2. Any sort of validity checking beyond data-types. This was initially\nprovided, verifying that arrays values representing frequencies must be\nbetween 0.0 and 1.0, arrays that represent most common value frequencies\nmust be in monotonically non-increasing order, etc. but these were rejected\nas being overly complex, potentially rejecting valid stats, and getting in\nthe way of an other use I hadn't considered.\n\n3. Export functions. Strictly speaking we don't need them, but some\nuse-cases described below may make the case for including them.\n\nOTHER USES\n\nUsage of these functions is not restricted to upgrade/restore situations.\nThe most obvious use was to experiment with how the planner behaves when\none or more tables grow and/or skew. It is difficult to create a table with\n10 billion rows in it, but it's now trivial to create a table that says it\nhas 10 billion rows in it.\n\nThis can be taken a step further, and in a way I had not anticipated -\nactively stress-testing the planner by inserting wildly incorrect and/or\nnonsensical stats. In that sense, these functions are a fuzzing tool that\nhappens to make upgrades go faster.\n\nFUTURE PLANS\n\nIntegration with postgres_fdw is an obvious next step, allowing an ANALYZE\non a foreign table to, instead of asking for a remote row sample, to simply\nexport the stats of the remote table and import them into the foreign table.\n\nExtended Statistics.\n\nCURRENT PROGRESS\n\nI believe that all outstanding questions/request were addressed, and the\npatch is now back to needing a review.\n\nFOR YOUR CONSIDERATION\n\nRebase (current as of f04d1c1db01199f02b0914a7ca2962c531935717) attached.",
"msg_date": "Mon, 3 Jun 2024 23:34:51 -0400",
"msg_from": "Corey Huinker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": "v23:\n\nSplit pg_set_relation_stats into two functions: pg_set_relation_stats with\nnamed parameters like it had around v19 and pg_restore_relations_stats with\nthe variadic parameters it has had in more recent versions, which processes\nthe variadic parameters and then makes a call to pg_set_relation_stats.\n\nSplit pg_set_attribute_stats into two functions: pg_set_attribute_stats\nwith named parameters like it had around v19 and pg_restore_attribute_stats\nwith the variadic parameters it has had in more recent versions, which\nprocesses the variadic parameters and then makes a call to\npg_set_attribute_stats.\n\nThe intention here is that the named parameters signatures are easier for\nad-hoc use, while the variadic signatures are evergreen and thus ideal for\npg_dump/pg_upgrade.\n\nrebased to a0a5869a8598cdeae1d2f2d632038d26dcc69d19 (master as of early\nJuly 18)",
"msg_date": "Thu, 18 Jul 2024 02:09:26 -0400",
"msg_from": "Corey Huinker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": "On Thu, 2024-07-18 at 02:09 -0400, Corey Huinker wrote:\n> v23:\n> \n> Split pg_set_relation_stats into two functions: pg_set_relation_stats\n> with named parameters like it had around v19 and\n> pg_restore_relations_stats with the variadic parameters it has had in\n> more recent versions, which processes the variadic parameters and\n> then makes a call to pg_set_relation_stats.\n> \n> Split pg_set_attribute_stats into two functions:\n> pg_set_attribute_stats with named parameters like it had around v19\n> and pg_restore_attribute_stats with the variadic parameters it has\n> had in more recent versions, which processes the variadic parameters\n> and then makes a call to pg_set_attribute_stats.\n> \n> The intention here is that the named parameters signatures are easier\n> for ad-hoc use, while the variadic signatures are evergreen and thus\n> ideal for pg_dump/pg_upgrade.\n\nv23-0001:\n\n* I like the split for the reason you mention. I'm not 100% sure that\nwe need both, but from the standpoint of reviewing, it makes things\neasier. We can always remove one at the last minute if its found to be\nunnecessary. I also like the names.\n\n* Doc build error and malformatting.\n\n* I'm not certain that we want all changes to relation stats to be non-\ntransactional. Are there transactional use cases? Should it be an\noption? Should it be transactional for pg_set_relation_stats() but non-\ntransactional for pg_restore_relation_stats()?\n\n* The documentation for the pg_set_attribute_stats() still refers to\nupgrade scenarios -- shouldn't that be in the\npg_restore_attribute_stats() docs? I imagine the pg_set variant to be\nused for ad-hoc planner stuff rather than upgrades.\n\n* For the \"WARNING: stat names must be of type text\" I think we need an\nERROR instead. The calling convention of name/value pairs is broken and\nwe can't safely continue.\n\n* The huge list of \"else if (strcmp(statname, mc_freqs_name) == 0) ...\"\nseems wasteful and hard to read. I think we already discussed this,\nwhat was the reason we can't just use an array to map the arg name to\nan arg position type OID?\n\n* How much error checking did we decide is appropriate? Do we need to\ncheck that range_length_hist is always specified with range_empty_frac,\nor should we just call that the planner's problem if one is specified\nand the other not? Similarly, range stats for a non-range type.\n\n* I think most of the tests should be of pg_set_*_stats(). For\npg_restore_, we just want to know that it's translating the name/value\npairs reasonably well and throwing WARNINGs when appropriate. Then, for\npg_dump tests, it should exercise pg_restore_*_stats() more completely.\n\n* It might help to clarify which arguments are important (like\nn_distinct) vs not. I assume the difference is that it's a non-NULLable\ncolumn in pg_statistic.\n\n* Some arguments, like the relid, just seem absolutely required, and\nit's weird to just emit a WARNING and return false in that case. \n\n* To clarify: a return of \"true\" means all settings were successfully\napplied, whereas \"false\" means that some were applied and some were\nunrecognized, correct? Or does it also mean that some recognized\noptions may not have been applied?\n\n* pg_set_attribute_stats(): why initialize the output tuple nulls array\nto false? 
It seems like initializing it to true would be safer.\n\n* please use a better name for \"k\" and add some error checking to make\nsure it doesn't overrun the available slots.\n\n* the pg_statistic tuple is always completely replaced, but the way you\ncan call pg_set_attribute_stats() doesn't imply that -- calling\npg_set_attribute_stats(..., most_common_vals => ..., most_common_freqs\n=> ...) looks like it would just replace the most_common_vals+freqs and\nleave histogram_bounds as it was, but it actually clears\nhistogram_bounds, right? Should we make that work or should we document\nbetter that it doesn't?\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Fri, 19 Jul 2024 14:21:58 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": "> * Doc build error and malformatting.\n>\n\nLooking into it.\n\n\n> * I'm not certain that we want all changes to relation stats to be non-\n> transactional. Are there transactional use cases? Should it be an\n> option? Should it be transactional for pg_set_relation_stats() but non-\n> transactional for pg_restore_relation_stats()?\n>\n\nIt's non-transactional because that's how ANALYZE does it to avoid bloating\npg_class. We _could_ do it transactionally, but on restore we'd immediately\nhave a pg_class that was 50% bloat.\n\n\n>\n> * The documentation for the pg_set_attribute_stats() still refers to\n> upgrade scenarios -- shouldn't that be in the\n> pg_restore_attribute_stats() docs? I imagine the pg_set variant to be\n> used for ad-hoc planner stuff rather than upgrades.\n>\n\nNoted.\n\n\n>\n> * For the \"WARNING: stat names must be of type text\" I think we need an\n> ERROR instead. The calling convention of name/value pairs is broken and\n> we can't safely continue.\n>\n\nThey can't be errors, because any one error fails the whole pg_upgrade.\n\n\n> * The huge list of \"else if (strcmp(statname, mc_freqs_name) == 0) ...\"\n> seems wasteful and hard to read. I think we already discussed this,\n> what was the reason we can't just use an array to map the arg name to\n> an arg position type OID?\n>\n\nThat was my overreaction to the dislike that the P_argname enum got in\nprevious reviews.\n\nWe'd need an array of struct like\n\nargname (ex. \"mc_vals\")\nargtypeoid (one of: int, text, real, rea[])\nargtypename (name we want to call the argtypeoid (integer, text. real,\nreal[] about covers it).\nargpos (position in the arg list of the corresponding pg_set_ function\n\n\n>\n> * How much error checking did we decide is appropriate? Do we need to\n> check that range_length_hist is always specified with range_empty_frac,\n> or should we just call that the planner's problem if one is specified\n> and the other not? Similarly, range stats for a non-range type.\n>\n\nI suppose we can let that go, and leave incomplete stat pairs in there.\n\nThe big risk is that somebody packs the call with more than 5 statkinds,\nwhich would overflow the struct.\n\n\n> * I think most of the tests should be of pg_set_*_stats(). For\n> pg_restore_, we just want to know that it's translating the name/value\n> pairs reasonably well and throwing WARNINGs when appropriate. Then, for\n> pg_dump tests, it should exercise pg_restore_*_stats() more completely.\n>\n\nI was afraid you'd suggest that, in which case I'd break up the patch into\nthe pg_sets and the pg_restores.\n\n\n> * It might help to clarify which arguments are important (like\n> n_distinct) vs not. I assume the difference is that it's a non-NULLable\n> column in pg_statistic.\n>\n\nThere are NOT NULL stats...now. They might not be in the future. Does that\nchange your opinion?\n\n\n>\n> * Some arguments, like the relid, just seem absolutely required, and\n> it's weird to just emit a WARNING and return false in that case.\n\n\nAgain, we can't fail.Any one failure breaks pg_upgrade.\n\n\n> * To clarify: a return of \"true\" means all settings were successfully\n> applied, whereas \"false\" means that some were applied and some were\n> unrecognized, correct? Or does it also mean that some recognized\n> options may not have been applied?\n>\n\nTrue means \"at least some stats were applied. False means \"nothing was\nmodified\".\n\n\n> * pg_set_attribute_stats(): why initialize the output tuple nulls array\n> to false? 
It seems like initializing it to true would be safer.\n>\n\n+1\n\n\n>\n> * please use a better name for \"k\" and add some error checking to make\n> sure it doesn't overrun the available slots.\n>\n\nk was an inheritance from analzye.c, from whence the very first version was\ncribbed. No objection to renaming.\n\n\n\n> * the pg_statistic tuple is always completely replaced, but the way you\n> can call pg_set_attribute_stats() doesn't imply that -- calling\n> pg_set_attribute_stats(..., most_common_vals => ..., most_common_freqs\n> => ...) looks like it would just replace the most_common_vals+freqs and\n> leave histogram_bounds as it was, but it actually clears\n> histogram_bounds, right? Should we make that work or should we document\n> better that it doesn't?\n>\n\nThat would complicate things. How would we intentionally null-out one stat,\nwhile leaving others unchanged? However, this points out that I didn't\nre-instate the re-definition that applied the NULL defaults.",
"msg_date": "Fri, 19 Jul 2024 21:58:33 -0400",
"msg_from": "Corey Huinker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": "Attached is v24, incorporating Jeff's feedback - looping an arg data\nstructure rather than individually checking each param type being the\nbiggest of them.\n\nv23's part one has been broken into three patches:\n\n* pg_set_relation_stats\n* pg_set_attribute_stats\n* pg_restore_X_stats\n\nAnd the two pg_dump-related patches remain unchanged.\n\nI think this split is a net-positive for reviewability. The one drawback is\nthat there's a lot of redundancy in the regression tests now, much of which\ncan go away once we decide what other data problems we don't need to check.\n\n>",
"msg_date": "Mon, 22 Jul 2024 12:05:34 -0400",
"msg_from": "Corey Huinker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": "On Mon, 2024-07-22 at 12:05 -0400, Corey Huinker wrote:\n> Attached is v24, incorporating Jeff's feedback - looping an arg data\n> structure rather than individually checking each param type being the\n> biggest of them.\n> \n\nThank you for splitting up the patches more finely.\n\nv24-0001:\n\n * pg_set_relation_stats(): the warning: \"cannot export statistics\nprior to version 9.2\" doesn't make sense because the function is for\nimporting. Reword.\n\n * I really think there should be a transactional option, just another\nboolean, and if it has a default it should be true. This clearly has\nuse cases for testing plans, etc., and often transactions will be the\nright thing there. This should be a trivial code change, and it will\nalso be easier to document.\n\n * The return type is documented as 'void'? Please change to bool and\nbe clear about what true/false returns really mean. I think false means\n\"no updates happened at all, and a WARNING was printed indicating why\"\nwhereas true means \"all updates were applied successfully\".\n\n * An alternative would be to have an 'error_ok' parameter to say\nwhether to issue WARNINGs or ERRORs. I think we already discussed that\nand agreed on the boolean return, but I just want to confirm that this\nwas a conscious choice?\n\n * tests should be called stats_import.sql; there's no exporting going\non\n\n * Aside from the above comments and some other cleanup, I think this\nis a simple patch and independently useful. I am looking to commit this\none soon.\n\nv24-0002:\n\n * Documented return type is 'void'\n\n * I'm not totally sure what should be returned in the event that some\nupdates were applied and some not. I'm inclined to say that true should\nmean that all updates were applied -- otherwise it's hard to\nautomatically detect some kind of typo.\n\n * Can you describe your approach to error checking? What kinds of\nerrors are worth checking, and which should we just put into the\ncatalog and let the planner deal with?\n\n * I'd check stakindidx at the time that it's incremented rather than\nsumming boolean values cast to integers.\n\nv24-0003:\n\n * I'm not convinced that we should continue when a stat name is not\ntext. The argument for being lenient is that statistics may change over\ntime, and we might have to ignore something that can't be imported from\nan old version into a new version because it's either gone or the\nmeaning has changed too much. But that argument doesn't apply to a\nbogus call, where the name/value pairs get misaligned or something.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Mon, 22 Jul 2024 17:45:50 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": ">\n> * pg_set_relation_stats(): the warning: \"cannot export statistics\n> prior to version 9.2\" doesn't make sense because the function is for\n> importing. Reword.\n>\n\n+1\n\n\n> * I really think there should be a transactional option, just another\n> boolean, and if it has a default it should be true. This clearly has\n> use cases for testing plans, etc., and often transactions will be the\n> right thing there. This should be a trivial code change, and it will\n> also be easier to document.\n>\n\nFor it to have a default, the parameter would have to be at the end of the\nlist, and it's a parameter list that will grow in the future. And when that\nhappens we have a jumbled parameter list, which is fine if we only ever\ncall params by name, but I know some people won't do that. Which means it's\nup front right after `version`. Since `version` is already in there, and we\ncan't default that, I feel ok about moving it there, but alas no default.\n\nIf there was some way that the function could detect that it was in a\nbinary upgrade, then we could use that to determine if it should update\ninplace or transactionally.\n\n * The return type is documented as 'void'? Please change to bool and\n> be clear about what true/false returns really mean. I think false means\n> \"no updates happened at all, and a WARNING was printed indicating why\"\n> whereas true means \"all updates were applied successfully\".\n>\n\nGood point, that's a holdover.\n\n\n> * An alternative would be to have an 'error_ok' parameter to say\n> whether to issue WARNINGs or ERRORs. I think we already discussed that\n> and agreed on the boolean return, but I just want to confirm that this\n> was a conscious choice?\n>\n\nThat had been discussed as well. If we're adding parameters, then we could\nadd one for that too. It's making the function call progressively more\nunwieldy, but anyone who chooses to wield these on a regular basis can\ncertainly write a SQL wrapper function to reduce the function call to their\npresets, I suppose.\n\n\n> * tests should be called stats_import.sql; there's no exporting going\n> on\n>\n\nSigh. True.\n\n\n> * Aside from the above comments and some other cleanup, I think this\n> is a simple patch and independently useful. I am looking to commit this\n> one soon.\n>\n> v24-0002:\n>\n> * Documented return type is 'void'\n>\n> * I'm not totally sure what should be returned in the event that some\n> updates were applied and some not. I'm inclined to say that true should\n> mean that all updates were applied -- otherwise it's hard to\n> automatically detect some kind of typo.\n>\n\nMe either. Suggestions welcome.\n\nI suppose we could return two integers: number of stats input, and number\nof stats applied. But that could be confusing, as some parameter pairs form\none stat ( MCV, ELEM_MCV, etc).\n\nI suppose we could return a set of (param_name text, was_set boolean,\napplied boolean), without trying to organize them into their pairs, but\nthat would get really verbose.\n\nWe should decide on something soon, because we'd want relation stats to\nfollow a similar signature.\n\n\n>\n> * Can you describe your approach to error checking? What kinds of\n> errors are worth checking, and which should we just put into the\n> catalog and let the planner deal with?\n>\n\n1. 
When the parameters given make for something nonsensical Such as\nproviding most_common_elems with no corresponding most_common_freqs, then\nyou can't form an MCV stat, so you must throw out the one you did receive.\nThat gets a warning.\n\n2. When the data provided is antithetical to the type of statistic. For\ninstance, most array-ish parameters can't have NULL values in them (there\nare separate stats for nulls (null-frac, empty_frac). I don't remember if\ndoing so crashes the server or just creates a hard error, but it's a big\nno-no, and we have to reject such stats, which for now means a warning and\ntrying to carry on with the stats that remain.\n\n3. When the stats provided would overflow the data structure. We attack\nthis from two directions: First, we eliminate stat kinds that are\nmeaningless for the data type (scalars can't have most-common-elements,\nonly ranges can have range stats, etc), issue warnings for those and move\non with the remaining stats. If, however, the number of those statkinds\nexceeds the number of statkind slots available, then we give up because now\nwe'd have to CHOOSE which N-5 stats to ignore, and the caller is clearly\njust having fun with us.\n\nWe let the planner have fun with other error-like things:\n\n1. most-common-element arrays where the elements are not sorted per spec.\n\n2. frequency histograms where the numbers are not monotonically\nnon-increasing per spec.\n\n3. frequency histograms that have corresponding low bound and high bound\nvalues embedded in the array, and the other values in that array must be\nbetween the low-high.\n\n\n>\n> * I'd check stakindidx at the time that it's incremented rather than\n> summing boolean values cast to integers.\n>\n\nWhich means that we're checking that and potentially raising the same error\nin 3-4 places (and growing, unless we raise the max slots), rather than 1.\nThat struck me as worse.\n\n\n>\n> v24-0003:\n>\n> * I'm not convinced that we should continue when a stat name is not\n> text. The argument for being lenient is that statistics may change over\n> time, and we might have to ignore something that can't be imported from\n> an old version into a new version because it's either gone or the\n> meaning has changed too much. But that argument doesn't apply to a\n> bogus call, where the name/value pairs get misaligned or something.\n>\n\nI agree with that.",
"msg_date": "Tue, 23 Jul 2024 00:20:32 -0400",
"msg_from": "Corey Huinker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": "Giving the parameter lists more thought, the desire for a return code more\ngranular than true/false/null, and the likelihood that each function will\ninevitably get more parameters both stats and non-stats, I'm proposing the\nfollowing:\n\nTwo functions:\n\npg_set_relation_stats(\n out schemaname name,\n out relname name,\n out row_written boolean,\n out params_rejected text[],\n kwargs any[]) RETURNS RECORD\n\nand\n\npg_set_attribute_stats(\n out schemaname name,\n out relname name,\n out inherited bool,\n out row_written boolean,\n out params_accepted text[],\n out params_rejected text[],\n kwargs any[]) RETURNS RECORD\n\nThe leading OUT parameters tell us the rel/attribute/inh affected (if any),\nand which params had to be rejected for whatever reason. The kwargs is the\nvariadic key-value pairs that we were using for all stat functions, but now\nwe will be using it for all parameters, both statistics and control, the\ncontrol parameters will be:\n\nrelation - the oid of the relation\nattname - the attribute name (does not apply for relstats)\ninherited - true false for attribute stats, defaults false, does not apply\nfor relstats\nwarnings, boolean, if supplied AND set to true, then all ERROR that can be\nstepped down to WARNINGS will be. This is \"binary upgrade mode\".\nversion - the numeric version (a la PG_VERSION_NUM) of the statistics\ngiven. If NULL or omitted assume current PG_VERSION_NUM of server.\nactual stats columns.\n\nThis allows casual users to set only the params they want for their needs,\nand get proper errors, while pg_upgrade can set\n\n'warnings', 'true', 'version', 120034\n\nand get the upgrade behavior we need.\n\n\n\n\n\n\n\n\n\n\n\n and pg_set_attribute_stats.\n pg_set_relation_stats(out schemaname name, out relname name,, out\nrow_written boolean, out params_entered int, out params_accepted int,\nkwargs any[])\n\nGiving the parameter lists more thought, the desire for a return code more granular than true/false/null, and the likelihood that each function will inevitably get more parameters both stats and non-stats, I'm proposing the following:Two functions:pg_set_relation_stats( out schemaname name, out relname name, out row_written boolean, out params_rejected text[], kwargs any[]) RETURNS RECORDand pg_set_attribute_stats( out schemaname name, out relname name, out inherited bool, out row_written boolean, out params_accepted text[], out params_rejected text[], kwargs any[]) RETURNS RECORDThe leading OUT parameters tell us the rel/attribute/inh affected (if any), and which params had to be rejected for whatever reason. The kwargs is the variadic key-value pairs that we were using for all stat functions, but now we will be using it for all parameters, both statistics and control, the control parameters will be:relation - the oid of the relationattname - the attribute name (does not apply for relstats)inherited - true false for attribute stats, defaults false, does not apply for relstatswarnings, boolean, if supplied AND set to true, then all ERROR that can be stepped down to WARNINGS will be. This is \"binary upgrade mode\".version - the numeric version (a la PG_VERSION_NUM) of the statistics given. If NULL or omitted assume current PG_VERSION_NUM of server.actual stats columns.This allows casual users to set only the params they want for their needs, and get proper errors, while pg_upgrade can set 'warnings', 'true', 'version', 120034and get the upgrade behavior we need. and pg_set_attribute_stats. 
pg_set_relation_stats(out schemaname name, out relname name,, out row_written boolean, out params_entered int, out params_accepted int, kwargs any[])",
"msg_date": "Tue, 23 Jul 2024 17:48:57 -0400",
"msg_from": "Corey Huinker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": ">\n>\n> and pg_set_attribute_stats.\n> pg_set_relation_stats(out schemaname name, out relname name,, out\n> row_written boolean, out params_entered int, out params_accepted int,\n> kwargs any[])\n>\n>\nOops, didn't hit undo fast enough. Disregard this last bit.\n\n and pg_set_attribute_stats. pg_set_relation_stats(out schemaname name, out relname name,, out row_written boolean, out params_entered int, out params_accepted int, kwargs any[]) Oops, didn't hit undo fast enough. Disregard this last bit.",
"msg_date": "Tue, 23 Jul 2024 17:50:17 -0400",
"msg_from": "Corey Huinker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": "On Tue, 2024-07-23 at 17:48 -0400, Corey Huinker wrote:\n> Two functions:\n\nI see that you moved back to a combination function to serve both the\n\"restore\" use case as well as the \"ad-hoc stats hacking\" use case.\n\nThe \"restore\" use case is the primary point of your patch, and that\nshould be as simple and future-proof as possible. The parameters should\nbe name/value pairs and there shouldn't be any \"control\" parameters --\nit's not the job of pg_dump to specify whether the restore should be\ntransactional or in-place, it should just output the necessary stats.\n\nThat restore function might be good enough to satisfy the \"ad-hoc stats\nhacking\" use case as well, but I suspect we want slightly different\nbehavior. Specifically, I think we'd want the updates to be\ntransactional rather than in-place, or at least optional.\n\n> The leading OUT parameters tell us the rel/attribute/inh affected (if\n> any), and which params had to be rejected for whatever reason. The\n> kwargs is the variadic key-value pairs that we were using for all\n> stat functions, but now we will be using it for all parameters, both\n> statistics and control, the control parameters will be:\n\nI don't like the idea of mixing statistics and control parameters in\nthe same list.\n\nI do like the idea of returning a set, but I think it should be the\npositive set (effectively a representation of what is now in the\npg_stats view) and any ignored settings would be output as WARNINGs.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Thu, 25 Jul 2024 12:25:28 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": ">\n> The \"restore\" use case is the primary point of your patch, and that\n> should be as simple and future-proof as possible. The parameters should\n> be name/value pairs and there shouldn't be any \"control\" parameters --\n> it's not the job of pg_dump to specify whether the restore should be\n> transactional or in-place, it should just output the necessary stats.\n>\n> That restore function might be good enough to satisfy the \"ad-hoc stats\n> hacking\" use case as well, but I suspect we want slightly different\n> behavior. Specifically, I think we'd want the updates to be\n> transactional rather than in-place, or at least optional.\n>\n\nPoint well taken.\n\nBoth function pairs now call a generic internal function.\n\nWhich is to say that pg_set_relation_stats and pg_restore_relation_stats\nboth accept parameters in their own way, and both call\nan internal function relation_statistics_update(), each with their own\ndefaults.\n\npg_set_relation_stats always leaves \"version\" NULL, does transactional\nupdates, and treats any data quality issue as an ERROR. This is is in line\nwith a person manually tweaking stats to check against a query to see if\nthe plan changes.\n\npg_restore_relation_stats does in-place updates, and steps down all errors\nto warnings. The stats may not write, but at least it won't fail the\npg_upgrade for you.\n\npg_set_attribute_stats is error-maximalist like pg_set_relation_stats.\npg_restore_attribute_stats never had an in-place option to begin with.\n\n\n\n\n>\n> > The leading OUT parameters tell us the rel/attribute/inh affected (if\n> > any), and which params had to be rejected for whatever reason. The\n> > kwargs is the variadic key-value pairs that we were using for all\n> > stat functions, but now we will be using it for all parameters, both\n> > statistics and control, the control parameters will be:\n>\n> I don't like the idea of mixing statistics and control parameters in\n> the same list.\n>\n\nThere's no way around it, at least now we need never worry about a\nconfusing order for the parameters in the _restore_ functions because they\ncan now be in any order you like. But that speaks to another point: there\nis no \"you\" in using the restore functions, those function calls will\nalmost exclusively be generated by pg_dump and we can all live rich and\nproductive lives never having seen one written down. 
I kid, but they're\nactually not that gross.\n\nHere is a -set function taken from the regression tests:\n\nSELECT pg_catalog.pg_set_attribute_stats(\n relation => 'stats_import.test'::regclass::oid,\n attname => 'arange'::name,\n inherited => false::boolean,\n null_frac => 0.5::real,\n avg_width => 2::integer,\n n_distinct => -0.1::real,\n range_empty_frac => 0.5::real,\n range_length_histogram => '{399,499,Infinity}'::text\n );\n pg_set_attribute_stats\n------------------------\n\n(1 row)\n\nand here is a restore function\n\n-- warning: mcv cast failure\nSELECT *\nFROM pg_catalog.pg_restore_attribute_stats(\n 'relation', 'stats_import.test'::regclass::oid,\n 'attname', 'id'::name,\n 'inherited', false::boolean,\n 'version', 150000::integer,\n 'null_frac', 0.5::real,\n 'avg_width', 2::integer,\n 'n_distinct', -0.4::real,\n 'most_common_vals', '{2,four,3}'::text,\n 'most_common_freqs', '{0.3,0.25,0.05}'::real[]\n );\nWARNING: invalid input syntax for type integer: \"four\"\n row_written | stats_applied | stats_rejected\n | params_rejected\n-------------+----------------------------------+--------------------------------------+-----------------\n t | {null_frac,avg_width,n_distinct} |\n{most_common_vals,most_common_freqs} |\n(1 row)\n\nThere's a few things going on here:\n\n1. An intentionally bad, impossible to write, value was put in\n'most_common_vals'. 'four' cannot cast to integer, so the value fails, and\nwe get a warning\n2. Because most_common_values failed, we can no longer construct a legit\nSTAKIND_MCV, so we have to throw out most_common_freqs with it.\n3. Those failures aren't enough to prevent us from writing the other stats,\nso we write the record, and report the row written, the stats we could\nwrite, the stats we couldn't, and a list of other parameters we entered\nthat didn't make sense and had to be rejected (empty).\n\nOverall, I'd say the format is on the pedantic side, but it's far from\nunreadable, and mixing control parameters (version) with stats parameters\nisn't that big a deal.\n\n\nI do like the idea of returning a set, but I think it should be the\n> positive set (effectively a representation of what is now in the\n> pg_stats view) and any ignored settings would be output as WARNINGs.\n>\n\nDisplaying the actual stats in pg_stats could get very, very big. So I\nwouldn't recommend that.\n\nWhat do you think of the example presented earlier?\n\nAttached is v25.\n\nKey changes:\n- Each set/restore function pair now each call a common function that does\nthe heavy lifting, and the callers mostly marshall parameters into the\nright spot and form the result set (really just one row).\n- The restore functions now have all parameters passed in via a variadic\nany[].\n- the set functions now error out on just about any discrepancy, and do not\nhave a result tuple.\n- test cases simplified a bit. There's still a lot of them, and I think\nthat's a good thing.\n- Documentation to reflect significant reorganization.\n- pg_dump modified to generate new function signatures.",
"msg_date": "Sat, 27 Jul 2024 21:08:44 -0400",
"msg_from": "Corey Huinker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": "On Sat, 2024-07-27 at 21:08 -0400, Corey Huinker wrote:\n> \n> > I don't like the idea of mixing statistics and control parameters\n> > in\n> > the same list.\n> > \n> \n> \n> There's no way around it, at least now we need never worry about a\n> confusing order for the parameters in the _restore_ functions because\n> they can now be in any order you like.\n\nPerhaps I was not precise enough when I said \"control\" parameters.\nMainly what I was worried about is trying to take parameters that\ncontrol things like transaction behavior (in-place vs mvcc), and\npg_dump should not be specifying that kind of thing. A parameter like\n\"version\" is specified by pg_dump anyway, so it's probably fine the way\nyou've done it.\n\n> SELECT pg_catalog.pg_set_attribute_stats(\n> relation => 'stats_import.test'::regclass::oid,\n> attname => 'arange'::name,\n> inherited => false::boolean,\n> null_frac => 0.5::real,\n> avg_width => 2::integer,\n> n_distinct => -0.1::real,\n> range_empty_frac => 0.5::real,\n> range_length_histogram => '{399,499,Infinity}'::text\n> );\n> pg_set_attribute_stats \n> ------------------------\n> \n> (1 row)\n\nI like it.\n\n> and here is a restore function\n> \n> -- warning: mcv cast failure\n> SELECT *\n> FROM pg_catalog.pg_restore_attribute_stats(\n> 'relation', 'stats_import.test'::regclass::oid,\n> 'attname', 'id'::name,\n> 'inherited', false::boolean,\n> 'version', 150000::integer,\n> 'null_frac', 0.5::real,\n> 'avg_width', 2::integer,\n> 'n_distinct', -0.4::real,\n> 'most_common_vals', '{2,four,3}'::text,\n> 'most_common_freqs', '{0.3,0.25,0.05}'::real[]\n> );\n> WARNING: invalid input syntax for type integer: \"four\"\n> row_written | stats_applied | \n> stats_rejected | params_rejected \n> -------------+----------------------------------+--------------------\n> ------------------+-----------------\n> t | {null_frac,avg_width,n_distinct} |\n> {most_common_vals,most_common_freqs} | \n> (1 row)\n\nI think I like this, as well, except for the return value, which seems\nlike too much information and a bit over-engineered. Can we simplify it\nto what's actually going to be used by pg_upgrade and other tools?\n\n> Attached is v25.\n\nI believe 0001 and 0002 are in good shape API-wise, and I can start\ngetting those committed. I will try to clean up the code in the\nprocess.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Thu, 01 Aug 2024 23:44:54 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": ">\n>\n> > WARNING: invalid input syntax for type integer: \"four\"\n> > row_written | stats_applied |\n> > stats_rejected | params_rejected\n> > -------------+----------------------------------+--------------------\n> > ------------------+-----------------\n> > t | {null_frac,avg_width,n_distinct} |\n> > {most_common_vals,most_common_freqs} |\n> > (1 row)\n>\n> I think I like this, as well, except for the return value, which seems\n> like too much information and a bit over-engineered. Can we simplify it\n> to what's actually going to be used by pg_upgrade and other tools?\n>\n\npg_upgrade currently won't need any of it, it currently does nothing when a\nstatistics import fails. But it could do *something* based on this\ninformation. For example, we might have an option\n--analyze-tables-that-have-a-statistics-import-failure that analyzes tables\nthat have at least one statistics that didn't import. For instance,\npostgres_fdw may try to do stats import first, and if that fails fall back\nto a remote table sample.\n\nWe could do other things. It seems a shame to just throw away this\ninformation when it could potentially be used in the future.\n\n\n>\n> > Attached is v25.\n>\n> I believe 0001 and 0002 are in good shape API-wise, and I can start\n> getting those committed. I will try to clean up the code in the\n> process.\n>\n\n:)\n\n> WARNING: invalid input syntax for type integer: \"four\"\n> row_written | stats_applied | \n> stats_rejected | params_rejected \n> -------------+----------------------------------+--------------------\n> ------------------+-----------------\n> t | {null_frac,avg_width,n_distinct} |\n> {most_common_vals,most_common_freqs} | \n> (1 row)\n\nI think I like this, as well, except for the return value, which seems\nlike too much information and a bit over-engineered. Can we simplify it\nto what's actually going to be used by pg_upgrade and other tools?pg_upgrade currently won't need any of it, it currently does nothing when a statistics import fails. But it could do *something* based on this information. For example, we might have an option --analyze-tables-that-have-a-statistics-import-failure that analyzes tables that have at least one statistics that didn't import. For instance, postgres_fdw may try to do stats import first, and if that fails fall back to a remote table sample.We could do other things. It seems a shame to just throw away this information when it could potentially be used in the future. \n\n> Attached is v25.\n\nI believe 0001 and 0002 are in good shape API-wise, and I can start\ngetting those committed. I will try to clean up the code in the\nprocess.:)",
"msg_date": "Sun, 4 Aug 2024 01:09:40 -0400",
"msg_from": "Corey Huinker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": "On Sat, 2024-07-27 at 21:08 -0400, Corey Huinker wrote:\n> \n> Attached is v25.\n\nI attached new versions of 0001 and 0002. Still working on them, so\nthese aren't final.\n\nv25j-0001:\n\n * There seems to be confusion between the relation for which we are\nupdating the stats, and pg_class. Permissions and ShareUpdateExclusive\nshould be taken on the former, not the latter. For consistency with\nvac_update_relstats(), RowExclusiveLock should be fine on pg_class.\n * Lots of unnecessary #includes were removed.\n * I refactored substantially to do basic checks in the SQL function\npg_set_relation_stats() and make calling the internal function easier.\nSimilar refactoring might not work for pg_set_attribute_stats(), but\nthat's OK.\n * You don't need to declare the SQL function signatures. They're\nautogenerated from pg_proc.dat into fmgrprotos.h.\n * I removed the inplace stuff for this patch because there's no\ncoverage for it and it can be easily added back in 0003.\n * I renamed the file to import_stats.c. Annoying to rebase, I know,\nbut better now than later.\n\nv25j-0002:\n\n * I just did some minor cleanup on the #includes and rebased it. I\nstill need to look in more detail.\n\nRegards,\n\tJeff Davis",
"msg_date": "Thu, 08 Aug 2024 18:32:23 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": "On Sun, 2024-08-04 at 01:09 -0400, Corey Huinker wrote:\n> \n> > I believe 0001 and 0002 are in good shape API-wise, and I can start\n> > getting those committed. I will try to clean up the code in the\n> > process.\n\nAttached v26j.\n\nI'm slowly refactoring it and rediscovering some of the interesting\ncorners in deriving the right information to store the stats. There's\nstill a ways to go, though. The error paths could also use some work.\n\nRegards,\n\tJeff Davis",
"msg_date": "Thu, 15 Aug 2024 01:57:03 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": "On Thu, 2024-08-15 at 01:57 -0700, Jeff Davis wrote:\n> On Sun, 2024-08-04 at 01:09 -0400, Corey Huinker wrote:\n> > \n> > > I believe 0001 and 0002 are in good shape API-wise, and I can\n> > > start\n> > > getting those committed. I will try to clean up the code in the\n> > > process.\n> \n> Attached v26j.\n\nI did a lot of refactoring, and it's starting to take the shape I had\nin mind. Some of it is surely just style preference, but I think it\nreads more nicely and I caught a couple bugs along the way. The\nfunction attribute_statsitics_update() is significantly shorter. (Thank\nyou for a good set of tests, by the way, which sped up the refactoring\nprocess.)\n\nAttached v27j.\n\nQuestions:\n\n * Remind me why the new stats completely replace the new row, rather\nthan updating only the statistic kinds that are specified?\n * I'm not sure what the type_is_scalar() function was doing before,\nbut I just removed it. If it can't find the element type, then it skips\nover the kinds that require it.\n * I introduced some hard errors. These happen when it can't find the\ntable, or the attribute, or doesn't have permissions. I don't see any\nreason to demote those to a WARNING. Even for the restore case,\nanalagous errors happen for COPY, etc.\n * I'm still sorting through some of the type info derivations. I\nthink we need better explanations about why it's doing exactly the\nthings it's doing, e.g. for tsvector and multiranges.\n\nRegards,\n\tJeff Davis",
"msg_date": "Thu, 15 Aug 2024 17:35:24 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": ">\n>\n> function attribute_statsitics_update() is significantly shorter. (Thank\n> you for a good set of tests, by the way, which sped up the refactoring\n> process.)\n>\n\nyw\n\n\n> * Remind me why the new stats completely replace the new row, rather\n> than updating only the statistic kinds that are specified?\n>\n\nbecause:\n- complexity\n- we would then need a mechanism to then tell it to *delete* a stakind\n- we'd have to figure out how to reorder the remaining stakinds, or spend\neffort finding a matching stakind in the existing row to know to replace it\n- \"do what analyze does\" was an initial goal and as a result many test\ncases directly compared pg_statistic rows from an original table to an\nempty clone table to see if the \"copy\" had fidelity.\n\n\n> * I'm not sure what the type_is_scalar() function was doing before,\n> but I just removed it. If it can't find the element type, then it skips\n> over the kinds that require it.\n>\n\nthat may be sufficient,\n\n\n> * I introduced some hard errors. These happen when it can't find the\n> table, or the attribute, or doesn't have permissions. I don't see any\n> reason to demote those to a WARNING. Even for the restore case,\n> analagous errors happen for COPY, etc.\n>\n\nI can accept that reasoning.\n\n\n> * I'm still sorting through some of the type info derivations. I\n> think we need better explanations about why it's doing exactly the\n> things it's doing, e.g. for tsvector and multiranges.\n\n\nI don't have the specifics of each, but any such cases were derived from\nsimilar behaviors in the custom typanalyze functions, and the lack of a\ncustom typanalyze function for a given type was taken as evidence that the\ntype was adequately handled by the default rules. I can see that this is an\nargument for having a second stats-specific custom typanalyze function for\ndatatypes that need them, but I wasn't ready to go that far myself.\n\nfunction attribute_statsitics_update() is significantly shorter. (Thank\nyou for a good set of tests, by the way, which sped up the refactoring\nprocess.)yw * Remind me why the new stats completely replace the new row, rather\nthan updating only the statistic kinds that are specified?because:- complexity- we would then need a mechanism to then tell it to *delete* a stakind- we'd have to figure out how to reorder the remaining stakinds, or spend effort finding a matching stakind in the existing row to know to replace it- \"do what analyze does\" was an initial goal and as a result many test cases directly compared pg_statistic rows from an original table to an empty clone table to see if the \"copy\" had fidelity. \n * I'm not sure what the type_is_scalar() function was doing before,\nbut I just removed it. If it can't find the element type, then it skips\nover the kinds that require it.that may be sufficient, \n * I introduced some hard errors. These happen when it can't find the\ntable, or the attribute, or doesn't have permissions. I don't see any\nreason to demote those to a WARNING. Even for the restore case,\nanalagous errors happen for COPY, etc.I can accept that reasoning. \n * I'm still sorting through some of the type info derivations. I\nthink we need better explanations about why it's doing exactly the\nthings it's doing, e.g. 
for tsvector and multiranges.I don't have the specifics of each, but any such cases were derived from similar behaviors in the custom typanalyze functions, and the lack of a custom typanalyze function for a given type was taken as evidence that the type was adequately handled by the default rules. I can see that this is an argument for having a second stats-specific custom typanalyze function for datatypes that need them, but I wasn't ready to go that far myself.",
"msg_date": "Thu, 15 Aug 2024 20:53:37 -0400",
"msg_from": "Corey Huinker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": "On Thu, 2024-08-15 at 20:53 -0400, Corey Huinker wrote:\n> \n> > * Remind me why the new stats completely replace the new row,\n> > rather\n> > than updating only the statistic kinds that are specified?\n> \n> because:\n> - complexity\n\nI don't think it significantly impacts the overall complexity. We have\na ShareUpdateExclusiveLock on the relation, so there's no concurrency\nto deal with, and an upsert operation is not many more lines of code.\n\n> - we would then need a mechanism to then tell it to *delete* a\n> stakind\n\nThat sounds useful regardless. I have introduced pg_clear_*_stats()\nfunctions.\n\n> - we'd have to figure out how to reorder the remaining stakinds, or\n> spend effort finding a matching stakind in the existing row to know\n> to replace it\n\nRight. I initialized the values/nulls arrays based on the existing\ntuple, if any, and created a set_stats_slot() function that searches\nfor either a matching stakind or the first empty slot.\n\n> - \"do what analyze does\" was an initial goal and as a result many\n> test cases directly compared pg_statistic rows from an original table\n> to an empty clone table to see if the \"copy\" had fidelity.\n\nCan't we just clear the stats first to achieve the same effect?\n\n\nI have attached version 28j as one giant patch covering what was\npreviously 0001-0003. It's a bit rough (tests in particular need some\nwork), but it implelements the logic to replace only those values\nspecified rather than the whole tuple.\n\nAt least for the interactive \"set\" variants of the functions, I think\nit's an improvement. It feels more natural to just change one stat\nwithout wiping out all the others. I realize a lot of the statistics\ndepend on each other, but the point is not to replace ANALYZE, the\npoint is to experiment with planner scenarios. What do others think?\n\nFor the \"restore\" variants, I'm not sure it matters a lot because the\nstats will already be empty. If it does matter, we could pretty easily\ndefine the \"restore\" variants to wipe out existing stats when loading\nthe table, though I'm not sure if that's a good thing or not.\n\nI also made more use of FunctionCallInfo structures to communicate\nbetween functions rather than huge parameter lists. I believe that\nreduced the line count substantially, and made it easier to transform\nthe argument pairs in the \"restore\" variants into the positional\narguments for the \"set\" variants.\n\nRegards,\n\tJeff Davis",
"msg_date": "Fri, 23 Aug 2024 13:49:52 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Statistics Import and Export"
},
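The slot-replacement behaviour described in the message above — reuse a slot that already holds the same stakind, otherwise take the first empty one — can be modelled in a few lines of standalone C. This is only a toy sketch of the idea, assuming stakind 0 marks an unused slot as in pg_statistic; find_stats_slot is an invented name and this is not the patch's actual set_stats_slot() code.

    #include <stdio.h>

    #define STATISTIC_NUM_SLOTS 5          /* as in pg_statistic */

    /*
     * Pick the slot for a given stakind: prefer a slot that already
     * holds that kind (so the new data replaces it), otherwise the
     * first empty slot.  Returns -1 if the row is full.
     */
    static int
    find_stats_slot(const int stakinds[STATISTIC_NUM_SLOTS], int stakind)
    {
        int first_empty = -1;

        for (int i = 0; i < STATISTIC_NUM_SLOTS; i++)
        {
            if (stakinds[i] == stakind)
                return i;                   /* replace in place */
            if (stakinds[i] == 0 && first_empty < 0)
                first_empty = i;            /* remember first free slot */
        }
        return first_empty;
    }

    int
    main(void)
    {
        int stakinds[STATISTIC_NUM_SLOTS] = {1, 2, 0, 0, 0};

        printf("kind 2 goes to slot %d\n", find_stats_slot(stakinds, 2)); /* 1 */
        printf("kind 5 goes to slot %d\n", find_stats_slot(stakinds, 5)); /* 2 */
        return 0;
    }

Keeping that search in one helper is what lets the upsert-style pg_set_* functions avoid both gaps in the stakind array and duplicate kinds, the tidiness concerns raised earlier in the thread.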
{
"msg_contents": "On Sat, Aug 24, 2024 at 4:50 AM Jeff Davis <[email protected]> wrote:\n>\n>\n> I have attached version 28j as one giant patch covering what was\n> previously 0001-0003. It's a bit rough (tests in particular need some\n> work), but it implelements the logic to replace only those values\n> specified rather than the whole tuple.\n>\nhi.\nI did some review for v28j\n\ngit am shows some whitespace error.\n\n\n+extern Datum pg_set_relation_stats(PG_FUNCTION_ARGS);\n+extern Datum pg_set_attribute_stats(PG_FUNCTION_ARGS);\nis unnecessary?\n\n\n+ <entry role=\"func_table_entry\">\n+ <para role=\"func_signature\">\n+ <indexterm>\n+ <primary>pg_set_relation_stats</primary>\n+ </indexterm>\n+ <function>pg_set_relation_stats</function> (\n+ <parameter>relation</parameter> <type>regclass</type>\n+ <optional>, <parameter>relpages</parameter>\n<type>integer</type></optional>\n+ <optional>, <parameter>reltuples</parameter>\n<type>real</type></optional>\n+ <optional>, <parameter>relallvisible</parameter>\n<type>integer</type></optional> )\n+ <returnvalue>boolean</returnvalue>\n+ </para>\n+ <para>\n+ Updates table-level statistics for the given relation to the\n+ specified values. The parameters correspond to columns in <link\n+ linkend=\"catalog-pg-class\"><structname>pg_class</structname></link>.\nUnspecified\n+ or <literal>NULL</literal> values leave the setting\n+ unchanged. Returns <literal>true</literal> if a change was made;\n+ <literal>false</literal> otherwise.\n+ </para>\nare these <optional> flags wrong? there is only one function currently:\npg_set_relation_stats(relation regclass, relpages integer, reltuples\nreal, relallvisible integer)\ni think you want\npg_set_relation_stats(relation regclass, relpages integer default\nnull, reltuples real default null, relallvisible integer default null)\nwe can add following in src/backend/catalog/system_functions.sql:\n\nselect * from pg_set_relation_stats('emp'::regclass);\nCREATE OR REPLACE FUNCTION\n pg_set_relation_stats(\n relation regclass,\n relpages integer default null,\n reltuples real default null,\n relallvisible integer default null)\nRETURNS bool\nLANGUAGE INTERNAL\nCALLED ON NULL INPUT VOLATILE\nAS 'pg_set_relation_stats';\n\n\ntypedef enum ...\nneed to add src/tools/pgindent/typedefs.list\n\n\n+/*\n+ * Check that array argument is one dimensional with no NULLs.\n+ *\n+ * If not, emit at elevel, and set argument to NULL in fcinfo.\n+ */\n+static void\n+check_arg_array(FunctionCallInfo fcinfo, struct arginfo *arginfo,\n+ int argnum, int elevel)\n+{\n+ ArrayType *arr;\n+\n+ if (PG_ARGISNULL(argnum))\n+ return;\n+\n+ arr = DatumGetArrayTypeP(PG_GETARG_DATUM(argnum));\n+\n+ if (ARR_NDIM(arr) != 1)\n+ {\n+ ereport(elevel,\n+ (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n+ errmsg(\"\\\"%s\\\" cannot be a multidimensional array\",\n+ arginfo[argnum].argname)));\n+ fcinfo->args[argnum].isnull = true;\n+ }\n+\n+ if (array_contains_nulls(arr))\n+ {\n+ ereport(elevel,\n+ (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n+ errmsg(\"\\\"%s\\\" array cannot contain NULL values\",\n+ arginfo[argnum].argname)));\n+ fcinfo->args[argnum].isnull = true;\n+ }\n+}\nthis part elevel should always be ERROR?\nif so, we can just\n+ ereport(ERROR,\n+ (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n\n\n\n\nrelation_statistics_update and other functions\nmay need to check relkind?\nsince relpages, reltuples, relallvisible not meaning to all of relkind?\n\n\n",
"msg_date": "Tue, 27 Aug 2024 11:35:00 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": ">\n> I have attached version 28j as one giant patch covering what was\n> previously 0001-0003. It's a bit rough (tests in particular need some\n> work), but it implelements the logic to replace only those values\n> specified rather than the whole tuple.\n>\n\nI like what you did restoring the parameter enums, especially now that they\ncan be leveraged for the expected type oids data structure.\n\n\n> At least for the interactive \"set\" variants of the functions, I think\n> it's an improvement. It feels more natural to just change one stat\n> without wiping out all the others. I realize a lot of the statistics\n> depend on each other, but the point is not to replace ANALYZE, the\n> point is to experiment with planner scenarios. What do others think?\n>\n\nWhen I first heard that was what you wanted to do, I was very uneasy about\nit. The way you implemented it (one function to wipe out/reset all existing\nstats, and then the _set_ function works as an upsert) puts my mind at\nease. The things I really wanted to avoid were gaps in the stakind array\n(which can't happen as you wrote it) and getting the stakinds out of order\n(admittedly that's more a tidiness issue, but pg_statistic before/after\nfidelity is kept, so I'm happy).\n\nFor the \"restore\" variants, I'm not sure it matters a lot because the\n> stats will already be empty. If it does matter, we could pretty easily\n> define the \"restore\" variants to wipe out existing stats when loading\n> the table, though I'm not sure if that's a good thing or not.\n>\n\nI agree, and I'm leaning towards doing the clear, because \"restore\" to me\nimplies that what resides there exactly matches what was in the function\ncall, regardless of what might have been there before. But you're also\nright, \"restore\" is expected to be used on default/missing stats, and the\nrestore_* call generated is supposed to be comprehensive of all stats that\nwere there at time of dump/upgrade, so impact would be minimal.\n\n\n> I also made more use of FunctionCallInfo structures to communicate\n> between functions rather than huge parameter lists. I believe that\n> reduced the line count substantially, and made it easier to transform\n> the argument pairs in the \"restore\" variants into the positional\n> arguments for the \"set\" variants.\n>\n\nYou certainly did, and I see where it pays off given that _set_ / _restore_\nfunctions are just different ways of ordering the shared internal function\ncall.\n\nObservation: there is currently no way to delete a stakind, keeping the\nrest of the record. It's certainly possible to compose a SQL query that\ngets the current values, invokes pg_clear_* and then pg_set_* using the\nvalues that are meant to be kept, and in fact that pattern is how I\nimagined the pg_set_* functions would be used when they overwrote\neverything in the tuple. So I am fine with going forward with this paradigm.\n\nThe code mentions that more explanation should be given for the special\ncases (tsvector, etc) and that explanation is basically \"this code follows\nwhat the corresponding custom typanalyze() function does\". In the future,\nit may make sense to have custom typimport() functions for datatypes that\nhave a custom typanalzye(), which would solve the issue of handling custom\nstakinds.\n\nI'll continue to work on this.\n\np.s. dropping invalid email address from the thread\n\nI have attached version 28j as one giant patch covering what was\npreviously 0001-0003. 
It's a bit rough (tests in particular need some\nwork), but it implelements the logic to replace only those values\nspecified rather than the whole tuple.I like what you did restoring the parameter enums, especially now that they can be leveraged for the expected type oids data structure. At least for the interactive \"set\" variants of the functions, I think\nit's an improvement. It feels more natural to just change one stat\nwithout wiping out all the others. I realize a lot of the statistics\ndepend on each other, but the point is not to replace ANALYZE, the\npoint is to experiment with planner scenarios. What do others think?When I first heard that was what you wanted to do, I was very uneasy about it. The way you implemented it (one function to wipe out/reset all existing stats, and then the _set_ function works as an upsert) puts my mind at ease. The things I really wanted to avoid were gaps in the stakind array (which can't happen as you wrote it) and getting the stakinds out of order (admittedly that's more a tidiness issue, but pg_statistic before/after fidelity is kept, so I'm happy).For the \"restore\" variants, I'm not sure it matters a lot because the\nstats will already be empty. If it does matter, we could pretty easily\ndefine the \"restore\" variants to wipe out existing stats when loading\nthe table, though I'm not sure if that's a good thing or not.I agree, and I'm leaning towards doing the clear, because \"restore\" to me implies that what resides there exactly matches what was in the function call, regardless of what might have been there before. But you're also right, \"restore\" is expected to be used on default/missing stats, and the restore_* call generated is supposed to be comprehensive of all stats that were there at time of dump/upgrade, so impact would be minimal. \nI also made more use of FunctionCallInfo structures to communicate\nbetween functions rather than huge parameter lists. I believe that\nreduced the line count substantially, and made it easier to transform\nthe argument pairs in the \"restore\" variants into the positional\narguments for the \"set\" variants.You certainly did, and I see where it pays off given that _set_ / _restore_ functions are just different ways of ordering the shared internal function call.Observation: there is currently no way to delete a stakind, keeping the rest of the record. It's certainly possible to compose a SQL query that gets the current values, invokes pg_clear_* and then pg_set_* using the values that are meant to be kept, and in fact that pattern is how I imagined the pg_set_* functions would be used when they overwrote everything in the tuple. So I am fine with going forward with this paradigm.The code mentions that more explanation should be given for the special cases (tsvector, etc) and that explanation is basically \"this code follows what the corresponding custom typanalyze() function does\". In the future, it may make sense to have custom typimport() functions for datatypes that have a custom typanalzye(), which would solve the issue of handling custom stakinds.I'll continue to work on this.p.s. dropping invalid email address from the thread",
"msg_date": "Thu, 5 Sep 2024 13:29:44 -0400",
"msg_from": "Corey Huinker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": ">\n>\n> git am shows some whitespace error.\n>\n\n\nJeff indicated that this was more of a stylistic/clarity reworking. I'll be\nhandling it again for now.\n\n\n>\n>\n> +extern Datum pg_set_relation_stats(PG_FUNCTION_ARGS);\n> +extern Datum pg_set_attribute_stats(PG_FUNCTION_ARGS);\n> is unnecessary?\n>\n\nThey're autogenerated from pg_proc.dat. I was (pleasantly) surprised too.\n\n\n\n> this part elevel should always be ERROR?\n> if so, we can just\n>\n\nI'm personally dis-inclined to error on any of these things, so I'll be\nleaving it as is. I suspect that the proper balance lies between all-ERROR\nand all-WARNING, but time will tell which.\n\n\n> relation_statistics_update and other functions\n> may need to check relkind?\n> since relpages, reltuples, relallvisible not meaning to all of relkind?\n>\n\nI'm not able to understand either of your questions, can you elaborate on\nthem?\n\ngit am shows some whitespace error.Jeff indicated that this was more of a stylistic/clarity reworking. I'll be handling it again for now. \n\n\n+extern Datum pg_set_relation_stats(PG_FUNCTION_ARGS);\n+extern Datum pg_set_attribute_stats(PG_FUNCTION_ARGS);\nis unnecessary?They're autogenerated from pg_proc.dat. I was (pleasantly) surprised too. this part elevel should always be ERROR?\nif so, we can justI'm personally dis-inclined to error on any of these things, so I'll be leaving it as is. I suspect that the proper balance lies between all-ERROR and all-WARNING, but time will tell which. relation_statistics_update and other functions\nmay need to check relkind?\nsince relpages, reltuples, relallvisible not meaning to all of relkind?I'm not able to understand either of your questions, can you elaborate on them?",
"msg_date": "Thu, 5 Sep 2024 13:34:31 -0400",
"msg_from": "Corey Huinker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": "On Fri, Sep 6, 2024 at 1:34 AM Corey Huinker <[email protected]> wrote:\n>>\n>>\n>> this part elevel should always be ERROR?\n>> if so, we can just\n>\n>\n> I'm personally dis-inclined to error on any of these things, so I'll be leaving it as is. I suspect that the proper balance lies between all-ERROR and all-WARNING, but time will tell which.\n>\nsomehow, i get it now.\n\n\n>>\n>> relation_statistics_update and other functions\n>> may need to check relkind?\n>> since relpages, reltuples, relallvisible not meaning to all of relkind?\n>\n>\n> I'm not able to understand either of your questions, can you elaborate on them?\n>\n\nPlease check my attached changes.\nalso see the attached cf-bot commit message.\n\n1. make sure these three functions: 'pg_set_relation_stats',\n'pg_restore_relation_stats','pg_clear_relation_stats' proisstrict to true.\nbecause in\npg_class catalog, these three attributes (relpages, reltuples, relallvisible) is\nmarked as not null. updating it to null will violate these constraints.\ntom also mention this at [\n\n2.refactor relation_statistics_update. first sanity check first argument\n(\"relation\").\nnot all kinds of relation can pass on\nrelation_statistics_update, for example view. so do the sanity check.\nalso do sanity check for the remaining 3 arguments.\nif not ok, ereport(elevel...), return false immediately.\n\n3.add some tests for partitioned table, view, and materialized view.\n\n4. minor sanity check output of \"attnum = get_attnum(reloid,\nNameStr(*attname));\"\n\n5.\ncreate table t(a int, b int);\nalter table t drop column b;\nSELECT pg_catalog.pg_set_attribute_stats(\nrelation => 't'::regclass,\nattname => 'b'::name,\ninherited => false::boolean,\nnull_frac => 0.1::real,\navg_width => 2::integer,\nn_distinct => 0.3::real);\n\nERROR: attribute 0 of relation with OID 34316 does not exist\nThe error message is not good, i think.\nAlso, in this case, I think we may need soft errors.\ninstead of returning ERROR, make it return FALSE would be more ok.\n\n6. there are no \"inherited => true::boolean,\"\ntests for pg_set_attribute_stats.\naslo there are no partitioned table related tests on stats_import.sql.\nI think we should add some.\n\n7. the doc output, functions-admin.html, there are 4 same warnings.\nMaybe one is enough?\n\n8. lock_check_privileges function issue.\n------------------------------------------------\n--asume there is a superuser jian\ncreate role alice NOSUPERUSER LOGIN;\ncreate role bob NOSUPERUSER LOGIN;\ncreate role carol NOSUPERUSER LOGIN;\nalter database test owner to alice\nGRANT CONNECT, CREATE on database test to bob;\n\\c test bob\ncreate schema one;\ncreate table one.t(a int);\ncreate table one.t1(a int);\nset session AUTHORIZATION; --switch to superuser.\nalter table one.t1 owner to carol;\n\\c test alice\n--now current database owner alice cannot do ANYTHING WITH table one.t1,\nlike ANALYZE, SELECT, INSERT, MAINTAIN etc.\n\nso i think your relation_statistics_update->lock_check_privileges part is wrong?\nalso the doc:\n\"The caller must have the MAINTAIN privilege on the table or be the\nowner of the database.\"\nshould be\n\"The caller must have the MAINTAIN privilege on the table or be the\nowner of the table\"\n?",
"msg_date": "Sun, 8 Sep 2024 10:02:00 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": ">\n> Please check my attached changes.\n> also see the attached cf-bot commit message.\n>\n> 1. make sure these three functions: 'pg_set_relation_stats',\n> 'pg_restore_relation_stats','pg_clear_relation_stats' proisstrict to true.\n> because in\n> pg_class catalog, these three attributes (relpages, reltuples,\n> relallvisible) is\n> marked as not null. updating it to null will violate these constraints.\n> tom also mention this at [\n>\n\nThings have changed a bit since then, and the purpose of the functions has\nchanged, so the considerations are now different. The function signature\ncould change in the future as new pg_class stats are added, and it might\nnot still be strict.\n\n\n>\n> 2.refactor relation_statistics_update. first sanity check first argument\n> (\"relation\").\n> not all kinds of relation can pass on\n> relation_statistics_update, for example view. so do the sanity check.\n> also do sanity check for the remaining 3 arguments.\n> if not ok, ereport(elevel...), return false immediately.\n>\n\nI have added checks for non-stats-having pg_class types.\n\n\n> 3.add some tests for partitioned table, view, and materialized view.\n>\n\nWe can do that, but they're all just relations, the underlying mechanism is\nthe same. All we'd be testing is that there is no check actively preventing\nstatistics import for those types.\n\n\n>\n> 4. minor sanity check output of \"attnum = get_attnum(reloid,\n> NameStr(*attname));\"\n>\n\nWhile this check makes sense, it falls into the same category as the sanity\nchecks mentioned in #2. Not against it, but others have found value in just\nallowing these things.\n\n\n>\n> 5.\n> create table t(a int, b int);\n> alter table t drop column b;\n> SELECT pg_catalog.pg_set_attribute_stats(\n> relation => 't'::regclass,\n> attname => 'b'::name,\n> inherited => false::boolean,\n> null_frac => 0.1::real,\n> avg_width => 2::integer,\n> n_distinct => 0.3::real);\n>\n> ERROR: attribute 0 of relation with OID 34316 does not exist\n> The error message is not good, i think.\n> Also, in this case, I think we may need soft errors.\n> instead of returning ERROR, make it return FALSE would be more ok.\n>\n\nI agree that we can extract the name of the oid for a better error message.\nAdded.\n\nThe ERROR vs WARNING debate is ongoing.\n\n\n>\n> 6. there are no \"inherited => true::boolean,\"\n> tests for pg_set_attribute_stats.\n> aslo there are no partitioned table related tests on stats_import.sql.\n> I think we should add some.\n>\n\nThere aren't any, but that does get tested in the pg_upgrade test.\n\n\n> 7. the doc output, functions-admin.html, there are 4 same warnings.\n> Maybe one is enough?\n>\n\nPerhaps, if we had a good place to put that unified message.\n\n\n>\n> 8. 
lock_check_privileges function issue.\n> ------------------------------------------------\n> --asume there is a superuser jian\n> create role alice NOSUPERUSER LOGIN;\n> create role bob NOSUPERUSER LOGIN;\n> create role carol NOSUPERUSER LOGIN;\n> alter database test owner to alice\n> GRANT CONNECT, CREATE on database test to bob;\n> \\c test bob\n> create schema one;\n> create table one.t(a int);\n> create table one.t1(a int);\n> set session AUTHORIZATION; --switch to superuser.\n> alter table one.t1 owner to carol;\n> \\c test alice\n> --now current database owner alice cannot do ANYTHING WITH table one.t1,\n> like ANALYZE, SELECT, INSERT, MAINTAIN etc.\n>\n\nInteresting.\n\n\nI've taken most of Jeff's work, reincorporated it into roughly the same\npatch structure as before, and am posting it now.\n\nHighlights:\n\n- import_stats.c is broken up into stats_utils.c, relation_stats.c, and\nattribute_stats.c. This is done in light of the existence of\nextended_stats.c, and the fact that we will have to eventually add stats\nimport to extended stats.\n- Many of Jian's suggestions were accepted.\n- Reorganized test structure to leverage pg_clear_* functions as a way to\ncleanse the stats palette between pg_set* function tests and pg_restore*\nfunction tests.\n- Rebased up to 95d6e9af07d2e5af2fdd272e72b5b552bad3ea0a on master, which\nincorporates Nathan's recent work on pg_upgrade.",
"msg_date": "Tue, 17 Sep 2024 05:02:49 -0400",
"msg_from": "Corey Huinker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Statistics Import and Export"
},
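A hedged sketch of the "checks for non-stats-having pg_class types" mentioned above, written as standalone C over the one-letter pg_class.relkind codes. Exactly which kinds should be accepted is the open review question, so the set chosen here is illustrative only and relkind_has_storage_stats is a made-up name.

    #include <stdbool.h>
    #include <stdio.h>

    /*
     * Illustrative filter: accept only relation kinds for which
     * relpages/reltuples/relallvisible are meaningful.  The letters
     * are the pg_class.relkind codes; the exact list is a design
     * choice, not something settled by this sketch.
     */
    static bool
    relkind_has_storage_stats(char relkind)
    {
        switch (relkind)
        {
            case 'r':                   /* ordinary table */
            case 'i':                   /* index */
            case 'm':                   /* materialized view */
            case 't':                   /* TOAST table */
            case 'p':                   /* partitioned table */
                return true;
            default:                    /* views, sequences, ... */
                return false;
        }
    }

    int
    main(void)
    {
        printf("view: %d  table: %d\n",
               relkind_has_storage_stats('v'),
               relkind_has_storage_stats('r'));
        return 0;
    }

Whether the rejected kinds should then raise an ERROR or a WARNING ties back to the error-versus-warning question discussed above.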
{
"msg_contents": "On Tue, Sep 17, 2024 at 5:03 PM Corey Huinker <[email protected]> wrote:\n>>\n>> 1. make sure these three functions: 'pg_set_relation_stats',\n>> 'pg_restore_relation_stats','pg_clear_relation_stats' proisstrict to true.\n>> because in\n>> pg_class catalog, these three attributes (relpages, reltuples, relallvisible) is\n>> marked as not null. updating it to null will violate these constraints.\n>> tom also mention this at [\n>\n> Things have changed a bit since then, and the purpose of the functions has changed, so the considerations are now different. The function signature could change in the future as new pg_class stats are added, and it might not still be strict.\n>\n\nif you add more arguments to relation_statistics_update,\nbut the first 3 arguments (relpages, reltuples, relallvisible) still not null.\nand, we are unlikely to add 3 or more (nullable=null) arguments?\n\nwe have code like:\n if (!PG_ARGISNULL(RELPAGES_ARG))\n {\n values[ncols] = Int32GetDatum(relpages);\n ncols++;\n }\n if (!PG_ARGISNULL(RELTUPLES_ARG))\n {\n replaces[ncols] = Anum_pg_class_reltuples;\n values[ncols] = Float4GetDatum(reltuples);\n }\n if (!PG_ARGISNULL(RELALLVISIBLE_ARG))\n {\n values[ncols] = Int32GetDatum(relallvisible);\n ncols++;\n }\n newtup = heap_modify_tuple_by_cols(ctup, tupdesc, ncols, replaces, nulls);\n\nyou just directly declared \"bool nulls[3] = {false, false, false};\"\nif any of (RELPAGES_ARG, RELTUPLES_ARG, RELALLVISIBLE_ARG)\nis null, should you set that null[position] to true?\notherwise, i am confused with the variable nulls.\n\nLooking at other usage of heap_modify_tuple_by_cols, \"ncols\" cannot be\ndynamic, it should be a fixed value?\nThe current implementation works, because the (bool[3] nulls) is\nalways false, never changed.\nif nulls becomes {false, false, true} then \"ncols\" must be 3, cannot be 2.\n\n\n\n\n>>\n>> 8. 
lock_check_privileges function issue.\n>> ------------------------------------------------\n>> --asume there is a superuser jian\n>> create role alice NOSUPERUSER LOGIN;\n>> create role bob NOSUPERUSER LOGIN;\n>> create role carol NOSUPERUSER LOGIN;\n>> alter database test owner to alice\n>> GRANT CONNECT, CREATE on database test to bob;\n>> \\c test bob\n>> create schema one;\n>> create table one.t(a int);\n>> create table one.t1(a int);\n>> set session AUTHORIZATION; --switch to superuser.\n>> alter table one.t1 owner to carol;\n>> \\c test alice\n>> --now current database owner alice cannot do ANYTHING WITH table one.t1,\n>> like ANALYZE, SELECT, INSERT, MAINTAIN etc.\n>\n>\n> Interesting.\n>\n\ndatabase owners do not necessarily have schema USAGE privilege.\n-------------<<<>>>------------------\ncreate role alice NOSUPERUSER LOGIN;\ncreate role bob NOSUPERUSER LOGIN;\ncreate database test;\nalter database test owner to alice;\nGRANT CONNECT, CREATE on database test to bob;\n\\c test bob\ncreate schema one;\ncreate table one.t(a int);\n\\c test alice\n\nanalyze one.t;\n\nwith cte as (\nselect oid as the_t\nfrom pg_class\nwhere relname = any('{t}') and relnamespace = 'one'::regnamespace)\nSELECT\npg_catalog.pg_set_relation_stats(\nrelation => the_t,\nrelpages => 17::integer,\nreltuples => 400.0::real,\nrelallvisible => 4::integer)\nfrom cte;\n\n\nIn the above case, alice cannot do \"analyze one.t;\",\nbut can do pg_set_relation_stats, which seems not ok?\n-------------<<<>>>------------------\n\nsrc/include/statistics/stats_utils.h\ncomment\n * Portions Copyright (c) 1994, Regents of the University of California\n *\n * src/include/statistics/statistics.h\n\nshould be \"src/include/statistics/stats_utils.h\"\n\n\n\ncomment src/backend/statistics/stats_utils.c\n * IDENTIFICATION\n * src/backend/statistics/stats_privs.c\nshould be\n * IDENTIFICATION\n * src/backend/statistics/stats_utils.c\n\n\n",
"msg_date": "Mon, 23 Sep 2024 08:57:01 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Statistics Import and Export"
},
{
"msg_contents": "On Mon, Sep 23, 2024 at 8:57 AM jian he <[email protected]> wrote:\n>\n> database owners do not necessarily have schema USAGE privilege.\n> -------------<<<>>>------------------\n> create role alice NOSUPERUSER LOGIN;\n> create role bob NOSUPERUSER LOGIN;\n> create database test;\n> alter database test owner to alice;\n> GRANT CONNECT, CREATE on database test to bob;\n> \\c test bob\n> create schema one;\n> create table one.t(a int);\n> \\c test alice\n>\n> analyze one.t;\n>\n> with cte as (\n> select oid as the_t\n> from pg_class\n> where relname = any('{t}') and relnamespace = 'one'::regnamespace)\n> SELECT\n> pg_catalog.pg_set_relation_stats(\n> relation => the_t,\n> relpages => 17::integer,\n> reltuples => 400.0::real,\n> relallvisible => 4::integer)\n> from cte;\n>\n>\n> In the above case, alice cannot do \"analyze one.t;\",\n> but can do pg_set_relation_stats, which seems not ok?\n\nsorry for the noise.\nwhat you stats_lock_check_privileges about privilege is right.\n\ndatabase owner cannot do\n\"ANALYZE one.t;\"\nbut it can do \"ANALYZE;\" to indirect analyzing one.t\n\n\n\nwhich seems to be the expected behavior per\nhttps://www.postgresql.org/docs/17/sql-analyze.html\n<<\nTo analyze a table, one must ordinarily have the MAINTAIN privilege on\nthe table.\nHowever, database owners are allowed to analyze all tables in their\ndatabases, except shared catalogs.\n<<\n\n\n",
"msg_date": "Mon, 23 Sep 2024 11:59:45 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Statistics Import and Export"
}
]
[
{
"msg_contents": "While working on a bug in expandRecordVariable() I noticed that in the\nswitch statement for case RTE_SUBQUERY we initialize struct ParseState\nwith {0} while for case RTE_CTE we do that with MemSet. I understand\nthat there is nothing wrong with this, just cannot get away with the\ninconsistency inside the same function (sorry for the nitpicking).\n\nDo we have a preference for how to initialize structures? From 9fd45870\nit seems that we prefer to {0}. So here is a trivial patch doing that.\nAnd with a rough scan the MemSet calls in pg_stat_get_backend_subxact()\ncan also be replaced with {0}, so include that in the patch too.\n\nThanks\nRichard",
"msg_date": "Thu, 31 Aug 2023 16:32:59 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Should we use MemSet or {0} for struct initialization?"
},
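For readers who don't have the two idioms in their head, the styles being compared look like this in isolation. A standalone sketch, assuming a made-up LocalState struct and using the C library memset() where PostgreSQL code would write the MemSet macro.

    #include <string.h>

    typedef struct LocalState
    {
        struct LocalState *parent;
        int                depth;
    } LocalState;

    int
    main(void)
    {
        /* Style 1: an initializer zeroes every member. */
        LocalState a = {0};

        /* Style 2: clear the whole object explicitly. */
        LocalState b;

        memset(&b, 0, sizeof(b));

        /* For ordinary member access the two are interchangeable. */
        return a.depth + b.depth;
    }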
{
"msg_contents": "On Thu, Aug 31, 2023 at 5:34 PM Richard Guo <[email protected]> wrote:\n>\n> While working on a bug in expandRecordVariable() I noticed that in the\n> switch statement for case RTE_SUBQUERY we initialize struct ParseState\n> with {0} while for case RTE_CTE we do that with MemSet. I understand\n> that there is nothing wrong with this, just cannot get away with the\n> inconsistency inside the same function (sorry for the nitpicking).\n>\n> Do we have a preference for how to initialize structures? From 9fd45870\n> it seems that we prefer to {0}. So here is a trivial patch doing that.\n> And with a rough scan the MemSet calls in pg_stat_get_backend_subxact()\n> can also be replaced with {0}, so include that in the patch too.\n>\n> Thanks\n> Richard\n\nIf the struct has padding or aligned, {0} only guarantee the struct\nmembers initialized to 0, while memset sets the alignment/padding\nto 0 as well, but since we will not access the alignment/padding, so\nthey give the same effect.\n\nI bet {0} should be faster since there is no function call, but I'm not\n100% sure ;)\n\n\n-- \nRegards\nJunwang Zhao\n\n\n",
"msg_date": "Thu, 31 Aug 2023 17:56:58 +0800",
"msg_from": "Junwang Zhao <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Should we use MemSet or {0} for struct initialization?"
},
{
"msg_contents": "> On Thu, Aug 31, 2023 at 5:34 PM Richard Guo <[email protected]>\nwrote:\n> >\n> > While working on a bug in expandRecordVariable() I noticed that in the\n> > switch statement for case RTE_SUBQUERY we initialize struct ParseState\n> > with {0} while for case RTE_CTE we do that with MemSet. I understand\n> > that there is nothing wrong with this, just cannot get away with the\n> > inconsistency inside the same function (sorry for the nitpicking).\n> >\n> > Do we have a preference for how to initialize structures? From 9fd45870\n> > it seems that we prefer to {0}. So here is a trivial patch doing that.\n\nIt seems to have been deliberately left that way in the wake of that\ncommit, see:\n\nhttps://www.postgresql.org/message-id/87d2e5f8-3c37-d185-4bbc-1de163ac4b10%40enterprisedb.com\n\n(If so, it deserves a comment to keep people from trying to change it...)\n\n> > And with a rough scan the MemSet calls in pg_stat_get_backend_subxact()\n> > can also be replaced with {0}, so include that in the patch too.\n\nI _believe_ that's harmless to change.\n\nOn Thu, Aug 31, 2023 at 4:57 PM Junwang Zhao <[email protected]> wrote:\n\n> If the struct has padding or aligned, {0} only guarantee the struct\n> members initialized to 0, while memset sets the alignment/padding\n> to 0 as well, but since we will not access the alignment/padding, so\n> they give the same effect.\n\nSee above -- if it's used as a hash key, for example, you must clear\neverything.\n\n> I bet {0} should be faster since there is no function call, but I'm not\n> 100% sure ;)\n\nNeither has a function call. MemSet is a PG macro. You're thinking of\nmemset, the libc library function, but a decent compiler can easily turn\nthat into something else for fixed-size inputs.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n> On Thu, Aug 31, 2023 at 5:34 PM Richard Guo <[email protected]> wrote:> >> > While working on a bug in expandRecordVariable() I noticed that in the> > switch statement for case RTE_SUBQUERY we initialize struct ParseState> > with {0} while for case RTE_CTE we do that with MemSet. I understand> > that there is nothing wrong with this, just cannot get away with the> > inconsistency inside the same function (sorry for the nitpicking).> >> > Do we have a preference for how to initialize structures? From 9fd45870> > it seems that we prefer to {0}. So here is a trivial patch doing that.It seems to have been deliberately left that way in the wake of that commit, see:https://www.postgresql.org/message-id/87d2e5f8-3c37-d185-4bbc-1de163ac4b10%40enterprisedb.com(If so, it deserves a comment to keep people from trying to change it...)> > And with a rough scan the MemSet calls in pg_stat_get_backend_subxact()> > can also be replaced with {0}, so include that in the patch too.I _believe_ that's harmless to change.On Thu, Aug 31, 2023 at 4:57 PM Junwang Zhao <[email protected]> wrote:> If the struct has padding or aligned, {0} only guarantee the struct> members initialized to 0, while memset sets the alignment/padding> to 0 as well, but since we will not access the alignment/padding, so> they give the same effect.See above -- if it's used as a hash key, for example, you must clear everything.> I bet {0} should be faster since there is no function call, but I'm not> 100% sure ;)Neither has a function call. MemSet is a PG macro. You're thinking of memset, the libc library function, but a decent compiler can easily turn that into something else for fixed-size inputs.--John NaylorEDB: http://www.enterprisedb.com",
"msg_date": "Thu, 31 Aug 2023 18:07:22 +0700",
"msg_from": "John Naylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Should we use MemSet or {0} for struct initialization?"
},
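The hash-key caveat is easy to reproduce outside the backend. Below is a standalone sketch — plain memset() standing in for MemSet, and PaddedKey a made-up struct — showing why an object compared bytewise, as a hash key typically is, needs every byte cleared rather than just its members.

    #include <stdio.h>
    #include <string.h>

    /* A struct that, on typical ABIs, has 3 padding bytes after 'tag'. */
    typedef struct PaddedKey
    {
        char tag;
        int  id;
    } PaddedKey;

    int
    main(void)
    {
        PaddedKey a;
        PaddedKey b;

        /* Clear every byte, padding included, then fill in the members. */
        memset(&a, 0, sizeof(a));
        a.tag = 'x';
        a.id = 42;

        /* Same member values, but the padding bytes are left holding
         * whatever was there before (0xAA stands in for stack garbage). */
        memset(&b, 0xAA, sizeof(b));
        b.tag = 'x';
        b.id = 42;

        /* A bytewise comparison -- what a simple hash table might use
         * on its keys -- can disagree even though the members match. */
        printf("memcmp: %d\n", memcmp(&a, &b, sizeof(a)));
        return 0;
    }

That mismatch is the reason the thread treats hash-key structs differently from ordinary local variables.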
{
"msg_contents": "On Thu, Aug 31, 2023 at 7:07 PM John Naylor\n<[email protected]> wrote:\n>\n> > On Thu, Aug 31, 2023 at 5:34 PM Richard Guo <[email protected]> wrote:\n> > >\n> > > While working on a bug in expandRecordVariable() I noticed that in the\n> > > switch statement for case RTE_SUBQUERY we initialize struct ParseState\n> > > with {0} while for case RTE_CTE we do that with MemSet. I understand\n> > > that there is nothing wrong with this, just cannot get away with the\n> > > inconsistency inside the same function (sorry for the nitpicking).\n> > >\n> > > Do we have a preference for how to initialize structures? From 9fd45870\n> > > it seems that we prefer to {0}. So here is a trivial patch doing that.\n>\n> It seems to have been deliberately left that way in the wake of that commit, see:\n>\n> https://www.postgresql.org/message-id/87d2e5f8-3c37-d185-4bbc-1de163ac4b10%40enterprisedb.com\n>\n> (If so, it deserves a comment to keep people from trying to change it...)\n>\n> > > And with a rough scan the MemSet calls in pg_stat_get_backend_subxact()\n> > > can also be replaced with {0}, so include that in the patch too.\n>\n> I _believe_ that's harmless to change.\n>\n> On Thu, Aug 31, 2023 at 4:57 PM Junwang Zhao <[email protected]> wrote:\n>\n> > If the struct has padding or aligned, {0} only guarantee the struct\n> > members initialized to 0, while memset sets the alignment/padding\n> > to 0 as well, but since we will not access the alignment/padding, so\n> > they give the same effect.\n>\n> See above -- if it's used as a hash key, for example, you must clear everything.\n\nYeah, if memcmp was used as the key comparison function, there is a problem.\n\n>\n> > I bet {0} should be faster since there is no function call, but I'm not\n> > 100% sure ;)\n>\n> Neither has a function call. MemSet is a PG macro. You're thinking of memset, the libc library function, but a decent compiler can easily turn that into something else for fixed-size inputs.\n\ngood to know, thanks.\n\n>\n> --\n> John Naylor\n> EDB: http://www.enterprisedb.com\n>\n>\n\n\n-- \nRegards\nJunwang Zhao\n\n\n",
"msg_date": "Thu, 31 Aug 2023 19:35:32 +0800",
"msg_from": "Junwang Zhao <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Should we use MemSet or {0} for struct initialization?"
},
{
"msg_contents": "On Thu, Aug 31, 2023 at 7:07 PM John Naylor <[email protected]>\nwrote:\n\n> > On Thu, Aug 31, 2023 at 5:34 PM Richard Guo <[email protected]>\n> wrote:\n> > >\n> > > While working on a bug in expandRecordVariable() I noticed that in the\n> > > switch statement for case RTE_SUBQUERY we initialize struct ParseState\n> > > with {0} while for case RTE_CTE we do that with MemSet. I understand\n> > > that there is nothing wrong with this, just cannot get away with the\n> > > inconsistency inside the same function (sorry for the nitpicking).\n> > >\n> > > Do we have a preference for how to initialize structures? From\n> 9fd45870\n> > > it seems that we prefer to {0}. So here is a trivial patch doing that.\n>\n> It seems to have been deliberately left that way in the wake of that\n> commit, see:\n>\n>\n> https://www.postgresql.org/message-id/87d2e5f8-3c37-d185-4bbc-1de163ac4b10%40enterprisedb.com\n>\n> (If so, it deserves a comment to keep people from trying to change it...)\n>\n\nThanks for pointing this out. Yeah, struct initialization does not work\nfor some cases with padding bits, such as for a hash key we need to\nclear the padding too.\n\nThe case in expandRecordVariable() mentioned here should be safe though,\nmaybe this is an omission from 9fd45870?\n\nThanks\nRichard\n\nOn Thu, Aug 31, 2023 at 7:07 PM John Naylor <[email protected]> wrote:> On Thu, Aug 31, 2023 at 5:34 PM Richard Guo <[email protected]> wrote:> >> > While working on a bug in expandRecordVariable() I noticed that in the> > switch statement for case RTE_SUBQUERY we initialize struct ParseState> > with {0} while for case RTE_CTE we do that with MemSet. I understand> > that there is nothing wrong with this, just cannot get away with the> > inconsistency inside the same function (sorry for the nitpicking).> >> > Do we have a preference for how to initialize structures? From 9fd45870> > it seems that we prefer to {0}. So here is a trivial patch doing that.It seems to have been deliberately left that way in the wake of that commit, see:https://www.postgresql.org/message-id/87d2e5f8-3c37-d185-4bbc-1de163ac4b10%40enterprisedb.com(If so, it deserves a comment to keep people from trying to change it...)Thanks for pointing this out. Yeah, struct initialization does not workfor some cases with padding bits, such as for a hash key we need toclear the padding too.The case in expandRecordVariable() mentioned here should be safe though,maybe this is an omission from 9fd45870?ThanksRichard",
"msg_date": "Fri, 1 Sep 2023 11:03:43 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Should we use MemSet or {0} for struct initialization?"
},
{
"msg_contents": "On Thu, 31 Aug 2023 at 13:35, Junwang Zhao <[email protected]> wrote:\n> > > If the struct has padding or aligned, {0} only guarantee the struct\n> > > members initialized to 0, while memset sets the alignment/padding\n> > > to 0 as well, but since we will not access the alignment/padding, so\n> > > they give the same effect.\n> >\n> > See above -- if it's used as a hash key, for example, you must clear everything.\n>\n> Yeah, if memcmp was used as the key comparison function, there is a problem.\n\nThe C standard says:\n> When a value is stored in an object of structure or union type, including in a member object, the bytes of the object representation that correspond to any padding bytes take unspecified values.\n\nSo if you set any of the fields after a MemSet, the values of the\npadding bytes that were set to 0 are now unspecified. It seems much\nsafer to actually spell out the padding fields of a hash key.\n\n\n",
"msg_date": "Fri, 1 Sep 2023 14:47:56 +0200",
"msg_from": "Jelte Fennema <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Should we use MemSet or {0} for struct initialization?"
},
{
"msg_contents": "On Fri, Sep 1, 2023 at 7:48 PM Jelte Fennema <[email protected]> wrote:\n\n> The C standard says:\n> > When a value is stored in an object of structure or union type,\nincluding in a member object, the bytes of the object representation that\ncorrespond to any padding bytes take unspecified values.\n>\n> So if you set any of the fields after a MemSet, the values of the\n> padding bytes that were set to 0 are now unspecified. It seems much\n> safer to actually spell out the padding fields of a hash key.\n\nNo, the standard is telling you why you need to memset if consistency of\npadding bytes matters.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com\n\nOn Fri, Sep 1, 2023 at 7:48 PM Jelte Fennema <[email protected]> wrote:> The C standard says:> > When a value is stored in an object of structure or union type, including in a member object, the bytes of the object representation that correspond to any padding bytes take unspecified values.>> So if you set any of the fields after a MemSet, the values of the> padding bytes that were set to 0 are now unspecified. It seems much> safer to actually spell out the padding fields of a hash key.No, the standard is telling you why you need to memset if consistency of padding bytes matters.--John NaylorEDB: http://www.enterprisedb.com",
"msg_date": "Fri, 1 Sep 2023 20:25:25 +0700",
"msg_from": "John Naylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Should we use MemSet or {0} for struct initialization?"
},
{
"msg_contents": "On 2023-09-01 09:25, John Naylor wrote:\n> On Fri, Sep 1, 2023 at 7:48 PM Jelte Fennema <[email protected]> \n> wrote:\n>> The C standard says:\n>> > When a value is stored in an object of structure or union type,\n>> > including in a member object, the bytes of the object representation that\n>> > correspond to any padding bytes take unspecified values.\n>> \n>> So if you set any of the fields after a MemSet, the values of the\n>> padding bytes that were set to 0 are now unspecified. It seems much\n>> safer to actually spell out the padding fields of a hash key.\n> \n> No, the standard is telling you why you need to memset if consistency \n> of\n> padding bytes matters.\n\nUm, I'm in no way a language lawyer for recent C specs, but the language\nJelte Fennema quoted is also present in the rather old 9899 TC2 draft\nI still have around from years back, and in particular it does say\nthat upon assignment, padding bytes ▶take◀ unspecified values, not\nmerely that they retain whatever unspecified values they may have had\nbefore. There is a footnote attached (in 9899 TC2) that says \"Thus,\nfor example, structure assignment need not copy any padding bits.\"\nIf that footnote example were normative, it would be reassuring,\nbecause you could assume that padding bits not copied are unchanged\nand remember what you originally memset() them to. So that would be\nnice. But everything about the form and phrasing of the footnote\nconveys that it isn't normative. And the normative text does appear\nto be saying that those padding bytes ▶take◀ unspecified values upon,\nassignment to the object, even if you may have memset() them before.\nOr at least to be saying that's what could happen, in some \nimplementation\non some architecture, and it would be standard-conformant if it did.\n\nPerhaps there is language elsewhere in the standard that pins it down\nto the way you've interpreted it? If you know where that language\nis, could you point to it?\n\nRegards,\n-Chap\n\n\n",
"msg_date": "Fri, 01 Sep 2023 10:03:27 -0400",
"msg_from": "Chapman Flack <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Should we use MemSet or {0} for struct initialization?"
},
{
"msg_contents": "On Fri, 1 Sept 2023 at 15:25, John Naylor <[email protected]>\nwrote:\n> On Fri, Sep 1, 2023 at 7:48 PM Jelte Fennema <[email protected]> wrote:\n> > The C standard says:\n> > > When a value is stored in an object of structure or union type,\nincluding in a member object, the bytes of the object representation that\ncorrespond to any padding bytes take unspecified values.\n> >\n> > So if you set any of the fields after a MemSet, the values of the\n> > padding bytes that were set to 0 are now unspecified. It seems much\n> > safer to actually spell out the padding fields of a hash key.\n>\n> No, the standard is telling you why you need to memset if consistency of\npadding bytes matters.\n\nMaybe I'm misunderstanding the sentence from the C standard I quoted. But\nunder my interpretation it means that even an assignment to a field of a\nstruct causes the padding bytes to take unspecified (but not undefined)\nvalues, because of the \"including in a member object\" part of the sentence.\nIt's ofcourse possible that all compilers relevant to Postgres never\nactually change padding when assigning to a field.\n\nOn Fri, 1 Sept 2023 at 15:25, John Naylor <[email protected]> wrote:\n> On Fri, Sep 1, 2023 at 7:48 PM Jelte Fennema <[email protected]> wrote:\n> > The C standard says:\n> > > When a value is stored in an object of structure or union type, including in a member object, the bytes of the object representation that correspond to any padding bytes take unspecified values.\n> >\n> > So if you set any of the fields after a MemSet, the values of the\n> > padding bytes that were set to 0 are now unspecified. It seems much\n> > safer to actually spell out the padding fields of a hash key.\n>\n> No, the standard is telling you why you need to memset if consistency of padding bytes matters.\n\nMaybe I'm misunderstanding the sentence from the C standard I quoted. But under my interpretation it means that even an assignment to a field of a struct causes the padding bytes to take unspecified (but not undefined) values, because of the \"including in a member object\" part of the sentence. It's ofcourse possible that all compilers relevant to Postgres never actually change padding when assigning to a field.",
"msg_date": "Fri, 1 Sep 2023 16:04:58 +0200",
"msg_from": "Jelte Fennema <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Should we use MemSet or {0} for struct initialization?"
},
{
"msg_contents": "On 31.08.23 10:32, Richard Guo wrote:\n> While working on a bug in expandRecordVariable() I noticed that in the\n> switch statement for case RTE_SUBQUERY we initialize struct ParseState\n> with {0} while for case RTE_CTE we do that with MemSet. I understand\n> that there is nothing wrong with this, just cannot get away with the\n> inconsistency inside the same function (sorry for the nitpicking).\n> \n> Do we have a preference for how to initialize structures? From 9fd45870\n> it seems that we prefer to {0}. So here is a trivial patch doing that.\n> And with a rough scan the MemSet calls in pg_stat_get_backend_subxact()\n> can also be replaced with {0}, so include that in the patch too.\n\nThe first part (parse_target.c) was already addressed by e0e492e5a9. I \nhave applied the second part (pgstatfuncs.c).\n\n\n\n",
"msg_date": "Tue, 19 Sep 2023 11:37:30 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Should we use MemSet or {0} for struct initialization?"
},
{
"msg_contents": "On Tue, Sep 19, 2023 at 5:37 PM Peter Eisentraut <[email protected]>\nwrote:\n\n> On 31.08.23 10:32, Richard Guo wrote:\n> > While working on a bug in expandRecordVariable() I noticed that in the\n> > switch statement for case RTE_SUBQUERY we initialize struct ParseState\n> > with {0} while for case RTE_CTE we do that with MemSet. I understand\n> > that there is nothing wrong with this, just cannot get away with the\n> > inconsistency inside the same function (sorry for the nitpicking).\n> >\n> > Do we have a preference for how to initialize structures? From 9fd45870\n> > it seems that we prefer to {0}. So here is a trivial patch doing that.\n> > And with a rough scan the MemSet calls in pg_stat_get_backend_subxact()\n> > can also be replaced with {0}, so include that in the patch too.\n>\n> The first part (parse_target.c) was already addressed by e0e492e5a9. I\n> have applied the second part (pgstatfuncs.c).\n\n\nThanks for pushing this.\n\nThanks\nRichard\n\nOn Tue, Sep 19, 2023 at 5:37 PM Peter Eisentraut <[email protected]> wrote:On 31.08.23 10:32, Richard Guo wrote:\n> While working on a bug in expandRecordVariable() I noticed that in the\n> switch statement for case RTE_SUBQUERY we initialize struct ParseState\n> with {0} while for case RTE_CTE we do that with MemSet. I understand\n> that there is nothing wrong with this, just cannot get away with the\n> inconsistency inside the same function (sorry for the nitpicking).\n> \n> Do we have a preference for how to initialize structures? From 9fd45870\n> it seems that we prefer to {0}. So here is a trivial patch doing that.\n> And with a rough scan the MemSet calls in pg_stat_get_backend_subxact()\n> can also be replaced with {0}, so include that in the patch too.\n\nThe first part (parse_target.c) was already addressed by e0e492e5a9. I \nhave applied the second part (pgstatfuncs.c).Thanks for pushing this.ThanksRichard",
"msg_date": "Tue, 19 Sep 2023 18:39:13 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Should we use MemSet or {0} for struct initialization?"
}
] |
[
{
"msg_contents": "Commitfest 2023-09 (https://commitfest.postgresql.org/44/) starts in \nless than 28 hours.\n\nIf you have any patches you would like considered, be sure to add them \nin good time.\n\nAll patch authors, and especially experienced hackers, are requested to \nmake sure the patch status is up to date. If the patch is still being \nworked on, it might not need to be in \"Needs review\". Conversely, if \nyou are hoping for a review but the status is \"Waiting on author\", then \nit will likely be ignored by reviewers.\n\nThere are a number of patches carried over from the PG16 development \ncycle that have been in \"Waiting on author\" for several months. I will \naggressively prune those after the start of this commitfest if there \nhasn't been any author activity by then.\n\n\n",
"msg_date": "Thu, 31 Aug 2023 10:36:31 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Commitfest 2023-09 starts soon"
},
{
"msg_contents": "Hi Peter,\n\n> Commitfest 2023-09 (https://commitfest.postgresql.org/44/) starts in\n> less than 28 hours.\n>\n> If you have any patches you would like considered, be sure to add them\n> in good time.\n>\n> All patch authors, and especially experienced hackers, are requested to\n> make sure the patch status is up to date. If the patch is still being\n> worked on, it might not need to be in \"Needs review\". Conversely, if\n> you are hoping for a review but the status is \"Waiting on author\", then\n> it will likely be ignored by reviewers.\n>\n> There are a number of patches carried over from the PG16 development\n> cycle that have been in \"Waiting on author\" for several months. I will\n> aggressively prune those after the start of this commitfest if there\n> hasn't been any author activity by then.\n\nThe \"64-bit TOAST value ID\" [1] is one of such \"Waiting on author\"\npatches I've been reviewing. See the last 2-3 messages in the thread.\nI believe it's safe to mark it as RwF for now.\n\n[1]: https://commitfest.postgresql.org/44/4296/\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Thu, 31 Aug 2023 14:18:44 +0300",
"msg_from": "Aleksander Alekseev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Commitfest 2023-09 starts soon"
},
{
"msg_contents": "Hi,\n\n> > There are a number of patches carried over from the PG16 development\n> > cycle that have been in \"Waiting on author\" for several months. I will\n> > aggressively prune those after the start of this commitfest if there\n> > hasn't been any author activity by then.\n>\n> The \"64-bit TOAST value ID\" [1] is one of such \"Waiting on author\"\n> patches I've been reviewing. See the last 2-3 messages in the thread.\n> I believe it's safe to mark it as RwF for now.\n>\n> [1]: https://commitfest.postgresql.org/44/4296/\n\nThis was the one that I could name off the top of my head.\n\n1. Flush SLRU counters in checkpointer process\nhttps://commitfest.postgresql.org/44/4120/\n\nSimilarly, I suggest marking it as RwF\n\n2. Allow logical replication via inheritance root table\nhttps://commitfest.postgresql.org/44/4225/\n\nThis one seems to be in active development. Changing status to \"Needs\nreview\" since it definitely could use more code review.\n\n3. ResourceOwner refactoring\nhttps://commitfest.postgresql.org/44/3982/\n\nThe patch is in good shape but requires actions from Heikki. I suggest\nkeeping it as is for now.\n\n4. Move SLRU data into the regular buffer pool\nhttps://commitfest.postgresql.org/44/3514/\n\nRotted one and for a long time. Suggestion: RwF\n\n5. A minor adjustment to get_cheapest_path_for_pathkeys\nhttps://commitfest.postgresql.org/44/4286/\n\nDoesn't seem to be valuable. Suggestion: Rejected.\n\nI will apply corresponding status changes if there will be no objections.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Thu, 31 Aug 2023 14:34:26 +0300",
"msg_from": "Aleksander Alekseev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Commitfest 2023-09 starts soon"
},
{
"msg_contents": "Okay, here we go, starting with:\n\nStatus summary: Needs review: 227. Waiting on Author: 37. Ready for \nCommitter: 30. Committed: 40. Rejected: 1. Returned with Feedback: 1. \nWithdrawn: 1. Total: 337.\n\n(which is less than CF 2023-07)\n\nI have also already applied one round of the waiting-on-author-pruning \ndescribed below (not included in the above figures).\n\n\nOn 31.08.23 10:36, Peter Eisentraut wrote:\n> Commitfest 2023-09 (https://commitfest.postgresql.org/44/) starts in \n> less than 28 hours.\n> \n> If you have any patches you would like considered, be sure to add them \n> in good time.\n> \n> All patch authors, and especially experienced hackers, are requested to \n> make sure the patch status is up to date. If the patch is still being \n> worked on, it might not need to be in \"Needs review\". Conversely, if \n> you are hoping for a review but the status is \"Waiting on author\", then \n> it will likely be ignored by reviewers.\n> \n> There are a number of patches carried over from the PG16 development \n> cycle that have been in \"Waiting on author\" for several months. I will \n> aggressively prune those after the start of this commitfest if there \n> hasn't been any author activity by then.\n> \n> \n\n\n\n",
"msg_date": "Fri, 1 Sep 2023 14:55:35 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Commitfest 2023-09 starts soon"
},
{
"msg_contents": "Hi Peter,\n\n> Okay, here we go, starting with:\n>\n> Status summary: Needs review: 227. Waiting on Author: 37. Ready for\n> Committer: 30. Committed: 40. Rejected: 1. Returned with Feedback: 1.\n> Withdrawn: 1. Total: 337.\n>\n> (which is less than CF 2023-07)\n>\n> I have also already applied one round of the waiting-on-author-pruning\n> described below (not included in the above figures).\n\n* Index SLRUs by 64-bit integers rather than by 32-bit integers\nhttps://commitfest.postgresql.org/44/3489/\n\nThe status here was changed to \"Needs Review\". These patches are in\ngood shape and previously were marked as \"Ready for Committer\".\nActually I thought Heikki would commit them to PG16, but it didn't\nhappen. If there are no objections, I will return the RfC status in a\nbit since it seems to be more appropriate in this case.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Mon, 4 Sep 2023 16:22:40 +0300",
"msg_from": "Aleksander Alekseev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Commitfest 2023-09 starts soon"
},
{
"msg_contents": "On 04.09.23 15:22, Aleksander Alekseev wrote:\n> Hi Peter,\n> \n>> Okay, here we go, starting with:\n>>\n>> Status summary: Needs review: 227. Waiting on Author: 37. Ready for\n>> Committer: 30. Committed: 40. Rejected: 1. Returned with Feedback: 1.\n>> Withdrawn: 1. Total: 337.\n>>\n>> (which is less than CF 2023-07)\n>>\n>> I have also already applied one round of the waiting-on-author-pruning\n>> described below (not included in the above figures).\n> \n> * Index SLRUs by 64-bit integers rather than by 32-bit integers\n> https://commitfest.postgresql.org/44/3489/\n> \n> The status here was changed to \"Needs Review\". These patches are in\n> good shape and previously were marked as \"Ready for Committer\".\n> Actually I thought Heikki would commit them to PG16, but it didn't\n> happen. If there are no objections, I will return the RfC status in a\n> bit since it seems to be more appropriate in this case.\n\nThe patch was first set to \"Ready for Committer\" on 2023-03-29, and if I \npull up the thread in the web archive view, that is in the middle of the \npage. So as a committer, I would expect that someone would review \nwhatever happened in the second half of that thread before turning it \nover to committer.\n\nAs a general rule, if significant additional discussion or patch posting \nhappens after a patch is set to \"Ready for Committer\", I'm setting it \nback to \"Needs review\" until someone actually re-reviews it.\n\nI also notice that you are listed as both author and reviewer of that \npatch, which I think shouldn't be done. It appears that you are in fact \nthe author, so I would recommend that you remove yourself from the \nreviewers.\n\n\n\n",
"msg_date": "Mon, 4 Sep 2023 16:19:37 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Commitfest 2023-09 starts soon"
},
{
"msg_contents": "I have done several passes to make sure that patch statuses are more \naccurate. As explained in a nearby message, I have set several patches \nback from \"Ready to Committer\" to \"Needs review\" if additional \ndiscussion happened past the first status change. I have also in \nseveral cases removed reviewers from a patch entry if they haven't been \nactive on the thread in several months. (Feel free to sign up again, \nbut don't \"block\" the patch.)\n\nI was going to say, unfortunately that means that there is now even more \nwork to do, but in fact, it seems this has all kind of balanced out, and \nI hope it's now more accurate for someone wanting to pick up some work:\n\nStatus summary: Needs review: 224. Waiting on Author: 24. Ready for \nCommitter: 31. Committed: 41. Returned with Feedback: 14. Withdrawn: 2. \nRejected: 2. Total: 338.\n\n(And yes, the total number of patches has grown since the commitfest has \nstarted!?!)\n\n\nOn 01.09.23 14:55, Peter Eisentraut wrote:\n> Okay, here we go, starting with:\n> \n> Status summary: Needs review: 227. Waiting on Author: 37. Ready for \n> Committer: 30. Committed: 40. Rejected: 1. Returned with Feedback: 1. \n> Withdrawn: 1. Total: 337.\n> \n> (which is less than CF 2023-07)\n> \n> I have also already applied one round of the waiting-on-author-pruning \n> described below (not included in the above figures).\n> \n> \n> On 31.08.23 10:36, Peter Eisentraut wrote:\n>> Commitfest 2023-09 (https://commitfest.postgresql.org/44/) starts in \n>> less than 28 hours.\n>>\n>> If you have any patches you would like considered, be sure to add them \n>> in good time.\n>>\n>> All patch authors, and especially experienced hackers, are requested \n>> to make sure the patch status is up to date. If the patch is still \n>> being worked on, it might not need to be in \"Needs review\". \n>> Conversely, if you are hoping for a review but the status is \"Waiting \n>> on author\", then it will likely be ignored by reviewers.\n>>\n>> There are a number of patches carried over from the PG16 development \n>> cycle that have been in \"Waiting on author\" for several months. I \n>> will aggressively prune those after the start of this commitfest if \n>> there hasn't been any author activity by then.\n>>\n>>\n> \n> \n> \n\n\n\n",
"msg_date": "Mon, 4 Sep 2023 16:29:20 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Commitfest 2023-09 starts soon"
},
{
"msg_contents": "Hi Peter,\n\n> The patch was first set to \"Ready for Committer\" on 2023-03-29, and if I\n> pull up the thread in the web archive view, that is in the middle of the\n> page. So as a committer, I would expect that someone would review\n> whatever happened in the second half of that thread before turning it\n> over to committer.\n>\n> As a general rule, if significant additional discussion or patch posting\n> happens after a patch is set to \"Ready for Committer\", I'm setting it\n> back to \"Needs review\" until someone actually re-reviews it.\n\nOK, fair enough.\n\n> I also notice that you are listed as both author and reviewer of that\n> patch, which I think shouldn't be done. It appears that you are in fact\n> the author, so I would recommend that you remove yourself from the\n> reviewers.\n\nSometimes I start as a reviewer and then for instance add my own\npatches to the thread. In cases like this I end up being both an\nauthor and a reviewer, but it doesn't mean that I review my own\npatches :)\n\nIn this particular case IMO it would be appropriate to remove myself\nfrom the list of reviewers. So I did.\n\nThanks!\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Mon, 4 Sep 2023 17:36:02 +0300",
"msg_from": "Aleksander Alekseev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Commitfest 2023-09 starts soon"
},
{
"msg_contents": "On Thu, 31 Aug 2023 at 14:35, Aleksander Alekseev\n<[email protected]> wrote:\n>\n> Hi,\n> > On Thu, 31 Aug 2023 at 11:37, Peter Eisentraut <[email protected]> wrote:\n> > > There are a number of patches carried over from the PG16 development\n> > > cycle that have been in \"Waiting on author\" for several months. I will\n> > > aggressively prune those after the start of this commitfest if there\n> > > hasn't been any author activity by then.\n> >\n> > [1 patch]\n>\n> This was the one that I could name off the top of my head.\n>\n> [5 more patches]\n>\n> I will apply corresponding status changes if there will be no objections.\n\nOn Mon, 4 Sept 2023 at [various], Aleksander Alekseev\n<[email protected]> wrote:\n>\n> Hi,\n>\n> > [various patches]\n>\n> A consensus was reached [1] to mark this patch as RwF for now. There\n> are many patches to be reviewed and this one doesn't seem to be in the\n> best shape, so we have to prioritise. Please feel free re-submitting\n> the patch for the next commitfest.\n\nI'm a bit confused about your use of \"consensus\". True, there was no\nobjection, but it looks like no patch author or reviewer was informed\n(cc-ed or directly referenced) that the patch was being discussed\nbefore achieving this \"consensus\", and the \"consensus\" was reached\nwithin 4 days, of which 2 weekend, in a thread that has (until now)\ninvolved only you and Peter E.\n\nUsually, you'd expect discussion about a patch to happen on the\npatch's thread before any action is taken (or at least a mention on\nthat thread), but quite clearly that hasn't happened here.\nAre patch authors expected to follow any and all discussion on threads\nwith \"Commitfest\" in the title?\nIf so, shouldn't the relevant wiki pages be updated, and/or the\n-hackers community be updated by mail in a new thread about these\npolicy changes?\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n[0] https://wiki.postgresql.org/wiki/Submitting_a_Patch\n\n\n",
"msg_date": "Mon, 4 Sep 2023 17:55:51 +0200",
"msg_from": "Matthias van de Meent <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Commitfest 2023-09 starts soon"
},
{
"msg_contents": "Hi Matthias,\n\n> I'm a bit confused about your use of \"consensus\". True, there was no\n> objection, but it looks like no patch author or reviewer was informed\n> (cc-ed or directly referenced) that the patch was being discussed\n> before achieving this \"consensus\", and the \"consensus\" was reached\n> within 4 days, of which 2 weekend, in a thread that has (until now)\n> involved only you and Peter E.\n>\n> Usually, you'd expect discussion about a patch to happen on the\n> patch's thread before any action is taken (or at least a mention on\n> that thread), but quite clearly that hasn't happened here.\n> Are patch authors expected to follow any and all discussion on threads\n> with \"Commitfest\" in the title?\n> If so, shouldn't the relevant wiki pages be updated, and/or the\n> -hackers community be updated by mail in a new thread about these\n> policy changes?\n\nI understand your disappointment and assure you that no one is acting\nwith bad intentions here. Also please note that English is a second\nlanguage for many of us which represents a challenge when it comes to\nexpressing thoughts on the mailing list. We have a common goal here,\nto make PostgreSQL an even better system than it is now.\n\nThe patches under question were in \"Waiting for Author\" state for a\n*long* time and the authors were notified about this. We could toss\nsuch patches from one CF to another month after month or mark as RwF\nand let the author know that no one is going to review that patch\nuntil the author takes the actions. It's been noted that the letter\napproach is more productive in the long run. The discussion can\ncontinue in the same thread and the same thread can be registered for\nthe upcoming CF.\n\nThis being said, Peter is the CF manager, so he has every right to\nchange the status of the patches under questions if he disagrees.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Mon, 4 Sep 2023 19:18:52 +0300",
"msg_from": "Aleksander Alekseev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Commitfest 2023-09 starts soon"
},
{
"msg_contents": "On Mon, 4 Sept 2023 at 18:19, Aleksander Alekseev\n<[email protected]> wrote:\n>\n> Hi Matthias,\n>\n> > I'm a bit confused about your use of \"consensus\". True, there was no\n> > objection, but it looks like no patch author or reviewer was informed\n> > (cc-ed or directly referenced) that the patch was being discussed\n> > before achieving this \"consensus\", and the \"consensus\" was reached\n> > within 4 days, of which 2 weekend, in a thread that has (until now)\n> > involved only you and Peter E.\n> >\n> > Usually, you'd expect discussion about a patch to happen on the\n> > patch's thread before any action is taken (or at least a mention on\n> > that thread), but quite clearly that hasn't happened here.\n> > Are patch authors expected to follow any and all discussion on threads\n> > with \"Commitfest\" in the title?\n> > If so, shouldn't the relevant wiki pages be updated, and/or the\n> > -hackers community be updated by mail in a new thread about these\n> > policy changes?\n>\n> I understand your disappointment and assure you that no one is acting\n> with bad intentions here. Also please note that English is a second\n> language for many of us which represents a challenge when it comes to\n> expressing thoughts on the mailing list. We have a common goal here,\n> to make PostgreSQL an even better system than it is now.\n>\n> The patches under question were in \"Waiting for Author\" state for a\n> *long* time and the authors were notified about this. We could toss\n> such patches from one CF to another month after month or mark as RwF\n> and let the author know that no one is going to review that patch\n> until the author takes the actions. It's been noted that the letter\n> approach is more productive in the long run.\n\nThis far I agree - we can't keep patches around with issues if they're\nnot being worked on. And I do appreciate your work on pruning dead or\nstale patches. But:\n\n> The discussion can\n> continue in the same thread and the same thread can be registered for\n> the upcoming CF.\n\nThis is one of my major concerns here: Patch resolution is being\ndiscussed on -hackers, but outside of the thread used to discuss that\npatch (as indicated in the CF app), and without apparent author\ninclusion.To me, that feels like going behind the author's back, and I\ndon't think that this should be normalized.\n\nAs mentioned in the earlier mail, my other concern is the use of\n\"consensus\" in this context. You link to a message on -hackers, with\nno visible agreements. As a patch author myself, if a lack of comments\non my patch in an otherwise unrelated thread is \"consensus\", then I'll\nprobably move all patches that have yet to be commented on to RfC, as\nthere'd be \"consensus\" that they should be included as-is in\nPostgreSQL. But I digress.\n\nI think it would be better to just remove the \"consensus\" part of your\nmail, and just put down the real reason why each patch is being RfC-ed\nor rejected. That is, don't imply that there are hackers that OK-ed it\nwhen there are none, and inform patch authors directly about the\nreasons why the patch is being revoked; so without \"see consensus in\n[0]\".\n\nKind regards,\n\nMatthias van de Meent\n\n\n",
"msg_date": "Mon, 4 Sep 2023 20:10:57 +0200",
"msg_from": "Matthias van de Meent <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Commitfest 2023-09 starts soon"
},
{
"msg_contents": "On Mon, Sep 4, 2023 at 1:01 PM Peter Eisentraut <[email protected]> wrote:\n>\n> I have done several passes to make sure that patch statuses are more\n> accurate. As explained in a nearby message, I have set several patches\n> back from \"Ready to Committer\" to \"Needs review\" if additional\n> discussion happened past the first status change. I have also in\n> several cases removed reviewers from a patch entry if they haven't been\n> active on the thread in several months. (Feel free to sign up again,\n> but don't \"block\" the patch.)\n\nI had originally planned to write thread summaries for all status=\"Needs\nreview\" patches in the commitfest. However, ISTM these summaries are\nlargely useful for reaching consensus on changing the status of these\npatches (setting them to \"waiting on author\", \"returned with feedback\",\netc). Since Peter has already updated all of the statuses, I've decided\nnot to write summaries and instead spend that time doing review.\n\nHowever, I thought it might be useful to provide a list of all of the\n\"Needs review\" entries which, at the time of writing, have had zero\nreviews or replies (except from the author):\n\nDocument efficient self-joins / UPDATE LIMIT techniques.\nhttps://commitfest.postgresql.org/44/4539/\n\npg_basebackup: Always return valid temporary slot names\nhttps://commitfest.postgresql.org/44/4534/\n\npg_resetwal tests, logging, and docs update\nhttps://commitfest.postgresql.org/44/4533/\n\nStreaming I/O, vectored I/O\nhttps://commitfest.postgresql.org/44/4532/\n\nImproving the heapgetpage function improves performance in common scenarios\nhttps://commitfest.postgresql.org/44/4524/\n\nCI speed improvements for FreeBSD\nhttps://commitfest.postgresql.org/44/4520/\n\nImplementation of distinct in Window Aggregates: take two\nhttps://commitfest.postgresql.org/44/4519/\n\nImprove pg_restore toc file parsing and format for better performances\nhttps://commitfest.postgresql.org/44/4509/\n\nFix false report of wraparound in pg_serial\nhttps://commitfest.postgresql.org/44/4516/\n\nSimplify create_merge_append_path a bit for clarity\nhttps://commitfest.postgresql.org/44/4496/\n\nFix bogus Asserts in calc_non_nestloop_required_outer\nhttps://commitfest.postgresql.org/44/4478/\n\nRetiring is_pushed_down\nhttps://commitfest.postgresql.org/44/4458/\n\nFlush disk write caches by default on macOS and Windows\nhttps://commitfest.postgresql.org/44/4453/\n\nAdd last_commit_lsn to pg_stat_database\nhttps://commitfest.postgresql.org/44/4355/\n\nOptimizing \"boundary cases\" during backward scan B-Tree index descents\nhttps://commitfest.postgresql.org/44/4380/\n\nXLog size reductions: Reduced XLog record header size\nhttps://commitfest.postgresql.org/44/4386/\n\nUnified file access using virtual file descriptors\nhttps://commitfest.postgresql.org/44/4420/\n\nOptimise index range scan by performing first check with upper page boundary\nhttps://commitfest.postgresql.org/44/4434/\n\nRevises for the check of parameterized partial paths\nhttps://commitfest.postgresql.org/44/4425/\n\nOpportunistically pruning page before update\nhttps://commitfest.postgresql.org/44/4384/\n\nChecks in RegisterBackgroundWorker()\nhttps://commitfest.postgresql.org/44/4514/\n\nAllow direct lookups of SpecialJoinInfo by ojrelid\nhttps://commitfest.postgresql.org/44/4313/\n\nParent/child context relation in pg_get_backend_memory_contexts()\nhttps://commitfest.postgresql.org/44/4379/\n\nSupport Right Semi Join\nhttps://commitfest.postgresql.org/44/4284/\n\nbug: ANALYZE 
progress report with inheritance tables\nhttps://commitfest.postgresql.org/44/4144/\n\narchive modules loose ends\nhttps://commitfest.postgresql.org/44/4192/\n\n- Melanie\n\n\n",
"msg_date": "Mon, 4 Sep 2023 15:38:02 -0400",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Commitfest 2023-09 starts soon"
},
{
"msg_contents": "Hi,\n\n> I think it would be better to just remove the \"consensus\" part of your\n> mail, and just put down the real reason why each patch is being RfC-ed\n> or rejected. That is, don't imply that there are hackers that OK-ed it\n> when there are none, and inform patch authors directly about the\n> reasons why the patch is being revoked; so without \"see consensus in\n> [0]\".\n\nThat's fair enough. I will use \"It's been decided\" or something like\nthis next time to avoid any confusion.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Tue, 5 Sep 2023 14:55:04 +0300",
"msg_from": "Aleksander Alekseev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Commitfest 2023-09 starts soon"
}
] |
[
{
"msg_contents": "With PgBouncer in the middle PQbackendPID can return negative values\ndue to it filling all 32 bits of the be_pid with random bits.\n\nWhen this happens it results in pg_basebackup generating an invalid slot\nname (when no specific slot name is passed in) and thus throwing an\nerror like this:\n\npg_basebackup: error: could not send replication command\n\"CREATE_REPLICATION_SLOT \"pg_basebackup_-1201966863\" TEMPORARY\nPHYSICAL ( RESERVE_WAL)\": ERROR: replication slot name\n\"pg_basebackup_-1201966863\" contains invalid character\nHINT: Replication slot names may only contain lower case letters,\nnumbers, and the underscore character.\n\nThis patch fixes that problem by formatting the result from PQbackendPID\nas an unsigned integer when creating the temporary replication slot\nname.\n\nI think it would be good to backport this fix too.\n\nReplication connection support for PgBouncer is not merged yet, but\nit's pretty much ready:\nhttps://github.com/pgbouncer/pgbouncer/pull/876\n\nThe reason PgBouncer does not pass on the actual Postgres backend PID\nis that it doesn't have an associated server connection yet when it\nneeds to send the startup message to the client. It also cannot use\nit's own PID, because that would be the same for all clients, since\npgbouncer is a single process.",
"msg_date": "Thu, 31 Aug 2023 11:13:00 +0200",
"msg_from": "Jelte Fennema <[email protected]>",
"msg_from_op": true,
"msg_subject": "pg_basebackup: Always return valid temporary slot names"
},
{
"msg_contents": "Hi Jelte,\n\n\nPlease find my reviews below:-\n*1)* With what I have understood from above, the pgbouncer fills up\nbe_pid (int, 32 bits) with random bits as it does not have an\nassociated server connection yet.\nWith this, I was thinking, isn't this a problem of pgbouncer filling\nbe_pid with random bits? Maybe it should have filled the be_pid\nwith a random positive integer instead of any integer as it\nrepresents a pid? -- If this makes sense here, then maybe the fix\nshould be in pgbouncer instead of how the be_pid is processed in\npg_basebackup?\n\n*2)* Rest, the patch looks straightforward, with these two changes :\n\"%d\" --> \"%u\" and \"(int)\" --> \"(unsigned)\".\n\n\nRegards,\nNishant.\n\n\nOn Thu, Aug 31, 2023 at 2:43 PM Jelte Fennema <[email protected]> wrote:\n\n> With PgBouncer in the middle PQbackendPID can return negative values\n> due to it filling all 32 bits of the be_pid with random bits.\n>\n> When this happens it results in pg_basebackup generating an invalid slot\n> name (when no specific slot name is passed in) and thus throwing an\n> error like this:\n>\n> pg_basebackup: error: could not send replication command\n> \"CREATE_REPLICATION_SLOT \"pg_basebackup_-1201966863\" TEMPORARY\n> PHYSICAL ( RESERVE_WAL)\": ERROR: replication slot name\n> \"pg_basebackup_-1201966863\" contains invalid character\n> HINT: Replication slot names may only contain lower case letters,\n> numbers, and the underscore character.\n>\n> This patch fixes that problem by formatting the result from PQbackendPID\n> as an unsigned integer when creating the temporary replication slot\n> name.\n>\n> I think it would be good to backport this fix too.\n>\n> Replication connection support for PgBouncer is not merged yet, but\n> it's pretty much ready:\n> https://github.com/pgbouncer/pgbouncer/pull/876\n>\n> The reason PgBouncer does not pass on the actual Postgres backend PID\n> is that it doesn't have an associated server connection yet when it\n> needs to send the startup message to the client. It also cannot use\n> it's own PID, because that would be the same for all clients, since\n> pgbouncer is a single process.\n>\n\nHi Jelte,Please find my reviews below:-1) With what I have understood from above, the pgbouncer fills upbe_pid (int, 32 bits) with random bits as it does not have anassociated server connection yet.With this, I was thinking, isn't this a problem of pgbouncer fillingbe_pid with random bits? Maybe it should have filled the be_pidwith a random positive integer instead of any integer as itrepresents a pid? 
-- If this makes sense here, then maybe the fixshould be in pgbouncer instead of how the be_pid is processed inpg_basebackup?2) Rest, the patch looks straightforward, with these two changes :\"%d\" --> \"%u\" and \"(int)\" --> \"(unsigned)\".Regards,Nishant.On Thu, Aug 31, 2023 at 2:43 PM Jelte Fennema <[email protected]> wrote:With PgBouncer in the middle PQbackendPID can return negative values\ndue to it filling all 32 bits of the be_pid with random bits.\n\nWhen this happens it results in pg_basebackup generating an invalid slot\nname (when no specific slot name is passed in) and thus throwing an\nerror like this:\n\npg_basebackup: error: could not send replication command\n\"CREATE_REPLICATION_SLOT \"pg_basebackup_-1201966863\" TEMPORARY\nPHYSICAL ( RESERVE_WAL)\": ERROR: replication slot name\n\"pg_basebackup_-1201966863\" contains invalid character\nHINT: Replication slot names may only contain lower case letters,\nnumbers, and the underscore character.\n\nThis patch fixes that problem by formatting the result from PQbackendPID\nas an unsigned integer when creating the temporary replication slot\nname.\n\nI think it would be good to backport this fix too.\n\nReplication connection support for PgBouncer is not merged yet, but\nit's pretty much ready:\nhttps://github.com/pgbouncer/pgbouncer/pull/876\n\nThe reason PgBouncer does not pass on the actual Postgres backend PID\nis that it doesn't have an associated server connection yet when it\nneeds to send the startup message to the client. It also cannot use\nit's own PID, because that would be the same for all clients, since\npgbouncer is a single process.",
"msg_date": "Tue, 5 Sep 2023 12:39:03 +0530",
"msg_from": "Nishant Sharma <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_basebackup: Always return valid temporary slot names"
},
{
"msg_contents": "> On 5 Sep 2023, at 09:09, Nishant Sharma <[email protected]> wrote:\n\n> With this, I was thinking, isn't this a problem of pgbouncer filling\n> be_pid with random bits? Maybe it should have filled the be_pid\n> with a random positive integer instead of any integer as it\n> represents a pid?\n\nI'm inclined to agree that anyone sending a value which is supposed to\nrepresent a PID should be expected to send a value which corresponds to the\nformat of a PID.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Tue, 5 Sep 2023 11:39:26 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_basebackup: Always return valid temporary slot names"
},
{
"msg_contents": "On Tue, 5 Sept 2023 at 11:39, Daniel Gustafsson <[email protected]> wrote:\n>\n> > On 5 Sep 2023, at 09:09, Nishant Sharma <[email protected]> wrote:\n>\n> > With this, I was thinking, isn't this a problem of pgbouncer filling\n> > be_pid with random bits? Maybe it should have filled the be_pid\n> > with a random positive integer instead of any integer as it\n> > represents a pid?\n>\n> I'm inclined to agree that anyone sending a value which is supposed to\n> represent a PID should be expected to send a value which corresponds to the\n> format of a PID.\n\nWhen there is a pooler in the middle it already isn't a PID anyway. I\ntook a look at a few other connection poolers and all the ones I\nlooked at (Odyssey and pgcat) do the same: They put random bytes in\nthe be_pid field (and thus can result in negative values). This normally\ndoes not cause any problems, because the be_pid value is simply sent\nback verbatim to the server when canceling a query, which is it's main\npurpose according to the docs:\n\n> This message provides secret-key data that the frontend must save if it wants to be able to issue cancel requests later.\n\nSource: https://www.postgresql.org/docs/current/protocol-flow.html#id-1.10.6.7.3\n\nFor that purpose it's actually more secure to use all bits for random\ndata, instead of keeping one bit always 0.\n\nIts main other purpose that I know if is displaying it in a psql\nprompt, so you know where to attach a debugger. This is completely\nbroken either way as soon as you have a connection pooler in the\nmiddle, because you would want to display the Postgres backend PID\ninstead of the random ID that the connection pooler sends back. So if\nit's negative that's no issue (it displays fine and it's useless\neither way).\n\nSo, while I agree that putting a negative value in the process ID field of\nBackendData, is arguably incorrect. Given the simplicity of the fix on\nthe pg_basebackup side, I think addressing it in pg_basebackup is\neasier than fixing this in all connection poolers.\n\nSidenote: When PgBouncer is run in peering mode it actually uses the\nfirst two bytes of the PID to encode the peer_id into it. That way it\nknows to which peer it should forward the cancellation message. Thus\nfixing this in PgBouncer would require using other bytes for that.\n\n\n",
"msg_date": "Tue, 5 Sep 2023 12:21:46 +0200",
"msg_from": "Jelte Fennema <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_basebackup: Always return valid temporary slot names"
},
{
"msg_contents": "I modified the PgBouncer PR to always set the sign bit to 0. But I\nstill I think it makes sense to also address this in pg_basebackup.\n\nOn Tue, 5 Sept 2023 at 12:21, Jelte Fennema <[email protected]> wrote:\n>\n> On Tue, 5 Sept 2023 at 11:39, Daniel Gustafsson <[email protected]> wrote:\n> >\n> > > On 5 Sep 2023, at 09:09, Nishant Sharma <[email protected]> wrote:\n> >\n> > > With this, I was thinking, isn't this a problem of pgbouncer filling\n> > > be_pid with random bits? Maybe it should have filled the be_pid\n> > > with a random positive integer instead of any integer as it\n> > > represents a pid?\n> >\n> > I'm inclined to agree that anyone sending a value which is supposed to\n> > represent a PID should be expected to send a value which corresponds to the\n> > format of a PID.\n>\n> When there is a pooler in the middle it already isn't a PID anyway. I\n> took a look at a few other connection poolers and all the ones I\n> looked at (Odyssey and pgcat) do the same: They put random bytes in\n> the be_pid field (and thus can result in negative values). This normally\n> does not cause any problems, because the be_pid value is simply sent\n> back verbatim to the server when canceling a query, which is it's main\n> purpose according to the docs:\n>\n> > This message provides secret-key data that the frontend must save if it wants to be able to issue cancel requests later.\n>\n> Source: https://www.postgresql.org/docs/current/protocol-flow.html#id-1.10.6.7.3\n>\n> For that purpose it's actually more secure to use all bits for random\n> data, instead of keeping one bit always 0.\n>\n> Its main other purpose that I know if is displaying it in a psql\n> prompt, so you know where to attach a debugger. This is completely\n> broken either way as soon as you have a connection pooler in the\n> middle, because you would want to display the Postgres backend PID\n> instead of the random ID that the connection pooler sends back. So if\n> it's negative that's no issue (it displays fine and it's useless\n> either way).\n>\n> So, while I agree that putting a negative value in the process ID field of\n> BackendData, is arguably incorrect. Given the simplicity of the fix on\n> the pg_basebackup side, I think addressing it in pg_basebackup is\n> easier than fixing this in all connection poolers.\n>\n> Sidenote: When PgBouncer is run in peering mode it actually uses the\n> first two bytes of the PID to encode the peer_id into it. That way it\n> knows to which peer it should forward the cancellation message. Thus\n> fixing this in PgBouncer would require using other bytes for that.\n\n\n",
"msg_date": "Tue, 5 Sep 2023 13:10:11 +0200",
"msg_from": "Jelte Fennema <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_basebackup: Always return valid temporary slot names"
},
{
"msg_contents": "> On 5 Sep 2023, at 12:21, Jelte Fennema <[email protected]> wrote:\n> \n> On Tue, 5 Sept 2023 at 11:39, Daniel Gustafsson <[email protected]> wrote:\n>> \n>>> On 5 Sep 2023, at 09:09, Nishant Sharma <[email protected]> wrote:\n>> \n>>> With this, I was thinking, isn't this a problem of pgbouncer filling\n>>> be_pid with random bits? Maybe it should have filled the be_pid\n>>> with a random positive integer instead of any integer as it\n>>> represents a pid?\n>> \n>> I'm inclined to agree that anyone sending a value which is supposed to\n>> represent a PID should be expected to send a value which corresponds to the\n>> format of a PID.\n> \n> When there is a pooler in the middle it already isn't a PID anyway. I\n> took a look at a few other connection poolers and all the ones I\n> looked at (Odyssey and pgcat) do the same: They put random bytes in\n> the be_pid field (and thus can result in negative values). This normally\n> does not cause any problems, because the be_pid value is simply sent\n> back verbatim to the server when canceling a query, which is it's main\n> purpose according to the docs:\n> \n>> This message provides secret-key data that the frontend must save if it wants to be able to issue cancel requests later.\n> \n> Source: https://www.postgresql.org/docs/current/protocol-flow.html#id-1.10.6.7.3\n> \n> For that purpose it's actually more secure to use all bits for random\n> data, instead of keeping one bit always 0.\n\nIf it's common practice to submit a pid which isn't a pid, I wonder if longer\nterm it's worth inventing a value for be_pid which means \"unknown pid\" such\nthat consumers can make informed calls when reading it? Not the job of this\npatch to do so, but maybe food for thought.\n\n> So, while I agree that putting a negative value in the process ID field of\n> BackendData, is arguably incorrect. Given the simplicity of the fix on\n> the pg_basebackup side, I think addressing it in pg_basebackup is\n> easier than fixing this in all connection poolers.\n\nSince the value in the temporary slotname isn't used to convey meaning, but\nmerely to ensure uniqueness, I don't think it's unreasonable to guard aginst\nmalformed input (ie negative integer).\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Tue, 5 Sep 2023 14:06:14 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_basebackup: Always return valid temporary slot names"
},
{
"msg_contents": "On Tue, Sep 5, 2023 at 4:40 PM Jelte Fennema <[email protected]> wrote:\n\n> I modified the PgBouncer PR to always set the sign bit to 0. But I\n> still I think it makes sense to also address this in pg_basebackup.\n\n\nSounds good to me. Thank you!\n\n\nOn Tue, Sep 5, 2023 at 5:36 PM Daniel Gustafsson <[email protected]> wrote:\n\n> Since the value in the temporary slotname isn't used to convey meaning, but\n> merely to ensure uniqueness, I don't think it's unreasonable to guard\n> aginst\n> malformed input (ie negative integer).\n>\n\n Ok. In this case, I also agree.\n\n\n+1 to the patch from my side. Thank you!\n\n\nRegards,\nNishant.\n\nOn Tue, Sep 5, 2023 at 4:40 PM Jelte Fennema <[email protected]> wrote:I modified the PgBouncer PR to always set the sign bit to 0. But Istill I think it makes sense to also address this in pg_basebackup. Sounds good to me. Thank you!On Tue, Sep 5, 2023 at 5:36 PM Daniel Gustafsson <[email protected]> wrote:Since the value in the temporary slotname isn't used to convey meaning, but\nmerely to ensure uniqueness, I don't think it's unreasonable to guard aginst\nmalformed input (ie negative integer). Ok. In this case, I also agree.+1 to the patch from my side. Thank you!Regards,Nishant.",
"msg_date": "Wed, 6 Sep 2023 11:25:35 +0530",
"msg_from": "Nishant Sharma <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_basebackup: Always return valid temporary slot names"
},
{
"msg_contents": "On Tue, Sep 05, 2023 at 02:06:14PM +0200, Daniel Gustafsson wrote:\n>> For that purpose it's actually more secure to use all bits for random\n>> data, instead of keeping one bit always 0.\n> \n> If it's common practice to submit a pid which isn't a pid, I wonder if longer\n> term it's worth inventing a value for be_pid which means \"unknown pid\" such\n> that consumers can make informed calls when reading it? Not the job of this\n> patch to do so, but maybe food for thought.\n\nPerhaps.\n\n>> So, while I agree that putting a negative value in the process ID field of\n>> BackendData, is arguably incorrect. Given the simplicity of the fix on\n>> the pg_basebackup side, I think addressing it in pg_basebackup is\n>> easier than fixing this in all connection poolers.\n> \n> Since the value in the temporary slotname isn't used to convey meaning, but\n> merely to ensure uniqueness, I don't think it's unreasonable to guard aginst\n> malformed input (ie negative integer).\n\nPQbackendPID() returns a signed value, likely coming from the fact\nthat it was thought to be OK back in the days where PIDs were always\ndefined with less bits. The fix is OK taken in isolation, so I am\ngoing to apply it in a few minutes as I'm just passing by..\n\nSaying that, I agree with the point that we should also use %u in\npsql's prompt.c to get a correct PID if the 32th bit is set, and move\non.\n--\nMichael",
"msg_date": "Thu, 7 Sep 2023 13:23:33 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_basebackup: Always return valid temporary slot names"
},
{
"msg_contents": "On Thu, Sep 07, 2023 at 01:23:33PM +0900, Michael Paquier wrote:\n> PQbackendPID() returns a signed value, likely coming from the fact\n> that it was thought to be OK back in the days where PIDs were always\n> defined with less bits. The fix is OK taken in isolation, so I am\n> going to apply it in a few minutes as I'm just passing by..\n\nActually, correcting myself, pid_max cannot be higher than 2^22 on 64b\nmachines even these days (per man 5 proc).\n--\nMichael",
"msg_date": "Thu, 7 Sep 2023 13:28:46 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_basebackup: Always return valid temporary slot names"
}
] |