threads
listlengths
1
2.99k
[ { "msg_contents": "Hi,\n\nThe subscription worker was not getting invalidated when the\nsubscription owner changed from superuser to non-superuser. Here is a\ntest case for the same:\nPublisher:\n CREATE USER repl REPLICATION PASSWORD 'secret';\n CREATE TABLE t(i INT);\n INSERT INTO t VALUES(1);\n GRANT SELECT ON t TO repl;\n CREATE PUBLICATION p1 FOR TABLE t;\n\nSubscriber (has a PGPASSFILE for user \"repl\"):\n CREATE USER u1 SUPERUSER;\n CREATE TABLE t(i INT);\n ALTER TABLE t OWNER TO u1;\n -- no password specified\n CREATE SUBSCRIPTION s1\n CONNECTION 'dbname=postgres host=127.0.0.1 port=5432 user=repl'\n PUBLICATION p1;\n\n ALTER USER u1 NOSUPERUSER: -- Change u1 user to non-superuser\n\nPublisher:\nINSERT INTO t VALUES(1);\n\nSubscriber:\nSELECT COUNT(*) FROM t; -- should have been 1 but is 2, the apply\nworker has not exited after changing from superuser to non-superuser.\n\nFixed this issue by checking if the subscription owner has changed\nfrom superuser to non-superuser in case the pg_authid rows changes.\nThe attached patch has the changes for the same.\nThanks to Jeff Davis for identifying this issue and reporting it at [1].\n\n[1] - https://www.postgresql.org/message-id/5dff4caf26f45ce224a33a5e18e110b93a351b2f.camel%40j-davis.com\n\nRegards,\nVignesh", "msg_date": "Sat, 23 Sep 2023 00:22:00 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": true, "msg_subject": "Invalidate the subscription worker in cases where a user loses their\n superuser status" }, { "msg_contents": "On Sat, Sep 23, 2023 at 1:27 AM vignesh C <[email protected]> wrote:\n>\n>\n> Fixed this issue by checking if the subscription owner has changed\n> from superuser to non-superuser in case the pg_authid rows changes.\n> The attached patch has the changes for the same.\n>\n\n@@ -3952,7 +3953,9 @@ maybe_reread_subscription(void)\n newsub->passwordrequired != MySubscription->passwordrequired ||\n strcmp(newsub->origin, MySubscription->origin) != 0 ||\n newsub->owner != MySubscription->owner ||\n- !equal(newsub->publications, MySubscription->publications))\n+ !equal(newsub->publications, MySubscription->publications) ||\n+ (!superuser_arg(MySubscription->owner) &&\n+ MySubscription->isownersuperuser))\n {\n if (am_parallel_apply_worker())\n ereport(LOG,\n@@ -4605,6 +4608,13 @@ InitializeLogRepWorker(void)\n proc_exit(0);\n }\n\n+ /*\n+ * Fetch subscription owner is a superuser. This value will be later\n+ * checked to see when there is any change with this role and the worker\n+ * will be restarted if required.\n+ */\n+ MySubscription->isownersuperuser = superuser_arg(MySubscription->owner);\n\nWhy didn't you filled this parameter in GetSubscription() like other\nparameters? If we do that then the comparison of first change in your\npatch will look similar to all other comparisons.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Sat, 23 Sep 2023 11:28:04 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Invalidate the subscription worker in cases where a user loses\n their superuser status" }, { "msg_contents": "On Sat, 23 Sept 2023 at 11:28, Amit Kapila <[email protected]> wrote:\n>\n> On Sat, Sep 23, 2023 at 1:27 AM vignesh C <[email protected]> wrote:\n> >\n> >\n> > Fixed this issue by checking if the subscription owner has changed\n> > from superuser to non-superuser in case the pg_authid rows changes.\n> > The attached patch has the changes for the same.\n> >\n>\n> @@ -3952,7 +3953,9 @@ maybe_reread_subscription(void)\n> newsub->passwordrequired != MySubscription->passwordrequired ||\n> strcmp(newsub->origin, MySubscription->origin) != 0 ||\n> newsub->owner != MySubscription->owner ||\n> - !equal(newsub->publications, MySubscription->publications))\n> + !equal(newsub->publications, MySubscription->publications) ||\n> + (!superuser_arg(MySubscription->owner) &&\n> + MySubscription->isownersuperuser))\n> {\n> if (am_parallel_apply_worker())\n> ereport(LOG,\n> @@ -4605,6 +4608,13 @@ InitializeLogRepWorker(void)\n> proc_exit(0);\n> }\n>\n> + /*\n> + * Fetch subscription owner is a superuser. This value will be later\n> + * checked to see when there is any change with this role and the worker\n> + * will be restarted if required.\n> + */\n> + MySubscription->isownersuperuser = superuser_arg(MySubscription->owner);\n>\n> Why didn't you filled this parameter in GetSubscription() like other\n> parameters? If we do that then the comparison of first change in your\n> patch will look similar to all other comparisons.\n\nI felt this variable need not be added to the pg_subscription catalog\ntable, instead we could save the state of subscription owner when the\nworker is started and compare this value during invalidations. As this\ninformation is added only to the memory Subscription structure and not\nadded to the catalog FormData_pg_subscription, the checking is\nslightly different in this case. Also since this variable will be used\nonly within the worker, I felt we need not add it to the catalog.\n\nRegards,\nVignesh\n\n\n", "msg_date": "Mon, 25 Sep 2023 00:32:03 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Invalidate the subscription worker in cases where a user loses\n their superuser status" }, { "msg_contents": "On Mon, 25 Sept 2023 at 00:32, vignesh C <[email protected]> wrote:\n>\n> On Sat, 23 Sept 2023 at 11:28, Amit Kapila <[email protected]> wrote:\n> >\n> > On Sat, Sep 23, 2023 at 1:27 AM vignesh C <[email protected]> wrote:\n> > >\n> > >\n> > > Fixed this issue by checking if the subscription owner has changed\n> > > from superuser to non-superuser in case the pg_authid rows changes.\n> > > The attached patch has the changes for the same.\n> > >\n> >\n> > @@ -3952,7 +3953,9 @@ maybe_reread_subscription(void)\n> > newsub->passwordrequired != MySubscription->passwordrequired ||\n> > strcmp(newsub->origin, MySubscription->origin) != 0 ||\n> > newsub->owner != MySubscription->owner ||\n> > - !equal(newsub->publications, MySubscription->publications))\n> > + !equal(newsub->publications, MySubscription->publications) ||\n> > + (!superuser_arg(MySubscription->owner) &&\n> > + MySubscription->isownersuperuser))\n> > {\n> > if (am_parallel_apply_worker())\n> > ereport(LOG,\n> > @@ -4605,6 +4608,13 @@ InitializeLogRepWorker(void)\n> > proc_exit(0);\n> > }\n> >\n> > + /*\n> > + * Fetch subscription owner is a superuser. This value will be later\n> > + * checked to see when there is any change with this role and the worker\n> > + * will be restarted if required.\n> > + */\n> > + MySubscription->isownersuperuser = superuser_arg(MySubscription->owner);\n> >\n> > Why didn't you filled this parameter in GetSubscription() like other\n> > parameters? If we do that then the comparison of first change in your\n> > patch will look similar to all other comparisons.\n>\n> I felt this variable need not be added to the pg_subscription catalog\n> table, instead we could save the state of subscription owner when the\n> worker is started and compare this value during invalidations. As this\n> information is added only to the memory Subscription structure and not\n> added to the catalog FormData_pg_subscription, the checking is\n> slightly different in this case. Also since this variable will be used\n> only within the worker, I felt we need not add it to the catalog.\n\nOn further thinking I felt getting superuser can be moved to\nGetSubscription which will make the code consistent with other\nchecking and will fix the comment Amit had given, the attached version\nhas the change for the same.\n\nRegards,\nVignesh", "msg_date": "Mon, 25 Sep 2023 13:05:03 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Invalidate the subscription worker in cases where a user loses\n their superuser status" }, { "msg_contents": "Here are some comments for patch v2-0001.\n\n======\nsrc/backend/replication/logical/worker.c\n\n1. maybe_reread_subscription\n\nereport(LOG,\n (errmsg(\"logical replication worker for subscription \\\"%s\\\"\nwill restart because of a parameter change\",\n MySubscription->name)));\n\nIs this really a \"parameter change\" though? It might be a stretch to\ncall the user role change a subscription parameter change. Perhaps\nthis case needs its own LOG message?\n\n======\nsrc/include/catalog/pg_subscription.h\n\n2.\n char *origin; /* Only publish data originating from the\n * specified origin */\n+ bool isownersuperuser; /* Is subscription owner superuser? */\n } Subscription;\n\n\nIs a new Subscription field member really necessary? The Subscription\nalready has 'owner' -- why doesn't function\nmaybe_reread_subscription() just check:\n\n(!superuser_arg(sub->owner) && superuser_arg(MySubscription->owner))\n\n======\nsrc/test/subscription/t/027_nosuperuser.pl\n\n3.\n# The apply worker should get restarted after the superuser prvileges are\n# revoked for subscription owner alice.\n\ntypo\n\n/prvileges/privileges/\n\n~\n\n4.\n+# After the user becomes non-superuser the apply worker should be restarted and\n+# it should fail with 'password is required' error as password option is not\n+# part of the connection string.\n\n/as password option/because the password option/\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia.\n\n\n", "msg_date": "Tue, 26 Sep 2023 17:32:58 +1000", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Invalidate the subscription worker in cases where a user loses\n their superuser status" }, { "msg_contents": "On Tue, 26 Sept 2023 at 13:03, Peter Smith <[email protected]> wrote:\n>\n> Here are some comments for patch v2-0001.\n>\n> ======\n> src/backend/replication/logical/worker.c\n>\n> 1. maybe_reread_subscription\n>\n> ereport(LOG,\n> (errmsg(\"logical replication worker for subscription \\\"%s\\\"\n> will restart because of a parameter change\",\n> MySubscription->name)));\n>\n> Is this really a \"parameter change\" though? It might be a stretch to\n> call the user role change a subscription parameter change. Perhaps\n> this case needs its own LOG message?\n\nWhen I was doing this change the same thought had come to my mind too\nbut later I noticed that in case of owner change there was no separate\nlog message. Since superuser check is somewhat similar to owner\nchange, I felt no need to make any change for this.\n\n> ======\n> src/include/catalog/pg_subscription.h\n>\n> 2.\n> char *origin; /* Only publish data originating from the\n> * specified origin */\n> + bool isownersuperuser; /* Is subscription owner superuser? */\n> } Subscription;\n>\n>\n> Is a new Subscription field member really necessary? The Subscription\n> already has 'owner' -- why doesn't function\n> maybe_reread_subscription() just check:\n>\n> (!superuser_arg(sub->owner) && superuser_arg(MySubscription->owner))\n\nWe need the new variable so that we store this value when the worker\nis started and check the current value with the value that was when\nthe worker was started. Since we need the value of the superuser\nstatus when the worker is started, I feel this variable is required.\n\n> ======\n> src/test/subscription/t/027_nosuperuser.pl\n>\n> 3.\n> # The apply worker should get restarted after the superuser prvileges are\n> # revoked for subscription owner alice.\n>\n> typo\n>\n> /prvileges/privileges/\n>.\n> ~\n\nI will change this in the next version\n\n> 4.\n> +# After the user becomes non-superuser the apply worker should be restarted and\n> +# it should fail with 'password is required' error as password option is not\n> +# part of the connection string.\n>\n> /as password option/because the password option/\n\nI will change this in the next version\n\nRegards,\nVignesh\n\n\n", "msg_date": "Tue, 26 Sep 2023 19:27:34 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Invalidate the subscription worker in cases where a user loses\n their superuser status" }, { "msg_contents": "On Tue, Sep 26, 2023 at 11:57 PM vignesh C <[email protected]> wrote:\n>\n> On Tue, 26 Sept 2023 at 13:03, Peter Smith <[email protected]> wrote:\n> >\n> > Here are some comments for patch v2-0001.\n> >\n> > ======\n> > src/backend/replication/logical/worker.c\n> >\n> > 1. maybe_reread_subscription\n> >\n> > ereport(LOG,\n> > (errmsg(\"logical replication worker for subscription \\\"%s\\\"\n> > will restart because of a parameter change\",\n> > MySubscription->name)));\n> >\n> > Is this really a \"parameter change\" though? It might be a stretch to\n> > call the user role change a subscription parameter change. Perhaps\n> > this case needs its own LOG message?\n>\n> When I was doing this change the same thought had come to my mind too\n> but later I noticed that in case of owner change there was no separate\n> log message. Since superuser check is somewhat similar to owner\n> change, I felt no need to make any change for this.\n>\n\nYeah, I had seen the same already before my comment. Anyway, YMMV.\n\n> > ======\n> > src/include/catalog/pg_subscription.h\n> >\n> > 2.\n> > char *origin; /* Only publish data originating from the\n> > * specified origin */\n> > + bool isownersuperuser; /* Is subscription owner superuser? */\n> > } Subscription;\n> >\n> >\n> > Is a new Subscription field member really necessary? The Subscription\n> > already has 'owner' -- why doesn't function\n> > maybe_reread_subscription() just check:\n> >\n> > (!superuser_arg(sub->owner) && superuser_arg(MySubscription->owner))\n>\n> We need the new variable so that we store this value when the worker\n> is started and check the current value with the value that was when\n> the worker was started. Since we need the value of the superuser\n> status when the worker is started, I feel this variable is required.\n>\n\nOK. In that case, then shouldn't the patch replace the other\nsuperuser_arg() code already in function run_apply_worker() to make\nuse of this variable? Otherwise, there are 2 ways of getting the same\ninformation.\n\n======\nsrc/test/subscription/t/027_nosuperuser.pl\n\nI am suspicious that something may be wrong with either the code or\nthe test because while experimenting, I accidentally found that even\nif I *completely* remove the important change below, the TAP test will\nstill pass anyway.\n\n- !equal(newsub->publications, MySubscription->publications))\n+ !equal(newsub->publications, MySubscription->publications) ||\n+ (!newsub->isownersuperuser && MySubscription->isownersuperuser))\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Wed, 27 Sep 2023 11:28:19 +1000", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Invalidate the subscription worker in cases where a user loses\n their superuser status" }, { "msg_contents": "On Tue, 26 Sept 2023 at 13:03, Peter Smith <[email protected]> wrote:\n>\n> Here are some comments for patch v2-0001.\n> ======\n> src/test/subscription/t/027_nosuperuser.pl\n>\n> 3.\n> # The apply worker should get restarted after the superuser prvileges are\n> # revoked for subscription owner alice.\n>\n> typo\n>\n> /prvileges/privileges/\n\nModified\n\n> ~\n>\n> 4.\n> +# After the user becomes non-superuser the apply worker should be restarted and\n> +# it should fail with 'password is required' error as password option is not\n> +# part of the connection string.\n>\n> /as password option/because the password option/\n\nModified\n\nThe attached v3 version patch has the changes for the same.\n\nRegards,\nVignesh", "msg_date": "Wed, 27 Sep 2023 11:13:42 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Invalidate the subscription worker in cases where a user loses\n their superuser status" }, { "msg_contents": "On Wed, 27 Sept 2023 at 06:58, Peter Smith <[email protected]> wrote:\n>\n> On Tue, Sep 26, 2023 at 11:57 PM vignesh C <[email protected]> wrote:\n> >\n> > On Tue, 26 Sept 2023 at 13:03, Peter Smith <[email protected]> wrote:\n> > >\n> > > Here are some comments for patch v2-0001.\n> > >\n> > > ======\n> > > src/backend/replication/logical/worker.c\n> > >\n> > > 1. maybe_reread_subscription\n> > >\n> > > ereport(LOG,\n> > > (errmsg(\"logical replication worker for subscription \\\"%s\\\"\n> > > will restart because of a parameter change\",\n> > > MySubscription->name)));\n> > >\n> > > Is this really a \"parameter change\" though? It might be a stretch to\n> > > call the user role change a subscription parameter change. Perhaps\n> > > this case needs its own LOG message?\n> >\n> > When I was doing this change the same thought had come to my mind too\n> > but later I noticed that in case of owner change there was no separate\n> > log message. Since superuser check is somewhat similar to owner\n> > change, I felt no need to make any change for this.\n> >\n>\n> Yeah, I had seen the same already before my comment. Anyway, YMMV.\n>\n> > > ======\n> > > src/include/catalog/pg_subscription.h\n> > >\n> > > 2.\n> > > char *origin; /* Only publish data originating from the\n> > > * specified origin */\n> > > + bool isownersuperuser; /* Is subscription owner superuser? */\n> > > } Subscription;\n> > >\n> > >\n> > > Is a new Subscription field member really necessary? The Subscription\n> > > already has 'owner' -- why doesn't function\n> > > maybe_reread_subscription() just check:\n> > >\n> > > (!superuser_arg(sub->owner) && superuser_arg(MySubscription->owner))\n> >\n> > We need the new variable so that we store this value when the worker\n> > is started and check the current value with the value that was when\n> > the worker was started. Since we need the value of the superuser\n> > status when the worker is started, I feel this variable is required.\n> >\n>\n> OK. In that case, then shouldn't the patch replace the other\n> superuser_arg() code already in function run_apply_worker() to make\n> use of this variable? Otherwise, there are 2 ways of getting the same\n> information.\n\nModified\n\n> ======\n> src/test/subscription/t/027_nosuperuser.pl\n>\n> I am suspicious that something may be wrong with either the code or\n> the test because while experimenting, I accidentally found that even\n> if I *completely* remove the important change below, the TAP test will\n> still pass anyway.\n>\n> - !equal(newsub->publications, MySubscription->publications))\n> + !equal(newsub->publications, MySubscription->publications) ||\n> + (!newsub->isownersuperuser && MySubscription->isownersuperuser))\n\nThe test did not wait for subscription to sync, as the owner has\nchanged to non-superuser the tablesync were getting started later and\nfailing with this error in HEAD. Now I have added wait for\nsubscription to sync, so the error will always come from apply worker\nrestart and also when the test is run on HEAD we can notice that the\nerror message does not appear in the log.\n\nThe v3 patch attached at [1] has the changes for the same.\n[1] - https://www.postgresql.org/message-id/CALDaNm2JwPmX0rhkTyjGRQmZjwbKax%3D3Ytgw2KY9msDPPNGmBg%40mail.gmail.com\n\nRegards,\nVignesh\n\n\n", "msg_date": "Wed, 27 Sep 2023 11:17:42 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Invalidate the subscription worker in cases where a user loses\n their superuser status" }, { "msg_contents": "On Wed, Sep 27, 2023 at 6:58 AM Peter Smith <[email protected]> wrote:\n>\n> On Tue, Sep 26, 2023 at 11:57 PM vignesh C <[email protected]> wrote:\n> >\n> > On Tue, 26 Sept 2023 at 13:03, Peter Smith <[email protected]> wrote:\n> > >\n> > > Here are some comments for patch v2-0001.\n> > >\n> > > ======\n> > > src/backend/replication/logical/worker.c\n> > >\n> > > 1. maybe_reread_subscription\n> > >\n> > > ereport(LOG,\n> > > (errmsg(\"logical replication worker for subscription \\\"%s\\\"\n> > > will restart because of a parameter change\",\n> > > MySubscription->name)));\n> > >\n> > > Is this really a \"parameter change\" though? It might be a stretch to\n> > > call the user role change a subscription parameter change. Perhaps\n> > > this case needs its own LOG message?\n> >\n> > When I was doing this change the same thought had come to my mind too\n> > but later I noticed that in case of owner change there was no separate\n> > log message. Since superuser check is somewhat similar to owner\n> > change, I felt no need to make any change for this.\n> >\n>\n> Yeah, I had seen the same already before my comment. Anyway, YMMV.\n>\n\nBut OTOH, the owner of the subscription can be changed by the Alter\nSubscription command whereas superuser status can't be changed. I\nthink we should consider changing the message for this case.\n\nBTW, do we want to backpatch this patch? I think we should backatch to\nPG16 as it impacts password_required functionality. Before this patch\neven if the subscription owner's superuser status is lost, it won't\nuse a password for connection till the server gets restarted or the\napply worker gets restarted due to some other reason. What do you\nthink?\n\nAdding Jeff and Robert to see what is their opinion on whether we\nshould backpatch this or not.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 27 Sep 2023 12:27:55 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Invalidate the subscription worker in cases where a user loses\n their superuser status" }, { "msg_contents": "On Wed, 27 Sept 2023 at 12:28, Amit Kapila <[email protected]> wrote:\n>\n> On Wed, Sep 27, 2023 at 6:58 AM Peter Smith <[email protected]> wrote:\n> >\n> > On Tue, Sep 26, 2023 at 11:57 PM vignesh C <[email protected]> wrote:\n> > >\n> > > On Tue, 26 Sept 2023 at 13:03, Peter Smith <[email protected]> wrote:\n> > > >\n> > > > Here are some comments for patch v2-0001.\n> > > >\n> > > > ======\n> > > > src/backend/replication/logical/worker.c\n> > > >\n> > > > 1. maybe_reread_subscription\n> > > >\n> > > > ereport(LOG,\n> > > > (errmsg(\"logical replication worker for subscription \\\"%s\\\"\n> > > > will restart because of a parameter change\",\n> > > > MySubscription->name)));\n> > > >\n> > > > Is this really a \"parameter change\" though? It might be a stretch to\n> > > > call the user role change a subscription parameter change. Perhaps\n> > > > this case needs its own LOG message?\n> > >\n> > > When I was doing this change the same thought had come to my mind too\n> > > but later I noticed that in case of owner change there was no separate\n> > > log message. Since superuser check is somewhat similar to owner\n> > > change, I felt no need to make any change for this.\n> > >\n> >\n> > Yeah, I had seen the same already before my comment. Anyway, YMMV.\n> >\n>\n> But OTOH, the owner of the subscription can be changed by the Alter\n> Subscription command whereas superuser status can't be changed. I\n> think we should consider changing the message for this case.\n\nModified\n\n> BTW, do we want to backpatch this patch? I think we should backatch to\n> PG16 as it impacts password_required functionality. Before this patch\n> even if the subscription owner's superuser status is lost, it won't\n> use a password for connection till the server gets restarted or the\n> apply worker gets restarted due to some other reason. What do you\n> think?\n\nI felt since password_required functionality is there in PG16, we\nshould fix this in PG16 too. I have checked that password_required\nfunctionality is not there in PG15, so no need to make any change in\nPG15\n\nThe updated patch has the changes for the same.\n\nRegards,\nVignesh", "msg_date": "Thu, 28 Sep 2023 16:54:58 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Invalidate the subscription worker in cases where a user loses\n their superuser status" }, { "msg_contents": "On Wed, Sep 27, 2023 at 2:58 AM Amit Kapila <[email protected]> wrote:\n> But OTOH, the owner of the subscription can be changed by the Alter\n> Subscription command whereas superuser status can't be changed. I\n> think we should consider changing the message for this case.\n\nThe superuser status of the subscription owner is definitely *not* a\nparameter of the subscription, so it doesn't seem like the same\nmessage is appropriate.\n\n> Adding Jeff and Robert to see what is their opinion on whether we\n> should backpatch this or not.\n\nI guess it depends on whether we think this is a bug. I think you\ncould argue it either way.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 28 Sep 2023 11:34:14 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Invalidate the subscription worker in cases where a user loses\n their superuser status" }, { "msg_contents": "On Thu, 2023-09-28 at 11:34 -0400, Robert Haas wrote:\n> I guess it depends on whether we think this is a bug. I think you\n> could argue it either way.\n\nI'd suggest backporting to 16 unless there's some kind of difficulty.\nOtherwise we have a minor difference in behavior for no reason.\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Thu, 28 Sep 2023 10:52:33 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Invalidate the subscription worker in cases where a user loses\n their superuser status" }, { "msg_contents": "Some minor review comments for v4-0001:\n\n======\nsrc/backend/replication/logical/worker.c\n\n1.\n+ /*\n+ * Exit if the owner of the subscription has changed from superuser to a\n+ * non-superuser.\n+ */\n+ if (!newsub->isownersuperuser && MySubscription->isownersuperuser)\n+ {\n+ if (am_parallel_apply_worker())\n+ ereport(LOG,\n+ errmsg(\"logical replication parallel apply worker for subscription\n\\\"%s\\\" will stop because subscription owner has become non-superuser\",\n+ MySubscription->name));\n+ else\n+ ereport(LOG,\n+ errmsg(\"logical replication worker for subscription \\\"%s\\\" will\nrestart because subscription owner has become non-superuser\",\n+ MySubscription->name));\n+\n+ apply_worker_exit();\n+ }\n+\n\n/because subscription owner has become non-superuser/because the\nsubscription owner has become a non-superuser/ (in 2 places)\n\n======\nsrc/include/catalog/pg_subscription.h\n\n2.\n char *origin; /* Only publish data originating from the\n * specified origin */\n+ bool isownersuperuser; /* Is subscription owner superuser? */\n } Subscription;\n\n~\n\n2a.\nWould it be better to put this new field adjacent to the existing\n'owner' field, since they kind of belong together?\n\n~\n\n2b.\nNone of the other bool fields here has an 'is' prefix, so you could\nconsider a shorter field name, like 'ownersuperuser' or\n'superuserowner', etc.\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Fri, 29 Sep 2023 09:24:52 +1000", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Invalidate the subscription worker in cases where a user loses\n their superuser status" }, { "msg_contents": "On Fri, 29 Sept 2023 at 04:55, Peter Smith <[email protected]> wrote:\n>\n> Some minor review comments for v4-0001:\n>\n> ======\n> src/backend/replication/logical/worker.c\n>\n> 1.\n> + /*\n> + * Exit if the owner of the subscription has changed from superuser to a\n> + * non-superuser.\n> + */\n> + if (!newsub->isownersuperuser && MySubscription->isownersuperuser)\n> + {\n> + if (am_parallel_apply_worker())\n> + ereport(LOG,\n> + errmsg(\"logical replication parallel apply worker for subscription\n> \\\"%s\\\" will stop because subscription owner has become non-superuser\",\n> + MySubscription->name));\n> + else\n> + ereport(LOG,\n> + errmsg(\"logical replication worker for subscription \\\"%s\\\" will\n> restart because subscription owner has become non-superuser\",\n> + MySubscription->name));\n> +\n> + apply_worker_exit();\n> + }\n> +\n>\n> /because subscription owner has become non-superuser/because the\n> subscription owner has become a non-superuser/ (in 2 places)\n\nModified\n\n> ======\n> src/include/catalog/pg_subscription.h\n>\n> 2.\n> char *origin; /* Only publish data originating from the\n> * specified origin */\n> + bool isownersuperuser; /* Is subscription owner superuser? */\n> } Subscription;\n>\n> ~\n>\n> 2a.\n> Would it be better to put this new field adjacent to the existing\n> 'owner' field, since they kind of belong together?\n\nModified\n\n> ~\n>\n> 2b.\n> None of the other bool fields here has an 'is' prefix, so you could\n> consider a shorter field name, like 'ownersuperuser' or\n> 'superuserowner', etc.\n\nModified\n\nThe attached v5 version patch has the changes for the same.\n\nRegards,\nVignesh", "msg_date": "Fri, 29 Sep 2023 11:22:14 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Invalidate the subscription worker in cases where a user loses\n their superuser status" }, { "msg_contents": "Some review comments for v5.\n\n======\nsrc/backend/catalog/pg_subscription.c\n\n1. GetSubscription - comment\n\n+ /* Get superuser for subscription owner */\n+ sub->ownersuperuser = superuser_arg(sub->owner);\n+\n\nThe comment doesn't seem very good.\n\nSUGGESTION\n/* Is the subscription owner a superuser? */\n\n======\n\n2. General - consistency\n\nBelow are the code fragments using the new Subscription field.\n\nAlterSubscription_refresh:\nmust_use_password = !sub->ownersuperuser && sub->passwordrequired;\n\nAlterSubscription:\nwalrcv_check_conninfo(stmt->conninfo, sub->passwordrequired &&\n!sub->ownersuperuser);\n\nLogicalRepSyncTableStart:\nmust_use_password = MySubscription->passwordrequired &&\n!MySubscription->ownersuperuser;\n\nrun_apply_worker:\nmust_use_password = MySubscription->passwordrequired &&\n!MySubscription->ownersuperuser;\n\n~\n\nIt is not a difference caused by this patch, but since you are\nmodifying these lines anyway, I felt it would be better if all the\nexpressions were consistent. So, in AlterSubscription_refresh IMO it\nwould be better like:\n\nBEFORE\nmust_use_password = !sub->ownersuperuser && sub->passwordrequired;\n\nSUGGESTION\nmust_use_password = sub->passwordrequired && !sub->ownersuperuser;\n\n======\n\nOther than those trivial things, v5 LGTM.\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Tue, 3 Oct 2023 11:39:09 +1100", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Invalidate the subscription worker in cases where a user loses\n their superuser status" }, { "msg_contents": "On Tue, 3 Oct 2023 at 06:09, Peter Smith <[email protected]> wrote:\n>\n> Some review comments for v5.\n>\n> ======\n> src/backend/catalog/pg_subscription.c\n>\n> 1. GetSubscription - comment\n>\n> + /* Get superuser for subscription owner */\n> + sub->ownersuperuser = superuser_arg(sub->owner);\n> +\n>\n> The comment doesn't seem very good.\n>\n> SUGGESTION\n> /* Is the subscription owner a superuser? */\n\nModified\n\n> ======\n>\n> 2. General - consistency\n>\n> Below are the code fragments using the new Subscription field.\n>\n> AlterSubscription_refresh:\n> must_use_password = !sub->ownersuperuser && sub->passwordrequired;\n>\n> AlterSubscription:\n> walrcv_check_conninfo(stmt->conninfo, sub->passwordrequired &&\n> !sub->ownersuperuser);\n>\n> LogicalRepSyncTableStart:\n> must_use_password = MySubscription->passwordrequired &&\n> !MySubscription->ownersuperuser;\n>\n> run_apply_worker:\n> must_use_password = MySubscription->passwordrequired &&\n> !MySubscription->ownersuperuser;\n>\n> ~\n>\n> It is not a difference caused by this patch, but since you are\n> modifying these lines anyway, I felt it would be better if all the\n> expressions were consistent. So, in AlterSubscription_refresh IMO it\n> would be better like:\n>\n> BEFORE\n> must_use_password = !sub->ownersuperuser && sub->passwordrequired;\n>\n> SUGGESTION\n> must_use_password = sub->passwordrequired && !sub->ownersuperuser;\n\nModified\n\nThanks for the comments, the attached v6 version patch has the changes\nfor the same.\n\nRegards,\nVignesh", "msg_date": "Tue, 3 Oct 2023 12:12:35 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Invalidate the subscription worker in cases where a user loses\n their superuser status" }, { "msg_contents": "On Tue, Oct 3, 2023 at 5:42 PM vignesh C <[email protected]> wrote:\n>\n> Thanks for the comments, the attached v6 version patch has the changes\n> for the same.\n>\n\nv6 LGTM.\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Tue, 3 Oct 2023 17:51:30 +1100", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Invalidate the subscription worker in cases where a user loses\n their superuser status" }, { "msg_contents": "On Wed, 4 Oct 2023 at 16:56, Peter Smith <[email protected]> wrote:\n>\n> On Tue, Oct 3, 2023 at 5:42 PM vignesh C <[email protected]> wrote:\n> >\n> > Thanks for the comments, the attached v6 version patch has the changes\n> > for the same.\n> >\n>\n> v6 LGTM.\n>\nI have verified the patch and it is working fine for me.\n\n\n", "msg_date": "Wed, 4 Oct 2023 17:01:00 +0530", "msg_from": "Shlok Kyal <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Invalidate the subscription worker in cases where a user loses\n their superuser status" }, { "msg_contents": "On Wed, 4 Oct 2023 at 17:01, Shlok Kyal <[email protected]> wrote:\n>\n> On Wed, 4 Oct 2023 at 16:56, Peter Smith <[email protected]> wrote:\n> >\n> > On Tue, Oct 3, 2023 at 5:42 PM vignesh C <[email protected]> wrote:\n> > >\n> > > Thanks for the comments, the attached v6 version patch has the changes\n> > > for the same.\n> > >\n> >\n> > v6 LGTM.\n> >\n> I have verified the patch and it is working fine for me.\n\nI have created the commitfest entry for this at:\nhttps://commitfest.postgresql.org/45/4595/\n\nRegards.\nVignesh\n\n\n", "msg_date": "Fri, 6 Oct 2023 17:30:54 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Invalidate the subscription worker in cases where a user loses\n their superuser status" }, { "msg_contents": "On Tue, Oct 3, 2023 at 12:12 PM vignesh C <[email protected]> wrote:\n>\n> Thanks for the comments, the attached v6 version patch has the changes\n> for the same.\n>\n\nFew comments:\n=============\n1.\n/* Is the use of a password mandatory? */\n must_use_password = MySubscription->passwordrequired &&\n- !superuser_arg(MySubscription->owner);\n+ !MySubscription->ownersuperuser;\n\n- /* Note that the superuser_arg call can access the DB */\n CommitTransactionCommand();\n\nWe can call CommitTransactionCommand() before the above check now. It\nwas done afterward to invoke superuser_arg(), so, if that requirement\nis changed, we no longer need to keep the transaction open for a\nlonger time. Please check other places for similar changes.\n\n2.\n+ ereport(LOG,\n+ errmsg(\"logical replication worker for subscription \\\"%s\\\" will\nrestart because the subscription owner has become a non-superuser\",\n\nHow about something on the below lines?\nlogical replication worker for subscription \\\"%s\\\" will restart\nbecause superuser privileges have been revoked for the subscription\nowner\nOR\nlogical replication worker for subscription \\\"%s\\\" will restart\nbecause the subscription owner's superuser privileges have been\nrevoked\n\n3.\n- /* Keep us informed about subscription changes. */\n+ /*\n+ * Keep us informed about subscription changes or pg_authid rows.\n+ * (superuser can become non-superuser.)\n+ */\n\nLet's slightly change the comment to: \"Keep us informed about\nsubscription or role changes. Note that role's superuser privilege can\nbe revoked.\"\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Sat, 7 Oct 2023 08:12:37 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Invalidate the subscription worker in cases where a user loses\n their superuser status" }, { "msg_contents": "On Sat, 7 Oct 2023 at 08:12, Amit Kapila <[email protected]> wrote:\n>\n> On Tue, Oct 3, 2023 at 12:12 PM vignesh C <[email protected]> wrote:\n> >\n> > Thanks for the comments, the attached v6 version patch has the changes\n> > for the same.\n> >\n>\n> Few comments:\n> =============\n> 1.\n> /* Is the use of a password mandatory? */\n> must_use_password = MySubscription->passwordrequired &&\n> - !superuser_arg(MySubscription->owner);\n> + !MySubscription->ownersuperuser;\n>\n> - /* Note that the superuser_arg call can access the DB */\n> CommitTransactionCommand();\n>\n> We can call CommitTransactionCommand() before the above check now. It\n> was done afterward to invoke superuser_arg(), so, if that requirement\n> is changed, we no longer need to keep the transaction open for a\n> longer time. Please check other places for similar changes.\n\nModified\n\n> 2.\n> + ereport(LOG,\n> + errmsg(\"logical replication worker for subscription \\\"%s\\\" will\n> restart because the subscription owner has become a non-superuser\",\n>\n> How about something on the below lines?\n> logical replication worker for subscription \\\"%s\\\" will restart\n> because superuser privileges have been revoked for the subscription\n> owner\n> OR\n> logical replication worker for subscription \\\"%s\\\" will restart\n> because the subscription owner's superuser privileges have been\n> revoked\n\nModified\n\n> 3.\n> - /* Keep us informed about subscription changes. */\n> + /*\n> + * Keep us informed about subscription changes or pg_authid rows.\n> + * (superuser can become non-superuser.)\n> + */\n>\n> Let's slightly change the comment to: \"Keep us informed about\n> subscription or role changes. Note that role's superuser privilege can\n> be revoked.\"\n\nModified\n\nThe attached v7 version patch has the changes for the same.\n\nRegards,\nVignesh", "msg_date": "Sun, 8 Oct 2023 08:22:28 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Invalidate the subscription worker in cases where a user loses\n their superuser status" }, { "msg_contents": "On Sun, Oct 8, 2023 at 8:22 AM vignesh C <[email protected]> wrote:\n>\n\n--- a/src/include/catalog/pg_subscription.h\n+++ b/src/include/catalog/pg_subscription.h\n@@ -127,6 +127,7 @@ typedef struct Subscription\n * skipped */\n char *name; /* Name of the subscription */\n Oid owner; /* Oid of the subscription owner */\n+ bool ownersuperuser; /* Is the subscription owner a superuser? */\n bool enabled; /* Indicates if the subscription is enabled */\n bool binary; /* Indicates if the subscription wants data in\n * binary format */\n\nWe normally don't change the exposed structure in back branches as\nthat poses a risk of breaking extensions. In this case, if we want, we\ncan try to squeeze some padding space or we even can fix it without\nintroducing a new member. OTOH, it is already debatable whether to fix\nit in back branches, so we can even commit this patch just in HEAD.\n\nThoughts?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 12 Oct 2023 11:10:36 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Invalidate the subscription worker in cases where a user loses\n their superuser status" }, { "msg_contents": "On Thu, 12 Oct 2023 at 11:10, Amit Kapila <[email protected]> wrote:\n>\n> On Sun, Oct 8, 2023 at 8:22 AM vignesh C <[email protected]> wrote:\n> >\n>\n> --- a/src/include/catalog/pg_subscription.h\n> +++ b/src/include/catalog/pg_subscription.h\n> @@ -127,6 +127,7 @@ typedef struct Subscription\n> * skipped */\n> char *name; /* Name of the subscription */\n> Oid owner; /* Oid of the subscription owner */\n> + bool ownersuperuser; /* Is the subscription owner a superuser? */\n> bool enabled; /* Indicates if the subscription is enabled */\n> bool binary; /* Indicates if the subscription wants data in\n> * binary format */\n>\n> We normally don't change the exposed structure in back branches as\n> that poses a risk of breaking extensions. In this case, if we want, we\n> can try to squeeze some padding space or we even can fix it without\n> introducing a new member. OTOH, it is already debatable whether to fix\n> it in back branches, so we can even commit this patch just in HEAD.\n\nI too feel we can commit this patch only in HEAD.\n\nRegards,\nVignesh\n\n\n", "msg_date": "Fri, 13 Oct 2023 10:04:19 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Invalidate the subscription worker in cases where a user loses\n their superuser status" }, { "msg_contents": "On Fri, Oct 13, 2023 at 10:04 AM vignesh C <[email protected]> wrote:\n>\n> On Thu, 12 Oct 2023 at 11:10, Amit Kapila <[email protected]> wrote:\n> >\n> > On Sun, Oct 8, 2023 at 8:22 AM vignesh C <[email protected]> wrote:\n> > >\n> >\n> > --- a/src/include/catalog/pg_subscription.h\n> > +++ b/src/include/catalog/pg_subscription.h\n> > @@ -127,6 +127,7 @@ typedef struct Subscription\n> > * skipped */\n> > char *name; /* Name of the subscription */\n> > Oid owner; /* Oid of the subscription owner */\n> > + bool ownersuperuser; /* Is the subscription owner a superuser? */\n> > bool enabled; /* Indicates if the subscription is enabled */\n> > bool binary; /* Indicates if the subscription wants data in\n> > * binary format */\n> >\n> > We normally don't change the exposed structure in back branches as\n> > that poses a risk of breaking extensions. In this case, if we want, we\n> > can try to squeeze some padding space or we even can fix it without\n> > introducing a new member. OTOH, it is already debatable whether to fix\n> > it in back branches, so we can even commit this patch just in HEAD.\n>\n> I too feel we can commit this patch only in HEAD.\n>\n\nFair enough. I'll wait till early next week (say till Monday EOD) to\nsee if anyone thinks otherwise and push this patch to HEAD after some\nmore testing and review.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 13 Oct 2023 11:08:53 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Invalidate the subscription worker in cases where a user loses\n their superuser status" }, { "msg_contents": "On Fri, Oct 13, 2023 at 11:08 AM Amit Kapila <[email protected]> wrote:\n>\n> On Fri, Oct 13, 2023 at 10:04 AM vignesh C <[email protected]> wrote:\n> >\n> > On Thu, 12 Oct 2023 at 11:10, Amit Kapila <[email protected]> wrote:\n> > >\n> > > On Sun, Oct 8, 2023 at 8:22 AM vignesh C <[email protected]> wrote:\n> > > >\n> > >\n> > > --- a/src/include/catalog/pg_subscription.h\n> > > +++ b/src/include/catalog/pg_subscription.h\n> > > @@ -127,6 +127,7 @@ typedef struct Subscription\n> > > * skipped */\n> > > char *name; /* Name of the subscription */\n> > > Oid owner; /* Oid of the subscription owner */\n> > > + bool ownersuperuser; /* Is the subscription owner a superuser? */\n> > > bool enabled; /* Indicates if the subscription is enabled */\n> > > bool binary; /* Indicates if the subscription wants data in\n> > > * binary format */\n> > >\n> > > We normally don't change the exposed structure in back branches as\n> > > that poses a risk of breaking extensions. In this case, if we want, we\n> > > can try to squeeze some padding space or we even can fix it without\n> > > introducing a new member. OTOH, it is already debatable whether to fix\n> > > it in back branches, so we can even commit this patch just in HEAD.\n> >\n> > I too feel we can commit this patch only in HEAD.\n> >\n>\n> Fair enough. I'll wait till early next week (say till Monday EOD) to\n> see if anyone thinks otherwise and push this patch to HEAD after some\n> more testing and review.\n>\n\nPushed.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 17 Oct 2023 14:17:18 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Invalidate the subscription worker in cases where a user loses\n their superuser status" }, { "msg_contents": "On Tue, 17 Oct 2023 at 14:17, Amit Kapila <[email protected]> wrote:\n>\n> On Fri, Oct 13, 2023 at 11:08 AM Amit Kapila <[email protected]> wrote:\n> >\n> > On Fri, Oct 13, 2023 at 10:04 AM vignesh C <[email protected]> wrote:\n> > >\n> > > On Thu, 12 Oct 2023 at 11:10, Amit Kapila <[email protected]> wrote:\n> > > >\n> > > > On Sun, Oct 8, 2023 at 8:22 AM vignesh C <[email protected]> wrote:\n> > > > >\n> > > >\n> > > > --- a/src/include/catalog/pg_subscription.h\n> > > > +++ b/src/include/catalog/pg_subscription.h\n> > > > @@ -127,6 +127,7 @@ typedef struct Subscription\n> > > > * skipped */\n> > > > char *name; /* Name of the subscription */\n> > > > Oid owner; /* Oid of the subscription owner */\n> > > > + bool ownersuperuser; /* Is the subscription owner a superuser? */\n> > > > bool enabled; /* Indicates if the subscription is enabled */\n> > > > bool binary; /* Indicates if the subscription wants data in\n> > > > * binary format */\n> > > >\n> > > > We normally don't change the exposed structure in back branches as\n> > > > that poses a risk of breaking extensions. In this case, if we want, we\n> > > > can try to squeeze some padding space or we even can fix it without\n> > > > introducing a new member. OTOH, it is already debatable whether to fix\n> > > > it in back branches, so we can even commit this patch just in HEAD.\n> > >\n> > > I too feel we can commit this patch only in HEAD.\n> > >\n> >\n> > Fair enough. I'll wait till early next week (say till Monday EOD) to\n> > see if anyone thinks otherwise and push this patch to HEAD after some\n> > more testing and review.\n> >\n>\n> Pushed.\n\nThanks for committing this.\n\nRegards,\nVignesh\n\n\n", "msg_date": "Tue, 17 Oct 2023 20:10:42 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Invalidate the subscription worker in cases where a user loses\n their superuser status" }, { "msg_contents": "On Tue, 2023-10-17 at 14:17 +0530, Amit Kapila wrote:\n> > Fair enough. I'll wait till early next week (say till Monday EOD)\n> > to\n> > see if anyone thinks otherwise and push this patch to HEAD after\n> > some\n> > more testing and review.\n> > \n> \n> Pushed.\n\nThere was a brief discussion on backporting this here:\n\nhttps://www.postgresql.org/message-id/CA%2BTgmob2mYpaUMT7aoFOukYTrpxt-WdgcitJnsjWhufnbDWEeg%40mail.gmail.com\n\nIt looks like you opted not to backport it. Was there a reason for\nthat? The only risky thing I see there is a change in the Subscription\nstructure -- I suppose that could be used by extensions?\n\n(I am fine with it not being backported, but I was inclined to think it\nshould be backported.)\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Fri, 12 Jan 2024 12:32:33 -0800", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Invalidate the subscription worker in cases where a user loses\n their superuser status" }, { "msg_contents": "On Sat, Jan 13, 2024 at 2:02 AM Jeff Davis <[email protected]> wrote:\n>\n> On Tue, 2023-10-17 at 14:17 +0530, Amit Kapila wrote:\n> > > Fair enough. I'll wait till early next week (say till Monday EOD)\n> > > to\n> > > see if anyone thinks otherwise and push this patch to HEAD after\n> > > some\n> > > more testing and review.\n> > >\n> >\n> > Pushed.\n>\n> There was a brief discussion on backporting this here:\n>\n> https://www.postgresql.org/message-id/CA%2BTgmob2mYpaUMT7aoFOukYTrpxt-WdgcitJnsjWhufnbDWEeg%40mail.gmail.com\n>\n> It looks like you opted not to backport it. Was there a reason for\n> that? The only risky thing I see there is a change in the Subscription\n> structure -- I suppose that could be used by extensions?\n>\n\nRight, the same is pointed out by me in an email [1].\n\n> (I am fine with it not being backported, but I was inclined to think it\n> should be backported.)\n>\n\nI don't mind backporting it if you think so but we need to ensure that\nwe don't break any extensions.\n\n[1] - https://www.postgresql.org/message-id/CAA4eK1JztFkYeVANuH0Ja_c3zqDjTyz0j15xQqwCDRPokYhWgw%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Sat, 13 Jan 2024 16:51:00 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Invalidate the subscription worker in cases where a user loses\n their superuser status" } ]
[ { "msg_contents": "\nsrc/backend/transam/README says:\n\n ...\n 4. Mark the shared buffer(s) as dirty with MarkBufferDirty(). (This \n must happen before the WAL record is inserted; see notes in \n SyncOneBuffer().)\n ...\n\nBut GenericXLogFinish() does this:\n\n ...\n /* Insert xlog record */\n lsn = XLogInsert(RM_GENERIC_ID, 0);\n\n /* Set LSN and mark buffers dirty */\n for (i = 0; i < MAX_GENERIC_XLOG_PAGES; i++)\n {\n PageData *pageData = &state->pages[i];\n\n if (BufferIsInvalid(pageData->buffer))\n continue;\n PageSetLSN(BufferGetPage(pageData->buffer), lsn);\n MarkBufferDirty(pageData->buffer);\n }\n END_CRIT_SECTION();\n\nAm I missing something or is that a problem?\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Fri, 22 Sep 2023 13:52:48 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": true, "msg_subject": "Is this a problem in GenericXLogFinish()?" }, { "msg_contents": "On 22/09/2023 23:52, Jeff Davis wrote:\n> \n> src/backend/transam/README says:\n> \n> ...\n> 4. Mark the shared buffer(s) as dirty with MarkBufferDirty(). (This\n> must happen before the WAL record is inserted; see notes in\n> SyncOneBuffer().)\n> ...\n> \n> But GenericXLogFinish() does this:\n> \n> ...\n> /* Insert xlog record */\n> lsn = XLogInsert(RM_GENERIC_ID, 0);\n> \n> /* Set LSN and mark buffers dirty */\n> for (i = 0; i < MAX_GENERIC_XLOG_PAGES; i++)\n> {\n> PageData *pageData = &state->pages[i];\n> \n> if (BufferIsInvalid(pageData->buffer))\n> continue;\n> PageSetLSN(BufferGetPage(pageData->buffer), lsn);\n> MarkBufferDirty(pageData->buffer);\n> }\n> END_CRIT_SECTION();\n> \n> Am I missing something or is that a problem?\n\nYes, that's a problem.\n\nI wish we had an assertion for that. XLogInsert() could assert that the \npage is already marked dirty, for example.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Mon, 25 Sep 2023 13:04:41 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is this a problem in GenericXLogFinish()?" }, { "msg_contents": "On Mon, 2023-09-25 at 13:04 +0300, Heikki Linnakangas wrote:\n> Yes, that's a problem.\n\nPatch attached. I rearranged the code a bit to follow the expected\npattern of: write, mark dirty, WAL, set LSN. I think computing the\ndeltas could also be moved earlier, outside of the critical section,\nbut I'm not sure that would be useful.\n\nDo you have a suggestion for any kind of test addition, or should we\njust review carefully?\n\n> I wish we had an assertion for that. XLogInsert() could assert that\n> the \n> page is already marked dirty, for example.\n\nUnfortunately that's not always the case, e.g. log_newpage_range().\n\nRegards,\n\tJeff Davis", "msg_date": "Tue, 26 Sep 2023 12:32:13 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Is this a problem in GenericXLogFinish()?" }, { "msg_contents": "On 26/09/2023 22:32, Jeff Davis wrote:\n> On Mon, 2023-09-25 at 13:04 +0300, Heikki Linnakangas wrote:\n>> Yes, that's a problem.\n> \n> Patch attached. I rearranged the code a bit to follow the expected\n> pattern of: write, mark dirty, WAL, set LSN. I think computing the\n> deltas could also be moved earlier, outside of the critical section,\n> but I'm not sure that would be useful.\n\nLooks correct. You now loop through all the block IDs three times, \nhowever. I wonder if that is measurably slower, but even if it's not, \nwas there a reason you wanted to move the XLogRegisterBuffer() calls to \na separate loop?\n\n> Do you have a suggestion for any kind of test addition, or should we\n> just review carefully?\n> \n>> I wish we had an assertion for that. XLogInsert() could assert that\n>> the page is already marked dirty, for example.\n> \n> Unfortunately that's not always the case, e.g. log_newpage_range().\n\nHmm, I'm sure there are exceptions but log_newpage_range() actually \nseems to be doing the right thing; it calls MarkBufferDirty() before \nXLogInsert(). It only calls it after XLogRegisterBuffer() though, and I \nconcur that XLogRegisterBuffer() would be the logical place for the \nassertion. We could tighten this up, require that you call \nMarkBufferDirty() before XLogRegisterBuffer(), and fix all the callers.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Wed, 27 Sep 2023 00:14:26 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is this a problem in GenericXLogFinish()?" }, { "msg_contents": "On Wed, 2023-09-27 at 00:14 +0300, Heikki Linnakangas wrote:\n> Looks correct. You now loop through all the block IDs three times, \n> however. I wonder if that is measurably slower, but even if it's not,\n> was there a reason you wanted to move the XLogRegisterBuffer() calls\n> to \n> a separate loop?\n\nI did so to correspond more closely to what's outlined in the README\nand in other places in the code, where marking the buffers dirty\nhappens before XLogBeginInsert(). It didn't occur to me that one extra\nloop would matter, but I can combine them again.\n\nIt would be a bit more concise to do the XLogBeginInsert() first (like\nbefore) and then register the buffers in the same loop that does the\nwrites and marks the buffers dirty. Updated patch attached.\n\n> \n> Hmm, I'm sure there are exceptions but log_newpage_range() actually \n> seems to be doing the right thing; it calls MarkBufferDirty() before \n> XLogInsert(). It only calls it after XLogRegisterBuffer() though, and\n> I \n> concur that XLogRegisterBuffer() would be the logical place for the \n> assertion. We could tighten this up, require that you call \n> MarkBufferDirty() before XLogRegisterBuffer(), and fix all the\n> callers.\n\nThat site is pretty trivial to fix, but there are also a couple places\nin hash.c and hashovfl.c that are registering a clean page and it's not\nclear to me exactly what's going on.\n\nRegards,\n\tJeff Davis", "msg_date": "Tue, 26 Sep 2023 16:13:32 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Is this a problem in GenericXLogFinish()?" }, { "msg_contents": "On Tue, Sep 26, 2023 at 9:36 PM Jeff Davis <[email protected]> wrote:\n> That site is pretty trivial to fix, but there are also a couple places\n> in hash.c and hashovfl.c that are registering a clean page and it's not\n> clear to me exactly what's going on.\n\nHuh, I wonder if that's just a bug. Do you know where exactly?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 27 Sep 2023 10:36:54 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is this a problem in GenericXLogFinish()?" }, { "msg_contents": "On Wed, 2023-09-27 at 10:36 -0400, Robert Haas wrote:\n> On Tue, Sep 26, 2023 at 9:36 PM Jeff Davis <[email protected]> wrote:\n> > That site is pretty trivial to fix, but there are also a couple\n> > places\n> > in hash.c and hashovfl.c that are registering a clean page and it's\n> > not\n> > clear to me exactly what's going on.\n> \n> Huh, I wonder if that's just a bug. Do you know where exactly?\n\nhashovfl.c:665 and hash.c:831\n\nIn both cases it's registering the bucket_buf, and has a comment like:\n\n /*\n * bucket buffer needs to be registered to ensure that we can\n * acquire a cleanup lock on it during replay.\n */\n\nAnd various redo routines have comments like:\n\n /*\n * Ensure we have a cleanup lock on primary bucket page before we\nstart\n * with the actual replay operation. This is to ensure that neither a\n * scan can start nor a scan can be already-in-progress during the\nreplay\n * of this operation. If we allow scans during this operation, then\nthey\n * can miss some records or show the same record multiple times.\n */\n\nSo it looks like it's intentionally registering a clean buffer so that\nit can take a cleanup lock for reasons other than cleaning (or even\nmodiying) the page. I would think that there's a better way of\naccomplishing that goal, so perhaps we can fix that?\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Wed, 27 Sep 2023 08:03:45 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Is this a problem in GenericXLogFinish()?" }, { "msg_contents": "On Wed, Sep 27, 2023 at 11:03 AM Jeff Davis <[email protected]> wrote:\n> So it looks like it's intentionally registering a clean buffer so that\n> it can take a cleanup lock for reasons other than cleaning (or even\n> modiying) the page. I would think that there's a better way of\n> accomplishing that goal, so perhaps we can fix that?\n\nI had forgotten some of the details of how this works until you\nreminded me, but now that you've jarred my memory, I remember some of\nit.\n\nWhen Amit Kaplla and I were working on the project to add WAL-logging\nto hash indexes, we ran into some problems with concurrency control\nfor individual buckets within the hash index. Before that project,\nthis was handled using heavyweight locks, one per bucket. That got\nchanged in 6d46f4783efe457f74816a75173eb23ed8930020 for the reasons\nexplained in that commit message. Basically, instead of taking\nheavyweight locks, we started taking cleanup locks on the primary\nbucket pages. I always thought that was a little awkward, but I didn't\nquite see how to do better. I don't think that I gave much thought at\nthe time to the consequence you've uncovered here, namely that it\nmeans we're sometimes locking one page (the primary bucket page)\nbecause we want to do something to another bucket page (some page in\nthe linked list of pages that are part of that bucket) and for that to\nbe safe, we need to lock out concurrent scans of that bucket.\n\nI guess I don't know of any easy fix here. :-(\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 27 Sep 2023 11:47:04 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is this a problem in GenericXLogFinish()?" }, { "msg_contents": "On 27/09/2023 18:47, Robert Haas wrote:\n> On Wed, Sep 27, 2023 at 11:03 AM Jeff Davis <[email protected]> wrote:\n>> So it looks like it's intentionally registering a clean buffer so that\n>> it can take a cleanup lock for reasons other than cleaning (or even\n>> modiying) the page. I would think that there's a better way of\n>> accomplishing that goal, so perhaps we can fix that?\n> \n> I had forgotten some of the details of how this works until you\n> reminded me, but now that you've jarred my memory, I remember some of\n> it.\n> \n> When Amit Kaplla and I were working on the project to add WAL-logging\n> to hash indexes, we ran into some problems with concurrency control\n> for individual buckets within the hash index. Before that project,\n> this was handled using heavyweight locks, one per bucket. That got\n> changed in 6d46f4783efe457f74816a75173eb23ed8930020 for the reasons\n> explained in that commit message. Basically, instead of taking\n> heavyweight locks, we started taking cleanup locks on the primary\n> bucket pages. I always thought that was a little awkward, but I didn't\n> quite see how to do better. I don't think that I gave much thought at\n> the time to the consequence you've uncovered here, namely that it\n> means we're sometimes locking one page (the primary bucket page)\n> because we want to do something to another bucket page (some page in\n> the linked list of pages that are part of that bucket) and for that to\n> be safe, we need to lock out concurrent scans of that bucket.\n> \n> I guess I don't know of any easy fix here. :-(\n\nThat seems OK. Maybe there would be a better way to do that, but there's \nnothing wrong with that approach per se.\n\nWe could define a new REGBUF_NO_CHANGE flag to signal that the buffer is \nregistered just for locking purposes, and not actually modified by the \nWAL record.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Wed, 27 Sep 2023 18:57:42 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is this a problem in GenericXLogFinish()?" }, { "msg_contents": "On Wed, 2023-09-27 at 18:57 +0300, Heikki Linnakangas wrote:\n> We could define a new REGBUF_NO_CHANGE flag to signal that the buffer\n> is \n> registered just for locking purposes, and not actually modified by\n> the \n> WAL record.\n\nOut of curiosity I also added a trial assert (not intending to actually\nadd this):\n\n Assert(!(flags & REGBUF_NO_CHANGE) || !BufferIsExclusiveLocked());\n\nI expected that to fail, but it didn't -- perhaps that buffer is only\nlocked during replay or something? I don't think we want that specific\nAssert; I was just experimenting with some cross-checks we could do to\nverify that REGBUF_NO_CHANGE is used properly.\n\nAlso, I ran into some problems with GIN that might require a bit more\nrefactoring in ginPlaceToPage(). Perhaps we could consider\nREGBUF_NO_CHANGE a general bypass of the Assert(BufferIsDirty()), and\nuse it temporarily until we have a chance to analyze/refactor each of\nthe callers that need it.\n\nRegards,\n        Jeff Davis\n\n\n\n", "msg_date": "Thu, 28 Sep 2023 12:05:00 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Is this a problem in GenericXLogFinish()?" }, { "msg_contents": "I committed and backported 0001 (the GenericXLogFinish() fix but not\nthe Assert).\n\nStrangely I didn't see the -committers email come through yet. If\nanyone notices anything wrong, please let me know before the final v11\nrelease.\n\nOn Thu, 2023-09-28 at 12:05 -0700, Jeff Davis wrote:\n> Also, I ran into some problems with GIN that might require a bit more\n> refactoring in ginPlaceToPage(). Perhaps we could consider\n> REGBUF_NO_CHANGE a general bypass of the Assert(BufferIsDirty()), and\n> use it temporarily until we have a chance to analyze/refactor each of\n> the callers that need it.\n\nFor the Assert, I have attached a new patch for v17.\n\nI changed the name of the flag to REGBUF_CLEAN_OK, because for some of\nthe callsites it was not clear to me whether the page is always\nunchanged or sometimes unchanged. In other words, REGBUF_CLEAN_OK is a\nway to bypass the Assert for callsites where we either know that we\nactually want to register an unchanged page, or for callsites where we\ndon't know if the page is changed or not.\n\nIf we commit this, ideally someone would look into the places where\nREGBUF_CLEAN_OK is used and make sure that it's not a bug, and perhaps\nrefactor so that it would benefit from the Assert. But refactoring\nthose places is outside of the scope of this patch.\n\nIt sounds like we have no intention to change the hash index code, so\nare we OK if this flag just lasts forever? Do you still think it offers\na useful check?\n\nRegards,\n\tJeff Davis", "msg_date": "Tue, 10 Oct 2023 12:57:35 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Is this a problem in GenericXLogFinish()?" }, { "msg_contents": "On 10/10/2023 22:57, Jeff Davis wrote:\n> On Thu, 2023-09-28 at 12:05 -0700, Jeff Davis wrote:\n>> Also, I ran into some problems with GIN that might require a bit more\n>> refactoring in ginPlaceToPage(). Perhaps we could consider\n>> REGBUF_NO_CHANGE a general bypass of the Assert(BufferIsDirty()), and\n>> use it temporarily until we have a chance to analyze/refactor each of\n>> the callers that need it.\n> \n> For the Assert, I have attached a new patch for v17.\n\nThanks for working on this!\n\n> +/*\n> + * BufferIsDirty\n> + *\n> + *\t\tChecks if buffer is already dirty.\n> + *\n> + * Buffer must be pinned and exclusive-locked. (If caller does not hold\n> + * exclusive lock, then the result may be stale before it's returned.)\n> + */\n> +bool\n> +BufferIsDirty(Buffer buffer)\n> +{\n> +\tBufferDesc *bufHdr;\n> +\n> +\tif (BufferIsLocal(buffer))\n> +\t{\n> +\t\tint bufid = -buffer - 1;\n> +\t\tbufHdr = GetLocalBufferDescriptor(bufid);\n> +\t}\n> +\telse\n> +\t{\n> +\t\tbufHdr = GetBufferDescriptor(buffer - 1);\n> +\t}\n> +\n> +\tAssert(BufferIsPinned(buffer));\n> +\tAssert(LWLockHeldByMeInMode(BufferDescriptorGetContentLock(bufHdr),\n> +\t\t\t\t\t\t\t\tLW_EXCLUSIVE));\n> +\n> +\treturn pg_atomic_read_u32(&bufHdr->state) & BM_DIRTY;\n> +}\nThe comment suggests that you don't need to hold an exclusive lock when \nyou call this, but there's an assertion that you do.\n\nIs it a new requirement that you must hold an exclusive lock on the \nbuffer when you call XLogRegisterBuffer()? I think that's reasonable.\n\nI thought if that might be a problem when you WAL-log a page when you \nset hint bits on it and checksums are enabled. Hint bits can be set \nholding just a share lock. But it uses XLogSaveBufferForHint(), which \ncalls XLogRegisterBlock() instead of XLogRegisterBuffer()\n\nThere is a comment above XLogSaveBufferForHint() that implies that \nXLogRegisterBuffer() requires you to hold an exclusive-lock but I don't \nsee that spelled out explicitly in XLogRegisterBuffer() itself. Maybe \nadd an assertion for that in XLogRegisterBuffer() to make it more explicit.\n\n> --- a/src/backend/access/transam/xloginsert.c\n> +++ b/src/backend/access/transam/xloginsert.c\n> @@ -246,6 +246,7 @@ XLogRegisterBuffer(uint8 block_id, Buffer buffer, uint8 flags)\n> \n> \t/* NO_IMAGE doesn't make sense with FORCE_IMAGE */\n> \tAssert(!((flags & REGBUF_FORCE_IMAGE) && (flags & (REGBUF_NO_IMAGE))));\n> +\tAssert((flags & REGBUF_CLEAN_OK) || BufferIsDirty(buffer));\n> \tAssert(begininsert_called);\n> \n> \tif (block_id >= max_registered_block_id)\n\nI'd suggest a comment here to explain what's wrong if someone hits this \nassertion. Something like \"The buffer must be marked as dirty before \nregistering, unless REGBUF_CLEAN_OK is set to indicate that the buffer \nis not being modified\".\n\n> I changed the name of the flag to REGBUF_CLEAN_OK, because for some of\n> the callsites it was not clear to me whether the page is always\n> unchanged or sometimes unchanged. In other words, REGBUF_CLEAN_OK is a\n> way to bypass the Assert for callsites where we either know that we\n> actually want to register an unchanged page, or for callsites where we\n> don't know if the page is changed or not.\n\nOk. A flag to explicitly mark that the page is not modified would \nactually be nice for pg_rewind. It could ignore such page references. \nNot that it matters much in practice, since those records are so rare. \nAnd for that, we'd need to include the flag in the WAL record too. So I \nthink this is fine.\n\n> If we commit this, ideally someone would look into the places where\n> REGBUF_CLEAN_OK is used and make sure that it's not a bug, and perhaps\n> refactor so that it would benefit from the Assert. But refactoring\n> those places is outside of the scope of this patch.\n\nAbout that: you added the flag to a couple of XLogRegisterBuffer() calls \nin GIN, because they call MarkBufferDirty() only after \nXLogRegisterBuffer(). Those would be easy to refactor so I'd suggest \ndoing that now.\n\n> It sounds like we have no intention to change the hash index code, so\n> are we OK if this flag just lasts forever? Do you still think it offers\n> a useful check?\n\nYeah, I think this is a useful assertion. It might catch some bugs in \nextensions too.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Wed, 11 Oct 2023 14:53:02 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is this a problem in GenericXLogFinish()?" }, { "msg_contents": "On Wed, Oct 11, 2023 at 7:53 AM Heikki Linnakangas <[email protected]> wrote:\n> > + * Buffer must be pinned and exclusive-locked. (If caller does not hold\n> > + * exclusive lock, then the result may be stale before it's returned.)\n> The comment suggests that you don't need to hold an exclusive lock when\n> you call this, but there's an assertion that you do.\n\nI don't think the comment suggests that. It would if you only read the\nsentence in parentheses. But if you read both of them it seems clear\nenough. I guess the parenthetical sentence cloud say \"If the caller\ndid not hold an exclusive lock, then the result might become stale\neven before it was returned,\" basically putting the whole thing in the\nsubjunctive.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 11 Oct 2023 08:25:12 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is this a problem in GenericXLogFinish()?" }, { "msg_contents": "On Wed, 2023-10-11 at 14:53 +0300, Heikki Linnakangas wrote:\n> \n> The comment suggests that you don't need to hold an exclusive lock\n> when \n> you call this, but there's an assertion that you do.\n\nReworded.\n\n> Is it a new requirement that you must hold an exclusive lock on the \n> buffer when you call XLogRegisterBuffer()? I think that's reasonable.\n\nAgreed.\n\n> \n> I'd suggest a comment here to explain what's wrong if someone hits\n> this \n> assertion. Something like \"The buffer must be marked as dirty before \n> registering, unless REGBUF_CLEAN_OK is set to indicate that the\n> buffer \n> is not being modified\".\n\nDone, with different wording.\n\n> \n> > If we commit this, ideally someone would look into the places where\n> > REGBUF_CLEAN_OK is used and make sure that it's not a bug, and\n> > perhaps\n> > refactor so that it would benefit from the Assert. But refactoring\n> > those places is outside of the scope of this patch.\n\nI don't think it makes sense to register a buffer that is perhaps clean\nand perhaps not. After registering a buffer and writing WAL, you need\nto PageSetLSN(), and that should only be done after MarkBufferDirty(),\nright?\n\nSo if you need to condition PageSetLSN() on whether MarkBufferDirty()\nhas happened, it would be trivial to use the same condition for\nXLogRegisterBuffer(). Therefore I changed the name back to\nREGBUF_NO_CHANGE, as you suggested originally.\n\nThe hash indexing code knows it's not modifying the bucket buffer,\ndoesn't mark it dirty, and doesn't set the LSN. I presume that's safe.\n\nHowever, there are quite a few places where\nXLogRegisterBuffer()+WAL+PageSetLSN() all happen, but it's not\nimmediately obvious that all code paths to get there also\nMarkBufferDirty(). For instanace:\n\n lazy_scan_heap() -- conditionally marks heapbuf as dirty\n visibilitymap_set() -- conditionally calls log_heap_visible\n log_heap_visible() \n XLogRegisterBuffer(1, heap_buffer, flags)\n\nif those conditions don't match up exactly, it seems we could get into\na situation where we update the LSN of a clean page, which seems bad.\n\nThere are a few other places in the hash code and spgist code where I\nhave similar concerns. Not many, though, I looked at all of the call\nsites (at least briefly) and most of them are fine.\n\n> About that: you added the flag to a couple of XLogRegisterBuffer()\n> calls \n> in GIN, because they call MarkBufferDirty() only after \n> XLogRegisterBuffer(). Those would be easy to refactor so I'd suggest \n> doing that now.\n\nDone.\n\n> > It sounds like we have no intention to change the hash index code,\n> > so\n> > are we OK if this flag just lasts forever? Do you still think it\n> > offers\n> > a useful check?\n> \n> Yeah, I think this is a useful assertion. It might catch some bugs in\n> extensions too.\n\nI was asking more about the REGBUF_NO_CHANGE flag. It feels very\nspecific to the hash indexes, and I'm not sure we want to encourage\nextensions to use it.\n\nThough maybe the locking protocol used by hash indexes is more\ngenerally applicable, and other indexing strategies might want to use\nit, too?\n\nAnother option might be to just change the hash indexing code to follow\nthe correct protocol, locking and calling MarkBufferDirty() in those 3\ncall sites. Marking the buffer dirty is easy, but making sure that it's\nlocked might require some refactoring. Robert, would following the\nright protocol here affect performance?\n\nRegards,\n\tJeff Davis", "msg_date": "Mon, 16 Oct 2023 16:31:14 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Is this a problem in GenericXLogFinish()?" }, { "msg_contents": "On Mon, Oct 16, 2023 at 7:31 PM Jeff Davis <[email protected]> wrote:\n> Another option might be to just change the hash indexing code to follow\n> the correct protocol, locking and calling MarkBufferDirty() in those 3\n> call sites. Marking the buffer dirty is easy, but making sure that it's\n> locked might require some refactoring. Robert, would following the\n> right protocol here affect performance?\n\nSorry, I'm not sure I understand the question. Are you asking whether\ndirtying buffers unnecessarily might be slower than not doing that?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 17 Oct 2023 08:41:15 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is this a problem in GenericXLogFinish()?" }, { "msg_contents": "On Tue, 2023-10-17 at 08:41 -0400, Robert Haas wrote:\n> Sorry, I'm not sure I understand the question. Are you asking whether\n> dirtying buffers unnecessarily might be slower than not doing that?\n\nI meant: are those cleanup operations frequent enough that dirtying\nthose buffers in that case would matter?\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Tue, 17 Oct 2023 09:38:27 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Is this a problem in GenericXLogFinish()?" }, { "msg_contents": "On Tue, Oct 17, 2023 at 12:38 PM Jeff Davis <[email protected]> wrote:\n> I meant: are those cleanup operations frequent enough that dirtying\n> those buffers in that case would matter?\n\nHonestly, I'm not sure. Probably not? I mean, hashbucketcleanup()\nseems to only be called during vacuum or a bucket split, and I don't\nthink you can have super-frequent calls to _hash_freeovflpage()\neither. For what it's worth, though, I think it would be better to\njust make these cases exceptions to your Assert, as you did in the\npatch, rather than changing them to dirty the buffer. There doesn't\nseem to be enough upside to making the assert unconditional to justify\nchanging stuff that might have a real-world performance cost ... even\nif we don't think it would amount to much.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 19 Oct 2023 16:12:52 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is this a problem in GenericXLogFinish()?" }, { "msg_contents": "On Thu, 2023-10-19 at 16:12 -0400, Robert Haas wrote:\n> For what it's worth, though, I think it would be better to\n> just make these cases exceptions to your Assert\n\nOK, I'll probably commit something like v4 then.\n\nI still have a question though: if a buffer is exclusive-locked,\nunmodified and clean, and then the caller registers it and later does\nPageSetLSN (just as if it were dirty), is that a definite bug?\n\nThere are a couple callsites where the control flow is complex enough\nthat it's hard to be sure the buffer is always marked dirty before\nbeing registered (like in log_heap_visible(), as I mentioned upthread).\nBut those callsites are all doing PageSetLSN, unlike the hash index\ncase.\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Fri, 20 Oct 2023 09:28:17 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Is this a problem in GenericXLogFinish()?" }, { "msg_contents": "On Fri, Oct 20, 2023 at 12:28 PM Jeff Davis <[email protected]> wrote:\n> I still have a question though: if a buffer is exclusive-locked,\n> unmodified and clean, and then the caller registers it and later does\n> PageSetLSN (just as if it were dirty), is that a definite bug?\n>\n> There are a couple callsites where the control flow is complex enough\n> that it's hard to be sure the buffer is always marked dirty before\n> being registered (like in log_heap_visible(), as I mentioned upthread).\n> But those callsites are all doing PageSetLSN, unlike the hash index\n> case.\n\nI think that would be a bug. I think it would be OK to just change the\npage LSN and nothing else, but that requires calling MarkBufferDirty()\nat some point. If you only call PageSetLSN() and never\nMarkBufferDirty(), that sounds messed up: either the former should be\nskipped, or the latter should be added. We shouldn't modify a shared\nbuffer without marking it dirty.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 23 Oct 2023 10:14:27 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is this a problem in GenericXLogFinish()?" }, { "msg_contents": "On Mon, 2023-10-23 at 10:14 -0400, Robert Haas wrote:\n> I think that would be a bug. I think it would be OK to just change\n> the\n> page LSN and nothing else, but that requires calling\n> MarkBufferDirty()\n> at some point. If you only call PageSetLSN() and never\n> MarkBufferDirty(), that sounds messed up: either the former should be\n> skipped, or the latter should be added. We shouldn't modify a shared\n> buffer without marking it dirty.\n> \n\nIn that case, I think REGBUF_NO_CHANGE is the right name for the flag.\n\nCommitted.\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Mon, 23 Oct 2023 17:45:06 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Is this a problem in GenericXLogFinish()?" }, { "msg_contents": "Hello hackers,\n\n24.10.2023 03:45, Jeff Davis wrote:\n> Committed.\n\nI've encountered a case with a hash index, when an assert added by the\ncommit fails:\n\nCREATE TABLE t (i INT);\nCREATE INDEX hi ON t USING HASH (i);\nINSERT INTO t SELECT 1 FROM generate_series(1, 1000);\n\nBEGIN;\nINSERT INTO t SELECT 1 FROM generate_series(1, 1000);\nROLLBACK;\n\nCHECKPOINT;\n\nVACUUM t;\n\nLeads to:\nCore was generated by `postgres: law regression [local] VACUUM                                       '.\nProgram terminated with signal SIGABRT, Aborted.\n\nwarning: Section `.reg-xstate/211944' in core file too small.\n#0  __pthread_kill_implementation (no_tid=0, signo=6, threadid=139924569388864) at ./nptl/pthread_kill.c:44\n44      ./nptl/pthread_kill.c: No such file or directory.\n(gdb) bt\n#0  __pthread_kill_implementation (no_tid=0, signo=6, threadid=139924569388864) at ./nptl/pthread_kill.c:44\n#1  __pthread_kill_internal (signo=6, threadid=139924569388864) at ./nptl/pthread_kill.c:78\n#2  __GI___pthread_kill (threadid=139924569388864, signo=signo@entry=6) at ./nptl/pthread_kill.c:89\n#3  0x00007f42b99ed476 in __GI_raise (sig=sig@entry=6) at ../sysdeps/posix/raise.c:26\n#4  0x00007f42b99d37f3 in __GI_abort () at ./stdlib/abort.c:79\n#5  0x000055f128e83f1b in ExceptionalCondition (conditionName=0x55f128f33520 \"BufferIsExclusiveLocked(buffer) && \nBufferIsDirty(buffer)\", fileName=0x55f128f333a8 \"xloginsert.c\", lineNumber=262) at assert.c:66\n#6  0x000055f1287edce3 in XLogRegisterBuffer (block_id=1 '\\001', buffer=1808, flags=8 '\\b') at xloginsert.c:262\n#7  0x000055f128742833 in _hash_freeovflpage (rel=0x7f42adb95c88, bucketbuf=1808, ovflbuf=1825, wbuf=1808, \nitups=0x7ffc3f18c010, itup_offsets=0x7ffc3f18bce0, tups_size=0x7ffc3f18ccd0, nitups=0, bstrategy=0x55f12a562820)\n     at hashovfl.c:671\n#8  0x000055f128743567 in _hash_squeezebucket (rel=0x7f42adb95c88, bucket=6, bucket_blkno=7, bucket_buf=1808, \nbstrategy=0x55f12a562820) at hashovfl.c:1064\n#9  0x000055f12873ca2a in hashbucketcleanup (rel=0x7f42adb95c88, cur_bucket=6, bucket_buf=1808, bucket_blkno=7, \nbstrategy=0x55f12a562820, maxbucket=7, highmask=15, lowmask=7, tuples_removed=0x7ffc3f18fb28,\n     num_index_tuples=0x7ffc3f18fb30, split_cleanup=false, callback=0x55f1289ba1de <vac_tid_reaped>, \ncallback_state=0x55f12a566778) at hash.c:921\n#10 0x000055f12873bfcf in hashbulkdelete (info=0x7ffc3f18fc30, stats=0x0, callback=0x55f1289ba1de <vac_tid_reaped>, \ncallback_state=0x55f12a566778) at hash.c:549\n#11 0x000055f128776fbb in index_bulk_delete (info=0x7ffc3f18fc30, istat=0x0, callback=0x55f1289ba1de <vac_tid_reaped>, \ncallback_state=0x55f12a566778) at indexam.c:709\n#12 0x000055f1289ba03d in vac_bulkdel_one_index (ivinfo=0x7ffc3f18fc30, istat=0x0, dead_items=0x55f12a566778) at \nvacuum.c:2480\n#13 0x000055f128771286 in lazy_vacuum_one_index (indrel=0x7f42adb95c88, istat=0x0, reltuples=-1, vacrel=0x55f12a4b9c30) \nat vacuumlazy.c:2768\n#14 0x000055f1287704a3 in lazy_vacuum_all_indexes (vacrel=0x55f12a4b9c30) at vacuumlazy.c:2338\n#15 0x000055f128770275 in lazy_vacuum (vacrel=0x55f12a4b9c30) at vacuumlazy.c:2256\n...\n\nIt looks like the buffer is not dirty in the problematic call.\n\nBest regards,\nAlexander\n\n\n", "msg_date": "Thu, 26 Oct 2023 07:00:00 +0300", "msg_from": "Alexander Lakhin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is this a problem in GenericXLogFinish()?" }, { "msg_contents": "On Thu, 2023-10-26 at 07:00 +0300, Alexander Lakhin wrote:\n\n> It looks like the buffer is not dirty in the problematic call.\n\nThank you for the report! I was able to reproduce and observe that the\nbuffer is not marked dirty.\n\nThe call (hashovfl.c:671):\n\n XLogRegisterBuffer(1, wbuf, REGBUF_STANDARD)\n\nis followed unconditionally by:\n\n PageSetLSN(BufferGetPage(wbuf), recptr)\n\nso if the Assert were not there, it would be setting the LSN on a page\nthat's not marked dirty. Therefore I think this is a bug, or at least\nan interesting/unexpected case.\n\nPerhaps the registration and the PageSetLSN don't need to happen when\nnitups==0?\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Wed, 25 Oct 2023 22:06:23 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Is this a problem in GenericXLogFinish()?" }, { "msg_contents": "On Wed, Oct 25, 2023 at 10:06:23PM -0700, Jeff Davis wrote:\n> Thank you for the report! I was able to reproduce and observe that the\n> buffer is not marked dirty.\n> \n> The call (hashovfl.c:671):\n> \n> XLogRegisterBuffer(1, wbuf, REGBUF_STANDARD)\n> \n> is followed unconditionally by:\n> \n> PageSetLSN(BufferGetPage(wbuf), recptr)\n> \n> so if the Assert were not there, it would be setting the LSN on a page\n> that's not marked dirty. Therefore I think this is a bug, or at least\n> an interesting/unexpected case.\n> \n> Perhaps the registration and the PageSetLSN don't need to happen when\n> nitups==0?\n\nHmm. Looking at hash_xlog_squeeze_page(), it looks like wbuf is\nexpected to always be registered. So, you're right here: it should be\nOK to be less aggressive with setting the page LSN on wbuf if ntups is\n0, but there's more to it? For example, it is possible that\nbucketbuf, prevbuf and wbuf refer to the same buffer, causing\nis_prim_bucket_same_wrt and is_prev_bucket_same_wrt to be both true. \nParticularly, if nextbuf and wbuf are the same buffers, we finish by\nregistering twice the same buffer with two different IDs to perform\nthe tuple insertion and the opaque area updates in two steps. And\nthat's.. Err.. not really necessary? I mean, as far as I read this\ncode you could just reuse the buffer registered at ID 1 and update its\nopaque area if is_prim_bucket_same_wrt is true?\n--\nMichael", "msg_date": "Thu, 26 Oct 2023 16:31:19 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is this a problem in GenericXLogFinish()?" }, { "msg_contents": "On Thu, Oct 26, 2023 at 3:31 AM Michael Paquier <[email protected]> wrote:\n> Hmm. Looking at hash_xlog_squeeze_page(), it looks like wbuf is\n> expected to always be registered. So, you're right here: it should be\n> OK to be less aggressive with setting the page LSN on wbuf if ntups is\n> 0, but there's more to it? For example, it is possible that\n> bucketbuf, prevbuf and wbuf refer to the same buffer, causing\n> is_prim_bucket_same_wrt and is_prev_bucket_same_wrt to be both true.\n> Particularly, if nextbuf and wbuf are the same buffers, we finish by\n> registering twice the same buffer with two different IDs to perform\n> the tuple insertion and the opaque area updates in two steps. And\n> that's.. Err.. not really necessary? I mean, as far as I read this\n> code you could just reuse the buffer registered at ID 1 and update its\n> opaque area if is_prim_bucket_same_wrt is true?\n\nRead the comments for _hash_squeezebucket. Particularly, note this part:\n\n * Try to squeeze the tuples onto pages occurring earlier in the\n * bucket chain in an attempt to free overflow pages. When we start\n * the \"squeezing\", the page from which we start taking tuples (the\n * \"read\" page) is the last bucket in the bucket chain and the page\n * onto which we start squeezing tuples (the \"write\" page) is the\n * first page in the bucket chain. The read page works backward and\n * the write page works forward; the procedure terminates when the\n * read page and write page are the same page.\n\nBecause of this, it is possible for bucketbuf, prevbuf, and wbuf to be\nthe same (your first scenario) but the second scenario you mention\n(nextbuf == wbuf) should be impossible.\n\nIt seems to me that maybe we shouldn't even be registering wbuf or\ndoing anything at all to it if there are no tuples that need moving.\nThat would also require some adjustment of the redo routine.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 26 Oct 2023 09:40:09 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is this a problem in GenericXLogFinish()?" }, { "msg_contents": "On Thu, Oct 26, 2023 at 09:40:09AM -0400, Robert Haas wrote:\n> Because of this, it is possible for bucketbuf, prevbuf, and wbuf to be\n> the same (your first scenario) but the second scenario you mention\n> (nextbuf == wbuf) should be impossible.\n\nOkay..\n\n> It seems to me that maybe we shouldn't even be registering wbuf or\n> doing anything at all to it if there are no tuples that need moving.\n> That would also require some adjustment of the redo routine.\n\nHmm. So my question is: do we need the cleanup lock on the write\nbuffer even if there are no tuples, and even if primary bucket and the\nwrite bucket are the same? I'd like to think that what you say is OK,\nstill I am not completely sure after reading the lock assumptions in\nthe hash README or 6d46f4783efe. A simpler thing would be to mark\nbuffer 1 with REGBUF_NO_CHANGE when the primary and write buffers are\nthe same if we expect the lock to always be taken, I guess..\n\nI've noticed that the replay paths for XLOG_HASH_MOVE_PAGE_CONTENTS\nand XLOG_HASH_SQUEEZE_PAGE are similar with their page handlings (some\ncopy-pastes?). A MOVE record should never have zero tuples, still the\nreplay path assumes that this can be possible, so it could be\nsimplified.\n--\nMichael", "msg_date": "Fri, 27 Oct 2023 09:15:19 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is this a problem in GenericXLogFinish()?" }, { "msg_contents": "On Fri, Oct 27, 2023 at 5:45 AM Michael Paquier <[email protected]> wrote:\n>\n> On Thu, Oct 26, 2023 at 09:40:09AM -0400, Robert Haas wrote:\n> > Because of this, it is possible for bucketbuf, prevbuf, and wbuf to be\n> > the same (your first scenario) but the second scenario you mention\n> > (nextbuf == wbuf) should be impossible.\n>\n> Okay..\n>\n> > It seems to me that maybe we shouldn't even be registering wbuf or\n> > doing anything at all to it if there are no tuples that need moving.\n> > That would also require some adjustment of the redo routine.\n>\n> Hmm. So my question is: do we need the cleanup lock on the write\n> buffer even if there are no tuples, and even if primary bucket and the\n> write bucket are the same?\n>\n\nYes, we need it to exclude any concurrent in-progress scans that could\nreturn incorrect tuples during bucket squeeze operation.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Sat, 28 Oct 2023 15:45:13 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is this a problem in GenericXLogFinish()?" }, { "msg_contents": "On Sat, Oct 28, 2023 at 03:45:13PM +0530, Amit Kapila wrote:\n> Yes, we need it to exclude any concurrent in-progress scans that could\n> return incorrect tuples during bucket squeeze operation.\n\nThanks. So I assume that we should just set REGBUF_NO_CHANGE when the\nprimary and write buffers are the same and there are no tuples to\nmove. Say with something like the attached?\n--\nMichael", "msg_date": "Sat, 28 Oct 2023 20:00:00 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is this a problem in GenericXLogFinish()?" }, { "msg_contents": "On Sat, Oct 28, 2023 at 4:30 PM Michael Paquier <[email protected]> wrote:\n>\n> On Sat, Oct 28, 2023 at 03:45:13PM +0530, Amit Kapila wrote:\n> > Yes, we need it to exclude any concurrent in-progress scans that could\n> > return incorrect tuples during bucket squeeze operation.\n>\n> Thanks. So I assume that we should just set REGBUF_NO_CHANGE when the\n> primary and write buffers are the same and there are no tuples to\n> move. Say with something like the attached?\n>\n\nWhat if the primary and write buffer are not the same but ntups is\nzero? Won't we hit the assertion again in that case?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 30 Oct 2023 15:54:39 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is this a problem in GenericXLogFinish()?" }, { "msg_contents": "On Sat, Oct 28, 2023 at 6:15 AM Amit Kapila <[email protected]> wrote:\n> > Hmm. So my question is: do we need the cleanup lock on the write\n> > buffer even if there are no tuples, and even if primary bucket and the\n> > write bucket are the same?\n>\n> Yes, we need it to exclude any concurrent in-progress scans that could\n> return incorrect tuples during bucket squeeze operation.\n\nAmit, thanks for weighing in, but I'm not convinced. I thought we only\never used a cleanup lock on the main bucket page to guard against\nconcurrent scans. Here you seem to be saying that we need a cleanup\nlock on some page that may be an overflow page somewhere in the middle\nof the chain, and that doesn't seem right to me.\n\nSo ... are you sure? If yes, can you provide any more detailed justification?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 30 Oct 2023 09:43:02 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is this a problem in GenericXLogFinish()?" }, { "msg_contents": "On Mon, Oct 30, 2023 at 7:13 PM Robert Haas <[email protected]> wrote:\n>\n> On Sat, Oct 28, 2023 at 6:15 AM Amit Kapila <[email protected]> wrote:\n> > > Hmm. So my question is: do we need the cleanup lock on the write\n> > > buffer even if there are no tuples, and even if primary bucket and the\n> > > write bucket are the same?\n> >\n> > Yes, we need it to exclude any concurrent in-progress scans that could\n> > return incorrect tuples during bucket squeeze operation.\n>\n> Amit, thanks for weighing in, but I'm not convinced. I thought we only\n> ever used a cleanup lock on the main bucket page to guard against\n> concurrent scans.\n>\n\nMy understanding is the same. It is possible that my wording is not\nclear. Let me try to clarify again, Michael asked: \"do we need the\ncleanup lock on the write buffer even if there are no tuples, and even\nif primary bucket and the write bucket are the same?\" My reading of\nhis question was do we need a cleanup lock even if the primary bucket\nand write bucket are the same which means the write bucket will be the\nfirst page in the chain and we need a cleanup lock on it. I think the\nsecond condition (no tuples) seems irrelevant here as whether that is\ntrue or false we need a cleanup lock.\n\n>\n Here you seem to be saying that we need a cleanup\n> lock on some page that may be an overflow page somewhere in the middle\n> of the chain, and that doesn't seem right to me.\n>\n\nSorry, I don't intend to say this.\n\n\n--\nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 31 Oct 2023 08:44:58 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is this a problem in GenericXLogFinish()?" }, { "msg_contents": "On Mon, Oct 30, 2023 at 11:15 PM Amit Kapila <[email protected]> wrote:\n> My understanding is the same. It is possible that my wording is not\n> clear. Let me try to clarify again, Michael asked: \"do we need the\n> cleanup lock on the write buffer even if there are no tuples, and even\n> if primary bucket and the write bucket are the same?\" My reading of\n> his question was do we need a cleanup lock even if the primary bucket\n> and write bucket are the same which means the write bucket will be the\n> first page in the chain and we need a cleanup lock on it. I think the\n> second condition (no tuples) seems irrelevant here as whether that is\n> true or false we need a cleanup lock.\n\nAh, OK, I see now. I missed the fact that, per what Michael wrote, you\nwere assuming the primary buffer and the write buffer were the same.\nIn that case I agree, but the whole formulation seems backwards. I\nthink the clearer way to say this is: we need a cleanup lock on the\nprimary bucket page no matter what; and we need to write tuples into\nthe write buffer if there are any tuples to write but not if there\naren't. If the two buffers are the same then the requirements are\nadded together.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 31 Oct 2023 08:37:50 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is this a problem in GenericXLogFinish()?" }, { "msg_contents": "On Mon, Oct 30, 2023 at 03:54:39PM +0530, Amit Kapila wrote:\n> On Sat, Oct 28, 2023 at 4:30 PM Michael Paquier <[email protected]> wrote:\n>> On Sat, Oct 28, 2023 at 03:45:13PM +0530, Amit Kapila wrote:\n>> > Yes, we need it to exclude any concurrent in-progress scans that could\n>> > return incorrect tuples during bucket squeeze operation.\n>>\n>> Thanks. So I assume that we should just set REGBUF_NO_CHANGE when the\n>> primary and write buffers are the same and there are no tuples to\n>> move. Say with something like the attached?\n> \n> What if the primary and write buffer are not the same but ntups is\n> zero? Won't we hit the assertion again in that case?\n\nThe code tells that it should be able to handle such a case, so this\nwould mean to set REGBUF_NO_CHANGE only when we have no tuples to\nmove:\n- XLogRegisterBuffer(1, wbuf, REGBUF_STANDARD);\n+ /*\n+ * A cleanup lock is still required on the write buffer even\n+ * if there are no tuples to move, so it needs to be registered\n+ * in this case.\n+ */\n+ wbuf_flags = REGBUF_STANDARD;\n+ if (xlrec.ntups == 0)\n+ wbuf_flags |= REGBUF_NO_CHANGE;\n+ XLogRegisterBuffer(1, wbuf, wbuf_flags);\n\nAnyway, there is still a hole here: the regression tests have zero\ncoverage for the case where there are no tuples to move if\n!is_prim_bucket_same_wrt. There are only two queries that stress the\nsqueeze path with no tuples, both use is_prim_bucket_same_wrt:\nINSERT INTO hash_split_heap SELECT a/2 FROM generate_series(1, 25000) a;\nVACUUM hash_split_heap;\n\nPerhaps this should have more coverage to make sure that all these\nscenarios are covered at replay (likely with 027_stream_regress.pl)?\nThe case posted by Alexander at [1] falls under the same category\n(is_prim_bucket_same_wrt with no tuples).\n\n[1]: https://www.postgresql.org/message-id/[email protected]\n--\nMichael", "msg_date": "Wed, 1 Nov 2023 16:24:07 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is this a problem in GenericXLogFinish()?" }, { "msg_contents": "On Wed, Nov 1, 2023 at 12:54 PM Michael Paquier <[email protected]> wrote:\n>\n> On Mon, Oct 30, 2023 at 03:54:39PM +0530, Amit Kapila wrote:\n> > On Sat, Oct 28, 2023 at 4:30 PM Michael Paquier <[email protected]> wrote:\n> >> On Sat, Oct 28, 2023 at 03:45:13PM +0530, Amit Kapila wrote:\n> >> > Yes, we need it to exclude any concurrent in-progress scans that could\n> >> > return incorrect tuples during bucket squeeze operation.\n> >>\n> >> Thanks. So I assume that we should just set REGBUF_NO_CHANGE when the\n> >> primary and write buffers are the same and there are no tuples to\n> >> move. Say with something like the attached?\n> >\n> > What if the primary and write buffer are not the same but ntups is\n> > zero? Won't we hit the assertion again in that case?\n>\n> The code tells that it should be able to handle such a case,\n>\n\nIt should be possible when we encounter some page in between the chain\nthat has all dead items and write buffer points to some page after the\nprimary bucket page and before the read buffer page.\n\n> so this\n> would mean to set REGBUF_NO_CHANGE only when we have no tuples to\n> move:\n> - XLogRegisterBuffer(1, wbuf, REGBUF_STANDARD);\n> + /*\n> + * A cleanup lock is still required on the write buffer even\n> + * if there are no tuples to move, so it needs to be registered\n> + * in this case.\n> + */\n> + wbuf_flags = REGBUF_STANDARD;\n> + if (xlrec.ntups == 0)\n> + wbuf_flags |= REGBUF_NO_CHANGE;\n> + XLogRegisterBuffer(1, wbuf, wbuf_flags);\n>\n\nWhy register wbuf at all if there are no tuples to add and it is not\nthe same as bucketbuf? Also, I think this won't be correct if prevbuf\nand wrtbuf are the same and also we have no tuples to add to wbuf. I\nhave attached a naive and crude way to achieve it. This needs more\nwork both in terms of trying to find a better way to change the code\nor ensure this won't break any existing case. I have just run the\nexisting tests. Such a fix certainly required more testing.\n\n-- \nWith Regards,\nAmit Kapila.", "msg_date": "Thu, 2 Nov 2023 12:16:54 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is this a problem in GenericXLogFinish()?" }, { "msg_contents": "Dear Amit, Michael,\r\n\r\nThanks for making the patch!\r\n\r\n> Why register wbuf at all if there are no tuples to add and it is not\r\n> the same as bucketbuf? Also, I think this won't be correct if prevbuf\r\n> and wrtbuf are the same and also we have no tuples to add to wbuf. I\r\n> have attached a naive and crude way to achieve it. This needs more\r\n> work both in terms of trying to find a better way to change the code\r\n> or ensure this won't break any existing case. I have just run the\r\n> existing tests. Such a fix certainly required more testing.\r\n\r\nI'm verifying the idea (currently it seems OK), but at least there is a coding error -\r\nwbuf_flags should be uint8 here. PSA the fixed patch.\r\n\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED", "msg_date": "Mon, 6 Nov 2023 03:23:52 +0000", "msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: Is this a problem in GenericXLogFinish()?" }, { "msg_contents": "Dear hackers,\r\n\r\n> Dear Amit, Michael,\r\n> \r\n> Thanks for making the patch!\r\n> \r\n> > Why register wbuf at all if there are no tuples to add and it is not\r\n> > the same as bucketbuf? Also, I think this won't be correct if prevbuf\r\n> > and wrtbuf are the same and also we have no tuples to add to wbuf. I\r\n> > have attached a naive and crude way to achieve it. This needs more\r\n> > work both in terms of trying to find a better way to change the code\r\n> > or ensure this won't break any existing case. I have just run the\r\n> > existing tests. Such a fix certainly required more testing.\r\n> \r\n> I'm verifying the idea (currently it seems OK), but at least there is a coding error -\r\n> wbuf_flags should be uint8 here. PSA the fixed patch.\r\n\r\nHere is a new patch which is bit refactored. It did:\r\n\r\n* If-conditions in _hash_freeovflpage() were swapped.\r\n* Based on above, an Assert(xlrec.ntups == 0) was added.\r\n* A condition in hash_xlog_squeeze_page() was followed the change as well\r\n* comments were adjusted\r\n\r\nNext we should add some test codes. I will continue considering but please post anything\r\nIf you have idea.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED", "msg_date": "Mon, 6 Nov 2023 06:46:16 +0000", "msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: Is this a problem in GenericXLogFinish()?" }, { "msg_contents": "Dear hackers,\r\n\r\n> Next we should add some test codes. I will continue considering but please post\r\n> anything\r\n> If you have idea.\r\n\r\nAnd I did, PSA the patch. This patch adds two parts in hash_index.sql.\r\n\r\nIn the first part, the primary bucket page is filled by live tuples and some overflow\r\npages are by dead ones. This leads removal of overflow pages without moving tuples.\r\nHEAD will crash if this were executed, which is already reported on hackers.\r\n\r\nThe second one tests a case (ntups == 0 && is_prim_bucket_same_wrt == false &&\r\nis_prev_bucket_same_wrt == true), which is never done before.\r\n\r\n\r\n\r\nAlso, I measured the execution time of before/after patching. Below is a madian\r\nfor 9 measurements.\r\n\r\nBEFORE -> AFTER\r\n647 ms -> 710 ms\r\n\r\nThis means that the execution time increased -10%, it will not affect so much.\r\n\r\nHow do you think?\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED", "msg_date": "Fri, 10 Nov 2023 03:47:30 +0000", "msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: Is this a problem in GenericXLogFinish()?" }, { "msg_contents": "On Fri, Nov 10, 2023 at 9:17 AM Hayato Kuroda (Fujitsu)\n<[email protected]> wrote:\n>\n> > Next we should add some test codes. I will continue considering but please post\n> > anything\n> > If you have idea.\n>\n> And I did, PSA the patch. This patch adds two parts in hash_index.sql.\n>\n> In the first part, the primary bucket page is filled by live tuples and some overflow\n> pages are by dead ones. This leads removal of overflow pages without moving tuples.\n> HEAD will crash if this were executed, which is already reported on hackers.\n>\n\n+-- And do CHECKPOINT and vacuum. Some overflow pages will be removed.\n+CHECKPOINT;\n+VACUUM hash_cleanup_heap;\n\nWhy do we need CHECKPOINT in the above test?\n\n> The second one tests a case (ntups == 0 && is_prim_bucket_same_wrt == false &&\n> is_prev_bucket_same_wrt == true), which is never done before.\n>\n\n+-- Insert few tuples, the primary bucket page will not satisfy\n+INSERT INTO hash_cleanup_heap SELECT 1 FROM generate_series(1, 50) as i;\n\nWhat do you mean by the second part of the sentence (the primary\nbucket page will not satisfy)?\n\nFew other minor comments:\n=======================\n1. Can we use ROLLBACK instead of ABORT in the tests?\n\n2.\n- for (i = 0; i < nitups; i++)\n+ for (int i = 0; i < nitups; i++)\n\nI don't think there is a need to make this unrelated change.\n\n3.\n+ /*\n+ * A write buffer needs to be registered even if no tuples are added to\n+ * it to ensure that we can acquire a cleanup lock on it if it is the\n+ * same as primary bucket buffer or update the nextblkno if it is same\n+ * as the previous bucket buffer.\n+ */\n+ else if (xlrec.is_prim_bucket_same_wrt || xlrec.is_prev_bucket_same_wrt)\n+ {\n+ uint8 wbuf_flags;\n+\n+ Assert(xlrec.ntups == 0);\n\nCan we move this comment inside else if, just before Assert?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 13 Nov 2023 09:25:06 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is this a problem in GenericXLogFinish()?" }, { "msg_contents": "Dear Amit,\r\n\r\nThanks for reviewing! PSA new version patch.\r\n\r\n> > > Next we should add some test codes. I will continue considering but please\r\n> post\r\n> > > anything\r\n> > > If you have idea.\r\n> >\r\n> > And I did, PSA the patch. This patch adds two parts in hash_index.sql.\r\n> >\r\n> > In the first part, the primary bucket page is filled by live tuples and some\r\n> overflow\r\n> > pages are by dead ones. This leads removal of overflow pages without moving\r\n> tuples.\r\n> > HEAD will crash if this were executed, which is already reported on hackers.\r\n> >\r\n> \r\n> +-- And do CHECKPOINT and vacuum. Some overflow pages will be removed.\r\n> +CHECKPOINT;\r\n> +VACUUM hash_cleanup_heap;\r\n> \r\n> Why do we need CHECKPOINT in the above test?\r\n\r\nCHECKPOINT command is needed for writing a hash pages to disk. IIUC if the command\r\nis omitted, none of tuples are recorded to hash index *file*, so squeeze would not\r\nbe occurred.\r\n\r\nYou can test by 1) restoring changes for hashovfl.c then 2) removing CHECKPOINT\r\ncommand. Server crash would not be occurred.\r\n\r\n> > The second one tests a case (ntups == 0 && is_prim_bucket_same_wrt ==\r\n> false &&\r\n> > is_prev_bucket_same_wrt == true), which is never done before.\r\n> >\r\n> \r\n> +-- Insert few tuples, the primary bucket page will not satisfy\r\n> +INSERT INTO hash_cleanup_heap SELECT 1 FROM generate_series(1, 50) as i;\r\n> \r\n> What do you mean by the second part of the sentence (the primary\r\n> bucket page will not satisfy)?\r\n\r\nI meant to say that the primary bucket page still can have more tuples. Maybe I\r\nshould say \"will not be full\". Reworded.\r\n\r\n> Few other minor comments:\r\n> =======================\r\n> 1. Can we use ROLLBACK instead of ABORT in the tests?\r\n\r\nChanged. IIRC they have same meaning, but it seems that most of sql files have\r\n\"ROLLBACK\".\r\n\r\n> 2.\r\n> - for (i = 0; i < nitups; i++)\r\n> + for (int i = 0; i < nitups; i++)\r\n> \r\n> I don't think there is a need to make this unrelated change.\r\n\r\nReverted. I thought that the variable i would be used only when nitups>0 so that\r\nit could be removed, but it was not my business.\r\n\r\n> 3.\r\n> + /*\r\n> + * A write buffer needs to be registered even if no tuples are added to\r\n> + * it to ensure that we can acquire a cleanup lock on it if it is the\r\n> + * same as primary bucket buffer or update the nextblkno if it is same\r\n> + * as the previous bucket buffer.\r\n> + */\r\n> + else if (xlrec.is_prim_bucket_same_wrt || xlrec.is_prev_bucket_same_wrt)\r\n> + {\r\n> + uint8 wbuf_flags;\r\n> +\r\n> + Assert(xlrec.ntups == 0);\r\n> \r\n> Can we move this comment inside else if, just before Assert?\r\n\r\nMoved.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED", "msg_date": "Mon, 13 Nov 2023 05:47:26 +0000", "msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: Is this a problem in GenericXLogFinish()?" }, { "msg_contents": "On Mon, Nov 13, 2023 at 12:47 AM Hayato Kuroda (Fujitsu)\n<[email protected]> wrote:\n> Moved.\n\nI see that this patch was committed, but I'm not very convinced that\nthe approach is correct. The comment says this:\n\n+ /*\n+ * A write buffer needs to be registered even if no tuples are\n+ * added to it to ensure that we can acquire a cleanup lock on it\n+ * if it is the same as primary bucket buffer or update the\n+ * nextblkno if it is same as the previous bucket buffer.\n+ */\n\nBut surely if the buffer is the same as one of those others, then it's\nregistered anyway, and if it isn't, then it doesn't need to be.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 13 Nov 2023 12:21:34 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is this a problem in GenericXLogFinish()?" }, { "msg_contents": "On Mon, Nov 13, 2023 at 10:51 PM Robert Haas <[email protected]> wrote:\n>\n> On Mon, Nov 13, 2023 at 12:47 AM Hayato Kuroda (Fujitsu)\n> <[email protected]> wrote:\n> > Moved.\n>\n> I see that this patch was committed, but I'm not very convinced that\n> the approach is correct. The comment says this:\n>\n> + /*\n> + * A write buffer needs to be registered even if no tuples are\n> + * added to it to ensure that we can acquire a cleanup lock on it\n> + * if it is the same as primary bucket buffer or update the\n> + * nextblkno if it is same as the previous bucket buffer.\n> + */\n>\n> But surely if the buffer is the same as one of those others, then it's\n> registered anyway,\n\nI don't think for others it's registered. For example, consider the\ncase when prevpage and the writepage are the same (aka\nxlrec.is_prev_bucket_same_wrt is true), it won't be registered in\nanother code path (see comment [1]).\n\n>\n> and if it isn't, then it doesn't need to be.\n>\n\nIn the previous example, we need it to update the nextblockno during replay.\n\nI am not sure if I understand the scenario you are worried about, so\nif my response doesn't address your concern, can you please explain a\nbit more about the scenario you have in mind?\n\n[1] -\n/*\n * If prevpage and the writepage (block in which we are moving tuples\n * from overflow) are same, then no need to separately register\n * prevpage. During replay, we can directly update the nextblock in\n * writepage.\n */\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 14 Nov 2023 09:12:29 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is this a problem in GenericXLogFinish()?" }, { "msg_contents": "Hello,\n\n13.11.2023 20:21, Robert Haas wrote:\n> On Mon, Nov 13, 2023 at 12:47 AM Hayato Kuroda (Fujitsu)\n> <[email protected]> wrote:\n>> Moved.\n> I see that this patch was committed, but I'm not very convinced that\n> the approach is correct. The comment says this:\n>\n\nI've discovered that that patch introduced a code path leading to an\nuninitialized memory access.\nWith the following addition to hash_index.sql:\n  -- Fill overflow pages by \"dead\" tuples.\n  BEGIN;\n  INSERT INTO hash_cleanup_heap SELECT 1 FROM generate_series(1, 1000) as i;\n+INSERT INTO hash_cleanup_heap SELECT 1 FROM generate_series(1, 1000) as i;\n  ROLLBACK;\n+INSERT INTO hash_cleanup_heap SELECT 1 FROM generate_series(1, 1000) as i;\n\nmake check -C src/test/recovery/ PROVE_TESTS=\"t/027*\"\nwhen executed under Valgrind, triggers:\n==00:00:02:30.285 97744== Conditional jump or move depends on uninitialised value(s)\n==00:00:02:30.285 97744==    at 0x227585: BufferIsValid (bufmgr.h:303)\n==00:00:02:30.285 97744==    by 0x227585: hash_xlog_squeeze_page (hash_xlog.c:781)\n==00:00:02:30.285 97744==    by 0x228133: hash_redo (hash_xlog.c:1083)\n==00:00:02:30.285 97744==    by 0x2C2801: ApplyWalRecord (xlogrecovery.c:1943)\n==00:00:02:30.285 97744==    by 0x2C5C52: PerformWalRecovery (xlogrecovery.c:1774)\n==00:00:02:30.285 97744==    by 0x2B63A1: StartupXLOG (xlog.c:5559)\n==00:00:02:30.285 97744==    by 0x558165: StartupProcessMain (startup.c:282)\n==00:00:02:30.285 97744==    by 0x54DFE8: AuxiliaryProcessMain (auxprocess.c:141)\n==00:00:02:30.285 97744==    by 0x5546B0: StartChildProcess (postmaster.c:5331)\n==00:00:02:30.285 97744==    by 0x557A53: PostmasterMain (postmaster.c:1458)\n==00:00:02:30.285 97744==    by 0x4720C2: main (main.c:198)\n==00:00:02:30.285 97744==\n(in 027_stream_regress_standby_1.log)\n\nThat is, when line\nhttps://coverage.postgresql.org/src/backend/access/hash/hash_xlog.c.gcov.html#661\nis reached, writebuf stays uninitialized.\n\nBest regards,\nAlexander\n\n\n", "msg_date": "Wed, 29 Nov 2023 18:00:01 +0300", "msg_from": "Alexander Lakhin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is this a problem in GenericXLogFinish()?" }, { "msg_contents": "Dear Alexander,\r\n\r\n> \r\n> I've discovered that that patch introduced a code path leading to an\r\n> uninitialized memory access.\r\n> With the following addition to hash_index.sql:\r\n> -- Fill overflow pages by \"dead\" tuples.\r\n> BEGIN;\r\n> INSERT INTO hash_cleanup_heap SELECT 1 FROM generate_series(1, 1000)\r\n> as i;\r\n> +INSERT INTO hash_cleanup_heap SELECT 1 FROM generate_series(1, 1000) as\r\n> i;\r\n> ROLLBACK;\r\n> +INSERT INTO hash_cleanup_heap SELECT 1 FROM generate_series(1, 1000) as\r\n> i;\r\n> \r\n> make check -C src/test/recovery/ PROVE_TESTS=\"t/027*\"\r\n> when executed under Valgrind, triggers:\r\n> ==00:00:02:30.285 97744== Conditional jump or move depends on uninitialised\r\n> value(s)\r\n> ==00:00:02:30.285 97744== at 0x227585: BufferIsValid (bufmgr.h:303)\r\n> ==00:00:02:30.285 97744== by 0x227585: hash_xlog_squeeze_page\r\n> (hash_xlog.c:781)\r\n> ==00:00:02:30.285 97744== by 0x228133: hash_redo (hash_xlog.c:1083)\r\n> ==00:00:02:30.285 97744== by 0x2C2801: ApplyWalRecord\r\n> (xlogrecovery.c:1943)\r\n> ==00:00:02:30.285 97744== by 0x2C5C52: PerformWalRecovery\r\n> (xlogrecovery.c:1774)\r\n> ==00:00:02:30.285 97744== by 0x2B63A1: StartupXLOG (xlog.c:5559)\r\n> ==00:00:02:30.285 97744== by 0x558165: StartupProcessMain\r\n> (startup.c:282)\r\n> ==00:00:02:30.285 97744== by 0x54DFE8: AuxiliaryProcessMain\r\n> (auxprocess.c:141)\r\n> ==00:00:02:30.285 97744== by 0x5546B0: StartChildProcess\r\n> (postmaster.c:5331)\r\n> ==00:00:02:30.285 97744== by 0x557A53: PostmasterMain\r\n> (postmaster.c:1458)\r\n> ==00:00:02:30.285 97744== by 0x4720C2: main (main.c:198)\r\n> ==00:00:02:30.285 97744==\r\n> (in 027_stream_regress_standby_1.log)\r\n> \r\n> That is, when line\r\n> https://coverage.postgresql.org/src/backend/access/hash/hash_xlog.c.gcov.ht\r\n> ml#661\r\n> is reached, writebuf stays uninitialized.\r\n\r\nGood catch, thank you for reporting! I will investigate more about it and post my analysis.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n", "msg_date": "Thu, 30 Nov 2023 03:09:02 +0000", "msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: Is this a problem in GenericXLogFinish()?" }, { "msg_contents": "Dear Alexander,\r\n\r\n> \r\n> Good catch, thank you for reporting! I will investigate more about it and post my\r\n> analysis.\r\n>\r\n\r\nAgain, good catch. Here is my analysis and fix patch.\r\nI think it is sufficient to add an initialization for writebuf.\r\n\r\nIn the reported case, neither is_prim_bucket_same_wrt nor is_prev_bucket_same_wrt\r\nis set in the WAL record, and ntups is also zero. This means that the wbuf is not\r\nwritten in the WAL record on primary side (see [1]).\r\nAlso, there are no reasons to read (and lock) other buffer page because we do\r\nnot modify it. Based on them, I think that just initializing as InvalidBuffer is sufficient.\r\n\r\n\r\nI want to add a test case for it as well. I've tested on my env and found that proposed\r\ntuples seems sufficient.\r\n(We must fill the primary bucket, so initial 500 is needed. Also, we must add\r\nmany dead pages to lead squeeze operation. Final page seems OK to smaller value.)\r\n\r\nI compared the execution time before/after patching, and they are not so different\r\n(1077 ms -> 1125 ms).\r\n\r\nHow do you think?\r\n\r\n[1]:\r\n```\r\n\t\telse if (xlrec.is_prim_bucket_same_wrt || xlrec.is_prev_bucket_same_wrt)\r\n\t\t{\r\n\t\t\tuint8\t\twbuf_flags;\r\n\r\n\t\t\t/*\r\n\t\t\t * A write buffer needs to be registered even if no tuples are\r\n\t\t\t * added to it to ensure that we can acquire a cleanup lock on it\r\n\t\t\t * if it is the same as primary bucket buffer or update the\r\n\t\t\t * nextblkno if it is same as the previous bucket buffer.\r\n\t\t\t */\r\n\t\t\tAssert(xlrec.ntups == 0);\r\n\r\n\t\t\twbuf_flags = REGBUF_STANDARD;\r\n\t\t\tif (!xlrec.is_prev_bucket_same_wrt)\r\n\t\t\t\twbuf_flags |= REGBUF_NO_CHANGE;\r\n\t\t\tXLogRegisterBuffer(1, wbuf, wbuf_flags);\r\n\t\t}\r\n```\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED", "msg_date": "Thu, 30 Nov 2023 07:28:05 +0000", "msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: Is this a problem in GenericXLogFinish()?" }, { "msg_contents": "On Thu, Nov 30, 2023 at 12:58 PM Hayato Kuroda (Fujitsu)\n<[email protected]> wrote:\n>\n> >\n> > Good catch, thank you for reporting! I will investigate more about it and post my\n> > analysis.\n> >\n>\n> Again, good catch. Here is my analysis and fix patch.\n> I think it is sufficient to add an initialization for writebuf.\n>\n> In the reported case, neither is_prim_bucket_same_wrt nor is_prev_bucket_same_wrt\n> is set in the WAL record, and ntups is also zero. This means that the wbuf is not\n> written in the WAL record on primary side (see [1]).\n> Also, there are no reasons to read (and lock) other buffer page because we do\n> not modify it. Based on them, I think that just initializing as InvalidBuffer is sufficient.\n>\n\nAgreed.\n\n>\n> I want to add a test case for it as well. I've tested on my env and found that proposed\n> tuples seems sufficient.\n> (We must fill the primary bucket, so initial 500 is needed. Also, we must add\n> many dead pages to lead squeeze operation. Final page seems OK to smaller value.)\n>\n> I compared the execution time before/after patching, and they are not so different\n> (1077 ms -> 1125 ms).\n>\n\nIn my environment, it increased from 375ms to 385ms (median of five\nruns). I think it is acceptable especially if it increases code\ncoverage. Can you once check that?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 30 Nov 2023 15:41:19 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is this a problem in GenericXLogFinish()?" }, { "msg_contents": "Hello Kuroda-san,\n\n30.11.2023 10:28, Hayato Kuroda (Fujitsu) wrote:\n> Again, good catch. Here is my analysis and fix patch.\n> I think it is sufficient to add an initialization for writebuf.\n\nI agree with the change. It aligns hash_xlog_squeeze_page() with\nhash_xlog_move_page_contents() in regard to initialization of writebuf.\nInterestingly enough, the discrepancy exists since these functions\nappeared with c11453ce0, and I can't see what justified adding the\ninitialization inside hash_xlog_move_page_contents() only.\nSo I think that if this doesn't affect performance, having aligned coding\n(writebuf initialized in both functions) is better.\n\n> I want to add a test case for it as well. I've tested on my env and found that proposed\n> tuples seems sufficient.\n> (We must fill the primary bucket, so initial 500 is needed. Also, we must add\n> many dead pages to lead squeeze operation. Final page seems OK to smaller value.)\n>\n> I compared the execution time before/after patching, and they are not so different\n> (1077 ms -> 1125 ms).\n>\n> How do you think?\n\nI can confirm that the test case proposed demonstrates what is expected\nand the duration increase is tolerable, as to me.\n\nBest regards,\nAlexander\n\n\n", "msg_date": "Thu, 30 Nov 2023 14:00:01 +0300", "msg_from": "Alexander Lakhin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is this a problem in GenericXLogFinish()?" }, { "msg_contents": "Dear Amit,\r\n\r\n> I think it is acceptable especially if it increases code\r\n> coverage. Can you once check that?\r\n\r\nPSA the screen shot. I did \"PROVE_TESTS=\"t/027*\" make check\" in src/test/recovery, and\r\ngenerated a report.\r\nLine 661 was not hit in the HEAD, but the screen showed that it was executed.\r\n\r\n[1]: https://coverage.postgresql.org/src/backend/access/hash/hash_xlog.c.gcov.html#661\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED", "msg_date": "Thu, 30 Nov 2023 11:29:17 +0000", "msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: Is this a problem in GenericXLogFinish()?" }, { "msg_contents": "On Thu, Nov 30, 2023 at 4:30 PM Alexander Lakhin <[email protected]> wrote:\n>\n> 30.11.2023 10:28, Hayato Kuroda (Fujitsu) wrote:\n> > Again, good catch. Here is my analysis and fix patch.\n> > I think it is sufficient to add an initialization for writebuf.\n>\n> I agree with the change.\n>\n\nPushed!\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 1 Dec 2023 15:27:33 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is this a problem in GenericXLogFinish()?" }, { "msg_contents": "On Fri, Dec 01, 2023 at 03:27:33PM +0530, Amit Kapila wrote:\n> Pushed!\n\nAmit, this has been applied as of 861f86beea1c, and I got pinged about\nthe fact this triggers inconsistencies because we always set the LSN\nof the write buffer (wbuf in _hash_freeovflpage) but\nXLogRegisterBuffer() would *not* be called when the two following\nconditions happen:\n- When xlrec.ntups <= 0.\n- When !xlrec.is_prim_bucket_same_wrt && !xlrec.is_prev_bucket_same_wrt\n\nAnd it seems to me that there is still a bug here: there should be no\npoint in setting the LSN on the write buffer if we don't register it\nin WAL at all, no?\n--\nMichael", "msg_date": "Fri, 2 Feb 2024 16:10:12 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is this a problem in GenericXLogFinish()?" }, { "msg_contents": "On Fri, Feb 2, 2024 at 12:40 PM Michael Paquier <[email protected]> wrote:\n>\n> Amit, this has been applied as of 861f86beea1c, and I got pinged about\n> the fact this triggers inconsistencies because we always set the LSN\n> of the write buffer (wbuf in _hash_freeovflpage) but\n> XLogRegisterBuffer() would *not* be called when the two following\n> conditions happen:\n> - When xlrec.ntups <= 0.\n> - When !xlrec.is_prim_bucket_same_wrt && !xlrec.is_prev_bucket_same_wrt\n>\n> And it seems to me that there is still a bug here: there should be no\n> point in setting the LSN on the write buffer if we don't register it\n> in WAL at all, no?\n>\n\nRight, I see the problem. I'll look into it further.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Sat, 3 Feb 2024 11:47:52 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is this a problem in GenericXLogFinish()?" }, { "msg_contents": "Dear Michael, Amit,\n\n> \n> Amit, this has been applied as of 861f86beea1c, and I got pinged about\n> the fact this triggers inconsistencies because we always set the LSN\n> of the write buffer (wbuf in _hash_freeovflpage) but\n> XLogRegisterBuffer() would *not* be called when the two following\n> conditions happen:\n> - When xlrec.ntups <= 0.\n> - When !xlrec.is_prim_bucket_same_wrt && !xlrec.is_prev_bucket_same_wrt\n> \n> And it seems to me that there is still a bug here: there should be no\n> point in setting the LSN on the write buffer if we don't register it\n> in WAL at all, no?\n\nThanks for pointing out, I agreed your saying. PSA the patch for diagnosing the\nissue.\n\nThis patch can avoid the inconsistency due to the LSN setting and output a debug\nLOG when we met such a case. I executed hash_index.sql and confirmed the log was\noutput [1]. This meant that current test has already had a workload which meets below\nconditions:\n\n - the overflow page has no tuples (xlrec.ntups is 0),\n - to-be-written page - wbuf - is not the primary (xlrec.is_prim_bucket_same_wrt\n is false), and\n - to-be-written buffer is not next to the overflow page\n (xlrec.is_prev_bucket_same_wrt is false)\n\nSo, I think my patch (after removing elog(...) part) can fix the issue. Thought?\n\n[1]:\n```\nLOG: XXX: is_wbuf_registered: false\nCONTEXT: while vacuuming index \"hash_cleanup_index\" of relation \"public.hash_cleanup_heap\"\nSTATEMENT: VACUUM hash_cleanup_heap;\n```\n\nBest Regards,\nHayato Kuroda\nFUJITSU LIMITED\nhttps://www.fujitsu.com/", "msg_date": "Mon, 5 Feb 2024 04:29:57 +0000", "msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: Is this a problem in GenericXLogFinish()?" }, { "msg_contents": "On Mon, Feb 5, 2024 at 10:00 AM Hayato Kuroda (Fujitsu)\n<[email protected]> wrote:\n>\n> >\n> > Amit, this has been applied as of 861f86beea1c, and I got pinged about\n> > the fact this triggers inconsistencies because we always set the LSN\n> > of the write buffer (wbuf in _hash_freeovflpage) but\n> > XLogRegisterBuffer() would *not* be called when the two following\n> > conditions happen:\n> > - When xlrec.ntups <= 0.\n> > - When !xlrec.is_prim_bucket_same_wrt && !xlrec.is_prev_bucket_same_wrt\n> >\n> > And it seems to me that there is still a bug here: there should be no\n> > point in setting the LSN on the write buffer if we don't register it\n> > in WAL at all, no?\n>\n> Thanks for pointing out, I agreed your saying. PSA the patch for diagnosing the\n> issue.\n>\n\n@@ -692,6 +697,9 @@ _hash_freeovflpage(Relation rel, Buffer bucketbuf,\nBuffer ovflbuf,\n if (!xlrec.is_prev_bucket_same_wrt)\n wbuf_flags |= REGBUF_NO_CHANGE;\n XLogRegisterBuffer(1, wbuf, wbuf_flags);\n+\n+ /* Track the registration status for later use */\n+ wbuf_registered = true;\n }\n\n XLogRegisterBuffer(2, ovflbuf, REGBUF_STANDARD);\n@@ -719,7 +727,12 @@ _hash_freeovflpage(Relation rel, Buffer\nbucketbuf, Buffer ovflbuf,\n\n recptr = XLogInsert(RM_HASH_ID, XLOG_HASH_SQUEEZE_PAGE);\n\n- PageSetLSN(BufferGetPage(wbuf), recptr);\n+ /* Set LSN to wbuf page buffer only when it is being registered */\n+ if (wbuf_registered)\n+ PageSetLSN(BufferGetPage(wbuf), recptr);\n\nWhy set LSN when the page is not modified (say when we use the flag\nREGBUF_NO_CHANGE)? I think you need to use a flag mod_wbuf and set it\nin appropriate places during register buffer calls.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 5 Feb 2024 12:10:52 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is this a problem in GenericXLogFinish()?" }, { "msg_contents": "Dear Amit,\r\n\r\n> \r\n> @@ -692,6 +697,9 @@ _hash_freeovflpage(Relation rel, Buffer bucketbuf,\r\n> Buffer ovflbuf,\r\n> if (!xlrec.is_prev_bucket_same_wrt)\r\n> wbuf_flags |= REGBUF_NO_CHANGE;\r\n> XLogRegisterBuffer(1, wbuf, wbuf_flags);\r\n> +\r\n> + /* Track the registration status for later use */\r\n> + wbuf_registered = true;\r\n> }\r\n> \r\n> XLogRegisterBuffer(2, ovflbuf, REGBUF_STANDARD);\r\n> @@ -719,7 +727,12 @@ _hash_freeovflpage(Relation rel, Buffer\r\n> bucketbuf, Buffer ovflbuf,\r\n> \r\n> recptr = XLogInsert(RM_HASH_ID, XLOG_HASH_SQUEEZE_PAGE);\r\n> \r\n> - PageSetLSN(BufferGetPage(wbuf), recptr);\r\n> + /* Set LSN to wbuf page buffer only when it is being registered */\r\n> + if (wbuf_registered)\r\n> + PageSetLSN(BufferGetPage(wbuf), recptr);\r\n> \r\n> Why set LSN when the page is not modified (say when we use the flag\r\n> REGBUF_NO_CHANGE)? I think you need to use a flag mod_wbuf and set it\r\n> in appropriate places during register buffer calls.\r\n\r\nYou are right. Based on the previous discussions, PageSetLSN() must be called\r\nafter the MakeBufferDirty(). REGBUF_NO_CHANGE has been introduced for skipping\r\nthese requirements. Definitevely, no_change buffers must not be PageSetLSN()'d.\r\nOther pages, e.g., metabuf, has already been followed the rule.\r\n\r\nI updated the patch based on the requirement.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\nhttps://www.fujitsu.com/", "msg_date": "Mon, 5 Feb 2024 07:57:09 +0000", "msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: Is this a problem in GenericXLogFinish()?" }, { "msg_contents": "On Mon, Feb 05, 2024 at 07:57:09AM +0000, Hayato Kuroda (Fujitsu) wrote:\n> You are right. Based on the previous discussions, PageSetLSN() must be called\n> after the MakeBufferDirty(). REGBUF_NO_CHANGE has been introduced for skipping\n> these requirements. Definitevely, no_change buffers must not be PageSetLSN()'d.\n> Other pages, e.g., metabuf, has already been followed the rule.\n\nAt quick glance, this v2 seems kind of right to me: you are setting\nthe page LSN only when the page is registered in the record and\nactually dirtied.\n--\nMichael", "msg_date": "Mon, 5 Feb 2024 17:03:15 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is this a problem in GenericXLogFinish()?" }, { "msg_contents": "On Mon, Feb 5, 2024 at 1:33 PM Michael Paquier <[email protected]> wrote:\n>\n> On Mon, Feb 05, 2024 at 07:57:09AM +0000, Hayato Kuroda (Fujitsu) wrote:\n> > You are right. Based on the previous discussions, PageSetLSN() must be called\n> > after the MakeBufferDirty(). REGBUF_NO_CHANGE has been introduced for skipping\n> > these requirements. Definitevely, no_change buffers must not be PageSetLSN()'d.\n> > Other pages, e.g., metabuf, has already been followed the rule.\n>\n> At quick glance, this v2 seems kind of right to me: you are setting\n> the page LSN only when the page is registered in the record and\n> actually dirtied.\n>\n\nThanks for the report and looking into it. Pushed!\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 7 Feb 2024 14:08:42 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is this a problem in GenericXLogFinish()?" }, { "msg_contents": "On Wed, Feb 07, 2024 at 02:08:42PM +0530, Amit Kapila wrote:\n> Thanks for the report and looking into it. Pushed!\n\nThanks Amit for the commit and Kuroda-san for the patch!\n--\nMichael", "msg_date": "Wed, 7 Feb 2024 18:42:21 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is this a problem in GenericXLogFinish()?" }, { "msg_contents": "On Wed, Feb 07, 2024 at 06:42:21PM +0900, Michael Paquier wrote:\n> On Wed, Feb 07, 2024 at 02:08:42PM +0530, Amit Kapila wrote:\n> > Thanks for the report and looking into it. Pushed!\n> \n> Thanks Amit for the commit and Kuroda-san for the patch!\n\nI have been pinged about this thread and I should have looked a bit\ncloser, but wait a minute here.\n\nThere is still some divergence between the code path of\n_hash_freeovflpage() and the replay in hash_xlog_squeeze_page() when\nsqueezing a page: we do not set the LSN of the write buffer if\n(xlrec.ntups <= 0 && xlrec.is_prim_bucket_same_wrt &&\n!xlrec.is_prev_bucket_same_wrt) when generating the squeeze record,\nbut at replay we call PageSetLSN() on the write buffer and mark it\ndirty in this case. Isn't that incorrect? It seems to me that we\nshould be able to make the conditional depending on the state of the\nxl_hash_squeeze_page record, no?\n--\nMichael", "msg_date": "Fri, 5 Apr 2024 13:26:29 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is this a problem in GenericXLogFinish()?" }, { "msg_contents": "On Fri, Apr 5, 2024 at 9:56 AM Michael Paquier <[email protected]> wrote:\n>\n> On Wed, Feb 07, 2024 at 06:42:21PM +0900, Michael Paquier wrote:\n> > On Wed, Feb 07, 2024 at 02:08:42PM +0530, Amit Kapila wrote:\n> > > Thanks for the report and looking into it. Pushed!\n> >\n> > Thanks Amit for the commit and Kuroda-san for the patch!\n>\n> I have been pinged about this thread and I should have looked a bit\n> closer, but wait a minute here.\n>\n> There is still some divergence between the code path of\n> _hash_freeovflpage() and the replay in hash_xlog_squeeze_page() when\n> squeezing a page: we do not set the LSN of the write buffer if\n> (xlrec.ntups <= 0 && xlrec.is_prim_bucket_same_wrt &&\n> !xlrec.is_prev_bucket_same_wrt) when generating the squeeze record,\n> but at replay we call PageSetLSN() on the write buffer and mark it\n> dirty in this case. Isn't that incorrect?\n>\n\nAgreed. We will try to reproduce this.\n\n> It seems to me that we\n> should be able to make the conditional depending on the state of the\n> xl_hash_squeeze_page record, no?\n>\n\nI think we can have a flag like mod_buf and set it in both the\nconditions if (xldata->ntups > 0) and if\n(xldata->is_prev_bucket_same_wrt). If the flag is set then we can set\nthe LSN and mark buffer dirty.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 5 Apr 2024 11:17:03 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is this a problem in GenericXLogFinish()?" }, { "msg_contents": "Dear Michael,\n\n> There is still some divergence between the code path of\n> _hash_freeovflpage() and the replay in hash_xlog_squeeze_page() when\n> squeezing a page: we do not set the LSN of the write buffer if\n> (xlrec.ntups <= 0 && xlrec.is_prim_bucket_same_wrt &&\n> !xlrec.is_prev_bucket_same_wrt) when generating the squeeze record,\n> but at replay we call PageSetLSN() on the write buffer and mark it\n> dirty in this case. Isn't that incorrect? It seems to me that we\n> should be able to make the conditional depending on the state of the\n> xl_hash_squeeze_page record, no?\n\nThanks for pointing out. Yes, we fixed a behavior by the commit aa5edbe37,\nbut we missed the redo case. I made a fix patch based on the suggestion [1].\n\nBelow part contained my analysis and how I reproduced.\nI posted for clarifying the issue to others.\n\n\n=====\n\n## Pointed issue\n\nAssuming the case\n\t- xlrec.ntups is 0,\n\t- xlrec.is_prim_bucket_same_wrt is true, and \n\t- xlrec.is_prev_bucket_same_wrt is false.\nThis meant that there are several overflow pages and the tail deadpage is removing.\nIn this case, the primary page is not have to be modified.\n\nWhile doing the normal operation, the removal is done in _hash_freeovflpage().\nIf above condition is met, mod_wbuf is not set to true so PageSetLSN() is skipped.\n\nWhile doing the recovery, the squeeze and removal is done in hash_xlog_squeeze_page().\nIn this function PageSetLSN() is set unconditionally.\nHe said in this case the PageSetLSN should be avoided as well.\n\n## Analysis\n\nIIUC there is the same issue with pointed by [2].\nHe said the PageSetLSN() should be set when the page is really modified.\nIn the discussing case, wbug is not modified (just removing the tail entry) so that no need\nto assign LSN to it. However, we might miss to update the redo case as well.\n\n## How to reproduce\n\nI could reproduce the case below steps.\n1. Added the debug log like [3]\n2. Constructed a physical replication.\n3. Ran hash_index.sql\n4. Found the added debug log.\n\n[1]: https://www.postgresql.org/message-id/CAA4eK1%2BayneM-8mSRC0iWpDMnm39EwDoqgiOCSqrrMLcdnUnAA%40mail.gmail.com\n[2]: https://www.postgresql.org/message-id/ZbyVVG_7eW3YD5-A%40paquier.xyz\n[3]:\n```\n--- a/src/backend/access/hash/hash_xlog.c\n+++ b/src/backend/access/hash/hash_xlog.c\n@@ -713,6 +713,11 @@ hash_xlog_squeeze_page(XLogReaderState *record)\n writeopaque->hasho_nextblkno = xldata->nextblkno;\n }\n\n+ if (xldata->ntups == 0 &&\n+ xldata->is_prim_bucket_same_wrt &&\n+ !xldata->is_prev_bucket_same_wrt)\n+ elog(LOG, \"XXX no need to set PageSetLSN\");\n+\n```\n\nBest Regards,\nHayato Kuroda\nFUJITSU LIMITED\nhttps://www.fujitsu.com/", "msg_date": "Fri, 5 Apr 2024 06:22:58 +0000", "msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: Is this a problem in GenericXLogFinish()?" }, { "msg_contents": "On Fri, Apr 05, 2024 at 01:26:29PM +0900, Michael Paquier wrote:\n> On Wed, Feb 07, 2024 at 06:42:21PM +0900, Michael Paquier wrote:\n>> On Wed, Feb 07, 2024 at 02:08:42PM +0530, Amit Kapila wrote:\n>> > Thanks for the report and looking into it. Pushed!\n>> \n>> Thanks Amit for the commit and Kuroda-san for the patch!\n> \n> I have been pinged about this thread and I should have looked a bit\n> closer, but wait a minute here.\n\nI have forgotten to mention that Zubeyr Eryilmaz, a colleague, should\nbe credited here.\n--\nMichael", "msg_date": "Sat, 6 Apr 2024 08:45:08 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is this a problem in GenericXLogFinish()?" }, { "msg_contents": "On Fri, Apr 05, 2024 at 06:22:58AM +0000, Hayato Kuroda (Fujitsu) wrote:\n> Thanks for pointing out. Yes, we fixed a behavior by the commit aa5edbe37,\n> but we missed the redo case. I made a fix patch based on the suggestion [1].\n\n+ bool mod_buf = false;\n\nPerhaps you could name that mod_wbuf, to be consistent with the WAL\ninsert path.\n\nI'm slightly annoyed by the fact that there is no check on\nis_prim_bucket_same_wrt for buffer 1 in the BLK_NEEDS_REDO case to\nshow the symmetry between the insert and replay paths. Shouldn't\nthere be at least an assert for that in the branch when there are no\ntuples (imagine an else to cover xldata->ntups == 0)? I mean between\njust before updating the hash page's opaque area when\nis_prev_bucket_same_wrt.\n\nI've been thinking about ways to make such cases detectable in an\nautomated fashion. The best choice would be\nverifyBackupPageConsistency(), just after RestoreBlockImage() on the\ncopy of the block image stored in WAL before applying the page masking\nwhich would mask the LSN. A problem, unfortunately, is that we would\nneed to transfer the knowledge of REGBUF_NO_CHANGE as a new BKPIMAGE_*\nflag so we would be able to track if the block rebuilt from the record\nhas the *same* LSN as the copy used for the consistency check. So\nthis edge consistency case would come at a cost, I am afraid, and the\n8 bits of bimg_info are precious :/\n--\nMichael", "msg_date": "Tue, 9 Apr 2024 15:53:08 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is this a problem in GenericXLogFinish()?" }, { "msg_contents": "Dear Michael,\n\n> On Fri, Apr 05, 2024 at 06:22:58AM +0000, Hayato Kuroda (Fujitsu) wrote:\n> > Thanks for pointing out. Yes, we fixed a behavior by the commit aa5edbe37,\n> > but we missed the redo case. I made a fix patch based on the suggestion [1].\n> \n> + bool mod_buf = false;\n> \n> Perhaps you could name that mod_wbuf, to be consistent with the WAL\n> insert path.\n\nRight, fixed.\n\n> I'm slightly annoyed by the fact that there is no check on\n> is_prim_bucket_same_wrt for buffer 1 in the BLK_NEEDS_REDO case to\n> show the symmetry between the insert and replay paths. Shouldn't\n> there be at least an assert for that in the branch when there are no\n> tuples (imagine an else to cover xldata->ntups == 0)? I mean between\n> just before updating the hash page's opaque area when\n> is_prev_bucket_same_wrt.\n\nIndeed, added an Assert() in else part. Was it same as your expectation?\n\n> I've been thinking about ways to make such cases detectable in an\n> automated fashion. The best choice would be\n> verifyBackupPageConsistency(), just after RestoreBlockImage() on the\n> copy of the block image stored in WAL before applying the page masking\n> which would mask the LSN. A problem, unfortunately, is that we would\n> need to transfer the knowledge of REGBUF_NO_CHANGE as a new BKPIMAGE_*\n> flag so we would be able to track if the block rebuilt from the record\n> has the *same* LSN as the copy used for the consistency check. So\n> this edge consistency case would come at a cost, I am afraid, and the\n> 8 bits of bimg_info are precious :/\n\nI could not decide that the change has more benefit than its cost, so I did\nnothing for it.\n\nBest Regards,\nHayato Kuroda\nFUJITSU LIMITED\nhttps://www.fujitsu.com/", "msg_date": "Tue, 9 Apr 2024 10:25:36 +0000", "msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: Is this a problem in GenericXLogFinish()?" }, { "msg_contents": "On Tue, Apr 09, 2024 at 10:25:36AM +0000, Hayato Kuroda (Fujitsu) wrote:\n>> On Fri, Apr 05, 2024 at 06:22:58AM +0000, Hayato Kuroda (Fujitsu) wrote:\n>> I'm slightly annoyed by the fact that there is no check on\n>> is_prim_bucket_same_wrt for buffer 1 in the BLK_NEEDS_REDO case to\n>> show the symmetry between the insert and replay paths. Shouldn't\n>> there be at least an assert for that in the branch when there are no\n>> tuples (imagine an else to cover xldata->ntups == 0)? I mean between\n>> just before updating the hash page's opaque area when\n>> is_prev_bucket_same_wrt.\n> \n> Indeed, added an Assert() in else part. Was it same as your expectation?\n\nYep, close enough. Thanks to the insert path we know that there will\nbe no tuples if (is_prim_bucket_same_wrt || is_prev_bucket_same_wrt),\nand the replay path where the assertion is added.\n\n>> I've been thinking about ways to make such cases detectable in an\n>> automated fashion. The best choice would be\n>> verifyBackupPageConsistency(), just after RestoreBlockImage() on the\n>> copy of the block image stored in WAL before applying the page masking\n>> which would mask the LSN. A problem, unfortunately, is that we would\n>> need to transfer the knowledge of REGBUF_NO_CHANGE as a new BKPIMAGE_*\n>> flag so we would be able to track if the block rebuilt from the record\n>> has the *same* LSN as the copy used for the consistency check. So\n>> this edge consistency case would come at a cost, I am afraid, and the\n>> 8 bits of bimg_info are precious :/\n> \n> I could not decide that the change has more benefit than its cost, so I did\n> nothing for it.\n\nIt does not prevent doing an implementation that can be used for some\ntest validation in the context of this thread. Using this idea, I\nhave done the attached 0002. This is not something to merge into the\ntree, of course, just a toy to:\n- Validate the fix for the problem we know.\n- More braodly, check if there are other areas covered by the main\nregression test suite if there are other problems.\n\nAnd from what I can see, applying 0002 without 0001 causes the\nfollowing test to fail as the standby chokes on a inconsistent LSN\nwith a FATAL because the LSN of the apply page coming from the primary\nand the LSN saved in the page replayed don't match. Here is a command\nto reproduce the problem:\ncd src/test/recovery/ && \\\n PG_TEST_EXTRA=wal_consistency_checking \\\n PROVE_TESTS=t/027_stream_regress.pl make check\n\nAnd then in the logs you'd see stuff like:\n2024-04-10 16:52:21.844 JST [2437] FATAL: inconsistent page LSN\nreplayed 0/A7E5CD18 primary 0/A7E56348\n2024-04-10 16:52:21.844 JST [2437] CONTEXT: WAL redo at 0/A7E59F68\nfor Hash/SQUEEZE_PAGE: prevblkno 28, nextblkno 4294967295, ntups 0,\nis_primary T; blkref #1: rel 1663/16384/25434, blk 7 FPW; blkref #2:\nrel 1663/16384/25434, blk 29 FPW; blkref #3: rel 1663/16384/25434, blk\n28 FPW; blkref #5: rel 1663/16384/25434, blk 9 FPW; blkref #6: rel\n1663/16384/25434, blk 0 FPW\n\nI don't see other areas with a similar problem, at the extent of the\ncore regression tests, that is to say. That's my way to say that your\npatch looks good to me and that I'm planning to apply it to fix the\nissue.\n\nThis shows me a more interesting issue unrelated to this thread:\n027_stream_regress.pl would be stuck if the standby finds an\ninconsistent page under wal_consistency_checking. This needs to be\nfixed before I'm able to create a buildfarm animal with this setup.\nI'll spawn a new thread about that tomorrow.\n--\nMichael", "msg_date": "Wed, 10 Apr 2024 16:57:43 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is this a problem in GenericXLogFinish()?" }, { "msg_contents": "On Wed, Apr 10, 2024 at 1:27 PM Michael Paquier <[email protected]> wrote:\n>\n> On Tue, Apr 09, 2024 at 10:25:36AM +0000, Hayato Kuroda (Fujitsu) wrote:\n> >> On Fri, Apr 05, 2024 at 06:22:58AM +0000, Hayato Kuroda (Fujitsu) wrote:\n> >> I'm slightly annoyed by the fact that there is no check on\n> >> is_prim_bucket_same_wrt for buffer 1 in the BLK_NEEDS_REDO case to\n> >> show the symmetry between the insert and replay paths. Shouldn't\n> >> there be at least an assert for that in the branch when there are no\n> >> tuples (imagine an else to cover xldata->ntups == 0)? I mean between\n> >> just before updating the hash page's opaque area when\n> >> is_prev_bucket_same_wrt.\n> >\n> > Indeed, added an Assert() in else part. Was it same as your expectation?\n>\n> Yep, close enough. Thanks to the insert path we know that there will\n> be no tuples if (is_prim_bucket_same_wrt || is_prev_bucket_same_wrt),\n> and the replay path where the assertion is added.\n>\n\nIt is fine to have an assertion for this path.\n\n+ else\n+ {\n+ /*\n+ * See _hash_freeovflpage() which has a similar assertion when\n+ * there are no tuples.\n+ */\n+ Assert(xldata->is_prim_bucket_same_wrt ||\n+ xldata->is_prev_bucket_same_wrt);\n\nI can understand this comment as I am aware of this code but not sure\nit would be equally easy for the people first time looking at this\ncode. One may try to find the equivalent assertion in\n_hash_freeovflpage(). The alternative could be: \"Ensure that the\nrequired flags are set when there are no tuples. See\n_hash_freeovflpage().\". I am also fine if you prefer to go with your\nproposed comment.\n\nOtherwise, the patch looks good to me.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 10 Apr 2024 15:28:22 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is this a problem in GenericXLogFinish()?" }, { "msg_contents": "On Wed, Apr 10, 2024 at 03:28:22PM +0530, Amit Kapila wrote:\n> I can understand this comment as I am aware of this code but not sure\n> it would be equally easy for the people first time looking at this\n> code. One may try to find the equivalent assertion in\n> _hash_freeovflpage(). The alternative could be: \"Ensure that the\n> required flags are set when there are no tuples. See\n> _hash_freeovflpage().\". I am also fine if you prefer to go with your\n> proposed comment.\n\nYes, I can see your point about why that's confusing. Your suggestion\nis much better, so after a second look I've used your version of the\ncomment and applied the patch on HEAD.\n\nI am wondering if we have other problems like that with dirty buffers\nat replay. Perhaps I should put my nose more onto the replay paths\nand extend these automated checks with wal_consistency_checking.\n--\nMichael", "msg_date": "Thu, 11 Apr 2024 09:23:34 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is this a problem in GenericXLogFinish()?" } ]
[ { "msg_contents": "Don't trust unvalidated xl_tot_len.\n\nxl_tot_len comes first in a WAL record. Usually we don't trust it to be\nthe true length until we've validated the record header. If the record\nheader was split across two pages, previously we wouldn't do the\nvalidation until after we'd already tried to allocate enough memory to\nhold the record, which was bad because it might actually be garbage\nbytes from a recycled WAL file, so we could try to allocate a lot of\nmemory. Release 15 made it worse.\n\nSince 70b4f82a4b5, we'd at least generate an end-of-WAL condition if the\ngarbage 4 byte value happened to be > 1GB, but we'd still try to\nallocate up to 1GB of memory bogusly otherwise. That was an\nimprovement, but unfortunately release 15 tries to allocate another\nobject before that, so you could get a FATAL error and recovery could\nfail.\n\nWe can fix both variants of the problem more fundamentally using\npre-existing page-level validation, if we just re-order some logic.\n\nThe new order of operations in the split-header case defers all memory\nallocation based on xl_tot_len until we've read the following page. At\nthat point we know that its first few bytes are not recycled data, by\nchecking its xlp_pageaddr, and that its xlp_rem_len agrees with\nxl_tot_len on the preceding page. That is strong evidence that\nxl_tot_len was truly the start of a record that was logged.\n\nThis problem was most likely to occur on a standby, because\nwalreceiver.c recycles WAL files without zeroing out trailing regions of\neach page. We could fix that too, but it wouldn't protect us from rare\ncrash scenarios where the trailing zeroes don't make it to disk.\n\nWith reliable xl_tot_len validation in place, the ancient policy of\nconsidering malloc failure to indicate corruption at end-of-WAL seems\nquite surprising, but changing that is left for later work.\n\nAlso included is a new TAP test to exercise various cases of end-of-WAL\ndetection by writing contrived data into the WAL from Perl.\n\nBack-patch to 12. We decided not to put this change into the final\nrelease of 11.\n\nAuthor: Thomas Munro <[email protected]>\nAuthor: Michael Paquier <[email protected]>\nReported-by: Alexander Lakhin <[email protected]>\nReviewed-by: Noah Misch <[email protected]> (the idea, not the code)\nReviewed-by: Michael Paquier <[email protected]>\nReviewed-by: Sergei Kornilov <[email protected]>\nReviewed-by: Alexander Lakhin <[email protected]>\nDiscussion: https://postgr.es/m/17928-aa92416a70ff44a2%40postgresql.org\n\nBranch\n------\nmaster\n\nDetails\n-------\nhttps://git.postgresql.org/pg/commitdiff/bae868caf222ca01c569b61146fc2e398427127a\n\nModified Files\n--------------\nsrc/backend/access/transam/xlogreader.c | 123 +++++----\nsrc/test/perl/PostgreSQL/Test/Utils.pm | 41 +++\nsrc/test/recovery/meson.build | 1 +\nsrc/test/recovery/t/039_end_of_wal.pl | 460 ++++++++++++++++++++++++++++++++\n4 files changed, 569 insertions(+), 56 deletions(-)", "msg_date": "Fri, 22 Sep 2023 22:51:44 +0000", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": true, "msg_subject": "pgsql: Don't trust unvalidated xl_tot_len." }, { "msg_contents": "Re: Thomas Munro\n> Don't trust unvalidated xl_tot_len.\n\n> src/test/recovery/t/039_end_of_wal.pl | 460 ++++++++++++++++++++++++++++++++\n\nI haven't investigated the details yet, and it's not affecting the\nbuilds on apt.postgresql.org, but the Debian amd64 and i386 regression\ntests just failed this test on PG13 (11 and 15 are ok):\n\nt/039_end_of_wal.pl ..................\nDubious, test returned 2 (wstat 512, 0x200)\nNo subtests run\nTest Summary Report\n-------------------\nt/039_end_of_wal.pl (Wstat: 512 Tests: 0 Failed: 0)\n Non-zero exit status: 2\n Parse errors: No plan found in TAP output\nFiles=26, Tests=254, 105 wallclock secs ( 0.10 usr 0.02 sys + 19.58 cusr 11.02 csys = 30.72 CPU)\nResult: FAIL\nmake[2]: *** [Makefile:23: check] Error 1\n\nhttps://salsa.debian.org/postgresql/postgresql/-/jobs/4910354\nhttps://salsa.debian.org/postgresql/postgresql/-/jobs/4910355\n\nChristoph\n\n\n", "msg_date": "Fri, 10 Nov 2023 19:10:49 +0100", "msg_from": "Christoph Berg <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql: Don't trust unvalidated xl_tot_len." }, { "msg_contents": "Re: To Thomas Munro\n> I haven't investigated the details yet, and it's not affecting the\n> builds on apt.postgresql.org, but the Debian amd64 and i386 regression\n> tests just failed this test on PG13 (11 and 15 are ok):\n\nThat's on Debian bullseye, fwiw. (But the 13 build on apt.pg.o/bullseye passed.)\n\nChristoph\n\n\n", "msg_date": "Fri, 10 Nov 2023 19:13:12 +0100", "msg_from": "Christoph Berg <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql: Don't trust unvalidated xl_tot_len." }, { "msg_contents": "Re: To Thomas Munro\n> > src/test/recovery/t/039_end_of_wal.pl | 460 ++++++++++++++++++++++++++++++++\n> \n> I haven't investigated the details yet, and it's not affecting the\n> builds on apt.postgresql.org, but the Debian amd64 and i386 regression\n> tests just failed this test on PG13 (11 and 15 are ok):\n\n12 and 14 are also failing, now on Debian unstable. (Again, only in\nthe salsa.debian.org tests, not on apt.postgresql.org's buildds.)\n\n12 amd64: https://salsa.debian.org/postgresql/postgresql/-/jobs/4898857\n12 i386: https://salsa.debian.org/postgresql/postgresql/-/jobs/4898858\n\nThe tests there are running in Docker containers.\n\nChristoph\n\n\n", "msg_date": "Fri, 10 Nov 2023 22:42:26 +0100", "msg_from": "Christoph Berg <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql: Don't trust unvalidated xl_tot_len." }, { "msg_contents": "On Sat, Nov 11, 2023 at 10:42 AM Christoph Berg <[email protected]> wrote:\n> > I haven't investigated the details yet, and it's not affecting the\n> > builds on apt.postgresql.org, but the Debian amd64 and i386 regression\n> > tests just failed this test on PG13 (11 and 15 are ok):\n>\n> 12 and 14 are also failing, now on Debian unstable. (Again, only in\n> the salsa.debian.org tests, not on apt.postgresql.org's buildds.)\n>\n> 12 amd64: https://salsa.debian.org/postgresql/postgresql/-/jobs/4898857\n> 12 i386: https://salsa.debian.org/postgresql/postgresql/-/jobs/4898858\n>\n> The tests there are running in Docker containers.\n\nHmm. regress_log_039_end_of_wal says:\n\nNo such file or directory at\n/builds/postgresql/postgresql/debian/output/source_dir/build/../src/test/perl/TestLib.pm\nline 655.\n\nIn the 13 branch we see that's in the new scan_server_header()\nsubroutine where it tries to open the header, after asking pg_config\n--includedir-server where it lives. Hmm...\n\n\n", "msg_date": "Sat, 11 Nov 2023 11:05:48 +1300", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql: Don't trust unvalidated xl_tot_len." }, { "msg_contents": "Re: Thomas Munro\n> In the 13 branch we see that's in the new scan_server_header()\n> subroutine where it tries to open the header, after asking pg_config\n> --includedir-server where it lives. Hmm...\n\nIt's no ok to use pg_config at test time since `make install` might\nnot have been called yet:\n\nhttps://www.postgresql.org/message-id/[email protected]\nhttps://www.postgresql.org/message-id/YqkV/[email protected]\n\nChristoph\n\n\n", "msg_date": "Sat, 11 Nov 2023 12:53:37 +0100", "msg_from": "Christoph Berg <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql: Don't trust unvalidated xl_tot_len." }, { "msg_contents": "On Sun, Nov 12, 2023 at 12:53 AM Christoph Berg <[email protected]> wrote:\n> Re: Thomas Munro\n> > In the 13 branch we see that's in the new scan_server_header()\n> > subroutine where it tries to open the header, after asking pg_config\n> > --includedir-server where it lives. Hmm...\n>\n> It's no ok to use pg_config at test time since `make install` might\n> not have been called yet:\n>\n> https://www.postgresql.org/message-id/[email protected]\n> https://www.postgresql.org/message-id/YqkV/[email protected]\n\n[CC'ing Michael who was involved in that analysis and who also wrote\nthose bits of this commit]\n\nWe don't have an installation into the final --prefix, but we have\ntmp_install, surely? And the tests are run with PATH set to point to\ntmp_install's bin directory. It looks like it did actually find a\npg_config executable because otherwise we'd have hit die \"could not\nexecute pg_config\" and failed sooner. So now I'm wondering if the\npg_config it found gives the wrong answer for --includedir-server,\nbecause of Debian's local patches that insert a major version into\nsome paths. For example, maybe it's trying to look for\naccess/xlog_internal.h under tmp_install/usr/include/postgresql/server\nwhen it's really under tmp_install/usr/include/postgresql/13/server,\nor vice versa. But then why does that only happen on the salsa build,\nnot on the apt.postgresql.org one?\n\n\n", "msg_date": "Sun, 12 Nov 2023 07:17:02 +1300", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql: Don't trust unvalidated xl_tot_len." }, { "msg_contents": "Re: Thomas Munro\n> But then why does that only happen on the salsa build,\n> not on the apt.postgresql.org one?\n\nThe build chroots there have postgresql-NN already installed so\nextension builds don't have to download 7 PG versions over and over.\nMy guess would be that that's the difference and it's using some\npg_config from /usr/bin or /usr/lib/postgresql/*/bin.\n\nI can confirm that it's also failing in my local chroots if none of\nthe postgresql-* packages are preinstalled.\n\nChristoph\n\n\n", "msg_date": "Sat, 11 Nov 2023 19:26:59 +0100", "msg_from": "Christoph Berg <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql: Don't trust unvalidated xl_tot_len." }, { "msg_contents": "On Sun, Nov 12, 2023 at 7:27 AM Christoph Berg <[email protected]> wrote:\n> I can confirm that it's also failing in my local chroots if none of\n> the postgresql-* packages are preinstalled.\n\nIn your chroot after it fails, can you please find xlog_internal.h\nsomewhere under tmp_install and tell us the full path, and can you\nfind pg_config (however many of them there might be, I'm a little\nconfused on where and when Debian creates extra versioned variants)\nand tell us the full path, and also what --includedir-server prints,\nand can you also find regress_log_039_end_of_wal and confirm that it\ncontains a complaint about being unable to open a file, not a\ncomplaint about being unable to execute pg_config?\n\n\n", "msg_date": "Sun, 12 Nov 2023 08:00:59 +1300", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql: Don't trust unvalidated xl_tot_len." }, { "msg_contents": "Hi,\n\nOn 2023-11-12 07:17:02 +1300, Thomas Munro wrote:\n> On Sun, Nov 12, 2023 at 12:53 AM Christoph Berg <[email protected]> wrote:\n> > Re: Thomas Munro\n> > > In the 13 branch we see that's in the new scan_server_header()\n> > > subroutine where it tries to open the header, after asking pg_config\n> > > --includedir-server where it lives. Hmm...\n> >\n> > It's no ok to use pg_config at test time since `make install` might\n> > not have been called yet:\n> >\n> > https://www.postgresql.org/message-id/[email protected]\n> > https://www.postgresql.org/message-id/YqkV/[email protected]\n> \n> [CC'ing Michael who was involved in that analysis and who also wrote\n> those bits of this commit]\n> \n> We don't have an installation into the final --prefix, but we have\n> tmp_install, surely?\n\nYea, that should work and does work locally. I guess it'd fail though, if you\nsomehow ran the tests with NO_TEMP_INSTALL=1 or such.\n\n\n> And the tests are run with PATH set to point to\n> tmp_install's bin directory. It looks like it did actually find a\n> pg_config executable because otherwise we'd have hit die \"could not\n> execute pg_config\" and failed sooner. So now I'm wondering if the\n> pg_config it found gives the wrong answer for --includedir-server,\n> because of Debian's local patches that insert a major version into\n> some paths.\n\n From the second link above, the problem might more be that debian pg_config is\npatched to not be relocatable (huh?) - so it'd return an absolute path into\nthe final non-DESTDIR path. Which would fail, because the file isn't installed\nyet.\n\nIf that's the case, does that also mean that all the tests that are\nselectively enabled using check_pg_config() don't work in the debian context?\n\n\nI assume the reason to not use the source-tree access/xlog_internal.h here was\njust that it's nontrivial to find the top of the source tree form tap tests?\nIt's not hard to make it available... And as we already pass in top_builddir,\nit doesn't actually measurably further weaken usability of the tap framework\nin the pgxs context.\n\n\n> For example, maybe it's trying to look for access/xlog_internal.h under\n> tmp_install/usr/include/postgresql/server when it's really under\n> tmp_install/usr/include/postgresql/13/server, or vice versa. But then why\n> does that only happen on the salsa build, not on the apt.postgresql.org one?\n\n\nChristoph, can you apply a patch to emit a bit more information in that\nenvironment?\n\ndiff --git i/src/test/perl/PostgreSQL/Test/Utils.pm w/src/test/perl/PostgreSQL/Test/Utils.pm\nindex cd86897580c..3c588a41755 100644\n--- i/src/test/perl/PostgreSQL/Test/Utils.pm\n+++ w/src/test/perl/PostgreSQL/Test/Utils.pm\n@@ -722,7 +722,8 @@ sub scan_server_header\n chomp($stdout);\n $stdout =~ s/\\r$//;\n \n- open my $header_h, '<', \"$stdout/$header_path\" or die \"$!\";\n+ my $fname = \"$stdout/$header_path\";\n+ open my $header_h, '<', \"$fname\" or die \"could not open $fname: $!\";\n \n my @match = undef;\n while (<$header_h>)\n\nwould be helpful.\n\nI guess we should also apply something like that to our tree - printing a\nstringified errno without even a path/filename isn't very useful.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sat, 11 Nov 2023 11:43:25 -0800", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql: Don't trust unvalidated xl_tot_len." }, { "msg_contents": "That's a lot of questions :). Let me try:\n\n> In your chroot after it fails, can you please find xlog_internal.h\n> somewhere under tmp_install and tell us the full path, and can you\n\n./build/tmp_install/usr/include/postgresql/13/server/access/xlog_internal.h\n./src/include/access/xlog_internal.h\n\n> find pg_config (however many of them there might be, I'm a little\n> confused on where and when Debian creates extra versioned variants)\n\nThat's only after testing and `make install`:\nhttps://salsa.debian.org/postgresql/postgresql-common/-/blob/master/server/postgresql.mk#L219-225\n\n> and tell us the full path,\n\n./build/tmp_install/usr/lib/postgresql/13/bin/pg_config\n./build/src/bin/pg_config/pg_config\n\n> and also what --includedir-server prints,\n\n$ ./build/tmp_install/usr/lib/postgresql/13/bin/pg_config --includedir-server\n/usr/include/postgresql/13/server\n\n> and can you also find regress_log_039_end_of_wal and confirm that it\n> contains a complaint about being unable to open a file, not a\n> complaint about being unable to execute pg_config?\n\n$ cat ./build/src/test/recovery/tmp_check/log/regress_log_039_end_of_wal \nNo such file or directory at /home/myon/projects/postgresql/debian/13/build/../src/test/perl/TestLib.pm line 655.\n\n\nThe 13-bullseye version of the package still has the \"don't relocate\nme\" patch:\n\nhttps://salsa.debian.org/postgresql/postgresql/-/blob/13-bullseye/debian/patches/50-per-version-dirs.patch?ref_type=heads\n\nThe PGBINDIR mangling is exactly what is breaking the use case now.\nThe commit that removed that bit in the 15 branch explains why it was\nthere:\n\nhttps://salsa.debian.org/postgresql/postgresql/-/commit/a249c75e86fe8733b11c47630e4931c5c196e8da\n\nI can (and should) do the change also in the other branches, but from\nthe 2022 discussion, I had the impression there were more reasons to\nprefer static paths instead of calling pg_config from tmp_install.\n\nAfter all, this seems to be the only 2nd case of actually calling\npg_config from tests if I'm grepping for the right things - the other\nis check_pg_config() called from test/ssl/t/002_scram.pl. (I wonder\nwhy that's not failing as well.)\n\nChristoph\n\n\n", "msg_date": "Sat, 11 Nov 2023 21:03:27 +0100", "msg_from": "Christoph Berg <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql: Don't trust unvalidated xl_tot_len." }, { "msg_contents": "On Sun, Nov 12, 2023 at 9:03 AM Christoph Berg <[email protected]> wrote:\n> The PGBINDIR mangling is exactly what is breaking the use case now.\n> The commit that removed that bit in the 15 branch explains why it was\n> there:\n>\n> https://salsa.debian.org/postgresql/postgresql/-/commit/a249c75e86fe8733b11c47630e4931c5c196e8da\n>\n> I can (and should) do the change also in the other branches, but from\n> the 2022 discussion, I had the impression there were more reasons to\n> prefer static paths instead of calling pg_config from tmp_install.\n\nNo opinion on potential advantages to other approaches, but I don't\nsee why this way shouldn't be expected to work. So I hope you can\ndrop that diff.\n\n> After all, this seems to be the only 2nd case of actually calling\n> pg_config from tests if I'm grepping for the right things - the other\n> is check_pg_config() called from test/ssl/t/002_scram.pl. (I wonder\n> why that's not failing as well.)\n\nMaybe you aren't running the SSL tests?\n\n\n", "msg_date": "Sun, 12 Nov 2023 12:17:54 +1300", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql: Don't trust unvalidated xl_tot_len." }, { "msg_contents": "On Sun, Nov 12, 2023 at 12:17:54PM +1300, Thomas Munro wrote:\n> No opinion on potential advantages to other approaches, but I don't\n> see why this way shouldn't be expected to work. So I hope you can\n> drop that diff.\n\nAnother thing that could be done in stable branches is just to switch\n039_end_of_wal.pl to hardcoded values for $XLP_PAGE_MAGIC and\n$XLP_FIRST_IS_CONTRECORD. This is not going to change in a released\nversion anyway, so there's no real maintenance cost.\n--\nMichael", "msg_date": "Sun, 12 Nov 2023 09:22:20 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql: Don't trust unvalidated xl_tot_len." } ]
[ { "msg_contents": "I am not sure why REL_16_STABLE fails consistently as of ~4 days ago.\nIt seems like bad storage or something? Just now it happened also on\nHEAD. I wonder why it would be sensitive to the branch.\n\n\n", "msg_date": "Sat, 23 Sep 2023 13:53:47 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": true, "msg_subject": "Failures on gombessa -- EIO?" }, { "msg_contents": "Hi Thomas,\n\nI will check it today.\n\nRegards\nNikola\n\nOn Sat, 23 Sept 2023 at 04:54, Thomas Munro <[email protected]> wrote:\n\n> I am not sure why REL_16_STABLE fails consistently as of ~4 days ago.\n> It seems like bad storage or something? Just now it happened also on\n> HEAD. I wonder why it would be sensitive to the branch.\n>\n\nHi Thomas,I will check it today.RegardsNikolaOn Sat, 23 Sept 2023 at 04:54, Thomas Munro <[email protected]> wrote:I am not sure why REL_16_STABLE fails consistently as of ~4 days ago.\nIt seems like bad storage or something?  Just now it happened also on\nHEAD.  I wonder why it would be sensitive to the branch.", "msg_date": "Sat, 23 Sep 2023 17:29:04 +0300", "msg_from": "Nikola Ivanov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Failures on gombessa -- EIO?" } ]
[ { "msg_contents": "Attached test case demonstrates an issue with nbtree's mark/restore\ncode. All supported versions are affected.\n\nMy suspicion is that bugfix commit 70bc5833 missed some subtlety\naround what we need to do to make sure that the array keys stay \"in\nsync\" with the scan. I'll have time to debug the problem some more\ntomorrow.\n\nMy ScalarArrayOp project [1] seems unaffected by the bug, so I don't\nexpect it'll take long to get to the bottom of this. This is probably\ndue to its general design, and not any specific detail. The patch\nmakes the relationship between the current scan position and the\ncurrent array keys a great deal looser.\n\n[1] https://commitfest.postgresql.org/44/4455/\n-- \nPeter Geoghegan", "msg_date": "Fri, 22 Sep 2023 20:17:16 -0700", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": true, "msg_subject": "nbtree's ScalarArrayOp array mark/restore code appears to be buggy" }, { "msg_contents": "On Fri, Sep 22, 2023 at 8:17 PM Peter Geoghegan <[email protected]> wrote:\n> My suspicion is that bugfix commit 70bc5833 missed some subtlety\n> around what we need to do to make sure that the array keys stay \"in\n> sync\" with the scan. I'll have time to debug the problem some more\n> tomorrow.\n\nI've figured out what's going on here.\n\nIf I make my test case \"group by\" both of the indexed columns from the\ncomposite index (either index/table will do, since it's an equijoin),\na more detailed picture emerges that hints at the underlying problem:\n\n┌───────┬─────────┬─────────┐\n│ count │ small_a │ small_b │\n├───────┼─────────┼─────────┤\n│ 8,192 │ 1 │ 2 │\n│ 8,192 │ 1 │ 3 │\n│ 8,192 │ 1 │ 5 │\n│ 8,192 │ 1 │ 10 │\n│ 8,192 │ 1 │ 12 │\n│ 8,192 │ 1 │ 17 │\n│ 2,872 │ 1 │ 19 │\n└───────┴─────────┴─────────┘\n(7 rows)\n\nThe count for the final row is wrong. It should be 8,192, just like\nthe earlier counts for lower (small_a, small_b) groups. Notably, the\nissue is limited to the grouping that has the highest sort order. That\nstrongly hints that the problem has something to do with \"array\nwraparound\".\n\nThe query qual contains \"WHERE small_a IN (1, 3)\", so we'll \"wraps\naround\" from cur_elem index 1 (value 3) to cur_elem index 0 (value 1),\nwithout encountering any rows where small_a is 3 (because there aren't\nany in the index). That in itself isn't the problem. The problem is\nthat _bt_restore_array_keys() doesn't consider wraparound. It sees\nthat \"cur_elem == mark_elem\" for all array scan keys, and figues that\nit doesn't need to call _bt_preprocess_keys(). This is incorrect,\nsince the current set of search-type scan keys (the set most recently\noutput, during the last _bt_preprocess_keys() call) still have the\nvalue \"3\".\n\nThe fix for this should be fairly straightforward. We must teach\n_bt_restore_array_keys() to distinguish \"past the end of the array\"\nfrom \"after the start of the array\", so that doesn't spuriously skip a\nrequired call to _bt_preprocess_keys() . I already see that the\nproblem goes away once _bt_restore_array_keys() is made to call\n_bt_preprocess_keys() unconditionally, so I'm already fairly confident\nthat this will work.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Sat, 23 Sep 2023 11:47:48 -0700", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: nbtree's ScalarArrayOp array mark/restore code appears to be\n buggy" }, { "msg_contents": "On Sat, Sep 23, 2023 at 11:47 AM Peter Geoghegan <[email protected]> wrote:\n> The fix for this should be fairly straightforward. We must teach\n> _bt_restore_array_keys() to distinguish \"past the end of the array\"\n> from \"after the start of the array\", so that doesn't spuriously skip a\n> required call to _bt_preprocess_keys() . I already see that the\n> problem goes away once _bt_restore_array_keys() is made to call\n> _bt_preprocess_keys() unconditionally, so I'm already fairly confident\n> that this will work.\n\nAttached draft patch shows how this could work.\n\n_bt_restore_array_keys() has comments that seem to suppose that\ncalling _bt_preprocess_keys is fairly expensive, and something that's\nwell worth avoiding. But...is it, really? I wonder if we'd be better\noff just biting the bullet and always calling _bt_preprocess_keys\nhere. Could it really be such a big cost, compared to pinning and\nlocking the page in the btrestpos() path?\n\nThe current draft SAOP patch calls _bt_preprocess_keys() with a buffer\nlock held. This is very likely unsafe, so I'll need to come up with a\nprincipled approach (e.g. a stripped down, specialized version of\n_bt_preprocess_keys that's safe to call while holding a lock seems doable).\nI've been able to put that off for a few months now because it just\ndoesn't impact my microbenchmarks to any notable degree (and not just\nin those cases where we can use the \"numberOfKeys == 1\" fast path at\nthe start of _bt_preprocess_keys). This at least suggests that the cost of\nalways calling _bt_preprocess_keys in _bt_restore_array_keys might be\nacceptable.\n\n--\nPeter Geoghegan", "msg_date": "Sat, 23 Sep 2023 16:22:48 -0700", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: nbtree's ScalarArrayOp array mark/restore code appears to be\n buggy" }, { "msg_contents": "On Sat, Sep 23, 2023 at 4:22 PM Peter Geoghegan <[email protected]> wrote:\n> Attached draft patch shows how this could work.\n>\n> _bt_restore_array_keys() has comments that seem to suppose that\n> calling _bt_preprocess_keys is fairly expensive, and something that's\n> well worth avoiding. But...is it, really? I wonder if we'd be better\n> off just biting the bullet and always calling _bt_preprocess_keys\n> here.\n\nMy current plan is to commit something close to my original patch on\nWednesday or Thursday. My proposed fix is minimally invasive (it still\nallows _bt_restore_array_keys to avoid most calls to\n_bt_preprocess_keys), so I don't see any reason to delay acting here.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 25 Sep 2023 10:33:22 -0700", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: nbtree's ScalarArrayOp array mark/restore code appears to be\n buggy" } ]
[ { "msg_contents": "Hi,\n\nPer Coverity.\nCID 1518088 (#2 of 2): Improper use of negative value (NEGATIVE_RETURNS)\n\nThe function bms_singleton_member can returns a negative number.\n\n/*\n* Get a child rel for rel2 with the relids. See above comments.\n*/\nif (rel2_is_simple)\n{\nint varno = bms_singleton_member(child_relids2);\n\nchild_rel2 = find_base_rel(root, varno);\n}\n\nIt turns out that in the get_matching_part_pairs function (joinrels.c), the\nreturn of bms_singleton_member is passed to the find_base_rel function,\nwhich cannot receive a negative value.\n\nfind_base_rel is protected by an Assertion, which effectively indicates\nthat the error does not occur in tests and in DEBUG mode.\n\nBut this does not change the fact that bms_singleton_member can return a\nnegative value, which may occur on some production servers.\n\nFix by changing the Assertion into a real test, to protect the\nsimple_rel_array array.\n\nbest regards,\nRanier Vilela", "msg_date": "Sat, 23 Sep 2023 08:58:29 -0300", "msg_from": "Ranier Vilela <[email protected]>", "msg_from_op": true, "msg_subject": "Avoid a possible out-of-bounds access\n (src/backend/optimizer/util/relnode.c)" }, { "msg_contents": "Hi,\n\nOn Sat, Sep 23, 2023 at 9:59 PM Ranier Vilela <[email protected]> wrote:\n> Per Coverity.\n> CID 1518088 (#2 of 2): Improper use of negative value (NEGATIVE_RETURNS)\n>\n> The function bms_singleton_member can returns a negative number.\n>\n> /*\n> * Get a child rel for rel2 with the relids. See above comments.\n> */\n> if (rel2_is_simple)\n> {\n> int varno = bms_singleton_member(child_relids2);\n>\n> child_rel2 = find_base_rel(root, varno);\n> }\n>\n> It turns out that in the get_matching_part_pairs function (joinrels.c), the return of bms_singleton_member is passed to the find_base_rel function, which cannot receive a negative value.\n>\n> find_base_rel is protected by an Assertion, which effectively indicates that the error does not occur in tests and in DEBUG mode.\n>\n> But this does not change the fact that bms_singleton_member can return a negative value, which may occur on some production servers.\n>\n> Fix by changing the Assertion into a real test, to protect the simple_rel_array array.\n\nThanks for the report and patch! I will review the patch.\n\nBest regards,\nEtsuro Fujita\n\n\n", "msg_date": "Sun, 24 Sep 2023 13:50:39 +0900", "msg_from": "Etsuro Fujita <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Avoid a possible out-of-bounds access\n (src/backend/optimizer/util/relnode.c)" }, { "msg_contents": "On Sat, Sep 23, 2023 at 7:29 PM Ranier Vilela <[email protected]> wrote:\n>\n> Hi,\n>\n> Per Coverity.\n> CID 1518088 (#2 of 2): Improper use of negative value (NEGATIVE_RETURNS)\n>\n> The function bms_singleton_member can returns a negative number.\n>\n> /*\n> * Get a child rel for rel2 with the relids. See above comments.\n> */\n> if (rel2_is_simple)\n> {\n> int varno = bms_singleton_member(child_relids2);\n>\n> child_rel2 = find_base_rel(root, varno);\n> }\n>\n> It turns out that in the get_matching_part_pairs function (joinrels.c), the return of bms_singleton_member is passed to the find_base_rel function, which cannot receive a negative value.\n>\n> find_base_rel is protected by an Assertion, which effectively indicates that the error does not occur in tests and in DEBUG mode.\n>\n> But this does not change the fact that bms_singleton_member can return a negative value, which may occur on some production servers.\n>\n> Fix by changing the Assertion into a real test, to protect the simple_rel_array array.\n\nDo you have a scenario where bms_singleton_member() actually returns a\n-ve number OR it's just a possibility. bms_make_singleton() has an\nassertion at the end: Assert(result >= 0);\nbms_make_singleton(), bms_add_member() explicitly forbids negative\nvalues. It looks like we have covered all the places which can add a\nnegative value to a bitmapset. May be we are missing a place or two.\nIt will be good to investigate it.\n\nWhat you are suggesting may be ok, as is as well.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Mon, 25 Sep 2023 16:53:42 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Avoid a possible out-of-bounds access\n (src/backend/optimizer/util/relnode.c)" }, { "msg_contents": "Em seg., 25 de set. de 2023 às 08:23, Ashutosh Bapat <\[email protected]> escreveu:\n\n> On Sat, Sep 23, 2023 at 7:29 PM Ranier Vilela <[email protected]> wrote:\n> >\n> > Hi,\n> >\n> > Per Coverity.\n> > CID 1518088 (#2 of 2): Improper use of negative value (NEGATIVE_RETURNS)\n> >\n> > The function bms_singleton_member can returns a negative number.\n> >\n> > /*\n> > * Get a child rel for rel2 with the relids. See above comments.\n> > */\n> > if (rel2_is_simple)\n> > {\n> > int varno = bms_singleton_member(child_relids2);\n> >\n> > child_rel2 = find_base_rel(root, varno);\n> > }\n> >\n> > It turns out that in the get_matching_part_pairs function (joinrels.c),\n> the return of bms_singleton_member is passed to the find_base_rel function,\n> which cannot receive a negative value.\n> >\n> > find_base_rel is protected by an Assertion, which effectively indicates\n> that the error does not occur in tests and in DEBUG mode.\n> >\n> > But this does not change the fact that bms_singleton_member can return a\n> negative value, which may occur on some production servers.\n> >\n> > Fix by changing the Assertion into a real test, to protect the\n> simple_rel_array array.\n>\n> Do you have a scenario where bms_singleton_member() actually returns a\n> -ve number OR it's just a possibility.\n\nJust a possibility.\n\n\n> bms_make_singleton() has an\n> assertion at the end: Assert(result >= 0);\n> bms_make_singleton(), bms_add_member() explicitly forbids negative\n> values. It looks like we have covered all the places which can add a\n> negative value to a bitmapset. May be we are missing a place or two.\n> It will be good to investigate it.\n>\nI try to do the research, mostly, with runtime compilation.\nAs previously stated, the error does not appear in the tests.\nThat said, although Assert protects in most cases, that doesn't mean it\ncan't occur in a query, running on a server in production mode.\n\nNow thinking about what you said about Assertion in bms_make_singleton.\nI think it's nonsense, no?\nWhy design a function that in DEBUG mode prohibits negative returns, but in\nruntime mode allows it?\nAfter all, why allow a negative return, if for all practical purposes this\nis prohibited?\nRegarding the find_base_rel function, it is nonsense to protect the array\nwith Assertion.\nAfter all, we have already protected the upper limit with a real test, why\nnot also protect the lower limit.\nThe additional testing is cheap and makes perfect sense, making the\nfunction more robust in production mode.\nAs an added bonus, modern compilers will probably be able to remove the\nadditional test if it deems it not necessary.\nFurthermore, all protections that were added to protect find_base_real\ncalls can eventually be removed,\nsince find_base_real will accept parameters with negative values.\n\nbest regards,\nRanier Vilela\n\nEm seg., 25 de set. de 2023 às 08:23, Ashutosh Bapat <[email protected]> escreveu:On Sat, Sep 23, 2023 at 7:29 PM Ranier Vilela <[email protected]> wrote:\n>\n> Hi,\n>\n> Per Coverity.\n> CID 1518088 (#2 of 2): Improper use of negative value (NEGATIVE_RETURNS)\n>\n> The function bms_singleton_member can returns a negative number.\n>\n> /*\n> * Get a child rel for rel2 with the relids.  See above comments.\n> */\n> if (rel2_is_simple)\n> {\n> int varno = bms_singleton_member(child_relids2);\n>\n> child_rel2 = find_base_rel(root, varno);\n> }\n>\n> It turns out that in the get_matching_part_pairs function (joinrels.c), the return of bms_singleton_member is passed to the find_base_rel function, which cannot receive a negative value.\n>\n> find_base_rel is protected by an Assertion, which effectively indicates that the error does not occur in tests and in DEBUG mode.\n>\n> But this does not change the fact that bms_singleton_member can return a negative value, which may occur on some production servers.\n>\n> Fix by changing the Assertion into a real test, to protect the simple_rel_array array.\n\nDo you have a scenario where bms_singleton_member() actually returns a\n-ve number OR it's just a possibility.Just a possibility.  bms_make_singleton() has an\nassertion at the end: Assert(result >= 0);\nbms_make_singleton(), bms_add_member() explicitly forbids negative\nvalues. It looks like we have covered all the places which can add a\nnegative value to a bitmapset. May be we are missing a place or two.\nIt will be good to investigate it.I try to do the research, mostly, with runtime compilation.As previously stated, the error does not appear in the tests.That said, although Assert protects in most cases, that doesn't mean it can't occur in a query, running on a server in production mode.Now thinking about what you said about Assertion in bms_make_singleton.I think it's nonsense, no?Why design a function that in DEBUG mode prohibits negative returns, but in runtime mode allows it?After all, why allow a negative return, if for all practical purposes this is prohibited?Regarding the find_base_rel function, it is nonsense to protect the array with Assertion.After all, we have already protected the upper limit with a real test, why not also protect the lower limit.The additional testing is cheap and makes perfect sense, making the function more robust in production mode.As an added bonus, modern compilers will probably be able to remove the additional test if it deems it not necessary.Furthermore, all protections that were added to protect find_base_real calls can eventually be removed, since find_base_real will accept parameters with negative values.best regards,Ranier Vilela", "msg_date": "Mon, 25 Sep 2023 10:43:49 -0300", "msg_from": "Ranier Vilela <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Avoid a possible out-of-bounds access\n (src/backend/optimizer/util/relnode.c)" }, { "msg_contents": "On Mon, Sep 25, 2023 at 7:14 PM Ranier Vilela <[email protected]> wrote:\n>\n> Em seg., 25 de set. de 2023 às 08:23, Ashutosh Bapat <[email protected]> escreveu:\n>>\n>> On Sat, Sep 23, 2023 at 7:29 PM Ranier Vilela <[email protected]> wrote:\n>> >\n>> > Hi,\n>> >\n>> > Per Coverity.\n>> > CID 1518088 (#2 of 2): Improper use of negative value (NEGATIVE_RETURNS)\n>> >\n>> > The function bms_singleton_member can returns a negative number.\n>> >\n>> > /*\n>> > * Get a child rel for rel2 with the relids. See above comments.\n>> > */\n>> > if (rel2_is_simple)\n>> > {\n>> > int varno = bms_singleton_member(child_relids2);\n>> >\n>> > child_rel2 = find_base_rel(root, varno);\n>> > }\n>> >\n>> > It turns out that in the get_matching_part_pairs function (joinrels.c), the return of bms_singleton_member is passed to the find_base_rel function, which cannot receive a negative value.\n>> >\n>> > find_base_rel is protected by an Assertion, which effectively indicates that the error does not occur in tests and in DEBUG mode.\n>> >\n>> > But this does not change the fact that bms_singleton_member can return a negative value, which may occur on some production servers.\n>> >\n>> > Fix by changing the Assertion into a real test, to protect the simple_rel_array array.\n>>\n>> Do you have a scenario where bms_singleton_member() actually returns a\n>> -ve number OR it's just a possibility.\n>\n> Just a possibility.\n>\n>>\n>> bms_make_singleton() has an\n>> assertion at the end: Assert(result >= 0);\n>> bms_make_singleton(), bms_add_member() explicitly forbids negative\n>> values. It looks like we have covered all the places which can add a\n>> negative value to a bitmapset. May be we are missing a place or two.\n>> It will be good to investigate it.\n>\n> I try to do the research, mostly, with runtime compilation.\n> As previously stated, the error does not appear in the tests.\n> That said, although Assert protects in most cases, that doesn't mean it can't occur in a query, running on a server in production mode.\n>\n> Now thinking about what you said about Assertion in bms_make_singleton.\n> I think it's nonsense, no?\n\nSorry, I didn't write it correctly. bms_make_singleton() doesn't\naccept a negative integer and bms_get_singleton_member() and\nbms_singleton_member() has assertion at the end. Since there is no\npossibility of a negative integer making itself a part of bitmapset,\nthe two functions Asserting instead of elog'ing is better. Assert are\ncheaper in production.\n\n> Why design a function that in DEBUG mode prohibits negative returns, but in runtime mode allows it?\n> After all, why allow a negative return, if for all practical purposes this is prohibited?\n\nYou haven't given any proof that there's a possibility that a negative\nvalue may be returned. We are not allowing negative value being\nreturned at all.\n\n> Regarding the find_base_rel function, it is nonsense to protect the array with Assertion.\n> After all, we have already protected the upper limit with a real test, why not also protect the lower limit.\n> The additional testing is cheap and makes perfect sense, making the function more robust in production mode.\n> As an added bonus, modern compilers will probably be able to remove the additional test if it deems it not necessary.\n> Furthermore, all protections that were added to protect find_base_real calls can eventually be removed,\n> since find_base_real will accept parameters with negative values.\n\nHowever, I agree that changing find_base_rel() the way you have done\nin your patch is fine and mildly future-proof. +1 to that idea\nirrespective of what bitmapset functions do.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Tue, 26 Sep 2023 11:11:51 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Avoid a possible out-of-bounds access\n (src/backend/optimizer/util/relnode.c)" }, { "msg_contents": "On Tue, 26 Sept 2023 at 21:45, Ashutosh Bapat\n<[email protected]> wrote:\n> However, I agree that changing find_base_rel() the way you have done\n> in your patch is fine and mildly future-proof. +1 to that idea\n> irrespective of what bitmapset functions do.\n\nI'm not a fan of adding additional run-time overhead for this\ntheoretical problem.\n\nfind_base_rel() could be made more robust for free by just casting the\nrelid and simple_rel_array_size to uint32 while checking that relid <\nroot->simple_rel_array_size. The 0th element should be NULL anyway,\nso \"if (rel)\" should let relid==0 calls through and allow that to\nERROR still. I see that just changes a \"jle\" to \"jnb\" vs adding an\nadditional jump for Ranier's version. [1]\n\nIt seems worth not making find_base_rel() more expensive than it is\ntoday as commonly we just reference root->simple_rel_array[n] directly\nanyway because it's cheaper. It would be nice if we didn't add further\noverhead to find_base_rel() as this would make the case for using\nPlannerInfo.simple_rel_array directly even stronger.\n\nDavid\n\n[1] https://godbolt.org/z/qrxKTbvva\n\n\n", "msg_date": "Tue, 26 Sep 2023 23:02:27 +1300", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Avoid a possible out-of-bounds access\n (src/backend/optimizer/util/relnode.c)" }, { "msg_contents": "On Tue, Sep 26, 2023 at 3:32 PM David Rowley <[email protected]> wrote:\n>\n> find_base_rel() could be made more robust for free by just casting the\n> relid and simple_rel_array_size to uint32 while checking that relid <\n> root->simple_rel_array_size. The 0th element should be NULL anyway,\n> so \"if (rel)\" should let relid==0 calls through and allow that to\n> ERROR still. I see that just changes a \"jle\" to \"jnb\" vs adding an\n> additional jump for Ranier's version. [1]\n\nThat's a good suggestion.\n\nI am fine with find_base_rel() as it is today as well. But\nfuture-proofing it seems to be fine too.\n\n>\n> It seems worth not making find_base_rel() more expensive than it is\n> today as commonly we just reference root->simple_rel_array[n] directly\n> anyway because it's cheaper. It would be nice if we didn't add further\n> overhead to find_base_rel() as this would make the case for using\n> PlannerInfo.simple_rel_array directly even stronger.\n\nI am curious, is the overhead in find_base_rel() impacting overall performance?\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Tue, 26 Sep 2023 16:04:36 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Avoid a possible out-of-bounds access\n (src/backend/optimizer/util/relnode.c)" }, { "msg_contents": "Em ter., 26 de set. de 2023 às 07:34, Ashutosh Bapat <\[email protected]> escreveu:\n\n> On Tue, Sep 26, 2023 at 3:32 PM David Rowley <[email protected]> wrote:\n> >\n> > find_base_rel() could be made more robust for free by just casting the\n> > relid and simple_rel_array_size to uint32 while checking that relid <\n> > root->simple_rel_array_size. The 0th element should be NULL anyway,\n> > so \"if (rel)\" should let relid==0 calls through and allow that to\n> > ERROR still. I see that just changes a \"jle\" to \"jnb\" vs adding an\n> > additional jump for Ranier's version. [1]\n>\n> That's a good suggestion.\n>\n> I am fine with find_base_rel() as it is today as well. But\n> future-proofing it seems to be fine too.\n>\n> >\n> > It seems worth not making find_base_rel() more expensive than it is\n> > today as commonly we just reference root->simple_rel_array[n] directly\n> > anyway because it's cheaper. It would be nice if we didn't add further\n> > overhead to find_base_rel() as this would make the case for using\n> > PlannerInfo.simple_rel_array directly even stronger.\n>\n> I am curious, is the overhead in find_base_rel() impacting overall\n> performance?\n>\nIt seems to me that it adds a LEA instruction.\nhttps://godbolt.org/z/b4jK3PErE\n\nAlthough it doesn't seem like much,\nI believe the solution (casting to unsigned) seems better.\nSo +1.\n\nbest regards,\nRanier Vilela\n\nEm ter., 26 de set. de 2023 às 07:34, Ashutosh Bapat <[email protected]> escreveu:On Tue, Sep 26, 2023 at 3:32 PM David Rowley <[email protected]> wrote:\n>\n> find_base_rel() could be made more robust for free by just casting the\n> relid and simple_rel_array_size to uint32 while checking that relid <\n> root->simple_rel_array_size.  The 0th element should be NULL anyway,\n> so \"if (rel)\" should let relid==0 calls through and allow that to\n> ERROR still. I see that just changes a \"jle\" to \"jnb\" vs adding an\n> additional jump for Ranier's version. [1]\n\nThat's a good suggestion.\n\nI am fine with find_base_rel() as it is today as well. But\nfuture-proofing it seems to be fine too.\n\n>\n> It seems worth not making find_base_rel() more expensive than it is\n> today as commonly we just reference root->simple_rel_array[n] directly\n> anyway because it's cheaper. It would be nice if we didn't add further\n> overhead to find_base_rel() as this would make the case for using\n> PlannerInfo.simple_rel_array directly even stronger.\n\nI am curious, is the overhead in find_base_rel() impacting overall performance?It seems to me that it adds a LEA instruction.https://godbolt.org/z/b4jK3PErE Although it doesn't seem like much, I believe the solution (casting to unsigned) seems better.So +1.best regards,Ranier Vilela", "msg_date": "Tue, 26 Sep 2023 09:30:57 -0300", "msg_from": "Ranier Vilela <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Avoid a possible out-of-bounds access\n (src/backend/optimizer/util/relnode.c)" }, { "msg_contents": "On Wed, 27 Sept 2023 at 01:31, Ranier Vilela <[email protected]> wrote:\n> It seems to me that it adds a LEA instruction.\n> https://godbolt.org/z/b4jK3PErE\n\nThere's a fairly significant difference in the optimisability of a\ncomparison with a compile-time constant vs a variable. For example,\nwould you expect the compiler to emit assembly for each side of the\nboolean AND in: if (a > 12 && a > 20), or how about if (a > 12 && a >\ny)? No need to answer. Just consider it.\n\nI suggest keeping your experiments as close to the target code as\npractical. You might be surprised by what the compiler can optimise.\n\nDavid\n\n\n", "msg_date": "Wed, 27 Sep 2023 01:57:19 +1300", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Avoid a possible out-of-bounds access\n (src/backend/optimizer/util/relnode.c)" }, { "msg_contents": "Em ter., 26 de set. de 2023 às 09:30, Ranier Vilela <[email protected]>\nescreveu:\n\n> Em ter., 26 de set. de 2023 às 07:34, Ashutosh Bapat <\n> [email protected]> escreveu:\n>\n>> On Tue, Sep 26, 2023 at 3:32 PM David Rowley <[email protected]>\n>> wrote:\n>> >\n>> > find_base_rel() could be made more robust for free by just casting the\n>> > relid and simple_rel_array_size to uint32 while checking that relid <\n>> > root->simple_rel_array_size. The 0th element should be NULL anyway,\n>> > so \"if (rel)\" should let relid==0 calls through and allow that to\n>> > ERROR still. I see that just changes a \"jle\" to \"jnb\" vs adding an\n>> > additional jump for Ranier's version. [1]\n>>\n>> That's a good suggestion.\n>>\n>> I am fine with find_base_rel() as it is today as well. But\n>> future-proofing it seems to be fine too.\n>>\n>> >\n>> > It seems worth not making find_base_rel() more expensive than it is\n>> > today as commonly we just reference root->simple_rel_array[n] directly\n>> > anyway because it's cheaper. It would be nice if we didn't add further\n>> > overhead to find_base_rel() as this would make the case for using\n>> > PlannerInfo.simple_rel_array directly even stronger.\n>>\n>> I am curious, is the overhead in find_base_rel() impacting overall\n>> performance?\n>>\n> It seems to me that it adds a LEA instruction.\n> https://godbolt.org/z/b4jK3PErE\n>\n> Although it doesn't seem like much,\n> I believe the solution (casting to unsigned) seems better.\n> So +1.\n>\nAs suggested, casting is the best option that does not add any overhead and\nimproves the robustness of the find_base_rel function.\nI propose patch v1, with the additional addition of fixing the\nfind_base_rel_ignore_join function,\nwhich despite not appearing in Coverity reports, suffers from the same\nproblem.\n\nTaking advantage, I also propose a scope reduction,\n as well as the const of the root parameter, which is very appropriate.\n\nbest regards,\nRanier Vilela", "msg_date": "Tue, 26 Sep 2023 14:10:04 -0300", "msg_from": "Ranier Vilela <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Avoid a possible out-of-bounds access\n (src/backend/optimizer/util/relnode.c)" }, { "msg_contents": "On Wed, 27 Sept 2023 at 06:10, Ranier Vilela <[email protected]> wrote:\n> As suggested, casting is the best option that does not add any overhead and improves the robustness of the find_base_rel function.\n> I propose patch v1, with the additional addition of fixing the find_base_rel_ignore_join function,\n> which despite not appearing in Coverity reports, suffers from the same problem.\n\nCan you confirm that this silences the Converity warning?\n\nI think it probably warrants a comment to mention why we cast to uint32.\n\ne.g. /* perform an unsigned comparison so that we also catch negative\nrelid values */\n\n> Taking advantage, I also propose a scope reduction,\n> as well as the const of the root parameter, which is very appropriate.\n\nCan you explain why adding the const qualifier is \"very appropriate\"\nto catching negative relids?\n\nPlease check [1] for the mention of:\n\n\"The fastest way to get your patch rejected is to make unrelated\nchanges. Reformatting lines that haven't changed, changing unrelated\ncomments you felt were poorly worded, touching code not necessary to\nyour change, etc. Each patch should have the minimum set of changes\nrequired to work robustly. If you do not follow the code formatting\nsuggestions above, expect your patch to be returned to you with the\nfeedback of \"follow the code conventions\", quite likely without any\nother review.\"\n\nDavid\n\n[1] https://wiki.postgresql.org/wiki/Submitting_a_Patch\n\n\n", "msg_date": "Wed, 27 Sep 2023 20:35:04 +1300", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Avoid a possible out-of-bounds access\n (src/backend/optimizer/util/relnode.c)" }, { "msg_contents": "Em qua., 27 de set. de 2023 às 04:35, David Rowley <[email protected]>\nescreveu:\n\n> On Wed, 27 Sept 2023 at 06:10, Ranier Vilela <[email protected]> wrote:\n> > As suggested, casting is the best option that does not add any overhead\n> and improves the robustness of the find_base_rel function.\n> > I propose patch v1, with the additional addition of fixing the\n> find_base_rel_ignore_join function,\n> > which despite not appearing in Coverity reports, suffers from the same\n> problem.\n>\n> Can you confirm that this silences the Converity warning?\n>\nCID#1518088\nThis is a historical version of the file displaying the issue before it was\nin the Fixed state.\n\n\n> I think it probably warrants a comment to mention why we cast to uint32.\n>\n> e.g. /* perform an unsigned comparison so that we also catch negative\n> relid values */\n>\nI'm ok.\n\n\n>\n> > Taking advantage, I also propose a scope reduction,\n> > as well as the const of the root parameter, which is very appropriate.\n>\n> Can you explain why adding the const qualifier is \"very appropriate\"\n> to catching negative relids?\n>\nOf course that has nothing to do with it.\n\n\n> Please check [1] for the mention of:\n>\n> \"The fastest way to get your patch rejected is to make unrelated\n> changes. Reformatting lines that haven't changed, changing unrelated\n> comments you felt were poorly worded, touching code not necessary to\n> your change, etc. Each patch should have the minimum set of changes\n> required to work robustly. If you do not follow the code formatting\n> suggestions above, expect your patch to be returned to you with the\n> feedback of \"follow the code conventions\", quite likely without any\n> other review.\"\n>\nForgive my impulsiveness, anyone who loves perfect, well written code,\nwould understand.\n\nDo you have an objection to fixing the function find_base_rel_ignore_join?\nOr is it included in unrelated changes?\n\nRanier Vilela\n\nEm qua., 27 de set. de 2023 às 04:35, David Rowley <[email protected]> escreveu:On Wed, 27 Sept 2023 at 06:10, Ranier Vilela <[email protected]> wrote:\n> As suggested, casting is the best option that does not add any overhead and improves the robustness of the find_base_rel function.\n> I propose patch v1, with the additional addition of fixing the find_base_rel_ignore_join function,\n> which despite not appearing in Coverity reports, suffers from the same problem.\n\nCan you confirm that this silences the Converity warning?CID#1518088 \nThis is a historical version of the file displaying the issue before it was in the Fixed state. \n\nI think it probably warrants a comment to mention why we cast to uint32.\n\ne.g. /* perform an unsigned comparison so that we also catch negative\nrelid values */I'm ok. \n\n> Taking advantage, I also propose a scope reduction,\n>  as well as the const of the root parameter, which is very appropriate.\n\nCan you explain why adding the const qualifier is \"very appropriate\"\nto catching negative relids?Of course that has nothing to do with it. \n\nPlease check [1] for the mention of:\n\n\"The fastest way to get your patch rejected is to make unrelated\nchanges. Reformatting lines that haven't changed, changing unrelated\ncomments you felt were poorly worded, touching code not necessary to\nyour change, etc. Each patch should have the minimum set of changes\nrequired to work robustly. If you do not follow the code formatting\nsuggestions above, expect your patch to be returned to you with the\nfeedback of \"follow the code conventions\", quite likely without any\nother review.\"Forgive my impulsiveness, anyone who loves perfect, well written code, would understand.Do you have an objection to fixing the function find_base_rel_ignore_join?Or is it included in unrelated changes?Ranier Vilela", "msg_date": "Wed, 27 Sep 2023 10:37:45 -0300", "msg_from": "Ranier Vilela <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Avoid a possible out-of-bounds access\n (src/backend/optimizer/util/relnode.c)" }, { "msg_contents": "At 2023-09-27 10:37:45 -0300, [email protected] wrote:\n>\n> Forgive my impulsiveness, anyone who loves perfect, well written code,\n> would understand.\n\nI actually find this characterisation offensive.\n\nBeing scrupulous about not grouping random drive-by changes together\nwith the primary change is right up there in importance with writing\ngood commit messages to help future readers to reduce their WTFs/minute\nscore one, five, seven, or ten years later.\n\nIgnoring that concern once is thoughtless. Ignoring it over and over\nagain is disrespectful. Casually deriding it by equating it to hating\n\"perfect, well-written code\" is gross.\n\n-- Abhijit\n\n\n", "msg_date": "Wed, 27 Sep 2023 19:37:29 +0530", "msg_from": "Abhijit Menon-Sen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Avoid a possible out-of-bounds access\n (src/backend/optimizer/util/relnode.c)" }, { "msg_contents": "On Thu, 28 Sept 2023 at 02:37, Ranier Vilela <[email protected]> wrote:\n>> Please check [1] for the mention of:\n>>\n>> \"The fastest way to get your patch rejected is to make unrelated\n>> changes. Reformatting lines that haven't changed, changing unrelated\n>> comments you felt were poorly worded, touching code not necessary to\n>> your change, etc. Each patch should have the minimum set of changes\n>> required to work robustly. If you do not follow the code formatting\n>> suggestions above, expect your patch to be returned to you with the\n>> feedback of \"follow the code conventions\", quite likely without any\n>> other review.\"\n>\n> Forgive my impulsiveness, anyone who loves perfect, well written code, would understand.\n\nPerhaps, but the committers on this project seem to be against the\nblunderbuss approach to committing patches. You might meet less\nresistance around here if you assume all of their weapons have a scope\nand that they like to aim for something before pulling the trigger.\n\nPersonally, this seems like a good idea to me and I'd like to follow\nit too. If you'd like to hold off you a committer whose weapon of\nchoice is the blunderbuss then I can back off and let you wait for one\nto come along. Just let me know.\n\n> Do you have an objection to fixing the function find_base_rel_ignore_join?\n> Or is it included in unrelated changes?\n\nWell, the topic seems to be adding additional safety to prevent\naccessing negative entries for simple_rel_array. I can't think why\nfixing the same theoretical hazard in find_base_rel_ignore_join()\nwould be unrelated. I hope you can see the difference here. Randomly\nadjusting function signatures because you happen to be changing some\ncode within that function does not, in my book, seem related. You\nseem to agree with this given you mentioned \"Of course that has\nnothing to do with it.\"\n\nDavid\n\n\n", "msg_date": "Thu, 28 Sep 2023 14:27:52 +1300", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Avoid a possible out-of-bounds access\n (src/backend/optimizer/util/relnode.c)" }, { "msg_contents": "Em qua., 27 de set. de 2023 às 22:28, David Rowley <[email protected]>\nescreveu:\n\n> On Thu, 28 Sept 2023 at 02:37, Ranier Vilela <[email protected]> wrote:\n> >> Please check [1] for the mention of:\n> >>\n> >> \"The fastest way to get your patch rejected is to make unrelated\n> >> changes. Reformatting lines that haven't changed, changing unrelated\n> >> comments you felt were poorly worded, touching code not necessary to\n> >> your change, etc. Each patch should have the minimum set of changes\n> >> required to work robustly. If you do not follow the code formatting\n> >> suggestions above, expect your patch to be returned to you with the\n> >> feedback of \"follow the code conventions\", quite likely without any\n> >> other review.\"\n> >\n> > Forgive my impulsiveness, anyone who loves perfect, well written code,\n> would understand.\n>\n> Perhaps, but the committers on this project seem to be against the\n> blunderbuss approach to committing patches. You might meet less\n> resistance around here if you assume all of their weapons have a scope\n> and that they like to aim for something before pulling the trigger.\n>\nPerhaps, and using your own words, the leaders on this project seem\nto be against reviewers armed with blunderbuss, too.\n\n\n>\n> Personally, this seems like a good idea to me and I'd like to follow\n> it too. If you'd like to hold off you a committer whose weapon of\n> choice is the blunderbuss then I can back off and let you wait for one\n> to come along. Just let me know.\n>\nPlease, no, I love good combat too.\nYou are welcome in my threads.\nBut I hope that you will be strong like me, and don't wait for weak\ncomebacks,\nwhen you find another strong elephant.\n\n\n> > Do you have an objection to fixing the function\n> find_base_rel_ignore_join?\n> > Or is it included in unrelated changes?\n>\n> Well, the topic seems to be adding additional safety to prevent\n> accessing negative entries for simple_rel_array. I can't think why\n> fixing the same theoretical hazard in find_base_rel_ignore_join()\n> would be unrelated.\n\nGood to know.\n\n\n> I hope you can see the difference here. Randomly\n> adjusting function signatures because you happen to be changing some\n> code within that function does not, in my book, seem related.\n\nI confess that some \"in pass\", \"while there\", and some desire to enrich the\npatch, clouded my judgment.\n\nSo it seems that we have some consensus, I propose version 2 of the patch,\nwhich I hope we will have to correct, perhaps, the words of the comments.\n\nbest regards,\nRanier Vilela", "msg_date": "Thu, 28 Sep 2023 09:00:39 -0300", "msg_from": "Ranier Vilela <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Avoid a possible out-of-bounds access\n (src/backend/optimizer/util/relnode.c)" }, { "msg_contents": "On Fri, 29 Sept 2023 at 01:00, Ranier Vilela <[email protected]> wrote:\n> Perhaps, and using your own words, the leaders on this project seem\n> to be against reviewers armed with blunderbuss, too.\n\nI don't have any ideas on what you're talking about here, but if this\nis a valid concern that you think is unfair then maybe the code of\nconduct team is a more relevant set of people to raise it with.\n\nMy mention of the scattergun approach to writing patches was aimed at\nhelping you have more success here. I'd recommend you resist the urge\nto change unrelated code in your patches. Your experience here might\nimprove if you're able to aim your patches at resolving specific\nissues. I know you'd love to see commit messages that include \"In\npassing, adjust a random assortment of changes contributed by Ranier\nVilela\". That's just not going to happen. You might think the\ncommitter's job is just to commit patches contributed by contributors,\nbut what you might not realise is that we're also trying to maintain a\ncodebase that's over 3 decades old and will probably outlive most of\nits current contributors. Making things easy both for ourselves in\nseveral years time and for our future counterparts is something that\nwe do need to consider when discussing and making changes. The mail\narchives and commit messages are an audit trail for our future selves\nand future counterparts. That's one of the main reasons why we tend\nto not like you trying to sneak in assortments of random changes along\nwith your change. If you can understand this and adapt to this way of\nthinking then you're more likely to find yourself feeling like you're\nworking with committers and other contributors rather than against\nthem.\n\nI say this hoping you take it constructively, but I find that you're\noften very defensive about your patches and often appear to take\nchange requests in a very negative way. On this thread you've\ndemonstrated to me that by me requesting you to remove an unrelated\nchange in the patch that I must not care about code quality in\nPostgreSQL. I personally find these sorts of responses quite draining\nand it makes me not want to touch your work. I would imagine I'm not\nthe only person that feels this. So there's a real danger here that if\nyou continue to have too many negative responses in emails, you'll\nfind yourself ignored and you're likely to find that frustrating and\nthat will lead to you having a worse experience here. This does not\nmean you have to become passive to all requests for changes to your\npatches, but if you're going to argue or resist, then it pays to\npolitely explain your reasoning so that the topic can be discussed\nconstructively.\n\n> I confess that some \"in pass\", \"while there\", and some desire to enrich the patch, clouded my judgment.\n\nI'm glad to see you're keen to make improvements to PostgreSQL,\nhowever, I'd like to suggest that you channel that energy and aim to\nproduce patches targeted in those areas that attempt to resolve a\nseries or perhaps all problems of that particular category in\nisolation. If it's a large patch then it's likely best to get\nconsensus first as having lots of work rejected is always more\npainful.\n\n> So it seems that we have some consensus, I propose version 2 of the patch,\n> which I hope we will have to correct, perhaps, the words of the comments.\n\nI've pushed this patch after making an adjustment to the comment to\nshorten it to a single line.\n\nThank you for working on this.\n\nDavid\n\n\n", "msg_date": "Fri, 29 Sep 2023 17:25:21 +1300", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Avoid a possible out-of-bounds access\n (src/backend/optimizer/util/relnode.c)" } ]
[ { "msg_contents": "Hello hackers,\n\nWhen playing with oversized tuples, I've found that it's possible to set\nsuch oversized password for a user, that could not be validated.\nFor example:\nSELECT format('CREATE ROLE test_user LOGIN PASSWORD ''SCRAM-SHA-256$' || repeat('0', 2000000) || \n'4096:NuDacwYSUxeOeFUEf3ivTQ==$Wgvq3OCYrJI6eUfvKlAzn4p/j3mzgWzXbVnWeFK1qhY=:r1qSP0j2QojCjLpFUjI0i6ckInvxJDKoyWnN3zF8WCM='';')\n\\gexec\n-- the password is \"pass\"\n(One can achieve the same result with a large salt size, for example, 2048.)\n\npsql -U \"test_user\" -c \"SELECT 1\"\npsql: error: connection to server on socket \"/tmp/.s.PGSQL.5432\" failed: FATAL:  cannot read pg_class without having \nselected a database\n\nI've tried to set attstorage = 'p' for the rolpassword attribute forcefully\nby dirty hacking genbki.pl, and as a result I get an error on CREATE ROLE:\nERROR:  row is too big: size 2000256, maximum size 8160\n\nBest regards,\nAlexander\n\n\n", "msg_date": "Sat, 23 Sep 2023 15:00:00 +0300", "msg_from": "Alexander Lakhin <[email protected]>", "msg_from_op": true, "msg_subject": "Should rolpassword be toastable?" }, { "msg_contents": "Alexander Lakhin <[email protected]> writes:\n> When playing with oversized tuples, I've found that it's possible to set\n> such oversized password for a user, that could not be validated.\n> For example:\n> ...\n> psql -U \"test_user\" -c \"SELECT 1\"\n> psql: error: connection to server on socket \"/tmp/.s.PGSQL.5432\" failed: FATAL:  cannot read pg_class without having \n> selected a database\n\nMy inclination is to fix this by removing pg_authid's toast table.\nI was not in favor of \"let's attach a toast table to every catalog\"\nto begin with, and I think this failure graphically illustrates\nwhy that was not as great an idea as some people thought.\nI also don't think it's worth trying to make it really work.\n\nI'm also now more than just slightly skeptical about whether\npg_database should have a toast table. Has anybody tried,\nsay, storing a daticurules field wide enough to end up\nout-of-line?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 23 Sep 2023 10:39:22 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Should rolpassword be toastable?" }, { "msg_contents": "23.09.2023 17:39, Tom Lane wrote:\n> I'm also now more than just slightly skeptical about whether\n> pg_database should have a toast table. Has anybody tried,\n> say, storing a daticurules field wide enough to end up\n> out-of-line?\n\nI tried, but failed, because pg_database accessed in InitPostgres() before\nassigning MyDatabaseId only via the function GetDatabaseTupleByOid(),\nwhich doesn't unpack the database tuple.\nAnother access to a system catalog with unassigned MyDatabaseId might occur\nin the has_privs_of_role() call, but pg_auth_members contains no toastable\nattributes.\nSo for now only pg_authid is worthy of condemnation, AFAICS.\n\nBest regards,\nAlexander\n\n\n", "msg_date": "Sat, 23 Sep 2023 21:00:00 +0300", "msg_from": "Alexander Lakhin <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Should rolpassword be toastable?" }, { "msg_contents": "Hello Tom and Nathan,\n\n23.09.2023 21:00, Alexander Lakhin wrote:\n> 23.09.2023 17:39, Tom Lane wrote:\n>> I'm also now more than just slightly skeptical about whether\n>> pg_database should have a toast table.  Has anybody tried,\n>> say, storing a daticurules field wide enough to end up\n>> out-of-line?\n>\n> So for now only pg_authid is worthy of condemnation, AFAICS.\n>\n\nLet me remind you of this issue in light of b52c4fc3c.\nYes, it's opposite, but maybe it makes sense to fix it now in the hope that\n~1 year of testing will bring something helpful for both changes.\n\nBest regards,\nAlexander\n\n\n", "msg_date": "Thu, 19 Sep 2024 06:00:00 +0300", "msg_from": "Alexander Lakhin <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Should rolpassword be toastable?" }, { "msg_contents": "On Thu, Sep 19, 2024 at 06:00:00AM +0300, Alexander Lakhin wrote:\n> 23.09.2023 21:00, Alexander Lakhin wrote:\n>> So for now only pg_authid is worthy of condemnation, AFAICS.\n> \n> Let me remind you of this issue in light of b52c4fc3c.\n> Yes, it's opposite, but maybe it makes sense to fix it now in the hope that\n> ~1 year of testing will bring something helpful for both changes.\n\nHm. It does seem like there's little point in giving pg_authid a TOAST\ntable, as rolpassword is the only varlena column, and it obviously has\nproblems. But wouldn't removing it just trade one unhelpful internal error\nwhen trying to log in for another when trying to add a really long password\nhash (which hopefully nobody is really trying to do in practice)? I wonder\nif we could make this a little more user-friendly.\n\n-- \nnathan\n\n\n", "msg_date": "Thu, 19 Sep 2024 09:22:31 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Should rolpassword be toastable?" }, { "msg_contents": "Nathan Bossart <[email protected]> writes:\n> Hm. It does seem like there's little point in giving pg_authid a TOAST\n> table, as rolpassword is the only varlena column, and it obviously has\n> problems. But wouldn't removing it just trade one unhelpful internal error\n> when trying to log in for another when trying to add a really long password\n> hash (which hopefully nobody is really trying to do in practice)? I wonder\n> if we could make this a little more user-friendly.\n\nWe could put an arbitrary limit (say, half of BLCKSZ) on the length of\npasswords.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 19 Sep 2024 10:31:15 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Should rolpassword be toastable?" }, { "msg_contents": "On Thu, Sep 19, 2024 at 10:31:15AM -0400, Tom Lane wrote:\n> Nathan Bossart <[email protected]> writes:\n>> Hm. It does seem like there's little point in giving pg_authid a TOAST\n>> table, as rolpassword is the only varlena column, and it obviously has\n>> problems. But wouldn't removing it just trade one unhelpful internal error\n>> when trying to log in for another when trying to add a really long password\n>> hash (which hopefully nobody is really trying to do in practice)? I wonder\n>> if we could make this a little more user-friendly.\n> \n> We could put an arbitrary limit (say, half of BLCKSZ) on the length of\n> passwords.\n\nSomething like that could be good enough. I was thinking about actually\nvalidating that the hash had the correct form, but that might be a little\nmore complex than is warranted here.\n\n-- \nnathan\n\n\n", "msg_date": "Thu, 19 Sep 2024 12:44:32 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Should rolpassword be toastable?" }, { "msg_contents": "On Thu, Sep 19, 2024 at 12:44:32PM -0500, Nathan Bossart wrote:\n> On Thu, Sep 19, 2024 at 10:31:15AM -0400, Tom Lane wrote:\n>> We could put an arbitrary limit (say, half of BLCKSZ) on the length of\n>> passwords.\n> \n> Something like that could be good enough. I was thinking about actually\n> validating that the hash had the correct form, but that might be a little\n> more complex than is warranted here.\n\nOh, actually, I see that we are already validating the hash, but you can\ncreate valid SCRAM-SHA-256 hashes that are really long. So putting an\narbitrary limit (patch attached) is probably the correct path forward. I'd\nalso remove pg_authid's TOAST table while at it.\n\n-- \nnathan", "msg_date": "Thu, 19 Sep 2024 16:52:02 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Should rolpassword be toastable?" }, { "msg_contents": "Nathan Bossart <[email protected]> writes:\n> Oh, actually, I see that we are already validating the hash, but you can\n> create valid SCRAM-SHA-256 hashes that are really long. So putting an\n> arbitrary limit (patch attached) is probably the correct path forward. I'd\n> also remove pg_authid's TOAST table while at it.\n\nShouldn't we enforce the limit in every case in encrypt_password,\nnot just this one? (I do agree that encrypt_password is an okay\nplace to enforce it.)\n\nI think you will get pushback from a limit of 256 bytes --- I seem\nto recall discussion of actual use-cases where people were using\nstrings of a couple of kB. Whatever the limit is, the error message\nhad better cite it explicitly.\n\nAlso, the ereport call needs an errcode.\nERRCODE_PROGRAM_LIMIT_EXCEEDED is probably suitable.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 19 Sep 2024 18:14:34 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Should rolpassword be toastable?" }, { "msg_contents": "On 9/19/24 6:14 PM, Tom Lane wrote:\r\n> Nathan Bossart <[email protected]> writes:\r\n>> Oh, actually, I see that we are already validating the hash, but you can\r\n>> create valid SCRAM-SHA-256 hashes that are really long. \r\n\r\nYou _can_, but it's up to a driver or a very determined user to do this, \r\nas it involves creating a very long salt.\r\n\r\n> So putting an\r\n>> arbitrary limit (patch attached) is probably the correct path forward. I'd\r\n>> also remove pg_authid's TOAST table while at it.\r\n> \r\n> Shouldn't we enforce the limit in every case in encrypt_password,\r\n> not just this one? (I do agree that encrypt_password is an okay\r\n> place to enforce it.)\r\n\r\n+1; if there's any breakage, my guess is it would be on very long \r\nplaintext passwords, but that would be from a very old upgrade?\r\n\r\n> I think you will get pushback from a limit of 256 bytes --- I seem\r\n> to recall discussion of actual use-cases where people were using\r\n> strings of a couple of kB. Whatever the limit is, the error message\r\n> had better cite it explicitly.\r\n\r\nI think it's OK to be a bit generous with the limit. Also, currently oru \r\nhashes are 256-bit (I know the above says byte), but this could increase \r\nshould we support larger hashes.\r\n\r\n> Also, the ereport call needs an errcode.\r\n> ERRCODE_PROGRAM_LIMIT_EXCEEDED is probably suitable.\r\n\r\nJonathan", "msg_date": "Thu, 19 Sep 2024 19:37:55 -0400", "msg_from": "\"Jonathan S. Katz\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Should rolpassword be toastable?" }, { "msg_contents": "On Thu, Sep 19, 2024 at 07:37:55PM -0400, Jonathan S. Katz wrote:\n> On 9/19/24 6:14 PM, Tom Lane wrote:\n>> Nathan Bossart <[email protected]> writes:\n>> > Oh, actually, I see that we are already validating the hash, but you can\n>> > create valid SCRAM-SHA-256 hashes that are really long.\n> \n> You _can_, but it's up to a driver or a very determined user to do this, as\n> it involves creating a very long salt.\n\nI can't think of any reason to support this, unless we want Alexander to\nfind more bugs.\n\n>> So putting an\n>> > arbitrary limit (patch attached) is probably the correct path forward. I'd\n>> > also remove pg_authid's TOAST table while at it.\n>> \n>> Shouldn't we enforce the limit in every case in encrypt_password,\n>> not just this one? (I do agree that encrypt_password is an okay\n>> place to enforce it.)\n\nYeah, that seems like a good idea. I've attached a more fleshed-out patch\nset that applies the limit in all cases.\n\n> +1; if there's any breakage, my guess is it would be on very long plaintext\n> passwords, but that would be from a very old upgrade?\n\nIIUC there's zero support for plain-text passwords in newer versions, and\nany that remain in older clusters will be silently converted to a hash by\npg_upgrade.\n\n>> I think you will get pushback from a limit of 256 bytes --- I seem\n>> to recall discussion of actual use-cases where people were using\n>> strings of a couple of kB. Whatever the limit is, the error message\n>> had better cite it explicitly.\n> \n> I think it's OK to be a bit generous with the limit. Also, currently oru\n> hashes are 256-bit (I know the above says byte), but this could increase\n> should we support larger hashes.\n\nHm. Are you thinking of commit 67a472d? That one removed the password\nlength restrictions in client-side code and password message packets, which\nI think is entirely separate from the lengths of the hashes stored in\nrolpassword.\n\n>> Also, the ereport call needs an errcode.\n>> ERRCODE_PROGRAM_LIMIT_EXCEEDED is probably suitable.\n\nThis is added in v2.\n\n-- \nnathan", "msg_date": "Thu, 19 Sep 2024 21:46:00 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Should rolpassword be toastable?" }, { "msg_contents": "On Thu, Sep 19, 2024 at 09:46:00PM -0500, Nathan Bossart wrote:\n> On Thu, Sep 19, 2024 at 07:37:55PM -0400, Jonathan S. Katz wrote:\n>>> Shouldn't we enforce the limit in every case in encrypt_password,\n>>> not just this one? (I do agree that encrypt_password is an okay\n>>> place to enforce it.)\n> \n> Yeah, that seems like a good idea. I've attached a more fleshed-out patch\n> set that applies the limit in all cases.\n\nNot sure. Is this really something we absolutely need? Sure, this\ngenerates a better error when inserting a record too long to\npg_authid, but removing the toast relation is enough to avoid the\nproblems one would see when authenticating. Not sure if this argument\nis enough to count as an objection, just sharing some doubts :)\n\nRemoving the toast relation for pg_authid sounds good to me.\n\n> -- These are the toast table and index of pg_authid.\n> -REINDEX TABLE CONCURRENTLY pg_toast.pg_toast_1260; -- no catalog toast table\n> +REINDEX TABLE CONCURRENTLY pg_toast.pg_toast_1262; -- no catalog toast table\n> ERROR: cannot reindex system catalogs concurrently\n> -REINDEX INDEX CONCURRENTLY pg_toast.pg_toast_1260_index; -- no catalog toast index\n> +REINDEX INDEX CONCURRENTLY pg_toast.pg_toast_1262_index; -- no catalog toast index\n\nThis comment should be refreshed as of s/pg_authid/pg_database/.\n--\nMichael", "msg_date": "Fri, 20 Sep 2024 14:23:58 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Should rolpassword be toastable?" }, { "msg_contents": "On 9/20/24 1:23 AM, Michael Paquier wrote:\r\n> On Thu, Sep 19, 2024 at 09:46:00PM -0500, Nathan Bossart wrote:\r\n>> On Thu, Sep 19, 2024 at 07:37:55PM -0400, Jonathan S. Katz wrote:\r\n>>>> Shouldn't we enforce the limit in every case in encrypt_password,\r\n>>>> not just this one? (I do agree that encrypt_password is an okay\r\n>>>> place to enforce it.)\r\n>>\r\n>> Yeah, that seems like a good idea. I've attached a more fleshed-out patch\r\n>> set that applies the limit in all cases.\r\n> \r\n> Not sure. Is this really something we absolutely need? Sure, this\r\n> generates a better error when inserting a record too long to\r\n> pg_authid, but removing the toast relation is enough to avoid the\r\n> problems one would see when authenticating. Not sure if this argument\r\n> is enough to count as an objection, just sharing some doubts :)\r\n\r\nThe errors from lack of TOAST are confusing to users. Why can't we have \r\na user friendly error here?\r\n\r\nJonathan", "msg_date": "Fri, 20 Sep 2024 10:06:28 -0400", "msg_from": "\"Jonathan S. Katz\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Should rolpassword be toastable?" }, { "msg_contents": "On Fri, Sep 20, 2024 at 10:06:28AM -0400, Jonathan S. Katz wrote:\n> On 9/20/24 1:23 AM, Michael Paquier wrote:\n>> Not sure. Is this really something we absolutely need? Sure, this\n>> generates a better error when inserting a record too long to\n>> pg_authid, but removing the toast relation is enough to avoid the\n>> problems one would see when authenticating. Not sure if this argument\n>> is enough to count as an objection, just sharing some doubts :)\n> \n> The errors from lack of TOAST are confusing to users. Why can't we have a\n> user friendly error here?\n\nIf I wanted to argue against adding a user-friendly error, I'd point out\nthat it's highly unlikely anyone is actually trying to use super long\nhashes unless they are trying to break things, and it's just another\narbitrary limit that we'll need to maintain/enforce. But on the off-chance\nthat someone is building a custom driver that generates long hashes for\nwhatever reason, I'd imagine that a clear error would be more helpful than\n\"row is too big.\"\n\nHere is a v3 patch set that fixes the test comment and a compiler warning\nin cfbot.\n\n-- \nnathan", "msg_date": "Fri, 20 Sep 2024 11:09:51 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Should rolpassword be toastable?" }, { "msg_contents": "Nathan Bossart <[email protected]> writes:\n> Here is a v3 patch set that fixes the test comment and a compiler warning\n> in cfbot.\n\nNitpick: the message should say \"%d bytes\" not \"%d characters\",\nbecause we're counting bytes. Passes an eyeball check otherwise.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 20 Sep 2024 12:27:41 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Should rolpassword be toastable?" }, { "msg_contents": "On Fri, Sep 20, 2024 at 12:27:41PM -0400, Tom Lane wrote:\n> Nitpick: the message should say \"%d bytes\" not \"%d characters\",\n> because we're counting bytes. Passes an eyeball check otherwise.\n\nThanks for reviewing. I went ahead and committed 0002 since it seems like\nthere's consensus on that one. I've attached a rebased version of 0001\nwith s/characters/bytes.\n\n-- \nnathan", "msg_date": "Sat, 21 Sep 2024 15:25:54 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Should rolpassword be toastable?" } ]
[ { "msg_contents": "typedef struct TupleDescData\n{\n\tint\t\t\tnatts;\t\t\t/* number of attributes in the tuple */\n\tOid\t\t\ttdtypeid;\t\t/* composite type ID for tuple type */\n\tint32\t\ttdtypmod;\t\t/* typmod for tuple type */\n\tint\t\t\ttdrefcount;\t\t/* reference count, or -1 if not counting */\n\tTupleConstr *constr;\t\t/* constraints, or NULL if none */\n\t/* attrs[N] is the description of Attribute Number N+1 */\n\tFormData_pg_attribute attrs[FLEXIBLE_ARRAY_MEMBER];\n}\t\t\tTupleDescData;\n\nHi, the attrs use FLEXIBLE_ARRAY_MEMBER, so which API should I use to get the real length of this array?\n\n", "msg_date": "Sun, 24 Sep 2023 20:56:51 +0800", "msg_from": "jacktby jacktby <[email protected]>", "msg_from_op": true, "msg_subject": "How to Know the number of attrs?" }, { "msg_contents": "On 2023-09-24 20:56 +0800, jacktby jacktby wrote:\n> typedef struct TupleDescData\n> {\n> \tint\t\t\tnatts;\t\t\t/* number of attributes in the tuple */\n> \tOid\t\t\ttdtypeid;\t\t/* composite type ID for tuple type */\n> \tint32\t\ttdtypmod;\t\t/* typmod for tuple type */\n> \tint\t\t\ttdrefcount;\t\t/* reference count, or -1 if not counting */\n> \tTupleConstr *constr;\t\t/* constraints, or NULL if none */\n> \t/* attrs[N] is the description of Attribute Number N+1 */\n> \tFormData_pg_attribute attrs[FLEXIBLE_ARRAY_MEMBER];\n> }\t\t\tTupleDescData;\n> \n> Hi, the attrs use FLEXIBLE_ARRAY_MEMBER, so which API should I use to\n> get the real length of this array?\n\nUse the natts field of TupleDescData.\n\n-- \nErik\n\n\n", "msg_date": "Sun, 24 Sep 2023 15:38:28 +0200", "msg_from": "Erik Wienhold <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to Know the number of attrs?" } ]
[ { "msg_contents": "typedef struct TupleDescData\n{\n\tint\t\t\tnatts;\t\t\t/* number of attributes in the tuple */\n\tOid\t\t\ttdtypeid;\t\t/* composite type ID for tuple type */\n\tint32\t\ttdtypmod;\t\t/* typmod for tuple type */\n\tint\t\t\ttdrefcount;\t\t/* reference count, or -1 if not counting */\n\tTupleConstr *constr;\t\t/* constraints, or NULL if none */\n\t/* attrs[N] is the description of Attribute Number N+1 */\n\tFormData_pg_attribute attrs[FLEXIBLE_ARRAY_MEMBER];\n}\t\t\tTupleDescData;\n\nHi, the attrs use FLEXIBLE_ARRAY_MEMBER, so which API should I use to get the real length of this array?\n\n", "msg_date": "Sun, 24 Sep 2023 20:57:33 +0800", "msg_from": "jacktby jacktby <[email protected]>", "msg_from_op": true, "msg_subject": "How to Know the number of attrs?" } ]
[ { "msg_contents": "Hi,\n\nI have made various, mostly unrelated to each other,\nsmall improvements to the documentation. These\nare usually in the areas of plpgsql, schemas, and permissions.\nMost change 1 lines, but some supply short overviews.\n\n\"Short\" is subjective, so if these need to be\nbroken into different threads or different\ncommitfest entries let me know. I'm starting\nsimple and submitting a single patch.\n\nAttached: various_doc_patches_v1.patch\n\nRegards,\n\nKarl <[email protected]>\nFree Software: \"You don't pay back, you pay forward.\"\n -- Robert A. Heinlein", "msg_date": "Sun, 24 Sep 2023 17:57:47 -0500", "msg_from": "\"Karl O. Pinc\" <[email protected]>", "msg_from_op": true, "msg_subject": "Various small doc improvements; plpgsql, schemas, permissions,\n oidvector" }, { "msg_contents": "> On 25 Sep 2023, at 00:57, Karl O. Pinc <[email protected]> wrote:\n\n> I have made various, mostly unrelated to each other,\n> small improvements to the documentation. These\n> are usually in the areas of plpgsql, schemas, and permissions.\n> Most change 1 lines, but some supply short overviews.\n> \n> \"Short\" is subjective, so if these need to be\n> broken into different threads or different\n> commitfest entries let me know.\n\nWhile I agree it's subjective, I don't think adding a new section or paragraph\nqualifies as short or small. I would prefer if each (related) change is in a\nsingle commit with a commit message which describes the motivation for the\nchange. A reviewer can second-guess the rationale for the changes, but they\nshouldn't have to.\n\nThe resulting patchset can all be in the same thread though.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Mon, 25 Sep 2023 09:26:38 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Various small doc improvements; plpgsql, schemas, permissions,\n oidvector" }, { "msg_contents": "On Mon, 25 Sep 2023 09:26:38 +0200\nDaniel Gustafsson <[email protected]> wrote:\n\n> > On 25 Sep 2023, at 00:57, Karl O. Pinc <[email protected]> wrote: \n> \n> > I have made various, mostly unrelated to each other,\n> > small improvements to the documentation. \n\n> While I agree it's subjective, I don't think adding a new section or\n> paragraph qualifies as short or small. I would prefer if each\n> (related) change is in a single commit with a commit message which\n> describes the motivation for the change. A reviewer can second-guess\n> the rationale for the changes, but they shouldn't have to.\n\nWill do. Is there a preferred data format or should I send\neach patch in a separate attachment with description?\n\n> The resulting patchset can all be in the same thread though.\n\nThanks.\n\nRegards,\n\nKarl <[email protected]>\nFree Software: \"You don't pay back, you pay forward.\"\n -- Robert A. Heinlein\n\n\n", "msg_date": "Mon, 25 Sep 2023 07:00:49 -0500", "msg_from": "\"Karl O. Pinc\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Various small doc improvements; plpgsql, schemas, permissions,\n oidvector" }, { "msg_contents": "> On 25 Sep 2023, at 14:00, Karl O. Pinc <[email protected]> wrote:\n\n> Is there a preferred data format or should I send\n> each patch in a separate attachment with description?\n\nThe easiest way would be to create a patchset off of master I think. In a\nbranch, commit each change with an explanatory commit message. Once done you\ncan do \"git format-patch origin/master -v 1\" which will generate a set of n\npatches named v1-0001 through v1-000n. You can then attache those to the\nthread. This will make it easier for a reviewer, and it's easy to apply them\nin the right order in case one change depends on another earlier change.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Mon, 25 Sep 2023 14:14:37 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Various small doc improvements; plpgsql, schemas, permissions,\n oidvector" }, { "msg_contents": "On Mon, 25 Sep 2023 14:14:37 +0200\nDaniel Gustafsson <[email protected]> wrote:\n\n> > On 25 Sep 2023, at 14:00, Karl O. Pinc <[email protected]> wrote: \n> \n> > Is there a preferred data format or should I send\n> > each patch in a separate attachment with description? \n\n> Once done you can do \"git format-patch origin/master -v 1\" which will\n> generate a set of n patches named v1-0001 through v1-000n. You can\n> then attache those to the thread. \n\nDone. 11 patches attached. Thanks for the help.\n\n(This is v2, since I made some changes upon review.)\n\nI am not particularly confident in the top-line commit\ndescriptions. Some seem kind of long and not a whole\nlot of thought went into them. But the commit descriptions\nare for the committer to decide anyway.\n\nThe bulk of the commit descriptions are very wordy\nand will surely need at least some editing.\n\nListing all the attachments here for future discussion:\n\nv2-0001-Change-section-heading-to-better-reflect-saving-a.patch\nv2-0002-Change-section-heading-to-better-describe-referen.patch\nv2-0003-Better-section-heading-for-plpgsql-exception-trap.patch\nv2-0004-Describe-how-to-raise-an-exception-in-the-excepti.patch\nv2-0005-Improve-sentences-in-overview-of-system-configura.patch\nv2-0006-Provide-examples-of-listing-all-settings.patch\nv2-0007-Cleanup-summary-of-role-powers.patch\nv2-0008-Explain-the-difference-between-role-attributes-an.patch\nv2-0009-Document-the-oidvector-type.patch\nv2-0010-Improve-sentences-about-the-significance-of-the-s.patch\nv2-0011-Add-a-sub-section-to-describe-schema-resolution.patch\n\nRegards,\n\nKarl <[email protected]>\nFree Software: \"You don't pay back, you pay forward.\"\n -- Robert A. Heinlein", "msg_date": "Mon, 25 Sep 2023 17:55:59 -0500", "msg_from": "\"Karl O. Pinc\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Various small doc improvements; plpgsql, schemas, permissions,\n oidvector" }, { "msg_contents": "Version 3.\n\nRe-do title, which is all of patch v3-003.\n\nOn Mon, 25 Sep 2023 17:55:59 -0500\n\"Karl O. Pinc\" <[email protected]> wrote:\n\n> On Mon, 25 Sep 2023 14:14:37 +0200\n> Daniel Gustafsson <[email protected]> wrote:\n\n> > Once done you can do \"git format-patch origin/master -v 1\" which\n> > will generate a set of n patches named v1-0001 through v1-000n.\n\n> Done. 11 patches attached. Thanks for the help.\n\n> I am not particularly confident in the top-line commit\n> descriptions.\n\n> The bulk of the commit descriptions are very wordy\n\n> Listing all the attachments here for future discussion:\n\nv3-0001-Change-section-heading-to-better-reflect-saving-a.patch\nv3-0002-Change-section-heading-to-better-describe-referen.patch\nv3-0003-Better-section-heading-for-plpgsql-exception-trap.patch\nv3-0004-Describe-how-to-raise-an-exception-in-the-excepti.patch\nv3-0005-Improve-sentences-in-overview-of-system-configura.patch\nv3-0006-Provide-examples-of-listing-all-settings.patch\nv3-0007-Cleanup-summary-of-role-powers.patch\nv3-0008-Explain-the-difference-between-role-attributes-an.patch\nv3-0009-Document-the-oidvector-type.patch\nv3-0010-Improve-sentences-about-the-significance-of-the-s.patch\nv3-0011-Add-a-sub-section-to-describe-schema-resolution.patch\n\nRegards,\n\nKarl <[email protected]>\nFree Software: \"You don't pay back, you pay forward.\"\n -- Robert A. Heinlein\n\n\n", "msg_date": "Mon, 25 Sep 2023 23:30:38 -0500", "msg_from": "\"Karl O. Pinc\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Various small doc improvements; plpgsql, schemas, permissions,\n oidvector" }, { "msg_contents": "Forgot to attach. Sorry.\n\nOn Mon, 25 Sep 2023 23:30:38 -0500\n\"Karl O. Pinc\" <[email protected]> wrote:\n\n> Version 3.\n> \n> Re-do title, which is all of patch v3-003.\n> \n> On Mon, 25 Sep 2023 17:55:59 -0500\n> \"Karl O. Pinc\" <[email protected]> wrote:\n> \n> > On Mon, 25 Sep 2023 14:14:37 +0200\n> > Daniel Gustafsson <[email protected]> wrote: \n> \n> > > Once done you can do \"git format-patch origin/master -v 1\" which\n> > > will generate a set of n patches named v1-0001 through v1-000n. \n> \n> > Done. 11 patches attached. Thanks for the help. \n> \n> > I am not particularly confident in the top-line commit\n> > descriptions. \n> \n> > The bulk of the commit descriptions are very wordy \n> \n> > Listing all the attachments here for future discussion: \n> \n> v3-0001-Change-section-heading-to-better-reflect-saving-a.patch\n> v3-0002-Change-section-heading-to-better-describe-referen.patch\n> v3-0003-Better-section-heading-for-plpgsql-exception-trap.patch\n> v3-0004-Describe-how-to-raise-an-exception-in-the-excepti.patch\n> v3-0005-Improve-sentences-in-overview-of-system-configura.patch\n> v3-0006-Provide-examples-of-listing-all-settings.patch\n> v3-0007-Cleanup-summary-of-role-powers.patch\n> v3-0008-Explain-the-difference-between-role-attributes-an.patch\n> v3-0009-Document-the-oidvector-type.patch\n> v3-0010-Improve-sentences-about-the-significance-of-the-s.patch\n> v3-0011-Add-a-sub-section-to-describe-schema-resolution.patch\n\nRegards,\n\nKarl <[email protected]>\nFree Software: \"You don't pay back, you pay forward.\"\n -- Robert A. Heinlein", "msg_date": "Mon, 25 Sep 2023 23:37:44 -0500", "msg_from": "\"Karl O. Pinc\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Various small doc improvements; plpgsql, schemas, permissions,\n oidvector" }, { "msg_contents": "Version 4.\n\nAdded: v4-0012-Explain-role-management.patch\n\nOn Mon, 25 Sep 2023 23:37:44 -0500\n\"Karl O. Pinc\" <[email protected]> wrote:\n\n> > On Mon, 25 Sep 2023 17:55:59 -0500\n> > \"Karl O. Pinc\" <[email protected]> wrote:\n> > \n> > > On Mon, 25 Sep 2023 14:14:37 +0200\n> > > Daniel Gustafsson <[email protected]> wrote: \n> > \n> > > > Once done you can do \"git format-patch origin/master -v 1\" which\n> > > > will generate a set of n patches named v1-0001 through v1-000n.\n\n> > > I am not particularly confident in the top-line commit\n> > > descriptions. \n> > \n> > > The bulk of the commit descriptions are very wordy \n> > \n> > > Listing all the attachments here for future discussion: \n\nv4-0001-Change-section-heading-to-better-reflect-saving-a.patch\nv4-0002-Change-section-heading-to-better-describe-referen.patch\nv4-0003-Better-section-heading-for-plpgsql-exception-trap.patch\nv4-0004-Describe-how-to-raise-an-exception-in-the-excepti.patch\nv4-0005-Improve-sentences-in-overview-of-system-configura.patch\nv4-0006-Provide-examples-of-listing-all-settings.patch\nv4-0007-Cleanup-summary-of-role-powers.patch\nv4-0008-Explain-the-difference-between-role-attributes-an.patch\nv4-0009-Document-the-oidvector-type.patch\nv4-0010-Improve-sentences-about-the-significance-of-the-s.patch\nv4-0011-Add-a-sub-section-to-describe-schema-resolution.patch\nv4-0012-Explain-role-management.patch\n\nRegards,\n\nKarl <[email protected]>\nFree Software: \"You don't pay back, you pay forward.\"\n -- Robert A. Heinlein", "msg_date": "Sat, 30 Sep 2023 19:47:34 -0500", "msg_from": "\"Karl O. Pinc\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Various small doc improvements; plpgsql, schemas, permissions,\n oidvector" }, { "msg_contents": "Version 5\n\nChanged word order in a sentence:\nv5-0012-Explain-role-management.patch\n\nAdded a hyperlink:\nv5-0013-Hyperlink-from-CREATE-FUNCTION-reference-page-to-.patch\n\nAdded 3 index entries:\nv5-0014-Add-index-entries-for-parallel-safety.patch\n\n> On Mon, 25 Sep 2023 23:37:44 -0500\n> \"Karl O. Pinc\" <[email protected]> wrote:\n> \n> > > On Mon, 25 Sep 2023 17:55:59 -0500\n> > > \"Karl O. Pinc\" <[email protected]> wrote:\n> > > \n> > > > On Mon, 25 Sep 2023 14:14:37 +0200\n> > > > Daniel Gustafsson <[email protected]> wrote: \n> > > \n> > > > > Once done you can do \"git format-patch origin/master -v 1\"\n> > > > > which will generate a set of n patches named v1-0001 through\n> > > > > v1-000n. \n> \n> > > > I am not particularly confident in the top-line commit\n> > > > descriptions. \n> > > \n> > > > The bulk of the commit descriptions are very wordy \n> > > \n> > > > Listing all the attachments here for future discussion: \nv5-0001-Change-section-heading-to-better-reflect-saving-a.patch\nv5-0002-Change-section-heading-to-better-describe-referen.patch\nv5-0003-Better-section-heading-for-plpgsql-exception-trap.patch\nv5-0004-Describe-how-to-raise-an-exception-in-the-excepti.patch\nv5-0005-Improve-sentences-in-overview-of-system-configura.patch\nv5-0006-Provide-examples-of-listing-all-settings.patch\nv5-0007-Cleanup-summary-of-role-powers.patch\nv5-0008-Explain-the-difference-between-role-attributes-an.patch\nv5-0009-Document-the-oidvector-type.patch\nv5-0010-Improve-sentences-about-the-significance-of-the-s.patch\nv5-0011-Add-a-sub-section-to-describe-schema-resolution.patch\nv5-0012-Explain-role-management.patch\nv5-0013-Hyperlink-from-CREATE-FUNCTION-reference-page-to-.patch\nv5-0014-Add-index-entries-for-parallel-safety.patch\n\nRegards,\n\nKarl <[email protected]>\nFree Software: \"You don't pay back, you pay forward.\"\n -- Robert A. Heinlein\n\n\n", "msg_date": "Sun, 1 Oct 2023 18:16:57 -0500", "msg_from": "\"Karl O. Pinc\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Various small doc improvements; plpgsql, schemas, permissions,\n oidvector" }, { "msg_contents": "Version 5, this time with attachments.\n\nChanged word order in a sentence:\nv5-0012-Explain-role-management.patch\n\nAdded a hyperlink:\nv5-0013-Hyperlink-from-CREATE-FUNCTION-reference-page-to-.patch\n\nAdded 3 index entries:\nv5-0014-Add-index-entries-for-parallel-safety.patch\n\n> On Mon, 25 Sep 2023 23:37:44 -0500\n> \"Karl O. Pinc\" <[email protected]> wrote:\n> \n> > > On Mon, 25 Sep 2023 17:55:59 -0500\n> > > \"Karl O. Pinc\" <[email protected]> wrote:\n> > > \n> > > > On Mon, 25 Sep 2023 14:14:37 +0200\n> > > > Daniel Gustafsson <[email protected]> wrote: \n> > > \n> > > > > Once done you can do \"git format-patch origin/master -v 1\"\n> > > > > which will generate a set of n patches named v1-0001 through\n> > > > > v1-000n. \n> \n> > > > I am not particularly confident in the top-line commit\n> > > > descriptions. \n> > > \n> > > > The bulk of the commit descriptions are very wordy \n> > > \n> > > > Listing all the attachments here for future discussion: \nv5-0001-Change-section-heading-to-better-reflect-saving-a.patch\nv5-0002-Change-section-heading-to-better-describe-referen.patch\nv5-0003-Better-section-heading-for-plpgsql-exception-trap.patch\nv5-0004-Describe-how-to-raise-an-exception-in-the-excepti.patch\nv5-0005-Improve-sentences-in-overview-of-system-configura.patch\nv5-0006-Provide-examples-of-listing-all-settings.patch\nv5-0007-Cleanup-summary-of-role-powers.patch\nv5-0008-Explain-the-difference-between-role-attributes-an.patch\nv5-0009-Document-the-oidvector-type.patch\nv5-0010-Improve-sentences-about-the-significance-of-the-s.patch\nv5-0011-Add-a-sub-section-to-describe-schema-resolution.patch\nv5-0012-Explain-role-management.patch\nv5-0013-Hyperlink-from-CREATE-FUNCTION-reference-page-to-.patch\nv5-0014-Add-index-entries-for-parallel-safety.patch\n\nRegards,\n\nKarl <[email protected]>\nFree Software: \"You don't pay back, you pay forward.\"\n -- Robert A. Heinlein", "msg_date": "Sun, 1 Oct 2023 18:18:07 -0500", "msg_from": "\"Karl O. Pinc\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Various small doc improvements; plpgsql, schemas, permissions,\n oidvector" }, { "msg_contents": "On Sun, 1 Oct 2023 18:18:07 -0500\n\"Karl O. Pinc\" <[email protected]> wrote:\n\nVersion 6\n\nAdded:\nv6-0015-Trigger-authors-need-not-worry-about-parallelism.patch\n\nCan't say if this is an awesome idea or not. (Might have saved me time.) \nRead the commit message for a justification.\n\n> > On Mon, 25 Sep 2023 23:37:44 -0500\n> > \"Karl O. Pinc\" <[email protected]> wrote:\n> > \n> > > > On Mon, 25 Sep 2023 17:55:59 -0500\n> > > > \"Karl O. Pinc\" <[email protected]> wrote:\n> > > > \n> > > > > On Mon, 25 Sep 2023 14:14:37 +0200\n> > > > > Daniel Gustafsson <[email protected]> wrote: \n> > > > \n> > > > > > Once done you can do \"git format-patch origin/master -v 1\"\n> > > > > > which will generate a set of n patches named v1-0001 through\n> > > > > > v1-000n. \n> > \n> > > > > I am not particularly confident in the top-line commit\n> > > > > descriptions. \n> > > > \n> > > > > The bulk of the commit descriptions are very wordy \n> > > > \n> > > > > Listing all the attachments here for future discussion: \n\nv6-0001-Change-section-heading-to-better-reflect-saving-a.patch\nv6-0002-Change-section-heading-to-better-describe-referen.patch\nv6-0003-Better-section-heading-for-plpgsql-exception-trap.patch\nv6-0004-Describe-how-to-raise-an-exception-in-the-excepti.patch\nv6-0005-Improve-sentences-in-overview-of-system-configura.patch\nv6-0006-Provide-examples-of-listing-all-settings.patch\nv6-0007-Cleanup-summary-of-role-powers.patch\nv6-0008-Explain-the-difference-between-role-attributes-an.patch\nv6-0009-Document-the-oidvector-type.patch\nv6-0010-Improve-sentences-about-the-significance-of-the-s.patch\nv6-0011-Add-a-sub-section-to-describe-schema-resolution.patch\nv6-0012-Explain-role-management.patch\nv6-0013-Hyperlink-from-CREATE-FUNCTION-reference-page-to-.patch\nv6-0014-Add-index-entries-for-parallel-safety.patch\nv6-0015-Trigger-authors-need-not-worry-about-parallelism.patch\n\nRegards,\n\nKarl <[email protected]>\nFree Software: \"You don't pay back, you pay forward.\"\n -- Robert A. Heinlein", "msg_date": "Mon, 2 Oct 2023 15:18:32 -0500", "msg_from": "\"Karl O. Pinc\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Various small doc improvements; plpgsql, schemas, permissions,\n oidvector" }, { "msg_contents": "On Mon, 2 Oct 2023 15:18:32 -0500\n\"Karl O. Pinc\" <[email protected]> wrote:\n\nVersion 7\n\nAdded:\nv7-0016-Predicate-locks-are-held-per-cluster-not-per-data.patch\n\n> > > On Mon, 25 Sep 2023 23:37:44 -0500\n> > > \"Karl O. Pinc\" <[email protected]> wrote:\n> > > \n> > > > > On Mon, 25 Sep 2023 17:55:59 -0500\n> > > > > \"Karl O. Pinc\" <[email protected]> wrote:\n> > > > > \n> > > > > > On Mon, 25 Sep 2023 14:14:37 +0200\n> > > > > > Daniel Gustafsson <[email protected]> wrote: \n> > > > > \n> > > > > > > Once done you can do \"git format-patch origin/master -v 1\"\n> > > > > > > which will generate a set of n patches named v1-0001\n> > > > > > > through v1-000n. \n> > > \n> > > > > > I am not particularly confident in the top-line commit\n> > > > > > descriptions. \n> > > > > \n> > > > > > The bulk of the commit descriptions are very wordy \n> > > > > \n> > > > > > Listing all the attachments here for future discussion:\n\nv7-0001-Change-section-heading-to-better-reflect-saving-a.patch\nv7-0002-Change-section-heading-to-better-describe-referen.patch\nv7-0003-Better-section-heading-for-plpgsql-exception-trap.patch\nv7-0004-Describe-how-to-raise-an-exception-in-the-excepti.patch\nv7-0005-Improve-sentences-in-overview-of-system-configura.patch\nv7-0006-Provide-examples-of-listing-all-settings.patch\nv7-0007-Cleanup-summary-of-role-powers.patch\nv7-0008-Explain-the-difference-between-role-attributes-an.patch\nv7-0009-Document-the-oidvector-type.patch\nv7-0010-Improve-sentences-about-the-significance-of-the-s.patch\nv7-0011-Add-a-sub-section-to-describe-schema-resolution.patch\nv7-0012-Explain-role-management.patch\nv7-0013-Hyperlink-from-CREATE-FUNCTION-reference-page-to-.patch\nv7-0014-Add-index-entries-for-parallel-safety.patch\nv7-0015-Trigger-authors-need-not-worry-about-parallelism.patch\nv7-0016-Predicate-locks-are-held-per-cluster-not-per-data.patch\n\nRegards,\n\nKarl <[email protected]>\nFree Software: \"You don't pay back, you pay forward.\"\n -- Robert A. Heinlein", "msg_date": "Tue, 3 Oct 2023 12:52:48 -0500", "msg_from": "\"Karl O. Pinc\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Various small doc improvements; plpgsql, schemas, permissions,\n oidvector" }, { "msg_contents": "On Tue, Oct 3, 2023 at 10:56 AM Karl O. Pinc <[email protected]> wrote:\n\n> On Mon, 2 Oct 2023 15:18:32 -0500\n> \"Karl O. Pinc\" <[email protected]> wrote:\n>\n> Version 7\n>\n>\n0001 - I would just call the section:\nCapturing Command Results into Variables\nI would add commentary in there that it is only possible for variables to\ntake on single value at any given time and so in order to handle multiple\nrow results you need to instantiate a loop as per 43.6.6\n\n0002 - {Inferred | Indirect} Types ?\nWe are already in the Declarations section so the fact we are declaring new\nvariables is already covered.\n\"Instead of literally writing a type name you can write variable%TYPE and\nthe system will indirectly apply the then-current type of the named\nvariable to the newly declared variable.\" (using \"copy the then-current\"\nreads pretty well and makes the existing title usable...)\n\n0003 - The noun \"Exception\" here means \"deviating from the normal flow of\nthe code\", not, \"A subclass of error\". I don't see this title change being\nparticularly beneficial.\n\n0004 - Agreed, but \"You can raise an error explicitly as described in\n\"Errors and Messages\". I would not use the phrase \"raise an exception\", it\ndoesn't fit.\n\n0005 - This tone of voice is used throughout the introductory documentation\nsections, this single change doesn't seem warranted.\n\n0006 - Useful. The view itself provides all configuration parameters known\nto the system, the \"or selected\" isn't true of the view itself, and it\nseems to go without saying that the user can add a where clause to any\nquery they write using that view. At most I'd make one of the examples\ninclude a where clause.\n\n0007 - I'm leaning toward this area being due for some improvement,\nespecially given the v16 changes bring new attention to it, but this patch\ndoesn't scream \"improvement\" to me.\n\n0008 - Same as 0007. INHERIT is no longer an attribute though, it is a\nproperty of membership. This seems more acceptable on its own but I'd need\nmore time to take in the big picture myself.\n\nThat's it for now. I'll look over the other 8 when I can.\n\nDavid J.\n\nOn Tue, Oct 3, 2023 at 10:56 AM Karl O. Pinc <[email protected]> wrote:On Mon, 2 Oct 2023 15:18:32 -0500\n\"Karl O. Pinc\" <[email protected]> wrote:\n\nVersion 70001 - I would just call the section:Capturing Command Results into VariablesI would add commentary in there that it is only possible for variables to take on single value at any given time and so in order to handle multiple row results you need to instantiate a loop as per 43.6.60002 - {Inferred | Indirect} Types ?We are already in the Declarations section so the fact we are declaring new variables is already covered.\"Instead of literally writing a type name you can write variable%TYPE and the system will indirectly apply the then-current type of the named variable to the newly declared variable.\" (using \"copy the then-current\" reads pretty well and makes the existing title usable...)0003 - The noun \"Exception\" here means \"deviating from the normal flow of the code\", not, \"A subclass of error\".  I don't see this title change being particularly beneficial.0004 - Agreed, but \"You can raise an error explicitly as described in \"Errors and Messages\".  I would not use the phrase \"raise an exception\", it doesn't fit.0005 - This tone of voice is used throughout the introductory documentation sections, this single change doesn't seem warranted.0006 - Useful.  The view itself provides all configuration parameters known to the system, the \"or selected\" isn't true of the view itself, and it seems to go without saying that the user can add a where clause to any query they write using that view.  At most I'd make one of the examples include a where clause.0007 - I'm leaning toward this area being due for some improvement, especially given the v16 changes bring new attention to it, but this patch doesn't scream \"improvement\" to me.0008 - Same as 0007.  INHERIT is no longer an attribute though, it is a property of membership.  This seems more acceptable on its own but I'd need more time to take in the big picture myself.That's it for now.  I'll look over the other 8 when I can.David J.", "msg_date": "Tue, 3 Oct 2023 14:51:31 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Various small doc improvements;\n plpgsql, schemas, permissions, oidvector" }, { "msg_contents": "On Tue, 3 Oct 2023 14:51:31 -0700\n\"David G. Johnston\" <[email protected]> wrote:\n\n> 0001 - I would just call the section:\n> Capturing Command Results into Variables\n\nI like your wording a lot. Let's use it!\n\n> I would add commentary in there that it is only possible for\n> variables to take on single value at any given time and so in order\n> to handle multiple row results you need to instantiate a loop as per\n> 43.6.6\n\nThat sounds reasonable. Let me see what I can come up with.\nI'll do it in a separate commit.\n\n> 0002 - {Inferred | Indirect} Types ?\n> We are already in the Declarations section so the fact we are\n> declaring new variables is already covered.\n\nI agree. But the problem is that the section title needs\nto stand on it's own when the table of contents is scanned.\nBy that I don't mean that getting context from a \"Declaration\"\nhigher level section is somehow out-of-bounds, but that\nneither \"inferred\" nor \"indirect\" has a recognizable meaning\nunless the section body itself is read.\n\nI thought about: Referencing Existing Types\nBut the problem with that is that the reader does not know\nthat this has to do with the types of existing objects\nrather than working with user-defined types (or something else).\n\nAlso, I kind of like \"re-using\". Shorter, simpler, word.\n\nThere is: Re-Using the Type of Objects\n\n\"Objects\" is not very clear. Variables are very different from\ncolumns. It seemed best to just write it out.\n\n> \"Instead of literally writing a type name you can write variable%TYPE\n> and the system will indirectly apply the then-current type of the\n> named variable to the newly declared variable.\" \n\nI've no objection to the section leading with a summary sentence like\nthis. The trouble is coming up with something good. I'd go with\n\"You can reference the data type of an existing column or variable\nwith the %TYPE notation. The syntax is:\" Shorter and simpler.\n\nAlong with this change, it might be nice to move the \"By using %TYPE\n...\" paragraph to after the first example (before the first paragraph).\n\nBut this is really feature creep for this commit. :)\n\n> (using \"copy the\n> then-current\" reads pretty well and makes the existing title\n> usable...)\n\nI disagree. The title needs to be understandable without reading\nthe section body.\n\n> \n> 0003 - The noun \"Exception\" here means \"deviating from the normal\n> flow of the code\", not, \"A subclass of error\". I don't see this\n> title change being particularly beneficial.\n\nIsn't the entire section about \"deviating from the normal flow of the\ncode\"? That's what makes me want \"Exception\" in the section title.\n\n> 0004 - Agreed, but \"You can raise an error explicitly as described in\n> \"Errors and Messages\". I would not use the phrase \"raise an\n> exception\", it doesn't fit.\n\nI like the brevity of your sentence. And you're right that\nthe sentence does not flow as written. How about:\n\nYou can raise an exception to throw an error as described in\n\"Errors and Messages\".\n\n? I remain (overly?) focused on the word \"exception\", since that's\nwhats in the brain of the user that's writing RAISE EXCEPTION.\nIt matters if exceptions and errors are different. If they're\nnot, then it also matters since it's exceptions that the user's\ncode raises.\n\n> 0005 - This tone of voice is used throughout the introductory\n> documentation sections, this single change doesn't seem warranted.\n\nOk. I'll drop it unless somebody else chimes in.\n\n> 0006 - Useful. The view itself provides all configuration parameters\n> known to the system, the \"or selected\" isn't true of the view itself,\n> and it seems to go without saying that the user can add a where\n> clause to any query they write using that view. At most I'd make one\n> of the examples include a where clause.\n\nGood points.\n\nI'll get rid of the \", or selected,\". May as well leave the\nexamples as short as possible. Less is more. :)\n\n> 0007 - I'm leaning toward this area being due for some improvement,\n> especially given the v16 changes bring new attention to it, but this\n> patch doesn't scream \"improvement\" to me.\n\nLet's see how it looks with 0012, which uses the vocabulary.\n\nI do like the \"Roles therefore control who has what access to which\nobjects.\" sentence. I was shooting for shorter sentences, but then\nwhen I broke the existing sentences into pieces I couldn't resist\nadding text.\n\n> 0008 - Same as 0007. INHERIT is no longer an attribute though, it is\n> a property of membership.\n\n(!) I'm going to have to pay more attention.\n\n> This seems more acceptable on its own but\n> I'd need more time to take in the big picture myself.\n\nIt's the big picture that I'm trying to draw. 0007, 0008, and 0012\nall kind of go together. It may be worth forking the email thread,\nor something.\n\nHave you any thoughts on the \"permissions\", \"privleges\" and \n\"attributes\" vocabulary/concepts used in this area?\n\n\nThanks very much for the review. I'm going to wait a bit\nbefore incorporating your suggestions and sending in another\npatch set in case Daniel chimes in. (I'm slightly\nnervous about the renumbering making the thread hard to follow.)\n\nRegards,\n\nKarl <[email protected]>\nFree Software: \"You don't pay back, you pay forward.\"\n -- Robert A. Heinlein\n\n\n", "msg_date": "Tue, 3 Oct 2023 18:15:49 -0500", "msg_from": "\"Karl O. Pinc\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Various small doc improvements; plpgsql, schemas, permissions,\n oidvector" }, { "msg_contents": "On Tue, Oct 3, 2023 at 4:15 PM Karl O. Pinc <[email protected]> wrote:\n\n> On Tue, 3 Oct 2023 14:51:31 -0700\n> \"David G. Johnston\" <[email protected]> wrote:\n>\n> Isn't the entire section about \"deviating from the normal flow of the\n> code\"? That's what makes me want \"Exception\" in the section title.\n>\n\nIt is about how error handling in a procedure diverts the flow from the\nnormal code path to some other code path - what that path is labelled is\nless important than the thing that causes the diversion - an error.\n\n\n> ? I remain (overly?) focused on the word \"exception\", since that's\n> whats in the brain of the user that's writing RAISE EXCEPTION.\n> It matters if exceptions and errors are different. If they're\n> not, then it also matters since it's exceptions that the user's\n> code raises.\n>\n>\nIt's unfortunate the keyword to raise the message level \"ERROR\" is\n\"EXCEPTION\" in that command but I'd rather simply handle that one anomaly\nthat make the rest of the system use the word exception, especially seem to\nbe fairly consistent in our usage of ERROR already. I'm sympathetic that\nother systems out there also encourage the usage of exception in this\ncontext instead of error - but not to the point of opening up this\nlong-standing decision for rework.\n\n\n> Have you any thoughts on the \"permissions\", \"privleges\" and\n> \"attributes\" vocabulary/concepts used in this area?\n>\n\nI think we benefit from being able to equate permissions and privileges and\ntrying to separate them is going to be more harmful than helpful. The\nlimited things that role attributes permit, and how they fall outside the\nprivilege/permission concept as we use it, isn't something that I've\nnoticed is a problem that needs addressing.\n\n\n (I'm slightly\n> nervous about the renumbering making the thread hard to follow.)\n>\n>\n0009 - Something just seems off with this one. Unless there are more\nplaces with this type in use I would just move the relevant notes (i.e.,\nthe one in proallargtypes) to that column and be done with it. If there\nare multiple places then moving the notes to the main docs and\ncross-referencing to them seems warranted. I also wouldn't call it legacy.\n\n0010 -\n\nWhen creating new objects, if a schema qualification is not given with the\nname the first extant entry in the search_path is chosen; then an error\nwill be raised should the supplied name already exist in that schema.\nIn contexts where the object must already exist, but its name is not schema\nqualified, the extant search_path schemas will be consulted serially until\none of them contains an appropriate object, returning it, or all schemas\nare consulted, resulting in an object not found error.\n\nI'm not seeing much value in presenting the additional user/public details\nhere. Especially as it would then seem appropriate to include pg_temp.\nAnd now we have to deal with the fact that by default the public schema\nisn't so public anymore.\n\nDavid J.\n\nOn Tue, Oct 3, 2023 at 4:15 PM Karl O. Pinc <[email protected]> wrote:On Tue, 3 Oct 2023 14:51:31 -0700\n\"David G. Johnston\" <[email protected]> wrote:\nIsn't the entire section about \"deviating from the normal flow of the\ncode\"?  That's what makes me want \"Exception\" in the section title.It is about how error handling in a procedure diverts the flow from the normal code path to some other code path - what that path is labelled is less important than the thing that causes the diversion - an error.\n\n?  I remain (overly?) focused on the word \"exception\", since that's\nwhats in the brain of the user that's writing RAISE EXCEPTION.\nIt matters if exceptions and errors are different.  If they're\nnot, then it also matters since it's exceptions that the user's\ncode raises.It's unfortunate the keyword to raise the message level \"ERROR\" is \"EXCEPTION\" in that command but I'd rather simply handle that one anomaly that make the rest of the system use the word exception, especially seem to be fairly consistent in our usage of ERROR already.  I'm sympathetic that other systems out there also encourage the usage of exception in this context instead of error - but not to the point of opening up this long-standing decision for rework.\n\nHave you any thoughts on the \"permissions\", \"privleges\" and \n\"attributes\" vocabulary/concepts used in this area?I think we benefit from being able to equate permissions and privileges and trying to separate them is going to be more harmful than helpful.  The limited things that role attributes permit, and how they fall outside the privilege/permission concept as we use it, isn't something that I've noticed is a problem that needs addressing.  (I'm slightly\nnervous about the renumbering making the thread hard to follow.)0009 - Something just seems off with this one.  Unless there are more places with this type in use I would just move the relevant notes (i.e., the one in proallargtypes) to that column and be done with it.  If there are multiple places then moving the notes to the main docs and cross-referencing to them seems warranted.  I also wouldn't call it legacy.0010 -When creating new objects, if a schema qualification is not given with the name the first extant entry in the search_path is chosen; then an error will be raised should the supplied name already exist in that schema.In contexts where the object must already exist, but its name is not schema qualified, the extant search_path schemas will be consulted serially until one of them contains an appropriate object, returning it, or all schemas are consulted, resulting in an object not found error.I'm not seeing much value in presenting the additional user/public details here.  Especially as it would then seem appropriate to include pg_temp.  And now we have to deal with the fact that by default the public schema isn't so public anymore.David J.", "msg_date": "Tue, 3 Oct 2023 19:00:18 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Various small doc improvements;\n plpgsql, schemas, permissions, oidvector" }, { "msg_contents": "Extending my prior email which is now redundant.\n\nOn Tue, Oct 3, 2023 at 7:00 PM David G. Johnston <[email protected]>\nwrote:\n\n> On Tue, Oct 3, 2023 at 4:15 PM Karl O. Pinc <[email protected]> wrote:\n>\n>> On Tue, 3 Oct 2023 14:51:31 -0700\n>> \"David G. Johnston\" <[email protected]> wrote:\n>>\n>> Isn't the entire section about \"deviating from the normal flow of the\n>> code\"? That's what makes me want \"Exception\" in the section title.\n>>\n>\n> It is about how error handling in a procedure diverts the flow from the\n> normal code path to some other code path - what that path is labelled is\n> less important than the thing that causes the diversion - an error.\n>\n>\n>> ? I remain (overly?) focused on the word \"exception\", since that's\n>> whats in the brain of the user that's writing RAISE EXCEPTION.\n>> It matters if exceptions and errors are different. If they're\n>> not, then it also matters since it's exceptions that the user's\n>> code raises.\n>>\n>>\n> It's unfortunate the keyword to raise the message level \"ERROR\" is\n> \"EXCEPTION\" in that command but I'd rather simply handle that one anomaly\n> that make the rest of the system use the word exception, especially seem to\n> be fairly consistent in our usage of ERROR already. I'm sympathetic that\n> other systems out there also encourage the usage of exception in this\n> context instead of error - but not to the point of opening up this\n> long-standing decision for rework.\n>\n>\n>> Have you any thoughts on the \"permissions\", \"privleges\" and\n>> \"attributes\" vocabulary/concepts used in this area?\n>>\n>\n> I think we benefit from being able to equate permissions and privileges\n> and trying to separate them is going to be more harmful than helpful. The\n> limited things that role attributes permit, and how they fall outside the\n> privilege/permission concept as we use it, isn't something that I've\n> noticed is a problem that needs addressing.\n>\n>\n> (I'm slightly\n>> nervous about the renumbering making the thread hard to follow.)\n>>\n>>\n> 0009 - Something just seems off with this one. Unless there are more\n> places with this type in use I would just move the relevant notes (i.e.,\n> the one in proallargtypes) to that column and be done with it. If there\n> are multiple places then moving the notes to the main docs and\n> cross-referencing to them seems warranted. I also wouldn't call it legacy.\n>\n> 0010 -\n>\n> When creating new objects, if a schema qualification is not given with the\n> name the first extant entry in the search_path is chosen; then an error\n> will be raised should the supplied name already exist in that schema.\n> In contexts where the object must already exist, but its name is not\n> schema qualified, the extant search_path schemas will be consulted serially\n> until one of them contains an appropriate object, returning it, or all\n> schemas are consulted, resulting in an object not found error.\n>\n> I'm not seeing much value in presenting the additional user/public details\n> here. Especially as it would then seem appropriate to include pg_temp.\n> And now we have to deal with the fact that by default the public schema\n> isn't so public anymore.\n>\n>\n0011 - (first pass, going from memory, might have missed some needed\ndetails)\n\nAside from non-atomic SQL routine bodies (functions and procedures) the\nresult of the server executing SQL sent by the connected client does not\nresult in raw SQL, or textual expressions, being stored for later\nevaluation. All objects are identified (or created) during execution and\ntheir effects stored within the system catalogs and assigned system\nidentifiers (oids) to provide an absolute and immutable reference to be\nused while establishing inter-object dependencies. In short, indirect\nactions taken by the server, based upon stored knowledge, can and often\nwill execute while in a search_path that only contains the pg_catalog\nschema so that the stored knowledge can be found.\n\nFor routines written in any language except Atomic SQL the textual body of\nthe routine is stored as-is within the database. When executing such a\nroutine the (parent) session basically opens up a new connection to the\nserver (one per routine) and within that new sub-session sends the SQL\ncontained within the routine to the server for execution just like any\nother client, and therefore any object references present in that SQL need\nto be resolved to a schema as previously discussed. By default, upon\nconnecting, the newly created session is updated so that its settings take\non the current values in the parent session. When authoring a routine this\nis often undesirable as the behavior of the routine now depends upon an\nenvironment that is not definitively known to the routine author.\nSchema-qualifying object references within the routine body is one tool to\nremove such uncertainty. Another is by using the SET clause of the\nrelevant CREATE SQL Command to specify what the value of important settings\nare to be.\n\nThe key takeaway from the preceding two paragraphs is that because routines\nare stored as text and their settings resolved at execution time, and\nindirect server actions can invoke those routines with a pg_catalog only\nsearch_path, any routine that potentially can be invoked in that manner and\nmakes use of search_path should either be modified to eliminate such use or\ndefine the required search_path via the SET option during its creation or\nreplacement.\n\n0012 - (this has changed recently too, I'm not sure how this fits within\nthe rest. I still feel like something is missing even in my revision but\nnot sure what or if it is covered sufficiently nearby)\n\nAll roles are ultimately owned and managed by the bootstrap superuser, who\ncan establish trees of groups and users upon which the object permission\ngranting system works. By enabling the CREATEROLE attribute on a user a\nsuperuser can delegate role creation to other people (it is inadvisable to\nenable CREATEROLE on a group) who can then construct their own trees of\ngroups and users.\n\n(not sure how true this is still but something to consider in terms of big\npicture role setups)\nIt is likewise inadvisable to create multiple superusers since in practice\ntheir actions in many cases can be made to look attributable to the\nbootstrap superuser. It is necessary to enlist services outside of\nPostgreSQL to adequately establish auditing in a multi-superuser setup.\n\nNote my intentional use of users and groups here. We got rid of the\ndistinction with CREATE ROLE but in terms of system administration they\nstill have, IMO, significant utility.\n\n0013 - +1\n0014 - +1\n\n0015 - I'd almost rather only note in CREATE FUNCTION that PARALLEL does\nnot matter for a trigger returning function as triggers only execute in\ncases of data writing which precludes using parallelism. Which is indeed\nwhat the documentation explicitly calls out in \"When Can Parallel Query Be\nUsed?\" so it isn't inference from omission.\n\nI don't have a problem saying in the trigger documentation, maybe at the\nvery end:\n\nThe functions that triggers execute are more appropriately considered\nprocedures but since the later feature did not exist when triggers were\nimplemented precedent compels the dba to write their routines as\nfunctions. As a consequence, function attributes such as PARALLEL, and\nWINDOW, are possible to define on a function that is to be used as a\ntrigger but will have no effect. (though I would think at least some of\nthese get rejected outright)\n\n0016 - not within my knowledge base\n\nDavid J.\n\nExtending my prior email which is now redundant.On Tue, Oct 3, 2023 at 7:00 PM David G. Johnston <[email protected]> wrote:On Tue, Oct 3, 2023 at 4:15 PM Karl O. Pinc <[email protected]> wrote:On Tue, 3 Oct 2023 14:51:31 -0700\n\"David G. Johnston\" <[email protected]> wrote:\nIsn't the entire section about \"deviating from the normal flow of the\ncode\"?  That's what makes me want \"Exception\" in the section title.It is about how error handling in a procedure diverts the flow from the normal code path to some other code path - what that path is labelled is less important than the thing that causes the diversion - an error.\n\n?  I remain (overly?) focused on the word \"exception\", since that's\nwhats in the brain of the user that's writing RAISE EXCEPTION.\nIt matters if exceptions and errors are different.  If they're\nnot, then it also matters since it's exceptions that the user's\ncode raises.It's unfortunate the keyword to raise the message level \"ERROR\" is \"EXCEPTION\" in that command but I'd rather simply handle that one anomaly that make the rest of the system use the word exception, especially seem to be fairly consistent in our usage of ERROR already.  I'm sympathetic that other systems out there also encourage the usage of exception in this context instead of error - but not to the point of opening up this long-standing decision for rework.\n\nHave you any thoughts on the \"permissions\", \"privleges\" and \n\"attributes\" vocabulary/concepts used in this area?I think we benefit from being able to equate permissions and privileges and trying to separate them is going to be more harmful than helpful.  The limited things that role attributes permit, and how they fall outside the privilege/permission concept as we use it, isn't something that I've noticed is a problem that needs addressing.  (I'm slightly\nnervous about the renumbering making the thread hard to follow.)0009 - Something just seems off with this one.  Unless there are more places with this type in use I would just move the relevant notes (i.e., the one in proallargtypes) to that column and be done with it.  If there are multiple places then moving the notes to the main docs and cross-referencing to them seems warranted.  I also wouldn't call it legacy.0010 -When creating new objects, if a schema qualification is not given with the name the first extant entry in the search_path is chosen; then an error will be raised should the supplied name already exist in that schema.In contexts where the object must already exist, but its name is not schema qualified, the extant search_path schemas will be consulted serially until one of them contains an appropriate object, returning it, or all schemas are consulted, resulting in an object not found error.I'm not seeing much value in presenting the additional user/public details here.  Especially as it would then seem appropriate to include pg_temp.  And now we have to deal with the fact that by default the public schema isn't so public anymore.0011 - (first pass, going from memory, might have missed some needed details)Aside from non-atomic SQL routine bodies (functions and procedures) the result of the server executing SQL sent by the connected client does not result in raw SQL, or textual expressions, being stored for later evaluation.  All objects are identified (or created) during execution and their effects stored within the system catalogs and assigned system identifiers (oids) to provide an absolute and immutable reference to be used while establishing inter-object dependencies.  In short, indirect actions taken by the server, based upon stored knowledge, can and often will execute while in a search_path that only contains the pg_catalog schema so that the stored knowledge can be found.For routines written in any language except Atomic SQL the textual body of the routine is stored as-is within the database.  When executing such a routine the (parent) session basically opens up a new connection to the server (one per routine) and within that new sub-session sends the SQL contained within the routine to the server for execution just like any other client, and therefore any object references present in that SQL need to be resolved to a schema as previously discussed.  By default, upon connecting, the newly created session is updated so that its settings take on the current values in the parent session.  When authoring a routine this is often undesirable as the behavior of the routine now depends upon an environment that is not definitively known to the routine author.  Schema-qualifying object references within the routine body is one tool to remove such uncertainty.  Another is by using the SET clause of the relevant CREATE SQL Command to specify what the value of important settings are to be.The key takeaway from the preceding two paragraphs is that because routines are stored as text and their settings resolved at execution time, and indirect server actions can invoke those routines with a pg_catalog only search_path, any routine that potentially can be invoked in that manner and makes use of search_path should either be modified to eliminate such use or define the required search_path via the SET option during its creation or replacement.0012 - (this has changed recently too, I'm not sure how this fits within the rest.  I still feel like something is missing even in my revision but not sure what or if it is covered sufficiently nearby)All roles are ultimately owned and managed by the bootstrap superuser, who can establish trees of groups and users upon which the object permission granting system works.  By enabling the CREATEROLE attribute on a user a superuser can delegate role creation to other people (it is inadvisable to enable CREATEROLE on a group) who can then construct their own trees of groups and users.(not sure how true this is still but something to consider in terms of big picture role setups)It is likewise inadvisable to create multiple superusers since in practice their actions in many cases can be made to look attributable to the bootstrap superuser.  It is necessary to enlist services outside of PostgreSQL to adequately establish auditing in a multi-superuser setup.Note my intentional use of users and groups here.  We got rid of the distinction with CREATE ROLE but in terms of system administration they still have, IMO, significant utility.0013 - +10014 - +10015 - I'd almost rather only note in CREATE FUNCTION that PARALLEL does not matter for a trigger returning function as triggers only execute in cases of data writing which precludes using parallelism.  Which is indeed what the documentation explicitly calls out in \"When Can Parallel Query Be Used?\" so it isn't inference from omission.I don't have a problem saying in the trigger documentation, maybe at the very end:The functions that triggers execute are more appropriately considered procedures but since the later feature did not exist when triggers were implemented precedent compels the dba to write their routines as functions.  As a consequence, function attributes such as PARALLEL, and WINDOW, are possible to define on a function that is to be used as a trigger but will have no effect. (though I would think at least some of these get rejected outright)0016 - not within my knowledge baseDavid J.", "msg_date": "Tue, 3 Oct 2023 23:25:26 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Various small doc improvements;\n plpgsql, schemas, permissions, oidvector" }, { "msg_contents": "On Thu, Nov 30, 2023 at 3:59 PM David G. Johnston\n<[email protected]> wrote:\n>\n> Extending my prior email which is now redundant.\n>\n> On Tue, Oct 3, 2023 at 7:00 PM David G. Johnston <[email protected]> wrote:\n>>\n>> On Tue, Oct 3, 2023 at 4:15 PM Karl O. Pinc <[email protected]> wrote:\n>>>\n>>> On Tue, 3 Oct 2023 14:51:31 -0700\n>>> \"David G. Johnston\" <[email protected]> wrote:\n>>>\n>>> Isn't the entire section about \"deviating from the normal flow of the\n>>> code\"? That's what makes me want \"Exception\" in the section title.\n>>\n>>\n>> It is about how error handling in a procedure diverts the flow from the normal code path to some other code path - what that path is labelled is less important than the thing that causes the diversion - an error.\n>>\n>>>\n>>> ? I remain (overly?) focused on the word \"exception\", since that's\n>>> whats in the brain of the user that's writing RAISE EXCEPTION.\n>>> It matters if exceptions and errors are different. If they're\n>>> not, then it also matters since it's exceptions that the user's\n>>> code raises.\n>>>\n>>\n>> It's unfortunate the keyword to raise the message level \"ERROR\" is \"EXCEPTION\" in that command but I'd rather simply handle that one anomaly that make the rest of the system use the word exception, especially seem to be fairly consistent in our usage of ERROR already. I'm sympathetic that other systems out there also encourage the usage of exception in this context instead of error - but not to the point of opening up this long-standing decision for rework.\n>>\n>>>\n>>> Have you any thoughts on the \"permissions\", \"privleges\" and\n>>> \"attributes\" vocabulary/concepts used in this area?\n>>\n>>\n>> I think we benefit from being able to equate permissions and privileges and trying to separate them is going to be more harmful than helpful. The limited things that role attributes permit, and how they fall outside the privilege/permission concept as we use it, isn't something that I've noticed is a problem that needs addressing.\n>>\n>>\n>>> (I'm slightly\n>>> nervous about the renumbering making the thread hard to follow.)\n>>>\n>>\n>> 0009 - Something just seems off with this one. Unless there are more places with this type in use I would just move the relevant notes (i.e., the one in proallargtypes) to that column and be done with it. If there are multiple places then moving the notes to the main docs and cross-referencing to them seems warranted. I also wouldn't call it legacy.\n>>\n>> 0010 -\n>>\n>> When creating new objects, if a schema qualification is not given with the name the first extant entry in the search_path is chosen; then an error will be raised should the supplied name already exist in that schema.\n>> In contexts where the object must already exist, but its name is not schema qualified, the extant search_path schemas will be consulted serially until one of them contains an appropriate object, returning it, or all schemas are consulted, resulting in an object not found error.\n>>\n>> I'm not seeing much value in presenting the additional user/public details here. Especially as it would then seem appropriate to include pg_temp. And now we have to deal with the fact that by default the public schema isn't so public anymore.\n>>\n>\n> 0011 - (first pass, going from memory, might have missed some needed details)\n>\n> Aside from non-atomic SQL routine bodies (functions and procedures) the result of the server executing SQL sent by the connected client does not result in raw SQL, or textual expressions, being stored for later evaluation. All objects are identified (or created) during execution and their effects stored within the system catalogs and assigned system identifiers (oids) to provide an absolute and immutable reference to be used while establishing inter-object dependencies. In short, indirect actions taken by the server, based upon stored knowledge, can and often will execute while in a search_path that only contains the pg_catalog schema so that the stored knowledge can be found.\n>\n> For routines written in any language except Atomic SQL the textual body of the routine is stored as-is within the database. When executing such a routine the (parent) session basically opens up a new connection to the server (one per routine) and within that new sub-session sends the SQL contained within the routine to the server for execution just like any other client, and therefore any object references present in that SQL need to be resolved to a schema as previously discussed. By default, upon connecting, the newly created session is updated so that its settings take on the current values in the parent session. When authoring a routine this is often undesirable as the behavior of the routine now depends upon an environment that is not definitively known to the routine author. Schema-qualifying object references within the routine body is one tool to remove such uncertainty. Another is by using the SET clause of the relevant CREATE SQL Command to specify what the value of important settings are to be.\n>\n> The key takeaway from the preceding two paragraphs is that because routines are stored as text and their settings resolved at execution time, and indirect server actions can invoke those routines with a pg_catalog only search_path, any routine that potentially can be invoked in that manner and makes use of search_path should either be modified to eliminate such use or define the required search_path via the SET option during its creation or replacement.\n>\n> 0012 - (this has changed recently too, I'm not sure how this fits within the rest. I still feel like something is missing even in my revision but not sure what or if it is covered sufficiently nearby)\n>\n> All roles are ultimately owned and managed by the bootstrap superuser, who can establish trees of groups and users upon which the object permission granting system works. By enabling the CREATEROLE attribute on a user a superuser can delegate role creation to other people (it is inadvisable to enable CREATEROLE on a group) who can then construct their own trees of groups and users.\n>\n> (not sure how true this is still but something to consider in terms of big picture role setups)\n> It is likewise inadvisable to create multiple superusers since in practice their actions in many cases can be made to look attributable to the bootstrap superuser. It is necessary to enlist services outside of PostgreSQL to adequately establish auditing in a multi-superuser setup.\n>\n> Note my intentional use of users and groups here. We got rid of the distinction with CREATE ROLE but in terms of system administration they still have, IMO, significant utility.\n>\n> 0013 - +1\n> 0014 - +1\n>\n> 0015 - I'd almost rather only note in CREATE FUNCTION that PARALLEL does not matter for a trigger returning function as triggers only execute in cases of data writing which precludes using parallelism. Which is indeed what the documentation explicitly calls out in \"When Can Parallel Query Be Used?\" so it isn't inference from omission.\n>\n> I don't have a problem saying in the trigger documentation, maybe at the very end:\n>\n> The functions that triggers execute are more appropriately considered procedures but since the later feature did not exist when triggers were implemented precedent compels the dba to write their routines as functions. As a consequence, function attributes such as PARALLEL, and WINDOW, are possible to define on a function that is to be used as a trigger but will have no effect. (though I would think at least some of these get rejected outright)\n>\n> 0016 - not within my knowledge base\n>\n>I reviewed the Patch and found a few changes. Please have a look at them:\n-v7-0002-Change-section-heading-to-better-describe-referen.patch\n\n\"Re-Using the Type of Columns and Variables\" seems adequate. Getting\nsomething in there about declartions seems too wordy. I thought\nperhaps \"Referencing\" instead of \"Re-Using\", but \"referencing\" isn't\nperfect and \"re-using\" is generic enough, shorter, and simpler to\n\nHere 'declartions' should be replaced with 'declarations'.\n\n\n-v7-0012-Explain-role-management.patch\n\n+ The managment of most database objects is by way of granting some role\n\nHere 'managment' should be replaced with 'management'.\n\n-v7-0013-Hyperlink-from-CREATE-FUNCTION-reference-page-to-.patch\n\nIs is nice to have a link in the reference material to a full discussion.\n\nHere 'is' should be removed.\n\n-v7-0015-Trigger-authors-need-not-worry-about-parallelism.patch\n\nPlus, this patch adds an index entry so the new verbage is easy to find\nfor those who do investigate.\n\nHere 'verbage' should be replaced with 'verbiage'.\n\n-v7-0016-Predicate-locks-are-held-per-cluster-not-per-data.patch\n\nThis is a significant corner case and so should be documented. It is\nalso somewhat suprising since the databases within a cluster are\notherwise isolated, at least from the user's perspective.\n\nHere 'suprising' should be replaced with 'surprising'.\n\nPredicate locks are held per-cluster, not per database.\n+ This means that serializeable transactions in one database can have\n+ effects in another.\n+ Long running serializeable transactions, as might occur accidentally\n+ when\n+ <link linkend=\"app-psql-meta-command-pset-pager\">pagination</link>\n+ halts <link linkend=\"app-psql\">psql</link> output, can have\n+ significant inter-database effects.\n+ These include exhausting available predicate locks and\n+ cluster-wide <link linkend=\"ports12\">WAL checkpoint delay</link>.\n+ When making use of serializeable transactions consider having\n+ separate clusters for production and non-production use.\n\nHere 'serializeable' should be replaced with 'serializable'.\n\nThanks and Regards,\nShubham Khanna.\n\n\n", "msg_date": "Thu, 30 Nov 2023 16:02:28 +0530", "msg_from": "Shubham Khanna <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Various small doc improvements;\n plpgsql, schemas, permissions, oidvector" }, { "msg_contents": "Hello,\n\nThank you all for your help. I won't be able to submit\nnew patches based on reviews for 2 weeks.\n\nOn Thu, 30 Nov 2023 16:02:28 +0530\nShubham Khanna <[email protected]> wrote:\n<snip>\n> -v7-0012-Explain-role-management.patch\n> \n> + The managment of most database objects is by way of granting some\n> role\n> \n> Here 'managment' should be replaced with 'management'.\n\n<snip>\n\nRegards,\n\nKarl <[email protected]>\nFree Software: \"You don't pay back, you pay forward.\"\n -- Robert A. Heinlein\n\n\n", "msg_date": "Fri, 1 Dec 2023 08:00:39 -0600", "msg_from": "\"Karl O. Pinc\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Various small doc improvements; plpgsql, schemas, permissions,\n oidvector" }, { "msg_contents": "\n\n> On 1 Dec 2023, at 19:00, Karl O. Pinc <[email protected]> wrote:\n> \n> I won't be able to submit\n> new patches based on reviews for 2 weeks.\n\nHi everyone!\n\nIs there any work going on? Maybe is someone interested in moving this forward?\n\nThanks!\n\n\nBest regards, Andrey Borodin.\n\n", "msg_date": "Thu, 28 Mar 2024 17:16:17 +0500", "msg_from": "\"Andrey M. Borodin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Various small doc improvements; plpgsql, schemas, permissions,\n oidvector" }, { "msg_contents": "On Thu, Mar 28, 2024 at 8:16 AM Andrey M. Borodin <[email protected]> wrote:\n> > On 1 Dec 2023, at 19:00, Karl O. Pinc <[email protected]> wrote:\n> >\n> > I won't be able to submit\n> > new patches based on reviews for 2 weeks.\n>\n> Hi everyone!\n>\n> Is there any work going on? Maybe is someone interested in moving this forward?\n>\n> Thanks!\n>\n\nHey Andrey,\n\nI spoke with Karl briefly on this and he is working on getting an\nupdated patch together. The work now involves incorporating feedback\nand some rebasing, but hopefully we will see something in the next few\ndays.\n\nRobert Treat\nhttps://xzilla.net\n\n\n", "msg_date": "Thu, 28 Mar 2024 08:28:16 -0400", "msg_from": "Robert Treat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Various small doc improvements;\n plpgsql, schemas, permissions, oidvector" }, { "msg_contents": "On Thu, 28 Mar 2024 08:28:16 -0400\nRobert Treat <[email protected]> wrote:\n\n> I spoke with Karl briefly on this and he is working on getting an\n> updated patch together. The work now involves incorporating feedback\n> and some rebasing, but hopefully we will see something in the next few\n> days.\n\nWell, Friday has come and gone and I've not gotten to this.\nI'll see if I can spend time tomorrow.\n\nRegards,\n\nKarl <[email protected]>\nFree Software: \"You don't pay back, you pay forward.\"\n -- Robert A. Heinlein\n\n\n", "msg_date": "Sat, 30 Mar 2024 01:39:35 -0500", "msg_from": "\"Karl O. Pinc\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Various small doc improvements; plpgsql, schemas, permissions,\n oidvector" }, { "msg_contents": "\n\n> On 30 Mar 2024, at 11:39, Karl O. Pinc <[email protected]> wrote:\n> \n> Well, Friday has come and gone and I've not gotten to this.\n> I'll see if I can spend time tomorrow.\n\nNo worries, Karl! I just wanted to know if anyone is interested in this thread, and, now is obvious that you are. Thanks for your work!\n\n\nBest regards, Andrey Borodin.\n\n", "msg_date": "Sat, 30 Mar 2024 14:11:38 +0500", "msg_from": "\"Andrey M. Borodin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Various small doc improvements; plpgsql, schemas, permissions,\n oidvector" } ]
[ { "msg_contents": "\nHi, hackers\n\nI find src/backend/utils/mb/Unicode/Makefile has the following comments:\n\n> # Note that while each script call produces two output files, to be\n> # parallel-make safe we need to split this into two rules. (See for\n> # example gram.y for more explanation.)\n> #\n\nI could not find the explanation in gram.y easily. Would someone point\nit out for me? Thanks in advance!\n\n-- \nRegrads,\nJapin Li\n\n\n", "msg_date": "Mon, 25 Sep 2023 10:38:36 +0800", "msg_from": "Japin Li <[email protected]>", "msg_from_op": true, "msg_subject": "Confused about gram.y referencs in Makefile?" } ]
[ { "msg_contents": "\nHi, hackers\n\nI find src/backend/utils/mb/Unicode/Makefile has the following comments:\n\n> # Note that while each script call produces two output files, to be\n> # parallel-make safe we need to split this into two rules. (See for\n> # example gram.y for more explanation.)\n> #\n\nI could not find the explanation in gram.y easily. Would someone point\nit out for me? Thanks in advance!\n\n-- \nRegrads,\nJapin Li\n\n\n", "msg_date": "Mon, 25 Sep 2023 10:41:05 +0800", "msg_from": "Japin Li <[email protected]>", "msg_from_op": true, "msg_subject": "Confused about gram.y referencs in Makefile?" }, { "msg_contents": "Japin Li <[email protected]> writes:\n> I find src/backend/utils/mb/Unicode/Makefile has the following comments:\n\n>> # Note that while each script call produces two output files, to be\n>> # parallel-make safe we need to split this into two rules. (See for\n>> # example gram.y for more explanation.)\n\n> I could not find the explanation in gram.y easily. Would someone point\n> it out for me? Thanks in advance!\n\nIt's referring to this bit in src/backend/parser/Makefile:\n\n-----\n# There is no correct way to write a rule that generates two files.\n# Rules with two targets don't have that meaning, they are merely\n# shorthand for two otherwise separate rules. If we have an action\n# that in fact generates two or more files, we must choose one of them\n# as primary and show it as the action's output, then make all of the\n# other output files dependent on the primary, like this. Furthermore,\n# the \"touch\" action is essential, because it ensures that gram.h is\n# marked as newer than (or at least no older than) gram.c. Without that,\n# make is likely to try to rebuild gram.h in subsequent runs, which causes\n# failures in VPATH builds from tarballs.\n\ngram.h: gram.c\n\ttouch $@\n\ngram.c: BISONFLAGS += -d\ngram.c: BISON_CHECK_CMD = $(PERL) $(srcdir)/check_keywords.pl $< $(top_srcdir)/src/include/parser/kwlist.h\n-----\n\nThis is indeed kind of confusing, because there's no explicit\nreference to gram.y here --- the last two lines just tweak\nthe behavior of the default .y to .c rule.\n\nMaybe we should adjust the comment in Unicode/Makefile, but\nI'm not sure what would be a better reference.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 24 Sep 2023 23:17:50 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Confused about gram.y referencs in Makefile?" }, { "msg_contents": "\nOn Mon, 25 Sep 2023 at 11:17, Tom Lane <[email protected]> wrote:\n> Japin Li <[email protected]> writes:\n>> I find src/backend/utils/mb/Unicode/Makefile has the following comments:\n>\n>>> # Note that while each script call produces two output files, to be\n>>> # parallel-make safe we need to split this into two rules. (See for\n>>> # example gram.y for more explanation.)\n>\n>> I could not find the explanation in gram.y easily. Would someone point\n>> it out for me? Thanks in advance!\n>\n> It's referring to this bit in src/backend/parser/Makefile:\n>\n> -----\n> # There is no correct way to write a rule that generates two files.\n> # Rules with two targets don't have that meaning, they are merely\n> # shorthand for two otherwise separate rules. If we have an action\n> # that in fact generates two or more files, we must choose one of them\n> # as primary and show it as the action's output, then make all of the\n> # other output files dependent on the primary, like this. Furthermore,\n> # the \"touch\" action is essential, because it ensures that gram.h is\n> # marked as newer than (or at least no older than) gram.c. Without that,\n> # make is likely to try to rebuild gram.h in subsequent runs, which causes\n> # failures in VPATH builds from tarballs.\n>\n> gram.h: gram.c\n> \ttouch $@\n>\n> gram.c: BISONFLAGS += -d\n> gram.c: BISON_CHECK_CMD = $(PERL) $(srcdir)/check_keywords.pl $< $(top_srcdir)/src/include/parser/kwlist.h\n> -----\n>\n> This is indeed kind of confusing, because there's no explicit\n> reference to gram.y here --- the last two lines just tweak\n> the behavior of the default .y to .c rule.\n>\n> Maybe we should adjust the comment in Unicode/Makefile, but\n> I'm not sure what would be a better reference.\n>\n> \t\t\tregards, tom lane\n\nThank you!\n\nMaybe be reference src/backend/parser/Makefile is better than current.\n\nHow about \"See gram.h target's comment in src/backend/parser/Makefile\"\nor just \"See src/backend/parser/Makefile\"?\n\n-- \nRegrads,\nJapin Li\n\n\n", "msg_date": "Mon, 25 Sep 2023 11:34:27 +0800", "msg_from": "Japin Li <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Confused about gram.y referencs in Makefile?" }, { "msg_contents": "> On 25 Sep 2023, at 05:34, Japin Li <[email protected]> wrote:\n\n> Maybe be reference src/backend/parser/Makefile is better than current.\n> \n> How about \"See gram.h target's comment in src/backend/parser/Makefile\"\n> or just \"See src/backend/parser/Makefile\"?\n\nThe latter seems more stable, if the Makefile is ever restructured it's almost\nguaranteed that this comment will be missed with the location info becoming\nstale.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Mon, 25 Sep 2023 09:29:16 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Confused about gram.y referencs in Makefile?" }, { "msg_contents": "Daniel Gustafsson <[email protected]> writes:\n> On 25 Sep 2023, at 05:34, Japin Li <[email protected]> wrote:\n>> How about \"See gram.h target's comment in src/backend/parser/Makefile\"\n>> or just \"See src/backend/parser/Makefile\"?\n\n> The latter seems more stable, if the Makefile is ever restructured it's almost\n> guaranteed that this comment will be missed with the location info becoming\n> stale.\n\nI did it like this:\n\n # Note that while each script call produces two output files, to be\n-# parallel-make safe we need to split this into two rules. (See for\n-# example gram.y for more explanation.)\n+# parallel-make safe we need to split this into two rules. (See notes\n+# in src/backend/parser/Makefile about rules with multiple outputs.)\n #\n\nThere are a whole lot of other cross-references to that same comment,\nand they all look like\n\n# See notes in src/backend/parser/Makefile about the following two rules\n\nI considered modifying all of those as well, but decided it wasn't\nreally worth the trouble. The Makefiles' days are numbered I think.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 25 Sep 2023 11:29:54 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Confused about gram.y referencs in Makefile?" } ]
[ { "msg_contents": "Hi,\n\nRecently I read the document about ereport()[1].\nThen, I felt that there is little information about severity level.\nSo I guess it can be kind to clarify where severity level is defined(see\nattached patch please).\n\nAny thoughts?\n\n\n[1] https://www.postgresql.org/docs/devel/error-message-reporting.html", "msg_date": "Mon, 25 Sep 2023 15:22:38 +0900", "msg_from": "Kuwamura Masaki <[email protected]>", "msg_from_op": true, "msg_subject": "Clarify where the severity level is defined" }, { "msg_contents": "> On 25 Sep 2023, at 08:22, Kuwamura Masaki <[email protected]> wrote:\n\n> Recently I read the document about ereport()[1].\n> Then, I felt that there is little information about severity level.\n> So I guess it can be kind to clarify where severity level is defined(see attached patch please).\n\nThat makes sense, we already point to other related files on that page so this\nis line with that.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Mon, 25 Sep 2023 08:37:43 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Clarify where the severity level is defined" }, { "msg_contents": "> On 25 Sep 2023, at 08:37, Daniel Gustafsson <[email protected]> wrote:\n> \n>> On 25 Sep 2023, at 08:22, Kuwamura Masaki <[email protected]> wrote:\n> \n>> Recently I read the document about ereport()[1].\n>> Then, I felt that there is little information about severity level.\n>> So I guess it can be kind to clarify where severity level is defined(see attached patch please).\n> \n> That makes sense, we already point to other related files on that page so this\n> is line with that.\n\nCommitted, with some minor wordsmithing. Thanks!\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Thu, 28 Sep 2023 15:39:41 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Clarify where the severity level is defined" }, { "msg_contents": ">\n> Committed, with some minor wordsmithing. Thanks!\n>\n\nThanks for tweaking and pushing, Daniel-san!\n\nMasaki Kuwamura\n\nCommitted, with some minor wordsmithing. Thanks!Thanks for tweaking and pushing, Daniel-san!Masaki Kuwamura", "msg_date": "Thu, 28 Sep 2023 23:17:46 +0900", "msg_from": "Kuwamura Masaki <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Clarify where the severity level is defined" } ]
[ { "msg_contents": "Hi,\n\nI found that \"taptest\" is missing in vcregress.bat command list in the\ndocumentation when I tested psql and pgbench on Windows. I know there\nis a plan to remove MSVC scripts[1], but is it worth adding one line\nto the list for now?\n\n[1] https://www.postgresql.org/message-id/flat/ZQzp_VMJcerM1Cs_%40paquier.xyz\n\nI attached a patch for this case.\n\nRegards,\nYugo Nagata\n\n-- \nYugo NAGATA <[email protected]>", "msg_date": "Mon, 25 Sep 2023 15:32:04 +0900", "msg_from": "Yugo NAGATA <[email protected]>", "msg_from_op": true, "msg_subject": "Doc: vcregress .bat commands list lacks \"taptest\"" }, { "msg_contents": "On Mon, Sep 25, 2023 at 03:32:04PM +0900, Yugo NAGATA wrote:\n> I found that \"taptest\" is missing in vcregress.bat command list in the\n> documentation when I tested psql and pgbench on Windows. I know there\n> is a plan to remove MSVC scripts[1], but is it worth adding one line\n> to the list for now?\n> \n> [1] https://www.postgresql.org/message-id/flat/ZQzp_VMJcerM1Cs_%40paquier.xyz\n> \n> I attached a patch for this case.\n\nAssuming that the MSVC scripts are gone in v17, and assuming that your\nsuggestion is backpatched, it would still be useful for 5 years until\nv16 is EOL'd. So I would say yes, but with a backpatch.\n--\nMichael", "msg_date": "Mon, 25 Sep 2023 15:58:39 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Doc: vcregress .bat commands list lacks \"taptest\"" }, { "msg_contents": "> On 25 Sep 2023, at 08:58, Michael Paquier <[email protected]> wrote:\n\n> I would say yes, but with a backpatch.\n\nAgreed.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Mon, 25 Sep 2023 09:14:48 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Doc: vcregress .bat commands list lacks \"taptest\"" }, { "msg_contents": "On 2023-09-25 Mo 03:14, Daniel Gustafsson wrote:\n>> On 25 Sep 2023, at 08:58, Michael Paquier<[email protected]> wrote:\n>> I would say yes, but with a backpatch.\n> Agreed.\n\n\n+1\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2023-09-25 Mo 03:14, Daniel\n Gustafsson wrote:\n\n\n\nOn 25 Sep 2023, at 08:58, Michael Paquier <[email protected]> wrote:\n\n\n\n\n\nI would say yes, but with a backpatch.\n\n\n\nAgreed.\n\n\n\n+1\n\n\ncheers\n\n\nandrew\n \n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com", "msg_date": "Mon, 25 Sep 2023 11:07:57 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Doc: vcregress .bat commands list lacks \"taptest\"" }, { "msg_contents": "On Mon, Sep 25, 2023 at 11:07:57AM -0400, Andrew Dunstan wrote:\n> +1\n\nThanks, applied and backpatched then.\n--\nMichael", "msg_date": "Tue, 26 Sep 2023 08:18:01 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Doc: vcregress .bat commands list lacks \"taptest\"" }, { "msg_contents": "On Tue, 26 Sep 2023 08:18:01 +0900\nMichael Paquier <[email protected]> wrote:\n\n> On Mon, Sep 25, 2023 at 11:07:57AM -0400, Andrew Dunstan wrote:\n> > +1\n> \n> Thanks, applied and backpatched then.\n\nThank you!\n\nRegards,\nYugo Nagata\n\n> --\n> Michael\n\n\n-- \nYugo NAGATA <[email protected]>\n\n\n", "msg_date": "Tue, 26 Sep 2023 14:21:45 +0900", "msg_from": "Yugo NAGATA <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Doc: vcregress .bat commands list lacks \"taptest\"" } ]
[ { "msg_contents": "\nHi, hackers\n\nWhen I try to update the unicode mapping table through *.xml in\nsrc/backend/utils/mb/Unicode/, it doesn't update the *.map files.\nI find the make cannot go to this directory, what can I do to update\nthe mapping tables?\n\n-- \nRegrads,\nJapin Li\nChengDu WenWu Information Technology Co., Ltd.\n\n\n", "msg_date": "Mon, 25 Sep 2023 15:02:25 +0800", "msg_from": "Japin Li <[email protected]>", "msg_from_op": true, "msg_subject": "How to update unicode mapping table?" }, { "msg_contents": "On 25.09.23 08:02, Japin Li wrote:\n> When I try to update the unicode mapping table through *.xml in\n> src/backend/utils/mb/Unicode/, it doesn't update the *.map files.\n> I find the make cannot go to this directory, what can I do to update\n> the mapping tables?\n\nThis is done by \"make update-unicode\".\n\n\n\n", "msg_date": "Mon, 25 Sep 2023 22:58:23 +0100", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to update unicode mapping table?" }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> On 25.09.23 08:02, Japin Li wrote:\n>> When I try to update the unicode mapping table through *.xml in\n>> src/backend/utils/mb/Unicode/, it doesn't update the *.map files.\n>> I find the make cannot go to this directory, what can I do to update\n>> the mapping tables?\n\n> This is done by \"make update-unicode\".\n\nOn a slightly related note, I noticed while preparing 3aff1d3fd\nthat src/backend/utils/mb/Unicode/Makefile seems a little screwy.\nIf you go into that directory and type \"make distclean\", you'll\nfind it removes some files that are in git:\n\n[postgres@sss1 Unicode]$ make distclean\nrm -f 8859-10.TXT 8859-13.TXT 8859-14.TXT 8859-15.TXT 8859-16.TXT 8859-2.TXT 8859-3.TXT 8859-4.TXT 8859-5.TXT 8859-6.TXT 8859-7.TXT 8859-8.TXT 8859-9.TXT BIG5.TXT CNS11643.TXT CP1250.TXT CP1251.TXT CP1252.TXT CP1253.TXT CP1254.TXT CP1255.TXT CP1256.TXT CP1257.TXT CP1258.TXT CP866.TXT CP874.TXT CP932.TXT CP936.TXT CP950.TXT JIS0212.TXT JOHAB.TXT KOI8-R.TXT KOI8-U.TXT KSX1001.TXT euc-jis-2004-std.txt gb-18030-2000.xml sjis-0213-2004-std.txt windows-949-2000.xml\n[postgres@sss1 Unicode]$ git status\nOn branch master\nYour branch is up to date with 'origin/master'.\n\nChanges not staged for commit:\n (use \"git add/rm <file>...\" to update what will be committed)\n (use \"git restore <file>...\" to discard changes in working directory)\n deleted: euc-jis-2004-std.txt\n deleted: gb-18030-2000.xml\n deleted: sjis-0213-2004-std.txt\n\nno changes added to commit (use \"git add\" and/or \"git commit -a\")\n\nThis seems wrong. If you \"make maintainer-clean\", that removes even more:\n\n$ git status\nOn branch master\nYour branch is up to date with 'origin/master'.\nChanges not staged for commit:\n (use \"git add/rm <file>...\" to update what will be committed)\n (use \"git restore <file>...\" to discard changes in working directory)\n deleted: big5_to_utf8.map\n deleted: euc-jis-2004-std.txt\n deleted: euc_cn_to_utf8.map\n deleted: euc_jis_2004_to_utf8.map\n deleted: euc_jp_to_utf8.map\n deleted: euc_kr_to_utf8.map\n deleted: euc_tw_to_utf8.map\n deleted: gb-18030-2000.xml\n deleted: gb18030_to_utf8.map\n deleted: gbk_to_utf8.map\n deleted: iso8859_10_to_utf8.map\n deleted: iso8859_13_to_utf8.map\n deleted: iso8859_14_to_utf8.map\n deleted: iso8859_15_to_utf8.map\n deleted: iso8859_16_to_utf8.map\n deleted: iso8859_2_to_utf8.map\n deleted: iso8859_3_to_utf8.map\n deleted: iso8859_4_to_utf8.map\n deleted: iso8859_5_to_utf8.map\n deleted: iso8859_6_to_utf8.map\n deleted: iso8859_7_to_utf8.map\n deleted: iso8859_8_to_utf8.map\n deleted: iso8859_9_to_utf8.map\n deleted: johab_to_utf8.map\n deleted: koi8r_to_utf8.map\n deleted: koi8u_to_utf8.map\n deleted: shift_jis_2004_to_utf8.map\n deleted: sjis-0213-2004-std.txt\n deleted: sjis_to_utf8.map\n deleted: uhc_to_utf8.map\n deleted: utf8_to_big5.map\n deleted: utf8_to_euc_cn.map\n deleted: utf8_to_euc_jis_2004.map\n deleted: utf8_to_euc_jp.map\n deleted: utf8_to_euc_kr.map\n deleted: utf8_to_euc_tw.map\n deleted: utf8_to_gb18030.map\n deleted: utf8_to_gbk.map\n deleted: utf8_to_iso8859_10.map\n deleted: utf8_to_iso8859_13.map\n deleted: utf8_to_iso8859_14.map\n deleted: utf8_to_iso8859_15.map\n deleted: utf8_to_iso8859_16.map\n deleted: utf8_to_iso8859_2.map\n deleted: utf8_to_iso8859_3.map\n deleted: utf8_to_iso8859_4.map\n deleted: utf8_to_iso8859_5.map\n deleted: utf8_to_iso8859_6.map\n deleted: utf8_to_iso8859_7.map\n deleted: utf8_to_iso8859_8.map\n deleted: utf8_to_iso8859_9.map\n deleted: utf8_to_johab.map\n deleted: utf8_to_koi8r.map\n deleted: utf8_to_koi8u.map\n deleted: utf8_to_shift_jis_2004.map\n deleted: utf8_to_sjis.map\n deleted: utf8_to_uhc.map\n deleted: utf8_to_win1250.map\n deleted: utf8_to_win1251.map\n deleted: utf8_to_win1252.map\n deleted: utf8_to_win1253.map\n deleted: utf8_to_win1254.map\n deleted: utf8_to_win1255.map\n deleted: utf8_to_win1256.map\n deleted: utf8_to_win1257.map\n deleted: utf8_to_win1258.map\n deleted: utf8_to_win866.map\n deleted: utf8_to_win874.map\n deleted: win1250_to_utf8.map\n deleted: win1251_to_utf8.map\n deleted: win1252_to_utf8.map\n deleted: win1253_to_utf8.map\n deleted: win1254_to_utf8.map\n deleted: win1255_to_utf8.map\n deleted: win1256_to_utf8.map\n deleted: win1257_to_utf8.map\n deleted: win1258_to_utf8.map\n deleted: win866_to_utf8.map\n deleted: win874_to_utf8.map\n\nno changes added to commit (use \"git add\" and/or \"git commit -a\")\n\nTo undo the mess, I tried \"make\", which did rebuild all those files\nbut now I have\n\n$ git status\nOn branch master\nYour branch is up to date with 'origin/master'.\n\nUntracked files:\n (use \"git add <file>...\" to include in what will be committed)\n 8859-10.TXT\n 8859-13.TXT\n 8859-14.TXT\n 8859-15.TXT\n 8859-16.TXT\n 8859-2.TXT\n 8859-3.TXT\n 8859-4.TXT\n 8859-5.TXT\n 8859-6.TXT\n 8859-7.TXT\n 8859-8.TXT\n 8859-9.TXT\n BIG5.TXT\n CNS11643.TXT\n CP1250.TXT\n CP1251.TXT\n CP1252.TXT\n CP1253.TXT\n CP1254.TXT\n CP1255.TXT\n CP1256.TXT\n CP1257.TXT\n CP1258.TXT\n CP866.TXT\n CP874.TXT\n CP932.TXT\n CP936.TXT\n CP950.TXT\n JIS0212.TXT\n JOHAB.TXT\n KOI8-R.TXT\n KOI8-U.TXT\n KSX1001.TXT\n windows-949-2000.xml\n\nnothing added to commit but untracked files present (use \"git add\" to track)\n\nSo there doesn't seem to be any clean way to regenerate the fileset\npresent in git. Maybe these targets aren't supposed to be invoked\nhere, but then why have a Makefile here at all? Alternatively,\nmaybe we have files in git that shouldn't be there (very likely due\nto the fact that this directory also lacks a .gitignore file).\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 25 Sep 2023 18:20:10 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to update unicode mapping table?" }, { "msg_contents": "\nOn Tue, 26 Sep 2023 at 05:58, Peter Eisentraut <[email protected]> wrote:\n> On 25.09.23 08:02, Japin Li wrote:\n>> When I try to update the unicode mapping table through *.xml in\n>> src/backend/utils/mb/Unicode/, it doesn't update the *.map files.\n>> I find the make cannot go to this directory, what can I do to update\n>> the mapping tables?\n>\n> This is done by \"make update-unicode\".\n\nThanks, it seems works.\n\n-- \nRegrads,\nJapin Li\nChengDu WenWu Information Technology Co., Ltd.\n\n\n", "msg_date": "Tue, 26 Sep 2023 09:23:37 +0800", "msg_from": "Japin Li <[email protected]>", "msg_from_op": true, "msg_subject": "Re: How to update unicode mapping table?" }, { "msg_contents": "\nOn Tue, 26 Sep 2023 at 06:20, Tom Lane <[email protected]> wrote:\n> Peter Eisentraut <[email protected]> writes:\n>> On 25.09.23 08:02, Japin Li wrote:\n>>> When I try to update the unicode mapping table through *.xml in\n>>> src/backend/utils/mb/Unicode/, it doesn't update the *.map files.\n>>> I find the make cannot go to this directory, what can I do to update\n>>> the mapping tables?\n>\n>> This is done by \"make update-unicode\".\n>\n> On a slightly related note, I noticed while preparing 3aff1d3fd\n> that src/backend/utils/mb/Unicode/Makefile seems a little screwy.\n>\n> So there doesn't seem to be any clean way to regenerate the fileset\n> present in git. Maybe these targets aren't supposed to be invoked\n> here, but then why have a Makefile here at all? Alternatively,\n> maybe we have files in git that shouldn't be there (very likely due\n> to the fact that this directory also lacks a .gitignore file).\n>\n\nI find those files do not removed when using VPATH build.\n\n-- \nRegrads,\nJapin Li\nChengDu WenWu Information Technology Co., Ltd.\n\n\n", "msg_date": "Tue, 26 Sep 2023 09:30:03 +0800", "msg_from": "Japin Li <[email protected]>", "msg_from_op": true, "msg_subject": "Re: How to update unicode mapping table?" }, { "msg_contents": "On 26.09.23 00:20, Tom Lane wrote:\n> On a slightly related note, I noticed while preparing 3aff1d3fd\n> that src/backend/utils/mb/Unicode/Makefile seems a little screwy.\n> If you go into that directory and type \"make distclean\", you'll\n> find it removes some files that are in git:\n\nSince this is only used during update-unicode, whose purpose is to \noverwrite files in git, this might not be totally wrong.\n\nMaybe the target names distclean and maintainer-clean are inappropriate, \nsince they suggest some standard semantics. Maybe the current semantics \nare also not that useful, I'm not sure. I guess among the things you'd \nwant are \"delete all intermediate downloaded files\", which seemingly \nnone of these do.\n\n\n\n", "msg_date": "Fri, 6 Oct 2023 09:05:38 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to update unicode mapping table?" } ]
[ { "msg_contents": "I think I've come across a wrong result issue with grouping sets, as\nshown by the query below.\n\n-- result is correct with only grouping sets\nselect a, b\nfrom (values (1, 1), (2, 2)) as t (a, b) where a = b\ngroup by grouping sets((a, b), (a));\n a | b\n---+---\n 1 | 1\n 1 |\n 2 | 2\n 2 |\n(4 rows)\n\n-- result is NOT correct with grouping sets and distinct on\nselect distinct on (a, b) a, b\nfrom (values (1, 1), (2, 2)) as t (a, b) where a = b\ngroup by grouping sets((a, b), (a));\n a | b\n---+---\n 1 | 1\n 2 | 2\n(2 rows)\n\nThe distinct on expressions include both 'a' and 'b', so rows (1, 1) and\n(1, NULL) should not be considered equal. (The same for rows (2, 2) and\n(2, NULL).)\n\nI think the root cause is that when we generate distinct_pathkeys, we\nfailed to realize that Var 'b' might be nullable by the grouping sets,\nso it's no longer always equal to Var 'a'. It's not correct to deem\nthat the PathKey for 'b' is redundant and thus remove it from the\npathkeys list.\n\nWe have the same issue when generating sort_pathkeys. As a result, we\nmay have the final output in the wrong order. There were several\nreports about this issue before, such as [1][2].\n\nTo fix this issue, I'm thinking that we mark the grouping expressions\nnullable by grouping sets with a dummy RTE for grouping sets, something\nlike attached. In practice we do not need to create a real RTE for\nthat, what we need is just a RT index. In the patch I use 0, because\nit's not a valid outer join relid, so it would not conflict with the\nouter-join-aware-Var infrastructure.\n\nIf the grouping expression is a Var or PHV, we can just set its\nnullingrels, very straightforward. For an expression that is neither a\nVar nor a PHV, I'm not quite sure how to set the nullingrels. I tried\nthe idea of wrapping it in a new PHV to carry the nullingrels, but that\nmay cause some unnecessary plan diffs. In the patch for such an\nexpression I just set the nullingrels of Vars or PHVs that are contained\nin it. This is not really 'correct' in theory, because it is the whole\nexpression that can be nullable by grouping sets, not its individual\nvars. But it works in practice, because what we need is that the\nexpression can be somehow distinguished from the same expression in ECs,\nand marking its vars is sufficient for this purpose. But what if the\nexpression is variable-free? This is the point that needs more work.\nFow now the patch just handles variable-free expressions of type Const,\nby wrapping it in a new PHV.\n\nThere are some places where we need to artificially remove the RT index\nof grouping sets from the nullingrels, such as when we generate\nPathTarget for initial input to grouping nodes, or when we generate\nPathKeys for the grouping clauses, because the expressions there are\nlogically below the grouping sets. We also need to do that in\nset_upper_references when we update the targetlist of an Agg node, so\nthat we can perform exact match for nullingrels, rather than superset\nmatch.\n\nSince the fix depends on the nullingrels stuff, it seems not easy for\nback-patching. I'm not sure what we should do in back branches.\n\nAny thoughts?\n\n[1]\nhttps://www.postgresql.org/message-id/CAMbWs48AtQTQGk37MSyDk_EAgDO3Y0iA_LzvuvGQ2uO_Wh2muw@mail.gmail.com\n[2]\nhttps://www.postgresql.org/message-id/[email protected]\n\nThanks\nRichard", "msg_date": "Mon, 25 Sep 2023 15:11:38 +0800", "msg_from": "Richard Guo <[email protected]>", "msg_from_op": true, "msg_subject": "Wrong results with grouping sets" }, { "msg_contents": "On Mon, Sep 25, 2023 at 3:11 PM Richard Guo <[email protected]> wrote:\n\n> I think the root cause is that when we generate distinct_pathkeys, we\n> failed to realize that Var 'b' might be nullable by the grouping sets,\n> so it's no longer always equal to Var 'a'. It's not correct to deem\n> that the PathKey for 'b' is redundant and thus remove it from the\n> pathkeys list.\n>\n> We have the same issue when generating sort_pathkeys. As a result, we\n> may have the final output in the wrong order. There were several\n> reports about this issue before, such as [1][2].\n>\n> To fix this issue, I'm thinking that we mark the grouping expressions\n> nullable by grouping sets with a dummy RTE for grouping sets, something\n> like attached.\n>\n\nHi Tom, I'm wondering if you've got a chance to look into this issue.\nWhat do you think about the fix?\n\nThanks\nRichard\n\nOn Mon, Sep 25, 2023 at 3:11 PM Richard Guo <[email protected]> wrote:I think the root cause is that when we generate distinct_pathkeys, wefailed to realize that Var 'b' might be nullable by the grouping sets,so it's no longer always equal to Var 'a'.  It's not correct to deemthat the PathKey for 'b' is redundant and thus remove it from thepathkeys list.We have the same issue when generating sort_pathkeys.  As a result, wemay have the final output in the wrong order.  There were severalreports about this issue before, such as [1][2].To fix this issue, I'm thinking that we mark the grouping expressionsnullable by grouping sets with a dummy RTE for grouping sets, somethinglike attached. Hi Tom, I'm wondering if you've got a chance to look into this issue.What do you think about the fix?ThanksRichard", "msg_date": "Sat, 7 Oct 2023 18:29:41 +0800", "msg_from": "Richard Guo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Wrong results with grouping sets" }, { "msg_contents": "Hi! Thank you for your work on the subject.\n\nOn 25.09.2023 10:11, Richard Guo wrote:\n> I think I've come across a wrong result issue with grouping sets, as\n> shown by the query below.\n>\n> -- result is correct with only grouping sets\n> select a, b\n> from (values (1, 1), (2, 2)) as t (a, b) where a = b\n> group by grouping sets((a, b), (a));\n>  a | b\n> ---+---\n>  1 | 1\n>  1 |\n>  2 | 2\n>  2 |\n> (4 rows)\n>\n> -- result is NOT correct with grouping sets and distinct on\n> select distinct on (a, b) a, b\n> from (values (1, 1), (2, 2)) as t (a, b) where a = b\n> group by grouping sets((a, b), (a));\n>  a | b\n> ---+---\n>  1 | 1\n>  2 | 2\n> (2 rows)\n>\n> The distinct on expressions include both 'a' and 'b', so rows (1, 1) and\n> (1, NULL) should not be considered equal.  (The same for rows (2, 2) and\n> (2, NULL).)\n\nI noticed that this query worked correctly in the main branch with the \ninequality operator:\n\npostgres=# select distinct on (a, b) a, b from (values (3, 1), (2, 2)) \nas t (a, b) where a > b group by grouping sets((a, b), (a)); a | b \n---+--- 3 | 1 3 | (2 rows)\n\nSo, I think you are right)\n\n\n> I think the root cause is that when we generate distinct_pathkeys, we\n> failed to realize that Var 'b' might be nullable by the grouping sets,\n> so it's no longer always equal to Var 'a'.  It's not correct to deem\n> that the PathKey for 'b' is redundant and thus remove it from the\n> pathkeys list.\n>\n> We have the same issue when generating sort_pathkeys.  As a result, we\n> may have the final output in the wrong order.  There were several\n> reports about this issue before, such as [1][2].\n>\n> To fix this issue, I'm thinking that we mark the grouping expressions\n> nullable by grouping sets with a dummy RTE for grouping sets, something\n> like attached.  In practice we do not need to create a real RTE for\n> that, what we need is just a RT index.  In the patch I use 0, because\n> it's not a valid outer join relid, so it would not conflict with the\n> outer-join-aware-Var infrastructure.\n>\n> If the grouping expression is a Var or PHV, we can just set its\n> nullingrels, very straightforward.   For an expression that is neither a\n> Var nor a PHV, I'm not quite sure how to set the nullingrels.  I tried\n> the idea of wrapping it in a new PHV to carry the nullingrels, but that\n> may cause some unnecessary plan diffs.  In the patch for such an\n> expression I just set the nullingrels of Vars or PHVs that are contained\n> in it.  This is not really 'correct' in theory, because it is the whole\n> expression that can be nullable by grouping sets, not its individual\n> vars.  But it works in practice, because what we need is that the\n> expression can be somehow distinguished from the same expression in ECs,\n> and marking its vars is sufficient for this purpose.  But what if the\n> expression is variable-free?  This is the point that needs more work.\n> Fow now the patch just handles variable-free expressions of type Const,\n> by wrapping it in a new PHV.\n>\n> There are some places where we need to artificially remove the RT index\n> of grouping sets from the nullingrels, such as when we generate\n> PathTarget for initial input to grouping nodes, or when we generate\n> PathKeys for the grouping clauses, because the expressions there are\n> logically below the grouping sets.  We also need to do that in\n> set_upper_references when we update the targetlist of an Agg node, so\n> that we can perform exact match for nullingrels, rather than superset\n> match.\n>\n> Since the fix depends on the nullingrels stuff, it seems not easy for\n> back-patching.  I'm not sure what we should do in back branches.\n>\n> Any thoughts?\n>\n> [1] \n> https://www.postgresql.org/message-id/CAMbWs48AtQTQGk37MSyDk_EAgDO3Y0iA_LzvuvGQ2uO_Wh2muw@mail.gmail.com\n> [2] \n> https://www.postgresql.org/message-id/[email protected]\n>\nI looked at your patch and noticed a few things:\n\n1. I think you should add a test with the cube operator, because I \nnoticed that the order of the query in the result has also changed:\n\nmaster:\npostgres=# select a,b from (values(1,1),(2,2)) as t (a,b) where a=b \ngroup by cube (a, (a,b)) order by b,a; a | b ---+--- 1 | 1 1 | 1 1 | 2 | \n2 2 | 2 2 | | (7 rows)\n\nwith patch:\n\npostgres=# select a, b from (values (1, 1), (2, 2)) as t (a, b) where a \n= b group by cube(a, (a, b)) order by b,a; a | b ---+--- 1 | 1 1 | 1 2 | \n2 2 | 2 1 | 2 | | (7 rows)\n\n-- \nRegards,\nAlena Rybakina\n\n\n\n\n\n\nHi! Thank you for your work on the subject.\nOn 25.09.2023 10:11, Richard Guo wrote:\n\n\n\nI think I've come across a wrong result issue with\n grouping sets, as\n shown by the query below.\n\n -- result is correct with only grouping sets\n select a, b\n from (values (1, 1), (2, 2)) as t (a, b) where a = b\n group by grouping sets((a, b), (a));\n  a | b\n ---+---\n  1 | 1\n  1 |\n  2 | 2\n  2 |\n (4 rows)\n\n -- result is NOT correct with grouping sets and distinct on\n select distinct on (a, b) a, b\n from (values (1, 1), (2, 2)) as t (a, b) where a = b\n group by grouping sets((a, b), (a));\n  a | b\n ---+---\n  1 | 1\n  2 | 2\n (2 rows)\n\n The distinct on expressions include both 'a' and 'b', so rows\n (1, 1) and\n (1, NULL) should not be considered equal.  (The same for rows\n (2, 2) and\n (2, NULL).)\n\n\nI noticed that this query worked correctly in the main branch\n with the inequality operator:\npostgres=# select distinct on (a, b) a, b\nfrom (values (3, 1), (2, 2)) as t (a, b) where a > b\ngroup by grouping sets((a, b), (a));\n a | b \n---+---\n 3 | 1\n 3 | \n(2 rows)\nSo, I think you are right)\n\n\n\nI think the root cause is that when we generate\n distinct_pathkeys, we\n failed to realize that Var 'b' might be nullable by the grouping\n sets,\n so it's no longer always equal to Var 'a'.  It's not correct to\n deem\n that the PathKey for 'b' is redundant and thus remove it from\n the\n pathkeys list.\n\n We have the same issue when generating sort_pathkeys.  As a\n result, we\n may have the final output in the wrong order.  There were\n several\n reports about this issue before, such as [1][2].\n\n To fix this issue, I'm thinking that we mark the grouping\n expressions\n nullable by grouping sets with a dummy RTE for grouping sets,\n something\n like attached.  In practice we do not need to create a real RTE\n for\n that, what we need is just a RT index.  In the patch I use 0,\n because\n it's not a valid outer join relid, so it would not conflict with\n the\n outer-join-aware-Var infrastructure.\n\n If the grouping expression is a Var or PHV, we can just set its\n nullingrels, very straightforward.   For an expression that is\n neither a\n Var nor a PHV, I'm not quite sure how to set the nullingrels.  I\n tried\n the idea of wrapping it in a new PHV to carry the nullingrels,\n but that\n may cause some unnecessary plan diffs.  In the patch for such an\n expression I just set the nullingrels of Vars or PHVs that are\n contained\n in it.  This is not really 'correct' in theory, because it is\n the whole\n expression that can be nullable by grouping sets, not its\n individual\n vars.  But it works in practice, because what we need is that\n the\n expression can be somehow distinguished from the same expression\n in ECs,\n and marking its vars is sufficient for this purpose.  But what\n if the\n expression is variable-free?  This is the point that needs more\n work.\n Fow now the patch just handles variable-free expressions of type\n Const,\n by wrapping it in a new PHV.\n\n There are some places where we need to artificially remove the\n RT index\n of grouping sets from the nullingrels, such as when we generate\n PathTarget for initial input to grouping nodes, or when we\n generate\n PathKeys for the grouping clauses, because the expressions there\n are\n logically below the grouping sets.  We also need to do that in\n set_upper_references when we update the targetlist of an Agg\n node, so\n that we can perform exact match for nullingrels, rather than\n superset\n match.\n\n Since the fix depends on the nullingrels stuff, it seems not\n easy for\n back-patching.  I'm not sure what we should do in back branches.\n\n Any thoughts?\n\n [1] https://www.postgresql.org/message-id/CAMbWs48AtQTQGk37MSyDk_EAgDO3Y0iA_LzvuvGQ2uO_Wh2muw@mail.gmail.com\n [2] https://www.postgresql.org/message-id/[email protected]\n\n\n\nI looked at your patch and noticed a few things:\n1. I think you should add a test with the cube operator, because\n I noticed that the order of the query in the result has also\n changed:\nmaster:\npostgres=# select a,b from (values(1,1),(2,2)) as t (a,b) where a=b group by cube (a, (a,b)) order by b,a;\n a | b \n---+---\n 1 | 1\n 1 | 1\n 1 | \n 2 | 2\n 2 | 2\n 2 | \n | \n(7 rows)\nwith patch:\npostgres=# select a, b\nfrom (values (1, 1), (2, 2)) as t (a, b) where a = b\ngroup by cube(a, (a, b)) order by b,a;\n a | b \n---+---\n 1 | 1\n 1 | 1\n 2 | 2\n 2 | 2\n 1 | \n 2 | \n | \n(7 rows)\n-- \nRegards,\nAlena Rybakina", "msg_date": "Thu, 16 Nov 2023 18:25:50 +0300", "msg_from": "Alena Rybakina <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Wrong results with grouping sets" }, { "msg_contents": "On Thu, Nov 16, 2023 at 11:25 PM Alena Rybakina <[email protected]>\nwrote:\n\n> I noticed that this query worked correctly in the main branch with the\n> inequality operator:\n>\n> postgres=# select distinct on (a, b) a, b from (values (3, 1), (2, 2)) as\n> t (a, b) where a > b group by grouping sets((a, b), (a)); a | b ---+--- 3 |\n> 1 3 | (2 rows)\n>\n> So, I think you are right)\n>\n\nThanks for taking an interest in this patch and verifying it.\n\n\n> I looked at your patch and noticed a few things:\n>\n> 1. I think you should add a test with the cube operator, because I noticed\n> that the order of the query in the result has also changed:\n>\n\nHmm, I'm not sure if that's necessary. The wrong result order you saw\nhere is caused by the same reason explained above: the planner fails to\nrealize that Var 'a' and 'b' are nullable by the grouping sets, making\nthem no longer always equal to each other. This issue should have been\ncovered in the tests added by v1 patch.\n\nThanks\nRichard\n\nOn Thu, Nov 16, 2023 at 11:25 PM Alena Rybakina <[email protected]> wrote:\nI noticed that this query worked correctly in the main branch\n with the inequality operator:\npostgres=# select distinct on (a, b) a, b\nfrom (values (3, 1), (2, 2)) as t (a, b) where a > b\ngroup by grouping sets((a, b), (a));\n a | b \n---+---\n 3 | 1\n 3 | \n(2 rows)\nSo, I think you are right)Thanks for taking an interest in this patch and verifying it. \nI looked at your patch and noticed a few things:\n1. I think you should add a test with the cube operator, because\n I noticed that the order of the query in the result has also\n changed: Hmm, I'm not sure if that's necessary.  The wrong result order you sawhere is caused by the same reason explained above: the planner fails torealize that Var 'a' and 'b' are nullable by the grouping sets, makingthem no longer always equal to each other.  This issue should have beencovered in the tests added by v1 patch.ThanksRichard", "msg_date": "Fri, 17 Nov 2023 11:25:11 +0800", "msg_from": "Richard Guo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Wrong results with grouping sets" }, { "msg_contents": "On Mon, Sep 25, 2023 at 3:11 PM Richard Guo <[email protected]> wrote:\n\n> If the grouping expression is a Var or PHV, we can just set its\n> nullingrels, very straightforward. For an expression that is neither a\n> Var nor a PHV, I'm not quite sure how to set the nullingrels. I tried\n> the idea of wrapping it in a new PHV to carry the nullingrels, but that\n> may cause some unnecessary plan diffs. In the patch for such an\n> expression I just set the nullingrels of Vars or PHVs that are contained\n> in it. This is not really 'correct' in theory, because it is the whole\n> expression that can be nullable by grouping sets, not its individual\n> vars. But it works in practice, because what we need is that the\n> expression can be somehow distinguished from the same expression in ECs,\n> and marking its vars is sufficient for this purpose. But what if the\n> expression is variable-free? This is the point that needs more work.\n> Fow now the patch just handles variable-free expressions of type Const,\n> by wrapping it in a new PHV.\n>\n\nFor a variable-free expression, if it contains volatile functions, SRFs,\naggregates, or window functions, it would not be treated as a member of\nEC that is redundant (see get_eclass_for_sort_expr()). That means it\nwould not be removed from the pathkeys list, so we do not need to set\nthe nullingrels for it. Otherwise we can just wrap the expression in a\nPlaceHolderVar. Attached is an updated patch to do that.\n\nBTW, this wrong results issue has existed ever since grouping sets was\nintroduced in v9.5, and there were field reports complaining about this\nissue. I think it would be great if we can get rid of it. I'm still\nnot sure what we should do in back branches. But let's fix it at least\non v16 and later.\n\nThanks\nRichard", "msg_date": "Thu, 7 Dec 2023 16:22:01 +0800", "msg_from": "Richard Guo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Wrong results with grouping sets" }, { "msg_contents": "Richard Guo <[email protected]> writes:\n> For a variable-free expression, if it contains volatile functions, SRFs,\n> aggregates, or window functions, it would not be treated as a member of\n> EC that is redundant (see get_eclass_for_sort_expr()). That means it\n> would not be removed from the pathkeys list, so we do not need to set\n> the nullingrels for it. Otherwise we can just wrap the expression in a\n> PlaceHolderVar. Attached is an updated patch to do that.\n\nI don't think this is going in quite the right direction. We have\nmany serious problems with grouping sets (latest one today at [1]),\nand I don't believe that hacking around EquivalenceClasses is going\nto fix them all.\n\nI think that what we really need to do is invent a new kind of RTE\nrepresenting the output of the grouping step, with columns that\nare the Vars or expressions being grouped on. Then we would make\nthe parser actually replace subexpressions in the tlist with Vars\nreferencing this new RTE (that is, change check_ungrouped_columns\ninto something that modifies the expression tree into something that\ncontains no Vars that aren't grouping-RTE Vars). In this way the\noutput of the parser directly expresses the semantic requirement that\ncertain subexpressions be gotten from the grouping output rather than\ncomputed some other way.\n\nThe trick is to do this without losing optimization capability.\nWe could have the planner replace these Vars with the underlying\nVars in cases where it's safe to do so (perhaps after adding a\nnullingrel bit that references the grouping RTE). If a grouping\ncolumn is an expression, we might be able to replace the reference\nVars with PHVs as you've done here ... but I think we need the\nparser infrastructure fixed first.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/flat/CAEzk6fcgXWabEG%2BRFDaG6tDmFX6g1h7LPGUdrX85Pb0XB3B76g%40mail.gmail.com\n\n\n", "msg_date": "Sat, 06 Jan 2024 15:59:36 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Wrong results with grouping sets" }, { "msg_contents": "On Thu, 7 Dec 2023 at 13:52, Richard Guo <[email protected]> wrote:\n>\n>\n> On Mon, Sep 25, 2023 at 3:11 PM Richard Guo <[email protected]> wrote:\n>>\n>> If the grouping expression is a Var or PHV, we can just set its\n>> nullingrels, very straightforward. For an expression that is neither a\n>> Var nor a PHV, I'm not quite sure how to set the nullingrels. I tried\n>> the idea of wrapping it in a new PHV to carry the nullingrels, but that\n>> may cause some unnecessary plan diffs. In the patch for such an\n>> expression I just set the nullingrels of Vars or PHVs that are contained\n>> in it. This is not really 'correct' in theory, because it is the whole\n>> expression that can be nullable by grouping sets, not its individual\n>> vars. But it works in practice, because what we need is that the\n>> expression can be somehow distinguished from the same expression in ECs,\n>> and marking its vars is sufficient for this purpose. But what if the\n>> expression is variable-free? This is the point that needs more work.\n>> Fow now the patch just handles variable-free expressions of type Const,\n>> by wrapping it in a new PHV.\n>\n>\n> For a variable-free expression, if it contains volatile functions, SRFs,\n> aggregates, or window functions, it would not be treated as a member of\n> EC that is redundant (see get_eclass_for_sort_expr()). That means it\n> would not be removed from the pathkeys list, so we do not need to set\n> the nullingrels for it. Otherwise we can just wrap the expression in a\n> PlaceHolderVar. Attached is an updated patch to do that.\n>\n> BTW, this wrong results issue has existed ever since grouping sets was\n> introduced in v9.5, and there were field reports complaining about this\n> issue. I think it would be great if we can get rid of it. I'm still\n> not sure what we should do in back branches. But let's fix it at least\n> on v16 and later.\n\nI have changed the status of the patch to \"Waiting on Author\" as Tom\nLane's comments have not yet been addressed, feel free to address them\nand update the commitfest entry accordingly.\n\nRegards,\nVignesh\n\n\n", "msg_date": "Thu, 11 Jan 2024 20:40:37 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Wrong results with grouping sets" }, { "msg_contents": "\n\n> On 11 Jan 2024, at 20:10, vignesh C <[email protected]> wrote:\n> \n> I have changed the status of the patch to \"Waiting on Author\" as Tom\n> Lane's comments have not yet been addressed, feel free to address them\n> and update the commitfest entry accordingly.\n\nThis CF entry seems to be a fix for actually unexpected behaviour. But seems like we need another fix.\nRichard, Alena, what do you think? Should we mark CF entry [0] \"RwF\" or leave it to wait for better fix?\n\nBest regards, Andrey Borodin.\n\n\n[0] https://commitfest.postgresql.org/47/4583/\n\n", "msg_date": "Sun, 31 Mar 2024 14:02:42 +0500", "msg_from": "\"Andrey M. Borodin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Wrong results with grouping sets" }, { "msg_contents": "On Sun, Jan 7, 2024 at 4:59 AM Tom Lane <[email protected]> wrote:\n\n> I don't think this is going in quite the right direction. We have\n> many serious problems with grouping sets (latest one today at [1]),\n> and I don't believe that hacking around EquivalenceClasses is going\n> to fix them all.\n>\n> I think that what we really need to do is invent a new kind of RTE\n> representing the output of the grouping step, with columns that\n> are the Vars or expressions being grouped on. Then we would make\n> the parser actually replace subexpressions in the tlist with Vars\n> referencing this new RTE (that is, change check_ungrouped_columns\n> into something that modifies the expression tree into something that\n> contains no Vars that aren't grouping-RTE Vars). In this way the\n> output of the parser directly expresses the semantic requirement that\n> certain subexpressions be gotten from the grouping output rather than\n> computed some other way.\n>\n> The trick is to do this without losing optimization capability.\n> We could have the planner replace these Vars with the underlying\n> Vars in cases where it's safe to do so (perhaps after adding a\n> nullingrel bit that references the grouping RTE). If a grouping\n> column is an expression, we might be able to replace the reference\n> Vars with PHVs as you've done here ... but I think we need the\n> parser infrastructure fixed first.\n\n\nSorry it takes me some time to get back to this thread.\n\nI think you're right. To fix the cases where there are subqueries in\nthe grouping sets, as in Geoff's example, it seems that we'll have to\nfix the parser infrastructure by inventing a new RTE for the grouping\nstep and replacing the subquery expressions with Vars referencing this\nnew RTE, so that there is only one instance of the subquery in the\nparser output.\n\nI have experimented with this approach, and here is the outcome. The\npatch fixes Geoff's query, but it's still somewhat messy as I'm not\nexperienced enough in the parser code. And the patch has not yet\nimplemented the nullingrel bit manipulation trick. Before proceeding\nfurther with this patch, I'd like to know if it is going in the right\ndirection.\n\nThanks\nRichard", "msg_date": "Thu, 16 May 2024 17:43:40 +0800", "msg_from": "Richard Guo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Wrong results with grouping sets" }, { "msg_contents": "On Thu, May 16, 2024 at 5:43 PM Richard Guo <[email protected]> wrote:\n\n> I have experimented with this approach, and here is the outcome. The\n> patch fixes Geoff's query, but it's still somewhat messy as I'm not\n> experienced enough in the parser code. And the patch has not yet\n> implemented the nullingrel bit manipulation trick. Before proceeding\n> further with this patch, I'd like to know if it is going in the right\n> direction.\n>\n\nI've spent some more time on this patch, and now it passes all the\nregression tests. But I had to hack explain.c and ruleutils.c to make\nthe varprefix stuff work as it did before, which is not great.\n\nOne thing I'm not sure about is whether we need to also replace\nsubexpressions in the arguments of GroupingFunc nodes with Vars\nreferencing the new GROUP RTE. These arguments would not be executed at\nruntime, so it seems that we can just replace them. I tried to do that\nand found several plan changes in the regression tests. Such as\n\nexplain (verbose, costs off)\nselect grouping(ss.x)\nfrom int8_tbl i1\ncross join lateral (select (select i1.q1) as x) ss\ngroup by ss.x;\n QUERY PLAN\n------------------------------------------------\n GroupAggregate\n Output: GROUPING((SubPlan 1)), ((SubPlan 2))\n Group Key: ((SubPlan 2))\n -> Sort\n Output: ((SubPlan 2)), i1.q1\n Sort Key: ((SubPlan 2))\n -> Seq Scan on public.int8_tbl i1\n Output: (SubPlan 2), i1.q1\n SubPlan 2\n -> Result\n Output: i1.q1\n(11 rows)\n\nIf we substitute the subquery expression in the argument of GroupingFunc\nwith the GROUP RTE's Var, the final plan would contain only one SubPlan\ninstead of two.\n\nAlso the patch has not yet manipulated the nullingrel stuff. Maybe that\ncan be done with the code in my v2 patch. But I think we'd better get\nthe parser fixed first before stepping into that.\n\nAlso please ignore the comment and code format things in the patch as I\nhaven't worked on them.\n\nThanks\nRichard", "msg_date": "Fri, 17 May 2024 17:41:04 +0800", "msg_from": "Richard Guo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Wrong results with grouping sets" }, { "msg_contents": "On Fri, May 17, 2024 at 5:41 PM Richard Guo <[email protected]> wrote:\n\n> I've spent some more time on this patch, and now it passes all the\n> regression tests. But I had to hack explain.c and ruleutils.c to make\n> the varprefix stuff work as it did before, which is not great.\n>\n\nI've realized that I made a mistake in the v4 patch: If there are join\nalias vars in the targetlist and HAVING clause, we should first flatten\nthem before replacing the grouped variables involved there with\ngrouping-RTE Vars. To fix this issue, I decide to merge the newly added\nfunction substitute_group_exprs into check_ungrouped_columns by changing\ncheck_ungrouped_columns to also perform the replacement, which is Tom's\ninitial suggestion I think.\n\nNow it seems that 'check_ungrouped_columns' is no longer an appropriate\nname for the function. So I rename it to 'substitute_grouped_columns'.\nBut I'm open to other names if there are any suggestions.\n\nI've also worked on the comments.\n\nThanks\nRichard", "msg_date": "Thu, 23 May 2024 15:30:43 +0800", "msg_from": "Richard Guo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Wrong results with grouping sets" }, { "msg_contents": "On the basis of the parser infrastructure fixup, 0002 patch adds the\nnullingrel bit that references the grouping RTE to the grouping\nexpressions.\n\nHowever, it seems to me that we have to manually remove this nullingrel\nbit from expressions in various cases where these expressions are\nlogically below the grouping step, such as when we generate groupClause\npathkeys for grouping sets, or when we generate PathTarget for initial\ninput to grouping nodes.\n\nFurthermore, in set_upper_references, the targetlist and quals of an Agg\nnode should have nullingrels that include the effects of the grouping\nstep, ie they will have nullingrels equal to the input Vars/PHVs'\nnullingrels plus the nullingrel bit that references the grouping RTE.\nIn order to perform exact nullingrels matches, I think we also need to\nmanually remove this nullingrel bit.\n\nThanks\nRichard", "msg_date": "Fri, 24 May 2024 21:08:22 +0800", "msg_from": "Richard Guo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Wrong results with grouping sets" }, { "msg_contents": "On Fri, May 24, 2024 at 9:08 PM Richard Guo <[email protected]> wrote:\n> On the basis of the parser infrastructure fixup, 0002 patch adds the\n> nullingrel bit that references the grouping RTE to the grouping\n> expressions.\n\nI found a bug in the v6 patch. The following query would trigger the\nAssert in make_restrictinfo that the given subexpression should not be\nan AND clause.\n\nselect max(a) from t group by a > b and a = b having a > b and a = b;\n\nThis is because the expression 'a > b and a = b' in the HAVING clause is\nreplaced by a Var that references the GROUP RTE. When we preprocess the\ncolumns of the GROUP RTE, we do not know whether the grouped expression\nis a havingQual or not, so we do not perform make_ands_implicit for it.\nAs a result, after we replace the group Var in the HAVING clause with\nthe underlying grouping expression, we will have a havingQual that is an\nAND clause.\n\nAs we know, in the planner we need to first preprocess all the columns\nof the GROUP RTE. We also need to replace any Vars in the targetlist\nand HAVING clause that reference the GROUP RTE with the underlying\ngrouping expressions. To fix the mentioned issue, I choose the perform\nthis replacement before we preprocess the targetlist and havingQual, so\nthat the make_ands_implicit would be performed when we preprocess the\nhavingQual.\n\nOne problem with this is, when we preprocess the targetlist and\nhavingQual, we would see already-planned tree, which is generated by the\npreprocessing work for the grouping expressions and then substituted for\nthe GROUP Vars in the targetlist and havingQual. This would break the\nAssert 'Assert(!IsA(node, SubPlan))' in flatten_join_alias_vars_mutator\nand process_sublinks_mutator. I think we can just return the\nalready-planned tree unchanged when we see it in the preprocessing\nprocess.\n\nHence here is the v7 patchset. I've also added detailed commit messages\nfor the two patches.\n\nThanks\nRichard", "msg_date": "Wed, 5 Jun 2024 17:42:42 +0800", "msg_from": "Richard Guo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Wrong results with grouping sets" }, { "msg_contents": "On Wed, Jun 5, 2024 at 5:42 PM Richard Guo <[email protected]> wrote:\n> Hence here is the v7 patchset. I've also added detailed commit messages\n> for the two patches.\n\nThis patchset does not apply any more. Here is a new rebase.\n\nWhile at it, I added more checks for 'root->group_rtindex', and also\nadded a new test case to verify that we generate window_pathkeys\ncorrectly with grouping sets.\n\nThanks\nRichard", "msg_date": "Mon, 10 Jun 2024 17:05:01 +0800", "msg_from": "Richard Guo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Wrong results with grouping sets" }, { "msg_contents": "On Mon, Jun 10, 2024 at 5:05 PM Richard Guo <[email protected]> wrote:\n> This patchset does not apply any more. Here is a new rebase.\n\nHere is an updated version of this patchset. I've run pgindent for it,\nand also tweaked the commit messages a bit.\n\nIn principle, 0001 can be backpatched to all supported versions to fix\nthe cases where there are subqueries in the grouping expressions; 0002\ncan be backpatched to 16 where we have the nullingrels stuff. But both\npatches seem to be quite invasive. I'm not sure if we want to backpatch\nthem to stable branches. Any thoughts about backpatching?\n\nThanks\nRichard", "msg_date": "Mon, 1 Jul 2024 16:29:16 +0800", "msg_from": "Richard Guo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Wrong results with grouping sets" }, { "msg_contents": "On Mon, Jul 1, 2024 at 1:59 PM Richard Guo <[email protected]> wrote:\n>\n> On Mon, Jun 10, 2024 at 5:05 PM Richard Guo <[email protected]> wrote:\n> > This patchset does not apply any more. Here is a new rebase.\n>\n> Here is an updated version of this patchset. I've run pgindent for it,\n> and also tweaked the commit messages a bit.\n>\n> In principle, 0001 can be backpatched to all supported versions to fix\n> the cases where there are subqueries in the grouping expressions; 0002\n> can be backpatched to 16 where we have the nullingrels stuff. But both\n> patches seem to be quite invasive. I'm not sure if we want to backpatch\n> them to stable branches. Any thoughts about backpatching?\n\n\nI don't have any specific thoughts on backpatching, but I have started\nreviewing the patches.\n\nThe first patch in the set adds a new RTEKind for GROUP. From prologue\nof RangeTblEntry structure I can not understand what an RTE represents\nespecially when the RTE represents something other than a FROM clause\nitem.\n```\n * This is because we only need the RTE to deal with SQL features\n * like outer joins and join-output-column aliasing.) Other special\n * RTE types also exist, as indicated by RTEKind.\n```\nI can not use this description to decide whether a GROUP BY construct\nshould have an RTE for itself or not. It looks like the patch adds a\nnew RTE (kind) here so that its rtindex can be used to differentiate\nbetween a and b from VALUES clause and those from the GroupingSet\nresult in the query mentioned in the first email in this thread. But I\ndon't see any discussion of other alternatives. For example, how about\ninfrastructure in EC to tell which stages this EC is valid for/upto? I\nsee Tom suggesting use of RTE instead of changing EC but I don't see\nwhy that's better. We do mark a RestrictInfo with relids above which\nit can be computed. Similarly we assess validity of EC by stages or\nrelations being computed. That might open some opportunities for using\nbroken ECs? We are almost reimplementing parts of the GROUPING set\nfeature, so may be it's worth spending time thinking about it.\n\nAssuming new RTEkind is the right approach, I am wondering whether\nthere are other things that should have been represented by RTE for\nthe same reason. For example, a HAVING clause changes the\ncharacteristics of results by introducing new constraints on the\naggregated results. Should that have an RTE by itself? Will the\nRTEKind introduced by this patch cover HAVING clause as well? Will\nthat open opportunities for more optimizations E.g.\n```\nexplain select sum(a), sum(b), stddev(a + b) from (values (1, 1), (2,\n2)) as t(a, b) group by a, b having sum(a) = sum(b) order by 1, 2;\n QUERY PLAN\n-------------------------------------------------------------------------\n Sort (cost=0.10..0.10 rows=1 width=56)\n Sort Key: (sum(\"*VALUES*\".column1)), (sum(\"*VALUES*\".column2))\n -> HashAggregate (cost=0.06..0.09 rows=1 width=56)\n Group Key: \"*VALUES*\".column1, \"*VALUES*\".column2\n Filter: (sum(\"*VALUES*\".column1) = sum(\"*VALUES*\".column2))\n -> Values Scan on \"*VALUES*\" (cost=0.00..0.03 rows=2 width=8)\n(6 rows)\n```\nSort Key can be just (sum(\"*VALUES*\".column1)) instead of both\n(sum(\"*VALUES*\".column1)), (sum(\"*VALUES*\".column2)) because of HAVING\nclause?\n\nSome code level random comments\n1.\n```\nif (rte->rtekind == RTE_GROUP)\n{\nes->rtable_size--;\nbreak;\n```\nbecause of the variable name, it would be interpreted as the size of\nes->rtable and will be expected to be the same as\nlist_length(es->rtable) which it is not. The comment at the member\ndeclaration won't help much for a quick reader. All that variable is\ndoing is to tell us whether to use alias as prefix or not;\n`useprefix = es->rtable_size > 1;` OR useprefix = (es->rtable_size > 1\n|| es->verbose);.\nInstead of rtable_size, we could let the new member track the fact\nwhether there are multiple aliases in the query (requiring multiple\nprefixes) instead of size of rtable. However, the fact that the GROUP\nRTE requires special handling indicates that the new RTEKind doesn't\nquite fit the rest of the set. No other RTE, even if outside FROM\nclause, required this special handling.\n\n2. expandRecordVariable: The comment below the change should be\nupdated to explain why an output of GROUPing can not have RECORD or at\nleast mention GROUPING there.\n\n3. I see code like below in get_eclass_for_sort_expr() and\nmark_rels_nulled_by_join()\n```\n/* ignore GROUP RTE */\nif (i == root->group_rtindex)\ncontinue;\n```\nI assume that rel for this RTE index would be NULL, so \"if\" block just\nbelow this code would get executed. I think we should just change\nAssert() in that code block rather than adding a new \"if\" block to\navoid confusion.\n\n4. Looking at parse_clause.c most (if not all) addRangeTableEntry*\nfunction calls are from transform* functions. On those lines, I\nexpected addRangeTableEntryForGroup() to be called from\ntransformGroupClause(). Why are we calling\naddRangeTableEntryForGroup() from parseCheckAggregates()?\n\n5. In the parseCheckAggregates, we are replacing expressions from\ntargetlist and havingQual with Vars pointing to GROUP RTE. But we are\nnot doing that to sortClause, the remaining SQL construct. That's\nbecause sortClause is just a list of entries pointing back to\ntargetlist. So there's nothing to change there. Am I right?\n\n6. I think ParseState::p_grouping_nsitem should be collocated with\nother ParseNamespaceItem members or lists in ParseState. I think it\nserves a similar purpose as them. Similarly PlannerInfo::group_rtindex\nshould be placed next to outer_join_rels?\n\n7. Do we need RangeTblEntry::groupexprs as a separate member? They are\nthe same as GROUP BY or GROUPING SET expressions. So the list can be\nconstructed from groupClause whenever required. Do we need to maintain\nthe list separately? I am comparing with other RTEs, say Subquery RTE.\nWe don't copy all the targetlist expressions from subquery to\nsubquery's RTE. I noticed that groupexprs are being treated on lines\nsimilar to joinaliasvars. But they are somewhat different. The latter\nis a unified representation of columns of joining relations different\nfrom those columns and hence needs a new representation. That doesn't\nseem to be the case with RangeTblEntry::groupexpr.\n\n8. The change in process_sublinks_mutator() appears to be related to\nthe fact that GROUPING() may have subqueries which were not being\nhandled earlier. That change seems to be independent of the bug being\nfixed here. Am I right? If yes, having those changes in a separate\npatch will help.\n\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Thu, 4 Jul 2024 15:32:26 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Wrong results with grouping sets" }, { "msg_contents": "On 2024-07-01 16:29:16 +0800, Richard Guo wrote:\n> On Mon, Jun 10, 2024 at 5:05 PM Richard Guo <[email protected]> wrote:\n> > This patchset does not apply any more. Here is a new rebase.\n> \n> Here is an updated version of this patchset. I've run pgindent for it,\n> and also tweaked the commit messages a bit.\n> \n> In principle, 0001 can be backpatched to all supported versions to fix\n> the cases where there are subqueries in the grouping expressions; 0002\n> can be backpatched to 16 where we have the nullingrels stuff. But both\n> patches seem to be quite invasive. I'm not sure if we want to backpatch\n> them to stable branches. Any thoughts about backpatching?\n\nAs-is they can't be backpatched, unless I am missing something? Afaict they\nintroduce rather thorough ABI breaks? And API breaks, actually?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 4 Jul 2024 14:51:40 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Wrong results with grouping sets" }, { "msg_contents": "On Thu, Jul 4, 2024 at 6:02 PM Ashutosh Bapat\n<[email protected]> wrote:\n> On Mon, Jul 1, 2024 at 1:59 PM Richard Guo <[email protected]> wrote:\n> > Here is an updated version of this patchset. I've run pgindent for it,\n> > and also tweaked the commit messages a bit.\n> >\n> > In principle, 0001 can be backpatched to all supported versions to fix\n> > the cases where there are subqueries in the grouping expressions; 0002\n> > can be backpatched to 16 where we have the nullingrels stuff. But both\n> > patches seem to be quite invasive. I'm not sure if we want to backpatch\n> > them to stable branches. Any thoughts about backpatching?\n>\n> I don't have any specific thoughts on backpatching, but I have started\n> reviewing the patches.\n\nThanks for reviewing this patchset!\n\n> The first patch in the set adds a new RTEKind for GROUP. From prologue\n> of RangeTblEntry structure I can not understand what an RTE represents\n> especially when the RTE represents something other than a FROM clause\n> item.\n> ```\n> * This is because we only need the RTE to deal with SQL features\n> * like outer joins and join-output-column aliasing.) Other special\n> * RTE types also exist, as indicated by RTEKind.\n> ```\n> I can not use this description to decide whether a GROUP BY construct\n> should have an RTE for itself or not. It looks like the patch adds a\n> new RTE (kind) here so that its rtindex can be used to differentiate\n> between a and b from VALUES clause and those from the GroupingSet\n> result in the query mentioned in the first email in this thread. But I\n> don't see any discussion of other alternatives. For example, how about\n> infrastructure in EC to tell which stages this EC is valid for/upto? I\n> see Tom suggesting use of RTE instead of changing EC but I don't see\n> why that's better. We do mark a RestrictInfo with relids above which\n> it can be computed. Similarly we assess validity of EC by stages or\n> relations being computed. That might open some opportunities for using\n> broken ECs? We are almost reimplementing parts of the GROUPING set\n> feature, so may be it's worth spending time thinking about it.\n\nThe reason why we need a new RTE for the grouping step is to address\ncases where there are subqueries in the grouping expressions. In such\ncases, each of these subqueries in the targetlist and HAVING clause is\nexpanded into distinct SubPlan nodes. Only one of these SubPlan nodes\nwould be converted to reference to the grouping key column output by\nthe Agg node; others would have to get evaluated afresh, and might not\ngo to NULL when they are supposed to. I do not think this can be\naddressed by changing ECs.\n\n> Assuming new RTEkind is the right approach, I am wondering whether\n> there are other things that should have been represented by RTE for\n> the same reason. For example, a HAVING clause changes the\n> characteristics of results by introducing new constraints on the\n> aggregated results. Should that have an RTE by itself? Will the\n> RTEKind introduced by this patch cover HAVING clause as well?\n\nAFAIU, HAVING clauses are just quals applied to the grouped rows after\ngroups and aggregates are computed. I cannot see why and how to add a\nnew RTE for HAVING.\n\n> ```\n> explain select sum(a), sum(b), stddev(a + b) from (values (1, 1), (2,\n> 2)) as t(a, b) group by a, b having sum(a) = sum(b) order by 1, 2;\n> QUERY PLAN\n> -------------------------------------------------------------------------\n> Sort (cost=0.10..0.10 rows=1 width=56)\n> Sort Key: (sum(\"*VALUES*\".column1)), (sum(\"*VALUES*\".column2))\n> -> HashAggregate (cost=0.06..0.09 rows=1 width=56)\n> Group Key: \"*VALUES*\".column1, \"*VALUES*\".column2\n> Filter: (sum(\"*VALUES*\".column1) = sum(\"*VALUES*\".column2))\n> -> Values Scan on \"*VALUES*\" (cost=0.00..0.03 rows=2 width=8)\n> (6 rows)\n> ```\n> Sort Key can be just (sum(\"*VALUES*\".column1)) instead of both\n> (sum(\"*VALUES*\".column1)), (sum(\"*VALUES*\".column2)) because of HAVING\n> clause?\n\nThis looks like an optimization that can be achieved by hacking around\nECs. I'm not sure. But I think adding new RTEs does not help here.\n\n> Some code level random comments\n> 1.\n> ```\n> if (rte->rtekind == RTE_GROUP)\n> {\n> es->rtable_size--;\n> break;\n> ```\n> because of the variable name, it would be interpreted as the size of\n> es->rtable and will be expected to be the same as\n> list_length(es->rtable) which it is not. The comment at the member\n> declaration won't help much for a quick reader. All that variable is\n> doing is to tell us whether to use alias as prefix or not;\n> `useprefix = es->rtable_size > 1;` OR useprefix = (es->rtable_size > 1\n> || es->verbose);.\n> Instead of rtable_size, we could let the new member track the fact\n> whether there are multiple aliases in the query (requiring multiple\n> prefixes) instead of size of rtable. However, the fact that the GROUP\n> RTE requires special handling indicates that the new RTEKind doesn't\n> quite fit the rest of the set. No other RTE, even if outside FROM\n> clause, required this special handling.\n\nAFAIU we want to print prefixes on Vars when there are more than one\nRTE entries to indicate which column is from which RTE entry. If\nthere is only one RTE (and not verbose), we try to avoid the prefixes.\nThis patch adds a new dummy RTE, resulting in plans that previously\nhad one RTE now having two and starting to print prefixes. This has\ncaused a lot of plan diffs in regression tests. That's why this patch\nhas to hack explain.c and ruleutils.c to make the varprefix stuff work\nas it did before.\n\nBut I do not think this is alone for the new RTE. Consider\n\nexplain (costs off)\nselect sum(a) from (select * from t) having sum(a) = 1;\n QUERY PLAN\n--------------------------\n Aggregate\n Filter: (sum(t.a) = 1)\n -> Seq Scan on t\n(3 rows)\n\nBTW, not related to the discussion here, I noticed an inconsistency\nregarding the varprefix in the qual and targetlist. Look at:\n\nexplain (verbose, costs off)\nselect sum(a) from t having sum(a) = 1;\n QUERY PLAN\n----------------------------\n Aggregate\n Output: sum(a)\n Filter: (sum(t.a) = 1)\n -> Seq Scan on public.t\n Output: a\n(5 rows)\n\nIn the 'Filter' we add the prefix while in the 'Output' we do not.\nDoes anyone think this is something worth investigating?\n\n> 2. expandRecordVariable: The comment below the change should be\n> updated to explain why an output of GROUPing can not have RECORD or at\n> least mention GROUPING there.\n\nThanks. Will add some comments here.\n\n> 3. I see code like below in get_eclass_for_sort_expr() and\n> mark_rels_nulled_by_join()\n> ```\n> /* ignore GROUP RTE */\n> if (i == root->group_rtindex)\n> continue;\n> ```\n> I assume that rel for this RTE index would be NULL, so \"if\" block just\n> below this code would get executed. I think we should just change\n> Assert() in that code block rather than adding a new \"if\" block to\n> avoid confusion.\n\nActually I initially coded it as you suggested, and then moved the\ncheck for the RTE_GROUP RTE out of the 'if' block later, in order to\nmaintain separate logic for GROUP RTE and outer joins. I'm not quite\nsure which is better.\n\n> 4. Looking at parse_clause.c most (if not all) addRangeTableEntry*\n> function calls are from transform* functions. On those lines, I\n> expected addRangeTableEntryForGroup() to be called from\n> transformGroupClause(). Why are we calling\n> addRangeTableEntryForGroup() from parseCheckAggregates()?\n\nI think this is the most handy place to add the RTE_GROUP RTE, as the\njoin_flattened grouping expressions are available here.\n\n> 5. In the parseCheckAggregates, we are replacing expressions from\n> targetlist and havingQual with Vars pointing to GROUP RTE. But we are\n> not doing that to sortClause, the remaining SQL construct. That's\n> because sortClause is just a list of entries pointing back to\n> targetlist. So there's nothing to change there. Am I right?\n\nWell, it's not about that. Actually groupClause is also 'a list of\nentries pointing back to targetlist'. The primary reason is that the\ngrouping step may result in some grouping expressions being set to\nNULL, whereas the sorting step does not have this behavior.\n\n\n> 6. I think ParseState::p_grouping_nsitem should be collocated with\n> other ParseNamespaceItem members or lists in ParseState. I think it\n> serves a similar purpose as them. Similarly PlannerInfo::group_rtindex\n> should be placed next to outer_join_rels?\n\nI agree that ParseState.p_grouping_nsitem should be moved to a more\nproper place, and we should mention it in the comment for ParseState\ntoo. But I'm not sure about the root.group_rtindex. I will give it\nanother thought later.\n\n> 7. Do we need RangeTblEntry::groupexprs as a separate member? They are\n> the same as GROUP BY or GROUPING SET expressions. So the list can be\n> constructed from groupClause whenever required. Do we need to maintain\n> the list separately? I am comparing with other RTEs, say Subquery RTE.\n> We don't copy all the targetlist expressions from subquery to\n> subquery's RTE. I noticed that groupexprs are being treated on lines\n> similar to joinaliasvars. But they are somewhat different. The latter\n> is a unified representation of columns of joining relations different\n> from those columns and hence needs a new representation. That doesn't\n> seem to be the case with RangeTblEntry::groupexpr.\n\nWe need to preprocess the grouping expressions first and then\nsubstitute them back into the targetlist and havingQual. I don't\nthink this can be achieved without keeping groupexprs as a separate\nmember.\n\n> 8. The change in process_sublinks_mutator() appears to be related to\n> the fact that GROUPING() may have subqueries which were not being\n> handled earlier. That change seems to be independent of the bug being\n> fixed here. Am I right? If yes, having those changes in a separate\n> patch will help.\n\nNo, I don't think so. Without this patch we should never see a\nSubPlan/AlternativeSubPlan expression in process_sublinks_mutator,\nbecause this is where SubPlans are created.\n\nThanks\nRichard\n\n\n", "msg_date": "Sat, 6 Jul 2024 09:26:49 +0800", "msg_from": "Richard Guo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Wrong results with grouping sets" }, { "msg_contents": "On Fri, Jul 5, 2024 at 5:51 AM Andres Freund <[email protected]> wrote:\n> On 2024-07-01 16:29:16 +0800, Richard Guo wrote:\n> > Here is an updated version of this patchset. I've run pgindent for it,\n> > and also tweaked the commit messages a bit.\n> >\n> > In principle, 0001 can be backpatched to all supported versions to fix\n> > the cases where there are subqueries in the grouping expressions; 0002\n> > can be backpatched to 16 where we have the nullingrels stuff. But both\n> > patches seem to be quite invasive. I'm not sure if we want to backpatch\n> > them to stable branches. Any thoughts about backpatching?\n>\n> As-is they can't be backpatched, unless I am missing something? Afaict they\n> introduce rather thorough ABI breaks? And API breaks, actually?\n\nIndeed, you're correct. I did not think about this. This patchset\nmodifies certain struct definitions in src/include/ and also changes\nthe signature of several functions, resulting in definite ABI and API\nbreaks.\n\nBTW, from catversion.h I read:\n\n * Another common reason for a catversion update is a change in parsetree\n * external representation, since serialized parsetrees appear in stored\n * rules and new-style SQL functions. Almost any change in primnodes.h or\n * parsenodes.h will warrant a catversion update.\n\nSince this patchset changes the querytree produced by the parser, does\nthis indicate that a catversion bump is needed?\n\nThanks\nRichard\n\n\n", "msg_date": "Sat, 6 Jul 2024 10:06:48 +0800", "msg_from": "Richard Guo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Wrong results with grouping sets" }, { "msg_contents": "Richard Guo <[email protected]> writes:\n> BTW, from catversion.h I read:\n\n> * Another common reason for a catversion update is a change in parsetree\n> * external representation, since serialized parsetrees appear in stored\n> * rules and new-style SQL functions. Almost any change in primnodes.h or\n> * parsenodes.h will warrant a catversion update.\n\n> Since this patchset changes the querytree produced by the parser, does\n> this indicate that a catversion bump is needed?\n\nYes, it would.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 05 Jul 2024 22:37:38 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Wrong results with grouping sets" }, { "msg_contents": "On Sat, Jul 6, 2024 at 10:37 AM Tom Lane <[email protected]> wrote:\n> Richard Guo <[email protected]> writes:\n> > BTW, from catversion.h I read:\n>\n> > * Another common reason for a catversion update is a change in parsetree\n> > * external representation, since serialized parsetrees appear in stored\n> > * rules and new-style SQL functions. Almost any change in primnodes.h or\n> > * parsenodes.h will warrant a catversion update.\n>\n> > Since this patchset changes the querytree produced by the parser, does\n> > this indicate that a catversion bump is needed?\n>\n> Yes, it would.\n\nThank you for confirming.\n\nThanks\nRichard\n\n\n", "msg_date": "Wed, 10 Jul 2024 09:11:59 +0800", "msg_from": "Richard Guo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Wrong results with grouping sets" }, { "msg_contents": "On Sat, Jul 6, 2024 at 9:26 AM Richard Guo <[email protected]> wrote:\n> On Thu, Jul 4, 2024 at 6:02 PM Ashutosh Bapat\n> <[email protected]> wrote:\n> > I don't have any specific thoughts on backpatching, but I have started\n> > reviewing the patches.\n\n> Thanks for reviewing this patchset!\n\nHere is an updated version of this patchset. I've added some comments\naccording to the review feedback, and also tweaked the commit messages\na bit more.\n\nAdditionally, I've made a change to only add the new RTE_GROUP RTE\nwhen there are acceptable GROUP BY expressions. This allows us to\nskip all the trouble of doing this for queries without GROUP BY\nclauses.\n\nThanks\nRichard", "msg_date": "Wed, 10 Jul 2024 09:22:54 +0800", "msg_from": "Richard Guo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Wrong results with grouping sets" }, { "msg_contents": "FWIW, in addition to fixing wrong result issues for queries with\ngrouping sets, the changes in 0001 also improve performance for\nqueries that have subqueries in the grouping expressions, because\ndifferent instances of the same subquery would need to be executed\nonly once. As a simple example, consider\n\ncreate table t (a int, b int);\ninsert into t select i, i from generate_series(1,10000)i;\nanalyze t;\n\n-- on patched\nexplain (analyze, costs off)\nselect (select t1.b from t t2 where a = t1.a) as s1,\n (select t1.b from t t2 where a = t1.a) as s2,\n (select t1.b from t t2 where a = t1.a) as s3\nfrom t t1\ngroup by a, s1;\n QUERY PLAN\n------------------------------------------------------------------------------------\n Group (actual time=20475.028..20480.543 rows=10000 loops=1)\n Group Key: t1.a, ((SubPlan 1))\n -> Sort (actual time=20475.017..20475.821 rows=10000 loops=1)\n Sort Key: t1.a, ((SubPlan 1))\n Sort Method: quicksort Memory: 697kB\n -> Seq Scan on t t1 (actual time=7.435..20468.599 rows=10000 loops=1)\n SubPlan 1\n -> Seq Scan on t t2 (actual time=1.022..2.045 rows=1\nloops=10000)\n Filter: (a = t1.a)\n Rows Removed by Filter: 9999\n Planning Time: 1.561 ms\n Execution Time: 20481.933 ms\n(12 rows)\n\n-- on master\nexplain (analyze, costs off)\nselect (select t1.b from t t2 where a = t1.a) as s1,\n (select t1.b from t t2 where a = t1.a) as s2,\n (select t1.b from t t2 where a = t1.a) as s3\nfrom t t1\ngroup by a, s1;\n QUERY PLAN\n------------------------------------------------------------------------------------\n Group (actual time=20779.318..62233.526 rows=10000 loops=1)\n Group Key: t1.a, ((SubPlan 1))\n -> Sort (actual time=20775.125..20777.936 rows=10000 loops=1)\n Sort Key: t1.a, ((SubPlan 1))\n Sort Method: quicksort Memory: 697kB\n -> Seq Scan on t t1 (actual time=7.492..20770.060 rows=10000 loops=1)\n SubPlan 1\n -> Seq Scan on t t2 (actual time=1.037..2.075 rows=1\nloops=10000)\n Filter: (a = t1.a)\n Rows Removed by Filter: 9999\n SubPlan 2\n -> Seq Scan on t t2_1 (actual time=1.037..2.071 rows=1 loops=10000)\n Filter: (a = t1.a)\n Rows Removed by Filter: 9999\n SubPlan 3\n -> Seq Scan on t t2_2 (actual time=1.037..2.071 rows=1 loops=10000)\n Filter: (a = t1.a)\n Rows Removed by Filter: 9999\n Planning Time: 1.286 ms\n Execution Time: 62235.753 ms\n(20 rows)\n\nWe can see that with the 0001 patch, this query runs ~3 times faster,\nwhich is no surprise because there are 3 instances of the same\nsubquery in the targetlist.\n\nThanks\nRichard\n\n\n", "msg_date": "Mon, 15 Jul 2024 10:45:16 +0800", "msg_from": "Richard Guo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Wrong results with grouping sets" }, { "msg_contents": "Hi,\n\nI'm reviewing patches in Commitfest 2024-07 from top to bottom:\nhttps://commitfest.postgresql.org/48/\n\nThis is the 3rd patch:\nhttps://commitfest.postgresql.org/48/4583/\n\nFYI: https://commitfest.postgresql.org/48/4681/ is my patch.\n\nIn <CAMbWs49RNmFhgDzoL=suWJrCSk-wizXa6uVtp0Jmz0z+741nSA@mail.gmail.com>\n \"Re: Wrong results with grouping sets\" on Wed, 10 Jul 2024 09:22:54 +0800,\n Richard Guo <[email protected]> wrote:\n\n> Here is an updated version of this patchset. I've added some comments\n> according to the review feedback, and also tweaked the commit messages\n> a bit more.\n\nI'm not familiar with related codes but here are my\ncomments:\n\n0001:\n\n---\ndiff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h\nindex 85a62b538e5..8055f4b2b9e 100644\n--- a/src/include/nodes/parsenodes.h\n+++ b/src/include/nodes/parsenodes.h\n@@ -1242,6 +1245,12 @@ typedef struct RangeTblEntry\n /* estimated or actual from caller */\n Cardinality enrtuples pg_node_attr(query_jumble_ignore);\n \n+ /*\n+ * Fields valid for a GROUP RTE (else NULL/zero):\n+ */\n+ /* list of expressions grouped on */\n+ List *groupexprs pg_node_attr(query_jumble_ignore);\n+\n /*\n * Fields valid in all RTEs:\n */\n----\n\n+ * Fields valid for a GROUP RTE (else NULL/zero):\n\nThere is only one field and it's LIST. So how about using\nthe following?\n\n* A field valid for a GROUP RTE (else NIL):\n\n\n----\ndiff --git a/src/backend/optimizer/util/var.c b/src/backend/optimizer/util/var.c\nindex 844fc30978b..0982f873a42 100644\n--- a/src/backend/optimizer/util/var.c\n+++ b/src/backend/optimizer/util/var.c\n@@ -902,6 +915,141 @@ flatten_join_alias_vars_mutator(Node *node,\n...\n+Node *\n+flatten_group_exprs(PlannerInfo *root, Query *query, Node *node)\n+{\n+ flatten_join_alias_vars_context context;\n...\n---\n\nIf we want to reuse flatten_join_alias_vars_context for\nflatten_group_exprs(), how about renaming it?\nflatten_join_alias_vars() only uses\nflatten_join_alias_vars_context for now. So the name of\nflatten_join_alias_vars_context is meaningful. But if we\nwant to flatten_join_alias_vars_context for\nflatten_group_exprs() too. The name of\nflatten_join_alias_vars_context is strange.\n\n\n----\ndiff --git a/src/backend/parser/parse_relation.c b/src/backend/parser/parse_relation.c\nindex 2f64eaf0e37..69476384252 100644\n--- a/src/backend/parser/parse_relation.c\n+++ b/src/backend/parser/parse_relation.c\n@@ -2557,6 +2557,79 @@ addRangeTableEntryForENR(ParseState *pstate,\n...\n+ char *colname = te->resname ? pstrdup(te->resname) : \"unamed_col\";\n...\n----\n\nCan the \"te->resname == NULL\" case be happen? If so, how\nabout adding a new test for the case?\n\n(BTW, is \"unamed_col\" intentional name? Is it a typo of\n\"unnamed_col\"?)\n\n\nThanks,\n-- \nkou\n\n\n", "msg_date": "Mon, 15 Jul 2024 17:38:22 +0900 (JST)", "msg_from": "Sutou Kouhei <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Wrong results with grouping sets" }, { "msg_contents": "On Mon, Jul 15, 2024 at 4:38 PM Sutou Kouhei <[email protected]> wrote:\n> I'm not familiar with related codes but here are my\n> comments:\n\nThanks for reviewing this patchset!\n\n> + * Fields valid for a GROUP RTE (else NULL/zero):\n>\n> There is only one field and it's LIST. So how about using\n> the following?\n>\n> * A field valid for a GROUP RTE (else NIL):\n\nGood point. I ended up with\n\n * Fields valid for a GROUP RTE (else NIL):\n\n... since this is the pattern used by other types of RTEs that have\nonly one field.\n\n> If we want to reuse flatten_join_alias_vars_context for\n> flatten_group_exprs(), how about renaming it?\n> flatten_join_alias_vars() only uses\n> flatten_join_alias_vars_context for now. So the name of\n> flatten_join_alias_vars_context is meaningful. But if we\n> want to flatten_join_alias_vars_context for\n> flatten_group_exprs() too. The name of\n> flatten_join_alias_vars_context is strange.\n\nI think the current name should be fine. It's not uncommon that we\nreuse the same structure intended for other functions within one\nfunction.\n\n> Can the \"te->resname == NULL\" case be happen? If so, how\n> about adding a new test for the case?\n\nIt's quite common for te->resname to be NULL, such as when TargetEntry\nis resjunk. I don't think a new case for this is needed. It should\nalready be covered in lots of instances in the current regression\ntests.\n\n> (BTW, is \"unamed_col\" intentional name? Is it a typo of\n> \"unnamed_col\"?)\n\nYeah, it's a typo. I changed it to be \"?column?\", which is the\ndefault name if FigureColname can't guess anything.\n\nHere is an updated version of this patchset. I'm seeking the\npossibility to push this patchset sometime this month. Please let me\nknow if anyone thinks this is unreasonable.\n\nThanks\nRichard", "msg_date": "Tue, 16 Jul 2024 15:57:52 +0800", "msg_from": "Richard Guo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Wrong results with grouping sets" }, { "msg_contents": "On Mon, Jul 15, 2024 at 8:15 AM Richard Guo <[email protected]> wrote:\n\n\n>\n> We can see that with the 0001 patch, this query runs ~3 times faster,\n> which is no surprise because there are 3 instances of the same\n> subquery in the targetlist.\n\nI am not sure if that's the right thing to do.\n\nI am using a slightly elaborate version of the tests in your patch\n#select v, grouping(v) gv, grouping((select t1.v from gstest5 t2 where\nid = t1.id)) gs,grouping((select t1.v from gstest5 t2 where id =\nt1.id)) gs2,\n (select t1.v from gstest5 t2 where id = t1.id) as s,\ncase when grouping(v) = 0\n then v\n else null end as cv,\ncase when grouping((select t1.v from gstest5 t2 where id = t1.id)) = 0\n then (select t1.v from gstest5 t2 where id = t1.id)\n else null end as cs\nfrom gstest5 t1\ngroup by grouping sets(v, s)\n;\n v | gv | gs | gs2 | s | cv | cs\n---+----+----+-----+---+----+----\n 3 | 0 | 1 | 1 | | 3 |\n 5 | 0 | 1 | 1 | | 5 |\n 4 | 0 | 1 | 1 | | 4 |\n 2 | 0 | 1 | 1 | | 2 |\n 1 | 0 | 1 | 1 | | 1 |\n | 1 | 0 | 0 | 2 | | 2\n | 1 | 0 | 0 | 5 | | 5\n | 1 | 0 | 0 | 4 | | 4\n | 1 | 0 | 0 | 3 | | 3\n | 1 | 0 | 0 | 1 | | 1\n(10 rows)\n\n#explain verbose select v, grouping(v) gv, grouping((select t1.v from\ngstest5 t2 where id = t1.id)) gs,grouping((select t1.v from gstest5 t2\nw\nhere id = t1.id)) gs2,\n (select t1.v from gstest5 t2 where id = t1.id) as s,\ncase when grouping(v) = 0\n then v\n else null end as cv,\ncase when grouping((select t1.v from gstest5 t2 where id = t1.id)) = 0\n then (select t1.v from gstest5 t2 where id = t1.id)\n else null end as cs\nfrom gstest5 t1\ngroup by grouping sets(v, s)\n;\n\n QUERY PLAN\n\n-------------------------------------------------------------------------------------------------------------------------------------------------------------\n----------------------------------------------------------------------------------\n HashAggregate (cost=18508.10..58790.10 rows=2460 width=28)\n Output: t1.v, GROUPING(t1.v), GROUPING((SubPlan 2)),\nGROUPING((SubPlan 3)), ((SubPlan 1)), CASE WHEN (GROUPING(t1.v) = 0)\nTHEN t1.v ELSE NULL::integer END\n, CASE WHEN (GROUPING((SubPlan 4)) = 0) THEN ((SubPlan 1)) ELSE\nNULL::integer END\n Hash Key: t1.v\n Hash Key: (SubPlan 1)\n -> Seq Scan on pg_temp.gstest5 t1 (cost=0.00..18502.45 rows=2260 width=12)\n Output: t1.v, (SubPlan 1), t1.id\n SubPlan 1\n -> Index Only Scan using gstest5_pkey on pg_temp.gstest5\nt2 (cost=0.15..8.17 rows=1 width=4)\n Output: t1.v\n Index Cond: (t2.id = t1.id)\n\nThe result looks as expected but the plan isn't consistent with what\nhappens without grouping set\n#select v,\n (select t1.v from gstest5 t2 where id = t1.id) as s,\n (select t1.v from gstest5 t2 where id = t1.id) as s2,\n case when t1.v < 3\n then (select t1.v from gstest5 t2 where id = t1.id)\n else null end as cs\n from gstest5 t1\n order by case when t1.v < 3\n then (select t1.v from gstest5 t2 where id = t1.id)\n else null end\n;\n v | s | s2 | cs\n---+---+----+----\n 1 | 1 | 1 | 1\n 2 | 2 | 2 | 2\n 3 | 3 | 3 |\n 4 | 4 | 4 |\n 5 | 5 | 5 |\n(5 rows)\n\npostgres@92841=#explain verbose select v,\n (select t1.v from gstest5 t2 where id = t1.id) as s,\n (select t1.v from gstest5 t2 where id = t1.id) as s2,\n case when t1.v < 3\n then (select t1.v from gstest5 t2 where id = t1.id)\n else null end as cs\n from gstest5 t1\n order by case when t1.v < 3\n then (select t1.v from gstest5 t2 where id = t1.id)\n else null end\n;\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------\n Sort (cost=55573.71..55579.36 rows=2260 width=16)\n Output: t1.v, ((SubPlan 1)), ((SubPlan 2)), (CASE WHEN (t1.v < 3)\nTHEN (SubPlan 3) ELSE NULL::integer END)\n Sort Key: (CASE WHEN (t1.v < 3) THEN (SubPlan 3) ELSE NULL::integer END)\n -> Seq Scan on pg_temp.gstest5 t1 (cost=0.00..55447.80 rows=2260 width=16)\n Output: t1.v, (SubPlan 1), (SubPlan 2), CASE WHEN (t1.v < 3)\nTHEN (SubPlan 3) ELSE NULL::integer END\n SubPlan 1\n -> Index Only Scan using gstest5_pkey on pg_temp.gstest5\nt2 (cost=0.15..8.17 rows=1 width=4)\n Output: t1.v\n Index Cond: (t2.id = t1.id)\n SubPlan 2\n -> Index Only Scan using gstest5_pkey on pg_temp.gstest5\nt2_1 (cost=0.15..8.17 rows=1 width=4)\n Output: t1.v\n Index Cond: (t2_1.id = t1.id)\n SubPlan 3\n -> Index Only Scan using gstest5_pkey on pg_temp.gstest5\nt2_2 (cost=0.15..8.17 rows=1 width=4)\n Output: t1.v\n Index Cond: (t2_2.id = t1.id)\n(17 rows)\n\nNotice that every instance of that subquery has its own subplan in\nthis case. Why should the grouping set be different and have the same\nsubplan for two instances of the subquery? And if so, why not all of\nthe instances have the same subplan?\n\nSince a subquery is a volatile expression, each of its instances\nshould be evaluated separately. If the expressions in ORDER BY,\nGROUPING and GROUP BY are the same as an expression in the targetlist,\nsubqueries in those expressions won't need a subplan of their own. If\nthey are not part of targetlist, they will be added to the targetlist\nas resjunk columns and thus form separate instances of subquery thus\nadding more subplans.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Tue, 16 Jul 2024 15:10:57 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Wrong results with grouping sets" }, { "msg_contents": "Thanks for the work!\n\n> Since a subquery is a volatile expression, each of its instances\nshould be evaluated separately.\n\nThis seems like a valid point, though \"query 2\" below which groups over a\nRANDOM() column and outputs an additional RANDOM() column a potential,\nalbeit contrived, counter-example? [NOTE: this was done on Postgres 16.3]\nI've included a few different combinations of GROUP BYs.\n\n-- setup\ncreate table t as (select 0 x);\nanalyze t;\n\n-- query 1: base --> multiple evaluations of RANDOM(), col0 != col1\npostgres=# select x, random() col0, random() col1 from t group by x;\n x | col0 | col1\n---+---------------------+--------------------\n 0 | 0.07205921113992653 | 0.9847359546402477\n(1 row)\n\n-- query 2: group by one volatile column --> single evaluation of RANDOM(),\ncol0 == col1\npostgres=# select x, random() col0, random() col1 from t group by x, col0;\n x | col0 | col1\n---+--------------------+--------------------\n 0 | 0.7765600922298943 | 0.7765600922298943\n(1 row)\n\n-- query 3: group by both volatile columns --> multiple evaluations of\nRANDOM() again, col0 != col1\npostgres=# select x, random() col0, random() col1 from t group by x, col0,\ncol1;\n x | col0 | col1\n---+---------------------+--------------------\n 0 | 0.07334303548896548 | 0.6528967617521189\n(1 row)\n\n--\n\nRelated to your point about the unexpected asymmetry in single vs multiple\nevaluations of subquery plans, I'm curious if the pair of subqueries in\nboth examples below should be considered equivalent? The queries output the\nsame results and the subqueries differ only in output name. With this\npatch, they're considered equivalent in the first query but not in the\nsecond. [NOTE: this was done on a branch with the patch applied]\n\n-- query 1: alias outside subquery\ntest=# explain (verbose, costs off) select x, (select 1) col0, (select 1)\ncol1 from t group by x, col0;\n QUERY PLAN\n-----------------------------------------------------\n Group\n Output: t.x, (InitPlan 1).col1, (InitPlan 1).col1\n Group Key: t.x\n InitPlan 1\n -> Result\n Output: 1\n -> Sort\n Output: t.x\n Sort Key: t.x\n -> Seq Scan on public.t\n Output: t.x\n(11 rows)\n\n...compared to...\n\n-- query 2: alias inside subquery\ntest=# explain (verbose, costs off) select x, (select 1 col0), (select 1\ncol1) from t group by x, col0;\n QUERY PLAN\n-----------------------------------------------------\n Group\n Output: t.x, (InitPlan 1).col1, (InitPlan 2).col1\n Group Key: t.x\n InitPlan 1\n -> Result\n Output: 1\n InitPlan 2\n -> Result\n Output: 1\n -> Sort\n Output: t.x\n Sort Key: t.x\n -> Seq Scan on public.t\n Output: t.x\n(14 rows)\n\n\n-Paul-\n\nThanks for the work!> Since a subquery is a volatile expression, each of its instancesshould be evaluated separately.This seems like a valid point, though \"query 2\" below which groups over a RANDOM() column and outputs an additional RANDOM() column a potential, albeit contrived, counter-example? [NOTE: this was done on Postgres 16.3] I've included a few different combinations of GROUP BYs.-- setupcreate table t as (select 0 x);analyze t;-- query 1: base --> multiple evaluations of RANDOM(), col0 != col1postgres=# select x, random() col0, random() col1 from t group by x; x |        col0         |        col1        ---+---------------------+-------------------- 0 | 0.07205921113992653 | 0.9847359546402477(1 row)-- query 2: group by one volatile column --> single evaluation of RANDOM(), col0 == col1postgres=# select x, random() col0, random() col1 from t group by x, col0; x |        col0        |        col1        ---+--------------------+-------------------- 0 | 0.7765600922298943 | 0.7765600922298943(1 row)-- query 3: group by both volatile columns --> multiple evaluations of RANDOM() again, col0 != col1postgres=# select x, random() col0, random() col1 from t group by x, col0, col1; x |        col0         |        col1        ---+---------------------+-------------------- 0 | 0.07334303548896548 | 0.6528967617521189(1 row)--Related to your point about the unexpected asymmetry in single vs multiple evaluations of subquery plans, I'm curious if the pair of subqueries in both examples below should be considered equivalent? The queries output the same results and the subqueries differ only in output name. With this patch, they're considered equivalent in the first query but not in the second. [NOTE: this was done on a branch with the patch applied]-- query 1: alias outside subquerytest=# explain (verbose, costs off) select x, (select 1) col0, (select 1) col1 from t group by x, col0;                     QUERY PLAN                      ----------------------------------------------------- Group   Output: t.x, (InitPlan 1).col1, (InitPlan 1).col1   Group Key: t.x   InitPlan 1     ->  Result           Output: 1   ->  Sort         Output: t.x         Sort Key: t.x         ->  Seq Scan on public.t               Output: t.x(11 rows)...compared to...-- query 2: alias inside subquerytest=# explain (verbose, costs off) select x, (select 1 col0), (select 1 col1) from t group by x, col0;                     QUERY PLAN                      ----------------------------------------------------- Group   Output: t.x, (InitPlan 1).col1, (InitPlan 2).col1   Group Key: t.x   InitPlan 1     ->  Result           Output: 1   InitPlan 2     ->  Result           Output: 1   ->  Sort         Output: t.x         Sort Key: t.x         ->  Seq Scan on public.t               Output: t.x(14 rows)-Paul-", "msg_date": "Tue, 16 Jul 2024 17:50:15 -0700", "msg_from": "Paul George <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Wrong results with grouping sets" }, { "msg_contents": "On Wed, Jul 17, 2024 at 8:50 AM Paul George <[email protected]> wrote:\n> > Since a subquery is a volatile expression, each of its instances\n> should be evaluated separately.\n\nI don't think this conclusion is correct. Look at:\n\nselect random(), random() from t group by random();\n random | random\n--------------------+--------------------\n 0.7972330769936766 | 0.7972330769936766\n(1 row)\n\n> This seems like a valid point, though \"query 2\" below which groups over a RANDOM() column and outputs an additional RANDOM() column a potential, albeit contrived, counter-example? [NOTE: this was done on Postgres 16.3] I've included a few different combinations of GROUP BYs.\n\nInteresting. I looked into the scenarios with multiple instances of\nthe same volatile grouping expressions and here is what I observed.\n\ncreate table t (a int, b int);\ninsert into t select 1,1;\n\n-- on master, with plain volatile functions\nselect random() as c1,\n random() as c2,\n random() as c3\nfrom t t1 group by c1;\n c1 | c2 | c3\n-------------------+-------------------+-------------------\n 0.567478050404431 | 0.567478050404431 | 0.567478050404431\n(1 row)\n\nSo the random() function is evaluated only once, even though it\nappears three times.\n\n-- on master, with subqueries that are 'volatile'\nselect (select random() from t t2 where a = t1.a) as c1,\n (select random() from t t2 where a = t1.a) as c2,\n (select random() from t t2 where a = t1.a) as c3\nfrom t t1 group by c1;\n c1 | c2 | c3\n--------------------+--------------------+--------------------\n 0.8420177313766823 | 0.2969648209746336 | 0.3499675329093421\n(1 row)\n\nSo on master the subquery is evaluated three times. Why isn't this\nconsistent with the behavior of the first query?\n\n-- on patched, with subqueries that are 'volatile'\nselect (select random() from t t2 where a = t1.a) as c1,\n (select random() from t t2 where a = t1.a) as c2,\n (select random() from t t2 where a = t1.a) as c3\nfrom t t1 group by c1;\n c1 | c2 | c3\n--------------------+--------------------+--------------------\n 0.5203586066423254 | 0.5203586066423254 | 0.5203586066423254\n(1 row)\n\nSo on patched the subquery is evaluated only once, which is consistent\nwith the behavior of the first query.\n\nDoes this suggest that the patched version is more 'correct' for this\ncase?\n\n\nNow let's look at the scenario with two grouping keys.\n\n-- on master, with plain volatile functions\nselect random() as c1,\n random() as c2,\n random() as c3\nfrom t t1 group by c1, c2;\n c1 | c2 | c3\n--------------------+--------------------+--------------------\n 0.9388558105069595 | 0.2900389441597979 | 0.9388558105069595\n(1 row)\n\nSo the first two random() functions are evaluated independently, and\nthe third random() function references the result of the first one.\n\n-- on master, with subqueries that are 'volatile'\nselect (select random() from t t2 where a = t1.a) as c1,\n (select random() from t t2 where a = t1.a) as c2,\n (select random() from t t2 where a = t1.a) as c3\nfrom t t1 group by c1, c2;\n c1 | c2 | c3\n---------------------+--------------------+--------------------\n 0.46275163300894073 | 0.5083760995112951 | 0.6752682696191123\n(1 row)\n\nSo on master the subquery is evaluated three times.\n\n-- on patched, with subqueries that are 'volatile'\nselect (select random() from t t2 where a = t1.a) as c1,\n (select random() from t t2 where a = t1.a) as c2,\n (select random() from t t2 where a = t1.a) as c3\nfrom t t1 group by c1, c2;\n c1 | c2 | c3\n--------------------+--------------------+--------------------\n 0.9887848690744176 | 0.9887848690744176 | 0.9887848690744176\n(1 row)\n\nSo on patched the subquery is evaluated only once.\n\nIt seems that in this scenario, neither the master nor the patched\nversion handles volatile subqueries in grouping expressions the same\nway as it handles plain volatile functions.\n\nI am confused. Does the SQL standard explicitly define or standardize\nthe behavior of grouping by volatile expressions? Does anyone know\nabout that?\n\nThanks\nRichard\n\n\n", "msg_date": "Thu, 18 Jul 2024 08:31:18 +0800", "msg_from": "Richard Guo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Wrong results with grouping sets" }, { "msg_contents": "On Thu, Jul 18, 2024 at 8:31 AM Richard Guo <[email protected]> wrote:\n> I am confused. Does the SQL standard explicitly define or standardize\n> the behavior of grouping by volatile expressions? Does anyone know\n> about that?\n\nJust for the record, multiple instances of non-volatile grouping\nexpressions should always be evaluated only once. As an example,\nconsider:\n\ncreate function f_stable_add(a integer, b integer) returns integer as\n$$ begin return a+b; end; $$ language plpgsql stable;\n\nexplain (verbose, costs off)\nselect f_stable_add(a, b) as c1,\n f_stable_add(a, b) as c2,\n f_stable_add(a, b) as c3\nfrom t t1 group by c1, c2;\n QUERY PLAN\n----------------------------------------------------------------------------\n HashAggregate\n Output: (f_stable_add(a, b)), (f_stable_add(a, b)), (f_stable_add(a, b))\n Group Key: f_stable_add(t1.a, t1.b)\n -> Seq Scan on public.t t1\n Output: f_stable_add(a, b), a, b\n(5 rows)\n\nIn this regard, the patched version is correct on handling subqueries\nin grouping expressions, whereas the master version is incorrect.\n\nThanks\nRichard\n\n\n", "msg_date": "Thu, 18 Jul 2024 09:17:02 +0800", "msg_from": "Richard Guo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Wrong results with grouping sets" }, { "msg_contents": "On Thu, Jul 4, 2024 at 5:52 PM Andres Freund <[email protected]> wrote:\n> As-is they can't be backpatched, unless I am missing something? Afaict they\n> introduce rather thorough ABI breaks? And API breaks, actually?\n\nAside from that, this looks quite invasive for back-patching, and the\nnumber of bug reports so far suggest that we should be worried about\nmore breakage appearing later.\n\nHowever, that leaves us in a situation where we have no back-patchable\nfix for a bug which causes queries to return the wrong answer, which\nis not a great situation.\n\nIs there a smaller fix that we could commit to fix the bug?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 19 Jul 2024 14:45:01 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Wrong results with grouping sets" }, { "msg_contents": "Robert Haas <[email protected]> writes:\n> On Thu, Jul 4, 2024 at 5:52 PM Andres Freund <[email protected]> wrote:\n>> As-is they can't be backpatched, unless I am missing something? Afaict they\n>> introduce rather thorough ABI breaks? And API breaks, actually?\n\n> Aside from that, this looks quite invasive for back-patching, and the\n> number of bug reports so far suggest that we should be worried about\n> more breakage appearing later.\n\nYeah, 0 chance of back-patching this. If we had more confidence in it\nmaybe we could see our way to putting it in v17, but I fear that would\nbe tempting the software gods. It needs to get through a full beta\ntest cycle.\n\n> However, that leaves us in a situation where we have no back-patchable\n> fix for a bug which causes queries to return the wrong answer, which\n> is not a great situation.\n\nIt's not; but this has been wrong since grouping sets were put in,\nyet the number of field reports so far can probably still be counted\nwithout running out of fingers. I'm content if we can fix it going\nforward, and would not expend a lot of effort on a probably-futile\nsearch for a fix that doesn't involve a query data structure change.\n\n(I'm aware that I ought to review this patch, and will try to make\ntime for that before the end of the CF.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 19 Jul 2024 15:22:57 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Wrong results with grouping sets" }, { "msg_contents": "I've been looking at cases where there are grouping-set keys that\nreduce to Consts, and I noticed a plan with v11 patch that is not very\ngreat.\n\nexplain (verbose, costs off)\nselect 1 as one group by rollup(one) order by one nulls first;\n QUERY PLAN\n-------------------------------\n Sort\n Output: (1)\n Sort Key: (1) NULLS FIRST\n -> GroupAggregate\n Output: (1)\n Group Key: (1)\n Group Key: ()\n -> Sort\n Output: (1)\n Sort Key: (1)\n -> Result\n Output: 1\n(12 rows)\n\nThe Sort operation below the Agg node is unnecessary because the\ngrouping key is actually a Const. This plan results from wrapping the\nConst in a PlaceHolderVar to carry the nullingrel bit of the RTE_GROUP\nRT index, as it can be nulled by the grouping step. Although we\nremove this nullingrel bit when generating the groupClause pathkeys\nsince we know the groupClause is logically below the grouping step, we\ndo not unwrap the PlaceHolderVar.\n\nThis suggests that we might need a mechanism to unwrap PHVs when safe.\n0003 includes a flag in PlaceHolderVar to indicate whether it is safe\nto remove the PHV and use its contained expression instead when its\nphnullingrels becomes empty. Currently it is set true only in cases\nwhere the PHV is used to carry the nullingrel bit of the RTE_GROUP RT\nindex. With 0003 the plan above becomes more reasonable:\n\nexplain (verbose, costs off)\nselect 1 as one group by rollup(one) order by one nulls first;\n QUERY PLAN\n-----------------------------\n Sort\n Output: (1)\n Sort Key: (1) NULLS FIRST\n -> GroupAggregate\n Output: (1)\n Group Key: 1\n Group Key: ()\n -> Result\n Output: 1\n(9 rows)\n\nThis could potentially open up opportunities for optimization by\nunwrapping PHVs in other cases. As an example, consider\n\nexplain (costs off)\nselect * from t t1 left join\n lateral (select t1.a as x, * from t t2) s on true\nwhere t1.a = s.a;\n QUERY PLAN\n----------------------------\n Nested Loop\n -> Seq Scan on t t1\n -> Seq Scan on t t2\n Filter: (t1.a = a)\n(4 rows)\n\nThe target entry s.x is wrapped in a PHV that contains lateral\nreference to t1, which forces us to resort to nestloop join. However,\nsince the left join has been reduced to an inner join, we should be\nable to remove this PHV and use merge or hash joins instead. I did\nnot implement this optimization in 0003. It seems that it should be\naddressed in a separate patch.\n\nThanks\nRichard", "msg_date": "Fri, 2 Aug 2024 17:45:35 +0800", "msg_from": "Richard Guo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Wrong results with grouping sets" }, { "msg_contents": "On Wed, Jun 5, 2024 at 5:42 PM Richard Guo <[email protected]> wrote:\n> I found a bug in the v6 patch. The following query would trigger the\n> Assert in make_restrictinfo that the given subexpression should not be\n> an AND clause.\n>\n> select max(a) from t group by a > b and a = b having a > b and a = b;\n>\n> This is because the expression 'a > b and a = b' in the HAVING clause is\n> replaced by a Var that references the GROUP RTE. When we preprocess the\n> columns of the GROUP RTE, we do not know whether the grouped expression\n> is a havingQual or not, so we do not perform make_ands_implicit for it.\n> As a result, after we replace the group Var in the HAVING clause with\n> the underlying grouping expression, we will have a havingQual that is an\n> AND clause.\n>\n> As we know, in the planner we need to first preprocess all the columns\n> of the GROUP RTE. We also need to replace any Vars in the targetlist\n> and HAVING clause that reference the GROUP RTE with the underlying\n> grouping expressions. To fix the mentioned issue, I choose the perform\n> this replacement before we preprocess the targetlist and havingQual, so\n> that the make_ands_implicit would be performed when we preprocess the\n> havingQual.\n\nI've realized that there is something wrong with this conclusion. If\nwe perform the replacement of GROUP Vars with the underlying grouping\nexpressions before we've done with expression preprocessing on\ntargetlist and havingQual, we may end up with failing to match the\nexpressions that are part of grouping items to lower target items.\nConsider:\n\ncreate table t (a int, b int);\ninsert into t values (1, 2);\n\nselect a < b and b < 3 from t group by rollup(a < b and b < 3)\nhaving a < b and b < 3;\n\nThe expression preprocessing process would convert the HAVING clause\nto implicit-AND format and thus it would fail to be matched to lower\ntarget items.\n\nAnother example is:\n\ncreate table t1 (a boolean);\ninsert into t1 values (true);\n\nselect not a from t1 group by rollup(not a) having not not a;\n\nThis HAVING clause 'not not a' would be reduced to 'a' and thus fail\nto be matched to lower tlist.\n\nI fixed this issue in v13 by performing the replacement of GROUP Vars\nafter we've done with expression preprocessing on targetlist and\nhavingQual. An ensuing effect of this approach is that a HAVING\nclause may contain expressions that are not fully preprocessed if they\nare part of grouping items. This is not an issue as long as the\nclause remains in HAVING. But if the clause is moved or copied into\nWHERE, we need to re-preprocess these expressions. Please see the\nattached for the changes.\n\nThanks\nRichard", "msg_date": "Tue, 6 Aug 2024 16:17:12 +0800", "msg_from": "Richard Guo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Wrong results with grouping sets" }, { "msg_contents": "On Tue, Aug 6, 2024 at 4:17 PM Richard Guo <[email protected]> wrote:\n> I fixed this issue in v13 by performing the replacement of GROUP Vars\n> after we've done with expression preprocessing on targetlist and\n> havingQual. An ensuing effect of this approach is that a HAVING\n> clause may contain expressions that are not fully preprocessed if they\n> are part of grouping items. This is not an issue as long as the\n> clause remains in HAVING. But if the clause is moved or copied into\n> WHERE, we need to re-preprocess these expressions. Please see the\n> attached for the changes.\n\nI'm seeking the possibility to push 0001 and 0002 sometime this month.\nPlease let me know if anyone thinks this is unreasonable.\n\nFor 0003, it might be extended to remove all no-op PHVs except those\nthat are serving to isolate subexpressions, not only the PHVs used to\ncarry the nullingrel bit that represents the grouping step. There is\na separate thread for it [1].\n\n[1] https://postgr.es/m/CAMbWs48biJp-vof82PNP_LzzFkURh0W+RKt4phoML-MyYavgdg@mail.gmail.com\n\nThanks\nRichard\n\n\n", "msg_date": "Wed, 4 Sep 2024 09:16:39 +0800", "msg_from": "Richard Guo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Wrong results with grouping sets" }, { "msg_contents": "On Wed, Sep 4, 2024 at 9:16 AM Richard Guo <[email protected]> wrote:\n> I'm seeking the possibility to push 0001 and 0002 sometime this month.\n> Please let me know if anyone thinks this is unreasonable.\n>\n> For 0003, it might be extended to remove all no-op PHVs except those\n> that are serving to isolate subexpressions, not only the PHVs used to\n> carry the nullingrel bit that represents the grouping step. There is\n> a separate thread for it [1].\n\nI went ahead and pushed 0001 and 0002, and am now waiting for the\nupcoming bug reports.\n\nThanks for all the discussions and reviews.\n\nThanks\nRichard\n\n\n", "msg_date": "Tue, 10 Sep 2024 12:04:17 +0800", "msg_from": "Richard Guo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Wrong results with grouping sets" } ]
[ { "msg_contents": "In v16 and later, the following fails:\n\nCREATE TABLE boom (t character varying(5) DEFAULT 'a long string');\n\nCOPY boom FROM STDIN;\nERROR: value too long for type character varying(5)\n\nIn PostgreSQL v15 and earlier, the COPY statement succeeds.\n\nThe error is thrown in BeginCopyFrom in line 1578 (HEAD)\n\n defexpr = expression_planner(defexpr);\n\nBisecting shows that the regression was introduced by commit 9f8377f7a2,\nwhich introduced DEFAULT values for COPY FROM.\n\nThe table definition is clearly silly, so I am not sure if that\nregression is worth fixing. On the other hand, it is not cool if\nsomething that worked without an error in v15 starts to fail later on.\n\nYours,\nLaurenz Albe\n\n\n", "msg_date": "Mon, 25 Sep 2023 09:54:22 +0200", "msg_from": "Laurenz Albe <[email protected]>", "msg_from_op": true, "msg_subject": "Regression in COPY FROM caused by 9f8377f7a2" }, { "msg_contents": "On Mon, 2023-09-25 at 09:54 +0200, Laurenz Albe wrote:\n> In v16 and later, the following fails:\n> \n> CREATE TABLE boom (t character varying(5) DEFAULT 'a long string');\n> \n> COPY boom FROM STDIN;\n> ERROR:  value too long for type character varying(5)\n> \n> In PostgreSQL v15 and earlier, the COPY statement succeeds.\n> \n> The error is thrown in BeginCopyFrom in line 1578 (HEAD)\n> \n>   defexpr = expression_planner(defexpr);\n> \n> Bisecting shows that the regression was introduced by commit 9f8377f7a2,\n> which introduced DEFAULT values for COPY FROM.\n\nI suggest the attached fix, which evaluates default values only if\nthe DEFAULT option was specified or if the column does not appear in\nthe column list of COPY.\n\nYours,\nLaurenz Albe", "msg_date": "Mon, 25 Sep 2023 10:59:21 +0200", "msg_from": "Laurenz Albe <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Regression in COPY FROM caused by 9f8377f7a2" }, { "msg_contents": "On 2023-09-25 Mo 04:59, Laurenz Albe wrote:\n> On Mon, 2023-09-25 at 09:54 +0200, Laurenz Albe wrote:\n>> In v16 and later, the following fails:\n>>\n>> CREATE TABLE boom (t character varying(5) DEFAULT 'a long string');\n>>\n>> COPY boom FROM STDIN;\n>> ERROR:  value too long for type character varying(5)\n>>\n>> In PostgreSQL v15 and earlier, the COPY statement succeeds.\n>>\n>> The error is thrown in BeginCopyFrom in line 1578 (HEAD)\n>>\n>>   defexpr = expression_planner(defexpr);\n>>\n>> Bisecting shows that the regression was introduced by commit 9f8377f7a2,\n>> which introduced DEFAULT values for COPY FROM.\n\n\nOops :-(\n\n\n> I suggest the attached fix, which evaluates default values only if\n> the DEFAULT option was specified or if the column does not appear in\n> the column list of COPY.\n>\n\nPatch looks reasonable, haven't tested yet.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2023-09-25 Mo 04:59, Laurenz Albe\n wrote:\n\n\nOn Mon, 2023-09-25 at 09:54 +0200, Laurenz Albe wrote:\n\n\nIn v16 and later, the following fails:\n\nCREATE TABLE boom (t character varying(5) DEFAULT 'a long string');\n\nCOPY boom FROM STDIN;\nERROR:  value too long for type character varying(5)\n\nIn PostgreSQL v15 and earlier, the COPY statement succeeds.\n\nThe error is thrown in BeginCopyFrom in line 1578 (HEAD)\n\n  defexpr = expression_planner(defexpr);\n\nBisecting shows that the regression was introduced by commit 9f8377f7a2,\nwhich introduced DEFAULT values for COPY FROM.\n\n\n\n\n\nOops :-(\n\n\n\n\n\nI suggest the attached fix, which evaluates default values only if\nthe DEFAULT option was specified or if the column does not appear in\nthe column list of COPY.\n\n\n\n\n\nPatch looks reasonable, haven't tested yet.\n\n\n\ncheers\n\n\nandrew\n \n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com", "msg_date": "Mon, 25 Sep 2023 11:06:58 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Regression in COPY FROM caused by 9f8377f7a2" }, { "msg_contents": "On 2023-09-25 Mo 11:06, Andrew Dunstan wrote:\n>\n>\n> On 2023-09-25 Mo 04:59, Laurenz Albe wrote:\n>> On Mon, 2023-09-25 at 09:54 +0200, Laurenz Albe wrote:\n>>> In v16 and later, the following fails:\n>>>\n>>> CREATE TABLE boom (t character varying(5) DEFAULT 'a long string');\n>>>\n>>> COPY boom FROM STDIN;\n>>> ERROR:  value too long for type character varying(5)\n>>>\n>>> In PostgreSQL v15 and earlier, the COPY statement succeeds.\n>>>\n>>> The error is thrown in BeginCopyFrom in line 1578 (HEAD)\n>>>\n>>>   defexpr = expression_planner(defexpr);\n>>>\n>>> Bisecting shows that the regression was introduced by commit 9f8377f7a2,\n>>> which introduced DEFAULT values for COPY FROM.\n>\n>\n\n\nThinking about this a little more, wouldn't it be better if we checked \nat the time we set the default that the value is actually valid for the \ngiven column? This is only one manifestation of a problem you could run \ninto given this table definition.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2023-09-25 Mo 11:06, Andrew Dunstan\n wrote:\n\n\n\n\n\nOn 2023-09-25 Mo 04:59, Laurenz Albe\n wrote:\n\n\nOn Mon, 2023-09-25 at 09:54 +0200, Laurenz Albe wrote:\n\n\nIn v16 and later, the following fails:\n\nCREATE TABLE boom (t character varying(5) DEFAULT 'a long string');\n\nCOPY boom FROM STDIN;\nERROR:  value too long for type character varying(5)\n\nIn PostgreSQL v15 and earlier, the COPY statement succeeds.\n\nThe error is thrown in BeginCopyFrom in line 1578 (HEAD)\n\n  defexpr = expression_planner(defexpr);\n\nBisecting shows that the regression was introduced by commit 9f8377f7a2,\nwhich introduced DEFAULT values for COPY FROM.\n\n\n\n\n\n\n\n\n\n\nThinking about this a little more, wouldn't it be better if we\n checked at the time we set the default that the value is actually\n valid for the given column? This is only one manifestation of a\n problem you could run into given this table definition.\n\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com", "msg_date": "Mon, 25 Sep 2023 16:52:40 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Regression in COPY FROM caused by 9f8377f7a2" }, { "msg_contents": "Andrew Dunstan <[email protected]> writes:\n> On 2023-09-25 Mo 11:06, Andrew Dunstan wrote:\n>> On 2023-09-25 Mo 04:59, Laurenz Albe wrote:\n>>> CREATE TABLE boom (t character varying(5) DEFAULT 'a long string');\n\n> Thinking about this a little more, wouldn't it be better if we checked \n> at the time we set the default that the value is actually valid for the \n> given column? This is only one manifestation of a problem you could run \n> into given this table definition.\n\nI dunno, it seems at least possible that someone would do this\ndeliberately as a means of preventing the column from being defaulted.\nIn any case, the current behavior has stood for a very long time and\nno one has complained that an error should be thrown sooner.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 25 Sep 2023 17:49:42 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Regression in COPY FROM caused by 9f8377f7a2" }, { "msg_contents": "On Mon, 2023-09-25 at 17:49 -0400, Tom Lane wrote:\n> Andrew Dunstan <[email protected]> writes:\n> > On 2023-09-25 Mo 11:06, Andrew Dunstan wrote:\n> > > On 2023-09-25 Mo 04:59, Laurenz Albe wrote:\n> > > > CREATE TABLE boom (t character varying(5) DEFAULT 'a long string');\n> \n> > Thinking about this a little more, wouldn't it be better if we checked \n> > at the time we set the default that the value is actually valid for the \n> > given column? This is only one manifestation of a problem you could run \n> > into given this table definition.\n> \n> I dunno, it seems at least possible that someone would do this\n> deliberately as a means of preventing the column from being defaulted.\n> In any case, the current behavior has stood for a very long time and\n> no one has complained that an error should be thrown sooner.\n\nMoreover, this makes restoring a pg_dump from v15 to v16 fail, which\nshould never happen. This is how I got that bug report.\n\nYours,\nLaurenz Albe\n\n\n", "msg_date": "Tue, 26 Sep 2023 08:48:31 +0200", "msg_from": "Laurenz Albe <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Regression in COPY FROM caused by 9f8377f7a2" }, { "msg_contents": "Here is an improved version of the patch with regression tests.\n\nYours,\nLaurenz Albe", "msg_date": "Tue, 26 Sep 2023 10:11:25 +0200", "msg_from": "Laurenz Albe <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Regression in COPY FROM caused by 9f8377f7a2" }, { "msg_contents": "\nOn 2023-09-26 Tu 04:11, Laurenz Albe wrote:\n> Here is an improved version of the patch with regression tests.\n>\n\nThanks, pushed.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Sun, 1 Oct 2023 10:55:33 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Regression in COPY FROM caused by 9f8377f7a2" }, { "msg_contents": "On Sun, 2023-10-01 at 10:55 -0400, Andrew Dunstan wrote:\n> Thanks, pushed.\n\nThanks for taking care of that.\n\nYours,\nLaurenz Albe\n\n\n", "msg_date": "Mon, 02 Oct 2023 06:10:24 +0200", "msg_from": "Laurenz Albe <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Regression in COPY FROM caused by 9f8377f7a2" } ]
[ { "msg_contents": "I noticed a wrong comment in search_indexed_tlist_for_sortgroupref().\n\n foreach(lc, itlist->tlist)\n {\n TargetEntry *tle = (TargetEntry *) lfirst(lc);\n\n /* The equal() check should be redundant, but let's be paranoid */\n if (tle->ressortgroupref == sortgroupref &&\n equal(node, tle->expr))\n {\n\nIt turns out that the equal() check is necessary, because the given\nsort/group expression might be type of FuncExpr which converts integer\nto numeric. In this case we need to modify its args not itself to\nreference the matching subplan output expression. As an example,\nconsider\n\nexplain (costs off, verbose)\nSELECT 1.1 AS two UNION (SELECT 2 UNION ALL SELECT 2);\n QUERY PLAN\n-------------------------------------\n HashAggregate\n Output: (1.1)\n Group Key: (1.1)\n -> Append\n -> Result\n Output: 1.1\n -> Result\n Output: (2)\n -> Append\n -> Result\n Output: 2\n -> Result\n Output: 2\n(13 rows)\n\nIf we remove the equal() check, this query would cause crash in\nexecution.\n\nI'm considering changing the comment as below.\n\n- /* The equal() check should be redundant, but let's be paranoid */\n+ /*\n+ * The equal() check is necessary, because expressions with the same\n+ * sortgroupref might be different, e.g., the given sort/group\n+ * expression can be type of FuncExpr which converts integer to\n+ * numeric, and we need to modify its args not itself to reference the\n+ * matching subplan output expression in this case.\n+ */\n\nAny thoughts?\n\nThanks\nRichard", "msg_date": "Mon, 25 Sep 2023 18:11:41 +0800", "msg_from": "Richard Guo <[email protected]>", "msg_from_op": true, "msg_subject": "Fix a wrong comment in setrefs.c" }, { "msg_contents": "Richard Guo <[email protected]> writes:\n> I noticed a wrong comment in search_indexed_tlist_for_sortgroupref().\n\n> /* The equal() check should be redundant, but let's be paranoid */\n\n> It turns out that the equal() check is necessary, because the given\n> sort/group expression might be type of FuncExpr which converts integer\n> to numeric.\n\nHmm. This kind of makes me itch, because in principle a ressortgroupref\nidentifier should uniquely identify a sorting/grouping column. If it\nfails to do so here, maybe there are outright bugs lurking elsewhere?\n\nI poked into it a little and determined that the problematic\nressortgroupref values are being assigned during prepunion.c,\nwhich says\n\n * By convention, all non-resjunk columns in a setop tree have\n * ressortgroupref equal to their resno. In some cases the ref isn't\n * needed, but this is a cleaner way than modifying the tlist later.\n\nSo at the end of that, we can have Vars in the upper relations'\ntargetlists that are associated by ressortgroupref with values\nin the setop input relations' tlists, but don't match.\n(You are right that added-on implicit coercions are one reason for\nthe expressions to be different, but it's not the only one.)\n\nI'm inclined to write the comment more like \"Usually the equal()\ncheck is redundant, but in setop plans it may not be, since\nprepunion.c assigns ressortgroupref equal to the column resno\nwithout regard to whether that matches the topmost level's\nsortgrouprefs and without regard to whether any implicit coercions\nare added in the setop tree. We might have to clean that up someday;\nbut for now, just ignore any false matches.\"\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 25 Sep 2023 17:45:33 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fix a wrong comment in setrefs.c" }, { "msg_contents": "On Tue, Sep 26, 2023 at 5:45 AM Tom Lane <[email protected]> wrote:\n\n> Hmm. This kind of makes me itch, because in principle a ressortgroupref\n> identifier should uniquely identify a sorting/grouping column. If it\n> fails to do so here, maybe there are outright bugs lurking elsewhere?\n>\n> I poked into it a little and determined that the problematic\n> ressortgroupref values are being assigned during prepunion.c,\n> which says\n>\n> * By convention, all non-resjunk columns in a setop tree have\n> * ressortgroupref equal to their resno. In some cases the ref\n> isn't\n> * needed, but this is a cleaner way than modifying the tlist\n> later.\n>\n> So at the end of that, we can have Vars in the upper relations'\n> targetlists that are associated by ressortgroupref with values\n> in the setop input relations' tlists, but don't match.\n> (You are right that added-on implicit coercions are one reason for\n> the expressions to be different, but it's not the only one.)\n\n\nAh. Thanks for the investigation.\n\n\n> I'm inclined to write the comment more like \"Usually the equal()\n> check is redundant, but in setop plans it may not be, since\n> prepunion.c assigns ressortgroupref equal to the column resno\n> without regard to whether that matches the topmost level's\n> sortgrouprefs and without regard to whether any implicit coercions\n> are added in the setop tree. We might have to clean that up someday;\n> but for now, just ignore any false matches.\"\n\n\n+1. It explains the situation much more clearly and accurately.\n\nThanks\nRichard\n\nOn Tue, Sep 26, 2023 at 5:45 AM Tom Lane <[email protected]> wrote:\nHmm.  This kind of makes me itch, because in principle a ressortgroupref\nidentifier should uniquely identify a sorting/grouping column.  If it\nfails to do so here, maybe there are outright bugs lurking elsewhere?\n\nI poked into it a little and determined that the problematic\nressortgroupref values are being assigned during prepunion.c,\nwhich says\n\n         * By convention, all non-resjunk columns in a setop tree have\n         * ressortgroupref equal to their resno.  In some cases the ref isn't\n         * needed, but this is a cleaner way than modifying the tlist later.\n\nSo at the end of that, we can have Vars in the upper relations'\ntargetlists that are associated by ressortgroupref with values\nin the setop input relations' tlists, but don't match.\n(You are right that added-on implicit coercions are one reason for\nthe expressions to be different, but it's not the only one.)Ah.  Thanks for the investigation. \nI'm inclined to write the comment more like \"Usually the equal()\ncheck is redundant, but in setop plans it may not be, since\nprepunion.c assigns ressortgroupref equal to the column resno\nwithout regard to whether that matches the topmost level's\nsortgrouprefs and without regard to whether any implicit coercions\nare added in the setop tree.  We might have to clean that up someday;\nbut for now, just ignore any false matches.\"+1.  It explains the situation much more clearly and accurately.ThanksRichard", "msg_date": "Tue, 26 Sep 2023 09:51:45 +0800", "msg_from": "Richard Guo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Fix a wrong comment in setrefs.c" }, { "msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: not tested\nImplements feature: not tested\nSpec compliant: not tested\nDocumentation: not tested\n\nThat looks correct for me\n\nThe new status of this patch is: Ready for Committer\n", "msg_date": "Tue, 17 Oct 2023 16:42:12 +0000", "msg_from": "shihao zhong <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fix a wrong comment in setrefs.c" }, { "msg_contents": "On Tue, Sep 26, 2023 at 9:51 AM Richard Guo <[email protected]> wrote:\n\n> On Tue, Sep 26, 2023 at 5:45 AM Tom Lane <[email protected]> wrote:\n>\n>> I'm inclined to write the comment more like \"Usually the equal()\n>> check is redundant, but in setop plans it may not be, since\n>> prepunion.c assigns ressortgroupref equal to the column resno\n>> without regard to whether that matches the topmost level's\n>> sortgrouprefs and without regard to whether any implicit coercions\n>> are added in the setop tree. We might have to clean that up someday;\n>> but for now, just ignore any false matches.\"\n>\n>\n> +1. It explains the situation much more clearly and accurately.\n>\n\nTo make it easier to review, I've updated the patch to be so.\n\nThanks\nRichard", "msg_date": "Fri, 3 Nov 2023 14:10:45 +0800", "msg_from": "Richard Guo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Fix a wrong comment in setrefs.c" }, { "msg_contents": "On 03/11/2023 08:10, Richard Guo wrote:\n> On Tue, Sep 26, 2023 at 9:51 AM Richard Guo <[email protected] \n> <mailto:[email protected]>> wrote:\n> \n> On Tue, Sep 26, 2023 at 5:45 AM Tom Lane <[email protected]\n> <mailto:[email protected]>> wrote:\n> \n> I'm inclined to write the comment more like \"Usually the equal()\n> check is redundant, but in setop plans it may not be, since\n> prepunion.c assigns ressortgroupref equal to the column resno\n> without regard to whether that matches the topmost level's\n> sortgrouprefs and without regard to whether any implicit coercions\n> are added in the setop tree.  We might have to clean that up\n> someday;\n> but for now, just ignore any false matches.\"\n> \n> \n> +1.  It explains the situation much more clearly and accurately.\n> \n> To make it easier to review, I've updated the patch to be so.\n\nCommitted, thanks!\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Tue, 28 Nov 2023 14:19:34 +0200", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fix a wrong comment in setrefs.c" }, { "msg_contents": "On Tue, Nov 28, 2023 at 8:19 PM Heikki Linnakangas <[email protected]> wrote:\n\n> On 03/11/2023 08:10, Richard Guo wrote:\n> > On Tue, Sep 26, 2023 at 9:51 AM Richard Guo <[email protected]\n> > <mailto:[email protected]>> wrote:\n> > On Tue, Sep 26, 2023 at 5:45 AM Tom Lane <[email protected]\n> > <mailto:[email protected]>> wrote:\n> >\n> > I'm inclined to write the comment more like \"Usually the equal()\n> > check is redundant, but in setop plans it may not be, since\n> > prepunion.c assigns ressortgroupref equal to the column resno\n> > without regard to whether that matches the topmost level's\n> > sortgrouprefs and without regard to whether any implicit\n> coercions\n> > are added in the setop tree. We might have to clean that up\n> > someday;\n> > but for now, just ignore any false matches.\"\n> >\n> > +1. It explains the situation much more clearly and accurately.\n> >\n> > To make it easier to review, I've updated the patch to be so.\n>\n> Committed, thanks!\n\n\nThanks for pushing!\n\nThanks\nRichard\n\nOn Tue, Nov 28, 2023 at 8:19 PM Heikki Linnakangas <[email protected]> wrote:On 03/11/2023 08:10, Richard Guo wrote:\n> On Tue, Sep 26, 2023 at 9:51 AM Richard Guo <[email protected] \n> <mailto:[email protected]>> wrote:\n>     On Tue, Sep 26, 2023 at 5:45 AM Tom Lane <[email protected]\n>     <mailto:[email protected]>> wrote:\n> \n>         I'm inclined to write the comment more like \"Usually the equal()\n>         check is redundant, but in setop plans it may not be, since\n>         prepunion.c assigns ressortgroupref equal to the column resno\n>         without regard to whether that matches the topmost level's\n>         sortgrouprefs and without regard to whether any implicit coercions\n>         are added in the setop tree.  We might have to clean that up\n>         someday;\n>         but for now, just ignore any false matches.\"\n>\n>     +1.  It explains the situation much more clearly and accurately.\n> \n> To make it easier to review, I've updated the patch to be so.\n\nCommitted, thanks!Thanks for pushing!ThanksRichard", "msg_date": "Fri, 1 Dec 2023 19:17:22 +0800", "msg_from": "Richard Guo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Fix a wrong comment in setrefs.c" } ]
[ { "msg_contents": "I find this in source code\n#define ShareLock\t\t\t\t5\t/* CREATE INDEX (WITHOUT CONCURRENTLY) */\nI think if there is no CONCURRENTLY, so why we need a lock, I think that’s unnecessary. What’s the point?\n\n", "msg_date": "Mon, 25 Sep 2023 23:04:18 +0800", "msg_from": "jacktby jacktby <[email protected]>", "msg_from_op": true, "msg_subject": "Why need a lock?" }, { "msg_contents": "jacktby jacktby <[email protected]> writes:\n> I find this in source code\n> #define ShareLock\t\t\t\t5\t/* CREATE INDEX (WITHOUT CONCURRENTLY) */\n> I think if there is no CONCURRENTLY, so why we need a lock, I think that’s unnecessary. What’s the point?\n\nWe need a lock precisely to prevent concurrent execution\n(of table modifications that would confuse the index build).\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 25 Sep 2023 11:33:43 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why need a lock?" } ]
[ { "msg_contents": "\nHi, hackers\n\nWhen I try to update unicode mapping tables using make update-unicode [1],\nI encountered an error about following:\n\ngenerate_unaccent_rules.py --unicode-data-file ../../src/common/unicode/UnicodeData.txt --latin-ascii-file Latin-ASCII.xml >unaccent.rules\n/bin/sh: 1: generate_unaccent_rules.py: not found\nmake: *** [Makefile:33: unaccent.rules] Error 127\nmake: *** Deleting file 'unaccent.rules'\n\nThe generate_unaccent_rules.py is in contrib/unaccent and the Makefile:\n\n# Allow running this even without --with-python\nPYTHON ?= python\n\n$(srcdir)/unaccent.rules: generate_unaccent_rules.py ../../src/common/unicode/UnicodeData.txt Latin-ASCII.xml\n $(PYTHON) $< --unicode-data-file $(word 2,$^) --latin-ascii-file $(word 3,$^) >$@\n\nIt use python to run generate_unaccent_rules.py, However, the ?= operator in\nMakefile only check variable is defined or not, but do not check variable is\nempty. Since the PYTHON is defined in src/Makefile.global, so here PYTHON\nget empty when without --with-ptyhon.\n\nHere are some examples:\n\njapin@coltd-devel:~$ cat Makefile\nPYTHON =\nPYTHON ?= python\n\ntest:\n echo '$(PYTHON)'\njapin@coltd-devel:~$ make\necho ''\n\njapin@coltd-devel:~$ cat Makefile\nPYTHON = python3\nPYTHON ?= python\n\ntest:\n echo '$(PYTHON)'\njapin@coltd-devel:~$ make\necho 'python3'\npython3\n\njapin@coltd-devel:~$ cat Makefile\nPYTHON =\nifeq ($(PYTHON),)\nPYTHON = python\nendif\n\ntest:\n echo '$(PYTHON)'\njapin@coltd-devel:~$ make\necho 'python'\npython\njapin@coltd-devel:~$ cat Makefile\nPYTHON = python3\nifeq ($(PYTHON),)\nPYTHON = python\nendif\n\ntest:\n echo '$(PYTHON)'\njapin@coltd-devel:~$ make\necho 'python3'\npython3\n\nHere is a patch to fix this, any thoughts?\n\ndiff --git a/contrib/unaccent/Makefile b/contrib/unaccent/Makefile\nindex 652a3e774c..3ff49ba1e9 100644\n--- a/contrib/unaccent/Makefile\n+++ b/contrib/unaccent/Makefile\n@@ -26,7 +26,9 @@ endif\n update-unicode: $(srcdir)/unaccent.rules\n \n # Allow running this even without --with-python\n-PYTHON ?= python\n+ifeq ($(PYTHON),)\n+PYTHON = python\n+endif\n \n $(srcdir)/unaccent.rules: generate_unaccent_rules.py ../../src/common/unicode/UnicodeData.txt Latin-ASCII.xml\n \t$(PYTHON) $< --unicode-data-file $(word 2,$^) --latin-ascii-file $(word 3,$^) >$@\n\n\n[1] https://www.postgresql.org/message-id/MEYP282MB1669AC78EE8374B3DE797A09B6FCA%40MEYP282MB1669.AUSP282.PROD.OUTLOOK.COM\n\n-- \nRegrads,\nJapin Li\nChengDu WenWu Information Technology Co., Ltd.\n\n\n", "msg_date": "Tue, 26 Sep 2023 10:43:40 +0800", "msg_from": "Japin Li <[email protected]>", "msg_from_op": true, "msg_subject": "Could not run generate_unaccent_rules.py script when update unicode" }, { "msg_contents": "On Tue, Sep 26, 2023 at 10:43:40AM +0800, Japin Li wrote:\n> # Allow running this even without --with-python\n> PYTHON ?= python\n> \n> $(srcdir)/unaccent.rules: generate_unaccent_rules.py ../../src/common/unicode/UnicodeData.txt Latin-ASCII.xml\n> $(PYTHON) $< --unicode-data-file $(word 2,$^) --latin-ascii-file $(word 3,$^) >$@\n> \n> It use python to run generate_unaccent_rules.py, However, the ?= operator in\n> Makefile only check variable is defined or not, but do not check variable is\n> empty. Since the PYTHON is defined in src/Makefile.global, so here PYTHON\n> get empty when without --with-ptyhon.\n\nI am not sure that many people run this script frequently so that may\nnot be worth adding a check for a defined, still empty or incorrect\nvalue, but.. If you were to change the Makefile we use in this path,\nhow are you suggesting to change it?\n--\nMichael", "msg_date": "Wed, 27 Sep 2023 09:03:19 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Could not run generate_unaccent_rules.py script when update\n unicode" }, { "msg_contents": "\nOn Wed, 27 Sep 2023 at 08:03, Michael Paquier <[email protected]> wrote:\n> On Tue, Sep 26, 2023 at 10:43:40AM +0800, Japin Li wrote:\n>> # Allow running this even without --with-python\n>> PYTHON ?= python\n>> \n>> $(srcdir)/unaccent.rules: generate_unaccent_rules.py ../../src/common/unicode/UnicodeData.txt Latin-ASCII.xml\n>> $(PYTHON) $< --unicode-data-file $(word 2,$^) --latin-ascii-file $(word 3,$^) >$@\n>> \n>> It use python to run generate_unaccent_rules.py, However, the ?= operator in\n>> Makefile only check variable is defined or not, but do not check variable is\n>> empty. Since the PYTHON is defined in src/Makefile.global, so here PYTHON\n>> get empty when without --with-ptyhon.\n>\n> I am not sure that many people run this script frequently so that may\n> not be worth adding a check for a defined, still empty or incorrect\n\nYeah, not frequently, however, it already be used by me, since we provide this\nfunction, why not make it better?\n\n> value, but.. If you were to change the Makefile we use in this path,\n> how are you suggesting to change it?\n\nI provide a patch at bottom of in [1]. Attached here again.\n\ndiff --git a/contrib/unaccent/Makefile b/contrib/unaccent/Makefile\nindex 652a3e774c..3ff49ba1e9 100644\n--- a/contrib/unaccent/Makefile\n+++ b/contrib/unaccent/Makefile\n@@ -26,7 +26,9 @@ endif\n update-unicode: $(srcdir)/unaccent.rules\n \n # Allow running this even without --with-python\n-PYTHON ?= python\n+ifeq ($(PYTHON),)\n+PYTHON = python\n+endif\n \n $(srcdir)/unaccent.rules: generate_unaccent_rules.py ../../src/common/unicode/UnicodeData.txt Latin-ASCII.xml\n \t$(PYTHON) $< --unicode-data-file $(word 2,$^) --latin-ascii-file $(word 3,$^) >$@\n\n[1] https://www.postgresql.org/message-id/MEYP282MB1669F86C0DC7B4DC48489CB0B6C3A@MEYP282MB1669.AUSP282.PROD.OUTLOOK.COM\n\n-- \nRegrads,\nJapin Li\nChengDu WenWu Information Technology Co., Ltd.\n\n\n", "msg_date": "Wed, 27 Sep 2023 09:15:00 +0800", "msg_from": "Japin Li <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Could not run generate_unaccent_rules.py script when update\n unicode" }, { "msg_contents": "On Wed, Sep 27, 2023 at 09:15:00AM +0800, Japin Li wrote:\n> On Wed, 27 Sep 2023 at 08:03, Michael Paquier <[email protected]> wrote:\n>> I am not sure that many people run this script frequently so that may\n>> not be worth adding a check for a defined, still empty or incorrect\n> \n> Yeah, not frequently, however, it already be used by me, since we provide this\n> function, why not make it better?\n\nWell, I don't mind doing as you suggest.\n\n>> value, but.. If you were to change the Makefile we use in this path,\n>> how are you suggesting to change it?\n> \n> I provide a patch at bottom of in [1]. Attached here again.\n\nNo file was attached so I somewhat missed it. And indeed, you're\nright. The current rule is useless when compiling without\n--with-python, as PYTHON is empty but defined. What you are proposing\nis similar to what pgxs.mk does for bison and flex, and \"python\" would\nstill be able to map to python3 or python2.7, which should be OK in\nmost cases.\n\nI thought that this was quite old, but no, f85a485f8 was at the origin\nof that. So I've applied the patch down to 13.\n--\nMichael", "msg_date": "Wed, 27 Sep 2023 14:46:01 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Could not run generate_unaccent_rules.py script when update\n unicode" }, { "msg_contents": "\nOn Wed, 27 Sep 2023 at 13:46, Michael Paquier <[email protected]> wrote:\n> On Wed, Sep 27, 2023 at 09:15:00AM +0800, Japin Li wrote:\n>> On Wed, 27 Sep 2023 at 08:03, Michael Paquier <[email protected]> wrote:\n>>> I am not sure that many people run this script frequently so that may\n>>> not be worth adding a check for a defined, still empty or incorrect\n>> \n>> Yeah, not frequently, however, it already be used by me, since we provide this\n>> function, why not make it better?\n>\n> Well, I don't mind doing as you suggest.\n>\n>>> value, but.. If you were to change the Makefile we use in this path,\n>>> how are you suggesting to change it?\n>> \n>> I provide a patch at bottom of in [1]. Attached here again.\n>\n> No file was attached so I somewhat missed it. And indeed, you're\n> right. The current rule is useless when compiling without\n> --with-python, as PYTHON is empty but defined. What you are proposing\n> is similar to what pgxs.mk does for bison and flex, and \"python\" would\n> still be able to map to python3 or python2.7, which should be OK in\n> most cases.\n>\n> I thought that this was quite old, but no, f85a485f8 was at the origin\n> of that. So I've applied the patch down to 13.\n\nThanks for your review and push!\n\n-- \nRegrads,\nJapin Li\nChengDu WenWu Information Technology Co., Ltd.\n\n\n", "msg_date": "Wed, 27 Sep 2023 13:56:10 +0800", "msg_from": "Japin Li <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Could not run generate_unaccent_rules.py script when update\n unicode" } ]
[ { "msg_contents": "Hi,\n\nIn the regression test I see tests the one below [1], but generally, jsonb_path_query is a SRF,\nand the definition in pg_proc.dat has it as such [0], looking at the implementation it doesn’t look like it calls jsonb_query_first behind the scenes (say if it detects it’s being called in SELECT), which would explain it.\n\nHow would I re-create this same behaviour with say another $.func() in jsonb or any SRF in general.\n\n{ oid => '4006', descr => 'jsonpath query',\n proname => 'jsonb_path_query', prorows => '1000', proretset => 't',\n prorettype => 'jsonb', proargtypes => 'jsonb jsonpath jsonb bool',\n prosrc => 'jsonb_path_query' },\n\n\nselect jsonb_path_query('\"10-03-2017\"', '$.datetime(\"dd-mm-yyyy\")');\n jsonb_path_query \n------------------\n \"2017-03-10\"\n(1 row)\nHi,In the regression test I see tests the one below [1], but generally, jsonb_path_query is a SRF,and the definition in pg_proc.dat has it as such [0], looking at the implementation it doesn’t look like it calls jsonb_query_first behind the scenes (say if it detects it’s being called in  SELECT), which would explain it.How would I re-create this same behaviour with say another $.func() in jsonb or any SRF in general.{ oid => '4006', descr => 'jsonpath query', proname => 'jsonb_path_query', prorows => '1000', proretset => 't', prorettype => 'jsonb', proargtypes => 'jsonb jsonpath jsonb bool', prosrc => 'jsonb_path_query' },select jsonb_path_query('\"10-03-2017\"', '$.datetime(\"dd-mm-yyyy\")'); jsonb_path_query ------------------ \"2017-03-10\"(1 row)", "msg_date": "Tue, 26 Sep 2023 07:19:06 +0300", "msg_from": "Markur Sens <[email protected]>", "msg_from_op": true, "msg_subject": "How are jsonb_path_query SRFs $.datetime() defined ?" } ]
[ { "msg_contents": "Hi all,\n(Thomas in CC.)\n\nNow that becfbdd6c1c9 has improved the situation to detect the\ndifference between out-of-memory and invalid WAL data in WAL, I guess\nthat it is time to tackle the problem of what we should do when\nreading WAL records bit fail on out-of-memory.\n\nTo summarize, currently the WAL reader APIs fail the same way if we\ndetect some incorrect WAL record or if a memory allocation fails: an\nerror is generated and returned back to the caller to consume. For\nWAL replay, not being able to make the difference between an OOM and\nthe end-of-wal is a problem in some cases. For example, in crash\nrecovery, failing an internal allocation will be detected as the\nend-of-wal, causing recovery to stop prematurely. In the worst cases,\nthis silently corrupts clusters because not all the records generated\nin the local pg_wal/ have been replayed. Oops.\n\nWhen in standby mode, things are a bit better, because we'd just loop\nand wait for the next record. But, even in this case, if the startup\nprocess does a crash recovery while standby is set up, we may finish\nby attempting recovery from a different source than the local pg_wal/.\nNot strictly critical, but less optimal in some cases as we could\nswitch to archive recovery earlier than necessary.\n\nIn a different thread, I have proposed to extend the WAL reader\nfacility so as an error code is returned to make the difference\nbetween an OOM or the end of WAL with an incorrect record:\nhttps://www.postgresql.org/message-id/ZRJ-p1dLUY0uoChc%40paquier.xyz\n\nHowever this requires some ABI changes, so that's not backpatchable.\n\nThis leaves out what we can do for the existing back-branches, and\none option is to do the simplest thing I can think of: if an\nallocation fails, just fail *hard*. The allocations of the WAL reader\nrely on palloc_extended(), so I'd like to suggest that we switch to\npalloc() instead. If we do so, an ERROR is promoted to a FATAL during\nWAL replay, which makes sure that we will never stop recovery earlier\nthan we should, FATAL-ing before things go wrong.\n\nNote that the WAL prefetching already relies on a palloc() on HEAD in\nXLogReadRecordAlloc(), which would fail hard the same way on OOM.\n\nSo, attached is a proposal of patch to do something down to 12.\n\nThoughts and/or comments are welcome.\n--\nMichael", "msg_date": "Tue, 26 Sep 2023 16:38:28 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Fail hard if xlogreader.c fails on out-of-memory" }, { "msg_contents": "On Tue, Sep 26, 2023 at 8:38 PM Michael Paquier <[email protected]> wrote:\n> Thoughts and/or comments are welcome.\n\nI don't have an opinion yet on your other thread about making this\nstuff configurable for replicas, but for the simple crash recovery\ncase shown here, hard failure makes sense to me.\n\nHere are some interesting points in the history of this topic:\n\n1999 30659d43: xl_len is 16 bit, fixed size buffer in later commits\n2001 7d4d5c00: WAL files recycled, xlp_pageaddr added\n2004 0ffe11ab: xl_len is 32 bit, dynamic buffer, malloc() failure ends redo\n2005 21fda22e: xl_tot_len and xl_len co-exist\n2014 2c03216d: xl_tot_len fully replaces xl_len\n2018 70b4f82a: xl_tot_len > 1GB ends redo\n2023 8fcb32db: don't let xl_tot_len > 1GB be logged!\n2023 bae868ca: check next xlp_pageaddr, xlp_rem_len before allocating\n\nRecycled pages can't fool us into making a huge allocation any more.\nIf xl_tot_len implies more than one page but the next page's\nxlp_pageaddr is too low, then either the xl_tot_len you read was\nrecycled garbage bits, or it was legitimate but the overwrite of the\nfollowing page didn't make it to disk; either way, we don't have a\nrecord, so we have an end-of-wal condition. The xlp_rem_len check\ndefends against the second page making it to disk while the first one\nstill contains recycled garbage where the xl_tot_len should be*.\n\nWhat Michael wants to do now is remove the 2004-era assumption that\nmalloc failure implies bogus data. It must be pretty unlikely in a 64\nbit world with overcommitted virtual memory, but a legitimate\nxl_tot_len can falsely end recovery and lose data, as reported from a\nproduction case analysed by his colleagues. In other words, we can\nactually distinguish between lack of resources and recycled bogus\ndata, so why treat them the same?\n\nFor comparison, if you run out of disk space during recovery we don't\nsay \"oh well, that's enough redoing for today, the computer is full,\nlet's forget about the rest of the WAL and start accepting new\ntransactions!\". The machine running recovery has certain resource\nrequirements relative to the machine that generated the WAL, and if\nthey're not met it just can't do it. It's the same if it various\nother allocations fail. The new situation is that we are now\nverifying that xl_tot_len was actually logged by PostgreSQL, so if we\ncan't allocate space for it, we can't replay the WAL.\n\n*A more detailed analysis would talk about sectors (page header is\natomic), and consider whether we're only trying to defend ourselves\nagainst recycled pages written by PostgreSQL (yes), arbitrary random\ndata (no, but it's probably still pretty good) or someone trying to\ntrick us (no, and we don't stand a chance).\n\n\n", "msg_date": "Wed, 27 Sep 2023 11:06:37 +1300", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fail hard if xlogreader.c fails on out-of-memory" }, { "msg_contents": "On Wed, Sep 27, 2023 at 11:06:37AM +1300, Thomas Munro wrote:\n> I don't have an opinion yet on your other thread about making this\n> stuff configurable for replicas, but for the simple crash recovery\n> case shown here, hard failure makes sense to me.\n\nAlso, if we conclude that we're OK with just failing hard all the time\nfor crash recovery and archive recovery on OOM, the other patch is not\nreally required. That would be disruptive for standbys in some cases,\nstill perhaps OK in the long-term. I am wondering if people have lost\ndata because of this problem on production systems, actually.. It\nwould not be possible to know that it happened until you see a page on\ndisk that has a somewhat valid LSN, still an LSN older than the\nposition currently being inserted, and that could show up in various\nforms. Even that could get hidden quickly if WAL is written at a fast\npace after a crash recovery. A standby promotion at an LSN older\nwould be unlikely as monitoring solutions discard standbys lagging\nbehind N bytes.\n\n> *A more detailed analysis would talk about sectors (page header is\n> atomic), and consider whether we're only trying to defend ourselves\n> against recycled pages written by PostgreSQL (yes), arbitrary random\n> data (no, but it's probably still pretty good) or someone trying to\n> trick us (no, and we don't stand a chance).\n\nWAL would not be the only part of the system that would get borked if\narbitrary bytes can be inserted into what's read from disk, random or\nnot.\n--\nMichael", "msg_date": "Wed, 27 Sep 2023 08:14:15 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Fail hard if xlogreader.c fails on out-of-memory" }, { "msg_contents": "On Wed, Sep 27, 2023 at 11:06:37AM +1300, Thomas Munro wrote:\n> On Tue, Sep 26, 2023 at 8:38 PM Michael Paquier <[email protected]> wrote:\n> > Thoughts and/or comments are welcome.\n> \n> I don't have an opinion yet on your other thread about making this\n> stuff configurable for replicas, but for the simple crash recovery\n> case shown here, hard failure makes sense to me.\n\n> Recycled pages can't fool us into making a huge allocation any more.\n> If xl_tot_len implies more than one page but the next page's\n> xlp_pageaddr is too low, then either the xl_tot_len you read was\n> recycled garbage bits, or it was legitimate but the overwrite of the\n> following page didn't make it to disk; either way, we don't have a\n> record, so we have an end-of-wal condition. The xlp_rem_len check\n> defends against the second page making it to disk while the first one\n> still contains recycled garbage where the xl_tot_len should be*.\n> \n> What Michael wants to do now is remove the 2004-era assumption that\n> malloc failure implies bogus data. It must be pretty unlikely in a 64\n> bit world with overcommitted virtual memory, but a legitimate\n> xl_tot_len can falsely end recovery and lose data, as reported from a\n> production case analysed by his colleagues. In other words, we can\n> actually distinguish between lack of resources and recycled bogus\n> data, so why treat them the same?\n\nIndeed. Hard failure is fine, and ENOMEM=end-of-WAL definitely isn't.\n\n> *A more detailed analysis would talk about sectors (page header is\n> atomic)\n\nI think the page header is atomic on POSIX-compliant filesystems but not\natomic on ext4. That doesn't change the conclusion on $SUBJECT.\n\n\n", "msg_date": "Tue, 26 Sep 2023 18:28:30 -0700", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fail hard if xlogreader.c fails on out-of-memory" }, { "msg_contents": "On Tue, Sep 26, 2023 at 06:28:30PM -0700, Noah Misch wrote:\n> On Wed, Sep 27, 2023 at 11:06:37AM +1300, Thomas Munro wrote:\n>> What Michael wants to do now is remove the 2004-era assumption that\n>> malloc failure implies bogus data. It must be pretty unlikely in a 64\n>> bit world with overcommitted virtual memory, but a legitimate\n>> xl_tot_len can falsely end recovery and lose data, as reported from a\n>> production case analysed by his colleagues. In other words, we can\n>> actually distinguish between lack of resources and recycled bogus\n>> data, so why treat them the same?\n> \n> Indeed. Hard failure is fine, and ENOMEM=end-of-WAL definitely isn't.\n\nAre there any more comments and/or suggestions here?\n\nIf none, I propose to apply the patch to switch to palloc() instead of\npalloc_extended(NO_OOM) in this code around the beginning of next\nweek, down to 12.\n--\nMichael", "msg_date": "Thu, 28 Sep 2023 09:36:37 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Fail hard if xlogreader.c fails on out-of-memory" }, { "msg_contents": "On Thu, Sep 28, 2023 at 09:36:37AM +0900, Michael Paquier wrote:\n> If none, I propose to apply the patch to switch to palloc() instead of\n> palloc_extended(NO_OOM) in this code around the beginning of next\n> week, down to 12.\n\nDone down to 12 as of 6b18b3fe2c2f, then.\n--\nMichael", "msg_date": "Tue, 3 Oct 2023 15:39:40 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Fail hard if xlogreader.c fails on out-of-memory" } ]
[ { "msg_contents": "Hello,\n\n\nI read this article from Haproxy, they noticed OpenSSL v3 has huge \nperformance regressions :\nhttps://github.com/haproxy/wiki/wiki/SSL-Libraries-Support-Status#openssl\n\nThis is a known issue : \nhttps://github.com/openssl/openssl/issues/17627#issuecomment-1060123659\n\nUnfortunately, v3 is shipped with many distributions (Debian, Redhat, \nRockylinux...) : https://pkgs.org/search/?q=openssl\n\nI am afraid of users will face performance issues once they update their \ndistro.\n\nDoes someone already encounter problems? Should Postgres support \nalternative SSL libraries has HAProxy?\n\nRegards,\n\n-- \nAdrien NAYRAT\n\n\n\n", "msg_date": "Tue, 26 Sep 2023 10:36:22 +0200", "msg_from": "Adrien Nayrat <[email protected]>", "msg_from_op": true, "msg_subject": "OpenSSL v3 performance regressions" }, { "msg_contents": "> On 26 Sep 2023, at 10:36, Adrien Nayrat <[email protected]> wrote:\n\n> Should Postgres support alternative SSL libraries has HAProxy?\n\nPostgreSQL can be built with LibreSSL instead of OpenSSL, which may or may not\nbe a better option performance wise for a particular application. Benchmarking\nyour workload is key to understanding performance, a lot of the regressions\npointed to in that blogpost wont affect postgres.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Tue, 26 Sep 2023 10:52:06 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: OpenSSL v3 performance regressions" } ]
[ { "msg_contents": "Hi,\n\n\nPFA small code cleanup in twophase.sql. Which contains a drop table\nstatement for 'test_prepared_savepoint'. Which, to me, appears to be\nmissing in the cleanup section of that file.\n\nTo support it I have below points:-\n\n1) Grepping this table 'test_prepared_savepoint' shows occurrences\nonly in twophase.out & twophase.sql files. This means that table is\nlocal to that sql test file and not used in any other test file.\n\n2) I don't see any comment on why this was not added in the cleanup\nsection of twophase.sql, but drop for other two test tables are done.\n\n3) I ran \"make check-world\" with the patch and I don't see any failures.\n\nKindly correct, if I missed anything.\n\n\nRegards,\nNishant (EDB).", "msg_date": "Tue, 26 Sep 2023 18:31:42 +0530", "msg_from": "Nishant Sharma <[email protected]>", "msg_from_op": true, "msg_subject": "[Code Cleanup] : Small code cleanup in twophase.sql" }, { "msg_contents": "Hi,\n\nAny taker or rejector for above? -- It's a very small 'good to have' change\npatch for cleanup.\n\nThanks,\nNishant (EDB).\n\nOn Tue, Sep 26, 2023 at 6:31 PM Nishant Sharma <\[email protected]> wrote:\n\n> Hi,\n>\n>\n> PFA small code cleanup in twophase.sql. Which contains a drop table\n> statement for 'test_prepared_savepoint'. Which, to me, appears to be\n> missing in the cleanup section of that file.\n>\n> To support it I have below points:-\n>\n> 1) Grepping this table 'test_prepared_savepoint' shows occurrences\n> only in twophase.out & twophase.sql files. This means that table is\n> local to that sql test file and not used in any other test file.\n>\n> 2) I don't see any comment on why this was not added in the cleanup\n> section of twophase.sql, but drop for other two test tables are done.\n>\n> 3) I ran \"make check-world\" with the patch and I don't see any failures.\n>\n> Kindly correct, if I missed anything.\n>\n>\n> Regards,\n> Nishant (EDB).\n>\n\nHi,Any taker or rejector for above? -- It's a very small 'good to have' change patch for cleanup.Thanks,Nishant (EDB).On Tue, Sep 26, 2023 at 6:31 PM Nishant Sharma <[email protected]> wrote:Hi,PFA small code cleanup in twophase.sql. Which contains a drop tablestatement for 'test_prepared_savepoint'. Which, to me, appears to bemissing in the cleanup section of that file.To support it I have below points:-1) Grepping this table 'test_prepared_savepoint' shows occurrencesonly in twophase.out & twophase.sql files. This means that table islocal to that sql test file and not used in any other test file.2) I don't see any comment on why this was not added in the cleanupsection of twophase.sql, but drop for other two test tables are done.3) I ran \"make check-world\" with the patch and I don't see any failures.Kindly correct, if I missed anything.Regards,Nishant (EDB).", "msg_date": "Tue, 10 Oct 2023 11:27:14 +0530", "msg_from": "Nishant Sharma <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [Code Cleanup] : Small code cleanup in twophase.sql" }, { "msg_contents": "On Tue, Sep 26, 2023 at 06:31:42PM +0530, Nishant Sharma wrote:\n> To support it I have below points:-\n> \n> 1) Grepping this table 'test_prepared_savepoint' shows occurrences\n> only in twophase.out & twophase.sql files. This means that table is\n> local to that sql test file and not used in any other test file.\n> \n> 2) I don't see any comment on why this was not added in the cleanup\n> section of twophase.sql, but drop for other two test tables are done.\n> \n> 3) I ran \"make check-world\" with the patch and I don't see any failures.\n\nNote that sometimes tables are left around in the regression tests for\npg_upgrade, so as we can test dedicated upgrade paths for some object\ntypes. But, here, I think you're right: this is not a table that\nmatters for an upgrade and the end of the test file also expects a\ncleanup. So okay for me to drop this table here.\n--\nMichael", "msg_date": "Tue, 10 Oct 2023 15:04:14 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [Code Cleanup] : Small code cleanup in twophase.sql" } ]
[ { "msg_contents": "/* Typedef for callback function for table_index_build_scan */\ntypedef void (*IndexBuildCallback) (Relation index,\n\t\t\t\t\t\t\t\t\tItemPointer tid,\n\t\t\t\t\t\t\t\t\tDatum *values,\n\t\t\t\t\t\t\t\t\tbool *isnull,\n\t\t\t\t\t\t\t\t\tbool tupleIsAlive,\n\t\t\t\t\t\t\t\t\tvoid *state);\nWhen we build an index on an existed heap table, so pg will read the tuple one by one and give the tuple’s hepatid as this ItemPointer, is that right?\n\n", "msg_date": "Tue, 26 Sep 2023 22:16:15 +0800", "msg_from": "jacktby jacktby <[email protected]>", "msg_from_op": true, "msg_subject": "What's the ItemPointer's meaning here?" } ]
[ { "msg_contents": "typedef bool (*aminsert_function) (Relation indexRelation,\n\t\t\t\t\t\t\t\t Datum *values,\n\t\t\t\t\t\t\t\t bool *isnull,\n\t\t\t\t\t\t\t\t ItemPointer heap_tid,\n\t\t\t\t\t\t\t\t Relation heapRelation,\n\t\t\t\t\t\t\t\t IndexUniqueCheck checkUnique,\n\t\t\t\t\t\t\t\t bool indexUnchanged,\n\t\t\t\t\t\t\t\t struct IndexInfo *indexInfo);\n\nWhy is there a heap_tid, We haven’t inserted the value, so where does it from ?\ntypedef bool (*aminsert_function) (Relation indexRelation,   Datum *values,   bool *isnull,   ItemPointer heap_tid,   Relation heapRelation,   IndexUniqueCheck checkUnique,   bool indexUnchanged,   struct IndexInfo *indexInfo);Why is there a heap_tid, We haven’t inserted the value, so where does it from ?", "msg_date": "Tue, 26 Sep 2023 22:31:09 +0800", "msg_from": "jacktby jacktby <[email protected]>", "msg_from_op": true, "msg_subject": "Index AmInsert Parameter Confused?" }, { "msg_contents": "On Tue, 26 Sept 2023 at 18:38, jacktby jacktby <[email protected]> wrote:\n>\n> typedef bool (*aminsert_function) (Relation indexRelation,\n> Datum *values,\n> bool *isnull,\n> ItemPointer heap_tid,\n> Relation heapRelation,\n> IndexUniqueCheck checkUnique,\n> bool indexUnchanged,\n> struct IndexInfo *indexInfo);\n>\n> Why is there a heap_tid, We haven’t inserted the value, so where does it from ?\n\nIndex insertion only happens after the TableAM tuple has been\ninserted. As indexes refer to locations in the heap, this TID contains\nthe TID of the table tuple that contains the indexed values, so that\nthe index knows which tuple to refer to.\n\nNote that access/amapi.h describes only index AM APIs; it does not\ncover the table AM APIs descibed in access/tableam.h\n\nKind regards,\n\nMatthias van de Meent\n\n\n", "msg_date": "Tue, 26 Sep 2023 18:45:05 +0200", "msg_from": "Matthias van de Meent <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index AmInsert Parameter Confused?" }, { "msg_contents": "\n\n> 2023年9月27日 00:45,Matthias van de Meent <[email protected]> 写道:\n> \n> On Tue, 26 Sept 2023 at 18:38, jacktby jacktby <[email protected]> wrote:\n>> \n>> typedef bool (*aminsert_function) (Relation indexRelation,\n>> Datum *values,\n>> bool *isnull,\n>> ItemPointer heap_tid,\n>> Relation heapRelation,\n>> IndexUniqueCheck checkUnique,\n>> bool indexUnchanged,\n>> struct IndexInfo *indexInfo);\n>> \n>> Why is there a heap_tid, We haven’t inserted the value, so where does it from ?\n> \n> Index insertion only happens after the TableAM tuple has been\n> inserted. As indexes refer to locations in the heap, this TID contains\n> the TID of the table tuple that contains the indexed values, so that\n> the index knows which tuple to refer to.\n> \n> Note that access/amapi.h describes only index AM APIs; it does not\n> cover the table AM APIs descibed in access/tableam.h\n> \n> Kind regards,\n> \n> Matthias van de Meent\n1.Thanks, so if we insert a tuple into a table which has a index on itself, pg will insert tuple into heap firstly, and the give the heaptid form heap to the Index am api right?\n2. I’m trying to implement a new index, but I just need the data held in index table, I hope it’s not inserted into heap, because the all data I want can be in index table.\n\n", "msg_date": "Wed, 27 Sep 2023 11:03:40 +0800", "msg_from": "jacktby jacktby <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Index AmInsert Parameter Confused?" }, { "msg_contents": "On Wed, 27 Sept 2023 at 05:03, jacktby jacktby <[email protected]> wrote:\n>\n>\n>\n> > 2023年9月27日 00:45,Matthias van de Meent <[email protected]> 写道:\n> >\n> > On Tue, 26 Sept 2023 at 18:38, jacktby jacktby <[email protected]> wrote:\n> >>\n> >> typedef bool (*aminsert_function) (Relation indexRelation,\n> >> Datum *values,\n> >> bool *isnull,\n> >> ItemPointer heap_tid,\n> >> Relation heapRelation,\n> >> IndexUniqueCheck checkUnique,\n> >> bool indexUnchanged,\n> >> struct IndexInfo *indexInfo);\n> >>\n> >> Why is there a heap_tid, We haven’t inserted the value, so where does it from ?\n> >\n> > Index insertion only happens after the TableAM tuple has been\n> > inserted. As indexes refer to locations in the heap, this TID contains\n> > the TID of the table tuple that contains the indexed values, so that\n> > the index knows which tuple to refer to.\n> >\n> > Note that access/amapi.h describes only index AM APIs; it does not\n> > cover the table AM APIs descibed in access/tableam.h\n> >\n> > Kind regards,\n> >\n> > Matthias van de Meent\n> 1.Thanks, so if we insert a tuple into a table which has a index on itself, pg will insert tuple into heap firstly, and the give the heaptid form heap to the Index am api right?\n\nCorrect. I think this is also detailed in various places of the\ndocumentation, yes.\n\n> 2. I’m trying to implement a new index, but I just need the data held in index table, I hope it’s not inserted into heap, because the all data I want can be in index table.\n\nIn PostgreSQL, a table maintains the source of truth for the data, and\nindexes are ephemeral data structures that improve the speed of\nquerying the data in their table. As such, dropping an index should\nnot impact the availability of the table's data.\nIf the only copy of your (non-derived) data is in the index, then it\nis likely that some normal table operations will result in failures\ndue to the tableAM/indexAM breaking built-in assumptions about access\nmethods and data availability.\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n\n", "msg_date": "Wed, 27 Sep 2023 12:08:52 +0200", "msg_from": "Matthias van de Meent <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index AmInsert Parameter Confused?" }, { "msg_contents": "> 2023年9月27日 18:08,Matthias van de Meent <[email protected]> 写道:\n> \n> On Wed, 27 Sept 2023 at 05:03, jacktby jacktby <[email protected] <mailto:[email protected]>> wrote:\n>> \n>> \n>> \n>>> 2023年9月27日 00:45,Matthias van de Meent <[email protected]> 写道:\n>>> \n>>> On Tue, 26 Sept 2023 at 18:38, jacktby jacktby <[email protected]> wrote:\n>>>> \n>>>> typedef bool (*aminsert_function) (Relation indexRelation,\n>>>> Datum *values,\n>>>> bool *isnull,\n>>>> ItemPointer heap_tid,\n>>>> Relation heapRelation,\n>>>> IndexUniqueCheck checkUnique,\n>>>> bool indexUnchanged,\n>>>> struct IndexInfo *indexInfo);\n>>>> \n>>>> Why is there a heap_tid, We haven’t inserted the value, so where does it from ?\n>>> \n>>> Index insertion only happens after the TableAM tuple has been\n>>> inserted. As indexes refer to locations in the heap, this TID contains\n>>> the TID of the table tuple that contains the indexed values, so that\n>>> the index knows which tuple to refer to.\n>>> \n>>> Note that access/amapi.h describes only index AM APIs; it does not\n>>> cover the table AM APIs descibed in access/tableam.h\n>>> \n>>> Kind regards,\n>>> \n>>> Matthias van de Meent\n>> 1.Thanks, so if we insert a tuple into a table which has a index on itself, pg will insert tuple into heap firstly, and the give the heaptid form heap to the Index am api right?\n> \n> Correct. I think this is also detailed in various places of the\n> documentation, yes.\n> \n>> 2. I’m trying to implement a new index, but I just need the data held in index table, I hope it’s not inserted into heap, because the all data I want can be in index table.\n> \n> In PostgreSQL, a table maintains the source of truth for the data, and\n> indexes are ephemeral data structures that improve the speed of\n> querying the data in their table. As such, dropping an index should\n> not impact the availability of the table's data.\n> If the only copy of your (non-derived) data is in the index, then it\n> is likely that some normal table operations will result in failures\n> due to the tableAM/indexAM breaking built-in assumptions about access\n> methods and data availability.\n> \n> Kind regards,\n> \n> Matthias van de Meent\n> Neon (https://neon.tech <https://neon.tech/>)\nSo do I need to free the ItemPointer and Values in the func implemented by myself?\n2023年9月27日 18:08,Matthias van de Meent <[email protected]> 写道:On Wed, 27 Sept 2023 at 05:03, jacktby jacktby <[email protected]> wrote:2023年9月27日 00:45,Matthias van de Meent <[email protected]> 写道:On Tue, 26 Sept 2023 at 18:38, jacktby jacktby <[email protected]> wrote:typedef bool (*aminsert_function) (Relation indexRelation,Datum *values,bool *isnull,ItemPointer heap_tid,Relation heapRelation,IndexUniqueCheck checkUnique,bool indexUnchanged,struct IndexInfo *indexInfo);Why is there a heap_tid, We haven’t inserted the value, so where does it from ?Index insertion only happens after the TableAM tuple has beeninserted. As indexes refer to locations in the heap, this TID containsthe TID of the table tuple that contains the indexed values, so thatthe index knows which tuple to refer to.Note that access/amapi.h describes only index AM APIs; it does notcover the table AM APIs descibed in access/tableam.hKind regards,Matthias van de Meent1.Thanks, so if we insert a tuple into a table which has a index on itself, pg will insert tuple into heap firstly, and the give the heaptid form heap to the Index am api right?Correct. I think this is also detailed in various places of thedocumentation, yes.2. I’m trying to implement a new index, but I just need the data held in index table, I hope it’s not inserted into heap, because the all data I want can be in index table.In PostgreSQL, a table maintains the source of truth for the data, andindexes are ephemeral data structures that improve the speed ofquerying the data in their table. As such, dropping an index shouldnot impact the availability of the table's data.If the only copy of your (non-derived) data is in the index, then itis likely that some normal table operations will result in failuresdue to the tableAM/indexAM breaking built-in assumptions about accessmethods and data availability.Kind regards,Matthias van de MeentNeon (https://neon.tech)So do I need to free the ItemPointer and Values in the func implemented by myself?", "msg_date": "Thu, 28 Sep 2023 00:17:43 +0800", "msg_from": "jacktby jacktby <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Index AmInsert Parameter Confused?" } ]
[ { "msg_contents": "Since updating to Xcode 15.0, my macOS machines have been\nspitting a bunch of linker-generated warnings. There\nseems to be one instance of\n\nld: warning: -multiply_defined is obsolete\n\nfor each loadable module we link, and some program links complain\n\nld: warning: ignoring duplicate libraries: '-lpgcommon', '-lpgport'\n\nYou can see these in the build logs for both sifaka [1] and\nindri [2], so MacPorts isn't affecting it.\n\nI'd held out some hope that this was a transient problem due to\nthe underlying OS still being Ventura, but I just updated another\nmachine to shiny new Sonoma (14.0), and it's still doing it.\nGuess we gotta do something about it.\n\nWe used to need \"-multiply_defined suppress\" to suppress other\nlinker warnings. I tried removing that from the Makefile.shlib\nrecipes for Darwin, and those complaints go away while no new\nones appear, so that's good --- but I wonder whether slightly\nolder toolchain versions will still want it.\n\nAs for the duplicate libraries, yup guilty as charged, but\nI think we were doing that intentionally to satisfy some other\nold toolchains. I wonder whether removing the duplication\nwill create new problems.\n\n\t\t\tregards, tom lane\n\n[1] https://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=sifaka&dt=2023-09-26%2021%3A09%3A01&stg=build\n[2] https://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=indri&dt=2023-09-26%2021%3A42%3A17&stg=build\n\n\n", "msg_date": "Tue, 26 Sep 2023 18:23:18 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Annoying build warnings from latest Apple toolchain" }, { "msg_contents": "I wrote:\n> Since updating to Xcode 15.0, my macOS machines have been\n> spitting a bunch of linker-generated warnings. There\n> seems to be one instance of\n> ld: warning: -multiply_defined is obsolete\n> for each loadable module we link ...\n\nI poked into this a little more. We started using \"-multiply_defined\nsuppress\" in commit 9df308697 of 2004-07-13, which was in the OS X 10.3\nera. I failed to find any specific discussion of that switch in our\narchives, but the commit message suggests that I probably stole it\nfrom a patch the Fink project was carrying.\n\nGoogling finds some non-authoritative claims that \"-multiply_defined\"\nhas been a no-op since OS X 10.9 (Mavericks). I don't have anything\nolder than 10.15 to check, but removing it on 10.15 does not seem\nto cause any problems.\n\nSo I think we can safely just remove this switch from Makefile.shlib.\nThe meson build process isn't invoking it either I think.\n\nThe other thing will take a bit more work ...\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 26 Sep 2023 20:37:36 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Annoying build warnings from latest Apple toolchain" }, { "msg_contents": "I wrote:\n> Since updating to Xcode 15.0, my macOS machines have been\n> spitting a bunch of linker-generated warnings. ...\n> some program links complain\n\n> ld: warning: ignoring duplicate libraries: '-lpgcommon', '-lpgport'\n\nI found that this is being caused by the libpq_pgport hack in\nMakefile.global.in, which ensures that libpgcommon and libpgport\nget linked before libpq. The comment freely admits that it results in\nlinking libpgcommon and libpgport twice. Now, AFAICS that whole hack\nis unnecessary on any platform where we know how to do symbol export\ncontrol, because then libpq won't expose any of the troublesome\nsymbols to begin with. So we can resolve the problem by just not\ndoing that on macOS, as in the attached draft patch. I've confirmed\nthat this suppresses the duplicate-libraries warnings on Xcode 15.0\nwithout creating any issues on older macOS (though I'm only in a\nposition to test as far back as Catalina).\n\nThe patch is written to change things only on macOS, but I wonder\nif we should be more aggressive and change it for all platforms\nwhere we have symbol export control (which is almost everything\nthese days). I doubt it'd make any noticeable difference in\nbuild time, but doing that would give us more test coverage\nwhich would help expose any weak spots.\n\nI've not yet looked at the meson build infrastructure to\nsee if it needs a corresponding change.\n\n\t\t\tregards, tom lane", "msg_date": "Wed, 27 Sep 2023 15:05:49 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Annoying build warnings from latest Apple toolchain" }, { "msg_contents": "I wrote:\n> I've not yet looked at the meson build infrastructure to\n> see if it needs a corresponding change.\n\nI think it doesn't, as long as all the relevant build targets\nwrite their dependencies with \"frontend_code\" before \"libpq\".\n(The need for this is, of course, documented nowhere. The state\nof the documentation for our meson build system is abysmal.)\n\nHowever, it's hard to test this, because the meson build\nseems completely broken on current macOS:\n\n-----\n$ meson setup build --prefix=$HOME/pginstall\nThe Meson build system\nVersion: 0.64.1\nSource dir: /Users/tgl/pgsql\nBuild dir: /Users/tgl/pgsql/build\nBuild type: native build\nProject name: postgresql\nProject version: 17devel\n\nmeson.build:9:0: ERROR: Unable to detect linker for compiler `cc -Wl,--version`\nstdout: \nstderr: ld: unknown options: --version \nclang: error: linker command failed with exit code 1 (use -v to see invocation)\n\nA full log can be found at /Users/tgl/pgsql/build/meson-logs/meson-log.txt\n-----\n\nThat log file offers no more detail than the terminal output did.\n\n(I also tried with a more recent meson version, 1.1.1, with\nthe same result.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 27 Sep 2023 16:52:44 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Annoying build warnings from latest Apple toolchain" }, { "msg_contents": "Hi,\n\nOn 2023-09-27 16:52:44 -0400, Tom Lane wrote:\n> I wrote:\n> > I've not yet looked at the meson build infrastructure to\n> > see if it needs a corresponding change.\n> \n> I think it doesn't, as long as all the relevant build targets\n> write their dependencies with \"frontend_code\" before \"libpq\".\n\nHm, that's not great. I don't think that should be required. I'll try to take\na look at why that's needed.\n\n\n> However, it's hard to test this, because the meson build\n> seems completely broken on current macOS:\n\nI am travelling and I don't quite dare to upgrade my mac mini remotely. So I\ncan't try Sonoma directly.\n\nBut CI worked after switching to sonoma - although installing packages from\nmacports took forever, due to macports building all packages locally.\n\nhttps://cirrus-ci.com/task/5133869171605504\n\nThere's some weird warnings about hashlib/blake2, but it looks like that's a\npython installation issue. Looks like this is with python from macports in\nPATH.\n\n[00:59:14.442] ERROR:root:code for hash blake2b was not found.\n[00:59:14.442] Traceback (most recent call last):\n[00:59:14.442] File \"/opt/local/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/hashlib.py\", line 307, in <module>\n[00:59:14.442] globals()[__func_name] = __get_hash(__func_name)\n[00:59:14.442] ^^^^^^^^^^^^^^^^^^^^^^^\n[00:59:14.442] File \"/opt/local/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/hashlib.py\", line 129, in __get_openssl_constructor\n[00:59:14.442] return __get_builtin_constructor(name)\n[00:59:14.442] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n[00:59:14.442] File \"/opt/local/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/hashlib.py\", line 123, in __get_builtin_constructor\n[00:59:14.442] raise ValueError('unsupported hash type ' + name)\n[00:59:14.442] ValueError: unsupported hash type blake2b\n\nThis just happens whenever python's hashlib - supposedly in the standard\nlibrary - is imported.\n\n\nThere *are* some buildsystem warnings:\n[00:59:27.289] [260/2328] Linking target src/interfaces/libpq/libpq.5.dylib\n[00:59:27.289] ld: warning: -undefined error is deprecated\n[00:59:27.289] ld: warning: ignoring -e, not used for output type\n\nFull command:\n[1/1] cc -o src/interfaces/libpq/libpq.5.dylib src/interfaces/libpq/libpq.5.dylib.p/fe-auth-scram.c.o src/interfaces/libpq/libpq.5.dylib.p/fe-auth.c.o src/interfaces/libpq/libpq.5.dylib.p/fe-connect.c.o src/interfaces/libpq/libpq.5.dylib.p/fe-exec.c.o src/interfaces/libpq/libpq.5.dylib.p/fe-lobj.c.o src/interfaces/libpq/libpq.5.dylib.p/fe-misc.c.o src/interfaces/libpq/libpq.5.dylib.p/fe-print.c.o src/interfaces/libpq/libpq.5.dylib.p/fe-protocol3.c.o src/interfaces/libpq/libpq.5.dylib.p/fe-secure.c.o src/interfaces/libpq/libpq.5.dylib.p/fe-trace.c.o src/interfaces/libpq/libpq.5.dylib.p/legacy-pqsignal.c.o src/interfaces/libpq/libpq.5.dylib.p/libpq-events.c.o src/interfaces/libpq/libpq.5.dylib.p/pqexpbuffer.c.o src/interfaces/libpq/libpq.5.dylib.p/fe-secure-common.c.o src/interfaces/libpq/libpq.5.dylib.p/fe-secure-openssl.c.o src/interfaces/libpq/libpq.5.dylib.p/fe-gssapi-common.c.o src/interfaces/libpq/libpq.5.dylib.p/fe-secure-gssapi.c.o -Wl,-dead_strip_dylibs -Wl,-headerpad_max_install_names -Wl,-undefined,error -shared -install_name @rpath/libpq.5.dylib -compatibility_version 5 -current_version 5.17 -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX14.0.sdk -Og -ggdb -Wl,-rpath,/opt/local/lib -Wl,-rpath,/opt/local/libexec/openssl3/lib src/common/libpgcommon_shlib.a src/port/libpgport_shlib.a -exported_symbols_list=/Users/admin/pgsql/build/src/interfaces/libpq/exports.list -lm /opt/local/lib/libintl.dylib /opt/local/lib/libgssapi_krb5.dylib /opt/local/lib/libldap.dylib /opt/local/lib/liblber.dylib /opt/local/libexec/openssl3/lib/libssl.dylib /opt/local/libexec/openssl3/lib/libcrypto.dylib /opt/local/lib/libz.dylib /opt/local/lib/libzstd.dylib\nld: warning: -undefined error is deprecated\nld: warning: ignoring -e, not used for output type\n\nSo we need to make the addition of -Wl,-undefined,error conditional, that\nshould be easy enough. Although I'm a bit confused about this being\ndeprecated.\n\nFor the -e bit, this seems to do the trick:\n@@ -224,7 +224,7 @@ elif host_system == 'darwin'\n library_path_var = 'DYLD_LIBRARY_PATH'\n \n export_file_format = 'darwin'\n- export_fmt = '-exported_symbols_list=@0@'\n+ export_fmt = '-Wl,-exported_symbols_list,@0@'\n \n mod_link_args_fmt = ['-bundle_loader', '@0@']\n mod_link_with_dir = 'bindir'\n\nIt's quite annoying that apple is changing things option syntax.\n\n\n> (I also tried with a more recent meson version, 1.1.1, with\n> the same result.)\n\nLooks like you need 1.2 for the new clang / ld output... Apparently apple's\nlinker changed the format of its version output :/.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 28 Sep 2023 13:32:13 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Annoying build warnings from latest Apple toolchain" }, { "msg_contents": "Andres Freund <[email protected]> writes:\n> On 2023-09-27 16:52:44 -0400, Tom Lane wrote:\n>> I think it doesn't, as long as all the relevant build targets\n>> write their dependencies with \"frontend_code\" before \"libpq\".\n\n> Hm, that's not great. I don't think that should be required. I'll try to take\n> a look at why that's needed.\n\nWell, it's only important on platforms where we can't restrict\nlibpq.so from exporting all symbols. I don't know how close we are\nto deciding that such cases are no longer interesting to worry about.\nMakefile.shlib seems to know how to do it everywhere except Windows,\nand I imagine we know how to do it over in the MSVC scripts.\n\n>> However, it's hard to test this, because the meson build\n>> seems completely broken on current macOS:\n\n> Looks like you need 1.2 for the new clang / ld output... Apparently apple's\n> linker changed the format of its version output :/.\n\nAh, yeah, updating MacPorts again brought in meson 1.2.1 which seems\nto work. I now see a bunch of\n\nld: warning: ignoring -e, not used for output type\nld: warning: -undefined error is deprecated\n\nwhich are unrelated. There's still one duplicate warning\nfrom the backend link:\n\nld: warning: ignoring duplicate libraries: '-lpam'\n\nI'm a bit baffled why that's showing up; there's no obvious\ndouble reference to pam.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 28 Sep 2023 16:46:08 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Annoying build warnings from latest Apple toolchain" }, { "msg_contents": "Hi,\n\nOn 2023-09-28 16:46:08 -0400, Tom Lane wrote:\n> Andres Freund <[email protected]> writes:\n> > On 2023-09-27 16:52:44 -0400, Tom Lane wrote:\n> >> I think it doesn't, as long as all the relevant build targets\n> >> write their dependencies with \"frontend_code\" before \"libpq\".\n> \n> > Hm, that's not great. I don't think that should be required. I'll try to take\n> > a look at why that's needed.\n> \n> Well, it's only important on platforms where we can't restrict\n> libpq.so from exporting all symbols. I don't know how close we are\n> to deciding that such cases are no longer interesting to worry about.\n> Makefile.shlib seems to know how to do it everywhere except Windows,\n> and I imagine we know how to do it over in the MSVC scripts.\n\nHm, then I'd argue that we don't need to care about it anymore. The meson\nbuild does the necessary magic on windows, as do the current msvc scripts.\n\nI think right now it doesn't work as-is on sonoma, because apple decided to\nchange the option syntax, which is what causes the -e warning below, so the\nrelevant option is just ignored.\n\n\n> There's still one duplicate warning\n> from the backend link:\n> \n> ld: warning: ignoring duplicate libraries: '-lpam'\n> \n> I'm a bit baffled why that's showing up; there's no obvious\n> double reference to pam.\n\nI think it's because multiple libraries/binaries depend on it. Meson knows how\nto deduplicate libraries found via pkg-config (presumably because that has\nenough information for a topological sort), but apparently not when they're\nfound as \"raw\" libraries. Until now that was also just pretty harmless, so I\nunderstand not doing anything about it.\n\nI see a way to avoid the warnings, but perhaps it's better to ask the meson\nfolks to put in a generic way of dealing with this.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 28 Sep 2023 15:22:48 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Annoying build warnings from latest Apple toolchain" }, { "msg_contents": "Andres Freund <[email protected]> writes:\n> On 2023-09-28 16:46:08 -0400, Tom Lane wrote:\n>> Well, it's only important on platforms where we can't restrict\n>> libpq.so from exporting all symbols. I don't know how close we are\n>> to deciding that such cases are no longer interesting to worry about.\n>> Makefile.shlib seems to know how to do it everywhere except Windows,\n>> and I imagine we know how to do it over in the MSVC scripts.\n\n> Hm, then I'd argue that we don't need to care about it anymore. The meson\n> build does the necessary magic on windows, as do the current msvc scripts.\n\nIf we take that argument seriously, then I'm inclined to adjust my\nupthread patch for Makefile.global.in so that it removes the extra\ninclusions of libpgport/libpgcommon everywhere, not only macOS.\nThe rationale would be that it's not worth worrying about ABI\nstability details on any straggler platforms.\n\n> I think right now it doesn't work as-is on sonoma, because apple decided to\n> change the option syntax, which is what causes the -e warning below, so the\n> relevant option is just ignored.\n\nHmm, we'd better fix that then. Or is it their bug? It looks to me like\nclang's argument is -exported_symbols_list=/path/to/exports.list, so\nit must be translating that to \"-e\".\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 28 Sep 2023 18:58:49 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Annoying build warnings from latest Apple toolchain" }, { "msg_contents": "I wrote:\n> Andres Freund <[email protected]> writes:\n>> I think right now it doesn't work as-is on sonoma, because apple decided to\n>> change the option syntax, which is what causes the -e warning below, so the\n>> relevant option is just ignored.\n\n> Hmm, we'd better fix that then. Or is it their bug? It looks to me like\n> clang's argument is -exported_symbols_list=/path/to/exports.list, so\n> it must be translating that to \"-e\".\n\nLooking closer, the documented syntax is\n\n\t-exported_symbols_list filename\n\n(two arguments, not one with an \"=\"). That is what our Makefiles\nuse, and it still works fine with latest Xcode. However, meson.build\nthinks it can get away with one argument containing \"=\", and evidently\nthat doesn't work now (or maybe it never did?).\n\nI tried\n\n export_fmt = '-exported_symbols_list @0@'\n\nand\n\n export_fmt = ['-exported_symbols_list', '@0@']\n\nand neither of those did what I wanted, so maybe I will have to\nstudy meson's command language sometime soon. In the meantime,\nI suppose this might be an easy fix for somebody who knows their\nway around meson.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 28 Sep 2023 19:17:37 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Annoying build warnings from latest Apple toolchain" }, { "msg_contents": "Hi,\n\nOn 2023-09-28 19:17:37 -0400, Tom Lane wrote:\n> I wrote:\n> > Andres Freund <[email protected]> writes:\n> >> I think right now it doesn't work as-is on sonoma, because apple decided to\n> >> change the option syntax, which is what causes the -e warning below, so the\n> >> relevant option is just ignored.\n> \n> > Hmm, we'd better fix that then. Or is it their bug? It looks to me like\n> > clang's argument is -exported_symbols_list=/path/to/exports.list, so\n> > it must be translating that to \"-e\".\n> \n> Looking closer, the documented syntax is\n> \n> \t-exported_symbols_list filename\n> \n> (two arguments, not one with an \"=\"). That is what our Makefiles\n> use, and it still works fine with latest Xcode. However, meson.build\n> thinks it can get away with one argument containing \"=\", and evidently\n> that doesn't work now (or maybe it never did?).\n\nIt does still work on Ventura.\n\n\n> I tried\n> \n> export_fmt = '-exported_symbols_list @0@'\n\nThat would expand to a single argument with a space inbetween.\n\n\n> and\n> \n> export_fmt = ['-exported_symbols_list', '@0@']\n\nThat would work in many places, but not here, export_fmt is used as a format\nstring... We could make the callsites do that for each array element, but\nthere's an easier solution that seems to work for both Ventura and Sonoma -\nhowever I don't have anything older to test with.\n\nTBH, I find it hard to understand what arguments go to the linker and which to\nthe compiler on macos. The argument is documented for the linker and not the\ncompiler, but so far we'd been passing it to the compiler, so there must be\nsome logic forwarding it.\n\nLooking through the clang code, I see various llvm libraries using\n-Wl,-exported_symbols_list and there are tests\n(clang/test/Driver/darwin-ld.c) ensuring both syntaxes work.\n\nThus the easiest fix looks to be to use this:\n\ndiff --git a/meson.build b/meson.build\nindex 5422885b0a2..16a2b0f801e 100644\n--- a/meson.build\n+++ b/meson.build\n@@ -224,7 +224,7 @@ elif host_system == 'darwin'\n library_path_var = 'DYLD_LIBRARY_PATH'\n \n export_file_format = 'darwin'\n- export_fmt = '-exported_symbols_list=@0@'\n+ export_fmt = '-Wl,-exported_symbols_list,@0@'\n \n mod_link_args_fmt = ['-bundle_loader', '@0@']\n mod_link_with_dir = 'bindir'\n\n\nI don't have anything older than Ventura to check though.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 28 Sep 2023 19:20:27 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Annoying build warnings from latest Apple toolchain" }, { "msg_contents": "Hi,\n\nOn 2023-09-28 19:20:27 -0700, Andres Freund wrote:\n> Thus the easiest fix looks to be to use this:\n> \n> diff --git a/meson.build b/meson.build\n> index 5422885b0a2..16a2b0f801e 100644\n> --- a/meson.build\n> +++ b/meson.build\n> @@ -224,7 +224,7 @@ elif host_system == 'darwin'\n> library_path_var = 'DYLD_LIBRARY_PATH'\n> \n> export_file_format = 'darwin'\n> - export_fmt = '-exported_symbols_list=@0@'\n> + export_fmt = '-Wl,-exported_symbols_list,@0@'\n> \n> mod_link_args_fmt = ['-bundle_loader', '@0@']\n> mod_link_with_dir = 'bindir'\n> \n> \n> I don't have anything older than Ventura to check though.\n\nAttached is the above change and a commit to change CI over to Sonoma. Not\nsure when we should switch, but it seems useful to include for testing\npurposes at the very least.\n\nGreetings,\n\nAndres Freund", "msg_date": "Thu, 28 Sep 2023 19:33:15 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Annoying build warnings from latest Apple toolchain" }, { "msg_contents": "Andres Freund <[email protected]> writes:\n> On 2023-09-28 19:20:27 -0700, Andres Freund wrote:\n>> Thus the easiest fix looks to be to use this:\n>> - export_fmt = '-exported_symbols_list=@0@'\n>> + export_fmt = '-Wl,-exported_symbols_list,@0@'\n>> I don't have anything older than Ventura to check though.\n\nI don't have meson installed on my surviving Catalina box, but\nI tried the equivalent thing in the Makefile universe:\n\ndiff --git a/src/Makefile.shlib b/src/Makefile.shlib\nindex f94d59d1c5..f2ed222cc7 100644\n--- a/src/Makefile.shlib\n+++ b/src/Makefile.shlib\n@@ -136,7 +136,7 @@ ifeq ($(PORTNAME), darwin)\n BUILD.exports = $(AWK) '/^[^\\#]/ {printf \"_%s\\n\",$$1}' $< >$@\n exports_file = $(SHLIB_EXPORTS:%.txt=%.list)\n ifneq (,$(exports_file))\n- exported_symbols_list = -exported_symbols_list $(exports_file)\n+ exported_symbols_list = -Wl,-exported_symbols_list,$(exports_file)\n endif\n endif\n\nThat builds and produces correctly-symbol-trimmed shlibs, so I'd\nsay it's fine. (Perhaps we should apply the above to HEAD\nalongside the meson.build fix, to get more test coverage?)\n\n> Attached is the above change and a commit to change CI over to Sonoma. Not\n> sure when we should switch, but it seems useful to include for testing\n> purposes at the very least.\n\nNo opinion on when to switch CI. Sonoma is surely pretty bleeding edge\nyet.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 28 Sep 2023 22:53:09 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Annoying build warnings from latest Apple toolchain" }, { "msg_contents": "Hi,\n\nOn 2023-09-28 22:53:09 -0400, Tom Lane wrote:\n> Andres Freund <[email protected]> writes:\n> > On 2023-09-28 19:20:27 -0700, Andres Freund wrote:\n> >> Thus the easiest fix looks to be to use this:\n> >> - export_fmt = '-exported_symbols_list=@0@'\n> >> + export_fmt = '-Wl,-exported_symbols_list,@0@'\n> >> I don't have anything older than Ventura to check though.\n> \n> I don't have meson installed on my surviving Catalina box, but\n> I tried the equivalent thing in the Makefile universe:\n> \n> diff --git a/src/Makefile.shlib b/src/Makefile.shlib\n> index f94d59d1c5..f2ed222cc7 100644\n> --- a/src/Makefile.shlib\n> +++ b/src/Makefile.shlib\n> @@ -136,7 +136,7 @@ ifeq ($(PORTNAME), darwin)\n> BUILD.exports = $(AWK) '/^[^\\#]/ {printf \"_%s\\n\",$$1}' $< >$@\n> exports_file = $(SHLIB_EXPORTS:%.txt=%.list)\n> ifneq (,$(exports_file))\n> - exported_symbols_list = -exported_symbols_list $(exports_file)\n> + exported_symbols_list = -Wl,-exported_symbols_list,$(exports_file)\n> endif\n> endif\n> \n> That builds and produces correctly-symbol-trimmed shlibs, so I'd\n> say it's fine.\n\nThanks for testing!\n\nI'll go and push that 16/HEAD then.\n\n\n> (Perhaps we should apply the above to HEAD alongside the meson.build fix, to\n> get more test coverage?)\n\nThe macos animals BF seem to run Ventura, so I think it'd not really provide\nadditional coverage that CI and your manual testing already has. So probably\nnot worth it from that angle?\n\n\n> > Attached is the above change and a commit to change CI over to Sonoma. Not\n> > sure when we should switch, but it seems useful to include for testing\n> > purposes at the very least.\n> \n> No opinion on when to switch CI. Sonoma is surely pretty bleeding edge\n> yet.\n\nYea, it does feel like that...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 29 Sep 2023 07:30:53 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Annoying build warnings from latest Apple toolchain" }, { "msg_contents": "Andres Freund <[email protected]> writes:\n> On 2023-09-28 22:53:09 -0400, Tom Lane wrote:\n>> (Perhaps we should apply the above to HEAD alongside the meson.build fix, to\n>> get more test coverage?)\n\n> The macos animals BF seem to run Ventura, so I think it'd not really provide\n> additional coverage that CI and your manual testing already has. So probably\n> not worth it from that angle?\n\nMy thought was that if it's in the tree we'd get testing from\nnon-buildfarm sources.\n\nFWIW, I'm going to update sifaka/indri to Sonoma before too much longer\n(they're already using Xcode 15.0 which is the Sonoma toolchain).\nI recently pulled longfin up to Ventura, and plan to leave it on that\nfor the next year or so. I don't think anyone else is running macOS\nanimals.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 29 Sep 2023 11:11:49 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Annoying build warnings from latest Apple toolchain" }, { "msg_contents": "On Thu Sep 28, 2023 at 5:22 PM CDT, Andres Freund wrote:\n> Hi,\n>\n> On 2023-09-28 16:46:08 -0400, Tom Lane wrote:\n> > There's still one duplicate warning\n> > from the backend link:\n> > \n> > ld: warning: ignoring duplicate libraries: '-lpam'\n> > \n> > I'm a bit baffled why that's showing up; there's no obvious\n> > double reference to pam.\n>\n> I think it's because multiple libraries/binaries depend on it. Meson knows how\n> to deduplicate libraries found via pkg-config (presumably because that has\n> enough information for a topological sort), but apparently not when they're\n> found as \"raw\" libraries. Until now that was also just pretty harmless, so I\n> understand not doing anything about it.\n>\n> I see a way to avoid the warnings, but perhaps it's better to ask the meson\n> folks to put in a generic way of dealing with this.\n\nI wonder if this Meson PR[0] will help.\n\n[0]: https://github.com/mesonbuild/meson/pull/12276\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n", "msg_date": "Fri, 29 Sep 2023 10:24:27 -0500", "msg_from": "\"Tristan Partin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Annoying build warnings from latest Apple toolchain" }, { "msg_contents": "I wrote:\n> Andres Freund <[email protected]> writes:\n>> On 2023-09-28 16:46:08 -0400, Tom Lane wrote:\n>>> Well, it's only important on platforms where we can't restrict\n>>> libpq.so from exporting all symbols. I don't know how close we are\n>>> to deciding that such cases are no longer interesting to worry about.\n>>> Makefile.shlib seems to know how to do it everywhere except Windows,\n>>> and I imagine we know how to do it over in the MSVC scripts.\n\n>> Hm, then I'd argue that we don't need to care about it anymore. The meson\n>> build does the necessary magic on windows, as do the current msvc scripts.\n\n> If we take that argument seriously, then I'm inclined to adjust my\n> upthread patch for Makefile.global.in so that it removes the extra\n> inclusions of libpgport/libpgcommon everywhere, not only macOS.\n> The rationale would be that it's not worth worrying about ABI\n> stability details on any straggler platforms.\n\nLooking closer, it's only since v16 that we have export list support\non all officially-supported platforms. Therefore, I think the prudent\nthing to do in the back branches is use the patch I posted before,\nto suppress the duplicate -l switches only on macOS. In HEAD,\nI propose we simplify life by doing it everywhere, as attached.\n\n\t\t\tregards, tom lane", "msg_date": "Fri, 29 Sep 2023 12:14:40 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Annoying build warnings from latest Apple toolchain" }, { "msg_contents": "Hi,\n\nOn 2023-09-29 12:14:40 -0400, Tom Lane wrote:\n> I wrote:\n> > Andres Freund <[email protected]> writes:\n> >> On 2023-09-28 16:46:08 -0400, Tom Lane wrote:\n> >>> Well, it's only important on platforms where we can't restrict\n> >>> libpq.so from exporting all symbols. I don't know how close we are\n> >>> to deciding that such cases are no longer interesting to worry about.\n> >>> Makefile.shlib seems to know how to do it everywhere except Windows,\n> >>> and I imagine we know how to do it over in the MSVC scripts.\n> \n> >> Hm, then I'd argue that we don't need to care about it anymore. The meson\n> >> build does the necessary magic on windows, as do the current msvc scripts.\n> \n> > If we take that argument seriously, then I'm inclined to adjust my\n> > upthread patch for Makefile.global.in so that it removes the extra\n> > inclusions of libpgport/libpgcommon everywhere, not only macOS.\n> > The rationale would be that it's not worth worrying about ABI\n> > stability details on any straggler platforms.\n> \n> Looking closer, it's only since v16 that we have export list support\n> on all officially-supported platforms.\n\nOh, right, before that Solaris didn't support it. I guess we could backpatch\nthat commit, it's simple enough, but I don't think I care enough about Solaris\nto do so.\n\n\n> Therefore, I think the prudent thing to do in the back branches is use the\n> patch I posted before, to suppress the duplicate -l switches only on macOS.\n> In HEAD, I propose we simplify life by doing it everywhere, as attached.\n\nMakes sense.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sat, 30 Sep 2023 10:13:06 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Annoying build warnings from latest Apple toolchain" }, { "msg_contents": "Andres Freund <[email protected]> writes:\n> On 2023-09-29 12:14:40 -0400, Tom Lane wrote:\n>> Looking closer, it's only since v16 that we have export list support\n>> on all officially-supported platforms.\n\n> Oh, right, before that Solaris didn't support it. I guess we could backpatch\n> that commit, it's simple enough, but I don't think I care enough about Solaris\n> to do so.\n\nHPUX would be an issue too, as we never did figure out how to do\nexport control on that. However, I doubt it would be a great idea\nto back-patch export control in minor releases, even if we had\nthe patch at hand. That would be an ABI break of its own, although\nit'd only affect clients that hadn't done things quite correctly.\n\n>> Therefore, I think the prudent thing to do in the back branches is use the\n>> patch I posted before, to suppress the duplicate -l switches only on macOS.\n>> In HEAD, I propose we simplify life by doing it everywhere, as attached.\n\n> Makes sense.\n\nDone that way. Were you going to push the -Wl,-exported_symbols_list\nchange?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 30 Sep 2023 13:28:01 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Annoying build warnings from latest Apple toolchain" }, { "msg_contents": "Hi,\n\nOn 2023-09-30 13:28:01 -0400, Tom Lane wrote:\n> Andres Freund <[email protected]> writes:\n> > On 2023-09-29 12:14:40 -0400, Tom Lane wrote:\n> >> Looking closer, it's only since v16 that we have export list support\n> >> on all officially-supported platforms.\n> \n> > Oh, right, before that Solaris didn't support it. I guess we could backpatch\n> > that commit, it's simple enough, but I don't think I care enough about Solaris\n> > to do so.\n> \n> HPUX would be an issue too, as we never did figure out how to do\n> export control on that.\n\nAh, right.\n\n\n> However, I doubt it would be a great idea\n> to back-patch export control in minor releases, even if we had\n> the patch at hand. That would be an ABI break of its own, although\n> it'd only affect clients that hadn't done things quite correctly.\n\nAgreed.\n\n\n> >> Therefore, I think the prudent thing to do in the back branches is use the\n> >> patch I posted before, to suppress the duplicate -l switches only on macOS.\n> >> In HEAD, I propose we simplify life by doing it everywhere, as attached.\n> \n> > Makes sense.\n> \n> Done that way. Were you going to push the -Wl,-exported_symbols_list\n> change?\n\nDone now.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sat, 30 Sep 2023 12:37:00 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Annoying build warnings from latest Apple toolchain" }, { "msg_contents": "Hi,\n\nOn 2023-09-29 11:11:49 -0400, Tom Lane wrote:\n> Andres Freund <[email protected]> writes:\n> > On 2023-09-28 22:53:09 -0400, Tom Lane wrote:\n> >> (Perhaps we should apply the above to HEAD alongside the meson.build fix, to\n> >> get more test coverage?)\n> \n> > The macos animals BF seem to run Ventura, so I think it'd not really provide\n> > additional coverage that CI and your manual testing already has. So probably\n> > not worth it from that angle?\n> \n> My thought was that if it's in the tree we'd get testing from\n> non-buildfarm sources.\n\nI'm not against it, but it also doesn't quite seem necessary, given your\ntesting on Catalina.\n\n\n> FWIW, I'm going to update sifaka/indri to Sonoma before too much longer\n> (they're already using Xcode 15.0 which is the Sonoma toolchain).\n> I recently pulled longfin up to Ventura, and plan to leave it on that\n> for the next year or so. I don't think anyone else is running macOS\n> animals.\n\nIt indeed looks like nobody is. I wonder if it's worth setting up a gcc macos\nanimal, it's not too hard to imagine that breaking.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sat, 30 Sep 2023 12:43:49 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Annoying build warnings from latest Apple toolchain" }, { "msg_contents": "I wrote:\n> Andres Freund <[email protected]> writes:\n>> On 2023-09-29 12:14:40 -0400, Tom Lane wrote:\n>>> Therefore, I think the prudent thing to do in the back branches is use the\n>>> patch I posted before, to suppress the duplicate -l switches only on macOS.\n>>> In HEAD, I propose we simplify life by doing it everywhere, as attached.\n\n>> Makes sense.\n\n> Done that way.\n\nSo, in the no-good-deed-goes-unpunished department, I see that Noah's\nAIX animals are now complaining about a few duplicate symbols, eg\nwhile building initdb:\n\nld: 0711-224 WARNING: Duplicate symbol: .pg_encoding_to_char\nld: 0711-224 WARNING: Duplicate symbol: .pg_valid_server_encoding\nld: 0711-224 WARNING: Duplicate symbol: .pg_valid_server_encoding_id\nld: 0711-224 WARNING: Duplicate symbol: .pg_char_to_encoding\n\nIt's far from clear to me why we see this warning now when we didn't\nbefore, because there are strictly fewer sources of these symbols in\nthe link than before. They are available from libpgcommon and are\nalso intentionally exported from libpq. Presumably, initdb is now\nlinking to these symbols from libpq where before it got them from the\nfirst mention of libpgcommon, but why is that any more worthy of a\nwarning?\n\nAnyway, I don't have a huge problem with just ignoring these warnings\nas such, since AFAICT they're only showing up on AIX.\n\nHowever, thinking about this made me realize that there's a related\nproblem. At one time we had intentionally made initdb use its own\ncopy of these routines rather than letting it get them from libpq.\nThe reason is explained in commit 8468146b0:\n\n Fix the inadvertent libpq ABI breakage discovered by Martin Pitt: the\n renumbering of encoding IDs done between 8.2 and 8.3 turns out to break 8.2\n initdb and psql if they are run with an 8.3beta1 libpq.so.\n\nThis policy is still memorialized in a comment in initdb/Makefile:\n\n# Note: it's important that we link to encnames.o from libpgcommon, not\n# from libpq, else we have risks of version skew if we run with a libpq\n# shared library from a different PG version. The libpq_pgport macro\n# should ensure that that happens.\n\nand pg_wchar.h has this related comment:\n\n * XXX\tWe must avoid renumbering any backend encoding until libpq's major\n * version number is increased beyond 5; it turns out that the backend\n * encoding IDs are effectively part of libpq's ABI as far as 8.2 initdb and\n * psql are concerned.\n\nNow it's not happening that way. How big a problem is that?\n\nIn the case of psql, I think it's actually fixing a latent bug.\n8468146b0's changes in psql clearly intend that psql will be\nlinking to libpq's copies of pg_char_to_encoding and\npg_valid_server_encoding_id, which is appropriate because it's\ndealing with libpq's encoding IDs. We broke that when we moved\nencnames.c into libpgcommon. We've not made any encoding ID\nredefinitions since then, and we'd be unlikely to renumber PG_UTF8\nin any case, but clearly linking to libpq's copies is safer.\n\n(Which means that the makefiles are now OK, but the meson\nbuild is not: we need libpq to be linked before libpgcommon.)\n\nHowever, in the case of initdb, we had better be using the same\nencoding IDs as the backend code we are setting up the database for.\nIf we ever add/renumber any backend-safe encodings again, we'd be\nexposed to the same problem that 8.3 had.\n\nAssuming that this problem is restricted to initdb, which I think\nis true, probably the best fix is to cause the initdb link *only*\nto link libpgcommon before libpq. Every other non-backend program\nis interested in libpq's encoding IDs if it cares at all.\n\nThoughts?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 03 Oct 2023 13:34:33 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Annoying build warnings from latest Apple toolchain" }, { "msg_contents": "I wrote:\n> Assuming that this problem is restricted to initdb, which I think\n> is true, probably the best fix is to cause the initdb link *only*\n> to link libpgcommon before libpq. Every other non-backend program\n> is interested in libpq's encoding IDs if it cares at all.\n\nThe more I thought about that the less I liked it. We're trying to\nget away from link order dependencies, not add more. And the fact\nthat we've had a latent bug for awhile from random-ish changes in\nlink order should reinforce our desire to get out of that business.\n\nSo I experimented with fixing things so that the versions of these\nfunctions exported by libpq have physically different names from those\nthat you'd get from linking to libpgcommon.a or libpgcommon_srv.a.\nThen, there's certainty about which one a given usage will link to,\nbased on what the #define environment is when the call is compiled.\n\nThis leads to a pleasingly small patch, at least in the Makefile\nuniverse (I've not attempted to sync the meson or MSVC infrastructure\nwith this yet). As a bonus, it should silence those new warnings\non AIX. A disadvantage is that this causes an ABI break for\nbackend extensions, so we couldn't consider back-patching it.\nBut I think that's fine given that the problem is only latent\nin released branches.\n\nThoughts?\n\n\t\t\tregards, tom lane", "msg_date": "Tue, 03 Oct 2023 16:07:30 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Annoying build warnings from latest Apple toolchain" }, { "msg_contents": "I wrote:\n> So I experimented with fixing things so that the versions of these\n> functions exported by libpq have physically different names from those\n> that you'd get from linking to libpgcommon.a or libpgcommon_srv.a.\n> Then, there's certainty about which one a given usage will link to,\n> based on what the #define environment is when the call is compiled.\n\n> This leads to a pleasingly small patch, at least in the Makefile\n> universe (I've not attempted to sync the meson or MSVC infrastructure\n> with this yet).\n\nHere's a v2 that addresses the meson infrastructure as well.\n(This is my first attempt at writing meson code, so feel free\nto critique.) I propose not bothering to fix src/tools/msvc/,\non the grounds that (a) that infrastructure will likely be gone\nbefore v17 ships, and (b) the latent problem with version\nskew between libpq.dll and calling programs seems much less\nlikely to matter in the Windows world in the first place.\n\nBarring comments or CI failures, I intend to push this soon.\n\n\t\t\tregards, tom lane", "msg_date": "Thu, 05 Oct 2023 13:17:25 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Annoying build warnings from latest Apple toolchain" }, { "msg_contents": "Hi,\n\nOn 2023-10-05 13:17:25 -0400, Tom Lane wrote:\n> I wrote:\n> > So I experimented with fixing things so that the versions of these\n> > functions exported by libpq have physically different names from those\n> > that you'd get from linking to libpgcommon.a or libpgcommon_srv.a.\n> > Then, there's certainty about which one a given usage will link to,\n> > based on what the #define environment is when the call is compiled.\n\nI think that's a good plan. IIRC I previously complained about the symbols\nexisting in multiple places... Don't remember the details, IIRCI I saw\nwarnings about symbol conflicts in extensions using libpq.\n\n\n> > This leads to a pleasingly small patch, at least in the Makefile\n> > universe (I've not attempted to sync the meson or MSVC infrastructure\n> > with this yet).\n> \n> Here's a v2 that addresses the meson infrastructure as well.\n> (This is my first attempt at writing meson code, so feel free\n> to critique.) I propose not bothering to fix src/tools/msvc/,\n> on the grounds that (a) that infrastructure will likely be gone\n> before v17 ships, and (b) the latent problem with version\n> skew between libpq.dll and calling programs seems much less\n> likely to matter in the Windows world in the first place.\n\nMakes sense.\n\n\n> +# Note: it's important that we link to encnames.o from libpgcommon, not\n> +# from libpq, else we have risks of version skew if we run with a libpq\n> +# shared library from a different PG version. Define\n> +# USE_PRIVATE_ENCODING_FUNCS to ensure that that happens.\n> +c_args = default_bin_args.get('c_args', []) + ['-DUSE_PRIVATE_ENCODING_FUNCS']\n> +\n> initdb = executable('initdb',\n> initdb_sources,\n> include_directories: [timezone_inc],\n> dependencies: [frontend_code, libpq, icu, icu_i18n],\n> - kwargs: default_bin_args,\n> + kwargs: default_bin_args + {\n> + 'c_args': c_args,\n> + },\n\nI think you can just pass c_args directly to executable() here, I think adding\nc_args to default_bin_args would be a bad idea.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 5 Oct 2023 10:31:47 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Annoying build warnings from latest Apple toolchain" }, { "msg_contents": "Andres Freund <[email protected]> writes:\n> I think you can just pass c_args directly to executable() here, I think adding\n> c_args to default_bin_args would be a bad idea.\n\nHm. IIUC that would result in an error if someone did try to\nput some c_args into default_bin_args, and I didn't think it would\nbe appropriate for this code to fail in such a case. Still, I see\nthere are a bunch of other ways to inject globally-used compilation\nflags, so maybe you're right that it'd never need to happen.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 05 Oct 2023 13:37:38 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Annoying build warnings from latest Apple toolchain" }, { "msg_contents": "Hi,\n\nOn 2023-10-05 13:37:38 -0400, Tom Lane wrote:\n> Andres Freund <[email protected]> writes:\n> > I think you can just pass c_args directly to executable() here, I think adding\n> > c_args to default_bin_args would be a bad idea.\n> \n> Hm. IIUC that would result in an error if someone did try to\n> put some c_args into default_bin_args, and I didn't think it would\n> be appropriate for this code to fail in such a case. Still, I see\n> there are a bunch of other ways to inject globally-used compilation\n> flags, so maybe you're right that it'd never need to happen.\n\nI think the other ways of injecting c_args compose better (and indeed other\noptions are either injected globally, or via declare_dependency()).\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 6 Oct 2023 12:04:38 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Annoying build warnings from latest Apple toolchain" }, { "msg_contents": "Andres Freund <[email protected]> writes:\n> On 2023-10-05 13:37:38 -0400, Tom Lane wrote:\n>> Hm. IIUC that would result in an error if someone did try to\n>> put some c_args into default_bin_args, and I didn't think it would\n>> be appropriate for this code to fail in such a case. Still, I see\n>> there are a bunch of other ways to inject globally-used compilation\n>> flags, so maybe you're right that it'd never need to happen.\n\n> I think the other ways of injecting c_args compose better (and indeed other\n> options are either injected globally, or via declare_dependency()).\n\nDone that way.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 07 Oct 2023 12:09:06 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Annoying build warnings from latest Apple toolchain" }, { "msg_contents": "On Sat, Oct 7, 2023 at 12:09 PM Tom Lane <[email protected]> wrote:\n> Done that way.\n\nIs there still outstanding work on this thread? Because I'm just now\nusing a new MacBook (M2, Ventura 13.6.2) and I'm getting a lot of this\nkind of thing in a meson build:\n\n[2264/2287] Linking target src/interfaces/ecpg/test/sql/parser\nld: warning: -undefined error is deprecated\nld: warning: ignoring duplicate libraries: '-lz'\n[2266/2287] Linking target src/interfaces/ecpg/test/sql/insupd\nld: warning: -undefined error is deprecated\nld: warning: ignoring duplicate libraries: '-lz'\n[2273/2287] Linking target src/interfaces/ecpg/test/sql/quote\nld: warning: -undefined error is deprecated\nld: warning: ignoring duplicate libraries: '-lz'\n[2278/2287] Linking target src/interfaces/ecpg/test/sql/show\nld: warning: -undefined error is deprecated\nld: warning: ignoring duplicate libraries: '-lz'\n[2280/2287] Linking target src/interfaces/ecpg/test/sql/sqlda\nld: warning: -undefined error is deprecated\nld: warning: ignoring duplicate libraries: '-lz'\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 20 Nov 2023 14:14:00 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Annoying build warnings from latest Apple toolchain" }, { "msg_contents": "Hi,\n\nOn 2023-11-20 14:14:00 -0500, Robert Haas wrote:\n> On Sat, Oct 7, 2023 at 12:09 PM Tom Lane <[email protected]> wrote:\n> > Done that way.\n> \n> Is there still outstanding work on this thread? Because I'm just now\n> using a new MacBook (M2, Ventura 13.6.2) and I'm getting a lot of this\n> kind of thing in a meson build:\n\nVentura? In that case I assume you installed newer developer tools? Because\notherwise I think we were talking about issues introduced in Sonoma.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 20 Nov 2023 11:35:53 -0800", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Annoying build warnings from latest Apple toolchain" }, { "msg_contents": "On Mon, Nov 20, 2023 at 2:35 PM Andres Freund <[email protected]> wrote:\n> > Is there still outstanding work on this thread? Because I'm just now\n> > using a new MacBook (M2, Ventura 13.6.2) and I'm getting a lot of this\n> > kind of thing in a meson build:\n>\n> Ventura? In that case I assume you installed newer developer tools? Because\n> otherwise I think we were talking about issues introduced in Sonoma.\n\nI don't think I did anything special when installing developer tools.\nxcode-select --version reports 2397, if that tells you anything.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 20 Nov 2023 14:46:13 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Annoying build warnings from latest Apple toolchain" }, { "msg_contents": "Hi,\n\nOn 2023-11-20 14:46:13 -0500, Robert Haas wrote:\n> On Mon, Nov 20, 2023 at 2:35 PM Andres Freund <[email protected]> wrote:\n> > > Is there still outstanding work on this thread? Because I'm just now\n> > > using a new MacBook (M2, Ventura 13.6.2) and I'm getting a lot of this\n> > > kind of thing in a meson build:\n> >\n> > Ventura? In that case I assume you installed newer developer tools? Because\n> > otherwise I think we were talking about issues introduced in Sonoma.\n>\n> I don't think I did anything special when installing developer tools.\n> xcode-select --version reports 2397, if that tells you anything.\n\nOdd then. My m1-mini running Ventura, also reporting 2397, doesn't show any of\nthose warnings. I did a CI run with Sonoma, and that does show them.\n\nI'm updating said m1-mini it to Sonoma now, but that will take until I have to\nleave for an appointment.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 20 Nov 2023 12:01:09 -0800", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Annoying build warnings from latest Apple toolchain" }, { "msg_contents": "Robert Haas <[email protected]> writes:\n> Is there still outstanding work on this thread? Because I'm just now\n> using a new MacBook (M2, Ventura 13.6.2) and I'm getting a lot of this\n> kind of thing in a meson build:\n\n13.6.2? longfin's host is on 13.6.1, and the only thing Software\nUpdate is showing me is an option to upgrade to Sonoma. But anyway...\n\n> [2264/2287] Linking target src/interfaces/ecpg/test/sql/parser\n> ld: warning: -undefined error is deprecated\n> ld: warning: ignoring duplicate libraries: '-lz'\n\nHmm ... I fixed these things in the autoconf build: neither my\nbuildfarm animals nor manual builds show any warnings. I thought\nthe problems weren't there in the meson build. Need to take another\nlook I guess.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 20 Nov 2023 15:53:16 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Annoying build warnings from latest Apple toolchain" }, { "msg_contents": "On Mon, Nov 20, 2023 at 3:53 PM Tom Lane <[email protected]> wrote:\n> 13.6.2? longfin's host is on 13.6.1, and the only thing Software\n> Update is showing me is an option to upgrade to Sonoma. But anyway...\n\nUh, I guess Apple made a special version just for me? That's\ndefinitely what it says.\n\n> > [2264/2287] Linking target src/interfaces/ecpg/test/sql/parser\n> > ld: warning: -undefined error is deprecated\n> > ld: warning: ignoring duplicate libraries: '-lz'\n>\n> Hmm ... I fixed these things in the autoconf build: neither my\n> buildfarm animals nor manual builds show any warnings. I thought\n> the problems weren't there in the meson build. Need to take another\n> look I guess.\n\nThey're definitely there for me, and there are a whole lot of them. I\nwould have thought that if they were there for you in the meson build\nyou would have noticed, since ninja suppresses a lot of distracting\noutput that make prints.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 20 Nov 2023 15:57:40 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Annoying build warnings from latest Apple toolchain" }, { "msg_contents": "Robert Haas <[email protected]> writes:\n> On Mon, Nov 20, 2023 at 3:53 PM Tom Lane <[email protected]> wrote:\n>> 13.6.2? longfin's host is on 13.6.1, and the only thing Software\n>> Update is showing me is an option to upgrade to Sonoma. But anyway...\n\n> Uh, I guess Apple made a special version just for me? That's\n> definitely what it says.\n\nMight be for M-series only; longfin's host is still Intel.\n\n>> Hmm ... I fixed these things in the autoconf build: neither my\n>> buildfarm animals nor manual builds show any warnings. I thought\n>> the problems weren't there in the meson build. Need to take another\n>> look I guess.\n\n> They're definitely there for me, and there are a whole lot of them. I\n> would have thought that if they were there for you in the meson build\n> you would have noticed, since ninja suppresses a lot of distracting\n> output that make prints.\n\nI'm generally still using autoconf, I only run meson builds when\nsomebody complains about them ;-). But yeah, I see lots of\n\"ld: warning: -undefined error is deprecated\" when I do that.\nThis seems to have been installed by Andres' 9a95a510a:\n\n ldflags += ['-isysroot', pg_sysroot]\n+ # meson defaults to -Wl,-undefined,dynamic_lookup for modules, which we\n+ # don't want because a) it's different from what we do for autoconf, b) it\n+ # causes warnings starting in macOS Ventura\n+ ldflags_mod += ['-Wl,-undefined,error']\n\nThe autoconf side seems to just be letting this option default.\nI'm not sure what the default choice is, but evidently it's not\n\"-undefined error\"? Or were they stupid enough to not allow you\nto explicitly select the default behavior?\n\nAlso, I *don't* see any complaints about duplicate libraries.\nWhat build options are you using?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 20 Nov 2023 16:20:20 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Annoying build warnings from latest Apple toolchain" }, { "msg_contents": "I wrote:\n> The autoconf side seems to just be letting this option default.\n> I'm not sure what the default choice is, but evidently it's not\n> \"-undefined error\"? Or were they stupid enough to not allow you\n> to explicitly select the default behavior?\n\nSeems we are not the only project having trouble with this:\n\nhttps://github.com/mesonbuild/meson/issues/12450\n\nI had not realized that Apple recently wrote themselves a whole\nnew linker, but apparently that's why all these deprecation\nwarnings are showing up. It's not exactly clear whether\n\"deprecation\" means they actually plan to remove the feature\nlater, or just that some bozo decided that explicitly specifying\nthe default behavior is bad style.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 20 Nov 2023 17:02:30 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Annoying build warnings from latest Apple toolchain" }, { "msg_contents": "Hi,\n\nOn 2023-11-20 16:20:20 -0500, Tom Lane wrote:\n> I'm generally still using autoconf, I only run meson builds when\n> somebody complains about them ;-). But yeah, I see lots of\n> \"ld: warning: -undefined error is deprecated\" when I do that.\n> This seems to have been installed by Andres' 9a95a510a:\n>\n> ldflags += ['-isysroot', pg_sysroot]\n> + # meson defaults to -Wl,-undefined,dynamic_lookup for modules, which we\n> + # don't want because a) it's different from what we do for autoconf, b) it\n> + # causes warnings starting in macOS Ventura\n> + ldflags_mod += ['-Wl,-undefined,error']\n\nThat's not the sole issue, because meson automatically adds that for binaries\n(due to some linker issue that existed in the past IIRC).\n\n\n> The autoconf side seems to just be letting this option default.\n> I'm not sure what the default choice is, but evidently it's not\n> \"-undefined error\"? Or were they stupid enough to not allow you\n> to explicitly select the default behavior?\n\nI'm somewhat confused by what Apple did. I just was upgrading my m1 mac mini\nto sonoma, and in one state I recall man ld documenting it below \"Obsolete\nOptions\". But then I also updated xcode, and now there's no mention whatsoever\nof the option being deprecated in the man page at all. Perhaps my mind is\nplaying tricks on me.\n\nAnd yes, it sure looks like everything other than 'dynamic_lookup' creates a\nwarning. Which seems absurd.\n\n\n> Also, I *don't* see any complaints about duplicate libraries.\n> What build options are you using?\n\nI don't understand what Apple is thinking here. Getting the same library name\ntwice, e.g. icu once directly and once indirectly via xml2-config --libs or\nsuch, seems very common. To avoid that portably, you basically need to do a\ntopographical sort of the libraries, to figure out an ordering that\ndeduplicates but doesn't move a library to before where it's used. With a\nbunch of complexities due to -L, which could lead to finding different\nlibraries for the same library name, thrown in.\n\n\nWRT Robert seeing those warnings and Tom not: There's something odd going\non. I couldn't immediately reproduce it. Then I realized it reproduces against\na homebrew install but not a macports one.\n\nRobert, which are you using?\n\n<a bunch later>\n\nMeson actually *tries* to avoid this warning without resulting in incorrect\nresults due to ordering. But homebrew does something odd, with libxml-2.0,\nzlib and a few others: Unless you do something to change that, brew has\n/opt/homebrew/Library/Homebrew/os/mac/pkgconfig/14/ in its search path, but\nthose libraries aren't from homebrew, they're referencing macos\nframeworks. Apparently meson isn't able to understand which files those .pc\nfiles link to and gives up on the deduplicating.\n\nIf I add to the pkg-config search path, e.g. via\nmeson configure -Dpkg_config_path=$OTHER_STUFF:/opt/homebrew/opt/zlib/lib/pkgconfig/:/opt/homebrew/opt/libxml2/lib/pkgconfig/\n\nthe warnings about duplicate libraries vanish.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 20 Nov 2023 20:37:07 -0800", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Annoying build warnings from latest Apple toolchain" }, { "msg_contents": "On Mon, Nov 20, 2023 at 11:37 PM Andres Freund <[email protected]> wrote:\n> WRT Robert seeing those warnings and Tom not: There's something odd going\n> on. I couldn't immediately reproduce it. Then I realized it reproduces against\n> a homebrew install but not a macports one.\n>\n> Robert, which are you using?\n\nmacports\n\n> Meson actually *tries* to avoid this warning without resulting in incorrect\n> results due to ordering. But homebrew does something odd, with libxml-2.0,\n> zlib and a few others: Unless you do something to change that, brew has\n> /opt/homebrew/Library/Homebrew/os/mac/pkgconfig/14/ in its search path, but\n> those libraries aren't from homebrew, they're referencing macos\n> frameworks. Apparently meson isn't able to understand which files those .pc\n> files link to and gives up on the deduplicating.\n>\n> If I add to the pkg-config search path, e.g. via\n> meson configure -Dpkg_config_path=$OTHER_STUFF:/opt/homebrew/opt/zlib/lib/pkgconfig/:/opt/homebrew/opt/libxml2/lib/pkgconfig/\n>\n> the warnings about duplicate libraries vanish.\n\nHmm, I'm happy to try something here, but I'm not sure exactly what.\nI'm not setting pkg_config_path at all right now. I'm using this:\n\nmeson setup $HOME/pgsql $HOME/pgsql-meson -Dcassert=true -Ddebug=true\n-Dextra_include_dirs=/opt/local/include\n-Dextra_lib_dirs=/opt/local/lib -Dprefix=$HOME/install/dev\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 21 Nov 2023 08:35:56 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Annoying build warnings from latest Apple toolchain" }, { "msg_contents": "On 21.11.23 14:35, Robert Haas wrote:\n> On Mon, Nov 20, 2023 at 11:37 PM Andres Freund <[email protected]> wrote:\n>> WRT Robert seeing those warnings and Tom not: There's something odd going\n>> on. I couldn't immediately reproduce it. Then I realized it reproduces against\n>> a homebrew install but not a macports one.\n>>\n>> Robert, which are you using?\n> \n> macports\n\nBtw., I'm also seeing warnings like this. I'm using homebrew. Here is \na sample:\n\n[145/147] Linking target src/test/modules/test_shm_mq/test_shm_mq.dylib\n-macosx_version_min has been renamed to -macos_version_min\nld: warning: -undefined error is deprecated\nld: warning: ignoring duplicate libraries: '-lgcc'\n[146/147] Linking target src/test/modules/test_slru/test_slru.dylib\n\n\n\n", "msg_date": "Tue, 21 Nov 2023 15:59:20 +0100", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Annoying build warnings from latest Apple toolchain" }, { "msg_contents": "Hi,\n\n> > On Mon, Nov 20, 2023 at 11:37 PM Andres Freund <[email protected]> wrote:\n> >> WRT Robert seeing those warnings and Tom not: There's something odd going\n> >> on. I couldn't immediately reproduce it. Then I realized it reproduces against\n> >> a homebrew install but not a macports one.\n> >>\n> >> Robert, which are you using?\n> >\n> > macports\n>\n> Btw., I'm also seeing warnings like this. I'm using homebrew. Here is\n> a sample:\n>\n> [145/147] Linking target src/test/modules/test_shm_mq/test_shm_mq.dylib\n> -macosx_version_min has been renamed to -macos_version_min\n> ld: warning: -undefined error is deprecated\n> ld: warning: ignoring duplicate libraries: '-lgcc'\n> [146/147] Linking target src/test/modules/test_slru/test_slru.dylib\n\nI prefer not to build Postgres on Mac because it slows down GMail and\nSlack. After reading this discussion I tried and I can confirm I see\nthe same warnings Robert does:\n\n```\n[322/1905] Linking target src/interfaces/libpq/libpq.5.dylib\nld: warning: -undefined error is deprecated\nld: warning: ignoring duplicate libraries: '-lz'\n[326/1905] Linking target src/timezone/zic\nld: warning: -undefined error is deprecated\nld: warning: ignoring duplicate libraries: '-lz'\n[1113/1905] Linking target src/backend/postgres\nld: warning: -undefined error is deprecated\nld: warning: ignoring duplicate libraries: '-lpam', '-lxml2', '-lz'\n[1124/1905] Linking target src/backend/replication/pgoutput/pgoutput.dylib\nld: warning: -undefined error is deprecated\n[1125/1905] Linking target\nsrc/backend/replication/libpqwalreceiver/libpqwalreceiver.dylib\nld: warning: -undefined error is deprecated\n\n... many more ...\n```\n\nMy laptop is an Intel MacBook Pro 2019. The MacOS version is Sonoma\n14.0 and I'm using homebrew. `xcode-select --version` says 2399. I'm\nusing the following command:\n\n```\nXML_CATALOG_FILES=/usr/local/etc/xml/catalog time -p sh -c 'git clean\n-dfx && meson setup --buildtype debug -DPG_TEST_EXTRA=\"kerberos ldap\nssl\" -Dldap=disabled -Dssl=openssl -Dcassert=true -Dtap_tests=enabled\n-Dprefix=/Users/eax/pginstall build && ninja -C build && ninja -C\nbuild docs && meson test -C build'\n```\n\nI don't see any warnings when using Autotools.\n\n--\nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Tue, 21 Nov 2023 18:41:35 +0300", "msg_from": "Aleksander Alekseev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Annoying build warnings from latest Apple toolchain" }, { "msg_contents": "On Tue, Nov 21, 2023 at 9:59 AM Peter Eisentraut <[email protected]> wrote:\n> Btw., I'm also seeing warnings like this. I'm using homebrew. Here is\n> a sample:\n>\n> [145/147] Linking target src/test/modules/test_shm_mq/test_shm_mq.dylib\n> -macosx_version_min has been renamed to -macos_version_min\n> ld: warning: -undefined error is deprecated\n> ld: warning: ignoring duplicate libraries: '-lgcc'\n> [146/147] Linking target src/test/modules/test_slru/test_slru.dylib\n\nI poked at this issue a bit more. In meson.build, for Darwin, we have this:\n\n # meson defaults to -Wl,-undefined,dynamic_lookup for modules, which we\n # don't want because a) it's different from what we do for autoconf, b) it\n # causes warnings starting in macOS Ventura\n ldflags_mod += ['-Wl,-undefined,error']\n\nI don't really understand how meson works, but I blindly tried\ncommenting that out. What I found is that doing so reduces the number\nof warnings that I get locally from 226 to 113. The difference seems\nto be that, with the unpatched meson.build file, I get warnings both\nabout binaries and also about loadable modules, but with the patched\nversion, the loadable modules stop emitting warnings, and the binaries\ncontinue to do so. This gives the flavor:\n\n-[] Linking target contrib/adminpack/adminpack.dylib\n-[] Linking target contrib/amcheck/amcheck.dylib\n...\n-[] Linking target src/backend...version_procs/latin2_and_win1250.dylib\n-[] Linking target src/backend...version_procs/utf8_and_iso8859_1.dylib\n [] Linking target src/backend/postgres\n-[] Linking target src/backend/replication/pgoutput/pgoutput.dylib\n-[] Linking target src/backend/snowball/dict_snowball.dylib\n [] Linking target src/bin/initdb/initdb\n [] Linking target src/bin/pg_amcheck/pg_amcheck\n [] Linking target src/bin/pg_archivecleanup/pg_archivecleanup\n [] Linking target src/bin/pg_basebackup/pg_basebackup\n...\n\nThe lines with - at the beginning are the warnings that disappear when\nI comment out the addition to ldflags_mod. The lines without a - at\nthe beginning are the ones that appear either way.\n\nThe first, rather inescapable, conclusion is that the comment isn't\ncompletely correct. It claims that we need to add -Wl,-undefined,error\non macOS Ventura to avoid warnings, but on my system it has exactly\nthe opposite effect: it produces warnings. I think we must have\nmisdiagnosed what the triggering condition actually is -- maybe it\ndepends on CPU architecture or choice of compiler or something, but\nit's not as simple as \"on Ventura you need this\" because I am on\nVentura and I anti-need this.\n\nThe second conclusion that I draw is that there's something in meson\nitself which is adding -Wl,-undefined,error when building binaries.\nThe snippet above is the only place in the entire source tree where we\nspecify a -undefined flag for a compile. The fact that the warning\nstill shows up when I comment that out means that in other cases,\nmeson itself is adding the flag, seemingly wrongly. But I don't know\nhow to make it not do that. I tried adding an option to ldflags, but\nthe linker isn't happy with adding something like\n-Wl,-undefined,warning --- then it complains about both\n-Wl,-undefined,error and -Wl,-undefined,warning. Apparently, what it\nreally wants is for the option to not be specified at all:\n\nhttps://stackoverflow.com/questions/77525544/apple-linker-warning-ld-warning-undefined-error-is-deprecated\n\nSee also https://github.com/mesonbuild/meson/issues/12450\n\nWhat a stupid, annoying decision on Apple's part. It seems like\n-Wl,-undefined,error is the default behavior, so they could have just\nignored that flag if present, but instead they complain about being\nasked to do what they were going to do anyway.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 28 Nov 2023 10:48:04 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Annoying build warnings from latest Apple toolchain" }, { "msg_contents": "Hi,\n\nOn 2023-11-28 10:48:04 -0500, Robert Haas wrote:\n> The second conclusion that I draw is that there's something in meson\n> itself which is adding -Wl,-undefined,error when building binaries.\n\nRight.\n\n\n> What a stupid, annoying decision on Apple's part. It seems like\n> -Wl,-undefined,error is the default behavior, so they could have just\n> ignored that flag if present, but instead they complain about being\n> asked to do what they were going to do anyway.\n\nEspecially because I think it didn't actually use to be the default when\nbuilding a dylib.\n\n\nWhile not helpful for this, I just noticed that there is\n-no_warn_duplicate_libraries, which would at least get rid of the\n ld: warning: ignoring duplicate libraries: '-lxml2'\nstyle warnings.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 30 Nov 2023 18:12:47 -0800", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Annoying build warnings from latest Apple toolchain" }, { "msg_contents": "Andres Freund <[email protected]> writes:\n> On 2023-11-28 10:48:04 -0500, Robert Haas wrote:\n>> What a stupid, annoying decision on Apple's part. It seems like\n>> -Wl,-undefined,error is the default behavior, so they could have just\n>> ignored that flag if present, but instead they complain about being\n>> asked to do what they were going to do anyway.\n\n> Especially because I think it didn't actually use to be the default when\n> building a dylib.\n\nIndeed. Whoever's in charge of their linker now seems to be quite\nclueless, or at least aggressively anti-backwards-compatibility.\n\n> While not helpful for this, I just noticed that there is\n> -no_warn_duplicate_libraries, which would at least get rid of the\n> ld: warning: ignoring duplicate libraries: '-lxml2'\n> style warnings.\n\nOh! If that's been there long enough to rely on, that does seem\nvery helpful.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 30 Nov 2023 21:24:21 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Annoying build warnings from latest Apple toolchain" }, { "msg_contents": "Hi,\n\nOn 2023-11-30 21:24:21 -0500, Tom Lane wrote:\n> Andres Freund <[email protected]> writes:\n> > On 2023-11-28 10:48:04 -0500, Robert Haas wrote:\n> >> What a stupid, annoying decision on Apple's part. It seems like\n> >> -Wl,-undefined,error is the default behavior, so they could have just\n> >> ignored that flag if present, but instead they complain about being\n> >> asked to do what they were going to do anyway.\n> \n> > Especially because I think it didn't actually use to be the default when\n> > building a dylib.\n> \n> Indeed. Whoever's in charge of their linker now seems to be quite\n> clueless, or at least aggressively anti-backwards-compatibility.\n\nIt looks like it even affects a bunch of Apple's own products [1]...\n\n\n> > While not helpful for this, I just noticed that there is\n> > -no_warn_duplicate_libraries, which would at least get rid of the\n> > ld: warning: ignoring duplicate libraries: '-lxml2'\n> > style warnings.\n> \n> Oh! If that's been there long enough to rely on, that does seem\n> very helpful.\n\nI think it's new too. But we can just check if the flag is supported.\n\n\nThis is really ridiculous. For at least part of venturas life they warned\nabout -Wl,-undefined,dynamic_lookup, which lead to 9a95a510adf3. They don't\nseem to do that anymore, but now you can't specify -Wl,-undefined,error\nanymore without a warning.\n\n\nAttached is a prototype testing this via CI on both Sonoma and Ventura.\n\n\nIt's certainly possible to encounter the duplicate library issue with\nautoconf, but probably not that common. So I'm not sure if we should inject\n-Wl,-no_warn_duplicate_libraries as well?\n\nGreetings,\n\nAndres Freund\n\n\n[1] https://developer.apple.com/forums/thread/733317", "msg_date": "Thu, 30 Nov 2023 20:05:15 -0800", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Annoying build warnings from latest Apple toolchain" }, { "msg_contents": "Hi,\n\nLooks like I forgot to update the thread to note that I finally committed the\nremaining warning fixes. I had already fixed a bunch of others in upstream\nmeson.\n\ncommit a3da95deee38ee067b0bead639c830eacbe894d5\nAuthor: Andres Freund <[email protected]>\nDate: 2024-03-13 01:40:53 -0700\n\n meson: macos: Avoid warnings on Sonoma\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 9 Apr 2024 10:53:37 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Annoying build warnings from latest Apple toolchain" } ]
[ { "msg_contents": "Hi -hackers,\n\nWe've got customer report that high max_connections (3k) with high\npgstat_track_activity_query_size (1MB) ends up with:\n\npostgres=# select * from pg_stat_get_activity(NULL);\nERROR: invalid memory alloc request size 18446744072590721024\npostgres=# select version();\n version\n------------------------------------------------------------------------------------------------------\n PostgreSQL 17devel on x86_64-pc-linux-gnu, compiled by gcc (Debian\n10.2.1-6) 10.2.1 20210110, 64-bit\n(1 row)\n\nit's in:\n#0 errstart (elevel=elevel@entry=21, domain=domain@entry=0x0) at elog.c:358\n#1 0x000055971cafc9a8 in errstart_cold (elevel=elevel@entry=21,\ndomain=domain@entry=0x0) at elog.c:333\n#2 0x000055971cb018b7 in MemoryContextAllocHuge (context=<optimized\nout>, size=18446744072590721024) at mcxt.c:1594\n#3 0x000055971ce2fd59 in pgstat_read_current_status () at backend_status.c:767\n#4 0x000055971ce30ab1 in pgstat_read_current_status () at backend_status.c:1167\n#5 pgstat_fetch_stat_numbackends () at backend_status.c:1168\n#6 0x000055971ceccc7f in pg_stat_get_activity (fcinfo=0x55971e239ff8)\nat pgstatfuncs.c:308\n\nIt seems to be integer overflow due to (int) * (int), while\nMemoryContextAllocHuge() allows taking Size(size_t) as parameter. I\nget similar behaviour with:\nsize_t val = (int)1048576 * (int)3022;\n\nAttached patch adjusts pgstat_track_activity_query_size to be of\nsize_t from int and fixes the issue.\n\nRegards,\n-Jakub Wartak.", "msg_date": "Wed, 27 Sep 2023 08:41:55 +0200", "msg_from": "Jakub Wartak <[email protected]>", "msg_from_op": true, "msg_subject": "pg_stat_get_activity(): integer overflow due to (int) * (int) for\n MemoryContextAllocHuge()" }, { "msg_contents": "On Wed, Sep 27, 2023 at 08:41:55AM +0200, Jakub Wartak wrote:\n> Attached patch adjusts pgstat_track_activity_query_size to be of\n> size_t from int and fixes the issue.\n\nThis cannot be backpatched, and using size_t is not really needed as\ntrack_activity_query_size is capped at 1MB. Why don't you just tweak\nthe calculation done in pgstat_read_current_status() instead, say with\na cast?\n--\nMichael", "msg_date": "Wed, 27 Sep 2023 17:08:23 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_stat_get_activity(): integer overflow due to (int) * (int)\n for MemoryContextAllocHuge()" }, { "msg_contents": "On Wed, Sep 27, 2023 at 10:08 AM Michael Paquier <[email protected]> wrote:\n>\n> On Wed, Sep 27, 2023 at 08:41:55AM +0200, Jakub Wartak wrote:\n> > Attached patch adjusts pgstat_track_activity_query_size to be of\n> > size_t from int and fixes the issue.\n>\n> This cannot be backpatched, and using size_t is not really needed as\n> track_activity_query_size is capped at 1MB. Why don't you just tweak\n> the calculation done in pgstat_read_current_status() instead, say with\n> a cast?\n\nThanks Michael, sure, that is probably a better alternative. I've attached v2.\n\nBTW: CF entry is https://commitfest.postgresql.org/45/4592/\n\nRegards,\n-Jakub Wartak.", "msg_date": "Wed, 27 Sep 2023 10:28:05 +0200", "msg_from": "Jakub Wartak <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pg_stat_get_activity(): integer overflow due to (int) * (int) for\n MemoryContextAllocHuge()" }, { "msg_contents": "On 27.09.23 09:08, Michael Paquier wrote:\n> On Wed, Sep 27, 2023 at 08:41:55AM +0200, Jakub Wartak wrote:\n>> Attached patch adjusts pgstat_track_activity_query_size to be of\n>> size_t from int and fixes the issue.\n> \n> This cannot be backpatched, and using size_t is not really needed as\n> track_activity_query_size is capped at 1MB. Why don't you just tweak\n> the calculation done in pgstat_read_current_status() instead, say with\n> a cast?\n\nI think it's preferable to use the right type to begin with, rather than \nfixing it up afterwards with casts.\n\n\n\n", "msg_date": "Wed, 27 Sep 2023 15:42:15 +0100", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_stat_get_activity(): integer overflow due to (int) * (int) for\n MemoryContextAllocHuge()" }, { "msg_contents": "Hi,\n\nOn 2023-09-27 15:42:15 +0100, Peter Eisentraut wrote:\n> On 27.09.23 09:08, Michael Paquier wrote:\n> > On Wed, Sep 27, 2023 at 08:41:55AM +0200, Jakub Wartak wrote:\n> > > Attached patch adjusts pgstat_track_activity_query_size to be of\n> > > size_t from int and fixes the issue.\n> > \n> > This cannot be backpatched, and using size_t is not really needed as\n> > track_activity_query_size is capped at 1MB. Why don't you just tweak\n> > the calculation done in pgstat_read_current_status() instead, say with\n> > a cast?\n> \n> I think it's preferable to use the right type to begin with, rather than\n> fixing it up afterwards with casts.\n\nI don't think going for size_t is a viable path for fixing this. I'm pretty\nsure the initial patch would trigger a type mismatch from guc_tables.c - we\ndon't have infrastructure for size_t GUCs.\n\nNor do I think either of the patches here is a complete fix - there will still\nbe overflows on 32bit systems.\n\nPerhaps we ought to error out (in BackendStatusShmemSize() or such) if\npgstat_track_activity_query_size * MaxBackends >= 4GB?\n\nFrankly, it seems like a quite bad idea to have such a high limit for\npgstat_track_activity_query_size. The overhead such a high value has will\nsurprise people...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 27 Sep 2023 10:29:25 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_stat_get_activity(): integer overflow due to (int) * (int)\n for MemoryContextAllocHuge()" }, { "msg_contents": "On Wed, Sep 27, 2023 at 10:29:25AM -0700, Andres Freund wrote:\n> I don't think going for size_t is a viable path for fixing this. I'm pretty\n> sure the initial patch would trigger a type mismatch from guc_tables.c - we\n> don't have infrastructure for size_t GUCs.\n\nNothing marked as PGDLLIMPORT uses size_t in the tree currently, FWIW.\n\n> Perhaps we ought to error out (in BackendStatusShmemSize() or such) if\n> pgstat_track_activity_query_size * MaxBackends >= 4GB?\n\nYeah, agreed that putting a check like that could catch errors more\nquickly.\n\n> Frankly, it seems like a quite bad idea to have such a high limit for\n> pgstat_track_activity_query_size. The overhead such a high value has will\n> surprise people...\n\nStill it could have some value for some users with large analytical\nqueries where the syslogger is not going to be a bottleneck? It seems\ntoo late to me to change that, but perhaps the docs could be improved\nto tell that using a too high value can have performance consequences,\nwhile mentioning the maximum value.\n--\nMichael", "msg_date": "Thu, 28 Sep 2023 07:53:45 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_stat_get_activity(): integer overflow due to (int) * (int)\n for MemoryContextAllocHuge()" }, { "msg_contents": "Hi,\n\nOn 2023-09-28 07:53:45 +0900, Michael Paquier wrote:\n> On Wed, Sep 27, 2023 at 10:29:25AM -0700, Andres Freund wrote:\n> > Frankly, it seems like a quite bad idea to have such a high limit for\n> > pgstat_track_activity_query_size. The overhead such a high value has will\n> > surprise people...\n> \n> Still it could have some value for some users with large analytical\n> queries where the syslogger is not going to be a bottleneck? It seems\n> too late to me to change that, but perhaps the docs could be improved\n> to tell that using a too high value can have performance consequences,\n> while mentioning the maximum value.\n\nI don't think the issue is syslogger, the problem is that suddenly accessing\npg_stat_activity requires gigabytes of memory. That's insane.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 27 Sep 2023 18:37:11 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_stat_get_activity(): integer overflow due to (int) * (int)\n for MemoryContextAllocHuge()" }, { "msg_contents": "On Thu, Sep 28, 2023 at 12:53 AM Michael Paquier <[email protected]> wrote:\n>\n> On Wed, Sep 27, 2023 at 10:29:25AM -0700, Andres Freund wrote:\n> > I don't think going for size_t is a viable path for fixing this. I'm pretty\n> > sure the initial patch would trigger a type mismatch from guc_tables.c - we\n> > don't have infrastructure for size_t GUCs.\n>\n> Nothing marked as PGDLLIMPORT uses size_t in the tree currently, FWIW.\n>\n> > Perhaps we ought to error out (in BackendStatusShmemSize() or such) if\n> > pgstat_track_activity_query_size * MaxBackends >= 4GB?\n>\n> Yeah, agreed that putting a check like that could catch errors more\n> quickly.\n\nHi,\n\nv3 attached. I had a problem coming out with a better error message,\nso suggestions are welcome. The cast still needs to be present as per\nabove suggestion as 3GB is still valid buf size and still was causing\ninteger overflow. We just throw an error on >= 4GB with v3.\n\n-J.", "msg_date": "Thu, 28 Sep 2023 11:01:14 +0200", "msg_from": "Jakub Wartak <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pg_stat_get_activity(): integer overflow due to (int) * (int) for\n MemoryContextAllocHuge()" }, { "msg_contents": "On Thu, Sep 28, 2023 at 11:01:14AM +0200, Jakub Wartak wrote:\n> v3 attached. I had a problem coming out with a better error message,\n> so suggestions are welcome. The cast still needs to be present as per\n> above suggestion as 3GB is still valid buf size and still was causing\n> integer overflow. We just throw an error on >= 4GB with v3.\n\n+/* Safety net to prevent requesting huge memory by each query to pg_stat_activity */\n+#define PGSTAT_MAX_ACTIVITY_BUF_SIZE 4 * 1024 * 1024 * 1024L\n \n-\tsize = add_size(size,\n-\t\t\t\t\tmul_size(pgstat_track_activity_query_size, NumBackendStatSlots));\n+\tpgstat_track_size = mul_size(pgstat_track_activity_query_size,\n+\t\t\t\t\tNumBackendStatSlots);\n+\tif(pgstat_track_size >= PGSTAT_MAX_ACTIVITY_BUF_SIZE)\n+\t\telog(FATAL, \"too big Backend Activity Buffer allocation of %zu bytes\", pgstat_track_size);\n+\tsize = add_size(size, pgstat_track_size);\n\nThat should be enough to put in a hardcoded 4GB safety limit, while\nmul_size() detects it at a higher range. Note, however, that elog()\nis only used for internal errors that users should never face, but\nthis one can come from a misconfiguration. This would be better as an\nereport(), with ERRCODE_CONFIG_FILE_ERROR as errcode, I guess.\n\n\"Backend Activity Buffer\" is the name of the internal struct. Sure,\nit shows up on the system views for shmem, but I wouldn't use this\nterm in a user-facing error message. Perhaps something like \"size\nrequested for backend status is out of range\" would be cleaner. Other\nideas are welcome.\n--\nMichael", "msg_date": "Fri, 29 Sep 2023 11:00:07 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_stat_get_activity(): integer overflow due to (int) * (int)\n for MemoryContextAllocHuge()" }, { "msg_contents": "On Fri, Sep 29, 2023 at 4:00 AM Michael Paquier <[email protected]> wrote:\n>\n> On Thu, Sep 28, 2023 at 11:01:14AM +0200, Jakub Wartak wrote:\n> > v3 attached. I had a problem coming out with a better error message,\n> > so suggestions are welcome. The cast still needs to be present as per\n> > above suggestion as 3GB is still valid buf size and still was causing\n> > integer overflow. We just throw an error on >= 4GB with v3.\n>\n> +/* Safety net to prevent requesting huge memory by each query to pg_stat_activity */\n> +#define PGSTAT_MAX_ACTIVITY_BUF_SIZE 4 * 1024 * 1024 * 1024L\n>\n> - size = add_size(size,\n> - mul_size(pgstat_track_activity_query_size, NumBackendStatSlots));\n> + pgstat_track_size = mul_size(pgstat_track_activity_query_size,\n> + NumBackendStatSlots);\n> + if(pgstat_track_size >= PGSTAT_MAX_ACTIVITY_BUF_SIZE)\n> + elog(FATAL, \"too big Backend Activity Buffer allocation of %zu bytes\", pgstat_track_size);\n> + size = add_size(size, pgstat_track_size);\n>\n> That should be enough to put in a hardcoded 4GB safety limit, while\n> mul_size() detects it at a higher range. Note, however, that elog()\n> is only used for internal errors that users should never face, but\n> this one can come from a misconfiguration. This would be better as an\n> ereport(), with ERRCODE_CONFIG_FILE_ERROR as errcode, I guess.\n>\n> \"Backend Activity Buffer\" is the name of the internal struct. Sure,\n> it shows up on the system views for shmem, but I wouldn't use this\n> term in a user-facing error message. Perhaps something like \"size\n> requested for backend status is out of range\" would be cleaner. Other\n> ideas are welcome.\n\nHi Michael,\n\nI've attached v4 that covers your suggestions.\n\n-J.", "msg_date": "Mon, 2 Oct 2023 10:22:06 +0200", "msg_from": "Jakub Wartak <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pg_stat_get_activity(): integer overflow due to (int) * (int) for\n MemoryContextAllocHuge()" }, { "msg_contents": "On Mon, Oct 02, 2023 at 10:22:06AM +0200, Jakub Wartak wrote:\n> I've attached v4 that covers your suggestions.\n\nHmm. I was looking at all that and pondered quite a bit about the\naddition of the restriction when starting up the server, particularly\nwhy there would be any need to include it in the same commit as the\none fixing the arguments given to AllocHuge(). At the end, as the\nrestriction goes a bit against 8d0ddccec636, I have removed it and \nwent to the simplest solution of adding the casts to Size, which is\nthe same thing as we do in all the other callers that deal with signed\nvariables (like mbutils.c).\n\nRegarding any restrictions, perhaps we should improve the docs, at\nleast. It would be better to discuss that separately.\n--\nMichael", "msg_date": "Tue, 3 Oct 2023 16:08:49 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_stat_get_activity(): integer overflow due to (int) * (int)\n for MemoryContextAllocHuge()" } ]
[ { "msg_contents": "postgres=# SET enable_seqscan = off;\nSET\npostgres=# explain select * from t;\n QUERY PLAN \n-------------------------------------------------------------------------\n Seq Scan on t (cost=10000000000.00..10000000023.60 rows=1360 width=32)\n(1 row)\n\npostgres=# select * from t;\n a \n-------\n [1,2]\n(1 row)\n\n\n", "msg_date": "Thu, 28 Sep 2023 00:37:41 +0800", "msg_from": "jacktby jacktby <[email protected]>", "msg_from_op": true, "msg_subject": "Set enable_seqscan doesn't take effect?" }, { "msg_contents": "Hi,\n\nOn 2023-09-28 00:37:41 +0800, jacktby jacktby wrote:\n> postgres=# SET enable_seqscan = off;\n> SET\n> postgres=# explain select * from t;\n> QUERY PLAN \n> -------------------------------------------------------------------------\n> Seq Scan on t (cost=10000000000.00..10000000023.60 rows=1360 width=32)\n> (1 row)\n> \n> postgres=# select * from t;\n> a \n> -------\n> [1,2]\n> (1 row)\n\nSorry to be the grump here:\n\nYou start several threads a week, often clearly not having done much, if any,\nprior research. Often even sending the same question to multiple lists. It\nshould not be hard to find an explanation for the behaviour you see here.\n\npgsql-hackers isn't a \"do my work for me service\". We're hacking on\npostgres. It's fine to occasionally ask for direction, but you're very clearly\nexceeding that.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 27 Sep 2023 10:07:21 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Set enable_seqscan doesn't take effect?" }, { "msg_contents": "On Thu, 28 Sept 2023 at 13:47, jacktby jacktby <[email protected]> wrote:\n>\n> postgres=# SET enable_seqscan = off;\n> SET\n> postgres=# explain select * from t;\n> QUERY PLAN\n> -------------------------------------------------------------------------\n> Seq Scan on t (cost=10000000000.00..10000000023.60 rows=1360 width=32)\n> (1 row)\n>\n> postgres=# select * from t;\n> a\n> -------\n> [1,2]\n> (1 row)>\n\nIt may be worth checking what the manual says about this. I guess if\nyou assume the meaning from the GUC name, then it might be surprising.\nIf you're still surprised after reading the manual then please report\nback.\n\nDavid\n\n\n", "msg_date": "Thu, 28 Sep 2023 14:11:06 +1300", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Set enable_seqscan doesn't take effect?" }, { "msg_contents": "\n> 2023年9月28日 01:07,Andres Freund <[email protected]> 写道:\n> \n> Hi,\n> \n> On 2023-09-28 00:37:41 +0800, jacktby jacktby wrote:\n>> postgres=# SET enable_seqscan = off;\n>> SET\n>> postgres=# explain select * from t;\n>> QUERY PLAN \n>> -------------------------------------------------------------------------\n>> Seq Scan on t (cost=10000000000.00..10000000023.60 rows=1360 width=32)\n>> (1 row)\n>> \n>> postgres=# select * from t;\n>> a \n>> -------\n>> [1,2]\n>> (1 row)\n> \n> Sorry to be the grump here:\n> \n> You start several threads a week, often clearly not having done much, if any,\n> prior research. Often even sending the same question to multiple lists. It\n> should not be hard to find an explanation for the behaviour you see here.\n> \n> pgsql-hackers isn't a \"do my work for me service\". We're hacking on\n> postgres. It's fine to occasionally ask for direction, but you're very clearly\n> exceeding that.\n> \n> Greetings,\n> \n> Andres Freund\nI’m so sorry for that. I think I’m not very familiar with pg, so I ask many naive questions. And I apologize for my behavior.\n\n", "msg_date": "Thu, 28 Sep 2023 11:45:13 +0800", "msg_from": "jacktby jacktby <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Set enable_seqscan doesn't take effect?" }, { "msg_contents": "\n> 2023年9月28日 01:07,Andres Freund <[email protected]> 写道:\n> \n> Hi,\n> \n> On 2023-09-28 00:37:41 +0800, jacktby jacktby wrote:\n>> postgres=# SET enable_seqscan = off;\n>> SET\n>> postgres=# explain select * from t;\n>> QUERY PLAN \n>> -------------------------------------------------------------------------\n>> Seq Scan on t (cost=10000000000.00..10000000023.60 rows=1360 width=32)\n>> (1 row)\n>> \n>> postgres=# select * from t;\n>> a \n>> -------\n>> [1,2]\n>> (1 row)\n> \n> Sorry to be the grump here:\n> \n> You start several threads a week, often clearly not having done much, if any,\n> prior research. Often even sending the same question to multiple lists. It\n> should not be hard to find an explanation for the behaviour you see here.\n> \n> pgsql-hackers isn't a \"do my work for me service\". We're hacking on\n> postgres. It's fine to occasionally ask for direction, but you're very clearly\n> exceeding that.\n> \n> Greetings,\n> \n> Andres Freund\nI’m so sorry for that. I think I’m not very familiar with pg, so I ask many naive questions. And I apologize for my behavior.\n\n", "msg_date": "Thu, 28 Sep 2023 11:45:34 +0800", "msg_from": "jacktby jacktby <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Set enable_seqscan doesn't take effect?" }, { "msg_contents": "On Wednesday, September 27, 2023, jacktby jacktby <[email protected]> wrote:\n\n> postgres=# SET enable_seqscan = off;\n> SET\n> postgres=# explain select * from t;\n> QUERY PLAN\n> -------------------------------------------------------------------------\n> Seq Scan on t (cost=10000000000.00..10000000023.60 rows=1360 width=32)\n\n\nIt wouldn’t cost 10billion to return the first tuple if that setting wasn’t\nworking.\n\nThat is the “discouragement” the documentation is referring to.\n\nI do agree the wording in the docs could be improved since it is a bit\nself-contradictory and unspecific, but it is explicitly clear a plan with\nsequential scan can still be chosen even with this set to off.\n\nDavid J.\n\nOn Wednesday, September 27, 2023, jacktby jacktby <[email protected]> wrote:postgres=# SET enable_seqscan = off;\nSET\npostgres=# explain select * from t;\n                               QUERY PLAN                                \n-------------------------------------------------------------------------\n Seq Scan on t  (cost=10000000000.00..10000000023.60 rows=1360 width=32)It wouldn’t cost 10billion to return the first tuple if that setting wasn’t working.That is the “discouragement” the documentation is referring to.I do agree the wording in the docs could be improved since it is a bit self-contradictory and unspecific, but it is explicitly clear a plan with sequential scan can still be chosen even with this set to off.David J.", "msg_date": "Wed, 27 Sep 2023 21:26:06 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Set enable_seqscan doesn't take effect?" }, { "msg_contents": "\n> 2023年9月28日 01:07,Andres Freund <[email protected]> 写道:\n> \n> Hi,\n> \n> On 2023-09-28 00:37:41 +0800, jacktby jacktby wrote:\n>> postgres=# SET enable_seqscan = off;\n>> SET\n>> postgres=# explain select * from t;\n>> QUERY PLAN \n>> -------------------------------------------------------------------------\n>> Seq Scan on t (cost=10000000000.00..10000000023.60 rows=1360 width=32)\n>> (1 row)\n>> \n>> postgres=# select * from t;\n>> a \n>> -------\n>> [1,2]\n>> (1 row)\n> \n> Sorry to be the grump here:\n> \n> You start several threads a week, often clearly not having done much, if any,\n> prior research. Often even sending the same question to multiple lists. It\n> should not be hard to find an explanation for the behaviour you see here.\n> \n> pgsql-hackers isn't a \"do my work for me service\". We're hacking on\n> postgres. It's fine to occasionally ask for direction, but you're very clearly\n> exceeding that.\n> \n> Greetings,\n> \n> Andres Freund\nI’m so sorry for that. I think I’m not very familiar with pg, so I ask many naive questions. And I apologize for my behavior.\n\n", "msg_date": "Thu, 28 Sep 2023 15:38:28 +0800", "msg_from": "jacktby jacktby <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Set enable_seqscan doesn't take effect?" }, { "msg_contents": "> 2023年9月28日 12:26,David G. Johnston <[email protected]> 写道:\n> \n> On Wednesday, September 27, 2023, jacktby jacktby <[email protected] <mailto:[email protected]>> wrote:\n>> postgres=# SET enable_seqscan = off;\n>> SET\n>> postgres=# explain select * from t;\n>> QUERY PLAN \n>> -------------------------------------------------------------------------\n>> Seq Scan on t (cost=10000000000.00..10000000023.60 rows=1360 width=32)\n> \n> It wouldn’t cost 10billion to return the first tuple if that setting wasn’t working.\n> \n> That is the “discouragement” the documentation is referring to.\n> \n> I do agree the wording in the docs could be improved since it is a bit self-contradictory and unspecific, but it is explicitly clear a plan with sequential scan can still be chosen even with this set to off.\n> \n> David J.\n> \nYes, I think that’s it.Thanks.\n2023年9月28日 12:26,David G. Johnston <[email protected]> 写道:On Wednesday, September 27, 2023, jacktby jacktby <[email protected]> wrote:postgres=# SET enable_seqscan = off;\nSET\npostgres=# explain select * from t;\n                               QUERY PLAN                                \n-------------------------------------------------------------------------\n Seq Scan on t  (cost=10000000000.00..10000000023.60 rows=1360 width=32)It wouldn’t cost 10billion to return the first tuple if that setting wasn’t working.That is the “discouragement” the documentation is referring to.I do agree the wording in the docs could be improved since it is a bit self-contradictory and unspecific, but it is explicitly clear a plan with sequential scan can still be chosen even with this set to off.David J.\nYes, I think that’s it.Thanks.", "msg_date": "Thu, 28 Sep 2023 15:46:48 +0800", "msg_from": "jacktby jacktby <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Set enable_seqscan doesn't take effect?" }, { "msg_contents": "On Thu, Sep 28, 2023 at 03:38:28PM +0800, jacktby jacktby wrote:\n> > You start several threads a week, often clearly not having done much, if any,\n> > prior research. Often even sending the same question to multiple lists. It\n> > should not be hard to find an explanation for the behaviour you see here.\n> > \n> > pgsql-hackers isn't a \"do my work for me service\". We're hacking on\n> > postgres. It's fine to occasionally ask for direction, but you're very clearly\n> > exceeding that.\n> > \n> > Greetings,\n> > \n> > Andres Freund\n> I’m so sorry for that. I think I’m not very familiar with pg, so I ask many naive questions. And I apologize for my behavior.\n\nI think you might find our IRC channel a more natural fit for getting\nyour questions answered:\n\n\thttps://www.postgresql.org/community/irc/\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Thu, 28 Sep 2023 09:57:15 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Set enable_seqscan doesn't take effect?" } ]
[ { "msg_contents": "Hello.\n\nI noticed that -c option of initdb behaves in an unexpected\nmanner. Identical variable names with variations in letter casing are\ntreated as distinct variables.\n\n$ initdb -cwork_mem=100 -cWORK_MEM=1000 -cWork_mem=2000\n...\n$ grep -i 'work_mem ' $PGDATA/postgresql.conf\nwork_mem = 100 # min 64kB\n#maintenance_work_mem = 64MB # min 1MB\n#autovacuum_work_mem = -1 # min 1MB, or -1 to use maintenance_work_mem\n#logical_decoding_work_mem = 64MB # min 64kB\nWORK_MEM = 1000\nWork_mem = 2000\n\n\nThe original intention was apparently to overwrite the existing\nline. Furthermore, I surmise that preserving the original letter\ncasing is preferable.\n\nAttached is a patch to address this issue. To retrieve the variable\nname from the existing line, the code is slightly restructured.\nAlternatively, should we just down-case the provided variable names?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Thu, 28 Sep 2023 16:49:04 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <[email protected]>", "msg_from_op": true, "msg_subject": "initdb's -c option behaves wrong way?" }, { "msg_contents": "> On 28 Sep 2023, at 09:49, Kyotaro Horiguchi <[email protected]> wrote:\n\n> I noticed that -c option of initdb behaves in an unexpected\n> manner. Identical variable names with variations in letter casing are\n> treated as distinct variables.\n> \n> $ initdb -cwork_mem=100 -cWORK_MEM=1000 -cWork_mem=2000\n\n> The original intention was apparently to overwrite the existing\n> line. Furthermore, I surmise that preserving the original letter\n> casing is preferable.\n\nCircling back to an old thread, I agree that this seems odd and the original\nthread [0] makes no mention of it being intentional.\n\nThe patch seems fine to me, the attached version is rebased, pgindented and has\na test case added.\n\n--\nDaniel Gustafsson\n\n[0] https://www.postgresql.org/message-id/flat/2844176.1674681919%40sss.pgh.pa.us", "msg_date": "Tue, 16 Jan 2024 12:17:23 +0100", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: initdb's -c option behaves wrong way?" }, { "msg_contents": "On 2024-Jan-16, Daniel Gustafsson wrote:\n\n> > On 28 Sep 2023, at 09:49, Kyotaro Horiguchi <[email protected]> wrote:\n> \n> > I noticed that -c option of initdb behaves in an unexpected\n> > manner. Identical variable names with variations in letter casing are\n> > treated as distinct variables.\n> > \n> > $ initdb -cwork_mem=100 -cWORK_MEM=1000 -cWork_mem=2000\n> \n> > The original intention was apparently to overwrite the existing\n> > line. Furthermore, I surmise that preserving the original letter\n> > casing is preferable.\n> \n> Circling back to an old thread, I agree that this seems odd and the original\n> thread [0] makes no mention of it being intentional.\n\nHmm, how about raising an error if multiple options are given targetting\nthe same GUC?\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Wed, 17 Jan 2024 17:15:53 +0100", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: initdb's -c option behaves wrong way?" }, { "msg_contents": "Alvaro Herrera <[email protected]> writes:\n> Hmm, how about raising an error if multiple options are given targetting\n> the same GUC?\n\nI don't see any reason to do that. The underlying configuration\nfiles don't complain about duplicate entries, they just take the\nlast setting.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 17 Jan 2024 12:05:27 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: initdb's -c option behaves wrong way?" }, { "msg_contents": "> On 17 Jan 2024, at 18:05, Tom Lane <[email protected]> wrote:\n> \n> Alvaro Herrera <[email protected]> writes:\n>> Hmm, how about raising an error if multiple options are given targetting\n>> the same GUC?\n> \n> I don't see any reason to do that. The underlying configuration\n> files don't complain about duplicate entries, they just take the\n> last setting.\n\nAgreed, I think the patch as it stands now where it replaces case insensitive,\nwhile keeping the original casing, is the best path forward. The issue exist\nin 16 as well so I propose a backpatch to there.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Wed, 17 Jan 2024 20:30:48 +0100", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: initdb's -c option behaves wrong way?" }, { "msg_contents": "On Wed, Jan 17, 2024 at 2:31 PM Daniel Gustafsson <[email protected]> wrote:\n> Agreed, I think the patch as it stands now where it replaces case insensitive,\n> while keeping the original casing, is the best path forward. The issue exist\n> in 16 as well so I propose a backpatch to there.\n\nI like that approach, too. I could go either way on back-patching. It\ndoesn't seem important, but it probably won't break anything, either.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 17 Jan 2024 14:57:35 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: initdb's -c option behaves wrong way?" }, { "msg_contents": "Robert Haas <[email protected]> writes:\n> On Wed, Jan 17, 2024 at 2:31 PM Daniel Gustafsson <[email protected]> wrote:\n>> Agreed, I think the patch as it stands now where it replaces case insensitive,\n>> while keeping the original casing, is the best path forward. The issue exist\n>> in 16 as well so I propose a backpatch to there.\n\n> I like that approach, too. I could go either way on back-patching. It\n> doesn't seem important, but it probably won't break anything, either.\n\nWe just added this switch in 16, so I think backpatching to keep all\nthe branches consistent is a good idea.\n\nHowever ... I don't like the patch much. It seems to have left\nthe code in a rather random state. Why, for example, didn't you\nkeep all the code that constructs the \"newline\" value together?\nI'm also unconvinced by the removal of the \"assume there's only\none match\" break --- if we need to support multiple matches\nI doubt that's sufficient.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 17 Jan 2024 15:26:10 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: initdb's -c option behaves wrong way?" }, { "msg_contents": "I wrote:\n> However ... I don't like the patch much. It seems to have left\n> the code in a rather random state. Why, for example, didn't you\n> keep all the code that constructs the \"newline\" value together?\n\nAfter thinking about it a bit more, I don't see why you didn't just\ns/strncmp/strncasecmp/ and call it good. The messiness seems to be\na result of your choice to replace the GUC's case as shown in the\nfile with the case used on the command line, which is not better\nIMO. We don't change our mind about the canonical spelling of a\nGUC because somebody varied the case in a SET command.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 17 Jan 2024 15:33:02 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: initdb's -c option behaves wrong way?" }, { "msg_contents": "> On 17 Jan 2024, at 21:33, Tom Lane <[email protected]> wrote:\n> \n> I wrote:\n>> However ... I don't like the patch much. It seems to have left\n>> the code in a rather random state. Why, for example, didn't you\n>> keep all the code that constructs the \"newline\" value together?\n> \n> After thinking about it a bit more, I don't see why you didn't just\n> s/strncmp/strncasecmp/ and call it good. The messiness seems to be\n> a result of your choice to replace the GUC's case as shown in the\n> file with the case used on the command line, which is not better\n> IMO. We don't change our mind about the canonical spelling of a\n> GUC because somebody varied the case in a SET command.\n\nThe original patch was preserving the case of the file throwing away the case\nfrom the commandline. The attached is a slimmed down version which only moves\nthe assembly of newline to the different cases (replace vs. new) keeping the\nrest of the code intact. Keeping it in one place gets sort of messy too since\nit needs to use different values for a replacement and a new variable. Not\nsure if there is a cleaner approach?\n\n--\nDaniel Gustafsson", "msg_date": "Wed, 17 Jan 2024 23:47:41 +0100", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: initdb's -c option behaves wrong way?" }, { "msg_contents": "Thank you for upicking this up.\n\nAt Wed, 17 Jan 2024 23:47:41 +0100, Daniel Gustafsson <[email protected]> wrote in \n> > On 17 Jan 2024, at 21:33, Tom Lane <[email protected]> wrote:\n> > \n> > I wrote:\n> >> However ... I don't like the patch much. It seems to have left\n> >> the code in a rather random state. Why, for example, didn't you\n> >> keep all the code that constructs the \"newline\" value together?\n> > \n> > After thinking about it a bit more, I don't see why you didn't just\n> > s/strncmp/strncasecmp/ and call it good. The messiness seems to be\n> > a result of your choice to replace the GUC's case as shown in the\n> > file with the case used on the command line, which is not better\n> > IMO. We don't change our mind about the canonical spelling of a\n> > GUC because somebody varied the case in a SET command.\n> \n> The original patch was preserving the case of the file throwing away the case\n> from the commandline. The attached is a slimmed down version which only moves\n> the assembly of newline to the different cases (replace vs. new) keeping the\n> rest of the code intact. Keeping it in one place gets sort of messy too since\n> it needs to use different values for a replacement and a new variable. Not\n> sure if there is a cleaner approach?\n\nJust to clarify, the current code breaks out after processing the\nfirst matching line. I haven't changed that behavior. The reason I\nmoved the rewrite processing code out of the loop was I wanted to\navoid adding more lines that are executed only once into the\nloop. However, it is in exchange of additional complexity to remember\nthe original spelling of the parameter name. Personally, I believe\nseparating the search and rewrite processing is better, but I'm not\nparticularly attached to the approach, so either way is fine with me.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 18 Jan 2024 13:49:44 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <[email protected]>", "msg_from_op": true, "msg_subject": "Re: initdb's -c option behaves wrong way?" }, { "msg_contents": "> On 18 Jan 2024, at 05:49, Kyotaro Horiguchi <[email protected]> wrote:\n> \n> Thank you for upicking this up.\n> \n> At Wed, 17 Jan 2024 23:47:41 +0100, Daniel Gustafsson <[email protected]> wrote in \n>>> On 17 Jan 2024, at 21:33, Tom Lane <[email protected]> wrote:\n>>> \n>>> I wrote:\n>>>> However ... I don't like the patch much. It seems to have left\n>>>> the code in a rather random state. Why, for example, didn't you\n>>>> keep all the code that constructs the \"newline\" value together?\n>>> \n>>> After thinking about it a bit more, I don't see why you didn't just\n>>> s/strncmp/strncasecmp/ and call it good. The messiness seems to be\n>>> a result of your choice to replace the GUC's case as shown in the\n>>> file with the case used on the command line, which is not better\n>>> IMO. We don't change our mind about the canonical spelling of a\n>>> GUC because somebody varied the case in a SET command.\n>> \n>> The original patch was preserving the case of the file throwing away the case\n>> from the commandline. The attached is a slimmed down version which only moves\n>> the assembly of newline to the different cases (replace vs. new) keeping the\n>> rest of the code intact. Keeping it in one place gets sort of messy too since\n>> it needs to use different values for a replacement and a new variable. Not\n>> sure if there is a cleaner approach?\n> \n> Just to clarify, the current code breaks out after processing the\n> first matching line. I haven't changed that behavior.\n\nYup.\n\n> The reason I\n> moved the rewrite processing code out of the loop was I wanted to\n> avoid adding more lines that are executed only once into the\n> loop. However, it is in exchange of additional complexity to remember\n> the original spelling of the parameter name. Personally, I believe\n> separating the search and rewrite processing is better, but I'm not\n> particularly attached to the approach, so either way is fine with me.\n\nI'll give some more time for opinions, then I'll go ahead with one of the\npatches with a backpatch to v16.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Fri, 19 Jan 2024 15:53:59 +0100", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: initdb's -c option behaves wrong way?" }, { "msg_contents": "Daniel Gustafsson <[email protected]> writes:\n> I'll give some more time for opinions, then I'll go ahead with one of the\n> patches with a backpatch to v16.\n\nOK, I take back my previous complaint that just using strncasecmp\nwould be enough --- I was misremembering how the code worked, and\nyou're right that it would use the spelling from the command line\nrather than that from the file.\n\nHowever, the v3 patch is flat broken. You can't assume you have\nfound a match until you've verified that whitespace and '='\nappear next --- otherwise, you'll be fooled by a guc_name that\nis a prefix of one that appears in the file. I think the simplest\nchange that does it correctly is as attached.\n\nNow, since I was the one who wrote the existing code, I freely\nconcede that I may have too high an opinion of its readability.\nMaybe some larger refactoring is appropriate. But I didn't\nfind that what you'd done improved the readability. I'd still\nrather keep the newline-assembly code together as much as possible.\nMaybe we should do the search part before we build any of newline?\n\n\t\t\tregards, tom lane", "msg_date": "Fri, 19 Jan 2024 11:33:57 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: initdb's -c option behaves wrong way?" }, { "msg_contents": "> On 19 Jan 2024, at 17:33, Tom Lane <[email protected]> wrote:\n> \n> Daniel Gustafsson <[email protected]> writes:\n>> I'll give some more time for opinions, then I'll go ahead with one of the\n>> patches with a backpatch to v16.\n> \n> OK, I take back my previous complaint that just using strncasecmp\n> would be enough --- I was misremembering how the code worked, and\n> you're right that it would use the spelling from the command line\n> rather than that from the file.\n> \n> However, the v3 patch is flat broken. You can't assume you have\n> found a match until you've verified that whitespace and '='\n> appear next --- otherwise, you'll be fooled by a guc_name that\n> is a prefix of one that appears in the file. I think the simplest\n> change that does it correctly is as attached.\n\nThe attached v4 looks good to me, I don't think it moves the needle wrt\nreadability (ie, no need to move the search). Feel free to go ahead with that\nversion, or let me know if you want me to deal with it.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Mon, 22 Jan 2024 11:09:14 +0100", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: initdb's -c option behaves wrong way?" }, { "msg_contents": "> On 22 Jan 2024, at 11:09, Daniel Gustafsson <[email protected]> wrote:\n\n> Feel free to go ahead with that\n> version, or let me know if you want me to deal with it.\n\nI took the liberty to add this to the upcoming CF to make sure we don't lose\ntrack of it.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Thu, 29 Feb 2024 11:23:06 +0100", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: initdb's -c option behaves wrong way?" }, { "msg_contents": "Daniel Gustafsson <[email protected]> writes:\n> I took the liberty to add this to the upcoming CF to make sure we don't lose\n> track of it.\n\nThanks for doing that, because the cfbot pointed out a problem:\nI should have written pg_strncasecmp not strncasecmp. If this\nversion tests cleanly, I'll push it.\n\n\t\t\tregards, tom lane", "msg_date": "Sun, 03 Mar 2024 20:01:36 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: initdb's -c option behaves wrong way?" }, { "msg_contents": "\n\n> On 4 Mar 2024, at 02:01, Tom Lane <[email protected]> wrote:\n> \n> Daniel Gustafsson <[email protected]> writes:\n>> I took the liberty to add this to the upcoming CF to make sure we don't lose\n>> track of it.\n> \n> Thanks for doing that, because the cfbot pointed out a problem:\n> I should have written pg_strncasecmp not strncasecmp. If this\n> version tests cleanly, I'll push it.\n\n+1, LGTM.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Mon, 4 Mar 2024 09:39:39 +0100", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: initdb's -c option behaves wrong way?" }, { "msg_contents": "At Mon, 4 Mar 2024 09:39:39 +0100, Daniel Gustafsson <[email protected]> wrote in \n> \n> \n> > On 4 Mar 2024, at 02:01, Tom Lane <[email protected]> wrote:\n> > \n> > Daniel Gustafsson <[email protected]> writes:\n> >> I took the liberty to add this to the upcoming CF to make sure we don't lose\n> >> track of it.\n> > \n> > Thanks for doing that, because the cfbot pointed out a problem:\n> > I should have written pg_strncasecmp not strncasecmp. If this\n> > version tests cleanly, I'll push it.\n> \n> +1, LGTM.\n\nThank you for fixing this, Tom!\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 05 Mar 2024 13:04:55 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <[email protected]>", "msg_from_op": true, "msg_subject": "Re: initdb's -c option behaves wrong way?" } ]
[ { "msg_contents": "Hello,\n\nOne of our customers recently complained that his pg_dump stopped \nabruptly with the message \"out of memory\".\n\nAfter some time, we understood that the 20 million of large objects were \nresponsible for the huge memory usage (more than 10 GB) by pg_dump.\n\nI think a more useful error message would help for such cases. Indeed, \nit's not always possible to ask the client to run pg_dump with \n\"valgrind --tool=massif\" on its server.\n\nNow, I understand that we don't want to add too much to the frontend \ncode, it would be a lot of pain for not much gain.\n\nBut I wonder if we could add some checks in a few strategic places, as \nin the attached patch.\n\nThe idea would be to print a more useful error message in most of the \ncases, and keep the basic \"out of memory\" for the remaining ones.\n\nI haven't try to get the patch ready for review, I know that the format \nof the messages isn't right, I'd like to know what do you think of the \nidea, first.\n\nBest regards,\nFrédéric", "msg_date": "Thu, 28 Sep 2023 10:14:21 +0200", "msg_from": "=?UTF-8?Q?Fr=c3=a9d=c3=a9ric_Yhuel?= <[email protected]>", "msg_from_op": true, "msg_subject": "Out of memory error handling in frontend code" }, { "msg_contents": "> On 28 Sep 2023, at 10:14, Frédéric Yhuel <[email protected]> wrote:\n\n> After some time, we understood that the 20 million of large objects were responsible for the huge memory usage (more than 10 GB) by pg_dump.\n\nThis sounds like a known issue [0] which has been reported several times, and\none we should get around to fixing sometime.\n\n> I think a more useful error message would help for such cases.\n\nKnowing that this is case that pops up, I agree that we could do better around\nthe messaging here.\n\n> I haven't try to get the patch ready for review, I know that the format of the messages isn't right, I'd like to know what do you think of the idea, first.\n\nI don't think adding more details is a bad idea, but it shouldn't require any\nknowledge about internals so I think messages like the one below needs to be\nreworded to be more helpful.\n\n+\tif (loinfo == NULL)\n+\t{\n+\t\tpg_fatal(\"getLOs: out of memory\");\n+\t}\n\n--\nDaniel Gustafsson\n\n[0] https://www.postgresql.org/message-id/[email protected]\n\n\n\n", "msg_date": "Thu, 28 Sep 2023 14:02:57 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Out of memory error handling in frontend code" }, { "msg_contents": "Hi Daniel,\n\nThank you for your answer.\n\nOn 9/28/23 14:02, Daniel Gustafsson wrote:\n>> On 28 Sep 2023, at 10:14, Frédéric Yhuel <[email protected]> wrote:\n> \n>> After some time, we understood that the 20 million of large objects were responsible for the huge memory usage (more than 10 GB) by pg_dump.\n> \n> This sounds like a known issue [0] which has been reported several times, and\n> one we should get around to fixing sometime.\n>\n\nIndeed, I saw some of these reports afterwards :)\n\n>> I think a more useful error message would help for such cases.\n> \n> Knowing that this is case that pops up, I agree that we could do better around\n> the messaging here.\n> \n>> I haven't try to get the patch ready for review, I know that the format of the messages isn't right, I'd like to know what do you think of the idea, first.\n> \n> I don't think adding more details is a bad idea, but it shouldn't require any\n> knowledge about internals so I think messages like the one below needs to be\n> reworded to be more helpful.\n> \n> +\tif (loinfo == NULL)\n> +\t{\n> +\t\tpg_fatal(\"getLOs: out of memory\");\n> +\t}\n> \n\nOK, here is a second version of the patch.\n\nI didn't try to fix the path getLOs -> AssignDumpId -> catalogid_insert \n-> [...] -> catalogid_allocate, but that's annoying because it amounts \nto 11% of the memory allocations from valgrind's output.\n\nBest regards,\nFrédéric", "msg_date": "Fri, 6 Oct 2023 09:04:33 +0200", "msg_from": "=?UTF-8?Q?Fr=c3=a9d=c3=a9ric_Yhuel?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Out of memory error handling in frontend code" } ]
[ { "msg_contents": "Hello,\n\nI have a question about the automatic removal of unused WAL files. When\nloading data with pg_restore (200Gb) we noticed that a lot of WALs files\nare generated and they are not purged automatically nor recycled despite\nfrequent checkpoints, then pg_wal folder (150Gb) fill and become out of\nspace.\nWe have a cluster of 2 members (1 primary and 1 standby) with Postgres\nversion 14.9 and 2 barman server, slots are only configured for barman,\nbarman is version 3.7.\nThe archive command is desactivated (archive_command=':')\nI use pg_archivecleanup (with the wal file generated from the last\ncheckpoint in parameter) to remove files manually before the limit of 150Gb\nso that the restore can terminate.\n\nWhy does postgres do not this cleanup automatically, which part of the code\nis responsible for removing or recycling the wals?\n\nThanks for your help\n\nFabrice\n\nHello,I have a question about the automatic removal of unused WAL files. When loading data with pg_restore (200Gb) we noticed that a lot of WALs files are generated and they are not purged automatically nor recycled despite frequent checkpoints, then pg_wal folder (150Gb) fill and become out of space.We have a cluster of 2 members (1 primary and 1 standby) with Postgres version 14.9 and 2 barman server, slots are only configured for barman, barman is version 3.7.The archive command is desactivated (archive_command=':')I use pg_archivecleanup (with the wal file generated from the last checkpoint in parameter) to remove files manually before the limit of 150Gb so that the restore can terminate.Why does postgres do not this cleanup automatically, which part of the code is responsible for removing or recycling the wals?Thanks for your helpFabrice", "msg_date": "Thu, 28 Sep 2023 13:11:41 +0200", "msg_from": "Fabrice Chapuis <[email protected]>", "msg_from_op": true, "msg_subject": "wal recycling problem" }, { "msg_contents": "## Fabrice Chapuis ([email protected]):\n\n> We have a cluster of 2 members (1 primary and 1 standby) with Postgres\n> version 14.9 and 2 barman server, slots are only configured for barman,\n> barman is version 3.7.\n\nThe obvious question here is: can both of those barmans keep up with\nyour database, or are you seeing WAL retention due to exactly these\nreplication slots? (Check pg_replication_slots).\n\nRegards,\nChristoph\n\n-- \nSpare Space\n\n\n", "msg_date": "Thu, 28 Sep 2023 19:59:08 +0200", "msg_from": "Christoph Moench-Tegeder <[email protected]>", "msg_from_op": false, "msg_subject": "Re: wal recycling problem" }, { "msg_contents": "Yes, barman replication can keep up with primary, wals segments size are\nunder max_wal_size (24Gb in our configuration)\n\nHere is pg_replication_slots view:\n\nbarman_ge physical f t 39409 1EE2/49000000\nreserved f\nbarman_be physical f t 39434 1EE2/3D000000\nreserved f\n\non the other hand there are 2 slots for logical replication which display\nstatus extended. I don't understand why given that the confirmed_flush_lsn\nfield that is up to date. The restart_lsn remains frozen, for what reason?\n\npgoutput │ logical │ 2667915 │ db019a00 │ f │ t │ 1880162\n│ │ 68512101 │ 1ECA/37C3F1B8 │ 1EE2/4D6BDCF8 │ extended │\n │ f │\npgoutput │ logical │ 2668584 │ db038a00 │ f │ t │\n 363230 │ │ 68512101 │ 1ECA/37C3F1B8 │ 1EE2/4D6BDCF8 │\nextended │ │ f │\n\nRegards\nFabrice\n\nOn Thu, Sep 28, 2023 at 7:59 PM Christoph Moench-Tegeder <[email protected]>\nwrote:\n\n> ## Fabrice Chapuis ([email protected]):\n>\n> > We have a cluster of 2 members (1 primary and 1 standby) with Postgres\n> > version 14.9 and 2 barman server, slots are only configured for barman,\n> > barman is version 3.7.\n>\n> The obvious question here is: can both of those barmans keep up with\n> your database, or are you seeing WAL retention due to exactly these\n> replication slots? (Check pg_replication_slots).\n>\n> Regards,\n> Christoph\n>\n> --\n> Spare Space\n>\n\nYes, barman replication can keep up with primary, wals segments size are under max_wal_size (24Gb in our configuration)Here is  pg_replication_slots view:barman_ge      physical  f          t            39409 1EE2/49000000  reserved    f         barman_be      physical  f          t            39434 1EE2/3D000000 reserved    f         on the other hand there are 2 slots for logical replication which display status extended. I don't understand why given that the confirmed_flush_lsn field that is up to date. The restart_lsn remains frozen, for what reason?pgoutput │ logical   │ 2667915 │ db019a00 │ f         │ t      │    1880162 │      │     68512101 │ 1ECA/37C3F1B8 │ 1EE2/4D6BDCF8       │ extended   │               │ f         │pgoutput │ logical   │ 2668584 │ db038a00 │ f         │ t      │     363230  │      │     68512101 │ 1ECA/37C3F1B8 │ 1EE2/4D6BDCF8       │ extended   │               │ f         │RegardsFabriceOn Thu, Sep 28, 2023 at 7:59 PM Christoph Moench-Tegeder <[email protected]> wrote:## Fabrice Chapuis ([email protected]):\n\n> We have a cluster of 2 members (1 primary and 1 standby) with Postgres\n> version 14.9 and 2 barman server, slots are only configured for barman,\n> barman is version 3.7.\n\nThe obvious question here is: can both of those barmans keep up with\nyour database, or are you seeing WAL retention due to exactly these\nreplication slots? (Check pg_replication_slots).\n\nRegards,\nChristoph\n\n-- \nSpare Space", "msg_date": "Fri, 29 Sep 2023 10:48:09 +0200", "msg_from": "Fabrice Chapuis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: wal recycling problem" }, { "msg_contents": "Hi,\n\n## Fabrice Chapuis ([email protected]):\n\n> on the other hand there are 2 slots for logical replication which display\n> status extended. I don't understand why given that the confirmed_flush_lsn\n> field that is up to date. The restart_lsn remains frozen, for what reason?\n\nThere you have it - \"extended\" means \"holding wal\". And as long as the\nrestart_lsn does not advance, checkpointer cannot free any wal beyond\nthat lsn. My first idea would be some long-running (or huge) transaction\nwhich is in process (active or still being streamed). I'd recommend\nlooking into what the clients on these slots are doing.\n\nRegards,\nChristoph\n\n-- \nSpare Space\n\n\n", "msg_date": "Mon, 2 Oct 2023 12:06:37 +0200", "msg_from": "Christoph Moench-Tegeder <[email protected]>", "msg_from_op": false, "msg_subject": "Re: wal recycling problem" }, { "msg_contents": "Thanks Christoph for your message.\nNow I understand why the wals are preserved if logical replication is\nconfigured and enabled. The problem is that when a large volume of data is\nloaded into a database, for example during a pg_restore, the wal sender\nprocess associated with the logical replication slot will have to decrypt\nall of the wals generated during this operation which will take a long time\nand the restart_lsn will not be modified.\n From a conceptual point of view I think that specific wals per subscription\nshould be used and stored in the pg_replslot folder in order to avoid\nworking directly on the wals of the instance.\n\nWhat do you think about this proposal?\n\nRegards\n\nFabrice\n\n\nOn Mon, Oct 2, 2023 at 12:06 PM Christoph Moench-Tegeder <[email protected]>\nwrote:\n\n> Hi,\n>\n> ## Fabrice Chapuis ([email protected]):\n>\n> > on the other hand there are 2 slots for logical replication which display\n> > status extended. I don't understand why given that the\n> confirmed_flush_lsn\n> > field that is up to date. The restart_lsn remains frozen, for what\n> reason?\n>\n> There you have it - \"extended\" means \"holding wal\". And as long as the\n> restart_lsn does not advance, checkpointer cannot free any wal beyond\n> that lsn. My first idea would be some long-running (or huge) transaction\n> which is in process (active or still being streamed). I'd recommend\n> looking into what the clients on these slots are doing.\n>\n> Regards,\n> Christoph\n>\n> --\n> Spare Space\n>\n\nThanks Christoph for your message.Now I understand why the wals are preserved if logical replication is configured and enabled. The problem is that when a large volume of data is loaded into a database, for example during a pg_restore, the wal sender process associated with the logical replication slot will have to decrypt all of the wals generated during this operation which will take a long time and the restart_lsn will not be modified.From a conceptual point of view I think that specific wals per subscription should be used and stored in the pg_replslot folder in order to avoid working directly on the wals of the instance. What do you think about this proposal?Regards FabriceOn Mon, Oct 2, 2023 at 12:06 PM Christoph Moench-Tegeder <[email protected]> wrote:Hi,\n\n## Fabrice Chapuis ([email protected]):\n\n> on the other hand there are 2 slots for logical replication which display\n> status extended. I don't understand why given that the confirmed_flush_lsn\n> field that is up to date. The restart_lsn remains frozen, for what reason?\n\nThere you have it - \"extended\" means \"holding wal\". And as long as the\nrestart_lsn does not advance, checkpointer cannot free any wal beyond\nthat lsn. My first idea would be some long-running (or huge) transaction\nwhich is in process (active or still being streamed). I'd recommend\nlooking into what the clients on these slots are doing.\n\nRegards,\nChristoph\n\n-- \nSpare Space", "msg_date": "Fri, 6 Oct 2023 12:49:49 +0200", "msg_from": "Fabrice Chapuis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: wal recycling problem" }, { "msg_contents": "## Fabrice Chapuis ([email protected]):\n\n> From a conceptual point of view I think that specific wals per subscription\n> should be used and stored in the pg_replslot folder in order to avoid\n> working directly on the wals of the instance.\n> What do you think about this proposal?\n\nI think that would open a wholly new can of worms.\nThe most obvious point here is: that WAL is primarily generated for\nthe operation of the database itself - it's our kind of transaction\nlog, or \"Redo Log\" in other systems' lingo. Replication (be it physical\nor logical) is a secondary purpose (an obvious and important one, but\nstill secondary).\nHow would you know which part of WAL is needed for any specific\nreplication slot? You'd have to decode and filter it, and already\nyou're back at square one. How would you handle multiple replications\nfor the same table (in the same publication, or even over multiple\n(overlapping) publications) - do you multiply the WAL?\n\nFor now, we have \"any replication using replication slots, be it logical\nor physical replication, retains WAL up to max_slot_wal_keep_size\n(or \"unlimited\" if not set - and on PostgreSQL 12 and before); and you\nneed to monitor the state of your replication slots\", which is a\ntotally usabe rule, I think.\n\nRegards,\nChristoph\n\n-- \nSpare Space\n\n\n", "msg_date": "Sun, 8 Oct 2023 15:57:54 +0200", "msg_from": "Christoph Moench-Tegeder <[email protected]>", "msg_from_op": false, "msg_subject": "Re: wal recycling problem" }, { "msg_contents": "Thanks for your feedback\n> How would you know which part of WAL is needed for any specific\nreplication slot?\nchange are captured for each published table and written twice, once in\nthe current wal and once in the slot-specific wal\n> How would you handle multiple replications\nfor the same table\nadded information about from which publication a table belongs is entered\nin the wal slot\n> be it logical or physical replication, retains WAL up to\nmax_slot_wal_keep_size\nok but if max_slot_wal_keep_size is exceeded the changes are lost and all\nof the replicated tables must be resynchronized\n\nRegards\n\nFabrice\n\nOn Sun, Oct 8, 2023 at 3:57 PM Christoph Moench-Tegeder <[email protected]>\nwrote:\n\n> ## Fabrice Chapuis ([email protected]):\n>\n> > From a conceptual point of view I think that specific wals per\n> subscription\n> > should be used and stored in the pg_replslot folder in order to avoid\n> > working directly on the wals of the instance.\n> > What do you think about this proposal?\n>\n> I think that would open a wholly new can of worms.\n> The most obvious point here is: that WAL is primarily generated for\n> the operation of the database itself - it's our kind of transaction\n> log, or \"Redo Log\" in other systems' lingo. Replication (be it physical\n> or logical) is a secondary purpose (an obvious and important one, but\n> still secondary).\n> How would you know which part of WAL is needed for any specific\n> replication slot? You'd have to decode and filter it, and already\n> you're back at square one. How would you handle multiple replications\n> for the same table (in the same publication, or even over multiple\n> (overlapping) publications) - do you multiply the WAL?\n>\n> For now, we have \"any replication using replication slots, be it logical\n> or physical replication, retains WAL up to max_slot_wal_keep_size\n> (or \"unlimited\" if not set - and on PostgreSQL 12 and before); and you\n> need to monitor the state of your replication slots\", which is a\n> totally usabe rule, I think.\n>\n> Regards,\n> Christoph\n>\n> --\n> Spare Space\n>\n\nThanks for your feedback> How would you know which part of WAL is needed for any specificreplication slot?change are captured for each published table and written twice,  once in the current wal and once in the slot-specific wal> How would you handle multiple replicationsfor the same table added information about from which publication a table belongs is entered in the wal slot> be it logical or physical replication, retains WAL up to max_slot_wal_keep_sizeok but if max_slot_wal_keep_size is exceeded the changes are lost and all of the replicated tables must be resynchronizedRegardsFabriceOn Sun, Oct 8, 2023 at 3:57 PM Christoph Moench-Tegeder <[email protected]> wrote:## Fabrice Chapuis ([email protected]):\n\n> From a conceptual point of view I think that specific wals per subscription\n> should be used and stored in the pg_replslot folder in order to avoid\n> working directly on the wals of the instance.\n> What do you think about this proposal?\n\nI think that would open a wholly new can of worms.\nThe most obvious point here is: that WAL is primarily generated for\nthe operation of the database itself - it's our kind of transaction\nlog, or \"Redo Log\" in other systems' lingo. Replication (be it physical\nor logical) is a secondary purpose (an obvious and important one, but\nstill secondary).\nHow would you know which part of WAL is needed for any specific\nreplication slot? You'd have to decode and filter it, and already\nyou're back at square one. How would you handle multiple replications\nfor the same table (in the same publication, or even over multiple\n(overlapping) publications) - do you multiply the WAL?\n\nFor now, we have \"any replication using replication slots, be it logical\nor physical replication, retains WAL up to max_slot_wal_keep_size\n(or \"unlimited\" if not set - and on PostgreSQL 12 and before); and you\nneed to monitor the state of your replication slots\", which is a\ntotally usabe rule, I think.\n\nRegards,\nChristoph\n\n-- \nSpare Space", "msg_date": "Tue, 17 Oct 2023 12:37:05 +0200", "msg_from": "Fabrice Chapuis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: wal recycling problem" } ]
[ { "msg_contents": "In [1] Andrey highlighted that I'd forgotten to add print_path()\nhandling for TidRangePaths in bb437f995.\n\nI know the OPTIMIZER_DEBUG code isn't exactly well used. I never\npersonally use it and I work quite a bit in the planner, however, if\nwe're keeping it, I thought maybe we might get the memo of missing\npaths a bit sooner if we add an Assert(false) in the default cases.\n\nIs the attached worthwhile?\n\nDavid\n\n[1] https://www.postgresql.org/message-id/[email protected]", "msg_date": "Fri, 29 Sep 2023 00:23:00 +1300", "msg_from": "David Rowley <[email protected]>", "msg_from_op": true, "msg_subject": "Is it worth adding Assert(false) for unknown paths in print_path()?" }, { "msg_contents": "On 2023-Sep-29, David Rowley wrote:\n\n> In [1] Andrey highlighted that I'd forgotten to add print_path()\n> handling for TidRangePaths in bb437f995.\n> \n> I know the OPTIMIZER_DEBUG code isn't exactly well used. I never\n> personally use it and I work quite a bit in the planner, however, if\n> we're keeping it, I thought maybe we might get the memo of missing\n> paths a bit sooner if we add an Assert(false) in the default cases.\n> \n> Is the attached worthwhile?\n\nHmm, if we had a buildfarm animal with OPTIMIZER_DEBUG turned on, then I\nagree it would catch the omission quickly.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"¿Cómo puedes confiar en algo que pagas y que no ves,\ny no confiar en algo que te dan y te lo muestran?\" (Germán Poo)\n\n\n", "msg_date": "Thu, 28 Sep 2023 15:41:06 +0200", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is it worth adding Assert(false) for unknown paths in\n print_path()?" }, { "msg_contents": "David Rowley <[email protected]> writes:\n> In [1] Andrey highlighted that I'd forgotten to add print_path()\n> handling for TidRangePaths in bb437f995.\n\n> I know the OPTIMIZER_DEBUG code isn't exactly well used. I never\n> personally use it and I work quite a bit in the planner, however, if\n> we're keeping it, I thought maybe we might get the memo of missing\n> paths a bit sooner if we add an Assert(false) in the default cases.\n\nFWIW, I'd argue for dropping print_path rather than continuing to\nmaintain it. I never use it, finding pprint() to serve the need\nbetter and more reliably. However, assuming that we keep it ...\n\n> Is the attached worthwhile?\n\n... I think this is actually counterproductive. It will certainly\nnot help draw the notice of anyone who wouldn't otherwise pay\nattention to print_path. Also, observe the extremely longstanding\npolicy decision in outNode's default: case:\n\n /*\n * This should be an ERROR, but it's too useful to be able to\n * dump structures that outNode only understands part of.\n */\n elog(WARNING, \"could not dump unrecognized node type: %d\",\n (int) nodeTag(obj));\n break;\n\nThe same argument applies to print_path, I should think.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 28 Sep 2023 10:23:43 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is it worth adding Assert(false) for unknown paths in\n print_path()?" }, { "msg_contents": "On Fri, 29 Sept 2023 at 03:23, Tom Lane <[email protected]> wrote:\n> FWIW, I'd argue for dropping print_path rather than continuing to\n> maintain it. I never use it, finding pprint() to serve the need\n> better and more reliably.\n\nThen perhaps we just need to open a thread with an appropriate subject\nto check if anyone finds it useful and if we don't get any response\nafter some number of weeks, just remove it from master.\n\nDavid\n\n\n", "msg_date": "Fri, 29 Sep 2023 03:55:33 +1300", "msg_from": "David Rowley <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Is it worth adding Assert(false) for unknown paths in\n print_path()?" } ]
[ { "msg_contents": "Greetings, everyone!\nI would like to offer my patch on the problem of removing values from enums\n\nIt adds support for expression ALTER TYPE <enum_name> DROP VALUE\n<value_name>\n\nAdded:\n1. expression in grammar\n2. function to drop enum values\n3. regression tests\n4. documentation", "msg_date": "Thu, 28 Sep 2023 19:13:29 +0700", "msg_from": "=?UTF-8?B?0JTQsNC90LjQuyDQodGC0L7Qu9C/0L7QstGB0LrQuNGF?=\n <[email protected]>", "msg_from_op": true, "msg_subject": "Allow deleting enumerated values from an existing enumerated data\n type" }, { "msg_contents": "On 9/28/23 14:13, Данил Столповских wrote:\n> Greetings, everyone!\n> I would like to offer my patch on the problem of removing values from enums\n> \n> It adds support for expression ALTER TYPE <enum_name> DROP VALUE\n> <value_name>\n> \n> Added:\n> 1. expression in grammar\n> 2. function to drop enum values\n> 3. regression tests\n> 4. documentation\n\nThanks for this patch that a lot of people want.\n\nHowever, it does not seem to address the issue of how to handle the \ndropped value being in the high key of an index. Until we solve that \nproblem (and maybe others), this kind of patch is insufficient to add \nthe feature.\n-- \nVik Fearing\n\n\n\n", "msg_date": "Thu, 28 Sep 2023 15:50:58 +0200", "msg_from": "Vik Fearing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Allow deleting enumerated values from an existing enumerated data\n type" }, { "msg_contents": "=?UTF-8?B?0JTQsNC90LjQuyDQodGC0L7Qu9C/0L7QstGB0LrQuNGF?= <[email protected]> writes:\n> I would like to offer my patch on the problem of removing values from enums\n> It adds support for expression ALTER TYPE <enum_name> DROP VALUE\n> <value_name>\n\nThis does not fix any of the hard problems that caused us not to\nhave such a feature to begin with. Notably, what happens to\nstored data of the enum type if it is a now-deleted value?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 28 Sep 2023 10:28:59 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Allow deleting enumerated values from an existing enumerated data\n type" }, { "msg_contents": "\nOn 2023-09-28 Th 10:28, Tom Lane wrote:\n> =?UTF-8?B?0JTQsNC90LjQuyDQodGC0L7Qu9C/0L7QstGB0LrQuNGF?= <[email protected]> writes:\n>> I would like to offer my patch on the problem of removing values from enums\n>> It adds support for expression ALTER TYPE <enum_name> DROP VALUE\n>> <value_name>\n> This does not fix any of the hard problems that caused us not to\n> have such a feature to begin with. Notably, what happens to\n> stored data of the enum type if it is a now-deleted value?\n>\n> \t\t\t\n\n\nI wonder if we could have a boolean flag in pg_enum, indicating that \nsetting an enum to that value was forbidden. That wouldn't delete the \nvalue but it wouldn't show up in enum_range and friends. We'd have to \nteach pg_dump and pg_upgrade to deal with it, but that shouldn't be too \nhard.\n\n\nPerhaps the command could be something like\n\n\nALTER TYPE enum_name DISABLE value;\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Thu, 28 Sep 2023 14:35:57 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Allow deleting enumerated values from an existing enumerated data\n type" }, { "msg_contents": "Andrew Dunstan <[email protected]> writes:\n> I wonder if we could have a boolean flag in pg_enum, indicating that \n> setting an enum to that value was forbidden.\n\nYeah, but that still offers no coherent solution to the problem of\nwhat happens if there's a table that already contains such a value.\nIt doesn't seem terribly useful to forbid new entries if you can't\nget rid of old ones.\n\nAdmittedly, a DISABLE flag would at least offer a chance at a\nrace-condition-free scan to verify that no such values remain\nin tables. But as somebody already mentioned upthread, that\nwouldn't guarantee that the value doesn't appear in non-leaf\nindex pages. So basically you could never get rid of the pg_enum\nrow, short of a full dump and restore.\n\nWe went through all these points years ago when the enum feature\nwas first developed, as I recall. Nobody thought that the ability\nto remove an enum value was worth the amount of complexity it'd\nentail.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 28 Sep 2023 14:46:22 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Allow deleting enumerated values from an existing enumerated data\n type" }, { "msg_contents": "On 9/28/23 20:46, Tom Lane wrote:\n> Andrew Dunstan <[email protected]> writes:\n>> I wonder if we could have a boolean flag in pg_enum, indicating that\n>> setting an enum to that value was forbidden.\n> \n> Yeah, but that still offers no coherent solution to the problem of\n> what happens if there's a table that already contains such a value.\n> It doesn't seem terribly useful to forbid new entries if you can't\n> get rid of old ones.\n> \n> Admittedly, a DISABLE flag would at least offer a chance at a\n> race-condition-free scan to verify that no such values remain\n> in tables. But as somebody already mentioned upthread, that\n> wouldn't guarantee that the value doesn't appear in non-leaf\n> index pages. So basically you could never get rid of the pg_enum\n> row, short of a full dump and restore.\n> \n> We went through all these points years ago when the enum feature\n> was first developed, as I recall. Nobody thought that the ability\n> to remove an enum value was worth the amount of complexity it'd\n> entail.\n\nThis issue comes up regularly (although far from often). Do we want to \nput some comments right where would-be implementors would be sure to see it?\n\nAttached is an example of what I mean. Documentation is intentionally \nomitted.\n-- \nVik Fearing", "msg_date": "Fri, 29 Sep 2023 02:36:53 +0200", "msg_from": "Vik Fearing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Allow deleting enumerated values from an existing enumerated data\n type" }, { "msg_contents": "Vik Fearing <[email protected]> writes:\n> On 9/28/23 20:46, Tom Lane wrote:\n>> We went through all these points years ago when the enum feature\n>> was first developed, as I recall. Nobody thought that the ability\n>> to remove an enum value was worth the amount of complexity it'd\n>> entail.\n\n> This issue comes up regularly (although far from often). Do we want to \n> put some comments right where would-be implementors would be sure to see it?\n\nPerhaps. I'd be kind of inclined to leave the \"yet\" out of \"not yet\nimplemented\" in the error message, as that wording sounds like we just\nhaven't got round to it.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 28 Sep 2023 21:17:09 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Allow deleting enumerated values from an existing enumerated data\n type" }, { "msg_contents": "\nOn 2023-09-28 Th 14:46, Tom Lane wrote:\n> Andrew Dunstan <[email protected]> writes:\n>> I wonder if we could have a boolean flag in pg_enum, indicating that\n>> setting an enum to that value was forbidden.\n> Yeah, but that still offers no coherent solution to the problem of\n> what happens if there's a table that already contains such a value.\n> It doesn't seem terribly useful to forbid new entries if you can't\n> get rid of old ones.\n>\n> Admittedly, a DISABLE flag would at least offer a chance at a\n> race-condition-free scan to verify that no such values remain\n> in tables. But as somebody already mentioned upthread, that\n> wouldn't guarantee that the value doesn't appear in non-leaf\n> index pages. So basically you could never get rid of the pg_enum\n> row, short of a full dump and restore.\n\n\nor a reindex, I think, although getting the timing right would be messy. \nI agree the non-leaf index pages are rather pesky in dealing with this.\n\nI guess the alternative would be to create a new enum with the \nto-be-deleted value missing, and then alter the column type to the new \nenum type. For massive tables that would be painful.\n\n\n>\n> We went through all these points years ago when the enum feature\n> was first developed, as I recall. Nobody thought that the ability\n> to remove an enum value was worth the amount of complexity it'd\n> entail.\n>\n> \t\n\n\nThat's quite true, and I accept my part in this history. But I'm not \nsure we were correct back then.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Fri, 29 Sep 2023 16:50:07 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Allow deleting enumerated values from an existing enumerated data\n type" }, { "msg_contents": "On 9/29/23 03:17, Tom Lane wrote:\n> Vik Fearing <[email protected]> writes:\n>> On 9/28/23 20:46, Tom Lane wrote:\n>>> We went through all these points years ago when the enum feature\n>>> was first developed, as I recall. Nobody thought that the ability\n>>> to remove an enum value was worth the amount of complexity it'd\n>>> entail.\n> \n>> This issue comes up regularly (although far from often). Do we want to\n>> put some comments right where would-be implementors would be sure to see it?\n> \n> Perhaps. I'd be kind of inclined to leave the \"yet\" out of \"not yet\n> implemented\" in the error message, as that wording sounds like we just\n> haven't got round to it.\n\nI see your point, but should we be dissuading people who might want to \nwork on solving those problems? I intentionally did not document that \nthis syntax exists so the only people seeing the message are those who \njust try it, and those wanting to write a patch like Danil did.\n\nNo one except you has said anything about this patch. I think it would \nbe good to commit it, wordsmithing aside.\n-- \nVik Fearing\n\n\n\n", "msg_date": "Mon, 2 Oct 2023 19:49:22 +0200", "msg_from": "Vik Fearing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Allow deleting enumerated values from an existing enumerated data\n type" }, { "msg_contents": "Vik Fearing <[email protected]> writes:\n\n> On 9/29/23 03:17, Tom Lane wrote:\n>> Vik Fearing <[email protected]> writes:\n>>> On 9/28/23 20:46, Tom Lane wrote:\n>>>> We went through all these points years ago when the enum feature\n>>>> was first developed, as I recall. Nobody thought that the ability\n>>>> to remove an enum value was worth the amount of complexity it'd\n>>>> entail.\n>> \n>>> This issue comes up regularly (although far from often). Do we want to\n>>> put some comments right where would-be implementors would be sure to see it?\n>> Perhaps. I'd be kind of inclined to leave the \"yet\" out of \"not yet\n>> implemented\" in the error message, as that wording sounds like we just\n>> haven't got round to it.\n>\n> I see your point, but should we be dissuading people who might want to\n> work on solving those problems? I intentionally did not document that \n> this syntax exists so the only people seeing the message are those who\n> just try it, and those wanting to write a patch like Danil did.\n>\n> No one except you has said anything about this patch. I think it would\n> be good to commit it, wordsmithing aside.\n\nFWIW I'm +1 on this patch, and with Tom on dropping the \"yet\". To me it\nmakes it sound like we intend to implement it soon (fsvo).\n\n- ilmari\n\n\n", "msg_date": "Mon, 02 Oct 2023 19:07:53 +0100", "msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Allow deleting enumerated values from an existing enumerated\n data type" }, { "msg_contents": "Andrew Dunstan <[email protected]> writes:\n> On 2023-09-28 Th 14:46, Tom Lane wrote:\n>> We went through all these points years ago when the enum feature\n>> was first developed, as I recall. Nobody thought that the ability\n>> to remove an enum value was worth the amount of complexity it'd\n>> entail.\n\n> That's quite true, and I accept my part in this history. But I'm not \n> sure we were correct back then.\n\nI think it was the right decision at the time, given that the\nalternative was to not add the enum feature at all. The question\nis whether we're now prepared to do additional work to support DROP\nVALUE. But the tradeoff still looks pretty grim, because the\nproblems haven't gotten any easier.\n\nI've been trying to convince myself that there'd be some value in\nyour idea about a DISABLE flag, but I feel like there's something\nmissing there. The easiest implementation would be to have\nenum_in() reject disabled values, while still allowing enum_out()\nto print them. But that doesn't seem to lead to nice results:\n\n* You couldn't do, say,\n\tSELECT * FROM my_table WHERE enum_col = 'disabled_value'\nto look for rows that you need to clean up. I guess this'd work:\n\tSELECT * FROM my_table WHERE enum_col::text = 'disabled_value'\nbut it's un-obvious and could not use an index on enum_col.\n\n* If any of the disabled values remain, dump/restore would fail.\nMaybe you'd want that to be sure you got rid of them, but it sounds\nlike a foot-gun. (\"What do you mean, our only backup doesn't\nrestore?\") Probably people would wish for two different behaviors:\neither don't list disabled values at all in the dumped CREATE TYPE,\nor do list them but disable them only after loading data. The latter\napproach will still have problems in data-only restores, but there are\ncomparable hazards with things like foreign keys. (pg_upgrade would\nneed still a third behavior, perhaps.)\n\nOn the whole this is still a long way from a clean easy-to-use DROP\nfacility, and it adds a lot of complexity of its own for pg_dump.\nSo I'm not sure we want to build it.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 02 Oct 2023 16:20:28 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Allow deleting enumerated values from an existing enumerated data\n type" }, { "msg_contents": "On 10/2/23 20:07, Dagfinn Ilmari Mannsåker wrote:\n> Vik Fearing <[email protected]> writes:\n> \n>> No one except you has said anything about this patch. I think it would\n>> be good to commit it, wordsmithing aside.\n> \n> FWIW I'm +1 on this patch,\n\nThanks.\n\n> and with Tom on dropping the \"yet\". To me it\n> makes it sound like we intend to implement it soon (fsvo).\nI am not fundamentally opposed to it, nor to any other wordsmithing the \ncommitter (probably Tom) wants to do. The main point of the patch is to \nlist at least some of the problems that need to be solved in a correct \nimplementation.\n-- \nVik Fearing\n\n\n\n", "msg_date": "Tue, 3 Oct 2023 02:21:21 +0200", "msg_from": "Vik Fearing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Allow deleting enumerated values from an existing enumerated data\n type" }, { "msg_contents": "Vik Fearing <[email protected]> writes:\n> On 10/2/23 20:07, Dagfinn Ilmari Mannsåker wrote:\n>> FWIW I'm +1 on this patch,\n\n> Thanks.\n\n>> and with Tom on dropping the \"yet\". To me it\n>> makes it sound like we intend to implement it soon (fsvo).\n\n> I am not fundamentally opposed to it, nor to any other wordsmithing the \n> committer (probably Tom) wants to do. The main point of the patch is to \n> list at least some of the problems that need to be solved in a correct \n> implementation.\n\nPushed with a bit more work on the text.\n\nI left out the regression test, as it seems like it'd add test cycles\nto little purpose. It won't do anything to improve the odds that\nsomeone finds this text.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 03 Oct 2023 11:44:15 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Allow deleting enumerated values from an existing enumerated data\n type" }, { "msg_contents": "On 10/3/23 17:44, Tom Lane wrote:\n> Vik Fearing <[email protected]> writes:\n>> On 10/2/23 20:07, Dagfinn Ilmari Mannsåker wrote:\n>>> FWIW I'm +1 on this patch,\n> \n>> Thanks.\n> \n>>> and with Tom on dropping the \"yet\". To me it\n>>> makes it sound like we intend to implement it soon (fsvo).\n> \n>> I am not fundamentally opposed to it, nor to any other wordsmithing the\n>> committer (probably Tom) wants to do. The main point of the patch is to\n>> list at least some of the problems that need to be solved in a correct\n>> implementation.\n> \n> Pushed with a bit more work on the text.\n> \n> I left out the regression test, as it seems like it'd add test cycles\n> to little purpose. It won't do anything to improve the odds that\n> someone finds this text.\n\nThanks!\n-- \nVik Fearing\n\n\n\n", "msg_date": "Tue, 3 Oct 2023 17:49:22 +0200", "msg_from": "Vik Fearing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Allow deleting enumerated values from an existing enumerated data\n type" }, { "msg_contents": "On Tue, 3 Oct 2023 at 22:49, Tom Lane <[email protected]> wrote:\n>\n> Andrew Dunstan <[email protected]> writes:\n> > On 2023-09-28 Th 14:46, Tom Lane wrote:\n> >> We went through all these points years ago when the enum feature\n> >> was first developed, as I recall. Nobody thought that the ability\n> >> to remove an enum value was worth the amount of complexity it'd\n> >> entail.\n>\n> > That's quite true, and I accept my part in this history. But I'm not\n> > sure we were correct back then.\n>\n> I think it was the right decision at the time, given that the\n> alternative was to not add the enum feature at all. The question\n> is whether we're now prepared to do additional work to support DROP\n> VALUE. But the tradeoff still looks pretty grim, because the\n> problems haven't gotten any easier.\n>\n> I've been trying to convince myself that there'd be some value in\n> your idea about a DISABLE flag, but I feel like there's something\n> missing there. The easiest implementation would be to have\n> enum_in() reject disabled values, while still allowing enum_out()\n> to print them. But that doesn't seem to lead to nice results:\n>\n> [...]\n>\n> On the whole this is still a long way from a clean easy-to-use DROP\n> facility, and it adds a lot of complexity of its own for pg_dump.\n> So I'm not sure we want to build it.\n\nI don't quite get what the hard problem is that we haven't already\nsolved for other systems:\nWe already can add additional constraints to domains (e.g. VALUE::int\n<> 4), which (according to docs) scan existing data columns for\nviolations. We already drop columns without rewriting the table to\nremove the column's data, and reject new data insertions for those\nstill-in-the-catalogs-but-inaccessible columns.\n\nSo, if a user wants to drop an enum value, why couldn't we \"just\" use\nthe DOMAIN facilities and 1.) add a constraint WHERE value NOT IN\n(deleted_values), and after validation of that constraint 2.) mark the\nenum value as deleted like we do with table column's pg_attribute\nentries?\n\nThe only real issue that I can think of is making sure that concurrent\nbackends don't modify this data, but that shouldn't be very different\nfrom the other locks we already have to take in e.g. ALTER TYPE ...\nDROP ATTRIBUTE.\n\nKind regards,\n\nMatthias van de Meent\n\n\n", "msg_date": "Tue, 3 Oct 2023 23:14:01 +0200", "msg_from": "Matthias van de Meent <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Allow deleting enumerated values from an existing enumerated data\n type" }, { "msg_contents": "Matthias van de Meent <[email protected]> writes:\n> I don't quite get what the hard problem is that we haven't already\n> solved for other systems:\n> We already can add additional constraints to domains (e.g. VALUE::int\n> <> 4), which (according to docs) scan existing data columns for\n> violations.\n\nThat's \"solved\" only for rather small values of \"solved\". While the\ncode does try to look for violations of the new constraint, there is\nno interlock against race conditions (ie, concurrent insertions of\na conflicting value). It doesn't check temp tables belonging to\nother backends, because it can't. And IIRC it doesn't look for\ninstances in metadata, such as stored views.\n\nThe reason we've considered that Good Enough(TM) for domain\nconstraints is that if something does sneak through those loopholes,\nnothing terribly surprising happens. You've got a value there that\nshouldn't be there, but that's mostly your own fault, and the\nsystem continues to behave sanely. Also you don't have any problem\nintrospecting what you've got, because such values will still print\nnormally. Plus, we can't really guarantee that users won't get\ninto such a state in other ways, for example if their constraint\nisn't really immutable.\n\nThe problem with dropping an enum value is that surprising things\nmight very well happen, because removal of the pg_enum row risks\ncomparisons failing if they involve the dropped value. Thus for\nexample you might find yourself with a broken index that fails\nall insertion and search attempts, even if you'd carefully removed\nevery user-visible instance of the doomed value: an instance of\nit high up in the index tree will break most index accesses.\nEven for values in user-visible places like views, the fact that\nenum_out will fail doesn't make it any easier to figure out what\nis wrong.\n\nWe might be able to get to a place where the surprise factor is\nlow enough to tolerate, but the domain-constraint precedent isn't\ngood enough for that IMO.\n\nAndrew's idea of DISABLE rather than full DROP is one way of\nameliorating these problems: comparisons would still work, and\nwe can still print a value that perhaps shouldn't have been there.\nBut it's not without other problems.\n\n> We already drop columns without rewriting the table to\n> remove the column's data, and reject new data insertions for those\n> still-in-the-catalogs-but-inaccessible columns.\n\nThose cases don't seem to have a lot of connection to the enum problem.\n\n> The only real issue that I can think of is making sure that concurrent\n> backends don't modify this data, but that shouldn't be very different\n> from the other locks we already have to take in e.g. ALTER TYPE ...\n> DROP ATTRIBUTE.\n\nI'd bet a good deal of money that those cases aren't too bulletproof.\nWe do not lock types simply because somebody has a value of the type\nin flight somewhere in a query, and the cost of doing so would be\nquite discouraging I fear. On the whole, I'd rather accept the\nidea that the DROP might not be completely watertight; but then\nwe have to work out the details of coping with orphaned values\nin an acceptable way.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 03 Oct 2023 17:46:51 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Allow deleting enumerated values from an existing enumerated data\n type" } ]
[ { "msg_contents": "Hi hackers,\n\nPlease find attached a patch proposal to $SUBJECT.\n\nThis patch allows the role provided in BackgroundWorkerInitializeConnection()\nand BackgroundWorkerInitializeConnectionByOid() to lack login authorization.\n\nIn InitPostgres(), in case of a background worker, authentication is not performed\n(PerformAuthentication() is not called), so having the role used to connect to the database\nlacking login authorization seems to make sense.\n\nWith this new flag in place, one could give \"high\" privileges to the role used to initialize\nthe background workers connections without any risk of seeing this role being used by a\n\"normal user\" to login.\n\nThe attached patch:\n\n- adds the new flag\n- adds documentation\n- adds testing\n\nLooking forward to your feedback,\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Thu, 28 Sep 2023 14:37:02 +0200", "msg_from": "\"Drouvot, Bertrand\" <[email protected]>", "msg_from_op": true, "msg_subject": "Add a new BGWORKER_BYPASS_ROLELOGINCHECK flag" }, { "msg_contents": "On Thu, Sep 28, 2023 at 02:37:02PM +0200, Drouvot, Bertrand wrote:\n> This patch allows the role provided in BackgroundWorkerInitializeConnection()\n> and BackgroundWorkerInitializeConnectionByOid() to lack login authorization.\n\nInteresting. Yes, there would be use cases for that, I suppose.\n\n> +\t\t\t uint32 flags,\n> \t\t\t char *out_dbname)\n> {\n\nThis may be more adapted with a bits32 for the flags.\n\n> +# Ask the background workers to connect with this role with the flag in place.\n> +$node->append_conf(\n> + 'postgresql.conf', q{\n> +worker_spi.role = 'nologrole'\n> +worker_spi.bypass_login_check = true\n> +});\n> +$node->restart;\n> +\n> +# An error message should not be issued.\n> +ok( !$node->log_contains(\n> + \"role \\\"nologrole\\\" is not permitted to log in\", $log_start),\n> + \"nologrole allowed to connect if BGWORKER_BYPASS_ROLELOGINCHECK is set\");\n> +\n> done_testing();\n\nIt would be cheaper to use a dynamic background worker for such tests.\nSomething that I've been tempted to do in this module is to extend the\namount of data that's given to bgw_main_arg when launching a worker\nwith worker_spi_launch(). How about extending the SQL function so as\nit is possible to give in input a role name (or a regrole), a database\nname (or a database OID) and a text[] for the flags? This would\nrequire a bit more refactoring, but this would be benefitial to show\nor one can pass down a full structure from the registration to the\nmain() routine. On top of that, it would make the addition of the new\nGUCs worker_spi.bypass_login_check and worker_spi.role unnecessary.\n\n> +# return the size of logfile of $node in bytes\n> +sub get_log_size\n> +{\n> + my ($node) = @_;\n> +\n> + return (stat $node->logfile)[7];\n> +}\n\nJust use -s here. See other tests that want to check the contents of\nthe logs from an offset.\n\n> - * Allow bypassing datallowconn restrictions when connecting to database\n> + * Allow bypassing datallowconn restrictions and login check when connecting\n> + * to database\n> */\n> -#define BGWORKER_BYPASS_ALLOWCONN 1\n> +#define BGWORKER_BYPASS_ALLOWCONN 0x0001\n> +#define BGWORKER_BYPASS_ROLELOGINCHECK 0x0002\n\nThe structure of the patch is inconsistent. These flags are in\nbgworker.h, but they are used also by InitPostgres(). Perhaps a\nsecond boolean flag would be OK rather than a second set of flags for\nInitPostgres() mapping with the bgworker set.\n--\nMichael", "msg_date": "Fri, 29 Sep 2023 15:19:47 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add a new BGWORKER_BYPASS_ROLELOGINCHECK flag" }, { "msg_contents": "Hi,\n\nOn 9/29/23 8:19 AM, Michael Paquier wrote:\n> On Thu, Sep 28, 2023 at 02:37:02PM +0200, Drouvot, Bertrand wrote:\n>> This patch allows the role provided in BackgroundWorkerInitializeConnection()\n>> and BackgroundWorkerInitializeConnectionByOid() to lack login authorization.\n> \n> Interesting. Yes, there would be use cases for that, I suppose.\n\nThanks for looking at it!\n\n> \n>> +\t\t\t uint32 flags,\n>> \t\t\t char *out_dbname)\n>> {\n> \n> This may be more adapted with a bits32 for the flags.\n\nDone that way in v2 attached.\n\n> \n>> +# Ask the background workers to connect with this role with the flag in place.\n>> +$node->append_conf(\n>> + 'postgresql.conf', q{\n>> +worker_spi.role = 'nologrole'\n>> +worker_spi.bypass_login_check = true\n>> +});\n>> +$node->restart;\n>> +\n>> +# An error message should not be issued.\n>> +ok( !$node->log_contains(\n>> + \"role \\\"nologrole\\\" is not permitted to log in\", $log_start),\n>> + \"nologrole allowed to connect if BGWORKER_BYPASS_ROLELOGINCHECK is set\");\n>> +\n>> done_testing();\n> \n> It would be cheaper to use a dynamic background worker for such tests.\n> Something that I've been tempted to do in this module is to extend the\n> amount of data that's given to bgw_main_arg when launching a worker\n> with worker_spi_launch(). How about extending the SQL function so as\n> it is possible to give in input a role name (or a regrole), a database\n> name (or a database OID) and a text[] for the flags? This would\n> require a bit more refactoring, but this would be benefitial to show\n> or one can pass down a full structure from the registration to the\n> main() routine. On top of that, it would make the addition of the new\n> GUCs worker_spi.bypass_login_check and worker_spi.role unnecessary.\n> \n\nI think that would make sense to have more flexibility in the worker_spi\nmodule. I think that could be done in a dedicated patch though. I think it makes\nmore sense to have the current patch \"focusing\" on this new flag (while adding a test\nabout it without too much refactoring). What about doing the worker_spi module\nre-factoring as a follow up of this one?\n\n>> +# return the size of logfile of $node in bytes\n>> +sub get_log_size\n>> +{\n>> + my ($node) = @_;\n>> +\n>> + return (stat $node->logfile)[7];\n>> +}\n> \n> Just use -s here.\n\nDone in v2 attached.\n\n> See other tests that want to check the contents of\n> the logs from an offset.\n> \n\nOh right, worth to modify 019_replslot_limit.pl, 002_corrupted.pl and\n001_pg_controldata.pl in a separate patch for consistency? (they are using\n\"(stat $node->logfile)[7]\" or \"(stat($pg_control))[7]\").\n\n>> - * Allow bypassing datallowconn restrictions when connecting to database\n>> + * Allow bypassing datallowconn restrictions and login check when connecting\n>> + * to database\n>> */\n>> -#define BGWORKER_BYPASS_ALLOWCONN 1\n>> +#define BGWORKER_BYPASS_ALLOWCONN 0x0001\n>> +#define BGWORKER_BYPASS_ROLELOGINCHECK 0x0002\n> \n> The structure of the patch is inconsistent. These flags are in\n> bgworker.h, but they are used also by InitPostgres(). Perhaps a\n> second boolean flag would be OK rather than a second set of flags for\n> InitPostgres() mapping with the bgworker set.\n\nI did not want initially to add an extra parameter to InitPostgres() but\nI agree it would make more sense. So done that way in v2.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Mon, 2 Oct 2023 10:01:04 +0200", "msg_from": "\"Drouvot, Bertrand\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add a new BGWORKER_BYPASS_ROLELOGINCHECK flag" }, { "msg_contents": "On Mon, Oct 02, 2023 at 10:01:04AM +0200, Drouvot, Bertrand wrote:\n> I think that would make sense to have more flexibility in the worker_spi\n> module. I think that could be done in a dedicated patch though. I\n> think it makes more sense to have the current patch \"focusing\" on\n> this new flag (while adding a test about it without too much\n> refactoring). What about doing the worker_spi module re-factoring\n> as a follow up of this one?\n\nI would do that first, as that's what I usually do, but I see also\nyour point that this is not mandatory. If you want, I could give it a\nshot tomorrow to see where it leads.\n\n> Oh right, worth to modify 019_replslot_limit.pl, 002_corrupted.pl and\n> 001_pg_controldata.pl in a separate patch for consistency? (they are using\n> \"(stat $node->logfile)[7]\" or \"(stat($pg_control))[7]\").\n\nIndeed, that's strange. Let's remove the dependency to stat here.\nThe other solution is slightly more elegant IMO, as we don't rely on\nthe position of the result from stat().\n--\nMichael", "msg_date": "Mon, 2 Oct 2023 17:17:59 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add a new BGWORKER_BYPASS_ROLELOGINCHECK flag" }, { "msg_contents": "Hi,\n\nOn 10/2/23 10:17 AM, Michael Paquier wrote:\n> On Mon, Oct 02, 2023 at 10:01:04AM +0200, Drouvot, Bertrand wrote:\n>> I think that would make sense to have more flexibility in the worker_spi\n>> module. I think that could be done in a dedicated patch though. I\n>> think it makes more sense to have the current patch \"focusing\" on\n>> this new flag (while adding a test about it without too much\n>> refactoring). What about doing the worker_spi module re-factoring\n>> as a follow up of this one?\n> \n> I would do that first, as that's what I usually do,\n\nThe reason I was thinking not doing that first is that there is no real use\ncase in the current worker_spi module test.\n\n> but I see also\n> your point that this is not mandatory. If you want, I could give it a\n> shot tomorrow to see where it leads.\n\nOh yeah that would be great (and maybe you already see a use case in the\ncurrent test). Thanks!\n\n>> Oh right, worth to modify 019_replslot_limit.pl, 002_corrupted.pl and\n>> 001_pg_controldata.pl in a separate patch for consistency? (they are using\n>> \"(stat $node->logfile)[7]\" or \"(stat($pg_control))[7]\").\n> \n> Indeed, that's strange. Let's remove the dependency to stat here.\n> The other solution is slightly more elegant IMO, as we don't rely on\n> the position of the result from stat().\n\nAgree, I will propose a new patch for this.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 2 Oct 2023 10:53:22 +0200", "msg_from": "\"Drouvot, Bertrand\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add a new BGWORKER_BYPASS_ROLELOGINCHECK flag" }, { "msg_contents": "On Mon, Oct 2, 2023 at 4:58 PM Drouvot, Bertrand\n<[email protected]> wrote:\n>\n> On 9/29/23 8:19 AM, Michael Paquier wrote:\n> > On Thu, Sep 28, 2023 at 02:37:02PM +0200, Drouvot, Bertrand wrote:\n> >> This patch allows the role provided in BackgroundWorkerInitializeConnection()\n> >> and BackgroundWorkerInitializeConnectionByOid() to lack login authorization.\n> >\n> > Interesting. Yes, there would be use cases for that, I suppose.\n\nCorrect. It allows the roles that don't have LOGIN capabilities to\nstart and use bg workers.\n\n> > This may be more adapted with a bits32 for the flags.\n>\n> Done that way in v2 attached.\n\nWhile I like the idea of the flag to skip login checks for bg workers,\nI don't quite like the APIs being changes InitializeSessionUserId and\nInitPostgres (adding a new input parameter),\nBackgroundWorkerInitializeConnection and\nBackgroundWorkerInitializeConnectionByOid (changing of input parameter\ntype) given that all of these functions are available for external\nmodules and will break things for sure.\n\nWhat if BGWORKER_BYPASS_ROLELOGINCHECK be part of bgw_flags? With\nthis, none of the API needs to be changed, so no compatibility\nproblems as such for external modules and the InitializeSessionUserId\ncan just do something like [1]. We might be tempted to add\nBGWORKER_BYPASS_ALLOWCONN also to bgw_flags, but I'd prefer not to do\nit for the same compatibility reasons.\n\nThoughts?\n\n[1]\ndiff --git a/src/backend/utils/init/miscinit.c\nb/src/backend/utils/init/miscinit.c\nindex 1e671c560c..27dcf052ab 100644\n--- a/src/backend/utils/init/miscinit.c\n+++ b/src/backend/utils/init/miscinit.c\n@@ -786,10 +786,17 @@ InitializeSessionUserId(const char *rolename, Oid roleid)\n */\n if (IsUnderPostmaster)\n {\n+ bool skip_check = false;\n+\n+ /* If asked, skip the role login check for background\nworkers. */\n+ if (IsBackgroundWorker &&\n+ (MyBgworkerEntry->bgw_flags &\nBGWORKER_BYPASS_ROLELOGINCHECK) != 0)\n+ skip_check = true;\n+\n /*\n * Is role allowed to login at all?\n */\n- if (!rform->rolcanlogin)\n+ if (!skip_check && !rform->rolcanlogin)\n ereport(FATAL,\n\n(errcode(ERRCODE_INVALID_AUTHORIZATION_SPECIFICATION),\n errmsg(\"role \\\"%s\\\" is not\npermitted to log in\",\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 3 Oct 2023 14:51:13 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add a new BGWORKER_BYPASS_ROLELOGINCHECK flag" }, { "msg_contents": "Hi,\n\nOn 10/3/23 11:21 AM, Bharath Rupireddy wrote:\n> On Mon, Oct 2, 2023 at 4:58 PM Drouvot, Bertrand\n> <[email protected]> wrote:\n>>\n>> On 9/29/23 8:19 AM, Michael Paquier wrote:\n>>> On Thu, Sep 28, 2023 at 02:37:02PM +0200, Drouvot, Bertrand wrote:\n>>>> This patch allows the role provided in BackgroundWorkerInitializeConnection()\n>>>> and BackgroundWorkerInitializeConnectionByOid() to lack login authorization.\n>>>\n>>> Interesting. Yes, there would be use cases for that, I suppose.\n> \n> Correct. It allows the roles that don't have LOGIN capabilities to\n> start and use bg workers.\n> \n>>> This may be more adapted with a bits32 for the flags.\n>>\n>> Done that way in v2 attached.\n> \n> While I like the idea of the flag to skip login checks for bg workers,\n> I don't quite like the APIs being changes InitializeSessionUserId and\n> InitPostgres (adding a new input parameter),\n> BackgroundWorkerInitializeConnection and\n> BackgroundWorkerInitializeConnectionByOid (changing of input parameter\n> type) given that all of these functions are available for external\n> modules and will break things for sure.\n> \n> What if BGWORKER_BYPASS_ROLELOGINCHECK be part of bgw_flags? With\n> this, none of the API needs to be changed, so no compatibility\n> problems as such for external modules and the InitializeSessionUserId\n> can just do something like [1]. We might be tempted to add\n> BGWORKER_BYPASS_ALLOWCONN also to bgw_flags, but I'd prefer not to do\n> it for the same compatibility reasons.\n> \n> Thoughts?\n> \n\nThanks for looking at it!\n\nI did some research and BGWORKER_BYPASS_ALLOWCONN has been added in eed1ce72e1 and\nat that time the bgw_flags did already exist.\n\nIn this the related thread [1], Tom mentioned:\n\n\"\nWe change exported APIs in new major versions all the time. As\nlong as it's just a question of an added parameter, people can deal\nwith it.\n\"\n\nAnd I agree with that.\n\nNow, I understand your point but it looks to me that bgw_flags is more\nabout the capabilities (Access to shared memory with BGWORKER_SHMEM_ACCESS\nor ability to establish database connection with BGWORKER_BACKEND_DATABASE_CONNECTION),\n\nWhile with BGWORKER_BYPASS_ROLELOGINCHECK (and BGWORKER_BYPASS_ALLOWCONN) it's more related to\nthe BGW behavior once the capability is in place.\n\nSo, I think I'm fine with the current proposal and don't see the need to move\nBGWORKER_BYPASS_ROLELOGINCHECK in bgw_flags.\n\n[1]: https://www.postgresql.org/message-id/22769.1519323861%40sss.pgh.pa.us\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 3 Oct 2023 14:15:48 +0200", "msg_from": "\"Drouvot, Bertrand\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add a new BGWORKER_BYPASS_ROLELOGINCHECK flag" }, { "msg_contents": "On Tue, Oct 3, 2023 at 5:45 PM Drouvot, Bertrand\n<[email protected]> wrote:\n>\n> > While I like the idea of the flag to skip login checks for bg workers,\n> > I don't quite like the APIs being changes InitializeSessionUserId and\n> > InitPostgres (adding a new input parameter),\n> > BackgroundWorkerInitializeConnection and\n> > BackgroundWorkerInitializeConnectionByOid (changing of input parameter\n> > type) given that all of these functions are available for external\n> > modules and will break things for sure.\n> >\n> > What if BGWORKER_BYPASS_ROLELOGINCHECK be part of bgw_flags? With\n> > this, none of the API needs to be changed, so no compatibility\n> > problems as such for external modules and the InitializeSessionUserId\n> > can just do something like [1]. We might be tempted to add\n> > BGWORKER_BYPASS_ALLOWCONN also to bgw_flags, but I'd prefer not to do\n> > it for the same compatibility reasons.\n> >\n> > Thoughts?\n> >\n>\n> Thanks for looking at it!\n>\n> I did some research and BGWORKER_BYPASS_ALLOWCONN has been added in eed1ce72e1 and\n> at that time the bgw_flags did already exist.\n>\n> In this the related thread [1], Tom mentioned:\n>\n> \"\n> We change exported APIs in new major versions all the time. As\n> long as it's just a question of an added parameter, people can deal\n> with it.\n> \"\n\nIt doesn't have to be always/all the time. If the case here is okay to\nchange the bgw and other core functions API, I honestly feel that we\nmust move BGWORKER_BYPASS_ALLOWCONN to bgw_flags.\n\n> Now, I understand your point but it looks to me that bgw_flags is more\n> about the capabilities (Access to shared memory with BGWORKER_SHMEM_ACCESS\n> or ability to establish database connection with BGWORKER_BACKEND_DATABASE_CONNECTION),\n>\n> While with BGWORKER_BYPASS_ROLELOGINCHECK (and BGWORKER_BYPASS_ALLOWCONN) it's more related to\n> the BGW behavior once the capability is in place.\n\nI look at the new flag as a capability of the bgw to connect with a\nrole without login access. IMV, all are the same.\n\n> So, I think I'm fine with the current proposal and don't see the need to move\n> BGWORKER_BYPASS_ROLELOGINCHECK in bgw_flags.\n>\n> [1]: https://www.postgresql.org/message-id/22769.1519323861%40sss.pgh.pa.us\n\nI prefer to have it as bgw_flag, however, let's hear from others.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 3 Oct 2023 19:02:11 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add a new BGWORKER_BYPASS_ROLELOGINCHECK flag" }, { "msg_contents": "On Tue, Oct 03, 2023 at 07:02:11PM +0530, Bharath Rupireddy wrote:\n> On Tue, Oct 3, 2023 at 5:45 PM Drouvot, Bertrand\n> <[email protected]> wrote:\n>> I did some research and BGWORKER_BYPASS_ALLOWCONN has been added in eed1ce72e1 and\n>> at that time the bgw_flags did already exist.\n>>\n>> In this the related thread [1], Tom mentioned:\n>>\n>> \"\n>> We change exported APIs in new major versions all the time. As\n>> long as it's just a question of an added parameter, people can deal\n>> with it.\n>> \"\n> \n> It doesn't have to be always/all the time. If the case here is okay to\n> change the bgw and other core functions API, I honestly feel that we\n> must move BGWORKER_BYPASS_ALLOWCONN to bgw_flags.\n\nI don't agree with this point. BackgroundWorkerInitializeConnection()\nand its other friend are designed to be called at the beginning of the\nmain routine of a background worker, where bgw_flags is not accessible.\nThere is much more happening before a bgworker initializes its\nconnection, like signal handling and definitions of other states that\ndepend on the GUCs loaded for the bgworker.\n\n>> Now, I understand your point but it looks to me that bgw_flags is more\n>> about the capabilities (Access to shared memory with BGWORKER_SHMEM_ACCESS\n>> or ability to establish database connection with BGWORKER_BACKEND_DATABASE_CONNECTION),\n>>\n>> While with BGWORKER_BYPASS_ROLELOGINCHECK (and BGWORKER_BYPASS_ALLOWCONN) it's more related to\n>> the BGW behavior once the capability is in place.\n> \n> I look at the new flag as a capability of the bgw to connect with a\n> role without login access. IMV, all are the same.\n\nBertrand is arguing that the current code with its current split is\nOK, because both are different concepts:\n- bgw_flags is used by the postmaster to control how to launch the\nbgworkers.\n- The BGWORKER_* flags are used by the bgworkers themselves, once\nthings are set up by the postmaster based on bgw_flags.\n--\nMichael", "msg_date": "Wed, 4 Oct 2023 08:17:41 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add a new BGWORKER_BYPASS_ROLELOGINCHECK flag" }, { "msg_contents": "On Mon, Oct 02, 2023 at 10:53:22AM +0200, Drouvot, Bertrand wrote:\n> On 10/2/23 10:17 AM, Michael Paquier wrote:\n>> On Mon, Oct 02, 2023 at 10:01:04AM +0200, Drouvot, Bertrand wrote:\n>>> I think that would make sense to have more flexibility in the worker_spi\n>>> module. I think that could be done in a dedicated patch though. I\n>>> think it makes more sense to have the current patch \"focusing\" on\n>>> this new flag (while adding a test about it without too much\n>>> refactoring). What about doing the worker_spi module re-factoring\n>>> as a follow up of this one?\n>> \n>> I would do that first, as that's what I usually do,\n> \n> The reason I was thinking not doing that first is that there is no real use\n> case in the current worker_spi module test.\n\nAs a template, improving and extending it seems worth it to me as long\nas it can also improve tests.\n\n> > but I see also\n> > your point that this is not mandatory. If you want, I could give it a\n> > shot tomorrow to see where it leads.\n> \n> Oh yeah that would be great (and maybe you already see a use case in the\n> current test). Thanks!\n\nYes, while it shows a bit more what one can achieve with bgw_extra, it\nalso helps in improving the test coverage with starting dynamic\nworkers across defined databases and roles through a SQL function.\n\nIt took me a bit longer than I expected, but here is what I finish\nwith:\n- 0001 extends worker_spi to be able to pass down database and role\nIDs for the workers to start, through MyBgworkerEntry->bgw_extra.\nPerhaps the two new arguments of worker_spi_launch() should use\nInvalidOid as default, actually, to fall back to the database and\nroles defined by the GUCs in these cases. That would be two lines to\nchange in worker_spi--1.0.sql.\n- 0002 is your patch, on top of which I have added handling for the\nflags in the launch() function with a text[]. The tests get much\nsimpler, and don't need restarts.\n\nBy the way, I am pretty sure that we are going to need a wait phase\nafter the startup of the worker in the case where \"nologrole\" is not\nallowed to log in even with the original patch: the worker may not\nhave started at the point where we check the logs. That's true as\nwell when involving worker_spi_launch().\n\nWhat do you think?\n--\nMichael", "msg_date": "Wed, 4 Oct 2023 15:20:01 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add a new BGWORKER_BYPASS_ROLELOGINCHECK flag" }, { "msg_contents": "Hi,\n\nOn 10/4/23 8:20 AM, Michael Paquier wrote:\n> On Mon, Oct 02, 2023 at 10:53:22AM +0200, Drouvot, Bertrand wrote:\n>> On 10/2/23 10:17 AM, Michael Paquier wrote:\n>>> On Mon, Oct 02, 2023 at 10:01:04AM +0200, Drouvot, Bertrand wrote:\n>>>> I think that would make sense to have more flexibility in the worker_spi\n>>>> module. I think that could be done in a dedicated patch though. I\n>>>> think it makes more sense to have the current patch \"focusing\" on\n>>>> this new flag (while adding a test about it without too much\n>>>> refactoring). What about doing the worker_spi module re-factoring\n>>>> as a follow up of this one?\n>>>\n>>> I would do that first, as that's what I usually do,\n>>\n>> The reason I was thinking not doing that first is that there is no real use\n>> case in the current worker_spi module test.\n> \n> As a template, improving and extending it seems worth it to me as long\n> as it can also improve tests.\n> \n>>> but I see also\n>>> your point that this is not mandatory. If you want, I could give it a\n>>> shot tomorrow to see where it leads.\n>>\n>> Oh yeah that would be great (and maybe you already see a use case in the\n>> current test). Thanks!\n> \n> Yes, while it shows a bit more what one can achieve with bgw_extra, it\n> also helps in improving the test coverage with starting dynamic\n> workers across defined databases and roles through a SQL function.\n> \n\nYeah right.\n\n> It took me a bit longer than I expected, \n\nThanks for having looked at it!\n\n> but here is what I finish\n> with:\n> - 0001 extends worker_spi to be able to pass down database and role\n> IDs for the workers to start, through MyBgworkerEntry->bgw_extra.\n> Perhaps the two new arguments of worker_spi_launch() should use\n> InvalidOid as default, actually, to fall back to the database and\n> roles defined by the GUCs in these cases. \n\nI'm fine with the way it's currently done in 0001 and that sounds\nmore logical to me. I mean we don't \"really\" want InvalidOid but to fall\nback to the GUCs.\n\nJust a remark here:\n\n+ if (!OidIsValid(roleoid))\n+ {\n+ /*\n+ * worker_spi_role is NULL by default, so just pass down an invalid\n+ * OID to let the main() routine do its connection work.\n+ */\n+ if (worker_spi_role)\n+ roleoid = get_role_oid(worker_spi_role, false);\n+ else\n+ roleoid = InvalidOid;\n\nthe\n\n+ else\n+ roleoid = InvalidOid;\n\nI think it is not needed as we're already in \"!OidIsValid(roleoid)\".\n\n> - 0002 is your patch, on top of which I have added handling for the\n> flags in the launch() function with a text[]. The tests get much\n> simpler, and don't need restarts.\n> \n\nYeah, agree that's better.\n\n> By the way, I am pretty sure that we are going to need a wait phase\n> after the startup of the worker in the case where \"nologrole\" is not\n> allowed to log in even with the original patch: the worker may not\n> have started at the point where we check the logs.\n\nI agree and it's now failing on my side.\nI added this \"wait\" in v4-0002 attach and it's now working fine.\n\nPlease note there is more changes than adding this wait in 001_worker_spi.pl (as compare\nto v3-0002) as I ran pgperltidy on it.\nFWIW, the new \"wait\" is just the part related to \"nb_errors\".\n\n> What do you think?\n\nExcept the Nit that I mentioned in 0001, that looks all good to me (with the\nnew wait in 001_worker_spi.pl).\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Wed, 4 Oct 2023 12:54:24 +0200", "msg_from": "\"Drouvot, Bertrand\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add a new BGWORKER_BYPASS_ROLELOGINCHECK flag" }, { "msg_contents": "On Wed, Oct 04, 2023 at 12:54:24PM +0200, Drouvot, Bertrand wrote:\n> Except the Nit that I mentioned in 0001, that looks all good to me (with the\n> new wait in 001_worker_spi.pl).\n\nThanks, I've applied the refactoring, including in it the stuff to be\nable to control the flags used when launching a dynamic worker. 0001\nalso needed some indenting. You'll notice that the diffs in\nworker_spi are minimal now. worker_spi_main is no more, as an effect\nof.. Cough.. c8e318b1b.\n\n+# An error message should be issued.\n+my $nb_errors = 0;\n+for (my $i = 0; $i < 10 * $PostgreSQL::Test::Utils::timeout_default; $i++)\n+{\n+\tif ($node->log_contains(\n+\t\t\t\"role \\\"nologrole\\\" is not permitted to log in\", $log_start))\n+\t{\n+\t\t$nb_errors = 1;\n+\t\tlast;\n+\t}\n+\tusleep(100_000);\n+}\n\nThis can be switched to use $node->wait_for_log, making the test\nsimpler. No need for Time::HiRes, either.\n\n-extern void BackgroundWorkerInitializeConnection(const char *dbname,\nconst char *username, uint32 flags);\n+extern void BackgroundWorkerInitializeConnection(const char *dbname,\nconst char *username, bits32 flags);\n[...]\n-BackgroundWorkerInitializeConnectionByOid(Oid dboid, Oid useroid,\nuint32 flags) \n+BackgroundWorkerInitializeConnectionByOid(Oid dboid, Oid useroid,\nbits32 flags)\n\nThat's changing signatures just for the sake of breaking them. I\nwould leave that alone, I guess..\n\n@@ -719,6 +719,7 @@ InitPostgres(const char *in_dbname, Oid dboid,\n \t\t\t const char *username, Oid useroid,\n \t\t\t bool load_session_libraries,\n \t\t\t bool override_allow_connections,\n+\t\t\t bool bypass_login_check,\n \t\t\t char *out_dbname)\n\nI was not paying much attention here, but load_session_libraries gives\na good argument in favor of switching all these booleans to a single\nbits32 argument as that would make three of them now, with a different\nset of flags than the bgworker ones. This can be refactored on its\nown.\n\n-#define BGWORKER_BYPASS_ALLOWCONN 1\n+#define BGWORKER_BYPASS_ALLOWCONN 0x0001\n\nCan be a change of its own as well.\n\nWhile looking at the refactoring at worker_spi. I've noticed that it\nwould be simple and cheap to have some coverage for BYPASS_ALLOWCONN,\nsomething we've never done since this has been introduced. Let me\nsuggest 0001 to add some coverage.\n\n0002 is your own patch, with the tests simplified a bit.\n--\nMichael", "msg_date": "Thu, 5 Oct 2023 14:10:28 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add a new BGWORKER_BYPASS_ROLELOGINCHECK flag" }, { "msg_contents": "Hi,\n\nOn 10/5/23 7:10 AM, Michael Paquier wrote:\n> On Wed, Oct 04, 2023 at 12:54:24PM +0200, Drouvot, Bertrand wrote:\n>> Except the Nit that I mentioned in 0001, that looks all good to me (with the\n>> new wait in 001_worker_spi.pl).\n> \n> Thanks, I've applied the refactoring, including in it the stuff to be\n> able to control the flags used when launching a dynamic worker. 0001\n> also needed some indenting. You'll notice that the diffs in\n> worker_spi are minimal now. worker_spi_main is no more, as an effect\n> of.. Cough.. c8e318b1b.\n\nThanks!\n\n> +# An error message should be issued.\n> +my $nb_errors = 0;\n> +for (my $i = 0; $i < 10 * $PostgreSQL::Test::Utils::timeout_default; $i++)\n> +{\n> +\tif ($node->log_contains(\n> +\t\t\t\"role \\\"nologrole\\\" is not permitted to log in\", $log_start))\n> +\t{\n> +\t\t$nb_errors = 1;\n> +\t\tlast;\n> +\t}\n> +\tusleep(100_000);\n> +}\n> \n> This can be switched to use $node->wait_for_log, making the test\n> simpler. No need for Time::HiRes, either.\n> \n\nOh, thanks, did not know about $node->wait_for_log, good to know!\n\n> -extern void BackgroundWorkerInitializeConnection(const char *dbname,\n> const char *username, uint32 flags);\n> +extern void BackgroundWorkerInitializeConnection(const char *dbname,\n> const char *username, bits32 flags);\n> [...]\n> -BackgroundWorkerInitializeConnectionByOid(Oid dboid, Oid useroid,\n> uint32 flags)\n> +BackgroundWorkerInitializeConnectionByOid(Oid dboid, Oid useroid,\n> bits32 flags)\n> \n> That's changing signatures just for the sake of breaking them. I\n> would leave that alone, I guess..\n\nOk, switched back to uint32 in v6-0002 attached (v6-0001 is your v5-0001\nunchanged).\n\n> \n> @@ -719,6 +719,7 @@ InitPostgres(const char *in_dbname, Oid dboid,\n> \t\t\t const char *username, Oid useroid,\n> \t\t\t bool load_session_libraries,\n> \t\t\t bool override_allow_connections,\n> +\t\t\t bool bypass_login_check,\n> \t\t\t char *out_dbname)\n> \n> I was not paying much attention here, but load_session_libraries gives\n> a good argument in favor of switching all these booleans to a single\n> bits32 argument as that would make three of them now, with a different\n> set of flags than the bgworker ones. This can be refactored on its\n> own.\n\nYeah good point, will work on it once the current one is committed.\n\n> \n> -#define BGWORKER_BYPASS_ALLOWCONN 1\n> +#define BGWORKER_BYPASS_ALLOWCONN 0x0001\n> \n> Can be a change of its own as well.\n\nYeah, but I think it's simple enough to just keep this change here\n(and I don't think it's \"really\" needed without introducing 0x0002)\n\n> \n> While looking at the refactoring at worker_spi. I've noticed that it\n> would be simple and cheap to have some coverage for BYPASS_ALLOWCONN,\n> something we've never done since this has been introduced. Let me\n> suggest 0001 to add some coverage.\n\nGood idea! I looked at 0001 and it looks ok to me.\n\n> \n> 0002 is your own patch, with the tests simplified a bit.\n\nThanks, LGTM.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Thu, 5 Oct 2023 08:46:48 +0200", "msg_from": "\"Drouvot, Bertrand\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add a new BGWORKER_BYPASS_ROLELOGINCHECK flag" }, { "msg_contents": "On Thu, Oct 5, 2023 at 12:22 PM Drouvot, Bertrand\n<[email protected]> wrote:\n>\n> >\n> > @@ -719,6 +719,7 @@ InitPostgres(const char *in_dbname, Oid dboid,\n> > const char *username, Oid useroid,\n> > bool load_session_libraries,\n> > bool override_allow_connections,\n> > + bool bypass_login_check,\n> > char *out_dbname)\n> >\n> > I was not paying much attention here, but load_session_libraries gives\n> > a good argument in favor of switching all these booleans to a single\n> > bits32 argument as that would make three of them now, with a different\n> > set of flags than the bgworker ones. This can be refactored on its\n> > own.\n>\n> Yeah good point, will work on it once the current one is committed.\n>\n> >\n> > -#define BGWORKER_BYPASS_ALLOWCONN 1\n> > +#define BGWORKER_BYPASS_ALLOWCONN 0x0001\n> >\n> > Can be a change of its own as well.\n>\n> Yeah, but I think it's simple enough to just keep this change here\n> (and I don't think it's \"really\" needed without introducing 0x0002)\n\nI think changing BGWORKER_BYPASS_ALLOWCONN to 0x0001 and having bits32\nfor InitPostgres inputs load_session_libraries and\noverride_allow_connections can be 0001 in this patch series so that it\ncan go first, no? This avoids the new code being in the old format and\nchanging things right after it commits.\n\nv6-0001 LGTM.\n\nA comment on v6-0002:\n1.\n+ CREATE ROLE nologrole with nologin;\n+ ALTER ROLE nologrole with superuser;\n+]);\nWe don't need superuser privileges here, do we? Or do we need it for\nthe worker_spi to access pg_catalog and stuff in worker_spi_main? If\nnot, can we remove it to showcase non-superusers requesting bg\nworkers?\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 5 Oct 2023 17:51:31 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add a new BGWORKER_BYPASS_ROLELOGINCHECK flag" }, { "msg_contents": "Hi,\n\nOn 10/5/23 2:21 PM, Bharath Rupireddy wrote:\n> On Thu, Oct 5, 2023 at 12:22 PM Drouvot, Bertrand\n> <[email protected]> wrote:\n>>\n> A comment on v6-0002:\n> 1.\n> + CREATE ROLE nologrole with nologin;\n> + ALTER ROLE nologrole with superuser;\n> +]);\n> We don't need superuser privileges here, do we? Or do we need it for\n> the worker_spi to access pg_catalog and stuff in worker_spi_main? If\n> not, can we remove it to showcase non-superusers requesting bg\n> workers?\n\nsuperuser is not needed here.\nI removed it but had to change it in v7 attached to:\n\n+ CREATE ROLE nologrole with nologin;\n+ GRANT CREATE ON DATABASE mydb TO nologrole;\n\nTo avoid things like:\n\n\"\n2023-10-05 15:59:39.189 UTC [2830732] LOG: worker_spi dynamic worker 13 initialized with schema13.counted\n2023-10-05 15:59:39.191 UTC [2830732] ERROR: permission denied for database mydb\n2023-10-05 15:59:39.191 UTC [2830732] CONTEXT: SQL statement \"CREATE SCHEMA \"schema13\" CREATE TABLE \"counted\"\n\"\n\nRegards,\n \n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Thu, 5 Oct 2023 18:02:43 +0200", "msg_from": "\"Drouvot, Bertrand\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add a new BGWORKER_BYPASS_ROLELOGINCHECK flag" }, { "msg_contents": "On Thu, Oct 5, 2023 at 9:32 PM Drouvot, Bertrand\n<[email protected]> wrote:\n>\n> + CREATE ROLE nologrole with nologin;\n> + GRANT CREATE ON DATABASE mydb TO nologrole;\n\nA few nit-picks:\n\n1. s/with/WITH\n2. s/nologin/NOLOGIN\n3. + is specified as <varname>flags</varname> it is possible to\nbypass the login check to connect to databases.\nHow about \"it is possible to bypass the login check for the role used\nto connect to databases.\"?\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 5 Oct 2023 21:53:33 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add a new BGWORKER_BYPASS_ROLELOGINCHECK flag" }, { "msg_contents": "On Thu, Oct 05, 2023 at 05:51:31PM +0530, Bharath Rupireddy wrote:\n> I think changing BGWORKER_BYPASS_ALLOWCONN to 0x0001 and having bits32\n> for InitPostgres inputs load_session_libraries and\n> override_allow_connections can be 0001 in this patch series so that it\n> can go first, no? This avoids the new code being in the old format and\n> changing things right after it commits.\n\n+1\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 5 Oct 2023 13:39:15 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add a new BGWORKER_BYPASS_ROLELOGINCHECK flag" }, { "msg_contents": "On Thu, Oct 05, 2023 at 05:51:31PM +0530, Bharath Rupireddy wrote:\n> I think changing BGWORKER_BYPASS_ALLOWCONN to 0x0001 and having bits32\n> for InitPostgres inputs load_session_libraries and\n> override_allow_connections can be 0001 in this patch series so that it\n> can go first, no? This avoids the new code being in the old format and\n> changing things right after it commits.\n\nI am not completely sure what you mean here. We've never supported\nload_session_libraries for background workers, and I'm not quite sure\nthat there is a case for it. FWIW, my idea around that would be to\nhave two separate sets of flags: one set for the bgworkers and one set\nfor PostgresInit() as an effect of load_session_libraries which looks\nlike a fuzzy concept for bgworkers.\n--\nMichael", "msg_date": "Fri, 6 Oct 2023 09:09:10 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add a new BGWORKER_BYPASS_ROLELOGINCHECK flag" }, { "msg_contents": "On Fri, Oct 06, 2023 at 09:09:10AM +0900, Michael Paquier wrote:\n> I am not completely sure what you mean here. We've never supported\n> load_session_libraries for background workers, and I'm not quite sure\n> that there is a case for it. FWIW, my idea around that would be to\n> have two separate sets of flags: one set for the bgworkers and one set\n> for PostgresInit() as an effect of load_session_libraries which looks\n> like a fuzzy concept for bgworkers.\n\nI have applied v1 a few hours ago as of 991bb0f9653c. Then, the\nbuildfarm has quickly complained that a bgworkers able to start with \nBYPASS_ALLOWCONN while the database has its access restricted could\nspawn workers that themselves try to connect to the database\nrestricted, causing the test to fail. I have applied fd4d93d269c0 as\na quick way to avoid the spawn of workers in this case, and the\nbuildfarm has turned back to green.\n\nNow, there's been a second type failure on serinus even after all\nthat:\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=serinus&dt=2023-10-06%2001%3A04%3A05\n\nThe step running a `make check` on worker_spi in the run has worked:\nhttps://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=serinus&dt=2023-10-06%2001%3A04%3A05&stg=module-worker_spi-check\n\nBut the follow-up step doing an installcheck with worker_spi has not.\nAnd this looks like a crash:\nhttps://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=serinus&dt=2023-10-06%2001%3A04%3A05&stg=testmodules-install-check-C\n\nThe logs reported by this step are not really helpful, as they contain\nonly information about the sql/ tests in the modules. It is the first\ntime that this stuff is tested, so this could be a race condition\nthat's been around for some time but we've never seen it until now, or\nit could be an issue in the test I fail to see.\n\nAndres, are there logs for this TAP test on serinus? Or perhaps there\nis a core file that could be looked at? The other animals are not\nshowing anything for the moment.\n--\nMichael", "msg_date": "Fri, 6 Oct 2023 15:29:28 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add a new BGWORKER_BYPASS_ROLELOGINCHECK flag" }, { "msg_contents": "Hi,\n\nOn 10/5/23 6:23 PM, Bharath Rupireddy wrote:\n> On Thu, Oct 5, 2023 at 9:32 PM Drouvot, Bertrand\n> <[email protected]> wrote:\n>>\n>> + CREATE ROLE nologrole with nologin;\n>> + GRANT CREATE ON DATABASE mydb TO nologrole;\n> \n> A few nit-picks:\n> \n> 1. s/with/WITH\n> 2. s/nologin/NOLOGIN\n\ndone in v8 attached.\n\n> 3. + is specified as <varname>flags</varname> it is possible to\n> bypass the login check to connect to databases.\n> How about \"it is possible to bypass the login check for the role used\n> to connect to databases.\"?\n> \n\n\"for the role used\" sounds implicit to me but I don't have a strong opinion\nabout it so re-worded as per your proposal in v8.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Fri, 6 Oct 2023 08:48:32 +0200", "msg_from": "\"Drouvot, Bertrand\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add a new BGWORKER_BYPASS_ROLELOGINCHECK flag" }, { "msg_contents": "On Fri, Oct 06, 2023 at 03:29:28PM +0900, Michael Paquier wrote:\n> Andres, are there logs for this TAP test on serinus? Or perhaps there\n> is a core file that could be looked at? The other animals are not\n> showing anything for the moment.\n\nserinus has reported back once again, and just returned with a green\nstate, twice in a row:\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=serinus&dt=2023-10-06%2007%3A42%3A53\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=serinus&dt=2023-10-06%2007%3A28%3A02\n\nWell, it looks OK. Still that's itching a bit.\n--\nMichael", "msg_date": "Fri, 6 Oct 2023 17:06:41 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add a new BGWORKER_BYPASS_ROLELOGINCHECK flag" }, { "msg_contents": "Michael Paquier <[email protected]> writes:\n> On Fri, Oct 06, 2023 at 03:29:28PM +0900, Michael Paquier wrote:\n>> Andres, are there logs for this TAP test on serinus? Or perhaps there\n>> is a core file that could be looked at? The other animals are not\n>> showing anything for the moment.\n\n> Well, it looks OK. Still that's itching a bit.\n\nThere have been intermittent failures on various buildfarm machines\nsince this went in. After seeing one on my own animal mamba [1],\nI tried to reproduce it manually on that machine, and it does\nindeed fail about one time in two. The buildfarm script is not\nmanaging to capture the relevant log files, but what I see in a\nmanual run is that 001_worker_spi.pl logs this:\n\n...\n# Postmaster PID for node \"mynode\" is 21897\n[01:19:53.931](2.663s) ok 5 - bgworkers all launched\n[01:19:54.711](0.780s) ok 6 - dynamic bgworkers all launched\nerror running SQL: 'psql:<stdin>:1: ERROR: could not start background process\nHINT: More details may be available in the server log.'\nwhile running 'psql -XAtq -d port=56393 host=/tmp/PETPK0Stwi dbname='postgres' -f - -v ON_ERROR_STOP=1' with sql 'SELECT worker_spi_launch(12, 16394, 16395);' at /home/tgl/pgsql/src/test/modules/worker_spi/../../../../src/test/perl/PostgreSQL/Test/Cluster.pm line 2009.\n# Postmaster PID for node \"mynode\" is 21897\n### Stopping node \"mynode\" using mode immediate\n# Running: pg_ctl -D /home/tgl/pgsql/src/test/modules/worker_spi/tmp_check/t_001_worker_spi_mynode_data/pgdata -m immediate stop\nwaiting for server to shut down.... done\nserver stopped\n# No postmaster PID for node \"mynode\"\n[01:19:55.032](0.321s) # Tests were run but no plan was declared and done_testing() was not seen.\n[01:19:55.035](0.002s) # Looks like your test exited with 29 just after 6.\n\nand in the postmaster log\n\n2023-10-08 01:19:54.265 EDT [5820] LOG: worker_spi dynamic worker 10 initialized with schema10.counted\n2023-10-08 01:19:54.378 EDT [27776] 001_worker_spi.pl LOG: statement: SELECT worker_spi_launch(11, 5, 16395);\n2023-10-08 01:19:54.476 EDT [18120] 001_worker_spi.pl LOG: statement: SELECT datname, usename, wait_event FROM pg_stat_activity\n WHERE backend_type = 'worker_spi dynamic' AND\n pid IN (5820, 428) ORDER BY datname;\n2023-10-08 01:19:54.548 EDT [428] LOG: worker_spi dynamic worker 11 initialized with schema11.counted\n2023-10-08 01:19:54.680 EDT [152] 001_worker_spi.pl LOG: statement: SELECT datname, usename, wait_event FROM pg_stat_activity\n WHERE backend_type = 'worker_spi dynamic' AND\n pid IN (5820, 428) ORDER BY datname;\n2023-10-08 01:19:54.779 EDT [1675] 001_worker_spi.pl LOG: statement: ALTER DATABASE mydb ALLOW_CONNECTIONS false;\n2023-10-08 01:19:54.854 EDT [26562] 001_worker_spi.pl LOG: statement: SELECT worker_spi_launch(12, 16394, 16395);\n2023-10-08 01:19:54.878 EDT [23636] FATAL: database \"mydb\" is not currently accepting connections\n2023-10-08 01:19:54.888 EDT [21897] LOG: background worker \"worker_spi dynamic\" (PID 23636) exited with exit code 1\n2023-10-08 01:19:54.888 EDT [26562] 001_worker_spi.pl ERROR: could not start background process\n2023-10-08 01:19:54.888 EDT [26562] 001_worker_spi.pl HINT: More details may be available in the server log.\n2023-10-08 01:19:54.888 EDT [26562] 001_worker_spi.pl STATEMENT: SELECT worker_spi_launch(12, 16394, 16395);\n2023-10-08 01:19:54.912 EDT [21897] LOG: received immediate shutdown request\n2023-10-08 01:19:55.014 EDT [21897] LOG: database system is shut down\n\nWhat it looks like to me is that there is a code path by which \"could\nnot start background process\" is reported as a failure of the SELECT\nworker_spi_launch() query itself. The test script is not expecting\nthat, because it executes that query with\n\n# bgworker cannot be launched with connection restriction.\nmy $worker3_pid = $node->safe_psql('postgres',\n qq[SELECT worker_spi_launch(12, $mydb_id, $myrole_id);]);\n$node->wait_for_log(\n qr/database \"mydb\" is not currently accepting connections/, $log_offset);\n\nso safe_psql bails out and we get no further.\n\nSince this only seems to happen on slow machines, I'd call it a timing\nproblem or race condition. Unless you want to argue that the race\nshould not happen, probably the fix is to make the test script cope\nwith this worker_spi_launch() call failing. As long as we see the\nexpected result from wait_for_log, we can be pretty sure the right\nthing happened.\n\n\t\t\tregards, tom lane\n\n[1] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=mamba&dt=2023-10-08%2001%3A00%3A22\n\n\n", "msg_date": "Sun, 08 Oct 2023 17:48:55 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add a new BGWORKER_BYPASS_ROLELOGINCHECK flag" }, { "msg_contents": "On Sun, Oct 08, 2023 at 05:48:55PM -0400, Tom Lane wrote:\n> There have been intermittent failures on various buildfarm machines\n> since this went in. After seeing one on my own animal mamba [1],\n> I tried to reproduce it manually on that machine, and it does\n> indeed fail about one time in two. The buildfarm script is not\n> managing to capture the relevant log files, but what I see in a\n> manual run is that 001_worker_spi.pl logs this:\n\nThanks for the logs, I've noticed the failure but could not make any\nsense of it based on the lack of information provided from the\nbuildfarm. Serinus has complained once, for instance. \n\n> Since this only seems to happen on slow machines, I'd call it a timing\n> problem or race condition. Unless you want to argue that the race\n> should not happen, probably the fix is to make the test script cope\n> with this worker_spi_launch() call failing. As long as we see the\n> expected result from wait_for_log, we can be pretty sure the right\n> thing happened.\n\nThe trick to reproduce the failure is to slow down worker_spi_launch()\nbefore WaitForBackgroundWorkerStartup() with a worker already\nregistered so as the worker has the time to start and exit because of\nthe ALLOW_CONNECTIONS restriction. (SendPostmasterSignal() in\nRegisterDynamicBackgroundWorker() interrupts a hardcoded sleep, so\nI've just used an on-disk flag.)\n\nAnother thing is that we cannot rely on the PID returned by launch()\nas it could fail, so $worker3_pid needs to disappear. If we do that,\nI'd rather just switch to a specific database for the tests with\nALLOWCONN rather than reuse \"mydb\" that could have other workers. The\nattached fixes the issue for me.\n--\nMichael", "msg_date": "Mon, 9 Oct 2023 18:37:21 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add a new BGWORKER_BYPASS_ROLELOGINCHECK flag" }, { "msg_contents": "Michael Paquier <[email protected]> writes:\n> Another thing is that we cannot rely on the PID returned by launch()\n> as it could fail, so $worker3_pid needs to disappear. If we do that,\n> I'd rather just switch to a specific database for the tests with\n> ALLOWCONN rather than reuse \"mydb\" that could have other workers.\n\nRight.\n\n> The\n> attached fixes the issue for me.\n\nHmm. This passed a few dozen test cycles on mamba's host,\nbut it seems to me there's still a race condition here:\n\n$result = $node->safe_psql('postgres',\n\t\"SELECT count(*) FROM pg_stat_activity WHERE datname = 'noconndb';\");\nis($result, '0', 'dynamic bgworker without BYPASS_ALLOWCONN not started');\n\nThere will be a window where the worker has logged \"database\n\"noconndb\" is not currently accepting connections\" but hasn't yet\nexited, so that conceivably this query could see a positive count.\n\nWe could just drop this test, reasoning that the appearance of\nthe error message is sufficient evidence that the right thing\nhappened. (If the failed worker is still around, it won't break\nthe remaining tests AFAICS.) Or we could convert this to a\npoll_query_until() loop.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 09 Oct 2023 12:20:18 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add a new BGWORKER_BYPASS_ROLELOGINCHECK flag" }, { "msg_contents": "On Mon, Oct 09, 2023 at 12:20:18PM -0400, Tom Lane wrote:\n> There will be a window where the worker has logged \"database\n> \"noconndb\" is not currently accepting connections\" but hasn't yet\n> exited, so that conceivably this query could see a positive count.\n\nI don't think that's possible here. The check on datallowconn is done\nbefore a backend calls pgstat_bestart() which would make its backend\nentry reported to pg_stat_activity. So there is no window where a\nbackend would be in pg_stat_activity if this check fails.\n\n> We could just drop this test, reasoning that the appearance of\n> the error message is sufficient evidence that the right thing\n> happened. (If the failed worker is still around, it won't break\n> the remaining tests AFAICS.) Or we could convert this to a\n> poll_query_until() loop.\n\nSaying that, I'm OK with just dropping this query, as it could also be\npossible that one decides that calling pgstat_bestart() before the\ndatallowconn check is a good idea for a reason or another.\n--\nMichael", "msg_date": "Tue, 10 Oct 2023 07:37:49 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add a new BGWORKER_BYPASS_ROLELOGINCHECK flag" }, { "msg_contents": "Michael Paquier <[email protected]> writes:\n> On Mon, Oct 09, 2023 at 12:20:18PM -0400, Tom Lane wrote:\n>> There will be a window where the worker has logged \"database\n>> \"noconndb\" is not currently accepting connections\" but hasn't yet\n>> exited, so that conceivably this query could see a positive count.\n\n> I don't think that's possible here. The check on datallowconn is done\n> before a backend calls pgstat_bestart() which would make its backend\n> entry reported to pg_stat_activity. So there is no window where a\n> backend would be in pg_stat_activity if this check fails.\n\nAh, right. I complained after seeing that we set MyProc->databaseId\nbefore doing CheckMyDatabase, but you're right that it doesn't\nmatter for pg_stat_activity until pgstat_bestart. \n\n> Saying that, I'm OK with just dropping this query, as it could also be\n> possible that one decides that calling pgstat_bestart() before the\n> datallowconn check is a good idea for a reason or another.\n\nNot sure if that's a likely change or not. However, if we're in\nagreement that this test step isn't buying much, let's just drop\nit and save the test cycles.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 09 Oct 2023 23:11:58 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add a new BGWORKER_BYPASS_ROLELOGINCHECK flag" }, { "msg_contents": "On Mon, Oct 09, 2023 at 11:11:58PM -0400, Tom Lane wrote:\n> Michael Paquier <[email protected]> writes:\n>> Saying that, I'm OK with just dropping this query, as it could also be\n>> possible that one decides that calling pgstat_bestart() before the\n>> datallowconn check is a good idea for a reason or another.\n> \n> Not sure if that's a likely change or not. However, if we're in\n> agreement that this test step isn't buying much, let's just drop\n> it and save the test cycles.\n\nNo problem here. f483b2090 has removed the query entirely, relying\nnow only on a wait_for_log() when the worker startup fails.\n--\nMichael", "msg_date": "Tue, 10 Oct 2023 13:36:01 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add a new BGWORKER_BYPASS_ROLELOGINCHECK flag" }, { "msg_contents": "Hi,\n\nOn 10/6/23 8:48 AM, Drouvot, Bertrand wrote:\n> Hi,\n> \n> On 10/5/23 6:23 PM, Bharath Rupireddy wrote:\n>> On Thu, Oct 5, 2023 at 9:32 PM Drouvot, Bertrand\n>> <[email protected]> wrote:\n>>>\n>>> +  CREATE ROLE nologrole with nologin;\n>>> +  GRANT CREATE ON DATABASE mydb TO nologrole;\n>>\n>> A few nit-picks:\n>>\n>> 1. s/with/WITH\n>> 2. s/nologin/NOLOGIN\n> \n> done in v8 attached.\n> \n>> 3. +   is specified as <varname>flags</varname> it is possible to\n>> bypass the login check to connect to databases.\n>> How about \"it is possible to bypass the login check for the role used\n>> to connect to databases.\"?\n>>\n> \n> \"for the role used\" sounds implicit to me but I don't have a strong opinion\n> about it so re-worded as per your proposal in v8.\n> \n\nPlease find attached v9 (v8 rebase due to f483b2090).\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Tue, 10 Oct 2023 06:57:05 +0200", "msg_from": "\"Drouvot, Bertrand\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add a new BGWORKER_BYPASS_ROLELOGINCHECK flag" }, { "msg_contents": "On Tue, Oct 10, 2023 at 06:57:05AM +0200, Drouvot, Bertrand wrote:\n> Please find attached v9 (v8 rebase due to f483b2090).\n\nI was looking at v8 just before you sent this v9, and still got\nannoyed by the extra boolean argument added to InitPostgres(). So\nplease let me propose to bite the bullet and refactor that, as of the\n0001 attached that means less diff footprints in all the callers of\nInitPostgres() (I am not wedded to the flag names).\n\nIt looks like 0002 had the same issues as f483b209: the worker that\ncould not be started because of the login restriction could be\ndetected as stopped by worker_spi_launch(), causing the script to fail\nhard.\n\n0002 is basically your v9, able to work with the refactoring from\n0001. \n--\nMichael", "msg_date": "Tue, 10 Oct 2023 14:58:03 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add a new BGWORKER_BYPASS_ROLELOGINCHECK flag" }, { "msg_contents": "Hi,\n\nOn 10/10/23 7:58 AM, Michael Paquier wrote:\n> On Tue, Oct 10, 2023 at 06:57:05AM +0200, Drouvot, Bertrand wrote:\n>> Please find attached v9 (v8 rebase due to f483b2090).\n> \n> I was looking at v8 just before you sent this v9, and still got\n> annoyed by the extra boolean argument added to InitPostgres(). \n\nArf, I did not look at it as I had in mind to look at it once\nthis one is in.\n\n> So\n> please let me propose to bite the bullet and refactor that, as of the\n> 0001 attached that means less diff footprints in all the callers of\n> InitPostgres() (I am not wedded to the flag names).\n\nThanks for having looked at it!\n\n+ bits32 init_flags = 0; /* never honor session_preload_libraries */\n\nAlso a few word about datallowconn in the comment? (as the flag deals with both).\n\n> \n> It looks like 0002 had the same issues as f483b209: the worker that\n> could not be started because of the login restriction could be\n> detected as stopped by worker_spi_launch(), causing the script to fail\n> hard.\n> \n> 0002 is basically your v9, able to work with the refactoring from\n> 0001.\n\nThanks!\n\n #define INIT_PG_OVERRIDE_ALLOW_CONNS 0x0002\n+#define INIT_PG_BYPASS_ROLE_LOGIN 0x0004\n\nAny reason why INIT_PG_BYPASS_ROLE_LOGIN is not 0x0003?\n\nExcept that it does look good to me.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 10 Oct 2023 09:12:49 +0200", "msg_from": "\"Drouvot, Bertrand\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add a new BGWORKER_BYPASS_ROLELOGINCHECK flag" }, { "msg_contents": "\n\nOn 10/10/23 9:12 AM, Drouvot, Bertrand wrote:\n> Hi,\n> Any reason why INIT_PG_BYPASS_ROLE_LOGIN is not 0x0003?\n> \n\nPlease forget about it ;-)\n\nRegards,\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 10 Oct 2023 09:18:04 +0200", "msg_from": "\"Drouvot, Bertrand\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add a new BGWORKER_BYPASS_ROLELOGINCHECK flag" }, { "msg_contents": "On Tue, Oct 10, 2023 at 09:18:04AM +0200, Drouvot, Bertrand wrote:\n> Please forget about it ;-)\n\nThat's called an ENOCOFFEE :D\n--\nMichael", "msg_date": "Tue, 10 Oct 2023 16:21:15 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add a new BGWORKER_BYPASS_ROLELOGINCHECK flag" }, { "msg_contents": "\nOn 10/10/23 9:21 AM, Michael Paquier wrote:\n> On Tue, Oct 10, 2023 at 09:18:04AM +0200, Drouvot, Bertrand wrote:\n>> Please forget about it ;-)\n> \n> That's called an ENOCOFFEE :D\n\nExactly, good one! ;-)\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 10 Oct 2023 09:26:31 +0200", "msg_from": "\"Drouvot, Bertrand\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add a new BGWORKER_BYPASS_ROLELOGINCHECK flag" }, { "msg_contents": "On Tue, Oct 10, 2023 at 09:12:49AM +0200, Drouvot, Bertrand wrote:\n> On 10/10/23 7:58 AM, Michael Paquier wrote:\n>> I was looking at v8 just before you sent this v9, and still got\n>> annoyed by the extra boolean argument added to InitPostgres().\n> \n> Arf, I did not look at it as I had in mind to look at it once\n> this one is in.\n\nNo problem. I'm OK to do it.\n\n>> So\n>> please let me propose to bite the bullet and refactor that, as of the\n>> 0001 attached that means less diff footprints in all the callers of\n>> InitPostgres() (I am not wedded to the flag names).\n> \n> Thanks for having looked at it!\n> \n> + bits32 init_flags = 0; /* never honor session_preload_libraries */\n> \n> Also a few word about datallowconn in the comment? (as the flag deals with both).\n\nI am not sure that this is necessary in the code paths of\nBackgroundWorkerInitializeConnectionByOid() and\nBackgroundWorkerInitializeConnection() as datallowconn is handled a\nfew lines down.\n--\nMichael", "msg_date": "Wed, 11 Oct 2023 08:26:42 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add a new BGWORKER_BYPASS_ROLELOGINCHECK flag" }, { "msg_contents": "On Wed, Oct 11, 2023 at 08:26:42AM +0900, Michael Paquier wrote:\n> On Tue, Oct 10, 2023 at 09:12:49AM +0200, Drouvot, Bertrand wrote:\n> > On 10/10/23 7:58 AM, Michael Paquier wrote:\n> >> I was looking at v8 just before you sent this v9, and still got\n> >> annoyed by the extra boolean argument added to InitPostgres().\n> > \n> > Arf, I did not look at it as I had in mind to look at it once\n> > this one is in.\n> \n> No problem. I'm OK to do it.\n\nApplied 0001 for now.\n\n> I am not sure that this is necessary in the code paths of\n> BackgroundWorkerInitializeConnectionByOid() and\n> BackgroundWorkerInitializeConnection() as datallowconn is handled a\n> few lines down.\n\n /* flags for InitPostgres() */\n #define INIT_PG_LOAD_SESSION_LIBS 0x0001\n #define INIT_PG_OVERRIDE_ALLOW_CONNS 0x0002\n+#define INIT_PG_BYPASS_ROLE_LOGIN 0x0004\n\nIn 0002, I am not sure that this is the best name for this new flag.\nThere is consistency with the bgworker part, for sure, but shouldn't\nwe name that OVERRIDE_ROLE_LOGIN instead in miscadmin.h?\n--\nMichael", "msg_date": "Wed, 11 Oct 2023 12:40:28 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add a new BGWORKER_BYPASS_ROLELOGINCHECK flag" }, { "msg_contents": "Hi,\n\nOn 10/11/23 5:40 AM, Michael Paquier wrote:\n> On Wed, Oct 11, 2023 at 08:26:42AM +0900, Michael Paquier wrote:\n> \n> /* flags for InitPostgres() */\n> #define INIT_PG_LOAD_SESSION_LIBS 0x0001\n> #define INIT_PG_OVERRIDE_ALLOW_CONNS 0x0002\n> +#define INIT_PG_BYPASS_ROLE_LOGIN 0x0004\n> \n> In 0002, I am not sure that this is the best name for this new flag.\n> There is consistency with the bgworker part, for sure, but shouldn't\n> we name that OVERRIDE_ROLE_LOGIN instead in miscadmin.h?\n\nYeah, agree. Changed in v12 attached.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Wed, 11 Oct 2023 08:48:04 +0200", "msg_from": "\"Drouvot, Bertrand\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add a new BGWORKER_BYPASS_ROLELOGINCHECK flag" }, { "msg_contents": "On Wed, Oct 11, 2023 at 08:48:04AM +0200, Drouvot, Bertrand wrote:\n> Yeah, agree. Changed in v12 attached.\n\nI have tweaked a few comments, and applied that. Thanks.\n--\nMichael", "msg_date": "Thu, 12 Oct 2023 09:26:52 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add a new BGWORKER_BYPASS_ROLELOGINCHECK flag" }, { "msg_contents": "Hi,\n\nOn 10/12/23 2:26 AM, Michael Paquier wrote:\n> On Wed, Oct 11, 2023 at 08:48:04AM +0200, Drouvot, Bertrand wrote:\n>> Yeah, agree. Changed in v12 attached.\n> \n> I have tweaked a few comments, and applied that. Thanks.\n\nOh and you also closed the CF entry, thanks!\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 12 Oct 2023 07:42:28 +0200", "msg_from": "\"Drouvot, Bertrand\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add a new BGWORKER_BYPASS_ROLELOGINCHECK flag" }, { "msg_contents": "On Thu, Oct 12, 2023 at 07:42:28AM +0200, Drouvot, Bertrand wrote:\n> On 10/12/23 2:26 AM, Michael Paquier wrote:\n>> I have tweaked a few comments, and applied that. Thanks.\n> \n> Oh and you also closed the CF entry, thanks!\n\nThe buildfarm has provided some feedback, and the new tests have been\nunstable on mamba:\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=mamba&dt=2023-10-15%2005%3A04%3A21\n\nIn more details:\n# poll_query_until timed out executing this query:\n# SELECT datname, usename, wait_event FROM pg_stat_activity\n# WHERE backend_type = 'worker_spi dynamic' AND\n# pid = ;\n# expecting this output:\n# mydb|nologrole|WorkerSpiMain\n# last actual query output:\n#\n# with stderr:\n# ERROR: syntax error at or near \";\"\n# LINE 3: pid = ;\n\nSo this looks like a hard failure in starting the worker that should\nbypass the role login check. The logs don't offer much information,\nbut I think I know what's going on here: at this stage of the tests,\nthe number of workers created is 7, very close to the limit of\nmax_worker_processes, at 8 by default. So one parallel worker spawned\nby any of the other bgworkers would be enough to prevent the last one\nto start, and mamba has been slow enough in the startup of the static\nworkers to show that this could be possible.\n\nI think that we should just bump up max_worker_processes, like in the\nattached.\n--\nMichael", "msg_date": "Mon, 16 Oct 2023 11:22:38 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add a new BGWORKER_BYPASS_ROLELOGINCHECK flag" }, { "msg_contents": "Michael Paquier <[email protected]> writes:\n> The buildfarm has provided some feedback, and the new tests have been\n> unstable on mamba:\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=mamba&dt=2023-10-15%2005%3A04%3A21\n\nYeah. FWIW, I tried to reproduce this on mamba's host, but did not\nsee it again in 100 or so tries.\n\n> In more details:\n> # poll_query_until timed out executing this query:\n> # SELECT datname, usename, wait_event FROM pg_stat_activity\n> # WHERE backend_type = 'worker_spi dynamic' AND\n> # pid = ;\n> # with stderr:\n> # ERROR: syntax error at or near \";\"\n> # LINE 3: pid = ;\n\n> So this looks like a hard failure in starting the worker that should\n> bypass the role login check. The logs don't offer much information,\n> but I think I know what's going on here: at this stage of the tests,\n> the number of workers created is 7, very close to the limit of\n> max_worker_processes, at 8 by default. So one parallel worker spawned\n> by any of the other bgworkers would be enough to prevent the last one\n> to start, and mamba has been slow enough in the startup of the static\n> workers to show that this could be possible.\n\nI agree that that probably is the root cause, and we should fix it\nby bumping up max_worker_processes in this test.\n\nBut this failure is annoying in another way. Evidently,\nworker_spi_launch returned NULL, which must have been from here:\n\n\tif (!RegisterDynamicBackgroundWorker(&worker, &handle))\n\t\tPG_RETURN_NULL();\n\nWhy in the world is that \"return NULL\", and not \"ereport(ERROR)\"\nlike all the other failure conditions in that function?\n\nIf there's actually a defensible reason for the C code to act\nlike that, then all the call sites need to have checks for\na null result.\n\n(cc'ing Robert, as this coding originated at 090d0f205.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 15 Oct 2023 22:47:22 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add a new BGWORKER_BYPASS_ROLELOGINCHECK flag" }, { "msg_contents": "On Sun, Oct 15, 2023 at 10:47:22PM -0400, Tom Lane wrote:\n> I agree that that probably is the root cause, and we should fix it\n> by bumping up max_worker_processes in this test.\n\nThanks. I've fixed this one now. Let's see if mamba is OK after\nthat.\n\n> If there's actually a defensible reason for the C code to act\n> like that, then all the call sites need to have checks for\n> a null result.\n\nWe're just talking about a test module and an ERROR in the same\nfashion as autoprewarm makes things more predictible for the TAP\nscript, IMO.\n--\nMichael", "msg_date": "Mon, 16 Oct 2023 16:16:37 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add a new BGWORKER_BYPASS_ROLELOGINCHECK flag" } ]
[ { "msg_contents": "Branching off from [0], here is a for-discussion patch about some \ncorrections for the pg_resetwal -c (--commit-timestamp-ids) option.\n\nFirst, in the documentation for finding a manual value for the -c option\nbased on the files present in the data directory, it was missing a \nmultiplier, like for the other SLRU-based values, and also missing the \nmention of adding one for the upper value. The value I came up with is \ncomputed as\n\n SLRU_PAGES_PER_SEGMENT * COMMIT_TS_XACTS_PER_PAGE = 13088 = 0x3320\n\nSecond, the present pg_resetwal code hardcodes the minimum value as 2, \nwhich is FrozenTransactionId, but it's not clear why that is allowed. \nMaybe we should change that to FirstNormalTransactionId, which matches \nother xid-related options in pg_resetwal.\n\nThoughts?\n\n\n[0]: \nhttps://www.postgresql.org/message-id/flat/[email protected]", "msg_date": "Thu, 28 Sep 2023 15:50:43 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "pg_resetwal: Corrections around -c option" }, { "msg_contents": "On 2023-Sep-28, Peter Eisentraut wrote:\n\n> Branching off from [0], here is a for-discussion patch about some\n> corrections for the pg_resetwal -c (--commit-timestamp-ids) option.\n> \n> First, in the documentation for finding a manual value for the -c option\n> based on the files present in the data directory, it was missing a\n> multiplier, like for the other SLRU-based values, and also missing the\n> mention of adding one for the upper value. The value I came up with is\n> computed as\n> \n> SLRU_PAGES_PER_SEGMENT * COMMIT_TS_XACTS_PER_PAGE = 13088 = 0x3320\n\nHmm, not sure about this. SLRU_PAGES_PER_SEGMENT is 32, and\nCOMMIT_TS_XACTS_PER_PAGE is 819, so this formula gives decimal 26208 =\n0x6660. But more generally, I'm not sure about the algorithm. Really,\nthe safe value also depends on how large the latest file actually is;\ne.g., if your numerically greatest file is only 32kB long (four pages)\nthen you can't specify values that refer to Xids in pages 5 and beyond,\nbecause those pages will not have been zeroed into existence yet, so\nyou'll get an error:\n ERROR: could not access status of transaction 55692\n DETAIL: Could not read from file \"pg_commit_ts/0002\" at offset 32768: read too few bytes.\nI think a useful value can be had by multiplying 26208 by the latest\n*complete* file number, then if there is an incomplete last file, add\n819 multiplied by the number of pages in it.\n\nAlso, \"numerically greatest\" is the wrong thing in case you've recently\nwrapped XIDs around but the oldest files (before the wraparound) are\nstill present. You really want the \"logically latest\" files. (I think\nthat'll coincide with the files having the latest modification times.)\n\nNot sure how to write this concisely, though. Should we really try?\n\n(I think the number 13088 appeared somewhere in connection with\nmultixacts. Maybe there was a confusion with that.)\n\n> Second, the present pg_resetwal code hardcodes the minimum value as 2, which\n> is FrozenTransactionId, but it's not clear why that is allowed. Maybe we\n> should change that to FirstNormalTransactionId, which matches other\n> xid-related options in pg_resetwal.\n\nYes, that's clearly a mistake I made at the last minute: in [1] I posted\nthis patch *without* the test for 2, and when I pushed the patch two\ndays later, I had introduced that without any further explanation.\n\nBTW if you `git show 666e8db` (which is the SHA1 identifier for\npg_resetxlog.c that appears in the patch I posted back then) you'll see\nthat the existing code did not have any similar protection for valid XID\nvalues. The tests to FirstNormalTransactionId for -u and -x were\nintroduced by commit 74cf7d46a91d, seven years later -- that commit both\nintroduced -u as a new feature, and hardened the tests for -x, which was\npreviously only testing for zero.\n\n[1] https://postgr.es/m/[email protected]\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"Las navajas y los monos deben estar siempre distantes\" (Germán Poo)\n\n\n", "msg_date": "Mon, 9 Oct 2023 17:48:50 +0200", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_resetwal: Corrections around -c option" }, { "msg_contents": "On 09.10.23 17:48, Alvaro Herrera wrote:\n> Hmm, not sure about this. SLRU_PAGES_PER_SEGMENT is 32, and\n> COMMIT_TS_XACTS_PER_PAGE is 819, so this formula gives decimal 26208 =\n> 0x6660. But more generally, I'm not sure about the algorithm. Really,\n> the safe value also depends on how large the latest file actually is;\n> e.g., if your numerically greatest file is only 32kB long (four pages)\n> then you can't specify values that refer to Xids in pages 5 and beyond,\n> because those pages will not have been zeroed into existence yet, so\n> you'll get an error:\n> ERROR: could not access status of transaction 55692\n> DETAIL: Could not read from file \"pg_commit_ts/0002\" at offset 32768: read too few bytes.\n> I think a useful value can be had by multiplying 26208 by the latest\n> *complete* file number, then if there is an incomplete last file, add\n> 819 multiplied by the number of pages in it.\n> \n> Also, \"numerically greatest\" is the wrong thing in case you've recently\n> wrapped XIDs around but the oldest files (before the wraparound) are\n> still present. You really want the \"logically latest\" files. (I think\n> that'll coincide with the files having the latest modification times.)\n\nWould those issues also apply to the other SLRU-based guides on this man \npage? Are they all a bit wrong?\n\n> Not sure how to write this concisely, though. Should we really try?\n\nMaybe not. But the documentation currently suggests you can try \n(probably somewhat copy-and-pasted).\n\n>> Second, the present pg_resetwal code hardcodes the minimum value as 2, which\n>> is FrozenTransactionId, but it's not clear why that is allowed. Maybe we\n>> should change that to FirstNormalTransactionId, which matches other\n>> xid-related options in pg_resetwal.\n> \n> Yes, that's clearly a mistake I made at the last minute: in [1] I posted\n> this patch *without* the test for 2, and when I pushed the patch two\n> days later, I had introduced that without any further explanation.\n> \n> BTW if you `git show 666e8db` (which is the SHA1 identifier for\n> pg_resetxlog.c that appears in the patch I posted back then) you'll see\n> that the existing code did not have any similar protection for valid XID\n> values. The tests to FirstNormalTransactionId for -u and -x were\n> introduced by commit 74cf7d46a91d, seven years later -- that commit both\n> introduced -u as a new feature, and hardened the tests for -x, which was\n> previously only testing for zero.\n\nI have committed this.\n\n\n\n", "msg_date": "Tue, 10 Oct 2023 09:30:50 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pg_resetwal: Corrections around -c option" }, { "msg_contents": "On 2023-Oct-10, Peter Eisentraut wrote:\n\n> On 09.10.23 17:48, Alvaro Herrera wrote:\n> > Hmm, not sure about this. [...]\n> \n> Would those issues also apply to the other SLRU-based guides on this man\n> page? Are they all a bit wrong?\n\nI didn't verify, but I think it's likely that they do and they are.\n\n> > Not sure how to write this concisely, though. Should we really try?\n> \n> Maybe not. But the documentation currently suggests you can try (probably\n> somewhat copy-and-pasted).\n\nI bet this has not been thoroughly verified.\n\nPerhaps we should have (somewhere outside the option list) a separate\nparagraph that explains how to determine the safest maximum value to\nuse, and list only the multiplier together with each option.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Tue, 10 Oct 2023 12:53:58 +0200", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_resetwal: Corrections around -c option" } ]
[ { "msg_contents": "Hackers,\n\nWhile reading through [1] I saw there were two instances where \nbackup_label was removed to achieve a \"successful\" restore. This might \nwork on trivial test restores but is an invitation to (silent) disaster \nin a production environment where the checkpoint stored in backup_label \nis almost certain to be earlier than the one stored in pg_control.\n\nA while back I had an idea on how to prevent this so I decided to give \nit a try. Basically, before writing pg_control to the backup I set \ncheckpoint to 0xFFFFFFFFFFFFFFFF.\n\nRecovery worked perfectly as long as backup_label was present and failed \nhard when it was not:\n\nLOG: invalid primary checkpoint record\nPANIC: could not locate a valid checkpoint record\n\nIt's not a very good message, but at least the foot gun has been \nremoved. We could use this as a special value to give a better message, \nand maybe use something a bit more unique like 0xFFFFFFFFFADEFADE (or \nwhatever) as the value.\n\nThis is all easy enough for pg_basebackup to do, but will certainly be \nnon-trivial for most backup software to implement. In [2] we have \ndiscussed perhaps returning pg_control from pg_backup_stop() for the \nbackup software to save, or it could become part of the backup_label \n(encoded as hex or base64, presumably). I prefer the latter as this \nmeans less work for the backup software (except for the need to exclude \npg_control from the backup).\n\nI don't have a patch for this yet because I did not test this idea using \npg_basebackup, but I'll be happy to work up a patch if there is interest.\n\nI feel like we should do *something* here. If even advanced users are \nmaking this mistake, then we should take it pretty seriously.\n\nRegards,\n-David\n\n[1] \nhttps://www.postgresql.org/message-id/flat/CAM_vCudkSjr7NsNKSdjwtfAm9dbzepY6beZ5DP177POKy8%3D2aw%40mail.gmail.com#746e492bfcd2667635634f1477a61288\n[2] \nhttps://www.postgresql.org/message-id/CA%2BhUKGKiZJcfZSA5G5Rm8oC78SNOQ4c8az5Ku%3D4wMTjw1FZ40g%40mail.gmail.com\n\n\n", "msg_date": "Thu, 28 Sep 2023 17:14:22 -0400", "msg_from": "David Steele <[email protected]>", "msg_from_op": true, "msg_subject": "The danger of deleting backup_label" }, { "msg_contents": "On Thu, Sep 28, 2023 at 05:14:22PM -0400, David Steele wrote:\n> While reading through [1] I saw there were two instances where backup_label\n> was removed to achieve a \"successful\" restore. This might work on trivial\n> test restores but is an invitation to (silent) disaster in a production\n> environment where the checkpoint stored in backup_label is almost certain to\n> be earlier than the one stored in pg_control.\n\nDefinitely successful.\n\n> Recovery worked perfectly as long as backup_label was present and failed\n> hard when it was not:\n> \n> LOG: invalid primary checkpoint record\n> PANIC: could not locate a valid checkpoint record\n> \n> It's not a very good message, but at least the foot gun has been removed. We\n> could use this as a special value to give a better message, and maybe use\n> something a bit more unique like 0xFFFFFFFFFADEFADE (or whatever) as the\n> value.\n\nWhy not just InvalidXLogRecPtr?\n\n> This is all easy enough for pg_basebackup to do, but will certainly be\n> non-trivial for most backup software to implement. In [2] we have discussed\n> perhaps returning pg_control from pg_backup_stop() for the backup software\n> to save, or it could become part of the backup_label (encoded as hex or\n> base64, presumably). I prefer the latter as this means less work for the\n> backup software (except for the need to exclude pg_control from the backup).\n> \n> I don't have a patch for this yet because I did not test this idea using\n> pg_basebackup, but I'll be happy to work up a patch if there is interest.\n\nIf the contents of the control file are tweaked before sending it\nthrough a BASE_BACKUP, it would cover more than just pg_basebackup.\nSwitching the way the control file is sent with new contents in\nsendFileWithContent() rather than sendFile() would be one way, for\ninstance..\n--\nMichael", "msg_date": "Fri, 29 Sep 2023 11:30:21 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: The danger of deleting backup_label" }, { "msg_contents": "On 9/28/23 22:30, Michael Paquier wrote:\n> On Thu, Sep 28, 2023 at 05:14:22PM -0400, David Steele wrote:\n> \n>> Recovery worked perfectly as long as backup_label was present and failed\n>> hard when it was not:\n>>\n>> LOG: invalid primary checkpoint record\n>> PANIC: could not locate a valid checkpoint record\n>>\n>> It's not a very good message, but at least the foot gun has been removed. We\n>> could use this as a special value to give a better message, and maybe use\n>> something a bit more unique like 0xFFFFFFFFFADEFADE (or whatever) as the\n>> value.\n> \n> Why not just InvalidXLogRecPtr?\n\nThat fails because there is a check to make sure the checkpoint is valid \nwhen pg_control is loaded. Another possibility is to use a special LSN \nlike we use for unlogged tables. Anything >= 24 and < WAL segment size \nwill work fine.\n\n>> This is all easy enough for pg_basebackup to do, but will certainly be\n>> non-trivial for most backup software to implement. In [2] we have discussed\n>> perhaps returning pg_control from pg_backup_stop() for the backup software\n>> to save, or it could become part of the backup_label (encoded as hex or\n>> base64, presumably). I prefer the latter as this means less work for the\n>> backup software (except for the need to exclude pg_control from the backup).\n>>\n>> I don't have a patch for this yet because I did not test this idea using\n>> pg_basebackup, but I'll be happy to work up a patch if there is interest.\n> \n> If the contents of the control file are tweaked before sending it\n> through a BASE_BACKUP, it would cover more than just pg_basebackup.\n> Switching the way the control file is sent with new contents in\n> sendFileWithContent() rather than sendFile() would be one way, for\n> instance..\n\nGood point, and that makes this even more compelling. If we include \npg_control into backup_label then there is no need to modify pg_control \n(as above) -- we can just exclude it from the backup entirely. That will \ncertainly require some rejigging in recovery but seems worth it for \nbackup solutions that can't easily modify pg_control. The C-based \nsolutions can do this pretty easily but it is a pretty high bar for \nanyone else.\n\nRegards,\n-David\n\n\n", "msg_date": "Tue, 10 Oct 2023 17:06:45 -0400", "msg_from": "David Steele <[email protected]>", "msg_from_op": true, "msg_subject": "Re: The danger of deleting backup_label" }, { "msg_contents": "Hi David,\n\nEven though I spent a whole bunch of time trying to figure out how to\nmake concurrent reads of the control file sufficiently atomic for\nbackups (pg_basebackup and low level filesystem tools), and we\nexplored multiple avenues with varying results, and finally came up\nwith something that basically works pretty well... actually I just\nhate all of that stuff, and I'm hoping to be able to just withdraw\nhttps://commitfest.postgresql.org/45/4025/ and chalk it all up to\ndiscovery/education and call *this* thread the real outcome of that\npreliminary work.\n\nSo I'm +1 on the idea of putting a control file image into the backup\nlabel and I'm happy that you're looking into it.\n\nWe could just leave the control file out of the base backup\ncompletely, as you said, removing a whole foot-gun. People following\nthe 'low level' instructions will still get a copy of the control file\nfrom the filesystem, and I don't see any reliable way to poison that\nfile without also making it so that a crash wouldn't also be prevented\nfrom recovering. I have wondered about putting extra \"fingerprint\"\ninformation into the control file such as the file's path and inode\nnumber etc, so that you can try to distinguish between a control file\nwritten by PostgreSQL, and a control file copied somewhere else, but\nthat all feels too fragile, and at the end of the day, people\nfollowing the low level backup instructions had better follow the low\nlevel backup instructions (hopefully via the intermediary of an\nexcellent external backup tool).\n\nAs Stephen mentioned[1], we could perhaps also complain if both backup\nlabel and control file exist, and then hint that the user should\nremove the *control file* (not the backup label!). I had originally\nsuggested we would just overwrite the control file, but by explicitly\ncomplaining about it we would also bring the matter to tool/script\nauthors' attention, ie that they shouldn't be backing that file up, or\nshould be removing it in a later step if they copy everything. He\nalso mentions that there doesn't seem to be anything stopping us from\nback-patching changes to the backup label contents if we go this way.\nI don't have a strong opinion on that and we could leave the question\nfor later.\n\n[1] https://www.postgresql.org/message-id/ZL69NXjCNG%2BWHCqG%40tamriel.snowman.net\n\n\n", "msg_date": "Thu, 12 Oct 2023 11:10:41 +1300", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: The danger of deleting backup_label" }, { "msg_contents": "On Tue, Oct 10, 2023 at 05:06:45PM -0400, David Steele wrote:\n> That fails because there is a check to make sure the checkpoint is valid\n> when pg_control is loaded. Another possibility is to use a special LSN like\n> we use for unlogged tables. Anything >= 24 and < WAL segment size will work\n> fine.\n\nDo we have any reason to do that in the presence of a backup_label\nfile anyway? We'll know the LSN of the checkpoint based on what the\nbase backup wants us to use. Using a fake-still-rather-valid value\nfor the LSN in the control file to bypass this check does not address\nthe issue you are pointing at: it is just avoiding this check. A\nreasonable answer would be, IMO, to just not do this check at all\nbased on the control file in this case.\n\n>> If the contents of the control file are tweaked before sending it\n>> through a BASE_BACKUP, it would cover more than just pg_basebackup.\n>> Switching the way the control file is sent with new contents in\n>> sendFileWithContent() rather than sendFile() would be one way, for\n>> instance..\n> \n> Good point, and that makes this even more compelling. If we include\n> pg_control into backup_label then there is no need to modify pg_control (as\n> above) -- we can just exclude it from the backup entirely. That will\n> certainly require some rejigging in recovery but seems worth it for backup\n> solutions that can't easily modify pg_control. The C-based solutions can do\n> this pretty easily but it is a pretty high bar for anyone else.\n\nI have little idea about that, but I guess that you are referring to\nbackrest here.\n--\nMichael", "msg_date": "Thu, 12 Oct 2023 07:22:09 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: The danger of deleting backup_label" }, { "msg_contents": "Hi Thomas,\n\nOn 10/11/23 18:10, Thomas Munro wrote:\n> \n> Even though I spent a whole bunch of time trying to figure out how to\n> make concurrent reads of the control file sufficiently atomic for\n> backups (pg_basebackup and low level filesystem tools), and we\n> explored multiple avenues with varying results, and finally came up\n> with something that basically works pretty well... actually I just\n> hate all of that stuff, and I'm hoping to be able to just withdraw\n> https://commitfest.postgresql.org/45/4025/ and chalk it all up to\n> discovery/education and call *this* thread the real outcome of that\n> preliminary work.\n> \n> So I'm +1 on the idea of putting a control file image into the backup\n> label and I'm happy that you're looking into it.\n\nWell, hopefully this thread will *at least* be the solution going \nforward. Not sure about a back patch yet, see below...\n\n> We could just leave the control file out of the base backup\n> completely, as you said, removing a whole foot-gun. \n\nThat's the plan.\n\n> People following\n> the 'low level' instructions will still get a copy of the control file\n> from the filesystem, and I don't see any reliable way to poison that\n> file without also making it so that a crash wouldn't also be prevented\n> from recovering. I have wondered about putting extra \"fingerprint\"\n> information into the control file such as the file's path and inode\n> number etc, so that you can try to distinguish between a control file\n> written by PostgreSQL, and a control file copied somewhere else, but\n> that all feels too fragile, and at the end of the day, people\n> following the low level backup instructions had better follow the low\n> level backup instructions (hopefully via the intermediary of an\n> excellent external backup tool).\n\nNot sure about the inode idea, because it seems OK for people to move a \ncluster elsewhere under a variety of circumstances. I do have an idea \nabout how to mark a cluster in \"recovery to consistency\" mode, but not \nquite sure how to atomically turn that off at the end of recovery to \nconsistency. I have some ideas I'll work on though.\n\n> As Stephen mentioned[1], we could perhaps also complain if both backup\n> label and control file exist, and then hint that the user should\n> remove the *control file* (not the backup label!). I had originally\n> suggested we would just overwrite the control file, but by explicitly\n> complaining about it we would also bring the matter to tool/script\n> authors' attention, ie that they shouldn't be backing that file up, or\n> should be removing it in a later step if they copy everything. He\n> also mentions that there doesn't seem to be anything stopping us from\n> back-patching changes to the backup label contents if we go this way.\n> I don't have a strong opinion on that and we could leave the question\n> for later.\n\nI'm worried about the possibility of back patching this unless the \nsolution comes out to be simpler than I think and that rarely comes to \npass. Surely throwing errors on something that is currently valid (i.e. \nbackup_label and pg_control both present).\n\nBut perhaps there is a simpler, acceptable solution we could back patch \n(transparent to all parties except Postgres) and then a more advanced \nsolution we could go forward with.\n\nI guess I had better get busy on this.\n\nRegards,\n-David\n\n[1] \nhttps://www.postgresql.org/message-id/ZL69NXjCNG%2BWHCqG%40tamriel.snowman.net\n\n\n", "msg_date": "Thu, 12 Oct 2023 10:19:15 -0400", "msg_from": "David Steele <[email protected]>", "msg_from_op": true, "msg_subject": "Re: The danger of deleting backup_label" }, { "msg_contents": "\n\nOn 10/11/23 18:22, Michael Paquier wrote:\n> On Tue, Oct 10, 2023 at 05:06:45PM -0400, David Steele wrote:\n>> That fails because there is a check to make sure the checkpoint is valid\n>> when pg_control is loaded. Another possibility is to use a special LSN like\n>> we use for unlogged tables. Anything >= 24 and < WAL segment size will work\n>> fine.\n> \n> Do we have any reason to do that in the presence of a backup_label\n> file anyway? We'll know the LSN of the checkpoint based on what the\n> base backup wants us to use. Using a fake-still-rather-valid value\n> for the LSN in the control file to bypass this check does not address\n> the issue you are pointing at: it is just avoiding this check. A\n> reasonable answer would be, IMO, to just not do this check at all\n> based on the control file in this case.\n\nYeah, that's fair. And it looks like we are leaning towards excluding \npg_control from the backup entirely, so the point is probably moot.\n\n>>> If the contents of the control file are tweaked before sending it\n>>> through a BASE_BACKUP, it would cover more than just pg_basebackup.\n>>> Switching the way the control file is sent with new contents in\n>>> sendFileWithContent() rather than sendFile() would be one way, for\n>>> instance..\n>>\n>> Good point, and that makes this even more compelling. If we include\n>> pg_control into backup_label then there is no need to modify pg_control (as\n>> above) -- we can just exclude it from the backup entirely. That will\n>> certainly require some rejigging in recovery but seems worth it for backup\n>> solutions that can't easily modify pg_control. The C-based solutions can do\n>> this pretty easily but it is a pretty high bar for anyone else.\n> \n> I have little idea about that, but I guess that you are referring to\n> backrest here.\n\nSure, pgBackRest, but there are other backup solutions written in C. My \npoint is really that we should not depend on backup solutions being able \nto manipulate C structs. It looks the the solution we are working \ntowards would not require that.\n\nRegards,\n-David\n\n\n", "msg_date": "Thu, 12 Oct 2023 10:25:45 -0400", "msg_from": "David Steele <[email protected]>", "msg_from_op": true, "msg_subject": "Re: The danger of deleting backup_label" }, { "msg_contents": "On 10/12/23 10:19, David Steele wrote:\n> On 10/11/23 18:10, Thomas Munro wrote:\n> \n>> As Stephen mentioned[1], we could perhaps also complain if both backup\n>> label and control file exist, and then hint that the user should\n>> remove the *control file* (not the backup label!).  I had originally\n>> suggested we would just overwrite the control file, but by explicitly\n>> complaining about it we would also bring the matter to tool/script\n>> authors' attention, ie that they shouldn't be backing that file up, or\n>> should be removing it in a later step if they copy everything.  He\n>> also mentions that there doesn't seem to be anything stopping us from\n>> back-patching changes to the backup label contents if we go this way.\n>> I don't have a strong opinion on that and we could leave the question\n>> for later.\n> \n> I'm worried about the possibility of back patching this unless the \n> solution comes out to be simpler than I think and that rarely comes to \n> pass. Surely throwing errors on something that is currently valid (i.e. \n> backup_label and pg_control both present).\n> \n> But perhaps there is a simpler, acceptable solution we could back patch \n> (transparent to all parties except Postgres) and then a more advanced \n> solution we could go forward with.\n> \n> I guess I had better get busy on this.\n\nAttached is a very POC attempt at something along these lines that could \nbe back patched. I stopped when I realized excluding pg_control from the \nbackup is a necessity to make this work (else we still could end up with \na torn copy of pg_control even if there is a good copy elsewhere). To \nenumerate the back patch issues as I see them:\n\n1) We still need a copy of pg_control in order to get Postgres to start \nand that copy might be torn (pretty much where we are now). We can \nhandle this easily in pg_basebackup but most backup software will not be \nable to do so. In my opinion teaching Postgres to start without \npg_control is too big a change to possibly back patch.\n\n2) We need to deal with backups made with a prior *minor* version that \ndid not include pg_control in the backup_label. Doable, but...\n\n3) We need to move backup_label to the end of the main pg_basebackup \ntar, which could cause unforeseen breakage for tools.\n\n4) This patch is less efficient for backups taken from standby because \nit will overwrite pg_control on restart and force replay back to the \noriginal MinRecoveryPoint.\n\n5) We still need a solution for exclusive backup (still valid in PG <= \n14). Doable but it would still have the weakness of 1.\n\nAll of this is fixable in HEAD, but seems incredibly dangerous to back \npatch. Even so, I have attached the patch in case somebody sees an \nopportunity that I do not.\n\nRegards,\n-David", "msg_date": "Sat, 14 Oct 2023 11:30:55 -0400", "msg_from": "David Steele <[email protected]>", "msg_from_op": true, "msg_subject": "Re: The danger of deleting backup_label" }, { "msg_contents": "On Sat, Oct 14, 2023 at 11:33 AM David Steele <[email protected]> wrote:\n> All of this is fixable in HEAD, but seems incredibly dangerous to back\n> patch. Even so, I have attached the patch in case somebody sees an\n> opportunity that I do not.\n\nI really do not think we should be even thinking about back-patching\nsomething like this. It's clearly not a bug fix, although I'm sure\nthat someone can try to characterize it that way, if they want to make\nthe well-worn argument that any behavior they don't like is a bug. But\nthat's a pretty lame argument. Usage errors on the part of users are\nnot bugs, even if we've coded the software in such a way as to make\nthose errors more likely.\n\nI think what we ought to be talking about is whether a change like\nthis is a good idea even in master. I don't think it's a terrible\nidea, but I'm also not sure that it's a good idea. The problem is that\nif you're doing the right thing with your backup_label, then this is\nunnecessary, and if you're doing the wrong thing, then why should you\ndo the right thing about this? I mean, admittedly you can't just\nignore a fatal error, but I think people will just run pg_resetwal,\nwhich is even worse than starting from the wrong checkpoint. I feel\nlike in cases where a customer I'm working with has a bad backup,\ntheir entire focus is on doing something to that backup to get a\nrunning system back, whatever it takes. It's already too late at that\npoint to fix the backup procedure - they only have the backups they\nhave. You could hope people would do test restores before disaster\nstrikes, but people who are that prepared are probably running a real\nbackup tool and will never have this problem in the first place.\n\nPerhaps that's all too pessimistic. I don't know. Certainly, other\npeople can have experiences that are different than mine. But I feel\nlike I struggle to think of a case where this would have prevented a\nbad outcome, and that makes me wonder whether it's really a good idea\nto complicate the system.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 16 Oct 2023 10:55:31 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: The danger of deleting backup_label" }, { "msg_contents": "On 10/16/23 10:55, Robert Haas wrote:\n> On Sat, Oct 14, 2023 at 11:33 AM David Steele <[email protected]> wrote:\n>> All of this is fixable in HEAD, but seems incredibly dangerous to back\n>> patch. Even so, I have attached the patch in case somebody sees an\n>> opportunity that I do not.\n> \n> I really do not think we should be even thinking about back-patching\n> something like this. It's clearly not a bug fix, although I'm sure\n> that someone can try to characterize it that way, if they want to make\n> the well-worn argument that any behavior they don't like is a bug. But\n> that's a pretty lame argument. Usage errors on the part of users are\n> not bugs, even if we've coded the software in such a way as to make\n> those errors more likely.\n\nHmmm, the reason to back patch this is that it would fix [1], which sure \nlooks like a problem to me even if it is not a \"bug\". We can certainly \nrequire backup software to retry pg_control until the checksum is valid \nbut that seems like a pretty big ask, even considering how complicated \nbackup is.\n\nI investigated this as a solution to [1] because it seemed like a better \nsolution that what we have in [1]. But once I got into the weeds it was \nobvious that this wasn't going to be something we could back patch.\n\n> I think what we ought to be talking about is whether a change like\n> this is a good idea even in master. I don't think it's a terrible\n> idea, but I'm also not sure that it's a good idea. The problem is that\n> if you're doing the right thing with your backup_label, then this is\n> unnecessary, and if you're doing the wrong thing, then why should you\n> do the right thing about this? \n\nFirst and foremost, this solves the problem of pg_control being torn \nwhen read by the backup software. It can't be torn if it is not there.\n\nThere are also other advantages -- we can massage pg_control before \nwriting it back out. This already happens, but before that happens there \nis a copy of the (prior) running pg_control there that can mess up the \nprocess.\n\n> I mean, admittedly you can't just\n> ignore a fatal error, but I think people will just run pg_resetwal,\n> which is even worse than starting from the wrong checkpoint. \n\nIf you start from the last checkpoint (which is what will generally be \nstored in pg_control) then the effect is pretty similar.\n\n> I feel\n> like in cases where a customer I'm working with has a bad backup,\n> their entire focus is on doing something to that backup to get a\n> running system back, whatever it takes. It's already too late at that\n> point to fix the backup procedure - they only have the backups they\n> have. You could hope people would do test restores before disaster\n> strikes, but people who are that prepared are probably running a real\n> backup tool and will never have this problem in the first place.\n\nRight now the user can remove backup_label and get a \"successful\" \nrestore and not realize that they have just corrupted their cluster. \nThis is independent of the backup/restore tool doing all the right things.\n\nMy goal here is to narrow the options to try and make it so there is \n*one* valid procedure that will work. For this patch the idea is that \nthey *must* start Postgres to get a valid pg_control from the \nbackup_label. Any other action leads to a fatal error.\n\nNote that the current patch is very WIP and does not actually do \neverything I'm talking about here. I was just trying to see if it could \nbe used to solve the problem in [1]. It can't.\n\n> Perhaps that's all too pessimistic. I don't know. Certainly, other\n> people can have experiences that are different than mine. But I feel\n> like I struggle to think of a case where this would have prevented a\n> bad outcome, and that makes me wonder whether it's really a good idea\n> to complicate the system.\n\nI'm specifically addressing cases like those that came up (twice!) in \n[2]. This is the main place I see people stumbling these days. If even \nhackers can make this mistake then we should do better.\n\nRegards,\n-David\n\n[1] \nhttps://www.postgresql.org/message-id/20221123014224.xisi44byq3cf5psi%40awork3.anarazel.de\n[2] [1] \nhttps://www.postgresql.org/message-id/flat/CAM_vCudkSjr7NsNKSdjwtfAm9dbzepY6beZ5DP177POKy8%3D2aw%40mail.gmail.com#746e492bfcd2667635634f1477a61288\n\n\n", "msg_date": "Mon, 16 Oct 2023 11:45:07 -0400", "msg_from": "David Steele <[email protected]>", "msg_from_op": true, "msg_subject": "Re: The danger of deleting backup_label" }, { "msg_contents": "On Mon, Oct 16, 2023 at 11:45 AM David Steele <[email protected]> wrote:\n> Hmmm, the reason to back patch this is that it would fix [1], which sure\n> looks like a problem to me even if it is not a \"bug\". We can certainly\n> require backup software to retry pg_control until the checksum is valid\n> but that seems like a pretty big ask, even considering how complicated\n> backup is.\n\nThat seems like a problem with pg_control not being written atomically\nwhen the standby server is updating it during recovery, rather than a\nproblem with backup_label not being used at the start of recovery.\nUnless I'm confused.\n\n> If you start from the last checkpoint (which is what will generally be\n> stored in pg_control) then the effect is pretty similar.\n\nIf the backup didn't span a checkpoint, then restoring from the one in\npg_control actually works fine. Not that I'm encouraging that. But if\nyou replay WAL from the control file, you at least get the last\ncheckpoint's worth of WAL; if you use pg_resetwal, you get nothing.\n\nI don't really want to get hung up on this though. My main point here\nis that I have trouble believing that an error after you've already\nscrewed up your backup helps much. I think what we need is to make it\nless likely that you will screw up your backup in the first place.\n\n> Right now the user can remove backup_label and get a \"successful\"\n> restore and not realize that they have just corrupted their cluster.\n> This is independent of the backup/restore tool doing all the right things.\n\nI don't think it's independent of that at all.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 16 Oct 2023 12:25:59 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: The danger of deleting backup_label" }, { "msg_contents": "On 10/16/23 12:25, Robert Haas wrote:\n> On Mon, Oct 16, 2023 at 11:45 AM David Steele <[email protected]> wrote:\n>> Hmmm, the reason to back patch this is that it would fix [1], which sure\n>> looks like a problem to me even if it is not a \"bug\". We can certainly\n>> require backup software to retry pg_control until the checksum is valid\n>> but that seems like a pretty big ask, even considering how complicated\n>> backup is.\n> \n> That seems like a problem with pg_control not being written atomically\n> when the standby server is updating it during recovery, rather than a\n> problem with backup_label not being used at the start of recovery.\n> Unless I'm confused.\n\nYou are not confused. But the fact that it practically can be read as \ntorn at all on the standby is a function of how rapidly it is being \nwritten to update min recovery point. Writing it atomically via a temp \nfile may affect performance in this area, but only during the backup, \nwhich may cause recovery to run more slowly during a backup.\n\nI don't have proof of this because I don't have any hosts large enough \nto test the theory.\n\n>> Right now the user can remove backup_label and get a \"successful\"\n>> restore and not realize that they have just corrupted their cluster.\n>> This is independent of the backup/restore tool doing all the right things.\n> \n> I don't think it's independent of that at all.\n\nI think it is. Imagine the user does backup/recovery using their \nfavorite tool and everything is fine. But due to some misconfiguration \nor a problem in the WAL archive, they get this error:\n\nFATAL: could not locate required checkpoint record\n2023-10-16 16:42:50.132 UTC HINT: If you are restoring from a backup, \ntouch \"/home/dev/test/data/recovery.signal\" or \n\"/home/dev/test/data/standby.signal\" and add required recovery options.\n If you are not restoring from a backup, try removing the file \n\"/home/dev/test/data/backup_label\".\n Be careful: removing \"/home/dev/test/data/backup_label\" will \nresult in a corrupt cluster if restoring from a backup.\n\nI did this by setting restore_command=false, but it could just as easily \nbe the actual restore command that returns false due to a variety of \nreasons. The user has no idea what \"could not locate required checkpoint \nrecord\" means and if there is enough automation they may not even \nrealize they just restored from a backup.\n\nAfter some agonizing (we hope) they decide to delete backup_label and, \nwow, it just works! So now they merrily go on their way with a corrupted \ncluster. They also remember for the next time that deleting backup_label \nis definitely a good procedure.\n\nThe idea behind this patch is that deleting backup_label would produce a \nhard error because pg_control would be missing as well (if the backup \nsoftware did its job). If both pg_control and backup_label are present \n(but pg_control has not been loaded with the contents of backup_label, \ni.e. it is the running copy from the backup cluster) we can also error.\n\nIt's not perfect, because they could backup (or restore) pg_control but \nnot backup_label, but we are narrowing the cases where something can go \nwrong and they have silent corruption, especially if their \nbackup/restore software follows the directions.\n\nRegards,\n-David\n\n\n", "msg_date": "Mon, 16 Oct 2023 13:00:11 -0400", "msg_from": "David Steele <[email protected]>", "msg_from_op": true, "msg_subject": "Re: The danger of deleting backup_label" }, { "msg_contents": "On Mon, Oct 16, 2023 at 1:00 PM David Steele <[email protected]> wrote:\n> After some agonizing (we hope) they decide to delete backup_label and,\n> wow, it just works! So now they merrily go on their way with a corrupted\n> cluster. They also remember for the next time that deleting backup_label\n> is definitely a good procedure.\n>\n> The idea behind this patch is that deleting backup_label would produce a\n> hard error because pg_control would be missing as well (if the backup\n> software did its job). If both pg_control and backup_label are present\n> (but pg_control has not been loaded with the contents of backup_label,\n> i.e. it is the running copy from the backup cluster) we can also error.\n\nI mean, I think we're just going in circles, here. I did and do\nunderstand, but I didn't and don't agree. You're hypothesizing a user\nwho is willing to do ONE thing that they shouldn't do during backup\nrestoration (namely, remove backup_label) but who won't be willing to\ndo a SECOND thing that they shouldn't do during backup restoration\n(namely, run pg_resetwal). In my experience, users who are willing to\ncorrupt their database don't typically limit themselves to one bad\ndecision, and therefore I doubt that this proposal delivers enough\nvalue to justify the complexity.\n\nI understand that you feel differently, and that's fine, but I don't\nthink our disagreement here stems from me being confused. I just ...\ndon't agree.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 16 Oct 2023 15:06:21 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: The danger of deleting backup_label" }, { "msg_contents": "On Mon, Oct 16, 2023 at 12:25:59PM -0400, Robert Haas wrote:\n> On Mon, Oct 16, 2023 at 11:45 AM David Steele <[email protected]> wrote:\n> > If you start from the last checkpoint (which is what will generally be\n> > stored in pg_control) then the effect is pretty similar.\n> \n> If the backup didn't span a checkpoint, then restoring from the one in\n> pg_control actually works fine. Not that I'm encouraging that. But if\n> you replay WAL from the control file, you at least get the last\n> checkpoint's worth of WAL; if you use pg_resetwal, you get nothing.\n\nThere's no guarantee that the backend didn't spawn an extra checkpoint\nwhile a base backup was taken, either, because we don't fail hard at\nthe end of the BASE_BACKUP code paths if the redo LSN has been updated\nsince the beginning of a BASE_BACKUP. So that's really *never* do it\nexcept if you like silent corruptions.\n\n> I don't really want to get hung up on this though. My main point here\n> is that I have trouble believing that an error after you've already\n> screwed up your backup helps much. I think what we need is to make it\n> less likely that you will screw up your backup in the first place.\n\nYeah.. Now what's the best user experience? Is it better for a base\nbackup to fail and have a user retry? Or is it better to have the\nbackend-side backup logic do what we think is safer? The former\n(likely with a REDO check or similar), will likely never work on large\ninstances, while users will likely always find ways to screw up base\nbackups taken by latter methods. A third approach is to put more\ncareful checks at restore time, and the latter helps a lot here.\n--\nMichael", "msg_date": "Tue, 17 Oct 2023 08:38:40 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: The danger of deleting backup_label" }, { "msg_contents": "On 10/16/23 15:06, Robert Haas wrote:\n> On Mon, Oct 16, 2023 at 1:00 PM David Steele <[email protected]> wrote:\n>> After some agonizing (we hope) they decide to delete backup_label and,\n>> wow, it just works! So now they merrily go on their way with a corrupted\n>> cluster. They also remember for the next time that deleting backup_label\n>> is definitely a good procedure.\n>>\n>> The idea behind this patch is that deleting backup_label would produce a\n>> hard error because pg_control would be missing as well (if the backup\n>> software did its job). If both pg_control and backup_label are present\n>> (but pg_control has not been loaded with the contents of backup_label,\n>> i.e. it is the running copy from the backup cluster) we can also error.\n> \n> I mean, I think we're just going in circles, here. I did and do\n> understand, but I didn't and don't agree. You're hypothesizing a user\n> who is willing to do ONE thing that they shouldn't do during backup\n> restoration (namely, remove backup_label) but who won't be willing to\n> do a SECOND thing that they shouldn't do during backup restoration\n> (namely, run pg_resetwal). \n\nIn my experience the first case is much more likely than the second. \nYour experience may vary.\n\nAnyway, I think they are pretty different. Deleting backup label appears \nto give a perfectly valid restore. Running pg_resetwal is more clearly \n(I think) the nuclear solution.\n\n> I understand that you feel differently, and that's fine, but I don't\n> think our disagreement here stems from me being confused. I just ...\n> don't agree.\n\nFair enough, we don't agree.\n\nRegards,\n-David\n\n\n", "msg_date": "Mon, 16 Oct 2023 19:45:35 -0400", "msg_from": "David Steele <[email protected]>", "msg_from_op": true, "msg_subject": "Re: The danger of deleting backup_label" }, { "msg_contents": "Greetings,\n\n* David Steele ([email protected]) wrote:\n> On 10/16/23 15:06, Robert Haas wrote:\n> > On Mon, Oct 16, 2023 at 1:00 PM David Steele <[email protected]> wrote:\n> > > After some agonizing (we hope) they decide to delete backup_label and,\n> > > wow, it just works! So now they merrily go on their way with a corrupted\n> > > cluster. They also remember for the next time that deleting backup_label\n> > > is definitely a good procedure.\n> > > \n> > > The idea behind this patch is that deleting backup_label would produce a\n> > > hard error because pg_control would be missing as well (if the backup\n> > > software did its job). If both pg_control and backup_label are present\n> > > (but pg_control has not been loaded with the contents of backup_label,\n> > > i.e. it is the running copy from the backup cluster) we can also error.\n> > \n> > I mean, I think we're just going in circles, here. I did and do\n> > understand, but I didn't and don't agree. You're hypothesizing a user\n> > who is willing to do ONE thing that they shouldn't do during backup\n> > restoration (namely, remove backup_label) but who won't be willing to\n> > do a SECOND thing that they shouldn't do during backup restoration\n> > (namely, run pg_resetwal).\n> \n> In my experience the first case is much more likely than the second. Your\n> experience may vary.\n\nMy experience (though perhaps not a surprise) mirrors David's.\n\n> Anyway, I think they are pretty different. Deleting backup label appears to\n> give a perfectly valid restore. Running pg_resetwal is more clearly (I\n> think) the nuclear solution.\n\nRight, and a delete of backup_label is just an 'rm' that folks may think\n\"oh, this is just some leftover thing that isn't actually needed\".\n\nOTOH, pg_resetwal has an online documentation page and a man page that's\nvery clear that it's only to be used as a last resort (perhaps we should\npull that into the --help output too..?). It's also pretty clear that\npg_resetwal is actually changing things about the cluster while nuking\nbackup_label doesn't *seem* to be in that same category, even though we\nall know it is because it's needed once recovery begins.\n\nI'd also put out there that while people don't do restore testing\nnearly as much as they should, they tend to at _least_ try to do it once\nafter taking their first backup and if that fails then they try to figure\nout why and what they're not doing right.\n\nThanks,\n\nStephen", "msg_date": "Tue, 17 Oct 2023 15:17:47 -0400", "msg_from": "Stephen Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: The danger of deleting backup_label" }, { "msg_contents": "On Tue, Oct 17, 2023 at 3:17 PM Stephen Frost <[email protected]> wrote:\n> I'd also put out there that while people don't do restore testing\n> nearly as much as they should, they tend to at _least_ try to do it once\n> after taking their first backup and if that fails then they try to figure\n> out why and what they're not doing right.\n\nWell, I agree with you on that point, but a lot of people only seem to\nrealize that after it's too late.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 17 Oct 2023 16:07:11 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: The danger of deleting backup_label" }, { "msg_contents": "On 10/14/23 11:30, David Steele wrote:\n> On 10/12/23 10:19, David Steele wrote:\n>> On 10/11/23 18:10, Thomas Munro wrote:\n>>\n>>> As Stephen mentioned[1], we could perhaps also complain if both backup\n>>> label and control file exist, and then hint that the user should\n>>> remove the *control file* (not the backup label!).  I had originally\n>>> suggested we would just overwrite the control file, but by explicitly\n>>> complaining about it we would also bring the matter to tool/script\n>>> authors' attention, ie that they shouldn't be backing that file up, or\n>>> should be removing it in a later step if they copy everything.  He\n>>> also mentions that there doesn't seem to be anything stopping us from\n>>> back-patching changes to the backup label contents if we go this way.\n>>> I don't have a strong opinion on that and we could leave the question\n>>> for later.\n>>\n>> I'm worried about the possibility of back patching this unless the \n>> solution comes out to be simpler than I think and that rarely comes to \n>> pass. Surely throwing errors on something that is currently valid \n>> (i.e. backup_label and pg_control both present).\n>>\n>> But perhaps there is a simpler, acceptable solution we could back \n>> patch (transparent to all parties except Postgres) and then a more \n>> advanced solution we could go forward with.\n>>\n>> I guess I had better get busy on this.\n> \n> Attached is a very POC attempt at something along these lines that could \n> be back patched. I stopped when I realized excluding pg_control from the \n> backup is a necessity to make this work (else we still could end up with \n> a torn copy of pg_control even if there is a good copy elsewhere). To \n> enumerate the back patch issues as I see them:\n\nGiven that the above can't be back patched, I'm thinking we don't need \nbackup_label at all going forward. We just write the values we need for \nrecovery into pg_control and return *that* from pg_backup_stop() and \ntell the user to store it with their backup. We already have \"These \nfiles are vital to the backup working and must be written byte for byte \nwithout modification, which may require opening the file in binary \nmode.\" in the documentation so dealing with pg_control should not be a \nproblem. pg_control also has a CRC so we will know if it gets munged.\n\nIt doesn't really matter where/how they store pg_control as long as it \nis written back into PGDATA before the cluster starts. If \nbackupEndRequired, etc., are set appropriately then recovery will do the \nright thing when it starts, just as now if PG crashes after it has \nrenamed backup_label but before recovery to consistency has completed.\n\nWe can still enforce the presence of recovery.signal by checking \nbackupEndRequired if that's something we want but it seems like \nbackupEndRequired would be enough. I'm fine either way.\n\nAnother thing we can do here is make backup from standby easier. The \nuser won't need to worry about *when* pg_control is copied. We can just \nwrite the ideal min recovery point into pg_control.\n\nAny informational data currently in backup_label can be returned as \ncolumns (like the start/end lsn is now).\n\nThis makes the patch much less invasive and while it definitely, \nabsolutely cannot be back patched, it seems like a good way forward.\n\nThis is the direction I'm planning to work on patch-wise but I'd like to \nhear people's thoughts.\n\nRegards,\n-David\n\n\n", "msg_date": "Tue, 17 Oct 2023 16:16:42 -0400", "msg_from": "David Steele <[email protected]>", "msg_from_op": true, "msg_subject": "Re: The danger of deleting backup_label" }, { "msg_contents": "At Tue, 17 Oct 2023 16:16:42 -0400, David Steele <[email protected]> wrote in \n> Given that the above can't be back patched, I'm thinking we don't need\n> backup_label at all going forward. We just write the values we need\n> for recovery into pg_control and return *that* from pg_backup_stop()\n> and tell the user to store it with their backup. We already have\n> \"These files are vital to the backup working and must be written byte\n> for byte without modification, which may require opening the file in\n> binary mode.\" in the documentation so dealing with pg_control should\n> not be a problem. pg_control also has a CRC so we will know if it gets\n> munged.\n\nI'm somewhat perplexed regarding the objective of this thread.\n\nThis thread began with the intent of preventing users from removing\nthe backup_label from a backup. At the beginning, the proposal aimed\nto achieve this by injecting an invalid value to pg_control file\nlocated in the generated backup. However, this (and previous) proposal\nseems to deviate from that initial objective. It now eliminates the\nneed to be concerned about the pg_control version that is coped into\nthe generated backup. However, if someone removes the backup_label\nfrom a backup, the initial concerns could still surface.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Wed, 18 Oct 2023 11:13:01 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <[email protected]>", "msg_from_op": false, "msg_subject": "Re: The danger of deleting backup_label" }, { "msg_contents": "On Tue, Oct 17, 2023 at 4:17 PM David Steele <[email protected]> wrote:\n> Given that the above can't be back patched, I'm thinking we don't need\n> backup_label at all going forward. We just write the values we need for\n> recovery into pg_control and return *that* from pg_backup_stop() and\n> tell the user to store it with their backup. We already have \"These\n> files are vital to the backup working and must be written byte for byte\n> without modification, which may require opening the file in binary\n> mode.\" in the documentation so dealing with pg_control should not be a\n> problem. pg_control also has a CRC so we will know if it gets munged.\n\nYeah, I was thinking about this kind of idea, too. I think it might be\na good idea, although I'm not completely certain about that, either.\n\nOn the positive side, you can't remove backup_label in error if\nbackup_label is not a thing. You certainly can't remove the control\nfile. You can, however, use the original control file instead of the\none that you were supposed to use. However, that is not really any\ndifferent from failing to write the backup_label into the backup\ndirectory, which you can already do today. Also, it adds very little\nnet complexity to the low-level backup procedure. Instead of needing\nto write the backup_label into the backup directory, you write the\ncontrol file -- but that's instead, not in addition. So overall it\nseems like the complexity is similar to what we have today but one\npossible mistake is eliminated.\n\nAlso on the positive side, I suspect we could remove a decent amount\nof code for dealing with backup_label files. We wouldn't have to read\nthem any more (and the code to do that is pretty rough-and-ready) and\nwe wouldn't have to do different things based on whether the\nbackup_label exists or not. The logic in xlog*.c is extremely\ncomplicated, and everything that we can do to reduce the number of\ncases that need to be considered is not just good, but great.\n\nBut there are also some negatives.\n\nFirst, anything that is stored in the backup_label but not the control\nfile has to (a) move into the control file, (b) be stored someplace\nelse, or (c) be eliminated as a concept. We're likely to get\ncomplaints about (a), especially if the data in question is anything\nbig. Any proposal to do (b) risks undermining the whole theory under\nwhich this is a good proposal, namely that removing backup_label gives\nus one less thing to worry about. So that brings us to (c).\nPersonally, I would lose very little sleep if the LABEL field died and\nnever came back, and I wouldn't miss START TIME and STOP TIME either,\nbut somebody else might well feel differently. I don't think it's\ntrivial to get rid of BACKUP METHOD, as there unfortunately seems to\nbe code that depends on knowing the difference between BACKUP FROM:\nstreamed and BACKUP FROM: pg_rewind. I suspect that BACKUP FROM:\nprimary/standby might have the same issue, but I'm not sure. STOP\nTIMELINE could be a problem too. I think that if somebody could do\nsome rejiggering to eliminate some of the differences between the\ncases here, that could be really good general cleanup irrespective of\nwhat we decide about this proposal, and moving some things in to\npg_control is probably reasonable too. For instance, it would seem\ncrazy to me to argue that storing the backup end location in the\ncontrol file is OK, but storing the backup end TLI there would not be\nOK. But the point here is that there's probably a good deal of careful\nthinking that would need to be done here about exactly where all of\nthe stuff that currently exists in the backup_label file but not in\npg_control needs to end up.\n\nSecond, right now, the stuff that we return at the end of a backup is\nall text data. With this proposal, it becomes binary data. I entirely\nrealize that people should only be doing these kinds of backups using\nautomated tools that that those automated tools should be perfectly\ncapable of handling binary data without garbling anything. But that's\nabout as realistic as supposing that people won't instantly remove the\nbackup_label file the moment it seems like it will solve some problem,\neven when the directions clearly state that this should only be done\nin some other situation that is not the one the user is facing. It\njust occurred to me that one thing we could do to improve the user\nexperience here is offer some kind of command-line utility to assist\nwith taking a low-level backup. This could be done even if we don't\nproceed with this proposal e.g.\n\npg_llbackup -d $CONNTR --backup-label=PATH --tablespace-map=PATH\n--copy-data-directory=SHELLCOMMAND\n\nI don't know for sure how much that would help, but I wonder if it\nmight actually help quite a bit, because right now people do things\nlike use psql in a shell script to try to juggle a database connection\nand then in some other part of the shell script do the data copying.\nBut it is quite easy to screw up the error handling or the psql\nsession lifetime or something like that, and this would maybe provide\na nicer interface. Details likely need a good deal of kibitizing.\n\nThere might be other problems, too. This is just what occurs to me off\nthe top of my head. But I think it's an interesting angle to explore\nfurther.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 18 Oct 2023 08:39:03 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: The danger of deleting backup_label" }, { "msg_contents": "On 10/17/23 22:13, Kyotaro Horiguchi wrote:\n> At Tue, 17 Oct 2023 16:16:42 -0400, David Steele <[email protected]> wrote in\n>> Given that the above can't be back patched, I'm thinking we don't need\n>> backup_label at all going forward. We just write the values we need\n>> for recovery into pg_control and return *that* from pg_backup_stop()\n>> and tell the user to store it with their backup. We already have\n>> \"These files are vital to the backup working and must be written byte\n>> for byte without modification, which may require opening the file in\n>> binary mode.\" in the documentation so dealing with pg_control should\n>> not be a problem. pg_control also has a CRC so we will know if it gets\n>> munged.\n> \n> I'm somewhat perplexed regarding the objective of this thread.\n> \n> This thread began with the intent of preventing users from removing\n> the backup_label from a backup. At the beginning, the proposal aimed\n> to achieve this by injecting an invalid value to pg_control file\n> located in the generated backup. However, this (and previous) proposal\n> seems to deviate from that initial objective. It now eliminates the\n> need to be concerned about the pg_control version that is coped into\n> the generated backup. However, if someone removes the backup_label\n> from a backup, the initial concerns could still surface.\n\nYeah, the discussion has moved around quite a bit, but the goal remains \nthe same, to make Postgres error when it does not have the information \nit needs to proceed with recovery. Right now if you delete backup_label \nrecovery appears to complete successfully, silently corrupting the database.\n\nIn the proposal as it stands now there would be no backup_label at all, \nso no danger of removing it.\n\nWe have also gotten a bit sidetracked by our hope to use this proposal \nto address torn reads of pg_control during the backup, at least in HEAD.\n\nRegards,\n-David\n\n\n", "msg_date": "Wed, 18 Oct 2023 10:38:39 -0400", "msg_from": "David Steele <[email protected]>", "msg_from_op": true, "msg_subject": "Re: The danger of deleting backup_label" }, { "msg_contents": "On 10/18/23 08:39, Robert Haas wrote:\n> On Tue, Oct 17, 2023 at 4:17 PM David Steele <[email protected]> wrote:\n>> Given that the above can't be back patched, I'm thinking we don't need\n>> backup_label at all going forward. We just write the values we need for\n>> recovery into pg_control and return *that* from pg_backup_stop() and\n>> tell the user to store it with their backup. We already have \"These\n>> files are vital to the backup working and must be written byte for byte\n>> without modification, which may require opening the file in binary\n>> mode.\" in the documentation so dealing with pg_control should not be a\n>> problem. pg_control also has a CRC so we will know if it gets munged.\n> \n> Yeah, I was thinking about this kind of idea, too. I think it might be\n> a good idea, although I'm not completely certain about that, either.\n\n<snip>\n\n> First, anything that is stored in the backup_label but not the control\n> file has to (a) move into the control file, \n\nI'd rather avoid this.\n\n> (b) be stored someplace\n> else, \n\nI don't think the additional fields *need* to be stored anywhere at all, \nat least not by us. We can provide them as output from pg_backup_stop() \nand the caller can do as they please. None of those fields are part of \nthe restore process.\n\n> or (c) be eliminated as a concept. \n\nI think we should keep them as above since I don't believe they hurt \nanything.\n\n> We're likely to get\n> complaints about (a), especially if the data in question is anything\n> big. Any proposal to do (b) risks undermining the whole theory under\n> which this is a good proposal, namely that removing backup_label gives\n> us one less thing to worry about. So that brings us to (c).\n> Personally, I would lose very little sleep if the LABEL field died and\n> never came back, and I wouldn't miss START TIME and STOP TIME either,\n> but somebody else might well feel differently. I don't think it's\n> trivial to get rid of BACKUP METHOD, as there unfortunately seems to\n> be code that depends on knowing the difference between BACKUP FROM:\n> streamed and BACKUP FROM: pg_rewind. I suspect that BACKUP FROM:\n> primary/standby might have the same issue, but I'm not sure. \n\nBACKUP FROM: primary/standby we can definitely handle, and BACKUP \nMETHOD: pg_rewind just means backupEndRequired is false. That will need \nto be written by pg_rewind (instead of writing backup_label).\n\n> STOP\n> TIMELINE could be a problem too. I think that if somebody could do\n> some rejiggering to eliminate some of the differences between the\n> cases here, that could be really good general cleanup irrespective of\n> what we decide about this proposal, and moving some things in to\n> pg_control is probably reasonable too. For instance, it would seem\n> crazy to me to argue that storing the backup end location in the\n> control file is OK, but storing the backup end TLI there would not be\n> OK. \n\nWe have a place in pg_control for the end TLI (minRecoveryPointTLI). \nThat's only valid for backup from standby since a primary backup can \nnever change timelines.\n\n> But the point here is that there's probably a good deal of careful\n> thinking that would need to be done here about exactly where all of\n> the stuff that currently exists in the backup_label file but not in\n> pg_control needs to end up.\n\nAgreed.\n\n> Second, right now, the stuff that we return at the end of a backup is\n> all text data. With this proposal, it becomes binary data. I entirely\n> realize that people should only be doing these kinds of backups using\n> automated tools that that those automated tools should be perfectly\n> capable of handling binary data without garbling anything. But that's\n> about as realistic as supposing that people won't instantly remove the\n> backup_label file the moment it seems like it will solve some problem,\n> even when the directions clearly state that this should only be done\n> in some other situation that is not the one the user is facing. \n\nWell, we do specify that backup_label and tablespace_map should be saved \nbyte for byte. But We've already seen users mess this up in the past and \nadd \\r characters that made the files unreadable.\n\nIf it can be done wrong, it will be done wrong by somebody.\n\n> It\n> just occurred to me that one thing we could do to improve the user\n> experience here is offer some kind of command-line utility to assist\n> with taking a low-level backup. This could be done even if we don't\n> proceed with this proposal e.g.\n> \n> pg_llbackup -d $CONNTR --backup-label=PATH --tablespace-map=PATH\n> --copy-data-directory=SHELLCOMMAND\n> \n> I don't know for sure how much that would help, but I wonder if it\n> might actually help quite a bit, because right now people do things\n> like use psql in a shell script to try to juggle a database connection\n> and then in some other part of the shell script do the data copying.\n> But it is quite easy to screw up the error handling or the psql\n> session lifetime or something like that, and this would maybe provide\n> a nicer interface. \n\nI think in most cases where this would be useful the user should just be \nusing pg_basebackup. If the backup is trying to use snapshots, then \nbackup_label needs to be stored outside the snapshot and we won't be \nable to easily help.\n\n> There might be other problems, too. This is just what occurs to me off\n> the top of my head. But I think it's an interesting angle to explore\n> further.\n\nThere may definitely be other problems and I'm pretty sure there will \nbe. My feeling is that they will be surmountable, but I won't know for \nsure until I finish the patch.\n\nBut I also feel it's a good idea to explore further. I'll work on the \npatch and should have something to share soon.\n\nRegards,\n-David\n\n\n", "msg_date": "Wed, 18 Oct 2023 19:15:09 -0400", "msg_from": "David Steele <[email protected]>", "msg_from_op": true, "msg_subject": "Re: The danger of deleting backup_label" }, { "msg_contents": "On Wednesday, October 18, 2023, David Steele <[email protected]> wrote:\n\n> On 10/18/23 08:39, Robert Haas wrote:\n>\n>> On Tue, Oct 17, 2023 at 4:17 PM David Steele <[email protected]> wrote:\n>>\n>>> Given that the above can't be back patched, I'm thinking we don't need\n>>> backup_label at all going forward. We just write the values we need for\n>>> recovery into pg_control and return *that* from pg_backup_stop() and\n>>> tell the user to store it with their backup. We already have \"These\n>>> files are vital to the backup working and must be written byte for byte\n>>> without modification, which may require opening the file in binary\n>>> mode.\" in the documentation so dealing with pg_control should not be a\n>>> problem. pg_control also has a CRC so we will know if it gets munged.\n>>>\n>>\n>> Yeah, I was thinking about this kind of idea, too. I think it might be\n>> a good idea, although I'm not completely certain about that, either.\n>>\n>\n> <snip>\n>\n> First, anything that is stored in the backup_label but not the control\n>> file has to (a) move into the control file,\n>>\n>\n> I'd rather avoid this.\n>\n>\nIf we are OK outputting custom pg_control file content from pg_backup_end\nto govern this then I’d probably just include\n“tablespace_map_required=true|false” and “backup_label_required=true” lines\nin it and leave everything else mostly the same, including the name. In\norder for a server with those lines in its pg_control to boot it requires\nthat a signal file be present along with any of the named files where\n*_required is true. Upon successful completion those lines are removed\nfrom pg_control.\n\nIt seem unnecessary to move any and all relevant content into pg_control;\njust a flag to ensure that those files that are needed a present in the\nbackup directory and whatever validation of those files is needed to ensure\nthey provide sufficient data.\n\nIf the user removes those files without a backup the server is not going to\nstart unless they make further obvious attempts to circumvent the design.\nManually editing pg_comtrol being obvious circumventing.\n\nThis would seem to be a compatible change. If you fail to use the\npg_control from pg_backup_stop you don’t get the safeguard but everything\nstill works. Do we really believe we need to break/force-upgrade tooling\nto use this new procedure? Depending on the answer to the torn pg_comtrol\nfile problem which may indeed warrant such breakage.\n\nDavid J.\n\nOn Wednesday, October 18, 2023, David Steele <[email protected]> wrote:On 10/18/23 08:39, Robert Haas wrote:\n\nOn Tue, Oct 17, 2023 at 4:17 PM David Steele <[email protected]> wrote:\n\nGiven that the above can't be back patched, I'm thinking we don't need\nbackup_label at all going forward. We just write the values we need for\nrecovery into pg_control and return *that* from pg_backup_stop() and\ntell the user to store it with their backup. We already have \"These\nfiles are vital to the backup working and must be written byte for byte\nwithout modification, which may require opening the file in binary\nmode.\" in the documentation so dealing with pg_control should not be a\nproblem. pg_control also has a CRC so we will know if it gets munged.\n\n\nYeah, I was thinking about this kind of idea, too. I think it might be\na good idea, although I'm not completely certain about that, either.\n\n\n<snip>\n\n\nFirst, anything that is stored in the backup_label but not the control\nfile has to (a) move into the control file, \n\n\nI'd rather avoid this.\nIf we are OK outputting custom pg_control file content from pg_backup_end to govern this then I’d probably just include “tablespace_map_required=true|false” and “backup_label_required=true” lines in it and leave everything else mostly the same, including the name.  In order for a server with those lines in its pg_control to boot it requires that a signal file be present along with any of the named files where *_required is true.  Upon successful completion those lines are removed from pg_control.It seem unnecessary to move any and all relevant content into pg_control; just a flag to ensure that those files that are needed a present in the backup directory and whatever validation of those files is needed to ensure they provide sufficient data.If the user removes those files without a backup the server is not going to start unless they make further obvious attempts to circumvent the design. Manually editing pg_comtrol being obvious circumventing.This would seem to be a compatible change.  If you fail to use the pg_control from pg_backup_stop you don’t get the safeguard but everything still works.  Do we really believe we need to break/force-upgrade tooling to use this new procedure?  Depending on the answer to the torn pg_comtrol file problem which may indeed warrant such breakage.David J.", "msg_date": "Wed, 18 Oct 2023 16:35:27 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: The danger of deleting backup_label" }, { "msg_contents": "On Wed, Oct 18, 2023 at 7:15 PM David Steele <[email protected]> wrote:\n> > (b) be stored someplace\n> > else,\n>\n> I don't think the additional fields *need* to be stored anywhere at all,\n> at least not by us. We can provide them as output from pg_backup_stop()\n> and the caller can do as they please. None of those fields are part of\n> the restore process.\n\nNot sure which fields we're talking about here. I agree that if\nthey're not really needed, we can return them and the user can keep\nthem or discard them as they wish. But in that case you might also ask\nwhy bother even returning them.\n\n> > pg_llbackup -d $CONNTR --backup-label=PATH --tablespace-map=PATH\n> > --copy-data-directory=SHELLCOMMAND\n> >\n> I think in most cases where this would be useful the user should just be\n> using pg_basebackup. If the backup is trying to use snapshots, then\n> backup_label needs to be stored outside the snapshot and we won't be\n> able to easily help.\n\nRight, the idea of the above was that you would specify paths for the\nbackup label and the tablespace map that were outside of the snapshot\ndirectory in that kind of case. But you couldn't screw up the line\nendings or whatever because pg_llbackup would take care of that aspect\nof it for you.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 19 Oct 2023 10:24:41 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: The danger of deleting backup_label" }, { "msg_contents": "On 10/19/23 10:24, Robert Haas wrote:\n> On Wed, Oct 18, 2023 at 7:15 PM David Steele <[email protected]> wrote:\n>>> (b) be stored someplace\n>>> else,\n>>\n>> I don't think the additional fields *need* to be stored anywhere at all,\n>> at least not by us. We can provide them as output from pg_backup_stop()\n>> and the caller can do as they please. None of those fields are part of\n>> the restore process.\n> \n> Not sure which fields we're talking about here. I agree that if\n> they're not really needed, we can return them and the user can keep\n> them or discard them as they wish. But in that case you might also ask\n> why bother even returning them.\n\nI'm specifically talking about START TIME and LABEL. They are currently \nstored in backup_label but not used for recovery. START TIMELINE is also \nnot used, except as a cross check against START WAL LOCATION.\n\nI'd still like to see most or all of these fields exposed through \npg_backup_stop(). The user can choose to store them or not, but none of \nthem will be required for recovery.\n\n>>> pg_llbackup -d $CONNTR --backup-label=PATH --tablespace-map=PATH\n>>> --copy-data-directory=SHELLCOMMAND\n>>>\n>> I think in most cases where this would be useful the user should just be\n>> using pg_basebackup. If the backup is trying to use snapshots, then\n>> backup_label needs to be stored outside the snapshot and we won't be\n>> able to easily help.\n> \n> Right, the idea of the above was that you would specify paths for the\n> backup label and the tablespace map that were outside of the snapshot\n> directory in that kind of case. But you couldn't screw up the line\n> endings or whatever because pg_llbackup would take care of that aspect\n> of it for you.\n\nWhat I meant here (but said badly) is that in the case of snapshot \nbackups, the backup_label and tablespace_map will likely need to be \nstored somewhere off the server since they can't be part of the \nsnapshot, perhaps in a key store. In that case the backup software would \nstill need to read the files from wherever we stored then and correctly \nhandle them when storing elsewhere. If you were moving the files to say, \nS3, a similar thing needs to happen. In general, I think a locally \nmounted filesystem is very unlikely to be the final destination for \nthese files, and if it is then probably pg_basebackup is your friend.\n\nRegards,\n-David\n\n\n\n", "msg_date": "Thu, 19 Oct 2023 10:43:04 -0400", "msg_from": "David Steele <[email protected]>", "msg_from_op": true, "msg_subject": "Re: The danger of deleting backup_label" }, { "msg_contents": "On Thu, Oct 19, 2023 at 10:43 AM David Steele <[email protected]> wrote:\n> What I meant here (but said badly) is that in the case of snapshot\n> backups, the backup_label and tablespace_map will likely need to be\n> stored somewhere off the server since they can't be part of the\n> snapshot, perhaps in a key store. In that case the backup software would\n> still need to read the files from wherever we stored then and correctly\n> handle them when storing elsewhere. If you were moving the files to say,\n> S3, a similar thing needs to happen. In general, I think a locally\n> mounted filesystem is very unlikely to be the final destination for\n> these files, and if it is then probably pg_basebackup is your friend.\n\nI mean, writing those tiny little files locally and then uploading\nthem should be fine in a case like that. It still reduces the surface\nfor mistakes. And you could also have --backup-label='| whatever' or\nsomething if you wanted. The point is that right now we're asking\npeople to pull this information out of a query result, and that means\npeople are trying to do it by calling out to psql, and that is a GREAT\nway to screw up the escaping or the newlines or whatever. I don't\nthink the mistakes people are making here are being made by people\nusing Perl and DBD::Pg, or Python and psycopg2, or C and libpq.\nThey're being made by people who are trying to shell script their way\nthrough it, which entails using psql, which makes screwing it up a\nbreeze.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 19 Oct 2023 10:56:55 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: The danger of deleting backup_label" }, { "msg_contents": "On Thursday, October 19, 2023, David Steele <[email protected]> wrote:\n\n> On 10/19/23 10:24, Robert Haas wrote:\n>\n>> On Wed, Oct 18, 2023 at 7:15 PM David Steele <[email protected]> wrote:\n>>\n>>>\n>>>> pg_llbackup -d $CONNTR --backup-label=PATH --tablespace-map=PATH\n>>>> --copy-data-directory=SHELLCOMMAND\n>>>>\n>>>> I think in most cases where this would be useful the user should just be\n>>> using pg_basebackup. If the backup is trying to use snapshots, then\n>>> backup_label needs to be stored outside the snapshot and we won't be\n>>> able to easily help.\n>>>\n>>\n>> Right, the idea of the above was that you would specify paths for the\n>> backup label and the tablespace map that were outside of the snapshot\n>> directory in that kind of case. But you couldn't screw up the line\n>> endings or whatever because pg_llbackup would take care of that aspect\n>> of it for you.\n>>\n>\n> What I meant here (but said badly) is that in the case of snapshot\n> backups, the backup_label and tablespace_map will likely need to be stored\n> somewhere off the server since they can't be part of the snapshot, perhaps\n> in a key store. In that case the backup software would still need to read\n> the files from wherever we stored then and correctly handle them when\n> storing elsewhere. If you were moving the files to say, S3, a similar thing\n> needs to happen. In general, I think a locally mounted filesystem is very\n> unlikely to be the final destination for these files, and if it is then\n> probably pg_basebackup is your friend.\n\n\n>\nWe are choosing to not take responsibility if the procedure used by the dba\nis one that takes a full live backup of the data directory as-is and then\nbrings that backup back into production without making any changes to it.\nThat restored copy will be bootable but quite probably corrupted. That is\non them as we have no way to make it non-bootable without risking source\ndatabase being unable to be restarted automatically upon a crash. In\nshort, a snapshot methodology is beyond what we are willing to provide\nprotections for.\n\nWhat I do kinda gather from the above is we should be providing a\npg_baserestore application if we want to give our users fewer reasons to\nwrite their own tooling against the API and/or want to add more complexity\nto pg_basebackup to handle various needs that need corresponding recovery\nneeds that we should implement as code instead of documentation.\n\nDavid J.\n\nOn Thursday, October 19, 2023, David Steele <[email protected]> wrote:On 10/19/23 10:24, Robert Haas wrote:\n\nOn Wed, Oct 18, 2023 at 7:15 PM David Steele <[email protected]> wrote:\n\n\npg_llbackup -d $CONNTR --backup-label=PATH --tablespace-map=PATH\n--copy-data-directory=SHELLCOMMAND\n\n\nI think in most cases where this would be useful the user should just be\nusing pg_basebackup. If the backup is trying to use snapshots, then\nbackup_label needs to be stored outside the snapshot and we won't be\nable to easily help.\n\n\nRight, the idea of the above was that you would specify paths for the\nbackup label and the tablespace map that were outside of the snapshot\ndirectory in that kind of case. But you couldn't screw up the line\nendings or whatever because pg_llbackup would take care of that aspect\nof it for you.\n\n\nWhat I meant here (but said badly) is that in the case of snapshot backups, the backup_label and tablespace_map will likely need to be stored somewhere off the server since they can't be part of the snapshot, perhaps in a key store. In that case the backup software would still need to read the files from wherever we stored then and correctly handle them when storing elsewhere. If you were moving the files to say, S3, a similar thing needs to happen. In general, I think a locally mounted filesystem is very unlikely to be the final destination for these files, and if it is then probably pg_basebackup is your friend.We are choosing to not take responsibility if the procedure used by the dba is one that takes a full live backup of the data directory as-is and then brings that backup back into production without making any changes to it.  That restored copy will be bootable but quite probably corrupted.  That is on them as we have no way to make it non-bootable without risking source database being unable to be restarted automatically upon a crash.  In short, a snapshot methodology is beyond what we are willing to provide protections for.What I do kinda gather from the above is we should be providing a pg_baserestore application if we want to give our users fewer reasons to write their own tooling against the API and/or want to add more complexity to pg_basebackup to handle various needs that need corresponding recovery needs that we should implement as code instead of documentation.David J.", "msg_date": "Thu, 19 Oct 2023 08:17:53 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: The danger of deleting backup_label" }, { "msg_contents": "On 10/19/23 10:56, Robert Haas wrote:\n> On Thu, Oct 19, 2023 at 10:43 AM David Steele <[email protected]> wrote:\n>> What I meant here (but said badly) is that in the case of snapshot\n>> backups, the backup_label and tablespace_map will likely need to be\n>> stored somewhere off the server since they can't be part of the\n>> snapshot, perhaps in a key store. In that case the backup software would\n>> still need to read the files from wherever we stored then and correctly\n>> handle them when storing elsewhere. If you were moving the files to say,\n>> S3, a similar thing needs to happen. In general, I think a locally\n>> mounted filesystem is very unlikely to be the final destination for\n>> these files, and if it is then probably pg_basebackup is your friend.\n> \n> I mean, writing those tiny little files locally and then uploading\n> them should be fine in a case like that. It still reduces the surface\n> for mistakes. And you could also have --backup-label='| whatever' or\n> something if you wanted. The point is that right now we're asking\n> people to pull this information out of a query result, and that means\n> people are trying to do it by calling out to psql, and that is a GREAT\n> way to screw up the escaping or the newlines or whatever. I don't\n> think the mistakes people are making here are being made by people\n> using Perl and DBD::Pg, or Python and psycopg2, or C and libpq.\n> They're being made by people who are trying to shell script their way\n> through it, which entails using psql, which makes screwing it up a\n> breeze.\n\nOK, I see what you mean and I agree. Better documentation might be the \nanswer here, but I doubt that psql is a good tool for starting/stopping \nbackup and not sure we want to encourage it.\n\nRegards,\n-David\n\n\n", "msg_date": "Thu, 19 Oct 2023 12:16:51 -0400", "msg_from": "David Steele <[email protected]>", "msg_from_op": true, "msg_subject": "Re: The danger of deleting backup_label" } ]
[ { "msg_contents": "In c4a1933b4 I pushed a fix for a 4 year old bug in print_path() where\nI'd forgotten to add handling for TidRangePaths while working on\nbb437f995.\n\n4 years is quite a long time for such a bug. Maybe that's because\nnobody uses OPTIMIZER_DEBUG. I certainly don't, and Tom mentions [1]\nhe doesn't either.\n\nIs there anyone out there who uses it?\n\nIf not, it's about 320 lines of uselessness.\n\nDavid\n\n[1] https://www.postgresql.org/message-id/[email protected]\n\n\n", "msg_date": "Fri, 29 Sep 2023 10:20:39 +1300", "msg_from": "David Rowley <[email protected]>", "msg_from_op": true, "msg_subject": "Does anyone ever use OPTIMIZER_DEBUG?" }, { "msg_contents": "David Rowley <[email protected]> writes:\n> In c4a1933b4 I pushed a fix for a 4 year old bug in print_path() where\n> I'd forgotten to add handling for TidRangePaths while working on\n> bb437f995.\n> 4 years is quite a long time for such a bug. Maybe that's because\n> nobody uses OPTIMIZER_DEBUG. I certainly don't, and Tom mentions [1]\n> he doesn't either.\n> Is there anyone out there who uses it?\n> If not, it's about 320 lines of uselessness.\n\nWe could also discuss keeping the \"tracing\" aspect of it, but\nreplacing debug_print_rel with pprint(rel), which'd still allow\nremoval of all the \"DEBUG SUPPORT\" stuff at the bottom of allpaths.c.\nThat's pretty much all of the maintenance-requiring stuff in it.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 28 Sep 2023 17:59:45 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Does anyone ever use OPTIMIZER_DEBUG?" }, { "msg_contents": "On Fri, Sep 29, 2023 at 10:20:39AM +1300, David Rowley wrote:\n> 4 years is quite a long time for such a bug. Maybe that's because\n> nobody uses OPTIMIZER_DEBUG. I certainly don't, and Tom mentions [1]\n> he doesn't either.\n\nI've used it perhaps once in the last 10 years, so removing it is OK\nby me.\n--\nMichael", "msg_date": "Fri, 29 Sep 2023 09:01:23 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Does anyone ever use OPTIMIZER_DEBUG?" }, { "msg_contents": "On Fri, 29 Sept 2023 at 10:59, Tom Lane <[email protected]> wrote:\n> We could also discuss keeping the \"tracing\" aspect of it, but\n> replacing debug_print_rel with pprint(rel), which'd still allow\n> removal of all the \"DEBUG SUPPORT\" stuff at the bottom of allpaths.c.\n> That's pretty much all of the maintenance-requiring stuff in it.\n\nTo assist discussion, I've attached a patch for that.\n\nI likely can't contribute much to that discussion due to being more of\nan \"attach a debugger\" person rather than an \"add printf statements\"\nperson.\n\nTo eliminate a hurdle for anyone who wants to chip in, I've attached\nthe old and new debug output from the following query:\n\nselect * from pg_class where oid = 1234;\n\nOne observation is that the output is quite a bit larger with the\npatched version and does not seem as useful if you wanted\nOPTIMIZER_DEBUG to help you figure out why a given Path was chosen.\n\nDavid", "msg_date": "Tue, 3 Oct 2023 12:29:46 +1300", "msg_from": "David Rowley <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Does anyone ever use OPTIMIZER_DEBUG?" }, { "msg_contents": "On Tue, 3 Oct 2023 at 12:29, David Rowley <[email protected]> wrote:\n>\n> On Fri, 29 Sept 2023 at 10:59, Tom Lane <[email protected]> wrote:\n> > We could also discuss keeping the \"tracing\" aspect of it, but\n> > replacing debug_print_rel with pprint(rel), which'd still allow\n> > removal of all the \"DEBUG SUPPORT\" stuff at the bottom of allpaths.c.\n> > That's pretty much all of the maintenance-requiring stuff in it.\n>\n> To assist discussion, I've attached a patch for that.\n\nIt looks like nobody is objecting to this. I understand that not\neveryone who might object will have read this email thread, so what I\npropose to do here is move along and just commit the patch to swap out\ndebug_print_rel and use pprint instead. If that's done now then there\nare around 10 months where we could realistically revert this again if\nsomeone were to come forward with an objection.\n\nSound ok?\n\nDavid\n\n\n", "msg_date": "Mon, 9 Oct 2023 10:26:38 +1300", "msg_from": "David Rowley <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Does anyone ever use OPTIMIZER_DEBUG?" }, { "msg_contents": "David Rowley <[email protected]> writes:\n> It looks like nobody is objecting to this. I understand that not\n> everyone who might object will have read this email thread, so what I\n> propose to do here is move along and just commit the patch to swap out\n> debug_print_rel and use pprint instead. If that's done now then there\n> are around 10 months where we could realistically revert this again if\n> someone were to come forward with an objection.\n\n> Sound ok?\n\nWFM.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 08 Oct 2023 17:28:25 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Does anyone ever use OPTIMIZER_DEBUG?" }, { "msg_contents": "On Mon, 9 Oct 2023 at 10:28, Tom Lane <[email protected]> wrote:\n>\n> David Rowley <[email protected]> writes:\n> > It looks like nobody is objecting to this. I understand that not\n> > everyone who might object will have read this email thread, so what I\n> > propose to do here is move along and just commit the patch to swap out\n> > debug_print_rel and use pprint instead. If that's done now then there\n> > are around 10 months where we could realistically revert this again if\n> > someone were to come forward with an objection.\n>\n> > Sound ok?\n>\n> WFM.\n\nOk. I've pushed the patch. Let's see if anyone comes forward.\n\nDavid\n\n\n", "msg_date": "Mon, 9 Oct 2023 15:55:25 +1300", "msg_from": "David Rowley <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Does anyone ever use OPTIMIZER_DEBUG?" } ]
[ { "msg_contents": "Hi,\n\nI found a link in the docs that referred to the stats \"views\" section,\ninstead of the more relevant (IMO) stats \"functions\" section.\n\nPSA the 1 line patch -- it explains what I mean better than I can\ndescribe in words...\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Fri, 29 Sep 2023 14:51:10 +1000", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": true, "msg_subject": "[PGDOCS] change function linkend to refer to a more relevant target" }, { "msg_contents": "> On 29 Sep 2023, at 06:51, Peter Smith <[email protected]> wrote:\n\n> I found a link in the docs that referred to the stats \"views\" section,\n> instead of the more relevant (IMO) stats \"functions\" section.\n\nAgreed. This link was added in 2007 (in 7d3b7011b), and back in 7.3/7.4 days\nthe functions were listed in the same section as the views, so the anchor was\nat the time pointing to the right section. In 2012 it was given its own\nsection (in aebe989477) but the link wasn't updated.\n\nThanks for the patch, I'll go ahead with it.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Fri, 29 Sep 2023 10:54:50 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PGDOCS] change function linkend to refer to a more relevant\n target" }, { "msg_contents": "> On 29 Sep 2023, at 10:54, Daniel Gustafsson <[email protected]> wrote:\n> \n>> On 29 Sep 2023, at 06:51, Peter Smith <[email protected]> wrote:\n> \n>> I found a link in the docs that referred to the stats \"views\" section,\n>> instead of the more relevant (IMO) stats \"functions\" section.\n> \n> Agreed. This link was added in 2007 (in 7d3b7011b), and back in 7.3/7.4 days\n> the functions were listed in the same section as the views, so the anchor was\n> at the time pointing to the right section. In 2012 it was given its own\n> section (in aebe989477) but the link wasn't updated.\n> \n> Thanks for the patch, I'll go ahead with it.\n\nApplied to all supported branches, thanks!\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Fri, 29 Sep 2023 16:03:40 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PGDOCS] change function linkend to refer to a more relevant\n target" }, { "msg_contents": "On Sat, Sep 30, 2023 at 12:04 AM Daniel Gustafsson <[email protected]> wrote:\n>\n> > On 29 Sep 2023, at 10:54, Daniel Gustafsson <[email protected]> wrote:\n> >\n> >> On 29 Sep 2023, at 06:51, Peter Smith <[email protected]> wrote:\n> >\n> >> I found a link in the docs that referred to the stats \"views\" section,\n> >> instead of the more relevant (IMO) stats \"functions\" section.\n> >\n> > Agreed. This link was added in 2007 (in 7d3b7011b), and back in 7.3/7.4 days\n> > the functions were listed in the same section as the views, so the anchor was\n> > at the time pointing to the right section. In 2012 it was given its own\n> > section (in aebe989477) but the link wasn't updated.\n> >\n> > Thanks for the patch, I'll go ahead with it.\n>\n> Applied to all supported branches, thanks!\n>\n\nThank you for pushing.\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Tue, 3 Oct 2023 08:27:37 +1100", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PGDOCS] change function linkend to refer to a more relevant\n target" } ]
[ { "msg_contents": "Dear hackers,\n(CC: Peter Eisentraut - committer of the problematic commit)\n\nWhile developing pg_upgrade patch, I found a candidate regression for pg_resetwal.\nIt might be occurred due to 1d863c2504.\n\nIs it really regression, or am I missing something?\n\n# Phenomenon\n\npg_resetwal with relative path cannot be executed. It could be done at 7273945,\nbut could not at 1d863.\n\n\nAt 1d863:\n\n```\n$ pg_resetwal -n data_N1/\npg_resetwal: error: could not read permissions of directory \"data_N1/\": No such file or directory\n```\n\nAt 7273945:\n\n```\n$ pg_resetwal -n data_N1/\nCurrent pg_control values:\n\npg_control version number: 1300\nCatalog version number: 202309251\n...\n```\n\n# Environment\n\nAttached script was executed on RHEL 7.9, gcc was 8.3.1.\nI used meson build system with following options:\n\nmeson setup -Dcassert=true -Ddebug=true -Dc_args=\"-ggdb -O0 -g3 -fno-omit-frame-pointer\"\n\n# My analysis\n\nI found that below part in GetDataDirectoryCreatePerm() returns false, it was a\ncause.\n\n```\n\t/*\n\t * If an error occurs getting the mode then return false. The caller is\n\t * responsible for generating an error, if appropriate, indicating that we\n\t * were unable to access the data directory.\n\t */\n\tif (stat(dataDir, &statBuf) == -1)\n\t\treturn false;\n```\n\nAlso, I found that the value DataDir in main() has relative path.\nBased on that, upcoming stat() may not able to detect the given location because\nthe process has already located inside the directory.\n\n```\n(gdb) break chdir\nBreakpoint 1 at 0x4016f0\n(gdb) run -n data_N1\n\n...\nBreakpoint 1, 0x00007ffff78e1390 in chdir () from /lib64/libc.so.6\nMissing separate debuginfos, use: debuginfo-install glibc-2.17-326.el7_9.x86_64\n(gdb) print DataDir\n$1 = 0x7fffffffe25c \"data_N1\"\n(gdb) frame 1\n#1 0x00000000004028d7 in main (argc=3, argv=0x7fffffffdf58) at ../postgres/src/bin/pg_resetwal/pg_resetwal.c:348\n348 if (chdir(DataDir) < 0)\n(gdb) print DataDir\n$2 = 0x7fffffffe25c \"data_N1\"\n```\n\n# How to fix\n\nOne alternative approach is to call chdir() several times. PSA the patch.\n(I'm not sure the commit should be reverted)\n\n# Appendix - How did I find?\n\nOriginally, I found an issue when attached script was executed.\nIt creates two clusters and executes pg_upgrade, but failed with following output.\n(I also attached whole output, please see result_*.out)\n\n```\nPerforming Consistency Checks\n-----------------------------\nChecking cluster versions ok\npg_resetwal: error: could not read permissions of directory \"data_N1\": No such file or directory\n```\n\nBest Regards,\nHayato Kuroda\nFUJITSU LIMITED", "msg_date": "Fri, 29 Sep 2023 07:39:09 +0000", "msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>", "msg_from_op": true, "msg_subject": "pg_resetwal regression: could not upgrade after 1d863c2504" }, { "msg_contents": "On 29.09.23 09:39, Hayato Kuroda (Fujitsu) wrote:\n> pg_resetwal with relative path cannot be executed. It could be done at 7273945,\n> but could not at 1d863.\n\nOk, I have reverted the offending patch.\n\n\n\n", "msg_date": "Fri, 29 Sep 2023 11:05:15 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_resetwal regression: could not upgrade after 1d863c2504" } ]
[ { "msg_contents": "I have a private repository on GitHub where I park PostgreSQL \ndevelopment work. I also have Cirrus activated on that repository, to \nbuild those branches, using the existing Cirrus configuration.\n\nNow under the new system of limited credits that started in September, \nthis blew through the credits about half-way through the month.\n\nDoes anyone have an idea how to manage this better? Is there maybe a \nway to globally set \"only trigger manually\", or could we make one?\n\nI suppose one way to deal with this is to make a second repository and \nonly activate Cirrus on that one, and push there on demand.\n\nAny ideas?\n\n\n", "msg_date": "Fri, 29 Sep 2023 11:13:24 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "how to manage Cirrus on personal repository" }, { "msg_contents": "> On 29 Sep 2023, at 11:13, Peter Eisentraut <[email protected]> wrote:\n\n> Does anyone have an idea how to manage this better? Is there maybe a way to globally set \"only trigger manually\", or could we make one?\n\nOn my personal repo I only build via doing pull-requests, such that I can\ncontrol when and what is built and rate-limit myself that way. Using the\nGithub CLI client it's quite easy to script \"push-and-build\". Not sure if it's\nbetter, it's just what I got used to from doing personal CI on Github before we\nhad Cirrus support in the tree.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Fri, 29 Sep 2023 12:38:48 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: how to manage Cirrus on personal repository" }, { "msg_contents": "Hi,\n\nOn Fri, 29 Sept 2023 at 13:24, Peter Eisentraut <[email protected]> wrote:\n>\n> I have a private repository on GitHub where I park PostgreSQL\n> development work. I also have Cirrus activated on that repository, to\n> build those branches, using the existing Cirrus configuration.\n>\n> Now under the new system of limited credits that started in September,\n> this blew through the credits about half-way through the month.\n>\n> Does anyone have an idea how to manage this better? Is there maybe a\n> way to globally set \"only trigger manually\", or could we make one?\n\nYou can create another repository / branch with only a .cirrus.yml\nfile which will only have the 'trigger_type: manual' line. Then, you\ncan go to your private repository's settings on Cirrus CI and set\nREPO_CI_CONFIG_GIT_URL=github.com/{user}/{repository}/.cirrus.yml@{branch}\nenvironment variable. This will write contents of the newly created\n.cirrus.yml file to your private repository's .cirrus.yml\nconfiguration while running the CI. You can look at the .cirrus.star\nfile for more information. I just tested this on a public repository\nand it worked but I am not sure if something is different for private\nrepositories. I hope these help.\n\nRegards,\nNazir Bilal Yavuz\nMicrosoft\n\n\n", "msg_date": "Fri, 29 Sep 2023 14:05:39 +0300", "msg_from": "Nazir Bilal Yavuz <[email protected]>", "msg_from_op": false, "msg_subject": "Re: how to manage Cirrus on personal repository" }, { "msg_contents": "On Fri, Sep 29, 2023 at 7:11 AM Daniel Gustafsson <[email protected]> wrote:\n>\n> > On 29 Sep 2023, at 11:13, Peter Eisentraut <[email protected]> wrote:\n>\n> > Does anyone have an idea how to manage this better? Is there maybe a way to globally set \"only trigger manually\", or could we make one?\n>\n> On my personal repo I only build via doing pull-requests, such that I can\n> control when and what is built and rate-limit myself that way. Using the\n> Github CLI client it's quite easy to script \"push-and-build\". Not sure if it's\n> better, it's just what I got used to from doing personal CI on Github before we\n> had Cirrus support in the tree.\n\nIt is not a global configuration solution, but I just add an empty\nci-os-only: tag to my commit messages so it doesn't trigger CI.\nI'm sure this is not what you are looking for, though\n\n- Melanie\n\n\n", "msg_date": "Fri, 29 Sep 2023 09:38:56 -0400", "msg_from": "Melanie Plageman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: how to manage Cirrus on personal repository" }, { "msg_contents": "Hi,\n\nOn 2023-09-29 11:13:24 +0200, Peter Eisentraut wrote:\n> I have a private repository on GitHub where I park PostgreSQL development\n> work. I also have Cirrus activated on that repository, to build those\n> branches, using the existing Cirrus configuration.\n> \n> Now under the new system of limited credits that started in September, this\n> blew through the credits about half-way through the month.\n\n:(\n\n\n> Does anyone have an idea how to manage this better? Is there maybe a way to\n> globally set \"only trigger manually\", or could we make one?\n\nOne way is to configure to run under a custom compute account. If we get\ncredits from google, that might be realistic to provide to many hackers, it's\nnot that expensive to do.\n\nAnother thing is to work on our tests - we apparently waste *tremendous*\namounts of time due to tap tests forking psql over and over.\n\n\nWe definitely can provide a way to configure on a repo level which tests run\nautomatically. I think that would be useful not just for turning things\nmanual, but also the opposite, enabling tests that we don't want everybody to\nrun to automatically run as part of cfbot. E.g. running mingw tests during\ncfbot while keeping it manual for everyone else.\n\nMaybe we should have two environment variables, which can be overwritten set\non a repository level, one to make manual tasks run by default, and one the\nother way?\n\n\n> I suppose one way to deal with this is to make a second repository and only\n> activate Cirrus on that one, and push there on demand.\n\nYou already can control which platforms are run via commit messages,\nfwiw. By adding, e.g.:\nci-os-only: linux, macos\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 29 Sep 2023 07:39:43 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: how to manage Cirrus on personal repository" }, { "msg_contents": "On Sat, Sep 30, 2023 at 3:35 AM Nazir Bilal Yavuz <[email protected]> wrote:\n> On Fri, 29 Sept 2023 at 13:24, Peter Eisentraut <[email protected]> wrote:\n> > I have a private repository on GitHub where I park PostgreSQL\n> > development work. I also have Cirrus activated on that repository, to\n> > build those branches, using the existing Cirrus configuration.\n> >\n> > Now under the new system of limited credits that started in September,\n> > this blew through the credits about half-way through the month.\n\n[Replying to wrong person because I never saw Peter's original email,\nsomething must be jammed in the intertubes...]\n\nIt's annoying, but on the bright side... if you're making it halfway\nthrough the month, I think that means there's a chance you'd have\nenough credit if we can depessimise the known problems with TAP query\nexecution[1], and there are some more obviously stupid things too\n(sleep/poll for replication progress where PostgreSQL should offer an\nevent-based wait-for-replay, running all the tests when only docs\nchanged, running the regression tests fewer times, ...).\n\n[1] https://www.postgresql.org/message-id/flat/CA%2BhUKGJoEO33K%3DZynsH%3DxkiEyfBMZjOoqBK%2BgouBdTGW2-woig%40mail.gmail.com\n\n\n", "msg_date": "Sat, 30 Sep 2023 11:02:15 +1300", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: how to manage Cirrus on personal repository" }, { "msg_contents": "\n\nOn 9/29/23 18:02, Thomas Munro wrote:\n> On Sat, Sep 30, 2023 at 3:35 AM Nazir Bilal Yavuz <[email protected]> wrote:\n>> On Fri, 29 Sept 2023 at 13:24, Peter Eisentraut <[email protected]> wrote:\n>>> I have a private repository on GitHub where I park PostgreSQL\n>>> development work. I also have Cirrus activated on that repository, to\n>>> build those branches, using the existing Cirrus configuration.\n>>>\n>>> Now under the new system of limited credits that started in September,\n>>> this blew through the credits about half-way through the month.\n> \n> [Replying to wrong person because I never saw Peter's original email,\n> something must be jammed in the intertubes...]\n> \n> It's annoying, but on the bright side... if you're making it halfway\n> through the month, I think that means there's a chance you'd have\n> enough credit if we can depessimise the known problems with TAP query\n> execution[1], and there are some more obviously stupid things too\n> (sleep/poll for replication progress where PostgreSQL should offer an\n> event-based wait-for-replay, running all the tests when only docs\n> changed, running the regression tests fewer times, ...).\n\nI also have Cirrus setup for my cloned Postgres repo and it is annoying \nthat it will run CI whenever I push up new commits that I pulled from \ngit.p.o.\n\nMy strategy for this (on other projects) is to require branches that \nneed CI to end in -ci. Since I use multiple CI services, I further allow \n-cic (Cirrus), -cig (Github Actions), etc. PRs are also included. That \nway, nothing runs through CI unless I want it to.\n\nHere's an example of how this works on Cirrus:\n\n# Build the branch if it is a pull request, or ends in -ci/-cic (-cic \ntargets only Cirrus CI)\nonly_if: $CIRRUS_PR != '' || $CIRRUS_BRANCH =~ '.*-ci$' || \n$CIRRUS_BRANCH =~ '.*-cic$'\n\nRegards,\n-David\n\n\n", "msg_date": "Fri, 29 Sep 2023 18:56:10 -0400", "msg_from": "David Steele <[email protected]>", "msg_from_op": false, "msg_subject": "Re: how to manage Cirrus on personal repository" } ]
[ { "msg_contents": "Hello,\n\n \n\nThere is a problem on receiving large tsvector from binary format with\ngetting error \"invalid tsvector: maximum total lexeme length exceeded\".\nRequired simple steps to reproduce problem:\n\n- Make table with one column of type 'tsvector'\n\n- Add row with large tsvector (900 Kb will be enougth)\n\n- Save table to file with \"copy to with binary\"\n\n- Restore table from file with \"copy from with binary\"\n\n \n\nAt last step we can't restore a legal tsvector because there is error of\nrequired memory size calculation for tsvector during reading of its\nbinary.\n\n \n\nIt seems function \"tsvectorrecv\" can be improved:\n\nv1-0001-Fix-receiving-of-big-tsvector-from-binary - patch fixes problem of\nrequired size calculation for tsvector where wrong type is used (WordEntry\ninstead of WordEntryPos). Size of wrong type is bigger than type of needed\ntype therefore total size turns out larger than needed. So next test of\nthat size fails during maximum size check.\n\nv1-0002-Replace-magic-one-in-tsvector-binary-receiving - patch removes\nmagic ones from code. Those ones are used to add \"sizeof(uint16)\" to\nrequired size where number of WordEntryPos'es will be stored. Now it works\nonly because size of WordEntryPos is equal to size of uint16. But it is\nnot obviously during code reading and causes question \"what for this magic\none is?\"\n\nv1-0003-Optimize-memory-allocation-during-tsvector-binary - patch makes\nmemory allocation more accuracy and rarely. It seems memory to store\ntsvector's data is allocated redundant and reallocations happen more often\nthan necessary.\n\n \n\nBest regards,\n\nDenis Erokhin\n\nJatoba", "msg_date": "Fri, 29 Sep 2023 13:36:27 +0000 (UTC)", "msg_from": "=?koi8-r?B?5dLPyMnOIOTFzsnTIPfMwcTJzcnSz9fJ3g==?= <[email protected]>", "msg_from_op": true, "msg_subject": "Fix receiving large legal tsvector from binary format" }, { "msg_contents": "=?koi8-r?B?5dLPyMnOIOTFzsnTIPfMwcTJzcnSz9fJ3g==?= <[email protected]> writes:\n> There is a problem on receiving large tsvector from binary format with\n> getting error \"invalid tsvector: maximum total lexeme length exceeded\".\n\nGood catch! Even without an actual failure, we'd be wasting space\non-disk anytime we stored a tsvector received through binary input.\n\nI pushed your 0001 and 0002, but I don't really agree that 0003\nis an improvement. It looks to me like it'll result in one\nrepalloc per lexeme, instead of the O(log N) behavior we had before.\nIt's not that important to keep the palloc chunk size small here,\ngiven that we don't allow tsvectors to get anywhere near 1Gb anyway.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 01 Oct 2023 13:24:01 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fix receiving large legal tsvector from binary format" } ]
[ { "msg_contents": "Hello,\n\nWhile working on my talk for PGConf.NYC next week I came across this\nbullet in the docs on heap only tuples:\n\n> Old versions of updated rows can be completely removed during normal\n> operation, including SELECTs, instead of requiring periodic vacuum\n> operations. (This is possible because indexes do not reference their page\n> item identifiers.)\n\nBut when a HOT update happens the entry in an (logically unchanged)\nindex still points to the original heap tid, and that line item is\nupdated with a pointer to the new line pointer in the same page.\n\nAssuming I'm understanding this correctly, attached is a patch\ncorrecting the description.\n\nThanks,\nJames Coleman", "msg_date": "Fri, 29 Sep 2023 13:39:07 -0400", "msg_from": "James Coleman <[email protected]>", "msg_from_op": true, "msg_subject": "[DOCS] HOT - correct claim about indexes not referencing old line\n pointers" }, { "msg_contents": "On Fri, Sep 29, 2023 at 10:45 AM James Coleman <[email protected]> wrote:\n\n> Hello,\n>\n> While working on my talk for PGConf.NYC next week I came across this\n> bullet in the docs on heap only tuples:\n>\n> > Old versions of updated rows can be completely removed during normal\n> > operation, including SELECTs, instead of requiring periodic vacuum\n> > operations. (This is possible because indexes do not reference their page\n> > item identifiers.)\n>\n> But when a HOT update happens the entry in an (logically unchanged)\n> index still points to the original heap tid, and that line item is\n> updated with a pointer to the new line pointer in the same page.\n>\n> Assuming I'm understanding this correctly, attached is a patch\n> correcting the description.\n>\n>\nI think we want to somehow distinguish between the old tuple that is the\nroot of the chain and old tuples that are not. This comment refers to\npruning the chain and removing intermediate links in the chain that are no\nlonger relevant because the root has been updated to point to the live\ntuple. In README.HOT, tuple 2 in the example after 1 points to 3.\n\nhttps://github.com/postgres/postgres/blob/ec99d6e9c87a8ff0f4805cc0c6c12cbb89c48e06/src/backend/access/heap/README.HOT#L109\n\nDavid J.\n\nOn Fri, Sep 29, 2023 at 10:45 AM James Coleman <[email protected]> wrote:Hello,\n\nWhile working on my talk for PGConf.NYC next week I came across this\nbullet in the docs on heap only tuples:\n\n> Old versions of updated rows can be completely removed during normal\n> operation, including SELECTs, instead of requiring periodic vacuum\n> operations. (This is possible because indexes do not reference their page\n> item identifiers.)\n\nBut when a HOT update happens the entry in an (logically unchanged)\nindex still points to the original heap tid, and that line item is\nupdated with a pointer to the new line pointer in the same page.\n\nAssuming I'm understanding this correctly, attached is a patch\ncorrecting the description.I think we want to somehow distinguish between the old tuple that is the root of the chain and old tuples that are not.  This comment refers to pruning the chain and removing intermediate links in the chain that are no longer relevant because the root has been updated to point to the live tuple.  In README.HOT, tuple 2 in the example after 1 points to 3.https://github.com/postgres/postgres/blob/ec99d6e9c87a8ff0f4805cc0c6c12cbb89c48e06/src/backend/access/heap/README.HOT#L109David J.", "msg_date": "Fri, 29 Sep 2023 11:03:30 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [DOCS] HOT - correct claim about indexes not referencing old line\n pointers" }, { "msg_contents": "On Fri, Sep 29, 2023 at 10:39 AM James Coleman <[email protected]> wrote:\n> > Old versions of updated rows can be completely removed during normal\n> > operation, including SELECTs, instead of requiring periodic vacuum\n> > operations. (This is possible because indexes do not reference their page\n> > item identifiers.)\n>\n> But when a HOT update happens the entry in an (logically unchanged)\n> index still points to the original heap tid, and that line item is\n> updated with a pointer to the new line pointer in the same page.\n\nIt's true that the original root heap tuple (which is never a\nheap-only tuple) must have its line pointer changed from LP_NORMAL to\nLP_REDIRECT the first time pruning takes place that affects its HOT\nchain. But I don't think that referring to the root item as something\nalong the lines of \"an obsolescent/old tuple's line pointer\" is\nparticularly helpful. Changing from LP_NORMAL to LP_REDIRECT during\nthe initial prune isn't terribly different from changing an existing\nLP_REDIRECT's redirect-TID link during every subsequent prune. In both\ncases you're just updating where the first heap-only tuple begins.\n\nThe really important point is that the TID (which maps to the root\nitem of the HOT chain) has a decent chance of being stable over time,\nno matter how many versions the HOT chain churns through. And that\nthat can break (or at least weaken) our dependence on VACUUM with some\nworkloads.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Fri, 29 Sep 2023 11:04:27 -0700", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [DOCS] HOT - correct claim about indexes not referencing old line\n pointers" }, { "msg_contents": "On Fri, Sep 29, 2023 at 11:04 AM Peter Geoghegan <[email protected]> wrote:\n> > But when a HOT update happens the entry in an (logically unchanged)\n> > index still points to the original heap tid, and that line item is\n> > updated with a pointer to the new line pointer in the same page.\n>\n> It's true that the original root heap tuple (which is never a\n> heap-only tuple) must have its line pointer changed from LP_NORMAL to\n> LP_REDIRECT the first time pruning takes place that affects its HOT\n> chain. But I don't think that referring to the root item as something\n> along the lines of \"an obsolescent/old tuple's line pointer\" is\n> particularly helpful.\n\nTo be clear, the existing wording seems correct to me. Even heap-only\ntuples require line pointers. These line pointers are strictly\nguaranteed to never be *directly* referenced from indexes (if they are\nthen we'll actually detect it and report data corruption on recent\nversions). The name \"heap-only tuple\" quite literally means that the\ntuple and its line pointer are only represented in the heap, and never\nin indexes.\n\nThere is a related rule about what is allowed to happen to any\nheap-only tuple's line pointer: it can only change from LP_NORMAL to\nLP_UNUSED (never LP_DEAD or LP_REDIRECT). You can think of a heap-only\ntuple as \"skipping the LP_DEAD step\" that regular heap tuples must go\nthrough. We don't need LP_DEAD tombstones precisely because there\ncannot possibly be any references to heap-only tuples in indexes -- so\nwe can't break index scans by going straight from LP_NORMAL to\nLP_UNUSED for heap-only tuple line pointers.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Fri, 29 Sep 2023 11:39:14 -0700", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [DOCS] HOT - correct claim about indexes not referencing old line\n pointers" }, { "msg_contents": "On Fri, Sep 29, 2023 at 2:39 PM Peter Geoghegan <[email protected]> wrote:\n>\n> On Fri, Sep 29, 2023 at 11:04 AM Peter Geoghegan <[email protected]> wrote:\n> > > But when a HOT update happens the entry in an (logically unchanged)\n> > > index still points to the original heap tid, and that line item is\n> > > updated with a pointer to the new line pointer in the same page.\n> >\n> > It's true that the original root heap tuple (which is never a\n> > heap-only tuple) must have its line pointer changed from LP_NORMAL to\n> > LP_REDIRECT the first time pruning takes place that affects its HOT\n> > chain. But I don't think that referring to the root item as something\n> > along the lines of \"an obsolescent/old tuple's line pointer\" is\n> > particularly helpful.\n>\n> To be clear, the existing wording seems correct to me. Even heap-only\n> tuples require line pointers. These line pointers are strictly\n> guaranteed to never be *directly* referenced from indexes (if they are\n> then we'll actually detect it and report data corruption on recent\n> versions). The name \"heap-only tuple\" quite literally means that the\n> tuple and its line pointer are only represented in the heap, and never\n> in indexes.\n\nHmm, to my reading the issue is that \"old versions\" doesn't say\nanything about \"old HOT versions; it seems to be describing what\nhappens generally when a heap-only tuple is written -- which would\ninclude the first time a heap-only tuple is written. And when it's the\nfirst heap-only tuple the \"old version\" would be the original version,\nwhich would not be a heap-only tuple.\n\nI can work up a tweaked version of the patch that shows there are two\npaths here (original tuple is being updated versus an intermediate\nheap-only tuple is being updated); would you be willing to consider\nthat?\n\nThanks,\nJames Coleman\n\n\n", "msg_date": "Fri, 29 Sep 2023 14:45:35 -0400", "msg_from": "James Coleman <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [DOCS] HOT - correct claim about indexes not referencing old line\n pointers" }, { "msg_contents": "On Fri, Sep 29, 2023 at 11:45 AM James Coleman <[email protected]>\nwrote:my reading the issue is that \"old versions\" doesn't say\n> anything about \"old HOT versions; it seems to be describing what\n> happens generally when a heap-only tuple is written -- which would\n> include the first time a heap-only tuple is written.\n\nI think that it's talking about what happens during opportunistic\npruning, in particular what happens to HOT chains. (Though pruning\ndoes almost the same amount of useful work with non-heap-only tuples,\nso it's a bit unfortunate that the name \"HOT pruning\" seems to have\nstuck.)\n\n> And when it's the\n> first heap-only tuple the \"old version\" would be the original version,\n> which would not be a heap-only tuple.\n\nThe docs say \"Old versions of updated rows can be completely removed\nduring normal operation\". Opportunistic pruning removes dead heap-only\ntuples completely, and makes their line pointers LP_UNUSED right away.\nBut it can also entail removing storage for the original root item\nheap tuple, and making its line pointer LP_REDIRECT right away (not\nLP_DEAD or LP_UNUSED) at most once in the life of each HOT chain. So\nyeah, we're not quite limited to removing storage for heap-only tuples\nwhen pruning a HOT chain. Does that distinction really matter, though?\n\nThere isn't even any special case handling for it in pruneheap.c (we\nonly have assertions that make sure that we're performing \"valid\ntransitions\" for each tuple/line pointer). That is, we don't really\ncare about the difference between calling ItemIdSetRedirect() for an\nLP_NORMAL item versus an existing LP_REDIRECT item at the code level\n(we just do it and let PageRepairFragmentation() clean things up).\n\n--\nPeter Geoghegan\n\n\n", "msg_date": "Fri, 29 Sep 2023 13:05:39 -0700", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [DOCS] HOT - correct claim about indexes not referencing old line\n pointers" }, { "msg_contents": "On Fri, Sep 29, 2023 at 4:06 PM Peter Geoghegan <[email protected]> wrote:\n>\n> On Fri, Sep 29, 2023 at 11:45 AM James Coleman <[email protected]>\n> wrote:my reading the issue is that \"old versions\" doesn't say\n> > anything about \"old HOT versions; it seems to be describing what\n> > happens generally when a heap-only tuple is written -- which would\n> > include the first time a heap-only tuple is written.\n>\n> I think that it's talking about what happens during opportunistic\n> pruning, in particular what happens to HOT chains. (Though pruning\n> does almost the same amount of useful work with non-heap-only tuples,\n> so it's a bit unfortunate that the name \"HOT pruning\" seems to have\n> stuck.)\n\nThat's very likely what the intention was. I read it again, and the\nsame confusion still sticks out to me: it doesn't say anything\nexplicitly about opportunistic pruning (I'm not sure if that term is\n\"public docs\" level, so that's probably fine), and it doesn't scope\nthe claim to intermediate tuples in a HOT chain -- indeed the context\nis the HOT feature generally.\n\nThis is why I discovered it: it says \"indexes do not reference their\npage item identifiers\", which is manifestly not true when talking\nabout the root item, and in fact would defeat the whole purpose of HOT\n(at least in a old-to-new chain like Postgres uses).\n\nAssuming people can be convinced this is confusing (I realize you may\nnot be yet), I see two basic options:\n\n1. Update this to discuss both intermediate tuples and root items\nseparately. This could entail either one larger paragraph or splitting\nsuch that instead of \"two optimizations\" we say \"three\" optimizations.\n\n2. Change \"old versions\" to something like \"intermediate versions in a\nseries of updates\".\n\nI prefer some form of (1) since it more fully describes the behavior,\nbut we could tweak further for concision.\n\n> > And when it's the\n> > first heap-only tuple the \"old version\" would be the original version,\n> > which would not be a heap-only tuple.\n>\n> The docs say \"Old versions of updated rows can be completely removed\n> during normal operation\". Opportunistic pruning removes dead heap-only\n> tuples completely, and makes their line pointers LP_UNUSED right away.\n> But it can also entail removing storage for the original root item\n> heap tuple, and making its line pointer LP_REDIRECT right away (not\n> LP_DEAD or LP_UNUSED) at most once in the life of each HOT chain. So\n> yeah, we're not quite limited to removing storage for heap-only tuples\n> when pruning a HOT chain. Does that distinction really matter, though?\n\nGiven pageinspect can show you the original tuple still exists and\nthat the index still references it...I think it does.\n\nI suppose very few people go checking that out, of course, but I'd\nlike to be precise.\n\nRegards,\nJames Coleman\n\n\n", "msg_date": "Fri, 29 Sep 2023 21:27:11 -0400", "msg_from": "James Coleman <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [DOCS] HOT - correct claim about indexes not referencing old line\n pointers" }, { "msg_contents": "On Fri, Sep 29, 2023 at 6:27 PM James Coleman <[email protected]> wrote:\n> On Fri, Sep 29, 2023 at 4:06 PM Peter Geoghegan <[email protected]> wrote:\n> > I think that it's talking about what happens during opportunistic\n> > pruning, in particular what happens to HOT chains. (Though pruning\n> > does almost the same amount of useful work with non-heap-only tuples,\n> > so it's a bit unfortunate that the name \"HOT pruning\" seems to have\n> > stuck.)\n>\n> That's very likely what the intention was. I read it again, and the\n> same confusion still sticks out to me: it doesn't say anything\n> explicitly about opportunistic pruning (I'm not sure if that term is\n> \"public docs\" level, so that's probably fine), and it doesn't scope\n> the claim to intermediate tuples in a HOT chain -- indeed the context\n> is the HOT feature generally.\n\nIt doesn't mention opportunistic pruning by name, but it does say:\n\n\"Old versions of updated rows can be completely removed during normal\noperation, including SELECTs, instead of requiring periodic vacuum\noperations.\"\n\nThere is a strong association between HOT and pruning (particularly\nopportunistic pruning) in the minds of some hackers (and perhaps some\nusers), because both features appeared together in 8.3, and both are\nclosely related at the implementation level. It's nevertheless not\nquite accurate to say that HOT \"provides two optimizations\" -- since\npruning (the second of the two bullet points) isn't fundamentally\ndifferent for pages that don't have any HOT chains. Not at the level\nof the heap pages, at least (indexes are another matter).\n\nExplaining these sorts of distinctions through prose is very\ndifficult. You really need diagrams for something like this IMV.\nWithout that, the only way to make all of this less confusing is to\navoid all discussion of pruning...but then you can't really make the\npoint about breaking the dependency on VACUUM, which is a relatively\nimportant point -- one with real practical relevance.\n\n> This is why I discovered it: it says \"indexes do not reference their\n> page item identifiers\", which is manifestly not true when talking\n> about the root item, and in fact would defeat the whole purpose of HOT\n> (at least in a old-to-new chain like Postgres uses).\n\nYeah, but...that's not what was intended. Obviously, the index hasn't\nchanged, and we expect index scans to continue to give correct\nanswers. So it is pretty strongly implied that it continues to point\nto something valid.\n\n> Assuming people can be convinced this is confusing (I realize you may\n> not be yet), I see two basic options:\n>\n> 1. Update this to discuss both intermediate tuples and root items\n> separately. This could entail either one larger paragraph or splitting\n> such that instead of \"two optimizations\" we say \"three\" optimizations.\n>\n> 2. Change \"old versions\" to something like \"intermediate versions in a\n> series of updates\".\n>\n> I prefer some form of (1) since it more fully describes the behavior,\n> but we could tweak further for concision.\n\nBruce authored these docs. I was mostly just glad to have anything at\nall about HOT in the user-facing docs, quite honestly.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Fri, 29 Sep 2023 19:02:24 -0700", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [DOCS] HOT - correct claim about indexes not referencing old line\n pointers" }, { "msg_contents": "On Sat, Sep 30, 2023 at 1:05 AM Peter Geoghegan <[email protected]> wrote:\n> > This is why I discovered it: it says \"indexes do not reference their\n> > page item identifiers\", which is manifestly not true when talking\n> > about the root item, and in fact would defeat the whole purpose of HOT\n> > (at least in a old-to-new chain like Postgres uses).\n>\n> Yeah, but...that's not what was intended. Obviously, the index hasn't\n> changed, and we expect index scans to continue to give correct\n> answers. So it is pretty strongly implied that it continues to point\n> to something valid.\n\nI took a look at this. I agree with James that the current wording is\njust plain wrong.\n\n periodic vacuum operations. (This is possible because indexes\n do not reference their <link linkend=\"storage-page-layout\">page\n item identifiers</link>.)\n\nHere, the antecedent of \"their\" is \"old versions of updated rows\". It\nis slightly unclear whether we should interpret this to mean (a) the\ntuples together with the line pointers which point to them or (b) only\nthe tuple data to which the line pointers point. If (a), then it's\nwrong because we can't actually get rid of the root line pointer;\nrather, we have to change it to a redirect. If (b), then it's wrong\nbecause heap_page_prune() removes dead tuples in this sense whether\nHOT is involved or not. I can see no interpretation under which this\nstatement is true as written.\n\nI reviewed James's proposed alternative:\n\n+ periodic vacuum operations. However because indexes reference the old\n+ version's <link linkend=\"storage-page-layout\">page item identifiers</link>\n+ the line pointer must remain in place. Such a line pointer has its\n+ <literal>LP_REDIRECT</literal> bit set and its offset updated to the\n+ <link linkend=\"storage-page-layout\">page item identifiers</link> of\n+ the updated row.\n\nI don't think that's really right either. That's true for the root\nline pointer, but the phrasing seems to be referring to old versions\ngenerally, which would seem to include not only the root, for which\nthis is correct, and also all subsequent now-dead row versions, for\nwhich it is wrong.\n\nHere is my attempt:\n\nWhen a row is updated multiple times, row versions other than the\noldest and the newest can be completely removed during normal\noperation, including <command>SELECT</command>s, instead of requiring\nperiodic vacuum operations. (Indexes always refer to the <link\nlinkend=\"storage-page-layout\">page item identifiers</link> of the\noriginal row version. The tuple data associated with that row version\nis removed, and its item identifier is converted to a redirect that\npoints to the oldest version that may still be visible to some\nconcurrent transaction. Intermediate row versions that are no longer\nvisible to anyone are completely removed, and the associated page item\nidentifiers are made available for reuse.)\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 2 Oct 2023 14:55:15 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [DOCS] HOT - correct claim about indexes not referencing old line\n pointers" }, { "msg_contents": "On Mon, Oct 2, 2023 at 2:55 PM Robert Haas <[email protected]> wrote:\n>\n> On Sat, Sep 30, 2023 at 1:05 AM Peter Geoghegan <[email protected]> wrote:\n> > > This is why I discovered it: it says \"indexes do not reference their\n> > > page item identifiers\", which is manifestly not true when talking\n> > > about the root item, and in fact would defeat the whole purpose of HOT\n> > > (at least in a old-to-new chain like Postgres uses).\n> >\n> > Yeah, but...that's not what was intended. Obviously, the index hasn't\n> > changed, and we expect index scans to continue to give correct\n> > answers. So it is pretty strongly implied that it continues to point\n> > to something valid.\n>\n> I took a look at this. I agree with James that the current wording is\n> just plain wrong.\n>\n> periodic vacuum operations. (This is possible because indexes\n> do not reference their <link linkend=\"storage-page-layout\">page\n> item identifiers</link>.)\n>\n> Here, the antecedent of \"their\" is \"old versions of updated rows\". It\n> is slightly unclear whether we should interpret this to mean (a) the\n> tuples together with the line pointers which point to them or (b) only\n> the tuple data to which the line pointers point. If (a), then it's\n> wrong because we can't actually get rid of the root line pointer;\n> rather, we have to change it to a redirect. If (b), then it's wrong\n> because heap_page_prune() removes dead tuples in this sense whether\n> HOT is involved or not. I can see no interpretation under which this\n> statement is true as written.\n>\n> I reviewed James's proposed alternative:\n>\n> + periodic vacuum operations. However because indexes reference the old\n> + version's <link linkend=\"storage-page-layout\">page item identifiers</link>\n> + the line pointer must remain in place. Such a line pointer has its\n> + <literal>LP_REDIRECT</literal> bit set and its offset updated to the\n> + <link linkend=\"storage-page-layout\">page item identifiers</link> of\n> + the updated row.\n>\n> I don't think that's really right either. That's true for the root\n> line pointer, but the phrasing seems to be referring to old versions\n> generally, which would seem to include not only the root, for which\n> this is correct, and also all subsequent now-dead row versions, for\n> which it is wrong.\n>\n> Here is my attempt:\n>\n> When a row is updated multiple times, row versions other than the\n> oldest and the newest can be completely removed during normal\n> operation, including <command>SELECT</command>s, instead of requiring\n> periodic vacuum operations. (Indexes always refer to the <link\n> linkend=\"storage-page-layout\">page item identifiers</link> of the\n> original row version. The tuple data associated with that row version\n> is removed, and its item identifier is converted to a redirect that\n> points to the oldest version that may still be visible to some\n> concurrent transaction. Intermediate row versions that are no longer\n> visible to anyone are completely removed, and the associated page item\n> identifiers are made available for reuse.)\n\nHi Robert,\n\nThanks for reviewing!\n\nI like your changes. Reading through this several times, and noting\nPeter's comments about pruning being more than just HOT, I'm thinking\nthat rather than a simple fixup for this one paragraph what we\nactually want is to split out the concept of page pruning into its own\nsection of the storage docs. Attached is a patch that does that,\nincorporating much of your language about LP_REDIRECT, along with\nLP_DEAD so that readers know this affects more than just heap-only\ntuple workloads.\n\nRegards,\nJames", "msg_date": "Tue, 3 Oct 2023 15:35:15 -0400", "msg_from": "James Coleman <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [DOCS] HOT - correct claim about indexes not referencing old line\n pointers" }, { "msg_contents": "On Tue, Oct 3, 2023 at 3:35 PM James Coleman <[email protected]> wrote:\n> I like your changes. Reading through this several times, and noting\n> Peter's comments about pruning being more than just HOT, I'm thinking\n> that rather than a simple fixup for this one paragraph what we\n> actually want is to split out the concept of page pruning into its own\n> section of the storage docs. Attached is a patch that does that,\n> incorporating much of your language about LP_REDIRECT, along with\n> LP_DEAD so that readers know this affects more than just heap-only\n> tuple workloads.\n\nI considered this kind of approach, but I don't think it's better. I\nthink the point of this documentation is to give people a general idea\nof how HOT works, not all the technical details. If we start getting\ninto the nitty gritty, we're going to have to explain a lot more\nstuff, which is going to detract from the actual purpose of this\ndocumentation. For example, your patch talks about LP_REDIRECT and\nLP_DEAD, which are not referenced in any other part of the\ndocumentation. It also uses the term line pointer instead of page item\nidentifier without explaining that those are basically the same thing.\nObviously those kinds of things can be fixed, but in my opinion, your\nversion doesn't really add much information that is likely to be\nuseful to a reader of this section, while at the same time it does add\nsome things that might be confusing. If we wanted to have a more\ntechnical discussion of all of this somewhere in the documentation, we\ncould, but that would be quite a bit more work to write and review,\nand it would probably duplicate some of what we've already got in\nREADME.HOT.\n\nI suggest that we should be looking for a patch that tries to correct\nthe wrong stuff in the present wording while adding the minimum\npossible amount of text.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 4 Oct 2023 09:17:48 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [DOCS] HOT - correct claim about indexes not referencing old line\n pointers" }, { "msg_contents": "On Wed, Oct 4, 2023 at 9:18 AM Robert Haas <[email protected]> wrote:\n>\n> On Tue, Oct 3, 2023 at 3:35 PM James Coleman <[email protected]> wrote:\n> > I like your changes. Reading through this several times, and noting\n> > Peter's comments about pruning being more than just HOT, I'm thinking\n> > that rather than a simple fixup for this one paragraph what we\n> > actually want is to split out the concept of page pruning into its own\n> > section of the storage docs. Attached is a patch that does that,\n> > incorporating much of your language about LP_REDIRECT, along with\n> > LP_DEAD so that readers know this affects more than just heap-only\n> > tuple workloads.\n>\n> I considered this kind of approach, but I don't think it's better. I\n> think the point of this documentation is to give people a general idea\n> of how HOT works, not all the technical details. If we start getting\n> into the nitty gritty, we're going to have to explain a lot more\n> stuff, which is going to detract from the actual purpose of this\n> documentation. For example, your patch talks about LP_REDIRECT and\n> LP_DEAD, which are not referenced in any other part of the\n> documentation. It also uses the term line pointer instead of page item\n> identifier without explaining that those are basically the same thing.\n> Obviously those kinds of things can be fixed, but in my opinion, your\n> version doesn't really add much information that is likely to be\n> useful to a reader of this section, while at the same time it does add\n> some things that might be confusing. If we wanted to have a more\n> technical discussion of all of this somewhere in the documentation, we\n> could, but that would be quite a bit more work to write and review,\n> and it would probably duplicate some of what we've already got in\n> README.HOT.\n>\n> I suggest that we should be looking for a patch that tries to correct\n> the wrong stuff in the present wording while adding the minimum\n> possible amount of text.\n\nThere's one primary thing I think this approach adds (I don't mean the\npatch I proposed is the only way to address this concern): the fact\nthat pages get cleaned up outside of the HOT optimization. We could\nadd a short paragraph about that on the HOT page, but that seems a bit\nconfusing.\n\nAre you thinking we should simply elide the fact that there is pruning\nthat happens outside of HOT? Or add that information onto the HOT\npage, even though it doesn't directly fit?\n\nRegards,\nJames\n\n\n", "msg_date": "Wed, 4 Oct 2023 09:35:55 -0400", "msg_from": "James Coleman <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [DOCS] HOT - correct claim about indexes not referencing old line\n pointers" }, { "msg_contents": "On Wed, Oct 4, 2023 at 9:36 AM James Coleman <[email protected]> wrote:\n> Are you thinking we should simply elide the fact that there is pruning\n> that happens outside of HOT? Or add that information onto the HOT\n> page, even though it doesn't directly fit?\n\nI think we should elide it. Maybe with a much larger rewrite there\nwould be a good place to include that information, but with the\ncurrent structure, the page is about why HOT is good, and talking\nabout pruning that can happen apart from HOT doesn't advance that\nmessage.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 4 Oct 2023 09:42:43 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [DOCS] HOT - correct claim about indexes not referencing old line\n pointers" }, { "msg_contents": "On Wed, Oct 4, 2023 at 9:42 AM Robert Haas <[email protected]> wrote:\n>\n> On Wed, Oct 4, 2023 at 9:36 AM James Coleman <[email protected]> wrote:\n> > Are you thinking we should simply elide the fact that there is pruning\n> > that happens outside of HOT? Or add that information onto the HOT\n> > page, even though it doesn't directly fit?\n>\n> I think we should elide it. Maybe with a much larger rewrite there\n> would be a good place to include that information, but with the\n> current structure, the page is about why HOT is good, and talking\n> about pruning that can happen apart from HOT doesn't advance that\n> message.\n\nAll right, attached is a v3 which attempts to fix the wrong\ninformation with an economy of words. I may at some point submit a\nseparate patch that adds a broader pruning section, but this at least\nbrings the docs inline with reality insofar as they address it.\n\nJames", "msg_date": "Wed, 4 Oct 2023 21:12:33 -0400", "msg_from": "James Coleman <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [DOCS] HOT - correct claim about indexes not referencing old line\n pointers" }, { "msg_contents": "On Tue, Dec 5, 2023 at 10:51 AM James Coleman <[email protected]> wrote:\n>\n> Hello,\n>\n> While working on my talk for PGConf.NYC next week I came across this\n> bullet in the docs on heap only tuples:\n>\n> > Old versions of updated rows can be completely removed during normal\n> > operation, including SELECTs, instead of requiring periodic vacuum\n> > operations. (This is possible because indexes do not reference their page\n> > item identifiers.)\n>\n> But when a HOT update happens the entry in an (logically unchanged)\n> index still points to the original heap tid, and that line item is\n> updated with a pointer to the new line pointer in the same page.\n>\n> Assuming I'm understanding this correctly, attached is a patch\n> correcting the description.\n>\n> I have Reviewed the patch. Patch applies neatly without any issues. Documentation build was successful and there was no Spell-check issue also. I did not find any issues. The patch looks >good to me.\n>\n>Thanks and Regards,\n>Shubham Khanna.\n\n\n", "msg_date": "Tue, 5 Dec 2023 10:52:43 +0530", "msg_from": "Shubham Khanna <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [DOCS] HOT - correct claim about indexes not referencing old line\n pointers" }, { "msg_contents": "On Wed, Oct 4, 2023 at 9:12 PM James Coleman <[email protected]> wrote:\n> All right, attached is a v3 which attempts to fix the wrong\n> information with an economy of words. I may at some point submit a\n> separate patch that adds a broader pruning section, but this at least\n> brings the docs inline with reality insofar as they address it.\n\nI don't think this is as good as what I proposed back on October 2nd.\nIMHO, that version does a good job making the text accurate and clear,\nand is directly responsive to your original complaint, namely, that\nthe root of the HOT chain can't be removed. But this version seems to\ncontain a number of incidental changes that are unrelated to that\npoint, e.g. \"old versions\" -> \"old, no longer visible versions\", \"can\nbe completely removed\" -> \"may be pruned\", and the removal of the\nsentence \"In summary, heap-only tuple updates can only be created - if\ncolumns used by indexes are not updated\" which AFAICT is both\ncompletely correct as-is and unrelated to the original complaint.\n\nMaybe I shouldn't be, but I'm slightly frustrated here. I thought I\nhad proposed an alternative which you found acceptable, but then you\nproposed several more versions that did other things instead, and I\nnever really understood why we couldn't just adopt the version that\nyou seemed to think was OK. If there's a problem with that, say what\nit is. If there's not, let's do that and move on.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 14 Mar 2024 10:27:54 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [DOCS] HOT - correct claim about indexes not referencing old line\n pointers" }, { "msg_contents": "On Thu, Mar 14, 2024 at 10:28 AM Robert Haas <[email protected]> wrote:\n>\n> On Wed, Oct 4, 2023 at 9:12 PM James Coleman <[email protected]> wrote:\n> > All right, attached is a v3 which attempts to fix the wrong\n> > information with an economy of words. I may at some point submit a\n> > separate patch that adds a broader pruning section, but this at least\n> > brings the docs inline with reality insofar as they address it.\n>\n> I don't think this is as good as what I proposed back on October 2nd.\n> IMHO, that version does a good job making the text accurate and clear,\n> and is directly responsive to your original complaint, namely, that\n> the root of the HOT chain can't be removed. But this version seems to\n> contain a number of incidental changes that are unrelated to that\n> point, e.g. \"old versions\" -> \"old, no longer visible versions\", \"can\n> be completely removed\" -> \"may be pruned\", and the removal of the\n> sentence \"In summary, heap-only tuple updates can only be created - if\n> columns used by indexes are not updated\" which AFAICT is both\n> completely correct as-is and unrelated to the original complaint.\n>\n> Maybe I shouldn't be, but I'm slightly frustrated here. I thought I\n> had proposed an alternative which you found acceptable, but then you\n> proposed several more versions that did other things instead, and I\n> never really understood why we couldn't just adopt the version that\n> you seemed to think was OK. If there's a problem with that, say what\n> it is. If there's not, let's do that and move on.\n\nI think there's simply a misunderstanding here. I read your proposal\nas \"here's an idea to consider as you work on the patch\" (as happens\non many other threads), and so I attempted to incorporate your primary\npoints of feedback into my next version of the patch.\n\nObviously I have reasons for the other changes I made: for example,\n\"no longer visible\" improves the correctness, since being an old\nversion isn't sufficient. I removed the \"In summary\" sentence because\nit simply doesn't follow from the prose before it. That sentence\nsimply restates information already appearing earlier in almost as\nsimple a form, so it's redundant. But more importantly it's just not\nactually a summary of the text before it, so removing it improves the\ndocumentation.\n\nI can explain my reasoning further if desired, but I fear it would\nsimply frustrate you further, so I'll stop here.\n\nIf the goal here is the most minimal patch possible, then please\ncommit what you proposed. I am interested in improving the document\nfurther, but I don't know how to do that easily if the requirement is\neffectively \"must only change one specific detail at a time\". So, that\nleaves me feeling a bit frustrated also.\n\nRegards,\nJames Coleman\n\n\n", "msg_date": "Thu, 14 Mar 2024 21:06:54 -0400", "msg_from": "James Coleman <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [DOCS] HOT - correct claim about indexes not referencing old line\n pointers" }, { "msg_contents": "On Thu, Mar 14, 2024 at 9:07 PM James Coleman <[email protected]> wrote:\n> Obviously I have reasons for the other changes I made: for example,\n> \"no longer visible\" improves the correctness, since being an old\n> version isn't sufficient. I removed the \"In summary\" sentence because\n> it simply doesn't follow from the prose before it. That sentence\n> simply restates information already appearing earlier in almost as\n> simple a form, so it's redundant. But more importantly it's just not\n> actually a summary of the text before it, so removing it improves the\n> documentation.\n>\n> I can explain my reasoning further if desired, but I fear it would\n> simply frustrate you further, so I'll stop here.\n>\n> If the goal here is the most minimal patch possible, then please\n> commit what you proposed. I am interested in improving the document\n> further, but I don't know how to do that easily if the requirement is\n> effectively \"must only change one specific detail at a time\". So, that\n> leaves me feeling a bit frustrated also.\n\nI don't think the goal is to have the most minimal patch possible, but\nat the same time, I'm responsible for what I commit. If I commit a\npatch that changes something and somebody, especially another\ncommitter, writes an email and says \"hey, why the heck did you change\nthis, you dummy?\" then I need to be able to answer that question, and\nsaying \"uh, well, James Coleman had it in his patch and he didn't\nexplain why it was there and I didn't know either but I just kind of\ncommitted it anyway\" is not where I want to be. Almost without\nexception, it's not the patch author who gets asked why they included\na certain thing in the patch; it's the committer who gets asked why it\ngot committed that way -- and at least as I see it, if there's not a\ngood answer to that question, it's the committer who gets judged, not\nthe patch author. In some cases, such questions and judgements arrive\nwithout warning YEARS after the commit.\n\nSo, to protect myself against questions from other hackers that end,\nat least implicitly, in \"you dummy,\" I try to make sure there's an\nadequate paper trail for everything I commit. Between the commit\nmessage, the comments, and the email discussion, it needs to be\ncrystal clear why something was thought to be a good idea at the time.\nHopefully, it will be clear enough that I never even get a question,\nbecause the reader will be able to understand ON THEIR OWN why\nsomething was done and either go \"oh, that make sense\" or \"well, I get\nwhy they did that, but I disagree because $REASON\" ... but if that\ndoesn't quite work out, then I hope that at the very least the paper\ntrail will be good enough that I can reconstruct the reasoning if\nneeded. For a recent example of a case where I clearly failed do a\ngood enough job to keep someone from asking a \"you dummy\" question,\nsee the http://postgr.es/m/[email protected]\nand in particular the paragraph that starts with \"However\".\n\nTherefore, if I realize when reading the patch that it contains\nodd-looking changes which I can't relate to the stated goal, it's\ngetting bounced, pretty much 100% of the time. If it's a small patch\nand I really care about it and it's a new submitter who doesn't know\nany better, I might choose to remove those changes myself and commit\nthe rest; and if I like those changes for other reasons, I might\nchoose to commit them separately. In any other situation, I'm just\ngoing to tell the submitter that they need to take out the unrelated\nchanges and move on to the next email thread.\n\nIn this particular situation, I see that this whole discussion is\nslightly moot by virtue of 7a9328e8e40534fb4de41b4ac152e3c6989d84e7\n(Jeff Davis, 2024-03-05) which removed the \"In summary, heap-only\ntuple updates can only be created if columns used by indexes are not\nupdated\" sentence that you also removed. But there is an important\ndifference between that patch and yours, at least IMHO. You argue that\nthe sentence in question didn't flow from what precedes it, but I\ndon't agree with that as a rationale; I think that sentence flows\nperfectly well from what precedes it and is a generally adequate\nsummary of the point of the section. That we disagree on the merits of\nthat sentence is fine; there's no hope that we're all going to agree\non everything. What makes me frustrated is that you didn't provide\nthat rationale, which greatly complicates the whole discussion,\nbecause now I have to spend time getting you to either take it out or\nprovide your justification before we can even reach the point of\narguing about the substance.\n\nAt least in my judgment, the patch Jeff committed doesn't have that\nproblem, because the rationale that he gives in the commit message\n(partly by reference to another commit) is that heap-only tuples now\ncan, in circumstances, be created even if columns used by indexes ARE\nupdated. Based on that justification, it seems really clear to me that\nit was right to do SOMETHING to the sentence in question. Whether\nremoving the sentence was better than some other alternative is\ndebatable, and I might well have done something different, but between\nreading the commit message and reading the diff, it's 100%\nunderstandable to me what his reasoning was for every single byte that\nis in that diff. So if I love what he committed, cool! And if I hate\nwhat he committed, I'm well-placed to argue, because I know what I'm\narguing against.\n\nAlso, if I do feel like arguing, the fact that I know his reasoning\nwill also make it a heck of a lot easier to come up with a proposal\nthat meets my goals without undermining his. I can see, for example,\nthat I can't simply propose to put that sentence back the way that it\nwas; based on his reason for removing it, he obviously isn't going to\nlike that. But if, say, my goal were to have some kind of a summary\nsentence there because I just thought that was really helpful, I could\nwrite a patch to insert a new summary sentence there and then say\n\"hey, Jeff, you took this summary sentence out because it's not\naccurate, but here's a new one which I think is accurate and I'd like\nto add it back.\" Maybe he'd like that, and maybe he wouldn't, but it\nwould have a shot, because it would take his reasoning into account,\nwhich I can do, because I know what it was. I want people who read the\npatches that I commit to have the same opportunities.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 20 Mar 2024 14:15:12 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [DOCS] HOT - correct claim about indexes not referencing old line\n pointers" }, { "msg_contents": "On Thu, Mar 14, 2024 at 9:07 PM James Coleman <[email protected]> wrote:\n> If the goal here is the most minimal patch possible, then please\n> commit what you proposed. I am interested in improving the document\n> further, but I don't know how to do that easily if the requirement is\n> effectively \"must only change one specific detail at a time\".\n\nSo, yesterday I wrote a long email on how I saw the goals here.\nDespite our disagreements, I believe we agree that the text I proposed\nis better than what's there, so I've committed that change now. I've\nalso marked the CF entry as committed. Please propose the other\nchanges you want separately.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 21 Mar 2024 13:09:24 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [DOCS] HOT - correct claim about indexes not referencing old line\n pointers" }, { "msg_contents": "On Wed, Mar 20, 2024 at 2:15 PM Robert Haas <[email protected]> wrote:\n>\n> On Thu, Mar 14, 2024 at 9:07 PM James Coleman <[email protected]> wrote:\n> > Obviously I have reasons for the other changes I made: for example,\n> > \"no longer visible\" improves the correctness, since being an old\n> > version isn't sufficient. I removed the \"In summary\" sentence because\n> > it simply doesn't follow from the prose before it. That sentence\n> > simply restates information already appearing earlier in almost as\n> > simple a form, so it's redundant. But more importantly it's just not\n> > actually a summary of the text before it, so removing it improves the\n> > documentation.\n> >\n> > I can explain my reasoning further if desired, but I fear it would\n> > simply frustrate you further, so I'll stop here.\n> >\n> > If the goal here is the most minimal patch possible, then please\n> > commit what you proposed. I am interested in improving the document\n> > further, but I don't know how to do that easily if the requirement is\n> > effectively \"must only change one specific detail at a time\". So, that\n> > leaves me feeling a bit frustrated also.\n>\n> I don't think the goal is to have the most minimal patch possible, but\n> at the same time, I'm responsible for what I commit. If I commit a\n> patch that changes something and somebody, especially another\n> committer, writes an email and says \"hey, why the heck did you change\n> this, you dummy?\" then I need to be able to answer that question, and\n> saying \"uh, well, James Coleman had it in his patch and he didn't\n> explain why it was there and I didn't know either but I just kind of\n> committed it anyway\" is not where I want to be. Almost without\n> exception, it's not the patch author who gets asked why they included\n> a certain thing in the patch; it's the committer who gets asked why it\n> got committed that way -- and at least as I see it, if there's not a\n> good answer to that question, it's the committer who gets judged, not\n> the patch author. In some cases, such questions and judgements arrive\n> without warning YEARS after the commit.\n>\n> So, to protect myself against questions from other hackers that end,\n> at least implicitly, in \"you dummy,\" I try to make sure there's an\n> adequate paper trail for everything I commit. Between the commit\n> message, the comments, and the email discussion, it needs to be\n> crystal clear why something was thought to be a good idea at the time.\n> Hopefully, it will be clear enough that I never even get a question,\n> because the reader will be able to understand ON THEIR OWN why\n> something was done and either go \"oh, that make sense\" or \"well, I get\n> why they did that, but I disagree because $REASON\" ... but if that\n> doesn't quite work out, then I hope that at the very least the paper\n> trail will be good enough that I can reconstruct the reasoning if\n> needed. For a recent example of a case where I clearly failed do a\n> good enough job to keep someone from asking a \"you dummy\" question,\n> see the http://postgr.es/m/[email protected]\n> and in particular the paragraph that starts with \"However\".\n>\n> Therefore, if I realize when reading the patch that it contains\n> odd-looking changes which I can't relate to the stated goal, it's\n> getting bounced, pretty much 100% of the time. If it's a small patch\n> and I really care about it and it's a new submitter who doesn't know\n> any better, I might choose to remove those changes myself and commit\n> the rest; and if I like those changes for other reasons, I might\n> choose to commit them separately. In any other situation, I'm just\n> going to tell the submitter that they need to take out the unrelated\n> changes and move on to the next email thread.\n>\n> In this particular situation, I see that this whole discussion is\n> slightly moot by virtue of 7a9328e8e40534fb4de41b4ac152e3c6989d84e7\n> (Jeff Davis, 2024-03-05) which removed the \"In summary, heap-only\n> tuple updates can only be created if columns used by indexes are not\n> updated\" sentence that you also removed. But there is an important\n> difference between that patch and yours, at least IMHO. You argue that\n> the sentence in question didn't flow from what precedes it, but I\n> don't agree with that as a rationale; I think that sentence flows\n> perfectly well from what precedes it and is a generally adequate\n> summary of the point of the section. That we disagree on the merits of\n> that sentence is fine; there's no hope that we're all going to agree\n> on everything. What makes me frustrated is that you didn't provide\n> that rationale, which greatly complicates the whole discussion,\n> because now I have to spend time getting you to either take it out or\n> provide your justification before we can even reach the point of\n> arguing about the substance.\n>\n> At least in my judgment, the patch Jeff committed doesn't have that\n> problem, because the rationale that he gives in the commit message\n> (partly by reference to another commit) is that heap-only tuples now\n> can, in circumstances, be created even if columns used by indexes ARE\n> updated. Based on that justification, it seems really clear to me that\n> it was right to do SOMETHING to the sentence in question. Whether\n> removing the sentence was better than some other alternative is\n> debatable, and I might well have done something different, but between\n> reading the commit message and reading the diff, it's 100%\n> understandable to me what his reasoning was for every single byte that\n> is in that diff. So if I love what he committed, cool! And if I hate\n> what he committed, I'm well-placed to argue, because I know what I'm\n> arguing against.\n>\n> Also, if I do feel like arguing, the fact that I know his reasoning\n> will also make it a heck of a lot easier to come up with a proposal\n> that meets my goals without undermining his. I can see, for example,\n> that I can't simply propose to put that sentence back the way that it\n> was; based on his reason for removing it, he obviously isn't going to\n> like that. But if, say, my goal were to have some kind of a summary\n> sentence there because I just thought that was really helpful, I could\n> write a patch to insert a new summary sentence there and then say\n> \"hey, Jeff, you took this summary sentence out because it's not\n> accurate, but here's a new one which I think is accurate and I'd like\n> to add it back.\" Maybe he'd like that, and maybe he wouldn't, but it\n> would have a shot, because it would take his reasoning into account,\n> which I can do, because I know what it was. I want people who read the\n> patches that I commit to have the same opportunities.\n\nI'm happy to provide the rationale; my apologies for not providing it\noriginally.\n\nI didn't realize that was the root of the frustration: I'd read your\nfeedback (\"[my proposed change[ is directly responsive to your\noriginal complaint...[your] version seems to contain a number of\nincidental changes that are unrelated to that\") as being aimed at\nsomething like \"a patch can only address the original specific\ncomplaint\".\n\nI'll try to be more specific in my reasoning when e.g. I think a piece\nof documentation reads better with a few additional changes. If I\nforget to do that I'd also appreciate your asking up front for the\nrationale so that I have a chance to clarify that without\nmisunderstanding.\n\nRegards,\nJames Coleman\n\n\n", "msg_date": "Sat, 23 Mar 2024 17:42:39 -0400", "msg_from": "James Coleman <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [DOCS] HOT - correct claim about indexes not referencing old line\n pointers" }, { "msg_contents": "On Thu, Mar 21, 2024 at 1:09 PM Robert Haas <[email protected]> wrote:\n>\n> On Thu, Mar 14, 2024 at 9:07 PM James Coleman <[email protected]> wrote:\n> > If the goal here is the most minimal patch possible, then please\n> > commit what you proposed. I am interested in improving the document\n> > further, but I don't know how to do that easily if the requirement is\n> > effectively \"must only change one specific detail at a time\".\n>\n> So, yesterday I wrote a long email on how I saw the goals here.\n> Despite our disagreements, I believe we agree that the text I proposed\n> is better than what's there, so I've committed that change now. I've\n> also marked the CF entry as committed. Please propose the other\n> changes you want separately.\n\nThanks for committing the fix.\n\nRegards,\nJames Coleman\n\n\n", "msg_date": "Sat, 23 Mar 2024 17:43:07 -0400", "msg_from": "James Coleman <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [DOCS] HOT - correct claim about indexes not referencing old line\n pointers" } ]
[ { "msg_contents": "Hi\n\nI had to fix plpgsql_check issue\nhttps://github.com/okbob/plpgsql_check/issues/155\n\nThe problem is in execution of _PG_init() in CREATE EXTENSION time.\n\nIt is a problem for any extension that uses plpgsql debug API, because it\nis quietly activated.\n\nIs it necessary?\n\nRegards\n\nPavel\n\nHiI had to fix plpgsql_check issue https://github.com/okbob/plpgsql_check/issues/155The problem is in execution of _PG_init() in CREATE EXTENSION time.It is a problem for any extension that uses plpgsql debug API, because it is quietly activated.Is it necessary?RegardsPavel", "msg_date": "Fri, 29 Sep 2023 20:10:10 +0200", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": true, "msg_subject": "CREATE EXTENSION forces an library initialization - is it bug?" }, { "msg_contents": "Pavel Stehule <[email protected]> writes:\n> I had to fix plpgsql_check issue\n> https://github.com/okbob/plpgsql_check/issues/155\n\n> The problem is in execution of _PG_init() in CREATE EXTENSION time.\n\n> It is a problem for any extension that uses plpgsql debug API, because it\n> is quietly activated.\n\n> Is it necessary?\n\nYes, I think so. If the extension has any C functions, then when its\nscript executes those CREATE FUNCTION commands then the underlying\nlibrary will be loaded (so we can check that the library is loadable\nand the functions really exist). That's always happened and I do not\nthink it is negotiable.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 29 Sep 2023 14:14:24 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CREATE EXTENSION forces an library initialization - is it bug?" }, { "msg_contents": "pá 29. 9. 2023 v 20:14 odesílatel Tom Lane <[email protected]> napsal:\n\n> Pavel Stehule <[email protected]> writes:\n> > I had to fix plpgsql_check issue\n> > https://github.com/okbob/plpgsql_check/issues/155\n>\n> > The problem is in execution of _PG_init() in CREATE EXTENSION time.\n>\n> > It is a problem for any extension that uses plpgsql debug API, because it\n> > is quietly activated.\n>\n> > Is it necessary?\n>\n> Yes, I think so. If the extension has any C functions, then when its\n> script executes those CREATE FUNCTION commands then the underlying\n> library will be loaded (so we can check that the library is loadable\n> and the functions really exist). That's always happened and I do not\n> think it is negotiable.\n>\n\nok\n\nthank you for info\n\n\n\n>\n> regards, tom lane\n>\n\npá 29. 9. 2023 v 20:14 odesílatel Tom Lane <[email protected]> napsal:Pavel Stehule <[email protected]> writes:\n> I had to fix plpgsql_check issue\n> https://github.com/okbob/plpgsql_check/issues/155\n\n> The problem is in execution of _PG_init() in CREATE EXTENSION time.\n\n> It is a problem for any extension that uses plpgsql debug API, because it\n> is quietly activated.\n\n> Is it necessary?\n\nYes, I think so.  If the extension has any C functions, then when its\nscript executes those CREATE FUNCTION commands then the underlying\nlibrary will be loaded (so we can check that the library is loadable\nand the functions really exist).  That's always happened and I do not\nthink it is negotiable.okthank you for info \n\n                        regards, tom lane", "msg_date": "Fri, 29 Sep 2023 20:19:22 +0200", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": true, "msg_subject": "Re: CREATE EXTENSION forces an library initialization - is it bug?" } ]
[ { "msg_contents": "Hi I am trying to implement a LRU cache in postgres extension. Please find\nthe markdown file for more details. Looking forward to hearing from you.", "msg_date": "Sat, 30 Sep 2023 02:26:57 +0530", "msg_from": "Lakshmi Narayana Velayudam <[email protected]>", "msg_from_op": true, "msg_subject": "Implementing LRU cache for postgresql extension" } ]
[ { "msg_contents": "The committfest app is down for repairs. We will reply back here once it \nis back up.\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Sat, 30 Sep 2023 08:47:00 -0400", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": true, "msg_subject": "commitfest app down for repairs" }, { "msg_contents": "On 9/30/23 08:47, Joe Conway wrote:\n> The committfest app is down for repairs. We will reply back here once it\n> is back up.\nThe commitfest app is back up.\n\nWe restored to a backup from one day prior. We will take a look at what \nchanged in between, but it might be up to folks to redo some things.\n\nA cooling off period was added to the commitfest app for new community \naccounts, similar to what was done with the wiki a few years ago.\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n", "msg_date": "Sat, 30 Sep 2023 09:26:51 -0400", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": true, "msg_subject": "Re: commitfest app down for repairs" }, { "msg_contents": "On Sat, Sep 30, 2023 at 9:31 AM Joe Conway <[email protected]> wrote:\n> We restored to a backup from one day prior. We will take a look at what\n> changed in between, but it might be up to folks to redo some things.\n>\n> A cooling off period was added to the commitfest app for new community\n> accounts, similar to what was done with the wiki a few years ago.\n\nOuch. Thanks for cleaning up the mess.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 2 Oct 2023 13:26:43 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: commitfest app down for repairs" } ]
[ { "msg_contents": "Hi,\n\nRemove ParallelReadyList and worker_spi_state from typedefs.list,\nthese structures have been removed as part of an earlier commits,\nParallelReadyList was removed as part of\n9bfd44bbde4261181bf94738f3b041c629c65a7e. worker_spi_state was removed\nas part of af720b4c50a122647182f4a030bb0ea8f750fe2f.\nAttached a patch for fixing the same.\n\nRegards,\nVignesh", "msg_date": "Sat, 30 Sep 2023 20:18:46 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": true, "msg_subject": "Remove ParallelReadyList and worker_spi_state from typedefs.list" }, { "msg_contents": "vignesh C <[email protected]> writes:\n> Remove ParallelReadyList and worker_spi_state from typedefs.list,\n> these structures have been removed as part of an earlier commits,\n> ParallelReadyList was removed as part of\n> 9bfd44bbde4261181bf94738f3b041c629c65a7e. worker_spi_state was removed\n> as part of af720b4c50a122647182f4a030bb0ea8f750fe2f.\n> Attached a patch for fixing the same.\n\nI don't think we need to trouble with removing such entries by hand.\nI still anticipate updating typedefs.list from the buildfarm at least\nonce per release cycle, and that will take care of cleaning out\nobsolete entries.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 30 Sep 2023 12:14:03 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Remove ParallelReadyList and worker_spi_state from typedefs.list" }, { "msg_contents": "On Sat, 30 Sept 2023 at 21:44, Tom Lane <[email protected]> wrote:\n>\n> vignesh C <[email protected]> writes:\n> > Remove ParallelReadyList and worker_spi_state from typedefs.list,\n> > these structures have been removed as part of an earlier commits,\n> > ParallelReadyList was removed as part of\n> > 9bfd44bbde4261181bf94738f3b041c629c65a7e. worker_spi_state was removed\n> > as part of af720b4c50a122647182f4a030bb0ea8f750fe2f.\n> > Attached a patch for fixing the same.\n>\n> I don't think we need to trouble with removing such entries by hand.\n> I still anticipate updating typedefs.list from the buildfarm at least\n> once per release cycle, and that will take care of cleaning out\n> obsolete entries.\n\nThanks, that will take care of removal at that time.\n\nRegards,\nVignesh\n\n\n", "msg_date": "Mon, 2 Oct 2023 18:07:31 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Remove ParallelReadyList and worker_spi_state from typedefs.list" } ]
[ { "msg_contents": "Hello. While reading the docs for the enable_partitionwise_aggregate\nparameter on the Query Planning page, I thought the description had a small\nmistake that could be improved.\n\nThe current wording is: \"which allows grouping or aggregation on a\npartitioned tables performed separately \"\n\nPage: https://www.postgresql.org/docs/current/runtime-config-query.html\n\nI think possible better alternatives could be:\n\n - (Option 1) a \"partitioned table's partitions\" (the possessive form of\n \"it's\"). The \"enable_partition_pruning\" parameter uses \"the partitioned\n table's partitions\" in this form. I think this option is good, but I had a\n slight preference for option 2.\n - (Option 2) Or to just cut out the first part and say \"to be performed\n separately for each partition\", which seemed simpler. So the sentence\n reads: \"which allows grouping or aggregation to be performed separately for\n each partition\"\n - (Option 3) dropping the \"a\" so it says \"which allows grouping or\n aggregation on partitioned tables performed separately\". I don't think this\n is as good though because the aggregation happens on the partitions, so it\n feels slightly off to me to say the \"partitioned tables\" instead of the\n partitions.\n\nI tested toggling this parameter on and off with a test partitioned table,\nand looked at the query execution plan, and saw how the aggregation\nhappened on the partitions first when it was enabled.\n\nThis is my first ever submission to pgsql-hackers. :) I used this guide\nfrom Lætitia to prepare the patch file for Option 2 above, which is\nattached. I am having a problem with the \"make STYLE=website html\" step, so\nI hadn't seen the preview (still fixing this up).\nhttps://mydbanotebook.org/post/patching-doc/\n\nLet me know what you think!\n\nThanks.", "msg_date": "Sat, 30 Sep 2023 17:29:53 -0500", "msg_from": "Andy Atkinson <[email protected]>", "msg_from_op": true, "msg_subject": "Doc: Minor update for enable_partitionwise_aggregate" }, { "msg_contents": "On Sun, Oct 1, 2023 at 7:38 AM Andy Atkinson <[email protected]> wrote:\n>\n> Hello. While reading the docs for the enable_partitionwise_aggregate parameter on the Query Planning page, I thought the description had a small mistake that could be improved.\n>\n> The current wording is: \"which allows grouping or aggregation on a partitioned tables performed separately \"\n>\n> Page: https://www.postgresql.org/docs/current/runtime-config-query.html\n>\n> I think possible better alternatives could be:\n>\n> (Option 1) a \"partitioned table's partitions\" (the possessive form of \"it's\"). The \"enable_partition_pruning\" parameter uses \"the partitioned table's partitions\" in this form. I think this option is good, but I had a slight preference for option 2.\n> (Option 2) Or to just cut out the first part and say \"to be performed separately for each partition\", which seemed simpler. So the sentence reads: \"which allows grouping or aggregation to be performed separately for each partition\"\n\nI would leave \"on a partitioned table\". Notice that I have removed \"s\"\nfrom tables.\n\n> (Option 3) dropping the \"a\" so it says \"which allows grouping or aggregation on partitioned tables performed separately\". I don't think this is as good though because the aggregation happens on the partitions, so it feels slightly off to me to say the \"partitioned tables\" instead of the partitions.\n\nIt's technically incorrect as well. Aggregation is performed on a\nsingle relation always - a join or subquery or simple relation. A join\nmay have multiple tables in it but the aggregation is performed on its\nresult and not individual tables and hence not on partitions of\nindividual tables.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Tue, 3 Oct 2023 15:02:58 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Doc: Minor update for enable_partitionwise_aggregate" }, { "msg_contents": "Thank you for the feedback and clarifications Ashutosh. How about this?\n\n\"which allows grouping or aggregation on partitioned tables to be performed\nseparately for each partition.\"\n\nAttached a v2 patch file with this change.\n\nHere is a gist w/ a partitioned table and 2 partitions, comparing execution\nplans after enabling the parameter, and reading the plan nodes up from the\nbottom. With enable_partitionwise_aggregate = on, I can see the Partial\nHashAggregate/\"Group Key\" on each of the two partitions (of the partitioned\ntable) where I don't see that when the setting is off (by default).\nhttps://gist.github.com/andyatkinson/7af81fb8a5b9e677af6049e29ab2cb73\n\nFor the terms partitioned table vs. partitions, I used how they're\ndescribed here:\nhttps://www.postgresql.org/docs/current/ddl-partitioning.html\n- partitioned table (virtual table)\n- partitions (of a partitioned table)\n\nThanks!\nAndrew\n\n\nOn Tue, Oct 3, 2023 at 4:33 AM Ashutosh Bapat <[email protected]>\nwrote:\n\n> On Sun, Oct 1, 2023 at 7:38 AM Andy Atkinson <[email protected]>\n> wrote:\n> >\n> > Hello. While reading the docs for the enable_partitionwise_aggregate\n> parameter on the Query Planning page, I thought the description had a small\n> mistake that could be improved.\n> >\n> > The current wording is: \"which allows grouping or aggregation on a\n> partitioned tables performed separately \"\n> >\n> > Page: https://www.postgresql.org/docs/current/runtime-config-query.html\n> >\n> > I think possible better alternatives could be:\n> >\n> > (Option 1) a \"partitioned table's partitions\" (the possessive form of\n> \"it's\"). The \"enable_partition_pruning\" parameter uses \"the partitioned\n> table's partitions\" in this form. I think this option is good, but I had a\n> slight preference for option 2.\n> > (Option 2) Or to just cut out the first part and say \"to be performed\n> separately for each partition\", which seemed simpler. So the sentence\n> reads: \"which allows grouping or aggregation to be performed separately for\n> each partition\"\n>\n> I would leave \"on a partitioned table\". Notice that I have removed \"s\"\n> from tables.\n>\n> > (Option 3) dropping the \"a\" so it says \"which allows grouping or\n> aggregation on partitioned tables performed separately\". I don't think this\n> is as good though because the aggregation happens on the partitions, so it\n> feels slightly off to me to say the \"partitioned tables\" instead of the\n> partitions.\n>\n> It's technically incorrect as well. Aggregation is performed on a\n> single relation always - a join or subquery or simple relation. A join\n> may have multiple tables in it but the aggregation is performed on its\n> result and not individual tables and hence not on partitions of\n> individual tables.\n>\n> --\n> Best Wishes,\n> Ashutosh Bapat\n>", "msg_date": "Tue, 10 Oct 2023 22:21:50 -0500", "msg_from": "Andrew Atkinson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Doc: Minor update for enable_partitionwise_aggregate" }, { "msg_contents": "On Wed, 11 Oct 2023 at 16:26, Andrew Atkinson <[email protected]> wrote:\n>\n> Thank you for the feedback and clarifications Ashutosh. How about this?\n>\n> \"which allows grouping or aggregation on partitioned tables to be performed separately for each partition.\"\n\nThis looks good to me. I can take care of this.\n\nDavid\n\n\n", "msg_date": "Wed, 11 Oct 2023 19:38:03 +1300", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Doc: Minor update for enable_partitionwise_aggregate" }, { "msg_contents": "On Wed, 11 Oct 2023 at 19:38, David Rowley <[email protected]> wrote:\n>\n> On Wed, 11 Oct 2023 at 16:26, Andrew Atkinson <[email protected]> wrote:\n> > \"which allows grouping or aggregation on partitioned tables to be performed separately for each partition.\"\n>\n> This looks good to me. I can take care of this.\n\nPushed and backpatched to v11.\n\nDavid\n\n\n", "msg_date": "Thu, 12 Oct 2023 21:18:53 +1300", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Doc: Minor update for enable_partitionwise_aggregate" } ]
[ { "msg_contents": "Hi, all\n\n\nWhen create  or refresh a Materialized View, if the view’s query has order by, we may sort and insert the sorted data into view.\n\n\tCreate Materialized View mv1 as select c1, c2 from t1 order by c2;\n\tRefresh Materialized View mv1;\n\nAnd it appears that we could get ordered data  when select from Materialized View;\n\n\tSelect * from mv1;\n\nBut it’s not true if we use other access methods, or we choose a parallel seqscan plan.\nA non-parallel seqscan on heap table appears ordered as we always create new rel file and swap them, in my opinion, it’s more like a free lunch.\n\nSo, conclusion1:  We couldn’t rely on the `ordered-data` even the mv’s sql has order by clause, is it right?\n\nAnd if it’s true, shall we skip the order by clause for Materialized View  when executing create/refresh statement?\n\nMaterialized View’s order by clause could be skipped if\n\n1. Order by clause is on the top query level\n2. There is no real limit clause\n\nThe benefit is the query may be speeded up without sort nodes each time creating/refreshing Materialized View.\n\n\nA simple results:\n\n\tcreate table t1 as select i as c1 , i/2 as c2 , i/5 as c3 from generate_series(1, 100000) i;\n\tcreate materialized view mvt1_order as select c1, c2, c3 from t1 order by c2, c3, c1 asc\n\nWithout this patch:\n\tzml=# refresh materialized view mvt1_order;\n\tREFRESH MATERIALIZED VIEW\n\tTime: 228.548 ms\n\tzml=# refresh materialized view mvt1_order;\n\tREFRESH MATERIALIZED VIEW\n\tTime: 230.374 ms\n\tzml=# refresh materialized view mvt1_order;\n\tREFRESH MATERIALIZED VIEW\n\tTime: 217.079 ms\n\n\nWith this patch:\n\nzml=# refresh materialized view mvt1_order;\nREFRESH MATERIALIZED VIEW\nTime: 192.409 ms\nzml=# refresh materialized view mvt1_order;\nREFRESH MATERIALIZED VIEW\nTime: 204.398 ms\nzml=# refresh materialized view mvt1_order;\nREFRESH MATERIALIZED VIEW\nTime: 197.510 ms\n\n\n\nZhang Mingli\nwww.hashdata.xyz", "msg_date": "Sun, 1 Oct 2023 22:44:07 +0800", "msg_from": "Zhang Mingli <[email protected]>", "msg_from_op": true, "msg_subject": "Skip Orderby Execution for Materialized Views" }, { "msg_contents": "Zhang Mingli <[email protected]> writes:\n> When create  or refresh a Materialized View, if the view’s query has order by, we may sort and insert the sorted data into view.\n\nIndeed.\n\n> And if it’s true, shall we skip the order by clause for Materialized View  when executing create/refresh statement?\n\nNo. The intent of a materialized view is to execute the query\nas presented. If the user doesn't understand the consequences\nof that, it's not our job to think we are smarter than they are.\n\nI think this patch should be rejected.\n\nBTW, I'm pretty certain that this patch breaks some valid cases\neven if we take your point of view as correct. For one example,\nyou can't just remove the sort clause if the query uses DISTINCT ON.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 01 Oct 2023 10:54:46 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Skip Orderby Execution for Materialized Views" }, { "msg_contents": "HI,\n\n> On Oct 1, 2023, at 22:54, Tom Lane <[email protected]> wrote:\n> \n> For one example,\n> you can't just remove the sort clause if the query uses DISTINCT ON\n\n\nHi, Tom, got it, thanks, \n\nZhang Mingli\nHashData https://www.hashdata.xyz\n\n\nHI,On Oct 1, 2023, at 22:54, Tom Lane <[email protected]> wrote: For one example,you can't just remove the sort clause if the query uses DISTINCT ONHi, Tom, got it, thanks, \nZhang MingliHashData https://www.hashdata.xyz", "msg_date": "Sun, 1 Oct 2023 23:02:17 +0800", "msg_from": "Zhang Mingli <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Skip Orderby Execution for Materialized Views" }, { "msg_contents": "On Sun, Oct 1, 2023 at 8:57 AM Zhang Mingli <[email protected]> wrote:\n\n> And if it’s true, shall we skip the order by clause for Materialized\n> View when executing create/refresh statement?\n>\n\nWe tend to do precisely what the user writes into their query. If they\ndon't want an order by they can remove it. I don't see any particular\nreason we should be second-guessing them here. And what makes the trade-off\nworse is the reasonable expectation that we'd provide a way to force an\nordering of the inserts should the user actually want it after we defaulted\nto ignoring that part of their query.\n\nBut yes, you are correct that adding an order by to a materialized view is\ntypically pointless. To the extent it is detrimental varies since even\npartially ordered results can save on processing time.\n\nDavid J.\n\nOn Sun, Oct 1, 2023 at 8:57 AM Zhang Mingli <[email protected]> wrote:\n\n\nAnd if it’s true, shall we skip the order by clause for Materialized View  when executing create/refresh statement?We tend to do precisely what the user writes into their query.  If they don't want an order by they can remove it.  I don't see any particular reason we should be second-guessing them here. And what makes the trade-off worse is the reasonable expectation that we'd provide a way to force an ordering of the inserts should the user actually want it after we defaulted to ignoring that part of their query.But yes, you are correct that adding an order by to a materialized view is typically pointless.  To the extent it is detrimental varies since even partially ordered results can save on processing time.David J.", "msg_date": "Sun, 1 Oct 2023 09:05:49 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Skip Orderby Execution for Materialized Views" } ]
[ { "msg_contents": "Running pgstatindex on some !indisready indexes fails with \"ERROR: XX001:\ncould not read block 0 in file\". This reproduces it:\n\nbegin;\ndrop table if exists not_indisready_t;\ncreate table not_indisready_t (c int);\ninsert into not_indisready_t values (1),(1);\ncommit;\ncreate unique index concurrently not_indisready_i on not_indisready_t(c);\nbegin;\ncreate extension pgstattuple;\n\\set VERBOSITY verbose\nselect * from pgstatindex('not_indisready_i');\n\\set VERBOSITY default\nrollback;\n\nSince XX001 = ERRCODE_DATA_CORRUPTED appears in the \"can't-happen\" class, it's\nnot a good fit for this scenario. I propose to have pgstatindex fail early on\n!indisready indexes. We could go a step further and also fail on\nindisready&&!indisvalid indexes, which are complete enough to accept inserts\nbut not complete enough to answer queries. I don't see a reason to do that,\nbut maybe someone else will.\n\nThis made me wonder about functions handling unlogged rels during recovery. I\nused the attached hack to test most regclass-arg functions. While some of\nthese errors from src/test/recovery/tmp_check/log/001_stream_rep_standby_2.log\nmay deserve improvement, there were no class-XX messages:\n\n2023-10-01 12:24:05.992 PDT [646914:11] 001_stream_rep.pl ERROR: 58P01: could not open file \"base/5/16862\": No such file or directory\n2023-10-01 12:24:05.996 PDT [646914:118] 001_stream_rep.pl ERROR: 22023: fork \"main\" does not exist for this relation\n\nThanks,\nnm", "msg_date": "Sun, 1 Oct 2023 12:53:09 -0700", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": true, "msg_subject": "pgstatindex vs. !indisready" }, { "msg_contents": "Noah Misch <[email protected]> writes:\n> Running pgstatindex on some !indisready indexes fails with \"ERROR: XX001:\n> could not read block 0 in file\". This reproduces it:\n> ...\n> Since XX001 = ERRCODE_DATA_CORRUPTED appears in the \"can't-happen\" class, it's\n> not a good fit for this scenario. I propose to have pgstatindex fail early on\n> !indisready indexes.\n\n+1\n\n> We could go a step further and also fail on\n> indisready&&!indisvalid indexes, which are complete enough to accept inserts\n> but not complete enough to answer queries. I don't see a reason to do that,\n> but maybe someone else will.\n\nHmm. Seems like the numbers pgstatindex would produce for a\nnot-yet-complete index would be rather irrelevant, even if the case\ndoesn't risk any outright problems. I'd be inclined to be\nconservative and insist on indisvalid being true too.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 01 Oct 2023 16:37:25 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgstatindex vs. !indisready" }, { "msg_contents": "On Sun, Oct 01, 2023 at 04:37:25PM -0400, Tom Lane wrote:\n> Noah Misch <[email protected]> writes:\n> > Running pgstatindex on some !indisready indexes fails with \"ERROR: XX001:\n> > could not read block 0 in file\". This reproduces it:\n> > ...\n> > Since XX001 = ERRCODE_DATA_CORRUPTED appears in the \"can't-happen\" class, it's\n> > not a good fit for this scenario. I propose to have pgstatindex fail early on\n> > !indisready indexes.\n> \n> +1\n> \n> > We could go a step further and also fail on\n> > indisready&&!indisvalid indexes, which are complete enough to accept inserts\n> > but not complete enough to answer queries. I don't see a reason to do that,\n> > but maybe someone else will.\n> \n> Hmm. Seems like the numbers pgstatindex would produce for a\n> not-yet-complete index would be rather irrelevant, even if the case\n> doesn't risk any outright problems. I'd be inclined to be\n> conservative and insist on indisvalid being true too.\n\nOkay. If no other preferences appear, I'll do pgstatindex that way.\n\n> > This made me wonder about functions handling unlogged rels during recovery. I\n> > used the attached hack to test most regclass-arg functions.\n\nI forgot to test the same battery of functions on !indisready indexes. I've\nnow done that, using the attached script. While I didn't get additional\nclass-XX errors, more should change:\n\n[pgstatginindex pgstathashindex pgstattuple] currently succeed. Like\npgstatindex, they should ERROR, including in the back branches.\n\n[brin_desummarize_range brin_summarize_new_values brin_summarize_range\ngin_clean_pending_list] currently succeed. I propose to make them emit a\nDEBUG1 message and return early, like amcheck does, except on !indisready.\nThis would allow users to continue running them on all indexes of the\napplicable access method. Doing these operations on an\nindisready&&!indisvalid index is entirely reasonable, since they relate to\nINSERT/UPDATE/DELETE operations.\n\n[pg_freespace pg_indexes_size pg_prewarm] currently succeed, and I propose to\nleave them that way. No undefined behavior arises. pg_freespace needs to be\nresilient to garbage data anyway, given the lack of WAL for the FSM. One\ncould argue for a relkind check in pg_indexes_size. One could argue for\ntreating pg_prewarm like amcheck (DEBUG1 and return).", "msg_date": "Sun, 1 Oct 2023 13:58:30 -0700", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pgstatindex vs. !indisready" }, { "msg_contents": "On Sun, Oct 1, 2023 at 2:00 PM Noah Misch <[email protected]> wrote:\n> Okay. If no other preferences appear, I'll do pgstatindex that way.\n\nSounds reasonable.\n\n> [pgstatginindex pgstathashindex pgstattuple] currently succeed. Like\n> pgstatindex, they should ERROR, including in the back branches.\n\nMakes sense.\n\n> [brin_desummarize_range brin_summarize_new_values brin_summarize_range\n> gin_clean_pending_list] currently succeed. I propose to make them emit a\n> DEBUG1 message and return early, like amcheck does, except on !indisready.\n> This would allow users to continue running them on all indexes of the\n> applicable access method. Doing these operations on an\n> indisready&&!indisvalid index is entirely reasonable, since they relate to\n> INSERT/UPDATE/DELETE operations.\n\n+1 to all that (including the part about these operations being a\nlittle different to the amcheck functions in one particular respect).\n\nFWIW, amcheck's handling of unlogged relations when in hot standby\nmode is based on the following theory: if recovery were to end, the\nindex would become an empty index, so just treat it as if it was\nalready empty during recovery. Not everybody thought that this\nbehavior was ideal, but ISTM that it has fewer problems than any\nalternative approach you can think of. The same argument works just as\nwell with any function that accepts a regclass argument IMV.\n\n--\nPeter Geoghegan\n\n\n", "msg_date": "Sun, 1 Oct 2023 16:20:42 -0700", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgstatindex vs. !indisready" }, { "msg_contents": "On Sun, Oct 01, 2023 at 04:20:42PM -0700, Peter Geoghegan wrote:\n> On Sun, Oct 1, 2023 at 2:00 PM Noah Misch <[email protected]> wrote:\n> > Okay. If no other preferences appear, I'll do pgstatindex that way.\n> \n> Sounds reasonable.\n> \n>> [pgstatginindex pgstathashindex pgstattuple] currently succeed. Like\n>> pgstatindex, they should ERROR, including in the back branches.\n> \n> Makes sense.\n\nThese are already restrictive, makes sense.\n\n>> [brin_desummarize_range brin_summarize_new_values brin_summarize_range\n>> gin_clean_pending_list] currently succeed. I propose to make them emit a\n>> DEBUG1 message and return early, like amcheck does, except on !indisready.\n>> This would allow users to continue running them on all indexes of the\n>> applicable access method. Doing these operations on an\n>> indisready&&!indisvalid index is entirely reasonable, since they relate to\n>> INSERT/UPDATE/DELETE operations.\n\nHmm. Still slightly incorrect in some cases? Before being switched\nto indisvalid, an index built concurrently may miss some tuples\ndeleted before the reference snapshot used to build the index was\ntaken.\n\n> +1 to all that (including the part about these operations being a\n> little different to the amcheck functions in one particular respect).\n\nMaking them return early sounds sensible here.\n\n> FWIW, amcheck's handling of unlogged relations when in hot standby\n> mode is based on the following theory: if recovery were to end, the\n> index would become an empty index, so just treat it as if it was\n> already empty during recovery. Not everybody thought that this\n> behavior was ideal, but ISTM that it has fewer problems than any\n> alternative approach you can think of. The same argument works just as\n> well with any function that accepts a regclass argument IMV.\n\nIt depends, I guess, on how \"user-friendly\" all that should be. I\nhave seen in the past as argument that it may be sometimes better to\nhave a function do nothing rather than ERROR when these are used\nacross a scan of pg_class, for example, particularly for non-SRFs.\nEven if sometimes errors can be bypassed with more quals on the\nrelkind or such (aka less complicated queries with less JOINs to\nwrite).\n--\nMichael", "msg_date": "Mon, 2 Oct 2023 09:24:33 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgstatindex vs. !indisready" }, { "msg_contents": "On Sun, Oct 1, 2023 at 5:24 PM Michael Paquier <[email protected]> wrote:\n> > FWIW, amcheck's handling of unlogged relations when in hot standby\n> > mode is based on the following theory: if recovery were to end, the\n> > index would become an empty index, so just treat it as if it was\n> > already empty during recovery. Not everybody thought that this\n> > behavior was ideal, but ISTM that it has fewer problems than any\n> > alternative approach you can think of. The same argument works just as\n> > well with any function that accepts a regclass argument IMV.\n>\n> It depends, I guess, on how \"user-friendly\" all that should be. I\n> have seen in the past as argument that it may be sometimes better to\n> have a function do nothing rather than ERROR when these are used\n> across a scan of pg_class, for example, particularly for non-SRFs.\n> Even if sometimes errors can be bypassed with more quals on the\n> relkind or such (aka less complicated queries with less JOINs to\n> write).\n\nI think of recovery as a property of the whole system. So throwing an\nerror about one particular unlogged index that the user (say) checked\nduring recovery doesn't seem sensible. After all, the answer that\nRecoveryInProgress() gives can change in a way that's observable\nwithin individual transactions.\n\nAgain, I wouldn't claim that this is very elegant. Just that it seems\nto have the fewest problems.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Sun, 1 Oct 2023 17:44:29 -0700", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgstatindex vs. !indisready" }, { "msg_contents": "On Mon, Oct 02, 2023 at 09:24:33AM +0900, Michael Paquier wrote:\n> On Sun, Oct 01, 2023 at 04:20:42PM -0700, Peter Geoghegan wrote:\n> > On Sun, Oct 1, 2023 at 2:00 PM Noah Misch <[email protected]> wrote:\n> >> [brin_desummarize_range brin_summarize_new_values brin_summarize_range\n> >> gin_clean_pending_list] currently succeed. I propose to make them emit a\n> >> DEBUG1 message and return early, like amcheck does, except on !indisready.\n> >> This would allow users to continue running them on all indexes of the\n> >> applicable access method. Doing these operations on an\n> >> indisready&&!indisvalid index is entirely reasonable, since they relate to\n> >> INSERT/UPDATE/DELETE operations.\n> \n> Hmm. Still slightly incorrect in some cases? Before being switched\n> to indisvalid, an index built concurrently may miss some tuples\n> deleted before the reference snapshot used to build the index was\n> taken.\n\nThe !indisvalid index may be missing tuples, yes. In what way does that make\none of those four operations incorrect?\n\n\n", "msg_date": "Sun, 1 Oct 2023 18:31:26 -0700", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pgstatindex vs. !indisready" }, { "msg_contents": "On Sun, Oct 01, 2023 at 06:31:26PM -0700, Noah Misch wrote:\n> The !indisvalid index may be missing tuples, yes. In what way does that make\n> one of those four operations incorrect?\n\nReminding myself of what these four do, it looks that you're right and\nthat the state of indisvalid is not going to change what they report.\n\nStill, I'd like to agree with Tom's point to be more conservative and\ncheck also for indisvalid which is what the planner does. These\nfunctions will be used in SELECTs, and one thing that worries me is\nthat checks based on indisready may get copy-pasted somewhere else,\nleading to incorrect results where they get copied. (indisready &&\n!indisvalid) is a \"short\"-term combination in a concurrent build\nprocess, as well (depends on how long one waits for the old snapshots\nbefore switching indisvalid, but that's way shorter than the period of\ntime where the built indexes remain valid).\n--\nMichael", "msg_date": "Wed, 4 Oct 2023 09:00:23 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgstatindex vs. !indisready" }, { "msg_contents": "On Wed, Oct 04, 2023 at 09:00:23AM +0900, Michael Paquier wrote:\n> On Sun, Oct 01, 2023 at 06:31:26PM -0700, Noah Misch wrote:\n> > The !indisvalid index may be missing tuples, yes. In what way does that make\n> > one of those four operations incorrect?\n> \n> Reminding myself of what these four do, it looks that you're right and\n> that the state of indisvalid is not going to change what they report.\n> \n> Still, I'd like to agree with Tom's point to be more conservative and\n> check also for indisvalid which is what the planner does. These\n> functions will be used in SELECTs, and one thing that worries me is\n> that checks based on indisready may get copy-pasted somewhere else,\n> leading to incorrect results where they get copied. (indisready &&\n> !indisvalid) is a \"short\"-term combination in a concurrent build\n> process, as well (depends on how long one waits for the old snapshots\n> before switching indisvalid, but that's way shorter than the period of\n> time where the built indexes remain valid).\n\nNeither choice would harm the user experience in an important way, so I've\nfollowed your preference in the attached patch.", "msg_date": "Sun, 22 Oct 2023 14:02:45 -0700", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pgstatindex vs. !indisready" }, { "msg_contents": "On Sun, Oct 22, 2023 at 02:02:45PM -0700, Noah Misch wrote:\n> -\t/* OK, do it */\n> -\tbrinsummarize(indexRel, heapRel, heapBlk, true, &numSummarized, NULL);\n> +\t/* see gin_clean_pending_list() */\n> +\tif (indexRel->rd_index->indisvalid)\n> +\t\tbrinsummarize(indexRel, heapRel, heapBlk, true, &numSummarized, NULL);\n> +\telse\n> +\t\tereport(DEBUG1,\n> +\t\t\t\t(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n> +\t\t\t\t errmsg(\"index \\\"%s\\\" is not valid\",\n> +\t\t\t\t\t\tRelationGetRelationName(indexRel))));\n\nbrinsummarize() could return 0 even for a valid index, and we would\nnot be able to make the difference with an invalid index. Perhaps you\nare right and this is not a big deal in practice to do as you are\nsuggesting.\n--\nMichael", "msg_date": "Mon, 23 Oct 2023 10:47:50 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgstatindex vs. !indisready" } ]
[ { "msg_contents": "CREATE TABLE parent (id integer PRIMARY KEY);\n\nCREATE TABLE child (id integer REFERENCES parent ON DELETE CASCADE);\n\nCREATE FUNCTION silly() RETURNS trigger LANGUAGE plpgsql AS 'BEGIN RETURN NULL; END;';\n\nCREATE TRIGGER silly BEFORE DELETE ON child FOR EACH ROW EXECUTE FUNCTION silly();\n\nINSERT INTO parent VALUES (1);\n\nINSERT INTO child VALUES (1);\n\nDELETE FROM parent WHERE id = 1;\n\nTABLE child;\n id \n════\n 1\n(1 row)\n\nThe trigger function cancels the cascaded delete on \"child\", and we are left with\na row in \"child\" that references no row in \"parent\".\n\nYours,\nLaurenz Albe\n\n\n", "msg_date": "Mon, 02 Oct 2023 09:03:33 +0200", "msg_from": "Laurenz Albe <[email protected]>", "msg_from_op": true, "msg_subject": "Trigger violates foreign key constraint" }, { "msg_contents": "Perhaps it would be enough to run \"RI_FKey_noaction_del\" after\n\"RI_FKey_cascade_del\", although that would impact the performance.\n\nYours,\nLaurenz Albe\n\n\n", "msg_date": "Mon, 02 Oct 2023 12:02:17 +0200", "msg_from": "Laurenz Albe <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Trigger violates foreign key constraint" }, { "msg_contents": "Laurenz Albe <[email protected]> writes:\n> CREATE FUNCTION silly() RETURNS trigger LANGUAGE plpgsql AS 'BEGIN RETURN NULL; END;';\n> CREATE TRIGGER silly BEFORE DELETE ON child FOR EACH ROW EXECUTE FUNCTION silly();\n\n> The trigger function cancels the cascaded delete on \"child\", and we are left with\n> a row in \"child\" that references no row in \"parent\".\n\nYes. This is by design: triggers operate at a lower level than\nforeign keys, so an ill-conceived trigger can break an FK constraint.\nThat's documented somewhere, though maybe not visibly enough.\n\nThere are good reasons to want triggers to be able to see and\nreact to FK-driven updates, so it's unlikely that we'd want to\nrevisit that design decision, even if it hadn't already stood\nfor decades.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 02 Oct 2023 09:49:53 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Trigger violates foreign key constraint" }, { "msg_contents": "On Mon, 2023-10-02 at 09:49 -0400, Tom Lane wrote:\n> Laurenz Albe <[email protected]> writes:\n> > CREATE FUNCTION silly() RETURNS trigger LANGUAGE plpgsql AS 'BEGIN RETURN NULL; END;';\n> > CREATE TRIGGER silly BEFORE DELETE ON child FOR EACH ROW EXECUTE FUNCTION silly();\n> \n> > The trigger function cancels the cascaded delete on \"child\", and we are left with\n> > a row in \"child\" that references no row in \"parent\".\n> \n> Yes.  This is by design: triggers operate at a lower level than\n> foreign keys, so an ill-conceived trigger can break an FK constraint.\n> That's documented somewhere, though maybe not visibly enough.\n> \n> There are good reasons to want triggers to be able to see and\n> react to FK-driven updates, so it's unlikely that we'd want to\n> revisit that design decision, even if it hadn't already stood\n> for decades.\n\nThanks for the clarification. I keep learning.\n\nI didn't find anything about that in the documentation or the\nREADMEs in the source, but perhaps I didn't look well enough.\n\nYours,\nLaurenz Albe\n\n\n", "msg_date": "Mon, 02 Oct 2023 16:11:59 +0200", "msg_from": "Laurenz Albe <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Trigger violates foreign key constraint" }, { "msg_contents": "On Mon, 2023-10-02 at 09:49 -0400, Tom Lane wrote:\n> This is by design: triggers operate at a lower level than\n> foreign keys, so an ill-conceived trigger can break an FK constraint.\n> That's documented somewhere, though maybe not visibly enough.\n\nNot having found any documentation, I propose the attached caution.\n\nYours,\nLaurenz Albe", "msg_date": "Tue, 03 Oct 2023 09:24:47 +0200", "msg_from": "Laurenz Albe <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Trigger violates foreign key constraint" }, { "msg_contents": "On Mon, Oct 02, 2023 at 09:49:53AM -0400, Tom Lane wrote:\n> Laurenz Albe <[email protected]> writes:\n> > CREATE FUNCTION silly() RETURNS trigger LANGUAGE plpgsql AS 'BEGIN RETURN NULL; END;';\n> > CREATE TRIGGER silly BEFORE DELETE ON child FOR EACH ROW EXECUTE FUNCTION silly();\n> \n> > The trigger function cancels the cascaded delete on \"child\", and we are left with\n> > a row in \"child\" that references no row in \"parent\".\n> \n> Yes. This is by design: triggers operate at a lower level than\n> foreign keys, so an ill-conceived trigger can break an FK constraint.\n> That's documented somewhere, though maybe not visibly enough.\n> \n> There are good reasons to want triggers to be able to see and\n> react to FK-driven updates,\n\nI agree with that, but I also think it's a bug that other triggers can\ninvalidate the constraint, without even going out of their way to do so.\nIdeally, triggers would be able to react, yet when all non-superuser-defined\ncode settles, the constraint would still hold. While UNIQUE indexes over\nexpressions aren't that strict, at least for those you need to commit the\nclear malfeasance of redefining an IMMUTABLE function.\n\nOn Mon, Oct 02, 2023 at 12:02:17PM +0200, Laurenz Albe wrote:\n> Perhaps it would be enough to run \"RI_FKey_noaction_del\" after\n> \"RI_FKey_cascade_del\", although that would impact the performance.\n\nYes. A cure that doubles the number of heap fetches would be worse than the\ndisease, but a more-optimized version of this idea could work. The FK system\ncould use a broader optimization-oriented rewrite, to deal with the unbounded\nmemory usage and redundant I/O. If that happens, it could be a good time to\nplan for closing the trigger hole.\n\n\n", "msg_date": "Sun, 8 Oct 2023 11:17:50 -0700", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Trigger violates foreign key constraint" }, { "msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: not tested\nImplements feature: not tested\nSpec compliant: not tested\nDocumentation: tested, passed\n\nIt seems like people have been talking about this problem since 2010 (https://stackoverflow.com/questions/3961825/foreign-keys-in-postgresql-can-be-violated-by-trigger), and we finally got our act together and put it in the official document today, only 13 years later.\n\nThe new status of this patch is: Ready for Committer\n", "msg_date": "Mon, 30 Oct 2023 21:08:55 +0000", "msg_from": "shihao zhong <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Trigger violates foreign key constraint" }, { "msg_contents": "On Tue, Oct 3, 2023 at 12:52 AM Laurenz Albe <[email protected]>\nwrote:\n\n> On Mon, 2023-10-02 at 09:49 -0400, Tom Lane wrote:\n> > This is by design: triggers operate at a lower level than\n> > foreign keys, so an ill-conceived trigger can break an FK constraint.\n> > That's documented somewhere, though maybe not visibly enough.\n>\n> Not having found any documentation, I propose the attached caution.\n>\n>\nI dislike scaring the user like this without providing any context on what\nconditions or actions are problematic.\n\nThe ON DELETE and ON UPDATE clauses of foreign keys are implemented as\nsystem triggers on the referenced table that invoke additional delete or\nupdate commands on the referencing table. The final outcome of these\nadditional commands are not checked - it is the responsibility of the DBA\nto ensure that the user triggers on the referencing table actually remove\nthe rows they are requested to remove, or update to NULL any referencing\nforeign key columns. In particular, before row triggers that return NULL\nwill prevent the delete/update from occurring and thus result in a violated\nforeign key constraint.\n\nAdd sgml as needed, note the original patch missed adding \"<productname>\"\nto PostgreSQL.\n\nDavid J.\n\nOn Tue, Oct 3, 2023 at 12:52 AM Laurenz Albe <[email protected]> wrote:On Mon, 2023-10-02 at 09:49 -0400, Tom Lane wrote:\n> This is by design: triggers operate at a lower level than\n> foreign keys, so an ill-conceived trigger can break an FK constraint.\n> That's documented somewhere, though maybe not visibly enough.\n\nNot having found any documentation, I propose the attached caution.I dislike scaring the user like this without providing any context on what conditions or actions are problematic.The ON DELETE and ON UPDATE clauses of foreign keys are implemented as system triggers on the referenced table that invoke additional delete or update commands on the referencing table.  The final outcome of these additional commands are not checked - it is the responsibility of the DBA to ensure that the user triggers on the referencing table actually remove the rows they are requested to remove, or update to NULL any referencing foreign key columns.  In particular, before row triggers that return NULL will prevent the delete/update from occurring and thus result in a violated foreign key constraint.Add sgml as needed, note the original patch missed adding \"<productname>\" to PostgreSQL.David J.", "msg_date": "Mon, 30 Oct 2023 14:50:58 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Trigger violates foreign key constraint" }, { "msg_contents": "On Mon, Oct 30, 2023 at 2:50 PM David G. Johnston <\[email protected]> wrote:\n\n> On Tue, Oct 3, 2023 at 12:52 AM Laurenz Albe <[email protected]>\n> wrote:\n>\n>> On Mon, 2023-10-02 at 09:49 -0400, Tom Lane wrote:\n>> > This is by design: triggers operate at a lower level than\n>> > foreign keys, so an ill-conceived trigger can break an FK constraint.\n>> > That's documented somewhere, though maybe not visibly enough.\n>>\n>> Not having found any documentation, I propose the attached caution.\n>>\n>>\n> I dislike scaring the user like this without providing any context on what\n> conditions or actions are problematic.\n>\n> The ON DELETE and ON UPDATE clauses of foreign keys are implemented as\n> system triggers on the referenced table that invoke additional delete or\n> update commands on the referencing table. The final outcome of these\n> additional commands are not checked - it is the responsibility of the DBA\n> to ensure that the user triggers on the referencing table actually remove\n> the rows they are requested to remove, or update to NULL any referencing\n> foreign key columns. In particular, before row triggers that return NULL\n> will prevent the delete/update from occurring and thus result in a violated\n> foreign key constraint.\n>\n> Add sgml as needed, note the original patch missed adding \"<productname>\"\n> to PostgreSQL.\n>\n>\nAdditionally, the existing place this is covered is here:\n\n\"\"\"\nTrigger functions invoked by per-statement triggers should always return\nNULL. Trigger functions invoked by per-row triggers can return a table row\n(a value of type HeapTuple) to the calling executor, if they choose. A\nrow-level trigger fired before an operation has the following choices:\n\nIt can return NULL to skip the operation for the current row. This\ninstructs the executor to not perform the row-level operation that invoked\nthe trigger (the insertion, modification, or deletion of a particular table\nrow).\n\nFor row-level INSERT and UPDATE triggers only, the returned row becomes the\nrow that will be inserted or will replace the row being updated. This\nallows the trigger function to modify the row being inserted or updated.\n\nA row-level BEFORE trigger that does not intend to cause either of these\nbehaviors must be careful to return as its result the same row that was\npassed in (that is, the NEW row for INSERT and UPDATE triggers, the OLD row\nfor DELETE triggers).\n\"\"\"\n\nWe should probably add a note pointing back to the DDL chapter and that\nmore concisely says.\n\n\"Note: If this table also contains any foreign key constraints with on\nupdate or on delete clauses, then a failure to return the same row that was\npassed in for update and delete triggers is going to result in broken\nreferential integrity for the affected row.\"\n\nI do like \"broken referential integrity\" from the original patch over\n\"violated foreign key constraint\" - so that needs to be substituted in for\nthe final part of my earlier proposal if we go with its more detailed\nwording. My issue with \"violated\" is that it sounds like the system is\ngoing to catch it at the end - broken doesn't have the same implication.\n\nDavid J.\n\nOn Mon, Oct 30, 2023 at 2:50 PM David G. Johnston <[email protected]> wrote:On Tue, Oct 3, 2023 at 12:52 AM Laurenz Albe <[email protected]> wrote:On Mon, 2023-10-02 at 09:49 -0400, Tom Lane wrote:\n> This is by design: triggers operate at a lower level than\n> foreign keys, so an ill-conceived trigger can break an FK constraint.\n> That's documented somewhere, though maybe not visibly enough.\n\nNot having found any documentation, I propose the attached caution.I dislike scaring the user like this without providing any context on what conditions or actions are problematic.The ON DELETE and ON UPDATE clauses of foreign keys are implemented as system triggers on the referenced table that invoke additional delete or update commands on the referencing table.  The final outcome of these additional commands are not checked - it is the responsibility of the DBA to ensure that the user triggers on the referencing table actually remove the rows they are requested to remove, or update to NULL any referencing foreign key columns.  In particular, before row triggers that return NULL will prevent the delete/update from occurring and thus result in a violated foreign key constraint.Add sgml as needed, note the original patch missed adding \"<productname>\" to PostgreSQL.Additionally, the existing place this is covered is here:\"\"\"Trigger functions invoked by per-statement triggers should always return NULL. Trigger functions invoked by per-row triggers can return a table row (a value of type HeapTuple) to the calling executor, if they choose. A row-level trigger fired before an operation has the following choices:It can return NULL to skip the operation for the current row. This instructs the executor to not perform the row-level operation that invoked the trigger (the insertion, modification, or deletion of a particular table row).For row-level INSERT and UPDATE triggers only, the returned row becomes the row that will be inserted or will replace the row being updated. This allows the trigger function to modify the row being inserted or updated.A row-level BEFORE trigger that does not intend to cause either of these behaviors must be careful to return as its result the same row that was passed in (that is, the NEW row for INSERT and UPDATE triggers, the OLD row for DELETE triggers).\"\"\"We should probably add a note pointing back to the DDL chapter and that more concisely says.\"Note: If this table also contains any foreign key constraints with on update or on delete clauses, then a failure to return the same row that was passed in for update and delete triggers is going to result in broken referential integrity for the affected row.\"I do like \"broken referential integrity\" from the original patch over \"violated foreign key constraint\" - so that needs to be substituted in for the final part of my earlier proposal if we go with its more detailed wording.  My issue with \"violated\" is that it sounds like the system is going to catch it at the end - broken doesn't have the same implication.David J.", "msg_date": "Mon, 30 Oct 2023 15:03:28 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Trigger violates foreign key constraint" }, { "msg_contents": "Thanks for having a look at my patch!\n\nOn Mon, 2023-10-30 at 15:03 -0700, David G. Johnston wrote:\n> On Mon, Oct 30, 2023 at 2:50 PM David G. Johnston <[email protected]> wrote:\n> > On Tue, Oct 3, 2023 at 12:52 AM Laurenz Albe <[email protected]> wrote:\n> > > On Mon, 2023-10-02 at 09:49 -0400, Tom Lane wrote:\n> > > > This is by design: triggers operate at a lower level than\n> > > > foreign keys, so an ill-conceived trigger can break an FK constraint.\n> > > > That's documented somewhere, though maybe not visibly enough.\n> > > \n> > > Not having found any documentation, I propose the attached caution.\n> > \n> > I dislike scaring the user like this without providing any context on what\n> > conditions or actions are problematic.\n\nI specifically *want* to scare^H^H^H^H^Halert the user, and I thought I\nprovided sufficient context and a link to a more detailed description of\nhow triggers behave.\nWhat is unclear or lacking in the proposed wording?\n\n In particular, other triggers\n defined on the referencing table can cancel or modify the effects of\n cascading delete or update, thereby breaking referential integrity.\n\n> > The ON DELETE and ON UPDATE clauses of foreign keys are implemented as system triggers\n> > on the referenced table that invoke additional delete or update commands on the\n> > referencing table.  The final outcome of these additional commands are not checked -\n> > it is the responsibility of the DBA to ensure that the user triggers on the\n> > referencing table actually remove the rows they are requested to remove, or\n> > update to NULL any referencing foreign key columns.  In particular, before row\n> > triggers that return NULL will prevent the delete/update from occurring and thus\n> > result in a violated foreign key constraint.\n\nI didn't plan to write a novel on the topic... and I don't think your wording is\nclearer than mine. I went over my text again with the intent to add clarity, but\napart from a few minor modifications (\"other triggers\" -> \"user-defined triggers\")\nI couldn't make it clearer. I'd have to write an example to make it clearer,\nand that would certainly be out of scope.\n\n> > Add sgml as needed, note the original patch missed adding \"<productname>\" to PostgreSQL.\n\nAh, thanks for noticing! Fixed.\n\n> \n> Additionally, the existing place this is covered is here:\n> \n> [https://www.postgresql.org/docs/current/trigger-definition.html]\n> \n> We should probably add a note pointing back to the DDL chapter and that more concisely says.\n> \n> \"Note: If this table also contains any foreign key constraints with on update\n> or on delete clauses, then a failure to return the same row that was passed in\n> for update and delete triggers is going to result in broken referential integrity\n> for the affected row.\"\n\nMy patch already contains a link to this very section.\n\nI tried to understand your sentence and had to read it several times. I don't\nthink that it adds clarity to my patch.\n\n\n\nAttached is a slightly modified version of the patch.\n\nYours,\nLaurenz Albe", "msg_date": "Tue, 31 Oct 2023 11:40:38 +0100", "msg_from": "Laurenz Albe <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Trigger violates foreign key constraint" }, { "msg_contents": "Hi,\n\n> Attached is a slightly modified version of the patch.\n\nThe patch was marked as \"Needs Review\" so I decided to take a look.\n\nI believe it's a good patch. The text is well written, has the\nnecessary references, it warns the user but doesn't scare him/her too\nmuch.\n\n> I couldn't make it clearer. I'd have to write an example to make it clearer,\n> and that would certainly be out of scope.\n\nPersonally I don't think we should document particular steps to break\nFK constraints. In a similar situation when we documented the\nlimitations about XID wraparound we didn't document particular steps\nto trigger XID wraparound. I don't recall encountering such examples\nin the documentation, so I don't think we should add them here, at\nleast for consistency.\n\nAll in all the patch seems to be as good as it will get. I suggest merging it.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Thu, 16 Nov 2023 16:40:28 +0300", "msg_from": "Aleksander Alekseev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Trigger violates foreign key constraint" }, { "msg_contents": "I fully support this addition to the documentation. The legal \npossibility of breaking data consistency must be documented at least.\n\nPlease, consider small suggestion to replace last sentence.\n\n- This is not considered a bug, and it is the responsibility of the user \nto write triggers so that such problems are avoided.\n+ It is the trigger programmer's responsibility to avoid such scenarios.\n\nTo be consistent with the sentence about recursive trigger calls: [1]\n\"It is the trigger programmer's responsibility to avoid infinite \nrecursion in such scenarios.\"\n\nAlso I don't really like \"This is not considered a bug\" part, since it \nlooks like an excuse.\n\n\n1. https://www.postgresql.org/docs/devel/trigger-definition.html\n\n-- \nPavel Luzanov\nPostgres Professional: https://postgrespro.com\n\n\n\n", "msg_date": "Fri, 22 Dec 2023 10:59:21 +0300", "msg_from": "Pavel Luzanov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Trigger violates foreign key constraint" }, { "msg_contents": "One more not documented issue with system triggers. It might be worth \nconsidering together.\n\nCREATE ROLE app_owner;\n\nCREATE TABLE t (\n     id        int PRIMARY KEY,\n     parent_id int REFERENCES t(id)\n);\n\nALTER TABLE t OWNER TO app_owner;\n\n-- No actions by application owner\nREVOKE ALL ON t FROM app_owner;\n\nINSERT INTO t VALUES (1,NULL);\n\nDELETE FROM t;\nERROR:  permission denied for table t\nCONTEXT:  SQL statement \"SELECT 1 FROM ONLY \"public\".\"t\" x WHERE \"id\" \nOPERATOR(pg_catalog.=) $1 FOR KEY SHARE OF x\"\n\nIt is not at all obvious why the superuser cannot delete the row that he \njust added. The reason is that system triggers are executed with the \nrights of the table owner, not the current role. But I can't find a \ndescription of this behavior in the documentation.\n\n-- \nPavel Luzanov\nPostgres Professional: https://postgrespro.com\n\n\n\n", "msg_date": "Fri, 22 Dec 2023 12:01:05 +0300", "msg_from": "Pavel Luzanov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Trigger violates foreign key constraint" }, { "msg_contents": "On Fri, 2023-12-22 at 10:59 +0300, Pavel Luzanov wrote:\n> Please, consider small suggestion to replace last sentence.\n> \n> - This is not considered a bug, and it is the responsibility of the user \n> to write triggers so that such problems are avoided.\n> + It is the trigger programmer's responsibility to avoid such scenarios.\n> \n> To be consistent with the sentence about recursive trigger calls: [1]\n> \"It is the trigger programmer's responsibility to avoid infinite \n> recursion in such scenarios.\"\n\nYes, that is better - shorter and avoids passive mode. Changed.\n\n> Also I don't really like \"This is not considered a bug\" part, since it \n> looks like an excuse.\n\nIn a way, it is an excuse, so why not be honest about it.\n\nThe example you provided in your other message (cascading triggers\nfail if the table ovner has revoked the required permissions from\nherself) is not really about breaking foreign keys. You hit a\nsurprising error, but referential integrity will be maintained.\n\nPatch v3 is attached.\n\nYours,\nLaurenz Albe", "msg_date": "Fri, 22 Dec 2023 12:39:23 +0100", "msg_from": "Laurenz Albe <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Trigger violates foreign key constraint" }, { "msg_contents": "On 22.12.2023 14:39, Laurenz Albe wrote:\n> Yes, that is better - shorter and avoids passive mode. Changed.\n\nThanks.\n\n>> Also I don't really like \"This is not considered a bug\" part, since it\n>> looks like an excuse.\n> In a way, it is an excuse, so why not be honest about it.\n\nI still think that the \"this is not a bug\" style is not suitable for \ndocumentation.\nBut I checked the documentation and found 3-4 more such places.\nTherefore, it's ok from me, especially since it really looks like a bug.\n> The example you provided in your other message (cascading triggers\n> fail if the table ovner has revoked the required permissions from\n> herself) is not really about breaking foreign keys. You hit a\n> surprising error, but referential integrity will be maintained.\n\nYes, referential integrity will be maintained. But perhaps\nit is worth adding a section to the documentation about system triggers\nthat are used to implement foreign keys. Now it is not mentioned anywhere,\nbut questions and problems arise from time to time.\nSuch a section named \"System triggers\" may be added as a separate chapter\nto Part VII. Internals or as a subsection to Chapter 38.Triggers.\n\nI thought about this after your recent excellent article [1],\nwhich has an introduction to system triggers.\n\nThis does not negate the need for the patch being discussed.\n\n> Patch v3 is attached.\n\nFor me, it is ready for committer.\n\n1. https://www.cybertec-postgresql.com/en/broken-foreign-keys-postgresql/\n\n-- \nPavel Luzanov\nPostgres Professional:https://postgrespro.com\n\n\n\n\n\n\n On 22.12.2023 14:39, Laurenz Albe wrote:\n\n\nYes, that is better - shorter and avoids passive mode. Changed.\n\n\n Thanks.\n\n\n\n\n\nAlso I don't really like \"This is not considered a bug\" part, since it \nlooks like an excuse.\n\n\n\nIn a way, it is an excuse, so why not be honest about it.\n\n\n I still think that the \"this is not a bug\" style is not suitable for\n documentation.\n But I checked the documentation and found 3-4 more such places.\n Therefore, it's ok from me, especially since it really looks like a\n bug.\n\n\n\n\nThe example you provided in your other message (cascading triggers\nfail if the table ovner has revoked the required permissions from\nherself) is not really about breaking foreign keys. You hit a\nsurprising error, but referential integrity will be maintained.\n\n\n Yes, referential integrity will be maintained. But perhaps \n it is worth adding a section to the documentation about system\n triggers\n that are used to implement foreign keys. Now it is not mentioned\n anywhere,\n but questions and problems arise from time to time.\n Such a section named \"System triggers\" may be added as a separate\n chapter\n to Part VII. Internals or as a subsection to Chapter 38.Triggers.\n\n I thought about this after your recent excellent article [1],\n which has an introduction to system triggers.\n\n This does not negate the need for the patch being discussed.\n\n\n\n\nPatch v3 is attached.\n\n\n\n For me, it is ready for committer.\n\n 1.\n https://www.cybertec-postgresql.com/en/broken-foreign-keys-postgresql/\n-- \nPavel Luzanov\nPostgres Professional: https://postgrespro.com", "msg_date": "Mon, 25 Dec 2023 12:03:16 +0300", "msg_from": "Pavel Luzanov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Trigger violates foreign key constraint" }, { "msg_contents": "Laurenz Albe <[email protected]> writes:\n> Patch v3 is attached.\n\nI agree with documenting this hazard, but I think it'd be better\nto do so in the \"Triggers\" chapter. There is no hazard unless\nyou are writing user-defined triggers, which is surely far fewer\npeople than use foreign keys. So I suggest something like the\nattached.\n\n\t\t\tregards, tom lane", "msg_date": "Sun, 07 Apr 2024 14:52:23 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Trigger violates foreign key constraint" }, { "msg_contents": "Hi,\n\n> Laurenz Albe <[email protected]> writes:\n> > Patch v3 is attached.\n>\n> I agree with documenting this hazard, but I think it'd be better\n> to do so in the \"Triggers\" chapter. There is no hazard unless\n> you are writing user-defined triggers, which is surely far fewer\n> people than use foreign keys. So I suggest something like the\n> attached.\n\nI don't mind changing the chapter, but I prefer the wording chosen in\nv3. The explanation in v4 is somewhat hard to follow IMO.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Thu, 18 Apr 2024 15:24:39 +0300", "msg_from": "Aleksander Alekseev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Trigger violates foreign key constraint" }, { "msg_contents": "Aleksander Alekseev <[email protected]> writes:\n>> I agree with documenting this hazard, but I think it'd be better\n>> to do so in the \"Triggers\" chapter. There is no hazard unless\n>> you are writing user-defined triggers, which is surely far fewer\n>> people than use foreign keys. So I suggest something like the\n>> attached.\n\n> I don't mind changing the chapter, but I prefer the wording chosen in\n> v3. The explanation in v4 is somewhat hard to follow IMO.\n\nWell, I didn't like v3 because I thought it was wrong, or at least\nfairly misleading. The fact that we use triggers rather than some\nother mechanism to invoke referential updates isn't really relevant.\nThe issue is that the update actions fire user-defined triggers,\nand it's those that are the (potential) problem.\n\nPerhaps we should leave the system triggers out of the discussion\nentirely? More or less like:\n\n If a foreign key constraint specifies referential actions (that\n is, cascading updates or deletes), those actions are performed via\n ordinary SQL update or delete commands on the referencing table.\n In particular, any triggers that exist on the referencing table\n will be fired for those changes. If such a trigger modifies or\n blocks the effect of one of these commands, the end result could\n be to break referential integrity. It is the trigger programmer's\n responsibility to avoid that.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 18 Apr 2024 11:23:08 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Trigger violates foreign key constraint" }, { "msg_contents": "Hi,\n\n> Perhaps we should leave the system triggers out of the discussion\n> entirely? More or less like:\n>\n> If a foreign key constraint specifies referential actions (that\n> is, cascading updates or deletes), those actions are performed via\n> ordinary SQL update or delete commands on the referencing table.\n> In particular, any triggers that exist on the referencing table\n> will be fired for those changes. If such a trigger modifies or\n> blocks the effect of one of these commands, the end result could\n> be to break referential integrity. It is the trigger programmer's\n> responsibility to avoid that.\n\nThat's perfect!\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Fri, 19 Apr 2024 12:16:50 +0300", "msg_from": "Aleksander Alekseev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Trigger violates foreign key constraint" }, { "msg_contents": "Aleksander Alekseev <[email protected]> writes:\n>> Perhaps we should leave the system triggers out of the discussion\n>> entirely? More or less like:\n>> \n>> If a foreign key constraint specifies referential actions (that\n>> is, cascading updates or deletes), those actions are performed via\n>> ordinary SQL update or delete commands on the referencing table.\n>> In particular, any triggers that exist on the referencing table\n>> will be fired for those changes. If such a trigger modifies or\n>> blocks the effect of one of these commands, the end result could\n>> be to break referential integrity. It is the trigger programmer's\n>> responsibility to avoid that.\n\n> That's perfect!\n\nHearing no further comments, done like that.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 08 May 2024 11:14:18 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Trigger violates foreign key constraint" } ]
[ { "msg_contents": "Following [0], I did a broader analysis of some dubious or nonsensical \nlike/unlike combinations in the pg_dump tests.\n\nThis includes\n\n1) Remove useless entries from \"unlike\" lists. Runs that are not\n listed in \"like\" don't need to be excluded in \"unlike\".\n\n2) Ensure there is always a \"like\" list, even if it is empty. This\n makes the test more self-documenting.\n\n3) Use predefined lists such as %full_runs where appropriate, instead\n of listing all runs separately.\n\nI also added code that checks 1 and 2 automatically and issues a message\nfor violations. (This is currently done with \"diag\". We could also \nmake it an error.)\n\nThe results are in the attached patch.\n\n[0]: \nhttps://www.postgresql.org/message-id/[email protected]", "msg_date": "Mon, 2 Oct 2023 09:12:29 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Clean up some pg_dump tests" }, { "msg_contents": "I tried this out. I agree it's a good change. BTW, this made me\nrealize that \"unlike\" is not a good name: maybe it should be called\n\"except\".\n\nOn 2023-Oct-02, Peter Eisentraut wrote:\n\n> +\t\tif (!defined($tests{$test}->{like}))\n> +\t\t{\n> +\t\t\tdiag \"missing like in test \\\"$test\\\"\";\n> +\t\t}\n> +\t\tif ($tests{$test}->{unlike}->{$test_key} &&\n> +\t\t\t!defined($tests{$test}->{like}->{$test_key}))\n> +\t\t{\n> +\t\t\tdiag \"useless unlike \\\"$test_key\\\" in test \\\"$test\\\"\";\n> +\t\t}\n\nI would add quotes to the words \"like\" and \"unlike\" there. Otherwise,\nthese sentences are hard to parse. Also, some commentary on what this\nis about seems warranted: maybe \"Check that this test properly defines\nwhich dumps the output should match on.\" or similar.\n\nI didn't like using diag(), because automated runs will not alert to any\nproblems. Now maybe that's not critical, but I fear that people would\nnot notice problems if they are just noise in the output. Let's make\nthem test errors. fail() seems good enough: with the lines I quote\nabove and omitting the test corrections, I get this, which seems good\nenough:\n\n# Failed test 'useless unlike \"binary_upgrade\" in test \"Disabled trigger on partition is not created\"'\n# at t/002_pg_dump.pl line 4960.\n\n# Failed test 'useless unlike \"clean\" in test \"Disabled trigger on partition is not created\"'\n# at t/002_pg_dump.pl line 4960.\n\n[... a few others ...]\n\nTest Summary Report\n-------------------\nt/002_pg_dump.pl (Wstat: 15104 (exited 59) Tests: 11368 Failed: 59)\n Failed tests: 241, 486, 731, 1224, 1473, 1719, 1968, 2217\n 2463, 2712, 2961, 3207, 3452, 3941, 4190\n 4442, 4692, 4735-4736, 4943, 5094, 5189\n 5242, 5341, 5436, 5681, 5926, 6171, 6660\n 6905, 7150, 7395, 7640, 7683, 7762, 7887\n 7930, 7941, 8134, 8187, 8229, 8287, 8626\n 8871, 8924, 9023, 9170, 9269, 9457, 9515\n 9704, 9762, 10345, 10886, 10985, 11105\n 11123, 11134, 11327\n Non-zero exit status: 59\nFiles=5, Tests=11482, 15 wallclock secs ( 0.43 usr 0.04 sys + 4.56 cusr 1.63 csys = 6.66 CPU)\nResult: FAIL\n\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"Ni aún el genio muy grande llegaría muy lejos\nsi tuviera que sacarlo todo de su propio interior\" (Goethe)\n\n\n", "msg_date": "Mon, 9 Oct 2023 11:20:14 +0200", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Clean up some pg_dump tests" }, { "msg_contents": "On 09.10.23 11:20, Alvaro Herrera wrote:\n> I tried this out. I agree it's a good change. BTW, this made me\n> realize that \"unlike\" is not a good name: maybe it should be called\n> \"except\".\n\nright\n\n> I would add quotes to the words \"like\" and \"unlike\" there. Otherwise,\n> these sentences are hard to parse. Also, some commentary on what this\n> is about seems warranted: maybe \"Check that this test properly defines\n> which dumps the output should match on.\" or similar.\n\nDone.\n\nI also moved the code a bit earlier, before the checks for supported \ncompression libraries etc., so it runs even if those cause a skip.\n\n> I didn't like using diag(), because automated runs will not alert to any\n> problems. Now maybe that's not critical, but I fear that people would\n> not notice problems if they are just noise in the output. Let's make\n> them test errors. fail() seems good enough: with the lines I quote\n> above and omitting the test corrections, I get this, which seems good\n> enough:\n\nAfter researching this a bit more, I think \"die\" is the convention for \nproblems in the test definitions themselves. (Otherwise, you're writing \na test about the tests, which would be a bit weird.) The result is \napproximately the same.", "msg_date": "Tue, 10 Oct 2023 10:03:47 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Clean up some pg_dump tests" }, { "msg_contents": "On 10.10.23 10:03, Peter Eisentraut wrote:\n> On 09.10.23 11:20, Alvaro Herrera wrote:\n>> I tried this out.  I agree it's a good change.  BTW, this made me\n>> realize that \"unlike\" is not a good name: maybe it should be called\n>> \"except\".\n> \n> right\n> \n>> I would add quotes to the words \"like\" and \"unlike\" there.  Otherwise,\n>> these sentences are hard to parse.  Also, some commentary on what this\n>> is about seems warranted: maybe \"Check that this test properly defines\n>> which dumps the output should match on.\" or similar.\n> \n> Done.\n> \n> I also moved the code a bit earlier, before the checks for supported \n> compression libraries etc., so it runs even if those cause a skip.\n> \n>> I didn't like using diag(), because automated runs will not alert to any\n>> problems.  Now maybe that's not critical, but I fear that people would\n>> not notice problems if they are just noise in the output.  Let's make\n>> them test errors.  fail() seems good enough: with the lines I quote\n>> above and omitting the test corrections, I get this, which seems good\n>> enough:\n> \n> After researching this a bit more, I think \"die\" is the convention for \n> problems in the test definitions themselves.  (Otherwise, you're writing \n> a test about the tests, which would be a bit weird.)  The result is \n> approximately the same.\n\ncommitted\n\n\n", "msg_date": "Wed, 18 Oct 2023 08:16:47 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Clean up some pg_dump tests" }, { "msg_contents": "On 02.10.23 09:12, Peter Eisentraut wrote:\n> 1) Remove useless entries from \"unlike\" lists.  Runs that are not\n>    listed in \"like\" don't need to be excluded in \"unlike\".\n> \n> 2) Ensure there is always a \"like\" list, even if it is empty.  This\n>    makes the test more self-documenting.\n\n> I also added code that checks 1 and 2 automatically and issues a message\n> for violations.\n\nI have recently discovered that the same code also exists separately in \nthe test_pg_dump module test. This should probably be kept consistent. \nSo here is a patch that adds the same checks there. In this case, we \ndidn't need to fix any of the existing subtests.\n\nI plan to commit this soon if there are no concerns.", "msg_date": "Mon, 5 Feb 2024 16:42:16 +0100", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Clean up some pg_dump tests" } ]
[ { "msg_contents": "Hello, all!\n\nI found a query which consumes a lot of memory and triggers OOM killer. \nMemory leak occurs in memoize node for numeric key.\nVersion postgresql is 14.9. The problem is very \nsimilar https://www.postgresql.org/message-id/[email protected]\n\nI attached to the backend with a debugger and set a breakpoint in AllocSetAlloc \n\n(gdb) bt 10\n#0 AllocSetAlloc (context=0x5c55086dc2f0, size=12) at aset.c:722\n#1 0x00005c5507d886e0 in palloc (size=size@entry=12) at mcxt.c:1082\n#2 0x00005c5507890bba in detoast_attr (attr=0x715d5daa04c9) at detoast.c:184\n#3 0x00005c5507d62375 in pg_detoast_datum (datum=<optimized out>) at fmgr.c:1725\n#4 0x00005c5507cc94ea in hash_numeric (fcinfo=<optimized out>) at numeric.c:2554\n#5 0x00005c5507d61570 in FunctionCall1Coll (flinfo=flinfo@entry=0x5c5508b93d00, \ncollation=<optimized out>, arg1=<optimized out>) at fmgr.c:1138\n#6 0x00005c5507aadc16 in MemoizeHash_hash (key=0x0, tb=<optimized out>) at nodeMemoize.c:199\n#7 0x00005c5507aadf22 in memoize_insert (key=0x0, found=<synthetic pointer>,\n tb=0x5c5508bb4760) at ../../../src/include/lib/simplehash.h:762\n#8 cache_lookup (found=<synthetic pointer>, mstate=0x5c5508b91418) at nodeMemoize.c:519\n#9 ExecMemoize (pstate=0x5c5508b91418) at nodeMemoize.c:705\n\nI was able to create reproducible test case on machine with default config\nand postgresql 14.9: \n\nCREATE TABLE table1 (\n\tid numeric(38) NOT NULL,\n\tcol1 text,\n\tCONSTRAINT id2 PRIMARY KEY (id)\n);\nCREATE TABLE table2 (\n\tid numeric(38) NOT NULL,\n\tid_table1 numeric(38) NULL,\n\tCONSTRAINT id1 PRIMARY KEY (id)\n);\nALTER TABLE table2 ADD CONSTRAINT constr1 FOREIGN KEY (id_table1) REFERENCES table1(id);\n\nINSERT INTO table1 (id, col1)\nSELECT id::numeric, id::text\n FROM generate_series(3000000000, 3000000000 + 600000) gs(id);\n\nINSERT INTO table2 (id, id_table1)\nSELECT id::numeric , (select floor(random() * 600000)::numeric + 3000000000)::numeric\n FROM generate_series(1,600000) gs(id);\n\nset max_parallel_workers_per_gather=0;\nset enable_hashjoin = off;\n\nEXPLAIN analyze\nselect sum(q.id_table1)\n from (\nSELECT t2.*\n FROM table1 t1\n JOIN table2 t2\n ON t2.id_table1 = t1.id) q;\n\nPlan:\nAggregate (cost=25744.90..25744.91 rows=1 width=32) (actual time=380.140..380.142 rows=1 loops=1)\n -> Nested Loop (cost=0.43..24244.90 rows=600000 width=9) (actual time=0.063..310.915 rows=600000 loops=1)\n -> Seq Scan on table2 t2 (cost=0.00..9244.00 rows=600000 width=9) (actual time=0.009..38.629 rows=600000 loops=1)\n -> Memoize (cost=0.43..0.47 rows=1 width=8) (actual time=0.000..0.000 rows=1 loops=600000)\n Cache Key: t2.id_table1\n Cache Mode: logical\n Hits: 599999 Misses: 1 Evictions: 0 Overflows: 0 Memory Usage: 1kB\n -> Index Only Scan using id2 on table1 t1 (cost=0.42..0.46 rows=1 width=8) (actual time=0.039..0.040 rows=1 loops=1)\n Index Cond: (id = t2.id_table1)\n Heap Fetches: 0\nPlanning Time: 0.445 ms\nExecution Time: 380.750 ms\n\nI've attached memoize_memory_leak_numeric_key.patch to address this.\n\nUsing test case, here are the memory stats before and after the\nfix (taken during ExecEndMemoize by using MemoryContextStatsDetail(TopMemoryContext, 100, 1)).\nBefore:\n ExecutorState: 25209672 total in 15 blocks; 1134432 free (7 chunks); 24075240 used\n MemoizeHashTable: 8192 total in 1 blocks; 7480 free (1 chunks); 712 used\n\t\t\nAfter:\n ExecutorState: 76616 total in 5 blocks; 1776 free (8 chunks); 74840 used\n MemoizeHashTable: 8192 total in 1 blocks; 7480 free (1 chunks); 712 used \t\t\n\nThanks,\nAlexei Orlov\[email protected],\[email protected]", "msg_date": "Mon, 2 Oct 2023 08:20:31 +0000", "msg_from": "Orlov Aleksej <[email protected]>", "msg_from_op": true, "msg_subject": "[PATCH] Fix memory leak in memoize for numeric key" }, { "msg_contents": "On Tue, 3 Oct 2023 at 19:38, Orlov Aleksej <[email protected]> wrote:\n> I found a query which consumes a lot of memory and triggers OOM killer.\n> Memory leak occurs in memoize node for numeric key.\n\nThanks for the analysis and the patch.\n\n> I've attached memoize_memory_leak_numeric_key.patch to address this.\n\nYeah, this is a bug for sure.\n\nLooking at ExecHashGetHashValue() for example purposes, I see it's\nquite careful to call ResetExprContext(econtext) at the top of the\nfunction to reset the tuple context.\n\nI think the patch might need to go a bit further and also adjust\nMemoizeHash_equal(). In non-binary mode, we just call\nExecQualAndReset() which evaluates the join condition and resets the\ncontext. The binary mode code does not do this, so I think we should\nexpand on what you've done and adjust that code too.\n\nI've done that in the attached patch.\n\nDavid", "msg_date": "Tue, 3 Oct 2023 21:51:31 +1300", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Fix memory leak in memoize for numeric key" }, { "msg_contents": "I've finished testing the patch. \r\nI confirm that the patch solves the problem and works just as fast.\r\n\r\nThanks,\r\nAlexey Orlov.\r\n", "msg_date": "Wed, 4 Oct 2023 08:08:49 +0000", "msg_from": "Orlov Aleksej <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [PATCH] Fix memory leak in memoize for numeric key" }, { "msg_contents": "On Wed, 4 Oct 2023 at 21:08, Orlov Aleksej <[email protected]> wrote:\n> I've finished testing the patch.\n> I confirm that the patch solves the problem and works just as fast.\n\nThanks for checking that.\n\nI've pushed the patch now.\n\nDavid\n\n\n", "msg_date": "Thu, 5 Oct 2023 20:33:12 +1300", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Fix memory leak in memoize for numeric key" } ]
[ { "msg_contents": "Hi hackers,\n\nPlease find attached a tiny patch to $SUBJECT.\n\nIt:\n\n - provides more consistency to the way we get files size in TAP tests\n - seems more elegant that relying on a hardcoded result position\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Mon, 2 Oct 2023 11:35:45 +0200", "msg_from": "\"Drouvot, Bertrand\" <[email protected]>", "msg_from_op": true, "msg_subject": "Replace (stat(<file>))[7] in TAP tests with -s" }, { "msg_contents": "\"Drouvot, Bertrand\" <[email protected]> writes:\n\n> Hi hackers,\n>\n> Please find attached a tiny patch to $SUBJECT.\n>\n> It:\n>\n> - provides more consistency to the way we get files size in TAP tests\n> - seems more elegant that relying on a hardcoded result position\n\nI approve of removing use of the list form of stat, it's a horrible API.\n\nIf we weren't already using -s everywhere else, I would prefer\nFile::stat, which makes stat (in scalar context) return an object with\nmethods for the fields, so you'd do stat($file)->size. It's included in\nPerl core since 5.4, and we're already using it in several places for\nother fields (mode and ino at least).\n\nI see another use of stat array positions (for mtime) in\nsrc/tools/msvc/Solution.pm, but that's on the chopping block, so not\nmuch point in fixing.\n\n- ilmari\n\n\n", "msg_date": "Mon, 02 Oct 2023 12:44:59 +0100", "msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Replace (stat(<file>))[7] in TAP tests with -s" }, { "msg_contents": "On Mon, Oct 02, 2023 at 12:44:59PM +0100, Dagfinn Ilmari Mannsåker wrote:\n> I approve of removing use of the list form of stat, it's a horrible API.\n\nAgreed, I've appied the suggestion to use -s, like we do anywhere\nelse.\n\n> If we weren't already using -s everywhere else, I would prefer\n> File::stat, which makes stat (in scalar context) return an object with\n> methods for the fields, so you'd do stat($file)->size. It's included in\n> Perl core since 5.4, and we're already using it in several places for\n> other fields (mode and ino at least).\n\nRight, like in 017_shm.pl. I didn't notice that.\n\n> I see another use of stat array positions (for mtime) in\n> src/tools/msvc/Solution.pm, but that's on the chopping block, so not\n> much point in fixing.\n\nThe removal of this code depends on a few more things, hopefully it\nwill be able to get rid of it during this release cycle.\n--\nMichael", "msg_date": "Tue, 3 Oct 2023 08:44:35 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Replace (stat(<file>))[7] in TAP tests with -s" } ]
[ { "msg_contents": "I posted this question to [email protected]<mailto:[email protected]> last week but on one has responded so posting here now.\n\nWe download the ODBC source from http://ftp.postgresql.org<http://ftp.postgresql.org/> and build it on-site, 13.02.0000 in this case.\n\nA colleague noticed that the following files in the psqlODBC MSI for Windows have no version numbers:\npgenlist.dll\npgenlista.dll\npgxalib.dll\npgenlist.pdb\npgenlista.pdb\npsqlodbc30a.pdb\npsqlodbc35w.pdb\n\nDoes anyone know if that is be design or some other reason? Should they have version numbers?\n\nI checked earlier build and the same holds for ODBC 12.02.0000.\nThanks, Mark\n\n\n\n\n\n\n\n\n\n\nI posted this question to \[email protected] last week but on one has responded so posting here now.\n\nWe download the ODBC source from http://ftp.postgresql.org and build it on-site, 13.02.0000 in this case.\n\nA colleague noticed that the following files in the psqlODBC MSI for Windows have no version numbers:\npgenlist.dll\npgenlista.dll\npgxalib.dll\npgenlist.pdb\npgenlista.pdb\npsqlodbc30a.pdb\npsqlodbc35w.pdb\n\nDoes anyone know if that is be design or some other reason?   Should they have version numbers?\n \nI checked earlier build and the same holds for ODBC 12.02.0000.\nThanks, Mark", "msg_date": "Mon, 2 Oct 2023 14:28:58 +0000", "msg_from": "Mark Hill <[email protected]>", "msg_from_op": true, "msg_subject": "pg*.dll and *.pdb files in psqlODBC have no version numbers" }, { "msg_contents": "Mark Hill <[email protected]> writes:\n> I posted this question to [email protected]<mailto:[email protected]> last week but on one has responded so posting here now.\n\n> A colleague noticed that the following files in the psqlODBC MSI for Windows have no version numbers:\n> ...\n> Does anyone know if that is be design or some other reason? Should they have version numbers?\n\nNo idea, but actually the pgsql-odbc list would be the most authoritative\nplace for answers, if you find none here.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 02 Oct 2023 10:45:07 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg*.dll and *.pdb files in psqlODBC have no version numbers" }, { "msg_contents": "On Mon, Oct 02, 2023 at 02:28:58PM +0000, Mark Hill wrote:\n> A colleague noticed that the following files in the psqlODBC MSI for Windows have no version numbers:\n> pgenlist.dll\n> pgenlista.dll\n> pgxalib.dll\n> pgenlist.pdb\n> pgenlista.pdb\n> psqlodbc30a.pdb\n> psqlodbc35w.pdb\n> \n> Does anyone know if that is be design or some other reason? Should\n> they have version numbers?\n\nVersion numbers are critical in MSI installers to make sure that\ncomponents get updated, so yes, these are important depending on the\nupgrade mode. (I vaguely remember that there's a hard mode where\nthings are forcibly replaced, and a soft mode where only components\nwith newer version numbers are replaced, but it's from memories from\nquite a few years ago so I may recall incorrectly).\n\n> I checked earlier build and the same holds for ODBC 12.02.0000.\n\nPerhaps it would be better to discuss that on the pgsql-odbc list,\nwhere the driver is maintained, not pgsql-hackers.\n--\nMichael", "msg_date": "Tue, 3 Oct 2023 08:42:02 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg*.dll and *.pdb files in psqlODBC have no version numbers" } ]
[ { "msg_contents": "As September has ended, I have closed commitfest 2023-09. The status \nbefore I started moving patches over was\n\nStatus summary: Needs review: 189. Waiting on Author: 30. Ready for \nCommitter: 28. Committed: 68. Moved to next CF: 1. Returned with \nFeedback: 16. Rejected: 1. Withdrawn: 5. Total: 338.\n\nThe final status is now\n\nStatus summary: Committed: 68. Moved to next CF: 248. Withdrawn: 5. \nRejected: 1. Returned with Feedback: 16. Total: 338.\n\nThe \"Committed\" number is lower than in the preceding commitfests, but \nit is actually higher than in the September commitfests of preceding years.\n\nI did some more adjustments of status and removal of stale reviewer \nentries. But overall, everything looked pretty accurate and up to date.\n\nIn the November commitfest right now, there are more than 50 patches \nthat are 2 or 3 CFs old (meaning they arrived after PG16 feature freeze) \nand that have no reviewers. Plus at least 30 patches without reviewers \nthat are completely new in the November CF. Also, it looks like a lot \nof these are in the Performance category, which is typically harder to \nreview and get agreement on. So, while we do have some really old \npatches that, well, are hard to get sorted out, there is a lot of new \nwork coming in that really just needs plain reviewer attention.\n\n\n", "msg_date": "Mon, 2 Oct 2023 20:27:00 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Commitfest 2023-09 has finished" } ]
[ { "msg_contents": "The following documentation comment has been logged on the website:\n\nPage: https://www.postgresql.org/docs/16/functions-range.html\nDescription:\n\nThe doc says:\r\n* unnest ( anymultirange ) → setof anyrange\r\n* Expands a multirange into a set of ranges. The ranges are read out in\nstorage order (ascending).\r\n\r\nWhat is storage order ? \r\n\r\nAt first I thought that it was the order in which the different ranges are\ninserted in the internal data structure. However, the following sort of\nshows that it is not:\r\n```\r\npostgres=# select unnest('{[1,4), [8,10)}'::int4multirange + '{[-5,-3)}' -\n'{[2,3)}') ;\r\n unnest\r\n---------\r\n [-5,-3)\r\n [1,2)\r\n [3,4)\r\n [8,10)\r\n(4 lignes)\r\n```\r\nWhatever I try, it always return in range order instead of \"storage order\".\n\r\n\r\nSome context: I ask because we have some seemingly random (and impossible to\nrepro in tests up to now) errors in our code. The code assumes that this\nreturns things in range order and as the doc is unclear to me on this point,\nI cannot exclude this to be our culprit.\r\n\r\nThank you", "msg_date": "Mon, 02 Oct 2023 18:42:14 +0000", "msg_from": "PG Doc comments form <[email protected]>", "msg_from_op": true, "msg_subject": "unnest multirange, returned order" }, { "msg_contents": "On Mon, 2023-10-02 at 18:42 +0000, PG Doc comments form wrote:\n> Page: https://www.postgresql.org/docs/16/functions-range.html\n> \n> The doc says:\n> * unnest ( anymultirange ) → setof anyrange\n> * Expands a multirange into a set of ranges. The ranges are read out in\n> storage order (ascending).\n> \n> What is storage order ? \n> \n> At first I thought that it was the order in which the different ranges are\n> inserted in the internal data structure. However, the following sort of\n> shows that it is not:\n> ```\n> postgres=# select unnest('{[1,4), [8,10)}'::int4multirange + '{[-5,-3)}' -\n> '{[2,3)}') ;\n>  unnest\n> ---------\n>  [-5,-3)\n>  [1,2)\n>  [3,4)\n>  [8,10)\n> (4 lignes)\n> ```\n> Whatever I try, it always return in range order instead of \"storage order\".\n\nI'd say that the storag order is the order in which PostgreSQL stores\nmultiranges internally:\n\nSELECT '{[100,200),[-100,-50),[-1,2)}'::int4multirange;\n\n int4multirange \n═══════════════════════════════\n {[-100,-50),[-1,2),[100,200)}\n(1 row)\n\nYours,\nLaurenz Albe\n\n\n", "msg_date": "Tue, 03 Oct 2023 15:46:23 +0200", "msg_from": "Laurenz Albe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: unnest multirange, returned order" }, { "msg_contents": ">\n> I'd say that the storag order is the order in which PostgreSQL stores\n> multiranges internally:\n\n\nRight, I believe that you are right but then this information is not useful\nfor the developer.\nIf storage order is always ascending by range order then let's make it\nclear, if order cannot be counted upon as it may evolve from postgres\nversion to version, then let's make it clear as well. WDYT ?\n\nThank you.\nDaniel Fredouille\n\nLe mar. 3 oct. 2023 à 09:46, Laurenz Albe <[email protected]> a\nécrit :\n\n> On Mon, 2023-10-02 at 18:42 +0000, PG Doc comments form wrote:\n> > Page: https://www.postgresql.org/docs/16/functions-range.html\n> >\n> > The doc says:\n> > * unnest ( anymultirange ) → setof anyrange\n> > * Expands a multirange into a set of ranges. The ranges are read out in\n> > storage order (ascending).\n> >\n> > What is storage order ?\n> >\n> > At first I thought that it was the order in which the different ranges\n> are\n> > inserted in the internal data structure. However, the following sort of\n> > shows that it is not:\n> > ```\n> > postgres=# select unnest('{[1,4), [8,10)}'::int4multirange + '{[-5,-3)}'\n> -\n> > '{[2,3)}') ;\n> > unnest\n> > ---------\n> > [-5,-3)\n> > [1,2)\n> > [3,4)\n> > [8,10)\n> > (4 lignes)\n> > ```\n> > Whatever I try, it always return in range order instead of \"storage\n> order\".\n>\n> I'd say that the storag order is the order in which PostgreSQL stores\n> multiranges internally:\n>\n> SELECT '{[100,200),[-100,-50),[-1,2)}'::int4multirange;\n>\n> int4multirange\n> ═══════════════════════════════\n> {[-100,-50),[-1,2),[100,200)}\n> (1 row)\n>\n> Yours,\n> Laurenz Albe\n>\n\nI'd say that the storag order is the order in which PostgreSQL storesmultiranges internally:Right, I believe that you are right but then this information is not useful for the developer. If storage order is always ascending by range order then let's make it clear, if order cannot be counted upon as it may evolve from postgres version to version, then let's make it clear as well. WDYT ?Thank you.Daniel FredouilleLe mar. 3 oct. 2023 à 09:46, Laurenz Albe <[email protected]> a écrit :On Mon, 2023-10-02 at 18:42 +0000, PG Doc comments form wrote:\n> Page: https://www.postgresql.org/docs/16/functions-range.html\n> \n> The doc says:\n> * unnest ( anymultirange ) → setof anyrange\n> * Expands a multirange into a set of ranges. The ranges are read out in\n> storage order (ascending).\n> \n> What is storage order ? \n> \n> At first I thought that it was the order in which the different ranges are\n> inserted in the internal data structure. However, the following sort of\n> shows that it is not:\n> ```\n> postgres=# select unnest('{[1,4), [8,10)}'::int4multirange + '{[-5,-3)}' -\n> '{[2,3)}') ;\n>  unnest\n> ---------\n>  [-5,-3)\n>  [1,2)\n>  [3,4)\n>  [8,10)\n> (4 lignes)\n> ```\n> Whatever I try, it always return in range order instead of \"storage order\".\n\nI'd say that the storag order is the order in which PostgreSQL stores\nmultiranges internally:\n\nSELECT '{[100,200),[-100,-50),[-1,2)}'::int4multirange;\n\n        int4multirange         \n═══════════════════════════════\n {[-100,-50),[-1,2),[100,200)}\n(1 row)\n\nYours,\nLaurenz Albe", "msg_date": "Tue, 3 Oct 2023 20:40:40 -0400", "msg_from": "Daniel Fredouille <[email protected]>", "msg_from_op": false, "msg_subject": "Re: unnest multirange, returned order" }, { "msg_contents": "On Tue, 2023-10-03 at 20:40 -0400, Daniel Fredouille wrote:\n> > I'd say that the storag order is the order in which PostgreSQL stores\n> > multiranges internally:\n> \n> Right, I believe that you are right but then this information is not useful for the developer. \n> If storage order is always ascending by range order then let's make it clear,\n> if order cannot be counted upon as it may evolve from postgres version to version,\n> then let's make it clear as well. WDYT ?\n\nI personally think that it is clear as it is written now.\n\nIf you have a good suggestion for an improvement, you could send it;\nperhaps someone will pick it up.\n\nYours,\nLaurenz Albe\n\n\n", "msg_date": "Wed, 04 Oct 2023 09:20:16 +0200", "msg_from": "Laurenz Albe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: unnest multirange, returned order" }, { "msg_contents": "Trying a suggestion then:\n\n\"\"\"\n\nunnest ( anymultirange ) → setof anyrange\n\nExpands a multirange into a set of ranges. The ranges are read out in\nstorage order (ascending) and therefore cannot be relied upon.\n\nunnest('{[1,2), [3,4)}'::int4multirange) →\n\n [1,2)\n [3,4)\n\n\"\"\"\nDaniel\n\nLe mer. 4 oct. 2023 à 03:20, Laurenz Albe <[email protected]> a\nécrit :\n\n> On Tue, 2023-10-03 at 20:40 -0400, Daniel Fredouille wrote:\n> > > I'd say that the storag order is the order in which PostgreSQL stores\n> > > multiranges internally:\n> >\n> > Right, I believe that you are right but then this information is not\n> useful for the developer.\n> > If storage order is always ascending by range order then let's make it\n> clear,\n> > if order cannot be counted upon as it may evolve from postgres version\n> to version,\n> > then let's make it clear as well. WDYT ?\n>\n> I personally think that it is clear as it is written now.\n>\n> If you have a good suggestion for an improvement, you could send it;\n> perhaps someone will pick it up.\n>\n> Yours,\n> Laurenz Albe\n>\n\nTrying a suggestion then:\"\"\"unnest ( anymultirange ) → setof anyrangeExpands a multirange into a set of ranges. The ranges are read out in storage order (ascending) and therefore cannot be relied upon.unnest('{[1,2), [3,4)}'::int4multirange) → [1,2)\n [3,4)\"\"\"DanielLe mer. 4 oct. 2023 à 03:20, Laurenz Albe <[email protected]> a écrit :On Tue, 2023-10-03 at 20:40 -0400, Daniel Fredouille wrote:\n> > I'd say that the storag order is the order in which PostgreSQL stores\n> > multiranges internally:\n> \n> Right, I believe that you are right but then this information is not useful for the developer. \n> If storage order is always ascending by range order then let's make it clear,\n> if order cannot be counted upon as it may evolve from postgres version to version,\n> then let's make it clear as well. WDYT ?\n\nI personally think that it is clear as it is written now.\n\nIf you have a good suggestion for an improvement, you could send it;\nperhaps someone will pick it up.\n\nYours,\nLaurenz Albe", "msg_date": "Wed, 4 Oct 2023 20:04:41 -0400", "msg_from": "Daniel Fredouille <[email protected]>", "msg_from_op": false, "msg_subject": "Re: unnest multirange, returned order" }, { "msg_contents": "Sorry correcting my own suggestion:\n\n\"\"\"\n\nunnest ( anymultirange ) → setof anyrange\n\nExpands a multirange into a set of ranges. The ranges are read out in\nstorage order (ascending) and therefore order cannot be relied upon.\n\nunnest('{[1,2), [3,4)}'::int4multirange) →\n\n [1,2)\n [3,4)\n\n\"\"\"\n\n\nLe mer. 4 oct. 2023 à 20:04, Daniel Fredouille <[email protected]>\na écrit :\n\n> Trying a suggestion then:\n>\n> \"\"\"\n>\n> unnest ( anymultirange ) → setof anyrange\n>\n> Expands a multirange into a set of ranges. The ranges are read out in\n> storage order (ascending) and therefore cannot be relied upon.\n>\n> unnest('{[1,2), [3,4)}'::int4multirange) →\n>\n> [1,2)\n> [3,4)\n>\n> \"\"\"\n> Daniel\n>\n> Le mer. 4 oct. 2023 à 03:20, Laurenz Albe <[email protected]> a\n> écrit :\n>\n>> On Tue, 2023-10-03 at 20:40 -0400, Daniel Fredouille wrote:\n>> > > I'd say that the storag order is the order in which PostgreSQL stores\n>> > > multiranges internally:\n>> >\n>> > Right, I believe that you are right but then this information is not\n>> useful for the developer.\n>> > If storage order is always ascending by range order then let's make it\n>> clear,\n>> > if order cannot be counted upon as it may evolve from postgres version\n>> to version,\n>> > then let's make it clear as well. WDYT ?\n>>\n>> I personally think that it is clear as it is written now.\n>>\n>> If you have a good suggestion for an improvement, you could send it;\n>> perhaps someone will pick it up.\n>>\n>> Yours,\n>> Laurenz Albe\n>>\n>\n\nSorry correcting my own suggestion:\"\"\"unnest ( anymultirange ) → setof anyrangeExpands a multirange into a set of ranges. The ranges are read out in storage order (ascending) and therefore order cannot be relied upon.unnest('{[1,2), [3,4)}'::int4multirange) → [1,2)\n [3,4)\"\"\"Le mer. 4 oct. 2023 à 20:04, Daniel Fredouille <[email protected]> a écrit :Trying a suggestion then:\"\"\"unnest ( anymultirange ) → setof anyrangeExpands a multirange into a set of ranges. The ranges are read out in storage order (ascending) and therefore cannot be relied upon.unnest('{[1,2), [3,4)}'::int4multirange) → [1,2)\n [3,4)\"\"\"DanielLe mer. 4 oct. 2023 à 03:20, Laurenz Albe <[email protected]> a écrit :On Tue, 2023-10-03 at 20:40 -0400, Daniel Fredouille wrote:\n> > I'd say that the storag order is the order in which PostgreSQL stores\n> > multiranges internally:\n> \n> Right, I believe that you are right but then this information is not useful for the developer. \n> If storage order is always ascending by range order then let's make it clear,\n> if order cannot be counted upon as it may evolve from postgres version to version,\n> then let's make it clear as well. WDYT ?\n\nI personally think that it is clear as it is written now.\n\nIf you have a good suggestion for an improvement, you could send it;\nperhaps someone will pick it up.\n\nYours,\nLaurenz Albe", "msg_date": "Wed, 4 Oct 2023 20:12:19 -0400", "msg_from": "Daniel Fredouille <[email protected]>", "msg_from_op": false, "msg_subject": "Re: unnest multirange, returned order" }, { "msg_contents": "On Wed, 2023-10-04 at 20:12 -0400, Daniel Fredouille wrote:\n> unnest ( anymultirange ) → setof anyrange\n> Expands a multirange into a set of ranges. The ranges are read out in storage order (ascending) and therefore order cannot be relied upon.\n\nThat's not true. The order is deterministic and can be relied on.\n\nHow about the attached patch, which does away with the confusing\nmention of \"storage order\"?\n\nYours,\nLaurenz Albe", "msg_date": "Thu, 05 Oct 2023 08:50:24 +0200", "msg_from": "Laurenz Albe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: unnest multirange, returned order" }, { "msg_contents": "Hi,\n\nsorry it took me some time to reply. Yes, the patch is perfect if this is\nindeed the behavior.\n\ncheers\nDaniel\n\nLe jeu. 5 oct. 2023 à 02:50, Laurenz Albe <[email protected]> a\nécrit :\n\n> On Wed, 2023-10-04 at 20:12 -0400, Daniel Fredouille wrote:\n> > unnest ( anymultirange ) → setof anyrange\n> > Expands a multirange into a set of ranges. The ranges are read out in\n> storage order (ascending) and therefore order cannot be relied upon.\n>\n> That's not true. The order is deterministic and can be relied on.\n>\n> How about the attached patch, which does away with the confusing\n> mention of \"storage order\"?\n>\n> Yours,\n> Laurenz Albe\n>\n\nHi,sorry it took me some time to reply. Yes, the patch is perfect if this is indeed the behavior.cheersDanielLe jeu. 5 oct. 2023 à 02:50, Laurenz Albe <[email protected]> a écrit :On Wed, 2023-10-04 at 20:12 -0400, Daniel Fredouille wrote:\n> unnest ( anymultirange ) → setof anyrange\n> Expands a multirange into a set of ranges. The ranges are read out in storage order (ascending) and therefore order cannot be relied upon.\n\nThat's not true.  The order is deterministic and can be relied on.\n\nHow about the attached patch, which does away with the confusing\nmention of \"storage order\"?\n\nYours,\nLaurenz Albe", "msg_date": "Fri, 13 Oct 2023 15:33:57 -0400", "msg_from": "Daniel Fredouille <[email protected]>", "msg_from_op": false, "msg_subject": "Re: unnest multirange, returned order" }, { "msg_contents": "On Fri, 2023-10-13 at 15:33 -0400, Daniel Fredouille wrote:\n> sorry it took me some time to reply. Yes, the patch is perfect if this is indeed the behavior.\n\nI'm sending a reply to the hackers list so that I can add the patch to the commitfest.\n\nTiny as the patch is, I don't want it to fall between the cracks.\n\nYours,\nLaurenz Albe\n\n\n", "msg_date": "Fri, 27 Oct 2023 08:48:49 +0200", "msg_from": "Laurenz Albe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: unnest multirange, returned order" }, { "msg_contents": "On Fri, 2023-10-27 at 08:48 +0200, Laurenz Albe wrote:\n> On Fri, 2023-10-13 at 15:33 -0400, Daniel Fredouille wrote:\n> > sorry it took me some time to reply. Yes, the patch is perfect if\n> > this is indeed the behavior.\n> \n> I'm sending a reply to the hackers list so that I can add the patch\n> to the commitfest.\n> \n> Tiny as the patch is, I don't want it to fall between the cracks.\n\nCommitted with adjusted wording. Thank you!\n\n\n-- \nJeff Davis\nPostgreSQL Contributor Team - AWS\n\n\n\n\n", "msg_date": "Fri, 27 Oct 2023 16:08:37 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: unnest multirange, returned order" }, { "msg_contents": "On Fri, 2023-10-27 at 16:08 -0700, Jeff Davis wrote:\n> On Fri, 2023-10-27 at 08:48 +0200, Laurenz Albe wrote:\n> > On Fri, 2023-10-13 at 15:33 -0400, Daniel Fredouille wrote:\n> > > sorry it took me some time to reply. Yes, the patch is perfect if\n> > > this is indeed the behavior.\n> > \n> > I'm sending a reply to the hackers list so that I can add the patch\n> > to the commitfest.\n> > \n> > Tiny as the patch is, I don't want it to fall between the cracks.\n> \n> Committed with adjusted wording. Thank you!\n\nThanks!\n\nYours,\nLaurenz Albe\n\n\n", "msg_date": "Sat, 28 Oct 2023 10:53:58 +0200", "msg_from": "Laurenz Albe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: unnest multirange, returned order" } ]
[ { "msg_contents": "The attached v1-0001 patch adjusts some code in stringinfo.h to find\nmisusages of the appendStringInfo functions. I don't intend to commit\nthat, but I do intend to commit the patch for the new misusages that\nit found, which is also attached.\n\nThis is along the same lines as 8b26769bc, f736e188c, 110d81728 and 8abc13a88.\n\nDavid", "msg_date": "Tue, 3 Oct 2023 11:24:37 +1300", "msg_from": "David Rowley <[email protected]>", "msg_from_op": true, "msg_subject": "Fixup some more appendStringInfo misusages" }, { "msg_contents": "On Tue, 3 Oct 2023 at 11:24, David Rowley <[email protected]> wrote:\n> This is along the same lines as 8b26769bc, f736e188c, 110d81728 and 8abc13a88.\n\nI've pushed the patch to fix the misusages of the functions.\n\nDavid\n\n\n", "msg_date": "Tue, 3 Oct 2023 17:13:13 +1300", "msg_from": "David Rowley <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Fixup some more appendStringInfo misusages" } ]
[ { "msg_contents": "I noticed one or two \"monitoring\" links and linkends that are slightly\ninconsistent from all the others.\n\n~~~\n\n From \"Dynamic Statistics Views\"\n\npg_stat_activity, linkend=\"monitoring-pg-stat-activity-view\" ==> ok\npg_stat_replication, linkend=\"monitoring-pg-stat-replication-view\" ==> ok\npg_stat_wal_receiver, linkend=\"monitoring-pg-stat-wal-receiver-view\"> ==> ok\npg_stat_recovery_prefetch,\nlinkend=\"monitoring-pg-stat-recovery-prefetch\" ==> MODIFY\nlinkend=\"monitoring-pg-stat-recovery-prefetch-view\"\npg_stat_subscription, linkend=\"monitoring-pg-stat-subscription\" ==>\nMODIFY linkend=\"monitoring-pg-stat-subscription-view\"\npg_stat_ssl, linkend=\"monitoring-pg-stat-ssl-view\" ==> ok\n\n~~~\n\n From \"Collected Statistics Views\"\n\npg_stat_archiver, linkend=\"monitoring-pg-stat-archiver-view\" ==> ok\npg_stat_bgwriter, linkend=\"monitoring-pg-stat-bgwriter-view\" ==> ok\npg_stat_database, linkend=\"monitoring-pg-stat-database-view\" ==> ok\npg_stat_database_conflicts,\nlinkend=\"monitoring-pg-stat-database-conflicts-view\" ==> ok\npg_stat_io, linkend=\"monitoring-pg-stat-io-view\"> ==> ok\npg_stat_replication_slots,\nlinkend=\"monitoring-pg-stat-replication-slots-view\" ==> ok\npg_stat_slru, linkend=\"monitoring-pg-stat-slru-view\" ==> ok\npg_stat_subscription_stats,\nlinkend=\"monitoring-pg-stat-subscription-stats\" ==> MODIFY\nlinkend=\"monitoring-pg-stat-subscription-stats-view\"\npg_stat_wal, linkend=\"monitoring-pg-stat-wal-view\" ==> ok\npg_stat_all_tables, linkend=\"monitoring-pg-stat-all-tables-view\" ==> ok\npg_stat_all_indexes, linkend=\"monitoring-pg-stat-all-indexes-view\" ==> ok\npg_stat_user_functions, linkend=\"monitoring-pg-stat-user-functions-view\" ==> ok\npg_statio_all_tables, linkend=\"monitoring-pg-statio-all-tables-view\" ==> ok\npg_statio_all_indexes, linkend=\"monitoring-pg-statio-all-indexes-view\" ==> ok\npg_statio_all_sequences,\nlinkend=\"monitoring-pg-statio-all-sequences-view\" ==> ok\n\n~~~\n\nPSA a patch to make these few changes.\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Tue, 3 Oct 2023 13:11:15 +1100", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": true, "msg_subject": "[PGDOCS] Inconsistent linkends to \"monitoring\" views." }, { "msg_contents": "On Tue, Oct 03, 2023 at 01:11:15PM +1100, Peter Smith wrote:\n> I noticed one or two \"monitoring\" links and linkends that are slightly\n> inconsistent from all the others.\n\n- <link linkend=\"monitoring-pg-stat-subscription\">\n+ <link linkend=\"monitoring-pg-stat-subscription-view\">\n\nIs that really worth bothering for the internal link references? This\ncan create extra backpatching conflicts.\n--\nMichael", "msg_date": "Tue, 3 Oct 2023 16:30:09 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PGDOCS] Inconsistent linkends to \"monitoring\" views." }, { "msg_contents": "On Tue, Oct 3, 2023 at 6:30 PM Michael Paquier <[email protected]> wrote:\n>\n> On Tue, Oct 03, 2023 at 01:11:15PM +1100, Peter Smith wrote:\n> > I noticed one or two \"monitoring\" links and linkends that are slightly\n> > inconsistent from all the others.\n>\n> - <link linkend=\"monitoring-pg-stat-subscription\">\n> + <link linkend=\"monitoring-pg-stat-subscription-view\">\n>\n> Is that really worth bothering for the internal link references?\n\nI preferred 100% consistency instead of 95% consistency. YMMV.\n\n> This can create extra backpatching conflicts.\n\nCouldn't the same be said for every patch that fixes a comment typo?\nThis is like a link typo, so what's the difference?\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Tue, 3 Oct 2023 20:40:52 +1100", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PGDOCS] Inconsistent linkends to \"monitoring\" views." }, { "msg_contents": "On Tue, Oct 3, 2023 at 4:40 PM Peter Smith <[email protected]> wrote:\n>\n> On Tue, Oct 3, 2023 at 6:30 PM Michael Paquier <[email protected]> wrote:\n> >\n> > On Tue, Oct 03, 2023 at 01:11:15PM +1100, Peter Smith wrote:\n> > > I noticed one or two \"monitoring\" links and linkends that are slightly\n> > > inconsistent from all the others.\n> >\n> > - <link linkend=\"monitoring-pg-stat-subscription\">\n> > + <link linkend=\"monitoring-pg-stat-subscription-view\">\n> >\n> > Is that really worth bothering for the internal link references?\n>\n> I preferred 100% consistency instead of 95% consistency. YMMV.\n>\n> > This can create extra backpatching conflicts.\n>\n> Couldn't the same be said for every patch that fixes a comment typo?\n> This is like a link typo, so what's the difference?\n\nMy 2 cents: Comment typos are visible to readers, so more annoying\nwhen seen in isolation, and less likely to have surroundings that\ncould change in back branches. Consistency would preferred all else\nbeing equal, but then again nothing is wrong with the existing links.\nIn any case, no one has come out in favor of the patch, so it seems\nlike it should be rejected unless that changes.\n\n--\nJohn Naylor\n\n\n", "msg_date": "Wed, 8 Nov 2023 14:02:02 +0700", "msg_from": "John Naylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PGDOCS] Inconsistent linkends to \"monitoring\" views." }, { "msg_contents": "On Wed, Nov 8, 2023 at 2:02 PM John Naylor <[email protected]> wrote:\n\n> My 2 cents: Comment typos are visible to readers, so more annoying\n> when seen in isolation, and less likely to have surroundings that\n> could change in back branches. Consistency would preferred all else\n> being equal, but then again nothing is wrong with the existing links.\n> In any case, no one has come out in favor of the patch, so it seems\n> like it should be rejected unless that changes.\n\nThis is done.\n\n\n", "msg_date": "Thu, 30 Nov 2023 17:05:23 +0700", "msg_from": "John Naylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PGDOCS] Inconsistent linkends to \"monitoring\" views." } ]
[ { "msg_contents": "Hello,\n\nMy colleague has been working on submitting a patch [1] to the Ruby\nRails framework to address some of the problems discussed in [2].\nRegardless of whether that change lands, the change in Rails would be\nuseful since people will be running Postgres versions without this\npatch for a while.\n\nMy colleague's patch changes SQL generated from Ruby expressions like\n`where(id: [1, 2])` . This is currently translated to roughly `WHERE\nid IN (1, 2)` and would be changed to `id = ANY('{1,2}')`.\n\nAs far as we know, the expressions are equivalent, but we wanted to\ndouble-check: are there any edge cases to consider here (other than\nthe pg_stat_statements behavior, of course)?\n\nThanks,\nMaciek\n\n[1]: https://github.com/rails/rails/pull/49388\n[2]: https://www.postgresql.org/message-id/flat/20230209172651.cfgrebpyyr72h7fv%40alvherre.pgsql#eef3c77bc28b9922ea6b9660b0221b5d\n\n\n", "msg_date": "Mon, 2 Oct 2023 22:01:53 -0700", "msg_from": "Maciek Sakrejda <[email protected]>", "msg_from_op": true, "msg_subject": "Differences between = ANY and IN?" }, { "msg_contents": "Maciek Sakrejda <[email protected]> writes:\n> My colleague's patch changes SQL generated from Ruby expressions like\n> `where(id: [1, 2])` . This is currently translated to roughly `WHERE\n> id IN (1, 2)` and would be changed to `id = ANY('{1,2}')`.\n\n> As far as we know, the expressions are equivalent, but we wanted to\n> double-check: are there any edge cases to consider here (other than\n> the pg_stat_statements behavior, of course)?\n\nYou would find it profitable to read transformAExprIn() in parse_expr.c.\nThe most important points are in this comment:\n\n * We try to generate a ScalarArrayOpExpr from IN/NOT IN, but this is only\n * possible if there is a suitable array type available. If not, we fall\n * back to a boolean condition tree with multiple copies of the lefthand\n * expression. Also, any IN-list items that contain Vars are handled as\n * separate boolean conditions, because that gives the planner more scope\n * for optimization on such clauses.\n\nIf all the values in the IN form were being sent to the backend as\nconstants of the same datatype, I think you're okay to consider it\nas exactly equivalent to =ANY. It would likely be a good idea to\nprovide an explicit cast `id = ANY('{1,2}'::int[])` rather than just\nhoping an unadorned literal will be taken as the type you want\n(see transformAExprOpAny and thence make_scalar_array_op).\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 03 Oct 2023 01:15:31 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Differences between = ANY and IN?" }, { "msg_contents": "Great, thanks for the guidance!\n\n\n", "msg_date": "Mon, 2 Oct 2023 23:34:29 -0700", "msg_from": "Maciek Sakrejda <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Differences between = ANY and IN?" } ]
[ { "msg_contents": "The comment\n\n /* On some platforms, readline is declared as readline(char *) */\n\nis obsolete. The casting away of const can be removed.\n\nThe const in the readline() prototype was added in GNU readline 4.2, \nreleased in 2001. BSD libedit has also had const in the prototype since \nat least 2001.\n\n(The commit that introduced this comment (187e865174) talked about \nFreeBSD 4.8, which didn't have readline compatibility in libedit yet, so \nit must have been talking about GNU readline in the base system. This \nchecks out, but already FreeBSD 5 had an updated GNU readline with const.)", "msg_date": "Tue, 3 Oct 2023 12:23:12 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Modernize const handling with readline" }, { "msg_contents": "Hi,\n\n> The comment\n>\n> /* On some platforms, readline is declared as readline(char *) */\n>\n> is obsolete. The casting away of const can be removed.\n>\n> The const in the readline() prototype was added in GNU readline 4.2,\n> released in 2001. BSD libedit has also had const in the prototype since\n> at least 2001.\n>\n> (The commit that introduced this comment (187e865174) talked about\n> FreeBSD 4.8, which didn't have readline compatibility in libedit yet, so\n> it must have been talking about GNU readline in the base system. This\n> checks out, but already FreeBSD 5 had an updated GNU readline with const.)\n\nLGTM.\n\nWhile examining the code for similar places I noticed that the\nfollowing functions can also be const'ified:\n\n- crc32_sz\n- pg_checksum_page (? temporary modifies the page but then restores it)\n- XLogRegisterData (?)\n\nThe callers of cstring_to_text[_with_len] often cast the argument to\n(char *) while in fact it's (const char *). This can be refactored\ntoo.\n\nAdditionally there is a slight difference between XLogRegisterBlock()\ndeclaration in xloginsert.h:\n\n```\nextern void XLogRegisterBlock(uint8 block_id, RelFileLocator *rlocator,\n ForkNumber forknum, BlockNumber blknum,\nchar *page,\n uint8 flags);\n```\n\n... and xloginsert.c:\n\n```\nvoid\nXLogRegisterBlock(uint8 block_id, RelFileLocator *rlocator, ForkNumber forknum,\n BlockNumber blknum, Page page, uint8 flags)\n```\n\nWill there be a value in addressing anything of this?\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Tue, 3 Oct 2023 14:28:18 +0300", "msg_from": "Aleksander Alekseev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Modernize const handling with readline" }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> The comment\n> /* On some platforms, readline is declared as readline(char *) */\n> is obsolete. The casting away of const can be removed.\n\n+1, that's surely not of interest on anything we still support.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 03 Oct 2023 10:10:38 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Modernize const handling with readline" }, { "msg_contents": "On 03.10.23 13:28, Aleksander Alekseev wrote:\n> While examining the code for similar places I noticed that the\n> following functions can also be const'ified:\n> \n> - crc32_sz\n\nI suppose this could be changed.\n\n> - pg_checksum_page (? temporary modifies the page but then restores it)\n\nThen it's not really const?\n\n> - XLogRegisterData (?)\n\nI don't think this would work, at least without further work elsewhere, \nbecause the data is stored in XLogRecData, which has no const handling.\n\n> The callers of cstring_to_text[_with_len] often cast the argument to\n> (char *) while in fact it's (const char *). This can be refactored\n> too.\n\nThese look fine to me.\n\n> Additionally there is a slight difference between XLogRegisterBlock()\n> declaration in xloginsert.h:\n> \n> ```\n> extern void XLogRegisterBlock(uint8 block_id, RelFileLocator *rlocator,\n> ForkNumber forknum, BlockNumber blknum,\n> char *page,\n> uint8 flags);\n> ```\n> \n> ... and xloginsert.c:\n> \n> ```\n> void\n> XLogRegisterBlock(uint8 block_id, RelFileLocator *rlocator, ForkNumber forknum,\n> BlockNumber blknum, Page page, uint8 flags)\n> ```\n\nIt looks like the reason here is that xloginsert.h does not have the \nPage type in scope. I don't know how difficult it would be to change \nthat, but it seems fine as is.\n\n\n\n", "msg_date": "Wed, 4 Oct 2023 16:37:16 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Modernize const handling with readline" }, { "msg_contents": "Hi,\n\n> On 03.10.23 13:28, Aleksander Alekseev wrote:\n> > While examining the code for similar places I noticed that the\n> > following functions can also be const'ified:\n> >\n> > - crc32_sz\n>\n> I suppose this could be changed.\n\nOK, that's a simple change. Here is the patch.\n\n-- \nBest regards,\nAleksander Alekseev", "msg_date": "Wed, 4 Oct 2023 18:09:36 +0300", "msg_from": "Aleksander Alekseev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Modernize const handling with readline" }, { "msg_contents": "On 04.10.23 17:09, Aleksander Alekseev wrote:\n> Hi,\n> \n>> On 03.10.23 13:28, Aleksander Alekseev wrote:\n>>> While examining the code for similar places I noticed that the\n>>> following functions can also be const'ified:\n>>>\n>>> - crc32_sz\n>>\n>> I suppose this could be changed.\n> \n> OK, that's a simple change. Here is the patch.\n\ncommitted this and my patch\n\n\n\n", "msg_date": "Thu, 5 Oct 2023 08:57:51 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Modernize const handling with readline" }, { "msg_contents": "On 04.10.23 16:37, Peter Eisentraut wrote:\n> On 03.10.23 13:28, Aleksander Alekseev wrote:\n>> While examining the code for similar places I noticed that the\n>> following functions can also be const'ified:\n\n>> - XLogRegisterData (?)\n> \n> I don't think this would work, at least without further work elsewhere, \n> because the data is stored in XLogRecData, which has no const handling.\n\nI got around to fixing this. Here is a patch. It allows removing a few \nunconstify() calls, which is nice.", "msg_date": "Wed, 28 Aug 2024 10:09:27 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Add const qualifiers to XLogRegister*() functions" }, { "msg_contents": "Hi,\n\n> On 04.10.23 16:37, Peter Eisentraut wrote:\n> > On 03.10.23 13:28, Aleksander Alekseev wrote:\n> >> While examining the code for similar places I noticed that the\n> >> following functions can also be const'ified:\n>\n> >> - XLogRegisterData (?)\n> >\n> > I don't think this would work, at least without further work elsewhere,\n> > because the data is stored in XLogRecData, which has no const handling.\n>\n> I got around to fixing this. Here is a patch. It allows removing a few\n> unconstify() calls, which is nice.\n\nLGTM.\n\nNote that this may affect third-party code. IMO this is not a big deal\nin this particular case.\n\nAlso by randomly checking one of the affected non-static functions I\nfound a bunch of calls like this:\n\nXLogRegisterData((char *) msgs, ...)\n\n... where the first argument is going to become (const char *). It\nlooks like the compilers are OK with implicitly casting (char*) to a\nmore restrictive (const char*) though.\n\n--\nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Wed, 28 Aug 2024 13:04:38 +0300", "msg_from": "Aleksander Alekseev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add const qualifiers to XLogRegister*() functions" }, { "msg_contents": "On 28.08.24 12:04, Aleksander Alekseev wrote:\n> Hi,\n> \n>> On 04.10.23 16:37, Peter Eisentraut wrote:\n>>> On 03.10.23 13:28, Aleksander Alekseev wrote:\n>>>> While examining the code for similar places I noticed that the\n>>>> following functions can also be const'ified:\n>>\n>>>> - XLogRegisterData (?)\n>>>\n>>> I don't think this would work, at least without further work elsewhere,\n>>> because the data is stored in XLogRecData, which has no const handling.\n>>\n>> I got around to fixing this. Here is a patch. It allows removing a few\n>> unconstify() calls, which is nice.\n> \n> LGTM.\n\ncommitted\n\n> Note that this may affect third-party code. IMO this is not a big deal\n> in this particular case.\n\nI don't think this will impact any third-party code. Only maybe for the \nbetter, by being able to remove some casts.\n\n> Also by randomly checking one of the affected non-static functions I\n> found a bunch of calls like this:\n> \n> XLogRegisterData((char *) msgs, ...)\n> \n> ... where the first argument is going to become (const char *). It\n> looks like the compilers are OK with implicitly casting (char*) to a\n> more restrictive (const char*) though.\n\nYes, that's ok.\n\n\n\n", "msg_date": "Tue, 3 Sep 2024 08:15:29 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add const qualifiers to XLogRegister*() functions" } ]
[ { "msg_contents": "Hello all,\n \non pgsql-general this got no answers, so...\n\nAccording to:\n\nhttps://www.postgresql.org/docs/current/logical-replication-conflicts.html\n\nor\n\nhttps://www.postgresql.fastware.com/blog/addressing-replication-conflicts-using-alter-subscription-skip\n\nThe logfile is the _only_ place to find the transaction finish LSN that must be skipped if logical replication is stuck on a conflict. Is the logfile the only place to find this information, or can it also be found somewhere inside the database, e. g. in some system catalog view?\n\nbest regards\n\nErnst-Georg\n\n\n\n\n\n\n Hello all,\n \n\n  \n \n\n on pgsql-general this got no answers, so...\n \nAccording to:\n \nhttps://www.postgresql.org/docs/current/logical-replication-conflicts.html\n\nor\n \nhttps://www.postgresql.fastware.com/blog/addressing-replication-conflicts-using-alter-subscription-skip\n\nThe logfile is the _only_ place to find the transaction finish LSN that must be skipped if logical replication is stuck on a conflict. Is the logfile the only place to find this information, or can it also be found somewhere inside the database, e. g. in some system catalog view?\n \nbest regards\n \nErnst-Georg", "msg_date": "Wed, 4 Oct 2023 11:56:51 +0200 (CEST)", "msg_from": "pgchem pgchem <[email protected]>", "msg_from_op": true, "msg_subject": "Is the logfile the only place to find the finish LSN?" } ]
[ { "msg_contents": "The various command-line utilities that have recently acquired a\n--sync-method option document it like this:\n\n<term><option>--sync-method</option></term>\n\nBut that is not how we document options which take an argument. We do\nit like this:\n\n<term><option>--pgdata=<replaceable>directory</replaceable></option></term>\n<term><option>--filenode=<replaceable>filenode</replaceable></option></term>\n\netc.\n\nThis one should be something like this:\n\n<term><option>--sync-method=<replaceable>method</replaceable></option></term>\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 4 Oct 2023 09:08:57 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": true, "msg_subject": "--sync-method isn't documented to take an argument" }, { "msg_contents": "> On 4 Oct 2023, at 15:08, Robert Haas <[email protected]> wrote:\n\n> This one should be something like this:\n> \n> <term><option>--sync-method=<replaceable>method</replaceable></option></term>\n\nShouldn't it be <replaceable class=\"parameter\">method</replaceable> ?\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Wed, 4 Oct 2023 15:15:37 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: --sync-method isn't documented to take an argument" }, { "msg_contents": "On Wed, Oct 4, 2023 at 9:15 AM Daniel Gustafsson <[email protected]> wrote:\n> > On 4 Oct 2023, at 15:08, Robert Haas <[email protected]> wrote:\n> > This one should be something like this:\n> >\n> > <term><option>--sync-method=<replaceable>method</replaceable></option></term>\n>\n> Shouldn't it be <replaceable class=\"parameter\">method</replaceable> ?\n\nHmm, I think you're probably right. But look at this:\n\n <term><option>-S <replaceable>slotname</replaceable></option></term>\n <term><option>--slot=<replaceable\nclass=\"parameter\">slotname</replaceable></option></term>\n\nBut then in the very same file:\n\n <term><option>-r <replaceable\nclass=\"parameter\">rate</replaceable></option></term>\n <term><option>--max-rate=<replaceable\nclass=\"parameter\">rate</replaceable></option></term>\n\nIt doesn't look to me like we're entirely consistent about this. I\nalso found this in vacuumlo.sgml, and there seem to be various other\nexamples:\n\n<term><option>-U <replaceable>username</replaceable></option></term>\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 4 Oct 2023 09:22:14 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: --sync-method isn't documented to take an argument" }, { "msg_contents": "> On 4 Oct 2023, at 15:22, Robert Haas <[email protected]> wrote:\n> \n> On Wed, Oct 4, 2023 at 9:15 AM Daniel Gustafsson <[email protected]> wrote:\n>>> On 4 Oct 2023, at 15:08, Robert Haas <[email protected]> wrote:\n>>> This one should be something like this:\n>>> \n>>> <term><option>--sync-method=<replaceable>method</replaceable></option></term>\n>> \n>> Shouldn't it be <replaceable class=\"parameter\">method</replaceable> ?\n> \n> Hmm, I think you're probably right. But look at this:\n> \n> <term><option>-S <replaceable>slotname</replaceable></option></term>\n> <term><option>--slot=<replaceable\n> class=\"parameter\">slotname</replaceable></option></term>\n> \n> But then in the very same file:\n> \n> <term><option>-r <replaceable\n> class=\"parameter\">rate</replaceable></option></term>\n> <term><option>--max-rate=<replaceable\n> class=\"parameter\">rate</replaceable></option></term>\n\nHmm.. that's a bit unfortunate.\n\n> It doesn't look to me like we're entirely consistent about this.\n\nThat (sadly) applies to a fair chunk of the docs.\n\nI can take a stab at tidying this up during breaks at the conference. It might\nnot be the most important bit of markup, but for anyone building the docs who\nmight want to use this it seems consistency will help.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Wed, 4 Oct 2023 15:34:49 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: --sync-method isn't documented to take an argument" }, { "msg_contents": "On 2023-Oct-04, Daniel Gustafsson wrote:\n\n> I can take a stab at tidying this up during breaks at the conference. It might\n> not be the most important bit of markup, but for anyone building the docs who\n> might want to use this it seems consistency will help.\n\nSo for HTML, the result of the pg_basebackup lines are these two lines:\n\n</p></dd><dt><span class=\"term\"><code class=\"option\">-S <em class=\"replaceable\"><code>slotname</code></em></code><br /></span><span class=\"term\"><code class=\"option\">--slot=<em class=\"replaceable\"><code>slotname</code></em></code></span></dt><dd><p>\n\nand \n\n</p></dd><dt><span class=\"term\"><code class=\"option\">-r <em class=\"replaceable\"><code>rate</code></em></code><br /></span><span class=\"term\"><code class=\"option\">--max-rate=<em class=\"replaceable\"><code>rate</code></em></code></span></dt><dd><p>\n\nSo I'm not sure that specifying the class=\"parameter\" bit does anything in\nreality, or that changing lines to add or remove it will have any effect.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Wed, 4 Oct 2023 16:35:32 +0200", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: --sync-method isn't documented to take an argument" }, { "msg_contents": "On Wed, Oct 4, 2023 at 10:35 AM Alvaro Herrera <[email protected]> wrote:\n> So I'm not sure that specifying the class=\"parameter\" bit does anything in\n> reality, or that changing lines to add or remove it will have any effect.\n\nInteresting. I wondered whether that might be the case.\n\nThe original issue I reported does make a real difference, though. :-)\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 4 Oct 2023 10:39:19 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: --sync-method isn't documented to take an argument" }, { "msg_contents": "Alvaro Herrera <[email protected]> writes:\n> So I'm not sure that specifying the class=\"parameter\" bit does anything in\n> reality, or that changing lines to add or remove it will have any effect.\n\nI concluded a long time ago that it does nothing. We have a good\nmix of places that write <replaceable> with or without that, and\nI've never detected any difference in the output. So I tend to\nleave it out, or at most make new entries look like the adjacent\nones.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 04 Oct 2023 10:39:50 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: --sync-method isn't documented to take an argument" }, { "msg_contents": "> On 4 Oct 2023, at 16:39, Tom Lane <[email protected]> wrote:\n> \n> Alvaro Herrera <[email protected]> writes:\n>> So I'm not sure that specifying the class=\"parameter\" bit does anything in\n>> reality, or that changing lines to add or remove it will have any effect.\n> \n> I concluded a long time ago that it does nothing.\n\nIt does nothing in our current doc rendering, but if someone would like to\nrender docs with another style where it does make a difference it seems\nunhelpful to not be consistent.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Wed, 4 Oct 2023 16:49:30 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: --sync-method isn't documented to take an argument" }, { "msg_contents": "Daniel Gustafsson <[email protected]> writes:\n> On 4 Oct 2023, at 16:39, Tom Lane <[email protected]> wrote:\n>> I concluded a long time ago that it does nothing.\n\n> It does nothing in our current doc rendering, but if someone would like to\n> render docs with another style where it does make a difference it seems\n> unhelpful to not be consistent.\n\nTo do that, we'd need some sort of agreement on what the possible\n\"class\" values are and when to use each one. I've never seen any\ndocumentation about that.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 04 Oct 2023 10:51:59 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: --sync-method isn't documented to take an argument" }, { "msg_contents": "On Wed, Oct 04, 2023 at 09:08:57AM -0400, Robert Haas wrote:\n> The various command-line utilities that have recently acquired a\n> --sync-method option document it like this:\n>\n> <term><option>--sync-method</option></term>\n>\n> But that is not how we document options which take an argument. We do\n> it like this:\n>\n> <term><option>--pgdata=<replaceable>directory</replaceable></option></term>\n> <term><option>--filenode=<replaceable>filenode</replaceable></option></term>\n>\n> etc.\n>\n> This one should be something like this:\n>\n> <term><option>--sync-method=<replaceable>method</replaceable></option></term>\n\nWhoops. Thanks for pointing this out. I'll get this fixed.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 4 Oct 2023 09:56:35 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: --sync-method isn't documented to take an argument" }, { "msg_contents": "> On 4 Oct 2023, at 16:51, Tom Lane <[email protected]> wrote:\n> \n> Daniel Gustafsson <[email protected]> writes:\n>> On 4 Oct 2023, at 16:39, Tom Lane <[email protected]> wrote:\n>>> I concluded a long time ago that it does nothing.\n> \n>> It does nothing in our current doc rendering, but if someone would like to\n>> render docs with another style where it does make a difference it seems\n>> unhelpful to not be consistent.\n> \n> To do that, we'd need some sort of agreement on what the possible\n> \"class\" values are and when to use each one. I've never seen any\n> documentation about that.\n\nThats fair. The 4.5 docbook guide isn't terribly helpful on what the\nattributes mean and how they should be used:\n\n\thttps://tdg.docbook.org/tdg/4.5/replaceable\n\nIn the 5.2 version the text is slightly expanded but not by all that much:\n\n\thttps://tdg.docbook.org/tdg/5.2/replaceable\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Wed, 4 Oct 2023 16:56:54 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: --sync-method isn't documented to take an argument" }, { "msg_contents": "On 2023-Oct-04, Robert Haas wrote:\n\n> The original issue I reported does make a real difference, though. :-)\n\nYes, absolutely, and I agree that it'd be better to get it fixed.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"Learn about compilers. Then everything looks like either a compiler or\na database, and now you have two problems but one of them is fun.\"\n https://twitter.com/thingskatedid/status/1456027786158776329\n\n\n", "msg_date": "Wed, 4 Oct 2023 17:00:26 +0200", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: --sync-method isn't documented to take an argument" }, { "msg_contents": "Daniel Gustafsson <[email protected]> writes:\n> On 4 Oct 2023, at 16:51, Tom Lane <[email protected]> wrote:\n>> To do that, we'd need some sort of agreement on what the possible\n>> \"class\" values are and when to use each one. I've never seen any\n>> documentation about that.\n\n> Thats fair. The 4.5 docbook guide isn't terribly helpful on what the\n> attributes mean and how they should be used:\n> \thttps://tdg.docbook.org/tdg/4.5/replaceable\n> In the 5.2 version the text is slightly expanded but not by all that much:\n> \thttps://tdg.docbook.org/tdg/5.2/replaceable\n\nOK, so now I know what the possible values of \"class\" are, but\nI'm still not seeing a reason why we shouldn't just assume that\n\"parameter\" is the only one of interest. We don't really have\nplaces where \"command\" or \"function\" would be appropriate AFAIR.\nAs for \"option\", what's the distinction between that and\n\"parameter\"? And why wouldn't I use \"<option>\" instead?\n\nEven if we did have uses for the other class values, I'm skeptical\nthat we'd ever use them consistently enough that there'd be\nvalue in rendering them differently.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 04 Oct 2023 11:07:07 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: --sync-method isn't documented to take an argument" }, { "msg_contents": "> On 4 Oct 2023, at 17:07, Tom Lane <[email protected]> wrote:\n\n> OK, so now I know what the possible values of \"class\" are, but\n> I'm still not seeing a reason why we shouldn't just assume that\n> \"parameter\" is the only one of interest.\n\nI think \"parameter\" is the only one of interest for this usecase (params to\napplication options), but it might not necessarily be applicable to other uses\nof <replaceable /> we have in the tree, like for example in bki.sgml:\n\n\t<replaceable>oprname(lefttype,righttype)</replaceable>\n\nBut since we don't know if anyone is rendering our docs with a custom style, or\never will, this is all very theoretical. Maybe skipping the class attribute is\nthe right choice for consistency?\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Wed, 4 Oct 2023 17:16:48 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: --sync-method isn't documented to take an argument" }, { "msg_contents": "On Wed, Oct 04, 2023 at 05:00:26PM +0200, Alvaro Herrera wrote:\n> On 2023-Oct-04, Robert Haas wrote:\n> \n>> The original issue I reported does make a real difference, though. :-)\n> \n> Yes, absolutely, and I agree that it'd be better to get it fixed.\n\nHere's a patch. I didn't address the class=\"parameter\" stuff at all. I\nfigured it would be best to handle that separately.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Wed, 4 Oct 2023 10:27:50 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: --sync-method isn't documented to take an argument" }, { "msg_contents": "On Wed, Oct 4, 2023 at 11:27 AM Nathan Bossart <[email protected]> wrote:\n> Here's a patch. I didn't address the class=\"parameter\" stuff at all. I\n> figured it would be best to handle that separately.\n\nI guess I'll vote for including class=parameter in this addition for\nnow, as that appears to be the majority position in the documentation\ntoday. If we get a consensus to change something, so be it. But also,\nif you don't want to do that, so be it.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 4 Oct 2023 11:51:32 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: --sync-method isn't documented to take an argument" }, { "msg_contents": "On Wed, Oct 04, 2023 at 11:51:32AM -0400, Robert Haas wrote:\n> On Wed, Oct 4, 2023 at 11:27 AM Nathan Bossart <[email protected]> wrote:\n>> Here's a patch. I didn't address the class=\"parameter\" stuff at all. I\n>> figured it would be best to handle that separately.\n> \n> I guess I'll vote for including class=parameter in this addition for\n> now, as that appears to be the majority position in the documentation\n> today. If we get a consensus to change something, so be it. But also,\n> if you don't want to do that, so be it.\n\nWFM. I'll add it before committing, which I plan to do later today.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 4 Oct 2023 11:12:14 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: --sync-method isn't documented to take an argument" }, { "msg_contents": "Robert Haas <[email protected]> writes:\n> On Wed, Oct 4, 2023 at 11:27 AM Nathan Bossart <[email protected]> wrote:\n>> Here's a patch. I didn't address the class=\"parameter\" stuff at all. I\n>> figured it would be best to handle that separately.\n\n> I guess I'll vote for including class=parameter in this addition for\n> now, as that appears to be the majority position in the documentation\n> today. If we get a consensus to change something, so be it. But also,\n> if you don't want to do that, so be it.\n\nFWIW, I just did a little sed hacking to count the instances of the\ndifferent cases in the docs as of today. I found\n\n 4038 <replaceable>\n 3 <replaceable class=\"command\">\n 4017 <replaceable class=\"parameter\">\n\nThe three with \"command\" are all in plpgsql.sgml, and are marking up\nquery strings in the synopses of EXECUTE and variants. I'm inclined\nto argue that those three are actually wrong, on the grounds that\n\n(1) From the perspective of EXECUTE, you could equally well say that\nthe string to be executed is a parameter;\n\n(2) Our general convention elsewhere is that \"command\" refers to a\ncommand type such as SELECT or UPDATE, not to a complete query string.\n\nIn any case, trying to standardize this looks like it would be a\nhuge amount of churn for very little gain. I'd recommend making\nyour markup look similar to what's immediately adjacent, if possible,\nand not sweating too much otherwise.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 04 Oct 2023 12:24:36 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: --sync-method isn't documented to take an argument" }, { "msg_contents": "On Wed, Oct 04, 2023 at 12:24:36PM -0400, Tom Lane wrote:\n> In any case, trying to standardize this looks like it would be a\n> huge amount of churn for very little gain. I'd recommend making\n> your markup look similar to what's immediately adjacent, if possible,\n> and not sweating too much otherwise.\n\nI matched the adjacent options as you suggested. Perhaps unsurprisingly,\nthe inclusion of class=\"parameter\" is not the only inconsistency. I also\nfound that pg_upgrade adds the </option> before the argument name!\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 4 Oct 2023 14:50:59 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: --sync-method isn't documented to take an argument" } ]
[ { "msg_contents": "Hi,\n\nI'm opening this thread after a brief discussion regarding a potential \nnew syntax to enable annotations in pg_hba entries. [1]\n\nThis feature mainly aims to annotate pg_hba entries in a way that the \nannotations can be parsed and displayed in the pg_hba_file_rule view for \nreporting purposes. For instance, these annotations could contain \ninformation like tags, client (application) names or any relevant info \nregarding the granted access.\n\nInitially I explored the possibility of using the inline comments after \na '#', but there were a few valid concerns to this approach [2]\n\nhostssl  db  jim  127.0.0.1/32  cert  map=foo  # comment\n\nI had previously thought of introducing a new character do identify such \nannotations, e.g [] ... but the necessary changes in the hba.c to add \nthis feature could add too much complexity to the code. [3]\n\nPerhaps a \"less controversial\" option would be to add a new variable, \njust like with user name maps.\n\nhostssl  db  jim  127.0.0.1/32  cert  map=foo  annotation=comment\nhostssl  db  jim  127.0.0.1/32  cert  map=bar annotation=\"comment\"\n\nAny thoughts?\n\nThanks!\n\nJim\n\n1- \nhttps://www.postgresql.org/message-id/flat/[email protected]\n2- \nhttps://www.postgresql.org/message-id/E543222B-DE8D-4116-BA67-3C2D3FA83110%40yesql.se\n3- \nhttps://www.postgresql.org/message-id/flat/ZPHAiNp%2ByKMsa/vc%40paquier.xyz#05a8405be272342037538ee432d92884 \n\n\n\n", "msg_date": "Wed, 4 Oct 2023 22:03:38 +0200", "msg_from": "Jim Jones <[email protected]>", "msg_from_op": true, "msg_subject": "Add annotation syntax to pg_hba.conf entries" }, { "msg_contents": "On Wed, Oct 4, 2023 at 4:06 PM Jim Jones <[email protected]> wrote:\n> Any thoughts?\n\nYou're probably not going to like this answer very much, but this\ndoesn't seem particularly worthwhile to me. If somebody needs to\ndocument why they did something in pg_hba.conf, they can already put a\ncomment in the file to explain that. Or they can track the reasons for\nwhat's in the file using some completely external system, like a\nGoogle document or a git repository or whatever. The argument for this\nfeature is not that this information needs to exist, but that it needs\nto be queryable from within PostgreSQL. And I guess I just wonder if\nthat is something that users in general want. It's not a terrible idea\nor anything, but it would be sad if we added such a feature and you\nwere the only one who ever used it... and if a bunch of people now\nshow up and say \"actually, this would be great, I would totally like\nto have that,\" well, then, forget I said anything.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 4 Oct 2023 16:18:50 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add annotation syntax to pg_hba.conf entries" }, { "msg_contents": "Robert Haas <[email protected]> writes:\n> You're probably not going to like this answer very much, but this\n> doesn't seem particularly worthwhile to me.\n\nYeah, I was unconvinced about the number of use-cases too.\nAs you say, some support from other potential users could convince\nme otherwise, but right now the evidence seems thin.\n\n> The argument for this\n> feature is not that this information needs to exist, but that it needs\n> to be queryable from within PostgreSQL.\n\nNot only that, but that it needs to be accessible via the\npg_hba_file_rules view. Superusers could already see the\npg_hba file's contents via pg_read_file().\n\nAgain, that's not an argument that this is a bad idea.\nBut it's an answer that would likely satisfy some fraction\nof whatever potential users are out there, which makes the\nquestion of how many use-cases really exist even more\npressing.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 04 Oct 2023 18:55:44 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add annotation syntax to pg_hba.conf entries" }, { "msg_contents": "Hi Robert, Hi Tom,\n\nThanks for the feedback!\n\nOn 05.10.23 00:55, Tom Lane wrote:\n> Robert Haas <[email protected]> writes:\n>> You're probably not going to like this answer very much, but this\n>> doesn't seem particularly worthwhile to me.\n> Yeah, I was unconvinced about the number of use-cases too.\n> As you say, some support from other potential users could convince\n> me otherwise, but right now the evidence seems thin.\nMost likely I am one of the very few using comments to sort of\nsemantically annotate pg_hba entries :)\n>> The argument for this\n>> feature is not that this information needs to exist, but that it needs\n>> to be queryable from within PostgreSQL.\n> Not only that, but that it needs to be accessible via the\n> pg_hba_file_rules view. Superusers could already see the\n> pg_hba file's contents via pg_read_file().\nThat's my current strategy. I will keep doing that :)\n> Again, that's not an argument that this is a bad idea.\n> But it's an answer that would likely satisfy some fraction\n> of whatever potential users are out there, which makes the\n> question of how many use-cases really exist even more\n> pressing.\n>\n> \t\t\tregards, tom lane\n\nI'll withdraw the CF entry, since the feature didn't seem to resonate\nwith other users.\n\nThanks again for the feedback.\n\nBest, Jim\n\n\n", "msg_date": "Tue, 10 Oct 2023 09:43:10 +0200", "msg_from": "Jim Jones <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add annotation syntax to pg_hba.conf entries" } ]
[ { "msg_contents": "\nHiya Hackers!\n\nSo I have some good news! At long last I've found a company/manager that \nwants to actually factually pay me to do some work on PG!!\n\nHad my performance review today, and Apple wants me to get a patch \naccepted this quarter, with the promise of more to come after that. \nLuckily, this first patch can be anything (doesn't have to be of use to \nApple, more to prove that I can get a patch accepted), so I'm open to \nsuggestions of smaller stuff that is in demand at the moment.\n\nFor the proposal (this one is a bit Apple specific): because my team \noffers managed postgres to our Apple-internal customers, many of whom \nare not database experts, or at least not postgres experts, we'd like to \nbe able to toggle the availability of UNLOGGED tables in \npostgresql.conf, so our less clueful users have fewer footguns.\n\nSo, my proposal is for a GUC to implement that, which would *OF COURSE* \nundefault to allowing UNLOGGED.\n\nThe reasoning here is we have autofailover set up for our standard \ncluster offering that we give to customers, using sync-rep to guarantee \nno data loss if we flop to the HA node. Any users not up to speed on \nwhat UNLOGGED really means could inadvertently lose data, so we'd like \nto be able to force it to be off, and turn it on upon request after \neducating the customer in question it's it's a valid use case.\n\nSo to begin with: I'm sure some folks will hate this idea, but maybe a \ngood many wont, and remember, this would default to UNLOGGED enabled, so \nno change to current behaviour. and no encouragement to disallow it, but \njust the ability to do so, which i think is useful in \nhosted/managed/multi-tenant environment where most things are taken care \nof for the customer.\n\nSo I'd like to get a general idea how likely this would be to getting \naccepted if it did it, and did it right?\n\nLet the flame war begin!\n\nPS: I'm SO happy that this phase of my postgres journey has finally \nstarted!!!!\n-- \nJon Erdman (aka StuckMojo on IRC)\n PostgreSQL Zealot\n\n\n", "msg_date": "Thu, 5 Oct 2023 02:22:26 +0000", "msg_from": "=?UTF-8?Q?Jon_Erdman?= <[email protected]>", "msg_from_op": true, "msg_subject": "Good News Everyone! + feature proposal" }, { "msg_contents": "On Thu, 2023-10-05 at 02:22 +0000, Jon Erdman wrote:\n\n> For the proposal (this one is a bit Apple specific): because my team \n> offers managed postgres to our Apple-internal customers, many of whom \n> are not database experts, or at least not postgres experts, we'd like to \n> be able to toggle the availability of UNLOGGED tables in \n> postgresql.conf, so our less clueful users have fewer footguns.\n> \n> So, my proposal is for a GUC to implement that, which would *OF COURSE* \n> undefault to allowing UNLOGGED.\n\nIt certainly sounds harmless, but there are two things that make me\nunhappy about this:\n\n- Yet another GUC. It's not like we don't have enough of them.\n (This is a small quibble.)\n\n- This setting would influence the way SQL is processed.\n We have had bad experiences with those; an often-quoted example is\n the \"autocommit\" parameter that got removed in 7.4.\n This certainly is less harmfuls, but still another pitfall that\n can confuse users.\n\nThis reminds me of the proposal for a GUC to forbid UPDATE and DELETE\nwithout a WHERE clause. That didn't get anywhere, see\nhttps://www.postgresql.org/message-id/flat/20160721045746.GA25043%40fetter.org\n\n> PS: I'm SO happy that this phase of my postgres journey has finally \n> started!!!!\n\nI am happy for you.\n\nPlease don't be discouraged if some of your patches get stuck because\nno consensus can be reached or because nobody cares enough. Your\ncontributions are still welcome. One good way to gain experience\nis to review others' patches. In fact, you are expected to do that\nif you submit your own.\n\nYours,\nLaurenz Albe\n\n\n", "msg_date": "Thu, 05 Oct 2023 11:22:01 +0200", "msg_from": "Laurenz Albe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Good News Everyone! + feature proposal" }, { "msg_contents": "Hi Jon,\n\n> Had my performance review today, and Apple wants me to get a patch\n> accepted this quarter, with the promise of more to come after that.\n> Luckily, this first patch can be anything (doesn't have to be of use to\n> Apple, more to prove that I can get a patch accepted), so I'm open to\n> suggestions of smaller stuff that is in demand at the moment.\n\nMy sincere congratulations!\n\n From personal experience however delivering any non-trivial patch may\ntake from several years up to infinity even if the RFC is in agreement\nwith the community and generally everyone is enthusiastic about the\nproposal change. Take \"Clarify the behavior of the system when\napproaching XID wraparound\" [1] as a recent example. It's a fairly\nsimple change but after 10 months it's only yet to be committed. I\nknow people who were working on a single patch for 5 years.\n\nPlease make sure your employer understands the specifics of working on\nopen source, especially the fact that no one cares about this\nemployer's internal deadlines, and also that this is reflected in your\nteam metrics. There are also many other things to be mindful of. I\nwould recommend making sure that your team owns only one product (i.e.\nPostgreSQL Upstream), no extensions, no forks etc. Make sure the team\ncharter reflects this, otherwise other products will always be a\npriority.\n\nRegarding your deliverables for this quarter. If the size of the patch\nis not critical, I would suggest focusing on simple refactorings and\nalso code reviews. Especially code reviews. Practice shows that it's\nrealistic for one person to deliver somewhere between 10 to 20 patches\nper quarter this way. Then compare the number you got to the average\namount of patches one person (except for the Core Team) typically\ncontributes. Your goal is to be above the median. If on top of that\nyou are able, lets say, to make internal technical talks about\nPostgreSQL internals and/or speak at conferences and/or ... this will\nlook great on your PFD and your manager will be extremely happy with\nyour performance.\n\nI know this may sound like gaming the metrics or something but this is\nexactly how large companies work.\n\nI honestly wish you all the best at your new job and will be happy to\nshare my findings regarding building the processes around OSS\ndevelopment. Please don't hesitate reaching out to me off-list.\n\n[1]: https://commitfest.postgresql.org/45/4128/\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Thu, 5 Oct 2023 12:24:11 +0300", "msg_from": "Aleksander Alekseev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Good News Everyone! + feature proposal" }, { "msg_contents": "On Thu, Oct 5, 2023 at 1:52 AM Jon Erdman <[email protected]> wrote:\n>\n> So I have some good news! At long last I've found a company/manager that\n> wants to actually factually pay me to do some work on PG!!\n\nCongratulations!\n\n> For the proposal (this one is a bit Apple specific): because my team\n> offers managed postgres to our Apple-internal customers, many of whom\n> are not database experts, or at least not postgres experts, we'd like to\n> be able to toggle the availability of UNLOGGED tables in\n> postgresql.conf, so our less clueful users have fewer footguns.\n>\n> So, my proposal is for a GUC to implement that, which would *OF COURSE*\n> undefault to allowing UNLOGGED.\n\nThat was difficult to parse at first glance. I guess you mean the\nGUC's default value will not change the current behaviour, as you\nmention below.\n\n> The reasoning here is we have autofailover set up for our standard\n> cluster offering that we give to customers, using sync-rep to guarantee\n> no data loss if we flop to the HA node. Any users not up to speed on\n> what UNLOGGED really means could inadvertently lose data, so we'd like\n> to be able to force it to be off, and turn it on upon request after\n> educating the customer in question it's it's a valid use case.\n>\n> So to begin with: I'm sure some folks will hate this idea, but maybe a\n> good many wont, and remember, this would default to UNLOGGED enabled, so\n> no change to current behaviour. and no encouragement to disallow it, but\n> just the ability to do so, which i think is useful in\n> hosted/managed/multi-tenant environment where most things are taken care\n> of for the customer.\n\nI see the need to disable this feature and agree that some\ninstallations may need it, where the users are not savvy enough to\nrealize its dangers and fall for its promise to increase INSERT/UPDATE\nperformance. Your specific example of an internal hosted/managed\nservice is a good example. Even in smaller installations the DBA might\nwant to disable this, so that unwary developers don't willy-nilly\ncreate unlogged tables and end up losing data after a failover. I hope\nothers can provide more examples and situations where this may be\nuseful, to make it obvious that we need this feature.\n\nMy first reaction was to make it a GRANTable permission. But since one\ncan always do `ALTER USER savvy_user SET allow_unlogged_tables TO\ntrue` and override the system-wide setting in postgresql.conf, for a\nspecific user, I feel a GUC would be the right way to implement it.\n\nThe complaint about too many GUCs is a valid one, but I'd worry more\nabout it if it were an optimizer/performance improvement being hidden\nbehind a GUC. This new GUC would be a on-off switch to override the\nSQL/grammar feature provided by the UNLOGGED keyword, hence not really\na concern IMO.\n\n> So I'd like to get a general idea how likely this would be to getting\n> accepted if it did it, and did it right?\n\nLike others said, there are no guarantees. A working patch may help\nguide people's opinion one way or the other, so I'd work on submitting\na patch while (some) people are in agreement.\n\n> Let the flame war begin!\n\nHeh. I'm sure you already know this, but this community's flame wars\nare way more timid compared to what members of other communities may\nbe used to :-) I consider it lucky if someone throws as much as a lit\nmatch.\n\n> PS: I'm SO happy that this phase of my postgres journey has finally\n> started!!!!\n\nI, too, am very happy for you! :-)\n\n> Jon Erdman (aka StuckMojo on IRC)\n\nTIL.\n\nI wish there was a directory of IRC identities that pointed to real\nidentities (at least for folks who don't mind this mapping available\nin the open), so that when we converse in IRC, we have a face to go\nwith the IRC handles. As a human I feel that necessity.\n\nBest regards,\nGurjeet\nhttp://Gurje.et\n\n\n", "msg_date": "Thu, 5 Oct 2023 05:10:37 -0700", "msg_from": "Gurjeet Singh <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Good News Everyone! + feature proposal" }, { "msg_contents": "Laurenz Albe <[email protected]> writes:\n> On Thu, 2023-10-05 at 02:22 +0000, Jon Erdman wrote:\n>> For the proposal (this one is a bit Apple specific): because my team \n>> offers managed postgres to our Apple-internal customers, many of whom \n>> are not database experts, or at least not postgres experts, we'd like to \n>> be able to toggle the availability of UNLOGGED tables in \n>> postgresql.conf, so our less clueful users have fewer footguns.\n\nI'm doubtful that this is a problem that needs a solution.\nIf anything, the right answer is to fix whatever part of the\ndocumentation isn't warning of the hazards strongly enough.\n\nEven more to the point: if we accept this, how many other\nfootgun-preventing GUCs will have the same or stronger claim to\nexistence?\n\n> It certainly sounds harmless, but there are two things that make me\n> unhappy about this:\n\n> - Yet another GUC. It's not like we don't have enough of them.\n> (This is a small quibble.)\n\n> - This setting would influence the way SQL is processed.\n> We have had bad experiences with those; an often-quoted example is\n> the \"autocommit\" parameter that got removed in 7.4.\n> This certainly is less harmfuls, but still another pitfall that\n> can confuse users.\n\nSame objections here. Also note that the semantics we've defined\nfor GUCs (when they can be set and where) don't always line up\nnicely with requirements of this sort. It's far from clear to me\nwhether such a GUC should be SUSET (making it a hard prohibition\nfor ordinary users) or USERSET (making it just a training wheel).\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 05 Oct 2023 09:53:16 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Good News Everyone! + feature proposal" }, { "msg_contents": "On Wednesday, October 4, 2023, Jon Erdman <[email protected]> wrote:\n\n>\n> So I'd like to get a general idea how likely this would be to getting\n> accepted if it did it, and did it right?\n>\n\nRun a cron job checking for them. Allow for overrides by adding a comment\nto any unclogged tables you’ve identified as being acceptable.\n\nDavid J.\n\nOn Wednesday, October 4, 2023, Jon Erdman <[email protected]> wrote:So I'd like to get a general idea how likely this would be to getting \naccepted if it did it, and did it right?\nRun a cron job checking for them.  Allow for overrides by adding a comment to any unclogged tables you’ve identified as being acceptable.David J.", "msg_date": "Thu, 5 Oct 2023 07:04:15 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Good News Everyone! + feature proposal" }, { "msg_contents": "\nOn 10/5/23 8:53 AM, Tom Lane wrote:\n> Laurenz Albe <[email protected]> writes:\n>> On Thu, 2023-10-05 at 02:22 +0000, Jon Erdman wrote:\n>>> For the proposal (this one is a bit Apple specific): because my team\n>>> offers managed postgres to our Apple-internal customers, many of whom\n>>> are not database experts, or at least not postgres experts, we'd like to\n>>> be able to toggle the availability of UNLOGGED tables in\n>>> postgresql.conf, so our less clueful users have fewer footguns.\n> \n> I'm doubtful that this is a problem that needs a solution.\n> If anything, the right answer is to fix whatever part of the\n> documentation isn't warning of the hazards strongly enough.\n> \n> Even more to the point: if we accept this, how many other\n> footgun-preventing GUCs will have the same or stronger claim to\n> existence?\n> \n>> It certainly sounds harmless, but there are two things that make me\n>> unhappy about this:\n> \n>> - Yet another GUC. It's not like we don't have enough of them.\n>> (This is a small quibble.)\n> \n>> - This setting would influence the way SQL is processed.\n>> We have had bad experiences with those; an often-quoted example is\n>> the \"autocommit\" parameter that got removed in 7.4.\n>> This certainly is less harmfuls, but still another pitfall that\n>> can confuse users.\n> \n> Same objections here. Also note that the semantics we've defined\n> for GUCs (when they can be set and where) don't always line up\n> nicely with requirements of this sort. It's far from clear to me\n> whether such a GUC should be SUSET (making it a hard prohibition\n> for ordinary users) or USERSET (making it just a training wheel).\n\nSomeone on linked-in suggested an event trigger, so now I'm thinking of \na custom extension that would do nothing but create said event trigger, \nand maybe could be toggled with a customized setting (but that might \nallow users to turn it off themselves...which is maybe ok).\n\nIf the extension were installed by the DBA user, the customer wouldn't \nbe able to drop it, and if we decided to give them an exception, we just \ndrop or disable the extension.\n\nAs a second more general question: could my original idea (i.e. sans \nevent trigger) be implemented in an extension somehow, or is that not \ntechnically possible (I suspect not)?\n-- \nJon Erdman (aka StuckMojo on IRC)\n PostgreSQL Zealot\n\n\n", "msg_date": "Thu, 5 Oct 2023 14:58:05 +0000", "msg_from": "=?UTF-8?Q?Jon_Erdman?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Good News Everyone! + feature proposal" }, { "msg_contents": "On Thu, Oct 5, 2023 at 11:11 PM Jon Erdman <[email protected]> wrote:\n>\n> As a second more general question: could my original idea (i.e. sans\n> event trigger) be implemented in an extension somehow, or is that not\n> technically possible (I suspect not)?\n\nIt should be easy to do using the ProcessUtility_hook hook, defined in\na custom module written in C. As long as your module is preloaded\n(one of the *_preload_libraries GUC), your code will be called without\nthe need for any SQL-level object and you would be free to add any\ncustom GUC you want to enable it on a per-user basis or anything else.\n\n\n", "msg_date": "Fri, 6 Oct 2023 00:02:15 +0800", "msg_from": "Julien Rouhaud <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Good News Everyone! + feature proposal" }, { "msg_contents": "On Thu, 2023-10-05 at 14:58 +0000, Jon Erdman wrote:\n> > > > For the proposal (this one is a bit Apple specific): because my team\n> > > > offers managed postgres to our Apple-internal customers, many of whom\n> > > > are not database experts, or at least not postgres experts, we'd like to\n> > > > be able to toggle the availability of UNLOGGED tables in\n> > > > postgresql.conf, so our less clueful users have fewer footguns.\n> \n> Someone on linked-in suggested an event trigger, so now I'm thinking of \n> a custom extension that would do nothing but create said event trigger, \n> and maybe could be toggled with a customized setting (but that might \n> allow users to turn it off themselves...which is maybe ok).\n\nAn event trigger is the perfect solution for this requirement.\n\n> If the extension were installed by the DBA user, the customer wouldn't \n> be able to drop it, and if we decided to give them an exception, we just \n> drop or disable the extension.\n\nRight. Also, only a superuser can create or drop event triggers.\n\n> As a second more general question: could my original idea (i.e. sans \n> event trigger) be implemented in an extension somehow, or is that not \n> technically possible (I suspect not)?\n\nYou could perhaps use \"object_access_hook\" in an extension.\n\nYours,\nLaurenz Albe\n\n\n", "msg_date": "Thu, 05 Oct 2023 21:54:29 +0200", "msg_from": "Laurenz Albe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Good News Everyone! + feature proposal" }, { "msg_contents": "On Thu, Oct 5, 2023 at 05:10:37AM -0700, Gurjeet Singh wrote:\n> I wish there was a directory of IRC identities that pointed to real\n> identities (at least for folks who don't mind this mapping available\n> in the open), so that when we converse in IRC, we have a face to go\n> with the IRC handles. As a human I feel that necessity.\n\nThere is:\n\n\thttps://wiki.postgresql.org/wiki/IRC2RWNames\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Fri, 6 Oct 2023 15:52:22 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Good News Everyone! + feature proposal" } ]
[ { "msg_contents": "Suppose we start with this nbtree (subset of a diagram from verify_nbtree.c):\n\n * 1\n * / \\\n * 2 <-> 3\n\nWe're deleting 2, the leftmost leaf under a leftmost internal page. After the\nMARK_PAGE_HALFDEAD record, the first downlink from 1 will lead to 3, which\nstill has a btpo_prev pointing to 2. bt_index_parent_check() complains here:\n\n\t\t/* The first page we visit at the level should be leftmost */\n\t\tif (first && !BlockNumberIsValid(state->prevrightlink) && !P_LEFTMOST(opaque))\n\t\t\tereport(ERROR,\n\t\t\t\t\t(errcode(ERRCODE_INDEX_CORRUPTED),\n\t\t\t\t\t errmsg(\"the first child of leftmost target page is not leftmost of its level in index \\\"%s\\\"\",\n\t\t\t\t\t\t\tRelationGetRelationName(state->rel)),\n\t\t\t\t\t errdetail_internal(\"Target block=%u child block=%u target page lsn=%X/%X.\",\n\t\t\t\t\t\t\t\t\t\tstate->targetblock, blkno,\n\t\t\t\t\t\t\t\t\t\tLSN_FORMAT_ARGS(state->targetlsn))));\n\nOne can encounter this if recovery ends between a MARK_PAGE_HALFDEAD record\nand its corresponding UNLINK_PAGE record. See the attached test case. The\nindex is actually fine in such a state, right? I lean toward fixing this by\nhaving amcheck scan left; if left links reach only half-dead or deleted pages,\nthat's as good as the present child block being P_LEFTMOST. There's a\ndifferent error from bt_index_check(), and I've not yet studied how to fix\nthat:\n\n ERROR: left link/right link pair in index \"not_leftmost_pk\" not in agreement\n DETAIL: Block=0 left block=0 left link from block=4294967295.\n\nAlternatively, one could view this as a need for the user to VACUUM between\nrecovery and amcheck. The documentation could direct users to \"VACUUM\n(DISABLE_PAGE_SKIPPING off, INDEX_CLEANUP on, TRUNCATE off)\" if not done since\nlast recovery. Does anyone prefer that or some other alternative?\n\nFor some other amcheck expectations, the comments suggest reliance on the\nbt_index_parent_check() ShareLock. I haven't tried to make test cases for\nthem, but perhaps recovery can trick them the same way. Examples:\n\n errmsg(\"downlink or sibling link points to deleted block in index \\\"%s\\\"\",\n errmsg(\"block %u is not leftmost in index \\\"%s\\\"\",\n errmsg(\"block %u is not true root in index \\\"%s\\\"\",\n\nThanks,\nnm", "msg_date": "Wed, 4 Oct 2023 19:52:32 -0700", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": true, "msg_subject": "post-recovery amcheck expectations" }, { "msg_contents": "On Wed, Oct 4, 2023 at 7:52 PM Noah Misch <[email protected]> wrote:\n> Suppose we start with this nbtree (subset of a diagram from verify_nbtree.c):\n>\n> * 1\n> * / \\\n> * 2 <-> 3\n>\n> We're deleting 2, the leftmost leaf under a leftmost internal page. After the\n> MARK_PAGE_HALFDEAD record, the first downlink from 1 will lead to 3, which\n> still has a btpo_prev pointing to 2. bt_index_parent_check() complains here:\n\nThanks for working on this. Your analysis seems sound to me.\n\nWhen I reviewed the patch that became commit d114cc53, I presented\nAlexander with a test case that demonstrated false positive reports of\ncorruption involving interrupted page splits (not interrupted page\ndeletions). Obviously I didn't give sufficient thought to this case,\nwhich is analogous.\n\nMight make sense to test the fix for this issue using a similar\napproach: by adding custom code that randomly throws errors at a point\nthat stresses the implementation. I'm referring to the point at which\nVACUUM is between the first and second phase of page deletion: right\nbefore (or directly after) _bt_unlink_halfdead_page() is called. That\nisn't fundamentally different to the approach from your TAP test, but\nseems like it might add some interesting variation.\n\n> One can encounter this if recovery ends between a MARK_PAGE_HALFDEAD record\n> and its corresponding UNLINK_PAGE record. See the attached test case. The\n> index is actually fine in such a state, right?\n\nYes, it is fine.\n\nFWIW, this feels like it might be related to the fact that (unlike\nLanin & Shasha), we don't make the key space move left; we make it\nmove right instead (just like page splits). In other words, page\ndeletion isn't the precise opposite of a page split, which is a bit\nawkward.\n\nNote, in particular, that _bt_mark_page_halfdead() doesn't do a\nstraight delete of the pivot tuple in the parent page that points to\nthe target page, as you might expect. It actually deletes the right\nsibling of the target page's pivot, and then performs an in-place overwrite of\nthe downlink from the pivot tuple that originally pointed to the\ntarget page. Perhaps this isn't worth going into now, but thought you\nmight appreciate the context.\n\nTerminology note: we sometimes use \"downlink\" as a synonym of \"pivot\ntuple\" or even \"separator key\", which is misleading.\n\n> I lean toward fixing this by\n> having amcheck scan left; if left links reach only half-dead or deleted pages,\n> that's as good as the present child block being P_LEFTMOST.\n\nAlso my preference.\n\n> There's a\n> different error from bt_index_check(), and I've not yet studied how to fix\n> that:\n>\n> ERROR: left link/right link pair in index \"not_leftmost_pk\" not in agreement\n> DETAIL: Block=0 left block=0 left link from block=4294967295.\n\nThis looks like this might be a straightforward case of confusing\nP_NONE for InvalidBlockNumber (or vice-versa). They're both used to\nindicate \"no such block\" here.\n\n> Alternatively, one could view this as a need for the user to VACUUM between\n> recovery and amcheck. The documentation could direct users to \"VACUUM\n> (DISABLE_PAGE_SKIPPING off, INDEX_CLEANUP on, TRUNCATE off)\" if not done since\n> last recovery. Does anyone prefer that or some other alternative?\n\nI'd rather not go that route. That strikes me as defining the problem\nout of existence.\n\n> For some other amcheck expectations, the comments suggest reliance on the\n> bt_index_parent_check() ShareLock. I haven't tried to make test cases for\n> them, but perhaps recovery can trick them the same way. Examples:\n>\n> errmsg(\"downlink or sibling link points to deleted block in index \\\"%s\\\"\",\n> errmsg(\"block %u is not leftmost in index \\\"%s\\\"\",\n> errmsg(\"block %u is not true root in index \\\"%s\\\"\",\n\nThese are all much older. They're certainly all from before the\nrelevant checks were first added (by commit d114cc53), and seem much\nless likely to be buggy.\n\nThese older cases are all cases where we descend directly from an\ninternal page to one of its child pages. Whereas the problem you've\ndemonstrated involves traversal across levels *and* across siblings in\nnewer code. That's quite a bit more complicated, since it requires\nthat we worry about both phases of page deletion -- not just the\nfirst. That in itself necessitates that we deal with various edge\ncases. (The really prominent edge-case is the interrupted page\ndeletion case, which requires significant handling, but evidently\nmissed a subtlety with leftmost pages).\n\n\n--\nPeter Geoghegan\n\n\n", "msg_date": "Mon, 9 Oct 2023 16:46:26 -0700", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: post-recovery amcheck expectations" }, { "msg_contents": "On Mon, Oct 09, 2023 at 04:46:26PM -0700, Peter Geoghegan wrote:\n> On Wed, Oct 4, 2023 at 7:52 PM Noah Misch <[email protected]> wrote:\n\n> Might make sense to test the fix for this issue using a similar\n> approach: by adding custom code that randomly throws errors at a point\n> that stresses the implementation. I'm referring to the point at which\n> VACUUM is between the first and second phase of page deletion: right\n> before (or directly after) _bt_unlink_halfdead_page() is called. That\n> isn't fundamentally different to the approach from your TAP test, but\n> seems like it might add some interesting variation.\n\nMy initial manual test was like that, actually.\n\n> > I lean toward fixing this by\n> > having amcheck scan left; if left links reach only half-dead or deleted pages,\n> > that's as good as the present child block being P_LEFTMOST.\n> \n> Also my preference.\n\nDone mostly that way, except I didn't accept deleted pages. Making this work\non !readonly would take more than that, and readonly shouldn't need that.\n\n> > There's a\n> > different error from bt_index_check(), and I've not yet studied how to fix\n> > that:\n> >\n> > ERROR: left link/right link pair in index \"not_leftmost_pk\" not in agreement\n> > DETAIL: Block=0 left block=0 left link from block=4294967295.\n> \n> This looks like this might be a straightforward case of confusing\n> P_NONE for InvalidBlockNumber (or vice-versa). They're both used to\n> indicate \"no such block\" here.\n\nRoughly so. It turned out this scenario was passing leftcurrent=P_NONE to\nbt_recheck_sibling_links(), causing that function to use BTPageGetOpaque() on\nthe metapage.\n\n> > For some other amcheck expectations, the comments suggest reliance on the\n> > bt_index_parent_check() ShareLock. I haven't tried to make test cases for\n> > them, but perhaps recovery can trick them the same way. Examples:\n> >\n> > errmsg(\"downlink or sibling link points to deleted block in index \\\"%s\\\"\",\n> > errmsg(\"block %u is not leftmost in index \\\"%s\\\"\",\n> > errmsg(\"block %u is not true root in index \\\"%s\\\"\",\n> \n> These are all much older. They're certainly all from before the\n> relevant checks were first added (by commit d114cc53), and seem much\n> less likely to be buggy.\n\nAfter I fixed the original error, the \"block %u is not leftmost\" surfaced\nnext. The attached patch fixes that, too. I didn't investigate the others.\nThe original test was flaky in response to WAL flush timing, but this one\nsurvives thousands of runs.", "msg_date": "Fri, 20 Oct 2023 20:54:57 -0700", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": true, "msg_subject": "Re: post-recovery amcheck expectations" }, { "msg_contents": "On Fri, Oct 20, 2023 at 8:55 PM Noah Misch <[email protected]> wrote:\n> > > I lean toward fixing this by\n> > > having amcheck scan left; if left links reach only half-dead or deleted pages,\n> > > that's as good as the present child block being P_LEFTMOST.\n> >\n> > Also my preference.\n>\n> Done mostly that way, except I didn't accept deleted pages. Making this work\n> on !readonly would take more than that, and readonly shouldn't need that.\n\nThat makes sense to me. I believe that it's not possible to have a\nstring of consecutive sibling pages that are all half-dead (regardless\nof the BlockNumber order of sibling pages, even). But I'd probably\nhave written the fix in roughly the same way. Although...maybe you\nshould try to detect a string of half-dead pages? Hard to say if it's\nworth the trouble.\n\nSuggest adding a CHECK_FOR_INTERRUPTS() call to the loop, too, just\nfor good luck.\n\n> After I fixed the original error, the \"block %u is not leftmost\" surfaced\n> next. The attached patch fixes that, too. I didn't investigate the others.\n> The original test was flaky in response to WAL flush timing, but this one\n> survives thousands of runs.\n\nHmm. Can't argue with that. Your fix seems sound.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 23 Oct 2023 16:46:23 -0700", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: post-recovery amcheck expectations" }, { "msg_contents": "On Mon, Oct 23, 2023 at 04:46:23PM -0700, Peter Geoghegan wrote:\n> On Fri, Oct 20, 2023 at 8:55 PM Noah Misch <[email protected]> wrote:\n> > > > I lean toward fixing this by\n> > > > having amcheck scan left; if left links reach only half-dead or deleted pages,\n> > > > that's as good as the present child block being P_LEFTMOST.\n> > >\n> > > Also my preference.\n> >\n> > Done mostly that way, except I didn't accept deleted pages. Making this work\n> > on !readonly would take more than that, and readonly shouldn't need that.\n> \n> That makes sense to me. I believe that it's not possible to have a\n> string of consecutive sibling pages that are all half-dead (regardless\n> of the BlockNumber order of sibling pages, even). But I'd probably\n> have written the fix in roughly the same way. Although...maybe you\n> should try to detect a string of half-dead pages? Hard to say if it's\n> worth the trouble.\n\nI imagined a string of half-dead siblings could arise in structure like this:\n\n * 1 \n * / | \\ \n * 4 <-> 2 <-> 3\n\nWith events like this:\n\n- DELETE renders blk 4 deletable.\n- Crash with concurrent VACUUM, leaving 4 half-dead after having visited 1-4.\n- DELETE renders blk 2 deletable.\n- Crash with concurrent VACUUM, leaving 2 half-dead after having visited 1-2.\n\nI didn't try to reproduce that, and something may well prevent it.\n\n> Suggest adding a CHECK_FOR_INTERRUPTS() call to the loop, too, just\n> for good luck.\n\nAdded. That gave me the idea to check for circular links, like other parts of\namcheck do. Net diff:\n\n--- a/contrib/amcheck/verify_nbtree.c\n+++ b/contrib/amcheck/verify_nbtree.c\n@@ -949,11 +949,16 @@ bt_leftmost_ignoring_half_dead(BtreeCheckState *state,\n \t\tPage\t\tpage = palloc_btree_page(state, reached);\n \t\tBTPageOpaque reached_opaque = BTPageGetOpaque(page);\n \n+\t\tCHECK_FOR_INTERRUPTS();\n+\n \t\t/*\n-\t\t * _bt_unlink_halfdead_page() writes that side-links will continue to\n-\t\t * point to the siblings. We can easily check btpo_next.\n+\t\t * Try to detect btpo_prev circular links. _bt_unlink_halfdead_page()\n+\t\t * writes that side-links will continue to point to the siblings.\n+\t\t * Check btpo_next for that property.\n \t\t */\n-\t\tall_half_dead = P_ISHALFDEAD(reached_opaque) &&\n+\t\tall_half_dead = P_ISHALFDEAD(reached_opaque);\n+\t\t\treached != start &&\n+\t\t\treached != reached_from &&\n \t\t\treached_opaque->btpo_next == reached_from;\n \t\tif (all_half_dead)\n \t\t{\n\n\n", "msg_date": "Mon, 23 Oct 2023 19:28:48 -0700", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": true, "msg_subject": "Re: post-recovery amcheck expectations" }, { "msg_contents": "On Mon, Oct 23, 2023 at 7:28 PM Noah Misch <[email protected]> wrote:\n> > That makes sense to me. I believe that it's not possible to have a\n> > string of consecutive sibling pages that are all half-dead (regardless\n> > of the BlockNumber order of sibling pages, even). But I'd probably\n> > have written the fix in roughly the same way. Although...maybe you\n> > should try to detect a string of half-dead pages? Hard to say if it's\n> > worth the trouble.\n>\n> I imagined a string of half-dead siblings could arise in structure like this:\n>\n> * 1\n> * / | \\\n> * 4 <-> 2 <-> 3\n>\n> With events like this:\n>\n> - DELETE renders blk 4 deletable.\n> - Crash with concurrent VACUUM, leaving 4 half-dead after having visited 1-4.\n> - DELETE renders blk 2 deletable.\n> - Crash with concurrent VACUUM, leaving 2 half-dead after having visited 1-2.\n>\n> I didn't try to reproduce that, and something may well prevent it.\n\nFWIW a couple of factors prevent it (in the absence of corruption). These are:\n\n1. Only VACUUM can delete pages, and in general the only possible\nsource of half-dead pages is an unfortunately timed crash/error within\nVACUUM. Each interrupted VACUUM can leave behind at most one half-dead\npage.\n\n2. One thing that makes VACUUM back out of deleting an empty page is\nthe presence of a half-dead right sibling leaf page left behind by\nsome VACUUM that was interrupted at some point in the past -- see\n_bt_rightsib_halfdeadflag() for details.\n\nObviously, factors 1 and 2 together make three consecutive half-dead\nsibling pages impossible. I'm not quite prepared to say that even two\nneighboring half-dead sibling pages are an impossibility right now,\nbut I think that it might well be. Possibly for reasons that are more\naccidental than anything else (is the _bt_rightsib_halfdeadflag thing\na \"real invariant\" or just something we do because we don't want to\nadd additional complicated handling to nbtpage.c?), so I'll avoid\ngoing into further detail for now.\n\nI'm pointing this out because it argues for softening the wording\nabout \"accept[ing] an arbitrarily-long chain of half-dead,\nsibling-linked pages to the left\" from your patch.\n\nI was also wondering (mostly to myself) about the relationship (if\nany) between the _bt_rightsib_halfdeadflag/_bt_leftsib_splitflag\n\"invariants\" and the bt_child_highkey_check() check. But I don't think\nthat it makes sense to put that in scope -- your fix seems like a\nstrict improvement. This relationship is perhaps in scope here to the\nlimited extent that talking about strings of consecutive half-dead\npages might make it even harder to understand the design of the\nbt_child_highkey_check() check. On the other hand...I'm not sure that\nI understand every nuance of it myself.\n\n> > Suggest adding a CHECK_FOR_INTERRUPTS() call to the loop, too, just\n> > for good luck.\n>\n> Added. That gave me the idea to check for circular links, like other parts of\n> amcheck do.\n\nGood idea.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 24 Oct 2023 19:03:34 -0700", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: post-recovery amcheck expectations" }, { "msg_contents": "On Tue, Oct 24, 2023 at 07:03:34PM -0700, Peter Geoghegan wrote:\n> On Mon, Oct 23, 2023 at 7:28 PM Noah Misch <[email protected]> wrote:\n> > > That makes sense to me. I believe that it's not possible to have a\n> > > string of consecutive sibling pages that are all half-dead (regardless\n> > > of the BlockNumber order of sibling pages, even). But I'd probably\n> > > have written the fix in roughly the same way. Although...maybe you\n> > > should try to detect a string of half-dead pages? Hard to say if it's\n> > > worth the trouble.\n> >\n> > I imagined a string of half-dead siblings could arise in structure like this:\n> >\n> > * 1\n> > * / | \\\n> > * 4 <-> 2 <-> 3\n> >\n> > With events like this:\n> >\n> > - DELETE renders blk 4 deletable.\n> > - Crash with concurrent VACUUM, leaving 4 half-dead after having visited 1-4.\n> > - DELETE renders blk 2 deletable.\n> > - Crash with concurrent VACUUM, leaving 2 half-dead after having visited 1-2.\n> >\n> > I didn't try to reproduce that, and something may well prevent it.\n> \n> FWIW a couple of factors prevent it (in the absence of corruption). These are:\n> \n> 1. Only VACUUM can delete pages, and in general the only possible\n> source of half-dead pages is an unfortunately timed crash/error within\n> VACUUM. Each interrupted VACUUM can leave behind at most one half-dead\n> page.\n\nAgreed.\n\n> 2. One thing that makes VACUUM back out of deleting an empty page is\n> the presence of a half-dead right sibling leaf page left behind by\n> some VACUUM that was interrupted at some point in the past -- see\n> _bt_rightsib_halfdeadflag() for details.\n> \n> Obviously, factors 1 and 2 together make three consecutive half-dead\n> sibling pages impossible.\n\nCan't it still happen if the sequence of unfortunately timed crashes causes\ndeletions from left to right? Take this example, expanding the one above.\nHalf-kill 4, crash, half-kill 3, crash, half-kill 2 in:\n\n * 1\n * / / | \\ \\\n * 4 <-> 3 <-> 2 <-> 1\n\n(That's not to say it has ever happened outside of a test.)\n\n\n", "msg_date": "Tue, 24 Oct 2023 20:04:59 -0700", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": true, "msg_subject": "Re: post-recovery amcheck expectations" }, { "msg_contents": "On Tue, Oct 24, 2023 at 8:05 PM Noah Misch <[email protected]> wrote:\n> Can't it still happen if the sequence of unfortunately timed crashes causes\n> deletions from left to right? Take this example, expanding the one above.\n> Half-kill 4, crash, half-kill 3, crash, half-kill 2 in:\n>\n> * 1\n> * / / | \\ \\\n> * 4 <-> 3 <-> 2 <-> 1\n>\n> (That's not to say it has ever happened outside of a test.)\n\nHmm. Perhaps you're right. I thought that this wasn't possible in part\ndue to the fact that you'd have to access all of these leaf pages in\nthe same order each time, without ever passing over a previous\nhalf-dead page. But I suppose that there's nothing stopping the index\ntuples from being deleted from each page in an order that leaves open\nthe possibility of something like this. (It's extremely unlikely, of\ncourse, but that wasn't ever in question.)\n\nI withdraw my suggestion about the wording from your patch. It seems\ncommittable.\n\nThanks\n--\nPeter Geoghegan\n\n\n", "msg_date": "Wed, 25 Oct 2023 15:45:54 -0700", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: post-recovery amcheck expectations" } ]
[ { "msg_contents": "Hi team,\n\n      We have observed that for logically same over clause two different window aggregate nodes are created in plan.  \nThe below query contains two window functions. Both Over clause contain the same partition & order clause in it. But in one over clause ordering option is mentioned as ascending but not in another over clause which represents the default option \"ascending\". \n\n \nSample Query:\n \n\n\n\n\n\n\n\n CREATE TEMPORARY TABLE empsalary (\n\n depname varchar,\n\n empno bigint,\n\n salary int,\n\n enroll_date date\n\n );\n\n\n\n INSERT INTO empsalary VALUES ('develop', 10, 5200, '2007-08-01'), ('sales', 1, 5000, '2006-10-01'),\n\n ('personnel', 5, 3500, '2007-12-10'),('sales', 4, 4800, '2007-08-08'),('personnel', 2, 3900, '2006-12-23'),\n\n ('develop', 7, 4200, '2008-01-01'),('develop', 9, 4500, '2008-01-01'),('sales', 3, 4800, '2007-08-01'),\n\n ('develop', 8, 6000, '2006-10-01'),('develop', 11, 5200, '2007-08-15');\n\n\n\n explain verbose select rank() over (partition by depname order by salary), percent_rank() over(partition by depname order by salary asc) from empsalary;\n\n                  QUERY PLAN\n\n ------------------------------------------------------------------------------------------\n\n     WindowAgg (cost=10000000074.54..10000000114.66 rows=1070 width=52)\n\n     Output: (rank() OVER (?)), percent_rank() OVER (?), salary, depname\n\n      -> WindowAgg (cost=10000000074.54..10000000095.94 rows=1070 width=44)\n\n          Output: salary, depname, rank() OVER (?)\n\n            -> Sort (cost=10000000074.54..10000000077.21 rows=1070 width=36)\n\n                Output: salary, depname\n\n                Sort Key: empsalary.depname, empsalary.salary\n\n                   -> Seq Scan on pg_temp_7.empsalary (cost=0.00..20.70 rows=1070 width=36)\n\n                       Output: salary, depname \n\n\n\n\n\nOrdering option of Sort is represented by enum SortByDir (parsenodes.h). \n\nList of SortBy is present in WindowDef structure which has info of order by clause in Over clause\n\n\n\n\n\n\n\n /* Sort ordering options for ORDER BY and CREATE INDEX */ \n\n typedef enum SortByDir\n\n {\n\n SORTBY_DEFAULT,\n\n SORTBY_ASC,\n\n SORTBY_DESC,\n\n SORTBY_USING /* not allowed in CREATE INDEX ... */\n\n } SortByDir; \n\n\n\n\n\n typedef struct SortBy\n\n {\n\n NodeTag type;\n\n Node *node; /* expression to sort on */\n\n SortByDir sortby_dir; /* ASC/DESC/USING/default */\n\n SortByNulls sortby_nulls; /* NULLS FIRST/LAST */\n\n List *useOp; /* name of op to use, if SORTBY_USING */\n\n int location; /* operator location, or -1 if none/unknown */\n\n } SortBy; \n\n\n\n\n\nIn transformWindowFuncCall API, Equality check of order clause in window definition failed while comparing SortByDir enum of both over clause i.e SORT_DEFAULT  is not equal to SORT_ASC. Hence two window clause are created in parse tree resulting in the creation of two different window aggregate node.\n\nThis check can be modified to form a single window aggregate node for the above results in faster query execution. Is there any reason for creating two different window aggregate node?\n\nThanks \nAnitha S\n\n \nHi team,      We have observed that for logically same over clause two different window aggregate nodes are created in plan.  The below query contains two window functions. Both Over clause contain the same partition & order clause in it. But in one over clause ordering option is mentioned as ascending but not in another over clause which represents the default option \"ascending\".  Sample Query: \n CREATE TEMPORARY TABLE empsalary ( depname varchar, empno bigint, salary int, enroll_date date ); INSERT INTO empsalary VALUES ('develop', 10, 5200, '2007-08-01'), ('sales', 1, 5000, '2006-10-01'), ('personnel', 5, 3500, '2007-12-10'),('sales', 4, 4800, '2007-08-08'),('personnel', 2, 3900, '2006-12-23'), ('develop', 7, 4200, '2008-01-01'),('develop', 9, 4500, '2008-01-01'),('sales', 3, 4800, '2007-08-01'), ('develop', 8, 6000, '2006-10-01'),('develop', 11, 5200, '2007-08-15'); explain verbose select rank() over (partition by depname order by salary), percent_rank() over(partition by depname order by salary asc) from empsalary;                  QUERY PLAN ------------------------------------------------------------------------------------------     WindowAgg (cost=10000000074.54..10000000114.66 rows=1070 width=52)     Output: (rank() OVER (?)), percent_rank() OVER (?), salary, depname      -> WindowAgg (cost=10000000074.54..10000000095.94 rows=1070 width=44)          Output: salary, depname, rank() OVER (?)            -> Sort (cost=10000000074.54..10000000077.21 rows=1070 width=36)                Output: salary, depname                Sort Key: empsalary.depname, empsalary.salary                   -> Seq Scan on pg_temp_7.empsalary (cost=0.00..20.70 rows=1070 width=36)                       Output: salary, depname \nOrdering option of Sort is represented by enum SortByDir (parsenodes.h). List of SortBy is present in WindowDef structure which has info of order by clause in Over clause\n /* Sort ordering options for ORDER BY and CREATE INDEX */ typedef enum SortByDir { SORTBY_DEFAULT, SORTBY_ASC, SORTBY_DESC, SORTBY_USING /* not allowed in CREATE INDEX ... */ } SortByDir; \n typedef struct SortBy { NodeTag type; Node *node; /* expression to sort on */ SortByDir sortby_dir; /* ASC/DESC/USING/default */ SortByNulls sortby_nulls; /* NULLS FIRST/LAST */ List *useOp; /* name of op to use, if SORTBY_USING */ int location; /* operator location, or -1 if none/unknown */ } SortBy; \nIn transformWindowFuncCall API, Equality check of order clause in window definition failed while comparing SortByDir enum of both over clause i.e SORT_DEFAULT  is not equal to SORT_ASC. Hence two window clause are created in parse tree resulting in the creation of two different window aggregate node.This check can be modified to form a single window aggregate node for the above results in faster query execution. Is there any reason for creating two different window aggregate node?Thanks Anitha S", "msg_date": "Thu, 05 Oct 2023 16:35:08 +0530", "msg_from": "\"\\\"Anitha S\\\"\" <[email protected]>", "msg_from_op": true, "msg_subject": "Two Window aggregate node for logically same over clause" }, { "msg_contents": "On Thu, Oct 5, 2023 at 8:53 PM \"Anitha S\" <[email protected]> wrote:\n>\n>\n>\n> Hi team,\n>\n> We have observed that for logically same over clause two different window aggregate nodes are created in plan.\n> The below query contains two window functions. Both Over clause contain the same partition & order clause in it. But in one over clause ordering option is mentioned as ascending but not in another over clause which represents the default option \"ascending\".\n>\n>\n> Sample Query:\n>\n> CREATE TEMPORARY TABLE empsalary (\n> depname varchar,\n> empno bigint,\n> salary int,\n> enroll_date date\n> );\n>\n> INSERT INTO empsalary VALUES ('develop', 10, 5200, '2007-08-01'), ('sales', 1, 5000, '2006-10-01'),\n> ('personnel', 5, 3500, '2007-12-10'),('sales', 4, 4800, '2007-08-08'),('personnel', 2, 3900, '2006-12-23'),\n> ('develop', 7, 4200, '2008-01-01'),('develop', 9, 4500, '2008-01-01'),('sales', 3, 4800, '2007-08-01'),\n> ('develop', 8, 6000, '2006-10-01'),('develop', 11, 5200, '2007-08-15');\n>\n> explain verbose select rank() over (partition by depname order by salary), percent_rank() over(partition by depname order by salary asc) from empsalary;\n> QUERY PLAN\n> ------------------------------------------------------------------------------------------\n> WindowAgg (cost=10000000074.54..10000000114.66 rows=1070 width=52)\n> Output: (rank() OVER (?)), percent_rank() OVER (?), salary, depname\n> -> WindowAgg (cost=10000000074.54..10000000095.94 rows=1070 width=44)\n> Output: salary, depname, rank() OVER (?)\n> -> Sort (cost=10000000074.54..10000000077.21 rows=1070 width=36)\n> Output: salary, depname\n> Sort Key: empsalary.depname, empsalary.salary\n> -> Seq Scan on pg_temp_7.empsalary (cost=0.00..20.70 rows=1070 width=36)\n> Output: salary, depname\n>\n>\n> Ordering option of Sort is represented by enum SortByDir (parsenodes.h).\n>\n> List of SortBy is present in WindowDef structure which has info of order by clause in Over clause\n>\n> /* Sort ordering options for ORDER BY and CREATE INDEX */\n> typedef enum SortByDir\n> {\n> SORTBY_DEFAULT,\n> SORTBY_ASC,\n> SORTBY_DESC,\n> SORTBY_USING /* not allowed in CREATE INDEX ... */\n> } SortByDir;\n> typedef struct SortBy\n> {\n> NodeTag type;\n> Node *node; /* expression to sort on */\n> SortByDir sortby_dir; /* ASC/DESC/USING/default */\n> SortByNulls sortby_nulls; /* NULLS FIRST/LAST */\n> List *useOp; /* name of op to use, if SORTBY_USING */\n> int location; /* operator location, or -1 if none/unknown */\n> } SortBy;\n>\n>\n> In transformWindowFuncCall API, Equality check of order clause in window definition failed while comparing SortByDir enum of both over clause i.e SORT_DEFAULT is not equal to SORT_ASC. Hence two window clause are created in parse tree resulting in the creation of two different window aggregate node.\n>\n> This check can be modified to form a single window aggregate node for the above results in faster query execution. Is there any reason for creating two different window aggregate node?\n\nI don't see any. https://www.postgresql.org/docs/16/sql-select.html\ndescription of ORDER BY clause clearly says that ASC is assumed when\nno direction is mentioned. The only place in code which is used to\ncreate the node treats DEFAULT and ASC as same. May be we want to\nallow default to be ASC or DESC based on some setting (read GUC) in\nsome future.\n\nAnother angle is to ask: Why would the query add ASC to one window\nspecification and not the other?\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Fri, 6 Oct 2023 18:06:10 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Two Window aggregate node for logically same over clause" }, { "msg_contents": "Ashutosh Bapat <[email protected]> writes:\n> On Thu, Oct 5, 2023 at 8:53 PM \"Anitha S\" <[email protected]> wrote:\n>> We have observed that for logically same over clause two different window aggregate nodes are created in plan.\n>> The below query contains two window functions. Both Over clause contain the same partition & order clause in it. But in one over clause ordering option is mentioned as ascending but not in another over clause which represents the default option \"ascending\".\n\n> Another angle is to ask: Why would the query add ASC to one window\n> specification and not the other?\n\nYeah. I can't get excited about doing anything about this. We\npromise to merge identical window clauses, but these aren't identical.\nIf you say \"let's merge semantically equivalent clauses\", that's\nopening a fairly large can of worms --- for example, ought we to\nrecognize that \"x + 1.0\" and \"x + 1.00\" are equivalent? Or even\n\"x\" and \"x + 0\"? (I'm pretty sure I've seen query hacks recommended\nthat depend on our *not* detecting that.)\n\nAlso, it would be an extremely bad idea IMO to change the way\nequal() deals with this, which means that transformWindowFuncCall\nwould have to use bespoke code not equal() to check for matches.\nThat'd be ugly and a permanent maintenance gotcha.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 06 Oct 2023 10:21:38 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Two Window aggregate node for logically same over clause" }, { "msg_contents": "https://www.postgresql.org/docs/16/sql-select.html#SQL-ORDERBY:~:text=Optionally%20one%20can%20add%20the%20key%20word%20ASC%20(ascending)%20or%20DESC%20(descending)%20after%20any%20expression%20in%20the%20ORDER%20BY%20clause.%20If%20not%20specified%2C%20ASC%20is%20assumed%20by%20default.%20Alternatively\n\nIf order by directions is not mentioned it is assumed as ASC. It is mention ORDER by section. Currently PG don't have any GUC to set default order direction. Is there any idea to set default ordering direction via GUC ?\n\nAnother angle is to ask: Why would the query add ASC to one window \n\nspecification and not the other?\n\nHad seen a query from user where one over clause contains order direction & others not.  The reason for this thread is to get a solution to handle such cases also.\n\n\n\n\n\n\n\n---- On Fri, 06 Oct 2023 18:06:10 +0530 Ashutosh Bapat <[email protected]> wrote ---\n\n\n\nOn Thu, Oct 5, 2023 at 8:53 PM \"Anitha S\" <mailto:[email protected]> wrote: \n> \n> \n> \n> Hi team, \n> \n> We have observed that for logically same over clause two different window aggregate nodes are created in plan. \n> The below query contains two window functions. Both Over clause contain the same partition & order clause in it. But in one over clause ordering option is mentioned as ascending but not in another over clause which represents the default option \"ascending\". \n> \n> \n> Sample Query: \n> \n> CREATE TEMPORARY TABLE empsalary ( \n> depname varchar, \n> empno bigint, \n> salary int, \n> enroll_date date \n> ); \n> \n> INSERT INTO empsalary VALUES ('develop', 10, 5200, '2007-08-01'), ('sales', 1, 5000, '2006-10-01'), \n> ('personnel', 5, 3500, '2007-12-10'),('sales', 4, 4800, '2007-08-08'),('personnel', 2, 3900, '2006-12-23'), \n> ('develop', 7, 4200, '2008-01-01'),('develop', 9, 4500, '2008-01-01'),('sales', 3, 4800, '2007-08-01'), \n> ('develop', 8, 6000, '2006-10-01'),('develop', 11, 5200, '2007-08-15'); \n> \n> explain verbose select rank() over (partition by depname order by salary), percent_rank() over(partition by depname order by salary asc) from empsalary; \n> QUERY PLAN \n> ------------------------------------------------------------------------------------------ \n> WindowAgg (cost=10000000074.54..10000000114.66 rows=1070 width=52) \n> Output: (rank() OVER (?)), percent_rank() OVER (?), salary, depname \n> -> WindowAgg (cost=10000000074.54..10000000095.94 rows=1070 width=44) \n> Output: salary, depname, rank() OVER (?) \n> -> Sort (cost=10000000074.54..10000000077.21 rows=1070 width=36) \n> Output: salary, depname \n> Sort Key: empsalary.depname, empsalary.salary \n> -> Seq Scan on pg_temp_7.empsalary (cost=0.00..20.70 rows=1070 width=36) \n> Output: salary, depname \n> \n> \n> Ordering option of Sort is represented by enum SortByDir (parsenodes.h). \n> \n> List of SortBy is present in WindowDef structure which has info of order by clause in Over clause \n> \n> /* Sort ordering options for ORDER BY and CREATE INDEX */ \n> typedef enum SortByDir \n> { \n> SORTBY_DEFAULT, \n> SORTBY_ASC, \n> SORTBY_DESC, \n> SORTBY_USING /* not allowed in CREATE INDEX ... */ \n> } SortByDir; \n> typedef struct SortBy \n> { \n> NodeTag type; \n> Node *node; /* expression to sort on */ \n> SortByDir sortby_dir; /* ASC/DESC/USING/default */ \n> SortByNulls sortby_nulls; /* NULLS FIRST/LAST */ \n> List *useOp; /* name of op to use, if SORTBY_USING */ \n> int location; /* operator location, or -1 if none/unknown */ \n> } SortBy; \n> \n> \n> In transformWindowFuncCall API, Equality check of order clause in window definition failed while comparing SortByDir enum of both over clause i.e SORT_DEFAULT is not equal to SORT_ASC. Hence two window clause are created in parse tree resulting in the creation of two different window aggregate node. \n> \n> This check can be modified to form a single window aggregate node for the above results in faster query execution. Is there any reason for creating two different window aggregate node? \n \nI don't see any. https://www.postgresql.org/docs/16/sql-select.html \ndescription of ORDER BY clause clearly says that ASC is assumed when \nno direction is mentioned. The only place in code which is used to \ncreate the node treats DEFAULT and ASC as same. May be we want to \nallow default to be ASC or DESC based on some setting (read GUC) in \nsome future. \n \nAnother angle is to ask: Why would the query add ASC to one window \nspecification and not the other? \n \n-- \nBest Wishes, \nAshutosh Bapat", "msg_date": "Mon, 09 Oct 2023 10:31:51 +0530", "msg_from": "\"\\\"Anitha S\\\"\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Two Window aggregate node for logically same over clause" } ]
[ { "msg_contents": "Greetings,\n\nBefore 16 if I created an array type in schema1 it would be named\nschema1._array_type\nif I created the same type in schema 2 it would have been named\n\nschema2.__array_type\n\nCan someone point me to where the code was changed ?\n\nThanks,\nDave Cramer\n\nGreetings,Before 16 if I created an array type in schema1 it would be named schema1._array_typeif I created the same type in schema 2 it would have been named schema2.__array_typeCan someone point me to where the code was changed ?Thanks,Dave Cramer", "msg_date": "Thu, 5 Oct 2023 08:08:36 -0400", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": true, "msg_subject": "Change of behaviour for creating same type name in multiple schemas" }, { "msg_contents": "On Thu, 5 Oct 2023 at 14:13, Dave Cramer <[email protected]> wrote:\n>\n> Greetings,\n>\n> Before 16 if I created an array type in schema1 it would be named schema1._array_type\n> if I created the same type in schema 2 it would have been named\n>\n> schema2.__array_type\n>\n> Can someone point me to where the code was changed ?\n\nThis was with commit 70988b7b [0] in July 2022, based on this thread\n[1] (moved from -bugs).\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n[0] https://github.com/postgres/postgres/commits/70988b7b0a0bd03c59a2314d0b5bcf2135692349\n[1] https://www.postgresql.org/message-id/flat/b84cd82c-cc67-198a-8b1c-60f44e1259ad%40postgrespro.ru\n\n\n", "msg_date": "Thu, 5 Oct 2023 14:31:00 +0200", "msg_from": "Matthias van de Meent <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Change of behaviour for creating same type name in multiple\n schemas" } ]
[ { "msg_contents": "Dear Concern,\n\nWe are trying to connect SQL Server from PostgreSQL through odbc_fdw extension.\nWe are using PostgreSQL version 15 and enclosed installed extension is attached. First, we create the foreign server.\nThen we map the postgres user and created foreign table as per the requirement by following code:\n\nCREATE EXTENSION odbc_fdw;\n\nCREATE SERVER sql_server_odbc\nFOREIGN DATA WRAPPER odbc_fdw\nOPTIONS (dsn 'SqlConnect');\n\n\nCREATE FOREIGN TABLE USER_LOCATION_PERMISSION_ODBC (\n id bigint NOT NULL,\n locationsfilter_id bigint NOT NULL,\n user_id bigint NOT NULL\n)\nSERVER sql_server_odbc\nOPTIONS (schema_name 'dbo', table_name 'USER_LOCATION_PERMISSION');\n\nCREATE USER MAPPING FOR postgres\nSERVER sql_server_odbc\n\nBut while executing below statement,\nSELECT * FROM USER_LOCATION_PERMISSION_ODBC\n\nWe will get the following error:\nERROR: Connecting to driver SQL state: 58000\n\nPlease suggest me why this error was occurred.\n\nRegards,\nSoumya Ghosh", "msg_date": "Thu, 5 Oct 2023 13:18:01 +0000", "msg_from": "Soumya Ghosh <[email protected]>", "msg_from_op": true, "msg_subject": "Error : SQL state: 58000" } ]
[ { "msg_contents": "On an instance running pg16.0:\n\nlog_time | 2023-10-05 10:03:00.014-05\nbackend_type | autovacuum worker\nleft | page verification failed, calculated checksum 5074 but expected 5050\ncontext | while scanning block 119 of relation \"public.postgres_log_2023_10_05_0900\"\n\nThis is the only error I've seen so far, and for all I know there's a\nissue on the storage behind the VM, or a cosmic ray hit. But I moved\nthe table out of the way and saved a copy of get_raw_page() in case\nsomeone wants to ask about it.\n\n public | BROKEN_postgres_log_2023_10_05_0900 | table | postgres | permanent | heap | 1664 kB\n\nThis table is what it sounds like: a partition into which CSV logs are\nCOPY'ed. It would've been created around 8am. There's no special\nparams set for the table nor for autovacuum.\n\nAlthough we have a ZFS tablespace, these tables aren't on it, and\nfull_page_writes=on. There's no crashes, and the instance has been up\nsince it was pg_upgraded from v15.4 on sep25.\n\npg_stat_all_tables indicates that the table was never (successfully)\nvacuumed.\n\nThis was compiled to RPM on centos7, and might include a few commits\nmade since v16.0.\n\npostgres=# SELECT * FROM heap_page_item_attrs(get_raw_page(801594131::regclass::text, 119), 801594131);\n lp | lp_off | lp_flags | lp_len | t_xmin | t_xmax | t_field3 | t_ctid | t_infomask2 | t_infomask | t_hoff | t_bits | t_oid | t_attrs \n 1 | 2304 | 1 | 16 | | | | | | | | | | \n 2 | 8160 | 1 | 16 | | | | | | | | | | \n 3 | 8144 | 1 | 16 | | | | | | | | | | \n...all the same except for lp_off...\n 365 | 2352 | 1 | 16 | | | | | | | | | | \n 366 | 2336 | 1 | 16 | | | | | | | | | | \n 367 | 2320 | 1 | 16 | | | | | | | | | | \n\npostgres=# SELECT FROM (SELECT tuple_data_split(801594131, t_data, t_infomask, t_infomask2, t_bits) a FROM heap_page_items(get_raw_page(801594131::regclass::text, 119))) WHERE a IS NOT NULL;\nWARNING: page verification failed, calculated checksum 5074 but expected 5050\n(0 rows)\n\n-- \nJustin\n\n\n", "msg_date": "Thu, 5 Oct 2023 11:45:18 -0500", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": true, "msg_subject": "pg16: invalid page/page verification failed" }, { "msg_contents": "On Thu, 5 Oct 2023 at 18:48, Justin Pryzby <[email protected]> wrote:\n>\n> On an instance running pg16.0:\n>\n> log_time | 2023-10-05 10:03:00.014-05\n> backend_type | autovacuum worker\n> left | page verification failed, calculated checksum 5074 but expected 5050\n> context | while scanning block 119 of relation \"public.postgres_log_2023_10_05_0900\"\n>\n> This is the only error I've seen so far, and for all I know there's a\n> issue on the storage behind the VM, or a cosmic ray hit. But I moved\n> the table out of the way and saved a copy of get_raw_page() in case\n> someone wants to ask about it.\n>\n> postgres=# SELECT * FROM heap_page_item_attrs(get_raw_page(801594131::regclass::text, 119), 801594131);\n> lp | lp_off | lp_flags | lp_len | t_xmin | t_xmax | t_field3 | t_ctid | t_infomask2 | t_infomask | t_hoff | t_bits | t_oid | t_attrs\n> 1 | 2304 | 1 | 16 | | | | | | | | | |\n> 2 | 8160 | 1 | 16 | | | | | | | | | |\n> 3 | 8144 | 1 | 16 | | | | | | | | | |\n> ...all the same except for lp_off...\n> 365 | 2352 | 1 | 16 | | | | | | | | | |\n> 366 | 2336 | 1 | 16 | | | | | | | | | |\n> 367 | 2320 | 1 | 16 | | | | | | | | | |\n\nThat's not a HEAP page; it looks more like a btree page: lp_len is too\nshort for heap (which starts at lp_len = 24), and there are too many\nline pointers for an 8KiB heap page. btree often has lp_len of 16: 8\nbytes indextuple header, one maxalign of data (e.g. int or bigint).\n\nSo, assuming it's a block of a different relation kind, then it's also\nlikely it was originally located elsewhere in that other relation,\nindeed causing the checksum failure. You can further validate this by\nlooking at the page header's pd_special value - if it is 8176, that'd\nbe another indicator for it being a btree.\n\nKind regards,\n\nMatthias van de Meent.\n\n\n", "msg_date": "Thu, 5 Oct 2023 19:16:31 +0200", "msg_from": "Matthias van de Meent <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg16: invalid page/page verification failed" }, { "msg_contents": "On Thu, Oct 05, 2023 at 07:16:31PM +0200, Matthias van de Meent wrote:\n> On Thu, 5 Oct 2023 at 18:48, Justin Pryzby <[email protected]> wrote:\n> >\n> > On an instance running pg16.0:\n> >\n> > log_time | 2023-10-05 10:03:00.014-05\n> > backend_type | autovacuum worker\n> > left | page verification failed, calculated checksum 5074 but expected 5050\n> > context | while scanning block 119 of relation \"public.postgres_log_2023_10_05_0900\"\n> >\n> > This is the only error I've seen so far, and for all I know there's a\n> > issue on the storage behind the VM, or a cosmic ray hit. But I moved\n> > the table out of the way and saved a copy of get_raw_page() in case\n> > someone wants to ask about it.\n> >\n> > postgres=# SELECT * FROM heap_page_item_attrs(get_raw_page(801594131::regclass::text, 119), 801594131);\n> > lp | lp_off | lp_flags | lp_len | t_xmin | t_xmax | t_field3 | t_ctid | t_infomask2 | t_infomask | t_hoff | t_bits | t_oid | t_attrs\n> > 1 | 2304 | 1 | 16 | | | | | | | | | |\n> > 2 | 8160 | 1 | 16 | | | | | | | | | |\n> > 3 | 8144 | 1 | 16 | | | | | | | | | |\n> > ...all the same except for lp_off...\n> > 365 | 2352 | 1 | 16 | | | | | | | | | |\n> > 366 | 2336 | 1 | 16 | | | | | | | | | |\n> > 367 | 2320 | 1 | 16 | | | | | | | | | |\n> \n> That's not a HEAP page; it looks more like a btree page: lp_len is too\n> short for heap (which starts at lp_len = 24), and there are too many\n> line pointers for an 8KiB heap page. btree often has lp_len of 16: 8\n> bytes indextuple header, one maxalign of data (e.g. int or bigint).\n> \n> So, assuming it's a block of a different relation kind, then it's also\n> likely it was originally located elsewhere in that other relation,\n> indeed causing the checksum failure. You can further validate this by\n> looking at the page header's pd_special value - if it is 8176, that'd\n> be another indicator for it being a btree.\n\nNice point.\n\npostgres=# SET ignore_checksum_failure=on; SELECT * FROM generate_series(115,119) AS a, page_header(get_raw_page(801594131::regclass::text, a)) AS b;\nWARNING: page verification failed, calculated checksum 5074 but expected 5050\n a | lsn | checksum | flags | lower | upper | special | pagesize | version | prune_xid \n-----+--------------+----------+-------+-------+-------+---------+----------+---------+-----------\n 115 | B61/A9436C8 | -23759 | 4 | 92 | 336 | 8192 | 8192 | 4 | 0\n 116 | B61/A944FA0 | 3907 | 4 | 104 | 224 | 8192 | 8192 | 4 | 0\n 117 | B61/A946828 | -24448 | 4 | 76 | 264 | 8192 | 8192 | 4 | 0\n 118 | B61/A94CCE0 | 26915 | 4 | 28 | 6256 | 8192 | 8192 | 4 | 0\n 119 | B5C/9F30D1C8 | 5050 | 0 | 1492 | 2304 | 8176 | 8192 | 4 | 0\n\nThe table itself has a few btree indexes on text columns and a brin\nindex on log_timestamp, but not on the integers.\n\nIt sounds like it's what's expected at this point, but after I\n\"SET ignore_checksum_failure=on\", and read the page in, vacuum kicked\noff and then crashed (in heap_page_prune() if that half of the stack\ntrace can be trusted).\n\n*** stack smashing detected ***: postgres: autovacuum worker postgres terminated\n\n< 2023-10-05 12:35:30.764 CDT >LOG: server process (PID 30692) was terminated by signal 11: Segmentation fault\n< 2023-10-05 12:35:30.764 CDT >DETAIL: Failed process was running: autovacuum: VACUUM ANALYZE public.BROKEN_postgres_log_2023_10_05_0900\n\nI took the opportunity to fsck the FS, which showed no errors.\n\nI was curious if the relfilenodes had gotten confused/corrupted/??\nBut this seems to indicate not; the problem is only one block.\n\npostgres=# SELECT oid, relfilenode, oid=relfilenode, relname FROM pg_class WHERE oid BETWEEN 801550000 AND 801594199 ORDER BY 1;\n oid | relfilenode | ?column? | relname \n-----------+-------------+----------+-------------------------------------------------\n 801564542 | 801564542 | t | postgres_log_2023_10_05_0800\n 801564545 | 801564545 | t | pg_toast_801564542\n 801564546 | 801564546 | t | pg_toast_801564542_index\n 801564547 | 801564547 | t | postgres_log_2023_10_05_0800_log_time_idx\n 801564548 | 801564548 | t | postgres_log_2023_10_05_0800_error_severity_idx\n 801564549 | 801564549 | t | postgres_log_2023_10_05_0800_error_message_idx\n 801564550 | 801564550 | t | postgres_log_2023_10_05_0800_duration_idx\n 801564551 | 801564551 | t | postgres_log_2023_10_05_0800_tempfile_idx\n 801594131 | 801594131 | t | BROKEN_postgres_log_2023_10_05_0900\n 801594134 | 801594134 | t | pg_toast_801594131\n 801594135 | 801594135 | t | pg_toast_801594131_index\n 801594136 | 801594136 | t | postgres_log_2023_10_05_0900_log_time_idx\n 801594137 | 801594137 | t | postgres_log_2023_10_05_0900_error_severity_idx\n 801594138 | 801594138 | t | postgres_log_2023_10_05_0900_error_message_idx\n 801594139 | 801594139 | t | postgres_log_2023_10_05_0900_duration_idx\n 801594140 | 801594140 | t | postgres_log_2023_10_05_0900_tempfile_idx\n\nBefore anybody asks, we didn't retain WAL from this morning.\n\nFYI, the storage is ext4/LVM/scsi (it looks like this didn't use\nvmw_pvscsi but an emulated hardware driver).\n\n/dev/mapper/data-postgres on /var/lib/pgsql type ext4 (rw,relatime,seclabel,data=ordered)\n[ 0.000000] Linux version 3.10.0-1160.el7.x86_64 ([email protected]) (gcc version 4.8.5 20150623 (Red Hat 4.8.5-44) (GCC) ) #1 SMP Mon Oct 19 16:18:59 UTC 2020\n[ 1.446380] scsi 2:0:1:0: Direct-Access VMware Virtual disk 1.0 PQ: 0 ANSI: 2\n[ 1.470764] scsi target2:0:1: Beginning Domain Validation\n[ 1.471077] scsi target2:0:1: Domain Validation skipping write tests\n[ 1.471079] scsi target2:0:1: Ending Domain Validation\n[ 1.471099] scsi target2:0:1: FAST-40 WIDE SCSI 80.0 MB/s ST (25 ns, offset 127)\n[ 1.484109] sd 2:0:1:0: [sdb] 1048576000 512-byte logical blocks: (536 GB/500 GiB)\n[ 1.484136] sd 2:0:1:0: [sdb] Write Protect is off\n[ 1.484139] sd 2:0:1:0: [sdb] Mode Sense: 45 00 00 00\n[ 1.484163] sd 2:0:1:0: [sdb] Write cache: disabled, read cache: disabled, doesn't support DPO or FUA\n[ 1.485808] sd 2:0:1:0: [sdb] Attached SCSI disk\n[ 4.271339] sd 2:0:1:0: Attached scsi generic sg1 type 0\n\n-- \nJustin\n\n\n", "msg_date": "Thu, 5 Oct 2023 13:48:00 -0500", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pg16: invalid page/page verification failed" }, { "msg_contents": "On Thu, Oct 05, 2023 at 11:45:18AM -0500, Justin Pryzby wrote:\n> This table is what it sounds like: a partition into which CSV logs are\n> COPY'ed. It would've been created around 8am. There's no special\n> params set for the table nor for autovacuum.\n\nThis may be an important bit of information. 31966b151e6a is new as\nof Postgres 16, has changed the way relations are extended and COPY\nwas one area touched. I am adding Andres in CC.\n--\nMichael", "msg_date": "Fri, 6 Oct 2023 09:20:05 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg16: invalid page/page verification failed" }, { "msg_contents": "On Fri, Oct 06, 2023 at 09:20:05AM +0900, Michael Paquier wrote:\n> On Thu, Oct 05, 2023 at 11:45:18AM -0500, Justin Pryzby wrote:\n> > This table is what it sounds like: a partition into which CSV logs are\n> > COPY'ed. It would've been created around 8am. There's no special\n> > params set for the table nor for autovacuum.\n> \n> This may be an important bit of information. 31966b151e6a is new as\n> of Postgres 16, has changed the way relations are extended and COPY\n> was one area touched. I am adding Andres in CC.\n\nAlso, I realized that someone kicked off a process just after 9am which\nwould've done a lot of INSERT ON CONFLICT DO UPDATE, VACUUM FULL, and\nVACUUM. Which consumed and dirtied buffers about 100x faster than normal.\n\nlog_time | 2023-10-05 10:00:55.794-05\npid | 31754\nleft | duration: 51281.001 ms statement: VACUUM (FULL,FREEZE) othertable...\n\nlog_time | 2023-10-05 10:01:01.784-05\nbackend_type | checkpointer\nleft | checkpoint starting: time\n\nlog_time | 2023-10-05 10:01:02.935-05\npid | 10023\nleft | page verification failed, calculated checksum 5074 but expected 5050\ncontext | COPY postgres_log, line 947\nleft | COPY postgres_log FROM '/var/log/postgresql/postgresql-2023-10-05_095600.csv' WITH csv\n\nlog_time | 2023-10-05 10:01:02.935-05\npid | 10023\nleft | invalid page in block 119 of relation base/16409/801594131\ncontext | COPY postgres_log, line 947\nleft | COPY postgres_log FROM '/var/log/postgresql/postgresql-2023-10-05_095600.csv' WITH csv\n\nlog_time | 2023-10-05 10:01:11.636-05\npid | 31754\nleft | duration: 15838.374 ms statement: VACUUM (FREEZE) othertable...\n\nI meant to point out that the issue is on the last block.\n\npostgres=# SELECT pg_relation_size('\"BROKEN_postgres_log_2023_10_05_0900\"')/8192;\n?column? | 120\n\nIt sounds like there may be an issue locking (pinning?) a page, or\nrather not locking it, or releasing the lock too early.\n\n-- \nJustin\n\n\n", "msg_date": "Fri, 6 Oct 2023 10:00:57 -0500", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pg16: invalid page/page verification failed" }, { "msg_contents": "Hi,\n\nOn 2023-10-06 09:20:05 +0900, Michael Paquier wrote:\n> On Thu, Oct 05, 2023 at 11:45:18AM -0500, Justin Pryzby wrote:\n> > This table is what it sounds like: a partition into which CSV logs are\n> > COPY'ed. It would've been created around 8am. There's no special\n> > params set for the table nor for autovacuum.\n> \n> This may be an important bit of information. 31966b151e6a is new as\n> of Postgres 16, has changed the way relations are extended and COPY\n> was one area touched. I am adding Andres in CC.\n\nHm, is there any chance the COPY targets more than one partition? If so, this\nsounds like it might be the issue described here\nhttps://postgr.es/m/20230925213746.fwqauhhifjgefyzk%40alap3.anarazel.de\n\nI think at this stage the easiest fix might be just to copy the approach of\ncalling ReleaseBulkInsertStatePin(), even though I think that's\narchitecturally wrong.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 6 Oct 2023 08:47:39 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg16: invalid page/page verification failed" }, { "msg_contents": "On Fri, Oct 06, 2023 at 08:47:39AM -0700, Andres Freund wrote:\n> Hi,\n> \n> On 2023-10-06 09:20:05 +0900, Michael Paquier wrote:\n> > On Thu, Oct 05, 2023 at 11:45:18AM -0500, Justin Pryzby wrote:\n> > > This table is what it sounds like: a partition into which CSV logs are\n> > > COPY'ed. It would've been created around 8am. There's no special\n> > > params set for the table nor for autovacuum.\n> > \n> > This may be an important bit of information. 31966b151e6a is new as\n> > of Postgres 16, has changed the way relations are extended and COPY\n> > was one area touched. I am adding Andres in CC.\n> \n> Hm, is there any chance the COPY targets more than one partition? If so, this\n> sounds like it might be the issue described here\n> https://postgr.es/m/20230925213746.fwqauhhifjgefyzk%40alap3.anarazel.de\n\nThe first error was from:\nlog_time | 2023-10-05 09:57:01.939-05\nleft | COPY postgres_log FROM '/var/log/postgresql/postgresql-2023-10-05_095200.csv' WITH csv\n\nUnfortunately, I no longer have the CSV files which caused errors.\nAfter I moved the broken table out of the way and created a new\npartition, they would've been imported successfully, and then removed.\n\nAlso, it's sad, but the original 2023_10_05_0900 partition I created was\nitself rotated out of existence a few hours ago (I still have the most\ninteresting lines, though).\n\nI've seen that it's possible for a CSV to include some data that ideally\nwould've gone into the \"next\" CSV: 2023-01-01_180000.csv might include a line\nof data after 6pm. For example, with log_rotation_age=2min,\npostgresql-2023-10-06_120800.csv had a row after 12:10:\n2023-10-06 12:10:00.101 CDT,\"pryzbyj\",\"pryzbyj\",5581,\"[local]\",65203f66.15cd,2,...\n\nBut I'm not sure how that can explain this issue, because this was\n095600.csv, and not 095800.csv. My script knows to create the \"next\"\npartition, to handle the case that the file includes some data that\nshould've gone to the next logfile. I'm handling that case with the\nanticipation that there might be a few tenths of a second or even a few\nseconds of logs in the wrong file - typically 0 lines and sometimes 1\nline. I don't know if it's even possible to have multiple lines in the\n\"wrong\" file. In any case, I'm not not expecting log rotation to be 2\nminutes behind.\n\nAlso, not only was the data in the CSV earlier than 10am, but the error\n*itself* was also earlier. The error importing the CSV was at 9:57, so\nthe CSV couldn't have had data after 10:00. Not that it matters, but my\nscript doesn't import the most recent logfile, and also avoids importing\nfiles written within the last minute.\n\nI don't see how a CSV with a 2 minute interval of data beginning at 9:56\ncould straddle hourly partitions.\n\nlog_time | 2023-10-05 09:57:01.939-05\nleft | invalid page in block 119 of relation base/16409/801594131\nleft | COPY postgres_log FROM '/var/log/postgresql/postgresql-2023-10-05_095200.csv' WITH csv\n\nlog_time | 2023-10-05 09:57:01.939-05\nleft | page verification failed, calculated checksum 5074 but expected 50\nleft | COPY postgres_log FROM '/var/log/postgresql/postgresql-2023-10-05_095200.csv' WITH csv\n\n-- \nJustin\n\n\n", "msg_date": "Fri, 6 Oct 2023 16:36:20 -0500", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pg16: invalid page/page verification failed" } ]
[ { "msg_contents": "Hello,\n\nIs there a way, where an authorized user (Creates Table / Inserts Data) in\na DB, which the SuperUser cannot access the same.\n\nI understand SuperUser can revoke the access of the user, but he should not\nbe able to see the table structure and data inserted in those tables.\n\nBest,\n\nRajesh\n\nHello, Is there a way, where an authorized user (Creates Table / Inserts Data) in a DB, which the SuperUser cannot access the same.I understand SuperUser can revoke the access of the user, but he should not be able to see the table structure and data inserted in those tables.Best, Rajesh", "msg_date": "Fri, 6 Oct 2023 01:08:59 +0530", "msg_from": "Rajesh Mittal <[email protected]>", "msg_from_op": true, "msg_subject": "Rights Control within DB (which SuperUser cannot access,\n but user can)" }, { "msg_contents": "Rajesh Mittal <[email protected]> writes:\n> Is there a way, where an authorized user (Creates Table / Inserts Data) in\n> a DB, which the SuperUser cannot access the same.\n> I understand SuperUser can revoke the access of the user, but he should not\n> be able to see the table structure and data inserted in those tables.\n\nYou might be able to do something with contrib/sepgsql, if you're\non a selinux-enabled platform. But TBH the correct solution here\nis to not give out superuser to people you don't trust. There is\nno way that you're likely to make an entirely bulletproof setup.\n(Consider, just to begin with, how you're going to prevent a rogue\nsuperuser from de-installing sepgsql, or even simply doing\n\"set role other_user\".)\n\nAlso keep in mind that \"prevent user A from seeing the structure\nof user B's tables\" is not part of Postgres' threat models at all.\nMost system catalogs are world-readable, and you can't change that\nwithout breaking an awful lot of tools. If you don't like this,\na plausible answer is to give each user their own database.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 05 Oct 2023 15:47:27 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Rights Control within DB (which SuperUser cannot access,\n but user can)" } ]
[ { "msg_contents": "The \"Description\" and \"Notes\" parts of the following logical\nreplication PUB/SUB reference pages (almost always) link to each other\nwhenever a sibling command gets mentioned.\n\nCREATE PUBLICATION\nALTER PUBLICATION\nDROP PUBLICATION\n\nCREATE SUBSCRIPTION\nALTER SUBSCRIPTION\nDROP SUBSCRIPTION\n\n~\n\nAFAICT the only omissions are:\n\nALTER PUBLICATION page -- mentions ALTER SUBSCRIPTION but there is no link\nDROP SUBSCRIPTION page -- mentions ALTER SUBSCRIPTION but there is no link\n\n~\n\nHere is a patch to add the 2 missing references:\n\nref/alter_subscription.sgml ==> added more ids\nref/alter_publication.sgml ==> added link to\n\"sql-altersubscription-refresh-publication\"\nref/drop_subscription.sgml ==> added link to \"sql-altersubscription\"\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Fri, 6 Oct 2023 17:44:32 +1100", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": true, "msg_subject": "PGDOCS - add more links in the pub/sub reference pages" }, { "msg_contents": "On Fri, Oct 6, 2023 at 12:15 PM Peter Smith <[email protected]> wrote:\n>\n> Here is a patch to add the 2 missing references:\n>\n> ref/alter_subscription.sgml ==> added more ids\n> ref/alter_publication.sgml ==> added link to\n> \"sql-altersubscription-refresh-publication\"\n> ref/drop_subscription.sgml ==> added link to \"sql-altersubscription\"\n>\n\n- <varlistentry>\n+ <varlistentry id=\"sql-altersubscription-new-owner\">\n <term><replaceable class=\"parameter\">new_owner</replaceable></term>\n <listitem>\n <para>\n@@ -281,7 +281,7 @@ ALTER SUBSCRIPTION <replaceable\nclass=\"parameter\">name</replaceable> RENAME TO <\n </listitem>\n </varlistentry>\n\n- <varlistentry>\n+ <varlistentry id=\"sql-altersubscription-new-name\">\n <term><replaceable class=\"parameter\">new_name</replaceable></term>\n <listitem>\n\nShall we append 'params' in the above and other id's in the patch. For\nexample, sql-altersubscription-params-new-name. The other places like\nalter_role.sgml and alter_table.sgml uses similar id's. Is there a\nreason to be different here?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 9 Oct 2023 10:02:11 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PGDOCS - add more links in the pub/sub reference pages" }, { "msg_contents": "On Mon, Oct 9, 2023 at 3:32 PM Amit Kapila <[email protected]> wrote:\n>\n> On Fri, Oct 6, 2023 at 12:15 PM Peter Smith <[email protected]> wrote:\n> >\n> > Here is a patch to add the 2 missing references:\n> >\n> > ref/alter_subscription.sgml ==> added more ids\n> > ref/alter_publication.sgml ==> added link to\n> > \"sql-altersubscription-refresh-publication\"\n> > ref/drop_subscription.sgml ==> added link to \"sql-altersubscription\"\n> >\n>\n> - <varlistentry>\n> + <varlistentry id=\"sql-altersubscription-new-owner\">\n> <term><replaceable class=\"parameter\">new_owner</replaceable></term>\n> <listitem>\n> <para>\n> @@ -281,7 +281,7 @@ ALTER SUBSCRIPTION <replaceable\n> class=\"parameter\">name</replaceable> RENAME TO <\n> </listitem>\n> </varlistentry>\n>\n> - <varlistentry>\n> + <varlistentry id=\"sql-altersubscription-new-name\">\n> <term><replaceable class=\"parameter\">new_name</replaceable></term>\n> <listitem>\n>\n\nThanks for looking at this patch!\n\n> Shall we append 'params' in the above and other id's in the patch. For\n> example, sql-altersubscription-params-new-name. The other places like\n> alter_role.sgml and alter_table.sgml uses similar id's. Is there a\n> reason to be different here?\n\nIn v1, I used the same pattern as on the CREATE SUBSCRIPTION page,\nwhich doesn't look like those...\n\n~~~\n\nThe \"Parameters\" section describes some things that really are parameters:\n\ne.g.\n\"sql-altersubscription-name\"\n\"sql-altersubscription-new-owner\"\n\"sql-altersubscription-new-name\">\n\nI agree, emphasising that those ones are parameters is better. Changed\nlike this in v2.\n\n\"sql-altersubscription-params-name\"\n\"sql-altersubscription-params-new-owner\"\n\"sql-altersubscription-params-new-name\">\n\n~\n\nBut, the \"Parameters\" section also describes other SQL syntax clauses\nwhich are not really parameters at all.\n\ne.g.\n\"sql-altersubscription-refresh-publication\"\n\"sql-altersubscription-enable\"\n\"sql-altersubscription-disable\"\n\nSo I felt those ones are more intuitive left as they are -- e.g.,\ninstead of having ids/linkends like:\n\n\"sql-altersubscription-params-refresh-publication\"\n\"sql-altersubscription-params-enable\"\n\"sql-altersubscription-params-disable\"\n\n~~\n\nPSA v2.\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Mon, 9 Oct 2023 17:45:31 +1100", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PGDOCS - add more links in the pub/sub reference pages" }, { "msg_contents": "On Mon, 9 Oct 2023 at 12:18, Peter Smith <[email protected]> wrote:\n>\n> On Mon, Oct 9, 2023 at 3:32 PM Amit Kapila <[email protected]> wrote:\n> >\n> > On Fri, Oct 6, 2023 at 12:15 PM Peter Smith <[email protected]> wrote:\n> > >\n> > > Here is a patch to add the 2 missing references:\n> > >\n> > > ref/alter_subscription.sgml ==> added more ids\n> > > ref/alter_publication.sgml ==> added link to\n> > > \"sql-altersubscription-refresh-publication\"\n> > > ref/drop_subscription.sgml ==> added link to \"sql-altersubscription\"\n> > >\n> >\n> > - <varlistentry>\n> > + <varlistentry id=\"sql-altersubscription-new-owner\">\n> > <term><replaceable class=\"parameter\">new_owner</replaceable></term>\n> > <listitem>\n> > <para>\n> > @@ -281,7 +281,7 @@ ALTER SUBSCRIPTION <replaceable\n> > class=\"parameter\">name</replaceable> RENAME TO <\n> > </listitem>\n> > </varlistentry>\n> >\n> > - <varlistentry>\n> > + <varlistentry id=\"sql-altersubscription-new-name\">\n> > <term><replaceable class=\"parameter\">new_name</replaceable></term>\n> > <listitem>\n> >\n>\n> Thanks for looking at this patch!\n>\n> > Shall we append 'params' in the above and other id's in the patch. For\n> > example, sql-altersubscription-params-new-name. The other places like\n> > alter_role.sgml and alter_table.sgml uses similar id's. Is there a\n> > reason to be different here?\n>\n> In v1, I used the same pattern as on the CREATE SUBSCRIPTION page,\n> which doesn't look like those...\n>\n> ~~~\n>\n> The \"Parameters\" section describes some things that really are parameters:\n>\n> e.g.\n> \"sql-altersubscription-name\"\n> \"sql-altersubscription-new-owner\"\n> \"sql-altersubscription-new-name\">\n>\n> I agree, emphasising that those ones are parameters is better. Changed\n> like this in v2.\n>\n> \"sql-altersubscription-params-name\"\n> \"sql-altersubscription-params-new-owner\"\n> \"sql-altersubscription-params-new-name\">\n>\n> ~\n>\n> But, the \"Parameters\" section also describes other SQL syntax clauses\n> which are not really parameters at all.\n>\n> e.g.\n> \"sql-altersubscription-refresh-publication\"\n> \"sql-altersubscription-enable\"\n> \"sql-altersubscription-disable\"\n>\n> So I felt those ones are more intuitive left as they are -- e.g.,\n> instead of having ids/linkends like:\n>\n> \"sql-altersubscription-params-refresh-publication\"\n> \"sql-altersubscription-params-enable\"\n> \"sql-altersubscription-params-disable\"\n>\n> ~~\n>\n> PSA v2.\n\nI noticed a couple of other places where a link to \"ALTER SUBSCRIPTION\n... DISABLE\" and \"ALTER SUBSCRIPTION ... SET\" can be specified in\ndrop_subscription:\nTo proceed in this situation, first disable the subscription by\nexecuting <literal>ALTER SUBSCRIPTION ... DISABLE</literal>, and then\ndisassociate it from the replication slot by executing <literal>ALTER\nSUBSCRIPTION ... SET (slot_name = NONE)</literal>.\n\nRegards,\nVignesh\n\n\n", "msg_date": "Tue, 10 Oct 2023 06:16:15 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PGDOCS - add more links in the pub/sub reference pages" }, { "msg_contents": "On Tue, Oct 10, 2023 at 11:46 AM vignesh C <[email protected]> wrote:\n>\n> I noticed a couple of other places where a link to \"ALTER SUBSCRIPTION\n> ... DISABLE\" and \"ALTER SUBSCRIPTION ... SET\" can be specified in\n> drop_subscription:\n> To proceed in this situation, first disable the subscription by\n> executing <literal>ALTER SUBSCRIPTION ... DISABLE</literal>, and then\n> disassociate it from the replication slot by executing <literal>ALTER\n> SUBSCRIPTION ... SET (slot_name = NONE)</literal>.\n>\n\nHi Vignesh. Thanks for the review comments.\n\nModified as suggested.\n\nPSA v3.\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Tue, 10 Oct 2023 14:17:25 +1100", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PGDOCS - add more links in the pub/sub reference pages" }, { "msg_contents": "On Tue, 10 Oct 2023 at 08:47, Peter Smith <[email protected]> wrote:\n>\n> On Tue, Oct 10, 2023 at 11:46 AM vignesh C <[email protected]> wrote:\n> >\n> > I noticed a couple of other places where a link to \"ALTER SUBSCRIPTION\n> > ... DISABLE\" and \"ALTER SUBSCRIPTION ... SET\" can be specified in\n> > drop_subscription:\n> > To proceed in this situation, first disable the subscription by\n> > executing <literal>ALTER SUBSCRIPTION ... DISABLE</literal>, and then\n> > disassociate it from the replication slot by executing <literal>ALTER\n> > SUBSCRIPTION ... SET (slot_name = NONE)</literal>.\n> >\n>\n> Hi Vignesh. Thanks for the review comments.\n>\n> Modified as suggested.\n>\n> PSA v3.\n\nFew more instances in other logical replication related pages:\n1) Another instance was in alter_subscription.sgml:\n Fetch missing table information from publisher. This will start\n replication of tables that were added to the subscribed-to publications\n since <command>CREATE SUBSCRIPTION</command> or\n the last invocation of <command>REFRESH PUBLICATION</command>.\n2) Few more instances were in logical-replication-subscription.html\n2.a) Normally, the remote replication slot is created automatically when the\n subscription is created using <command>CREATE SUBSCRIPTION</command> and it\n is dropped automatically when the subscription is dropped using\n <command>DROP SUBSCRIPTION</command>. In some situations, however, it can\n be useful or necessary to manipulate the subscription and the underlying\n replication slot separately.\n2.b) When dropping a subscription, the remote host is not reachable. In\n that case, disassociate the slot from the subscription\n using <command>ALTER SUBSCRIPTION</command> before attempting to drop\n the subscription.\n2.c) If a subscription is affected by this problem, the only way to resume\n replication is to adjust one of the column lists on the publication\n side so that they all match; and then either recreate the subscription,\n or use <literal>ALTER SUBSCRIPTION ... DROP PUBLICATION</literal> to\n remove one of the offending publications and add it again.\n2.d) The\n transaction that produced the conflict can be skipped by using\n <command>ALTER SUBSCRIPTION ... SKIP</command> with the finish LSN\n (i.e., LSN 0/14C0378).\n2.e) Before using this function, the subscription needs to be\ndisabled temporarily\n either by <command>ALTER SUBSCRIPTION ... DISABLE</command> or,\n\nRegards,\nVignesh\n\n\n", "msg_date": "Tue, 10 Oct 2023 11:40:46 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PGDOCS - add more links in the pub/sub reference pages" }, { "msg_contents": "On Tue, Oct 10, 2023 at 11:40 AM vignesh C <[email protected]> wrote:\n>\n> On Tue, 10 Oct 2023 at 08:47, Peter Smith <[email protected]> wrote:\n> > PSA v3.\n>\n> Few more instances in other logical replication related pages:\n> 1) Another instance was in alter_subscription.sgml:\n> Fetch missing table information from publisher. This will start\n> replication of tables that were added to the subscribed-to publications\n> since <command>CREATE SUBSCRIPTION</command> or\n> the last invocation of <command>REFRESH PUBLICATION</command>.\n>\n\nDo we want each and every occurrence of the commands to have\ncorresponding links? I am not against it if we think that is useful\nfor users but asking as I am not aware of the general practice we\nfollow in this regard. Does anyone else have any opinion on this\nmatter?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 10 Oct 2023 18:03:05 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PGDOCS - add more links in the pub/sub reference pages" }, { "msg_contents": "On Tue, Oct 10, 2023 at 11:33 PM Amit Kapila <[email protected]> wrote:\n>\n> On Tue, Oct 10, 2023 at 11:40 AM vignesh C <[email protected]> wrote:\n> >\n> > On Tue, 10 Oct 2023 at 08:47, Peter Smith <[email protected]> wrote:\n> > > PSA v3.\n> >\n> > Few more instances in other logical replication related pages:\n> > 1) Another instance was in alter_subscription.sgml:\n> > Fetch missing table information from publisher. This will start\n> > replication of tables that were added to the subscribed-to publications\n> > since <command>CREATE SUBSCRIPTION</command> or\n> > the last invocation of <command>REFRESH PUBLICATION</command>.\n> >\n>\n> Do we want each and every occurrence of the commands to have\n> corresponding links? I am not against it if we think that is useful\n> for users but asking as I am not aware of the general practice we\n> follow in this regard. Does anyone else have any opinion on this\n> matter?\n>\n\nThe goal of the patch was to use a consistent approach for all the\npub/sub pages. Otherwise, there was a mixture and no apparent reason\nwhy some commands had links while some did not.\n\nThe rules this patch is using are:\n- only including inter-page links to other pub/sub commands\n- if the same pub/sub linkend occurs multiple times in the same block\nof text, then only give a link for the first one\n\n~~\n\nWhat links are \"useful to users\" is subjective, and the convenience\nprobably also varies depending on how much scrolling is needed to get\nto the \"See Also\" part at the bottom. I felt a consistent linking\napproach is better than having differences based on some arbitrary\njudgement of usefulness.\n\nAFAICT some other PG DOCS pages strive to do the same. For example,\nthe ALTER TABLE page [1] mentions the \"CREATE TABLE\" command 10 times\nand 8 of those have links. (the missing ones don't look any different\nto me so seem like accidental omissions).\n\n======\n[1] https://www.postgresql.org/docs/devel/sql-altertable.html\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Wed, 11 Oct 2023 11:28:16 +1100", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PGDOCS - add more links in the pub/sub reference pages" }, { "msg_contents": "On Tue, Oct 10, 2023 at 5:10 PM vignesh C <[email protected]> wrote:\n>\n> Few more instances in other logical replication related pages:\n> 1) Another instance was in alter_subscription.sgml:\n> Fetch missing table information from publisher. This will start\n> replication of tables that were added to the subscribed-to publications\n> since <command>CREATE SUBSCRIPTION</command> or\n> the last invocation of <command>REFRESH PUBLICATION</command>.\n\nOK, modified in v4.\n\n> 2) Few more instances were in logical-replication-subscription.html\n> 2.a) Normally, the remote replication slot is created automatically when the\n> subscription is created using <command>CREATE SUBSCRIPTION</command> and it\n> is dropped automatically when the subscription is dropped using\n> <command>DROP SUBSCRIPTION</command>. In some situations, however, it can\n> be useful or necessary to manipulate the subscription and the underlying\n> replication slot separately.\n> 2.b) When dropping a subscription, the remote host is not reachable. In\n> that case, disassociate the slot from the subscription\n> using <command>ALTER SUBSCRIPTION</command> before attempting to drop\n> the subscription.\n\nThe above were in section \"31.2.1 Replication Slot Management\". OK,\nmodified in v4.\n\n> 2.c) If a subscription is affected by this problem, the only way to resume\n> replication is to adjust one of the column lists on the publication\n> side so that they all match; and then either recreate the subscription,\n> or use <literal>ALTER SUBSCRIPTION ... DROP PUBLICATION</literal> to\n> remove one of the offending publications and add it again.\n\nThe above was in section \"31.2.1 Replication Slot Management\". OK,\nmodified in v4.\n\n> 2.d) The\n> transaction that produced the conflict can be skipped by using\n> <command>ALTER SUBSCRIPTION ... SKIP</command> with the finish LSN\n> (i.e., LSN 0/14C0378).\n> 2.e) Before using this function, the subscription needs to be\n> disabled temporarily\n> either by <command>ALTER SUBSCRIPTION ... DISABLE</command> or,\n>\n\nThe above was in the section \"31.5 Conflicts\". OK, modified in v4.\n\n~~\n\nThanks for reporting those. PSA v4.\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Wed, 11 Oct 2023 11:31:02 +1100", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PGDOCS - add more links in the pub/sub reference pages" }, { "msg_contents": "On Mon, Oct 9, 2023 at 12:15 PM Peter Smith <[email protected]> wrote:\n>\n> On Mon, Oct 9, 2023 at 3:32 PM Amit Kapila <[email protected]> wrote:\n> >\n>\n> In v1, I used the same pattern as on the CREATE SUBSCRIPTION page,\n> which doesn't look like those...\n>\n\nYeah, I think it would have been better if we used params in the\nCREATE SUBSCRIPTION page as well. I don't know if it is a good idea to\ndo now with this patch but it makes sense to be consistent. What do\nyou think?\n\n> ~~~\n>\n> The \"Parameters\" section describes some things that really are parameters:\n>\n> e.g.\n> \"sql-altersubscription-name\"\n> \"sql-altersubscription-new-owner\"\n> \"sql-altersubscription-new-name\">\n>\n> I agree, emphasising that those ones are parameters is better. Changed\n> like this in v2.\n>\n> \"sql-altersubscription-params-name\"\n> \"sql-altersubscription-params-new-owner\"\n> \"sql-altersubscription-params-new-name\">\n>\n> ~\n>\n> But, the \"Parameters\" section also describes other SQL syntax clauses\n> which are not really parameters at all.\n>\n> e.g.\n> \"sql-altersubscription-refresh-publication\"\n> \"sql-altersubscription-enable\"\n> \"sql-altersubscription-disable\"\n>\n> So I felt those ones are more intuitive left as they are -- e.g.,\n> instead of having ids/linkends like:\n>\n> \"sql-altersubscription-params-refresh-publication\"\n> \"sql-altersubscription-params-enable\"\n> \"sql-altersubscription-params-disable\"\n>\n\nI checked alter_role.sgml which has similar mixed usage and it is\nusing 'params' consistently in all cases. So, I would suggest\nfollowing a similar style here.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 12 Oct 2023 10:14:35 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PGDOCS - add more links in the pub/sub reference pages" }, { "msg_contents": "On Thu, Oct 12, 2023 at 3:44 PM Amit Kapila <[email protected]> wrote:\n>\n> On Mon, Oct 9, 2023 at 12:15 PM Peter Smith <[email protected]> wrote:\n> >\n> > On Mon, Oct 9, 2023 at 3:32 PM Amit Kapila <[email protected]> wrote:\n> > >\n> >\n> > In v1, I used the same pattern as on the CREATE SUBSCRIPTION page,\n> > which doesn't look like those...\n> >\n>\n> Yeah, I think it would have been better if we used params in the\n> CREATE SUBSCRIPTION page as well. I don't know if it is a good idea to\n> do now with this patch but it makes sense to be consistent. What do\n> you think?\n>\n\nOK, I have given those changes as separate patches:\n- 0002 (changes the CREATE PUBLICATION parameter ids)\n- 0003 (changes CREATE SUBSCRIPTION parameter ids)\n\n> > ~~~\n> >\n> > The \"Parameters\" section describes some things that really are parameters:\n> >\n> > e.g.\n> > \"sql-altersubscription-name\"\n> > \"sql-altersubscription-new-owner\"\n> > \"sql-altersubscription-new-name\">\n> >\n> > I agree, emphasising that those ones are parameters is better. Changed\n> > like this in v2.\n> >\n> > \"sql-altersubscription-params-name\"\n> > \"sql-altersubscription-params-new-owner\"\n> > \"sql-altersubscription-params-new-name\">\n> >\n> > ~\n> >\n> > But, the \"Parameters\" section also describes other SQL syntax clauses\n> > which are not really parameters at all.\n> >\n> > e.g.\n> > \"sql-altersubscription-refresh-publication\"\n> > \"sql-altersubscription-enable\"\n> > \"sql-altersubscription-disable\"\n> >\n> > So I felt those ones are more intuitive left as they are -- e.g.,\n> > instead of having ids/linkends like:\n> >\n> > \"sql-altersubscription-params-refresh-publication\"\n> > \"sql-altersubscription-params-enable\"\n> > \"sql-altersubscription-params-disable\"\n> >\n>\n> I checked alter_role.sgml which has similar mixed usage and it is\n> using 'params' consistently in all cases. So, I would suggest\n> following a similar style here.\n>\n\nAs you wish. Done that way in patch 0001.\n\n~~\n\nPSA the v5 patches.\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Fri, 13 Oct 2023 14:32:52 +1100", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PGDOCS - add more links in the pub/sub reference pages" }, { "msg_contents": "Thanks for pushing the 0001 patch! I am unsure what the decision was\nfor the remaining patches, but anyway, here they are again (rebased).\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Mon, 16 Oct 2023 11:45:30 +1100", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PGDOCS - add more links in the pub/sub reference pages" }, { "msg_contents": "On Mon, Oct 16, 2023 at 6:15 AM Peter Smith <[email protected]> wrote:\n>\n> Thanks for pushing the 0001 patch! I am unsure what the decision was\n> for the remaining patches, but anyway, here they are again (rebased).\n>\n\nTo keep the link names the same in logical replication-related\ncommands, I have pushed your patch.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 30 Oct 2023 14:45:44 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PGDOCS - add more links in the pub/sub reference pages" }, { "msg_contents": "Thanks for pushing!\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Tue, 31 Oct 2023 08:53:05 +1100", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PGDOCS - add more links in the pub/sub reference pages" } ]
[ { "msg_contents": "Hi,\n\nI noticed a couple of typos in code. \"the the\" should have been \"the\",\nattached patch has the changes for the same.\n\nRegards,\nVignesh", "msg_date": "Fri, 6 Oct 2023 15:30:33 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": true, "msg_subject": "typo in couple of places" }, { "msg_contents": "vignesh C <[email protected]> writes:\n\n> Hi,\n>\n> I noticed a couple of typos in code. \"the the\" should have been \"the\",\n> attached patch has the changes for the same.\n\nThis made me curious about other duplicate word occurrences, and after a\nfew minutes of increasingly-elaborate regexing to exclude false\npostives, I found a few more (plus a bonus a _missing_ \"the\"). See the\nattached patch (which includes your originl one, for completeness).\n\n> Regards,\n> Vignesh\n\n- ilmari", "msg_date": "Fri, 06 Oct 2023 16:20:29 +0100", "msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: typo in couple of places" }, { "msg_contents": "On Fri, Oct 6, 2023 at 8:52 PM Dagfinn Ilmari Mannsåker\n<[email protected]> wrote:\n>\n> vignesh C <[email protected]> writes:\n>\n> > Hi,\n> >\n> > I noticed a couple of typos in code. \"the the\" should have been \"the\",\n> > attached patch has the changes for the same.\n>\n> This made me curious about other duplicate word occurrences, and after a\n> few minutes of increasingly-elaborate regexing to exclude false\n> postives, I found a few more (plus a bonus a _missing_ \"the\"). See the\n> attached patch (which includes your originl one, for completeness).\n>\n\nLGTM.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Sat, 7 Oct 2023 05:49:26 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: typo in couple of places" }, { "msg_contents": "On Fri, 6 Oct 2023 at 20:50, Dagfinn Ilmari Mannsåker <[email protected]> wrote:\n>\n> vignesh C <[email protected]> writes:\n>\n> > Hi,\n> >\n> > I noticed a couple of typos in code. \"the the\" should have been \"the\",\n> > attached patch has the changes for the same.\n>\n> This made me curious about other duplicate word occurrences, and after a\n> few minutes of increasingly-elaborate regexing to exclude false\n> postives, I found a few more (plus a bonus a _missing_ \"the\"). See the\n> attached patch (which includes your originl one, for completeness).\n\nThanks, Looks good.\n\nRegards,\nVignesh\n\n\n", "msg_date": "Sat, 7 Oct 2023 08:27:31 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": true, "msg_subject": "Re: typo in couple of places" }, { "msg_contents": "On Sat, Oct 7, 2023 at 8:28 AM vignesh C <[email protected]> wrote:\n>\n> On Fri, 6 Oct 2023 at 20:50, Dagfinn Ilmari Mannsåker <[email protected]> wrote:\n> >\n> > vignesh C <[email protected]> writes:\n> >\n> > > Hi,\n> > >\n> > > I noticed a couple of typos in code. \"the the\" should have been \"the\",\n> > > attached patch has the changes for the same.\n> >\n> > This made me curious about other duplicate word occurrences, and after a\n> > few minutes of increasingly-elaborate regexing to exclude false\n> > postives, I found a few more (plus a bonus a _missing_ \"the\"). See the\n> > attached patch (which includes your originl one, for completeness).\n>\n> Thanks, Looks good.\n>\n\nPushed.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 9 Oct 2023 16:17:34 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: typo in couple of places" }, { "msg_contents": "On Mon, 9 Oct 2023 at 16:17, Amit Kapila <[email protected]> wrote:\n>\n> On Sat, Oct 7, 2023 at 8:28 AM vignesh C <[email protected]> wrote:\n> >\n> > On Fri, 6 Oct 2023 at 20:50, Dagfinn Ilmari Mannsåker <[email protected]> wrote:\n> > >\n> > > vignesh C <[email protected]> writes:\n> > >\n> > > > Hi,\n> > > >\n> > > > I noticed a couple of typos in code. \"the the\" should have been \"the\",\n> > > > attached patch has the changes for the same.\n> > >\n> > > This made me curious about other duplicate word occurrences, and after a\n> > > few minutes of increasingly-elaborate regexing to exclude false\n> > > postives, I found a few more (plus a bonus a _missing_ \"the\"). See the\n> > > attached patch (which includes your originl one, for completeness).\n> >\n> > Thanks, Looks good.\n> >\n>\n> Pushed.\n\nThanks for pushing this.\n\n\n", "msg_date": "Tue, 10 Oct 2023 06:13:17 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": true, "msg_subject": "Re: typo in couple of places" } ]
[ { "msg_contents": "There are a lot of Datum *values, bool *nulls argument pairs that should \nreally be const. The 0001 patch makes those changes. Some of these \nhunks depend on each other.\n\nThe 0002 patch, which I'm not proposing to commit at this time, makes \nsimilar changes but in a way that breaks the table and index AM APIs. \nSo I'm just including that here in case anyone wonders, why didn't you \ntouch those. And also maybe if we ever change that API incompatibly we \ncould throw this one in then.", "msg_date": "Fri, 6 Oct 2023 15:11:50 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Add const to values and nulls arguments" }, { "msg_contents": "Hi,\n\n> There are a lot of Datum *values, bool *nulls argument pairs that should\n> really be const. The 0001 patch makes those changes. Some of these\n> hunks depend on each other.\n>\n> The 0002 patch, which I'm not proposing to commit at this time, makes\n> similar changes but in a way that breaks the table and index AM APIs.\n> So I'm just including that here in case anyone wonders, why didn't you\n> touch those. And also maybe if we ever change that API incompatibly we\n> could throw this one in then.\n\n0001 looks good to me and passes the tests in several environments,\nnot surprisingly. Let's merge it.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Fri, 6 Oct 2023 17:51:59 +0300", "msg_from": "Aleksander Alekseev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add const to values and nulls arguments" }, { "msg_contents": "On 06.10.23 16:51, Aleksander Alekseev wrote:\n>> There are a lot of Datum *values, bool *nulls argument pairs that should\n>> really be const. The 0001 patch makes those changes. Some of these\n>> hunks depend on each other.\n>>\n>> The 0002 patch, which I'm not proposing to commit at this time, makes\n>> similar changes but in a way that breaks the table and index AM APIs.\n>> So I'm just including that here in case anyone wonders, why didn't you\n>> touch those. And also maybe if we ever change that API incompatibly we\n>> could throw this one in then.\n> \n> 0001 looks good to me and passes the tests in several environments,\n> not surprisingly. Let's merge it.\n\ndone\n\n\n\n", "msg_date": "Tue, 10 Oct 2023 07:57:00 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add const to values and nulls arguments" }, { "msg_contents": "Hi,\n\n> >> The 0002 patch, which I'm not proposing to commit at this time, makes\n> >> similar changes but in a way that breaks the table and index AM APIs.\n> >> So I'm just including that here in case anyone wonders, why didn't you\n> >> touch those. And also maybe if we ever change that API incompatibly we\n> >> could throw this one in then.\n> >\n> > 0001 looks good to me and passes the tests in several environments,\n> > not surprisingly. Let's merge it.\n>\n> done\n\nGreat. FWIW changing the index AM API in this particular aspect\ndoesn't strike me as such a terrible idea.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Tue, 10 Oct 2023 12:14:26 +0300", "msg_from": "Aleksander Alekseev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add const to values and nulls arguments" } ]
[ { "msg_contents": "Hi,\n\nI want to make a custom range (and multirange type) that carries a few useful details. \nLooks like I may have to implement the operators, custom functions and my own aggregator.\nWhere can I find the code SQL code for anything that relates to the tstzrange range type?\n\nOr would you recommend a different approach?\nI essentially want to be able to aggregate multiple tstzranges - each range with its own importance. The aggregation would be like a a join/intersect where ranges with higher importance override the ones with lower importance.\n\nThanks!\nRares\n\n", "msg_date": "Fri, 6 Oct 2023 16:55:18 +0300", "msg_from": "\"Rares Pop (Treelet)\" <[email protected]>", "msg_from_op": true, "msg_subject": "Custom tstzrange with importance factored in" }, { "msg_contents": "On Fri, 2023-10-06 at 16:55 +0300, Rares Pop (Treelet) wrote:\n> I essentially want to be able to aggregate multiple tstzranges - each\n> range with its own importance. The aggregation would be like a a\n> join/intersect where ranges with higher importance override the ones\n> with lower importance.\n\nIt may be possible without a new data type. Can you describe the\nsemantics more precisely?\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Wed, 25 Oct 2023 12:02:40 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Custom tstzrange with importance factored in" } ]
[ { "msg_contents": "Hello postgres hackers,\n\nI noticed that combination of prepared statement with generic plan and\n'IS NULL' clause could lead partition pruning to crash.\n\nAffected versions start from 12 it seems.\n\n'How to repeat' below and an attempt to fix it is in attachment.\n\n\nData set:\n------\ncreate function part_hashint4_noop(value int4, seed int8)\n     returns int8 as $$\n     select value + seed;\n     $$ language sql strict immutable parallel safe;\n\ncreate operator class part_test_int4_ops for type int4 using hash as\n     operator 1 =,\n     function 2 part_hashint4_noop(int4, int8);\n\ncreate function part_hashtext_length(value text, seed int8)\n     returns int8 as $$\n     select length(coalesce(value, ''))::int8\n     $$ language sql strict immutable parallel safe;\n\ncreate operator class part_test_text_ops for type text using hash as\n     operator 1 =,\n     function 2 part_hashtext_length(text, int8);\n\n\ncreate table hp (a int, b text, c int)\n   partition by hash (a part_test_int4_ops, b part_test_text_ops);\ncreate table hp0 partition of hp for values with (modulus 4, remainder 0);\ncreate table hp3 partition of hp for values with (modulus 4, remainder 3);\ncreate table hp1 partition of hp for values with (modulus 4, remainder 1);\ncreate table hp2 partition of hp for values with (modulus 4, remainder 2);\n\ninsert into hp values (null, null, 0);\ninsert into hp values (1, null, 1);\ninsert into hp values (1, 'xxx', 2);\ninsert into hp values (null, 'xxx', 3);\ninsert into hp values (2, 'xxx', 4);\ninsert into hp values (1, 'abcde', 5);\n------\n\nTest case:\n------\nset plan_cache_mode to force_generic_plan;\nprepare stmt AS select * from hp where a is null and b = $1;\nexplain execute stmt('xxx');\n------\n\n\nRegargs,\nGluh", "msg_date": "Fri, 6 Oct 2023 18:09:45 +0400", "msg_from": "Sergei Glukhov <[email protected]>", "msg_from_op": true, "msg_subject": "Problem, partition pruning for prepared statement with IS NULL\n clause." }, { "msg_contents": "On Fri, Oct 6, 2023 at 06:09:45PM +0400, Sergei Glukhov wrote:\n> Test case:\n> ------\n> set plan_cache_mode to force_generic_plan;\n> prepare stmt AS select * from hp where a is null and b = $1;\n> explain execute stmt('xxx');\n> ------\n\nI can confirm the crash in git master.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Fri, 6 Oct 2023 17:00:54 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Problem, partition pruning for prepared statement with IS NULL\n clause." }, { "msg_contents": "On Fri, Oct 6, 2023 at 05:00:54PM -0400, Bruce Momjian wrote:\n> On Fri, Oct 6, 2023 at 06:09:45PM +0400, Sergei Glukhov wrote:\n> > Test case:\n> > ------\n> > set plan_cache_mode to force_generic_plan;\n> > prepare stmt AS select * from hp where a is null and b = $1;\n> > explain execute stmt('xxx');\n> > ------\n> \n> I can confirm the crash in git master.\n\nThere were some UTF8 non-space whitespace characters in the email so\nattached is a clean reproducable SQL crash file.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.", "msg_date": "Fri, 6 Oct 2023 17:02:43 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Problem, partition pruning for prepared statement with IS NULL\n clause." }, { "msg_contents": "On Sat, 7 Oct 2023 at 03:11, Sergei Glukhov <[email protected]> wrote:\n> I noticed that combination of prepared statement with generic plan and\n> 'IS NULL' clause could lead partition pruning to crash.\n\n> Test case:\n> ------\n> set plan_cache_mode to force_generic_plan;\n> prepare stmt AS select * from hp where a is null and b = $1;\n> explain execute stmt('xxx');\n\nThanks for the detailed report and proposed patch.\n\nI think your proposed fix isn't quite correct. I think the problem\nlies in InitPartitionPruneContext() where we assume that the list\npositions of step->exprs are in sync with the keyno. If you look at\nperform_pruning_base_step() the code there makes a special effort to\nskip over any keyno when a bit is set in opstep->nullkeys.\n\nIt seems that your patch is adjusting the keyno that's given to the\nPruneCxtStateIdx() and it looks like (for your test case) it'll end up\npassing keyno==0 when it should be passing keyno==1. keyno is the\nindex of the partition key, so you can't pass 0 when it's for key\nindex 1.\n\nI wonder if it's worth expanding the tests further to cover more of\nthe pruning cases to cover run-time pruning too.\n\nDavid", "msg_date": "Mon, 9 Oct 2023 12:26:24 +1300", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Problem,\n partition pruning for prepared statement with IS NULL clause." }, { "msg_contents": "The comment /* not needed for Consts */ may be more better close to if\n(!IsA(expr, Const)).\nOthers look good to me.\n\nDavid Rowley <[email protected]> 于2023年10月9日周一 07:28写道:\n\n> On Sat, 7 Oct 2023 at 03:11, Sergei Glukhov <[email protected]>\n> wrote:\n> > I noticed that combination of prepared statement with generic plan and\n> > 'IS NULL' clause could lead partition pruning to crash.\n>\n> > Test case:\n> > ------\n> > set plan_cache_mode to force_generic_plan;\n> > prepare stmt AS select * from hp where a is null and b = $1;\n> > explain execute stmt('xxx');\n>\n> Thanks for the detailed report and proposed patch.\n>\n> I think your proposed fix isn't quite correct. I think the problem\n> lies in InitPartitionPruneContext() where we assume that the list\n> positions of step->exprs are in sync with the keyno. If you look at\n> perform_pruning_base_step() the code there makes a special effort to\n> skip over any keyno when a bit is set in opstep->nullkeys.\n>\n> It seems that your patch is adjusting the keyno that's given to the\n> PruneCxtStateIdx() and it looks like (for your test case) it'll end up\n> passing keyno==0 when it should be passing keyno==1. keyno is the\n> index of the partition key, so you can't pass 0 when it's for key\n> index 1.\n>\n> I wonder if it's worth expanding the tests further to cover more of\n> the pruning cases to cover run-time pruning too.\n>\n\n I think it's worth doing that.\n\nDavid\n>\n\nThe comment  /* not needed for Consts */  may be more better close to if (!IsA(expr, Const)).Others look good to me.David Rowley <[email protected]> 于2023年10月9日周一 07:28写道:On Sat, 7 Oct 2023 at 03:11, Sergei Glukhov <[email protected]> wrote:\n> I noticed that combination of prepared statement with generic plan and\n> 'IS NULL' clause could lead partition pruning to crash.\n\n> Test case:\n> ------\n> set plan_cache_mode to force_generic_plan;\n> prepare stmt AS select * from hp where a is null and b = $1;\n> explain execute stmt('xxx');\n\nThanks for the detailed report and proposed patch.\n\nI think your proposed fix isn't quite correct.  I think the problem\nlies in InitPartitionPruneContext() where we assume that the list\npositions of step->exprs are in sync with the keyno.  If you look at\nperform_pruning_base_step() the code there makes a special effort to\nskip over any keyno when a bit is set in opstep->nullkeys.\n\nIt seems that your patch is adjusting the keyno that's given to the\nPruneCxtStateIdx() and it looks like (for your test case) it'll end up\npassing keyno==0 when it should be passing keyno==1.  keyno is the\nindex of the partition key, so you can't pass 0 when it's for key\nindex 1.\n\nI wonder if it's worth expanding the tests further to cover more of\nthe pruning cases to cover run-time pruning too.    I think it's worth doing that.  \nDavid", "msg_date": "Tue, 10 Oct 2023 15:58:22 +0800", "msg_from": "tender wang <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Problem,\n partition pruning for prepared statement with IS NULL clause." }, { "msg_contents": "Hi David,\n\n\nOn 10/9/23 03:26, David Rowley wrote:\n> On Sat, 7 Oct 2023 at 03:11, Sergei Glukhov <[email protected]> wrote:\n>> I noticed that combination of prepared statement with generic plan and\n>> 'IS NULL' clause could lead partition pruning to crash.\n>> Test case:\n>> ------\n>> set plan_cache_mode to force_generic_plan;\n>> prepare stmt AS select * from hp where a is null and b = $1;\n>> explain execute stmt('xxx');\n> Thanks for the detailed report and proposed patch.\n>\n> I think your proposed fix isn't quite correct. I think the problem\n> lies in InitPartitionPruneContext() where we assume that the list\n> positions of step->exprs are in sync with the keyno. If you look at\n> perform_pruning_base_step() the code there makes a special effort to\n> skip over any keyno when a bit is set in opstep->nullkeys.\n>\n> It seems that your patch is adjusting the keyno that's given to the\n> PruneCxtStateIdx() and it looks like (for your test case) it'll end up\n> passing keyno==0 when it should be passing keyno==1. keyno is the\n> index of the partition key, so you can't pass 0 when it's for key\n> index 1.\n>\n> I wonder if it's worth expanding the tests further to cover more of\n> the pruning cases to cover run-time pruning too.\n\nThanks for the explanation. I thought by some reason that 'exprstates ' \narray doesn't\ncontain elements related to 'IS NULL' clause. Now I see that they are \nthere and\njust empty and untouched.\n\nI verified the patch and it fixes the problem.\n\nRegarding test case,\nbesides the current test case and the test for dynamic partition \npruning, say,\n\nselect a, (select b from hp where a is null and b = a.b) AS b from hp a \nwhere a = 1 and b = 'xxx';\n\nI would like to suggest to slightly refactor 'Test Partition pruning for \nHASH partitioning' test\nfrom 'partition_prune.sql' and add one more key field. The reason is \nthat two-element\nkey is not enough for thorough testing since it tests mostly corner \ncases. Let me know\nif it's worth doing.\n\nExample:\n------\ncreate table hp (a int, b text, c int, d int)\n   partition by hash (a part_test_int4_ops, b part_test_text_ops, c \npart_test_int4_ops);\ncreate table hp0 partition of hp for values with (modulus 4, remainder 0);\ncreate table hp3 partition of hp for values with (modulus 4, remainder 3);\ncreate table hp1 partition of hp for values with (modulus 4, remainder 1);\ncreate table hp2 partition of hp for values with (modulus 4, remainder 2);\n\ninsert into hp values (null, null, null, 0);\ninsert into hp values (1, null, 1, 1);\ninsert into hp values (1, 'xxx', 1, 2);\ninsert into hp values (null, 'xxx', null, 3);\ninsert into hp values (2, 'xxx', 2, 4);\ninsert into hp values (1, 'abcde', 1, 5);\n------\n\nAnother crash in the different place even with the fix:\n------\nexplain select * from hp where a = 1 and b is null and c = 1;\n------\n\n\nRegards,\nGluh\n\n\n\n", "msg_date": "Tue, 10 Oct 2023 12:31:32 +0400", "msg_from": "Sergei Glukhov <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Problem, partition pruning for prepared statement with IS NULL\n clause." }, { "msg_contents": "On Tue, 10 Oct 2023 at 21:31, Sergei Glukhov <[email protected]> wrote:\n> create table hp (a int, b text, c int, d int)\n> partition by hash (a part_test_int4_ops, b part_test_text_ops, c\n> part_test_int4_ops);\n> create table hp0 partition of hp for values with (modulus 4, remainder 0);\n> create table hp3 partition of hp for values with (modulus 4, remainder 3);\n> create table hp1 partition of hp for values with (modulus 4, remainder 1);\n> create table hp2 partition of hp for values with (modulus 4, remainder 2);\n>\n>\n> Another crash in the different place even with the fix:\n> explain select * from hp where a = 1 and b is null and c = 1;\n\nOuch. It looks like 13838740f tried to fix things in this area before\nand even added a regression test for it. Namely:\n\n-- Test that get_steps_using_prefix() handles non-NULL step_nullkeys\nexplain (costs off) select * from hp_prefix_test where a = 1 and b is\nnull and c = 1 and d = 1;\n\nI guess that one does not crash because of the \"d = 1\" clause is in\nthe \"start\" ListCell in get_steps_using_prefix_recurse(), whereas,\nwith your case start is NULL which is an issue for cur_keyno =\n((PartClauseInfo *) lfirst(start))->keyno;.\n\nIt might have been better if PartClauseInfo could also describe IS\nNULL quals, but I feel if we do that now then it would require lots of\ncareful surgery in partprune.c to account for that. Probably the fix\nshould be localised to get_steps_using_prefix_recurse() to have it do\nsomething like pass the keyno to try and work on rather than trying to\nget that from the \"prefix\" list. That way if there's no item in that\nlist for that keyno, we can check in step_nullkeys for the keyno.\n\nI'll continue looking.\n\nDavid\n\n\n", "msg_date": "Wed, 11 Oct 2023 15:49:38 +1300", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Problem,\n partition pruning for prepared statement with IS NULL clause." }, { "msg_contents": "For hash partition table, if partition key is IS NULL clause, the\ncondition in if in get_steps_using_prefix_recurse:\nif (cur_keyno < step_lastkeyno - 1)\nis not enough.\nLike the decode crashed case, explain select * from hp where a = 1 and b is\nnull and c = 1;\nprefix list just has a = 1 clause.\nI try fix this in attached patch.\nDavid Rowley <[email protected]> 于2023年10月11日周三 10:50写道:\n\n> On Tue, 10 Oct 2023 at 21:31, Sergei Glukhov <[email protected]>\n> wrote:\n> > create table hp (a int, b text, c int, d int)\n> > partition by hash (a part_test_int4_ops, b part_test_text_ops, c\n> > part_test_int4_ops);\n> > create table hp0 partition of hp for values with (modulus 4, remainder\n> 0);\n> > create table hp3 partition of hp for values with (modulus 4, remainder\n> 3);\n> > create table hp1 partition of hp for values with (modulus 4, remainder\n> 1);\n> > create table hp2 partition of hp for values with (modulus 4, remainder\n> 2);\n> >\n> >\n> > Another crash in the different place even with the fix:\n> > explain select * from hp where a = 1 and b is null and c = 1;\n>\n> Ouch. It looks like 13838740f tried to fix things in this area before\n> and even added a regression test for it. Namely:\n>\n> -- Test that get_steps_using_prefix() handles non-NULL step_nullkeys\n> explain (costs off) select * from hp_prefix_test where a = 1 and b is\n> null and c = 1 and d = 1;\n>\n> I guess that one does not crash because of the \"d = 1\" clause is in\n> the \"start\" ListCell in get_steps_using_prefix_recurse(), whereas,\n> with your case start is NULL which is an issue for cur_keyno =\n> ((PartClauseInfo *) lfirst(start))->keyno;.\n>\n> It might have been better if PartClauseInfo could also describe IS\n> NULL quals, but I feel if we do that now then it would require lots of\n> careful surgery in partprune.c to account for that. Probably the fix\n> should be localised to get_steps_using_prefix_recurse() to have it do\n> something like pass the keyno to try and work on rather than trying to\n> get that from the \"prefix\" list. That way if there's no item in that\n> list for that keyno, we can check in step_nullkeys for the keyno.\n>\n> I'll continue looking.\n>\n> David\n>\n>\n>", "msg_date": "Wed, 11 Oct 2023 11:44:15 +0800", "msg_from": "tender wang <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Problem,\n partition pruning for prepared statement with IS NULL clause." }, { "msg_contents": "On Wed, 11 Oct 2023 at 15:49, David Rowley <[email protected]> wrote:\n> It might have been better if PartClauseInfo could also describe IS\n> NULL quals, but I feel if we do that now then it would require lots of\n> careful surgery in partprune.c to account for that. Probably the fix\n> should be localised to get_steps_using_prefix_recurse() to have it do\n> something like pass the keyno to try and work on rather than trying to\n> get that from the \"prefix\" list. That way if there's no item in that\n> list for that keyno, we can check in step_nullkeys for the keyno.\n>\n> I'll continue looking.\n\nThe fix seems to amount to the attached. The following condition\nassumes that by not recursively processing step_lastkeyno - 1 that\nthere will be at least one more PartClauseInfo in the prefix List to\nprocess. However, that didn't work when that partition key clause was\ncovered by an IS NULL clause.\n\nIf we adjust the following condition:\n\nif (cur_keyno < step_lastkeyno - 1)\n\nto become:\n\nfinal_keyno = ((PartClauseInfo *) llast(prefix))->keyno;\nif (cur_keyno < final_keyno)\n\nthen that ensures that the else clause can pick up any clauses for the\nfinal column mentioned in the 'prefix' list, plus any nullkeys if\nthere happens to be any of those too.\n\nFor testing, given that 13838740f (from 2020) had a go at fixing this\nalready, I'm kinda thinking that it's not overkill to test all\npossible 16 combinations of IS NULL and equality equals on the 4\npartition key column partitioned table that commit added in\npartition_prune.sql.\n\nI added some tests there using \\gexec to prevent having to write out\neach of the 16 queries by hand. I tested that pruning worked (i.e 1\nmatching partition in EXPLAIN), and that we get the correct results\n(i.e we pruned the correct partition) by running the query and we get\nthe expected 1 row after having inserted 16 rows, one for each\ncombination of quals to test.\n\nI wanted to come up with some tests that test for multiple quals\nmatching the same partition key. This is tricky due to the\nequivalence class code being smart and removing any duplicates or\nmarking the rel as dummy when it finds conflicting quals. With hash\npartitioning, we're limited to just equality quals, so maybe something\ncould be done with range-partitioned tables instead. I see there are\nsome tests just above the ones I modified which try to cover this.\n\nI also tried to outsmart the planner by using Params and prepared\nqueries. Namely:\n\nset plan_cache_mode = 'force_generic_plan';\nprepare q1 (int, int, int, int, int, int, int, int) as select\ntableoid::regclass,* from hp_prefix_test where a = $1 and b = $2 and c\n= $3 and d = $4 and a = $5 and b = $6 and c = $7 and d = $8;\nexplain (costs off) execute q1 (1,2,3,4,1,2,3,4);\n\nBut I was outsmarted again with a gating qual which checked the pairs\nmatch before doing the scan :-(\n\n Append\n Subplans Removed: 15\n -> Result\n One-Time Filter: (($1 = $5) AND ($2 = $6) AND ($3 = $7) AND ($4 = $8))\n -> Seq Scan on hp_prefix_test_p14 hp_prefix_test_1\n Filter: ((a = $5) AND (b = $6) AND (c = $7) AND (d = $8))\n\nI'm aiming to commit these as two separate fixes, so I'm going to go\nlook again at the first one and wait to see if anyone wants to comment\non this patch in the meantime.\n\nDavid", "msg_date": "Wed, 11 Oct 2023 20:50:41 +1300", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Problem,\n partition pruning for prepared statement with IS NULL clause." }, { "msg_contents": "David Rowley <[email protected]> 于2023年10月11日周三 15:52写道:\n\n> On Wed, 11 Oct 2023 at 15:49, David Rowley <[email protected]> wrote:\n> > It might have been better if PartClauseInfo could also describe IS\n> > NULL quals, but I feel if we do that now then it would require lots of\n> > careful surgery in partprune.c to account for that. Probably the fix\n> > should be localised to get_steps_using_prefix_recurse() to have it do\n> > something like pass the keyno to try and work on rather than trying to\n> > get that from the \"prefix\" list. That way if there's no item in that\n> > list for that keyno, we can check in step_nullkeys for the keyno.\n> >\n> > I'll continue looking.\n>\n> The fix seems to amount to the attached. The following condition\n> assumes that by not recursively processing step_lastkeyno - 1 that\n> there will be at least one more PartClauseInfo in the prefix List to\n> process. However, that didn't work when that partition key clause was\n> covered by an IS NULL clause.\n>\n> If we adjust the following condition:\n>\n> if (cur_keyno < step_lastkeyno - 1)\n>\n> to become:\n>\n> final_keyno = ((PartClauseInfo *) llast(prefix))->keyno;\n> if (cur_keyno < final_keyno)\n>\n\nYeah, aggred.\n\n\n> then that ensures that the else clause can pick up any clauses for the\n> final column mentioned in the 'prefix' list, plus any nullkeys if\n> there happens to be any of those too.\n>\n> For testing, given that 13838740f (from 2020) had a go at fixing this\n> already, I'm kinda thinking that it's not overkill to test all\n> possible 16 combinations of IS NULL and equality equals on the 4\n> partition key column partitioned table that commit added in\n> partition_prune.sql.\n>\n> I added some tests there using \\gexec to prevent having to write out\n> each of the 16 queries by hand. I tested that pruning worked (i.e 1\n> matching partition in EXPLAIN), and that we get the correct results\n> (i.e we pruned the correct partition) by running the query and we get\n> the expected 1 row after having inserted 16 rows, one for each\n> combination of quals to test.\n>\n> I wanted to come up with some tests that test for multiple quals\n> matching the same partition key. This is tricky due to the\n> equivalence class code being smart and removing any duplicates or\n> marking the rel as dummy when it finds conflicting quals. With hash\n> partitioning, we're limited to just equality quals, so maybe something\n> could be done with range-partitioned tables instead. I see there are\n> some tests just above the ones I modified which try to cover this.\n>\n> I also tried to outsmart the planner by using Params and prepared\n> queries. Namely:\n>\n> set plan_cache_mode = 'force_generic_plan';\n> prepare q1 (int, int, int, int, int, int, int, int) as select\n> tableoid::regclass,* from hp_prefix_test where a = $1 and b = $2 and c\n> = $3 and d = $4 and a = $5 and b = $6 and c = $7 and d = $8;\n> explain (costs off) execute q1 (1,2,3,4,1,2,3,4);\n>\n> But I was outsmarted again with a gating qual which checked the pairs\n> match before doing the scan :-(\n>\n> Append\n> Subplans Removed: 15\n> -> Result\n> One-Time Filter: (($1 = $5) AND ($2 = $6) AND ($3 = $7) AND ($4 =\n> $8))\n> -> Seq Scan on hp_prefix_test_p14 hp_prefix_test_1\n> Filter: ((a = $5) AND (b = $6) AND (c = $7) AND (d = $8))\n>\n> I'm aiming to commit these as two separate fixes, so I'm going to go\n> look again at the first one and wait to see if anyone wants to comment\n> on this patch in the meantime.\n>\n+1, LGTM\n\n\n> David\n>\n\nDavid Rowley <[email protected]> 于2023年10月11日周三 15:52写道:On Wed, 11 Oct 2023 at 15:49, David Rowley <[email protected]> wrote:\n> It might have been better if PartClauseInfo could also describe IS\n> NULL quals, but I feel if we do that now then it would require lots of\n> careful surgery in partprune.c to account for that.  Probably the fix\n> should be localised to get_steps_using_prefix_recurse() to have it do\n> something like pass the keyno to try and work on rather than trying to\n> get that from the \"prefix\" list. That way if there's no item in that\n> list for that keyno, we can check in step_nullkeys for the keyno.\n>\n> I'll continue looking.\n\nThe fix seems to amount to the attached.  The following condition\nassumes that by not recursively processing step_lastkeyno - 1 that\nthere will be at least one more PartClauseInfo in the prefix List to\nprocess.  However, that didn't work when that partition key clause was\ncovered by an IS NULL clause.\n\nIf we adjust the following condition:\n\nif (cur_keyno < step_lastkeyno - 1)\n\nto become:\n\nfinal_keyno = ((PartClauseInfo *) llast(prefix))->keyno;\nif (cur_keyno < final_keyno)Yeah, aggred. \nthen that ensures that the else clause can pick up any clauses for the\nfinal column mentioned in the 'prefix' list, plus any nullkeys if\nthere happens to be any of those too.\n\nFor testing, given that 13838740f (from 2020) had a go at fixing this\nalready, I'm kinda thinking that it's not overkill to test all\npossible 16 combinations of IS NULL and equality equals on the 4\npartition key column partitioned table that commit added in\npartition_prune.sql.\n\nI added some tests there using \\gexec to prevent having to write out\neach of the 16 queries by hand. I tested that pruning worked (i.e 1\nmatching partition in EXPLAIN), and that we get the correct results\n(i.e we pruned the correct partition) by running the query and we get\nthe expected 1 row after having inserted 16 rows, one for each\ncombination of quals to test.\n\nI wanted to come up with some tests that test for multiple quals\nmatching the same partition key.  This is tricky due to the\nequivalence class code being smart and removing any duplicates or\nmarking the rel as dummy when it finds conflicting quals.  With hash\npartitioning, we're limited to just equality quals, so maybe something\ncould be done with range-partitioned tables instead.  I see there are\nsome tests just above the ones I modified which try to cover this.\n\nI also tried to outsmart the planner by using Params and prepared\nqueries. Namely:\n\nset plan_cache_mode = 'force_generic_plan';\nprepare q1 (int, int, int, int, int, int, int, int) as select\ntableoid::regclass,* from hp_prefix_test where a = $1 and b = $2 and c\n= $3 and d = $4 and a = $5 and b = $6 and c = $7 and d = $8;\nexplain (costs off) execute q1 (1,2,3,4,1,2,3,4);\n\nBut I was outsmarted again with a gating qual which checked the pairs\nmatch before doing the scan :-(\n\n Append\n   Subplans Removed: 15\n   ->  Result\n         One-Time Filter: (($1 = $5) AND ($2 = $6) AND ($3 = $7) AND ($4 = $8))\n         ->  Seq Scan on hp_prefix_test_p14 hp_prefix_test_1\n               Filter: ((a = $5) AND (b = $6) AND (c = $7) AND (d = $8))\n\nI'm aiming to commit these as two separate fixes, so I'm going to go\nlook again at the first one and wait to see if anyone wants to comment\non this patch in the meantime.+1, LGTM \nDavid", "msg_date": "Wed, 11 Oct 2023 17:06:29 +0800", "msg_from": "tender wang <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Problem,\n partition pruning for prepared statement with IS NULL clause." }, { "msg_contents": "Hi,\n\nThanks for fixing this!\n\nI verified that issues are fixed.\n\nOn 10/11/23 11:50, David Rowley wrote:\n\n> I'm aiming to commit these as two separate fixes, so I'm going to go\n> look again at the first one and wait to see if anyone wants to comment\n> on this patch in the meantime.\n\nRegarding test case for the first patch,\nthe line 'set plan_cache_mode = 'force_generic_plan';' is not necessary\nsince cache mode is set at the top of the test. On the other hand test\nscenario can silently  be loosed if someone set another cache mode\nsomewhere upper. As you mentioned earlier it's worth maybe adding\nthe test for run-time partition pruning.\n\nRegards,\nGluh\n\n\n\n", "msg_date": "Wed, 11 Oct 2023 13:09:36 +0400", "msg_from": "Sergei Glukhov <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Problem, partition pruning for prepared statement with IS NULL\n clause." }, { "msg_contents": "On Wed, 11 Oct 2023 at 22:09, Sergei Glukhov <[email protected]> wrote:\n> Thanks for fixing this!\n>\n> I verified that issues are fixed.\n\nThanks for having a look.\n\nUnfortunately, I'd not long sent the last email and realised that the\nstep_lastkeyno parameter is now unused and can just be removed from\nboth get_steps_using_prefix() and get_steps_using_prefix_recurse().\nThis requires some comment rewriting so I've attempted to do that too\nin the attached updated version.\n\nDavid", "msg_date": "Wed, 11 Oct 2023 22:19:37 +1300", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Problem,\n partition pruning for prepared statement with IS NULL clause." }, { "msg_contents": "\n\nOn 10/11/23 13:19, David Rowley wrote:\n>\n> Thanks for having a look.\n>\n> Unfortunately, I'd not long sent the last email and realised that the\n> step_lastkeyno parameter is now unused and can just be removed from\n> both get_steps_using_prefix() and get_steps_using_prefix_recurse().\n> This requires some comment rewriting so I've attempted to do that too\n> in the attached updated version.\n\nThanks, verified again and everything is fine!\n\nRegards,\nGluh\n\n\n", "msg_date": "Wed, 11 Oct 2023 13:59:37 +0400", "msg_from": "Sergei Glukhov <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Problem, partition pruning for prepared statement with IS NULL\n clause." }, { "msg_contents": "On Wed, 11 Oct 2023 at 22:59, Sergei Glukhov <[email protected]> wrote:\n> > Unfortunately, I'd not long sent the last email and realised that the\n> > step_lastkeyno parameter is now unused and can just be removed from\n> > both get_steps_using_prefix() and get_steps_using_prefix_recurse().\n> > This requires some comment rewriting so I've attempted to do that too\n> > in the attached updated version.\n>\n> Thanks, verified again and everything is fine!\n\nThanks for looking. I spent quite a bit more time on this again today\nand fiddled lots more with the comments and tests.\n\nI also did more testing after finding a way to easily duplicate the\nquals to cause multiple quals per partition key. The equivalence\nclass code will only make ECs for mergejoin-able clauses, so if we\njust find a type that's not mergejoin-able but hashable, we can\nduplicate the quals with a hash partitioned table\n\n-- find a suitable non-mergejoin-able type.\nselect oprleft::regtype from pg_operator where oprcanmerge=false and\noprcanhash=true;\n oprleft\n---------\n xid\n cid\n aclitem\n\ncreate table hash_xid(a xid, b xid, c xid) partition by hash(a,b,c);\ncreate table hash_xid1 partition of hash_xid for values with (modulus\n2, remainder 0);\ncreate table hash_xid2 partition of hash_xid for values with (modulus\n2, remainder 1);\n\nI tried out various combinations of the following query. Each\nequality clause is duplicated 6 times. When I enable all 6 for each\nof the 3 columns, I see 216 pruning steps. That's 6*6*6, just what I\nexpected.\n\nThe IS NULL quals are not duplicated since we can only set a bit once\nin the nullkeys.\n\nexplain select * from hash_xid where\na = '123'::xid and a = '123'::xid and a = '123'::xid and a =\n'123'::xid and a = '123'::xid and a = '123'::xid and\n--a is null and a is null and a is null and a is null and a is null\nand a is null and\nb = '123'::xid and b = '123'::xid and b = '123'::xid and b =\n'123'::xid and b = '123'::xid and b = '123'::xid and\n--b is null and b is null and b is null and b is null and b is null\nand b is null and\nc = '123'::xid and c = '123'::xid and c = '123'::xid and c =\n'123'::xid and c = '123'::xid and c = '123'::xid;\n--c is null and c is null and c is null and c is null and c is null\nand c is null;\n\nputting a breakpoint at the final line of\ngen_prune_steps_from_opexps() yields 216 steps.\n\nI didn't include anything of the above as part of the additional\ntests. Perhaps something like that is worthwhile in a reduced form.\nHowever, someone might make xid mergejoinable some time, which would\nbreak the test.\n\nThanks for reviewing the previous version of this patch.\n\nOnto the other run-time one now...\n\nDavid\n\n\n", "msg_date": "Thu, 12 Oct 2023 20:05:32 +1300", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Problem,\n partition pruning for prepared statement with IS NULL clause." }, { "msg_contents": "On Mon, 9 Oct 2023 at 12:26, David Rowley <[email protected]> wrote:\n>\n> On Sat, 7 Oct 2023 at 03:11, Sergei Glukhov <[email protected]> wrote:\n> > I noticed that combination of prepared statement with generic plan and\n> > 'IS NULL' clause could lead partition pruning to crash.\n>\n> > Test case:\n> > ------\n> > set plan_cache_mode to force_generic_plan;\n> > prepare stmt AS select * from hp where a is null and b = $1;\n> > explain execute stmt('xxx');\n>\n> Thanks for the detailed report and proposed patch.\n>\n> I think your proposed fix isn't quite correct. I think the problem\n> lies in InitPartitionPruneContext() where we assume that the list\n> positions of step->exprs are in sync with the keyno. If you look at\n> perform_pruning_base_step() the code there makes a special effort to\n> skip over any keyno when a bit is set in opstep->nullkeys.\n\nI've now also pushed the fix for the incorrect logic for nullkeys in\nExecInitPruningContext().\n\nI didn't quite find a test to make this work for v11. I tried calling\nexecute 5 times as we used to have to before the plan_cache_mode GUC\nwas added in v12, but the test case kept picking the custom plan. So I\nended up pushing v11 without any test. This goes out of support in ~1\nmonth, so I'm not too concerned about the lack of test. I did do a\nmanual test to ensure it works with:\n\ncreate table hp (a int, b text, c int) partition by hash (a, b);\ncreate table hp0 partition of hp for values with (modulus 4, remainder 0);\ncreate table hp3 partition of hp for values with (modulus 4, remainder 3);\ncreate table hp1 partition of hp for values with (modulus 4, remainder 1);\ncreate table hp2 partition of hp for values with (modulus 4, remainder 2);\n\nprepare hp_q1 (text) as select * from hp where a is null and b = $1;\n\n(set breakpoint in choose_custom_plan() and have it return false when\nwe hit it.)\n\nexplain (costs off) execute hp_q1('xxx');\n\nDavid\n\n\n", "msg_date": "Fri, 13 Oct 2023 01:27:57 +1300", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Problem,\n partition pruning for prepared statement with IS NULL clause." }, { "msg_contents": "\n\nOn 10/12/23 16:27, David Rowley wrote:\n>\n> I've now also pushed the fix for the incorrect logic for nullkeys in\n> ExecInitPruningContext().\n>\n\nThanks!\n\nRegards,\nGluh\n\n\n\n", "msg_date": "Thu, 12 Oct 2023 17:47:03 +0400", "msg_from": "Sergei Glukhov <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Problem, partition pruning for prepared statement with IS NULL\n clause." } ]
[ { "msg_contents": "Hello hackers,\n\nFirst timer here with a question:\n\nI’ve searched through various e-mail threads and documents and could not find if pushdown of LIMIT clauses to FDW is implemented.\nSeen some conversation about that a couple of years ago when that was treated as a feature request but nothing since then.\nCitus does that according to its documentation but for now we are trying to base our solution on “plain” Postgres.\n\nIf it is not implemented - could you, guys, provide me with some pointers to source code where I could look at and try to implement that?\n\n\nThanks,\nMichal\n\n", "msg_date": "Fri, 6 Oct 2023 18:02:58 +0200", "msg_from": "=?utf-8?Q?Micha=C5=82_K=C5=82eczek?= <[email protected]>", "msg_from_op": true, "msg_subject": "FDW LIM IT pushdown" }, { "msg_contents": "Sorry, I wasn’t precise - my question is about foreign partitions - LIMIT on ordinary tables is supported.\n\nThanks,\nMichal\n\n> On 6 Oct 2023, at 18:02, Michał Kłeczek <[email protected]> wrote:\n> \n> Hello hackers,\n> \n> First timer here with a question:\n> \n> I’ve searched through various e-mail threads and documents and could not find if pushdown of LIMIT clauses to FDW is implemented.\n> Seen some conversation about that a couple of years ago when that was treated as a feature request but nothing since then.\n> Citus does that according to its documentation but for now we are trying to base our solution on “plain” Postgres.\n> \n> If it is not implemented - could you, guys, provide me with some pointers to source code where I could look at and try to implement that?\n> \n> \n> Thanks,\n> Michal\n\n\n\n", "msg_date": "Fri, 6 Oct 2023 19:55:08 +0200", "msg_from": "=?utf-8?Q?Micha=C5=82_K=C5=82eczek?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: FDW LIM IT pushdown" }, { "msg_contents": "Hi Michal,\n\n\nOn Fri, Oct 6, 2023 at 9:34 PM Michał Kłeczek <[email protected]> wrote:\n>\n> Hello hackers,\n>\n> First timer here with a question:\n>\n> I’ve searched through various e-mail threads and documents and could not find if pushdown of LIMIT clauses to FDW is implemented.\n> Seen some conversation about that a couple of years ago when that was treated as a feature request but nothing since then.\n> Citus does that according to its documentation but for now we are trying to base our solution on “plain” Postgres.\n>\n> If it is not implemented - could you, guys, provide me with some pointers to source code where I could look at and try to implement that?\n>\n\nI started looking at code from grouping_planner and reached\ncreate_ordinary_grouping_paths(). It calls\ncreate_partitionwise_grouping_paths() to push aggregate and grouping\ndown into partitions and from there it is pushed down into FDW.\nLooking at create_limit_path(), I don't similar treatment. So there\nmay be some cases where we are not pushing down final LIMIT.\n\nHowever create_append_path() uses PlannerInfo::limit_tuples or\nroot->limit_tuples when creating append path node. So either it's\nbeing used for costing or for pushing it down to the partitions.\n\nThis isn't a full answer, but I hope these pointers would help you.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Mon, 9 Oct 2023 15:02:15 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: FDW LIM IT pushdown" } ]
[ { "msg_contents": "Hello hackers,\n\nHere is an experimental POC of fast/cheap database cloning. For\nclones from little template databases, no one cares much, but it might\nbe useful to be able to create a snapshot or fork of very large\ndatabase for testing/experimentation like this:\n\n create database foodb_snapshot20231007 template=foodb strategy=file_clone\n\nIt should be a lot faster, and use less physical disk, than the two\nexisting strategies on recent-ish XFS, BTRFS, very recent OpenZFS,\nAPFS (= macOS), and it could in theory be extended to other systems\nthat invented different system calls for this with more work (Solaris,\nWindows). Then extra physical disk space will be consumed only as the\ntwo clones diverge.\n\nIt's just like the old strategy=file_copy, except it asks the OS to do\nits best copying trick. If you try it on a system that doesn't\nsupport copy-on-write, then copy_file_range() should fall back to\nplain old copy, but it might still be better than we could do, as it\ncan push copy commands to network storage or physical storage.\n\nTherefore, the usual caveats from strategy=file_copy also apply here.\nNamely that it has to perform checkpoints which could be very\nexpensive, and there are some quirks/brokenness about concurrent\nbackups and PITR. Which makes me wonder if it's worth pursuing this\nidea. Thoughts?\n\nI tested on bleeding edge FreeBSD/ZFS, where you need to set sysctl\nvfs.zfs.bclone_enabled=1 to enable the optimisation, as it's still a\nvery new feature that is still being rolled out. The system call\nsucceeds either way, but that controls whether the new database\ninitially shares blocks on disk, or get new copies. I also tested on\na Mac. In both cases I could clone large databases in a fraction of a\nsecond.", "msg_date": "Sat, 7 Oct 2023 18:51:45 +1300", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": true, "msg_subject": "CREATE DATABASE with filesystem cloning" }, { "msg_contents": "One thing I forgot to mention about this experiment: when used in a\nbackend I think this probably needs a chunk size (what size?) and a\nCFI(). Which is a bit annoying, because for best results we want\nSSIZE_MAX, but in the case that copy_file_range() falls back to raw\ncopy, that'd do I/O work bounded only by segment size :-/ (That's not\na problem that comes up in the similar patch for pg_upgrade.)\n\n\n", "msg_date": "Sun, 8 Oct 2023 11:42:20 +1300", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": true, "msg_subject": "Re: CREATE DATABASE with filesystem cloning" }, { "msg_contents": "\nOn 2023-10-07 Sa 01:51, Thomas Munro wrote:\n> Hello hackers,\n>\n> Here is an experimental POC of fast/cheap database cloning. For\n> clones from little template databases, no one cares much, but it might\n> be useful to be able to create a snapshot or fork of very large\n> database for testing/experimentation like this:\n>\n> create database foodb_snapshot20231007 template=foodb strategy=file_clone\n>\n> It should be a lot faster, and use less physical disk, than the two\n> existing strategies on recent-ish XFS, BTRFS, very recent OpenZFS,\n> APFS (= macOS), and it could in theory be extended to other systems\n> that invented different system calls for this with more work (Solaris,\n> Windows). Then extra physical disk space will be consumed only as the\n> two clones diverge.\n>\n> It's just like the old strategy=file_copy, except it asks the OS to do\n> its best copying trick. If you try it on a system that doesn't\n> support copy-on-write, then copy_file_range() should fall back to\n> plain old copy, but it might still be better than we could do, as it\n> can push copy commands to network storage or physical storage.\n>\n> Therefore, the usual caveats from strategy=file_copy also apply here.\n> Namely that it has to perform checkpoints which could be very\n> expensive, and there are some quirks/brokenness about concurrent\n> backups and PITR. Which makes me wonder if it's worth pursuing this\n> idea. Thoughts?\n>\n> I tested on bleeding edge FreeBSD/ZFS, where you need to set sysctl\n> vfs.zfs.bclone_enabled=1 to enable the optimisation, as it's still a\n> very new feature that is still being rolled out. The system call\n> succeeds either way, but that controls whether the new database\n> initially shares blocks on disk, or get new copies. I also tested on\n> a Mac. In both cases I could clone large databases in a fraction of a\n> second.\n\n\nI've had to disable COW on my BTRFS-resident buildfarm animals (see \nprevious discussion re Direct I/O).\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Sun, 8 Oct 2023 09:20:32 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CREATE DATABASE with filesystem cloning" }, { "msg_contents": "On Mon, Oct 9, 2023 at 2:20 AM Andrew Dunstan <[email protected]> wrote:\n> I've had to disable COW on my BTRFS-resident buildfarm animals (see\n> previous discussion re Direct I/O).\n\nRight, because it is still buggy[1]. I don't see any sign that a fix\nhas been committed yet, assuming that is the right thing (and it sure\nsounds like it). It means you still have to disable COW to run the\n004_io_direct.pl test for now, but that's an independent thing due\nhopefully to be fixed soon, and you can still run PostgreSQL just fine\nwith COW enabled as it is by default as long as you don't turn on\ndebug_io_direct (which isn't for users yet anyway).\n\nSince I hadn't actually tried out this cloning stuff out on\nLinux/btrfs before and was claiming that it should work, I took it for\na quick unscientific spin (literally, this is on a spinning SATA disk\nfor extra crunchy slowness...). I created a scale 500 pgbench\ndatabase, saw that du -h showed 7.4G, and got these times:\n\npostgres=# create database foodb_copy template=foodb strategy=file_copy;\nCREATE DATABASE\nTime: 124019.885 ms (02:04.020)\npostgres=# create database foodb_clone template=foodb strategy=file_clone;\nCREATE DATABASE\nTime: 8618.195 ms (00:08.618)\n\nThat's something, but not as good as I was expecting, so let's also\ntry Linux/XFS for reference on the same spinny rust... One thing I\nlearned is that if you have an existing XFS partition, it might have\nbeen created without reflinks enabled (see output of xfs_info) as that\nwas the default not very long ago and it's not changeable later, so on\nthe box I'm writing from I had to do a fresh mkfs.xfs to see any\nbenefit from this.\n\npostgres=# create database foodb_copy template=foodb strategy=file_copy;\nCREATE DATABASE\nTime: 49157.876 ms (00:49.158)\npostgres=# create database foodb_clone template=foodb strategy=file_clone;\nCREATE DATABASE\nTime: 1026.455 ms (00:01.026)\n\nNot bad. To understand what that did, we can check which physical\nblocks on disk hold the first segment of the pgbench_accounts table in\nfoodb and foodb_clone:\n\n$ sudo xfs_bmap /mnt/xfs/pgdata/base/16384/16400\n/mnt/xfs/pgdata/base/16384/16400:\n 0: [0..1637439]: 977586048..979223487\n 1: [1637440..2097151]: 1464966136..1465425847\n$ sudo xfs_bmap /mnt/xfs/pgdata/base/16419/16400\n/mnt/xfs/pgdata/base/16419/16400:\n 0: [0..1637439]: 977586048..979223487\n 1: [1637440..2097151]: 1464966136..1465425847\n\nThe same blocks. Now let's update a tuple on the second page of\npgbench_accounts in the clone:\n\nfoodb=# update pgbench_accounts set bid = bid + 1 where ctid = '(1, 1)';\nUPDATE 1\nfoodb=# checkpoint;\nCHECKPOINT\n\nNow some new physical disk blocks have been allocated just for that\npage, but the rest are still clones:\n\n$ sudo xfs_bmap /mnt/xfs/pgdata/base/16419/16400\n/mnt/xfs/pgdata/base/16419/16400:\n 0: [0..15]: 977586048..977586063\n 1: [16..31]: 977586064..977586079\n 2: [32..1637439]: 977586080..979223487\n 3: [1637440..2097151]: 1464966136..1465425847\n\nI tried changing it to work in 1MB chunks and add the CFI() (v2\nattached), and it didn't affect the time measurably and also didn't\ngenerate any extra extents as displayed by xfs_bmap, so the end result\nis the same. I haven't looked into the chunked version on the other\nfile systems yet.\n\nI don't have the numbers to hand (different machines far from me right\nnow) but FreeBSD/ZFS and macOS/APFS were on the order of a few hundred\nmilliseconds for the same scale of pgbench on laptop storage (so not\ncomparable with the above).\n\nI also tried a -s 5000 database, and saw that XFS could clone a 74GB\ndatabase just as fast as the 7.4GB database (still ~1s). At a guess,\nthis is going to scale not so much by total data size, but more by\nthings like number of relations, segment size and internal (invisible)\nfragmentation due to previous cloning/update history in\nfilesystem-dependent ways, since those are the things that generate\nextents (contiguous ranges of physical blocks to be referenced by the\nnew file).\n\n[1] https://lore.kernel.org/linux-btrfs/ae81e48b0e954bae1c3451c0da1a24ae7146606c.1676684984.git.boris@bur.io/T/#u", "msg_date": "Mon, 9 Oct 2023 12:19:30 +1300", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": true, "msg_subject": "Re: CREATE DATABASE with filesystem cloning" }, { "msg_contents": "Hi,\n\nOn 2023-10-07 18:51:45 +1300, Thomas Munro wrote:\n> It should be a lot faster, and use less physical disk, than the two\n> existing strategies on recent-ish XFS, BTRFS, very recent OpenZFS,\n> APFS (= macOS), and it could in theory be extended to other systems\n> that invented different system calls for this with more work (Solaris,\n> Windows). Then extra physical disk space will be consumed only as the\n> two clones diverge.\n\n> It's just like the old strategy=file_copy, except it asks the OS to do\n> its best copying trick. If you try it on a system that doesn't\n> support copy-on-write, then copy_file_range() should fall back to\n> plain old copy, but it might still be better than we could do, as it\n> can push copy commands to network storage or physical storage.\n> \n> Therefore, the usual caveats from strategy=file_copy also apply here.\n> Namely that it has to perform checkpoints which could be very\n> expensive, and there are some quirks/brokenness about concurrent\n> backups and PITR. Which makes me wonder if it's worth pursuing this\n> idea. Thoughts?\n\nI think it'd be interesting to have. For the regression tests we do end up\nspending a lot of disk throughput on contents duplicated between\ntemplate0/template1/postgres. And I've plenty of time spent time copying huge\ntemplate databases, to have a reproducible starting point for some benchmark\nthat's expensive to initialize.\n\nIf we do this, I think we should consider creating template0, template1 with\nthe new strategy, so that a new initdb cluster ends up with deduplicated data.\n\n\nFWIW, I experimented with using cp -c on macos for the initdb template, and\nthat provided some further gain. I suspect that that gain would increase if\ntemplate0/template1/postgres were deduplicated.\n\n\n> diff --git a/src/backend/storage/file/copydir.c b/src/backend/storage/file/copydir.c\n> index e04bc3941a..8c963ff548 100644\n> --- a/src/backend/storage/file/copydir.c\n> +++ b/src/backend/storage/file/copydir.c\n> @@ -19,14 +19,21 @@\n> #include \"postgres.h\"\n> \n> #include <fcntl.h>\n> +#include <limits.h>\n> #include <unistd.h>\n> \n> +#ifdef HAVE_COPYFILE_H\n> +#include <copyfile.h>\n> +#endif\n\nWe already have code around this in src/bin/pg_upgrade/file.c, seems we ought\nto move it somewhere in src/port?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 9 Oct 2023 16:48:27 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CREATE DATABASE with filesystem cloning" }, { "msg_contents": "On 07.10.23 07:51, Thomas Munro wrote:\n> Here is an experimental POC of fast/cheap database cloning.\n\nHere are some previous discussions of this:\n\nhttps://www.postgresql.org/message-id/flat/20131001223108.GG23410%40saarenmaa.fi\n\nhttps://www.postgresql.org/message-id/flat/511B5D11.4040507%40socialserve.com\n\nhttps://www.postgresql.org/message-id/flat/bc9ca382-b98d-0446-f699-8c5de2307ca7%402ndquadrant.com\n\n(I don't see any clear conclusions in any of these threads, but it might \nbe good to check them in any case.)\n\n\n\n", "msg_date": "Wed, 11 Oct 2023 08:40:19 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CREATE DATABASE with filesystem cloning" }, { "msg_contents": "On Mon, Oct 9, 2023 at 7:49 PM Andres Freund <[email protected]> wrote:\n> If we do this, I think we should consider creating template0, template1 with\n> the new strategy, so that a new initdb cluster ends up with deduplicated data.\n\nSeems a little questionable given the reports earlier in the thread\nabout some filesystems still having bugs in this functionality.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 11 Oct 2023 08:35:46 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CREATE DATABASE with filesystem cloning" }, { "msg_contents": "On Sat, Oct 7, 2023, at 1:51 AM, Thomas Munro wrote:\n\n> I tested on bleeding edge FreeBSD/ZFS, where you need to set sysctl\n> vfs.zfs.bclone_enabled=1 to enable the optimisation, as it's still a\n> very new feature that is still being rolled out. The system call\n> succeeds either way, but that controls whether the new database\n> initially shares blocks on disk, or get new copies. I also tested on\n> a Mac. In both cases I could clone large databases in a fraction of a\n> second.\n\nWOOT! Looking forward to this. It would greatly improve times needed\nwhen deploying test environments.\n\nThank you.\n\n-- \n Dan Langille\n [email protected]\n\n\n", "msg_date": "Mon, 16 Oct 2023 22:35:02 -0400", "msg_from": "\"Dan Langille\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CREATE DATABASE with filesystem cloning" }, { "msg_contents": "On Wed, Oct 11, 2023 at 7:40 PM Peter Eisentraut <[email protected]> wrote:\n> On 07.10.23 07:51, Thomas Munro wrote:\n> > Here is an experimental POC of fast/cheap database cloning.\n>\n> Here are some previous discussions of this:\n>\n> https://www.postgresql.org/message-id/flat/20131001223108.GG23410%40saarenmaa.fi\n>\n> https://www.postgresql.org/message-id/flat/511B5D11.4040507%40socialserve.com\n>\n> https://www.postgresql.org/message-id/flat/bc9ca382-b98d-0446-f699-8c5de2307ca7%402ndquadrant.com\n>\n> (I don't see any clear conclusions in any of these threads, but it might\n> be good to check them in any case.)\n\nThanks. Wow, quite a lot of people have written an experimental patch\nlike this. I would say the things that changed since those ones are:\n\n* copy_file_range() became the preferred way to do this on Linux AFAIK\n(instead of various raw ioctls)\n* FreeBSD adopted Linux's copy_file_range()\n* Open ZFS 2.2 implemented range-based cloning\n* XFS enabled reflink support by default\n* Apple invented ApFS with cloning\n* Several OSes adopted XFS, BTRFS, ZFS, ApFS by default\n* copy_file_range() went in the direction of not revealing how the\ncopying is done (no flags to force behaviour)\n\nHere's a rebase.\n\nThe main thing that is missing is support for redo. It's mostly\ntrivial I think, probably just a record type for \"try cloning first\"\nand then teaching that clone function to fall back to the regular copy\npath if it fails in recovery, do you agree with that idea? Another\napproach would be to let it fail if it doesn't work on the replica, so\nyou don't finish up using dramatically different amounts of disk\nspace, but that seems terrible because now your replica is broken. So\nprobably fallback with logged warning (?), though I'm not sure exactly\nwhich errnos to give that treatment to.\n\nOne thing to highlight about COW file system semantics: PostgreSQL\nbehaves differently when space runs out. When writing relation data,\neg ZFS sometimes fails like bullet point 2 in this ENOSPC article[1],\nwhile XFS usually fails like bullet point 1. A database on XFS that\nhas been cloned in this way might presumably start to fail like bullet\npoint 2, eg when checkpointing dirty pages, instead of its usual\nextension-time-only ENOSPC-rolls-back-your-transaction behaviour.\n\n[1] https://wiki.postgresql.org/wiki/ENOSPC", "msg_date": "Wed, 6 Mar 2024 15:16:38 +1300", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": true, "msg_subject": "Re: CREATE DATABASE with filesystem cloning" }, { "msg_contents": "On Wed, Mar 6, 2024 at 3:16 PM Thomas Munro <[email protected]> wrote:\n> Here's a rebase.\n\nNow with a wait event and a paragraph of documentation.", "msg_date": "Wed, 6 Mar 2024 16:00:03 +1300", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": true, "msg_subject": "Re: CREATE DATABASE with filesystem cloning" }, { "msg_contents": "Hi,\n\nOn Wed, 6 Mar 2024 at 05:17, Thomas Munro <[email protected]> wrote:\n>\n> The main thing that is missing is support for redo. It's mostly\n> trivial I think, probably just a record type for \"try cloning first\"\n> and then teaching that clone function to fall back to the regular copy\n> path if it fails in recovery, do you agree with that idea? Another\n> approach would be to let it fail if it doesn't work on the replica, so\n> you don't finish up using dramatically different amounts of disk\n> space, but that seems terrible because now your replica is broken. So\n> probably fallback with logged warning (?), though I'm not sure exactly\n> which errnos to give that treatment to.\n\nWe had an off-list talk with Thomas and we thought making this option\nGUC instead of SQL command level could solve this problem.\n\nI am posting a new rebased version of the patch with some important changes:\n\n* 'createdb_file_copy_method' GUC is created. Possible values are\n'copy' and 'clone'. Copy is the default option. Clone option can be\nchosen if the system supports it, otherwise it gives error at the\nstartup.\n\n* #else part of the clone_file() function calls pg_unreachable()\ninstead of ereport().\n\n* Documentation updates.\n\nAlso, what should happen when the kernel has clone support but the\nfile system does not?\n\n- I tested this on Linux and copy_file_range() silently uses normal\ncopy when this happens. I assume the same thing will happen for\nFreeBSD because it uses the copy_file_range() function as well.\n\n- I am not sure about MacOS since the force flag\n(COPYFILE_CLONE_FORCE) is used. I do not have MacOS so I can not test\nit but I assume it will error out when this happens. If that is the\ncase, is this a problem? I am asking that since this is a GUC now, the\nuser will have the full responsibility.\n\nAny kind of feedback would be appreciated.\n\n-- \nRegards,\nNazir Bilal Yavuz\nMicrosoft", "msg_date": "Tue, 7 May 2024 15:00:11 +0300", "msg_from": "Nazir Bilal Yavuz <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CREATE DATABASE with filesystem cloning" }, { "msg_contents": "Hi Ranier,\n\nThanks for looking into this!\n\nI am not sure why but your reply does not show up in the thread, so I\ncopied your reply and answered it in the thread for visibility.\n\nOn Tue, 7 May 2024 at 16:28, Ranier Vilela <[email protected]> wrote:\n>\n> I know it's coming from copy-and-paste, but\n> I believe the flags could be:\n> - dstfd = OpenTransientFile(tofile, O_RDWR | O_CREAT | O_EXCL | PG_BINARY);\n> + dstfd = OpenTransientFile(tofile, O_CREAT | O_WRONLY | O_TRUNC | O_EXCL | PG_BINARY);\n>\n> The flags:\n> O_WRONLY | O_TRUNC\n>\n> Allow the OS to make some optimizations, if you deem it possible.\n>\n> The flag O_RDWR indicates that the file can be read, which is not true in this case.\n> The destination file will just be written.\n\nYou may be right about the O_WRONLY flag but I am not sure about the\nO_TRUNC flag.\n\nO_TRUNC from the linux man page [1]: If the file already exists and is\na regular file and the access mode allows writing (i.e., is O_RDWR or\nO_WRONLY) it will be truncated to length 0. If the file is a FIFO or\nterminal device file, the O_TRUNC flag is ignored. Otherwise, the\neffect of O_TRUNC is unspecified.\n\nBut it should be already checked if the destination directory already\nexists in dbcommands.c file in createdb() function [2], so this should\nnot happen.\n\n[1] https://man7.org/linux/man-pages/man2/open.2.html\n\n[2]\n /*\n * If database OID is configured, check if the OID is already in use or\n * data directory already exists.\n */\n if (OidIsValid(dboid))\n {\n char *existing_dbname = get_database_name(dboid);\n\n if (existing_dbname != NULL)\n ereport(ERROR,\n (errcode(ERRCODE_INVALID_PARAMETER_VALUE)),\n errmsg(\"database OID %u is already in use by\ndatabase \\\"%s\\\"\",\n dboid, existing_dbname));\n\n if (check_db_file_conflict(dboid))\n ereport(ERROR,\n (errcode(ERRCODE_INVALID_PARAMETER_VALUE)),\n errmsg(\"data directory with the specified OID %u\nalready exists\", dboid));\n }\n else\n {\n /* Select an OID for the new database if is not explicitly\nconfigured. */\n do\n {\n dboid = GetNewOidWithIndex(pg_database_rel, DatabaseOidIndexId,\n Anum_pg_database_oid);\n } while (check_db_file_conflict(dboid));\n }\n\n-- \nRegards,\nNazir Bilal Yavuz\nMicrosoft\n\n\n", "msg_date": "Wed, 8 May 2024 10:37:20 +0300", "msg_from": "Nazir Bilal Yavuz <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CREATE DATABASE with filesystem cloning" }, { "msg_contents": "Em qua., 8 de mai. de 2024 às 04:37, Nazir Bilal Yavuz <[email protected]>\nescreveu:\n\n> Hi Ranier,\n>\n> Thanks for looking into this!\n>\n> I am not sure why but your reply does not show up in the thread, so I\n> copied your reply and answered it in the thread for visibility.\n>\n> On Tue, 7 May 2024 at 16:28, Ranier Vilela <[email protected]> wrote:\n> >\n> > I know it's coming from copy-and-paste, but\n> > I believe the flags could be:\n> > - dstfd = OpenTransientFile(tofile, O_RDWR | O_CREAT | O_EXCL |\n> PG_BINARY);\n> > + dstfd = OpenTransientFile(tofile, O_CREAT | O_WRONLY | O_TRUNC |\n> O_EXCL | PG_BINARY);\n> >\n> > The flags:\n> > O_WRONLY | O_TRUNC\n> >\n> > Allow the OS to make some optimizations, if you deem it possible.\n> >\n> > The flag O_RDWR indicates that the file can be read, which is not true\n> in this case.\n> > The destination file will just be written.\n>\n> You may be right about the O_WRONLY flag but I am not sure about the\n> O_TRUNC flag.\n>\n> O_TRUNC from the linux man page [1]: If the file already exists and is\n> a regular file and the access mode allows writing (i.e., is O_RDWR or\n> O_WRONLY) it will be truncated to length 0. If the file is a FIFO or\n> terminal device file, the O_TRUNC flag is ignored. Otherwise, the\n> effect of O_TRUNC is unspecified.\n>\nO_TRUNC is usually used in conjunction with O_WRONLY.\nSee the example at:\nopen.html\n<https://pubs.opengroup.org/onlinepubs/009696699/functions/open.html>\n\nO_TRUNC signals the OS to forget the current contents of the file, if it\nhappens to exist.\nIn other words, there will be no seeks, only and exclusively writes.\n\n\n> But it should be already checked if the destination directory already\n> exists in dbcommands.c file in createdb() function [2], so this should\n> not happen.\n>\nI'm not sure what you're referring to here.\n\nbest regards,\nRanier Vilela\n\nEm qua., 8 de mai. de 2024 às 04:37, Nazir Bilal Yavuz <[email protected]> escreveu:Hi Ranier,\n\nThanks for looking into this!\n\nI am not sure why but your reply does not show up in the thread, so I\ncopied your reply and answered it in the thread for visibility.\n\nOn Tue, 7 May 2024 at 16:28, Ranier Vilela <[email protected]> wrote:\n>\n> I know it's coming from copy-and-paste, but\n> I believe the flags could be:\n> - dstfd = OpenTransientFile(tofile, O_RDWR | O_CREAT | O_EXCL | PG_BINARY);\n> + dstfd = OpenTransientFile(tofile, O_CREAT | O_WRONLY | O_TRUNC | O_EXCL | PG_BINARY);\n>\n> The flags:\n> O_WRONLY | O_TRUNC\n>\n> Allow the OS to make some optimizations, if you deem it possible.\n>\n> The flag O_RDWR indicates that the file can be read, which is not true in this case.\n> The destination file will just be written.\n\nYou may be right about the O_WRONLY flag but I am not sure about the\nO_TRUNC flag.\n\nO_TRUNC from the linux man page [1]: If the file already exists and is\na regular file and the access mode allows writing (i.e., is O_RDWR or\nO_WRONLY) it will be truncated to length 0.  If the file is a FIFO  or\nterminal device file, the O_TRUNC flag is ignored. Otherwise, the\neffect of O_TRUNC is unspecified.O_TRUNC is usually used in conjunction with \n\nO_WRONLY.See the example at:open.htmlO_TRUNC signals the OS to forget the current contents of the file, if it happens to exist.In other words, there will be no seeks, only and exclusively writes.\n\nBut it should be already checked if the destination directory already\nexists in dbcommands.c file in createdb() function [2], so this should\nnot happen.I'm not sure what you're referring to here.best regards,Ranier Vilela", "msg_date": "Wed, 8 May 2024 08:16:18 -0300", "msg_from": "Ranier Vilela <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CREATE DATABASE with filesystem cloning" }, { "msg_contents": "Hi,\n\nOn Wed, 8 May 2024 at 14:16, Ranier Vilela <[email protected]> wrote:\n>\n>\n> Em qua., 8 de mai. de 2024 às 04:37, Nazir Bilal Yavuz <[email protected]> escreveu:\n>>\n>> Hi Ranier,\n>>\n>> Thanks for looking into this!\n>>\n>> I am not sure why but your reply does not show up in the thread, so I\n>> copied your reply and answered it in the thread for visibility.\n>>\n>> On Tue, 7 May 2024 at 16:28, Ranier Vilela <[email protected]> wrote:\n>> >\n>> > I know it's coming from copy-and-paste, but\n>> > I believe the flags could be:\n>> > - dstfd = OpenTransientFile(tofile, O_RDWR | O_CREAT | O_EXCL | PG_BINARY);\n>> > + dstfd = OpenTransientFile(tofile, O_CREAT | O_WRONLY | O_TRUNC | O_EXCL | PG_BINARY);\n>> >\n>> > The flags:\n>> > O_WRONLY | O_TRUNC\n>> >\n>> > Allow the OS to make some optimizations, if you deem it possible.\n>> >\n>> > The flag O_RDWR indicates that the file can be read, which is not true in this case.\n>> > The destination file will just be written.\n>>\n>> You may be right about the O_WRONLY flag but I am not sure about the\n>> O_TRUNC flag.\n>>\n>> O_TRUNC from the linux man page [1]: If the file already exists and is\n>> a regular file and the access mode allows writing (i.e., is O_RDWR or\n>> O_WRONLY) it will be truncated to length 0. If the file is a FIFO or\n>> terminal device file, the O_TRUNC flag is ignored. Otherwise, the\n>> effect of O_TRUNC is unspecified.\n>\n> O_TRUNC is usually used in conjunction with O_WRONLY.\n> See the example at:\n> open.html\n>\n> O_TRUNC signals the OS to forget the current contents of the file, if it happens to exist.\n> In other words, there will be no seeks, only and exclusively writes.\n\nYou are right; the O_TRUNC flag truncates the file, if it happens to\nexist. But it should not exist in the first place because the code at\n[2] should check this before the OpenTransientFile() call. Even if we\nassume somehow the check at [2] does not work, I do not think the\ncorrect operation is to truncate the contents of the existing file.\n\n>>\n>> But it should be already checked if the destination directory already\n>> exists in dbcommands.c file in createdb() function [2], so this should\n>> not happen.\n>\n> I'm not sure what you're referring to here.\n\nSorry, I meant that the destination directory / file should not exist\nbecause the code at [2] confirms it does not exist, otherwise it\nerrors out.\n\n-- \nRegards,\nNazir Bilal Yavuz\nMicrosoft\n\n\n", "msg_date": "Wed, 8 May 2024 14:42:27 +0300", "msg_from": "Nazir Bilal Yavuz <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CREATE DATABASE with filesystem cloning" }, { "msg_contents": "Em qua., 8 de mai. de 2024 às 08:42, Nazir Bilal Yavuz <[email protected]>\nescreveu:\n\n> Hi,\n>\n> On Wed, 8 May 2024 at 14:16, Ranier Vilela <[email protected]> wrote:\n> >\n> >\n> > Em qua., 8 de mai. de 2024 às 04:37, Nazir Bilal Yavuz <\n> [email protected]> escreveu:\n> >>\n> >> Hi Ranier,\n> >>\n> >> Thanks for looking into this!\n> >>\n> >> I am not sure why but your reply does not show up in the thread, so I\n> >> copied your reply and answered it in the thread for visibility.\n> >>\n> >> On Tue, 7 May 2024 at 16:28, Ranier Vilela <[email protected]> wrote:\n> >> >\n> >> > I know it's coming from copy-and-paste, but\n> >> > I believe the flags could be:\n> >> > - dstfd = OpenTransientFile(tofile, O_RDWR | O_CREAT | O_EXCL |\n> PG_BINARY);\n> >> > + dstfd = OpenTransientFile(tofile, O_CREAT | O_WRONLY | O_TRUNC |\n> O_EXCL | PG_BINARY);\n> >> >\n> >> > The flags:\n> >> > O_WRONLY | O_TRUNC\n> >> >\n> >> > Allow the OS to make some optimizations, if you deem it possible.\n> >> >\n> >> > The flag O_RDWR indicates that the file can be read, which is not\n> true in this case.\n> >> > The destination file will just be written.\n> >>\n> >> You may be right about the O_WRONLY flag but I am not sure about the\n> >> O_TRUNC flag.\n> >>\n> >> O_TRUNC from the linux man page [1]: If the file already exists and is\n> >> a regular file and the access mode allows writing (i.e., is O_RDWR or\n> >> O_WRONLY) it will be truncated to length 0. If the file is a FIFO or\n> >> terminal device file, the O_TRUNC flag is ignored. Otherwise, the\n> >> effect of O_TRUNC is unspecified.\n> >\n> > O_TRUNC is usually used in conjunction with O_WRONLY.\n> > See the example at:\n> > open.html\n> >\n> > O_TRUNC signals the OS to forget the current contents of the file, if it\n> happens to exist.\n> > In other words, there will be no seeks, only and exclusively writes.\n>\n> You are right; the O_TRUNC flag truncates the file, if it happens to\n> exist. But it should not exist in the first place because the code at\n> [2] should check this before the OpenTransientFile() call. Even if we\n> assume somehow the check at [2] does not work, I do not think the\n> correct operation is to truncate the contents of the existing file.\n>\nI don't know if there is a communication problem here.\nBut what I'm trying to suggest refers to the destination file,\nwhich doesn't matter if it exists or not?\n\nIf the destination file does not exist, O_TRUNC is ignored.\nIf the destination file exists, O_TRUNC truncates the current contents of\nthe file.\nI don't see why you think it's a problem to truncate the current content if\nthe destination file exists.\nIsn't he going to be replaced anyway?\n\nUnless we want to preserve the current content (destination file), in case\nthe copy/clone fails?\n\nbest regards,\nRanier Vilela\n\nEm qua., 8 de mai. de 2024 às 08:42, Nazir Bilal Yavuz <[email protected]> escreveu:Hi,\n\nOn Wed, 8 May 2024 at 14:16, Ranier Vilela <[email protected]> wrote:\n>\n>\n> Em qua., 8 de mai. de 2024 às 04:37, Nazir Bilal Yavuz <[email protected]> escreveu:\n>>\n>> Hi Ranier,\n>>\n>> Thanks for looking into this!\n>>\n>> I am not sure why but your reply does not show up in the thread, so I\n>> copied your reply and answered it in the thread for visibility.\n>>\n>> On Tue, 7 May 2024 at 16:28, Ranier Vilela <[email protected]> wrote:\n>> >\n>> > I know it's coming from copy-and-paste, but\n>> > I believe the flags could be:\n>> > - dstfd = OpenTransientFile(tofile, O_RDWR | O_CREAT | O_EXCL | PG_BINARY);\n>> > + dstfd = OpenTransientFile(tofile, O_CREAT | O_WRONLY | O_TRUNC | O_EXCL | PG_BINARY);\n>> >\n>> > The flags:\n>> > O_WRONLY | O_TRUNC\n>> >\n>> > Allow the OS to make some optimizations, if you deem it possible.\n>> >\n>> > The flag O_RDWR indicates that the file can be read, which is not true in this case.\n>> > The destination file will just be written.\n>>\n>> You may be right about the O_WRONLY flag but I am not sure about the\n>> O_TRUNC flag.\n>>\n>> O_TRUNC from the linux man page [1]: If the file already exists and is\n>> a regular file and the access mode allows writing (i.e., is O_RDWR or\n>> O_WRONLY) it will be truncated to length 0.  If the file is a FIFO  or\n>> terminal device file, the O_TRUNC flag is ignored. Otherwise, the\n>> effect of O_TRUNC is unspecified.\n>\n> O_TRUNC is usually used in conjunction with O_WRONLY.\n> See the example at:\n> open.html\n>\n> O_TRUNC signals the OS to forget the current contents of the file, if it happens to exist.\n> In other words, there will be no seeks, only and exclusively writes.\n\nYou are right; the O_TRUNC flag truncates the file, if it happens to\nexist. But it should not exist in the first place because the code at\n[2] should check this before the OpenTransientFile() call. Even if we\nassume somehow the check at [2] does not work, I do not think the\ncorrect operation is to truncate the contents of the existing file.I don't know if there is a communication problem here.But what I'm trying to suggest refers to the destination file, which doesn't matter if it exists or not? If the destination file does not exist, O_TRUNC is ignored.If the destination file exists, O_TRUNC truncates the current contents of the file.I don't see why you think it's a problem to truncate the current content if the destination file exists.Isn't he going to be replaced anyway?Unless we want to preserve the current content (destination file), in case the copy/clone fails?best regards,Ranier Vilela", "msg_date": "Wed, 8 May 2024 09:23:00 -0300", "msg_from": "Ranier Vilela <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CREATE DATABASE with filesystem cloning" }, { "msg_contents": "Hi,\n\nOn Wed, 8 May 2024 at 15:23, Ranier Vilela <[email protected]> wrote:\n>\n> Em qua., 8 de mai. de 2024 às 08:42, Nazir Bilal Yavuz <[email protected]> escreveu:\n>>\n>> Hi,\n>>\n>> On Wed, 8 May 2024 at 14:16, Ranier Vilela <[email protected]> wrote:\n>> >\n>> >\n>> > Em qua., 8 de mai. de 2024 às 04:37, Nazir Bilal Yavuz <[email protected]> escreveu:\n>> >>\n>> >> Hi Ranier,\n>> >>\n>> >> Thanks for looking into this!\n>> >>\n>> >> I am not sure why but your reply does not show up in the thread, so I\n>> >> copied your reply and answered it in the thread for visibility.\n>> >>\n>> >> On Tue, 7 May 2024 at 16:28, Ranier Vilela <[email protected]> wrote:\n>> >> >\n>> >> > I know it's coming from copy-and-paste, but\n>> >> > I believe the flags could be:\n>> >> > - dstfd = OpenTransientFile(tofile, O_RDWR | O_CREAT | O_EXCL | PG_BINARY);\n>> >> > + dstfd = OpenTransientFile(tofile, O_CREAT | O_WRONLY | O_TRUNC | O_EXCL | PG_BINARY);\n>> >> >\n>> >> > The flags:\n>> >> > O_WRONLY | O_TRUNC\n>> >> >\n>> >> > Allow the OS to make some optimizations, if you deem it possible.\n>> >> >\n>> >> > The flag O_RDWR indicates that the file can be read, which is not true in this case.\n>> >> > The destination file will just be written.\n>> >>\n>> >> You may be right about the O_WRONLY flag but I am not sure about the\n>> >> O_TRUNC flag.\n>> >>\n>> >> O_TRUNC from the linux man page [1]: If the file already exists and is\n>> >> a regular file and the access mode allows writing (i.e., is O_RDWR or\n>> >> O_WRONLY) it will be truncated to length 0. If the file is a FIFO or\n>> >> terminal device file, the O_TRUNC flag is ignored. Otherwise, the\n>> >> effect of O_TRUNC is unspecified.\n>> >\n>> > O_TRUNC is usually used in conjunction with O_WRONLY.\n>> > See the example at:\n>> > open.html\n>> >\n>> > O_TRUNC signals the OS to forget the current contents of the file, if it happens to exist.\n>> > In other words, there will be no seeks, only and exclusively writes.\n>>\n>> You are right; the O_TRUNC flag truncates the file, if it happens to\n>> exist. But it should not exist in the first place because the code at\n>> [2] should check this before the OpenTransientFile() call. Even if we\n>> assume somehow the check at [2] does not work, I do not think the\n>> correct operation is to truncate the contents of the existing file.\n>\n> I don't know if there is a communication problem here.\n> But what I'm trying to suggest refers to the destination file,\n> which doesn't matter if it exists or not?\n\nI do not think there is a communication problem. Actually it matters\nbecause the destination file should not exist, there is a code [2]\nwhich already checks and confirms that it does not exist.\n\n>\n> If the destination file does not exist, O_TRUNC is ignored.\n> If the destination file exists, O_TRUNC truncates the current contents of the file.\n> I don't see why you think it's a problem to truncate the current content if the destination file exists.\n> Isn't he going to be replaced anyway?\n\n'If the destination file does not exist' means the code at [2] works\nwell and we do not need the O_TRUNC flag.\n\n'If the destination file already exists' means the code at [2] is\nbroken somehow and there is a high chance that we could truncate\nsomething that we do not want to. For example, there is a foo db and\nwe want to create bar db, Postgres chose the foo db's location as the\ndestination of the bar db (which should not happen but let's assume\nsomehow checks at [2] failed), then we could wrongly truncate the foo\ndb's contents.\n\nHence, if Postgres works successfully I think the O_TRUNC flag is\nunnecessary but if Postgres does not work successfully, the O_TRUNC\nflag could cause harm.\n\n>\n> Unless we want to preserve the current content (destination file), in case the copy/clone fails?\n\nLike I said above, Postgres should not choose the existing file as the\ndestination file.\n\nAlso, we have O_CREAT | O_EXCL flags together, from the link [3] you\nshared before: If O_CREAT and O_EXCL are set, open() shall fail if the\nfile exists. So, overwriting the already existing file is already\nprevented.\n\n[3] https://pubs.opengroup.org/onlinepubs/009696699/functions/open.html\n\n-- \nRegards,\nNazir Bilal Yavuz\nMicrosoft\n\n\n", "msg_date": "Wed, 8 May 2024 16:06:45 +0300", "msg_from": "Nazir Bilal Yavuz <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CREATE DATABASE with filesystem cloning" }, { "msg_contents": "Em qua., 8 de mai. de 2024 às 10:06, Nazir Bilal Yavuz <[email protected]>\nescreveu:\n\n> Hi,\n>\n> On Wed, 8 May 2024 at 15:23, Ranier Vilela <[email protected]> wrote:\n> >\n> > Em qua., 8 de mai. de 2024 às 08:42, Nazir Bilal Yavuz <\n> [email protected]> escreveu:\n> >>\n> >> Hi,\n> >>\n> >> On Wed, 8 May 2024 at 14:16, Ranier Vilela <[email protected]> wrote:\n> >> >\n> >> >\n> >> > Em qua., 8 de mai. de 2024 às 04:37, Nazir Bilal Yavuz <\n> [email protected]> escreveu:\n> >> >>\n> >> >> Hi Ranier,\n> >> >>\n> >> >> Thanks for looking into this!\n> >> >>\n> >> >> I am not sure why but your reply does not show up in the thread, so I\n> >> >> copied your reply and answered it in the thread for visibility.\n> >> >>\n> >> >> On Tue, 7 May 2024 at 16:28, Ranier Vilela <[email protected]>\n> wrote:\n> >> >> >\n> >> >> > I know it's coming from copy-and-paste, but\n> >> >> > I believe the flags could be:\n> >> >> > - dstfd = OpenTransientFile(tofile, O_RDWR | O_CREAT | O_EXCL |\n> PG_BINARY);\n> >> >> > + dstfd = OpenTransientFile(tofile, O_CREAT | O_WRONLY | O_TRUNC |\n> O_EXCL | PG_BINARY);\n> >> >> >\n> >> >> > The flags:\n> >> >> > O_WRONLY | O_TRUNC\n> >> >> >\n> >> >> > Allow the OS to make some optimizations, if you deem it possible.\n> >> >> >\n> >> >> > The flag O_RDWR indicates that the file can be read, which is not\n> true in this case.\n> >> >> > The destination file will just be written.\n> >> >>\n> >> >> You may be right about the O_WRONLY flag but I am not sure about the\n> >> >> O_TRUNC flag.\n> >> >>\n> >> >> O_TRUNC from the linux man page [1]: If the file already exists and\n> is\n> >> >> a regular file and the access mode allows writing (i.e., is O_RDWR or\n> >> >> O_WRONLY) it will be truncated to length 0. If the file is a FIFO\n> or\n> >> >> terminal device file, the O_TRUNC flag is ignored. Otherwise, the\n> >> >> effect of O_TRUNC is unspecified.\n> >> >\n> >> > O_TRUNC is usually used in conjunction with O_WRONLY.\n> >> > See the example at:\n> >> > open.html\n> >> >\n> >> > O_TRUNC signals the OS to forget the current contents of the file, if\n> it happens to exist.\n> >> > In other words, there will be no seeks, only and exclusively writes.\n> >>\n> >> You are right; the O_TRUNC flag truncates the file, if it happens to\n> >> exist. But it should not exist in the first place because the code at\n> >> [2] should check this before the OpenTransientFile() call. Even if we\n> >> assume somehow the check at [2] does not work, I do not think the\n> >> correct operation is to truncate the contents of the existing file.\n> >\n> > I don't know if there is a communication problem here.\n> > But what I'm trying to suggest refers to the destination file,\n> > which doesn't matter if it exists or not?\n>\n> I do not think there is a communication problem. Actually it matters\n> because the destination file should not exist, there is a code [2]\n> which already checks and confirms that it does not exist.\n>\nI got it.\n\n\n>\n> >\n> > If the destination file does not exist, O_TRUNC is ignored.\n> > If the destination file exists, O_TRUNC truncates the current contents\n> of the file.\n> > I don't see why you think it's a problem to truncate the current content\n> if the destination file exists.\n> > Isn't he going to be replaced anyway?\n>\n> 'If the destination file does not exist' means the code at [2] works\n> well and we do not need the O_TRUNC flag.\n>\nTrue, the O_TRUNC is ignored in this case.\n\n\n>\n> 'If the destination file already exists' means the code at [2] is\n> broken somehow and there is a high chance that we could truncate\n> something that we do not want to. For example, there is a foo db and\n> we want to create bar db, Postgres chose the foo db's location as the\n> destination of the bar db (which should not happen but let's assume\n> somehow checks at [2] failed), then we could wrongly truncate the foo\n> db's contents.\n>\nOf course, truncating the wrong file would be pretty bad.\n\n\n>\n> Hence, if Postgres works successfully I think the O_TRUNC flag is\n> unnecessary but if Postgres does not work successfully, the O_TRUNC\n> flag could cause harm.\n>\nThe disaster will happen anyway, but of course we can help in some way.\nEven without truncating, the wrong file will be destroyed anyway.\n\n\n> >\n> > Unless we want to preserve the current content (destination file), in\n> case the copy/clone fails?\n>\n> Like I said above, Postgres should not choose the existing file as the\n> destination file.\n>\n> Also, we have O_CREAT | O_EXCL flags together, from the link [3] you\n> shared before: If O_CREAT and O_EXCL are set, open() shall fail if the\n> file exists. So, overwriting the already existing file is already\n> prevented.\n>\nThat said, I agree that not using O_TRUNC helps in some way.\n\nbest regards,\nRanier Vilela\n\nEm qua., 8 de mai. de 2024 às 10:06, Nazir Bilal Yavuz <[email protected]> escreveu:Hi,\n\nOn Wed, 8 May 2024 at 15:23, Ranier Vilela <[email protected]> wrote:\n>\n> Em qua., 8 de mai. de 2024 às 08:42, Nazir Bilal Yavuz <[email protected]> escreveu:\n>>\n>> Hi,\n>>\n>> On Wed, 8 May 2024 at 14:16, Ranier Vilela <[email protected]> wrote:\n>> >\n>> >\n>> > Em qua., 8 de mai. de 2024 às 04:37, Nazir Bilal Yavuz <[email protected]> escreveu:\n>> >>\n>> >> Hi Ranier,\n>> >>\n>> >> Thanks for looking into this!\n>> >>\n>> >> I am not sure why but your reply does not show up in the thread, so I\n>> >> copied your reply and answered it in the thread for visibility.\n>> >>\n>> >> On Tue, 7 May 2024 at 16:28, Ranier Vilela <[email protected]> wrote:\n>> >> >\n>> >> > I know it's coming from copy-and-paste, but\n>> >> > I believe the flags could be:\n>> >> > - dstfd = OpenTransientFile(tofile, O_RDWR | O_CREAT | O_EXCL | PG_BINARY);\n>> >> > + dstfd = OpenTransientFile(tofile, O_CREAT | O_WRONLY | O_TRUNC | O_EXCL | PG_BINARY);\n>> >> >\n>> >> > The flags:\n>> >> > O_WRONLY | O_TRUNC\n>> >> >\n>> >> > Allow the OS to make some optimizations, if you deem it possible.\n>> >> >\n>> >> > The flag O_RDWR indicates that the file can be read, which is not true in this case.\n>> >> > The destination file will just be written.\n>> >>\n>> >> You may be right about the O_WRONLY flag but I am not sure about the\n>> >> O_TRUNC flag.\n>> >>\n>> >> O_TRUNC from the linux man page [1]: If the file already exists and is\n>> >> a regular file and the access mode allows writing (i.e., is O_RDWR or\n>> >> O_WRONLY) it will be truncated to length 0.  If the file is a FIFO  or\n>> >> terminal device file, the O_TRUNC flag is ignored. Otherwise, the\n>> >> effect of O_TRUNC is unspecified.\n>> >\n>> > O_TRUNC is usually used in conjunction with O_WRONLY.\n>> > See the example at:\n>> > open.html\n>> >\n>> > O_TRUNC signals the OS to forget the current contents of the file, if it happens to exist.\n>> > In other words, there will be no seeks, only and exclusively writes.\n>>\n>> You are right; the O_TRUNC flag truncates the file, if it happens to\n>> exist. But it should not exist in the first place because the code at\n>> [2] should check this before the OpenTransientFile() call. Even if we\n>> assume somehow the check at [2] does not work, I do not think the\n>> correct operation is to truncate the contents of the existing file.\n>\n> I don't know if there is a communication problem here.\n> But what I'm trying to suggest refers to the destination file,\n> which doesn't matter if it exists or not?\n\nI do not think there is a communication problem. Actually it matters\nbecause the destination file should not exist, there is a code [2]\nwhich already checks and confirms that it does not exist.I got it. \n\n>\n> If the destination file does not exist, O_TRUNC is ignored.\n> If the destination file exists, O_TRUNC truncates the current contents of the file.\n> I don't see why you think it's a problem to truncate the current content if the destination file exists.\n> Isn't he going to be replaced anyway?\n\n'If the destination file does not exist' means the code at [2] works\nwell and we do not need the O_TRUNC flag.True, the O_TRUNC is ignored in this case. \n\n'If the destination file already exists' means the code at [2] is\nbroken somehow and there is a high chance that we could truncate\nsomething that we do not want to. For example, there is a foo db and\nwe want to create bar db, Postgres chose the foo db's location as the\ndestination of the bar db (which should not happen but let's assume\nsomehow checks at [2] failed), then we could wrongly truncate the foo\ndb's contents.Of course, truncating the wrong file would be pretty bad. \n\nHence, if Postgres works successfully I think the O_TRUNC flag is\nunnecessary but if Postgres does not work successfully, the O_TRUNC\nflag could cause harm.The disaster will happen anyway, but of course we can help in some way.Even without truncating, the wrong file will be destroyed anyway.\n\n>\n> Unless we want to preserve the current content (destination file), in case the copy/clone fails?\n\nLike I said above, Postgres should not choose the existing file as the\ndestination file.\n\nAlso, we have O_CREAT | O_EXCL flags together, from the link [3] you\nshared before: If O_CREAT and O_EXCL are set, open() shall fail if the\nfile exists. So, overwriting the already existing file is already\nprevented.That said, I agree that not using O_TRUNC helps in some way.best regards,Ranier Vilela", "msg_date": "Wed, 8 May 2024 10:20:54 -0300", "msg_from": "Ranier Vilela <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CREATE DATABASE with filesystem cloning" }, { "msg_contents": "On Tue, May 7, 2024 at 8:00 AM Nazir Bilal Yavuz <[email protected]> wrote:\n> We had an off-list talk with Thomas and we thought making this option\n> GUC instead of SQL command level could solve this problem.\n>\n> I am posting a new rebased version of the patch with some important changes:\n>\n> * 'createdb_file_copy_method' GUC is created. Possible values are\n> 'copy' and 'clone'. Copy is the default option. Clone option can be\n> chosen if the system supports it, otherwise it gives error at the\n> startup.\n\nThis seems like a smart idea, because the type of file copying that we\ndo during redo need not be the same as what was done when the\noperation was originally performed.\n\nI'm not so sure about the GUC name. On the one hand, it feels like\ncreatedb should be spelled out as create_database, but on the other\nhand, the GUC name is quite long already. Then again, why would we\nmake this specific to CREATE DATABASE in the first place? Would we\nalso want alter_tablespace_file_copy_method and\nbasic_archive.file_copy_method?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 8 May 2024 09:57:50 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CREATE DATABASE with filesystem cloning" }, { "msg_contents": "Hi,\n\nOn Wed, 8 May 2024 at 16:58, Robert Haas <[email protected]> wrote:\n>\n> On Tue, May 7, 2024 at 8:00 AM Nazir Bilal Yavuz <[email protected]> wrote:\n> > We had an off-list talk with Thomas and we thought making this option\n> > GUC instead of SQL command level could solve this problem.\n> >\n> > I am posting a new rebased version of the patch with some important changes:\n> >\n> > * 'createdb_file_copy_method' GUC is created. Possible values are\n> > 'copy' and 'clone'. Copy is the default option. Clone option can be\n> > chosen if the system supports it, otherwise it gives error at the\n> > startup.\n>\n> This seems like a smart idea, because the type of file copying that we\n> do during redo need not be the same as what was done when the\n> operation was originally performed.\n>\n> I'm not so sure about the GUC name. On the one hand, it feels like\n> createdb should be spelled out as create_database, but on the other\n> hand, the GUC name is quite long already. Then again, why would we\n> make this specific to CREATE DATABASE in the first place? Would we\n> also want alter_tablespace_file_copy_method and\n> basic_archive.file_copy_method?\n\nI agree that it is already quite long, because of that I chose the\ncreatedb as a prefix. I did not think that file cloning was planned to\nbe used in other places. If that is the case, does something like\n'preferred_copy_method' work? Then, we would mention which places will\nbe affected with this GUC in the docs.\n\n-- \nRegards,\nNazir Bilal Yavuz\nMicrosoft\n\n\n", "msg_date": "Wed, 8 May 2024 17:34:20 +0300", "msg_from": "Nazir Bilal Yavuz <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CREATE DATABASE with filesystem cloning" }, { "msg_contents": "On Wed, May 8, 2024 at 10:34 AM Nazir Bilal Yavuz <[email protected]> wrote:\n> > I'm not so sure about the GUC name. On the one hand, it feels like\n> > createdb should be spelled out as create_database, but on the other\n> > hand, the GUC name is quite long already. Then again, why would we\n> > make this specific to CREATE DATABASE in the first place? Would we\n> > also want alter_tablespace_file_copy_method and\n> > basic_archive.file_copy_method?\n>\n> I agree that it is already quite long, because of that I chose the\n> createdb as a prefix. I did not think that file cloning was planned to\n> be used in other places. If that is the case, does something like\n> 'preferred_copy_method' work? Then, we would mention which places will\n> be affected with this GUC in the docs.\n\nI would go with file_copy_method rather than preferred_copy_method,\nbecause (1) there's nothing \"preferred\" about it if we're using it\nunconditionally and (2) it's nice to clarify that we're talking about\ncopying files rather than anything else.\n\nMy personal enthusiasm for making platform-specific file copy methods\nusable all over PostgreSQL is quite limited. However, it is my\nobservation that other people seem far more enthusiastic about it than\nI am. For example, consider how quickly it got added to\npg_combinebackup. So, I suspect it's smart to plan on anything we add\nin this area getting used in a bunch of places. And perhaps it is even\nbest to think about making it work in all of those places right from\nthe start. If we build support into copydir and copy_file, then we\njust get everything that uses those, and all that remains is to\ndocument was is covered (and add comments so that future patch authors\nknow they should further update the documentation).\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 9 May 2024 12:29:26 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CREATE DATABASE with filesystem cloning" }, { "msg_contents": "Hi,\n\nOn Thu, 9 May 2024 at 19:29, Robert Haas <[email protected]> wrote:\n>\n> On Wed, May 8, 2024 at 10:34 AM Nazir Bilal Yavuz <[email protected]> wrote:\n> > > I'm not so sure about the GUC name. On the one hand, it feels like\n> > > createdb should be spelled out as create_database, but on the other\n> > > hand, the GUC name is quite long already. Then again, why would we\n> > > make this specific to CREATE DATABASE in the first place? Would we\n> > > also want alter_tablespace_file_copy_method and\n> > > basic_archive.file_copy_method?\n> >\n> > I agree that it is already quite long, because of that I chose the\n> > createdb as a prefix. I did not think that file cloning was planned to\n> > be used in other places. If that is the case, does something like\n> > 'preferred_copy_method' work? Then, we would mention which places will\n> > be affected with this GUC in the docs.\n>\n> I would go with file_copy_method rather than preferred_copy_method,\n> because (1) there's nothing \"preferred\" about it if we're using it\n> unconditionally and (2) it's nice to clarify that we're talking about\n> copying files rather than anything else.\n\nDone.\n\n>\n> My personal enthusiasm for making platform-specific file copy methods\n> usable all over PostgreSQL is quite limited. However, it is my\n> observation that other people seem far more enthusiastic about it than\n> I am. For example, consider how quickly it got added to\n> pg_combinebackup. So, I suspect it's smart to plan on anything we add\n> in this area getting used in a bunch of places. And perhaps it is even\n> best to think about making it work in all of those places right from\n> the start. If we build support into copydir and copy_file, then we\n> just get everything that uses those, and all that remains is to\n> document was is covered (and add comments so that future patch authors\n> know they should further update the documentation).\n\nI updated the documentation and put a comment on top of the copydir()\nfunction to inform that further changes and uses of this function may\nrequire documentation updates.\n\nI also changed O_RDWR to O_WRONLY flag in the clone_file() function\nbased on Raniers' feedback. Also, does this feature need tests? I\nthought about possible test cases but since this feature requires\nspecific file systems to run, I could not find any.\n\nv6 is attached.\n\n--\nRegards,\nNazir Bilal Yavuz\nMicrosoft", "msg_date": "Thu, 16 May 2024 15:35:38 +0300", "msg_from": "Nazir Bilal Yavuz <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CREATE DATABASE with filesystem cloning" }, { "msg_contents": "On Thu, May 16, 2024 at 8:35 AM Nazir Bilal Yavuz <[email protected]> wrote:\n> I updated the documentation and put a comment on top of the copydir()\n> function to inform that further changes and uses of this function may\n> require documentation updates.\n\nI was assuming that the documentation for the file_copy_method was\ngoing to list the things that it controlled, and that the comment was\ngoing to say that you should update that list specifically. Just\nsaying that you may need to update some part of the documentation in\nsome way is fairly useless, IMHO.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 16 May 2024 08:40:32 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CREATE DATABASE with filesystem cloning" }, { "msg_contents": "Hi,\n\nOn Thu, 16 May 2024 at 15:40, Robert Haas <[email protected]> wrote:\n>\n> On Thu, May 16, 2024 at 8:35 AM Nazir Bilal Yavuz <[email protected]> wrote:\n> > I updated the documentation and put a comment on top of the copydir()\n> > function to inform that further changes and uses of this function may\n> > require documentation updates.\n>\n> I was assuming that the documentation for the file_copy_method was\n> going to list the things that it controlled, and that the comment was\n> going to say that you should update that list specifically. Just\n> saying that you may need to update some part of the documentation in\n> some way is fairly useless, IMHO.\n\nActually, the documentation for the file_copy_method was mentioning\nthe things it controls; I converted it to an itemized list now. Also,\nchanged the comment to: 'Further uses of this function requires\nupdates to the list that GUC controls in its documentation.'. v7 is\nattached.\n\n--\nRegards,\nNazir Bilal Yavuz\nMicrosoft", "msg_date": "Thu, 16 May 2024 16:42:52 +0300", "msg_from": "Nazir Bilal Yavuz <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CREATE DATABASE with filesystem cloning" }, { "msg_contents": "On Thu, May 16, 2024 at 9:43 AM Nazir Bilal Yavuz <[email protected]> wrote:\n> Actually, the documentation for the file_copy_method was mentioning\n> the things it controls; I converted it to an itemized list now. Also,\n> changed the comment to: 'Further uses of this function requires\n> updates to the list that GUC controls in its documentation.'. v7 is\n> attached.\n\nI think the comments need some wordsmithing.\n\nI don't see why this parameter should be PGC_POSTMASTER.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 16 May 2024 10:46:56 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CREATE DATABASE with filesystem cloning" }, { "msg_contents": "Hi,\n\nOn Thu, 16 May 2024 at 17:47, Robert Haas <[email protected]> wrote:\n>\n> On Thu, May 16, 2024 at 9:43 AM Nazir Bilal Yavuz <[email protected]> wrote:\n> > Actually, the documentation for the file_copy_method was mentioning\n> > the things it controls; I converted it to an itemized list now. Also,\n> > changed the comment to: 'Further uses of this function requires\n> > updates to the list that GUC controls in its documentation.'. v7 is\n> > attached.\n>\n> I think the comments need some wordsmithing.\n\nI changed it to 'Uses of this function must be documented in the list\nof places affected by this GUC.', I am open to any suggestions.\n\n> I don't see why this parameter should be PGC_POSTMASTER.\n\nI changed it to 'PGC_USERSET' now. My initial point was the database\nor tablespace to be copied with the same method. I thought copying\nsome portion of the database with the copy and rest with the clone\ncould cause potential problems. After a bit of searching, I could not\nfind any problems related to that.\n\nv8 is attached.\n\n--\nRegards,\nNazir Bilal Yavuz\nMicrosoft", "msg_date": "Tue, 21 May 2024 11:18:34 +0300", "msg_from": "Nazir Bilal Yavuz <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CREATE DATABASE with filesystem cloning" }, { "msg_contents": "Em ter., 21 de mai. de 2024 às 05:18, Nazir Bilal Yavuz <[email protected]>\nescreveu:\n\n> Hi,\n>\n> On Thu, 16 May 2024 at 17:47, Robert Haas <[email protected]> wrote:\n> >\n> > On Thu, May 16, 2024 at 9:43 AM Nazir Bilal Yavuz <[email protected]>\n> wrote:\n> > > Actually, the documentation for the file_copy_method was mentioning\n> > > the things it controls; I converted it to an itemized list now. Also,\n> > > changed the comment to: 'Further uses of this function requires\n> > > updates to the list that GUC controls in its documentation.'. v7 is\n> > > attached.\n> >\n> > I think the comments need some wordsmithing.\n>\n> I changed it to 'Uses of this function must be documented in the list\n> of places affected by this GUC.', I am open to any suggestions.\n>\n> > I don't see why this parameter should be PGC_POSTMASTER.\n>\n> I changed it to 'PGC_USERSET' now. My initial point was the database\n> or tablespace to be copied with the same method. I thought copying\n> some portion of the database with the copy and rest with the clone\n> could cause potential problems. After a bit of searching, I could not\n> find any problems related to that.\n>\n> v8 is attached.\n>\nHi,\nI did some research on the subject and despite being an improvement,\nI believe that some terminologies should be changed in this patch.\nAlthough the new function is called *clone_file*, I'm not sure if it really\nis \"clone\".\nWhy MacOS added an API called clonefile [1] and Linux exists\nanother called FICLONE.[2]\nSo perhaps it should be treated here as a copy and not a clone?\nLeaving it open, is the possibility of implementing a true clone api?\nThoughts?\n\nbest regards,\nRanier Vilela\n\n[1] clonefile <https://www.unix.com/man-page/mojave/2/clonefile/>\n[2] ioctl_ficlonerange\n<https://www.unix.com/man-page/linux/2/ioctl_ficlonerange/>\n\n\n>\n> --\n> Regards,\n> Nazir Bilal Yavuz\n> Microsoft\n>\n\nEm ter., 21 de mai. de 2024 às 05:18, Nazir Bilal Yavuz <[email protected]> escreveu:Hi,\n\nOn Thu, 16 May 2024 at 17:47, Robert Haas <[email protected]> wrote:\n>\n> On Thu, May 16, 2024 at 9:43 AM Nazir Bilal Yavuz <[email protected]> wrote:\n> > Actually, the documentation for the file_copy_method was mentioning\n> > the things it controls; I converted it to an itemized list now. Also,\n> > changed the comment to: 'Further uses of this function requires\n> > updates to the list that GUC controls in its documentation.'. v7 is\n> > attached.\n>\n> I think the comments need some wordsmithing.\n\nI changed it to 'Uses of this function must be documented in the list\nof places affected by this GUC.', I am open to any suggestions.\n\n> I don't see why this parameter should be PGC_POSTMASTER.\n\nI changed it to 'PGC_USERSET' now. My initial point was the database\nor tablespace to be copied with the same method. I thought copying\nsome portion of the database with the copy and rest with the clone\ncould cause potential problems. After a bit of searching, I could not\nfind any problems related to that.\n\nv8 is attached.Hi,I did some research on the subject and despite being an improvement, I believe that some terminologies should be changed in this patch.Although the new function is called *clone_file*, I'm not sure if it really is \"clone\".Why MacOS added an API called clonefile [1] and Linux existsanother called FICLONE.[2]So perhaps it should be treated here as a copy and not a clone?Leaving it open, is the possibility of implementing a true clone api?Thoughts?best regards,Ranier Vilela[1] clonefile[2] ioctl_ficlonerange \n\n--\nRegards,\nNazir Bilal Yavuz\nMicrosoft", "msg_date": "Tue, 21 May 2024 09:08:13 -0300", "msg_from": "Ranier Vilela <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CREATE DATABASE with filesystem cloning" }, { "msg_contents": "Hi,\n\nOn Tue, 21 May 2024 at 15:08, Ranier Vilela <[email protected]> wrote:\n>\n> Em ter., 21 de mai. de 2024 às 05:18, Nazir Bilal Yavuz <[email protected]> escreveu:\n>>\n>> Hi,\n>>\n>> On Thu, 16 May 2024 at 17:47, Robert Haas <[email protected]> wrote:\n>> >\n>> > On Thu, May 16, 2024 at 9:43 AM Nazir Bilal Yavuz <[email protected]> wrote:\n>> > > Actually, the documentation for the file_copy_method was mentioning\n>> > > the things it controls; I converted it to an itemized list now. Also,\n>> > > changed the comment to: 'Further uses of this function requires\n>> > > updates to the list that GUC controls in its documentation.'. v7 is\n>> > > attached.\n>> >\n>> > I think the comments need some wordsmithing.\n>>\n>> I changed it to 'Uses of this function must be documented in the list\n>> of places affected by this GUC.', I am open to any suggestions.\n>>\n>> > I don't see why this parameter should be PGC_POSTMASTER.\n>>\n>> I changed it to 'PGC_USERSET' now. My initial point was the database\n>> or tablespace to be copied with the same method. I thought copying\n>> some portion of the database with the copy and rest with the clone\n>> could cause potential problems. After a bit of searching, I could not\n>> find any problems related to that.\n>>\n>> v8 is attached.\n>\n> Hi,\n> I did some research on the subject and despite being an improvement,\n> I believe that some terminologies should be changed in this patch.\n> Although the new function is called *clone_file*, I'm not sure if it really is \"clone\".\n> Why MacOS added an API called clonefile [1] and Linux exists\n> another called FICLONE.[2]\n> So perhaps it should be treated here as a copy and not a clone?\n> Leaving it open, is the possibility of implementing a true clone api?\n> Thoughts?\n\nThank you for mentioning this.\n\n1- I do not know where to look for MacOS' function definitions but I\nfollowed the same links you shared. It says copyfile(...,\nCOPYFILE_CLONE_FORCE) means 'Clone the file instead. This is a force\nflag i.e. if cloning fails, an error is returned....' [1]. It does the\ncloning but I still do not know whether there is a difference between\n'copyfile() function with COPYFILE_CLONE_FORCE flag' and 'clonefile()'\nfunction.\n\n2- I read a couple of discussions about copy_file_range() and FICLONE.\nIt seems that the copy_file_range() function is a slightly upgraded\nversion of FICLONE but less available since it is newer. It still does\nthe clone [2]. Also, FICLONE is already used in pg_upgrade and\npg_combinebackup.\n\nBoth of these copyfile(..., COPYFILE_CLONE_FORCE) and\ncopy_file_range() functions do the cloning, so I think using clone\nterminology is correct. But, using FICLONE instead of\ncopy_file_range() could be better since it is more widely available.\n\n[1] https://www.unix.com/man-page/mojave/3/copyfile/\n[2] https://elixir.bootlin.com/linux/v5.15-rc5/source/fs/read_write.c#L1495\n\n-- \nRegards,\nNazir Bilal Yavuz\nMicrosoft\n\n\n", "msg_date": "Mon, 3 Jun 2024 15:49:29 +0300", "msg_from": "Nazir Bilal Yavuz <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CREATE DATABASE with filesystem cloning" }, { "msg_contents": "Hi,\n\nRebased version of the patch is attached.\n\n-- \nRegards,\nNazir Bilal Yavuz\nMicrosoft", "msg_date": "Thu, 8 Aug 2024 15:15:42 +0300", "msg_from": "Nazir Bilal Yavuz <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CREATE DATABASE with filesystem cloning" } ]
[ { "msg_contents": "In relation_excluded_by_constraints() when we're trying to figure out\nwhether the relation need not be scanned, one of the checks we do is to\ndetect constant-FALSE-or-NULL restriction clauses. Currently we perform\nthis check only when there is exactly one baserestrictinfo entry, and\nthe comment explains this as below.\n\n * Regardless of the setting of constraint_exclusion, detect\n * constant-FALSE-or-NULL restriction clauses. Because const-folding will\n * reduce \"anything AND FALSE\" to just \"FALSE\", any such case should\n * result in exactly one baserestrictinfo entry.\n\nThis doesn't seem entirely correct, because equivclass.c may generate\nconstant-FALSE baserestrictinfo entry on the fly. In addition, other\nquals could get pushed down to the baserel. All these cases would\nresult in that the baserestrictinfo list might possibly have other\nmembers besides the FALSE constant.\n\nSo I'm wondering if we should check each of base restriction clauses for\nconstant-FALSE-or-NULL quals, like attached.\n\nHere are some examples.\n\n-- #1 constant-FALSE generated by ECs\n\n-- unpatched (in all branches)\nexplain (costs off) select * from t t1 where a = 1 and a = 2;\n QUERY PLAN\n--------------------------\n Result\n One-Time Filter: false\n -> Seq Scan on t t1\n Filter: (a = 1)\n(4 rows)\n\n-- patched\nexplain (costs off) select * from t t1 where a = 1 and a = 2;\n QUERY PLAN\n--------------------------\n Result\n One-Time Filter: false\n(2 rows)\n\n\n-- #2 other quals get pushed down to the baserel\n\n-- unpatched (in 15 and earlier)\nexplain (costs off)\nselect * from t t1 left join (select * from t t2 where false) s on s.a = 1;\n QUERY PLAN\n--------------------------------------\n Nested Loop Left Join\n -> Seq Scan on t t1\n -> Materialize\n -> Result\n One-Time Filter: false\n -> Seq Scan on t t2\n Filter: (a = 1)\n(7 rows)\n\n-- patched\nexplain (costs off)\nselect * from t t1 left join (select * from t t2 where false) s on s.a = 1;\n QUERY PLAN\n--------------------------------\n Nested Loop Left Join\n -> Seq Scan on t t1\n -> Result\n One-Time Filter: false\n(4 rows)\n\nI'm a little concerned that it will bring some overhead to loop through\nthe baserestrictinfo list. But considering that other codes in the same\nfunction also loops through the list, maybe I'm worrying over nothing.\n\nAny thoughts?\n\nThanks\nRichard", "msg_date": "Sat, 7 Oct 2023 17:39:48 +0800", "msg_from": "Richard Guo <[email protected]>", "msg_from_op": true, "msg_subject": "Check each of base restriction clauses for constant-FALSE-or-NULL" }, { "msg_contents": "On Sat, Oct 7, 2023 at 3:14 PM Richard Guo <[email protected]> wrote:\n>\n> In relation_excluded_by_constraints() when we're trying to figure out\n> whether the relation need not be scanned, one of the checks we do is to\n> detect constant-FALSE-or-NULL restriction clauses. Currently we perform\n> this check only when there is exactly one baserestrictinfo entry, and\n> the comment explains this as below.\n>\n> * Regardless of the setting of constraint_exclusion, detect\n> * constant-FALSE-or-NULL restriction clauses. Because const-folding will\n> * reduce \"anything AND FALSE\" to just \"FALSE\", any such case should\n> * result in exactly one baserestrictinfo entry.\n>\n> This doesn't seem entirely correct, because equivclass.c may generate\n> constant-FALSE baserestrictinfo entry on the fly. In addition, other\n> quals could get pushed down to the baserel. All these cases would\n> result in that the baserestrictinfo list might possibly have other\n> members besides the FALSE constant.\n>\n> So I'm wondering if we should check each of base restriction clauses for\n> constant-FALSE-or-NULL quals, like attached.\n>\n> Here are some examples.\n>\n> -- #1 constant-FALSE generated by ECs\n>\n> -- unpatched (in all branches)\n>\n> QUERY PLAN\n> --------------------------\n> Result\n> One-Time Filter: false\n> -> Seq Scan on t t1\n> Filter: (a = 1)\n> (4 rows)\n>\n\nI used a slightly modified query as below\n\n# explain (costs off) select * from pg_class t1 where oid = 1 and oid = 2;\n QUERY PLAN\n----------------------------------------------------------\n Result\n One-Time Filter: false\n -> Index Scan using pg_class_oid_index on pg_class t1\n Index Cond: (oid = '1'::oid)\n(4 rows)\n\npostgres@312571=# explain (analyze, costs off) select * from pg_class\nt1 where oid = 1 and oid = 2;\n QUERY PLAN\n---------------------------------------------------------------------------\n Result (actual time=0.002..0.003 rows=0 loops=1)\n One-Time Filter: false\n -> Index Scan using pg_class_oid_index on pg_class t1 (never executed)\n Index Cond: (oid = '1'::oid)\n Planning Time: 0.176 ms\n Execution Time: 0.052 ms\n(6 rows)\n\nYou will see that the scan node was never executed. Hence there's no\nexecution time benefit if we remove the scan plan.\n\nWhere do we produce the single baserestrictinfo mentioned in the\ncomments? Is it before the planning proper starts?\n\nget_gating_quals does what you are doing much earlier in the query\nprocessing. Your code would just duplicate that.\n\n>\n> -- patched\n> explain (costs off)\n> select * from t t1 left join (select * from t t2 where false) s on s.a = 1;\n> QUERY PLAN\n> --------------------------------\n> Nested Loop Left Join\n> -> Seq Scan on t t1\n> -> Result\n> One-Time Filter: false\n> (4 rows)\n\nDoes your code have any other benefits like deeming an inner join as empty?\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Mon, 9 Oct 2023 15:17:50 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Check each of base restriction clauses for constant-FALSE-or-NULL" }, { "msg_contents": "On Mon, Oct 9, 2023 at 5:48 PM Ashutosh Bapat <[email protected]>\nwrote:\n\n> postgres@312571=# explain (analyze, costs off) select * from pg_class\n> t1 where oid = 1 and oid = 2;\n> QUERY PLAN\n> ---------------------------------------------------------------------------\n> Result (actual time=0.002..0.003 rows=0 loops=1)\n> One-Time Filter: false\n> -> Index Scan using pg_class_oid_index on pg_class t1 (never executed)\n> Index Cond: (oid = '1'::oid)\n> Planning Time: 0.176 ms\n> Execution Time: 0.052 ms\n> (6 rows)\n>\n> You will see that the scan node was never executed. Hence there's no\n> execution time benefit if we remove the scan plan.\n\n\nYeah, the constant-FALSE is a pseudoconstant qual and will result in a\ngating Result node atop the scan node. So this optimization about\ndetecting constant-FALSE restriction clauses and marking the rel as\ndummy if there is one is not likely to benefit execution time. AFAICS\nit can help save some planning efforts, because once a base rel is\nmarked dummy, we won't bother building access paths for it. Also a\ndummy input rel can save efforts when we generate paths for joinrel, see\nhow we cope with dummy rels in populate_joinrel_with_paths().\n\n\n> Where do we produce the single baserestrictinfo mentioned in the\n> comments? Is it before the planning proper starts?\n\n\nDo you mean the const-folding? It happens in the preprocessing phase,\nspecifically in eval_const_expressions().\n\n\n> get_gating_quals does what you are doing much earlier in the query\n> processing. Your code would just duplicate that.\n\n\nHm, I don't think so. get_gating_quals is called in createplan.c, where\nwe've selected the best path, while the optimization with my code\nhappens much earlier, when we set size estimates for a base relation.\nNeither of these two is a duplicate of the other. I think the theory\nhere is that it's always a win to mark a rel as dummy if possible as\nearly as we can.\n\nThanks\nRichard\n\nOn Mon, Oct 9, 2023 at 5:48 PM Ashutosh Bapat <[email protected]> wrote:\npostgres@312571=# explain (analyze, costs off) select * from pg_class\nt1 where oid = 1 and oid = 2;\n                                QUERY PLAN\n---------------------------------------------------------------------------\n Result (actual time=0.002..0.003 rows=0 loops=1)\n   One-Time Filter: false\n   ->  Index Scan using pg_class_oid_index on pg_class t1 (never executed)\n         Index Cond: (oid = '1'::oid)\n Planning Time: 0.176 ms\n Execution Time: 0.052 ms\n(6 rows)\n\nYou will see that the scan node was never executed. Hence there's no\nexecution time benefit if we remove the scan plan.Yeah, the constant-FALSE is a pseudoconstant qual and will result in agating Result node atop the scan node.  So this optimization aboutdetecting constant-FALSE restriction clauses and marking the rel asdummy if there is one is not likely to benefit execution time.  AFAICSit can help save some planning efforts, because once a base rel ismarked dummy, we won't bother building access paths for it.  Also adummy input rel can save efforts when we generate paths for joinrel, seehow we cope with dummy rels in populate_joinrel_with_paths(). \nWhere do we produce the single baserestrictinfo mentioned in the\ncomments? Is it before the planning proper starts?Do you mean the const-folding?  It happens in the preprocessing phase,specifically in eval_const_expressions(). \nget_gating_quals does what you are doing much earlier in the query\nprocessing. Your code would just duplicate that.Hm, I don't think so.  get_gating_quals is called in createplan.c, wherewe've selected the best path, while the optimization with my codehappens much earlier, when we set size estimates for a base relation.Neither of these two is a duplicate of the other.  I think the theoryhere is that it's always a win to mark a rel as dummy if possible asearly as we can.ThanksRichard", "msg_date": "Tue, 10 Oct 2023 13:39:02 +0800", "msg_from": "Richard Guo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Check each of base restriction clauses for constant-FALSE-or-NULL" }, { "msg_contents": "On Tue, Oct 10, 2023 at 11:09 AM Richard Guo <[email protected]> wrote:\n> Do you mean the const-folding? It happens in the preprocessing phase,\n> specifically in eval_const_expressions().\n\nThanks.\n\n> Hm, I don't think so. get_gating_quals is called in createplan.c, where\n> we've selected the best path, while the optimization with my code\n> happens much earlier, when we set size estimates for a base relation.\n> Neither of these two is a duplicate of the other. I think the theory\n> here is that it's always a win to mark a rel as dummy if possible as\n> early as we can.\n\nRight. Do you have an example where this could be demonstrated?\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Tue, 10 Oct 2023 11:15:31 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Check each of base restriction clauses for constant-FALSE-or-NULL" }, { "msg_contents": "On Tue, Oct 10, 2023 at 1:45 PM Ashutosh Bapat <[email protected]>\nwrote:\n\n> On Tue, Oct 10, 2023 at 11:09 AM Richard Guo <[email protected]>\n> wrote:\n> > Hm, I don't think so. get_gating_quals is called in createplan.c, where\n> > we've selected the best path, while the optimization with my code\n> > happens much earlier, when we set size estimates for a base relation.\n> > Neither of these two is a duplicate of the other. I think the theory\n> > here is that it's always a win to mark a rel as dummy if possible as\n> > early as we can.\n>\n> Right. Do you have an example where this could be demonstrated?\n\n\nHmm, do you think the two examples in the initial email of this thread\ncan serve the purpose, by observing how we avoid building access paths\nfor the dummy rel with this optimization?\n\nThanks\nRichard\n\nOn Tue, Oct 10, 2023 at 1:45 PM Ashutosh Bapat <[email protected]> wrote:On Tue, Oct 10, 2023 at 11:09 AM Richard Guo <[email protected]> wrote:\n> Hm, I don't think so.  get_gating_quals is called in createplan.c, where\n> we've selected the best path, while the optimization with my code\n> happens much earlier, when we set size estimates for a base relation.\n> Neither of these two is a duplicate of the other.  I think the theory\n> here is that it's always a win to mark a rel as dummy if possible as\n> early as we can.\n\nRight. Do you have an example where this could be demonstrated?Hmm, do you think the two examples in the initial email of this threadcan serve the purpose, by observing how we avoid building access pathsfor the dummy rel with this optimization?ThanksRichard", "msg_date": "Tue, 10 Oct 2023 16:32:55 +0800", "msg_from": "Richard Guo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Check each of base restriction clauses for constant-FALSE-or-NULL" }, { "msg_contents": "On Sat, 7 Oct 2023 at 22:44, Richard Guo <[email protected]> wrote:\n>\n> In relation_excluded_by_constraints() when we're trying to figure out\n> whether the relation need not be scanned, one of the checks we do is to\n> detect constant-FALSE-or-NULL restriction clauses. Currently we perform\n> this check only when there is exactly one baserestrictinfo entry, and\n> the comment explains this as below.\n>\n> * Regardless of the setting of constraint_exclusion, detect\n> * constant-FALSE-or-NULL restriction clauses. Because const-folding will\n> * reduce \"anything AND FALSE\" to just \"FALSE\", any such case should\n> * result in exactly one baserestrictinfo entry.\n\nCoincidentally (?), I saw the same thing just a few weeks ago while\nworking on [1]. I made the exact same adjustment to the code in\nrelation_excluded_by_constraints() as you have.\n\nI wasn't really expecting the baserestrictinfo list to be excessively\nlong, and if it ever was, I think looking at things like selectivity\nestimations would by far drown out looping over the entire list in\nrelation_excluded_by_constraints() rather than just looking at the\nfirst item in the list.\n\nAfter making the change, I saw the same regression test change as you\ndid, but didn't really feel like it was worth tackling separately from\nthe patch that we were working on.\n\nDavid\n\n[1] https://postgr.es/m/CAApHDvpkfS1hY3P4DWbOw6WCgRrja=yDLoEz+5g+E2z19Upsrg@mail.gmail.com\n\n\n", "msg_date": "Tue, 10 Oct 2023 22:10:12 +1300", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Check each of base restriction clauses for constant-FALSE-or-NULL" }, { "msg_contents": "On Tue, Oct 10, 2023 at 5:10 PM David Rowley <[email protected]> wrote:\n\n> On Sat, 7 Oct 2023 at 22:44, Richard Guo <[email protected]> wrote:\n> >\n> > In relation_excluded_by_constraints() when we're trying to figure out\n> > whether the relation need not be scanned, one of the checks we do is to\n> > detect constant-FALSE-or-NULL restriction clauses. Currently we perform\n> > this check only when there is exactly one baserestrictinfo entry, and\n> > the comment explains this as below.\n> >\n> > * Regardless of the setting of constraint_exclusion, detect\n> > * constant-FALSE-or-NULL restriction clauses. Because const-folding\n> will\n> > * reduce \"anything AND FALSE\" to just \"FALSE\", any such case should\n> > * result in exactly one baserestrictinfo entry.\n>\n> Coincidentally (?), I saw the same thing just a few weeks ago while\n> working on [1]. I made the exact same adjustment to the code in\n> relation_excluded_by_constraints() as you have.\n\n\nHaha, I noticed the need of this change while writing v5 patch [1] for\nthat same thread. That patch generates a new constant-FALSE\nRestrictInfo for an IS NULL qual that can be reduced to FALSE, and this\nmakes the comment in relation_excluded_by_constraints() about 'any such\ncase should result in exactly one baserestrictinfo entry' not true any\nmore. Without this change in relation_excluded_by_constraints(), a\nquery like below would not be able to be marked as dummy.\n\n select * from t where a is null and 'otherquals';\n\nAnd then the regression test diff after applying this change reminds me\nthat equivclass.c may also generate new constant-FALSE RestrictInfos on\nthe fly, so it seems to me that this change may benefit some queries\neven without the 'reduce-NullTest' patch.\n\n\n> I wasn't really expecting the baserestrictinfo list to be excessively\n> long, and if it ever was, I think looking at things like selectivity\n> estimations would by far drown out looping over the entire list in\n> relation_excluded_by_constraints() rather than just looking at the\n> first item in the list.\n\n\nAgreed.\n\n\n> After making the change, I saw the same regression test change as you\n> did, but didn't really feel like it was worth tackling separately from\n> the patch that we were working on.\n\n\nI was thinking that this change may be worthwhile by itself even without\nthe 'reduce-NullTest' patch, because it can benefit some cases, such as\nwhere EC generates constant-FALSE on the fly. So maybe it's worth a\nseparate patch? I'm not quite sure.\n\n[1]\nhttps://www.postgresql.org/message-id/CAMbWs4-eNVNTNc94eF%2BO_UwHYKv43vyMurhcdqMV%3DHt5fehcOg%40mail.gmail.com\n\nThanks\nRichard\n\nOn Tue, Oct 10, 2023 at 5:10 PM David Rowley <[email protected]> wrote:On Sat, 7 Oct 2023 at 22:44, Richard Guo <[email protected]> wrote:\n>\n> In relation_excluded_by_constraints() when we're trying to figure out\n> whether the relation need not be scanned, one of the checks we do is to\n> detect constant-FALSE-or-NULL restriction clauses.  Currently we perform\n> this check only when there is exactly one baserestrictinfo entry, and\n> the comment explains this as below.\n>\n>  * Regardless of the setting of constraint_exclusion, detect\n>  * constant-FALSE-or-NULL restriction clauses.  Because const-folding will\n>  * reduce \"anything AND FALSE\" to just \"FALSE\", any such case should\n>  * result in exactly one baserestrictinfo entry.\n\nCoincidentally (?), I saw the same thing just a few weeks ago while\nworking on [1].  I made the exact same adjustment to the code in\nrelation_excluded_by_constraints() as you have.Haha, I noticed the need of this change while writing v5 patch [1] forthat same thread.  That patch generates a new constant-FALSERestrictInfo for an IS NULL qual that can be reduced to FALSE, and thismakes the comment in relation_excluded_by_constraints() about 'any suchcase should result in exactly one baserestrictinfo entry' not true anymore.  Without this change in relation_excluded_by_constraints(), aquery like below would not be able to be marked as dummy.    select * from t where a is null and 'otherquals';And then the regression test diff after applying this change reminds methat equivclass.c may also generate new constant-FALSE RestrictInfos onthe fly, so it seems to me that this change may benefit some querieseven without the 'reduce-NullTest' patch. \nI wasn't really expecting the baserestrictinfo list to be excessively\nlong, and if it ever was, I think looking at things like selectivity\nestimations would by far drown out looping over the entire list in\nrelation_excluded_by_constraints() rather than just looking at the\nfirst item in the list.Agreed. \nAfter making the change, I saw the same regression test change as you\ndid, but didn't really feel like it was worth tackling separately from\nthe patch that we were working on.I was thinking that this change may be worthwhile by itself even withoutthe 'reduce-NullTest' patch, because it can benefit some cases, such aswhere EC generates constant-FALSE on the fly.  So maybe it's worth aseparate patch?  I'm not quite sure.[1] https://www.postgresql.org/message-id/CAMbWs4-eNVNTNc94eF%2BO_UwHYKv43vyMurhcdqMV%3DHt5fehcOg%40mail.gmail.comThanksRichard", "msg_date": "Tue, 10 Oct 2023 18:52:33 +0800", "msg_from": "Richard Guo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Check each of base restriction clauses for constant-FALSE-or-NULL" }, { "msg_contents": "Richard Guo <[email protected]> writes:\n> On Tue, Oct 10, 2023 at 5:10 PM David Rowley <[email protected]> wrote:\n>> After making the change, I saw the same regression test change as you\n>> did, but didn't really feel like it was worth tackling separately from\n>> the patch that we were working on.\n\n> I was thinking that this change may be worthwhile by itself even without\n> the 'reduce-NullTest' patch, because it can benefit some cases, such as\n> where EC generates constant-FALSE on the fly. So maybe it's worth a\n> separate patch? I'm not quite sure.\n\nI think it's worth pushing separately, since it has a positive impact\non existing cases, as shown by the regression test plan change.\nAlso, if you compare that test case to the one immediately following\nit, it's downright weird that we are presently smarter about\noptimizing the more complicated case. (I've not dug into exactly\nwhy that is; maybe worth running it to ground?)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 10 Oct 2023 13:56:16 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Check each of base restriction clauses for constant-FALSE-or-NULL" }, { "msg_contents": "I wrote:\n> Also, if you compare that test case to the one immediately following\n> it, it's downright weird that we are presently smarter about\n> optimizing the more complicated case. (I've not dug into exactly\n> why that is; maybe worth running it to ground?)\n\nThe reason seems to be that joinrels.c's restriction_is_constant_false\nknows that it has to check all members of the restrictinfo list, not\njust one; and we get to that because some of the originally generated\nEC clauses are join clauses in the second case.\n\nSo this logic in relation_excluded_by_constraints is just wrong ---\npremature optimization on my part, looks like.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 10 Oct 2023 14:24:08 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Check each of base restriction clauses for constant-FALSE-or-NULL" }, { "msg_contents": "I wrote:\n> So this logic in relation_excluded_by_constraints is just wrong ---\n> premature optimization on my part, looks like.\n\nPushed after a bit of fiddling with the comment.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 11 Oct 2023 13:08:59 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Check each of base restriction clauses for constant-FALSE-or-NULL" }, { "msg_contents": "On Thu, Oct 12, 2023 at 1:09 AM Tom Lane <[email protected]> wrote:\n\n> Pushed after a bit of fiddling with the comment.\n\n\nThanks for pushing!\n\nThanks\nRichard\n\nOn Thu, Oct 12, 2023 at 1:09 AM Tom Lane <[email protected]> wrote:\nPushed after a bit of fiddling with the comment.Thanks for pushing!ThanksRichard", "msg_date": "Mon, 16 Oct 2023 09:01:08 +0800", "msg_from": "Richard Guo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Check each of base restriction clauses for constant-FALSE-or-NULL" } ]
[ { "msg_contents": "Hi All,\n\nAttached is a draft patch implementing LIMIT pushdown to Append and MergeAppend nodes.\n\nThis patch eliminates the need to resort to subqueries to optimise UNIONs.\nIt also enables more aggressive partition pruning.\nNot sure if it causes LIMIT pushdown to foreign partitions though.\n\nApplying this patch causes regressions in:\n- postgres_fdw tests\n- partitions tests\n\nThis is due to subsequent partition pruning applied when LIMIT is pushed down - I guess that’s a (big) win.\n\nI would be happy to hear if the approach is sound.\n\nThanks,\nMichal", "msg_date": "Sat, 7 Oct 2023 12:01:17 +0200", "msg_from": "=?utf-8?Q?Micha=C5=82_K=C5=82eczek?= <[email protected]>", "msg_from_op": true, "msg_subject": "Draft LIMIT pushdown to Append and MergeAppend patch" }, { "msg_contents": "Hi All,\n\nAttached is a second version of the patch.\n\nThe goal is to:\n1. Apply LIMIT as early as possible - especially to apply LIMIT in partition scans\n2. Enable LIMIT pushdown for FDW partitions.\n\nMain idea of the patch is:\n\n1. Wrap children of Append and MergeAppend paths in LimitPaths.\n2. Let FDW extension handle limit pushdown\n\nThe changes are mainly in pathnode.c:\n- Introduced a new function: pushdown_limit() used by planner instead of create_limit_node\n- pushdown_limit handles MergeAppend, Append and ForeignScan nodes specially\n- it falls back to create_limit_node for other path types\n\nChanges in fdw:\n- added a new FDW operation PushdownLimitNode\n- this operation is called by pushdown_limit in pathnode.c\n\nChanges in postgres_fdw.c\n- Added stub implementation of PushdownLimitNode operation that delegates to create_limit_node wrapping original ForeignPath node\n\nI am going to work on tests right now as (obviously) they are failing due to different plans.\n\nAs this is my first time I dig into the internals of Postgres I would be really grateful for friendly review and some directions - I am not sure it the approach is the right one.\n\nThe need for this is real: we are struggling with slow queries on partitioned tables - the business requirements are such that the only way to avoid index scans yielding many records is to apply LIMIT early and not execute partition scans at all if enough rows are produced.\n\nKind regards,\nMichal\n\n\n\n\n\n\n\n> On 7 Oct 2023, at 12:01, Michał Kłeczek <[email protected]> wrote:\n> \n> Hi All,\n> \n> Attached is a draft patch implementing LIMIT pushdown to Append and MergeAppend nodes.\n> \n> This patch eliminates the need to resort to subqueries to optimise UNIONs.\n> It also enables more aggressive partition pruning.\n> Not sure if it causes LIMIT pushdown to foreign partitions though.\n> \n> Applying this patch causes regressions in:\n> - postgres_fdw tests\n> - partitions tests\n> \n> This is due to subsequent partition pruning applied when LIMIT is pushed down - I guess that’s a (big) win.\n> \n> I would be happy to hear if the approach is sound.\n> \n> Thanks,\n> Michal<limit-pushdown.patch>", "msg_date": "Sat, 7 Oct 2023 23:01:34 +0200", "msg_from": "=?utf-8?Q?Micha=C5=82_K=C5=82eczek?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Draft LIMIT pushdown to Append and MergeAppend patch" }, { "msg_contents": "On Sun, Oct 8, 2023 at 5:04 AM Michał Kłeczek <[email protected]> wrote:\n\n> Hi All,\n>\n> Attached is a second version of the patch.\n>\n> The goal is to:\n> 1. Apply LIMIT as early as possible - especially to apply LIMIT in\n> partition scans\n>\n\nFor the patches for performance improvement, it is better to provide\nan example to show how much benefits we can get. As for this case,\nI'm doubtful it can work as an improvement.\n\n2. Enable LIMIT pushdown for FDW partitions.\n>\n\nThe same as above, some testing is helpful.\n\n-- \nBest Regards\nAndy Fan\n\nOn Sun, Oct 8, 2023 at 5:04 AM Michał Kłeczek <[email protected]> wrote:Hi All,\n\nAttached is a second version of the patch.\n\nThe goal is to:\n1. Apply LIMIT as early as possible - especially to apply LIMIT in partition scansFor the patches for performance improvement,  it is better to providean example to show how much benefits we can get.  As for this case,I'm doubtful it can work as an improvement. \n2. Enable LIMIT pushdown for FDW partitions.The same as above,  some testing is helpful.  -- Best RegardsAndy Fan", "msg_date": "Sun, 8 Oct 2023 09:33:50 +0800", "msg_from": "Andy Fan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Draft LIMIT pushdown to Append and MergeAppend patch" }, { "msg_contents": "Thanks for the feedback.\n\n> On 8 Oct 2023, at 03:33, Andy Fan <[email protected]> wrote:\n> \n> \n> \n> On Sun, Oct 8, 2023 at 5:04 AM Michał Kłeczek <[email protected] <mailto:[email protected]>> wrote:\n>> Hi All,\n>> \n>> Attached is a second version of the patch.\n>> \n>> The goal is to:\n>> 1. Apply LIMIT as early as possible - especially to apply LIMIT in partition scans\n> \n> For the patches for performance improvement, it is better to provide\n> an example to show how much benefits we can get. As for this case,\n> I'm doubtful it can work as an improvement. \n> \n>> 2. Enable LIMIT pushdown for FDW partitions.\n> \n> The same as above, some testing is helpful. \n\nThe idea came up from this e-mail thread from 2019:\nhttps://www.postgresql.org/message-id/CAFT%2BaqL1Tt0qfYqjHH%2BshwPoW8qdFjpJ8vBR5ABoXJDUcHyN1w%40mail.gmail.com\nFDW does not push down LIMIT & ORDER BY with sharding (partitions)\npostgresql.org\n\n\nWhile obviously permofmance testing is needed to confirm any real improvements\nI now (after your feedback) have second thoughts if it is worth pursuing at all.\n\nCould you elaborate a little why you think it won’t work as an improvement?\nIs it because in practice LIMIT _is_ pushed down already during execution?\nFrom what I understand postgres_fdw does indeed fetch on demand.\nOTOH pushing down LIMIT is considered an improvement (as witnessed in the postgres_fdw code itself after d50d172e51)\n\nCare to provide some more information?\n\nThanks,\n\n--\nMichal", "msg_date": "Sun, 8 Oct 2023 07:27:06 +0200", "msg_from": "=?utf-8?Q?Micha=C5=82_K=C5=82eczek?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Draft LIMIT pushdown to Append and MergeAppend patch" }, { "msg_contents": "On Sun, 8 Oct 2023 at 18:32, Michał Kłeczek <[email protected]> wrote:\n> On 8 Oct 2023, at 03:33, Andy Fan <[email protected]> wrote:\n>> For the patches for performance improvement, it is better to provide\n>> an example to show how much benefits we can get. As for this case,\n>> I'm doubtful it can work as an improvement.\n\n> Could you elaborate a little why you think it won’t work as an improvement?\n> Is it because in practice LIMIT _is_ pushed down already during execution?\n> From what I understand postgres_fdw does indeed fetch on demand.\n> OTOH pushing down LIMIT is considered an improvement (as witnessed in the postgres_fdw code itself after d50d172e51)\n\nIn my opinion, analysis of where we can push LIMIT node deeper into\nthe plan is an interesting area for research and work.\n\nThe Append / MergeAppend case is certainly one place where pushing\nLIMIT nodes down could be beneficial. Of course, if there was some\nfilter or join or aggregation/distinct, etc that occurred after the\nAppend/MergeAppend then you'd not be able to do this as the pushed\nlimit might be met before we've got enough rows at the top level of\nthe plan and that could result in fewer than rows being output than\nwhat was requested in the query (aka wrong results). Andy was working\naround this area recently (see [1] and corresponding thread). He had\nto add a bunch of code that checked for operations that might mean\nwe'd need to read more than the tuple_fraction rows from the Append\nnode. If we had nice way to know when building base rel paths if\nthere were or were not upper-level operations that affect if LIMIT\npushing can be done, then that might make such a patch cleaner. Andy\nin one of his proposed patches [2] added a field to PlannerInfo to\nmark this. That field wasn't well named or documented, so anything\nyou did would have to be an improvement on that.\n\nLooking at your patch, I see you've solved this by delaying the\npushing down until the upper planner and just checking if the lower\nplanner (join search) produced an Append or MergeAppend path. I've not\nreally developed an opinion yet what's the best method. I feel\ncreating the correct paths up-front is likely more flexible and more\ntrue to the path costing code.\n\nIt might also be worth taking a step backwards and seeing if there are\nany other cases where we could push down LIMITs and try to see if\nthere's something more generic that could be built to do this in a way\nthat's beneficial to more cases. I can't think of any off the top of\nmy head, but I've not thought very hard about it.\n\nFWIW, it's trivial to mock up the possible benefits of pushing LIMIT\nnodes down with something like the following:\n\ncreate table rp (a int) partition by range (a);\ncreate table rp1 partition of rp for values from (0) to (1000000);\ncreate table rp2 partition of rp for values from (1000000) to (2000000);\ninsert into rp select x from generate_series(0, 1999999)x;\n\n-- limit not pushed:\nexplain analyze select * from rp order by a limit 10;\nExecution Time: 148.041 ms\n\n-- limit pushed (mocked):\nexplain analyze (select * from rp1 order by a limit 10) union all\n(select * from rp2 order by a limit 10) limit 10;\nExecution Time: 49.263 ms\n\nabout 3x faster for this case.\n\nHowever, it may also be worth you reading over [3] and the ultimate\nreason I changed my mind on that being a good idea. Pushing LIMITs\nbelow an Append seems quite incomplete when we don't yet push sorts\nbelow Appends, which is what that patch did. I just was not\ncomfortable proceeding with [3] as nodeSort.c holds onto the tuplesort\nuntil executor shutdown. That'll be done for rescan reasons, but it\ndoes mean if you pushed Sort below Append that we could have a very\nlarge number of sort nodes holding onto work_mem all at once. I find\nthat a bit scary, especially so given the excessive partitioning cases\nI've seen and worked on recently. I did consider if we maybe could\nadjust nodeSort.c to do tuplesort_end() after the final row. We'd need\nto only do that if we could be somehow certain there were going to be\nno rescans. I don't have a plan on how that would be detected.\n\nAnyway, I don't think anything above is that useful to push you\nforward with that. I just didn't want you running off thinking we\ndon't want to see improvements in this area. I certainly do.\n\nDavid\n\n[1] https://postgr.es/m/CAApHDvoMGyc+1eb8g5rEMUUMeGWhe2c_f8yvJjUO1kUHZj0h7w@mail.gmail.com\n[2] https://postgr.es/m/CAKU4AWqaTNPrYcb_cMEDDYWZVGfFxuUjr75F5LBZwnUQ0JvVPw@mail.gmail.com\n[3] https://postgr.es/m/CAApHDvojKdBR3MR59JXmaCYbyHB6Q_5qPRU+dy93En8wm+XiDA@mail.gmail.com\n\n\n", "msg_date": "Mon, 9 Oct 2023 13:52:40 +1300", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Draft LIMIT pushdown to Append and MergeAppend patch" }, { "msg_contents": "On Mon, Oct 9, 2023 at 8:52 AM David Rowley <[email protected]> wrote:\n\n> On Sun, 8 Oct 2023 at 18:32, Michał Kłeczek <[email protected]> wrote:\n> > On 8 Oct 2023, at 03:33, Andy Fan <[email protected]> wrote:\n> >> For the patches for performance improvement, it is better to provide\n> >> an example to show how much benefits we can get. As for this case,\n> >> I'm doubtful it can work as an improvement.\n>\n> > Could you elaborate a little why you think it won’t work as an\n> improvement?\n> > Is it because in practice LIMIT _is_ pushed down already during\n> execution?\n> > From what I understand postgres_fdw does indeed fetch on demand.\n> > OTOH pushing down LIMIT is considered an improvement (as witnessed in\n> the postgres_fdw code itself after d50d172e51)\n>\n> In my opinion, analysis of where we can push LIMIT node deeper into\n> the plan is an interesting area for research and work.\n>\n\nWell, really impressive on your feedback, always professional and\nPATIENT, just like what you helped me in pretty many areas for the\nlast N years.\n\nYes, I think \"analysis of where we can ..\" is the key point. SQL is\nan complex area because of its ever-changing, so providing an\nexample will be pretty helpful for communication. However\n\"doubtful\" might not be a good word:(:(\n\n\n>\n> The Append / MergeAppend case is certainly one place where pushing\n> LIMIT nodes down could be beneficial. Of course, if there was some\n> filter or join or aggregation/distinct, etc that occurred after the\n> Append/MergeAppend then you'd not be able to do this as the pushed\n> limit might be met before we've got enough rows at the top level of\n> the plan and that could result in fewer than rows being output than\n> what was requested in the query (aka wrong results).\n\n\nI'm not totally agree with this, my main idea came from Tom's reply\nat [1]. The best situation should be we know we should \"plan for\"\ntop-N rows, but we don't need to really add the execution overhead.\nand in my current knowledge, in the pretty number of cases, we have\nachieved this already. If an area which is missed and can be shown\nwithin an example, I would be pretty happy to change my mind.\n\nAndy was working\n> around this area recently (see [1] and corresponding thread). He had\n> to add a bunch of code that checked for operations that might mean\n> we'd need to read more than the tuple_fraction rows from the Append\n> node. If we had nice way to know when building base rel paths if\n> there were or were not upper-level operations that affect if LIMIT\n> pushing can be done, then that might make such a patch cleaner. Andy\n> in one of his proposed patches [2] added a field to PlannerInfo to\n> mark this. That field wasn't well named or documented, so anything\n> you did would have to be an improvement on that.\n>\n> Looking at your patch, I see you've solved this by delaying the\n> pushing down until the upper planner and just checking if the lower\n> planner (join search) produced an Append or MergeAppend path. I've not\n> really developed an opinion yet what's the best method. I feel\n> creating the correct paths up-front is likely more flexible and more\n> true to the path costing code.\n\n\n> It might also be worth taking a step backwards and seeing if there are\n> any other cases where we could push down LIMITs and try to see if\n> there's something more generic that could be built to do this in a way\n> that's beneficial to more cases. I can't think of any off the top of\n> my head, but I've not thought very hard about it.\n>\n> FWIW, it's trivial to mock up the possible benefits of pushing LIMIT\n> nodes down with something like the following:\n>\n> create table rp (a int) partition by range (a);\n> create table rp1 partition of rp for values from (0) to (1000000);\n> create table rp2 partition of rp for values from (1000000) to (2000000);\n> insert into rp select x from generate_series(0, 1999999)x;\n>\n> -- limit not pushed:\n> explain analyze select * from rp order by a limit 10;\n> Execution Time: 148.041 ms\n>\n> -- limit pushed (mocked):\n> explain analyze (select * from rp1 order by a limit 10) union all\n> (select * from rp2 order by a limit 10) limit 10;\n> Execution Time: 49.263 ms\n\n\n>\nabout 3x faster for this case.\n>\n> However, it may also be worth you reading over [3] and the ultimate\n> reason I changed my mind on that being a good idea. Pushing LIMITs\n> below an Append seems quite incomplete when we don't yet push sorts\n> below Appends, which is what that patch did. I just was not\n\ncomfortable proceeding with [3] as nodeSort.c holds onto the tuplesort\n> until executor shutdown. That'll be done for rescan reasons, but it\n> does mean if you pushed Sort below Append that we could have a very\n> large number of sort nodes holding onto work_mem all at once. I find\n> that a bit scary, especially so given the excessive partitioning cases\n> I've seen and worked on recently. I did consider if we maybe could\n> adjust nodeSort.c to do tuplesort_end() after the final row. We'd need\n> to only do that if we could be somehow certain there were going to be\n> no rescans. I don't have a plan on how that would be detected.\n>\n\nThat patch looks interesting and the example there should be not\nuncommon in the real user case. I'd see if I can do anything useful.\n\n[1] https://www.postgresql.org/message-id/[email protected]\n\n\n-- \nBest Regards\nAndy Fan\n\nOn Mon, Oct 9, 2023 at 8:52 AM David Rowley <[email protected]> wrote:On Sun, 8 Oct 2023 at 18:32, Michał Kłeczek <[email protected]> wrote:\n> On 8 Oct 2023, at 03:33, Andy Fan <[email protected]> wrote:\n>> For the patches for performance improvement,  it is better to provide\n>> an example to show how much benefits we can get.  As for this case,\n>> I'm doubtful it can work as an improvement.\n\n> Could you elaborate a little why you think it won’t work as an improvement?\n> Is it because in practice LIMIT _is_ pushed down already during execution?\n> From what I understand postgres_fdw does indeed fetch on demand.\n> OTOH pushing down LIMIT is considered an improvement (as witnessed in the postgres_fdw code itself after d50d172e51)\n\nIn my opinion, analysis of where we can push LIMIT node deeper into\nthe plan is an interesting area for research and work.Well,  really impressive on your feedback, always professional and PATIENT, just like what you helped me in pretty many areas for thelast N years. Yes, I think \"analysis of where we can ..\"  is the key point.   SQL is an complex area because of its ever-changing,  so providing an example will be pretty helpful for communication.  However \"doubtful\" might not be a good word:(:(   \n\nThe Append / MergeAppend case is certainly one place where pushing\nLIMIT nodes down could be beneficial. Of course, if there was some\nfilter or join or aggregation/distinct, etc that occurred after the\nAppend/MergeAppend then you'd not be able to do this as the pushed\nlimit might be met before we've got enough rows at the top level of\nthe plan and that could result in fewer than rows being output than\nwhat was requested in the query (aka wrong results).  I'm not totally agree with this,  my main idea came from Tom's reply at [1].  The best situation should be we know we should \"plan for\"top-N rows,  but we don't need to really add the execution overhead. and in my current knowledge, in the pretty number of cases, we haveachieved this already.  If an area which is missed and can be shown within an example,  I would be pretty happy to change my mind. Andy was working\naround this area recently (see [1] and corresponding thread).  He had\nto add a bunch of code that checked for operations that might mean\nwe'd need to read more than the tuple_fraction rows from the Append\nnode.  If we had nice way to know when building base rel paths if\nthere were or were not upper-level operations that affect if LIMIT\npushing can be done, then that might make such a patch cleaner.  Andy\nin one of his proposed patches [2] added a field to PlannerInfo to\nmark this.  That field wasn't well named or documented, so anything\nyou did would have to be an improvement on that.\n\nLooking at your patch, I see you've solved this by delaying the\npushing down until the upper planner and just checking if the lower\nplanner (join search) produced an Append or MergeAppend path. I've not\nreally developed an opinion yet what's the best method. I feel\ncreating the correct paths up-front is likely more flexible and more\ntrue to the path costing code.\n\nIt might also be worth taking a step backwards and seeing if there are\nany other cases where we could push down LIMITs and try to see if\nthere's something more generic that could be built to do this in a way\nthat's beneficial to more cases. I can't think of any off the top of\nmy head, but I've not thought very hard about it.\n\nFWIW, it's trivial to mock up the possible benefits of pushing LIMIT\nnodes down with something like the following:\n\ncreate table rp (a int) partition by range (a);\ncreate table rp1 partition of rp for values from (0) to (1000000);\ncreate table rp2 partition of rp for values from (1000000) to (2000000);\ninsert into rp select x from generate_series(0, 1999999)x;\n\n-- limit not pushed:\nexplain analyze select * from rp order by a limit 10;\nExecution Time: 148.041 ms\n\n-- limit pushed (mocked):\nexplain analyze (select * from rp1 order by a limit 10) union all\n(select * from rp2 order by a limit 10) limit 10;\nExecution Time: 49.263 ms  \nabout 3x faster for this case.\n\nHowever, it may also be worth you reading over [3] and the ultimate\nreason I changed my mind on that being a good idea. Pushing LIMITs\nbelow an Append seems quite incomplete when we don't yet push sorts\nbelow Appends, which is what that patch did.  I just was not\ncomfortable proceeding with [3] as nodeSort.c holds onto the tuplesort\nuntil executor shutdown.  That'll be done for rescan reasons, but it\ndoes mean if you pushed Sort below Append that we could have a very\nlarge number of sort nodes holding onto work_mem all at once.   I find\nthat a bit scary, especially so given the excessive partitioning cases\nI've seen and worked on recently.  I did consider if we maybe could\nadjust nodeSort.c to do tuplesort_end() after the final row. We'd need\nto only do that if we could be somehow certain there were going to be\nno rescans.  I don't have a plan on how that would be detected.That patch looks interesting and the example there should be not uncommon in the real user case. I'd see if I can do anything useful. [1] https://www.postgresql.org/message-id/[email protected]    -- Best RegardsAndy Fan", "msg_date": "Mon, 9 Oct 2023 09:49:59 +0800", "msg_from": "Andy Fan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Draft LIMIT pushdown to Append and MergeAppend patch" }, { "msg_contents": "On Mon, Oct 9, 2023 at 6:25 AM David Rowley <[email protected]> wrote:\n>\n> However, it may also be worth you reading over [3] and the ultimate\n> reason I changed my mind on that being a good idea. Pushing LIMITs\n> below an Append seems quite incomplete when we don't yet push sorts\n> below Appends, which is what that patch did.\n\nWhen the paths are already ordered according to ORDER BY\nspecification, pushing down LIMIT will give them extra benefit of\nbeing cost effective. Do you think we can proceed along those lines?\nLater when we implement Sorting push down we will adjust the LIMIT\npushdown code for the same.\n\n> I just was not\n> comfortable proceeding with [3] as nodeSort.c holds onto the tuplesort\n> until executor shutdown. That'll be done for rescan reasons, but it\n> does mean if you pushed Sort below Append that we could have a very\n> large number of sort nodes holding onto work_mem all at once. I find\n> that a bit scary, especially so given the excessive partitioning cases\n> I've seen and worked on recently. I did consider if we maybe could\n> adjust nodeSort.c to do tuplesort_end() after the final row. We'd need\n> to only do that if we could be somehow certain there were going to be\n> no rescans. I don't have a plan on how that would be detected.\n\nWe have that problem with partitionwise join. Have you seen it in the\nfield? I have not seen such reports but that could be because not many\nknow the partitionwise join needs to be explicitly turned ON. The\nsolution we will develop here will solve problem with partitionwise\njoin as well. It's hard to solve this problem. If there's a real case\nwhere LIMIT pushdown helps without fixing Sort pushdown case, it might\nhelp proceeding with the same.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Mon, 9 Oct 2023 16:05:34 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Draft LIMIT pushdown to Append and MergeAppend patch" }, { "msg_contents": "On Mon, 9 Oct 2023 at 23:35, Ashutosh Bapat\n<[email protected]> wrote:\n>\n> On Mon, Oct 9, 2023 at 6:25 AM David Rowley <[email protected]> wrote:\n> >\n> > However, it may also be worth you reading over [3] and the ultimate\n> > reason I changed my mind on that being a good idea. Pushing LIMITs\n> > below an Append seems quite incomplete when we don't yet push sorts\n> > below Appends, which is what that patch did.\n>\n> When the paths are already ordered according to ORDER BY\n> specification, pushing down LIMIT will give them extra benefit of\n> being cost effective. Do you think we can proceed along those lines?\n> Later when we implement Sorting push down we will adjust the LIMIT\n> pushdown code for the same.\n\nWhat are there benefits if the paths are already ordered? e.g if it's\nan index scan then we'll only pull the tuples we need from it.\n\nI think if we did manage to get something working to push Sorts below\nAppends then ExecSetTupleBound() would take care of most of the limit\nproblem (with the exception of FDWs). If you look at\nExecSetTupleBound() you'll see that it does recursively descend into\nAppendStates and set the bound on the append children. That's not\ngoing to work when the Sort is above the Append. So isn't it mostly\nthe work_mem * n_append_children concern that is holding us up here?\n\n> We have that problem with partitionwise join. Have you seen it in the\n> field? I have not seen such reports but that could be because not many\n> know the partitionwise join needs to be explicitly turned ON. The\n> solution we will develop here will solve problem with partitionwise\n> join as well. It's hard to solve this problem. If there's a real case\n> where LIMIT pushdown helps without fixing Sort pushdown case, it might\n> help proceeding with the same.\n\nI've not heard anything about that. What I saw were just complaints\nabout the planner being too slow to produce a plan which accessed well\nin excess of the number of partitions that we recommend in the\npartitioning best practices documents.\n\nDavid\n\n\n", "msg_date": "Tue, 10 Oct 2023 00:03:35 +1300", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Draft LIMIT pushdown to Append and MergeAppend patch" }, { "msg_contents": "On Mon, Oct 9, 2023 at 4:33 PM David Rowley <[email protected]> wrote:\n>\n> On Mon, 9 Oct 2023 at 23:35, Ashutosh Bapat\n> <[email protected]> wrote:\n> >\n> > On Mon, Oct 9, 2023 at 6:25 AM David Rowley <[email protected]> wrote:\n> > >\n> > > However, it may also be worth you reading over [3] and the ultimate\n> > > reason I changed my mind on that being a good idea. Pushing LIMITs\n> > > below an Append seems quite incomplete when we don't yet push sorts\n> > > below Appends, which is what that patch did.\n> >\n> > When the paths are already ordered according to ORDER BY\n> > specification, pushing down LIMIT will give them extra benefit of\n> > being cost effective. Do you think we can proceed along those lines?\n> > Later when we implement Sorting push down we will adjust the LIMIT\n> > pushdown code for the same.\n>\n> What are there benefits if the paths are already ordered? e.g if it's\n> an index scan then we'll only pull the tuples we need from it.\n>\n\npostgres_fdw creates a path with pathkeys based on the clauses on that\nrelation. There is no index involved there. Pushing down LIMIT will\nlimit the number of rows fetched from the foreign server and the\nforeign server may have opportunity to optimize query based on the\nLIMIT.\n\n> > We have that problem with partitionwise join. Have you seen it in the> > field? I have not seen such reports but that could be because not many\n> > know the partitionwise join needs to be explicitly turned ON. The\n> > solution we will develop here will solve problem with partitionwise\n> > join as well. It's hard to solve this problem. If there's a real case\n> > where LIMIT pushdown helps without fixing Sort pushdown case, it might\n> > help proceeding with the same.\n>\n> I've not heard anything about that. What I saw were just complaints\n> about the planner being too slow to produce a plan which accessed well\n> in excess of the number of partitions that we recommend in the\n> partitioning best practices documents.\n\nI am not able to relate the planner slowness and work_mem * number of\npartitions problems. I agree that both of them are problems but\ndifferent one.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Mon, 9 Oct 2023 18:34:39 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Draft LIMIT pushdown to Append and MergeAppend patch" }, { "msg_contents": "> On 9 Oct 2023, at 15:04, Ashutosh Bapat <[email protected]> wrote:\n> \n> On Mon, Oct 9, 2023 at 4:33 PM David Rowley <[email protected] <mailto:[email protected]>> wrote:\n>> \n>> What are there benefits if the paths are already ordered? e.g if it's\n>> an index scan then we'll only pull the tuples we need from it.\n>> \n> \n> postgres_fdw creates a path with pathkeys based on the clauses on that\n> relation. There is no index involved there. Pushing down LIMIT will\n> limit the number of rows fetched from the foreign server and the\n> foreign server may have opportunity to optimize query based on the\n> LIMIT.\n\nI would add another benefit:\nopportunity to fetch everything early, buffer it and release the session.\n\nWithout limit information fdw has to keep cursors open.\n\n\n—\nMichal\nOn 9 Oct 2023, at 15:04, Ashutosh Bapat <[email protected]> wrote:On Mon, Oct 9, 2023 at 4:33 PM David Rowley <[email protected]> wrote:What are there benefits if the paths are already ordered?  e.g if it'san index scan then we'll only pull the tuples we need from it.postgres_fdw creates a path with pathkeys based on the clauses on thatrelation. There is no index involved there. Pushing down LIMIT willlimit the number of rows fetched from the foreign server and theforeign server may have opportunity to optimize query based on theLIMIT.I would add another benefit:opportunity to fetch everything early, buffer it and release the session.Without limit information fdw has to keep cursors open.—Michal", "msg_date": "Mon, 9 Oct 2023 15:09:01 +0200", "msg_from": "=?utf-8?Q?Micha=C5=82_K=C5=82eczek?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Draft LIMIT pushdown to Append and MergeAppend patch" } ]
[ { "msg_contents": "I just noticed that e8c334c47a fixes typos in e0b1ee17dc. I think there\nis an omission in _bt_readpage.\n\n--- a/src/backend/access/nbtree/nbtsearch.c\n+++ b/src/backend/access/nbtree/nbtsearch.c\n@@ -1784,7 +1784,7 @@ _bt_readpage(IndexScanDesc scan, ScanDirection dir,\nOffsetNumber offnum)\n /*\n * If the result of prechecking required keys was true, then in\n * assert-enabled builds we also recheck that _bt_checkkeys()\n- * result is is the same.\n+ * result is the same.\n */\n Assert(!requiredMatchedByPrecheck ||\n passes_quals == _bt_checkkeys(scan, itup, indnatts, dir,\n\nThanks\nRichard\n\nI just noticed that e8c334c47a fixes typos in e0b1ee17dc.  I think thereis an omission in _bt_readpage.--- a/src/backend/access/nbtree/nbtsearch.c+++ b/src/backend/access/nbtree/nbtsearch.c@@ -1784,7 +1784,7 @@ _bt_readpage(IndexScanDesc scan, ScanDirection dir, OffsetNumber offnum)            /*             * If the result of prechecking required keys was true, then in             * assert-enabled builds we also recheck that _bt_checkkeys()-            * result is is the same.+            * result is the same.             */            Assert(!requiredMatchedByPrecheck ||                   passes_quals == _bt_checkkeys(scan, itup, indnatts, dir,ThanksRichard", "msg_date": "Sat, 7 Oct 2023 19:02:09 +0800", "msg_from": "Richard Guo <[email protected]>", "msg_from_op": true, "msg_subject": "Fix a typo in _bt_readpage" }, { "msg_contents": "On Sat, Oct 7, 2023 at 2:02 PM Richard Guo <[email protected]> wrote:\n>\n> I just noticed that e8c334c47a fixes typos in e0b1ee17dc. I think there\n> is an omission in _bt_readpage.\n>\n> --- a/src/backend/access/nbtree/nbtsearch.c\n> +++ b/src/backend/access/nbtree/nbtsearch.c\n> @@ -1784,7 +1784,7 @@ _bt_readpage(IndexScanDesc scan, ScanDirection dir, OffsetNumber offnum)\n> /*\n> * If the result of prechecking required keys was true, then in\n> * assert-enabled builds we also recheck that _bt_checkkeys()\n> - * result is is the same.\n> + * result is the same.\n> */\n> Assert(!requiredMatchedByPrecheck ||\n> passes_quals == _bt_checkkeys(scan, itup, indnatts, dir,\n\nFixed, thanks!\n\n------\nRegards,\nAlexander Korotkov\n\n\n", "msg_date": "Sat, 7 Oct 2023 20:38:54 +0300", "msg_from": "Alexander Korotkov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fix a typo in _bt_readpage" } ]
[ { "msg_contents": "Implement TODO item:\n\nFix log_line_prefix to display the transaction id (%x) for statements \nnot in a transaction block\n Currently it displays zero.\n\n\nCheck that the XID has been assigned at the location where the statement \nlog is now printed. If not, no statement log is output.\nAnd then before finish_xact_command. If the statement has not been \noutput to the log. Here the log can get XID.\n\nDML that does not manipulate any data still does not get XID.\n\n[32718][788] LOG: statement: insert into t1 values(1,0,'');\n[32718][789] LOG: statement: delete from t1;\n[32718][0] LOG: statement: delete from t1;\n\n\n--\nQuan Zongliang", "msg_date": "Sun, 8 Oct 2023 11:50:43 +0800", "msg_from": "Quan Zongliang <[email protected]>", "msg_from_op": true, "msg_subject": "Fix log_line_prefix to display the transaction id (%x) for statements\n not in a transaction block" }, { "msg_contents": "Hi.\n+ /* Log immediately if dictated by log_statement and XID assigned. */\n+ if (GetTopTransactionIdIfAny() != InvalidTransactionId &&\n+ check_log_statement(parsetree_list))\n\nchange to\n\n+ /* Log immediately if dictated by log_statement and XID assigned. */\n+ if ( check_log_statement(parsetree_list) &&\n+ GetTopTransactionIdIfAny() != InvalidTransactionId)\n\nI think it would reduce GetTopTransactionIdIfAny() calls.\n\nI guess people will have different opinion that\nsimple query like:\n`explain(analyze) select g from generate_series(1,1e6) g, pg_sleep(10);`\nThe log output will only be generated after 10 seconds.\nof course, there is pg_stat_activity and other ways to view the running query.\n\nplaying around with the patch.\nThe patch is better than the current HEAD, in some cases.\nboth under condition:\nalter system set log_line_prefix = '%m [%p] %q%u@%d/%a XID:%x ';\nset log_statement = 'all';\nselect pg_reload_conf();\n\nWith Patch:\nsrc3=# create table x1(a int);\n2024-01-11 17:11:51.150 CST [115782] jian@src3/psql XID:814 LOG:\nstatement: create table x1(a int);\nCREATE TABLE\nsrc3=#\nsrc3=# insert into x1 select 100;\n2024-01-11 17:12:06.953 CST [115782] jian@src3/psql XID:815 LOG:\nstatement: insert into x1 select 100;\nINSERT 0 1\nsrc3=# begin;\n2024-01-11 17:12:17.543 CST [115782] jian@src3/psql XID:0 LOG:\nstatement: begin;\nBEGIN\nsrc3=*# insert into x1 select 100;\n2024-01-11 17:12:24.779 CST [115782] jian@src3/psql XID:816 LOG:\nstatement: insert into x1 select 100;\nINSERT 0 1\nsrc3=*# commit;\n2024-01-11 17:12:29.851 CST [115782] jian@src3/psql XID:816 LOG:\nstatement: commit;\nCOMMIT\nsrc3=# select 11;\n2024-01-11 17:14:22.909 CST [115782] jian@src3/psql XID:0 LOG:\nstatement: select 11;\n ?column?\n----------\n 11\n(1 row)\nsrc3=# drop table x1;\n2024-01-11 17:15:01.409 CST [115782] jian@src3/psql XID:817 LOG:\nstatement: drop table x1;\nDROP TABLE\nsrc3=# select pg_current_xact_id();\n2024-01-11 17:21:55.602 CST [115782] jian@src3/psql XID:818 LOG:\nstatement: select pg_current_xact_id();\n pg_current_xact_id\n--------------------\n 818\n(1 row)\n---------------------------------------------------------------------------------\nwithout patch:\n\nsrc4=# insert into x1 select 100;\n2024-01-11 17:07:13.556 CST [115240] jian@src4/psql XID:0 LOG:\nstatement: insert into x1 select 100;\nINSERT 0 1\nsrc4=# begin;\n2024-01-11 17:07:31.345 CST [115240] jian@src4/psql XID:0 LOG:\nstatement: begin;\nBEGIN\nsrc4=*# insert into x1 select 100;\n2024-01-11 17:07:35.475 CST [115240] jian@src4/psql XID:0 LOG:\nstatement: insert into x1 select 100;\nINSERT 0 1\nsrc4=*# commit;\n2024-01-11 17:07:39.095 CST [115240] jian@src4/psql XID:863 LOG:\nstatement: commit;\nCOMMIT\nsrc4=# show logging_collector;\n2024-01-11 17:09:59.307 CST [115240] jian@src4/psql XID:0 LOG:\nstatement: show logging_collector;\n logging_collector\n-------------------\n off\n(1 row)\nsrc4=# select 11;\n2024-01-11 17:14:30.001 CST [115240] jian@src4/psql XID:0 LOG:\nstatement: select 11;\n ?column?\n----------\n 11\n(1 row)\nsrc4=# drop table x1;\n2024-01-11 17:15:08.010 CST [115240] jian@src4/psql XID:0 LOG:\nstatement: drop table x1;\nDROP TABLE\nsrc4=# select pg_current_xact_id();\n2024-01-11 17:21:22.085 CST [115240] jian@src4/psql XID:0 LOG:\nstatement: select pg_current_xact_id();\n pg_current_xact_id\n--------------------\n 867\n(1 row)\n\n\n", "msg_date": "Thu, 11 Jan 2024 21:18:15 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fix log_line_prefix to display the transaction id (%x) for\n statements not in a transaction block" }, { "msg_contents": "Hi\nanother big difference compare to HEAD:\n\nselect name,setting\n from pg_settings\n where name in\n ('auto_explain.log_timing','auto_explain.log_analyze',\n'auto_explain.log_min_duration','log_statement','log_line_prefix')\n ;\n name | setting\n-------------------------------+----------------------------\n auto_explain.log_analyze | on\n auto_explain.log_min_duration | 0\n auto_explain.log_timing | on\n log_line_prefix | %m [%p] %q%u@%d/%a XID:%x\n log_statement | all\n\nsimple query: explain(analyze, costs off) select 1 from pg_sleep(10);\n\nwith patch:\nsrc3=# explain(analyze, costs off) select 1 from pg_sleep(10);\n2024-01-12 08:43:14.750 CST [5739] jian@src3/psql XID:0 LOG:\nduration: 10010.167 ms plan:\n Query Text: explain(analyze, costs off) select 1 from pg_sleep(10);\n Function Scan on pg_sleep (cost=0.00..0.01 rows=1 width=4)\n(actual time=10010.155..10010.159 rows=1 loops=1)\n2024-01-12 08:43:14.750 CST [5739] jian@src3/psql XID:0 LOG:\nstatement: explain(analyze, costs off) select 1 from pg_sleep(10);\n QUERY PLAN\n-----------------------------------------------------------------------------\n Function Scan on pg_sleep (actual time=10010.155..10010.159 rows=1 loops=1)\n Planning Time: 0.115 ms\n Execution Time: 10010.227 ms\n(3 rows)\n\nwithout patch:\n\nsrc4=#\nsrc4=# explain(analyze, costs off) select 1 from pg_sleep(10);\n2024-01-12 08:43:13.462 CST [5869] jian@src4/psql XID:0 LOG:\nstatement: explain(analyze, costs off) select 1 from pg_sleep(10);\n2024-01-12 08:43:23.473 CST [5869] jian@src4/psql XID:0 LOG:\nduration: 10010.133 ms plan:\n Query Text: explain(analyze, costs off) select 1 from pg_sleep(10);\n Function Scan on pg_sleep (cost=0.00..0.01 rows=1 width=4)\n(actual time=10010.126..10010.128 rows=1 loops=1)\n QUERY PLAN\n-----------------------------------------------------------------------------\n Function Scan on pg_sleep (actual time=10010.126..10010.128 rows=1 loops=1)\n Planning Time: 0.031 ms\n Execution Time: 10010.172 ms\n(3 rows)\n\n\n", "msg_date": "Fri, 12 Jan 2024 08:51:07 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fix log_line_prefix to display the transaction id (%x) for\n statements not in a transaction block" }, { "msg_contents": "A little tweak to the code.\nGetTopTransactionIdIfAny() != InvalidTransactionId\nchanged to\nTransactionIdIsValid(GetTopTransactionIdIfAny()\n\n\nOn 2024/1/12 08:51, jian he wrote:\n> Hi\n...\n> with patch:\n> src3=# explain(analyze, costs off) select 1 from pg_sleep(10);\n> 2024-01-12 08:43:14.750 CST [5739] jian@src3/psql XID:0 LOG:\n> duration: 10010.167 ms plan:\n> Query Text: explain(analyze, costs off) select 1 from pg_sleep(10);\n> Function Scan on pg_sleep (cost=0.00..0.01 rows=1 width=4)\n> (actual time=10010.155..10010.159 rows=1 loops=1)\n> 2024-01-12 08:43:14.750 CST [5739] jian@src3/psql XID:0 LOG:\n> statement: explain(analyze, costs off) select 1 from pg_sleep(10);\n> QUERY PLAN\n> -----------------------------------------------------------------------------\n> Function Scan on pg_sleep (actual time=10010.155..10010.159 rows=1 loops=1)\n> Planning Time: 0.115 ms\n> Execution Time: 10010.227 ms\n> (3 rows)\nThis problem does exist in a statement that takes a long time to run.\nXID is applied only for the first change tuple. If the user want to see \nit in a single statement log, they have to wait until the statement has \nfinished executing. And we don't know how long it will take until the \nstatement ends. It is not appropriate to output the log twice because of \nxid. Besides, without parsing log_line_prefix we don't know if the user \nwants to see xid.", "msg_date": "Fri, 2 Feb 2024 08:47:36 +0800", "msg_from": "Quan Zongliang <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Fix log_line_prefix to display the transaction id (%x) for\n statements not in a transaction block" }, { "msg_contents": "\n\n> On 12 Jan 2024, at 05:51, jian he <[email protected]> wrote:\n> \n> another big difference compare to HEAD:\n\nHi Jian,\n\nthanks for looking into this. Would you be willing to review the next version of the patch?\nAs long as you are looking into this, you might be interested in registering yourself in a CF entry as a reviewer. [0]\n\nThanks!\n\n\nBest regards, Andrey Borodin.\n\n[0] https://commitfest.postgresql.org/47/4606/\n\n", "msg_date": "Mon, 4 Mar 2024 11:11:50 +0500", "msg_from": "\"Andrey M. Borodin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fix log_line_prefix to display the transaction id (%x) for\n statements not in a transaction block" }, { "msg_contents": "On Mon, Mar 4, 2024 at 2:12 PM Andrey M. Borodin <[email protected]> wrote:\n>\n> > On 12 Jan 2024, at 05:51, jian he <[email protected]> wrote:\n> >\n> > another big difference compare to HEAD:\n>\n> Hi Jian,\n>\n> thanks for looking into this. Would you be willing to review the next version of the patch?\n\nI just applied postgres-v2.patch.\nPreviously, I just mentioned, there is a big difference between to HEAD.\nI didn't mention it explicitly at [1]\n\npostgres-v2.patch still has the problem.\nit's not related to XID, it's the misbehavior of the patch.\nWhen we load the 'auto_explain' module, we can log two things, the\nquery plan and statement.\nAfter applying the patch, the outputs of the query plan and the\nstatement will be mixed together, which\nobviously is far from the desired behavior.\nMaybe we can tolerate LOG, first output the query plan then statement.\n\n> As long as you are looking into this, you might be interested in registering yourself in a CF entry as a reviewer. [0]\nsure.\n\n[1] https://www.postgresql.org/message-id/CACJufxHg1oir8sd=xScMP3n+tYcbug=zusG5KiA2KzH5PmOeuQ@mail.gmail.com\n\n\n", "msg_date": "Mon, 4 Mar 2024 15:48:01 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fix log_line_prefix to display the transaction id (%x) for\n statements not in a transaction block" }, { "msg_contents": "\n\nOn 2024/3/4 15:48, jian he wrote:\n\n> Maybe we can tolerate LOG, first output the query plan then statement.\n> \nIt is more appropriate to let the extension solve its own problems.\nOf course, this change is not easy to implement.\nDue to the way XID is assigned, there seems to be no good solution at \nthe moment.\n\n\n\n", "msg_date": "Mon, 11 Mar 2024 09:25:19 +0800", "msg_from": "Quan Zongliang <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Fix log_line_prefix to display the transaction id (%x) for\n statements not in a transaction block" }, { "msg_contents": "On 2024/3/11 09:25, Quan Zongliang wrote:\n> \n> \n> On 2024/3/4 15:48, jian he wrote:\n> \n>> Maybe we can tolerate LOG, first output the query plan then statement.\n>>\n> It is more appropriate to let the extension solve its own problems.\n> Of course, this change is not easy to implement.\n> Due to the way XID is assigned, there seems to be no good solution at \n> the moment.\n> \n> \nAccording to the discussion with Jian He. Use the guc hook to check if \nthe xid needs to be output. If needed, the statement log can be delayed \nto be output.", "msg_date": "Tue, 16 Apr 2024 15:15:57 +0800", "msg_from": "Quan Zongliang <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Fix log_line_prefix to display the transaction id (%x) for\n statements not in a transaction block" }, { "msg_contents": "On Tue, Apr 16, 2024 at 3:16 AM Quan Zongliang <[email protected]> wrote:\n> According to the discussion with Jian He. Use the guc hook to check if\n> the xid needs to be output. If needed, the statement log can be delayed\n> to be output.\n\nI appreciate the work that both of you have put into this, but I think\nwe should reject this patch and remove the TODO item. We currently\nhave some facilities (like log_statement) that log the statement\nbefore parsing it or executing it, and others (like\nlog_min_duration_statement) that log it afterward. That is probably\nnot documented as clearly as it should be, but it's long-established\nbehavior.\n\nWhat this patch does is change the behavior of log_statement so that\nlog_statement sometimes logs the statement before it's executed, and\nsometimes after the statement. I think that's going to be confusing\nand unhelpful. In particular, right now you can assume that if you set\nlog_statement=all and there's a statement running, it's already been\nlogged. With this change, that would sometimes be true and sometimes\nfalse.\n\nFor example, suppose that at 9am sharp, I run an UPDATE command that\ntakes ten seconds to complete. Right now, the log_statement message\nwill appear at 9am. With this change, it will run at 9am if I do it\ninside a transaction block that has an XID already, and at 9:00:10am\nif I do it in a transaction block that does not yet have an XID, or if\nI do it outside of a transaction. I don't think the benefit of getting\nthe XID in the log message is nearly enough to justify such a strange\nbehavior.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 15 May 2024 12:58:34 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fix log_line_prefix to display the transaction id (%x) for\n statements not in a transaction block" }, { "msg_contents": "\n\nOn 2024/5/16 00:58, Robert Haas wrote:\n> On Tue, Apr 16, 2024 at 3:16 AM Quan Zongliang <[email protected]> wrote:\n>> According to the discussion with Jian He. Use the guc hook to check if\n>> the xid needs to be output. If needed, the statement log can be delayed\n>> to be output.\n> \n> I appreciate the work that both of you have put into this, but I think\n> we should reject this patch and remove the TODO item. We currently\n> have some facilities (like log_statement) that log the statement\n> before parsing it or executing it, and others (like\n> log_min_duration_statement) that log it afterward. That is probably\n> not documented as clearly as it should be, but it's long-established\n> behavior.\n> \n> What this patch does is change the behavior of log_statement so that\n> log_statement sometimes logs the statement before it's executed, and\n> sometimes after the statement. I think that's going to be confusing\n> and unhelpful. In particular, right now you can assume that if you set\n> log_statement=all and there's a statement running, it's already been\n> logged. With this change, that would sometimes be true and sometimes\n> false.\n> \n> For example, suppose that at 9am sharp, I run an UPDATE command that\n> takes ten seconds to complete. Right now, the log_statement message\n> will appear at 9am. With this change, it will run at 9am if I do it\n> inside a transaction block that has an XID already, and at 9:00:10am\n> if I do it in a transaction block that does not yet have an XID, or if\n> I do it outside of a transaction. I don't think the benefit of getting\n> the XID in the log message is nearly enough to justify such a strange\n> behavior.\n> \nI thought about writing statement log when xid assigned. But it's seemed \ntoo complicated.\nI'm inclined to keep it for a while. Until we find a good way or give \nup. It's a reasonable request, after all.\n\n\n\n", "msg_date": "Thu, 16 May 2024 18:01:21 +0800", "msg_from": "Quan Zongliang <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Fix log_line_prefix to display the transaction id (%x) for\n statements not in a transaction block" }, { "msg_contents": "On Thu, May 16, 2024 at 6:01 AM Quan Zongliang <[email protected]> wrote:\n> I thought about writing statement log when xid assigned. But it's seemed\n> too complicated.\n> I'm inclined to keep it for a while. Until we find a good way or give\n> up. It's a reasonable request, after all.\n\nI don't think it's reasonable at all. We can't log the XID before it's\nassigned, and we can't log the statement after the XID is assigned\nwithout completely changing how the parameter works.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 16 May 2024 08:37:43 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fix log_line_prefix to display the transaction id (%x) for\n statements not in a transaction block" }, { "msg_contents": "On Thu, May 16, 2024 at 08:37:43AM -0400, Robert Haas wrote:\n> On Thu, May 16, 2024 at 6:01 AM Quan Zongliang <[email protected]> wrote:\n> > I thought about writing statement log when xid assigned. But it's seemed\n> > too complicated.\n> > I'm inclined to keep it for a while. Until we find a good way or give\n> > up. It's a reasonable request, after all.\n> \n> I don't think it's reasonable at all. We can't log the XID before it's\n> assigned, and we can't log the statement after the XID is assigned\n> without completely changing how the parameter works.\n\nI have removed the TODO item.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Fri, 17 May 2024 17:38:34 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fix log_line_prefix to display the transaction id (%x) for\n statements not in a transaction block" } ]
[ { "msg_contents": "I came across a crash in add_paths_to_append_rel() with the query below.\n\ncreate table t (a int);\n\ncreate table inh (b int);\ncreate table inh_child() inherits(inh);\n\nexplain (costs off)\nselect * from t left join lateral (select t.a from inh) on true limit 1;\nserver closed the connection unexpectedly\n\nIt was introduced by commit a8a968a8 where we referenced\ncheapest_startup_path->param_info without first checking that we have a\nvalid cheapest_startup_path. We do have checked that the pathlist is\nnot empty, but that does not imply that the cheapest_startup_path is not\nNULL. Normally only unparameterized paths are considered candidates for\ncheapest_startup_path, because startup cost does not have too much value\nfor parameterized paths since they are on the inside of a nestloop. So\nif there are no unparameterized paths, the cheapest_startup_path will be\nNULL. That is the case in the repro query above. Because of the\nlateral reference within PHV, the child rels of 'inh' do not have\nunparameterized paths, so their cheapest_startup_path is NULL.\n\nI think we should explicitly check that cheapest_startup_path is not\nNULL here. Doing that seems to make the check of pathlist not being\nempty unnecessary. Besides, I think we do not need to check that\ncheapest_startup_path->param_info is NULL, since cheapest_startup_path\nmust be unparameterized path. Maybe we can use an Assert instead.\n\nAttached is my proposed fix.\n\nThanks\nRichard", "msg_date": "Mon, 9 Oct 2023 17:45:41 +0800", "msg_from": "Richard Guo <[email protected]>", "msg_from_op": true, "msg_subject": "Crash in add_paths_to_append_rel" }, { "msg_contents": "On Mon, 9 Oct 2023 at 22:49, Richard Guo <[email protected]> wrote:\n> I came across a crash in add_paths_to_append_rel() with the query below.\n\n> It was introduced by commit a8a968a8 where we referenced\n> cheapest_startup_path->param_info without first checking that we have a\n> valid cheapest_startup_path.\n\nThank you for the report.\n\nSince you've managed to attribute this to a specific commit, for the\nfuture, I think instead of opening a new thread, it would be more\nuseful to communicate the issue on the thread that's linked in the\ncommit message.\n\nI will look at this tomorrow.\n\nDavid\n\n\n", "msg_date": "Tue, 10 Oct 2023 00:35:38 +1300", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Crash in add_paths_to_append_rel" }, { "msg_contents": "On Mon, 9 Oct 2023 at 22:49, Richard Guo <[email protected]> wrote:\n> I came across a crash in add_paths_to_append_rel() with the query below.\n\n> It was introduced by commit a8a968a8 where we referenced\n> cheapest_startup_path->param_info without first checking that we have a\n> valid cheapest_startup_path.\n\nI pushed this with just minor adjustments to the comments you wrote. I\ndidn't feel the need to reiterate on the set_cheapest() comments.\n\nDavid\n\n\n", "msg_date": "Tue, 10 Oct 2023 16:52:05 +1300", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Crash in add_paths_to_append_rel" }, { "msg_contents": "On Tue, Oct 10, 2023 at 11:52 AM David Rowley <[email protected]> wrote:\n\n> On Mon, 9 Oct 2023 at 22:49, Richard Guo <[email protected]> wrote:\n> > I came across a crash in add_paths_to_append_rel() with the query below.\n>\n> > It was introduced by commit a8a968a8 where we referenced\n> > cheapest_startup_path->param_info without first checking that we have a\n> > valid cheapest_startup_path.\n>\n> I pushed this with just minor adjustments to the comments you wrote. I\n> didn't feel the need to reiterate on the set_cheapest() comments.\n>\n>\n Thanks Richard for the report and also Thanks you David for the push!\n\n-- \nBest Regards\nAndy Fan\n\nOn Tue, Oct 10, 2023 at 11:52 AM David Rowley <[email protected]> wrote:On Mon, 9 Oct 2023 at 22:49, Richard Guo <[email protected]> wrote:\n> I came across a crash in add_paths_to_append_rel() with the query below.\n\n> It was introduced by commit a8a968a8 where we referenced\n> cheapest_startup_path->param_info without first checking that we have a\n> valid cheapest_startup_path.\n\nI pushed this with just minor adjustments to the comments you wrote. I\ndidn't feel the need to reiterate on the set_cheapest() comments.  Thanks Richard for the report and also Thanks you David for the push!-- Best RegardsAndy Fan", "msg_date": "Tue, 10 Oct 2023 12:48:17 +0800", "msg_from": "Andy Fan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Crash in add_paths_to_append_rel" }, { "msg_contents": "On Mon, Oct 9, 2023 at 7:35 PM David Rowley <[email protected]> wrote:\n\n> Since you've managed to attribute this to a specific commit, for the\n> future, I think instead of opening a new thread, it would be more\n> useful to communicate the issue on the thread that's linked in the\n> commit message.\n\n\nAh, yes, I should have done it that way, sorry.\n\nThanks\nRichard\n\nOn Mon, Oct 9, 2023 at 7:35 PM David Rowley <[email protected]> wrote:\nSince you've managed to attribute this to a specific commit, for the\nfuture, I think instead of opening a new thread, it would be more\nuseful to communicate the issue on the thread that's linked in the\ncommit message.Ah, yes, I should have done it that way, sorry.ThanksRichard", "msg_date": "Tue, 10 Oct 2023 13:49:31 +0800", "msg_from": "Richard Guo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Crash in add_paths_to_append_rel" }, { "msg_contents": "On Tue, Oct 10, 2023 at 11:52 AM David Rowley <[email protected]> wrote:\n\n> On Mon, 9 Oct 2023 at 22:49, Richard Guo <[email protected]> wrote:\n> > I came across a crash in add_paths_to_append_rel() with the query below.\n>\n> > It was introduced by commit a8a968a8 where we referenced\n> > cheapest_startup_path->param_info without first checking that we have a\n> > valid cheapest_startup_path.\n>\n> I pushed this with just minor adjustments to the comments you wrote. I\n> didn't feel the need to reiterate on the set_cheapest() comments.\n\n\nThanks for pushing!\n\nThanks\nRichard\n\nOn Tue, Oct 10, 2023 at 11:52 AM David Rowley <[email protected]> wrote:On Mon, 9 Oct 2023 at 22:49, Richard Guo <[email protected]> wrote:\n> I came across a crash in add_paths_to_append_rel() with the query below.\n\n> It was introduced by commit a8a968a8 where we referenced\n> cheapest_startup_path->param_info without first checking that we have a\n> valid cheapest_startup_path.\n\nI pushed this with just minor adjustments to the comments you wrote. I\ndidn't feel the need to reiterate on the set_cheapest() comments.Thanks for pushing!ThanksRichard", "msg_date": "Tue, 10 Oct 2023 13:50:15 +0800", "msg_from": "Richard Guo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Crash in add_paths_to_append_rel" } ]
[ { "msg_contents": "Hi,\n\nI've mentioned this to a few people before, but forgot to start an actual\nthread. So here we go:\n\nI think we should lower the default wal_blocksize / XLOG_BLCKSZ to 4096, from\nthe current 8192. The reason is that\n\na) We don't gain much from a blocksize above 4096, as we already do one write\n all the pending WAL data in one go (except when at the tail of\n wal_buffers). We *do* incur more overhead for page headers, but compared to\n the actual WAL data it is not a lot (~0.29% of space is page headers 8192\n vs 0.59% with 4096).\n\nb) Writing 8KB when we we have to flush a partially filled buffer can\n substantially increase write amplification. In a transactional workload,\n this will often double the write volume.\n\nCurrently disks mostly have 4096 bytes as their \"sector size\". Sometimes\nthat's exposed directly, sometimes they can also write in 512 bytes, but that\ninternally requires a read-modify-write operation.\n\n\nFor some example numbers, I ran a very simple insert workload with a varying\nnumber of clients with both a wal_blocksize=4096 and wal_blocksize=8192\ncluster, and measured the amount of bytes written before/after. The table was\nrecreated before each run, followed by a checkpoint and the benchmark. Here I\nran the inserts only for 15s each, because the results don't change\nmeaningfully with longer runs.\n\n\nWith XLOG_BLCKSZ=8192\n\nclients\t tps disk bytes written\n1\t 667\t\t 81296\n2\t 739\t\t 89796\n4\t 1446\t\t 89208\n8\t 2858\t\t 90858\n16\t 5775\t\t 96928\n32\t 11920\t\t115351\n64\t 23686\t\t135244\n128\t 46001\t\t173390\n256\t 88833\t\t239720\n512\t 146208\t\t335669\n\n\nWith XLOG_BLCKSZ=4096\n\nclients\t tps disk bytes written\n1\t 751\t\t 46838\n2\t 773\t\t 47936\n4\t 1512\t\t 48317\n8\t 3143\t\t 52584\n16\t 6221\t\t 59097\n32\t 12863\t\t 73776\n64\t 25652\t\t 98792\n128\t 48274\t\t133330\n256\t 88969\t\t200720\n512\t 146298\t\t298523\n\n\nThis is on a not-that-fast NVMe SSD (Samsung SSD 970 PRO 1TB).\n\n\nIt's IMO quite interesting that even at the higher client counts, the number\nof bytes written don't reach parity.\n\n\nOn a stripe of two very fast SSDs:\n\nWith XLOG_BLCKSZ=8192\n\nclients\t tps disk bytes written\n1\t 23786\t\t2893392\n2\t 38515\t\t4683336\n4\t 63436\t\t4688052\n8\t 106618\t\t4618760\n16\t 177905\t\t4384360\n32\t 254890\t\t3890664\n64\t 297113\t\t3031568\n128\t 299878\t\t2297808\n256\t 308774\t\t1935064\n512\t 292515\t\t1630408\n\n\nWith XLOG_BLCKSZ=4096\n\nclients\t tps disk bytes written\n1\t 25742\t\t1586748\n2\t 43578\t\t2686708\n4\t 62734\t\t2613856\n8\t 116217\t\t2809560\n16\t 200802\t\t2947580\n32\t 269268\t\t2461364\n64\t 323195\t\t2042196\n128\t 317160\t\t1550364\n256\t 309601\t\t1285744\n512\t 292063\t\t1103816\n\nIt's fun to see how the total number of writes *decreases* at higher\nconcurrency, because it becomes more likely that pages are filled completely.\n\n\nOne thing I noticed is that our auto-configuration of wal_buffers leads to\ndifferent wal_buffers settings for different XLOG_BLCKSZ, which doesn't seem\ngreat.\n\n\nPerforming the same COPY workload (1024 files, split across N clients) for\nboth settings shows no performance difference, but a very slight increase in\ntotal bytes written (about 0.25%, which is roughly what I'd expect).\n\n\nPersonally I'd say the slight increase in WAL volume is more than outweighed\nby the increase in throughput and decrease in bytes written.\n\n\nThere's an alternative approach we could take, which is to write in 4KB\nincrements, while keeping 8KB pages. With the current format that's not\nobviously a bad idea. But given there aren't really advantages in 8KB WAL\npages, it seems we should just go for 4KB?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 9 Oct 2023 16:08:05 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": true, "msg_subject": "Lowering the default wal_blocksize to 4K" }, { "msg_contents": "On Mon, Oct 9, 2023 at 04:08:05PM -0700, Andres Freund wrote:\n> There's an alternative approach we could take, which is to write in 4KB\n> increments, while keeping 8KB pages. With the current format that's not\n> obviously a bad idea. But given there aren't really advantages in 8KB WAL\n> pages, it seems we should just go for 4KB?\n\nHow do we handle shorter maximum row lengths and shorter maximum index\nentry lengths?\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Mon, 9 Oct 2023 19:26:54 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Lowering the default wal_blocksize to 4K" }, { "msg_contents": "Hi,\n\nOn 2023-10-09 19:26:54 -0400, Bruce Momjian wrote:\n> On Mon, Oct 9, 2023 at 04:08:05PM -0700, Andres Freund wrote:\n> > There's an alternative approach we could take, which is to write in 4KB\n> > increments, while keeping 8KB pages. With the current format that's not\n> > obviously a bad idea. But given there aren't really advantages in 8KB WAL\n> > pages, it seems we should just go for 4KB?\n> \n> How do we handle shorter maximum row lengths and shorter maximum index\n> entry lengths?\n\nThe WAL blocksize shouldn't influence either, unless we have a bug somewhere.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 9 Oct 2023 16:36:20 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Lowering the default wal_blocksize to 4K" }, { "msg_contents": "On Mon, Oct 9, 2023 at 04:36:20PM -0700, Andres Freund wrote:\n> Hi,\n> \n> On 2023-10-09 19:26:54 -0400, Bruce Momjian wrote:\n> > On Mon, Oct 9, 2023 at 04:08:05PM -0700, Andres Freund wrote:\n> > > There's an alternative approach we could take, which is to write in 4KB\n> > > increments, while keeping 8KB pages. With the current format that's not\n> > > obviously a bad idea. But given there aren't really advantages in 8KB WAL\n> > > pages, it seems we should just go for 4KB?\n> > \n> > How do we handle shorter maximum row lengths and shorter maximum index\n> > entry lengths?\n> \n> The WAL blocksize shouldn't influence either, unless we have a bug somewhere.\n\nOh, good point.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Mon, 9 Oct 2023 19:45:16 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Lowering the default wal_blocksize to 4K" }, { "msg_contents": "Andres Freund <[email protected]> writes:\n> There's an alternative approach we could take, which is to write in 4KB\n> increments, while keeping 8KB pages. With the current format that's not\n> obviously a bad idea. But given there aren't really advantages in 8KB WAL\n> pages, it seems we should just go for 4KB?\n\nSeems like that's doubling the overhead of WAL page headers. Do we need\nto try to skinny those down?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 09 Oct 2023 23:16:30 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Lowering the default wal_blocksize to 4K" }, { "msg_contents": "Hi,\n\nOn 2023-10-09 23:16:30 -0400, Tom Lane wrote:\n> Andres Freund <[email protected]> writes:\n> > There's an alternative approach we could take, which is to write in 4KB\n> > increments, while keeping 8KB pages. With the current format that's not\n> > obviously a bad idea. But given there aren't really advantages in 8KB WAL\n> > pages, it seems we should just go for 4KB?\n> \n> Seems like that's doubling the overhead of WAL page headers. Do we need\n> to try to skinny those down?\n\nI think the overhead is small, and we are wasting so much space in other\nplaces, that I am not worried about the proportional increase page header\nspace usage at this point, particularly compared to saving in overall write\nrate and increase in TPS. There's other areas we can save much more space, if\nwe want to focus on that.\n\nI was thinking we should perhaps do the opposite, namely getting rid of short\npage headers. The overhead in the \"byte position\" <-> LSN conversion due to\nthe differing space is worse than the gain. Or do something inbetween - having\nthe system ID in the header adds a useful crosscheck, but I'm far less\nconvinced that having segment and block size in there, as 32bit numbers no\nless, is worthwhile. After all, if the system id matches, it's not likely that\nthe xlog block or segment size differ.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 9 Oct 2023 21:14:15 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Lowering the default wal_blocksize to 4K" }, { "msg_contents": "On Tue, 10 Oct 2023 at 01:08, Andres Freund <[email protected]> wrote:\n>\n> Hi,\n>\n> I've mentioned this to a few people before, but forgot to start an actual\n> thread. So here we go:\n>\n> I think we should lower the default wal_blocksize / XLOG_BLCKSZ to 4096, from\n> the current 8192.\n\nSeems like a good idea.\n\n> It's IMO quite interesting that even at the higher client counts, the number\n> of bytes written don't reach parity.\n>\n> It's fun to see how the total number of writes *decreases* at higher\n> concurrency, because it becomes more likely that pages are filled completely.\n\nWith higher client counts and short transactions I think it is not too\nunexpected to see commit_delay+commit_siblings configured. Did you\nmeasure the impact of this change on such configurations?\n\n> One thing I noticed is that our auto-configuration of wal_buffers leads to\n> different wal_buffers settings for different XLOG_BLCKSZ, which doesn't seem\n> great.\n\nHmm.\n\n> Performing the same COPY workload (1024 files, split across N clients) for\n> both settings shows no performance difference, but a very slight increase in\n> total bytes written (about 0.25%, which is roughly what I'd expect).\n>\n> Personally I'd say the slight increase in WAL volume is more than outweighed\n> by the increase in throughput and decrease in bytes written.\n\nAgreed.\n\n> There's an alternative approach we could take, which is to write in 4KB\n> increments, while keeping 8KB pages. With the current format that's not\n> obviously a bad idea. But given there aren't really advantages in 8KB WAL\n> pages, it seems we should just go for 4KB?\n\nIt is not just the disk overhead of blocks, but we also maintain some\nother data (currently in the form of XLogRecPtrs) in memory for each\nWAL buffer, the overhead of which will also increase when we increase\nthe number of XLog pages per MB of WAL that we cache.\nAdditionally, highly concurrent workloads with transactions that write\na high multiple of XLOG_BLCKSZ bytes to WAL may start to see increased\noverhead due to the .25% additional WAL getting written and a doubling\nof the number of XLog pages being touched (both initialization and the\nsmaller memcpy for records that would now cross an extra page\nboundary).\n\nHowever, for all of these issues I doubt that they actually matter\nmuch in the grand scheme of things, so I definitely wouldn't mind\nmoving to 4KiB XLog pages.\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n\n", "msg_date": "Tue, 10 Oct 2023 12:57:04 +0200", "msg_from": "Matthias van de Meent <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Lowering the default wal_blocksize to 4K" }, { "msg_contents": "On Tue, 10 Oct 2023 at 06:14, Andres Freund <[email protected]> wrote:\n>\n> Hi,\n>\n> On 2023-10-09 23:16:30 -0400, Tom Lane wrote:\n>> Andres Freund <[email protected]> writes:\n>>> There's an alternative approach we could take, which is to write in 4KB\n>>> increments, while keeping 8KB pages. With the current format that's not\n>>> obviously a bad idea. But given there aren't really advantages in 8KB WAL\n>>> pages, it seems we should just go for 4KB?\n>>\n>> Seems like that's doubling the overhead of WAL page headers. Do we need\n>> to try to skinny those down?\n>\n> I think the overhead is small, and we are wasting so much space in other\n> places, that I am not worried about the proportional increase page header\n> space usage at this point, particularly compared to saving in overall write\n> rate and increase in TPS. There's other areas we can save much more space, if\n> we want to focus on that.\n>\n> I was thinking we should perhaps do the opposite, namely getting rid of short\n> page headers. The overhead in the \"byte position\" <-> LSN conversion due to\n> the differing space is worse than the gain. Or do something inbetween - having\n> the system ID in the header adds a useful crosscheck, but I'm far less\n> convinced that having segment and block size in there, as 32bit numbers no\n> less, is worthwhile. After all, if the system id matches, it's not likely that\n> the xlog block or segment size differ.\n\nHmm. I don't think we should remove those checks, as I can see people\nthat would want to change their XLog block size with e.g.\npg_reset_wal.\nBut I think we can relatively easily move segsize/blocksize checks to\na different place in the normal page header, which would reduce the\nnumber of bytes we'd have to store elsewhere.\n\nWe could move segsize/blocksize into the xlp_info flags: 12 of the 16\nbits are currently unused. Using 4 of these bits for segsize\n(indicating 2^N MB, current accepted values are N=0..10 for 1 MB ...\n1024MB) and 4 (or 3) for blcksz (as we currently support 1..64 kB\nblocks, or 2^{0..6} kB). This would remove the need for 2 of the 3\nfields in the large xlog block header.\n\nAfter that we'll only have the system ID left from the extended\nheader, which we could store across 2 pages in the (current) alignment\nlosses of xlp_rem_len - even pages the upper half, uneven pages the\nlower half of the ID. This should allow for enough integrity checks\nwithout further increasing the size of XLogPageHeader in most\ninstallations.\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n\n", "msg_date": "Tue, 10 Oct 2023 21:30:44 +0200", "msg_from": "Matthias van de Meent <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Lowering the default wal_blocksize to 4K" }, { "msg_contents": "Hi,\n\nOn 2023-10-10 21:30:44 +0200, Matthias van de Meent wrote:\n> On Tue, 10 Oct 2023 at 06:14, Andres Freund <[email protected]> wrote:\n> > On 2023-10-09 23:16:30 -0400, Tom Lane wrote:\n> >> Andres Freund <[email protected]> writes:\n> >>> There's an alternative approach we could take, which is to write in 4KB\n> >>> increments, while keeping 8KB pages. With the current format that's not\n> >>> obviously a bad idea. But given there aren't really advantages in 8KB WAL\n> >>> pages, it seems we should just go for 4KB?\n> >>\n> >> Seems like that's doubling the overhead of WAL page headers. Do we need\n> >> to try to skinny those down?\n> >\n> > I think the overhead is small, and we are wasting so much space in other\n> > places, that I am not worried about the proportional increase page header\n> > space usage at this point, particularly compared to saving in overall write\n> > rate and increase in TPS. There's other areas we can save much more space, if\n> > we want to focus on that.\n> >\n> > I was thinking we should perhaps do the opposite, namely getting rid of short\n> > page headers. The overhead in the \"byte position\" <-> LSN conversion due to\n> > the differing space is worse than the gain. Or do something inbetween - having\n> > the system ID in the header adds a useful crosscheck, but I'm far less\n> > convinced that having segment and block size in there, as 32bit numbers no\n> > less, is worthwhile. After all, if the system id matches, it's not likely that\n> > the xlog block or segment size differ.\n> \n> Hmm. I don't think we should remove those checks, as I can see people\n> that would want to change their XLog block size with e.g.\n> pg_reset_wal.\n\nI don't think that's something we need to address in every physical\nsegment. For one, there's no option to do so. But more importantly, if they\ndon't change the xlog block size, we'll just accept random WAL as well. If\nsomebody goes to the trouble of writing a custom tool, they can live with the\nconsequences of that potentially causing breakage. Particularly if the checks\nwouldn't meaningfully prevent that anyway.\n\n\n> After that we'll only have the system ID left from the extended\n> header, which we could store across 2 pages in the (current) alignment\n> losses of xlp_rem_len - even pages the upper half, uneven pages the\n> lower half of the ID. This should allow for enough integrity checks\n> without further increasing the size of XLogPageHeader in most\n> installations.\n\nI doubt that that's a good idea - what if there's just a single page in a\nsegment? And there aren't earlier segments? That's not a rare case, IME.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 10 Oct 2023 16:29:33 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Lowering the default wal_blocksize to 4K" }, { "msg_contents": "On Wed, Oct 11, 2023 at 12:29 PM Andres Freund <[email protected]> wrote:\n> On 2023-10-10 21:30:44 +0200, Matthias van de Meent wrote:\n> > On Tue, 10 Oct 2023 at 06:14, Andres Freund <[email protected]> wrote:\n> > > I was thinking we should perhaps do the opposite, namely getting rid of short\n> > > page headers. The overhead in the \"byte position\" <-> LSN conversion due to\n> > > the differing space is worse than the gain. Or do something inbetween - having\n> > > the system ID in the header adds a useful crosscheck, but I'm far less\n> > > convinced that having segment and block size in there, as 32bit numbers no\n> > > less, is worthwhile. After all, if the system id matches, it's not likely that\n> > > the xlog block or segment size differ.\n> >\n> > Hmm. I don't think we should remove those checks, as I can see people\n> > that would want to change their XLog block size with e.g.\n> > pg_reset_wal.\n>\n> I don't think that's something we need to address in every physical\n> segment. For one, there's no option to do so. But more importantly, if they\n> don't change the xlog block size, we'll just accept random WAL as well. If\n> somebody goes to the trouble of writing a custom tool, they can live with the\n> consequences of that potentially causing breakage. Particularly if the checks\n> wouldn't meaningfully prevent that anyway.\n\nHow about this idea: Put the system ID etc into the new record Robert\nis proposing for the redo point, and also into the checkpoint record,\nso that it's at both ends of the to-be-replayed range. That just\nleaves the WAL segments in between. If you find yourself writing a\nnew record that would go in the first usable byte of a segment, insert\na new special system ID (etc) record that will be checked during\nreplay. For segments that start with XLP_FIRST_IS_CONTRECORD, don't\nworry about it: those already form part of a chain of verification\n(xlp_rem_len, xl_crc) that started on the preceding page, so it seems\nalmost impossible to accidentally replay from a segment that came from\nanother system.\n\n\n", "msg_date": "Wed, 11 Oct 2023 14:39:12 +1300", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Lowering the default wal_blocksize to 4K" }, { "msg_contents": "Hi,\n\nOn 2023-10-11 14:39:12 +1300, Thomas Munro wrote:\n> On Wed, Oct 11, 2023 at 12:29 PM Andres Freund <[email protected]> wrote:\n> > On 2023-10-10 21:30:44 +0200, Matthias van de Meent wrote:\n> > > On Tue, 10 Oct 2023 at 06:14, Andres Freund <[email protected]> wrote:\n> > > > I was thinking we should perhaps do the opposite, namely getting rid of short\n> > > > page headers. The overhead in the \"byte position\" <-> LSN conversion due to\n> > > > the differing space is worse than the gain. Or do something inbetween - having\n> > > > the system ID in the header adds a useful crosscheck, but I'm far less\n> > > > convinced that having segment and block size in there, as 32bit numbers no\n> > > > less, is worthwhile. After all, if the system id matches, it's not likely that\n> > > > the xlog block or segment size differ.\n> > >\n> > > Hmm. I don't think we should remove those checks, as I can see people\n> > > that would want to change their XLog block size with e.g.\n> > > pg_reset_wal.\n> >\n> > I don't think that's something we need to address in every physical\n> > segment. For one, there's no option to do so. But more importantly, if they\n> > don't change the xlog block size, we'll just accept random WAL as well. If\n> > somebody goes to the trouble of writing a custom tool, they can live with the\n> > consequences of that potentially causing breakage. Particularly if the checks\n> > wouldn't meaningfully prevent that anyway.\n> \n> How about this idea: Put the system ID etc into the new record Robert\n> is proposing for the redo point, and also into the checkpoint record,\n> so that it's at both ends of the to-be-replayed range.\n\nI think that's a very good idea.\n\n\n> That just leaves the WAL segments in between. If you find yourself writing\n> a new record that would go in the first usable byte of a segment, insert a\n> new special system ID (etc) record that will be checked during replay.\n\nI don't see how we can do that without incuring a lot of overhead though. This\ndetermination would need to happen in ReserveXLogInsertLocation(), while\nholding the spinlock. Which is one of the most contended bits of code in\npostgres. The whole reason that we have this \"byte pos\" to LSN conversion\nstuff is to make the spinlock-protected part of ReserveXLogInsertLocation() as\nshort as possible.\n\n\n> For segments that start with XLP_FIRST_IS_CONTRECORD, don't worry about it:\n> those already form part of a chain of verification (xlp_rem_len, xl_crc)\n> that started on the preceding page, so it seems almost impossible to\n> accidentally replay from a segment that came from another system.\n\nBut I think we might just be ok with logic similar to this, even for the\nnon-contrecord case. If recovery starts in one segment where we have verified\nsysid, xlog block size etc and we encounter a WAL record starting on the first\n\"content byte\" of a segment, we can still verify that the prev LSN is correct\netc. Sure, if you try hard you could come up with a scenario where you could\nmislead such a check, but we don't need to protect against intentional malice\nhere, just against accidents.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 10 Oct 2023 19:47:44 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Lowering the default wal_blocksize to 4K" }, { "msg_contents": "On Wed, 11 Oct 2023 at 01:29, Andres Freund <[email protected]> wrote:\n>\n> Hi,\n>\n> On 2023-10-10 21:30:44 +0200, Matthias van de Meent wrote:\n> > On Tue, 10 Oct 2023 at 06:14, Andres Freund <[email protected]> wrote:\n> > > On 2023-10-09 23:16:30 -0400, Tom Lane wrote:\n> > >> Andres Freund <[email protected]> writes:\n> > >>> There's an alternative approach we could take, which is to write in 4KB\n> > >>> increments, while keeping 8KB pages. With the current format that's not\n> > >>> obviously a bad idea. But given there aren't really advantages in 8KB WAL\n> > >>> pages, it seems we should just go for 4KB?\n> > >>\n> > >> Seems like that's doubling the overhead of WAL page headers. Do we need\n> > >> to try to skinny those down?\n> > >\n> > > I think the overhead is small, and we are wasting so much space in other\n> > > places, that I am not worried about the proportional increase page header\n> > > space usage at this point, particularly compared to saving in overall write\n> > > rate and increase in TPS. There's other areas we can save much more space, if\n> > > we want to focus on that.\n> > >\n> > > I was thinking we should perhaps do the opposite, namely getting rid of short\n> > > page headers. The overhead in the \"byte position\" <-> LSN conversion due to\n> > > the differing space is worse than the gain. Or do something inbetween - having\n> > > the system ID in the header adds a useful crosscheck, but I'm far less\n> > > convinced that having segment and block size in there, as 32bit numbers no\n> > > less, is worthwhile. After all, if the system id matches, it's not likely that\n> > > the xlog block or segment size differ.\n> >\n> > Hmm. I don't think we should remove those checks, as I can see people\n> > that would want to change their XLog block size with e.g.\n> > pg_reset_wal.\n>\n> I don't think that's something we need to address in every physical\n> segment. For one, there's no option to do so.\n\nNot block size, but xlog segment size is modifiable with pg_resetwal,\nand could thus reasonably change across restarts. Apart from more\npractical concerns around compile-time options requiring you to swap\nout binaries, I don't really see why xlog block size couldn't be\nchanged with pg_resetwal in a securely shutdown cluster as one does\nwith the WAL segment size.\n\n> But more importantly, if they\n> don't change the xlog block size, we'll just accept random WAL as well. If\n> somebody goes to the trouble of writing a custom tool, they can live with the\n> consequences of that potentially causing breakage. Particularly if the checks\n> wouldn't meaningfully prevent that anyway.\n\nI don't understand what you mean by that \"we'll just accept random WAL\nas well\". We do significant validation in XLogReaderValidatePageHeader\nto make sure that all pages of WAL are sufficiently formatted so that\nthey can securely be read by the available infrastructure with the\nleast chance of misreading data. There is no chance currently that we\nread WAL from WAL segments that contain correct data for different\nsegment or block sizes. That includes WAL from segments created before\na pg_resetwal changed the WAL segment size.\n\nIf this \"custom tool\" refers to the typo-ed name of pg_resetwal, that\nis hardly a custom tool, it is shipped with PostgreSQL and you can\nfind the sources under src/bin/pg_resetwal.\n\n> > After that we'll only have the system ID left from the extended\n> > header, which we could store across 2 pages in the (current) alignment\n> > losses of xlp_rem_len - even pages the upper half, uneven pages the\n> > lower half of the ID. This should allow for enough integrity checks\n> > without further increasing the size of XLogPageHeader in most\n> > installations.\n>\n> I doubt that that's a good idea - what if there's just a single page in a\n> segment? And there aren't earlier segments? That's not a rare case, IME.\n\nThen we'd still have 50% of a system ID which we can check against for\nany corruption. I agree that it increases the chance of conflics, but\nit's still strictly better than nothing at all.\nAn alternative solution would be to write the first two pages of a WAL\nsegment regardless of contents, so that we essentially never only have\naccess to the first page during crash recovery. Physical replication's\nrecovery wouldn't be able to read ahead, but I consider that as less\nproblematic.\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n\n", "msg_date": "Wed, 11 Oct 2023 16:09:21 +0200", "msg_from": "Matthias van de Meent <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Lowering the default wal_blocksize to 4K" }, { "msg_contents": "On Tue, Oct 10, 2023 at 7:29 PM Andres Freund <[email protected]> wrote:\n> > Hmm. I don't think we should remove those checks, as I can see people\n> > that would want to change their XLog block size with e.g.\n> > pg_reset_wal.\n>\n> I don't think that's something we need to address in every physical\n> segment. For one, there's no option to do so. But more importantly, if they\n> don't change the xlog block size, we'll just accept random WAL as well. If\n> somebody goes to the trouble of writing a custom tool, they can live with the\n> consequences of that potentially causing breakage. Particularly if the checks\n> wouldn't meaningfully prevent that anyway.\n\nI'm extremely confused about what both of you are saying.\n\nMatthias is referring to pg_reset_wal, which I assume means\npg_resetwal. But it has no option to change the WAL block size. It\ndoes have an option to change the WAL segment size, but that's not the\nsame thing. And even if pg_resetwal did have an option to change the\nWAL segment size, it removes all WAL from pg_wal when it runs, so you\nwouldn't normally end up trying to replay WAL from before the\noperation because it would have been removed. You might still have\nthose files around in an archive or something, but the primary doesn't\nreplay from the archive. You might have standbys, but I would assume\nthey would have to be rebuilt after changing the WAL block size on the\nmaster, unless you were trying to follow some probably-too-clever\nprocedure to avoid a standby rebuild. So I'm really kind of lost as to\nwhat the scenario is that Matthias has in mind.\n\nBut Andres's response doesn't make any sense to me either. What in the\nworld does \"if they don't change the xlog block size, we'll just\naccept random WAL as well\" mean? Neither having or not having a check\nthat the block size hasn't change causes us to \"just accept random\nWAL\". To \"accept random WAL,\" we'd have to remove all of the sanity\nchecks, which nobody is proposing and nobody would accept.\n\nBut if we do want to keep those cross-checks, why not take what Thomas\nproposed a little further and move all of xlp_sysid, xlp_seg_size, and\nxlp_xlog_blcksz into XLOG_CHECKPOINT_REDO? Then long and short page\nheaders would become identical. We'd lose the ability to recheck those\nvalues for every new segment, but it seems quite unlikely that any of\nthese values would change in the middle of replay. If they did, would\nxl_prev and xl_crc be sufficient to catch that? I think Andres says in\na later email that they would be, and I think I'm inclined to agree.\nFalse xl_prev matches don't seem especially unlikely, but xl_crc seems\nlike it should be effective.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 11 Oct 2023 16:05:02 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Lowering the default wal_blocksize to 4K" }, { "msg_contents": "On Thu, Oct 12, 2023 at 9:05 AM Robert Haas <[email protected]> wrote:\n> But if we do want to keep those cross-checks, why not take what Thomas\n> proposed a little further and move all of xlp_sysid, xlp_seg_size, and\n> xlp_xlog_blcksz into XLOG_CHECKPOINT_REDO? Then long and short page\n> headers would become identical.\n\nFTR that's exactly what I was trying to say.\n\n> We'd lose the ability to recheck those\n> values for every new segment, but it seems quite unlikely that any of\n> these values would change in the middle of replay. If they did, would\n> xl_prev and xl_crc be sufficient to catch that? I think Andres says in\n> a later email that they would be, and I think I'm inclined to agree.\n> False xl_prev matches don't seem especially unlikely, but xl_crc seems\n> like it should be effective.\n\nRight, it is strong enough, and covers the common case where a record\ncrosses the segment boundary.\n\nThat leaves only the segments where a record starts exactly on the\nfirst usable byte of a segment, which is why I was trying to think of\na way to cover that case too. I suggested we could notice and insert\na new record at that place. But Andres suggests it would be too\nexpensive and not worth worrying about.\n\n\n", "msg_date": "Thu, 12 Oct 2023 09:27:33 +1300", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Lowering the default wal_blocksize to 4K" }, { "msg_contents": "On Thu, Oct 12, 2023 at 9:27 AM Thomas Munro <[email protected]> wrote:\n> On Thu, Oct 12, 2023 at 9:05 AM Robert Haas <[email protected]> wrote:\n> > But if we do want to keep those cross-checks, why not take what Thomas\n> > proposed a little further and move all of xlp_sysid, xlp_seg_size, and\n> > xlp_xlog_blcksz into XLOG_CHECKPOINT_REDO? Then long and short page\n> > headers would become identical.\n>\n> FTR that's exactly what I was trying to say.\n\nAnd to be extra double explicit, the point of that is to kill the\n'div' instruction that Andres was complaining about, because now the\ndivision depends only on compile time constants so it can be done with\nmultiplication and bitswizzling tricks. For example, when X is a\nvariable I get:\n\n *a = n / X;\n 0x0000000000000003 <+3>: mov %rdi,%rax\n 0x0000000000000006 <+6>: xor %edx,%edx\n 0x0000000000000008 <+8>: divq 0x0(%rip) # 0xf <f+15>\n 0x0000000000000011 <+17>: mov %rax,(%rsi)\n\n *b = n % X;\n 0x000000000000000f <+15>: xor %edx,%edx\n 0x0000000000000014 <+20>: mov %rdi,%rax\n 0x0000000000000017 <+23>: divq 0x0(%rip) # 0x1e <f+30>\n 0x000000000000001e <+30>: mov %rdx,(%rcx)\n\n\n... but when it's the constant 8192 - 24 I get:\n\n *a = n / X;\n 0x0000000000000000 <+0>: movabs $0x2018120d8a279db7,%rax\n 0x000000000000000d <+13>: mov %rdi,%rdx\n 0x0000000000000010 <+16>: shr $0x3,%rdx\n 0x0000000000000014 <+20>: mul %rdx\n 0x0000000000000017 <+23>: shr $0x7,%rdx\n 0x000000000000001b <+27>: mov %rdx,(%rsi)\n\n *b = n % X;\n 0x000000000000001e <+30>: imul $0x1fe8,%rdx,%rdx\n 0x0000000000000025 <+37>: sub %rdx,%rdi\n 0x0000000000000028 <+40>: mov %rdi,(%rcx)\n\n\n", "msg_date": "Thu, 12 Oct 2023 09:47:29 +1300", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Lowering the default wal_blocksize to 4K" }, { "msg_contents": "Hi,\n\nOn 2023-10-11 16:05:02 -0400, Robert Haas wrote:\n> On Tue, Oct 10, 2023 at 7:29 PM Andres Freund <[email protected]> wrote:\n> > > Hmm. I don't think we should remove those checks, as I can see people\n> > > that would want to change their XLog block size with e.g.\n> > > pg_reset_wal.\n> >\n> > I don't think that's something we need to address in every physical\n> > segment. For one, there's no option to do so. But more importantly, if they\n> > don't change the xlog block size, we'll just accept random WAL as well. If\n> > somebody goes to the trouble of writing a custom tool, they can live with the\n> > consequences of that potentially causing breakage. Particularly if the checks\n> > wouldn't meaningfully prevent that anyway.\n>\n> I'm extremely confused about what both of you are saying.\n>\n> Matthias is referring to pg_reset_wal, which I assume means\n> pg_resetwal. But it has no option to change the WAL block size. It\n> does have an option to change the WAL segment size, but that's not the\n> same thing. And even if pg_resetwal did have an option to change the\n> WAL segment size, it removes all WAL from pg_wal when it runs, so you\n> wouldn't normally end up trying to replay WAL from before the\n> operation because it would have been removed. You might still have\n> those files around in an archive or something, but the primary doesn't\n> replay from the archive. You might have standbys, but I would assume\n> they would have to be rebuilt after changing the WAL block size on the\n> master, unless you were trying to follow some probably-too-clever\n> procedure to avoid a standby rebuild. So I'm really kind of lost as to\n> what the scenario is that Matthias has in mind.\n\nI think the question is what the point of the crosschecks in long page headers\nis. It's pretty easy to see what the point of the xlp_sysid check is - make it\nless likely to accidentally replay WAL from a different system. It's much\nless clear what the point of xlp_seg_size and xlp_xlog_blcksz is - after all,\nthey are also in ControlFileData and the xlp_sysid check tied the control file\nand WAL file together.\n\n\n> But Andres's response doesn't make any sense to me either. What in the world\n> does \"if they don't change the xlog block size, we'll just accept random WAL\n> as well\" mean? Neither having or not having a check that the block size\n> hasn't change causes us to \"just accept random WAL\". To \"accept random WAL,\"\n> we'd have to remove all of the sanity checks, which nobody is proposing and\n> nobody would accept.\n\nLet me rephrase my point:\n\nIf somebody uses a modified pg_resetwal to change the xlog block size, then\ntries to replay WAL from before that change, and is unlucky enough that the\nLSN looked for in a segment is the start of a valid record both before/after\nthe pg_resetwal invocation, then yes, we might not catch that anymore if we\nremove the block size check. But the much much more common case is that the\nblock size was *not* changed, in which case we *already* don't catch that\npg_resetwal was invoked.\n\nISTM that the xlp_seg_size and xlp_xlog_blcksz checks in long page headers are\na belt and suspenders check that is very unlikely to ever catch a mistake that\nwouldn't otherwise be caught.\n\n\n> But if we do want to keep those cross-checks, why not take what Thomas\n> proposed a little further and move all of xlp_sysid, xlp_seg_size, and\n> xlp_xlog_blcksz into XLOG_CHECKPOINT_REDO?\n\nI think that's what Thomas was proposing. Thinking about it a bit more I'm\nnot sure that having the data both in the checkpoint record itself and in\nXLOG_CHECKPOINT_REDO buys much. But it's also pretty much free, so ...\n\n\n> Then long and short page headers would become identical.\n\nWhich would make the code more efficient...\n\n\n> We'd lose the ability to recheck those values for every new segment, but it\n> seems quite unlikely that any of these values would change in the middle of\n> replay.\n\nI guess the most likely scenario would be a replica that has some local WAL\nfiles (e.g. due to pg_basebackup -X ...), but accidentally configures a\nrestore_command pointing to the wrong archive. In that case recovery could\nstart up successfully as the checkpoint/redo records have sensible contents,\nbut we then wouldn't necessarily notice that the subsequent files aren't from\nthe correct system. Of course you need to be quite unlucky and have LSNs that\nmatch between the systems. The most likely path I can think of is an idle\nsystem with archive_timeout.\n\nAs outlined above, I don't think xlp_seg_size, xlp_xlog_blcksz buy us\nanything, but that the protection by xlp_sysid is a bit more meaningful. So a\ncompromise position could be to include xlp_sysid in the page header, possibly\nin a \"chopped up\" manner, as Matthias suggested.\n\n\n> If they did, would xl_prev and xl_crc be sufficient to catch that? I think\n> Andres says in a later email that they would be, and I think I'm inclined to\n> agree. False xl_prev matches don't seem especially unlikely, but xl_crc\n> seems like it should be effective.\n\nI think it'd be an entirely tolerable risk. Even if a WAL file were falsely\nreplayed, we'd still notice the problem soon after. I think once such a\nmistake was made, it'd be inadvasible to continue using the cluster anyway.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 11 Oct 2023 15:11:26 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Lowering the default wal_blocksize to 4K" }, { "msg_contents": "Hi,\n\nOn 2023-10-11 16:09:21 +0200, Matthias van de Meent wrote:\n> On Wed, 11 Oct 2023 at 01:29, Andres Freund <[email protected]> wrote:\n> > > After that we'll only have the system ID left from the extended\n> > > header, which we could store across 2 pages in the (current) alignment\n> > > losses of xlp_rem_len - even pages the upper half, uneven pages the\n> > > lower half of the ID. This should allow for enough integrity checks\n> > > without further increasing the size of XLogPageHeader in most\n> > > installations.\n> >\n> > I doubt that that's a good idea - what if there's just a single page in a\n> > segment? And there aren't earlier segments? That's not a rare case, IME.\n> \n> Then we'd still have 50% of a system ID which we can check against for\n> any corruption. I agree that it increases the chance of conflics, but\n> it's still strictly better than nothing at all.\n\nA fair point - I somehow disregarded that all bits in the system id are\nequally meaningful.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 11 Oct 2023 15:16:33 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Lowering the default wal_blocksize to 4K" }, { "msg_contents": "On Wed, Oct 11, 2023 at 4:28 PM Thomas Munro <[email protected]> wrote:\n> That leaves only the segments where a record starts exactly on the\n> first usable byte of a segment, which is why I was trying to think of\n> a way to cover that case too. I suggested we could notice and insert\n> a new record at that place. But Andres suggests it would be too\n> expensive and not worth worrying about.\n\nHmm. Even in that case, xl_prev has to match. It's not like it's the\nwild west. Sure, it's not nearly as good of a cross-check, but it's\nsomething. It seems to me that it's not worth worrying very much about\nxlp_seg_size or xlp_blcksz changing undetected in that scenario - if\nyou're doing that kind of advanced magic, you need to be careful\nenough to not mess it up, and if we still cross-check once per\ncheckpoint cycle that's pretty good. I do worry a bit about the sysid\nchanging under us, though. It's not that hard to get your WAL archives\nmixed up, and it'd be nice to catch that right away.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 12 Oct 2023 09:36:19 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Lowering the default wal_blocksize to 4K" }, { "msg_contents": "On Wed, Oct 11, 2023 at 6:11 PM Andres Freund <[email protected]> wrote:\n> I think the question is what the point of the crosschecks in long page headers\n> is. It's pretty easy to see what the point of the xlp_sysid check is - make it\n> less likely to accidentally replay WAL from a different system. It's much\n> less clear what the point of xlp_seg_size and xlp_xlog_blcksz is - after all,\n> they are also in ControlFileData and the xlp_sysid check tied the control file\n> and WAL file together.\n\nYeah, fair.\n\n> Let me rephrase my point:\n>\n> If somebody uses a modified pg_resetwal to change the xlog block size, then\n> tries to replay WAL from before that change, and is unlucky enough that the\n> LSN looked for in a segment is the start of a valid record both before/after\n> the pg_resetwal invocation, then yes, we might not catch that anymore if we\n> remove the block size check. But the much much more common case is that the\n> block size was *not* changed, in which case we *already* don't catch that\n> pg_resetwal was invoked.\n\nHmm. Should we invent a mechanism just for that?\n\n> ISTM that the xlp_seg_size and xlp_xlog_blcksz checks in long page headers are\n> a belt and suspenders check that is very unlikely to ever catch a mistake that\n> wouldn't otherwise be caught.\n\nI think that's probably right.\n\n> I think that's what Thomas was proposing. Thinking about it a bit more I'm\n> not sure that having the data both in the checkpoint record itself and in\n> XLOG_CHECKPOINT_REDO buys much. But it's also pretty much free, so ...\n\nYes. To me, having it in the redo record seems considerably more\nvaluable. Because that's where we're going to begin replay, so we\nshould catch most problems straight off. To escape detection at that\npoint, you need to not just be pointed at the wrong WAL archive, but\nactually have files of diverse origin mixed together in the same WAL\narchive. That's a less-likely error, and we still have some ways of\ncatching it if it happens.\n\n> Which would make the code more efficient...\n\nRight.\n\n> As outlined above, I don't think xlp_seg_size, xlp_xlog_blcksz buy us\n> anything, but that the protection by xlp_sysid is a bit more meaningful. So a\n> compromise position could be to include xlp_sysid in the page header, possibly\n> in a \"chopped up\" manner, as Matthias suggested.\n\nI'm not that keen on the idea of storing the upper half and lower half\nin alternate pages. That seems to me to add code complexity and\ncognitive burden with little increased likelihood of catching real\nproblems. I'm not completely opposed to the idea if somebody wants to\nmake it happen, but I bet it would be better to either store the whole\nthing or just cut it in half and store, say, the low-order bits.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 12 Oct 2023 09:46:53 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Lowering the default wal_blocksize to 4K" }, { "msg_contents": "On Thu, 12 Oct 2023 at 16:36, Robert Haas <[email protected]> wrote:\n\n> On Wed, Oct 11, 2023 at 4:28 PM Thomas Munro <[email protected]>\n> wrote:\n> > That leaves only the segments where a record starts exactly on the\n> > first usable byte of a segment, which is why I was trying to think of\n> > a way to cover that case too. I suggested we could notice and insert\n> > a new record at that place. But Andres suggests it would be too\n> > expensive and not worth worrying about.\n>\n> Hmm. Even in that case, xl_prev has to match. It's not like it's the\n> wild west. Sure, it's not nearly as good of a cross-check, but it's\n> something. It seems to me that it's not worth worrying very much about\n> xlp_seg_size or xlp_blcksz changing undetected in that scenario - if\n> you're doing that kind of advanced magic, you need to be careful\n> enough to not mess it up, and if we still cross-check once per\n> checkpoint cycle that's pretty good. I do worry a bit about the sysid\n> changing under us, though. It's not that hard to get your WAL archives\n> mixed up, and it'd be nice to catch that right away.\n>\n\nThis reminds me that xlp_tli is not being used to its full potential right\nnow either. We only check that it's not going backwards, but there is at\nleast one not very hard to hit way to get postgres to silently replay on\nthe wrong timeline. [1]\n\n[1]\nhttps://www.postgresql.org/message-id/CANwKhkMN3QwAcvuDZHb6wsvLRtkweBiYso-KLFykkQVWuQLcOw@mail.gmail.com\n-- \n\nAnts Aasma\nSenior Database Engineerwww.cybertec-postgresql.com\n\nOn Thu, 12 Oct 2023 at 16:36, Robert Haas <[email protected]> wrote:On Wed, Oct 11, 2023 at 4:28 PM Thomas Munro <[email protected]> wrote:\n> That leaves only the segments where a record starts exactly on the\n> first usable byte of a segment, which is why I was trying to think of\n> a way to cover that case too.  I suggested we could notice and insert\n> a new record at that place.  But Andres suggests it would be too\n> expensive and not worth worrying about.\n\nHmm. Even in that case, xl_prev has to match. It's not like it's the\nwild west. Sure, it's not nearly as good of a cross-check, but it's\nsomething. It seems to me that it's not worth worrying very much about\nxlp_seg_size or xlp_blcksz changing undetected in that scenario - if\nyou're doing that kind of advanced magic, you need to be careful\nenough to not mess it up, and if we still cross-check once per\ncheckpoint cycle that's pretty good. I do worry a bit about the sysid\nchanging under us, though. It's not that hard to get your WAL archives\nmixed up, and it'd be nice to catch that right away.This reminds me that xlp_tli is not being used to its full potential right now either. We only check that it's not going backwards, but there is at least one not very hard to hit way to get postgres to silently replay on the wrong timeline. [1] [1] https://www.postgresql.org/message-id/CANwKhkMN3QwAcvuDZHb6wsvLRtkweBiYso-KLFykkQVWuQLcOw@mail.gmail.com-- Ants Aasma\nSenior Database Engineer\nwww.cybertec-postgresql.com", "msg_date": "Thu, 12 Oct 2023 16:57:11 +0300", "msg_from": "Ants Aasma <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Lowering the default wal_blocksize to 4K" }, { "msg_contents": "On Thu, Oct 12, 2023 at 9:57 AM Ants Aasma <[email protected]> wrote:\n> This reminds me that xlp_tli is not being used to its full potential right now either. We only check that it's not going backwards, but there is at least one not very hard to hit way to get postgres to silently replay on the wrong timeline. [1]\n>\n> [1] https://www.postgresql.org/message-id/CANwKhkMN3QwAcvuDZHb6wsvLRtkweBiYso-KLFykkQVWuQLcOw@mail.gmail.com\n\nMaybe I'm missing something, but that seems mostly unrelated. What\nyou're discussing there is the server's ability to figure out when it\nought to perform a timeline switch. In other words, the server settles\non the wrong TLI and therefore opens and reads from the wrong\nfilename. But here, we're talking about the case where the server is\ncorrect about the TLI and LSN and hence opens exactly the right file\non disk, but the contents of the file on disk aren't what they're\nsupposed to be due to a procedural error.\n\nSaid differently, I don't see how anything we could do with xlp_tli\nwould actually fix the problem discussed in that thread. That can\ndetect a situation where the TLI of the file doesn't match the TLI of\nthe pages inside the file, but it doesn't help with the case where the\nserver decided to read the wrong file in the first place.\n\nBut this does make me wonder whether storing xlp_tli and xlp_pageaddr\nin every page is really worth the bit-space. That takes 12 bytes plus\nany padding it forces us to incur, but the actual entropy content of\nthose 12 bytes must be quite low. In normal cases probably 7 or so of\nthose bytes are going to consist entirely of zero bits (TLI < 256,\nLSN%8k == 0, LSN < 2^40). We could probably find a way of jumbling\nthe LSN, TLI, and maybe some other stuff into an 8-byte quantity or\neven perhaps a 4-byte quantity that would do about as good a job\ncatching problems as what we have now (e.g.\nLSN_HIGH32^LSN_LOW32^BITREVERSE(TLI)). In the event of a mismatch, the\nvalue actually stored in the page header would be harder for humans to\nunderstand, but I'm not sure that really matters here. Users should\nmostly be concerned with whether a WAL file matches the cluster where\nthey're trying to replay it; forensics on misplaced or corrupted WAL\nfiles should be comparatively rare.\n\n--\nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 12 Oct 2023 10:56:37 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Lowering the default wal_blocksize to 4K" } ]
[ { "msg_contents": "The attached patch adds the word 'databases' to show that template0,\ntemplate1 and postgres are databases in a default installation.\n\nBest regards,\nGurjeet\nhttp://Gurje.et", "msg_date": "Mon, 9 Oct 2023 21:55:29 -0700", "msg_from": "Gurjeet Singh <[email protected]>", "msg_from_op": true, "msg_subject": "Minor edit to src/bin/pg_upgrade/IMPLEMENTAION" }, { "msg_contents": "On Mon, Oct 09, 2023 at 09:55:29PM -0700, Gurjeet Singh wrote:\n> The attached patch adds the word 'databases' to show that template0,\n> template1 and postgres are databases in a default installation.\n\n+1.\n--\nMichael", "msg_date": "Tue, 10 Oct 2023 14:59:32 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Minor edit to src/bin/pg_upgrade/IMPLEMENTAION" } ]
[ { "msg_contents": "Hi all,\n\nWe have developed an extension, allowing PostgreSQL to run queries over\nencrypted data. This functionality is achieved via user-defined functions\nthat extend encrypted data types and support commonly used expression\noperations. Our tests validated its effectiveness with TPC-C and TPC-H\nbenchmarks. You may find the code here: https://github.com/SJTU-IPADS/HEDB.\n\nThis PoC is a reimplementation fork while collaborating with a cloud\ndatabase company; the aim is to enable their DBAs to manage databases\nwithout the risk of data leaks, *meeting the requirements of laws such as\nGDPR.*\n\nI am wondering if anyone thinks this is a nice feature. If so, I am curious\nabout the steps to further it mature and potentially have it incorporated\nas a part of PostgreSQL contrib.\n\nBest regards,\nMingyu Li\n\nHi all,We have developed an extension, allowing PostgreSQL to run queries over encrypted data. This functionality is achieved via user-defined functions that extend encrypted data types and support commonly used expression operations. Our tests validated its effectiveness with TPC-C and TPC-H benchmarks. You may find the code here: https://github.com/SJTU-IPADS/HEDB.This PoC is a reimplementation fork while collaborating with a cloud database company; the aim is to enable their DBAs to manage databases without the risk of data leaks, meeting the requirements of laws such as GDPR.I am wondering if anyone thinks this is a nice feature. If so, I am curious about the steps to further it mature and potentially have it incorporated as a part of PostgreSQL contrib.Best regards,Mingyu Li", "msg_date": "Tue, 10 Oct 2023 14:42:04 +0800", "msg_from": "Mingyu Li <[email protected]>", "msg_from_op": true, "msg_subject": "[PoC] run SQL over ciphertext" }, { "msg_contents": "Hello,\nI think this is a very interesting topic, especially for European companies\nwhere data sovereignty in the cloud has become critical.\n\nIf I understand correctly, the idea is to split users into 'client users'\nwho can see data unencrypted, and 'server users', who are administrators\nunable to decrypt data.\n\nA few questions:\n- how are secrets managed? Do you use a sort of vault to keep encryption\nkeys? Is there a master key to encrypt session keys?\n- what about performances? Is it possible to use indexes on encrypted\ncolumns?\n\n\nHi all,\n>\n> We have developed an extension, allowing PostgreSQL to run queries over\n> encrypted data. This functionality is achieved via user-defined functions\n> that extend encrypted data types and support commonly used expression\n> operations. Our tests validated its effectiveness with TPC-C and TPC-H\n> benchmarks. You may find the code here: https://github.com/SJTU-IPADS/HEDB\n> .\n>\n> This PoC is a reimplementation fork while collaborating with a cloud\n> database company; the aim is to enable their DBAs to manage databases\n> without the risk of data leaks, *meeting the requirements of laws such as\n> GDPR.*\n>\n> I am wondering if anyone thinks this is a nice feature. If so, I am\n> curious about the steps to further it mature and potentially have it\n> incorporated as a part of PostgreSQL contrib.\n>\n> Best regards,\n> Mingyu Li\n>\n\n\n--\nbest regards\nGiampaolo Capelli\n\nHello,I think this is a very interesting topic, especially for European companies where data sovereignty in the cloud has become critical.If I understand correctly, the idea is to split users into 'client users' who can see data unencrypted, and 'server users', who are administrators unable to decrypt data.A few questions:- how are secrets managed? Do you use a sort of vault to keep encryption keys? Is there a master key to encrypt session keys?- what about performances? Is it possible to use indexes on encrypted columns?Hi all,We have developed an extension, allowing PostgreSQL to run queries over encrypted data. This functionality is achieved via user-defined functions that extend encrypted data types and support commonly used expression operations. Our tests validated its effectiveness with TPC-C and TPC-H benchmarks. You may find the code here: https://github.com/SJTU-IPADS/HEDB.This PoC is a reimplementation fork while collaborating with a cloud database company; the aim is to enable their DBAs to manage databases without the risk of data leaks, meeting the requirements of laws such as GDPR.I am wondering if anyone thinks this is a nice feature. If so, I am curious about the steps to further it mature and potentially have it incorporated as a part of PostgreSQL contrib.Best regards,Mingyu Li\n--best regardsGiampaolo Capelli", "msg_date": "Tue, 10 Oct 2023 10:17:54 +0200", "msg_from": "Giampaolo Capelli <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PoC] run SQL over ciphertext" }, { "msg_contents": "On 10.10.23 08:42, Mingyu Li wrote:\n> We have developed an extension, allowing PostgreSQL to run queries over \n> encrypted data. This functionality is achieved via user-defined \n> functions that extend encrypted data types and support commonly used \n> expression operations. Our tests validated its effectiveness with TPC-C \n> and TPC-H benchmarks. You may find the code here: \n> https://github.com/SJTU-IPADS/HEDB <https://github.com/SJTU-IPADS/HEDB>.\n> \n> This PoC is a reimplementation fork while collaborating with a cloud \n> database company; the aim is to enable their DBAs to manage databases \n> without the risk of data leaks, /meeting the requirements of laws such \n> as GDPR./\n> \n> I am wondering if anyone thinks this is a nice feature. If so, I am \n> curious about the steps to further it mature and potentially have it \n> incorporated as a part of PostgreSQL contrib.\n\nFYI, see also \n<https://www.postgresql.org/message-id/flat/[email protected]> \nfor a similar project.\n\n\n\n", "msg_date": "Wed, 11 Oct 2023 08:43:21 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PoC] run SQL over ciphertext" }, { "msg_contents": "Hi,\n\n> the idea is to split users into 'client users' who can see data\nunencrypted, and 'server users', who are administrators unable to decrypt\ndata.\n\nExactly!\n\n> how are secrets managed? Do you use a sort of vault to keep encryption\nkeys?\n\nGood question. The client holds the key and uses a proxy for transparent\nencryption. The implementation also assumes secure storage of encryption\nkeys in hardware-protected memory called \"enclaves\". Only client users and\nserver enclaves have access to the plaintext. Please take a glance at page\n5 of the slide: www.usenix.org/system/files/osdi23_slides_li_mingyu_v2.pdf.\nModern clouds like OVH and Azure now offer hardware enclaves. If enclaves\nare not available, a rich client-side proxy can be used, with extra\nround-trip costs.\n\n> Is there a master key to encrypt session keys?\n\nThere should be.\n\n> what about performances?\n\nTPC-C overhead is <50%. TPC-H overhead ranges from 5-20 times the baseline;\nthere is room for TPC-H improvement and we are working on it.\n\n> Is it possible to use indexes on encrypted columns?\n\nYes. The extension allows client users to intentionally reveal the ordering\nof encrypted columns for indexing purposes.\n\n--\nBest,\nMingyu\n\nGiampaolo Capelli <[email protected]> 于2023年10月10日周二 16:18写道:\n\n> Hello,\n> I think this is a very interesting topic, especially for European\n> companies where data sovereignty in the cloud has become critical.\n>\n> If I understand correctly, the idea is to split users into 'client users'\n> who can see data unencrypted, and 'server users', who are administrators\n> unable to decrypt data.\n>\n> A few questions:\n> - how are secrets managed? Do you use a sort of vault to keep encryption\n> keys? Is there a master key to encrypt session keys?\n> - what about performances? Is it possible to use indexes on encrypted\n> columns?\n>\n>\n> Hi all,\n>>\n>> We have developed an extension, allowing PostgreSQL to run queries over\n>> encrypted data. This functionality is achieved via user-defined functions\n>> that extend encrypted data types and support commonly used expression\n>> operations. Our tests validated its effectiveness with TPC-C and TPC-H\n>> benchmarks. You may find the code here:\n>> https://github.com/SJTU-IPADS/HEDB.\n>>\n>> This PoC is a reimplementation fork while collaborating with a cloud\n>> database company; the aim is to enable their DBAs to manage databases\n>> without the risk of data leaks, *meeting the requirements of laws such\n>> as GDPR.*\n>>\n>> I am wondering if anyone thinks this is a nice feature. If so, I am\n>> curious about the steps to further it mature and potentially have it\n>> incorporated as a part of PostgreSQL contrib.\n>>\n>> Best regards,\n>> Mingyu Li\n>>\n>\n>\n> --\n> best regards\n> Giampaolo Capelli\n>\n\nHi,> the idea is to split users into 'client users' who can see data unencrypted, and 'server users', who are administrators unable to decrypt data.Exactly!> how are secrets managed? Do you use a sort of vault to keep encryption keys?Good question. The client holds the key and uses a proxy for transparent encryption. The implementation also assumes secure storage of encryption keys in hardware-protected memory called \"enclaves\". Only client users and server enclaves have access to the plaintext. Please take a glance at page 5 of the slide: www.usenix.org/system/files/osdi23_slides_li_mingyu_v2.pdf. Modern clouds like OVH and Azure now offer hardware enclaves. If enclaves are not available, a rich client-side proxy can be used, with extra round-trip costs.> Is there a master key to encrypt session keys?There should be.> what about performances?TPC-C overhead is <50%. TPC-H overhead ranges from 5-20 times the baseline; there is room for TPC-H improvement and we are working on it.> Is it possible to use indexes on encrypted columns?Yes. The extension allows client users to intentionally reveal the ordering of encrypted columns for indexing purposes.--Best,MingyuGiampaolo Capelli <[email protected]> 于2023年10月10日周二 16:18写道:Hello,I think this is a very interesting topic, especially for European companies where data sovereignty in the cloud has become critical.If I understand correctly, the idea is to split users into 'client users' who can see data unencrypted, and 'server users', who are administrators unable to decrypt data.A few questions:- how are secrets managed? Do you use a sort of vault to keep encryption keys? Is there a master key to encrypt session keys?- what about performances? Is it possible to use indexes on encrypted columns?Hi all,We have developed an extension, allowing PostgreSQL to run queries over encrypted data. This functionality is achieved via user-defined functions that extend encrypted data types and support commonly used expression operations. Our tests validated its effectiveness with TPC-C and TPC-H benchmarks. You may find the code here: https://github.com/SJTU-IPADS/HEDB.This PoC is a reimplementation fork while collaborating with a cloud database company; the aim is to enable their DBAs to manage databases without the risk of data leaks, meeting the requirements of laws such as GDPR.I am wondering if anyone thinks this is a nice feature. If so, I am curious about the steps to further it mature and potentially have it incorporated as a part of PostgreSQL contrib.Best regards,Mingyu Li\n--best regardsGiampaolo Capelli", "msg_date": "Wed, 11 Oct 2023 15:04:55 +0800", "msg_from": "Mingyu Li <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PoC] run SQL over ciphertext" }, { "msg_contents": "Hello Peter,\n\n>\nhttps://www.postgresql.org/message-id/flat/[email protected]\n\nThanks for referring me to your TCE project, nice work! It takes time to go\nthrough the long thread of discussion and the patch.\n\nA quick question: what operations do pg_encrypted_* support? Are\n(in)equality checks sufficient to fulfill real-world queries?\n\n--\nBest,\nMingyu\n\nPeter Eisentraut <[email protected]> 于2023年10月11日周三 14:43写道:\n\n> On 10.10.23 08:42, Mingyu Li wrote:\n> > We have developed an extension, allowing PostgreSQL to run queries over\n> > encrypted data. This functionality is achieved via user-defined\n> > functions that extend encrypted data types and support commonly used\n> > expression operations. Our tests validated its effectiveness with TPC-C\n> > and TPC-H benchmarks. You may find the code here:\n> > https://github.com/SJTU-IPADS/HEDB <https://github.com/SJTU-IPADS/HEDB>.\n> >\n> > This PoC is a reimplementation fork while collaborating with a cloud\n> > database company; the aim is to enable their DBAs to manage databases\n> > without the risk of data leaks, /meeting the requirements of laws such\n> > as GDPR./\n> >\n> > I am wondering if anyone thinks this is a nice feature. If so, I am\n> > curious about the steps to further it mature and potentially have it\n> > incorporated as a part of PostgreSQL contrib.\n>\n> FYI, see also\n> <\n> https://www.postgresql.org/message-id/flat/[email protected]>\n>\n> for a similar project.\n>\n>\n\nHello Peter,> https://www.postgresql.org/message-id/flat/[email protected] for referring me to your TCE project, nice work! It takes time to go through the long thread of discussion and the patch.A quick question: what operations do pg_encrypted_* support? Are (in)equality checks sufficient to fulfill real-world queries?--Best,MingyuPeter Eisentraut <[email protected]> 于2023年10月11日周三 14:43写道:On 10.10.23 08:42, Mingyu Li wrote:\n> We have developed an extension, allowing PostgreSQL to run queries over \n> encrypted data. This functionality is achieved via user-defined \n> functions that extend encrypted data types and support commonly used \n> expression operations. Our tests validated its effectiveness with TPC-C \n> and TPC-H benchmarks. You may find the code here: \n> https://github.com/SJTU-IPADS/HEDB <https://github.com/SJTU-IPADS/HEDB>.\n> \n> This PoC is a reimplementation fork while collaborating with a cloud \n> database company; the aim is to enable their DBAs to manage databases \n> without the risk of data leaks, /meeting the requirements of laws such \n> as GDPR./\n> \n> I am wondering if anyone thinks this is a nice feature. If so, I am \n> curious about the steps to further it mature and potentially have it \n> incorporated as a part of PostgreSQL contrib.\n\nFYI, see also \n<https://www.postgresql.org/message-id/flat/[email protected]> \nfor a similar project.", "msg_date": "Wed, 11 Oct 2023 15:34:27 +0800", "msg_from": "Mingyu Li <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PoC] run SQL over ciphertext" } ]
[ { "msg_contents": "The function has_multiple_baserels() is used in set_subquery_pathlist()\nto check and see if there are more than 1 base rel, by looping through\nsimple_rel_array[]. I think one simpler way to do that is to leverage\nroot->all_baserels by\n\n bms_membership(root->all_baserels) == BMS_MULTIPLE\n\nall_baserels is computed in deconstruct_jointree (v16) or in\nmake_one_rel (v15 and earlier), both are before we generate access paths\nfor subquery RTEs, and it contains all base rels (but not \"other\" rels).\nSo it should be a suitable replacement. I doubt that there would be any\nmeasurable performance gains. So please consider it cosmetic.\n\nI've attached a patch to do that. Any thoughts?\n\nThanks\nRichard", "msg_date": "Tue, 10 Oct 2023 16:22:02 +0800", "msg_from": "Richard Guo <[email protected]>", "msg_from_op": true, "msg_subject": "Retire has_multiple_baserels()" }, { "msg_contents": "Hi,\n\n> The function has_multiple_baserels() is used in set_subquery_pathlist()\n> to check and see if there are more than 1 base rel, by looping through\n> simple_rel_array[]. I think one simpler way to do that is to leverage\n> root->all_baserels by\n>\n> bms_membership(root->all_baserels) == BMS_MULTIPLE\n>\n> all_baserels is computed in deconstruct_jointree (v16) or in\n> make_one_rel (v15 and earlier), both are before we generate access paths\n> for subquery RTEs, and it contains all base rels (but not \"other\" rels).\n> So it should be a suitable replacement. I doubt that there would be any\n> measurable performance gains. So please consider it cosmetic.\n>\n> I've attached a patch to do that. Any thoughts?\n\nI used the following patch to double check that nothing was missed:\n\n```\n--- a/src/backend/optimizer/path/allpaths.c\n+++ b/src/backend/optimizer/path/allpaths.c\n@@ -2207,8 +2207,13 @@ has_multiple_baserels(PlannerInfo *root)\n /* ignore RTEs that are \"other rels\" */\n if (brel->reloptkind == RELOPT_BASEREL)\n if (++num_base_rels > 1)\n+ {\n+\nAssert(bms_membership(root->all_baserels) == BMS_MULTIPLE);\n return true;\n+ }\n }\n+\n+ Assert(bms_membership(root->all_baserels) != BMS_MULTIPLE);\n return false;\n }\n```\n\nIt wasn't. The patch LGTM.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Tue, 10 Oct 2023 12:43:48 +0300", "msg_from": "Aleksander Alekseev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Retire has_multiple_baserels()" }, { "msg_contents": "On Tue, Oct 10, 2023 at 5:43 PM Aleksander Alekseev <\[email protected]> wrote:\n\n> I used the following patch to double check that nothing was missed:\n>\n> ```\n> --- a/src/backend/optimizer/path/allpaths.c\n> +++ b/src/backend/optimizer/path/allpaths.c\n> @@ -2207,8 +2207,13 @@ has_multiple_baserels(PlannerInfo *root)\n> /* ignore RTEs that are \"other rels\" */\n> if (brel->reloptkind == RELOPT_BASEREL)\n> if (++num_base_rels > 1)\n> + {\n> +\n> Assert(bms_membership(root->all_baserels) == BMS_MULTIPLE);\n> return true;\n> + }\n> }\n> +\n> + Assert(bms_membership(root->all_baserels) != BMS_MULTIPLE);\n> return false;\n> }\n> ```\n>\n> It wasn't. The patch LGTM.\n\n\nThanks for the verification.\n\nThanks\nRichard\n\nOn Tue, Oct 10, 2023 at 5:43 PM Aleksander Alekseev <[email protected]> wrote:\nI used the following patch to double check that nothing was missed:\n\n```\n--- a/src/backend/optimizer/path/allpaths.c\n+++ b/src/backend/optimizer/path/allpaths.c\n@@ -2207,8 +2207,13 @@ has_multiple_baserels(PlannerInfo *root)\n                /* ignore RTEs that are \"other rels\" */\n                if (brel->reloptkind == RELOPT_BASEREL)\n                        if (++num_base_rels > 1)\n+                       {\n+\nAssert(bms_membership(root->all_baserels) == BMS_MULTIPLE);\n                                return true;\n+                       }\n        }\n+\n+       Assert(bms_membership(root->all_baserels) != BMS_MULTIPLE);\n        return false;\n }\n```\n\nIt wasn't. The patch LGTM.Thanks for the verification.ThanksRichard", "msg_date": "Tue, 10 Oct 2023 19:01:27 +0800", "msg_from": "Richard Guo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Retire has_multiple_baserels()" }, { "msg_contents": "Aleksander Alekseev <[email protected]> writes:\n>> The function has_multiple_baserels() is used in set_subquery_pathlist()\n>> to check and see if there are more than 1 base rel, by looping through\n>> simple_rel_array[]. I think one simpler way to do that is to leverage\n>> root->all_baserels by\n>> \tbms_membership(root->all_baserels) == BMS_MULTIPLE\n\n> I used the following patch to double check that nothing was missed:\n> ...\n> It wasn't. The patch LGTM.\n\nI thought this test wasn't too complete, because has_multiple_baserels\nisn't reached at all in many cases thanks to the way the calling if()\nis coded. I tried testing like this instead:\n\ndiff --git a/src/backend/optimizer/path/allpaths.c b/src/backend/optimizer/path/allpaths.c\nindex eea49cca7b..3f6fc51fb4 100644\n--- a/src/backend/optimizer/path/allpaths.c\n+++ b/src/backend/optimizer/path/allpaths.c\n@@ -2649,6 +2649,8 @@ set_subquery_pathlist(PlannerInfo *root, RelOptInfo *rel,\n */\n remove_unused_subquery_outputs(subquery, rel, run_cond_attrs);\n \n+ Assert(has_multiple_baserels(root) == (bms_membership(root->all_baserels) == BMS_MULTIPLE));\n+\n /*\n * We can safely pass the outer tuple_fraction down to the subquery if the\n * outer level has no joining, aggregation, or sorting to do. Otherwise\n\nand came to the same conclusion: check-world finds no cases where\nthe assertion fails. So it LGTM too. Pushed.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 10 Oct 2023 13:13:48 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Retire has_multiple_baserels()" }, { "msg_contents": "On Wed, Oct 11, 2023 at 1:13 AM Tom Lane <[email protected]> wrote:\n\n> I thought this test wasn't too complete, because has_multiple_baserels\n> isn't reached at all in many cases thanks to the way the calling if()\n> is coded. I tried testing like this instead:\n>\n> diff --git a/src/backend/optimizer/path/allpaths.c\n> b/src/backend/optimizer/path/allpaths.c\n> index eea49cca7b..3f6fc51fb4 100644\n> --- a/src/backend/optimizer/path/allpaths.c\n> +++ b/src/backend/optimizer/path/allpaths.c\n> @@ -2649,6 +2649,8 @@ set_subquery_pathlist(PlannerInfo *root, RelOptInfo\n> *rel,\n> */\n> remove_unused_subquery_outputs(subquery, rel, run_cond_attrs);\n>\n> + Assert(has_multiple_baserels(root) ==\n> (bms_membership(root->all_baserels) == BMS_MULTIPLE));\n> +\n> /*\n> * We can safely pass the outer tuple_fraction down to the subquery\n> if the\n> * outer level has no joining, aggregation, or sorting to do.\n> Otherwise\n>\n> and came to the same conclusion: check-world finds no cases where\n> the assertion fails. So it LGTM too. Pushed.\n\n\nThanks for pushing!\n\nThanks\nRichard\n\nOn Wed, Oct 11, 2023 at 1:13 AM Tom Lane <[email protected]> wrote:\nI thought this test wasn't too complete, because has_multiple_baserels\nisn't reached at all in many cases thanks to the way the calling if()\nis coded.  I tried testing like this instead:\n\ndiff --git a/src/backend/optimizer/path/allpaths.c b/src/backend/optimizer/path/allpaths.c\nindex eea49cca7b..3f6fc51fb4 100644\n--- a/src/backend/optimizer/path/allpaths.c\n+++ b/src/backend/optimizer/path/allpaths.c\n@@ -2649,6 +2649,8 @@ set_subquery_pathlist(PlannerInfo *root, RelOptInfo *rel,\n      */\n     remove_unused_subquery_outputs(subquery, rel, run_cond_attrs);\n\n+    Assert(has_multiple_baserels(root) == (bms_membership(root->all_baserels) == BMS_MULTIPLE));\n+\n     /*\n      * We can safely pass the outer tuple_fraction down to the subquery if the\n      * outer level has no joining, aggregation, or sorting to do. Otherwise\n\nand came to the same conclusion: check-world finds no cases where\nthe assertion fails.  So it LGTM too.  Pushed.Thanks for pushing!ThanksRichard", "msg_date": "Wed, 11 Oct 2023 09:59:59 +0800", "msg_from": "Richard Guo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Retire has_multiple_baserels()" } ]
[ { "msg_contents": "MobilityDB\nhttps://github.com/MobilityDB/MobilityDB\nis a PostgreSQL extension that depends on PosGIS.\n\nBradford Boyle who has been working on packaging MobilityDB\nhttps://www.postgresql.org/message-id/CAPqRbE716D3gpD0JDbaFAb72ELaJrPpG1LUZvobELNBgL3R0+g@mail.gmail.com\nhighlighted the issue of which of the GUC shared_preload_libraries vs\nlocal_preload_libraries vs session_preload_libraries should be used to load\nthe postgis-3 library.\n\nOur understanding of the information in the manual\nhttps://www.postgresql.org/docs/16/runtime-config-client.html#GUC-SESSION-PRELOAD-LIBRARIES\ndoes not give us a clear-cut answer for this question. We are looking for\nadvice on which of the three options mentioned above should be used.\n\nMobilityDB requires loading PostGIS before any MobilityDB query can be\nissued. For example, commenting out the following line\n#shared_preload_libraries = 'postgis-3'\nin the postgresql.conf shows the following\n\n$ psql test\npsql (15.3)\nType \"help\" for help.\n\ntest=# select tgeompoint 'Point(1 1)@2000-01-01';\n2023-10-03 16:41:25.980 CEST [8683] ERROR: could not load library\n\"/usr/local/pgsql/15/lib/libMobilityDB-1.1.so\": /usr/local/pgsql/15/lib/\nlibMobilityDB-1.1.so: undefined symbol: ST_Intersects at character 19\n2023-10-03 16:41:25.980 CEST [8683] STATEMENT: select tgeompoint 'Point(1\n1)@2000-01-01';\nERROR: could not load library \"/usr/local/pgsql/15/lib/libMobilityDB-1.1.so\":\n/usr/local/pgsql/15/lib/libMobilityDB-1.1.so: undefined symbol:\nST_Intersects\nLINE 1: select tgeompoint 'Point(1 1)@2000-01-01';\n ^\ntest=# select st_point(1,1);\n st_point\n--------------------------------------------\n 0101000000000000000000F03F000000000000F03F\n(1 row)\n\ntest=# select tgeompoint 'Point(1 1)@2000-01-01';\n tgeompoint\n-------------------------------------------------------------------\n 0101000000000000000000F03F000000000000F03F@2000-01-01 00:00:00+01\n(1 row)\n\ntest=#\n------------------------------------------------------------\n\nAs can be seen above, it is not REALLY mandatory to have\nshared_preload_libraries = 'postgis-3' but then the user is responsible for\nissuing a query to load PostGIS (select st_point(1,1); above) and then she\nis able to execute MobilityDB queries.\n\nThanks for your advice.\n\nMobilityDBhttps://github.com/MobilityDB/MobilityDBis a PostgreSQL extension that depends on PosGIS.Bradford Boyle who has been working on packaging MobilityDBhttps://www.postgresql.org/message-id/CAPqRbE716D3gpD0JDbaFAb72ELaJrPpG1LUZvobELNBgL3R0+g@mail.gmail.comhighlighted the issue of which of the GUC shared_preload_libraries vs local_preload_libraries vs session_preload_libraries should be used to load the postgis-3 library.Our understanding of the information in the manualhttps://www.postgresql.org/docs/16/runtime-config-client.html#GUC-SESSION-PRELOAD-LIBRARIESdoes not give us a clear-cut answer for this question. We are looking for advice on which of the three options mentioned above should be used.MobilityDB requires loading PostGIS before any MobilityDB query can be issued. For example, commenting out the following line#shared_preload_libraries = 'postgis-3'in the postgresql.conf shows the following$ psql testpsql (15.3)Type \"help\" for help.test=# select tgeompoint 'Point(1 1)@2000-01-01';2023-10-03 16:41:25.980 CEST [8683] ERROR:  could not load library \"/usr/local/pgsql/15/lib/libMobilityDB-1.1.so\": /usr/local/pgsql/15/lib/libMobilityDB-1.1.so: undefined symbol: ST_Intersects at character 192023-10-03 16:41:25.980 CEST [8683] STATEMENT:  select tgeompoint 'Point(1 1)@2000-01-01';ERROR:  could not load library \"/usr/local/pgsql/15/lib/libMobilityDB-1.1.so\": /usr/local/pgsql/15/lib/libMobilityDB-1.1.so: undefined symbol: ST_IntersectsLINE 1: select tgeompoint 'Point(1 1)@2000-01-01';                          ^test=# select st_point(1,1);                  st_point-------------------------------------------- 0101000000000000000000F03F000000000000F03F(1 row)test=# select tgeompoint 'Point(1 1)@2000-01-01';                            tgeompoint------------------------------------------------------------------- 0101000000000000000000F03F000000000000F03F@2000-01-01 00:00:00+01(1 row)test=#------------------------------------------------------------As can be seen above, it is not REALLY mandatory to have shared_preload_libraries = 'postgis-3' but then the user is responsible for issuing a query to load PostGIS (select st_point(1,1); above) and then she is able to execute MobilityDB queries.Thanks for your advice.", "msg_date": "Tue, 10 Oct 2023 10:58:57 +0200", "msg_from": "Esteban Zimanyi <[email protected]>", "msg_from_op": true, "msg_subject": "Fwd: Advice about preloaded libraries" }, { "msg_contents": "Hi,\n\n> MobilityDB\n> https://github.com/MobilityDB/MobilityDB\n> is a PostgreSQL extension that depends on PosGIS.\n>\n> Bradford Boyle who has been working on packaging MobilityDB\n> https://www.postgresql.org/message-id/CAPqRbE716D3gpD0JDbaFAb72ELaJrPpG1LUZvobELNBgL3R0+g@mail.gmail.com\n> highlighted the issue of which of the GUC shared_preload_libraries vs local_preload_libraries vs session_preload_libraries should be used to load the postgis-3 library.\n>\n> Our understanding of the information in the manual\n> https://www.postgresql.org/docs/16/runtime-config-client.html#GUC-SESSION-PRELOAD-LIBRARIES\n> does not give us a clear-cut answer for this question. We are looking for advice on which of the three options mentioned above should be used.\n>\n> MobilityDB requires loading PostGIS before any MobilityDB query can be issued. For example, commenting out the following line\n> #shared_preload_libraries = 'postgis-3'\n> in the postgresql.conf shows the following\n>\n> $ psql test\n> psql (15.3)\n> Type \"help\" for help.\n>\n> test=# select tgeompoint 'Point(1 1)@2000-01-01';\n> 2023-10-03 16:41:25.980 CEST [8683] ERROR: could not load library \"/usr/local/pgsql/15/lib/libMobilityDB-1.1.so\": /usr/local/pgsql/15/lib/libMobilityDB-1.1.so: undefined symbol: ST_Intersects at character 19\n> 2023-10-03 16:41:25.980 CEST [8683] STATEMENT: select tgeompoint 'Point(1 1)@2000-01-01';\n> ERROR: could not load library \"/usr/local/pgsql/15/lib/libMobilityDB-1.1.so\": /usr/local/pgsql/15/lib/libMobilityDB-1.1.so: undefined symbol: ST_Intersects\n> LINE 1: select tgeompoint 'Point(1 1)@2000-01-01';\n> ^\n> test=# select st_point(1,1);\n> st_point\n> --------------------------------------------\n> 0101000000000000000000F03F000000000000F03F\n> (1 row)\n>\n> test=# select tgeompoint 'Point(1 1)@2000-01-01';\n> tgeompoint\n> -------------------------------------------------------------------\n> 0101000000000000000000F03F000000000000F03F@2000-01-01 00:00:00+01\n> (1 row)\n>\n> test=#\n> ------------------------------------------------------------\n>\n> As can be seen above, it is not REALLY mandatory to have shared_preload_libraries = 'postgis-3' but then the user is responsible for issuing a query to load PostGIS (select st_point(1,1); above) and then she is able to execute MobilityDB queries.\n>\n> Thanks for your advice.\n\nI read the email several times but I'm still not sure I understand\nwhat in particular you are asking.\n\n From what I can tell the goal is to package a third-party project,\nMobilityDB in this case. This project should have clear instructions\non how to install it. If not, consider reporting the issue to the\ndevelopers. If they can't provide good documentation maybe the project\nis not mature enough to invest your time into it yet.\n\nIf you find the observed behavior of PostgreSQL confusing, that's\nunderstandable, but this behavior is expected one. Typically\nPostgreSQL loads an extension to the backend when needed. This is what\nhappens when a user calls `select st_point(1,1);`. You can load an\nextension to the postmaster by using shared_preload_libraries. This is\ntypically used when an extension needs to acquire locks and shared\nmemory when DBMS starts. To my knowledge PostGIS doesn't use any of\nthis, although I'm not an expert in PostGIS.\n\nAll PostgreSQL GUCs are well documented. You can find more details here [1].\n\nHopefully I answered the right question. If not please be a bit more specific.\n\n[1]: https://www.postgresql.org/docs/current/runtime-config-client.html\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Tue, 10 Oct 2023 13:01:21 +0300", "msg_from": "Aleksander Alekseev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Advice about preloaded libraries" }, { "msg_contents": "On 2023-Oct-10, Esteban Zimanyi wrote:\n\n> As can be seen above, it is not REALLY mandatory to have\n> shared_preload_libraries = 'postgis-3' but then the user is responsible for\n> issuing a query to load PostGIS (select st_point(1,1); above) and then she\n> is able to execute MobilityDB queries.\n\nCalling a function that exists in some library will cause the library to\nbe loaded. Alternatively, you can cause the library to be loaded\nautomatically at some point of the start sequence, by\nshared_preload_libraries or the other configuration options. Or you can\nuse the LOAD statement.\n\nIf by whichever mechanism postgis has been loaded into your session,\nthen calling a function in MobilityDB will work fine, because the\npostgis library will have been loaded. It doesn't matter exactly how\nwas postgis loaded.\n\nThe advantage of using shared_preload_libraries is performance of\nconnection establishment: the library is loaded by the postmaster, so\neach new backend inherits it already loaded and doesn't have to load it\nitself.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"I love the Postgres community. It's all about doing things _properly_. :-)\"\n(David Garamond)\n\n\n", "msg_date": "Tue, 10 Oct 2023 17:15:36 +0200", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fwd: Advice about preloaded libraries" } ]
[ { "msg_contents": "Dears,\n\nI noticed that in the `check_GUC_init` function, there is a direct\ncomparison using the != operator for two double values, which seems\nproblematic.\n\nI wrote this patch to fix this.\n\n--\nRegard,\nBowen Shi", "msg_date": "Tue, 10 Oct 2023 18:31:48 +0800", "msg_from": "Bowen Shi <[email protected]>", "msg_from_op": true, "msg_subject": "Comparing two double values method" }, { "msg_contents": "On Tue, 10 Oct 2023 at 12:33, Bowen Shi <[email protected]> wrote:\n>\n> Dears,\n>\n> I noticed that in the `check_GUC_init` function, there is a direct\n> comparison using the != operator for two double values, which seems\n> problematic.\n\nI don't think I understand the problem. The code checks that the\ndynamic initialization values are equal to the current value of the\nGUC, or 0. Why would a \"margin for error\" of 1e-6 be of any use?\nWhy was the margin of 1e-6 chosen instead of one based on the exponent\nof the GUC's current value (if any)?\n\nIn my view, this would break the code, not fix it, as it would\ndecrease the cases where we detect broken GUC registrations.\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n\n", "msg_date": "Tue, 10 Oct 2023 12:55:15 +0200", "msg_from": "Matthias van de Meent <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Comparing two double values method" }, { "msg_contents": "On 10/10/2023 13:31, Bowen Shi wrote:\n> Dears,\n> \n> I noticed that in the `check_GUC_init` function, there is a direct\n> comparison using the != operator for two double values, which seems\n> problematic.\n> \n> I wrote this patch to fix this.\n\nNo, the compile-time initial values should match exactly.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Tue, 10 Oct 2023 13:56:47 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Comparing two double values method" }, { "msg_contents": "You're right, I made a mistake.\n\nThanks for your explanation.\n\n\n", "msg_date": "Tue, 10 Oct 2023 20:12:33 +0800", "msg_from": "Bowen Shi <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Comparing two double values method" }, { "msg_contents": "Heikki Linnakangas <[email protected]> writes:\n> On 10/10/2023 13:31, Bowen Shi wrote:\n>> I noticed that in the `check_GUC_init` function, there is a direct\n>> comparison using the != operator for two double values, which seems\n>> problematic.\n\n> No, the compile-time initial values should match exactly.\n\nRight. The point of this test is to catch cases where you wrote,\nsay,\n\n\tdouble my_guc = 1.1;\n\nbut the boot_val for it in guc_tables.c is 1.2. There is no\nreason to allow any divergence in the spellings of the two C\nliterals, so as long as they're compiled by the same compiler\nthere's no reason to expect that the compiled values wouldn't\nbe bit-equal.\n\nThe point of the exclusions for zero is to allow you to just\nwrite\n\n\tdouble my_guc;\n\nwithout expressing an opinion about the initial value.\n(Yes, this does mean that \"double my_guc = 0.0;\" could\nbe misleading. It's just a heuristic though.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 10 Oct 2023 10:13:36 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Comparing two double values method" } ]
[ { "msg_contents": "Hi,\nI found a typo here while using psql. I think this should be a trivial patch.\nThe typo is that there is an extra `l` before `列出所有事件触发器`.\n\n-- \nregards,\nJinser Kafak.", "msg_date": "Wed, 11 Oct 2023 01:11:15 +0800", "msg_from": "jinser <[email protected]>", "msg_from_op": true, "msg_subject": "Fix typo in psql zh_CN.po" }, { "msg_contents": "On Wed, Oct 11, 2023 at 4:30 AM jinser <[email protected]> wrote:\n\n> Hi,\n> I found a typo here while using psql. I think this should be a trivial\n> patch.\n> The typo is that there is an extra `l` before `列出所有事件触发器`.\n\n\n+1.\n\nThanks\nRichard\n\nOn Wed, Oct 11, 2023 at 4:30 AM jinser <[email protected]> wrote:Hi,\nI found a typo here while using psql. I think this should be a trivial patch.\nThe typo is that there is an extra `l` before `列出所有事件触发器`.+1.ThanksRichard", "msg_date": "Wed, 11 Oct 2023 09:58:18 +0800", "msg_from": "Richard Guo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fix typo in psql zh_CN.po" }, { "msg_contents": "Richard Guo <[email protected]> writes:\n> On Wed, Oct 11, 2023 at 4:30 AM jinser <[email protected]> wrote:\n>> I found a typo here while using psql. I think this should be a trivial\n>> patch.\n>> The typo is that there is an extra `l` before `列出所有事件触发器`.\n\n> +1.\n\nFYI, we have a slightly odd process around this: PG's translated\nmessages are managed by a different set of people and have a\ndifferent authoritative repo. There's enough overlap between\nthose people and pgsql-hackers that this report will likely be\nseen by somebody who can commit into the translations repo.\nIdeally however, translation bugs should be reported to\nthe pgsql-translators mailing list.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 10 Oct 2023 22:51:15 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fix typo in psql zh_CN.po" }, { "msg_contents": "Sorry, I missed the pgsql-translators.\nWell, I searched for typo patches in pgsql-hackers before, and now that you\nreminded me, those were supposed to be source code typos...\nI'll send the patch to pgsql-translators :)\n\nTom Lane <[email protected]> 于 2023年10月11日周三 10:51写道:\n\n> Richard Guo <[email protected]> writes:\n> > On Wed, Oct 11, 2023 at 4:30 AM jinser <[email protected]> wrote:\n> >> I found a typo here while using psql. I think this should be a trivial\n> >> patch.\n> >> The typo is that there is an extra `l` before `列出所有事件触发器`.\n>\n> > +1.\n>\n> FYI, we have a slightly odd process around this: PG's translated\n> messages are managed by a different set of people and have a\n> different authoritative repo. There's enough overlap between\n> those people and pgsql-hackers that this report will likely be\n> seen by somebody who can commit into the translations repo.\n> Ideally however, translation bugs should be reported to\n> the pgsql-translators mailing list.\n>\n> regards, tom lane\n>\n>\n\nSorry, I missed the pgsql-translators. Well, I searched for typo patches in pgsql-hackers before, and now that you reminded me, those were supposed to be source code typos...I'll send the patch to pgsql-translators :)Tom Lane <[email protected]> 于 2023年10月11日周三 10:51写道:Richard Guo <[email protected]> writes:\n> On Wed, Oct 11, 2023 at 4:30 AM jinser <[email protected]> wrote:\n>> I found a typo here while using psql. I think this should be a trivial\n>> patch.\n>> The typo is that there is an extra `l` before `列出所有事件触发器`.\n\n> +1.\n\nFYI, we have a slightly odd process around this: PG's translated\nmessages are managed by a different set of people and have a\ndifferent authoritative repo.  There's enough overlap between\nthose people and pgsql-hackers that this report will likely be\nseen by somebody who can commit into the translations repo.\nIdeally however, translation bugs should be reported to\nthe pgsql-translators mailing list.\n\n                        regards, tom lane", "msg_date": "Wed, 11 Oct 2023 11:34:23 +0800", "msg_from": "jinser <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Fix typo in psql zh_CN.po" } ]
[ { "msg_contents": "Gents, I have a suggestion for DISTINCT ON clause syntax.\n \n DISTINCT ON (expression(s) [ORDER BY expression(s)]) \nDetermines the precedence within each DISTINCT ON group (i.e. the ‘first’ row to be picked)\n\nMotivation\n• Using the query-wide ORDER BY clause to determine which record to pick mixes two unrelated concerns, ‘first’ row selection and result-set ordering. This may be confusing;\n• The DISTINCT ON expression(s) must match the leftmost ORDER BY expression(s). This may cause inconvenience and require nesting as a sub-query to order the result-set.\n\nPros\n• Backward compatibility. If the local ORDER BY clause is missing then the current rules shall apply;\n• Familiar and consistent syntax and semantics, the same as in *_agg functions;\n• Clear distinction of first row selection and result-set ordering;\n• Good readability;\n• The DISTINCT ON expression(s) do not have to match the leftmost ORDER BY expression(s).\n\nCons\n • Possible extra verbosity \n Best regards, Stefan\n 1 1 1 1 MicrosoftInternetExplorer4 0 2 DocumentNotSpecified 7.8 磅 Normal 0 \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n\nGents, I have a suggestion for DISTINCT ON clause syntax.DISTINCT ON (expression(s) [ORDER BY expression(s)])Determines the precedence within each DISTINCT ON group (i.e. the ‘first’ row to be picked)Motivation• Using the query-wide ORDER BY clause to determine which record to pick mixes two unrelated concerns, ‘first’ row selection and result-set ordering. This may be confusing;• The DISTINCT ON expression(s) must match the leftmost ORDER BY expression(s). This may cause inconvenience and require nesting as a sub-query to order the result-set.Pros• Backward compatibility. If the local ORDER BY clause is missing then the current rules shall apply;• Familiar and consistent syntax and semantics, the same as in *_agg functions;• Clear distinction of first row selection and result-set ordering;• Good readability;• The DISTINCT ON expression(s) do not have to match the leftmost ORDER BY expression(s).Cons• Possible extra verbosityBest regards,Stefan", "msg_date": "Tue, 10 Oct 2023 20:21:30 +0300 (EEST)", "msg_from": "Stefan Stefanov <[email protected]>", "msg_from_op": true, "msg_subject": "Suggestion. Optional local ORDER BY clause for DISTINCT ON " }, { "msg_contents": "Stefan Stefanov <[email protected]> writes:\n> Gents, I have a suggestion for DISTINCT ON clause syntax.\n> DISTINCT ON (expression(s) [ORDER BY expression(s)]) \n> Determines the precedence within each DISTINCT ON group (i.e. the ‘first’ row to be picked)\n\n> Motivation\n> • Using the query-wide ORDER BY clause to determine which record to pick mixes two unrelated concerns, ‘first’ row selection and result-set ordering. This may be confusing;\n> • The DISTINCT ON expression(s) must match the leftmost ORDER BY expression(s). This may cause inconvenience and require nesting as a sub-query to order the result-set.\n\nSince you can get the desired behavior with a sub-select, I'm\nnot especially excited about extending DISTINCT ON. If it weren't\nsuch a nonstandard kluge, I might feel differently; but it's not\nan area that I think we ought to put more effort into.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 10 Oct 2023 14:29:23 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Suggestion. Optional local ORDER BY clause for DISTINCT ON" } ]
[ { "msg_contents": "The btequalimage() header comment says:\n\n * Generic \"equalimage\" support function.\n *\n * B-Tree operator classes whose equality function could safely be replaced by\n * datum_image_eq() in all cases can use this as their \"equalimage\" support\n * function.\n\ninterval_ops, however, recognizes equal-but-distinguishable values:\n\n create temp table t (c interval);\n insert into t values ('1d'::interval), ('24h');\n table t;\n select distinct c from t;\n\nThe CREATE INDEX of the following test:\n\n begin;\n create table t (c interval);\n insert into t select x from generate_series(1,500), (values ('1 year 1 month'::interval), ('1 year 30 days')) t(x);\n select distinct c from t;\n create index ti on t (c);\n rollback;\n\nFails with:\n\n 2498151 2023-10-10 05:06:46.177 GMT DEBUG: building index \"ti\" on table \"t\" serially\n 2498151 2023-10-10 05:06:46.178 GMT DEBUG: index \"ti\" can safely use deduplication\n TRAP: failed Assert(\"!itup_key->allequalimage || keepnatts == _bt_keep_natts_fast(rel, lastleft, firstright)\"), File: \"nbtutils.c\", Line: 2443, PID: 2498151\n\nI've also caught btree posting lists where one TID refers to a '1d' heap\ntuple, while another TID refers to a '24h' heap tuple. amcheck complains.\nIndex-only scans can return the '1d' bits where the actual tuple had the '24h'\nbits. Are there other consequences to highlight in the release notes? The\nback-branch patch is larger, to fix things without initdb. Hence, I'm\nattaching patches for HEAD and for v16 (trivial to merge back from there). I\nglanced at the other opfamilies permitting deduplication, and they look okay:\n\n[local] test=*# select amproc, amproclefttype = amprocrighttype as l_eq_r, array_agg(array[opfname, amproclefttype::regtype::text]) from pg_amproc join pg_opfamily f on amprocfamily = f.oid where amprocnum = 4 and opfmethod = 403 group by 1,2;\n─[ RECORD 1 ]───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────\namproc │ btequalimage\nl_eq_r │ t\narray_agg │ {{bit_ops,bit},{bool_ops,boolean},{bytea_ops,bytea},{char_ops,\"\\\"char\\\"\"},{datetime_ops,date},{datetime_ops,\"timestamp without time zone\"},{datetime_ops,\"timestamp with time zone\"},{network_ops,inet},{integer_ops,smallint},{integer_ops,integer},{integer_ops,bigint},{interval_ops,interval},{macaddr_ops,macaddr},{oid_ops,oid},{oidvector_ops,oidvector},{time_ops,\"time without time zone\"},{timetz_ops,\"time with time zone\"},{varbit_ops,\"bit varying\"},{text_pattern_ops,text},{bpchar_pattern_ops,character},{money_ops,money},{tid_ops,tid},{uuid_ops,uuid},{pg_lsn_ops,pg_lsn},{macaddr8_ops,macaddr8},{enum_ops,anyenum},{xid8_ops,xid8}}\n─[ RECORD 2 ]───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────\namproc │ btvarstrequalimage\nl_eq_r │ t\narray_agg │ {{bpchar_ops,character},{text_ops,text},{text_ops,name}}\n\nThanks,\nnm", "msg_date": "Tue, 10 Oct 2023 18:33:17 -0700", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": true, "msg_subject": "interval_ops shall stop using btequalimage (deduplication)" }, { "msg_contents": "On Tue, Oct 10, 2023 at 6:33 PM Noah Misch <[email protected]> wrote:\n> interval_ops, however, recognizes equal-but-distinguishable values:\n\n> Fails with:\n>\n> 2498151 2023-10-10 05:06:46.177 GMT DEBUG: building index \"ti\" on table \"t\" serially\n> 2498151 2023-10-10 05:06:46.178 GMT DEBUG: index \"ti\" can safely use deduplication\n> TRAP: failed Assert(\"!itup_key->allequalimage || keepnatts == _bt_keep_natts_fast(rel, lastleft, firstright)\"), File: \"nbtutils.c\", Line: 2443, PID: 2498151\n\nNice catch.\n\nOut of curiosity, how did you figure this out? Did it just occur to\nyou that interval_ops had a behavior that made deduplication unsafe?\nOr did the problem come to your attention after running amcheck on a\ncustomer database? Or was it something else?\n\n> I've also caught btree posting lists where one TID refers to a '1d' heap\n> tuple, while another TID refers to a '24h' heap tuple. amcheck complains.\n> Index-only scans can return the '1d' bits where the actual tuple had the '24h'\n> bits. Are there other consequences to highlight in the release notes?\n\nNothing else comes to mind right now. I should think about posting\nlist splits some more tomorrow, though -- those are tricky.\n\n> The back-branch patch is larger, to fix things without initdb. Hence, I'm\n> attaching patches for HEAD and for v16 (trivial to merge back from there).\n\n> I glanced at the other opfamilies permitting deduplication, and they look okay:\n\nDue to the way that nbtsplitloc.c deals with duplicates, I'd expect\nthe same assertion failure with any index where a single leaf page is\nfilled with opclass-wise duplicates with more than one distinct\nrepresentation/output -- the details beyond that shouldn't matter. I\nwas happy with how easy it was to make this assertion fail (with a\nknown broken numeric_ops opclass) while testing/developing\ndeduplication. I'm a little surprised that it took this long to notice\nthe interval_ops issue.\n\nDo we really need to change the catalog contents when backpatching?\n\n--\nPeter Geoghegan\n\n\n", "msg_date": "Tue, 10 Oct 2023 20:12:36 -0700", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: interval_ops shall stop using btequalimage (deduplication)" }, { "msg_contents": "On Tue, Oct 10, 2023 at 08:12:36PM -0700, Peter Geoghegan wrote:\n> On Tue, Oct 10, 2023 at 6:33 PM Noah Misch <[email protected]> wrote:\n> > interval_ops, however, recognizes equal-but-distinguishable values:\n> \n> > Fails with:\n> >\n> > 2498151 2023-10-10 05:06:46.177 GMT DEBUG: building index \"ti\" on table \"t\" serially\n> > 2498151 2023-10-10 05:06:46.178 GMT DEBUG: index \"ti\" can safely use deduplication\n> > TRAP: failed Assert(\"!itup_key->allequalimage || keepnatts == _bt_keep_natts_fast(rel, lastleft, firstright)\"), File: \"nbtutils.c\", Line: 2443, PID: 2498151\n> \n> Nice catch.\n> \n> Out of curiosity, how did you figure this out? Did it just occur to\n> you that interval_ops had a behavior that made deduplication unsafe?\n> Or did the problem come to your attention after running amcheck on a\n> customer database? Or was it something else?\n\nMy friend got an amcheck \"lacks matching index tuple\" failure, and they asked\nme about it. I ran into the assertion failure while reproducing things.\n\n> I'm a little surprised that it took this long to notice\n> the interval_ops issue.\n\nAgreed. I don't usually store interval values in tables, and I'm not sure\nI've ever indexed one. Who knows.\n\n> Do we really need to change the catalog contents when backpatching?\n\nNot really. I think we usually do. On the other hand, unlike some past\ncases, there's no functional need for the catalog changes. The catalog\nchanges just get a bit of efficiency. No strong preference here.\n\n\n", "msg_date": "Tue, 10 Oct 2023 20:29:08 -0700", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": true, "msg_subject": "Re: interval_ops shall stop using btequalimage (deduplication)" }, { "msg_contents": "On Tue, Oct 10, 2023 at 8:29 PM Noah Misch <[email protected]> wrote:\n> My friend got an amcheck \"lacks matching index tuple\" failure, and they asked\n> me about it. I ran into the assertion failure while reproducing things.\n\nReminds me of the time that amcheck found a bug in the default btree\nopclass for PostGIS's geography type.\n\nThe opclass wasn't really intended to be used for indexing. The issue\nwith the opclass (which violated transitive consistency) would\nprobably never have been detected were it not for the tendency of\nPostGIS users to accidentally create useless B-Tree indexes on\ngeography columns. Users sometimes omitted \"using gist\", without\nnecessarily noticing that the index was practically useless.\n\n> > Do we really need to change the catalog contents when backpatching?\n>\n> Not really. I think we usually do. On the other hand, unlike some past\n> cases, there's no functional need for the catalog changes. The catalog\n> changes just get a bit of efficiency. No strong preference here.\n\nI'll defer to you on this question, then.\n\nI don't see any reason to delay committing your fix. The issue that\nyou've highlighted is exactly the kind of issue that I anticipated\nmight happen at some point. This seems straightforward.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 10 Oct 2023 20:51:10 -0700", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: interval_ops shall stop using btequalimage (deduplication)" }, { "msg_contents": "On Tue, Oct 10, 2023 at 8:51 PM Peter Geoghegan <[email protected]> wrote:\n> I don't see any reason to delay committing your fix. The issue that\n> you've highlighted is exactly the kind of issue that I anticipated\n> might happen at some point. This seems straightforward.\n\nBTW, we don't need to recommend the heapallindexed option in the\nrelease notes. Calling bt_check_index() will reliably indicate that\ncorruption is present when called against existing interval_ops\nindexes once your fix is in. Simply having an index whose metapage's\nallequalimage field is spuriously set to true will be recognized as\ncorruption right away. Obviously, this will be no less true with an\nexisting interval_ops index that happens to be completely empty.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 10 Oct 2023 21:35:45 -0700", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: interval_ops shall stop using btequalimage (deduplication)" }, { "msg_contents": "On Tue, Oct 10, 2023 at 09:35:45PM -0700, Peter Geoghegan wrote:\n> On Tue, Oct 10, 2023 at 8:51 PM Peter Geoghegan <[email protected]> wrote:\n> > I don't see any reason to delay committing your fix. The issue that\n> > you've highlighted is exactly the kind of issue that I anticipated\n> > might happen at some point. This seems straightforward.\n> \n> BTW, we don't need to recommend the heapallindexed option in the\n> release notes. Calling bt_check_index() will reliably indicate that\n> corruption is present when called against existing interval_ops\n> indexes once your fix is in. Simply having an index whose metapage's\n> allequalimage field is spuriously set to true will be recognized as\n> corruption right away. Obviously, this will be no less true with an\n> existing interval_ops index that happens to be completely empty.\n\nInteresting. So, >99% of interval-type indexes, even ones WITH\n(deduplicate_items=off), will get amcheck failures. The <1% of exceptions\nmight include indexes having allequalimage=off due to an additional column,\ne.g. a two-column (interval, numeric) index. If interval indexes are common\nenough and \"pg_amcheck --heapallindexed\" failures from $SUBJECT are relatively\nrare, that could argue for giving amcheck a special case. Specifically,\ndowngrade its \"metapage incorrectly indicates that deduplication is safe\" from\nERROR to WARNING for interval_ops only. Without that special case (i.e. with\nthe v1 patch), the release notes should probably resemble, \"After updating,\nrun REINDEX on all indexes having an interval-type column.\" There's little\npoint in recommending pg_amcheck if >99% will fail. I'm inclined to bet that\ninterval-type indexes are rare, so I lean against adding the amcheck special\ncase. It's not a strong preference. Other opinions?\n\nIf users want to reduce their exposure now, they could do \"ALTER INDEX ... SET\n(deduplicate_items = off)\" and then REINDEX any indexes already failing\n\"pg_amcheck --heapallindexed\". allequalimage will remain wrong but have no\nill effects beyond making amcheck fail. Another REINDEX after the update\nwould let amcheck pass.\n\n\n", "msg_date": "Wed, 11 Oct 2023 11:38:35 -0700", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": true, "msg_subject": "Re: interval_ops shall stop using btequalimage (deduplication)" }, { "msg_contents": "On Wed, Oct 11, 2023 at 11:38 AM Noah Misch <[email protected]> wrote:\n> Interesting. So, >99% of interval-type indexes, even ones WITH\n> (deduplicate_items=off), will get amcheck failures. The <1% of exceptions\n> might include indexes having allequalimage=off due to an additional column,\n> e.g. a two-column (interval, numeric) index. If interval indexes are common\n> enough and \"pg_amcheck --heapallindexed\" failures from $SUBJECT are relatively\n> rare, that could argue for giving amcheck a special case. Specifically,\n> downgrade its \"metapage incorrectly indicates that deduplication is safe\" from\n> ERROR to WARNING for interval_ops only.\n\nI am not aware of any user actually running \"deduplicate_items = off\"\nin production, for any index. It was added purely as a defensive thing\n-- not because I anticipated any real need to disable deduplication.\nDeduplication was optimized for being enabled by default.\n\nAnything is possible, of course, but it's hard to give too much weight\nto cases where two very unlikely things happen to intersect. (Plus\n\"deduplicate_items = off\" doesn't really do that much; more on that\nbelow.)\n\n> Without that special case (i.e. with\n> the v1 patch), the release notes should probably resemble, \"After updating,\n> run REINDEX on all indexes having an interval-type column.\"\n\n+1\n\n> There's little\n> point in recommending pg_amcheck if >99% will fail. I'm inclined to bet that\n> interval-type indexes are rare, so I lean against adding the amcheck special\n> case. It's not a strong preference. Other opinions?\n\nWell, there will only be one known reason why anybody will ever see\nthis test fail (barring a near-miraculous coincidence where \"generic\ncorruption\" somehow gets passed our most basic sanity tests, only to\nfail this metapage field check a bit later on). Even if you\npessimistically assume that similar problems remain undiscovered in\nsome other opfamily, this particular check isn't going to be the check\nthat detects the other problems -- you really would need\nheapallindexed verification for that.\n\nIn short, this metapage check is only effective retrospectively, once\nwe recognize and fix problems in an opclass. Clearly there will be\nexactly one case like that post-fix (interval_ops is at least the only\naffected core code opfamily), so why not point that out directly with\na HINT? A HINT could go a long way towards putting the problem in\ncontext, without really adding a special case, and without any real\nquestion of users being misled.\n\n> If users want to reduce their exposure now, they could do \"ALTER INDEX ... SET\n> (deduplicate_items = off)\" and then REINDEX any indexes already failing\n> \"pg_amcheck --heapallindexed\". allequalimage will remain wrong but have no\n> ill effects beyond making amcheck fail. Another REINDEX after the update\n> would let amcheck pass.\n\nEven when \"deduplicate_items = off\", that just means that the nbtree\ncode won't apply further deduplication passes going forward (until\nsuch time as deduplication is reenabled). It doesn't really mean that\nthis problem can't exist. OTOH it's easy to detect affected indexes\nusing SQL. So this is one case where telling users to REINDEX really\ndoes seem like the best thing (as opposed to something we say because\nwe're too lazy to come up with nuanced, practical guidance).\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 11 Oct 2023 13:00:44 -0700", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: interval_ops shall stop using btequalimage (deduplication)" }, { "msg_contents": "On Wed, Oct 11, 2023 at 01:00:44PM -0700, Peter Geoghegan wrote:\n> On Wed, Oct 11, 2023 at 11:38 AM Noah Misch <[email protected]> wrote:\n> > Interesting. So, >99% of interval-type indexes, even ones WITH\n> > (deduplicate_items=off), will get amcheck failures. The <1% of exceptions\n> > might include indexes having allequalimage=off due to an additional column,\n> > e.g. a two-column (interval, numeric) index. If interval indexes are common\n> > enough and \"pg_amcheck --heapallindexed\" failures from $SUBJECT are relatively\n> > rare, that could argue for giving amcheck a special case. Specifically,\n> > downgrade its \"metapage incorrectly indicates that deduplication is safe\" from\n> > ERROR to WARNING for interval_ops only.\n> \n> I am not aware of any user actually running \"deduplicate_items = off\"\n> in production, for any index. It was added purely as a defensive thing\n> -- not because I anticipated any real need to disable deduplication.\n> Deduplication was optimized for being enabled by default.\n\nSure. Low-importance background information: deduplicate_items=off got on my\nradar while I was wondering if ALTER INDEX ... SET (deduplicate_items=off)\nwould clear allequalimage. If it had, we could have advised people to use\nALTER INDEX, then rebuild only those indexes still failing \"pg_amcheck\n--heapallindexed\". ALTER INDEX doesn't do that, ruling out that idea.\n\n> > Without that special case (i.e. with\n> > the v1 patch), the release notes should probably resemble, \"After updating,\n> > run REINDEX on all indexes having an interval-type column.\"\n> \n> +1\n> \n> > There's little\n> > point in recommending pg_amcheck if >99% will fail. I'm inclined to bet that\n> > interval-type indexes are rare, so I lean against adding the amcheck special\n> > case. It's not a strong preference. Other opinions?\n\n> exactly one case like that post-fix (interval_ops is at least the only\n> affected core code opfamily), so why not point that out directly with\n> a HINT? A HINT could go a long way towards putting the problem in\n> context, without really adding a special case, and without any real\n> question of users being misled.\n\nWorks for me. Added.", "msg_date": "Thu, 12 Oct 2023 16:10:09 -0700", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": true, "msg_subject": "Re: interval_ops shall stop using btequalimage (deduplication)" }, { "msg_contents": "On Thu, Oct 12, 2023 at 4:10 PM Noah Misch <[email protected]> wrote:\n> > exactly one case like that post-fix (interval_ops is at least the only\n> > affected core code opfamily), so why not point that out directly with\n> > a HINT? A HINT could go a long way towards putting the problem in\n> > context, without really adding a special case, and without any real\n> > question of users being misled.\n>\n> Works for me. Added.\n\nLooks good. Thanks!\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 12 Oct 2023 16:21:15 -0700", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: interval_ops shall stop using btequalimage (deduplication)" }, { "msg_contents": "> I've also caught btree posting lists where one TID refers to a '1d' heap\n> tuple, while another TID refers to a '24h' heap tuple. amcheck complains.\nIndex-only scans can return the '1d' bits where the actual tuple had the\n'24h'\nbits.\n\nHave a build without the patch. I can't reproduce amcheck complaints in\nrelease mode\nwhere all these statements succeed.\n create table t (c interval);\n insert into t select x from generate_series(1,500), (values ('1 year 1\nmonth'::interval), ('1 year 30 days')) t(x);\n select distinct c from t;\n create index ti on t (c);\n select bt_index_check('ti'::regclass);\nOn following query, the query seems to return the right result sets,\nindex-only scan doesn't seem to be mislead by the misuse of btequalimage\n\n postgres=# vacuum (INDEX_CLEANUP on) t;\nVACUUM\npostgres=# explain (analyze, buffers) select c::text, count(c) from t group\nby c::text;\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------\n HashAggregate (cost=49.27..49.29 rows=1 width=40) (actual\ntime=3.329..3.333 rows=2 loops=1)\n Group Key: (c)::text\n Batches: 1 Memory Usage: 24kB\n Buffers: shared hit=6\n -> Index Only Scan using ti on t (cost=0.28..44.27 rows=1000 width=48)\n(actual time=0.107..2.269 rows=1000 loops=1)\n Heap Fetches: 0\n Buffers: shared hit=6\n Planning:\n Buffers: shared hit=4\n Planning Time: 0.319 ms\n Execution Time: 3.432 ms\n(11 rows)\n\npostgres=# select c::text, count(c) from t group by c::text;\n c | count\n----------------+-------\n 1 year 1 mon | 500\n 1 year 30 days | 500\n(2 rows)\n\n> * Generic \"equalimage\" support function.\n> *\n> * B-Tree operator classes whose equality function could safely be\nreplaced by\n> * datum_image_eq() in all cases can use this as their \"equalimage\" support\n> * function.\nIt seems to me that as long as a data type has a deterministic sort but not\nnecessarily be equalimage,\nit should be able to support deduplication. e.g for interval type, we add\na byte wise tie breaker\nafter '24h' and '1day' are compared equally. In the btree, '24h' and '1day'\nare still adjacent,\n'1day' is always sorted before '24h' in a btree page, can we do dedup for\neach value\nwithout problem?\nThe assertion will still be triggered as it's testing btequalimage, but\nI'll defer it for now.\nWanted to know if the above idea sounds sane first...\n\n\nDonghang Lin\n(ServiceNow)\n\n\nOn Thu, Oct 12, 2023 at 4:22 PM Peter Geoghegan <[email protected]> wrote:\n\n> On Thu, Oct 12, 2023 at 4:10 PM Noah Misch <[email protected]> wrote:\n> > > exactly one case like that post-fix (interval_ops is at least the only\n> > > affected core code opfamily), so why not point that out directly with\n> > > a HINT? A HINT could go a long way towards putting the problem in\n> > > context, without really adding a special case, and without any real\n> > > question of users being misled.\n> >\n> > Works for me. Added.\n>\n> Looks good. Thanks!\n>\n> --\n> Peter Geoghegan\n>\n>\n>\n\n> I've also caught btree posting lists where one TID refers to a '1d' heap> tuple, while another TID refers to a '24h' heap tuple.  amcheck complains.Index-only scans can return the '1d' bits where the actual tuple had the '24h'bits.Have a build without the patch. I can't reproduce amcheck complaints in release modewhere all these statements succeed.   create table t (c interval);  insert into t select x from generate_series(1,500), (values ('1 year 1 month'::interval), ('1 year 30 days')) t(x);  select distinct c from t;  create index ti on t (c);  select bt_index_check('ti'::regclass);On following query, the query seems to return the right result sets, index-only scan doesn't seem to be mislead by the misuse of btequalimage   postgres=# vacuum (INDEX_CLEANUP on) t;VACUUMpostgres=# explain (analyze, buffers) select c::text, count(c) from t group by c::text;                                                       QUERY PLAN------------------------------------------------------------------------------------------------------------------------- HashAggregate  (cost=49.27..49.29 rows=1 width=40) (actual time=3.329..3.333 rows=2 loops=1)   Group Key: (c)::text   Batches: 1  Memory Usage: 24kB   Buffers: shared hit=6   ->  Index Only Scan using ti on t  (cost=0.28..44.27 rows=1000 width=48) (actual time=0.107..2.269 rows=1000 loops=1)         Heap Fetches: 0         Buffers: shared hit=6 Planning:   Buffers: shared hit=4 Planning Time: 0.319 ms Execution Time: 3.432 ms(11 rows)postgres=# select c::text, count(c) from t group by c::text;       c        | count----------------+------- 1 year 1 mon   |   500 1 year 30 days |   500(2 rows)>  * Generic \"equalimage\" support function.> *> * B-Tree operator classes whose equality function could safely be replaced by> * datum_image_eq() in all cases can use this as their \"equalimage\" support> * function.It seems to me that as long as a data type has a deterministic sort but not necessarily be equalimage,it should be able to support deduplication.  e.g for interval type, we add a byte wise tie breaker after '24h' and '1day' are compared equally. In the btree, '24h' and '1day' are still adjacent, '1day' is always sorted before '24h' in a btree page, can we do dedup for each value without problem? The assertion will still be triggered as it's testing btequalimage, but I'll defer it for now. Wanted to know if the above idea sounds sane first...Donghang Lin(ServiceNow)On Thu, Oct 12, 2023 at 4:22 PM Peter Geoghegan <[email protected]> wrote:On Thu, Oct 12, 2023 at 4:10 PM Noah Misch <[email protected]> wrote:\n> > exactly one case like that post-fix (interval_ops is at least the only\n> > affected core code opfamily), so why not point that out directly with\n> > a HINT? A HINT could go a long way towards putting the problem in\n> > context, without really adding a special case, and without any real\n> > question of users being misled.\n>\n> Works for me.  Added.\n\nLooks good. Thanks!\n\n-- \nPeter Geoghegan", "msg_date": "Mon, 16 Oct 2023 23:21:20 -0700", "msg_from": "Donghang Lin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: interval_ops shall stop using btequalimage (deduplication)" }, { "msg_contents": "On Mon, Oct 16, 2023 at 11:21:20PM -0700, Donghang Lin wrote:\n> > I've also caught btree posting lists where one TID refers to a '1d' heap\n> > tuple, while another TID refers to a '24h' heap tuple. amcheck complains.\n> Index-only scans can return the '1d' bits where the actual tuple had the\n> '24h'\n> bits.\n> \n> Have a build without the patch. I can't reproduce amcheck complaints in\n> release mode\n> where all these statements succeed.\n\nThe queries I shared don't create the problematic structure, just an assertion\nfailure.\n\n> > * Generic \"equalimage\" support function.\n> > *\n> > * B-Tree operator classes whose equality function could safely be\n> replaced by\n> > * datum_image_eq() in all cases can use this as their \"equalimage\" support\n> > * function.\n> It seems to me that as long as a data type has a deterministic sort but not\n> necessarily be equalimage,\n> it should be able to support deduplication. e.g for interval type, we add\n> a byte wise tie breaker\n> after '24h' and '1day' are compared equally. In the btree, '24h' and '1day'\n> are still adjacent,\n> '1day' is always sorted before '24h' in a btree page, can we do dedup for\n> each value\n> without problem?\n\nYes. I'm not aware of correctness obstacles arising if one did that.\n\n\n", "msg_date": "Mon, 23 Oct 2023 06:07:10 -0700", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": true, "msg_subject": "Re: interval_ops shall stop using btequalimage (deduplication)" }, { "msg_contents": "On Mon, Oct 16, 2023 at 11:21 PM Donghang Lin <[email protected]> wrote:\n> It seems to me that as long as a data type has a deterministic sort but not necessarily be equalimage,\n> it should be able to support deduplication. e.g for interval type, we add a byte wise tie breaker\n> after '24h' and '1day' are compared equally. In the btree, '24h' and '1day' are still adjacent,\n> '1day' is always sorted before '24h' in a btree page, can we do dedup for each value\n> without problem?\n> The assertion will still be triggered as it's testing btequalimage, but I'll defer it for now.\n> Wanted to know if the above idea sounds sane first...\n\n It's hard to give any one reason why this won't work. I'm sure that\nwith enough effort some scheme like this could work. It's just that\nthere are significant practical problems that at least make it seem\nunappealing as a project. This has been discussed before:\n\nhttps://www.postgresql.org/message-id/flat/CAH2-WzkZkSC7G%2Bv1WwXGo0emh8E-rByw%3DxSpBUoavk7PTjwF2Q%40mail.gmail.com#4e98cba0d76c8c8c0bf67ae0d4652903\n\n--\nPeter Geoghegan\n\n\n", "msg_date": "Mon, 23 Oct 2023 15:53:24 -0700", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: interval_ops shall stop using btequalimage (deduplication)" } ]
[ { "msg_contents": "Hi,\n\nThe parallel apply worker didn't add null termination to the string received\nfrom the leader apply worker via the shared memory queue. This action doesn't\nbring bugs as it's binary data but violates the rule established in StringInfo,\nwhich guarantees the presence of a terminating '\\0' at the end of the string.\n\nAlthough the original string in leader already includes the null termination,\nbut we cannot just send all string including the null termination to the\nparallel worker, because that would increase the length while at the receiver\nthere is still no null termination at data[length] which actually the length +\n1 position.\n\nAnd we also cannot directly modify the received string like data[len] = '\\0',\nbecause the data still points a shared buffer maintained by shared memory\nqueue, so we'd better not modify the data outside of the string length.\n\nSo, here is patch to use the standard StringInfo API to store the string while\nensuring the addition of null termination. This can also resolve the Assert\nissue raised by another patch[1] currently under discussion.\n\nThanks to Amit for offlist discussion regarding the analysis and\nfix.\n\n[1] https://www.postgresql.org/message-id/CAApHDvp6J4Bq9%3Df36-Z3mNWTsmkgGkSkX1Nwut%2BxhSi1aU8zQg%40mail.gmail.com\n\nBest Regards,\nHou Zhijie", "msg_date": "Wed, 11 Oct 2023 06:48:44 +0000", "msg_from": "\"Zhijie Hou (Fujitsu)\" <[email protected]>", "msg_from_op": true, "msg_subject": "Add null termination to string received in parallel apply worker" }, { "msg_contents": "Hi Hou-san.\n\n+ /*\n+ * Note that the data received via the shared memory queue is not\n+ * null-terminated. So we use the StringInfo API to store the\n+ * string so as to maintain the convention that StringInfos has a\n+ * trailing null.\n+ */\n\n\"... that StringInfos has a trailing null.\"\n\nProbably should be either \"StringInfo has\" or \"StringInfos have\"\n\n======\nKind Regards.\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Wed, 11 Oct 2023 20:11:15 +1100", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add null termination to string received in parallel apply worker" }, { "msg_contents": "On Wed, Oct 11, 2023 at 12:18 PM Zhijie Hou (Fujitsu)\n<[email protected]> wrote:\n>\n> The parallel apply worker didn't add null termination to the string received\n> from the leader apply worker via the shared memory queue. This action doesn't\n> bring bugs as it's binary data but violates the rule established in StringInfo,\n> which guarantees the presence of a terminating '\\0' at the end of the string.\n>\n> Although the original string in leader already includes the null termination,\n> but we cannot just send all string including the null termination to the\n> parallel worker, because that would increase the length while at the receiver\n> there is still no null termination at data[length] which actually the length +\n> 1 position.\n>\n> And we also cannot directly modify the received string like data[len] = '\\0',\n> because the data still points a shared buffer maintained by shared memory\n> queue, so we'd better not modify the data outside of the string length.\n>\n\nYeah, it may not be a good idea to modify the buffer pointing to\nshared memory without any lock as we haven't reserved that part of\nmemory. So, we can't follow the trick used in exec_bind_message() to\nmaintain the convention that StringInfos have a trailing null. David,\ndo you see any better way to fix this case?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 11 Oct 2023 17:56:01 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add null termination to string received in parallel apply worker" }, { "msg_contents": "On 2023-Oct-11, Amit Kapila wrote:\n\n> Yeah, it may not be a good idea to modify the buffer pointing to\n> shared memory without any lock as we haven't reserved that part of\n> memory. So, we can't follow the trick used in exec_bind_message() to\n> maintain the convention that StringInfos have a trailing null. David,\n> do you see any better way to fix this case?\n\nI was thinking about this when skimming the other StringInfo thread a\ncouple of days ago. I wondered if it wouldn't be more convenient to\nchange the convention that all StringInfos are null-terminated: what is\nreally the reason to have them all be like that? We do keep track of\nthe exact length of the data in it, so strictly speaking we don't need\nthe assumption. Maybe there are some routines that are fragile and end\nup reading more data thn 'len' if the null terminator is missing; we\ncould fix those instead. Right now, it seems we're doing some\npstrdup'ing and unconstification just to be able to install a \\0 in the\nright place.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"Use it up, wear it out, make it do, or do without\"\n\n\n", "msg_date": "Wed, 11 Oct 2023 17:14:24 +0200", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add null termination to string received in parallel apply worker" }, { "msg_contents": "Alvaro Herrera <[email protected]> writes:\n> I was thinking about this when skimming the other StringInfo thread a\n> couple of days ago. I wondered if it wouldn't be more convenient to\n> change the convention that all StringInfos are null-terminated: what is\n> really the reason to have them all be like that?\n\nIt makes sense for StringInfos containing text, not because the\nstringinfo.c routines need it but because callers inspecting the\nstring will very likely do something that expects nul-termination.\nWhen the StringInfo contains binary data, that argument has little\nforce of course.\n\nI could see extending the convention for caller-supplied buffers\n(as is under discussion in the other thread) to say that the caller\nneedn't provide a nul-terminated buffer if it is confident that\nno reader of the StringInfo will need that. I'd be even more\ninclined than before to tie this to a specification that such a\nStringInfo is read-only, though.\n\nIn any case, this does not immediately let us jump to the conclusion\nthat it'd be safe to use such a convention in apply workers. Aren't\nthe things being passed around here usually text strings? Do you\nreally want to promise that no reader is depending on nul-termination?\nThat doesn't sound safe either for query strings or data input values.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 11 Oct 2023 12:04:08 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add null termination to string received in parallel apply worker" }, { "msg_contents": "On Thu, 12 Oct 2023 at 05:04, Tom Lane <[email protected]> wrote:\n>\n> Alvaro Herrera <[email protected]> writes:\n> > I was thinking about this when skimming the other StringInfo thread a\n> > couple of days ago. I wondered if it wouldn't be more convenient to\n> > change the convention that all StringInfos are null-terminated: what is\n> > really the reason to have them all be like that?\n>\n> It makes sense for StringInfos containing text, not because the\n> stringinfo.c routines need it but because callers inspecting the\n> string will very likely do something that expects nul-termination.\n> When the StringInfo contains binary data, that argument has little\n> force of course.\n\nI'd like to know why we're even using StringInfo for receiving bytes\npq_* functions.\n\nIt does, unfortunately, seem like we're well down that path now and\nchanging it would be quite a bit of churn, and not just for core :-(\nIf we had invented a ByteReceiver or something, then StringInfoData\nwouldn't need a cursor field (perhaps with the exception of its use in\nstring_agg_transfn/string_agg_combine, but that could be done some\nother way).\n\nIt seems like we're trying to enforce rules that are useful for\nStringInfo's likely intended original purpose that are often just not\nthat relevant to the job it's ended up doing in pq_* functions. It\nwould be good if we could relax the most-be-NUL-terminated rule. It\nseems to be causing quite a bit of ugliness around the code. Just\nsearch for \"csave\". We often use a variable by that name to save the\nchar where we temporarily put a NUL so we restore it again. A comment\nexec_bind_message() does admit this is \"grotty\", which I don't\ndisagree with.\n\nDavid\n\n\n", "msg_date": "Thu, 12 Oct 2023 10:39:26 +1300", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add null termination to string received in parallel apply worker" }, { "msg_contents": "On Thursday, October 12, 2023 12:04 AM Tom Lane <[email protected]> wrote:\n\nHi,\n\n> \n> Alvaro Herrera <[email protected]> writes:\n> > I was thinking about this when skimming the other StringInfo thread a\n> > couple of days ago. I wondered if it wouldn't be more convenient to\n> > change the convention that all StringInfos are null-terminated: what\n> > is really the reason to have them all be like that?\n> \n> It makes sense for StringInfos containing text, not because the stringinfo.c\n> routines need it but because callers inspecting the string will very likely do\n> something that expects nul-termination.\n> When the StringInfo contains binary data, that argument has little force of\n> course.\n> \n> I could see extending the convention for caller-supplied buffers (as is under\n> discussion in the other thread) to say that the caller needn't provide a\n> nul-terminated buffer if it is confident that no reader of the StringInfo will need\n> that. I'd be even more inclined than before to tie this to a specification that\n> such a StringInfo is read-only, though.\n> \n> In any case, this does not immediately let us jump to the conclusion that it'd be\n> safe to use such a convention in apply workers. Aren't the things being passed\n> around here usually text strings? \n\nI think the data passed to parallel apply worker is of mixed types. If we see\nthe data reading logic for it like logicalrep_read_attrs(), it uses both\npq_getmsgint/pq_getmsgbyte/pq_getmsgint(binary) and pq_getmsgstring(text).\n\n> Do you really want to promise that no reader is depending on nul-termination?\n\nI think we could not make an absolute guarantee in this regard, but currently\nall the consumer uses pq_getxxx style functions to read the passed data and it\nalso takes care to read the text stuff(get the length separately e.g.\nlogicalrep_read_tuple). So it seems OK to release the rule for it.\n\nOTOH, I am not opposed to keeping the rule intact for the apply worker, just to\nshare the information and to gather opinions from others.\n\nBest Regards,\nHou zj\n\n\n\n", "msg_date": "Thu, 12 Oct 2023 03:43:33 +0000", "msg_from": "\"Zhijie Hou (Fujitsu)\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Add null termination to string received in parallel apply worker" }, { "msg_contents": "On Wed, 11 Oct 2023 at 19:54, Zhijie Hou (Fujitsu)\n<[email protected]> wrote:\n> The parallel apply worker didn't add null termination to the string received\n> from the leader apply worker via the shared memory queue. This action doesn't\n> bring bugs as it's binary data but violates the rule established in StringInfo,\n> which guarantees the presence of a terminating '\\0' at the end of the string.\n\nJust for anyone not following the other thread that you linked, I\njust pushed f0efa5aec, and, providing that sticks, I think we can drop\nthis discussion now.\n\nThat commit relaxes the requirement that the StringInfo must be NUL terminated.\n\nDavid\n\n\n", "msg_date": "Thu, 26 Oct 2023 16:40:27 +1300", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add null termination to string received in parallel apply worker" } ]
[ { "msg_contents": "The small size of the SLRU buffer pools can sometimes become a\nperformance problem because it’s not difficult to have a workload\nwhere the number of buffers actively in use is larger than the\nfixed-size buffer pool. However, just increasing the size of the\nbuffer pool doesn’t necessarily help, because the linear search that\nwe use for buffer replacement doesn’t scale, and also because\ncontention on the single centralized lock limits scalability.\n\nThere is a couple of patches proposed in the past to address the\nproblem of increasing the buffer pool size, one of the patch [1] was\nproposed by Thomas Munro where we make the size of the buffer pool\nconfigurable. And, in order to deal with the linear search in the\nlarge buffer pool, we divide the SLRU buffer pool into associative\nbanks so that searching in the buffer pool doesn’t get affected by the\nlarge size of the buffer pool. This does well for the workloads which\nare mainly impacted by the frequent buffer replacement but this still\ndoesn’t stand well with the workloads where the centralized control\nlock is the bottleneck.\n\nSo I have taken this patch as my base patch (v1-0001) and further\nadded 2 more improvements to this 1) In v1-0002, Instead of a\ncentralized control lock for the SLRU I have introduced a bank-wise\ncontrol lock 2)In v1-0003, I have removed the global LRU counter and\nintroduced a bank-wise counter. The second change (v1-0003) is in\norder to avoid the CPU/OS cache invalidation due to frequent updates\nof the single variable, later in my performance test I will show how\nmuch gain we have gotten because of these 2 changes.\n\nNote: This is going to be a long email but I have summarised the main\nidea above this point and now I am going to discuss more internal\ninformation in order to show that the design idea is valid and also\ngoing to show 2 performance tests where one is specific to the\ncontention on the centralized lock and other is mainly contention due\nto frequent buffer replacement in SLRU buffer pool. We are getting ~2x\nTPS compared to the head by these patches and in later sections, I am\ngoing discuss this in more detail i.e. exact performance numbers and\nanalysis of why we are seeing the gain.\n\nThere are some problems I faced while converting this centralized\ncontrol lock to a bank-wise lock and that is mainly because this lock\nis (mis)used for different purposes. The main purpose of this control\nlock as I understand it is to protect the in-memory access\n(read/write) of the buffers in the SLRU buffer pool.\n\nHere is the list of some problems and their analysis:\n\n1) In some of the SLRU, we use this lock for protecting the members\ninside the control structure which is specific to that SLRU layer i.e.\nSerialControlData() members are protected by the SerialSLRULock, and I\ndon’t think it is the right use of this lock so for this purpose I\nhave introduced another lock called SerialControlLock for this\nspecific purpose. Based on my analysis there is no reason for\nprotecting these members and the SLRU buffer access with the same\nlock.\n2) The member called ‘latest_page_number’ inside SlruSharedData is\nalso protected by the SLRULock, I would not say this use case is wrong\nbut since this is a common variable and not a per bank variable can\nnot be protected by the bankwise lock now. But the usage of this\nvariable is just to track the latest page in an SLRU so that we do not\nevict out the latest page during victim page selection. So I have\nconverted this to an atomic variable as it is completely independent\nof the SLRU buffer access.\n3) In order to protect SlruScanDirectory, basically the\nSlruScanDirectory() from DeactivateCommitTs(), is called under the\nSLRU control lock, but from all other places SlruScanDirectory() is\ncalled without lock and that is because the caller of this function is\ncalled from the places which are not executed concurrently(i.e.\nstartup, checkpoint). This function DeactivateCommitTs() is also\ncalled from the startup only so there doesn't seem any use in calling\nthis under the SLRU control lock. Currently, I have called this under\nthe all-bank lock because logically this is not a performance path,\nand that way we are keeping it consistent with the current logic, but\nif others also think that we do not need a lock at this place then we\nmight remove it and then we don't need this all-bank lock anywhere.\n\nThere are some other uses of this lock where we might think it will be\na problem if we divide it into a bank-wise lock but it's not and I\nhave given my analysis for the same\n\n1) SimpleLruTruncate: We might worry that if we convert to a bank-wise\nlock then this could be an issue as we might need to release and\nacquire different locks as we scan different banks. But as per my\nanalysis, this is not an issue because a) With the current code also\ndo release and acquire the centralized lock multiple times in order to\nperform the I/O on the buffer slot so the behavior is not changed but\nthe most important thing is b) All SLRU layers take care that this\nfunction should not be accessed concurrently, I have verified all\naccess to this function and its true and the function header of this\nfunction also says the same. So this is not an issue as per my\nanalysis.\n\n2) Extending or adding a new page to SLRU: I have noticed that this\nis also protected by either some other exclusive lock or only done\nduring startup. So in short the SLRULock is just used for protecting\nagainst the access of the buffers in the buffer pool but that is not\nfor guaranteeing the exclusive access inside the function because that\nis taken care of in some other way.\n\n3) Another thing that I noticed while writing this and thought it\nwould be good to make a note of that as well. Basically for the CLOG\ngroup update of the xid status. Therein if we do not get the control\nlock on the SLRU then we add ourselves to a group and then the group\nleader does the job for all the members in the group. One might think\nthat different pages in the group might belong to different SLRU bank\nso the leader might need to acquire/release the lock as it process the\nrequest in the group. Yes, that is true, and it is taken care but we\ndon’t need to worry about the case because as per the implementation\nof the group update, we are trying to have the members with the same\npage request in one group and only due to some exception there could\nbe members with the different page request. So the point is with a\nbank-wise lock we are handling that exception case but that's not a\nregular case that we need to acquire/release multiple times. So\ndesign-wise we are good and performance-wise there should not be any\nproblem because most of the time we might be updating the pages from\nthe same bank, and if in some cases we have some updates for old\ntransactions due to long-running transactions then we should do better\nby not having a centralized lock.\n\nPerformance Test:\nExp1: Show problems due to CPU/OS cache invalidation due to frequent\nupdates of the centralized lock and a common LRU counter. So here we\nare running a parallel transaction to pgbench script which frequently\ncreates subtransaction overflow and that forces the visibility-check\nmechanism to access the subtrans SLRU.\nTest machine: 8 CPU/ 64 core/ 128 with HT/ 512 MB RAM / SSD\nscale factor: 300\nshared_buffers=20GB\ncheckpoint_timeout=40min\nmax_wal_size=20GB\nmax_connections=200\n\nWorkload: Run these 2 scripts parallelly:\n./pgbench -c $ -j $ -T 600 -P5 -M prepared postgres\n./pgbench -c 1 -j 1 -T 600 -f savepoint.sql postgres\n\nsavepoint.sql (create subtransaction overflow)\nBEGIN;\nSAVEPOINT S1;\nINSERT INTO test VALUES(1)\n← repeat 70 times →\nSELECT pg_sleep(1);\nCOMMIT;\n\nCode under test:\nHead: PostgreSQL head code\nSlruBank: The first patch applied to convert the SLRU buffer pool into\nthe bank (0001)\nSlruBank+BankwiseLockAndLru: Applied 0001+0002+0003\n\nResults:\nClients Head SlruBank SlruBank+BankwiseLockAndLru\n1 457 491 475\n8 3753 3819 3782\n32 14594 14328 17028\n64 15600 16243 25944\n128 15957 16272 31731\n\nSo we can see that at 128 clients, we get ~2x TPS(with SlruBank +\nBankwiseLock and bankwise LRU counter) as compared to HEAD. We might\nbe thinking that we do not see much gain only with the SlruBank patch.\nThe reason is that in this particular test case, we are not seeing\nmuch load on the buffer replacement. In fact, the wait event also\ndoesn’t show contention on any lock instead the main load is due to\nfrequently modifying the common variable like the centralized control\nlock and the centralized LRU counters. That is evident in perf data\nas shown below\n\n+ 74.72% 0.06% postgres postgres [.] XidInMVCCSnapshot\n+ 74.08% 0.02% postgres postgres [.] SubTransGetTopmostTransaction\n+ 74.04% 0.07% postgres postgres [.] SubTransGetParent\n+ 57.66% 0.04% postgres postgres [.] LWLockAcquire\n+ 57.64% 0.26% postgres postgres [.] SimpleLruReadPage_ReadOnly\n……\n+ 16.53% 0.07% postgres postgres [.] LWLockRelease\n+ 16.36% 0.04% postgres postgres [.] pg_atomic_sub_fetch_u32\n+ 16.31% 16.24% postgres postgres [.] pg_atomic_fetch_sub_u32_impl\n\nWe can notice that the main load is on the atomic variable within the\nLWLockAcquire and LWLockRelease. Once we apply the bankwise lock\npatch(v1-0002) the same problem is visible on cur_lru_count updation\nin the SlruRecentlyUsed[2] macro (I have not shown that here but it\nwas visible in my perf report). And that is resolved by implementing\na bankwise counter.\n\n[2]\n#define SlruRecentlyUsed(shared, slotno) \\\ndo { \\\n..\n(shared)->cur_lru_count = ++new_lru_count; \\\n..\n} \\\n} while (0)\n\n\nExp2: This test shows the load on SLRU frequent buffer replacement. In\nthis test, we are running the pgbench kind script which frequently\ngenerates multixact-id, and parallelly we are starting and committing\na long-running transaction so that the multixact-ids are not\nimmediately cleaned up by the vacuum and we create contention on the\nSLRU buffer pool. I am not leaving the long-running transaction\nrunning forever as that will start to show another problem with\nrespect to bloat and we will lose the purpose of what I am trying to\nshow here.\n\nNote: test configurations are the same as Exp1, just the workload is\ndifferent, we are running below 2 scripts.\nand new config parameter(added in v1-0001) slru_buffers_size_scale=4,\nthat means NUM_MULTIXACTOFFSET_BUFFERS will be 64 that is 16 in Head\nand\nNUM_MULTIXACTMEMBER_BUFFERS will be 128 which is 32 in head\n\n./pgbench -c $ -j $ -T 600 -P5 -M prepared -f multixact.sql postgres\n./pgbench -c 1 -j 1 -T 600 -f longrunning.sql postgres\n\ncat > multixact.sql <<EOF\n\\set aid random(1, 100000 * :scale)\n\\set bid random(1, 1 * :scale)\n\\set tid random(1, 10 * :scale)\n\\set delta random(-5000, 5000)\nBEGIN;\nSELECT FROM pgbench_accounts WHERE aid = :aid FOR UPDATE;\nSAVEPOINT S1;\nUPDATE pgbench_accounts SET abalance = abalance + :delta WHERE aid = :aid;\nSELECT abalance FROM pgbench_accounts WHERE aid = :aid;\nINSERT INTO pgbench_history (tid, bid, aid, delta, mtime) VALUES\n(:tid, :bid, :aid, :delta, CURRENT_TIMESTAMP);\nEND;\nEOF\n\ncat > longrunning.sql << EOF\nBEGIN;\nINSERT INTO pgbench_test VALUES(1);\nselect pg_sleep(10);\nCOMMIT;\nEOF\n\nResults:\nClients Head SlruBank SlruBank+BankwiseLock\n1 528 513 531\n8 3870 4239 4157\n32 13945 14470 14556\n64 10086 19034 24482\n128 6909 15627 18161\n\nHere we can see good improvement with the SlruBank patch itself\nbecause of increasing the SLRU buffer pool, as in this workload there\nis a lot of contention due to buffer replacement. As shown below we\ncan see a lot of load on MultiXactOffsetSLRU as well as on\nMultiXactOffsetBuffer which shows there are frequent buffer evictions\nin this workload. And, increasing the SLRU buffer pool size is\nhelping a lot, and further dividing the SLRU lock into bank-wise locks\nwe are seeing a further gain. So in total, we are seeing ~2.5x TPS at\n64 and 128 thread compared to head.\n\n 3401 LWLock | MultiXactOffsetSLRU\n 2031 LWLock | MultiXactOffsetBuffer\n 687 |\n 427 LWLock | BufferContent\n\nCredits:\n- The base patch v1-0001 is authored by Thomas Munro and I have just rebased it.\n- 0002 and 0003 are new patches written by me based on design ideas\nfrom Robert and Myself.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Wed, 11 Oct 2023 16:34:37 +0530", "msg_from": "Dilip Kumar <[email protected]>", "msg_from_op": true, "msg_subject": "SLRU optimization - configurable buffer pool and partitioning the\n SLRU lock" }, { "msg_contents": "On Wed, Oct 11, 2023 at 4:34 PM Dilip Kumar <[email protected]> wrote:\n>\n> The small size of the SLRU buffer pools can sometimes become a\n> performance problem because it’s not difficult to have a workload\n> where the number of buffers actively in use is larger than the\n> fixed-size buffer pool. However, just increasing the size of the\n> buffer pool doesn’t necessarily help, because the linear search that\n> we use for buffer replacement doesn’t scale, and also because\n> contention on the single centralized lock limits scalability.\n>\n> There is a couple of patches proposed in the past to address the\n> problem of increasing the buffer pool size, one of the patch [1] was\n> proposed by Thomas Munro where we make the size of the buffer pool\n> configurable.\n\nIn my last email, I forgot to give the link from where I have taken\nthe base path for dividing the buffer pool in banks so giving the same\nhere[1]. And looking at this again it seems that the idea of that\npatch was from\nAndrey M. Borodin and the idea of the SLRU scale factor were\nintroduced by Yura Sokolov and Ivan Lazarev. Apologies for missing\nthat in the first email.\n\n[1] https://commitfest.postgresql.org/43/2627/\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 11 Oct 2023 17:57:20 +0530", "msg_from": "Dilip Kumar <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SLRU optimization - configurable buffer pool and partitioning the\n SLRU lock" }, { "msg_contents": "On Wed, Oct 11, 2023 at 5:57 PM Dilip Kumar <[email protected]> wrote:\n>\n> On Wed, Oct 11, 2023 at 4:34 PM Dilip Kumar <[email protected]> wrote:\n\n> In my last email, I forgot to give the link from where I have taken\n> the base path for dividing the buffer pool in banks so giving the same\n> here[1]. And looking at this again it seems that the idea of that\n> patch was from\n> Andrey M. Borodin and the idea of the SLRU scale factor were\n> introduced by Yura Sokolov and Ivan Lazarev. Apologies for missing\n> that in the first email.\n>\n> [1] https://commitfest.postgresql.org/43/2627/\n\nIn my last email I have just rebased the base patch, so now while\nreading through that patch I realized that there was some refactoring\nneeded and some unused functions were there so I have removed that and\nalso added some comments. Also did some refactoring to my patches. So\nreposting the patch series.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Thu, 12 Oct 2023 18:16:05 +0530", "msg_from": "Dilip Kumar <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SLRU optimization - configurable buffer pool and partitioning the\n SLRU lock" }, { "msg_contents": "On Wed, Oct 11, 2023 at 4:35 PM Dilip Kumar <[email protected]> wrote:\n>\n> The small size of the SLRU buffer pools can sometimes become a\n> performance problem because it’s not difficult to have a workload\n> where the number of buffers actively in use is larger than the\n> fixed-size buffer pool. However, just increasing the size of the\n> buffer pool doesn’t necessarily help, because the linear search that\n> we use for buffer replacement doesn’t scale, and also because\n> contention on the single centralized lock limits scalability.\n>\n> There is a couple of patches proposed in the past to address the\n> problem of increasing the buffer pool size, one of the patch [1] was\n> proposed by Thomas Munro where we make the size of the buffer pool\n> configurable. And, in order to deal with the linear search in the\n> large buffer pool, we divide the SLRU buffer pool into associative\n> banks so that searching in the buffer pool doesn’t get affected by the\n> large size of the buffer pool. This does well for the workloads which\n> are mainly impacted by the frequent buffer replacement but this still\n> doesn’t stand well with the workloads where the centralized control\n> lock is the bottleneck.\n>\n> So I have taken this patch as my base patch (v1-0001) and further\n> added 2 more improvements to this 1) In v1-0002, Instead of a\n> centralized control lock for the SLRU I have introduced a bank-wise\n> control lock 2)In v1-0003, I have removed the global LRU counter and\n> introduced a bank-wise counter. The second change (v1-0003) is in\n> order to avoid the CPU/OS cache invalidation due to frequent updates\n> of the single variable, later in my performance test I will show how\n> much gain we have gotten because of these 2 changes.\n>\n> Note: This is going to be a long email but I have summarised the main\n> idea above this point and now I am going to discuss more internal\n> information in order to show that the design idea is valid and also\n> going to show 2 performance tests where one is specific to the\n> contention on the centralized lock and other is mainly contention due\n> to frequent buffer replacement in SLRU buffer pool. We are getting ~2x\n> TPS compared to the head by these patches and in later sections, I am\n> going discuss this in more detail i.e. exact performance numbers and\n> analysis of why we are seeing the gain.\n>\n...\n>\n> Performance Test:\n> Exp1: Show problems due to CPU/OS cache invalidation due to frequent\n> updates of the centralized lock and a common LRU counter. So here we\n> are running a parallel transaction to pgbench script which frequently\n> creates subtransaction overflow and that forces the visibility-check\n> mechanism to access the subtrans SLRU.\n> Test machine: 8 CPU/ 64 core/ 128 with HT/ 512 MB RAM / SSD\n> scale factor: 300\n> shared_buffers=20GB\n> checkpoint_timeout=40min\n> max_wal_size=20GB\n> max_connections=200\n>\n> Workload: Run these 2 scripts parallelly:\n> ./pgbench -c $ -j $ -T 600 -P5 -M prepared postgres\n> ./pgbench -c 1 -j 1 -T 600 -f savepoint.sql postgres\n>\n> savepoint.sql (create subtransaction overflow)\n> BEGIN;\n> SAVEPOINT S1;\n> INSERT INTO test VALUES(1)\n> ← repeat 70 times →\n> SELECT pg_sleep(1);\n> COMMIT;\n>\n> Code under test:\n> Head: PostgreSQL head code\n> SlruBank: The first patch applied to convert the SLRU buffer pool into\n> the bank (0001)\n> SlruBank+BankwiseLockAndLru: Applied 0001+0002+0003\n>\n> Results:\n> Clients Head SlruBank SlruBank+BankwiseLockAndLru\n> 1 457 491 475\n> 8 3753 3819 3782\n> 32 14594 14328 17028\n> 64 15600 16243 25944\n> 128 15957 16272 31731\n>\n> So we can see that at 128 clients, we get ~2x TPS(with SlruBank +\n> BankwiseLock and bankwise LRU counter) as compared to HEAD.\n>\n\nThis and other results shared by you look promising. Will there be any\nimprovement in workloads related to clog buffer usage? BTW, I remember\nthat there was also a discussion of moving SLRU into a regular buffer\npool [1]. You have not provided any explanation as to whether that\napproach will have any merits after we do this or whether that\napproach is not worth pursuing at all.\n\n[1] - https://commitfest.postgresql.org/43/3514/\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Sat, 14 Oct 2023 09:43:35 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SLRU optimization - configurable buffer pool and partitioning the\n SLRU lock" }, { "msg_contents": "On Sat, Oct 14, 2023 at 9:43 AM Amit Kapila <[email protected]> wrote:\n>\n> This and other results shared by you look promising. Will there be any\n> improvement in workloads related to clog buffer usage?\n\nI did not understand this question can you explain this a bit? In\nshort, if it is regarding the performance then we will see it for all\nthe SLRUs as the control lock is not centralized anymore instead it is\na bank-wise lock.\n\n BTW, I remember\n> that there was also a discussion of moving SLRU into a regular buffer\n> pool [1]. You have not provided any explanation as to whether that\n> approach will have any merits after we do this or whether that\n> approach is not worth pursuing at all.\n>\n> [1] - https://commitfest.postgresql.org/43/3514/\n\nYeah, I haven't read that thread in detail about performance numbers\nand all. But both of these can not coexist because this patch is\nimproving the SLRU buffer pool access/configurable size and also lock\ncontention. If we move SLRU to the main buffer pool then we might not\nhave a similar problem instead there might be other problems like SLRU\nbuffers getting swapped out due to other relation buffers and all and\nOTOH the advantages of that approach would be that we can just use a\nbigger buffer pool and SLRU can also take advantage of that. But in\nmy opinion, most of the time we have limited page access in SLRU and\nthe SLRU buffer access pattern is also quite different from the\nrelation pages access pattern so keeping them under the same buffer\npool and comparing against relation pages for victim buffer selection\nmight cause different problems. But anyway I would discuss those\npoints maybe in that thread.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 20 Oct 2023 09:40:35 +0530", "msg_from": "Dilip Kumar <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SLRU optimization - configurable buffer pool and partitioning the\n SLRU lock" }, { "msg_contents": "On 2023-Oct-11, Dilip Kumar wrote:\n\n> In my last email, I forgot to give the link from where I have taken\n> the base path for dividing the buffer pool in banks so giving the same\n> here[1]. And looking at this again it seems that the idea of that\n> patch was from Andrey M. Borodin and the idea of the SLRU scale factor\n> were introduced by Yura Sokolov and Ivan Lazarev. Apologies for\n> missing that in the first email.\n\nYou mean [1].\n[1] https://postgr.es/m/452d01f7e331458f56ad79bef537c31b%40postgrespro.ru\nI don't like this idea very much, because of the magic numbers that act\nas ratios for numbers of buffers on each SLRU compared to other SLRUs.\nThese values, which I took from the documentation part of the patch,\nappear to have been selected by throwing darts at the wall:\n\nNUM_CLOG_BUFFERS = Min(128 << slru_buffers_size_scale, shared_buffers/256)\nNUM_COMMIT_TS_BUFFERS = Min(128 << slru_buffers_size_scale, shared_buffers/256)\nNUM_SUBTRANS_BUFFERS = Min(64 << slru_buffers_size_scale, shared_buffers/256)\nNUM_NOTIFY_BUFFERS = Min(32 << slru_buffers_size_scale, shared_buffers/256)\nNUM_SERIAL_BUFFERS = Min(32 << slru_buffers_size_scale, shared_buffers/256)\nNUM_MULTIXACTOFFSET_BUFFERS = Min(32 << slru_buffers_size_scale, shared_buffers/256)\nNUM_MULTIXACTMEMBER_BUFFERS = Min(64 << slru_buffers_size_scale, shared_buffers/256)\n\n... which look pretty random already, if similar enough to the current\nhardcoded values. In reality, the code implements different values than\nwhat the documentation says.\n\nI don't see why would CLOG have the same number as COMMIT_TS, when the\nsize for elements of the latter is like 32 times bigger -- however, the\nfrequency of reads for COMMIT_TS is like 1000x smaller than for CLOG.\nSUBTRANS is half of CLOG, yet it is 16 times larger, and it covers the\nsame range. The MULTIXACT ones appear to keep the current ratio among\nthem (8/16 gets changed to 32/64).\n\n... and this whole mess is scaled exponentially without regard to the\nsize that each SLRU requires. This is just betting that enough memory\ncan be wasted across all SLRUs up to the point where the one that is\nactually contended has sufficient memory. This doesn't sound sensible\nto me.\n\nLike everybody else, I like having less GUCs to configure, but going\nthis far to avoid them looks rather disastrous to me. IMO we should\njust use Munro's older patches that gave one GUC per SLRU, and users\nonly need to increase the one that shows up in pg_wait_event sampling.\nSomeday we will get the (much more complicated) patches to move these\nbuffers to steal memory from shared buffers, and that'll hopefully let\nuse get rid of all this complexity.\n\n\nI'm inclined to use Borodin's patch last posted here [2] instead of your\nproposed 0001.\n[2] https://postgr.es/m/[email protected]\n\nI did skim patches 0002 and 0003 without going into too much detail;\nthey look reasonable ideas. I have not tried to reproduce the claimed\nperformance benefits. I think measuring this patch set with the tests\nposted by Shawn Debnath in [3] is important, too.\n[3] https://postgr.es/m/[email protected]\n\n\nOn the other hand, here's a somewhat crazy idea. What if, instead of\nstealing buffers from shared_buffers (which causes a lot of complexity),\nwe allocate a common pool for all SLRUs to use? We provide a single\nknob -- say, non_relational_buffers=32MB as default -- and we use a LRU\nalgorithm (or something) to distribute that memory across all the SLRUs.\nSo the ratio to use for this SLRU or that one would depend on the nature\nof the workload: maybe more for multixact in this server here, but more\nfor subtrans in that server there; it's just the total amount that the\nuser would have to configure, side by side with shared_buffers (and\nperhaps scale with it like wal_buffers), and the LRU would handle the\nrest. The \"only\" problem here is finding a distribution algorithm that\ndoesn't further degrade performance, of course ...\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"The problem with the facetime model is not just that it's demoralizing, but\nthat the people pretending to work interrupt the ones actually working.\"\n -- Paul Graham, http://www.paulgraham.com/opensource.html\n\n\n", "msg_date": "Tue, 24 Oct 2023 18:04:13 +0200", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SLRU optimization - configurable buffer pool and partitioning\n the SLRU lock" }, { "msg_contents": "On Tue, Oct 24, 2023 at 06:04:13PM +0200, Alvaro Herrera wrote:\n> Like everybody else, I like having less GUCs to configure, but going\n> this far to avoid them looks rather disastrous to me. IMO we should\n> just use Munro's older patches that gave one GUC per SLRU, and users\n> only need to increase the one that shows up in pg_wait_event sampling.\n> Someday we will get the (much more complicated) patches to move these\n> buffers to steal memory from shared buffers, and that'll hopefully let\n> use get rid of all this complexity.\n\n+1\n\n> On the other hand, here's a somewhat crazy idea. What if, instead of\n> stealing buffers from shared_buffers (which causes a lot of complexity),\n> we allocate a common pool for all SLRUs to use? We provide a single\n> knob -- say, non_relational_buffers=32MB as default -- and we use a LRU\n> algorithm (or something) to distribute that memory across all the SLRUs.\n> So the ratio to use for this SLRU or that one would depend on the nature\n> of the workload: maybe more for multixact in this server here, but more\n> for subtrans in that server there; it's just the total amount that the\n> user would have to configure, side by side with shared_buffers (and\n> perhaps scale with it like wal_buffers), and the LRU would handle the\n> rest. The \"only\" problem here is finding a distribution algorithm that\n> doesn't further degrade performance, of course ...\n\nI think it's worth a try. It does seem simpler, and it might allow us to\nsidestep some concerns about scaling when the SLRU pages are in\nshared_buffers [0].\n\n[0] https://postgr.es/m/ZPsaEGRvllitxB3v%40tamriel.snowman.net\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 24 Oct 2023 12:04:51 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SLRU optimization - configurable buffer pool and partitioning\n the SLRU lock" }, { "msg_contents": "On Tue, Oct 24, 2023 at 9:34 PM Alvaro Herrera <[email protected]> wrote:\n>\n> On 2023-Oct-11, Dilip Kumar wrote:\n>\n> > In my last email, I forgot to give the link from where I have taken\n> > the base path for dividing the buffer pool in banks so giving the same\n> > here[1]. And looking at this again it seems that the idea of that\n> > patch was from Andrey M. Borodin and the idea of the SLRU scale factor\n> > were introduced by Yura Sokolov and Ivan Lazarev. Apologies for\n> > missing that in the first email.\n>\n> You mean [1].\n> [1] https://postgr.es/m/452d01f7e331458f56ad79bef537c31b%40postgrespro.ru\n> I don't like this idea very much, because of the magic numbers that act\n> as ratios for numbers of buffers on each SLRU compared to other SLRUs.\n> These values, which I took from the documentation part of the patch,\n> appear to have been selected by throwing darts at the wall:\n>\n> NUM_CLOG_BUFFERS = Min(128 << slru_buffers_size_scale, shared_buffers/256)\n> NUM_COMMIT_TS_BUFFERS = Min(128 << slru_buffers_size_scale, shared_buffers/256)\n> NUM_SUBTRANS_BUFFERS = Min(64 << slru_buffers_size_scale, shared_buffers/256)\n> NUM_NOTIFY_BUFFERS = Min(32 << slru_buffers_size_scale, shared_buffers/256)\n> NUM_SERIAL_BUFFERS = Min(32 << slru_buffers_size_scale, shared_buffers/256)\n> NUM_MULTIXACTOFFSET_BUFFERS = Min(32 << slru_buffers_size_scale, shared_buffers/256)\n> NUM_MULTIXACTMEMBER_BUFFERS = Min(64 << slru_buffers_size_scale, shared_buffers/256)\n>\n> ... which look pretty random already, if similar enough to the current\n> hardcoded values. In reality, the code implements different values than\n> what the documentation says.\n>\n> I don't see why would CLOG have the same number as COMMIT_TS, when the\n> size for elements of the latter is like 32 times bigger -- however, the\n> frequency of reads for COMMIT_TS is like 1000x smaller than for CLOG.\n> SUBTRANS is half of CLOG, yet it is 16 times larger, and it covers the\n> same range. The MULTIXACT ones appear to keep the current ratio among\n> them (8/16 gets changed to 32/64).\n>\n> ... and this whole mess is scaled exponentially without regard to the\n> size that each SLRU requires. This is just betting that enough memory\n> can be wasted across all SLRUs up to the point where the one that is\n> actually contended has sufficient memory. This doesn't sound sensible\n> to me.\n>\n> Like everybody else, I like having less GUCs to configure, but going\n> this far to avoid them looks rather disastrous to me. IMO we should\n> just use Munro's older patches that gave one GUC per SLRU, and users\n> only need to increase the one that shows up in pg_wait_event sampling.\n> Someday we will get the (much more complicated) patches to move these\n> buffers to steal memory from shared buffers, and that'll hopefully let\n> use get rid of all this complexity.\n\nOverall I agree with your comments, actually, I haven't put that much\nthought into the GUC part and how it scales the SLRU buffers w.r.t.\nthis single configurable parameter. Yeah, so I think it is better\nthat we take the older patch version as our base patch where we have\nseparate GUC per SLRU.\n\n> I'm inclined to use Borodin's patch last posted here [2] instead of your\n> proposed 0001.\n> [2] https://postgr.es/m/[email protected]\n\nI will rebase my patches on top of this.\n\n> I did skim patches 0002 and 0003 without going into too much detail;\n> they look reasonable ideas. I have not tried to reproduce the claimed\n> performance benefits. I think measuring this patch set with the tests\n> posted by Shawn Debnath in [3] is important, too.\n> [3] https://postgr.es/m/[email protected]\n\nThanks for taking a look.\n\n>\n> On the other hand, here's a somewhat crazy idea. What if, instead of\n> stealing buffers from shared_buffers (which causes a lot of complexity),\n\nCurrently, we do not steal buffers from shared_buffers, computation is\ndependent upon Nbuffers though. I mean for each SLRU we are computing\nseparate memory which is additional than the shared_buffers no?\n\n> we allocate a common pool for all SLRUs to use? We provide a single\n> knob -- say, non_relational_buffers=32MB as default -- and we use a LRU\n> algorithm (or something) to distribute that memory across all the SLRUs.\n> So the ratio to use for this SLRU or that one would depend on the nature\n> of the workload: maybe more for multixact in this server here, but more\n> for subtrans in that server there; it's just the total amount that the\n> user would have to configure, side by side with shared_buffers (and\n> perhaps scale with it like wal_buffers), and the LRU would handle the\n> rest. The \"only\" problem here is finding a distribution algorithm that\n> doesn't further degrade performance, of course ...\n\nYeah, this could be an idea, but are you talking about that all the\nSLRUs will share the single buffer pool and based on the LRU algorithm\nit will be decided which page will stay in the buffer pool and which\nwill be out? But wouldn't that create another issue of different\nSLRUs starting to contend on the same lock if we have a common buffer\npool for all the SLRUs? Or am I missing something? Or you are saying\nthat although there is a common buffer pool each SLRU will have its\nown boundaries in it so protected by a separate lock and based on the\nworkload those boundaries can change dynamically? I haven't put much\nthought into how practical the idea is but just trying to understand\nwhat you have in mind.\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 25 Oct 2023 10:34:15 +0530", "msg_from": "Dilip Kumar <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SLRU optimization - configurable buffer pool and partitioning the\n SLRU lock" }, { "msg_contents": "On Fri, Oct 20, 2023 at 9:40 AM Dilip Kumar <[email protected]> wrote:\n>\n> On Sat, Oct 14, 2023 at 9:43 AM Amit Kapila <[email protected]> wrote:\n> >\n> > This and other results shared by you look promising. Will there be any\n> > improvement in workloads related to clog buffer usage?\n>\n> I did not understand this question can you explain this a bit?\n>\n\nI meant to ask about the impact of this patch on accessing transaction\nstatus via TransactionIdGetStatus(). Shouldn't we expect some\nimprovement in accessing CLOG buffers?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 25 Oct 2023 17:58:09 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SLRU optimization - configurable buffer pool and partitioning the\n SLRU lock" }, { "msg_contents": "On Wed, Oct 25, 2023 at 5:58 PM Amit Kapila <[email protected]> wrote:\n>\n> On Fri, Oct 20, 2023 at 9:40 AM Dilip Kumar <[email protected]> wrote:\n> >\n> > On Sat, Oct 14, 2023 at 9:43 AM Amit Kapila <[email protected]> wrote:\n> > >\n> > > This and other results shared by you look promising. Will there be any\n> > > improvement in workloads related to clog buffer usage?\n> >\n> > I did not understand this question can you explain this a bit?\n> >\n>\n> I meant to ask about the impact of this patch on accessing transaction\n> status via TransactionIdGetStatus(). Shouldn't we expect some\n> improvement in accessing CLOG buffers?\n\nYes, there should be because 1) Now there is no common lock so\ncontention on a centralized control lock will be reduced when we are\naccessing the transaction status from pages falling in different SLRU\nbanks 2) Buffers size is configurable so if the workload is accessing\ntransactions status of different range then it would help in frequent\nbuffer eviction but this might not be most common case.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 25 Oct 2023 18:13:27 +0530", "msg_from": "Dilip Kumar <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SLRU optimization - configurable buffer pool and partitioning the\n SLRU lock" }, { "msg_contents": "On Wed, Oct 25, 2023 at 10:34 AM Dilip Kumar <[email protected]> wrote:\n>\n> On Tue, Oct 24, 2023 at 9:34 PM Alvaro Herrera <[email protected]> wrote:\n\n> Overall I agree with your comments, actually, I haven't put that much\n> thought into the GUC part and how it scales the SLRU buffers w.r.t.\n> this single configurable parameter. Yeah, so I think it is better\n> that we take the older patch version as our base patch where we have\n> separate GUC per SLRU.\n>\n> > I'm inclined to use Borodin's patch last posted here [2] instead of your\n> > proposed 0001.\n> > [2] https://postgr.es/m/[email protected]\n>\n> I will rebase my patches on top of this.\n\nI have taken 0001 and 0002 from [1], done some bug fixes in 0001, and\nchanged the logic of SlruAdjustNSlots() in 0002, such that now it\nstarts with the next power of 2 value of the configured slots and\nkeeps doubling the number of banks until we reach the number of banks\nto the max SLRU_MAX_BANKS(128) and bank size is bigger than\nSLRU_MIN_BANK_SIZE (8). By doing so, we will ensure we don't have too\nmany banks, but also that we don't have very large banks. There was\nalso a patch 0003 in this thread but I haven't taken this as this is\nanother optimization of merging some structure members and I will\nanalyze the performance characteristic of this and try to add it on\ntop of the complete patch series.\n\nPatch details:\n0001 - GUC parameter for each SLRU\n0002 - Divide the SLRU pool into banks\n(The above 2 are taken from [1] with some modification and rebasing by me)\n0003 - Implement bank-wise SLRU lock as described in the first email\nof this thread\n0004 - Implement bank-wise LRU counter as described in the first email\nof this thread\n0005 - Some other optimization suggested offlist by Alvaro, i.e.\nmerging buffer locks and bank locks in the same array so that the\nbank-wise LRU counter does not fetch the next cache line in a hot\nfunction SlruRecentlyUsed()\n\nNote: I think 0003,0004 and 0005 can be merged together but kept\nseparate so that we can review them independently and see how useful\neach of them is.\n\n[1] https://postgr.es/m/[email protected]\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Mon, 30 Oct 2023 11:50:40 +0530", "msg_from": "Dilip Kumar <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SLRU optimization - configurable buffer pool and partitioning the\n SLRU lock" }, { "msg_contents": "On Mon, Oct 30, 2023 at 11:50 AM Dilip Kumar <[email protected]> wrote:\n>\nBased on some offlist discussions with Alvaro and Robert in separate\nconversations, I and Alvaro we came to the same point if a user sets a\nvery high value for the number of slots (say 1GB) then the number of\nslots in each bank will be 1024 (considering max number of bank 128)\nand if we continue the sequence search for finding the buffer for the\npage then that could be costly in such cases. But later in one of the\nconversations with Robert, I realized that we can have this bank-wise\nlock approach along with the partitioned hash table.\n\nSo the idea is, that we will use the buffer mapping hash table\nsomething like Thoams used in one of his patches [1], but instead of a\nnormal hash table, we will use the partitioned hash table. The SLRU\nbuffer pool is still divided as we have done in the bank-wise approach\nand there will be separate locks for each slot range. So now we get\nthe benefit of both approaches 1) By having a mapping hash we can\navoid the sequence search 2) By dividing the buffer pool into banks\nand keeping the victim buffer search within those banks we avoid\nlocking all the partitions during victim buffer search 3) And we can\nalso maintain a bank-wise LRU counter so that we avoid contention on a\nsingle variable as we have discussed in my first email of this thread.\nPlease find the updated patch set details and patches attached to the\nemail.\n\n[1] 0001-Make-all-SLRU-buffer-sizes-configurable: This is the same\npatch as the previous patch set\n[2] 0002-Add-a-buffer-mapping-table-for-SLRUs: Patch to introduce\nbuffer mapping hash table\n[3] 0003-Partition-wise-slru-locks: Partition the hash table and also\nintroduce partition-wise locks: this is a merge of 0003 and 0004 from\nthe previous patch set but instead of bank-wise locks it has\npartition-wise locks and LRU counter.\n[4] 0004-Merge-partition-locks-array-with-buffer-locks-array: merging\nbuffer locks and bank locks in the same array so that the bank-wise\nLRU counter does not fetch the next cache line in a hot function\nSlruRecentlyUsed()(same as 0005 from the previous patch set)\n[5] 0005-Ensure-slru-buffer-slots-are-in-multiple-of-number-of: Ensure\nthat the number of slots is in multiple of the number of banks\n\nWith this approach, I have also made some changes where the number of\nbanks is constant (i.e. 8) so that some of the computations are easy.\nI think with a buffer mapping hash table we should not have much\nproblem in keeping this fixed as with very extreme configuration and\nvery high numbers of slots also we do not have performance problems as\nwe are not doing sequence search because of buffer mapping hash and if\nthe number of slots is set so high then the victim buffer search also\nshould not be frequent so we should not be worried about sequence\nsearch within a bank for victim buffer search. I have also changed\nthe default value of the number of slots to 64 and the minimum value\nto 16 I think this is a reasonable default value because the existing\nvalues are too low considering the modern hardware and these\nparameters is configurable so user can set it to low value if running\nwith very low memory.\n\n\n[1] https://www.postgresql.org/message-id/CA%2BhUKGLCLDtgDj2Xsf0uBk5WXDCeHxBDDJPsyY7m65Fde-%3Dpyg%40mail.gmail.com\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Fri, 3 Nov 2023 10:58:43 +0530", "msg_from": "Dilip Kumar <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SLRU optimization - configurable buffer pool and partitioning the\n SLRU lock" }, { "msg_contents": "> On 30 Oct 2023, at 09:20, Dilip Kumar <[email protected]> wrote:\n> \n> changed the logic of SlruAdjustNSlots() in 0002, such that now it\n> starts with the next power of 2 value of the configured slots and\n> keeps doubling the number of banks until we reach the number of banks\n> to the max SLRU_MAX_BANKS(128) and bank size is bigger than\n> SLRU_MIN_BANK_SIZE (8). By doing so, we will ensure we don't have too\n> many banks\nThere was nothing wrong with having too many banks. Until bank-wise locks and counters were added in later patchsets.\nHaving hashtable to find SLRU page in the buffer IMV is too slow. Some comments on this approach can be found here [0].\nI'm OK with having HTAB for that if we are sure performance does not degrade significantly, but I really doubt this is the case.\nI even think SLRU buffers used HTAB in some ancient times, but I could not find commit when it was changed to linear search.\n\nMaybe we could decouple locks and counters from SLRU banks? Banks were meant to be small to exploit performance of local linear search. Lock partitions have to be bigger for sure.\n\n\n\n> On 30 Oct 2023, at 09:20, Dilip Kumar <[email protected]> wrote:\n> \n> I have taken 0001 and 0002 from [1], done some bug fixes in 0001\n\n\nBTW can you please describe in more detail what kind of bugs?\n\n\nThanks for working on this!\n\n\nBest regards, Andrey Borodin.\n\n\n[0] https://www.postgresql.org/message-id/flat/CA%2BhUKGKVqrxOp82zER1%3DXN%3DyPwV_-OCGAg%3Dez%3D1iz9rG%2BA7Smw%40mail.gmail.com#b60f1cb73d350cf686338d4e800e12a2\nOn 30 Oct 2023, at 09:20, Dilip Kumar <[email protected]> wrote:changed the logic of SlruAdjustNSlots() in 0002, such that now itstarts with the next power of 2 value of the configured slots andkeeps doubling the number of banks until we reach the number of banksto the max SLRU_MAX_BANKS(128) and bank size is bigger thanSLRU_MIN_BANK_SIZE (8).  By doing so, we will ensure we don't have toomany banksThere was nothing wrong with having too many banks. Until bank-wise locks and counters were added in later patchsets.Having hashtable to find SLRU page in the buffer IMV is too slow. Some comments on this approach can be found here [0].I'm OK with having HTAB for that if we are sure performance does not degrade significantly, but I really doubt this is the case.I even think SLRU buffers used HTAB in some ancient times, but I could not find commit when it was changed to linear search.Maybe we could decouple locks and counters from SLRU banks? Banks were meant to be small to exploit performance of local linear search. Lock partitions have to be bigger for sure.On 30 Oct 2023, at 09:20, Dilip Kumar <[email protected]> wrote:I have taken 0001 and 0002 from [1], done some bug fixes in 0001BTW can you please describe in more detail what kind of bugs?Thanks for working on this!Best regards, Andrey Borodin.[0] https://www.postgresql.org/message-id/flat/CA%2BhUKGKVqrxOp82zER1%3DXN%3DyPwV_-OCGAg%3Dez%3D1iz9rG%2BA7Smw%40mail.gmail.com#b60f1cb73d350cf686338d4e800e12a2", "msg_date": "Sat, 4 Nov 2023 23:07:52 +0300", "msg_from": "\"Andrey M. Borodin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SLRU optimization - configurable buffer pool and partitioning the\n SLRU lock" }, { "msg_contents": "On Sun, Nov 5, 2023 at 1:37 AM Andrey M. Borodin <[email protected]> wrote:\n\n> On 30 Oct 2023, at 09:20, Dilip Kumar <[email protected]> wrote:\n>\n> changed the logic of SlruAdjustNSlots() in 0002, such that now it\n> starts with the next power of 2 value of the configured slots and\n> keeps doubling the number of banks until we reach the number of banks\n> to the max SLRU_MAX_BANKS(128) and bank size is bigger than\n> SLRU_MIN_BANK_SIZE (8). By doing so, we will ensure we don't have too\n> many banks\n>\n> There was nothing wrong with having too many banks. Until bank-wise locks and counters were added in later patchsets.\n\nI agree with that, but I feel with bank-wise locks we are removing\nmajor contention from the centralized control lock and we can see that\nfrom my first email that how much benefit we can get in one of the\nsimple test cases when we create subtransaction overflow.\n\n> Having hashtable to find SLRU page in the buffer IMV is too slow. Some comments on this approach can be found here [0].\n> I'm OK with having HTAB for that if we are sure performance does not degrade significantly, but I really doubt this is the case.\n> I even think SLRU buffers used HTAB in some ancient times, but I could not find commit when it was changed to linear search.\n\nThe main intention of having this buffer mapping hash is to find the\nSLRU page faster than sequence search when banks are relatively bigger\nin size, but if we find the cases where having hash creates more\noverhead than providing gain then I am fine to remove the hash because\nthe whole purpose of adding hash here to make the lookup faster. So\nfar in my test I did not find the slowness. Do you or anyone else\nhave any test case based on the previous research on whether it\ncreates any slowness?\n\n> Maybe we could decouple locks and counters from SLRU banks? Banks were meant to be small to exploit performance of local linear search. Lock partitions have to be bigger for sure.\n\nYeah, that could also be an idea if we plan to drop the hash. I mean\nbank-wise counter is fine as we are finding a victim buffer within a\nbank itself, but each lock could cover more slots than one bank size\nor in other words, it can protect multiple banks. Let's hear more\nopinion on this.\n\n>\n> On 30 Oct 2023, at 09:20, Dilip Kumar <[email protected]> wrote:\n>\n> I have taken 0001 and 0002 from [1], done some bug fixes in 0001\n>\n>\n> BTW can you please describe in more detail what kind of bugs?\n\nYeah, actually that patch was using the same GUC\n(multixact_offsets_buffers) in SimpleLruInit for MultiXactOffsetCtl as\nwell as for MultiXactMemberCtl, see the below patch snippet from the\noriginal patch.\n\n@@ -1851,13 +1851,13 @@ MultiXactShmemInit(void)\n MultiXactMemberCtl->PagePrecedes = MultiXactMemberPagePrecedes;\n\n SimpleLruInit(MultiXactOffsetCtl,\n- \"MultiXactOffset\", NUM_MULTIXACTOFFSET_BUFFERS, 0,\n+ \"MultiXactOffset\", multixact_offsets_buffers, 0,\n MultiXactOffsetSLRULock, \"pg_multixact/offsets\",\n LWTRANCHE_MULTIXACTOFFSET_BUFFER,\n SYNC_HANDLER_MULTIXACT_OFFSET);\n SlruPagePrecedesUnitTests(MultiXactOffsetCtl, MULTIXACT_OFFSETS_PER_PAGE);\n SimpleLruInit(MultiXactMemberCtl,\n- \"MultiXactMember\", NUM_MULTIXACTMEMBER_BUFFERS, 0,\n+ \"MultiXactMember\", multixact_offsets_buffers, 0,\n MultiXactMemberSLRULock, \"pg_multixact/members\",\n LWTRANCHE_MULTIXACTMEMBER_BUFFER,\n SYNC_HANDLER_MULTIXACT_MEMBER);\n\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 6 Nov 2023 09:39:54 +0530", "msg_from": "Dilip Kumar <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SLRU optimization - configurable buffer pool and partitioning the\n SLRU lock" }, { "msg_contents": "> On 6 Nov 2023, at 09:09, Dilip Kumar <[email protected]> wrote:\n> \n> \n>> Having hashtable to find SLRU page in the buffer IMV is too slow. Some comments on this approach can be found here [0].\n>> I'm OK with having HTAB for that if we are sure performance does not degrade significantly, but I really doubt this is the case.\n>> I even think SLRU buffers used HTAB in some ancient times, but I could not find commit when it was changed to linear search.\n> \n> The main intention of having this buffer mapping hash is to find the\n> SLRU page faster than sequence search when banks are relatively bigger\n> in size, but if we find the cases where having hash creates more\n> overhead than providing gain then I am fine to remove the hash because\n> the whole purpose of adding hash here to make the lookup faster. So\n> far in my test I did not find the slowness. Do you or anyone else\n> have any test case based on the previous research on whether it\n> creates any slowness?\nPFA test benchmark_slru_page_readonly(). In this test we run SimpleLruReadPage_ReadOnly() (essential part of TransactionIdGetStatus())\nbefore introducing HTAB for buffer mapping I get\nTime: 14837.851 ms (00:14.838)\nwith buffer HTAB I get\nTime: 22723.243 ms (00:22.723)\n\nThis hash table makes getting transaction status ~50% slower.\n\nBenchmark script I used:\nmake -C $HOME/postgresMX -j 8 install && (pkill -9 postgres; rm -rf test; ./initdb test && echo \"shared_preload_libraries = 'test_slru'\">> test/postgresql.conf && ./pg_ctl -D test start && ./psql -c 'create extension test_slru' postgres && ./pg_ctl -D test restart && ./psql -c \"SELECT count(test_slru_page_write(a, 'Test SLRU'))\n FROM generate_series(12346, 12393, 1) as a;\" -c '\\timing' -c \"SELECT benchmark_slru_page_readonly(12377);\" postgres)\n\n> \n>> Maybe we could decouple locks and counters from SLRU banks? Banks were meant to be small to exploit performance of local linear search. Lock partitions have to be bigger for sure.\n> \n> Yeah, that could also be an idea if we plan to drop the hash. I mean\n> bank-wise counter is fine as we are finding a victim buffer within a\n> bank itself, but each lock could cover more slots than one bank size\n> or in other words, it can protect multiple banks. Let's hear more\n> opinion on this.\n+1\n\n> \n>> \n>> On 30 Oct 2023, at 09:20, Dilip Kumar <[email protected]> wrote:\n>> \n>> I have taken 0001 and 0002 from [1], done some bug fixes in 0001\n>> \n>> \n>> BTW can you please describe in more detail what kind of bugs?\n> \n> Yeah, actually that patch was using the same GUC\n> (multixact_offsets_buffers) in SimpleLruInit for MultiXactOffsetCtl as\n> well as for MultiXactMemberCtl, see the below patch snippet from the\n> original patch.\n\nOuch. We were running this for serveral years with this bug... Thanks!\n\n\nBest regards, Andrey Borodin.", "msg_date": "Mon, 6 Nov 2023 12:35:55 +0500", "msg_from": "\"Andrey M. Borodin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SLRU optimization - configurable buffer pool and partitioning the\n SLRU lock" }, { "msg_contents": "On Mon, Nov 6, 2023 at 1:05 PM Andrey M. Borodin <[email protected]> wrote:\n\n> > On 6 Nov 2023, at 09:09, Dilip Kumar <[email protected]> wrote:\n> >\n> >\n> >> Having hashtable to find SLRU page in the buffer IMV is too slow. Some comments on this approach can be found here [0].\n> >> I'm OK with having HTAB for that if we are sure performance does not degrade significantly, but I really doubt this is the case.\n> >> I even think SLRU buffers used HTAB in some ancient times, but I could not find commit when it was changed to linear search.\n> >\n> > The main intention of having this buffer mapping hash is to find the\n> > SLRU page faster than sequence search when banks are relatively bigger\n> > in size, but if we find the cases where having hash creates more\n> > overhead than providing gain then I am fine to remove the hash because\n> > the whole purpose of adding hash here to make the lookup faster. So\n> > far in my test I did not find the slowness. Do you or anyone else\n> > have any test case based on the previous research on whether it\n> > creates any slowness?\n> PFA test benchmark_slru_page_readonly(). In this test we run SimpleLruReadPage_ReadOnly() (essential part of TransactionIdGetStatus())\n> before introducing HTAB for buffer mapping I get\n> Time: 14837.851 ms (00:14.838)\n> with buffer HTAB I get\n> Time: 22723.243 ms (00:22.723)\n>\n> This hash table makes getting transaction status ~50% slower.\n>\n> Benchmark script I used:\n> make -C $HOME/postgresMX -j 8 install && (pkill -9 postgres; rm -rf test; ./initdb test && echo \"shared_preload_libraries = 'test_slru'\">> test/postgresql.conf && ./pg_ctl -D test start && ./psql -c 'create extension test_slru' postgres && ./pg_ctl -D test restart && ./psql -c \"SELECT count(test_slru_page_write(a, 'Test SLRU'))\n> FROM generate_series(12346, 12393, 1) as a;\" -c '\\timing' -c \"SELECT benchmark_slru_page_readonly(12377);\" postgres)\n\nWith this test, I got below numbers,\n\nnslots. no-hash hash\n8 10s 13s\n16 10s 13s\n32 15s 13s\n64 17s 13s\n\nYeah so we can see with a small bank size <=16 slots we are seeing\nthat the fetching page with hash is 30% slower than the sequential\nsearch, but beyond 32 slots sequential search is become slower as you\ngrow the number of slots whereas with hash it stays constant as\nexpected. But now as you told if keep the lock partition range\ndifferent than the bank size then we might not have any problem by\nhaving more numbers of banks and with that, we can keep the bank size\nsmall like 16. Let me put some more thought into this and get back.\nAny other opinions on this?\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 6 Nov 2023 14:51:04 +0530", "msg_from": "Dilip Kumar <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SLRU optimization - configurable buffer pool and partitioning the\n SLRU lock" }, { "msg_contents": "On 2023-Nov-06, Dilip Kumar wrote:\n\n> Yeah so we can see with a small bank size <=16 slots we are seeing\n> that the fetching page with hash is 30% slower than the sequential\n> search, but beyond 32 slots sequential search is become slower as you\n> grow the number of slots whereas with hash it stays constant as\n> expected. But now as you told if keep the lock partition range\n> different than the bank size then we might not have any problem by\n> having more numbers of banks and with that, we can keep the bank size\n> small like 16. Let me put some more thought into this and get back.\n> Any other opinions on this?\n\ndynahash is notoriously slow, which is why we have simplehash.h since\ncommit b30d3ea824c5. Maybe we could use that instead.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"Escucha y olvidarás; ve y recordarás; haz y entenderás\" (Confucio)\n\n\n", "msg_date": "Mon, 6 Nov 2023 10:31:41 +0100", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SLRU optimization - configurable buffer pool and partitioning\n the SLRU lock" }, { "msg_contents": "\n\n> On 6 Nov 2023, at 14:31, Alvaro Herrera <[email protected]> wrote:\n> \n> dynahash is notoriously slow, which is why we have simplehash.h since\n> commit b30d3ea824c5. Maybe we could use that instead.\n\nDynahash has lock partitioning. Simplehash has not, AFAIK.\nThe thing is we do not really need a hash function - pageno is already a best hash function itself. And we do not need to cope with collisions much - we can evict a collided buffer.\n\nGiven this we do not need a hashtable at all. That’s exact reasoning how banks emerged, I started implementing dynahsh patch in April 2021 and found out that “banks” approach is cleaner. However the term “bank” is not common in software, it’s taken from hardware cache.\n\n\nBest regards, Andrey Borodin.\n\n", "msg_date": "Mon, 6 Nov 2023 16:14:32 +0500", "msg_from": "\"Andrey M. Borodin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SLRU optimization - configurable buffer pool and partitioning the\n SLRU lock" }, { "msg_contents": "On Mon, Nov 6, 2023 at 4:44 PM Andrey M. Borodin <[email protected]> wrote:\n> > On 6 Nov 2023, at 14:31, Alvaro Herrera <[email protected]> wrote:\n> >\n> > dynahash is notoriously slow, which is why we have simplehash.h since\n> > commit b30d3ea824c5. Maybe we could use that instead.\n>\n> Dynahash has lock partitioning. Simplehash has not, AFAIK.\n\nYeah, Simplehash doesn't have partitioning so with simple hash we will\nbe stuck with the centralized control lock that is one of the main\nproblems trying to solve here.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 6 Nov 2023 16:54:42 +0530", "msg_from": "Dilip Kumar <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SLRU optimization - configurable buffer pool and partitioning the\n SLRU lock" }, { "msg_contents": "On Mon, Nov 6, 2023 at 4:44 PM Andrey M. Borodin <[email protected]>\nwrote:\n\n>\n>\n> > On 6 Nov 2023, at 14:31, Alvaro Herrera <[email protected]> wrote:\n> >\n> > dynahash is notoriously slow, which is why we have simplehash.h since\n> > commit b30d3ea824c5. Maybe we could use that instead.\n>\n> Dynahash has lock partitioning. Simplehash has not, AFAIK.\n> The thing is we do not really need a hash function - pageno is already a\n> best hash function itself. And we do not need to cope with collisions much\n> - we can evict a collided buffer.\n>\n> Given this we do not need a hashtable at all. That’s exact reasoning how\n> banks emerged, I started implementing dynahsh patch in April 2021 and found\n> out that “banks” approach is cleaner. However the term “bank” is not common\n> in software, it’s taken from hardware cache.\n>\n\nI agree that we don't need the hash function to generate hash value out of\npageno which itself is sufficient, but I don't understand how we can get\nrid of\nthe hash table itself -- how we would map the pageno and the slot number?\nThat mapping is not needed at all?\n\nRegards,\nAmul\n\nOn Mon, Nov 6, 2023 at 4:44 PM Andrey M. Borodin <[email protected]> wrote:\n\n> On 6 Nov 2023, at 14:31, Alvaro Herrera <[email protected]> wrote:\n> \n> dynahash is notoriously slow, which is why we have simplehash.h since\n> commit b30d3ea824c5.  Maybe we could use that instead.\n\nDynahash has lock partitioning. Simplehash has not, AFAIK.\nThe thing is we do not really need a hash function - pageno is already a best hash function itself. And we do not need to cope with collisions much - we can evict a collided buffer.\n\nGiven this we do not need a hashtable at all. That’s exact reasoning how banks emerged, I started implementing dynahsh patch in April 2021 and found out that “banks” approach is cleaner. However the term “bank” is not common in software, it’s taken from hardware cache.I agree that we don't need the hash function to generate hash value out ofpageno which itself is sufficient, but I don't understand how we can get rid ofthe hash table itself -- how we would map the pageno and the slot number?That mapping is not needed at all? Regards,Amul", "msg_date": "Wed, 8 Nov 2023 10:39:34 +0530", "msg_from": "Amul Sul <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SLRU optimization - configurable buffer pool and partitioning the\n SLRU lock" }, { "msg_contents": "On Fri, Nov 3, 2023 at 10:59 AM Dilip Kumar <[email protected]> wrote:\n\n> On Mon, Oct 30, 2023 at 11:50 AM Dilip Kumar <[email protected]>\n> wrote:\n> > [...]\n> [1] 0001-Make-all-SLRU-buffer-sizes-configurable: This is the same\n> patch as the previous patch set\n> [2] 0002-Add-a-buffer-mapping-table-for-SLRUs: Patch to introduce\n> buffer mapping hash table\n> [3] 0003-Partition-wise-slru-locks: Partition the hash table and also\n> introduce partition-wise locks: this is a merge of 0003 and 0004 from\n> the previous patch set but instead of bank-wise locks it has\n> partition-wise locks and LRU counter.\n> [4] 0004-Merge-partition-locks-array-with-buffer-locks-array: merging\n> buffer locks and bank locks in the same array so that the bank-wise\n> LRU counter does not fetch the next cache line in a hot function\n> SlruRecentlyUsed()(same as 0005 from the previous patch set)\n> [5] 0005-Ensure-slru-buffer-slots-are-in-multiple-of-number-of: Ensure\n> that the number of slots is in multiple of the number of banks\n> [...]\n\n\nHere are some minor comments:\n\n+ * By default, we'll use 1MB of for every 1GB of shared buffers, up to the\n+ * maximum value that slru.c will allow, but always at least 16 buffers.\n */\n Size\n CommitTsShmemBuffers(void)\n {\n- return Min(256, Max(4, NBuffers / 256));\n+ /* Use configured value if provided. */\n+ if (commit_ts_buffers > 0)\n+ return Max(16, commit_ts_buffers);\n+ return Min(256, Max(16, NBuffers / 256));\n\nDo you mean \"4MB of for every 1GB\" in the comment?\n--\n\ndiff --git a/src/include/access/commit_ts.h b/src/include/access/commit_ts.h\nindex 5087cdce51..78d017ad85 100644\n--- a/src/include/access/commit_ts.h\n+++ b/src/include/access/commit_ts.h\n@@ -16,7 +16,6 @@\n #include \"replication/origin.h\"\n #include \"storage/sync.h\"\n\n-\n extern PGDLLIMPORT bool track_commit_timestamp;\n\nA spurious change.\n--\n\n@@ -168,10 +180,19 @@ SimpleLruShmemSize(int nslots, int nlsns)\n\n if (nlsns > 0)\n sz += MAXALIGN(nslots * nlsns * sizeof(XLogRecPtr)); /*\ngroup_lsn[] */\n-\n return BUFFERALIGN(sz) + BLCKSZ * nslots;\n }\n\nAnother spurious change in 0002 patch.\n--\n\n+/*\n+ * The slru buffer mapping table is partitioned to reduce contention. To\n+ * determine which partition lock a given pageno requires, compute the\npageno's\n+ * hash code with SlruBufTableHashCode(), then apply SlruPartitionLock().\n+ */\n\nI didn't see SlruBufTableHashCode() & SlruPartitionLock() functions\nanywhere in\nyour patches, is that outdated comment?\n--\n\n- sz += MAXALIGN(nslots * sizeof(LWLockPadded)); /* buffer_locks[] */\n- sz += MAXALIGN(SLRU_NUM_PARTITIONS * sizeof(LWLockPadded)); /*\npart_locks[] */\n+ sz += MAXALIGN((nslots + SLRU_NUM_PARTITIONS) * sizeof(LWLockPadded));\n /* locks[] */\n\nI am a bit uncomfortable with these changes, merging parts and buffer locks\nmaking it hard to understand the code. Not sure what we were getting out of\nthis?\n--\n\nSubject: [PATCH v4 5/5] Ensure slru buffer slots are in multiple of numbe of\n partitions\n\nI think the 0005 patch can be merged to 0001.\n\nRegards,\nAmul\n\nOn Fri, Nov 3, 2023 at 10:59 AM Dilip Kumar <[email protected]> wrote:On Mon, Oct 30, 2023 at 11:50 AM Dilip Kumar <[email protected]> wrote:\n> [...]\n[1] 0001-Make-all-SLRU-buffer-sizes-configurable: This is the same\npatch as the previous patch set\n[2] 0002-Add-a-buffer-mapping-table-for-SLRUs: Patch to introduce\nbuffer mapping hash table\n[3] 0003-Partition-wise-slru-locks: Partition the hash table and also\nintroduce partition-wise locks: this is a merge of 0003 and 0004 from\nthe previous patch set but instead of bank-wise locks it has\npartition-wise locks and LRU counter.\n[4] 0004-Merge-partition-locks-array-with-buffer-locks-array: merging\nbuffer locks and bank locks in the same array so that the bank-wise\nLRU counter does not fetch the next cache line in a hot function\nSlruRecentlyUsed()(same as 0005 from the previous patch set)\n[5] 0005-Ensure-slru-buffer-slots-are-in-multiple-of-number-of: Ensure\nthat the number of slots is in multiple of the number of banks[...]Here are some minor comments:+ * By default, we'll use 1MB of for every 1GB of shared buffers, up to the+ * maximum value that slru.c will allow, but always at least 16 buffers.  */ Size CommitTsShmemBuffers(void) {-   return Min(256, Max(4, NBuffers / 256));+   /* Use configured value if provided. */+   if (commit_ts_buffers > 0)+       return Max(16, commit_ts_buffers);+   return Min(256, Max(16, NBuffers / 256));Do you mean \"4MB of for every 1GB\"  in the comment?--diff --git a/src/include/access/commit_ts.h b/src/include/access/commit_ts.hindex 5087cdce51..78d017ad85 100644--- a/src/include/access/commit_ts.h+++ b/src/include/access/commit_ts.h@@ -16,7 +16,6 @@ #include \"replication/origin.h\" #include \"storage/sync.h\"- extern PGDLLIMPORT bool track_commit_timestamp;A spurious change.--@@ -168,10 +180,19 @@ SimpleLruShmemSize(int nslots, int nlsns)     if (nlsns > 0)        sz += MAXALIGN(nslots * nlsns * sizeof(XLogRecPtr));    /* group_lsn[] */-    return BUFFERALIGN(sz) + BLCKSZ * nslots; }Another spurious change in 0002 patch.--+/*+ * The slru buffer mapping table is partitioned to reduce contention. To+ * determine which partition lock a given pageno requires, compute the pageno's+ * hash code with SlruBufTableHashCode(), then apply SlruPartitionLock().+ */I didn't see SlruBufTableHashCode() & SlruPartitionLock() functions anywhere inyour patches, is that outdated comment?---   sz += MAXALIGN(nslots * sizeof(LWLockPadded));  /* buffer_locks[] */-   sz += MAXALIGN(SLRU_NUM_PARTITIONS * sizeof(LWLockPadded)); /* part_locks[] */+   sz += MAXALIGN((nslots + SLRU_NUM_PARTITIONS) * sizeof(LWLockPadded));  /* locks[] */I am a bit uncomfortable with these changes, merging parts and buffer locksmaking it hard to understand the code. Not sure what we were getting out ofthis?--Subject: [PATCH v4 5/5] Ensure slru buffer slots are in multiple of numbe of partitionsI think the 0005 patch can be merged to 0001.Regards,Amul", "msg_date": "Wed, 8 Nov 2023 10:51:38 +0530", "msg_from": "Amul Sul <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SLRU optimization - configurable buffer pool and partitioning the\n SLRU lock" }, { "msg_contents": "On Wed, Nov 8, 2023 at 10:52 AM Amul Sul <[email protected]> wrote:\n\nThanks for review Amul,\n\n> Here are some minor comments:\n>\n> + * By default, we'll use 1MB of for every 1GB of shared buffers, up to the\n> + * maximum value that slru.c will allow, but always at least 16 buffers.\n> */\n> Size\n> CommitTsShmemBuffers(void)\n> {\n> - return Min(256, Max(4, NBuffers / 256));\n> + /* Use configured value if provided. */\n> + if (commit_ts_buffers > 0)\n> + return Max(16, commit_ts_buffers);\n> + return Min(256, Max(16, NBuffers / 256));\n>\n> Do you mean \"4MB of for every 1GB\" in the comment?\n\nYou are right\n\n> --\n>\n> diff --git a/src/include/access/commit_ts.h b/src/include/access/commit_ts.h\n> index 5087cdce51..78d017ad85 100644\n> --- a/src/include/access/commit_ts.h\n> +++ b/src/include/access/commit_ts.h\n> @@ -16,7 +16,6 @@\n> #include \"replication/origin.h\"\n> #include \"storage/sync.h\"\n>\n> -\n> extern PGDLLIMPORT bool track_commit_timestamp;\n>\n> A spurious change.\n\nWill fix\n\n> --\n>\n> @@ -168,10 +180,19 @@ SimpleLruShmemSize(int nslots, int nlsns)\n>\n> if (nlsns > 0)\n> sz += MAXALIGN(nslots * nlsns * sizeof(XLogRecPtr)); /* group_lsn[] */\n> -\n> return BUFFERALIGN(sz) + BLCKSZ * nslots;\n> }\n>\n> Another spurious change in 0002 patch.\n\nWill fix\n\n> --\n>\n> +/*\n> + * The slru buffer mapping table is partitioned to reduce contention. To\n> + * determine which partition lock a given pageno requires, compute the pageno's\n> + * hash code with SlruBufTableHashCode(), then apply SlruPartitionLock().\n> + */\n>\n> I didn't see SlruBufTableHashCode() & SlruPartitionLock() functions anywhere in\n> your patches, is that outdated comment?\n\nYes will fix it, actually, there are some major design changes to this.\n\n> --\n>\n> - sz += MAXALIGN(nslots * sizeof(LWLockPadded)); /* buffer_locks[] */\n> - sz += MAXALIGN(SLRU_NUM_PARTITIONS * sizeof(LWLockPadded)); /* part_locks[] */\n> + sz += MAXALIGN((nslots + SLRU_NUM_PARTITIONS) * sizeof(LWLockPadded)); /* locks[] */\n>\n> I am a bit uncomfortable with these changes, merging parts and buffer locks\n> making it hard to understand the code. Not sure what we were getting out of\n> this?\n\nYes, even I do not like this much because it is confusing. But the\nadvantage of this is that we are using a single pointer for the lock\nwhich means the next variable for the LRU counter will come in the\nsame cacheline and frequent updates of lru counter will be benefitted\nfrom this. Although I don't have any number which proves this.\nCurrently, I want to focus on all the base patches and keep this patch\nas add on and later if we find its useful and want to pursue this then\nwe will see how to make it better readable.\n\n\n>\n> Subject: [PATCH v4 5/5] Ensure slru buffer slots are in multiple of numbe of\n> partitions\n>\n> I think the 0005 patch can be merged to 0001.\n\nYeah in the next version, it is done that way. Planning to post end of the day.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 8 Nov 2023 11:16:03 +0530", "msg_from": "Dilip Kumar <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SLRU optimization - configurable buffer pool and partitioning the\n SLRU lock" }, { "msg_contents": "On Sat, 4 Nov 2023 at 22:08, Andrey M. Borodin <[email protected]> wrote:\n\n> On 30 Oct 2023, at 09:20, Dilip Kumar <[email protected]> wrote:\n>\n> changed the logic of SlruAdjustNSlots() in 0002, such that now it\n> starts with the next power of 2 value of the configured slots and\n> keeps doubling the number of banks until we reach the number of banks\n> to the max SLRU_MAX_BANKS(128) and bank size is bigger than\n> SLRU_MIN_BANK_SIZE (8). By doing so, we will ensure we don't have too\n> many banks\n>\n> There was nothing wrong with having too many banks. Until bank-wise locks\n> and counters were added in later patchsets.\n> Having hashtable to find SLRU page in the buffer IMV is too slow. Some\n> comments on this approach can be found here [0].\n> I'm OK with having HTAB for that if we are sure performance does not\n> degrade significantly, but I really doubt this is the case.\n> I even think SLRU buffers used HTAB in some ancient times, but I could not\n> find commit when it was changed to linear search.\n>\n> Maybe we could decouple locks and counters from SLRU banks? Banks were\n> meant to be small to exploit performance of local linear search. Lock\n> partitions have to be bigger for sure.\n>\n\nIs there a particular reason why lock partitions need to be bigger? We have\none lock per buffer anyway, bankwise locks will increase the number of\nlocks < 10%.\n\nI am working on trying out a SIMD based LRU mechanism that uses a 16 entry\nbank. The data layout is:\n\nstruct CacheBank {\n int page_numbers[16];\n char access_age[16];\n}\n\nThe first part uses up one cache line, and the second line has 48 bytes of\nspace left over that could fit a lwlock and page_status, page_dirty arrays.\n\nLookup + LRU maintenance has 20 instructions/14 cycle latency and the only\nbranch is for found/not found. Hoping to have a working prototype of SLRU\non top in the next couple of days.\n\nRegards,\nAnts Aasma\n\nOn Sat, 4 Nov 2023 at 22:08, Andrey M. Borodin <[email protected]> wrote:On 30 Oct 2023, at 09:20, Dilip Kumar <[email protected]> wrote:changed the logic of SlruAdjustNSlots() in 0002, such that now itstarts with the next power of 2 value of the configured slots andkeeps doubling the number of banks until we reach the number of banksto the max SLRU_MAX_BANKS(128) and bank size is bigger thanSLRU_MIN_BANK_SIZE (8).  By doing so, we will ensure we don't have toomany banksThere was nothing wrong with having too many banks. Until bank-wise locks and counters were added in later patchsets.Having hashtable to find SLRU page in the buffer IMV is too slow. Some comments on this approach can be found here [0].I'm OK with having HTAB for that if we are sure performance does not degrade significantly, but I really doubt this is the case.I even think SLRU buffers used HTAB in some ancient times, but I could not find commit when it was changed to linear search.Maybe we could decouple locks and counters from SLRU banks? Banks were meant to be small to exploit performance of local linear search. Lock partitions have to be bigger for sure.Is there a particular reason why lock partitions need to be bigger? We have one lock per buffer anyway, bankwise locks will increase the number of locks < 10%.I am working on trying out a SIMD based LRU mechanism that uses a 16 entry bank. The data layout is:struct CacheBank {    int page_numbers[16];    char access_age[16];}The first part uses up one cache line, and the second line has 48 bytes of space left over that could fit a lwlock and page_status, page_dirty arrays.Lookup + LRU maintenance has 20 instructions/14 cycle latency and the only branch is for found/not found. Hoping to have a working prototype of SLRU on top in the next couple of days.Regards,Ants Aasma", "msg_date": "Wed, 8 Nov 2023 11:17:23 +0200", "msg_from": "Ants Aasma <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SLRU optimization - configurable buffer pool and partitioning the\n SLRU lock" }, { "msg_contents": "\n\n> On 8 Nov 2023, at 14:17, Ants Aasma <[email protected]> wrote:\n> \n> Is there a particular reason why lock partitions need to be bigger? We have one lock per buffer anyway, bankwise locks will increase the number of locks < 10%.\n\nThe problem was not attracting much attention for some years. So my reasoning was that solution should not have any costs at all. Initial patchset with banks did not add any memory footprint.\n\n\n> On 8 Nov 2023, at 14:17, Ants Aasma <[email protected]> wrote:\n> \n> I am working on trying out a SIMD based LRU mechanism that uses a 16 entry bank. \n\nFWIW I tried to pack struct parts together to minimize cache lines touched, see step 3 in [0]. So far I could not prove any performance benefits of this approach. But maybe your implementation will be more efficient.\n\n\nThanks!\n\n\nBest regards, Andrey Borodin.\n\n\n[0] https://www.postgresql.org/message-id/[email protected]\n\n\n\n", "msg_date": "Wed, 8 Nov 2023 14:32:48 +0500", "msg_from": "\"Andrey M. Borodin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SLRU optimization - configurable buffer pool and partitioning the\n SLRU lock" }, { "msg_contents": "On Mon, Nov 6, 2023 at 9:39 AM Dilip Kumar <[email protected]> wrote:\n>\n> On Sun, Nov 5, 2023 at 1:37 AM Andrey M. Borodin <[email protected]> wrote:\n>\n> > Maybe we could decouple locks and counters from SLRU banks? Banks were meant to be small to exploit performance of local linear search. Lock partitions have to be bigger for sure.\n>\n> Yeah, that could also be an idea if we plan to drop the hash. I mean\n> bank-wise counter is fine as we are finding a victim buffer within a\n> bank itself, but each lock could cover more slots than one bank size\n> or in other words, it can protect multiple banks. Let's hear more\n> opinion on this.\n\nHere is the updated version of the patch, here I have taken the\napproach suggested by Andrey and I discussed the same with Alvaro\nofflist and he also agrees with it. So the idea is that we will keep\nthe bank size fixed which is 16 buffers per bank and the allowed GUC\nvalue for each slru buffer must be in multiple of the bank size. We\nhave removed the centralized lock but instead of one lock per bank, we\nhave kept the maximum limit on the number of bank locks which is 128.\nWe kept the max limit as 128 because, in one of the operations (i.e.\nActivateCommitTs), we need to acquire all the bank locks (but this is\nnot a performance path at all) and at a time we can acquire a max of\n200 LWlocks, so we think this limit of 128 is good. So now if the\nnumber of banks is <= 128 then we will be using one lock per bank\notherwise the one lock may protect access of buffer in multiple banks.\n\nWe might argue that we should keep the max lock lesser than 128 i.e.\n64 or 32 and I am open to that we can do more experiments with a very\nlarge buffer pool and a very heavy workload to see whether having lock\nup to 128 is helpful or not\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Wed, 8 Nov 2023 17:10:43 +0530", "msg_from": "Dilip Kumar <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SLRU optimization - configurable buffer pool and partitioning the\n SLRU lock" }, { "msg_contents": "IMO the whole area of SLRU buffering is in horrible shape and many users\nare struggling with overall PG performance because of it. An\nimprovement doesn't have to be perfect -- it just has to be much better\nthan the current situation, which should be easy enough. We can\ncontinue to improve later, using more scalable algorithms or ones that\nallow us to raise the limits higher.\n\nThe only point on which we do not have full consensus yet is the need to\nhave one GUC per SLRU, and a lot of effort seems focused on trying to\nfix the problem without adding so many GUCs (for example, using shared\nbuffers instead, or use a single \"scaling\" GUC). I think that hinders\nprogress. Let's just add multiple GUCs, and users can leave most of\nthem alone and only adjust the one with which they have a performance\nproblems; it's not going to be the same one for everybody.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"Sallah, I said NO camels! That's FIVE camels; can't you count?\"\n(Indiana Jones)\n\n\n", "msg_date": "Thu, 9 Nov 2023 12:25:13 +0100", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SLRU optimization - configurable buffer pool and partitioning\n the SLRU lock" }, { "msg_contents": "On Wed, Nov 8, 2023 at 6:41 AM Dilip Kumar <[email protected]> wrote:\n> Here is the updated version of the patch, here I have taken the\n> approach suggested by Andrey and I discussed the same with Alvaro\n> offlist and he also agrees with it. So the idea is that we will keep\n> the bank size fixed which is 16 buffers per bank and the allowed GUC\n> value for each slru buffer must be in multiple of the bank size. We\n> have removed the centralized lock but instead of one lock per bank, we\n> have kept the maximum limit on the number of bank locks which is 128.\n> We kept the max limit as 128 because, in one of the operations (i.e.\n> ActivateCommitTs), we need to acquire all the bank locks (but this is\n> not a performance path at all) and at a time we can acquire a max of\n> 200 LWlocks, so we think this limit of 128 is good. So now if the\n> number of banks is <= 128 then we will be using one lock per bank\n> otherwise the one lock may protect access of buffer in multiple banks.\n\nJust so I understand, I guess this means that an SLRU is limited to\n16*128 = 2k buffers = 16MB?\n\nWhen we were talking about this earlier, I suggested fixing the number\nof banks and allowing the number of buffers per bank to scale\ndepending on the setting. That seemed simpler than allowing both the\nnumber of banks and the number of buffers to vary, and it might allow\nthe compiler to optimize some code better, by converting a calculation\nlike page_no%number_of_banks into a masking operation like page_no&0xf\nor whatever. However, because it allows an individual bank to become\narbitrarily large, it more or less requires us to use a buffer mapping\ntable. Some of the performance problems mentioned could be alleviated\nby omitting the hash table when the number of buffers per bank is\nsmall, and we could also create the dynahash with a custom hash\nfunction that just does modular arithmetic on the page number rather\nthan a real hashing operation. However, maybe we don't really need to\ndo any of that. I agree that dynahash is clunky on a good day. I\nhadn't realized the impact would be so noticeable.\n\nThis proposal takes the opposite approach of fixing the number of\nbuffers per bank, letting the number of banks vary. I think that's\nprobably fine, although it does reduce the effective associativity of\nthe cache. If there are more hot buffers in a bank than the bank size,\nthe bank will be contended, even if other banks are cold. However,\ngiven the way SLRUs are accessed, it seems hard to imagine this being\na real problem in practice. There aren't likely to be say 20 hot\nbuffers that just so happen to all be separated from one another by a\nnumber of pages that is a multiple of the configured number of banks.\nAnd in the seemingly very unlikely event that you have a workload that\nbehaves like that, you could always adjust the number of banks up or\ndown by one, and the problem would go away. So this seems OK to me.\n\nI also agree with a couple of points that Alvaro made, specifically\nthat (1) this doesn't have to be perfect, just better than now and (2)\nseparate GUCs for each SLRU is fine. On the latter point, it's worth\nkeeping in mind that the cost of a GUC that most people don't need to\ntune is fairly low. GUCs like work_mem and shared_buffers are\n\"expensive\" because everybody more or less needs to understand what\nthey are and how to set them and getting the right value can tricky --\nbut a GUC like autovacuum_naptime is a lot cheaper, because almost\nnobody needs to change it. It seems to me that these GUCs will fall\ninto the latter category. Users can hopefully just ignore them except\nif they see a contention on the SLRU bank locks -- and then they can\nconsider increasing the number of banks for that particular SLRU. That\nseems simple enough. As with autovacuum_naptime, there is a danger\nthat people will configure a ridiculous value of the parameter for no\ngood reason and get bad results, so it would be nice if someday we had\na magical system that just got all of this right without the user\nneeding to configure anything. But in the meantime, it's better to\nhave a somewhat manual system to relieve pressure on these locks than\nno system at all.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 9 Nov 2023 11:08:51 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SLRU optimization - configurable buffer pool and partitioning the\n SLRU lock" }, { "msg_contents": "On Thu, Nov 9, 2023 at 9:39 PM Robert Haas <[email protected]> wrote:\n>\n> On Wed, Nov 8, 2023 at 6:41 AM Dilip Kumar <[email protected]> wrote:\n> > Here is the updated version of the patch, here I have taken the\n> > approach suggested by Andrey and I discussed the same with Alvaro\n> > offlist and he also agrees with it. So the idea is that we will keep\n> > the bank size fixed which is 16 buffers per bank and the allowed GUC\n> > value for each slru buffer must be in multiple of the bank size. We\n> > have removed the centralized lock but instead of one lock per bank, we\n> > have kept the maximum limit on the number of bank locks which is 128.\n> > We kept the max limit as 128 because, in one of the operations (i.e.\n> > ActivateCommitTs), we need to acquire all the bank locks (but this is\n> > not a performance path at all) and at a time we can acquire a max of\n> > 200 LWlocks, so we think this limit of 128 is good. So now if the\n> > number of banks is <= 128 then we will be using one lock per bank\n> > otherwise the one lock may protect access of buffer in multiple banks.\n>\n> Just so I understand, I guess this means that an SLRU is limited to\n> 16*128 = 2k buffers = 16MB?\n\nNot really, because 128 is the maximum limit on the number of bank\nlocks not on the number of banks. So for example, if you have 16*128\n= 2k buffers then each lock will protect one bank, and likewise when\nyou have 16 * 512 = 8k buffers then each lock will protect 4 banks.\nSo in short we can get the lock for each bank by simple computation\n(banklockno = bankno % 128 )\n\n> When we were talking about this earlier, I suggested fixing the number\n> of banks and allowing the number of buffers per bank to scale\n> depending on the setting. That seemed simpler than allowing both the\n> number of banks and the number of buffers to vary, and it might allow\n> the compiler to optimize some code better, by converting a calculation\n> like page_no%number_of_banks into a masking operation like page_no&0xf\n> or whatever. However, because it allows an individual bank to become\n> arbitrarily large, it more or less requires us to use a buffer mapping\n> table. Some of the performance problems mentioned could be alleviated\n> by omitting the hash table when the number of buffers per bank is\n> small, and we could also create the dynahash with a custom hash\n> function that just does modular arithmetic on the page number rather\n> than a real hashing operation. However, maybe we don't really need to\n> do any of that. I agree that dynahash is clunky on a good day. I\n> hadn't realized the impact would be so noticeable.\n\nYes, so one idea is that we keep the number of banks fixed and with\nthat, as you pointed out correctly with a large number of buffers, the\nbank size can be quite big and for that, we would need a hash table\nand OTOH what I am doing here is keeping the bank size fixed and\nsmaller (16 buffers per bank) and with that we can have large numbers\nof the bank when the buffer pool size is quite big. But I feel having\nmore banks is not really a problem if we grow the number of locks\nbeyond a certain limit as in some corner cases we need to acquire all\nlocks together and there is a limit on that. So I like this idea of\nsharing locks across the banks with that 1) We can have enough locks\nso that lock contention or cache invalidation due to a common lock\nshould not be a problem anymore 2) We can keep a small bank size with\nthat seq search within the bank is quite fast so reads are fast 3)\nWith small bank size victim buffer search which has to be sequential\nis quite fast.\n\n> This proposal takes the opposite approach of fixing the number of\n> buffers per bank, letting the number of banks vary. I think that's\n> probably fine, although it does reduce the effective associativity of\n> the cache. If there are more hot buffers in a bank than the bank size,\n> the bank will be contended, even if other banks are cold. However,\n> given the way SLRUs are accessed, it seems hard to imagine this being\n> a real problem in practice. There aren't likely to be say 20 hot\n> buffers that just so happen to all be separated from one another by a\n> number of pages that is a multiple of the configured number of banks.\n> And in the seemingly very unlikely event that you have a workload that\n> behaves like that, you could always adjust the number of banks up or\n> down by one, and the problem would go away. So this seems OK to me.\n\nI agree with this\n\n> I also agree with a couple of points that Alvaro made, specifically\n> that (1) this doesn't have to be perfect, just better than now and (2)\n> separate GUCs for each SLRU is fine. On the latter point, it's worth\n> keeping in mind that the cost of a GUC that most people don't need to\n> tune is fairly low. GUCs like work_mem and shared_buffers are\n> \"expensive\" because everybody more or less needs to understand what\n> they are and how to set them and getting the right value can tricky --\n> but a GUC like autovacuum_naptime is a lot cheaper, because almost\n> nobody needs to change it. It seems to me that these GUCs will fall\n> into the latter category. Users can hopefully just ignore them except\n> if they see a contention on the SLRU bank locks -- and then they can\n> consider increasing the number of banks for that particular SLRU. That\n> seems simple enough. As with autovacuum_naptime, there is a danger\n> that people will configure a ridiculous value of the parameter for no\n> good reason and get bad results, so it would be nice if someday we had\n> a magical system that just got all of this right without the user\n> needing to configure anything. But in the meantime, it's better to\n> have a somewhat manual system to relieve pressure on these locks than\n> no system at all.\n\n+1\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 10 Nov 2023 10:16:29 +0530", "msg_from": "Dilip Kumar <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SLRU optimization - configurable buffer pool and partitioning the\n SLRU lock" }, { "msg_contents": "On Thu, Nov 9, 2023 at 4:55 PM Alvaro Herrera <[email protected]> wrote:\n>\n> IMO the whole area of SLRU buffering is in horrible shape and many users\n> are struggling with overall PG performance because of it. An\n> improvement doesn't have to be perfect -- it just has to be much better\n> than the current situation, which should be easy enough. We can\n> continue to improve later, using more scalable algorithms or ones that\n> allow us to raise the limits higher.\n\nI agree with this.\n\n> The only point on which we do not have full consensus yet is the need to\n> have one GUC per SLRU, and a lot of effort seems focused on trying to\n> fix the problem without adding so many GUCs (for example, using shared\n> buffers instead, or use a single \"scaling\" GUC). I think that hinders\n> progress. Let's just add multiple GUCs, and users can leave most of\n> them alone and only adjust the one with which they have a performance\n> problems; it's not going to be the same one for everybody.\n\n+1\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 10 Nov 2023 10:17:49 +0530", "msg_from": "Dilip Kumar <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SLRU optimization - configurable buffer pool and partitioning the\n SLRU lock" }, { "msg_contents": "On Fri, Nov 10, 2023 at 10:17:49AM +0530, Dilip Kumar wrote:\n> On Thu, Nov 9, 2023 at 4:55 PM Alvaro Herrera <[email protected]> wrote:\n>> The only point on which we do not have full consensus yet is the need to\n>> have one GUC per SLRU, and a lot of effort seems focused on trying to\n>> fix the problem without adding so many GUCs (for example, using shared\n>> buffers instead, or use a single \"scaling\" GUC). I think that hinders\n>> progress. Let's just add multiple GUCs, and users can leave most of\n>> them alone and only adjust the one with which they have a performance\n>> problems; it's not going to be the same one for everybody.\n> \n> +1\n\n+1\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 10 Nov 2023 10:43:32 -0600", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SLRU optimization - configurable buffer pool and partitioning\n the SLRU lock" }, { "msg_contents": "I just noticed that 0003 does some changes to\nTransactionGroupUpdateXidStatus() that haven't been adequately\nexplained AFAICS. How do you know that these changes are safe?\n\n0001 contains one typo in the docs, \"cotents\".\n\nI'm not a fan of the fact that some CLOG sizing macros moved to clog.h,\nleaving others in clog.c. Maybe add commentary cross-linking both.\nAlternatively, perhaps allowing xact_buffers to grow beyond 65536 up to\nthe slru.h-defined limit of 131072 is not that bad, even if it's more\nthan could possibly be needed for xact_buffers; nobody is going to use\n64k buffers, since useful values are below a couple thousand anyhow.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\nTom: There seems to be something broken here.\nTeodor: I'm in sackcloth and ashes... Fixed.\n http://postgr.es/m/[email protected]\n\n\n", "msg_date": "Thu, 16 Nov 2023 10:41:48 +0100", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SLRU optimization - configurable buffer pool and partitioning\n the SLRU lock" }, { "msg_contents": "On Thu, Nov 16, 2023 at 3:11 PM Alvaro Herrera <[email protected]> wrote:\n>\n> I just noticed that 0003 does some changes to\n> TransactionGroupUpdateXidStatus() that haven't been adequately\n> explained AFAICS. How do you know that these changes are safe?\n\nIMHO this is safe as well as logical to do w.r.t. performance. It's\nsafe because whenever we are updating any page in a group we are\nacquiring the respective bank lock in exclusive mode and in extreme\ncases if there are pages from different banks then we do switch the\nlock as well before updating the pages from different groups. And, we\ndo not wake any process in a group unless we have done the status\nupdate for all the processes so there could not be any race condition\nas well. Also, It should not affect the performance adversely as well\nand this will not remove the need for group updates. The main use\ncase of group update is that it will optimize a situation when most of\nthe processes are contending for status updates on the same page and\nprocesses that are waiting for status updates on different pages will\ngo to different groups w.r.t. that page, so in short in a group on\nbest effort basis we are trying to have the processes which are\nwaiting to update the same clog page that mean logically all the\nprocesses in the group will be waiting on the same bank lock. In an\nextreme situation if there are processes in the group that are trying\nto update different pages or even pages from different banks then we\nare handling it well by changing the lock. Although someone may raise\na concern that in cases where there are processes that are waiting for\ndifferent bank locks then after releasing one lock why not wake up\nthose processes, I think that is not required because that is the\nsituation we are trying to avoid where there are processes trying to\nupdate different are in the same group so there is no point in adding\ncomplexity to optimize that case.\n\n\n> 0001 contains one typo in the docs, \"cotents\".\n>\n> I'm not a fan of the fact that some CLOG sizing macros moved to clog.h,\n> leaving others in clog.c. Maybe add commentary cross-linking both.\n> Alternatively, perhaps allowing xact_buffers to grow beyond 65536 up to\n> the slru.h-defined limit of 131072 is not that bad, even if it's more\n> than could possibly be needed for xact_buffers; nobody is going to use\n> 64k buffers, since useful values are below a couple thousand anyhow.\n\nI agree, that allowing xact_buffers to grow beyond 65536 up to the\nslru.h-defined limit of 131072 is not that bad, so I will change that\nin the next version.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 17 Nov 2023 13:09:03 +0530", "msg_from": "Dilip Kumar <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SLRU optimization - configurable buffer pool and partitioning the\n SLRU lock" }, { "msg_contents": "On Fri, Nov 17, 2023 at 1:09 PM Dilip Kumar <[email protected]> wrote:\n>\n> On Thu, Nov 16, 2023 at 3:11 PM Alvaro Herrera <[email protected]> wrote:\n\nPFA, updated patch version, this fixes the comment given by Alvaro and\nalso improves some of the comments.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Fri, 17 Nov 2023 16:41:24 +0530", "msg_from": "Dilip Kumar <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SLRU optimization - configurable buffer pool and partitioning the\n SLRU lock" }, { "msg_contents": "In SlruSharedData, a new comment is added that starts:\n \"Instead of global counter we maintain a bank-wise lru counter because ...\" \nYou don't need to explain what we don't do. Just explain what we do do.\nSo remove the words \"Instead of a global counter\" from there, because\nthey offer no wisdom. Same with the phrase \"so there is no point to ...\".\nI think \"The oldest page is therefore\" should say \"The oldest page *in\nthe bank* is therefore\", for extra clarity.\n\nI wonder what's the deal with false sharing in the new\nbank_cur_lru_count array. Maybe instead of using LWLockPadded for\nbank_locks, we should have a new struct, with both the LWLock and the\nLRU counter; then pad *that* to the cacheline size. This way, both the\nlwlock and the counter come to the CPU running this code together.\n\nLooking at SlruRecentlyUsed, which was a macro and is now a function.\nThe comment about \"multiple evaluation of arguments\" no longer applies,\nso it needs to be removed; and it shouldn't talk about itself being a\nmacro.\n\nUsing \"Size\" as type for bank_mask looks odd. For a bitmask, maybe it's\nbe more appropriate to use bits64 if we do need a 64-bit mask (we don't\nhave bits64, but it's easy to add a typedef). I bet we don't really\nneed a 64 bit mask, and a 32-bit or even a 16-bit is sufficient, given\nthe other limitations on number of buffers.\n\nI think SimpleLruReadPage should have this assert at start:\n\n+ Assert(LWLockHeldByMe(SimpleLruGetSLRUBankLock(ctl, pageno)));\n\nDo we really need one separate lwlock tranche for each SLRU?\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"Cuando mañana llegue pelearemos segun lo que mañana exija\" (Mowgli)\n\n\n", "msg_date": "Fri, 17 Nov 2023 13:46:29 +0100", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SLRU optimization - configurable buffer pool and partitioning\n the SLRU lock" }, { "msg_contents": "On 2023-Nov-17, Dilip Kumar wrote:\n\n> On Thu, Nov 16, 2023 at 3:11 PM Alvaro Herrera <[email protected]> wrote:\n> >\n> > I just noticed that 0003 does some changes to\n> > TransactionGroupUpdateXidStatus() that haven't been adequately\n> > explained AFAICS. How do you know that these changes are safe?\n> \n> IMHO this is safe as well as logical to do w.r.t. performance. It's\n> safe because whenever we are updating any page in a group we are\n> acquiring the respective bank lock in exclusive mode and in extreme\n> cases if there are pages from different banks then we do switch the\n> lock as well before updating the pages from different groups.\n\nLooking at the coverage for this code,\nhttps://coverage.postgresql.org/src/backend/access/transam/clog.c.gcov.html#413\nit seems in our test suites we never hit the case where there is\nanything in the \"nextidx\" field for commit groups. To be honest, I\ndon't understand this group stuff, and so I'm doubly hesitant to go\nwithout any testing here. Maybe it'd be possible to use Michael\nPaquier's injection points somehow?\n\n\nI think in the code comments where you use \"w.r.t.\", that acronym can be\nreplaced with \"for\", which improves readability.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"All rings of power are equal,\nBut some rings of power are more equal than others.\"\n (George Orwell's The Lord of the Rings)\n\n\n", "msg_date": "Fri, 17 Nov 2023 14:58:32 +0100", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SLRU optimization - configurable buffer pool and partitioning\n the SLRU lock" }, { "msg_contents": "On Fri, Nov 17, 2023 at 6:16 PM Alvaro Herrera <[email protected]> wrote:\n\nThanks for the review, all comments looks fine to me, replying to\nthose that need some clarification\n\n> I wonder what's the deal with false sharing in the new\n> bank_cur_lru_count array. Maybe instead of using LWLockPadded for\n> bank_locks, we should have a new struct, with both the LWLock and the\n> LRU counter; then pad *that* to the cacheline size. This way, both the\n> lwlock and the counter come to the CPU running this code together.\n\nActually, the array lengths of both LWLock and the LRU counter are\ndifferent so I don't think we can move them to a common structure.\nThe length of the *buffer_locks array is equal to the number of slots,\nthe length of the *bank_locks array is Min (number_of_banks, 128), and\nthe length of the *bank_cur_lru_count array is number_of_banks.\n\n> Looking at the coverage for this code,\n> https://coverage.postgresql.org/src/backend/access/transam/clog.c.gcov.html#413\n> it seems in our test suites we never hit the case where there is\n> anything in the \"nextidx\" field for commit groups. To be honest, I\n> don't understand this group stuff, and so I'm doubly hesitant to go\n> without any testing here. Maybe it'd be possible to use Michael\n> Paquier's injection points somehow?\n\nSorry, but I am not aware of \"Michael Paquier's injection\" Is it\nsomething already in the repo? Can you redirect me to some of the\nexample test cases if we already have them? Then I will try it out.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sat, 18 Nov 2023 15:58:58 +0530", "msg_from": "Dilip Kumar <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SLRU optimization - configurable buffer pool and partitioning the\n SLRU lock" }, { "msg_contents": "On 2023-Nov-18, Dilip Kumar wrote:\n\n> On Fri, Nov 17, 2023 at 6:16 PM Alvaro Herrera <[email protected]> wrote:\n\n> > I wonder what's the deal with false sharing in the new\n> > bank_cur_lru_count array. Maybe instead of using LWLockPadded for\n> > bank_locks, we should have a new struct, with both the LWLock and the\n> > LRU counter; then pad *that* to the cacheline size. This way, both the\n> > lwlock and the counter come to the CPU running this code together.\n> \n> Actually, the array lengths of both LWLock and the LRU counter are\n> different so I don't think we can move them to a common structure.\n> The length of the *buffer_locks array is equal to the number of slots,\n> the length of the *bank_locks array is Min (number_of_banks, 128), and\n> the length of the *bank_cur_lru_count array is number_of_banks.\n\nOh.\n\n> > Looking at the coverage for this code,\n> > https://coverage.postgresql.org/src/backend/access/transam/clog.c.gcov.html#413\n> > it seems in our test suites we never hit the case where there is\n> > anything in the \"nextidx\" field for commit groups. To be honest, I\n> > don't understand this group stuff, and so I'm doubly hesitant to go\n> > without any testing here. Maybe it'd be possible to use Michael\n> > Paquier's injection points somehow?\n> \n> Sorry, but I am not aware of \"Michael Paquier's injection\" Is it\n> something already in the repo? Can you redirect me to some of the\n> example test cases if we already have them? Then I will try it out.\n\nhttps://postgr.es/[email protected]\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"Sallah, I said NO camels! That's FIVE camels; can't you count?\"\n(Indiana Jones)\n\n\n", "msg_date": "Sat, 18 Nov 2023 20:00:04 +0100", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SLRU optimization - configurable buffer pool and partitioning\n the SLRU lock" }, { "msg_contents": "\n\n> On 17 Nov 2023, at 16:11, Dilip Kumar <[email protected]> wrote:\n> \n> On Fri, Nov 17, 2023 at 1:09 PM Dilip Kumar <[email protected]> wrote:\n>> \n>> On Thu, Nov 16, 2023 at 3:11 PM Alvaro Herrera <[email protected]> wrote:\n> \n> PFA, updated patch version, this fixes the comment given by Alvaro and\n> also improves some of the comments.\n\nI’ve skimmed through the patch set. Here are some minor notes.\n\n1. Cycles “for (slotno = bankstart; slotno < bankend; slotno++)” in SlruSelectLRUPage() and SimpleLruReadPage_ReadOnly() now have identical comments. I think a little of copy-paste is OK.\nBut SimpleLruReadPage_ReadOnly() does pgstat_count_slru_page_hit(), while SlruSelectLRUPage() does not. This is not related to the patch set, just a code nearby.\n\n2. Do we really want these functions doing all the same?\nextern bool check_multixact_offsets_buffers(int *newval, void **extra,GucSource source);\nextern bool check_multixact_members_buffers(int *newval, void **extra,GucSource source);\nextern bool check_subtrans_buffers(int *newval, void **extra,GucSource source);\nextern bool check_notify_buffers(int *newval, void **extra, GucSource source);\nextern bool check_serial_buffers(int *newval, void **extra, GucSource source);\nextern bool check_xact_buffers(int *newval, void **extra, GucSource source);\nextern bool check_commit_ts_buffers(int *newval, void **extra,GucSource source);\n\n3. The name SimpleLruGetSLRUBankLock() contains meaning of SLRU twice. I’d suggest truncating prefix of infix.\n\nI do not have hard opinion on any of this items.\n\n\nBest regards, Andrey Borodin.\n\n", "msg_date": "Sun, 19 Nov 2023 12:09:18 +0500", "msg_from": "\"Andrey M. Borodin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SLRU optimization - configurable buffer pool and partitioning the\n SLRU lock" }, { "msg_contents": "On Sun, Nov 19, 2023 at 12:39 PM Andrey M. Borodin <[email protected]> wrote:\n>\n> I’ve skimmed through the patch set. Here are some minor notes.\n\nThanks for the review\n>\n> 1. Cycles “for (slotno = bankstart; slotno < bankend; slotno++)” in SlruSelectLRUPage() and SimpleLruReadPage_ReadOnly() now have identical comments. I think a little of copy-paste is OK.\n> But SimpleLruReadPage_ReadOnly() does pgstat_count_slru_page_hit(), while SlruSelectLRUPage() does not. This is not related to the patch set, just a code nearby.\n\nDo you mean to say we need to modify the comments or you are saying\npgstat_count_slru_page_hit() is missing in SlruSelectLRUPage(), if it\nis later then I can see the caller of SlruSelectLRUPage() is calling\npgstat_count_slru_page_hit() and the SlruRecentlyUsed().\n\n> 2. Do we really want these functions doing all the same?\n> extern bool check_multixact_offsets_buffers(int *newval, void **extra,GucSource source);\n> extern bool check_multixact_members_buffers(int *newval, void **extra,GucSource source);\n> extern bool check_subtrans_buffers(int *newval, void **extra,GucSource source);\n> extern bool check_notify_buffers(int *newval, void **extra, GucSource source);\n> extern bool check_serial_buffers(int *newval, void **extra, GucSource source);\n> extern bool check_xact_buffers(int *newval, void **extra, GucSource source);\n> extern bool check_commit_ts_buffers(int *newval, void **extra,GucSource source);\n\nI tried duplicating these by doing all the work inside the\ncheck_slru_buffers() function. But I think it is hard to make them a\nsingle function because there is no option to pass an SLRU name in the\nGUC check hook and IMHO in the check hook we need to print the GUC\nname, any suggestions on how we can avoid having so many functions?\n\n> 3. The name SimpleLruGetSLRUBankLock() contains meaning of SLRU twice. I’d suggest truncating prefix of infix.\n>\n> I do not have hard opinion on any of this items.\n>\n\nI prefer SimpleLruGetBankLock() so that it is consistent with other\nexternal functions starting with \"SimpleLruGet\", are you fine with\nthis name?\n\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 20 Nov 2023 10:02:47 +0530", "msg_from": "Dilip Kumar <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SLRU optimization - configurable buffer pool and partitioning the\n SLRU lock" }, { "msg_contents": "On Fri, Nov 17, 2023 at 7:28 PM Alvaro Herrera <[email protected]> wrote:\n>\n> On 2023-Nov-17, Dilip Kumar wrote:\n\nI think I need some more clarification for some of the review comments\n\n> > On Thu, Nov 16, 2023 at 3:11 PM Alvaro Herrera <[email protected]> wrote:\n> > >\n> > > I just noticed that 0003 does some changes to\n> > > TransactionGroupUpdateXidStatus() that haven't been adequately\n> > > explained AFAICS. How do you know that these changes are safe?\n> >\n> > IMHO this is safe as well as logical to do w.r.t. performance. It's\n> > safe because whenever we are updating any page in a group we are\n> > acquiring the respective bank lock in exclusive mode and in extreme\n> > cases if there are pages from different banks then we do switch the\n> > lock as well before updating the pages from different groups.\n>\n> Looking at the coverage for this code,\n> https://coverage.postgresql.org/src/backend/access/transam/clog.c.gcov.html#413\n> it seems in our test suites we never hit the case where there is\n> anything in the \"nextidx\" field for commit groups.\n\n1)\nI was looking into your coverage report and I have attached a\nscreenshot from the same, it seems we do hit the block where nextidx\nis not INVALID_PGPROCNO, which means there is some other process other\nthan the group leader. Although I have already started exploring the\ninjection point but just wanted to be sure what is your main concern\npoint about the coverage so though of checking that first.\n\n470 : /*\n 471 : * If the list was not empty, the leader\nwill update the status of our\n 472 : * XID. It is impossible to have followers\nwithout a leader because the\n 473 : * first process that has added itself to\nthe list will always have\n 474 : * nextidx as INVALID_PGPROCNO.\n 475 : */\n 476 98 : if (nextidx != INVALID_PGPROCNO)\n 477 : {\n 478 2 : int extraWaits = 0;\n 479 :\n 480 : /* Sleep until the leader updates our\nXID status. */\n 481 2 :\npgstat_report_wait_start(WAIT_EVENT_XACT_GROUP_UPDATE);\n 482 : for (;;)\n 483 : {\n 484 : /* acts as a read barrier */\n 485 2 : PGSemaphoreLock(proc->sem);\n 486 2 : if (!proc->clogGroupMember)\n 487 2 : break;\n 488 0 : extraWaits++;\n 489 : }\n\n2) Do we really need one separate lwlock tranche for each SLRU?\n\nIMHO if we use the same lwlock tranche then the wait event will show\nthe same wait event name, right? And that would be confusing for the\nuser, whether we are waiting for Subtransaction or Multixact or\nanything else. Is my understanding no correct here?\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Mon, 20 Nov 2023 14:21:47 +0530", "msg_from": "Dilip Kumar <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SLRU optimization - configurable buffer pool and partitioning the\n SLRU lock" }, { "msg_contents": "\n\n> On 20 Nov 2023, at 13:51, Dilip Kumar <[email protected]> wrote:\n> \n> 2) Do we really need one separate lwlock tranche for each SLRU?\n> \n> IMHO if we use the same lwlock tranche then the wait event will show\n> the same wait event name, right? And that would be confusing for the\n> user, whether we are waiting for Subtransaction or Multixact or\n> anything else. Is my understanding no correct here?\n\nIf we give to a user multiple GUCs to tweak, I think we should give a way to understand which GUC to tweak when they observe wait times.\n\nBest regards, Andrey Borodin.\n\n", "msg_date": "Mon, 20 Nov 2023 14:07:18 +0500", "msg_from": "\"Andrey M. Borodin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SLRU optimization - configurable buffer pool and partitioning the\n SLRU lock" }, { "msg_contents": "On Mon, Nov 20, 2023 at 2:37 PM Andrey M. Borodin <[email protected]> wrote:\n\n> > On 20 Nov 2023, at 13:51, Dilip Kumar <[email protected]> wrote:\n> >\n> > 2) Do we really need one separate lwlock tranche for each SLRU?\n> >\n> > IMHO if we use the same lwlock tranche then the wait event will show\n> > the same wait event name, right? And that would be confusing for the\n> > user, whether we are waiting for Subtransaction or Multixact or\n> > anything else. Is my understanding no correct here?\n>\n> If we give to a user multiple GUCs to tweak, I think we should give a way to understand which GUC to tweak when they observe wait times.\n\n+1\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 20 Nov 2023 16:42:43 +0530", "msg_from": "Dilip Kumar <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SLRU optimization - configurable buffer pool and partitioning the\n SLRU lock" }, { "msg_contents": "On Mon, Nov 20, 2023 at 4:42 PM Dilip Kumar <[email protected]> wrote:\n>\n> On Mon, Nov 20, 2023 at 2:37 PM Andrey M. Borodin <[email protected]> wrote:\n>\n> > > On 20 Nov 2023, at 13:51, Dilip Kumar <[email protected]> wrote:\n> > >\n> > > 2) Do we really need one separate lwlock tranche for each SLRU?\n> > >\n> > > IMHO if we use the same lwlock tranche then the wait event will show\n> > > the same wait event name, right? And that would be confusing for the\n> > > user, whether we are waiting for Subtransaction or Multixact or\n> > > anything else. Is my understanding no correct here?\n> >\n> > If we give to a user multiple GUCs to tweak, I think we should give a way to understand which GUC to tweak when they observe wait times.\n\nPFA, updated patch set, I have worked on review comments by Alvaro and\nAndrey. So the only open comments are about clog group commit\ntesting, for that my question was as I sent in the previous email\nexactly what part we are worried about in the coverage report?\n\nThe second point is, if we want to generate a group update we will\nhave to create the injection point after we hold the control lock so\nthat other processes go for group update and then for waking up the\nwaiting process who is holding the SLRU control lock in the exclusive\nmode we would need to call a function ('test_injection_points_wake()')\nto wake that up and for calling the function we would need to again\nacquire the SLRU lock in read mode for visibility check in the catalog\nfor fetching the procedure row and now this wake up session will block\non control lock for the session which is waiting on injection point so\nnow it will create a deadlock. Maybe with bank-wise lock we can\ncreate a lot of transaction so that these 2 falls in different banks\nand then we can somehow test this, but then we will have to generate\n16 * 4096 = 64k transaction so that the SLRU banks are different for\nthe transaction which inserted procedure row in system table from the\ntransaction in which we are trying to do the group commit\n\n--\nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Tue, 21 Nov 2023 14:03:46 +0530", "msg_from": "Dilip Kumar <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SLRU optimization - configurable buffer pool and partitioning the\n SLRU lock" }, { "msg_contents": "On Tue, Nov 21, 2023 at 2:03 PM Dilip Kumar <[email protected]> wrote:\n>\n> On Mon, Nov 20, 2023 at 4:42 PM Dilip Kumar <[email protected]> wrote:\n> >\n> > On Mon, Nov 20, 2023 at 2:37 PM Andrey M. Borodin <[email protected]> wrote:\n> >\n> > > > On 20 Nov 2023, at 13:51, Dilip Kumar <[email protected]> wrote:\n> > > >\n> > > > 2) Do we really need one separate lwlock tranche for each SLRU?\n> > > >\n> > > > IMHO if we use the same lwlock tranche then the wait event will show\n> > > > the same wait event name, right? And that would be confusing for the\n> > > > user, whether we are waiting for Subtransaction or Multixact or\n> > > > anything else. Is my understanding no correct here?\n> > >\n> > > If we give to a user multiple GUCs to tweak, I think we should give a way to understand which GUC to tweak when they observe wait times.\n>\n> PFA, updated patch set, I have worked on review comments by Alvaro and\n> Andrey. So the only open comments are about clog group commit\n> testing, for that my question was as I sent in the previous email\n> exactly what part we are worried about in the coverage report?\n>\n> The second point is, if we want to generate a group update we will\n> have to create the injection point after we hold the control lock so\n> that other processes go for group update and then for waking up the\n> waiting process who is holding the SLRU control lock in the exclusive\n> mode we would need to call a function ('test_injection_points_wake()')\n> to wake that up and for calling the function we would need to again\n> acquire the SLRU lock in read mode for visibility check in the catalog\n> for fetching the procedure row and now this wake up session will block\n> on control lock for the session which is waiting on injection point so\n> now it will create a deadlock. Maybe with bank-wise lock we can\n> create a lot of transaction so that these 2 falls in different banks\n> and then we can somehow test this, but then we will have to generate\n> 16 * 4096 = 64k transaction so that the SLRU banks are different for\n> the transaction which inserted procedure row in system table from the\n> transaction in which we are trying to do the group commit\n\nI have attached a POC patch for testing the group update using the\ninjection point framework. This is just for testing the group update\npart and is not yet a committable test. I have added a bunch of logs\nin the code so that we can see what's going on with the group update.\n From the below logs, we can see that multiple processes are getting\naccumulated for the group update and the leader is updating their xid\nstatus.\n\n\nNote: With this testing, we have found a bug in the bank-wise\napproach, basically we are clearing a procglobal->clogGroupFirst, even\nbefore acquiring the bank lock that means in most of the cases there\nwill be a single process in each group as a group leader (I think this\nis what Alvaro was pointing in his coverage report). I have added\nthis fix in this POC just for testing purposes but in my next version\nI will add this fix to my proper patch version after a proper review\nand a bit more testing.\n\n\nhere is the output after running the test\n==============\n2023-11-23 05:55:29.399 UTC [93367] 003_clog_group_commit.pl LOG:\nprocno 6 got the lock\n2023-11-23 05:55:29.399 UTC [93367] 003_clog_group_commit.pl\nSTATEMENT: SELECT txid_current();\n2023-11-23 05:55:29.406 UTC [93369] 003_clog_group_commit.pl LOG:\nstatement: SELECT test_injection_points_attach('ClogGroupCommit',\n'wait');\n2023-11-23 05:55:29.415 UTC [93371] 003_clog_group_commit.pl LOG:\nstatement: INSERT INTO test VALUES(1);\n2023-11-23 05:55:29.416 UTC [93371] 003_clog_group_commit.pl LOG:\nprocno 4 got the lock\n2023-11-23 05:55:29.416 UTC [93371] 003_clog_group_commit.pl\nSTATEMENT: INSERT INTO test VALUES(1);\n2023-11-23 05:55:29.424 UTC [93373] 003_clog_group_commit.pl LOG:\nstatement: INSERT INTO test VALUES(2);\n2023-11-23 05:55:29.425 UTC [93373] 003_clog_group_commit.pl LOG:\nprocno 3 for xid 128742 added for group update\n2023-11-23 05:55:29.425 UTC [93373] 003_clog_group_commit.pl\nSTATEMENT: INSERT INTO test VALUES(2);\n2023-11-23 05:55:29.431 UTC [93376] 003_clog_group_commit.pl LOG:\nstatement: INSERT INTO test VALUES(3);\n2023-11-23 05:55:29.438 UTC [93378] 003_clog_group_commit.pl LOG:\nstatement: INSERT INTO test VALUES(4);\n2023-11-23 05:55:29.438 UTC [93376] 003_clog_group_commit.pl LOG:\nprocno 2 for xid 128743 added for group update\n2023-11-23 05:55:29.438 UTC [93376] 003_clog_group_commit.pl\nSTATEMENT: INSERT INTO test VALUES(3);\n2023-11-23 05:55:29.438 UTC [93376] 003_clog_group_commit.pl LOG:\nprocno 2 is follower and wait for group leader to update commit status\nof xid 128743\n2023-11-23 05:55:29.438 UTC [93376] 003_clog_group_commit.pl\nSTATEMENT: INSERT INTO test VALUES(3);\n2023-11-23 05:55:29.439 UTC [93378] 003_clog_group_commit.pl LOG:\nprocno 1 for xid 128744 added for group update\n2023-11-23 05:55:29.439 UTC [93378] 003_clog_group_commit.pl\nSTATEMENT: INSERT INTO test VALUES(4);\n2023-11-23 05:55:29.439 UTC [93378] 003_clog_group_commit.pl LOG:\nprocno 1 is follower and wait for group leader to update commit status\nof xid 128744\n2023-11-23 05:55:29.439 UTC [93378] 003_clog_group_commit.pl\nSTATEMENT: INSERT INTO test VALUES(4);\n2023-11-23 05:55:29.445 UTC [93380] 003_clog_group_commit.pl LOG:\nstatement: INSERT INTO test VALUES(5);\n2023-11-23 05:55:29.446 UTC [93380] 003_clog_group_commit.pl LOG:\nprocno 0 for xid 128745 added for group update\n2023-11-23 05:55:29.446 UTC [93380] 003_clog_group_commit.pl\nSTATEMENT: INSERT INTO test VALUES(5);\n2023-11-23 05:55:29.446 UTC [93380] 003_clog_group_commit.pl LOG:\nprocno 0 is follower and wait for group leader to update commit status\nof xid 128745\n2023-11-23 05:55:29.446 UTC [93380] 003_clog_group_commit.pl\nSTATEMENT: INSERT INTO test VALUES(5);\n2023-11-23 05:55:29.451 UTC [93382] 003_clog_group_commit.pl LOG:\nstatement: SELECT test_injection_points_wake();\n2023-11-23 05:55:29.460 UTC [93384] 003_clog_group_commit.pl LOG:\nstatement: SELECT test_injection_points_detach('ClogGroupCommit');\n\n=============\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Thu, 23 Nov 2023 11:34:15 +0530", "msg_from": "Dilip Kumar <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SLRU optimization - configurable buffer pool and partitioning the\n SLRU lock" }, { "msg_contents": "On Thu, Nov 23, 2023 at 11:34 AM Dilip Kumar <[email protected]> wrote:\n>\n> Note: With this testing, we have found a bug in the bank-wise\n> approach, basically we are clearing a procglobal->clogGroupFirst, even\n> before acquiring the bank lock that means in most of the cases there\n> will be a single process in each group as a group leader\n\nI realized that the bug fix I have done is not proper, so will send\nthe updated patch set with the proper fix soon.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 24 Nov 2023 10:17:54 +0530", "msg_from": "Dilip Kumar <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SLRU optimization - configurable buffer pool and partitioning the\n SLRU lock" }, { "msg_contents": "On Fri, Nov 24, 2023 at 10:17 AM Dilip Kumar <[email protected]> wrote:\n>\n> On Thu, Nov 23, 2023 at 11:34 AM Dilip Kumar <[email protected]> wrote:\n> >\n> > Note: With this testing, we have found a bug in the bank-wise\n> > approach, basically we are clearing a procglobal->clogGroupFirst, even\n> > before acquiring the bank lock that means in most of the cases there\n> > will be a single process in each group as a group leader\n>\n> I realized that the bug fix I have done is not proper, so will send\n> the updated patch set with the proper fix soon.\n\nPFA, updated patch set fixes the bug found during the testing of the\ngroup update using the injection point. Also attached a path to test\nthe injection point but for that, we need to apply the injection point\npatches [1]\n\n[1] https://www.postgresql.org/message-id/ZWACtHPetBFIvP61%40paquier.xyz\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Fri, 24 Nov 2023 14:37:33 +0530", "msg_from": "Dilip Kumar <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SLRU optimization - configurable buffer pool and partitioning the\n SLRU lock" }, { "msg_contents": "The v8-0001 patch failed to apply in my local repo as below:\n\ngit apply v8-0001-Make-all-SLRU-buffer-sizes-configurable.patch\nerror: patch failed: src/backend/access/transam/multixact.c:1851\nerror: src/backend/access/transam/multixact.c: patch does not apply\nerror: patch failed: src/backend/access/transam/subtrans.c:184\nerror: src/backend/access/transam/subtrans.c: patch does not apply\nerror: patch failed: src/backend/commands/async.c:117\nerror: src/backend/commands/async.c: patch does not apply\nerror: patch failed: src/backend/storage/lmgr/predicate.c:808\nerror: src/backend/storage/lmgr/predicate.c: patch does not apply\nerror: patch failed: src/include/commands/async.h:15\nerror: src/include/commands/async.h: patch does not apply\n\nMy local head commit is 15c9ac36299. Is there something I missed?\n\nDilip Kumar <[email protected]> 于2023年11月24日周五 17:08写道:\n\n> On Fri, Nov 24, 2023 at 10:17 AM Dilip Kumar <[email protected]>\n> wrote:\n> >\n> > On Thu, Nov 23, 2023 at 11:34 AM Dilip Kumar <[email protected]>\n> wrote:\n> > >\n> > > Note: With this testing, we have found a bug in the bank-wise\n> > > approach, basically we are clearing a procglobal->clogGroupFirst, even\n> > > before acquiring the bank lock that means in most of the cases there\n> > > will be a single process in each group as a group leader\n> >\n> > I realized that the bug fix I have done is not proper, so will send\n> > the updated patch set with the proper fix soon.\n>\n> PFA, updated patch set fixes the bug found during the testing of the\n> group update using the injection point. Also attached a path to test\n> the injection point but for that, we need to apply the injection point\n> patches [1]\n>\n> [1] https://www.postgresql.org/message-id/ZWACtHPetBFIvP61%40paquier.xyz\n>\n> --\n> Regards,\n> Dilip Kumar\n> EnterpriseDB: http://www.enterprisedb.com\n>\n\nThe v8-0001 patch failed to apply in my local repo as below:git apply v8-0001-Make-all-SLRU-buffer-sizes-configurable.patcherror: patch failed: src/backend/access/transam/multixact.c:1851error: src/backend/access/transam/multixact.c: patch does not applyerror: patch failed: src/backend/access/transam/subtrans.c:184error: src/backend/access/transam/subtrans.c: patch does not applyerror: patch failed: src/backend/commands/async.c:117error: src/backend/commands/async.c: patch does not applyerror: patch failed: src/backend/storage/lmgr/predicate.c:808error: src/backend/storage/lmgr/predicate.c: patch does not applyerror: patch failed: src/include/commands/async.h:15error: src/include/commands/async.h: patch does not applyMy local head commit is 15c9ac36299.  Is there something I missed?Dilip Kumar <[email protected]> 于2023年11月24日周五 17:08写道:On Fri, Nov 24, 2023 at 10:17 AM Dilip Kumar <[email protected]> wrote:\n>\n> On Thu, Nov 23, 2023 at 11:34 AM Dilip Kumar <[email protected]> wrote:\n> >\n> > Note: With this testing, we have found a bug in the bank-wise\n> > approach, basically we are clearing a procglobal->clogGroupFirst, even\n> > before acquiring the bank lock that means in most of the cases there\n> > will be a single process in each group as a group leader\n>\n> I realized that the bug fix I have done is not proper, so will send\n> the updated patch set with the proper fix soon.\n\nPFA, updated patch set fixes the bug found during the testing of the\ngroup update using the injection point.  Also attached a path to test\nthe injection point but for that, we need to apply the injection point\npatches [1]\n\n[1] https://www.postgresql.org/message-id/ZWACtHPetBFIvP61%40paquier.xyz\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Wed, 29 Nov 2023 17:40:48 +0800", "msg_from": "tender wang <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SLRU optimization - configurable buffer pool and partitioning the\n SLRU lock" }, { "msg_contents": "On 2023-Nov-29, tender wang wrote:\n\n> The v8-0001 patch failed to apply in my local repo as below:\n> \n> git apply v8-0001-Make-all-SLRU-buffer-sizes-configurable.patch\n> error: patch failed: src/backend/access/transam/multixact.c:1851\n> error: src/backend/access/transam/multixact.c: patch does not apply\n> error: patch failed: src/backend/access/transam/subtrans.c:184\n> error: src/backend/access/transam/subtrans.c: patch does not apply\n> error: patch failed: src/backend/commands/async.c:117\n> error: src/backend/commands/async.c: patch does not apply\n> error: patch failed: src/backend/storage/lmgr/predicate.c:808\n> error: src/backend/storage/lmgr/predicate.c: patch does not apply\n> error: patch failed: src/include/commands/async.h:15\n> error: src/include/commands/async.h: patch does not apply\n\nYeah, this patch series conflicts with today's commit 4ed8f0913bfd.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\nSyntax error: function hell() needs an argument.\nPlease choose what hell you want to involve.\n\n\n", "msg_date": "Wed, 29 Nov 2023 10:59:05 +0100", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SLRU optimization - configurable buffer pool and partitioning\n the SLRU lock" }, { "msg_contents": "On Wed, Nov 29, 2023 at 3:29 PM Alvaro Herrera <[email protected]> wrote:\n>\n> On 2023-Nov-29, tender wang wrote:\n>\n> > The v8-0001 patch failed to apply in my local repo as below:\n> >\n> > git apply v8-0001-Make-all-SLRU-buffer-sizes-configurable.patch\n> > error: patch failed: src/backend/access/transam/multixact.c:1851\n> > error: src/backend/access/transam/multixact.c: patch does not apply\n> > error: patch failed: src/backend/access/transam/subtrans.c:184\n> > error: src/backend/access/transam/subtrans.c: patch does not apply\n> > error: patch failed: src/backend/commands/async.c:117\n> > error: src/backend/commands/async.c: patch does not apply\n> > error: patch failed: src/backend/storage/lmgr/predicate.c:808\n> > error: src/backend/storage/lmgr/predicate.c: patch does not apply\n> > error: patch failed: src/include/commands/async.h:15\n> > error: src/include/commands/async.h: patch does not apply\n>\n> Yeah, this patch series conflicts with today's commit 4ed8f0913bfd.\n\nI will send a rebased version by tomorrow.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 29 Nov 2023 16:58:07 +0530", "msg_from": "Dilip Kumar <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SLRU optimization - configurable buffer pool and partitioning the\n SLRU lock" }, { "msg_contents": "On Wed, Nov 29, 2023 at 4:58 PM Dilip Kumar <[email protected]> wrote:\n>\n> On Wed, Nov 29, 2023 at 3:29 PM Alvaro Herrera <[email protected]> wrote:\n> >\n> > On 2023-Nov-29, tender wang wrote:\n> >\n> > > The v8-0001 patch failed to apply in my local repo as below:\n> > >\n> > > git apply v8-0001-Make-all-SLRU-buffer-sizes-configurable.patch\n> > > error: patch failed: src/backend/access/transam/multixact.c:1851\n> > > error: src/backend/access/transam/multixact.c: patch does not apply\n> > > error: patch failed: src/backend/access/transam/subtrans.c:184\n> > > error: src/backend/access/transam/subtrans.c: patch does not apply\n> > > error: patch failed: src/backend/commands/async.c:117\n> > > error: src/backend/commands/async.c: patch does not apply\n> > > error: patch failed: src/backend/storage/lmgr/predicate.c:808\n> > > error: src/backend/storage/lmgr/predicate.c: patch does not apply\n> > > error: patch failed: src/include/commands/async.h:15\n> > > error: src/include/commands/async.h: patch does not apply\n> >\n> > Yeah, this patch series conflicts with today's commit 4ed8f0913bfd.\n>\n> I will send a rebased version by tomorrow.\n\nPFA, a rebased version of the patch, I have avoided attaching because\na) that patch is POC to show the coverage and it has a dependency on\nthe other thread b) the old patch still applies so it doesn't need\nrebase.\n\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Thu, 30 Nov 2023 15:30:15 +0530", "msg_from": "Dilip Kumar <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SLRU optimization - configurable buffer pool and partitioning the\n SLRU lock" }, { "msg_contents": "On Thu, Nov 30, 2023 at 3:30 PM Dilip Kumar <[email protected]> wrote:\n>\n> On Wed, Nov 29, 2023 at 4:58 PM Dilip Kumar <[email protected]> wrote:\n\nHere is the updated patch based on some comments by tender wang (those\ncomments were sent only to me)\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Mon, 11 Dec 2023 10:41:47 +0530", "msg_from": "Dilip Kumar <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SLRU optimization - configurable buffer pool and partitioning the\n SLRU lock" }, { "msg_contents": "[Added Andrey again in CC, because as I understand they are using this\ncode or something like it in production. Please don't randomly remove\npeople from CC lists.]\n\nI've been looking at this some more, and I'm not confident in that the\ngroup clog update stuff is correct. I think the injection points test\ncase was good enough to discover a problem, but it's hard to get peace\nof mind that there aren't other, more subtle problems.\n\nThe problem I see is that the group update mechanism is designed around\ncontention of the global xact-SLRU control lock; it uses atomics to\ncoordinate a single queue when the lock is contended. So if we split up\nthe global SLRU control lock using banks, then multiple processes using\ndifferent bank locks might not contend. OK, this is fine, but what\nhappens if two separate groups of processes encounter contention on two\ndifferent bank locks? I think they will both try to update the same\nqueue, and coordinate access to that *using different bank locks*. I\ndon't see how can this work correctly.\n\nI suspect the first part of that algorithm, where atomics are used to\ncreate the list without a lock, might work fine. But will each \"leader\"\nprocess, each of which is potentially using a different bank lock,\ncoordinate correctly? Maybe this works correctly because only one\nprocess will find the queue head not empty? If this is what happens,\nthen there needs to be comments about it. Without any explanation,\nthis seems broken and potentially dangerous, as some transaction commit\nbits might become lost given high enough concurrency and bad luck.\n\nMaybe this can be made to work by having one more lwlock that we use\nsolely to coordinate this task. Though we would have to demonstrate\nthat coordinating this task with a different lock works correctly in\nconjunction with the per-bank lwlock usage in the regular slru.c paths.\n\n\nAndrey, do you have any stress tests or anything else that you used to\ngain confidence in this code?\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"El sabio habla porque tiene algo que decir;\nel tonto, porque tiene que decir algo\" (Platon).\n\n\n", "msg_date": "Tue, 12 Dec 2023 14:28:51 +0100", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SLRU optimization - configurable buffer pool and partitioning\n the SLRU lock" }, { "msg_contents": "On Tue, Dec 12, 2023 at 6:58 PM Alvaro Herrera <[email protected]> wrote:\n>\n> [Added Andrey again in CC, because as I understand they are using this\n> code or something like it in production. Please don't randomly remove\n> people from CC lists.]\n\nOh, glad to know that. Yeah, I generally do not remove but I have\nnoticed that in the mail chain, some of the reviewers just replied to\nme and the hackers' list, and from that point onwards I lost track of\nthe CC list.\n\n> I've been looking at this some more, and I'm not confident in that the\n> group clog update stuff is correct. I think the injection points test\n> case was good enough to discover a problem, but it's hard to get peace\n> of mind that there aren't other, more subtle problems.\n\nYeah, I agree.\n\n> The problem I see is that the group update mechanism is designed around\n> contention of the global xact-SLRU control lock; it uses atomics to\n> coordinate a single queue when the lock is contended. So if we split up\n> the global SLRU control lock using banks, then multiple processes using\n> different bank locks might not contend. OK, this is fine, but what\n> happens if two separate groups of processes encounter contention on two\n> different bank locks? I think they will both try to update the same\n> queue, and coordinate access to that *using different bank locks*. I\n> don't see how can this work correctly.\n\nLet's back up a bit and start from the current design with the\ncentralized lock. With that, if one process is holding the lock the\nother processes will try to perform the group update, and if there is\nalready a group that still hasn't got the lock but trying to update\nthe different CLOG page then what this process wants to update then it\nwill not add itself for the group update instead it will fallback to\nthe normal lock wait. Now, in another situation, it may so happen\nthat the group leader of the other group already got the control lock\nand in such case, it would have cleared the\n'procglobal->clogGroupFirst' that means now we will start forming a\ndifferent group. So logically if we talk only about the optimization\npart then the thing is that it is assumed that at a time when we are\ntrying to commit a log of concurrent xid then those xids are mostly of\nthe same range and will fall in the same SLRU page and group update\nwill help them. But if we are getting some out-of-range xid of some\nlong-running transaction they might not even go for group update as\nthe page number will be different. Although the situation might be\nbetter here with a bank-wise lock because there if those xids are\nfalling in altogether a different bank it might not even contend.\n\nNow, let's talk about the correctness, I think even though we are\ngetting processes that might be contending on different bank-lock,\nstill we are ensuring that in a group all the processes are trying to\nupdate the same SLRU page (i.e. same bank also, we will talk about the\nexception later[1]). One of the processes is becoming a leader and as\nsoon as the leader gets the lock it detaches the queue from the\n'procglobal->clogGroupFirst' by setting it as INVALID_PGPROCNO so that\nother group update requesters now can form another parallel group.\nBut here I do not see a problem with correctness.\n\nI agree someone might say that since now there is a possibility that\ndifferent groups might get formed for different bank locks we do not\nget other groups to get formed until we get the lock for our bank as\nwe do not clear 'procglobal->clogGroupFirst' before we get the lock.\nOther requesters might want to update the page in different banks so\nwhy block them? But the thing is the group update design is optimized\nfor the cases where all requesters are trying to update the status of\nxids generated near the same range.\n\n\n> I suspect the first part of that algorithm, where atomics are used to\n> create the list without a lock, might work fine. But will each \"leader\"\n> process, each of which is potentially using a different bank lock,\n> coordinate correctly? Maybe this works correctly because only one\n> process will find the queue head not empty? If this is what happens,\n> then there needs to be comments about it.\n\nYes, you got it right, I will try to comment on it better.\n\n Without any explanation,\n> this seems broken and potentially dangerous, as some transaction commit\n> bits might become lost given high enough concurrency and bad luck.\n>\n> Maybe this can be made to work by having one more lwlock that we use\n> solely to coordinate this task.\n\nDo you mean to say a different lock for adding/removing in the list\ninstead of atomic operation? I think then we will lose the benefit we\ngot in the group update by having contention on another lock.\n\n[1] I think we already know about the exception case and I have\nexplained in the comments as well that in some cases we might add\ndifferent clog page update requests in the same group, and for\nhandling that exceptional case we are checking the respective bank\nlock for each page and if that exception occurred we will release the\nold bank lock and acquire a new lock. This case might not be\nperformant because now it is possible that after getting the lock\nleader might need to wait again on another bank lock, but this is an\nextremely exceptional case so should not be worried about performance\nand I do not see any correctness issue here as well.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 13 Dec 2023 10:26:23 +0530", "msg_from": "Dilip Kumar <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SLRU optimization - configurable buffer pool and partitioning the\n SLRU lock" }, { "msg_contents": "\n\n> On 12 Dec 2023, at 18:28, Alvaro Herrera <[email protected]> wrote:\n> \n> Andrey, do you have any stress tests or anything else that you used to\n> gain confidence in this code?\n\nWe are using only first two steps of the patchset, these steps do not touch locking stuff.\n\nWe’ve got some confidence after Yura Sokolov’s benchmarks [0]. Thanks!\n\n\nBest regards, Andrey Borodin.\n\n[0] https://www.postgresql.org/message-id/flat/e46cdea96979545b2d8a13b451d8b1ce61dc7238.camel%40postgrespro.ru#0ed2cad11470825d464093fe6b8ef6a3\n\n\n\n", "msg_date": "Wed, 13 Dec 2023 17:19:11 +0500", "msg_from": "\"Andrey M. Borodin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SLRU optimization - configurable buffer pool and partitioning the\n SLRU lock" }, { "msg_contents": "On Mon, Dec 11, 2023 at 10:42 AM Dilip Kumar <[email protected]> wrote:\n\n> On Thu, Nov 30, 2023 at 3:30 PM Dilip Kumar <[email protected]> wrote:\n> >\n> > On Wed, Nov 29, 2023 at 4:58 PM Dilip Kumar <[email protected]>\n> wrote:\n>\n> Here is the updated patch based on some comments by tender wang (those\n> comments were sent only to me)\n>\n\nfew nitpicks:\n\n+\n+ /*\n+ * Mask for slotno banks, considering 1GB SLRU buffer pool size and the\n+ * SLRU_BANK_SIZE bits16 should be sufficient for the bank mask.\n+ */\n+ bits16 bank_mask;\n } SlruCtlData;\n\n...\n...\n\n+ int bankno = pageno & ctl->bank_mask;\n\nI am a bit uncomfortable seeing it as a mask, why can't it be simply a\nnumber\nof banks (num_banks) and get the bank number through modulus op (pageno %\nnum_banks) instead of bitwise & operation (pageno & ctl->bank_mask) which\nis a\nbit difficult to read compared to modulus op which is quite simple,\nstraightforward and much common practice in hashing.\n\nAre there any advantages of using & over % ?\n\nAlso, a few places in 0002 and 0003 patch, need the bank number, it is\nbetter\nto have a macro for that.\n---\n\n extern bool SlruScanDirCbDeleteAll(SlruCtl ctl, char *filename, int64\nsegpage,\n void *data);\n-\n+extern bool check_slru_buffers(const char *name, int *newval);\n #endif /* SLRU_H */\n\n\nAdd an empty line after the declaration, in 0002 patch.\n---\n\n-TransactionIdSetStatusBit(TransactionId xid, XidStatus status, XLogRecPtr\nlsn, int slotno)\n+TransactionIdSetStatusBit(TransactionId xid, XidStatus status, XLogRecPtr\nlsn,\n+ int slotno)\n\nUnrelated change for 0003 patch.\n---\n\nRegards,\nAmul\n\nOn Mon, Dec 11, 2023 at 10:42 AM Dilip Kumar <[email protected]> wrote:On Thu, Nov 30, 2023 at 3:30 PM Dilip Kumar <[email protected]> wrote:\n>\n> On Wed, Nov 29, 2023 at 4:58 PM Dilip Kumar <[email protected]> wrote:\n\nHere is the updated patch based on some comments by tender wang (those\ncomments were sent only to me)few nitpicks:++   /*  +    * Mask for slotno banks, considering 1GB SLRU buffer pool size and the +    * SLRU_BANK_SIZE bits16 should be sufficient for the bank mask.+    */  +   bits16      bank_mask; } SlruCtlData;......+\t\tint\t\t\tbankno = pageno & ctl->bank_mask;I am a bit uncomfortable seeing it as a mask, why can't it be simply a numberof banks (num_banks) and get the bank number through modulus op (pageno %num_banks) instead of bitwise & operation (pageno & ctl->bank_mask) which is abit difficult to read compared to modulus op which is quite simple,straightforward and much common practice in hashing.Are there any advantages of using &  over % ?Also, a few places in 0002 and 0003 patch, need the bank number, it is betterto have a macro for that.--- extern bool SlruScanDirCbDeleteAll(SlruCtl ctl, char *filename, int64 segpage,                                   void *data);-+extern bool check_slru_buffers(const char *name, int *newval); #endif                         /* SLRU_H */Add an empty line after the declaration, in 0002 patch.----TransactionIdSetStatusBit(TransactionId xid, XidStatus status, XLogRecPtr lsn, int slotno)+TransactionIdSetStatusBit(TransactionId xid, XidStatus status, XLogRecPtr lsn,+                         int slotno)Unrelated change for 0003 patch. ---Regards,Amul", "msg_date": "Thu, 14 Dec 2023 08:42:46 +0530", "msg_from": "Amul Sul <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SLRU optimization - configurable buffer pool and partitioning the\n SLRU lock" }, { "msg_contents": "On Thu, Dec 14, 2023 at 8:43 AM Amul Sul <[email protected]> wrote:\n>\n> On Mon, Dec 11, 2023 at 10:42 AM Dilip Kumar <[email protected]> wrote:\n>>\n>> On Thu, Nov 30, 2023 at 3:30 PM Dilip Kumar <[email protected]> wrote:\n>> >\n>> > On Wed, Nov 29, 2023 at 4:58 PM Dilip Kumar <[email protected]> wrote:\n>>\n>> Here is the updated patch based on some comments by tender wang (those\n>> comments were sent only to me)\n>\n>\n> few nitpicks:\n>\n> +\n> + /*\n> + * Mask for slotno banks, considering 1GB SLRU buffer pool size and the\n> + * SLRU_BANK_SIZE bits16 should be sufficient for the bank mask.\n> + */\n> + bits16 bank_mask;\n> } SlruCtlData;\n>\n> ...\n> ...\n>\n> + int bankno = pageno & ctl->bank_mask;\n>\n> I am a bit uncomfortable seeing it as a mask, why can't it be simply a number\n> of banks (num_banks) and get the bank number through modulus op (pageno %\n> num_banks) instead of bitwise & operation (pageno & ctl->bank_mask) which is a\n> bit difficult to read compared to modulus op which is quite simple,\n> straightforward and much common practice in hashing.\n>\n> Are there any advantages of using & over % ?\n\nI am not sure either but since this change in 0002 is by Andrey, I\nwill let him comment on this before we change or take any decision.\n\n> Also, a few places in 0002 and 0003 patch, need the bank number, it is better\n> to have a macro for that.\n> ---\n>\n> extern bool SlruScanDirCbDeleteAll(SlruCtl ctl, char *filename, int64 segpage,\n> void *data);\n> -\n> +extern bool check_slru_buffers(const char *name, int *newval);\n> #endif /* SLRU_H */\n>\n>\n> Add an empty line after the declaration, in 0002 patch.\n> ---\n>\n> -TransactionIdSetStatusBit(TransactionId xid, XidStatus status, XLogRecPtr lsn, int slotno)\n> +TransactionIdSetStatusBit(TransactionId xid, XidStatus status, XLogRecPtr lsn,\n> + int slotno)\n>\n> Unrelated change for 0003 patch.\n\n Fixed\n\nThanks for your review, PFA updated version.\n\nI have added @Amit Kapila to the list to view his opinion about\nwhether anything can break in the clog group update with our changes\nof bank-wise SLRU lock.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Thu, 14 Dec 2023 13:53:12 +0530", "msg_from": "Dilip Kumar <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SLRU optimization - configurable buffer pool and partitioning the\n SLRU lock" }, { "msg_contents": "\n\n> On 14 Dec 2023, at 08:12, Amul Sul <[email protected]> wrote:\n> \n> \n> + int bankno = pageno & ctl->bank_mask;\n> \n> I am a bit uncomfortable seeing it as a mask, why can't it be simply a number\n> of banks (num_banks) and get the bank number through modulus op (pageno %\n> num_banks) instead of bitwise & operation (pageno & ctl->bank_mask) which is a\n> bit difficult to read compared to modulus op which is quite simple,\n> straightforward and much common practice in hashing.\n> \n> Are there any advantages of using & over % ?\n\nThe instruction AND is ~20 times faster than IDIV [0]. This is relatively hot function, worth sacrificing some readability to save ~ten nanoseconds on each check of a status of a transaction.\n\n\n[0] https://www.agner.org/optimize/instruction_tables.pdf\n\n\n\n", "msg_date": "Thu, 14 Dec 2023 14:01:52 +0500", "msg_from": "\"Andrey M. Borodin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SLRU optimization - configurable buffer pool and partitioning the\n SLRU lock" }, { "msg_contents": "Andrey M. Borodin <[email protected]> 于2023年12月14日周四 17:02写道:\n\n>\n>\n> > On 14 Dec 2023, at 08:12, Amul Sul <[email protected]> wrote:\n> >\n> >\n> > + int bankno = pageno & ctl->bank_mask;\n> >\n> > I am a bit uncomfortable seeing it as a mask, why can't it be simply a\n> number\n> > of banks (num_banks) and get the bank number through modulus op (pageno %\n> > num_banks) instead of bitwise & operation (pageno & ctl->bank_mask)\n> which is a\n> > bit difficult to read compared to modulus op which is quite simple,\n> > straightforward and much common practice in hashing.\n> >\n> > Are there any advantages of using & over % ?\n>\n\nuse Compiler Explorer[1] tool, '%' has more Assembly instructions than '&'\n.\nint GetBankno1(int pageno) {\n return pageno & 127;\n}\n\nint GetBankno2(int pageno) {\n return pageno % 127;\n}\nunder clang 13.0\nGetBankno1: # @GetBankno1\n push rbp\n mov rbp, rsp\n mov dword ptr [rbp - 4], edi\n mov eax, dword ptr [rbp - 4]\n and eax, 127\n pop rbp\n ret\nGetBankno2: # @GetBankno2\n push rbp\n mov rbp, rsp\n mov dword ptr [rbp - 4], edi\n mov eax, dword ptr [rbp - 4]\n mov ecx, 127\n cdq\n idiv ecx\n mov eax, edx\n pop rbp\n ret\nunder gcc 13.2\nGetBankno1:\n push rbp\n mov rbp, rsp\n mov DWORD PTR [rbp-4], edi\n mov eax, DWORD PTR [rbp-4]\n and eax, 127\n pop rbp\n ret\nGetBankno2:\n push rbp\n mov rbp, rsp\n mov DWORD PTR [rbp-4], edi\n mov eax, DWORD PTR [rbp-4]\n movsx rdx, eax\n imul rdx, rdx, -2130574327\n shr rdx, 32\n add edx, eax\n mov ecx, edx\n sar ecx, 6\n cdq\n sub ecx, edx\n mov edx, ecx\n sal edx, 7\n sub edx, ecx\n sub eax, edx\n mov ecx, eax\n mov eax, ecx\n pop rbp\n ret\n\n\n[1] https://godbolt.org/\n\nThe instruction AND is ~20 times faster than IDIV [0]. This is relatively\n> hot function, worth sacrificing some readability to save ~ten nanoseconds\n> on each check of a status of a transaction.\n>\n\n Now that AND is more faster, Can we replace the '% SLRU_MAX_BANKLOCKS'\noperation in SimpleLruGetBankLock() with '& 127' :\n SimpleLruGetBankLock()\n{\n int banklockno = (pageno & ctl->bank_mask) % SLRU_MAX_BANKLOCKS;\n\n use '&'\n return &(ctl->shared->bank_locks[banklockno].lock);\n}\nThoughts?\n\n>\n> [0] https://www.agner.org/optimize/instruction_tables.pdf\n>\n>\n\nAndrey M. Borodin <[email protected]> 于2023年12月14日周四 17:02写道:\n\n> On 14 Dec 2023, at 08:12, Amul Sul <[email protected]> wrote:\n> \n> \n> + int bankno = pageno & ctl->bank_mask;\n> \n> I am a bit uncomfortable seeing it as a mask, why can't it be simply a number\n> of banks (num_banks) and get the bank number through modulus op (pageno %\n> num_banks) instead of bitwise & operation (pageno & ctl->bank_mask) which is a\n> bit difficult to read compared to modulus op which is quite simple,\n> straightforward and much common practice in hashing.\n> \n> Are there any advantages of using &  over % ? use Compiler Explorer[1] tool, '%' has more Assembly instructions than '&' . int GetBankno1(int pageno) {    return pageno & 127;}int GetBankno2(int pageno) {    return pageno % 127;}\nunder clang 13.0\nGetBankno1:                             # @GetBankno1        push    rbp        mov     rbp, rsp        mov     dword ptr [rbp - 4], edi        mov     eax, dword ptr [rbp - 4]        and     eax, 127        pop     rbp        retGetBankno2:                             # @GetBankno2        push    rbp        mov     rbp, rsp        mov     dword ptr [rbp - 4], edi        mov     eax, dword ptr [rbp - 4]        mov     ecx, 127        cdq        idiv    ecx        mov     eax, edx        pop     rbp        ret\nunder gcc 13.2\nGetBankno1:        push    rbp        mov     rbp, rsp        mov     DWORD PTR [rbp-4], edi        mov     eax, DWORD PTR [rbp-4]        and     eax, 127        pop     rbp        retGetBankno2:        push    rbp        mov     rbp, rsp        mov     DWORD PTR [rbp-4], edi        mov     eax, DWORD PTR [rbp-4]        movsx   rdx, eax        imul    rdx, rdx, -2130574327        shr     rdx, 32        add     edx, eax        mov     ecx, edx        sar     ecx, 6        cdq        sub     ecx, edx        mov     edx, ecx        sal     edx, 7        sub     edx, ecx        sub     eax, edx        mov     ecx, eax        mov     eax, ecx        pop     rbp        ret\n[1] https://godbolt.org/\nThe instruction AND is ~20 times faster than IDIV [0]. This is relatively hot function, worth sacrificing some readability to save ~ten nanoseconds on each check of a status of a transaction.  Now that AND is more faster, Can we  replace the '% SLRU_MAX_BANKLOCKS' operation in  SimpleLruGetBankLock()  with '& 127' :  SimpleLruGetBankLock()  {       int\t\tbanklockno = (pageno & ctl->bank_mask) % SLRU_MAX_BANKLOCKS;                                                                               use '&'       return &(ctl->shared->bank_locks[banklockno].lock);}Thoughts?\n\n[0] https://www.agner.org/optimize/instruction_tables.pdf", "msg_date": "Thu, 14 Dec 2023 17:28:01 +0800", "msg_from": "tender wang <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SLRU optimization - configurable buffer pool and partitioning the\n SLRU lock" }, { "msg_contents": "\n\n> On 14 Dec 2023, at 14:28, tender wang <[email protected]> wrote:\n> \n> Now that AND is more faster, Can we replace the '% SLRU_MAX_BANKLOCKS' operation in SimpleLruGetBankLock() with '& 127'\n\nunsigned int GetBankno1(unsigned int pageno) {\nreturn pageno & 127;\n}\n\nunsigned int GetBankno2(unsigned int pageno) {\nreturn pageno % 128;\n}\n\nGenerates with -O2\n\nGetBankno1(unsigned int):\nmov eax, edi\nand eax, 127\nret\nGetBankno2(unsigned int):\nmov eax, edi\nand eax, 127\nret\n\n\nCompiler is smart enough with constants.\n\n\nBest regards, Andrey Borodin.\n\n", "msg_date": "Thu, 14 Dec 2023 14:35:37 +0500", "msg_from": "\"Andrey M. Borodin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SLRU optimization - configurable buffer pool and partitioning the\n SLRU lock" }, { "msg_contents": "On Wed, Dec 13, 2023 at 5:49 PM Andrey M. Borodin <[email protected]> wrote:\n\n> > On 12 Dec 2023, at 18:28, Alvaro Herrera <[email protected]> wrote:\n> >\n> > Andrey, do you have any stress tests or anything else that you used to\n> > gain confidence in this code?\n>\n> We are using only first two steps of the patchset, these steps do not touch locking stuff.\n>\n> We’ve got some confidence after Yura Sokolov’s benchmarks [0]. Thanks!\n>\n\nI have run this test [1], instead of comparing against the master I\nhave compared the effect of (patch-1 = (0001+0002)slur buffer bank) vs\n(patch-2 = (0001+0002+0003) slur buffer bank + bank-wise lock), and\nhere is the result of the benchmark-1 and benchmark-2. I have noticed\na very good improvement with the addition of patch 0003.\n\nMachine information:\nArchitecture: x86_64\nCPU op-mode(s): 32-bit, 64-bit\nByte Order: Little Endian\nCPU(s): 128\nOn-line CPU(s) list: 0-127\n\nconfigurations:\n\nmax_wal_size=20GB\nshared_buffers=20GB\ncheckpoint_timeout=40min\nmax_connections=700\nmaintenance_work_mem=1GB\n\nsubtrans_buffers=$variable\nmultixact_offsets_buffers=$variable\nmultixact_members_buffers=$variable\n\nbenchmark-1\nversion | subtrans | multixact | tps\n | buffers | offs/memb | func+ballast\n-----------+--------------+--------------+------\npatch-1 | 64 | 64/128 | 87 + 58\npatch-2 | 64 | 64/128 | 128 +83\npatch-1 | 1024 | 512/1024 | 96 + 64\npatch-2 | 1024 | 512/1024 | 163+108\n\nbenchmark-2\n\nversion | subtrans | multixact | tps\n | buffers | offs/memb | func\n-----------+--------------+--------------+------\npatch-1 | 64 | 64/128 | 10\npatch-2 | 64 | 64/128 | 12\npatch-1 | 1024 | 512/1024 | 44\npatch-2 | 1024 | 512/1024 | 72\n\n\n[1] https://www.postgresql.org/message-id/flat/e46cdea96979545b2d8a13b451d8b1ce61dc7238.camel%40postgrespro.ru#0ed2cad11470825d464093fe6b8ef6a3\n\n\n--\nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 14 Dec 2023 16:36:03 +0530", "msg_from": "Dilip Kumar <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SLRU optimization - configurable buffer pool and partitioning the\n SLRU lock" }, { "msg_contents": "\n\n> On 14 Dec 2023, at 16:06, Dilip Kumar <[email protected]> wrote:\n> \n> I have noticed\n> a very good improvement with the addition of patch 0003.\n\nIndeed, a very impressive results! It’s almost x2 of performance on high contention scenario, on top of previous improvements.\n\nBest regards, Andrey Borodin.\n\n", "msg_date": "Thu, 14 Dec 2023 16:25:46 +0500", "msg_from": "\"Andrey M. Borodin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SLRU optimization - configurable buffer pool and partitioning the\n SLRU lock" }, { "msg_contents": "Andrey M. Borodin <[email protected]> 于2023年12月14日周四 17:35写道:\n\n>\n>\n> > On 14 Dec 2023, at 14:28, tender wang <[email protected]> wrote:\n> >\n> > Now that AND is more faster, Can we replace the '%\n> SLRU_MAX_BANKLOCKS' operation in SimpleLruGetBankLock() with '& 127'\n>\n> unsigned int GetBankno1(unsigned int pageno) {\n> return pageno & 127;\n> }\n>\n> unsigned int GetBankno2(unsigned int pageno) {\n> return pageno % 128;\n> }\n>\n> Generates with -O2\n>\n> GetBankno1(unsigned int):\n> mov eax, edi\n> and eax, 127\n> ret\n> GetBankno2(unsigned int):\n> mov eax, edi\n> and eax, 127\n> ret\n>\n>\n> Compiler is smart enough with constants.\n>\n\nYeah, that's true.\n\nint GetBankno(long pageno) {\n unsigned short bank_mask = 128;\n int bankno = (pageno & bank_mask) % 128;\n return bankno;\n}\nenable -O2, only one instruction:\nxor eax, eax\n\nBut if we all use '%', thing changs as below:\nint GetBankno(long pageno) {\n unsigned short bank_mask = 128;\n int bankno = (pageno % bank_mask) % 128;\n return bankno;\n}\n mov rdx, rdi\n sar rdx, 63\n shr rdx, 57\n lea rax, [rdi+rdx]\n and eax, 127\n sub eax, edx\n\n\n> Best regards, Andrey Borodin.\n\nAndrey M. Borodin <[email protected]> 于2023年12月14日周四 17:35写道:\n\n> On 14 Dec 2023, at 14:28, tender wang <[email protected]> wrote:\n> \n>   Now that AND is more faster, Can we  replace the '% SLRU_MAX_BANKLOCKS' operation in  SimpleLruGetBankLock()  with '& 127'\n\nunsigned int GetBankno1(unsigned int pageno) {\nreturn pageno & 127;\n}\n\nunsigned int GetBankno2(unsigned int pageno) {\nreturn pageno % 128;\n}\n\nGenerates with -O2\n\nGetBankno1(unsigned int):\nmov eax, edi\nand eax, 127\nret\nGetBankno2(unsigned int):\nmov eax, edi\nand eax, 127\nret\n\n\nCompiler is smart enough with constants.Yeah, that's true.int GetBankno(long pageno) {    unsigned short bank_mask = 128;    int bankno = (pageno & bank_mask) % 128;    return bankno;}enable -O2, only one  instruction:xor     eax, eaxBut if we all use '%',  thing changs as below:int GetBankno(long pageno) {    unsigned short bank_mask = 128;    int bankno = (pageno % bank_mask) % 128;    return bankno;}        mov     rdx, rdi        sar     rdx, 63        shr     rdx, 57        lea     rax, [rdi+rdx]        and     eax, 127        sub     eax, edx\n\nBest regards, Andrey Borodin.", "msg_date": "Thu, 14 Dec 2023 19:32:06 +0800", "msg_from": "tender wang <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SLRU optimization - configurable buffer pool and partitioning the\n SLRU lock" }, { "msg_contents": "\n\n> On 14 Dec 2023, at 16:32, tender wang <[email protected]> wrote:\n> \n> enable -O2, only one instruction:\n> xor eax, eax\n\nThis is not fast code. This is how friendly C compiler suggests you that mask must be 127, not 128.\n\n\nBest regards, Andrey Borodin.\n\n", "msg_date": "Thu, 14 Dec 2023 16:56:50 +0500", "msg_from": "\"Andrey M. Borodin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SLRU optimization - configurable buffer pool and partitioning the\n SLRU lock" }, { "msg_contents": "On Thu, Dec 14, 2023 at 4:36 PM Dilip Kumar <[email protected]> wrote:\n>\n> On Wed, Dec 13, 2023 at 5:49 PM Andrey M. Borodin <[email protected]> wrote:\n>\n> > > On 12 Dec 2023, at 18:28, Alvaro Herrera <[email protected]> wrote:\n> > >\n> > > Andrey, do you have any stress tests or anything else that you used to\n> > > gain confidence in this code?\n> >\n\nI have done some more testing for the clog group update as the\nattached test file executes two concurrent scripts executed with\npgbench, the first script is the slow script which will run 10-second\nlong transactions and the second script is a very fast transaction\nwith ~10000 transactions per second. Along with that, I have also\nchanged the bank size such that each bank will contain just 1 page\ni.e. 32k transactions per bank. I have done this way so that we do\nnot need to keep long-running transactions running for very long in\norder to get the transactions from different banks committed during\nthe same time. With this test, I have got that behavior and the below\nlogs shows that multiple transaction range which is in different\nslru-bank (considering 32k transactions per bank) are doing group\nupdate at the same time. e.g. in the below logs, we can see xid range\naround 70600, 70548, and 70558, and xid range around 755, and 752 are\ngetting group updates by different leaders but near the same time.\n\nIt is running fine when running for a long duration, but I am not sure\nhow to validate the sanity of this kind of test.\n\n2023-12-14 14:43:31.813 GMT [3306] LOG: group leader procno 606\nupdated status of procno 606 xid 70600\n2023-12-14 14:43:31.816 GMT [3326] LOG: procno 586 for xid 70548\nadded for group update\n2023-12-14 14:43:31.816 GMT [3326] LOG: procno 586 is group leader\nand got the lock\n2023-12-14 14:43:31.816 GMT [3326] LOG: group leader procno 586\nupdated status of procno 586 xid 70548\n2023-12-14 14:43:31.818 GMT [3327] LOG: procno 585 for xid 70558\nadded for group update\n2023-12-14 14:43:31.818 GMT [3327] LOG: procno 585 is group leader\nand got the lock\n2023-12-14 14:43:31.818 GMT [3327] LOG: group leader procno 585\nupdated status of procno 585 xid 70558\n2023-12-14 14:43:31.829 GMT [3155] LOG: procno 687 for xid 752 added\nfor group update\n2023-12-14 14:43:31.829 GMT [3207] LOG: procno 669 for xid 755 added\nfor group update\n2023-12-14 14:43:31.829 GMT [3155] LOG: procno 687 is group leader\nand got the lock\n2023-12-14 14:43:31.829 GMT [3155] LOG: group leader procno 687\nupdated status of procno 669 xid 755\n2023-12-14 14:43:31.829 GMT [3155] LOG: group leader procno 687\nupdated status of procno 687 xid 752\n\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Thu, 14 Dec 2023 20:26:17 +0530", "msg_from": "Dilip Kumar <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SLRU optimization - configurable buffer pool and partitioning the\n SLRU lock" }, { "msg_contents": "On Thu, Dec 14, 2023 at 1:53 PM Dilip Kumar <[email protected]> wrote:\n>\n> On Thu, Dec 14, 2023 at 8:43 AM Amul Sul <[email protected]> wrote:\n> >\n> > On Mon, Dec 11, 2023 at 10:42 AM Dilip Kumar <[email protected]> wrote:\n> >>\n> >> On Thu, Nov 30, 2023 at 3:30 PM Dilip Kumar <[email protected]> wrote:\n> >> >\n> >> > On Wed, Nov 29, 2023 at 4:58 PM Dilip Kumar <[email protected]> wrote:\n> >>\n> >> Here is the updated patch based on some comments by tender wang (those\n> >> comments were sent only to me)\n> >\n> >\n> > few nitpicks:\n> >\n> > +\n> > + /*\n> > + * Mask for slotno banks, considering 1GB SLRU buffer pool size and the\n> > + * SLRU_BANK_SIZE bits16 should be sufficient for the bank mask.\n> > + */\n> > + bits16 bank_mask;\n> > } SlruCtlData;\n> >\n> > ...\n> > ...\n> >\n> > + int bankno = pageno & ctl->bank_mask;\n> >\n> > I am a bit uncomfortable seeing it as a mask, why can't it be simply a number\n> > of banks (num_banks) and get the bank number through modulus op (pageno %\n> > num_banks) instead of bitwise & operation (pageno & ctl->bank_mask) which is a\n> > bit difficult to read compared to modulus op which is quite simple,\n> > straightforward and much common practice in hashing.\n> >\n> > Are there any advantages of using & over % ?\n>\n> I am not sure either but since this change in 0002 is by Andrey, I\n> will let him comment on this before we change or take any decision.\n>\n> > Also, a few places in 0002 and 0003 patch, need the bank number, it is better\n> > to have a macro for that.\n> > ---\n> >\n> > extern bool SlruScanDirCbDeleteAll(SlruCtl ctl, char *filename, int64 segpage,\n> > void *data);\n> > -\n> > +extern bool check_slru_buffers(const char *name, int *newval);\n> > #endif /* SLRU_H */\n> >\n> >\n> > Add an empty line after the declaration, in 0002 patch.\n> > ---\n> >\n> > -TransactionIdSetStatusBit(TransactionId xid, XidStatus status, XLogRecPtr lsn, int slotno)\n> > +TransactionIdSetStatusBit(TransactionId xid, XidStatus status, XLogRecPtr lsn,\n> > + int slotno)\n> >\n> > Unrelated change for 0003 patch.\n>\n> Fixed\n>\n> Thanks for your review, PFA updated version.\n>\n> I have added @Amit Kapila to the list to view his opinion about\n> whether anything can break in the clog group update with our changes\n> of bank-wise SLRU lock.\n\nUpdated the comments about group commit safety based on the off-list\nsuggestion by Alvaro.\n\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Fri, 15 Dec 2023 10:45:36 +0530", "msg_from": "Dilip Kumar <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SLRU optimization - configurable buffer pool and partitioning the\n SLRU lock" }, { "msg_contents": "On Tue, Dec 12, 2023 at 8:29 AM Alvaro Herrera <[email protected]> wrote:\n> The problem I see is that the group update mechanism is designed around\n> contention of the global xact-SLRU control lock; it uses atomics to\n> coordinate a single queue when the lock is contended. So if we split up\n> the global SLRU control lock using banks, then multiple processes using\n> different bank locks might not contend. OK, this is fine, but what\n> happens if two separate groups of processes encounter contention on two\n> different bank locks? I think they will both try to update the same\n> queue, and coordinate access to that *using different bank locks*. I\n> don't see how can this work correctly.\n>\n> I suspect the first part of that algorithm, where atomics are used to\n> create the list without a lock, might work fine. But will each \"leader\"\n> process, each of which is potentially using a different bank lock,\n> coordinate correctly? Maybe this works correctly because only one\n> process will find the queue head not empty? If this is what happens,\n> then there needs to be comments about it. Without any explanation,\n> this seems broken and potentially dangerous, as some transaction commit\n> bits might become lost given high enough concurrency and bad luck.\n\nI don't want to be dismissive of this concern, but I took a look at\nthis part of the patch set and I don't see a correctness problem. I\nthink the idea of the mechanism is that we have a single linked list\nin shared memory that can accumulate those waiters. At some point a\nprocess pops the entire list of waiters, which allows a new group of\nwaiters to begin accumulating. The process that pops the list must\nperform the updates for every process in the just-popped list without\nfailing, else updates would be lost. In theory, there can be any\nnumber of lists that some process has popped and is currently working\nits way through at the same time, although in practice I bet it's\nquite rare for there to be more than one. The only correctness problem\nis if it's possible for a process that popped the list to error out\nbefore it finishes doing the updates that it \"promised\" to do by\npopping the list.\n\nHaving individual bank locks doesn't really change anything here.\nThat's just a matter of what lock has to be held in order to perform\nthe update that was promised, and the algorithm described in the\nprevious paragraph doesn't really care about that. Nor is there a\ndeadlock hazard here as long as processes only take one lock at a\ntime, which I believe is the case. The only potential issue that I see\nhere is with performance. I've heard some questions about whether this\nmachinery performs well even as it stands, but certainly if we divide\nup the lock into a bunch of bankwise locks then that should tend in\nthe direction of making a mechanism like this less valuable, because\nboth mechanisms are trying to alleviate contention, and so in a\ncertain sense they are competing for the same job. However, they do\naim to alleviate different TYPES of contention: the group XID update\nstuff should be most valuable when lots of processes are trying to\nupdate the same page, and the banks should be most valuable when there\nis simultaneous access to a bunch of different pages. So I'm not\nconvinced that this patch is a reason to remove the group XID update\nmechanism, but someone might argue otherwise.\n\nA related concern is that, if by chance we do end up with multiple\nupdaters from different pages in the same group, it will now be more\nexpensive to sort that out because we'll have to potentially keep\nswitching locks. So that could make the group XID update mechanism\nless performant than it is currently. I think that's probably not an\nissue because I think it should be a rare occurrence, as the comments\nsay. A bit more cost in a code path that is almost never taken won't\nmatter. But if that path is more commonly taken than I think, then\nmaybe making it more expensive could hurt. It might be worth adding\nsome debugging to see how often we actually go down that path in a\nhighly stressed system.\n\nBTW:\n\n+ * as well. The main reason for now allowing requesters of\ndifferent pages\n\nnow -> not\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 18 Dec 2023 12:04:07 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SLRU optimization - configurable buffer pool and partitioning the\n SLRU lock" }, { "msg_contents": "On Mon, Dec 18, 2023 at 12:04 PM Robert Haas <[email protected]> wrote:\n> certain sense they are competing for the same job. However, they do\n> aim to alleviate different TYPES of contention: the group XID update\n> stuff should be most valuable when lots of processes are trying to\n> update the same page, and the banks should be most valuable when there\n> is simultaneous access to a bunch of different pages. So I'm not\n> convinced that this patch is a reason to remove the group XID update\n> mechanism, but someone might argue otherwise.\n\nHmm, but, on the other hand:\n\nCurrently all readers and writers are competing for the same LWLock.\nBut with this change, the readers will (mostly) no longer be competing\nwith the writers. So, in theory, that might reduce lock contention\nenough to make the group update mechanism pointless.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 18 Dec 2023 12:30:11 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SLRU optimization - configurable buffer pool and partitioning the\n SLRU lock" }, { "msg_contents": "\n\n> On 18 Dec 2023, at 22:30, Robert Haas <[email protected]> wrote:\n> \n> On Mon, Dec 18, 2023 at 12:04 PM Robert Haas <[email protected]> wrote:\n>> certain sense they are competing for the same job. However, they do\n>> aim to alleviate different TYPES of contention: the group XID update\n>> stuff should be most valuable when lots of processes are trying to\n>> update the same page, and the banks should be most valuable when there\n>> is simultaneous access to a bunch of different pages. So I'm not\n>> convinced that this patch is a reason to remove the group XID update\n>> mechanism, but someone might argue otherwise.\n> \n> Hmm, but, on the other hand:\n> \n> Currently all readers and writers are competing for the same LWLock.\n> But with this change, the readers will (mostly) no longer be competing\n> with the writers. So, in theory, that might reduce lock contention\n> enough to make the group update mechanism pointless.\n\nOne page still accommodates 32K transaction statuses under one lock. It feels like a lot. About 1 second of transactions on a typical installation.\n\nWhen the group commit was committed did we have a benchmark to estimate efficiency of this technology? Can we repeat that test again?\n\n\nBest regards, Andrey Borodin.\n\n", "msg_date": "Mon, 18 Dec 2023 22:53:43 +0500", "msg_from": "\"Andrey M. Borodin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SLRU optimization - configurable buffer pool and partitioning the\n SLRU lock" }, { "msg_contents": "On Mon, Dec 18, 2023 at 12:53 PM Andrey M. Borodin <[email protected]> wrote:\n> One page still accommodates 32K transaction statuses under one lock. It feels like a lot. About 1 second of transactions on a typical installation.\n>\n> When the group commit was committed did we have a benchmark to estimate efficiency of this technology? Can we repeat that test again?\n\nI think we did, but it might take some research to find it in the\narchives. If we can, I agree that repeating it feels like a good idea.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 18 Dec 2023 13:18:47 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SLRU optimization - configurable buffer pool and partitioning the\n SLRU lock" }, { "msg_contents": "On Mon, Dec 18, 2023 at 11:00 PM Robert Haas <[email protected]> wrote:\n>\n> On Mon, Dec 18, 2023 at 12:04 PM Robert Haas <[email protected]> wrote:\n> > certain sense they are competing for the same job. However, they do\n> > aim to alleviate different TYPES of contention: the group XID update\n> > stuff should be most valuable when lots of processes are trying to\n> > update the same page, and the banks should be most valuable when there\n> > is simultaneous access to a bunch of different pages. So I'm not\n> > convinced that this patch is a reason to remove the group XID update\n> > mechanism, but someone might argue otherwise.\n>\n> Hmm, but, on the other hand:\n>\n> Currently all readers and writers are competing for the same LWLock.\n> But with this change, the readers will (mostly) no longer be competing\n> with the writers. So, in theory, that might reduce lock contention\n> enough to make the group update mechanism pointless.\n\nThanks for your feedback, I agree that with a bank-wise lock, we might\nnot need group updates for some of the use cases as you said where\nreaders and writers are contenting on the centralized lock because, in\nmost of the cases, readers will be distributed across different banks.\nOTOH there are use cases where the writer commit is the bottleneck (on\nSLRU lock) like pgbench simple-update or TPC-B then we will still\nbenefit by group update. During group update testing we have seen\nbenefits with such a scenario[1] with high client counts. So as per\nmy understanding by distributing the SLRU locks there are scenarios\nwhere we will not need group update anymore but OTOH there are also\nscenarios where we will still benefit from the group update.\n\n\n[1] https://www.postgresql.org/message-id/CAFiTN-u-XEzhd%3DhNGW586fmQwdTy6Qy6_SXe09tNB%3DgBcVzZ_A%40mail.gmail.com\n\n\n\n--\nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 19 Dec 2023 11:04:42 +0530", "msg_from": "Dilip Kumar <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SLRU optimization - configurable buffer pool and partitioning the\n SLRU lock" }, { "msg_contents": "> On 19 Dec 2023, at 10:34, Dilip Kumar <[email protected]> wrote:\n\nJust a side node.\nIt seems like commit log is kind of antipattern of data contention. Even when we build a super-optimized SLRU. Nearby **bits** are written by different CPUs.\nI think that banks and locks are good thing. But also we could reorganize log so that \nstatus of transaction 0 is on a page 0 at bit offset 0\nstatus of transaction 1 is on a page 1 at bit offset 0\nstatus of transaction 2 is on a page 2 at bit offset 0\nstatus of transaction 3 is on a page 3 at bit offset 0\nstatus of transaction 4 is on a page 0 at bit offset 2\nstatus of transaction 5 is on a page 1 at bit offset 2\nstatus of transaction 6 is on a page 2 at bit offset 2\nstatus of transaction 7 is on a page 3 at bit offset 2\netc...\n\nAnd it would be even better if page for transaction statuses would be determined by backend id somehow. Or at least cache line. Can we allocate a range (sizeof(cacheline)) of xids\\subxids\\multixacts\\whatever for each backend?\n\nThis does not matter much because\n0. Patch set in current thread produces robust SLRU anyway\n1. One day we are going to throw away SLRU anyway\n\n\nBest regards, Andrey Borodin.\nOn 19 Dec 2023, at 10:34, Dilip Kumar <[email protected]> wrote:Just a side node.It seems like commit log is kind of antipattern of data contention. Even when we build a super-optimized SLRU. Nearby **bits** are written by different CPUs.I think that banks and locks are good thing. But also we could reorganize log so that status of transaction 0 is on a page 0 at bit offset 0status of transaction 1 is on a page 1 at bit offset 0status of transaction 2 is on a page 2 at bit offset 0status of transaction 3 is on a page 3 at bit offset 0status of transaction 4 is on a page 0 at bit offset 2status of transaction 5 is on a page 1 at bit offset 2status of transaction 6 is on a page 2 at bit offset 2status of transaction 7 is on a page 3 at bit offset 2etc...And it would be even better if page for transaction statuses would be determined by backend id somehow. Or at least cache line. Can we allocate a range (sizeof(cacheline)) of xids\\subxids\\multixacts\\whatever for each backend?This does not matter much because0. Patch set in current thread produces robust SLRU anyway1. One day we are going to throw away SLRU anywayBest regards, Andrey Borodin.", "msg_date": "Fri, 22 Dec 2023 18:14:17 +0500", "msg_from": "\"Andrey M. Borodin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SLRU optimization - configurable buffer pool and partitioning the\n SLRU lock" }, { "msg_contents": "On Fri, Dec 22, 2023 at 8:14 AM Andrey M. Borodin <[email protected]> wrote:\n> Just a side node.\n> It seems like commit log is kind of antipattern of data contention. Even when we build a super-optimized SLRU. Nearby **bits** are written by different CPUs.\n> I think that banks and locks are good thing. But also we could reorganize log so that\n> status of transaction 0 is on a page 0 at bit offset 0\n> status of transaction 1 is on a page 1 at bit offset 0\n> status of transaction 2 is on a page 2 at bit offset 0\n> status of transaction 3 is on a page 3 at bit offset 0\n> status of transaction 4 is on a page 0 at bit offset 2\n> status of transaction 5 is on a page 1 at bit offset 2\n> status of transaction 6 is on a page 2 at bit offset 2\n> status of transaction 7 is on a page 3 at bit offset 2\n> etc...\n\nThis is an interesting idea. A variant would be to stripe across\ncachelines within the same page rather than across pages. If we do\nstripe across pages as proposed here, we'd probably want to rethink\nthe way the SLRU is extended -- page at a time wouldn't really make\nsense, but preallocating an entire file might.\n\n> And it would be even better if page for transaction statuses would be determined by backend id somehow. Or at least cache line. Can we allocate a range (sizeof(cacheline)) of xids\\subxids\\multixacts\\whatever for each backend?\n\nI don't understand how this could work. We need to be able to look up\ntransaction status by XID, not backend ID.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 2 Jan 2024 09:23:27 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SLRU optimization - configurable buffer pool and partitioning the\n SLRU lock" }, { "msg_contents": "\n\n> On 2 Jan 2024, at 19:23, Robert Haas <[email protected]> wrote:\n> \n>> \n>> And it would be even better if page for transaction statuses would be determined by backend id somehow. Or at least cache line. Can we allocate a range (sizeof(cacheline)) of xids\\subxids\\multixacts\\whatever for each backend?\n> \n> I don't understand how this could work. We need to be able to look up\n> transaction status by XID, not backend ID.\n\nWhen GetNewTransactionId() is called we can reserve 256 xids in backend local memory. This values will be reused by transactions or subtransactions of this backend. Here 256 == sizeof(CacheLine).\nThis would ensure that different backends touch different cache lines.\n\nBut this approach would dramatically increase xid consumption speed on patterns where client reconnects after several transactions. So we can keep unused xids in procarray for future reuse.\n\nI doubt we can find significant performance improvement here, because false cache line sharing cannot be _that_ bad.\n\n\nBest regards, Andrey Borodin.\n\n", "msg_date": "Tue, 2 Jan 2024 23:10:04 +0500", "msg_from": "\"Andrey M. Borodin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SLRU optimization - configurable buffer pool and partitioning the\n SLRU lock" }, { "msg_contents": "On Tue, Jan 2, 2024 at 1:10 PM Andrey M. Borodin <[email protected]> wrote:\n> > On 2 Jan 2024, at 19:23, Robert Haas <[email protected]> wrote:\n> >> And it would be even better if page for transaction statuses would be determined by backend id somehow. Or at least cache line. Can we allocate a range (sizeof(cacheline)) of xids\\subxids\\multixacts\\whatever for each backend?\n> >\n> > I don't understand how this could work. We need to be able to look up\n> > transaction status by XID, not backend ID.\n>\n> When GetNewTransactionId() is called we can reserve 256 xids in backend local memory. This values will be reused by transactions or subtransactions of this backend. Here 256 == sizeof(CacheLine).\n> This would ensure that different backends touch different cache lines.\n>\n> But this approach would dramatically increase xid consumption speed on patterns where client reconnects after several transactions. So we can keep unused xids in procarray for future reuse.\n>\n> I doubt we can find significant performance improvement here, because false cache line sharing cannot be _that_ bad.\n\nYeah, this seems way too complicated for what we'd potentially gain\nfrom it. An additional problem is that the xmin horizon computation\nassumes that XIDs are assigned in monotonically increasing fashion;\nbreaking that would be messy. And even an occasional leak of XIDs\ncould precipitate enough additional vacuuming to completely outweigh\nany gains we could hope to achieve here.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 2 Jan 2024 13:30:43 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SLRU optimization - configurable buffer pool and partitioning the\n SLRU lock" }, { "msg_contents": "On Tue, Jan 2, 2024 at 7:53 PM Robert Haas <[email protected]> wrote:\n>\n> On Fri, Dec 22, 2023 at 8:14 AM Andrey M. Borodin <[email protected]> wrote:\n> > Just a side node.\n> > It seems like commit log is kind of antipattern of data contention. Even when we build a super-optimized SLRU. Nearby **bits** are written by different CPUs.\n> > I think that banks and locks are good thing. But also we could reorganize log so that\n> > status of transaction 0 is on a page 0 at bit offset 0\n> > status of transaction 1 is on a page 1 at bit offset 0\n> > status of transaction 2 is on a page 2 at bit offset 0\n> > status of transaction 3 is on a page 3 at bit offset 0\n> > status of transaction 4 is on a page 0 at bit offset 2\n> > status of transaction 5 is on a page 1 at bit offset 2\n> > status of transaction 6 is on a page 2 at bit offset 2\n> > status of transaction 7 is on a page 3 at bit offset 2\n> > etc...\n>\n> This is an interesting idea. A variant would be to stripe across\n> cachelines within the same page rather than across pages. If we do\n> stripe across pages as proposed here, we'd probably want to rethink\n> the way the SLRU is extended -- page at a time wouldn't really make\n> sense, but preallocating an entire file might.\n\nYeah, this is indeed an interesting idea. So I think if we are\ninterested in working in this direction maybe this can be submitted in\na different thread, IMHO.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 3 Jan 2024 10:38:06 +0530", "msg_from": "Dilip Kumar <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SLRU optimization - configurable buffer pool and partitioning the\n SLRU lock" }, { "msg_contents": "On Wed, Jan 3, 2024 at 12:08 AM Dilip Kumar <[email protected]> wrote:\n> Yeah, this is indeed an interesting idea. So I think if we are\n> interested in working in this direction maybe this can be submitted in\n> a different thread, IMHO.\n\nYeah, that's something quite different from the patch before us.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 3 Jan 2024 08:45:48 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SLRU optimization - configurable buffer pool and partitioning the\n SLRU lock" }, { "msg_contents": "The more I look at TransactionGroupUpdateXidStatus, the more I think\nit's broken, and while we do have some tests, I don't have confidence\nthat they cover all possible cases.\n\nOr, at least, if this code is good, then it hasn't been sufficiently\nexplained.\n\nIf we have multiple processes trying to write bits to clog, and they are\nusing different banks, then the LWLockConditionalAcquire will be able to\nacquire the bank lock \n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"The saddest aspect of life right now is that science gathers knowledge faster\n than society gathers wisdom.\" (Isaac Asimov)\n\n\n", "msg_date": "Mon, 8 Jan 2024 12:25:20 +0100", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SLRU optimization - configurable buffer pool and partitioning\n the SLRU lock" }, { "msg_contents": "On Mon, Jan 8, 2024 at 4:55 PM Alvaro Herrera <[email protected]> wrote:\n>\n> The more I look at TransactionGroupUpdateXidStatus, the more I think\n> it's broken, and while we do have some tests, I don't have confidence\n> that they cover all possible cases.\n>\n> Or, at least, if this code is good, then it hasn't been sufficiently\n> explained.\n\nAny thought about a case in which you think it might be broken, I mean\nany vague thought might also help where you think it might not work as\nexpected so that I can also think in that direction. It might be\npossible that I might not be thinking of some perspective that you are\nthinking and comments might be lacking from that point of view.\n\n> If we have multiple processes trying to write bits to clog, and they are\n> using different banks, then the LWLockConditionalAcquire will be able to\n> acquire the bank lock\n\nDo you think there is a problem with multiple processes getting the\nlock? I mean they are modifying different CLOG pages so that can be\ndone concurrently right?\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 8 Jan 2024 18:48:09 +0530", "msg_from": "Dilip Kumar <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SLRU optimization - configurable buffer pool and partitioning the\n SLRU lock" }, { "msg_contents": "On 2024-Jan-08, Dilip Kumar wrote:\n\n> On Mon, Jan 8, 2024 at 4:55 PM Alvaro Herrera <[email protected]> wrote:\n> >\n> > The more I look at TransactionGroupUpdateXidStatus, the more I think\n> > it's broken, and while we do have some tests, I don't have confidence\n> > that they cover all possible cases.\n> >\n> > Or, at least, if this code is good, then it hasn't been sufficiently\n> > explained.\n> \n> Any thought about a case in which you think it might be broken, I mean\n> any vague thought might also help where you think it might not work as\n> expected so that I can also think in that direction. It might be\n> possible that I might not be thinking of some perspective that you are\n> thinking and comments might be lacking from that point of view.\n\nEh, apologies. This email was an unfinished draft that I had laying\naround before the holidays which I intended to discard it but somehow\nkept around, and just now I happened to press the wrong key combination\nand it ended up being sent instead. We had some further discussion,\nafter which I no longer think that there is a problem here, so please\nignore this email.\n\nI'll come back to this patch later this week.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"El que vive para el futuro es un iluso, y el que vive para el pasado,\nun imbécil\" (Luis Adler, \"Los tripulantes de la noche\")\n\n\n", "msg_date": "Mon, 8 Jan 2024 16:42:34 +0100", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SLRU optimization - configurable buffer pool and partitioning\n the SLRU lock" }, { "msg_contents": "On Mon, Jan 8, 2024 at 9:12 PM Alvaro Herrera <[email protected]> wrote:\n>\n> Eh, apologies. This email was an unfinished draft that I had laying\n> around before the holidays which I intended to discard it but somehow\n> kept around, and just now I happened to press the wrong key combination\n> and it ended up being sent instead. We had some further discussion,\n> after which I no longer think that there is a problem here, so please\n> ignore this email.\n>\n> I'll come back to this patch later this week.\n\n No problem\n\nThe patch was facing some compilation issues after some recent\ncommits, so I have changed it. Reported by Julien Tachoires (offlist)\n\n\n\n--\nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Wed, 10 Jan 2024 18:50:20 +0530", "msg_from": "Dilip Kumar <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SLRU optimization - configurable buffer pool and partitioning the\n SLRU lock" }, { "msg_contents": "On Wed, Jan 10, 2024 at 6:50 PM Dilip Kumar <[email protected]> wrote:\n>\n> On Mon, Jan 8, 2024 at 9:12 PM Alvaro Herrera <[email protected]> wrote:\n> >\n> > Eh, apologies. This email was an unfinished draft that I had laying\n> > around before the holidays which I intended to discard it but somehow\n> > kept around, and just now I happened to press the wrong key combination\n> > and it ended up being sent instead. We had some further discussion,\n> > after which I no longer think that there is a problem here, so please\n> > ignore this email.\n> >\n> > I'll come back to this patch later this week.\n>\n> No problem\n>\n> The patch was facing some compilation issues after some recent\n> commits, so I have changed it. Reported by Julien Tachoires (offlist)\n\nThe last patch conflicted with some of the recent commits, so here is\nthe updated version of the patch, I also noticed that the slur bank\nlock wat event details were missing from the wait_event_names.txt file\nso added that as well.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Tue, 23 Jan 2024 11:05:48 +0530", "msg_from": "Dilip Kumar <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SLRU optimization - configurable buffer pool and partitioning the\n SLRU lock" }, { "msg_contents": "Here's a touched-up version of this patch.\n\nFirst, PROC_GLOBAL->clogGroupFirst and SlruCtl->latest_page_number\nchange from being protected by locks to being atomics, but there's no\nmention of considering memory barriers that they should have. Looking\nat the code, I think the former doesn't need any additional barriers,\nbut latest_page_number is missing some, which I have added. This\ndeserves another careful look.\n\nSecond and most user visibly, the GUC names seem to have been chosen\nbased on the source-code variables, which have never meant to be user-\nvisible. So I renamed a few:\n\nxact_buffers -> transaction_buffers\nsubtrans_buffers -> subtransaction_buffers\nserial_buffers -> serializable_buffers\ncommit_ts_buffers -> commit_timestamp_buffers\n\n(unchanged: multixact_offsets_buffers, multixact_members_buffers,\nnotify_buffers)\n\nI did this explicitly trying to avoid using the term SLRU in a\nuser-visible manner, because what do users care? But immediately after\ndoing this I realized that we already have pg_stat_slru, so maybe the\ncat is already out of the bag, and so perhaps we should name these GUCS\nas, say slru_transaction_buffers? That may make the connection between\nthese things a little more explicit. (I do think we need to add\ncross-links in the documentation of those GUCs to the pg_stat_slru\ndocs and vice-versa.)\n\n\nAnother thing that bothered me a bit is that we have auto-tuning for\ntransaction_buffers and commit_timestamp_buffers, but not for\nsubtransaction_buffers. (Autotuning means you set the GUC to 0 and it\nscales with shared_buffers). I don't quite understand what's the reason\nfor the ommision, so I added it for subtrans too. I think it may make\nsense to do likewise for the multixact ones too, not sure. It doesn't\nseem worth having that for pg_serial and notify.\n\nWhile messing about with these GUCs I realized that the usage of the\nshow_hook to print what the current number is, when autoturning is used,\nwas bogus: SHOW would print the number of blocks for (say)\ntransaction_buffers, but if you asked it to print (say)\nmultixact_offsets_buffers, it would give a size in MB or kB. I'm sure\nsuch an inconsistency would bite us. So, digging around I found that a\ngood way to handle this is to remove the show_hook, and instead call\nSetConfigOption() at the time when the ShmemInit function is called,\nwith the correct number of buffers determined. This is pretty much what\nis used for XLOGbuffers, and it works correctly as far as my testing\nshows.\n\nStill with these auto-tuning GUCs, I noticed that the auto-tuning code\nwould continue to grow the buffer sizes with shared_buffers to\narbitrarily large values. I added an arbitrary maximum of 1024 (8 MB),\nwhich is much higher than the current value of 128; but if you have\n(say) 30 GB of shared_buffers (not uncommon these days), do you really\nneed 30MB of pg_clog cache? It seems mostly unnecessary ... and you can\nstill set it manually that way if you need it. So, largely I just\nrewrote those small functions completely.\n\nI also made the SGML documentation and postgresql.sample.conf all match\nwhat the code actually docs. The whole thing wasn't particularly\nconsistent.\n\nI rewrote a bunch of code comments and moved stuff around to appear in\nalphabetical order, etc.\n\nMore comment rewriting still pending.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/", "msg_date": "Thu, 25 Jan 2024 17:22:07 +0100", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SLRU optimization - configurable buffer pool and partitioning\n the SLRU lock" }, { "msg_contents": "On 2024-Jan-25, Alvaro Herrera wrote:\n\n> Here's a touched-up version of this patch.\n\n> diff --git a/src/backend/storage/lmgr/lwlock.c b/src/backend/storage/lmgr/lwlock.c\n> index 98fa6035cc..4a5e05d5e4 100644\n> --- a/src/backend/storage/lmgr/lwlock.c\n> +++ b/src/backend/storage/lmgr/lwlock.c\n> @@ -163,6 +163,13 @@ static const char *const BuiltinTrancheNames[] = {\n> \t[LWTRANCHE_LAUNCHER_HASH] = \"LogicalRepLauncherHash\",\n> \t[LWTRANCHE_DSM_REGISTRY_DSA] = \"DSMRegistryDSA\",\n> \t[LWTRANCHE_DSM_REGISTRY_HASH] = \"DSMRegistryHash\",\n> +\t[LWTRANCHE_COMMITTS_SLRU] = \"CommitTSSLRU\",\n> +\t[LWTRANCHE_MULTIXACTOFFSET_SLRU] = \"MultixactOffsetSLRU\",\n> +\t[LWTRANCHE_MULTIXACTMEMBER_SLRU] = \"MultixactMemberSLRU\",\n> +\t[LWTRANCHE_NOTIFY_SLRU] = \"NotifySLRU\",\n> +\t[LWTRANCHE_SERIAL_SLRU] = \"SerialSLRU\"\n> +\t[LWTRANCHE_SUBTRANS_SLRU] = \"SubtransSLRU\",\n> +\t[LWTRANCHE_XACT_SLRU] = \"XactSLRU\",\n> };\n\nEeek. Last minute changes ... Fixed here.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"La primera ley de las demostraciones en vivo es: no trate de usar el sistema.\nEscriba un guión que no toque nada para no causar daños.\" (Jakob Nielsen)", "msg_date": "Thu, 25 Jan 2024 17:33:43 +0100", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SLRU optimization - configurable buffer pool and partitioning\n the SLRU lock" }, { "msg_contents": "On Thu, Jan 25, 2024 at 11:22 AM Alvaro Herrera <[email protected]> wrote:\n> Still with these auto-tuning GUCs, I noticed that the auto-tuning code\n> would continue to grow the buffer sizes with shared_buffers to\n> arbitrarily large values. I added an arbitrary maximum of 1024 (8 MB),\n> which is much higher than the current value of 128; but if you have\n> (say) 30 GB of shared_buffers (not uncommon these days), do you really\n> need 30MB of pg_clog cache? It seems mostly unnecessary ... and you can\n> still set it manually that way if you need it. So, largely I just\n> rewrote those small functions completely.\n\nYeah, I think that if we're going to scale with shared_buffers, it\nshould be capped.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 25 Jan 2024 12:28:19 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SLRU optimization - configurable buffer pool and partitioning the\n SLRU lock" }, { "msg_contents": "I've continued to review this and decided that I don't like the mess\nthis patch proposes in order to support pg_commit_ts's deletion of all\nfiles. (Yes, I know that I was the one that proposed this idea. It's\nstill a bad one). I'd like to change that code by removing the limit\nthat we can only have 128 bank locks in a SLRU. To recap, the reason we\ndid this is that commit_ts sometimes wants to delete all files while\nrunning (DeactivateCommitTs), and for this it needs to acquire all bank\nlocks. Since going above the MAX_SIMUL_LWLOCKS limit is disallowed, we\nadded the SLRU limit making multiple banks share lwlocks.\n\nI propose two alternative solutions:\n\n1. The easiest is to have DeactivateCommitTs continue to hold\nCommitTsLock until the end, including while SlruScanDirectory does its\nthing. This sounds terrible, but considering that this code only runs\nwhen the module is being deactivated, I don't think it's really all that\nbad in practice. I mean, if you deactivate the commit_ts module and\nthen try to read commit timestamp data, you deserve to wait for a few\nseconds just as a punishment for your stupidity. AFAICT the cases where\nanything is done in the module mostly check without locking that\ncommitTsActive is set, so we're not slowing down any critical\noperations. Also, if you don't like to be delayed for a couple of\nseconds, just don't deactivate the module.\n\n2. If we want some refinement, the other idea is to change\nSlruScanDirCbDeleteAll (the callback that SlruScanDirectory uses in this\ncase) so that it acquires the bank lock appropriate for all the slots\nused by the file that's going to be deleted. This is OK because in the\ndefault compilation each file only has 32 segments, so that requires\nonly 32 lwlocks held at once while the file is being deleted. I think\nwe don't need to bother with this, but it's an option if we see the\nabove as unworkable for whatever reason.\n\n\nThe only other user of SlruScanDirCbDeleteAll is async.c (the LISTEN/\nNOTIFY code), and what that does is delete all the files at server\nstart, where nothing is running concurrently anyway. So whatever we do\nfor commit_ts, it won't affect async.c.\n\n\nSo, if we do away with the SLRU_MAX_BANKLOCKS idea, then we're assured\none LWLock per bank (instead of sharing some lwlocks to multiple banks),\nand that makes the code simpler to reason about.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"In fact, the basic problem with Perl 5's subroutines is that they're not\ncrufty enough, so the cruft leaks out into user-defined code instead, by\nthe Conservation of Cruft Principle.\" (Larry Wall, Apocalypse 6)\n\n\n", "msg_date": "Fri, 26 Jan 2024 18:38:04 +0100", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SLRU optimization - configurable buffer pool and partitioning\n the SLRU lock" }, { "msg_contents": "\n\n> On 26 Jan 2024, at 22:38, Alvaro Herrera <[email protected]> wrote:\n> \n> This is OK because in the\n> default compilation each file only has 32 segments, so that requires\n> only 32 lwlocks held at once while the file is being deleted.\n\nDo we account somehow that different subsystems do not accumulate MAX_SIMUL_LWLOCKS together?\nE.g. GiST during split can combine 75 locks, and somehow commit_ts will be deactivated by this backend at the same moment and add 32 locks more :)\nI understand that this sounds fantastic, these subsystems do not interfere. But this is fantastic only until something like that actually happens.\nIf possible, I'd prefer one lock at a time, any maybe sometimes two-three with some guarantees that this is safe.\nSo, from my POV first solution that you proposed seems much better to me.\n\nThanks for working on this!\n\n\nBest regard, Andrey Borodin.\n\n", "msg_date": "Fri, 26 Jan 2024 23:21:18 +0500", "msg_from": "Andrey Borodin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SLRU optimization - configurable buffer pool and partitioning the\n SLRU lock" }, { "msg_contents": "On Thu, Jan 25, 2024 at 10:03 PM Alvaro Herrera <[email protected]> wrote:\n>\n> On 2024-Jan-25, Alvaro Herrera wrote:\n>\n> > Here's a touched-up version of this patch.\n>\n> > diff --git a/src/backend/storage/lmgr/lwlock.c b/src/backend/storage/lmgr/lwlock.c\n> > index 98fa6035cc..4a5e05d5e4 100644\n> > --- a/src/backend/storage/lmgr/lwlock.c\n> > +++ b/src/backend/storage/lmgr/lwlock.c\n> > @@ -163,6 +163,13 @@ static const char *const BuiltinTrancheNames[] = {\n> > [LWTRANCHE_LAUNCHER_HASH] = \"LogicalRepLauncherHash\",\n> > [LWTRANCHE_DSM_REGISTRY_DSA] = \"DSMRegistryDSA\",\n> > [LWTRANCHE_DSM_REGISTRY_HASH] = \"DSMRegistryHash\",\n> > + [LWTRANCHE_COMMITTS_SLRU] = \"CommitTSSLRU\",\n> > + [LWTRANCHE_MULTIXACTOFFSET_SLRU] = \"MultixactOffsetSLRU\",\n> > + [LWTRANCHE_MULTIXACTMEMBER_SLRU] = \"MultixactMemberSLRU\",\n> > + [LWTRANCHE_NOTIFY_SLRU] = \"NotifySLRU\",\n> > + [LWTRANCHE_SERIAL_SLRU] = \"SerialSLRU\"\n> > + [LWTRANCHE_SUBTRANS_SLRU] = \"SubtransSLRU\",\n> > + [LWTRANCHE_XACT_SLRU] = \"XactSLRU\",\n> > };\n>\n> Eeek. Last minute changes ... Fixed here.\n\nThank you for working on this. There is one thing that I feel is\nproblematic. We have kept the allowed values for these GUCs to be in\nmultiple of SLRU_BANK_SIZE i.e. 16 and that's the reason the min\nvalues were changed to 16 but in this refactoring patch for some of\nthe buffers you have changed that to 8 so I think that's not good.\n\n+ {\n+ {\"multixact_offsets_buffers\", PGC_POSTMASTER, RESOURCES_MEM,\n+ gettext_noop(\"Sets the size of the dedicated buffer pool used for\nthe MultiXact offset cache.\"),\n+ NULL,\n+ GUC_UNIT_BLOCKS\n+ },\n+ &multixact_offsets_buffers,\n+ 16, 8, SLRU_MAX_ALLOWED_BUFFERS,\n+ check_multixact_offsets_buffers, NULL, NULL\n+ },\n\nOther than this patch looks good to me.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 29 Jan 2024 11:26:40 +0530", "msg_from": "Dilip Kumar <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SLRU optimization - configurable buffer pool and partitioning the\n SLRU lock" }, { "msg_contents": "On Fri, Jan 26, 2024 at 11:08 PM Alvaro Herrera <[email protected]> wrote:\n>\n> I've continued to review this and decided that I don't like the mess\n> this patch proposes in order to support pg_commit_ts's deletion of all\n> files. (Yes, I know that I was the one that proposed this idea. It's\n> still a bad one). I'd like to change that code by removing the limit\n> that we can only have 128 bank locks in a SLRU. To recap, the reason we\n> did this is that commit_ts sometimes wants to delete all files while\n> running (DeactivateCommitTs), and for this it needs to acquire all bank\n> locks. Since going above the MAX_SIMUL_LWLOCKS limit is disallowed, we\n> added the SLRU limit making multiple banks share lwlocks.\n>\n> I propose two alternative solutions:\n>\n> 1. The easiest is to have DeactivateCommitTs continue to hold\n> CommitTsLock until the end, including while SlruScanDirectory does its\n> thing. This sounds terrible, but considering that this code only runs\n> when the module is being deactivated, I don't think it's really all that\n> bad in practice. I mean, if you deactivate the commit_ts module and\n> then try to read commit timestamp data, you deserve to wait for a few\n> seconds just as a punishment for your stupidity.\n\nI think this idea looks reasonable. I agree that if we are trying to\nread commit_ts after deactivating it then it's fine to make it wait.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 29 Jan 2024 11:31:32 +0530", "msg_from": "Dilip Kumar <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SLRU optimization - configurable buffer pool and partitioning the\n SLRU lock" }, { "msg_contents": "On 2024-Jan-29, Dilip Kumar wrote:\n\n> Thank you for working on this. There is one thing that I feel is\n> problematic. We have kept the allowed values for these GUCs to be in\n> multiple of SLRU_BANK_SIZE i.e. 16 and that's the reason the min\n> values were changed to 16 but in this refactoring patch for some of\n> the buffers you have changed that to 8 so I think that's not good.\n\nOh, absolutely, you're right. Restored the minimum to 16.\n\nSo, here's the patchset as two pieces. 0001 converts\nSlruSharedData->latest_page_number to use atomics. I don't see any\nreason to mix this in with the rest of the patch, and though it likely\nwon't have any performance advantage by itself (since the lock\nacquisition is pretty much the same), it seems better to get it in ahead\nof the rest -- I think that simplifies matters for the second patch,\nwhich is large enough.\n\nSo, 0002 introduces the rest of the feature. I removed use of banklocks\nin a different amount as banks, and I made commit_ts use a longer\nlwlock acquisition at truncation time, rather than forcing all-lwlock\nacquisition.\n\nThe more I look at 0002, the more I notice that some comments need badly\nupdated, so please don't read too much into it yet. But I wanted to\npost it anyway for archives and cfbot purposes.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/", "msg_date": "Thu, 1 Feb 2024 00:31:34 +0100", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SLRU optimization - configurable buffer pool and partitioning\n the SLRU lock" }, { "msg_contents": "Hah:\n\npostgres -c lc_messages=C -c shared_buffers=$((512*17))\n\n2024-02-01 10:48:13.548 CET [1535379] FATAL: invalid value for parameter \"transaction_buffers\": 17\n2024-02-01 10:48:13.548 CET [1535379] DETAIL: \"transaction_buffers\" must be a multiple of 16\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n<Schwern> It does it in a really, really complicated way\n<crab> why does it need to be complicated?\n<Schwern> Because it's MakeMaker.\n\n\n", "msg_date": "Thu, 1 Feb 2024 10:49:29 +0100", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SLRU optimization - configurable buffer pool and partitioning\n the SLRU lock" }, { "msg_contents": "On Thu, Feb 1, 2024 at 3:19 PM Alvaro Herrera <[email protected]> wrote:\n>\n> Hah:\n>\n> postgres -c lc_messages=C -c shared_buffers=$((512*17))\n>\n> 2024-02-01 10:48:13.548 CET [1535379] FATAL: invalid value for parameter \"transaction_buffers\": 17\n> 2024-02-01 10:48:13.548 CET [1535379] DETAIL: \"transaction_buffers\" must be a multiple of 16\n\nMaybe we should resize it to the next multiple of the SLRU_BANK_SIZE\ninstead of giving an error?\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 1 Feb 2024 15:33:34 +0530", "msg_from": "Dilip Kumar <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SLRU optimization - configurable buffer pool and partitioning the\n SLRU lock" }, { "msg_contents": "On 2024-Feb-01, Dilip Kumar wrote:\n\n> On Thu, Feb 1, 2024 at 3:19 PM Alvaro Herrera <[email protected]> wrote:\n> >\n> > postgres -c lc_messages=C -c shared_buffers=$((512*17))\n> >\n> > 2024-02-01 10:48:13.548 CET [1535379] FATAL: invalid value for parameter \"transaction_buffers\": 17\n> > 2024-02-01 10:48:13.548 CET [1535379] DETAIL: \"transaction_buffers\" must be a multiple of 16\n> \n> Maybe we should resize it to the next multiple of the SLRU_BANK_SIZE\n> instead of giving an error?\n\nSince this is the auto-tuning feature, I think it should use the\nprevious multiple rather than the next, but yeah, something like that.\n\n\nWhile I have your attention -- if you could give a look to the 0001\npatch I posted, I would appreciate it.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"Los trabajadores menos efectivos son sistematicamente llevados al lugar\ndonde pueden hacer el menor daño posible: gerencia.\" (El principio Dilbert)\n\n\n", "msg_date": "Thu, 1 Feb 2024 11:14:00 +0100", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SLRU optimization - configurable buffer pool and partitioning\n the SLRU lock" }, { "msg_contents": "On Thu, Feb 1, 2024 at 3:44 PM Alvaro Herrera <[email protected]> wrote:\n>\n> On 2024-Feb-01, Dilip Kumar wrote:\n>\n> > On Thu, Feb 1, 2024 at 3:19 PM Alvaro Herrera <[email protected]> wrote:\n> > >\n> > > postgres -c lc_messages=C -c shared_buffers=$((512*17))\n> > >\n> > > 2024-02-01 10:48:13.548 CET [1535379] FATAL: invalid value for parameter \"transaction_buffers\": 17\n> > > 2024-02-01 10:48:13.548 CET [1535379] DETAIL: \"transaction_buffers\" must be a multiple of 16\n> >\n> > Maybe we should resize it to the next multiple of the SLRU_BANK_SIZE\n> > instead of giving an error?\n>\n> Since this is the auto-tuning feature, I think it should use the\n> previous multiple rather than the next, but yeah, something like that.\n\nOkay.\n>\n> While I have your attention -- if you could give a look to the 0001\n> patch I posted, I would appreciate it.\n>\n\nI will look into it. Thanks.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 1 Feb 2024 16:12:39 +0530", "msg_from": "Dilip Kumar <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SLRU optimization - configurable buffer pool and partitioning the\n SLRU lock" }, { "msg_contents": "On Thu, Feb 1, 2024 at 4:12 PM Dilip Kumar <[email protected]> wrote:\n>\n> On Thu, Feb 1, 2024 at 3:44 PM Alvaro Herrera <[email protected]> wrote:\n\n> Okay.\n> >\n> > While I have your attention -- if you could give a look to the 0001\n> > patch I posted, I would appreciate it.\n> >\n>\n> I will look into it. Thanks.\n\nSome quick observations,\n\nDo we need below two write barriers at the end of the function?\nbecause the next instruction is separated by the function boundary\n\n@@ -766,14 +766,11 @@ StartupCLOG(void)\n ..\n- XactCtl->shared->latest_page_number = pageno;\n-\n- LWLockRelease(XactSLRULock);\n+ pg_atomic_init_u64(&XactCtl->shared->latest_page_number, pageno);\n+ pg_write_barrier();\n }\n\n/*\n * Initialize member's idea of the latest page number.\n */\n pageno = MXOffsetToMemberPage(offset);\n- MultiXactMemberCtl->shared->latest_page_number = pageno;\n+ pg_atomic_init_u64(&MultiXactMemberCtl->shared->latest_page_number,\n+ pageno);\n+\n+ pg_write_barrier();\n }\n\nI am looking more into this from the concurrency point of view and\nwill update you soon.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 1 Feb 2024 16:34:06 +0530", "msg_from": "Dilip Kumar <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SLRU optimization - configurable buffer pool and partitioning the\n SLRU lock" }, { "msg_contents": "On Thu, Feb 1, 2024 at 4:34 PM Dilip Kumar <[email protected]> wrote:\n>\n> On Thu, Feb 1, 2024 at 4:12 PM Dilip Kumar <[email protected]> wrote:\n> >\n> > On Thu, Feb 1, 2024 at 3:44 PM Alvaro Herrera <[email protected]> wrote:\n>\n> > Okay.\n> > >\n> > > While I have your attention -- if you could give a look to the 0001\n> > > patch I posted, I would appreciate it.\n> > >\n> >\n> > I will look into it. Thanks.\n>\n> Some quick observations,\n>\n> Do we need below two write barriers at the end of the function?\n> because the next instruction is separated by the function boundary\n>\n> @@ -766,14 +766,11 @@ StartupCLOG(void)\n> ..\n> - XactCtl->shared->latest_page_number = pageno;\n> -\n> - LWLockRelease(XactSLRULock);\n> + pg_atomic_init_u64(&XactCtl->shared->latest_page_number, pageno);\n> + pg_write_barrier();\n> }\n>\n> /*\n> * Initialize member's idea of the latest page number.\n> */\n> pageno = MXOffsetToMemberPage(offset);\n> - MultiXactMemberCtl->shared->latest_page_number = pageno;\n> + pg_atomic_init_u64(&MultiXactMemberCtl->shared->latest_page_number,\n> + pageno);\n> +\n> + pg_write_barrier();\n> }\n>\n\nI have checked the patch and it looks fine to me other than the above\nquestion related to memory barrier usage one more question about the\nsame, basically below to instances 1 and 2 look similar but in 1 you\nare not using the memory write_barrier whereas in 2 you are using the\nwrite_barrier, why is it so? I mean why the reordering can not happen\nin 1 and it may happen in 2?\n\n1.\n+ pg_atomic_write_u64(&CommitTsCtl->shared->latest_page_number,\n+ trunc->pageno);\n\n SimpleLruTruncate(CommitTsCtl, trunc->pageno);\n\nvs\n2.\n\n - shared->latest_page_number = pageno;\n+ pg_atomic_write_u64(&shared->latest_page_number, pageno);\n+ pg_write_barrier();\n\n /* update the stats counter of zeroed pages */\n pgstat_count_slru_page_zeroed(shared->slru_stats_idx);\n\n\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 2 Feb 2024 11:36:45 +0530", "msg_from": "Dilip Kumar <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SLRU optimization - configurable buffer pool and partitioning the\n SLRU lock" }, { "msg_contents": "On 2024-Feb-02, Dilip Kumar wrote:\n\n> I have checked the patch and it looks fine to me other than the above\n> question related to memory barrier usage one more question about the\n> same, basically below to instances 1 and 2 look similar but in 1 you\n> are not using the memory write_barrier whereas in 2 you are using the\n> write_barrier, why is it so? I mean why the reordering can not happen\n> in 1 and it may happen in 2?\n\nWhat I was thinking is that there's a lwlock operation just below, which\nacts as a barrier. But I realized something more important: there are\nonly two places that matter, which are SlruSelectLRUPage and\nSimpleLruZeroPage. The others are all initialization code that run at a\npoint where there's no going to be any concurrency in SLRU access, so we\ndon't need barriers anyway. In SlruSelectLRUPage we definitely don't\nwant to evict the page that SimpleLruZeroPage has initialized, starting\nfrom the point where it returns that new page to its caller.\n\nBut if you consider the code of those two routines, you realize that the\nonly time an equality between latest_page_number and \"this_page_number\"\nis going to occur, is when both pages are in the same bank ... and both\nroutines are required to be holding the bank lock while they run, so in\npractice this is never a problem.\n\nWe need the atomic write and atomic read so that multiple processes\nprocessing pages in different banks can update latest_page_number\nsimultaneously. But the equality condition that we're looking for?\nit can never happen concurrently.\n\nIn other words, these barriers are fully useless.\n\n(We also have SimpleLruTruncate, but I think it's not as critical to\nhave a barrier there anyhow: accessing a slightly outdated page number\ncould only be a problem if a bug elsewhere causes us to try to truncate\nin the current page. I think we only have this code there because we\ndid have such bugs in the past, but IIUC this shouldn't happen anymore.)\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Sun, 4 Feb 2024 14:38:38 +0100", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SLRU optimization - configurable buffer pool and partitioning\n the SLRU lock" }, { "msg_contents": "In short, I propose the attached.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/", "msg_date": "Sun, 4 Feb 2024 15:06:33 +0100", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SLRU optimization - configurable buffer pool and partitioning\n the SLRU lock" }, { "msg_contents": "Sorry, brown paper bag bug there. Here's the correct one.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"I can't go to a restaurant and order food because I keep looking at the\nfonts on the menu. Five minutes later I realize that it's also talking\nabout food\" (Donald Knuth)", "msg_date": "Sun, 4 Feb 2024 15:14:11 +0100", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SLRU optimization - configurable buffer pool and partitioning\n the SLRU lock" }, { "msg_contents": "\n\n> On 4 Feb 2024, at 18:38, Alvaro Herrera <[email protected]> wrote:\n> \n> In other words, these barriers are fully useless.\n\n+1. I've tried to understand ideas behind barriers, but latest_page_number is heuristics that does not need any guarantees at all. It's also is used in safety check which can fire only when everything is already broken beyond any repair.. (Though using atomic access seems a good idea anyway)\n\nThis patch uses wording \"banks\" in comments before banks start to exist. But as far as I understand, it is expected to be committed before \"banks\" patch.\n\nBesides this patch looks good to me.\n\n\nBest regards, Andrey Borodin.\n\n", "msg_date": "Sun, 4 Feb 2024 22:40:17 +0500", "msg_from": "\"Andrey M. Borodin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SLRU optimization - configurable buffer pool and partitioning the\n SLRU lock" }, { "msg_contents": "On Sun, Feb 4, 2024 at 7:10 PM Alvaro Herrera <[email protected]> wrote:\n>\n> On 2024-Feb-02, Dilip Kumar wrote:\n>\n> > I have checked the patch and it looks fine to me other than the above\n> > question related to memory barrier usage one more question about the\n> > same, basically below to instances 1 and 2 look similar but in 1 you\n> > are not using the memory write_barrier whereas in 2 you are using the\n> > write_barrier, why is it so? I mean why the reordering can not happen\n> > in 1 and it may happen in 2?\n>\n> What I was thinking is that there's a lwlock operation just below, which\n> acts as a barrier. But I realized something more important: there are\n> only two places that matter, which are SlruSelectLRUPage and\n> SimpleLruZeroPage. The others are all initialization code that run at a\n> point where there's no going to be any concurrency in SLRU access, so we\n> don't need barriers anyway. In SlruSelectLRUPage we definitely don't\n> want to evict the page that SimpleLruZeroPage has initialized, starting\n> from the point where it returns that new page to its caller.\n> But if you consider the code of those two routines, you realize that the\n> only time an equality between latest_page_number and \"this_page_number\"\n> is going to occur, is when both pages are in the same bank ... and both\n> routines are required to be holding the bank lock while they run, so in\n> practice this is never a problem.\n\nRight, in fact when I first converted this 'latest_page_number' to an\natomic the thinking was to protect it from concurrently setting the\nvalues in SimpleLruZeroPage() and also concurrently reading in\nSlruSelectLRUPage() should not read the corrupted value. All other\nusages were during the initialization phase where we do not need any\nprotection.\n\n>\n> We need the atomic write and atomic read so that multiple processes\n> processing pages in different banks can update latest_page_number\n> simultaneously. But the equality condition that we're looking for?\n> it can never happen concurrently.\n\nYeah, that's right, after you told I also realized that the case is\nprotected by the bank lock. Earlier I didn't think about this case.\n\n> In other words, these barriers are fully useless.\n>\n> (We also have SimpleLruTruncate, but I think it's not as critical to\n> have a barrier there anyhow: accessing a slightly outdated page number\n> could only be a problem if a bug elsewhere causes us to try to truncate\n> in the current page. I think we only have this code there because we\n> did have such bugs in the past, but IIUC this shouldn't happen anymore.)\n\n+1, I agree with this theory in general. But the below comment in\nSimpleLruTrucate in your v3 patch doesn't seem correct, because here\nwe are checking if the latest_page_number is smaller than the cutoff\nif so we log it as wraparound and skip the whole thing and that is\nfine even if we are reading with atomic variable and slightly outdated\nvalue should not be a problem but the comment claim that this safe\nbecause we have the same bank lock as SimpleLruZeroPage(), but that's\nnot true here we will be acquiring different bank locks one by one\nbased on which slotno we are checking. Am I missing something?\n\n\n+ * An important safety check: the current endpoint page must not be\n+ * eligible for removal. Like SlruSelectLRUPage, we don't need a\n+ * memory barrier here because for the affected page to be relevant,\n+ * we'd have to have the same bank lock as SimpleLruZeroPage.\n */\n- if (ctl->PagePrecedes(shared->latest_page_number, cutoffPage))\n+ if (ctl->PagePrecedes(pg_atomic_read_u64(&shared->latest_page_number),\n+ cutoffPage))\n\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 5 Feb 2024 13:17:11 +0530", "msg_from": "Dilip Kumar <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SLRU optimization - configurable buffer pool and partitioning the\n SLRU lock" }, { "msg_contents": "On 2024-Feb-04, Andrey M. Borodin wrote:\n\n> This patch uses wording \"banks\" in comments before banks start to\n> exist. But as far as I understand, it is expected to be committed\n> before \"banks\" patch.\n\nTrue -- changed that to use ControlLock.\n\n> Besides this patch looks good to me.\n\nMany thanks for reviewing.\n\nOn 2024-Feb-05, Dilip Kumar wrote:\n\n> > (We also have SimpleLruTruncate, but I think it's not as critical to\n> > have a barrier there anyhow: accessing a slightly outdated page number\n> > could only be a problem if a bug elsewhere causes us to try to truncate\n> > in the current page. I think we only have this code there because we\n> > did have such bugs in the past, but IIUC this shouldn't happen anymore.)\n> \n> +1, I agree with this theory in general. But the below comment in\n> SimpleLruTrucate in your v3 patch doesn't seem correct, because here\n> we are checking if the latest_page_number is smaller than the cutoff\n> if so we log it as wraparound and skip the whole thing and that is\n> fine even if we are reading with atomic variable and slightly outdated\n> value should not be a problem but the comment claim that this safe\n> because we have the same bank lock as SimpleLruZeroPage(), but that's\n> not true here we will be acquiring different bank locks one by one\n> based on which slotno we are checking. Am I missing something?\n\nI think you're correct. I reworded this comment, so now it says this:\n\n /*\n * An important safety check: the current endpoint page must not be\n * eligible for removal. This check is just a backstop against wraparound\n * bugs elsewhere in SLRU handling, so we don't care if we read a slightly\n * outdated value; therefore we don't add a memory barrier.\n */\n\nPushed with those changes. Thank you!\n\nNow I'll go rebase the rest of the patch on top.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"Having your biases confirmed independently is how scientific progress is\nmade, and hence made our great society what it is today\" (Mary Gardiner)\n\n\n", "msg_date": "Tue, 6 Feb 2024 11:53:12 +0100", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SLRU optimization - configurable buffer pool and partitioning\n the SLRU lock" }, { "msg_contents": "On Tue, Feb 6, 2024 at 4:23 PM Alvaro Herrera <[email protected]> wrote:\n>\n> > > (We also have SimpleLruTruncate, but I think it's not as critical to\n> > > have a barrier there anyhow: accessing a slightly outdated page number\n> > > could only be a problem if a bug elsewhere causes us to try to truncate\n> > > in the current page. I think we only have this code there because we\n> > > did have such bugs in the past, but IIUC this shouldn't happen anymore.)\n> >\n> > +1, I agree with this theory in general. But the below comment in\n> > SimpleLruTrucate in your v3 patch doesn't seem correct, because here\n> > we are checking if the latest_page_number is smaller than the cutoff\n> > if so we log it as wraparound and skip the whole thing and that is\n> > fine even if we are reading with atomic variable and slightly outdated\n> > value should not be a problem but the comment claim that this safe\n> > because we have the same bank lock as SimpleLruZeroPage(), but that's\n> > not true here we will be acquiring different bank locks one by one\n> > based on which slotno we are checking. Am I missing something?\n>\n> I think you're correct. I reworded this comment, so now it says this:\n>\n> /*\n> * An important safety check: the current endpoint page must not be\n> * eligible for removal. This check is just a backstop against wraparound\n> * bugs elsewhere in SLRU handling, so we don't care if we read a slightly\n> * outdated value; therefore we don't add a memory barrier.\n> */\n>\n> Pushed with those changes. Thank you!\n\nYeah, this looks perfect, thanks.\n\n> Now I'll go rebase the rest of the patch on top.\n\nOkay, I will review and test after that.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 6 Feb 2024 17:05:16 +0530", "msg_from": "Dilip Kumar <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SLRU optimization - configurable buffer pool and partitioning the\n SLRU lock" }, { "msg_contents": "Here's the rest of it rebased on top of current master. I think it\nmakes sense to have this as one individual commit.\n\nI made CLOGShmemBuffers, CommitTsShmemBuffers and SUBTRANSShmemBuffers\ncompute a number that's multiple of SLRU_BANK_SIZE. But it's a crock,\nbecause we don't have that macro at that point, so I just used constant\n16. Obviously need a better solution for this.\n\nI also changed the location of bank_mask in SlruCtlData for better\npacking, as advised by pahole; and renamed SLRU_SLOTNO_GET_BANKLOCKNO()\nto SlotGetBankNumber().\n\nSome very critical comments still need to be updated to the new design,\nparticularly anything that mentions \"control lock\"; but also the overall\nmodel needs to be explained in some central location, rather than\nincongruently some pieces here and other pieces there. I'll see about\nthis later. But at least this is code you should be able to play with.\n\n\nI've been wondering whether we should add a \"slru\" to the name of the\nGUCs:\n\ncommit_timestamp_slru_buffers\ntransaction_slru_buffers\netc\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"Aprender sin pensar es inútil; pensar sin aprender, peligroso\" (Confucio)", "msg_date": "Tue, 6 Feb 2024 16:25:10 +0100", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SLRU optimization - configurable buffer pool and partitioning\n the SLRU lock" }, { "msg_contents": "On Tue, Feb 6, 2024 at 8:55 PM Alvaro Herrera <[email protected]> wrote:\n>\n> Here's the rest of it rebased on top of current master. I think it\n> makes sense to have this as one individual commit.\n>\n> I made CLOGShmemBuffers, CommitTsShmemBuffers and SUBTRANSShmemBuffers\n> compute a number that's multiple of SLRU_BANK_SIZE. But it's a crock,\n> because we don't have that macro at that point, so I just used constant\n> 16. Obviously need a better solution for this.\n\nIf we define SLRU_BANK_SIZE in slur.h then we can use it here right,\nbecause these files are anyway include slur.h so.\n\n>\n> I also changed the location of bank_mask in SlruCtlData for better\n> packing, as advised by pahole; and renamed SLRU_SLOTNO_GET_BANKLOCKNO()\n> to SlotGetBankNumber().\n\nOkay.\n\n> Some very critical comments still need to be updated to the new design,\n> particularly anything that mentions \"control lock\"; but also the overall\n> model needs to be explained in some central location, rather than\n> incongruently some pieces here and other pieces there. I'll see about\n> this later. But at least this is code you should be able to play with.\n\nOkay, I will review and test this\n\n> I've been wondering whether we should add a \"slru\" to the name of the\n> GUCs:\n>\n> commit_timestamp_slru_buffers\n> transaction_slru_buffers\n> etc\n\nI am not sure we are exposing anything related to SLRU to the user, I\nmean transaction_buffers should make sense for the user that it stores\ntransaction-related data in some buffers pool but whether that buffer\npool is called SLRU or not doesn't matter much to the user IMHO.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 7 Feb 2024 11:28:58 +0530", "msg_from": "Dilip Kumar <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SLRU optimization - configurable buffer pool and partitioning the\n SLRU lock" }, { "msg_contents": "\n\n> On 7 Feb 2024, at 10:58, Dilip Kumar <[email protected]> wrote:\n> \n>> commit_timestamp_slru_buffers\n>> transaction_slru_buffers\n>> etc\n> \n> I am not sure we are exposing anything related to SLRU to the user, \n\nI think we already tell something about SLRU to the user. I’d rather consider if “transaction_slru_buffers\" is easier to understand than “transaction_buffers” ..\nIMO transaction_buffers is clearer. But I do not have strong opinion.\n\n> I\n> mean transaction_buffers should make sense for the user that it stores\n> transaction-related data in some buffers pool but whether that buffer\n> pool is called SLRU or not doesn't matter much to the user IMHO.\n+1\n\n\nBest regards, Andrey Borodin.\n\n", "msg_date": "Wed, 7 Feb 2024 11:45:53 +0500", "msg_from": "\"Andrey M. Borodin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SLRU optimization - configurable buffer pool and partitioning the\n SLRU lock" }, { "msg_contents": "On 2024-Feb-07, Dilip Kumar wrote:\n\n> On Tue, Feb 6, 2024 at 8:55 PM Alvaro Herrera <[email protected]> wrote:\n> >\n> > I made CLOGShmemBuffers, CommitTsShmemBuffers and SUBTRANSShmemBuffers\n> > compute a number that's multiple of SLRU_BANK_SIZE. But it's a crock,\n> > because we don't have that macro at that point, so I just used constant\n> > 16. Obviously need a better solution for this.\n> \n> If we define SLRU_BANK_SIZE in slur.h then we can use it here right,\n> because these files are anyway include slur.h so.\n\nSure, but is that really what we want?\n\n> > I've been wondering whether we should add a \"slru\" to the name of the\n> > GUCs:\n> >\n> > commit_timestamp_slru_buffers\n> > transaction_slru_buffers\n> > etc\n> \n> I am not sure we are exposing anything related to SLRU to the user,\n\nWe do -- we have pg_stat_slru already.\n\n> I mean transaction_buffers should make sense for the user that it\n> stores transaction-related data in some buffers pool but whether that\n> buffer pool is called SLRU or not doesn't matter much to the user\n> IMHO.\n\nYeah, that's exactly what my initial argument was for naming these this\nway. But since the term slru already escaped into the wild via the\npg_stat_slru view, perhaps it helps users make the connection between\nthese things. Alternatively, we can cross-reference each term from the\nother's documentation and call it a day.\n\nAnother painful point is that pg_stat_slru uses internal names in the\ndata it outputs, which obviously do not match the new GUCs.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"Uno puede defenderse de los ataques; contra los elogios se esta indefenso\"\n\n\n", "msg_date": "Wed, 7 Feb 2024 11:19:29 +0100", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SLRU optimization - configurable buffer pool and partitioning\n the SLRU lock" }, { "msg_contents": "On Wed, Feb 7, 2024 at 3:49 PM Alvaro Herrera <[email protected]> wrote:\n>\n> On 2024-Feb-07, Dilip Kumar wrote:\n>\n> > On Tue, Feb 6, 2024 at 8:55 PM Alvaro Herrera <[email protected]> wrote:\n> > >\n> > > I made CLOGShmemBuffers, CommitTsShmemBuffers and SUBTRANSShmemBuffers\n> > > compute a number that's multiple of SLRU_BANK_SIZE. But it's a crock,\n> > > because we don't have that macro at that point, so I just used constant\n> > > 16. Obviously need a better solution for this.\n> >\n> > If we define SLRU_BANK_SIZE in slur.h then we can use it here right,\n> > because these files are anyway include slur.h so.\n>\n> Sure, but is that really what we want?\n\nSo your question is do we want these buffers to be in multiple of\nSLRU_BANK_SIZE? Maybe we can have the last bank to be partial, I\ndon't think it should create any problem logically. I mean we can\nlook again in the patch to see if we have made any such assumptions\nbut that should be fairly easy to fix, then maybe if we are going in\nthis way we should get rid of the check_slru_buffers() function as\nwell.\n\n> > > I've been wondering whether we should add a \"slru\" to the name of the\n> > > GUCs:\n> > >\n> > > commit_timestamp_slru_buffers\n> > > transaction_slru_buffers\n> > > etc\n> >\n> > I am not sure we are exposing anything related to SLRU to the user,\n>\n> We do -- we have pg_stat_slru already.\n>\n> > I mean transaction_buffers should make sense for the user that it\n> > stores transaction-related data in some buffers pool but whether that\n> > buffer pool is called SLRU or not doesn't matter much to the user\n> > IMHO.\n>\n> Yeah, that's exactly what my initial argument was for naming these this\n> way. But since the term slru already escaped into the wild via the\n> pg_stat_slru view, perhaps it helps users make the connection between\n> these things. Alternatively, we can cross-reference each term from the\n> other's documentation and call it a day.\n\nYeah, that's true I forgot this point about the pg_stat_slru, from\nthis pov if the configuration has the name slru they would be able to\nmake a better connection with the configured value, and the stats in\nthis view based on that they can take call if they need to somehow\nincrease the size of these slru buffers.\n\n> Another painful point is that pg_stat_slru uses internal names in the\n> data it outputs, which obviously do not match the new GUCs.\n\nYeah, that's true, but I think this could be explained somewhere not\nsure what is the right place for this.\n\n\nFYI, I have also repeated all the performance tests I performed in my\nfirst email[1], and I am seeing a similar gain.\n\nSome comments on v18 in my first pass of the review.\n\n1.\n@@ -665,7 +765,7 @@ TransactionIdGetStatus(TransactionId xid, XLogRecPtr *lsn)\n lsnindex = GetLSNIndex(slotno, xid);\n *lsn = XactCtl->shared->group_lsn[lsnindex];\n\n- LWLockRelease(XactSLRULock);\n+ LWLockRelease(SimpleLruGetBankLock(XactCtl, pageno));\n\nMaybe here we can add an assert before releasing the lock for a safety check\n\nAssert(LWLockHeldByMe(SimpleLruGetBankLock(XactCtl, pageno)));\n\n2.\n+ *\n+ * XXX could we make the LSNs to be bank-based?\n */\n XLogRecPtr *group_lsn;\n\nIMHO, the flush still happens at the page level so up to which LSN\nshould be flush before flushing the particular clog page should also\nbe maintained at the page level.\n\n[1] https://www.postgresql.org/message-id/CAFiTN-vzDvNz%3DExGXz6gdyjtzGixKSqs0mKHMmaQ8sOSEFZ33A%40mail.gmail.com\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 7 Feb 2024 16:07:07 +0530", "msg_from": "Dilip Kumar <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SLRU optimization - configurable buffer pool and partitioning the\n SLRU lock" }, { "msg_contents": "On 2024-Feb-07, Dilip Kumar wrote:\n\n> On Wed, Feb 7, 2024 at 3:49 PM Alvaro Herrera <[email protected]> wrote:\n\n> > Sure, but is that really what we want?\n> \n> So your question is do we want these buffers to be in multiple of\n> SLRU_BANK_SIZE? Maybe we can have the last bank to be partial, I\n> don't think it should create any problem logically. I mean we can\n> look again in the patch to see if we have made any such assumptions\n> but that should be fairly easy to fix, then maybe if we are going in\n> this way we should get rid of the check_slru_buffers() function as\n> well.\n\nNot really, I just don't think the macro should be in slru.h.\n\nAnother thing I've been thinking is that perhaps it would be useful to\nmake the banks smaller, when the total number of buffers is small. For\nexample, if you have 16 or 32 buffers, it's not really clear to me that\nit makes sense to have just 1 bank or 2 banks. It might be more\nsensible to have 4 banks with 4 or 8 buffers instead. That should make\nthe algorithm scale down as well as up ...\n\nI haven't done either of those things in the attached v19 version. I\ndid go over the comments once again and rewrote the parts I was unhappy\nwith, including some existing ones. I think it's OK now from that point\nof view ... at some point I thought about creating a separate README,\nbut in the end I thought it not necessary.\n\nI did add a bunch of Assert()s to make sure the locks that are supposed\nto be held are actually held. This led me to testing the page status to\nbe not EMPTY during SimpleLruWriteAll() before calling\nSlruInternalWritePage(), because the assert was firing. The previous\ncode is not really *buggy*, but to me it's weird to call WritePage() on\na slot with no contents.\n\nAnother change was in TransactionGroupUpdateXidStatus: the original code\nhad the leader doing pg_atomic_read_u32(&procglobal->clogGroupFirst) to\nknow which bank to lock. I changed it to simply be the page used by the\nleader process; this doesn't need an atomic read, and should be the same\npage anyway. (If it isn't, it's no big deal). But what's more: even if\nwe do read ->clogGroupFirst at that point, there's no guarantee that\nthis is going to be exactly for the same process that ends up being the\nfirst in the list, because since we have not set it to INVALID by the\ntime we grab the bank lock, it is quite possible for more processes to\nadd themselves to the list.\n\nI realized all this while rewriting the comments in a way that would let\nme understand what was going on ... so IMO the effort was worthwhile.\n\nAnyway, what I send now should be pretty much final, modulo the\nchange to the check_slru_buffers routines and documentation additions to\nmatch pg_stat_slru to the new GUC names.\n\n> > Another painful point is that pg_stat_slru uses internal names in the\n> > data it outputs, which obviously do not match the new GUCs.\n> \n> Yeah, that's true, but I think this could be explained somewhere not\n> sure what is the right place for this.\n\nOr we can change those names in the view.\n\n> FYI, I have also repeated all the performance tests I performed in my\n> first email[1], and I am seeing a similar gain.\n\nExcellent, thanks for doing that.\n\n> Some comments on v18 in my first pass of the review.\n> \n> 1.\n> @@ -665,7 +765,7 @@ TransactionIdGetStatus(TransactionId xid, XLogRecPtr *lsn)\n> lsnindex = GetLSNIndex(slotno, xid);\n> *lsn = XactCtl->shared->group_lsn[lsnindex];\n> \n> - LWLockRelease(XactSLRULock);\n> + LWLockRelease(SimpleLruGetBankLock(XactCtl, pageno));\n> \n> Maybe here we can add an assert before releasing the lock for a safety check\n> \n> Assert(LWLockHeldByMe(SimpleLruGetBankLock(XactCtl, pageno)));\n\nHmm, I think this would just throw a warning or error \"you don't hold\nsuch lwlock\", so it doesn't seem necessary.\n\n> 2.\n> + *\n> + * XXX could we make the LSNs to be bank-based?\n> */\n> XLogRecPtr *group_lsn;\n> \n> IMHO, the flush still happens at the page level so up to which LSN\n> should be flush before flushing the particular clog page should also\n> be maintained at the page level.\n\nYeah, this was a misguided thought, nevermind.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n<Schwern> It does it in a really, really complicated way\n<crab> why does it need to be complicated?\n<Schwern> Because it's MakeMaker.", "msg_date": "Thu, 22 Feb 2024 19:43:00 +0100", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SLRU optimization - configurable buffer pool and partitioning\n the SLRU lock" }, { "msg_contents": "On Fri, Feb 23, 2024 at 1:48 AM Alvaro Herrera <[email protected]> wrote:\n>\n> On 2024-Feb-07, Dilip Kumar wrote:\n>\n> > On Wed, Feb 7, 2024 at 3:49 PM Alvaro Herrera <[email protected]> wrote:\n>\n> > > Sure, but is that really what we want?\n> >\n> > So your question is do we want these buffers to be in multiple of\n> > SLRU_BANK_SIZE? Maybe we can have the last bank to be partial, I\n> > don't think it should create any problem logically. I mean we can\n> > look again in the patch to see if we have made any such assumptions\n> > but that should be fairly easy to fix, then maybe if we are going in\n> > this way we should get rid of the check_slru_buffers() function as\n> > well.\n>\n> Not really, I just don't think the macro should be in slru.h.\n\nOkay\n\n> Another thing I've been thinking is that perhaps it would be useful to\n> make the banks smaller, when the total number of buffers is small. For\n> example, if you have 16 or 32 buffers, it's not really clear to me that\n> it makes sense to have just 1 bank or 2 banks. It might be more\n> sensible to have 4 banks with 4 or 8 buffers instead. That should make\n> the algorithm scale down as well as up ...\n\nIt might be helpful to have small-size banks when SLRU buffers are set\nto a very low value and we are only accessing a couple of pages at a\ntime (i.e. no buffer replacement) because in such cases most of the\ncontention will be on SLRU Bank lock. Although I am not sure how\npractical such a use case would be, I mean if someone is using\nmulti-xact very heavily or creating frequent subtransaction overflow\nthen wouldn't they should set this buffer limit to some big enough\nvalue? By doing this we would lose some simplicity of the patch I\nmean instead of using the simple macro i.e. SLRU_BANK_SIZE we would\nneed to compute this and store it in SlruShared. Maybe that's not that\nbad.\n\n>\n> I haven't done either of those things in the attached v19 version. I\n> did go over the comments once again and rewrote the parts I was unhappy\n> with, including some existing ones. I think it's OK now from that point\n> of view ... at some point I thought about creating a separate README,\n> but in the end I thought it not necessary.\n\nThanks, I will review those changes.\n\n> I did add a bunch of Assert()s to make sure the locks that are supposed\n> to be held are actually held. This led me to testing the page status to\n> be not EMPTY during SimpleLruWriteAll() before calling\n> SlruInternalWritePage(), because the assert was firing. The previous\n> code is not really *buggy*, but to me it's weird to call WritePage() on\n> a slot with no contents.\n\nOkay, I mean internally SlruInternalWritePage() will flush only if\nthe status is SLRU_PAGE_VALID, but it is better the way you have done.\n\n> Another change was in TransactionGroupUpdateXidStatus: the original code\n> had the leader doing pg_atomic_read_u32(&procglobal->clogGroupFirst) to\n> know which bank to lock. I changed it to simply be the page used by the\n> leader process; this doesn't need an atomic read, and should be the same\n> page anyway. (If it isn't, it's no big deal). But what's more: even if\n> we do read ->clogGroupFirst at that point, there's no guarantee that\n> this is going to be exactly for the same process that ends up being the\n> first in the list, because since we have not set it to INVALID by the\n> time we grab the bank lock, it is quite possible for more processes to\n> add themselves to the list.\n\nYeah, this looks better\n\n> I realized all this while rewriting the comments in a way that would let\n> me understand what was going on ... so IMO the effort was worthwhile.\n\n+1\n\nI will review and do some more testing early next week and share my feedback.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 23 Feb 2024 13:06:26 +0530", "msg_from": "Dilip Kumar <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SLRU optimization - configurable buffer pool and partitioning the\n SLRU lock" }, { "msg_contents": "\n\n> On 23 Feb 2024, at 12:36, Dilip Kumar <[email protected]> wrote:\n> \n>> Another thing I've been thinking is that perhaps it would be useful to\n>> make the banks smaller, when the total number of buffers is small. For\n>> example, if you have 16 or 32 buffers, it's not really clear to me that\n>> it makes sense to have just 1 bank or 2 banks. It might be more\n>> sensible to have 4 banks with 4 or 8 buffers instead. That should make\n>> the algorithm scale down as well as up ...\n> \n> It might be helpful to have small-size banks when SLRU buffers are set\n> to a very low value and we are only accessing a couple of pages at a\n> time (i.e. no buffer replacement) because in such cases most of the\n> contention will be on SLRU Bank lock. Although I am not sure how\n> practical such a use case would be, I mean if someone is using\n> multi-xact very heavily or creating frequent subtransaction overflow\n> then wouldn't they should set this buffer limit to some big enough\n> value? By doing this we would lose some simplicity of the patch I\n> mean instead of using the simple macro i.e. SLRU_BANK_SIZE we would\n> need to compute this and store it in SlruShared. Maybe that's not that\n> bad.\n\nI'm sure anyone with multiple CPUs should increase, not decrease previous default of 128 buffers (with 512MB shared buffers). Having more CPUs (the only way to benefit from more locks) implies bigger transaction buffers.\nIMO making bank size variable adds unneeded computation overhead, bank search loops should be unrollable by compiler etc.\nOriginally there was a patch set step, that packed bank's page addresses together in one array. It was done to make bank search a SIMD instruction.\n\n\nBest regards, Andrey Borodin.\n\n", "msg_date": "Fri, 23 Feb 2024 16:02:23 +0500", "msg_from": "\"Andrey M. Borodin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SLRU optimization - configurable buffer pool and partitioning the\n SLRU lock" }, { "msg_contents": "On Fri, Feb 23, 2024 at 1:06 PM Dilip Kumar <[email protected]> wrote:\n>\n> On Fri, Feb 23, 2024 at 1:48 AM Alvaro Herrera <[email protected]> wrote:\n> >\n> > On 2024-Feb-07, Dilip Kumar wrote:\n> >\n> > > On Wed, Feb 7, 2024 at 3:49 PM Alvaro Herrera <[email protected]> wrote:\n> >\n> > > > Sure, but is that really what we want?\n> > >\n> > > So your question is do we want these buffers to be in multiple of\n> > > SLRU_BANK_SIZE? Maybe we can have the last bank to be partial, I\n> > > don't think it should create any problem logically. I mean we can\n> > > look again in the patch to see if we have made any such assumptions\n> > > but that should be fairly easy to fix, then maybe if we are going in\n> > > this way we should get rid of the check_slru_buffers() function as\n> > > well.\n> >\n> > Not really, I just don't think the macro should be in slru.h.\n>\n> Okay\n>\n> > Another thing I've been thinking is that perhaps it would be useful to\n> > make the banks smaller, when the total number of buffers is small. For\n> > example, if you have 16 or 32 buffers, it's not really clear to me that\n> > it makes sense to have just 1 bank or 2 banks. It might be more\n> > sensible to have 4 banks with 4 or 8 buffers instead. That should make\n> > the algorithm scale down as well as up ...\n>\n> It might be helpful to have small-size banks when SLRU buffers are set\n> to a very low value and we are only accessing a couple of pages at a\n> time (i.e. no buffer replacement) because in such cases most of the\n> contention will be on SLRU Bank lock. Although I am not sure how\n> practical such a use case would be, I mean if someone is using\n> multi-xact very heavily or creating frequent subtransaction overflow\n> then wouldn't they should set this buffer limit to some big enough\n> value? By doing this we would lose some simplicity of the patch I\n> mean instead of using the simple macro i.e. SLRU_BANK_SIZE we would\n> need to compute this and store it in SlruShared. Maybe that's not that\n> bad.\n>\n> >\n> > I haven't done either of those things in the attached v19 version. I\n> > did go over the comments once again and rewrote the parts I was unhappy\n> > with, including some existing ones. I think it's OK now from that point\n> > of view ... at some point I thought about creating a separate README,\n> > but in the end I thought it not necessary.\n>\n> Thanks, I will review those changes.\n\nFew other things I noticed while reading through the patch, I haven't\nread it completely yet but this is what I got for now.\n\n1.\n+ * If no process is already in the list, we're the leader; our first step\n+ * is to \"close out the group\" by resetting the list pointer from\n+ * ProcGlobal->clogGroupFirst (this lets other processes set up other\n+ * groups later); then we lock the SLRU bank corresponding to our group's\n+ * page, do the SLRU updates, release the SLRU bank lock, and wake up the\n+ * sleeping processes.\n\nI think here we are saying that we \"close out the group\" before\nacquiring the SLRU lock but that's not true. We keep the group open\nuntil we gets the lock so that we can get maximum members in while we\nare anyway waiting for the lock.\n\n2.\n static void\n TransactionIdSetCommitTs(TransactionId xid, TimestampTz ts,\n RepOriginId nodeid, int slotno)\n {\n- Assert(TransactionIdIsNormal(xid));\n+ if (!TransactionIdIsNormal(xid))\n+ return;\n+\n+ entryno = TransactionIdToCTsEntry(xid);\n\nI do not understand why we need this change.\n\n\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 23 Feb 2024 16:34:36 +0530", "msg_from": "Dilip Kumar <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SLRU optimization - configurable buffer pool and partitioning the\n SLRU lock" }, { "msg_contents": "On 2024-Feb-23, Andrey M. Borodin wrote:\n\n> I'm sure anyone with multiple CPUs should increase, not decrease\n> previous default of 128 buffers (with 512MB shared buffers). Having\n> more CPUs (the only way to benefit from more locks) implies bigger\n> transaction buffers.\n\nSure.\n\n> IMO making bank size variable adds unneeded computation overhead, bank\n> search loops should be unrollable by compiler etc.\n\nMakes sense.\n\n> Originally there was a patch set step, that packed bank's page\n> addresses together in one array. It was done to make bank search a\n> SIMD instruction.\n\nAnts Aasma had proposed a rework of the LRU code for better performance.\nHe told me it depended on bank size being 16, so you're right that it's\nprobably not a good idea to make it variable.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Fri, 23 Feb 2024 12:48:11 +0100", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SLRU optimization - configurable buffer pool and partitioning\n the SLRU lock" }, { "msg_contents": "On 2024-Feb-23, Dilip Kumar wrote:\n\n> 1.\n> + * If no process is already in the list, we're the leader; our first step\n> + * is to \"close out the group\" by resetting the list pointer from\n> + * ProcGlobal->clogGroupFirst (this lets other processes set up other\n> + * groups later); then we lock the SLRU bank corresponding to our group's\n> + * page, do the SLRU updates, release the SLRU bank lock, and wake up the\n> + * sleeping processes.\n> \n> I think here we are saying that we \"close out the group\" before\n> acquiring the SLRU lock but that's not true. We keep the group open\n> until we gets the lock so that we can get maximum members in while we\n> are anyway waiting for the lock.\n\nAbsolutely right. Reworded that.\n\n> 2.\n> static void\n> TransactionIdSetCommitTs(TransactionId xid, TimestampTz ts,\n> RepOriginId nodeid, int slotno)\n> {\n> - Assert(TransactionIdIsNormal(xid));\n> + if (!TransactionIdIsNormal(xid))\n> + return;\n> +\n> + entryno = TransactionIdToCTsEntry(xid);\n> \n> I do not understand why we need this change.\n\nAh yeah, I was bothered by the fact that if you pass Xid values earlier\nthan NormalXid to this function, we'd reply with some nonsensical values\ninstead of throwing an error. But you're right that it doesn't belong\nin this patch, so I removed that.\n\nHere's a version with these fixes, where I also added some text to the\npg_stat_slru documentation:\n\n+ <para>\n+ For each <literal>SLRU</literal> area that's part of the core server,\n+ there is a configuration parameter that controls its size, with the suffix\n+ <literal>_buffers</literal> appended. For historical\n+ reasons, the names are not exact matches, but <literal>Xact</literal>\n+ corresponds to <literal>transaction_buffers</literal> and the rest should\n+ be obvious.\n+ <!-- Should we edit pgstat_internal.h::slru_names so that the \"name\" matches\n+ the GUC name?? -->\n+ </para>\n\nI think I would like to suggest renaming the GUCs to have the _slru_ bit\nin the middle:\n\n+# - SLRU Buffers (change requires restart) -\n+\n+#commit_timestamp_slru_buffers = 0 # memory for pg_commit_ts (0 = auto)\n+#multixact_offsets_slru_buffers = 16 # memory for pg_multixact/offsets\n+#multixact_members_slru_buffers = 32 # memory for pg_multixact/members\n+#notify_slru_buffers = 16 # memory for pg_notify\n+#serializable_slru_buffers = 32 # memory for pg_serial\n+#subtransaction_slru_buffers = 0 # memory for pg_subtrans (0 = auto)\n+#transaction_slru_buffers = 0 # memory for pg_xact (0 = auto)\n\nand the pgstat_internal.h table:\n \nstatic const char *const slru_names[] = {\n\t\"commit_timestamp\",\n\t\"multixact_members\",\n\t\"multixact_offsets\",\n\t\"notify\",\n\t\"serializable\",\n\t\"subtransaction\",\n\t\"transaction\",\n\t\"other\"\t\t\t\t\t\t/* has to be last */\n};\n\nThis way they match perfectly.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"All rings of power are equal,\nBut some rings of power are more equal than others.\"\n (George Orwell's The Lord of the Rings)", "msg_date": "Mon, 26 Feb 2024 17:16:44 +0100", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SLRU optimization - configurable buffer pool and partitioning\n the SLRU lock" }, { "msg_contents": "On Mon, Feb 26, 2024 at 9:46 PM Alvaro Herrera <[email protected]>\nwrote:\n\n> On 2024-Feb-23, Dilip Kumar wrote:\n>\n> + <para>\n> + For each <literal>SLRU</literal> area that's part of the core server,\n> + there is a configuration parameter that controls its size, with the\n> suffix\n> + <literal>_buffers</literal> appended. For historical\n> + reasons, the names are not exact matches, but <literal>Xact</literal>\n> + corresponds to <literal>transaction_buffers</literal> and the rest\n> should\n> + be obvious.\n> + <!-- Should we edit pgstat_internal.h::slru_names so that the \"name\"\n> matches\n> + the GUC name?? -->\n> + </para>\n>\n> I think I would like to suggest renaming the GUCs to have the _slru_ bit\n> in the middle:\n>\n> +# - SLRU Buffers (change requires restart) -\n> +\n> +#commit_timestamp_slru_buffers = 0 # memory for pg_commit_ts (0\n> = auto)\n> +#multixact_offsets_slru_buffers = 16 # memory for\n> pg_multixact/offsets\n> +#multixact_members_slru_buffers = 32 # memory for\n> pg_multixact/members\n> +#notify_slru_buffers = 16 # memory for pg_notify\n> +#serializable_slru_buffers = 32 # memory for pg_serial\n> +#subtransaction_slru_buffers = 0 # memory for pg_subtrans (0 =\n> auto)\n> +#transaction_slru_buffers = 0 # memory for pg_xact (0 =\n> auto)\n>\n> and the pgstat_internal.h table:\n>\n> static const char *const slru_names[] = {\n> \"commit_timestamp\",\n> \"multixact_members\",\n> \"multixact_offsets\",\n> \"notify\",\n> \"serializable\",\n> \"subtransaction\",\n> \"transaction\",\n> \"other\" /* has to be last\n> */\n> };\n>\n> This way they match perfectly.\n>\n\nYeah, I think this looks fine to me.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\nOn Mon, Feb 26, 2024 at 9:46 PM Alvaro Herrera <[email protected]> wrote:On 2024-Feb-23, Dilip Kumar wrote:\n\n+  <para>\n+   For each <literal>SLRU</literal> area that's part of the core server,\n+   there is a configuration parameter that controls its size, with the suffix\n+   <literal>_buffers</literal> appended.  For historical\n+   reasons, the names are not exact matches, but <literal>Xact</literal>\n+   corresponds to <literal>transaction_buffers</literal> and the rest should\n+   be obvious.\n+   <!-- Should we edit pgstat_internal.h::slru_names so that the \"name\" matches\n+        the GUC name?? -->\n+  </para>\n\nI think I would like to suggest renaming the GUCs to have the _slru_ bit\nin the middle:\n\n+# - SLRU Buffers (change requires restart) -\n+\n+#commit_timestamp_slru_buffers = 0          # memory for pg_commit_ts (0 = auto)\n+#multixact_offsets_slru_buffers = 16            # memory for pg_multixact/offsets\n+#multixact_members_slru_buffers = 32            # memory for pg_multixact/members\n+#notify_slru_buffers = 16                   # memory for pg_notify\n+#serializable_slru_buffers = 32             # memory for pg_serial\n+#subtransaction_slru_buffers = 0            # memory for pg_subtrans (0 = auto)\n+#transaction_slru_buffers = 0               # memory for pg_xact (0 = auto)\n\nand the pgstat_internal.h table:\n\nstatic const char *const slru_names[] = {\n        \"commit_timestamp\",\n        \"multixact_members\",\n        \"multixact_offsets\",\n        \"notify\",\n        \"serializable\",\n        \"subtransaction\",\n        \"transaction\",\n        \"other\"                                         /* has to be last */\n};\n\nThis way they match perfectly.Yeah, I think this looks fine to me.  -- Regards,Dilip KumarEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Tue, 27 Feb 2024 15:26:35 +0530", "msg_from": "Dilip Kumar <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SLRU optimization - configurable buffer pool and partitioning the\n SLRU lock" }, { "msg_contents": "On 2024-Feb-27, Dilip Kumar wrote:\n\n> > static const char *const slru_names[] = {\n> > \"commit_timestamp\",\n> > \"multixact_members\",\n> > \"multixact_offsets\",\n> > \"notify\",\n> > \"serializable\",\n> > \"subtransaction\",\n> > \"transaction\",\n> > \"other\" /* has to be last\n> > */\n> > };\n\nHere's a patch for the renaming part.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"No nos atrevemos a muchas cosas porque son difíciles,\npero son difíciles porque no nos atrevemos a hacerlas\" (Séneca)", "msg_date": "Tue, 27 Feb 2024 17:03:12 +0100", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SLRU optimization - configurable buffer pool and partitioning\n the SLRU lock" }, { "msg_contents": "\n\n> On 27 Feb 2024, at 21:03, Alvaro Herrera <[email protected]> wrote:\n> \n> On 2024-Feb-27, Dilip Kumar wrote:\n> \n>>> static const char *const slru_names[] = {\n>>> \"commit_timestamp\",\n>>> \"multixact_members\",\n>>> \"multixact_offsets\",\n>>> \"notify\",\n>>> \"serializable\",\n>>> \"subtransaction\",\n>>> \"transaction\",\n>>> \"other\" /* has to be last\n>>> */\n>>> };\n> \n> Here's a patch for the renaming part.\n\nSorry for the late reply, I have one nit. Are you sure that multixact_members and multixact_offsets are plural, while transaction and commit_timestamp are singular?\nMaybe multixact_members and multixact_offset? Because there are many members and one offset for a givent multixact? Users certainly do not care, thought...\n\n\nBest regards, Andrey Borodin.\n\n", "msg_date": "Tue, 27 Feb 2024 21:17:11 +0500", "msg_from": "\"Andrey M. Borodin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SLRU optimization - configurable buffer pool and partitioning the\n SLRU lock" }, { "msg_contents": "On 2024-Feb-27, Andrey M. Borodin wrote:\n\n> Sorry for the late reply, I have one nit. Are you sure that\n> multixact_members and multixact_offsets are plural, while transaction\n> and commit_timestamp are singular?\n> Maybe multixact_members and multixact_offset? Because there are many\n> members and one offset for a givent multixact? Users certainly do not\n> care, thought...\n\nI made myself the same question actually, and thought about putting them\nboth in the singular. I only backed off because I noticed that the\ndirectories themselves are in plural (an old mistake of mine, evidently). \nMaybe we should follow that instinct and use the singular for these.\n\nIf we do that, we can rename the directories to also appear in singular\nwhen/if the patch to add standard page headers to the SLRUs lands --\nwhich is going to need code to rewrite the files during pg_upgrade\nanyway, so the rename is not going to be a big deal.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"Crear es tan difícil como ser libre\" (Elsa Triolet)\n\n\n", "msg_date": "Tue, 27 Feb 2024 17:26:07 +0100", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SLRU optimization - configurable buffer pool and partitioning\n the SLRU lock" }, { "msg_contents": "Here's the complete set, with these two names using the singular.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"Uno puede defenderse de los ataques; contra los elogios se esta indefenso\"", "msg_date": "Tue, 27 Feb 2024 18:33:18 +0100", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SLRU optimization - configurable buffer pool and partitioning\n the SLRU lock" }, { "msg_contents": "\n\n> On 27 Feb 2024, at 22:33, Alvaro Herrera <[email protected]> wrote:\n> \n> <v21-0001-Rename-SLRU-elements-in-pg_stat_slru.patch><v21-0002-Make-SLRU-buffer-sizes-configurable.patch>\n\nThese patches look amazing!\n\n\nBest regards, Andrey Borodin.\n\n", "msg_date": "Tue, 27 Feb 2024 22:51:58 +0500", "msg_from": "\"Andrey M. Borodin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SLRU optimization - configurable buffer pool and partitioning the\n SLRU lock" }, { "msg_contents": "On 2024-Feb-27, Alvaro Herrera wrote:\n\n> Here's the complete set, with these two names using the singular.\n\nBTW one thing I had not noticed is that before this patch we have\nminimum shmem size that's lower than the lowest you can go with the new\ncode.\n\nThis means Postgres may no longer start when extremely tight memory\nrestrictions (and of course use more memory even when idle or with small\ndatabases). I wonder to what extent should we make an effort to relax\nthat. For small, largely inactive servers, this is just memory we use\nfor no good reason. However, anything we do here will impact\nperformance on the high end, because as Andrey says this will add\ncalculations and jumps where there are none today.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"We’ve narrowed the problem down to the customer’s pants being in a situation\n of vigorous combustion\" (Robert Haas, Postgres expert extraordinaire)\n\n\n", "msg_date": "Tue, 27 Feb 2024 19:11:07 +0100", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SLRU optimization - configurable buffer pool and partitioning\n the SLRU lock" }, { "msg_contents": "On Tue, Feb 27, 2024 at 11:41 PM Alvaro Herrera <[email protected]>\nwrote:\n\n> On 2024-Feb-27, Alvaro Herrera wrote:\n>\n> > Here's the complete set, with these two names using the singular.\n>\n> BTW one thing I had not noticed is that before this patch we have\n> minimum shmem size that's lower than the lowest you can go with the new\n> code.\n>\n> This means Postgres may no longer start when extremely tight memory\n> restrictions (and of course use more memory even when idle or with small\n> databases). I wonder to what extent should we make an effort to relax\n> that. For small, largely inactive servers, this is just memory we use\n> for no good reason. However, anything we do here will impact\n> performance on the high end, because as Andrey says this will add\n> calculations and jumps where there are none today.\n>\n>\nI was just comparing the minimum memory required for SLRU when the system\nis minimally configured, correct me if I am wrong.\n\nSLRU unpatched\npatched\ncommit_timestamp_buffers 4 16\nsubtransaction_buffers 32 16\ntransaction_buffers 4 16\nmultixact_offset_buffers 8 16\nmultixact_member_buffers 16 16\nnotify_buffers 8\n 16\nserializable_buffers 16 16\n-------------------------------------------------------------------------------------\ntotal buffers 88\n112\n\nso that is < 200kB of extra memory on a minimally configured system, IMHO\nthis should not matter.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\nOn Tue, Feb 27, 2024 at 11:41 PM Alvaro Herrera <[email protected]> wrote:On 2024-Feb-27, Alvaro Herrera wrote:\n\n> Here's the complete set, with these two names using the singular.\n\nBTW one thing I had not noticed is that before this patch we have\nminimum shmem size that's lower than the lowest you can go with the new\ncode.\n\nThis means Postgres may no longer start when extremely tight memory\nrestrictions (and of course use more memory even when idle or with small\ndatabases).  I wonder to what extent should we make an effort to relax\nthat.  For small, largely inactive servers, this is just memory we use\nfor no good reason.  However, anything we do here will impact\nperformance on the high end, because as Andrey says this will add\ncalculations and jumps where there are none today.\nI was just comparing the minimum memory required for SLRU when the system is minimally configured, correct me if I am wrong.SLRU                                            unpatched            patched                    commit_timestamp_buffers          4                           16subtransaction_buffers                 32                         16transaction_buffers                       4                           16multixact_offset_buffers                8                           16 multixact_member_buffers           16                          16 notify_buffers                                 8                           16serializable_buffers                       16                          16-------------------------------------------------------------------------------------total buffers                                 88                            112so that is < 200kB of extra memory on a minimally configured system, IMHO this should not matter.-- Regards,Dilip KumarEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Wed, 28 Feb 2024 09:20:13 +0530", "msg_from": "Dilip Kumar <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SLRU optimization - configurable buffer pool and partitioning the\n SLRU lock" }, { "msg_contents": "At Tue, 27 Feb 2024 18:33:18 +0100, Alvaro Herrera <[email protected]> wrote in \n> Here's the complete set, with these two names using the singular.\n\nThe commit by the second patch added several GUC descriptions:\n\n> Sets the size of the dedicated buffer pool used for the commit timestamp cache.\n\nSome of them, commit_timestamp_buffers, transaction_buffers,\nsubtransaction_buffers use 0 to mean auto-tuning based on\nshared-buffer size. I think it's worth adding an extra_desc such as \"0\nto automatically determine this value based on the shared buffer\nsize\".\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 29 Feb 2024 13:04:04 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SLRU optimization - configurable buffer pool and partitioning\n the SLRU lock" }, { "msg_contents": "On 2024-Feb-29, Kyotaro Horiguchi wrote:\n\n> At Tue, 27 Feb 2024 18:33:18 +0100, Alvaro Herrera <[email protected]> wrote in \n> > Here's the complete set, with these two names using the singular.\n> \n> The commit by the second patch added several GUC descriptions:\n> \n> > Sets the size of the dedicated buffer pool used for the commit timestamp cache.\n> \n> Some of them, commit_timestamp_buffers, transaction_buffers,\n> subtransaction_buffers use 0 to mean auto-tuning based on\n> shared-buffer size. I think it's worth adding an extra_desc such as \"0\n> to automatically determine this value based on the shared buffer\n> size\".\n\nHow about this?\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"La victoria es para quien se atreve a estar solo\"", "msg_date": "Thu, 29 Feb 2024 11:46:42 +0100", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SLRU optimization - configurable buffer pool and partitioning\n the SLRU lock" }, { "msg_contents": "On 2024-Feb-29, Alvaro Herrera wrote:\n\n> On 2024-Feb-29, Kyotaro Horiguchi wrote:\n\n> > Some of them, commit_timestamp_buffers, transaction_buffers,\n> > subtransaction_buffers use 0 to mean auto-tuning based on\n> > shared-buffer size. I think it's worth adding an extra_desc such as \"0\n> > to automatically determine this value based on the shared buffer\n> > size\".\n> \n> How about this?\n\nPushed that way, but we can discuss further wording improvements/changes\nif someone wants to propose any.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"La rebeldía es la virtud original del hombre\" (Arthur Schopenhauer)\n\n\n", "msg_date": "Sun, 3 Mar 2024 14:58:39 +0100", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SLRU optimization - configurable buffer pool and partitioning\n the SLRU lock" }, { "msg_contents": "Alvaro Herrera <[email protected]> writes:\n> Pushed that way, but we can discuss further wording improvements/changes\n> if someone wants to propose any.\n\nI just noticed that drongo is complaining about two lines added\nby commit 53c2a97a9:\n\n drongo | 2024-03-04 14:34:52 | ../pgsql/src/backend/access/transam/slru.c(436): warning C4047: '!=': 'SlruPageStatus *' differs in levels of indirection from 'int'\n drongo | 2024-03-04 14:34:52 | ../pgsql/src/backend/access/transam/slru.c(717): warning C4047: '!=': 'SlruPageStatus *' differs in levels of indirection from 'int'\n\nThese lines are\n\n\tAssert(&shared->page_status[slotno] != SLRU_PAGE_EMPTY);\n\n\tAssert(&ctl->shared->page_status[slotno] != SLRU_PAGE_EMPTY);\n\nThese are comparing the address of something with an enum value,\nwhich surely cannot be sane. Is the \"&\" operator incorrect?\n\nIt looks like SLRU_PAGE_EMPTY has (by chance, or deliberately)\nthe numeric value of zero, so I guess the majority of our BF\nanimals are understanding this as \"address != NULL\". But that\ndoesn't look like a useful test to be making.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 04 Mar 2024 17:14:32 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SLRU optimization - configurable buffer pool and partitioning the\n SLRU lock" }, { "msg_contents": "I wrote:\n> It looks like SLRU_PAGE_EMPTY has (by chance, or deliberately)\n> the numeric value of zero, so I guess the majority of our BF\n> animals are understanding this as \"address != NULL\". But that\n> doesn't look like a useful test to be making.\n\nIn hopes of noticing whether there are other similar thinkos,\nI permuted the order of the SlruPageStatus enum values, and\nnow I get the expected warnings from gcc:\n\nIn file included from ../../../../src/include/postgres.h:45,\n from slru.c:59:\nslru.c: In function ‘SimpleLruWaitIO’:\nslru.c:436:38: warning: comparison between pointer and integer\n Assert(&shared->page_status[slotno] != SLRU_PAGE_EMPTY);\n ^~\n../../../../src/include/c.h:862:9: note: in definition of macro ‘Assert’\n if (!(condition)) \\\n ^~~~~~~~~\nslru.c: In function ‘SimpleLruWritePage’:\nslru.c:717:43: warning: comparison between pointer and integer\n Assert(&ctl->shared->page_status[slotno] != SLRU_PAGE_EMPTY);\n ^~\n../../../../src/include/c.h:862:9: note: in definition of macro ‘Assert’\n if (!(condition)) \\\n ^~~~~~~~~\n\nSo it looks like it's just these two places.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 04 Mar 2024 17:41:58 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SLRU optimization - configurable buffer pool and partitioning the\n SLRU lock" }, { "msg_contents": "On 2024-Mar-04, Tom Lane wrote:\n\n> In hopes of noticing whether there are other similar thinkos,\n> I permuted the order of the SlruPageStatus enum values, and\n> now I get the expected warnings from gcc:\n\nThanks for checking! I pushed the fixes.\n\nMaybe we should assign a nonzero value (= 1) to the first element of\nenums, to avoid this kind of mistake.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Tue, 5 Mar 2024 12:18:05 +0100", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SLRU optimization - configurable buffer pool and partitioning\n the SLRU lock" }, { "msg_contents": "Hello Alvaro,\n\n27.02.2024 20:33, Alvaro Herrera wrote:\n> Here's the complete set, with these two names using the singular.\n>\n\nI've managed to trigger an assert added by 53c2a97a9.\nPlease try the following script against a server compiled with\n-DTEST_SUMMARIZE_SERIAL (initially I observed this failure without the\ndefine, it just simplifies reproducing...):\n# initdb & start ...\n\ncreatedb test\necho \"\nSELECT pg_current_xact_id() AS tx\n\\gset\n\nSELECT format('CREATE TABLE t%s(i int)', g)\n   FROM generate_series(1, 1022 - :tx) g\n\\gexec\n\nBEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE;\nSELECT pg_current_xact_id();\nSELECT pg_sleep(5);\n\" | psql test &\n\necho \"\nSELECT pg_sleep(1);\nBEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE;\nSELECT 1 INTO a;\nCOMMIT;\n\nBEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE;\nSELECT 2 INTO b;\n\" | psql test\n\nIt fails for me with the following stack trace:\nTRAP: failed Assert(\"LWLockHeldByMeInMode(SimpleLruGetBankLock(ctl, pageno), LW_EXCLUSIVE)\"), File: \"slru.c\", Line: 366, \nPID: 21711\nExceptionalCondition at assert.c:52:13\nSimpleLruZeroPage at slru.c:369:11\nSerialAdd at predicate.c:921:20\nSummarizeOldestCommittedSxact at predicate.c:1521:2\nGetSerializableTransactionSnapshotInt at predicate.c:1787:16\nGetSerializableTransactionSnapshot at predicate.c:1691:1\nGetTransactionSnapshot at snapmgr.c:264:21\nexec_simple_query at postgres.c:1162:4\n...\n\nBest regards,\nAlexander\n\n\n", "msg_date": "Wed, 3 Apr 2024 15:00:00 +0300", "msg_from": "Alexander Lakhin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SLRU optimization - configurable buffer pool and partitioning the\n SLRU lock" }, { "msg_contents": "Hello,\n\nOn 2024-Apr-03, Alexander Lakhin wrote:\n\n> I've managed to trigger an assert added by 53c2a97a9.\n> Please try the following script against a server compiled with\n> -DTEST_SUMMARIZE_SERIAL (initially I observed this failure without the\n> define, it just simplifies reproducing...):\n\nAh yes, absolutely, we're missing to trade the correct SLRU bank lock\nthere. This rewrite of that small piece should fix it. Thanks for\nreporting this.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"Pido que me den el Nobel por razones humanitarias\" (Nicanor Parra)", "msg_date": "Wed, 3 Apr 2024 16:10:08 +0200", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SLRU optimization - configurable buffer pool and partitioning\n the SLRU lock" }, { "msg_contents": "On Wed, Apr 3, 2024 at 7:40 PM Alvaro Herrera <[email protected]> wrote:\n>\n> Hello,\n>\n> On 2024-Apr-03, Alexander Lakhin wrote:\n>\n> > I've managed to trigger an assert added by 53c2a97a9.\n> > Please try the following script against a server compiled with\n> > -DTEST_SUMMARIZE_SERIAL (initially I observed this failure without the\n> > define, it just simplifies reproducing...):\n>\n> Ah yes, absolutely, we're missing to trade the correct SLRU bank lock\n> there. This rewrite of that small piece should fix it. Thanks for\n> reporting this.\n>\n\nYeah, we missed acquiring the bank lock w.r.t. intervening pages,\nthanks for reporting. Your fix looks correct to me.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 3 Apr 2024 19:48:35 +0530", "msg_from": "Dilip Kumar <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SLRU optimization - configurable buffer pool and partitioning the\n SLRU lock" }, { "msg_contents": "On 2024-Apr-03, Dilip Kumar wrote:\n\n> Yeah, we missed acquiring the bank lock w.r.t. intervening pages,\n> thanks for reporting. Your fix looks correct to me.\n\nThanks for the quick review! And thanks to Alexander for the report.\nPushed the fix.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"No hay hombre que no aspire a la plenitud, es decir,\nla suma de experiencias de que un hombre es capaz\"\n\n\n", "msg_date": "Wed, 3 Apr 2024 18:01:28 +0200", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SLRU optimization - configurable buffer pool and partitioning\n the SLRU lock" } ]
[ { "msg_contents": "Hi,\n\nI understand that in READ COMMITTED isolation level, SELECT queries\nreads a snapshot of the database as of the instant the query begins.\nAnd also a concurrent transaction(uncommitted) writing to the same\ntable won't block the readers.\nHowever, I see that in the heap_update(heapam.c) function there is a\nbrief interval(Lock and unlock the buffer) where a writer may block\nreaders if the writer is updating the same row which readers are\nreading.\nCould anyone please help me with the below query?\n\n1) Is my understanding correct? In so, Is it not against the\nstatements \"readers does not block writers and writers does not block\nreaders\"\n\nBest,\nAjay\n\n\n", "msg_date": "Wed, 11 Oct 2023 14:30:33 -0700", "msg_from": "Ajay P S <[email protected]>", "msg_from_op": true, "msg_subject": "Regarding Postgresql Transaction isolation" }, { "msg_contents": "Ajay P S <[email protected]> writes:\n> However, I see that in the heap_update(heapam.c) function there is a\n> brief interval(Lock and unlock the buffer) where a writer may block\n> readers if the writer is updating the same row which readers are\n> reading.\n> Could anyone please help me with the below query?\n\n> 1) Is my understanding correct? In so, Is it not against the\n> statements \"readers does not block writers and writers does not block\n> readers\"\n\nYou should probably read that as \"there are no macroscopic block\nconditions between readers and writers\". If you want to quibble\nabout whether a transient buffer lock violates the statement,\nthere are doubtless hundreds of other places where there's some\nsort of short-term blockage. A trivial example is that just\nfinding the buffer in the first place can be transiently blocked\nby spinlock or LWLock exclusion on the buffer lookup data structures.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 11 Oct 2023 17:49:37 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Regarding Postgresql Transaction isolation" } ]
[ { "msg_contents": "Hi,\n\nYou can't tell if your checkpointer is spending a lot of time waiting\naround for flags in delayChkptFlags to clear. Trivial patch to add\nthat. I've managed to see it a few times when checkpointing\nrepeatedly with a heavy pgbench workload.\n\nI had to stop and think for a moment about whether these events belong\nunder \"WaitEventIPC\", \"waiting for notification from another process\"\nor under \"WaitEventTimeout\", \"waiting for a timeout to expire\". I\nmean, both? It's using sleep-and-poll instead of (say) a CV due to\nthe economics, we want to make the other side as cheap as possible, so\nwe don't care about making the checkpointer take some micro-naps in\nthis case. I feel like the key point here is that it's waiting for\nanother process to do stuff and unblock it.", "msg_date": "Thu, 12 Oct 2023 14:10:25 +1300", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": true, "msg_subject": "Wait events for delayed checkpoints" }, { "msg_contents": "On Wed, Oct 11, 2023 at 9:13 PM Thomas Munro <[email protected]> wrote:\n> You can't tell if your checkpointer is spending a lot of time waiting\n> around for flags in delayChkptFlags to clear. Trivial patch to add\n> that. I've managed to see it a few times when checkpointing\n> repeatedly with a heavy pgbench workload.\n>\n> I had to stop and think for a moment about whether these events belong\n> under \"WaitEventIPC\", \"waiting for notification from another process\"\n> or under \"WaitEventTimeout\", \"waiting for a timeout to expire\". I\n> mean, both? It's using sleep-and-poll instead of (say) a CV due to\n> the economics, we want to make the other side as cheap as possible, so\n> we don't care about making the checkpointer take some micro-naps in\n> this case. I feel like the key point here is that it's waiting for\n> another process to do stuff and unblock it.\n\nIPC seems right to me. Yeah, a timeout is being used, but as you say,\nthat's an implementation detail.\n\n+1 for the idea, too.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 12 Oct 2023 13:32:29 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Wait events for delayed checkpoints" }, { "msg_contents": "On Thu, Oct 12, 2023 at 01:32:29PM -0400, Robert Haas wrote:\n> IPC seems right to me. Yeah, a timeout is being used, but as you say,\n> that's an implementation detail.\n> \n> +1 for the idea, too.\n\nAgreed that timeout makes little sense in this context, and IPC looks\ncorrect.\n\n+ pgstat_report_wait_start(WAIT_EVENT_CHECKPOINT_DELAY_START);\n do\n {\n pg_usleep(10000L); /* wait for 10 msec */\n } while (HaveVirtualXIDsDelayingChkpt(vxids, nvxids,\n DELAY_CHKPT_START));\n+ pgstat_report_wait_end();\n\nHaveVirtualXIDsDelayingChkpt() does immediately a LWLockAcquire()\nwhich would itself report a wait event for ProcArrayLock, overwriting\nthis new one, no?\n--\nMichael", "msg_date": "Fri, 13 Oct 2023 08:09:21 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Wait events for delayed checkpoints" }, { "msg_contents": "On Thu, Oct 12, 2023 at 7:09 PM Michael Paquier <[email protected]> wrote:\n> On Thu, Oct 12, 2023 at 01:32:29PM -0400, Robert Haas wrote:\n> > IPC seems right to me. Yeah, a timeout is being used, but as you say,\n> > that's an implementation detail.\n> >\n> > +1 for the idea, too.\n>\n> Agreed that timeout makes little sense in this context, and IPC looks\n> correct.\n>\n> + pgstat_report_wait_start(WAIT_EVENT_CHECKPOINT_DELAY_START);\n> do\n> {\n> pg_usleep(10000L); /* wait for 10 msec */\n> } while (HaveVirtualXIDsDelayingChkpt(vxids, nvxids,\n> DELAY_CHKPT_START));\n> + pgstat_report_wait_end();\n>\n> HaveVirtualXIDsDelayingChkpt() does immediately a LWLockAcquire()\n> which would itself report a wait event for ProcArrayLock, overwriting\n> this new one, no?\n\nAh, right: the wait event should be set and cleared around pg_usleep,\nnot the whole loop.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 12 Oct 2023 21:19:29 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Wait events for delayed checkpoints" }, { "msg_contents": "On Fri, Oct 13, 2023 at 2:19 PM Robert Haas <[email protected]> wrote:\n> On Thu, Oct 12, 2023 at 7:09 PM Michael Paquier <[email protected]> wrote:\n> > HaveVirtualXIDsDelayingChkpt() does immediately a LWLockAcquire()\n> > which would itself report a wait event for ProcArrayLock, overwriting\n> > this new one, no?\n>\n> Ah, right: the wait event should be set and cleared around pg_usleep,\n> not the whole loop.\n\nDuh. Yeah. Pushed like that. Thanks both.\n\n\n", "msg_date": "Fri, 13 Oct 2023 16:51:50 +1300", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Wait events for delayed checkpoints" } ]
[ { "msg_contents": "On Fri, Aug 25, 2023 at 8:35 AM Stephen Frost <[email protected]> wrote:\n\n> Greetings,\n>\n> This is getting a bit far afield in terms of this specific thread, but\n> there's an ongoing effort to give PG administrators knobs to be able to\n> control how much actual memory is used rather than depending on the\n> kernel to actually tell us when we're \"out\" of memory. There'll be new\n> patches for the September commitfest posted soon. If you're interested\n> in this issue, it'd be great to get more folks involved in review and\n> testing.\n>\n\nNoticed I missed this. I'm interested. Test #1 would be to set memory to\nabout max there is, maybe a hair under, turn off swap, and see what happens\nin various dynamic load situations.\n\nDisabling overcommit is not a practical solution in my experience; it moves\ninstability from one place to another and seems to make problems appear in\na broader set of situations. For zero downtime platforms it has place but I\nwould tend to roll the dice on a reboot even for direct user facing\napplications given that it can provide relief for systemic conditions.\n\nMy unsophisticated hunch is that postgres and the kernel are not on the\nsame page about memory somehow and that the multi-process architecture\nmight be contributing to that issue. Of course, regarding\nrearchitecture skeptically and realistically is a good idea given the\neffort and risks.\n\nI guess, in summary, I would personally rate things like better management\nof resource tradeoffs, better handling of transient dmenands, predictable\nfailure modes, and stability in dynamic workloads over things like better\nperformance in extremely high concurrency situations. Others might think\ndifferently for objectively good reasons.\n\nmerlin\n\nOn Fri, Aug 25, 2023 at 8:35 AM Stephen Frost <[email protected]> wrote:Greetings,\nThis is getting a bit far afield in terms of this specific thread, but\nthere's an ongoing effort to give PG administrators knobs to be able to\ncontrol how much actual memory is used rather than depending on the\nkernel to actually tell us when we're \"out\" of memory.  There'll be new\npatches for the September commitfest posted soon.  If you're interested\nin this issue, it'd be great to get more folks involved in review and\ntesting.Noticed I missed this.  I'm interested.   Test #1 would be to set memory to about max there is, maybe a hair under, turn off swap, and see what happens in various dynamic load situations. Disabling overcommit is not a practical solution in my experience; it moves instability from one place to another and seems to make problems appear in a broader set of situations. For zero downtime platforms it has place but I would tend to roll the dice on a reboot even for direct user facing applications given that it can provide relief for systemic conditions.My unsophisticated hunch is that postgres and the kernel are not on the same page about memory somehow and that the multi-process architecture might be contributing to that issue.  Of course, regarding rearchitecture skeptically and realistically is a good idea given the effort and risks.I guess, in summary, I would personally rate things like better management of resource tradeoffs, better handling of transient dmenands, predictable failure modes, and stability in dynamic workloads over things like better performance in extremely high concurrency situations.  Others might think differently for objectively good reasons.merlin", "msg_date": "Wed, 11 Oct 2023 22:05:50 -0500", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": true, "msg_subject": "Memory knob testing (was Re: Let's make PostgreSQL multi-threaded)" } ]
[ { "msg_contents": "Greetengs!\n\nFound that simple test pgbench -c20 -T20 -j8 gives approximately\nfor REL_15_STABLE at 5143f76: 336+-1 TPS\nand\nfor REL_16_STABLE at 4ac7635f: 324+-1 TPS\n\nThe performance drop is approximately 3,5% while the corrected standard deviation is only 0.3%.\nSee the raw_data.txt attached.\n\nHow do you think, is there any cause for concern here?\n\nAnd is it worth spending time bisecting for the commit where this degradation may have occurred?\n\nWould be glad for any comments and concerns.\n\nWith the best regards,\n\n-- \nAnton A. Melnikov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Thu, 12 Oct 2023 11:00:22 +0300", "msg_from": "\"Anton A. Melnikov\" <[email protected]>", "msg_from_op": true, "msg_subject": "Some performance degradation in REL_16 vs REL_15" }, { "msg_contents": "On Thu, 12 Oct 2023 at 21:01, Anton A. Melnikov\n<[email protected]> wrote:\n>\n> Greetengs!\n>\n> Found that simple test pgbench -c20 -T20 -j8 gives approximately\n> for REL_15_STABLE at 5143f76: 336+-1 TPS\n> and\n> for REL_16_STABLE at 4ac7635f: 324+-1 TPS\n>\n> And is it worth spending time bisecting for the commit where this degradation may have occurred?\n\nIt would be interesting to know what's to blame here and if you can\nattribute it to a certain commit.\n\nDavid\n\n\n", "msg_date": "Thu, 12 Oct 2023 21:20:36 +1300", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Some performance degradation in REL_16 vs REL_15" }, { "msg_contents": "On Thu, Oct 12, 2023 at 09:20:36PM +1300, David Rowley wrote:\n> It would be interesting to know what's to blame here and if you can\n> attribute it to a certain commit.\n\n+1.\n--\nMichael", "msg_date": "Fri, 13 Oct 2023 09:40:04 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Some performance degradation in REL_16 vs REL_15" }, { "msg_contents": "Hi,\n\nOn 2023-10-12 11:00:22 +0300, Anton A. Melnikov wrote:\n> Found that simple test pgbench -c20 -T20 -j8 gives approximately\n> for REL_15_STABLE at 5143f76: 336+-1 TPS\n> and\n> for REL_16_STABLE at 4ac7635f: 324+-1 TPS\n> \n> The performance drop is approximately 3,5% while the corrected standard deviation is only 0.3%.\n> See the raw_data.txt attached.\n\nCould you provide a bit more details about how you ran the benchmark? The\nreason I am asking is that ~330 TPS is pretty slow for -c20. Even on spinning\nrust and using the default settings, I get considerably higher results.\n\nOh - I do get results closer to yours if I use pgbench scale 1, causing a lot\nof row level contention. What scale did you use?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 12 Oct 2023 19:05:22 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Some performance degradation in REL_16 vs REL_15" }, { "msg_contents": "On 13.10.2023 05:05, Andres Freund wrote:\n> Could you provide a bit more details about how you ran the benchmark? The\n> reason I am asking is that ~330 TPS is pretty slow for -c20. Even on spinning\n> rust and using the default settings, I get considerably higher results.\n> \n> Oh - I do get results closer to yours if I use pgbench scale 1, causing a lot\n> of row level contention. What scale did you use?\n\n\nI use default scale of 1.\nAnd run the command sequence:\n$pgbench -i bench\n$sleep 1\n$pgbench -c20 -T10 -j8\nin a loop to get similar initial conditions for every \"pgbench -c20 -T10 -j8\" run.\n\nThanks for your interest!\n\nWith the best wishes,\n\n-- \nAnton A. Melnikov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n", "msg_date": "Mon, 16 Oct 2023 11:04:25 +0300", "msg_from": "\"Anton A. Melnikov\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Some performance degradation in REL_16 vs REL_15" }, { "msg_contents": "I wrote a script and test on branch REL_[10-16]_STABLE, and do see performance drop in REL_13_STABLE, which is about 1~2%.\n\nscale\tround\t10\t11\t12\t13\t14\t15\t16\n1\t1\t7922.2\t8018.3\t8102.8\t7838.3\t7829.2\t7870.0\t7846.1\n\t2\t7922.4\t7923.5\t8090.3\t7887.7\t7912.4\t7815.2\t7865.6\n\t3\t7937.6\t7964.9\t8012.8\t7918.5\t7879.4\t7786.4\t7981.1\n\t4\t8000.4\t7959.5\t8141.1\t7886.3\t7840.9\t7863.5\t8022.4\n\t5\t7921.8\t7945.5\t8005.2\t7993.7\t7957.0\t7803.8\t7899.8\n\t6\t7893.8\t7895.1\t8017.2\t7879.8\t7880.9\t7911.4\t7909.2\n\t7\t7879.3\t7853.5\t8071.7\t7956.2\t7876.7\t7863.3\t7986.3\n\t8\t7980.5\t7964.1\t8119.2\t8015.2\t7877.6\t7784.9\t7923.6\n\t9\t8083.9\t7946.4\t7960.3\t7913.9\t7924.6\t7867.7\t7928.6\n\t10\t7971.2\t7991.8\t7999.5\t7812.4\t7824.3\t7831.0\t7953.4\n\tAVG\t7951.3\t7946.3\t8052.0\t7910.2\t7880.3\t7839.7\t7931.6\n\tMED\t7930.0\t7952.9\t8044.5\t7900.8\t7878.5\t7847.1\t7926.1\n10\t1\t41221.5\t41394.8\t40926.8\t40566.6\t41661.3\t40511.9\t40961.8\n\t2\t40974.0\t40697.9\t40842.4\t40269.2\t41127.7\t40795.5\t40814.9\n\t3\t41453.5\t41426.4\t41066.2\t40890.9\t41018.6\t40897.3\t40891.7\n\t4\t41691.9\t40294.9\t41189.8\t40873.8\t41539.7\t40943.2\t40643.8\n\t5\t40843.4\t40855.5\t41243.8\t40351.3\t40863.2\t40839.6\t40795.5\n\t6\t40969.3\t40897.9\t41380.8\t40734.7\t41269.3\t41301.0\t41061.0\n\t7\t40981.1\t41119.5\t41158.0\t40834.6\t40967.1\t40790.6\t41061.6\n\t8\t41006.4\t41205.9\t40740.3\t40978.7\t40742.4\t40951.6\t41242.1\n\t9\t41089.9\t41129.7\t40648.3\t40622.1\t40782.0\t40460.5\t40877.9\n\t10\t41280.3\t41462.7\t41316.4\t40728.0\t40983.9\t40747.0\t40964.6\n\tAVG\t41151.1\t41048.5\t41051.3\t40685.0\t41095.5\t40823.8\t40931.5\n\tMED\t41048.2\t41124.6\t41112.1\t40731.3\t41001.3\t40817.6\t40926.7\n100\t1\t43429.0\t43190.2\t44099.3\t43941.5\t43883.3\t44215.0\t44604.9\n\t2\t43281.7\t43795.2\t44963.6\t44331.5\t43559.7\t43571.5\t43403.9\n\t3\t43749.0\t43614.1\t44616.7\t43759.5\t43617.8\t43530.3\t43362.4\n\t4\t43362.0\t43197.3\t44296.7\t43692.4\t42020.5\t43607.3\t43081.8\n\t5\t43373.4\t43288.0\t44240.9\t43795.0\t43630.6\t43576.7\t43512.0\n\t6\t43637.0\t43385.2\t45130.1\t43792.5\t43635.4\t43905.2\t43371.2\n\t7\t43621.2\t43474.2\t43735.0\t43592.2\t43889.7\t43947.7\t43369.8\n\t8\t43351.0\t43937.5\t44285.6\t43877.2\t43771.1\t43879.1\t43680.4\n\t9\t43481.3\t43700.5\t44119.9\t43786.9\t43440.8\t44083.1\t43563.2\n\t10\t43238.7\t43559.5\t44310.8\t43406.0\t44306.6\t43376.3\t43242.7\n\tAVG\t43452.4\t43514.2\t44379.9\t43797.5\t43575.6\t43769.2\t43519.2\n\tMED\t43401.2\t43516.8\t44291.2\t43789.7\t43633.0\t43743.2\t43387.5\n\nThe script looks like:\n initdb data >/dev/null 2>&1 #initdb on every round\n pg_ctl -D data -l logfile start >/dev/null 2>&1 #start without changing any setting\n pgbench -i postgres $scale >/dev/null 2>&1\n sleep 1 >/dev/null 2>&1\n pgbench -c20 -T10 -j8\n\nAnd here is the pg_config output:\n...\nCONFIGURE = '--enable-debug' '--prefix=/home/postgres/base' '--enable-depend' 'PKG_CONFIG_PATH=/usr/local/lib64/pkgconfig::/usr/lib/pkgconfig'\nCC = gcc\nCPPFLAGS = -D_GNU_SOURCE\nCFLAGS = -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wimplicit-fallthrough=3 -Wcast-function-type -Wshadow=compatible-local -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -g -O2\nCFLAGS_SL = -fPIC\nLDFLAGS = -Wl,--as-needed -Wl,-rpath,'/home/postgres/base/lib',--enable-new-dtags\nLDFLAGS_EX = \nLDFLAGS_SL = \nLIBS = -lpgcommon -lpgport -lz -lreadline -lpthread -lrt -ldl -lm \nVERSION = PostgreSQL 16.0\n\n—-\nYuhang Qiu\nI wrote a script and test on branch REL_[10-16]_STABLE, and do see performance drop in REL_13_STABLE, which is about 1~2%.scale round 10 11 12 13 14 15 161 1 7922.2 8018.3 8102.8 7838.3 7829.2 7870.0 7846.1 2 7922.4 7923.5 8090.3 7887.7 7912.4 7815.2 7865.6 3 7937.6 7964.9 8012.8 7918.5 7879.4 7786.4 7981.1 4 8000.4 7959.5 8141.1 7886.3 7840.9 7863.5 8022.4 5 7921.8 7945.5 8005.2 7993.7 7957.0 7803.8 7899.8 6 7893.8 7895.1 8017.2 7879.8 7880.9 7911.4 7909.2 7 7879.3 7853.5 8071.7 7956.2 7876.7 7863.3 7986.3 8 7980.5 7964.1 8119.2 8015.2 7877.6 7784.9 7923.6 9 8083.9 7946.4 7960.3 7913.9 7924.6 7867.7 7928.6 10 7971.2 7991.8 7999.5 7812.4 7824.3 7831.0 7953.4 AVG 7951.3 7946.3 8052.0 7910.2 7880.3 7839.7 7931.6 MED 7930.0 7952.9 8044.5 7900.8 7878.5 7847.1 7926.110 1 41221.5 41394.8 40926.8 40566.6 41661.3 40511.9 40961.8 2 40974.0 40697.9 40842.4 40269.2 41127.7 40795.5 40814.9 3 41453.5 41426.4 41066.2 40890.9 41018.6 40897.3 40891.7 4 41691.9 40294.9 41189.8 40873.8 41539.7 40943.2 40643.8 5 40843.4 40855.5 41243.8 40351.3 40863.2 40839.6 40795.5 6 40969.3 40897.9 41380.8 40734.7 41269.3 41301.0 41061.0 7 40981.1 41119.5 41158.0 40834.6 40967.1 40790.6 41061.6 8 41006.4 41205.9 40740.3 40978.7 40742.4 40951.6 41242.1 9 41089.9 41129.7 40648.3 40622.1 40782.0 40460.5 40877.9 10 41280.3 41462.7 41316.4 40728.0 40983.9 40747.0 40964.6 AVG 41151.1 41048.5 41051.3 40685.0 41095.5 40823.8 40931.5 MED 41048.2 41124.6 41112.1 40731.3 41001.3 40817.6 40926.7100 1 43429.0 43190.2 44099.3 43941.5 43883.3 44215.0 44604.9 2 43281.7 43795.2 44963.6 44331.5 43559.7 43571.5 43403.9 3 43749.0 43614.1 44616.7 43759.5 43617.8 43530.3 43362.4 4 43362.0 43197.3 44296.7 43692.4 42020.5 43607.3 43081.8 5 43373.4 43288.0 44240.9 43795.0 43630.6 43576.7 43512.0 6 43637.0 43385.2 45130.1 43792.5 43635.4 43905.2 43371.2 7 43621.2 43474.2 43735.0 43592.2 43889.7 43947.7 43369.8 8 43351.0 43937.5 44285.6 43877.2 43771.1 43879.1 43680.4 9 43481.3 43700.5 44119.9 43786.9 43440.8 44083.1 43563.2 10 43238.7 43559.5 44310.8 43406.0 44306.6 43376.3 43242.7 AVG 43452.4 43514.2 44379.9 43797.5 43575.6 43769.2 43519.2 MED 43401.2 43516.8 44291.2 43789.7 43633.0 43743.2 43387.5The script looks like:    initdb data >/dev/null 2>&1 #initdb on every round    pg_ctl -D data -l logfile start >/dev/null 2>&1 #start without changing any setting    pgbench -i postgres $scale >/dev/null 2>&1    sleep 1 >/dev/null 2>&1    pgbench -c20 -T10 -j8And here is the pg_config output:...CONFIGURE =  '--enable-debug' '--prefix=/home/postgres/base' '--enable-depend' 'PKG_CONFIG_PATH=/usr/local/lib64/pkgconfig::/usr/lib/pkgconfig'CC = gccCPPFLAGS = -D_GNU_SOURCECFLAGS = -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wimplicit-fallthrough=3 -Wcast-function-type -Wshadow=compatible-local -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -g -O2CFLAGS_SL = -fPICLDFLAGS = -Wl,--as-needed -Wl,-rpath,'/home/postgres/base/lib',--enable-new-dtagsLDFLAGS_EX = LDFLAGS_SL = LIBS = -lpgcommon -lpgport -lz -lreadline -lpthread -lrt -ldl -lm VERSION = PostgreSQL 16.0—-Yuhang Qiu", "msg_date": "Wed, 18 Oct 2023 11:45:55 +0800", "msg_from": "=?utf-8?B?6YKx5a6H6Iiq?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Some performance degradation in REL_16 vs REL_15" }, { "msg_contents": "Hi,\n\nOn 2023-10-16 11:04:25 +0300, Anton A. Melnikov wrote:\n> On 13.10.2023 05:05, Andres Freund wrote:\n> > Could you provide a bit more details about how you ran the benchmark? The\n> > reason I am asking is that ~330 TPS is pretty slow for -c20. Even on spinning\n> > rust and using the default settings, I get considerably higher results.\n> > \n> > Oh - I do get results closer to yours if I use pgbench scale 1, causing a lot\n> > of row level contention. What scale did you use?\n> \n> \n> I use default scale of 1.\n\nThat means you're largely going to be bottlenecked due to row level\ncontention. For read/write pgbench you normally want to use a scale that's\nbigger than the client count, best by at least 2x.\n\nHave you built postgres with assertions enabled or such?\n\nWhat is the server configuration for both versions?\n\n\n> And run the command sequence:\n> $pgbench -i bench\n> $sleep 1\n> $pgbench -c20 -T10 -j8\n\nI assume you also specify the database name here, given you specified it for\npgbench -i?\n\nAs you're not doing a new initdb here, the state of the cluster will\nsubstantially depend on what has run before. This can matter substantially\nbecause a cluster with prior substantial write activity will already have\ninitialized WAL files and can reuse them cheaply, whereas one without that\nactivity needs to initialize new files. Although that matters a bit less with\nscale 1, because there's just not a whole lot of writes.\n\nAt the very least you should trigger a checkpoint before or after pgbench\n-i. The performance between having a checkpoint during the pgbench run or not\nis substantially different, and if you're not triggering one explicitly, it'll\nbe up to random chance whether it happens during the run or not. It's less\nimportant if you run pgbench for an extended time, but if you do it just for\n10s...\n\nE.g. on my workstation, if there's no checkpoint, I get around 633 TPS across\nrepeated runs, but if there's a checkpoint between pgbench -i and the pgbench\nrun, it's around 615 TPS.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 17 Oct 2023 21:10:00 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Some performance degradation in REL_16 vs REL_15" }, { "msg_contents": "=?utf-8?B?6YKx5a6H6Iiq?= <[email protected]> writes:\n> I wrote a script and test on branch REL_[10-16]_STABLE, and do see performance drop in REL_13_STABLE, which is about 1~2%.\n\nI'm really skeptical that we should pay much attention to these numbers.\nYou've made several of the mistakes that we typically tell people not to\nmake when using pgbench:\n\n* scale <= number of sessions means you're measuring a lot of\nrow-update contention\n\n* once you crank up the scale enough to avoid that problem, running\nwith the default shared_buffers seems like a pretty poor choice\n\n* 10-second runtime is probably an order of magnitude too small\nto get useful, reliable numbers\n\nOn top of all that, discrepancies on the order of a percent or two\ncommonly arise from hard-to-control-for effects like the cache\nalignment of hot spots in different parts of the code. That means\nthat you can see changes of that size from nothing more than\nday-to-day changes in completely unrelated parts of the code.\n\nI'd get excited about say a 10% performance drop, because that's\nprobably more than noise; but I'm not convinced that any of the\ndifferences you show here are more than noise.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 18 Oct 2023 00:14:16 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Some performance degradation in REL_16 vs REL_15" }, { "msg_contents": "Hi, Andres!\n\nThanks for your patience and advice.\n\nOn 18.10.2023 07:10, Andres Freund wrote:\n>> On 13.10.2023 05:05, Andres Freund wrote:\n>>> Could you provide a bit more details about how you ran the benchmark? The\n>>> reason I am asking is that ~330 TPS is pretty slow for -c20. Even on spinning\n>>> rust and using the default settings, I get considerably higher results.\n>>>\n>>> Oh - I do get results closer to yours if I use pgbench scale 1, causing a lot\n>>> of row level contention. What scale did you use?\n\n>> I use default scale of 1.\n> \n> That means you're largely going to be bottlenecked due to row level\n> contention. For read/write pgbench you normally want to use a scale that's\n> bigger than the client count, best by at least 2x.\n\nI performed differential measurements with -s21 to obtain scale > number\nin accordance with:\n> On 18.10.2023 07:14, Tom Lane wrote:\n>> * scale <= number of sessions means you're measuring a lot of\n>> row-update contention\n>> \nAnd plan to do the same for -s40.\n\n> Have you built postgres with assertions enabled or such?\nI ran the debug configuration with --enable-cassert and -O0.\nAnd plan to do the same without asserts and -O2 soon.\n\n> What is the server configuration for both versions?\nIn all measurements i used default postgresql.conf files. Please see the detailed reproduction below.\n\n> \n>> And run the command sequence:\n>> $pgbench -i bench\n>> $sleep 1\n>> $pgbench -c20 -T10 -j8\n> \n> I assume you also specify the database name here, given you specified it for\n> pgbench -i?\n\nSorry, there were a two mistakes in the command sequence that I copied in the previous letter\ncompared to the real one. I did not completely copy it from the sighting, not from the real script.\nIn fact i used pgbench -c20 -T20 -j8 bench. -T10 is really unstable and was used only for\npreliminary measurements to make them more faster. Please see my first letter in this thread.\n \nHere are the detailed description of the last measurements that were performed on my pc.\nWith a scale of -s21, the difference between REL_16 and REL_15 is rather small, but it has\na cumulative effect from version to version and reaches a maximum between\nREL_10_STABLE and REL_16_STABLE.\nThe measurement procedure was as follows;\n- rebuild from sources and reinstall server with\n./configure --enable-debug --enable-cassert --with-perl \\\n\t--with-icu --enable-depend --enable-tap-tests\n- init the new empty db with initdb -k -D $INSTDIR/data\nSee postgresql10.conf and postgresql16.conf attached.\n- run the series of 100 measurements with the scripts like that for REL_16:\ndate && rm -f result.txt && for ((i=0; i<100; i++)); do pgbench -U postgres -i -s21 bench> /dev/null 2>&1;\npsql -U postgres -d bench -c \"checkpoint\"; RES=$(pgbench -U postgres -c20 -T20 -j8 bench 2>&1 | awk '/tps[[:space:]]/ { print $3 }');\necho $RES >> result.txt; echo Measurement N$i done. TPS=$RES; done; cat result.txt\nor\ndate && rm -f result.txt && for ((i=0; i<100; i++)); do pgbench -U postgres -i -s21 bench> /dev/null 2>&1;\npsql -U postgres -d bench -c \"checkpoint\"; RES=$(pgbench -U postgres -c20 -T20 -j8 bench 2>&1 | awk '/excluding/ { print $3 }');\necho $RES >> result.txt; echo Measurement N$i done. TPS=$RES; done; cat result.txt\nfor REL_10 respectively.\n\nFor REL_16_STABLE at 7cc2f59dd the average TPS was: 2020+-70,\nfor REL_10_STABLE at c18c12c98 - 2260+-70\n\nThe percentage difference was approximately 11%.\nPlease see the 16vs10.png picture with the graphical representation of the data obtained.\nAlso there are the raw data in the raw_data_s21.txt.\n\nIn some days i hope to perform additional measurements that were mentioned above in this letter.\nIt would be interesting to establish the reason for this difference. And i would be very grateful\nif you could advise me what other settings can be tweaked.\n\nWith the best wishes!\n\n-- \nAnton A. Melnikov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Mon, 30 Oct 2023 15:28:53 +0300", "msg_from": "\"Anton A. Melnikov\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Some performance degradation in REL_16 vs REL_15" }, { "msg_contents": "Hi,\n\nOn 2023-10-30 15:28:53 +0300, Anton A. Melnikov wrote:\n> For REL_16_STABLE at 7cc2f59dd the average TPS was: 2020+-70,\n> for REL_10_STABLE at c18c12c98 - 2260+-70\n> \n> The percentage difference was approximately 11%.\n> Please see the 16vs10.png picture with the graphical representation of the data obtained.\n> Also there are the raw data in the raw_data_s21.txt.\n> \n> In some days i hope to perform additional measurements that were mentioned above in this letter.\n> It would be interesting to establish the reason for this difference. And i would be very grateful\n> if you could advise me what other settings can be tweaked.\n\nThere's really no point in comparing peformance with assertions enabled\n(leaving aside assertions that cause extreme performance difference, making\ndevelopment harder). We very well might have added assertions making things\nmore expensive, without affecting performance in optimized/non-assert builds.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 30 Oct 2023 12:51:09 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Some performance degradation in REL_16 vs REL_15" }, { "msg_contents": "On 30.10.2023 22:51, Andres Freund wrote:\n\n> There's really no point in comparing peformance with assertions enabled\n> (leaving aside assertions that cause extreme performance difference, making\n> development harder). We very well might have added assertions making things\n> more expensive, without affecting performance in optimized/non-assert builds.\n> \n\nThanks for advice! I repeated measurements on my pc without asserts and CFLAGS=\"-O2\".\nAlso i reduced the number of clients to -c6 to leave a reserve of two cores\nfrom my 8-core cpu and used -j6 accordingly.\n\nThe results were similar: on my pc REL_10_STABLE(c18c12c9) was faster than REL_16_STABLE(07494a0d)\nbut the effect became weaker:\n REL_10_STABLE gives ~965+-15 TPS(+-2%) while REL_16_STABLE gives ~920+-30 TPS(+-3%) in the test: pgbench -s8 -c6 -T20 -j6\nSo 10 is faster than 16 by ~5%. (see raw-my-pc.txt attached for the raw data)\n\nThen, thanks to my colleagues, i carried out similar measurements on the more powerful 24-core standalone server.\nThe REL_10_STABLE gives 8260+-100 TPS(+-1%) while REL_16_STABLE gives 8580+-90 TPS(+-1%) in the same test: pgbench -s8 -c6 -T20 -j6\nThe test gave an opposite result!\nOn that server the 16 is faster than 10 by ~4%.\n\nWhen i scaled the test on server to get the same reserve of two cores, the results became like this:\nREL_10_STABLE gives ~16000+-300 TPS(+-2%) while REL_16_STABLE gives ~18500+-200 TPS(+-1%) in the scaled test: pgbench -s24 -c22 -T20 -j22\nHere the difference is more noticeable: 16 is faster than 10 by ~15%. (raw-server.txt)\n\nThe configure options and test scripts on my pc and server were the same:\nexport CFLAGS=\"-O2\"\n./configure --enable-debug --with-perl --with-icu --enable-depend --enable-tap-tests\n#reinstall\n#reinitdb\n#create database bench\nfor ((i=0; i<100; i++)); do pgbench -U postgres -i -s8 bench> /dev/null 2>&1;\npsql -U postgres -d bench -c \"checkpoint\"; RES=$(pgbench -U postgres -c6 -T20 -j6 bench;\n\nConfigurations:\nmy pc: 8-core AMD Ryzen 7 4700U @ 1.4GHz, 64GB RAM, NVMe M.2 SSD drive.\nLinux 5.15.0-88-generic #98~20.04.1-Ubuntu SMP Mon Oct 9 16:43:45 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux\nserver: 2x 12-hyperthreaded cores Intel(R) Xeon(R) CPU X5675 @ 3.07GHz, 24GB RAM, RAID from SSD drives.\nLinux 5.10.0-21-amd64 #1 SMP Debian 5.10.162-1 (2023-01-21) x86_64 GNU/Linux\n\nI can't understand why i get the opposite results on my pc and on the server. It is clear that the absolute\nTPS values will be different for various configurations. This is normal. But differences?\nIs it unlikely that some kind of reference configuration is needed to accurately\nmeasure the difference in performance. Probably something wrong with my pc, but now\ni can not figure out what's wrong.\n\nWould be very grateful for any advice or comments to clarify this problem.\n\nWith the best wishes!\n\n--\nAnton A. Melnikov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Wed, 15 Nov 2023 11:33:44 +0300", "msg_from": "\"Anton A. Melnikov\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Some performance degradation in REL_16 vs REL_15" }, { "msg_contents": "\"Anton A. Melnikov\" <[email protected]> writes:\n> I can't understand why i get the opposite results on my pc and on the server. It is clear that the absolute\n> TPS values will be different for various configurations. This is normal. But differences?\n> Is it unlikely that some kind of reference configuration is needed to accurately\n> measure the difference in performance. Probably something wrong with my pc, but now\n> i can not figure out what's wrong.\n\n> Would be very grateful for any advice or comments to clarify this problem.\n\nBenchmarking is hard :-(. IME it's absolutely typical to see\nvariations of a couple of percent even when \"nothing has changed\",\nfor example after modifying some code that's nowhere near any\nhot code path for the test case. I usually attribute this to\ncache effects, such as a couple of bits of hot code now sharing or\nnot sharing a cache line. If you use two different compiler versions\nthen that situation is likely to occur all over the place even with\nexactly the same source code. NUMA creates huge reproducibility\nproblems too on multisocket machines (which your server is IIUC).\nWhen I had a multisocket workstation I'd usually bind all the server\nprocesses to one socket if I wanted more-or-less-repeatable numbers.\n\nI wouldn't put a lot of faith in the idea that measured pgbench\ndifferences of up to several percent are meaningful at all,\nespecially when comparing across different hardware and different\nOS+compiler versions. There are too many variables that have\nlittle to do with the theoretical performance of the source code.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 15 Nov 2023 10:09:06 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Some performance degradation in REL_16 vs REL_15" }, { "msg_contents": "Hi,\n\nOn 2023-11-15 11:33:44 +0300, Anton A. Melnikov wrote:\n> The configure options and test scripts on my pc and server were the same:\n> export CFLAGS=\"-O2\"\n> ./configure --enable-debug --with-perl --with-icu --enable-depend --enable-tap-tests\n> #reinstall\n> #reinitdb\n> #create database bench\n> for ((i=0; i<100; i++)); do pgbench -U postgres -i -s8 bench> /dev/null 2>&1;\n> psql -U postgres -d bench -c \"checkpoint\"; RES=$(pgbench -U postgres -c6 -T20 -j6 bench;\n\nEven with scale 8 you're likely significantly impacted by contention. And\nobviously WAL write latency. See below for why that matters.\n\n\n\n> I can't understand why i get the opposite results on my pc and on the server. It is clear that the absolute\n> TPS values will be different for various configurations. This is normal. But differences?\n> Is it unlikely that some kind of reference configuration is needed to accurately\n> measure the difference in performance. Probably something wrong with my pc, but now\n> i can not figure out what's wrong.\n\nOne very common reason for symptoms like this are power-saving measures by the\nCPU. In workloads where the CPU is not meaningfully utilized, the CPU will go\ninto a powersaving mode - which can cause workloads that are latency sensitive\nto be badly affected. Both because initially the cpu will just work at a\nlower frequency and because it takes time to shift to a higher latency.\n\n\nHere's an example:\nI bound the server and psql to the same CPU core (nothing else is allowed to\nuse that core. And ran the following:\n\n\\o /dev/null\nSELECT 1; SELECT 1; SELECT 1; SELECT pg_sleep(0.1); SELECT 1; SELECT 1; SELECT 1;\nTime: 0.181 ms\nTime: 0.085 ms\nTime: 0.071 ms\nTime: 100.474 ms\nTime: 0.153 ms\nTime: 0.077 ms\nTime: 0.069 ms\n\nYou can see how the first query timing was slower, the next two were faster,\nand then after the pg_sleep() it's slow again.\n\n\n# tell the CPU to optimize for performance not power\ncpupower frequency-set --governor performance\n\n# disable going to lower power states\ncpupower idle-set -D0\n\n# disable turbo mode for consistent performance\necho 1 > /sys/devices/system/cpu/intel_pstate/no_turbo\n\nNow the timings are:\nTime: 0.038 ms\nTime: 0.028 ms\nTime: 0.025 ms\nTime: 1000.262 ms (00:01.000)\nTime: 0.027 ms\nTime: 0.024 ms\nTime: 0.023 ms\n\nLook, fast and reasonably consistent timings.\n\nSwitching back:\nTime: 0.155 ms\nTime: 0.123 ms\nTime: 0.074 ms\nTime: 1001.235 ms (00:01.001)\nTime: 0.120 ms\nTime: 0.077 ms\nTime: 0.068 ms\n\n\nThe perverse thing is that this often means that *reducing* the number of\ninstructions executed yields to *worse* behaviour when under non-sustained\nload, because from the CPUs point of view there is less need to increase clock\nspeed.\n\n\nTo show how much of a difference that can make, I ran pgbench with a single\nclient on one core, and the server on another (so the CPU is idle inbetween):\nnumactl --physcpubind 11 pgbench -n -M prepared -P1 -S -c 1 -T10\n\nWith power optimized configuration:\nlatency average = 0.035 ms\nlatency stddev = 0.002 ms\ninitial connection time = 5.255 ms\ntps = 28434.334672 (without initial connection time)\n\nWith performance optimized configuration:\nlatency average = 0.025 ms\nlatency stddev = 0.001 ms\ninitial connection time = 3.544 ms\ntps = 40079.995935 (without initial connection time)\n\nThat's a whopping 1.4x in throughput!\n\n\nNow, the same thing, except that I used a custom workload where pgbench\ntransactions are executed in a pipelined fashion, 100 read-only transactions\nin one script execution:\n\nWith power optimized configuration:\nlatency average = 1.055 ms\nlatency stddev = 0.125 ms\ninitial connection time = 6.915 ms\ntps = 947.985286 (without initial connection time)\n\n(this means we actually executed 94798.5286 readonly pgbench transactions/s)\n\nWith performance optimized configuration:\nlatency average = 1.376 ms\nlatency stddev = 0.083 ms\ninitial connection time = 3.759 ms\ntps = 726.849018 (without initial connection time)\n\nSuddenly the super-duper performance optimized settings are worse (but note\nthat stddev is down)! I suspect the problem is that now because we disabled\nidle states, the cpu ends up clocking *lower*, due to power usage.\n\nIf I just change the relevant *cores* to the performance optimized\nconfiguration:\n\ncpupower -c 10,11 idle-set -D0; cpupower -c 10,11 frequency-set --governor performance\n\nlatency average = 0.940 ms\nlatency stddev = 0.061 ms\ninitial connection time = 3.311 ms\ntps = 1063.719116 (without initial connection time)\n\nIt wins again.\n\n\nNow, realistically you'd never use -D0 (i.e. disabling all downclocking, not\njust lower states) - the power differential is quite big and as shown here it\ncan hurt performance as well.\n\nOn an idle system, looking at the cpu power usage with:\npowerstat -D -R 5 1000\n\n Time User Nice Sys Idle IO Run Ctxt/s IRQ/s Fork Exec Exit Watts pkg-0 dram pkg-1\n09:45:03 0.6 0.0 0.2 99.2 0.0 1 2861 2823 0 0 0 46.84 24.82 3.68 18.33\n09:45:08 1.0 0.0 0.1 99.0 0.0 2 2565 1602 0 0 0 54.78 28.92 4.74 21.12\n09:45:13 0.8 0.0 0.3 98.9 0.0 1 3769 2322 0 0 0 55.65 29.43 4.72 21.50\n09:45:18 0.8 0.0 0.1 99.1 0.0 3 2513 1479 0 0 0 51.95 27.47 4.23 20.24\n09:45:23 0.6 0.0 0.1 99.3 0.0 1 2282 1448 0 0 0 49.44 26.12 3.91 19.41\n09:45:28 0.8 0.0 0.1 99.1 0.0 2 2422 1465 0 0 0 51.79 27.33 4.27 20.19\n09:45:33 0.9 0.0 0.1 99.0 0.0 2 2358 1566 0 0 0 55.05 29.03 4.73 21.29\n09:45:38 0.6 0.0 0.1 99.4 0.0 1 2976 4207 0 0 0 54.38 29.08 4.02 21.28\n09:45:43 1.2 0.0 0.2 98.6 0.0 2 3988 37670 0 0 0 125.51 64.15 7.97 53.39\n09:45:48 0.6 0.0 0.2 99.2 0.0 2 3263 40000 0 0 0 126.31 63.84 7.97 54.50\n09:45:53 0.4 0.0 0.0 99.6 0.0 1 2333 39716 0 0 0 125.94 63.64 7.90 54.40\n09:45:58 0.3 0.0 0.0 99.6 0.0 1 2783 39795 0 0 0 125.93 63.58 7.90 54.44\n09:46:03 0.6 0.0 0.2 99.2 0.0 1 3081 24910 0 0 0 93.55 47.69 6.10 39.75\n09:46:08 0.5 0.0 0.1 99.4 0.0 1 2623 1356 0 0 0 42.65 22.59 3.18 16.89\n\n(interesting columns are Watts, pkg-0, dram, pkg-1)\n\nInitially this was with cpupower idle-set -E, then I ran cpupower idle-set\n-D0, and then cpupower idle-set -E - you can easily see where.\n\n\nIf I use a more sensible -D5 system-wide and run the pgbench from earlier, I\nget the performance *and* sensible power usage:\n\nlatency average = 0.946 ms\nlatency stddev = 0.063 ms\ninitial connection time = 3.482 ms\ntps = 1057.413946 (without initial connection time)\n\n09:50:36 0.4 0.0 0.1 99.5 0.0 1 2593 1206 0 0 0 42.63 22.47 3.18 16.97\n\n\nTo get back to your benchmark: Because you're measuring a highly contended\nsystem, where most of the time postgres will just wait for\n a) row-level locks to be released\n b) WAL writes to finish (you're clearly not using a very fast disk)\nthe CPU would have plenty time to clock down.\n\n\nBenchmarking sucks. Modern hardware realities suck.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 15 Nov 2023 10:04:33 -0800", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Some performance degradation in REL_16 vs REL_15" }, { "msg_contents": "Hi,\n\nOn 2023-11-15 10:09:06 -0500, Tom Lane wrote:\n> \"Anton A. Melnikov\" <[email protected]> writes:\n> > I can't understand why i get the opposite results on my pc and on the server. It is clear that the absolute\n> > TPS values will be different for various configurations. This is normal. But differences?\n> > Is it unlikely that some kind of reference configuration is needed to accurately\n> > measure the difference in performance. Probably something wrong with my pc, but now\n> > i can not figure out what's wrong.\n>\n> > Would be very grateful for any advice or comments to clarify this problem.\n>\n> Benchmarking is hard :-(.\n\nIndeed.\n\n\n> IME it's absolutely typical to see variations of a couple of percent even\n> when \"nothing has changed\", for example after modifying some code that's\n> nowhere near any hot code path for the test case. I usually attribute this\n> to cache effects, such as a couple of bits of hot code now sharing or not\n> sharing a cache line.\n\nFWIW, I think we're overusing that explanation in our community. Of course you\ncan encounter things like this, but the replacement policies of cpu caches\nhave gotten a lot better and the caches have gotten bigger too.\n\nIME this kind of thing is typically dwarfed by much bigger variations from\nthings like\n\n- cpu scheduling - whether the relevant pgbench thread is colocated on the\n same core as the relevant backend can make a huge difference,\n particularly when CPU power saving modes are not disabled. Just looking at\n tps from a fully cached readonly pgbench, with a single client:\n\n Power savings enabled, same core:\n 37493\n\n Power savings enabled, different core:\n 28539\n\n Power savings disabled, same core:\n 38167\n\n Power savings disabled, different core:\n 37365\n\n\n- can transparent huge pages be used for the executable mapping, or not\n\n On newer kernels linux (and some filesystems) can use huge pages for the\n executable. To what degree that succeeds is a large factor in performance.\n\n Single threaded read-only pgbench\n\n postgres mapped without huge pages:\n 37155 TPS\n\n with 2MB of postgres as huge pages:\n 37695 TPS\n\n with 6MB of postgres as huge pages:\n 42733 TPS\n\n The really annoying thing about this is that entirely unpredictable whether\n huge pages are used or not. Building the same way, sometimes 0, sometimes 2MB,\n sometimes 6MB are mapped huge. Even though the on-disk contents are\n precisely the same. And it can even change without rebuilding, if the\n binary is evicted from the page cache.\n\n This alone makes benchmarking extremely annoying. It basically can't be\n controlled and has huge effects.\n\n\n- How long has the server been started\n\n If e.g. once you run your benchmark on the first connection to a database,\n and after a restart not (e.g. autovacuum starts up beforehand), you can get\n a fairly different memory layout and cache situation, due to [not] using the\n relcache init file. If not, you'll have a catcache that's populated,\n otherwise not.\n\n Another mean one is whether you start your benchmark within a relatively\n short time of the server starting. Readonly pgbench with a single client,\n started immediately after the server:\n\n progress: 12.0 s, 37784.4 tps, lat 0.026 ms stddev 0.001, 0 failed\n progress: 13.0 s, 37779.6 tps, lat 0.026 ms stddev 0.001, 0 failed\n progress: 14.0 s, 37668.2 tps, lat 0.026 ms stddev 0.001, 0 failed\n progress: 15.0 s, 32133.0 tps, lat 0.031 ms stddev 0.113, 0 failed\n progress: 16.0 s, 37564.9 tps, lat 0.027 ms stddev 0.012, 0 failed\n progress: 17.0 s, 37731.7 tps, lat 0.026 ms stddev 0.001, 0 failed\n\n There's a dip at 15s, odd - turns out that's due to bgwriter writing a WAL\n record, which triggers walwriter to write it out and then initialize the\n whole WAL buffers with 0s - happens once. In this case I've exagerated the\n effect a bit by using a 1GB wal_buffers, but it's visible otherwise too.\n Whether your benchmark period includes that dip or not adds a fair bit of\n noise.\n\n You can even see the effects of autovacuum workers launching - even if\n there's nothing to do! Not as a huge dip, but enough to add some \"run to\n run\" variation.\n\n\n- How much other dirty data is there in the kernel pagecache. If you e.g. just\n built a new binary, even with just minor changes, the kernel will need to\n flush those pages eventually, which may contend for IO and increases page\n faults.\n\n Rebuilding an optimized build generates something like 1GB of dirty\n data. Particularly with ccache, that'll typically not yet be flushed by the\n time you run a benchmark. That's not nothing, even with a decent NVMe SSD.\n\n- many more, unfortunately\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 15 Nov 2023 12:21:33 -0800", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Some performance degradation in REL_16 vs REL_15" }, { "msg_contents": "Hi!\n\nThank you very much for pointing out these important moments:\n\n\"NUMA\" effects:\nOn 15.11.2023 18:09, Tom Lane wrote:\n> NUMA creates huge reproducibility\n> problems too on multisocket machines (which your server is IIUC).\n> When I had a multisocket workstation I'd usually bind all the server\n> processes to one socket if I wanted more-or-less-repeatable numbers.\n>\n\n\"frequency\" effects:\nOn 15.11.2023 21:04, Andres Freund wrote:\n>\n> One very common reason for symptoms like this are power-saving measures by the\n> CPU. In workloads where the CPU is not meaningfully utilized, the CPU will go\n> into a powersaving mode - which can cause workloads that are latency sensitive\n> to be badly affected. Both because initially the cpu will just work at a\n> lower frequency and because it takes time to shift to a higher latency.\n\n> To get back to your benchmark: Because you're measuring a highly contended\n> system, where most of the time postgres will just wait for\n> a) row-level locks to be released\n> b) WAL writes to finish (you're clearly not using a very fast disk)\n> the CPU would have plenty time to clock down.\n\nAnd \"huge page\" effect:\nOn 15.11.2023 23:21, Andres Freund wrote:\n> The really annoying thing about this is that entirely unpredictable whether\n> huge pages are used or not. Building the same way, sometimes 0, sometimes 2MB,\n> sometimes 6MB are mapped huge. Even though the on-disk contents are\n> precisely the same. And it can even change without rebuilding, if the\n> binary is evicted from the page cache.\n\nAs for \"NUMA\" and \"frequency\" effects the following steps were made.\n\nAs directed by Andres i really found that CPU cores sometimes were falling down in low-frequency\nstates during the test. See server-C1-C1E-falls.mkv attached.\nAdditionally, with my colleague Andrew Bille [<andrewbille(at)gmail(dot)com>]\nwe found an interesting thing.\nWe disabled software performance management in the server BIOS\nand completely prohibited C-states here. But still,\nthe processor cores sometimes fell to C1 and C1E with a frequency decrease.\nSo we set back the software management in the BIOS and an additional command\necho <scaling_max_freq> | sudo tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_min_freq\nsolved this problem. The server cores began to operate at a fixed\nfrequency as shown in: server-stable-freq.mkv\nAnd a similar result was obtained on my pc: mypc-stable-freq.mkv\n\nThen i brought the server and my pc hardware configuration as close as possible to each other.\nThere are eight real cores on my pc without hyperthreading. On the other hand there are\nonly six real cores in every cpu on the server. So it is impossible to obtain 8 real cores on single cpu here.\nTo align both configurations i disable two of the 8 cores on my pc with numactl. On the server the second cpu was also disabled\nthrough numactl while the hyperthreading was disabled in the BIOS.\nFinally we got a similar configuration of single six-core processor both on the server and on my pc.\n\nFull set of the configuration commands for server was like that:\nnumactl --cpunodebind=0 --membind=0 --physcpubind=1,3,5,7,9,11 bash\nsudo cpupower frequency-set -g performance\nsudo cpupower idle-set -D0\necho 3059000 | sudo tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_min_freq\n(Turbo Boost was disabled in BIOS.)\nAnd for my pc like that:\nnumactl --cpunodebind=0 --membind=0 --physcpubind=0,1,2,3,4,5 bash\nsudo cpupower frequency-set -g performance\nsudo cpupower idle-set -D0\necho 2000000 | sudo tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_min_freq\necho 0 | sudo tee /sys/devices/system/cpu/cpufreq/boost\n\nThere are the numactl and cpupower output after configuration in\nserver-hw.txt and mypc-hw.txt respectively.\n\nTo eliminate the \"huge page\" effect i use huge_pages = off\nin the all further differential measurements.\n\nAs for the other effects that Andres mentioned: dips due to bgwriter, autovacuum,\nkernel pagecache effects and other possible noise,\ni minimize the effect of them statistically using the following measurement sequence.\nOne series of such a measurement consists of:\n1) Remove install and data dirs;\n2) Fully reconfigure, rebuild and reinstall server;\n3) Reinit new db;\n4) Launch pgbench for 20 seconds in a loop with intermediate checkpoints, as you advised.\nA few tens measurements totally.\n5) Sort the results and remove points with considerable (more than 1%) deviation from tails.\nThe algorithm was as follows:\n if the fist or last point deviates from the average by more than 1% remove it;\n recalculate the average value;\n repeat from the begin.\nRemoved points marked with \"x\" in the raw data files raw-server.txt and raw-myPC.txt.\n6) Get a result series of TPS data and average (expected) value\n7) Calculate the standard deviation (error)\nEach series takes approximately 20 minutes.\n\nThe general order of measurements was as follows:\n1) Compare REL_10_STABLE vs REL_16_STABLE on my pc and take difference1 in TPS;\n2) Compare REL_10_STABLE vs REL_16_STABLE at the same commits on server and take the difference2;\n3) Compare difference1 and difference2 with each other.\n\nIn my first attempt i got exactly the opposite results. On my pc REL_10 was noticeably\nfaster than REL_16 while on the server the latter wins.\n\nAfter repeating measurements using the updated configurations and methods described above\ni got the following results:\n\nOn the server:\nREL_10_STABLE gives ~6130 TPS(+-0.3%) while REL_16_STABLE gives ~6350 TPS(+-0.3%) in the test: pgbench -s8 -c4 -T20 -j4\nIn comparison with previous attempt the standard deviation became smaller by 3 times but difference\nin speed remained almost the same:\nthe 16th is faster than 10th by ~3,6%. See 10vs16-server.png\n(previous was ~4% in the pgbench -s8 -c6 -T20 -j6)\n\nOn my pc :\nREL_10_STABLE gives ~783 TPS(+-0.4%) while REL_16_STABLE gives ~784 TPS(+-0.4%) in the same test: pgbench -s8 -c4 -T20 -j4\nHere the difference in comparison with last attempt is significant. 10th and 16th no longer differ in speed while\nthe standard deviation became smaller by 5 times. See 10vs16-mypc.png\n\nThe results obtained on server seems more reliable as they are consistent with the previous ones\nbut i would like to figure out why i don’t see a difference in TPS on my PC.\nWill be glad to any comments and and concerns.\n\nAnd want to thank again for the previous advice. Owing to them the measurements have become more stable.\n\nMerry Christmas and the best regards!\n\n-- \nAnton A. Melnikov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Tue, 26 Dec 2023 07:38:58 +0300", "msg_from": "\"Anton A. Melnikov\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Some performance degradation in REL_16 vs REL_15" }, { "msg_contents": "On 26.12.2023 07:38, Anton A. Melnikov wrote:\n> The results obtained on server seems more reliable as they are consistent with the previous ones\n> but i would like to figure out why i don’t see a difference in TPS on my PC.\n\nSeems i found the solution.\nSince the possible reason could be an I/O bottleneck, as firstly pointed by Thomas in [1] in similar measurements,\ni took on my PC the same approach as in this thread [2] using a 12GB ramdisk from 64GB of my total RAM.\n\nThe REL_10_STABLE at c18c12c983 gives ~6900+-100 TPS (+-1.4%), while\nREL_16_STABLE at 07494a0df gives ~7200+-90 TPS (+-1.3%). See raw data in the raw-myPC.txt\nand graphical representation in the 16vs10-mypc.png\n\nREL_16_STABLE is faster than REL_10_STABLE by ~4%.\nSo the results on my PC became in consistent with the ones obtained on the server.\n\nAlso i performed comparative measurements on the server with ramdrive for all current versions as of January 27th.\nPlease, see the dependence of the average TPS on the version number at the 10-to-master.png graph.\nThe raw data are here: raw-server.txt\n\nBrief results are as follows:\n1) Differences between 12th, 13th, 14th, 15th and master are subtle and within ~1%.\n2) 11th slower than 12th..master by ~2%\n3) 10th slower than 12th..master by ~4%\nThe standard deviation in all measurements except 10th and 11th was ~0.3%, for 11th ~0,4%, for 10th ~0,5%.\n\nThus, the problem in the subject of this thread is actually not relevant, there is no performance\ndegradation in REL_16 vs REL_15.\n\nBut i'm worried about such a question. If we do not take into account possible fluctuations of a few percent\nfrom commit to commit we can gradually accumulate a considerable loss in performance without noticing it.\nAnd then it will be quite difficult to establish its origin.\nPerhaps i'm missing something here. If there any thoughts on this question\nwould be glad to take them into account.\n\n\nWith the best wishes!\n\n-- \nAnton A. Melnikov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n[1] https://www.postgresql.org/message-id/98646b96-6dcf-8d8a-3daf-837f25f8b1e3%40enterprisedb.com\n[2] https://www.postgresql.org/message-id/a74b5a91-d7c1-4138-86df-371c5e2b2be3%40postgrespro.ru", "msg_date": "Mon, 5 Feb 2024 14:45:08 +0300", "msg_from": "\"Anton A. Melnikov\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Some performance degradation in REL_16 vs REL_15" } ]
[ { "msg_contents": "The attached patch adds special-case expression steps for common sets of steps\nin the executor to shave a few cycles off during execution, and make the JIT\ngenerated code simpler.\n\n* Adds EEOP_FUNCEXPR_STRICT_1 and EEOP_FUNCEXPR_STRICT_2 for function calls of\n strict functions with 1 or 2 arguments (EEOP_FUNCEXPR_STRICT remains used for\n > 2 arguments).\n* Adds EEOP_AGG_STRICT_INPUT_CHECK_ARGS_1 which is a special case for the\n common case of one arg aggs.\n* Replace EEOP_DONE with EEOP_DONE_RETURN and EEOP_DONE_NO_RETURN to be able to\n skip extra setup for steps which are only interested in the side effects.\n\nStressing the EEOP_FUNCEXPR_STRICT_* steps specifically shows a 1.5%\nimprovement and pgbench over the branch shows a ~1% improvement in TPS (both\nmeasured over 6 runs with outliers removed).\n\nEEOP_FUNCEXPR_STRICT_* (10M iterations):\n master : (7503.317, 7553.691, 7634.524) \n patched : (7422.756, 7455.120, 7492.393)\n\npgbench:\n master : (3653.83, 3792.97, 3863.70)\n patched : (3743.04, 3830.02, 3869.80)\n\nThis patch was extracted from a larger body of work from Andres [0] aiming at\nproviding the necessary executor infrastructure for making JIT expression\ncaching possible. This patch, and more which are to be submitted, is however\nseparate in the sense that it is not part of the infrastructure, it's an\nimprovements on its own.\n\nThoughts?\n\n--\nDaniel Gustafsson\n\n[0]: https://postgr.es/m/[email protected]", "msg_date": "Thu, 12 Oct 2023 11:48:35 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": true, "msg_subject": "Special-case executor expression steps for common combinations" }, { "msg_contents": "On 12/10/2023 12:48, Daniel Gustafsson wrote:\n> The attached patch adds special-case expression steps for common sets of steps\n> in the executor to shave a few cycles off during execution, and make the JIT\n> generated code simpler.\n> \n> * Adds EEOP_FUNCEXPR_STRICT_1 and EEOP_FUNCEXPR_STRICT_2 for function calls of\n> strict functions with 1 or 2 arguments (EEOP_FUNCEXPR_STRICT remains used for\n> > 2 arguments).\n> * Adds EEOP_AGG_STRICT_INPUT_CHECK_ARGS_1 which is a special case for the\n> common case of one arg aggs.\n\nAre these relevant when JITting? I'm a little sad if the JIT compiler \ncannot unroll these on its own. Is there something we could do to hint \nit, so that it could treat the number of arguments as a constant?\n\nI understand that this can give a small boost in interpreter mode, so \nmaybe we should do it in any case. But I'd like to know if we're missing \na trick with the JITter, before we mask it with this.\n\n> * Replace EEOP_DONE with EEOP_DONE_RETURN and EEOP_DONE_NO_RETURN to be able to\n> skip extra setup for steps which are only interested in the side effects.\n\nI'm a little surprised if this makes a measurable performance \ndifference, but sure, why not. It seems nice to be more explicit when \nyou don't expect a return value.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Thu, 12 Oct 2023 13:24:27 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Special-case executor expression steps for common combinations" }, { "msg_contents": "On Thu, 12 Oct 2023 at 22:54, Daniel Gustafsson <[email protected]> wrote:\n> EEOP_FUNCEXPR_STRICT_* (10M iterations):\n> master : (7503.317, 7553.691, 7634.524)\n> patched : (7422.756, 7455.120, 7492.393)\n>\n> pgbench:\n> master : (3653.83, 3792.97, 3863.70)\n> patched : (3743.04, 3830.02, 3869.80)\n>\n> Thoughts?\n\nDid any of these tests compile the expression with JIT?\n\nIf not, how does the performance compare for a query that JITs the expression?\n\nDavid\n\n\n", "msg_date": "Fri, 13 Oct 2023 01:04:21 +1300", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Special-case executor expression steps for common combinations" }, { "msg_contents": "Hi,\n\nOn 2023-10-12 13:24:27 +0300, Heikki Linnakangas wrote:\n> On 12/10/2023 12:48, Daniel Gustafsson wrote:\n> > The attached patch adds special-case expression steps for common sets of steps\n> > in the executor to shave a few cycles off during execution, and make the JIT\n> > generated code simpler.\n> > \n> > * Adds EEOP_FUNCEXPR_STRICT_1 and EEOP_FUNCEXPR_STRICT_2 for function calls of\n> > strict functions with 1 or 2 arguments (EEOP_FUNCEXPR_STRICT remains used for\n> > > 2 arguments).\n> > * Adds EEOP_AGG_STRICT_INPUT_CHECK_ARGS_1 which is a special case for the\n> > common case of one arg aggs.\n> \n> Are these relevant when JITting? I'm a little sad if the JIT compiler cannot\n> unroll these on its own. Is there something we could do to hint it, so that\n> it could treat the number of arguments as a constant?\n\nI think it's mainly important for interpreted execution.\n\n\n> > skip extra setup for steps which are only interested in the side effects.\n> \n> I'm a little surprised if this makes a measurable performance difference,\n> but sure, why not. It seems nice to be more explicit when you don't expect a\n> return value.\n\nIIRC this is more interesting for JIT than the above, because it allows LLVM\nto know that the return value isn't needed and thus doesn't need to be\ncomputed.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 12 Oct 2023 10:52:04 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Special-case executor expression steps for common combinations" }, { "msg_contents": "> On 12 Oct 2023, at 19:52, Andres Freund <[email protected]> wrote:\n> On 2023-10-12 13:24:27 +0300, Heikki Linnakangas wrote:\n>> On 12/10/2023 12:48, Daniel Gustafsson wrote:\n\n>>> The attached patch adds special-case expression steps for common sets of steps\n>>> in the executor to shave a few cycles off during execution, and make the JIT\n>>> generated code simpler.\n>>> \n>>> * Adds EEOP_FUNCEXPR_STRICT_1 and EEOP_FUNCEXPR_STRICT_2 for function calls of\n>>> strict functions with 1 or 2 arguments (EEOP_FUNCEXPR_STRICT remains used for\n>>>> 2 arguments).\n>>> * Adds EEOP_AGG_STRICT_INPUT_CHECK_ARGS_1 which is a special case for the\n>>> common case of one arg aggs.\n>> \n>> Are these relevant when JITting? I'm a little sad if the JIT compiler cannot\n>> unroll these on its own. Is there something we could do to hint it, so that\n>> it could treat the number of arguments as a constant?\n> \n> I think it's mainly important for interpreted execution.\n\nAgreed.\n\n>>> skip extra setup for steps which are only interested in the side effects.\n>> \n>> I'm a little surprised if this makes a measurable performance difference,\n>> but sure, why not. It seems nice to be more explicit when you don't expect a\n>> return value.\n\nRight, performance benefits aside it does improve readability IMHO.\n\n> IIRC this is more interesting for JIT than the above, because it allows LLVM\n> to know that the return value isn't needed and thus doesn't need to be\n> computed.\n\nCorrect, this is important to the JIT code which no longer has to perform two\nLoads and one Store in order to get nothing, but can instead fastpath to\nbuilding a zero returnvalue.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Thu, 19 Oct 2023 12:11:13 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Special-case executor expression steps for common combinations" }, { "msg_contents": "On 10/12/23 11:48 AM, Daniel Gustafsson wrote:\n> Thoughts?\n\nI have looked at the patch and it still applies, builds and passes the \ntest cases and I personally think these optimizations are pretty much \nno-brainers that we should do and it is a pity nobody has had the time \nto review this patch.\n\n1) The no-return case should help with the JIT, making jitted code faster.\n\n2) The specialized strict steps helps with many common queries in the \ninterpreted mode.\n\nThe code itself looks really good (great work!) but I have two comments \non it.\n\n1) I think the patch should be split into two. The two different \noptimizations are not related at all other than that they create \nspecialized versions of expressions steps. Having them as separate makes \nthe commit history easier to read for future developers.\n\n2) We could generate functions which return void rather than NULL and \ntherefore not have to do a return at all but I am not sure that small \noptimization and extra clarity would be worth the hassle. The current \napproach with adding Assert() is ok with me. Daniel, what do you think?\n\nAndreas\n\n\n", "msg_date": "Thu, 20 Jun 2024 17:22:41 +0200", "msg_from": "Andreas Karlsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Special-case executor expression steps for common combinations" }, { "msg_contents": "On 6/20/24 5:22 PM, Andreas Karlsson wrote:\n> On 10/12/23 11:48 AM, Daniel Gustafsson wrote:\n>> Thoughts?\n> \n> I have looked at the patch and it still applies, builds and passes the \n> test cases and I personally think these optimizations are pretty much \n> no-brainers that we should do and it is a pity nobody has had the time \n> to review this patch.\n\nForgot to write that I am planning to also try to do so benchmarks to \nsee if I can reproduce the speedups. :)\n\nAndreas\n\n\n", "msg_date": "Thu, 20 Jun 2024 17:25:06 +0200", "msg_from": "Andreas Karlsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Special-case executor expression steps for common combinations" }, { "msg_contents": "> On 20 Jun 2024, at 17:22, Andreas Karlsson <[email protected]> wrote:\n> \n> On 10/12/23 11:48 AM, Daniel Gustafsson wrote:\n>> Thoughts?\n> \n> I have looked at the patch and it still applies, builds and passes the test cases and I personally think these optimizations are pretty much no-brainers that we should do and it is a pity nobody has had the time to review this patch.\n> \n> 1) The no-return case should help with the JIT, making jitted code faster.\n> \n> 2) The specialized strict steps helps with many common queries in the interpreted mode.\n> \n> The code itself looks really good (great work!) but I have two comments on it.\n\nThanks for review!\n\n> 1) I think the patch should be split into two. The two different optimizations are not related at all other than that they create specialized versions of expressions steps. Having them as separate makes the commit history easier to read for future developers.\n\nThat's a good point, the attached v2 splits it into two separate commits.\n\n> 2) We could generate functions which return void rather than NULL and therefore not have to do a return at all but I am not sure that small optimization and extra clarity would be worth the hassle. The current approach with adding Assert() is ok with me. Daniel, what do you think?\n\nI'm not sure that would move the needle enough to warrant the extra complexity.\nIt could be worth pursuing, but it can be done separately from this.\n\n--\nDaniel Gustafsson", "msg_date": "Thu, 4 Jul 2024 18:26:18 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Special-case executor expression steps for common combinations" }, { "msg_contents": "On 7/4/24 6:26 PM, Daniel Gustafsson wrote:\n>> 2) We could generate functions which return void rather than NULL and therefore not have to do a return at all but I am not sure that small optimization and extra clarity would be worth the hassle. The current approach with adding Assert() is ok with me. Daniel, what do you think?\n> \n> I'm not sure that would move the needle enough to warrant the extra complexity.\n> It could be worth pursuing, but it can be done separately from this.\n\nAgreed.\n\nI looked some more at the patch and have a suggestion for code style. \nAttaching the diff. Do with them what you wish, for me they make to code \neasier to read.\n\nAndreas", "msg_date": "Mon, 22 Jul 2024 14:33:16 +0200", "msg_from": "Andreas Karlsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Special-case executor expression steps for common combinations" }, { "msg_contents": "I have bench marked the two patches now and failed to measure any \nspeedup or slowdown from the first patch (removing return) but I think \nit is a good idea anyway.\n\nFor the second patch (optimize strict) I managed to measure a ~1% speed \nup for the following query \"SELECT sum(x + y + 1) FROM t;\" over one \nmillion rows.\n\nI would say both patches are ready for committer modulo my proposed \nstyle fixes.\n\nAndreas\n\n\n\n", "msg_date": "Mon, 22 Jul 2024 23:25:07 +0200", "msg_from": "Andreas Karlsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Special-case executor expression steps for common combinations" }, { "msg_contents": "> On 22 Jul 2024, at 23:25, Andreas Karlsson <[email protected]> wrote:\n> \n> I have bench marked the two patches now and failed to measure any speedup or slowdown from the first patch (removing return) but I think it is a good idea anyway.\n> \n> For the second patch (optimize strict) I managed to measure a ~1% speed up for the following query \"SELECT sum(x + y + 1) FROM t;\" over one million rows.\n\nThat's expected, this is mostly about refactoring the code to simplifying the\nJITed code (and making tiny strides towards JIT expression caching).\n\n> I would say both patches are ready for committer modulo my proposed style fixes.\n\nI am a bit wary about removing the out_error label and goto since it may open\nup for reports from static analyzers about control reaching the end of a\nnon-void function without a return. The other change has been incorporated.\n\nThe attached v3 is a rebase to handle executor changes done since v2, with the\nabove mentioned fix as well. If there are no objections I think we should\napply this version.\n\n--\nDaniel Gustafsson", "msg_date": "Tue, 10 Sep 2024 10:54:36 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Special-case executor expression steps for common combinations" }, { "msg_contents": "On 9/10/24 10:54 AM, Daniel Gustafsson wrote:\n>> On 22 Jul 2024, at 23:25, Andreas Karlsson <[email protected]> wrote:\n>>\n>> I have bench marked the two patches now and failed to measure any speedup or slowdown from the first patch (removing return) but I think it is a good idea anyway.\n>>\n>> For the second patch (optimize strict) I managed to measure a ~1% speed up for the following query \"SELECT sum(x + y + 1) FROM t;\" over one million rows.\n> \n> That's expected, this is mostly about refactoring the code to simplifying the\n> JITed code (and making tiny strides towards JIT expression caching).\n\nYup! Expected and nice tiny speedup.\n\n>> I would say both patches are ready for committer modulo my proposed style fixes.\n> \n> I am a bit wary about removing the out_error label and goto since it may open\n> up for reports from static analyzers about control reaching the end of a\n> non-void function without a return. The other change has been incorporated.\n> \n> The attached v3 is a rebase to handle executor changes done since v2, with the\n> above mentioned fix as well. If there are no objections I think we should\n> apply this version.\n\nSounds good to me and in my opinion this is ready to be committed.\n\nAndreas\n\n\n\n", "msg_date": "Fri, 13 Sep 2024 15:01:03 +0200", "msg_from": "Andreas Karlsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Special-case executor expression steps for common combinations" } ]
[ { "msg_contents": "Hello\n\nPostgreSQL's CREATE DOMAIN documentation (section Notes) describes a way how one can add NULL's to a column that has a domain with the NOT NULL constraint.\nhttps://www.postgresql.org/docs/current/sql-createdomain.html\n\nTo me it seems very strange and amounts to a bug because it defeats the purpose of domains (to be a reusable assets) and constraints (to avoid any bypassing of these).\n\nOracle 23c added the support of domains (https://docs.oracle.com/en/database/oracle/oracle-database/23/sqlrf/create-domain.html). I tested the same scenario both in PostgreSQL and Oracle (https://www.oracle.com/database/free/) and found out that in these situations Oracle does not allow NULL's to be added to the column. I do not know as to whether the behaviour that is implemented in PostgreSQL is specified by the standard. However, if it is not the case, then how it could be that Oracle can but PostgreSQL cannot.\n\nBest regards\nErki Eessaar\n\nThe scenario that I tested both in PostgreSQL (16) and Oracle (23c).\n***********************************\n/*PostgreSQL 16*/\n\nCREATE DOMAIN d_name VARCHAR(50) NOT NULL;\n\nCREATE TABLE Product_state_type (product_state_type_code SMALLINT NOT NULL,\nname d_name,\nCONSTRAINT pk_product_state_type PRIMARY KEY (product_state_type_code),\nCONSTRAINT ak_product_state_type_name UNIQUE (name));\n\nCREATE TABLE Product (product_code INTEGER NOT NULL,\nname d_name,\nproduct_state_type_code SMALLINT NOT NULL,\nCONSTRAINT pk_product PRIMARY KEY (product_code),\nCONSTRAINT fk_product_product_state_type FOREIGN KEY (product_state_type_code)\nREFERENCES Product_state_type(product_state_type_code) ON UPDATE CASCADE);\n\nINSERT INTO Product_state_type (product_state_type_code, name)\nVALUES (1, (SELECT name FROM Product_state_type WHERE FALSE));\n/*Insertion succeeds, name is NULL!*/\n\nINSERT INTO Product (product_code, name, product_state_type_code)\nSELECT 1 AS product_code, Product.name, 1 AS product_state_type_code\nFROM Product_state_type LEFT JOIN Product USING (product_state_type_code);\n/*Insertion succeeds, name is NULL!*/\n\n/*Oracle 23c*/\n\nCREATE DOMAIN d_name AS VARCHAR2(50) NOT NULL;\n\nCREATE TABLE Product_state_type (product_state_type_code NUMBER(4) NOT NULL,\nname d_name,\nCONSTRAINT pk_product_state_type PRIMARY KEY (product_state_type_code),\nCONSTRAINT ak_product_state_type_name UNIQUE (name));\n\nCREATE TABLE Product (product_code NUMBER(8) NOT NULL,\nname d_name,\nproduct_state_type_code NUMBER(4) NOT NULL,\nCONSTRAINT pk_product PRIMARY KEY (product_code),\nCONSTRAINT fk_product_product_state_type FOREIGN KEY (product_state_type_code)\nREFERENCES Product_state_type(product_state_type_code));\n\n\nINSERT INTO Product_state_type (product_state_type_code, name)\nVALUES (1, (SELECT name FROM Product_state_type WHERE FALSE));\n/*Fails.\nError report -\nSQL Error: ORA-01400: cannot insert NULL into\n(\"SYSTEM\".\"PRODUCT_STATE_TYPE\".\"NAME\")\nHelp: https://docs.oracle.com/error-help/db/ora-01400/\n01400. 00000 - \"cannot insert NULL into (%s)\"\n*Cause: An attempt was made to insert NULL into previously listed objects.\n*Action: These objects cannot accept NULL values.*/\n\nINSERT INTO Product_state_type (product_state_type_code, name)\nVALUES (1, 'Active');\n\nINSERT INTO Product (product_code, name, product_state_type_code)\nSELECT 1 AS product_code, Product.name, 1 AS product_state_type_code\nFROM Product_state_type LEFT JOIN Product USING (product_state_type_code);\n/*Fails.\nSQL Error: ORA-01400: cannot insert NULL into\n(\"SYSTEM\".\"PRODUCT\".\"NAME\")\nHelp: https://docs.oracle.com/error-help/db/ora-01400/\n01400. 00000 - \"cannot insert NULL into (%s)\"\n*Cause: An attempt was made to insert NULL into previously listed objects.\n*Action: These objects cannot accept NULL values.*/\n\n\n\n\n\n\n\n\n\nHello\n\n\n\n\nPostgreSQL's CREATE DOMAIN documentation (section Notes) describes a way how one can add NULL's to a column that has a domain with the NOT NULL constraint.\n\nhttps://www.postgresql.org/docs/current/sql-createdomain.html\n\n\n\n\nTo me it seems very strange and amounts to a bug because it defeats the purpose of domains (to be a reusable assets) and constraints (to avoid any bypassing of these).\n\n\n\n\nOracle 23c added the support of domains (https://docs.oracle.com/en/database/oracle/oracle-database/23/sqlrf/create-domain.html).\n I tested the same scenario both in PostgreSQL and Oracle (https://www.oracle.com/database/free/) and found out that in these situations Oracle does not allow NULL's to\n be added to the column. I do not know as to whether the behaviour that is implemented in PostgreSQL is specified by the standard. However, if it is not the case, then how it could be that Oracle can but PostgreSQL cannot.\n\n\n\nBest regards\n\nErki Eessaar\n\n\n\n\nThe scenario that I tested both in PostgreSQL (16) and Oracle (23c).\n\n***********************************\n\n/*PostgreSQL 16*/\n\n\n\n\nCREATE DOMAIN d_name VARCHAR(50) NOT NULL;\n\n\nCREATE TABLE Product_state_type (product_state_type_code SMALLINT NOT NULL,\nname d_name,\nCONSTRAINT pk_product_state_type PRIMARY KEY (product_state_type_code),\nCONSTRAINT ak_product_state_type_name UNIQUE (name));\n\n\nCREATE TABLE Product (product_code INTEGER NOT NULL,\nname d_name,\nproduct_state_type_code SMALLINT NOT NULL,\nCONSTRAINT pk_product PRIMARY KEY (product_code),\nCONSTRAINT fk_product_product_state_type FOREIGN KEY (product_state_type_code)\nREFERENCES Product_state_type(product_state_type_code) ON UPDATE CASCADE);\n\n\nINSERT INTO Product_state_type (product_state_type_code, name)\nVALUES (1, (SELECT name FROM Product_state_type WHERE FALSE));\n/*Insertion succeeds, name is NULL!*/\n\n\nINSERT INTO Product (product_code, name, product_state_type_code)\nSELECT 1 AS product_code, Product.name, 1 AS product_state_type_code\nFROM Product_state_type LEFT JOIN Product USING (product_state_type_code);\n/*Insertion succeeds, name is NULL!*/\n\n\n/*Oracle 23c*/\n\n\nCREATE DOMAIN d_name AS VARCHAR2(50) NOT NULL;\n\n\nCREATE TABLE Product_state_type (product_state_type_code NUMBER(4) NOT NULL,\nname d_name,\nCONSTRAINT pk_product_state_type PRIMARY KEY (product_state_type_code),\nCONSTRAINT ak_product_state_type_name UNIQUE (name));\n\n\nCREATE TABLE Product (product_code NUMBER(8) NOT NULL,\nname d_name,\nproduct_state_type_code NUMBER(4) NOT NULL,\nCONSTRAINT pk_product PRIMARY KEY (product_code),\nCONSTRAINT fk_product_product_state_type FOREIGN KEY (product_state_type_code)\nREFERENCES Product_state_type(product_state_type_code));\n\n\n\n\nINSERT INTO Product_state_type (product_state_type_code, name)\nVALUES (1, (SELECT name FROM Product_state_type WHERE FALSE));\n/*Fails.\nError report -\nSQL Error: ORA-01400: cannot insert NULL into\n(\"SYSTEM\".\"PRODUCT_STATE_TYPE\".\"NAME\")\nHelp: https://docs.oracle.com/error-help/db/ora-01400/\n01400. 00000 -  \"cannot insert NULL into (%s)\"\n*Cause:    An attempt was made to insert NULL into previously listed objects.\n*Action:   These objects cannot accept NULL values.*/\n\n\nINSERT INTO Product_state_type (product_state_type_code, name)\nVALUES (1, 'Active');\n\n\nINSERT INTO Product (product_code, name, product_state_type_code)\nSELECT 1 AS product_code, Product.name, 1 AS product_state_type_code\nFROM Product_state_type LEFT JOIN Product USING (product_state_type_code);\n/*Fails.\nSQL Error: ORA-01400: cannot insert NULL into\n(\"SYSTEM\".\"PRODUCT\".\"NAME\")\nHelp: https://docs.oracle.com/error-help/db/ora-01400/\n01400. 00000 -  \"cannot insert NULL into (%s)\"\n*Cause:    An attempt was made to insert NULL into previously listed objects.\n*Action:   These objects cannot accept NULL values.*/", "msg_date": "Thu, 12 Oct 2023 10:38:25 +0000", "msg_from": "Erki Eessaar <[email protected]>", "msg_from_op": true, "msg_subject": "PostgreSQL domains and NOT NULL constraint" }, { "msg_contents": "Erki Eessaar <[email protected]> writes:\n> PostgreSQL's CREATE DOMAIN documentation (section Notes) describes a way how one can add NULL's to a column that has a domain with the NOT NULL constraint.\n> https://www.postgresql.org/docs/current/sql-createdomain.html\n> To me it seems very strange and amounts to a bug because it defeats the purpose of domains (to be a reusable assets) and constraints (to avoid any bypassing of these).\n\nI doubt we'd consider doing anything about that. The whole business\nof domains with NOT NULL constraints is arguably a defect of the SQL\nstandard, because there are multiple ways to produce a value that\nis NULL and yet must be considered to be of the domain type.\nThe subselect-with-no-output case that you show isn't even the most\ncommon one; I'd say that outer joins where there are domain columns\non the nullable side are the biggest problem.\n\nThere's been some discussion of treating the output of such a join,\nsubselect, etc as being of the domain's base type not the domain\nproper. That'd solve this particular issue since then we'd decide\nwe have to cast the base type back up to the domain type (and hence\ncheck its constraints) before inserting the row. But that choice\njust moves the surprise factor somewhere else, in that queries that\nused to produce one data type now produce another one. There are\napplications that this would break. Moreover, I do not think there's\nany justification for it in the SQL spec.\n\nOur general opinion about this is what is stated in the NOTES\nsection of our CREATE DOMAIN reference page [1]:\n\n Best practice therefore is to design a domain's constraints so that a\n null value is allowed, and then to apply column NOT NULL constraints\n to columns of the domain type as needed, rather than directly to the\n domain type.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/docs/current/sql-createdomain.html\n\n\n", "msg_date": "Thu, 12 Oct 2023 10:54:06 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL domains and NOT NULL constraint" }, { "msg_contents": "On 10/12/23 15:54, Tom Lane wrote:\n> Erki Eessaar <[email protected]> writes:\n>> PostgreSQL's CREATE DOMAIN documentation (section Notes) describes a way how one can add NULL's to a column that has a domain with the NOT NULL constraint.\n>> https://www.postgresql.org/docs/current/sql-createdomain.html\n>> To me it seems very strange and amounts to a bug because it defeats the purpose of domains (to be a reusable assets) and constraints (to avoid any bypassing of these).\n> \n> I doubt we'd consider doing anything about that. The whole business\n> of domains with NOT NULL constraints is arguably a defect of the SQL\n> standard, because there are multiple ways to produce a value that\n> is NULL and yet must be considered to be of the domain type.\n> The subselect-with-no-output case that you show isn't even the most\n> common one; I'd say that outer joins where there are domain columns\n> on the nullable side are the biggest problem.\n> \n> There's been some discussion of treating the output of such a join,\n> subselect, etc as being of the domain's base type not the domain\n> proper. That'd solve this particular issue since then we'd decide\n> we have to cast the base type back up to the domain type (and hence\n> check its constraints) before inserting the row. But that choice\n> just moves the surprise factor somewhere else, in that queries that\n> used to produce one data type now produce another one. There are\n> applications that this would break. Moreover, I do not think there's\n> any justification for it in the SQL spec.\n\n\nI do not believe this is a defect of the SQL standard at all. \nSQL:2023-2 Section 4.14 \"Domains\" clearly states \"The purpose of a \ndomain is to constrain the set of valid values that can be stored in a \ncolumn of a base table by various operations.\"\n\nThat seems very clear to me that *storing* a value in a base table must \nrespect the domain's constraints, even if *operations* on those values \nmight not respect all of the domain's constraints.\n\nWhether or not it is practical to implement that is a different story, \nbut allowing the null value to be stored in a column of a base table \nwhose domain specifies NOT NULL is frankly a bug.\n-- \nVik Fearing\n\n\n\n", "msg_date": "Fri, 13 Oct 2023 03:25:40 +0200", "msg_from": "Vik Fearing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL domains and NOT NULL constraint" }, { "msg_contents": "Vik Fearing <[email protected]> writes:\n> On 10/12/23 15:54, Tom Lane wrote:\n>> There's been some discussion of treating the output of such a join,\n>> subselect, etc as being of the domain's base type not the domain\n>> proper. That'd solve this particular issue since then we'd decide\n>> we have to cast the base type back up to the domain type (and hence\n>> check its constraints) before inserting the row. But that choice\n>> just moves the surprise factor somewhere else, in that queries that\n>> used to produce one data type now produce another one. There are\n>> applications that this would break. Moreover, I do not think there's\n>> any justification for it in the SQL spec.\n\n> I do not believe this is a defect of the SQL standard at all. \n> SQL:2023-2 Section 4.14 \"Domains\" clearly states \"The purpose of a \n> domain is to constrain the set of valid values that can be stored in a \n> column of a base table by various operations.\"\n\nSo I wonder what is the standard's interpretation of\n\nregression=# create domain dpos as integer not null check (value > 0);\nCREATE DOMAIN\nregression=# create table t1 (x int, d dpos);\nCREATE TABLE\nregression=# create view v1 as select ty.d from t1 tx left join t1 ty using (x);\nCREATE VIEW\nregression=# \\d+ v1\n View \"public.v1\"\n Column | Type | Collation | Nullable | Default | Storage | Description \n--------+------+-----------+----------+---------+---------+-------------\n d | dpos | | | | plain | \nView definition:\n SELECT ty.d\n FROM t1 tx\n LEFT JOIN t1 ty USING (x);\n\nIf we are incorrect in ascribing the type \"dpos\" to v1.d, where\nin the spec contradicts that? (Or in other words, 4.14 might lay\nout some goals for the feature, but that's just empty words if\nit's not supported by accurate details in other places.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 12 Oct 2023 21:44:06 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL domains and NOT NULL constraint" }, { "msg_contents": "On 10/13/23 02:44, Tom Lane wrote:\n> Vik Fearing <[email protected]> writes:\n>> On 10/12/23 15:54, Tom Lane wrote:\n>>> There's been some discussion of treating the output of such a join,\n>>> subselect, etc as being of the domain's base type not the domain\n>>> proper. That'd solve this particular issue since then we'd decide\n>>> we have to cast the base type back up to the domain type (and hence\n>>> check its constraints) before inserting the row. But that choice\n>>> just moves the surprise factor somewhere else, in that queries that\n>>> used to produce one data type now produce another one. There are\n>>> applications that this would break. Moreover, I do not think there's\n>>> any justification for it in the SQL spec.\n> \n>> I do not believe this is a defect of the SQL standard at all.\n>> SQL:2023-2 Section 4.14 \"Domains\" clearly states \"The purpose of a\n>> domain is to constrain the set of valid values that can be stored in a\n>> column of a base table by various operations.\"\n> \n> So I wonder what is the standard's interpretation of\n> \n> regression=# create domain dpos as integer not null check (value > 0);\n> CREATE DOMAIN\n> regression=# create table t1 (x int, d dpos);\n> CREATE TABLE\n> regression=# create view v1 as select ty.d from t1 tx left join t1 ty using (x);\n> CREATE VIEW\n> regression=# \\d+ v1\n> View \"public.v1\"\n> Column | Type | Collation | Nullable | Default | Storage | Description\n> --------+------+-----------+----------+---------+---------+-------------\n> d | dpos | | | | plain |\n> View definition:\n> SELECT ty.d\n> FROM t1 tx\n> LEFT JOIN t1 ty USING (x);\n> \n> If we are incorrect in ascribing the type \"dpos\" to v1.d, where\n> in the spec contradicts that? (Or in other words, 4.14 might lay\n> out some goals for the feature, but that's just empty words if\n> it's not supported by accurate details in other places.)\nObjection, Your Honor: Relevance.\n\nRegardless of what the spec may or may not say about v1.d, it still \nremains that nulls should not be allowed in a *base table* if the domain \nsays nulls are not allowed. Not mentioned in this thread but the \nconstraints are also applied when CASTing to the domain.\n\nNow, to answer your straw man, this might be helpful:\n\nSQL:2023-2 Section 11.4 <column definition> Syntax Rule 9, \"If the \ndescriptor of D includes any domain constraint descriptors, then T shall \nbe a persistent base table.\". Your v1 is not that and therefore \narguably illegal.\n\nAs you know, I am more than happy to (try to) amend the spec where \nneeded, but Erki's complaint of a null value being allowed in a base \ntable is clearly a bug in our implementation regardless of what we do \nwith views.\n-- \nVik Fearing\n\n\n\n", "msg_date": "Fri, 13 Oct 2023 04:41:24 +0200", "msg_from": "Vik Fearing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL domains and NOT NULL constraint" }, { "msg_contents": "Vik Fearing <[email protected]> writes:\n> Regardless of what the spec may or may not say about v1.d, it still \n> remains that nulls should not be allowed in a *base table* if the domain \n> says nulls are not allowed. Not mentioned in this thread but the \n> constraints are also applied when CASTing to the domain.\n\nHmph. The really basic problem here, I think, is that the spec\nwants to claim that a domain is a data type, but then it backs\noff and limits where the domain's constraints need to hold.\nThat's fundamentally inconsistent. It's like claiming that\n'foobarbaz' is a valid value of type numeric as long as it's\nonly in flight within a query and you haven't tried to store it\ninto a table.\n\nPractical problems with this include:\n\n* If a function declares its argument as being of a domain type,\ncan it expect that the passed value obeys the constraints?\n\n* If a function declares its result as being of a domain type,\nis it required to return a result that obeys the constraints?\n(This has particular force for RETURNS NULL ON NULL INPUT\nfunctions, for which we just automatically return NULL given\na NULL input without any consideration of whether the result\ntype nominally prohibits that.)\n\n* If a plpgsql function has a variable that is declared to be of\ndomain type, do we enforce the domain's constraints when assigning?\n\n* If a composite type has a column of a domain type, do we enforce\nthe domain's constraints when assigning or casting to that?\n\nAFAICS, the spec's position leaves all of these as judgment calls,\nor else you might claim that none of the above cases are even allowed\nto be declared per spec. I don't find either of those satisfactory,\nso I reiterate my position that the committee hasn't thought this\nthrough.\n\n> As you know, I am more than happy to (try to) amend the spec where \n> needed, but Erki's complaint of a null value being allowed in a base \n> table is clearly a bug in our implementation regardless of what we do \n> with views.\n\nI agree it's not a good behavior, but I still say it's traceable\nto schizophenia in the spec. If the result of a sub-select is\nnominally of a domain type, we should not have to recheck the\ndomain constraints in order to assign it to a domain-typed target.\nIf it's not nominally of a domain type, please cite chapter and\nverse that says it isn't.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 13 Oct 2023 01:37:14 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL domains and NOT NULL constraint" }, { "msg_contents": "Hello\n\nEqualing a domain with a type is really confusing because why, for instance, in this case the following is possible without defining any additional operators.\n\nCREATE DOMAIN d_name VARCHAR(50) NOT NULL;\nCREATE DOMAIN d_description VARCHAR(1000) NOT NULL;\nCREATE TABLE x(name d_name, description d_description);\nSELECT *\nFROM x\nWHERE name=description;\n\nIsn't it so that domains are not types and for this reason there are separate CREATE DOMAIN and CREATE TYPE statements?!\n\nIn my opinion the Notes section of CREATE DOMAIN documentation should offer better examples. The two examples that I provided in my demonstration seemed very far fetched and artificial. Frankly, I have difficulties in imagining why someone would like to write statements like that in a production environment and how the proper enforcement of NOT NULL constraints of domains could break things.\n\nLets say I have a column that I have declared mandatory by using a domain, but somehow I have added NULLs to the column, and if it is not possible any more, then things break down.\n\nIf I want to permit NULLs, then ALTER DOMAIN d DROP NOT NULL; will fix it with one stroke. If I do not want to permit NULLs but I have registered NULLs, then this is a data quality issue that has to be addressed.\n\nCurrently there is a feature (NOT NULL of domain) that the documentation explicitly suggests not to use. Isn't it in this case better to remove this feature completely?! If this would break something, then it would mean that systems actually rely on this constraint.\n\nBest regards\nErki Eessaar\n\n________________________________\nFrom: Tom Lane <[email protected]>\nSent: Friday, October 13, 2023 08:37\nTo: Vik Fearing <[email protected]>\nCc: Erki Eessaar <[email protected]>; [email protected] <[email protected]>\nSubject: Re: PostgreSQL domains and NOT NULL constraint\n\nVik Fearing <[email protected]> writes:\n> Regardless of what the spec may or may not say about v1.d, it still\n> remains that nulls should not be allowed in a *base table* if the domain\n> says nulls are not allowed. Not mentioned in this thread but the\n> constraints are also applied when CASTing to the domain.\n\nHmph. The really basic problem here, I think, is that the spec\nwants to claim that a domain is a data type, but then it backs\noff and limits where the domain's constraints need to hold.\nThat's fundamentally inconsistent. It's like claiming that\n'foobarbaz' is a valid value of type numeric as long as it's\nonly in flight within a query and you haven't tried to store it\ninto a table.\n\nPractical problems with this include:\n\n* If a function declares its argument as being of a domain type,\ncan it expect that the passed value obeys the constraints?\n\n* If a function declares its result as being of a domain type,\nis it required to return a result that obeys the constraints?\n(This has particular force for RETURNS NULL ON NULL INPUT\nfunctions, for which we just automatically return NULL given\na NULL input without any consideration of whether the result\ntype nominally prohibits that.)\n\n* If a plpgsql function has a variable that is declared to be of\ndomain type, do we enforce the domain's constraints when assigning?\n\n* If a composite type has a column of a domain type, do we enforce\nthe domain's constraints when assigning or casting to that?\n\nAFAICS, the spec's position leaves all of these as judgment calls,\nor else you might claim that none of the above cases are even allowed\nto be declared per spec. I don't find either of those satisfactory,\nso I reiterate my position that the committee hasn't thought this\nthrough.\n\n> As you know, I am more than happy to (try to) amend the spec where\n> needed, but Erki's complaint of a null value being allowed in a base\n> table is clearly a bug in our implementation regardless of what we do\n> with views.\n\nI agree it's not a good behavior, but I still say it's traceable\nto schizophenia in the spec. If the result of a sub-select is\nnominally of a domain type, we should not have to recheck the\ndomain constraints in order to assign it to a domain-typed target.\nIf it's not nominally of a domain type, please cite chapter and\nverse that says it isn't.\n\n regards, tom lane\n\n\n\n\n\n\n\n\nHello\n\n\n\n\nEqualing a domain with a type is really confusing because why, for instance, in this case the following is possible without defining any additional operators.\n\n\n\n\nCREATE DOMAIN d_name VARCHAR(50) NOT NULL;\nCREATE DOMAIN d_description VARCHAR(1000) NOT NULL;\nCREATE TABLE x(name d_name, description d_description);\nSELECT *\nFROM x\nWHERE name=description;\n\n\n\n\nIsn't it so that domains are not types and for this reason there are separate CREATE DOMAIN and CREATE TYPE statements?!\n\n\n\n\nIn my opinion the Notes section of CREATE DOMAIN documentation should offer better examples. The two examples that I provided in my demonstration seemed very far fetched and artificial. Frankly, I have difficulties in imagining why someone would like to write\n statements like that in a production environment and how the proper enforcement of NOT NULL constraints of domains could break things. \n\n\n\n\nLets say I have a column that I have declared mandatory by using a domain, but somehow I have added NULLs to the column, and if it is not possible any more, then things break down.\n\n\n\n\n\n\nIf I want to permit NULLs, then ALTER DOMAIN d DROP NOT NULL; will fix it with one stroke. If I do not want to permit NULLs but I have registered NULLs, then this is a data quality issue that has to be addressed.\n\n\n\n\n\nCurrently there is a feature (NOT NULL of domain) that the documentation explicitly suggests not to use. Isn't it in this case better to remove this feature completely?! If this would break something, then it would mean that systems actually rely on this constraint.\n\n\n\n\nBest regards\n\nErki Eessaar\n\n\n\n\n\n\nFrom: Tom Lane <[email protected]>\nSent: Friday, October 13, 2023 08:37\nTo: Vik Fearing <[email protected]>\nCc: Erki Eessaar <[email protected]>; [email protected] <[email protected]>\nSubject: Re: PostgreSQL domains and NOT NULL constraint\n \n\n\nVik Fearing <[email protected]> writes:\n> Regardless of what the spec may or may not say about v1.d, it still \n> remains that nulls should not be allowed in a *base table* if the domain \n> says nulls are not allowed.  Not mentioned in this thread but the \n> constraints are also applied when CASTing to the domain.\n\nHmph.  The really basic problem here, I think, is that the spec\nwants to claim that a domain is a data type, but then it backs\noff and limits where the domain's constraints need to hold.\nThat's fundamentally inconsistent.  It's like claiming that\n'foobarbaz' is a valid value of type numeric as long as it's\nonly in flight within a query and you haven't tried to store it\ninto a table.\n\nPractical problems with this include:\n\n* If a function declares its argument as being of a domain type,\ncan it expect that the passed value obeys the constraints?\n\n* If a function declares its result as being of a domain type,\nis it required to return a result that obeys the constraints?\n(This has particular force for RETURNS NULL ON NULL INPUT\nfunctions, for which we just automatically return NULL given\na NULL input without any consideration of whether the result\ntype nominally prohibits that.)\n\n* If a plpgsql function has a variable that is declared to be of\ndomain type, do we enforce the domain's constraints when assigning?\n\n* If a composite type has a column of a domain type, do we enforce\nthe domain's constraints when assigning or casting to that?\n\nAFAICS, the spec's position leaves all of these as judgment calls,\nor else you might claim that none of the above cases are even allowed\nto be declared per spec.  I don't find either of those satisfactory,\nso I reiterate my position that the committee hasn't thought this\nthrough.\n\n> As you know, I am more than happy to (try to) amend the spec where \n> needed, but Erki's complaint of a null value being allowed in a base \n> table is clearly a bug in our implementation regardless of what we do \n> with views.\n\nI agree it's not a good behavior, but I still say it's traceable\nto schizophenia in the spec.  If the result of a sub-select is\nnominally of a domain type, we should not have to recheck the\ndomain constraints in order to assign it to a domain-typed target.\nIf it's not nominally of a domain type, please cite chapter and\nverse that says it isn't.\n\n                        regards, tom lane", "msg_date": "Fri, 13 Oct 2023 12:01:44 +0000", "msg_from": "Erki Eessaar <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL domains and NOT NULL constraint" }, { "msg_contents": "On 10/13/23 06:37, Tom Lane wrote:\n> Vik Fearing <[email protected]> writes:\n>> Regardless of what the spec may or may not say about v1.d, it still\n>> remains that nulls should not be allowed in a *base table* if the domain\n>> says nulls are not allowed. Not mentioned in this thread but the\n>> constraints are also applied when CASTing to the domain.\n> \n> Hmph. The really basic problem here, I think, is that the spec\n> wants to claim that a domain is a data type, but then it backs\n> off and limits where the domain's constraints need to hold.\n\n\nI don't think that is an accurate depiction of domains.\n\nFirst of all, I am not seeing where it says that a domain is a data \ntype. It allows domains to be used in some places where a data type is \nused, but that is not equivalent to a domain /being/ a data type.\n\nSection 4.14 says, \"A domain is a set of permissible values.\" and then \ngoes on to say that that is a combination of a predefined type and zero \nor more search conditions. It can also have a default value, but it \ndoes not seem relevant to talk about that in this discussion.\n\nSection 4.25.4, \"Domain constraints\" has this to say (emphasis mine):\n\n- A domain constraint is satisfied by SQL-data *if and only if*, for \nevery *table* T that has a column named C based on that domain, the \napplicable <search condition> recorded in the appropriate domain \nconstraint usage evaluates to True or Unknown.\n\n- A domain constraint is satisfied by the result of a <cast \nspecification> if and only if the specified template <search condition>, \nwith each occurrence of the <general value specification> VALUE replaced \nby that result, evaluates to True or Unknown.\n\nThis tells me that the constraints should only be checked at those two \npoints.\n\nSecondly, why are you so concerned about outer join nulls here and not \nfor any other column marked NOT NULL?\n\n\n> That's fundamentally inconsistent. It's like claiming that\n> 'foobarbaz' is a valid value of type numeric as long as it's\n> only in flight within a query and you haven't tried to store it\n> into a table.\n\n\nIt's like claiming that null is a valid value of type numeric as long as \nit's only in flight within a query and you haven't tried to store it \ninto a table with that column marked NOT NULL.\n\n\n> Practical problems with this include:\n> \n> * If a function declares its argument as being of a domain type,\n> can it expect that the passed value obeys the constraints?\n> \n> * If a function declares its result as being of a domain type,\n> is it required to return a result that obeys the constraints?\n> (This has particular force for RETURNS NULL ON NULL INPUT\n> functions, for which we just automatically return NULL given\n> a NULL input without any consideration of whether the result\n> type nominally prohibits that.)\n> \n> * If a plpgsql function has a variable that is declared to be of\n> domain type, do we enforce the domain's constraints when assigning?\n\n\nRoutines are not allowed to have domains in their parameters or result \ntypes.\n\nI am all for PostgreSQL expanding the spec wherever we can, but in the \nabove cases we have to define things ourselves.\n\n\n> * If a composite type has a column of a domain type, do we enforce\n> the domain's constraints when assigning or casting to that?\n\n\nI don't see that a composite type is able to have a member of a domain. \nAs for what PostgreSQL should do in this case, my opinion is \"yes\".\n\n\n> AFAICS, the spec's position leaves all of these as judgment calls,\n> or else you might claim that none of the above cases are even allowed\n> to be declared per spec. I don't find either of those satisfactory,\n> so I reiterate my position that the committee hasn't thought this\n> through.\n\n\nMy claim is indeed that these cases are not allowed per-spec and \ntherefore the spec doesn't *need* to think about them. We do.\n\n\n>> As you know, I am more than happy to (try to) amend the spec where\n>> needed, but Erki's complaint of a null value being allowed in a base\n>> table is clearly a bug in our implementation regardless of what we do\n>> with views.\n> \n> I agree it's not a good behavior, but I still say it's traceable\n> to schizophenia in the spec. If the result of a sub-select is\n> nominally of a domain type, we should not have to recheck the\n> domain constraints in order to assign it to a domain-typed target.\n\n\nWell, yes, we should.\n\nAllowing a null to be stored in a column where the user has specified \nNOT NULL, no matter how the user did that, is unacceptable and I am \nfrankly surprised that you are defending it.\n\n\n> If it's not nominally of a domain type, please cite chapter and\n> verse that says it isn't.\n\nI don't see anything for or against this, I just see that the domain \nconstraints are only checked on storage or casting.\n\nAnd therefore, I think with these definitions:\n\nCREATE DOMAIN dom AS INTEGER CHECK (VALUE >= 0);\nCREATE TABLE t (d dom);\nINSERT INTO t (d) VALUES (1);\n\nthis should be valid according to the spec:\n\nSELECT -d FROM t;\n\nand this should error:\n\nSELECT CAST(-d AS dom) FROM t;\n-- \nVik Fearing\n\n\n\n", "msg_date": "Sat, 14 Oct 2023 04:23:08 +0200", "msg_from": "Vik Fearing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL domains and NOT NULL constraint" }, { "msg_contents": ">I doubt we'd consider doing anything about that.\n>The whole business of domains with NOT NULL constraints\n>is arguably a defect of the SQL standard, because\n>there are multiple ways to produce a value that\n>is NULL and yet must be considered to be of the domain type.\n\nIn my opinion it is inconsistent and illogical if a type sometimes contains a value and sometimes not.\n\nCREATE DOMAIN d_int INTEGER NOT NULL;\n\nAll the following statements fail (and correctly so in my opinion).\n\nSELECT (NULL)::d_int;\n/*ERROR: domain d_int does not allow null values*/\n\nSELECT Cast(NULL AS d_int);\n/*ERROR: domain d_int does not allow null values*/\n\nWITH val (v) AS (VALUES (1), (NULL))\nSELECT Cast(v AS d_int) AS v\nFROM Val;\n/*ERROR: domain d_int does not allow null values*/\n\nIn my opinion the confusion and related problems arise from the widespread practice of sometimes treating a domain as a type (which it is not) and sometimes treating NULL as a value (which it is not).\n\nBest regards\nErki Eessaar\n\n\n\n\n\n\n\n>I doubt we'd consider doing anything about that.  \n>The whole business of domains with NOT NULL constraints \n>is arguably a defect of the SQL standard, because \n>there are multiple ways to produce a value that\n>is NULL and yet must be considered to be of the domain type.\n\n\n\n\nIn my opinion it is inconsistent and illogical if a type sometimes contains a value and sometimes not.\n\n\n\n\nCREATE DOMAIN d_int INTEGER NOT NULL;\n\n\nAll the following statements fail (and correctly so in my opinion).\n\n\n\nSELECT (NULL)::d_int;\n/*ERROR:  domain d_int does not allow null values*/\n\n\nSELECT Cast(NULL AS d_int);\n/*ERROR:  domain d_int does not allow null values*/\n\n\nWITH val (v) AS (VALUES (1), (NULL))\nSELECT Cast(v AS d_int) AS v\nFROM Val;\n/*ERROR:  domain d_int does not allow null values*/\n\n\n\n\nIn my opinion the confusion and related problems arise from the widespread practice of sometimes treating a domain as a type (which it is not) and sometimes treating NULL as  a value (which it is not).\n\n\n\n\nBest regards\n\nErki Eessaar", "msg_date": "Sat, 14 Oct 2023 07:31:24 +0000", "msg_from": "Erki Eessaar <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL domains and NOT NULL constraint" }, { "msg_contents": "On 10/13/23 06:37, Tom Lane wrote:\n> If it's not nominally of a domain type, please cite chapter and\n> verse that says it isn't.\n\nOkay, I found it.\n\n\nSQL:2023-2 6.7 <column reference>\n\nSyntax Rules\n\n5) Let C be the column that is referenced by CR. The declared type of CR is\n Case:\n a) If the column descriptor of C includes a data type, then that \ndata type.\n\n b) Otherwise, the data type identified in the domain descriptor that \ndescribes the domain that is identified by the <domain name> that is \nincluded in the column descriptor of C.\n\n\nSo the domain should not be carried into a query expression (including \nviews) and the data type should be the one specified in the domain.\n-- \nVik Fearing\n\n\n\n", "msg_date": "Sat, 14 Oct 2023 16:00:49 +0200", "msg_from": "Vik Fearing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL domains and NOT NULL constraint" }, { "msg_contents": "Vik Fearing <[email protected]> writes:\n> On 10/13/23 06:37, Tom Lane wrote:\n>> Hmph. The really basic problem here, I think, is that the spec\n>> wants to claim that a domain is a data type, but then it backs\n>> off and limits where the domain's constraints need to hold.\n\n> I don't think that is an accurate depiction of domains.\n> First of all, I am not seeing where it says that a domain is a data \n> type. It allows domains to be used in some places where a data type is \n> used, but that is not equivalent to a domain /being/ a data type.\n\nHmm, you are right. This is something I'd never paid attention to\nbefore, but they do seem to exclude domains from being the declared\ntype of any expression. Most notably, not even a CAST to a domain\ntype produces the domain type. Per SQL:2021 6.13 <cast specification>\nsyntax rules:\n\n 1) Case:\n a) If a <domain name> is specified, then let TD be the data\n type of the specified domain.\n\n b) If a <data type> is specified, then let TD be the data type\n identified by <data type>. <data type> shall not contain a\n <collate clause>.\n\n 2) The declared type of the result of the <cast specification> is TD.\n\nEven more amusingly for our current purposes, CAST does not enforce\nNOT NULL. <cast specification> general rule 2:\n\n 2) Case:\n a) If the <cast operand> specifies NULL, then the result of CS\n is the null value and no further General Rules of this\n Subclause are applied.\n\n b) If the <cast operand> specifies an <empty specification>,\n then the result of CS is an empty collection of declared type\n TD and no further General Rules of this Subclause are applied.\n\n c) If SV is the null value, then the result of CS is the null\n value and no further General Rules of this Subclause are\n applied.\n\nSo for a null value the spec never reaches GR 23 that says to apply\nthe domain's constraints.\n\nThis is already a sufficient intellectual muddle that I'm not sure\nwe want to follow it slavishly. If not-null can be ignored here,\nwhy not elsewhere?\n\nBut anyway, yeah, the spec's notion of a domain bears only passing\nresemblance to what we've actually implemented. I'm not really sure\nthat we want to switch, because AFAICS the spec's model doesn't\ninclude any of these things:\n\n* Domains over other domains\n\n* Domains over arrays, composite types, etc\n\n* Functions accepting or returning domain types\n\nIf we were to try to do something closer to what the spec has in mind,\nhow would we do it without ripping out a ton of functionality that\npeople have requested and come to depend on?\n\n> Section 4.25.4, \"Domain constraints\" has this to say (emphasis mine):\n>\n> - A domain constraint is satisfied by SQL-data *if and only if*, for \n> every *table* T that has a column named C based on that domain, the \n> applicable <search condition> recorded in the appropriate domain \n> constraint usage evaluates to True or Unknown.\n\nI think that isn't particularly relevant, because I believe that by\nSQL-data they mean the static contents of a database, so of course\nonly table contents matter. What we are concerned about is dynamic\nbehavior within queries and functions.\n\n> Secondly, why are you so concerned about outer join nulls here and not \n> for any other column marked NOT NULL?\n\nPrimarily because that's an easy way for a column that was marked\nNOT NULL to read out as NULL.\n\n>> That's fundamentally inconsistent. It's like claiming that\n>> 'foobarbaz' is a valid value of type numeric as long as it's\n>> only in flight within a query and you haven't tried to store it\n>> into a table.\n\n> It's like claiming that null is a valid value of type numeric as long as \n> it's only in flight within a query and you haven't tried to store it \n> into a table with that column marked NOT NULL.\n\nAnd? NULL *is* a valid value of type numeric, as well as all other\nbase types.\n\n> Allowing a null to be stored in a column where the user has specified \n> NOT NULL, no matter how the user did that, is unacceptable and I am \n> frankly surprised that you are defending it.\n\nWhat I'm trying to hold onto is the notion that a domain can\nmeaningfully be considered to be a data type (that is, that a value in\nflight can be considered to be of a domain type). We've been building\nthe system on that assumption for over twenty years now, and I think\nit's pretty deeply ingrained. I don't understand the consequences\nof abandoning it, and I'm not convinced that the spec's model is\nsufficiently intellectually rigorous that we can just say \"oh, we'll\nfollow the spec instead of what we've been doing, and it'll be fine\".\n\nAs a trivial example: our implementation assumes that enforcing a\ndomain's constraints is to be done by casting the base type value\nto the domain type. Per the above reading of <6.13>, this should\nfail to reject nulls, so we'd have to understand and implement\nchecking of domain constraints in some other way.\n\nGiven the exception the spec makes for CAST, I wonder if we shouldn't\njust say \"NULL is a valid value of every domain type, as well as every\nbase type. If you don't like it, too bad; write a separate NOT NULL\nconstraint for your table column.\"\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 14 Oct 2023 12:09:47 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL domains and NOT NULL constraint" }, { "msg_contents": "Hello\n\nSimilarly, PostgreSQL does not enforce CHECK constraints of domains that try to enforce NOT NULL in the same situations where it does not enforce NOT NULL constraints - see example in the end.\n\nThus, in my base tables can be rows that violate domain NOT NULL and CHECK constraints. For me, it is not a \"feature\", it is a bug.\n\nBy the way, my small applications use domain NOT NULL constraints. This was the reason why I asked are there any other examples in addition to those that I provided that allow NULL's to NOT NULL columns.\n\nBest regards\nErki Eessaar\n\n****************************\nDROP TABLE IF EXISTS Product;\nDROP TABLE IF EXISTS Product_state_type;\nDROP DOMAIN IF EXISTS d_name;\n\nCREATE DOMAIN d_name VARCHAR(50)\nCONSTRAINT chk_d_name CHECK (VALUE IS NOT NULL);\n\nCREATE TABLE Product_state_type (product_state_type_code SMALLINT NOT NULL,\nname d_name,\nCONSTRAINT pk_product_state_type PRIMARY KEY (product_state_type_code),\nCONSTRAINT ak_product_state_type_name UNIQUE (name));\n\nCREATE TABLE Product (product_code INTEGER NOT NULL,\nname d_name,\nproduct_state_type_code SMALLINT NOT NULL,\nCONSTRAINT pk_product PRIMARY KEY (product_code),\nCONSTRAINT fk_product_product_state_type FOREIGN KEY (product_state_type_code)\nREFERENCES Product_state_type(product_state_type_code) ON UPDATE CASCADE);\n\nINSERT INTO Product_state_type (product_state_type_code, name)\nVALUES (1, (SELECT name FROM Product_state_type WHERE FALSE));\n/*Insertion succeeds, name is NULL!*/\n\nINSERT INTO Product (product_code, name, product_state_type_code)\nSELECT 1 AS product_code, Product.name, 1 AS product_state_type_code\nFROM Product_state_type LEFT JOIN Product USING (product_state_type_code);\n/*Insertion succeeds, name is NULL!*/\n\nDROP TABLE IF EXISTS Product;\nDROP TABLE IF EXISTS Product_state_type;\nDROP DOMAIN IF EXISTS d_name;\n\nCREATE DOMAIN d_name VARCHAR(50)\nCONSTRAINT chk_d_name CHECK (coalesce(VALUE,'')<>'');\n\nCREATE TABLE Product_state_type (product_state_type_code SMALLINT NOT NULL,\nname d_name,\nCONSTRAINT pk_product_state_type PRIMARY KEY (product_state_type_code),\nCONSTRAINT ak_product_state_type_name UNIQUE (name));\n\nCREATE TABLE Product (product_code INTEGER NOT NULL,\nname d_name,\nproduct_state_type_code SMALLINT NOT NULL,\nCONSTRAINT pk_product PRIMARY KEY (product_code),\nCONSTRAINT fk_product_product_state_type FOREIGN KEY (product_state_type_code)\nREFERENCES Product_state_type(product_state_type_code) ON UPDATE CASCADE);\n\nINSERT INTO Product_state_type (product_state_type_code, name)\nVALUES (1, (SELECT name FROM Product_state_type WHERE FALSE));\n/*Insertion succeeds, name is NULL!*/\n\nINSERT INTO Product (product_code, name, product_state_type_code)\nSELECT 1 AS product_code, Product.name, 1 AS product_state_type_code\nFROM Product_state_type LEFT JOIN Product USING (product_state_type_code);\n/*Insertion succeeds, name is NULL!*/\n\n________________________________\nFrom: Tom Lane <[email protected]>\nSent: Saturday, October 14, 2023 19:09\nTo: Vik Fearing <[email protected]>\nCc: Erki Eessaar <[email protected]>; [email protected] <[email protected]>\nSubject: Re: PostgreSQL domains and NOT NULL constraint\n\nVik Fearing <[email protected]> writes:\n> On 10/13/23 06:37, Tom Lane wrote:\n>> Hmph. The really basic problem here, I think, is that the spec\n>> wants to claim that a domain is a data type, but then it backs\n>> off and limits where the domain's constraints need to hold.\n\n> I don't think that is an accurate depiction of domains.\n> First of all, I am not seeing where it says that a domain is a data\n> type. It allows domains to be used in some places where a data type is\n> used, but that is not equivalent to a domain /being/ a data type.\n\nHmm, you are right. This is something I'd never paid attention to\nbefore, but they do seem to exclude domains from being the declared\ntype of any expression. Most notably, not even a CAST to a domain\ntype produces the domain type. Per SQL:2021 6.13 <cast specification>\nsyntax rules:\n\n 1) Case:\n a) If a <domain name> is specified, then let TD be the data\n type of the specified domain.\n\n b) If a <data type> is specified, then let TD be the data type\n identified by <data type>. <data type> shall not contain a\n <collate clause>.\n\n 2) The declared type of the result of the <cast specification> is TD.\n\nEven more amusingly for our current purposes, CAST does not enforce\nNOT NULL. <cast specification> general rule 2:\n\n 2) Case:\n a) If the <cast operand> specifies NULL, then the result of CS\n is the null value and no further General Rules of this\n Subclause are applied.\n\n b) If the <cast operand> specifies an <empty specification>,\n then the result of CS is an empty collection of declared type\n TD and no further General Rules of this Subclause are applied.\n\n c) If SV is the null value, then the result of CS is the null\n value and no further General Rules of this Subclause are\n applied.\n\nSo for a null value the spec never reaches GR 23 that says to apply\nthe domain's constraints.\n\nThis is already a sufficient intellectual muddle that I'm not sure\nwe want to follow it slavishly. If not-null can be ignored here,\nwhy not elsewhere?\n\nBut anyway, yeah, the spec's notion of a domain bears only passing\nresemblance to what we've actually implemented. I'm not really sure\nthat we want to switch, because AFAICS the spec's model doesn't\ninclude any of these things:\n\n* Domains over other domains\n\n* Domains over arrays, composite types, etc\n\n* Functions accepting or returning domain types\n\nIf we were to try to do something closer to what the spec has in mind,\nhow would we do it without ripping out a ton of functionality that\npeople have requested and come to depend on?\n\n> Section 4.25.4, \"Domain constraints\" has this to say (emphasis mine):\n>\n> - A domain constraint is satisfied by SQL-data *if and only if*, for\n> every *table* T that has a column named C based on that domain, the\n> applicable <search condition> recorded in the appropriate domain\n> constraint usage evaluates to True or Unknown.\n\nI think that isn't particularly relevant, because I believe that by\nSQL-data they mean the static contents of a database, so of course\nonly table contents matter. What we are concerned about is dynamic\nbehavior within queries and functions.\n\n> Secondly, why are you so concerned about outer join nulls here and not\n> for any other column marked NOT NULL?\n\nPrimarily because that's an easy way for a column that was marked\nNOT NULL to read out as NULL.\n\n>> That's fundamentally inconsistent. It's like claiming that\n>> 'foobarbaz' is a valid value of type numeric as long as it's\n>> only in flight within a query and you haven't tried to store it\n>> into a table.\n\n> It's like claiming that null is a valid value of type numeric as long as\n> it's only in flight within a query and you haven't tried to store it\n> into a table with that column marked NOT NULL.\n\nAnd? NULL *is* a valid value of type numeric, as well as all other\nbase types.\n\n> Allowing a null to be stored in a column where the user has specified\n> NOT NULL, no matter how the user did that, is unacceptable and I am\n> frankly surprised that you are defending it.\n\nWhat I'm trying to hold onto is the notion that a domain can\nmeaningfully be considered to be a data type (that is, that a value in\nflight can be considered to be of a domain type). We've been building\nthe system on that assumption for over twenty years now, and I think\nit's pretty deeply ingrained. I don't understand the consequences\nof abandoning it, and I'm not convinced that the spec's model is\nsufficiently intellectually rigorous that we can just say \"oh, we'll\nfollow the spec instead of what we've been doing, and it'll be fine\".\n\nAs a trivial example: our implementation assumes that enforcing a\ndomain's constraints is to be done by casting the base type value\nto the domain type. Per the above reading of <6.13>, this should\nfail to reject nulls, so we'd have to understand and implement\nchecking of domain constraints in some other way.\n\nGiven the exception the spec makes for CAST, I wonder if we shouldn't\njust say \"NULL is a valid value of every domain type, as well as every\nbase type. If you don't like it, too bad; write a separate NOT NULL\nconstraint for your table column.\"\n\n regards, tom lane\n\n\n\n\n\n\n\n\nHello\n\n\n\n\nSimilarly, PostgreSQL does not enforce CHECK constraints of domains that try to enforce NOT NULL in the same situations where it does not enforce NOT NULL constraints - see example in the end.\n\n\n\n\n\n\nThus, in my base tables can be rows that violate domain NOT NULL and CHECK constraints. For me, it is not a \"feature\", it is a bug.\n\n\n\n\nBy the way, my small applications use domain NOT NULL constraints. This was the reason why I asked are there any other examples in addition to those that I provided that allow NULL's to NOT NULL columns.\n\n\n\n\n\nBest regards\n\nErki Eessaar\n\n\n\n\n****************************\n\nDROP TABLE IF EXISTS Product;\nDROP TABLE IF EXISTS Product_state_type;\nDROP DOMAIN IF EXISTS d_name;\n\n\nCREATE DOMAIN d_name VARCHAR(50)\nCONSTRAINT chk_d_name CHECK (VALUE IS NOT NULL);\n\n\nCREATE TABLE Product_state_type (product_state_type_code SMALLINT NOT NULL,\nname d_name,\nCONSTRAINT pk_product_state_type PRIMARY KEY (product_state_type_code),\nCONSTRAINT ak_product_state_type_name UNIQUE (name));\n\n\nCREATE TABLE Product (product_code INTEGER NOT NULL,\nname d_name,\nproduct_state_type_code SMALLINT NOT NULL,\nCONSTRAINT pk_product PRIMARY KEY (product_code),\nCONSTRAINT fk_product_product_state_type FOREIGN KEY (product_state_type_code)\nREFERENCES Product_state_type(product_state_type_code) ON UPDATE CASCADE);\n\n\nINSERT INTO Product_state_type (product_state_type_code, name)\nVALUES (1, (SELECT name FROM Product_state_type WHERE FALSE));\n/*Insertion succeeds, name is NULL!*/\n\n\nINSERT INTO Product (product_code, name, product_state_type_code)\nSELECT 1 AS product_code, Product.name, 1 AS product_state_type_code\nFROM Product_state_type LEFT JOIN Product USING (product_state_type_code);\n/*Insertion succeeds, name is NULL!*/\n\n\nDROP TABLE IF EXISTS Product;\nDROP TABLE IF EXISTS Product_state_type;\nDROP DOMAIN IF EXISTS d_name;\n\n\nCREATE DOMAIN d_name VARCHAR(50)\nCONSTRAINT chk_d_name CHECK (coalesce(VALUE,'')<>'');\n\n\nCREATE TABLE Product_state_type (product_state_type_code SMALLINT NOT NULL,\nname d_name,\nCONSTRAINT pk_product_state_type PRIMARY KEY (product_state_type_code),\nCONSTRAINT ak_product_state_type_name UNIQUE (name));\n\n\nCREATE TABLE Product (product_code INTEGER NOT NULL,\nname d_name,\nproduct_state_type_code SMALLINT NOT NULL,\nCONSTRAINT pk_product PRIMARY KEY (product_code),\nCONSTRAINT fk_product_product_state_type FOREIGN KEY (product_state_type_code)\nREFERENCES Product_state_type(product_state_type_code) ON UPDATE CASCADE);\n\n\nINSERT INTO Product_state_type (product_state_type_code, name)\nVALUES (1, (SELECT name FROM Product_state_type WHERE FALSE));\n/*Insertion succeeds, name is NULL!*/\n\n\nINSERT INTO Product (product_code, name, product_state_type_code)\nSELECT 1 AS product_code, Product.name, 1 AS product_state_type_code\nFROM Product_state_type LEFT JOIN Product USING (product_state_type_code);\n/*Insertion succeeds, name is NULL!*/\n\n\n\n\n\n\nFrom: Tom Lane <[email protected]>\nSent: Saturday, October 14, 2023 19:09\nTo: Vik Fearing <[email protected]>\nCc: Erki Eessaar <[email protected]>; [email protected] <[email protected]>\nSubject: Re: PostgreSQL domains and NOT NULL constraint\n \n\n\nVik Fearing <[email protected]> writes:\n> On 10/13/23 06:37, Tom Lane wrote:\n>> Hmph.  The really basic problem here, I think, is that the spec\n>> wants to claim that a domain is a data type, but then it backs\n>> off and limits where the domain's constraints need to hold.\n\n> I don't think that is an accurate depiction of domains.\n> First of all, I am not seeing where it says that a domain is a data \n> type.  It allows domains to be used in some places where a data type is \n> used, but that is not equivalent to a domain /being/ a data type.\n\nHmm, you are right.  This is something I'd never paid attention to\nbefore, but they do seem to exclude domains from being the declared\ntype of any expression.  Most notably, not even a CAST to a domain\ntype produces the domain type.  Per SQL:2021 6.13 <cast specification>\nsyntax rules:\n\n    1) Case:\n        a) If a <domain name> is specified, then let TD be the data\n        type of the specified domain.\n\n        b) If a <data type> is specified, then let TD be the data type\n        identified by <data type>. <data type> shall not contain a\n        <collate clause>.\n\n    2) The declared type of the result of the <cast specification> is TD.\n\nEven more amusingly for our current purposes, CAST does not enforce\nNOT NULL.  <cast specification> general rule 2:\n\n    2) Case:\n        a) If the <cast operand> specifies NULL, then the result of CS\n        is the null value and no further General Rules of this\n        Subclause are applied.\n\n        b) If the <cast operand> specifies an <empty specification>,\n        then the result of CS is an empty collection of declared type\n        TD and no further General Rules of this Subclause are applied.\n\n        c) If SV is the null value, then the result of CS is the null\n        value and no further General Rules of this Subclause are\n        applied.\n\nSo for a null value the spec never reaches GR 23 that says to apply\nthe domain's constraints.\n\nThis is already a sufficient intellectual muddle that I'm not sure\nwe want to follow it slavishly.  If not-null can be ignored here,\nwhy not elsewhere?\n\nBut anyway, yeah, the spec's notion of a domain bears only passing\nresemblance to what we've actually implemented.  I'm not really sure\nthat we want to switch, because AFAICS the spec's model doesn't\ninclude any of these things:\n\n* Domains over other domains\n\n* Domains over arrays, composite types, etc\n\n* Functions accepting or returning domain types\n\nIf we were to try to do something closer to what the spec has in mind,\nhow would we do it without ripping out a ton of functionality that\npeople have requested and come to depend on?\n\n> Section 4.25.4, \"Domain constraints\" has this to say (emphasis mine):\n>\n> - A domain constraint is satisfied by SQL-data *if and only if*, for \n> every *table* T that has a column named C based on that domain, the \n> applicable <search condition> recorded in the appropriate domain \n> constraint usage evaluates to True or Unknown.\n\nI think that isn't particularly relevant, because I believe that by\nSQL-data they mean the static contents of a database, so of course\nonly table contents matter.  What we are concerned about is dynamic\nbehavior within queries and functions.\n\n> Secondly, why are you so concerned about outer join nulls here and not \n> for any other column marked NOT NULL?\n\nPrimarily because that's an easy way for a column that was marked\nNOT NULL to read out as NULL.\n\n>> That's fundamentally inconsistent.  It's like claiming that\n>> 'foobarbaz' is a valid value of type numeric as long as it's\n>> only in flight within a query and you haven't tried to store it\n>> into a table.\n\n> It's like claiming that null is a valid value of type numeric as long as \n> it's only in flight within a query and you haven't tried to store it \n> into a table with that column marked NOT NULL.\n\nAnd?  NULL *is* a valid value of type numeric, as well as all other\nbase types.\n\n> Allowing a null to be stored in a column where the user has specified \n> NOT NULL, no matter how the user did that, is unacceptable and I am \n> frankly surprised that you are defending it.\n\nWhat I'm trying to hold onto is the notion that a domain can\nmeaningfully be considered to be a data type (that is, that a value in\nflight can be considered to be of a domain type).  We've been building\nthe system on that assumption for over twenty years now, and I think\nit's pretty deeply ingrained.  I don't understand the consequences\nof abandoning it, and I'm not convinced that the spec's model is\nsufficiently intellectually rigorous that we can just say \"oh, we'll\nfollow the spec instead of what we've been doing, and it'll be fine\".\n\nAs a trivial example: our implementation assumes that enforcing a\ndomain's constraints is to be done by casting the base type value\nto the domain type.  Per the above reading of <6.13>, this should\nfail to reject nulls, so we'd have to understand and implement\nchecking of domain constraints in some other way.\n\nGiven the exception the spec makes for CAST, I wonder if we shouldn't\njust say \"NULL is a valid value of every domain type, as well as every\nbase type.  If you don't like it, too bad; write a separate NOT NULL\nconstraint for your table column.\"\n\n                        regards, tom lane", "msg_date": "Sun, 15 Oct 2023 07:22:59 +0000", "msg_from": "Erki Eessaar <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL domains and NOT NULL constraint" }, { "msg_contents": "I wrote:\n> Given the exception the spec makes for CAST, I wonder if we shouldn't\n> just say \"NULL is a valid value of every domain type, as well as every\n> base type. If you don't like it, too bad; write a separate NOT NULL\n> constraint for your table column.\"\n\nAfter ruminating on this for awhile, here's a straw-man proposal:\n\n1. Domains are data types, with the proviso that NULL is always\na valid value no matter what the domain constraints might say.\nImplementation-wise, this'd just require that CoerceToDomain\nimmediately return any null input without checking the constraints.\nThis has two big attractions:\n\n(1A) It satisfies the plain language of the SQL spec about how\nCAST to a domain type behaves.\n\n(1B) It legitimizes our behavior of allowing nullable outer join\ncolumns, sub-SELECT outputs, etc to be considered to be of the\nsource column's domain type and not just the base type.\n\n2. In INSERT and UPDATE queries, thumb through the constraints of\nany domain-typed target columns to see if any of them are NOT NULL\nor CHECK(VALUE IS NOT NULL). If so, act as though there's a table\nNOT NULL constraint on that column.\n\nThe idea of point #2 is to have a cheap check that 99% satisfies\nwhat the spec says about not-null constraints on domains. If we\ndon't do #2, I think we have to fully recheck all the domain's\nconstraints during column assignment. I find that ugly as well\nas expensive performance-wise. It does mean that if you have\nsome domain constraint that would act to reject NULLs, but it's\nspelled in some weird way, it won't reject NULLs. I don't find\nthat possibility compelling enough to justify the performance hit\nof recomputing every constraint just in case it acts like that.\n\n3. Left unsaid here is whether we should treat assignments to,\ne.g., plpgsql variables as acting like assignments to table\ncolumns. I'm inclined not to, because\n\n(3A) I'm lazy, and I'm also worried that we'd miss places where\nthis arguably should happen.\n\n(3B) I don't think the SQL spec contemplates any such thing\nhappening.\n\n(3C) Not doing that means we have a pretty consistent view of\nwhat the semantics are for \"values in flight\" within a query.\nAnything that's not stored in a table is \"in flight\" and so\ncan be NULL.\n\n(3D) Again, if you don't like it, there's already ways to attach\na separate NOT NULL constraint to plpgsql variables.\n\n\nDocumenting this in an intelligible fashion might be tricky,\nbut explaining the exact spec-mandated behavior wouldn't be\nmuch fun either.\n\nThoughts?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 23 Oct 2023 12:53:50 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL domains and NOT NULL constraint" }, { "msg_contents": "On Mon, 23 Oct 2023 at 13:40, Tom Lane <[email protected]> wrote:\n\n> I wrote:\n> > Given the exception the spec makes for CAST, I wonder if we shouldn't\n> > just say \"NULL is a valid value of every domain type, as well as every\n> > base type. If you don't like it, too bad; write a separate NOT NULL\n> > constraint for your table column.\"\n>\n> After ruminating on this for awhile, here's a straw-man proposal:\n>\n\n[....]\n\n\n> 3. Left unsaid here is whether we should treat assignments to,\n> e.g., plpgsql variables as acting like assignments to table\n> columns. I'm inclined not to, because\n>\n> (3A) I'm lazy, and I'm also worried that we'd miss places where\n> this arguably should happen.\n>\n> (3B) I don't think the SQL spec contemplates any such thing\n> happening.\n>\n> (3C) Not doing that means we have a pretty consistent view of\n> what the semantics are for \"values in flight\" within a query.\n> Anything that's not stored in a table is \"in flight\" and so\n> can be NULL.\n>\n> (3D) Again, if you don't like it, there's already ways to attach\n> a separate NOT NULL constraint to plpgsql variables.\n>\n>\n> Documenting this in an intelligible fashion might be tricky,\n> but explaining the exact spec-mandated behavior wouldn't be\n> much fun either.\n\n\nThis sounds pretty good.\n\nI'd be OK with only running the CHECK clause on non-NULL values. This would\nimply that \"CHECK (VALUE NOT NULL)\" would have exactly the same effect as\n\"CHECK (TRUE)\" (i.e., no effect). This might seem insane but it avoids a\nspecial case and in any event if somebody wants the NOT NULL behaviour,\nthey can get it by specifying NOT NULL in the CREATE DOMAIN command.\n\nThen domain CHECK constraints are checked anytime a non-NULL value is\nturned into a domain value, and NOT NULL ones are checked only when storing\nto a table. CHECK constraints would be like STRICT functions; if the input\nis NULL, the implementation is not run and the result is NULL (which for a\nCHECK means accept the input).\n\nWhether I actually think the above is a good idea would require me to read\ncarefully the relevant section of the SQL spec. If it agrees that CHECK ()\nis for testing non-NULL values and NOT NULL is for saying that columns of\nactual tables can't be NULL, then I would probably agree with my own idea,\notherwise perhaps not depending on exactly what it said.\n\nSome possible documentation wording to consider for the CREATE DOMAIN page:\n\nUnder \"NOT NULL\": \"Table columns whose data type is this domain may not be\nNULL, exactly as if NOT NULL had been given in the column specification.\"\n\nUnder \"NULL\": \"This is a noise word indicating the default, which is that\nthe domain does not restrict NULL from occurring in table columns whose\ndata type is this domain.\"\n\nUnder \"CHECK (expression)\", replacing the first sentence: \"CHECK clauses\nspecify integrity constraints or tests which non-NULL values of the domain\nmust satisfy; NULLs are never checked by domain CHECK clauses. To use a\ndomain to prevent a NULL from occurring in a table column, use the NOT NULL\nclause.\"\n\nAlso, where it says \"Expressions evaluating to TRUE or UNKNOWN succeed\": Do\nwe really mean \"Expressions evaluating to TRUE or NULL succeed\"?\n\nIt would be nice if we had universally agreed terminology so that we would\nhave one word for the non-NULL things of various data types, and another\nword for the possibly NULL things that might occur in variable or column.\n\nIf we decide we do want \"CHECK (VALUE NOT NULL)\" to work, then I wonder if\nwe could pass NULL to the constraint at CREATE DOMAIN time, and if it\nreturns FALSE, do exactly what we would have done (set pg_type.typnotnull)\nif an actual NOT NULL clause had been specified? Then when actually\nprocessing domain constraints during a query, we could use the above\nprocedure. I'm thinking about more complicated constraints that evaluate to\nFALSE for NULL but which are not simply \"CHECK (VALUE NOT NULL)\".\n\nIs it an error to specify both NULL and NOT NULL? What about CHECK (VALUE\nNOT NULL) and NULL?\n\nOn Mon, 23 Oct 2023 at 13:40, Tom Lane <[email protected]> wrote:I wrote:\n> Given the exception the spec makes for CAST, I wonder if we shouldn't\n> just say \"NULL is a valid value of every domain type, as well as every\n> base type.  If you don't like it, too bad; write a separate NOT NULL\n> constraint for your table column.\"\n\nAfter ruminating on this for awhile, here's a straw-man proposal: [....] \n3. Left unsaid here is whether we should treat assignments to,\ne.g., plpgsql variables as acting like assignments to table\ncolumns.  I'm inclined not to, because\n\n(3A) I'm lazy, and I'm also worried that we'd miss places where\nthis arguably should happen.\n\n(3B) I don't think the SQL spec contemplates any such thing\nhappening.\n\n(3C) Not doing that means we have a pretty consistent view of\nwhat the semantics are for \"values in flight\" within a query.\nAnything that's not stored in a table is \"in flight\" and so\ncan be NULL.\n\n(3D) Again, if you don't like it, there's already ways to attach\na separate NOT NULL constraint to plpgsql variables.\n\n\nDocumenting this in an intelligible fashion might be tricky,\nbut explaining the exact spec-mandated behavior wouldn't be\nmuch fun either.This sounds pretty good.I'd be OK with only running the CHECK clause on non-NULL values. This would imply that \"CHECK (VALUE NOT NULL)\" would have exactly the same effect as \"CHECK (TRUE)\" (i.e., no effect). This might seem insane but it avoids a special case and in any event if somebody wants the NOT NULL behaviour, they can get it by specifying NOT NULL in the CREATE DOMAIN command.Then domain CHECK constraints are checked anytime a non-NULL value is turned into a domain value, and NOT NULL ones are checked only when storing to a table. CHECK constraints would be like STRICT functions; if the input is NULL, the implementation is not run and the result is NULL (which for a CHECK means accept the input).Whether I actually think the above is a good idea would require me to read carefully the relevant section of the SQL spec. If it agrees that CHECK () is for testing non-NULL values and NOT NULL is for saying that columns of actual tables can't be NULL, then I would probably agree with my own idea, otherwise perhaps not depending on exactly what it said.Some possible documentation wording to consider for the CREATE DOMAIN page:Under \"NOT NULL\": \"Table columns whose data type is this domain may not be NULL, exactly as if NOT NULL had been given in the column specification.\"Under \"NULL\": \"This is a noise word indicating the default, which is that the domain does not restrict NULL from occurring in table columns whose data type is this domain.\"Under \"CHECK (expression)\", replacing the first sentence: \"CHECK clauses specify integrity constraints or tests which non-NULL values of the domain must satisfy; NULLs are never checked by domain CHECK clauses. To use a domain to prevent a NULL from occurring in a table column, use the NOT NULL clause.\"Also, where it says \"Expressions evaluating to TRUE or UNKNOWN succeed\": Do we really mean \"Expressions evaluating to TRUE or NULL succeed\"?It would be nice if we had universally agreed terminology so that we would have one word for the non-NULL things of various data types, and another word for the possibly NULL things that might occur in variable or column.If we decide we do want \"CHECK (VALUE NOT NULL)\" to work, then I wonder if we could pass NULL to the constraint at CREATE DOMAIN time, and if it returns FALSE, do exactly what we would have done (set pg_type.typnotnull) if an actual NOT NULL clause had been specified? Then when actually processing domain constraints during a query, we could use the above procedure. I'm thinking about more complicated constraints that evaluate to FALSE for NULL but which are not simply \"CHECK (VALUE NOT NULL)\".Is it an error to specify both NULL and NOT NULL? What about CHECK (VALUE NOT NULL) and NULL?", "msg_date": "Mon, 23 Oct 2023 14:36:12 -0400", "msg_from": "Isaac Morland <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL domains and NOT NULL constraint" }, { "msg_contents": "po 23. 10. 2023 v 19:34 odesílatel Tom Lane <[email protected]> napsal:\n\n> I wrote:\n> > Given the exception the spec makes for CAST, I wonder if we shouldn't\n> > just say \"NULL is a valid value of every domain type, as well as every\n> > base type. If you don't like it, too bad; write a separate NOT NULL\n> > constraint for your table column.\"\n>\n> After ruminating on this for awhile, here's a straw-man proposal:\n>\n> 1. Domains are data types, with the proviso that NULL is always\n> a valid value no matter what the domain constraints might say.\n> Implementation-wise, this'd just require that CoerceToDomain\n> immediately return any null input without checking the constraints.\n> This has two big attractions:\n>\n> (1A) It satisfies the plain language of the SQL spec about how\n> CAST to a domain type behaves.\n>\n> (1B) It legitimizes our behavior of allowing nullable outer join\n> columns, sub-SELECT outputs, etc to be considered to be of the\n> source column's domain type and not just the base type.\n>\n> 2. In INSERT and UPDATE queries, thumb through the constraints of\n> any domain-typed target columns to see if any of them are NOT NULL\n> or CHECK(VALUE IS NOT NULL). If so, act as though there's a table\n> NOT NULL constraint on that column.\n>\n\n+1\n\nI think only this interpretation makes sense.\n\n\n> The idea of point #2 is to have a cheap check that 99% satisfies\n> what the spec says about not-null constraints on domains. If we\n> don't do #2, I think we have to fully recheck all the domain's\n> constraints during column assignment. I find that ugly as well\n> as expensive performance-wise. It does mean that if you have\n> some domain constraint that would act to reject NULLs, but it's\n> spelled in some weird way, it won't reject NULLs. I don't find\n> that possibility compelling enough to justify the performance hit\n> of recomputing every constraint just in case it acts like that.\n>\n> 3. Left unsaid here is whether we should treat assignments to,\n> e.g., plpgsql variables as acting like assignments to table\n> columns. I'm inclined not to, because\n>\n> (3A) I'm lazy, and I'm also worried that we'd miss places where\n> this arguably should happen.\n>\n> (3B) I don't think the SQL spec contemplates any such thing\n> happening.\n>\n> (3C) Not doing that means we have a pretty consistent view of\n> what the semantics are for \"values in flight\" within a query.\n> Anything that's not stored in a table is \"in flight\" and so\n> can be NULL.\n>\n> (3D) Again, if you don't like it, there's already ways to attach\n> a separate NOT NULL constraint to plpgsql variables.\n>\n\nAlthough I don't fully like it, I think ignoring the NOT NULL constraint\nfor plpgsql's variables is a better way, then apply it. Elsewhere there can\nbe issues related to variable's initialization.\n\nRegards\n\nPavel\n\n\n\n\n\n\n>\n>\n> Documenting this in an intelligible fashion might be tricky,\n> but explaining the exact spec-mandated behavior wouldn't be\n> much fun either.\n>\n> Thoughts?\n>\n> regards, tom lane\n>\n>\n>\n\npo 23. 10. 2023 v 19:34 odesílatel Tom Lane <[email protected]> napsal:I wrote:\n> Given the exception the spec makes for CAST, I wonder if we shouldn't\n> just say \"NULL is a valid value of every domain type, as well as every\n> base type.  If you don't like it, too bad; write a separate NOT NULL\n> constraint for your table column.\"\n\nAfter ruminating on this for awhile, here's a straw-man proposal:\n\n1. Domains are data types, with the proviso that NULL is always\na valid value no matter what the domain constraints might say.\nImplementation-wise, this'd just require that CoerceToDomain\nimmediately return any null input without checking the constraints.\nThis has two big attractions:\n\n(1A) It satisfies the plain language of the SQL spec about how\nCAST to a domain type behaves.\n\n(1B) It legitimizes our behavior of allowing nullable outer join\ncolumns, sub-SELECT outputs, etc to be considered to be of the\nsource column's domain type and not just the base type.\n\n2. In INSERT and UPDATE queries, thumb through the constraints of\nany domain-typed target columns to see if any of them are NOT NULL\nor CHECK(VALUE IS NOT NULL).  If so, act as though there's a table\nNOT NULL constraint on that column.+1I think only this interpretation makes sense.\n\nThe idea of point #2 is to have a cheap check that 99% satisfies\nwhat the spec says about not-null constraints on domains.  If we\ndon't do #2, I think we have to fully recheck all the domain's\nconstraints during column assignment.  I find that ugly as well\nas expensive performance-wise.  It does mean that if you have\nsome domain constraint that would act to reject NULLs, but it's\nspelled in some weird way, it won't reject NULLs.  I don't find\nthat possibility compelling enough to justify the performance hit\nof recomputing every constraint just in case it acts like that.\n\n3. Left unsaid here is whether we should treat assignments to,\ne.g., plpgsql variables as acting like assignments to table\ncolumns.  I'm inclined not to, because\n\n(3A) I'm lazy, and I'm also worried that we'd miss places where\nthis arguably should happen.\n\n(3B) I don't think the SQL spec contemplates any such thing\nhappening.\n\n(3C) Not doing that means we have a pretty consistent view of\nwhat the semantics are for \"values in flight\" within a query.\nAnything that's not stored in a table is \"in flight\" and so\ncan be NULL.\n\n(3D) Again, if you don't like it, there's already ways to attach\na separate NOT NULL constraint to plpgsql variables.Although I don't fully like it, I think ignoring the NOT NULL constraint for plpgsql's variables is a better way, then apply it. Elsewhere there can be issues related to variable's initialization. RegardsPavel \n\n\nDocumenting this in an intelligible fashion might be tricky,\nbut explaining the exact spec-mandated behavior wouldn't be\nmuch fun either.\n\nThoughts?\n\n                        regards, tom lane", "msg_date": "Mon, 23 Oct 2023 20:49:21 +0200", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL domains and NOT NULL constraint" }, { "msg_contents": "On 10/23/23 20:36, Isaac Morland wrote:\n> Also, where it says \"Expressions evaluating to TRUE or UNKNOWN succeed\": \n> Do we really mean \"Expressions evaluating to TRUE or NULL succeed\"?\n\nNo, UNKNOWN is the correct nomenclature for booleans.\n-- \nVik Fearing\n\n\n\n", "msg_date": "Mon, 23 Oct 2023 23:01:17 +0200", "msg_from": "Vik Fearing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL domains and NOT NULL constraint" }, { "msg_contents": "On 10/23/23 18:53, Tom Lane wrote:\n> 1. Domains are data types, with the proviso that NULL is always a valid \n> value no matter what the domain constraints might say. \n> Implementation-wise, this'd just require that CoerceToDomain immediately \n> return any null input without checking the constraints. This has two big \n> attractions:\n\n\n> (1A) It satisfies the plain language of the SQL spec about \n> how CAST to a domain type behaves.\n\n\nI agree with all of your proposal, except for this part. I think the \nshortcut in the General Rules of <cast specification> is an oversight \nand I plan on submitting a paper to fix it. The intention is, in my \nview, clearly to check the constraints upon casting. What other \nexplanation is there since the result type is still the domain's base \ntype[*]?\n\n\n[*] In the standard, not in our superior implementation of it.\n-- \nVik Fearing\n\n\n\n", "msg_date": "Mon, 23 Oct 2023 23:08:12 +0200", "msg_from": "Vik Fearing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL domains and NOT NULL constraint" }, { "msg_contents": "Vik Fearing <[email protected]> writes:\n> On 10/23/23 18:53, Tom Lane wrote:\n>> (1A) It satisfies the plain language of the SQL spec about \n>> how CAST to a domain type behaves.\n\n> I agree with all of your proposal, except for this part. I think the \n> shortcut in the General Rules of <cast specification> is an oversight \n> and I plan on submitting a paper to fix it.\n\nYeah, it might be a bug in the spec, but if so the bug has been there\nsince SQL92 without anyone noticing. SQL92 has GR2 as\n\n 2) Case:\n\n a) If the <cast operand> specifies NULL or if SV is the null\n value, then the result of the <cast specification> is the\n null value.\n\nSQL99 revised the text some, but without changing that outcome.\nThen in SQL:2003 they doubled down on the point:\n\n a) If the <cast operand> specifies NULL, then TV is the null value and\n no further General Rules of this Subclause are applied.\n\n b) If the <cast operand> specifies an <empty specification>, then TV\n is an empty collection of declared type TD and no further General\n Rules of this Subclause are applied.\n\n c) If SV is the null value, then TV is the null value and no further\n General Rules of this Subclause are applied.\n\nYou're suggesting that nobody noticed that this wording requires NULLs\nto skip the domain checks? Maybe, but I think it must be intentional.\nI'll await the committee's reaction with interest.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 23 Oct 2023 17:21:29 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL domains and NOT NULL constraint" }, { "msg_contents": "Isaac Morland <[email protected]> writes:\n> Then domain CHECK constraints are checked anytime a non-NULL value is\n> turned into a domain value, and NOT NULL ones are checked only when storing\n> to a table. CHECK constraints would be like STRICT functions; if the input\n> is NULL, the implementation is not run and the result is NULL (which for a\n> CHECK means accept the input).\n\nRight.\n\n> Whether I actually think the above is a good idea would require me to read\n> carefully the relevant section of the SQL spec. If it agrees that CHECK ()\n> is for testing non-NULL values and NOT NULL is for saying that columns of\n> actual tables can't be NULL, then I would probably agree with my own idea,\n> otherwise perhaps not depending on exactly what it said.\n\nThe spec doesn't actually allow bare NOT NULL as a domain constraint;\nit only has CHECK constraints. Of course you can write CHECK(VALUE\nIS NOT NULL), or more-complicated things that will reject a NULL,\nbut they're effectively ignored during CAST and applied only when\nstoring to a table column.\n\nI think we decided to implement NOT NULL because it seemed like an\nodd wart not to have it if you could do the CHECK equivalent.\nIn the light of this new understanding, though, I bet they omitted\nit deliberately because it'd be too-obviously-inconsistent behavior.\n\nIn any case, we can't drop the NOT NULL option now without breaking\napps. I think it should continue to behave exactly the same as\n\"CHECK(VALUE IS NOT NULL)\".\n\n> If we decide we do want \"CHECK (VALUE NOT NULL)\" to work, then I wonder if\n> we could pass NULL to the constraint at CREATE DOMAIN time, and if it\n> returns FALSE, do exactly what we would have done (set pg_type.typnotnull)\n> if an actual NOT NULL clause had been specified?\n\nMaybe, but then ALTER DOMAIN would have to be prepared to update that\nflag when adding or dropping constraints. Perhaps that's better than\nchecking on-the-fly during DML commands, though.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 23 Oct 2023 17:41:45 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL domains and NOT NULL constraint" }, { "msg_contents": "I wrote:\n> Isaac Morland <[email protected]> writes:\n>> If we decide we do want \"CHECK (VALUE NOT NULL)\" to work, then I wonder if\n>> we could pass NULL to the constraint at CREATE DOMAIN time, and if it\n>> returns FALSE, do exactly what we would have done (set pg_type.typnotnull)\n>> if an actual NOT NULL clause had been specified?\n\n> Maybe, but then ALTER DOMAIN would have to be prepared to update that\n> flag when adding or dropping constraints. Perhaps that's better than\n> checking on-the-fly during DML commands, though.\n\nAfter further thought I like that idea a lot, but we can't simply\noverwrite pg_type.typnotnull without losing track of whether the user\nhad given a bare NOT NULL constraint. Instead I think the details\nshould be like this:\n\n1. Add a bool column \"connotnull\" (or some such name) to pg_constraint.\nSet this to true when the constraint is a domain CHECK constraint that\nreturns FALSE for NULL input. (In future we could maintain the flag\nfor table CHECK constraints too, perhaps, but I don't see value in\nthat right now.) This requires assuming that the constraint is\nimmutable (which we assume already) and that it's okay to evaluate it\non a NULL immediately during CREATE DOMAIN or ALTER DOMAIN ADD\nCONSTRAINT. It seems possible that that could fail, but only with\nrather questionable choices of constraints.\n\n2. INSERT/UPDATE enforce not-nullness if pg_type.typnotnull is set\nor there is any domain constraint with pg_constraint.connotnull\nset. This still requires thumbing through the constraints at\nquery start, but the check is cheaper and a good deal more bulletproof\nthan my previous suggestion of a purely-syntactic check.\n\nWe could make query start still cheaper by adding another pg_type\ncolumn that is the OR of the associated constraints' connotnull\nflags, but I suspect it's not worth the trouble. The typcache\ncan probably maintain that info with epsilon extra cost.\n\nA variant approach could be to omit the catalog changes and have\nthis state be tracked entirely by the typcache. That'd result in\nrather more trial evaluations of the domain constraints on NULLs,\nbut it would have the advantage of not requiring any constraint\nevaluations to occur during CREATE/ALTER DOMAIN, only during startup\nof a query that's likely to evaluate them anyway. That'd reduce\nthe odds of breaking things thanks to search_path dependencies\nand suchlike.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 23 Oct 2023 18:08:08 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL domains and NOT NULL constraint" }, { "msg_contents": "On Mon, 23 Oct 2023, 19:34 Tom Lane, <[email protected]> wrote:\n>\n> I wrote:\n> > Given the exception the spec makes for CAST, I wonder if we shouldn't\n> > just say \"NULL is a valid value of every domain type, as well as every\n> > base type. If you don't like it, too bad; write a separate NOT NULL\n> > constraint for your table column.\"\n>\n> After ruminating on this for awhile, here's a straw-man proposal:\n>\n> 1. Domains are data types, with the proviso that NULL is always\n> a valid value no matter what the domain constraints might say.\n> Implementation-wise, this'd just require that CoerceToDomain\n> immediately return any null input without checking the constraints.\n> This has two big attractions:\n\nAgreed.\n\n> 2. In INSERT and UPDATE queries, thumb through the constraints of\n> any domain-typed target columns to see if any of them are NOT NULL\n> or CHECK(VALUE IS NOT NULL). If so, act as though there's a table\n> NOT NULL constraint on that column.\n\nHow does this work w.r.t. concurrently created tables that contain the\ndomain? Right now, you can do something along the lines of the\nfollowing due to a lack of locking on domains for new columns/tables\nthat use said domain, and I believe that this is the main source of\ndomain constraint violations:\n\nCREATE DOMAIN mydomain text;\nCREATE TABLE c (d mydomain);\n\nS1: BEGIN; INSERT INTO c VALUES (''); CREATE TABLE t (d mydomain);\nINSERT INTO t VALUES (NULL);\n\nS2: BEGIN; ALTER DOMAIN mydomain SET NOT NULL;\n-- waits for S1 to release lock on c\n\nS1: COMMIT;\n-- S2's ALTER DOMAIN gets unblocked and succeeds, despite the NULL\nvalue in \"t\" because that table is invisible to the transaction of\nALTER DOMAIN.\n\nSo my base question is, should we then require e.g. SHARE locks on\ntypes that depend on domains when we do DDL that depends on the type,\nand SHARE UPDATE EXCLUSIVE when we modify the type?\n\n> The idea of point #2 is to have a cheap check that 99% satisfies\n> what the spec says about not-null constraints on domains. If we\n> don't do #2, I think we have to fully recheck all the domain's\n> constraints during column assignment. I find that ugly as well\n> as expensive performance-wise. It does mean that if you have\n> some domain constraint that would act to reject NULLs, but it's\n> spelled in some weird way, it won't reject NULLs. I don't find\n> that possibility compelling enough to justify the performance hit\n> of recomputing every constraint just in case it acts like that.\n\nMakes sense.\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n\n", "msg_date": "Tue, 24 Oct 2023 01:27:44 +0200", "msg_from": "Matthias van de Meent <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL domains and NOT NULL constraint" }, { "msg_contents": "Matthias van de Meent <[email protected]> writes:\n> On Mon, 23 Oct 2023, 19:34 Tom Lane, <[email protected]> wrote:\n>> After ruminating on this for awhile, here's a straw-man proposal:\n>> ...\n\n> How does this work w.r.t. concurrently created tables that contain the\n> domain?\n\nIt wouldn't change that at all I think. I had noticed that we'd\nprobably need to tweak validateDomainConstraint() to ensure it applies\nthe same semantics that INSERT/UPDATE do --- although with Isaac's\nidea to enable better tracking of which constraints will fail on NULL,\nmaybe just a blind application of the constraint expression will still\nbe close enough.\n\nI agree that concurrent transactions can create violations of the new\nconstraint, but (a) that's true now, (b) I have no good ideas about\nhow to improve it, and (c) it seems like an independent problem.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 23 Oct 2023 21:02:28 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL domains and NOT NULL constraint" } ]