[
{
"msg_contents": "I have noticed, if query comes from PostgreSQL JDBC Driver, then query_id\nis not present in pg_stat_activity. Erik Wienhold figured out that reason\ncan be in extended query protocol (\nhttps://www.postgresql.org/message-id/[email protected]\n)\nMy question is, is it expected or is it a bug: if extended query protocol\nthen no query_id in pg_stat_activity for running query.\n\nbr\nKaido",
"msg_date": "Mon, 12 Jun 2023 21:03:24 +0300",
"msg_from": "kaido vaikla <[email protected]>",
"msg_from_op": true,
"msg_subject": "query_id, pg_stat_activity, extended query protocol"
},
{
"msg_contents": "On Mon, Jun 12, 2023 at 09:03:24PM +0300, kaido vaikla wrote:\n> I have noticed, if query comes from PostgreSQL JDBC Driver, then query_id\n> is not present in pg_stat_activity. Erik Wienhold figured out that reason\n> can be in extended query protocol (\n> https://www.postgresql.org/message-id/[email protected]\n> )\n> My question is, is it expected or is it a bug: if extended query protocol\n> then no query_id in pg_stat_activity for running query.\n\nWell, you could say a bit of both, I guess. The query ID is compiled\nand stored in backend entries only after parse analysis, which is not\nsomething that would happen when using the execution phase of the\nextended query protocol, though it should be possible to access to the\nQuery nodes in the cached plans and their assigned query IDs.\n\nFWIW, I'd like to think that we could improve the situation, requiring\na mix of calling pgstat_report_query_id() while feeding on some query\nIDs retrieved from CachedPlanSource->query_list. I have not in\ndetails looked at how much could be achieved, TBH.\n--\nMichael",
"msg_date": "Tue, 13 Jun 2023 09:16:07 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query_id, pg_stat_activity, extended query protocol"
},
{
"msg_contents": "Thnx.\nbr\nKaido\n\nOn Tue, 13 Jun 2023 at 03:16, Michael Paquier <[email protected]> wrote:\n\n> On Mon, Jun 12, 2023 at 09:03:24PM +0300, kaido vaikla wrote:\n> > I have noticed, if query comes from PostgreSQL JDBC Driver, then query_id\n> > is not present in pg_stat_activity. Erik Wienhold figured out that\n> reason\n> > can be in extended query protocol (\n> >\n> https://www.postgresql.org/message-id/[email protected]\n> > )\n> > My question is, is it expected or is it a bug: if extended query protocol\n> > then no query_id in pg_stat_activity for running query.\n>\n> Well, you could say a bit of both, I guess. The query ID is compiled\n> and stored in backend entries only after parse analysis, which is not\n> something that would happen when using the execution phase of the\n> extended query protocol, though it should be possible to access to the\n> Query nodes in the cached plans and their assigned query IDs.\n>\n> FWIW, I'd like to think that we could improve the situation, requiring\n> a mix of calling pgstat_report_query_id() while feeding on some query\n> IDs retrieved from CachedPlanSource->query_list. I have not in\n> details looked at how much could be achieved, TBH.\n> --\n> Michael\n>",
"msg_date": "Tue, 13 Jun 2023 09:16:08 +0300",
"msg_from": "kaido vaikla <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: query_id, pg_stat_activity, extended query protocol"
},
{
"msg_contents": ">\n>\n>>\n>> FWIW, I'd like to think that we could improve the situation, requiring\n>> a mix of calling pgstat_report_query_id() while feeding on some query\n>> IDs retrieved from CachedPlanSource->query_list. I have not in\n>> details looked at how much could be achieved, TBH.\n>>\n>\nThis just cropped up as a pgjdbc github issue. Seems like something that\nshould be addressed.\n\nDave",
"msg_date": "Wed, 20 Mar 2024 09:07:34 -0400",
"msg_from": "Dave Cramer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query_id, pg_stat_activity, extended query protocol"
},
{
"msg_contents": "> FWIW, I'd like to think that we could improve the situation, requiring\r\n> a mix of calling pgstat_report_query_id() while feeding on some query\r\n> IDs retrieved from CachedPlanSource->query_list. I have not in\r\n> details looked at how much could be achieved, TBH.\r\n\r\nI was dealing with this today and found this thread. I spent some time\r\nlooking at possible solutions.\r\n\r\nIn the flow of extended query protocol, the exec_parse_message \r\nreports the queryId, but subsequent calls to exec_bind_message\r\nand exec_execute_message reset the queryId when calling\r\npgstat_report_activity(STATE_RUNNING,..) as you can see below.\r\n \r\n /*\r\n * If a new query is started, we reset the query identifier as it'll only\r\n * be known after parse analysis, to avoid reporting last query's\r\n * identifier.\r\n */\r\n if (state == STATE_RUNNING)\r\n beentry->st_query_id = UINT64CONST(0);\r\n\r\n\r\nSo, I think the simple answer is something like the below. \r\nInside exec_bind_message and exec_execute_message,\r\nthe query_id should be reported after pg_report_activity. \r\n\r\ndiff --git a/src/backend/tcop/postgres.c b/src/backend/tcop/postgres.c\r\nindex 76f48b13d2..7ec2df91d5 100644\r\n--- a/src/backend/tcop/postgres.c\r\n+++ b/src/backend/tcop/postgres.c\r\n@@ -1678,6 +1678,7 @@ exec_bind_message(StringInfo input_message)\r\n debug_query_string = psrc->query_string;\r\n \r\n pgstat_report_activity(STATE_RUNNING, psrc->query_string);\r\n+ pgstat_report_query_id(linitial_node(Query, psrc->query_list)->queryId, true);\r\n \r\n set_ps_display(\"BIND\");\r\n \r\n@@ -2146,6 +2147,7 @@ exec_execute_message(const char *portal_name, long max_rows)\r\n debug_query_string = sourceText;\r\n \r\n pgstat_report_activity(STATE_RUNNING, sourceText);\r\n+ pgstat_report_query_id(portal->queryDesc->plannedstmt->queryId, true);\r\n \r\n cmdtagname = GetCommandTagNameAndLen(portal->commandTag, &cmdtaglen);\r\n\r\n\r\nthoughts?\r\n\r\n\r\nRegards,\r\n\r\nSami Imseih\r\nAmazon Web Services (AWS)\r\n\r\n",
"msg_date": "Tue, 23 Apr 2024 04:16:29 +0000",
"msg_from": "\"Imseih (AWS), Sami\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query_id, pg_stat_activity, extended query protocol"
},
{
"msg_contents": "On 23/4/2024 11:16, Imseih (AWS), Sami wrote:\n>> FWIW, I'd like to think that we could improve the situation, requiring\n>> a mix of calling pgstat_report_query_id() while feeding on some query\n>> IDs retrieved from CachedPlanSource->query_list. I have not in\n>> details looked at how much could be achieved, TBH.\n> \n> I was dealing with this today and found this thread. I spent some time\n> looking at possible solutions.\n> \n> In the flow of extended query protocol, the exec_parse_message\n> reports the queryId, but subsequent calls to exec_bind_message\n> and exec_execute_message reset the queryId when calling\n> pgstat_report_activity(STATE_RUNNING,..) as you can see below.\n> \n> /*\n> * If a new query is started, we reset the query identifier as it'll only\n> * be known after parse analysis, to avoid reporting last query's\n> * identifier.\n> */\n> if (state == STATE_RUNNING)\n> beentry->st_query_id = UINT64CONST(0);\n> \n> \n> So, I think the simple answer is something like the below.\n> Inside exec_bind_message and exec_execute_message,\n> the query_id should be reported after pg_report_activity.\n> \n> diff --git a/src/backend/tcop/postgres.c b/src/backend/tcop/postgres.c\n> index 76f48b13d2..7ec2df91d5 100644\n> --- a/src/backend/tcop/postgres.c\n> +++ b/src/backend/tcop/postgres.c\n> @@ -1678,6 +1678,7 @@ exec_bind_message(StringInfo input_message)\n> debug_query_string = psrc->query_string;\n> \n> pgstat_report_activity(STATE_RUNNING, psrc->query_string);\n> + pgstat_report_query_id(linitial_node(Query, psrc->query_list)->queryId, true);\n> \n> set_ps_display(\"BIND\");\n> \n> @@ -2146,6 +2147,7 @@ exec_execute_message(const char *portal_name, long max_rows)\n> debug_query_string = sourceText;\n> \n> pgstat_report_activity(STATE_RUNNING, sourceText);\n> + pgstat_report_query_id(portal->queryDesc->plannedstmt->queryId, true);\n> \n> cmdtagname = GetCommandTagNameAndLen(portal->commandTag, &cmdtaglen);\n> \n> \n> thoughts?\nIn exec_bind_message, how can you be sure that queryId exists in \nquery_list before the call of GetCachedPlan(), which will validate and \nlock the plan? What if some OIDs were altered in the middle?\n\n-- \nregards, Andrei Lepikhov\n\n\n\n",
"msg_date": "Tue, 23 Apr 2024 11:42:41 +0700",
"msg_from": "Andrei Lepikhov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query_id, pg_stat_activity, extended query protocol"
},
{
"msg_contents": "On Tue, Apr 23, 2024 at 11:42:41AM +0700, Andrei Lepikhov wrote:\n> On 23/4/2024 11:16, Imseih (AWS), Sami wrote:\n>> + pgstat_report_query_id(linitial_node(Query, psrc->query_list)->queryId, true);\n>> set_ps_display(\"BIND\");\n>> @@ -2146,6 +2147,7 @@ exec_execute_message(const char *portal_name, long max_rows)\n>> debug_query_string = sourceText;\n>> pgstat_report_activity(STATE_RUNNING, sourceText);\n>> + pgstat_report_query_id(portal->queryDesc->plannedstmt->queryId, true);\n>> cmdtagname = GetCommandTagNameAndLen(portal->commandTag, &cmdtaglen);\n>\n> In exec_bind_message, how can you be sure that queryId exists in query_list\n> before the call of GetCachedPlan(), which will validate and lock the plan?\n> What if some OIDs were altered in the middle?\n\nI am also a bit surprised with the choice of using the first Query\navailable in the list for the ID, FWIW.\n\nDid you consider using \\bind to show how this behaves in a regression\ntest?\n--\nMichael",
"msg_date": "Tue, 23 Apr 2024 14:49:43 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query_id, pg_stat_activity, extended query protocol"
},
{
"msg_contents": "On 4/23/24 12:49, Michael Paquier wrote:\n> On Tue, Apr 23, 2024 at 11:42:41AM +0700, Andrei Lepikhov wrote:\n>> On 23/4/2024 11:16, Imseih (AWS), Sami wrote:\n>>> + pgstat_report_query_id(linitial_node(Query, psrc->query_list)->queryId, true);\n>>> set_ps_display(\"BIND\");\n>>> @@ -2146,6 +2147,7 @@ exec_execute_message(const char *portal_name, long max_rows)\n>>> debug_query_string = sourceText;\n>>> pgstat_report_activity(STATE_RUNNING, sourceText);\n>>> + pgstat_report_query_id(portal->queryDesc->plannedstmt->queryId, true);\n>>> cmdtagname = GetCommandTagNameAndLen(portal->commandTag, &cmdtaglen);\n>>\n>> In exec_bind_message, how can you be sure that queryId exists in query_list\n>> before the call of GetCachedPlan(), which will validate and lock the plan?\n>> What if some OIDs were altered in the middle?\n> \n> I am also a bit surprised with the choice of using the first Query\n> available in the list for the ID, FWIW.\n> \n> Did you consider using \\bind to show how this behaves in a regression\n> test?\nI'm not sure how to invent a test based on the \\bind command - we need \nsome pause in the middle.\nBut simplistic case with a prepared statement shows how the value of \nqueryId can be changed if you don't acquire all the objects needed for \nthe execution:\n\nCREATE TABLE test();\nPREPARE name AS SELECT * FROM test;\nEXPLAIN (ANALYSE, VERBOSE, COSTS OFF) EXECUTE name;\nDROP TABLE test;\nCREATE TABLE test();\nEXPLAIN (ANALYSE, VERBOSE, COSTS OFF) EXECUTE name;\n\n/*\n QUERY PLAN\n-------------------------------------------------------------------\n Seq Scan on public.test (actual time=0.002..0.004 rows=0 loops=1)\n Query Identifier: 6750745711909650694\n\n QUERY PLAN\n-------------------------------------------------------------------\n Seq Scan on public.test (actual time=0.004..0.004 rows=0 loops=1)\n Query Identifier: -2597546769858730762\n*/\n\nWe have different objects which can be changed - I just have invented \nthe most trivial example to discuss.\n\n-- \nregards, Andrei Lepikhov\n\n\n\n",
"msg_date": "Tue, 23 Apr 2024 16:37:37 +0700",
"msg_from": "Andrei Lepikhov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query_id, pg_stat_activity, extended query protocol"
},
{
"msg_contents": "> I am also a bit surprised with the choice of using the first Query\r\n> available in the list for the ID, FWIW.\r\n\r\n\r\nIIUC, the query trees returned from QueryRewrite\r\nwill all have the same queryId, so it appears valid to \r\nuse the queryId from the first tree in the list. Right?\r\n\r\nHere is an example I was working with that includes user-defined rules\r\nthat has a list with more than 1 tree.\r\n\r\n\r\npostgres=# explain (verbose, generic_plan) insert into mytab values ($1) RETURNING pg_sleep($1), id ;\r\nQUERY PLAN \r\n-----------------------------------------------------------\r\nInsert on public.mytab (cost=0.00..0.01 rows=1 width=4)\r\nOutput: pg_sleep(($1)::double precision), mytab.id\r\n-> Result (cost=0.00..0.01 rows=1 width=4)\r\nOutput: $1\r\nQuery Identifier: 3703848357297795425\r\n\r\n\r\nInsert on public.mytab2 (cost=0.00..0.01 rows=0 width=0)\r\n-> Result (cost=0.00..0.01 rows=1 width=4)\r\nOutput: $1\r\nQuery Identifier: 3703848357297795425\r\n\r\n\r\nInsert on public.mytab3 (cost=0.00..0.01 rows=0 width=0)\r\n-> Result (cost=0.00..0.01 rows=1 width=4)\r\nOutput: $1\r\nQuery Identifier: 3703848357297795425\r\n\r\n\r\nInsert on public.mytab4 (cost=0.00..0.01 rows=0 width=0)\r\n-> Result (cost=0.00..0.01 rows=1 width=4)\r\nOutput: $1\r\nQuery Identifier: 3703848357297795425\r\n(20 rows)\r\n\r\n\r\n\r\n> Did you consider using \\bind to show how this behaves in a regression\r\n> test?\r\n\r\n\r\nYes, this is precisely how I tested. Without the patch, I could not\r\nsee a queryId after 9 seconds of a pg_sleep, but with the patch it \r\nappears. See the test below.\r\n\r\n\r\n## test query\r\nselect pg_sleep($1) \\bind 30\r\n\r\n\r\n## unpatched\r\npostgres=# select \r\nquery_id, \r\nquery, \r\nnow()-query_start query_duration, \r\nstate \r\nfrom pg_stat_activity where pid <> pg_backend_pid()\r\nand state = 'active';\r\nquery_id | query | query_duration | state \r\n----------+----------------------+-----------------+--------\r\n | select pg_sleep($1) +| 00:00:08.604845 | active\r\n | ; | | \r\n(1 row)\r\n\r\n## patched\r\n\r\npostgres=# truncate table large;^C\r\npostgres=# select \r\n query_id, \r\n query, \r\n now()-query_start query_duration, \r\n state \r\nfrom pg_stat_activity where pid <> pg_backend_pid()\r\nand state = 'active';\r\n query_id | query | query_duration | state \r\n---------------------+----------------------+----------------+--------\r\n 2433215470630378210 | select pg_sleep($1) +| 00:00:09.6881 | active\r\n | ; | | \r\n(1 row)\r\n\r\n\r\nFor exec_execute_message, I realized that to report queryId for\r\nUtility and non-utility statements, we need to report the queryId \r\ninside the portal routines where PlannedStmt contains the queryId.\r\n\r\nAttached is the first real attempt at the fix. \r\n\r\nRegards,\r\n\r\n\r\nSami",
"msg_date": "Wed, 24 Apr 2024 01:40:45 +0000",
"msg_from": "\"Imseih (AWS), Sami\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query_id, pg_stat_activity, extended query protocol"
},
{
"msg_contents": "> But simplistic case with a prepared statement shows how the value of\r\n> queryId can be changed if you don't acquire all the objects needed for\r\n> the execution:\r\n\r\n\r\n> CREATE TABLE test();\r\n> PREPARE name AS SELECT * FROM test;\r\n> EXPLAIN (ANALYSE, VERBOSE, COSTS OFF) EXECUTE name;\r\n> DROP TABLE test;\r\n> CREATE TABLE test();\r\n> EXPLAIN (ANALYSE, VERBOSE, COSTS OFF) EXECUTE name;\r\n\r\nHmm, you raise a good point. Isn't this a fundamental problem\r\nwith prepared statements? If there is DDL on the\r\nrelations of the prepared statement query, shouldn't the prepared\r\nstatement be considered invalid at that point and raise an error\r\nto the user?\r\n\r\nRegards,\r\n\r\nSami \r\n\r\n",
"msg_date": "Sat, 27 Apr 2024 13:54:57 +0000",
"msg_from": "\"Imseih (AWS), Sami\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query_id, pg_stat_activity, extended query protocol"
},
{
"msg_contents": "On Sat, Apr 27, 2024 at 6:55 AM Imseih (AWS), Sami <[email protected]>\nwrote:\n\n>\n> Hmm, you raise a good point. Isn't this a fundamental problem\n> with prepared statements? If there is DDL on the\n> relations of the prepared statement query, shouldn't the prepared\n> statement be considered invalid at that point and raise an error\n> to the user?\n>\n>\nWe choose a arguably more user-friendly option:\n\nhttps://www.postgresql.org/docs/current/sql-prepare.html\n\n\"\"\"\nAlthough the main point of a prepared statement is to avoid repeated parse\nanalysis and planning of the statement, PostgreSQL will force re-analysis\nand re-planning of the statement before using it whenever database objects\nused in the statement have undergone definitional (DDL) changes or their\nplanner statistics have been updated since the previous use of the prepared\nstatement.\n\"\"\"\n\nDavid J.",
"msg_date": "Sat, 27 Apr 2024 07:18:57 -0700",
"msg_from": "\"David G. Johnston\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query_id, pg_stat_activity, extended query protocol"
},
{
"msg_contents": "> We choose a arguably more user-friendly option:\r\n\r\n> https://www.postgresql.org/docs/current/sql-prepare.html\r\n\r\nThanks for pointing this out!\r\n\r\nRegards,\r\n\r\nSami",
"msg_date": "Sat, 27 Apr 2024 14:57:44 +0000",
"msg_from": "\"Imseih (AWS), Sami\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query_id, pg_stat_activity, extended query protocol"
},
{
"msg_contents": ">> But simplistic case with a prepared statement shows how the value of\r\n>> queryId can be changed if you don't acquire all the objects needed for\r\n>> the execution:\r\n\r\n\r\n\r\n\r\n>> CREATE TABLE test();\r\n>> PREPARE name AS SELECT * FROM test;\r\n>> EXPLAIN (ANALYSE, VERBOSE, COSTS OFF) EXECUTE name;\r\n>> DROP TABLE test;\r\n>> CREATE TABLE test();\r\n>> EXPLAIN (ANALYSE, VERBOSE, COSTS OFF) EXECUTE name;\r\n\r\n\r\n> Hmm, you raise a good point. Isn't this a fundamental problem\r\n> with prepared statements? If there is DDL on the\r\n> relations of the prepared statement query, shouldn't the prepared\r\n> statement be considered invalid at that point and raise an error\r\n> to the user?\r\n\r\nI tested v1 thoroughly.\r\n\r\nUsing the attached JDBC script for testing, I added some logging of the queryId \r\nbeing reported by the patch and added a breakpoint after sync [1] which at that \r\npoint the locks are released on the table. I then proceeded to drop and recreate the table\r\nand observed that the first bind after recreating the table still reports the\r\nold queryId but the execute reports the correct queryId. This is because\r\nthe bind still has not had a chance to re-parse and re-plan after the\r\ncache invalidation.\r\n\r\n\r\n2024-04-27 13:51:15.757 CDT [43483] LOG: duration: 21322.475 ms execute S_1: select pg_sleep(10)\r\n2024-04-27 13:51:21.591 CDT [43483] LOG: duration: 0.834 ms parse S_2: select from tab1 where id = $1\r\n2024-04-27 13:51:21.591 CDT [43483] LOG: query_id = -192969736922694368\r\n2024-04-27 13:51:21.592 CDT [43483] LOG: duration: 0.729 ms bind S_2: select from tab1 where id = $1\r\n2024-04-27 13:51:21.592 CDT [43483] LOG: query_id = -192969736922694368\r\n2024-04-27 13:51:21.592 CDT [43483] LOG: duration: 0.032 ms execute S_2: select from tab1 where id = $1\r\n2024-04-27 13:51:32.501 CDT [43483] LOG: query_id = -192969736922694368\r\n2024-04-27 13:51:32.502 CDT [43483] LOG: duration: 0.342 ms bind S_2: select from tab1 where id = $1\r\n2024-04-27 13:51:32.502 CDT [43483] LOG: query_id = -192969736922694368\r\n2024-04-27 13:51:32.502 CDT [43483] LOG: duration: 0.067 ms execute S_2: select from tab1 where id = $1\r\n2024-04-27 13:51:42.613 CDT [43526] LOG: query_id = -4766379021163149612\r\n-- recreate the tables\r\n2024-04-27 13:51:42.621 CDT [43526] LOG: duration: 8.488 ms statement: drop table if exists tab1;\r\n2024-04-27 13:51:42.621 CDT [43526] LOG: query_id = 7875284141628316369\r\n2024-04-27 13:51:42.625 CDT [43526] LOG: duration: 3.364 ms statement: create table tab1 ( id int );\r\n2024-04-27 13:51:42.625 CDT [43526] LOG: query_id = 2967282624086800441\r\n2024-04-27 13:51:42.626 CDT [43526] LOG: duration: 0.936 ms statement: insert into tab1 values (1);\r\n\r\n-- this reports the old query_id\r\n2024-04-27 13:51:45.058 CDT [43483] LOG: query_id = -192969736922694368 \r\n\r\n2024-04-27 13:51:45.059 CDT [43483] LOG: duration: 0.913 ms bind S_2: select from tab1 where id = $1\r\n2024-04-27 13:51:45.059 CDT [43483] LOG: query_id = 3010297048333693297\r\n2024-04-27 13:51:45.059 CDT [43483] LOG: duration: 0.096 ms execute S_2: select from tab1 where id = $1\r\n2024-04-27 13:51:46.777 CDT [43483] LOG: query_id = 3010297048333693297\r\n2024-04-27 13:51:46.777 CDT [43483] LOG: duration: 0.108 ms bind S_2: select from tab1 where id = $1\r\n2024-04-27 13:51:46.777 CDT [43483] LOG: query_id = 3010297048333693297\r\n2024-04-27 13:51:46.777 CDT [43483] LOG: duration: 0.024 ms execute S_2: select from tab1 where id = $1\r\n\r\nThe easy answer is to not report queryId during the bind message, but I will look\r\nat what else can be done here as it's good to have a queryId reported in this scenario\r\nfor cases there are long planning times and we rather not have those missed in \r\npg_stat_activity sampling.\r\n\r\n\r\n[1] https://github.com/postgres/postgres/blob/master/src/backend/tcop/postgres.c#L4877\r\n\r\n\r\nRegards,\r\n\r\nSami",
"msg_date": "Sat, 27 Apr 2024 19:08:41 +0000",
"msg_from": "\"Imseih (AWS), Sami\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query_id, pg_stat_activity, extended query protocol"
},
{
"msg_contents": "On 27/4/2024 20:54, Imseih (AWS), Sami wrote:\n>> But simplistic case with a prepared statement shows how the value of\n>> queryId can be changed if you don't acquire all the objects needed for\n>> the execution:\n> \n> \n>> CREATE TABLE test();\n>> PREPARE name AS SELECT * FROM test;\n>> EXPLAIN (ANALYSE, VERBOSE, COSTS OFF) EXECUTE name;\n>> DROP TABLE test;\n>> CREATE TABLE test();\n>> EXPLAIN (ANALYSE, VERBOSE, COSTS OFF) EXECUTE name;\n> \n> Hmm, you raise a good point. Isn't this a fundamental problem\n> with prepared statements? If there is DDL on the\n> relations of the prepared statement query, shouldn't the prepared\n> statement be considered invalid at that point and raise an error\n> to the user?\nI don't think so. It may be any object, even stored procedure, that can \nbe changed. IMO, the right option here is to report zero (like the \nundefined value of queryId) until the end of the parsing stage.\n\n-- \nregards, Andrei Lepikhov\n\n\n\n",
"msg_date": "Sun, 28 Apr 2024 08:22:30 +0700",
"msg_from": "Andrei Lepikhov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query_id, pg_stat_activity, extended query protocol"
},
{
"msg_contents": "Here is a new rev of the patch which deals with the scenario\r\nmentioned by Andrei [1] in which the queryId may change\r\ndue to a cached query invalidation.\r\n\r\n\r\n[1] https://www.postgresql.org/message-id/724348C9-8023-41BC-895E-80634E79A538%40amazon.com\r\n\r\nRegards,\r\n\r\nSami",
"msg_date": "Wed, 1 May 2024 03:07:06 +0000",
"msg_from": "\"Imseih (AWS), Sami\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query_id, pg_stat_activity, extended query protocol"
},
{
"msg_contents": "On 5/1/24 10:07, Imseih (AWS), Sami wrote:\n> Here is a new rev of the patch which deals with the scenario\n> mentioned by Andrei [1] in which the queryId may change\n> due to a cached query invalidation.\n> \n> \n> [1] https://www.postgresql.org/message-id/724348C9-8023-41BC-895E-80634E79A538%40amazon.com\nI discovered the current state of queryId reporting and found that it \nmay be unlogical: Postgres resets queryId right before query execution \nin simple protocol and doesn't reset it at all in extended protocol and \nother ways to execute queries.\nI think we should generally report it when the backend executes a job \nrelated to the query with that queryId. This means it would reset the \nqueryId at the end of the query execution.\nHowever, the process of setting up the queryId is more complex. Should \nwe set it at the beginning of query execution? This seems logical, but \nwhat about the planning process? If an extension plans a query without \nthe intention to execute it for speculative reasons, should we still \nshow the queryId? Perhaps we should reset the state right after planning \nto accurately reflect the current queryId.\nSee in the attachment some sketch for that - it needs to add queryId \nreset on abortion.\n\n-- \nregards, Andrei Lepikhov",
"msg_date": "Thu, 9 May 2024 12:22:33 +0700",
"msg_from": "Andrei Lepikhov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query_id, pg_stat_activity, extended query protocol"
},
{
"msg_contents": "> I discovered the current state of queryId reporting and found that it\r\n> may be unlogical: Postgres resets queryId right before query execution\r\n> in simple protocol and doesn't reset it at all in extended protocol and\r\n> other ways to execute queries.\r\n\r\nIn exec_parse_message, exec_bind_message and exec_execute_message,\r\nthe queryId is reset via pgstat_report_activity\r\n\r\n> I think we should generally report it when the backend executes a job\r\n> related to the query with that queryId. This means it would reset the\r\n> queryId at the end of the query execution.\r\n\r\nWhen the query completes execution and the session goes into a state \r\nother than \"active\", both the query text and the queryId should be of the \r\nlast executed statement. This is the documented behavior, and I believe\r\nit's the correct behavior.\r\n\r\nIf we reset queryId at the end of execution, this behavior breaks. Right?\r\n\r\n> This seems logical, but\r\n> what about the planning process? If an extension plans a query without\r\n> the intention to execute it for speculative reasons, should we still\r\n> show the queryId? Perhaps we should reset the state right after planning\r\n> to accurately reflect the current queryId.\r\n\r\nI think you are suggesting that during planning, the queryId\r\nof the current statement being planned should not be reported.\r\n \r\nIf my understanding is correct, I don't think that is a good idea. Tools that \r\nsnasphot pg_stat_activity will not be able to account for the queryId during\r\nplanning. This could mean that certain load on the database cannot be tied\r\nback to a specific queryId.\r\n\r\nRegards,\r\n\r\nSami\r\n\r\n\r\n\r\n",
"msg_date": "Wed, 15 May 2024 03:24:05 +0000",
"msg_from": "\"Imseih (AWS), Sami\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query_id, pg_stat_activity, extended query protocol"
},
{
"msg_contents": "On Wed, May 15, 2024 at 03:24:05AM +0000, Imseih (AWS), Sami wrote:\n>> I think we should generally report it when the backend executes a job\n>> related to the query with that queryId. This means it would reset the\n>> queryId at the end of the query execution.\n> \n> When the query completes execution and the session goes into a state \n> other than \"active\", both the query text and the queryId should be of the \n> last executed statement. This is the documented behavior, and I believe\n> it's the correct behavior.\n> \n> If we reset queryId at the end of execution, this behavior breaks. Right?\n\nIdle sessions keep track of the last query string run, hence being\nconsistent in pg_stat_activity and report its query ID is user\nfriendly. Resetting it while keeping the string is less consistent.\nIt's been this way for years, so I'd rather let it be this way.\n\n>> This seems logical, but\n>> what about the planning process? If an extension plans a query without\n>> the intention to execute it for speculative reasons, should we still\n>> show the queryId? Perhaps we should reset the state right after planning\n>> to accurately reflect the current queryId.\n>\n> I think you are suggesting that during planning, the queryId\n> of the current statement being planned should not be reported.\n> \n> If my understanding is correct, I don't think that is a good idea. Tools that \n> snasphot pg_stat_activity will not be able to account for the queryId during\n> planning. This could mean that certain load on the database cannot be tied\n> back to a specific queryId.\n\nI'm -1 with the point of resetting the query ID based on what the\npatch does, even if it remains available in the hooks.\npg_stat_activity is one thing, but you would also reduce the coverage\nof log_line_prefix with %Q. And that can provide really useful\ndebugging information in the code paths where the query ID would be\nreset as an effect of the proposed patch.\n\nThe patch to report the query ID of a planned query when running a\nquery through a PortalRunSelect() feels more intuitive in the\ninformation it reports.\n--\nMichael",
"msg_date": "Wed, 15 May 2024 14:09:47 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query_id, pg_stat_activity, extended query protocol"
},
{
"msg_contents": "On 15/5/2024 12:09, Michael Paquier wrote:\n> On Wed, May 15, 2024 at 03:24:05AM +0000, Imseih (AWS), Sami wrote:\n>>> I think we should generally report it when the backend executes a job\n>>> related to the query with that queryId. This means it would reset the\n>>> queryId at the end of the query execution.\n>>\n>> When the query completes execution and the session goes into a state\n>> other than \"active\", both the query text and the queryId should be of the\n>> last executed statement. This is the documented behavior, and I believe\n>> it's the correct behavior.\n>>\n>> If we reset queryId at the end of execution, this behavior breaks. Right?\n> \n> Idle sessions keep track of the last query string run, hence being\n> consistent in pg_stat_activity and report its query ID is user\n> friendly. Resetting it while keeping the string is less consistent.\n> It's been this way for years, so I'd rather let it be this way.\nOkay, that's what I precisely wanted to understand: queryId doesn't have \nsemantics to show the job that consumes resources right now—it is mostly \nabout convenience to know that the backend processes nothing except \n(probably) this query.\n\n-- \nregards, Andrei Lepikhov\n\n\n\n",
"msg_date": "Wed, 15 May 2024 20:29:00 +0700",
"msg_from": "Andrei Lepikhov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query_id, pg_stat_activity, extended query protocol"
},
{
"msg_contents": "> Okay, that's what I precisely wanted to understand: queryId doesn't have\r\n> semantics to show the job that consumes resources right now—it is mostly\r\n> about convenience to know that the backend processes nothing except\r\n> (probably) this query.\r\n\r\nIt may be a good idea to expose in pg_stat_activity or a\r\nsupplemental activity view information about the current state of the\r\nquery processing. i.e. Is it parsing, planning or executing a query or\r\nis it processing a nested query. \r\n\r\nI can see this being useful and perhaps could be taken up in a \r\nseparate thread.\r\n\r\nRegards,\r\n\r\nSami\r\n\r\n\r\n\r\n\r\n",
"msg_date": "Wed, 15 May 2024 18:36:23 +0000",
"msg_from": "\"Imseih (AWS), Sami\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query_id, pg_stat_activity, extended query protocol"
},
{
"msg_contents": "On Wed, May 15, 2024 at 06:36:23PM +0000, Imseih (AWS), Sami wrote:\n>> Okay, that's what I precisely wanted to understand: queryId doesn't have\n>> semantics to show the job that consumes resources right now—it is mostly\n>> about convenience to know that the backend processes nothing except\n>> (probably) this query.\n> \n> It may be a good idea to expose in pg_stat_activity or a\n> supplemental activity view information about the current state of the\n> query processing. i.e. Is it parsing, planning or executing a query or\n> is it processing a nested query. \n\npg_stat_activity is already quite bloated with attributes, and I'd\nsuspect that there are more properties in a query that would be\ninteresting to track down at a thinner level as long as it mirrors a\ndynamic activity of the query. Perhaps a separate catalog like a\npg_stat_query would make sense, moving query_start there as well?\nCatalog breakages are never fun, still always happen because the\nreasons behind a backward-incompatible change make the picture better\nin the long-term for users.\n--\nMichael",
"msg_date": "Thu, 16 May 2024 10:02:54 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query_id, pg_stat_activity, extended query protocol"
},
{
"msg_contents": "On 15.05.2024 10:24, Imseih (AWS), Sami wrote:\n>> I discovered the current state of queryId reporting and found that it\n>> may be unlogical: Postgres resets queryId right before query execution\n>> in simple protocol and doesn't reset it at all in extended protocol and\n>> other ways to execute queries.\n> \n> In exec_parse_message, exec_bind_message and exec_execute_message,\n> the queryId is reset via pgstat_report_activity\n> \n>> I think we should generally report it when the backend executes a job\n>> related to the query with that queryId. This means it would reset the\n>> queryId at the end of the query execution.\n> \n> When the query completes execution and the session goes into a state\n> other than \"active\", both the query text and the queryId should be of the\n> last executed statement. This is the documented behavior, and I believe\n> it's the correct behavior.\nI discovered this case a bit.\nAs I can see, the origin of the problem is that the exec_execute_message \nreport STATE_RUNNING, although ExecutorStart was called in the \nexec_bind_message routine beforehand.\nI'm unsure if it needs to call ExecutorStart in the bind code. But if we \ndon't change the current logic, would it make more sense to move \npgstat_report_query_id to the ExecutorRun routine?\n\n-- \nregards, Andrei Lepikhov\n\n\n\n",
"msg_date": "Thu, 16 May 2024 12:33:27 +0700",
"msg_from": "Andrei Lepikhov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query_id, pg_stat_activity, extended query protocol"
},
{
"msg_contents": "> I'm unsure if it needs to call ExecutorStart in the bind code. But if we\r\n> don't change the current logic, would it make more sense to move\r\n> pgstat_report_query_id to the ExecutorRun routine?\r\n\r\nI initially thought about that, but for utility statements (CTAS, etc.) being \r\nexecuted with extended query protocol, we will still not advertise the queryId \r\nas we should. This is why I chose to set the queryId in PortalRunSelect and\r\nPortalRunMulti in v2 of the patch [1].\r\n\r\nWe can advertise the queryId inside ExecutorRun instead of\r\nPortalRunSelect as the patch does, but we will still need to advertise \r\nthe queryId inside PortalRunMulti.\r\n\r\n[1] https://www.postgresql.org/message-id/FAB6AEA1-AB5E-4DFF-9A2E-BB320E6C3DF1%40amazon.com\r\n\r\n\r\nRegards,\r\n\r\nSami\r\n\r\n\r\n\r\n\r\n\r\n",
"msg_date": "Thu, 16 May 2024 20:34:54 +0000",
"msg_from": "\"Imseih (AWS), Sami\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query_id, pg_stat_activity, extended query protocol"
},
{
"msg_contents": "Hi,\n\nWouldn't it be enough to call pgstat_report_query_id in ExecutorRun\nand ProcessUtility? With those changes [1], both normal statements and\nutility statements called through extended protocol will correctly\nreport the query_id.\n\n-- Test utility statement with extended protocol\nshow all \\bind \\g\n\n-- Check reported query_id\nselect query, query_id from pg_stat_activity where\napplication_name ='psql' and pid!=pg_backend_pid();\n query | query_id\n-----------+---------------------\n show all | -866221123969716490\n\n[1] https://github.com/bonnefoa/postgres/commit/bf4b332d7b481549c6d9cfa70db51e39a305b9b2\n\nRegards,\nAnthonin\n\n\n",
"msg_date": "Wed, 17 Jul 2024 11:32:49 +0200",
"msg_from": "Anthonin Bonnefoy <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query_id, pg_stat_activity, extended query protocol"
},
{
"msg_contents": "On Wed, Jul 17, 2024 at 11:32:49AM +0200, Anthonin Bonnefoy wrote:\n> Wouldn't it be enough to call pgstat_report_query_id in ExecutorRun\n> and ProcessUtility? With those changes [1], both normal statements and\n> utility statements called through extended protocol will correctly\n> report the query_id.\n\nInteresting, and this position is really tempting. By doing so you\nwould force the query ID to be set to the one from the CTAS and\nEXPLAIN, because these would be executed before the inner queries, and\npgstat_report_query_id() with its non-force option does not overwrite\nwhat would be already set (aka what should be the top-level query ID).\n\nUsing ExecutorRun() feels consistent with the closest thing I've\ntouched in this area lately in 1d477a907e63, because that's the only\ncode path that we are sure to take depending on the portal execution\n(two execution scenarios depending on how rows are retrieved, as far\nas I recall). The comment should be perhaps more consistent with the\nexecutor start counterpart. So I would be OK with that.. The\nlocation before the hook of ProcessUtility is tempting, as it would\ntake care of the case of PortalRunMulti(). However.. Joining with a\npoint from Sami upthread..\n\nThis is still not enough in the case of where we have a holdStore, no?\nThis is the case where we would do *one* ExecutorRun(), followed up by\na scan of the tuplestore in more than one execute message. The v2\nproposed upthread, by positioning a query ID to be set in\nPortalRunSelect(), is still positioning that in two places.\n\nHmm... How about being much more aggressive and just do the whole\nbusiness in exec_execute_message(), just before we do the PortalRun()?\nI mean, that's the source of all our problems, and we know the\nstatements that the portal will work on so we could go through the\nlist, grab the first planned query and set the query ID based on that,\nwithout caring about the portal patterns we would need to think about.\n\n> [1] https://github.com/bonnefoa/postgres/commit/bf4b332d7b481549c6d9cfa70db51e39a305b9b2\n\nOr use the following to download the patch, that I am attaching here:\nhttps://github.com/bonnefoa/postgres/commit/bf4b332d7b481549c6d9cfa70db51e39a305b9b2.patch\n\nPlease attach things to your emails, if your repository disappears for\na reason or another we would lose knowledge in the archives of the\ncommunity lists.\n--\nMichael",
"msg_date": "Thu, 18 Jul 2024 17:56:09 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query_id, pg_stat_activity, extended query protocol"
},
{
"msg_contents": "> On Wed, Jul 17, 2024 at 11:32:49AM +0200, Anthonin Bonnefoy wrote:\n>> Wouldn't it be enough to call pgstat_report_query_id in ExecutorRun\n>> and ProcessUtility? With those changes [1], both normal statements and\n>> utility statements called through extended protocol will correctly\n>> report the query_id.\n> \n> Interesting, and this position is really tempting. By doing so you\n> would force the query ID to be set to the one from the CTAS and\n> EXPLAIN, because these would be executed before the inner queries, and\n> pgstat_report_query_id() with its non-force option does not overwrite\n> what would be already set (aka what should be the top-level query ID).\n> \n> Using ExecutorRun() feels consistent with the closest thing I've\n> touched in this area lately in 1d477a907e63, because that's the only\n> code path that we are sure to take depending on the portal execution\n> (two execution scenarios depending on how rows are retrieved, as far\n> as I recall). The comment should be perhaps more consistent with the\n> executor start counterpart. So I would be OK with that.. The\n> location before the hook of ProcessUtility is tempting, as it would\n> take care of the case of PortalRunMulti(). However.. Joining with a\n> point from Sami upthread..\n> \n> This is still not enough in the case of where we have a holdStore, no?\n> This is the case where we would do *one* ExecutorRun(), followed up by\n> a scan of the tuplestore in more than one execute message. The v2\n> proposed upthread, by positioning a query ID to be set in\n> PortalRunSelect(), is still positioning that in two places.\n\nCorrect, I also don’t think ExecutorRun is enough. Another reason is we should also \nbe setting the queryId during bind, right before planning starts. \nPlanning could have significant impact on the server and I think we better\ntrack the responsible queryId. \n\nI have not tested the holdStore case. IIUC the holdStore deals with fetching a \nWITH HOLD CURSOR. Why would this matter for this conversation?\n\n> Hmm... How about being much more aggressive and just do the whole\n> business in exec_execute_message(), just before we do the PortalRun()?\n> I mean, that's the source of all our problems, and we know the\n> statements that the portal will work on so we could go through the\n> list, grab the first planned query and set the query ID based on that,\n> without caring about the portal patterns we would need to think about.\n\nDoing the work in exec_execute_message makes sense, although maybe\nsetting the queryId after pgstat_report_activity is better because it occurs earlier. \nAlso, we should do the same for exec_bind_message and set the queryId \nright after pgstat_report_activity in this function as well.\n\nWe do have to account for the queryId changing after cache revalidation, so\nwe should still set the queryId inside GetCachedPlan in the case the query\nunderwent re-analysis. This means there is a chance that a queryId set at\nthe start of the exec_bind_message may be different by the time we complete\nthe function, in the case the query revalidation results in a different queryId.\n\nSee the attached v3.\n\nRegards, \n\nSami",
"msg_date": "Tue, 23 Jul 2024 16:00:25 -0500",
"msg_from": "Sami Imseih <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query_id, pg_stat_activity, extended query protocol"
},
{
"msg_contents": "On Tue, Jul 23, 2024 at 04:00:25PM -0500, Sami Imseih wrote:\n> Correct, I also don´t think ExecutorRun is enough. Another reason is we should also \n> be setting the queryId during bind, right before planning starts. \n> Planning could have significant impact on the server and I think we better\n> track the responsible queryId. \n> \n> I have not tested the holdStore case. IIUC the holdStore deals with fetching a \n> WITH HOLD CURSOR. Why would this matter for this conversation?\n\nNot only, see portal.h. This matters for holdable cursors,\nPORTAL_ONE_RETURNING and PORTAL_UTIL_SELECT.\n\n> Doing the work in exec_execute_message makes sense, although maybe\n> setting the queryId after pgstat_report_activity is better because it occurs earlier. \n> Also, we should do the same for exec_bind_message and set the queryId \n> right after pgstat_report_activity in this function as well.\n\nSounds fine by me (still need to check all three patterns).\n\n+ if (list_length(psrc->query_list) > 0)\n+ pgstat_report_query_id(linitial_node(Query, psrc->query_list)->queryId, false);\n\nSomething that slightly worries me is to assume that the first Query\nin the query_list is fetched. Using a foreach() for all three paths\nmay be better, jumping out at the loop when finding a valid query ID.\n\nI have not looked at that entirely in details, and I'd need to check\nif it is possible to use what's here for more predictible tests:\nhttps://www.postgresql.org/message-id/ZqCMCS4HUshUYjGc%40paquier.xyz\n\n> We do have to account for the queryId changing after cache revalidation, so\n> we should still set the queryId inside GetCachedPlan in the case the query\n> underwent re-analysis. This means there is a chance that a queryId set at\n> the start of the exec_bind_message may be different by the time we complete\n> the function, in the case the query revalidation results in a different queryId.\n\nMakes sense to me. I'd rather make that a separate patch, with, if\npossible, its own tests (the case of Andrei with a DROP/CREATE TABLE) .\n--\nMichael",
"msg_date": "Thu, 25 Jul 2024 12:46:30 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query_id, pg_stat_activity, extended query protocol"
},
{
"msg_contents": "On Thu, Jul 18, 2024 at 10:56 AM Michael Paquier <[email protected]> wrote:\n> Please attach things to your emails, if your repository disappears for\n> a reason or another we would lose knowledge in the archives of the\n> community lists.\n\nNoted and thanks for the reminder, I'm still learning about mailing\nlist etiquette.\n\n> I have not looked at that entirely in details, and I'd need to check\n> if it is possible to use what's here for more predictible tests:\n> https://www.postgresql.org/message-id/ZqCMCS4HUshUYjGc%40paquier.xyz\n\nFor the tests, there are limited possibilities to check whether a\nquery_id has been set correctly.\n- Checking pg_stat_activity is not possible in the regress tests as\nyou need a second session to check the reported query_id.\n- pg_stat_statements can be used indirectly but you're limited to how\npgss uses query_id. For example, it doesn't rely on queryId in\nExecutorRun.\n\nA possible solution I've been thinking of is to use a test module. The\nmodule will assert on whether the queryId is set or not in parse, plan\nand executor hooks. It will also check if the queryId reported in\npgstat matches the queryId at the root level.\n\nThis allows us to check that the queryId is correctly set with the\nextended protocol. I've also found some queries which will trigger a\nfailure (ctas and cursor usage) though this is probably a different\nissue from the extended protocol issue.\n\nRegards,\nAnthonin",
"msg_date": "Fri, 26 Jul 2024 14:39:41 +0200",
"msg_from": "Anthonin Bonnefoy <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query_id, pg_stat_activity, extended query protocol"
},
{
"msg_contents": "On Fri, Jul 26, 2024 at 02:39:41PM +0200, Anthonin Bonnefoy wrote:\n> For the tests, there are limited possibilities to check whether a\n> query_id has been set correctly.\n> - Checking pg_stat_activity is not possible in the regress tests as\n> you need a second session to check the reported query_id.\n> - pg_stat_statements can be used indirectly but you're limited to how\n> pgss uses query_id. For example, it doesn't rely on queryId in\n> ExecutorRun.\n> \n> A possible solution I've been thinking of is to use a test module. The\n> module will assert on whether the queryId is set or not in parse, plan\n> and executor hooks. It will also check if the queryId reported in\n> pgstat matches the queryId at the root level.\n\nFWIW, I was more thinking in the lines of a TAP test with\nPostgreSQL::Test::BackgroundPsql to hold the sessions around while\ndoing pg_stat_activity lookups.\n\nUsing a test module like what you have is really tempting to rely on\nthe hooks for the work, that's something I'll try to think about more.\n\nWe could perhaps push the query ID into a table saving the state that\ngets queried in the SQL test, using only assertions is not enough as\nthis makes the test moot with assertions disabled. And actually,\nthere may be a point in just pushing safety assertions to be in the\ncore backend code, as a HEAD-only improvement.\n--\nMichael",
"msg_date": "Sat, 27 Jul 2024 07:36:01 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query_id, pg_stat_activity, extended query protocol"
},
{
"msg_contents": "> Sounds fine by me (still need to check all three patterns).\n>\n> + if (list_length(psrc->query_list) > 0)\n> + pgstat_report_query_id(linitial_node(Query, psrc->query_list)->queryId, false);\n>\n> Something that slightly worries me is to assume that the first Query\n> in the query_list is fetched. Using a foreach() for all three paths\n> may be better, jumping out at the loop when finding a valid query ID.\n>\nI cannot see how the inital node would not contain the queryId, but\nto be on the safe side, your suggestion makes sense.\n\nAre you thinking something like the below? In the foreach,\ncheck for the first queryId != 0, set the queryId and then\nbreak out of the loop\n\nforeach(lc, psrc->query_list)\n{\n Query *query = lfirst_node(Query, lc);\n if (query->queryId != UINT64CONST(0))\n {\npgstat_report_query_id(query->queryId, false);\n break;\n }\n}\n>> We do have to account for the queryId changing after cache revalidation, so\n>> we should still set the queryId inside GetCachedPlan in the case the query\n>> underwent re-analysis. This means there is a chance that a queryId set at\n>> the start of the exec_bind_message may be different by the time we complete\n>> the function, in the case the query revalidation results in a different queryId.\n> Makes sense to me. I'd rather make that a separate patch, with, if\n\nI will create a separate patch for this.\n\n\n> possible, its own tests (the case of Andrei with a DROP/CREATE TABLE) .\n\n\nIn terms of testing, there are several options being discussed [1] including\nBackgroundPsql and using hooks. I want to add a another idea which\nis to rely on compute_plan_id = regress to log if my_query_id is a\nnon-zero value inside pgstat_report_query_id. Something like below:\n\n\n@@ -640,6 +641,14 @@ pgstat_report_query_id(uint64 query_id, bool force)\n PGSTAT_BEGIN_WRITE_ACTIVITY(beentry);\n beentry->st_query_id = query_id;\n PGSTAT_END_WRITE_ACTIVITY(beentry);\n+\n+ if (compute_query_id == COMPUTE_QUERY_ID_REGRESS)\n+ {\n+ int64 queryId = pgstat_get_my_query_id();\n+\n+ if (queryId != UINT64CONST(0))\n+ elog(DEBUG3, \"queryId value is not zero\");\n+ }\n\n\nThe number of logs can be counted and compared with what\nis expected. For example, in simple query, I expect the queryId to be\nset once. Using the \\bind, I expect the queryId to be set 3 times ( \nparse/bind/execute).\n\nSpecifically for the DROP/CREATE TABLE test, the \\parse and \\bindx\nbeing proposed in [2] can be used. The table can be dropped and\nrecreated after the \\parse step. If we count the logs, we would expect\na total of 4 logs to be set (parse/bind/revalidation/execution).\n\nI think the testing discussion should be moved to a different thread.\nWhat do you think?\n\nRegards,\n\nSami\n\n\n[1] https://www.postgresql.org/message-id/ZqQk0WHN8EMBEai9%40paquier.xyz\n[2] https://www.postgresql.org/message-id/[email protected]",
"msg_date": "Tue, 13 Aug 2024 21:40:48 -0500",
"msg_from": "\"Imseih (AWS), Sami\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query_id, pg_stat_activity, extended query protocol"
},
{
"msg_contents": "> I think the testing discussion should be moved to a different thread.\n> What do you think?\nSee v4.\n\n0001 deals with reporting queryId in exec_execute_message and \nexec_bind_message.\n0002 deals with reporting queryId after a cache invalidation.\n\nThere are no tests as this requires more discussion in a separate thread(?)\n\n\nRegards,\n\n\nSami",
"msg_date": "Wed, 14 Aug 2024 16:05:59 -0500",
"msg_from": "\"Imseih (AWS), Sami\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query_id, pg_stat_activity, extended query protocol"
},
{
"msg_contents": "On Thu, Aug 15, 2024 at 5:06 AM Imseih (AWS), Sami <[email protected]> wrote:\n>\n> > I think the testing discussion should be moved to a different thread.\n> > What do you think?\n> See v4.\n>\n> 0001 deals with reporting queryId in exec_execute_message and\n> exec_bind_message.\n> 0002 deals with reporting queryId after a cache invalidation.\n>\n> There are no tests as this requires more discussion in a separate thread(?)\n>\n\nhi.\nv4-0001 work as expected. i don't know how to test 0002\n\nIn 0001 and 0002, all foreach loops, we can use the new macro foreach_node.\nsee https://git.postgresql.org/cgit/postgresql.git/commit/?id=14dd0f27d7cd56ffae9ecdbe324965073d01a9ff\n\n\n\nthe following are the minimum tests I come up with for 0001\n\n/* test \\bind queryid exists */\nselect query_id is not null as query_id_exist\nfrom pg_stat_activity where pid = pg_backend_pid() \\bind \\g\n\n\n\n/* test that \\parse \\bind_named queryid exists */\nselect pg_backend_pid() as current_pid \\gset pref01_\nselect query_id is not null as query_id_exist from pg_stat_activity\nwhere pid = $1 \\parse stmt11\n\\bind_named stmt11 :pref01_current_pid \\g\n\n\n",
"msg_date": "Sat, 31 Aug 2024 09:47:41 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query_id, pg_stat_activity, extended query protocol"
},
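A minimal sketch of the foreach_node rewrite suggested in the message above, applied to the loop quoted earlier in the thread (illustrative only, not the committed code):

    /* Advertise the query ID of the first valid Query in the cached list. */
    foreach_node(Query, query, psrc->query_list)
    {
        if (query->queryId != UINT64CONST(0))
        {
            pgstat_report_query_id(query->queryId, false);
            break;
        }
    }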
{
"msg_contents": "On Sat, Aug 31, 2024 at 09:47:41AM +0800, jian he wrote:\n> /* test \\bind queryid exists */\n> select query_id is not null as query_id_exist\n> from pg_stat_activity where pid = pg_backend_pid() \\bind \\g\n> \n> /* test that \\parse \\bind_named queryid exists */\n> select pg_backend_pid() as current_pid \\gset pref01_\n> select query_id is not null as query_id_exist from pg_stat_activity\n> where pid = $1 \\parse stmt11\n> \\bind_named stmt11 :pref01_current_pid \\g\n\nI need to spend a bit more time with my head down for this thread, but\ncouldn't we use these commands with various query patterns in\npg_stat_statements and look at the shmem counters reported through its\nview?\n--\nMichael",
"msg_date": "Mon, 2 Sep 2024 10:11:43 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query_id, pg_stat_activity, extended query protocol"
},
{
"msg_contents": "On 14/8/2024 23:05, Imseih (AWS), Sami wrote:\n>> I think the testing discussion should be moved to a different thread.\n>> What do you think?\n> See v4.\n> \n> 0001 deals with reporting queryId in exec_execute_message and \n> exec_bind_message.\n> 0002 deals with reporting queryId after a cache invalidation.\n> \n> There are no tests as this requires more discussion in a separate thread(?)\nAt first, these patches look good.\nBut I have a feeling of some mess here:\nqueryId should be initialised at the top-level query. At the same time, \nthe RevalidateCachedQuery routine can change this value in the case of \nthe query tree re-validation.\nYou can say that this routine can't be called from a non-top-level query \nright now, except SPI. Yes, but what about extensions or future usage?\n\n-- \nregards, Andrei Lepikhov\n\n\n\n",
"msg_date": "Mon, 2 Sep 2024 21:30:18 +0200",
"msg_from": "Andrei Lepikhov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query_id, pg_stat_activity, extended query protocol"
},
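For context on the top-level concern raised above: pgstat_report_query_id() only overwrites an already-advertised ID when the caller forces it. Roughly, as a simplified sketch of src/backend/utils/activity/backend_status.c rather than the verbatim source:

    void
    pgstat_report_query_id(uint64 query_id, bool force)
    {
        volatile PgBackendStatus *beentry = MyBEEntry;

        if (!beentry || !pgstat_track_activities)
            return;

        /*
         * Only write a new ID when none is set, unless the caller forces it;
         * this is what keeps nested statements from clobbering the top-level
         * query ID.
         */
        if (beentry->st_query_id != UINT64CONST(0) && !force)
            return;

        PGSTAT_BEGIN_WRITE_ACTIVITY(beentry);
        beentry->st_query_id = query_id;
        PGSTAT_END_WRITE_ACTIVITY(beentry);
    }

Calling it with force = true from RevalidateCachedQuery bypasses that guard, which is why a revalidation happening below the top level could overwrite the top-level ID.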
{
"msg_contents": "On 14/8/2024 23:05, Imseih (AWS), Sami wrote:\n> There are no tests as this requires more discussion in a separate thread(?)\nUnfortunately, TAP tests don't allow us to keep a connection and \nmanually permutate the order of queries sent to different connections. \nBut isolation tests are designed to do so. Of course, they aren't the \nbest if you need to compare values produced by various queries but see a \nclumsy sketch doing that in the attachment.\nAlso, while writing the test, I found out that now, JumbleQuery takes \ninto account constants of the A_Const node, and calls of the same \nprepared statement with different parameters generate different \nquery_id. Is it a reason to introduce JumbleQuery options and allow \ndifferent logic of queryid generation?\n\n-- \nregards, Andrei Lepikhov",
"msg_date": "Tue, 3 Sep 2024 16:49:31 +0200",
"msg_from": "Andrei Lepikhov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query_id, pg_stat_activity, extended query protocol"
},
{
"msg_contents": "Sorry for the late reply on this thread.\n\nOn 14/8/2024 23:05, Imseih (AWS), Sami wrote:\n> There are no tests as this requires more discussion in a separate thread(?)\n> Unfortunately, TAP tests don't allow us to keep a connection and \n> manually permutate the order of queries sent to different connections. \n> But isolation tests are designed to do so. Of course, they aren't the \n> best if you need to compare values produced by various queries but see a \n> clumsy sketch doing that in the attachment.\n\nIt would be nice to use isolation tests as you have, those type of tests \ndon't support psql meta-commands. We need \\parse, \\bind, \\bind_named \nto test queryId for queries issued through extended query protocol.\n\nWith TAP tests we can use query_until in BackgroundPsql to have one\nconnection issue a command and another connection track the # of distinct\nqueryIds expected. See the 007_query_id.pl of an example TAP test that\ncould be added under test_misc.\n\nAn INJECTION_POINT can also be added right before we call pgstat_report_query_id\nin plancache.c. This will allow us to test when we expect the queryId to\nchange after a cache revalidation. Thoughts?\n\n> Also, while writing the test, I found out that now, JumbleQuery takes \n> into account constants of the A_Const node, and calls of the same \n> prepared statement with different parameters generate different \n> query_id. Is it a reason to introduce JumbleQuery options and allow \n> different logic of queryid generation?\n\nCan you start a new thread for this prepared statement scenario?\n\n--\nSami",
"msg_date": "Mon, 09 Sep 2024 18:20:01 -0500",
"msg_from": "Sami Imseih <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query_id, pg_stat_activity, extended query protocol"
},
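A sketch of the INJECTION_POINT idea mentioned above, assuming the injection-point machinery added in PostgreSQL 17; the point name is made up here, and a TAP test would attach a notice callback to it through the injection_points test module:

    /* plancache.c, RevalidateCachedQuery(): just before the query ID is re-reported */
    INJECTION_POINT("plancache-revalidate-query-id");
    pgstat_report_query_id(linitial_node(Query, plansource->query_list)->queryId, false);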
{
"msg_contents": "> >> I think the testing discussion should be moved to a different thread.\n> >> What do you think?\n> > See v4.\n> > \n> > 0001 deals with reporting queryId in exec_execute_message and \n> > exec_bind_message.\n> > 0002 deals with reporting queryId after a cache invalidation.\n> > \n> > There are no tests as this requires more discussion in a separate thread(?)\n> At first, these patches look good.\n> But I have a feeling of some mess here:\n> queryId should be initialised at the top-level query. At the same time, \n> the RevalidateCachedQuery routine can change this value in the case of \n> the query tree re-validation.\n> You can say that this routine can't be called from a non-top-level query \n> right now, except SPI. Yes, but what about extensions or future usage?\n\nThis is a valid point. RevalidatePlanCache is forcing a \nnew queryId to be advertised ( 'true' as the second argument to \npgstat_report_query_id) . This means,\nv4-0002-Report-new-queryId-after-plancache-re-validation.patch \nwill result in a non top-level queryId being advertised.\n\nSee the attached test case.\n\nI need to think about this a bit.\n\n--\nSami",
"msg_date": "Mon, 09 Sep 2024 20:20:01 -0500",
"msg_from": "Sami Imseih <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query_id, pg_stat_activity, extended query protocol"
},
{
"msg_contents": "> > >> I think the testing discussion should be moved to a different thread.\n> > >> What do you think?\n> > > See v4.\n> > > \n> > > 0001 deals with reporting queryId in exec_execute_message and \n> > > exec_bind_message.\n> > > 0002 deals with reporting queryId after a cache invalidation.\n> > > \n> > > There are no tests as this requires more discussion in a separate thread(?)\n> > At first, these patches look good.\n> > But I have a feeling of some mess here:\n> > queryId should be initialised at the top-level query. At the same time, \n> > the RevalidateCachedQuery routine can change this value in the case of \n> > the query tree re-validation.\n> > You can say that this routine can't be called from a non-top-level query \n> > right now, except SPI. Yes, but what about extensions or future usage?\n\n\n> This is a valid point. RevalidatePlanCache is forcing a \n> new queryId to be advertised ( 'true' as the second argument to \n> pgstat_report_query_id) . This means,\n> v4-0002-Report-new-queryId-after-plancache-re-validation.patch \n> will result in a non top-level queryId being advertised.\n\nAn idea would be to add bool field called force_update_qid to \nCachedPlanSource, and this field can be set to 'true' after a call\nto CreateCachedPlan. RevalidateCachedQuery will only update\nthe queryId if this value is 'true'.\n\nFor now, only exec_parse_message will set this field to 'true', \nbut any caller can decide to set it to 'true' if there are other \ncases in the future.\n\nWhat do you think?\n \n--\n\nSami\n\n\n\n\n",
"msg_date": "Mon, 09 Sep 2024 21:27:00 -0500",
"msg_from": "Sami Imseih <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query_id, pg_stat_activity, extended query protocol"
},
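A rough sketch of the force_update_qid idea described above (the field name comes from the proposal; the surrounding code is abridged and the variable names are approximate):

    /* plancache.h: CachedPlanSource gains an opt-in flag */
    bool        force_update_qid;   /* re-advertise queryId on revalidation? */

    /* postgres.c, exec_parse_message(): only the extended query protocol opts in */
    psrc->force_update_qid = true;

    /* plancache.c, RevalidateCachedQuery(): re-report only when asked to */
    if (plansource->force_update_qid && plansource->query_list != NIL)
        pgstat_report_query_id(linitial_node(Query, plansource->query_list)->queryId,
                               true);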
{
"msg_contents": "On Mon, Sep 09, 2024 at 06:20:01PM -0500, Sami Imseih wrote:\n> On 14/8/2024 23:05, Imseih (AWS), Sami wrote:\n>> Also, while writing the test, I found out that now, JumbleQuery takes \n>> into account constants of the A_Const node, and calls of the same \n>> prepared statement with different parameters generate different \n>> query_id. Is it a reason to introduce JumbleQuery options and allow \n>> different logic of queryid generation?\n> \n> Can you start a new thread for this prepared statement scenario?\n\nYes, please, this makes the thread rather confusing by adding\ndifferent problems into the mix that require different analysis and\nactions. Let's only focus on the issue that the query ID reporting\nin pg_stat_activity is missing for the extended query protocol here.\n--\nMichael",
"msg_date": "Wed, 11 Sep 2024 12:06:30 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query_id, pg_stat_activity, extended query protocol"
},
{
"msg_contents": "On Mon, Sep 02, 2024 at 10:11:43AM +0900, Michael Paquier wrote:\n> I need to spend a bit more time with my head down for this thread, but\n> couldn't we use these commands with various query patterns in\n> pg_stat_statements and look at the shmem counters reported through its\n> view?\n\nMy apologies for the time it took, but here you go with a patch set.\n\nI have looked at this thread overall, and there are two problems at\nhand regarding the lack of reporting of the query ID in backend\nentries for the extended query protocol:\n1) ExecutorRun() misses the reports, which happens when a query\ndoes an ExecutorStart(), then a series of ExecutorRun() through a\nportal with bind messages. Robert has mentioned that separately a few\ndays ago at [1]. But that's not everything.\n2) A query executed through a portal with tuples to return in a\ntuplestore also miss the query ID report. For example, a DML\nRETURNING with the extended protocol would use an execute (with\nExecutorStart and ExecutorRun) followed by a series of execute fetch.\npg_stat_activity would report the query ID for the execute, not for\nthe fetches, while pg_stat_activity has the query string. That's\nconfusing.\n\nThe patch series attached address these two in 0001 and 0003. 0001\nshould be backpatched (still need to wordsmith the comments), where\nI've come down to the approach of using a report in ExecutorRun()\nbecause it is simpler and it does the job. Perhaps also 0003, but\nnobody has complained about that, either.\n\nI have also looked at the tests proposed (isolation, TAP, custom\nmodule); all of them are a bit disappointing because they duplicate\nsome patterns that are already tested in pg_stat_statements, while\nwilling to check the contents of pg_stat_statements. I am afraid that\nit is not going to age well because we'd need to have the same query\npatterns in more than one place. We should have tests, definitely,\nbut we can do an equivalent of pg_stat_activity lookups by calling\npgstat_get_my_query_id() in strategic places, making sure that all\ndedicated paths always have the query ID reported:\n- Check pgstat_get_my_query_id() in the run, finish and end executor\nhooks.\n- In the parse-analyze hook, before the query ID is reported (except\nfor a PREPARE), check that the ID in a Query is set.\n\nThe test proposed by Robert on the other thread was fancy enough that\nI've added it. All that is in 0002, and that's enough to cause 0001\nto fail, planning only these on HEAD. Tests in 0003 require fetch\nmessages, and I don't have a trick in my sleeves except if we invent a\nnew meta-command in psql.\n\nThere are other problems mentioned on this thread, with plan caching\nfor example. Let's deal with that separately, in separate threads.\n\n[1]: https://www.postgresql.org/message-id/CA+TgmoZxtnf_jZ=VqBSyaU8hfUkkwoJCJ6ufy4LGpXaunKrjrg@mail.gmail.com\n--\nMichael",
"msg_date": "Wed, 11 Sep 2024 14:43:04 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query_id, pg_stat_activity, extended query protocol"
},
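The ExecutorRun() part of 0001 described above boils down to a report of this shape (a sketch; placement and comments in the actual patch may differ):

    /* execMain.c, ExecutorRun(): mirror the report ExecutorStart() already does */
    pgstat_report_query_id(queryDesc->plannedstmt->queryId, false);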
{
"msg_contents": "I took a look at your patches and here are my comments\n\n> 1) ExecutorRun() misses the reports, which happens when a query\n> does an ExecutorStart(), then a series of ExecutorRun() through a\n> portal with bind messages. Robert has mentioned that separately a few\n> days ago at [1]. But that's not everything.\n> 2) A query executed through a portal with tuples to return in a\n> tuplestore also miss the query ID report. For example, a DML\n> RETURNING with the extended protocol would use an execute (with\n> ExecutorStart and ExecutorRun) followed by a series of execute fetch.\n> pg_stat_activity would report the query ID for the execute, not for\n> the fetches, while pg_stat_activity has the query string. That's\n> confusing.\n\n1/ \nIn your 0003-Report-query-ID-for-execute-fetch-in-extended-query-.patch \npatch, you are still setting the queryId inside exec_execute_message \nif (execute_is_fetch). This condition could be removed and don't need to set \nthe queryId inside ExecutorRun. This is exactly what v5-0001 does. \n\nV5-0001 also sets the queryId inside the exec_bind_message.\nWe must do that otherwise we will have a NULL queryId during bind.\n\nalso tested it against this for the case that was raised by Robert [1].\n\nI also think we need to handle RevalidateCachedQuery. This is the case where we \nhave a new queryId after a cached query revalidation. \n\nI addressed the comments by Andrei [3] in v5-0002. For RevalidateCachedQuery, \nwe can simple call pgstat_report_query_id with \"force\" = \"false\" so it will take care \nof updating a queryId only if it's a top level query. \n\n2/ \nAs far as 0002-Add-sanity-checks-related-to-query-ID-reporting-in-p.patch, \nI do like the pg_stat_statements extended tests to perform these tests. \n\nWhat about adding the Assert(pgstat_get_my_query_id() != 0) inside \nexec_parse_message, exec_bind_message and exec_execute_message as well?\n\nI think having the Asserts inside the hooks in pg_stat_statements are good\nas well.\n\nI am not sure how we can add tests for RevalidateCachedQuery though using\npg_stat_statements. We could skip testing this scenario, maybe??\n\nLet me know what you think.\n\n\n[1] https://www.postgresql.org/message-id/CA+TgmoZxtnf_jZ=VqBSyaU8hfUkkwoJCJ6ufy4LGpXaunKrjrg@mail.gmail.com\n[2] https://www.postgresql.org/message-id/flat/2beb1a00-3060-453a-90a6-7990d6940d62%40gmail.com#fffec59b563dbf49910e8b6d9f855e5a\n[3] https://www.postgresql.org/message-id/F001F959-400F-41C6-9886-C9665A4DE0A3%40gmail.com\n\n\nRegards,\n\nSami",
"msg_date": "Wed, 11 Sep 2024 17:02:07 -0500",
"msg_from": "Sami Imseih <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query_id, pg_stat_activity, extended query protocol"
},
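For readers following along, the message-level reporting that v5-0001 describes amounts to loops of this shape in src/backend/tcop/postgres.c (a simplified sketch, not the exact patch):

    /* exec_bind_message(): take the ID from the cached source's query list */
    foreach(lc, psrc->query_list)
    {
        Query *query = lfirst_node(Query, lc);

        if (query->queryId != UINT64CONST(0))
        {
            pgstat_report_query_id(query->queryId, false);
            break;
        }
    }

    /* exec_execute_message(): take the ID from the portal's planned statements */
    foreach(lc, portal->stmts)
    {
        PlannedStmt *stmt = lfirst_node(PlannedStmt, lc);

        if (stmt->queryId != UINT64CONST(0))
        {
            pgstat_report_query_id(stmt->queryId, false);
            break;
        }
    }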
{
"msg_contents": "On Wed, Sep 11, 2024 at 05:02:07PM -0500, Sami Imseih wrote:\n> In your 0003-Report-query-ID-for-execute-fetch-in-extended-query-.patch \n> patch, you are still setting the queryId inside exec_execute_message \n> if (execute_is_fetch). This condition could be removed and don't need to set \n> the queryId inside ExecutorRun. This is exactly what v5-0001 does. \n> \n> V5-0001 also sets the queryId inside the exec_bind_message.\n> We must do that otherwise we will have a NULL queryId during bind.\n> \n> also tested it against this for the case that was raised by Robert [1].\n\nThere are a few ways to do things:\n- Add an extra report in ExecutorRun(), because we know that it is\ngoing to be what we are going to cross when using a portal with\nmultiple execution calls. This won't work for the case of multiple\nfetch messages where there is only one initial ExecutorRun() call\nfollowed by the tuple fetches, as you say.\n- Add these higher in the stack, when processing the messages. In\nwhich case, we can also argue about removing the calls in\nExecutorRun() and ExecutorStart(), entirely, because these are\nunnecessary duplicates as long as the query ID is set close to where\nit is reset when we are processing the kind and execute messages.\nExecutorStart() as report location is ill-thought from the start.\n- Keep all of them, relying on the first one set as the follow-up ones\nare harmless. Perhaps also just reduce the number of calls on HEAD.\n\nAfter sleeping on it, I'd tend to slightly favor the last option in\nthe back-branches and the second option on HEAD where we reduce the\nnumber of report calls. This way, we are a bit more careful in\nreleased branches by being more aggressive in reporting the query ID.\nThat's also why I have ordered the previous patch set this way but\nthat was badly presented, even if it does not take care of the\nprocessing of the execute_is_fetch case for execute messages.\n\nThe tests in pg_stat_statements are one part I'm pretty sure is one\ngood way forward. It is not perfect, but with the psql meta-commands\nwe have a good deal of coverage on top of the other queries already in\nthe test suite. That's also the only place in core where we force the\nquery ID across all these hooks, and this does not impact switching\nthe way stats are stored if we were to switch to pgstats in shmem with\nthe custom stats APIs.\n\n> I am not sure how we can add tests for RevalidateCachedQuery though using\n> pg_stat_statements. We could skip testing this scenario, maybe??\n\nPerhaps. I'd need to think through this one. Let's do things in\norder and see about the reports for the bind/execute messages, first,\nplease?\n--\nMichael",
"msg_date": "Thu, 12 Sep 2024 08:07:31 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query_id, pg_stat_activity, extended query protocol"
},
{
"msg_contents": "> After sleeping on it, I'd tend to slightly favor the last option in\n> the back-branches and the second option on HEAD where we reduce the\n> number of report calls. This way, we are a bit more careful in\n>released branches by being more aggressive in reporting the query ID.\n\nI agree with this because it will safely allow us to backpatch this\nfix. \n\n> The tests in pg_stat_statements are one part I'm pretty sure is one\n> good way forward. It is not perfect, but with the psql meta-commands\n\nI played around with BackgrounsPsql. It works and gives us more flexibility\nin testing, but I think the pg_stat_statements test are good enough for this\npurpose. \n\nMy only concern is this approach tests core functionality ( reporting of queryId )\nin the tests of a contrib module ( pg_stat_statements ). Is that a valid\nconcern?\n\n> Perhaps. I'd need to think through this one. Let's do things in\n> order and see about the reports for the bind/execute messages, first,\n> please?\n\nSure, that is fine.\n\n\nRegards,\n\nSami \n\n\n\n\n\n",
"msg_date": "Wed, 11 Sep 2024 21:41:58 -0500",
"msg_from": "Sami Imseih <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query_id, pg_stat_activity, extended query protocol"
},
{
"msg_contents": "On Wed, Sep 11, 2024 at 09:41:58PM -0500, Sami Imseih wrote:\n>> The tests in pg_stat_statements are one part I'm pretty sure is one\n>> good way forward. It is not perfect, but with the psql meta-commands\n> \n> I played around with BackgrounsPsql. It works and gives us more flexibility\n> in testing, but I think the pg_stat_statements test are good enough for this\n> purpose. \n> \n> My only concern is this approach tests core functionality ( reporting of queryId )\n> in the tests of a contrib module ( pg_stat_statements ). Is that a valid\n> concern?\n\nDo you think that we'd better replace the calls reporting the query ID\nin execMain.c by some assertions on HEAD? This won't work for\nExecutorStart() because PREPARE statements (or actually EXECUTE,\ne.g. I bumped on that yesterday but I don't recall which one) would\nblow up on that with compute_query_id enabled. We could do something\nlike that in ExecutorRun() at least as that may be helpful for\nextensions? An assertion would be like:\nAssert(!IsQueryIdEnabled() || pgstat_get_my_query_id() != 0);\n\nExecutorFinish() and ExecutorEnd() are not that mandatory, so there's\na risk that this causes the backend to complain because a planner or\npost-analyze hook decides to force the hand of the backend entry in an\nextension. With such checks, we're telling them to just not do that.\nSo your point would be to force this rule within the core executor on\nHEAD? I would not object to that in case we're missing more spots\nwith the extended query protocol, actually. That would help us detect\ncases where we're still missing the query ID to be set and the\nexecutor should know about that. The execute/fetch has been missing\nfor years without us being able to detect it automatically.\n\nNote that I'm not much worried about the dependency with\npg_stat_statements. We already rely on it for query jumbling\nnormalization for some parse node patterns like DISCARD, and query\njumbling requires query IDs to be around. So that's not new.\n--\nMichael",
"msg_date": "Thu, 12 Sep 2024 12:11:35 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query_id, pg_stat_activity, extended query protocol"
},
{
"msg_contents": "> Do you think that we'd better replace the calls reporting the query ID\n> in execMain.c by some assertions on HEAD? This won't work for\n> ExecutorStart() because PREPARE statements (or actually EXECUTE,\n> e.g. I bumped on that yesterday but I don't recall which one) would\n\nYes, adding the asserts in execMain.c is better, but there is complications\nthere due to the issue you mention. I think the issue you are bumping into\nis when pg_stat_statements.track_utility = on ( default ), the assert in \nExecutorStart will fail on EXECUTE. I believe it's because ( need to verify )\npg_stat_statements.c sets the queryId = 0 in the ProcessUtility hook [1].\n\n> So your point would be to force this rule within the core executor on\n> HEAD? I would not object to that in case we're missing more spots\n> with the extended query protocol, actually. That would help us detect\n> cases where we're still missing the query ID to be set and the\n> executor should know about that.\n\nYes, but looking at how pg_stat_statements works with PREPARE/EXECUTE, \nI am now thinking it's better to Just keep the tests in pg_stat_statements. \nHaving test coverage in pg_stat_statements is better than nothing, and\ncheck-world ( or similar ) will be able to cacth such failures.\n\n\n> Note that I'm not much worried about the dependency with\n> pg_stat_statements. We already rely on it for query jumbling\n> normalization for some parse node patterns like DISCARD, and query\n> jumbling requires query IDs to be around. So that's not new.\n\nGood point.\n\n[1] https://github.com/postgres/postgres/blob/master/contrib/pg_stat_statements/pg_stat_statements.c#L1127-L1128\n\nRegards,\n\nSami \n\n\n\n\n",
"msg_date": "Thu, 12 Sep 2024 15:58:27 -0500",
"msg_from": "Sami Imseih <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query_id, pg_stat_activity, extended query protocol"
},
{
"msg_contents": "On Thu, Sep 12, 2024 at 03:58:27PM -0500, Sami Imseih wrote:\n> Yes, adding the asserts in execMain.c is better, but there is complications\n> there due to the issue you mention. I think the issue you are bumping into\n> is when pg_stat_statements.track_utility = on ( default ), the assert in \n> ExecutorStart will fail on EXECUTE. I believe it's because ( need to verify )\n> pg_stat_statements.c sets the queryId = 0 in the ProcessUtility hook [1].\n\nYes.\n\n> I am now thinking it's better to Just keep the tests in pg_stat_statements. \n> Having test coverage in pg_stat_statements is better than nothing, and\n> check-world ( or similar ) will be able to catch such failures.\n\nI have begun things by applying a patch to add new tests in\npg_stat_statements. It is just something that is useful on its own,\nand we had nothing of the kind.\n\nThen, please see attached two lightly-updated patches. 0001 is for a\nbackpatch down to v14. This is yours to force things in the exec and\nbind messages for all portal types, with the test (placed elsewhere in\n14~15 branches). 0002 is for HEAD to add some sanity checks, blowing\nup the tests of pg_stat_statements if one is not careful with the\nquery ID reporting.\n\nI'm planning to look at that again at the beginning of next week.\n--\nMichael",
"msg_date": "Fri, 13 Sep 2024 14:58:26 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query_id, pg_stat_activity, extended query protocol"
},
{
"msg_contents": "> Then, please see attached two lightly-updated patches. 0001 is for a\n> backpatch down to v14. This is yours to force things in the exec and\n> bind messages for all portal types, with the test (placed elsewhere in\n> 14~15 branches). 0002 is for HEAD to add some sanity checks, blowing\n> up the tests of pg_stat_statements if one is not careful with the\n> query ID reporting.\n\nThese 2 patches look good to me; except for the slight typo\nIn the commit message of 0002. \"backpatch\" instead of \"backpatck\".\n\nThat leaves us with considering v5-0002 [1]. I do think this is good\nfor overall correctness of the queryId being advertised after a cache \nrevalidation, even if users of pg_stat_activity will hardly notice this.\n\n[1] https://www.postgresql.org/message-id/DB325894-3EE3-4B2E-A18C-4B34E7B2F5EC%40gmail.com \n\n\nRegards,\n\nSami \n\n\n\n\n",
"msg_date": "Tue, 17 Sep 2024 17:01:18 -0500",
"msg_from": "Sami Imseih <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query_id, pg_stat_activity, extended query protocol"
},
{
"msg_contents": "On Tue, Sep 17, 2024 at 05:01:18PM -0500, Sami Imseih wrote:\n> > Then, please see attached two lightly-updated patches. 0001 is for a\n> > backpatch down to v14. This is yours to force things in the exec and\n> > bind messages for all portal types, with the test (placed elsewhere in\n> > 14~15 branches). 0002 is for HEAD to add some sanity checks, blowing\n> > up the tests of pg_stat_statements if one is not careful with the\n> > query ID reporting.\n> \n> These 2 patches look good to me; except for the slight typo\n> In the commit message of 0002. \"backpatch\" instead of \"backpatck\".\n\nYes, I've noticed this one last Friday and fixed the typo in the\ncommit log after sending the previous patch series.\n\n> That leaves us with considering v5-0002 [1]. I do think this is good\n> for overall correctness of the queryId being advertised after a cache \n> revalidation, even if users of pg_stat_activity will hardly notice this.\n> \n> [1] https://www.postgresql.org/message-id/DB325894-3EE3-4B2E-A18C-4B34E7B2F5EC%40gmail.com \n\nYeah. I need more time to evaluate this one.\n\nAlso, please find one of the scripts I have used for the execute/fetch\ncase, that simply does an INSERT RETURNING with a small fetch size to\ncreate a larger window in pg_stat_activity where we don't report the\nquery ID. One can run it like that, crude still on point:\n# Download a JDBC driver\n# Create the table to use.\npsql -c 'create table aa (a int);' postgres\nCLASSPATH=postgresql-42.7.4.jar java TestReturning.java\n\nThen, while running the script, you would notice that pg_stat_activity\nreports NULL for the query ID with the query text while the batch\nfetches are processing. I've taken and expanded one of the scripts\nyou have sent for 1d477a907e63.\n\nI'd like to get to the point where we are able to test that in core\nreliably. The sanity checks in the executor paths are a good step\nforward but they do nothing for the fetch cases. Perhaps Andrew\nDunstan work to expose libpq's APIs with the perl TAP tests would\nhelp at some point to control the extended protocol queries, but we\nare going to need more for the fetch case as there are no hooks that\nwould help to grab a query ID. A second option I have in mind would\nbe to set up an injection point that produces a NOTICE if a query ID\nis set when we end processing an execute message, then check the\nnumber of NOTICE messages produced as these can be predictible\ndepending on the number of rows and the fetch size.. This won't fly\nfar unless we can control the fetch size.\n--\nMichael",
"msg_date": "Wed, 18 Sep 2024 07:50:27 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query_id, pg_stat_activity, extended query protocol"
},
{
"msg_contents": "> would help to grab a query ID. A second option I have in mind would\n> be to set up an injection point that produces a NOTICE if a query ID\n> is set when we end processing an execute message, then check the\n> number of NOTICE messages produced as these can be predictible\n> depending on the number of rows and the fetch size.. This won't fly\n> far unless we can control the fetch size.\n\nFWIW, I do like the INJECTION_POINT idea and actually mentioned something \nsimilar up the thread [1] for the revalidate cache case, but I can see it being applied\nto all the other places we expect the queryId to be set. \n\n\n[1] https://www.postgresql.org/message-id/465EECA3-D98C-4E46-BBDB-4D057617DD89%40gmail.com\n\n--\n\nSami \n\n\n\n\n",
"msg_date": "Tue, 17 Sep 2024 18:39:17 -0500",
"msg_from": "Sami Imseih <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query_id, pg_stat_activity, extended query protocol"
},
{
"msg_contents": "On Tue, Sep 17, 2024 at 06:39:17PM -0500, Sami Imseih wrote:\n> FWIW, I do like the INJECTION_POINT idea and actually mentioned something \n> similar up the thread [1] for the revalidate cache case, but I can see it being applied\n> to all the other places we expect the queryId to be set. \n> \n> [1] https://www.postgresql.org/message-id/465EECA3-D98C-4E46-BBDB-4D057617DD89%40gmail.com\n\nFWIW, I was thinking about something like what has been done in\nindexcmds.c for 5bbdfa8a18dc as the query ID value is not predictible\nacross releases, but we could see whether it is set or not.\n--\nMichael",
"msg_date": "Wed, 18 Sep 2024 09:38:32 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query_id, pg_stat_activity, extended query protocol"
},
{
"msg_contents": "On Wed, Sep 18, 2024 at 07:50:27AM +0900, Michael Paquier wrote:\n> On Tue, Sep 17, 2024 at 05:01:18PM -0500, Sami Imseih wrote:\n> > > Then, please see attached two lightly-updated patches. 0001 is for a\n> > > backpatch down to v14. This is yours to force things in the exec and\n> > > bind messages for all portal types, with the test (placed elsewhere in\n> > > 14~15 branches). 0002 is for HEAD to add some sanity checks, blowing\n> > > up the tests of pg_stat_statements if one is not careful with the\n> > > query ID reporting.\n> > \n> > These 2 patches look good to me; except for the slight typo\n> > In the commit message of 0002. \"backpatch\" instead of \"backpatck\".\n> \n> Yes, I've noticed this one last Friday and fixed the typo in the\n> commit log after sending the previous patch series.\n\nSo, I have applied 0001 down to 14, followed by 0002 on HEAD.\n\nFor the sake of completeness, I have tested all the five\nPortalStrategys with the extended query protocol and with the sanity\nchecks of 0002 in place and compute_query_id=regress to force the\ncomputations, so I'd like to think that we are pretty good now.\n\n0002 is going to be interesting to see moving forward. I am wondering\nhow existing out-of-core extensions will react on that and if it will\nhelp catching up any issues. So, let's see how the experiment goes\nwith HEAD on this side. Perhaps we'll have to revert 0002 at the end,\nor perhaps not...\n--\nMichael",
"msg_date": "Wed, 18 Sep 2024 14:46:32 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query_id, pg_stat_activity, extended query protocol"
},
{
"msg_contents": "On Wed, Sep 18, 2024 at 09:38:32AM +0900, Michael Paquier wrote:\n> FWIW, I was thinking about something like what has been done in\n> indexcmds.c for 5bbdfa8a18dc as the query ID value is not predictible\n> across releases, but we could see whether it is set or not.\n\nBy the way, with the main issue fixed as of 933848d16dc9, could it be\npossible to deal with the plan cache part in a separate thread? This\nis from the start a separate thread to me, and we've done quite a bit\nhere already.\n--\nMichael",
"msg_date": "Wed, 18 Sep 2024 14:48:13 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query_id, pg_stat_activity, extended query protocol"
},
{
"msg_contents": "> So, I have applied 0001 down to 14, followed by 0002 on HEAD.\n\nThank you!\n\n> 0002 is going to be interesting to see moving forward. I am wondering\n> how existing out-of-core extensions will react on that and if it will\n> help catching up any issues. So, let's see how the experiment goes\n> with HEAD on this side. Perhaps we'll have to revert 0002 at the end,\n> or perhaps not...\n\nIf an extension breaks because of this, then it's doing something wrong __\nLet's see what happens.\n\n--\nSami\n\n\n\n\n\n\n",
"msg_date": "Wed, 18 Sep 2024 15:13:10 -0500",
"msg_from": "Sami Imseih <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query_id, pg_stat_activity, extended query protocol"
},
{
"msg_contents": "> By the way, with the main issue fixed as of 933848d16dc9, could it be\n> possible to deal with the plan cache part in a separate thread? This\n> is from the start a separate thread to me, and we've done quite a bit\n> here already.\n\nAgree, will do start a new thread.\n\n-- \n\nSami \n\n\n\n\n\n\n",
"msg_date": "Wed, 18 Sep 2024 15:14:07 -0500",
"msg_from": "Sami Imseih <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query_id, pg_stat_activity, extended query protocol"
},
{
"msg_contents": "On Wed, Sep 18, 2024 at 03:14:07PM -0500, Sami Imseih wrote:\n> Agree, will do start a new thread.\n\nThanks.\n--\nMichael",
"msg_date": "Thu, 19 Sep 2024 08:07:04 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query_id, pg_stat_activity, extended query protocol"
},
{
"msg_contents": "Hello Michael and Sami,\n\n18.09.2024 08:46, Michael Paquier wrote:\n> So, I have applied 0001 down to 14, followed by 0002 on HEAD.\n>\n\nPlease look at the script, which triggers Assert added by 24f520594:\n(assuming shared_preload_libraries=pg_stat_statements)\nSELECT repeat('x', 100) INTO t FROM generate_series(1, 100000);\nCREATE FUNCTION f() RETURNS int LANGUAGE sql IMMUTABLE RETURN 0;\nCREATE INDEX ON t(f());\n\nTRAP: failed Assert(\"!IsQueryIdEnabled() || pgstat_get_my_query_id() != 0\"), File: \"execMain.c\", Line: 300, PID: 1288609\nExceptionalCondition at assert.c:52:13\nExecutorRun at execMain.c:302:6\npostquel_getnext at functions.c:903:24\nfmgr_sql at functions.c:1198:15\nExecInterpExpr at execExprInterp.c:746:8\nExecInterpExprStillValid at execExprInterp.c:2034:1\nExecEvalExprSwitchContext at executor.h:367:13\nevaluate_expr at clauses.c:4997:14\nevaluate_function at clauses.c:4505:1\nsimplify_function at clauses.c:4092:12\neval_const_expressions_mutator at clauses.c:2591:14\nexpression_tree_mutator_impl at nodeFuncs.c:3550:12\neval_const_expressions_mutator at clauses.c:3712:1\neval_const_expressions at clauses.c:2267:1\nRelationGetIndexExpressions at relcache.c:5079:20\nBuildIndexInfo at index.c:2426:7\n...\n\nBest regards,\nAlexander\n\n\n",
"msg_date": "Wed, 25 Sep 2024 17:00:00 +0300",
"msg_from": "Alexander Lakhin <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query_id, pg_stat_activity, extended query protocol"
},
{
"msg_contents": "On Wed, Sep 25, 2024 at 05:00:00PM +0300, Alexander Lakhin wrote:\n> Please look at the script, which triggers Assert added by 24f520594:\n> (assuming shared_preload_libraries=pg_stat_statements)\n\nOr just compute_query_id = on.\n\n> SELECT repeat('x', 100) INTO t FROM generate_series(1, 100000);\n> CREATE FUNCTION f() RETURNS int LANGUAGE sql IMMUTABLE RETURN 0;\n> CREATE INDEX ON t(f());\n> \n> TRAP: failed Assert(\"!IsQueryIdEnabled() || pgstat_get_my_query_id() != 0\"), File: \"execMain.c\", Line: 300, PID: 1288609\n> ExceptionalCondition at assert.c:52:13\n> ExecutorRun at execMain.c:302:6\n> postquel_getnext at functions.c:903:24\n> fmgr_sql at functions.c:1198:15\n> ExecInterpExpr at execExprInterp.c:746:8\n> ExecInterpExprStillValid at execExprInterp.c:2034:1\n> ExecEvalExprSwitchContext at executor.h:367:13\n\nAnd this assertion is doing the job I want it to do, because it is\ntelling us that we are not setting a query ID when doing a parallel\nbtree build. The query string that we would report at the beginning\nof _bt_parallel_build_main() is passed down as a parameter, but not\nthe query ID. Hence pg_stat_activity would report a NULL query ID\nwhen spawning parallel workers in this cases, even if there is a query\nstring.\n\nThe same can be said for the parallel build for BRIN, that uses a lot\nof logic taken from btree for there parallel parameters, and even\nvacuum, as it goes through a parse analyze where its query ID would be\nset. but that's missed in the workers.\n\nSee _bt_parallel_build_main(), _brin_parallel_build_main() and\nparallel_vacuum_main() which are the entry point used by the workers\nfor all of them. For BRIN, note that I can get the same failure with\nthe following query, based on the table of your previous test that\nwould spawn a worker:\nCREATE INDEX foo ON t using brin(f());\n\nThe recovery test 027_stream_regress.pl not catching these failures\nmeans that we don't have tests with an index expression for such\nparallel builds, or the assertion would have triggered. It looks like\nthis is just because we don't do a parallel btree build with an index\nexpression where we need to go through the executor to build its\nIndexInfo.\n\nNote that parallel workers launched by execParallel.c pass down the\nquery ID in a minimal PlannedStmt where we use pgstat_get_my_query_id,\nso let's do the same for all these.\n\nAttached is the patch I am finishing with, with some new tests for\nBRIN and btree to force parallel builds with immutable expressions\nthrough functions. These fail the assertions in the recovery TAP\ntest. It may be a good idea to keep these tests in the long-term\nanyway. It took me a few minutes to find out that\nmin_parallel_table_scan_size and max_parallel_maintenance_workers was\nenough to force workers to spawn even if tables have no data, to make\nthe tests cheaper.\n\nThoughts or comments?\n--\nMichael",
"msg_date": "Thu, 26 Sep 2024 10:08:21 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query_id, pg_stat_activity, extended query protocol"
},
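A sketch of the worker-side fix being described above, using the btree case; the queryid field in the shared state is an assumption here, and BRIN and parallel vacuum would get the same treatment in their own shared structs:

    /* nbtsort.c, _bt_begin_parallel(): leader stashes its query ID in shared memory */
    btshared->queryid = pgstat_get_my_query_id();

    /* nbtsort.c, _bt_parallel_build_main(): worker re-advertises it next to the
     * query text it already reports (variable names approximate the existing code) */
    debug_query_string = sharedquery;
    pgstat_report_activity(STATE_RUNNING, debug_query_string);
    pgstat_report_query_id(btshared->queryid, false);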
{
"msg_contents": "> Attached is the patch I am finishing with, with some new tests for\n> BRIN and btree to force parallel builds with immutable expressions\n> through functions.\n\nglad to see the asserts are working as expected ad finding these issues.\nI took a look at the patch and tested it. It looks good. My only concern\nis the stability of using min_parallel_table_scan_size = 0. Will it always\nguarantee parallel workers? Can we print some debugging that proves\na parallel worker was spun up?\n\nSomething like this I get with DEBUG1\n\npostgres=*# CREATE INDEX btree_test_expr_idx ON btree_test_expr USING btree\n(btree_test_func());\nDEBUG: building index \"btree_test_expr_idx\" on table \"btree_test_expr\"\nwith request for 1 parallel workers\n\nAlso, we can just set the max_parallel_maintenance_workers to 1.\n\nWhat do you think?\n\nRegards,\n\nSami\nDEBUG: building index \"btree_test_expr_idx\" on table \"btree_test_expr\"\nwith request for 1 parallel workers\n\n\n\nOn Wed, Sep 25, 2024 at 8:08 PM Michael Paquier <[email protected]> wrote:\n\n> On Wed, Sep 25, 2024 at 05:00:00PM +0300, Alexander Lakhin wrote:\n> > Please look at the script, which triggers Assert added by 24f520594:\n> > (assuming shared_preload_libraries=pg_stat_statements)\n>\n> Or just compute_query_id = on.\n>\n> > SELECT repeat('x', 100) INTO t FROM generate_series(1, 100000);\n> > CREATE FUNCTION f() RETURNS int LANGUAGE sql IMMUTABLE RETURN 0;\n> > CREATE INDEX ON t(f());\n> >\n> > TRAP: failed Assert(\"!IsQueryIdEnabled() || pgstat_get_my_query_id() !=\n> 0\"), File: \"execMain.c\", Line: 300, PID: 1288609\n> > ExceptionalCondition at assert.c:52:13\n> > ExecutorRun at execMain.c:302:6\n> > postquel_getnext at functions.c:903:24\n> > fmgr_sql at functions.c:1198:15\n> > ExecInterpExpr at execExprInterp.c:746:8\n> > ExecInterpExprStillValid at execExprInterp.c:2034:1\n> > ExecEvalExprSwitchContext at executor.h:367:13\n>\n> And this assertion is doing the job I want it to do, because it is\n> telling us that we are not setting a query ID when doing a parallel\n> btree build. The query string that we would report at the beginning\n> of _bt_parallel_build_main() is passed down as a parameter, but not\n> the query ID. Hence pg_stat_activity would report a NULL query ID\n> when spawning parallel workers in this cases, even if there is a query\n> string.\n>\n> The same can be said for the parallel build for BRIN, that uses a lot\n> of logic taken from btree for there parallel parameters, and even\n> vacuum, as it goes through a parse analyze where its query ID would be\n> set. but that's missed in the workers.\n>\n> See _bt_parallel_build_main(), _brin_parallel_build_main() and\n> parallel_vacuum_main() which are the entry point used by the workers\n> for all of them. For BRIN, note that I can get the same failure with\n> the following query, based on the table of your previous test that\n> would spawn a worker:\n> CREATE INDEX foo ON t using brin(f());\n>\n> The recovery test 027_stream_regress.pl not catching these failures\n> means that we don't have tests with an index expression for such\n> parallel builds, or the assertion would have triggered. 
It looks like\n> this is just because we don't do a parallel btree build with an index\n> expression where we need to go through the executor to build its\n> IndexInfo.\n>\n> Note that parallel workers launched by execParallel.c pass down the\n> query ID in a minimal PlannedStmt where we use pgstat_get_my_query_id,\n> so let's do the same for all these.\n>\n> Attached is the patch I am finishing with, with some new tests for\n> BRIN and btree to force parallel builds with immutable expressions\n> through functions. These fail the assertions in the recovery TAP\n> test. It may be a good idea to keep these tests in the long-term\n> anyway. It took me a few minutes to find out that\n> min_parallel_table_scan_size and max_parallel_maintenance_workers was\n> enough to force workers to spawn even if tables have no data, to make\n> the tests cheaper.\n>\n> Thoughts or comments?\n> --\n> Michael",
"msg_date": "Thu, 26 Sep 2024 17:46:27 -0500",
"msg_from": "Sami Imseih <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query_id, pg_stat_activity, extended query protocol"
},
{
"msg_contents": "> Attached is the patch I am finishing with, with some new tests for\r\n> BRIN and btree to force parallel builds with immutable expressions\r\n> through functions.\r\n\r\nSorry about my last reply. Not sure what happened with my email client.\r\nHere it is again.\r\n\r\n\r\nglad to see the asserts are working as expected ad finding these issues.\r\nI took a look at the patch and tested it. It looks good. My only concern\r\nis the stability of using min_parallel_table_scan_size = 0. Will it always\r\nguarantee parallel workers? Can we print some debugging that proves\r\na parallel worker was spun up?\r\n\r\nSomething like this I get with DEBUG1\r\n\r\nDEBUG: building index \"btree_test_expr_idx\" on table \"btree_test_expr\" with request for 1 parallel workers\r\n\r\nWhat do you think?\r\n\r\nRegards,\r\n\r\nSami\r\n\n\n\n\n\n\n\n\n\n\n\n\n> Attached is the patch I am finishing with, with some new tests for\n> BRIN and btree to force parallel builds with immutable expressions\n> through functions. \n \nSorry about my last reply. Not sure what happened with my email client.\nHere it is again.\n \n \nglad to see the asserts are working as expected ad finding these issues.\nI took a look at the patch and tested it. It looks good. My only concern\nis the stability of using min_parallel_table_scan_size = 0. Will it always\nguarantee parallel workers? Can we print some debugging that proves\na parallel worker was spun up?\n \nSomething like this I get with DEBUG1\n \nDEBUG: building index \"btree_test_expr_idx\" on table \"btree_test_expr\" with request for 1 parallel workers\n \nWhat do you think?\n \nRegards,\n \nSami",
"msg_date": "Thu, 26 Sep 2024 22:55:37 +0000",
"msg_from": "\"Imseih (AWS), Sami\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query_id, pg_stat_activity, extended query protocol"
},
{
"msg_contents": "On Thu, Sep 26, 2024 at 10:55:37PM +0000, Imseih (AWS), Sami wrote:\n> Sorry about my last reply. Not sure what happened with my email client.\n> Here it is again.\n\nNo worries.\n\n> glad to see the asserts are working as expected ad finding these issues.\n> I took a look at the patch and tested it. It looks good. My only concern\n> is the stability of using min_parallel_table_scan_size = 0. Will it always\n> guarantee parallel workers? Can we print some debugging that proves\n> a parallel worker was spun up?\n>\n> Something like this I get with DEBUG1\n> \n> DEBUG: building index \"btree_test_expr_idx\" on table \"btree_test_expr\" with request for 1 parallel workers\n> \n> What do you think?\n\nI am not sure. The GUCs pretty much enforce this behavior and I doubt\nthat these are going to break moving on. Of course they would, but we\nare usually careful enough about that as long as it is possible to\ngrep for them. For example see the BRIN case in pageinspect.\n\nThe usual method for output that we use to confirm parallelism would\nbe EXPLAIN. Perhaps a potential target for CREATE INDEX now that it\nsupports more modes? I don't know if that's worth it, just throwing\none idea in the bucket of ideas.\n--\nMichael",
"msg_date": "Fri, 27 Sep 2024 08:17:53 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query_id, pg_stat_activity, extended query protocol"
},
{
"msg_contents": "> I am not sure. The GUCs pretty much enforce this behavior and I doubt\n> that these are going to break moving on. Of course they would, but we\n> are usually careful enough about that as long as it is possible to\n> grep for them. For example see the BRIN case in pageinspect.\n\nYes, I see pageinspect does the same thing for the BRIN case.\nThat is probably OK for this case also.\n\n> The usual method for output that we use to confirm parallelism would\n> be EXPLAIN. Perhaps a potential target for CREATE INDEX now that it\n> supports more modes? I don't know if that's worth it, just throwing\n> one idea in the bucket of ideas.\n\nNot sure about EXPLAIN for CREATE INDEX, since it's not a plannable\nstatement.\n\nMaybe a CREATE INDEX VERBOSE, just Like ANALYZE VERBOSE, \nVACUUM VERBOSE, etc. This will show the step that an index \nbuild is on (CONCURRENTLY has many steps), and can also show \nif parallel workers are launched for the index build.\n\n--\n\nSami \n\n\n\n\n\n",
"msg_date": "Thu, 26 Sep 2024 23:01:12 -0500",
"msg_from": "Sami Imseih <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query_id, pg_stat_activity, extended query protocol"
},
{
"msg_contents": "On Thu, Sep 26, 2024 at 11:01:12PM -0500, Sami Imseih wrote:\n>> I am not sure. The GUCs pretty much enforce this behavior and I doubt\n>> that these are going to break moving on. Of course they would, but we\n>> are usually careful enough about that as long as it is possible to\n>> grep for them. For example see the BRIN case in pageinspect.\n> \n> Yes, I see pageinspect does the same thing for the BRIN case.\n> That is probably OK for this case also.\n\nOkay, I've applied this part then to fix the query ID reporting\nfor these parallel workers. If people would like a backpatch, please\nlet me know.\n\nWhile thinking more about the assertion check in the executor over the\nweekend, I've found two things that are not right about it, as of:\n- It is OK to not set the query ID if we don't have a query string to\nmap to. This is something that came up to me because of the parallel\nVACUUM case, the query string given to the parallel workers is\noptional because we don't have a query string in the case of\nautovacuum. This is not an issue currently because autovacuum does\nnot support parallel jobs (see \"tab->at_params.nworkers = -1\" in\nautovacuum.c), but if we support parallel jobs in autovacuum at some\npoint the assertion would fail. BRIN and btree always expect a query\nstring, AFAIK.\n- The GUC track_activities. We don't really test it in any tests and\nit is enabled by default, so that's really easy to miss. I have been\nable to trigger an assertion failure with something like that:\nSET compute_query_id = on;\nSET track_activities = off;\nSELECT 1;\n\nThe first point is just some prevention for the future. The second\ncase is something we should fix and test. I am attaching a patch that\naddresses both. Note that the test case cannot use a transaction\nblock as query IDs are only reported for the top queries, and we can\ndo a scan of pg_stat_activity to see if the query ID is set. The\nassertion was getting more complicated, so I have hidden that behind a\nmacro in execMain.c. All that should complete this project.\n\nThoughts?\n--\nMichael",
"msg_date": "Mon, 30 Sep 2024 10:07:55 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query_id, pg_stat_activity, extended query protocol"
}
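The macro being described could look roughly like this (the name and exact set of conditions are assumptions, not the committed execMain.c code):

    /*
     * Sanity check: by the time the executor runs, a query ID should have been
     * advertised, but only when query ID computation is enabled, the backend is
     * reporting activity at all, and there was a query string to derive it from.
     */
    #define EXEC_CHECK_QUERY_ID \
        Assert(!IsQueryIdEnabled() || !pgstat_track_activities || \
               !debug_query_string || pgstat_get_my_query_id() != 0)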
] |
[
{
"msg_contents": "Hi,\n\nI noticed that these two function, introduced in d746021de18b, disagree\non what types they support. For construct_array_builtin, we support\nthese types:\n\n - CHAROID:\n - CSTRINGOID:\n - FLOAT4OID:\n - INT2OID:\n - INT4OID:\n - INT8OID:\n - NAMEOID:\n - OIDOID:\n - REGTYPEOID:\n - TEXTOID:\n - TIDOID:\n\nwhile deconstruct_array_builtin supports just a subset:\n\n - CHAROID:\n - CSTRINGOID:\n - FLOAT8OID:\n - INT2OID:\n - OIDOID:\n - TEXTOID:\n - TIDOID:\n\nI ran into this for INT4OID. Sure, I can just lookup the stuff and use\nthe regualr deconstruct_array, but the asymmetry seems a bit strange. Is\nthat intentional?\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 12 Jun 2023 22:38:59 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": true,
"msg_subject": "Shouldn't construct_array_builtin and deconstruct_array_builtin agree\n on types?"
},
{
"msg_contents": "On 12.06.23 22:38, Tomas Vondra wrote:\n> I noticed that these two function, introduced in d746021de18b, disagree\n> on what types they support.\n\n> I ran into this for INT4OID. Sure, I can just lookup the stuff and use\n> the regualr deconstruct_array, but the asymmetry seems a bit strange. Is\n> that intentional?\n\nThey only support the types that they were actually being used with. If \nyou need another type, feel free to add it.\n\n\n\n",
"msg_date": "Mon, 12 Jun 2023 23:06:18 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Shouldn't construct_array_builtin and deconstruct_array_builtin\n agree on types?"
},
{
"msg_contents": "On Mon, Jun 12, 2023 at 11:06:18PM +0200, Peter Eisentraut wrote:\n> They only support the types that they were actually being used with. If you\n> need another type, feel free to add it.\n\nFWIW, I agree with Tomas that this is an oversight that should be\nfixed in v16, saving from the need to have a copy of\ndeconstruct_array_builtin() in extensions. Same argument here:\ncouldn't it be possible that an extension does multiple array\nconstructs and deconstructs in a single code path? We have patterns\nlike that in core as well. For instance, see\nextension_config_remove().\n--\nMichael",
"msg_date": "Tue, 13 Jun 2023 08:23:06 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Shouldn't construct_array_builtin and deconstruct_array_builtin\n agree on types?"
},
{
"msg_contents": "Michael Paquier <[email protected]> writes:\n> On Mon, Jun 12, 2023 at 11:06:18PM +0200, Peter Eisentraut wrote:\n>> They only support the types that they were actually being used with. If you\n>> need another type, feel free to add it.\n\n> FWIW, I agree with Tomas that this is an oversight that should be\n> fixed in v16, saving from the need to have a copy of\n> deconstruct_array_builtin() in extensions.\n\nWe don't want to bloat these functions indefinitely, so I understand\nPeter's approach of only adding the cases actually being used.\nAt the same time, it's reasonable to take some thought for extensions\nthat might want slightly more functionality than the core code\nhappens to need at any given moment.\n\nThe idea of making both functions support the same set of types\ndoes seem like a reasonable compromise.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 12 Jun 2023 20:26:45 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Shouldn't construct_array_builtin and deconstruct_array_builtin\n agree on types?"
}
] |
[
{
"msg_contents": "Hi!\n\nTL;DR:\n\nCLANG is used to compile *.bc files during postgresql build. Is it OK to have a different compiler for the rest of the build? gcc, or even another version of clang?\n\n--\n\nSlightly longer version:\n\nI'm packaging postgresql for FreeBSD and as you probably know, in that OS clang is the default compiler.\n\nAt present, depending on OS version, it is clang version 13, 14 or even 16. That version of cc (clang) is always present.\n\nLLVM is an optional add-on, a package. The default version is 15, and it also installs the clang15 binary for the corresponding clang version 15.\n\nAs I understand, you're \"better off\" compiling the LLVM stuff in PostgreSQL with the same version clang compiler as the LLVM version you're using. Hence, with LLVM 15, set the environment variable CLANG=/path/to/clang15 when running configure. If the .bc files will get compiled by the base system clang compiler, this can lead to a ThinLTO link error, if the base system compiler is a newer version of llvm.\n\nThe question is if it is a bad idea to use the base compiler, say clang13, to build postgresql, but set CLANG=clang15 to match the LLVM version. Am I better off using clang15 for everything then?\n\nCheers,\nPalle\n\n\n\n",
"msg_date": "Tue, 13 Jun 2023 11:19:48 +0200",
"msg_from": "Palle Girgensohn <[email protected]>",
"msg_from_op": true,
"msg_subject": "OK to build LLVM (*.bc) with CLANG but rest of postgresql with CC\n (other compiler)?"
}
] |
[
{
"msg_contents": "Hi!\n\nTL;DR:\n\nCLANG is used to compile *.bc files during postgresql build. Is it OK to have a different compiler for the rest of the build? gcc, or even another version of clang?\n\n--\n\nSlightly longer version:\n\nI'm packaging postgresql for FreeBSD and as you probably know, in that OS clang is the default compiler.\n\nAt present, depending on OS version, it is clang version 13, 14 or even 16. That version of cc (clang) is always present.\n\nLLVM is an optional add-on, a package. The default version is 15, and it also installs the clang15 binary for the corresponding clang version 15.\n\nAs I understand, you're \"better off\" compiling the LLVM stuff in PostgreSQL with the same version clang compiler as the LLVM version you're using. Hence, with LLVM 15, set the environment variable CLANG=/path/to/clang15 when running configure. If the .bc files will get compiled by the base system clang compiler, this can lead to a ThinLTO link error, if the base system compiler is a newer version of llvm.\n\nThe question is if it is a bad idea to use the base compiler, say clang13, to build postgresql, but set CLANG=clang15 to match the LLVM version. Am I better off using clang15 for everything then?\n\nCheers,\nPalle\n\n\n\n",
"msg_date": "Tue, 13 Jun 2023 11:20:52 +0200",
"msg_from": "Palle Girgensohn <[email protected]>",
"msg_from_op": true,
"msg_subject": "OK to build LLVM (*.bc) with CLANG but rest of postgresql with CC\n (other compiler)?"
},
{
"msg_contents": "Hi,\n\nOn 2023-06-13 11:20:52 +0200, Palle Girgensohn wrote:\n> CLANG is used to compile *.bc files during postgresql build. Is it OK to\n> have a different compiler for the rest of the build? gcc, or even another\n> version of clang?\n\nYes.\n\n\n> LLVM is an optional add-on, a package. The default version is 15, and it also installs the clang15 binary for the corresponding clang version 15.\n> \n> As I understand, you're \"better off\" compiling the LLVM stuff in PostgreSQL with the same version clang compiler as the LLVM version you're using. Hence, with LLVM 15, set the environment variable CLANG=/path/to/clang15 when running configure. If the .bc files will get compiled by the base system clang compiler, this can lead to a ThinLTO link error, if the base system compiler is a newer versione of llvm.\n\nYea, the compatibility matrix for llvm bitcode is complicated.\n\n\n> The question is if it is a bad idea to use the base compiler, say clang13,\n> to build postgresql, but set CLANG=clang15 to match the LLVM version. Am I\n> better off using clang15 for everything then?\n\nThat should be entirely fine. If you already have the newer clang version, it\nmight also make sense to just use it from a simplicity perspective\n(e.g. compiler warnings being the same etc), but it's not required. It works\njust fine to compile the postgres binary with gcc and use clang for the\nbitcode after all.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 22 Jun 2023 12:02:10 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: OK to build LLVM (*.bc) with CLANG but rest of postgresql with\n CC (other compiler)?"
}
] |
[
{
"msg_contents": "Hi,\n When join two table on multiple columns equaljoin, rows estimation always use selectivity = multiplied by distinct multiple individual columns, possible to use extended n-distinct statistics on multiple columns?\n PG v14.8-1, attached please check test case with details.\n\nThanks,\n\nJames",
"msg_date": "Tue, 13 Jun 2023 09:21:25 +0000",
"msg_from": "\"James Pang (chaolpan)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "extended statistics n-distinct on multiple columns not used when join\n two tables"
},
{
"msg_contents": "Hi\n\nút 13. 6. 2023 v 11:21 odesílatel James Pang (chaolpan) <[email protected]>\nnapsal:\n\n> Hi,\n>\n> When join two table on multiple columns equaljoin, rows estimation\n> always use selectivity = multiplied by distinct multiple individual\n> columns, possible to use extended n-distinct statistics on multiple\n> columns?\n>\n> PG v14.8-1, attached please check test case with details.\n>\n\nThere is not any support for multi tables statistic\n\nRegards\n\nPavel\n\n\n>\n>\n> Thanks,\n>\n>\n>\n> James\n>\n>\n>\n\nHiút 13. 6. 2023 v 11:21 odesílatel James Pang (chaolpan) <[email protected]> napsal:\n\n\nHi,\n When join two table on multiple columns equaljoin, rows estimation always use selectivity = multiplied by distinct multiple individual columns, possible to use extended n-distinct statistics on multiple columns?\n\n PG v14.8-1, attached please check test case with details.There is not any support for multi tables statisticRegardsPavel \n \nThanks,\n \nJames",
"msg_date": "Tue, 13 Jun 2023 11:29:49 +0200",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: extended statistics n-distinct on multiple columns not used when\n join two tables"
},
{
"msg_contents": "(moving to -hackers)\n\nOn Tue, 13 Jun 2023 at 21:30, Pavel Stehule <[email protected]> wrote:\n> út 13. 6. 2023 v 11:21 odesílatel James Pang (chaolpan) <[email protected]> napsal:\n>> When join two table on multiple columns equaljoin, rows estimation always use selectivity = multiplied by distinct multiple individual columns, possible to use extended n-distinct statistics on multiple columns?\n>>\n>> PG v14.8-1, attached please check test case with details.\n>\n> There is not any support for multi tables statistic\n\nI think it's probably worth adjusting the docs to mention this. It\nseems like it might be something that could surprise someone.\n\nSomething like the attached, maybe?\n\nDavid",
"msg_date": "Tue, 13 Jun 2023 23:25:53 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: extended statistics n-distinct on multiple columns not used when\n join two tables"
},
{
"msg_contents": "út 13. 6. 2023 v 13:26 odesílatel David Rowley <[email protected]>\nnapsal:\n\n> (moving to -hackers)\n>\n> On Tue, 13 Jun 2023 at 21:30, Pavel Stehule <[email protected]>\n> wrote:\n> > út 13. 6. 2023 v 11:21 odesílatel James Pang (chaolpan) <\n> [email protected]> napsal:\n> >> When join two table on multiple columns equaljoin, rows estimation\n> always use selectivity = multiplied by distinct multiple individual\n> columns, possible to use extended n-distinct statistics on multiple\n> columns?\n> >>\n> >> PG v14.8-1, attached please check test case with details.\n> >\n> > There is not any support for multi tables statistic\n>\n> I think it's probably worth adjusting the docs to mention this. It\n> seems like it might be something that could surprise someone.\n>\n> Something like the attached, maybe?\n>\n\n+1\n\nPavel\n\n\n> David\n>\n\nút 13. 6. 2023 v 13:26 odesílatel David Rowley <[email protected]> napsal:(moving to -hackers)\n\nOn Tue, 13 Jun 2023 at 21:30, Pavel Stehule <[email protected]> wrote:\n> út 13. 6. 2023 v 11:21 odesílatel James Pang (chaolpan) <[email protected]> napsal:\n>> When join two table on multiple columns equaljoin, rows estimation always use selectivity = multiplied by distinct multiple individual columns, possible to use extended n-distinct statistics on multiple columns?\n>>\n>> PG v14.8-1, attached please check test case with details.\n>\n> There is not any support for multi tables statistic\n\nI think it's probably worth adjusting the docs to mention this. It\nseems like it might be something that could surprise someone.\n\nSomething like the attached, maybe?+1Pavel\n\nDavid",
"msg_date": "Tue, 13 Jun 2023 13:28:34 +0200",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: extended statistics n-distinct on multiple columns not used when\n join two tables"
},
{
"msg_contents": "Thanks for your information, yes, with multiple columns equal join and correlation , looks like extended statistics could help reduce “significantly rows estimation”. Hopefully it’s in future version.\r\n\r\nJames\r\n\r\nFrom: Pavel Stehule <[email protected]>\r\nSent: Tuesday, June 13, 2023 7:29 PM\r\nTo: David Rowley <[email protected]>\r\nCc: PostgreSQL Developers <[email protected]>; James Pang (chaolpan) <[email protected]>\r\nSubject: Re: extended statistics n-distinct on multiple columns not used when join two tables\r\n\r\n\r\n\r\nút 13. 6. 2023 v 13:26 odesílatel David Rowley <[email protected]<mailto:[email protected]>> napsal:\r\n(moving to -hackers)\r\n\r\nOn Tue, 13 Jun 2023 at 21:30, Pavel Stehule <[email protected]<mailto:[email protected]>> wrote:\r\n> út 13. 6. 2023 v 11:21 odesílatel James Pang (chaolpan) <[email protected]<mailto:[email protected]>> napsal:\r\n>> When join two table on multiple columns equaljoin, rows estimation always use selectivity = multiplied by distinct multiple individual columns, possible to use extended n-distinct statistics on multiple columns?\r\n>>\r\n>> PG v14.8-1, attached please check test case with details.\r\n>\r\n> There is not any support for multi tables statistic\r\n\r\nI think it's probably worth adjusting the docs to mention this. It\r\nseems like it might be something that could surprise someone.\r\n\r\nSomething like the attached, maybe?\r\n\r\n+1\r\n\r\nPavel\r\n\r\n\r\nDavid\r\n\n\n\n\n\n\n\n\n\nThanks for your information, yes, with multiple columns equal join and correlation , looks like extended statistics could help reduce “significantly rows estimation”. Hopefully it’s in future version.\n \nJames\n \n\nFrom: Pavel Stehule <[email protected]> \nSent: Tuesday, June 13, 2023 7:29 PM\nTo: David Rowley <[email protected]>\nCc: PostgreSQL Developers <[email protected]>; James Pang (chaolpan) <[email protected]>\nSubject: Re: extended statistics n-distinct on multiple columns not used when join two tables\n\n \n\n\n \n\n \n\n\nút 13. 6. 2023 v 13:26 odesílatel David Rowley <[email protected]> napsal:\n\n\n(moving to -hackers)\n\r\nOn Tue, 13 Jun 2023 at 21:30, Pavel Stehule <[email protected]> wrote:\r\n> út 13. 6. 2023 v 11:21 odesílatel James Pang (chaolpan) <[email protected]> napsal:\r\n>> When join two table on multiple columns equaljoin, rows estimation always use selectivity = multiplied by distinct multiple individual columns, possible to use extended n-distinct statistics on multiple columns?\r\n>>\r\n>> PG v14.8-1, attached please check test case with details.\r\n>\r\n> There is not any support for multi tables statistic\n\r\nI think it's probably worth adjusting the docs to mention this. It\r\nseems like it might be something that could surprise someone.\n\r\nSomething like the attached, maybe?\n\n\n \n\n\n+1\n\n\n \n\n\nPavel\n\n\n \n\n\n\r\nDavid",
"msg_date": "Tue, 13 Jun 2023 11:32:54 +0000",
"msg_from": "\"James Pang (chaolpan)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: extended statistics n-distinct on multiple columns not used when\n join two tables"
},
{
"msg_contents": "On Tue, 13 Jun 2023 at 23:29, Pavel Stehule <[email protected]> wrote:\n>> I think it's probably worth adjusting the docs to mention this. It\n>> seems like it might be something that could surprise someone.\n>>\n>> Something like the attached, maybe?\n>\n> +1\n\nOk, I pushed that patch. Thanks.\n\nDavid\n\n\n",
"msg_date": "Thu, 22 Jun 2023 12:53:10 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: extended statistics n-distinct on multiple columns not used when\n join two tables"
}
] |
[
{
"msg_contents": "It's annoyed me for some time that to_timestamp() doesn't implement\nthe TZ format code that to_char() has. I finally got motivated to\ndo something about that after the complaint at [1] that jsonpath's\ndatetime() method can't read typical JSON.stringify() output like\n\"2023-05-22T03:09:37.825Z\". We do already understand \"Z\" as a\ntime zone abbreviation for UTC; we just need to get formatting.c\nto support this.\n\nHence, the attached patch teaches to_timestamp() to read time\nzone abbreviations as well as HH and HH:MM numeric zone offsets\nwhen TZ is specified. (We need to accept HH and HH:MM to be\nsure that we can round-trip the output of to_char(), since its\nTZ output code will fall back to one of those if it does not\nknow any abbreviation for the current zone.)\n\nYou might reasonably say that we should make it read time zone names\nnot only abbreviations. I tried to make that work, and realized that\nit'd create a complete mess because tzload() is so lax about what it\nwill interpret as a POSIX-style timezone spec. I included an example\nin the test cases below: I think that\n\nto_timestamp('2011-12-18 11:38ESTFOO24', 'YYYY-MM-DD HH12:MITZFOOSS')\n\nshould work and read just \"EST\" as the TZ value, allowing the \"24\"\nto be read as the SS value. But tzload() will happily eat all of\n\"ESTFOO24\" as a POSIX zone spec.\n\nWe could conceivably refactor tzload() enough to match only tzdb zone\nnames in this context. But I'm very hesitant to do that, for a few\nreasons:\n\n* it would make localtime.c diverge significantly from the upstream\nIANA source code;\n\n* we only need to support zone abbreviations to ensure we can\nround-trip the output of to_char();\n\n* it's not clear to me that average users would understand why\nto_timestamp() accepts some but not all zone names that are accepted\nby the TimeZone GUC and timestamptz input. If we document it as\ntaking only timezone abbreviations, that does correspond to a\nconcept that's in the manual already.\n\nSo I think that the attached represents a reasonable and useful\ncompromise. I'll park this in the July commitfest.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/flat/014A028B-5CE6-4FDF-AC24-426CA6FC9CEE%40mohiohio.com",
"msg_date": "Tue, 13 Jun 2023 12:20:42 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Support TZ format code in to_timestamp()"
},
{
"msg_contents": "On Tue, Jun 13, 2023 at 12:20:42PM -0400, Tom Lane wrote:\n> It's annoyed me for some time that to_timestamp() doesn't implement\n> the TZ format code that to_char() has. I finally got motivated to\n> do something about that after the complaint at [1] that jsonpath's\n> datetime() method can't read typical JSON.stringify() output like\n> \"2023-05-22T03:09:37.825Z\". We do already understand \"Z\" as a\n> time zone abbreviation for UTC; we just need to get formatting.c\n> to support this.\n\nI have to admit I tend to prefer actual time zone names like\n\"America/New_York\" over acronyms or offset values. However, I can see\nthe dump/restore problem with such names.\n\nParenthetically, I often use airport codes that map to time zones in my\nown calendar. I would argue that on a global scale airport codes are\nactually more useful than abbreviations like EST, assuming you don't\nneed to designate whether daylight saving time was active, e.g. EST vs.\nEDT.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n",
"msg_date": "Wed, 21 Jun 2023 14:07:34 -0400",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support TZ format code in to_timestamp()"
},
{
"msg_contents": "On 6/21/23 20:07, Bruce Momjian wrote:\n> On Tue, Jun 13, 2023 at 12:20:42PM -0400, Tom Lane wrote:\n>> It's annoyed me for some time that to_timestamp() doesn't implement\n>> the TZ format code that to_char() has. I finally got motivated to\n>> do something about that after the complaint at [1] that jsonpath's\n>> datetime() method can't read typical JSON.stringify() output like\n>> \"2023-05-22T03:09:37.825Z\". We do already understand \"Z\" as a\n>> time zone abbreviation for UTC; we just need to get formatting.c\n>> to support this.\n> \n> I have to admit I tend to prefer actual time zone names like\n> \"America/New_York\" over acronyms or offset values. However, I can see\n> the dump/restore problem with such names.\n\nI think the abbreviations are worse than useless -- dangerously \nmisleading even. I was converting a timestamp I had pulled from the \ninternet the other day in IST (India Standard Time) using Postres to \ntest some new code I was working on. I got a rather surprising result so \nchanged it to Asia/Kolkata and got what I expected.\n\nTurns out IST is *also* Israel Standard Time and Irish Standard Time. I \nthink Postres gave me the conversion in Irish time. At any rate, it was \nnot offset by 30 minutes which was the dead giveaway.\n\nOffsets are fine when you just need an absolute date to feed into \nsomething like recovery and it doesn't much matter what timezone you \nwere in.\n\nZ and UTC also seem fine since they are unambiguous as far as I know.\n\nRegards,\n-David\n\n\n",
"msg_date": "Wed, 21 Jun 2023 20:52:44 +0200",
"msg_from": "David Steele <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support TZ format code in to_timestamp()"
},
{
"msg_contents": "2024-01 Commitfest.\n\nHi, this patch was marked in CF as \"Needs Review\" [1], but there has\nbeen no activity on this thread for 7+ months.\n\nIf nothing more is planned for this thread then it will be closed\n(\"Returned with feedback\") at the end of this CF.\n\n======\n[1] https://commitfest.postgresql.org/46/4362/\n\nKind Regards,\nPeter Smith.\n\n\n",
"msg_date": "Mon, 22 Jan 2024 12:57:02 +1100",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support TZ format code in to_timestamp()"
},
{
"msg_contents": "Peter Smith <[email protected]> writes:\n> Hi, this patch was marked in CF as \"Needs Review\" [1], but there has\n> been no activity on this thread for 7+ months.\n> If nothing more is planned for this thread then it will be closed\n> (\"Returned with feedback\") at the end of this CF.\n\nI still think it would be a good idea, but I can't deny the lack\nof other interest in it. Unless someone steps up to review,\nlet's close it.\n\n(FTR, I don't agree with David's objections to the entire concept\nof zone abbreviations. We're not going to remove support for them\neverywhere else, so why shouldn't to_timestamp() handle them?)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 21 Jan 2024 21:10:17 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Support TZ format code in to_timestamp()"
},
{
"msg_contents": "Hi,\n\n> > Hi, this patch was marked in CF as \"Needs Review\" [1], but there has\n> > been no activity on this thread for 7+ months.\n> > If nothing more is planned for this thread then it will be closed\n> > (\"Returned with feedback\") at the end of this CF.\n>\n> I still think it would be a good idea, but I can't deny the lack\n> of other interest in it. Unless someone steps up to review,\n> let's close it.\n\nI agree that it would be a good idea, and again I would like to\ncondemn the approach \"since no one reviews it we are going to reject\nit\". A friendly reminder like \"hey, this patch was waiting long\nenough, maybe someone could take a look\" would be more appropriate\nIMO. I remember during previous commitfests some CF managers created a\nlist of patches that could use more attention. That was useful.\n\nI will review the patch, but probably only tomorrow.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Mon, 22 Jan 2024 18:25:39 +0300",
"msg_from": "Aleksander Alekseev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support TZ format code in to_timestamp()"
},
{
"msg_contents": "> On 22 Jan 2024, at 03:10, Tom Lane <[email protected]> wrote:\n\n> I still think it would be a good idea, but I can't deny the lack\n> of other interest in it. Unless someone steps up to review,\n> let's close it.\n\nSince I had this on my (ever-growing) TODO I re-prioritized today and took a\nlook at it since I think it's something we should support.\n\nNothing really sticks out and I was unable to poke any holes so I don't have\ntoo much more to offer than a LGTM.\n\n+ while (len > 0)\n+ {\n+ const datetkn *tp = datebsearch(lowtoken, zoneabbrevtbl->abbrevs,\n+ zoneabbrevtbl->numabbrevs);\n\nMy immediate reaction was that we should stop at prefix lengths of two since I\ncould only think of abbreviations of two or more. Googling and reading found\nthat there are indeed one-letter timezones (Alpha, Bravo etc..). Not sure if\nit's worth mentioning that in the comment to help other readers who aren't neck\ndeep in timezones?\n\n+ /* FALL THRU */\n\nTiny nitpick, it looks a bit curious that we spell it FALL THRU here and \"fall\nthrough\" a few cases up in the same switch. While we are quite inconsistent\nacross the tree, consistency within a file is preferrable (regardless of\nwhich).\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Mon, 22 Jan 2024 16:43:03 +0100",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support TZ format code in to_timestamp()"
},
{
"msg_contents": "Daniel Gustafsson <[email protected]> writes:\n> On 22 Jan 2024, at 03:10, Tom Lane <[email protected]> wrote:\n> + while (len > 0)\n> + {\n> + const datetkn *tp = datebsearch(lowtoken, zoneabbrevtbl->abbrevs,\n> + zoneabbrevtbl->numabbrevs);\n\n> My immediate reaction was that we should stop at prefix lengths of two since I\n> could only think of abbreviations of two or more. Googling and reading found\n> that there are indeed one-letter timezones (Alpha, Bravo etc..). Not sure if\n> it's worth mentioning that in the comment to help other readers who aren't neck\n> deep in timezones?\n\nThe one I usually think of is \"Z\" for UTC; I wasn't actually aware\nthat there were any other single-letter abbrevs. But in any case\nI don't see a reason for this code to be making such assumptions.\n\n> + /* FALL THRU */\n\n> Tiny nitpick, it looks a bit curious that we spell it FALL THRU here and \"fall\n> through\" a few cases up in the same switch. While we are quite inconsistent\n> across the tree, consistency within a file is preferrable (regardless of\n> which).\n\nFair. I tend to shorten it, but I failed to notice that there was\nnearby precedent for the other way.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 22 Jan 2024 11:25:14 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Support TZ format code in to_timestamp()"
},
{
"msg_contents": "Hi,\n\n> Since I had this on my (ever-growing) TODO I re-prioritized today and took a\n> look at it since I think it's something we should support.\n>\n> Nothing really sticks out and I was unable to poke any holes so I don't have\n> too much more to offer than a LGTM.\n>\n> + while (len > 0)\n> + {\n> + const datetkn *tp = datebsearch(lowtoken, zoneabbrevtbl->abbrevs,\n> + zoneabbrevtbl->numabbrevs);\n>\n> My immediate reaction was that we should stop at prefix lengths of two since I\n> could only think of abbreviations of two or more. Googling and reading found\n> that there are indeed one-letter timezones (Alpha, Bravo etc..). Not sure if\n> it's worth mentioning that in the comment to help other readers who aren't neck\n> deep in timezones?\n>\n> + /* FALL THRU */\n>\n> Tiny nitpick, it looks a bit curious that we spell it FALL THRU here and \"fall\n> through\" a few cases up in the same switch. While we are quite inconsistent\n> across the tree, consistency within a file is preferrable (regardless of\n> which).\n\nI reviewed the patch and tested it on MacOS and generally concur with\nstated above. The only nitpick I have is the apparent lack of negative\ntests for to_timestamp(), e.g. when the string doesn't match the\nspecified format.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Tue, 23 Jan 2024 16:26:48 +0300",
"msg_from": "Aleksander Alekseev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support TZ format code in to_timestamp()"
},
{
"msg_contents": "Aleksander Alekseev <[email protected]> writes:\n> I reviewed the patch and tested it on MacOS and generally concur with\n> stated above. The only nitpick I have is the apparent lack of negative\n> tests for to_timestamp(), e.g. when the string doesn't match the\n> specified format.\n\nThat's an excellent suggestion indeed, because when I tried\n\nSELECT to_timestamp('2011-12-18 11:38 JUNK', 'YYYY-MM-DD HH12:MI TZ'); -- error\n\nI got\n\nERROR: invalid value \"JU\" for \"TZ\"\nDETAIL: Value must be an integer.\n\nwhich seems pretty off-point. In the attached I made it give an\nerror message about a bad zone abbreviation if the input starts\nwith a letter, but perhaps the dividing line between \"probably\nmeant as a zone name\" and \"probably meant as numeric\" should be\ndrawn differently?\n\nAnyway, v2-0001 below is the previous patch rebased up to current\n(only line numbers change), and then v2-0002 responds to your\nand Daniel's review comments.\n\n\t\t\tregards, tom lane",
"msg_date": "Tue, 23 Jan 2024 17:33:34 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Support TZ format code in to_timestamp()"
},
{
"msg_contents": "> On 23 Jan 2024, at 23:33, Tom Lane <[email protected]> wrote:\n\n> Anyway, v2-0001 below is the previous patch rebased up to current\n> (only line numbers change), and then v2-0002 responds to your\n> and Daniel's review comments.\n\nLGTM.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Wed, 24 Jan 2024 00:33:36 +0100",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support TZ format code in to_timestamp()"
},
{
"msg_contents": "Hi,\n\n> > Anyway, v2-0001 below is the previous patch rebased up to current\n> > (only line numbers change), and then v2-0002 responds to your\n> > and Daniel's review comments.\n>\n> LGTM.\n\n```\n+SELECT to_timestamp('2011-12-18 11:38 JUNK', 'YYYY-MM-DD HH12:MI\nTZ'); -- error\n+ERROR: invalid value \"JUNK\" for \"TZ\"\n+DETAIL: Time zone abbreviation is not recognized.\n+SELECT to_timestamp('2011-12-18 11:38 ...', 'YYYY-MM-DD HH12:MI TZ'); -- error\n+ERROR: invalid value \"..\" for \"TZ\"\n```\n\nShouldn't the second error display the full value \"...\" (three dots)\nsimilarly to the previous one? Also I think we need at least one\nnegative test for OF.\n\nOther than that v2 looks OK.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Wed, 24 Jan 2024 15:49:51 +0300",
"msg_from": "Aleksander Alekseev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support TZ format code in to_timestamp()"
},
{
"msg_contents": "Aleksander Alekseev <[email protected]> writes:\n> +SELECT to_timestamp('2011-12-18 11:38 JUNK', 'YYYY-MM-DD HH12:MI TZ'); -- error\n> +ERROR: invalid value \"JUNK\" for \"TZ\"\n> +DETAIL: Time zone abbreviation is not recognized.\n> +SELECT to_timestamp('2011-12-18 11:38 ...', 'YYYY-MM-DD HH12:MI TZ'); -- error\n> +ERROR: invalid value \"..\" for \"TZ\"\n\n> Shouldn't the second error display the full value \"...\" (three dots)\n> similarly to the previous one?\n\nThat's coming from the pre-existing OF code, which is looking for\na integer of at most two digits. I'm not especially inclined to\nmess with that, and even if I were I'd think it should be a separate\npatch.\n\n> Also I think we need at least one\n> negative test for OF.\n\nOK.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 24 Jan 2024 11:13:13 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Support TZ format code in to_timestamp()"
}
] |
[
{
"msg_contents": "Hi,\n\nI just helped somebody debug a postgres performance problem that turned out to\nbe not actually be postgres' fault. It turned out to be because postgres'\nstdout/stderr were piped to a program, and that program was slow. Whenever the\npipe buffer filled up, postgres stopped making progress.\n\nThat's not postgres' fault. But we make it too hard to debug such an\nissue. There's no way to figure this out from within postgres, one pretty much\nneeds to look at stack traces.\n\nI think we should add a few wait events for log emission. I think it'd be good\nto have one wait event for each log destination.\n\nThat's not perfect - we'd e.g. still not be able to debug where the logger\nprocess is stuck, due it not being in pg_stat_activity. But other processes\nreporting the wait event for writing to the logger process would be a pretty\ngood hint.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 13 Jun 2023 09:58:54 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "Add wait event for log emission?"
},
{
"msg_contents": "On Tue, Jun 13, 2023 at 09:58:54AM -0700, Andres Freund wrote:\n> I think we should add a few wait events for log emission. I think it'd be good\n> to have one wait event for each log destination.\n\n+1\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 13 Jun 2023 11:10:09 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add wait event for log emission?"
},
{
"msg_contents": "On Tue, Jun 13, 2023 at 6:59 PM Andres Freund <[email protected]> wrote:\n>\n> Hi,\n>\n> I just helped somebody debug a postgres performance problem that turned out to\n> be not actually be postgres' fault. It turned out to be because postgres'\n> stdout/stderr were piped to a program, and that program was slow. Whenever the\n> pipe buffer filled up, postgres stopped making progress.\n>\n> That's not postgres' fault. But we make it too hard to debug such an\n> issue. There's no way to figure this out from within postgres, one pretty much\n> needs to look at stack traces.\n>\n> I think we should add a few wait events for log emission. I think it'd be good\n> to have one wait event for each log destination.\n>\n> That's not perfect - we'd e.g. still not be able to debug where the logger\n> process is stuck, due it not being in pg_stat_activity. But other processes\n> reporting the wait event for writing to the logger process would be a pretty\n> good hint.\n\n\n+1.\n\nWould it make sense to at the same time create a separate one for\nsyslog, or just use the same?\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n",
"msg_date": "Tue, 13 Jun 2023 20:55:19 +0200",
"msg_from": "Magnus Hagander <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add wait event for log emission?"
},
{
"msg_contents": "Hi,\n\nOn 2023-06-13 20:55:19 +0200, Magnus Hagander wrote:\n> On Tue, Jun 13, 2023 at 6:59 PM Andres Freund <[email protected]> wrote:\n> >\n> > Hi,\n> >\n> > I just helped somebody debug a postgres performance problem that turned out to\n> > be not actually be postgres' fault. It turned out to be because postgres'\n> > stdout/stderr were piped to a program, and that program was slow. Whenever the\n> > pipe buffer filled up, postgres stopped making progress.\n> >\n> > That's not postgres' fault. But we make it too hard to debug such an\n> > issue. There's no way to figure this out from within postgres, one pretty much\n> > needs to look at stack traces.\n> >\n> > I think we should add a few wait events for log emission. I think it'd be good\n> > to have one wait event for each log destination.\n> >\n> > That's not perfect - we'd e.g. still not be able to debug where the logger\n> > process is stuck, due it not being in pg_stat_activity. But other processes\n> > reporting the wait event for writing to the logger process would be a pretty\n> > good hint.\n> \n> \n> +1.\n> \n> Would it make sense to at the same time create a separate one for\n> syslog, or just use the same?\n\nI think it should be a separate one for each of the log paths in\nsend_message_to_server_log(). I don't think we gain anything by being stingy\nhere - and it's not like we add one every other day.\n\nI do wonder if it'd be worth setting up a wait event around emit_log_hook -\nit's somewhat of a misuse of wait events, but might be useful nonetheless?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 13 Jun 2023 17:18:58 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add wait event for log emission?"
},
{
"msg_contents": "At Tue, 13 Jun 2023 17:18:58 -0700, Andres Freund <[email protected]> wrote in \r\n> Hi,\r\n> \r\n> On 2023-06-13 20:55:19 +0200, Magnus Hagander wrote:\r\n> > On Tue, Jun 13, 2023 at 6:59 PM Andres Freund <[email protected]> wrote:\r\n> > > I think we should add a few wait events for log emission. I think it'd be good\r\n> > > to have one wait event for each log destination.\r\n> > >\r\n> > > That's not perfect - we'd e.g. still not be able to debug where the logger\r\n> > > process is stuck, due it not being in pg_stat_activity. But other processes\r\n> > > reporting the wait event for writing to the logger process would be a pretty\r\n> > > good hint.\r\n> > \r\n> > \r\n> > +1.\r\n> > \r\n> > Would it make sense to at the same time create a separate one for\r\n> > syslog, or just use the same?\r\n> \r\n> I think it should be a separate one for each of the log paths in\r\n> send_message_to_server_log(). I don't think we gain anything by being stingy\r\n> here - and it's not like we add one every other day.\r\n> \r\n> I do wonder if it'd be worth setting up a wait event around emit_log_hook -\r\n> it's somewhat of a misuse of wait events, but might be useful nonetheless?\r\n\r\nWe are already doing something similar for archive_command. Given that\r\nthe execution time of this hook is unpredictable, it seems resonable\r\nto me to do that there. Essentially, we are \"waiting\" for the\r\nhook-function to return.\r\n\r\nregards.\r\n\r\n-- \r\nKyotaro Horiguchi\r\nNTT Open Source Software Center\r\n",
"msg_date": "Wed, 14 Jun 2023 11:18:06 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add wait event for log emission?"
}
] |
[
{
"msg_contents": "Hi,\n\nI've attached the patch for $subject. In the following comment,\n\n /*\n * If available and useful, use posix_fallocate() (via FileAllocate())\n * to extend the relation. That's often more efficient than using\n * write(), as it commonly won't cause the kernel to allocate page\n * cache space for the extended pages.\n *\n * However, we don't use FileAllocate() for small extensions, as it\n * defeats delayed allocation on some filesystems. Not clear where\n * that decision should be made though? For now just use a cutoff of\n * 8, anything between 4 and 8 worked OK in some local testing.\n\ns/FileAllocate()/FileFallocate()/\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Wed, 14 Jun 2023 06:33:18 +0900",
"msg_from": "Masahiko Sawada <[email protected]>",
"msg_from_op": true,
"msg_subject": "Fix a typo in md.c"
},
{
"msg_contents": "On Wed, Jun 14, 2023 at 06:33:18AM +0900, Masahiko Sawada wrote:\n> I've attached the patch for $subject. In the following comment,\n\nLGTM.\n--\nMichael",
"msg_date": "Wed, 14 Jun 2023 09:19:39 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix a typo in md.c"
},
{
"msg_contents": "On Wed, Jun 14, 2023 at 9:19 AM Michael Paquier <[email protected]> wrote:\n>\n> On Wed, Jun 14, 2023 at 06:33:18AM +0900, Masahiko Sawada wrote:\n> > I've attached the patch for $subject. In the following comment,\n>\n> LGTM.\n\nThanks, pushed.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 14 Jun 2023 13:31:33 +0900",
"msg_from": "Masahiko Sawada <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Fix a typo in md.c"
}
] |
[
{
"msg_contents": "\nHi, hackers\n\nWe use (GUC_UNIT_MEMORY | GUC_UNIT_TIME) instead of GUC_UNIT even though we\nalready define it in guc.h.\n\nMaybe using GUC_UNIT is better? Here is a patch to fix it.\n\ndiff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c\nindex a9033b7a54..5308896c87 100644\n--- a/src/backend/utils/misc/guc.c\n+++ b/src/backend/utils/misc/guc.c\n@@ -2766,7 +2766,7 @@ convert_real_from_base_unit(double base_value, int base_unit,\n const char *\n get_config_unit_name(int flags)\n {\n- switch (flags & (GUC_UNIT_MEMORY | GUC_UNIT_TIME))\n+ switch (flags & GUC_UNIT)\n {\n case 0:\n return NULL; /* GUC has no units */\n@@ -2802,7 +2802,7 @@ get_config_unit_name(int flags)\n return \"min\";\n default:\n elog(ERROR, \"unrecognized GUC units value: %d\",\n- flags & (GUC_UNIT_MEMORY | GUC_UNIT_TIME));\n+ flags & GUC_UNIT);\n return NULL;\n }\n }\n \n-- \nRegrads,\nJapin Li.\n\n\n\n",
"msg_date": "Wed, 14 Jun 2023 11:33:06 +0800",
"msg_from": "Japin Li <[email protected]>",
"msg_from_op": true,
"msg_subject": "Replace (GUC_UNIT_MEMORY | GUC_UNIT_TIME) with GUC_UNIT in guc.c"
},
{
"msg_contents": "On Wed, Jun 14, 2023 at 12:33 PM Japin Li <[email protected]> wrote:\n>\n>\n> Hi, hackers\n>\n> We use (GUC_UNIT_MEMORY | GUC_UNIT_TIME) instead of GUC_UNIT even though we\n> already define it in guc.h.\n>\n> Maybe using GUC_UNIT is better? Here is a patch to fix it.\n\nYeah, it seems more consistent with other places in guc.c. I'll push\nit, barring any objections.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 14 Jun 2023 14:06:26 +0900",
"msg_from": "Masahiko Sawada <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Replace (GUC_UNIT_MEMORY | GUC_UNIT_TIME) with GUC_UNIT in guc.c"
},
{
"msg_contents": "On Wed, Jun 14, 2023 at 1:07 PM Masahiko Sawada <[email protected]>\nwrote:\n\n> On Wed, Jun 14, 2023 at 12:33 PM Japin Li <[email protected]> wrote:\n> > Hi, hackers\n> >\n> > We use (GUC_UNIT_MEMORY | GUC_UNIT_TIME) instead of GUC_UNIT even though\n> we\n> > already define it in guc.h.\n> >\n> > Maybe using GUC_UNIT is better? Here is a patch to fix it.\n>\n> Yeah, it seems more consistent with other places in guc.c. I'll push\n> it, barring any objections.\n\n\n+1. BTW, it seems that GUC_UNIT_TIME is not used anywhere except in\nGUC_UNIT. I was wondering if we can retire it, but maybe we'd better\nnot. It still indicates that we need to use time units table.\n\nThanks\nRichard\n\nOn Wed, Jun 14, 2023 at 1:07 PM Masahiko Sawada <[email protected]> wrote:On Wed, Jun 14, 2023 at 12:33 PM Japin Li <[email protected]> wrote:\n> Hi, hackers\n>\n> We use (GUC_UNIT_MEMORY | GUC_UNIT_TIME) instead of GUC_UNIT even though we\n> already define it in guc.h.\n>\n> Maybe using GUC_UNIT is better? Here is a patch to fix it.\n\nYeah, it seems more consistent with other places in guc.c. I'll push\nit, barring any objections.+1. BTW, it seems that GUC_UNIT_TIME is not used anywhere except inGUC_UNIT. I was wondering if we can retire it, but maybe we'd betternot. It still indicates that we need to use time units table.ThanksRichard",
"msg_date": "Wed, 14 Jun 2023 15:38:10 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Replace (GUC_UNIT_MEMORY | GUC_UNIT_TIME) with GUC_UNIT in guc.c"
},
{
"msg_contents": "On Wed, Jun 14, 2023 at 03:38:10PM +0800, Richard Guo wrote:\n> +1. BTW, it seems that GUC_UNIT_TIME is not used anywhere except in\n> GUC_UNIT. I was wondering if we can retire it, but maybe we'd better\n> not. It still indicates that we need to use time units table.\n\nSome out-of-core code declaring custom GUCs could rely on that, so\nit is better not to remove it.\n--\nMichael",
"msg_date": "Wed, 14 Jun 2023 16:47:45 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Replace (GUC_UNIT_MEMORY | GUC_UNIT_TIME) with GUC_UNIT in guc.c"
},
{
"msg_contents": "On Wed, Jun 14, 2023 at 3:47 PM Michael Paquier <[email protected]> wrote:\n\n> On Wed, Jun 14, 2023 at 03:38:10PM +0800, Richard Guo wrote:\n> > +1. BTW, it seems that GUC_UNIT_TIME is not used anywhere except in\n> > GUC_UNIT. I was wondering if we can retire it, but maybe we'd better\n> > not. It still indicates that we need to use time units table.\n>\n> Some out-of-core code declaring custom GUCs could rely on that, so\n> it is better not to remove it.\n\n\nI see. Thanks for pointing that out.\n\nThanks\nRichard\n\nOn Wed, Jun 14, 2023 at 3:47 PM Michael Paquier <[email protected]> wrote:On Wed, Jun 14, 2023 at 03:38:10PM +0800, Richard Guo wrote:\n> +1. BTW, it seems that GUC_UNIT_TIME is not used anywhere except in\n> GUC_UNIT. I was wondering if we can retire it, but maybe we'd better\n> not. It still indicates that we need to use time units table.\n\nSome out-of-core code declaring custom GUCs could rely on that, so\nit is better not to remove it.I see. Thanks for pointing that out.ThanksRichard",
"msg_date": "Wed, 14 Jun 2023 17:52:53 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Replace (GUC_UNIT_MEMORY | GUC_UNIT_TIME) with GUC_UNIT in guc.c"
},
{
"msg_contents": "\nOn Wed, 14 Jun 2023 at 17:52, Richard Guo <[email protected]> wrote:\n> On Wed, Jun 14, 2023 at 3:47 PM Michael Paquier <[email protected]> wrote:\n>\n>> On Wed, Jun 14, 2023 at 03:38:10PM +0800, Richard Guo wrote:\n>> > +1. BTW, it seems that GUC_UNIT_TIME is not used anywhere except in\n>> > GUC_UNIT. I was wondering if we can retire it, but maybe we'd better\n>> > not. It still indicates that we need to use time units table.\n>>\n>> Some out-of-core code declaring custom GUCs could rely on that, so\n>> it is better not to remove it.\n>\n>\n> I see. Thanks for pointing that out.\n>\n\nThanks for all of your reviews. Agreed with Michael do not touch GUC_UNIT_TIME.\n\n-- \nRegrads,\nJapin Li.\n\n\n\n",
"msg_date": "Wed, 14 Jun 2023 21:42:37 +0800",
"msg_from": "Japin Li <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Replace (GUC_UNIT_MEMORY | GUC_UNIT_TIME) with GUC_UNIT in guc.c"
},
{
"msg_contents": "On Wed, Jun 14, 2023 at 4:47 PM Michael Paquier <[email protected]> wrote:\n>\n> On Wed, Jun 14, 2023 at 03:38:10PM +0800, Richard Guo wrote:\n> > +1. BTW, it seems that GUC_UNIT_TIME is not used anywhere except in\n> > GUC_UNIT. I was wondering if we can retire it, but maybe we'd better\n> > not. It still indicates that we need to use time units table.\n>\n> Some out-of-core code declaring custom GUCs could rely on that, so\n> it is better not to remove it.\n\n+1 not to remove it.\n\nI've attached the patch to fix (GUC_UNIT_MEMORY | GUC_UNIT_TIME)\nthing, and am going to push it later today to only master branch.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Thu, 15 Jun 2023 11:02:07 +0900",
"msg_from": "Masahiko Sawada <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Replace (GUC_UNIT_MEMORY | GUC_UNIT_TIME) with GUC_UNIT in guc.c"
},
{
"msg_contents": "On Thu, Jun 15, 2023 at 11:02 AM Masahiko Sawada <[email protected]> wrote:\n>\n> On Wed, Jun 14, 2023 at 4:47 PM Michael Paquier <[email protected]> wrote:\n> >\n> > On Wed, Jun 14, 2023 at 03:38:10PM +0800, Richard Guo wrote:\n> > > +1. BTW, it seems that GUC_UNIT_TIME is not used anywhere except in\n> > > GUC_UNIT. I was wondering if we can retire it, but maybe we'd better\n> > > not. It still indicates that we need to use time units table.\n> >\n> > Some out-of-core code declaring custom GUCs could rely on that, so\n> > it is better not to remove it.\n>\n> +1 not to remove it.\n>\n> I've attached the patch to fix (GUC_UNIT_MEMORY | GUC_UNIT_TIME)\n> thing, and am going to push it later today to only master branch.\n\nPushed.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 15 Jun 2023 17:06:56 +0900",
"msg_from": "Masahiko Sawada <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Replace (GUC_UNIT_MEMORY | GUC_UNIT_TIME) with GUC_UNIT in guc.c"
}
] |
[
{
"msg_contents": "Like in cost_seqscan(), I'd expect the subpath cost to be divided among\nparallel workers. The patch below shows what I mean. Am I right?\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com",
"msg_date": "Wed, 14 Jun 2023 14:36:54 +0200",
"msg_from": "Antonin Houska <[email protected]>",
"msg_from_op": true,
"msg_subject": "Shouldn't cost_append() also scale the partial path's cost?"
},
{
"msg_contents": "At Wed, 14 Jun 2023 14:36:54 +0200, Antonin Houska <[email protected]> wrote in \n> Like in cost_seqscan(), I'd expect the subpath cost to be divided among\n> parallel workers. The patch below shows what I mean. Am I right?\n\nIf I've got it right, the total cost of a partial seqscan path\ncomprises a distributed CPU run cost and an undistributed disk run\ncost. If we want to adjust for a different worker number, we should\nonly tweak the CPU component of the total cost. By default, if one\npage contains 100 rows (I guess a moderate ratio), these two costs are\nbalanced at a 1:1 ratio and the CPU run cost and disk run cost in a\npartial seqscan path is 1:n (n = #workers). If we adjust the run cost\nin the porposed manner, it adjusts the CPU run cost correctly but in\nturn the disk run cost gets wrong (by a larger error in this premise).\n\nIn short, it will get wrong in a different way.\n\nActually it looks strange that rows are adjusted but cost is not, so\nwe might want to add an explanation in this aspect.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 15 Jun 2023 11:07:05 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Shouldn't cost_append() also scale the partial path's cost?"
}
] |
[
{
"msg_contents": "A few years ago, I sketched out a design for incremental backup, but\nno patch for incremental backup ever got committed. Instead, the whole\nthing evolved into a project to add backup manifests, which are nice,\nbut not as nice as incremental backup would be. So I've decided to\nhave another go at incremental backup itself. Attached are some WIP\npatches. Let me summarize the design and some open questions and\nproblems with it that I've discovered. I welcome problem reports and\ntest results from others, as well.\n\nThe basic design of this patch set is pretty simple, and there are\nthree main parts. First, there's a new background process called the\nwalsummarizer which runs all the time. It reads the WAL and generates\nWAL summary files. WAL summary files are extremely small compared to\nthe original WAL and contain only the minimal amount of information\nthat we need in order to determine which parts of the database need to\nbe backed up. They tell us about files getting created, destroyed, or\ntruncated, and they tell us about modified blocks. Naturally, we don't\nfind out about blocks that were modified without any write-ahead log\nrecord, e.g. hint bit updates, but those are of necessity not critical\nfor correctness, so it's OK. Second, pg_basebackup has a mode where it\ncan take an incremental backup. You must supply a backup manifest from\na previous full backup. We read the WAL summary files that have been\ngenerated between the start of the previous backup and the start of\nthis one, and use that to figure out which relation files have changed\nand how much. Non-relation files are sent normally, just as they would\nbe in a full backup. Relation files can either be sent in full or be\nreplaced by an incremental file, which contains a subset of the blocks\nin the file plus a bit of information to handle truncations properly.\nThird, there's now a pg_combinebackup utility which takes a full\nbackup and one or more incremental backups, performs a bunch of sanity\nchecks, and if everything works out, writes out a new, synthetic full\nbackup, aka a data directory.\n\nSimple usage example:\n\npg_basebackup -cfast -Dx\npg_basebackup -cfast -Dy --incremental x/backup_manifest\npg_combinebackup x y -o z\n\nThe part of all this with which I'm least happy is the WAL\nsummarization engine. Actually, the core process of summarizing the\nWAL seems totally fine, and the file format is very compact thanks to\nsome nice ideas from my colleague Dilip Kumar. Someone may of course\nwish to argue that the information should be represented in some other\nfile format instead, and that can be done if it's really needed, but I\ndon't see a lot of value in tinkering with it, either. Where I do\nthink there's a problem is deciding how much WAL ought to be\nsummarized in one WAL summary file. Summary files cover a certain\nrange of WAL records - they have names like\n$TLI${START_LSN}${END_LSN}.summary. It's not too hard to figure out\nwhere a file should start - generally, it's wherever the previous file\nended, possibly on a new timeline, but figuring out where the summary\nshould end is trickier. You always have the option to either read\nanother WAL record and fold it into the current summary, or end the\ncurrent summary where you are, write out the file, and begin a new\none. So how do you decide what to do?\n\nI originally had the idea of summarizing a certain number of MB of WAL\nper WAL summary file, and so I added a GUC wal_summarize_mb for that\npurpose. 
But then I realized that actually, you really want WAL\nsummary file boundaries to line up with possible redo points, because\nwhen you do an incremental backup, you need a summary that stretches\nfrom the redo point of the checkpoint written at the start of the\nprior backup to the redo point of the checkpoint written at the start\nof the current backup. The block modifications that happen in that\nrange of WAL records are the ones that need to be included in the\nincremental. Unfortunately, there's no indication in the WAL itself\nthat you've reached a redo point, but I wrote code that tries to\nnotice when we've reached the redo point stored in shared memory and\nstops the summary there. But I eventually realized that's not good\nenough either, because if summarization zooms past the redo point\nbefore noticing the updated redo point in shared memory, then the\nbackup sat around waiting for the next summary file to be generated so\nit had enough summaries to proceed with the backup, while the\nsummarizer was in no hurry to finish up the current file and just sat\nthere waiting for more WAL to be generated. Eventually the incremental\nbackup would just time out. I tried to fix that by making it so that\nif somebody's waiting for a summary file to be generated, they can let\nthe summarizer know about that and it can write a summary file ending\nat the LSN up to which it has read and then begin a new file from\nthere. That seems to fix the hangs, but now I've got three\noverlapping, interconnected systems for deciding where to end the\ncurrent summary file, and maybe that's OK, but I have a feeling there\nmight be a better way.\n\nDilip had an interesting potential solution to this problem, which was\nto always emit a special WAL record at the redo pointer. That is, when\nwe fix the redo pointer for the checkpoint record we're about to\nwrite, also insert a WAL record there. That way, when the summarizer\nreaches that sentinel record, it knows it should stop the summary just\nbefore. I'm not sure whether this approach is viable, especially from\na performance and concurrency perspective, and I'm not sure whether\npeople here would like it, but it does seem like it would make things\na whole lot simpler for this patch set.\n\nAnother thing that I'm not too sure about is: what happens if we find\na relation file on disk that doesn't appear in the backup_manifest for\nthe previous backup and isn't mentioned in the WAL summaries either?\nThe fact that said file isn't mentioned in the WAL summaries seems\nlike it ought to mean that the file is unchanged, in which case\nperhaps this ought to be an error condition. But I'm not too sure\nabout that treatment. I have a feeling that there might be some subtle\nproblems here, especially if databases or tablespaces get dropped and\nthen new ones get created that happen to have the same OIDs. And what\nabout wal_level=minimal? I'm not at a point where I can say I've gone\nthrough and plugged up these kinds of corner-case holes tightly yet,\nand I'm worried that there may be still other scenarios of which I\nhaven't even thought. Happy to hear your ideas about what the problem\ncases are or how any of the problems should be solved.\n\nA related design question is whether we should really be sending the\nwhole backup manifest to the server at all. If it turns out that we\ndon't really need anything except for the LSN of the previous backup,\nwe could send that one piece of information instead of everything. 
On\nthe other hand, if we need the list of files from the previous backup,\nthen sending the whole manifest makes sense.\n\nAnother big and rather obvious problem with the patch set is that it\ndoesn't currently have any automated test cases, or any real\ndocumentation. Those are obviously things that need a lot of work\nbefore there could be any thought of committing this. And probably a\nlot of bugs will be found along the way, too.\n\nA few less-serious problems with the patch:\n\n- We don't have an incremental JSON parser, so if you have a\nbackup_manifest>1GB, pg_basebackup --incremental is going to fail.\nThat's also true of the existing code in pg_verifybackup, and for the\nsame reason. I talked to Andrew Dunstan at one point about adapting\nour JSON parser to support incremental parsing, and he had a patch for\nthat, but I think he found some problems with it and I'm not sure what\nthe current status is.\n\n- The patch does support differential backup, aka an incremental atop\nanother incremental. There's no particular limit to how long a chain\nof backups can be. However, pg_combinebackup currently requires that\nthe first backup is a full backup and all the later ones are\nincremental backups. So if you have a full backup a and an incremental\nbackup b and a differential backup c, you can combine a b and c to get\na full backup equivalent to one you would have gotten if you had taken\na full backup at the time you took c. However, you can't combine b and\nc with each other without combining them with a, and that might be\ndesirable in some situations. You might want to collapse a bunch of\nolder differential backups into a single one that covers the whole\ntime range of all of them. I think that the file format can support\nthat, but the tool is currently too dumb.\n\n- We only know how to operate on directories, not tar files. I thought\nabout that when working on pg_verifybackup as well, but I didn't do\nanything about it. It would be nice to go back and make that tool work\non tar-format backups, and this one, too. I don't think there would be\na whole lot of point trying to operate on compressed tar files because\nyou need random access and that seems hard on a compressed file, but\non uncompressed files it seems at least theoretically doable. I'm not\nsure whether anyone would care that much about this, though, even\nthough it does sound pretty cool.\n\nIn the attached patch series, patches 1 through 6 are various\nrefactoring patches, patch 7 is the main event, and patch 8 adds a\nuseful inspection tool.\n\nThanks,\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com",
"msg_date": "Wed, 14 Jun 2023 14:46:48 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": true,
"msg_subject": "trying again to get incremental backup"
},
{
"msg_contents": "Hi,\n\nOn 2023-06-14 14:46:48 -0400, Robert Haas wrote:\n> A few years ago, I sketched out a design for incremental backup, but\n> no patch for incremental backup ever got committed. Instead, the whole\n> thing evolved into a project to add backup manifests, which are nice,\n> but not as nice as incremental backup would be. So I've decided to\n> have another go at incremental backup itself. Attached are some WIP\n> patches. Let me summarize the design and some open questions and\n> problems with it that I've discovered. I welcome problem reports and\n> test results from others, as well.\n\nCool!\n\n\n> I originally had the idea of summarizing a certain number of MB of WAL\n> per WAL summary file, and so I added a GUC wal_summarize_mb for that\n> purpose. But then I realized that actually, you really want WAL\n> summary file boundaries to line up with possible redo points, because\n> when you do an incremental backup, you need a summary that stretches\n> from the redo point of the checkpoint written at the start of the\n> prior backup to the redo point of the checkpoint written at the start\n> of the current backup. The block modifications that happen in that\n> range of WAL records are the ones that need to be included in the\n> incremental.\n\nI assume this is \"solely\" required for keeping the incremental backups as\nsmall as possible, rather than being required for correctness?\n\n\n> Unfortunately, there's no indication in the WAL itself\n> that you've reached a redo point, but I wrote code that tries to\n> notice when we've reached the redo point stored in shared memory and\n> stops the summary there. But I eventually realized that's not good\n> enough either, because if summarization zooms past the redo point\n> before noticing the updated redo point in shared memory, then the\n> backup sat around waiting for the next summary file to be generated so\n> it had enough summaries to proceed with the backup, while the\n> summarizer was in no hurry to finish up the current file and just sat\n> there waiting for more WAL to be generated. Eventually the incremental\n> backup would just time out. I tried to fix that by making it so that\n> if somebody's waiting for a summary file to be generated, they can let\n> the summarizer know about that and it can write a summary file ending\n> at the LSN up to which it has read and then begin a new file from\n> there. That seems to fix the hangs, but now I've got three\n> overlapping, interconnected systems for deciding where to end the\n> current summary file, and maybe that's OK, but I have a feeling there\n> might be a better way.\n\nCould we just recompute the WAL summary for the [redo, end of chunk] for the\nrelevant summary file?\n\n\n> Dilip had an interesting potential solution to this problem, which was\n> to always emit a special WAL record at the redo pointer. That is, when\n> we fix the redo pointer for the checkpoint record we're about to\n> write, also insert a WAL record there. That way, when the summarizer\n> reaches that sentinel record, it knows it should stop the summary just\n> before. I'm not sure whether this approach is viable, especially from\n> a performance and concurrency perspective, and I'm not sure whether\n> people here would like it, but it does seem like it would make things\n> a whole lot simpler for this patch set.\n\nFWIW, I like the idea of a special WAL record at that point, independent of\nthis feature. 
It wouldn't be a meaningful overhead compared to the cost of a\ncheckpoint, and it seems like it'd be quite useful for debugging. But I can\nsee uses going beyond that - we occasionally have been discussing associating\nadditional data with redo points, and that'd be a lot easier to deal with\nduring recovery with such a record.\n\nI don't really see a performance and concurrency angle right now - what are\nyou wondering about?\n\n\n> Another thing that I'm not too sure about is: what happens if we find\n> a relation file on disk that doesn't appear in the backup_manifest for\n> the previous backup and isn't mentioned in the WAL summaries either?\n\nWouldn't that commonly happen for unlogged relations at least?\n\nI suspect there's also other ways to end up with such additional files,\ne.g. by crashing during the creation of a new relation.\n\n\n> A few less-serious problems with the patch:\n> \n> - We don't have an incremental JSON parser, so if you have a\n> backup_manifest>1GB, pg_basebackup --incremental is going to fail.\n> That's also true of the existing code in pg_verifybackup, and for the\n> same reason. I talked to Andrew Dunstan at one point about adapting\n> our JSON parser to support incremental parsing, and he had a patch for\n> that, but I think he found some problems with it and I'm not sure what\n> the current status is.\n\nAs a stopgap measure, can't we just use the relevant flag to allow larger\nallocations?\n\n\n> - The patch does support differential backup, aka an incremental atop\n> another incremental. There's no particular limit to how long a chain\n> of backups can be. However, pg_combinebackup currently requires that\n> the first backup is a full backup and all the later ones are\n> incremental backups. So if you have a full backup a and an incremental\n> backup b and a differential backup c, you can combine a b and c to get\n> a full backup equivalent to one you would have gotten if you had taken\n> a full backup at the time you took c. However, you can't combine b and\n> c with each other without combining them with a, and that might be\n> desirable in some situations. You might want to collapse a bunch of\n> older differential backups into a single one that covers the whole\n> time range of all of them. I think that the file format can support\n> that, but the tool is currently too dumb.\n\nThat seems like a feature for the future...\n\n\n> - We only know how to operate on directories, not tar files. I thought\n> about that when working on pg_verifybackup as well, but I didn't do\n> anything about it. It would be nice to go back and make that tool work\n> on tar-format backups, and this one, too. I don't think there would be\n> a whole lot of point trying to operate on compressed tar files because\n> you need random access and that seems hard on a compressed file, but\n> on uncompressed files it seems at least theoretically doable. I'm not\n> sure whether anyone would care that much about this, though, even\n> though it does sound pretty cool.\n\nI don't know the tar format well, but my understanding is that it doesn't have\na \"central metadata\" portion. I.e. doing something like this would entail\nscanning the tar file sequentially, skipping file contents? And wouldn't you\nhave to create an entirely new tar file for the modified output? That kind of\nmakes it not so incremental ;)\n\nIOW, I'm not sure it's worth bothering about this ever, and certainly doesn't\nseem worth bothering about now. 
But I might just be missing something.\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 14 Jun 2023 12:47:17 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: trying again to get incremental backup"
},
{
"msg_contents": "On Wed, Jun 14, 2023 at 3:47 PM Andres Freund <[email protected]> wrote:\n> I assume this is \"solely\" required for keeping the incremental backups as\n> small as possible, rather than being required for correctness?\n\nI believe so. I want to spend some more time thinking about this to\nmake sure I'm not missing anything.\n\n> Could we just recompute the WAL summary for the [redo, end of chunk] for the\n> relevant summary file?\n\nI'm not understanding how that would help. If we were going to compute\na WAL summary on the fly rather than waiting for one to show up on\ndisk, what we'd want is [end of last WAL summary that does exist on\ndisk, redo]. But I'm not sure that's a great approach, because that\nLSN gap might be large and then we're duplicating a lot of work that\nthe summarizer has probably already done most of.\n\n> FWIW, I like the idea of a special WAL record at that point, independent of\n> this feature. It wouldn't be a meaningful overhead compared to the cost of a\n> checkpoint, and it seems like it'd be quite useful for debugging. But I can\n> see uses going beyond that - we occasionally have been discussing associating\n> additional data with redo points, and that'd be a lot easier to deal with\n> during recovery with such a record.\n>\n> I don't really see a performance and concurrency angle right now - what are\n> you wondering about?\n\nI'm not really sure. I expect Dilip would be happy to post his patch,\nand if you'd be willing to have a look at it and express your concerns\nor lack thereof, that would be super valuable.\n\n> > Another thing that I'm not too sure about is: what happens if we find\n> > a relation file on disk that doesn't appear in the backup_manifest for\n> > the previous backup and isn't mentioned in the WAL summaries either?\n>\n> Wouldn't that commonly happen for unlogged relations at least?\n>\n> I suspect there's also other ways to end up with such additional files,\n> e.g. by crashing during the creation of a new relation.\n\nYeah, this needs some more careful thought.\n\n> > A few less-serious problems with the patch:\n> >\n> > - We don't have an incremental JSON parser, so if you have a\n> > backup_manifest>1GB, pg_basebackup --incremental is going to fail.\n> > That's also true of the existing code in pg_verifybackup, and for the\n> > same reason. I talked to Andrew Dunstan at one point about adapting\n> > our JSON parser to support incremental parsing, and he had a patch for\n> > that, but I think he found some problems with it and I'm not sure what\n> > the current status is.\n>\n> As a stopgap measure, can't we just use the relevant flag to allow larger\n> allocations?\n\nI'm not sure that's a good idea, but theoretically, yes. We can also\njust choose to accept the limitation that your data directory can't be\ntoo darn big if you want to use this feature. But getting incremental\nJSON parsing would be better.\n\nNot having the manifest in JSON would be an even better solution, but\nregrettably I did not win that argument.\n\n> That seems like a feature for the future...\n\nSure.\n\n> I don't know the tar format well, but my understanding is that it doesn't have\n> a \"central metadata\" portion. I.e. doing something like this would entail\n> scanning the tar file sequentially, skipping file contents? And wouldn't you\n> have to create an entirely new tar file for the modified output? 
That kind of\n> makes it not so incremental ;)\n>\n> IOW, I'm not sure it's worth bothering about this ever, and certainly doesn't\n> seem worth bothering about now. But I might just be missing something.\n\nOh, yeah, it's just an idle thought. I'll get to it when I get to it,\nor else I won't.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 14 Jun 2023 16:10:38 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: trying again to get incremental backup"
},
{
"msg_contents": "On Wed, 14 Jun 2023 at 20:47, Robert Haas <[email protected]> wrote:\n>\n> A few years ago, I sketched out a design for incremental backup, but\n> no patch for incremental backup ever got committed. Instead, the whole\n> thing evolved into a project to add backup manifests, which are nice,\n> but not as nice as incremental backup would be. So I've decided to\n> have another go at incremental backup itself. Attached are some WIP\n> patches.\n\nNice, I like this idea.\n\n> Let me summarize the design and some open questions and\n> problems with it that I've discovered. I welcome problem reports and\n> test results from others, as well.\n\nSkimming through the 7th patch, I see claims that FSM is not fully\nWAL-logged and thus shouldn't be tracked, and so it indeed doesn't\ntrack those changes.\nI disagree with that decision: we now have support for custom resource\nmanagers, which may use the various forks for other purposes than\nthose used in PostgreSQL right now. It would be a shame if data is\nlost because of the backup tool ignoring forks because the PostgreSQL\nproject itself doesn't have post-recovery consistency guarantees in\nthat fork. So, unless we document that WAL-logged changes in the FSM\nfork are actually not recoverable from backup, regardless of the type\nof contents, we should still keep track of the changes in the FSM fork\nand include the fork in our backups or only exclude those FSM updates\nthat we know are safe to ignore.\n\nKind regards,\n\nMatthias van de Meent\nNeon, Inc.\n\n\n",
"msg_date": "Wed, 14 Jun 2023 22:34:35 +0200",
"msg_from": "Matthias van de Meent <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: trying again to get incremental backup"
},
{
"msg_contents": "Hi,\n\nOn 2023-06-14 16:10:38 -0400, Robert Haas wrote:\n> On Wed, Jun 14, 2023 at 3:47 PM Andres Freund <[email protected]> wrote:\n> > Could we just recompute the WAL summary for the [redo, end of chunk] for the\n> > relevant summary file?\n> \n> I'm not understanding how that would help. If we were going to compute\n> a WAL summary on the fly rather than waiting for one to show up on\n> disk, what we'd want is [end of last WAL summary that does exist on\n> disk, redo].\n\nOh, right.\n\n\n> But I'm not sure that's a great approach, because that LSN gap might be\n> large and then we're duplicating a lot of work that the summarizer has\n> probably already done most of.\n\nI guess that really depends on what the summary granularity is. If you create\na separate summary every 32MB or so, recomputing just the required range\nshouldn't be too bad.\n\n\n> > FWIW, I like the idea of a special WAL record at that point, independent of\n> > this feature. It wouldn't be a meaningful overhead compared to the cost of a\n> > checkpoint, and it seems like it'd be quite useful for debugging. But I can\n> > see uses going beyond that - we occasionally have been discussing associating\n> > additional data with redo points, and that'd be a lot easier to deal with\n> > during recovery with such a record.\n> >\n> > I don't really see a performance and concurrency angle right now - what are\n> > you wondering about?\n> \n> I'm not really sure. I expect Dilip would be happy to post his patch,\n> and if you'd be willing to have a look at it and express your concerns\n> or lack thereof, that would be super valuable.\n\nWill do. Adding me to CC: might help, I have a backlog unfortunately :(.\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 14 Jun 2023 13:40:56 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: trying again to get incremental backup"
},
{
"msg_contents": "On Thu, Jun 15, 2023 at 2:11 AM Andres Freund <[email protected]> wrote:\n>\n\n> > I'm not really sure. I expect Dilip would be happy to post his patch,\n> > and if you'd be willing to have a look at it and express your concerns\n> > or lack thereof, that would be super valuable.\n>\n> Will do. Adding me to CC: might help, I have a backlog unfortunately :(.\n\nThanks, I have posted it here[1]\n\n[1] https://www.postgresql.org/message-id/CAFiTN-s-K%3DmVA%3DHPr_VoU-5bvyLQpNeuzjq1ebPJMEfCJZKFsg%40mail.gmail.com\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 15 Jun 2023 13:14:52 +0530",
"msg_from": "Dilip Kumar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: trying again to get incremental backup"
},
{
"msg_contents": "On Wed, Jun 14, 2023 at 4:40 PM Andres Freund <[email protected]> wrote:\n> > But I'm not sure that's a great approach, because that LSN gap might be\n> > large and then we're duplicating a lot of work that the summarizer has\n> > probably already done most of.\n>\n> I guess that really depends on what the summary granularity is. If you create\n> a separate summary every 32MB or so, recomputing just the required range\n> shouldn't be too bad.\n\nYeah, but I don't think that's the right approach, for two reasons.\nFirst, one of the things I'm rather worried about is what happens when\nthe WAL distance between the prior backup and the incremental backup\nis large. It could be a terabyte. If we have a WAL summary for every\n32MB of WAL, that's 32k files we have to read, and I'm concerned\nthat's too many. Maybe it isn't, but it's something that has really\nbeen weighing on my mind as I've been thinking through the design\nquestions here. The files are really very small, and having to open a\nbazillion tiny little files to get the job done sounds lame. Second, I\ndon't see what problem it actually solves. Why not just signal the\nsummarizer to write out the accumulated data to a file instead of\nre-doing the work ourselves? Or else adopt the\nWAL-record-at-the-redo-pointer approach, and then the whole thing is\nmoot?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 19 Jun 2023 09:46:12 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: trying again to get incremental backup"
},
{
"msg_contents": "Hi,\n\nOn 2023-06-19 09:46:12 -0400, Robert Haas wrote:\n> On Wed, Jun 14, 2023 at 4:40 PM Andres Freund <[email protected]> wrote:\n> > > But I'm not sure that's a great approach, because that LSN gap might be\n> > > large and then we're duplicating a lot of work that the summarizer has\n> > > probably already done most of.\n> >\n> > I guess that really depends on what the summary granularity is. If you create\n> > a separate summary every 32MB or so, recomputing just the required range\n> > shouldn't be too bad.\n> \n> Yeah, but I don't think that's the right approach, for two reasons.\n> First, one of the things I'm rather worried about is what happens when\n> the WAL distance between the prior backup and the incremental backup\n> is large. It could be a terabyte. If we have a WAL summary for every\n> 32MB of WAL, that's 32k files we have to read, and I'm concerned\n> that's too many. Maybe it isn't, but it's something that has really\n> been weighing on my mind as I've been thinking through the design\n> questions here.\n\nIt doesn't have to be a separate file - you could easily summarize ranges\nat a higher granularity, storing multiple ranges into a single file with a\ncoarser naming pattern.\n\n\n> The files are really very small, and having to open a bazillion tiny little\n> files to get the job done sounds lame. Second, I don't see what problem it\n> actually solves. Why not just signal the summarizer to write out the\n> accumulated data to a file instead of re-doing the work ourselves? Or else\n> adopt the WAL-record-at-the-redo-pointer approach, and then the whole thing\n> is moot?\n\nThe one point for a relatively grainy summarization scheme that I see is that\nit would pave the way for using the WAL summary data for other purposes in the\nfuture. That could be done orthogonal to the other solutions to the redo\npointer issues.\n\nOther potential use cases:\n\n- only restore parts of a base backup that aren't going to be overwritten by\n WAL replay\n- reconstructing database contents from WAL after data loss\n- more efficient pg_rewind\n- more efficient prefetching during WAL replay\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 19 Jun 2023 08:51:51 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: trying again to get incremental backup"
},
{
"msg_contents": "Hi,\n\nIn the limited time that I've had to work on this project lately, I've\nbeen trying to come up with a test case for this feature -- and since\nI've gotten completely stuck, I thought it might be time to post and\nsee if anyone else has a better idea. I thought a reasonable test case\nwould be: Do a full backup. Change some stuff. Do an incremental\nbackup. Restore both backups and perform replay to the same LSN. Then\ncompare the files on disk. But I cannot make this work. The first\nproblem I ran into was that replay of the full backup does a\nrestartpoint, while the replay of the incremental backup does not.\nThat results in, for example, pg_subtrans having different contents.\nI'm not sure whether it can also result in data files having different\ncontents: are changes that we replayed following the last restartpoint\nguaranteed to end up on disk when the server is shut down? It wasn't\nclear to me that this is the case. I thought maybe I could get both\nservers to perform a restartpoint at the same location by shutting\ndown the primary and then replaying through the shutdown checkpoint,\nbut that doesn't work because the primary doesn't finish archiving\nbefore shutting down. After some more fiddling I settled (at least for\nresearch purposes) on having the restored backups PITR and promote,\ninstead of PITR and pause, so that we're guaranteed a checkpoint. But\nthat just caused me to run into a far worse problem: replay on the\nstandby doesn't actually create a state that is byte-for-byte\nidentical to the one that exists on the primary. I quickly discovered\nthat in my test case, I was ending up with different contents in the\n\"hole\" of a block wherein a tuple got updated. Replay doesn't think\nit's important to make the hole end up with the same contents on all\nmachines that replay the WAL, so I end up with one server that has\nmore junk in there than the other one and the tests fail.\n\nUnless someone has a brilliant idea that I lack, this suggests to me\nthat this whole line of testing is a dead end. I can, of course, write\ntests that compare clusters *logically* -- do the correct relations\nexist, are they accessible, do they have the right contents? But I\nfeel like it would be easy to have bugs that escape detection in such\na test but would be detected by a physical comparison of the clusters.\nHowever, such a comparison can only be conducted if either (a) there's\nsome way to set up the test so that byte-for-byte identical clusters\ncan be expected or (b) there's some way to perform the comparison that\ncan distinguish between expected, harmless differences and unexpected,\nproblematic differences. And at the moment my conclusion is that\nneither (a) nor (b) exists. Does anyone think otherwise?\n\nMeanwhile, here's a rebased set of patches. The somewhat-primitive\nattempts at writing tests are in 0009, but they don't work, for the\nreasons explained above. I think I'd probably like to go ahead and\ncommit 0001 and 0002 soon if there are no objections, since I think\nthose are good refactorings independently of the rest of this.\n\n...Robert",
"msg_date": "Wed, 30 Aug 2023 10:49:47 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: trying again to get incremental backup"
},
{
"msg_contents": "Hi Robert,\n\nOn 8/30/23 10:49, Robert Haas wrote:\n> In the limited time that I've had to work on this project lately, I've\n> been trying to come up with a test case for this feature -- and since\n> I've gotten completely stuck, I thought it might be time to post and\n> see if anyone else has a better idea. I thought a reasonable test case\n> would be: Do a full backup. Change some stuff. Do an incremental\n> backup. Restore both backups and perform replay to the same LSN. Then\n> compare the files on disk. But I cannot make this work. The first\n> problem I ran into was that replay of the full backup does a\n> restartpoint, while the replay of the incremental backup does not.\n> That results in, for example, pg_subtrans having different contents.\n\npg_subtrans, at least, can be ignored since it is excluded from the \nbackup and not required for recovery.\n\n> I'm not sure whether it can also result in data files having different\n> contents: are changes that we replayed following the last restartpoint\n> guaranteed to end up on disk when the server is shut down? It wasn't\n> clear to me that this is the case. I thought maybe I could get both\n> servers to perform a restartpoint at the same location by shutting\n> down the primary and then replaying through the shutdown checkpoint,\n> but that doesn't work because the primary doesn't finish archiving\n> before shutting down. After some more fiddling I settled (at least for\n> research purposes) on having the restored backups PITR and promote,\n> instead of PITR and pause, so that we're guaranteed a checkpoint. But\n> that just caused me to run into a far worse problem: replay on the\n> standby doesn't actually create a state that is byte-for-byte\n> identical to the one that exists on the primary. I quickly discovered\n> that in my test case, I was ending up with different contents in the\n> \"hole\" of a block wherein a tuple got updated. Replay doesn't think\n> it's important to make the hole end up with the same contents on all\n> machines that replay the WAL, so I end up with one server that has\n> more junk in there than the other one and the tests fail.\n\nThis is pretty much what I discovered when investigating backup from \nstandby back in 2016. My (ultimately unsuccessful) efforts to find a \nclean delta resulted in [1] as I systematically excluded directories \nthat are not required for recovery and will not be synced between a \nprimary and standby.\n\nFWIW Heikki also made similar attempts at this before me (back then I \nfound the thread but I doubt I could find it again) and arrived at \nsimilar results. We discussed this in person and figured out that we had \ncome to more or less the same conclusion. Welcome to the club!\n\n> Unless someone has a brilliant idea that I lack, this suggests to me\n> that this whole line of testing is a dead end. I can, of course, write\n> tests that compare clusters *logically* -- do the correct relations\n> exist, are they accessible, do they have the right contents? 
But I\n> feel like it would be easy to have bugs that escape detection in such\n> a test but would be detected by a physical comparison of the clusters.\n\nAgreed, though a matching logical result is still very compelling.\n\n> However, such a comparison can only be conducted if either (a) there's\n> some way to set up the test so that byte-for-byte identical clusters\n> can be expected or (b) there's some way to perform the comparison that\n> can distinguish between expected, harmless differences and unexpected,\n> problematic differences. And at the moment my conclusion is that\n> neither (a) nor (b) exists. Does anyone think otherwise?\n\nI do not. My conclusion back then was that validating a physical \ncomparison would be nearly impossible without changes to Postgres to \nmake the primary and standby match via replication. Which, by the way, I \nstill think would be a great idea. In principle, at least. Replay is \nalready a major bottleneck and anything that makes it slower will likely \nnot be very popular.\n\nThis would also be great for WAL, since last time I tested the same WAL \nsegment can be different between the primary and standby because the \nunused (and recycled) portion at the end is not zeroed as it is on the \nprimary (but logically they do match). I would be very happy if somebody \ntold me that my info is out of date here and this has been fixed. But \nwhen I looked at the code it was incredibly tricky to do this because of \nhow WAL is replicated.\n\n> Meanwhile, here's a rebased set of patches. The somewhat-primitive\n> attempts at writing tests are in 0009, but they don't work, for the\n> reasons explained above. I think I'd probably like to go ahead and\n> commit 0001 and 0002 soon if there are no objections, since I think\n> those are good refactorings independently of the rest of this.\n\nNo objections to 0001/0002.\n\nRegards,\n-David\n\n[1] \nhttp://git.postgresql.org/pg/commitdiff/6ad8ac6026287e3ccbc4d606b6ab6116ccc0eec8\n\n\n",
"msg_date": "Thu, 31 Aug 2023 18:50:29 -0400",
"msg_from": "David Steele <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: trying again to get incremental backup"
},
{
"msg_contents": "Hey, thanks for the reply.\n\nOn Thu, Aug 31, 2023 at 6:50 PM David Steele <[email protected]> wrote:\n> pg_subtrans, at least, can be ignored since it is excluded from the\n> backup and not required for recovery.\n\nI agree...\n\n> Welcome to the club!\n\nThanks for the welcome, but being a member feels *terrible*. :-)\n\n> I do not. My conclusion back then was that validating a physical\n> comparison would be nearly impossible without changes to Postgres to\n> make the primary and standby match via replication. Which, by the way, I\n> still think would be a great idea. In principle, at least. Replay is\n> already a major bottleneck and anything that makes it slower will likely\n> not be very popular.\n\nFair point. But maybe the bigger issue is the work involved. I don't\nthink zeroing the hole in all cases would likely be that expensive,\nbut finding everything that can cause the standby to diverge from the\nprimary and fixing all of it sounds like an unpleasant amount of\neffort. Still, it's good to know that I'm not missing something\nobvious.\n\n> No objections to 0001/0002.\n\nCool.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 1 Sep 2023 10:30:23 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: trying again to get incremental backup"
},
{
"msg_contents": "On Wed, Aug 30, 2023 at 9:20 PM Robert Haas <[email protected]> wrote:\n\n> Unless someone has a brilliant idea that I lack, this suggests to me\n> that this whole line of testing is a dead end. I can, of course, write\n> tests that compare clusters *logically* -- do the correct relations\n> exist, are they accessible, do they have the right contents?\n\nCan't we think of comparing at the block level, like we can compare\neach block but ignore the content of the hole?\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 4 Sep 2023 18:11:53 +0530",
"msg_from": "Dilip Kumar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: trying again to get incremental backup"
},
{
"msg_contents": "On Mon, Sep 4, 2023 at 8:42 AM Dilip Kumar <[email protected]> wrote:\n> Can't we think of comparing at the block level, like we can compare\n> each block but ignore the content of the hole?\n\nWe could do that, but I don't think that's a full solution. I think\nI'd end up having to reimplement the equivalent of heap_mask,\nbtree_mask, et. al. in Perl, which doesn't seem very reasonable. It's\nfairly complicated logic even written in C, and doing the right thing\nin Perl would be more complex, I think, because it wouldn't have\naccess to all the same #defines which depend on things like word size\nand Endianness and stuff. If we want to allow this sort of comparison,\nI feel we should think of changing the C code in some way to make it\nwork reliably rather than try to paper over the problems in Perl.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 5 Sep 2023 11:05:01 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: trying again to get incremental backup"
},
{
"msg_contents": "On Wed, Aug 30, 2023 at 9:20 PM Robert Haas <[email protected]> wrote:\n>\n> Meanwhile, here's a rebased set of patches. The somewhat-primitive\n> attempts at writing tests are in 0009, but they don't work, for the\n> reasons explained above. I think I'd probably like to go ahead and\n> commit 0001 and 0002 soon if there are no objections, since I think\n> those are good refactorings independently of the rest of this.\n>\n\nI have started reading the patch today, I haven't yet completed one\npass but here are my comments in 0007\n\n1.\n\n+ BlockNumber relative_block_numbers[RELSEG_SIZE];\n\nThis is close to 400kB of memory, so I think it is better we palloc it\ninstead of keeping it in the stack.\n\n2.\n /*\n * Try to parse the directory name as an unsigned integer.\n *\n- * Tablespace directories should be positive integers that can\n- * be represented in 32 bits, with no leading zeroes or trailing\n+ * Tablespace directories should be positive integers that can be\n+ * represented in 32 bits, with no leading zeroes or trailing\n * garbage. If we come across a name that doesn't meet those\n * criteria, skip it.\n\nUnrelated code refactoring hunk\n\n3.\n+typedef struct\n+{\n+ const char *filename;\n+ pg_checksum_context *checksum_ctx;\n+ bbsink *sink;\n+ size_t bytes_sent;\n+} FileChunkContext;\n\nThis structure is not used anywhere.\n\n4.\n+ * If the file is to be set incrementally, then num_incremental_blocks\n+ * should be the number of blocks to be sent, and incremental_blocks\n\n/If the file is to be set incrementally/If the file is to be sent incrementally\n\n5.\n- while (bytes_done < statbuf->st_size)\n+ while (1)\n {\n- size_t remaining = statbuf->st_size - bytes_done;\n+ /*\n\nI do not really like this change, because after removing this you have\nput 2 independent checks for sending the full file[1] and sending it\nincrementally[1]. Actually for sending incrementally\n'statbuf->st_size' is computed from the 'num_incremental_blocks'\nitself so why don't we keep this breaking condition in the while loop\nitself? So that we can avoid these two separate conditions.\n\n[1]\n+ /*\n+ * If we've read the required number of bytes, then it's time to\n+ * stop.\n+ */\n+ if (bytes_done >= statbuf->st_size)\n+ break;\n\n[2]\n+ /*\n+ * If we've read all the blocks, then it's time to stop.\n+ */\n+ if (ibindex >= num_incremental_blocks)\n+ break;\n\n\n6.\n+typedef struct\n+{\n+ TimeLineID tli;\n+ XLogRecPtr start_lsn;\n+ XLogRecPtr end_lsn;\n+} backup_wal_range;\n+\n+typedef struct\n+{\n+ uint32 status;\n+ const char *path;\n+ size_t size;\n+} backup_file_entry;\n\nBetter we add some comments for these structures.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 12 Sep 2023 15:26:00 +0530",
"msg_from": "Dilip Kumar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: trying again to get incremental backup"
},
{
"msg_contents": "On Wed, Aug 30, 2023 at 4:50 PM Robert Haas <[email protected]> wrote:\n[..]\n\nI've played a little bit more this second batch of patches on\ne8d74ad625f7344f6b715254d3869663c1569a51 @ 31Aug (days before wait\nevents refactor):\n\ntest_across_wallevelminimal.sh\ntest_many_incrementals_dbcreate.sh\ntest_many_incrementals.sh\ntest_multixact.sh\ntest_pending_2pc.sh\ntest_reindex_and_vacuum_full.sh\ntest_truncaterollback.sh\ntest_unlogged_table.sh\n\nall those basic tests had GOOD results. Please find attached. I'll try\nto schedule some more realistic (in terms of workload and sizes) test\nin a couple of days + maybe have some fun with cross-backup-and\nrestores across standbys. As per earlier doubt: raw wal_level =\nminimal situation, shouldn't be a concern, sadly because it requires\nmax_wal_senders==0, while pg_basebackup requires it above 0 (due to\n\"FATAL: number of requested standby connections exceeds\nmax_wal_senders (currently 0)\").\n\nI wanted to also introduce corruption onto pg_walsummaries files, but\nlater saw in code that is already covered with CRC32, cool.\n\nIn v07:\n> +#define MINIMUM_VERSION_FOR_WAL_SUMMARIES 160000\n\n170000 ?\n\n> A related design question is whether we should really be sending the\n> whole backup manifest to the server at all. If it turns out that we\n> don't really need anything except for the LSN of the previous backup,\n> we could send that one piece of information instead of everything. On\n> the other hand, if we need the list of files from the previous backup,\n> then sending the whole manifest makes sense.\n\nIf that is still an area open for discussion: wouldn't it be better to\njust specify LSN as it would allow resyncing standby across major lag\nwhere the WAL to replay would be enormous? Given that we had\nprimary->standby where standby would be stuck on some LSN, right now\nit would be:\n1) calculate backup manifest of desynced 10TB standby (how? using\nwhich tool?) - even if possible, that means reading 10TB of data\ninstead of just putting a number, isn't it?\n2) backup primary with such incremental backup >= LSN\n3) copy the incremental backup to standby\n4) apply it to the impaired standby\n5) restart the WAL replay\n\n> - We only know how to operate on directories, not tar files. I thought\n> about that when working on pg_verifybackup as well, but I didn't do\n> anything about it. It would be nice to go back and make that tool work\n> on tar-format backups, and this one, too. I don't think there would be\n> a whole lot of point trying to operate on compressed tar files because\n> you need random access and that seems hard on a compressed file, but\n> on uncompressed files it seems at least theoretically doable. I'm not\n> sure whether anyone would care that much about this, though, even\n> though it does sound pretty cool.\n\nAlso maybe it's too early to ask, but wouldn't it be nice if we could\nhave an future option in pg_combinebackup to avoid double writes when\nused from restore hosts (right now we need to first to reconstruct the\noriginal datadir from full and incremental backups on host hosting\nbackups and then TRANSFER it again and on target host?). So something\nlike that could work well from restorehost: pg_combinebackup\n/tmp/backup1 /tmp/incbackup2 /tmp/incbackup3 -O tar -o - | ssh\ndbserver 'tar xvf -C /path/to/restored/cluster - ' . 
The bad thing is\nthat such a pipe prevents parallelism from day 1 and I'm afraid I do\nnot have a better easy idea on how to have both at the same time in\nthe long term.\n\n-J.",
"msg_date": "Thu, 28 Sep 2023 12:22:07 +0200",
"msg_from": "Jakub Wartak <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: trying again to get incremental backup"
},
{
"msg_contents": "On Fri, Sep 1, 2023 at 10:30 AM Robert Haas <[email protected]> wrote:\n> > No objections to 0001/0002.\n>\n> Cool.\n\nNobody else objected either, so I went ahead and committed those. I'll\nrebase the rest of the patches on top of the latest master and repost,\nhopefully after addressing some of the other review comments from\nDilip and Jakub.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 3 Oct 2023 11:03:22 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: trying again to get incremental backup"
},
{
"msg_contents": "On Tue, Sep 12, 2023 at 5:56 AM Dilip Kumar <[email protected]> wrote:\n> + BlockNumber relative_block_numbers[RELSEG_SIZE];\n>\n> This is close to 400kB of memory, so I think it is better we palloc it\n> instead of keeping it in the stack.\n\nFixed.\n\n> Unrelated code refactoring hunk\n\nFixed.\n\n> This structure is not used anywhere.\n\nRemoved.\n\n> /If the file is to be set incrementally/If the file is to be sent incrementally\n\nFixed.\n\n> I do not really like this change, because after removing this you have\n> put 2 independent checks for sending the full file[1] and sending it\n> incrementally[1]. Actually for sending incrementally\n> 'statbuf->st_size' is computed from the 'num_incremental_blocks'\n> itself so why don't we keep this breaking condition in the while loop\n> itself? So that we can avoid these two separate conditions.\n\nI don't think that would be correct. The number of bytes that need to\nbe read from the original file is not equal to the number of bytes\nthat will be written to the incremental file. Admittedly, they're\ncurrently different by less than a block, but that could change if we\nchange the format of the incremental file (e.g. suppose we compressed\nthe blocks in the incremental file with gzip, or smushed out the holes\nin the pages). I wrote the loop as I did precisely so that the two\ncases could have different loop exit conditions.\n\n> Better we add some comments for these structures.\n\nDone.\n\nHere's a new patch set, also addressing Jakub's observation that\nMINIMUM_VERSION_FOR_WAL_SUMMARIES needed updating.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com",
"msg_date": "Tue, 3 Oct 2023 14:21:00 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: trying again to get incremental backup"
},
{
"msg_contents": "On Thu, Sep 28, 2023 at 6:22 AM Jakub Wartak\n<[email protected]> wrote:\n> all those basic tests had GOOD results. Please find attached. I'll try\n> to schedule some more realistic (in terms of workload and sizes) test\n> in a couple of days + maybe have some fun with cross-backup-and\n> restores across standbys.\n\nThat's awesome! Thanks for testing! This can definitely benefit from\nany amount of beating on it that people wish to do. It's a complex,\ndelicate area that risks data loss.\n\n> If that is still an area open for discussion: wouldn't it be better to\n> just specify LSN as it would allow resyncing standby across major lag\n> where the WAL to replay would be enormous? Given that we had\n> primary->standby where standby would be stuck on some LSN, right now\n> it would be:\n> 1) calculate backup manifest of desynced 10TB standby (how? using\n> which tool?) - even if possible, that means reading 10TB of data\n> instead of just putting a number, isn't it?\n> 2) backup primary with such incremental backup >= LSN\n> 3) copy the incremental backup to standby\n> 4) apply it to the impaired standby\n> 5) restart the WAL replay\n\nHmm. I wonder if this would even be a safe procedure. I admit that I\ncan't quite see a problem with it, but sometimes I'm kind of dumb.\n\n> Also maybe it's too early to ask, but wouldn't it be nice if we could\n> have an future option in pg_combinebackup to avoid double writes when\n> used from restore hosts (right now we need to first to reconstruct the\n> original datadir from full and incremental backups on host hosting\n> backups and then TRANSFER it again and on target host?). So something\n> like that could work well from restorehost: pg_combinebackup\n> /tmp/backup1 /tmp/incbackup2 /tmp/incbackup3 -O tar -o - | ssh\n> dbserver 'tar xvf -C /path/to/restored/cluster - ' . The bad thing is\n> that such a pipe prevents parallelism from day 1 and I'm afraid I do\n> not have a better easy idea on how to have both at the same time in\n> the long term.\n\nI don't think it's too early to ask for this, but I do think it's too\nearly for you to get it. ;-)\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 4 Oct 2023 15:33:27 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: trying again to get incremental backup"
},
{
"msg_contents": "On Tue, Oct 3, 2023 at 2:21 PM Robert Haas <[email protected]> wrote:\n> Here's a new patch set, also addressing Jakub's observation that\n> MINIMUM_VERSION_FOR_WAL_SUMMARIES needed updating.\n\nHere's yet another new version. In this version, I reversed the order\nof the first two patches, with the idea that what's now 0001 seems\nfairly reasonable as an independent commit, and could thus perhaps be\ncommitted sometime soon-ish. In the main patch, I added SGML\ndocumentation for pg_combinebackup. I also fixed the broken TAP tests\nso that they work, by basing them on pg_dump equivalence rather than\nfile-level equivalence. I'm sad to give up on testing the latter, but\nit seems to be unrealistic. I cleaned up a few other odds and ends,\ntoo. But, what exactly is the bigger picture for this patch in terms\nof moving forward? Here's a list of things that are on my mind:\n\n- I'd like to get the patch to mark the redo point in the WAL\ncommitted[1] and then reword this patch set to make use of that\ninfrastructure. Right now, we make a best effort to end WAL summaries\nat redo point boundaries, but it's racey, and sometimes we fail to do\nso. In theory that just has the effect of potentially making an\nincremental backup contain some extra blocks that it shouldn't really\nneed to contain, but I think it can actually lead to weird stalls,\nbecause when an incremental backup is taken, we have to wait until a\nWAL summary shows up that extends at least up to the start LSN of the\nbackup we're about to take. I believe all the logic in this area can\nbe made a good deal simpler and more reliable if that patch gets\ncommitted and this one reworked accordingly.\n\n- I would like some feedback on the generation of WAL summary files.\nRight now, I have it enabled by default, and summaries are kept for a\nweek. That means that, with no additional setup, you can take an\nincremental backup as long as the reference backup was taken in the\nlast week. File removal is governed by mtimes, so if you change the\nmtimes of your summary files or whack your system clock around, weird\nthings might happen. But obviously this might be inconvenient. Some\npeople might not want WAL summary files to be generated at all because\nthey don't care about incremental backup, and other people might want\nthem retained for longer, and still other people might want them to be\nnot removed automatically or removed automatically based on some\ncriteria other than mtime. I don't really know what's best here. I\ndon't think the default policy that the patches implement is\nespecially terrible, but it's just something that I made up and I\ndon't have any real confidence that it's wonderful. One point to be\nconsider here is that, if WAL summarization is enabled, checkpoints\ncan't remove WAL that isn't summarized yet. Mostly that's not a\nproblem, I think, because the WAL summarizer is pretty fast. But it\ncould increase disk consumption for some people. I don't think that we\nneed to worry about the summaries themselves being a problem in terms\nof space consumption; at least in all the cases I've tested, they're\njust not very big.\n\n- On a related note, I haven't yet tested this on a standby, which is\na thing that I definitely need to do. I don't know of a reason why it\nshouldn't be possible for all of this machinery to work on a standby\njust as it does on a primary, but then we need the WAL summarizer to\nrun there too, which could end up being a waste if nobody ever tries\nto take an incremental backup. 
I wonder how that should be reflected\nin the configuration. We could do something like what we've done for\narchive_mode, where on means \"only on if this is a primary\" and you\nhave to say always if you want it to run on standbys as well ... but\nI'm not sure if that's a design pattern that we really want to\nreplicate into more places. I'd be somewhat inclined to just make\nwhatever configuration parameters we need to configure this thing on\nthe primary also work on standbys, and you can set each server up as\nyou please. But I'm open to other suggestions.\n\n- We need to settle the question of whether to send the whole backup\nmanifest to the server or just the LSN. In a previous attempt at\nincremental backup, we decided the whole manifest was necessary,\nbecause flat-copying files could make new data show up with old LSNs.\nBut that version of the patch set was trying to find modified blocks\nby checking their LSNs individually, not by summarizing WAL. And since\nthe operations that flat-copy files are WAL-logged, the WAL summary\napproach seems to eliminate that problem - maybe an LSN (and the\nassociated TLI) is good enough now. This also relates to Jakub's\nquestion about whether this machinery could be used to fast-forward a\nstandby, which is not exactly a base backup but ... perhaps close\nenough? I'm somewhat inclined to believe that we can simplify to an\nLSN and TLI; however, if we do that, then we'll have big problems if\nlater we realize that we want the manifest for something after all. So\nif anybody thinks that there's a reason to keep doing what the patch\ndoes today -- namely, upload the whole manifest to the server --\nplease speak up.\n\n- It's regrettable that we don't have incremental JSON parsing; I\nthink that means anyone who has a backup manifest that is bigger than\n1GB can't use this feature. However, that's also a problem for the\nexisting backup manifest feature, and as far as I can see, we have no\ncomplaints about it. So maybe people just don't have databases with\nenough relations for that to be much of a live issue yet. I'm inclined\nto treat this as a non-blocker, although Andrew Dunstan tells me he\ndoes have a prototype for incremental JSON parsing so maybe that will\nland and we can use it here.\n\n- Right now, I have a hard-coded 60 second timeout for WAL\nsummarization. If you try to take an incremental backup and the WAL\nsummaries you need don't show up within 60 seconds, the backup times\nout. I think that's a reasonable default, but should it be\nconfigurable? If yes, should that be a GUC or, perhaps better, a\npg_basebackup option?\n\n- I'm curious what people think about the pg_walsummary tool that is\nincluded in 0006. I think it's going to be fairly important for\ndebugging, but it does feel a little bit bad to add a new binary for\nsomething pretty niche. Nevertheless, merging it into any other\nutility seems relatively awkward, so I'm inclined to think both that\nthis should be included in whatever finally gets committed and that it\nshould be a separate binary. I considered whether it should go in\ncontrib, but we seem to have moved to a policy that heavily favors\nlimiting contrib to extensions and loadable modules, rather than\nbinaries.\n\nClearly there's a good amount of stuff to sort out here, but we've\nstill got quite a bit of time left before feature freeze so I'd like\nto have a go at it. 
Please let me know your thoughts, if you have any.\n\n[1] http://postgr.es/m/CA+TgmoZAM24Ub=uxP0aWuWstNYTUJQ64j976FYJeVaMJ+qD0uw@mail.gmail.com\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com",
"msg_date": "Wed, 4 Oct 2023 16:08:29 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: trying again to get incremental backup"
},
{
"msg_contents": "On Wed, Oct 4, 2023 at 4:08 PM Robert Haas <[email protected]> wrote:\n> Clearly there's a good amount of stuff to sort out here, but we've\n> still got quite a bit of time left before feature freeze so I'd like\n> to have a go at it. Please let me know your thoughts, if you have any.\n\nApparently, nobody has any thoughts, but here's an updated patch set\nanyway. The main change, other than rebasing, is that I did a bunch\nmore documentation work on the main patch (0005). I'm much happier\nwith it now, although I expect it may need more adjustments here and\nthere as outstanding design questions get settled.\n\nAfter some thought, I think that it should be fine to commit 0001 and\n0002 as independent refactoring patches, and I plan to go ahead and do\nthat pretty soon unless somebody objects.\n\nThanks,\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com",
"msg_date": "Thu, 19 Oct 2023 12:05:35 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: trying again to get incremental backup"
},
{
"msg_contents": "On 10/19/23 12:05, Robert Haas wrote:\n> On Wed, Oct 4, 2023 at 4:08 PM Robert Haas <[email protected]> wrote:\n>> Clearly there's a good amount of stuff to sort out here, but we've\n>> still got quite a bit of time left before feature freeze so I'd like\n>> to have a go at it. Please let me know your thoughts, if you have any.\n> \n> Apparently, nobody has any thoughts, but here's an updated patch set\n> anyway. The main change, other than rebasing, is that I did a bunch\n> more documentation work on the main patch (0005). I'm much happier\n> with it now, although I expect it may need more adjustments here and\n> there as outstanding design questions get settled.\n> \n> After some thought, I think that it should be fine to commit 0001 and\n> 0002 as independent refactoring patches, and I plan to go ahead and do\n> that pretty soon unless somebody objects.\n\n0001 looks pretty good to me. The only thing I find a little troublesome \nis the repeated construction of file names with/without segment numbers \nin ResetUnloggedRelationsInDbspaceDir(), .e.g.:\n\n+\t\t\tif (segno == 0)\n+\t\t\t\tsnprintf(dstpath, sizeof(dstpath), \"%s/%u\",\n+\t\t\t\t\t\t dbspacedirname, relNumber);\n+\t\t\telse\n+\t\t\t\tsnprintf(dstpath, sizeof(dstpath), \"%s/%u.%u\",\n+\t\t\t\t\t\t dbspacedirname, relNumber, segno);\n\n\nIf this happened three times I'd definitely want a helper function, but \neven with two I think it would be a bit nicer.\n\n0002 is definitely a good idea. FWIW pgBackRest does this conversion but \nalso errors if it does not succeed. We have never seen a report of this \nerror happening in the wild, so I think it must be pretty rare if it \ndoes happen.\n\nRegards,\n-David\n\n\n",
"msg_date": "Thu, 19 Oct 2023 15:18:20 -0400",
"msg_from": "David Steele <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: trying again to get incremental backup"
},
{
"msg_contents": "On Thu, Oct 19, 2023 at 3:18 PM David Steele <[email protected]> wrote:\n> 0001 looks pretty good to me. The only thing I find a little troublesome\n> is the repeated construction of file names with/without segment numbers\n> in ResetUnloggedRelationsInDbspaceDir(), .e.g.:\n>\n> + if (segno == 0)\n> + snprintf(dstpath, sizeof(dstpath), \"%s/%u\",\n> + dbspacedirname, relNumber);\n> + else\n> + snprintf(dstpath, sizeof(dstpath), \"%s/%u.%u\",\n> + dbspacedirname, relNumber, segno);\n>\n>\n> If this happened three times I'd definitely want a helper function, but\n> even with two I think it would be a bit nicer.\n\nPersonally I think that would make the code harder to read rather than\neasier. I agree that repeating code isn't great, but this is a\nrelatively brief idiom and pretty self-explanatory. If other people\nagree with you I can change it, but to me it's not an improvement.\n\n> 0002 is definitely a good idea. FWIW pgBackRest does this conversion but\n> also errors if it does not succeed. We have never seen a report of this\n> error happening in the wild, so I think it must be pretty rare if it\n> does happen.\n\nCool, but ... how about the main patch set? It's nice to get some of\nthese refactoring bits and pieces out of the way, but if I spend the\neffort to work out what I think are the right answers to the remaining\ndesign questions for the main patch set and then find out after I've\ndone all that that you have massive objections, I'm going to be\nannoyed. I've been trying to get this feature into PostgreSQL for\nyears, and if I don't succeed this time, I want the reason to be\nsomething better than \"well, I didn't find out that David disliked X\nuntil five minutes before I was planning to type 'git push'.\"\n\nI'm not really concerned about detailed bug-hunting in the main\npatches just yet. The time for that will come. But if you have views\non how to resolve the design questions that I mentioned in a couple of\nemails back, or intend to advocate vigorously against the whole\nconcept for some reason, let's try to sort that out sooner rather than\nlater.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 19 Oct 2023 16:00:13 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: trying again to get incremental backup"
},
{
"msg_contents": "Hi Robert,\n\nOn Wed, Oct 4, 2023 at 10:09 PM Robert Haas <[email protected]> wrote:\n>\n> On Tue, Oct 3, 2023 at 2:21 PM Robert Haas <[email protected]> wrote:\n> > Here's a new patch set, also addressing Jakub's observation that\n> > MINIMUM_VERSION_FOR_WAL_SUMMARIES needed updating.\n>\n> Here's yet another new version.[..]\n\nOkay, so another good news - related to the patch version #4.\nNot-so-tiny stress test consisting of pgbench run for 24h straight\n(with incremental backups every 2h, with base of initial full backup),\nfollowed by two PITRs (one not using incremental backup and one using\nto to illustrate the performance point - and potentially spot any\nerrors in between). In both cases it worked fine. Pgbench has this\nbehaviour that it doesn't cause space growth over time - it produces\nlots of WAL instead. Some stats:\n\nSTART DBSIZE: ~3.3GB (pgbench -i -s 200 --partitions=8)\nEND DBSIZE: ~4.3GB\nRUN DURATION: 24h (pgbench -P 1 -R 100 -T 86400)\nWALARCHIVES-24h: 77GB\nFULL-DB-BACKUP-SIZE: 3.4GB\nINCREMENTAL-BACKUP-11-SIZE: 3.5GB\nEnv: Azure VM D4s (4VCPU), Debian 11, gcc 10.2, normal build (asserts\nand debug disabled)\nThe increments were taken every 2h just to see if they would fail for\nany reason - they did not.\n\nPITR RTO RESULTS (copy/pg_combinebackup time + recovery time):\n1. time to restore from fullbackup (+ recovery of 24h WAL[77GB]): 53s\n+ 4640s =~ 78min\n2. time to restore from fullbackup+incremental backup from 2h ago (+\nrecovery of 2h WAL [5.4GB]): 68s + 190s =~ 4min18s\n\nI could probably pre populate the DB with 1TB cold data (not touched\nto be touched pgbench at all), just for the sake of argument, and that\nwould probably could be demonstrated how space efficient the\nincremental backup can be, but most of time would be time wasted on\ncopying the 1TB here...\n\n> - I would like some feedback on the generation of WAL summary files.\n> Right now, I have it enabled by default, and summaries are kept for a\n> week. That means that, with no additional setup, you can take an\n> incremental backup as long as the reference backup was taken in the\n> last week.\n\nI've just noticed one thing when recovery is progress: is\nsummarization working during recovery - in the background - an\nexpected behaviour? I'm wondering about that, because after freshly\nrestored and recovered DB, one would need to create a *new* full\nbackup and only from that point new summaries would have any use?\nSample log:\n\n2023-10-20 11:10:02.288 UTC [64434] LOG: restored log file\n\"000000010000000200000022\" from archive\n2023-10-20 11:10:02.599 UTC [64434] LOG: restored log file\n\"000000010000000200000023\" from archive\n2023-10-20 11:10:02.769 UTC [64446] LOG: summarized WAL on TLI 1 from\n2/139B1130 to 2/239B1518\n2023-10-20 11:10:02.923 UTC [64434] LOG: restored log file\n\"000000010000000200000024\" from archive\n2023-10-20 11:10:03.193 UTC [64434] LOG: restored log file\n\"000000010000000200000025\" from archive\n2023-10-20 11:10:03.345 UTC [64432] LOG: restartpoint starting: wal\n2023-10-20 11:10:03.407 UTC [64446] LOG: summarized WAL on TLI 1 from\n2/239B1518 to 2/25B609D0\n2023-10-20 11:10:03.521 UTC [64434] LOG: restored log file\n\"000000010000000200000026\" from archive\n2023-10-20 11:10:04.429 UTC [64434] LOG: restored log file\n\"000000010000000200000027\" from archive\n\n>\n> - On a related note, I haven't yet tested this on a standby, which is\n> a thing that I definitely need to do. 
I don't know of a reason why it\n> shouldn't be possible for all of this machinery to work on a standby\n> just as it does on a primary, but then we need the WAL summarizer to\n> run there too, which could end up being a waste if nobody ever tries\n> to take an incremental backup. I wonder how that should be reflected\n> in the configuration. We could do something like what we've done for\n> archive_mode, where on means \"only on if this is a primary\" and you\n> have to say always if you want it to run on standbys as well ... but\n> I'm not sure if that's a design pattern that we really want to\n> replicate into more places. I'd be somewhat inclined to just make\n> whatever configuration parameters we need to configure this thing on\n> the primary also work on standbys, and you can set each server up as\n> you please. But I'm open to other suggestions.\n\nI'll try to play with some standby restores in future, stay tuned.\n\nRegards,\n-J.\n\n\n",
"msg_date": "Fri, 20 Oct 2023 15:20:10 +0200",
"msg_from": "Jakub Wartak <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: trying again to get incremental backup"
},
{
"msg_contents": "On 10/19/23 16:00, Robert Haas wrote:\n> On Thu, Oct 19, 2023 at 3:18 PM David Steele <[email protected]> wrote:\n>> 0001 looks pretty good to me. The only thing I find a little troublesome\n>> is the repeated construction of file names with/without segment numbers\n>> in ResetUnloggedRelationsInDbspaceDir(), .e.g.:\n>>\n>> + if (segno == 0)\n>> + snprintf(dstpath, sizeof(dstpath), \"%s/%u\",\n>> + dbspacedirname, relNumber);\n>> + else\n>> + snprintf(dstpath, sizeof(dstpath), \"%s/%u.%u\",\n>> + dbspacedirname, relNumber, segno);\n>>\n>>\n>> If this happened three times I'd definitely want a helper function, but\n>> even with two I think it would be a bit nicer.\n> \n> Personally I think that would make the code harder to read rather than\n> easier. I agree that repeating code isn't great, but this is a\n> relatively brief idiom and pretty self-explanatory. If other people\n> agree with you I can change it, but to me it's not an improvement.\n\nThen I'm fine with it as is.\n\n>> 0002 is definitely a good idea. FWIW pgBackRest does this conversion but\n>> also errors if it does not succeed. We have never seen a report of this\n>> error happening in the wild, so I think it must be pretty rare if it\n>> does happen.\n> \n> Cool, but ... how about the main patch set? It's nice to get some of\n> these refactoring bits and pieces out of the way, but if I spend the\n> effort to work out what I think are the right answers to the remaining\n> design questions for the main patch set and then find out after I've\n> done all that that you have massive objections, I'm going to be\n> annoyed. I've been trying to get this feature into PostgreSQL for\n> years, and if I don't succeed this time, I want the reason to be\n> something better than \"well, I didn't find out that David disliked X\n> until five minutes before I was planning to type 'git push'.\"\n\nI simply have not had time to look at the main patch set in any detail.\n\n> I'm not really concerned about detailed bug-hunting in the main\n> patches just yet. The time for that will come. But if you have views\n> on how to resolve the design questions that I mentioned in a couple of\n> emails back, or intend to advocate vigorously against the whole\n> concept for some reason, let's try to sort that out sooner rather than\n> later.\n\nIn my view this feature puts the cart way before the horse. I'd think \nhigher priority features might be parallelism, a backup repository, \nexpiration management, archiving, or maybe even a restore command.\n\nIt seems the only goal here is to make pg_basebackup a tool for external \nbackup software to use, which might be OK, but I don't believe this \nfeature really advances pg_basebackup as a usable piece of stand-alone \nsoftware. If people really think that start/stop backup is too \ncomplicated an interface how are they supposed to track page \nincrementals and get them to a place where pg_combinebackup can put them \nbackup together? If automation is required to use this feature, \nshouldn't pg_basebackup implement that automation?\n\nI have plenty of thoughts about the implementation as well, but I have a \nlot on my plate right now and I don't have time to get into it.\n\nI don't plan to stand in your way on this feature. I'm reviewing what \npatches I can out of courtesy and to be sure that nothing adjacent to \nyour work is being affected. My apologies if my reviews are not meeting \nyour expectations, but I am contributing as my time constraints allow.\n\nRegards,\n-David\n\n\n",
"msg_date": "Fri, 20 Oct 2023 11:30:40 -0400",
"msg_from": "David Steele <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: trying again to get incremental backup"
},
{
"msg_contents": "On Fri, Oct 20, 2023 at 11:30 AM David Steele <[email protected]> wrote:\n> Then I'm fine with it as is.\n\nOK, thanks.\n\n> In my view this feature puts the cart way before the horse. I'd think\n> higher priority features might be parallelism, a backup repository,\n> expiration management, archiving, or maybe even a restore command.\n>\n> It seems the only goal here is to make pg_basebackup a tool for external\n> backup software to use, which might be OK, but I don't believe this\n> feature really advances pg_basebackup as a usable piece of stand-alone\n> software. If people really think that start/stop backup is too\n> complicated an interface how are they supposed to track page\n> incrementals and get them to a place where pg_combinebackup can put them\n> backup together? If automation is required to use this feature,\n> shouldn't pg_basebackup implement that automation?\n>\n> I have plenty of thoughts about the implementation as well, but I have a\n> lot on my plate right now and I don't have time to get into it.\n>\n> I don't plan to stand in your way on this feature. I'm reviewing what\n> patches I can out of courtesy and to be sure that nothing adjacent to\n> your work is being affected. My apologies if my reviews are not meeting\n> your expectations, but I am contributing as my time constraints allow.\n\nSorry, I realize reading this response that I probably didn't do a\nvery good job writing that email and came across sounding like a jerk.\nPossibly, I actually am a jerk. Whether it just sounded like it or I\nactually am, I apologize. But your last paragraph here gets at my real\nquestion, which is whether you were going to try to block the feature.\nI recognize that we have different priorities when it comes to what\nwould make most sense to implement in PostgreSQL, and that's OK, or at\nleast, it's OK with me. I also don't have any particular expectation\nabout how much you should review the patch or in what level of detail,\nand I have sincerely appreciated your feedback thus far. If you are\nable to continue to provide more, that's great, and if that's not,\nwell, you're not obligated. What I was concerned about was whether\nthat review was a precursor to a vigorous attempt to keep the main\npatch from getting committed, because if that was going to be the\ncase, then I'd like to surface that conflict sooner rather than later.\nIt sounds like that's not an issue, which is great.\n\nAt the risk of drifting into the fraught question of what I *should*\nbe implementing rather than the hopefully-less-fraught question of\nwhether what I am actually implementing is any good, I see incremental\nbackup as a way of removing some of the use cases for the low-level\nbackup API. If you said \"but people still will have lots of reasons to\nuse it,\" I would agree; and if you said \"people can still screw things\nup with pg_basebackup,\" I would also agree. Nonetheless, most of the\ndisasters I've personally seen have stemmed from the use of the\nlow-level API rather than from the use of pg_basebackup, though there\nare exceptions. I also think a lot of the use of the low-level API is\ndriven by it being just too darn slow to copy the whole database, and\nincremental backup can help with that in some circumstances. Also, I\nhave worked fairly hard to try to make sure that if you misuse\npg_combinebackup, or fail to use it altogether, you'll get an error\nrather than silent data corruption. 
I would be interested to hear\nabout scenarios where the checks that I've implemented can be defeated\nby something that is plausibly described as stupidity rather than\nmalice. I'm not sure we can fix all such cases, but I'm very alert to\nthe horror that will befall me if user error looks exactly like a bug\nin the code. For my own sanity, we have to be able to distinguish\nthose cases. Moreover, we also need to be able to distinguish\nbackup-time bugs from reassembly-time bugs, which is why I've got the\npg_walsummary tool, and why pg_combinebackup has the ability to emit\nfairly detailed debugging output. I anticipate those things being\nuseful in investigating bug reports when they show up. I won't be too\nsurprised if it turns out that more work on sanity-checking and/or\ndebugging tools is needed, but I think your concern about people\nmisusing stuff is bang on target and I really want to do whatever we\ncan to avoid that when possible and detect it when it happens.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 23 Oct 2023 11:44:04 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: trying again to get incremental backup"
},
{
"msg_contents": "On Fri, Oct 20, 2023 at 9:20 AM Jakub Wartak\n<[email protected]> wrote:\n> Okay, so another good news - related to the patch version #4.\n> Not-so-tiny stress test consisting of pgbench run for 24h straight\n> (with incremental backups every 2h, with base of initial full backup),\n> followed by two PITRs (one not using incremental backup and one using\n> to to illustrate the performance point - and potentially spot any\n> errors in between). In both cases it worked fine.\n\nThis is great testing, thanks. What might be even better is to test\nwhether the resulting backups are correct, somehow.\n\n> I've just noticed one thing when recovery is progress: is\n> summarization working during recovery - in the background - an\n> expected behaviour? I'm wondering about that, because after freshly\n> restored and recovered DB, one would need to create a *new* full\n> backup and only from that point new summaries would have any use?\n\nActually, I think you could take an incremental backup relative to a\nfull backup from a previous timeline.\n\nBut the question of what summarization ought to do (or not do) during\nrecovery, and whether it ought to be enabled by default, and what the\nretention policy ought to be are very much live ones. Right now, it's\nenabled by default and keeps summaries for a week, assuming you don't\nreset your local clock and that it advances at the same speed as the\nuniverse's own clock. But that's all debatable. Any views?\n\nMeanwhile, here's a new patch set. I went ahead and committed the\nfirst two preparatory patches, as I said earlier that I intended to\ndo. And here I've adjusted the main patch, which is now 0003, for the\naddition of XLOG_CHECKPOINT_REDO, which permitted me to simplify a few\nthings. wal_summarize_mb now feels like a bit of a silly GUC --\npresumably you'd never care, unless you had an absolutely gigantic\ninter-checkpoint WAL distance. And if you have that, maybe you should\nalso have enough memory to summarize all that WAL. Or maybe not:\nperhaps it's better to write WAL summaries more than once per\ncheckpoint when checkpoints are really big. But I'm worried that the\nGUC will become a source of needless confusion for users. For most\npeople, it seems like emitting one summary per checkpoint should be\ntotally fine, and they might prefer a simple Boolean GUC,\nsummarize_wal = true | false, over this. I'm just not quite sure about\nthe corner cases.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com",
"msg_date": "Mon, 23 Oct 2023 15:34:28 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: trying again to get incremental backup"
},
{
"msg_contents": "On 10/23/23 11:44, Robert Haas wrote:\n> On Fri, Oct 20, 2023 at 11:30 AM David Steele <[email protected]> wrote:\n>>\n>> I don't plan to stand in your way on this feature. I'm reviewing what\n>> patches I can out of courtesy and to be sure that nothing adjacent to\n>> your work is being affected. My apologies if my reviews are not meeting\n>> your expectations, but I am contributing as my time constraints allow.\n> \n> Sorry, I realize reading this response that I probably didn't do a\n> very good job writing that email and came across sounding like a jerk.\n> Possibly, I actually am a jerk. Whether it just sounded like it or I\n> actually am, I apologize. \n\nThat was the way it came across, though I prefer to think it was \nunintentional. I certainly understand how frustrating dealing with a \nlarge and uncertain patch can be. Either way, I appreciate the apology.\n\nNow onward...\n\n> But your last paragraph here gets at my real\n> question, which is whether you were going to try to block the feature.\n> I recognize that we have different priorities when it comes to what\n> would make most sense to implement in PostgreSQL, and that's OK, or at\n> least, it's OK with me. \n\nThis seem perfectly natural to me.\n\n> I also don't have any particular expectation\n> about how much you should review the patch or in what level of detail,\n> and I have sincerely appreciated your feedback thus far. If you are\n> able to continue to provide more, that's great, and if that's not,\n> well, you're not obligated. What I was concerned about was whether\n> that review was a precursor to a vigorous attempt to keep the main\n> patch from getting committed, because if that was going to be the\n> case, then I'd like to surface that conflict sooner rather than later.\n> It sounds like that's not an issue, which is great.\n\nOverall I would say I'm not strongly for or against the patch. I think \nit will be very difficult to use in a manual fashion, but automation is \nthey way to go in general so that's not necessarily and argument against.\n\nHowever, this is an area of great interest to me so I do want to at \nleast make sure nothing is being impacted adjacent to the main goal of \nthis patch. So far I have seen no sign of that, but that has been a \nprimary goal of my reviews.\n\n> At the risk of drifting into the fraught question of what I *should*\n> be implementing rather than the hopefully-less-fraught question of\n> whether what I am actually implementing is any good, I see incremental\n> backup as a way of removing some of the use cases for the low-level\n> backup API. If you said \"but people still will have lots of reasons to\n> use it,\" I would agree; and if you said \"people can still screw things\n> up with pg_basebackup,\" I would also agree. Nonetheless, most of the\n> disasters I've personally seen have stemmed from the use of the\n> low-level API rather than from the use of pg_basebackup, though there\n> are exceptions. \n\nThis all makes sense to me.\n\n> I also think a lot of the use of the low-level API is\n> driven by it being just too darn slow to copy the whole database, and\n> incremental backup can help with that in some circumstances. \n\nI would argue that restore performance is *more* important than backup \nperformance and this patch is a step backward in that regard. Backups \nwill be faster and less space will be used in the repository, but \nrestore performance is going to suffer. 
If the deltas are very small the \ndifference will probably be negligible, but as the deltas get large (and \nespecially if there are a lot of them) the penalty will be more noticeable.\n\n> Also, I\n> have worked fairly hard to try to make sure that if you misuse\n> pg_combinebackup, or fail to use it altogether, you'll get an error\n> rather than silent data corruption. I would be interested to hear\n> about scenarios where the checks that I've implemented can be defeated\n> by something that is plausibly described as stupidity rather than\n> malice. I'm not sure we can fix all such cases, but I'm very alert to\n> the horror that will befall me if user error looks exactly like a bug\n> in the code. For my own sanity, we have to be able to distinguish\n> those cases. \n\nI was concerned with the difficulty of trying to stage the correct \nbackups for pg_combinebackup, not whether it would recognize that the \nneeded data was not available and then error appropriately. The latter \nis surmountable within pg_combinebackup but the former is left up to the \nuser.\n\n> Moreover, we also need to be able to distinguish\n> backup-time bugs from reassembly-time bugs, which is why I've got the\n> pg_walsummary tool, and why pg_combinebackup has the ability to emit\n> fairly detailed debugging output. I anticipate those things being\n> useful in investigating bug reports when they show up. I won't be too\n> surprised if it turns out that more work on sanity-checking and/or\n> debugging tools is needed, but I think your concern about people\n> misusing stuff is bang on target and I really want to do whatever we\n> can to avoid that when possible and detect it when it happens.\n\nThe ability of users to misuse tools is, of course, legendary, so that \nall sounds good to me.\n\nOne note regarding the patches. I feel like \nv5-0005-Prototype-patch-for-incremental-backup should be split to have \nthe WAL summarizer as one patch and the changes to base backup as a \nseparate patch.\n\nIt might not be useful to commit one without the other, but it would \nmake for an easier read. Just my 2c.\n\nRegards,\n-David\n\n\n",
"msg_date": "Mon, 23 Oct 2023 19:56:51 -0400",
"msg_from": "David Steele <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: trying again to get incremental backup"
},
{
"msg_contents": "On Mon, Oct 23, 2023 at 7:56 PM David Steele <[email protected]> wrote:\n> > I also think a lot of the use of the low-level API is\n> > driven by it being just too darn slow to copy the whole database, and\n> > incremental backup can help with that in some circumstances.\n>\n> I would argue that restore performance is *more* important than backup\n> performance and this patch is a step backward in that regard. Backups\n> will be faster and less space will be used in the repository, but\n> restore performance is going to suffer. If the deltas are very small the\n> difference will probably be negligible, but as the deltas get large (and\n> especially if there are a lot of them) the penalty will be more noticeable.\n\nI think an awful lot depends here on whether the repository is local\nor remote. If you have filesystem access to wherever the backups are\nstored anyway, I don't think that using pg_combinebackup to write out\na new data directory is going to be much slower than copying one data\ndirectory from the repository to wherever you'd actually use the\nbackup. It may be somewhat slower because we do need to access some\ndata in every involved backup, but I don't think it should be vastly\nslower because we don't have to read every backup in its entirety. For\neach file, we read the (small) header of the newest incremental file\nand every incremental file that precedes it until we find a full file.\nThen, we construct a map of which blocks need to be read from which\nsources and read only the required blocks from each source. If all the\nblocks are coming from a single file (because there are no incremental\nfor a certain file, or they contain no blocks) then we just copy the\nentire source file in one shot, which can be optimized using the same\ntricks we use elsewhere. Inevitably, this is going to read more data\nand do more random I/O than just a flat copy of a directory, but it's\nnot terrible. The overall amount of I/O should be a lot closer to the\nsize of the output directory than to the sum of the sizes of the input\ndirectories.\n\nNow, if the repository is remote, and you have to download all of\nthose backups first, and then run pg_combinebackup on them afterward,\nthat is going to be unpleasant, unless the incremental backups are all\nquite small. Possibly this could be addressed by teaching\npg_combinebackup to do things like accessing data over HTTP and SSH,\nand relatedly, looking inside tarfiles without needing them unpacked.\nFor now, I've left those as ideas for future improvement, but I think\npotentially they could address some of your concerns here. A\ndifficulty is that there are a lot of protocols that people might want\nto use to push bytes around, and it might be hard to keep up with the\nmarch of progress.\n\nI do agree, though, that there's no such thing as a free lunch. I\nwouldn't recommend to anyone that they plan to restore from a chain of\n100 incremental backups. Not only might it be slow, but the\nopportunities for something to go wrong are magnified. Even if you've\nautomated everything well enough that there's no human error involved,\nwhat if you've got a corrupted file somewhere? Maybe that's not likely\nin absolute terms, but the more files you've got, the more likely it\nbecomes. What I'd suggest someone do instead is periodically do\npg_combinebackup full_reference_backup oldest_incremental -o\nnew_full_reference_backup; rm -rf full_reference_backup; mv\nnew_full_reference_backup full_reference_backup. 
The new full\nreference backup is intended to still be usable for restoring\nincrementals based on the incremental it replaced. I hope that, if\npeople use the feature well, this should limit the need for really\nlong backup chains. I am sure, though, that some people will use it\npoorly. Maybe there's room for more documentation on this topic.\n\n> I was concerned with the difficulty of trying to stage the correct\n> backups for pg_combinebackup, not whether it would recognize that the\n> needed data was not available and then error appropriately. The latter\n> is surmountable within pg_combinebackup but the former is left up to the\n> user.\n\nIndeed.\n\n> One note regarding the patches. I feel like\n> v5-0005-Prototype-patch-for-incremental-backup should be split to have\n> the WAL summarizer as one patch and the changes to base backup as a\n> separate patch.\n>\n> It might not be useful to commit one without the other, but it would\n> make for an easier read. Just my 2c.\n\nYeah, maybe so. I'm not quite ready to commit to doing that split as\nof this writing but I will think about it and possibly do it.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 24 Oct 2023 08:29:49 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: trying again to get incremental backup"
},
{
"msg_contents": "On 04.10.23 22:08, Robert Haas wrote:\n> - I would like some feedback on the generation of WAL summary files.\n> Right now, I have it enabled by default, and summaries are kept for a\n> week. That means that, with no additional setup, you can take an\n> incremental backup as long as the reference backup was taken in the\n> last week. File removal is governed by mtimes, so if you change the\n> mtimes of your summary files or whack your system clock around, weird\n> things might happen. But obviously this might be inconvenient. Some\n> people might not want WAL summary files to be generated at all because\n> they don't care about incremental backup, and other people might want\n> them retained for longer, and still other people might want them to be\n> not removed automatically or removed automatically based on some\n> criteria other than mtime. I don't really know what's best here. I\n> don't think the default policy that the patches implement is\n> especially terrible, but it's just something that I made up and I\n> don't have any real confidence that it's wonderful.\n\nThe easiest answer is to have it off by default. Let people figure out \nwhat works for them. There are various factors like storage, network, \nserver performance, RTO that will determine what combination of full \nbackup, incremental backup, and WAL replay will satisfy someone's \nrequirements. I suppose tests could be set up to determine this to some \ndegree. But we could also start slow and let people figure it out \nthemselves. When pg_basebackup was added, it was also disabled by default.\n\nIf we think that 7d is a good setting, then I would suggest to consider, \nlike 10d. Otherwise, if you do a weekly incremental backup and you have \na time change or a hiccup of some kind one day, you lose your backup \nsequence.\n\nAnother possible answer is, like, 400 days? Because why not? What is a \nreasonable upper limit for this?\n\n> - It's regrettable that we don't have incremental JSON parsing; I\n> think that means anyone who has a backup manifest that is bigger than\n> 1GB can't use this feature. However, that's also a problem for the\n> existing backup manifest feature, and as far as I can see, we have no\n> complaints about it. So maybe people just don't have databases with\n> enough relations for that to be much of a live issue yet. I'm inclined\n> to treat this as a non-blocker,\n\nIt looks like each file entry in the manifest takes about 150 bytes, so \n1 GB would allow for 1024**3/150 = 7158278 files. That seems fine for now?\n\n> - Right now, I have a hard-coded 60 second timeout for WAL\n> summarization. If you try to take an incremental backup and the WAL\n> summaries you need don't show up within 60 seconds, the backup times\n> out. I think that's a reasonable default, but should it be\n> configurable? If yes, should that be a GUC or, perhaps better, a\n> pg_basebackup option?\n\nThe current user experience of pg_basebackup is that it waits possibly a \nlong time for a checkpoint, and there is an option to make it go faster, \nbut there is no timeout AFAICT. Is this substantially different? Could \nwe just let it wait forever?\n\nAlso, does waiting for checkpoint and WAL summarization happen in \nparallel? If so, what if it starts a checkpoint that might take 15 min \nto complete, and then after 60 seconds it kicks you off because the WAL \nsummarization isn't ready. That might be wasteful.\n\n> - I'm curious what people think about the pg_walsummary tool that is\n> included in 0006. 
I think it's going to be fairly important for\n> debugging, but it does feel a little bit bad to add a new binary for\n> something pretty niche.\n\nThis seems fine.\n\nIs the WAL summary file format documented anywhere in your patch set \nyet? My only thought was, maybe the file format could be human-readable \n(more like backup_label) to avoid this. But maybe not.\n\n\n\n",
"msg_date": "Tue, 24 Oct 2023 16:53:44 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: trying again to get incremental backup"
},
{
"msg_contents": "On Tue, Oct 24, 2023 at 10:53 AM Peter Eisentraut <[email protected]> wrote:\n> The easiest answer is to have it off by default. Let people figure out\n> what works for them. There are various factors like storage, network,\n> server performance, RTO that will determine what combination of full\n> backup, incremental backup, and WAL replay will satisfy someone's\n> requirements. I suppose tests could be set up to determine this to some\n> degree. But we could also start slow and let people figure it out\n> themselves. When pg_basebackup was added, it was also disabled by default.\n>\n> If we think that 7d is a good setting, then I would suggest to consider,\n> like 10d. Otherwise, if you do a weekly incremental backup and you have\n> a time change or a hiccup of some kind one day, you lose your backup\n> sequence.\n>\n> Another possible answer is, like, 400 days? Because why not? What is a\n> reasonable upper limit for this?\n\nIn concept, I don't think this should even be time-based. What you\nshould do is remove WAL summaries once you know that you've taken as\nmany incremental backups that might use them as you're ever going to\ndo. But PostgreSQL itself doesn't have any way of knowing what your\nintended backup patterns are. If your incremental backup fails on\nMonday night and you run it manually on Tuesday morning, you might\nstill rerun it as an incremental backup. If it fails every night for a\nmonth and you finally realize and decide to intervene manually, maybe\nyou want a new full backup at that point. It's been a month. But on\nthe other hand maybe you don't. There's no time-based answer to this\nquestion that is really correct, and I think it's quite possible that\nyour backup software might want to shut off time-based deletion\naltogether and make its own decisions about when to nuke summaries.\nHowever, I also don't think that's a great default setting. It could\neasily lead to people wasting a bunch of disk space for no reason.\n\nAs far as the 7d value, I figured that nighty incremental backups\nwould be fairly common. If we think weekly incremental backups would\nbe common, then pushing this out to 10d would make sense. While\nthere's no reason you couldn't take an annual incremental backup, and\nthus want a 400d value, it seems like a pretty niche use case.\n\nNote that whether to remove summaries is a separate question from\nwhether to generate them in the first place. Right now, I have\nwal_summarize_mb controlling whether they get generated in the first\nplace, but as I noted in another recent email, that isn't an entirely\nsatisfying solution.\n\n> It looks like each file entry in the manifest takes about 150 bytes, so\n> 1 GB would allow for 1024**3/150 = 7158278 files. That seems fine for now?\n\nI suspect a few people have more files than that. They'll just have to\nwait to use this feature until we get incremental JSON parsing (or\nundo the decision to use JSON for the manifest).\n\n> The current user experience of pg_basebackup is that it waits possibly a\n> long time for a checkpoint, and there is an option to make it go faster,\n> but there is no timeout AFAICT. Is this substantially different? Could\n> we just let it wait forever?\n\nWe could. I installed the timeout because the first versions of the\nfeature were buggy, and I didn't like having my tests hang forever\nwith no indication of what had gone wrong. 
At least in my experience\nso far, the time spent waiting for WAL summarization is typically\nquite short, because only the WAL that needs to be summarized is\nwhatever was emitted since the last time it woke up up through the\nstart LSN of the backup. That's probably not much, and the next time\nthe summarizer wakes up, the file should appear within moments. So,\nit's a little different from the checkpoint case, where long waits are\nexpected.\n\n> Also, does waiting for checkpoint and WAL summarization happen in\n> parallel? If so, what if it starts a checkpoint that might take 15 min\n> to complete, and then after 60 seconds it kicks you off because the WAL\n> summarization isn't ready. That might be wasteful.\n\nIt is not parallel. The trouble is, we don't really have any way to\nknow whether WAL summarization is going to fail for whatever reason.\nWe don't expect that to happen, but if somebody changes the\npermissions on the WAL summary directory or attaches gdb to the WAL\nsummarizer process or something of that sort, it might.\n\nWe could check at the outset whether we seem to be really far behind\nand emit a warning. For instance, if we're 1TB behind on WAL\nsummarization when the checkpoint is requested, chances are something\nis busted and we're probably not going to catch up any time soon. We\ncould warn the user about that and let them make their own decision\nabout whether to cancel. But, that idea won't help in unattended\noperation, and the threshold for \"really far behind\" is not very\nclear. It might be better to wait until we get more experience with\nhow things actually fail before doing too much engineering here, but\nI'm also open to suggestions.\n\n> Is the WAL summary file format documented anywhere in your patch set\n> yet? My only thought was, maybe the file format could be human-readable\n> (more like backup_label) to avoid this. But maybe not.\n\nThe comment in blkreftable.c just above \"#define BLOCKS_PER_CHUNK\"\ngives an overview of the format. I think that we probably don't want\nto convert to a text format, because this format is extremely\nspace-efficient and very convenient to transfer between disk and\nmemory. We don't want to run out of memory when summarizing large\nranges of WAL, or when taking an incremental backup that requires\ncombining many individual summaries into a combined summary that tells\nus what needs to be included in the backup.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 24 Oct 2023 12:08:12 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: trying again to get incremental backup"
},
{
"msg_contents": "On 2023-10-24 Tu 12:08, Robert Haas wrote:\n>\n>> It looks like each file entry in the manifest takes about 150 bytes, so\n>> 1 GB would allow for 1024**3/150 = 7158278 files. That seems fine for now?\n> I suspect a few people have more files than that. They'll just have to Maybe someone on the list can see some way o\n> wait to use this feature until we get incremental JSON parsing (or\n> undo the decision to use JSON for the manifest).\n\n\nRobert asked me to work on this quite some time ago, and most of this \nwork was done last year.\n\nHere's my WIP for an incremental JSON parser. It works and passes all \nthe usual json/b tests. It implements Algorithm 4.3 in the Dragon Book. \nThe reason I haven't posted it before is that it's about 50% slower in \npure parsing speed than the current recursive descent parser in my \ntesting. I've tried various things to make it faster, but haven't made \nmuch impact. One of my colleagues is going to take a fresh look at it, \nbut maybe someone on the list can see where we can save some cycles.\n\nIf we can't make it faster, I guess we could use the RD parser for \nnon-incremental cases and only use the non-RD parser for incremental, \nalthough that would be a bit sad. However, I don't think we can make the \nRD parser suitable for incremental parsing - there's too much state \ninvolved in the call stack.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Wed, 25 Oct 2023 07:53:48 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: trying again to get incremental backup"
},
{
"msg_contents": "On Wed, Oct 25, 2023 at 7:54 AM Andrew Dunstan <[email protected]> wrote:\n> Robert asked me to work on this quite some time ago, and most of this\n> work was done last year.\n>\n> Here's my WIP for an incremental JSON parser. It works and passes all\n> the usual json/b tests. It implements Algorithm 4.3 in the Dragon Book.\n> The reason I haven't posted it before is that it's about 50% slower in\n> pure parsing speed than the current recursive descent parser in my\n> testing. I've tried various things to make it faster, but haven't made\n> much impact. One of my colleagues is going to take a fresh look at it,\n> but maybe someone on the list can see where we can save some cycles.\n>\n> If we can't make it faster, I guess we could use the RD parser for\n> non-incremental cases and only use the non-RD parser for incremental,\n> although that would be a bit sad. However, I don't think we can make the\n> RD parser suitable for incremental parsing - there's too much state\n> involved in the call stack.\n\nYeah, this is exactly why I didn't want to use JSON for the backup\nmanifest in the first place. Parsing such a manifest incrementally is\ncomplicated. If we'd gone with my original design where the manifest\nconsisted of a bunch of lines each of which could be parsed\nseparately, we'd already have incremental parsing and wouldn't be\nfaced with these difficult trade-offs.\n\nUnfortunately, I'm not in a good position either to figure out how to\nmake your prototype faster, or to evaluate how painful it is to keep\nboth in the source tree. It's probably worth considering how likely it\nis that we'd be interested in incremental JSON parsing in other cases.\nMaintaining two JSON parsers is probably not a lot of fun regardless,\nbut if each of them gets used for a bunch of things, that feels less\nbad than if one of them gets used for a bunch of things and the other\none only ever gets used for backup manifests. Would we be interested\nin JSON-format database dumps? Incrementally parsing JSON LOBs? Either\nseems tenuous, but those are examples of the kind of thing that could\nmake us happy to have incremental JSON parsing as a general facility.\n\nIf nobody's very excited by those kinds of use cases, then this just\nboils down to whether we want to (a) accept that users with very large\nnumbers of relation files won't be able to use pg_verifybackup or\nincremental backup, (b) accept that we're going to maintain a second\nJSON parser just to enable that use cas and with no other benefit, or\n(c) undertake to change the manifest format to something that is\nstraightforward to parse incrementally. I think (a) is reasonable\nshort term, but at some point I think we should do better. I'm not\nreally that enthused about (c) because it means more work for me and\npossibly more arguing, but if (b) is going to cause a lot of hassle\nthen we might need to consider it.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 25 Oct 2023 09:05:36 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: trying again to get incremental backup"
},
{
"msg_contents": "\nOn 2023-10-25 We 09:05, Robert Haas wrote:\n> On Wed, Oct 25, 2023 at 7:54 AM Andrew Dunstan <[email protected]> wrote:\n>> Robert asked me to work on this quite some time ago, and most of this\n>> work was done last year.\n>>\n>> Here's my WIP for an incremental JSON parser. It works and passes all\n>> the usual json/b tests. It implements Algorithm 4.3 in the Dragon Book.\n>> The reason I haven't posted it before is that it's about 50% slower in\n>> pure parsing speed than the current recursive descent parser in my\n>> testing. I've tried various things to make it faster, but haven't made\n>> much impact. One of my colleagues is going to take a fresh look at it,\n>> but maybe someone on the list can see where we can save some cycles.\n>>\n>> If we can't make it faster, I guess we could use the RD parser for\n>> non-incremental cases and only use the non-RD parser for incremental,\n>> although that would be a bit sad. However, I don't think we can make the\n>> RD parser suitable for incremental parsing - there's too much state\n>> involved in the call stack.\n> Yeah, this is exactly why I didn't want to use JSON for the backup\n> manifest in the first place. Parsing such a manifest incrementally is\n> complicated. If we'd gone with my original design where the manifest\n> consisted of a bunch of lines each of which could be parsed\n> separately, we'd already have incremental parsing and wouldn't be\n> faced with these difficult trade-offs.\n>\n> Unfortunately, I'm not in a good position either to figure out how to\n> make your prototype faster, or to evaluate how painful it is to keep\n> both in the source tree. It's probably worth considering how likely it\n> is that we'd be interested in incremental JSON parsing in other cases.\n> Maintaining two JSON parsers is probably not a lot of fun regardless,\n> but if each of them gets used for a bunch of things, that feels less\n> bad than if one of them gets used for a bunch of things and the other\n> one only ever gets used for backup manifests. Would we be interested\n> in JSON-format database dumps? Incrementally parsing JSON LOBs? Either\n> seems tenuous, but those are examples of the kind of thing that could\n> make us happy to have incremental JSON parsing as a general facility.\n>\n> If nobody's very excited by those kinds of use cases, then this just\n> boils down to whether we want to (a) accept that users with very large\n> numbers of relation files won't be able to use pg_verifybackup or\n> incremental backup, (b) accept that we're going to maintain a second\n> JSON parser just to enable that use cas and with no other benefit, or\n> (c) undertake to change the manifest format to something that is\n> straightforward to parse incrementally. I think (a) is reasonable\n> short term, but at some point I think we should do better. I'm not\n> really that enthused about (c) because it means more work for me and\n> possibly more arguing, but if (b) is going to cause a lot of hassle\n> then we might need to consider it.\n\n\nI'm not too worried about the maintenance burden. The RD routines were \nadded in March 2013 (commit a570c98d7fa) and have hardly changed since \nthen. The new code is not ground-breaking - it's just a different (and \nfairly well known) way of doing the same thing. 
I'd be happier if we \ncould make it faster, but maybe it's just a fact that keeping an \nexplicit stack, which is how this works, is slower.\n\nI wouldn't at all be surprised if there were other good uses for \nincremental JSON parsing, including some you've identified.\n\nThat said, I agree that JSON might not be the best format for backup \nmanifests, but maybe that ship has sailed.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Wed, 25 Oct 2023 10:33:49 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: trying again to get incremental backup"
},
{
"msg_contents": "On Wed, Oct 25, 2023 at 10:33 AM Andrew Dunstan <[email protected]> wrote:\n> I'm not too worried about the maintenance burden.\n>\n> That said, I agree that JSON might not be the best format for backup\n> manifests, but maybe that ship has sailed.\n\nI think it's a decision we could walk back if we had a good enough\nreason, but it would be nicer if we didn't have to, because what we\nhave right now is working. If we change it for no real reason, we\nmight introduce new bugs, and at least in theory, incompatibility with\nthird-party tools that parse the existing format. If you think we can\nlive with the additional complexity in the JSON parsing stuff, I'd\nrather go that way.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 25 Oct 2023 11:24:25 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: trying again to get incremental backup"
},
{
"msg_contents": "On Tue, Oct 24, 2023 at 8:29 AM Robert Haas <[email protected]> wrote:\n> Yeah, maybe so. I'm not quite ready to commit to doing that split as\n> of this writing but I will think about it and possibly do it.\n\nI have done this. Here's v7.\n\nThis version also includes several new TAP tests for the main patch,\nsome of which were inspired by our discussion. It also includes SGML\ndocumentation for pg_walsummary.\n\nNew tests:\n003_timeline.pl tests the case where the prior backup for an\nincremental backup was taken on an earlier timeline.\n004_manifest.pl tests the manifest-related options for pg_combinebackup.\n005_integrity.pl tests the sanity checks that prevent combining a\nbackup with the wrong prior backup.\n\nOverview of the new organization of the patch set:\n0001 - preparatory refactoring of basebackup.c, changing the algorithm\nthat we use to decide which files have checksums\n0002 - code movement only. makes it possible to reuse parse_manifest.c\n0003 - add the WAL summarizer process, but useless on its own\n0004 - add incremental backup, making use of 0003\n0005 - add pg_walsummary debugging tool\n\nNotes:\n- I suspect that 0003 is the most likely to have serious bugs, followed by 0004.\n- See XXX comments in the commit messages for some known open issues.\n- Still looking for more comments on\nhttp://postgr.es/m/CA+TgmoYdPS7a4eiqAFCZ8dr4r3-O0zq1LvTO5drwWr+7wHQaSQ@mail.gmail.com\nand other recent emails where design questions came up\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com",
"msg_date": "Wed, 25 Oct 2023 13:38:25 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: trying again to get incremental backup"
},
{
"msg_contents": "\nOn 2023-10-25 We 11:24, Robert Haas wrote:\n> On Wed, Oct 25, 2023 at 10:33 AM Andrew Dunstan <[email protected]> wrote:\n>> I'm not too worried about the maintenance burden.\n>>\n>> That said, I agree that JSON might not be the best format for backup\n>> manifests, but maybe that ship has sailed.\n> I think it's a decision we could walk back if we had a good enough\n> reason, but it would be nicer if we didn't have to, because what we\n> have right now is working. If we change it for no real reason, we\n> might introduce new bugs, and at least in theory, incompatibility with\n> third-party tools that parse the existing format. If you think we can\n> live with the additional complexity in the JSON parsing stuff, I'd\n> rather go that way.\n>\n\nOK, I'll go with that. It will actually be a bit less invasive than the \npatch I posted.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Wed, 25 Oct 2023 15:17:35 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: trying again to get incremental backup"
},
{
"msg_contents": "On Wed, Oct 25, 2023 at 3:17 PM Andrew Dunstan <[email protected]> wrote:\n> OK, I'll go with that. It will actually be a bit less invasive than the\n> patch I posted.\n\nWhy's that?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 25 Oct 2023 15:19:59 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: trying again to get incremental backup"
},
{
"msg_contents": "\nOn 2023-10-25 We 15:19, Robert Haas wrote:\n> On Wed, Oct 25, 2023 at 3:17 PM Andrew Dunstan <[email protected]> wrote:\n>> OK, I'll go with that. It will actually be a bit less invasive than the\n>> patch I posted.\n> Why's that?\n>\n\nBecause we won't be removing the RD parser.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Thu, 26 Oct 2023 06:59:00 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: trying again to get incremental backup"
},
{
"msg_contents": "On Thu, Oct 26, 2023 at 6:59 AM Andrew Dunstan <[email protected]> wrote:\n> Because we won't be removing the RD parser.\n\nAh, OK.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 26 Oct 2023 09:24:49 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: trying again to get incremental backup"
},
{
"msg_contents": "On Tue, Oct 24, 2023 at 12:08 PM Robert Haas <[email protected]> wrote:\n> Note that whether to remove summaries is a separate question from\n> whether to generate them in the first place. Right now, I have\n> wal_summarize_mb controlling whether they get generated in the first\n> place, but as I noted in another recent email, that isn't an entirely\n> satisfying solution.\n\nI did some more research on this. My conclusion is that I should\nremove wal_summarize_mb and just have a GUC summarize_wal = on|off\nthat controls whether the summarizer runs at all. There will be one\nsummary file per checkpoint, no matter how far apart checkpoints are\nor how large the summary gets. Below I'll explain the reasoning; let\nme know if you disagree.\n\nWhat I describe above would be a bad plan if it were realistically\npossible for a summary file to get so large that it might run the\nmachine out of memory either when producing it or when trying to make\nuse of it for an incremental backup. This seems to be a somewhat\ndifficult scenario to create. So far, I haven't been able to generate\nWAL summary files more than a few tens of megabytes in size, even when\nsummarizing 50+ GB of WAL per summary file. One reason why it's hard\nto produce large summary files is because, for a single relation fork,\nthe WAL summary size converges to 1 bit per modified block when the\nnumber of modified blocks is large. This means that, even if you have\na terabyte sized relation, you're looking at no more than perhaps 20MB\nof summary data no matter how much of it gets modified. Now, somebody\ncould have a 30TB relation and then if they modify the whole thing\nthey could have the better part of a gigabyte of summary data for that\nrelation, but if you've got a 30TB table you probably have enough\nmemory that that's no big deal.\n\nBut, what if you have multiple relations? I initialized pgbench with a\nscale factor of 30000 and also with 30000 partitions and did a 1-hour\nrun. I got 4 checkpoints during that time and each one produced an\napproximately 16MB summary file. The efficiency here drops\nconsiderably. For example, one of the files is 16495398 bytes and\nrecords information on 7498403 modified blocks, which works out to\nabout 2.2 bytes per modified block. That's more than an order of\nmagnitude worse than what I got in the single-relation case, where the\nsummary file didn't even use two *bits* per modified block. But here\nagain, the file just isn't that big in absolute terms. To get a 1GB+\nWAL summary file, you'd need to modify millions of relation forks,\nmaybe tens of millions, and most installations aren't even going to\nhave that many relation forks, let alone be modifying them all\nfrequently.\n\nMy conclusion here is that it's pretty hard to have a database where\nWAL summarization is going to use too much memory. I wouldn't be\nterribly surprised if there are some extreme cases where it happens,\nbut those databases probably aren't great candidates for incremental\nbackup anyway. They're probably databases with millions of relations\nand frequent, widely-scattered modifications to those relations. And\nif you have that kind of high turnover rate then incremental backup\nisn't going to as helpful anyway, so there's probably no reason to\nenable WAL summarization in the first place. 
Maybe if you have that\nplus in the same database cluster you have 100TB of completely\nstatic data that is never modified, and if you also do all of this on\na pretty small machine, then you can find a case where incremental\nbackup would have worked well but for the memory consumed by WAL\nsummarization.\n\nBut I think that's sufficiently niche that the current patch shouldn't\nconcern itself with such cases. If we find that they're common enough\nto worry about, we might eventually want to do something to mitigate\nthem, but whether that thing looks anything like wal_summarize_mb\nseems pretty unclear. So I conclude that it's a mistake to include\nthat GUC as currently designed and propose to replace it with a\nBoolean as described above.\n\nComments?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 30 Oct 2023 10:45:03 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: trying again to get incremental backup"
},
{
"msg_contents": "While reviewing this thread today, I realized that I never responded\nto this email. That was inadvertent; my apologies.\n\nOn Wed, Jun 14, 2023 at 4:34 PM Matthias van de Meent\n<[email protected]> wrote:\n> Nice, I like this idea.\n\nCool.\n\n> Skimming through the 7th patch, I see claims that FSM is not fully\n> WAL-logged and thus shouldn't be tracked, and so it indeed doesn't\n> track those changes.\n> I disagree with that decision: we now have support for custom resource\n> managers, which may use the various forks for other purposes than\n> those used in PostgreSQL right now. It would be a shame if data is\n> lost because of the backup tool ignoring forks because the PostgreSQL\n> project itself doesn't have post-recovery consistency guarantees in\n> that fork. So, unless we document that WAL-logged changes in the FSM\n> fork are actually not recoverable from backup, regardless of the type\n> of contents, we should still keep track of the changes in the FSM fork\n> and include the fork in our backups or only exclude those FSM updates\n> that we know are safe to ignore.\n\nI'm not sure what to do about this problem. I don't think any data\nwould be *lost* in the scenario that you mention; what I think would\nhappen is that the FSM forks would be backed up in their entirety even\nif they were owned by some other table AM or index AM that was\nWAL-logging all changes to whatever it was storing in that fork. So I\nthink that there is not a correctness issue here but rather an\nefficiency issue.\n\nIt would still be nice to fix that somehow, but I don't see how to do\nit. It would be easy to make the WAL summarizer stop treating the FSM\nas a special case, but there's no way for basebackup_incremental.c to\nknow whether a particular relation fork is for the heap AM or some\nother AM that handles WAL-logging differently. It can't for example\nexamine pg_class; it's not connected to any database, let alone every\ndatabase. So we have to either trust that the WAL for the FSM is\ncorrect and complete in all cases, or assume that it isn't in any\ncase. And the former doesn't seem like a safe or wise assumption given\nhow the heap AM works.\n\nI think the reality here is unfortunately that we're missing a lot of\nimportant infrastructure to really enable a multi-table-AM world. The\nheap AM, and every other table AM, should include a metapage so we can\ntell what we're looking at just by examining the disk files. Relation\nforks don't scale and should be replaced with some better system that\ndoes. We should have at least two table AMs in core that are fully\nsupported and do truly useful things. Until some of that stuff (and\nprobably a bunch of other things) get sorted out, out-of-core AMs are\ngoing to have to remain second-class citizens to some degree.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 30 Oct 2023 12:01:22 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: trying again to get incremental backup"
},
{
"msg_contents": "On Thu, Sep 28, 2023 at 6:22 AM Jakub Wartak\n<[email protected]> wrote:\n> If that is still an area open for discussion: wouldn't it be better to\n> just specify LSN as it would allow resyncing standby across major lag\n> where the WAL to replay would be enormous? Given that we had\n> primary->standby where standby would be stuck on some LSN, right now\n> it would be:\n> 1) calculate backup manifest of desynced 10TB standby (how? using\n> which tool?) - even if possible, that means reading 10TB of data\n> instead of just putting a number, isn't it?\n> 2) backup primary with such incremental backup >= LSN\n> 3) copy the incremental backup to standby\n> 4) apply it to the impaired standby\n> 5) restart the WAL replay\n\nAs you may be able to tell from the flurry of posts and new patch\nsets, I'm trying hard to sort out the remaining open items that\npertain to this patch set, and I'm now back to thinking about this\none.\n\nTL;DR: I think the idea has some potential, but there are some\npitfalls that I'm not sure how to address.\n\nI spent some time looking at how we currently use the data from the\nbackup manifest. Currently, we do two things with it. First, when\nwe're backing up each file, we check whether it's present in the\nbackup manifest and, if not, we back it up in full. This actually\nfeels fairly poor. If it makes any difference at all, then presumably\nthe underlying algorithm is buggy and needs to be fixed. Maybe that\nshould be ripped out altogether or turned into some kind of sanity\ncheck that causes a big explosion if it fails. Second, we check\nwhether the WAL ranges reported by the client match up with the\ntimeline history of the server (see PrepareForIncrementalBackup). This\nset of sanity checks seems fairly important to me, and I'd regret\ndiscarding them. I think there's some possibility that they might\ncatch user error, like where somebody promotes multiple standbys and\nmaybe they even get the same timeline on more than one of them, and\nthen confusion might ensue. I also think that there's a real\npossibility that they might make it easier to track down bugs in my\ncode, even if those bugs aren't necessarily timeline-related. If (or\nmore realistically when) somebody ends up with a corrupted cluster\nafter running pg_combinebackup, we're going to need to figure out\nwhether that corruption is the result of bugs (and if so where they\nare) or user error (and if so what it was). The most obvious ways of\nending up with a corrupted cluster are (1) taking an incremental\nbackup against a prior backup that is not in the history of the server\nfrom which the backup is taken or (2) combining an incremental backup\nthe wrong prior backup, so whatever sanity checks we can have that\nwill tend to prevent those kinds of mistakes seem like a really good\nidea.\n\nAnd those kinds of checks seem relevant here, too. Consider that it\nwouldn't be valid to use pg_combinebackup to fast-forward a standby\nserver if the incremental backup's backup-end-LSN preceded the standby\nserver's minimum recovery point. Imagine that you have a standby whose\nlast checkpoint's redo location was at LSN 2/48. Being the\nenterprising DBA that you are, you make a note of that LSN and go take\nan incremental backup based on it. 
You then stop the standby server\nand try to apply the incremental backup to fast-forward the standby.\nWell, it's possible that in the meanwhile the standby actually caught\nup, and now has a minimum recovery point that follows the\nbackup-end-LSN of your incremental backup. In that case, you can't\nlegally use that incremental backup to fast-forward that standby, but\nno code I've yet written would be smart enough to figure that out. Or,\nmaybe you (or some other DBA on your team) got really excited and\nactually promoted that standby meanwhile, and now it's not even on the\nsame timeline any more. In the \"normal\" case where you take an\nincremental backup based on an earlier base backup, these kinds of\nproblems are detectable, and it seems to me that if we want to enable\nthis kind of use case, it would be pretty smart to have a plan to\ndetect similar mistakes here. I don't, currently, but maybe there is\none.\n\nAnother practical problem here is that, right now, pg_combinebackup\ndoesn't have an in-place mode. It knows how to take a bunch of input\nbackups and write out an output backup, but that output backup needs\nto go into a new, fresh directory (or directories plural, if there are\nuser-defined tablespaces). I had previously considered adding such a\nmode, but the idea I had at the time wouldn't have worked for this\ncase. I imagined that someone might want to run \"pg_combinebackup\n--in-place full incr\" and clobber the contents of the incr directory\nwith the output, basically discarding the incremental backup you took\nin favor of a full backup that could have been taken at the same point\nin time. But here, you'd want to clobber the *first* input to\npg_combinebackup, not the last one, so if we want to add something\nlike this, the UI needs some thought.\n\nOne thing that I find quite scary about such a mode is that if you\ncrash mid-way through, you're in a lot of trouble. In the case that I\nhad previous contemplated -- overwrite the last incremental with the\nreconstructed full backup -- you *might* be able to make it crash safe\nby writing out the full files for each incremental file, fsyncing\neverything, then removing all of the incremental files and fsyncing\nagain. The idea would be that if you crash midway through it's OK to\njust repeat whatever you were trying to do before the crash and if it\nsucceeds the second time then all is well. If, for a given file, there\nare both incremental and non-incremental versions, then the second\nattempt should remove and recreate the non-incremental version from\nthe incremental version. If there's only a non-incremental version, it\ncould be that the previous attempt got far enough to remove the\nincremental file, but in that case the full file that we now have\nshould be the same thing that we would produce if we did the operation\nnow. It all sounds a little scary, but maybe it's OK. And as long as\nyou don't remove the this-is-an-incremental-backup markers from the\nbackup_label file until you've done everything else, you can tell\nwhether you've ever successfully completed the reassembly or not. But\nif you're using a hypothetical overwrite mode to overwrite the first\ninput rather than the last one, well, it looks like a valid data\ndirectory already, and if you replace a bunch of files and then crash,\nit still does, but it's not any more, really. 
I'm not sure I've really\nwrapped my head around all of the cases here, but it does feel like\nthere are some new ways to go wrong.\n\nOne thing I also realized when thinking about this is that you could\nprobably hose yourself with the patch set as it stands today by taking\na full backup, downgrading to wal_level=minimal for a while, doing\nsome WAL-skipping operations, upgrading to a higher WAL-level again,\nand then taking an incremental backup. I think the solution to that is\nprobably for the WAL summarizer to refuse to run if wal_level=minimal.\nThen there would be a gap in the summary files which an incremental\nbackup attempt would detect.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
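To make the retry argument above concrete, here is a minimal sketch of the per-file swap it implies. This is not code from the patch set: the "INCREMENTAL." file-name prefix and the reconstruction callback are assumptions for illustration only. The point is simply that the full version of a file becomes durable before its incremental source is removed, so a crashed run can always be repeated from the top.

#include <fcntl.h>
#include <stdbool.h>
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

/* Caller-supplied routine that rebuilds the full file from the incremental
 * file plus the older backups; a placeholder for this sketch. */
typedef bool (*reconstruct_fn) (const char *incr_path, const char *full_path);

static void
fsync_name(const char *path)
{
    int         fd = open(path, O_RDONLY);

    if (fd >= 0)
    {
        (void) fsync(fd);
        (void) close(fd);
    }
}

/*
 * Replace dir/INCREMENTAL.name with a reconstructed dir/name.  The full
 * version is made durable before the incremental version is removed, so a
 * crash at any point just means the whole operation can be rerun.
 */
bool
replace_incremental_file(const char *dir, const char *name,
                         reconstruct_fn reconstruct)
{
    char        incr_path[4096];
    char        full_path[4096];
    struct stat st;

    snprintf(incr_path, sizeof(incr_path), "%s/INCREMENTAL.%s", dir, name);
    snprintf(full_path, sizeof(full_path), "%s/%s", dir, name);

    /* Only the full file is left: a previous attempt already finished it. */
    if (stat(incr_path, &st) != 0)
        return true;

    /* Rebuild the full file; overwriting a partial copy is harmless. */
    if (!reconstruct(incr_path, full_path))
        return false;

    fsync_name(full_path);      /* make the full version durable first ... */
    (void) unlink(incr_path);   /* ... then drop the incremental source ... */
    fsync_name(dir);            /* ... and persist the directory change */
    return true;
}

On a rerun, files that exist only in full form are skipped and files that still have an incremental version are rebuilt again, so repeated attempts converge on the same end state.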
"msg_date": "Mon, 30 Oct 2023 13:45:57 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: trying again to get incremental backup"
},
{
"msg_contents": "Hi,\n\nOn 2023-10-30 10:45:03 -0400, Robert Haas wrote:\n> On Tue, Oct 24, 2023 at 12:08 PM Robert Haas <[email protected]> wrote:\n> > Note that whether to remove summaries is a separate question from\n> > whether to generate them in the first place. Right now, I have\n> > wal_summarize_mb controlling whether they get generated in the first\n> > place, but as I noted in another recent email, that isn't an entirely\n> > satisfying solution.\n> \n> I did some more research on this. My conclusion is that I should\n> remove wal_summarize_mb and just have a GUC summarize_wal = on|off\n> that controls whether the summarizer runs at all. There will be one\n> summary file per checkpoint, no matter how far apart checkpoints are\n> or how large the summary gets. Below I'll explain the reasoning; let\n> me know if you disagree.\n\n> What I describe above would be a bad plan if it were realistically\n> possible for a summary file to get so large that it might run the\n> machine out of memory either when producing it or when trying to make\n> use of it for an incremental backup. This seems to be a somewhat\n> difficult scenario to create. So far, I haven't been able to generate\n> WAL summary files more than a few tens of megabytes in size, even when\n> summarizing 50+ GB of WAL per summary file. One reason why it's hard\n> to produce large summary files is because, for a single relation fork,\n> the WAL summary size converges to 1 bit per modified block when the\n> number of modified blocks is large. This means that, even if you have\n> a terabyte sized relation, you're looking at no more than perhaps 20MB\n> of summary data no matter how much of it gets modified. Now, somebody\n> could have a 30TB relation and then if they modify the whole thing\n> they could have the better part of a gigabyte of summary data for that\n> relation, but if you've got a 30TB table you probably have enough\n> memory that that's no big deal.\n\nI'm not particularly worried about the rewriting-30TB-table case - that'd also\ngenerate >= 30TB of WAL most of the time. Which realistically is going to\ntrigger a few checkpoints, even on very big instances.\n\n\n> But, what if you have multiple relations? I initialized pgbench with a\n> scale factor of 30000 and also with 30000 partitions and did a 1-hour\n> run. I got 4 checkpoints during that time and each one produced an\n> approximately 16MB summary file.\n\nHm, I assume the pgbench run will be fairly massively bottlenecked on IO, due\nto having to read data from disk, lots of full page write and having to write\nout lots of data? I.e. we won't do all that many transactions during the 1h?\n\n\n> To get a 1GB+ WAL summary file, you'd need to modify millions of relation\n> forks, maybe tens of millions, and most installations aren't even going to\n> have that many relation forks, let alone be modifying them all frequently.\n\nI tried to find bad cases for a bit - and I am not worried. 
I wrote a pgbench\nscript to create 10k single-row relations in each script, ran that with 96\nclients, checkpointed, and ran a pgbench script that updated the single row in\neach table.\n\nAfter creation of the relation WAL summarizer uses\nLOG: level: 1; Wal Summarizer: 378433680 total in 43 blocks; 5628936 free (66 chunks); 372804744 used\nand creates a 26MB summary file.\n\nAfter checkpoint & updates WAL summarizer uses:\nLOG: level: 1; Wal Summarizer: 369205392 total in 43 blocks; 5864536 free (26 chunks); 363340856 used\nand creates a 26MB summary file.\n\nSure, 350MB ain't nothing, but simply just executing \\dt in the database\ncreated by this makes the backend use 260MB after. Which isn't going away,\nwhereas WAL summarizer drops its memory usage soon after.\n\n\n> But I think that's sufficiently niche that the current patch shouldn't\n> concern itself with such cases. If we find that they're common enough\n> to worry about, we might eventually want to do something to mitigate\n> them, but whether that thing looks anything like wal_summarize_mb\n> seems pretty unclear. So I conclude that it's a mistake to include\n> that GUC as currently designed and propose to replace it with a\n> Boolean as described above.\n\nAfter playing with this for a while, I don't see a reason for wal_summarize_mb\nfrom a memory usage POV at least.\n\nI wonder if there are use cases that might like to consume WAL summaries\nbefore the next checkpoint? For those wal_summarize_mb likely wouldn't be a\ngood control, but they might want to request a summary file to be created at\nsome point?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 30 Oct 2023 11:46:45 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: trying again to get incremental backup"
},
{
"msg_contents": "On Mon, Oct 30, 2023 at 2:46 PM Andres Freund <[email protected]> wrote:\n> After playing with this for a while, I don't see a reason for wal_summarize_mb\n> from a memory usage POV at least.\n\nCool! Thanks for testing.\n\n> I wonder if there are use cases that might like to consume WAL summaries\n> before the next checkpoint? For those wal_summarize_mb likely wouldn't be a\n> good control, but they might want to request a summary file to be created at\n> some point?\n\nIt's possible. I actually think it's even more likely that there are\nuse cases that will also want the WAL summarized, but in some\ndifferent way. For example, you might want a summary that would give\nyou the LSN or approximate LSN where changes to a certain block\noccurred. Such a summary would be way bigger than these summaries and\ntherefore, at least IMHO, a lot less useful for incremental backup,\nbut it could be really useful for something else. Or you might want\nsummaries that focus on something other than which blocks got changed,\nlike what relations were created or destroyed, or only changes to\ncertain kinds of relations or relation forks, or whatever. In a way,\nyou can even think of logical decoding as a kind of WAL summarization,\njust with a very different set of goals from this one. I won't be too\nsurprised if the next hacker wants something that is different enough\nfrom what this does that it doesn't make sense to share mechanism, but\nif by chance they want the same thing but dumped a bit more\nfrequently, well, that can be done.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 30 Oct 2023 15:23:28 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: trying again to get incremental backup"
},
{
"msg_contents": "On Mon, Oct 30, 2023 at 6:46 PM Robert Haas <[email protected]> wrote:\n>\n> On Thu, Sep 28, 2023 at 6:22 AM Jakub Wartak\n> <[email protected]> wrote:\n> > If that is still an area open for discussion: wouldn't it be better to\n> > just specify LSN as it would allow resyncing standby across major lag\n> > where the WAL to replay would be enormous? Given that we had\n> > primary->standby where standby would be stuck on some LSN, right now\n> > it would be:\n> > 1) calculate backup manifest of desynced 10TB standby (how? using\n> > which tool?) - even if possible, that means reading 10TB of data\n> > instead of just putting a number, isn't it?\n> > 2) backup primary with such incremental backup >= LSN\n> > 3) copy the incremental backup to standby\n> > 4) apply it to the impaired standby\n> > 5) restart the WAL replay\n>\n> As you may be able to tell from the flurry of posts and new patch\n> sets, I'm trying hard to sort out the remaining open items that\n> pertain to this patch set, and I'm now back to thinking about this\n> one.\n>\n> TL;DR: I think the idea has some potential, but there are some\n> pitfalls that I'm not sure how to address.\n>\n> I spent some time looking at how we currently use the data from the\n> backup manifest. Currently, we do two things with it. First, when\n> we're backing up each file, we check whether it's present in the\n> backup manifest and, if not, we back it up in full. This actually\n> feels fairly poor. If it makes any difference at all, then presumably\n> the underlying algorithm is buggy and needs to be fixed. Maybe that\n> should be ripped out altogether or turned into some kind of sanity\n> check that causes a big explosion if it fails. Second, we check\n> whether the WAL ranges reported by the client match up with the\n> timeline history of the server (see PrepareForIncrementalBackup). This\n> set of sanity checks seems fairly important to me, and I'd regret\n> discarding them. I think there's some possibility that they might\n> catch user error, like where somebody promotes multiple standbys and\n> maybe they even get the same timeline on more than one of them, and\n> then confusion might ensue.\n[..]\n\n> Another practical problem here is that, right now, pg_combinebackup\n> doesn't have an in-place mode. It knows how to take a bunch of input\n> backups and write out an output backup, but that output backup needs\n> to go into a new, fresh directory (or directories plural, if there are\n> user-defined tablespaces). I had previously considered adding such a\n> mode, but the idea I had at the time wouldn't have worked for this\n> case. I imagined that someone might want to run \"pg_combinebackup\n> --in-place full incr\" and clobber the contents of the incr directory\n> with the output, basically discarding the incremental backup you took\n> in favor of a full backup that could have been taken at the same point\n> in time.\n[..]\n\nThanks for answering! It all sounds like this\nresync-standby-using-primary-incrbackup idea isn't fit for the current\npg_combinebackup, but rather for a new tool hopefully in future. It\ncould take the current LSN from stuck standby, calculate manifest on\nthe lagged and offline standby (do we need to calculate manifest\nChecksum in that case? 
I cannot find code for it), deliver it via\n\"UPLOAD_MANIFEST\" to primary and start fetching and applying the\ndifferences while doing some form of copy-on-write from old & incoming\nincrbackup data to \"$relfilenodeid.new\" and then durable_unlink() old\none and durable_rename(\"$relfilenodeid.new\", \"$relfilenodeid\". Would\nit still be possible in theory? (it could use additional safeguards\nlike rename controlfile when starting and just before ending to\nadditionally block startup if it hasn't finished). Also it looks as\nper comment nearby struct IncrementalBackupInfo.manifest_files that\neven checksums are just more for safeguarding rather than core\nimplementation (?)\n\nWhat I've meant in the initial idea is not to hinder current efforts,\nbut asking if the current design will not stand in a way for such a\ncool new addition in future ?\n\n> One thing I also realized when thinking about this is that you could\n> probably hose yourself with the patch set as it stands today by taking\n> a full backup, downgrading to wal_level=minimal for a while, doing\n> some WAL-skipping operations, upgrading to a higher WAL-level again,\n> and then taking an incremental backup. I think the solution to that is\n> probably for the WAL summarizer to refuse to run if wal_level=minimal.\n> Then there would be a gap in the summary files which an incremental\n> backup attempt would detect.\n\nAs per earlier test [1], I've already tried to simulate that in\nincrbackuptests-0.1.tgz/test_across_wallevelminimal.sh , but that\nworked (but that was with CTAS-wal-minimal-optimization -> new\nrelfilenodeOID is used for CTAS which got included in the incremental\nbackup as it's new file) Even retested that with Your v7 patch with\nasserts, same. When simulating with \"BEGIN; TRUNCATE nightmare; COPY\nnightmare FROM '/tmp/copy.out'; COMMIT;\" on wal_level=minimal it still\nrecovers using incremental backup because the WAL contains:\n\nrmgr: Storage, desc: CREATE base/5/36425\n[..]\nrmgr: XLOG, desc: FPI , blkref #0: rel 1663/5/36425 blk 0 FPW\n[..]\n\ne.g. TRUNCATE sets a new relfilenode each time, so they will always be\nincluded in backup and wal_level=minimal optimizations kicks only for\ncommands that issue a new relfilenode. True/false?\n\npostgres=# select oid, relfilenode, relname from pg_class where\nrelname like 'night%' order by 1;\n oid | relfilenode | relname\n-------+-------------+---------------------\n 16384 | 0 | nightmare\n 16390 | 36420 | nightmare_p0\n 16398 | 36425 | nightmare_p1\n 36411 | 0 | nightmare_pkey\n 36413 | 36422 | nightmare_p0_pkey\n 36415 | 36427 | nightmare_p1_pkey\n 36417 | 0 | nightmare_brin_idx\n 36418 | 36423 | nightmare_p0_ts_idx\n 36419 | 36428 | nightmare_p1_ts_idx\n(9 rows)\n\npostgres=# truncate nightmare;\nTRUNCATE TABLE\npostgres=# select oid, relfilenode, relname from pg_class where\nrelname like 'night%' order by 1;\n oid | relfilenode | relname\n-------+-------------+---------------------\n 16384 | 0 | nightmare\n 16390 | 36434 | nightmare_p0\n 16398 | 36439 | nightmare_p1\n 36411 | 0 | nightmare_pkey\n 36413 | 36436 | nightmare_p0_pkey\n 36415 | 36441 | nightmare_p1_pkey\n 36417 | 0 | nightmare_brin_idx\n 36418 | 36437 | nightmare_p0_ts_idx\n 36419 | 36442 | nightmare_p1_ts_idx\n\n-J.\n\n[1] - https://www.postgresql.org/message-id/CAKZiRmzT%2BbX2ZYdORO32cADtfQ9DvyaOE8fsOEWZc2V5FkEWVg%40mail.gmail.com\n\n\n",
"msg_date": "Wed, 1 Nov 2023 13:56:52 +0100",
"msg_from": "Jakub Wartak <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: trying again to get incremental backup"
},
{
"msg_contents": "On Wed, Nov 1, 2023 at 8:57 AM Jakub Wartak\n<[email protected]> wrote:\n> Thanks for answering! It all sounds like this\n> resync-standby-using-primary-incrbackup idea isn't fit for the current\n> pg_combinebackup, but rather for a new tool hopefully in future. It\n> could take the current LSN from stuck standby, calculate manifest on\n> the lagged and offline standby (do we need to calculate manifest\n> Checksum in that case? I cannot find code for it), deliver it via\n> \"UPLOAD_MANIFEST\" to primary and start fetching and applying the\n> differences while doing some form of copy-on-write from old & incoming\n> incrbackup data to \"$relfilenodeid.new\" and then durable_unlink() old\n> one and durable_rename(\"$relfilenodeid.new\", \"$relfilenodeid\". Would\n> it still be possible in theory? (it could use additional safeguards\n> like rename controlfile when starting and just before ending to\n> additionally block startup if it hasn't finished). Also it looks as\n> per comment nearby struct IncrementalBackupInfo.manifest_files that\n> even checksums are just more for safeguarding rather than core\n> implementation (?)\n>\n> What I've meant in the initial idea is not to hinder current efforts,\n> but asking if the current design will not stand in a way for such a\n> cool new addition in future ?\n\nHmm, interesting idea. I think something like that could be made to\nwork. My first thought was that it would sort of suck to have to\ncompute a manifest as a precondition of doing this, but then I started\nto think maybe it wouldn't, really. I mean, you'd have to scan the\nlocal directory tree and collect all the filenames so that you could\nremove any files that are no longer present in the current version of\nthe data directory which the incremental backup would send to you. If\nyou're already doing that, the additional cost of generating a\nmanifest isn't that high, at least if you don't include checksums,\nwhich aren't required. On the other hand, if you didn't need to send\nthe server a manifest and just needed to send the required WAL ranges,\nthat would be even cheaper. I'll spend some more time thinking about\nthis next week.\n\n> As per earlier test [1], I've already tried to simulate that in\n> incrbackuptests-0.1.tgz/test_across_wallevelminimal.sh , but that\n> worked (but that was with CTAS-wal-minimal-optimization -> new\n> relfilenodeOID is used for CTAS which got included in the incremental\n> backup as it's new file) Even retested that with Your v7 patch with\n> asserts, same. When simulating with \"BEGIN; TRUNCATE nightmare; COPY\n> nightmare FROM '/tmp/copy.out'; COMMIT;\" on wal_level=minimal it still\n> recovers using incremental backup because the WAL contains:\n\nTRUNCATE itself is always WAL-logged, but data added to the relation\nin the same relation as the TRUNCATE isn't always WAL-logged (but\nsometimes it is, depending on the relation size). So the failure case\nwouldn't be missing the TRUNCATE but missing some data-containing\nblocks within the relation shortly after it was created or truncated.\nI think what I need to do here is avoid summarizing WAL that was\ngenerated under wal_level=minimal. The walsummarizer process should\njust refuse to emit summaries for any such WAL.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 3 Nov 2023 15:50:51 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: trying again to get incremental backup"
},
{
"msg_contents": "On Mon, Oct 30, 2023 at 2:46 PM Andres Freund <[email protected]> wrote:\n> After playing with this for a while, I don't see a reason for wal_summarize_mb\n> from a memory usage POV at least.\n\nHere's v8. Changes:\n\n- Replace wal_summarize_mb GUC with summarize_wal = on | off.\n- Document the summarize_wal and wal_summary_keep_time GUCs.\n- Refuse to start with summarize_wal = on and wal_level = minimal.\n- Increase default wal_summary_keep_time to 10d from 7d, per (what I\nthink was) a suggestion from Peter E.\n- Fix fencepost errors when deciding which WAL summaries are needed\nfor a backup.\n- Fix indentation damage.\n- Standardize on ereport(DEBUG1, ...) in walsummarizer.c vs. various\nmore and less chatty things I had before.\n- Include the timeline in some error messages because not having it\nproved confusing.\n- Be more consistent about ignoring the FSM fork.\n- Fix a bug that could cause WAL summarization to error out when\nswitching timelines.\n- Fix the division between the wal summarizer and incremental backup\npatches so that the former passes tests without the latter.\n- Fix some things that an older compiler didn't like, including adding\npg_attribute_printf in some places.\n- Die with an error instead of crashing if someone feeds us a manifest\nwith no WAL ranges.\n- Sort the block numbers that need to be read from a relation file\nbefore reading them, so that we're certain to read them in ascending\norder.\n- Be more careful about computing the truncation_block_length of an\nincremental file; don't do math on a block number that might be\nInvalidBlockNumber.\n- Fix pg_combinebackup so it doesn't fail when zero-filled blocks are\nadded to a relation between the prior backup and the incremental\nbackup.\n- Improve the pg_combinebackup -d output so that it explains in detail\nhow it's carrying out reconstruction, to improve debuggability.\n- Disable WAL summarization by default, but add a test patch to the\nseries to enable it, because running the whole test suite with it\nturned on is good for bug-hunting.\n- In pg_walsummary, zero a struct before using instead of starting\nwith arbitrary junk values.\n\nTo do list:\n\n- Figure out whether to do something other than uploading the whole\nsummary, per discussion with Jakub Wartak.\n- Decide what to do about the 60-second waiting-for-WAL-summarization timeout.\n- Make incremental backup fail quickly if WAL summarization is not even enabled.\n- Have pg_basebackup error out nicely if an incremental backup is\nrequested from an older server that can't do that.\n- Add some kind of tests for pg_walsummary.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com",
"msg_date": "Mon, 6 Nov 2023 15:36:22 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: trying again to get incremental backup"
},
{
"msg_contents": "On Tue, Nov 7, 2023 at 2:06 AM Robert Haas <[email protected]> wrote:\n>\n> On Mon, Oct 30, 2023 at 2:46 PM Andres Freund <[email protected]> wrote:\n> > After playing with this for a while, I don't see a reason for wal_summarize_mb\n> > from a memory usage POV at least.\n>\n> Here's v8. Changes:\n\nReview comments, based on what I reviewed so far.\n\n- I think 0001 looks good improvement irrespective of the patch series.\n\n- review 0003\n1.\n+ be enabled either on a primary or on a standby. WAL summarization can\n+ cannot be enabled when <varname>wal_level</varname> is set to\n+ <literal>minimal</literal>.\n\nGrammatical error\n\"WAL summarization can cannot\" -> WAL summarization cannot\n\n2.\n+ <varlistentry id=\"guc-wal-summarize-keep-time\"\nxreflabel=\"wal_summarize_keep_time\">\n+ <term><varname>wal_summarize_keep_time</varname> (<type>boolean</type>)\n+ <indexterm>\n+ <primary><varname>wal_summarize_keep_time</varname>\nconfiguration parameter</primary>\n+ </indexterm>\n\nI feel the name of the guy should be either wal_summarizer_keep_time\nor wal_summaries_keep_time, I mean either we should refer to the\nsummarizer process or to the way summaries files.\n\n3.\n\n+XLogGetOldestSegno(TimeLineID tli)\n+{\n+\n+ /* Ignore files that are not XLOG segments */\n+ if (!IsXLogFileName(xlde->d_name))\n+ continue;\n+\n+ /* Parse filename to get TLI and segno. */\n+ XLogFromFileName(xlde->d_name, &file_tli, &file_segno,\n+ wal_segment_size);\n+\n+ /* Ignore anything that's not from the TLI of interest. */\n+ if (tli != file_tli)\n+ continue;\n+\n+ /* If it's the oldest so far, update oldest_segno. */\n\nSome of the single-line comments end with a full stop whereas others\ndo not, so better to be consistent.\n\n4.\n\n+ * If start_lsn != InvalidXLogRecPtr, only summaries that end before the\n+ * indicated LSN will be included.\n+ *\n+ * If end_lsn != InvalidXLogRecPtr, only summaries that start before the\n+ * indicated LSN will be included.\n+ *\n+ * The intent is that you can call GetWalSummaries(tli, start_lsn, end_lsn)\n+ * to get all WAL summaries on the indicated timeline that overlap the\n+ * specified LSN range.\n+ */\n+List *\n+GetWalSummaries(TimeLineID tli, XLogRecPtr start_lsn, XLogRecPtr end_lsn)\n\n\nInstead of \"If start_lsn != InvalidXLogRecPtr, only summaries that end\nbefore the\" it should be \"If start_lsn != InvalidXLogRecPtr, only\nsummaries that end after the\" because only if the summary files are\nEnding after the start_lsn then it will have some overlapping and we\nneed to return them if ending before start lsn then those files are\nnot overlapping at all, right?\n\n5.\nIn FilterWalSummaries() header also the comment is wrong same as for\nGetWalSummaries() function.\n\n6.\n+ * If the whole range of LSNs is covered, returns true, otherwise false.\n+ * If false is returned, *missing_lsn is set either to InvalidXLogRecPtr\n+ * if there are no WAL summary files in the input list, or to the first LSN\n+ * in the range that is not covered by a WAL summary file in the input list.\n+ */\n+bool\n+WalSummariesAreComplete(List *wslist, XLogRecPtr start_lsn,\n\nI did not see the usage of this function, but I think if the whole\nrange is not covered why not keep the behavior uniform w.r.t. 
what we\nset for '*missing_lsn', I mean suppose there is no file then\nmissing_lsn is the start_lsn because a very first LSN is missing.\n\n7.\n+ nbytes = FileRead(io->file, data, length, io->filepos,\n+ WAIT_EVENT_WAL_SUMMARY_READ);\n+ if (nbytes < 0)\n+ ereport(ERROR,\n+ (errcode_for_file_access(),\n+ errmsg(\"could not write file \\\"%s\\\": %m\",\n+ FilePathName(io->file))));\n\n/could not write file/ could not read file\n\n8.\n+/*\n+ * Comparator to sort a List of WalSummaryFile objects by start_lsn.\n+ */\n+static int\n+ListComparatorForWalSummaryFiles(const ListCell *a, const ListCell *b)\n+{\n\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
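On point 4 above: the behaviour being questioned is a plain interval-overlap test, and only the header comment's wording is off. A small sketch of that test (the function name and signature are illustrative, not taken from the patch): a summary file is kept when it ends after start_lsn and starts before end_lsn, with InvalidXLogRecPtr meaning the range is unbounded on that side.

#include "postgres.h"
#include "access/xlogdefs.h"

/*
 * Keep a summary covering [file_start, file_end) only if it overlaps the
 * requested range [start_lsn, end_lsn).
 */
static bool
summary_overlaps_range(XLogRecPtr file_start, XLogRecPtr file_end,
                       XLogRecPtr start_lsn, XLogRecPtr end_lsn)
{
    if (!XLogRecPtrIsInvalid(start_lsn) && file_end <= start_lsn)
        return false;           /* ends at or before the range starts */
    if (!XLogRecPtrIsInvalid(end_lsn) && file_start >= end_lsn)
        return false;           /* starts at or after the range ends */
    return true;
}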
"msg_date": "Fri, 10 Nov 2023 16:57:14 +0530",
"msg_from": "Dilip Kumar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: trying again to get incremental backup"
},
{
"msg_contents": "Great stuff you got here. I'm doing a first pass trying to grok the\nwhole thing for more substantive comments, but in the meantime here are\nsome cosmetic ones.\n\nI got the following warnings, both valid:\n\n../../../../pgsql/source/master/src/common/blkreftable.c: In function 'WriteBlockRefTable':\n../../../../pgsql/source/master/src/common/blkreftable.c:520:45: warning: declaration of 'brtentry' shadows a previous local [-Wshadow=compatible-local]\n 520 | BlockRefTableEntry *brtentry;\n | ^~~~~~~~\n../../../../pgsql/source/master/src/common/blkreftable.c:492:37: note: shadowed declaration is here\n 492 | BlockRefTableEntry *brtentry;\n | ^~~~~~~~\n\n../../../../../pgsql/source/master/src/backend/postmaster/walsummarizer.c: In function 'SummarizeWAL':\n../../../../../pgsql/source/master/src/backend/postmaster/walsummarizer.c:833:57: warning: declaration of 'private_data' shadows a previous local [-Wshadow=compatible-local]\n 833 | SummarizerReadLocalXLogPrivate *private_data;\n | ^~~~~~~~~~~~\n../../../../../pgsql/source/master/src/backend/postmaster/walsummarizer.c:709:41: note: shadowed declaration is here\n 709 | SummarizerReadLocalXLogPrivate *private_data;\n | ^~~~~~~~~~~~\n\nIn blkreftable.c, I think the definition of SH_EQUAL should have an\nouter layer of parentheses. Also, it would be good to provide and use a\nfunction to initialize a BlockRefTableKey from the RelFileNode and\nforknum components, and ensure that any padding bytes are zeroed.\nOtherwise it's not going to be a great hash key. On my platform there\naren't any (padding bytes), but I think it's unwise to rely on that.\n\nI don't think SummarizerReadLocalXLogPrivate->waited is used for\nanything. Could be removed AFAICS, unless you're foreseen adding\nsomething that uses it.\n\nThese forward struct declarations are not buying you anything, I'd\nremove them:\n\ndiff --git a/src/include/common/blkreftable.h b/src/include/common/blkreftable.h\nindex 70d6c072d7..316e67122c 100644\n--- a/src/include/common/blkreftable.h\n+++ b/src/include/common/blkreftable.h\n@@ -29,10 +29,7 @@\n /* Magic number for serialization file format. */\n #define BLOCKREFTABLE_MAGIC\t\t\t0x652b137b\n \n-struct BlockRefTable;\n-struct BlockRefTableEntry;\n-struct BlockRefTableReader;\n-struct BlockRefTableWriter;\n+/* Struct definitions appear in blkreftable.c */\n typedef struct BlockRefTable BlockRefTable;\n typedef struct BlockRefTableEntry BlockRefTableEntry;\n typedef struct BlockRefTableReader BlockRefTableReader;\n\n\nand backup_label.h doesn't know about TimeLineID, so it needs this:\n\ndiff --git a/src/bin/pg_combinebackup/backup_label.h b/src/bin/pg_combinebackup/backup_label.h\nindex 08d6ed67a9..3af7ea274c 100644\n--- a/src/bin/pg_combinebackup/backup_label.h\n+++ b/src/bin/pg_combinebackup/backup_label.h\n@@ -12,6 +12,7 @@\n #ifndef BACKUP_LABEL_H\n #define BACKUP_LABEL_H\n \n+#include \"access/xlogdefs.h\"\n #include \"common/checksum_helper.h\"\n #include \"lib/stringinfo.h\"\n \n\nI don't much like the way header files in src/bin/pg_combinebackup files\nare structured. Particularly, causing a simplehash to be \"instantiated\"\njust because load_manifest.h is included seems poised to cause pain. I\nthink there should be a file with the basic struct declarations (no\nsimplehash); and then maybe since both pg_basebackup and\npg_combinebackup seem to need the same simplehash, create a separate\nheader file containing just that.. 
But, did you notice that anything\nthat includes reconstruct.h will instantiate the simplehash stuff,\nbecause it includes load_manifest.h? It may be unwise to have the\nsimplehash in a header file. Maybe just declare it in each .c file that\nneeds it. The duplicity is not that large.\n\nI'll see if I can understand the way all these headers are needed to\npropose some other arrangement.\n\nI see this idea of having \"struct FooBar;\" immediately followed by\n\"typedef struct FooBar FooBar;\" which I mentioned from blkreftable.h\noccurs in other places as well (JsonManifestParseContext in\nparse_manifest.h, maybe others?). Was this pattern cargo-culted from\nsomewhere? Perhaps we have other places to clean up.\n\n\nWhy leave unnamed arguments in function declarations? For example, in\n\nstatic void manifest_process_file(JsonManifestParseContext *,\n char *pathname,\n size_t size,\n pg_checksum_type checksum_type,\n int checksum_length,\n uint8 *checksum_payload);\nthe first argument lacks a name. Is this just an oversight, I hope?\n\n\nIn GetFileBackupMethod(), which arguments are in and which are out?\nThe comment doesn't say, and it's not obvious why we pass both the file\npath as well as the individual constituent pieces for it.\n\nDO_NOT_BACKUP_FILE appears not to be set anywhere. Do you expect to use\nthis later? If not, maybe remove it.\n\nThere are two functions named record_manifest_details_for_file() in\ndifferent programs. I think this sort of arrangement is not great, as\nit is confusing confusing to follow. It would be better if those two\nroutines were called something like, say, verifybackup_perfile_cb and\ncombinebackup_perfile_cb instead; then in the function comment say\nsomething like \n/*\n * JsonManifestParseContext->perfile_cb implementation for pg_combinebackup.\n *\n * Record details extracted from the backup manifest for one file,\n * because we like to keep things tracked or whatever.\n */\nso it's easy to track down what does what and why. Same with\nperwalrange_cb. \"perfile\" looks bothersome to me as a name entity. Why\nnot per_file_cb? and per_walrange_cb?\n \n\nIn walsummarizer.c, HandleWalSummarizerInterrupts is called in\nsummarizer_read_local_xlog_page but SummarizeWAL() doesn't do that.\nMaybe it should?\n\nI think this path is not going to be very human-likeable.\n\t\tsnprintf(final_path, MAXPGPATH,\n\t\t\t\t XLOGDIR \"/summaries/%08X%08X%08X%08X%08X.summary\",\n\t\t\t\t tli,\n\t\t\t\t LSN_FORMAT_ARGS(summary_start_lsn),\n\t\t\t\t LSN_FORMAT_ARGS(summary_end_lsn));\nWhy not add a dash between the TLI and between both LSNs, or something\nlike that? (Also, are we really printing TLIs as 8-byte hexs?)\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"I suspect most samba developers are already technically insane...\nOf course, since many of them are Australians, you can't tell.\" (L. Torvalds)\n\n\n",
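A possible shape for the key initializer suggested above, sketched with the key layout assumed from the memcpy()-based snippet quoted later in the thread. Zeroing the whole struct before filling it in gives any padding bytes a known value, which is what makes a memcmp()-based SH_EQUAL and hashing of the raw key bytes safe on platforms that do have padding:

#include "postgres.h"
#include "common/relpath.h"          /* ForkNumber */
#include "storage/relfilelocator.h"  /* RelFileLocator */

/* Assumed layout; the real definition lives in blkreftable.c. */
typedef struct BlockRefTableKey
{
    RelFileLocator rlocator;
    ForkNumber  forknum;
} BlockRefTableKey;

static inline void
BlockRefTableKeyInit(BlockRefTableKey *key,
                     const RelFileLocator *rlocator,
                     ForkNumber forknum)
{
    memset(key, 0, sizeof(BlockRefTableKey));   /* zero padding bytes too */
    key->rlocator = *rlocator;
    key->forknum = forknum;
}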
"msg_date": "Mon, 13 Nov 2023 17:25:21 +0100",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: trying again to get incremental backup"
},
{
"msg_contents": "On Fri, Nov 10, 2023 at 6:27 AM Dilip Kumar <[email protected]> wrote:\n> - I think 0001 looks good improvement irrespective of the patch series.\n\nOK, perhaps that can be independently committed, then, if nobody objects.\n\nThanks for the review; I've fixed a bunch of things that you\nmentioned. I'll just comment on the ones I haven't yet done anything\nabout below.\n\n> 2.\n> + <varlistentry id=\"guc-wal-summarize-keep-time\"\n> xreflabel=\"wal_summarize_keep_time\">\n> + <term><varname>wal_summarize_keep_time</varname> (<type>boolean</type>)\n> + <indexterm>\n> + <primary><varname>wal_summarize_keep_time</varname>\n> configuration parameter</primary>\n> + </indexterm>\n>\n> I feel the name of the guy should be either wal_summarizer_keep_time\n> or wal_summaries_keep_time, I mean either we should refer to the\n> summarizer process or to the way summaries files.\n\nHow about wal_summary_keep_time?\n\n> 6.\n> + * If the whole range of LSNs is covered, returns true, otherwise false.\n> + * If false is returned, *missing_lsn is set either to InvalidXLogRecPtr\n> + * if there are no WAL summary files in the input list, or to the first LSN\n> + * in the range that is not covered by a WAL summary file in the input list.\n> + */\n> +bool\n> +WalSummariesAreComplete(List *wslist, XLogRecPtr start_lsn,\n>\n> I did not see the usage of this function, but I think if the whole\n> range is not covered why not keep the behavior uniform w.r.t. what we\n> set for '*missing_lsn', I mean suppose there is no file then\n> missing_lsn is the start_lsn because a very first LSN is missing.\n\nIt's used later in the patch series. I think the way that I have it\nmakes for a more understandable error message.\n\n> 8.\n> +/*\n> + * Comparator to sort a List of WalSummaryFile objects by start_lsn.\n> + */\n> +static int\n> +ListComparatorForWalSummaryFiles(const ListCell *a, const ListCell *b)\n> +{\n\nI'm not sure what needs fixing here.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 13 Nov 2023 14:22:30 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: trying again to get incremental backup"
},
{
"msg_contents": "On Mon, Nov 13, 2023 at 11:25 AM Alvaro Herrera <[email protected]> wrote:\n> Great stuff you got here. I'm doing a first pass trying to grok the\n> whole thing for more substantive comments, but in the meantime here are\n> some cosmetic ones.\n\nThanks, thanks, and thanks.\n\nI've fixed some things that you mentioned in the attached version.\nOther comments below.\n\n> In blkreftable.c, I think the definition of SH_EQUAL should have an\n> outer layer of parentheses. Also, it would be good to provide and use a\n> function to initialize a BlockRefTableKey from the RelFileNode and\n> forknum components, and ensure that any padding bytes are zeroed.\n> Otherwise it's not going to be a great hash key. On my platform there\n> aren't any (padding bytes), but I think it's unwise to rely on that.\n\nI'm having trouble understanding the second part of this suggestion.\nNote that in a frontend context, SH_RAW_ALLOCATOR is pg_malloc0, and\nin a backend context, we get the default, which is\nMemoryContextAllocZero. Maybe there's some case this doesn't cover,\nthough?\n\n> These forward struct declarations are not buying you anything, I'd\n> remove them:\n\nI've had problems from time to time when I don't do this. I'll remove\nit here, but I'm not convinced that it's always useless.\n\n> I don't much like the way header files in src/bin/pg_combinebackup files\n> are structured. Particularly, causing a simplehash to be \"instantiated\"\n> just because load_manifest.h is included seems poised to cause pain. I\n> think there should be a file with the basic struct declarations (no\n> simplehash); and then maybe since both pg_basebackup and\n> pg_combinebackup seem to need the same simplehash, create a separate\n> header file containing just that.. But, did you notice that anything\n> that includes reconstruct.h will instantiate the simplehash stuff,\n> because it includes load_manifest.h? It may be unwise to have the\n> simplehash in a header file. Maybe just declare it in each .c file that\n> needs it. The duplicity is not that large.\n\nI think that I did this correctly. AIUI, if you're defining a\nsimplehash that only one source file needs, you make the scope\n\"static\" and do both SH_DECLARE and SH_DEFILE it in that file. If you\nneed it to be shared by multiple files, you make it \"extern\" in the\nheader file, do SH_DECLARE there, and SH_DEFINE in one of those source\nfiles. Or you could make the scope \"static inline\" in the header file\nand then you'd both SH_DECLARE and SH_DEFINE it in the header file.\n\nIf I were to do as you suggest here, I think I'd end up with 2 copies\nof the compiled code for this instead of one, and if they ever got out\nof sync everything would break silently.\n\n> Why leave unnamed arguments in function declarations? For example, in\n>\n> static void manifest_process_file(JsonManifestParseContext *,\n> char *pathname,\n> size_t size,\n> pg_checksum_type checksum_type,\n> int checksum_length,\n> uint8 *checksum_payload);\n> the first argument lacks a name. Is this just an oversight, I hope?\n\nI mean, I've changed it now, but I don't think it's worth getting too\nexcited about. \"int checksum_length\" is much better documentation than\njust \"int,\" but \"JsonManifestParseContext *context\" is just noise,\nIMHO. 
You can argue that it's better for consistency that way, but\nwhatever.\n\n> In GetFileBackupMethod(), which arguments are in and which are out?\n> The comment doesn't say, and it's not obvious why we pass both the file\n> path as well as the individual constituent pieces for it.\n\nThe header comment does document which values are potentially set on\nreturn. I guess I thought it was clear enough that the stuff not\ndocumented to be output parameters was input parameters. Most of them\naren't even pointers, so they have to be input parameters. The only\nexception is 'path', which I have some difficulty thinking that anyone\nis going to imagine to be an input pointer.\n\nMaybe you could propose a more specific rewording of this comment?\nFWIW, I'm not altogether sure whether this function is going to get\nmore heavily adjusted in a rev or three of the patch set, so maybe we\nwant to wait to sort this out until this is closer to final, but OTOH\nif I know what you have in mind for the current version, I might be\nmore likely to keep it in a good place if I end up changing it.\n\n> DO_NOT_BACKUP_FILE appears not to be set anywhere. Do you expect to use\n> this later? If not, maybe remove it.\n\nWoops, that was a holdover from an earlier version.\n\n> There are two functions named record_manifest_details_for_file() in\n> different programs. I think this sort of arrangement is not great, as\n> it is confusing confusing to follow. It would be better if those two\n> routines were called something like, say, verifybackup_perfile_cb and\n> combinebackup_perfile_cb instead; then in the function comment say\n> something like\n> /*\n> * JsonManifestParseContext->perfile_cb implementation for pg_combinebackup.\n> *\n> * Record details extracted from the backup manifest for one file,\n> * because we like to keep things tracked or whatever.\n> */\n> so it's easy to track down what does what and why. Same with\n> perwalrange_cb. \"perfile\" looks bothersome to me as a name entity. Why\n> not per_file_cb? and per_walrange_cb?\n\nI had trouble figuring out how to name this stuff. I did notice the\nawkwardness, but surely nobody can think that two functions with the\nsame name in different binaries can be actually the same function.\n\nIf we want to inject more underscores here, my vote is to go all the\nway and make it per_wal_range_cb.\n\n> In walsummarizer.c, HandleWalSummarizerInterrupts is called in\n> summarizer_read_local_xlog_page but SummarizeWAL() doesn't do that.\n> Maybe it should?\n\nI replaced all the CHECK_FOR_INTERRUPTS() in that file with\nHandleWalSummarizerInterrupts(). Does that seem right?\n\n> I think this path is not going to be very human-likeable.\n> snprintf(final_path, MAXPGPATH,\n> XLOGDIR \"/summaries/%08X%08X%08X%08X%08X.summary\",\n> tli,\n> LSN_FORMAT_ARGS(summary_start_lsn),\n> LSN_FORMAT_ARGS(summary_end_lsn));\n> Why not add a dash between the TLI and between both LSNs, or something\n> like that? (Also, are we really printing TLIs as 8-byte hexs?)\n\nDealing with the last part first, we already do that in every WAL file\nname. I actually think these file names are easier to work with than\nWAL file names, because 000000010000000000000020 is not the WAL\nstarting at 0/20, but rather the WAL starting at 0/20000000. To know\nat what LSN a WAL file starts, you have to mentally delete characters\n17 through 22, which will always be zero, and instead add six zeroes\nat the end. 
I don't think whoever came up with that file naming\nconvention deserves an award, unless it's a raspberry award. With\nthese names, you get something like\n0000000100000000015125B800000000015128F0.summary and you can sort of\nsee that 1512 repeats so the LSN went from something ending in 5B8 to\nsomething ending in 8F0. I actually think it's way better.\n\nBut I have a hard time arguing that it wouldn't be more readable still\nif we put some separator characters in there. I didn't do that because\nthen they'd look less like WAL file names, but maybe that's not really\na problem. A possible reason not to bother is that these files are\nless necessary for humans to care about than WAL files, since they\ndon't need to be archived or transported between nodes in any way.\nBasically I think this is probably fine the way it is, but if you or\nothers think it's really important to change it, I can do that. Just\nas long as we don't spend 50 emails arguing about which separator\ncharacter to use.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com",
"msg_date": "Mon, 13 Nov 2023 15:40:40 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: trying again to get incremental backup"
},
{
"msg_contents": "On Tue, Nov 14, 2023 at 12:52 AM Robert Haas <[email protected]> wrote:\n>\n> On Fri, Nov 10, 2023 at 6:27 AM Dilip Kumar <[email protected]> wrote:\n> > - I think 0001 looks good improvement irrespective of the patch series.\n>\n> OK, perhaps that can be independently committed, then, if nobody objects.\n>\n> Thanks for the review; I've fixed a bunch of things that you\n> mentioned. I'll just comment on the ones I haven't yet done anything\n> about below.\n>\n> > 2.\n> > + <varlistentry id=\"guc-wal-summarize-keep-time\"\n> > xreflabel=\"wal_summarize_keep_time\">\n> > + <term><varname>wal_summarize_keep_time</varname> (<type>boolean</type>)\n> > + <indexterm>\n> > + <primary><varname>wal_summarize_keep_time</varname>\n> > configuration parameter</primary>\n> > + </indexterm>\n> >\n> > I feel the name of the guy should be either wal_summarizer_keep_time\n> > or wal_summaries_keep_time, I mean either we should refer to the\n> > summarizer process or to the way summaries files.\n>\n> How about wal_summary_keep_time?\n\nYes, that looks perfect to me.\n\n> > 6.\n> > + * If the whole range of LSNs is covered, returns true, otherwise false.\n> > + * If false is returned, *missing_lsn is set either to InvalidXLogRecPtr\n> > + * if there are no WAL summary files in the input list, or to the first LSN\n> > + * in the range that is not covered by a WAL summary file in the input list.\n> > + */\n> > +bool\n> > +WalSummariesAreComplete(List *wslist, XLogRecPtr start_lsn,\n> >\n> > I did not see the usage of this function, but I think if the whole\n> > range is not covered why not keep the behavior uniform w.r.t. what we\n> > set for '*missing_lsn', I mean suppose there is no file then\n> > missing_lsn is the start_lsn because a very first LSN is missing.\n>\n> It's used later in the patch series. I think the way that I have it\n> makes for a more understandable error message.\n\nOkay\n\n> > 8.\n> > +/*\n> > + * Comparator to sort a List of WalSummaryFile objects by start_lsn.\n> > + */\n> > +static int\n> > +ListComparatorForWalSummaryFiles(const ListCell *a, const ListCell *b)\n> > +{\n>\n> I'm not sure what needs fixing here.\n\nI think I copy-pasted it by mistake, just ignore it.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 14 Nov 2023 09:58:22 +0530",
"msg_from": "Dilip Kumar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: trying again to get incremental backup"
},
{
"msg_contents": "On Tue, Nov 14, 2023 at 2:10 AM Robert Haas <[email protected]> wrote:\n>\n> On Mon, Nov 13, 2023 at 11:25 AM Alvaro Herrera <[email protected]> wrote:\n> > Great stuff you got here. I'm doing a first pass trying to grok the\n> > whole thing for more substantive comments, but in the meantime here are\n> > some cosmetic ones.\n>\n> Thanks, thanks, and thanks.\n>\n> I've fixed some things that you mentioned in the attached version.\n> Other comments below.\n\nHere are some more comments based on what I have read so far, mostly\ncosmetics comments.\n\n1.\n+ * summary file yet, then stoppng doesn't make any sense, and we\n+ * should wait until the next stop point instead.\n\nTypo /stoppng/stopping\n\n2.\n+ /* Close temporary file and shut down xlogreader. */\n+ FileClose(io.file);\n+\n\nWe have already freed the xlogreader so the second part of the comment\nis not valid.\n\n3.+ /*\n+ * If a relation fork is truncated on disk, there is in point in\n+ * tracking anything about block modifications beyond the truncation\n+ * point.\n\n\nTypo. /there is in point/ there is no point\n\n4.\n+/*\n+ * Special handling for WAL recods with RM_XACT_ID.\n+ */\n\n/recods/records\n\n5.\n\n+ if (xact_info == XLOG_XACT_COMMIT ||\n+ xact_info == XLOG_XACT_COMMIT_PREPARED)\n+ {\n+ xl_xact_commit *xlrec = (xl_xact_commit *) XLogRecGetData(xlogreader);\n+ xl_xact_parsed_commit parsed;\n+ int i;\n+\n+ ParseCommitRecord(XLogRecGetInfo(xlogreader), xlrec, &parsed);\n+ for (i = 0; i < parsed.nrels; ++i)\n+ {\n+ ForkNumber forknum;\n+\n+ for (forknum = 0; forknum <= MAX_FORKNUM; ++forknum)\n+ if (forknum != FSM_FORKNUM)\n+ BlockRefTableSetLimitBlock(brtab, &parsed.xlocators[i],\n+ forknum, 0);\n+ }\n+ }\n\nFor SmgrCreate and Truncate I understand setting the 'limit block' but\nwhy for commit/abort? I think it would be better to add some comments\nhere.\n\n6.\n+ * Caller must set private_data->tli to the TLI of interest,\n+ * private_data->read_upto to the lowest LSN that is not known to be safe\n+ * to read on that timeline, and private_data->historic to true if and only\n+ * if the timeline is not the current timeline. This function will update\n+ * private_data->read_upto and private_data->historic if more WAL appears\n+ * on the current timeline or if the current timeline becomes historic.\n+ */\n+static int\n+summarizer_read_local_xlog_page(XLogReaderState *state,\n+ XLogRecPtr targetPagePtr, int reqLen,\n+ XLogRecPtr targetRecPtr, char *cur_page)\n\nThe comments say \"private_data->read_upto to the lowest LSN that is\nnot known to be safe\" but is it really the lowest LSN? I think it is\nthe highest LSN this is known to be safe for that TLI no?\n\n7.\n+ /* If it's time to remove any old WAL summaries, do that now. */\n+ MaybeRemoveOldWalSummaries();\n\nI was just wondering whether removing old summaries should be the job\nof the wal summarizer or it should be the job of the checkpointer, I\nmean while removing the old wals it can also check and remove the old\nsummaries? Anyway, it's just a question and I do not have a strong\nopinion on this.\n\n8.\n+ /*\n+ * Whether we we removed the file or not, we need not consider it\n+ * again.\n+ */\n\nTypo /Whether we we removed/ Whether we removed\n\n9.\n+/*\n+ * Get an entry from a block reference table.\n+ *\n+ * If the entry does not exist, this function returns NULL. 
Otherwise, it\n+ * returns the entry and sets *limit_block to the value from the entry.\n+ */\n+BlockRefTableEntry *\n+BlockRefTableGetEntry(BlockRefTable *brtab, const RelFileLocator *rlocator,\n+ ForkNumber forknum, BlockNumber *limit_block)\n\nIf this function is already returning 'BlockRefTableEntry' then why\nwould it need to set an extra '*limit_block' out parameter which it is\nactually reading from the entry itself?\n\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 14 Nov 2023 13:57:07 +0530",
"msg_from": "Dilip Kumar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: trying again to get incremental backup"
},
{
"msg_contents": "0001 looks OK to push, and since it stands on its own I would get it out\nof the way soon rather than waiting for the rest of the series to be\nfurther reviewed.\n\n\n0002:\nThis moves bin/pg_verifybackup/parse_manifest.c to\ncommon/parse_manifest.c, where it's not clear that it's for backup\nmanifests (wasn't a problem in the previous location). I wonder if\nwe're going to have anything else called \"manifest\", in which case I\npropose to rename the file to make it clear that this is about backup\nmanifests -- maybe parse_bkp_manifest.c.\n\nThis patch looks pretty uncontroversial, but there's no point in going\nfurther with this one until followup patches are closer to commit.\n\n\n0003:\nAmWalSummarizerProcess() is unused. Remove?\n\nMaybeWalSummarizer() is called on each ServerLoop() in postmaster.c?\nThis causes a function call to be emitted every time through. That\nlooks odd. All other process starts have some triggering condition. \n\nGetOldestUnsummarizedLSN uses while(true), but WaitForWalSummarization\nand SummarizeWAL use while(1). Maybe settle on one style?\n\nStill reading this one.\n\n\n0004:\nin PrepareForIncrementalBackup(), the logic that determines\nearliest_wal_range_tli and latest_wal_range_tli looks pretty weird. I\nthink it works fine if there's a single timeline, but not otherwise.\nOr maybe the trick is that it relies on timelines returned by\nreadTimeLineHistory being sorted backwards? If so, maybe add a comment\nabout that somewhere; I don't think other callers of readTimeLineHistory\nmake that assumption.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"Postgres is bloatware by design: it was built to house\n PhD theses.\" (Joey Hellerstein, SIGMOD annual conference 2002)\n\n\n",
"msg_date": "Tue, 14 Nov 2023 14:12:51 +0100",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: trying again to get incremental backup"
},
{
"msg_contents": "Hi Robert,\n\n[..spotted the v9 patchset..]\n\nso I've spent some time playing still with patchset v8 (without the\n6/6 testing patch related to wal_level=minimal), with the exception of\n- patchset v9 - marked otherwise.\n\n1. On compile time there were 2 warnings to shadowing variable (at\nleast with gcc version 10.2.1), but on v9 that is fixed:\n\nblkreftable.c: In function ‘WriteBlockRefTable’:\nblkreftable.c:520:24: warning: declaration of ‘brtentry’ shadows a\nprevious local [-Wshadow=compatible-local]\nwalsummarizer.c: In function ‘SummarizeWAL’:\nwalsummarizer.c:833:36: warning: declaration of ‘private_data’ shadows\na previous local [-Wshadow=compatible-local]\n\n2. Usability thing: I hit the timeout hard: \"This backup requires WAL\nto be summarized up to 0/90000D8, but summarizer has only reached\n0/0.\" with summarize_wal=off (default) but apparently this in TODO.\nLooks like an important usability thing.\n\n3. I've verified that if the DB was in wal_level=minimal even\ntemporarily (and thus summarization was disabled) it is impossible to\ntake incremental backup:\n\npg_basebackup: error: could not initiate base backup: ERROR: WAL\nsummaries are required on timeline 1 from 0/70000D8 to 0/10000028, but\nthe summaries for that timeline and LSN range are incomplete\nDETAIL: The first unsummarized LSN is this range is 0/D04AE88.\n\n4. As we have discussed off list, there's is (was) this\npg_combinebackup bug in v8's reconstruct_from_incremental_file() where\nit was unable to realize that - in case of combining multiple\nincremental backups - it should stop looking for the previous instance\nof the full file (while it was fine with v6 of the patchset). I've\nchecked it on v9 - it is good now.\n\n5. On v8 i've finally played a little bit with standby(s) and this\npatchset with couple of basic scenarios while mixing source of the\nbackups:\n\na. full on standby, incr1 on standby, full db restore (incl. incr1) on standby\n # sometimes i'm getting spurious error like those when doing\nincrementals on standby with -c fast :\n 2023-11-15 13:49:05.721 CET [10573] LOG: recovery restart point\nat 0/A000028\n 2023-11-15 13:49:07.591 CET [10597] WARNING: aborting backup due\nto backend exiting before pg_backup_stop was called\n 2023-11-15 13:49:07.591 CET [10597] ERROR: manifest requires WAL\nfrom final timeline 1 ending at 0/A0000F8, but this backup starts at\n0/A000028\n 2023-11-15 13:49:07.591 CET [10597] STATEMENT: BASE_BACKUP (\nINCREMENTAL, LABEL 'pg_basebackup base backup', PROGRESS,\nCHECKPOINT 'fast', WAIT 0, MANIFEST 'yes', TARGET 'client')\n # when you retry the same pg_basebackup it goes fine (looks like\nCHECKPOINT on standby/restartpoint <-> summarizer disconnect, I'll dig\ndeeper tomorrow. It seems that issuing \"CHECKPOINT; pg_sleep(1);\"\nagainst primary just before pg_basebackup --incr on standby\nworkarounds it)\n\nb. full on primary, incr1 on standby, full db restore (incl. incr1) on\nstandby # WORKS\nc. full on standby, incr1 on standby, full db restore (incl. incr1) on\nprimary # WORKS*\nd. full on primary, incr1 on standby, full db restore (incl. incr1) on\nprimary # WORKS*\n\n* - needs pg_promote() due to the controlfile having standby bit +\npotential fiddling with postgresql.auto.conf as it is having\nprimary_connstring GUC.\n\n6. Sci-fi-mode-on: I was wondering about the dangers of e.g. having\nmore recent pg_basebackup (e.g. from pg18 one day) running against\npg17 in the scope of having this incremental backups possibility. Is\nit going to be safe? 
(currently there seem to be no safeguards\nagainst such use) or should those things (core, pg_basebackup)\nbe running in version lock step?\n\nRegards,\n-J.\n\n\n",
"msg_date": "Wed, 15 Nov 2023 15:13:32 +0100",
"msg_from": "Jakub Wartak <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: trying again to get incremental backup"
},
{
"msg_contents": "On 2023-Nov-13, Robert Haas wrote:\n\n> On Mon, Nov 13, 2023 at 11:25 AM Alvaro Herrera <[email protected]> wrote:\n\n> > Also, it would be good to provide and use a\n> > function to initialize a BlockRefTableKey from the RelFileNode and\n> > forknum components, and ensure that any padding bytes are zeroed.\n> > Otherwise it's not going to be a great hash key. On my platform there\n> > aren't any (padding bytes), but I think it's unwise to rely on that.\n> \n> I'm having trouble understanding the second part of this suggestion.\n> Note that in a frontend context, SH_RAW_ALLOCATOR is pg_malloc0, and\n> in a backend context, we get the default, which is\n> MemoryContextAllocZero. Maybe there's some case this doesn't cover,\n> though?\n\nI meant code like this\n\n\tmemcpy(&key.rlocator, rlocator, sizeof(RelFileLocator));\n\tkey.forknum = forknum;\n\tentry = blockreftable_lookup(brtab->hash, key);\n\nwhere any padding bytes in \"key\" could have arbitrary values, because\nthey're not initialized. So I'd have a (maybe static inline) function\n BlockRefTableKeyInit(&key, rlocator, forknum)\nthat fills it in for you.\n\nNote:\n #define SH_EQUAL(tb, a, b) (memcmp(&a, &b, sizeof(BlockRefTableKey)) == 0)\nAFAICT the new simplehash uses in this patch series are the only ones\nthat use memcmp() as SH_EQUAL, so we don't necessarily have precedent on\nlack of padding bytes initialization in existing uses of simplehash.\n\n> > These forward struct declarations are not buying you anything, I'd\n> > remove them:\n> \n> I've had problems from time to time when I don't do this. I'll remove\n> it here, but I'm not convinced that it's always useless.\n\nWell, certainly there are places where they are necessary.\n\n> > I don't much like the way header files in src/bin/pg_combinebackup files\n> > are structured. Particularly, causing a simplehash to be \"instantiated\"\n> > just because load_manifest.h is included seems poised to cause pain. I\n> > think there should be a file with the basic struct declarations (no\n> > simplehash); and then maybe since both pg_basebackup and\n> > pg_combinebackup seem to need the same simplehash, create a separate\n> > header file containing just that.. But, did you notice that anything\n> > that includes reconstruct.h will instantiate the simplehash stuff,\n> > because it includes load_manifest.h? It may be unwise to have the\n> > simplehash in a header file. Maybe just declare it in each .c file that\n> > needs it. The duplicity is not that large.\n> \n> I think that I did this correctly.\n\nOh, I hadn't grokked that we had this SH_SCOPE thing and a separate\nSH_DECLARE for it being extern. OK, please ignore that.\n\n> > Why leave unnamed arguments in function declarations?\n> \n> I mean, I've changed it now, but I don't think it's worth getting too\n> excited about.\n\nWell, we did get into consistency arguments on this point previously. I\nagree it's not *terribly* important, but on thread\nhttps://www.postgresql.org/message-id/flat/CAH2-WznJt9CMM9KJTMjJh_zbL5hD9oX44qdJ4aqZtjFi-zA3Tg%40mail.gmail.com\npeople got really worked up about this stuff.\n\n> > In GetFileBackupMethod(), which arguments are in and which are out?\n> > The comment doesn't say, and it's not obvious why we pass both the file\n> > path as well as the individual constituent pieces for it.\n> \n> The header comment does document which values are potentially set on\n> return. I guess I thought it was clear enough that the stuff not\n> documented to be output parameters was input parameters. 
Most of them\n> aren't even pointers, so they have to be input parameters. The only\n> exception is 'path', which I have some difficulty thinking that anyone\n> is going to imagine to be an input pointer.\n\nAn output pointer, you mean :-) (Should it be const?)\n\nWhen the return value is BACK_UP_FILE_FULLY, it's not clear what happens\nto these output values; we modify some, but why? Maybe they should be\nleft alone? In that case, the \"if size == 0\" test should move a couple\nof lines up, in the brtentry == NULL block.\n\nBTW, you could do the qsort() after deciding to backup the file fully if\nmore than 90% needs to be replaced.\n\nBTW, in sendDir() why do\n lookup_path = pstrdup(pathbuf + basepathlen + 1);\nwhen you could do\n lookup_path = pstrdup(tarfilename);\n?\n\n> > There are two functions named record_manifest_details_for_file() in\n> > different programs.\n> \n> I had trouble figuring out how to name this stuff. I did notice the\n> awkwardness, but surely nobody can think that two functions with the\n> same name in different binaries can be actually the same function.\n\nOf course not, but when cscope-jumping around, it is weird.\n\n> If we want to inject more underscores here, my vote is to go all the\n> way and make it per_wal_range_cb.\n\n+1\n\n> > In walsummarizer.c, HandleWalSummarizerInterrupts is called in\n> > summarizer_read_local_xlog_page but SummarizeWAL() doesn't do that.\n> > Maybe it should?\n> \n> I replaced all the CHECK_FOR_INTERRUPTS() in that file with\n> HandleWalSummarizerInterrupts(). Does that seem right?\n\nLooks to be what walwriter.c does at least, so I guess it's OK.\n\n> > I think this path is not going to be very human-likeable.\n> > snprintf(final_path, MAXPGPATH,\n> > XLOGDIR \"/summaries/%08X%08X%08X%08X%08X.summary\",\n> > tli,\n> > LSN_FORMAT_ARGS(summary_start_lsn),\n> > LSN_FORMAT_ARGS(summary_end_lsn));\n> > Why not add a dash between the TLI and between both LSNs, or something\n> > like that?\n\n> But I have a hard time arguing that it wouldn't be more readable still\n> if we put some separator characters in there. I didn't do that because\n> then they'd look less like WAL file names, but maybe that's not really\n> a problem. A possible reason not to bother is that these files are\n> less necessary for humans to care about than WAL files, since they\n> don't need to be archived or transported between nodes in any way.\n> Basically I think this is probably fine the way it is, but if you or\n> others think it's really important to change it, I can do that. Just\n> as long as we don't spend 50 emails arguing about which separator\n> character to use.\n\nYeah, I just think that endless stream of hex chars are hard to read,\nand I've found myself following digits in the screen with my fingers in\norder to parse file names. I guess you could say thousands separators\nfor regular numbers aren't needed either, but we do have them for\nreadability sake.\n\n\nI think a new section in chapter 30 \"Reliability and the Write-Ahead\nLog\" is warranted. It would explain the summarization process, what the\nsummary files are used for, and the deletion mechanism. I can draft\nsomething if you want.\n\nIt's not clear to me if WalSummarizerCtl->pending_lsn if fulfilling some\npurpose or it's just a leftover from prior development. I see it's only\nread in an assertion ... Maybe if we think this cross-check is\nimportant, it should be turned into an elog? 
Otherwise, I'd remove it.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"No me acuerdo, pero no es cierto. No es cierto, y si fuera cierto,\n no me acuerdo.\" (Augusto Pinochet a una corte de justicia)\n\n\n",
"msg_date": "Thu, 16 Nov 2023 11:21:32 +0100",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: trying again to get incremental backup"
},
{
"msg_contents": "On Tue, Nov 14, 2023 at 8:12 AM Alvaro Herrera <[email protected]> wrote:\n> 0001 looks OK to push, and since it stands on its own I would get it out\n> of the way soon rather than waiting for the rest of the series to be\n> further reviewed.\n\nAll right, done.\n\n> 0003:\n> AmWalSummarizerProcess() is unused. Remove?\n\nThe intent seems to be to have one of these per enum value, whether it\ngets used or not. Some of the others aren't used, either.\n\n> MaybeWalSummarizer() is called on each ServerLoop() in postmaster.c?\n> This causes a function call to be emitted every time through. That\n> looks odd. All other process starts have some triggering condition.\n\nI'm not sure how much this matters, really. I would expect that the\nfunction call overhead here wouldn't be very noticeable. Generally I\nthink that when ServerLoop returns from WaitEventSetWait it's going to\nbe because we need to fork a process. That's pretty expensive compared\nto a function call. If we can iterate through this loop lots of times\nwithout doing any real work then it might matter, but I feel like\nthat's probably not the case, and probably something we would want to\nfix if it were the case.\n\nNow, I could nevertheless move some of the triggering conditions in\nMaybeStartWalSummarizer(), but moving, say, just the summarize_wal\ncondition wouldn't be enough to avoid having MaybeStartWalSummarizer()\ncalled repeatedly when there was no work to do, because summarize_wal\ncould be true and the summarizer could all be running. Similarly, if I\nmove just the WalSummarizerPID == 0 condition, the function gets\ncalled repeatedly without doing anything when summarize_wal = off. So\nat a minimum you have to move both of those if you care about avoiding\nthe function call overhead, and then you have to wonder if you care\nabout the corner cases where the function would be called repeatedly\nfor no gain even then.\n\nAnother approach would be to make the function static inline rather\nthan just static. Or we could delete the whole function and just\nduplicate the logic it contains at both call sites. Personally I'm\ninclined to just leave it how it is in the absence of some evidence\nthat there's a real problem here. It's nice to have all the triggering\nconditions in one place with nothing duplicated.\n\n> GetOldestUnsummarizedLSN uses while(true), but WaitForWalSummarization\n> and SummarizeWAL use while(1). Maybe settle on one style?\n\nOK.\n\n> 0004:\n> in PrepareForIncrementalBackup(), the logic that determines\n> earliest_wal_range_tli and latest_wal_range_tli looks pretty weird. I\n> think it works fine if there's a single timeline, but not otherwise.\n> Or maybe the trick is that it relies on timelines returned by\n> readTimeLineHistory being sorted backwards? If so, maybe add a comment\n> about that somewhere; I don't think other callers of readTimeLineHistory\n> make that assumption.\n\nIt does indeed rely on that assumption, and the comment at the top of\nthe for (i = 0; i < num_wal_ranges; ++i) loop explains that. Note also\nthe comment just below that begins \"If we found this TLI in the\nserver's history\". I agree with you that this logic looks strange, and\nit's possible that there's some better way to do encode the idea than\nwhat I've done here, but I think it might be just that the particular\ncalculation we're trying to do here is strange. 
It's almost easier to\nunderstand the logic if you start by reading the sanity checks\n(\"manifest requires WAL from initial timeline %u starting at %X/%X,\nbut that timeline begins at %X/%X\" et. al.), look at the triggering\nconditions for those, and then work upward to see how\nearliest/latest_wal_range_tli get set, and then look up from there to\nsee how saw_earliest/latest_wal_range_tli are used in computing those\nvalues.\n\nWe do rely on the ordering assumption elsewhere. For example, in\nXLogFileReadAnyTLI, see if (tli < curFileTLI) break. We also use it to\nset expectedTLEs, which is documented to have this property. And\nAddWALInfoToBackupManifest relies on it too; see the comment \"Because\nthe timeline history file lists newer timelines before older ones\" in\nAddWALInfoToBackupManifest. We're not entirely consistent about this,\ne.g., unlike XLogFileReadAnyTLI, tliInHistory() and\ntliOfPointInHistory() don't have an early exit provision, but we do\nuse it some places.\n\n--\nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 16 Nov 2023 12:13:44 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: trying again to get incremental backup"
},
{
"msg_contents": "On Thu, Nov 16, 2023 at 5:21 AM Alvaro Herrera <[email protected]> wrote:\n> I meant code like this\n>\n> memcpy(&key.rlocator, rlocator, sizeof(RelFileLocator));\n> key.forknum = forknum;\n> entry = blockreftable_lookup(brtab->hash, key);\n\nAh, I hadn't thought about that. Another way of handling that might be\nto add = {0} to the declaration of key. But I can do the initializer\nthing too if you think it's better. I'm not sure if there's an\nargument that the initializer might optimize better.\n\n> An output pointer, you mean :-) (Should it be const?)\n\nI'm bad at const, but that seems to work, so sure.\n\n> When the return value is BACK_UP_FILE_FULLY, it's not clear what happens\n> to these output values; we modify some, but why? Maybe they should be\n> left alone? In that case, the \"if size == 0\" test should move a couple\n> of lines up, in the brtentry == NULL block.\n\nOK.\n\n> BTW, you could do the qsort() after deciding to backup the file fully if\n> more than 90% needs to be replaced.\n\nOK.\n\n> BTW, in sendDir() why do\n> lookup_path = pstrdup(pathbuf + basepathlen + 1);\n> when you could do\n> lookup_path = pstrdup(tarfilename);\n> ?\n\nNo reason, changed.\n\n> > If we want to inject more underscores here, my vote is to go all the\n> > way and make it per_wal_range_cb.\n>\n> +1\n\nWill look into this.\n\n> Yeah, I just think that endless stream of hex chars are hard to read,\n> and I've found myself following digits in the screen with my fingers in\n> order to parse file names. I guess you could say thousands separators\n> for regular numbers aren't needed either, but we do have them for\n> readability sake.\n\nSigh.\n\n> I think a new section in chapter 30 \"Reliability and the Write-Ahead\n> Log\" is warranted. It would explain the summarization process, what the\n> summary files are used for, and the deletion mechanism. I can draft\n> something if you want.\n\nSure, if you want to take a crack at it, that's great.\n\n> It's not clear to me if WalSummarizerCtl->pending_lsn if fulfilling some\n> purpose or it's just a leftover from prior development. I see it's only\n> read in an assertion ... Maybe if we think this cross-check is\n> important, it should be turned into an elog? Otherwise, I'd remove it.\n\nI've been thinking about that. One thing I'm not quite sure about\nthough is introspection. Maybe there should be a function that shows\nsummarized_tli and summarized_lsn from WalSummarizerData, and maybe it\nshould expose pending_lsn too.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 16 Nov 2023 12:13:53 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: trying again to get incremental backup"
},
{
"msg_contents": "On 2023-Oct-04, Robert Haas wrote:\n\n> - I would like some feedback on the generation of WAL summary files.\n> Right now, I have it enabled by default, and summaries are kept for a\n> week. That means that, with no additional setup, you can take an\n> incremental backup as long as the reference backup was taken in the\n> last week. File removal is governed by mtimes, so if you change the\n> mtimes of your summary files or whack your system clock around, weird\n> things might happen. But obviously this might be inconvenient. Some\n> people might not want WAL summary files to be generated at all because\n> they don't care about incremental backup, and other people might want\n> them retained for longer, and still other people might want them to be\n> not removed automatically or removed automatically based on some\n> criteria other than mtime. I don't really know what's best here. I\n> don't think the default policy that the patches implement is\n> especially terrible, but it's just something that I made up and I\n> don't have any real confidence that it's wonderful. One point to be\n> consider here is that, if WAL summarization is enabled, checkpoints\n> can't remove WAL that isn't summarized yet. Mostly that's not a\n> problem, I think, because the WAL summarizer is pretty fast. But it\n> could increase disk consumption for some people. I don't think that we\n> need to worry about the summaries themselves being a problem in terms\n> of space consumption; at least in all the cases I've tested, they're\n> just not very big.\n\nSo, wal_summary is no longer turned on by default, I think following a\ncomment from Peter E. I think this is a good decision, as we're only\ngoing to need them on servers from which incremental backups are going\nto be taken, which is a strict subset of all servers; and furthermore,\npeople that need them are going to realize that very easily, while if we\nwent the other around most people would not realize that they need to\nturn them off to save some resource consumption.\n\nGranted, the amount of resources additionally used is probably not very\nbig. But since it can be changed with a reload not restart, it doesn't\nseem problematic.\n\n... oh, I just noticed that this patch now fails to compile because of\nthe MemoryContextResetAndDeleteChildren removal.\n\n(Typo in the pg_walsummary manpage: \"since WAL summary files primary\nexist\" -> \"primarily\")\n\n> - On a related note, I haven't yet tested this on a standby, which is\n> a thing that I definitely need to do. I don't know of a reason why it\n> shouldn't be possible for all of this machinery to work on a standby\n> just as it does on a primary, but then we need the WAL summarizer to\n> run there too, which could end up being a waste if nobody ever tries\n> to take an incremental backup. I wonder how that should be reflected\n> in the configuration. We could do something like what we've done for\n> archive_mode, where on means \"only on if this is a primary\" and you\n> have to say always if you want it to run on standbys as well ... but\n> I'm not sure if that's a design pattern that we really want to\n> replicate into more places. I'd be somewhat inclined to just make\n> whatever configuration parameters we need to configure this thing on\n> the primary also work on standbys, and you can set each server up as\n> you please. 
But I'm open to other suggestions.\n\nI think it should default to off in primary and standby, and the user\nhas to enable it in whichever server they want to take backups from.\n\n> - We need to settle the question of whether to send the whole backup\n> manifest to the server or just the LSN. In a previous attempt at\n> incremental backup, we decided the whole manifest was necessary,\n> because flat-copying files could make new data show up with old LSNs.\n> But that version of the patch set was trying to find modified blocks\n> by checking their LSNs individually, not by summarizing WAL. And since\n> the operations that flat-copy files are WAL-logged, the WAL summary\n> approach seems to eliminate that problem - maybe an LSN (and the\n> associated TLI) is good enough now. This also relates to Jakub's\n> question about whether this machinery could be used to fast-forward a\n> standby, which is not exactly a base backup but ... perhaps close\n> enough? I'm somewhat inclined to believe that we can simplify to an\n> LSN and TLI; however, if we do that, then we'll have big problems if\n> later we realize that we want the manifest for something after all. So\n> if anybody thinks that there's a reason to keep doing what the patch\n> does today -- namely, upload the whole manifest to the server --\n> please speak up.\n\nI don't understand this point. Currently, the protocol is that\nUPLOAD_MANIFEST is used to send the manifest prior to requesting the\nbackup. You seem to be saying that you're thinking of removing support\nfor UPLOAD_MANIFEST and instead just give the LSN as an option to the\nBASE_BACKUP command?\n\n> - It's regrettable that we don't have incremental JSON parsing;\n\nWe now do have it, at least in patch form. I guess the question is\nwhether we're going to accept it in core. I see chances of changing the\nformat of the manifest rather slim at this point, and the need for very\nlarge manifests is likely to go up with time, so we probably need to\ntake that code and polish it up, and see if we can improve its\nperformance.\n\n> - Right now, I have a hard-coded 60 second timeout for WAL\n> summarization. If you try to take an incremental backup and the WAL\n> summaries you need don't show up within 60 seconds, the backup times\n> out. I think that's a reasonable default, but should it be\n> configurable? If yes, should that be a GUC or, perhaps better, a\n> pg_basebackup option?\n\nI'd rather have a way for the server to provide diagnostics on why the\nsummaries aren't being produced. Maybe a server running under valgrind\nis going to fail and need a longer one, but otherwise a hardcoded\ntimeout seems sufficient.\n\nYou did say later that you thought summary files would just go from one\ncheckpoint to the next. So the only question is at what point the file\nfor the last checkpoint (i.e. from the previous one up to the one\nrequested by pg_basebackup) is written. If walsummarizer keeps almost\nthe complete state in memory and just waits for the checkpoint record to\nwrite it, then it's probably okay.\n\n> - I'm curious what people think about the pg_walsummary tool that is\n> included in 0006. I think it's going to be fairly important for\n> debugging, but it does feel a little bit bad to add a new binary for\n> something pretty niche. Nevertheless, merging it into any other\n> utility seems relatively awkward, so I'm inclined to think both that\n> this should be included in whatever finally gets committed and that it\n> should be a separate binary. 
I considered whether it should go in\n> contrib, but we seem to have moved to a policy that heavily favors\n> limiting contrib to extensions and loadable modules, rather than\n> binaries.\n\nI propose to keep the door open for that binary doing other things than\ndumping the files as text. So add a command argument, which currently\ncan only be \"dump\", to allow the command to do other things later if\nneeded. (For example, remove files from a server on which summarize_wal\nhas been turned off; or perhaps remove files that are below some LSN.)\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"Estoy de acuerdo contigo en que la verdad absoluta no existe...\nEl problema es que la mentira sí existe y tu estás mintiendo\" (G. Lama)\n\n\n",
"msg_date": "Thu, 16 Nov 2023 18:23:00 +0100",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: trying again to get incremental backup"
},
{
"msg_contents": "On 2023-Nov-16, Robert Haas wrote:\n\n> On Thu, Nov 16, 2023 at 5:21 AM Alvaro Herrera <[email protected]> wrote:\n> > I meant code like this\n> >\n> > memcpy(&key.rlocator, rlocator, sizeof(RelFileLocator));\n> > key.forknum = forknum;\n> > entry = blockreftable_lookup(brtab->hash, key);\n> \n> Ah, I hadn't thought about that. Another way of handling that might be\n> to add = {0} to the declaration of key. But I can do the initializer\n> thing too if you think it's better. I'm not sure if there's an\n> argument that the initializer might optimize better.\n\nI think the {0} initializer is good enough, given a comment to indicate\nwhy.\n\n> > It's not clear to me if WalSummarizerCtl->pending_lsn if fulfilling some\n> > purpose or it's just a leftover from prior development. I see it's only\n> > read in an assertion ... Maybe if we think this cross-check is\n> > important, it should be turned into an elog? Otherwise, I'd remove it.\n> \n> I've been thinking about that. One thing I'm not quite sure about\n> though is introspection. Maybe there should be a function that shows\n> summarized_tli and summarized_lsn from WalSummarizerData, and maybe it\n> should expose pending_lsn too.\n\nTrue.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Thu, 16 Nov 2023 18:26:33 +0100",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: trying again to get incremental backup"
},
{
"msg_contents": "On 2023-Nov-16, Alvaro Herrera wrote:\n\n> On 2023-Oct-04, Robert Haas wrote:\n\n> > - Right now, I have a hard-coded 60 second timeout for WAL\n> > summarization. If you try to take an incremental backup and the WAL\n> > summaries you need don't show up within 60 seconds, the backup times\n> > out. I think that's a reasonable default, but should it be\n> > configurable? If yes, should that be a GUC or, perhaps better, a\n> > pg_basebackup option?\n> \n> I'd rather have a way for the server to provide diagnostics on why the\n> summaries aren't being produced. Maybe a server running under valgrind\n> is going to fail and need a longer one, but otherwise a hardcoded\n> timeout seems sufficient.\n> \n> You did say later that you thought summary files would just go from one\n> checkpoint to the next. So the only question is at what point the file\n> for the last checkpoint (i.e. from the previous one up to the one\n> requested by pg_basebackup) is written. If walsummarizer keeps almost\n> the complete state in memory and just waits for the checkpoint record to\n> write it, then it's probably okay.\n\nOn 2023-Nov-16, Alvaro Herrera wrote:\n\n> On 2023-Nov-16, Robert Haas wrote:\n> \n> > On Thu, Nov 16, 2023 at 5:21 AM Alvaro Herrera <[email protected]> wrote:\n\n> > > It's not clear to me if WalSummarizerCtl->pending_lsn if fulfilling some\n> > > purpose or it's just a leftover from prior development. I see it's only\n> > > read in an assertion ... Maybe if we think this cross-check is\n> > > important, it should be turned into an elog? Otherwise, I'd remove it.\n> > \n> > I've been thinking about that. One thing I'm not quite sure about\n> > though is introspection. Maybe there should be a function that shows\n> > summarized_tli and summarized_lsn from WalSummarizerData, and maybe it\n> > should expose pending_lsn too.\n> \n> True.\n\nPutting those two thoughts together, I think pg_basebackup with\n--progress could tell you \"still waiting for the summary file up to LSN\n%X/%X to appear, and the walsummarizer is currently handling lsn %X/%X\"\nor something like that. This would probably require two concurrent\nconnections, one to run BASE_BACKUP and another to inquire server state;\nbut this should easy enough to integrate together with parallel\nbasebackup later.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Thu, 16 Nov 2023 18:33:55 +0100",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: trying again to get incremental backup"
},
{
"msg_contents": "On Thu, Nov 16, 2023 at 12:34 PM Alvaro Herrera <[email protected]> wrote:\n> Putting those two thoughts together, I think pg_basebackup with\n> --progress could tell you \"still waiting for the summary file up to LSN\n> %X/%X to appear, and the walsummarizer is currently handling lsn %X/%X\"\n> or something like that. This would probably require two concurrent\n> connections, one to run BASE_BACKUP and another to inquire server state;\n> but this should easy enough to integrate together with parallel\n> basebackup later.\n\nI had similar thoughts, except I was thinking it would be better to\nhave the warnings be generated on the server side. That would save the\nneed for a second libpq connection, which would be good, because I\nthink adding that would result in a pretty large increase in\ncomplexity and some not-so-great user-visible consequences. In fact,\nmy latest thought is to just remove the timeout altogether, and emit\nwarnings like this:\n\nWARNING: still waiting for WAL summarization to reach %X/%X after %d\nseconds, currently at %X/%X\n\nWe could emit that every 30 seconds or so until either the situation\nresolves itself or the user hits ^C. I think that would be good enough\nhere. If we want, the interval between messages can be a GUC, but I\ndon't know how much real need there will be to tailor that.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 16 Nov 2023 12:50:43 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: trying again to get incremental backup"
},
{
"msg_contents": "On Thu, Nov 16, 2023 at 12:23 PM Alvaro Herrera <[email protected]> wrote:\n> So, wal_summary is no longer turned on by default, I think following a\n> comment from Peter E. I think this is a good decision, as we're only\n> going to need them on servers from which incremental backups are going\n> to be taken, which is a strict subset of all servers; and furthermore,\n> people that need them are going to realize that very easily, while if we\n> went the other around most people would not realize that they need to\n> turn them off to save some resource consumption.\n>\n> Granted, the amount of resources additionally used is probably not very\n> big. But since it can be changed with a reload not restart, it doesn't\n> seem problematic.\n\nYeah. I meant to say that I'd changed that for that reason, but in the\nflurry of new versions I omitted to do so.\n\n> ... oh, I just noticed that this patch now fails to compile because of\n> the MemoryContextResetAndDeleteChildren removal.\n\nFixed.\n\n> (Typo in the pg_walsummary manpage: \"since WAL summary files primary\n> exist\" -> \"primarily\")\n\nThis, too.\n\n> I think it should default to off in primary and standby, and the user\n> has to enable it in whichever server they want to take backups from.\n\nYeah, that's how it works currently.\n\n> I don't understand this point. Currently, the protocol is that\n> UPLOAD_MANIFEST is used to send the manifest prior to requesting the\n> backup. You seem to be saying that you're thinking of removing support\n> for UPLOAD_MANIFEST and instead just give the LSN as an option to the\n> BASE_BACKUP command?\n\nI don't think I'd want to do exactly that, because then you could only\nsend one LSN, and I do think we want to send a set of LSN ranges with\nthe corresponding TLI for each. I was thinking about dumping\nUPLOAD_MANIFEST and instead having a command like:\n\nINCREMENTAL_WAL_RANGE 1 2/462AC48 2/462C698\n\nThe client would execute this command one or more times before\nstarting an incremental backup.\n\n> I propose to keep the door open for that binary doing other things that\n> dumping the files as text. So add a command argument, which currently\n> can only be \"dump\", to allow the command do other things later if\n> needed. (For example, remove files from a server on which summarize_wal\n> has been turned off; or perhaps remove files that are below some LSN.)\n\nI don't like that very much. That sounds like one of those\nforward-compatibility things that somebody designs and then nothing\never happens and ten years later you still have an ugly wart.\n\nMy theory is that these files are going to need very little\nmanagement. In general, they're small; if you never removed them, it\nprobably wouldn't hurt, or at least, not for a long time. As to\nspecific use cases, if you want to remove files from a server on which\nsummarize_wal has been turned off, you can just use rm. Removing files\nfrom before a certain LSN would probably need a bit of scripting, but\nonly a bit. Conceivably we could provide something like that in core,\nbut it doesn't seem necessary, and it also seems to me that we might\ndo well to include that in pg_archivecleanup rather than in\npg_walsummary.\n\nHere's a new version. 
Changes:\n\n- Add preparatory renaming patches to the series.\n- Rename wal_summarize_keep_time to wal_summary_keep_time.\n- Change while (true) to while (1).\n- Typo fixes.\n- Fix incorrect assertion in summarizer_read_local_xlog_page; this\ncould cause occasional regression test failures in 004_pg_xlog_symlink\nand 009_growing_files.\n- Zero-initialize BlockRefTableKey variables.\n- Replace a couple instances of pathbuf + basepathlen + 1 with tarfilename.\n- Add const to path argument of GetFileBackupMethod.\n- Avoid setting output parameters of GetFileBackupMethod unless the\nreturn value is BACK_UP_FILE_INCREMENTALLY.\n- In GetFileBackupMethod, postpone qsorting block numbers slightly.\n- Define INCREMENTAL_PREFIX_LENGTH using sizeof(), because that should\nhopefully work everywhere and the StaticAssertStmt that checks the\nvalue of this doesn't work on Windows.\n- Change MemoryContextResetAndDeleteChildren to MemoryContextReset.\n\n--\nRobert Haas\nEDB: http://www.enterprisedb.com",
"msg_date": "Thu, 16 Nov 2023 14:36:11 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: trying again to get incremental backup"
},
{
"msg_contents": "I made a pass over pg_combinebackup for NLS. I propose the attached\npatch.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"Right now the sectors on the hard disk run clockwise, but I heard a rumor that\nyou can squeeze 0.2% more throughput by running them counterclockwise.\nIt's worth the effort. Recommended.\" (Gerry Pourwelle)",
"msg_date": "Fri, 17 Nov 2023 11:01:21 +0100",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: trying again to get incremental backup"
},
{
"msg_contents": "On Fri, Nov 17, 2023 at 5:01 AM Alvaro Herrera <[email protected]> wrote:\n> I made a pass over pg_combinebackup for NLS. I propose the attached\n> patch.\n\nThis doesn't quite compile for me so I changed a few things and\nincorporated it. Hopefully I didn't mess anything up.\n\nHere's v11. In addition to incorporating Álvaro's NLS changes, with\nthe off-list help of Jakub Wartak, I finally tracked down two one-line\nbugs in BlockRefTableEntryGetBlocks that have been causing the cfbot\nto blow up on these patches. What I hadn't realized is that cfbot runs\nwith the relation segment size changed to 6 blocks, which tickled some\ncode paths that I wasn't exercising locally. Thanks a ton to Jakub for\nthe help running this down. cfbot was unhappy about a %lu so I've\nchanged that to %zu in this version, too. Finally, the previous\nversion of this patch set had some pgindent damage, so that is\nhopefully now cleaned up as well.\n\nI wish I had better ideas about how to thoroughly test this. I've got\na bunch of different tests for pg_combinebackup and I think those are\ngood, but the bugs mentioned in the previous paragraph show that those\naren't sufficient to catch all of the logic errors that can exist,\nwhich is not great. But, as I say, I'm not quite sure how to do\nbetter, so I guess I'll just need to keep fixing problems as we find\nthem.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com",
"msg_date": "Mon, 20 Nov 2023 10:42:47 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: trying again to get incremental backup"
},
{
"msg_contents": "On 2023-Nov-16, Robert Haas wrote:\n\n> On Thu, Nov 16, 2023 at 12:23 PM Alvaro Herrera <[email protected]> wrote:\n\n> > I don't understand this point. Currently, the protocol is that\n> > UPLOAD_MANIFEST is used to send the manifest prior to requesting the\n> > backup. You seem to be saying that you're thinking of removing support\n> > for UPLOAD_MANIFEST and instead just give the LSN as an option to the\n> > BASE_BACKUP command?\n> \n> I don't think I'd want to do exactly that, because then you could only\n> send one LSN, and I do think we want to send a set of LSN ranges with\n> the corresponding TLI for each. I was thinking about dumping\n> UPLOAD_MANIFEST and instead having a command like:\n> \n> INCREMENTAL_WAL_RANGE 1 2/462AC48 2/462C698\n> \n> The client would execute this command one or more times before\n> starting an incremental backup.\n\nThat sounds good to me. Not having to parse the manifest server-side\nsounds like a win, as does saving the transfer, for the cases where the\nmanifest is large.\n\nIs this meant to support multiple timelines each with non-overlapping\nadjacent ranges, rather than multiple non-adjacent ranges?\n\nDo I have it right that you want to rewrite this bit before considering\nthis ready to commit?\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"No nos atrevemos a muchas cosas porque son difíciles,\npero son difíciles porque no nos atrevemos a hacerlas\" (Séneca)\n\n\n",
"msg_date": "Mon, 20 Nov 2023 20:03:01 +0100",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: trying again to get incremental backup"
},
{
"msg_contents": "On Mon, Nov 20, 2023 at 2:03 PM Alvaro Herrera <[email protected]> wrote:\n> That sounds good to me. Not having to parse the manifest server-side\n> sounds like a win, as does saving the transfer, for the cases where the\n> manifest is large.\n\nOK. I'll look into this next week, hopefully.\n\n> Is this meant to support multiple timelines each with non-overlapping\n> adjacent ranges, rather than multiple non-adjacent ranges?\n\nCorrect. I don't see how non-adjacent LSN ranges could ever be a\nuseful thing, but adjacent ranges on different timelines are useful.\n\n> Do I have it right that you want to rewrite this bit before considering\n> this ready to commit?\n\nFor sure. I don't think this is the only thing that needs to be\nrevised before commit, but it's definitely *a* thing that needs to be\nrevised before commit.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 20 Nov 2023 14:10:34 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: trying again to get incremental backup"
},
{
"msg_contents": "On Mon, Nov 20, 2023 at 2:10 PM Robert Haas <[email protected]> wrote:\n> > Is this meant to support multiple timelines each with non-overlapping\n> > adjacent ranges, rather than multiple non-adjacent ranges?\n>\n> Correct. I don't see how non-adjacent LSN ranges could ever be a\n> useful thing, but adjacent ranges on different timelines are useful.\n\nThinking about this a bit more, there are a couple of things we could\ndo here in terms of syntax. Once idea is to give up on having a\nseparate MANIFEST-WAL-RANGE command for each range and instead just\ncram everything into either a single command:\n\nMANIFEST-WAL-RANGES {tli} {startlsn} {endlsn}...\n\nOr even into a single option to the BASE_BACKUP command:\n\nBASE_BACKUP yadda yadda INCREMENTAL 'tli@startlsn-endlsn,...'\n\nOr, since we expect adjacent, non-overlapping ranges, you could even\narrange to elide the duplicated boundary LSNs, e.g.\n\nMANIFEST_WAL-RANGES {{tli} {lsn}}... {final-lsn}\n\nOr\n\nBASE_BACKUP yadda yadda INCREMENTAL 'tli@lsn,...,final-lsn'\n\nI'm not sure what's best here. Trying to trim out the duplicated\nboundary LSNs feels a bit like rearrangement for the sake of\nrearrangement, but maybe it isn't really.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 20 Nov 2023 14:27:43 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: trying again to get incremental backup"
},
{
"msg_contents": "On Mon, Nov 20, 2023 at 4:43 PM Robert Haas <[email protected]> wrote:\n>\n> On Fri, Nov 17, 2023 at 5:01 AM Alvaro Herrera <[email protected]> wrote:\n> > I made a pass over pg_combinebackup for NLS. I propose the attached\n> > patch.\n>\n> This doesn't quite compile for me so I changed a few things and\n> incorporated it. Hopefully I didn't mess anything up.\n>\n> Here's v11.\n[..]\n\n> I wish I had better ideas about how to thoroughly test this. [..]\n\nHopefully the below add some confidence, I've done some further\nquick(?) checks today and results are good:\n\nmake check-world #GOOD\ntest_full_pri__incr_stby__restore_on_pri.sh #GOOD\ntest_full_pri__incr_stby__restore_on_stby.sh #GOOD*\ntest_full_stby__incr_stby__restore_on_pri.sh #GOOD\ntest_full_stby__incr_stby__restore_on_stby.sh #GOOD*\ntest_many_incrementals_dbcreate.sh #GOOD\ntest_many_incrementals.sh #GOOD\ntest_multixact.sh #GOOD\ntest_pending_2pc.sh #GOOD\ntest_reindex_and_vacuum_full.sh #GOOD\ntest_truncaterollback.sh #GOOD\ntest_unlogged_table.sh #GOOD\ntest_across_wallevelminimal.sh # GOOD(expected failure, that\nwalsummaries are off during walminimal and incremental cannot be\ntaken--> full needs to be taken after wal_level=minimal)\n\nCFbot failed on two hosts this time, I haven't looked at the details\nyet (https://cirrus-ci.com/task/6425149646307328 -> end of EOL? ->\nLOG: WAL summarizer process (PID 71511) was terminated by signal 6:\nAborted?)\n\nThe remaining test idea is to have a longer running DB under stress\ntest in more real-world conditions and try to recover using chained\nincremental backups (one such test was carried out on patchset v6 and\nthe result was good back then).\n\n-J.\n\n\n",
"msg_date": "Tue, 21 Nov 2023 15:13:09 +0100",
"msg_from": "Jakub Wartak <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: trying again to get incremental backup"
},
{
"msg_contents": "On Wed, Nov 22, 2023 at 3:14 AM Jakub Wartak\n<[email protected]> wrote:\n> CFbot failed on two hosts this time, I haven't looked at the details\n> yet (https://cirrus-ci.com/task/6425149646307328 -> end of EOL? ->\n> LOG: WAL summarizer process (PID 71511) was terminated by signal 6:\n> Aborted?)\n\nRobert pinged me to see if I had any ideas.\n\nThe reason it fails on Windows is because there is a special code path\nfor WIN32 in the patch's src/bin/pg_combinebackup/copy_file.c, but it\nis incomplete: it returns early without feeding the data into the\nchecksum, so all the checksums have the same initial and bogus value.\nI commented that part out so it took the normal path like Unix, and it\npassed.\n\nThe reason it fails on Linux 32 bit with -fsanitize is because this\nhas uncovered a bug in xlogreader.c, which overflows a 32 bit pointer\nwhen doing a size test that could easily be changed to non-overflowing\nformulation. AFAICS it is not a live bug because it comes to the\nright conclusion without deferencing the pointer due to other checks,\nbut the sanitizer is not wrong to complain about it and I will post a\npatch to fix that in a new thread. With the draft patch I am testing,\nthe sanitizer is happy and this passes too.\n\n\n",
"msg_date": "Fri, 24 Nov 2023 17:18:07 +1300",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: trying again to get incremental backup"
},
{
"msg_contents": "On Thu, Nov 23, 2023 at 11:18 PM Thomas Munro <[email protected]> wrote:\n> Robert pinged me to see if I had any ideas.\n\nThanks, Thomas.\n\n> The reason it fails on Windows is because there is a special code path\n> for WIN32 in the patch's src/bin/pg_combinebackup/copy_file.c, but it\n> is incomplete: it returns early without feeding the data into the\n> checksum, so all the checksums have the same initial and bogus value.\n> I commented that part out so it took the normal path like Unix, and it\n> passed.\n\nYikes, that's embarrassing. Thanks for running it down. There is logic\nin the caller to figure out whether we need to recompute the checksum\nor can reuse one we already have, but copy_file() doesn't understand\nthat it should take the slow path if a new checksum computation is\nrequired.\n\n> The reason it fails on Linux 32 bit with -fsanitize is because this\n> has uncovered a bug in xlogreader.c, which overflows a 32 bit pointer\n> when doing a size test that could easily be changed to non-overflowing\n> formulation. AFAICS it is not a live bug because it comes to the\n> right conclusion without deferencing the pointer due to other checks,\n> but the sanitizer is not wrong to complain about it and I will post a\n> patch to fix that in a new thread. With the draft patch I am testing,\n> the sanitizer is happy and this passes too.\n\nThanks so much.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 27 Nov 2023 14:02:59 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: trying again to get incremental backup"
},
{
"msg_contents": "On Wed, Nov 15, 2023 at 9:14 AM Jakub Wartak\n<[email protected]> wrote:\n> so I've spent some time playing still with patchset v8 (without the\n> 6/6 testing patch related to wal_level=minimal), with the exception of\n> - patchset v9 - marked otherwise.\n\nThanks, as usual, for that.\n\n> 2. Usability thing: I hit the timeout hard: \"This backup requires WAL\n> to be summarized up to 0/90000D8, but summarizer has only reached\n> 0/0.\" with summarize_wal=off (default) but apparently this in TODO.\n> Looks like an important usability thing.\n\nAll right. I'd sort of forgotten about the need to address that issue,\nbut apparently, I need to re-remember.\n\n> 5. On v8 i've finally played a little bit with standby(s) and this\n> patchset with couple of basic scenarios while mixing source of the\n> backups:\n>\n> a. full on standby, incr1 on standby, full db restore (incl. incr1) on standby\n> # sometimes i'm getting spurious error like those when doing\n> incrementals on standby with -c fast :\n> 2023-11-15 13:49:05.721 CET [10573] LOG: recovery restart point\n> at 0/A000028\n> 2023-11-15 13:49:07.591 CET [10597] WARNING: aborting backup due\n> to backend exiting before pg_backup_stop was called\n> 2023-11-15 13:49:07.591 CET [10597] ERROR: manifest requires WAL\n> from final timeline 1 ending at 0/A0000F8, but this backup starts at\n> 0/A000028\n> 2023-11-15 13:49:07.591 CET [10597] STATEMENT: BASE_BACKUP (\n> INCREMENTAL, LABEL 'pg_basebackup base backup', PROGRESS,\n> CHECKPOINT 'fast', WAIT 0, MANIFEST 'yes', TARGET 'client')\n> # when you retry the same pg_basebackup it goes fine (looks like\n> CHECKPOINT on standby/restartpoint <-> summarizer disconnect, I'll dig\n> deeper tomorrow. It seems that issuing \"CHECKPOINT; pg_sleep(1);\"\n> against primary just before pg_basebackup --incr on standby\n> workarounds it)\n>\n> b. full on primary, incr1 on standby, full db restore (incl. incr1) on\n> standby # WORKS\n> c. full on standby, incr1 on standby, full db restore (incl. incr1) on\n> primary # WORKS*\n> d. full on primary, incr1 on standby, full db restore (incl. incr1) on\n> primary # WORKS*\n>\n> * - needs pg_promote() due to the controlfile having standby bit +\n> potential fiddling with postgresql.auto.conf as it is having\n> primary_connstring GUC.\n\nWell, \"manifest requires WAL from final timeline 1 ending at\n0/A0000F8, but this backup starts at 0/A000028\" is a valid complaint,\nnot a spurious error. It's essentially saying that WAL replay for this\nincremental backup would have to begin at a location that is earlier\nthan where replay for the earlier backup would have to end while\nrecovering that backup. It's almost like you're trying to go backwards\nin time, with the incremental happening before the full backup instead\nof after it. I think the reason this is happening is that when you\ntake a backup, recovery has to start from the previous checkpoint. On\nthe primary, we perform a new checkpoint and plan to start recovery\nfrom it. But on a standby, we can't perform a new checkpoint, since we\ncan't write WAL, so we arrange for recovery of the backup to begin\nfrom the most recent checkpoint. And if you do two backups on the\nstandby in a row without much happening in the middle, then the most\nrecent checkpoint will be the same for both. 
And that I think is\nwhat's resulting in this error, because the end of the backup follows\nthe start of the backup, so if two consecutive backups have the same\nstart, then the start of the second one will precede the end of the\nfirst one.\n\nOne thing that's interesting to note here is that there is no point in\nperforming an incremental backup under these circumstances. You would\naccrue no advantage over just letting replay continue further from the\nfull backup. The whole point of an incremental backup is that it lets\nyou \"fast forward\" your older backup -- you could have just replayed\nall the WAL from the older backup until you got to the start LSN of\nthe newer backup, but reconstructing a backup that can start replay\nfrom the newer LSN directly is, hopefully, quicker than replaying all\nof that WAL. But in this scenario, you're starting from the same\ncheckpoint no matter what -- the amount of WAL replay required to\nreach any given LSN will be unchanged. So storing an incremental\nbackup would be strictly a loss.\n\nAnother interesting point to consider is that you could also get this\ncomplaint by doing something like take the full backup from the\nprimary, and then try to take an incremental backup from a standby,\nmaybe even a time-delayed standby that's far behind the primary. In\nthat case, you would really be trying to take an incremental backup\nbefore you actually took the full backup, as far as LSN time goes.\n\nI'm not quite sure what to do about any of this. I think the error is\ncorrect and accurate, but understanding what it means and why it's\nhappening and what to do about it is probably going to be difficult\nfor people. Possibly we should have documentation that talks you\nthrough all of this. Or possibly there are ways to elaborate on the\nerror message itself. But I'm a little skeptical about the latter\napproach because it's all so complicated. I don't know that we can\nsummarize it in a sentence or two.\n\n> 6. Sci-fi-mode-on: I was wondering about the dangers of e.g. having\n> more recent pg_basebackup (e.g. from pg18 one day) running against\n> pg17 in the scope of having this incremental backups possibility. Is\n> it going to be safe? (currently there seems to be no safeguards\n> against such use) or should those things (core, pg_basebackup) should\n> be running in version lock step?\n\nI think it should be safe, actually. pg_basebackup has no reason to\ncare about WAL format changes across versions. It doesn't even care\nabout the format of the WAL summaries, which it never sees, but only\nneeds the server to have. If we change the format of the incremental\nfiles that are included in the backup, then we will need\nbackward-compatibility code, or we can disallow cross-version\noperations. I don't currently foresee a need to do that, but you never\nknow. It's manageable in any event.\n\nBut note that I also didn't (and can't, without a lot of ugliness)\nmake pg_combinebackup version-independent. So you could think of\ntaking incremental backups with a different version of pg_basebackup,\nbut if you want to restore you're going to need a matching version of\npg_combinebackup.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 29 Nov 2023 09:06:19 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: trying again to get incremental backup"
},
{
"msg_contents": "New patch set.\n\n0001: Rename JsonManifestParseContext callbacks, per feedback from\nÁlvaro. Not logically related to the rest of this, except by code\nproximity. Separately committable, if nobody objects.\n\n0002: Rename record_manifest_details_for_{file,wal_range}, per\nfeedback from Álvaro that the names were too generic. Separately\ncommittable, if nobody objects.\n\n0003: Move parse_manifest.c to src/common. No significant changes\nsince the previous version.\n\n0004: Add a new WAL summarizer process. No significant changes since\nthe previous version.\n\n0005: Incremental backup itself. Changes:\n- Remove UPLOAD_MANIFEST replication command and instead add\nINCREMENTAL_WAL_RANGE replication command.\n- In consequence, load_manifest.c which was included in the previous\npatch sets now moves to src/fe_utils and has some adjustments.\n- Actually document the new replication command which I overlooked previously.\n- Error out promptly if an incremental backup is attended with\nsummarize_wal = off.\n- Fix test in copy_file(). We should be willing to use the fast-path\nif a new checksum is *not* required, but the sense of the test was\ninverted in previous versions.\n- Fix some handling of the missing-manifest case in pg_combinebackup.\n- Fix terminology in a help message.\n\n0006: Add pg_walsummary tool. No significant changes since the previous version.\n\n0007: Test patch, not for commit.\n\nAs far as I know, the main commit-blockers here are (1) the timeout\nwhen waiting for WAL summarization is still hard-coded to 60 seconds\nand (2) the ubsan issue that Thomas hunted down, which would cause at\nleast the entire CF environment and maybe some portion of the BF to\nturn red if this were committed. That issue is in xlogreader rather\nthan in this patch set, at least in part, but it still needs fixing\nbefore this goes ahead. I also suspect that the slightly-more\nsignificant refactoring in this version may turn up a few new bugs in\nthe CF environment. I think once that the aforementioned items are\nsorted out, this could be committed through 0005, and 0001 and 0002\ncould be committed sooner. 0006 should have some tests written before\nit gets committed, but it doesn't necessarily have to be committed at\nthe exact same moment as everything else, and writing tests isn't that\nhard, either.\n\nOther loose ends that would be nice to tidy up at some point:\n\n- Incremental JSON parsing so we can handle huge manifests.\n\n- More documentation as proposed by Álvaro but I'm failing to find the\ndetails of his proposal right now.\n\n- More introspection facilities, maybe, or possibly rip some some\nstuff out of WalSummarizerCtl if we don't want it. This one might be a\nhigher priority to address before initial commit, but it's probably\nnot absolutely critical, either.\n\nI'm not quite sure how aggressively to press forward with getting\nstuff committed. I'd certainly rather debug as much as I can locally\nand via cfbot before turning the buildfarm pretty colors, but I think\nit generally works out better when larger features get pushed earlier\nin the cycle rather than in the mad frenzy right before feature\nfreeze, so I'm not inclined to be too patient, either.\n\n...Robert",
"msg_date": "Thu, 30 Nov 2023 09:33:04 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: trying again to get incremental backup"
},
{
"msg_contents": "On Thu, Nov 30, 2023 at 9:33 AM Robert Haas <[email protected]> wrote:\n> 0005: Incremental backup itself. Changes:\n> - Remove UPLOAD_MANIFEST replication command and instead add\n> INCREMENTAL_WAL_RANGE replication command.\n\nUnfortunately, I think this change is going to need to be reverted.\nJakub reported out a problem to me off-list, which I think boils down\nto this: take a full backup on the primary. create a database on the\nprimary. now take an incremental backup on the standby using the full\nbackup from the master as the prior backup. What happens at this point\ndepends on how far replay has progressed on the standby. I think there\nare three scenarios: (1) If replay has not yet reached a checkpoint\nlater than the one at which the full backup began, then taking the\nincremental backup will fail. This is correct, because it makes no\nsense to take an incremental backup that goes backwards in time, and\nit's pointless to take one that goes forwards but not far enough to\nreach the next checkpoint, as you won't save anything. (2) If replay\nhas progressed far enough that the redo pointer is now beyond the\nCREATE DATABASE record, then everything is fine. (3) But if the redo\npointer for the backup is a later checkpoint than the one from which\nthe full backup started, but also before the CREATE DATABASE record,\nthen the new database's files exist on disk, but are not mentioned in\nthe WAL summary, which covers all LSNs from the start of the prior\nbackup to the start of this one. Here, the start of the backup is\nbasically the LSN from which replay will start, and since the database\nwas created after that, those changes aren't in the WAL summary. This\nmeans that we think the file is unchanged since the prior backup, and\nso backup no blocks at all. But now we have an incremental file for a\nrelation for which no full file is present in the prior backup, and\nwe're in big trouble.\n\nIf my analysis is correct, this bug should be new in v12. In v11 and\nprior, I think that we always included every file that didn't appear\nin the prior manifest in full. I didn't really quite know why I was\ndoing that, which is why I was willing to rip it out and thus remove\nthe need for the manifest, but now I think it was actually preventing\nexactly this problem. This issue, in general, is files that get\ncreated after the start of the backup. By that time, the WAL summary\nthat drives the backup has already been built, so it doesn't know\nanything about the new files. That would be fine if we either (A)\nomitted those new files from the backup completely, since replay would\nrecreate them or (B) backed them up in full, so that there was nothing\nrelying on them being there in the earlier backup. But an incremental\nbackup of such a file is no good.\n\nThen I started worrying about whether there were problems in cases\nwhere a file was dropped and recreated with the same name. I *think*\nit's OK. If file F is dropped and recreated after being copied into\nthe full backup but before being copied into the incremental backup,\nthen there are basically two cases. First, F might be dropped before\nthe start LSN of the incremental backup; if so, we'll know from the\nWAL summary that the limit block is 0 and back up the whole thing.\nSecond, F might be dropped after the start LSN of the incremental\nbackup and before it's actually coped. 
In that case, we'll not know\nwhen backing up the file that it was dropped and recreated, so we'll\nback it up incrementally as if that hadn't happened. That's OK as long\nas reconstruction doesn't fail, because WAL replay will again drop and\nrecreate F. And I think reconstruction won't fail: blocks that are in\nthe incremental file will be taken from there, blocks in the prior\nbackup file will be taken from there, and blocks in neither place will\nbe zero-filled. The result is logically incoherent, but replay will\nnuke the file anyway, so whatever.\n\nIt bugs me a bit that we don't obey the WAL-before-data rule with file\ncreation, e.g. RelationCreateStorage does smgrcreate() and then\nlog_smgrcreate(). So in theory we could see a file on disk for which\nnothing has been logged yet; it could even happen that the file gets\ncreated before the start LSN of the backup and the log record gets\nwritten afterward. It seems like it would be far more comfortable to\nswap the order there, so that if it's on disk, it's definitely in the\nWAL. But I haven't yet been able to think of a scenario in which the\ncurrent ordering causes a real problem. If we backup a stray file in\nfull (or, hypothetically, if we skipped it entirely) then nothing will\nhappen that can't already happen today with full backup; any problems\nwe end up having are, I think, not new problems. It's only when we\nback up a file incrementally that we need to be careful, and the\nanalsysis is basically the same as before ... whatever we put into an\nincremental file will cause *something* to get reconstructed except\nwhen there's no prior file at all. Having the manifest for the prior\nbackup lets us avoid the incremental-with-no-prior-file scenario. And\nas long as *something* gets reconstructed, I think WAL replay will fix\nup the rest.\n\nConsidering all this, what I'm inclined to do is go and put\nUPLOAD_MANIFEST back, instead of INCREMENTAL_WAL_RANGE, and adjust\naccordingly. But first: does anybody see more problems here that I may\nhave missed?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 4 Dec 2023 15:58:02 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: trying again to get incremental backup"
},
{
"msg_contents": "On Mon, Dec 4, 2023 at 3:58 PM Robert Haas <[email protected]> wrote:\n> Considering all this, what I'm inclined to do is go and put\n> UPLOAD_MANIFEST back, instead of INCREMENTAL_WAL_RANGE, and adjust\n> accordingly. But first: does anybody see more problems here that I may\n> have missed?\n\nOK, so here's a new version with UPLOAD_MANIFEST put back. I wrote a\nlong comment explaining why that's believed to be necessary and\nsufficient. I committed 0001 and 0002 from the previous series also,\nsince it doesn't seem like anyone has further comments on those\nrenamings.\n\nThis version also improves (at least, IMHO) the way that we wait for\nWAL summarization to finish. Previously, you either caught up fully\nwithin 60 seconds or you died. I didn't like that, because it seemed\nlike some people would get timeouts when the operation was slowly\nprogressing and would eventually succeed. So what this version does\nis:\n\n- Every 10 seconds, it logs a warning saying that it's still waiting\nfor WAL summarization. That way, a human operator can understand\nwhat's happening easily, and cancel if they want.\n\n- If 60 seconds go by without the WAL summarizer ingesting even a\nsingle WAL record, it times out. That way, if the WAL summarizer is\ndead or totally stuck (e.g. debugger attached, hung I/O) the user\nwon't be left waiting forever even if they never cancel. But if it's\njust slow, it probably won't time out, and the operation should\neventually succeed.\n\nTo me, this seems like a reasonable compromise. It might be\nunreasonable if WAL summarization is proceeding at a very low but\nnon-zero rate. But it's hard for me to think of a situation where that\nwill happen, with the exception of when CPU or I/O are badly\noverloaded. But in those cases, the WAL generation rate is probably\nalso not that high, because apparently the system is paralyzed, so\nmaybe the wait won't even be that bad, especially given that\neverything else on the box should be super-slow too. Plus, even if we\ndid want to time out in such a case, it's hard to know how slow is too\nslow. In any event, I think most failures here are likely to be\ncomplete failures, where the WAL summarizer just doesn't, so the fact\nthat this times out in those cases seems to me to likely be as much as\nwe need to do here. But if someone sees a problem with this or has a\nclever idea how to make it better, I'm all ears.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com",
"msg_date": "Tue, 5 Dec 2023 13:10:44 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: trying again to get incremental backup"
},
{
"msg_contents": "On Tue, Dec 5, 2023 at 7:11 PM Robert Haas <[email protected]> wrote:\n\n[..v13 patchset]\n\nThe results with v13 patchset are following:\n\n* - requires checkpoint on primary when doing incremental on standby\nwhen it's too idle, this was explained by Robert in [1], something AKA\ntoo-fast-incremental backup due to testing-scenario:\n\ntest_across_wallevelminimal.sh - GOOD\ntest_many_incrementals_dbcreate.sh - GOOD\ntest_many_incrementals.sh - GOOD\ntest_multixact.sh - GOOD\ntest_reindex_and_vacuum_full.sh - GOOD\ntest_standby_incr_just_backup.sh - GOOD*\ntest_truncaterollback.sh - GOOD\ntest_unlogged_table.sh - GOOD\ntest_full_pri__incr_stby__restore_on_pri.sh - GOOD\ntest_full_pri__incr_stby__restore_on_stby.sh - GOOD\ntest_full_stby__incr_stby__restore_on_pri.sh - GOOD*\ntest_full_stby__incr_stby__restore_on_stby.sh - GOOD*\ntest_incr_on_standby_after_promote.sh - GOOD*\ntest_incr_after_timelineincrease.sh (pg_ctl stop, pg_resetwal -l\n00000002000000000000000E ..., pg_ctl start, pg_basebackup\n--incremental) - GOOD, I've got:\n pg_basebackup: error: could not initiate base backup: ERROR:\ntimeline 1 found in manifest, but not in this server's history\n Comment: I was wondering if it wouldn't make some sense to teach\npg_resetwal to actually delete all WAL summaries after any any\nWAL/controlfile alteration?\n\ntest_stuck_walsummary.sh (pkill -STOP walsumm) - GOOD:\n\n> This version also improves (at least, IMHO) the way that we wait for\n> WAL summarization to finish. Previously, you either caught up fully\n> within 60 seconds or you died. I didn't like that, because it seemed\n> like some people would get timeouts when the operation was slowly\n> progressing and would eventually succeed. So what this version does\n> is:\n\n WARNING: still waiting for WAL summarization through 0/A0000D8\nafter 10 seconds\n DETAIL: Summarization has reached 0/8000028 on disk and 0/80000F8\nin memory.\n[..]\n pg_basebackup: error: could not initiate base backup: ERROR: WAL\nsummarization is not progressing\n DETAIL: Summarization is needed through 0/A0000D8, but is stuck\nat 0/8000028 on disk and 0/80000F8 in memory.\n Comment2: looks good to me!\n\ntest_pending_2pc.sh - getting GOOD on most recent runs, but several\ntimes during early testing (probably due to my own mishaps), I've been\nhit by Abort/TRAP. I'm still investigating and trying to reproduce\nthose ones. TRAP: failed Assert(\"summary_end_lsn >=\nWalSummarizerCtl->pending_lsn\"), File: \"walsummarizer.c\", Line: 940\n\nRegards,\n-J.\n\n[1] - https://www.postgresql.org/message-id/CA%2BTgmoYuC27_ToGtTTNyHgpn_eJmdqrmhJ93bAbinkBtXsWHaA%40mail.gmail.com\n\n\n",
"msg_date": "Thu, 7 Dec 2023 15:42:07 +0100",
"msg_from": "Jakub Wartak <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: trying again to get incremental backup"
},
{
"msg_contents": "On Thu, Dec 7, 2023 at 9:42 AM Jakub Wartak\n<[email protected]> wrote:\n> Comment: I was wondering if it wouldn't make some sense to teach\n> pg_resetwal to actually delete all WAL summaries after any any\n> WAL/controlfile alteration?\n\nI thought that this was a good idea so I decided to go implement it,\nonly to discover that it was already part of the patch set ... did you\nfind some case where it doesn't work as expected? The code looks like\nthis:\n\n RewriteControlFile();\n KillExistingXLOG();\n KillExistingArchiveStatus();\n KillExistingWALSummaries();\n WriteEmptyXLOG();\n\n> test_pending_2pc.sh - getting GOOD on most recent runs, but several\n> times during early testing (probably due to my own mishaps), I've been\n> hit by Abort/TRAP. I'm still investigating and trying to reproduce\n> those ones. TRAP: failed Assert(\"summary_end_lsn >=\n> WalSummarizerCtl->pending_lsn\"), File: \"walsummarizer.c\", Line: 940\n\nI have a fix for this locally, but I'm going to hold off on publishing\na new version until either there's a few more things I can address all\nat once, or until Thomas commits the ubsan fix.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 7 Dec 2023 10:14:45 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: trying again to get incremental backup"
},
{
"msg_contents": "On Thu, Dec 7, 2023 at 4:15 PM Robert Haas <[email protected]> wrote:\n\nHi Robert,\n\n> On Thu, Dec 7, 2023 at 9:42 AM Jakub Wartak\n> <[email protected]> wrote:\n> > Comment: I was wondering if it wouldn't make some sense to teach\n> > pg_resetwal to actually delete all WAL summaries after any any\n> > WAL/controlfile alteration?\n>\n> I thought that this was a good idea so I decided to go implement it,\n> only to discover that it was already part of the patch set ... did you\n> find some case where it doesn't work as expected? The code looks like\n> this:\n\nAh, my bad, with a fresh mind and coffee the error message makes it\nclear and of course it did reset the summaries properly.\n\nWhile we are at it, maybe around the below in PrepareForIncrementalBackup()\n\n if (tlep[i] == NULL)\n ereport(ERROR,\n\n(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n errmsg(\"timeline %u found in\nmanifest, but not in this server's history\",\n range->tli)));\n\nwe could add\n\n errhint(\"You might need to start a new full backup instead of\nincremental one\")\n\n?\n\n> > test_pending_2pc.sh - getting GOOD on most recent runs, but several\n> > times during early testing (probably due to my own mishaps), I've been\n> > hit by Abort/TRAP. I'm still investigating and trying to reproduce\n> > those ones. TRAP: failed Assert(\"summary_end_lsn >=\n> > WalSummarizerCtl->pending_lsn\"), File: \"walsummarizer.c\", Line: 940\n>\n> I have a fix for this locally, but I'm going to hold off on publishing\n> a new version until either there's a few more things I can address all\n> at once, or until Thomas commits the ubsan fix.\n>\n\nGreat, I cannot get it to fail again today, it had to be some dirty\nstate of the testing env. BTW: Thomas has pushed that ubsan fix.\n\n-J.\n\n\n",
"msg_date": "Fri, 8 Dec 2023 11:02:11 +0100",
"msg_from": "Jakub Wartak <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: trying again to get incremental backup"
},
{
"msg_contents": "On Tue, Dec 5, 2023 at 11:40 PM Robert Haas <[email protected]> wrote:\n>\n> On Mon, Dec 4, 2023 at 3:58 PM Robert Haas <[email protected]> wrote:\n> > Considering all this, what I'm inclined to do is go and put\n> > UPLOAD_MANIFEST back, instead of INCREMENTAL_WAL_RANGE, and adjust\n> > accordingly. But first: does anybody see more problems here that I may\n> > have missed?\n>\n> OK, so here's a new version with UPLOAD_MANIFEST put back. I wrote a\n> long comment explaining why that's believed to be necessary and\n> sufficient. I committed 0001 and 0002 from the previous series also,\n> since it doesn't seem like anyone has further comments on those\n> renamings.\n\nI have done some testing on standby, but I am facing some issues,\nalthough things are working fine on the primary. As shown below test\n[1]standby is reporting some errors that manifest require WAL from\n0/60000F8, but this backup starts at 0/6000028. Then I tried to look\ninto the manifest file of the full backup and it shows contents as\nbelow[0]. Actually from this WARNING and ERROR, I am not clear what\nis the problem, I understand that full backup ends at \"0/60000F8\" so\nfor the next incremental backup we should be looking for a summary\nthat has WAL starting at \"0/60000F8\" and we do have those WALs. In\nfact, the error message is saying \"this backup starts at 0/6000028\"\nwhich is before \"0/60000F8\" so whats the issue?\n\n[0]\n\"WAL-Ranges\": [\n{ \"Timeline\": 1, \"Start-LSN\": \"0/6000028\", \"End-LSN\": \"0/60000F8\" }\n\n\n[1]\n-- test on primary\ndilipkumar@dkmac bin % ./pg_basebackup -D d\ndilipkumar@dkmac bin % ./pg_basebackup -D d1 -i d/backup_manifest\n\n-- cleanup the backup directory\ndilipkumar@dkmac bin % rm -rf d\ndilipkumar@dkmac bin % rm -rf d1\n\n--test on standby\ndilipkumar@dkmac bin % ./pg_basebackup -D d -p 5433\ndilipkumar@dkmac bin % ./pg_basebackup -D d1 -i d/backup_manifest -p 5433\n\nWARNING: aborting backup due to backend exiting before pg_backup_stop\nwas called\npg_basebackup: error: could not initiate base backup: ERROR: manifest\nrequires WAL from final timeline 1 ending at 0/60000F8, but this\nbackup starts at 0/6000028\npg_basebackup: removing data directory \"d1\"\n\n\n--\nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 11 Dec 2023 11:44:06 +0530",
"msg_from": "Dilip Kumar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: trying again to get incremental backup"
},
{
"msg_contents": "On Mon, Dec 11, 2023 at 11:44 AM Dilip Kumar <[email protected]> wrote:\n>\n> On Tue, Dec 5, 2023 at 11:40 PM Robert Haas <[email protected]> wrote:\n> >\n> > On Mon, Dec 4, 2023 at 3:58 PM Robert Haas <[email protected]> wrote:\n> > > Considering all this, what I'm inclined to do is go and put\n> > > UPLOAD_MANIFEST back, instead of INCREMENTAL_WAL_RANGE, and adjust\n> > > accordingly. But first: does anybody see more problems here that I may\n> > > have missed?\n> >\n> > OK, so here's a new version with UPLOAD_MANIFEST put back. I wrote a\n> > long comment explaining why that's believed to be necessary and\n> > sufficient. I committed 0001 and 0002 from the previous series also,\n> > since it doesn't seem like anyone has further comments on those\n> > renamings.\n>\n> I have done some testing on standby, but I am facing some issues,\n> although things are working fine on the primary. As shown below test\n> [1]standby is reporting some errors that manifest require WAL from\n> 0/60000F8, but this backup starts at 0/6000028. Then I tried to look\n> into the manifest file of the full backup and it shows contents as\n> below[0]. Actually from this WARNING and ERROR, I am not clear what\n> is the problem, I understand that full backup ends at \"0/60000F8\" so\n> for the next incremental backup we should be looking for a summary\n> that has WAL starting at \"0/60000F8\" and we do have those WALs. In\n> fact, the error message is saying \"this backup starts at 0/6000028\"\n> which is before \"0/60000F8\" so whats the issue?\n>\n> [0]\n> \"WAL-Ranges\": [\n> { \"Timeline\": 1, \"Start-LSN\": \"0/6000028\", \"End-LSN\": \"0/60000F8\" }\n>\n>\n> [1]\n> -- test on primary\n> dilipkumar@dkmac bin % ./pg_basebackup -D d\n> dilipkumar@dkmac bin % ./pg_basebackup -D d1 -i d/backup_manifest\n>\n> -- cleanup the backup directory\n> dilipkumar@dkmac bin % rm -rf d\n> dilipkumar@dkmac bin % rm -rf d1\n>\n> --test on standby\n> dilipkumar@dkmac bin % ./pg_basebackup -D d -p 5433\n> dilipkumar@dkmac bin % ./pg_basebackup -D d1 -i d/backup_manifest -p 5433\n>\n> WARNING: aborting backup due to backend exiting before pg_backup_stop\n> was called\n> pg_basebackup: error: could not initiate base backup: ERROR: manifest\n> requires WAL from final timeline 1 ending at 0/60000F8, but this\n> backup starts at 0/6000028\n> pg_basebackup: removing data directory \"d1\"\n\nJakub, pinged me offlist and pointed me to the thread[1] where it is\nalready explained so I think we can ignore this.\n\n[1] https://www.postgresql.org/message-id/CA%2BTgmoYuC27_ToGtTTNyHgpn_eJmdqrmhJ93bAbinkBtXsWHaA%40mail.gmail.com\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 11 Dec 2023 13:22:32 +0530",
"msg_from": "Dilip Kumar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: trying again to get incremental backup"
},
{
"msg_contents": "On Fri, Dec 8, 2023 at 5:02 AM Jakub Wartak\n<[email protected]> wrote:\n> While we are at it, maybe around the below in PrepareForIncrementalBackup()\n>\n> if (tlep[i] == NULL)\n> ereport(ERROR,\n>\n> (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n> errmsg(\"timeline %u found in\n> manifest, but not in this server's history\",\n> range->tli)));\n>\n> we could add\n>\n> errhint(\"You might need to start a new full backup instead of\n> incremental one\")\n>\n> ?\n\nI can't exactly say that such a hint would be inaccurate, but I think\nthe impulse to add it here is misguided. One of my design goals for\nthis system is to make it so that you never have to take a new\nincremental backup \"just because,\" not even in case of an intervening\ntimeline switch. So, all of the errors in this function are warning\nyou that you've done something that you really should not have done.\nIn this particular case, you've either (1) manually removed the\ntimeline history file, and not just any timeline history file but the\none for a timeline for a backup that you still intend to use as the\nbasis for taking an incremental backup or (2) tried to use a full\nbackup taken from one server as the basis for an incremental backup on\na completely different server that happens to share the same system\nidentifier, e.g. because you promoted two standbys derived from the\nsame original primary and then tried to use a full backup taken on one\nas the basis for an incremental backup taken on the other.\n\nThe scenario I was really concerned about when I wrote this test was\n(2), because that could lead to a corrupt restore. This test isn't\nstrong enough to prevent that completely, because two unrelated\nstandbys can branch onto the same new timelines at the same LSNs, and\nthen these checks can't tell that something bad has happened. However,\nthey can detect a useful subset of problem cases. And the solution is\nnot so much \"take a new full backup\" as \"keep straight which server is\nwhich.\" Likewise, in case (1), the relevant hint would be \"don't\nmanually remove timeline history files, and if you must, then at least\ndon't nuke timelines that you actually still care about.\"\n\n> > I have a fix for this locally, but I'm going to hold off on publishing\n> > a new version until either there's a few more things I can address all\n> > at once, or until Thomas commits the ubsan fix.\n> >\n>\n> Great, I cannot get it to fail again today, it had to be some dirty\n> state of the testing env. BTW: Thomas has pushed that ubsan fix.\n\nHuzzah, the cfbot likes the patch set now. Here's a new version with\nthe promised fix for your non-reproducible issue. Let's see whether\nyou and cfbot still like this version.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com",
"msg_date": "Mon, 11 Dec 2023 12:08:20 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: trying again to get incremental backup"
},
{
"msg_contents": "Hi Robert,\n\nOn Mon, Dec 11, 2023 at 6:08 PM Robert Haas <[email protected]> wrote:\n>\n> On Fri, Dec 8, 2023 at 5:02 AM Jakub Wartak\n> <[email protected]> wrote:\n> > While we are at it, maybe around the below in PrepareForIncrementalBackup()\n> >\n> > if (tlep[i] == NULL)\n> > ereport(ERROR,\n> >\n> > (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n> > errmsg(\"timeline %u found in\n> > manifest, but not in this server's history\",\n> > range->tli)));\n> >\n> > we could add\n> >\n> > errhint(\"You might need to start a new full backup instead of\n> > incremental one\")\n> >\n> > ?\n>\n> I can't exactly say that such a hint would be inaccurate, but I think\n> the impulse to add it here is misguided. One of my design goals for\n> this system is to make it so that you never have to take a new\n> incremental backup \"just because,\"\n\nDid you mean take a new full backup here?\n\n> not even in case of an intervening\n> timeline switch. So, all of the errors in this function are warning\n> you that you've done something that you really should not have done.\n> In this particular case, you've either (1) manually removed the\n> timeline history file, and not just any timeline history file but the\n> one for a timeline for a backup that you still intend to use as the\n> basis for taking an incremental backup or (2) tried to use a full\n> backup taken from one server as the basis for an incremental backup on\n> a completely different server that happens to share the same system\n> identifier, e.g. because you promoted two standbys derived from the\n> same original primary and then tried to use a full backup taken on one\n> as the basis for an incremental backup taken on the other.\n>\n\nOkay, but please consider two other possibilities:\n\n(3) I had a corrupted DB where I've fixed it by running pg_resetwal\nand some cronjob just a day later attempted to take incremental and\nfailed with that error.\n\n(4) I had pg_upgraded (which calls pg_resetwal on fresh initdb\ndirectory) the DB where I had cronjob that just failed with this error\n\nI bet that (4) is going to happen more often than (1), (2) , which\nmight trigger users to complain on forums, support tickets.\n\n> > > I have a fix for this locally, but I'm going to hold off on publishing\n> > > a new version until either there's a few more things I can address all\n> > > at once, or until Thomas commits the ubsan fix.\n> > >\n> >\n> > Great, I cannot get it to fail again today, it had to be some dirty\n> > state of the testing env. BTW: Thomas has pushed that ubsan fix.\n>\n> Huzzah, the cfbot likes the patch set now. Here's a new version with\n> the promised fix for your non-reproducible issue. Let's see whether\n> you and cfbot still like this version.\n\nLGTM, all quick tests work from my end too. BTW: I have also scheduled\nthe long/large pgbench -s 14000 (~200GB?) - multiple day incremental\ntest. I'll let you know how it went.\n\n-J.\n\n\n",
"msg_date": "Wed, 13 Dec 2023 11:39:22 +0100",
"msg_from": "Jakub Wartak <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: trying again to get incremental backup"
},
{
"msg_contents": "On Wed, Dec 13, 2023 at 5:39 AM Jakub Wartak\n<[email protected]> wrote:\n> > I can't exactly say that such a hint would be inaccurate, but I think\n> > the impulse to add it here is misguided. One of my design goals for\n> > this system is to make it so that you never have to take a new\n> > incremental backup \"just because,\"\n>\n> Did you mean take a new full backup here?\n\nYes, apologies for the typo.\n\n> > not even in case of an intervening\n> > timeline switch. So, all of the errors in this function are warning\n> > you that you've done something that you really should not have done.\n> > In this particular case, you've either (1) manually removed the\n> > timeline history file, and not just any timeline history file but the\n> > one for a timeline for a backup that you still intend to use as the\n> > basis for taking an incremental backup or (2) tried to use a full\n> > backup taken from one server as the basis for an incremental backup on\n> > a completely different server that happens to share the same system\n> > identifier, e.g. because you promoted two standbys derived from the\n> > same original primary and then tried to use a full backup taken on one\n> > as the basis for an incremental backup taken on the other.\n> >\n>\n> Okay, but please consider two other possibilities:\n>\n> (3) I had a corrupted DB where I've fixed it by running pg_resetwal\n> and some cronjob just a day later attempted to take incremental and\n> failed with that error.\n>\n> (4) I had pg_upgraded (which calls pg_resetwal on fresh initdb\n> directory) the DB where I had cronjob that just failed with this error\n>\n> I bet that (4) is going to happen more often than (1), (2) , which\n> might trigger users to complain on forums, support tickets.\n\nHmm. In case (4), I was thinking that you'd get a complaint about the\ndatabase system identifier not matching. I'm not actually sure that's\nwhat would happen, though, now that you mention it.\n\nIn case (3), I think you would get an error about missing WAL summary files.\n\n> > Huzzah, the cfbot likes the patch set now. Here's a new version with\n> > the promised fix for your non-reproducible issue. Let's see whether\n> > you and cfbot still like this version.\n>\n> LGTM, all quick tests work from my end too. BTW: I have also scheduled\n> the long/large pgbench -s 14000 (~200GB?) - multiple day incremental\n> test. I'll let you know how it went.\n\nAwesome, thank you so much.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 13 Dec 2023 08:16:20 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: trying again to get incremental backup"
},
{
"msg_contents": "Hi Robert,\n\nOn Wed, Dec 13, 2023 at 2:16 PM Robert Haas <[email protected]> wrote:\n>\n >\n> > > not even in case of an intervening\n> > > timeline switch. So, all of the errors in this function are warning\n> > > you that you've done something that you really should not have done.\n> > > In this particular case, you've either (1) manually removed the\n> > > timeline history file, and not just any timeline history file but the\n> > > one for a timeline for a backup that you still intend to use as the\n> > > basis for taking an incremental backup or (2) tried to use a full\n> > > backup taken from one server as the basis for an incremental backup on\n> > > a completely different server that happens to share the same system\n> > > identifier, e.g. because you promoted two standbys derived from the\n> > > same original primary and then tried to use a full backup taken on one\n> > > as the basis for an incremental backup taken on the other.\n> > >\n> >\n> > Okay, but please consider two other possibilities:\n> >\n> > (3) I had a corrupted DB where I've fixed it by running pg_resetwal\n> > and some cronjob just a day later attempted to take incremental and\n> > failed with that error.\n> >\n> > (4) I had pg_upgraded (which calls pg_resetwal on fresh initdb\n> > directory) the DB where I had cronjob that just failed with this error\n> >\n> > I bet that (4) is going to happen more often than (1), (2) , which\n> > might trigger users to complain on forums, support tickets.\n>\n> Hmm. In case (4), I was thinking that you'd get a complaint about the\n> database system identifier not matching. I'm not actually sure that's\n> what would happen, though, now that you mention it.\n>\n\nI've played with with initdb/pg_upgrade (17->17) and i don't get DBID\nmismatch (of course they do differ after initdb), but i get this\ninstead:\n\n $ pg_basebackup -c fast -D /tmp/incr2.after.upgrade -p 5432\n--incremental /tmp/incr1.before.upgrade/backup_manifest\nWARNING: aborting backup due to backend exiting before pg_backup_stop\nwas called\npg_basebackup: error: could not initiate base backup: ERROR: timeline\n2 found in manifest, but not in this server's history\npg_basebackup: removing data directory \"/tmp/incr2.after.upgrade\"\n\nAlso in the manifest I don't see DBID ?\nMaybe it's a nuisance and all I'm trying to see is that if an\nautomated cronjob with pg_basebackup --incremental hits a freshly\nupgraded cluster, that error message without errhint() is going to\nscare some Junior DBAs.\n\n> > LGTM, all quick tests work from my end too. BTW: I have also scheduled\n> > the long/large pgbench -s 14000 (~200GB?) - multiple day incremental\n> > test. I'll let you know how it went.\n>\n> Awesome, thank you so much.\n\nOK, so pgbench -i -s 14440 and pgbench -P 1 -R 100 -c 8 -T 259200 did\ngenerate pretty large incrementals (so I had to abort it due to lack\nof space, I was expecting to see smaller incrementals so it took too\nmuch space). I initally suspected that the problem lies in the normal\ndistribution of `\\set aid random(1, 100000 * :scale)` for tpcbb that\nUPDATEs on big pgbench_accounts.\n\n$ du -sm /backups/backups/* /backups/archive/\n216205 /backups/backups/full\n215207 /backups/backups/incr.1\n216706 /backups/backups/incr.2\n102273 /backups/archive/\n\nSo I verified the recoverability yesterday anyway - the\npg_combinebackup \"full incr.1 incr.2\" took 44 minutes and later\narchive wal recovery and promotion SUCCEED. 
The 8-way parallel seqscan\nfoir sum(abalance) on the pgbench_accounts and other tables worked\nfine. The pg_combinebackup was using 15-20% CPU (mostly on %sys),\nwhile performing mostly 60-80MB/s separately for both reads and writes\n(it's slow, but it's due to maxed out sequence I/O of the Premium on a\nsmall SSD on Azure).\n\nSo i've launched another improved test (to force more localized\nUPDATEs) to see the more real-world space-effectiveness of the\nincremental backup:\n\n\\set aid random_exponential(1, 100000 * :scale, 8)\n\\set bid random(1, 1 * :scale)\n\\set tid random(1, 10 * :scale)\n\\set delta random(-5000, 5000)\nBEGIN;\nUPDATE pgbench_accounts SET abalance = abalance + :delta WHERE aid = :aid;\nINSERT INTO pgbench_history (tid\n, bid, aid, delta, mtime) VALUES (:tid, :bid, :aid, :delta, CURRENT_TIMESTAMP);\nEND;\n\nBut then... (and i have verified the low-IDs for :aid above).. same\nhas happened:\n\nbackups/backups$ du -sm /backups/backups/*\n210229 /backups/backups/full\n208299 /backups/backups/incr.1\n208351 /backups/backups/incr.2\n\n# pgbench_accounts has relfilenodeid 16486\npostgres@jw-test-1:/backups/backups$ for L in 5 10 15 30 100 161 173\n174 175 ; do md5sum full/base/5/16486.$L ./incr.1/base/5/16486.$L\n./incr.2/base/5/16486.$L /var/lib/postgres/17/data/base/5/16486.$L ;\necho; done\n005c6bbb40fca3c1a0a819376ef0e793 full/base/5/16486.5\n005c6bbb40fca3c1a0a819376ef0e793 ./incr.1/base/5/16486.5\n005c6bbb40fca3c1a0a819376ef0e793 ./incr.2/base/5/16486.5\n005c6bbb40fca3c1a0a819376ef0e793 /var/lib/postgres/17/data/base/5/16486.5\n\n[.. all the checksums match (!) for the above $L..]\n\nc5117a213253035da5e5ee8a80c3ee3d full/base/5/16486.173\nc5117a213253035da5e5ee8a80c3ee3d ./incr.1/base/5/16486.173\nc5117a213253035da5e5ee8a80c3ee3d ./incr.2/base/5/16486.173\nc5117a213253035da5e5ee8a80c3ee3d /var/lib/postgres/17/data/base/5/16486.173\n\n47ee6b18d7f8e40352598d194b9a3c8a full/base/5/16486.174\n47ee6b18d7f8e40352598d194b9a3c8a ./incr.1/base/5/16486.174\n47ee6b18d7f8e40352598d194b9a3c8a ./incr.2/base/5/16486.174\n47ee6b18d7f8e40352598d194b9a3c8a /var/lib/postgres/17/data/base/5/16486.174\n\n82dfeba58b4a1031ac12c23f9559a330 full/base/5/16486.175\n21a8ac1e6fef3cf0b34546c41d59b2cc ./incr.1/base/5/16486.175\n2c3d89c612b2f97d575a55c6c0204d0b ./incr.2/base/5/16486.175\n73367d44d76e98276d3a6bbc14bb31f1 /var/lib/postgres/17/data/base/5/16486.175\n\nSo to me, it looks like it copied anyway 174 out of 175 files lowering\nthe effectiveness of that incremental backup to 0% .The commands to\ngenerate those incr backups were:\npg_basebackup -v -P -c fast -D /backups/backups/incr.1\n--incremental=/backups/backups/full/backup_manifest\nsleep 4h\npg_basebackup -v -P -c fast -D /backups/backups/incr.2\n--incremental=/backups/backups/incr1/backup_manifest\n\nThe incrementals are being generated , but just for the first (0)\nsegment of the relation?\n\n/backups/backups$ ls -l incr.2/base/5 | grep INCR\n-rw------- 1 postgres postgres 12 Dec 14 21:33 INCREMENTAL.112\n-rw------- 1 postgres postgres 12 Dec 14 21:01 INCREMENTAL.113\n-rw------- 1 postgres postgres 12 Dec 14 21:36 INCREMENTAL.1247\n-rw------- 1 postgres postgres 12 Dec 14 21:38 INCREMENTAL.1247_vm\n[..note, no INCREMENTAL.$int.$segment files]\n-rw------- 1 postgres postgres 12 Dec 14 21:24 INCREMENTAL.6238\n-rw------- 1 postgres postgres 12 Dec 14 21:17 INCREMENTAL.6239\n-rw------- 1 postgres postgres 12 Dec 14 21:55 INCREMENTAL.827\n\n# 16486 is pgbench_accounts\n/backups/backups$ ls -l incr.2/base/5/*16486* | grep INCR\n-rw------- 
1 postgres postgres 14613480 Dec 14 21:00\nincr.2/base/5/INCREMENTAL.16486\n-rw------- 1 postgres postgres 12 Dec 14 21:52\nincr.2/base/5/INCREMENTAL.16486_vm\n/backups/backups$\n\n/backups/backups$ find incr* -name INCREMENTAL.* | wc -l\n1342\n/backups/backups$ find incr* -name INCREMENTAL.*_* | wc -l # VM or FSM\n236\n/backups/backups$ find incr* -name INCREMENTAL.*.* | wc -l # not a\nsingle >1GB single incremental relation\n0\n\nI'm quickly passing info and I haven't really looked at the code yet ,\nbut it should be somewhere around GetFileBackupMethod() and\nreproducible easily with that configure --with-segsize-blocks=X\nswitch.\n\n-J.\n\n\n",
"msg_date": "Fri, 15 Dec 2023 11:36:26 +0100",
"msg_from": "Jakub Wartak <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: trying again to get incremental backup"
},
{
"msg_contents": "I have a couple of quick fixes here.\n\nThe first fixes up some things in nls.mk related to a file move. The \nsecond is some cleanup because some function you are using has been \nremoved in the meantime; you probably found that yourself while rebasing.\n\nThe pg_walsummary patch doesn't have a nls.mk, but you also comment that \nit doesn't have tests yet, so I assume it's not considered complete yet \nanyway.",
"msg_date": "Fri, 15 Dec 2023 12:53:40 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: trying again to get incremental backup"
},
{
"msg_contents": "A separate bikeshedding topic: The GUC \"summarize_wal\", could that be \n\"wal_something\" instead? (wal_summarize? wal_summarizer?) It would be \nnice if these settings names group together a bit, both with existing \nwal_* ones and also with the new ones you are adding \n(wal_summary_keep_time).\n\n\n\n",
"msg_date": "Fri, 15 Dec 2023 12:58:17 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: trying again to get incremental backup"
},
{
"msg_contents": "Another set of comments, about the patch that adds pg_combinebackup:\n\nMake sure all the options are listed in a consistent order. We have \nlately changed everything to be alphabetical. This includes:\n\n- reference page pg_combinebackup.sgml\n\n- long_options listing\n\n- getopt_long() argument\n\n- subsequent switch\n\n- (--help output, but it looks ok as is)\n\nAlso, in pg_combinebackup.sgml, the option --sync-method is listed as if \nit does not take an argument, but it does.\n\n\n\n",
"msg_date": "Mon, 18 Dec 2023 10:10:51 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: trying again to get incremental backup"
},
{
"msg_contents": "On Fri, Dec 15, 2023 at 6:53 AM Peter Eisentraut <[email protected]> wrote:\n> The first fixes up some things in nls.mk related to a file move. The\n> second is some cleanup because some function you are using has been\n> removed in the meantime; you probably found that yourself while rebasing.\n\nIncorporated these. As you guessed,\nMemoryContextResetAndDeleteChildren -> MemoryContextReset had already\nbeen done locally.\n\n> The pg_walsummary patch doesn't have a nls.mk, but you also comment that\n> it doesn't have tests yet, so I assume it's not considered complete yet\n> anyway.\n\nI think this was more of a case of me just not realizing that I should\nadd that. I'll add something simple to the next version, but I'm not\nvery good at this NLS stuff.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 18 Dec 2023 13:27:42 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: trying again to get incremental backup"
},
{
"msg_contents": "On Fri, Dec 15, 2023 at 6:58 AM Peter Eisentraut <[email protected]> wrote:\n> A separate bikeshedding topic: The GUC \"summarize_wal\", could that be\n> \"wal_something\" instead? (wal_summarize? wal_summarizer?) It would be\n> nice if these settings names group together a bit, both with existing\n> wal_* ones and also with the new ones you are adding\n> (wal_summary_keep_time).\n\nYeah, this is highly debatable, so bikeshed away. IMHO, the question\nhere is whether we care more about (1) having the name of the GUC\nsound nice grammatically or (2) having the GUC begin with the same\nstring as other, related GUCs. I think that Tom Lane tends to prefer\nthe former, and probably some other people do too, while some other\npeople tend to prefer the latter. Ideally it would be possible to\nsatisfy both goals at once here, but everything I thought about that\nstarted with \"wal\" sounded too awkward for me to like it; hence the\ncurrent choice of name. But if there's consensus on something else, so\nbe it.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 18 Dec 2023 13:39:33 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: trying again to get incremental backup"
},
{
"msg_contents": "On Mon, Dec 18, 2023 at 4:10 AM Peter Eisentraut <[email protected]> wrote:\n> Another set of comments, about the patch that adds pg_combinebackup:\n>\n> Make sure all the options are listed in a consistent order. We have\n> lately changed everything to be alphabetical. This includes:\n>\n> - reference page pg_combinebackup.sgml\n>\n> - long_options listing\n>\n> - getopt_long() argument\n>\n> - subsequent switch\n>\n> - (--help output, but it looks ok as is)\n>\n> Also, in pg_combinebackup.sgml, the option --sync-method is listed as if\n> it does not take an argument, but it does.\n\nI've attempted to clean this stuff up in the attached version. This\nversion also includes a fix for the bug found by Jakub that caused\nthings to not work properly for segment files beyond the first for any\nparticular relation, which turns out to be a really stupid mistake in\nmy earlier commit 025584a168a4b3002e193.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com",
"msg_date": "Mon, 18 Dec 2023 13:58:17 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: trying again to get incremental backup"
},
{
"msg_contents": "On Fri, Dec 15, 2023 at 5:36 AM Jakub Wartak\n<[email protected]> wrote:\n> I've played with with initdb/pg_upgrade (17->17) and i don't get DBID\n> mismatch (of course they do differ after initdb), but i get this\n> instead:\n>\n> $ pg_basebackup -c fast -D /tmp/incr2.after.upgrade -p 5432\n> --incremental /tmp/incr1.before.upgrade/backup_manifest\n> WARNING: aborting backup due to backend exiting before pg_backup_stop\n> was called\n> pg_basebackup: error: could not initiate base backup: ERROR: timeline\n> 2 found in manifest, but not in this server's history\n> pg_basebackup: removing data directory \"/tmp/incr2.after.upgrade\"\n>\n> Also in the manifest I don't see DBID ?\n> Maybe it's a nuisance and all I'm trying to see is that if an\n> automated cronjob with pg_basebackup --incremental hits a freshly\n> upgraded cluster, that error message without errhint() is going to\n> scare some Junior DBAs.\n\nYeah. I think we should add the system identifier to the manifest, but\nI think that should be left for a future project, as I don't think the\nlack of it is a good reason to stop all progress here. When we have\nthat, we can give more reliable error messages about system mismatches\nat an earlier stage. Unfortunately, I don't think that the timeline\nmessages you're seeing here are going to apply in every case: suppose\nyou have two unrelated servers that are both on timeline 1. I think\nyou could use a base backup from one of those servers and use it as\nthe basis for the incremental from the other, and I think that if you\ndid it right you might fail to hit any sanity check that would block\nthat. pg_combinebackup will realize there's a problem, because it has\nthe whole cluster to work with, not just the manifest, and will notice\nthe mismatching system identifiers, but that's kind of late to find\nout that you made a big mistake. However, right now, it's the best we\ncan do.\n\n> The incrementals are being generated , but just for the first (0)\n> segment of the relation?\n\nI committed the first two patches from the series I posted yesterday.\nThe first should fix this, and the second relocates parse_manifest.c.\nThat patch hasn't changed in a while and seems unlikely to attract\nmajor objections. There's no real reason to commit it until we're\nready to move forward with the main patches, but I think we're very\nclose to that now, so I did.\n\nHere's a rebase for cfbot.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com",
"msg_date": "Tue, 19 Dec 2023 15:36:03 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: trying again to get incremental backup"
},
{
"msg_contents": "Hi Robert,\n\nOn Tue, Dec 19, 2023 at 9:36 PM Robert Haas <[email protected]> wrote:\n>\n> On Fri, Dec 15, 2023 at 5:36 AM Jakub Wartak\n> <[email protected]> wrote:\n> > I've played with with initdb/pg_upgrade (17->17) and i don't get DBID\n> > mismatch (of course they do differ after initdb), but i get this\n> > instead:\n> >\n> > $ pg_basebackup -c fast -D /tmp/incr2.after.upgrade -p 5432\n> > --incremental /tmp/incr1.before.upgrade/backup_manifest\n> > WARNING: aborting backup due to backend exiting before pg_backup_stop\n> > was called\n> > pg_basebackup: error: could not initiate base backup: ERROR: timeline\n> > 2 found in manifest, but not in this server's history\n> > pg_basebackup: removing data directory \"/tmp/incr2.after.upgrade\"\n> >\n> > Also in the manifest I don't see DBID ?\n> > Maybe it's a nuisance and all I'm trying to see is that if an\n> > automated cronjob with pg_basebackup --incremental hits a freshly\n> > upgraded cluster, that error message without errhint() is going to\n> > scare some Junior DBAs.\n>\n> Yeah. I think we should add the system identifier to the manifest, but\n> I think that should be left for a future project, as I don't think the\n> lack of it is a good reason to stop all progress here. When we have\n> that, we can give more reliable error messages about system mismatches\n> at an earlier stage. Unfortunately, I don't think that the timeline\n> messages you're seeing here are going to apply in every case: suppose\n> you have two unrelated servers that are both on timeline 1. I think\n> you could use a base backup from one of those servers and use it as\n> the basis for the incremental from the other, and I think that if you\n> did it right you might fail to hit any sanity check that would block\n> that. pg_combinebackup will realize there's a problem, because it has\n> the whole cluster to work with, not just the manifest, and will notice\n> the mismatching system identifiers, but that's kind of late to find\n> out that you made a big mistake. However, right now, it's the best we\n> can do.\n>\n\nOK, understood.\n\n> > The incrementals are being generated , but just for the first (0)\n> > segment of the relation?\n>\n> I committed the first two patches from the series I posted yesterday.\n> The first should fix this, and the second relocates parse_manifest.c.\n> That patch hasn't changed in a while and seems unlikely to attract\n> major objections. There's no real reason to commit it until we're\n> ready to move forward with the main patches, but I think we're very\n> close to that now, so I did.\n>\n> Here's a rebase for cfbot.\n\nthe v15 patchset (posted yesterday) test results are GOOD:\n\n1. make check-world - GOOD\n2. cfbot was GOOD\n3. the devel/master bug present in\nparse_filename_for_nontemp_relation() seems to be gone (in local\ntesting)\n4. 
some further tests:\ntest_across_wallevelminimal.sh - GOOD\ntest_incr_after_timelineincrease.sh - GOOD\ntest_incr_on_standby_after_promote.sh - GOOD\ntest_many_incrementals_dbcreate.sh - GOOD\ntest_many_incrementals.sh - GOOD\ntest_multixact.sh - GOOD\ntest_pending_2pc.sh - GOOD\ntest_reindex_and_vacuum_full.sh - GOOD\ntest_repro_assert.sh\ntest_standby_incr_just_backup.sh - GOOD\ntest_stuck_walsum.sh - GOOD\ntest_truncaterollback.sh - GOOD\ntest_unlogged_table.sh - GOOD\ntest_full_pri__incr_stby__restore_on_pri.sh - GOOD\ntest_full_pri__incr_stby__restore_on_stby.sh - GOOD\ntest_full_stby__incr_stby__restore_on_pri.sh - GOOD\ntest_full_stby__incr_stby__restore_on_stby.sh - GOOD\n\n5. the more real-world pgbench test with localized segment writes\nusigng `\\set aid random_exponential...` [1] indicates much greater\nefficiency in terms of backup space use now, du -sm shows:\n\n210229 /backups/backups/full\n250 /backups/backups/incr.1\n255 /backups/backups/incr.2\n[..]\n348 /backups/backups/incr.13\n408 /backups/backups/incr.14 // latest(20th of Dec on 10:40)\n6673 /backups/archive/\n\nThe DB size was as reported by \\l+ 205GB.\nThat pgbench was running for ~27h (19th Dec 08:39 -> 20th Dec 11:30)\nwith slow 100 TPS (-R), so no insane amounts of WAL.\nTime to reconstruct 14 chained incremental backups was 45mins\n(pg_combinebackup -o /var/lib/postgres/17/data /backups/backups/full\n/backups/backups/incr.1 (..) /backups/backups/incr.14).\nDB after recovering was OK and working fine.\n\n-J.\n\n\n",
"msg_date": "Wed, 20 Dec 2023 14:10:42 +0100",
"msg_from": "Jakub Wartak <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: trying again to get incremental backup"
},
{
"msg_contents": "On Wed, Dec 20, 2023 at 8:11 AM Jakub Wartak\n<[email protected]> wrote:\n> the v15 patchset (posted yesterday) test results are GOOD:\n\nAll right. I committed the main two patches, dropped the\nfor-testing-only patch, and added a simple test to the remaining\npg_walsummary patch. That needs more work, but here's what I have as\nof now.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com",
"msg_date": "Wed, 20 Dec 2023 15:56:00 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: trying again to get incremental backup"
},
{
"msg_contents": "Hello Robert,\n\n20.12.2023 23:56, Robert Haas wrote:\n> On Wed, Dec 20, 2023 at 8:11 AM Jakub Wartak\n> <[email protected]> wrote:\n>> the v15 patchset (posted yesterday) test results are GOOD:\n> All right. I committed the main two patches, dropped the\n> for-testing-only patch, and added a simple test to the remaining\n> pg_walsummary patch. That needs more work, but here's what I have as\n> of now.\n\nI've found several typos/inconsistencies introduced with 174c48050 and\ndc2123400. Maybe you would want to fix them, while on it?:\ns/arguent/argument/;\ns/BlkRefTableEntry/BlockRefTableEntry/;\ns/BlockRefTablEntry/BlockRefTableEntry/;\ns/Caonicalize/Canonicalize/;\ns/Checksum_Algorithm/Checksum-Algorithm/;\ns/corresonding/corresponding/;\ns/differenly/differently/;\ns/excessing/excessive/;\ns/ exta / extra /;\ns/hexademical/hexadecimal/;\ns/initally/initially/;\ns/MAXGPATH/MAXPGPATH/;\ns/overrreacting/overreacting/;\ns/old_meanifest_file/old_manifest_file/;\ns/pg_cominebackup/pg_combinebackup/;\ns/pg_tblpc/pg_tblspc/;\ns/pointrs/pointers/;\ns/Recieve/Receive/;\ns/recieved/received/;\ns/ recod / record /;\ns/ recods / records /;\ns/substntially/substantially/;\ns/sumamry/summary/;\ns/summry/summary/;\ns/synchronizaton/synchronization/;\ns/sytem/system/;\ns/withot/without/;\ns/Woops/Whoops/;\ns/xlograder/xlogreader/;\n\nAlso, a comment above MaybeRemoveOldWalSummaries() basically repeats a\ncomment above redo_pointer_at_last_summary_removal declaration, but\nperhaps it should say about removing summaries instead?\n\nBest regards,\nAlexander\n\n\n",
"msg_date": "Thu, 21 Dec 2023 07:00:01 +0300",
"msg_from": "Alexander Lakhin <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: trying again to get incremental backup"
},
{
"msg_contents": "On Wed, Dec 20, 2023 at 11:00 PM Alexander Lakhin <[email protected]> wrote:\n> I've found several typos/inconsistencies introduced with 174c48050 and\n> dc2123400. Maybe you would want to fix them, while on it?:\n\nThat's an impressively long list of mistakes in something I thought\nI'd been careful about. Sigh.\n\nI don't suppose you could provide these corrections in the form of a\npatch? I don't really want to run these sed commands across the entire\ntree and then try to figure out what's what...\n\n> Also, a comment above MaybeRemoveOldWalSummaries() basically repeats a\n> comment above redo_pointer_at_last_summary_removal declaration, but\n> perhaps it should say about removing summaries instead?\n\nWow, yeah. Thanks, will fix.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 21 Dec 2023 07:07:03 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: trying again to get incremental backup"
},
{
"msg_contents": "21.12.2023 15:07, Robert Haas wrote:\n> On Wed, Dec 20, 2023 at 11:00 PM Alexander Lakhin <[email protected]> wrote:\n>> I've found several typos/inconsistencies introduced with 174c48050 and\n>> dc2123400. Maybe you would want to fix them, while on it?:\n> That's an impressively long list of mistakes in something I thought\n> I'd been careful about. Sigh.\n>\n> I don't suppose you could provide these corrections in the form of a\n> patch? I don't really want to run these sed commands across the entire\n> tree and then try to figure out what's what...\n\nPlease look at the attached patch; it corrects all 29 items (\"recods\"\nfixed in two places), but maybe you find some substitutions wrong...\n\nI've also observed that those commits introduced new warnings:\n$ CC=gcc-12 CPPFLAGS=\"-Wtype-limits\" ./configure -q && make -s -j8\nreconstruct.c: In function ‘read_bytes’:\nreconstruct.c:511:24: warning: comparison of unsigned expression in ‘< 0’ is always false [-Wtype-limits]\n 511 | if (rb < 0)\n | ^\nreconstruct.c: In function ‘write_reconstructed_file’:\nreconstruct.c:650:40: warning: comparison of unsigned expression in ‘< 0’ is always false [-Wtype-limits]\n 650 | if (rb < 0)\n | ^\nreconstruct.c:662:32: warning: comparison of unsigned expression in ‘< 0’ is always false [-Wtype-limits]\n 662 | if (wb < 0)\n\nThere are also two deadcode.DeadStores complaints from clang. First one is\nabout:\n /*\n * Align the wait time to prevent drift. This doesn't really matter,\n * but we'd like the warnings about how long we've been waiting to say\n * 10 seconds, 20 seconds, 30 seconds, 40 seconds ... without ever\n * drifting to something that is not a multiple of ten.\n */\n timeout_in_ms -=\n TimestampDifferenceMilliseconds(current_time, initial_time) %\n timeout_in_ms;\nIt looks like this timeout is really not used.\n\nAnd the minor one (similar to many existing, maybe doesn't deserve fixing):\nwalsummarizer.c:808:5: warning: Value stored to 'summary_end_lsn' is never read [deadcode.DeadStores]\n summary_end_lsn = private_data->read_upto;\n ^ ~~~~~~~~~~~~~~~~~~~~~~~\n\n>> Also, a comment above MaybeRemoveOldWalSummaries() basically repeats a\n>> comment above redo_pointer_at_last_summary_removal declaration, but\n>> perhaps it should say about removing summaries instead?\n> Wow, yeah. Thanks, will fix.\n\nThank you for paying attention to it!\n\nBest regards,\nAlexander",
"msg_date": "Thu, 21 Dec 2023 18:00:00 +0300",
"msg_from": "Alexander Lakhin <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: trying again to get incremental backup"
},
{
"msg_contents": "On Thu, Dec 21, 2023 at 10:00 AM Alexander Lakhin <[email protected]> wrote:\n> Please look at the attached patch; it corrects all 29 items (\"recods\"\n> fixed in two places), but maybe you find some substitutions wrong...\n\nThanks, committed with a few additions.\n\n> I've also observed that those commits introduced new warnings:\n> $ CC=gcc-12 CPPFLAGS=\"-Wtype-limits\" ./configure -q && make -s -j8\n> reconstruct.c: In function ‘read_bytes’:\n> reconstruct.c:511:24: warning: comparison of unsigned expression in ‘< 0’ is always false [-Wtype-limits]\n> 511 | if (rb < 0)\n> | ^\n> reconstruct.c: In function ‘write_reconstructed_file’:\n> reconstruct.c:650:40: warning: comparison of unsigned expression in ‘< 0’ is always false [-Wtype-limits]\n> 650 | if (rb < 0)\n> | ^\n> reconstruct.c:662:32: warning: comparison of unsigned expression in ‘< 0’ is always false [-Wtype-limits]\n> 662 | if (wb < 0)\n\nOops. I think the variables should be type int. See attached.\n\n> There are also two deadcode.DeadStores complaints from clang. First one is\n> about:\n> /*\n> * Align the wait time to prevent drift. This doesn't really matter,\n> * but we'd like the warnings about how long we've been waiting to say\n> * 10 seconds, 20 seconds, 30 seconds, 40 seconds ... without ever\n> * drifting to something that is not a multiple of ten.\n> */\n> timeout_in_ms -=\n> TimestampDifferenceMilliseconds(current_time, initial_time) %\n> timeout_in_ms;\n> It looks like this timeout is really not used.\n\nOops. It should be. See attached.\n\n> And the minor one (similar to many existing, maybe doesn't deserve fixing):\n> walsummarizer.c:808:5: warning: Value stored to 'summary_end_lsn' is never read [deadcode.DeadStores]\n> summary_end_lsn = private_data->read_upto;\n> ^ ~~~~~~~~~~~~~~~~~~~~~~~\n\nIt kind of surprises me that this is dead, but it seems best to keep\nit there to be on the safe side, in case some change to the logic\nrenders it not dead in the future.\n\n> >> Also, a comment above MaybeRemoveOldWalSummaries() basically repeats a\n> >> comment above redo_pointer_at_last_summary_removal declaration, but\n> >> perhaps it should say about removing summaries instead?\n> > Wow, yeah. Thanks, will fix.\n>\n> Thank you for paying attention to it!\n\nI'll fix this next.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com",
"msg_date": "Thu, 21 Dec 2023 15:43:00 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: trying again to get incremental backup"
},
{
"msg_contents": "21.12.2023 23:43, Robert Haas wrote:\n>> There are also two deadcode.DeadStores complaints from clang. First one is\n>> about:\n>> /*\n>> * Align the wait time to prevent drift. This doesn't really matter,\n>> * but we'd like the warnings about how long we've been waiting to say\n>> * 10 seconds, 20 seconds, 30 seconds, 40 seconds ... without ever\n>> * drifting to something that is not a multiple of ten.\n>> */\n>> timeout_in_ms -=\n>> TimestampDifferenceMilliseconds(current_time, initial_time) %\n>> timeout_in_ms;\n>> It looks like this timeout is really not used.\n> Oops. It should be. See attached.\n\nMy quick experiment shows that that TimestampDifferenceMilliseconds call\nalways returns zero, due to it's arguments swapped.\n\nThe other changes look good to me.\n\nThank you!\n\nBest regards,\nAlexander\n\n\n",
"msg_date": "Fri, 22 Dec 2023 08:00:00 +0300",
"msg_from": "Alexander Lakhin <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: trying again to get incremental backup"
},
{
"msg_contents": "My compiler has the following complaint:\n\n../postgresql/src/backend/postmaster/walsummarizer.c: In function ‘GetOldestUnsummarizedLSN’:\n../postgresql/src/backend/postmaster/walsummarizer.c:540:32: error: ‘unsummarized_lsn’ may be used uninitialized in this function [-Werror=maybe-uninitialized]\n 540 | WalSummarizerCtl->pending_lsn = unsummarized_lsn;\n | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~\n\nI haven't looked closely to see whether there is actually a problem here,\nbut the attached patch at least resolves the warning.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Sat, 23 Dec 2023 15:51:47 -0600",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: trying again to get incremental backup"
},
{
"msg_contents": "On Sat, Dec 23, 2023 at 4:51 PM Nathan Bossart <[email protected]> wrote:\n> My compiler has the following complaint:\n>\n> ../postgresql/src/backend/postmaster/walsummarizer.c: In function ‘GetOldestUnsummarizedLSN’:\n> ../postgresql/src/backend/postmaster/walsummarizer.c:540:32: error: ‘unsummarized_lsn’ may be used uninitialized in this function [-Werror=maybe-uninitialized]\n> 540 | WalSummarizerCtl->pending_lsn = unsummarized_lsn;\n> | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~\n\nThanks. I don't think there's a real bug, but I pushed a fix, same as\nwhat you had.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 27 Dec 2023 09:11:02 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: trying again to get incremental backup"
},
{
"msg_contents": "On Wed, Dec 27, 2023 at 09:11:02AM -0500, Robert Haas wrote:\n> Thanks. I don't think there's a real bug, but I pushed a fix, same as\n> what you had.\n\nThanks! I also noticed that WALSummarizerLock probably needs a mention in\nwait_event_names.txt.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 27 Dec 2023 09:36:47 -0600",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: trying again to get incremental backup"
},
{
"msg_contents": "On Wed, Dec 27, 2023 at 10:36 AM Nathan Bossart\n<[email protected]> wrote:\n> On Wed, Dec 27, 2023 at 09:11:02AM -0500, Robert Haas wrote:\n> > Thanks. I don't think there's a real bug, but I pushed a fix, same as\n> > what you had.\n>\n> Thanks! I also noticed that WALSummarizerLock probably needs a mention in\n> wait_event_names.txt.\n\nFixed.\n\nIt seems like it would be good if there were an automated cross-check\nbetween lwlocknames.txt and wait_event_names.txt.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 2 Jan 2024 10:34:11 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: trying again to get incremental backup"
},
{
"msg_contents": "On Fri, Dec 22, 2023 at 12:00 AM Alexander Lakhin <[email protected]> wrote:\n> My quick experiment shows that that TimestampDifferenceMilliseconds call\n> always returns zero, due to it's arguments swapped.\n\nThanks. Tom already changed the unsigned -> int stuff in a separate\ncommit, so I just pushed the fixes to PrepareForIncrementalBackup,\nboth the one I had before, and swapping the arguments to\nTimestampDifferenceMilliseconds.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 3 Jan 2024 10:10:09 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: trying again to get incremental backup"
},
{
"msg_contents": "On Wed, 3 Jan 2024 at 15:10, Robert Haas <[email protected]> wrote:\n\n> On Fri, Dec 22, 2023 at 12:00 AM Alexander Lakhin <[email protected]>\n> wrote:\n> > My quick experiment shows that that TimestampDifferenceMilliseconds call\n> > always returns zero, due to it's arguments swapped.\n>\n> Thanks. Tom already changed the unsigned -> int stuff in a separate\n> commit, so I just pushed the fixes to PrepareForIncrementalBackup,\n> both the one I had before, and swapping the arguments to\n> TimestampDifferenceMilliseconds\n>\n\nI would like to query the following:\n\n--tablespace-mapping=olddir=newdir\n\n Relocates the tablespace in directory olddir to newdir during the\nbackup. olddir is the absolute path of the tablespace as it exists in the\nfirst backup specified on the command line, and newdir is the absolute path\nto use for the tablespace in the reconstructed backup.\n\nThe first backup specified on the command line will be the regular, full,\nnon-incremental backup. But if a tablespace was introduced subsequently,\nit would only appear in an incremental backup. Wouldn't this then mean\nthat a mapping would need to be provided based on the path to the\ntablespace of that incremental backup's copy?\n\nRegards\n\nThom\n\nOn Wed, 3 Jan 2024 at 15:10, Robert Haas <[email protected]> wrote:On Fri, Dec 22, 2023 at 12:00 AM Alexander Lakhin <[email protected]> wrote:\n> My quick experiment shows that that TimestampDifferenceMilliseconds call\n> always returns zero, due to it's arguments swapped.\n\nThanks. Tom already changed the unsigned -> int stuff in a separate\ncommit, so I just pushed the fixes to PrepareForIncrementalBackup,\nboth the one I had before, and swapping the arguments to\nTimestampDifferenceMillisecondsI would like to query the following:--tablespace-mapping=olddir=newdir Relocates the tablespace in directory olddir to newdir during the backup. olddir is the absolute path of the tablespace as it exists in the first backup specified on the command line, and newdir is the absolute path to use for the tablespace in the reconstructed backup.The first backup specified on the command line will be the regular, full, non-incremental backup. But if a tablespace was introduced subsequently, it would only appear in an incremental backup. Wouldn't this then mean that a mapping would need to be provided based on the path to the tablespace of that incremental backup's copy?RegardsThom",
"msg_date": "Thu, 25 Apr 2024 23:43:52 +0100",
"msg_from": "Thom Brown <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: trying again to get incremental backup"
},
{
"msg_contents": "On Thu, Apr 25, 2024 at 6:44 PM Thom Brown <[email protected]> wrote:\n> I would like to query the following:\n>\n> --tablespace-mapping=olddir=newdir\n>\n> Relocates the tablespace in directory olddir to newdir during the backup. olddir is the absolute path of the tablespace as it exists in the first backup specified on the command line, and newdir is the absolute path to use for the tablespace in the reconstructed backup.\n>\n> The first backup specified on the command line will be the regular, full, non-incremental backup. But if a tablespace was introduced subsequently, it would only appear in an incremental backup. Wouldn't this then mean that a mapping would need to be provided based on the path to the tablespace of that incremental backup's copy?\n\nYes. Tomas Vondra found the same issue, which I have fixed in\n1713e3d6cd393fcc1d4873e75c7fa1f6c7023d75.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 26 Apr 2024 11:32:04 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: trying again to get incremental backup"
}
] |
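A minimal standalone sketch of the -Wtype-limits issue discussed in the thread above -- an illustration only, not the PostgreSQL reconstruct.c code. read() returns ssize_t, so storing its result in an unsigned variable turns -1 into a huge positive value and makes the "< 0" error check unreachable; declaring the variable with a signed type such as int restores the check, which is what the follow-up fix does.

/*
 * Illustration only: why "rb < 0" can never be true for an unsigned rb,
 * and how a signed declaration fixes it.  Build with: gcc -Wtype-limits
 */
#include <stdio.h>
#include <unistd.h>

int
main(void)
{
	char		buf[16];

	/* Broken pattern: -1 from a failed read() wraps to a huge value. */
	unsigned	rb_unsigned = read(-1, buf, sizeof(buf));

	if (rb_unsigned < 0)		/* warning: comparison ... always false */
		puts("never reached");

	/* Fixed pattern: a signed result keeps the error branch reachable. */
	int			rb = read(-1, buf, sizeof(buf));

	if (rb < 0)
		puts("read failed, as expected for a bad file descriptor");

	return 0;
}

Compiling with -Wtype-limits reproduces the warning on the first comparison only, matching the messages quoted above.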
[
{
"msg_contents": "The locale \"C\" (and equivalently, \"POSIX\") is not really a libc locale;\nit's implemented internally with memcmp for collation and\npg_ascii_tolower, etc., for ctype.\n\nThe attached patch implements a new collation provider, \"builtin\",\nwhich only supports \"C\" and \"POSIX\". It does not change the initdb\ndefault provider, so it must be requested explicitly. The user will be\nguaranteed that collations with provider \"builtin\" will never change\nsemantics; therefore they need no version and indexes are not at risk\nof corruption. See previous discussion[1].\n\n(Caveat: the \"C\" locale ordering may depend on the specific encoding.\nFor UTF-8, memcmp is equivalent to code point order, but that may not\nbe true of other encodings. Encodings can't change during pg_upgrade,\nso indexes are not at risk; but the encoding can change during\ndump/reload so results may change.)\n\nThis built-in provider is just here to support \"C\" and \"POSIX\" using\nmemcmp/pg_ascii_*, and no other locales. It is not intended as a\ngeneral license to take on the problem of maintaining locales. We may\nsupport some other locale name to mean \"code point order\", but like\nUCS_BASIC, that would just be an alias for locale \"C\" that also checks\nthat the encoding is UTF-8.\n\nMotivation:\n\nWhy not just use the \"C\" locale with the libc provider?\n\n1. It's more clear to the user what's going on: Postgres is managing\nthe provider; we aren't passing it on to libc at all. With the libc\nprovider, something like C.UTF-8 leaves room for confusion[2]; with the\nbuilt-in provider, \"C.UTF-8\" is not a supported locale and the user\nwill get an error if it's requested.\n\n2. The libc provider conflates LC_COLLATE/LC_CTYPE with the default\ncollation; whereas in the icu and built-in providers, they are separate\nconcepts. With ICU and builtin, you can set LC_COLLATE and LC_CTYPE for\na database to whatever you want at creation time\n\n3. If you use libc with locale \"C\", then future CREATE DATABASE\ncommands will default to the libc provider (because that would be the\nprovider for template0), which is not what the user wants if the\npurpose is to avoid problems with external collation providers. If you\nuse the built-in provider instead, then future CREATE DATABASE commands\nwill only succeed if the user either specifies locale C or explicitly\nchooses a new provider; which will allow them a chance to prepare for\nany challenges.\n\n4. It makes it easier to document the trade-offs between various\nproviders without confusing special cases around the C locale.\n\n\n[1]\nhttps://www.postgresql.org/message-id/87sfb4gwgv.fsf%40news-spur.riddles.org.uk\n[2]\nhttps://www.postgresql.org/message-id/[email protected]\n\n\n-- \nJeff Davis\nPostgreSQL Contributor Team - AWS",
"msg_date": "Wed, 14 Jun 2023 15:55:05 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "[17] collation provider \"builtin\""
},
{
"msg_contents": "On Thu, Jun 15, 2023 at 10:55 AM Jeff Davis <[email protected]> wrote:\n> The locale \"C\" (and equivalently, \"POSIX\") is not really a libc locale;\n> it's implemented internally with memcmp for collation and\n> pg_ascii_tolower, etc., for ctype.\n>\n> The attached patch implements a new collation provider, \"builtin\",\n> which only supports \"C\" and \"POSIX\". It does not change the initdb\n> default provider, so it must be requested explicitly. The user will be\n> guaranteed that collations with provider \"builtin\" will never change\n> semantics; therefore they need no version and indexes are not at risk\n> of corruption. See previous discussion[1].\n\nI haven't studied the details yet but +1 for this idea. It models\nwhat we are actually doing.\n\n\n",
"msg_date": "Thu, 15 Jun 2023 11:20:30 +1200",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [17] collation provider \"builtin\""
},
{
"msg_contents": "On 6/14/23 19:20, Thomas Munro wrote:\n> On Thu, Jun 15, 2023 at 10:55 AM Jeff Davis <[email protected]> wrote:\n>> The locale \"C\" (and equivalently, \"POSIX\") is not really a libc locale;\n>> it's implemented internally with memcmp for collation and\n>> pg_ascii_tolower, etc., for ctype.\n>>\n>> The attached patch implements a new collation provider, \"builtin\",\n>> which only supports \"C\" and \"POSIX\". It does not change the initdb\n>> default provider, so it must be requested explicitly. The user will be\n>> guaranteed that collations with provider \"builtin\" will never change\n>> semantics; therefore they need no version and indexes are not at risk\n>> of corruption. See previous discussion[1].\n> \n> I haven't studied the details yet but +1 for this idea. It models\n> what we are actually doing.\n\n+1 agreed\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n",
"msg_date": "Thu, 15 Jun 2023 15:08:45 -0400",
"msg_from": "Joe Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [17] collation provider \"builtin\""
},
{
"msg_contents": "On 15.06.23 00:55, Jeff Davis wrote:\n> The locale \"C\" (and equivalently, \"POSIX\") is not really a libc locale;\n> it's implemented internally with memcmp for collation and\n> pg_ascii_tolower, etc., for ctype.\n> \n> The attached patch implements a new collation provider, \"builtin\",\n> which only supports \"C\" and \"POSIX\". It does not change the initdb\n> default provider, so it must be requested explicitly. The user will be\n> guaranteed that collations with provider \"builtin\" will never change\n> semantics; therefore they need no version and indexes are not at risk\n> of corruption. See previous discussion[1].\n\nWhat happens if after this patch you continue to specify provider=libc \nand locale=C? Do you then get the \"slow\" path?\n\nShould there be some logic in pg_dump to change the provider if locale=C?\n\nWhat is the transition plan?\n\n\n\n",
"msg_date": "Fri, 16 Jun 2023 16:01:26 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [17] collation provider \"builtin\""
},
{
"msg_contents": "On Fri, 2023-06-16 at 16:01 +0200, Peter Eisentraut wrote:\n> What happens if after this patch you continue to specify\n> provider=libc \n> and locale=C? Do you then get the \"slow\" path?\n\nUsers can continue to use the libc provider as they did before and the\nfast path will still work.\n\n> Should there be some logic in pg_dump to change the provider if\n> locale=C?\n\nThat's not a part of this proposal.\n\n> What is the transition plan?\n\nThe built-in provider is for users who want to choose a provider that\nis guaranteed not to have the problems of an external provider\n(versioning, tracking affected objects, corrupt indexes, and slow\nperformance). If they initialize with --locale-provider=builtin and --\nlocale=C, and they want to choose a different locale for another\ndatabase, they'll need to specifically choose libc or ICU. Of course\nthey can still use specific collations attached to columns or queries\nas required.\n\nIt also acts as a nice complement to ICU (which doesn't support the C\nlocale) or a potential replacement for many uses of the libc provider\nwith the C locale. We can discuss later exactly how that will look, but\neven if the builtin provider needs to be explicitly requested (as in\nthe current patch), it's still useful, so I don't think we need to\ndecide now.\n\nWe should also keep in mind that whatever provider is selected at\ninitdb time also becomes the default for future databases.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Fri, 16 Jun 2023 14:42:26 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [17] collation provider \"builtin\""
},
{
"msg_contents": "On Wed, 2023-06-14 at 15:55 -0700, Jeff Davis wrote:\n> The locale \"C\" (and equivalently, \"POSIX\") is not really a libc\n> locale;\n> it's implemented internally with memcmp for collation and\n> pg_ascii_tolower, etc., for ctype.\n> \n> The attached patch implements a new collation provider, \"builtin\",\n> which only supports \"C\" and \"POSIX\".\n\nRebased patch attached.\n\nI got some generally positive comments, but it needs some more feedback\non the specifics to be committable.\n\nThis might be a good time to summarize my thoughts on collation after\nmy work in v16:\n\n* Picking a database default collation other than UCS_BASIC (a.k.a.\n\"C\", a.k.a. memcmp(), a.k.a. provider=builtin) is something that should\nbe done intentionally. It's an impactful choice that affects semantics,\nperformance, and upgrades/deployment. Beyond that, our implementation\nstill lacks a good way to manage versions of collation provider\nlibraries and track object dependencies in a safe way to prevent index\ncorruption, so the safest choice is really just to use stable memcmp()\nsemantics.\n\n* The defaults for initdb seem bad in a number of ways, but it's too\nhard to change that default now (I tried in v16 and reverted it). So\nthe job of reasonable choices is left for higher-level tools and\ndocumentation.\n\n* We can handle the collation and character classification\nindependently. The main use case is to set the collation to memcmp()\nsemantics (for stability and performance) and set the character\nclassification to something interesting (on the grounds that it's more\nlikely to be stable and less likely to be used in an index than a\ncollation). Right now the only way to do that is to use the libc\nprovider and set the collation to C and the ctype to a libc locale. But\nthere is also a use case for having ICU as the provider for character\nclassification. One option is to have separate datcolprovider=b\n(builtin provider) and datctypeprovider=i, so that the collation would\nbe handled with memcmp and the character classification daticulocale.\nIt feels like we're growing the fields in pg_database a little too\nmuch, but the use case seems valid, and perhaps we can reorganize the\ncatalog representation a bit.\n\n\n-- \nJeff Davis\nPostgreSQL Contributor Team - AWS",
"msg_date": "Tue, 22 Aug 2023 14:32:32 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [17] collation provider \"builtin\""
},
{
"msg_contents": "On Thu, 2023-06-15 at 15:08 -0400, Joe Conway wrote:\n> > I haven't studied the details yet but +1 for this idea. It models\n> > what we are actually doing.\n> \n> +1 agreed\n\nI am combining this discussion with my \"built-in CTYPE provider\"\nproposal here:\n\nhttps://www.postgresql.org/message-id/[email protected]\n\nand the most recent patch is posted there. Having a built-in provider\nis more useful if it also offers a \"C.UTF-8\" locale that is superior to\nthe libc locale of the same name.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Fri, 29 Dec 2023 10:42:18 -0800",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [17] collation provider \"builtin\""
}
] |
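For reference, the "C"/"POSIX" semantics that the proposed builtin provider pins down come to a plain byte-wise comparison, with a shorter string sorting first when it is a prefix of the other. The following is a rough standalone sketch of that ordering, not the PostgreSQL implementation itself:

/*
 * Sketch of "C" collation semantics: memcmp over the bytes, length as a
 * tie-breaker.  No external locale library is consulted.
 */
#include <stdio.h>
#include <string.h>

static int
c_locale_cmp(const char *a, size_t alen, const char *b, size_t blen)
{
	size_t		minlen = (alen < blen) ? alen : blen;
	int			cmp = memcmp(a, b, minlen);

	if (cmp != 0)
		return cmp;
	if (alen != blen)
		return (alen < blen) ? -1 : 1;
	return 0;
}

int
main(void)
{
	/* 'B' (0x42) sorts before 'a' (0x61), unlike natural-language locales. */
	printf("%d\n", c_locale_cmp("apple", 5, "Banana", 6));	/* positive */
	printf("%d\n", c_locale_cmp("abc", 3, "abcd", 4));		/* negative */
	return 0;
}

Because nothing here depends on an external provider, the result cannot change across glibc or ICU upgrades, which is the versioning guarantee described above; for UTF-8 this byte order also coincides with code point order.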
[
{
"msg_contents": "Hello,\n\nThere is often a need to test particular queries executed in the worst-case\nscenario, i.e. right after a server restart or with no or minimal amount of\ndata in shared buffers. In Postgres it's currently hard to achieve (other\nthan to restart the server completely to run a single query, which is not\npractical). Is there a simple way to introduce a GUC variable that makes\nqueries bypass shared_buffers and always read from storage? It would make\ntesting like that orders of magnitude simpler. I mean, are there serious\ntechnical obstacles or any other objections to that idea in principle?\n\n Thanks,\n-Vladimir Churyukin\n\nHello,There is often a need to test particular queries executed in the worst-case scenario, i.e. right after a server restart or with no or minimal amount of data in shared buffers. In Postgres it's currently hard to achieve (other than to restart the server completely to run a single query, which is not practical). Is there a simple way to introduce a GUC variable that makes queries bypass shared_buffers and always read from storage? It would make testing like that orders of magnitude simpler. I mean, are there serious technical obstacles or any other objections to that idea in principle? Thanks,-Vladimir Churyukin",
"msg_date": "Wed, 14 Jun 2023 17:57:31 -0700",
"msg_from": "Vladimir Churyukin <[email protected]>",
"msg_from_op": true,
"msg_subject": "Bypassing shared_buffers"
},
{
"msg_contents": "To be clear, I'm talking about bypassing shared buffers for reading data /\nindexes only, not about disabling it completely (which I guess is\nimpossible anyway).\n\n-Vladimir Churyukin\n\nOn Wed, Jun 14, 2023 at 5:57 PM Vladimir Churyukin <[email protected]>\nwrote:\n\n> Hello,\n>\n> There is often a need to test particular queries executed in the\n> worst-case scenario, i.e. right after a server restart or with no or\n> minimal amount of data in shared buffers. In Postgres it's currently hard\n> to achieve (other than to restart the server completely to run a single\n> query, which is not practical). Is there a simple way to introduce a GUC\n> variable that makes queries bypass shared_buffers and always read from\n> storage? It would make testing like that orders of magnitude simpler. I\n> mean, are there serious technical obstacles or any other objections to that\n> idea in principle?\n>\n> Thanks,\n> -Vladimir Churyukin\n>\n\nTo be clear, I'm talking about bypassing shared buffers for reading data / indexes only, not about disabling it completely (which I guess is impossible anyway).-Vladimir ChuryukinOn Wed, Jun 14, 2023 at 5:57 PM Vladimir Churyukin <[email protected]> wrote:Hello,There is often a need to test particular queries executed in the worst-case scenario, i.e. right after a server restart or with no or minimal amount of data in shared buffers. In Postgres it's currently hard to achieve (other than to restart the server completely to run a single query, which is not practical). Is there a simple way to introduce a GUC variable that makes queries bypass shared_buffers and always read from storage? It would make testing like that orders of magnitude simpler. I mean, are there serious technical obstacles or any other objections to that idea in principle? Thanks,-Vladimir Churyukin",
"msg_date": "Wed, 14 Jun 2023 18:14:06 -0700",
"msg_from": "Vladimir Churyukin <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Bypassing shared_buffers"
},
{
"msg_contents": "Vladimir Churyukin <[email protected]> writes:\n> There is often a need to test particular queries executed in the worst-case\n> scenario, i.e. right after a server restart or with no or minimal amount of\n> data in shared buffers. In Postgres it's currently hard to achieve (other\n> than to restart the server completely to run a single query, which is not\n> practical). Is there a simple way to introduce a GUC variable that makes\n> queries bypass shared_buffers and always read from storage? It would make\n> testing like that orders of magnitude simpler. I mean, are there serious\n> technical obstacles or any other objections to that idea in principle?\n\nIt's a complete non-starter. Pages on disk are not necessarily up to\ndate; but what is in shared buffers is.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 14 Jun 2023 21:22:30 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bypassing shared_buffers"
},
{
"msg_contents": "Ok, got it, thanks.\nIs there any alternative approach to measuring the performance as if the\ncache was empty?\nThe goal is basically to calculate the max possible I/O time for a query,\nto get a range between min and max timing.\nIt's ok if it's done during EXPLAIN ANALYZE call only, not for regular\nexecutions.\nOne thing I can think of is even if the data in storage might be stale,\nissue read calls from it anyway, for measuring purposes.\nFor EXPLAIN ANALYZE it should be fine as it doesn't return real data anyway.\nIs it possible that some pages do not exist in storage at all? Is there a\ndifferent way to simulate something like that?\n\n-Vladimir Churyukin\n\nOn Wed, Jun 14, 2023 at 6:22 PM Tom Lane <[email protected]> wrote:\n\n> Vladimir Churyukin <[email protected]> writes:\n> > There is often a need to test particular queries executed in the\n> worst-case\n> > scenario, i.e. right after a server restart or with no or minimal amount\n> of\n> > data in shared buffers. In Postgres it's currently hard to achieve (other\n> > than to restart the server completely to run a single query, which is not\n> > practical). Is there a simple way to introduce a GUC variable that makes\n> > queries bypass shared_buffers and always read from storage? It would make\n> > testing like that orders of magnitude simpler. I mean, are there serious\n> > technical obstacles or any other objections to that idea in principle?\n>\n> It's a complete non-starter. Pages on disk are not necessarily up to\n> date; but what is in shared buffers is.\n>\n> regards, tom lane\n>\n\nOk, got it, thanks.Is there any alternative approach to measuring the performance as if the cache was empty?The goal is basically to calculate the max possible I/O time for a query, to get a range between min and max timing.It's ok if it's done during EXPLAIN ANALYZE call only, not for regular executions.One thing I can think of is even if the data in storage might be stale, issue read calls from it anyway, for measuring purposes.For EXPLAIN ANALYZE it should be fine as it doesn't return real data anyway.Is it possible that some pages do not exist in storage at all? Is there a different way to simulate something like that?-Vladimir ChuryukinOn Wed, Jun 14, 2023 at 6:22 PM Tom Lane <[email protected]> wrote:Vladimir Churyukin <[email protected]> writes:\n> There is often a need to test particular queries executed in the worst-case\n> scenario, i.e. right after a server restart or with no or minimal amount of\n> data in shared buffers. In Postgres it's currently hard to achieve (other\n> than to restart the server completely to run a single query, which is not\n> practical). Is there a simple way to introduce a GUC variable that makes\n> queries bypass shared_buffers and always read from storage? It would make\n> testing like that orders of magnitude simpler. I mean, are there serious\n> technical obstacles or any other objections to that idea in principle?\n\nIt's a complete non-starter. Pages on disk are not necessarily up to\ndate; but what is in shared buffers is.\n\n regards, tom lane",
"msg_date": "Wed, 14 Jun 2023 18:37:00 -0700",
"msg_from": "Vladimir Churyukin <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Bypassing shared_buffers"
},
{
"msg_contents": "On Thu, Jun 15, 2023 at 1:37 PM Vladimir Churyukin\n<[email protected]> wrote:\n> Ok, got it, thanks.\n> Is there any alternative approach to measuring the performance as if the cache was empty?\n\nThere are two levels of cache. If you're on Linux you can ask it to\ndrop its caches by writing certain values to /proc/sys/vm/drop_caches.\nFor PostgreSQL's own buffer pool, it would be nice if someone would\nextend the pg_prewarm extension to have a similar 'unwarm' operation,\nfor testing like that. But one thing you can do is just restart the\ndatabase cluster, or use pg_prewarm to fill its buffer pool up with\nother stuff (and thus kick out the stuff you didn't want in there).\n\n\n",
"msg_date": "Thu, 15 Jun 2023 14:28:24 +1200",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bypassing shared_buffers"
},
{
"msg_contents": "Thomas Munro <[email protected]> writes:\n> There are two levels of cache. If you're on Linux you can ask it to\n> drop its caches by writing certain values to /proc/sys/vm/drop_caches.\n> For PostgreSQL's own buffer pool, it would be nice if someone would\n> extend the pg_prewarm extension to have a similar 'unwarm' operation,\n> for testing like that. But one thing you can do is just restart the\n> database cluster, or use pg_prewarm to fill its buffer pool up with\n> other stuff (and thus kick out the stuff you didn't want in there).\n\nBut that'd also have to push out any dirty buffers. I'm skeptical\nthat it'd be noticeably cheaper than stopping and restarting the\nserver.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 14 Jun 2023 22:43:39 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bypassing shared_buffers"
},
{
"msg_contents": "Do you foresee any difficulties in implementation of the \"unwarm\"\noperation? It requires a cache flush operation,\nso I'm curious how complicated that is (probably there is a reason this is\nnot supported by Postgres by now? mssql and oracle support stuff like that\nfor a long time)\nCluster restart is not an option for us unfortunately, as it will be\nrequired for each query pretty much, and there are a lot of them.\nAn ideal solution would be, if it's possible, to test it in parallel with\nother activities...\nEvicting all the other stuff using pg_prewarm is an interesting idea though\n(if a large prewarm operation really evicts all the previously stored data\nreliably).\nIt's a bit hacky, but thanks, I think it's possible to make this work with\nsome effort.\nIt will require exclusive access just for that testing, which is not ideal\nbut may work for us.\n\n-Vladimir )churyukin\n\n\nOn Wed, Jun 14, 2023 at 7:29 PM Thomas Munro <[email protected]> wrote:\n\n> On Thu, Jun 15, 2023 at 1:37 PM Vladimir Churyukin\n> <[email protected]> wrote:\n> > Ok, got it, thanks.\n> > Is there any alternative approach to measuring the performance as if the\n> cache was empty?\n>\n> There are two levels of cache. If you're on Linux you can ask it to\n> drop its caches by writing certain values to /proc/sys/vm/drop_caches.\n> For PostgreSQL's own buffer pool, it would be nice if someone would\n> extend the pg_prewarm extension to have a similar 'unwarm' operation,\n> for testing like that. But one thing you can do is just restart the\n> database cluster, or use pg_prewarm to fill its buffer pool up with\n> other stuff (and thus kick out the stuff you didn't want in there).\n>\n\nDo you foresee any difficulties in implementation of the \"unwarm\" operation? It requires a cache flush operation, so I'm curious how complicated that is (probably there is a reason this is not supported by Postgres by now? mssql and oracle support stuff like that for a long time) Cluster restart is not an option for us unfortunately, as it will be required for each query pretty much, and there are a lot of them.An ideal solution would be, if it's possible, to test it in parallel with other activities...Evicting all the other stuff using pg_prewarm is an interesting idea though (if a large prewarm operation really evicts all the previously stored data reliably).It's a bit hacky, but thanks, I think it's possible to make this work with some effort.It will require exclusive access just for that testing, which is not ideal but may work for us.-Vladimir )churyukinOn Wed, Jun 14, 2023 at 7:29 PM Thomas Munro <[email protected]> wrote:On Thu, Jun 15, 2023 at 1:37 PM Vladimir Churyukin\n<[email protected]> wrote:\n> Ok, got it, thanks.\n> Is there any alternative approach to measuring the performance as if the cache was empty?\n\nThere are two levels of cache. If you're on Linux you can ask it to\ndrop its caches by writing certain values to /proc/sys/vm/drop_caches.\nFor PostgreSQL's own buffer pool, it would be nice if someone would\nextend the pg_prewarm extension to have a similar 'unwarm' operation,\nfor testing like that. But one thing you can do is just restart the\ndatabase cluster, or use pg_prewarm to fill its buffer pool up with\nother stuff (and thus kick out the stuff you didn't want in there).",
"msg_date": "Wed, 14 Jun 2023 19:51:03 -0700",
"msg_from": "Vladimir Churyukin <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Bypassing shared_buffers"
},
{
"msg_contents": "It could be cheaper, if the testing is done for many SELECT queries\nsequentially - you need to flush dirty buffers just once pretty much.\n\n-Vladimir Churyukin\n\nOn Wed, Jun 14, 2023 at 7:43 PM Tom Lane <[email protected]> wrote:\n\n> Thomas Munro <[email protected]> writes:\n> > There are two levels of cache. If you're on Linux you can ask it to\n> > drop its caches by writing certain values to /proc/sys/vm/drop_caches.\n> > For PostgreSQL's own buffer pool, it would be nice if someone would\n> > extend the pg_prewarm extension to have a similar 'unwarm' operation,\n> > for testing like that. But one thing you can do is just restart the\n> > database cluster, or use pg_prewarm to fill its buffer pool up with\n> > other stuff (and thus kick out the stuff you didn't want in there).\n>\n> But that'd also have to push out any dirty buffers. I'm skeptical\n> that it'd be noticeably cheaper than stopping and restarting the\n> server.\n>\n> regards, tom lane\n>\n\nIt could be cheaper, if the testing is done for many SELECT queries sequentially - you need to flush dirty buffers just once pretty much.-Vladimir ChuryukinOn Wed, Jun 14, 2023 at 7:43 PM Tom Lane <[email protected]> wrote:Thomas Munro <[email protected]> writes:\n> There are two levels of cache. If you're on Linux you can ask it to\n> drop its caches by writing certain values to /proc/sys/vm/drop_caches.\n> For PostgreSQL's own buffer pool, it would be nice if someone would\n> extend the pg_prewarm extension to have a similar 'unwarm' operation,\n> for testing like that. But one thing you can do is just restart the\n> database cluster, or use pg_prewarm to fill its buffer pool up with\n> other stuff (and thus kick out the stuff you didn't want in there).\n\nBut that'd also have to push out any dirty buffers. I'm skeptical\nthat it'd be noticeably cheaper than stopping and restarting the\nserver.\n\n regards, tom lane",
"msg_date": "Wed, 14 Jun 2023 19:52:57 -0700",
"msg_from": "Vladimir Churyukin <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Bypassing shared_buffers"
},
{
"msg_contents": "On Thu, Jun 15, 2023 at 2:51 PM Vladimir Churyukin\n<[email protected]> wrote:\n> Do you foresee any difficulties in implementation of the \"unwarm\" operation? It requires a cache flush operation,\n> so I'm curious how complicated that is (probably there is a reason this is not supported by Postgres by now? mssql and oracle support stuff like that for a long time)\n\nIf they have a way to kick individual relations out of the buffer\npool, then I suspect they have an efficient way to find the relevant\nbuffers. We'd have to scan the entire buffer pool, or (for small\nrelations), probe for blocks 0..n (when we know that n isn't too\nhigh). We'll probably eventually get something tree-based, like\noperating system kernels and perhaps those other databases use for\ntheir own buffer pools, which is useful for I/O merging and for faster\nDROP, but until then you'll face the same problem while implementing\nunwarm, and you'd probably have to understand a lot of details about\nbufmgr.c and add some new interfaces.\n\nAs Tom says, in the end it's going to work out much like restarting,\nwhich requires a pleasing zero lines of new code, perhaps explaining\nwhy no one has tried this before... Though of course you can be more\nselective about which tables are zapped.\n\n> Cluster restart is not an option for us unfortunately, as it will be required for each query pretty much, and there are a lot of them.\n> An ideal solution would be, if it's possible, to test it in parallel with other activities...\n> Evicting all the other stuff using pg_prewarm is an interesting idea though (if a large prewarm operation really evicts all the previously stored data reliably).\n> It's a bit hacky, but thanks, I think it's possible to make this work with some effort.\n> It will require exclusive access just for that testing, which is not ideal but may work for us.\n\nYou can use pg_buffercache to check the current contents of the buffer\npool, to confirm that a relation you're interested in is gone.\n\nhttps://www.postgresql.org/docs/current/pgbuffercache.html#PGBUFFERCACHE-COLUMNS\n\nI guess another approach if you really want to write code to do this\nwould be to introduce a function that takes a buffer ID and\ninvalidates it, and then you could use queries of pg_buffercache to\ndrive it. It would simplify things greatly if you only supported\ninvalidating clean buffers, and then you could query pg_buffercache to\nsee if any dirty buffers are left and if so run a checkpoint and try\nagain or something like that...\n\nAnother thing I have wondered about while hacking on I/O code is\nwhether pg_prewarm should also have an unwarm-the-kernel-cache thing.\nThere is that drop_cache thing, but that's holus bolus and Linux-only.\nPerhaps POSIX_FADV_WONTNEED could be used for this, though that would\nseem to require a double decker bus-sized layering violation.\n\n\n",
"msg_date": "Thu, 15 Jun 2023 17:22:49 +1200",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bypassing shared_buffers"
},
{
"msg_contents": "\n\nOn 15.06.2023 4:37 AM, Vladimir Churyukin wrote:\n> Ok, got it, thanks.\n> Is there any alternative approach to measuring the performance as if \n> the cache was empty?\n> The goal is basically to calculate the max possible I/O time for a \n> query, to get a range between min and max timing.\n> It's ok if it's done during EXPLAIN ANALYZE call only, not for regular \n> executions.\n> One thing I can think of is even if the data in storage might be \n> stale, issue read calls from it anyway, for measuring purposes.\n> For EXPLAIN ANALYZE it should be fine as it doesn't return real data \n> anyway.\n> Is it possible that some pages do not exist in storage at all? Is \n> there a different way to simulate something like that?\n>\n\nI do not completely understand what you want to measure: how fast cache \nbe prewarmed or what is the performance\nwhen working set doesn't fit in memory?\n\nWhy not changing `shared_buffers` size to some very small values (i.e. \n1MB) doesn't work?\nAs it was already noticed, there are levels of caching: shared buffers \nand OS file cache.\nBy reducing size of shared buffers you rely mostly on OS file cache.\nAnd actually there is no big gap in performance here - at most workloads \nI didn't see more than 15% difference).\n\nYou can certainly flush OS cache `echo 3 > /proc/sys/vm/drop_caches` and \nso simulate cold start.\nBut OS cached will be prewarmed quite fast (unlike shared buffer because \nof strange Postgres ring-buffer strategies which cause eviction of pages\nfrom shared buffers even if there is a lot of free space).\n\nSo please more precisely specify the goal of your experiment.\n\"max possible I/O time for a query\" depends on so many factors...\nDo you consider just one client working in isolation or there will be \nmany concurrent queries and background tasks like autovacuum and \ncheckpointer competing for the resources?\n\nMy point is that if you need some deterministic result then you will \nhave to exclude a lot of different factors which may affect performance\nand then ... you calculate speed of horse in vacuum, which has almost no \nrelation to real performance.\n\n\n\n\n\n\n\n",
"msg_date": "Thu, 15 Jun 2023 10:32:03 +0300",
"msg_from": "Konstantin Knizhnik <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bypassing shared_buffers"
},
{
"msg_contents": "On Thu, Jun 15, 2023 at 12:32 AM Konstantin Knizhnik <[email protected]>\nwrote:\n\n>\n>\n> On 15.06.2023 4:37 AM, Vladimir Churyukin wrote:\n> > Ok, got it, thanks.\n> > Is there any alternative approach to measuring the performance as if\n> > the cache was empty?\n> > The goal is basically to calculate the max possible I/O time for a\n> > query, to get a range between min and max timing.\n> > It's ok if it's done during EXPLAIN ANALYZE call only, not for regular\n> > executions.\n> > One thing I can think of is even if the data in storage might be\n> > stale, issue read calls from it anyway, for measuring purposes.\n> > For EXPLAIN ANALYZE it should be fine as it doesn't return real data\n> > anyway.\n> > Is it possible that some pages do not exist in storage at all? Is\n> > there a different way to simulate something like that?\n> >\n>\n> I do not completely understand what you want to measure: how fast cache\n> be prewarmed or what is the performance\n> when working set doesn't fit in memory?\n>\n>\nNo, it's not about working set or prewarming speed.\nWe're trying to see what is the worst performance in terms of I/O, i.e.\nwhen the database just started up or the data/indexes being queried are not\ncached at all.\n\nWhy not changing `shared_buffers` size to some very small values (i.e.\n> 1MB) doesn't work?\n>\nAs it was already noticed, there are levels of caching: shared buffers\n> and OS file cache.\n> By reducing size of shared buffers you rely mostly on OS file cache.\n> And actually there is no big gap in performance here - at most workloads\n> I didn't see more than 15% difference).\n>\n\nI thought about the option of setting minimal shared_buffers, but it\nrequires a server restart anyway, something I'd like to avoid.\n\nYou can certainly flush OS cache `echo 3 > /proc/sys/vm/drop_caches` and\n> so simulate cold start.\n> But OS cached will be prewarmed quite fast (unlike shared buffer because\n> of strange Postgres ring-buffer strategies which cause eviction of pages\n> from shared buffers even if there is a lot of free space).\n>\n> So please more precisely specify the goal of your experiment.\n> \"max possible I/O time for a query\" depends on so many factors...\n> Do you consider just one client working in isolation or there will be\n> many concurrent queries and background tasks like autovacuum and\n> checkpointer competing for the resources?\n>\n\n> My point is that if you need some deterministic result then you will\n> have to exclude a lot of different factors which may affect performance\n> and then ... 
you calculate speed of horse in vacuum, which has almost no\n> relation to real performance.\n>\n>\nExactly, we need more or less deterministic results for how bad I/O timings\ncan be.\nEven though it's not necessarily the numbers we will be getting in real\nlife, it gives us ideas about distribution,\nand it's useful because we care about the long tail (p99+) of our queries.\nFor simplicity let's say it will be a single client only (it will be hard\nto do the proposed solutions reliably with other stuff running in parallel\nanyway).\n\n-Vladimir Churyukin\n\nOn Thu, Jun 15, 2023 at 12:32 AM Konstantin Knizhnik <[email protected]> wrote:\n\nOn 15.06.2023 4:37 AM, Vladimir Churyukin wrote:\n> Ok, got it, thanks.\n> Is there any alternative approach to measuring the performance as if \n> the cache was empty?\n> The goal is basically to calculate the max possible I/O time for a \n> query, to get a range between min and max timing.\n> It's ok if it's done during EXPLAIN ANALYZE call only, not for regular \n> executions.\n> One thing I can think of is even if the data in storage might be \n> stale, issue read calls from it anyway, for measuring purposes.\n> For EXPLAIN ANALYZE it should be fine as it doesn't return real data \n> anyway.\n> Is it possible that some pages do not exist in storage at all? Is \n> there a different way to simulate something like that?\n>\n\nI do not completely understand what you want to measure: how fast cache \nbe prewarmed or what is the performance\nwhen working set doesn't fit in memory?\nNo, it's not about working set or prewarming speed.We're trying to see what is the worst performance in terms of I/O, i.e. when the database just started up or the data/indexes being queried are not cached at all.\nWhy not changing `shared_buffers` size to some very small values (i.e. \n1MB) doesn't work?As it was already noticed, there are levels of caching: shared buffers \nand OS file cache.\nBy reducing size of shared buffers you rely mostly on OS file cache.\nAnd actually there is no big gap in performance here - at most workloads \nI didn't see more than 15% difference).I thought about the option of setting minimal shared_buffers, but it requires a server restart anyway, something I'd like to avoid.\nYou can certainly flush OS cache `echo 3 > /proc/sys/vm/drop_caches` and \nso simulate cold start.\nBut OS cached will be prewarmed quite fast (unlike shared buffer because \nof strange Postgres ring-buffer strategies which cause eviction of pages\nfrom shared buffers even if there is a lot of free space).\n\nSo please more precisely specify the goal of your experiment.\n\"max possible I/O time for a query\" depends on so many factors...\nDo you consider just one client working in isolation or there will be \nmany concurrent queries and background tasks like autovacuum and \ncheckpointer competing for the resources?\nMy point is that if you need some deterministic result then you will \nhave to exclude a lot of different factors which may affect performance\nand then ... you calculate speed of horse in vacuum, which has almost no \nrelation to real performance.\nExactly, we need more or less deterministic results for how bad I/O timings can be. Even though it's not necessarily the numbers we will be getting in real life, it gives us ideas about distribution, and it's useful because we care about the long tail (p99+) of our queries. 
For simplicity let's say it will be a single client only (it will be hard to do the proposed solutions reliably with other stuff running in parallel anyway). -Vladimir Churyukin",
"msg_date": "Thu, 15 Jun 2023 01:16:31 -0700",
"msg_from": "Vladimir Churyukin <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Bypassing shared_buffers"
},
{
"msg_contents": "On Thu, Jun 15, 2023 at 4:16 AM Vladimir Churyukin <[email protected]>\nwrote:\n\n> We're trying to see what is the worst performance in terms of I/O, i.e.\n>> when the database just started up or the data/indexes being queried are not\n>> cached at all.\n>\n>\nYou could create new tables that are copies of the existing ones (CREATE\nTABLE foo as SELECT * FROM ...), create new indexes, and run a query on\nthose. Use schemas and search_path to keep the queries the same. No restart\nneeded! (just potentially lots of I/O, time, and disk space :) Don't forget\nto do explain (analyze, buffers) to double check things.\n\nOn Thu, Jun 15, 2023 at 4:16 AM Vladimir Churyukin <[email protected]> wrote: We're trying to see what is the worst performance in terms of I/O, i.e. when the database just started up or the data/indexes being queried are not cached at all.You could create new tables that are copies of the existing ones (CREATE TABLE foo as SELECT * FROM ...), create new indexes, and run a query on those. Use schemas and search_path to keep the queries the same. No restart needed! (just potentially lots of I/O, time, and disk space :) Don't forget to do explain (analyze, buffers) to double check things.",
"msg_date": "Sat, 17 Jun 2023 18:46:53 -0400",
"msg_from": "Greg Sabino Mullane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bypassing shared_buffers"
},
{
"msg_contents": "Hi!\n\n> On 15 Jun 2023, at 03:57, Vladimir Churyukin <[email protected]> wrote:\n> \n> Hello,\n> \n> There is often a need to test particular queries executed in the worst-case scenario, i.e. right after a server restart or with no or minimal amount of data in shared buffers. In Postgres it's currently hard to achieve (other than to restart the server completely to run a single query, which is not practical). Is there a simple way to introduce a GUC variable that makes queries bypass shared_buffers and always read from storage? It would make testing like that orders of magnitude simpler. I mean, are there serious technical obstacles or any other objections to that idea in principle? \n\nFew months ago I implemented \"drop of caches\" to demonstrate basic structure of shared buffers [0]. The patch is very unsafe in the form is was implemented, but if you think that functionality is really useful (it was not intended to be) I can try to do the same as extension.\n\nit worked like \"SELECT FlushAllBuffers();\" and what is done resembles checkpoint, but evicts every buffer that can be evicted. Obviously, emptied buffers would be immediately reused by concurrent sessions.\n\n\nBest regards, Andrey Borodin.\n\n[0] https://www.youtube.com/watch?v=u8BAOqeKnwY\n\n",
"msg_date": "Mon, 19 Jun 2023 12:47:40 +0300",
"msg_from": "Andrey M. Borodin <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bypassing shared_buffers"
}
] |
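On the operating-system side of the two cache levels mentioned in this thread, a per-file alternative to the global Linux drop_caches knob is to advise the kernel to evict one file's cached pages. A hypothetical standalone helper (not part of PostgreSQL or pg_prewarm) could look like the sketch below; the advice constant is spelled POSIX_FADV_DONTNEED, it is only a hint to the kernel, and it does nothing about PostgreSQL's own shared buffers.

/*
 * Hypothetical helper, sketch only: drop the kernel page cache for one
 * file (for example a relation segment under the data directory) to
 * approximate a cold OS cache before re-running a query.
 */
#define _XOPEN_SOURCE 600
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int
main(int argc, char **argv)
{
	int			fd;
	int			rc;

	if (argc != 2)
	{
		fprintf(stderr, "usage: %s <path-to-file>\n", argv[0]);
		return 1;
	}

	fd = open(argv[1], O_RDONLY);
	if (fd < 0)
	{
		perror("open");
		return 1;
	}

	/* Flush anything dirty first, then ask the kernel to evict the pages. */
	if (fdatasync(fd) != 0)
		perror("fdatasync");

	rc = posix_fadvise(fd, 0, 0, POSIX_FADV_DONTNEED);
	if (rc != 0)
		fprintf(stderr, "posix_fadvise: %s\n", strerror(rc));

	close(fd);
	return 0;
}

This only approximates the cold-storage case for the files it is pointed at; evicting the copies held in shared_buffers still needs a server restart, the pg_prewarm crowding-out approach, or an "unwarm"-style extension as discussed above.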
[
{
"msg_contents": "Hi,\n\nThere are different types of Logical Replication workers -- e.g.\ntablesync workers, apply workers, and parallel apply workers.\n\nThe logging and errors often name these worker types, but during a\nrecent code review, I noticed some inconsistency in the way this is\ndone:\na) there is a common function get_worker_name() to return the name for\nthe worker type, -- OR --\nb) the worker name is just hardcoded in the message/error\n\nI think it is not ideal to cut/paste the same hardwired strings over\nand over. IMO it just introduces an unnecessary risk of subtle naming\ndifferences creeping in.\n\n~~\n\nIt is better to have a *single* point where these worker names are\ndefined, so then all output uses identical LR worker nomenclature.\n\nPSA a small patch to modify the code accordingly. This is not intended\nto be a functional change - just a code cleanup.\n\nThoughts?\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia",
"msg_date": "Thu, 15 Jun 2023 12:42:33 +1000",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": true,
"msg_subject": "Consistent coding for the naming of LR workers"
},
{
"msg_contents": "On Thu, Jun 15, 2023 at 8:13 AM Peter Smith <[email protected]> wrote:\n>\n> There are different types of Logical Replication workers -- e.g.\n> tablesync workers, apply workers, and parallel apply workers.\n>\n> The logging and errors often name these worker types, but during a\n> recent code review, I noticed some inconsistency in the way this is\n> done:\n> a) there is a common function get_worker_name() to return the name for\n> the worker type, -- OR --\n> b) the worker name is just hardcoded in the message/error\n>\n> I think it is not ideal to cut/paste the same hardwired strings over\n> and over. IMO it just introduces an unnecessary risk of subtle naming\n> differences creeping in.\n>\n> ~~\n>\n> It is better to have a *single* point where these worker names are\n> defined, so then all output uses identical LR worker nomenclature.\n>\n\n+1. I think makes error strings in the code look a bit shorter. I\nthink it is better to park the patch for the next CF to avoid\nforgetting about it.\n\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 15 Jun 2023 13:29:36 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Consistent coding for the naming of LR workers"
},
{
"msg_contents": "On 2023-Jun-15, Peter Smith wrote:\n\n> PSA a small patch to modify the code accordingly. This is not intended\n> to be a functional change - just a code cleanup.\n\n From a translation standpoint, this doesn't seem good. Consider this\nproposed message:\n \"lost connection to the %s\"\n\nIt's not possible to translate \"to the\" correctly to Spanish because it\ndepends on the grammatical gender of the %s. At present this is not an\nactual problem because all bgworker types have the same gender, but it\ngoes counter translability good practices. Also, other languages may\nhave different considerations.\n\nYou could use a generic term and then add a colon-separated or a quoted\nindicator for its type:\n lost connection to logical replication worker of type \"parallel apply\"\n lost connection to logical replication worker: \"parallel apply worker\"\n lost connection to logical replication worker: type \"parallel apply worker\"\n\nand then make the type string (and nothing else in that message) be a\n%s. But I don't think this looks very good.\n\nI'd leave this alone, except if there are any actual inconsistencies in\nwhich case they should be fixed specifically.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\nThou shalt check the array bounds of all strings (indeed, all arrays), for\nsurely where thou typest \"foo\" someone someday shall type\n\"supercalifragilisticexpialidocious\" (5th Commandment for C programmers)\n\n\n",
"msg_date": "Thu, 15 Jun 2023 12:37:59 +0200",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Consistent coding for the naming of LR workers"
},
{
"msg_contents": "At Thu, 15 Jun 2023 12:42:33 +1000, Peter Smith <[email protected]> wrote in \n> It is better to have a *single* point where these worker names are\n> defined, so then all output uses identical LR worker nomenclature.\n> \n> PSA a small patch to modify the code accordingly. This is not intended\n> to be a functional change - just a code cleanup.\n> \n> Thoughts?\n\nI generally like this direction when it actually decreases the number\nof translatable messages without making grepping on the tree\nexcessively difficult. However, in this case, the patch doesn't seems\nto reduce the translatable messages; instead, it makes grepping\ndifficult.\n\nAddition to that, I'm inclined to concur with Alvaro regarding the\ngramattical aspect.\n\nConsequently, I'd prefer to leave these alone.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 16 Jun 2023 10:43:27 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Consistent coding for the naming of LR workers"
},
{
"msg_contents": "Re: Alvaro's comment [1] \"From a translation standpoint, this doesn't\nseem good\".\n\nExcept, please note that there are already multiple message format\nstrings in the HEAD code that look like \"%s for subscription ...\",\nthat are using the get_worker_name() function for the name\nsubstitution.\n\ne.g.\n- \"%s for subscription \\\"%s\\\" will stop because the subscription was removed\"\n- \"%s for subscription \\\"%s\\\" will stop because the subscription was disabled\"\n- \"%s for subscription \\\"%s\\\" will restart because of a parameter change\"\n- \"%s for subscription %u will not start because the subscription was\nremoved during startup\"\n- \"%s for subscription \\\"%s\\\" will not start because the subscription\nwas disabled during startup\"\n- \"%s for subscription \\\"%s\\\" has started\"\n\nAnd many of my patch changes will result in a format string which has\nexactly that same pattern:\n\ne.g.\n- \"%s for subscription \\\"%s\\\" has finished\"\n- \"%s for subscription \\\"%s\\\", table \\\"%s\\\" has finished\"\n- \"%s for subscription \\\"%s\\\" will restart so that two_phase can be\nenabledworker\"\n- \"%s for subscription \\\"%s\\\" will stop\"\n- \"%s for subscription \\\"%s\\\" will stop because of a parameter change\"\n- \"%s for subscription \\\"%s\\\", table \\\"%s\\\" has started\"\n\nSo, I don't think it is fair to say that these format strings are OK\nfor the existing HEAD code, but not OK for the patch code, when they\nare both the same.\n\n~~\n\nOTOH, you are correct there are some more problematic messages (see\nbelow - one of these you cited) that are not using the same pattern:\n\ne.g.\n- \"lost connection to the %s\"\n- \"%s exited due to error\"\n- \"unrecognized message type received %s: %c (message length %d bytes)\"\n- \"lost connection to the %s\"\n- \"%s will serialize the remaining changes of remote transaction %u to a file\"\n- \"lost connection to the %s\"\n- \"defining savepoint %s in %s\"\n- \"rolling back to savepoint %s in %s\"\n\nIMO it will be an improvement for all-round consistency if those also\nget changed to use the similar pattern:\n\ne.g. \"lost connection to the %s\" --> \"%s for subscription \\\"%s\",\ncannot be contacted\"\ne.g. \"defining savepoint %s in %s\" --> \"%s for subscription \\\"%s\",\ndefining savepoint %s\"\ne.g. \"rolling back to savepoint %s in %s\" --> \"%s for subscription\n\\\"%s\", rolling back to savepoint %s\"\netc.\n\nThoughts?\n\n------\n\nRe: Horiguchi-san's comment [2] \"... instead, it makes grepping difficult.\"\n\nSorry, I didn't really understand how this patch makes grepping more\ndifficult. e.g. If you are grepping for any message about \"table\nsynchronization worker\" then you are currently hoping and relying that\nthere there are no differences in the wording of all the existing\nmessages. If something instead says \"tablesync worker\" you will\naccidentally overlook it.\n\nOTOH, this patch eliminates the guesswork and luck. In the example,\ngrepping for LR_WORKER_NAME_TABLESYNC will give you all the messages\nyou were looking for.\n\n------\n[1] Alvaro review comments -\nhttps://www.postgresql.org/message-id/20230615103759.bkkv226czkcnuork%40alvherre.pgsql\n[2] Horiguchi-san review comments -\nhttps://www.postgresql.org/message-id/20230616.104327.1878440413098623268.horikyota.ntt%40gmail.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Wed, 21 Jun 2023 12:31:46 +1000",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Consistent coding for the naming of LR workers"
},
{
"msg_contents": "On 2023-Jun-21, Peter Smith wrote:\n\n> Except, please note that there are already multiple message format\n> strings in the HEAD code that look like \"%s for subscription ...\",\n> that are using the get_worker_name() function for the name\n> substitution.\n> \n> e.g.\n> - \"%s for subscription \\\"%s\\\" will stop because the subscription was removed\"\n> - \"%s for subscription \\\"%s\\\" will stop because the subscription was disabled\"\n> - \"%s for subscription \\\"%s\\\" will restart because of a parameter change\"\n> - \"%s for subscription %u will not start because the subscription was\n> removed during startup\"\n> - \"%s for subscription \\\"%s\\\" will not start because the subscription\n> was disabled during startup\"\n> - \"%s for subscription \\\"%s\\\" has started\"\n\nThat is a terrible pattern in relatively new code. Let's get rid of it\nentirely rather than continue to propagate it.\n\n> So, I don't think it is fair to say that these format strings are OK\n> for the existing HEAD code, but not OK for the patch code, when they\n> are both the same.\n\nAgreed. Let's remove them all.\n\nBTW this is documented:\nhttps://www.postgresql.org/docs/15/nls-programmer.html#NLS-GUIDELINES\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"I suspect most samba developers are already technically insane...\nOf course, since many of them are Australians, you can't tell.\" (L. Torvalds)\n\n\n",
"msg_date": "Wed, 21 Jun 2023 09:18:45 +0200",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Consistent coding for the naming of LR workers"
},
{
"msg_contents": "On 21.06.23 09:18, Alvaro Herrera wrote:\n> That is a terrible pattern in relatively new code. Let's get rid of it\n> entirely rather than continue to propagate it.\n> \n>> So, I don't think it is fair to say that these format strings are OK\n>> for the existing HEAD code, but not OK for the patch code, when they\n>> are both the same.\n> \n> Agreed. Let's remove them all.\n\nThis is an open issue for PG16 translation. I propose the attached \npatch to fix this. Mostly, this just reverts to the previous wordings. \n(I don't think for these messages the difference between \"apply worker\" \nand \"parallel apply worker\" is all that interesting to explode the \nnumber of messages. AFAICT, the table sync worker case wasn't even \nused, since callers always handled it separately.)",
"msg_date": "Wed, 12 Jul 2023 13:34:56 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Consistent coding for the naming of LR workers"
},
{
"msg_contents": "On Wed, Jul 12, 2023 at 9:35 PM Peter Eisentraut <[email protected]> wrote:\n>\n> On 21.06.23 09:18, Alvaro Herrera wrote:\n> > That is a terrible pattern in relatively new code. Let's get rid of it\n> > entirely rather than continue to propagate it.\n> >\n> >> So, I don't think it is fair to say that these format strings are OK\n> >> for the existing HEAD code, but not OK for the patch code, when they\n> >> are both the same.\n> >\n> > Agreed. Let's remove them all.\n>\n> This is an open issue for PG16 translation. I propose the attached\n> patch to fix this. Mostly, this just reverts to the previous wordings.\n> (I don't think for these messages the difference between \"apply worker\"\n> and \"parallel apply worker\" is all that interesting to explode the\n> number of messages. AFAICT, the table sync worker case wasn't even\n> used, since callers always handled it separately.)\n\nI thought the get_worker_name function was reachable by tablesync workers also.\n\nSince ApplyWorkerMain is a common entry point for both leader apply\nworkers and tablesync workers, any logs in that code path could\npotentially be from either kind of worker. At least, when the function\nwas first introduced (around patch v43-0001? [1]) it was also\nreplacing some tablesync logs.\n\n------\n[1] v43-0001 - https://www.postgresql.org/message-id/OS0PR01MB5716E366874B253B58FC0A23943C9%40OS0PR01MB5716.jpnprd01.prod.outlook.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Thu, 13 Jul 2023 14:59:40 +1000",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Consistent coding for the naming of LR workers"
},
{
"msg_contents": "On 13.07.23 06:59, Peter Smith wrote:\n> On Wed, Jul 12, 2023 at 9:35 PM Peter Eisentraut <[email protected]> wrote:\n>>\n>> On 21.06.23 09:18, Alvaro Herrera wrote:\n>>> That is a terrible pattern in relatively new code. Let's get rid of it\n>>> entirely rather than continue to propagate it.\n>>>\n>>>> So, I don't think it is fair to say that these format strings are OK\n>>>> for the existing HEAD code, but not OK for the patch code, when they\n>>>> are both the same.\n>>>\n>>> Agreed. Let's remove them all.\n>>\n>> This is an open issue for PG16 translation. I propose the attached\n>> patch to fix this. Mostly, this just reverts to the previous wordings.\n>> (I don't think for these messages the difference between \"apply worker\"\n>> and \"parallel apply worker\" is all that interesting to explode the\n>> number of messages. AFAICT, the table sync worker case wasn't even\n>> used, since callers always handled it separately.)\n> \n> I thought the get_worker_name function was reachable by tablesync workers also.\n> \n> Since ApplyWorkerMain is a common entry point for both leader apply\n> workers and tablesync workers, any logs in that code path could\n> potentially be from either kind of worker. At least, when the function\n> was first introduced (around patch v43-0001? [1]) it was also\n> replacing some tablesync logs.\n\nI suppose we could just say \"logical replication worker\" in all cases. \nThat should be enough precision for the purpose of these messages.\n\n\n\n",
"msg_date": "Thu, 13 Jul 2023 09:07:15 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Consistent coding for the naming of LR workers"
},
{
"msg_contents": "On Thu, Jul 13, 2023 at 4:07 PM Peter Eisentraut <[email protected]> wrote:\n>\n> On 13.07.23 06:59, Peter Smith wrote:\n> > On Wed, Jul 12, 2023 at 9:35 PM Peter Eisentraut <[email protected]> wrote:\n> >>\n> >> On 21.06.23 09:18, Alvaro Herrera wrote:\n> >>> That is a terrible pattern in relatively new code. Let's get rid of it\n> >>> entirely rather than continue to propagate it.\n> >>>\n> >>>> So, I don't think it is fair to say that these format strings are OK\n> >>>> for the existing HEAD code, but not OK for the patch code, when they\n> >>>> are both the same.\n> >>>\n> >>> Agreed. Let's remove them all.\n> >>\n> >> This is an open issue for PG16 translation. I propose the attached\n> >> patch to fix this. Mostly, this just reverts to the previous wordings.\n> >> (I don't think for these messages the difference between \"apply worker\"\n> >> and \"parallel apply worker\" is all that interesting to explode the\n> >> number of messages. AFAICT, the table sync worker case wasn't even\n> >> used, since callers always handled it separately.)\n> >\n> > I thought the get_worker_name function was reachable by tablesync workers also.\n> >\n> > Since ApplyWorkerMain is a common entry point for both leader apply\n> > workers and tablesync workers, any logs in that code path could\n> > potentially be from either kind of worker. At least, when the function\n> > was first introduced (around patch v43-0001? [1]) it was also\n> > replacing some tablesync logs.\n>\n> I suppose we could just say \"logical replication worker\" in all cases.\n> That should be enough precision for the purpose of these messages.\n\n+1\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 13 Jul 2023 17:50:23 +0900",
"msg_from": "Masahiko Sawada <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Consistent coding for the naming of LR workers"
},
{
"msg_contents": "On 2023-Jul-13, Peter Eisentraut wrote:\n\n> I suppose we could just say \"logical replication worker\" in all cases. That\n> should be enough precision for the purpose of these messages.\n\nAgreed. IMO the user doesn't care about specifics.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Thu, 13 Jul 2023 11:29:05 +0200",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Consistent coding for the naming of LR workers"
},
{
"msg_contents": "On 13.07.23 11:29, Alvaro Herrera wrote:\n> On 2023-Jul-13, Peter Eisentraut wrote:\n> \n>> I suppose we could just say \"logical replication worker\" in all cases. That\n>> should be enough precision for the purpose of these messages.\n> \n> Agreed. IMO the user doesn't care about specifics.\n\nOk, committed.\n\n\n\n",
"msg_date": "Thu, 13 Jul 2023 13:27:33 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Consistent coding for the naming of LR workers"
}
] |
[
{
"msg_contents": "Currently, CREATE COLLATION always defaults the provider to libc.\n\nThe attached patch causes it to default to libc if LC_COLLATE/LC_CTYPE\nare specified, otherwise default to the current database default\ncollation's provider.\n\nThat way, the provider choice at initdb time then becomes the default\nfor \"CREATE DATABASE ... TEMPLATE template0\", which then becomes the\ndefault provider for \"CREATE COLLATION (LOCALE='...')\".\n\n\n-- \nJeff Davis\nPostgreSQL Contributor Team - AWS",
"msg_date": "Wed, 14 Jun 2023 21:47:52 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "[17] CREATE COLLATION default provider"
},
{
"msg_contents": "On Wed, Jun 14, 2023 at 9:48 PM Jeff Davis <[email protected]> wrote:\n>\n> Currently, CREATE COLLATION always defaults the provider to libc.\n>\n> The attached patch causes it to default to libc if LC_COLLATE/LC_CTYPE\n> are specified, otherwise default to the current database default\n> collation's provider.\n\n+ if (lccollateEl || lcctypeEl)\n+ collprovider = COLLPROVIDER_LIBC;\n+ else\n+ collprovider = default_locale.provider;\n\nThe docs for the CREATE COLLATION option 'locale' say: \"This is a\nshortcut for setting LC_COLLATE and LC_CTYPE at once.\"\n\nSo it's not intuitive why the check does not include a test for the\npresence of 'localeEl', as well? If we consider the presence of\nLC_COLLATE _or_ LC_CTYPE options to be a determining factor for some\ndecision, then the presence of LOCALE option should also lead to the\nsame outcome.\n\nOtherwise the patch looks good.\n\n> v11-0001-CREATE-COLLATION-default-provider.patch\n\nI believe v11 is a typo, and you really meant v1.\n\nBest regards,\nGurjeet\nhttp://Gurje.et\n\n\n",
"msg_date": "Sat, 17 Jun 2023 09:09:02 -0700",
"msg_from": "Gurjeet Singh <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [17] CREATE COLLATION default provider"
},
{
"msg_contents": "On Sat, 2023-06-17 at 09:09 -0700, Gurjeet Singh wrote:\n> The docs for the CREATE COLLATION option 'locale' say: \"This is a\n> shortcut for setting LC_COLLATE and LC_CTYPE at once.\"\n> \n> So it's not intuitive why the check does not include a test for the\n> presence of 'localeEl', as well? If we consider the presence of\n> LC_COLLATE _or_ LC_CTYPE options to be a determining factor for some\n> decision, then the presence of LOCALE option should also lead to the\n> same outcome.\n> \n\nThe docs say: \"If provider is libc, this is a shortcut...\". The point\nis that LC_COLLATE and LC_CTYPE act as a signal that what the user\nreally wants is a libc collation. LOCALE works for either, so we need a\ndefault.\n\nThat being said, I'm now having second thoughts about where that\ndefault should come from. While getting the default from datlocprovider\nis convenient, I'm not sure that the datlocprovider provides a good\nsignal. A lot of users will have datlocprovider=c and datcollate=C,\nwhich really means they want the built-in memcmp behavior, and to me\nthat doesn't signal that they want CREATE COLLATION to use libc for a\nnon-C locale.\n\nA GUC might be a better default, and we could have CREATE COLLATION\ndefault to ICU if the server is built with ICU and if PROVIDER,\nLC_COLLATE and LC_CTYPE are unspecified.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Fri, 07 Jul 2023 09:32:58 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [17] CREATE COLLATION default provider"
},
{
"msg_contents": "On Fri, Jul 7, 2023 at 9:33 AM Jeff Davis <[email protected]> wrote:\n>\n> On Sat, 2023-06-17 at 09:09 -0700, Gurjeet Singh wrote:\n> > The docs for the CREATE COLLATION option 'locale' say: \"This is a\n> > shortcut for setting LC_COLLATE and LC_CTYPE at once.\"\n> >\n> > So it's not intuitive why the check does not include a test for the\n> > presence of 'localeEl', as well? If we consider the presence of\n> > LC_COLLATE _or_ LC_CTYPE options to be a determining factor for some\n> > decision, then the presence of LOCALE option should also lead to the\n> > same outcome.\n> >\n>\n> The docs say: \"If provider is libc, this is a shortcut...\". The point\n> is that LC_COLLATE and LC_CTYPE act as a signal that what the user\n> really wants is a libc collation. LOCALE works for either, so we need a\n> default.\n\nSorry about the noise, I was consulting current/v15 docs online. Now\nthat v16 docs are online, I can see that the option in fact says this\nis the case only if libc is the provider.\n\n(note to self: for reviewing patches to master, consult devel docs [1] online)\n\n[1]: https://www.postgresql.org/docs/devel/\n\nBest regards,\nGurjeet\nhttp://Gurje.et\n\n\n",
"msg_date": "Fri, 7 Jul 2023 10:44:53 -0700",
"msg_from": "Gurjeet Singh <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [17] CREATE COLLATION default provider"
},
{
"msg_contents": "On 15.06.23 06:47, Jeff Davis wrote:\n> Currently, CREATE COLLATION always defaults the provider to libc.\n> \n> The attached patch causes it to default to libc if LC_COLLATE/LC_CTYPE\n> are specified, otherwise default to the current database default\n> collation's provider.\n> \n> That way, the provider choice at initdb time then becomes the default\n> for \"CREATE DATABASE ... TEMPLATE template0\", which then becomes the\n> default provider for \"CREATE COLLATION (LOCALE='...')\".\n\nI like the general idea. If the user has selected ICU overall, it could \nbe sensible that certain commands default to ICU.\n\nI wonder, however, how useful this would be in practice. In most \ninteresting cases, you need to know what the provider is to be able to \nspell out the locale name appropriately. The cases where some overlap \nexists, like the common \"ll_CC\", are already preloaded, so won't \nactually need to be specified explicitly in many cases.\n\nAlso, I think the default should only flow one way, top-down: The \ndefault provider of CREATE COLLATION is datlocprovider. There shouldn't \nbe a second, botton-up way, based on the other specified CREATE \nCOLLATION parameters. That's just too much logical zig-zag, IMO. \nOtherwise, if you extend this locally, why not also look at if \n\"deterministic\" or \"rules\" was specified, etc.\n\n\n",
"msg_date": "Thu, 18 Jan 2024 11:15:31 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [17] CREATE COLLATION default provider"
},
{
"msg_contents": "On Thu, 2024-01-18 at 11:15 +0100, Peter Eisentraut wrote:\n> I wonder, however, how useful this would be in practice. In most \n> interesting cases, you need to know what the provider is to be able\n> to \n> spell out the locale name appropriately. The cases where some\n> overlap \n> exists, like the common \"ll_CC\", are already preloaded, so won't \n> actually need to be specified explicitly in many cases.\n\nGood point.\n\n> Also, I think the default should only flow one way, top-down: The \n> default provider of CREATE COLLATION is datlocprovider. There\n> shouldn't \n> be a second, botton-up way, based on the other specified CREATE \n> COLLATION parameters. That's just too much logical zig-zag, IMO. \n\nAlso a good point. I am withdrawing this patch from the CF, and we can\nreconsider the idea later perhaps in some other form.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Sun, 21 Jan 2024 11:07:15 -0800",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [17] CREATE COLLATION default provider"
}
] |
[
{
"msg_contents": "Hi,\n\nCurrently, only one PG_WAIT_EXTENSION event can be used as a\nwait event for extensions. Therefore, in environments with multiple\nextensions are installed, it could take time to identify which\nextension is the bottleneck.\n\nSo, I'd like to support new APIs to define custom wait events\nfor extensions. It's discussed in [1].\n\nI made patches to realize it. Although I have some TODOs,\nI want to know your feedbacks. Please feel free to comment.\n\n\n# Implementation of new APIs\n\nI implemented 2 new APIs for extensions.\n* RequestNamedExtensionWaitEventTranche()\n* GetNamedExtensionWaitEventTranche()\n\nExtensions can request custom wait events by calling\nRequestNamedExtensionWaitEventTranche(). After that, use\nGetNamedExtensionWaitEventTranche() to get the wait event information.\n\nThe APIs usage example by extensions are following.\n\n```\nshmem_request_hook = shmem_request;\nshmem_startup_hook = shmem_startup;\n\nstatic void\nshmem_request(void)\n{\n\t/* request a custom wait event */\n\tRequestNamedExtensionWaitEventTranche(\"custom_wait_event\");\n}\n\nstatic void\nshmem_startup(void)\n{\n\t/* get the wait event information */\n\tcustom_wait_event = \nGetNamedExtensionWaitEventTranche(\"custom_wait_event\");\n}\n\nvoid\nextension_funtion()\n{\n\t(void) WaitLatch(MyLatch,\n\t\t\t\t\t WL_LATCH_SET | WL_TIMEOUT | WL_EXIT_ON_PM_DEATH,\n\t\t\t\t\t 10L * 1000,\n\t\t\t\t\t custom_wait_event); /* notify core with custom wait event */\n\tResetLatch(MyLatch);\n}\n```\n\n\n# Implementation overview\n\nI referenced the implementation of\nRequestNamedLWLockTranche()/GetNamedLWLockTranche().\n(0001-Support-to-define-custom-wait-events-for-extensions.patch)\n\nExtensions calls RequestNamedExtensionWaitEventTranche() in\nshmem_request_hook() to request wait events to be used by each \nextension.\n\nIn the core, the requested wait events are dynamically registered in \nshared\nmemory. The extension then obtains the wait event information with\nGetNamedExtensionWaitEventTranche() and uses the value to notify the \ncore\nthat it is waiting.\n\nWhen a string representing of the wait event is requested,\nit returns the name defined by calling \nRequestNamedExtensionWaitEventTranche().\n\n\n# PoC extension\n\nI created the PoC extension and you can use it, as shown here:\n(0002-Add-a-extension-to-test-custom-wait-event.patch)\n\n1. start PostgreSQL with the following configuration\nshared_preload_libraries = 'inject_wait_event'\n\n2. check wait events periodically\npsql-1=# SELECT query, wait_event_type, wait_event FROM pg_stat_activity \nWHERE backend_type = 'client backend' AND pid != pg_backend_pid() ;\npsql-1=# \\watch\n\n3. execute a function to inject a wait event\npsql-2=# CREATE EXTENSION inject_wait_event;\npsql-2=# SELECT inject_wait_event();\n\n4. check the custom wait event\nYou can see the following results of psql-1.\n\n(..snip..)\n\n query | wait_event_type | wait_event\n-----------------------------+-----------------+------------\n SELECT inject_wait_event(); | Extension | Extension\n(1 row)\n\n(..snip..)\n\n(...about 10 seconds later ..)\n\n\n query | wait_event_type | wait_event\n-----------------------------+-----------------+-------------------\n SELECT inject_wait_event(); | Extension | custom_wait_event \n # requested wait event by the extension!\n(1 row)\n\n(..snip..)\n\n\n# TODOs\n\n* tests on windows (since I tested on Ubuntu 20.04 only)\n* add custom wait events for existing contrib modules (ex. 
postgres_fdw)\n* add regression code (but, it seems to be difficult)\n* others? (Please let me know)\n\n\n[1] \nhttps://www.postgresql.org/message-id/81290a48-b25c-22a5-31a6-3feff5864fe3%40gmail.com\n\nRegards,\n\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION",
"msg_date": "Thu, 15 Jun 2023 15:06:01 +0900",
"msg_from": "Masahiro Ikeda <[email protected]>",
"msg_from_op": true,
"msg_subject": "Support to define custom wait events for extensions"
},
{
"msg_contents": "On Thu, Jun 15, 2023 at 03:06:01PM +0900, Masahiro Ikeda wrote:\n> Currently, only one PG_WAIT_EXTENSION event can be used as a\n> wait event for extensions. Therefore, in environments with multiple\n> extensions are installed, it could take time to identify which\n> extension is the bottleneck.\n\nThanks for taking the time to implement a patch to do that.\n\n> I want to know your feedbacks. Please feel free to comment.\n\nI think that's been cruelly missed.\n\n> In the core, the requested wait events are dynamically registered in shared\n> memory. The extension then obtains the wait event information with\n> GetNamedExtensionWaitEventTranche() and uses the value to notify the core\n> that it is waiting.\n> \n> When a string representing of the wait event is requested,\n> it returns the name defined by calling\n> RequestNamedExtensionWaitEventTranche().\n\nSo this implements the equivalent of RequestNamedLWLockTranche()\nfollowed by GetNamedLWLockTranche() to get the wait event number,\nwhich can be used only during postmaster startup. Do you think that\nwe could focus on implementing something more flexible instead, that\ncan be used dynamically as well as statically? That would be similar\nto LWLockNewTrancheId() and LWLockRegisterTranche(), actually, where\nwe would get one or more tranche IDs, then do initialization actions\nin shmem based on the tranche ID(s).\n\n> 4. check the custom wait event\n> You can see the following results of psql-1.\n> \n> query | wait_event_type | wait_event\n> -----------------------------+-----------------+-------------------\n> SELECT inject_wait_event(); | Extension | custom_wait_event #\n> requested wait event by the extension!\n> (1 row)\n> \n> (..snip..)\n\nA problem with this approach is that it is expensive as a test. Do we\nreally need one? There are three places that set PG_WAIT_EXTENSION in\nsrc/test/modules/, more in /contrib, and there are modules like\npg_stat_statements that could gain from events for I/O operations, for\nexample.\n\n> # TODOs\n> \n> * tests on windows (since I tested on Ubuntu 20.04 only)\n> * add custom wait events for existing contrib modules (ex. postgres_fdw)\n> * add regression code (but, it seems to be difficult)\n> * others? (Please let me know)\n\nHmm. You would need to maintain a state in a rather stable manner,\nand SQL queries can make that difficult in the TAP tests as the wait\nevent information is reset each time a query finishes. One area where\nI think this gets easier is with a background worker loaded with\nshared_preload_libraries that has a configurable naptime. Looking at\nwhat's available in the tree, the TAP tests of pg_prewarm could use\none test on pg_stat_activity with a custom wait event name\n(pg_prewarm.autoprewarm_interval is 0 hence the bgworker waits\ninfinitely). Note that in this case, you would need to be careful of\nthe case where the wait event is loaded dynamically, but like LWLocks\nthis should be able to work as well?\n--\nMichael",
"msg_date": "Thu, 15 Jun 2023 17:00:18 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support to define custom wait events for extensions"
},
{
"msg_contents": "Hi,\n\nOn 6/15/23 10:00 AM, Michael Paquier wrote:\n> On Thu, Jun 15, 2023 at 03:06:01PM +0900, Masahiro Ikeda wrote:\n>> Currently, only one PG_WAIT_EXTENSION event can be used as a\n>> wait event for extensions. Therefore, in environments with multiple\n>> extensions are installed, it could take time to identify which\n>> extension is the bottleneck.\n> \n> Thanks for taking the time to implement a patch to do that.\n\n+1 thanks for it!\n\n> \n>> I want to know your feedbacks. Please feel free to comment.\n> \n> I think that's been cruelly missed.\n\nYeah, that would clearly help to diagnose which extension(s) is/are causing the waits (if any).\n\nI did not look at the code yet (will do) but just wanted to chime in to support the idea.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 15 Jun 2023 15:21:50 +0200",
"msg_from": "\"Drouvot, Bertrand\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support to define custom wait events for extensions"
},
{
"msg_contents": "We had this on our list of things to do at Neon, so it is a nice\nsurprise that you brought up an initial patchset :). It was also my\nfirst time looking up the word tranche.\n\n From 59a118402e5e59685fb9e0fb086872e25a405736 Mon Sep 17 00:00:00 2001\nFrom: Masahiro Ikeda <[email protected]>\nDate: Thu, 15 Jun 2023 12:57:29 +0900\nSubject: [PATCH 2/3] Support to define custom wait events for extensions.\n\n> Currently, only one PG_WAIT_EXTENSION event can be used as a\n> wait event for extensions. Therefore, in environments with multiple\n> extensions are installed, it could take time to identify bottlenecks.\n\n\"extensions are installed\" should be \"extensions installed\".\n\n> +#define NUM_BUILDIN_WAIT_EVENT_EXTENSION \\\n> + (WAIT_EVENT_EXTENSION_FIRST_USER_DEFINED - WAIT_EVENT_EXTENSION)\n\nShould that be NUM_BUILTIN_WAIT_EVENT_EXTENSION?\n\n> + NamedExtensionWaitEventTrancheRequestArray = (NamedExtensionWaitEventTrancheRequest *)\n> + MemoryContextAlloc(TopMemoryContext,\n> + NamedExtensionWaitEventTrancheRequestsAllocated\n> + * sizeof(NamedExtensionWaitEventTrancheRequest));\n\nI can't tell from reading other Postgres code when one should cast the\nreturn value of MemoryContextAlloc(). Seems unnecessary to me.\n\n> + if (NamedExtensionWaitEventTrancheRequestArray == NULL)\n> + {\n> + NamedExtensionWaitEventTrancheRequestsAllocated = 16;\n> + NamedExtensionWaitEventTrancheRequestArray = (NamedExtensionWaitEventTrancheRequest *)\n> + MemoryContextAlloc(TopMemoryContext,\n> + NamedExtensionWaitEventTrancheRequestsAllocated\n> + * sizeof(NamedExtensionWaitEventTrancheRequest));\n> + }\n> +\n> + if (NamedExtensionWaitEventTrancheRequests >= NamedExtensionWaitEventTrancheRequestsAllocated)\n> + {\n> + int i = pg_nextpower2_32(NamedExtensionWaitEventTrancheRequests + 1);\n> +\n> + NamedExtensionWaitEventTrancheRequestArray = (NamedExtensionWaitEventTrancheRequest *)\n> + repalloc(NamedExtensionWaitEventTrancheRequestArray,\n> + i * sizeof(NamedExtensionWaitEventTrancheRequest));\n> + NamedExtensionWaitEventTrancheRequestsAllocated = i;\n> + }\n\nDo you think this code would look better in an if/else if?\n\n> + int i = pg_nextpower2_32(NamedExtensionWaitEventTrancheRequests + 1);\n\nIn the Postgres codebase, is an int always guaranteed to be at least 32\nbits? I feel like a fixed-width type would be better for tracking the\nlength of the array, unless Postgres prefers the Size type.\n\n> + Assert(strlen(tranche_name) + 1 <= NAMEDATALEN);\n> + strlcpy(request->tranche_name, tranche_name, NAMEDATALEN);\n\nA sizeof(request->tranche_name) would keep this code more in-sync if\nsize of tranche_name were to ever change, though I see sizeof\nexpressions in the codebase are not too common. Maybe just remove the +1\nand make it less than rather than a less than or equal? Seems like it\nmight be worth noting in the docs of the function that the event name\nhas to be less than NAMEDATALEN, but maybe this is something extension\nauthors are inherently aware of?\n\n---\n\nWhat's the Postgres policy on the following?\n\nfor (int i = 0; ...)\nfor (i = 0; ...)\n\nYou are using 2 different patterns in WaitEventShmemInit() and\nInitializeExtensionWaitEventTranches().\n\n> + /*\n> + * Copy the info about any named tranches into shared memory (so that\n> + * other processes can see it), and initialize the requested wait events.\n> + */\n> + if (NamedExtensionWaitEventTrancheRequests > 0)\n\nRemoving this if would allow one less indentation level. 
Nothing would\nhave to change about the containing code either since the for loop will\nthen not run\n\n> + ExtensionWaitEventCounter = (int *) ((char *) NamedExtensionWaitEventTrancheArray - sizeof(int));\n\n From 65e25d4b27bbb6d0934872310c24ee19f89e9631 Mon Sep 17 00:00:00 2001\nFrom: Masahiro Ikeda <[email protected]>\nDate: Thu, 15 Jun 2023 13:16:00 +0900\nSubject: [PATCH 3/3] Add docs to define custom wait events\n\n> + <para>\n> + wait events are reserved by calling:\n> +<programlisting>\n> +void RequestNamedExtensionWaitEventTranche(const char *tranche_name)\n> +</programlisting>\n> + from your <literal>shmem_request_hook</literal>. This will ensure that\n> + wait event is available under the name <literal>tranche_name</literal>,\n> + which the wait event type is <literal>Extension</literal>.\n> + Use <function>GetNamedExtensionWaitEventTranche</function>\n> + to get a wait event information.\n> + </para>\n> + <para>\n> + To avoid possible race-conditions, each backend should use the LWLock\n> + <function>AddinShmemInitLock</function> when connecting to and initializing\n> + its allocation of shared memory, same as LWLocks reservations above.\n> + </para>\n\nShould \"wait\" be capitalized in the first sentence?\n\n\"This will ensure that wait event is available\" should have an \"a\"\nbefore \"wait\".\n\nNice patch.\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Thu, 15 Jun 2023 11:13:57 -0500",
"msg_from": "\"Tristan Partin\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support to define custom wait events for extensions"
},
{
"msg_contents": "On Thu, Jun 15, 2023 at 11:13:57AM -0500, Tristan Partin wrote:\n> What's the Postgres policy on the following?\n> \n> for (int i = 0; ...)\n> for (i = 0; ...)\n> \n> You are using 2 different patterns in WaitEventShmemInit() and\n> InitializeExtensionWaitEventTranches().\n\nC99 style is OK since v12, so the style of the patch is fine. The\nolder style has no urgent need to change, either. One argument to let\nthe code as-is is that it could generate backpatching conflicts, while\nit does not hurt as it stands. This also means that bug fixes that\nneed to be applied down to 12 would be able to use C99 declarations\nfreely without some of the buildfarm animals running REL_11_STABLE\ncomplaining. I have fallen into this trap recently, actually. See\ndbd25dd.\n--\nMichael",
"msg_date": "Fri, 16 Jun 2023 08:02:05 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support to define custom wait events for extensions"
},
{
"msg_contents": "Thanks for replying and your kind advice!\n\nOn 2023-06-15 17:00, Michael Paquier wrote:\n> On Thu, Jun 15, 2023 at 03:06:01PM +0900, Masahiro Ikeda wrote:\n>> In the core, the requested wait events are dynamically registered in \n>> shared\n>> memory. The extension then obtains the wait event information with\n>> GetNamedExtensionWaitEventTranche() and uses the value to notify the \n>> core\n>> that it is waiting.\n>> \n>> When a string representing of the wait event is requested,\n>> it returns the name defined by calling\n>> RequestNamedExtensionWaitEventTranche().\n> \n> So this implements the equivalent of RequestNamedLWLockTranche()\n> followed by GetNamedLWLockTranche() to get the wait event number,\n> which can be used only during postmaster startup. Do you think that\n> we could focus on implementing something more flexible instead, that\n> can be used dynamically as well as statically? That would be similar\n> to LWLockNewTrancheId() and LWLockRegisterTranche(), actually, where\n> we would get one or more tranche IDs, then do initialization actions\n> in shmem based on the tranche ID(s).\n\nOK, I agree. I'll make a patch to only support\nExtensionWaitEventNewTrancheId() and ExtensionWaitEventRegisterTranche()\nsimilar to LWLockNewTrancheId() and LWLockRegisterTranche().\n\n>> 4. check the custom wait event\n>> You can see the following results of psql-1.\n>> \n>> query | wait_event_type | wait_event\n>> -----------------------------+-----------------+-------------------\n>> SELECT inject_wait_event(); | Extension | custom_wait_event \n>> #\n>> requested wait event by the extension!\n>> (1 row)\n>> \n>> (..snip..)\n> \n> A problem with this approach is that it is expensive as a test. Do we\n> really need one? There are three places that set PG_WAIT_EXTENSION in\n> src/test/modules/, more in /contrib, and there are modules like\n> pg_stat_statements that could gain from events for I/O operations, for\n> example.\n\nYes. Since it's hard to test, I thought the PoC extension\nshould not be committed. But, I couldn't figure out the best\nway to test yet.\n\n>> # TODOs\n>> \n>> * tests on windows (since I tested on Ubuntu 20.04 only)\n>> * add custom wait events for existing contrib modules (ex. \n>> postgres_fdw)\n>> * add regression code (but, it seems to be difficult)\n>> * others? (Please let me know)\n> \n> Hmm. You would need to maintain a state in a rather stable manner,\n> and SQL queries can make that difficult in the TAP tests as the wait\n> event information is reset each time a query finishes. One area where\n> I think this gets easier is with a background worker loaded with\n> shared_preload_libraries that has a configurable naptime. Looking at\n> what's available in the tree, the TAP tests of pg_prewarm could use\n> one test on pg_stat_activity with a custom wait event name\n> (pg_prewarm.autoprewarm_interval is 0 hence the bgworker waits\n> infinitely). Note that in this case, you would need to be careful of\n> the case where the wait event is loaded dynamically, but like LWLocks\n> this should be able to work as well?\n\nThanks for your advice!\n\nI tried to query on pg_stat_activity to check the background worker\ninvoked by pg_prewarm. But, I found that pg_stat_activity doesn't show\nit although I may be missing something...\n\nSo, I tried to implement TAP tests. 
But I have a problem with it.\nI couldn't find the way to check the status of another backend\nwhile the another backend wait with custom wait events.\n\n```\n# TAP test I've implemented.\n\n# wait forever with custom wait events in session1\n$session1->query_safe(\"SELECT test_custom_wait_events_wait()\");\n\n# I want to check the wait event from another backend process\n# But, the following code is never reached because the above\n# query is waiting forever.\n$session2->poll_query_until('postgres',\n\tqq[SELECT\n\t\t(SELECT count(*) FROM pg_stat_activity\n\t \t\tWHERE query ~ '^SELECT test_custom_wait_events_wait'\n\t\t\t AND wait_event_type = 'Extension'\n\t\t\t AND wait_event = 'custom_wait_event'\n\t\t) > 0;]);\n```\n\nIf I'm missing something or you have any idea,\nplease let me know.\n\nNow, I plan to\n\n* find out more the existing tests to check wait events and locks\n (though I have already checked a little, but I couldn't find it)\n* find another way to check wait event of the background worker invoked \nby extension\n* look up the reason why pg_stat_activity doesn't show the background \nworker\n* find a way to implement async queries in TAP tests\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Fri, 16 Jun 2023 11:14:05 +0900",
"msg_from": "Masahiro Ikeda <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Support to define custom wait events for extensions"
},
{
"msg_contents": "On 2023-06-15 22:21, Drouvot, Bertrand wrote:\n> Hi,\n> \n> On 6/15/23 10:00 AM, Michael Paquier wrote:\n>> On Thu, Jun 15, 2023 at 03:06:01PM +0900, Masahiro Ikeda wrote:\n>>> Currently, only one PG_WAIT_EXTENSION event can be used as a\n>>> wait event for extensions. Therefore, in environments with multiple\n>>> extensions are installed, it could take time to identify which\n>>> extension is the bottleneck.\n>> \n>> Thanks for taking the time to implement a patch to do that.\n> \n> +1 thanks for it!\n> \n>> \n>>> I want to know your feedbacks. Please feel free to comment.\n>> \n>> I think that's been cruelly missed.\n> \n> Yeah, that would clearly help to diagnose which extension(s) is/are\n> causing the waits (if any).\n> \n> I did not look at the code yet (will do) but just wanted to chime in\n> to support the idea.\n\nGreat! Thanks.\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Fri, 16 Jun 2023 11:17:37 +0900",
"msg_from": "Masahiro Ikeda <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Support to define custom wait events for extensions"
},
{
"msg_contents": "On 2023-06-16 01:13, Tristan Partin wrote:\n> We had this on our list of things to do at Neon, so it is a nice\n> surprise that you brought up an initial patchset :). It was also my\n> first time looking up the word tranche.\n\nWhat a coincidence! I came up with the idea when I used Neon with\npostgres_fdw. As a Neon user, I also feel the feature is important.\n\nSame as you. Thanks to Michael and Drouvot, I got to know the word \ntranche\nand the related existing code.\n\n> From 59a118402e5e59685fb9e0fb086872e25a405736 Mon Sep 17 00:00:00 2001\n> From: Masahiro Ikeda <[email protected]>\n> Date: Thu, 15 Jun 2023 12:57:29 +0900\n> Subject: [PATCH 2/3] Support to define custom wait events for \n> extensions.\n> \n>> Currently, only one PG_WAIT_EXTENSION event can be used as a\n>> wait event for extensions. Therefore, in environments with multiple\n>> extensions are installed, it could take time to identify bottlenecks.\n> \n> \"extensions are installed\" should be \"extensions installed\".\n> \n>> +#define NUM_BUILDIN_WAIT_EVENT_EXTENSION \\\n>> + (WAIT_EVENT_EXTENSION_FIRST_USER_DEFINED - \n>> WAIT_EVENT_EXTENSION)\n> \n> Should that be NUM_BUILTIN_WAIT_EVENT_EXTENSION?\n\nThanks for your comments.\nYes, I'll fix it.\n\n>> + NamedExtensionWaitEventTrancheRequestArray = \n>> (NamedExtensionWaitEventTrancheRequest *)\n>> + MemoryContextAlloc(TopMemoryContext,\n>> + \n>> NamedExtensionWaitEventTrancheRequestsAllocated\n>> + * \n>> sizeof(NamedExtensionWaitEventTrancheRequest));\n> \n> I can't tell from reading other Postgres code when one should cast the\n> return value of MemoryContextAlloc(). Seems unnecessary to me.\n\nI referenced RequestNamedLWLockTranche() and it looks ok to me.\n\n```\nvoid\nRequestNamedLWLockTranche(const char *tranche_name, int num_lwlocks)\n\t\tNamedLWLockTrancheRequestArray = (NamedLWLockTrancheRequest *)\n\t\t\tMemoryContextAlloc(TopMemoryContext,\n\t\t\t\t\t\t\t NamedLWLockTrancheRequestsAllocated\n\t\t\t\t\t\t\t * sizeof(NamedLWLockTrancheRequest));\n```\n\n>> + if (NamedExtensionWaitEventTrancheRequestArray == NULL)\n>> + {\n>> + NamedExtensionWaitEventTrancheRequestsAllocated = 16;\n>> + NamedExtensionWaitEventTrancheRequestArray = \n>> (NamedExtensionWaitEventTrancheRequest *)\n>> + MemoryContextAlloc(TopMemoryContext,\n>> + \n>> NamedExtensionWaitEventTrancheRequestsAllocated\n>> + * \n>> sizeof(NamedExtensionWaitEventTrancheRequest));\n>> + }\n>> +\n>> + if (NamedExtensionWaitEventTrancheRequests >= \n>> NamedExtensionWaitEventTrancheRequestsAllocated)\n>> + {\n>> + int i = \n>> pg_nextpower2_32(NamedExtensionWaitEventTrancheRequests + 1);\n>> +\n>> + NamedExtensionWaitEventTrancheRequestArray = \n>> (NamedExtensionWaitEventTrancheRequest *)\n>> + \n>> repalloc(NamedExtensionWaitEventTrancheRequestArray,\n>> + i * \n>> sizeof(NamedExtensionWaitEventTrancheRequest));\n>> + NamedExtensionWaitEventTrancheRequestsAllocated = i;\n>> + }\n> \n> Do you think this code would look better in an if/else if?\n\nSame as above. I referenced RequestNamedLWLockTranche().\nI don't know if it's a good idea, but it's better to refactor the\nexisting code separately from this patch.\n\nBut I plan to remove the code to focus implementing dynamic allocation\nsimilar to LWLockNewTrancheId() and LWLockRegisterTranche() as\nMichael's suggestion. I think it's good idea as a first step. 
Is it ok \nfor you?\n\n>> + int i = \n>> pg_nextpower2_32(NamedExtensionWaitEventTrancheRequests + 1);\n> \n> In the Postgres codebase, is an int always guaranteed to be at least 32\n> bits? I feel like a fixed-width type would be better for tracking the\n> length of the array, unless Postgres prefers the Size type.\n\nSame as above.\n\n>> + Assert(strlen(tranche_name) + 1 <= NAMEDATALEN);\n>> + strlcpy(request->tranche_name, tranche_name, NAMEDATALEN);\n> \n> A sizeof(request->tranche_name) would keep this code more in-sync if\n> size of tranche_name were to ever change, though I see sizeof\n> expressions in the codebase are not too common. Maybe just remove the \n> +1\n> and make it less than rather than a less than or equal? Seems like it\n> might be worth noting in the docs of the function that the event name\n> has to be less than NAMEDATALEN, but maybe this is something extension\n> authors are inherently aware of?\n\nSame as above.\n\n> ---\n> \n> What's the Postgres policy on the following?\n> \n> for (int i = 0; ...)\n> for (i = 0; ...)\n> You are using 2 different patterns in WaitEventShmemInit() and\n> InitializeExtensionWaitEventTranches().\n\nI didn't care it. I'll unify it.\nMichael's replay is interesting.\n\n>> + /*\n>> + * Copy the info about any named tranches into shared memory \n>> (so that\n>> + * other processes can see it), and initialize the requested \n>> wait events.\n>> + */\n>> + if (NamedExtensionWaitEventTrancheRequests > 0)\n> \n> Removing this if would allow one less indentation level. Nothing would\n> have to change about the containing code either since the for loop will\n> then not run\n\nThanks, but I plan to remove to focus implementing dynamic allocation.\n\n>> + ExtensionWaitEventCounter = (int *) ((char *) \n>> NamedExtensionWaitEventTrancheArray - sizeof(int));\n> \n> From 65e25d4b27bbb6d0934872310c24ee19f89e9631 Mon Sep 17 00:00:00 2001\n> From: Masahiro Ikeda <[email protected]>\n> Date: Thu, 15 Jun 2023 13:16:00 +0900\n> Subject: [PATCH 3/3] Add docs to define custom wait events\n> \n>> + <para>\n>> + wait events are reserved by calling:\n>> +<programlisting>\n>> +void RequestNamedExtensionWaitEventTranche(const char *tranche_name)\n>> +</programlisting>\n>> + from your <literal>shmem_request_hook</literal>. This will \n>> ensure that\n>> + wait event is available under the name \n>> <literal>tranche_name</literal>,\n>> + which the wait event type is <literal>Extension</literal>.\n>> + Use <function>GetNamedExtensionWaitEventTranche</function>\n>> + to get a wait event information.\n>> + </para>\n>> + <para>\n>> + To avoid possible race-conditions, each backend should use the \n>> LWLock\n>> + <function>AddinShmemInitLock</function> when connecting to and \n>> initializing\n>> + its allocation of shared memory, same as LWLocks reservations \n>> above.\n>> + </para>\n> \n> Should \"wait\" be capitalized in the first sentence?\n\nYes, I'll fix it\n\n> \"This will ensure that wait event is available\" should have an \"a\"\n> before \"wait\".\n\nYes, I'll fix it\n\n> Nice patch.\n\nThanks for your comments too.\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Fri, 16 Jun 2023 11:49:45 +0900",
"msg_from": "Masahiro Ikeda <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Support to define custom wait events for extensions"
},
{
"msg_contents": "On Fri, Jun 16, 2023 at 11:14:05AM +0900, Masahiro Ikeda wrote:\n> I tried to query on pg_stat_activity to check the background worker\n> invoked by pg_prewarm. But, I found that pg_stat_activity doesn't show\n> it although I may be missing something...\n> \n> So, I tried to implement TAP tests. But I have a problem with it.\n> I couldn't find the way to check the status of another backend\n> while the another backend wait with custom wait events.\n\nHmm. Right. It seems to me that BGWORKER_BACKEND_DATABASE_CONNECTION\nis required in this case, with BackgroundWorkerInitializeConnection()\nto connect to a database (or not, like the logical replication\nlauncher if only access to shared catalogs is wanted).\n\nI have missed that the leader process of pg_prewarm does not use that,\nbecause it has no need to connect to a database, but its workers do.\nSo it is not going to show up in pg_stat_activity.\n--\nMichael",
"msg_date": "Fri, 16 Jun 2023 16:46:33 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support to define custom wait events for extensions"
},
{
"msg_contents": "On 2023-06-16 16:46, Michael Paquier wrote:\n> On Fri, Jun 16, 2023 at 11:14:05AM +0900, Masahiro Ikeda wrote:\n>> I tried to query on pg_stat_activity to check the background worker\n>> invoked by pg_prewarm. But, I found that pg_stat_activity doesn't show\n>> it although I may be missing something...\n>> \n>> So, I tried to implement TAP tests. But I have a problem with it.\n>> I couldn't find the way to check the status of another backend\n>> while the another backend wait with custom wait events.\n> \n> Hmm. Right. It seems to me that BGWORKER_BACKEND_DATABASE_CONNECTION\n> is required in this case, with BackgroundWorkerInitializeConnection()\n> to connect to a database (or not, like the logical replication\n> launcher if only access to shared catalogs is wanted).\n> \n> I have missed that the leader process of pg_prewarm does not use that,\n> because it has no need to connect to a database, but its workers do.\n> So it is not going to show up in pg_stat_activity.\n\nYes. Thanks to your advice, I understood that\nBGWORKER_BACKEND_DATABASE_CONNECTION is the reason.\n\nI could make the TAP test that invokes a background worker waiting \nforever\nand checks its custom wait event in pg_stat_activity. So, I'll make \npatches\nincluding test codes next week.\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Fri, 16 Jun 2023 20:44:53 +0900",
"msg_from": "Masahiro Ikeda <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Support to define custom wait events for extensions"
},
{
"msg_contents": "I will take a look at your V2 when it is ready! I will also pass along\nthat this is wanted by Neon customers :).\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Fri, 16 Jun 2023 11:16:13 -0500",
"msg_from": "\"Tristan Partin\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support to define custom wait events for extensions"
},
{
"msg_contents": "On 2023/06/17 1:16, Tristan Partin wrote:\n> I will take a look at your V2 when it is ready! I will also pass along\n> that this is wanted by Neon customers :).\nThanks!\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Sat, 17 Jun 2023 02:44:37 +0900",
"msg_from": "Masahiro Ikeda <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Support to define custom wait events for extensions"
},
{
"msg_contents": "Hi,\n\nI updated the patches. The main changes are\n* to support only dynamic wait event allocation\n* to add a regression test\nI appreciate any feedback.\n\nThe followings are TODO items.\n* to check that meson.build works since I tested with old command `make` \nnow\n* to make documents\n* to add custom wait events for existing contrib modules (ex. \npostgres_fdw)\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION",
"msg_date": "Tue, 20 Jun 2023 18:26:48 +0900",
"msg_from": "Masahiro Ikeda <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Support to define custom wait events for extensions"
},
{
"msg_contents": "On 2023-06-20 18:26, Masahiro Ikeda wrote:\n> The followings are TODO items.\n> * to check that meson.build works since I tested with old command \n> `make` now\n\nI test with meson and I updated the patches to work with it.\nMy test procedure is the following.\n\n```\nexport builddir=/mnt/tmp/build\nexport prefix=/mnt/tmp/master\n\n# setup\nmeson setup $builddir --prefix=$prefix -Ddebug=true -Dcassert=true \n-Dtap_tests=enabled\n\n# build and install with src/test/modules\nninja -C $builddir install install-test-files\n\n# test\nmeson test -v -C $builddir\nmeson test -v -C $builddir --suite test_custom_wait_events # run the \ntest only\n```\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION",
"msg_date": "Thu, 22 Jun 2023 12:06:04 +0900",
"msg_from": "Masahiro Ikeda <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Support to define custom wait events for extensions"
},
{
"msg_contents": "Hi,\n\nI updated the patches to handle the warning mentioned\nby PostgreSQL Patch Tester, and removed unnecessary spaces.\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION",
"msg_date": "Fri, 23 Jun 2023 17:56:26 +0900",
"msg_from": "Masahiro Ikeda <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Support to define custom wait events for extensions"
},
{
"msg_contents": "On Fri, Jun 23, 2023 at 05:56:26PM +0900, Masahiro Ikeda wrote:\n> I updated the patches to handle the warning mentioned\n> by PostgreSQL Patch Tester, and removed unnecessary spaces.\n\nI have begun hacking on that, and the API layer inspired from the\nLWLocks is sound. I have been playing with it in my own extensions\nand it is nice to be able to plug in custom wait events into\npg_stat_activity, particularly for bgworkers. Finally.\n\nThe patch needed a rebase after the recent commit that introduced the\nautomatic generation of docs and code for wait events. It requires\ntwo tweaks in generate-wait_event_types.pl, feel free to double-check\nthem.\n\nSome of the new structures and routine names don't quite reflect the\nfact that we have wait events for extensions, so I have taken a stab\nat that.\n\nNote that the test module test_custom_wait_events would crash if\nattempting to launch a worker when not loaded in\nshared_preload_libraries, so we'd better have some protection in\nwait_worker_launch() (this function should be renamed as well).\n\nAttached is a rebased patch that I have begun tweaking here and\nthere. For now, the patch is moved as waiting on author. I have\nmerged the test module with the main patch for the moment, for\nsimplicity. A split is straight-forward as the code paths touched are\ndifferent.\n\nAnother and *very* important thing is that we are going to require\nsome documentation in xfunc.sgml to explain how to use these routines\nand what to expect from them. Ikeda-san, could you write some? You\ncould look at the part about shmem and LWLock to get some\ninspiration.\n--\nMichael",
"msg_date": "Tue, 11 Jul 2023 16:45:26 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support to define custom wait events for extensions"
},
{
"msg_contents": "> From bf06b8100cb747031959fe81a2d19baabc4838cf Mon Sep 17 00:00:00 2001\n> From: Masahiro Ikeda <[email protected]>\n> Date: Fri, 16 Jun 2023 11:53:29 +0900\n> Subject: [PATCH 1/2] Support custom wait events for extensions.\n\n> + * This is indexed by event ID minus NUM_BUILTIN_WAIT_EVENT_EXTENSION, and\n> + * stores the names of all dynamically-created event ID known to the current\n> + * process. Any unused entries in the array will contain NULL.\n\nThe second ID should be plural.\n\n> + /* If necessary, create or enlarge array. */\n> + if (eventId >= ExtensionWaitEventTrancheNamesAllocated)\n> + {\n> + int newalloc;\n> +\n> + newalloc = pg_nextpower2_32(Max(8, eventId + 1));\n\nGiven the context of our last conversation, I assume this code was\ncopied from somewhere else. Since this is new code, I think it would\nmake more sense if newalloc was a uint16 or size_t.\n\n From what I undersatnd, Neon differs from upstream in some way related\nto this patch. I am trying to ascertain how that is. I hope to provide\nmore feedback when I learn more about it.\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Tue, 11 Jul 2023 12:39:52 -0500",
"msg_from": "\"Tristan Partin\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support to define custom wait events for extensions"
},
{
"msg_contents": "On Tue, Jul 11, 2023 at 12:39:52PM -0500, Tristan Partin wrote:\n> Given the context of our last conversation, I assume this code was\n> copied from somewhere else. Since this is new code, I think it would\n> make more sense if newalloc was a uint16 or size_t.\n\nThis style comes from LWLockRegisterTranche() in lwlock.c. Do you\nthink that it would be more adapted to change that to\npg_nextpower2_size_t() with a Size? We could do that for the existing\ncode on HEAD as an improvement.\n\n> From what I understand, Neon differs from upstream in some way related\n> to this patch. I am trying to ascertain how that is. I hope to provide\n> more feedback when I learn more about it.\n\nHmm, okay, that would nice to hear about to make sure that the\napproach taken on this thread is able to cover what you are looking\nfor. So you mean that Neon has been using something similar to\nregister wait events in the backend? Last time I looked at the Neon\nrepo, I did not get the impression that there was a custom patch for\nPostgres in this area. All the in-core code paths using\nWAIT_EVENT_EXTENSION would gain from the APIs added here, FWIW.\n--\nMichael",
"msg_date": "Wed, 12 Jul 2023 08:29:42 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support to define custom wait events for extensions"
},
{
"msg_contents": "Hi,\n\nOn 2023-07-11 16:45:26 +0900, Michael Paquier wrote:\n> +/* ----------\n> + * Wait Events - Extension\n> + *\n> + * Use this category when the server process is waiting for some condition\n> + * defined by an extension module.\n> + *\n> + * Extensions can define custom wait events. First, call\n> + * WaitEventExtensionNewTranche() just once to obtain a new wait event\n> + * tranche. The ID is allocated from a shared counter. Next, each\n> + * individual process using the tranche should call\n> + * WaitEventExtensionRegisterTranche() to associate that wait event with\n> + * a name.\n\nWhat does \"tranche\" mean here? For LWLocks it makes some sense, it's used for\na set of lwlocks, not an individual one. But here that doesn't really seem to\napply?\n\n\n> + * It may seem strange that each process using the tranche must register it\n> + * separately, but dynamic shared memory segments aren't guaranteed to be\n> + * mapped at the same address in all coordinating backends, so storing the\n> + * registration in the main shared memory segment wouldn't work for that case.\n> + */\n\nI don't really see how this applies to wait events? There's no pointers\nhere...\n\n\n> +typedef enum\n> +{\n> +\tWAIT_EVENT_EXTENSION = PG_WAIT_EXTENSION,\n> +\tWAIT_EVENT_EXTENSION_FIRST_USER_DEFINED\n> +} WaitEventExtension;\n> +\n> +extern void WaitEventExtensionShmemInit(void);\n> +extern Size WaitEventExtensionShmemSize(void);\n> +\n> +extern uint32 WaitEventExtensionNewTranche(void);\n> +extern void WaitEventExtensionRegisterTranche(uint32 wait_event_info,\n\n> -slock_t *ShmemLock;\t\t\t/* spinlock for shared memory and LWLock\n> +slock_t *ShmemLock;\t\t\t/* spinlock for shared memory, LWLock\n> +\t\t\t\t\t\t\t\t * allocation, and named extension wait event\n> \t\t\t\t\t\t\t\t * allocation */\n\nI'm doubtful that it's a good idea to reuse the spinlock further. Given that\nthe patch adds WaitEventExtensionShmemInit(), why not just have a lock in\nthere?\n\n\n\n> +/*\n> + * Allocate a new event ID and return the wait event info.\n> + */\n> +uint32\n> +WaitEventExtensionNewTranche(void)\n> +{\n> +\tuint16\t\teventId;\n> +\n> +\tSpinLockAcquire(ShmemLock);\n> +\teventId = (*WaitEventExtensionCounter)++;\n> +\tSpinLockRelease(ShmemLock);\n> +\n> +\treturn PG_WAIT_EXTENSION | eventId;\n> +}\n\nIt seems quite possible to run out space in WaitEventExtensionCounter, so this\nshould error out in that case.\n\n\n> +/*\n> + * Register a dynamic tranche name in the lookup table of the current process.\n> + *\n> + * This routine will save a pointer to the wait event tranche name passed\n> + * as an argument, so the name should be allocated in a backend-lifetime context\n> + * (shared memory, TopMemoryContext, static constant, or similar).\n> + *\n> + * The \"wait_event_name\" will be user-visible as a wait event name, so try to\n> + * use a name that fits the style for those.\n> + */\n> +void\n> +WaitEventExtensionRegisterTranche(uint32 wait_event_info,\n> +\t\t\t\t\t\t\t\t const char *wait_event_name)\n> +{\n> +\tuint16\t\teventId;\n> +\n> +\t/* Check wait event class. */\n> +\tAssert((wait_event_info & 0xFF000000) == PG_WAIT_EXTENSION);\n> +\n> +\teventId = wait_event_info & 0x0000FFFF;\n> +\n> +\t/* This should only be called for user-defined tranches. 
*/\n> +\tif (eventId < NUM_BUILTIN_WAIT_EVENT_EXTENSION)\n> +\t\treturn;\n\nWhy not assert out in that case then?\n\n\n> +/*\n> + * Return the name of an Extension wait event ID.\n> + */\n> +static const char *\n> +GetWaitEventExtensionIdentifier(uint16 eventId)\n> +{\n> +\t/* Build-in tranche? */\n\n*built\n\n> +\tif (eventId < NUM_BUILTIN_WAIT_EVENT_EXTENSION)\n> +\t\treturn \"Extension\";\n> +\n> +\t/*\n> +\t * It's an extension tranche, so look in WaitEventExtensionTrancheNames[].\n> +\t * However, it's possible that the tranche has never been registered by\n> +\t * calling WaitEventExtensionRegisterTranche() in the current process, in\n> +\t * which case give up and return \"Extension\".\n> +\t */\n> +\teventId -= NUM_BUILTIN_WAIT_EVENT_EXTENSION;\n> +\n> +\tif (eventId >= WaitEventExtensionTrancheNamesAllocated ||\n> +\t\tWaitEventExtensionTrancheNames[eventId] == NULL)\n> +\t\treturn \"Extension\";\n\nI'd return something different here, otherwise something that's effectively a\nbug is not distinguishable from the built\n\n\n> +++ b/src/test/modules/test_custom_wait_events/t/001_basic.pl\n> @@ -0,0 +1,34 @@\n> +# Copyright (c) 2023, PostgreSQL Global Development Group\n> +\n> +use strict;\n> +use warnings;\n> +\n> +use PostgreSQL::Test::Cluster;\n> +use PostgreSQL::Test::Utils;\n> +use Test::More;\n> +\n> +my $node = PostgreSQL::Test::Cluster->new('main');\n> +\n> +$node->init;\n> +$node->append_conf(\n> +\t'postgresql.conf',\n> +\t\"shared_preload_libraries = 'test_custom_wait_events'\"\n> +);\n> +$node->start;\n\nI think this should also test registering a wait event later.\n\n\n> @@ -0,0 +1,188 @@\n> +/*--------------------------------------------------------------------------\n> + *\n> + * test_custom_wait_events.c\n> + *\t\tCode for testing custom wait events\n> + *\n> + * This code initializes a custom wait event in shmem_request_hook() and\n> + * provide a function to launch a background worker waiting forever\n> + * with the custom wait event.\n\nIsn't this vast overkill? Why not just have a simple C function that waits\nwith a custom wait event, until cancelled? That'd maybe 1/10th of the LOC.\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 11 Jul 2023 17:36:47 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support to define custom wait events for extensions"
},
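To make the overflow concern above concrete, here is a minimal sketch of how the allocator could error out once the 16-bit event ID space is exhausted. The counter-in-shared-memory structure and its field names are taken from later patch versions quoted downthread; the error code and message are illustrative only, not the patch's actual wording.

    uint32
    WaitEventExtensionNew(void)
    {
        uint16      eventId;

        SpinLockAcquire(&WaitEventExtensionCounter->mutex);

        /* Refuse to hand out an ID that would not fit in 16 bits. */
        if (WaitEventExtensionCounter->nextId > PG_UINT16_MAX)
        {
            SpinLockRelease(&WaitEventExtensionCounter->mutex);
            ereport(ERROR,
                    errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
                    errmsg("too many custom wait events for extensions"));
        }

        eventId = (uint16) WaitEventExtensionCounter->nextId++;
        SpinLockRelease(&WaitEventExtensionCounter->mutex);

        return PG_WAIT_EXTENSION | eventId;
    }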
{
"msg_contents": "On Tue, Jul 11, 2023 at 05:36:47PM -0700, Andres Freund wrote:\n> On 2023-07-11 16:45:26 +0900, Michael Paquier wrote:\n>> +$node->init;\n>> +$node->append_conf(\n>> +\t'postgresql.conf',\n>> +\t\"shared_preload_libraries = 'test_custom_wait_events'\"\n>> +);\n>> +$node->start;\n> \n> I think this should also test registering a wait event later.\n\nYup, agreed that the coverage is not sufficient.\n\n> > @@ -0,0 +1,188 @@\n> > +/*--------------------------------------------------------------------------\n> > + *\n> > + * test_custom_wait_events.c\n> > + *\t\tCode for testing custom wait events\n> > + *\n> > + * This code initializes a custom wait event in shmem_request_hook() and\n> > + * provide a function to launch a background worker waiting forever\n> > + * with the custom wait event.\n> \n> Isn't this vast overkill? Why not just have a simple C function that waits\n> with a custom wait event, until cancelled? That'd maybe 1/10th of the LOC.\n\nHmm. You mean in the shape of a TAP test where a backend registers a\nwait event by itself in a SQL function that waits for a certain amount\nof time with a WaitLatch(), then we use a second poll_query_until()\nthat checks if the a wait event is stored in pg_stat_activity? With\nsomething like what's done at the end of 001_stream_rep.pl, that\nshould be stable, I guess..\n--\nMichael",
"msg_date": "Wed, 12 Jul 2023 16:05:17 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support to define custom wait events for extensions"
},
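For reference, the simpler function being discussed could be sketched roughly as below. The two WaitEventExtension* calls are the ones introduced by this patch series; the module and function names, the event string, and the header list are illustrative, and a real version would allocate the event ID once (for instance at library load or in shared memory) rather than on every call.

    #include "postgres.h"

    #include "fmgr.h"
    #include "miscadmin.h"
    #include "storage/latch.h"
    #include "utils/wait_event.h"

    PG_MODULE_MAGIC;

    PG_FUNCTION_INFO_V1(test_custom_wait);

    Datum
    test_custom_wait(PG_FUNCTION_ARGS)
    {
        /* Allocate an ID and give it a user-visible name (see caveat above). */
        uint32      wait_event_info = WaitEventExtensionNew();

        WaitEventExtensionRegisterName(wait_event_info, "TestCustomWaitEvent");

        /*
         * Wait until the query is cancelled; meanwhile another session can
         * poll pg_stat_activity and should see the custom wait event name.
         */
        for (;;)
        {
            (void) WaitLatch(MyLatch,
                             WL_LATCH_SET | WL_TIMEOUT | WL_EXIT_ON_PM_DEATH,
                             1000L,
                             wait_event_info);
            ResetLatch(MyLatch);
            CHECK_FOR_INTERRUPTS();
        }

        PG_RETURN_VOID();       /* not reached */
    }

A TAP test can then run this function in one session and use poll_query_until() on pg_stat_activity from another, much as described above.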
{
"msg_contents": "On 2023-07-11 16:45, Michael Paquier wrote:\n> On Fri, Jun 23, 2023 at 05:56:26PM +0900, Masahiro Ikeda wrote:\n>> I updated the patches to handle the warning mentioned\n>> by PostgreSQL Patch Tester, and removed unnecessary spaces.\n> \n> I have begun hacking on that, and the API layer inspired from the\n> LWLocks is sound. I have been playing with it in my own extensions\n> and it is nice to be able to plug in custom wait events into\n> pg_stat_activity, particularly for bgworkers. Finally.\n\nGreat!\n\n> The patch needed a rebase after the recent commit that introduced the\n> automatic generation of docs and code for wait events. It requires\n> two tweaks in generate-wait_event_types.pl, feel free to double-check\n> them.\n\nThanks for rebasing. I confirmed it works with the current master.\n\nI know this is a little off-topic from what we're talking about here,\nbut I'm curious about generate-wait_event_types.pl.\n\n # generate-wait_event_types.pl\n -\t# An exception is required for LWLock and Lock as these don't require\n -\t# any C and header files generated.\n +\t# An exception is required for Extension, LWLock and Lock as these \ndon't\n +\t# require any C and header files generated.\n \tdie \"wait event names must start with 'WAIT_EVENT_'\"\n \t if ( $trimmedwaiteventname eq $waiteventenumname\n +\t\t&& $waiteventenumname !~ /^Extension/\n \t\t&& $waiteventenumname !~ /^LWLock/\n \t\t&& $waiteventenumname !~ /^Lock/);\n\nIn my understanding, the first column of the row for WaitEventExtension \nin\nwait_event_names.txt can be any value and the above code should not die.\nBut if I use the following input, it falls on the last line.\n\n # wait_event_names.txt\n Section: ClassName - WaitEventExtension\n\n WAIT_EVENT_EXTENSION\t\"Extension\"\t\"Waiting in an extension.\"\n Extension\t\"Extension\"\t\"Waiting in an extension.\"\n EXTENSION\t\"Extension\"\t\"Waiting in an extension.\"\n\nIf the behavior is unexpected, we need to change the current code.\nI have created a patch for the areas that I felt needed to be changed.\n- 0001-change-the-die-condition-in-generate-wait_event_type.patch\n (In addition to the above, \"$continue = \",\\n\";\" doesn't appear to be \nnecessary.)\n\n\n> Some of the new structures and routine names don't quite reflect the\n> fact that we have wait events for extensions, so I have taken a stab\n> at that.\n\nSorry. I confirmed the change.\n\n\n> Note that the test module test_custom_wait_events would crash if\n> attempting to launch a worker when not loaded in\n> shared_preload_libraries, so we'd better have some protection in\n> wait_worker_launch() (this function should be renamed as well).\n\nOK, I will handle it. Since Andres gave me other comments for the\ntest module, I'll think about what is best.\n\n\n> Attached is a rebased patch that I have begun tweaking here and\n> there. For now, the patch is moved as waiting on author. I have\n> merged the test module with the main patch for the moment, for\n> simplicity. A split is straight-forward as the code paths touched are\n> different.\n> \n> Another and *very* important thing is that we are going to require\n> some documentation in xfunc.sgml to explain how to use these routines\n> and what to expect from them. Ikeda-san, could you write some? You\n> could look at the part about shmem and LWLock to get some\n> inspiration.\n\nOK. Yes, I planned to write documentation.\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION",
"msg_date": "Wed, 12 Jul 2023 16:52:38 +0900",
"msg_from": "Masahiro Ikeda <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Support to define custom wait events for extensions"
},
{
"msg_contents": "On 2023-07-12 02:39, Tristan Partin wrote:\n>> From bf06b8100cb747031959fe81a2d19baabc4838cf Mon Sep 17 00:00:00 2001\n>> From: Masahiro Ikeda <[email protected]>\n>> Date: Fri, 16 Jun 2023 11:53:29 +0900\n>> Subject: [PATCH 1/2] Support custom wait events for extensions.\n> \n>> + * This is indexed by event ID minus \n>> NUM_BUILTIN_WAIT_EVENT_EXTENSION, and\n>> + * stores the names of all dynamically-created event ID known to the \n>> current\n>> + * process. Any unused entries in the array will contain NULL.\n> \n> The second ID should be plural.\n\nThanks for reviewing. Yes, I'll fix it.\n\n>> + /* If necessary, create or enlarge array. */\n>> + if (eventId >= ExtensionWaitEventTrancheNamesAllocated)\n>> + {\n>> + int newalloc;\n>> +\n>> + newalloc = pg_nextpower2_32(Max(8, eventId + 1));\n> \n> Given the context of our last conversation, I assume this code was\n> copied from somewhere else. Since this is new code, I think it would\n> make more sense if newalloc was a uint16 or size_t.\n\nAs Michael-san said, I used LWLockRegisterTranche() as a reference.\nI think it is a good idea to fix the current master. I'll modify the\nabove code accordingly.\n\n> From what I undersatnd, Neon differs from upstream in some way related\n> to this patch. I am trying to ascertain how that is. I hope to provide\n> more feedback when I learn more about it.\n\nOh, it was unexpected for me. Thanks for researching the reason.\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Wed, 12 Jul 2023 16:59:22 +0900",
"msg_from": "Masahiro Ikeda <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Support to define custom wait events for extensions"
},
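For context, the growth step being discussed is patterned on LWLockRegisterTranche() in lwlock.c; filled out, it looks roughly like this, with newalloc widened as suggested. The array and counter names follow the wait-event flavor of the patch, and the body should be read as a sketch of the existing lwlock.c idiom rather than the patch itself.

    if (eventId >= WaitEventExtensionNamesAllocated)
    {
        uint32      newalloc = pg_nextpower2_32(Max(8, eventId + 1));

        if (WaitEventExtensionNames == NULL)
            WaitEventExtensionNames = (const char **)
                MemoryContextAllocZero(TopMemoryContext,
                                       newalloc * sizeof(char *));
        else
            WaitEventExtensionNames =
                repalloc0_array(WaitEventExtensionNames, const char *,
                                WaitEventExtensionNamesAllocated, newalloc);
        WaitEventExtensionNamesAllocated = newalloc;
    }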
{
"msg_contents": "On 2023-07-12 09:36, Andres Freund wrote:\n> Hi,\n> \n> On 2023-07-11 16:45:26 +0900, Michael Paquier wrote:\n>> +/* ----------\n>> + * Wait Events - Extension\n>> + *\n>> + * Use this category when the server process is waiting for some \n>> condition\n>> + * defined by an extension module.\n>> + *\n>> + * Extensions can define custom wait events. First, call\n>> + * WaitEventExtensionNewTranche() just once to obtain a new wait \n>> event\n>> + * tranche. The ID is allocated from a shared counter. Next, each\n>> + * individual process using the tranche should call\n>> + * WaitEventExtensionRegisterTranche() to associate that wait event \n>> with\n>> + * a name.\n> \n> What does \"tranche\" mean here? For LWLocks it makes some sense, it's \n> used for\n> a set of lwlocks, not an individual one. But here that doesn't really \n> seem to\n> apply?\n\nThanks for useful comments.\nOK, I will change to WaitEventExtensionNewId() and \nWaitEventExtensionRegisterName().\n\n>> + * It may seem strange that each process using the tranche must \n>> register it\n>> + * separately, but dynamic shared memory segments aren't guaranteed \n>> to be\n>> + * mapped at the same address in all coordinating backends, so \n>> storing the\n>> + * registration in the main shared memory segment wouldn't work for \n>> that case.\n>> + */\n> I don't really see how this applies to wait events? There's no pointers\n> here...\n\nYes, I'll fix the comments.\n\n> \n>> +typedef enum\n>> +{\n>> +\tWAIT_EVENT_EXTENSION = PG_WAIT_EXTENSION,\n>> +\tWAIT_EVENT_EXTENSION_FIRST_USER_DEFINED\n>> +} WaitEventExtension;\n>> +\n>> +extern void WaitEventExtensionShmemInit(void);\n>> +extern Size WaitEventExtensionShmemSize(void);\n>> +\n>> +extern uint32 WaitEventExtensionNewTranche(void);\n>> +extern void WaitEventExtensionRegisterTranche(uint32 wait_event_info,\n> \n>> -slock_t *ShmemLock;\t\t\t/* spinlock for shared memory and LWLock\n>> +slock_t *ShmemLock;\t\t\t/* spinlock for shared memory, LWLock\n>> +\t\t\t\t\t\t\t\t * allocation, and named extension wait event\n>> \t\t\t\t\t\t\t\t * allocation */\n> \n> I'm doubtful that it's a good idea to reuse the spinlock further. Given \n> that\n> the patch adds WaitEventExtensionShmemInit(), why not just have a lock \n> in\n> there?\n\nOK, I'll create a new spinlock for the purpose.\n\n\n>> +/*\n>> + * Allocate a new event ID and return the wait event info.\n>> + */\n>> +uint32\n>> +WaitEventExtensionNewTranche(void)\n>> +{\n>> +\tuint16\t\teventId;\n>> +\n>> +\tSpinLockAcquire(ShmemLock);\n>> +\teventId = (*WaitEventExtensionCounter)++;\n>> +\tSpinLockRelease(ShmemLock);\n>> +\n>> +\treturn PG_WAIT_EXTENSION | eventId;\n>> +}\n> \n> It seems quite possible to run out space in WaitEventExtensionCounter, \n> so this\n> should error out in that case.\n\nOK, I'll do so.\n\n\n>> +/*\n>> + * Register a dynamic tranche name in the lookup table of the current \n>> process.\n>> + *\n>> + * This routine will save a pointer to the wait event tranche name \n>> passed\n>> + * as an argument, so the name should be allocated in a \n>> backend-lifetime context\n>> + * (shared memory, TopMemoryContext, static constant, or similar).\n>> + *\n>> + * The \"wait_event_name\" will be user-visible as a wait event name, \n>> so try to\n>> + * use a name that fits the style for those.\n>> + */\n>> +void\n>> +WaitEventExtensionRegisterTranche(uint32 wait_event_info,\n>> +\t\t\t\t\t\t\t\t const char *wait_event_name)\n>> +{\n>> +\tuint16\t\teventId;\n>> +\n>> +\t/* Check wait event class. 
*/\n>> +\tAssert((wait_event_info & 0xFF000000) == PG_WAIT_EXTENSION);\n>> +\n>> +\teventId = wait_event_info & 0x0000FFFF;\n>> +\n>> +\t/* This should only be called for user-defined tranches. */\n>> +\tif (eventId < NUM_BUILTIN_WAIT_EVENT_EXTENSION)\n>> +\t\treturn;\n> \n> Why not assert out in that case then?\n\nOK, I'll add the assertion for eventID.\n\n\n>> +/*\n>> + * Return the name of an Extension wait event ID.\n>> + */\n>> +static const char *\n>> +GetWaitEventExtensionIdentifier(uint16 eventId)\n>> +{\n>> +\t/* Build-in tranche? */\n> \n> *built\n\nI will fix it.\n\n\n>> +\tif (eventId < NUM_BUILTIN_WAIT_EVENT_EXTENSION)\n>> +\t\treturn \"Extension\";\n>> +\n>> +\t/*\n>> +\t * It's an extension tranche, so look in \n>> WaitEventExtensionTrancheNames[].\n>> +\t * However, it's possible that the tranche has never been registered \n>> by\n>> +\t * calling WaitEventExtensionRegisterTranche() in the current \n>> process, in\n>> +\t * which case give up and return \"Extension\".\n>> +\t */\n>> +\teventId -= NUM_BUILTIN_WAIT_EVENT_EXTENSION;\n>> +\n>> +\tif (eventId >= WaitEventExtensionTrancheNamesAllocated ||\n>> +\t\tWaitEventExtensionTrancheNames[eventId] == NULL)\n>> +\t\treturn \"Extension\";\n> \n> I'd return something different here, otherwise something that's \n> effectively a\n> bug is not distinguishable from the built\n\nIt is a good idea. It would be good to name it \"unknown wait event\", the \nsame as\npgstat_get_wait_activity(), etc.\n\n\n>> +++ b/src/test/modules/test_custom_wait_events/t/001_basic.pl\n>> @@ -0,0 +1,34 @@\n>> +# Copyright (c) 2023, PostgreSQL Global Development Group\n>> +\n>> +use strict;\n>> +use warnings;\n>> +\n>> +use PostgreSQL::Test::Cluster;\n>> +use PostgreSQL::Test::Utils;\n>> +use Test::More;\n>> +\n>> +my $node = PostgreSQL::Test::Cluster->new('main');\n>> +\n>> +$node->init;\n>> +$node->append_conf(\n>> +\t'postgresql.conf',\n>> +\t\"shared_preload_libraries = 'test_custom_wait_events'\"\n>> +);\n>> +$node->start;\n> \n> I think this should also test registering a wait event later.\n\nI see. I wasn't expecting that.\n\n\n>> @@ -0,0 +1,188 @@\n>> +/*--------------------------------------------------------------------------\n>> + *\n>> + * test_custom_wait_events.c\n>> + *\t\tCode for testing custom wait events\n>> + *\n>> + * This code initializes a custom wait event in shmem_request_hook() \n>> and\n>> + * provide a function to launch a background worker waiting forever\n>> + * with the custom wait event.\n> \n> Isn't this vast overkill? Why not just have a simple C function that \n> waits\n> with a custom wait event, until cancelled? That'd maybe 1/10th of the \n> LOC.\n\nAre you saying that processing in the background worker is overkill?\n\nIf my understanding is correct, it is difficult to implement the test\nwithout a background worker for this purpose. This is because the test\nwill execute commands serially, while a function waiting is executed in\na backend process, it is not possible to connect to another backend and\ncheck the wait events on pg_stat_activity view.\n\nPlease let me know if my understanding is wrong.\n\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Wed, 12 Jul 2023 17:36:03 +0900",
"msg_from": "Masahiro Ikeda <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Support to define custom wait events for extensions"
},
{
"msg_contents": "On Wed, Jul 12, 2023 at 04:52:38PM +0900, Masahiro Ikeda wrote:\n> In my understanding, the first column of the row for WaitEventExtension in\n> wait_event_names.txt can be any value and the above code should not die.\n> But if I use the following input, it falls on the last line.\n> \n> # wait_event_names.txt\n> Section: ClassName - WaitEventExtension\n> \n> WAIT_EVENT_EXTENSION\t\"Extension\"\t\"Waiting in an extension.\"\n> Extension\t\"Extension\"\t\"Waiting in an extension.\"\n> EXTENSION\t\"Extension\"\t\"Waiting in an extension.\"\n> \n> If the behavior is unexpected, we need to change the current code.\n> I have created a patch for the areas that I felt needed to be changed.\n> - 0001-change-the-die-condition-in-generate-wait_event_type.patch\n> (In addition to the above, \"$continue = \",\\n\";\" doesn't appear to be\n> necessary.)\n\n die \"wait event names must start with 'WAIT_EVENT_'\"\n if ( $trimmedwaiteventname eq $waiteventenumname\n- && $waiteventenumname !~ /^LWLock/\n- && $waiteventenumname !~ /^Lock/);\n- $continue = \",\\n\";\n+ && $waitclassname !~ /^WaitEventLWLock$/\n+ && $waitclassname !~ /^WaitEventLock$/);\n\nIndeed, this looks wrong as-is. $waiteventenumname refers to the\nnames of the enum elements, so we could just apply a filter based on\nthe class names in full. The second check in for the generation of\nthe c/h files uses the class names.\n--\nMichael",
"msg_date": "Wed, 12 Jul 2023 17:46:31 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support to define custom wait events for extensions"
},
{
"msg_contents": "On Wed, Jul 12, 2023 at 05:46:31PM +0900, Michael Paquier wrote:\n> On Wed, Jul 12, 2023 at 04:52:38PM +0900, Masahiro Ikeda wrote:\n>> If the behavior is unexpected, we need to change the current code.\n>> I have created a patch for the areas that I felt needed to be changed.\n>> - 0001-change-the-die-condition-in-generate-wait_event_type.patch\n>> (In addition to the above, \"$continue = \",\\n\";\" doesn't appear to be\n>> necessary.)\n> \n> die \"wait event names must start with 'WAIT_EVENT_'\"\n> if ( $trimmedwaiteventname eq $waiteventenumname\n> - && $waiteventenumname !~ /^LWLock/\n> - && $waiteventenumname !~ /^Lock/);\n> - $continue = \",\\n\";\n> + && $waitclassname !~ /^WaitEventLWLock$/\n> + && $waitclassname !~ /^WaitEventLock$/);\n> \n> Indeed, this looks wrong as-is. $waiteventenumname refers to the\n> names of the enum elements, so we could just apply a filter based on\n> the class names in full. The second check in for the generation of\n> the c/h files uses the class names.\n\nAt the end, I have gone with an event simpler way and removed the\nchecks for LWLock and Lock as their hardcoded values marked as DOCONLY\nsatisfy this check. The second check when generating the C and header\ncode has also been simplified a bit to use an exact match with the\nclass name. \n--\nMichael",
"msg_date": "Thu, 13 Jul 2023 09:12:36 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support to define custom wait events for extensions"
},
{
"msg_contents": "On 2023-07-13 09:12, Michael Paquier wrote:\n> On Wed, Jul 12, 2023 at 05:46:31PM +0900, Michael Paquier wrote:\n>> On Wed, Jul 12, 2023 at 04:52:38PM +0900, Masahiro Ikeda wrote:\n>>> If the behavior is unexpected, we need to change the current code.\n>>> I have created a patch for the areas that I felt needed to be \n>>> changed.\n>>> - 0001-change-the-die-condition-in-generate-wait_event_type.patch\n>>> (In addition to the above, \"$continue = \",\\n\";\" doesn't appear to be\n>>> necessary.)\n>> \n>> die \"wait event names must start with 'WAIT_EVENT_'\"\n>> if ( $trimmedwaiteventname eq $waiteventenumname\n>> - && $waiteventenumname !~ /^LWLock/\n>> - && $waiteventenumname !~ /^Lock/);\n>> - $continue = \",\\n\";\n>> + && $waitclassname !~ /^WaitEventLWLock$/\n>> + && $waitclassname !~ /^WaitEventLock$/);\n>> \n>> Indeed, this looks wrong as-is. $waiteventenumname refers to the\n>> names of the enum elements, so we could just apply a filter based on\n>> the class names in full. The second check in for the generation of\n>> the c/h files uses the class names.\n> \n> At the end, I have gone with an event simpler way and removed the\n> checks for LWLock and Lock as their hardcoded values marked as DOCONLY\n> satisfy this check. The second check when generating the C and header\n> code has also been simplified a bit to use an exact match with the\n> class name.\n\nThanks for your quick response. I'll rebase for the commit.\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Thu, 13 Jul 2023 10:26:35 +0900",
"msg_from": "Masahiro Ikeda <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Support to define custom wait events for extensions"
},
{
"msg_contents": "On Thu, Jul 13, 2023 at 10:26:35AM +0900, Masahiro Ikeda wrote:\n> Thanks for your quick response. I'll rebase for the commit.\n\nOkay, thanks. I'll wait for the rebased version before moving on with\nthe next review, then.\n--\nMichael",
"msg_date": "Thu, 13 Jul 2023 11:32:13 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support to define custom wait events for extensions"
},
{
"msg_contents": "Hi,\n\nI updated the patches.\n* v6-0001-Support-custom-wait-events-for-extensions.patch\n\nThe main diffs are\n\n* rebase it atop current HEAD\n* update docs to show users how to use the APIs\n* rename of functions and variables\n* fix typos\n* define a new spinlock in shared memory for this purpose\n* output an error if the number of wait event for extensions exceeds \nuint16\n* show the wait event as \"extension\" if the custom wait event name is \nnot\n registered, which is same as LWLock one.\n* add test cases which confirm it works if new wait events for \nextensions\n are defined in initialize phase and after phase. And add a boundary\n condition test.\n\nPlease let me know if I forgot to handle something that you commented,\nand there are better idea.\n\nNote:\nI would like to change the wait event name of contrib modules for \nexample\npostgres_fdw. But, I think it's better to do so after the APIs are \ncommitted.\nThe example mentioned in docs should be updated to the contrib modules \ncodes,\nnot the test module.\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION",
"msg_date": "Wed, 19 Jul 2023 12:52:10 +0900",
"msg_from": "Masahiro Ikeda <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Support to define custom wait events for extensions"
},
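As a rough sketch, the "same as LWLock" fallback listed above amounts to the following when a wait event ID is turned back into a name; the code mirrors fragments quoted elsewhere in this thread, with the exact fallback strings being whatever the patch of the day settles on.

    static const char *
    GetWaitEventExtensionIdentifier(uint16 eventId)
    {
        /* Built-in wait events of this class keep the generic name. */
        if (eventId < NUM_BUILTIN_WAIT_EVENT_EXTENSION)
            return "Extension";

        /*
         * Custom event: the name is only known to processes that called
         * WaitEventExtensionRegisterName() for it, so fall back to a
         * generic string if it was never registered in this process.
         */
        eventId -= NUM_BUILTIN_WAIT_EVENT_EXTENSION;
        if (eventId >= WaitEventExtensionNamesAllocated ||
            WaitEventExtensionNames[eventId] == NULL)
            return "extension";

        return WaitEventExtensionNames[eventId];
    }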
{
"msg_contents": "On 2023-07-19 12:52, Masahiro Ikeda wrote:\n> Hi,\n> \n> I updated the patches.\n> * v6-0001-Support-custom-wait-events-for-extensions.patch\n\nI updated the patch since the cfbot found a bug.\n* v7-0001-Support-custom-wait-events-for-extensions.patch\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION",
"msg_date": "Wed, 19 Jul 2023 15:19:27 +0900",
"msg_from": "Masahiro Ikeda <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Support to define custom wait events for extensions"
},
{
"msg_contents": "On Wed, Jul 19, 2023 at 12:52:10PM +0900, Masahiro Ikeda wrote:\n> I would like to change the wait event name of contrib modules for example\n> postgres_fdw. But, I think it's better to do so after the APIs are\n> committed.\n\nAgreed to do things one step at a time here. Let's focus on the core\nAPIs and facilities first.\n--\nMichael",
"msg_date": "Wed, 19 Jul 2023 15:23:41 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support to define custom wait events for extensions"
},
{
"msg_contents": "On Wed, Jul 19, 2023 at 11:49 AM Masahiro Ikeda\n<[email protected]> wrote:\n>\n> I updated the patch since the cfbot found a bug.\n> * v7-0001-Support-custom-wait-events-for-extensions.patch\n\nThanks for working on this feature. +1. I've wanted this capability\nfor a while because extensions have many different wait loops for\ndifferent reasons, a single wait event type isn't enough.\n\nI think we don't need a separate test extension for demonstrating use\nof custom wait events, you can leverage the sample extension\nworker_spi because that's where extension authors look for while\nwriting a new extension. Also, it simplifies your patch a lot. I don't\nmind adding a few things to worker_spi for the sake of demonstrating\nthe use and testing for custom wait events.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 19 Jul 2023 14:00:15 +0530",
"msg_from": "Bharath Rupireddy <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support to define custom wait events for extensions"
},
{
"msg_contents": "Thanks for continuing to work on this patchset. I only have\nprose-related comments.\n\n> To support custom wait events, it add 2 APIs to define new wait events\n> for extensions dynamically.\n\nRemove the \"it\" here.\n\n> The APIs are\n> * WaitEventExtensionNew()\n> * WaitEventExtensionRegisterName()\n \n> These are similar to the existing LWLockNewTrancheId() and\n> LWLockRegisterTranche().\n\nThis sentence seems like it could be removed given the API names have\nchanged during the development of this patch.\n\n> First, extensions should call WaitEventExtensionNew() to get one\n> or more new wait event, which IDs are allocated from a shared\n> counter. Next, each individual process can use the wait event with\n> WaitEventExtensionRegisterName() to associate that a wait event\n> string to the associated name.\n\nThis portion of the commit message is a copy-paste of the function\ncomment. Whatever you do in the function comment (which I commented on\nbelow), just do here as well.\n\n> + so an wait event might be reported as just <quote><literal>extension</literal></quote>\n> + rather than the extension-assigned name.\n\ns/an/a\n\n> + <sect2 id=\"xfunc-addin-wait-events\">\n> + <title>Custom Wait Events for Add-ins</title>\n\nThis would be the second use of \"Add-ins\" ever, according to my search.\nShould this be \"Extensions\" instead?\n\n> + <para>\n> + Add-ins can define custom wait events that the wait event type is\n\ns/that/where\n\n> + <literal>Extension</literal>.\n> + </para>\n> + <para>\n> + First, add-ins should get new one or more wait events by calling:\n\n\"one or more\" doesn't seem to make sense grammatically here.\n\n> +<programlisting>\n> + uint32 WaitEventExtensionNew(void)\n> +</programlisting>\n> + Next, each individual process can use them to associate that\n\nRemove \"that\".\n\n> + a wait event string to the associated name by calling:\n> +<programlisting>\n> + void WaitEventExtensionRegisterName(uint32 wait_event_info, const char *wait_event_name);\n> +</programlisting>\n> + An example can be found in\n> + <filename>src/test/modules/test_custom_wait_events/test_custom_wait_events.c</filename>\n> + in the PostgreSQL source tree.\n> + </para>\n> + </sect2>\n\n> + * Register a dynamic wait event name for extension in the lookup table\n> + * of the current process.\n\nInserting an \"an\" before \"extension\" would make this read better.\n\n> +/*\n> + * Return the name of an wait event ID for extension.\n> + */\n\ns/an/a\n\n> + /*\n> + * It's an user-defined wait event, so look in WaitEventExtensionNames[].\n> + * However, it's possible that the name has never been registered by\n> + * calling WaitEventExtensionRegisterName() in the current process, in\n> + * which case give up and return \"extension\".\n> + */\n\ns/an/a\n\n\"extension\" seems very similar to \"Extension\". Instead of returning a\nstring here, could we just error? This seems like a programmer error on\nthe part of the extension author.\n\n> + * Extensions can define their wait events. First, extensions should call\n> + * WaitEventExtensionNew() to get one or more wait events, which IDs are\n> + * allocated from a shared counter. Next, each individual process can use\n> + * them with WaitEventExtensionRegisterName() to associate that a wait\n> + * event string to the associated name.\n\nAn \"own\" before \"wait events\" in the first sentence would increase\nclarity. \"where\" instead of \"which\" in the next sentence. 
Remove \"that\"\nafter \"associate\" in the third sentence.\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Wed, 19 Jul 2023 10:57:39 -0500",
"msg_from": "\"Tristan Partin\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support to define custom wait events for extensions"
},
{
"msg_contents": "On Tue Jul 11, 2023 at 6:29 PM CDT, Michael Paquier wrote:\n> On Tue, Jul 11, 2023 at 12:39:52PM -0500, Tristan Partin wrote:\n> > Given the context of our last conversation, I assume this code was\n> > copied from somewhere else. Since this is new code, I think it would\n> > make more sense if newalloc was a uint16 or size_t.\n>\n> This style comes from LWLockRegisterTranche() in lwlock.c. Do you\n> think that it would be more adapted to change that to\n> pg_nextpower2_size_t() with a Size? We could do that for the existing\n> code on HEAD as an improvement.\n\nYes, I think that would be the most correct code. At the very least,\nusing a uint32 instead of an int, would be preferrable.\n\n> > From what I understand, Neon differs from upstream in some way related\n> > to this patch. I am trying to ascertain how that is. I hope to provide\n> > more feedback when I learn more about it.\n>\n> Hmm, okay, that would nice to hear about to make sure that the\n> approach taken on this thread is able to cover what you are looking\n> for. So you mean that Neon has been using something similar to\n> register wait events in the backend? Last time I looked at the Neon\n> repo, I did not get the impression that there was a custom patch for\n> Postgres in this area. All the in-core code paths using\n> WAIT_EVENT_EXTENSION would gain from the APIs added here, FWIW.\n\nI did some investigation into the Neon fork[0], and couldn't find any\ncommits that seemed related. Perhaps this is on our wishlist instead of\nsomething we already have implemented. I have CCed Heikki to talk some\nmore about how this would fit in at Neon.\n\n[0]: https://github.com/neondatabase/postgres\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Wed, 19 Jul 2023 11:16:34 -0500",
"msg_from": "\"Tristan Partin\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support to define custom wait events for extensions"
},
{
"msg_contents": "On Wed, Jul 19, 2023 at 11:16:34AM -0500, Tristan Partin wrote:\n> On Tue Jul 11, 2023 at 6:29 PM CDT, Michael Paquier wrote:\n>> This style comes from LWLockRegisterTranche() in lwlock.c. Do you\n>> think that it would be more adapted to change that to\n>> pg_nextpower2_size_t() with a Size? We could do that for the existing\n>> code on HEAD as an improvement.\n> \n> Yes, I think that would be the most correct code. At the very least,\n> using a uint32 instead of an int, would be preferrable.\n\nWould you like to send a patch on a new thread about that?\n\n>> Hmm, okay, that would nice to hear about to make sure that the\n>> approach taken on this thread is able to cover what you are looking\n>> for. So you mean that Neon has been using something similar to\n>> register wait events in the backend? Last time I looked at the Neon\n>> repo, I did not get the impression that there was a custom patch for\n>> Postgres in this area. All the in-core code paths using\n>> WAIT_EVENT_EXTENSION would gain from the APIs added here, FWIW.\n> \n> I did some investigation into the Neon fork[0], and couldn't find any\n> commits that seemed related. Perhaps this is on our wishlist instead of\n> something we already have implemented. I have CCed Heikki to talk some\n> more about how this would fit in at Neon.\n> \n> [0]: https://github.com/neondatabase/postgres\n\nAnybody with complex out-of-core extensions have wanted more\nmonitoring capabilities for wait events without relying on the core\nbackend. To be honest, I would not be surprised to see this stuff on\nmore than one wishlist.\n--\nMichael",
"msg_date": "Thu, 20 Jul 2023 08:34:05 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support to define custom wait events for extensions"
},
{
"msg_contents": "On Wed, Jul 19, 2023 at 10:57:39AM -0500, Tristan Partin wrote:\n> > + <sect2 id=\"xfunc-addin-wait-events\">\n> > + <title>Custom Wait Events for Add-ins</title>\n> \n> This would be the second use of \"Add-ins\" ever, according to my search.\n> Should this be \"Extensions\" instead?\n\nYes, I would think that just \"Custom Wait Events\" is enough here.\nAnd I'd recommend to also use Shared Memory here. The case of\ndynamically loaded things is possible, more advanced and can work, but\nI am not sure we really need to do down to that as long as we mention\nto use shared_preload_libraries.\n\nI've rewritten the docs in their entirety, but honestly I still need\nto spend more time polishing that.\n\nAnother part of the patch that has been itching me a lot are the\nregression tests. I have spent some time today migrating the tests of\nworker_spi to TAP for the sake of this thread, resulting in commit\n320c311, and concluded that we need to care about three new cases:\n- For custom wait events where the shmem state is not loaded, check\nthat we report the default of 'extension'.\n- Check that it is possible to allocate and load a custom wait event\ndynamically. Here, I have used a new SQL function in worker_spi,\ncalled worker_spi_init(). That feels a bit hack-ish but for a test in\na template module that works great.\n- Check that wait events loaded through shared_preload_libraries work\ncorrectly.\n\nThe tests of worker_spi can take care easily of all these cases, once\na few things for the shmem handling are put in place for the dynamic\nand preloading cases.\n\n+Datum\n+get_new_wait_event_info(PG_FUNCTION_ARGS)\n+{\n+ PG_RETURN_UINT32(WaitEventExtensionNew());\n+}\n\nWhile looking at the previous patch and the test, I've noticed this\npattern. WaitEventExtensionNew() should not be called without holding\nAddinShmemInitLock, or that opens the door to race conditions.\n\nI am still mid-way through the review of the core APIs, but attached\nis my current version in progress, labelled v8. I'll continue\ntomorrow. I'm aware of some typos in the commit message of this\npatch, and the dynamic bgworker launch is failing in the CI for\nVS2019 (just too tired to finish debugging that today).\n\nThoughts are welcome.\n--\nMichael",
"msg_date": "Thu, 27 Jul 2023 18:16:46 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support to define custom wait events for extensions"
},
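To illustrate the locking point above, extension-side initialization could look roughly like this when the allocated event ID lives in the extension's own shared memory, which is essentially what worker_spi is being taught to do here. The structure, function and event names below are illustrative rather than lifted from the patch, and the snippet assumes the shared memory size was requested beforehand via a shmem_request_hook.

    typedef struct MyExtSharedState
    {
        uint32      wait_event_info;    /* allocated once per cluster */
    } MyExtSharedState;

    static MyExtSharedState *myext_state = NULL;

    /*
     * Attach to (and, for the first caller, create) the shared state.
     * Safe to call from any backend, e.g. from a worker_spi_init()-style
     * SQL function or from a background worker's main routine.
     */
    static void
    myext_attach_shmem(void)
    {
        bool        found;

        LWLockAcquire(AddinShmemInitLock, LW_EXCLUSIVE);
        myext_state = ShmemInitStruct("myext State",
                                      sizeof(MyExtSharedState), &found);
        if (!found)
            myext_state->wait_event_info = WaitEventExtensionNew();
        LWLockRelease(AddinShmemInitLock);

        /* Each process maps the shared ID to a user-visible name. */
        WaitEventExtensionRegisterName(myext_state->wait_event_info,
                                       "MyExtMainWait");
    }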
{
"msg_contents": "Hi, all.\n\nSorry for late reply.\n\n> I am still mid-way through the review of the core APIs, but attached\n> is my current version in progress, labelled v8. I'll continue\n> tomorrow. I'm aware of some typos in the commit message of this\n> patch, and the dynamic bgworker launch is failing in the CI for\n> VS2019 (just too tired to finish debugging that today).\n\nI suspect that I forgot to specify \"volatile\" to the variable\nfor the spinlock.\n\n+/* dynamic allocation counter for custom wait events for extensions */\n+typedef struct WaitEventExtensionCounter\n+{\n+\tint\t\t\tnextId;\t\t\t/* next ID to assign */\n+\tslock_t\t\tmutex;\t\t\t/* protects the counter only */\n+}\t\t\tWaitEventExtensionCounter;\n+\n+/* pointer to the shared memory */\n+static WaitEventExtensionCounter * waitEventExtensionCounter;\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Thu, 27 Jul 2023 18:29:22 +0900",
"msg_from": "Masahiro Ikeda <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Support to define custom wait events for extensions"
},
{
"msg_contents": "On Thu, Jul 27, 2023 at 06:29:22PM +0900, Masahiro Ikeda wrote:\n> I suspect that I forgot to specify \"volatile\" to the variable\n> for the spinlock.\n\n+ if (!IsUnderPostmaster)\n+ {\n+ /* Allocate space in shared memory. */\n+ waitEventExtensionCounter = (WaitEventExtensionCounter *)\n+ ShmemInitStruct(\"waitEventExtensionCounter\", WaitEventExtensionShmemSize(), &found);\n+ if (found)\n+ return;\n\nI think that your error is here. WaitEventExtensionShmemInit() is\nforgetting to set the pointer to waitEventExtensionCounter for\nprocesses where IsUnderPostmaster is true, which impacts things not\nforked like in -DEXEC_BACKEND (the crash is reproducible on Linux with\n-DEXEC_BACKEND in CFLAGS, as well). The correct thing to do is to\nalways call ShmemInitStruct, but only initialize the contents of the\nshared memory area if ShmemInitStruct() has *not* found the shmem\ncontents.\n\nWaitEventExtensionNew() could be easily incorrectly used, so I'd\nrather add a LWLockHeldByMeInMode() on AddinShmemInitLock as safety\nmeasure. Perhaps we should do the same for the LWLocks, subject for a\ndifferent thread..\n\n+ int newalloc;\n+\n+ newalloc = pg_nextpower2_32(Max(8, eventId + 1));\n\nThis should be a uint32.\n\n+ if (eventId >= WaitEventExtensionNamesAllocated ||\n+ WaitEventExtensionNames[eventId] == NULL)\n+ return \"extension\";\nThat's too close to the default of \"Extension\". It would be cleaner\nto use \"unknown\", but we've been using \"???\" as well in many default\npaths where an ID cannot be mapped to a string, so I would recommend\nto just use that.\n\nI have spent more time polishing the docs and the comments. This v9\nlooks in a rather committable shape now with docs, tests and core\nroutines in place.\n--\nMichael",
"msg_date": "Fri, 28 Jul 2023 10:06:17 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support to define custom wait events for extensions"
},
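Spelled out, the corrected initialization pattern described above is essentially the following. The structure and variable names are the ones discussed in this thread; the starting value of the counter is a guess based on how the built-in IDs are reserved, so treat this as a sketch rather than the committed code.

    void
    WaitEventExtensionShmemInit(void)
    {
        bool        found;

        /* Always attach; this also covers EXEC_BACKEND children. */
        WaitEventExtensionCounter = (WaitEventExtensionCounterData *)
            ShmemInitStruct("WaitEventExtensionCounterData",
                            WaitEventExtensionShmemSize(), &found);

        if (!found)
        {
            /* Only the process that created the area initializes it. */
            WaitEventExtensionCounter->nextId = NUM_BUILTIN_WAIT_EVENT_EXTENSION;
            SpinLockInit(&WaitEventExtensionCounter->mutex);
        }
    }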
{
"msg_contents": "On Fri, Jul 28, 2023 at 6:36 AM Michael Paquier <[email protected]> wrote:\n>\n> I have spent more time polishing the docs and the comments. This v9\n> looks in a rather committable shape now with docs, tests and core\n> routines in place.\n\nThanks. Here are some comments on v9 patch:\n\n1.\n- so an <literal>LWLock</literal> wait event might be reported as\n- just <quote><literal>extension</literal></quote> rather than the\n- extension-assigned name.\n+ if the extension's library is not loaded; so a custom wait event might\n+ be reported as just <quote><literal>???</literal></quote>\n+ rather than the custom name assigned.\n\nTrying to understand why '???' is any better than 'extension' for a\nregistered custom wait event of an unloaded extension?\n\nPS: Looked at other instances where '???' is being used for\nrepresenting an unknown \"thing\".\n\n2. Have an example of how a custom wait event is displayed in the\nexample in the docs \"Here is an example of how wait events can be\nviewed:\". We can use the worker_spi wait event type there.\n\n3.\n- so an <literal>LWLock</literal> wait event might be reported as\n- just <quote><literal>extension</literal></quote> rather than the\n- extension-assigned name.\n\n+ <xref linkend=\"wait-event-lwlock-table\"/>. In some cases, the name\n+ assigned by an extension will not be available in all server processes\n+ if the extension's library is not loaded; so a custom wait event might\n+ be reported as just <quote><literal>???</literal></quote>\n\nAre we missing to explicitly say what wait event will be reported for\nan LWLock when the extension library is not loaded?\n\n4.\n+ Add-ins can define custom wait events under the wait event type\n\nI see a few instances of Add-ins/add-in in xfunc.sgml. Isn't it better\nto use the word extension given that glossary defines what an\nextension is https://www.postgresql.org/docs/current/glossary.html#GLOSSARY-EXTENSION?\n\n5.\n+} WaitEventExtensionCounter;\n+\n+/* pointer to the shared memory */\n+static WaitEventExtensionCounter *waitEventExtensionCounter;\n\nHow about naming the structure variable as\nWaitEventExtensionCounterData and pointer as\nWaitEventExtensionCounter? This keeps all the static variable names\nconsistent WaitEventExtensionNames, WaitEventExtensionNamesAllocated\nand WaitEventExtensionCounter.\n\n6.\n+ /* Check the wait event class. */\n+ Assert((wait_event_info & 0xFF000000) == PG_WAIT_EXTENSION);\n+\n+ /* This should only be called for user-defined wait event. */\n+ Assert(eventId >= NUM_BUILTIN_WAIT_EVENT_EXTENSION);\n\nMaybe, we must turn the above asserts into ereport(ERROR) to protect\nagainst an extension sending in an unregistered wait_event_info?\nEspecially, the first Assert((wait_event_info & 0xFF000000) ==\nPG_WAIT_EXTENSION); checks that the passed in wait_event_info is\npreviously returned by WaitEventExtensionNew. IMO, these assertions\nbetter fit for errors.\n\n7.\n+ * Extensions can define their own wait events in this categiry. First,\nTypo - s/categiry/category\n\n8.\n+ First,\n+ * they should call WaitEventExtensionNew() to get one or more wait event\n+ * IDs that are allocated from a shared counter.\n\nCan WaitEventExtensionNew() be WaitEventExtensionNew(int num_ids, int\n*result) to get the required number of wait event IDs in one call\nsimilar to RequestNamedLWLockTranche? Currently, an extension needs to\ncall WaitEventExtensionNew() N number of times to get N wait event\nIDs. 
Maybe the existing WaitEventExtensionNew() is good, but just a\nthought.\n\n9.\n# The expected result is a special pattern here with a newline coming from the\n# first query where the shared memory state is set.\n$result = $node->poll_query_until(\n 'postgres',\n qq[SELECT worker_spi_init(); SELECT wait_event FROM\npg_stat_activity WHERE backend_type ~ 'worker_spi';],\n qq[\nworker_spi_main]);\n\nThis test doesn't have to be that complex with the result being a\nspecial pattern, SELECT worker_spi_init(); can just be within a\nseparate safe_psql.\n\n10.\n+ wsstate = ShmemInitStruct(\"custom_wait_event\",\n\nName the shared memory just \"worker_spi\" to make it generic and\nextensible. Essentially, it is a woker_spi shared memory area part of\nit is for custom wait event id.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 28 Jul 2023 12:43:36 +0530",
"msg_from": "Bharath Rupireddy <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support to define custom wait events for extensions"
},
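As a concrete sketch of point 6, turning the two assertions into run-time checks could look like the following. The error for an out-of-range ID is quoted later in the thread; the check on the class bits and its message are illustrative, and WAIT_EVENT_CLASS_MASK / WAIT_EVENT_ID_MASK are the macros Michael mentions introducing for the 0xFF000000 and 0x0000FFFF constants.

    void
    WaitEventExtensionRegisterName(uint32 wait_event_info,
                                   const char *wait_event_name)
    {
        uint16      eventId;

        /* Reject anything that is not in the Extension wait event class. */
        if ((wait_event_info & WAIT_EVENT_CLASS_MASK) != PG_WAIT_EXTENSION)
            ereport(ERROR,
                    errcode(ERRCODE_INVALID_PARAMETER_VALUE),
                    errmsg("invalid wait event class %u", wait_event_info));

        eventId = wait_event_info & WAIT_EVENT_ID_MASK;

        /* Reject IDs that were never handed out by WaitEventExtensionNew(). */
        if (eventId < NUM_BUILTIN_WAIT_EVENT_EXTENSION)
            ereport(ERROR,
                    errcode(ERRCODE_INVALID_PARAMETER_VALUE),
                    errmsg("invalid wait event ID %u", eventId));

        /* ... store wait_event_name for this eventId, as in the patch ... */
    }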
{
"msg_contents": "On Fri, Jul 28, 2023 at 12:43:36PM +0530, Bharath Rupireddy wrote:\n> 1.\n> - so an <literal>LWLock</literal> wait event might be reported as\n> - just <quote><literal>extension</literal></quote> rather than the\n> - extension-assigned name.\n> + if the extension's library is not loaded; so a custom wait event might\n> + be reported as just <quote><literal>???</literal></quote>\n> + rather than the custom name assigned.\n> \n> Trying to understand why '???' is any better than 'extension' for a\n> registered custom wait event of an unloaded extension?\n> \n> PS: Looked at other instances where '???' is being used for\n> representing an unknown \"thing\".\n\nYou are right that I am making things inconsistent here. Having a\nbehavior close to the existing LWLock and use \"extension\" when the\nevent cannot be found makes the most sense. I have been a bit\nconfused with the wording though of this part of the docs, though, as\nLWLocks don't directly use the custom wait event APIs.\n\n> 2. Have an example of how a custom wait event is displayed in the\n> example in the docs \"Here is an example of how wait events can be\n> viewed:\". We can use the worker_spi wait event type there.\n\nFine by me, added one.\n\n> 3.\n> - so an <literal>LWLock</literal> wait event might be reported as\n> - just <quote><literal>extension</literal></quote> rather than the\n> - extension-assigned name.\n> \n> + <xref linkend=\"wait-event-lwlock-table\"/>. In some cases, the name\n> + assigned by an extension will not be available in all server processes\n> + if the extension's library is not loaded; so a custom wait event might\n> + be reported as just <quote><literal>???</literal></quote>\n> \n> Are we missing to explicitly say what wait event will be reported for\n> an LWLock when the extension library is not loaded?\n\nYes, see answer to point 1.\n\n> 4.\n> + Add-ins can define custom wait events under the wait event type\n> \n> I see a few instances of Add-ins/add-in in xfunc.sgml. Isn't it better\n> to use the word extension given that glossary defines what an\n> extension is https://www.postgresql.org/docs/current/glossary.html#GLOSSARY-EXTENSION?\n\nAn extension is an Add-in, and may be loaded, but it is possible to\nhave modules that just need to be LOAD'ed (with command line or just\nshared_preload_libraries) so an add-in may not always be an extension.\nI am not completely sure if add-ins is the best term, but it covers\nboth, and that's consistent with the existing docs. Perhaps the same\narea of the docs should be refreshed, but that looks like a separate\npatch for me. For now, I'd rather use a consistent term, and this one\nsounds OK to me.\n\n[1]: https://www.postgresql.org/docs/devel/extend-extensions.html.\n\n> 5.\n> +} WaitEventExtensionCounter;\n> +\n> +/* pointer to the shared memory */\n> +static WaitEventExtensionCounter *waitEventExtensionCounter;\n> \n> How about naming the structure variable as\n> WaitEventExtensionCounterData and pointer as\n> WaitEventExtensionCounter? This keeps all the static variable names\n> consistent WaitEventExtensionNames, WaitEventExtensionNamesAllocated\n> and WaitEventExtensionCounter.\n\nHmm, good point on consistency here, especially to use an upper-case\ncharacter for the first characters of waitEventExtensionCounter..\nErr.. WaitEventExtensionCounter.\n\n> 6.\n> + /* Check the wait event class. */\n> + Assert((wait_event_info & 0xFF000000) == PG_WAIT_EXTENSION);\n> +\n> + /* This should only be called for user-defined wait event. 
*/\n> + Assert(eventId >= NUM_BUILTIN_WAIT_EVENT_EXTENSION);\n> \n> Maybe, we must turn the above asserts into ereport(ERROR) to protect\n> against an extension sending in an unregistered wait_event_info?\n> Especially, the first Assert((wait_event_info & 0xFF000000) ==\n> PG_WAIT_EXTENSION); checks that the passed in wait_event_info is\n> previously returned by WaitEventExtensionNew. IMO, these assertions\n> better fit for errors.\n\nOkay by me that it may be a better idea to use ereport(ERROR) in the\nlong run, so changed this way. I have introduced a\nWAIT_EVENT_CLASS_MASK and a WAIT_EVENT_ID_MASK as we now use\n0xFF000000 and 0x0000FFFF in three places of this file. This should\njust be a patch on its own.\n\n> 7.\n> + * Extensions can define their own wait events in this categiry. First,\n> Typo - s/categiry/category\n\nThanks, missed that.\n\n> 8.\n> + First,\n> + * they should call WaitEventExtensionNew() to get one or more wait event\n> + * IDs that are allocated from a shared counter.\n> \n> Can WaitEventExtensionNew() be WaitEventExtensionNew(int num_ids, int\n> *result) to get the required number of wait event IDs in one call\n> similar to RequestNamedLWLockTranche? Currently, an extension needs to\n> call WaitEventExtensionNew() N number of times to get N wait event\n> IDs. Maybe the existing WaitEventExtensionNew() is good, but just a\n> thought.\n\nYes, this was mentioned upthread. I am not completely sure yet how\nmuch we need to do for this interface, but surely it would be faster\nto have a Multiple() interface that returns an array made of N numbers\nrequested (rather than a rank of them). For now, I'd rather just aim\nfor simplicity for the basics.\n\n> 9.\n> # The expected result is a special pattern here with a newline coming from the\n> # first query where the shared memory state is set.\n> $result = $node->poll_query_until(\n> 'postgres',\n> qq[SELECT worker_spi_init(); SELECT wait_event FROM\n> pg_stat_activity WHERE backend_type ~ 'worker_spi';],\n> qq[\n> worker_spi_main]);\n> \n> This test doesn't have to be that complex with the result being a\n> special pattern, SELECT worker_spi_init(); can just be within a\n> separate safe_psql.\n\nNo, it cannot because we need the custom wait event string to be\nloaded in the same connection as the one querying pg_stat_activity.\nA different thing that can be done here is to use background_psql()\nwith a query_until(), though I am not sure that this is worth doing\nhere.\n\n> 10.\n> + wsstate = ShmemInitStruct(\"custom_wait_event\",\n> \n> Name the shared memory just \"worker_spi\" to make it generic and\n> extensible. Essentially, it is a woker_spi shared memory area part of\n> it is for custom wait event id.\n\nRight, this is misleading. This could be something like a \"worker_spi\nState\", for instance. I have switched to this term.\n\nAttached is a new version.\n--\nMichael",
"msg_date": "Mon, 31 Jul 2023 10:10:04 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support to define custom wait events for extensions"
},
{
"msg_contents": "On Mon, Jul 31, 2023 at 6:40 AM Michael Paquier <[email protected]> wrote:\n>\n> You are right that I am making things inconsistent here. Having a\n> behavior close to the existing LWLock and use \"extension\" when the\n> event cannot be found makes the most sense. I have been a bit\n> confused with the wording though of this part of the docs, though, as\n> LWLocks don't directly use the custom wait event APIs.\n\n+ * calling WaitEventExtensionRegisterName() in the current process, in\n+ * which case give up and return an unknown state.\n\nWe're not giving up and returning an unknown state in the v10 patch\nunlike v9, no? This comment needs to change.\n\n> > 4.\n> > + Add-ins can define custom wait events under the wait event type\n> >\n> > I see a few instances of Add-ins/add-in in xfunc.sgml. Isn't it better\n> > to use the word extension given that glossary defines what an\n> > extension is https://www.postgresql.org/docs/current/glossary.html#GLOSSARY-EXTENSION?\n>\n> An extension is an Add-in, and may be loaded, but it is possible to\n> have modules that just need to be LOAD'ed (with command line or just\n> shared_preload_libraries) so an add-in may not always be an extension.\n> I am not completely sure if add-ins is the best term, but it covers\n> both, and that's consistent with the existing docs. Perhaps the same\n> area of the docs should be refreshed, but that looks like a separate\n> patch for me. For now, I'd rather use a consistent term, and this one\n> sounds OK to me.\n>\n> [1]: https://www.postgresql.org/docs/devel/extend-extensions.html.\n\nThe \"external module\" seems the right wording here. Use of \"add-ins\"\nis fine by me for this patch.\n\n> Okay by me that it may be a better idea to use ereport(ERROR) in the\n> long run, so changed this way. I have introduced a\n> WAIT_EVENT_CLASS_MASK and a WAIT_EVENT_ID_MASK as we now use\n> 0xFF000000 and 0x0000FFFF in three places of this file. This should\n> just be a patch on its own.\n\nYeah, I don't mind these macros going along or before or after the\ncustom wait events feature.\n\n> Yes, this was mentioned upthread. I am not completely sure yet how\n> much we need to do for this interface, but surely it would be faster\n> to have a Multiple() interface that returns an array made of N numbers\n> requested (rather than a rank of them). For now, I'd rather just aim\n> for simplicity for the basics.\n\n+1 to be simple for now. If any such requests come in future, I'm sure\nwe can always get back to it.\n\n> > 9.\n> > # The expected result is a special pattern here with a newline coming from the\n> > # first query where the shared memory state is set.\n> > $result = $node->poll_query_until(\n> > 'postgres',\n> > qq[SELECT worker_spi_init(); SELECT wait_event FROM\n> > pg_stat_activity WHERE backend_type ~ 'worker_spi';],\n> > qq[\n> > worker_spi_main]);\n> >\n> > This test doesn't have to be that complex with the result being a\n> > special pattern, SELECT worker_spi_init(); can just be within a\n> > separate safe_psql.\n>\n> No, it cannot because we need the custom wait event string to be\n> loaded in the same connection as the one querying pg_stat_activity.\n> A different thing that can be done here is to use background_psql()\n> with a query_until(), though I am not sure that this is worth doing\n> here.\n\n-1 to go to the background_psql and query_until route. However,\nenhancing the comment might help \"we need the custom wait event string\nto be loaded in the same connection as .....\". 
Having said this, I\ndon't quite understand the point of having worker_spi_init() when we\nsay clearly how to use shmem hooks and custom wait events. If its\nintention is to show loading of shared memory to a backend via a\nfunction, do we really need it in worker_spi? I don't mind removing it\nif it's not adding any significant value.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 31 Jul 2023 12:07:40 +0530",
"msg_from": "Bharath Rupireddy <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support to define custom wait events for extensions"
},
{
"msg_contents": "On 2023-07-31 10:10, Michael Paquier wrote:\n> Attached is a new version.\n\nThanks for all the improvements.\nI have some comments for v10.\n\n(1)\n\n <note>\n <para>\n- Extensions can add <literal>LWLock</literal> types to the list \nshown in\n- <xref linkend=\"wait-event-lwlock-table\"/>. In some cases, the \nname\n+ Extensions can add <literal>Extension</literal> and\n+ <literal>LWLock</literal> types\n+ to the list shown in <xref linkend=\"wait-event-extension-table\"/> \nand\n+ <xref linkend=\"wait-event-lwlock-table\"/>. In some cases, the name\n assigned by an extension will not be available in all server \nprocesses;\n- so an <literal>LWLock</literal> wait event might be reported as\n- just <quote><literal>extension</literal></quote> rather than the\n+ so an <literal>LWLock</literal> or <literal>Extension</literal> \nwait\n+ event might be reported as just\n+ <quote><literal>extension</literal></quote> rather than the\n extension-assigned name.\n </para>\n </note>\n\nI think the order in which they are mentioned should be matched. I mean \nthat\n- so an <literal>LWLock</literal> or <literal>Extension</literal> \nwait\n+ so an <literal>Extension</literal> or <literal>LWLock</literal> \nwait\n\n\n(2)\n\n\t/* This should only be called for user-defined wait event. */\n\tif (eventId < NUM_BUILTIN_WAIT_EVENT_EXTENSION)\n\t\tereport(ERROR,\n\t\t\t\terrcode(ERRCODE_INVALID_PARAMETER_VALUE),\n\t\t\t\terrmsg(\"invalid wait event ID %u\", eventId));\n\nI was just wondering if it should also check the eventId\nthat has been allocated though it needs to take the spinlock\nand GetWaitEventExtensionIdentifier() doesn't take it into account.\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Mon, 31 Jul 2023 15:53:27 +0900",
"msg_from": "Masahiro Ikeda <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Support to define custom wait events for extensions"
},
{
"msg_contents": "On Mon, Jul 31, 2023 at 12:07:40PM +0530, Bharath Rupireddy wrote:\n> We're not giving up and returning an unknown state in the v10 patch\n> unlike v9, no? This comment needs to change.\n\nRight. Better to be consistent with lwlock.c here.\n\n>> No, it cannot because we need the custom wait event string to be\n>> loaded in the same connection as the one querying pg_stat_activity.\n>> A different thing that can be done here is to use background_psql()\n>> with a query_until(), though I am not sure that this is worth doing\n>> here.\n> \n> -1 to go to the background_psql and query_until route. However,\n> enhancing the comment might help \"we need the custom wait event string\n> to be loaded in the same connection as .....\". Having said this, I\n> don't quite understand the point of having worker_spi_init() when we\n> say clearly how to use shmem hooks and custom wait events. If its\n> intention is to show loading of shared memory to a backend via a\n> function, do we really need it in worker_spi? I don't mind removing it\n> if it's not adding any significant value.\n\nIt seems to initialize the state of the worker_spi, so attaching a\nfunction to this stuff makes sense to me, just for the sake of testing\nall that.\n--\nMichael",
"msg_date": "Mon, 31 Jul 2023 15:55:51 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support to define custom wait events for extensions"
},
{
"msg_contents": "On Mon, Jul 31, 2023 at 03:53:27PM +0900, Masahiro Ikeda wrote:\n> I think the order in which they are mentioned should be matched. I mean that\n> - so an <literal>LWLock</literal> or <literal>Extension</literal> wait\n> + so an <literal>Extension</literal> or <literal>LWLock</literal> wait\n\nMakes sense.\n\n> \t/* This should only be called for user-defined wait event. */\n> \tif (eventId < NUM_BUILTIN_WAIT_EVENT_EXTENSION)\n> \t\tereport(ERROR,\n> \t\t\t\terrcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> \t\t\t\terrmsg(\"invalid wait event ID %u\", eventId));\n> \n> I was just wondering if it should also check the eventId\n> that has been allocated though it needs to take the spinlock\n> and GetWaitEventExtensionIdentifier() doesn't take it into account.\n\nWhat kind of extra check do you have in mind? Once WAIT_EVENT_ID_MASK\nis applied, we already know that we don't have something larger than\nPG_UNIT16_MAX, or perhaps you want to cross-check this number with\nwhat nextId holds in shared memory and that we don't have a number\nbetween nextId and PG_UNIT16_MAX? I am not sure that we need to care\nmuch about that this much in this code path, and I'd rather avoid\ntaking an extra time the spinlock just for a cross-check.\n\nAttaching a v11 based on Bharath's feedback and yours, for now. I\nhave also applied the addition of the two masking variables in\nwait_event.c separately with 7395a90.\n--\nMichael",
"msg_date": "Mon, 31 Jul 2023 16:28:16 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support to define custom wait events for extensions"
},
{
"msg_contents": "On 2023-07-31 16:28, Michael Paquier wrote:\n> On Mon, Jul 31, 2023 at 03:53:27PM +0900, Masahiro Ikeda wrote:\n>> \t/* This should only be called for user-defined wait event. */\n>> \tif (eventId < NUM_BUILTIN_WAIT_EVENT_EXTENSION)\n>> \t\tereport(ERROR,\n>> \t\t\t\terrcode(ERRCODE_INVALID_PARAMETER_VALUE),\n>> \t\t\t\terrmsg(\"invalid wait event ID %u\", eventId));\n>> \n>> I was just wondering if it should also check the eventId\n>> that has been allocated though it needs to take the spinlock\n>> and GetWaitEventExtensionIdentifier() doesn't take it into account.\n> \n> What kind of extra check do you have in mind? Once WAIT_EVENT_ID_MASK\n> is applied, we already know that we don't have something larger than\n> PG_UNIT16_MAX, or perhaps you want to cross-check this number with\n> what nextId holds in shared memory and that we don't have a number\n> between nextId and PG_UNIT16_MAX? I am not sure that we need to care\n> much about that this much in this code path, and I'd rather avoid\n> taking an extra time the spinlock just for a cross-check.\n\nOK. I assumed to check that we don't have a number between nextId and\nPG_UNIT16_MAX.\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Mon, 31 Jul 2023 16:49:14 +0900",
"msg_from": "Masahiro Ikeda <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Support to define custom wait events for extensions"
},
{
"msg_contents": "On Mon, Jul 31, 2023 at 12:58 PM Michael Paquier <[email protected]> wrote:\n>\n>\n> Attaching a v11 based on Bharath's feedback and yours, for now. I\n> have also applied the addition of the two masking variables in\n> wait_event.c separately with 7395a90.\n\n+uint32 WaitEventExtensionNew(void)\n+</programlisting>\n+ Next, each process needs to associate the wait event allocated previously\n+ to a user-facing custom string, which is something done by calling:\n+<programlisting>\n+void WaitEventExtensionRegisterName(uint32 wait_event_info, const\nchar *wait_event_name)\n+</programlisting>\n+ An example can be found in\n<filename>src/test/modules/worker_spi</filename>\n+ in the PostgreSQL source tree.\n+ </para>\n\nDo you think it's worth adding a note here in the docs about an\nexternal module defining more than one custom wait event? A pseudo\ncode if possible or just a note? Also, how about a XXX comment atop\nWaitEventExtensionNew and/or WaitEventExtensionRegisterName on the\npossibility of extending the functions to support allocation of more\nthan one custom wait events?\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 31 Jul 2023 13:37:49 +0530",
"msg_from": "Bharath Rupireddy <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support to define custom wait events for extensions"
},
{
"msg_contents": "At Mon, 31 Jul 2023 16:28:16 +0900, Michael Paquier <[email protected]> wrote in \n> Attaching a v11 based on Bharath's feedback and yours, for now. I\n> have also applied the addition of the two masking variables in\n> wait_event.c separately with 7395a90.\n\n+/*\n+ * Return the name of an wait event ID for extension.\n+ */\n+static const char *\n+GetWaitEventExtensionIdentifier(uint16 eventId)\n\nThis looks inconsistent. Shouldn't it be GetWaitEventExtentionName()?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Mon, 31 Jul 2023 17:10:21 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support to define custom wait events for extensions"
},
{
"msg_contents": "On Mon, Jul 31, 2023 at 01:37:49PM +0530, Bharath Rupireddy wrote:\n> Do you think it's worth adding a note here in the docs about an\n> external module defining more than one custom wait event? A pseudo\n> code if possible or just a note? Also, how about a XXX comment atop\n> WaitEventExtensionNew and/or WaitEventExtensionRegisterName on the\n> possibility of extending the functions to support allocation of more\n> than one custom wait events?\n\nI am not sure that any of that is necessary. Anyway, I have applied\nv11 to get the basics done.\n\nNow, I agree that a WaitEventExtensionMultiple() may come in handy,\nparticularly for postgres_fdw that uses WAIT_EVENT_EXTENSION three\ntimes.\n--\nMichael",
"msg_date": "Mon, 31 Jul 2023 19:22:51 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support to define custom wait events for extensions"
},
{
"msg_contents": "On Mon, Jul 31, 2023 at 05:10:21PM +0900, Kyotaro Horiguchi wrote:\n> +/*\n> + * Return the name of an wait event ID for extension.\n> + */\n> +static const char *\n> +GetWaitEventExtensionIdentifier(uint16 eventId)\n> \n> This looks inconsistent. Shouldn't it be GetWaitEventExtentionName()?\n\nThis is an inspiration from GetLWLockIdentifier(), which is kind of OK\nby me. If there is a consensus in changing that, fine by me, of\ncourse.\n--\nMichael",
"msg_date": "Mon, 31 Jul 2023 19:24:09 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support to define custom wait events for extensions"
},
{
"msg_contents": "On Mon, Jul 31, 2023 at 3:54 PM Michael Paquier <[email protected]> wrote:\n>\n> On Mon, Jul 31, 2023 at 05:10:21PM +0900, Kyotaro Horiguchi wrote:\n> > +/*\n> > + * Return the name of an wait event ID for extension.\n> > + */\n> > +static const char *\n> > +GetWaitEventExtensionIdentifier(uint16 eventId)\n> >\n> > This looks inconsistent. Shouldn't it be GetWaitEventExtentionName()?\n>\n> This is an inspiration from GetLWLockIdentifier(), which is kind of OK\n> by me. If there is a consensus in changing that, fine by me, of\n> course.\n\n+1 to GetWaitEventExtensionIdentifier for consistency with LWLock's counterpart.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 31 Jul 2023 15:59:29 +0530",
"msg_from": "Bharath Rupireddy <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support to define custom wait events for extensions"
},
{
"msg_contents": "On 2023-07-31 19:22, Michael Paquier wrote:\n> I am not sure that any of that is necessary. Anyway, I have applied\n> v11 to get the basics done.\n\nThanks for committing the main patch.\n\nIn my understanding, the rest works are\n* to support WaitEventExtensionMultiple()\n* to replace WAIT_EVENT_EXTENSION to custom wait events\n\nDo someone already works for them? If not, I'll consider\nhow to realize them.\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Tue, 01 Aug 2023 11:51:35 +0900",
"msg_from": "Masahiro Ikeda <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Support to define custom wait events for extensions"
},
{
"msg_contents": "On Tue, Aug 01, 2023 at 11:51:35AM +0900, Masahiro Ikeda wrote:\n> Thanks for committing the main patch.\n> \n> In my understanding, the rest works are\n> * to support WaitEventExtensionMultiple()\n> * to replace WAIT_EVENT_EXTENSION to custom wait events\n> \n> Do someone already works for them? If not, I'll consider\n> how to realize them.\n\nNote that postgres_fdw and dblink use WAIT_EVENT_EXTENSION, but have\nno dependency to shared_preload_libraries. Perhaps these could just\nuse a dynamic handling but that deserves a separate discussion because\nof the fact that they'd need shared memory without being able to\nrequest it. autoprewarm.c is much simpler.\n--\nMichael",
"msg_date": "Tue, 1 Aug 2023 12:14:49 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support to define custom wait events for extensions"
},
{
"msg_contents": "Hi,\n\nOn 2023-08-01 12:14:49 +0900, Michael Paquier wrote:\n> On Tue, Aug 01, 2023 at 11:51:35AM +0900, Masahiro Ikeda wrote:\n> > Thanks for committing the main patch.\n> > \n> > In my understanding, the rest works are\n> > * to support WaitEventExtensionMultiple()\n> > * to replace WAIT_EVENT_EXTENSION to custom wait events\n> > \n> > Do someone already works for them? If not, I'll consider\n> > how to realize them.\n> \n> Note that postgres_fdw and dblink use WAIT_EVENT_EXTENSION, but have\n> no dependency to shared_preload_libraries. Perhaps these could just\n> use a dynamic handling but that deserves a separate discussion because\n> of the fact that they'd need shared memory without being able to\n> request it. autoprewarm.c is much simpler.\n\nThis is why the scheme as implemented doesn't really make sense to me. It'd be\nmuch easier to use if we had a shared hashtable with the identifiers than\nwhat's been merged now.\n\nIn plenty of cases it's not realistic for an extension library to run in each\nbackend, while still needing to wait for things.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 31 Jul 2023 20:23:49 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support to define custom wait events for extensions"
},
{
"msg_contents": "On 2023-08-01 12:23, Andres Freund wrote:\n> Hi,\n> \n> On 2023-08-01 12:14:49 +0900, Michael Paquier wrote:\n>> On Tue, Aug 01, 2023 at 11:51:35AM +0900, Masahiro Ikeda wrote:\n>> > Thanks for committing the main patch.\n>> >\n>> > In my understanding, the rest works are\n>> > * to support WaitEventExtensionMultiple()\n>> > * to replace WAIT_EVENT_EXTENSION to custom wait events\n>> >\n>> > Do someone already works for them? If not, I'll consider\n>> > how to realize them.\n>> \n>> Note that postgres_fdw and dblink use WAIT_EVENT_EXTENSION, but have\n>> no dependency to shared_preload_libraries. Perhaps these could just\n>> use a dynamic handling but that deserves a separate discussion because\n>> of the fact that they'd need shared memory without being able to\n>> request it. autoprewarm.c is much simpler.\n> \n> This is why the scheme as implemented doesn't really make sense to me. \n> It'd be\n> much easier to use if we had a shared hashtable with the identifiers \n> than\n> what's been merged now.\n> \n> In plenty of cases it's not realistic for an extension library to run \n> in each\n> backend, while still needing to wait for things.\n\nOK, I'll try to make a PoC patch.\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Wed, 02 Aug 2023 18:34:15 +0900",
"msg_from": "Masahiro Ikeda <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Support to define custom wait events for extensions"
},
{
"msg_contents": "On Wed, Aug 02, 2023 at 06:34:15PM +0900, Masahiro Ikeda wrote:\n> On 2023-08-01 12:23, Andres Freund wrote:\n>> This is why the scheme as implemented doesn't really make sense to me.\n>> It'd be\n>> much easier to use if we had a shared hashtable with the identifiers\n>> than\n>> what's been merged now.\n>> \n>> In plenty of cases it's not realistic for an extension library to run in\n>> each\n>> backend, while still needing to wait for things.\n> \n> OK, I'll try to make a PoC patch.\n\nHmm. There are a few things to take into account here:\n- WaitEventExtensionShmemInit() should gain a dshash_create(), to make\nsure that the shared table is around, and we are going to have a\nreference to it in WaitEventExtensionCounterData, saved from\ndshash_get_hash_table_handle().\n- The hash table entries could just use nextId as key to look at the\nentries, with entries added during WaitEventExtensionNew(), and use as\ncontents the name of the wait event. We are going to need a fixed\nsize for these custom strings, but perhaps a hard limit of 256\ncharacters for each entry of the hash table is more than enough for\nmost users?\n- WaitEventExtensionRegisterName() could be removed, I guess, replaced\nby a single WaitEventExtensionNew(), as of:\nuint32 WaitEventExtensionNew(const char *event_name);\n- GetWaitEventExtensionIdentifier() needs to switch to a lookup of the\nshared hash table, based on the eventId.\n\nAll that would save from the extra WaitEventExtensionRegisterName()\nneeded in the backends to keep a track of the names, indeed.\n--\nMichael",
"msg_date": "Tue, 8 Aug 2023 08:54:10 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support to define custom wait events for extensions"
},
{
"msg_contents": "On 2023-08-08 08:54, Michael Paquier wrote:\n> On Wed, Aug 02, 2023 at 06:34:15PM +0900, Masahiro Ikeda wrote:\n>> On 2023-08-01 12:23, Andres Freund wrote:\n>>> This is why the scheme as implemented doesn't really make sense to \n>>> me.\n>>> It'd be\n>>> much easier to use if we had a shared hashtable with the identifiers\n>>> than\n>>> what's been merged now.\n>>> \n>>> In plenty of cases it's not realistic for an extension library to run \n>>> in\n>>> each\n>>> backend, while still needing to wait for things.\n>> \n>> OK, I'll try to make a PoC patch.\n> \n> Hmm. There are a few things to take into account here:\n> - WaitEventExtensionShmemInit() should gain a dshash_create(), to make\n> sure that the shared table is around, and we are going to have a\n> reference to it in WaitEventExtensionCounterData, saved from\n> dshash_get_hash_table_handle().\n> - The hash table entries could just use nextId as key to look at the\n> entries, with entries added during WaitEventExtensionNew(), and use as\n> contents the name of the wait event. We are going to need a fixed\n> size for these custom strings, but perhaps a hard limit of 256\n> characters for each entry of the hash table is more than enough for\n> most users?\n> - WaitEventExtensionRegisterName() could be removed, I guess, replaced\n> by a single WaitEventExtensionNew(), as of:\n> uint32 WaitEventExtensionNew(const char *event_name);\n> - GetWaitEventExtensionIdentifier() needs to switch to a lookup of the\n> shared hash table, based on the eventId.\n> \n> All that would save from the extra WaitEventExtensionRegisterName()\n> needed in the backends to keep a track of the names, indeed.\n\nThank you for pointing the direction of implementation.\n\nI am thinking a bit that we also need another hash where the key\nis a custom string. For extensions that have no dependencies\nwith shared_preload_libraries, I think we need to avoid that \nWaitEventExtensionNew() is called repeatedly and a new eventId\nis issued each time.\n\nSo, is it better to have another hash where the key is\na custom string and uniqueness is identified by it to determine\nif a new eventId should be issued?\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Tue, 08 Aug 2023 09:39:02 +0900",
"msg_from": "Masahiro Ikeda <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Support to define custom wait events for extensions"
},
{
"msg_contents": "On Tue, Aug 08, 2023 at 09:39:02AM +0900, Masahiro Ikeda wrote:\n> I am thinking a bit that we also need another hash where the key\n> is a custom string. For extensions that have no dependencies\n> with shared_preload_libraries, I think we need to avoid that\n> WaitEventExtensionNew() is called repeatedly and a new eventId\n> is issued each time.\n> \n> So, is it better to have another hash where the key is\n> a custom string and uniqueness is identified by it to determine\n> if a new eventId should be issued?\n\nYeah, I was also considering if something like that is really\nnecessary, but I cannot stop worrying about adding more contention to\nthe hash table lookup each time an extention needs to retrieve an\nevent ID to use for WaitLatch() or such. The results of the hash\ntable lookups could be cached in each backend, still it creates an\nextra cost when combined with queries running in parallel on\npg_stat_activity that do the opposite lookup event_id -> event_name.\nMy suggestion adds more load to AddinShmemInitLock instead.\n\nHence, I was just thinking about relying on AddinShmemInitLock to\ninsert new entries in the hash table, only if its shmem state is not\nfound when calling ShmemInitStruct(). Or perhaps it is just OK to not\ncare about the impact event_name -> event_id lookup for fresh\nconnections, and just bite the bullet with two lookup keys instead of\nrelying on AddinShmemInitLock for the addition of new entries in the\nhash table? Hmm, perhaps you're right with your approach here, at the\nend.\n--\nMichael",
"msg_date": "Tue, 8 Aug 2023 10:05:09 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support to define custom wait events for extensions"
},
{
"msg_contents": "On 2023-08-08 10:05, Michael Paquier wrote:\n> On Tue, Aug 08, 2023 at 09:39:02AM +0900, Masahiro Ikeda wrote:\n>> I am thinking a bit that we also need another hash where the key\n>> is a custom string. For extensions that have no dependencies\n>> with shared_preload_libraries, I think we need to avoid that\n>> WaitEventExtensionNew() is called repeatedly and a new eventId\n>> is issued each time.\n>> \n>> So, is it better to have another hash where the key is\n>> a custom string and uniqueness is identified by it to determine\n>> if a new eventId should be issued?\n> \n> Yeah, I was also considering if something like that is really\n> necessary, but I cannot stop worrying about adding more contention to\n> the hash table lookup each time an extention needs to retrieve an\n> event ID to use for WaitLatch() or such. The results of the hash\n> table lookups could be cached in each backend, still it creates an\n> extra cost when combined with queries running in parallel on\n> pg_stat_activity that do the opposite lookup event_id -> event_name.\n> \n> My suggestion adds more load to AddinShmemInitLock instead.\n> Hence, I was just thinking about relying on AddinShmemInitLock to\n> insert new entries in the hash table, only if its shmem state is not\n> found when calling ShmemInitStruct(). Or perhaps it is just OK to not\n> care about the impact event_name -> event_id lookup for fresh\n> connections, and just bite the bullet with two lookup keys instead of\n> relying on AddinShmemInitLock for the addition of new entries in the\n> hash table? Hmm, perhaps you're right with your approach here, at the\n> end.\n\nFor the first idea, I agree that if a lot of new connections come in,\nit is easy to leads many conflicts. The only solution I can think of\nis to use connection pooling now.\n\nIIUC, the second idea is based on the premise of allocating their shared\nmemory for each extension. In that case, the cons of the first idea can\nbe solved because the wait event infos are saved in their shared memory \nand\nthey don't need call WaitEventExtensionNew() anymore. Is that right?\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION",
"msg_date": "Tue, 08 Aug 2023 20:36:20 +0900",
"msg_from": "Masahiro Ikeda <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Support to define custom wait events for extensions"
},
{
"msg_contents": "I accidentally attached a patch in one previous email.\nBut, you don't need to check it, sorry.\n(v1-0001-Change-to-manage-custom-wait-events-in-shared-hash.patch)\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Tue, 08 Aug 2023 20:40:53 +0900",
"msg_from": "Masahiro Ikeda <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Support to define custom wait events for extensions"
},
{
"msg_contents": "On Tue, Aug 08, 2023 at 08:40:53PM +0900, Masahiro Ikeda wrote:\n> I accidentally attached a patch in one previous email.\n> But, you don't need to check it, sorry.\n> (v1-0001-Change-to-manage-custom-wait-events-in-shared-hash.patch)\n\nSure, no worries. With that in place, the init function in worker_spi\ncan be removed.\n--\nMichael",
"msg_date": "Wed, 9 Aug 2023 07:40:59 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support to define custom wait events for extensions"
},
{
"msg_contents": "Hi,\n\nOn 2023-08-08 08:54:10 +0900, Michael Paquier wrote:\n> - WaitEventExtensionShmemInit() should gain a dshash_create(), to make\n> sure that the shared table is around, and we are going to have a\n> reference to it in WaitEventExtensionCounterData, saved from\n> dshash_get_hash_table_handle().\n\nI'm not even sure it's worth using dshash here. Why don't we just create a\ndecently sized dynahash (say 128 enties) in shared memory? We overallocate\nshared memory by enough that there's a lot of headroom for further entries, in\nthe rare cases they're needed.\n\n> We are going to need a fixed size for these custom strings, but perhaps a\n> hard limit of 256 characters for each entry of the hash table is more than\n> enough for most users?\n\nI'd just use NAMEDATALEN.\n\n- Andres\n\n\n",
"msg_date": "Tue, 8 Aug 2023 15:59:54 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support to define custom wait events for extensions"
},
{
"msg_contents": "On Tue, Aug 08, 2023 at 03:59:54PM -0700, Andres Freund wrote:\n> On 2023-08-08 08:54:10 +0900, Michael Paquier wrote:\n>> - WaitEventExtensionShmemInit() should gain a dshash_create(), to make\n>> sure that the shared table is around, and we are going to have a\n>> reference to it in WaitEventExtensionCounterData, saved from\n>> dshash_get_hash_table_handle().\n> \n> I'm not even sure it's worth using dshash here. Why don't we just create a\n> decently sized dynahash (say 128 enties) in shared memory? We overallocate\n> shared memory by enough that there's a lot of headroom for further entries, in\n> the rare cases they're needed.\n\nThe question here would be how many slots the most popular extensions\nactually need, but that could always be sized up based on the\nfeedback.\n\n>> We are going to need a fixed size for these custom strings, but perhaps a\n>> hard limit of 256 characters for each entry of the hash table is more than\n>> enough for most users?\n> \n> I'd just use NAMEDATALEN.\n\nBoth suggestions WFM.\n--\nMichael",
"msg_date": "Wed, 9 Aug 2023 08:03:29 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support to define custom wait events for extensions"
},
{
"msg_contents": "Hi,\n\nOn 2023-08-09 08:03:29 +0900, Michael Paquier wrote:\n> On Tue, Aug 08, 2023 at 03:59:54PM -0700, Andres Freund wrote:\n> > On 2023-08-08 08:54:10 +0900, Michael Paquier wrote:\n> >> - WaitEventExtensionShmemInit() should gain a dshash_create(), to make\n> >> sure that the shared table is around, and we are going to have a\n> >> reference to it in WaitEventExtensionCounterData, saved from\n> >> dshash_get_hash_table_handle().\n> > \n> > I'm not even sure it's worth using dshash here. Why don't we just create a\n> > decently sized dynahash (say 128 enties) in shared memory? We overallocate\n> > shared memory by enough that there's a lot of headroom for further entries, in\n> > the rare cases they're needed.\n> \n> The question here would be how many slots the most popular extensions\n> actually need, but that could always be sized up based on the\n> feedback.\n\nOn a default initdb (i.e. 128MB s_b), after explicitly disabling huge pages,\nwe over-allocate shared memory by by 1922304 bytes, according to\npg_shmem_allocations. We allow that memory to be used for things like shared\nhashtables that grow beyond their initial size. So even if the hash table's\nstatic size is too small, there's lots of room to grow, even on small systems.\n\nJust because it's somewhat interesting: With huge pages available and not\ndisabled, we over-allocate by 3364096 bytes.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 8 Aug 2023 17:37:10 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support to define custom wait events for extensions"
},
{
"msg_contents": "Hi,\n\nThanks for your comments to v1 patch.\n\nI made v2 patch. Main changes are\n* change to NAMEDATALEN\n* change to use dynahash from dshash\n* remove worker_spi_init()\n* create second hash table to find a event id from a name to\n identify uniquness. It enable extensions which don't use share\n memory for their use to define custom wait events because\n WaitEventExtensionNew() will not allocate duplicate wait events.\n* create PoC patch to show that extensions, which don't use shared\n memory for their use, can define custom wait events.\n (v2-0002-poc-custom-wait-event-for-dblink.patch)\n\nI'm worrying about\n* Is 512(wee_hash_max_size) the maximum number of the custom wait\n events sufficient?\n* Is there any way to not force extensions that don't use shared\n memory for their use like dblink to acquire AddinShmemInitLock?;\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION",
"msg_date": "Wed, 09 Aug 2023 20:10:42 +0900",
"msg_from": "Masahiro Ikeda <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Support to define custom wait events for extensions"
},
{
"msg_contents": "Hi,\n\nOn 2023-08-09 20:10:42 +0900, Masahiro Ikeda wrote:\n> * Is there any way to not force extensions that don't use shared\n> memory for their use like dblink to acquire AddinShmemInitLock?;\n\nI think the caller shouldn't need to do deal with AddinShmemInitLock at all.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 9 Aug 2023 07:41:51 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support to define custom wait events for extensions"
},
{
"msg_contents": "On Wed, Aug 09, 2023 at 08:10:42PM +0900, Masahiro Ikeda wrote:\n> * create second hash table to find a event id from a name to\n> identify uniquness. It enable extensions which don't use share\n> memory for their use to define custom wait events because\n> WaitEventExtensionNew() will not allocate duplicate wait events.\n\nOkay, a second hash table to check if events are registered works for\nme.\n\n> * create PoC patch to show that extensions, which don't use shared\n> memory for their use, can define custom wait events.\n> (v2-0002-poc-custom-wait-event-for-dblink.patch)\n> \n> I'm worrying about\n> * Is 512(wee_hash_max_size) the maximum number of the custom wait\n> events sufficient?\n\nThanks for sending a patch!\n\nI'm OK to start with that. This could always be revisited later, but\neven for a server loaded with a bunch of extensions that looks more\nthan enough to me.\n\n> * Is there any way to not force extensions that don't use shared\n> memory for their use like dblink to acquire AddinShmemInitLock?;\n\nYes, they don't need it at all as the dynahashes are protected with\ntheir own LWLocks.\n\n+++ b/src/backend/storage/lmgr/lwlocknames.txt\n@@ -53,3 +53,4 @@ XactTruncationLock 44\n # 45 was XactTruncationLock until removal of BackendRandomLock\n WrapLimitsVacuumLock 46\n NotifyQueueTailLock 47\n+WaitEventExtensionLock 48\n\nThis new LWLock needs to be added to wait_event_names.txt, or it won't\nbe reported to pg_stat_activity and it would not be documented when\nthe sgml docs are generated from the txt data.\n\n-extern uint32 WaitEventExtensionNew(void);\n-extern void WaitEventExtensionRegisterName(uint32 wait_event_info,\n- const char *wait_event_name);\n+extern uint32 WaitEventExtensionNew(const char *wait_event_name);\nLooks about right, and the docs are refreshed.\n\n+static const int wee_hash_init_size = 128;\n+static const int wee_hash_max_size = 512;\nI would use a few #defines with upper-case characters here instead as\nthese are constants for us.\n\nNow that it is possible to rely on LWLocks for the hash tables, more\ncleanup is possible in worker_spi, with the removal of\nworker_spi_state, the shmem hooks and their routines. The only thing\nthat should be needed is something like that at the start of\nworker_spi_main() (same position as worker_spi_shmem_init now):\n+static uint32 wait_event = 0;\n[...]\n+ if (wait_event == 0)\n+ wait_event = WaitEventExtensionNew(\"worker_spi_main\");\n\nThe updates in 001_worker_spi.pl look OK.\n\n+ * The entry must be stored because it's registered in\n+ * WaitEventExtensionNew().\n */\n- eventId -= NUM_BUILTIN_WAIT_EVENT_EXTENSION;\n+ if (!entry)\n+ ereport(ERROR,\n+ errmsg(\"could not find the name for custom wait event ID %u\", eventId));\nYeah, I think that's better than just falling back to \"extension\". An\nID reported in pg_stat_activity should always have an entry, or we\nhave race conditions. This should be an elog(ERROR), as in\nthis-error-shall-never-happen. No need to translate the error string,\nas well (the docs have been updated with this change. 
thanks!).\n\nAdditionally, LWLockHeldByMeInMode(AddinShmemInitLock) in\nWaitEventExtensionNew() should not be needed, thanks to\nWaitEventExtensionLock.\n\n+ * WaitEventExtensionNameHash is used to find the name from a event id.\n+ * It enables all backends look up them without additional processing\n+ * per backend like LWLockRegisterTranche().\n\nIt does not seem necessary to mention LWLockRegisterTranche().\n\n+ * WaitEventExtensionIdHash is used to find the event id from a name.\n+ * Since it can identify uniquness by the names, extensions that do not\n+ * use shared memory also be able to define custom wait events without\n+ * defining duplicate wait events.\n\nPerhaps this could just say that this table is necessary to ensure\nthat we don't have duplicated entries when registering new strings\nwith their IDs? s/uniquness/uniqueness/. The second part of the\nsentence about shared memory does not seem necessary now.\n\n+ sz = add_size(sz, hash_estimate_size(wee_hash_init_size,\n+ sizeof(WaitEventExtensionNameEntry)));\n+ sz = add_size(sz, hash_estimate_size(wee_hash_init_size,\n+ sizeof(WaitEventExtensionIdEntry)));\n\nErr, this should use the max size, and not the init size for the size\nestimation, no?\n\n+ if (strlen(wait_event_name) >= NAMEDATALEN)\n+ ereport(ERROR,\n+ errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),\n+ errmsg(\"wait event name is too long\"));\nThis could just be an elog(ERROR), I assume, as that could only be\nreached by developers. The string needs to be rewritten, like \"cannot\nuse custom wait event string longer than %u characters\", or something\nlike that.\n\n+ if (wait_event_info == NULL)\n+ {\n+ wait_event_info = (uint32 *) MemoryContextAlloc(TopMemoryContext, sizeof(uint32));\n+ LWLockAcquire(AddinShmemInitLock, LW_EXCLUSIVE);\n+ *wait_event_info = WaitEventExtensionNew(\"dblink_get_con\");\n+ LWLockRelease(AddinShmemInitLock);\n+ }\n+ conn = libpqsrv_connect(connstr, *wait_event_info)\n\nIn 0002. Caching the value statically in the backend is what you\nshould do, but a pointer, an allocation to the TopMemoryContext and a\ndependency to AddinShmemInitLock should not be necessary when dealing\nwith a uint32. 
You could use an initial value of 0, for example, or\njust PG_WAIT_EXTENSION but the latter is not really necessary and\nwould bypass the sanity checks.\n\n+ /* Register the new custom wait event in the shared hash table */\n+ LWLockAcquire(WaitEventExtensionLock, LW_EXCLUSIVE);\n+\n+ name_entry = (WaitEventExtensionNameEntry *)\n+ hash_search(WaitEventExtensionNameHash, &eventId, HASH_ENTER, &found);\n+ Assert(!found);\n+ strlcpy(name_entry->wait_event_name, wait_event_name, sizeof(name_entry->wait_event_name));\n+\n+ id_entry = (WaitEventExtensionIdEntry *)\n+ hash_search(WaitEventExtensionIdHash, &wait_event_name, HASH_ENTER, &found);\n+ Assert(!found);\n+ id_entry->event_id = eventId;\n+\n+ LWLockRelease(WaitEventExtensionLock);\n\nThe logic added to WaitEventExtensionNew() is a bit racy, where it\nwould be possible with the same entry to be added multiple times.\nImagine for example the following:\n- Process 1 does WaitEventExtensionNew(\"foo1\"), does not find the\nentry by name in hash_search, gets an eventId of 1, releases the\nspinlock.\n- Process 2 calls as well WaitEventExtensionNew(\"foo1\"), does not find\nthe entry by name because it has not been added by process 1 yet,\nallocates an eventId of 2\n- Process 2 takes first WaitEventExtensionLock in LW_EXCLUSIVE to add\nentry \"foo1\", there is no entry by name, so one is added for the ID.\nWaitEventExtensionLock is released\n- Process 1, that was waiting on WaitEventExtensionLock, can now take\nit in exclusive mode. It finds an entry by name for \"foo1\", fails the\nassertion because an entry is found.\n\nI think that the ordering of WaitEventExtensionNew() should be\nreworked a bit. This order should be safer.\n- Take WaitEventExtensionLock in shared mode, look if there's an entry\nby name, release the lock. The patch does that.\n- If an entry is found, return, we're OK. The patch does that.\n- Take again WaitEventExtensionLock in exclusive mode.\n- Look again at the hash table with the name given, in case somebody\nhas inserted an equivalent entry in the short window where the lock\nwas not held.\n-- If an entry is found, release the lock and leave, we're OK.\n-- If an entry is not found, keep the lock.\n- Acquire the spinlock, and get a new event ID. Release spinlock.\n- Add the new entries to both tables, both assertions on found are OK\nto have.\n- Release LWLock and leave.\n--\nMichael",
"msg_date": "Thu, 10 Aug 2023 08:45:43 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support to define custom wait events for extensions"
},
{
"msg_contents": "Hi,\n\nThanks for your comments about the v2 patches. I updated to v3 patches.\n\nThe main changes are:\n* remove the AddinShmemInitLock assertion\n* add the new lock (WaitEventExtensionLock) to wait_event_names.txt\n* change \"static const int wee_hash_XXX_size\" to \"#define XXX\"\n* simplify worker_spi. I removed codes related to share memory and\n try to allocate the new wait event before waiting per background \nworker.\n* change to elog from ereport because the errors are for developers.\n* revise comments as advised.\n* fix the request size for shared memory correctly\n* simplify dblink.c\n* fix process ordering of WaitEventExtensionNew() as advised to\n avoid leading illegal state.\n\nIn addition, I change the followings:\n* update about custom wait events in sgml. we don't need to use\n shmem_startup_hook.\n* change the hash names for readability.\n (ex. WaitEventExtensionNameHash -> WaitEventExtensionHashById)\n* fix a bug to fail to get already defined events by names\n because HASH_BLOBS was specified. HASH_STRINGS is right since\n the key is C strings.\n\nI create a new entry in commitfest for CI testing.\n(https://commitfest.postgresql.org/44/4494/)\n\nThanks for closing the former entry.\n(https://commitfest.postgresql.org/43/4368/)\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION",
"msg_date": "Thu, 10 Aug 2023 13:08:39 +0900",
"msg_from": "Masahiro Ikeda <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Support to define custom wait events for extensions"
},
{
"msg_contents": "On Thu, Aug 10, 2023 at 01:08:39PM +0900, Masahiro Ikeda wrote:\n> In addition, I change the followings:\n> * update about custom wait events in sgml. we don't need to use\n> shmem_startup_hook.\n> * change the hash names for readability.\n> (ex. WaitEventExtensionNameHash -> WaitEventExtensionHashById)\n> * fix a bug to fail to get already defined events by names\n> because HASH_BLOBS was specified. HASH_STRINGS is right since\n> the key is C strings.\n\nThat's what I get based on what ShmemInitHash() relies on.\n\nI got a few more comments about v3. Overall this looks much better.\n\nThis time the ordering of the operations in WaitEventExtensionNew()\nlooks much better.\n\n+ * The entry must be stored because it's registered in\n+ * WaitEventExtensionNew().\nNot sure of the value added by this comment, I would remove it.\n\n+ if (!entry)\n+ elog(ERROR, \"could not find the name for custom wait event ID %u\",\n+ eventId);\n\nOr a simpler \"could not find custom wait event name for ID %u\"?\n\n+static HTAB *WaitEventExtensionHashById; /* find names from ids */\n+static HTAB *WaitEventExtensionHashByName; /* find ids from names */\n\nThese names are OK here.\n\n+/* Local variables */\n+static uint32 worker_spi_wait_event = 0;\nThat's a cached value, used as a placeholder for the custom wait event\nID found from the table.\n\n+ HASH_ELEM | HASH_STRINGS); /* key is Null-terminated C strings */\nLooks obvious to me based on the code, I would remove this note.\n\n+/* hash table entres */\ns/entres/entries/\n\n+ /*\n+ * Allocate and register a new wait event. But, we need to recheck because\n+ * other processes could already do so while releasing the lock.\n+ */\n\nCould be reworked for the second sentence, like \"Recheck if the event\nexists, as it could be possible that a concurrent process has inserted\none with the same name while the lock was previously released.\"\n\n+ /* Recheck */\nDuplicate.\n\n /* OK to make connection */\n- conn = libpqsrv_connect(connstr, WAIT_EVENT_EXTENSION);\n+ if (wait_event_info == 0)\n+ wait_event_info = WaitEventExtensionNew(\"dblink_get_con\");\n+ conn = libpqsrv_connect(connstr, wait_event_info);\n\nIt is going to be difficult to make that simpler ;)\n\nThis looks correct, but perhaps we need to think harder about the\ncustom event names and define a convention when more of this stuff is\nadded to the core modules.\n--\nMichael",
"msg_date": "Thu, 10 Aug 2023 17:37:55 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support to define custom wait events for extensions"
},
{
"msg_contents": "On Thu, Aug 10, 2023 at 05:37:55PM +0900, Michael Paquier wrote:\n> This looks correct, but perhaps we need to think harder about the\n> custom event names and define a convention when more of this stuff is\n> added to the core modules.\n\nOkay, I have put my hands on that, fixing a couple of typos, polishing\na couple of comments, clarifying the docs and applying an indentation.\nAnd here is a v4.\n\nAny thoughts or comments? I'd like to apply that soon, so as we are\nable to move on with the wait event catalog and assigning custom wait\nevents to the other in-core modules.\n--\nMichael",
"msg_date": "Mon, 14 Aug 2023 08:06:29 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support to define custom wait events for extensions"
},
{
"msg_contents": "On 2023-08-14 08:06, Michael Paquier wrote:\n> On Thu, Aug 10, 2023 at 05:37:55PM +0900, Michael Paquier wrote:\n>> This looks correct, but perhaps we need to think harder about the\n>> custom event names and define a convention when more of this stuff is\n>> added to the core modules.\n> \n> Okay, I have put my hands on that, fixing a couple of typos, polishing\n> a couple of comments, clarifying the docs and applying an indentation.\n> And here is a v4.\n> \n> Any thoughts or comments? I'd like to apply that soon, so as we are\n> able to move on with the wait event catalog and assigning custom wait\n> events to the other in-core modules.\n\nThanks! I confirmed the changes, and all tests passed.\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Mon, 14 Aug 2023 12:31:05 +0900",
"msg_from": "Masahiro Ikeda <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Support to define custom wait events for extensions"
},
{
"msg_contents": "On Mon, Aug 14, 2023 at 12:31:05PM +0900, Masahiro Ikeda wrote:\n> Thanks! I confirmed the changes, and all tests passed.\n\nOkay, cool. I got some extra time today and applied that, with a few\nmore tweaks.\n--\nMichael",
"msg_date": "Mon, 14 Aug 2023 15:28:41 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support to define custom wait events for extensions"
},
{
"msg_contents": "On 2023-08-14 15:28, Michael Paquier wrote:\n> On Mon, Aug 14, 2023 at 12:31:05PM +0900, Masahiro Ikeda wrote:\n>> Thanks! I confirmed the changes, and all tests passed.\n> \n> Okay, cool. I got some extra time today and applied that, with a few\n> more tweaks.\n\nThanks for applying master branch!\n\n> This looks correct, but perhaps we need to think harder about the\n> custom event names and define a convention when more of this stuff is\n> added to the core modules.\n\nI checked the source code how many functions use WAIT_EVENT_EXTENSION.\nThere are 3 contrib modules and a test module use WAIT_EVENT_EXTENSION \nand\nthere are 8 places where it is called as an argument.\n\n* dblink\n - dblink_get_conn(): the wait event is set until the connection \nestablishment succeeded\n - dblink_connect(): same as above\n\n* autoprewarm\n - autoprewarm_main(): the wait event is set until shutdown request is \nreceived\n - autoprewarm_main(): the wait event is set until the next dump time\n\n* postgres_fdw\n - connect_pg_server(): the wait event is set until connection \nestablishment succeeded\n - pgfdw_get_result(): the wait event is set until the results are \nreceived\n - pgfdw_get_cleanup_result(): same as above except for abort cleanup\n\n* test_sh_mq\n - wait_for_workers_to_become_ready(): the wait event is set until the \nworkers become ready\n\nI'm thinking a name like \"contrib name + description summary\" would\nbe nice. The \"contrib name\" is namespace-like and the \"description \nsummary\"\nis the same as the name of the waiting event name in core. For example,\n\"DblinkConnect\" for dblink. In the same as the core one, I thought the \nname\nshould be the camel case.\n\nBTW, is it better to discuss this in a new thread because other \ndevelopers\nmight be interested in user-facing wait event names? I also would like \nto add\ndocumentation on the wait events for each modules, as they are not \nmentioned now.\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Mon, 14 Aug 2023 17:55:42 +0900",
"msg_from": "Masahiro Ikeda <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Support to define custom wait events for extensions"
},
{
"msg_contents": "On Mon, Aug 14, 2023 at 05:55:42PM +0900, Masahiro Ikeda wrote:\n> I'm thinking a name like \"contrib name + description summary\" would\n> be nice. The \"contrib name\" is namespace-like and the \"description summary\"\n> is the same as the name of the waiting event name in core. For example,\n> \"DblinkConnect\" for dblink. In the same as the core one, I thought the name\n> should be the camel case.\n\nOr you could use something more in line with the other in-core wait\nevents formatted as camel-case, like DblinkConnect, etc.\n\n> BTW, is it better to discuss this in a new thread because other developers\n> might be interested in user-facing wait event names? I also would like to\n> add documentation on the wait events for each modules, as they are not mentioned\n> now.\n\nSaying that, having some documentation on the page of each extension\nis mandatory once these can be customized, in my opinion. All that\nshould be discussed on a new, separate thread, to attract the correct\naudience.\n--\nMichael",
"msg_date": "Mon, 14 Aug 2023 18:26:42 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support to define custom wait events for extensions"
},
{
"msg_contents": "On 2023-08-14 18:26, Michael Paquier wrote:\n> On Mon, Aug 14, 2023 at 05:55:42PM +0900, Masahiro Ikeda wrote:\n>> I'm thinking a name like \"contrib name + description summary\" would\n>> be nice. The \"contrib name\" is namespace-like and the \"description \n>> summary\"\n>> is the same as the name of the waiting event name in core. For \n>> example,\n>> \"DblinkConnect\" for dblink. In the same as the core one, I thought the \n>> name\n>> should be the camel case.\n> \n> Or you could use something more in line with the other in-core wait\n> events formatted as camel-case, like DblinkConnect, etc.\n> \n>> BTW, is it better to discuss this in a new thread because other \n>> developers\n>> might be interested in user-facing wait event names? I also would like \n>> to\n>> add documentation on the wait events for each modules, as they are not \n>> mentioned\n>> now.\n> \n> Saying that, having some documentation on the page of each extension\n> is mandatory once these can be customized, in my opinion. All that\n> should be discussed on a new, separate thread, to attract the correct\n> audience.\n\nOK. I'll make a new patch and start a new thread.\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Tue, 15 Aug 2023 09:14:15 +0900",
"msg_from": "Masahiro Ikeda <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Support to define custom wait events for extensions"
},
{
"msg_contents": "On Tue, Aug 15, 2023 at 09:14:15AM +0900, Masahiro Ikeda wrote:\n> OK. I'll make a new patch and start a new thread.\n\nCool, thanks!\n--\nMichael",
"msg_date": "Tue, 15 Aug 2023 09:30:58 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support to define custom wait events for extensions"
}
] |
[
{
"msg_contents": "Hi all,\n\nWhile doing some tests with the tree, I have noticed that we don't do\nin the tests of unaccent the business that we have elsewhere\n(test_regex, fuzzystrmatch, now hstore, collation tests, etc.) to make\nthe tests portable when these tests include UTF-8 characters but the\nregression database cannot support that.\n\nIt took some time to notice that, which makes me wonder how relevant\nthis stuff is these days.. Anyway, I would like to do like the others\nand fix it, so I am proposing the attached.\n\nThoughts?\n--\nMichael",
"msg_date": "Thu, 15 Jun 2023 15:52:13 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Fix regression tests to work with REGRESS_OPTS=--no-locale"
},
{
"msg_contents": "On Thu, Jun 15, 2023 at 03:52:13PM +0900, Michael Paquier wrote:\n> It took some time to notice that, which makes me wonder how relevant\n> this stuff is these days.. Anyway, I would like to do like the others\n> and fix it, so I am proposing the attached.\n\nPlease find attached a v2 that removes the ENCODING and NO_LOCALE\nflags from meson.build and Makefile, that I forgot to clean up\npreviously. Note that I am not planning to do anything here until at\nleast v17 opens for business.\n--\nMichael",
"msg_date": "Tue, 20 Jun 2023 12:46:19 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Fix regression tests to work with REGRESS_OPTS=--no-locale"
},
{
"msg_contents": "On 20.06.23 05:46, Michael Paquier wrote:\n> On Thu, Jun 15, 2023 at 03:52:13PM +0900, Michael Paquier wrote:\n>> It took some time to notice that, which makes me wonder how relevant\n>> this stuff is these days.. Anyway, I would like to do like the others\n>> and fix it, so I am proposing the attached.\n> \n> Please find attached a v2 that removes the ENCODING and NO_LOCALE\n> flags from meson.build and Makefile, that I forgot to clean up\n> previously. Note that I am not planning to do anything here until at\n> least v17 opens for business.\n\nI think it makes sense to make those checks consistent.\n\n\n\n",
"msg_date": "Mon, 3 Jul 2023 08:48:22 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix regression tests to work with REGRESS_OPTS=--no-locale"
},
{
"msg_contents": "On Mon, Jul 03, 2023 at 08:48:22AM +0200, Peter Eisentraut wrote:\n> I think it makes sense to make those checks consistent.\n\nThanks for the review!\n\nThe last thing that sets ENCODING is test_oat_hooks, for stability.\nNO_LOCALE is used in test_oat_hooks and test_extensions (71cac85). I\nam not planning to touch any of these.\n--\nMichael",
"msg_date": "Mon, 3 Jul 2023 16:17:04 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Fix regression tests to work with REGRESS_OPTS=--no-locale"
}
] |
[
{
"msg_contents": "Dear hackers,\n\nI have noticed that the testcase subscription/033_run_as_table_owner in the\nsubscription is not executed when meson build system is chosen. The case is not\nlisted in the meson.build.\n\nDo we have any reasons or backgrounds about it?\nPSA the patch to add the case. It works well on my env.\n\nBest Regards,\nHayato Kuroda\nFUJITSU LIMITED",
"msg_date": "Thu, 15 Jun 2023 07:16:06 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "subscription/033_run_as_table_owner is not listed in the meson.build"
},
{
"msg_contents": "On Thu, Jun 15, 2023 at 07:16:06AM +0000, Hayato Kuroda (Fujitsu) wrote:\n> I have noticed that the testcase subscription/033_run_as_table_owner in the\n> subscription is not executed when meson build system is chosen. The case is not\n> listed in the meson.build.\n> \n> Do we have any reasons or backgrounds about it?\n> PSA the patch to add the case. It works well on my env.\n\nSeems like a thinko of 4826759 to me, that's easy to miss. Will fix\nin a bit..\n--\nMichael",
"msg_date": "Thu, 15 Jun 2023 17:04:22 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: subscription/033_run_as_table_owner is not listed in the\n meson.build"
},
{
"msg_contents": "On Thu, Jun 15, 2023 at 5:04 PM Michael Paquier <[email protected]> wrote:\n>\n> On Thu, Jun 15, 2023 at 07:16:06AM +0000, Hayato Kuroda (Fujitsu) wrote:\n> > I have noticed that the testcase subscription/033_run_as_table_owner in the\n> > subscription is not executed when meson build system is chosen. The case is not\n> > listed in the meson.build.\n> >\n> > Do we have any reasons or backgrounds about it?\n> > PSA the patch to add the case. It works well on my env.\n>\n> Seems like a thinko of 4826759 to me, that's easy to miss. Will fix\n> in a bit..\n\nGood catch.\n\nChecking similar oversights,\nsrc/bin/pg_basebackup/t/011_in_place_tablespace.pl seems not to be\nlisted in meson.build too.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 15 Jun 2023 17:32:22 +0900",
"msg_from": "Masahiko Sawada <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: subscription/033_run_as_table_owner is not listed in the\n meson.build"
},
{
"msg_contents": "On Thu, Jun 15, 2023 at 5:32 PM Masahiko Sawada <[email protected]> wrote:\n>\n> On Thu, Jun 15, 2023 at 5:04 PM Michael Paquier <[email protected]> wrote:\n> >\n> > On Thu, Jun 15, 2023 at 07:16:06AM +0000, Hayato Kuroda (Fujitsu) wrote:\n> > > I have noticed that the testcase subscription/033_run_as_table_owner in the\n> > > subscription is not executed when meson build system is chosen. The case is not\n> > > listed in the meson.build.\n> > >\n> > > Do we have any reasons or backgrounds about it?\n> > > PSA the patch to add the case. It works well on my env.\n> >\n> > Seems like a thinko of 4826759 to me, that's easy to miss. Will fix\n> > in a bit..\n>\n> Good catch.\n>\n> Checking similar oversights,\n> src/bin/pg_basebackup/t/011_in_place_tablespace.pl seems not to be\n> listed in meson.build too.\n\nHere is the patch for that.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Fri, 16 Jun 2023 07:15:36 +0900",
"msg_from": "Masahiko Sawada <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: subscription/033_run_as_table_owner is not listed in the\n meson.build"
},
{
"msg_contents": "On Fri, Jun 16, 2023 at 07:15:36AM +0900, Masahiko Sawada wrote:\n> On Thu, Jun 15, 2023 at 5:32 PM Masahiko Sawada <[email protected]> wrote:\n>> Checking similar oversights,\n>> src/bin/pg_basebackup/t/011_in_place_tablespace.pl seems not to be\n>> listed in meson.build too.\n> \n> Here is the patch for that.\n\nYes, good catch.\n--\nMichael",
"msg_date": "Fri, 16 Jun 2023 07:41:51 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: subscription/033_run_as_table_owner is not listed in the\n meson.build"
},
{
"msg_contents": "On Fri, Jun 16, 2023 at 7:42 AM Michael Paquier <[email protected]> wrote:\n>\n> On Fri, Jun 16, 2023 at 07:15:36AM +0900, Masahiko Sawada wrote:\n> > On Thu, Jun 15, 2023 at 5:32 PM Masahiko Sawada <[email protected]> wrote:\n> >> Checking similar oversights,\n> >> src/bin/pg_basebackup/t/011_in_place_tablespace.pl seems not to be\n> >> listed in meson.build too.\n> >\n> > Here is the patch for that.\n>\n> Yes, good catch.\n\nPushed.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 16 Jun 2023 10:38:19 +0900",
"msg_from": "Masahiko Sawada <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: subscription/033_run_as_table_owner is not listed in the\n meson.build"
}
] |
[
{
"msg_contents": "As discussed [1 ][2] currently, the checkpoint-redo LSN can not be\naccurately detected while processing the WAL. Although we have a\ncheckpoint WAL record containing the exact redo LSN, other WAL records\nmay be inserted between the checkpoint-redo LSN and the actual\ncheckpoint record. If we want to stop processing wal exactly at the\ncheckpoint-redo location then we cannot do that because we would have\nalready processed some extra records that got added after the redo\nLSN.\n\nThe patch inserts a special wal record named CHECKPOINT_REDO WAL,\nwhich is located exactly at the checkpoint-redo location. We can\nguarantee this record to be exactly at the checkpoint-redo point\nbecause we already hold the exclusive WAL insertion lock while\nidentifying the checkpoint redo point and can insert this special\nrecord exactly at the same time so that there are no concurrent WAL\ninsertions.\n\n[1] https://www.postgresql.org/message-id/CA%2BTgmoYOYZfMCyOXFyC-P%2B-mdrZqm5pP2N7S-r0z3_402h9rsA%40mail.gmail.com\n[2] https://www.postgresql.org/message-id/20230614194717.jyuw3okxup4cvtbt%40awork3.anarazel.de\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Thu, 15 Jun 2023 13:11:57 +0530",
"msg_from": "Dilip Kumar <[email protected]>",
"msg_from_op": true,
"msg_subject": "New WAL record to detect the checkpoint redo location"
},
{
"msg_contents": "Hi,\n\nAs I think I mentioned before, I like this idea. However, I don't like the\nimplementation too much.\n\nOn 2023-06-15 13:11:57 +0530, Dilip Kumar wrote:\n> diff --git a/src/backend/access/transam/xlog.c b/src/backend/access/transam/xlog.c\n> index b2430f617c..a025fb91e2 100644\n> --- a/src/backend/access/transam/xlog.c\n> +++ b/src/backend/access/transam/xlog.c\n> @@ -744,6 +744,7 @@ XLogInsertRecord(XLogRecData *rdata,\n> \tXLogRecPtr\tStartPos;\n> \tXLogRecPtr\tEndPos;\n> \tbool\t\tprevDoPageWrites = doPageWrites;\n> +\tbool\t\tcallerHoldingExlock = holdingAllLocks;\n> \tTimeLineID\tinsertTLI;\n> \n> \t/* we assume that all of the record header is in the first chunk */\n> @@ -792,10 +793,18 @@ XLogInsertRecord(XLogRecData *rdata,\n> \t *----------\n> \t */\n> \tSTART_CRIT_SECTION();\n> -\tif (isLogSwitch)\n> -\t\tWALInsertLockAcquireExclusive();\n> -\telse\n> -\t\tWALInsertLockAcquire();\n> +\n> +\t/*\n> +\t * Acquire wal insertion lock, nothing to do if the caller is already\n> +\t * holding the exclusive lock.\n> +\t */\n> +\tif (!callerHoldingExlock)\n> +\t{\n> +\t\tif (isLogSwitch)\n> +\t\t\tWALInsertLockAcquireExclusive();\n> +\t\telse\n> +\t\t\tWALInsertLockAcquire();\n> +\t}\n> \n> \t/*\n> \t * Check to see if my copy of RedoRecPtr is out of date. If so, may have\n\nThis might work right now, but doesn't really seem maintainable. Nor do I like\nadding branches into this code a whole lot.\n\n\n> @@ -6597,6 +6612,32 @@ CreateCheckPoint(int flags)\n\nI think the commentary above this function would need a fair bit of\nrevising...\n\n> \t */\n> \tRedoRecPtr = XLogCtl->Insert.RedoRecPtr = checkPoint.redo;\n> \n> +\t/*\n> +\t * Insert a special purpose CHECKPOINT_REDO record as the first record at\n> +\t * checkpoint redo lsn. Although we have the checkpoint record that\n> +\t * contains the exact redo lsn, there might have been some other records\n> +\t * those got inserted between the redo lsn and the actual checkpoint\n> +\t * record. So when processing the wal, we cannot rely on the checkpoint\n> +\t * record if we want to stop at the checkpoint-redo LSN.\n> +\t *\n> +\t * This special record, however, is not required when we doing a shutdown\n> +\t * checkpoint, as there will be no concurrent wal insertions during that\n> +\t * time. So, the shutdown checkpoint LSN will be the same as\n> +\t * checkpoint-redo LSN.\n> +\t *\n> +\t * This record is guaranteed to be the first record at checkpoint redo lsn\n> +\t * because we are inserting this while holding the exclusive wal insertion\n> +\t * lock.\n> +\t */\n> +\tif (!shutdown)\n> +\t{\n> +\t\tint\t\t\tdummy = 0;\n> +\n> +\t\tXLogBeginInsert();\n> +\t\tXLogRegisterData((char *) &dummy, sizeof(dummy));\n> +\t\trecptr = XLogInsert(RM_XLOG_ID, XLOG_CHECKPOINT_REDO);\n> +\t}\n\nIt seems to me that we should be able to do better than this.\n\nI suspect we might be able to get rid of the need for exclusive inserts\nhere. If we rid of that, we could determine the redoa location based on the\nLSN determined by the XLogInsert().\n\nAlternatively, I think we should split XLogInsertRecord() so that the part\nwith the insertion locks held is a separate function, that we could use here.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 14 Jul 2023 08:16:26 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New WAL record to detect the checkpoint redo location"
},
{
"msg_contents": "On Fri, Jul 14, 2023 at 8:46 PM Andres Freund <[email protected]> wrote:\n>\n> Hi,\n>\n> As I think I mentioned before, I like this idea. However, I don't like the\n> implementation too much.\n\nThanks for looking into it.\n\n\n> This might work right now, but doesn't really seem maintainable. Nor do I like\n> adding branches into this code a whole lot.\n\nOkay, Now I have moved the XlogInsert for the special record outside\nthe WalInsertLock so we don't need this special handling here.\n\n> > @@ -6597,6 +6612,32 @@ CreateCheckPoint(int flags)\n>\n> I think the commentary above this function would need a fair bit of\n> revising...\n>\n> > */\n> > RedoRecPtr = XLogCtl->Insert.RedoRecPtr = checkPoint.redo;\n> >\n> > + /*\n> > + * Insert a special purpose CHECKPOINT_REDO record as the first record at\n> > + * checkpoint redo lsn. Although we have the checkpoint record that\n> > + * contains the exact redo lsn, there might have been some other records\n> > + * those got inserted between the redo lsn and the actual checkpoint\n> > + * record. So when processing the wal, we cannot rely on the checkpoint\n> > + * record if we want to stop at the checkpoint-redo LSN.\n> > + *\n> > + * This special record, however, is not required when we doing a shutdown\n> > + * checkpoint, as there will be no concurrent wal insertions during that\n> > + * time. So, the shutdown checkpoint LSN will be the same as\n> > + * checkpoint-redo LSN.\n> > + *\n> > + * This record is guaranteed to be the first record at checkpoint redo lsn\n> > + * because we are inserting this while holding the exclusive wal insertion\n> > + * lock.\n> > + */\n> > + if (!shutdown)\n> > + {\n> > + int dummy = 0;\n> > +\n> > + XLogBeginInsert();\n> > + XLogRegisterData((char *) &dummy, sizeof(dummy));\n> > + recptr = XLogInsert(RM_XLOG_ID, XLOG_CHECKPOINT_REDO);\n> > + }\n>\n> It seems to me that we should be able to do better than this.\n>\n> I suspect we might be able to get rid of the need for exclusive inserts\n> here. If we rid of that, we could determine the redoa location based on the\n> LSN determined by the XLogInsert().\n\nYeah, good idea, actually we can do this insert outside of the\nexclusive insert lock and set the LSN of this insert as the\ncheckpoint. redo location. So now we do not need to compute the\ncheckpoint. redo based on the current insertion point we can directly\nuse the LSN of this record. I have modified this and I have also\nmoved some other code that is not required to be inside the WAL\ninsertion lock.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Tue, 15 Aug 2023 14:23:43 +0530",
"msg_from": "Dilip Kumar <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: New WAL record to detect the checkpoint redo location"
},
{
"msg_contents": "On Tue, Aug 15, 2023 at 02:23:43PM +0530, Dilip Kumar wrote:\n> Yeah, good idea, actually we can do this insert outside of the\n> exclusive insert lock and set the LSN of this insert as the\n> checkpoint. redo location. So now we do not need to compute the\n> checkpoint. redo based on the current insertion point we can directly\n> use the LSN of this record. I have modified this and I have also\n> moved some other code that is not required to be inside the WAL\n> insertion lock.\n\nLooking at this patch, I am bit surprised to see that the redo point\nmaps with the end location of the CHECKPOINT_REDO record as it is the\nLSN returned by XLogInsert(), not its start LSN. For example after a\ncheckpoint:\n=# CREATE EXTENSION pg_walinspect;\nCREATE EXTENSION;\n=# SELECT redo_lsn, checkpoint_lsn from pg_control_checkpoint();\n redo_lsn | checkpoint_lsn\n-----------+----------------\n 0/19129D0 | 0/1912A08\n(1 row)\n=# SELECT start_lsn, prev_lsn, end_lsn, record_type\n from pg_get_wal_record_info('0/19129D0');\n start_lsn | prev_lsn | end_lsn | record_type\n-----------+-----------+-----------+---------------\n 0/19129D0 | 0/19129B0 | 0/1912A08 | RUNNING_XACTS\n(1 row)\n\nIn this case it matches with the previous record:\n=# SELECT start_lsn, prev_lsn, end_lsn, record_type\n from pg_get_wal_record_info('0/19129B0');\n start_lsn | prev_lsn | end_lsn | record_type\n-----------+-----------+-----------+-----------------\n 0/19129B0 | 0/1912968 | 0/19129D0 | CHECKPOINT_REDO\n(1 row)\n\nThis could be used to cross-check that the first record replayed is of\nthe correct type. The commit message of this patch tells that \"the\ncheckpoint-redo location is set at LSN of this record\", which implies\nthe start LSN of the record tracked as the redo LSN, not the end of\nit? What's the intention here?\n--\nMichael",
"msg_date": "Thu, 17 Aug 2023 14:22:13 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New WAL record to detect the checkpoint redo location"
},
{
"msg_contents": "On Thu, Aug 17, 2023 at 10:52 AM Michael Paquier <[email protected]> wrote:\n>\n> On Tue, Aug 15, 2023 at 02:23:43PM +0530, Dilip Kumar wrote:\n> > Yeah, good idea, actually we can do this insert outside of the\n> > exclusive insert lock and set the LSN of this insert as the\n> > checkpoint. redo location. So now we do not need to compute the\n> > checkpoint. redo based on the current insertion point we can directly\n> > use the LSN of this record. I have modified this and I have also\n> > moved some other code that is not required to be inside the WAL\n> > insertion lock.\n>\n> Looking at this patch, I am bit surprised to see that the redo point\n> maps with the end location of the CHECKPOINT_REDO record as it is the\n> LSN returned by XLogInsert(), not its start LSN.\n\nYeah right, actually I was confused, I assumed it will return the\nstart LSN of the record. And I do not see any easy way to identify\nthe Start LSN of this record so maybe this solution will not work. I\nwill have to think of something else. Thanks for pointing it out.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 17 Aug 2023 13:11:50 +0530",
"msg_from": "Dilip Kumar <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: New WAL record to detect the checkpoint redo location"
},
{
"msg_contents": "On Thu, Aug 17, 2023 at 01:11:50PM +0530, Dilip Kumar wrote:\n> Yeah right, actually I was confused, I assumed it will return the\n> start LSN of the record. And I do not see any easy way to identify\n> the Start LSN of this record so maybe this solution will not work. I\n> will have to think of something else. Thanks for pointing it out.\n\nAbout that. One thing to consider may be ReserveXLogInsertLocation()\nwhile holding the WAL insert lock, but you can just rely on\nProcLastRecPtr for the job after inserting the REDO record, no?\n--\nMichael",
"msg_date": "Fri, 18 Aug 2023 08:54:19 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New WAL record to detect the checkpoint redo location"
},
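As a rough illustration of the ProcLastRecPtr idea in the message above (a sketch only, mirroring what the subsequently posted patch in this thread does rather than the committed code; surrounding locking and error handling are omitted):

int			dummy = 0;

/* Insert the dummy CHECKPOINT_REDO record. */
XLogBeginInsert();
XLogRegisterData((char *) &dummy, sizeof(dummy));
(void) XLogInsert(RM_XLOG_ID, XLOG_CHECKPOINT_REDO);

/*
 * XLogInsert() returns the record's end LSN, but ProcLastRecPtr holds the
 * start LSN of the record this backend just inserted, which is the value
 * wanted for the redo pointer.
 */
checkPoint.redo = ProcLastRecPtr;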
{
"msg_contents": "On Fri, Aug 18, 2023 at 5:24 AM Michael Paquier <[email protected]> wrote:\n>\n> On Thu, Aug 17, 2023 at 01:11:50PM +0530, Dilip Kumar wrote:\n> > Yeah right, actually I was confused, I assumed it will return the\n> > start LSN of the record. And I do not see any easy way to identify\n> > the Start LSN of this record so maybe this solution will not work. I\n> > will have to think of something else. Thanks for pointing it out.\n>\n> About that. One thing to consider may be ReserveXLogInsertLocation()\n> while holding the WAL insert lock, but you can just rely on\n> ProcLastRecPtr for the job after inserting the REDO record, no?\n\nYeah right, we can use ProcLastRecPtr. I will send the updated patch.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 18 Aug 2023 10:12:00 +0530",
"msg_from": "Dilip Kumar <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: New WAL record to detect the checkpoint redo location"
},
{
"msg_contents": "On Fri, Aug 18, 2023 at 10:12 AM Dilip Kumar <[email protected]> wrote:\n>\n> On Fri, Aug 18, 2023 at 5:24 AM Michael Paquier <[email protected]> wrote:\n> >\n> > On Thu, Aug 17, 2023 at 01:11:50PM +0530, Dilip Kumar wrote:\n> > > Yeah right, actually I was confused, I assumed it will return the\n> > > start LSN of the record. And I do not see any easy way to identify\n> > > the Start LSN of this record so maybe this solution will not work. I\n> > > will have to think of something else. Thanks for pointing it out.\n> >\n> > About that. One thing to consider may be ReserveXLogInsertLocation()\n> > while holding the WAL insert lock, but you can just rely on\n> > ProcLastRecPtr for the job after inserting the REDO record, no?\n>\n> Yeah right, we can use ProcLastRecPtr. I will send the updated patch.\n\nHere is the updated version of the patch.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Fri, 25 Aug 2023 11:08:25 +0530",
"msg_from": "Dilip Kumar <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: New WAL record to detect the checkpoint redo location"
},
{
"msg_contents": "On Fri, Aug 25, 2023 at 11:08:25AM +0530, Dilip Kumar wrote:\n> Here is the updated version of the patch.\n\nThe concept of the patch looks sound to me. I have a few comments. \n\n+\t * This special record, however, is not required when we doing a shutdown\n+\t * checkpoint, as there will be no concurrent wal insertions during that\n+\t * time. So, the shutdown checkpoint LSN will be the same as\n+\t * checkpoint-redo LSN.\n\nThis is missing \"are\", as in \"when we are doing a shutdown\ncheckpoint\".\n\n- freespace = INSERT_FREESPACE(curInsert);\n- if (freespace == 0)\n\nThe variable \"freespace\" can be moved within the if block introduced\nby this patch when calculating the REDO location for the shutdown\ncase. And you can do the same with curInsert?\n\n-\t * Compute new REDO record ptr = location of next XLOG record.\n-\t *\n-\t * NB: this is NOT necessarily where the checkpoint record itself will be,\n-\t * since other backends may insert more XLOG records while we're off doing\n-\t * the buffer flush work. Those XLOG records are logically after the\n-\t * checkpoint, even though physically before it. Got that?\n+\t * In case of shutdown checkpoint, compute new REDO record\n+\t * ptr = location of next XLOG record.\n\nIt seems to me that keeping around this comment is important,\nparticularly for the case where we have a shutdown checkpoint and we\nexpect nothing to run, no?\n\nHow about adding a test in pg_walinspect? There is an online\ncheckpoint running there, so you could just add something like that\nto check that the REDO record is at the expected LSN stored in the\ncontrol file:\n+-- Check presence of REDO record.\n+SELECT redo_lsn FROM pg_control_checkpoint() \\gset\n+SELECT start_lsn = :'redo_lsn'::pg_lsn AS same_lsn, record_type\n+ FROM pg_get_wal_record_info(:'redo_lsn');\n--\nMichael",
"msg_date": "Mon, 28 Aug 2023 08:44:22 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New WAL record to detect the checkpoint redo location"
},
{
"msg_contents": "On Mon, Aug 28, 2023 at 5:14 AM Michael Paquier <[email protected]> wrote:\n>\n> On Fri, Aug 25, 2023 at 11:08:25AM +0530, Dilip Kumar wrote:\n> > Here is the updated version of the patch.\n>\n> The concept of the patch looks sound to me. I have a few comments.\n\nThanks for the review\n\n> + * This special record, however, is not required when we doing a shutdown\n> + * checkpoint, as there will be no concurrent wal insertions during that\n> + * time. So, the shutdown checkpoint LSN will be the same as\n> + * checkpoint-redo LSN.\n>\n> This is missing \"are\", as in \"when we are doing a shutdown\n> checkpoint\".\n\nFixed\n\n> - freespace = INSERT_FREESPACE(curInsert);\n> - if (freespace == 0)\n>\n> The variable \"freespace\" can be moved within the if block introduced\n> by this patch when calculating the REDO location for the shutdown\n> case. And you can do the same with curInsert?\n\nDone, I have also moved code related to computing curInsert in the\nsame if (shutdown) block.\n\n> - * Compute new REDO record ptr = location of next XLOG record.\n> - *\n> - * NB: this is NOT necessarily where the checkpoint record itself will be,\n> - * since other backends may insert more XLOG records while we're off doing\n> - * the buffer flush work. Those XLOG records are logically after the\n> - * checkpoint, even though physically before it. Got that?\n> + * In case of shutdown checkpoint, compute new REDO record\n> + * ptr = location of next XLOG record.\n>\n> It seems to me that keeping around this comment is important,\n> particularly for the case where we have a shutdown checkpoint and we\n> expect nothing to run, no?\n\nI removed this mainly because now in other comments[1] where we are\nintroducing this new CHECKPOINT_REDO record we are explaining the\nproblem\nthat the redo location and the actual checkpoint records are not at\nthe same place and that is because of the concurrent xlog insertion.\nI think we are explaining in more\ndetail by also stating that in case of a shutdown checkpoint, there\nwould not be any concurrent insertion so the shutdown checkpoint redo\nwill be at the same place. So I feel keeping old comments is not\nrequired. And we are explaining it when we are setting this for\nnon-shutdown checkpoint because for shutdown checkpoint this statement\nis anyway not correct because for the shutdown checkpoint the\ncheckpoint record will be at the same location and there will be no\nconcurrent wal insertion, what do you think?\n\n[1]\n+ /*\n+ * Insert a dummy CHECKPOINT_REDO record and set start LSN of this record\n+ * as checkpoint.redo. Although we have the checkpoint record that also\n+ * contains the exact redo lsn, there might have been some other records\n+ * those got inserted between the redo lsn and the actual checkpoint\n+ * record. So when processing the wal, we cannot rely on the checkpoint\n+ * record if we want to stop at the checkpoint-redo LSN.\n+ *\n+ * This special record, however, is not required when we are doing a\n+ * shutdown checkpoint, as there will be no concurrent wal insertions\n+ * during that time. So, the shutdown checkpoint LSN will be the same as\n+ * checkpoint-redo LSN.\n+ */\n\n>\n> How about adding a test in pg_walinspect? 
There is an online\n> checkpoint running there, so you could just add something like that\n> to check that the REDO record is at the expected LSN stored in the\n> control file:\n> +-- Check presence of REDO record.\n> +SELECT redo_lsn FROM pg_control_checkpoint() \\gset\n> +SELECT start_lsn = :'redo_lsn'::pg_lsn AS same_lsn, record_type\n> + FROM pg_get_wal_record_info(:'redo_lsn');\n> --\n\nDone, thanks.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Mon, 28 Aug 2023 13:47:18 +0530",
"msg_from": "Dilip Kumar <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: New WAL record to detect the checkpoint redo location"
},
{
"msg_contents": "On Mon, Aug 28, 2023 at 01:47:18PM +0530, Dilip Kumar wrote:\n> I removed this mainly because now in other comments[1] where we are\n> introducing this new CHECKPOINT_REDO record we are explaining the\n> problem\n> that the redo location and the actual checkpoint records are not at\n> the same place and that is because of the concurrent xlog insertion.\n> I think we are explaining in more\n> detail by also stating that in case of a shutdown checkpoint, there\n> would not be any concurrent insertion so the shutdown checkpoint redo\n> will be at the same place. So I feel keeping old comments is not\n> required.\n> And we are explaining it when we are setting this for\n> non-shutdown checkpoint because for shutdown checkpoint this statement\n> is anyway not correct because for the shutdown checkpoint the\n> checkpoint record will be at the same location and there will be no\n> concurrent wal insertion, what do you think?\n\n+ * Insert a dummy CHECKPOINT_REDO record and set start LSN of this record\n+ * as checkpoint.redo.\n\nI would add a \"for a non-shutdown checkpoint\" at the end of this\nsentence.\n\n+ * record. So when processing the wal, we cannot rely on the checkpoint\n+ * record if we want to stop at the checkpoint-redo LSN.\n\nThe term \"checkpoint-redo\" is also a bit confusing, I guess, because\nyou just mean to refer to the \"redo\" LSN here? Maybe rework the last\nsentence as:\n\"So, when processing WAL, we cannot rely on the checkpoint record if\nwe want to stop at the same position as the redo LSN\".\n\n+ * This special record, however, is not required when we are doing a\n+ * shutdown checkpoint, as there will be no concurrent wal insertions\n+ * during that time. So, the shutdown checkpoint LSN will be the same as\n+ * checkpoint-redo LSN.\n\nPerhaps the last sentence could be merged with the first one, if we\nare tweaking things, say:\n\"This special record is not required when doing a shutdown checkpoint;\nthe redo LSN is the same LSN as the checkpoint record as there cannot\nbe any WAL activity in a shutdown sequence.\"\n--\nMichael",
"msg_date": "Wed, 30 Aug 2023 16:33:41 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New WAL record to detect the checkpoint redo location"
},
{
"msg_contents": "On Wed, Aug 30, 2023 at 1:03 PM Michael Paquier <[email protected]> wrote:\n>\n> On Mon, Aug 28, 2023 at 01:47:18PM +0530, Dilip Kumar wrote:\n> > I removed this mainly because now in other comments[1] where we are\n> > introducing this new CHECKPOINT_REDO record we are explaining the\n> > problem\n> > that the redo location and the actual checkpoint records are not at\n> > the same place and that is because of the concurrent xlog insertion.\n> > I think we are explaining in more\n> > detail by also stating that in case of a shutdown checkpoint, there\n> > would not be any concurrent insertion so the shutdown checkpoint redo\n> > will be at the same place. So I feel keeping old comments is not\n> > required.\n> > And we are explaining it when we are setting this for\n> > non-shutdown checkpoint because for shutdown checkpoint this statement\n> > is anyway not correct because for the shutdown checkpoint the\n> > checkpoint record will be at the same location and there will be no\n> > concurrent wal insertion, what do you think?\n>\n> + * Insert a dummy CHECKPOINT_REDO record and set start LSN of this record\n> + * as checkpoint.redo.\n>\n> I would add a \"for a non-shutdown checkpoint\" at the end of this\n> sentence.\n>\n> + * record. So when processing the wal, we cannot rely on the checkpoint\n> + * record if we want to stop at the checkpoint-redo LSN.\n>\n> The term \"checkpoint-redo\" is also a bit confusing, I guess, because\n> you just mean to refer to the \"redo\" LSN here? Maybe rework the last\n> sentence as:\n> \"So, when processing WAL, we cannot rely on the checkpoint record if\n> we want to stop at the same position as the redo LSN\".\n>\n> + * This special record, however, is not required when we are doing a\n> + * shutdown checkpoint, as there will be no concurrent wal insertions\n> + * during that time. So, the shutdown checkpoint LSN will be the same as\n> + * checkpoint-redo LSN.\n>\n> Perhaps the last sentence could be merged with the first one, if we\n> are tweaking things, say:\n> \"This special record is not required when doing a shutdown checkpoint;\n> the redo LSN is the same LSN as the checkpoint record as there cannot\n> be any WAL activity in a shutdown sequence.\"\n\nYour suggestions LGTM so modified accordingly\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Wed, 30 Aug 2023 16:51:19 +0530",
"msg_from": "Dilip Kumar <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: New WAL record to detect the checkpoint redo location"
},
{
"msg_contents": "On Wed, Aug 30, 2023 at 04:51:19PM +0530, Dilip Kumar wrote:\n> Your suggestions LGTM so modified accordingly\n\nI have been putting my HEAD on this patch for a few hours, reviewing\nthe surroundings, and somewhat missed that this computation is done\nwhile we do not hold the WAL insert locks:\n+ checkPoint.redo = ProcLastRecPtr;\n\nThen a few lines down the shared Insert.RedoRecPtr is updated while\nholding an exclusive lock.\n RedoRecPtr = XLogCtl->Insert.RedoRecPtr = checkPoint.redo;\n\nIf we have a bunch of records inserted between the moment when the\nREDO record is inserted and the moment when the checkpointer takes the\nexclusive WAL lock, aren't we potentially missing a lot of FPW's that\nshould exist since the redo LSN?\n--\nMichael",
"msg_date": "Thu, 31 Aug 2023 13:06:02 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New WAL record to detect the checkpoint redo location"
},
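To make the ordering hazard described in the message above concrete, here is a rough annotated sketch (illustration only, with the locking simplified; it is not the patch code, only a rearrangement of the lines quoted above) of the sequence in CreateCheckPoint() as it stood in that patch version:

/* 1. The REDO record was inserted under a shared insertion lock, which has
 *    already been released; its start LSN becomes the redo pointer. */
checkPoint.redo = ProcLastRecPtr;

/*
 * 2. Window: other backends can insert WAL here.  They still see the old
 *    Insert->RedoRecPtr, so they may skip full-page images that a recovery
 *    starting at checkPoint.redo would need.
 */

/* 3. Only now is the shared redo pointer advanced, under the exclusive
 *    insertion locks -- too late for the records inserted in step 2. */
WALInsertLockAcquireExclusive();
RedoRecPtr = XLogCtl->Insert.RedoRecPtr = checkPoint.redo;
WALInsertLockRelease();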
{
"msg_contents": "On Thu, Aug 31, 2023 at 9:36 AM Michael Paquier <[email protected]> wrote:\n>\n> On Wed, Aug 30, 2023 at 04:51:19PM +0530, Dilip Kumar wrote:\n> > Your suggestions LGTM so modified accordingly\n>\n> I have been putting my HEAD on this patch for a few hours, reviewing\n> the surroundings, and somewhat missed that this computation is done\n> while we do not hold the WAL insert locks:\n> + checkPoint.redo = ProcLastRecPtr;\n>\n> Then a few lines down the shared Insert.RedoRecPtr is updated while\n> holding an exclusive lock.\n> RedoRecPtr = XLogCtl->Insert.RedoRecPtr = checkPoint.redo;\n>\n> If we have a bunch of records inserted between the moment when the\n> REDO record is inserted and the moment when the checkpointer takes the\n> exclusive WAL lock, aren't we potentially missing a lot of FPW's that\n> should exist since the redo LSN?\n\nYeah, good catch. With this, it seems like we can not move this new\nWAL Insert out of the Exclusive WAL insertion lock right? because if\nwe want to set the LSN of this record as the checkpoint. redo then\nthere should not be any concurrent insertion until we expose the\nXLogCtl->Insert.RedoRecPtr. Otherwise, we will miss the FPW for all\nthe record which has been inserted after the checkpoint. redo before\nwe acquired the exclusive WAL insertion lock.\n\nSo maybe I need to restart from the first version of the patch but\ninstead of moving the insertion of the new record out of the exclusive\nlock need to do some better refactoring so that XLogInsertRecord()\ndoesn't look ugly.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 31 Aug 2023 09:55:45 +0530",
"msg_from": "Dilip Kumar <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: New WAL record to detect the checkpoint redo location"
},
{
"msg_contents": "On Thu, Aug 31, 2023 at 09:55:45AM +0530, Dilip Kumar wrote:\n> Yeah, good catch. With this, it seems like we can not move this new\n> WAL Insert out of the Exclusive WAL insertion lock right? Because if\n> we want to set the LSN of this record as the checkpoint.redo then\n> there should not be any concurrent insertion until we expose the\n> XLogCtl->Insert.RedoRecPtr. Otherwise, we will miss the FPW for all\n> the record which has been inserted after the checkpoint.redo before\n> we acquired the exclusive WAL insertion lock.\n\nYes.\n\n> So maybe I need to restart from the first version of the patch but\n> instead of moving the insertion of the new record out of the exclusive\n> lock need to do some better refactoring so that XLogInsertRecord()\n> doesn't look ugly.\n\nYes, I am not sure which interface would be less ugli-ish, but that's\nenough material for a refactoring patch of the WAL insert routines on\ntop of the main patch that introduces the REDO record.\n--\nMichael",
"msg_date": "Thu, 31 Aug 2023 13:33:27 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New WAL record to detect the checkpoint redo location"
},
{
"msg_contents": "On Fri, Jul 14, 2023 at 11:16 AM Andres Freund <[email protected]> wrote:\n> I suspect we might be able to get rid of the need for exclusive inserts\n> here. If we rid of that, we could determine the redoa location based on the\n> LSN determined by the XLogInsert().\n\nI've been brainstorming about this today, trying to figure out some\nideas to make it work.\n\nAs Michael Paquier correctly noted downthread, we need to make sure\nthat a backend inserting a WAL record knows whether it needs to\ncontain an FPI. The comments in the do...while loop in XLogInsert are\npretty helpful here: doPageWrites can't change once XLogInsertRecord\nacquires a WAL insertion lock. For that to be true, the redo pointer\ncan only move when holding all WAL insertion locks. That means that if\nwe add an XLOG_CHECKPOINT_REDO to mark the location of the redo\npointer, we've got to either (a) insert the record *and* update our\nnotion of the last redo pointer while holding all the WAL insertion\nlocks or (b) change the concurrency model in some way.\n\nLet's explore (b) first. Perhaps my imagination is too limited here,\nbut I'm having trouble seeing a good way of doing this. One idea that\noccurred to me was to make either the insertion of the\nXLOG_CHECKPOINT_REDO record fail softly if somebody inserts a record\nafter it that omits FPIs, but that doesn't really work because then\nwe're left with a hole in the WAL. It's too late to move the later\nrecord earlier. We could convert the intended XLOG_CHECKPOINT_REDO\nrecord into a dummy record but that seems complex and wasteful.\nSimilarly, you could try to make the insertion of the later record\nfail, but that basically has the same problem: there could be an even\nlater record being inserted after that which it's already too late to\nreposition. Basically, it feels like once we get to the point where we\nhave a range of LSNs and we're copying data into wal_buffers, it's\nawfully late to be trying to back out. Other people can already be\ndepending on us to put the amount of WAL that we promised to insert at\nthe place where we promised to put it.\n\nThe only other approach to (b) that I can think of is to force FPIs on\nfor all backends from just before to just after we insert the\nXLOG_CHECKPOINT_REDO record. However, since we currently require\ntaking all the WAL insertion locks to start requiring full page\nwrites, this doesn't seem like it gains much. In theory perhaps we\ncould have an approach where we flip full page writes to sorta-on,\nthen wait until we've seen each WAL insertion lock unheld at least\nonce, and then at that point we know all new WAL insertions will see\nthem and can deem them fully on. However, when I start to think along\nthese lines, I feel like maybe I'm losing the plot. Checkpoints are\nrare enough that the overhead of taking all the WAL insertion locks at\nthe same time isn't really a big problem, or at least I don't think it\nis. I think the concern here is more about avoiding useless branches\nin hot paths that potentially cost something for every single record\nwhether it has anything to do with this mechanism or not.\n\nOK, so let's suppose we abandon the idea of changing the concurrency\nmodel in any fundamental way and just try to figure out how to both\ninsert the record and update our notion of the last redo pointer while\nholding all the WAL insertion locks i.e. (a) from the two options\nabove. 
Dilip's patch approaches this problem by pulling acquisition of\nthe WAL insertion locks up to the place where we're already setting\nthe redo pointer. I wonder if we could also consider the other\npossible approach of pushing the update to Insert->RedoRecPtr down\ninto XLogInsertRecord(), which already has a special case for\nacquiring all locks when the record being inserted is an XLOG_SWITCH\nrecord. That would have the advantage of holding all of the insertion\nlocks for a smaller period of time than what Dilip's patch does -- in\nparticular, it wouldn't need to hold the lock across the\nXLOG_CHECKPOINT_REDO's XLogRecordAssemble -- or across the rather\nlengthy tail of XLogInsertRecord. But the obvious objection is that it\nwould put more branches into XLogInsertRecord which nobody wants.\n\nBut ... what if it didn't? Suppose we pulled the XLOG_SWITCH case out\nof XLogInsertRecord and made a separate function for that case. It\nlooks to me like that would avoid 5 branches in that function in the\nnormal case where we're not inserting XLOG_SWITCH. We would want to\nmove some logic, particularly the WAL_DEBUG stuff and maybe other\nthings, into reusable subroutines. Then, we could decide to treat\nXLOG_CHECKPOINT_REDO either in the normal path -- adding a couple of\nthose branches back again -- or in the XLOG_SWITCH function and either\nway I think the normal path would have fewer branches than it does\ntoday. One idea that I had was to create a new rmgr for \"weird\nrecords,\" initially XLOG_SWITCH and XLOG_CHECKPOINT_REDO. Then the\ntest as to whether to use the \"normal\" version of XLogInsertRecord or\nthe \"weird\" version could just be based on the rmid, and the \"normal\"\nfunction wouldn't need to concern itself with anything specific to the\n\"weird\" cases.\n\nA variant on this idea would be to just accept a few extra branches\nand hope it's not really that big of a deal. For instance, instead of\nthis:\n\n bool isLogSwitch = (rechdr->xl_rmid == RM_XLOG_ID &&\n info == XLOG_SWITCH);\n\nWe could have this:\n\nbool isAllLocks = (rechdr->xl_rmid == RM_BIZARRE_ID);\nbool isLogSwitch = (isAllLocks && info == XLOG_SWITCH);\n\n...and then conditionalize on either isAllLocks or isLogSwitch as\napppropriate. You'd still need an extra branch someplace to update\nInsert->RedoRecPtr when isAllLocks && info == XLOG_CHECKPOINT_REDO,\nbut maybe that's not so bad?\n\n> Alternatively, I think we should split XLogInsertRecord() so that the part\n> with the insertion locks held is a separate function, that we could use here.\n\nThe difficulty that I see with this is that the function does a lot\nmore stuff after calling WALInsertLockRelease(). So just pushing the\npart that's between acquiring and releasing WAL insertion locks into a\nseparate function wouldn't actually avoid a lot of code duplication,\nif the goal was to do everything else that XLogInsertRecord() does\nexcept for the lock manipulation. To get there, I think we'd need to\nmove all of the stuff after the lock release into one or more static\nfunctions, too. Which is possibly an OK approach. I haven't checked\nhow much additional parameter passing we'd end up doing if we went\nthat way.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 18 Sep 2023 14:57:54 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New WAL record to detect the checkpoint redo location"
},
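As a condensed sketch of the record-classification idea floated in the message above (roughly the shape the later patch versions in this thread take; the WalInsertClass name and WALINSERT_* values follow those later patches, everything else here is illustrative and not committed code):

typedef enum WalInsertClass
{
	WALINSERT_NORMAL,
	WALINSERT_SPECIAL_SWITCH,
	WALINSERT_SPECIAL_CHECKPOINT
} WalInsertClass;

/* Inside XLogInsertRecord(), once the record header is at hand. */
XLogRecord *rechdr = (XLogRecord *) rdata->data;
uint8		info = rechdr->xl_info & ~XLR_INFO_MASK;
WalInsertClass class = WALINSERT_NORMAL;

/* Only XLOG-rmgr records can need special handling. */
if (unlikely(rechdr->xl_rmid == RM_XLOG_ID))
{
	if (info == XLOG_SWITCH)
		class = WALINSERT_SPECIAL_SWITCH;
	else if (info == XLOG_CHECKPOINT_REDO)
		class = WALINSERT_SPECIAL_CHECKPOINT;
}

/* Both special cases need all insertion locks; everything else does not. */
if (likely(class == WALINSERT_NORMAL))
	WALInsertLockAcquire();
else
	WALInsertLockAcquireExclusive();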
{
"msg_contents": "On Mon, Sep 18, 2023 at 2:57 PM Robert Haas <[email protected]> wrote:\n> I've been brainstorming about this today, trying to figure out some\n> ideas to make it work.\n\nHere are some patches.\n\n0001 refactors XLogInsertRecord to unify a couple of separate tests of\nisLogSwitch, hopefully making it cleaner and cheaper to add more\nspecial cases.\n\n0002 is a very minimal patch to add XLOG_CHECKPOINT_REDO without using\nit for anything.\n\n0003 modifies CreateCheckPoint() to insert an XLOG_CHECKPOINT_REDO\nrecord for any non-shutdown checkpoint, and modifies\nXLogInsertRecord() to treat that as a new special case, wherein after\ninserting the record the redo pointer is reset while still holding the\nWAL insertion locks.\n\nI've tested this to the extent of running the regression tests, and I\nalso did one (1) manual test where it looked like the right thing was\nhappening, but that's it, so this might be buggy or perform like\ngarbage for all I know. But my hope is that it isn't buggy and\nperforms adequately. If there's any chance of getting some comments on\nthe basic design choices before I spend time testing and polishing it,\nthat would be very helpful.\n\nThanks,\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com",
"msg_date": "Wed, 20 Sep 2023 16:20:08 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New WAL record to detect the checkpoint redo location"
},
{
"msg_contents": "On Thu, Sep 21, 2023 at 7:05 AM Robert Haas <[email protected]> wrote:\n>\n> On Mon, Sep 18, 2023 at 2:57 PM Robert Haas <[email protected]> wrote:\n> > I've been brainstorming about this today, trying to figure out some\n> > ideas to make it work.\n>\n> Here are some patches.\n>\n> 0001 refactors XLogInsertRecord to unify a couple of separate tests of\n> isLogSwitch, hopefully making it cleaner and cheaper to add more\n> special cases.\n>\n> 0002 is a very minimal patch to add XLOG_CHECKPOINT_REDO without using\n> it for anything.\n>\n> 0003 modifies CreateCheckPoint() to insert an XLOG_CHECKPOINT_REDO\n> record for any non-shutdown checkpoint, and modifies\n> XLogInsertRecord() to treat that as a new special case, wherein after\n> inserting the record the redo pointer is reset while still holding the\n> WAL insertion locks.\n>\n\nAfter the 0003 patch, do we need acquire exclusive lock via\nWALInsertLockAcquireExclusive() for non-shutdown checkpoints. Even the\ncomment \"We must block concurrent insertions while examining insert\nstate to determine the checkpoint REDO pointer.\" seems to indicate\nthat it is not required. If it is required then we may want to change\nthe comments and also acquiring the locks twice will have more cost\nthan acquiring it once and write the new WAL record under that lock.\n\nOne minor comment:\n+ else if (XLOG_CHECKPOINT_REDO)\n+ class = WALINSERT_SPECIAL_CHECKPOINT;\n+ }\n\nIsn't the check needs to compare the record type with info?\n\nYour v6-0001* patch looks like an improvement to me even without the\nother two patches.\n\nBTW, I would like to mention that there is a slight interaction of\nthis work with the patch to upgrade/migrate slots [1]. Basically in\n[1], to allow slots migration from lower to higher version, we need to\nensure that all the WAL has been consumed by the slots before clean\nshutdown. However, during upgrade we can generate few records like\ncheckpoint which we will ignore for the slot consistency checking as\nsuch records doesn't matter for data consistency after upgrade. We\nprobably need to add this record to that list. I'll keep an eye on\nboth the patches so that we don't miss that interaction but mentioned\nit here to make others also aware of the same.\n\n[1] - https://www.postgresql.org/message-id/TYAPR01MB586615579356A84A8CF29A00F5F9A%40TYAPR01MB5866.jpnprd01.prod.outlook.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 21 Sep 2023 13:52:34 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New WAL record to detect the checkpoint redo location"
},
{
"msg_contents": "On Thu, Sep 21, 2023 at 4:22 AM Amit Kapila <[email protected]> wrote:\n> After the 0003 patch, do we need acquire exclusive lock via\n> WALInsertLockAcquireExclusive() for non-shutdown checkpoints. Even the\n> comment \"We must block concurrent insertions while examining insert\n> state to determine the checkpoint REDO pointer.\" seems to indicate\n> that it is not required. If it is required then we may want to change\n> the comments and also acquiring the locks twice will have more cost\n> than acquiring it once and write the new WAL record under that lock.\n\nI think the comment needs updating. I don't think we can do curInsert\n= XLogBytePosToRecPtr(Insert->CurrBytePos) without taking the locks.\nSame for Insert->fullPageWrites.\n\nI agree that it looks a little wasteful to release the lock and then\nreacquire it, but I suppose checkpoints don't happen often enough for\nit to matter. You're not going to notice an extra set of insertion\nlock acquisitions once every 5 minutes, or every half hour, or even\nevery 1 minute if your checkpoints are super-frequent.\n\nAlso notice that the current code is also quite inefficient in this\nway. GetLastImportantRecPtr() acquires and releases each lock one at a\ntime, and then we immediately turn around and do\nWALInsertLockAcquireExclusive(). If the overhead that you're concerned\nabout here were big enough to matter, we could reclaim what we're\nlosing by having a version of GetLastImportantRecPtr() that expects to\nbe called with all locks already held. But when I asked Andres, he\nthought that it didn't matter, and I bet he's right.\n\n> One minor comment:\n> + else if (XLOG_CHECKPOINT_REDO)\n> + class = WALINSERT_SPECIAL_CHECKPOINT;\n> + }\n>\n> Isn't the check needs to compare the record type with info?\n\nYeah wow. That's a big mistake.\n\n> Your v6-0001* patch looks like an improvement to me even without the\n> other two patches.\n\nGood to know, thanks.\n\n> BTW, I would like to mention that there is a slight interaction of\n> this work with the patch to upgrade/migrate slots [1]. Basically in\n> [1], to allow slots migration from lower to higher version, we need to\n> ensure that all the WAL has been consumed by the slots before clean\n> shutdown. However, during upgrade we can generate few records like\n> checkpoint which we will ignore for the slot consistency checking as\n> such records doesn't matter for data consistency after upgrade. We\n> probably need to add this record to that list. I'll keep an eye on\n> both the patches so that we don't miss that interaction but mentioned\n> it here to make others also aware of the same.\n\nIf your approach requires a code change every time someone adds a new\nWAL record that doesn't modify table data, you might want to rethink\nthe approach a bit.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 21 Sep 2023 11:36:41 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New WAL record to detect the checkpoint redo location"
},
{
"msg_contents": "On Thu, Sep 21, 2023 at 1:50 AM Robert Haas <[email protected]> wrote:\n>\n> On Mon, Sep 18, 2023 at 2:57 PM Robert Haas <[email protected]> wrote:\n> > I've been brainstorming about this today, trying to figure out some\n> > ideas to make it work.\n>\n> Here are some patches.\n>\n> 0001 refactors XLogInsertRecord to unify a couple of separate tests of\n> isLogSwitch, hopefully making it cleaner and cheaper to add more\n> special cases.\n\nYeah, this looks improvement as it removes one isLogSwitch from the code.\n\n> 0002 is a very minimal patch to add XLOG_CHECKPOINT_REDO without using\n> it for anything.\n>\n> 0003 modifies CreateCheckPoint() to insert an XLOG_CHECKPOINT_REDO\n> record for any non-shutdown checkpoint, and modifies\n> XLogInsertRecord() to treat that as a new special case, wherein after\n> inserting the record the redo pointer is reset while still holding the\n> WAL insertion locks.\n\nYeah from a design POV, it looks fine to me because the main goal was\nto insert the XLOG_CHECKPOINT_REDO record and set the \"RedoRecPtr\"\nunder the same exclusive wal insertion lock and this patch is doing\nthis. As you already mentioned it is an improvement over my first\npatch because a) it holds the exclusive WAL insersion lock for a very\nshort duration b) not increasing the number of branches in\nXLogInsertRecord().\n\nSome review\n1.\nI feel we can reduce one more branch to the normal path by increasing\none branch in this special case i.e.\n\nYour code is\nif (class == WALINSERT_SPECIAL_SWITCH)\n{\n/*execute isSwitch case */\n}\nelse if (class == WALINSERT_SPECIAL_CHECKPOINT)\n{\n/*execute checkpoint redo case */\n}\nelse\n{\n/* common case*/\n}\n\nMy suggestion\nif (xl_rmid == RM_XLOG_ID)\n{\n if (class == WALINSERT_SPECIAL_SWITCH)\n {\n /*execute isSwitch case */\n }\n else if (class == WALINSERT_SPECIAL_CHECKPOINT)\n {\n /*execute checkpoint redo case */\n }\n}\nelse\n{\n /* common case*/\n}\n\n2.\nIn fact, I feel that we can remove this branch as well right? I mean\nwhy do we need to have this separate thing called \"class\"? we can\nvery much use \"info\" for that purpose. right?\n\n+ /* Does this record type require special handling? */\n+ if (rechdr->xl_rmid == RM_XLOG_ID)\n+ {\n+ if (info == XLOG_SWITCH)\n+ class = WALINSERT_SPECIAL_SWITCH;\n+ else if (XLOG_CHECKPOINT_REDO)\n+ class = WALINSERT_SPECIAL_CHECKPOINT;\n+ }\n\nSo if we remove this then we do not have this class and the above case\nwould look like\n\nif (xl_rmid == RM_XLOG_ID)\n{\n if (info == XLOG_SWITCH)\n {\n /*execute isSwitch case */\n }\n else if (info == XLOG_CHECKPOINT_REDO)\n {\n /*execute checkpoint redo case */\n }\n}\nelse\n{\n /* common case*/\n}\n\n3.\n+ /* Does this record type require special handling? */\n+ if (rechdr->xl_rmid == RM_XLOG_ID)\n+ {\n+ if (info == XLOG_SWITCH)\n+ class = WALINSERT_SPECIAL_SWITCH;\n+ else if (XLOG_CHECKPOINT_REDO)\n+ class = WALINSERT_SPECIAL_CHECKPOINT;\n+ }\n+\n\nthe above check-in else if is wrong I mean\nelse if (XLOG_CHECKPOINT_REDO) should be else if (info == XLOG_CHECKPOINT_REDO)\n\nThat's all I have for now.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 22 Sep 2023 09:33:50 +0530",
"msg_from": "Dilip Kumar <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: New WAL record to detect the checkpoint redo location"
},
{
"msg_contents": "On Thu, Sep 21, 2023 at 9:06 PM Robert Haas <[email protected]> wrote:\n>\n> On Thu, Sep 21, 2023 at 4:22 AM Amit Kapila <[email protected]> wrote:\n> > After the 0003 patch, do we need acquire exclusive lock via\n> > WALInsertLockAcquireExclusive() for non-shutdown checkpoints. Even the\n> > comment \"We must block concurrent insertions while examining insert\n> > state to determine the checkpoint REDO pointer.\" seems to indicate\n> > that it is not required. If it is required then we may want to change\n> > the comments and also acquiring the locks twice will have more cost\n> > than acquiring it once and write the new WAL record under that lock.\n>\n> I think the comment needs updating. I don't think we can do curInsert\n> = XLogBytePosToRecPtr(Insert->CurrBytePos) without taking the locks.\n> Same for Insert->fullPageWrites.\n>\n\nIf we can't do those without taking all the locks then it is fine but\njust wanted to give it a try to see if there is a way to avoid in case\nof online (non-shutdown) checkpoints. For example, curInsert is used\nonly for the shutdown path, so we don't need to acquire all locks for\nit in the cases except for the shutdown case. Here, we are reading\nInsert->fullPageWrites which requires an insertion lock but not all\nthe locks (as per comments in structure XLogCtlInsert). Now, I haven't\ndone detailed analysis for\nXLogCtl->InsertTimeLineID/XLogCtl->PrevTimeLineID but some places\nreading InsertTimeLineID have a comment like \"Given that we're not in\nrecovery, InsertTimeLineID is set and can't change, so we can read it\nwithout a lock.\" which suggests that some analysis is required whether\nreading those requires all locks in this code path. OTOH, it won't\nmatter to acquire all locks in this code path for the reasons\nmentioned by you and it may help in keeping the code simple. So, it is\nup to you to take the final call on this matter. I am fine with your\ndecision.\n\n>\n> > BTW, I would like to mention that there is a slight interaction of\n> > this work with the patch to upgrade/migrate slots [1]. Basically in\n> > [1], to allow slots migration from lower to higher version, we need to\n> > ensure that all the WAL has been consumed by the slots before clean\n> > shutdown. However, during upgrade we can generate few records like\n> > checkpoint which we will ignore for the slot consistency checking as\n> > such records doesn't matter for data consistency after upgrade. We\n> > probably need to add this record to that list. I'll keep an eye on\n> > both the patches so that we don't miss that interaction but mentioned\n> > it here to make others also aware of the same.\n>\n> If your approach requires a code change every time someone adds a new\n> WAL record that doesn't modify table data, you might want to rethink\n> the approach a bit.\n>\n\nI understand your hesitation and we have discussed several approaches\nthat do not rely on the WAL record type to determine if the slots have\ncaught up but the other approaches seem to have different other\ndownsides. I know it may not be a good idea to discuss those here but\nas there was a slight interaction with this work, so I thought to\nbring it up. To be precise, we need to ensure that we ignore WAL\nrecords that got generated during pg_upgrade operation (say during\npg_upgrade --check).\n\nThe approach we initially followed was to check if the slot's\nconfirmed_flush_lsn is equal to the latest checkpoint in\npg_controldata (which is the shutdown checkpoint after stopping the\nserver). 
This approach doesn't work for the use case where the user\nruns pg_upgrade --check before actually performing the upgrade [1].\nThis is because during the upgrade check, the server will be\nstopped/started and update the position of the latest checkpoint,\ncausing the check to fail in the actual upgrade and leading pg_upgrade\nto believe that the slot has not been caught up.\n\nTo address the issues in the above approach, we also discussed several\nalternative approaches[2][3]: a) Adding a new field in pg_controldata\nto record the last checkpoint that happens in non-upgrade mode, so\nthat we can compare the slot's confirmed_flush_lsn with this value.\nHowever, we were not sure if this was a good enough reason to add a\nnew field in controldata field and sprinkle IsBinaryUpgrade check in\ncheckpointer code path. b) Advancing each slot's confirmed_flush_lsn\nto the latest position if the first upgrade check passes. This way,\nwhen performing the actual upgrade, the confirmed_flush_lsn will also\npass. However, internally advancing the LSN seems unconventional. c)\nIntroducing a new pg_upgrade option to skip the check for slot\ncatch-up so that if it is already done at the time of pg_upgrade\n--check, we can avoid rechecking during actual upgrade. Although this\nmight work, the user would need to specify this manually, which is not\nideal. d) Document this and suggest users consume the WALs, but this\ndoesn't look acceptable to users.\n\nAll the above approaches have their downsides, prompting us to\nconsider the WAL scan approach which is to scan the end of the WAL for\nrecords that should have been streamed out. This approach was first\nproposed by Andres[4] and was chosen[5] after considering all other\napproaches. If we don't like relying on WAL record types then I think\nthe alternative (a) to add a new field in ControlDataFile is worth\nconsidering.\n\n[1] https://www.postgresql.org/message-id/CAA4eK1LzeZLoTLaAuadmuiggc5mq39oLY6fK95oFKiPBPBf%2BeQ%40mail.gmail.com\n[2] https://www.postgresql.org/message-id/OS0PR01MB571640E1B58741979A5E586594F7A%40OS0PR01MB5716.jpnprd01.prod.outlook.com\n[3] https://www.postgresql.org/message-id/TYAPR01MB5866EF7398CB13FFDBF230E7F5F0A%40TYAPR01MB5866.jpnprd01.prod.outlook.com\n[4] https://www.postgresql.org/message-id/20230725170319.h423jbthfohwgnf7%40awork3.anarazel.de\n[5] https://www.postgresql.org/message-id/CAA4eK1KqqWayKtRhvyRgkhEHvAUemW_dEqgFn7UOG3D4B6f0ew%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 22 Sep 2023 11:37:27 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New WAL record to detect the checkpoint redo location"
},
{
"msg_contents": "On Wed, Sep 20, 2023 at 4:20 PM Robert Haas <[email protected]> wrote:\n> Here are some patches.\n\nHere are some updated patches. Following some off-list conversation\nwith Andres, I restructured 0003 to put the common case first and use\nlikely(), and I fixed the brown-paper-bag noted by Amit. I then turned\nmy attention to performance testing. I was happy to find out when I\ndid a bunch of testing on Friday that my branch with these patches\napplied outperformed master. I was then less happy to find that when I\nrepeated the same tests today, master outperformed the branch. So now\nI don't know what is going on, but it doesn't seem like my test\nresults are stable enough to draw meaningful conclusions.\n\nI was trying to think of a test case where XLogInsertRecord would be\nexercised as heavily as possible, so I really wanted to generate a lot\nof WAL while doing as little real work as possible. The best idea that\nI had was to run pg_create_restore_point() in a loop. Initially,\nperformance was dominated by the log messages which that function\nemits, so I set log_min_messages='FATAL' to suppress those. To try to\nfurther reduce other bottlenecks, I also set max_wal_size='50GB',\nfsync='off', synchronous_commit='off', and wal_buffers='256MB'. Then I\nran this query:\n\nselect count(*) from (SELECT pg_create_restore_point('banana') from\ngenerate_series(1,100000000) g) x;\n\nI can't help laughing at the comedy of creating 100 million\nbanana-named restore points with no fsyncs or logging, but here we\nare. All of my test runs with master, and with the patches, and with\njust the first patch run in between 34 and 39 seconds. As I say, I\ncan't really separate out which versions are faster and slower with\nany confidence. Before I fixed the brown-paper bag that Amit pointed\nout, it was using WALInsertLockAcquireExclusive() instead of\nWALInsertLockAcquire() for *all* WAL records, and that created an\nextremely large and obvious increase in the runtime of the tests. So\nI'm relatively confident that this test case is sensitive to changes\nin execution time of XLogInsertRecord(), but apparently the changes\ncaused by rearranging the branches are a bit too marginal for them to\nshow up here.\n\nOne possible conclusion is that the differences here aren't actually\nbig enough to get stressed about, but I don't want to jump to that\nconclusion without investigating the competing hypothesis that this\nisn't the right way to test this, and that some better test would show\nclearer results. Suggestions?\n\n--\nRobert Haas\nEDB: http://www.enterprisedb.com",
"msg_date": "Mon, 2 Oct 2023 10:42:37 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New WAL record to detect the checkpoint redo location"
},
{
"msg_contents": "Hi,\n\nOn 2023-10-02 10:42:37 -0400, Robert Haas wrote:\n> I was trying to think of a test case where XLogInsertRecord would be\n> exercised as heavily as possible, so I really wanted to generate a lot\n> of WAL while doing as little real work as possible. The best idea that\n> I had was to run pg_create_restore_point() in a loop.\n\nWhat I use for that is pg_logical_emit_message(). Something like\n\nSELECT count(*)\nFROM\n (\n SELECT pg_logical_emit_message(false, '1', 'short'), generate_series(1, 10000)\n );\n\nrun via pgbench does seem to exercise that path nicely.\n\n\n> One possible conclusion is that the differences here aren't actually\n> big enough to get stressed about, but I don't want to jump to that\n> conclusion without investigating the competing hypothesis that this\n> isn't the right way to test this, and that some better test would show\n> clearer results. Suggestions?\n\nI saw some small differences in runtime running pgbench with the above query,\nwith a single client. Comparing profiles showed a surprising degree of\ndifference. That turns out to mostly a consequence of the fact that\nReserveXLogInsertLocation() isn't inlined anymore, because there now are two\ncallers of the function in XLogInsertRecord().\n\nUnfortunately, I still see a small performance difference after that. To get\nthe most reproducible numbers, I disable turbo boost, bound postgres to one\ncpu core, bound pgbench to another core. Over a few runs I quite reproducibly\nget ~319.323 tps with your patches applied (+ always inline), and ~324.674\nwith master.\n\nIf I add an unlikely around if (rechdr->xl_rmid == RM_XLOG_ID), the\nperformance does improve. But that \"only\" brings it up to 322.406. Not sure\nwhat the rest is.\n\n\nOne thing that's notable, but not related to the patch, is that we waste a\nfair bit of cpu time below XLogInsertRecord() with divisions. I think they're\nall due to the use of UsableBytesInSegment in\nXLogBytePosToRecPtr/XLogBytePosToEndRecPtr. The multiplication of\nXLogSegNoOffsetToRecPtr() also shows.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 5 Oct 2023 11:34:00 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New WAL record to detect the checkpoint redo location"
},
{
"msg_contents": "On Thu, Oct 5, 2023 at 2:34 PM Andres Freund <[email protected]> wrote:\n> If I add an unlikely around if (rechdr->xl_rmid == RM_XLOG_ID), the\n> performance does improve. But that \"only\" brings it up to 322.406. Not sure\n> what the rest is.\n\nI don't really think this is worth worrying about. A sub-one-percent\nregression on a highly artificial test case doesn't seem like a big\ndeal. Anybody less determined than you would have been unable to\nmeasure that there even is a regression in the first place, and that's\nbasically everyone.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 6 Oct 2023 13:44:55 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New WAL record to detect the checkpoint redo location"
},
{
"msg_contents": "On Thu, Oct 5, 2023 at 2:34 PM Andres Freund <[email protected]> wrote:\n> One thing that's notable, but not related to the patch, is that we waste a\n> fair bit of cpu time below XLogInsertRecord() with divisions. I think they're\n> all due to the use of UsableBytesInSegment in\n> XLogBytePosToRecPtr/XLogBytePosToEndRecPtr. The multiplication of\n> XLogSegNoOffsetToRecPtr() also shows.\n\nDespite what I said in my earlier email, and with a feeling like unto\nthat created by the proximity of the sword of Damocles or some ghostly\nalbatross, I spent some time reflecting on this. Some observations:\n\n1. The reason why we're doing this multiplication and division is to\nmake sure that the code in ReserveXLogInsertLocation which executes\nwhile holding insertpos_lck remains as simple and brief as possible.\nWe could eliminate the conversion between usable byte positions and\nLSNs if we replaced Insert->{Curr,Prev}BytePos with LSNs and had\nReserveXLogInsertLocation work out by how much to advance the LSN, but\nit would have to be worked out while holding insertpos_lck (or some\nreplacement lwlock, perhaps) and that cure seems worse than the\ndisease. Given that, I think we're stuck with converting between\nusable bye positions and LSNs, and that intrinsically needs some\nmultiplication and division.\n\n2. It seems possible to remove one branch in each of\nXLogBytePosToRecPtr and XLogBytePosToEndRecPtr. Rather than testing\nwhether bytesleft < XLOG_BLCKSZ - SizeOfXLogLongPHD, we could simply\nincrement bytesleft by SizeOfXLogLongPHD - SizeOfXLogShortPHD. Then\nthe rest of the calculations can be performed as if every page in the\nsegment had a header of length SizeOfXLogShortPHD, with no need to\nspecial-case the first page. However, that doesn't get rid of any\nmultiplication or division, just a branch.\n\n3. Aside from that, there seems to be no simple way to reduce the\ncomplexity of an individual calculation, but ReserveXLogInsertLocation\ndoes perform 3 rather similar computations, and I believe that we know\nthat it will always be the case that *PrevPtr < *StartPos < *EndPos.\nMaybe we could have a fast-path for the case where they are all in the\nsame segment. We could take prevbytepos modulo UsableBytesInSegment;\ncall the result prevsegoff. If UsableBytesInSegment - prevsegoff >\nendbytepos - prevbytepos, then all three pointers are in the same\nsegment, and maybe we could take advantage of that to avoid performing\nthe segment calculations more than once, but still needing to repeat\nthe page calculations. Or, instead or in addition, I think we could by\na similar technique check whether all three pointers are on the same\npage; if so, then *StartPos and *EndPos can be computed from *PrevPtr\nby just adding the difference between the corresponding byte\npositions.\n\nI'm not really sure whether that would come out cheaper. It's just the\nonly idea that I have. It did also occur to me to wonder whether the\napparent delays performing multiplication and division here were\nreally the result of the arithmetic itself being slow or whether they\nwere synchronization-related, SpinLockRelease(&Insert->insertpos_lck)\nbeing a memory barrier just before. But I assume you thought about\nthat and concluded that wasn't the issue here.\n\n--\nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 9 Oct 2023 15:58:36 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New WAL record to detect the checkpoint redo location"
},
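A rough sketch of the all-on-one-page fast path from point 3 above (illustration only: it glosses over the long page header on the first page of each segment, which a real version would have to account for, and the variable names are only meant to match ReserveXLogInsertLocation()'s locals):

uint64		prevpageoff = prevbytepos % UsableBytesInPage;

if (endbytepos - prevbytepos < UsableBytesInPage - prevpageoff)
{
	/*
	 * prev, start and end all land on the same WAL page, so a single
	 * conversion suffices; the other two are plain offsets from it.
	 */
	*PrevPtr = XLogBytePosToRecPtr(prevbytepos);
	*StartPos = *PrevPtr + (startbytepos - prevbytepos);
	*EndPos = *PrevPtr + (endbytepos - prevbytepos);
}
else
{
	/* Slow path: full conversions, as the code does today. */
	*PrevPtr = XLogBytePosToRecPtr(prevbytepos);
	*StartPos = XLogBytePosToRecPtr(startbytepos);
	*EndPos = XLogBytePosToEndRecPtr(endbytepos);
}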
{
"msg_contents": "Hi,\n\nOn 2023-10-06 13:44:55 -0400, Robert Haas wrote:\n> On Thu, Oct 5, 2023 at 2:34 PM Andres Freund <[email protected]> wrote:\n> > If I add an unlikely around if (rechdr->xl_rmid == RM_XLOG_ID), the\n> > performance does improve. But that \"only\" brings it up to 322.406. Not sure\n> > what the rest is.\n> \n> I don't really think this is worth worrying about. A sub-one-percent\n> regression on a highly artificial test case doesn't seem like a big\n> deal.\n\nI agree. I think it's worth measuring and looking at, after all the fix might\nbe trivial (like the case of the unlikely for the earlier if()). But it\nshouldn't block progress on significant features.\n\nI think this \"issue\" might be measurable in some other, not quite as artifical\ncases, like INSERT ... SELECT or such. But even then it's going to be tiny.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 9 Oct 2023 13:14:39 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New WAL record to detect the checkpoint redo location"
},
{
"msg_contents": "Hi,\n\nAs noted in my email from a few minutes ago, I agree that optimizing this\nshouldn't be a requirement for merging the patch.\n\n\nOn 2023-10-09 15:58:36 -0400, Robert Haas wrote:\n> 1. The reason why we're doing this multiplication and division is to\n> make sure that the code in ReserveXLogInsertLocation which executes\n> while holding insertpos_lck remains as simple and brief as possible.\n> We could eliminate the conversion between usable byte positions and\n> LSNs if we replaced Insert->{Curr,Prev}BytePos with LSNs and had\n> ReserveXLogInsertLocation work out by how much to advance the LSN, but\n> it would have to be worked out while holding insertpos_lck (or some\n> replacement lwlock, perhaps) and that cure seems worse than the\n> disease. Given that, I think we're stuck with converting between\n> usable bye positions and LSNs, and that intrinsically needs some\n> multiplication and division.\n\nRight, that's absolutely crucial for scalability.\n\n\n> 2. It seems possible to remove one branch in each of\n> XLogBytePosToRecPtr and XLogBytePosToEndRecPtr. Rather than testing\n> whether bytesleft < XLOG_BLCKSZ - SizeOfXLogLongPHD, we could simply\n> increment bytesleft by SizeOfXLogLongPHD - SizeOfXLogShortPHD. Then\n> the rest of the calculations can be performed as if every page in the\n> segment had a header of length SizeOfXLogShortPHD, with no need to\n> special-case the first page. However, that doesn't get rid of any\n> multiplication or division, just a branch.\n\nThis reminded me about something I've been bugged by for a while: The whole\nidea of short xlog page headers seems like a completely premature\noptimization. The page header is a very small amount of the overall data\n(long: 40/8192 ~= 0.00488, short: 24/8192 ~= 0.00292), compared to the space\nwe waste in many other places, including on a per-record level, it doesn't\nseem worth the complexity.\n\n\n\n> 3. Aside from that, there seems to be no simple way to reduce the\n> complexity of an individual calculation, but ReserveXLogInsertLocation\n> does perform 3 rather similar computations, and I believe that we know\n> that it will always be the case that *PrevPtr < *StartPos < *EndPos.\n> Maybe we could have a fast-path for the case where they are all in the\n> same segment. We could take prevbytepos modulo UsableBytesInSegment;\n> call the result prevsegoff. If UsableBytesInSegment - prevsegoff >\n> endbytepos - prevbytepos, then all three pointers are in the same\n> segment, and maybe we could take advantage of that to avoid performing\n> the segment calculations more than once, but still needing to repeat\n> the page calculations. Or, instead or in addition, I think we could by\n> a similar technique check whether all three pointers are on the same\n> page; if so, then *StartPos and *EndPos can be computed from *PrevPtr\n> by just adding the difference between the corresponding byte\n> positions.\n\nI think we might be able to speed some of this up by pre-compute values so we\ncan implement things like bytesleft / UsableBytesInPage with shifts. IIRC we\nalready insist on power-of-two segment sizes, so instead of needing to divide\nby a runtime value, we should be able to shift by a runtime value (and the\nmodulo should be a mask).\n\n\n> I'm not really sure whether that would come out cheaper. It's just the\n> only idea that I have. 
It did also occur to me to wonder whether the\n> apparent delays performing multiplication and division here were\n> really the result of the arithmetic itself being slow or whether they\n> were synchronization-related, SpinLockRelease(&Insert->insertpos_lck)\n> being a memory barrier just before. But I assume you thought about\n> that and concluded that wasn't the issue here.\n\nI did verify that they continue to be a bottleneck even after (incorrectly\nobviously), removing the spinlock. It's also not too surprising, the latency\nof 64bit divs is just high, particularly on intel from a few years ago (my\ncascade lake workstation) and IIRC there's just a single execution port for it\ntoo, so multiple instructions can't be fully parallelized.\n\nhttps://uops.info/table.html documents a worst case latency of 89 cycles on\ncascade lake, with the division broken up into 36 uops (reducing what's\navailable to track other in-flight instructions). It's much better on alder\nlake (9 cycles and 7 uops on the perf cores, 44 cycles and 4 uops on\nefficiency cores) and on zen 3+ (19 cycles, 2 uops).\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 9 Oct 2023 13:47:02 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New WAL record to detect the checkpoint redo location"
},
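To make idea (2) in the list quoted above concrete, here is a rough sketch of a branchless byte-position-to-LSN conversion. It is only an illustration, not the code from xlog.c: the page size, segment size and header sizes are hard-coded assumptions for the example (8 kB pages, a 16 MB segment, 24/40-byte short/long page headers), and every name carries an _ex suffix to make clear it is a stand-in for the real symbols derived from xlog_internal.h and wal_segment_size.

/*
 * Illustrative sketch: convert a usable byte position into an LSN while
 * pretending every page has a short header, so the "first page of the
 * segment" branch disappears.
 */
#include <stdint.h>

#define BLCKSZ_EX      8192u
#define SHORT_PHD_EX   24u
#define LONG_PHD_EX    40u
#define SEGSZ_EX       (16u * 1024 * 1024)

static const uint64_t UsableBytesInPage_ex = BLCKSZ_EX - SHORT_PHD_EX;
static const uint64_t UsableBytesInSegment_ex =
    (SEGSZ_EX / BLCKSZ_EX) * (BLCKSZ_EX - SHORT_PHD_EX) -
    (LONG_PHD_EX - SHORT_PHD_EX);

static uint64_t
byte_pos_to_lsn_ex(uint64_t bytepos)
{
    uint64_t fullsegs  = bytepos / UsableBytesInSegment_ex;
    uint64_t bytesleft = bytepos % UsableBytesInSegment_ex;
    uint64_t fullpages;
    uint64_t pageoff;

    /* compensate for the segment's first page having a long header */
    bytesleft += LONG_PHD_EX - SHORT_PHD_EX;

    fullpages = bytesleft / UsableBytesInPage_ex;
    pageoff   = bytesleft % UsableBytesInPage_ex + SHORT_PHD_EX;

    return fullsegs * SEGSZ_EX + fullpages * BLCKSZ_EX + pageoff;
}

For example, byte position 0 maps to segment offset 40 (just past the long header) and the first usable position on the second page maps to offset 8192 + 24, matching the branchy version. The compensation removes the special case, but the two divisions and two moduli remain, which is why the rest of this subthread turns to the cost of 64-bit division itself.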
{
"msg_contents": "On Mon, Oct 9, 2023 at 4:47 PM Andres Freund <[email protected]> wrote:\n> I think we might be able to speed some of this up by pre-compute values so we\n> can implement things like bytesleft / UsableBytesInPage with shifts. IIRC we\n> already insist on power-of-two segment sizes, so instead of needing to divide\n> by a runtime value, we should be able to shift by a runtime value (and the\n> modulo should be a mask).\n\nHuh, is there a general technique for this when dividing by a\nnon-power-of-two? The segment size is a power of two, as is the page\nsize, but UsableBytesIn{Page,Segment} are some random value slightly\nless than a power of two.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 9 Oct 2023 18:31:11 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New WAL record to detect the checkpoint redo location"
},
{
"msg_contents": "Hi,\n\nOn 2023-10-09 18:31:11 -0400, Robert Haas wrote:\n> On Mon, Oct 9, 2023 at 4:47 PM Andres Freund <[email protected]> wrote:\n> > I think we might be able to speed some of this up by pre-compute values so we\n> > can implement things like bytesleft / UsableBytesInPage with shifts. IIRC we\n> > already insist on power-of-two segment sizes, so instead of needing to divide\n> > by a runtime value, we should be able to shift by a runtime value (and the\n> > modulo should be a mask).\n> \n> Huh, is there a general technique for this when dividing by a\n> non-power-of-two?\n\nThere is, but I was just having a brainfart, forgetting that UsableBytesInPage\nisn't itself a power of two. The general technique is used by compilers, but\ndoesn't iirc lend itself well to be done at runtime.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 9 Oct 2023 16:20:34 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New WAL record to detect the checkpoint redo location"
},
{
"msg_contents": "On Mon, Oct 9, 2023 at 4:47 PM Andres Freund <[email protected]> wrote:\n> As noted in my email from a few minutes ago, I agree that optimizing this\n> shouldn't be a requirement for merging the patch.\n\nHere's a new patch set. I think I've incorporated the performance\nfixes that you've suggested so far into this version. I also adjusted\na couple of other things:\n\n- After further study of a point previously raised by Amit, I adjusted\nCreateCheckPoint slightly to call WALInsertLockAcquireExclusive\nsignificantly later than before. I think that there's no real reason\nto do it so early and that the current coding is probably just a\nhistorical leftover, but it would be good to have some review here.\n\n- I added a cross-check that when starting redo from a checkpoint\nwhose redo pointer points to an earlier LSN that the checkpoint\nitself, the record we read from that LSN must an XLOG_CHECKPOINT_REDO\nrecord.\n\n- I combined what were previously 0002 and 0003 into a single patch,\nsince that's how this would get committed.\n\n- I fixed up some comments.\n\n- I updated commit messages.\n\nHopefully this is getting close to good enough.\n\n> I did verify that they continue to be a bottleneck even after (incorrectly\n> obviously), removing the spinlock. It's also not too surprising, the latency\n> of 64bit divs is just high, particularly on intel from a few years ago (my\n> cascade lake workstation) and IIRC there's just a single execution port for it\n> too, so multiple instructions can't be fully parallelized.\n\nThe chipset on my laptop is even older. Coffee Lake, I think.\n\nI'm not really sure that there's a whole lot we can reasonably do\nabout the divs unless you like the fastpath idea that I proposed\nearlier, or unless you want to write a patch to either get rid of\nshort page headers or make long and short page headers the same number\nof bytes. I have to admit I'm surprised by how visible the division\noverhead is in this code path -- but I'm also somewhat inclined to\nview that less as evidence that division is something we should be\ndesperate to eliminate and more as evidence that this code path is\nquite fast already. In light of your findings, it doesn't seem\ncompletely impossible to me that the speed of integer division in this\ncode path could be part of what limits performance for some users, but\nI'm also not sure it's all that likely or all that serious, because\nwe're deliberating creating test cases that insert unreasonable\namounts of WAL without doing any actual work. In the real world,\nthere's going to be a lot more other code running along with this code\n- probably at least the executor and some heap AM code - and I bet not\nall of that is as well-optimized as this is already. And it's also\nquite likely for many users that the real limits on the speed of the\nworkload will be related to I/O or lock contention rather than CPU\ncost in any form. I'm not saying it's not worth worrying about it. I'm\njust saying that we should make sure the amount of worrying we do is\ncalibrated to the true importance of the issue.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com",
"msg_date": "Tue, 10 Oct 2023 14:43:34 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New WAL record to detect the checkpoint redo location"
},
{
"msg_contents": "On Tue, Oct 10, 2023 at 11:33 AM Robert Haas <[email protected]> wrote:\n> On Mon, Oct 9, 2023 at 4:47 PM Andres Freund <[email protected]> wrote:\n> > I think we might be able to speed some of this up by pre-compute values so we\n> > can implement things like bytesleft / UsableBytesInPage with shifts. IIRC we\n> > already insist on power-of-two segment sizes, so instead of needing to divide\n> > by a runtime value, we should be able to shift by a runtime value (and the\n> > modulo should be a mask).\n>\n> Huh, is there a general technique for this when dividing by a\n> non-power-of-two? The segment size is a power of two, as is the page\n> size, but UsableBytesIn{Page,Segment} are some random value slightly\n> less than a power of two.\n\nBTW in case someone is interested, Hacker's Delight (a book that has\ncome up on this list a few times before) devotes a couple of chapters\nof magical incantations to this topic. Compilers know that magic, and\none thought I had when I first saw this discussion was that we could\nspecialise the code for the permissible wal segment sizes. But nuking\nthe variable sized page headers sounds better.\n\n\n",
"msg_date": "Thu, 12 Oct 2023 10:42:39 +1300",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New WAL record to detect the checkpoint redo location"
},
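For a divisor that is only fixed at server start (such as UsableBytesInPage once wal_segment_size is known), a simplified round-up variant of that magic-number machinery is enough, provided the dividends stay well below 2^64 / divisor — which holds for byte offsets within a WAL segment. The sketch below is purely illustrative: the helper names are invented, it relies on the GCC/Clang unsigned __int128 extension for the high multiply, and whether it actually beats the hardware divide would still need measuring.

#include <assert.h>
#include <stdint.h>

typedef struct
{
    uint64_t    magic;      /* ceil(2^64 / divisor) */
    uint64_t    divisor;
} fastdiv_ex;

static fastdiv_ex
fastdiv_init_ex(uint64_t divisor)
{
    fastdiv_ex  fd;

    assert(divisor > 1);
    /* (2^64 - 1) / d + 1 equals ceil(2^64 / d) without 64-bit overflow */
    fd.magic = UINT64_MAX / divisor + 1;
    fd.divisor = divisor;
    return fd;
}

static inline uint64_t
fastdiv_div_ex(uint64_t n, fastdiv_ex fd)
{
    /* exact whenever n < 2^64 / divisor, true for intra-segment offsets */
    return (uint64_t) (((unsigned __int128) n * fd.magic) >> 64);
}

static inline uint64_t
fastdiv_mod_ex(uint64_t n, fastdiv_ex fd)
{
    return n - fastdiv_div_ex(n, fd) * fd.divisor;
}

The precomputation would happen once, next to where UsableBytesInPage and UsableBytesInSegment are set up, and the hot path would then pay a multiply and a shift instead of a divide instruction.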
{
"msg_contents": "On Tue, Oct 10, 2023 at 02:43:34PM -0400, Robert Haas wrote:\n> - I combined what were previously 0002 and 0003 into a single patch,\n> since that's how this would get committed.\n> \n> - I fixed up some comments.\n> \n> - I updated commit messages.\n> \n> Hopefully this is getting close to good enough.\n\nI have looked at 0001, for now.. And it looks OK to me.\n\n+ * Nonetheless, this case is simpler than the normal cases handled\n+ * above, which must check for changes in doPageWrites and RedoRecPtr.\n+ * Those checks are only needed for records that can contain\n+ * full-pages images, and an XLOG_SWITCH record never does.\n+ Assert(fpw_lsn == InvalidXLogRecPtr);\n\nRight, that's the core reason behind the refactoring. The assertion\nis a good idea.\n--\nMichael",
"msg_date": "Thu, 12 Oct 2023 16:27:19 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New WAL record to detect the checkpoint redo location"
},
{
"msg_contents": "On Thu, Oct 12, 2023 at 3:27 AM Michael Paquier <[email protected]> wrote:\n> I have looked at 0001, for now.. And it looks OK to me.\n\nCool. I've committed that one. Thanks for the review.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 12 Oct 2023 14:10:31 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New WAL record to detect the checkpoint redo location"
},
{
"msg_contents": "On Tue, Oct 10, 2023 at 02:43:34PM -0400, Robert Haas wrote:\n> Here's a new patch set. I think I've incorporated the performance\n> fixes that you've suggested so far into this version. I also adjusted\n> a couple of other things:\n\nNow looking at 0002, where you should be careful about the code\nindentation or koel will complain.\n\n> - After further study of a point previously raised by Amit, I adjusted\n> CreateCheckPoint slightly to call WALInsertLockAcquireExclusive\n> significantly later than before. I think that there's no real reason\n> to do it so early and that the current coding is probably just a\n> historical leftover, but it would be good to have some review here.\n\nThis makes the new code call LocalSetXLogInsertAllowed() and what we\nset for checkPoint.PrevTimeLineID after taking the insertion locks,\nwhich should be OK.\n\n> - I added a cross-check that when starting redo from a checkpoint\n> whose redo pointer points to an earlier LSN that the checkpoint\n> itself, the record we read from that LSN must an XLOG_CHECKPOINT_REDO\n> record.\n\nI've mentioned as well a test in pg_walinspect after one of the\ncheckpoints generated there, but what you do here is enough for the\nonline case.\n\n+ /*\n+ * XLogInsertRecord will have updated RedoRecPtr, but we need to copy\n+ * that into the record that will be inserted when the checkpoint is\n+ * complete.\n+ */\n+ checkPoint.redo = RedoRecPtr;\n\nFor online checkpoints, a very important point is that\nXLogCtl->Insert.RedoRecPtr is also updated in XLogInsertRecord().\nPerhaps that's worth an addition? I was a bit confused first that we\ndo the following for shutdown checkpoints:\nRedoRecPtr = XLogCtl->Insert.RedoRecPtr = checkPoint.redo;\n\nThen repeat this pattern for non-shutdown checkpoints a few lines down\nwithout touching the copy of the redo LSN in XLogCtl->Insert, because\nof course we don't hold the WAL insert locks in an exclusive fashion\nhere:\ncheckPoint.redo = RedoRecPtr;\n\nMy point is that this is not only about RedoRecPtr, but also about\nXLogCtl->Insert.RedoRecPtr here. The comment in ReserveXLogSwitch()\nsays that.\n--\nMichael",
"msg_date": "Fri, 13 Oct 2023 16:29:12 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New WAL record to detect the checkpoint redo location"
},
{
"msg_contents": "On Fri, Oct 13, 2023 at 3:29 AM Michael Paquier <[email protected]> wrote:\n> Now looking at 0002, where you should be careful about the code\n> indentation or koel will complain.\n\nFixed in the attached version.\n\n> This makes the new code call LocalSetXLogInsertAllowed() and what we\n> set for checkPoint.PrevTimeLineID after taking the insertion locks,\n> which should be OK.\n\nCool.\n\n> I've mentioned as well a test in pg_walinspect after one of the\n> checkpoints generated there, but what you do here is enough for the\n> online case.\n\nI don't quite understand what you're saying here. If you're suggesting\na potential improvement, can you be a bit more clear and explicit\nabout what the suggestion is?\n\n> + /*\n> + * XLogInsertRecord will have updated RedoRecPtr, but we need to copy\n> + * that into the record that will be inserted when the checkpoint is\n> + * complete.\n> + */\n> + checkPoint.redo = RedoRecPtr;\n>\n> For online checkpoints, a very important point is that\n> XLogCtl->Insert.RedoRecPtr is also updated in XLogInsertRecord().\n> Perhaps that's worth an addition? I was a bit confused first that we\n> do the following for shutdown checkpoints:\n> RedoRecPtr = XLogCtl->Insert.RedoRecPtr = checkPoint.redo;\n>\n> Then repeat this pattern for non-shutdown checkpoints a few lines down\n> without touching the copy of the redo LSN in XLogCtl->Insert, because\n> of course we don't hold the WAL insert locks in an exclusive fashion\n> here:\n> checkPoint.redo = RedoRecPtr;\n>\n> My point is that this is not only about RedoRecPtr, but also about\n> XLogCtl->Insert.RedoRecPtr here. The comment in ReserveXLogSwitch()\n> says that.\n\nI have adjusted the comment in CreateCheckPoint to hopefully address\nthis concern. I don't understand what you mean about\nReserveXLogSwitch(), though.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com",
"msg_date": "Tue, 17 Oct 2023 12:45:52 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New WAL record to detect the checkpoint redo location"
},
{
"msg_contents": "On Tue, Oct 17, 2023 at 12:45:52PM -0400, Robert Haas wrote:\n> On Fri, Oct 13, 2023 at 3:29 AM Michael Paquier <[email protected]> wrote:\n>> I've mentioned as well a test in pg_walinspect after one of the\n>> checkpoints generated there, but what you do here is enough for the\n>> online case.\n> \n> I don't quite understand what you're saying here. If you're suggesting\n> a potential improvement, can you be a bit more clear and explicit\n> about what the suggestion is?\n\nSuggestion is from here, with a test for pg_walinspect after it runs\nits online checkpoint (see the full-page case):\nhttps://www.postgresql.org/message-id/ZOvf1tu6rfL/[email protected]\n\n+-- Check presence of REDO record.\n+SELECT redo_lsn FROM pg_control_checkpoint() \\gset\n+SELECT start_lsn = :'redo_lsn'::pg_lsn AS same_lsn, record_type\n+ FROM pg_get_wal_record_info(:'redo_lsn');\n\n>> Then repeat this pattern for non-shutdown checkpoints a few lines down\n>> without touching the copy of the redo LSN in XLogCtl->Insert, because\n>> of course we don't hold the WAL insert locks in an exclusive fashion\n>> here:\n>> checkPoint.redo = RedoRecPtr;\n>>\n>> My point is that this is not only about RedoRecPtr, but also about\n>> XLogCtl->Insert.RedoRecPtr here. The comment in ReserveXLogSwitch()\n>> says that.\n> \n> I have adjusted the comment in CreateCheckPoint to hopefully address\n> this concern.\n\n- * XLogInsertRecord will have updated RedoRecPtr, but we need to copy\n- * that into the record that will be inserted when the checkpoint is\n- * complete.\n+ * XLogInsertRecord will have updated XLogCtl->Insert.RedoRecPtr in\n+ * shared memory and RedoRecPtr in backend-local memory, but we need\n+ * to copy that into the record that will be inserted when the\n+ * checkpoint is complete. \n\nThis comment diff between v8 and v9 looks OK to me. Thanks.\n\n> I don't understand what you mean about\n> ReserveXLogSwitch(), though.\n\nI am not sure either, looking back at that :p\n--\nMichael",
"msg_date": "Wed, 18 Oct 2023 09:35:17 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New WAL record to detect the checkpoint redo location"
},
{
"msg_contents": "On Tue, Oct 17, 2023 at 8:35 PM Michael Paquier <[email protected]> wrote:\n> Suggestion is from here, with a test for pg_walinspect after it runs\n> its online checkpoint (see the full-page case):\n> https://www.postgresql.org/message-id/ZOvf1tu6rfL/[email protected]\n>\n> +-- Check presence of REDO record.\n> +SELECT redo_lsn FROM pg_control_checkpoint() \\gset\n> +SELECT start_lsn = :'redo_lsn'::pg_lsn AS same_lsn, record_type\n> + FROM pg_get_wal_record_info(:'redo_lsn');\n\nI added a variant of this test case. Here's v10.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com",
"msg_date": "Wed, 18 Oct 2023 10:24:50 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New WAL record to detect the checkpoint redo location"
},
{
"msg_contents": "On Wed, Oct 18, 2023 at 10:24:50AM -0400, Robert Haas wrote:\n> I added a variant of this test case. Here's v10.\n\n+-- Verify that an XLOG_CHECKPOINT_REDO record begins at precisely the redo LSN\n+-- of the checkpoint we just performed.\n+SELECT redo_lsn FROM pg_control_checkpoint() \\gset\n+SELECT start_lsn = :'redo_lsn'::pg_lsn AS same_lsn, resource_manager,\n+ record_type FROM pg_get_wal_record_info(:'redo_lsn');\n+ same_lsn | resource_manager | record_type \n+----------+------------------+-----------------\n+ t | XLOG | CHECKPOINT_REDO\n+(1 row)\n\nSeems fine to me. Thanks for considering the idea.\n--\nMichael",
"msg_date": "Thu, 19 Oct 2023 14:53:12 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New WAL record to detect the checkpoint redo location"
},
{
"msg_contents": "On Thu, Oct 19, 2023 at 1:53 AM Michael Paquier <[email protected]> wrote:\n> Seems fine to me. Thanks for considering the idea.\n\nI think it was a good idea!\n\nI've committed the patch.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 19 Oct 2023 14:48:56 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New WAL record to detect the checkpoint redo location"
}
] |
[
{
"msg_contents": "Hi,\n\nI've attached the patch for the following rewriteTargetView comments.\n\n\n Assert(parsetree->resultRelation == new_rt_index);\n\n /*\n * For INSERT/UPDATE we must also update resnos in the targetlist to refer\n * to columns of the base relation, since those indicate the target\n * columns to be affected.\n *\n * Note that this destroys the resno ordering of the targetlist, but that\n * will be fixed when we recurse through rewriteQuery, which will invoke\n * rewriteTargetListIU again on the updated targetlist.\n */\n if (parsetree->commandType != CMD_DELETE)\n {\n foreach(lc, parsetree->targetList)\n\ns/rewriteQuery/RewriteQuery\n\nregards,\nSho Kato",
"msg_date": "Thu, 15 Jun 2023 08:07:19 +0000",
"msg_from": "\"Sho Kato (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Fix a typo in rewriteHandler.c"
},
{
"msg_contents": "Hello,\n\nOn Thu, Jun 15, 2023 at 5:07 PM Sho Kato (Fujitsu) <[email protected]> wrote:\n>\n> Hi,\n>\n> I've attached the patch for the following rewriteTargetView comments.\n>\n>\n> Assert(parsetree->resultRelation == new_rt_index);\n>\n> /*\n> * For INSERT/UPDATE we must also update resnos in the targetlist to refer\n> * to columns of the base relation, since those indicate the target\n> * columns to be affected.\n> *\n> * Note that this destroys the resno ordering of the targetlist, but that\n> * will be fixed when we recurse through rewriteQuery, which will invoke\n> * rewriteTargetListIU again on the updated targetlist.\n> */\n> if (parsetree->commandType != CMD_DELETE)\n> {\n> foreach(lc, parsetree->targetList)\n>\n> s/rewriteQuery/RewriteQuery\n\nGood catch and thanks for the patch. Will push shortly.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 16 Jun 2023 10:25:13 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix a typo in rewriteHandler.c"
},
{
"msg_contents": "On Fri, Jun 16, 2023 at 10:25 AM Amit Langote <[email protected]> wrote:\n> On Thu, Jun 15, 2023 at 5:07 PM Sho Kato (Fujitsu) <[email protected]> wrote:\n> > I've attached the patch for the following rewriteTargetView comments.\n> >\n> > s/rewriteQuery/RewriteQuery\n>\n> Good catch and thanks for the patch. Will push shortly.\n\nDone.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 16 Jun 2023 10:35:23 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix a typo in rewriteHandler.c"
}
] |
[
{
"msg_contents": "Hi, all.\n\nSome of my clients use JOIN's with three - four clauses. Quite \nfrequently, I see complaints on unreasonable switch of JOIN algorithm to \nMerge Join instead of Hash Join. Quick research have shown one weak \nplace - estimation of an average bucket size in final_cost_hashjoin (see \nq2.sql in attachment) with very conservative strategy.\nUnlike estimation of groups, here we use smallest ndistinct value across \nall buckets instead of multiplying them (or trying to make multivariate \nanalysis).\nIt works fine for the case of one clause. But if we have many clauses, \nand if each has high value of ndistinct, we will overestimate average \nsize of a bucket and, as a result, prefer to use Merge Join. As the \nexample in attachment shows, it leads to worse plan than possible, \nsometimes drastically worse.\nI assume, this is done with fear of functional dependencies between hash \nclause components. But as for me, here we should go the same way, as \nestimation of groups.\nThe attached patch shows a sketch of the solution.\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional",
"msg_date": "Thu, 15 Jun 2023 14:30:10 +0600",
"msg_from": "Andrey Lepikhov <[email protected]>",
"msg_from_op": true,
"msg_subject": "MergeJoin beats HashJoin in the case of multiple hash clauses"
},
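To see why the conservative rule matters once several hash clauses are involved, here is a small standalone illustration. It is not the costsize.c code: the per-clause fractions and the bucket count are made-up numbers, and the clamp at 1/virtualbuckets reflects the intent of the sketch patch (never assume a bucket holds less than an even share of the inner rows) rather than its exact arithmetic.

#include <stdio.h>

/* current rule: keep the smallest per-clause bucket-size fraction */
static double
combine_min(const double *frac, int n)
{
    double      result = 1.0;
    int         i;

    for (i = 0; i < n; i++)
        if (frac[i] < result)
            result = frac[i];
    return result;
}

/* proposed rule: multiply the fractions, clamped at an even spread */
static double
combine_product(const double *frac, int n, double virtualbuckets)
{
    double      result = 1.0;
    int         i;

    for (i = 0; i < n; i++)
        result *= frac[i];
    if (result < 1.0 / virtualbuckets)
        result = 1.0 / virtualbuckets;
    return result;
}

int
main(void)
{
    /* three equality clauses, each alone leaving ~1% of inner rows per bucket */
    double      frac[3] = {0.01, 0.01, 0.01};
    double      virtualbuckets = 131072.0;

    /* prints: min 0.01, product about 7.6e-06 */
    printf("min %g, product %g\n",
           combine_min(frac, 3),
           combine_product(frac, 3, virtualbuckets));
    return 0;
}

With the minimum rule the estimated bucket still holds 1% of the inner rows no matter how many clauses are present, which inflates the Hash Join cost and can tip the planner towards Merge Join; the product rule credits the extra clauses, at the price of being too optimistic when the columns are correlated, a concern raised later in the thread.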
{
"msg_contents": "Hi!\n\nOn 15.06.2023 11:30, Andrey Lepikhov wrote:\n> Hi, all.\n>\n> Some of my clients use JOIN's with three - four clauses. Quite \n> frequently, I see complaints on unreasonable switch of JOIN algorithm \n> to Merge Join instead of Hash Join. Quick research have shown one weak \n> place - estimation of an average bucket size in final_cost_hashjoin \n> (see q2.sql in attachment) with very conservative strategy.\n> Unlike estimation of groups, here we use smallest ndistinct value \n> across all buckets instead of multiplying them (or trying to make \n> multivariate analysis).\n> It works fine for the case of one clause. But if we have many clauses, \n> and if each has high value of ndistinct, we will overestimate average \n> size of a bucket and, as a result, prefer to use Merge Join. As the \n> example in attachment shows, it leads to worse plan than possible, \n> sometimes drastically worse.\n> I assume, this is done with fear of functional dependencies between \n> hash clause components. But as for me, here we should go the same way, \n> as estimation of groups.\n> The attached patch shows a sketch of the solution.\n>\nThis problem is very important.\n\nHonestly, I'm still learning your code and looking for cases on which \ncases your patch can affect for the worse or for the better. But I have \nalready found something that seemed interesting to me. I have found \nseveral other interesting cases where your patch can solve some problem \nin order to choose a more correct plan, but in focus on memory consumption.\n\nTo make it easier to evaluate, I added a hook to your patch that makes \nit easier to switch to your or the original way of estimating the size \nof baskets (diff_estimate.diff).\n\nHere are other cases where your fix improves the query plan.\n\nFirst of all, I changed the way creation of tables are created to look \nat the behavior of the query plan in terms of planning and execution time:\n\nDROP TABLE IF EXISTS a,b CASCADE;\nCREATE TABLE a AS\n SELECT ((3*gs) % 300) AS x, ((3*gs+1) % 300) AS y, ((3*gs+2) % 300) AS z\n FROM generate_series(1,1e5) AS gs;\nCREATE TABLE b AS\n SELECT gs % 90 AS x, gs % 49 AS y, gs %100 AS z, 'abc' || gs AS payload\n FROM generate_series(1,1e5) AS gs;\nANALYZE a,b;\n\nSET enable_cost_size = 'on';\nEXPLAIN ANALYZE\nSELECT * FROM a,b\nWHERE a.x=b.x AND a.y=b.y AND a.z=b.z;\n\nSET enable_cost_size = 'off';\nEXPLAIN ANALYZE\nSELECT * FROM a,b\nWHERE a.x=b.x AND a.y=b.y AND a.z=b.z;\n\n\n QUERY PLAN\n---------------------------------------------------------------------------\n Hash Join (actual time=200.872..200.879 rows=0 loops=1)\n Hash Cond: ((b.x = a.x) AND (b.y = a.y) AND (b.z = a.z))\n -> Seq Scan on b (actual time=0.029..15.946 rows=100000 loops=1)\n -> Hash (actual time=97.645..97.649 rows=100000 loops=1)\n Buckets: 131072 Batches: 1 Memory Usage: 5612kB\n -> Seq Scan on a (actual time=0.024..17.153 rows=100000 loops=1)\n Planning Time: 2.910 ms\n Execution Time: 201.949 ms\n(8 rows)\n\nSET\n QUERY PLAN\n---------------------------------------------------------------------------\n Merge Join (actual time=687.415..687.416 rows=0 loops=1)\n Merge Cond: ((b.y = a.y) AND (b.x = a.x) AND (b.z = a.z))\n -> Sort (actual time=462.022..536.716 rows=100000 loops=1)\n Sort Key: b.y, b.x, b.z\n Sort Method: external merge Disk: 3328kB\n -> Seq Scan on b (actual time=0.017..12.326 rows=100000 loops=1)\n -> Sort (actual time=111.295..113.196 rows=16001 loops=1)\n Sort Key: a.y, a.x, a.z\n Sort Method: external sort Disk: 2840kB\n -> Seq 
Scan on a (actual time=0.020..10.129 rows=100000 loops=1)\n Planning Time: 0.752 ms\n Execution Time: 688.829 ms\n(12 rows)\n\nSecondly, I found another case that is not related to the fact that the \nplanner would prefer to choose merge join rather than hash join, but we \nhave the opportunity to see that the plan has become better due to the \nconsumption of less memory, and also takes less planning time.\n\nHere, with the same query, the planning time was reduced by 5 times, and \nthe number of buckets by 128 times, therefore, memory consumption also \ndecreased:\n\nDROP TABLE IF EXISTS a,b CASCADE;\n\nCREATE TABLE a AS\n SELECT ((3*gs) % 300) AS x, ((3*gs+1) % 300) AS y, ((3*gs+2) % 300) AS z\n FROM generate_series(1,600) AS gs;\nCREATE TABLE b AS\n SELECT gs % 90 AS x, gs % 49 AS y, gs %100 AS z, 'abc' || gs AS payload\n FROM generate_series(1,1e5) AS gs;\nANALYZE a,b;\n\nSET enable_cost_size = 'on';\nEXPLAIN ANALYZE\nSELECT * FROM a,b\nWHERE a.x=b.x AND a.y=b.y AND a.z=b.z;\n\nSET enable_cost_size = 'off';\nEXPLAIN ANALYZE\nSELECT * FROM a,b\nWHERE a.x=b.x AND a.y=b.y AND a.z=b.z;\n\nQUERY PLAN\n----------------------------------------------------------------------------------------------------------------\n Hash Join (cost=20.50..3157.58 rows=8 width=32) (actual \ntime=95.648..95.651 rows=0 loops=1)\n Hash Cond: ((b.x = (a.x)::numeric) AND (b.y = (a.y)::numeric) AND \n(b.z = (a.z)::numeric))\n -> Seq Scan on b (cost=0.00..1637.00 rows=100000 width=20) (actual \ntime=0.027..17.980 rows=100000 loops=1)\n -> Hash (cost=10.00..10.00 rows=600 width=12) (actual \ntime=2.046..2.047 rows=600 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 34kB\n -> Seq Scan on a (cost=0.00..10.00 rows=600 width=12) \n(actual time=0.022..0.315 rows=600 loops=1)\n Planning Time: 0.631 ms\n Execution Time: 95.730 ms\n(8 rows)\n\nSET\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------\n Hash Join (cost=3387.00..8621.58 rows=8 width=32) (actual \ntime=102.873..102.877 rows=0 loops=1)\n Hash Cond: (((a.x)::numeric = b.x) AND ((a.y)::numeric = b.y) AND \n((a.z)::numeric = b.z))\n -> Seq Scan on a (cost=0.00..10.00 rows=600 width=12) (actual \ntime=0.014..0.131 rows=600 loops=1)\n -> Hash (cost=1637.00..1637.00 rows=100000 width=20) (actual \ntime=101.920..101.921 rows=100000 loops=1)\n Buckets: 131072 Batches: 1 Memory Usage: 6474kB\n -> Seq Scan on b (cost=0.00..1637.00 rows=100000 width=20) \n(actual time=0.013..16.349 rows=100000 loops=1)\n Planning Time: 0.153 ms\n Execution Time: 103.518 ms\n(8 rows)\n\nI also give an improvement relative to the left external or right \nconnection:\n\nDROP TABLE IF EXISTS a,b CASCADE;\n\nCREATE TABLE a AS\n SELECT ((3*gs) % 300) AS x, ((3*gs+1) % 300) AS y, ((3*gs+2) % 300) AS z\n FROM generate_series(1,600) AS gs;\nCREATE TABLE b AS\n SELECT gs % 90 AS x, gs % 49 AS y, gs %100 AS z, 'abc' || gs AS payload\n FROM generate_series(1,1e5) AS gs;\nANALYZE a,b;\n\n\nSET enable_cost_size = 'on';\n\nEXPLAIN ANALYZE\nSELECT * FROM a right join b\non a.x=b.x AND a.y=b.y AND a.z=b.z;\n\nSET enable_cost_size = 'off';\nEXPLAIN ANALYZE\nSELECT * FROM a right join b\non a.x=b.x AND a.y=b.y AND a.z=b.z;\n\nQUERY PLAN\n----------------------------------------------------------------------------------------------------------------\n Hash Left Join (cost=20.50..3157.58 rows=100000 width=32) (actual \ntime=1.846..102.264 rows=100000 loops=1)\n Hash Cond: ((b.x = (a.x)::numeric) AND (b.y = (a.y)::numeric) AND 
\n(b.z = (a.z)::numeric))\n -> Seq Scan on b (cost=0.00..1637.00 rows=100000 width=20) (actual \ntime=0.041..15.328 rows=100000 loops=1)\n -> Hash (cost=10.00..10.00 rows=600 width=12) (actual \ntime=1.780..1.781 rows=600 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 34kB\n -> Seq Scan on a (cost=0.00..10.00 rows=600 width=12) \n(actual time=0.031..0.252 rows=600 loops=1)\n Planning Time: 0.492 ms\n Execution Time: 107.609 ms\n(8 rows)\n\nSET\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------\n Hash Right Join (cost=3387.00..8500.08 rows=100000 width=32) (actual \ntime=80.919..101.613 rows=100000 loops=1)\n Hash Cond: (((a.x)::numeric = b.x) AND ((a.y)::numeric = b.y) AND \n((a.z)::numeric = b.z))\n -> Seq Scan on a (cost=0.00..10.00 rows=600 width=12) (actual \ntime=0.017..0.084 rows=600 loops=1)\n -> Hash (cost=1637.00..1637.00 rows=100000 width=20) (actual \ntime=80.122..80.123 rows=100000 loops=1)\n Buckets: 131072 Batches: 1 Memory Usage: 6474kB\n -> Seq Scan on b (cost=0.00..1637.00 rows=100000 width=20) \n(actual time=0.015..11.819 rows=100000 loops=1)\n Planning Time: 0.194 ms\n Execution Time: 104.662 ms\n(8 rows)\n\n-- \nRegards,\nAlena Rybakina\nPostgres Professional",
"msg_date": "Wed, 28 Jun 2023 16:53:06 +0300",
"msg_from": "Alena Rybakina <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: MergeJoin beats HashJoin in the case of multiple hash clauses"
},
{
"msg_contents": "\nDoes anyone else have an opinion on this patch? It looks promising.\n\n---------------------------------------------------------------------------\n\nOn Wed, Jun 28, 2023 at 04:53:06PM +0300, Alena Rybakina wrote:\n> Hi!\n> \n> On 15.06.2023 11:30, Andrey Lepikhov wrote:\n> \n> Hi, all.\n> \n> Some of my clients use JOIN's with three - four clauses. Quite frequently,\n> I see complaints on unreasonable switch of JOIN algorithm to Merge Join\n> instead of Hash Join. Quick research have shown one weak place - estimation\n> of an average bucket size in final_cost_hashjoin (see q2.sql in attachment)\n> with very conservative strategy.\n> Unlike estimation of groups, here we use smallest ndistinct value across\n> all buckets instead of multiplying them (or trying to make multivariate\n> analysis).\n> It works fine for the case of one clause. But if we have many clauses, and\n> if each has high value of ndistinct, we will overestimate average size of a\n> bucket and, as a result, prefer to use Merge Join. As the example in\n> attachment shows, it leads to worse plan than possible, sometimes\n> drastically worse.\n> I assume, this is done with fear of functional dependencies between hash\n> clause components. But as for me, here we should go the same way, as\n> estimation of groups.\n> The attached patch shows a sketch of the solution.\n> \n> \n> This problem is very important.\n> \n> Honestly, I'm still learning your code and looking for cases on which cases\n> your patch can affect for the worse or for the better. But I have already found\n> something that seemed interesting to me. I have found several other interesting\n> cases where your patch can solve some problem in order to choose a more correct\n> plan, but in focus on memory consumption.\n> \n> To make it easier to evaluate, I added a hook to your patch that makes it\n> easier to switch to your or the original way of estimating the size of baskets\n> (diff_estimate.diff).\n> \n> Here are other cases where your fix improves the query plan.\n> \n> \n> First of all, I changed the way creation of tables are created to look at the\n> behavior of the query plan in terms of planning and execution time:\n> \n> DROP TABLE IF EXISTS a,b CASCADE;\n> CREATE TABLE a AS\n> SELECT ((3*gs) % 300) AS x, ((3*gs+1) % 300) AS y, ((3*gs+2) % 300) AS z\n> FROM generate_series(1,1e5) AS gs;\n> CREATE TABLE b AS\n> SELECT gs % 90 AS x, gs % 49 AS y, gs %100 AS z, 'abc' || gs AS payload\n> FROM generate_series(1,1e5) AS gs;\n> ANALYZE a,b;\n> \n> SET enable_cost_size = 'on';\n> EXPLAIN ANALYZE\n> SELECT * FROM a,b\n> WHERE a.x=b.x AND a.y=b.y AND a.z=b.z;\n> \n> SET enable_cost_size = 'off';\n> EXPLAIN ANALYZE\n> SELECT * FROM a,b\n> WHERE a.x=b.x AND a.y=b.y AND a.z=b.z;\n> \n> \n> QUERY PLAN \n> ---------------------------------------------------------------------------\n> Hash Join (actual time=200.872..200.879 rows=0 loops=1)\n> Hash Cond: ((b.x = a.x) AND (b.y = a.y) AND (b.z = a.z))\n> -> Seq Scan on b (actual time=0.029..15.946 rows=100000 loops=1)\n> -> Hash (actual time=97.645..97.649 rows=100000 loops=1)\n> Buckets: 131072 Batches: 1 Memory Usage: 5612kB\n> -> Seq Scan on a (actual time=0.024..17.153 rows=100000 loops=1)\n> Planning Time: 2.910 ms\n> Execution Time: 201.949 ms\n> (8 rows)\n> \n> SET\n> QUERY PLAN \n> ---------------------------------------------------------------------------\n> Merge Join (actual time=687.415..687.416 rows=0 loops=1)\n> Merge Cond: ((b.y = a.y) AND (b.x = a.x) AND (b.z = a.z))\n> -> Sort (actual 
time=462.022..536.716 rows=100000 loops=1)\n> Sort Key: b.y, b.x, b.z\n> Sort Method: external merge Disk: 3328kB\n> -> Seq Scan on b (actual time=0.017..12.326 rows=100000 loops=1)\n> -> Sort (actual time=111.295..113.196 rows=16001 loops=1)\n> Sort Key: a.y, a.x, a.z\n> Sort Method: external sort Disk: 2840kB\n> -> Seq Scan on a (actual time=0.020..10.129 rows=100000 loops=1)\n> Planning Time: 0.752 ms\n> Execution Time: 688.829 ms\n> (12 rows)\n> \n> Secondly, I found another case that is not related to the fact that the planner\n> would prefer to choose merge join rather than hash join, but we have the\n> opportunity to see that the plan has become better due to the consumption of\n> less memory, and also takes less planning time.\n> \n> Here, with the same query, the planning time was reduced by 5 times, and the\n> number of buckets by 128 times, therefore, memory consumption also decreased:\n> \n> DROP TABLE IF EXISTS a,b CASCADE;\n> \n> CREATE TABLE a AS\n> SELECT ((3*gs) % 300) AS x, ((3*gs+1) % 300) AS y, ((3*gs+2) % 300) AS z\n> FROM generate_series(1,600) AS gs;\n> CREATE TABLE b AS\n> SELECT gs % 90 AS x, gs % 49 AS y, gs %100 AS z, 'abc' || gs AS payload\n> FROM generate_series(1,1e5) AS gs;\n> ANALYZE a,b;\n> \n> SET enable_cost_size = 'on';\n> EXPLAIN ANALYZE\n> SELECT * FROM a,b\n> WHERE a.x=b.x AND a.y=b.y AND a.z=b.z;\n> \n> SET enable_cost_size = 'off';\n> EXPLAIN ANALYZE\n> SELECT * FROM a,b\n> WHERE a.x=b.x AND a.y=b.y AND a.z=b.z;\n> \n> QUERY\n> PLAN \n> ----------------------------------------------------------------------------------------------------------------\n> Hash Join (cost=20.50..3157.58 rows=8 width=32) (actual time=95.648..95.651\n> rows=0 loops=1)\n> Hash Cond: ((b.x = (a.x)::numeric) AND (b.y = (a.y)::numeric) AND (b.z =\n> (a.z)::numeric))\n> -> Seq Scan on b (cost=0.00..1637.00 rows=100000 width=20) (actual time=\n> 0.027..17.980 rows=100000 loops=1)\n> -> Hash (cost=10.00..10.00 rows=600 width=12) (actual time=2.046..2.047\n> rows=600 loops=1)\n> Buckets: 1024 Batches: 1 Memory Usage: 34kB\n> -> Seq Scan on a (cost=0.00..10.00 rows=600 width=12) (actual time=\n> 0.022..0.315 rows=600 loops=1)\n> Planning Time: 0.631 ms\n> Execution Time: 95.730 ms\n> (8 rows)\n> \n> SET\n> QUERY\n> PLAN \n> ----------------------------------------------------------------------------------------------------------------------\n> Hash Join (cost=3387.00..8621.58 rows=8 width=32) (actual time=\n> 102.873..102.877 rows=0 loops=1)\n> Hash Cond: (((a.x)::numeric = b.x) AND ((a.y)::numeric = b.y) AND\n> ((a.z)::numeric = b.z))\n> -> Seq Scan on a (cost=0.00..10.00 rows=600 width=12) (actual time=\n> 0.014..0.131 rows=600 loops=1)\n> -> Hash (cost=1637.00..1637.00 rows=100000 width=20) (actual time=\n> 101.920..101.921 rows=100000 loops=1)\n> Buckets: 131072 Batches: 1 Memory Usage: 6474kB\n> -> Seq Scan on b (cost=0.00..1637.00 rows=100000 width=20) (actual\n> time=0.013..16.349 rows=100000 loops=1)\n> Planning Time: 0.153 ms\n> Execution Time: 103.518 ms\n> (8 rows)\n> \n> I also give an improvement relative to the left external or right connection:\n> \n> DROP TABLE IF EXISTS a,b CASCADE;\n> \n> CREATE TABLE a AS\n> SELECT ((3*gs) % 300) AS x, ((3*gs+1) % 300) AS y, ((3*gs+2) % 300) AS z\n> FROM generate_series(1,600) AS gs;\n> CREATE TABLE b AS\n> SELECT gs % 90 AS x, gs % 49 AS y, gs %100 AS z, 'abc' || gs AS payload\n> FROM generate_series(1,1e5) AS gs;\n> ANALYZE a,b;\n> \n> \n> SET enable_cost_size = 'on';\n> \n> EXPLAIN ANALYZE\n> SELECT * FROM a right join b\n> on 
a.x=b.x AND a.y=b.y AND a.z=b.z;\n> \n> SET enable_cost_size = 'off';\n> EXPLAIN ANALYZE\n> SELECT * FROM a right join b\n> on a.x=b.x AND a.y=b.y AND a.z=b.z;\n> \n> QUERY\n> PLAN \n> ----------------------------------------------------------------------------------------------------------------\n> Hash Left Join (cost=20.50..3157.58 rows=100000 width=32) (actual time=\n> 1.846..102.264 rows=100000 loops=1)\n> Hash Cond: ((b.x = (a.x)::numeric) AND (b.y = (a.y)::numeric) AND (b.z =\n> (a.z)::numeric))\n> -> Seq Scan on b (cost=0.00..1637.00 rows=100000 width=20) (actual time=\n> 0.041..15.328 rows=100000 loops=1)\n> -> Hash (cost=10.00..10.00 rows=600 width=12) (actual time=1.780..1.781\n> rows=600 loops=1)\n> Buckets: 1024 Batches: 1 Memory Usage: 34kB\n> -> Seq Scan on a (cost=0.00..10.00 rows=600 width=12) (actual time=\n> 0.031..0.252 rows=600 loops=1)\n> Planning Time: 0.492 ms\n> Execution Time: 107.609 ms\n> (8 rows)\n> \n> SET\n> QUERY\n> PLAN \n> ----------------------------------------------------------------------------------------------------------------------\n> Hash Right Join (cost=3387.00..8500.08 rows=100000 width=32) (actual time=\n> 80.919..101.613 rows=100000 loops=1)\n> Hash Cond: (((a.x)::numeric = b.x) AND ((a.y)::numeric = b.y) AND\n> ((a.z)::numeric = b.z))\n> -> Seq Scan on a (cost=0.00..10.00 rows=600 width=12) (actual time=\n> 0.017..0.084 rows=600 loops=1)\n> -> Hash (cost=1637.00..1637.00 rows=100000 width=20) (actual time=\n> 80.122..80.123 rows=100000 loops=1)\n> Buckets: 131072 Batches: 1 Memory Usage: 6474kB\n> -> Seq Scan on b (cost=0.00..1637.00 rows=100000 width=20) (actual\n> time=0.015..11.819 rows=100000 loops=1)\n> Planning Time: 0.194 ms\n> Execution Time: 104.662 ms\n> (8 rows)\n> \n> --\n> Regards,\n> Alena Rybakina\n> Postgres Professional\n> \n\n> diff --git a/src/backend/optimizer/path/costsize.c b/src/backend/optimizer/path/costsize.c\n> index ef475d95a18..31771dfba46 100644\n> --- a/src/backend/optimizer/path/costsize.c\n> +++ b/src/backend/optimizer/path/costsize.c\n> @@ -153,6 +153,7 @@ bool\t\tenable_parallel_hash = true;\n> bool\t\tenable_partition_pruning = true;\n> bool\t\tenable_presorted_aggregate = true;\n> bool\t\tenable_async_append = true;\n> +bool \t\tenable_cost_size = true;\n> \n> typedef struct\n> {\n> @@ -4033,11 +4034,22 @@ final_cost_hashjoin(PlannerInfo *root, HashPath *path,\n> \t\t\t\tthismcvfreq = restrictinfo->left_mcvfreq;\n> \t\t\t}\n> \n> +\t\t\tif (enable_cost_size)\n> +\t\t\t{\n> +\t\t\t\tinnerbucketsize *= thisbucketsize;\n> +\t\t\t\tinnermcvfreq *= thismcvfreq;\n> +\t\t\t}\n> +\t\t\telse\n> +\t\t\t{\n> \t\t\tif (innerbucketsize > thisbucketsize)\n> \t\t\t\tinnerbucketsize = thisbucketsize;\n> \t\t\tif (innermcvfreq > thismcvfreq)\n> \t\t\t\tinnermcvfreq = thismcvfreq;\n> +\t\t\t}\n> \t\t}\n> +\n> +\t\tif (enable_cost_size && innerbucketsize > virtualbuckets)\n> +\t\t\tinnerbucketsize = 1.0 / virtualbuckets;\n> \t}\n> \n> \t/*\n> diff --git a/src/backend/utils/misc/guc_tables.c b/src/backend/utils/misc/guc_tables.c\n> index 71e27f8eb05..ded9ba3b7a9 100644\n> --- a/src/backend/utils/misc/guc_tables.c\n> +++ b/src/backend/utils/misc/guc_tables.c\n> @@ -1007,6 +1007,19 @@ struct config_bool ConfigureNamesBool[] =\n> \t\ttrue,\n> \t\tNULL, NULL, NULL\n> \t},\n> +\t{\n> +\t\t{\"enable_cost_size\", PGC_USERSET, QUERY_TUNING_OTHER,\n> +\t\t\tgettext_noop(\"set the optimizer coefficient\"\n> +\t\t\t\t\t\t \"so that custom or generic plan is selected more often. 
\"\n> +\t\t\t\t\t\t \"by default, the value is set to 1, which means that \"\n> +\t\t\t\t\t\t \"the choice of using both depends on the calculated cost\"),\n> +\t\t\tNULL,\n> +\t\t\tGUC_EXPLAIN\n> +\t\t},\n> +\t\t&enable_cost_size,\n> +\t\ttrue,\n> +\t\tNULL, NULL, NULL\n> +\t},\n> \t{\n> \t\t{\"enable_async_append\", PGC_USERSET, QUERY_TUNING_METHOD,\n> \t\t\tgettext_noop(\"Enables the planner's use of async append plans.\"),\n> diff --git a/src/include/optimizer/cost.h b/src/include/optimizer/cost.h\n> index 6cf49705d3a..c79ec12e6d5 100644\n> --- a/src/include/optimizer/cost.h\n> +++ b/src/include/optimizer/cost.h\n> @@ -71,6 +71,7 @@ extern PGDLLIMPORT bool enable_partition_pruning;\n> extern PGDLLIMPORT bool enable_presorted_aggregate;\n> extern PGDLLIMPORT bool enable_async_append;\n> extern PGDLLIMPORT int constraint_exclusion;\n> +extern PGDLLIMPORT bool enable_cost_size;\n> \n> extern double index_pages_fetched(double tuples_fetched, BlockNumber pages,\n> \t\t\t\t\t\t\t\t double index_pages, PlannerInfo *root);\n\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n",
"msg_date": "Thu, 7 Sep 2023 14:08:56 -0400",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: MergeJoin beats HashJoin in the case of multiple hash clauses"
},
{
"msg_contents": "Hi,\n\nOn Thu, Jun 15, 2023 at 4:30 PM Andrey Lepikhov <[email protected]>\nwrote:\n\n> Hi, all.\n>\n> Some of my clients use JOIN's with three - four clauses. Quite\n> frequently, I see complaints on unreasonable switch of JOIN algorithm to\n> Merge Join instead of Hash Join. Quick research have shown one weak\n> place - estimation of an average bucket size in final_cost_hashjoin (see\n> q2.sql in attachment) with very conservative strategy.\n> Unlike estimation of groups, here we use smallest ndistinct value across\n> all buckets instead of multiplying them (or trying to make multivariate\n> analysis).\n> It works fine for the case of one clause. But if we have many clauses,\n> and if each has high value of ndistinct, we will overestimate average\n> size of a bucket and, as a result, prefer to use Merge Join. As the\n> example in attachment shows, it leads to worse plan than possible,\n> sometimes drastically worse.\n> I assume, this is done with fear of functional dependencies between hash\n> clause components. But as for me, here we should go the same way, as\n> estimation of groups.\n>\n\nI can reproduce the visitation you want to improve and verify the patch\ncan do it expectedly. I think this is a right thing to do.\n\n\n> The attached patch shows a sketch of the solution.\n>\n\nI understand that this is a sketch of the solution, but the below changes\nstill\nmake me confused.\n\n+ if (innerbucketsize > virtualbuckets)\n+ innerbucketsize = 1.0 / virtualbuckets;\n\ninnerbucketsize is a fraction of rows in all the rows, so it is between 0.0\nand 1.0.\nand virtualbuckets is the number of buckets in total (when considered the\nmutli\nbatchs), how is it possible for 'innerbucketsize > virtualbuckets' ? Am\nI missing something?\n\n-- \nBest Regards\nAndy Fan\n\nHi, On Thu, Jun 15, 2023 at 4:30 PM Andrey Lepikhov <[email protected]> wrote:Hi, all.\n\nSome of my clients use JOIN's with three - four clauses. Quite \nfrequently, I see complaints on unreasonable switch of JOIN algorithm to \nMerge Join instead of Hash Join. Quick research have shown one weak \nplace - estimation of an average bucket size in final_cost_hashjoin (see \nq2.sql in attachment) with very conservative strategy.\nUnlike estimation of groups, here we use smallest ndistinct value across \nall buckets instead of multiplying them (or trying to make multivariate \nanalysis).\nIt works fine for the case of one clause. But if we have many clauses, \nand if each has high value of ndistinct, we will overestimate average \nsize of a bucket and, as a result, prefer to use Merge Join. As the \nexample in attachment shows, it leads to worse plan than possible, \nsometimes drastically worse.\nI assume, this is done with fear of functional dependencies between hash \nclause components. But as for me, here we should go the same way, as \nestimation of groups.I can reproduce the visitation you want to improve and verify the patchcan do it expectedly. I think this is a right thing to do. \nThe attached patch shows a sketch of the solution.I understand that this is a sketch of the solution, but the below changes stillmake me confused. + if (innerbucketsize > virtualbuckets)+ innerbucketsize = 1.0 / virtualbuckets;innerbucketsize is a fraction of rows in all the rows, so it is between 0.0 and 1.0.and virtualbuckets is the number of buckets in total (when considered the mutlibatchs), how is it possible for 'innerbucketsize > virtualbuckets' ? AmI missing something? -- Best RegardsAndy Fan",
"msg_date": "Mon, 11 Sep 2023 12:51:03 +0800",
"msg_from": "Andy Fan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: MergeJoin beats HashJoin in the case of multiple hash clauses"
},
{
"msg_contents": "On Mon, Sep 11, 2023, at 11:51 AM, Andy Fan wrote:\n> Hi, \n>\n> On Thu, Jun 15, 2023 at 4:30 PM Andrey Lepikhov \n> <[email protected]> wrote:\n>> Hi, all.\n>> \n>> Some of my clients use JOIN's with three - four clauses. Quite \n>> frequently, I see complaints on unreasonable switch of JOIN algorithm to \n>> Merge Join instead of Hash Join. Quick research have shown one weak \n>> place - estimation of an average bucket size in final_cost_hashjoin (see \n>> q2.sql in attachment) with very conservative strategy.\n>> Unlike estimation of groups, here we use smallest ndistinct value across \n>> all buckets instead of multiplying them (or trying to make multivariate \n>> analysis).\n>> It works fine for the case of one clause. But if we have many clauses, \n>> and if each has high value of ndistinct, we will overestimate average \n>> size of a bucket and, as a result, prefer to use Merge Join. As the \n>> example in attachment shows, it leads to worse plan than possible, \n>> sometimes drastically worse.\n>> I assume, this is done with fear of functional dependencies between hash \n>> clause components. But as for me, here we should go the same way, as \n>> estimation of groups.\n>\n> I can reproduce the visitation you want to improve and verify the patch\n> can do it expectedly. I think this is a right thing to do. \n> \n>> The attached patch shows a sketch of the solution.\n>\n> I understand that this is a sketch of the solution, but the below \n> changes still\n> make me confused. \n>\n> + if (innerbucketsize > virtualbuckets)\n> + innerbucketsize = 1.0 / virtualbuckets;\n>\n> innerbucketsize is a fraction of rows in all the rows, so it is between \n> 0.0 and 1.0.\n> and virtualbuckets is the number of buckets in total (when considered \n> the mutli\n> batchs), how is it possible for 'innerbucketsize > virtualbuckets' ? \n> Am\n> I missing something? \n\nYou are right here. I've made a mistake here. Changed diff is in attachment.\n\n-- \nRegards,\nAndrei Lepikhov",
"msg_date": "Mon, 11 Sep 2023 15:04:22 +0700",
"msg_from": "\"Lepikhov Andrei\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: MergeJoin beats HashJoin in the case of multiple hash clauses"
},
{
"msg_contents": "On 9/11/23 10:04, Lepikhov Andrei wrote:\n> \n> \n> On Mon, Sep 11, 2023, at 11:51 AM, Andy Fan wrote:\n>> Hi, \n>>\n>> On Thu, Jun 15, 2023 at 4:30 PM Andrey Lepikhov \n>> <[email protected]> wrote:\n>>> Hi, all.\n>>>\n>>> Some of my clients use JOIN's with three - four clauses. Quite \n>>> frequently, I see complaints on unreasonable switch of JOIN algorithm to \n>>> Merge Join instead of Hash Join. Quick research have shown one weak \n>>> place - estimation of an average bucket size in final_cost_hashjoin (see \n>>> q2.sql in attachment) with very conservative strategy.\n>>> Unlike estimation of groups, here we use smallest ndistinct value across \n>>> all buckets instead of multiplying them (or trying to make multivariate \n>>> analysis).\n>>> It works fine for the case of one clause. But if we have many clauses, \n>>> and if each has high value of ndistinct, we will overestimate average \n>>> size of a bucket and, as a result, prefer to use Merge Join. As the \n>>> example in attachment shows, it leads to worse plan than possible, \n>>> sometimes drastically worse.\n>>> I assume, this is done with fear of functional dependencies between hash \n>>> clause components. But as for me, here we should go the same way, as \n>>> estimation of groups.\n>>\n\nYes, this analysis is correct - final_cost_hashjoin assumes the clauses\nmay be correlated (not necessarily by functional dependencies, just that\nthe overall ndistinct is not a simple product of per-column ndistincts).\n\nAnd it even says so in the comment before calculating bucket size:\n\n * Determine bucketsize fraction and MCV frequency for the inner\n * relation. We use the smallest bucketsize or MCV frequency estimated\n * for any individual hashclause; this is undoubtedly conservative.\n\nI'm sure this may lead to inflated cost for \"good\" cases (where the\nactual bucket size really is a product), which may push the optimizer to\nuse the less efficient/slower join method.\n\nUnfortunately, AFAICS the patch simply assumes the extreme in the\nopposite direction - it assumes each clause splits the bucket for each\ndistinct value in the column. Which works great when it's true, but\nsurely it'd have issues when the columns are correlated?\n\nI think this deserves more discussion, i.e. what happens if the\nassumptions do not hold? We know what happens for the conservative\napproach, but what's the worst thing that would happen for the\noptimistic one?\n\nI doubt e can simply switch from the conservative approach to the\noptimistic one. Yes, it'll make some queries faster, but for other\nqueries it likely causes problems and slowdowns.\n\n\nIMHO the only principled way forward is to get a better ndistinct\nestimate (which this implicitly does), perhaps by using extended\nstatistics. I haven't tried, but I guess it'd need to extract the\nclauses for the inner side, and call estimate_num_groups() on it.\n\n\nThis however reminds me we don't use extended statistics for join\nclauses at all. Which means that even with accurate extended statistics,\nwe can still get stuff like this for multiple join clauses:\n\n Hash Join (cost=1317.00..2386.00 rows=200 width=24)\n (actual time=85.781..8574.784 rows=8000000 loops=1)\n\nThis is unrelated to the issue discussed here, of course, as it won't\naffect join method selection for that join. 
But it certainly will affect\nall estimates/costs above that join, which can be pretty disastrous.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 3 Nov 2023 17:43:26 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: MergeJoin beats HashJoin in the case of multiple hash clauses"
},
{
"msg_contents": "On 3/11/2023 23:43, Tomas Vondra wrote:\n> On 9/11/23 10:04, Lepikhov Andrei wrote:\n> * Determine bucketsize fraction and MCV frequency for the inner\n> * relation. We use the smallest bucketsize or MCV frequency estimated\n> * for any individual hashclause; this is undoubtedly conservative.\n> \n> I'm sure this may lead to inflated cost for \"good\" cases (where the\n> actual bucket size really is a product), which may push the optimizer to\n> use the less efficient/slower join method.\nYes, It was contradictory idea, though.\n> IMHO the only principled way forward is to get a better ndistinct\n> estimate (which this implicitly does), perhaps by using extended\n> statistics. I haven't tried, but I guess it'd need to extract the\n> clauses for the inner side, and call estimate_num_groups() on it.\nAnd I've done it. Sorry for so long response. This patch employs of \nextended statistics for estimation of the HashJoin bucket_size. In \naddition, I describe the idea in more convenient form here [1].\nObviously, it needs the only ndistinct to make a prediction that allows \nto reduce computational cost of this statistic.\n\n[1] \nhttps://open.substack.com/pub/danolivo/p/why-postgresql-prefers-mergejoin?r=34q1yy&utm_campaign=post&utm_medium=web\n\n-- \nregards, Andrei Lepikhov",
"msg_date": "Mon, 8 Jul 2024 19:45:15 +0700",
"msg_from": "Andrei Lepikhov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: MergeJoin beats HashJoin in the case of multiple hash clauses"
}
] |
[
{
"msg_contents": "Hi,\n\nI noticed that 2f2b18bd3f55 forgot to remove the mention of\nparse_jsontable.c in src/backend/parser/README.\n\nAttached a patch to fix that. Will push that shortly to HEAD and v15.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com",
"msg_date": "Thu, 15 Jun 2023 18:54:46 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": true,
"msg_subject": "obsolete filename reference in parser README"
},
{
"msg_contents": "On Thu, Jun 15, 2023 at 6:54 PM Amit Langote <[email protected]> wrote:\n> I noticed that 2f2b18bd3f55 forgot to remove the mention of\n> parse_jsontable.c in src/backend/parser/README.\n>\n> Attached a patch to fix that. Will push that shortly to HEAD and v15.\n\nPushed to HEAD only. 9853bf6ab0e that added parse_jsontable.c to the\nREADME was not back-patched.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 15 Jun 2023 22:46:43 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: obsolete filename reference in parser README"
},
{
"msg_contents": "Nice catch. Looks good.\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Thu, 15 Jun 2023 08:48:21 -0500",
"msg_from": "\"Tristan Partin\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: obsolete filename reference in parser README"
},
{
"msg_contents": "On Thu, Jun 15, 2023 at 10:48 PM Tristan Partin <[email protected]> wrote:\n> Nice catch. Looks good.\n\nThanks for checking. As just mentioned, I've pushed this moments ago.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 15 Jun 2023 22:49:52 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: obsolete filename reference in parser README"
}
] |
[
{
"msg_contents": "Good day, hackers.\n\nI found, than declaration of function as IMMUTABLE/STABLE is not enough to be sure\nfunction doesn't manipulate data.\n\nIn fact, SPI checks only direct function kind, but fails to check indirect call.\n\nAttached immutable_not.sql creates 3 functions:\n\n- `immutable_direct` is IMMUTABLE and tries to insert into table directly.\n PostgreSQL correctly detects and forbids this action.\n\n- `volatile_direct` is VOLATILE and inserts into table directly.\n It is allowed and executed well.\n\n- `immutable_indirect` is IMMUTABLE and calls `volatile_direct`.\n PostgreSQL failed to detect and prevent this DML manipulation.\n\nOutput:\n\nselect immutable_direct('immutable_direct'); psql:immutable_not.sql:28: \nERROR: INSERT is not allowed in a non-volatile function CONTEXT: SQL \nstatement \"insert into xxx values(j)\" PL/pgSQL function \nimmutable_direct(character varying) line 3 at SQL statement select \nvolatile_direct('volatile_direct'); volatile_direct ----------------- \nvolatile_direct (1 row) select immutable_indirect('immutable_indirect'); \nimmutable_indirect -------------------- immutable_indirect (1 row) \nselect * from xxx; i -------------------- volatile_direct \nimmutable_indirect (2 rows) Attached forbid-non-volatile-mutations.diff \nadd checks readonly function didn't made data manipulations. Output for \npatched version: select immutable_indirect('immutable_indirect'); \npsql:immutable_not.sql:32: ERROR: Damn2! Update were done in a \nnon-volatile function CONTEXT: SQL statement \"SELECT \nvolatile_direct(j)\" PL/pgSQL function immutable_indirect(character \nvarying) line 3 at PERFORM I doubt check should be done this way. This \ncheck is necessary, but it should be FATAL instead of ERROR. And ERROR \nshould be generated at same place, when it is generated for \n`immutable_direct`, but with check of \"read_only\" status through whole \ncall stack instead of just direct function kind. ----- regards, Yura \nSokolov Postgres Professional",
"msg_date": "Thu, 15 Jun 2023 13:22:28 +0300",
"msg_from": "Yura Sokolov <[email protected]>",
"msg_from_op": true,
"msg_subject": "When IMMUTABLE is not."
},
{
"msg_contents": "Sorry, previous message were smashed for some reason.\n\nI'll try to repeat\n\nI found, than declaration of function as IMMUTABLE/STABLE is not enough \nto be sure\nfunction doesn't manipulate data.\n\nIn fact, SPI checks only direct function kind, but fails to check \nindirect call.\n\nAttached immutable_not.sql creates 3 functions:\n\n- `immutable_direct` is IMMUTABLE and tries to insert into table directly.\n PostgreSQL correctly detects and forbids this action.\n\n- `volatile_direct` is VOLATILE and inserts into table directly.\n It is allowed and executed well.\n\n- `immutable_indirect` is IMMUTABLE and calls `volatile_direct`.\n PostgreSQL failed to detect and prevent this DML manipulation.\n\nOutput:\n\nselect immutable_direct('immutable_direct');\npsql:immutable_not.sql:28: ERROR: INSERT is not allowed in a \nnon-volatile function\nCONTEXT: SQL statement \"insert into xxx values(j)\"\nPL/pgSQL function immutable_direct(character varying) line 3 at SQL \nstatement\n\nselect volatile_direct('volatile_direct');\nvolatile_direct\n-----------------\nvolatile_direct\n(1 row)\n\nselect immutable_indirect('immutable_indirect');\nimmutable_indirect\n--------------------\nimmutable_indirect\n(1 row)\n\nselect * from xxx;\n i\n--------------------\nvolatile_direct\nimmutable_indirect\n(2 rows)\n\nAttached forbid-non-volatile-mutations.diff add checks readonly function \ndidn't made data manipulations.\nOutput for patched version:\n\nselect immutable_indirect('immutable_indirect');\npsql:immutable_not.sql:32: ERROR: Damn2! Update were done in a \nnon-volatile function\nCONTEXT: SQL statement \"SELECT volatile_direct(j)\"\nPL/pgSQL function immutable_indirect(character varying) line 3 at PERFORM\n\nI doubt check should be done this way. This check is necessary, but it \nshould be\nFATAL instead of ERROR. And ERROR should be generated at same place, when\nit is generated for `immutable_direct`, but with check of \"read_only\" \nstatus through\nwhole call stack instead of just direct function kind.\n\n-----\n\nregards,\nYura Sokolov\nPostgres Professional\n\n\n\n\n",
"msg_date": "Thu, 15 Jun 2023 13:46:02 +0300",
"msg_from": "Yura Sokolov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: When IMMUTABLE is not."
},
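The attached immutable_not.sql is not part of the archive. Reconstructed from the quoted output and error messages, the three functions were presumably along these lines (the table definition, column types and RETURN clauses are guesses, not the original script):

CREATE TABLE xxx (i varchar);

-- direct DML in a VOLATILE function: allowed
CREATE FUNCTION volatile_direct(j varchar) RETURNS varchar
LANGUAGE plpgsql VOLATILE AS $$
BEGIN
    INSERT INTO xxx VALUES (j);
    RETURN j;
END $$;

-- direct DML in an IMMUTABLE function: rejected by the SPI read-only check
CREATE FUNCTION immutable_direct(j varchar) RETURNS varchar
LANGUAGE plpgsql IMMUTABLE AS $$
BEGIN
    INSERT INTO xxx VALUES (j);
    RETURN j;
END $$;

-- DML hidden behind a VOLATILE helper, called from an IMMUTABLE function:
-- only the directly called function's kind is checked, so this slips through
CREATE FUNCTION immutable_indirect(j varchar) RETURNS varchar
LANGUAGE plpgsql IMMUTABLE AS $$
BEGIN
    PERFORM volatile_direct(j);
    RETURN j;
END $$;

With such a script, only immutable_direct() fails at run time; immutable_indirect() inserts a row even though it is declared IMMUTABLE, which is the behaviour reported above.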
{
"msg_contents": "On Thu, 2023-06-15 at 13:22 +0300, Yura Sokolov wrote:\n> Good day, hackers.\n> \n> I found, than declaration of function as IMMUTABLE/STABLE is not enough to be sure\n> function doesn't manipulate data.\n> \n> [...]\n>\n> +\t\t\t\t\t\terrmsg(\"Damn1! Update were done in a non-volatile function\")));\n\nI think it is project policy to start error messages with a lower case character.\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Thu, 15 Jun 2023 13:54:34 +0200",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: When IMMUTABLE is not."
},
{
"msg_contents": "Yura Sokolov <[email protected]> writes:\n> I found, than declaration of function as IMMUTABLE/STABLE is not enough to be sure\n> function doesn't manipulate data.\n\nOf course not. It is the user's responsibility to mark functions\nproperly. Trying to enforce that completely is a fool's errand;\nyou soon get into trying to solve the halting problem.\n\nI don't like anything about the proposed patch. It's necessarily\nonly a partial solution, and it probably breaks cases that are\nperfectly safe in context.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 15 Jun 2023 09:21:38 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: When IMMUTABLE is not."
},
{
"msg_contents": "15.06.2023 16:21, Tom Lane wrote:\n> Yura Sokolov <[email protected]> writes:\n>> I found, than declaration of function as IMMUTABLE/STABLE is not enough to be sure\n>> function doesn't manipulate data.\n> Of course not. It is the user's responsibility to mark functions\n> properly. Trying to enforce that completely is a fool's errand\n\nhttps://github.com/postgres/postgres/commit/b2c4071299e02ed96d48d3c8e776de2fab36f88c.patch\n\nhttps://github.com/postgres/postgres/commit/cdf8b56d5463815244467ea8f5ec6e72b6c65a6c.patch\n\n\n\n",
"msg_date": "Thu, 15 Jun 2023 16:52:42 +0300",
"msg_from": "Yura Sokolov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: When IMMUTABLE is not."
},
{
"msg_contents": "On 2023-06-15 09:21, Tom Lane wrote:\n> Yura Sokolov <[email protected]> writes:\n>> not enough to be sure function doesn't manipulate data.\n> \n> Of course not. It is the user's responsibility to mark functions\n> properly.\n\nAnd also, isn't it the case that IMMUTABLE should mark a function,\nnot merely that \"doesn't manipulate data\", but whose return value\ndoesn't depend in any way on data (outside its own arguments)?\n\nThe practice among PLs of choosing an SPI readonly flag based on\nthe IMMUTABLE/STABLE/VOLATILE declaration seems to be a sort of\npeculiar heuristic, not something inherent in what that declaration\nmeans to the optimizer. (And also influences what snapshot the\nfunction is looking at, and therefore what it can see, which has\nalso struck me more as a tacked-on effect than something inherent\nin the declaration's meaning.)\n\nRegards,\n-Chap\n\n\n",
"msg_date": "Thu, 15 Jun 2023 09:58:39 -0400",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: When IMMUTABLE is not."
},
{
"msg_contents": "\n15.06.2023 16:58, [email protected] пишет:\n> On 2023-06-15 09:21, Tom Lane wrote:\n>> Yura Sokolov <[email protected]> writes:\n>>> not enough to be sure function doesn't manipulate data.\n>>\n>> Of course not. It is the user's responsibility to mark functions\n>> properly.\n>\n> And also, isn't it the case that IMMUTABLE should mark a function,\n> not merely that \"doesn't manipulate data\", but whose return value\n> doesn't depend in any way on data (outside its own arguments)?\n>\n> The practice among PLs of choosing an SPI readonly flag based on\n> the IMMUTABLE/STABLE/VOLATILE declaration seems to be a sort of\n> peculiar heuristic, not something inherent in what that declaration\n> means to the optimizer. (And also influences what snapshot the\n> function is looking at, and therefore what it can see, which has\n> also struck me more as a tacked-on effect than something inherent\n> in the declaration's meaning.)\n\nDocumentation disagrees:\n\nhttps://www.postgresql.org/docs/current/sql-createfunction.html#:~:text=IMMUTABLE%0ASTABLE%0AVOLATILE\n\n > |IMMUTABLE|indicates that the function cannot modify the database and \nalways returns the same result when given the same argument values\n\n > |STABLE|indicates that the function cannot modify the database, and \nthat within a single table scan it will consistently return the same \nresult for the same argument values, but that its result could change \nacross SQL statements.\n\n > |VOLATILE|indicates that the function value can change even within a \nsingle table scan, so no optimizations can be made... But note that any \nfunction that has side-effects must be classified volatile, even if its \nresult is quite predictable, to prevent calls from being optimized away\n\n\n\n",
"msg_date": "Thu, 15 Jun 2023 17:06:44 +0300",
"msg_from": "Yura Sokolov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: When IMMUTABLE is not."
},
{
"msg_contents": "[email protected] writes:\n> And also, isn't it the case that IMMUTABLE should mark a function,\n> not merely that \"doesn't manipulate data\", but whose return value\n> doesn't depend in any way on data (outside its own arguments)?\n\nRight. We can't realistically enforce that either, so it's\nup to the user.\n\n> The practice among PLs of choosing an SPI readonly flag based on\n> the IMMUTABLE/STABLE/VOLATILE declaration seems to be a sort of\n> peculiar heuristic, not something inherent in what that declaration\n> means to the optimizer. (And also influences what snapshot the\n> function is looking at, and therefore what it can see, which has\n> also struck me more as a tacked-on effect than something inherent\n> in the declaration's meaning.)\n\nWell, it is a bit odd at first sight, but these properties play\ntogether well. See\n\nhttps://www.postgresql.org/docs/current/xfunc-volatility.html\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 15 Jun 2023 10:10:03 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: When IMMUTABLE is not."
},
{
"msg_contents": "On 2023-06-15 09:58, [email protected] wrote:\n> also influences what snapshot the\n> function is looking at, and therefore what it can see, which has\n> also struck me more as a tacked-on effect than something inherent\n> in the declaration's meaning.\n\nI just re-read that and realized I should anticipate the obvious\nresponse \"but how can it matter what the function can see, if\nit's IMMUTABLE and depends on no data?\".\n\nSo, I ran into the effect while working on PL/Java, where the\ncode of a function isn't all found in pg_proc.prosrc; that just\nindicates what code has to be fetched from sqlj.jar_entry.\n\nSo one could take a strict view that \"no PL/Java function should\never be marked IMMUTABLE\" because every one depends on fetching\nsomething (once, at least).\n\nBut on the other hand, it would seem punctilious to say that\nf(int x, int y) { return x + y; } isn't IMMUTABLE, only because\nit depends on a fetch /of its own implementation/, and overall\nits behavior is better described by marking it IMMUTABLE.\n\nRegards,\n-Chap\n\n\n",
"msg_date": "Thu, 15 Jun 2023 10:16:12 -0400",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: When IMMUTABLE is not."
},
{
"msg_contents": "On Thursday, June 15, 2023, <[email protected]> wrote:\n\n>\n> So one could take a strict view that \"no PL/Java function should\n> ever be marked IMMUTABLE\" because every one depends on fetching\n> something (once, at least).\n>\n\nThe failure to find and execute the function code itself is not a failure\nmode that these markers need be concerned with. Assuming one can execute\nthe function an immutable function will give the same answer for the same\ninput for all time.\n\nDavid J.\n\nOn Thursday, June 15, 2023, <[email protected]> wrote:\nSo one could take a strict view that \"no PL/Java function should\never be marked IMMUTABLE\" because every one depends on fetching\nsomething (once, at least).\nThe failure to find and execute the function code itself is not a failure mode that these markers need be concerned with. Assuming one can execute the function an immutable function will give the same answer for the same input for all time.David J.",
"msg_date": "Thu, 15 Jun 2023 07:19:39 -0700",
"msg_from": "\"David G. Johnston\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: When IMMUTABLE is not."
},
{
"msg_contents": "On 2023-06-15 10:19, David G. Johnston wrote:\n> The failure to find and execute the function code itself is not a \n> failure\n> mode that these markers need be concerned with. Assuming one can \n> execute\n> the function an immutable function will give the same answer for the \n> same\n> input for all time.\n\nThat was the view I ultimately took, and just made PL/Java suppress that\nSPI readonly flag when going to look for the function code.\n\nUntil that change, you could run into the not-uncommon situation\nwhere you've just loaded a jar of new functions and try to use them\nin the same transaction, and hey presto, the VOLATILE ones all work,\nand the IMMUTABLE ones aren't there yet.\n\nRegards,\n-Chap\n\n\n",
"msg_date": "Thu, 15 Jun 2023 10:25:46 -0400",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: When IMMUTABLE is not."
},
{
"msg_contents": "\"David G. Johnston\" <[email protected]> writes:\n> The failure to find and execute the function code itself is not a failure\n> mode that these markers need be concerned with. Assuming one can execute\n> the function an immutable function will give the same answer for the same\n> input for all time.\n\nThe viewpoint taken in the docs I mentioned is that an IMMUTABLE\nmarker is a promise from the user to the system about the behavior\nof a function. While the system does provide a few simple tools\nto catch obvious errors and to make it easier to write functions\nthat obey such promises, it's mostly on the user to get it right.\n\nIn particular, we've never enforced that an immutable function can't\ncall non-immutable functions. While that would seem like a good idea\nin the abstract, we've intentionally not tried to do it. (I'm pretty\nsure there is more than one round of previous discussions of the point\nin the archives, although locating relevant threads seems hard.)\nOne reason not to is that polymorphic functions have to be marked\nwith worst-case volatility labels. There are plenty of examples of\nfunctions that are stable for some input types and immutable for\nothers (array_to_string, for instance); but the marking system can't\nrepresent that so we have to label them stable. Enforcing that a\nuser-defined immutable function can't use such a function might\njust break things for no gain.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 15 Jun 2023 10:49:01 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: When IMMUTABLE is not."
},
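The worst-case labeling of array_to_string() mentioned above can be checked directly in the catalog, for example:

SELECT proname, provolatile
FROM pg_proc
WHERE proname = 'array_to_string';

On recent releases the overloads of array_to_string() come back marked 's' (stable), even though the function behaves as immutable for many input types, because the marking has to cover the worst case.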
{
"msg_contents": "On Thu, 15 Jun 2023 at 10:49, Tom Lane <[email protected]> wrote:\n\nIn particular, we've never enforced that an immutable function can't\n> call non-immutable functions. While that would seem like a good idea\n> in the abstract, we've intentionally not tried to do it. (I'm pretty\n> sure there is more than one round of previous discussions of the point\n> in the archives, although locating relevant threads seems hard.)\n> One reason not to is that polymorphic functions have to be marked\n> with worst-case volatility labels. There are plenty of examples of\n> functions that are stable for some input types and immutable for\n> others (array_to_string, for instance); but the marking system can't\n> represent that so we have to label them stable. Enforcing that a\n> user-defined immutable function can't use such a function might\n> just break things for no gain.\n>\n\nMore sophisticated type systems (which I am *not* volunteering to graft\nonto Postgres) can handle some of this, but even Haskell has\nunsafePerformIO. The current policy is both wise and practical.\n\nOn Thu, 15 Jun 2023 at 10:49, Tom Lane <[email protected]> wrote:\nIn particular, we've never enforced that an immutable function can't\ncall non-immutable functions. While that would seem like a good idea\nin the abstract, we've intentionally not tried to do it. (I'm pretty\nsure there is more than one round of previous discussions of the point\nin the archives, although locating relevant threads seems hard.)\nOne reason not to is that polymorphic functions have to be marked\nwith worst-case volatility labels. There are plenty of examples of\nfunctions that are stable for some input types and immutable for\nothers (array_to_string, for instance); but the marking system can't\nrepresent that so we have to label them stable. Enforcing that a\nuser-defined immutable function can't use such a function might\njust break things for no gain.More sophisticated type systems (which I am *not* volunteering to graft onto Postgres) can handle some of this, but even Haskell has unsafePerformIO. The current policy is both wise and practical.",
"msg_date": "Thu, 15 Jun 2023 10:55:01 -0400",
"msg_from": "Isaac Morland <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: When IMMUTABLE is not."
},
{
"msg_contents": "\n15.06.2023 17:49, Tom Lane пишет:\n> \"David G. Johnston\" <[email protected]> writes:\n>> The failure to find and execute the function code itself is not a failure\n>> mode that these markers need be concerned with. Assuming one can execute\n>> the function an immutable function will give the same answer for the same\n>> input for all time.\n> The viewpoint taken in the docs I mentioned is that an IMMUTABLE\n> marker is a promise from the user to the system about the behavior\n> of a function. While the system does provide a few simple tools\n> to catch obvious errors and to make it easier to write functions\n> that obey such promises, it's mostly on the user to get it right.\n>\n> In particular, we've never enforced that an immutable function can't\n> call non-immutable functions. While that would seem like a good idea\n> in the abstract, we've intentionally not tried to do it. (I'm pretty\n> sure there is more than one round of previous discussions of the point\n> in the archives, although locating relevant threads seems hard.)\n> One reason not to is that polymorphic functions have to be marked\n> with worst-case volatility labels. There are plenty of examples of\n> functions that are stable for some input types and immutable for\n> others (array_to_string, for instance); but the marking system can't\n> represent that so we have to label them stable. Enforcing that a\n> user-defined immutable function can't use such a function might\n> just break things for no gain.\n\n\"Stable vs Immutable\" is much lesser problem compared to \"ReadOnly vs \nVolatile\".\n\nExecuting fairly read-only function more times than necessary (or less \ntimes),\ndoesn't modify data in unexpecting way.\n\nBut executing immutable/stable function, that occasionally modifies \ndata, could\nlead to different unexpected effects due to optimizer decided to call \nthem more\nor less times than query assumes.\n\nSome vulnerabilities were present due to user defined functions used in \nindex\ndefinitions started to modify data. If \"read-only\" execution were forced \nin index\noperations, those issues couldn't happen.\n\n > it's mostly on the user to get it right.\n\nIt is really bad premise. Users does strange things and aren't expected \nto be\nprofessionals who really understand whole PostgreSQL internals.\n\nAnd it is strange to hear it at the same time we don't allow users to do \nquery hints\nsince \"optimizer does better\" :-D\n\nOk, I'd go and cool myself. Certainly I don't get some point.\n\n\n\n",
"msg_date": "Thu, 15 Jun 2023 19:33:52 +0300",
"msg_from": "Yura Sokolov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: When IMMUTABLE is not."
}
] |
[
{
"msg_contents": "ExecIncrementalSort() calls tuplesort_begin_common(), which creates the \"TupleSort main\"\nand \"TupleSort sort\" memory contexts, and ExecEndIncrementalSort() calls tuplesort_end(),\nwhich destroys them.\nBut ExecReScanIncrementalSort() only resets the memory contexts. Since the next call to\nExecIncrementalSort() will create them again, we end up leaking these contexts for every\nre-scan.\n\nHere is a reproducer with the regression test database:\n\n SET enable_sort = off;\n SET enable_hashjoin = off;\n SET enable_mergejoin = off;\n SET enable_material = off;\n\n SELECT t.unique2, t2.r\n FROM tenk1 AS t \n JOIN (SELECT unique1, \n row_number() OVER (ORDER BY hundred, thousand) AS r \n FROM tenk1 \n OFFSET 0) AS t2 \n ON t.unique1 + 0 = t2.unique1\n WHERE t.unique1 < 1000;\n\nThe execution plan:\n\n Nested Loop\n Join Filter: ((t.unique1 + 0) = tenk1.unique1)\n -> Bitmap Heap Scan on tenk1 t\n Recheck Cond: (unique1 < 1000)\n -> Bitmap Index Scan on tenk1_unique1\n Index Cond: (unique1 < 1000)\n -> WindowAgg\n -> Incremental Sort\n Sort Key: tenk1.hundred, tenk1.thousand\n Presorted Key: tenk1.hundred\n -> Index Scan using tenk1_hundred on tenk1\n\n\nA memory context dump at the end of the execution looks like this:\n\n ExecutorState: 262144 total in 6 blocks; 74136 free (29 chunks); 188008 used\n TupleSort main: 32832 total in 2 blocks; 7320 free (0 chunks); 25512 used\n TupleSort sort: 8192 total in 1 blocks; 7928 free (0 chunks); 264 used\n Caller tuples: 8192 total in 1 blocks (0 chunks); 7984 free (0 chunks); 208 used\n TupleSort main: 32832 total in 2 blocks; 7256 free (0 chunks); 25576 used\n TupleSort sort: 8192 total in 1 blocks; 7928 free (0 chunks); 264 used\n Caller tuples: 8192 total in 1 blocks (0 chunks); 7984 free (0 chunks); 208 used\n TupleSort main: 32832 total in 2 blocks; 7320 free (0 chunks); 25512 used\n TupleSort sort: 8192 total in 1 blocks; 7928 free (0 chunks); 264 used\n Caller tuples: 8192 total in 1 blocks (0 chunks); 7984 free (0 chunks); 208 used\n [many more]\n 1903 more child contexts containing 93452928 total in 7597 blocks; 44073240 free (0 chunks); 49379688 used\n\n\nThe following patch fixes the problem for me:\n\n--- a/src/backend/executor/nodeIncrementalSort.c\n+++ b/src/backend/executor/nodeIncrementalSort.c\n@@ -1145,21 +1145,16 @@ ExecReScanIncrementalSort(IncrementalSortState *node)\n node->execution_status = INCSORT_LOADFULLSORT;\n \n /*\n- * If we've set up either of the sort states yet, we need to reset them.\n- * We could end them and null out the pointers, but there's no reason to\n- * repay the setup cost, and because ExecIncrementalSort guards presorted\n- * column functions by checking to see if the full sort state has been\n- * initialized yet, setting the sort states to null here might actually\n- * cause a leak.\n+ * Release tuplesort resources.\n */\n if (node->fullsort_state != NULL)\n {\n- tuplesort_reset(node->fullsort_state);\n+ tuplesort_end(node->fullsort_state);\n node->fullsort_state = NULL;\n }\n if (node->prefixsort_state != NULL)\n {\n- tuplesort_reset(node->prefixsort_state);\n+ tuplesort_end(node->prefixsort_state);\n node->prefixsort_state = NULL;\n }\n \n\nThe original comment hints that this might mot be the correct thing to do...\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Thu, 15 Jun 2023 13:48:44 +0200",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": true,
"msg_subject": "Memory leak in incremental sort re-scan"
},
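For reference, a dump like the one quoted above can also be requested at run time. Since PostgreSQL 14,

SELECT pg_log_backend_memory_contexts(pg_backend_pid());

writes the per-context total/free/used figures of the current backend to the server log, and passing another backend's pid instead of pg_backend_pid() does the same for that backend, which makes the accumulating "TupleSort main"/"TupleSort sort" contexts easy to watch while the re-scans pile up.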
{
"msg_contents": "Hi,\n\nOn 6/15/23 13:48, Laurenz Albe wrote:\n> ExecIncrementalSort() calls tuplesort_begin_common(), which creates the \"TupleSort main\"\n> and \"TupleSort sort\" memory contexts, and ExecEndIncrementalSort() calls tuplesort_end(),\n> which destroys them.\n> But ExecReScanIncrementalSort() only resets the memory contexts. Since the next call to\n> ExecIncrementalSort() will create them again, we end up leaking these contexts for every\n> re-scan.\n> \n> Here is a reproducer with the regression test database:\n> \n> SET enable_sort = off;\n> SET enable_hashjoin = off;\n> SET enable_mergejoin = off;\n> SET enable_material = off;\n> \n> SELECT t.unique2, t2.r\n> FROM tenk1 AS t \n> JOIN (SELECT unique1, \n> row_number() OVER (ORDER BY hundred, thousand) AS r \n> FROM tenk1 \n> OFFSET 0) AS t2 \n> ON t.unique1 + 0 = t2.unique1\n> WHERE t.unique1 < 1000;\n> \n> The execution plan:\n> \n> Nested Loop\n> Join Filter: ((t.unique1 + 0) = tenk1.unique1)\n> -> Bitmap Heap Scan on tenk1 t\n> Recheck Cond: (unique1 < 1000)\n> -> Bitmap Index Scan on tenk1_unique1\n> Index Cond: (unique1 < 1000)\n> -> WindowAgg\n> -> Incremental Sort\n> Sort Key: tenk1.hundred, tenk1.thousand\n> Presorted Key: tenk1.hundred\n> -> Index Scan using tenk1_hundred on tenk1\n> \n> \n> A memory context dump at the end of the execution looks like this:\n> \n> ExecutorState: 262144 total in 6 blocks; 74136 free (29 chunks); 188008 used\n> TupleSort main: 32832 total in 2 blocks; 7320 free (0 chunks); 25512 used\n> TupleSort sort: 8192 total in 1 blocks; 7928 free (0 chunks); 264 used\n> Caller tuples: 8192 total in 1 blocks (0 chunks); 7984 free (0 chunks); 208 used\n> TupleSort main: 32832 total in 2 blocks; 7256 free (0 chunks); 25576 used\n> TupleSort sort: 8192 total in 1 blocks; 7928 free (0 chunks); 264 used\n> Caller tuples: 8192 total in 1 blocks (0 chunks); 7984 free (0 chunks); 208 used\n> TupleSort main: 32832 total in 2 blocks; 7320 free (0 chunks); 25512 used\n> TupleSort sort: 8192 total in 1 blocks; 7928 free (0 chunks); 264 used\n> Caller tuples: 8192 total in 1 blocks (0 chunks); 7984 free (0 chunks); 208 used\n> [many more]\n> 1903 more child contexts containing 93452928 total in 7597 blocks; 44073240 free (0 chunks); 49379688 used\n> \n> \n> The following patch fixes the problem for me:\n> \n> --- a/src/backend/executor/nodeIncrementalSort.c\n> +++ b/src/backend/executor/nodeIncrementalSort.c\n> @@ -1145,21 +1145,16 @@ ExecReScanIncrementalSort(IncrementalSortState *node)\n> node->execution_status = INCSORT_LOADFULLSORT;\n> \n> /*\n> - * If we've set up either of the sort states yet, we need to reset them.\n> - * We could end them and null out the pointers, but there's no reason to\n> - * repay the setup cost, and because ExecIncrementalSort guards presorted\n> - * column functions by checking to see if the full sort state has been\n> - * initialized yet, setting the sort states to null here might actually\n> - * cause a leak.\n> + * Release tuplesort resources.\n> */\n> if (node->fullsort_state != NULL)\n> {\n> - tuplesort_reset(node->fullsort_state);\n> + tuplesort_end(node->fullsort_state);\n> node->fullsort_state = NULL;\n> }\n> if (node->prefixsort_state != NULL)\n> {\n> - tuplesort_reset(node->prefixsort_state);\n> + tuplesort_end(node->prefixsort_state);\n> node->prefixsort_state = NULL;\n> }\n> \n> \n> The original comment hints that this might mot be the correct thing to do...\n> \n\nI think it's correct, but I need to look at the code more closely - it's\nbeen a while. 
The code is a bit silly, as it resets the tuplesort and\nthen throws away all the pointers - so what could the _end() break?\n\nAFAICS the comment says that we can't just do tuplesort_reset and keep\nthe pointers, because some other code depends on them being NULL.\n\nIn hindsight, that's a bit awkward - it'd probably be better to have a\nseparate flag, which would allow us to just reset the tuplesort.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 15 Jun 2023 15:19:47 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Memory leak in incremental sort re-scan"
},
{
"msg_contents": "Tomas Vondra <[email protected]> writes:\n> On 6/15/23 13:48, Laurenz Albe wrote:\n>> ExecIncrementalSort() calls tuplesort_begin_common(), which creates the \"TupleSort main\"\n>> and \"TupleSort sort\" memory contexts, and ExecEndIncrementalSort() calls tuplesort_end(),\n>> which destroys them.\n>> But ExecReScanIncrementalSort() only resets the memory contexts.\n\n> I think it's correct, but I need to look at the code more closely - it's\n> been a while. The code is a bit silly, as it resets the tuplesort and\n> then throws away all the pointers - so what could the _end() break?\n\nThe report at [1] seems to be the same issue of ExecReScanIncrementalSort\nleaking memory. I applied Laurenz's fix, and that greatly reduces the\nspeed of leak but doesn't remove the problem entirely. It looks like\nthe remaining issue is that the data computed by preparePresortedCols() is\nrecomputed each time we rescan the node. This seems entirely gratuitous,\nbecause there's nothing in that that could change across rescans.\nI see zero leakage in that example after applying the attached quick\nhack. (It might be better to make the check in the caller, or to just\nmove the call to ExecInitIncrementalSort.)\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/db03c582-086d-e7cd-d4a1-3bc722f81765%40inf.ethz.ch",
"msg_date": "Thu, 15 Jun 2023 16:11:23 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Memory leak in incremental sort re-scan"
},
{
"msg_contents": "\n\nOn 6/15/23 22:11, Tom Lane wrote:\n> Tomas Vondra <[email protected]> writes:\n>> On 6/15/23 13:48, Laurenz Albe wrote:\n>>> ExecIncrementalSort() calls tuplesort_begin_common(), which creates the \"TupleSort main\"\n>>> and \"TupleSort sort\" memory contexts, and ExecEndIncrementalSort() calls tuplesort_end(),\n>>> which destroys them.\n>>> But ExecReScanIncrementalSort() only resets the memory contexts.\n> \n>> I think it's correct, but I need to look at the code more closely - it's\n>> been a while. The code is a bit silly, as it resets the tuplesort and\n>> then throws away all the pointers - so what could the _end() break?\n> \n> The report at [1] seems to be the same issue of ExecReScanIncrementalSort\n> leaking memory.\n\nFunny how these reports often come in pairs ...\n\n> I applied Laurenz's fix, and that greatly reduces the\n> speed of leak but doesn't remove the problem entirely. It looks like\n> the remaining issue is that the data computed by preparePresortedCols() is\n> recomputed each time we rescan the node. This seems entirely gratuitous,\n> because there's nothing in that that could change across rescans.\n\nYeah, I was wondering about that too when I skimmed over that code\nearlier today.\n\n> I see zero leakage in that example after applying the attached quick\n> hack. (It might be better to make the check in the caller, or to just\n> move the call to ExecInitIncrementalSort.)\n> \n\nThanks for looking. Are you planning to work on this and push the fix,\nor do you want me to finish this up?\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 15 Jun 2023 22:30:58 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Memory leak in incremental sort re-scan"
},
{
"msg_contents": "Tomas Vondra <[email protected]> writes:\n> On 6/15/23 22:11, Tom Lane wrote:\n>> I see zero leakage in that example after applying the attached quick\n>> hack. (It might be better to make the check in the caller, or to just\n>> move the call to ExecInitIncrementalSort.)\n\n> Thanks for looking. Are you planning to work on this and push the fix,\n> or do you want me to finish this up?\n\nI'm happy to let you take it -- got lots of other stuff on my plate.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 15 Jun 2023 16:36:33 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Memory leak in incremental sort re-scan"
},
{
"msg_contents": "\n\nOn 6/15/23 22:36, Tom Lane wrote:\n> Tomas Vondra <[email protected]> writes:\n>> On 6/15/23 22:11, Tom Lane wrote:\n>>> I see zero leakage in that example after applying the attached quick\n>>> hack. (It might be better to make the check in the caller, or to just\n>>> move the call to ExecInitIncrementalSort.)\n> \n>> Thanks for looking. Are you planning to work on this and push the fix,\n>> or do you want me to finish this up?\n> \n> I'm happy to let you take it -- got lots of other stuff on my plate.\n> \n\nOK, will do.\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 16 Jun 2023 00:34:54 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Memory leak in incremental sort re-scan"
},
{
"msg_contents": "On Thu, Jun 15, 2023 at 6:35 PM Tomas Vondra\n<[email protected]> wrote:\n>\n>\n>\n> On 6/15/23 22:36, Tom Lane wrote:\n> > Tomas Vondra <[email protected]> writes:\n> >> On 6/15/23 22:11, Tom Lane wrote:\n> >>> I see zero leakage in that example after applying the attached quick\n> >>> hack. (It might be better to make the check in the caller, or to just\n> >>> move the call to ExecInitIncrementalSort.)\n> >\n> >> Thanks for looking. Are you planning to work on this and push the fix,\n> >> or do you want me to finish this up?\n> >\n> > I'm happy to let you take it -- got lots of other stuff on my plate.\n> >\n>\n> OK, will do.\n\nI think the attached is enough to fix it -- rather than nulling out\nthe sort states in rescan, we can reset them (as the comment says),\nbut not set them to null (we also have the same mistake with\npresorted_keys). That avoids unnecessary recreation of the sort\nstates, but it also fixes the problem Tom noted as well: the call to\npreparePresortedCols() is already guarded by a test on fullsort_state\nbeing NULL, so with this change we also won't unnecessarily redo that\nwork.\n\nRegards,\nJames Coleman",
"msg_date": "Wed, 21 Jun 2023 14:54:13 -0400",
"msg_from": "James Coleman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Memory leak in incremental sort re-scan"
},
{
"msg_contents": "On Fri, 2023-06-16 at 00:34 +0200, Tomas Vondra wrote:\n> On 6/15/23 22:36, Tom Lane wrote:\n> > Tomas Vondra <[email protected]> writes:\n> > > On 6/15/23 22:11, Tom Lane wrote:\n> > > > I see zero leakage in that example after applying the attached quick\n> > > > hack. (It might be better to make the check in the caller, or to just\n> > > > move the call to ExecInitIncrementalSort.)\n> > \n> > > Thanks for looking. Are you planning to work on this and push the fix,\n> > > or do you want me to finish this up?\n> > \n> > I'm happy to let you take it -- got lots of other stuff on my plate.\n> \n> OK, will do.\n\nIt would be cool if we could get that into the next minor release in August.\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Thu, 29 Jun 2023 13:49:54 +0200",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Memory leak in incremental sort re-scan"
},
{
"msg_contents": "\n\nOn 6/29/23 13:49, Laurenz Albe wrote:\n> On Fri, 2023-06-16 at 00:34 +0200, Tomas Vondra wrote:\n>> On 6/15/23 22:36, Tom Lane wrote:\n>>> Tomas Vondra <[email protected]> writes:\n>>>> On 6/15/23 22:11, Tom Lane wrote:\n>>>>> I see zero leakage in that example after applying the attached quick\n>>>>> hack. (It might be better to make the check in the caller, or to just\n>>>>> move the call to ExecInitIncrementalSort.)\n>>>\n>>>> Thanks for looking. Are you planning to work on this and push the fix,\n>>>> or do you want me to finish this up?\n>>>\n>>> I'm happy to let you take it -- got lots of other stuff on my plate.\n>>\n>> OK, will do.\n> \n> It would be cool if we could get that into the next minor release in August.\n> \n\nFWIW I've pushed the fix prepared by James a couple days ago. Thanks for\nthe report!\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sun, 2 Jul 2023 20:13:41 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Memory leak in incremental sort re-scan"
},
{
"msg_contents": "On Sun, 2023-07-02 at 20:13 +0200, Tomas Vondra wrote:\n> FWIW I've pushed the fix prepared by James a couple days ago. Thanks for\n> the report!\n\nThanks, and sorry for being pushy.\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Sun, 02 Jul 2023 21:52:40 +0200",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Memory leak in incremental sort re-scan"
}
] |
[
{
"msg_contents": "Hi,\r\n\r\nIn the function WalReceiverMain, when the function walrcv_create_slot is called,\r\nthe fourth parameter is assigned the value \"0\" instead of the enum value\r\n\"CRS_EXPORT_SNAPSHOT\". I think it would be better to use the corresponding enum\r\nvalue.\r\n\r\nAttach the patch to change this point.\r\n\r\nRegards,\r\nWang wei",
"msg_date": "Fri, 16 Jun 2023 06:10:10 +0000",
"msg_from": "\"Wei Wang (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Use the enum value CRS_EXPORT_SNAPSHOT instead of \"0\""
},
{
"msg_contents": "On Fri, Jun 16, 2023 at 4:10 PM Wei Wang (Fujitsu)\n<[email protected]> wrote:\n>\n> Hi,\n>\n> In the function WalReceiverMain, when the function walrcv_create_slot is called,\n> the fourth parameter is assigned the value \"0\" instead of the enum value\n> \"CRS_EXPORT_SNAPSHOT\". I think it would be better to use the corresponding enum\n> value.\n>\n\n+1\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Fri, 16 Jun 2023 16:45:17 +1000",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Use the enum value CRS_EXPORT_SNAPSHOT instead of \"0\""
},
{
"msg_contents": "On Fri, Jun 16, 2023 at 3:10 PM Wei Wang (Fujitsu)\n<[email protected]> wrote:\n>\n> Hi,\n>\n> In the function WalReceiverMain, when the function walrcv_create_slot is called,\n> the fourth parameter is assigned the value \"0\" instead of the enum value\n> \"CRS_EXPORT_SNAPSHOT\". I think it would be better to use the corresponding enum\n> value.\n\nThe walreceiver process doesn't use CRS_EXPORT_SNAPSHOT actually,\nright? I think replacing it with CRS_EXPORT_SNAPSHOT would rather\nconfuse me\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 16 Jun 2023 17:16:22 +0900",
"msg_from": "Masahiko Sawada <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Use the enum value CRS_EXPORT_SNAPSHOT instead of \"0\""
},
{
"msg_contents": "On Fri, Jun 16, 2023 at 6:17 PM Masahiko Sawada <[email protected]> wrote:\n>\n> On Fri, Jun 16, 2023 at 3:10 PM Wei Wang (Fujitsu)\n> <[email protected]> wrote:\n> >\n> > Hi,\n> >\n> > In the function WalReceiverMain, when the function walrcv_create_slot is called,\n> > the fourth parameter is assigned the value \"0\" instead of the enum value\n> > \"CRS_EXPORT_SNAPSHOT\". I think it would be better to use the corresponding enum\n> > value.\n>\n> The walreceiver process doesn't use CRS_EXPORT_SNAPSHOT actually,\n> right? I think replacing it with CRS_EXPORT_SNAPSHOT would rather\n> confuse me\n>\n\nPassing some number (0) which has the same value as an enum, while at\nthe same time not intending it to have the same meaning as that enum\nsmells strange to me.\n\nIf none of the existing enums is meaningful here, then perhaps there\nought to be another enum added (CRS_UNUSED?) and pass that instead.\n\n~\n\nAlternatively, maybe continue to pass 0, but ensure the existing enums\ndo not include any value of 0.\n\ne.g.\ntypedef enum\n{\n CRS_EXPORT_SNAPSHOT = 1,\n CRS_NOEXPORT_SNAPSHOT,\n CRS_USE_SNAPSHOT\n} CRSSnapshotAction;\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Fri, 16 Jun 2023 19:26:09 +1000",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Use the enum value CRS_EXPORT_SNAPSHOT instead of \"0\""
},
{
"msg_contents": "On 2023-Jun-16, Masahiko Sawada wrote:\n\n> The walreceiver process doesn't use CRS_EXPORT_SNAPSHOT actually,\n> right? I think replacing it with CRS_EXPORT_SNAPSHOT would rather\n> confuse me\n\nlibpqwalreceiver.c does use it. But I agree -- I think it would be\nbetter to not use the enum in walreceiver at all. IIRC if we stopped\nuse of that enum in {libpq}walreceiver, then we wouldn't need\nwalsender.h inclusion by walreceiver files.\n\nHowever, changing it means a change of the walrcv_create_slot API, so\nit's not all that trivial. But we could have a walreceiver-side enum\ninstead (with the same values). I think this would be worth doing,\nbecause it'll all end up cleaner.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Fri, 16 Jun 2023 11:47:51 +0200",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Use the enum value CRS_EXPORT_SNAPSHOT instead of \"0\""
}
] |
[
{
"msg_contents": "We have a small table with only 23 rows and 21 values.\n\nThe resulting MCV and histogram is as follows\nstanumbers1 | {0.08695652,0.08695652}\nstavalues1 | {v1,v2}\nstavalues2 | \n{v3,v4,v5,v6,v7,v8,v9,v10,v11,v12,v13,v14,v15,v16,v17,v18,v19,v20,v21}\n\nAn incorrect number of rows was estimated when HashJoin was done with \nanother large table (about 2 million rows).\n\nHash Join (cost=1.52..92414.61 rows=2035023 width=0) (actual \ntime=1.943..1528.983 rows=3902 loops=1)\n\nThe reason is that the MCV of the small table excludes values with rows \nof 1. Put them in the MCV in the statistics to get the correct result.\n\nUsing the conservative samplerows <= attstattarget doesn't completely \nsolve this problem. It can solve this case.\n\nAfter modification we get statistics without histogram:\nstanumbers1 | {0.08695652,0.08695652,0.04347826,0.04347826, ... }\nstavalues1 | {v,v2, ... }\n\nAnd we have the right estimates:\nHash Join (cost=1.52..72100.69 rows=3631 width=0) (actual \ntime=1.447..1268.385 rows=3902 loops=1)\n\n\nRegards,\n\n--\nQuan Zongliang\nBeijing Vastdata Co., LTD",
"msg_date": "Fri, 16 Jun 2023 17:25:16 +0800",
"msg_from": "Quan Zongliang <[email protected]>",
"msg_from_op": true,
"msg_subject": "Incorrect estimation of HashJoin rows resulted from inaccurate small\n table statistics"
},
{
"msg_contents": "\n\nOn 6/16/23 11:25, Quan Zongliang wrote:\n> \n> We have a small table with only 23 rows and 21 values.\n> \n> The resulting MCV and histogram is as follows\n> stanumbers1 | {0.08695652,0.08695652}\n> stavalues1 | {v1,v2}\n> stavalues2 |\n> {v3,v4,v5,v6,v7,v8,v9,v10,v11,v12,v13,v14,v15,v16,v17,v18,v19,v20,v21}\n> \n> An incorrect number of rows was estimated when HashJoin was done with\n> another large table (about 2 million rows).\n> \n> Hash Join (cost=1.52..92414.61 rows=2035023 width=0) (actual\n> time=1.943..1528.983 rows=3902 loops=1)\n> \n\nThat's interesting. I wonder how come the estimate gets this bad simply\nby skipping values entries with a single row in the sample, which means\nwe know the per-value selectivity pretty well.\n\nI guess the explanation has to be something strange happening when\nestimating the join condition selectivity, where we combine MCVs from\nboth sides of the join (which has to be happening here, otherwise it\nwould not matter what gets to the MCV).\n\nIt'd be interesting to know what's in the other MCV, and what are the\nother statistics for the attributes (ndistinct etc.).\n\nOr even better, a reproducer SQL script that builds two tables and then\njoins them.\n\n> The reason is that the MCV of the small table excludes values with rows\n> of 1. Put them in the MCV in the statistics to get the correct result.\n> \n> Using the conservative samplerows <= attstattarget doesn't completely\n> solve this problem. It can solve this case.\n> \n> After modification we get statistics without histogram:\n> stanumbers1 | {0.08695652,0.08695652,0.04347826,0.04347826, ... }\n> stavalues1 | {v,v2, ... }\n> \n> And we have the right estimates:\n> Hash Join (cost=1.52..72100.69 rows=3631 width=0) (actual\n> time=1.447..1268.385 rows=3902 loops=1)\n> \n\nI'm not against building a \"complete\" MCV, but I guess the case where\n(samplerows <= num_mcv) is pretty rare. Why shouldn't we make the MCV\ncomplete whenever we decide (ndistinct <= num_mcv)?\n\nThat would need to happen later, because we don't have the ndistinct\nestimate yet at this point - we'd have to do the loop a bit later (or\nlikely twice).\n\nFWIW the patch breaks the calculation of nmultiple (and thus likely the\nndistinct estimate).\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 16 Jun 2023 17:39:14 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Incorrect estimation of HashJoin rows resulted from inaccurate\n small table statistics"
},
{
"msg_contents": "\n\nOn 2023/6/16 23:39, Tomas Vondra wrote:\n> \n> \n> On 6/16/23 11:25, Quan Zongliang wrote:\n>>\n>> We have a small table with only 23 rows and 21 values.\n>>\n>> The resulting MCV and histogram is as follows\n>> stanumbers1 | {0.08695652,0.08695652}\n>> stavalues1 | {v1,v2}\n>> stavalues2 |\n>> {v3,v4,v5,v6,v7,v8,v9,v10,v11,v12,v13,v14,v15,v16,v17,v18,v19,v20,v21}\n>>\n>> An incorrect number of rows was estimated when HashJoin was done with\n>> another large table (about 2 million rows).\n>>\n>> Hash Join (cost=1.52..92414.61 rows=2035023 width=0) (actual\n>> time=1.943..1528.983 rows=3902 loops=1)\n>>\n> \n> That's interesting. I wonder how come the estimate gets this bad simply\n> by skipping values entries with a single row in the sample, which means\n> we know the per-value selectivity pretty well.\n> \n> I guess the explanation has to be something strange happening when\n> estimating the join condition selectivity, where we combine MCVs from\n> both sides of the join (which has to be happening here, otherwise it\n> would not matter what gets to the MCV).\n> \n> It'd be interesting to know what's in the other MCV, and what are the\n> other statistics for the attributes (ndistinct etc.).\n> \n> Or even better, a reproducer SQL script that builds two tables and then\n> joins them.\n> \nThe other table is severely skewed. Most rows cannot JOIN the small \ntable. This special case causes the inaccuracy of cost calculation.\n\n>> The reason is that the MCV of the small table excludes values with rows\n>> of 1. Put them in the MCV in the statistics to get the correct result.\n>>\n>> Using the conservative samplerows <= attstattarget doesn't completely\n>> solve this problem. It can solve this case.\n>>\n>> After modification we get statistics without histogram:\n>> stanumbers1 | {0.08695652,0.08695652,0.04347826,0.04347826, ... }\n>> stavalues1 | {v,v2, ... }\n>>\n>> And we have the right estimates:\n>> Hash Join (cost=1.52..72100.69 rows=3631 width=0) (actual\n>> time=1.447..1268.385 rows=3902 loops=1)\n>>\n> \n> I'm not against building a \"complete\" MCV, but I guess the case where\n> (samplerows <= num_mcv) is pretty rare. Why shouldn't we make the MCV\n> complete whenever we decide (ndistinct <= num_mcv)?\n> \n> That would need to happen later, because we don't have the ndistinct\n> estimate yet at this point - we'd have to do the loop a bit later (or\n> likely twice).\n> \n> FWIW the patch breaks the calculation of nmultiple (and thus likely the\n> ndistinct estimate).\n> \nIt's not just a small table. If a column's value is nearly unique. It \nalso causes the same problem because we exclude values that occur only \nonce. samplerows <= num_mcv just solves one scenario.\nPerhaps we should discard this (dups cnt > 1) restriction?\n\n> \n> regards\n> \n\n\n\n",
"msg_date": "Sat, 17 Jun 2023 06:32:58 +0800",
"msg_from": "Quan Zongliang <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Incorrect estimation of HashJoin rows resulted from inaccurate\n small table statistics"
},
{
"msg_contents": "Quan Zongliang <[email protected]> writes:\n> Perhaps we should discard this (dups cnt > 1) restriction?\n\nThat's not going to happen on the basis of one test case that you\nhaven't even shown us. The implications of doing it are very unclear.\nIn particular, I seem to recall that there are bits of logic that\ndepend on the assumption that MCV entries always represent more than\none row. The nmultiple calculation Tomas referred to may be failing\nbecause of that, but I'm worried about there being other places.\n\nBasically, you're proposing a rather fundamental change in the rules\nby which Postgres has gathered statistics for decades. You need to\nbring some pretty substantial evidence to support that. The burden\nof proof is on you, not on the status quo.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 16 Jun 2023 18:46:37 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Incorrect estimation of HashJoin rows resulted from inaccurate\n small table statistics"
},
{
"msg_contents": "\n\nOn 2023/6/17 06:46, Tom Lane wrote:\n> Quan Zongliang <[email protected]> writes:\n>> Perhaps we should discard this (dups cnt > 1) restriction?\n> \n> That's not going to happen on the basis of one test case that you\n> haven't even shown us. The implications of doing it are very unclear.\n> In particular, I seem to recall that there are bits of logic that\n> depend on the assumption that MCV entries always represent more than\n> one row. The nmultiple calculation Tomas referred to may be failing\n> because of that, but I'm worried about there being other places.\n> \n\nThe statistics for the other table look like this:\nstadistinct | 6\nstanumbers1 | {0.50096667,0.49736667,0.0012}\nstavalues1 | {v22,v23,v5}\n\nThe value that appears twice in the small table (v1 and v2) does not \nappear here. The stadistinct's true value is 18 instead of 6 (three \nvalues in the small table do not appear here).\n\nWhen calculating the selectivity:\nif (nd2 > sslot2->nvalues)\n totalsel1 += unmatchfreq1 * otherfreq2 / (nd2 - sslot2->nvalues);\n\ntotalsel1 = 0\nnd2 = 21\nsslot2->nvalues = 2\nunmatchfreq1 = 0.99990002016420476\notherfreq2 = 0.82608695328235626\n\nresult: totalsel1 = 0.043473913749706022\nrows = 0.043473913749706022 * 23 * 2,000,000 = 1999800\n\n\n> Basically, you're proposing a rather fundamental change in the rules\n> by which Postgres has gathered statistics for decades. You need to\n> bring some pretty substantial evidence to support that. The burden\n> of proof is on you, not on the status quo.\n> \n> \t\t\tregards, tom lane\n\n\n\n",
"msg_date": "Sat, 17 Jun 2023 08:02:54 +0800",
"msg_from": "Quan Zongliang <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Incorrect estimation of HashJoin rows resulted from inaccurate\n small table statistics"
},
{
"msg_contents": "On 6/17/23 00:32, Quan Zongliang wrote:\n> ...\n>\n> It's not just a small table. If a column's value is nearly unique. It\n> also causes the same problem because we exclude values that occur only\n> once. samplerows <= num_mcv just solves one scenario.\n> Perhaps we should discard this (dups cnt > 1) restriction?\n> \n\nBut for larger tables we'll be unable to keep all the values in the MCV.\nSo I think this only can change things for tiny tables.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sat, 17 Jun 2023 13:48:39 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Incorrect estimation of HashJoin rows resulted from inaccurate\n small table statistics"
},
{
"msg_contents": "On 6/17/23 02:02, Quan Zongliang wrote:\n> \n> \n> On 2023/6/17 06:46, Tom Lane wrote:\n>> Quan Zongliang <[email protected]> writes:\n>>> Perhaps we should discard this (dups cnt > 1) restriction?\n>>\n>> That's not going to happen on the basis of one test case that you\n>> haven't even shown us. The implications of doing it are very unclear.\n>> In particular, I seem to recall that there are bits of logic that\n>> depend on the assumption that MCV entries always represent more than\n>> one row. The nmultiple calculation Tomas referred to may be failing\n>> because of that, but I'm worried about there being other places.\n>>\n\nI don't recall any logic that'd outright fail with MCVs containing\nsingle-row groups, and I haven't noticed anything obvious in analyze.c\nduring a cursory search. Maybe the paper analyze_mcv_list builds on\nmakes some assumptions? Not sure.\n\nHowever, compute_distinct_stats() doesn't seem to have such protection\nagainst single-row MCV groups, so if that's wrong we kinda already have\nthe issue I think (admittedly, compute_distinct_stats is much less used\nthan compute_scalar_stats).\n\n> \n> The statistics for the other table look like this:\n> stadistinct | 6\n> stanumbers1 | {0.50096667,0.49736667,0.0012}\n> stavalues1 | {v22,v23,v5}\n> \n> The value that appears twice in the small table (v1 and v2) does not\n> appear here. The stadistinct's true value is 18 instead of 6 (three\n> values in the small table do not appear here).\n> \n> When calculating the selectivity:\n> if (nd2 > sslot2->nvalues)\n> totalsel1 += unmatchfreq1 * otherfreq2 / (nd2 - sslot2->nvalues);\n> \n> totalsel1 = 0\n> nd2 = 21\n> sslot2->nvalues = 2\n> unmatchfreq1 = 0.99990002016420476\n> otherfreq2 = 0.82608695328235626\n> \n> result: totalsel1 = 0.043473913749706022\n> rows = 0.043473913749706022 * 23 * 2,000,000 = 1999800\n> \n\nAttached is a script reproducing this.\n\nI think the fundamental issue here is that the most common element of\nthe large table - v22 (~50%) is not in the tiny one at all. IIRC the\njoin estimation assumes the domain of one table is a subset of the\nother. The values 22 / 23 violate that assumption, unfortunately.\n\nIncluding all values into the small MCV fix this because then\n\n otherfreq1 = 0.0\n\nand that simply eliminates the impact of stuff that didn't have a match\nbetween the two MCV lists. Which mitigates the violated assumption.\n\nBut once the small table gets too large for the MCV, this won't work\nthat well - it probably helps a bit, as it makes otherfreq1 smaller.\n\nWhich doesn't mean it's useless, but it's likely a rare combination that\na table is (and remains) smaller than MCV, and the large table contains\nvalues without a match in the smaller one (think foreign keys).\n\n> \n>> Basically, you're proposing a rather fundamental change in the rules\n>> by which Postgres has gathered statistics for decades. You need to\n>> bring some pretty substantial evidence to support that. The burden\n>> of proof is on you, not on the status quo.\n>>\n\nRight. It's a good example of a \"quick hack\" fixing one particular case,\nwithout considering the consequences on other cases too much. Good as a\nstarting point, but plenty of legwork to do.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Sat, 17 Jun 2023 15:45:07 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Incorrect estimation of HashJoin rows resulted from inaccurate\n small table statistics"
}
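The reproducer script attached to the message above is not part of the archive. A rough sketch consistent with the statistics quoted in this thread could look like the following (value names, row counts and the exact skew are assumptions):

CREATE TABLE small_t (v text);
INSERT INTO small_t SELECT 'v' || i FROM generate_series(1, 21) AS i; -- 21 distinct values
INSERT INTO small_t VALUES ('v1'), ('v2');                            -- v1/v2 now occur twice, 23 rows total

CREATE TABLE big_t (v text);
-- ~2M rows dominated by two values that never match small_t
INSERT INTO big_t
SELECT CASE WHEN random() < 0.5 THEN 'v22' ELSE 'v23' END
FROM generate_series(1, 2000000);
-- plus a small tail of values that do match
INSERT INTO big_t SELECT 'v' || (1 + i % 18) FROM generate_series(1, 4000) AS i;

ANALYZE small_t;
ANALYZE big_t;

EXPLAIN ANALYZE
SELECT * FROM big_t JOIN small_t USING (v);

The point of the skew is that v22/v23 dominate big_t's MCV list while being absent from small_t, which is the situation in which the join selectivity estimate discussed above goes wrong.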
] |
[
{
"msg_contents": "Hi hackers,\n\nRelcache errors from time to time detect catalog corruptions. For example, recently I observed following:\n1. Filesystem or nvme disk zeroed out leading 160Kb of catalog index. This type of corruption passes through data_checksums.\n2. RelationBuildTupleDesc() was failing with \"catalog is missing 1 attribute(s) for relid 2662\".\n3. We monitor corruption error codes and alert on-call DBAs when see one, but the message is not marked as XX001 or XX002. It's XX000 which happens from time to time due to less critical reasons than data corruption.\n4. High-availability automation switched primary to other host and other monitoring checks did not ring too.\n\nThis particular case is not very illustrative. In fact we had index corruption that looked like catalog corruption.\nBut still it looks to me that catalog inconsistencies (like relnatts != number of pg_attribute rows) could be marked with ERRCODE_DATA_CORRUPTED.\nThis particular error code in my experience proved to be a good indicator for early corruption detection.\n\nWhat do you think?\nWhat other subsystems can be improved in the same manner?\n\nBest regards, Andrey Borodin.",
"msg_date": "Fri, 16 Jun 2023 16:17:48 +0300",
"msg_from": "\"Andrey M. Borodin\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Add some more corruption error codes to relcache"
},
{
"msg_contents": "On Fri, Jun 16, 2023 at 9:18 AM Andrey M. Borodin <[email protected]>\nwrote:\n\n> Hi hackers,\n>\n> Relcache errors from time to time detect catalog corruptions. For example,\n> recently I observed following:\n> 1. Filesystem or nvme disk zeroed out leading 160Kb of catalog index. This\n> type of corruption passes through data_checksums.\n> 2. RelationBuildTupleDesc() was failing with \"catalog is missing 1\n> attribute(s) for relid 2662\".\n> 3. We monitor corruption error codes and alert on-call DBAs when see one,\n> but the message is not marked as XX001 or XX002. It's XX000 which happens\n> from time to time due to less critical reasons than data corruption.\n> 4. High-availability automation switched primary to other host and other\n> monitoring checks did not ring too.\n>\n> This particular case is not very illustrative. In fact we had index\n> corruption that looked like catalog corruption.\n> But still it looks to me that catalog inconsistencies (like relnatts !=\n> number of pg_attribute rows) could be marked with ERRCODE_DATA_CORRUPTED.\n> This particular error code in my experience proved to be a good indicator\n> for early corruption detection.\n>\n> What do you think?\n> What other subsystems can be improved in the same manner?\n>\n> Best regards, Andrey Borodin.\n>\n\nAndrey, I think this is a good idea. But your #1 item sounds familiar.\nThere was a thread about someone creating/dropping lots of databases, who\nfound some kind of race condition that would ZERO out pg_ catalog entries,\njust like you are mentioning. I think he found the problem with that\nrelations could not be found and/or the DB did not want to start. I just\nspent 30 minutes looking for it, but my \"search-fu\" is apparently failing.\n\nWhich leads me to ask if there is a way to detect the corrupting write\n(writing all zeroes to the file when we know better? A Zeroed out header\nwhen one cannot exist?) Hoping this triggers a bright idea on your end...\n\nKirk...\n\nOn Fri, Jun 16, 2023 at 9:18 AM Andrey M. Borodin <[email protected]> wrote:Hi hackers,\n\nRelcache errors from time to time detect catalog corruptions. For example, recently I observed following:\n1. Filesystem or nvme disk zeroed out leading 160Kb of catalog index. This type of corruption passes through data_checksums.\n2. RelationBuildTupleDesc() was failing with \"catalog is missing 1 attribute(s) for relid 2662\".\n3. We monitor corruption error codes and alert on-call DBAs when see one, but the message is not marked as XX001 or XX002. It's XX000 which happens from time to time due to less critical reasons than data corruption.\n4. High-availability automation switched primary to other host and other monitoring checks did not ring too.\n\nThis particular case is not very illustrative. In fact we had index corruption that looked like catalog corruption.\nBut still it looks to me that catalog inconsistencies (like relnatts != number of pg_attribute rows) could be marked with ERRCODE_DATA_CORRUPTED.\nThis particular error code in my experience proved to be a good indicator for early corruption detection.\n\nWhat do you think?\nWhat other subsystems can be improved in the same manner?\n\nBest regards, Andrey Borodin.Andrey, I think this is a good idea. But your #1 item sounds familiar. There was a thread about someone creating/dropping lots of databases, who found some kind of race condition that would ZERO out pg_ catalog entries, just like you are mentioning. 
I think he found the problem with that relations could not be found and/or the DB did not want to start. I just spent 30 minutes looking for it, but my \"search-fu\" is apparently failing.Which leads me to ask if there is a way to detect the corrupting write (writing all zeroes to the file when we know better? A Zeroed out header when one cannot exist?) Hoping this triggers a bright idea on your end...Kirk...",
"msg_date": "Mon, 26 Jun 2023 23:32:52 -0400",
"msg_from": "Kirk Wolak <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add some more corruption error codes to relcache"
}
]
[
{
"msg_contents": "Hi hackers,\n\npg_get_backend_memory_contexts() (and pg_backend_memory_contexts view)\ndoes not display parent/child relation between contexts reliably.\nCurrent version of this function only shows the name of parent context\nfor each context. The issue here is that it's not guaranteed that\ncontext names are unique. So, this makes it difficult to find the\ncorrect parent of a context.\n\nHow can knowing the correct parent context be useful? One important\nuse-case can be that it would allow us to sum up all the space used by\na particular context and all other subcontexts which stem from that\ncontext.\nCalculating this sum is helpful since currently\n(total/used/free)_bytes returned by this function does not include\nchild contexts. For this reason, only looking into the related row in\npg_backend_memory_contexts does not help us to understand how many\nbytes that context is actually taking.\n\nSimplest approach to solve this could be just adding two new fields,\nid and parent_id, in pg_get_backend_memory_contexts() and ensuring\neach context has a unique id. This way allows us to build a correct\nmemory context \"tree\".\n\nPlease see the attached patch which introduces those two fields.\nCouldn't find an existing unique identifier to use. The patch simply\nassigns an id during the execution of\npg_get_backend_memory_contexts() and does not store those id's\nanywhere. This means that these id's may be different in each call.\n\nWith this change, here's a query to find how much space used by each\ncontext including its children:\n\n> WITH RECURSIVE cte AS (\n> SELECT id, total_bytes, id as root, name as root_name\n> FROM memory_contexts\n> UNION ALL\n> SELECT r.id, r.total_bytes, cte.root, cte.root_name\n> FROM memory_contexts r\n> INNER JOIN cte ON r.parent_id = cte.id\n> ),\n> memory_contexts AS (\n> SELECT * FROM pg_backend_memory_contexts\n> )\n> SELECT root as id, root_name as name, sum(total_bytes)\n> FROM cte\n> GROUP BY root, root_name\n> ORDER BY sum DESC;\n\n\nYou should see that TopMemoryContext is the one with highest allocated\nspace since all other contexts are simply created under\nTopMemoryContext.\n\n\nAlso; even though having a correct link between parent/child contexts\ncan be useful to find out many other things as well by only writing\nSQL queries, it might require complex recursive queries similar to the\none in case of total_bytes including children. Maybe, we can also\nconsider adding such frequently used and/or useful information as new\nfields in pg_get_backend_memory_contexts() too.\n\n\nI appreciate any comment/feedback on this.\n\nThanks,\n-- \nMelih Mutlu\nMicrosoft",
"msg_date": "Fri, 16 Jun 2023 17:03:14 +0300",
"msg_from": "Melih Mutlu <[email protected]>",
"msg_from_op": true,
"msg_subject": "Parent/child context relation in pg_get_backend_memory_contexts()"
},
{
"msg_contents": "Hi hackers,\n\n\nMelih Mutlu <[email protected]>, 16 Haz 2023 Cum, 17:03 tarihinde şunu\nyazdı:\n\n> With this change, here's a query to find how much space used by each\n> context including its children:\n>\n> > WITH RECURSIVE cte AS (\n> > SELECT id, total_bytes, id as root, name as root_name\n> > FROM memory_contexts\n> > UNION ALL\n> > SELECT r.id, r.total_bytes, cte.root, cte.root_name\n> > FROM memory_contexts r\n> > INNER JOIN cte ON r.parent_id = cte.id\n> > ),\n> > memory_contexts AS (\n> > SELECT * FROM pg_backend_memory_contexts\n> > )\n> > SELECT root as id, root_name as name, sum(total_bytes)\n> > FROM cte\n> > GROUP BY root, root_name\n> > ORDER BY sum DESC;\n>\n\nGiven that the above query to get total bytes including all children is\nstill a complex one, I decided to add an additional info in\npg_backend_memory_contexts.\nThe new \"path\" field displays an integer array that consists of ids of all\nparents for the current context. This way it's easier to tell whether a\ncontext is a child of another context, and we don't need to use recursive\nqueries to get this info.\n\nHere how pg_backend_memory_contexts would look like with this patch:\n\npostgres=# SELECT name, id, parent, parent_id, path\nFROM pg_backend_memory_contexts\nORDER BY total_bytes DESC LIMIT 10;\n name | id | parent | parent_id | path\n-------------------------+-----+------------------+-----------+--------------\n CacheMemoryContext | 27 | TopMemoryContext | 0 | {0}\n Timezones | 124 | TopMemoryContext | 0 | {0}\n TopMemoryContext | 0 | | |\n MessageContext | 8 | TopMemoryContext | 0 | {0}\n WAL record construction | 118 | TopMemoryContext | 0 | {0}\n ExecutorState | 18 | PortalContext | 17 | {0,16,17}\n TupleSort main | 19 | ExecutorState | 18 | {0,16,17,18}\n TransactionAbortContext | 14 | TopMemoryContext | 0 | {0}\n smgr relation table | 10 | TopMemoryContext | 0 | {0}\n GUC hash table | 123 | GUCMemoryContext | 122 | {0,122}\n(10 rows)\n\n\nAn example query to calculate the total_bytes including its children for a\ncontext (say CacheMemoryContext) would look like this:\n\nWITH contexts AS (\nSELECT * FROM pg_backend_memory_contexts\n)\nSELECT sum(total_bytes)\nFROM contexts\nWHERE ARRAY[(SELECT id FROM contexts WHERE name = 'CacheMemoryContext')] <@\npath;\n\nWe still need to use cte since ids are not persisted and might change in\neach run of pg_backend_memory_contexts. Materializing the result can\nprevent any inconsistencies due to id change. Also it can be even good for\nperformance reasons as well.\n\nAny thoughts?\n\nThanks,\n-- \nMelih Mutlu\nMicrosoft",
"msg_date": "Fri, 4 Aug 2023 21:16:49 +0300",
"msg_from": "Melih Mutlu <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Parent/child context relation in pg_get_backend_memory_contexts()"
},
{
"msg_contents": "Hi,\n\nOn 2023-08-04 21:16:49 +0300, Melih Mutlu wrote:\n> Melih Mutlu <[email protected]>, 16 Haz 2023 Cum, 17:03 tarihinde şunu\n> yazdı:\n> \n> > With this change, here's a query to find how much space used by each\n> > context including its children:\n> >\n> > > WITH RECURSIVE cte AS (\n> > > SELECT id, total_bytes, id as root, name as root_name\n> > > FROM memory_contexts\n> > > UNION ALL\n> > > SELECT r.id, r.total_bytes, cte.root, cte.root_name\n> > > FROM memory_contexts r\n> > > INNER JOIN cte ON r.parent_id = cte.id\n> > > ),\n> > > memory_contexts AS (\n> > > SELECT * FROM pg_backend_memory_contexts\n> > > )\n> > > SELECT root as id, root_name as name, sum(total_bytes)\n> > > FROM cte\n> > > GROUP BY root, root_name\n> > > ORDER BY sum DESC;\n> >\n> \n> Given that the above query to get total bytes including all children is\n> still a complex one, I decided to add an additional info in\n> pg_backend_memory_contexts.\n> The new \"path\" field displays an integer array that consists of ids of all\n> parents for the current context. This way it's easier to tell whether a\n> context is a child of another context, and we don't need to use recursive\n> queries to get this info.\n\nI think that does make it a good bit easier. Both to understand and to use.\n\n\n\n> Here how pg_backend_memory_contexts would look like with this patch:\n> \n> postgres=# SELECT name, id, parent, parent_id, path\n> FROM pg_backend_memory_contexts\n> ORDER BY total_bytes DESC LIMIT 10;\n> name | id | parent | parent_id | path\n> -------------------------+-----+------------------+-----------+--------------\n> CacheMemoryContext | 27 | TopMemoryContext | 0 | {0}\n> Timezones | 124 | TopMemoryContext | 0 | {0}\n> TopMemoryContext | 0 | | |\n> MessageContext | 8 | TopMemoryContext | 0 | {0}\n> WAL record construction | 118 | TopMemoryContext | 0 | {0}\n> ExecutorState | 18 | PortalContext | 17 | {0,16,17}\n> TupleSort main | 19 | ExecutorState | 18 | {0,16,17,18}\n> TransactionAbortContext | 14 | TopMemoryContext | 0 | {0}\n> smgr relation table | 10 | TopMemoryContext | 0 | {0}\n> GUC hash table | 123 | GUCMemoryContext | 122 | {0,122}\n> (10 rows)\n\nWould we still need the parent_id column?\n\n\n> +\n> + <row>\n> + <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n> + <structfield>context_id</structfield> <type>int4</type>\n> + </para>\n> + <para>\n> + Current context id\n> + </para></entry>\n> + </row>\n\nI think the docs here need to warn that the id is ephemeral and will likely\ndiffer in the next invocation.\n\n\n> + <row>\n> + <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n> + <structfield>parent_id</structfield> <type>int4</type>\n> + </para>\n> + <para>\n> + Parent context id\n> + </para></entry>\n> + </row>\n> +\n> + <row>\n> + <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n> + <structfield>path</structfield> <type>int4</type>\n> + </para>\n> + <para>\n> + Path to reach the current context from TopMemoryContext\n> + </para></entry>\n> + </row>\n\nPerhaps we should include some hint here how it could be used?\n\n\n> </tbody>\n> </tgroup>\n> </table>\n> diff --git a/src/backend/utils/adt/mcxtfuncs.c b/src/backend/utils/adt/mcxtfuncs.c\n> index 92ca5b2f72..81cb35dd47 100644\n> --- a/src/backend/utils/adt/mcxtfuncs.c\n> +++ b/src/backend/utils/adt/mcxtfuncs.c\n> @@ -20,6 +20,7 @@\n> #include \"mb/pg_wchar.h\"\n> #include \"storage/proc.h\"\n> #include \"storage/procarray.h\"\n> +#include \"utils/array.h\"\n> #include 
\"utils/builtins.h\"\n> \n> /* ----------\n> @@ -28,6 +29,8 @@\n> */\n> #define MEMORY_CONTEXT_IDENT_DISPLAY_SIZE\t1024\n> \n> +static Datum convert_path_to_datum(List *path);\n> +\n> /*\n> * PutMemoryContextsStatsTupleStore\n> *\t\tOne recursion level for pg_get_backend_memory_contexts.\n> @@ -35,9 +38,10 @@\n> static void\n> PutMemoryContextsStatsTupleStore(Tuplestorestate *tupstore,\n> \t\t\t\t\t\t\t\t TupleDesc tupdesc, MemoryContext context,\n> -\t\t\t\t\t\t\t\t const char *parent, int level)\n> +\t\t\t\t\t\t\t\t const char *parent, int level, int *context_id,\n> +\t\t\t\t\t\t\t\t int parent_id, List *path)\n> {\n> -#define PG_GET_BACKEND_MEMORY_CONTEXTS_COLS\t9\n> +#define PG_GET_BACKEND_MEMORY_CONTEXTS_COLS\t12\n> \n> \tDatum\t\tvalues[PG_GET_BACKEND_MEMORY_CONTEXTS_COLS];\n> \tbool\t\tnulls[PG_GET_BACKEND_MEMORY_CONTEXTS_COLS];\n> @@ -45,6 +49,7 @@ PutMemoryContextsStatsTupleStore(Tuplestorestate *tupstore,\n> \tMemoryContext child;\n> \tconst char *name;\n> \tconst char *ident;\n> +\tint current_context_id = (*context_id)++;\n> \n> \tAssert(MemoryContextIsValid(context));\n> \n> @@ -103,13 +108,29 @@ PutMemoryContextsStatsTupleStore(Tuplestorestate *tupstore,\n> \tvalues[6] = Int64GetDatum(stat.freespace);\n> \tvalues[7] = Int64GetDatum(stat.freechunks);\n> \tvalues[8] = Int64GetDatum(stat.totalspace - stat.freespace);\n> +\tvalues[9] = Int32GetDatum(current_context_id);\n> +\n> +\tif(parent_id < 0)\n> +\t\t/* TopMemoryContext has no parent context */\n> +\t\tnulls[10] = true;\n> +\telse\n> +\t\tvalues[10] = Int32GetDatum(parent_id);\n> +\n> +\tif (path == NIL)\n> +\t\tnulls[11] = true;\n> +\telse\n> +\t\tvalues[11] = convert_path_to_datum(path);\n> +\n> \ttuplestore_putvalues(tupstore, tupdesc, values, nulls);\n> \n> +\tpath = lappend_int(path, current_context_id);\n> \tfor (child = context->firstchild; child != NULL; child = child->nextchild)\n> \t{\n> -\t\tPutMemoryContextsStatsTupleStore(tupstore, tupdesc,\n> -\t\t\t\t\t\t\t\t\t\t child, name, level + 1);\n> +\t\tPutMemoryContextsStatsTupleStore(tupstore, tupdesc, child, name,\n> +\t\t\t\t\t\t\t\t\t\t level+1, context_id,\n> +\t\t\t\t\t\t\t\t\t\t current_context_id, path);\n> \t}\n> +\tpath = list_delete_last(path);\n> }\n> \n> /*\n> @@ -120,10 +141,15 @@ Datum\n> pg_get_backend_memory_contexts(PG_FUNCTION_ARGS)\n> {\n> \tReturnSetInfo *rsinfo = (ReturnSetInfo *) fcinfo->resultinfo;\n> +\tint context_id = 0;\n> +\tList *path = NIL;\n> +\n> +\telog(LOG, \"pg_get_backend_memory_contexts called\");\n> \n> \tInitMaterializedSRF(fcinfo, 0);\n> \tPutMemoryContextsStatsTupleStore(rsinfo->setResult, rsinfo->setDesc,\n> -\t\t\t\t\t\t\t\t\t TopMemoryContext, NULL, 0);\n> +\t\t\t\t\t\t\t\t\t TopMemoryContext, NULL, 0, &context_id,\n> +\t\t\t\t\t\t\t\t\t -1, path);\n> \n> \treturn (Datum) 0;\n> }\n> @@ -193,3 +219,26 @@ pg_log_backend_memory_contexts(PG_FUNCTION_ARGS)\n> \n> \tPG_RETURN_BOOL(true);\n> }\n> +\n> +/*\n> + * Convert a list of context ids to a int[] Datum\n> + */\n> +static Datum\n> +convert_path_to_datum(List *path)\n> +{\n> +\tDatum\t *datum_array;\n> +\tint\t\t\tlength;\n> +\tArrayType *result_array;\n> +\tListCell *lc;\n> +\n> +\tlength = list_length(path);\n> +\tdatum_array = (Datum *) palloc(length * sizeof(Datum));\n> +\tlength = 0;\n> +\tforeach(lc, path)\n> +\t{\n> +\t\tdatum_array[length++] = Int32GetDatum((int) lfirst_int(lc));\n\nThe \"(int)\" in front of lfirst_int() seems redundant?\n\n\nI think it'd be good to have some minimal test for this. E.g. 
checking that\nthere's multiple contexts below cache memory context or such.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 12 Oct 2023 09:23:09 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Parent/child context relation in pg_get_backend_memory_contexts()"
},
{
"msg_contents": "Greetings,\n\n* Melih Mutlu ([email protected]) wrote:\n> Melih Mutlu <[email protected]>, 16 Haz 2023 Cum, 17:03 tarihinde şunu\n> yazdı:\n> \n> > With this change, here's a query to find how much space used by each\n> > context including its children:\n> >\n> > > WITH RECURSIVE cte AS (\n> > > SELECT id, total_bytes, id as root, name as root_name\n> > > FROM memory_contexts\n> > > UNION ALL\n> > > SELECT r.id, r.total_bytes, cte.root, cte.root_name\n> > > FROM memory_contexts r\n> > > INNER JOIN cte ON r.parent_id = cte.id\n> > > ),\n> > > memory_contexts AS (\n> > > SELECT * FROM pg_backend_memory_contexts\n> > > )\n> > > SELECT root as id, root_name as name, sum(total_bytes)\n> > > FROM cte\n> > > GROUP BY root, root_name\n> > > ORDER BY sum DESC;\n> \n> Given that the above query to get total bytes including all children is\n> still a complex one, I decided to add an additional info in\n> pg_backend_memory_contexts.\n> The new \"path\" field displays an integer array that consists of ids of all\n> parents for the current context. This way it's easier to tell whether a\n> context is a child of another context, and we don't need to use recursive\n> queries to get this info.\n\nNice, this does seem quite useful.\n\n> Here how pg_backend_memory_contexts would look like with this patch:\n> \n> postgres=# SELECT name, id, parent, parent_id, path\n> FROM pg_backend_memory_contexts\n> ORDER BY total_bytes DESC LIMIT 10;\n> name | id | parent | parent_id | path\n> -------------------------+-----+------------------+-----------+--------------\n> CacheMemoryContext | 27 | TopMemoryContext | 0 | {0}\n> Timezones | 124 | TopMemoryContext | 0 | {0}\n> TopMemoryContext | 0 | | |\n> MessageContext | 8 | TopMemoryContext | 0 | {0}\n> WAL record construction | 118 | TopMemoryContext | 0 | {0}\n> ExecutorState | 18 | PortalContext | 17 | {0,16,17}\n> TupleSort main | 19 | ExecutorState | 18 | {0,16,17,18}\n> TransactionAbortContext | 14 | TopMemoryContext | 0 | {0}\n> smgr relation table | 10 | TopMemoryContext | 0 | {0}\n> GUC hash table | 123 | GUCMemoryContext | 122 | {0,122}\n> (10 rows)\n> \n> An example query to calculate the total_bytes including its children for a\n> context (say CacheMemoryContext) would look like this:\n> \n> WITH contexts AS (\n> SELECT * FROM pg_backend_memory_contexts\n> )\n> SELECT sum(total_bytes)\n> FROM contexts\n> WHERE ARRAY[(SELECT id FROM contexts WHERE name = 'CacheMemoryContext')] <@\n> path;\n\nI wonder if we should perhaps just include\n\"total_bytes_including_children\" as another column? Certainly seems\nlike a very useful thing that folks would like to see. We could do that\neither with C, or even something as simple as changing the view to do\nsomething like:\n\nWITH contexts AS MATERIALIZED (\n SELECT * FROM pg_get_backend_memory_contexts()\n)\nSELECT\n *,\n coalesce\n (\n (\n (SELECT sum(total_bytes) FROM contexts WHERE ARRAY[a.id] <@ path)\n + total_bytes\n ),\n total_bytes\n ) AS total_bytes_including_children\nFROM contexts a;\n\n> We still need to use cte since ids are not persisted and might change in\n> each run of pg_backend_memory_contexts. Materializing the result can\n> prevent any inconsistencies due to id change. Also it can be even good for\n> performance reasons as well.\n\nI don't think we really want this to be materialized, do we? 
Where this\nis particularly interesting is when it's being dumped to the log ( ...\nthough I wish we could do better than that and hope we do in the future)\nwhile something is ongoing in a given backend and if we do that a few\ntimes we are able to see what's changing in terms of allocations,\nwhereas if we materialized it (when? transaction start? first time\nit's asked for?) then we'd only ever get the one view from whenever the\nsnapshot was taken.\n\n> Any thoughts?\n\nGenerally +1 from me for working on improving this.\n\nThanks!\n\nStephen",
"msg_date": "Wed, 18 Oct 2023 15:53:30 -0400",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Parent/child context relation in pg_get_backend_memory_contexts()"
},
{
"msg_contents": "Hi,\n\nOn 2023-10-18 15:53:30 -0400, Stephen Frost wrote:\n> > Here how pg_backend_memory_contexts would look like with this patch:\n> > \n> > postgres=# SELECT name, id, parent, parent_id, path\n> > FROM pg_backend_memory_contexts\n> > ORDER BY total_bytes DESC LIMIT 10;\n> > name | id | parent | parent_id | path\n> > -------------------------+-----+------------------+-----------+--------------\n> > CacheMemoryContext | 27 | TopMemoryContext | 0 | {0}\n> > Timezones | 124 | TopMemoryContext | 0 | {0}\n> > TopMemoryContext | 0 | | |\n> > MessageContext | 8 | TopMemoryContext | 0 | {0}\n> > WAL record construction | 118 | TopMemoryContext | 0 | {0}\n> > ExecutorState | 18 | PortalContext | 17 | {0,16,17}\n> > TupleSort main | 19 | ExecutorState | 18 | {0,16,17,18}\n> > TransactionAbortContext | 14 | TopMemoryContext | 0 | {0}\n> > smgr relation table | 10 | TopMemoryContext | 0 | {0}\n> > GUC hash table | 123 | GUCMemoryContext | 122 | {0,122}\n> > (10 rows)\n> > \n> > An example query to calculate the total_bytes including its children for a\n> > context (say CacheMemoryContext) would look like this:\n> > \n> > WITH contexts AS (\n> > SELECT * FROM pg_backend_memory_contexts\n> > )\n> > SELECT sum(total_bytes)\n> > FROM contexts\n> > WHERE ARRAY[(SELECT id FROM contexts WHERE name = 'CacheMemoryContext')] <@\n> > path;\n> \n> I wonder if we should perhaps just include\n> \"total_bytes_including_children\" as another column? Certainly seems\n> like a very useful thing that folks would like to see.\n\nThe \"issue\" is where to stop - should we also add that for some of the other\ncolumns? They are a bit less important, but not that much.\n\n\n> > We still need to use cte since ids are not persisted and might change in\n> > each run of pg_backend_memory_contexts. Materializing the result can\n> > prevent any inconsistencies due to id change. Also it can be even good for\n> > performance reasons as well.\n> \n> I don't think we really want this to be materialized, do we? Where this\n> is particularly interesting is when it's being dumped to the log ( ...\n> though I wish we could do better than that and hope we do in the future)\n> while something is ongoing in a given backend and if we do that a few\n> times we are able to see what's changing in terms of allocations,\n> whereas if we materialized it (when? transaction start? first time\n> it's asked for?) then we'd only ever get the one view from whenever the\n> snapshot was taken.\n\nI think the comment was just about the need to use a CTE, because self-joining\nwith divergent versions of pg_backend_memory_contexts would not always work\nout well.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 18 Oct 2023 18:17:53 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Parent/child context relation in pg_get_backend_memory_contexts()"
},
{
"msg_contents": "Greetings,\n\n* Andres Freund ([email protected]) wrote:\n> On 2023-10-18 15:53:30 -0400, Stephen Frost wrote:\n> > > Here how pg_backend_memory_contexts would look like with this patch:\n> > > \n> > > postgres=# SELECT name, id, parent, parent_id, path\n> > > FROM pg_backend_memory_contexts\n> > > ORDER BY total_bytes DESC LIMIT 10;\n> > > name | id | parent | parent_id | path\n> > > -------------------------+-----+------------------+-----------+--------------\n> > > CacheMemoryContext | 27 | TopMemoryContext | 0 | {0}\n> > > Timezones | 124 | TopMemoryContext | 0 | {0}\n> > > TopMemoryContext | 0 | | |\n> > > MessageContext | 8 | TopMemoryContext | 0 | {0}\n> > > WAL record construction | 118 | TopMemoryContext | 0 | {0}\n> > > ExecutorState | 18 | PortalContext | 17 | {0,16,17}\n> > > TupleSort main | 19 | ExecutorState | 18 | {0,16,17,18}\n> > > TransactionAbortContext | 14 | TopMemoryContext | 0 | {0}\n> > > smgr relation table | 10 | TopMemoryContext | 0 | {0}\n> > > GUC hash table | 123 | GUCMemoryContext | 122 | {0,122}\n> > > (10 rows)\n> > > \n> > > An example query to calculate the total_bytes including its children for a\n> > > context (say CacheMemoryContext) would look like this:\n> > > \n> > > WITH contexts AS (\n> > > SELECT * FROM pg_backend_memory_contexts\n> > > )\n> > > SELECT sum(total_bytes)\n> > > FROM contexts\n> > > WHERE ARRAY[(SELECT id FROM contexts WHERE name = 'CacheMemoryContext')] <@\n> > > path;\n> > \n> > I wonder if we should perhaps just include\n> > \"total_bytes_including_children\" as another column? Certainly seems\n> > like a very useful thing that folks would like to see.\n> \n> The \"issue\" is where to stop - should we also add that for some of the other\n> columns? They are a bit less important, but not that much.\n\nI'm not sure the others really make sense to aggregate in this way as\nfree space isn't able to be moved between contexts. That said, if\nsomeone wants it then I'm not against that. I'm actively in support of\nadding an aggregated total though as that, at least to me, seems to be\nvery useful to have.\n\nThanks,\n\nStephen",
"msg_date": "Thu, 19 Oct 2023 18:01:23 -0400",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Parent/child context relation in pg_get_backend_memory_contexts()"
},
{
"msg_contents": "Hi,\n\nThanks for reviewing.\nAttached the updated patch v3.\n\nAndres Freund <[email protected]>, 12 Eki 2023 Per, 19:23 tarihinde şunu\nyazdı:\n\n> > Here how pg_backend_memory_contexts would look like with this patch:\n> >\n> > postgres=# SELECT name, id, parent, parent_id, path\n> > FROM pg_backend_memory_contexts\n> > ORDER BY total_bytes DESC LIMIT 10;\n> > name | id | parent | parent_id | path\n> >\n> -------------------------+-----+------------------+-----------+--------------\n> > CacheMemoryContext | 27 | TopMemoryContext | 0 | {0}\n> > Timezones | 124 | TopMemoryContext | 0 | {0}\n> > TopMemoryContext | 0 | | |\n> > MessageContext | 8 | TopMemoryContext | 0 | {0}\n> > WAL record construction | 118 | TopMemoryContext | 0 | {0}\n> > ExecutorState | 18 | PortalContext | 17 | {0,16,17}\n> > TupleSort main | 19 | ExecutorState | 18 |\n> {0,16,17,18}\n> > TransactionAbortContext | 14 | TopMemoryContext | 0 | {0}\n> > smgr relation table | 10 | TopMemoryContext | 0 | {0}\n> > GUC hash table | 123 | GUCMemoryContext | 122 | {0,122}\n> > (10 rows)\n>\n> Would we still need the parent_id column?\n>\n\nI guess not. Assuming the path column is sorted from TopMemoryContext to\nthe parent one level above, parent_id can be found using the path column if\nneeded.\nRemoved parent_id.\n\n\n> > +\n> > + <row>\n> > + <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n> > + <structfield>context_id</structfield> <type>int4</type>\n> > + </para>\n> > + <para>\n> > + Current context id\n> > + </para></entry>\n> > + </row>\n>\n> I think the docs here need to warn that the id is ephemeral and will likely\n> differ in the next invocation.\n>\n\nDone.\n\n> + <row>\n> > + <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n> > + <structfield>parent_id</structfield> <type>int4</type>\n> > + </para>\n> > + <para>\n> > + Parent context id\n> > + </para></entry>\n> > + </row>\n> > +\n> > + <row>\n> > + <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n> > + <structfield>path</structfield> <type>int4</type>\n> > + </para>\n> > + <para>\n> > + Path to reach the current context from TopMemoryContext\n> > + </para></entry>\n> > + </row>\n>\n> Perhaps we should include some hint here how it could be used?\n>\n\nI added more explanation but not sure if that is what you asked for. Do you\nwant a hint that is related to a more specific use case?\n\n> + length = list_length(path);\n> > + datum_array = (Datum *) palloc(length * sizeof(Datum));\n> > + length = 0;\n> > + foreach(lc, path)\n> > + {\n> > + datum_array[length++] = Int32GetDatum((int)\n> lfirst_int(lc));\n>\n> The \"(int)\" in front of lfirst_int() seems redundant?\n>\n\nRemoved.\n\nI think it'd be good to have some minimal test for this. E.g. checking that\n> there's multiple contexts below cache memory context or such.\n>\n\nAdded new tests in sysview.sql.\n\n\nStephen Frost <[email protected]>, 18 Eki 2023 Çar, 22:53 tarihinde şunu\nyazdı:\n\n> I wonder if we should perhaps just include\n> \"total_bytes_including_children\" as another column? Certainly seems\n> like a very useful thing that folks would like to see. 
We could do that\n> either with C, or even something as simple as changing the view to do\n> something like:\n>\n> WITH contexts AS MATERIALIZED (\n> SELECT * FROM pg_get_backend_memory_contexts()\n> )\n> SELECT\n> *,\n> coalesce\n> (\n> (\n> (SELECT sum(total_bytes) FROM contexts WHERE ARRAY[a.id] <@ path)\n> + total_bytes\n> ),\n> total_bytes\n> ) AS total_bytes_including_children\n> FROM contexts a;\n>\n\nI added a \"total_bytes_including_children\" column as you suggested. Did\nthat with C since it seemed faster than doing it by changing the view.\n\n-- Calculating total_bytes_including_children by modifying the view\npostgres=# select * from pg_backend_memory_contexts ;\nTime: 30.462 ms\n\n-- Calculating total_bytes_including_children with C\npostgres=# select * from pg_backend_memory_contexts ;\nTime: 1.511 ms\n\n\nThanks,\n-- \nMelih Mutlu\nMicrosoft",
"msg_date": "Mon, 23 Oct 2023 15:02:27 +0300",
"msg_from": "Melih Mutlu <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Parent/child context relation in pg_get_backend_memory_contexts()"
},
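A minimal illustrative sketch (not part of the thread) of the lookup described in the message above: under the v3 patch just posted, where "path" lists only the parents of a context, root-first, and excludes the context itself, the removed parent_id can still be read off as the last element of the array. Column names are those of that patch version and may differ in later revisions.

SELECT name,
       path[cardinality(path)] AS parent_id  -- NULL for TopMemoryContext, whose path is NULL
FROM pg_backend_memory_contexts;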
{
"msg_contents": "Thanks for working on this improvement!\n\nOn 2023-10-23 21:02, Melih Mutlu wrote:\n> Hi,\n> \n> Thanks for reviewing.\n> Attached the updated patch v3.\n\nI reviewed v3 patch and here are some minor comments:\n\n> + <row>\n> + <entry role=\"catalog_table_entry\"><para \n> role=\"column_definition\">\n> + <structfield>path</structfield> <type>int4</type>\n\nShould 'int4' be 'int4[]'?\nOther system catalog columns such as pg_groups.grolist distinguish \nwhther the type is a array or not.\n\n> + Path to reach the current context from TopMemoryContext. \n> Context ids in\n> + this list represents all parents of the current context. This \n> can be\n> + used to build the parent and child relation.\n\nIt seems last \".\" is not necessary considering other explanations for \neach field end without it.\n\n+ const char *parent, int level, int \n*context_id,\n+ List *path, Size \n*total_bytes_inc_chidlren)\n\n'chidlren' -> 'children'\n\n\n+ elog(LOG, \"pg_get_backend_memory_contexts called\");\n\nIs this message necessary?\n\n\nThere was warning when applying the patch:\n\n % git apply \n../patch/pg_backend_memory_context_refine/v3-0001-Adding-id-parent_id-into-pg_backend_memory_contex.patch\n \n../patch/pg_backend_memory_context_refine/v3-0001-Adding-id-parent_id-into-pg_backend_memory_contex.patch:282: \ntrailing whitespace.\n select count(*) > 0\n \n../patch/pg_backend_memory_context_refine/v3-0001-Adding-id-parent_id-into-pg_backend_memory_contex.patch:283: \ntrailing whitespace.\n from contexts\n warning: 2 lines add whitespace errors.\n\n-- \nRegards,\n\n--\nAtsushi Torikoshi\nNTT DATA Group Corporation\n\n\n",
"msg_date": "Mon, 04 Dec 2023 13:43:18 +0900",
"msg_from": "torikoshia <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Parent/child context relation in pg_get_backend_memory_contexts()"
},
{
"msg_contents": "Hi,\n\nThanks for reviewing. Please find the updated patch attached.\n\ntorikoshia <[email protected]>, 4 Ara 2023 Pzt, 07:43 tarihinde\nşunu yazdı:\n\n> I reviewed v3 patch and here are some minor comments:\n>\n> > + <row>\n> > + <entry role=\"catalog_table_entry\"><para\n> > role=\"column_definition\">\n> > + <structfield>path</structfield> <type>int4</type>\n>\n> Should 'int4' be 'int4[]'?\n> Other system catalog columns such as pg_groups.grolist distinguish\n> whther the type is a array or not.\n>\n\nRight! Done.\n\n\n>\n> > + Path to reach the current context from TopMemoryContext.\n> > Context ids in\n> > + this list represents all parents of the current context. This\n> > can be\n> > + used to build the parent and child relation.\n>\n> It seems last \".\" is not necessary considering other explanations for\n> each field end without it.\n>\n\nDone.\n\n\n> + const char *parent, int level, int\n> *context_id,\n> + List *path, Size\n> *total_bytes_inc_chidlren)\n>\n> 'chidlren' -> 'children'\n>\n\nDone.\n\n\n> + elog(LOG, \"pg_get_backend_memory_contexts called\");\n>\n> Is this message necessary?\n>\n\nI guess I added this line for debugging and then forgot to remove. Now\nremoved.\n\nThere was warning when applying the patch:\n>\n> % git apply\n>\n> ../patch/pg_backend_memory_context_refine/v3-0001-Adding-id-parent_id-into-pg_backend_memory_contex.patch\n>\n> ../patch/pg_backend_memory_context_refine/v3-0001-Adding-id-parent_id-into-pg_backend_memory_contex.patch:282:\n>\n> trailing whitespace.\n> select count(*) > 0\n>\n> ../patch/pg_backend_memory_context_refine/v3-0001-Adding-id-parent_id-into-pg_backend_memory_contex.patch:283:\n>\n> trailing whitespace.\n> from contexts\n> warning: 2 lines add whitespace errors.\n>\n\nFixed.\n\nThanks,\n-- \nMelih Mutlu\nMicrosoft",
"msg_date": "Wed, 3 Jan 2024 14:40:01 +0300",
"msg_from": "Melih Mutlu <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Parent/child context relation in pg_get_backend_memory_contexts()"
},
{
"msg_contents": "On 2024-01-03 20:40, Melih Mutlu wrote:\n> Hi,\n> \n> Thanks for reviewing. Please find the updated patch attached.\n> \n> torikoshia <[email protected]>, 4 Ara 2023 Pzt, 07:43\n> tarihinde şunu yazdı:\n> \n>> I reviewed v3 patch and here are some minor comments:\n>> \n>>> + <row>\n>>> + <entry role=\"catalog_table_entry\"><para\n>>> role=\"column_definition\">\n>>> + <structfield>path</structfield> <type>int4</type>\n>> \n>> Should 'int4' be 'int4[]'?\n>> Other system catalog columns such as pg_groups.grolist distinguish\n>> whther the type is a array or not.\n> \n> Right! Done.\n> \n>>> + Path to reach the current context from TopMemoryContext.\n>>> Context ids in\n>>> + this list represents all parents of the current context.\n>> This\n>>> can be\n>>> + used to build the parent and child relation.\n>> \n>> It seems last \".\" is not necessary considering other explanations\n>> for\n>> each field end without it.\n> \n> Done.\n> \n>> + const char *parent, int level, int\n>> *context_id,\n>> + List *path, Size\n>> *total_bytes_inc_chidlren)\n>> \n>> 'chidlren' -> 'children'\n> \n> Done.\n> \n>> + elog(LOG, \"pg_get_backend_memory_contexts called\");\n>> \n>> Is this message necessary?\n> \n> I guess I added this line for debugging and then forgot to remove. Now\n> removed.\n> \n>> There was warning when applying the patch:\n>> \n>> % git apply\n>> \n> ../patch/pg_backend_memory_context_refine/v3-0001-Adding-id-parent_id-into-pg_backend_memory_contex.patch\n>> \n>> \n> ../patch/pg_backend_memory_context_refine/v3-0001-Adding-id-parent_id-into-pg_backend_memory_contex.patch:282:\n>> \n>> trailing whitespace.\n>> select count(*) > 0\n>> \n>> \n> ../patch/pg_backend_memory_context_refine/v3-0001-Adding-id-parent_id-into-pg_backend_memory_contex.patch:283:\n>> \n>> trailing whitespace.\n>> from contexts\n>> warning: 2 lines add whitespace errors.\n> \n> Fixed.\n> \n> Thanks,--\n> \n> Melih Mutlu\n> Microsoft\n\nThanks for updating the patch.\n\n> + <row>\n> + <entry role=\"catalog_table_entry\"><para \n> role=\"column_definition\">\n> + <structfield>context_id</structfield> <type>int4</type>\n> + </para>\n> + <para>\n> + Current context id. Note that the context id is a temporary id \n> and may\n> + change in each invocation\n> + </para></entry>\n> + </row>\n> +\n> + <row>\n> + <entry role=\"catalog_table_entry\"><para \n> role=\"column_definition\">\n> + <structfield>path</structfield> <type>int4[]</type>\n> + </para>\n> + <para>\n> + Path to reach the current context from TopMemoryContext. \n> Context ids in\n> + this list represents all parents of the current context. This \n> can be\n> + used to build the parent and child relation\n> + </para></entry>\n> + </row>\n> +\n> + <row>\n> + <entry role=\"catalog_table_entry\"><para \n> role=\"column_definition\">\n> + <structfield>total_bytes_including_children</structfield> \n> <type>int8</type>\n> + </para>\n> + <para>\n> + Total bytes allocated for this memory context including its \n> children\n> + </para></entry>\n> + </row>\n\nThese columns are currently added to the bottom of the table, but it may \nbe better to put semantically similar items close together and change \nthe insertion position with reference to other system views. 
For \nexample,\n\n- In pg_group and pg_user, 'id' is placed on the line following 'name', \nso 'context_id' be placed on the line following 'name'\n- 'path' is similar with 'parent' and 'level' in that these are \ninformation about the location of the context, 'path' be placed to next \nto them.\n\nIf we do this, orders of columns in the system view should be the same, \nI think.\n\n\n> + ListCell *lc;\n> +\n> + length = list_length(path);\n> + datum_array = (Datum *) palloc(length * sizeof(Datum));\n> + length = 0;\n> + foreach(lc, path)\n> + {\n> + datum_array[length++] = Int32GetDatum(lfirst_int(lc));\n> + }\n\n14dd0f27d have introduced new macro foreach_int.\nIt seems to be able to make the code a bit simpler and the commit log \nsays this macro is primarily intended for use in new code. For example:\n\n| int id;\n|\n| length = list_length(path);\n| datum_array = (Datum *) palloc(length * sizeof(Datum));\n| length = 0;\n| foreach_int(id, path)\n| {\n| datum_array[length++] = Int32GetDatum(id);\n| }\n\n\n\n-- \nRegards,\n\n--\nAtsushi Torikoshi\nNTT DATA Group Corporation\n\n\n",
"msg_date": "Wed, 10 Jan 2024 15:37:01 +0900",
"msg_from": "torikoshia <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Parent/child context relation in pg_get_backend_memory_contexts()"
},
{
"msg_contents": "Hi,\n\nThanks for reviewing.\n\ntorikoshia <[email protected]>, 10 Oca 2024 Çar, 09:37 tarihinde\nşunu yazdı:\n\n> > + <row>\n> > + <entry role=\"catalog_table_entry\"><para\n> > role=\"column_definition\">\n> > + <structfield>context_id</structfield> <type>int4</type>\n> > + </para>\n> > + <para>\n> > + Current context id. Note that the context id is a temporary id\n> > and may\n> > + change in each invocation\n> > + </para></entry>\n> > + </row>\n> > +\n> > + <row>\n> > + <entry role=\"catalog_table_entry\"><para\n> > role=\"column_definition\">\n> > + <structfield>path</structfield> <type>int4[]</type>\n> > + </para>\n> > + <para>\n> > + Path to reach the current context from TopMemoryContext.\n> > Context ids in\n> > + this list represents all parents of the current context. This\n> > can be\n> > + used to build the parent and child relation\n> > + </para></entry>\n> > + </row>\n> > +\n> > + <row>\n> > + <entry role=\"catalog_table_entry\"><para\n> > role=\"column_definition\">\n> > + <structfield>total_bytes_including_children</structfield>\n> > <type>int8</type>\n> > + </para>\n> > + <para>\n> > + Total bytes allocated for this memory context including its\n> > children\n> > + </para></entry>\n> > + </row>\n>\n> These columns are currently added to the bottom of the table, but it may\n> be better to put semantically similar items close together and change\n> the insertion position with reference to other system views. For\n> example,\n>\n> - In pg_group and pg_user, 'id' is placed on the line following 'name',\n> so 'context_id' be placed on the line following 'name'\n> - 'path' is similar with 'parent' and 'level' in that these are\n> information about the location of the context, 'path' be placed to next\n> to them.\n>\n> If we do this, orders of columns in the system view should be the same,\n> I think.\n>\n\nI've done what you suggested. Also moved \"total_bytes_including_children\"\nright after \"total_bytes\".\n\n\n14dd0f27d have introduced new macro foreach_int.\n> It seems to be able to make the code a bit simpler and the commit log\n> says this macro is primarily intended for use in new code. For example:\n>\n\nMakes sense. Done.\n\nThanks,\n-- \nMelih Mutlu\nMicrosoft",
"msg_date": "Tue, 16 Jan 2024 12:41:22 +0300",
"msg_from": "Melih Mutlu <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Parent/child context relation in pg_get_backend_memory_contexts()"
},
{
"msg_contents": "On 2024-01-16 18:41, Melih Mutlu wrote:\n> Hi,\n> \n> Thanks for reviewing.\n> \n> torikoshia <[email protected]>, 10 Oca 2024 Çar, 09:37\n> tarihinde şunu yazdı:\n> \n>>> + <row>\n>>> + <entry role=\"catalog_table_entry\"><para\n>>> role=\"column_definition\">\n>>> + <structfield>context_id</structfield> <type>int4</type>\n>>> + </para>\n>>> + <para>\n>>> + Current context id. Note that the context id is a\n>> temporary id\n>>> and may\n>>> + change in each invocation\n>>> + </para></entry>\n>>> + </row>\n>>> +\n>>> + <row>\n>>> + <entry role=\"catalog_table_entry\"><para\n>>> role=\"column_definition\">\n>>> + <structfield>path</structfield> <type>int4[]</type>\n>>> + </para>\n>>> + <para>\n>>> + Path to reach the current context from TopMemoryContext.\n>>> Context ids in\n>>> + this list represents all parents of the current context.\n>> This\n>>> can be\n>>> + used to build the parent and child relation\n>>> + </para></entry>\n>>> + </row>\n>>> +\n>>> + <row>\n>>> + <entry role=\"catalog_table_entry\"><para\n>>> role=\"column_definition\">\n>>> + <structfield>total_bytes_including_children</structfield>\n>>> <type>int8</type>\n>>> + </para>\n>>> + <para>\n>>> + Total bytes allocated for this memory context including\n>> its\n>>> children\n>>> + </para></entry>\n>>> + </row>\n>> \n>> These columns are currently added to the bottom of the table, but it\n>> may\n>> be better to put semantically similar items close together and\n>> change\n>> the insertion position with reference to other system views. For\n>> example,\n>> \n>> - In pg_group and pg_user, 'id' is placed on the line following\n>> 'name',\n>> so 'context_id' be placed on the line following 'name'\n>> - 'path' is similar with 'parent' and 'level' in that these are\n>> information about the location of the context, 'path' be placed to\n>> next\n>> to them.\n>> \n>> If we do this, orders of columns in the system view should be the\n>> same,\n>> I think.\n> \n> I've done what you suggested. Also moved\n> \"total_bytes_including_children\" right after \"total_bytes\".\n> \n>> 14dd0f27d have introduced new macro foreach_int.\n>> It seems to be able to make the code a bit simpler and the commit\n>> log\n>> says this macro is primarily intended for use in new code. For\n>> example:\n> \n> Makes sense. Done.\n\nThanks for updating the patch!\n\n> + Current context id. Note that the context id is a temporary id \n> and may\n> + change in each invocation\n> + </para></entry>\n> + </row>\n\nIt clearly states that the context id is temporary, but I am a little \nconcerned about users who write queries that refer to this view multiple \ntimes without using CTE.\n\nIf you agree, how about adding some description like below you mentioned \nbefore?\n\n> We still need to use cte since ids are not persisted and might change \n> in\n> each run of pg_backend_memory_contexts. Materializing the result can\n> prevent any inconsistencies due to id change. Also it can be even good \n> for\n> performance reasons as well.\n\nWe already have additional description below the table which explains \neach column of the system view. 
For example pg_locks:\nhttps://www.postgresql.org/docs/devel/view-pg-locks.html\n\n\nAlso giving an example query something like this might be useful.\n\n -- show all the parent context names of ExecutorState\n with contexts as (\n select * from pg_backend_memory_contexts\n )\n select name from contexts where array[context_id] <@ (select path from \ncontexts where name = 'ExecutorState');\n\n\n-- \nRegards,\n\n--\nAtsushi Torikoshi\nNTT DATA Group Corporation\n\n\n",
"msg_date": "Fri, 19 Jan 2024 17:41:45 +0900",
"msg_from": "torikoshia <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Parent/child context relation in pg_get_backend_memory_contexts()"
},
{
"msg_contents": "On Fri, Jan 19, 2024 at 05:41:45PM +0900, torikoshia wrote:\n> We already have additional description below the table which explains each\n> column of the system view. For example pg_locks:\n> https://www.postgresql.org/docs/devel/view-pg-locks.html\n\nI was reading the patch, and using int[] as a representation of the\npath of context IDs up to the top-most parent looks a bit strange to\nme, with the relationship between each parent -> child being\npreserved, visibly, based on the order of the elements in this array\nmade of temporary IDs compiled on-the-fly during the function\nexecution. Am I the only one finding that a bit strange? Could it be\nbetter to use a different data type for this path and perhaps switch\nto the names of the contexts involved?\n\nIt is possible to retrieve this information some WITH RECURSIVE as\nwell, as mentioned upthread. Perhaps we could consider documenting\nthese tricks?\n--\nMichael",
"msg_date": "Wed, 14 Feb 2024 16:23:38 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Parent/child context relation in pg_get_backend_memory_contexts()"
},
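An illustrative sketch only, assuming the v4 columns discussed here (a transient context_id plus a root-first "path" of parent IDs): the temporary IDs can be mapped back to context names within a single materialized snapshot, which gives a readable ancestry without changing the column's data type.

WITH c AS MATERIALIZED (
    SELECT * FROM pg_backend_memory_contexts
)
SELECT child.name,
       array_agg(parent.name ORDER BY u.ord) AS parent_names  -- ancestor names, root-first
FROM c AS child
CROSS JOIN LATERAL unnest(child.path) WITH ORDINALITY AS u(id, ord)
JOIN c AS parent ON parent.context_id = u.id
GROUP BY child.context_id, child.name;
-- TopMemoryContext has a NULL path, so it simply drops out of this listing.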
{
"msg_contents": "Hi,\n\nMichael Paquier <[email protected]>, 14 Şub 2024 Çar, 10:23 tarihinde\nşunu yazdı:\n\n> On Fri, Jan 19, 2024 at 05:41:45PM +0900, torikoshia wrote:\n> > We already have additional description below the table which explains\n> each\n> > column of the system view. For example pg_locks:\n> > https://www.postgresql.org/docs/devel/view-pg-locks.html\n>\n> I was reading the patch, and using int[] as a representation of the\n> path of context IDs up to the top-most parent looks a bit strange to\n> me, with the relationship between each parent -> child being\n> preserved, visibly, based on the order of the elements in this array\n> made of temporary IDs compiled on-the-fly during the function\n> execution. Am I the only one finding that a bit strange? Could it be\n> better to use a different data type for this path and perhaps switch\n> to the names of the contexts involved?\n>\n\nDo you find having the path column strange all together? Or only using\ntemporary IDs to generate that column? The reason why I avoid using context\nnames is because there can be multiple contexts with the same name. This\nmakes it difficult to figure out which context, among those with that\nparticular name, is actually included in the path. I couldn't find any\nother information that is unique to each context.\n\nThanks,\n-- \nMelih Mutlu\nMicrosoft\n\nHi,Michael Paquier <[email protected]>, 14 Şub 2024 Çar, 10:23 tarihinde şunu yazdı:On Fri, Jan 19, 2024 at 05:41:45PM +0900, torikoshia wrote:\n> We already have additional description below the table which explains each\n> column of the system view. For example pg_locks:\n> https://www.postgresql.org/docs/devel/view-pg-locks.html\n\nI was reading the patch, and using int[] as a representation of the\npath of context IDs up to the top-most parent looks a bit strange to\nme, with the relationship between each parent -> child being\npreserved, visibly, based on the order of the elements in this array\nmade of temporary IDs compiled on-the-fly during the function\nexecution. Am I the only one finding that a bit strange? Could it be\nbetter to use a different data type for this path and perhaps switch\nto the names of the contexts involved?Do you find having the path column strange all together? Or only using temporary IDs to generate that column? The reason why I avoid using context names is because there can be multiple contexts with the same name. This makes it difficult to figure out which context, among those with that particular name, is actually included in the path. I couldn't find any other information that is unique to each context.Thanks,-- Melih MutluMicrosoft",
"msg_date": "Wed, 3 Apr 2024 16:20:39 +0300",
"msg_from": "Melih Mutlu <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Parent/child context relation in pg_get_backend_memory_contexts()"
},
{
"msg_contents": "Hi,\n\nOn 2024-02-14 16:23:38 +0900, Michael Paquier wrote:\n> It is possible to retrieve this information some WITH RECURSIVE as well, as\n> mentioned upthread. Perhaps we could consider documenting these tricks?\n\nI think it's sufficiently hard that it's not a reasonable way to do this.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 3 Apr 2024 11:56:34 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Parent/child context relation in pg_get_backend_memory_contexts()"
},
{
"msg_contents": "On Wed, Apr 03, 2024 at 04:20:39PM +0300, Melih Mutlu wrote:\n> Michael Paquier <[email protected]>, 14 Şub 2024 Çar, 10:23 tarihinde\n> şunu yazdı:\n>> I was reading the patch, and using int[] as a representation of the\n>> path of context IDs up to the top-most parent looks a bit strange to\n>> me, with the relationship between each parent -> child being\n>> preserved, visibly, based on the order of the elements in this array\n>> made of temporary IDs compiled on-the-fly during the function\n>> execution. Am I the only one finding that a bit strange? Could it be\n>> better to use a different data type for this path and perhaps switch\n>> to the names of the contexts involved?\n> \n> Do you find having the path column strange all together? Or only using\n> temporary IDs to generate that column? The reason why I avoid using context\n> names is because there can be multiple contexts with the same name. This\n> makes it difficult to figure out which context, among those with that\n> particular name, is actually included in the path. I couldn't find any\n> other information that is unique to each context.\n\nI've been re-reading the patch again to remember what this is about,\nand I'm OK with having this \"path\" column in the catalog. However,\nI'm somewhat confused by the choice of having a temporary number that\nshows up in the catalog representation, because this may not be\nconstant across multiple calls so this still requires a follow-up\ntemporary ID <-> name mapping in any SQL querying this catalog. A\nsecond thing is that array does not show the hierarchy of the path;\nthe patch relies on the order of the elements in the output array\ninstead.\n--\nMichael",
"msg_date": "Thu, 4 Apr 2024 08:34:04 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Parent/child context relation in pg_get_backend_memory_contexts()"
},
{
"msg_contents": "On Thu, 4 Apr 2024 at 12:34, Michael Paquier <[email protected]> wrote:\n> I've been re-reading the patch again to remember what this is about,\n> and I'm OK with having this \"path\" column in the catalog. However,\n> I'm somewhat confused by the choice of having a temporary number that\n> shows up in the catalog representation, because this may not be\n> constant across multiple calls so this still requires a follow-up\n> temporary ID <-> name mapping in any SQL querying this catalog. A\n> second thing is that array does not show the hierarchy of the path;\n> the patch relies on the order of the elements in the output array\n> instead.\n\nMy view on this is that there are a couple of things with the patch\nwhich could be considered separately:\n\n1. Should we have a context_id in the view?\n2. Should we also have an array of all parents?\n\nMy view is that we really need #1 as there's currently no reliable way\nto determine a context's parent as the names are not unique. I do\nsee that Melih has mentioned this is temporary in:\n\n+ <para>\n+ Current context id. Note that the context id is a temporary id and may\n+ change in each invocation\n+ </para></entry>\n\nFor #2, I'm a bit less sure about this. I know Andres would like to\nsee this array added, but equally WITH RECURSIVE would work. Does the\narray of parents completely eliminate the need for recursive queries?\nI think the array works for anything that requires all parents or some\nfixed (would be) recursive level, but there might be some other\ncondition to stop recursion other than the recursion level that\nsomeone needs to do. What I'm trying to get at is; do we need to\ndocument the WITH RECURSIVE stuff anyway? and if we do, is it still\nworth having the parents array?\n\nDavid\n\n\n",
"msg_date": "Thu, 4 Apr 2024 14:44:27 +1300",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Parent/child context relation in pg_get_backend_memory_contexts()"
},
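A hedged sketch of the WITH RECURSIVE alternative mentioned above, assuming the v4 columns (context_id and a root-first "path" of parent IDs) so that the recursion can key on a real ID rather than a non-unique name; materializing the snapshot first keeps the transient IDs consistent across the recursion.

WITH RECURSIVE c AS MATERIALIZED (
    SELECT * FROM pg_backend_memory_contexts
), tree AS (
    -- seed: the subtree root of interest
    SELECT context_id, name, total_bytes
    FROM c
    WHERE name = 'CacheMemoryContext'
  UNION ALL
    -- descend: a row is a child when the last element of its path is the parent's id
    SELECT c2.context_id, c2.name, c2.total_bytes
    FROM c AS c2
    JOIN tree AS t ON c2.path[cardinality(c2.path)] = t.context_id
)
SELECT sum(total_bytes) AS total_bytes_including_children
FROM tree;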
{
"msg_contents": "Hi hackers,\n\nDavid Rowley <[email protected]>, 4 Nis 2024 Per, 04:44 tarihinde şunu\nyazdı:\n\n> My view on this is that there are a couple of things with the patch\n> which could be considered separately:\n>\n> 1. Should we have a context_id in the view?\n> 2. Should we also have an array of all parents?\n>\n\nI discussed the above questions with David off-list, and decided to make\nsome changes in the patch as a result. I'd appreciate any input.\n\nFirst of all, I agree that previous versions of the patch could make things\nseem a bit more complicated than they should be, by having three new\ncolumns (context_id, path, total_bytes_including_children). Especially when\nwe could already get the same result with several different ways (e.g.\nwriting a recursive query, using the patch column, and the\ntotal_bytes_including_children column by itself help to know total used\nbytes by a contexts and all of its children)\n\nI believe that we really need to have context IDs as it's the only unique\nway to identify a context. And I'm for having a parents array as it makes\nthings easier and demonstrates the parent/child relation explicitly. One\nidea to simplify this patch a bit is adding the ID of a context into its\nown path and removing the context_id column. As those IDs are temporary, I\ndon't think they would be useful other than using them to find some kind of\nrelation by looking into path values of some other rows. So maybe not\nhaving a separate column for IDs but only having the path can help with the\nconfusion which this patch might introduce. The last element of the patch\nwould simply be the ID of that particular context.\n\nOne nice thing which David pointed out about paths is that level\ninformation can become useful in those arrays. Level can represent the\nposition of a context in the path arrays of its child contexts. For\nexample; TopMemoryContext will always be the first element in all paths as\nit's the top-most parent, it's also the only context with level 0. So this\nrelation between levels and indexes in path arrays can be somewhat useful\nto link this array with the overall hierarchy of memory contexts.\n\nAn example query to get total used bytes including children by using level\ninfo would look like:\n\nWITH contexts AS (\nSELECT * FROM pg_backend_memory_contexts\n)\nSELECT sum(total_bytes)\nFROM contexts\nWHERE path[( SELECT level+1 FROM contexts WHERE name =\n'CacheMemoryContext')] =\n(SELECT path[level+1] FROM contexts WHERE name = 'CacheMemoryContext');\n\nLastly, I created a separate patch to add total_bytes_including_children\ncolumns. I understand that sum of total_bytes of a context and its children\nwill likely be one of the frequently used cases, not everyone may agree\nwith having an _including_children column for only total_bytes. I'm open to\nhear more opinions on this.\n\nBest Regards,\n-- \nMelih Mutlu\nMicrosoft",
"msg_date": "Tue, 2 Jul 2024 16:08:22 +0300",
"msg_from": "Melih Mutlu <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Parent/child context relation in pg_get_backend_memory_contexts()"
},
{
"msg_contents": "On Wed, 3 Jul 2024 at 01:08, Melih Mutlu <[email protected]> wrote:\n> An example query to get total used bytes including children by using level info would look like:\n>\n> WITH contexts AS (\n> SELECT * FROM pg_backend_memory_contexts\n> )\n> SELECT sum(total_bytes)\n> FROM contexts\n> WHERE path[( SELECT level+1 FROM contexts WHERE name = 'CacheMemoryContext')] =\n> (SELECT path[level+1] FROM contexts WHERE name = 'CacheMemoryContext');\n\nI've been wondering about the order of the \"path\" column. When we\ntalked, I had in mind that the TopMemoryContext should always be at\nthe end of the array rather than the start, but I see you've got it\nthe other way around.\n\nWith the order you have it, that query could be expressed as:\n\nWITH c AS (SELECT * FROM pg_backend_memory_contexts)\nSELECT c1.*\nFROM c c1, c c2\nWHERE c2.name = 'CacheMemoryContext'\nAND c1.path[c2.level + 1] = c2.path[c2.level + 1];\n\nWhereas, with the way I had in mind, it would need to look like:\n\nWITH c AS (SELECT * FROM pg_backend_memory_contexts)\nSELECT c1.*\nFROM c c1, c c2\nWHERE c2.name = 'CacheMemoryContext'\nAND c1.path[c1.level - c2.level + 1] = c2.path[1];\n\nI kind of think the latter makes more sense, as if for some reason you\nknow the level and context ID of the context you're looking up, you\ncan do:\n\nSELECT * FROM pg_backend_memory_contexts WHERE path[<known level> +\nlevel + 1] = <known context id>;\n\nI also imagined \"path\" would be called \"context_ids\". I thought that\nmight better indicate what the column is without consulting the\ndocumentation.\n\nI think it might also be easier to document what context_ids is:\n\n\"Array of transient identifiers to describe the memory context\nhierarchy. The first array element contains the ID for the current\ncontext and each subsequent ID is the parent of the previous element.\nNote that these IDs are unstable between multiple invocations of the\nview. See the example query below for advice on how to use this\ncolumn effectively.\"\n\nThere are also a couple of white space issues with the patch. If\nyou're in a branch with the patch applied directly onto master, then\n\"git diff master --check\" should show where they are.\n\nIf you do reverse the order of the \"path\" column, then I think\nmodifying convert_path_to_datum() is the best way to do that. If you\nwere to do it in the calling function, changing \"path =\nlist_delete_last(path);\" to use list_delete_first() is less efficient.\n\nDavid\n\n\n",
"msg_date": "Fri, 5 Jul 2024 20:06:27 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Parent/child context relation in pg_get_backend_memory_contexts()"
},
{
"msg_contents": "Hi David,\n\nDavid Rowley <[email protected]>, 5 Tem 2024 Cum, 11:06 tarihinde şunu\nyazdı:\n\n> With the order you have it, that query could be expressed as:\n>\n> WITH c AS (SELECT * FROM pg_backend_memory_contexts)\n> SELECT c1.*\n> FROM c c1, c c2\n> WHERE c2.name = 'CacheMemoryContext'\n> AND c1.path[c2.level + 1] = c2.path[c2.level + 1];\n>\n> Whereas, with the way I had in mind, it would need to look like:\n>\n> WITH c AS (SELECT * FROM pg_backend_memory_contexts)\n> SELECT c1.*\n> FROM c c1, c c2\n> WHERE c2.name = 'CacheMemoryContext'\n> AND c1.path[c1.level - c2.level + 1] = c2.path[1];\n>\n> I kind of think the latter makes more sense, as if for some reason you\n> know the level and context ID of the context you're looking up, you\n> can do:\n>\n\nI liked the fact that a context would always be at the same position,\nlevel+1, in all context_ids arrays of its children. But what you described\nmakes sense as well, so I changed the order.\n\nI also imagined \"path\" would be called \"context_ids\". I thought that\n> might better indicate what the column is without consulting the\n> documentation.\n>\n\nDone.\n\n\n\n> I think it might also be easier to document what context_ids is:\n>\n> \"Array of transient identifiers to describe the memory context\n> hierarchy. The first array element contains the ID for the current\n> context and each subsequent ID is the parent of the previous element.\n> Note that these IDs are unstable between multiple invocations of the\n> view. See the example query below for advice on how to use this\n> column effectively.\"\n>\n\nDone.\n\n\n\n> There are also a couple of white space issues with the patch. If\n> you're in a branch with the patch applied directly onto master, then\n> \"git diff master --check\" should show where they are.\n>\n\nDone.\n\nThanks,\n-- \nMelih Mutlu\nMicrosoft",
"msg_date": "Tue, 9 Jul 2024 13:56:32 +0300",
"msg_from": "Melih Mutlu <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Parent/child context relation in pg_get_backend_memory_contexts()"
},
{
"msg_contents": "On Wed, Apr 3, 2024 at 7:34 PM Michael Paquier <[email protected]> wrote:\n> I've been re-reading the patch again to remember what this is about,\n> and I'm OK with having this \"path\" column in the catalog. However,\n> I'm somewhat confused by the choice of having a temporary number that\n> shows up in the catalog representation, because this may not be\n> constant across multiple calls so this still requires a follow-up\n> temporary ID <-> name mapping in any SQL querying this catalog. A\n> second thing is that array does not show the hierarchy of the path;\n> the patch relies on the order of the elements in the output array\n> instead.\n\nThis complaint doesn't seem reasonable to me. The point of the path,\nas I understand it, is to allow the caller to make sense of the\nresults of a single call, which is otherwise impossible. Stability\nacross multiple calls would be much more difficult, particularly\nbecause we have no unique, long-lived identifier for memory contexts,\nexcept perhaps the address of the context. Exposing the pointer\naddress of the memory contexts to clients would be an extremely bad\nidea from a security point of view -- and it also seems unnecessary,\nbecause the point of this function is to get a clear snapshot of\nmemory usage at a particular moment, not to track changes in usage by\nthe same contexts over time. You could still build the latter on top\nof this if you wanted to do that, but I don't think most people would,\nand I don't think the transient path IDs make it any more difficult.\n\nI feel like Melih has chosen a simple and natural representation and I\nwould have done pretty much the same thing. And AFAICS there's no\nreasonable alternative design.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 10 Jul 2024 17:16:23 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Parent/child context relation in pg_get_backend_memory_contexts()"
},
{
"msg_contents": "On Fri, Jul 5, 2024 at 4:06 AM David Rowley <[email protected]> wrote:\n> I've been wondering about the order of the \"path\" column. When we\n> talked, I had in mind that the TopMemoryContext should always be at\n> the end of the array rather than the start, but I see you've got it\n> the other way around.\n\nFWIW, I would have done what Melih did. A path normally is listed in\nroot-to-leaf order, not leaf-to-root.\n\n> I also imagined \"path\" would be called \"context_ids\". I thought that\n> might better indicate what the column is without consulting the\n> documentation.\n\nThe only problem I see with this is that it doesn't make it clear that\nwe're being shown parentage or ancestry, rather than values for the\ncurrent node. I suspect path is fairly understandable, but if you\ndon't like that, what about parent_ids?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 10 Jul 2024 17:19:00 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Parent/child context relation in pg_get_backend_memory_contexts()"
},
{
"msg_contents": "On Thu, 11 Jul 2024 at 09:19, Robert Haas <[email protected]> wrote:\n> FWIW, I would have done what Melih did. A path normally is listed in\n> root-to-leaf order, not leaf-to-root.\n\nMelih and I talked about this in a meeting yesterday evening. I think\nI'm about on the fence about having the IDs in leaf-to-root or\nroot-to-leaf. My main concern about which order is chosen is around\nhow easy it is to write hierarchical queries. I think I'd feel better\nabout having it in root-to-leaf order if \"level\" was 1-based rather\nthan 0-based. That would allow querying CacheMemoryContext and all of\nits descendants with:\n\nWITH c AS (SELECT * FROM pg_backend_memory_contexts)\nSELECT c1.*\nFROM c c1, c c2\nWHERE c2.name = 'CacheMemoryContext'\nAND c1.path[c2.level] = c2.path[c2.level];\n\n(With the v6 patch, you have to do level + 1.)\n\nIdeally, no CTE would be needed here, but unfortunately, there's no\nway to know the CacheMemoryContext's ID beforehand. We could make the\nID more stable if we did a breadth-first traversal of the context.\ni.e., assign IDs in level order. This would stop TopMemoryContext's\n2nd child getting a different ID if its first child became a parent\nitself.\n\nThis allows easier ad-hoc queries, for example:\n\nselect * from pg_backend_memory_contexts;\n-- Observe that CacheMemoryContext has ID=22 and level=2. Get the\ntotal of that and all of its descendants.\nselect sum(total_bytes) from pg_backend_memory_contexts where path[2] = 22;\n-- or just it and direct children\nselect sum(total_bytes) from pg_backend_memory_contexts where path[2]\n= 22 and level <= 3;\n\nWithout the breadth-first assignment of context IDs, the sum() would\ncause another context to be created for aggregation and the 2nd query\nwouldn't work. Of course, it doesn't make it 100% guaranteed to be\nstable, but it's way less annoying to write ad-hoc queries. It's more\nstable the closer to the root you're interested in, which seems (to\nme) the most likely area of interest for most people.\n\n> On Fri, Jul 5, 2024 at 4:06 AM David Rowley <[email protected]> wrote:\n> > I also imagined \"path\" would be called \"context_ids\". I thought that\n> > might better indicate what the column is without consulting the\n> > documentation.\n>\n> The only problem I see with this is that it doesn't make it clear that\n> we're being shown parentage or ancestry, rather than values for the\n> current node. I suspect path is fairly understandable, but if you\n> don't like that, what about parent_ids?\n\nI did a bit more work in the attached. I changed \"level\" to be\n1-based and because it's the column before \"path\" I find it much more\nintuitive (assuming no prior knowledge) that the \"path\" column relates\nto \"level\" somehow as it's easy to see that \"level\" is the same number\nas the number of elements in \"path\". With 0-based levels, that's not\nthe case.\n\nPlease see the attached patch. I didn't update any documentation.\n\nDavid",
"msg_date": "Thu, 11 Jul 2024 13:16:30 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Parent/child context relation in pg_get_backend_memory_contexts()"
},
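A small sanity check that follows from the 1-based numbering described above (this query is illustrative and not from the thread; it assumes the root-to-leaf ordering and the column names used in the patch): every row's path should contain exactly level elements, with the row's own transient ID in the last position.

    -- expected to return true under the 1-based, root-to-leaf scheme
    SELECT bool_and(cardinality(path) = level) AS path_length_matches_level
    FROM pg_backend_memory_contexts;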
{
"msg_contents": "On Wed, Jul 10, 2024 at 9:16 PM David Rowley <[email protected]> wrote:\n> Melih and I talked about this in a meeting yesterday evening. I think\n> I'm about on the fence about having the IDs in leaf-to-root or\n> root-to-leaf. My main concern about which order is chosen is around\n> how easy it is to write hierarchical queries. I think I'd feel better\n> about having it in root-to-leaf order if \"level\" was 1-based rather\n> than 0-based. That would allow querying CacheMemoryContext and all of\n> its descendants with:\n>\n> WITH c AS (SELECT * FROM pg_backend_memory_contexts)\n> SELECT c1.*\n> FROM c c1, c c2\n> WHERE c2.name = 'CacheMemoryContext'\n> AND c1.path[c2.level] = c2.path[c2.level];\n\nI don't object to making it 1-based.\n\n> Ideally, no CTE would be needed here, but unfortunately, there's no\n> way to know the CacheMemoryContext's ID beforehand. We could make the\n> ID more stable if we did a breadth-first traversal of the context.\n> i.e., assign IDs in level order. This would stop TopMemoryContext's\n> 2nd child getting a different ID if its first child became a parent\n> itself.\n\nDo we ever have contexts with the same name at the same level? Could\nwe just make the path an array of strings, so that you could then say\nsomething like this...\n\nSELECT * FROM pg_backend_memory_contexts where path[2] = 'CacheMemoryContext'\n\n...and get all the things with that in the path?\n\n> select * from pg_backend_memory_contexts;\n> -- Observe that CacheMemoryContext has ID=22 and level=2. Get the\n> total of that and all of its descendants.\n> select sum(total_bytes) from pg_backend_memory_contexts where path[2] = 22;\n> -- or just it and direct children\n> select sum(total_bytes) from pg_backend_memory_contexts where path[2]\n> = 22 and level <= 3;\n\nI'm doubtful about this because nothing prevents the set of memory\ncontexts from changing between one query and the next. We should try\nto make it so that it's easy to get what you want in a single query.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 11 Jul 2024 16:09:31 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Parent/child context relation in pg_get_backend_memory_contexts()"
},
{
"msg_contents": "Hi David,\n\nThanks for v8 patch. Please see attached v9.\n\nDavid Rowley <[email protected]>, 11 Tem 2024 Per, 04:16 tarihinde şunu\nyazdı:\n\n> I did a bit more work in the attached. I changed \"level\" to be\n> 1-based and because it's the column before \"path\" I find it much more\n> intuitive (assuming no prior knowledge) that the \"path\" column relates\n> to \"level\" somehow as it's easy to see that \"level\" is the same number\n> as the number of elements in \"path\". With 0-based levels, that's not\n> the case.\n>\n> Please see the attached patch. I didn't update any documentation.\n\n\nI updated documentation for path and level columns and also fixed the tests\nas level starts from 1.\n\n+ while (queue != NIL)\n> + {\n> + List *nextQueue = NIL;\n> + ListCell *lc;\n> +\n> + foreach(lc, queue)\n> + {\n\n\nI don't think we need this outer while loop. Appending to the end of a\nqueue naturally results in top-to-bottom order anyway, keeping two lists,\n\"queue\" and \"nextQueue\", might not be necessary. I believe that it's safe\nto append to a list while iterating over that list in a foreach loop. v9\nremoves nextQueue and appends directly into queue.\n\nThanks,\n-- \nMelih Mutlu\nMicrosoft",
"msg_date": "Sat, 13 Jul 2024 01:11:52 +0300",
"msg_from": "Melih Mutlu <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Parent/child context relation in pg_get_backend_memory_contexts()"
},
{
"msg_contents": "Hi Robert,\n\nRobert Haas <[email protected]>, 11 Tem 2024 Per, 23:09 tarihinde şunu\nyazdı:\n\n> > Ideally, no CTE would be needed here, but unfortunately, there's no\n> > way to know the CacheMemoryContext's ID beforehand. We could make the\n> > ID more stable if we did a breadth-first traversal of the context.\n> > i.e., assign IDs in level order. This would stop TopMemoryContext's\n> > 2nd child getting a different ID if its first child became a parent\n> > itself.\n>\n> Do we ever have contexts with the same name at the same level? Could\n> we just make the path an array of strings, so that you could then say\n> something like this...\n>\n> SELECT * FROM pg_backend_memory_contexts where path[2] =\n> 'CacheMemoryContext'\n>\n> ...and get all the things with that in the path?\n>\n\nI just ran the below to see if we have any context with the same level and\nname.\n\npostgres=# select level, name, count(*) from pg_backend_memory_contexts\ngroup by level, name having count(*)>1;\n level | name | count\n-------+-------------+-------\n 3 | index info | 90\n 5 | ExprContext | 5\n\nSeems like it's a possible case. But those contexts might not be the most\ninteresting ones. I guess the contexts that most users would be interested\nin will likely be unique on their levels and with their name. So we might\nnot be concerned with the contexts, like those two from the above result,\nand chose using names instead of transient IDs. But I think that we can't\nguarantee name-based path column would be completely reliable in all cases.\n\n\n> > select * from pg_backend_memory_contexts;\n> > -- Observe that CacheMemoryContext has ID=22 and level=2. Get the\n> > total of that and all of its descendants.\n> > select sum(total_bytes) from pg_backend_memory_contexts where path[2] =\n> 22;\n> > -- or just it and direct children\n> > select sum(total_bytes) from pg_backend_memory_contexts where path[2]\n> > = 22 and level <= 3;\n>\n> I'm doubtful about this because nothing prevents the set of memory\n> contexts from changing between one query and the next. We should try\n> to make it so that it's easy to get what you want in a single query.\n>\n\nCorrect. Nothing will not prevent contexts from changing between each\nexecution. With David's change to use breadth-first traversal, contexts at\nupper levels are less likely to change. Knowing this may be useful in some\ncases. IMHO there is no harm in making those IDs slightly more \"stable\",\neven though there is no guarantee. My concern is whether we should document\nthis situation. If we should, how do we explain that the IDs are transient\nand can change but also may not change if they're closer to\nTopMemoryContext? If it's better not to mention this in the documentation,\ndoes it really matter since most users would not be aware?\n\n\nI've been also thinking if we should still have the parent column, as\nfinding out the parent is also possible via looking into the path. What do\nyou think?\n\nThanks,\n-- \nMelih Mutlu\nMicrosoft\n\nHi Robert,Robert Haas <[email protected]>, 11 Tem 2024 Per, 23:09 tarihinde şunu yazdı:\n> Ideally, no CTE would be needed here, but unfortunately, there's no\n> way to know the CacheMemoryContext's ID beforehand. We could make the\n> ID more stable if we did a breadth-first traversal of the context.\n> i.e., assign IDs in level order. This would stop TopMemoryContext's\n> 2nd child getting a different ID if its first child became a parent\n> itself.\n\nDo we ever have contexts with the same name at the same level? 
Could\nwe just make the path an array of strings, so that you could then say\nsomething like this...\n\nSELECT * FROM pg_backend_memory_contexts where path[2] = 'CacheMemoryContext'\n\n...and get all the things with that in the path?I just ran the below to see if we have any context with the same level and name.postgres=# select level, name, count(*) from pg_backend_memory_contexts group by level, name having count(*)>1; level | name | count-------+-------------+------- 3 | index info | 90 5 | ExprContext | 5Seems like it's a possible case. But those contexts might not be the most interesting ones. I guess the contexts that most users would be interested in will likely be unique on their levels and with their name. So we might not be concerned with the contexts, like those two from the above result, and chose using names instead of transient IDs. But I think that we can't guarantee name-based path column would be completely reliable in all cases. \n> select * from pg_backend_memory_contexts;\n> -- Observe that CacheMemoryContext has ID=22 and level=2. Get the\n> total of that and all of its descendants.\n> select sum(total_bytes) from pg_backend_memory_contexts where path[2] = 22;\n> -- or just it and direct children\n> select sum(total_bytes) from pg_backend_memory_contexts where path[2]\n> = 22 and level <= 3;\n\nI'm doubtful about this because nothing prevents the set of memory\ncontexts from changing between one query and the next. We should try\nto make it so that it's easy to get what you want in a single query.Correct. Nothing will not prevent contexts from changing between each execution. With David's change to use breadth-first traversal, contexts at upper levels are less likely to change. Knowing this may be useful in some cases. IMHO there is no harm in making those IDs slightly more \"stable\", even though there is no guarantee. My concern is whether we should document this situation. If we should, how do we explain that the IDs are transient and can change but also may not change if they're closer to TopMemoryContext? If it's better not to mention this in the documentation, does it really matter since most users would not be aware? I've been also thinking if we should still have the parent column, as finding out the parent is also possible via looking into the path. What do you think?Thanks,-- Melih MutluMicrosoft",
"msg_date": "Sat, 13 Jul 2024 01:32:55 +0300",
"msg_from": "Melih Mutlu <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Parent/child context relation in pg_get_backend_memory_contexts()"
},
{
"msg_contents": "On Fri, 12 Jul 2024 at 08:09, Robert Haas <[email protected]> wrote:\n> Do we ever have contexts with the same name at the same level? Could\n> we just make the path an array of strings, so that you could then say\n> something like this...\n>\n> SELECT * FROM pg_backend_memory_contexts where path[2] = 'CacheMemoryContext'\n>\n> ...and get all the things with that in the path?\n\nUnfortunately, this wouldn't align with the goals of the patch. Going\nback to Melih's opening paragraph in the initial email, he mentions\nthat there's currently no *reliable* way to determine the parent/child\nrelationship in this view.\n\nThere's been a few different approaches to making this reliable. The\nfirst patch had \"parent_id\" and \"id\" columns. That required a WITH\nRECURSIVE query. To get away from having to write such complex\nqueries, the \"path\" column was born. I'm now trying to massage that\ninto something that's as easy to use and intuitive as possible. I've\ngotta admit, I don't love the patch. That's not Melih's fault,\nhowever. It's just the nature of what we're working with.\n\n> I'm doubtful about this because nothing prevents the set of memory\n> contexts from changing between one query and the next. We should try\n> to make it so that it's easy to get what you want in a single query.\n\nI don't think it's ideal that the context's ID changes in ad-hoc\nqueries, but I don't know how to make that foolproof. The\nbreadth-first ID assignment helps, but it could certainly still catch\npeople out when the memory context of interest is nested at some deep\nlevel. The breadth-first certainly assignment helped me with the\nCacheMemoryContext that I'd been testing with. It allowed me to run my\naggregate query to sum the bytes without the context created in\nnodeAgg.c causing the IDs to change.\n\nI'm open to better ideas on how to make this work, but it must meet\nthe spec of it being a reliable way to determine the context\nrelationship. If names were unique at each level having those instead\nof IDs might be nice, but as Melih demonstrated, they're not. I think\neven if Melih's query didn't return results, it would be a bad move to\nmake it work the way you mentioned if we have nothing to enforce the\nuniqueness of names.\n\nDavid\n\n\n",
"msg_date": "Mon, 15 Jul 2024 22:43:56 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Parent/child context relation in pg_get_backend_memory_contexts()"
},
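For context, a minimal sketch (not taken from the thread) of the name-based recursive query that the path column is meant to replace. It assumes only the pre-existing name, parent and total_bytes columns, and it inherits exactly the unreliability described above, because it joins child rows to parents by context name:

    WITH RECURSIVE tree AS (
        SELECT name, ident, parent, total_bytes
        FROM pg_backend_memory_contexts
        WHERE name = 'CacheMemoryContext'
        UNION ALL
        SELECT c.name, c.ident, c.parent, c.total_bytes
        FROM pg_backend_memory_contexts c
        JOIN tree t ON c.parent = t.name
    )
    SELECT sum(total_bytes) FROM tree;

If any context name is duplicated, this query can pull in unrelated subtrees, which is the problem the path column avoids.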
{
"msg_contents": "On Sat, 13 Jul 2024 at 10:33, Melih Mutlu <[email protected]> wrote:\n> I've been also thinking if we should still have the parent column, as finding out the parent is also possible via looking into the path. What do you think?\n\nI think we should probably consider removing it. Let's think about\nthat later. I don't think its existence is blocking us from\nprogressing here.\n\nDavid\n\n\n",
"msg_date": "Mon, 15 Jul 2024 22:46:15 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Parent/child context relation in pg_get_backend_memory_contexts()"
},
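To illustrate why the parent column becomes redundant (an illustrative sketch, not from the thread, assuming the root-to-leaf ordering and 1-based level discussed above): each row's parent can be recovered from path with a self-join over a single snapshot of the view, with TopMemoryContext simply getting a NULL parent.

    WITH c AS (SELECT * FROM pg_backend_memory_contexts)
    SELECT child.name, parent.name AS parent_name
    FROM c AS child
    LEFT JOIN c AS parent
           ON parent.level = child.level - 1
          AND parent.path[parent.level] = child.path[child.level - 1];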
{
"msg_contents": "On Sat, 13 Jul 2024 at 10:12, Melih Mutlu <[email protected]> wrote:\n> I updated documentation for path and level columns and also fixed the tests as level starts from 1.\n\nThanks for updating.\n\n+ The <structfield>path</structfield> column can be useful to build\n+ parent/child relation between memory contexts. For example, the following\n+ query calculates the total number of bytes used by a memory context and its\n+ child contexts:\n\n\"a memory context\" doesn't quite sound specific enough. Let's say what\nthe query is doing exactly.\n\n+<programlisting>\n+WITH memory_contexts AS (\n+ SELECT *\n+ FROM pg_backend_memory_contexts\n+)\n+SELECT SUM(total_bytes)\n+FROM memory_contexts\n+WHERE ARRAY[(SELECT path[array_length(path, 1)] FROM memory_contexts\nWHERE name = 'CacheMemoryContext')] <@ path;\n\nI don't think that example query is the most simple example. Isn't it\nbetter to use the most simple form possible to express that?\n\nI think it would be nice to give an example of using \"level\" as an\nindex into \"path\"\n\nWITH c AS (SELECT * FROM pg_backend_memory_contexts)\nSELECT sum(c1.total_bytes)\nFROM c c1, c c2\nWHERE c2.name = 'CacheMemoryContext'\nAND c1.path[c2.level] = c2.path[c2.level];\n\nI think the regression test query could be done using the same method.\n\n>> + while (queue != NIL)\n>> + {\n>> + List *nextQueue = NIL;\n>> + ListCell *lc;\n>> +\n>> + foreach(lc, queue)\n>> + {\n>\n>\n> I don't think we need this outer while loop. Appending to the end of a queue naturally results in top-to-bottom order anyway, keeping two lists, \"queue\" and \"nextQueue\", might not be necessary. I believe that it's safe to append to a list while iterating over that list in a foreach loop. v9 removes nextQueue and appends directly into queue.\n\nThe foreach() macro seems to be ok with that. I am too. The following\ncomment will need to be updated:\n\n+ /*\n+ * Queue up all the child contexts of this level for the next\n+ * iteration of the outer loop.\n+ */\n\nThat outer loop is gone.\n\nAlso, this was due to my hasty writing of the patch. I named the\nfunction get_memory_context_name_and_indent. I meant to write \"ident\".\nIf we did get rid of the \"parent\" column, I'd not see any need to keep\nthat function. The logic could just be put in\nPutMemoryContextsStatsTupleStore(). I just did it that way to avoid\nthe repeat.\n\nDavid\n\n\n",
"msg_date": "Mon, 15 Jul 2024 23:38:32 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Parent/child context relation in pg_get_backend_memory_contexts()"
},
{
"msg_contents": "On Fri, Jul 12, 2024 at 6:33 PM Melih Mutlu <[email protected]> wrote:\n> I just ran the below to see if we have any context with the same level and name.\n>\n> postgres=# select level, name, count(*) from pg_backend_memory_contexts group by level, name having count(*)>1;\n> level | name | count\n> -------+-------------+-------\n> 3 | index info | 90\n> 5 | ExprContext | 5\n>\n> Seems like it's a possible case. But those contexts might not be the most interesting ones. I guess the contexts that most users would be interested in will likely be unique on their levels and with their name. So we might not be concerned with the contexts, like those two from the above result, and chose using names instead of transient IDs. But I think that we can't guarantee name-based path column would be completely reliable in all cases.\n\nMaybe we should just fix it so that doesn't happen. I think it's only\nan issue if the whole path is the same, and I'm not sure whether\nthat's the case here. But notice that we have this:\n\n const char *name; /* context name (just\nfor debugging) */\n const char *ident; /* context ID if any\n(just for debugging) */\n\nI think this arrangement dates to\n442accc3fe0cd556de40d9d6c776449e82254763, and the discussion thread\nbegins like this:\n\n\"It does look like a 182KiB has been spent for some SQL, however\nthere's no clear way to tell which SQL is to blame.\"\n\nSo the point of that commit was to find better ways of distinguishing\nbetween similar contexts. It sounds like perhaps we're not all the way\nthere yet, but if we agree on the goal, maybe we can figure out how to\nreach it.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 15 Jul 2024 13:56:05 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Parent/child context relation in pg_get_backend_memory_contexts()"
},
{
"msg_contents": "On Mon, Jul 15, 2024 at 6:44 AM David Rowley <[email protected]> wrote:\n> Unfortunately, this wouldn't align with the goals of the patch. Going\n> back to Melih's opening paragraph in the initial email, he mentions\n> that there's currently no *reliable* way to determine the parent/child\n> relationship in this view.\n>\n> There's been a few different approaches to making this reliable. The\n> first patch had \"parent_id\" and \"id\" columns. That required a WITH\n> RECURSIVE query. To get away from having to write such complex\n> queries, the \"path\" column was born. I'm now trying to massage that\n> into something that's as easy to use and intuitive as possible. I've\n> gotta admit, I don't love the patch. That's not Melih's fault,\n> however. It's just the nature of what we're working with.\n\nI'm not against what you're trying to do here, but I feel like you\nmight be over-engineering it. I don't think there was anything really\nwrong with what Melih was doing, and I don't think there's anything\nreally wrong with converting the path to an array of strings, either.\nSure, it might not be perfect, but future patches could always remove\nthe name duplication. This is a debugging facility that will be used\nby a tiny minority of users, and if some non-uniqueness gets\nreintroduced in the future, it's not a critical defect and can just be\nfixed when it's noticed. That said, if you want to go with the integer\nIDs and want to spend more time massaging it, I also think that's\nfine. I simply don't believe it's the only way forward here. YMMV, but\nmy opinion is that none of these approaches have such critical flaws\nthat we need to get stressed about it.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 15 Jul 2024 14:19:08 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Parent/child context relation in pg_get_backend_memory_contexts()"
},
{
"msg_contents": "On Tue, 16 Jul 2024 at 06:19, Robert Haas <[email protected]> wrote:\n> I'm not against what you're trying to do here, but I feel like you\n> might be over-engineering it. I don't think there was anything really\n> wrong with what Melih was doing, and I don't think there's anything\n> really wrong with converting the path to an array of strings, either.\n> Sure, it might not be perfect, but future patches could always remove\n> the name duplication. This is a debugging facility that will be used\n> by a tiny minority of users, and if some non-uniqueness gets\n> reintroduced in the future, it's not a critical defect and can just be\n> fixed when it's noticed.\n\nI'm just not on board with the\nquery-returns-correct-results-most-of-the-time attitude and I'm\nsurprised you are. You can get that today if you like, just write a\nWITH RECURSIVE query joining \"name\" to \"parent\". If the consensus is\nthat's fine because it works most of the time, then I don't see any\nreason to invent a new way to get equally correct-most-of-the-time\nresults.\n\n> That said, if you want to go with the integer\n> IDs and want to spend more time massaging it, I also think that's\n> fine. I simply don't believe it's the only way forward here. YMMV, but\n> my opinion is that none of these approaches have such critical flaws\n> that we need to get stressed about it.\n\nIf there are other ways forward that match the goal of having a\nreliable way to determine the parent of a MemoryContext, then I'm\ninterested in hearing more. I know you've mentioned about having\nunique names, but I don't know how to do that. Do you have any ideas\non how we could enforce the uniqueness? I don't really like your idea\nof renaming contexts when we find duplicate names as bug fixes. The\nnature of our code wouldn't make it easy to determine as some reusable\ncode might create a context as a child of CurrentMemoryContext and\nmultiple callers might call that code within a different\nCurrentMemoryContext.\n\nOne problem is that, if you look at MemoryContextCreate(), we require\nthat the name is statically allocated. We don't have the flexibility\nto assign unique names when we find a conflict. If we were to come up\nwith a solution that assigned a unique name, then I'd call that\n\"over-engineered\" for the use case we need it for. I think if we did\nsomething like that, it would undo some of the work Tom did in\n442accc3f. Also, I think it was you that came up with the idea of\nMemoryContext reuse (9fa6f00b1)? Going by that commit message, it\nseems to be done for performance reasons. If MemoryContext.name was\ndynamic, there'd be more allocation work to do when reusing a context.\nThat might undo some of the performance gains seen in 9fa6f00b1. I\ndon't really want to go through the process of verifying there's no\nperformance regress for a patch that aims to make\npg_backend_memory_contexts more useful.\n\nDavid\n\n\n",
"msg_date": "Tue, 16 Jul 2024 12:21:46 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Parent/child context relation in pg_get_backend_memory_contexts()"
},
{
"msg_contents": "On Mon, Jul 15, 2024 at 8:22 PM David Rowley <[email protected]> wrote:\n> > That said, if you want to go with the integer\n> > IDs and want to spend more time massaging it, I also think that's\n> > fine. I simply don't believe it's the only way forward here. YMMV, but\n> > my opinion is that none of these approaches have such critical flaws\n> > that we need to get stressed about it.\n>\n> If there are other ways forward that match the goal of having a\n> reliable way to determine the parent of a MemoryContext, then I'm\n> interested in hearing more. I know you've mentioned about having\n> unique names, but I don't know how to do that. Do you have any ideas\n> on how we could enforce the uniqueness? I don't really like your idea\n> of renaming contexts when we find duplicate names as bug fixes. The\n> nature of our code wouldn't make it easy to determine as some reusable\n> code might create a context as a child of CurrentMemoryContext and\n> multiple callers might call that code within a different\n> CurrentMemoryContext.\n\nI thought the reason that we have both 'name' and 'ident' was so that\nthe names could be compile-time constants and the ident values could\nbe strings, with the idea that we would choose the strings to be\nsomething unique.\n\nBut I think I was wrong about that, because I see that for \"index\ninfo\" contexts we just use the relation name and to have it actually\nbe unique we'd have to use something like schema_name.relation_name.\nAnd even that wouldn't really work cleanly because the relation could\nbe renamed or moved to a different schema. Plus, adding string\nconstruction overhead here sounds unappealing.\n\nMaybe we'll find a clever solution someday, but I think for now you're\nright that integer IDs are the way to go.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 16 Jul 2024 11:59:48 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Parent/child context relation in pg_get_backend_memory_contexts()"
},
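The name/ident split Robert refers to is visible from SQL. An illustrative query (not from the thread) showing that the many "index info" contexts share a name while ident carries the index's relation name, which is usually enough to tell them apart:

    SELECT name, ident, total_bytes
    FROM pg_backend_memory_contexts
    WHERE name = 'index info'
    ORDER BY total_bytes DESC
    LIMIT 5;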
{
"msg_contents": "Hi David,\n\nDavid Rowley <[email protected]>, 15 Tem 2024 Pzt, 14:38 tarihinde şunu\nyazdı:\n\n> On Sat, 13 Jul 2024 at 10:12, Melih Mutlu <[email protected]> wrote:\n> > I updated documentation for path and level columns and also fixed the\n> tests as level starts from 1.\n>\n> Thanks for updating.\n>\n> + The <structfield>path</structfield> column can be useful to build\n> + parent/child relation between memory contexts. For example, the\n> following\n> + query calculates the total number of bytes used by a memory context\n> and its\n> + child contexts:\n>\n> \"a memory context\" doesn't quite sound specific enough. Let's say what\n> the query is doing exactly.\n>\n\nChanged \"a memory context\" with \"CacheMemoryContext\".\n\n\n> +<programlisting>\n> +WITH memory_contexts AS (\n> + SELECT *\n> + FROM pg_backend_memory_contexts\n> +)\n> +SELECT SUM(total_bytes)\n> +FROM memory_contexts\n> +WHERE ARRAY[(SELECT path[array_length(path, 1)] FROM memory_contexts\n> WHERE name = 'CacheMemoryContext')] <@ path;\n>\n> I don't think that example query is the most simple example. Isn't it\n> better to use the most simple form possible to express that?\n>\n> I think it would be nice to give an example of using \"level\" as an\n> index into \"path\"\n>\n> WITH c AS (SELECT * FROM pg_backend_memory_contexts)\n> SELECT sum(c1.total_bytes)\n> FROM c c1, c c2\n> WHERE c2.name = 'CacheMemoryContext'\n> AND c1.path[c2.level] = c2.path[c2.level];\n>\n\nI changed the queries in the documentation and regression test to the ones\nsimilar to the above query that you shared.\n\n\n+ /*\n> + * Queue up all the child contexts of this level for the next\n> + * iteration of the outer loop.\n> + */\n>\n> That outer loop is gone.\n>\n\nRemoved that part.\n\n\n\n> Also, this was due to my hasty writing of the patch. I named the\n> function get_memory_context_name_and_indent. I meant to write \"ident\".\n> If we did get rid of the \"parent\" column, I'd not see any need to keep\n> that function. The logic could just be put in\n> PutMemoryContextsStatsTupleStore(). I just did it that way to avoid\n> the repeat.\n>\n\nFixed the name. Also I needed to cast parameters when calling that function\nas below to get rid of some warnings.\n\n+ get_memory_context_name_and_ident(context,\n+\n(const char **)&name,\n+\n(const char **) &ident);\n\nThanks,\n-- \nMelih Mutlu\nMicrosoft",
"msg_date": "Tue, 23 Jul 2024 13:14:09 +0300",
"msg_from": "Melih Mutlu <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Parent/child context relation in pg_get_backend_memory_contexts()"
},
{
"msg_contents": "On Tue, 23 Jul 2024 at 22:14, Melih Mutlu <[email protected]> wrote:\n> Fixed the name. Also I needed to cast parameters when calling that function as below to get rid of some warnings.\n>\n> + get_memory_context_name_and_ident(context,\n> + (const char **)&name,\n> + (const char **) &ident);\n\nThanks for fixing all those.\n\nI've only had a quick look so far, but I think the patch is now in the\nright shape. Unless there's some objections to how things are being\ndone in v10, I plan to commit this in the next few days... modulo any\nminor adjustments.\n\nDavid\n\n\n",
"msg_date": "Wed, 24 Jul 2024 21:47:14 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Parent/child context relation in pg_get_backend_memory_contexts()"
},
{
"msg_contents": "On Wed, 24 Jul 2024 at 21:47, David Rowley <[email protected]> wrote:\n> I've only had a quick look so far, but I think the patch is now in the\n> right shape. Unless there's some objections to how things are being\n> done in v10, I plan to commit this in the next few days... modulo any\n> minor adjustments.\n\nI reviewed v10 today. I made some adjustments and pushed the result.\n\nDavid\n\n\n",
"msg_date": "Thu, 25 Jul 2024 15:05:44 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Parent/child context relation in pg_get_backend_memory_contexts()"
},
{
"msg_contents": "On Tue, 23 Jul 2024 at 22:14, Melih Mutlu <[email protected]> wrote:\n> Fixed the name. Also I needed to cast parameters when calling that function as below to get rid of some warnings.\n>\n> + get_memory_context_name_and_ident(context,\n> + (const char **)&name,\n> + (const char **) &ident);\n\nI ended up fixing that another way as the above seems to be casting\naway the const for those variables. Instead, I changed the signature\nof the function to:\n\nstatic void get_memory_context_name_and_ident(MemoryContext context,\nconst char **const name, const char **const ident);\n\nwhich I think takes into account for the call site variables being\ndefined as \"const char *\".\n\nDavid\n\n\n",
"msg_date": "Thu, 25 Jul 2024 15:08:47 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Parent/child context relation in pg_get_backend_memory_contexts()"
},
{
"msg_contents": "David Rowley <[email protected]> writes:\n> I ended up fixing that another way as the above seems to be casting\n> away the const for those variables. Instead, I changed the signature\n> of the function to:\n> static void get_memory_context_name_and_ident(MemoryContext context,\n> const char **const name, const char **const ident);\n> which I think takes into account for the call site variables being\n> defined as \"const char *\".\n\nI did not check the history to see quite what happened here,\nbut Coverity thinks the end result is rather confused,\nand I agree:\n\n*** CID 1615190: Null pointer dereferences (REVERSE_INULL)\n/srv/coverity/git/pgsql-git/postgresql/src/backend/utils/adt/mcxtfuncs.c: 58 in get_memory_context_name_and_ident()\n52 \t*ident = context->ident;\n53 \n54 \t/*\n55 \t * To be consistent with logging output, we label dynahash contexts with\n56 \t * just the hash table name as with MemoryContextStatsPrint().\n57 \t */\n>>> CID 1615190: Null pointer dereferences (REVERSE_INULL)\n>>> Null-checking \"ident\" suggests that it may be null, but it has already been dereferenced on all paths leading to the check.\n58 \tif (ident && strcmp(*name, \"dynahash\") == 0)\n59 \t{\n60 \t\t*name = *ident;\n61 \t\t*ident = NULL;\n62 \t}\n63 }\n\nIt is not clear to me exactly which of these pointers should be\npresumed to be possibly-null, but certainly testing ident after\nstoring through it is pretty pointless. Maybe what was intended\nwas\n\n- \tif (ident && strcmp(*name, \"dynahash\") == 0)\n+ \tif (*name && strcmp(*name, \"dynahash\") == 0)\n\n?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 28 Jul 2024 12:31:27 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Parent/child context relation in pg_get_backend_memory_contexts()"
},
{
"msg_contents": "On Mon, 29 Jul 2024 at 04:31, Tom Lane <[email protected]> wrote:\n> It is not clear to me exactly which of these pointers should be\n> presumed to be possibly-null, but certainly testing ident after\n> storing through it is pretty pointless. Maybe what was intended\n> was\n>\n> - if (ident && strcmp(*name, \"dynahash\") == 0)\n> + if (*name && strcmp(*name, \"dynahash\") == 0)\n\nIt should be *ident. I just missed adding the pointer dereference when\nmoving that code to a function.\n\nThanks for the report. I'll fix shortly.\n\nDavid\n\n\n",
"msg_date": "Mon, 29 Jul 2024 09:19:51 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Parent/child context relation in pg_get_backend_memory_contexts()"
}
] |
[
{
"msg_contents": "Hey everyone,\n\nI've discovered a serious bug that leads to a server crash upon promoting an instance that crashed previously and did\nrecovery in standby mode.\n\nThe bug is present in PostgreSQL versions 13 and 14 (and in earlier versions, though it doesn't manifest itself so\ncatastrophically).\nThe circumstances to trigger the bug are as follows:\n- postgresql is configured for hot_standby, archiving, and prepared transactions\n- prepare a transaction\n- crash postgresql\n- create standby.signal file\n- start postgresql, wait for recovery to finish\n- promote\n\nThe promotion will fail with a FATAL error, stating that \"requested WAL segment .* has already been removed\".\nThe FATAL error causes the startup process to exit, so postmaster shuts down again.\n\nHere's an exemplary log output, maybe this helps people to find this issue when they search for it online:\n\nLOG: consistent recovery state reached at 0/15D8AB0\nLOG: database system is ready to accept read only connections\nLOG: received promote request\nLOG: redo done at 0/15D89B8\nLOG: last completed transaction was at log time 2023-06-16 13:09:53.71118+02\nLOG: selected new timeline ID: 2\nLOG: archive recovery complete\nFATAL: requested WAL segment pg_wal/000000010000000000000001 has already been removed\nLOG: startup process (PID 1650358) exited with exit code 1\nLOG: terminating any other active server processes\nLOG: database system is shut down\n\n\nThe cause of this failure is an oversight (rather obvious in hindsight):\nThe renaming of the WAL file (that was last written to before the crash happened) to .partial is done *before* PostgreSQL\nmight have to read this very file to recover prepared transactions from it.\nThe relevant function calls here are durable_rename() and RecoverPreparedTransactions() in xlog.c .\n\nNote that it is important that the PREPARE entry is in the WAL file that PostgreSQL is writing to prior to the inital\ncrash.\nThis has happened repeatedly in production already with a customer that uses prepared transactions quite frequently.\nI assume that this has happened for others too, but the circumstances of the crash and the cause are very dubious, and\ntroubleshooting it is pretty difficult.\n\n\nThis behaviour has - apparently unintentionally - been fixed in PG 15 and upwards (see commit 811051c ), as part of a\ngeneral restructure and reorganization of this portion of xlog.c (see commit 6df1543 ).\n\nFurthermore, it seems this behaviour does not appear in PG 12 and older, due to another possible bug:\nIn PG 13 and newer, the XLogReaderState is reset in XLogBeginRead() before reading WAL in XlogReadTwoPhaseData() in\ntwophase.c .\nIn the older releases (PG <= 12), this reset is not done, so the requested LSN containing the prepared transaction can\n(by happy coincidence) be read from in-memory buffers, and PostgreSQL consequently manages to come up just fine (as the\nWAL has already been read into buffers prior to the .partial rename).\nIf the older releases also where to properly reset the XLogReaderState, they would also fail to find the LSN on disk, and\nhence PostgreSQL would crash again.\n\nI've attached patches for PG 14 and PG 13 that mimic the change in PG15 (commit 811051c ) and reorder the crucial events,\nplacing the recovery of prepared transactions *before* renaming the file.\nI've also attached recovery test scripts for PG >= 12 and PG <= 11 that can be used to verify that promote after recovery\nwith prepared transactions works.\n\nA note for myself in the future 
and whomever may find it useful:\nThe test can be copied to src/test/recovery/t/ and selectively run (after you've ./configure'd for TAP testing and\ncompiled everything) from within the src/test/recovery directory using something like:\n make check PROVE_TESTS='t/PG_geq_12_promote_prepare_xact.pl'\n\n\nMy humble opinion is that this fix should be backpatched to PG 14 and PG 13.\nIt's debatable whether the fix needs to be brought back to 12 and older also, as those do not exhibit this issue, but the\norder of renaming is still wrong.\nI'm not sure if there could be cases where the in-memory buffers of the walreader are too small to cover a whole WAL\nfile.\nThere could also be other issues from operations that require reading WAL that happen after the .partial rename, I\nhaven't checked in depth what else happens in the affected codepath.\nPlease let me know if you think this should also be fixed in PG 12 and earlier, so I can produce the patches for those\nversions as well.\n\n\nKind regards\nJulian",
"msg_date": "Fri, 16 Jun 2023 16:27:40 +0200",
"msg_from": "Julian Markwort <[email protected]>",
"msg_from_op": true,
"msg_subject": "[BUG] recovery of prepared transactions during promotion can fail"
},
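For anyone trying to reproduce the report, the prepared-transaction precondition can be created with plain SQL before inducing the crash. A minimal sketch, assuming max_prepared_transactions is set to a nonzero value and using an arbitrary table name and GID:

    BEGIN;
    CREATE TABLE twophase_demo (id int);
    PREPARE TRANSACTION 'demo_gid';
    -- the prepared transaction persists across a crash and is recovered during redo
    SELECT gid, prepared, owner FROM pg_prepared_xacts;

Crashing the server right after this (before any checkpoint) keeps the PREPARE record in the WAL segment currently being written, which is the situation the report describes.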
{
"msg_contents": "On Fri, Jun 16, 2023 at 04:27:40PM +0200, Julian Markwort wrote:\n> I've discovered a serious bug that leads to a server crash upon\n> promoting an instance that crashed previously and did recovery in\n> standby mode.\n\nReproduced here, for the versions mentioned.\n\n> The bug is present in PostgreSQL versions 13 and 14 (and in earlier\n> versions, though it doesn't manifest itself so catastrophically).\n> The circumstances to trigger the bug are as follows:\n> - postgresql is configured for hot_standby, archiving, and prepared transactions\n> - prepare a transaction\n> - crash postgresql\n> - create standby.signal file\n> - start postgresql, wait for recovery to finish\n> - promote\n\nhot_standby allows one to run queries on a standby running recovery,\nso it seems to me that it does not really matter. Enabling archiving\nis the critical piece. The nodes set in the TAP test 009_twophase.pl\ndon't use any kind of archiving. But once it is enabled on London the\nfirst promotion command of the test fails the same way as you report.\n # Setup london node\n my $node_london = get_new_node(\"london\");\n-$node_london->init(allows_streaming => 1);\n+# Archiving is used to force tests with .partial segment creations\n+# done at the end of recovery.\n+$node_london->init(allows_streaming => 1, has_archiving => 1);\n\nEnabling the archiving does not impact any of the tests, as we don't\nuse restore_command during recovery and only rely on streaming.\n\n> The cause of this failure is an oversight (rather obvious in\n> hindsight): The renaming of the WAL file (that was last written to\n> before the crash happened) to .partial is done *before* PostgreSQL\n> might have to read this very file to recover prepared transactions\n> from it. The relevant function calls here are durable_rename() and\n> RecoverPreparedTransactions() in xlog.c.\n> \n> Note that it is important that the PREPARE entry is in the WAL file\n> that PostgreSQL is writing to prior to the inital crash.\n> This has happened repeatedly in production already with a customer\n> that uses prepared transactions quite frequently. I assume that\n> this has happened for others too, but the circumstances of the crash\n> and the cause are very dubious, and troubleshooting it is pretty\n> difficult.\n\nI guess that this is a possibility yes. I have not heard directly\nabout such a report, but perhaps that's just because few people use\n2PC.\n\n> This behaviour has - apparently unintentionally - been fixed in PG\n> 15 and upwards (see commit 811051c ), as part of a general\n> restructure and reorganization of this portion of xlog.c (see commit\n> 6df1543 ).\n> \n> Furthermore, it seems this behaviour does not appear in PG 12 and\n> older, due to another possible bug: In PG 13 and newer, the\n> XLogReaderState is reset in XLogBeginRead() before reading WAL in\n> XlogReadTwoPhaseData() in twophase.c .\n> In the older releases (PG <= 12), this reset is not done, so the\n> requested LSN containing the prepared transaction can (by happy\n> coincidence) be read from in-memory buffers, and PostgreSQL\n> consequently manages to come up just fine (as the WAL has already\n> been read into buffers prior to the .partial rename). If the older\n> releases also where to properly reset the XLogReaderState, they\n> would also fail to find the LSN on disk, and hence PostgreSQL would\n> crash again.\n\nThat's debatable, but I think that I would let v12 and v11 be as they\nare. 
v11 is going to be end-of-life soon and we did not have any\ncomplains on this matter as far as I know, so there is a risk of\nbreaking something upon its last release. (Got some, Err..\nexperiences with that in the past). On REL_11_STABLE, note for\nexample the slight difference with the handling of\nrecovery_end_command, where we rely on InRecovery rather than\nArchiveRecoveryRequested. REL_12_STABLE is in a more consistent shape\nthan v11 regarding that.\n\n> I've attached patches for PG 14 and PG 13 that mimic the change in\n> PG15 (commit 811051c ) and reorder the crucial events, placing the\n> recovery of prepared transactions *before* renaming the file. \n\nYes, I think that's OK. I would like to add two things to your\nproposal for all the existing branches.\n- Addition of a comment where RecoverPreparedTransactions() is called\nat the end of recovery to tell that we'd better do that before working\non the last partial segment of the old timeline.\n- Enforce the use of archiving in 009_twophase.pl.\n\n> My humble opinion is that this fix should be backpatched to PG 14\n> and PG 13. It's debatable whether the fix needs to be brought back\n> to 12 and older also, as those do not exhibit this issue, but the \n> order of renaming is still wrong.\n\nYeah, I'd rather wait for somebody to complain about that. And v11 is\nnot worth taking risks with at this time of the year, IMHO.\n\nWith your fix included, the patch for REL_14_STABLE would be like the\nattached. Is that OK for you?\n--\nMichael",
"msg_date": "Mon, 19 Jun 2023 14:24:44 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [BUG] recovery of prepared transactions during promotion can fail"
},
{
"msg_contents": "Thanks for the report, reproducer and the patches.\n\nAt Fri, 16 Jun 2023 16:27:40 +0200, Julian Markwort <[email protected]> wrote in \n> - prepare a transaction\n> - crash postgresql\n> - create standby.signal file\n> - start postgresql, wait for recovery to finish\n> - promote\n..\n> The promotion will fail with a FATAL error, stating that \"requested WAL segment .* has already been removed\".\n> The FATAL error causes the startup process to exit, so postmaster shuts down again.\n> \n> Here's an exemplary log output, maybe this helps people to find this issue when they search for it online:\n\n> LOG: redo done at 0/15D89B8\n> LOG: last completed transaction was at log time 2023-06-16 13:09:53.71118+02\n> LOG: selected new timeline ID: 2\n> LOG: archive recovery complete\n> FATAL: requested WAL segment pg_wal/000000010000000000000001 has already been removed\n> LOG: startup process (PID 1650358) exited with exit code 1\n\nReproduced here.\n\n> The cause of this failure is an oversight (rather obvious in hindsight):\n> The renaming of the WAL file (that was last written to before the crash happened) to .partial is done *before* PostgreSQL\n> might have to read this very file to recover prepared transactions from it.\n> The relevant function calls here are durable_rename() and RecoverPreparedTransactions() in xlog.c .\n> This behaviour has - apparently unintentionally - been fixed in PG 15 and upwards (see commit 811051c ), as part of a\n> general restructure and reorganization of this portion of xlog.c (see commit 6df1543 ).\n\nI think so, the reordering might have done for some other reasons, though.\n\n> Furthermore, it seems this behaviour does not appear in PG 12 and older, due to another possible bug:\n\n<snip>...\n\n> In PG 13 and newer, the XLogReaderState is reset in XLogBeginRead()\n> before reading WAL in XlogReadTwoPhaseData() in twophase.c .\n\nI arraived at the same conclusion.\n\n> In the older releases (PG <= 12), this reset is not done, so the requested LSN containing the prepared transaction can\n> (by happy coincidence) be read from in-memory buffers, and PostgreSQL consequently manages to come up just fine (as the\n> WAL has already been read into buffers prior to the .partial rename).\n> If the older releases also where to properly reset the XLogReaderState, they would also fail to find the LSN on disk, and\n> hence PostgreSQL would crash again.\n\n From the perspective of loading WAL for prepared transactions, the\ncurrent code in those versions seems fine. Although I suspect Windows\nmay not like to rename currently-open segments, it's likely acceptable\nas the current test set operates without issue.. (I didn't tested this.)\n\n> I've attached patches for PG 14 and PG 13 that mimic the change in PG15 (commit 811051c ) and reorder the crucial events,\n> placing the recovery of prepared transactions *before* renaming the file.\n\nIt appears to move the correct part of the code to the proper\nlocation, modifying the steps to align with later versions.\n\n> I've also attached recovery test scripts for PG >= 12 and PG <= 11 that can be used to verify that promote after recovery\n> with prepared transactions works.\n\nIt effectively detects the bug, though it can't be directly used in\nthe tree as-is. 
I'm unsure whether we need this in the tree, though.\n\n> My humble opinion is that this fix should be backpatched to PG 14 and PG 13.\n\nI agree with you.\n\n> It's debatable whether the fix needs to be brought back to 12 and older also, as those do not exhibit this issue, but the\n> order of renaming is still wrong.\n> I'm not sure if there could be cases where the in-memory buffers of the walreader are too small to cover a whole WAL\n> file.\n> There could also be other issues from operations that require reading WAL that happen after the .partial rename, I\n> haven't checked in depth what else happens in the affected codepath.\n> Please let me know if you think this should also be fixed in PG 12 and earlier, so I can produce the patches for those\n> versions as well.\n\nThere's no immediate need to change the versions. However, I would\nprefer to backpatch them to the older versions for the following\nreasons.\n\n1. Applying this eases future backpatching in this area, if any.\n\n2. I have reservations about renaming possibly-open WAL segments.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Mon, 19 Jun 2023 14:25:14 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [BUG] recovery of prepared transactions during promotion can\n fail"
},
{
"msg_contents": "At Mon, 19 Jun 2023 14:24:44 +0900, Michael Paquier <[email protected]> wrote in \n> On Fri, Jun 16, 2023 at 04:27:40PM +0200, Julian Markwort wrote:\n> > Note that it is important that the PREPARE entry is in the WAL file\n> > that PostgreSQL is writing to prior to the inital crash.\n> > This has happened repeatedly in production already with a customer\n> > that uses prepared transactions quite frequently. I assume that\n> > this has happened for others too, but the circumstances of the crash\n> > and the cause are very dubious, and troubleshooting it is pretty\n> > difficult.\n> \n> I guess that this is a possibility yes. I have not heard directly\n> about such a report, but perhaps that's just because few people use\n> 2PC.\n\n+1\n\n> > This behaviour has - apparently unintentionally - been fixed in PG\n> > 15 and upwards (see commit 811051c ), as part of a general\n> > restructure and reorganization of this portion of xlog.c (see commit\n> > 6df1543 ).\n> > \n> > Furthermore, it seems this behaviour does not appear in PG 12 and\n> > older, due to another possible bug: In PG 13 and newer, the\n> > XLogReaderState is reset in XLogBeginRead() before reading WAL in\n> > XlogReadTwoPhaseData() in twophase.c .\n> > In the older releases (PG <= 12), this reset is not done, so the\n> > requested LSN containing the prepared transaction can (by happy\n> > coincidence) be read from in-memory buffers, and PostgreSQL\n> > consequently manages to come up just fine (as the WAL has already\n> > been read into buffers prior to the .partial rename). If the older\n> > releases also where to properly reset the XLogReaderState, they\n> > would also fail to find the LSN on disk, and hence PostgreSQL would\n> > crash again.\n> \n> That's debatable, but I think that I would let v12 and v11 be as they\n> are. v11 is going to be end-of-life soon and we did not have any\n> complains on this matter as far as I know, so there is a risk of\n> breaking something upon its last release. (Got some, Err..\n> experiences with that in the past). On REL_11_STABLE, note for\n> example the slight difference with the handling of\n> recovery_end_command, where we rely on InRecovery rather than\n> ArchiveRecoveryRequested. REL_12_STABLE is in a more consistent shape\n> than v11 regarding that.\n\nAgree about 11, it's no use patching. About 12, I slightly prefer\napplying this but I'm fine without it since no actual problem are\nseen.\n\n\n> > I've attached patches for PG 14 and PG 13 that mimic the change in\n> > PG15 (commit 811051c ) and reorder the crucial events, placing the\n> > recovery of prepared transactions *before* renaming the file. \n> \n> Yes, I think that's OK. I would like to add two things to your\n> proposal for all the existing branches.\n> - Addition of a comment where RecoverPreparedTransactions() is called\n> at the end of recovery to tell that we'd better do that before working\n> on the last partial segment of the old timeline.\n> - Enforce the use of archiving in 009_twophase.pl.\n\nBoth look good to me.\n\n> > My humble opinion is that this fix should be backpatched to PG 14\n> > and PG 13. It's debatable whether the fix needs to be brought back\n> > to 12 and older also, as those do not exhibit this issue, but the \n> > order of renaming is still wrong.\n> \n> Yeah, I'd rather wait for somebody to complain about that. 
And v11 is\n> not worth taking risks with at this time of the year, IMHO.\n\nI don't have a complaint as the whole.\n\n> With your fix included, the patch for REL_14_STABLE would be like the\n> attached. Is that OK for you?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Mon, 19 Jun 2023 14:41:54 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [BUG] recovery of prepared transactions during promotion can\n fail"
},
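For readers skimming this thread, a minimal sketch of the kind of workload the recovery path above has to handle; the table name t and the GID 'px1' are placeholders, and max_prepared_transactions must be set to a non-zero value:

BEGIN;
INSERT INTO t VALUES (1);
PREPARE TRANSACTION 'px1';
-- crash / promotion happens here; after recovery the prepared transaction
-- must still be listed and remain resolvable:
SELECT gid FROM pg_prepared_xacts;
COMMIT PREPARED 'px1';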
{
"msg_contents": "On Mon, Jun 19, 2023 at 02:41:54PM +0900, Kyotaro Horiguchi wrote:\n> At Mon, 19 Jun 2023 14:24:44 +0900, Michael Paquier <[email protected]> wrote in \n>> On Fri, Jun 16, 2023 at 04:27:40PM +0200, Julian Markwort wrote:\n>>> I've attached patches for PG 14 and PG 13 that mimic the change in\n>>> PG15 (commit 811051c ) and reorder the crucial events, placing the\n>>> recovery of prepared transactions *before* renaming the file. \n>> \n>> Yes, I think that's OK. I would like to add two things to your\n>> proposal for all the existing branches.\n>> - Addition of a comment where RecoverPreparedTransactions() is called\n>> at the end of recovery to tell that we'd better do that before working\n>> on the last partial segment of the old timeline.\n>> - Enforce the use of archiving in 009_twophase.pl.\n> \n> Both look good to me.\n\nOkay, cool. Thanks for double-checking, so let's do something down to\n13, then..\n--\nMichael",
"msg_date": "Mon, 19 Jun 2023 16:27:27 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [BUG] recovery of prepared transactions during promotion can fail"
},
{
"msg_contents": "On Mon, Jun 19, 2023 at 04:27:27PM +0900, Michael Paquier wrote:\n> Okay, cool. Thanks for double-checking, so let's do something down to\n> 13, then..\n\nAnd done for v13 and v14. I have split the test and comment changes\ninto their own commit, doing that for v13~HEAD.\n--\nMichael",
"msg_date": "Tue, 20 Jun 2023 10:49:03 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [BUG] recovery of prepared transactions during promotion can fail"
},
{
"msg_contents": "On Tue, Jun 20, 2023 at 10:49:03AM +0900, Michael Paquier wrote:\n> And done for v13 and v14. I have split the test and comment changes\n> into their own commit, doing that for v13~HEAD.\n\nI've started seen sporadic timeouts for 009_twophase.pl in cfbot, and I'm\nwondering if it's related to this change.\n\n\thttps://api.cirrus-ci.com/v1/task/4978271838797824/logs/test_world.log\n\thttps://api.cirrus-ci.com/v1/task/5477247717474304/logs/test_world.log\n\thttps://api.cirrus-ci.com/v1/task/5931749746671616/logs/test_world.log\n\thttps://api.cirrus-ci.com/v1/task/6353051175354368/logs/test_world.log\n\thttps://api.cirrus-ci.com/v1/task/5687888986243072/logs/test_world.log\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 20 Jun 2023 21:33:45 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [BUG] recovery of prepared transactions during promotion can fail"
},
{
"msg_contents": "On Tue, Jun 20, 2023 at 09:33:45PM -0700, Nathan Bossart wrote:\n> I've started seen sporadic timeouts for 009_twophase.pl in cfbot, and I'm\n> wondering if it's related to this change.\n> \n> \thttps://api.cirrus-ci.com/v1/task/4978271838797824/logs/test_world.log\n> \thttps://api.cirrus-ci.com/v1/task/5477247717474304/logs/test_world.log\n> \thttps://api.cirrus-ci.com/v1/task/5931749746671616/logs/test_world.log\n> \thttps://api.cirrus-ci.com/v1/task/6353051175354368/logs/test_world.log\n> \thttps://api.cirrus-ci.com/v1/task/5687888986243072/logs/test_world.log\n\nThanks for the poke, missed that.\n\nThe logs are enough to know what's happening here. All the tests\nfinish after this step:\n[02:29:33.169] # Now paris is primary and london is standby\n[02:29:33.169] ok 13 - Restore prepared transactions from records with\nprimary down\n\nHere are some log files:\nhttps://api.cirrus-ci.com/v1/artifact/task/5477247717474304/testrun/build/testrun/recovery/009_twophase/log/009_twophase_london.log\nhttps://api.cirrus-ci.com/v1/artifact/task/5477247717474304/testrun/build/testrun/recovery/009_twophase/log/009_twophase_paris.log\n\nJust after that, we start a previous primary as standby:\n# restart old primary as new standby\n$cur_standby->enable_streaming($cur_primary);\n$cur_standby->start;\n\nAnd the startup of the node gets stuck as the last partial segment is\nnow getting renamed, but the other node expects it to be available via\nstreaming. From london, which is the new standby starting up:\n2023-06-21 02:13:03.421 UTC [24652][walreceiver] LOG: primary server\ncontains no more WAL on requested timeline 3\n2023-06-21 02:13:03.421 UTC [24652][walreceiver] FATAL: terminating\nwalreceiver process due to administrator command \n2023-06-21 02:13:03.421 UTC [24647][startup] LOG: new timeline 4\nforked off current database system timeline 3 before current recovery\npoint 0/60000A0\n\nAnd paris complains about that:\n2023-06-21 02:13:03.515 UTC [24661][walsender] [london][4/0:0] LOG:\nreceived replication command: START_REPLICATION 0/6000000 TIMELINE 3 \n2023-06-21 02:13:03.515 UTC [24661][walsender] [london][4/0:0]\nSTATEMENT: START_REPLICATION 0/6000000 TIMELINE 3\n\nBut that won't connect work as the segment requested is now a partial\none in the primary's pg_wal, still the standby wants it. Just\nrestoring the segments won't help much as we don't have anything for\npartial segments in the TAP routines yet, so I think that it is better\nfor now to just undo has_archiving in has_archiving, and tackle the\ncoverage with a separate test, perhaps only for HEAD.\n--\nMichael",
"msg_date": "Wed, 21 Jun 2023 14:14:52 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [BUG] recovery of prepared transactions during promotion can fail"
},
{
"msg_contents": "First off, thanks for the quick reaction and reviews, I appreciate it.\n\nOn Wed, 2023-06-21 at 14:14 +0900, Michael Paquier wrote:\n> But that won't connect work as the segment requested is now a partial\n> one in the primary's pg_wal, still the standby wants it.\n\nI think since 009_twophase.pl doesn't use archiving so far, it's not a good idea to enable it generally, for all those\ntests. It changes too much of the behaviour.\n\n> I think that it is better\n> for now to just undo has_archiving in has_archiving, and tackle the\n> coverage with a separate test, perhaps only for HEAD.\n\nI see you've already undone it.\nAttached is a patch for 009_twophase.pl to just try this corner case at the very end, so as not to influence other\nexisting tests in suite.\n\nWhen I run this on REL_14_8 I get the error again, sort of as a sanity check...\n\nKind regards\nJulian",
"msg_date": "Wed, 21 Jun 2023 11:11:55 +0200",
"msg_from": "Julian Markwort <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [BUG] recovery of prepared transactions during promotion can\n fail"
},
{
"msg_contents": "On Wed, Jun 21, 2023 at 11:11:55AM +0200, Julian Markwort wrote:\n> I see you've already undone it.\n> Attached is a patch for 009_twophase.pl to just try this corner case at the very end, so as not to influence other\n> existing tests in suite.\n> \n> When I run this on REL_14_8 I get the error again, sort of as a sanity check...\n\n+$cur_primary->enable_archiving;\n\nenable_archiving is a routine aimed at being used internally by\nCluster.pm, so this does not sound like a good idea to me.\n\nRelying on a single node to avoid the previous switchover problem is a\nmuch better idea than what I have tried, but wouldn't it be better to\njust move the new test to a separate script and parallelize more? The\nruntime of 009_twophase.pl is already quite long.\n\nIt is worth noting that I am not going to take any bets with the\nbuildfarm before 16beta2. Doing that after REL_16_STABLE is created\nwill limit the risks.\n--\nMichael",
"msg_date": "Wed, 21 Jun 2023 19:12:12 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [BUG] recovery of prepared transactions during promotion can fail"
}
] |
[
{
"msg_contents": "Patch attached. Currently, the Makefile specifies NO_LOCALE=1, and the\nmeson.build does not.\n\n\n-- \nJeff Davis\nPostgreSQL Contributor Team - AWS",
"msg_date": "Fri, 16 Jun 2023 13:29:18 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "test_extensions: fix inconsistency between meson.build and Makefile"
},
{
"msg_contents": "On Fri Jun 16, 2023 at 3:29 PM CDT, Jeff Davis wrote:\n> Patch attached. Currently, the Makefile specifies NO_LOCALE=1, and the\n> meson.build does not.\n\nLooks alright to me, but it might be nicer to change the order of\narguments to match contrib/unaccent/meson.build:40. Might help with\ngrepping in the future.\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Fri, 16 Jun 2023 15:56:38 -0500",
"msg_from": "\"Tristan Partin\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: test_extensions: fix inconsistency between meson.build and\n Makefile"
},
{
"msg_contents": "On Fri, Jun 16, 2023 at 1:56 PM Tristan Partin <[email protected]> wrote:\n>\n> On Fri Jun 16, 2023 at 3:29 PM CDT, Jeff Davis wrote:\n> > Patch attached. Currently, the Makefile specifies NO_LOCALE=1, and the\n> > meson.build does not.\n>\n> Looks alright to me, but it might be nicer to change the order of\n> arguments to match contrib/unaccent/meson.build:40. Might help with\n> grepping in the future.\n\nIt seems that Jeff's patch tried to match the precedent set in\nsrc/test/modules/test_oat_hooks/meson.build.\n\nNo matter which ordering Jeff's patch uses, it will be inconsistent\nwith one of the existing order of the options.\n\nSo attached is updated patch that makes the order consistent across\nall 3 occurrences.\n\nBest regards,\nGurjeet\nhttp://Gurje.et",
"msg_date": "Sat, 17 Jun 2023 07:40:18 -0700",
"msg_from": "Gurjeet Singh <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: test_extensions: fix inconsistency between meson.build and\n Makefile"
},
{
"msg_contents": "On Sat, Jun 17, 2023 at 07:40:18AM -0700, Gurjeet Singh wrote:\n> So attached is updated patch that makes the order consistent across\n> all 3 occurrences.\n\nThere is no need to update unaccent since 44e73a4.\n\n--- a/src/test/modules/test_extensions/meson.build\n+++ b/src/test/modules/test_extensions/meson.build\n@@ -47,5 +47,6 @@ tests += {\n 'test_extensions',\n 'test_extdepend',\n ],\n+ 'regress_args': ['--no-locale', '--encoding=UTF8'],\n\nWhy is the addition of --encoding necessary for test_extensions? Its\nMakefile has a NO_LOCALE, but it has no ENCODING set.\n--\nMichael",
"msg_date": "Thu, 6 Jul 2023 11:41:02 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: test_extensions: fix inconsistency between meson.build and\n Makefile"
},
{
"msg_contents": "On Thu, 2023-07-06 at 11:41 +0900, Michael Paquier wrote:\n> Why is the addition of --encoding necessary for test_extensions? Its\n> Makefile has a NO_LOCALE, but it has no ENCODING set.\n\nI think that was an oversight -- as you point out, the Makefile doesn't\nset ENCODING, so the meson.build does not need to, either.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Wed, 05 Jul 2023 22:35:29 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: test_extensions: fix inconsistency between meson.build and\n Makefile"
}
] |
[
{
"msg_contents": "Hi,\n\nI am proposing that the default value of\nclient_connection_check_interval be moved to a non-zero value on\nsystems where it is supported. I think it would be a nice quality of\nlife improvement. I have run into a problem where this would have been\nuseful before with regard to pgbench not currently handling SIGINT\ncorrently[0]. I basically picked 10s out of thin air and am happy to\nchange it to what others feel would be more appropriate. This doesn't\nneed to be a check that happens often because it should just be a\nbackstop for unusual scenarios or poorly programmed clients.\n\nThe original thread where Thomas committed these changes seemed to\nindicate no performance impact[1]. The only reason that I can think of\nthis being turned off by default is that it isn't available on all\nsystems that Postgres supports. When this was committed however, the\nonly system that seemed to support EPOLLRDHUP was Linux. Seems like in\nrecent years this story has changed given the description of the\nparameter.\n\n> This option relies on kernel events exposed by Linux, macOS, illumos and\n> the BSD family of operating systems, and is not currently available on\n> other systems.\n\n[0]: https://www.postgresql.org/message-id/CSSWBAX56CVY.291H6ZNNHK7EO@c3po\n[1]: https://www.postgresql.org/message-id/CA+hUKG++KitzNUOxW2-koB1pKWD2cyUqA9vLj5bf0g_i7L1M0w@mail.gmail.com\n\n-- \nTristan Partin\nNeon (https://neon.tech)",
"msg_date": "Fri, 16 Jun 2023 15:52:13 -0500",
"msg_from": "\"Tristan Partin\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Default client_connection_check_interval to 10s on supported\n systems"
},
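For reference, a sketch of how the proposed setting can be tried out on an individual server today; the 10s value is simply the number floated above, not a recommendation:

ALTER SYSTEM SET client_connection_check_interval = '10s';
SELECT pg_reload_conf();
SHOW client_connection_check_interval;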
{
"msg_contents": "\"Tristan Partin\" <[email protected]> writes:\n> I am proposing that the default value of\n> client_connection_check_interval be moved to a non-zero value on\n> systems where it is supported. I think it would be a nice quality of\n> life improvement.\n\nI doubt that we need this, and I *really* doubt that it's appropriate\nto use a timeout as short as 10s.\n\nOne reason not to try to enable it by default is exactly that the\ndefault behavior would then be platform-dependent. That's a\ndocumentation problem we could do without.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 16 Jun 2023 18:10:06 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Default client_connection_check_interval to 10s on supported\n systems"
},
{
"msg_contents": "On Fri Jun 16, 2023 at 5:10 PM CDT, Tom Lane wrote:\n> \"Tristan Partin\" <[email protected]> writes:\n> > I am proposing that the default value of\n> > client_connection_check_interval be moved to a non-zero value on\n> > systems where it is supported. I think it would be a nice quality of\n> > life improvement.\n>\n> I doubt that we need this, and I *really* doubt that it's appropriate\n> to use a timeout as short as 10s.\n\nSure. Like I said, 10s is just a number pulled from thin air. The\noriginal patches on the mailing list had this as low as 1s.\n\n> One reason not to try to enable it by default is exactly that the\n> default behavior would then be platform-dependent. That's a\n> documentation problem we could do without.\n\nTotally fine with me if this is the reason for rejection. Just wanted to\nput it out there for discussion since I don't think the original thread\ncovered the default value very much.\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Fri, 16 Jun 2023 17:16:59 -0500",
"msg_from": "\"Tristan Partin\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Default client_connection_check_interval to 10s on supported\n systems"
}
] |
[
{
"msg_contents": "Attached patch adds additional hardening to nbtree page deletion. It\nmakes nbtree VACUUM tolerate a certain sort of cross-page\ninconsistencies in the structure of an index (corruption). VACUUM can\npress on, avoiding an eventual wraparound/xidStopLimit failure in\nenvironments where nobody notices the problem for an extended period.\n\nThis is very similar to my recent commit 5abff197 (though it's even\ncloser to commit 5b861baa). Once again we're demoting an ERROR to a\nLOG message, and pressing on with vacuuming. I propose that this patch\nbe backpatched all the way, too. The hardening added by the patch\nseems equally well targeted and low risk. It's a parent/child\ninconsistency, as opposed to a sibling inconsistency. Very familiar\nstuff, overall.\n\nI have seen an internal report of the ERROR causing issues for a\nproduction instance, so this definitely can fail in the field on\nmodern Postgres versions. Though this particular inconsistency (\"right\nsibling is not next child...\") has a long history. It has definitely\nbeen spotted in the field several times over many years. This 2006\nthread about problems with a Wisconsin courts database is one example\nof that:\n\nhttps://www.postgresql.org/message-id/flat/3355.1144873721%40sss.pgh.pa.us#b0a89b2d9e7f6a3c818fdf723b8fa29b\n\nAt the time the ERROR was a PANIC. A few years later (in 2010), it was\ndemoted to an ERROR (see commit 8fa30f90). And now I want to demote it\nto a LOG -- which is much easier now that we have a robust approach to\npage deletion (after 2014 commit efada2b8e9).\n\n-- \nPeter Geoghegan",
"msg_date": "Fri, 16 Jun 2023 14:15:08 -0700",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Adding further hardening to nbtree page deletion"
},
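As an aside (not something raised in this thread), the parent/child relationships that this hardening makes VACUUM tolerant of can also be verified proactively with the amcheck extension; the index name below is a placeholder:

CREATE EXTENSION IF NOT EXISTS amcheck;
-- checks parent/child and sibling relationships across the whole btree
SELECT bt_index_parent_check('some_btree_index'::regclass);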
{
"msg_contents": "On Fri, Jun 16, 2023 at 2:15 PM Peter Geoghegan <[email protected]> wrote:\n> Attached patch adds additional hardening to nbtree page deletion. It\n> makes nbtree VACUUM tolerate a certain sort of cross-page\n> inconsistencies in the structure of an index (corruption). VACUUM can\n> press on, avoiding an eventual wraparound/xidStopLimit failure in\n> environments where nobody notices the problem for an extended period.\n\nMy current plan is to commit this in the next couple of days. I'll\nbackpatch all the way, like last time.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Tue, 20 Jun 2023 18:28:58 -0700",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Adding further hardening to nbtree page deletion"
},
{
"msg_contents": "Hi,\n\nOn 2023-06-16 14:15:08 -0700, Peter Geoghegan wrote:\n> Attached patch adds additional hardening to nbtree page deletion. It\n> makes nbtree VACUUM tolerate a certain sort of cross-page\n> inconsistencies in the structure of an index (corruption). VACUUM can\n> press on, avoiding an eventual wraparound/xidStopLimit failure in\n> environments where nobody notices the problem for an extended period.\n>\n> This is very similar to my recent commit 5abff197 (though it's even\n> closer to commit 5b861baa). Once again we're demoting an ERROR to a\n> LOG message, and pressing on with vacuuming. I propose that this patch\n> be backpatched all the way, too. The hardening added by the patch\n> seems equally well targeted and low risk. It's a parent/child\n> inconsistency, as opposed to a sibling inconsistency. Very familiar\n> stuff, overall.\n>\n> [...]\n>\n> At the time the ERROR was a PANIC. A few years later (in 2010), it was\n> demoted to an ERROR (see commit 8fa30f90). And now I want to demote it\n> to a LOG -- which is much easier now that we have a robust approach to\n> page deletion (after 2014 commit efada2b8e9).\n\nI have no objection to this concrete change (nor have I reviewed it\ncarefully).\n\n\nBut the further we go down this path, the more important it is that we provide\nsome way to monitor stuff like this. IME it's not particularly practical to\nrely on scanning logs to find such issues at scale. I suspect we ought to add\nat least something that makes such \"ignored errors\" visible from stats.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 20 Jun 2023 22:39:24 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Adding further hardening to nbtree page deletion"
},
{
"msg_contents": "On Tue, Jun 20, 2023 at 10:39 PM Andres Freund <[email protected]> wrote:\n> But the further we go down this path, the more important it is that we provide\n> some way to monitor stuff like this. IME it's not particularly practical to\n> rely on scanning logs to find such issues at scale. I suspect we ought to add\n> at least something that makes such \"ignored errors\" visible from stats.\n\nI'm in favor of that, of course. We do at least use\nERRCODE_INDEX_CORRUPTED for all of the ERRORs that became LOG messages\nin the past several years. That's a start.\n\nFWIW, I'm almost certain that I'll completely run out of ERRORs to\ndemote to LOGs before too long. In fact, this might very well be the\nlast ERROR that I ever have to demote to a LOG to harden nbtree\nVACUUM. There just aren't that many ERRORs that would benefit from\nsimilar treatment. And most of the individual cases that I've\naddressed come up very infrequently in practice.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Tue, 20 Jun 2023 23:13:21 -0700",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Adding further hardening to nbtree page deletion"
},
{
"msg_contents": "On Tue, Jun 20, 2023 at 11:13 PM Peter Geoghegan <[email protected]> wrote:\n> FWIW, I'm almost certain that I'll completely run out of ERRORs to\n> demote to LOGs before too long. In fact, this might very well be the\n> last ERROR that I ever have to demote to a LOG to harden nbtree\n> VACUUM.\n\nPushed this just now, backpatching all the way.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 21 Jun 2023 17:42:20 -0700",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Adding further hardening to nbtree page deletion"
}
] |
[
{
"msg_contents": "Hi hackers,\n\nIn modern versions of Postgres the dollar sign is a totally legal character\nfor identifiers (except for the first character), but tab-complete do not\ntreat such identifiers well.\nFor example if one try to create an Oracle-style view like this:\n\ncreate view v$activity as select * from pg_stat_activity;\n\n, he will get a normally functioning view, but psql tab-complete will not\nhelp him. Type \"v\", \"v$\" or \"v$act\" and press <TAB> - nothing will be\nsuggested.\n\nAttached is a small patch fixing this problem.\nHonestly I'm a little surprised that this was not done before. Maybe, there\nare some special considerations I am not aware of, and the patch will break\nsomething?\nWhat would you say?\n--\n best regards,\n Mikhail A. Gribkov",
"msg_date": "Sat, 17 Jun 2023 00:51:30 +0300",
"msg_from": "Mikhail Gribkov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Fixing tab-complete for dollar-names"
},
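To make the affected case concrete, a small sketch of the identifiers involved; log$archive and entry$id are made-up names, and v$activity is the example from the message above:

-- dollar signs are legal anywhere in an identifier except the first character
CREATE TABLE log$archive (entry$id int, payload text);
CREATE VIEW v$activity AS SELECT * FROM pg_stat_activity;
-- with the patch, typing v$act<TAB> or log$<TAB> in psql offers these names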
{
"msg_contents": "Hi hackers,\n\nAs not much preliminary interest seem to be here, I'm sending the patch to\nthe upcoming commitfest\n\n--\n best regards,\n Mikhail A. Gribkov\n\n\nOn Sat, Jun 17, 2023 at 12:51 AM Mikhail Gribkov <[email protected]> wrote:\n\n> Hi hackers,\n>\n> In modern versions of Postgres the dollar sign is a totally legal\n> character for identifiers (except for the first character), but\n> tab-complete do not treat such identifiers well.\n> For example if one try to create an Oracle-style view like this:\n>\n> create view v$activity as select * from pg_stat_activity;\n>\n> , he will get a normally functioning view, but psql tab-complete will not\n> help him. Type \"v\", \"v$\" or \"v$act\" and press <TAB> - nothing will be\n> suggested.\n>\n> Attached is a small patch fixing this problem.\n> Honestly I'm a little surprised that this was not done before. Maybe,\n> there are some special considerations I am not aware of, and the patch will\n> break something?\n> What would you say?\n> --\n> best regards,\n> Mikhail A. Gribkov\n>\n\nHi hackers,As not much preliminary interest seem to be here, I'm sending the patch to the upcoming commitfest-- best regards, Mikhail A. GribkovOn Sat, Jun 17, 2023 at 12:51 AM Mikhail Gribkov <[email protected]> wrote:Hi hackers,In modern versions of Postgres the dollar sign is a totally legal character for identifiers (except for the first character), but tab-complete do not treat such identifiers well.For example if one try to create an Oracle-style view like this:create view v$activity as select * from pg_stat_activity;, he will get a normally functioning view, but psql tab-complete will not help him. Type \"v\", \"v$\" or \"v$act\" and press <TAB> - nothing will be suggested.Attached is a small patch fixing this problem.Honestly I'm a little surprised that this was not done before. Maybe, there are some special considerations I am not aware of, and the patch will break something?What would you say?-- best regards, Mikhail A. Gribkov",
"msg_date": "Mon, 26 Jun 2023 23:10:00 +0300",
"msg_from": "Mikhail Gribkov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Fixing tab-complete for dollar-names"
},
{
"msg_contents": "On 6/26/23 22:10, Mikhail Gribkov wrote:\n> Hi hackers,\n> \n> As not much preliminary interest seem to be here, I'm sending the patch to\n> the upcoming commitfest\n\nI have added myself as reviewer. I already had taken a look at it, and \nit seemed okay, but I have not yet searched for corner cases.\n-- \nVik Fearing\n\n\n\n",
"msg_date": "Tue, 27 Jun 2023 01:47:44 +0200",
"msg_from": "Vik Fearing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fixing tab-complete for dollar-names"
},
{
"msg_contents": "On 27/06/2023 02:47, Vik Fearing wrote:\n> On 6/26/23 22:10, Mikhail Gribkov wrote:\n>> Hi hackers,\n>>\n>> As not much preliminary interest seem to be here, I'm sending the patch to\n>> the upcoming commitfest\n> \n> I have added myself as reviewer. I already had taken a look at it, and\n> it seemed okay, but I have not yet searched for corner cases.\n\nLGTM, pushed.\n\nI concur it's surprising that no one's noticed or at least not bothered \nto fix this before. But I also couldn't find any cases where this would \ncause trouble.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Tue, 19 Sep 2023 19:30:25 +0300",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fixing tab-complete for dollar-names"
}
] |
[
{
"msg_contents": "Hi All,\n\nI've written a patch to add hash functions for the ltree extension. It adds\nsupport for hash indexes and hash aggregation. I've reused the existing\nlogic that's used to hash arrays and added tests that mirror elsewhere\n(i.e. hstore and hash_func regression tests).\n\nThe patch doesn't currently support hash joins as the ltree = operator was\ncreated without support for it. The ALTER OPERATOR command doesn't support\nchanging the hash join support, so I'm not sure what the best strategy to\nchange it is. Is it ok to update the operator's row in the pg_operator\nsystem catalog or is there a better way to change this that someone could\nrecommend?\n\nAny comments on the overall approach or other feedback would be appreciated.\n\nThanks,\nTommy",
"msg_date": "Sat, 17 Jun 2023 17:45:10 +0200",
"msg_from": "Tommy Pavlicek <[email protected]>",
"msg_from_op": true,
"msg_subject": "[PATCH] ltree hash functions"
},
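A sketch of what the patch is meant to enable, assuming the patch above is installed; the table name and data are placeholders:

CREATE EXTENSION IF NOT EXISTS ltree;
CREATE TABLE paths (p ltree);
-- hash index support on ltree columns:
CREATE INDEX paths_p_hash ON paths USING hash (p);
SELECT * FROM paths WHERE p = 'Top.Science.Astronomy'::ltree;
-- hash aggregation also becomes available for GROUP BY on ltree:
SELECT p, count(*) FROM paths GROUP BY p;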
{
"msg_contents": "Hi,\n\nI've created a CF entry for the patch:\n\n https://commitfest.postgresql.org/43/4375/\n\nI only briefly skimmed the code, so a couple comments.\n\nOn 6/17/23 17:45, Tommy Pavlicek wrote:\n> Hi All,\n> \n> I've written a patch to add hash functions for the ltree extension. It\n> adds support for hash indexes and hash aggregation. I've reused the\n> existing logic that's used to hash arrays and added tests that mirror\n> elsewhere (i.e. hstore and hash_func regression tests).\n> \n\nReusing code/logic is the right approach, IMHO.\n\n> The patch doesn't currently support hash joins as the ltree = operator\n> was created without support for it. The ALTER OPERATOR command doesn't\n> support changing the hash join support, so I'm not sure what the best\n> strategy to change it is. Is it ok to update the operator's row in the\n> pg_operator system catalog or is there a better way to change this that\n> someone could recommend?\n> \n\nI guess the \"correct\" solution would be to extend ALTER OPERATOR. I\nwonder why it's not supported - it's clearly an intentional decision\n(per comment in AlterOperator). So what might break if this changes for\nan existing operator?\n\nFWIW the CREATE OPERATOR documentation only talks about hash joins for\nHASHES, maybe it should be updated to also mention hash aggregates?\n\n> Any comments on the overall approach or other feedback would be appreciated.\n> \n\nI wonder what's the use case for this. I wonder how often people join on\nltree, for example. Did you just notice ltree can't hash and decided to\nfix that, or do you have a practical use case / need for this feature?\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sat, 17 Jun 2023 19:40:30 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] ltree hash functions"
},
{
"msg_contents": "Tomas Vondra <[email protected]> writes:\n> I guess the \"correct\" solution would be to extend ALTER OPERATOR. I\n> wonder why it's not supported - it's clearly an intentional decision\n> (per comment in AlterOperator). So what might break if this changes for\n> an existing operator?\n\nThis code was added by commit 321eed5f0. The thread leading up to\nthat commit is here:\n\nhttps://www.postgresql.org/message-id/flat/3348985.V7xMLFDaJO%40dinodell\n\nThere are some nontrivial concerns in there about breaking the\nsemantics of existing exclusion constraints, for instance. I think\nwe mostly rejected the concern about invalidation of cached plans\nas already-covered, but that wasn't the only problem.\n\nHowever, I think we could largely ignore the issues if we restricted\nALTER OPERATOR to only add commutator, negator, hashes, or merges\nproperties to operators that lacked them before --- which'd be the\nprimary if not only use-case anyway. That direction can't break\nanything.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 17 Jun 2023 14:19:42 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] ltree hash functions"
},
{
"msg_contents": "\n\nOn 6/17/23 20:19, Tom Lane wrote:\n> Tomas Vondra <[email protected]> writes:\n>> I guess the \"correct\" solution would be to extend ALTER OPERATOR. I\n>> wonder why it's not supported - it's clearly an intentional decision\n>> (per comment in AlterOperator). So what might break if this changes for\n>> an existing operator?\n> \n> This code was added by commit 321eed5f0. The thread leading up to\n> that commit is here:\n> \n> https://www.postgresql.org/message-id/flat/3348985.V7xMLFDaJO%40dinodell\n> \n> There are some nontrivial concerns in there about breaking the\n> semantics of existing exclusion constraints, for instance. I think\n> we mostly rejected the concern about invalidation of cached plans\n> as already-covered, but that wasn't the only problem.\n> \n> However, I think we could largely ignore the issues if we restricted\n> ALTER OPERATOR to only add commutator, negator, hashes, or merges\n> properties to operators that lacked them before --- which'd be the\n> primary if not only use-case anyway. That direction can't break\n> anything.\n> \n\nSound reasonable.\n\nTommy, are you interested in extending ALTER OPERATOR to allow this,\nwhich would also allow fixing the ltree operator?\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sat, 17 Jun 2023 21:57:33 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] ltree hash functions"
},
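If ALTER OPERATOR is extended along the lines Tom suggests, the eventual fix for the ltree equality operator could plausibly look something like the following; the exact syntax is a guess and depends on what the patch settles on:

ALTER OPERATOR = (ltree, ltree) SET (HASHES);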
{
"msg_contents": ">\n> FWIW the CREATE OPERATOR documentation only talks about hash joins for\n\nHASHES, maybe it should be updated to also mention hash aggregates?\n\n\nI think I might have been a bit unclear here, the hash aggregate does work\nwithout altering the operator so it's just the join that's blocked. Sorry\nabout the confusion.\n\nI wonder what's the use case for this. I wonder how often people join on\n> ltree, for example. Did you just notice ltree can't hash and decided to\n> fix that, or do you have a practical use case / need for this feature?\n\n\nI mostly want to add hash indexes. Beyond selecting specific values, you\ncan use them to get ancestors (trim the path and do an exact select) and\ndescendents (using a functional index calculating the parent path for each\nrow). For example, I've found it can be faster to calculate the path of\nevery ancestor and use select ltree path = ANY([ancestor paths]) compared\nto using a gist index. It's not ideal, but unfortunately I've found that\nwith enough rows, gist indexes get very large and slow. Btree indexes are\nbetter, but for ltree they can still be up to around 10x bigger than a hash\nindex. I've also seen ltree hash indexes outperform btree indexes in very\nlarge tables, but I suspect in most cases they'll be similar.\n\nTommy, are you interested in extending ALTER OPERATOR to allow this,\n> which would also allow fixing the ltree operator?\n\n\nYes, I can do that. I took a look over the code and email thread and it\nseems like it should be relatively straight forward. I'll put a patch\ntogether for that and then update this patch to alter the operator.\n\nOn Sat, Jun 17, 2023 at 9:57 PM Tomas Vondra <[email protected]>\nwrote:\n\n>\n>\n> On 6/17/23 20:19, Tom Lane wrote:\n> > Tomas Vondra <[email protected]> writes:\n> >> I guess the \"correct\" solution would be to extend ALTER OPERATOR. I\n> >> wonder why it's not supported - it's clearly an intentional decision\n> >> (per comment in AlterOperator). So what might break if this changes for\n> >> an existing operator?\n> >\n> > This code was added by commit 321eed5f0. The thread leading up to\n> > that commit is here:\n> >\n> > https://www.postgresql.org/message-id/flat/3348985.V7xMLFDaJO%40dinodell\n> >\n> > There are some nontrivial concerns in there about breaking the\n> > semantics of existing exclusion constraints, for instance. I think\n> > we mostly rejected the concern about invalidation of cached plans\n> > as already-covered, but that wasn't the only problem.\n> >\n> > However, I think we could largely ignore the issues if we restricted\n> > ALTER OPERATOR to only add commutator, negator, hashes, or merges\n> > properties to operators that lacked them before --- which'd be the\n> > primary if not only use-case anyway. That direction can't break\n> > anything.\n> >\n>\n> Sound reasonable.\n>\n> Tommy, are you interested in extending ALTER OPERATOR to allow this,\n> which would also allow fixing the ltree operator?\n>\n>\n> regards\n>\n> --\n> Tomas Vondra\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n>\n\nFWIW the CREATE OPERATOR documentation only talks about hash joins forHASHES, maybe it should be updated to also mention hash aggregates?I think I might have been a bit unclear here, the hash aggregate does work without altering the operator so it's just the join that's blocked. Sorry about the confusion.I wonder what's the use case for this. I wonder how often people join onltree, for example. 
Did you just notice ltree can't hash and decided tofix that, or do you have a practical use case / need for this feature?I mostly want to add hash indexes. Beyond selecting specific values, you can use them to get ancestors (trim the path and do an exact select) and descendents (using a functional index calculating the parent path for each row). For example, I've found it can be faster to calculate the path of every ancestor and use select ltree path = ANY([ancestor paths]) compared to using a gist index. It's not ideal, but unfortunately I've found that with enough rows, gist indexes get very large and slow. Btree indexes are better, but for ltree they can still be up to around 10x bigger than a hash index. I've also seen ltree hash indexes outperform btree indexes in very large tables, but I suspect in most cases they'll be similar.Tommy, are you interested in extending ALTER OPERATOR to allow this,which would also allow fixing the ltree operator?Yes, I can do that. I took a look over the code and email thread and it seems like it should be relatively straight forward. I'll put a patch together for that and then update this patch to alter the operator.On Sat, Jun 17, 2023 at 9:57 PM Tomas Vondra <[email protected]> wrote:\n\nOn 6/17/23 20:19, Tom Lane wrote:\n> Tomas Vondra <[email protected]> writes:\n>> I guess the \"correct\" solution would be to extend ALTER OPERATOR. I\n>> wonder why it's not supported - it's clearly an intentional decision\n>> (per comment in AlterOperator). So what might break if this changes for\n>> an existing operator?\n> \n> This code was added by commit 321eed5f0. The thread leading up to\n> that commit is here:\n> \n> https://www.postgresql.org/message-id/flat/3348985.V7xMLFDaJO%40dinodell\n> \n> There are some nontrivial concerns in there about breaking the\n> semantics of existing exclusion constraints, for instance. I think\n> we mostly rejected the concern about invalidation of cached plans\n> as already-covered, but that wasn't the only problem.\n> \n> However, I think we could largely ignore the issues if we restricted\n> ALTER OPERATOR to only add commutator, negator, hashes, or merges\n> properties to operators that lacked them before --- which'd be the\n> primary if not only use-case anyway. That direction can't break\n> anything.\n> \n\nSound reasonable.\n\nTommy, are you interested in extending ALTER OPERATOR to allow this,\nwhich would also allow fixing the ltree operator?\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Mon, 19 Jun 2023 11:18:14 +0200",
"msg_from": "Tommy Pavlicek <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] ltree hash functions"
},
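A sketch of the ancestor-lookup pattern described in the previous message, against a placeholder table tree(path ltree); the hash index assumes the opclass added by the patch under discussion:

CREATE TABLE tree (path ltree);
CREATE INDEX tree_path_hash ON tree USING hash (path);
-- single-value probes can use the hash index:
SELECT * FROM tree WHERE path = 'Top.Science.Astronomy'::ltree;
-- ancestors of a node, found by enumerating its prefixes:
SELECT * FROM tree
WHERE path = ANY (ARRAY['Top', 'Top.Science', 'Top.Science.Astronomy']::ltree[]);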
{
"msg_contents": "> On 19 Jun 2023, at 11:18, Tommy Pavlicek <[email protected]> wrote:\n\n> Tommy, are you interested in extending ALTER OPERATOR to allow this,\n> which would also allow fixing the ltree operator?\n> \n> Yes, I can do that. I took a look over the code and email thread and it seems like it should be relatively straight forward. I'll put a patch together for that and then update this patch to alter the operator.\n\nDid you have a chance to look at this for an updated patch for this commitfest?\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Thu, 6 Jul 2023 10:18:25 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] ltree hash functions"
},
{
"msg_contents": "On Thu, Jul 6, 2023 at 2:18 AM Daniel Gustafsson <[email protected]> wrote:\n>\n> > On 19 Jun 2023, at 11:18, Tommy Pavlicek <[email protected]> wrote:\n>\n> > Tommy, are you interested in extending ALTER OPERATOR to allow this,\n> > which would also allow fixing the ltree operator?\n> >\n> > Yes, I can do that. I took a look over the code and email thread and it seems like it should be relatively straight forward. I'll put a patch together for that and then update this patch to alter the operator.\n>\n> Did you have a chance to look at this for an updated patch for this commitfest?\n\nI finally had a chance to look at this and I've updated the patch to\nalter the = operator to enable hash joins.\n\nThis is ready to be looked at now.\n\nIs there anything I need to do to move this forward?\n\nCheers,\nTommy",
"msg_date": "Tue, 28 Nov 2023 22:08:44 +0000",
"msg_from": "Tommy Pavlicek <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] ltree hash functions"
},
{
"msg_contents": "On Wed, Nov 29, 2023 at 6:09 AM Tommy Pavlicek <[email protected]> wrote:\n>\n> On Thu, Jul 6, 2023 at 2:18 AM Daniel Gustafsson <[email protected]> wrote:\n> >\n> > > On 19 Jun 2023, at 11:18, Tommy Pavlicek <[email protected]> wrote:\n> >\n> > > Tommy, are you interested in extending ALTER OPERATOR to allow this,\n> > > which would also allow fixing the ltree operator?\n> > >\n> > > Yes, I can do that. I took a look over the code and email thread and it seems like it should be relatively straight forward. I'll put a patch together for that and then update this patch to alter the operator.\n> >\n> > Did you have a chance to look at this for an updated patch for this commitfest?\n>\n> I finally had a chance to look at this and I've updated the patch to\n> alter the = operator to enable hash joins.\n>\n> This is ready to be looked at now.\n>\n> Is there anything I need to do to move this forward?\n>\n\nyou only change Makefile, you also need to change contrib/ltree/meson.build?\n\n+drop index tstidx;\n+create index tstidx on ltreetest using hash (t);\n+set enable_seqscan=off;\n+\n+SELECT * FROM ltreetest WHERE t = '12.3' order by t asc;\n\nDo you need to use EXPLAIN to demo the index usage?\n\n\n",
"msg_date": "Wed, 29 Nov 2023 09:37:55 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] ltree hash functions"
},
{
"msg_contents": "On Tue, Nov 28, 2023 at 7:38 PM jian he <[email protected]> wrote:\n> you only change Makefile, you also need to change contrib/ltree/meson.build?\n> Do you need to use EXPLAIN to demo the index usage?\n\nThanks! Yes, I missed the Meson build file. I added additional\ncommands with EXPLAIN (COSTS OFF) as I found in other places.\n\nPatch updated for those comments (and a touch of cleanup in the tests) attached.",
"msg_date": "Fri, 1 Dec 2023 00:44:40 +0000",
"msg_from": "Tommy Pavlicek <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] ltree hash functions"
},
{
"msg_contents": "On Fri, Dec 1, 2023 at 8:44 AM Tommy Pavlicek <[email protected]> wrote:\n>\n>\n> Patch updated for those comments (and a touch of cleanup in the tests) attached.\n\nit would be a better name as hash_ltree than ltree_hash, similar logic\napplies to ltree_hash_extended.\nthat would be the convention. see: https://stackoverflow.com/a/69650940/15603477\n\n\nOther than that, it looks good.\n\n\n",
"msg_date": "Mon, 4 Dec 2023 14:46:44 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] ltree hash functions"
},
{
"msg_contents": "Thanks.\n\nI've attached the latest version that updates the naming in line with\nthe convention.\n\nOn Mon, Dec 4, 2023 at 12:46 AM jian he <[email protected]> wrote:\n>\n> On Fri, Dec 1, 2023 at 8:44 AM Tommy Pavlicek <[email protected]> wrote:\n> >\n> >\n> > Patch updated for those comments (and a touch of cleanup in the tests) attached.\n>\n> it would be a better name as hash_ltree than ltree_hash, similar logic\n> applies to ltree_hash_extended.\n> that would be the convention. see: https://stackoverflow.com/a/69650940/15603477\n>\n>\n> Other than that, it looks good.",
"msg_date": "Tue, 5 Dec 2023 16:38:08 -0600",
"msg_from": "Tommy Pavlicek <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] ltree hash functions"
},
{
"msg_contents": "On Wed, 6 Dec 2023 at 04:08, Tommy Pavlicek <[email protected]> wrote:\n>\n> Thanks.\n>\n> I've attached the latest version that updates the naming in line with\n> the convention.\n>\n> On Mon, Dec 4, 2023 at 12:46 AM jian he <[email protected]> wrote:\n> >\n> > On Fri, Dec 1, 2023 at 8:44 AM Tommy Pavlicek <[email protected]> wrote:\n> > >\n> > >\n> > > Patch updated for those comments (and a touch of cleanup in the tests) attached.\n> >\n> > it would be a better name as hash_ltree than ltree_hash, similar logic\n> > applies to ltree_hash_extended.\n> > that would be the convention. see: https://stackoverflow.com/a/69650940/15603477\n> >\n> >\n> > Other than that, it looks good.\n\nCFBot shows one of the test is failing as in [1]:\ndiff -U3 /tmp/cirrus-ci-build/contrib/ltree/expected/ltree.out\n/tmp/cirrus-ci-build/build-32/testrun/ltree/regress/results/ltree.out\n--- /tmp/cirrus-ci-build/contrib/ltree/expected/ltree.out 2024-01-31\n15:18:42.893039599 +0000\n+++ /tmp/cirrus-ci-build/build-32/testrun/ltree/regress/results/ltree.out\n2024-01-31 15:23:25.309028749 +0000\n@@ -1442,9 +1442,14 @@\n ('0.1.2'::ltree), ('0'::ltree), ('0_asd.1_ASD'::ltree)) x(v)\n WHERE hash_ltree(v)::bit(32) != hash_ltree_extended(v, 0)::bit(32)\n OR hash_ltree(v)::bit(32) = hash_ltree_extended(v, 1)::bit(32);\n- value | standard | extended0 | extended1\n--------+----------+-----------+-----------\n-(0 rows)\n+ value | standard |\nextended0 | extended1\n+-------------+----------------------------------+----------------------------------+----------------------------------\n+ 0 | 10001010100010010000000000001011 |\n01011001111001000100011001011011 | 01011001111001000100011010011111\n+ 0.1 | 10100000111110001010110001001110 |\n00111100100010001100110111010101 | 00111100100010001101100011010101\n+ 0.1.2 | 01111000011100000101111101110100 |\n10101110011101011000000011010111 | 10101110011101110010001111000011\n+ 0 | 10001010100010010000000000001011 |\n01011001111001000100011001011011 | 01011001111001000100011010011111\n+ 0_asd.1_ASD | 01000010001010000000101001001101 |\n00111100100010001100110111010101 | 00111100100010001101100011010101\n+(5 rows)\n\nPlease post an updated version for the same.\n\n[1] - https://api.cirrus-ci.com/v1/artifact/task/5572544858685440/testrun/build-32/testrun/ltree/regress/regression.diffs\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Thu, 1 Feb 2024 20:41:04 +0530",
"msg_from": "vignesh C <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] ltree hash functions"
},
{
"msg_contents": "On Thu, Feb 1, 2024 at 11:11 PM vignesh C <[email protected]> wrote:\n>\n> On Wed, 6 Dec 2023 at 04:08, Tommy Pavlicek <[email protected]> wrote:\n> >\n> > Thanks.\n> >\n> > I've attached the latest version that updates the naming in line with\n> > the convention.\n> >\n> > On Mon, Dec 4, 2023 at 12:46 AM jian he <[email protected]> wrote:\n> > >\n> > > On Fri, Dec 1, 2023 at 8:44 AM Tommy Pavlicek <[email protected]> wrote:\n> > > >\n> > > >\n> > > > Patch updated for those comments (and a touch of cleanup in the tests) attached.\n> > >\n> > > it would be a better name as hash_ltree than ltree_hash, similar logic\n> > > applies to ltree_hash_extended.\n> > > that would be the convention. see: https://stackoverflow.com/a/69650940/15603477\n> > >\n> > >\n> > > Other than that, it looks good.\n>\n> CFBot shows one of the test is failing as in [1]:\n> diff -U3 /tmp/cirrus-ci-build/contrib/ltree/expected/ltree.out\n> /tmp/cirrus-ci-build/build-32/testrun/ltree/regress/results/ltree.out\n> --- /tmp/cirrus-ci-build/contrib/ltree/expected/ltree.out 2024-01-31\n> 15:18:42.893039599 +0000\n> +++ /tmp/cirrus-ci-build/build-32/testrun/ltree/regress/results/ltree.out\n> 2024-01-31 15:23:25.309028749 +0000\n> @@ -1442,9 +1442,14 @@\n> ('0.1.2'::ltree), ('0'::ltree), ('0_asd.1_ASD'::ltree)) x(v)\n> WHERE hash_ltree(v)::bit(32) != hash_ltree_extended(v, 0)::bit(32)\n> OR hash_ltree(v)::bit(32) = hash_ltree_extended(v, 1)::bit(32);\n> - value | standard | extended0 | extended1\n> --------+----------+-----------+-----------\n> -(0 rows)\n> + value | standard |\n> extended0 | extended1\n> +-------------+----------------------------------+----------------------------------+----------------------------------\n> + 0 | 10001010100010010000000000001011 |\n> 01011001111001000100011001011011 | 01011001111001000100011010011111\n> + 0.1 | 10100000111110001010110001001110 |\n> 00111100100010001100110111010101 | 00111100100010001101100011010101\n> + 0.1.2 | 01111000011100000101111101110100 |\n> 10101110011101011000000011010111 | 10101110011101110010001111000011\n> + 0 | 10001010100010010000000000001011 |\n> 01011001111001000100011001011011 | 01011001111001000100011010011111\n> + 0_asd.1_ASD | 01000010001010000000101001001101 |\n> 00111100100010001100110111010101 | 00111100100010001101100011010101\n> +(5 rows)\n>\n> Please post an updated version for the same.\n>\n> [1] - https://api.cirrus-ci.com/v1/artifact/task/5572544858685440/testrun/build-32/testrun/ltree/regress/regression.diffs\n>\n\nIt only fails on Linux - Debian Bullseye - Meson.\nI fixed the white space, named it v5.\nI also made the following changes:\nfrom\n\nuint64 levelHash = hash_any_extended((unsigned char *) al->name, al->len, seed);\nuint32 levelHash = hash_any((unsigned char *) al->name, al->len);\n\nto\nuint64 levelHash = DatumGetUInt64(hash_any_extended((unsigned char *)\nal->name, al->len, seed));\nuint32 levelHash = DatumGetUInt32(hash_any((unsigned char *) al->name,\nal->len));\n\n(these two line live in different functions)\n\nI have some problems testing it locally, so I post the patch.",
"msg_date": "Mon, 5 Feb 2024 08:00:00 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] ltree hash functions"
},
{
"msg_contents": "jian he <[email protected]> writes:\n> I also made the following changes:\n> from\n\n> uint64 levelHash = hash_any_extended((unsigned char *) al->name, al->len, seed);\n> uint32 levelHash = hash_any((unsigned char *) al->name, al->len);\n\n> to\n> uint64 levelHash = DatumGetUInt64(hash_any_extended((unsigned char *)\n> al->name, al->len, seed));\n> uint32 levelHash = DatumGetUInt32(hash_any((unsigned char *) al->name,\n> al->len));\n\nYeah, that'd fail on 32-bit machines.\n\nPushed v5 with some minor cosmetic tweaking.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 21 Mar 2024 18:29:43 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] ltree hash functions"
}
] |
[
{
"msg_contents": "Hi,\r\n\r\nThe release date for PostgreSQL 16 Beta 2 is June 29, 2023. Please be \r\nsure to commit any open items[1] for the Beta 2 release before June 25, \r\n2023 0:00 AoE[2] to give them enough time to work through the buildfarm.\r\n\r\nThanks,\r\n\r\nJonathan\r\n\r\n[1] \r\nhttps://wiki.postgresql.org/wiki/PostgreSQL_16_Open_Items#Important_Dates\r\n[2] https://en.wikipedia.org/wiki/Anywhere_on_Earth",
"msg_date": "Sat, 17 Jun 2023 12:14:16 -0400",
"msg_from": "\"Jonathan S. Katz\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "PostgreSQL 16 Beta 2 Release Date"
}
] |
[
{
"msg_contents": "Hi,\n\nI have been testing 16beta1, last commit\na14e75eb0b6a73821e0d66c0d407372ec8376105\nI just let sqlsmith do its magic before trying something else, and\ntoday I found a core with the attached backtrace.\n\nOnly information on the log was this:\n\nDETAIL: Failed process was running: autovacuum: VACUUM\npublic.array_index_op_test\n\n--\nJaime Casanova\nDirector de Servicios Profesionales\nSYSTEMGUARDS - Consultores de PostgreSQL",
"msg_date": "Sat, 17 Jun 2023 13:29:24 -0500",
"msg_from": "Jaime Casanova <[email protected]>",
"msg_from_op": true,
"msg_subject": "Assert while autovacuum was executing"
},
{
"msg_contents": "On Sat, Jun 17, 2023 at 11:29 AM Jaime Casanova\n<[email protected]> wrote:\n> I have been testing 16beta1, last commit\n> a14e75eb0b6a73821e0d66c0d407372ec8376105\n> I just let sqlsmith do its magic before trying something else, and\n> today I found a core with the attached backtrace.\n\nThe assertion that fails is the IsPageLockHeld assertion from commit 72e78d831a.\n\nI think that this is kind of an odd assertion. It's also not justified\nby any comments. Why invent this rule at all?\n\nTo be fair the use of page heavyweight locks in ginInsertCleanup() is\nalso odd. The only reason why ginInsertCleanup() uses page-level locks\nhere is to get the benefit of deadlock detection, and to be able to\nhold the lock for a relatively long time if that proves necessary\n(i.e., interruptibility). There are reasons to doubt that that's a\ngood design, but either way it seems fundamentally incompatible with\nthe rule enforced by the assertion.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Sat, 17 Jun 2023 11:47:35 -0700",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Assert while autovacuum was executing"
},
{
"msg_contents": "On Sun, Jun 18, 2023 at 12:18 AM Peter Geoghegan <[email protected]> wrote:\n>\n> On Sat, Jun 17, 2023 at 11:29 AM Jaime Casanova\n> <[email protected]> wrote:\n> > I have been testing 16beta1, last commit\n> > a14e75eb0b6a73821e0d66c0d407372ec8376105\n> > I just let sqlsmith do its magic before trying something else, and\n> > today I found a core with the attached backtrace.\n>\n> The assertion that fails is the IsPageLockHeld assertion from commit 72e78d831a.\n>\n\nI'll look into this and share my analysis.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 19 Jun 2023 17:13:37 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Assert while autovacuum was executing"
},
{
"msg_contents": "On Mon, Jun 19, 2023 at 5:13 PM Amit Kapila <[email protected]> wrote:\n>\n> On Sun, Jun 18, 2023 at 12:18 AM Peter Geoghegan <[email protected]> wrote:\n> >\n> > On Sat, Jun 17, 2023 at 11:29 AM Jaime Casanova\n> > <[email protected]> wrote:\n> > > I have been testing 16beta1, last commit\n> > > a14e75eb0b6a73821e0d66c0d407372ec8376105\n> > > I just let sqlsmith do its magic before trying something else, and\n> > > today I found a core with the attached backtrace.\n> >\n> > The assertion that fails is the IsPageLockHeld assertion from commit 72e78d831a.\n> >\n>\n> I'll look into this and share my analysis.\n>\n\nThis failure mode appears to be introduced in commit 7d71d3dd08 (in\nPG16) where we started to process the config file after acquiring page\nlock during autovacuum. The problem here is that after acquiring page\nlock (a heavy-weight lock), while processing the config file, we tried\nto access the catalog cache which in turn attempts to acquire a lock\non the catalog relation, and that leads to the assertion failure. This\nis because of an existing rule that we don't acquire any other\nheavyweight lock while holding the page lock except for relation\nextension. I think normally we should be careful about the lock\nordering for heavy-weight locks to avoid deadlocks but here there may\nnot be any existing hazard in acquiring a lock on the catalog table\nafter acquiring page lock on the gin index's metapage as I am not\naware of a scenario where we can acquire them in reverse order. One\nnaive idea is to have a parameter like vacuum_config_reload_safe to\nallow config reload during autovacuum and make it false for the gin\nindex cleanup code path.\n\nThe reason for the existing rule for page lock and relation extension\nlocks was to not allow them to participate in group locking which will\nallow other parallel operations like a parallel vacuum where multiple\nworkers can work on the same index, or parallel inserts, parallel\ncopy, etc. The related commits are 15ef6ff4b9, 72e78d831ab,\n85f6b49c2c, and 3ba59ccc89. See 3ba59ccc89 for more details (To allow\nparallel inserts and parallel copy, we have ensured that relation\nextension and page locks don't participate in group locking which\nmeans such locks can conflict among the same group members. This is\nrequired as it is no safer for two related processes to extend the\nsame relation or perform clean up in gin indexes at a time than for\nunrelated processes to do the same....).\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 20 Jun 2023 15:14:26 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Assert while autovacuum was executing"
},
{
"msg_contents": "On Tuesday, June 20, 2023 5:44 PM Amit Kapila <[email protected]> wrote:\r\n> \r\n> On Mon, Jun 19, 2023 at 5:13 PM Amit Kapila <[email protected]>\r\n> wrote:\r\n> >\r\n> > On Sun, Jun 18, 2023 at 12:18 AM Peter Geoghegan <[email protected]> wrote:\r\n> > >\r\n> > > On Sat, Jun 17, 2023 at 11:29 AM Jaime Casanova\r\n> > > <[email protected]> wrote:\r\n> > > > I have been testing 16beta1, last commit\r\n> > > > a14e75eb0b6a73821e0d66c0d407372ec8376105\r\n> > > > I just let sqlsmith do its magic before trying something else, and\r\n> > > > today I found a core with the attached backtrace.\r\n> > >\r\n> > > The assertion that fails is the IsPageLockHeld assertion from commit\r\n> 72e78d831a.\r\n> > >\r\n> >\r\n> > I'll look into this and share my analysis.\r\n> >\r\n> \r\n> This failure mode appears to be introduced in commit 7d71d3dd08 (in\r\n> PG16) where we started to process the config file after acquiring page lock\r\n> during autovacuum. The problem here is that after acquiring page lock (a\r\n> heavy-weight lock), while processing the config file, we tried to access the\r\n> catalog cache which in turn attempts to acquire a lock on the catalog relation,\r\n> and that leads to the assertion failure. This is because of an existing rule that we\r\n> don't acquire any other heavyweight lock while holding the page lock except\r\n> for relation extension. I think normally we should be careful about the lock\r\n> ordering for heavy-weight locks to avoid deadlocks but here there may not be\r\n> any existing hazard in acquiring a lock on the catalog table after acquiring page\r\n> lock on the gin index's metapage as I am not aware of a scenario where we can\r\n> acquire them in reverse order. One naive idea is to have a parameter like\r\n> vacuum_config_reload_safe to allow config reload during autovacuum and\r\n> make it false for the gin index cleanup code path.\r\n\r\nI also think it would be better to skip reloading config in page lock cases to be consistent\r\nwith the rule. And here is the patch which does the same.\r\n\r\nI tried to reproduce the assert failure(before applying the patch) using the following steps:\r\n\r\n1. I added a sleep before vacuum_delay_point in ginInsertCleanup and LOG(\"attach this process\") before sleeping.\r\n2. And changed few GUC to make autovacuum happen more frequently and then start the server.\r\n-\r\nautovacuum_naptime = 5s\r\nautovacuum_vacuum_threshold = 1\r\nautovacuum_vacuum_insert_threshold = 100\r\n-\r\n\r\n3. Then I execute the following sqls:\r\n-\r\ncreate table gin_test_tbl(i int4[]);\r\ncreate index gin_test_idx on gin_test_tbl using gin (i)\r\n with (fastupdate = on, gin_pending_list_limit = 4096);\r\ninsert into gin_test_tbl select array[1, 2, g] from generate_series(1, 20000) g;\r\ninsert into gin_test_tbl select array[1, 3, g] from generate_series(1, 1000) g;\r\n-\r\n4. After a while, I can see the LOG from autovacuum worker and then use gdb to attach to the autovacuum worker.\r\n5. When the autovacuum worker is blocked, I changed the \"default_text_search_config = 'pg_catalog.public'\" in configure file and reload it.\r\n6. 
Release the autovacuum worker and then I can see the assert failure.\r\n\r\nAnd I can see the assert failure doesn't happen after applying the patch.\r\n\r\n> \r\n> The reason for the existing rule for page lock and relation extension locks was\r\n> to not allow them to participate in group locking which will allow other parallel\r\n> operations like a parallel vacuum where multiple workers can work on the same\r\n> index, or parallel inserts, parallel copy, etc. The related commits are 15ef6ff4b9,\r\n> 72e78d831ab, 85f6b49c2c, and 3ba59ccc89. See 3ba59ccc89 for more details\r\n> (To allow parallel inserts and parallel copy, we have ensured that relation\r\n> extension and page locks don't participate in group locking which means such\r\n> locks can conflict among the same group members. This is required as it is no\r\n> safer for two related processes to extend the same relation or perform clean\r\n> up in gin indexes at a time than for unrelated processes to do the same....).\r\n\r\nBest Regards,\r\nHou zj",
"msg_date": "Wed, 21 Jun 2023 04:12:22 +0000",
"msg_from": "\"Zhijie Hou (Fujitsu)\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Assert while autovacuum was executing"
},
{
"msg_contents": "Hi,\n\nOn 2023-06-20 15:14:26 +0530, Amit Kapila wrote:\n> On Mon, Jun 19, 2023 at 5:13 PM Amit Kapila <[email protected]> wrote:\n> >\n> > On Sun, Jun 18, 2023 at 12:18 AM Peter Geoghegan <[email protected]> wrote:\n> > >\n> > > On Sat, Jun 17, 2023 at 11:29 AM Jaime Casanova\n> > > <[email protected]> wrote:\n> > > > I have been testing 16beta1, last commit\n> > > > a14e75eb0b6a73821e0d66c0d407372ec8376105\n> > > > I just let sqlsmith do its magic before trying something else, and\n> > > > today I found a core with the attached backtrace.\n> > >\n> > > The assertion that fails is the IsPageLockHeld assertion from commit 72e78d831a.\n> > >\n> >\n> > I'll look into this and share my analysis.\n> >\n> \n> This failure mode appears to be introduced in commit 7d71d3dd08 (in\n> PG16) where we started to process the config file after acquiring page\n> lock during autovacuum.\n\nI find it somewhat hard to believe that this is the only way to reach this\nissue. You're basically asserting that there's not a single cache lookup\nreachable from inside ginInsertCleanup() - which seems unlikely, given the\nrange of comparators that can exist.\n\n<plays around>\n\nYep. Doesn't even require enabling debug_discard_caches or reconnecting.\n\n\nDROP TABLE IF EXISTS tbl_foo;\nDROP TYPE IF EXISTS Foo;\n\nCREATE TYPE foo AS ENUM ('a', 'b', 'c');\nALTER TYPE foo ADD VALUE 'ab' BEFORE 'b';\nCREATE TABLE tbl_foo (foo foo);\nCREATE INDEX tbl_foo_idx ON tbl_foo USING gin (foo) WITH (fastupdate = on);\n\nINSERT INTO tbl_foo(foo) VALUES ('ab'), ('a'), ('b'), ('c');\n\nSELECT gin_clean_pending_list('tbl_foo_idx');\n\n\nAs far as I can tell 72e78d831a as-is is just bogus. Unfortunately that likely\nalso means 3ba59ccc89 is not right.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 20 Jun 2023 22:27:13 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Assert while autovacuum was executing"
},
{
"msg_contents": "On Tue, Jun 20, 2023 at 10:27 PM Andres Freund <[email protected]> wrote:\n> As far as I can tell 72e78d831a as-is is just bogus. Unfortunately that likely\n> also means 3ba59ccc89 is not right.\n\nQuite possibly. But I maintain that ginInsertCleanup() is probably\nalso bogus in a way that's directly relevant.\n\nDid you know that ginInsertCleanup() is the only code that uses\nheavyweight page locks these days? Though only on the index metapage!\n\nIsn't this the kind of thing that VACUUM's relation level lock is\nsupposed to take care of?\n\n\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Tue, 20 Jun 2023 23:23:19 -0700",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Assert while autovacuum was executing"
},
{
"msg_contents": "On Wed, Jun 21, 2023 at 10:57 AM Andres Freund <[email protected]> wrote:\n>\n> As far as I can tell 72e78d831a as-is is just bogus. Unfortunately that likely\n> also means 3ba59ccc89 is not right.\n>\n\nIndeed. I was thinking of a fix but couldn't find one yet. One idea I\nam considering is to allow catalog table locks after page lock but I\nthink apart from hacky that also won't work because we still need to\nremove the check added for page locks in the deadlock code path in\ncommit 3ba59ccc89 and may need to do something for group locking. Feel\nfree to share any ideas if you have, I can try to evaluate those in\ndetail. I think in the worst case we need to remove the changes added\nby 72e78d831a and 3ba59ccc89 which won't impact any existing feature\nbut will add a hurdle in parallelizing other write operations or even\nimproving the parallelism in vacuum (like allowing multiple workers\nfor an index).\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 22 Jun 2023 09:16:40 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Assert while autovacuum was executing"
},
{
"msg_contents": "On Wed, Jun 21, 2023 at 11:53 AM Peter Geoghegan <[email protected]> wrote:\n>\n> On Tue, Jun 20, 2023 at 10:27 PM Andres Freund <[email protected]> wrote:\n> > As far as I can tell 72e78d831a as-is is just bogus. Unfortunately that likely\n> > also means 3ba59ccc89 is not right.\n>\n> Quite possibly. But I maintain that ginInsertCleanup() is probably\n> also bogus in a way that's directly relevant.\n>\n> Did you know that ginInsertCleanup() is the only code that uses\n> heavyweight page locks these days? Though only on the index metapage!\n>\n> Isn't this the kind of thing that VACUUM's relation level lock is\n> supposed to take care of?\n>\n\nYeah, I also can't see why that shouldn't be sufficient for VACUUM.\nAssuming your observation is correct, what do you suggest doing in\nthis regard?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 22 Jun 2023 10:00:01 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Assert while autovacuum was executing"
},
{
"msg_contents": "Hi,\n\nOn 2023-06-22 10:00:01 +0530, Amit Kapila wrote:\n> On Wed, Jun 21, 2023 at 11:53 AM Peter Geoghegan <[email protected]> wrote:\n> >\n> > On Tue, Jun 20, 2023 at 10:27 PM Andres Freund <[email protected]> wrote:\n> > > As far as I can tell 72e78d831a as-is is just bogus. Unfortunately that likely\n> > > also means 3ba59ccc89 is not right.\n> >\n> > Quite possibly. But I maintain that ginInsertCleanup() is probably\n> > also bogus in a way that's directly relevant.\n> >\n> > Did you know that ginInsertCleanup() is the only code that uses\n> > heavyweight page locks these days? Though only on the index metapage!\n> >\n> > Isn't this the kind of thing that VACUUM's relation level lock is\n> > supposed to take care of?\n> >\n> \n> Yeah, I also can't see why that shouldn't be sufficient for VACUUM.\n\nI'd replied on that point to Peter earlier, accidentlly loosing the CC\nlist. The issue is that ginInsertCleanup() isn't just called from VACUUM, but\nalso from normal inserts (to reduce the size of the fastupdate list).\n\nYou can possibly come up with another scheme, but I think just doing this via\nthe relation lock might be problematic. Suddenly an insert would, temporarily,\nalso block operations that don't normally conflict with inserts etc.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 22 Jun 2023 09:38:13 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Assert while autovacuum was executing"
},
{
"msg_contents": "On Thu, Jun 22, 2023 at 9:16 AM Amit Kapila <[email protected]> wrote:\n>\n> On Wed, Jun 21, 2023 at 10:57 AM Andres Freund <[email protected]> wrote:\n> >\n> > As far as I can tell 72e78d831a as-is is just bogus. Unfortunately that likely\n> > also means 3ba59ccc89 is not right.\n> >\n>\n> Indeed. I was thinking of a fix but couldn't find one yet. One idea I\n> am considering is to allow catalog table locks after page lock but I\n> think apart from hacky that also won't work because we still need to\n> remove the check added for page locks in the deadlock code path in\n> commit 3ba59ccc89 and may need to do something for group locking.\n>\n\nI have further thought about this part and I think even if we remove\nthe changes in commit 72e78d831a (remove the assertion for page locks\nin LockAcquireExtended()) and remove the check added for page locks in\nFindLockCycleRecurseMember() via commit 3ba59ccc89, it is still okay\nto keep the change related to \"allow page lock to conflict among\nparallel group members\" in LockCheckConflicts(). This is because locks\non catalog tables don't conflict among group members. So, we shouldn't\nsee a deadlock among parallel group members. Let me try to explain\nthis thought via an example:\n\nBegin;\nLock pg_enum in Access Exclusive mode;\ngin_clean_pending_list() -- assume this function is executed by both\nleader and parallel worker; also this requires a lock on pg_enum as\nshown by Andres in email [1]\n\nSay the parallel worker acquires page lock first and it will also get\nlock on pg_enum because of group locking, so, the leader backend will\nwait for page lock for the parallel worker. Eventually, the parallel\nworker will release the page lock and the leader backend can get the\nlock. So, we should be still okay with parallelism.\n\nOTOH, if the above theory is wrong or people are not convinced, I am\nokay with removing all the changes in commits 72e78d831a and\n3ba59ccc89.\n\n[1] - https://www.postgresql.org/message-id/20230621052713.wc5377dyslxpckfj%40awork3.anarazel.de\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 23 Jun 2023 14:04:15 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Assert while autovacuum was executing"
},
{
"msg_contents": "On Fri, Jun 23, 2023 at 2:04 PM Amit Kapila <[email protected]> wrote:\n>\n> On Thu, Jun 22, 2023 at 9:16 AM Amit Kapila <[email protected]> wrote:\n> >\n> > On Wed, Jun 21, 2023 at 10:57 AM Andres Freund <[email protected]> wrote:\n> > >\n> > > As far as I can tell 72e78d831a as-is is just bogus. Unfortunately that likely\n> > > also means 3ba59ccc89 is not right.\n> > >\n> >\n> > Indeed. I was thinking of a fix but couldn't find one yet. One idea I\n> > am considering is to allow catalog table locks after page lock but I\n> > think apart from hacky that also won't work because we still need to\n> > remove the check added for page locks in the deadlock code path in\n> > commit 3ba59ccc89 and may need to do something for group locking.\n> >\n>\n> I have further thought about this part and I think even if we remove\n> the changes in commit 72e78d831a (remove the assertion for page locks\n> in LockAcquireExtended()) and remove the check added for page locks in\n> FindLockCycleRecurseMember() via commit 3ba59ccc89, it is still okay\n> to keep the change related to \"allow page lock to conflict among\n> parallel group members\" in LockCheckConflicts(). This is because locks\n> on catalog tables don't conflict among group members. So, we shouldn't\n> see a deadlock among parallel group members. Let me try to explain\n> this thought via an example:\n>\n\nIMHO, whatsoever the case this check[1], is not wrong at all. I agree\nthat we do not have parallel write present in the code so having this\ncheck is not necessary as of now. But in theory, this check is\ncorrect because this is saying that parallel leader and worker should\nconflict on the 'relation extension lock' and the 'page lock' and\nthat's the fact. It holds true irrespective of whether it is being\nused currently or not.\n\n\n[1]\n/*\n* The relation extension or page lock conflict even between the group\n* members.\n*/\nif (LOCK_LOCKTAG(*lock) == LOCKTAG_RELATION_EXTEND ||\n(LOCK_LOCKTAG(*lock) == LOCKTAG_PAGE))\n{\nPROCLOCK_PRINT(\"LockCheckConflicts: conflicting (group)\",\n proclock);\nreturn true;\n}\n\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 23 Jun 2023 15:46:51 +0530",
"msg_from": "Dilip Kumar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Assert while autovacuum was executing"
},
{
"msg_contents": "Hi,\n\nOn 2023-06-23 14:04:15 +0530, Amit Kapila wrote:\n> OTOH, if the above theory is wrong or people are not convinced, I am\n> okay with removing all the changes in commits 72e78d831a and\n> 3ba59ccc89.\n\nI am not convinced. And even if I were, coming up with new justifications in a\nreleased version, when the existing testing clearly wasn't enough to find the\ncurrent bug, doesn't strike me as wise.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 23 Jun 2023 09:37:47 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Assert while autovacuum was executing"
},
{
"msg_contents": "On Fri, Jun 23, 2023 at 10:07 PM Andres Freund <[email protected]> wrote:\n>\n> On 2023-06-23 14:04:15 +0530, Amit Kapila wrote:\n> > OTOH, if the above theory is wrong or people are not convinced, I am\n> > okay with removing all the changes in commits 72e78d831a and\n> > 3ba59ccc89.\n>\n> I am not convinced. And even if I were, coming up with new justifications in a\n> released version, when the existing testing clearly wasn't enough to find the\n> current bug, doesn't strike me as wise.\n>\n\nFair enough. If we could have been convinced of this then we can keep\nthe required change only for HEAD. But anyway let's remove the work\nrelated to both commits (72e78d831a and 3ba59ccc89) for now and then\nwe can come back to it when we parallelize writes. The attached patch\nremoves the changes added by both commits with slight tweaking in\ncomments/readme based on the recent state.\n\n-- \nWith Regards,\nAmit Kapila.",
"msg_date": "Mon, 26 Jun 2023 09:48:24 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Assert while autovacuum was executing"
},
{
"msg_contents": "On Monday, June 26, 2023 12:18 PM Amit Kapila <[email protected]> wrote:\r\n> \r\n> On Fri, Jun 23, 2023 at 10:07 PM Andres Freund <[email protected]> wrote:\r\n> >\r\n> > On 2023-06-23 14:04:15 +0530, Amit Kapila wrote:\r\n> > > OTOH, if the above theory is wrong or people are not convinced, I am\r\n> > > okay with removing all the changes in commits 72e78d831a and\r\n> > > 3ba59ccc89.\r\n> >\r\n> > I am not convinced. And even if I were, coming up with new\r\n> > justifications in a released version, when the existing testing\r\n> > clearly wasn't enough to find the current bug, doesn't strike me as wise.\r\n> >\r\n> \r\n> Fair enough. If we could have been convinced of this then we can keep the\r\n> required change only for HEAD. But anyway let's remove the work related to\r\n> both commits (72e78d831a and 3ba59ccc89) for now and then we can come\r\n> back to it when we parallelize writes. The attached patch removes the changes\r\n> added by both commits with slight tweaking in comments/readme based on\r\n> the recent state.\r\n\r\nThanks for the patch. I have confirmed that the patch to revert page lock\r\nhandling applies cleanly on all branches(13~HEAD) and the assert failure and\r\nundetectable deadlock problem are fixed after applying the patch.\r\n\r\nBest Regards,\r\nHou zj\r\n",
"msg_date": "Wed, 28 Jun 2023 01:56:00 +0000",
"msg_from": "\"Zhijie Hou (Fujitsu)\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Assert while autovacuum was executing"
},
{
"msg_contents": "On Wed, Jun 28, 2023 at 7:26 AM Zhijie Hou (Fujitsu)\n<[email protected]> wrote:\n>\n> On Monday, June 26, 2023 12:18 PM Amit Kapila <[email protected]> wrote:\n> >\n> > Fair enough. If we could have been convinced of this then we can keep the\n> > required change only for HEAD. But anyway let's remove the work related to\n> > both commits (72e78d831a and 3ba59ccc89) for now and then we can come\n> > back to it when we parallelize writes. The attached patch removes the changes\n> > added by both commits with slight tweaking in comments/readme based on\n> > the recent state.\n>\n> Thanks for the patch. I have confirmed that the patch to revert page lock\n> handling applies cleanly on all branches(13~HEAD) and the assert failure and\n> undetectable deadlock problem are fixed after applying the patch.\n>\n\nThanks for the verification. Unless someone has any further comments\nor suggestions, I'll push this next week sometime.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 30 Jun 2023 08:26:29 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Assert while autovacuum was executing"
},
{
"msg_contents": "On Fri, Jun 30, 2023 at 8:26 AM Amit Kapila <[email protected]> wrote:\n>\n> On Wed, Jun 28, 2023 at 7:26 AM Zhijie Hou (Fujitsu)\n> <[email protected]> wrote:\n> >\n>\n> Thanks for the verification. Unless someone has any further comments\n> or suggestions, I'll push this next week sometime.\n>\n\nPushed but forgot to do indent which leads to BF failure[1]. I'll take\ncare of it.\n\n[1] - https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=koel&dt=2023-07-06%2005%3A19%3A03\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 6 Jul 2023 11:25:56 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Assert while autovacuum was executing"
}
] |
[
{
"msg_contents": "Has anyone recently tried updating a streaming replication cluster using\ndebian’s pg_upgradecluster(1) on each node?\n\nDid things work well?\n\nMy last attempt (11 to 13, as I recall) had issues and I had to drop and\nre-install the db on the secondaries.\n\nI'd like to avoid that this time...\n\nShould I expect things to work easily?\n\n-JimC\n-- \nJames Cloos <[email protected]> OpenPGP: 0x997A9F17ED7DAEA6\n\n\n",
"msg_date": "Sat, 17 Jun 2023 19:10:23 -0400",
"msg_from": "James Cloos <[email protected]>",
"msg_from_op": true,
"msg_subject": "=?utf-8?Q?deb=E2=80=99s?= pg_upgradecluster(1) vs streaming\n replication"
},
{
"msg_contents": "Hi,\n\nOn Sat, Jun 17, 2023 at 07:10:23PM -0400, James Cloos wrote:\n> Has anyone recently tried updating a streaming replication cluster using\n> debian’s pg_upgradecluster(1) on each node?\n\nNote that the word \"cluster\" in upgradecluster refers to a single\nPostgres instance, a.k.a a cluster of databases. It is not designed to\nupgrade streaming replication clusters.\n\n> Did things work well?\n> \n> My last attempt (11 to 13, as I recall) had issues and I had to drop and\n> re-install the db on the secondaries.\n> \n> I'd like to avoid that this time...\n> \n> Should I expect things to work easily?\n\nNo, you need to either rebuild the secondaries or use the rsync method\nto resync them from the documentation. The latter is complicated and\niffy though, and not generally recommended I believe.\n\n\nMichael\n\n\n",
"msg_date": "Mon, 19 Jun 2023 11:01:12 +0200",
"msg_from": "Michael Banck <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: =?utf-8?B?ZGVi4oCZ?= =?utf-8?Q?s?= pg_upgradecluster(1) vs\n streaming replication"
}
] |
[
{
"msg_contents": "I started to look at the code in postmaster.c related to launching child \nprocesses. I tried to reduce the difference between EXEC_BACKEND and \n!EXEC_BACKEND code paths, and put the code that needs to differ behind a \nbetter abstraction. I started doing this to help with implementing \nmulti-threading, but it doesn't introduce anything thread-related yet \nand I think this improves readability anyway.\n\nThis is still work-inprogress, especially the last, big, patch in the \npatch set. Mainly, I need to clean up the comments in the new \nlaunch_backend.c file. But the other patches are in pretty good shape, \nand if you ignore launch_backend.c, you can see the effect on the other \nsource files.\n\nWith these patches, there is a new function for launching a postmaster \nchild process:\n\npid_t postmaster_child_launch(PostmasterChildType child_type, char \n*startup_data, size_t startup_data_len, ClientSocket *client_sock);\n\nThis function hides the differences between EXEC_BACKEND and \n!EXEC_BACKEND cases.\n\nIn 'startup_data', the caller can pass a blob of data to the child \nprocess, with different meaning for different kinds of child processes. \nFor a backend process, for example, it's used to pass the CAC_state, \nwhich indicates whether the backend accepts the connection or just sends \n\"too many clients\" error. And for background workers, it's used to pass \nthe BackgroundWorker struct. The startup data is passed to the child \nprocess in the\n\nClientSocket is a new struct holds a socket FD, and the local and remote \naddress info. Before this patch set, postmaster initializes the Port \nstructs but only fills in those fields in it. With this patch set, we \nhave a new ClientSocket struct just for those fields, which makes it \nmore clear which fields are initialized where.\n\nI haven't done much testing yet, and no testing at all on Windows, so \nthat's probably still broken.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)",
"msg_date": "Sun, 18 Jun 2023 14:22:33 +0300",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Refactoring backend fork+exec code"
},
{
"msg_contents": "> From 1d89eec53c7fefa7a4a8c011c9f19e3df64dc436 Mon Sep 17 00:00:00 2001\n> From: Heikki Linnakangas <[email protected]>\n> Date: Mon, 12 Jun 2023 16:33:20 +0300\n> Subject: [PATCH 4/9] Use FD_CLOEXEC on ListenSockets\n\n> @@ -831,7 +834,8 @@ StreamConnection(pgsocket server_fd, Port *port)\n> void\n> StreamClose(pgsocket sock)\n> {\n> - closesocket(sock);\n> + if (closesocket(sock) != 0)\n> + elog(LOG, \"closesocket failed: %m\");\n> }\n> \n> /*\n\nDo you think WARNING would be a more appropriate log level?\n\n> From 2f518be9e96cfed1a1a49b4af8f7cb4a837aa784 Mon Sep 17 00:00:00 2001\n> From: Heikki Linnakangas <[email protected]>\n> Date: Mon, 12 Jun 2023 18:07:54 +0300\n> Subject: [PATCH 5/9] Move \"too many clients already\" et al. checks from\n> ProcessStartupPacket.\n\nThis seems like a change you could push already (assuming another\nmaintainer agrees with you), which makes reviews for this patchset even\neasier.\n\n> From c25b67c045018a2bf05e6ff53819d26e561fc83f Mon Sep 17 00:00:00 2001\n> From: Heikki Linnakangas <[email protected]>\n> Date: Sun, 18 Jun 2023 14:11:16 +0300\n> Subject: [PATCH 6/9] Pass CAC as argument to backend process.\n\nCould you expand a bit more on why it is better to pass it as a separate\nargument? Does it not fit well conceptually in struct Port?\n\n> @@ -4498,15 +4510,19 @@ postmaster_forkexec(int argc, char *argv[])\n> * returns the pid of the fork/exec'd process, or -1 on failure\n> */\n> static pid_t\n> -backend_forkexec(Port *port)\n> +backend_forkexec(Port *port, CAC_state cac)\n> {\n> - char *av[4];\n> + char *av[5];\n> int ac = 0;\n> + char cacbuf[10];\n> \n> av[ac++] = \"postgres\";\n> av[ac++] = \"--forkbackend\";\n> av[ac++] = NULL; /* filled in by internal_forkexec */\n> \n> + snprintf(cacbuf, sizeof(cacbuf), \"%d\", (int) cac);\n> + av[ac++] = cacbuf;\n\nMight be worth a sanity check that there wasn't any truncation into\ncacbuf, which is an impossibility as the code is written, but still\nuseful for catching a future developer error.\n\nIs it worth adding a command line option at all instead of having the\nnaked positional argument? 
It would help anybody who might read the\ncommand line what the seemingly random integer stands for.\n\n> @@ -4910,7 +4926,10 @@ SubPostmasterMain(int argc, char *argv[])\n> /* Run backend or appropriate child */\n> if (strcmp(argv[1], \"--forkbackend\") == 0)\n> {\n> - Assert(argc == 3); /* shouldn't be any more args */\n> + CAC_state cac;\n> +\n> + Assert(argc == 4);\n> + cac = (CAC_state) atoi(argv[3]);\n\nPerhaps an assert or full error checking that atoi succeeds would be\nuseful for similar reasons to my previous comment.\n\n> From 658cba5cdb2e5c45faff84566906d2fcaa8a3674 Mon Sep 17 00:00:00 2001\n> From: Heikki Linnakangas <[email protected]>\n> Date: Mon, 12 Jun 2023 18:03:03 +0300\n> Subject: [PATCH 7/9] Remove ConnCreate and ConnFree, and allocate Port in\n> stack.\n\nAgain, seems like another patch that could be pushed separately assuming\nothers don't have any comments.\n\n> From 65384b9a6cfb3b9b589041526216e0f64d64bea5 Mon Sep 17 00:00:00 2001\n> From: Heikki Linnakangas <[email protected]>\n> Date: Sun, 18 Jun 2023 13:56:44 +0300\n> Subject: [PATCH 8/9] Introduce ClientSocket, rename some funcs\n\n> @@ -1499,7 +1499,7 @@ CloseServerPorts(int status, Datum arg)\n> {\n> if (ListenSocket[i] != PGINVALID_SOCKET)\n> {\n> - StreamClose(ListenSocket[i]);\n> + closesocket(ListenSocket[i]);\n> ListenSocket[i] = PGINVALID_SOCKET;\n> }\n> }\n\nI see you have been adding log messages in the case of closesocket()\nfailing. Do you think it is worth doing here as well?\n\nOne strange part about this patch is that in patch 4, you edit\nStreamClose() to emit a log message in the case of closesocket()\nfailure, but then this patch just completely removes it.\n\n> @@ -4407,11 +4420,11 @@ BackendInitialize(Port *port, CAC_state cac)\n> * Doesn't return at all.\n> */\n> static void\n> -BackendRun(Port *port)\n> +BackendRun(void)\n> {\n> /*\n> - * Create a per-backend PGPROC struct in shared memory. We must do\n> - * this before we can use LWLocks (in AttachSharedMemoryAndSemaphores).\n> + * Create a per-backend PGPROC struct in shared memory. We must do this\n> + * before we can use LWLocks (in AttachSharedMemoryAndSemaphores).\n> */\n> InitProcess();\n\nThis comment reflow probably fits better in the patch that added the\nAttachSharedMemoryAndSemaphores function.\n\n> From b33cfeb28a5419045acb659a01410b2b463bea3e Mon Sep 17 00:00:00 2001\n> From: Heikki Linnakangas <[email protected]>\n> Date: Sun, 18 Jun 2023 13:59:48 +0300\n> Subject: [PATCH 9/9] Refactor postmaster child process launching\n\n> - Move code related to launching backend processes to new source file,\n> process_start.c\n\nSince this seems pretty self-contained, my be easier to review if this\nwas its own commit.\n\n> - Refactor the mechanism of passing informaton from the parent to\n> child process. Instead of using different command-line arguments\n> when launching the child process in EXEC_BACKEND mode, pass a\n> variable-length blob of data along with all the global\n> variables. The contents of that blob depends on the kind of child\n> process being launched. In !EXEC_BACKEND mode, we use the same blob,\n> but it's simply inherited from the parent to child process.\n\nSame with this. 
Perhaps others would disagree.\n\n> +const PMChildEntry entry_kinds[] = {\n> + {\"backend\", BackendMain, true},\n> +\n> + {\"autovacuum launcher\", AutoVacLauncherMain, true},\n> + {\"autovacuum worker\", AutoVacWorkerMain, true},\n> + {\"bgworker\", BackgroundWorkerMain, true},\n> + {\"syslogger\", SysLoggerMain, false},\n> +\n> + {\"startup\", StartupProcessMain, true},\n> + {\"bgwriter\", BackgroundWriterMain, true},\n> + {\"archiver\", PgArchiverMain, true},\n> + {\"checkpointer\", CheckpointerMain, true},\n> + {\"wal_writer\", WalWriterMain, true},\n> + {\"wal_receiver\", WalReceiverMain, true},\n> +};\n\nIt seems like this could be made static. I didn't see it getting exposed\nin a header file anywhere, but I also admit that I can be blind at\ntimes.\n\nI need to spend more time looking at this last patch.\n\nNice work so far!\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Mon, 10 Jul 2023 16:07:24 -0500",
"msg_from": "\"Tristan Partin\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Refactoring backend fork+exec code"
},
{
"msg_contents": "Hi,\n\nOn 2023-06-18 14:22:33 +0300, Heikki Linnakangas wrote:\n> I started to look at the code in postmaster.c related to launching child\n> processes. I tried to reduce the difference between EXEC_BACKEND and\n> !EXEC_BACKEND code paths, and put the code that needs to differ behind a\n> better abstraction. I started doing this to help with implementing\n> multi-threading, but it doesn't introduce anything thread-related yet and I\n> think this improves readability anyway.\n\nYes please! This code is absolutely awful.\n\n\n> From 0cb6f8d665980d30a5d2a29013000744f16bf813 Mon Sep 17 00:00:00 2001\n> From: Heikki Linnakangas <[email protected]>\n> Date: Sun, 18 Jun 2023 11:00:21 +0300\n> Subject: [PATCH 3/9] Refactor CreateSharedMemoryAndSemaphores.\n> \n> Moves InitProcess calls a little later in EXEC_BACKEND case.\n\nWhat's the reason for this part? ISTM that we'd really want to get away from\nplastering duplicated InitProcess() etc everywhere.\n\nI think this might be easier to understand if you just changed did the\nCreateSharedMemoryAndSemaphores() -> AttachSharedMemoryAndSemaphores() piece\nin this commit, and the rest later.\n\n\n> +void\n> +AttachSharedMemoryAndSemaphores(void)\n> +{\n> +\t/* InitProcess must've been called already */\n\nPerhaps worth an assertion to make it easier to see that the order is wrong?\n\n\n> From 1d89eec53c7fefa7a4a8c011c9f19e3df64dc436 Mon Sep 17 00:00:00 2001\n> From: Heikki Linnakangas <[email protected]>\n> Date: Mon, 12 Jun 2023 16:33:20 +0300\n> Subject: [PATCH 4/9] Use FD_CLOEXEC on ListenSockets\n> \n> We went through some effort to close them in the child process. Better to\n> not hand them down to the child process in the first place.\n\nI think Thomas has a larger version of this patch:\nhttps://postgr.es/m/CA%2BhUKGKPNFcfBQduqof4-7C%3DavjcSfdkKBGvQoRuAvfocnvY0A%40mail.gmail.com\n\n\n\n> From 65384b9a6cfb3b9b589041526216e0f64d64bea5 Mon Sep 17 00:00:00 2001\n> From: Heikki Linnakangas <[email protected]>\n> Date: Sun, 18 Jun 2023 13:56:44 +0300\n> Subject: [PATCH 8/9] Introduce ClientSocket, rename some funcs\n> \n> - Move more of the work on a client socket to the child process.\n> \n> - Reduce the amount of data that needs to be passed from postmaster to\n> child. (Used to pass a full Port struct, although most of the fields were\n> empty. Now we pass the much slimmer ClientSocket.)\n\nI think there might be extensions accessing Port. Not sure if it's worth\nworrying about, but ...\n\n\n> --- a/src/backend/postmaster/autovacuum.c\n> +++ b/src/backend/postmaster/autovacuum.c\n> @@ -476,8 +476,8 @@ AutoVacLauncherMain(int argc, char *argv[])\n> \tpqsignal(SIGCHLD, SIG_DFL);\n> \n> \t/*\n> -\t * Create a per-backend PGPROC struct in shared memory. We must do\n> -\t * this before we can use LWLocks.\n> +\t * Create a per-backend PGPROC struct in shared memory. 
We must do this\n> +\t * before we can use LWLocks.\n> \t */\n> \tInitProcess();\n>\n\nDon't think this was intentional?\n\n\n> From b33cfeb28a5419045acb659a01410b2b463bea3e Mon Sep 17 00:00:00 2001\n> From: Heikki Linnakangas <[email protected]>\n> Date: Sun, 18 Jun 2023 13:59:48 +0300\n> Subject: [PATCH 9/9] Refactor postmaster child process launching\n> \n> - Move code related to launching backend processes to new source file,\n> process_start.c\n\nI think you might have renamed this to launch_backend.c?\n\n\n> - Introduce new postmaster_child_launch() function that deals with the\n> differences between EXEC_BACKEND and fork mode.\n> \n> - Refactor the mechanism of passing informaton from the parent to\n> child process. Instead of using different command-line arguments\n> when launching the child process in EXEC_BACKEND mode, pass a\n> variable-length blob of data along with all the global\n> variables. The contents of that blob depends on the kind of child\n> process being launched. In !EXEC_BACKEND mode, we use the same blob,\n> but it's simply inherited from the parent to child process.\n\n\n> +const\t\tPMChildEntry entry_kinds[] = {\n> +\t{\"backend\", BackendMain, true},\n> +\n> +\t{\"autovacuum launcher\", AutoVacLauncherMain, true},\n> +\t{\"autovacuum worker\", AutoVacWorkerMain, true},\n> +\t{\"bgworker\", BackgroundWorkerMain, true},\n> +\t{\"syslogger\", SysLoggerMain, false},\n> +\n> +\t{\"startup\", StartupProcessMain, true},\n> +\t{\"bgwriter\", BackgroundWriterMain, true},\n> +\t{\"archiver\", PgArchiverMain, true},\n> +\t{\"checkpointer\", CheckpointerMain, true},\n> +\t{\"wal_writer\", WalWriterMain, true},\n> +\t{\"wal_receiver\", WalReceiverMain, true},\n> +};\n\nI'd assign them with the PostmasterChildType as index, so there's no danger of\ngetting out of order.\n\nconst PMChildEntry entry_kinds = {\n [PMC_AV_LAUNCHER] = {\"autovacuum launcher\", AutoVacLauncherMain, true},\n ...\n}\n\nor such should work.\n\n\nI'd also use designated initializers for the fields, it's otherwise hard to\nknow what true means etc.\n\nI think it might be good to put more into array. If we e.g. knew whether a\nparticular child type is a backend-like, and aux process or syslogger, we\ncould avoid the duplicated InitAuxiliaryProcess(),\nMemoryContextDelete(PostmasterContext) etc calls everywhere.\n\n\n> +/*\n> + * SubPostmasterMain -- Get the fork/exec'd process into a state equivalent\n> + *\t\t\tto what it would be if we'd simply forked on Unix, and then\n> + *\t\t\tdispatch to the appropriate place.\n> + *\n> + * The first two command line arguments are expected to be \"--forkFOO\"\n> + * (where FOO indicates which postmaster child we are to become), and\n> + * the name of a variables file that we can read to load data that would\n> + * have been inherited by fork() on Unix. Remaining arguments go to the\n> + * subprocess FooMain() routine. 
XXX\n> + */\n> +void\n> +SubPostmasterMain(int argc, char *argv[])\n> +{\n> +\tPostmasterChildType child_type;\n> +\tchar\t *startup_data;\n> +\tsize_t\t\tstartup_data_len;\n> +\n> +\t/* In EXEC_BACKEND case we will not have inherited these settings */\n> +\tIsPostmasterEnvironment = true;\n> +\twhereToSendOutput = DestNone;\n> +\n> +\t/* Setup essential subsystems (to ensure elog() behaves sanely) */\n> +\tInitializeGUCOptions();\n> +\n> +\t/* Check we got appropriate args */\n> +\tif (argc < 3)\n> +\t\telog(FATAL, \"invalid subpostmaster invocation\");\n> +\n> +\tif (strncmp(argv[1], \"--forkchild=\", 12) == 0)\n> +\t{\n> +\t\tchar\t *entry_name = argv[1] + 12;\n> +\t\tbool\t\tfound = false;\n> +\n> +\t\tfor (int idx = 0; idx < lengthof(entry_kinds); idx++)\n> +\t\t{\n> +\t\t\tif (strcmp(entry_kinds[idx].name, entry_name) == 0)\n> +\t\t\t{\n> +\t\t\t\tchild_type = idx;\n> +\t\t\t\tfound = true;\n> +\t\t\t\tbreak;\n> +\t\t\t}\n> +\t\t}\n> +\t\tif (!found)\n> +\t\t\telog(ERROR, \"unknown child kind %s\", entry_name);\n> +\t}\n\n\nHm, shouldn't we error out when called without --forkchild?\n\n\n> +/* Save critical backend variables into the BackendParameters struct */\n> +#ifndef WIN32\n> +static bool\n> +save_backend_variables(BackendParameters *param, ClientSocket *client_sock)\n> +#else\n\nThere's so much of this kind of thing. Could we hide it in a struct or such\ninstead of needing ifdefs everywhere?\n\n\n\n> --- a/src/backend/storage/ipc/shmem.c\n> +++ b/src/backend/storage/ipc/shmem.c\n> @@ -144,6 +144,8 @@ InitShmemAllocation(void)\n> \t/*\n> \t * Initialize ShmemVariableCache for transaction manager. (This doesn't\n> \t * really belong here, but not worth moving.)\n> +\t *\n> +\t * XXX: we really should move this\n> \t */\n> \tShmemVariableCache = (VariableCache)\n> \t\tShmemAlloc(sizeof(*ShmemVariableCache));\n\nHeh. Indeed. And probably just rename it to something less insane.\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 10 Jul 2023 15:50:43 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Refactoring backend fork+exec code"
},
{
"msg_contents": "Focusing on this one patch in this series:\n\nOn 11/07/2023 01:50, Andres Freund wrote:\n>> From: Heikki Linnakangas <[email protected]>\n>> Date: Mon, 12 Jun 2023 16:33:20 +0300\n>> Subject: [PATCH 4/9] Use FD_CLOEXEC on ListenSockets\n>>\n>> We went through some effort to close them in the child process. Better to\n>> not hand them down to the child process in the first place.\n> \n> I think Thomas has a larger version of this patch:\n> https://postgr.es/m/CA%2BhUKGKPNFcfBQduqof4-7C%3DavjcSfdkKBGvQoRuAvfocnvY0A%40mail.gmail.com\n\nHmm, no, that's a little different. Thomas added the FD_CLOEXEC option \nto the *accepted* socket in commit 1da569ca1f. That was part of that \nthread. This patch adds the option to the *listen* sockets. That was not \ndiscussed in that thread, but it's certainly in the same vein.\n\nThomas: What do you think of the attached?\n\nOn 11/07/2023 00:07, Tristan Partin wrote:\n>> @@ -831,7 +834,8 @@ StreamConnection(pgsocket server_fd, Port *port)\n>> void\n>> StreamClose(pgsocket sock)\n>> {\n>> - closesocket(sock);\n>> + if (closesocket(sock) != 0)\n>> + elog(LOG, \"closesocket failed: %m\");\n>> }\n>>\n>> /*\n> \n> Do you think WARNING would be a more appropriate log level?\n\nNo, WARNING is for messages that you expect the client to receive. This \nfailure is unexpected at the system level, the message is for the \nadministrator. The distinction isn't always very clear, but LOG seems \nmore appropriate in this case.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)",
"msg_date": "Thu, 24 Aug 2023 14:41:44 +0300",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Use FD_CLOEXEC on ListenSockets (was Re: Refactoring backend\n fork+exec code)"
},
{
"msg_contents": "On Thu, Aug 24, 2023 at 11:41 PM Heikki Linnakangas <[email protected]> wrote:\n> On 11/07/2023 01:50, Andres Freund wrote:\n> >> From: Heikki Linnakangas <[email protected]>\n> >> Date: Mon, 12 Jun 2023 16:33:20 +0300\n> >> Subject: [PATCH 4/9] Use FD_CLOEXEC on ListenSockets\n> >>\n> >> We went through some effort to close them in the child process. Better to\n> >> not hand them down to the child process in the first place.\n> >\n> > I think Thomas has a larger version of this patch:\n> > https://postgr.es/m/CA%2BhUKGKPNFcfBQduqof4-7C%3DavjcSfdkKBGvQoRuAvfocnvY0A%40mail.gmail.com\n>\n> Hmm, no, that's a little different. Thomas added the FD_CLOEXEC option\n> to the *accepted* socket in commit 1da569ca1f. That was part of that\n> thread. This patch adds the option to the *listen* sockets. That was not\n> discussed in that thread, but it's certainly in the same vein.\n>\n> Thomas: What do you think of the attached?\n\nLGTM. I vaguely recall thinking that it might be better to keep\nEXEC_BACKEND and !EXEC_BACKEND working the same which might be why I\ndidn't try this one, but it looks fine with the comment to explain, as\nyou have it. (It's a shame we can't use O_CLOFORK.)\n\nThere was some question in the other thread about whether doing that\nto the server socket might affect accepted sockets too on some OS, but\nI can at least confirm that your patch works fine on FreeBSD in an\nEXEC_BACKEND build. I think there were some historical disagreements\nabout which socket properties were inherited, but not that.\n\n\n",
"msg_date": "Fri, 25 Aug 2023 00:48:14 +1200",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Use FD_CLOEXEC on ListenSockets (was Re: Refactoring backend\n fork+exec code)"
},
{
"msg_contents": "On 24/08/2023 15:48, Thomas Munro wrote:\n> LGTM. I vaguely recall thinking that it might be better to keep\n> EXEC_BACKEND and !EXEC_BACKEND working the same which might be why I\n> didn't try this one, but it looks fine with the comment to explain, as\n> you have it. (It's a shame we can't use O_CLOFORK.)\n\nYeah, O_CLOFORK would be nice..\n\nCommitted, thanks!\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Thu, 24 Aug 2023 17:05:46 +0300",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Use FD_CLOEXEC on ListenSockets (was Re: Refactoring backend\n fork+exec code)"
},
{
"msg_contents": "On Thu, Aug 24, 2023 at 10:05 AM Heikki Linnakangas <[email protected]> wrote:\n\n> On 24/08/2023 15:48, Thomas Munro wrote:\n> > LGTM. I vaguely recall thinking that it might be better to keep\n> > EXEC_BACKEND and !EXEC_BACKEND working the same which might be why I\n> > didn't try this one, but it looks fine with the comment to explain, as\n> > you have it. (It's a shame we can't use O_CLOFORK.)\n>\n> Yeah, O_CLOFORK would be nice..\n>\n> Committed, thanks!\n>\n>\nSince this commit, I'm getting a lot (63 per restart) of messages:\n\n LOG: could not close client or listen socket: Bad file descriptor\n\nAll I have to do to get the message is turn logging_collector = on and\nrestart.\n\nThe close failure condition existed before the commit, it just wasn't\nlogged before. So, did the extra logging added here just uncover a\npre-existing bug?\n\nThe LOG message is sent to the terminal, not to the log file.\n\nCheers,\n\nJeff\n\nOn Thu, Aug 24, 2023 at 10:05 AM Heikki Linnakangas <[email protected]> wrote:On 24/08/2023 15:48, Thomas Munro wrote:\n> LGTM. I vaguely recall thinking that it might be better to keep\n> EXEC_BACKEND and !EXEC_BACKEND working the same which might be why I\n> didn't try this one, but it looks fine with the comment to explain, as\n> you have it. (It's a shame we can't use O_CLOFORK.)\n\nYeah, O_CLOFORK would be nice..\n\nCommitted, thanks!Since this commit, I'm getting a lot (63 per restart) of messages: LOG: could not close client or listen socket: Bad file descriptor All I have to do to get the message is turn logging_collector = on and restart.The close failure condition existed before the commit, it just wasn't logged before. So, did the extra logging added here just uncover a pre-existing bug?The LOG message is sent to the terminal, not to the log file.Cheers,Jeff",
"msg_date": "Mon, 28 Aug 2023 11:55:52 -0400",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Use FD_CLOEXEC on ListenSockets (was Re: Refactoring backend\n fork+exec code)"
},
{
"msg_contents": "On 28/08/2023 18:55, Jeff Janes wrote:\n> Since this commit, I'm getting a lot (63 per restart) of messages:\n> \n> LOG: could not close client or listen socket: Bad file descriptor\n> All I have to do to get the message is turn logging_collector = on and \n> restart.\n> \n> The close failure condition existed before the commit, it just wasn't \n> logged before. So, did the extra logging added here just uncover a \n> pre-existing bug?\n\nYes, so it seems. Syslogger is started before the ListenSockets array is \ninitialized, so its still all zeros. When syslogger is forked, the child \nprocess tries to close all the listen sockets, which are all zeros. So \nsyslogger calls close(0) MAXLISTEN (64) times. Attached patch moves the \narray initialization earlier.\n\nThe first close(0) actually does have an effect: it closes stdin, which \nis fd 0. That is surely accidental, but I wonder if we should indeed \nclose stdin in child processes? The postmaster process doesn't do \nanything with stdin either, although I guess a library might try to read \na passphrase from stdin before starting up, for example.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)",
"msg_date": "Mon, 28 Aug 2023 23:52:15 +0300",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Use FD_CLOEXEC on ListenSockets (was Re: Refactoring backend\n fork+exec code)"
},
{
"msg_contents": "On Mon, Aug 28, 2023 at 11:52:15PM +0300, Heikki Linnakangas wrote:\n> On 28/08/2023 18:55, Jeff Janes wrote:\n>> Since this commit, I'm getting a lot (63 per restart) of messages:\n>> \n>> LOG: could not close client or listen socket: Bad file descriptor\n>> All I have to do to get the message is turn logging_collector = on and\n>> restart.\n>> \n>> The close failure condition existed before the commit, it just wasn't\n>> logged before. So, did the extra logging added here just uncover a\n>> pre-existing bug?\n\nIn case you've not noticed:\nhttps://www.postgresql.org/message-id/[email protected]\nBut it does not really matter now ;)\n\n> Yes, so it seems. Syslogger is started before the ListenSockets array is\n> initialized, so its still all zeros. When syslogger is forked, the child\n> process tries to close all the listen sockets, which are all zeros. So\n> syslogger calls close(0) MAXLISTEN (64) times. Attached patch moves the\n> array initialization earlier.\n\nYep, I've reached the same conclusion. Wouldn't it be cleaner to move\nthe callback registration of CloseServerPorts() closer to the array\ninitialization, though?\n\n> The first close(0) actually does have an effect: it closes stdin, which is\n> fd 0. That is surely accidental, but I wonder if we should indeed close\n> stdin in child processes? The postmaster process doesn't do anything with\n> stdin either, although I guess a library might try to read a passphrase from\n> stdin before starting up, for example.\n\nWe would have heard about that, wouldn't we? I may be missing\nsomething of course, but on HEAD, the array initialization is done\nbefore starting any child processes, and the syslogger is the first\none forked.\n--\nMichael",
"msg_date": "Tue, 29 Aug 2023 07:28:16 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Use FD_CLOEXEC on ListenSockets (was Re: Refactoring backend\n fork+exec code)"
},
{
"msg_contents": "On 29/08/2023 01:28, Michael Paquier wrote:\n> \n> In case you've not noticed:\n> https://www.postgresql.org/message-id/[email protected]\n> But it does not really matter now ;)\n\nAh sorry, missed that thread.\n\n>> Yes, so it seems. Syslogger is started before the ListenSockets array is\n>> initialized, so its still all zeros. When syslogger is forked, the child\n>> process tries to close all the listen sockets, which are all zeros. So\n>> syslogger calls close(0) MAXLISTEN (64) times. Attached patch moves the\n>> array initialization earlier.\n> \n> Yep, I've reached the same conclusion. Wouldn't it be cleaner to move\n> the callback registration of CloseServerPorts() closer to the array\n> initialization, though?\n\nOk, pushed that way.\n\nI checked the history of this: it goes back to commit 9a86f03b4e in \nversion 13. The SysLogger_Start() call used to be later, after setting p \nListenSockets, but that commit moved it. So I backpatched this to v13.\n\n>> The first close(0) actually does have an effect: it closes stdin, which is\n>> fd 0. That is surely accidental, but I wonder if we should indeed close\n>> stdin in child processes? The postmaster process doesn't do anything with\n>> stdin either, although I guess a library might try to read a passphrase from\n>> stdin before starting up, for example.\n> \n> We would have heard about that, wouldn't we? I may be missing\n> something of course, but on HEAD, the array initialization is done\n> before starting any child processes, and the syslogger is the first\n> one forked.\n\nYes, syslogger is the only process that closes stdin. After moving the \ninitialization, it doesn't close it either.\n\nThinking about this some more, the ListenSockets array is a bit silly \nanyway. We fill the array starting from index 0, always append to the \nend, and never remove entries from it. It would seem more \nstraightforward to keep track of the used size of the array. Currently \nwe always loop through the unused parts too, and e.g. \nConfigurePostmasterWaitSet() needs to walk the array to count how many \nelements are in use.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Tue, 29 Aug 2023 09:21:32 +0300",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Use FD_CLOEXEC on ListenSockets (was Re: Refactoring backend\n fork+exec code)"
},
{
"msg_contents": "On 29/08/2023 09:21, Heikki Linnakangas wrote:\n> Thinking about this some more, the ListenSockets array is a bit silly\n> anyway. We fill the array starting from index 0, always append to the\n> end, and never remove entries from it. It would seem more\n> straightforward to keep track of the used size of the array. Currently\n> we always loop through the unused parts too, and e.g.\n> ConfigurePostmasterWaitSet() needs to walk the array to count how many\n> elements are in use.\n\nLike this.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)",
"msg_date": "Tue, 29 Aug 2023 09:58:48 +0300",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Use FD_CLOEXEC on ListenSockets (was Re: Refactoring backend\n fork+exec code)"
},
{
"msg_contents": "On 29/08/2023 09:58, Heikki Linnakangas wrote:\n> On 29/08/2023 09:21, Heikki Linnakangas wrote:\n>> Thinking about this some more, the ListenSockets array is a bit silly\n>> anyway. We fill the array starting from index 0, always append to the\n>> end, and never remove entries from it. It would seem more\n>> straightforward to keep track of the used size of the array. Currently\n>> we always loop through the unused parts too, and e.g.\n>> ConfigurePostmasterWaitSet() needs to walk the array to count how many\n>> elements are in use.\n> \n> Like this.\n\nThis seems pretty uncontroversial, and I heard no objections, so I went \nahead and committed that.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Thu, 5 Oct 2023 15:08:37 +0300",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Use FD_CLOEXEC on ListenSockets (was Re: Refactoring backend\n fork+exec code)"
},
{
"msg_contents": "On Thu, Oct 05, 2023 at 03:08:37PM +0300, Heikki Linnakangas wrote:\n> This seems pretty uncontroversial, and I heard no objections, so I went\n> ahead and committed that.\n\nIt looks like e29c4643951 is causing issues here. While doing\nbenchmarking on a cluster compiled with -O2, I got a crash:\nLOG: system logger process (PID 27924) was terminated by signal 11: Segmentation fault \n\nProgram terminated with signal SIGSEGV, Segmentation fault.\n#0 0x000055ef3b9aed20 in pfree ()\n(gdb) bt\n#0 0x000055ef3b9aed20 in pfree ()\n#1 0x000055ef3b7e0e41 in ClosePostmasterPorts ()\n#2 0x000055ef3b7e6649 in SysLogger_Start ()\n#3 0x000055ef3b7e4413 in PostmasterMain () \n\nOkay, the backtrace is not that useful. I'll see if I can get\nsomething better, still it seems like this has broken the way the\nsyslogger closes these ports.\n--\nMichael",
"msg_date": "Fri, 6 Oct 2023 14:30:16 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Use FD_CLOEXEC on ListenSockets (was Re: Refactoring backend\n fork+exec code)"
},
{
"msg_contents": "On Fri, Oct 06, 2023 at 02:30:16PM +0900, Michael Paquier wrote:\n> Okay, the backtrace is not that useful. I'll see if I can get\n> something better, still it seems like this has broken the way the\n> syslogger closes these ports.\n\nAnd here you go:\nProgram terminated with signal SIGSEGV, Segmentation fault.\n#0 GetMemoryChunkMethodID (pointer=0x0) at mcxt.c:196 196 header =\n*((const uint64 *) ((const char *) pointer - sizeof(uint64)));\n(gdb) bt\n#0 GetMemoryChunkMethodID (pointer=0x0) at mcxt.c:196\n#1 0x0000557d04176d59 in pfree (pointer=0x0) at mcxt.c:1463\n#2 0x0000557d03e8eab3 in ClosePostmasterPorts (am_syslogger=true) at postmaster.c:2571\n#3 0x0000557d03e93ac2 in SysLogger_Start () at syslogger.c:686\n#4 0x0000557d03e8c5b7 in PostmasterMain (argc=3, argv=0x557d0471ed00)\nat postmaster.c:1148\n#5 0x0000557d03d48e34 in main (argc=3, argv=0x557d0471ed00) at main.c:198\n(gdb) up 2\n#2 0x0000557d03e8eab3 in ClosePostmasterPorts (am_syslogger=true) at\npostmaster.c:2571\n2571 pfree(ListenSockets);\n(gdb) p ListenSockets $1 = (pgsocket *) 0x0\n--\nMichael",
"msg_date": "Fri, 6 Oct 2023 15:50:30 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Use FD_CLOEXEC on ListenSockets (was Re: Refactoring backend\n fork+exec code)"
},
{
"msg_contents": "On 06/10/2023 09:50, Michael Paquier wrote:\n> On Fri, Oct 06, 2023 at 02:30:16PM +0900, Michael Paquier wrote:\n>> Okay, the backtrace is not that useful. I'll see if I can get\n>> something better, still it seems like this has broken the way the\n>> syslogger closes these ports.\n> \n> And here you go:\n> Program terminated with signal SIGSEGV, Segmentation fault.\n> #0 GetMemoryChunkMethodID (pointer=0x0) at mcxt.c:196 196 header =\n> *((const uint64 *) ((const char *) pointer - sizeof(uint64)));\n> (gdb) bt\n> #0 GetMemoryChunkMethodID (pointer=0x0) at mcxt.c:196\n> #1 0x0000557d04176d59 in pfree (pointer=0x0) at mcxt.c:1463\n> #2 0x0000557d03e8eab3 in ClosePostmasterPorts (am_syslogger=true) at postmaster.c:2571\n> #3 0x0000557d03e93ac2 in SysLogger_Start () at syslogger.c:686\n> #4 0x0000557d03e8c5b7 in PostmasterMain (argc=3, argv=0x557d0471ed00)\n> at postmaster.c:1148\n> #5 0x0000557d03d48e34 in main (argc=3, argv=0x557d0471ed00) at main.c:198\n> (gdb) up 2\n> #2 0x0000557d03e8eab3 in ClosePostmasterPorts (am_syslogger=true) at\n> postmaster.c:2571\n> 2571 pfree(ListenSockets);\n> (gdb) p ListenSockets $1 = (pgsocket *) 0x0\n\nFixed, thanks!\n\nI did a quick test with syslogger enabled before committing, but didn't \nnotice the segfault. I missed it because syslogger gets restarted and \nthen it worked.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Fri, 6 Oct 2023 10:27:22 +0300",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Use FD_CLOEXEC on ListenSockets (was Re: Refactoring backend\n fork+exec code)"
},
{
"msg_contents": "On Fri, Oct 06, 2023 at 10:27:22AM +0300, Heikki Linnakangas wrote:\n> I did a quick test with syslogger enabled before committing, but didn't\n> notice the segfault. I missed it because syslogger gets restarted and then\n> it worked.\n\nThanks, Heikki.\n--\nMichael",
"msg_date": "Fri, 6 Oct 2023 17:02:50 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Use FD_CLOEXEC on ListenSockets (was Re: Refactoring backend\n fork+exec code)"
},
{
"msg_contents": "I updated this patch set, addressing some of the straightforward \ncomments from Tristan and Andres, and did some more cleanups, commenting \netc. Works on Windows now.\n\nReplies to some of the individual comments below:\n\nOn 11/07/2023 00:07, Tristan Partin wrote:\n>> @@ -4498,15 +4510,19 @@ postmaster_forkexec(int argc, char *argv[])\n>> * returns the pid of the fork/exec'd process, or -1 on failure\n>> */\n>> static pid_t\n>> -backend_forkexec(Port *port)\n>> +backend_forkexec(Port *port, CAC_state cac)\n>> {\n>> - char *av[4];\n>> + char *av[5];\n>> int ac = 0;\n>> + char cacbuf[10];\n>>\n>> av[ac++] = \"postgres\";\n>> av[ac++] = \"--forkbackend\";\n>> av[ac++] = NULL; /* filled in by internal_forkexec */\n>>\n>> + snprintf(cacbuf, sizeof(cacbuf), \"%d\", (int) cac);\n>> + av[ac++] = cacbuf;\n> \n> Might be worth a sanity check that there wasn't any truncation into\n> cacbuf, which is an impossibility as the code is written, but still\n> useful for catching a future developer error.\n> \n> Is it worth adding a command line option at all instead of having the\n> naked positional argument? It would help anybody who might read the\n> command line what the seemingly random integer stands for.\n\n+1. This gets refactored away in the last patch though. In the last \npatch, I used a child process name instead of an integer precisely \nbecause it looks nicer in \"ps\".\n\nI wonder if we should add more command line arguments, just for \ninformational purposes. Autovacuum worker process could display the \ndatabase name it's connected to, for example. I don't know how important \nthe command line is on Windows, is it displayed by tools that people \ncare about?\n\nOn 11/07/2023 01:50, Andres Freund wrote:\n> On 2023-06-18 14:22:33 +0300, Heikki Linnakangas wrote:\n>> From 0cb6f8d665980d30a5d2a29013000744f16bf813 Mon Sep 17 00:00:00 2001\n>> From: Heikki Linnakangas <[email protected]>\n>> Date: Sun, 18 Jun 2023 11:00:21 +0300\n>> Subject: [PATCH 3/9] Refactor CreateSharedMemoryAndSemaphores.\n>>\n>> Moves InitProcess calls a little later in EXEC_BACKEND case.\n> \n> What's the reason for this part? \n\nThe point is that with this commit, InitProcess() is called at same \nplace in EXEC_BACKEND mode and !EXEC_BACKEND. It feels more consistent \nthat way.\n\n> ISTM that we'd really want to get away from plastering duplicated\n> InitProcess() etc everywhere.\n\nSure, we could do more to reduce the duplication. I think this is a step \nin the right direction, though.\n\n>> From 65384b9a6cfb3b9b589041526216e0f64d64bea5 Mon Sep 17 00:00:00 2001\n>> From: Heikki Linnakangas <[email protected]>\n>> Date: Sun, 18 Jun 2023 13:56:44 +0300\n>> Subject: [PATCH 8/9] Introduce ClientSocket, rename some funcs\n>>\n>> - Move more of the work on a client socket to the child process.\n>>\n>> - Reduce the amount of data that needs to be passed from postmaster to\n>> child. (Used to pass a full Port struct, although most of the fields were\n>> empty. Now we pass the much slimmer ClientSocket.)\n> \n> I think there might be extensions accessing Port. Not sure if it's worth\n> worrying about, but ...\n\nThat's OK. Port still exists, it's just created a little later. 
It will \nbe initialized by the time extensions might look at it.\n\n>> +const\t\tPMChildEntry entry_kinds[] = {\n>> +\t{\"backend\", BackendMain, true},\n>> +\n>> +\t{\"autovacuum launcher\", AutoVacLauncherMain, true},\n>> +\t{\"autovacuum worker\", AutoVacWorkerMain, true},\n>> +\t{\"bgworker\", BackgroundWorkerMain, true},\n>> +\t{\"syslogger\", SysLoggerMain, false},\n>> +\n>> +\t{\"startup\", StartupProcessMain, true},\n>> +\t{\"bgwriter\", BackgroundWriterMain, true},\n>> +\t{\"archiver\", PgArchiverMain, true},\n>> +\t{\"checkpointer\", CheckpointerMain, true},\n>> +\t{\"wal_writer\", WalWriterMain, true},\n>> +\t{\"wal_receiver\", WalReceiverMain, true},\n>> +};\n> \n> I'd assign them with the PostmasterChildType as index, so there's no danger of\n> getting out of order.\n> \n> const PMChildEntry entry_kinds = {\n> [PMC_AV_LAUNCHER] = {\"autovacuum launcher\", AutoVacLauncherMain, true},\n> ...\n> }\n> \n> or such should work.\n\nNice, I didn't know about that syntax! Changed it that way.\n\n> I'd also use designated initializers for the fields, it's otherwise hard to\n> know what true means etc.\n\nI think with one boolean and the struct declaration nearby, it's fine. \nIf this becomes more complex in the future, with more fields, I agree.\n\n> I think it might be good to put more into array. If we e.g. knew whether a\n> particular child type is a backend-like, and aux process or syslogger, we\n> could avoid the duplicated InitAuxiliaryProcess(),\n> MemoryContextDelete(PostmasterContext) etc calls everywhere.\n\nI agree we could do more refactoring here. I don't agree with adding \nmore to this struct though. I'm trying to limit the code in \nlaunch_backend.c to hiding the differences between EXEC_BACKEND and \n!EXEC_BACKEND. In EXEC_BACKEND mode, it restores the child process to \nthe same state as it is after fork() in !EXEC_BACKEND mode. Any other \ninitialization steps belong elsewhere.\n\nSome of the steps between InitPostmasterChild() and the *Main() \nfunctions could probably be moved around and refactored. I didn't think \nhard about that. I think that can be done separately as follow-up patch.\n\n>> +/* Save critical backend variables into the BackendParameters struct */\n>> +#ifndef WIN32\n>> +static bool\n>> +save_backend_variables(BackendParameters *param, ClientSocket *client_sock)\n>> +#else\n> \n> There's so much of this kind of thing. Could we hide it in a struct or such\n> instead of needing ifdefs everywhere?\n\nA lot of #ifdefs you mean? I agree launch_backend.c has a lot of those. \nI haven't come up with any good ideas on reducing them, unfortunately.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)",
"msg_date": "Wed, 11 Oct 2023 14:12:47 +0300",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Refactoring backend fork+exec code"
},
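The designated-initializer syntax suggested in the message above (array entries keyed by enum value) is standard C99. A minimal, self-contained sketch of the idea; the PMC_* names and struct fields here are invented stand-ins for illustration, not the actual PostgreSQL definitions:

#include <stdbool.h>
#include <stdio.h>

/* Invented stand-ins; not the real PostmasterChildType or entry table. */
typedef enum
{
	PMC_BACKEND,
	PMC_AV_LAUNCHER,
	PMC_SYSLOGGER,
	NUM_PMC_TYPES
} PMChildType;

typedef struct
{
	const char *name;
	bool		shmem_attach;
} PMChildEntry;

/*
 * Keying each entry by its enum value means the table cannot silently get
 * out of order if the enum members are reordered or new ones are added.
 */
static const PMChildEntry entry_kinds[] = {
	[PMC_BACKEND] = {"backend", true},
	[PMC_AV_LAUNCHER] = {"autovacuum launcher", true},
	[PMC_SYSLOGGER] = {"syslogger", false},
};

int
main(void)
{
	for (int i = 0; i < NUM_PMC_TYPES; i++)
		printf("%s attaches to shmem: %d\n",
			   entry_kinds[i].name, (int) entry_kinds[i].shmem_attach);
	return 0;
}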
{
"msg_contents": "On 11/10/2023 14:12, Heikki Linnakangas wrote:\n> On 11/07/2023 01:50, Andres Freund wrote:\n>>> Subject: [PATCH 3/9] Refactor CreateSharedMemoryAndSemaphores.\n>>>\n>>> Moves InitProcess calls a little later in EXEC_BACKEND case.\n>>\n>> What's the reason for this part?\n> \n> The point is that with this commit, InitProcess() is called at same\n> place in EXEC_BACKEND mode and !EXEC_BACKEND. It feels more consistent\n> that way.\n> \n>> ISTM that we'd really want to get away from plastering duplicated\n>> InitProcess() etc everywhere.\n> \n> Sure, we could do more to reduce the duplication. I think this is a step\n> in the right direction, though.\n\nHere's another rebased patch set. Compared to previous version, I did a \nlittle more refactoring around CreateSharedMemoryAndSemaphores and \nInitProcess:\n\n- patch 1 splits CreateSharedMemoryAndSemaphores into two functions: \nCreateSharedMemoryAndSemaphores is now only called at postmaster \nstartup, and a new function called AttachSharedMemoryStructs() is called \nin backends in EXEC_BACKEND mode. I extracted the common parts of those \nfunctions to a new static function. (Some of this refactoring used to be \npart of the 3rd patch in the series, but it seems useful on its own, so \nI split it out.)\n\n- patch 3 moves the call to AttachSharedMemoryStructs() to \nInitProcess(), reducing the boilerplate code a little.\n\n\nThe patches are incrementally useful, so if you don't have time to \nreview all of them, a review on a subset would be useful too.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)",
"msg_date": "Thu, 30 Nov 2023 01:36:25 +0200",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Refactoring backend fork+exec code"
},
{
"msg_contents": "On Wed Nov 29, 2023 at 5:36 PM CST, Heikki Linnakangas wrote:\n> On 11/10/2023 14:12, Heikki Linnakangas wrote:\n> Here's another rebased patch set. Compared to previous version, I did a \n> little more refactoring around CreateSharedMemoryAndSemaphores and \n> InitProcess:\n>\n> - patch 1 splits CreateSharedMemoryAndSemaphores into two functions: \n> CreateSharedMemoryAndSemaphores is now only called at postmaster \n> startup, and a new function called AttachSharedMemoryStructs() is called \n> in backends in EXEC_BACKEND mode. I extracted the common parts of those \n> functions to a new static function. (Some of this refactoring used to be \n> part of the 3rd patch in the series, but it seems useful on its own, so \n> I split it out.)\n>\n> - patch 3 moves the call to AttachSharedMemoryStructs() to \n> InitProcess(), reducing the boilerplate code a little.\n>\n>\n> The patches are incrementally useful, so if you don't have time to \n> review all of them, a review on a subset would be useful too.\n\n> From 8886db1ed6bae21bf6d77c9bb1230edbb55e24f9 Mon Sep 17 00:00:00 2001\n> From: Heikki Linnakangas <[email protected]>\n> Date: Thu, 30 Nov 2023 00:04:22 +0200\n> Subject: [PATCH v3 4/7] Pass CAC as argument to backend process\n\nFor me, being new to the code, it would be nice to have more of an \nexplanation as to why this is \"better.\" I don't doubt it; it would just \nhelp me and future readers of this commit in the future. More of an \nexplanation in the commit message would suffice.\n\nMy other comment on this commit is that we now seem to have lost the \ncontext on what CAC stands for. Before we had the member variable to \nexplain it. A comment on the enum would be great or changing cac named \nvariables to canAcceptConnections. I did notice in patch 7 that there \nare still some variables named canAcceptConnections around, so I'll \nleave this comment up to you.\n\n> From 98f8397b32a0b36e221475b32697c9c5bbca86a0 Mon Sep 17 00:00:00 2001\n> From: Heikki Linnakangas <[email protected]>\n> Date: Wed, 11 Oct 2023 13:38:06 +0300\n> Subject: [PATCH v3 5/7] Remove ConnCreate and ConnFree, and allocate Port in\n> stack\n\nI like it separate.\n\n> From 79aab42705a8cb0e16e61c33052fc56fdd4fca76 Mon Sep 17 00:00:00 2001\n> From: Heikki Linnakangas <[email protected]>\n> Date: Wed, 11 Oct 2023 13:38:10 +0300\n> Subject: [PATCH v3 6/7] Introduce ClientSocket, rename some funcs\n\n> +static int BackendStartup(ClientSocket *port);\n\ns/port/client_sock\n\n> - port->remote_hostname = strdup(remote_host);\n> + port->remote_hostname = pstrdup(remote_host);\n> + MemoryContextSwitchTo(oldcontext);\n\nSomething funky with the whitespace here, but my eyes might also be \nplaying tricks on me. Mixing spaces in tabs like what do in this \ncodebase makes it difficult to review :).\n\n> From ce51876f87f1e4317e25baf64184749448fcd033 Mon Sep 17 00:00:00 2001\n> From: Heikki Linnakangas <[email protected]>\n> Date: Thu, 30 Nov 2023 00:07:34 +0200\n> Subject: [PATCH v3 7/7] Refactor postmaster child process launching\n\n> + entry_kinds[child_type].main_fn(startup_data, startup_data_len);\n> + Assert(false);\n\nSeems like you want the pg_unreachable() macro here instead of \nAssert(false). 
Similar comment at the end of SubPostmasterMain().\n\n> + if (fwrite(param, paramsz, 1, fp) != 1)\n> + {\n> + ereport(LOG,\n> + (errcode_for_file_access(),\n> + errmsg(\"could not write to file \\\"%s\\\": %m\", tmpfilename)));\n> + FreeFile(fp);\n> + return -1;\n> + }\n> +\n> + /* Release file */\n> + if (FreeFile(fp))\n> + {\n> + ereport(LOG,\n> + (errcode_for_file_access(),\n> + errmsg(\"could not write to file \\\"%s\\\": %m\", tmpfilename)));\n> + return -1;\n> + }\n\nTwo pieces of feedback here. I generally find write(2) more useful than \nfwrite(3) because write(2) will report a useful errno, whereas fwrite(2) \njust uses ferror(3). The additional errno information might be valuable \ncontext in the log message. Up to you if you think it is also valuable.\n\nThe log message if FreeFile() fails doesn't seem to make sense to me. \nI didn't see any file writing in that code path, but it is possible that \nI missed something.\n\n> + /*\n> + * Set reference point for stack-depth checking. This might seem\n> + * redundant in !EXEC_BACKEND builds; but it's not because the postmaster\n> + * launches its children from signal handlers, so we might be running on\n> + * an alternative stack. XXX still true?\n> + */\n> + (void) set_stack_base();\n\nLooks like there is still this XXX left. Can't say I completely \nunderstand the second sentence either.\n\n> + /*\n> + * make sure stderr is in binary mode before anything can possibly be\n> + * written to it, in case it's actually the syslogger pipe, so the pipe\n> + * chunking protocol isn't disturbed. Non-logpipe data gets translated on\n> + * redirection (e.g. via pg_ctl -l) anyway.\n> + */\n\nNit: The 'm' in the first \"make\" should be capitalized.\n\n> + if (fread(¶m, sizeof(param), 1, fp) != 1)\n> + {\n> + write_stderr(\"could not read from backend variables file \\\"%s\\\": %s\\n\",\n> + id, strerror(errno));\n> + exit(1);\n> + }\n> +\n> + /* read startup data */\n> + *startup_data_len = param.startup_data_len;\n> + if (param.startup_data_len > 0)\n> + {\n> + *startup_data = palloc(*startup_data_len);\n> + if (fread(*startup_data, *startup_data_len, 1, fp) != 1)\n> + {\n> + write_stderr(\"could not read startup data from backend variables file \\\"%s\\\": %s\\n\",\n> + id, strerror(errno));\n> + exit(1);\n> + }\n> + }\n\nfread(3) doesn't set errno. I would probably switch these to read(2) for \nthe reason I wrote in a previous comment.\n\n> + /*\n> + * Need to reinitialize the SSL library in the backend, since the context\n> + * structures contain function pointers and cannot be passed through the\n> + * parameter file.\n> + *\n> + * If for some reason reload fails (maybe the user installed broken key\n> + * files), soldier on without SSL; that's better than all connections\n> + * becoming impossible.\n> + *\n> + * XXX should we do this in all child processes? 
For the moment it's\n> + * enough to do it in backend children.\n> + */\n> +#ifdef USE_SSL\n> + if (EnableSSL)\n> + {\n> + if (secure_initialize(false) == 0)\n> + LoadedSSL = true;\n> + else\n> + ereport(LOG,\n> + (errmsg(\"SSL configuration could not be loaded in child process\")));\n> + }\n> +#endif\n\nDo other child process types do any non-local communication?\n\n> -typedef struct ClientSocket {\n> +struct ClientSocket\n> +{\n> pgsocket sock; /* File descriptor */\n> SockAddr laddr; /* local addr (postmaster) */\n> SockAddr raddr; /* remote addr (client) */\n> -} ClientSocket;\n> +};\n> +typedef struct ClientSocket ClientSocket;\n\nCan't say I completely understand the reason for this change given it \nwas added in your series.\n\nI didn't look too hard at the Windows-specific code, so maybe someone \nwho knows Windows will have something to say, but it also might've just \nbeen copy-paste that I didn't realize.\n\nThere were a few more XXXs that probably should be figured out before \ncommitting. Though perhaps some of them were already there.\n\nPatches 1-3 seem committable as-is. I only had minor comments on \neverything but 7, so after taking a look at those, they could be \ncommitted.\n\nOverall, this seems liked a marked improvement :).\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Thu, 30 Nov 2023 12:44:33 -0600",
"msg_from": "\"Tristan Partin\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Refactoring backend fork+exec code"
},
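To illustrate the error-reporting point made in the review above, a small standalone sketch (the file name and log message are invented, and this is not the patch code): write(2) sets errno on failure, so the message can say why the write failed, whereas after a failed fwrite(3) only ferror(3) is available on the stream.

#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int
main(void)
{
	const char	buf[] = "backend variables";
	int			fd;

	/* Opening under a directory that does not exist forces a failure. */
	fd = open("/no-such-dir/pgsql_tmp_params", O_WRONLY | O_CREAT, 0600);
	if (fd < 0 || write(fd, buf, sizeof(buf)) != (ssize_t) sizeof(buf))
	{
		/* errno tells us *why*, e.g. "No such file or directory" */
		fprintf(stderr, "could not write parameter file: %s\n",
				strerror(errno));
		return 1;
	}
	close(fd);
	return 0;
}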
{
"msg_contents": "Hi,\n\nOn 2023-11-30 01:36:25 +0200, Heikki Linnakangas wrote:\n> - patch 1 splits CreateSharedMemoryAndSemaphores into two functions:\n> CreateSharedMemoryAndSemaphores is now only called at postmaster startup,\n> and a new function called AttachSharedMemoryStructs() is called in backends\n> in EXEC_BACKEND mode. I extracted the common parts of those functions to a\n> new static function. (Some of this refactoring used to be part of the 3rd\n> patch in the series, but it seems useful on its own, so I split it out.)\n\nI like that idea.\n\n\n\n> From a96b6e92fdeaa947bf32774c425419b8f987b8e2 Mon Sep 17 00:00:00 2001\n> From: Heikki Linnakangas <[email protected]>\n> Date: Thu, 30 Nov 2023 00:01:25 +0200\n> Subject: [PATCH v3 1/7] Refactor CreateSharedMemoryAndSemaphores\n>\n> For clarity, have separate functions for *creating* the shared memory\n> and semaphores at postmaster or single-user backend startup, and\n> for *attaching* to existing shared memory structures in EXEC_BACKEND\n> case. CreateSharedMemoryAndSemaphores() is now called only at\n> postmaster startup, and a new AttachSharedMemoryStructs() function is\n> called at backend startup in EXEC_BACKEND mode.\n\nI assume CreateSharedMemoryAndSemaphores() is still called during crash\nrestart? I wonder if it shouldn't split three ways:\n1) create\n2) initialize\n3) attach\n\n\n> From 3478cafcf74a5c8d649e0287e6c72669a29c0e70 Mon Sep 17 00:00:00 2001\n> From: Heikki Linnakangas <[email protected]>\n> Date: Thu, 30 Nov 2023 00:02:03 +0200\n> Subject: [PATCH v3 2/7] Pass BackgroundWorker entry in the parameter file in\n> EXEC_BACKEND mode\n>\n> This makes it possible to move InitProcess later in SubPostmasterMain\n> (in next commit), as we no longer need to access shared memory to read\n> background worker entry.\n\n> static void read_backend_variables(char *id, Port *port);\n> @@ -4831,7 +4833,7 @@ SubPostmasterMain(int argc, char *argv[])\n> \t\tstrcmp(argv[1], \"--forkavlauncher\") == 0 ||\n> \t\tstrcmp(argv[1], \"--forkavworker\") == 0 ||\n> \t\tstrcmp(argv[1], \"--forkaux\") == 0 ||\n> -\t\tstrncmp(argv[1], \"--forkbgworker=\", 15) == 0)\n> +\t\tstrncmp(argv[1], \"--forkbgworker\", 14) == 0)\n> \t\tPGSharedMemoryReAttach();\n> \telse\n> \t\tPGSharedMemoryNoReAttach();\n> @@ -4962,10 +4964,8 @@ SubPostmasterMain(int argc, char *argv[])\n>\n> \t\tAutoVacWorkerMain(argc - 2, argv + 2);\t/* does not return */\n> \t}\n> -\tif (strncmp(argv[1], \"--forkbgworker=\", 15) == 0)\n> +\tif (strncmp(argv[1], \"--forkbgworker\", 14) == 0)\n\n\nNow that we don't need to look at parameters anymore, these should probably be\njust a strcmp(), like the other cases?\n\n\n> From 0d071474e12a70ff8113c7b0731c5b97fec45007 Mon Sep 17 00:00:00 2001\n> From: Heikki Linnakangas <[email protected]>\n> Date: Wed, 29 Nov 2023 23:47:25 +0200\n> Subject: [PATCH v3 3/7] Refactor how InitProcess is called\n>\n> The order of process initialization steps is now more consistent\n> between !EXEC_BACKEND and EXEC_BACKEND modes. InitProcess() is called\n> at the same place in either mode. We can now also move the\n> AttachSharedMemoryStructs() call into InitProcess() itself. 
This\n> reduces the number of \"#ifdef EXEC_BACKEND\" blocks.\n\nYay.\n\n\n> diff --git a/src/backend/storage/lmgr/proc.c b/src/backend/storage/lmgr/proc.c\n> index cdfdd6fbe1d..6c708777dde 100644\n> --- a/src/backend/storage/lmgr/proc.c\n> +++ b/src/backend/storage/lmgr/proc.c\n> @@ -461,6 +461,12 @@ InitProcess(void)\n> \t */\n> \tInitLWLockAccess();\n> \tInitDeadLockChecking();\n> +\n> +#ifdef EXEC_BACKEND\n> +\t/* Attach process to shared data structures */\n> +\tif (IsUnderPostmaster)\n> +\t\tAttachSharedMemoryStructs();\n> +#endif\n> }\n>\n> /*\n> @@ -614,6 +620,12 @@ InitAuxiliaryProcess(void)\n> \t * Arrange to clean up at process exit.\n> \t */\n> \ton_shmem_exit(AuxiliaryProcKill, Int32GetDatum(proctype));\n> +\n> +#ifdef EXEC_BACKEND\n> +\t/* Attach process to shared data structures */\n> +\tif (IsUnderPostmaster)\n> +\t\tAttachSharedMemoryStructs();\n> +#endif\n> }\n\nAside: Somewhat odd that InitAuxiliaryProcess() doesn't call\nInitLWLockAccess().\n\n\nI think a short comment explaining why we can attach to shmem structs after\nalready accessing shared memory earlier in the function would be worthwhile.\n\n\n> From ce51876f87f1e4317e25baf64184749448fcd033 Mon Sep 17 00:00:00 2001\n> From: Heikki Linnakangas <[email protected]>\n> Date: Thu, 30 Nov 2023 00:07:34 +0200\n> Subject: [PATCH v3 7/7] Refactor postmaster child process launching\n>\n> - Move code related to launching backend processes to new source file,\n> launch_backend.c\n>\n> - Introduce new postmaster_child_launch() function that deals with the\n> differences between EXEC_BACKEND and fork mode.\n>\n> - Refactor the mechanism of passing information from the parent to\n> child process. Instead of using different command-line arguments when\n> launching the child process in EXEC_BACKEND mode, pass a\n> variable-length blob of data along with all the global variables. The\n> contents of that blob depend on the kind of child process being\n> launched. In !EXEC_BACKEND mode, we use the same blob, but it's simply\n> inherited from the parent to child process.\n>\n> [...]\n> 33 files changed, 1787 insertions(+), 2002 deletions(-)\n\nWell, that's not small...\n\nI think it may be worth splitting some of the file renaming out into a\nseparate commit, makes it harder to see what changed here.\n\n\n> +AutoVacLauncherMain(char *startup_data, size_t startup_data_len)\n> {\n> -\tpid_t\t\tAutoVacPID;\n> +\tsigjmp_buf\tlocal_sigjmp_buf;\n>\n> -#ifdef EXEC_BACKEND\n> -\tswitch ((AutoVacPID = avlauncher_forkexec()))\n> -#else\n> -\tswitch ((AutoVacPID = fork_process()))\n> -#endif\n> +\t/* Release postmaster's working memory context */\n> +\tif (PostmasterContext)\n> \t{\n> -\t\tcase -1:\n> -\t\t\tereport(LOG,\n> -\t\t\t\t\t(errmsg(\"could not fork autovacuum launcher process: %m\")));\n> -\t\t\treturn 0;\n> -\n> -#ifndef EXEC_BACKEND\n> -\t\tcase 0:\n> -\t\t\t/* in postmaster child ... */\n> -\t\t\tInitPostmasterChild();\n> -\n> -\t\t\t/* Close the postmaster's sockets */\n> -\t\t\tClosePostmasterPorts(false);\n> -\n> -\t\t\tAutoVacLauncherMain(0, NULL);\n> -\t\t\tbreak;\n> -#endif\n> -\t\tdefault:\n> -\t\t\treturn (int) AutoVacPID;\n> +\t\tMemoryContextDelete(PostmasterContext);\n> +\t\tPostmasterContext = NULL;\n> \t}\n>\n> -\t/* shouldn't get here */\n> -\treturn 0;\n> -}\n\nThis if (PostmasterContext) ... 
else \"shouldn't get here\" business seems\npretty silly, more likely to hide problems than to help.\n\n\n> +/*\n> + * Information needed to launch different kinds of child processes.\n> + */\n> +static const struct\n> +{\n> +\tconst char *name;\n> +\tvoid\t\t(*main_fn) (char *startup_data, size_t startup_data_len);\n> +\tbool\t\tshmem_attach;\n> +}\t\t\tentry_kinds[] = {\n> +\t[PMC_BACKEND] = {\"backend\", BackendMain, true},\n\nPersonally I'd give the struct an actual name - makes the debugging experience\na bit nicer than anonymous structs that you can't even reference by a typedef.\n\n\n> +\t[PMC_AV_LAUNCHER] = {\"autovacuum launcher\", AutoVacLauncherMain, true},\n> +\t[PMC_AV_WORKER] = {\"autovacuum worker\", AutoVacWorkerMain, true},\n> +\t[PMC_BGWORKER] = {\"bgworker\", BackgroundWorkerMain, true},\n> +\t[PMC_SYSLOGGER] = {\"syslogger\", SysLoggerMain, false},\n> +\n> +\t[PMC_STARTUP] = {\"startup\", StartupProcessMain, true},\n> +\t[PMC_BGWRITER] = {\"bgwriter\", BackgroundWriterMain, true},\n> +\t[PMC_ARCHIVER] = {\"archiver\", PgArchiverMain, true},\n> +\t[PMC_CHECKPOINTER] = {\"checkpointer\", CheckpointerMain, true},\n> +\t[PMC_WAL_WRITER] = {\"wal_writer\", WalWriterMain, true},\n> +\t[PMC_WAL_RECEIVER] = {\"wal_receiver\", WalReceiverMain, true},\n> +};\n\n\nIt feels like we have too many different ways of documenting the type of a\nprocess. This new PMC_ stuff, enum AuxProcType, enum BackendType. Which then\nleads to code like this:\n\n\n> -CheckpointerMain(void)\n> +CheckpointerMain(char *startup_data, size_t startup_data_len)\n> {\n> \tsigjmp_buf\tlocal_sigjmp_buf;\n> \tMemoryContext checkpointer_context;\n>\n> +\tAssert(startup_data_len == 0);\n> +\n> +\tMyAuxProcType = CheckpointerProcess;\n> +\tMyBackendType = B_CHECKPOINTER;\n> +\tAuxiliaryProcessInit();\n> +\n\nFor each type of child process. That seems a bit too redundant. Can't we\nunify this at least somewhat? Can't we just reuse BackendType here? 
Sure,\nthere'd be pointless entry for B_INVALID, but that doesn't seem like a\nproblem, could even be useful, by pointing it to a function raising an error.\n\nAt the very least this shouldn't deviate from the naming pattern of\nBackendType.\n\n\n> +/*\n> + * SubPostmasterMain -- Get the fork/exec'd process into a state equivalent\n> + *\t\t\tto what it would be if we'd simply forked on Unix, and then\n> + *\t\t\tdispatch to the appropriate place.\n> + *\n> + * The first two command line arguments are expected to be \"--forkchild=<name>\",\n> + * where <name> indicates which postmaster child we are to become, and\n> + * the name of a variables file that we can read to load data that would\n> + * have been inherited by fork() on Unix.\n> + */\n> +void\n> +SubPostmasterMain(int argc, char *argv[])\n> +{\n> +\tPostmasterChildType child_type;\n> +\tchar\t *startup_data;\n> +\tsize_t\t\tstartup_data_len;\n> +\tchar\t *entry_name;\n> +\tbool\t\tfound = false;\n> +\n> +\t/* In EXEC_BACKEND case we will not have inherited these settings */\n> +\tIsPostmasterEnvironment = true;\n> +\twhereToSendOutput = DestNone;\n> +\n> +\t/* Setup essential subsystems (to ensure elog() behaves sanely) */\n> +\tInitializeGUCOptions();\n> +\n> +\t/* Check we got appropriate args */\n> +\tif (argc != 3)\n> +\t\telog(FATAL, \"invalid subpostmaster invocation\");\n> +\n> +\tif (strncmp(argv[1], \"--forkchild=\", 12) != 0)\n> +\t\telog(FATAL, \"invalid subpostmaster invocation (--forkchild argument missing)\");\n> +\tentry_name = argv[1] + 12;\n> +\tfound = false;\n> +\tfor (int idx = 0; idx < lengthof(entry_kinds); idx++)\n> +\t{\n> +\t\tif (strcmp(entry_kinds[idx].name, entry_name) == 0)\n> +\t\t{\n> +\t\t\tchild_type = idx;\n> +\t\t\tfound = true;\n> +\t\t\tbreak;\n> +\t\t}\n> +\t}\n> +\tif (!found)\n> +\t\telog(ERROR, \"unknown child kind %s\", entry_name);\n\nIf we then have to search linearly, why don't we just pass the index into the\narray?\n\n>\n> -#define StartupDataBase()\t\tStartChildProcess(StartupProcess)\n> -#define StartArchiver()\t\t\tStartChildProcess(ArchiverProcess)\n> -#define StartBackgroundWriter() StartChildProcess(BgWriterProcess)\n> -#define StartCheckpointer()\t\tStartChildProcess(CheckpointerProcess)\n> -#define StartWalWriter()\t\tStartChildProcess(WalWriterProcess)\n> -#define StartWalReceiver()\t\tStartChildProcess(WalReceiverProcess)\n> +#define StartupDataBase()\t\tStartChildProcess(PMC_STARTUP)\n> +#define StartArchiver()\t\t\tStartChildProcess(PMC_ARCHIVER)\n> +#define StartBackgroundWriter() StartChildProcess(PMC_BGWRITER)\n> +#define StartCheckpointer()\t\tStartChildProcess(PMC_CHECKPOINTER)\n> +#define StartWalWriter()\t\tStartChildProcess(PMC_WAL_WRITER)\n> +#define StartWalReceiver()\t\tStartChildProcess(PMC_WAL_RECEIVER)\n> +\n> +#define StartAutoVacLauncher()\tStartChildProcess(PMC_AV_LAUNCHER);\n> +#define StartAutoVacWorker()\tStartChildProcess(PMC_AV_WORKER);\n\nObviously not your fault, but these macros are so pointless... Making it\nharder to find where we start child processes, all to save a a few characters\nin one place, while adding considerably more in others.\n\n\n> +void\n> +BackendMain(char *startup_data, size_t startup_data_len)\n> +{\n\nIs there any remaining reason for this to live in postmaster.c? Given that\nother backend types don't, that seems oddly assymmetrical.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 30 Nov 2023 12:26:48 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Refactoring backend fork+exec code"
},
{
"msg_contents": "Hi,\n\nOn 2023-11-30 12:44:33 -0600, Tristan Partin wrote:\n> > + /*\n> > + * Set reference point for stack-depth checking. This might seem\n> > + * redundant in !EXEC_BACKEND builds; but it's not because the postmaster\n> > + * launches its children from signal handlers, so we might be running on\n> > + * an alternative stack. XXX still true?\n> > + */\n> > + (void) set_stack_base();\n> \n> Looks like there is still this XXX left. Can't say I completely understand\n> the second sentence either.\n\nWe used to start some child processes of postmaster in signal handlers. That\nwas fixed in\n\ncommit 7389aad6366\nAuthor: Thomas Munro <[email protected]>\nDate: 2023-01-12 12:34:23 +1300\n \n Use WaitEventSet API for postmaster's event loop.\n\n\nIn some cases signal handlers run on a separate stack, which meant that the\nset_stack_base() we did in postmaster would yield a completely bogus stack\ndepth estimation. So this comment should likely have been removed. Thomas?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 30 Nov 2023 12:31:29 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Refactoring backend fork+exec code"
},
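As background for the stack-depth comment being discussed, a simplified sketch of the reference-point technique (illustrative only, not the actual set_stack_base()/check_stack_depth() code): record an address near the bottom of the stack once, then estimate depth later by how far a current local variable's address has moved from it. If the reference point were taken on a signal handler's alternate stack, that distance would be meaningless, which is what the old comment was warning about.

#include <stddef.h>
#include <stdio.h>

static char *stack_base_ptr = NULL;

/* Call once, early, from the "main" stack. */
static void
set_stack_base_demo(void)
{
	char		here;

	stack_base_ptr = &here;
}

/* Rough estimate only; works whichever direction the stack grows. */
static size_t
stack_depth_demo(void)
{
	char		here;

	return (size_t) (stack_base_ptr > &here ?
					 stack_base_ptr - &here : &here - stack_base_ptr);
}

static void
recurse(int n)
{
	if (n > 0)
		recurse(n - 1);
	else
		printf("approx stack depth: %zu bytes\n", stack_depth_demo());
}

int
main(void)
{
	set_stack_base_demo();
	recurse(100);
	return 0;
}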
{
"msg_contents": "On Fri, Dec 1, 2023 at 9:31 AM Andres Freund <[email protected]> wrote:\n> On 2023-11-30 12:44:33 -0600, Tristan Partin wrote:\n> > > + /*\n> > > + * Set reference point for stack-depth checking. This might seem\n> > > + * redundant in !EXEC_BACKEND builds; but it's not because the postmaster\n> > > + * launches its children from signal handlers, so we might be running on\n> > > + * an alternative stack. XXX still true?\n> > > + */\n> > > + (void) set_stack_base();\n> >\n> > Looks like there is still this XXX left. Can't say I completely understand\n> > the second sentence either.\n>\n> We used to start some child processes of postmaster in signal handlers. That\n> was fixed in\n>\n> commit 7389aad6366\n> Author: Thomas Munro <[email protected]>\n> Date: 2023-01-12 12:34:23 +1300\n>\n> Use WaitEventSet API for postmaster's event loop.\n>\n>\n> In some cases signal handlers run on a separate stack, which meant that the\n> set_stack_base() we did in postmaster would yield a completely bogus stack\n> depth estimation. So this comment should likely have been removed. Thomas?\n\nRight, I should delete that comment in master and 16. While wondering\nwhat to write instead, my first thought is that it is better to leave\nthe actual call there though, because otherwise there is a small\ndifference in stack reference point between EXEC_BACKEND and\n!EXEC_BACKEND builds, consumed by a few postmaster stack frames. So\nthe new comment would just say that.\n\n(I did idly wonder if there is a longjmp trick to zap those frames\npost-fork, not looked into and probably doesn't really improve\nanything we care about...)\n\n\n",
"msg_date": "Fri, 1 Dec 2023 09:49:05 +1300",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Refactoring backend fork+exec code"
},
{
"msg_contents": "On 30/11/2023 22:26, Andres Freund wrote:\n> Aside: Somewhat odd that InitAuxiliaryProcess() doesn't call\n> InitLWLockAccess().\n\nYeah that caught my eye too.\n\nIt seems to have been an oversight in commit 1c6821be31f. Before that, \nin 9.4, the lwlock stats were printed for aux processes too, on shutdown.\n\nCommitted a fix for that to master.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Fri, 1 Dec 2023 01:03:07 +0200",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Refactoring backend fork+exec code"
},
{
"msg_contents": "On 30/11/2023 22:26, Andres Freund wrote:\n> On 2023-11-30 01:36:25 +0200, Heikki Linnakangas wrote:\n>> From a96b6e92fdeaa947bf32774c425419b8f987b8e2 Mon Sep 17 00:00:00 2001\n>> From: Heikki Linnakangas <[email protected]>\n>> Date: Thu, 30 Nov 2023 00:01:25 +0200\n>> Subject: [PATCH v3 1/7] Refactor CreateSharedMemoryAndSemaphores\n>>\n>> For clarity, have separate functions for *creating* the shared memory\n>> and semaphores at postmaster or single-user backend startup, and\n>> for *attaching* to existing shared memory structures in EXEC_BACKEND\n>> case. CreateSharedMemoryAndSemaphores() is now called only at\n>> postmaster startup, and a new AttachSharedMemoryStructs() function is\n>> called at backend startup in EXEC_BACKEND mode.\n> \n> I assume CreateSharedMemoryAndSemaphores() is still called during crash\n> restart?\n\nYes.\n\n> I wonder if it shouldn't split three ways:\n> 1) create\n> 2) initialize\n> 3) attach\n\nWhy? What would be the difference between create and initialize phases?\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Fri, 1 Dec 2023 01:36:13 +0200",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Refactoring backend fork+exec code"
},
{
"msg_contents": "Hi,\n\nOn 2023-12-01 01:36:13 +0200, Heikki Linnakangas wrote:\n> On 30/11/2023 22:26, Andres Freund wrote:\n> > On 2023-11-30 01:36:25 +0200, Heikki Linnakangas wrote:\n> > > From a96b6e92fdeaa947bf32774c425419b8f987b8e2 Mon Sep 17 00:00:00 2001\n> > > From: Heikki Linnakangas <[email protected]>\n> > > Date: Thu, 30 Nov 2023 00:01:25 +0200\n> > > Subject: [PATCH v3 1/7] Refactor CreateSharedMemoryAndSemaphores\n> > >\n> > > For clarity, have separate functions for *creating* the shared memory\n> > > and semaphores at postmaster or single-user backend startup, and\n> > > for *attaching* to existing shared memory structures in EXEC_BACKEND\n> > > case. CreateSharedMemoryAndSemaphores() is now called only at\n> > > postmaster startup, and a new AttachSharedMemoryStructs() function is\n> > > called at backend startup in EXEC_BACKEND mode.\n> >\n> > I assume CreateSharedMemoryAndSemaphores() is still called during crash\n> > restart?\n>\n> Yes.\n>\n> > I wonder if it shouldn't split three ways:\n> > 1) create\n> > 2) initialize\n> > 3) attach\n>\n> Why? What would be the difference between create and initialize phases?\n\nMainly because I somehow mis-remembered how we deal with the shared memory\nallocation when crashing. I somehow had remembered that we reused the same\nallocation across restarts, but reinitialized it from scratch. There's a\nkernel of truth to that, because we can end up re-attaching to an existing\nsysv shared memory segment. But not more. Perhaps I was confusing things with\nthe listen sockets?\n\nAndres\n\n\n",
"msg_date": "Thu, 30 Nov 2023 17:42:44 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Refactoring backend fork+exec code"
},
{
"msg_contents": "On 30/11/2023 20:44, Tristan Partin wrote:\n> Patches 1-3 seem committable as-is.\n\nThanks for the review! I'm focusing on patches 1-3 now, and will come \nback to the rest after committing 1-3.\n\nThere was one test failure with EXEC_BACKEND from patch 2, in \n'test_shm_mq'. In restore_backend_variables() I checked if 'bgw_name' is \nempty to decide if the BackgroundWorker struct is filled in or not, but \nit turns out that 'test_shm_mq' doesn't fill in bgw_name. It probably \nshould, I think that's an oversight in 'test_shm_mq', but that's a \nseparate issue.\n\nI did some more refactoring of patch 2, to fix that and to improve it in \ngeneral. The BackgroundWorker struct is now passed through the \nfork-related functions similarly to the Port struct. That seems more \nconsistent.\n\nAttached is new version of these patches. For easier review, I made the \nnew refactorings compared in a new commit 0003. I will squash that \nbefore pushing, but this makes it easier to see what changed.\n\nBarring any new feedback or issues, I will commit these.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)",
"msg_date": "Fri, 1 Dec 2023 14:10:03 +0200",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Refactoring backend fork+exec code"
},
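A toy illustration of the "explicit flag instead of sentinel field" pattern that came up here, i.e. recording whether the optional BackgroundWorker data is present rather than checking for an empty bgw_name (struct and function names are invented; this is not the patch code):

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

typedef struct
{
	char		bgw_name[64];
	int			bgw_flags;
} FakeWorker;

typedef struct
{
	bool		has_bgworker;	/* explicit presence flag */
	FakeWorker	worker;
} FakeParams;

static void
save_params(FakeParams *dst, const FakeWorker *src)
{
	memset(dst, 0, sizeof(*dst));
	if (src != NULL)
	{
		dst->has_bgworker = true;
		dst->worker = *src;
	}
}

static const FakeWorker *
restore_params(const FakeParams *src)
{
	/* An empty bgw_name no longer matters; only the flag is consulted. */
	return src->has_bgworker ? &src->worker : NULL;
}

int
main(void)
{
	/* name deliberately left empty, as test_shm_mq does */
	FakeWorker	w = {.bgw_name = "", .bgw_flags = 1};
	FakeParams	p;

	save_params(&p, &w);
	printf("%s\n", restore_params(&p) ? "worker present" : "no worker");
	return 0;
}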
{
"msg_contents": "Hello Heikki,\n\n01.12.2023 15:10, Heikki Linnakangas wrote:\n> Attached is new version of these patches. For easier review, I made the new refactorings compared in a new commit \n> 0003. I will squash that before pushing, but this makes it easier to see what changed.\n>\n> Barring any new feedback or issues, I will commit these.\n>\n\nMaybe you could look at issues with running `make check` under Valgrind\nwhen server built with CPPFLAGS=\"-DUSE_VALGRIND -DEXEC_BACKEND\":\n# +++ regress check in src/test/regress +++\n# initializing database system by copying initdb template\n# postmaster failed, examine \".../src/test/regress/log/postmaster.log\" for the reason\nBail out!make[1]: ***\n\n...\n2023-12-01 16:48:39.136 MSK postmaster[1307988] LOG: listening on Unix socket \"/tmp/pg_regress-pPFNk0/.s.PGSQL.55312\"\n==00:00:00:01.692 1259396== Syscall param write(buf) points to uninitialised byte(s)\n==00:00:00:01.692 1259396== at 0x5245A37: write (write.c:26)\n==00:00:00:01.692 1259396== by 0x51BBF6C: _IO_file_write@@GLIBC_2.2.5 (fileops.c:1180)\n==00:00:00:01.692 1259396== by 0x51BC84F: new_do_write (fileops.c:448)\n==00:00:00:01.692 1259396== by 0x51BC84F: _IO_new_file_xsputn (fileops.c:1254)\n==00:00:00:01.692 1259396== by 0x51BC84F: _IO_file_xsputn@@GLIBC_2.2.5 (fileops.c:1196)\n==00:00:00:01.692 1259396== by 0x51B1056: fwrite (iofwrite.c:39)\n==00:00:00:01.692 1259396== by 0x552E21: internal_forkexec (postmaster.c:4518)\n==00:00:00:01.692 1259396== by 0x5546A1: postmaster_forkexec (postmaster.c:4444)\n==00:00:00:01.692 1259396== by 0x55471C: StartChildProcess (postmaster.c:5275)\n==00:00:00:01.692 1259396== by 0x557B61: PostmasterMain (postmaster.c:1454)\n==00:00:00:01.692 1259396== by 0x472136: main (main.c:198)\n==00:00:00:01.692 1259396== Address 0x1ffeffdc11 is on thread 1's stack\n==00:00:00:01.692 1259396== in frame #4, created by internal_forkexec (postmaster.c:4482)\n==00:00:00:01.692 1259396==\n\nWith memset(param, 0, sizeof(*param)); added at the beginning of\nsave_backend_variables(), server starts successfully, but then\n`make check` fails with other Valgrind error.\n\nBest regards,\nAlexander\n\n\n",
"msg_date": "Fri, 1 Dec 2023 17:00:01 +0300",
"msg_from": "Alexander Lakhin <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Refactoring backend fork+exec code"
},
{
"msg_contents": "On Fri Dec 1, 2023 at 6:10 AM CST, Heikki Linnakangas wrote:\n> On 30/11/2023 20:44, Tristan Partin wrote:\n> > Patches 1-3 seem committable as-is.\n>\n> Thanks for the review! I'm focusing on patches 1-3 now, and will come \n> back to the rest after committing 1-3.\n>\n> There was one test failure with EXEC_BACKEND from patch 2, in \n> 'test_shm_mq'. In restore_backend_variables() I checked if 'bgw_name' is \n> empty to decide if the BackgroundWorker struct is filled in or not, but \n> it turns out that 'test_shm_mq' doesn't fill in bgw_name. It probably \n> should, I think that's an oversight in 'test_shm_mq', but that's a \n> separate issue.\n>\n> I did some more refactoring of patch 2, to fix that and to improve it in \n> general. The BackgroundWorker struct is now passed through the \n> fork-related functions similarly to the Port struct. That seems more \n> consistent.\n>\n> Attached is new version of these patches. For easier review, I made the \n> new refactorings compared in a new commit 0003. I will squash that \n> before pushing, but this makes it easier to see what changed.\n>\n> Barring any new feedback or issues, I will commit these.\n\nMy only thought is that perhaps has_bg_worker is a better name than \nhas_worker, but I agree that having a flag is better than checking \nbgw_name.\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Fri, 01 Dec 2023 10:31:19 -0600",
"msg_from": "\"Tristan Partin\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Refactoring backend fork+exec code"
},
{
"msg_contents": "On 01/12/2023 16:00, Alexander Lakhin wrote:\n> Maybe you could look at issues with running `make check` under Valgrind\n> when server built with CPPFLAGS=\"-DUSE_VALGRIND -DEXEC_BACKEND\":\n> # +++ regress check in src/test/regress +++\n> # initializing database system by copying initdb template\n> # postmaster failed, examine \".../src/test/regress/log/postmaster.log\" for the reason\n> Bail out!make[1]: ***\n> \n> ...\n> 2023-12-01 16:48:39.136 MSK postmaster[1307988] LOG: listening on Unix socket \"/tmp/pg_regress-pPFNk0/.s.PGSQL.55312\"\n> ==00:00:00:01.692 1259396== Syscall param write(buf) points to uninitialised byte(s)\n> ==00:00:00:01.692 1259396== at 0x5245A37: write (write.c:26)\n> ==00:00:00:01.692 1259396== by 0x51BBF6C: _IO_file_write@@GLIBC_2.2.5 (fileops.c:1180)\n> ==00:00:00:01.692 1259396== by 0x51BC84F: new_do_write (fileops.c:448)\n> ==00:00:00:01.692 1259396== by 0x51BC84F: _IO_new_file_xsputn (fileops.c:1254)\n> ==00:00:00:01.692 1259396== by 0x51BC84F: _IO_file_xsputn@@GLIBC_2.2.5 (fileops.c:1196)\n> ==00:00:00:01.692 1259396== by 0x51B1056: fwrite (iofwrite.c:39)\n> ==00:00:00:01.692 1259396== by 0x552E21: internal_forkexec (postmaster.c:4518)\n> ==00:00:00:01.692 1259396== by 0x5546A1: postmaster_forkexec (postmaster.c:4444)\n> ==00:00:00:01.692 1259396== by 0x55471C: StartChildProcess (postmaster.c:5275)\n> ==00:00:00:01.692 1259396== by 0x557B61: PostmasterMain (postmaster.c:1454)\n> ==00:00:00:01.692 1259396== by 0x472136: main (main.c:198)\n> ==00:00:00:01.692 1259396== Address 0x1ffeffdc11 is on thread 1's stack\n> ==00:00:00:01.692 1259396== in frame #4, created by internal_forkexec (postmaster.c:4482)\n> ==00:00:00:01.692 1259396==\n> \n> With memset(param, 0, sizeof(*param)); added at the beginning of\n> save_backend_variables(), server starts successfully, but then\n> `make check` fails with other Valgrind error.\n\nThat's actually a pre-existing issue, I'm seeing that even on unpatched \n'master'.\n\nIn a nutshell, the problem is that the uninitialized padding bytes in \nBackendParameters are written to the file. When we read the file back, \nwe don't access the padding bytes, so that's harmless. But Valgrind \ndoesn't know that.\n\nOn Windows, the file is created with \nCreateFileMapping(INVALID_HANDLE_VALUE, ...) and we write the variables \ndirectly to the mapping. If I understand the Windows API docs correctly, \nit is guaranteed to be initialized to zeros, so we don't have this \nproblem on Windows, only on other platforms with EXEC_BACKEND. I think \nit makes sense to clear the memory on other platforms too, since that's \nwhat we do on Windows.\n\nCommitted a fix with memset(). I'm not sure what our policy with \nbackpatching this kind of issues is. This goes back to all supported \nversions, but given the lack of complaints, I chose to not backpatch.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Fri, 1 Dec 2023 22:44:08 +0200",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Refactoring backend fork+exec code"
},
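A tiny self-contained illustration of the padding issue explained above (struct and file name invented): the compiler-inserted padding bytes between the fields are never assigned, so writing the whole struct to a file sends uninitialized bytes, which is what Valgrind flags; zeroing the struct first makes every byte defined.

#include <stdio.h>
#include <string.h>

typedef struct
{
	char		flag;			/* typically followed by 7 padding bytes */
	long		value;
} DemoParams;

int
main(void)
{
	DemoParams	p;
	FILE	   *fp;

	/*
	 * Without this memset, p's padding bytes stay uninitialized and the
	 * fwrite below triggers Valgrind's "points to uninitialised byte(s)".
	 */
	memset(&p, 0, sizeof(p));
	p.flag = 1;
	p.value = 42;

	fp = fopen("demo_params.tmp", "wb");
	if (fp == NULL || fwrite(&p, sizeof(p), 1, fp) != 1)
	{
		perror("demo_params.tmp");
		return 1;
	}
	fclose(fp);
	return 0;
}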
{
"msg_contents": "On Fri Dec 1, 2023 at 2:44 PM CST, Heikki Linnakangas wrote:\n> On 01/12/2023 16:00, Alexander Lakhin wrote:\n> > Maybe you could look at issues with running `make check` under Valgrind\n> > when server built with CPPFLAGS=\"-DUSE_VALGRIND -DEXEC_BACKEND\":\n> > # +++ regress check in src/test/regress +++\n> > # initializing database system by copying initdb template\n> > # postmaster failed, examine \".../src/test/regress/log/postmaster.log\" for the reason\n> > Bail out!make[1]: ***\n> > \n> > ...\n> > 2023-12-01 16:48:39.136 MSK postmaster[1307988] LOG: listening on Unix socket \"/tmp/pg_regress-pPFNk0/.s.PGSQL.55312\"\n> > ==00:00:00:01.692 1259396== Syscall param write(buf) points to uninitialised byte(s)\n> > ==00:00:00:01.692 1259396== at 0x5245A37: write (write.c:26)\n> > ==00:00:00:01.692 1259396== by 0x51BBF6C: _IO_file_write@@GLIBC_2.2.5 (fileops.c:1180)\n> > ==00:00:00:01.692 1259396== by 0x51BC84F: new_do_write (fileops.c:448)\n> > ==00:00:00:01.692 1259396== by 0x51BC84F: _IO_new_file_xsputn (fileops.c:1254)\n> > ==00:00:00:01.692 1259396== by 0x51BC84F: _IO_file_xsputn@@GLIBC_2.2.5 (fileops.c:1196)\n> > ==00:00:00:01.692 1259396== by 0x51B1056: fwrite (iofwrite.c:39)\n> > ==00:00:00:01.692 1259396== by 0x552E21: internal_forkexec (postmaster.c:4518)\n> > ==00:00:00:01.692 1259396== by 0x5546A1: postmaster_forkexec (postmaster.c:4444)\n> > ==00:00:00:01.692 1259396== by 0x55471C: StartChildProcess (postmaster.c:5275)\n> > ==00:00:00:01.692 1259396== by 0x557B61: PostmasterMain (postmaster.c:1454)\n> > ==00:00:00:01.692 1259396== by 0x472136: main (main.c:198)\n> > ==00:00:00:01.692 1259396== Address 0x1ffeffdc11 is on thread 1's stack\n> > ==00:00:00:01.692 1259396== in frame #4, created by internal_forkexec (postmaster.c:4482)\n> > ==00:00:00:01.692 1259396==\n> > \n> > With memset(param, 0, sizeof(*param)); added at the beginning of\n> > save_backend_variables(), server starts successfully, but then\n> > `make check` fails with other Valgrind error.\n>\n> That's actually a pre-existing issue, I'm seeing that even on unpatched \n> 'master'.\n>\n> In a nutshell, the problem is that the uninitialized padding bytes in \n> BackendParameters are written to the file. When we read the file back, \n> we don't access the padding bytes, so that's harmless. But Valgrind \n> doesn't know that.\n>\n> On Windows, the file is created with \n> CreateFileMapping(INVALID_HANDLE_VALUE, ...) and we write the variables \n> directly to the mapping. If I understand the Windows API docs correctly, \n> it is guaranteed to be initialized to zeros, so we don't have this \n> problem on Windows, only on other platforms with EXEC_BACKEND. I think \n> it makes sense to clear the memory on other platforms too, since that's \n> what we do on Windows.\n>\n> Committed a fix with memset(). I'm not sure what our policy with \n> backpatching this kind of issues is. This goes back to all supported \n> versions, but given the lack of complaints, I chose to not backpatch.\n\nSeems like a harmless think to backpatch. It is conceivable that someone \nmight want to run Valgrind on something other than HEAD.\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Fri, 01 Dec 2023 15:11:49 -0600",
"msg_from": "\"Tristan Partin\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Refactoring backend fork+exec code"
},
{
"msg_contents": "\"Tristan Partin\" <[email protected]> writes:\n> On Fri Dec 1, 2023 at 2:44 PM CST, Heikki Linnakangas wrote:\n>> Committed a fix with memset(). I'm not sure what our policy with \n>> backpatching this kind of issues is. This goes back to all supported \n>> versions, but given the lack of complaints, I chose to not backpatch.\n\n> Seems like a harmless think to backpatch. It is conceivable that someone \n> might want to run Valgrind on something other than HEAD.\n\nFWIW, I agree with Heikki's conclusion. EXEC_BACKEND on non-Windows\nis already a niche developer-only setup, and given the lack of complaints,\nnobody's that interested in running Valgrind with it. Fixing it on HEAD\nseems like plenty.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 01 Dec 2023 18:55:22 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Refactoring backend fork+exec code"
},
{
"msg_contents": "Hello Heikki,\n\n01.12.2023 23:44, Heikki Linnakangas wrote:\n>\n>> With memset(param, 0, sizeof(*param)); added at the beginning of\n>> save_backend_variables(), server starts successfully, but then\n>> `make check` fails with other Valgrind error.\n>\n> That's actually a pre-existing issue, I'm seeing that even on unpatched 'master'.\n\nThank you for fixing that!\n\nYes, I had discovered it before, but yesterday I decided to check whether\nyour patches improve the situation...\n\nWhat bothered me additionally, is an error detected after server start. I\ncouldn't see it without the patches applied. I mean, on HEAD I now see\n`make check` passing, but with the patches it fails:\n...\n# parallel group (20 tests): interval date numerology polygon box macaddr8 macaddr multirangetypes line timestamp \ntimetz timestamptz time circle strings lseg inet md5 path point\nnot ok 22 + strings 1048 ms\n# (test process exited with exit code 2)\nnot ok 23 + md5 1052 ms\n# (test process exited with exit code 2)\n...\nsrc/test/regress/log/postmaster.log contains:\n==00:00:00:30.730 1713480== Syscall param write(buf) points to uninitialised byte(s)\n==00:00:00:30.730 1713480== at 0x5245A37: write (write.c:26)\n==00:00:00:30.730 1713480== by 0x51BBF6C: _IO_file_write@@GLIBC_2.2.5 (fileops.c:1180)\n==00:00:00:30.730 1713480== by 0x51BC84F: new_do_write (fileops.c:448)\n==00:00:00:30.730 1713480== by 0x51BC84F: _IO_new_file_xsputn (fileops.c:1254)\n==00:00:00:30.730 1713480== by 0x51BC84F: _IO_file_xsputn@@GLIBC_2.2.5 (fileops.c:1196)\n==00:00:00:30.730 1713480== by 0x51B1056: fwrite (iofwrite.c:39)\n==00:00:00:30.730 1713480== by 0x5540CF: internal_forkexec (postmaster.c:4526)\n==00:00:00:30.730 1713480== by 0x5543C0: bgworker_forkexec (postmaster.c:5624)\n==00:00:00:30.730 1713480== by 0x555477: do_start_bgworker (postmaster.c:5665)\n==00:00:00:30.730 1713480== by 0x555738: maybe_start_bgworkers (postmaster.c:5928)\n==00:00:00:30.730 1713480== by 0x556072: process_pm_pmsignal (postmaster.c:5080)\n==00:00:00:30.730 1713480== by 0x556610: ServerLoop (postmaster.c:1761)\n==00:00:00:30.730 1713480== by 0x557BE2: PostmasterMain (postmaster.c:1469)\n==00:00:00:30.730 1713480== by 0x47216B: main (main.c:198)\n==00:00:00:30.730 1713480== Address 0x1ffeffd8c0 is on thread 1's stack\n==00:00:00:30.730 1713480== in frame #4, created by internal_forkexec (postmaster.c:4482)\n==00:00:00:30.730 1713480==\n...\n2023-12-02 05:14:30.751 MSK client backend[1713740] pg_regress/rangetypes FATAL: terminating connection due to \nunexpected postmaster exit\n2023-12-02 05:14:31.033 MSK client backend[1713734] pg_regress/numeric FATAL: postmaster exited during a parallel \ntransaction\nTRAP: failed Assert(\"!IsTransactionOrTransactionBlock()\"), File: \"pgstat.c\", Line: 591, PID: 1713734\n\nI haven't looked deeper yet, but it seems that we see two issues here (and\nAssert is not directly caused by the patches set.)\n\nBest regards,\nAlexander\n\n\n",
"msg_date": "Sat, 2 Dec 2023 06:00:01 +0300",
"msg_from": "Alexander Lakhin <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Refactoring backend fork+exec code"
},
{
"msg_contents": "On 02/12/2023 05:00, Alexander Lakhin wrote:\n> What bothered me additionally, is an error detected after server start. I\n> couldn't see it without the patches applied. I mean, on HEAD I now see\n> `make check` passing, but with the patches it fails:\n> ...\n> # parallel group (20 tests): interval date numerology polygon box macaddr8 macaddr multirangetypes line timestamp\n> timetz timestamptz time circle strings lseg inet md5 path point\n> not ok 22 + strings 1048 ms\n> # (test process exited with exit code 2)\n> not ok 23 + md5 1052 ms\n> # (test process exited with exit code 2)\n> ...\n> src/test/regress/log/postmaster.log contains:\n> ==00:00:00:30.730 1713480== Syscall param write(buf) points to uninitialised byte(s)\n> ==00:00:00:30.730 1713480== at 0x5245A37: write (write.c:26)\n> ==00:00:00:30.730 1713480== by 0x51BBF6C: _IO_file_write@@GLIBC_2.2.5 (fileops.c:1180)\n> ==00:00:00:30.730 1713480== by 0x51BC84F: new_do_write (fileops.c:448)\n> ==00:00:00:30.730 1713480== by 0x51BC84F: _IO_new_file_xsputn (fileops.c:1254)\n> ==00:00:00:30.730 1713480== by 0x51BC84F: _IO_file_xsputn@@GLIBC_2.2.5 (fileops.c:1196)\n> ==00:00:00:30.730 1713480== by 0x51B1056: fwrite (iofwrite.c:39)\n> ==00:00:00:30.730 1713480== by 0x5540CF: internal_forkexec (postmaster.c:4526)\n> ==00:00:00:30.730 1713480== by 0x5543C0: bgworker_forkexec (postmaster.c:5624)\n> ==00:00:00:30.730 1713480== by 0x555477: do_start_bgworker (postmaster.c:5665)\n> ==00:00:00:30.730 1713480== by 0x555738: maybe_start_bgworkers (postmaster.c:5928)\n> ==00:00:00:30.730 1713480== by 0x556072: process_pm_pmsignal (postmaster.c:5080)\n> ==00:00:00:30.730 1713480== by 0x556610: ServerLoop (postmaster.c:1761)\n> ==00:00:00:30.730 1713480== by 0x557BE2: PostmasterMain (postmaster.c:1469)\n> ==00:00:00:30.730 1713480== by 0x47216B: main (main.c:198)\n> ==00:00:00:30.730 1713480== Address 0x1ffeffd8c0 is on thread 1's stack\n> ==00:00:00:30.730 1713480== in frame #4, created by internal_forkexec (postmaster.c:4482)\n> ==00:00:00:30.730 1713480==\n> ...\n\nAck, I see this too. I fixed it by adding MCXT_ALLOC_ZERO to the \nallocation of the BackendWorker struct. That's a little heavy-handed, \nlike with the previous failures the uninitialized padding bytes are \nwritten to the file and read back, but not accessed after that. But it \nseems like the simplest fix. This isn't performance critical after all.\n\nI also renamed the 'has_worker' field to 'has_bgworker', per Tristan's \nsuggestion. Pushed with those changes.\n\nThanks for the reviews!\n\n> 2023-12-02 05:14:30.751 MSK client backend[1713740] pg_regress/rangetypes FATAL: terminating connection due to\n> unexpected postmaster exit\n> 2023-12-02 05:14:31.033 MSK client backend[1713734] pg_regress/numeric FATAL: postmaster exited during a parallel\n> transaction\n> TRAP: failed Assert(\"!IsTransactionOrTransactionBlock()\"), File: \"pgstat.c\", Line: 591, PID: 1713734\n> \n> I haven't looked deeper yet, but it seems that we see two issues here (and\n> Assert is not directly caused by the patches set.)\n\nI have not been able to reproduce this one.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Sun, 3 Dec 2023 16:41:47 +0200",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Refactoring backend fork+exec code"
},
{
"msg_contents": "On 30/11/2023 22:26, Andres Freund wrote:\n> On 2023-11-30 01:36:25 +0200, Heikki Linnakangas wrote:\n>> [...]\n>> 33 files changed, 1787 insertions(+), 2002 deletions(-)\n> \n> Well, that's not small...\n> \n> I think it may be worth splitting some of the file renaming out into a\n> separate commit, makes it harder to see what change here.\n\nHere you are (details at end of this email)\n\n>> +\t[PMC_AV_LAUNCHER] = {\"autovacuum launcher\", AutoVacLauncherMain, true},\n>> +\t[PMC_AV_WORKER] = {\"autovacuum worker\", AutoVacWorkerMain, true},\n>> +\t[PMC_BGWORKER] = {\"bgworker\", BackgroundWorkerMain, true},\n>> +\t[PMC_SYSLOGGER] = {\"syslogger\", SysLoggerMain, false},\n>> +\n>> +\t[PMC_STARTUP] = {\"startup\", StartupProcessMain, true},\n>> +\t[PMC_BGWRITER] = {\"bgwriter\", BackgroundWriterMain, true},\n>> +\t[PMC_ARCHIVER] = {\"archiver\", PgArchiverMain, true},\n>> +\t[PMC_CHECKPOINTER] = {\"checkpointer\", CheckpointerMain, true},\n>> +\t[PMC_WAL_WRITER] = {\"wal_writer\", WalWriterMain, true},\n>> +\t[PMC_WAL_RECEIVER] = {\"wal_receiver\", WalReceiverMain, true},\n>> +};\n> \n> \n> It feels like we have too many different ways of documenting the type of a\n> process. This new PMC_ stuff, enum AuxProcType, enum BackendType.\n\nAgreed. And \"am_walsender\" and such variables.\n\n> Which then leads to code like this:\n> \n>> -CheckpointerMain(void)\n>> +CheckpointerMain(char *startup_data, size_t startup_data_len)\n>> {\n>> \tsigjmp_buf\tlocal_sigjmp_buf;\n>> \tMemoryContext checkpointer_context;\n>>\n>> +\tAssert(startup_data_len == 0);\n>> +\n>> +\tMyAuxProcType = CheckpointerProcess;\n>> +\tMyBackendType = B_CHECKPOINTER;\n>> +\tAuxiliaryProcessInit();\n>> +\n> \n> For each type of child process. That seems a bit too redundant. Can't we\n> unify this at least somewhat? Can't we just reuse BackendType here? Sure,\n> there'd be pointless entry for B_INVALID, but that doesn't seem like a\n> problem, could even be useful, by pointing it to a function raising an error.\n\nThere are a few differences: B_INVALID (and B_STANDALONE_BACKEND) are \npointless for this array as you noted. But also, we don't know if the \nbackend is a regular backend or WAL sender until authentication, so for \na WAL sender, we'd need to change MyBackendType from B_BACKEND to \nB_WAL_SENDER after forking. Maybe that's ok.\n\nI didn't do anything about this yet, but I'll give it some more thought.\n\n>> +\tif (strncmp(argv[1], \"--forkchild=\", 12) != 0)\n>> +\t\telog(FATAL, \"invalid subpostmaster invocation (--forkchild argument missing)\");\n>> +\tentry_name = argv[1] + 12;\n>> +\tfound = false;\n>> +\tfor (int idx = 0; idx < lengthof(entry_kinds); idx++)\n>> +\t{\n>> +\t\tif (strcmp(entry_kinds[idx].name, entry_name) == 0)\n>> +\t\t{\n>> +\t\t\tchild_type = idx;\n>> +\t\t\tfound = true;\n>> +\t\t\tbreak;\n>> +\t\t}\n>> +\t}\n>> +\tif (!found)\n>> +\t\telog(ERROR, \"unknown child kind %s\", entry_name);\n> \n> If we then have to search linearly, why don't we just pass the index into the\n> array?\n\nWe could. I like the idea of a human-readable name on the command line, \nalthough I'm not sure if it's really visible anywhere.\n\n>> +void\n>> +BackendMain(char *startup_data, size_t startup_data_len)\n>> +{\n> \n> Is there any remaining reason for this to live in postmaster.c? Given that\n> other backend types don't, that seems oddly assymmetrical.\n\nGee, another yak to shave, thanks ;-). You're right, that makes a lot of \nsense. 
I added another patch that moves that to a new file, \nsrc/backend/tcop/backend_startup.c. ProcessStartupPacket() and friends \ngo there too. It might make sense to do this before the other patches, \nbut it's the last patch in the series now.\n\nI kept processCancelRequest() in postmaster.c because it looks at \nBackendList/ShmemBackendArray, which are static in postmaster.c. Some \nmore refactoring might be in order there, perhaps moving those to a \ndifferent file too. But that can be done separately, this split is \npretty OK as is.\n\nOn 30/11/2023 20:44, Tristan Partin wrote:\n>> From 8886db1ed6bae21bf6d77c9bb1230edbb55e24f9 Mon Sep 17 00:00:00 2001\n>> From: Heikki Linnakangas <[email protected]>\n>> Date: Thu, 30 Nov 2023 00:04:22 +0200\n>> Subject: [PATCH v3 4/7] Pass CAC as argument to backend process\n> \n> For me, being new to the code, it would be nice to have more of an \n> explanation as to why this is \"better.\" I don't doubt it; it would just \n> help me and future readers of this commit in the future. More of an \n> explanation in the commit message would suffice.\n\nUpdated the commit message. It's mainly to pave the way for the next \npatches, which move the initialization of Port to the backend process, \nafter forking. And that in turn paves the way for the patches after \nthat. But also, very subjectively, it feels more natural to me.\n\n> My other comment on this commit is that we now seem to have lost the \n> context on what CAC stands for. Before we had the member variable to \n> explain it. A comment on the enum would be great or changing cac named \n> variables to canAcceptConnections. I did notice in patch 7 that there \n> are still some variables named canAcceptConnections around, so I'll \n> leave this comment up to you.\n\nGood point. The last patch in this series - which is new compared to \nprevious patch version - moves CAC_state to a different header file \nagain. I added a comment there.\n\n>> + if (fwrite(param, paramsz, 1, fp) != 1)\n>> + {\n>> + ereport(LOG,\n>> + (errcode_for_file_access(),\n>> + errmsg(\"could not write to file \\\"%s\\\": %m\", tmpfilename)));\n>> + FreeFile(fp);\n>> + return -1;\n>> + }\n>> +\n>> + /* Release file */\n>> + if (FreeFile(fp))\n>> + {\n>> + ereport(LOG,\n>> + (errcode_for_file_access(),\n>> + errmsg(\"could not write to file \\\"%s\\\": %m\", tmpfilename)));\n>> + return -1;\n>> + }\n> \n> Two pieces of feedback here. I generally find write(2) more useful than \n> fwrite(3) because write(2) will report a useful errno, whereas fwrite(2) \n> just uses ferror(3). The additional errno information might be valuable \n> context in the log message. Up to you if you think it is also valuable.\n\nIn general I agree. This patch just moves existing code though, so I \nleft it as is.\n\n> The log message if FreeFile() fails doesn't seem to make sense to me. \n> I didn't see any file writing in that code path, but it is possible that \n> I missed something.\n\nFreeFile() calls fclose(), which flushes the buffer. If fclose() fails, \nit's most likely because the write() to flush the buffer failed, so \n\"could not write\" is usually appropriate. (It feels ugly to me too, \nerror handling with the buffered i/o functions is a bit messy. 
As you \nsaid, plain open()/write() is more clear.)\n\n>> + /*\n>> + * Need to reinitialize the SSL library in the backend, since the context\n>> + * structures contain function pointers and cannot be passed through the\n>> + * parameter file.\n>> + *\n>> + * If for some reason reload fails (maybe the user installed broken key\n>> + * files), soldier on without SSL; that's better than all connections\n>> + * becoming impossible.\n>> + *\n>> + * XXX should we do this in all child processes? For the moment it's\n>> + * enough to do it in backend children.\n>> + */\n>> +#ifdef USE_SSL\n>> + if (EnableSSL)\n>> + {\n>> + if (secure_initialize(false) == 0)\n>> + LoadedSSL = true;\n>> + else\n>> + ereport(LOG,\n>> + (errmsg(\"SSL configuration could not be loaded in child process\")));\n>> + }\n>> +#endif\n> \n> Do other child process types do any non-local communication?\n\nNo. Although in theory an extension-defined background worker could do \nwhatever, including opening TLS connections. It's not clear if such a \nbackground worker would want the same initialization that we do in \nsecure_initialize(), or something else.\n\n\nHere is a new patch set:\n\n> v5-0001-Pass-CAC-as-argument-to-backend-process.patch\n> v5-0002-Remove-ConnCreate-and-ConnFree-and-allocate-Port-.patch\n> v5-0003-Move-initialization-of-Port-struct-to-child-proce.patch\n\nThese patches form a pretty well-contained unit. The gist is to move the \ninitialization of the Port struct to after forking the backend process \n(in patch 3).\n\nI plan to polish and commit these next, so any final reviews on these \nare welcome.\n\n> v5-0004-Extract-registration-of-Win32-deadchild-callback-.patch\n> v5-0005-Move-some-functions-from-postmaster.c-to-new-sour.patch\n> v5-0006-Refactor-AuxProcess-startup.patch\n> v5-0007-Refactor-postmaster-child-process-launching.patch\n\nPatches 4-6 are refactorings that don't do much good on their own, but \nthey help to make patch 7 much smaller and easier to review.\n\nI left out some of the code-moving that I had in previous patch versions:\n\n- Previously I moved fork_process() function from fork_process.c to the \nnew launch_backend.c file. That might still make sense, there is nothing \nelse in fork_process.c and the only caller is in launch_backend.c. But \nI'm not sure, and it can be done separately.\n\n- Previously I moved InitPostmasterChild from miscinit.c to the new \nlaunch_backend.c file. That might also still make sense, but I'm not \n100% sure it's an improvement, and it can be done later if we want to.\n\n> v5-0008-Move-code-for-backend-startup-to-separate-file.patch\n\nThis moves BackendMain() and friends from postmaster.c to a new file, \nper Andres's suggestion.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)",
"msg_date": "Fri, 8 Dec 2023 14:33:33 +0200",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Refactoring backend fork+exec code"
},
{
"msg_contents": "On 08/12/2023 14:33, Heikki Linnakangas wrote:\n>>> +\t[PMC_AV_LAUNCHER] = {\"autovacuum launcher\", AutoVacLauncherMain, true},\n>>> +\t[PMC_AV_WORKER] = {\"autovacuum worker\", AutoVacWorkerMain, true},\n>>> +\t[PMC_BGWORKER] = {\"bgworker\", BackgroundWorkerMain, true},\n>>> +\t[PMC_SYSLOGGER] = {\"syslogger\", SysLoggerMain, false},\n>>> +\n>>> +\t[PMC_STARTUP] = {\"startup\", StartupProcessMain, true},\n>>> +\t[PMC_BGWRITER] = {\"bgwriter\", BackgroundWriterMain, true},\n>>> +\t[PMC_ARCHIVER] = {\"archiver\", PgArchiverMain, true},\n>>> +\t[PMC_CHECKPOINTER] = {\"checkpointer\", CheckpointerMain, true},\n>>> +\t[PMC_WAL_WRITER] = {\"wal_writer\", WalWriterMain, true},\n>>> +\t[PMC_WAL_RECEIVER] = {\"wal_receiver\", WalReceiverMain, true},\n>>> +};\n>>\n>> It feels like we have too many different ways of documenting the type of a\n>> process. This new PMC_ stuff, enum AuxProcType, enum BackendType.\n> Agreed. And \"am_walsender\" and such variables.\n\nHere's a patch that gets rid of AuxProcType. It's independent of the \nother patches in this thread; if this is committed, I'll rebase the rest \nof the patches over this and get rid of the new PMC_* enum.\n\nThree patches, actually. The first one fixes an existing comment that I \nnoticed to be incorrect while working on this. I'll push that soon, \nbarring objections. The second one gets rid of AuxProcType, and the \nthird one replaces IsBackgroundWorker, IsAutoVacuumLauncherProcess() and \nIsAutoVacuumWorkerProcess() with checks on MyBackendType. So \nMyBackendType is now the primary way to check what kind of a process the \ncurrent process is.\n\n'am_walsender' would also be fairly straightforward to remove and \nreplace with AmWalSenderProcess(). I didn't do that because we also have \nam_db_walsender and am_cascading_walsender which cannot be directly \nreplaced with MyBackendType. Given that, it might be best to keep \nam_walsender for symmetry.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)",
"msg_date": "Wed, 10 Jan 2024 14:35:52 +0200",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Refactoring backend fork+exec code"
},
{
"msg_contents": "Hi,\n\nOn 2024-01-10 14:35:52 +0200, Heikki Linnakangas wrote:\n> Here's a patch that gets rid of AuxProcType. It's independent of the other\n> patches in this thread; if this is committed, I'll rebase the rest of the\n> patches over this and get rid of the new PMC_* enum.\n> \n> Three patches, actually. The first one fixes an existing comment that I\n> noticed to be incorrect while working on this. I'll push that soon, barring\n> objections. The second one gets rid of AuxProcType, and the third one\n> replaces IsBackgroundWorker, IsAutoVacuumLauncherProcess() and\n> IsAutoVacuumWorkerProcess() with checks on MyBackendType. So MyBackendType\n> is now the primary way to check what kind of a process the current process\n> is.\n> \n> 'am_walsender' would also be fairly straightforward to remove and replace\n> with AmWalSenderProcess(). I didn't do that because we also have\n> am_db_walsender and am_cascading_walsender which cannot be directly replaced\n> with MyBackendType. Given that, it might be best to keep am_walsender for\n> symmetry.\n\n\n> @@ -561,13 +561,13 @@ static void ShmemBackendArrayAdd(Backend *bn);\n> static void ShmemBackendArrayRemove(Backend *bn);\n> #endif\t\t\t\t\t\t\t/* EXEC_BACKEND */\n> \n> -#define StartupDataBase()\t\tStartChildProcess(StartupProcess)\n> -#define StartArchiver()\t\t\tStartChildProcess(ArchiverProcess)\n> -#define StartBackgroundWriter() StartChildProcess(BgWriterProcess)\n> -#define StartCheckpointer()\t\tStartChildProcess(CheckpointerProcess)\n> -#define StartWalWriter()\t\tStartChildProcess(WalWriterProcess)\n> -#define StartWalReceiver()\t\tStartChildProcess(WalReceiverProcess)\n> -#define StartWalSummarizer()\tStartChildProcess(WalSummarizerProcess)\n> +#define StartupDataBase()\t\tStartChildProcess(B_STARTUP)\n> +#define StartArchiver()\t\t\tStartChildProcess(B_ARCHIVER)\n> +#define StartBackgroundWriter() StartChildProcess(B_BG_WRITER)\n> +#define StartCheckpointer()\t\tStartChildProcess(B_CHECKPOINTER)\n> +#define StartWalWriter()\t\tStartChildProcess(B_WAL_WRITER)\n> +#define StartWalReceiver()\t\tStartChildProcess(B_WAL_RECEIVER)\n> +#define StartWalSummarizer()\tStartChildProcess(B_WAL_SUMMARIZER)\n\nNot for this commit, but we ought to rip out these macros - all they do is to\nmake it harder to understand the code.\n\n\n\n\n> @@ -5344,31 +5344,31 @@ StartChildProcess(AuxProcType type)\n> \t\terrno = save_errno;\n> \t\tswitch (type)\n> \t\t{\n> -\t\t\tcase StartupProcess:\n> +\t\t\tcase B_STARTUP:\n> \t\t\t\tereport(LOG,\n> \t\t\t\t\t\t(errmsg(\"could not fork startup process: %m\")));\n> \t\t\t\tbreak;\n> -\t\t\tcase ArchiverProcess:\n> +\t\t\tcase B_ARCHIVER:\n> \t\t\t\tereport(LOG,\n> \t\t\t\t\t\t(errmsg(\"could not fork archiver process: %m\")));\n> \t\t\t\tbreak;\n> -\t\t\tcase BgWriterProcess:\n> +\t\t\tcase B_BG_WRITER:\n> \t\t\t\tereport(LOG,\n> \t\t\t\t\t\t(errmsg(\"could not fork background writer process: %m\")));\n> \t\t\t\tbreak;\n> -\t\t\tcase CheckpointerProcess:\n> +\t\t\tcase B_CHECKPOINTER:\n> \t\t\t\tereport(LOG,\n> \t\t\t\t\t\t(errmsg(\"could not fork checkpointer process: %m\")));\n> \t\t\t\tbreak;\n> -\t\t\tcase WalWriterProcess:\n> +\t\t\tcase B_WAL_WRITER:\n> \t\t\t\tereport(LOG,\n> \t\t\t\t\t\t(errmsg(\"could not fork WAL writer process: %m\")));\n> \t\t\t\tbreak;\n> -\t\t\tcase WalReceiverProcess:\n> +\t\t\tcase B_WAL_RECEIVER:\n> \t\t\t\tereport(LOG,\n> \t\t\t\t\t\t(errmsg(\"could not fork WAL receiver process: %m\")));\n> \t\t\t\tbreak;\n> -\t\t\tcase WalSummarizerProcess:\n> +\t\t\tcase 
B_WAL_SUMMARIZER:\n> \t\t\t\tereport(LOG,\n> \t\t\t\t\t\t(errmsg(\"could not fork WAL summarizer process: %m\")));\n> \t\t\t\tbreak;\n\nSeems we should replace this with something slightly more generic one of these\ndays...\n\n\n> diff --git a/src/backend/utils/activity/backend_status.c b/src/backend/utils/activity/backend_status.c\n> index 1a1050c8da1..92f24db4e18 100644\n> --- a/src/backend/utils/activity/backend_status.c\n> +++ b/src/backend/utils/activity/backend_status.c\n> @@ -257,17 +257,16 @@ pgstat_beinit(void)\n> \telse\n> \t{\n> \t\t/* Must be an auxiliary process */\n> -\t\tAssert(MyAuxProcType != NotAnAuxProcess);\n> +\t\tAssert(IsAuxProcess(MyBackendType));\n> \n> \t\t/*\n> \t\t * Assign the MyBEEntry for an auxiliary process. Since it doesn't\n> \t\t * have a BackendId, the slot is statically allocated based on the\n> -\t\t * auxiliary process type (MyAuxProcType). Backends use slots indexed\n> -\t\t * in the range from 0 to MaxBackends (exclusive), so we use\n> -\t\t * MaxBackends + AuxProcType as the index of the slot for an auxiliary\n> -\t\t * process.\n> +\t\t * auxiliary process type. Backends use slots indexed in the range\n> +\t\t * from 0 to MaxBackends (exclusive), and aux processes use the slots\n> +\t\t * after that.\n> \t\t */\n> -\t\tMyBEEntry = &BackendStatusArray[MaxBackends + MyAuxProcType];\n> +\t\tMyBEEntry = &BackendStatusArray[MaxBackends + MyBackendType - FIRST_AUX_PROC];\n> \t}\n\nHm, this seems less than pretty. It's probably ok for now, but it seems like a\nbetter fix might be to just start assigning backend ids to aux procs or switch\nto indexing by pgprocno?\n\n\n> From 795929a5f5a5d6ea4fa8a46bb15c68d2ff46ad3d Mon Sep 17 00:00:00 2001\n> From: Heikki Linnakangas <[email protected]>\n> Date: Wed, 10 Jan 2024 12:59:48 +0200\n> Subject: [PATCH v6 3/3] Use MyBackendType in more places to check what process\n> this is\n> \n> Remove IsBackgroundWorker, IsAutoVacuumLauncherProcess() and\n> IsAutoVacuumWorkerProcess() in favor of new Am*Process() macros that\n> use MyBackendType. For consistency with the existing Am*Process()\n> macros.\n\nThe Am*Process() macros aren't really new, they are just implemented\ndifferently, right? I guess there are a few more of them now.\n\nGiven that we are probably going to have more process types in the future, it\nseems like a better direction would be an AmProcessType(proctype) style\nmacro/inline function. That way we don't have to mirror the list of process\ntypes in the enum and a set of macros.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 22 Jan 2024 13:07:40 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Refactoring backend fork+exec code"
},
{
"msg_contents": "On 22/01/2024 23:07, Andres Freund wrote:\n> On 2024-01-10 14:35:52 +0200, Heikki Linnakangas wrote:\n>> @@ -5344,31 +5344,31 @@ StartChildProcess(AuxProcType type)\n>> \t\terrno = save_errno;\n>> \t\tswitch (type)\n>> \t\t{\n>> -\t\t\tcase StartupProcess:\n>> +\t\t\tcase B_STARTUP:\n>> \t\t\t\tereport(LOG,\n>> \t\t\t\t\t\t(errmsg(\"could not fork startup process: %m\")));\n>> \t\t\t\tbreak;\n>> -\t\t\tcase ArchiverProcess:\n>> +\t\t\tcase B_ARCHIVER:\n>> \t\t\t\tereport(LOG,\n>> \t\t\t\t\t\t(errmsg(\"could not fork archiver process: %m\")));\n>> \t\t\t\tbreak;\n>> -\t\t\tcase BgWriterProcess:\n>> +\t\t\tcase B_BG_WRITER:\n>> \t\t\t\tereport(LOG,\n>> \t\t\t\t\t\t(errmsg(\"could not fork background writer process: %m\")));\n>> \t\t\t\tbreak;\n>> -\t\t\tcase CheckpointerProcess:\n>> +\t\t\tcase B_CHECKPOINTER:\n>> \t\t\t\tereport(LOG,\n>> \t\t\t\t\t\t(errmsg(\"could not fork checkpointer process: %m\")));\n>> \t\t\t\tbreak;\n>> -\t\t\tcase WalWriterProcess:\n>> +\t\t\tcase B_WAL_WRITER:\n>> \t\t\t\tereport(LOG,\n>> \t\t\t\t\t\t(errmsg(\"could not fork WAL writer process: %m\")));\n>> \t\t\t\tbreak;\n>> -\t\t\tcase WalReceiverProcess:\n>> +\t\t\tcase B_WAL_RECEIVER:\n>> \t\t\t\tereport(LOG,\n>> \t\t\t\t\t\t(errmsg(\"could not fork WAL receiver process: %m\")));\n>> \t\t\t\tbreak;\n>> -\t\t\tcase WalSummarizerProcess:\n>> +\t\t\tcase B_WAL_SUMMARIZER:\n>> \t\t\t\tereport(LOG,\n>> \t\t\t\t\t\t(errmsg(\"could not fork WAL summarizer process: %m\")));\n>> \t\t\t\tbreak;\n> \n> Seems we should replace this with something slightly more generic one of these\n> days...\n\nThe later patches in this thread will turn these into\n\nereport(LOG,\n (errmsg(\"could not fork %s process: %m\", \nPostmasterChildName(type))));\n\n>> diff --git a/src/backend/utils/activity/backend_status.c b/src/backend/utils/activity/backend_status.c\n>> index 1a1050c8da1..92f24db4e18 100644\n>> --- a/src/backend/utils/activity/backend_status.c\n>> +++ b/src/backend/utils/activity/backend_status.c\n>> @@ -257,17 +257,16 @@ pgstat_beinit(void)\n>> \telse\n>> \t{\n>> \t\t/* Must be an auxiliary process */\n>> -\t\tAssert(MyAuxProcType != NotAnAuxProcess);\n>> +\t\tAssert(IsAuxProcess(MyBackendType));\n>> \n>> \t\t/*\n>> \t\t * Assign the MyBEEntry for an auxiliary process. Since it doesn't\n>> \t\t * have a BackendId, the slot is statically allocated based on the\n>> -\t\t * auxiliary process type (MyAuxProcType). Backends use slots indexed\n>> -\t\t * in the range from 0 to MaxBackends (exclusive), so we use\n>> -\t\t * MaxBackends + AuxProcType as the index of the slot for an auxiliary\n>> -\t\t * process.\n>> +\t\t * auxiliary process type. Backends use slots indexed in the range\n>> +\t\t * from 0 to MaxBackends (exclusive), and aux processes use the slots\n>> +\t\t * after that.\n>> \t\t */\n>> -\t\tMyBEEntry = &BackendStatusArray[MaxBackends + MyAuxProcType];\n>> +\t\tMyBEEntry = &BackendStatusArray[MaxBackends + MyBackendType - FIRST_AUX_PROC];\n>> \t}\n> \n> Hm, this seems less than pretty. It's probably ok for now, but it seems like a\n> better fix might be to just start assigning backend ids to aux procs or switch\n> to indexing by pgprocno?\n\nUsing pgprocno is a good idea. Come to think of it, why do we even have \na concept of backend ID that's separate from pgprocno? backend ID is \nused to index the ProcState array, but AFAICS we could use pgprocno as \nthe index to that, too.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Tue, 23 Jan 2024 21:07:08 +0200",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Refactoring backend fork+exec code"
},
{
"msg_contents": "Hi,\n\nOn 2024-01-23 21:07:08 +0200, Heikki Linnakangas wrote:\n> On 22/01/2024 23:07, Andres Freund wrote:\n> > > diff --git a/src/backend/utils/activity/backend_status.c b/src/backend/utils/activity/backend_status.c\n> > > index 1a1050c8da1..92f24db4e18 100644\n> > > --- a/src/backend/utils/activity/backend_status.c\n> > > +++ b/src/backend/utils/activity/backend_status.c\n> > > @@ -257,17 +257,16 @@ pgstat_beinit(void)\n> > > \telse\n> > > \t{\n> > > \t\t/* Must be an auxiliary process */\n> > > -\t\tAssert(MyAuxProcType != NotAnAuxProcess);\n> > > +\t\tAssert(IsAuxProcess(MyBackendType));\n> > > \t\t/*\n> > > \t\t * Assign the MyBEEntry for an auxiliary process. Since it doesn't\n> > > \t\t * have a BackendId, the slot is statically allocated based on the\n> > > -\t\t * auxiliary process type (MyAuxProcType). Backends use slots indexed\n> > > -\t\t * in the range from 0 to MaxBackends (exclusive), so we use\n> > > -\t\t * MaxBackends + AuxProcType as the index of the slot for an auxiliary\n> > > -\t\t * process.\n> > > +\t\t * auxiliary process type. Backends use slots indexed in the range\n> > > +\t\t * from 0 to MaxBackends (exclusive), and aux processes use the slots\n> > > +\t\t * after that.\n> > > \t\t */\n> > > -\t\tMyBEEntry = &BackendStatusArray[MaxBackends + MyAuxProcType];\n> > > +\t\tMyBEEntry = &BackendStatusArray[MaxBackends + MyBackendType - FIRST_AUX_PROC];\n> > > \t}\n> > \n> > Hm, this seems less than pretty. It's probably ok for now, but it seems like a\n> > better fix might be to just start assigning backend ids to aux procs or switch\n> > to indexing by pgprocno?\n> \n> Using pgprocno is a good idea. Come to think of it, why do we even have a\n> concept of backend ID that's separate from pgprocno? backend ID is used to\n> index the ProcState array, but AFAICS we could use pgprocno as the index to\n> that, too.\n\nI think we should do that. There are a few processes not participating in\nsinval, but it doesn't make enough of a difference to make sinval slower. And\nI think there'd be far bigger efficiency improvements to sinvaladt than not\nhaving a handful more entries.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 23 Jan 2024 11:50:09 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Refactoring backend fork+exec code"
},
{
"msg_contents": "On 23/01/2024 21:50, Andres Freund wrote:\n> On 2024-01-23 21:07:08 +0200, Heikki Linnakangas wrote:\n>> On 22/01/2024 23:07, Andres Freund wrote:\n>>>> diff --git a/src/backend/utils/activity/backend_status.c b/src/backend/utils/activity/backend_status.c\n>>>> index 1a1050c8da1..92f24db4e18 100644\n>>>> --- a/src/backend/utils/activity/backend_status.c\n>>>> +++ b/src/backend/utils/activity/backend_status.c\n>>>> @@ -257,17 +257,16 @@ pgstat_beinit(void)\n>>>> \telse\n>>>> \t{\n>>>> \t\t/* Must be an auxiliary process */\n>>>> -\t\tAssert(MyAuxProcType != NotAnAuxProcess);\n>>>> +\t\tAssert(IsAuxProcess(MyBackendType));\n>>>> \t\t/*\n>>>> \t\t * Assign the MyBEEntry for an auxiliary process. Since it doesn't\n>>>> \t\t * have a BackendId, the slot is statically allocated based on the\n>>>> -\t\t * auxiliary process type (MyAuxProcType). Backends use slots indexed\n>>>> -\t\t * in the range from 0 to MaxBackends (exclusive), so we use\n>>>> -\t\t * MaxBackends + AuxProcType as the index of the slot for an auxiliary\n>>>> -\t\t * process.\n>>>> +\t\t * auxiliary process type. Backends use slots indexed in the range\n>>>> +\t\t * from 0 to MaxBackends (exclusive), and aux processes use the slots\n>>>> +\t\t * after that.\n>>>> \t\t */\n>>>> -\t\tMyBEEntry = &BackendStatusArray[MaxBackends + MyAuxProcType];\n>>>> +\t\tMyBEEntry = &BackendStatusArray[MaxBackends + MyBackendType - FIRST_AUX_PROC];\n>>>> \t}\n>>>\n>>> Hm, this seems less than pretty. It's probably ok for now, but it seems like a\n>>> better fix might be to just start assigning backend ids to aux procs or switch\n>>> to indexing by pgprocno?\n>>\n>> Using pgprocno is a good idea. Come to think of it, why do we even have a\n>> concept of backend ID that's separate from pgprocno? backend ID is used to\n>> index the ProcState array, but AFAICS we could use pgprocno as the index to\n>> that, too.\n> \n> I think we should do that. There are a few processes not participating in\n> sinval, but it doesn't make enough of a difference to make sinval slower. And\n> I think there'd be far bigger efficiency improvements to sinvaladt than not\n> having a handful more entries.\n\nAnd here we go. BackendID is now a 1-based index directly into the \nPGPROC array.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)",
"msg_date": "Thu, 25 Jan 2024 01:51:02 +0200",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Refactoring backend fork+exec code"
},
{
"msg_contents": "On Thu, 2024-01-25 at 01:51 +0200, Heikki Linnakangas wrote:\n> \n> And here we go. BackendID is now a 1-based index directly into the \n> PGPROC array.\n> \n\nWould it be worthwhile to also note in this comment FIRST_AUX_PROC's\nand IsAuxProcess()'s dependency on B_ARCHIVER and it's location in the\nenum table?\n\n /* \n ¦* Auxiliary processes. These have PGPROC entries, but they are not \n ¦* attached to any particular database. There can be only one of each of \n ¦* these running at a time. \n ¦* \n ¦* If you modify these, make sure to update NUM_AUXILIARY_PROCS and the \n ¦* glossary in the docs. \n ¦*/ \n B_ARCHIVER, \n B_BG_WRITER, \n B_CHECKPOINTER, \n B_STARTUP, \n B_WAL_RECEIVER, \n B_WAL_SUMMARIZER, \n B_WAL_WRITER, \n } BackendType; \n \n #define BACKEND_NUM_TYPES (B_WAL_WRITER + 1) \n \n extern PGDLLIMPORT BackendType MyBackendType; \n \n #define FIRST_AUX_PROC B_ARCHIVER \n #define IsAuxProcess(type) (MyBackendType >= FIRST_AUX_PROC)\n\n\n",
"msg_date": "Mon, 29 Jan 2024 10:54:52 -0500",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: Refactoring backend fork+exec code"
},
{
"msg_contents": "On 29/01/2024 17:54, [email protected] wrote:\n> On Thu, 2024-01-25 at 01:51 +0200, Heikki Linnakangas wrote:\n>>\n>> And here we go. BackendID is now a 1-based index directly into the\n>> PGPROC array.\n> \n> Would it be worthwhile to also note in this comment FIRST_AUX_PROC's\n> and IsAuxProcess()'s dependency on B_ARCHIVER and it's location in the\n> enum table?\n\nYeah, that might be in order. Although looking closer, it's only used in \nIsAuxProcess, which is only used in one sanity check in \nAuxProcessMain(). And even that gets refactored away by the later \npatches in this thread. So on second thoughts, I'll just remove it \naltogether.\n\nI spent some more time on the 'lastBackend' optimization in sinvaladt.c. \nI realized that it became very useless with these patches, because aux \nprocesses are allocated pgprocno's after all the slots for regular \nbackends. There are always aux processes running, so lastBackend would \nalways have a value close to the max anyway. I replaced that with a \ndense 'pgprocnos' array that keeps track of the exact slots that are in \nuse. I'm not 100% sure this is worth the effort; does any real world \nworkload send shared invalidations so frequently that this matters? In \nany case, this should avoid the regression if such a workload exists.\n\nNew patch set attached. I think these are ready to be committed, but \nwould appreciate a final review.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)",
"msg_date": "Tue, 30 Jan 2024 02:08:36 +0200",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Refactoring backend fork+exec code"
},
{
"msg_contents": "On 30/01/2024 02:08, Heikki Linnakangas wrote:\n> On 29/01/2024 17:54, [email protected] wrote:\n>> On Thu, 2024-01-25 at 01:51 +0200, Heikki Linnakangas wrote:\n>>>\n>>> And here we go. BackendID is now a 1-based index directly into the\n>>> PGPROC array.\n>>\n>> Would it be worthwhile to also note in this comment FIRST_AUX_PROC's\n>> and IsAuxProcess()'s dependency on B_ARCHIVER and it's location in the\n>> enum table?\n> \n> Yeah, that might be in order. Although looking closer, it's only used in\n> IsAuxProcess, which is only used in one sanity check in\n> AuxProcessMain(). And even that gets refactored away by the later\n> patches in this thread. So on second thoughts, I'll just remove it\n> altogether.\n> \n> I spent some more time on the 'lastBackend' optimization in sinvaladt.c.\n> I realized that it became very useless with these patches, because aux\n> processes are allocated pgprocno's after all the slots for regular\n> backends. There are always aux processes running, so lastBackend would\n> always have a value close to the max anyway. I replaced that with a\n> dense 'pgprocnos' array that keeps track of the exact slots that are in\n> use. I'm not 100% sure this is worth the effort; does any real world\n> workload send shared invalidations so frequently that this matters? In\n> any case, this should avoid the regression if such a workload exists.\n> \n> New patch set attached. I think these are ready to be committed, but\n> would appreciate a final review.\n\ncontrib/amcheck 003_cic_2pc.pl test failures revealed a bug that \nrequired some reworking:\n\nIn a PGPROC entry for a prepared xact, the PGPROC's backendID needs to \nbe the original backend's ID, because the prepared xact is holding the \nlock on the original virtual transaction id. When a transaction's \nownership is moved from the original backend's PGPROC entry to the \nprepared xact PGPROC entry, the backendID needs to be copied over. My \npatch removed the field altogether, so it was not copied over, which \nmade it look like it the original VXID lock was released at prepare.\n\nI fixed that by adding back the backendID field. For regular backends, \nit's always equal to pgprocno + 1, but for prepared xacts, it's the \noriginal backend's ID. To make that less confusing, I moved the \nbackendID and lxid fields together under a 'vxid' struct. The two fields \ntogether form the virtual transaction ID, and that's the only context \nwhere the 'backendID' field should now be looked at.\n\nI also squashed the 'lastBackend' changes in sinvaladt.c to the main patch.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)",
"msg_date": "Thu, 1 Feb 2024 15:54:23 +0200",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Refactoring backend fork+exec code"
},
{
"msg_contents": "Hi,\n\nOn 2024-01-30 02:08:36 +0200, Heikki Linnakangas wrote:\n> I spent some more time on the 'lastBackend' optimization in sinvaladt.c. I\n> realized that it became very useless with these patches, because aux\n> processes are allocated pgprocno's after all the slots for regular backends.\n> There are always aux processes running, so lastBackend would always have a\n> value close to the max anyway. I replaced that with a dense 'pgprocnos'\n> array that keeps track of the exact slots that are in use. I'm not 100% sure\n> this is worth the effort; does any real world workload send shared\n> invalidations so frequently that this matters? In any case, this should\n> avoid the regression if such a workload exists.\n>\n> New patch set attached. I think these are ready to be committed, but would\n> appreciate a final review.\n\n\n> From 54f22231bb2540fc5957c14005956161e6fc9dac Mon Sep 17 00:00:00 2001\n> From: Heikki Linnakangas <[email protected]>\n> Date: Wed, 24 Jan 2024 23:15:55 +0200\n> Subject: [PATCH v8 1/5] Remove superfluous 'pgprocno' field from PGPROC\n>\n> It was always just the index of the PGPROC entry from the beginning of\n> the proc array. Introduce a macro to compute it from the pointer\n> instead.\n\nHm. The pointer math here is bit more expensive than in some other cases, as\nthe struct is fairly large and sizeof(PGPROC) isn't a power of two. Adding\nmore math into loops like in TransactionGroupUpdateXidStatus() might end up\nshowing up.\n\nI've been thinking that we likely should pad PGPROC to some more convenient\nboundary, but...\n\n\nIs this really related to the rest of the series?\n\n\n> From 4e0121e064804b73ef8a5dc10be27b85968ea1af Mon Sep 17 00:00:00 2001\n> From: Heikki Linnakangas <[email protected]>\n> Date: Mon, 29 Jan 2024 23:50:34 +0200\n> Subject: [PATCH v8 2/5] Redefine backend ID to be an index into the proc\n> array.\n>\n> Previously, backend ID was an index into the ProcState array, in the\n> shared cache invalidation manager (sinvaladt.c). The entry in the\n> ProcState array was reserved at backend startup by scanning the array\n> for a free entry, and that was also when the backend got its backend\n> ID. Things becomes slightly simpler if we redefine backend ID to be\n> the index into the PGPROC array, and directly use it also as an index\n> to the ProcState array. This uses a little more memory, as we reserve\n> a few extra slots in the ProcState array for aux processes that don't\n> need them, but the simplicity is worth it.\n\n> Aux processes now also have a backend ID. This simplifies the\n> reservation of BackendStatusArray and ProcSignal slots.\n>\n> You can now convert a backend ID into an index into the PGPROC array\n> simply by subtracting 1. We still use 0-based \"pgprocnos\" in various\n> places, for indexes into the PGPROC array, but the only difference now\n> is that backend IDs start at 1 while pgprocnos start at 0.\n\nWhy aren't we using 0-based indexing for both? InvalidBackendId is -1, so\nthere'd not be a conflict, right?\n\n\n> One potential downside of this patch is that the ProcState array might\n> get less densely packed, as we we don't try so hard to assign\n> low-numbered backend ID anymore. If it's less densely packed,\n> lastBackend will stay at a higher value, and SIInsertDataEntries() and\n> SICleanupQueue() need to scan over more unused entries. I think that's\n> fine. 
They are performance critical enough to matter, and there was no\n> guarantee on dense packing before either: If you launched a lot of\n> backends concurrently, and kept the last one open, lastBackend would\n> also stay at a high value.\n\nIt's perhaps worth noting here that there's a future patch that also addresses\nthis to some degree?\n\n\n> @@ -457,7 +442,7 @@ MarkAsPreparingGuts(GlobalTransaction gxact, TransactionId xid, const char *gid,\n> \tAssert(LWLockHeldByMeInMode(TwoPhaseStateLock, LW_EXCLUSIVE));\n>\n> \tAssert(gxact != NULL);\n> -\tproc = &ProcGlobal->allProcs[gxact->pgprocno];\n> +\tproc = GetPGProcByNumber(gxact->pgprocno);\n>\n> \t/* Initialize the PGPROC entry */\n> \tMemSet(proc, 0, sizeof(PGPROC));\n\nThis set of changes is independent of this commit, isn't it?\n\n\n> diff --git a/src/backend/postmaster/auxprocess.c b/src/backend/postmaster/auxprocess.c\n> index ab86e802f21..39171fea06b 100644\n> --- a/src/backend/postmaster/auxprocess.c\n> +++ b/src/backend/postmaster/auxprocess.c\n> @@ -107,17 +107,7 @@ AuxiliaryProcessMain(AuxProcType auxtype)\n>\n> \tBaseInit();\n>\n> -\t/*\n> -\t * Assign the ProcSignalSlot for an auxiliary process. Since it doesn't\n> -\t * have a BackendId, the slot is statically allocated based on the\n> -\t * auxiliary process type (MyAuxProcType). Backends use slots indexed in\n> -\t * the range from 1 to MaxBackends (inclusive), so we use MaxBackends +\n> -\t * AuxProcType + 1 as the index of the slot for an auxiliary process.\n> -\t *\n> -\t * This will need rethinking if we ever want more than one of a particular\n> -\t * auxiliary process type.\n> -\t */\n> -\tProcSignalInit(MaxBackends + MyAuxProcType + 1);\n> +\tProcSignalInit();\n\nNow that we don't need the offset here, we could move ProcSignalInit() into\nBsaeInit() I think?\n\n\n\n> +/*\n> + * BackendIdGetProc -- get a backend's PGPROC given its backend ID\n> + *\n> + * The result may be out of date arbitrarily quickly, so the caller\n> + * must be careful about how this information is used. NULL is\n> + * returned if the backend is not active.\n> + */\n> +PGPROC *\n> +BackendIdGetProc(int backendID)\n> +{\n> +\tPGPROC\t *result;\n> +\n> +\tif (backendID < 1 || backendID > ProcGlobal->allProcCount)\n> +\t\treturn NULL;\n\nHm, doesn't calling BackendIdGetProc() with these values a bug? That's not\nabout being out of date or such.\n\n\n> +/*\n> + * BackendIdGetTransactionIds -- get a backend's transaction status\n> + *\n> + * Get the xid, xmin, nsubxid and overflow status of the backend. The\n> + * result may be out of date arbitrarily quickly, so the caller must be\n> + * careful about how this information is used.\n> + */\n> +void\n> +BackendIdGetTransactionIds(int backendID, TransactionId *xid,\n> +\t\t\t\t\t\t TransactionId *xmin, int *nsubxid, bool *overflowed)\n> +{\n> +\tPGPROC\t *proc;\n> +\n> +\t*xid = InvalidTransactionId;\n> +\t*xmin = InvalidTransactionId;\n> +\t*nsubxid = 0;\n> +\t*overflowed = false;\n> +\n> +\tif (backendID < 1 || backendID > ProcGlobal->allProcCount)\n> +\t\treturn;\n> +\tproc = GetPGProcByBackendId(backendID);\n> +\n> +\t/* Need to lock out additions/removals of backends */\n> +\tLWLockAcquire(ProcArrayLock, LW_SHARED);\n> +\n> +\tif (proc->pid != 0)\n> +\t{\n> +\t\t*xid = proc->xid;\n> +\t\t*xmin = proc->xmin;\n> +\t\t*nsubxid = proc->subxidStatus.count;\n> +\t\t*overflowed = proc->subxidStatus.overflowed;\n> +\t}\n> +\n> +\tLWLockRelease(ProcArrayLock);\n> +}\n\nHm, I'm not sure about the locking here. 
For one, previously we weren't\nholding ProcArrayLock. For another, holding ProcArrayLock guarantees that the\nbackend doesn't end its transaction, but it can still assign xids etc. And,\nfor that matter, the backendid could have been recycled between the caller\nacquiring the backendId and calling BackendIdGetTransactionIds().\n\n\n> --- a/src/backend/utils/error/elog.c\n> +++ b/src/backend/utils/error/elog.c\n> @@ -3074,18 +3074,18 @@ log_status_format(StringInfo buf, const char *format, ErrorData *edata)\n> \t\t\t\tbreak;\n> \t\t\tcase 'v':\n> \t\t\t\t/* keep VXID format in sync with lockfuncs.c */\n> -\t\t\t\tif (MyProc != NULL && MyProc->backendId != InvalidBackendId)\n> +\t\t\t\tif (MyProc != NULL)\n\nDoesn't this mean we'll include a vxid in more cases now, particularly\nincluding aux processes? That might be ok, but I also suspect that it'll never\nhave meaningful values...\n\n\n> From 94fd46c9ef30ba5e8ac1a8873fce577a4be425f4 Mon Sep 17 00:00:00 2001\n> From: Heikki Linnakangas <[email protected]>\n> Date: Mon, 29 Jan 2024 22:57:49 +0200\n> Subject: [PATCH v8 3/5] Replace 'lastBackend' with an array of in-use slots\n>\n> Now that the procState array is indexed by pgprocno, the 'lastBackend'\n> optimization is useless, because aux processes are assigned PGPROC\n> slots and hence have numbers higher than max_connection. So\n> 'lastBackend' was always set to almost the end of the array.\n>\n> To replace that optimization, mantain a dense array of in-use\n> indexes. This's redundant with ProgGlobal->procarray, but I was afraid\n> of adding any more contention to ProcArrayLock, and this keeps the\n> code isolated to sinvaladt.c too.\n\nI think it'd be good to include that explanation and justification in the code\nas well.\n\nI suspect we'll need to split out \"procarray membership\" locking from\nProcArrayLock at some point in some form (vagueness alert). To reduce\ncontention we already have to hold both ProcArrayLock and XidGenLock when\nchanging membership, so that holding either of the locks prevents the set of\nmembers to change. This, kinda and differently, adds yet another lock to that.\n\n\n\n> It's not clear if we need that optimization at all. I was able to\n> write a test case that become slower without this: set max_connections\n> to a very high number (> 5000), and create+truncate a table in the\n> same transaction thousands of times to send invalidation messages,\n> with fsync=off. That became about 20% slower on my laptop. Arguably\n> that's so unrealistic that it doesn't matter, but nevertheless, this\n> commit restores the performance of that.\n\nI think it's unfortunately not that uncommon to be bottlenecked by sinval\nperformance, so I think it's good that you're addressing it.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 7 Feb 2024 10:25:21 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Refactoring backend fork+exec code"
},
{
"msg_contents": "On 07/02/2024 20:25, Andres Freund wrote:\n> On 2024-01-30 02:08:36 +0200, Heikki Linnakangas wrote:\n>> From 54f22231bb2540fc5957c14005956161e6fc9dac Mon Sep 17 00:00:00 2001\n>> From: Heikki Linnakangas <[email protected]>\n>> Date: Wed, 24 Jan 2024 23:15:55 +0200\n>> Subject: [PATCH v8 1/5] Remove superfluous 'pgprocno' field from PGPROC\n>>\n>> It was always just the index of the PGPROC entry from the beginning of\n>> the proc array. Introduce a macro to compute it from the pointer\n>> instead.\n> \n> Hm. The pointer math here is bit more expensive than in some other cases, as\n> the struct is fairly large and sizeof(PGPROC) isn't a power of two. Adding\n> more math into loops like in TransactionGroupUpdateXidStatus() might end up\n> showing up.\n\nI added a MyProcNumber global variable that is set to \nGetNumberFromPGProc(MyProc). I'm not really concerned about the extra \nmath, but with MyProcNumber it should definitely not be an issue. The \nfew GetNumberFromPGProc() invocations that remain are in less \nperformance-critical paths.\n\n(In later patch, I switch backend ids to 0-based indexing, which \nreplaces MyProcNumber references with MyBackendId)\n\n> Is this really related to the rest of the series?\n\nIt's not strictly necessary, but it felt prudent to remove it now, since \nI'm removing the backendID field too.\n\n>> You can now convert a backend ID into an index into the PGPROC array\n>> simply by subtracting 1. We still use 0-based \"pgprocnos\" in various\n>> places, for indexes into the PGPROC array, but the only difference now\n>> is that backend IDs start at 1 while pgprocnos start at 0.\n> \n> Why aren't we using 0-based indexing for both? InvalidBackendId is -1, so\n> there'd not be a conflict, right?\n\nCorrect. I was being conservative and didn't dare to change the old \nconvention. The backend ids are visible in a few places like \"pg_temp_0\" \nschema names, and pg_stat_get_*() functions.\n\nOne alternative would be to reserve and waste allProcs[0]. Then pgprocno \nand backend ID could both be direct indexes to the array, but 0 would \nnot be used.\n\nIf we switch to 0-based indexing, it begs the question: why don't we \nmerge the concepts of \"pgprocno\" and \"BackendId\" completely and call it \nthe same thing everywhere? It probably would be best in the long run. It \nfeels like a lot of churn though.\n\nAnyway, I switched to 0-based indexing in the attached new version, to \nsee what it looks like.\n\n>> @@ -457,7 +442,7 @@ MarkAsPreparingGuts(GlobalTransaction gxact, TransactionId xid, const char *gid,\n>> \tAssert(LWLockHeldByMeInMode(TwoPhaseStateLock, LW_EXCLUSIVE));\n>>\n>> \tAssert(gxact != NULL);\n>> -\tproc = &ProcGlobal->allProcs[gxact->pgprocno];\n>> +\tproc = GetPGProcByNumber(gxact->pgprocno);\n>>\n>> \t/* Initialize the PGPROC entry */\n>> \tMemSet(proc, 0, sizeof(PGPROC));\n> \n> This set of changes is independent of this commit, isn't it?\n\nYes. It's just for symmetry, now that we use GetNumberFromPGProc() to \nget the pgprocno.\n\n>> diff --git a/src/backend/postmaster/auxprocess.c b/src/backend/postmaster/auxprocess.c\n>> index ab86e802f21..39171fea06b 100644\n>> --- a/src/backend/postmaster/auxprocess.c\n>> +++ b/src/backend/postmaster/auxprocess.c\n>> @@ -107,17 +107,7 @@ AuxiliaryProcessMain(AuxProcType auxtype)\n>>\n>> \tBaseInit();\n>>\n>> -\t/*\n>> -\t * Assign the ProcSignalSlot for an auxiliary process. 
Since it doesn't\n>> -\t * have a BackendId, the slot is statically allocated based on the\n>> -\t * auxiliary process type (MyAuxProcType). Backends use slots indexed in\n>> -\t * the range from 1 to MaxBackends (inclusive), so we use MaxBackends +\n>> -\t * AuxProcType + 1 as the index of the slot for an auxiliary process.\n>> -\t *\n>> -\t * This will need rethinking if we ever want more than one of a particular\n>> -\t * auxiliary process type.\n>> -\t */\n>> -\tProcSignalInit(MaxBackends + MyAuxProcType + 1);\n>> +\tProcSignalInit();\n> \n> Now that we don't need the offset here, we could move ProcSignalInit() into\n> BsaeInit() I think?\n\nHmm, doesn't feel right to me. BaseInit() is mostly concerned with \nsetting up backend-private structures, and it's also called for a \nstandalone backend.\n\nI feel the process initialization codepaths could use some cleanup in \ngeneral. Not sure what exactly.\n\n>> +/*\n>> + * BackendIdGetProc -- get a backend's PGPROC given its backend ID\n>> + *\n>> + * The result may be out of date arbitrarily quickly, so the caller\n>> + * must be careful about how this information is used. NULL is\n>> + * returned if the backend is not active.\n>> + */\n>> +PGPROC *\n>> +BackendIdGetProc(int backendID)\n>> +{\n>> +\tPGPROC\t *result;\n>> +\n>> +\tif (backendID < 1 || backendID > ProcGlobal->allProcCount)\n>> +\t\treturn NULL;\n> \n> Hm, doesn't calling BackendIdGetProc() with these values a bug? That's not\n> about being out of date or such.\n\nPerhaps. I just followed the example of the old implementation, which \nalso returns NULL on bogus inputs.\n\n>> +/*\n>> + * BackendIdGetTransactionIds -- get a backend's transaction status\n>> + *\n>> + * Get the xid, xmin, nsubxid and overflow status of the backend. The\n>> + * result may be out of date arbitrarily quickly, so the caller must be\n>> + * careful about how this information is used.\n>> + */\n>> +void\n>> +BackendIdGetTransactionIds(int backendID, TransactionId *xid,\n>> +\t\t\t\t\t\t TransactionId *xmin, int *nsubxid, bool *overflowed)\n>> +{\n>> +\tPGPROC\t *proc;\n>> +\n>> +\t*xid = InvalidTransactionId;\n>> +\t*xmin = InvalidTransactionId;\n>> +\t*nsubxid = 0;\n>> +\t*overflowed = false;\n>> +\n>> +\tif (backendID < 1 || backendID > ProcGlobal->allProcCount)\n>> +\t\treturn;\n>> +\tproc = GetPGProcByBackendId(backendID);\n>> +\n>> +\t/* Need to lock out additions/removals of backends */\n>> +\tLWLockAcquire(ProcArrayLock, LW_SHARED);\n>> +\n>> +\tif (proc->pid != 0)\n>> +\t{\n>> +\t\t*xid = proc->xid;\n>> +\t\t*xmin = proc->xmin;\n>> +\t\t*nsubxid = proc->subxidStatus.count;\n>> +\t\t*overflowed = proc->subxidStatus.overflowed;\n>> +\t}\n>> +\n>> +\tLWLockRelease(ProcArrayLock);\n>> +}\n> \n> Hm, I'm not sure about the locking here. For one, previously we weren't\n> holding ProcArrayLock. For another, holding ProcArrayLock guarantees that the\n> backend doesn't end its transaction, but it can still assign xids etc. And,\n> for that matter, the backendid could have been recycled between the caller\n> acquiring the backendId and calling BackendIdGetTransactionIds().\n\nYeah, the returned values could be out-of-date and even inconsistent \nwith each other. 
I just faithfully copied the old implementation.\n\nPerhaps this should just skip the ProcArrayLock altogether.\n\n>> --- a/src/backend/utils/error/elog.c\n>> +++ b/src/backend/utils/error/elog.c\n>> @@ -3074,18 +3074,18 @@ log_status_format(StringInfo buf, const char *format, ErrorData *edata)\n>> \t\t\t\tbreak;\n>> \t\t\tcase 'v':\n>> \t\t\t\t/* keep VXID format in sync with lockfuncs.c */\n>> -\t\t\t\tif (MyProc != NULL && MyProc->backendId != InvalidBackendId)\n>> +\t\t\t\tif (MyProc != NULL)\n> \n> Doesn't this mean we'll include a vxid in more cases now, particularly\n> including aux processes? That might be ok, but I also suspect that it'll never\n> have meaningful values...\n\nFixed. (I thought I changed that back already in the last patch version, \nbut apparently I only did it in jsonlog.c)\n\n>> From 94fd46c9ef30ba5e8ac1a8873fce577a4be425f4 Mon Sep 17 00:00:00 2001\n>> From: Heikki Linnakangas <[email protected]>\n>> Date: Mon, 29 Jan 2024 22:57:49 +0200\n>> Subject: [PATCH v8 3/5] Replace 'lastBackend' with an array of in-use slots\n>>\n>> Now that the procState array is indexed by pgprocno, the 'lastBackend'\n>> optimization is useless, because aux processes are assigned PGPROC\n>> slots and hence have numbers higher than max_connection. So\n>> 'lastBackend' was always set to almost the end of the array.\n>>\n>> To replace that optimization, mantain a dense array of in-use\n>> indexes. This's redundant with ProgGlobal->procarray, but I was afraid\n>> of adding any more contention to ProcArrayLock, and this keeps the\n>> code isolated to sinvaladt.c too.\n> \n> I think it'd be good to include that explanation and justification in the code\n> as well.\n\nAdded a comment.\n\n\nAttached is a new version of these BackendId changes. I kept it as three \nseparate patches to highlight the changes from switching to 0-based \nindexing, but I think they should be squashed together before pushing.\n\nI think the last remaining question here is about the 0- vs 1-based \nindexing of BackendIds. Is it a good idea to switch to 0-based indexing? \nAnd if we do it, should we reserve PGPROC 0. I'm on the fence on this one.\n\nAnd if we switch to 0-based indexing, should we do a more comprehensive \nsearch & replace of \"pgprocno\" to \"backendId\", or something like that. \nMy vote is no, the code churn doesn't feel worth it. And it can also be \ndone separately later if we want to.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)",
"msg_date": "Thu, 8 Feb 2024 13:19:53 +0200",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Refactoring backend fork+exec code"
},
{
"msg_contents": "Hi,\n\nOn 2024-02-08 13:19:53 +0200, Heikki Linnakangas wrote:\n> > > -\t/*\n> > > -\t * Assign the ProcSignalSlot for an auxiliary process. Since it doesn't\n> > > -\t * have a BackendId, the slot is statically allocated based on the\n> > > -\t * auxiliary process type (MyAuxProcType). Backends use slots indexed in\n> > > -\t * the range from 1 to MaxBackends (inclusive), so we use MaxBackends +\n> > > -\t * AuxProcType + 1 as the index of the slot for an auxiliary process.\n> > > -\t *\n> > > -\t * This will need rethinking if we ever want more than one of a particular\n> > > -\t * auxiliary process type.\n> > > -\t */\n> > > -\tProcSignalInit(MaxBackends + MyAuxProcType + 1);\n> > > +\tProcSignalInit();\n> >\n> > Now that we don't need the offset here, we could move ProcSignalInit() into\n> > BsaeInit() I think?\n>\n> Hmm, doesn't feel right to me. BaseInit() is mostly concerned with setting\n> up backend-private structures, and it's also called for a standalone\n> backend.\n\nIt already initializes a lot of shared subsystems (pgstat, replication slots\nand arguable things like the buffer pool, temporary file access and WAL). And\nnote that it already requires that MyProc is already set (but it's not yet\n\"added\" to the procarray, i.e. doesn't do visibility stuff at that stage).\n\nI don't think that BaseInit() being called by standalone backends really poses\na problem? So is InitPostgres(), which does call ProcSignalInit() in\nstandalone processes.\n\nMy mental model is that BaseInit() is for stuff that's shared between\nprocesses that do attach to databases and those that don't. Right now the\ninitialization flow is something like this ascii diagram:\n\nstandalone: \\ /-> StartupXLOG() \\\n -> InitProcess() -\\ /-> ProcArrayAdd() -> SharedInvalBackendInit() -> ProcSignalInit()- -> pgstat_beinit() -> attach to db -> pgstat_bestart()\nnormal backend: / \\ /\n -> BaseInit() -\naux process: InitAuxiliaryProcess() -/ \\-- -> ProcSignalInit() -> pgstat_beinit() -> pgstat_bestart()\n\n\nThe only reason ProcSignalInit() happens kinda late is that historically we\nused BackendIds as the index, which were only assigned in\nSharedInvalBackendInit() for normal processes. But that doesn't make sense\nanymore after your changes.\n\nSimilarly, we do pgstat_beinit() quite late, but that's again only because it\nuses MyBackendId, which today is only assigned during\nSharedInvalBackendInit(). I don't think we can do pgstat_bestart() earlier\nthough, which is a shame, given the four calls to it inside InitPostgres().\n\n\n> I feel the process initialization codepaths could use some cleanup in\n> general. Not sure what exactly.\n\nVery much agreed.\n\n\n> > > +/*\n> > > + * BackendIdGetProc -- get a backend's PGPROC given its backend ID\n> > > + *\n> > > + * The result may be out of date arbitrarily quickly, so the caller\n> > > + * must be careful about how this information is used. NULL is\n> > > + * returned if the backend is not active.\n> > > + */\n> > > +PGPROC *\n> > > +BackendIdGetProc(int backendID)\n> > > +{\n> > > +\tPGPROC\t *result;\n> > > +\n> > > +\tif (backendID < 1 || backendID > ProcGlobal->allProcCount)\n> > > +\t\treturn NULL;\n> >\n> > Hm, doesn't calling BackendIdGetProc() with these values a bug? That's not\n> > about being out of date or such.\n>\n> Perhaps. I just followed the example of the old implementation, which also\n> returns NULL on bogus inputs.\n\nFair enough. 
Makes it harder to not notice bugs, but that's not on this patchset to fix...\n\n\n\n> I think the last remaining question here is about the 0- vs 1-based indexing\n> of BackendIds. Is it a good idea to switch to 0-based indexing? And if we do\n> it, should we reserve PGPROC 0. I'm on the fence on this one.\n\nI lean towards it being a good idea. Having two internal indexing schemes was\nbad enough so far, but at least one would fairly quickly notice if one used\nthe wrong one. If they're just offset by 1, it might end up taking longer,\nbecause that'll often also be a valid id. But I think you have the author's\nprerogative on this one.\n\nIf we do so, I think it might be better to standardize on MyProcNumber instead\nof MyBackendId. That'll force looking at code where indexing shifts by 1 - and\nit also seems more descriptive, as inside postgres it's imo clearer what a\n\"proc number\" is than what a \"backend id\" is. Particularly because the latter\nis also used for things that aren't backends...\n\n\nThe only exception are SQL level users, for those I think it might make sense\nto keep the current 1 based indexing, there's just a few functions where we'd\nneed to translate.\n\n\n\n> @@ -791,6 +792,7 @@ ProcArrayEndTransactionInternal(PGPROC *proc, TransactionId latestXid)\n> static void\n> ProcArrayGroupClearXid(PGPROC *proc, TransactionId latestXid)\n> {\n> +\tint\t\t\tpgprocno = GetNumberFromPGProc(proc);\n> \tPROC_HDR *procglobal = ProcGlobal;\n> \tuint32\t\tnextidx;\n> \tuint32\t\twakeidx;\n\nThis one is the only one where I could see the additional math done in\nGetNumberFromPGProc() hurting. Which is somewhat silly, because the proc\npassed in is always MyProc. In the most unrealistic workload imaginable (many\nbackends doing nothing but assigning xids and committing, server-side), it\nindeed seems to make a tiny difference. But not enough to worry about, I think.\n\nFWIW, if I use GetNumberFromPGProc(MyProc) instead of MyProcNumber in\nLWLockQueueSelf(), that does show up a bit more noticeable.\n\n\n> void\n> -ProcSignalInit(int pss_idx)\n> +ProcSignalInit(void)\n> {\n> \tProcSignalSlot *slot;\n> \tuint64\t\tbarrier_generation;\n>\n> -\tAssert(pss_idx >= 1 && pss_idx <= NumProcSignalSlots);\n> -\n> -\tslot = &ProcSignal->psh_slot[pss_idx - 1];\n> +\tif (MyBackendId <= 0)\n> +\t\telog(ERROR, \"MyBackendId not set\");\n> +\tif (MyBackendId > NumProcSignalSlots)\n> +\t\telog(ERROR, \"unexpected MyBackendId %d in ProcSignalInit (max %d)\", MyBackendId, NumProcSignalSlots);\n> +\tslot = &ProcSignal->psh_slot[MyBackendId - 1];\n>\n> \t/* sanity check */\n> \tif (slot->pss_pid != 0)\n> \t\telog(LOG, \"process %d taking over ProcSignal slot %d, but it's not empty\",\n> -\t\t\t MyProcPid, pss_idx);\n> +\t\t\t MyProcPid, (int) (slot - ProcSignal->psh_slot));\n\nHm, why not use MyBackendId - 1 as above? Am I missing something?\n\n\n> /*\n> @@ -212,11 +211,7 @@ ProcSignalInit(int pss_idx)\n> static void\n> CleanupProcSignalState(int status, Datum arg)\n> {\n> -\tint\t\t\tpss_idx = DatumGetInt32(arg);\n> -\tProcSignalSlot *slot;\n> -\n> -\tslot = &ProcSignal->psh_slot[pss_idx - 1];\n> -\tAssert(slot == MyProcSignalSlot);\n> +\tProcSignalSlot *slot = MyProcSignalSlot;\n\nMaybe worth asserting that MyProcSignalSlot isn't NULL? Previously that was\nchecked via the assertion above.\n\n\n> +\t\t\tif (i != segP->numProcs - 1)\n> +\t\t\t\tsegP->pgprocnos[i] = segP->pgprocnos[segP->numProcs - 1];\n> +\t\t\tbreak;\n\nHm. 
This means the list will be out-of-order more and more over time, leading\nto less cache efficient access patterns. Perhaps we should keep this sorted,\nlike we do for ProcGlobal->xids etc?\n\n\n> @@ -148,19 +148,11 @@ pg_log_backend_memory_contexts(PG_FUNCTION_ARGS)\n> \tPGPROC\t *proc;\n> \tBackendId\tbackendId = InvalidBackendId;\n>\n> -\tproc = BackendPidGetProc(pid);\n> -\n> \t/*\n> \t * See if the process with given pid is a backend or an auxiliary process.\n> -\t *\n> -\t * If the given process is a backend, use its backend id in\n> -\t * SendProcSignal() later to speed up the operation. Otherwise, don't do\n> -\t * that because auxiliary processes (except the startup process) don't\n> -\t * have a valid backend id.\n> \t */\n> -\tif (proc != NULL)\n> -\t\tbackendId = proc->backendId;\n> -\telse\n> +\tproc = BackendPidGetProc(pid);\n> +\tif (proc == NULL)\n> \t\tproc = AuxiliaryPidGetProc(pid);\n>\n> \t/*\n> @@ -183,6 +175,8 @@ pg_log_backend_memory_contexts(PG_FUNCTION_ARGS)\n> \t\tPG_RETURN_BOOL(false);\n> \t}\n>\n> +\tif (proc != NULL)\n> +\t\tbackendId = GetBackendIdFromPGProc(proc);\n\nHow can proc be NULL here?\n\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 14 Feb 2024 13:37:11 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Refactoring backend fork+exec code"
},
{
"msg_contents": "On Thu, Feb 15, 2024 at 3:07 AM Andres Freund <[email protected]> wrote:\n> > I think the last remaining question here is about the 0- vs 1-based indexing\n> > of BackendIds. Is it a good idea to switch to 0-based indexing? And if we do\n> > it, should we reserve PGPROC 0. I'm on the fence on this one.\n>\n> I lean towards it being a good idea. Having two internal indexing schemes was\n> bad enough so far, but at least one would fairly quickly notice if one used\n> the wrong one. If they're just offset by 1, it might end up taking longer,\n> because that'll often also be a valid id.\n\nYeah, I think making everything 0-based is probably the best way\nforward long term. It might require more cleanup work to get there,\nbut it's just a lot simpler in the end, IMHO.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 15 Feb 2024 10:39:44 +0530",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Refactoring backend fork+exec code"
},
{
"msg_contents": "On 15/02/2024 07:09, Robert Haas wrote:\n> On Thu, Feb 15, 2024 at 3:07 AM Andres Freund <[email protected]> wrote:\n>>> I think the last remaining question here is about the 0- vs 1-based indexing\n>>> of BackendIds. Is it a good idea to switch to 0-based indexing? And if we do\n>>> it, should we reserve PGPROC 0. I'm on the fence on this one.\n>>\n>> I lean towards it being a good idea. Having two internal indexing schemes was\n>> bad enough so far, but at least one would fairly quickly notice if one used\n>> the wrong one. If they're just offset by 1, it might end up taking longer,\n>> because that'll often also be a valid id.\n> \n> Yeah, I think making everything 0-based is probably the best way\n> forward long term. It might require more cleanup work to get there,\n> but it's just a lot simpler in the end, IMHO.\n\nHere's another patch version that does that. Yeah, I agree it's nicer in \nthe end.\n\nI'm pretty happy with this now. I'll read through these patches myself \nagain after sleeping over it and try to get this committed by the end of \nthe week, but another pair of eyes wouldn't hurt.\n\nOn 14/02/2024 23:37, Andres Freund wrote:\n>> void\n>> -ProcSignalInit(int pss_idx)\n>> +ProcSignalInit(void)\n>> {\n>> \tProcSignalSlot *slot;\n>> \tuint64\t\tbarrier_generation;\n>>\n>> -\tAssert(pss_idx >= 1 && pss_idx <= NumProcSignalSlots);\n>> -\n>> -\tslot = &ProcSignal->psh_slot[pss_idx - 1];\n>> +\tif (MyBackendId <= 0)\n>> +\t\telog(ERROR, \"MyBackendId not set\");\n>> +\tif (MyBackendId > NumProcSignalSlots)\n>> +\t\telog(ERROR, \"unexpected MyBackendId %d in ProcSignalInit (max %d)\", MyBackendId, NumProcSignalSlots);\n>> +\tslot = &ProcSignal->psh_slot[MyBackendId - 1];\n>>\n>> \t/* sanity check */\n>> \tif (slot->pss_pid != 0)\n>> \t\telog(LOG, \"process %d taking over ProcSignal slot %d, but it's not empty\",\n>> -\t\t\t MyProcPid, pss_idx);\n>> +\t\t\t MyProcPid, (int) (slot - ProcSignal->psh_slot));\n> \n> Hm, why not use MyBackendId - 1 as above? Am I missing something?\n\nYou're right, fixed.\n\n>> /*\n>> @@ -212,11 +211,7 @@ ProcSignalInit(int pss_idx)\n>> static void\n>> CleanupProcSignalState(int status, Datum arg)\n>> {\n>> -\tint\t\t\tpss_idx = DatumGetInt32(arg);\n>> -\tProcSignalSlot *slot;\n>> -\n>> -\tslot = &ProcSignal->psh_slot[pss_idx - 1];\n>> -\tAssert(slot == MyProcSignalSlot);\n>> +\tProcSignalSlot *slot = MyProcSignalSlot;\n> \n> Maybe worth asserting that MyProcSignalSlot isn't NULL? Previously that was\n> checked via the assertion above.\n\nAdded.\n\n>> +\t\t\tif (i != segP->numProcs - 1)\n>> +\t\t\t\tsegP->pgprocnos[i] = segP->pgprocnos[segP->numProcs - 1];\n>> +\t\t\tbreak;\n> \n> Hm. This means the list will be out-of-order more and more over time, leading\n> to less cache efficient access patterns. Perhaps we should keep this sorted,\n> like we do for ProcGlobal->xids etc?\n\nPerhaps, although these are accessed much less frequently so I'm not \nconvinced it's worth the trouble.\n\nI haven't found a good performance test case that where the shared cache \ninvalidation would show up. Would you happen to have one?\n\n>> @@ -183,6 +175,8 @@ pg_log_backend_memory_contexts(PG_FUNCTION_ARGS)\n>> \t\tPG_RETURN_BOOL(false);\n>> \t}\n>>\n>> +\tif (proc != NULL)\n>> +\t\tbackendId = GetBackendIdFromPGProc(proc);\n> \n> How can proc be NULL here?\n\nFixed.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)",
"msg_date": "Thu, 22 Feb 2024 02:37:16 +0200",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Refactoring backend fork+exec code"
},
{
"msg_contents": "On 22/02/2024 02:37, Heikki Linnakangas wrote:\n> On 15/02/2024 07:09, Robert Haas wrote:\n>> On Thu, Feb 15, 2024 at 3:07 AM Andres Freund <[email protected]> wrote:\n>>>> I think the last remaining question here is about the 0- vs 1-based indexing\n>>>> of BackendIds. Is it a good idea to switch to 0-based indexing? And if we do\n>>>> it, should we reserve PGPROC 0. I'm on the fence on this one.\n>>>\n>>> I lean towards it being a good idea. Having two internal indexing schemes was\n>>> bad enough so far, but at least one would fairly quickly notice if one used\n>>> the wrong one. If they're just offset by 1, it might end up taking longer,\n>>> because that'll often also be a valid id.\n>>\n>> Yeah, I think making everything 0-based is probably the best way\n>> forward long term. It might require more cleanup work to get there,\n>> but it's just a lot simpler in the end, IMHO.\n> \n> Here's another patch version that does that. Yeah, I agree it's nicer in\n> the end.\n> \n> I'm pretty happy with this now. I'll read through these patches myself\n> again after sleeping over it and try to get this committed by the end of\n> the week, but another pair of eyes wouldn't hurt.\n\nAnd pushed. Thanks for the reviews!\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Sun, 3 Mar 2024 19:40:32 +0200",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Refactoring backend fork+exec code"
},
{
"msg_contents": "I've now completed many of the side-quests, here are the patches that \nremain.\n\nThe first three patches form a logical unit. They move the \ninitialization of the Port struct from postmaster to the backend \nprocess. Currently, that work is split between the postmaster and the \nbackend process so that postmaster fills in the socket and some other \nfields, and the backend process fills the rest after reading the startup \npacket. With these patches, there is a new much smaller ClientSocket \nstruct that is passed from the postmaster to the child process, which \ncontains just the fields that postmaster initializes. The Port struct is \nallocated in the child process. That makes the backend startup easier to \nunderstand. I plan to commit those three patches next if there are no \nobjections.\n\nThat leaves the rest of the patches. I think they're in pretty good \nshape too, and I've gotten some review on those earlier and have \naddressed the comments I got so far, but would still appreciate another \nround of review.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)",
"msg_date": "Mon, 4 Mar 2024 11:05:08 +0200",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Refactoring backend fork+exec code"
},
{
"msg_contents": "On Mon, Mar 4, 2024 at 1:40 AM Heikki Linnakangas <[email protected]> wrote:\n\n> On 22/02/2024 02:37, Heikki Linnakangas wrote:\n> > Here's another patch version that does that. Yeah, I agree it's nicer in\n> > the end.\n> >\n> > I'm pretty happy with this now. I'll read through these patches myself\n> > again after sleeping over it and try to get this committed by the end of\n> > the week, but another pair of eyes wouldn't hurt.\n>\n> And pushed. Thanks for the reviews!\n\n\nI noticed that there are still three places in backend_status.c where\npgstat_get_beentry_by_backend_id() is referenced. I think we should\nreplace them with pgstat_get_beentry_by_proc_number().\n\nThanks\nRichard\n\nOn Mon, Mar 4, 2024 at 1:40 AM Heikki Linnakangas <[email protected]> wrote:On 22/02/2024 02:37, Heikki Linnakangas wrote:\n> Here's another patch version that does that. Yeah, I agree it's nicer in\n> the end.\n> \n> I'm pretty happy with this now. I'll read through these patches myself\n> again after sleeping over it and try to get this committed by the end of\n> the week, but another pair of eyes wouldn't hurt.\n\nAnd pushed. Thanks for the reviews!I noticed that there are still three places in backend_status.c wherepgstat_get_beentry_by_backend_id() is referenced. I think we shouldreplace them with pgstat_get_beentry_by_proc_number().ThanksRichard",
"msg_date": "Tue, 5 Mar 2024 17:44:37 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Refactoring backend fork+exec code"
},
{
"msg_contents": "On 05/03/2024 11:44, Richard Guo wrote:\n> I noticed that there are still three places in backend_status.c where\n> pgstat_get_beentry_by_backend_id() is referenced. I think we should\n> replace them with pgstat_get_beentry_by_proc_number().\n\nFixed, thanks!\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Tue, 5 Mar 2024 18:31:31 +0200",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Refactoring backend fork+exec code"
},
{
"msg_contents": "On Mon Mar 4, 2024 at 3:05 AM CST, Heikki Linnakangas wrote:\n> I've now completed many of the side-quests, here are the patches that \n> remain.\n>\n> The first three patches form a logical unit. They move the \n> initialization of the Port struct from postmaster to the backend \n> process. Currently, that work is split between the postmaster and the \n> backend process so that postmaster fills in the socket and some other \n> fields, and the backend process fills the rest after reading the startup \n> packet. With these patches, there is a new much smaller ClientSocket \n> struct that is passed from the postmaster to the child process, which \n> contains just the fields that postmaster initializes. The Port struct is \n> allocated in the child process. That makes the backend startup easier to \n> understand. I plan to commit those three patches next if there are no \n> objections.\n>\n> That leaves the rest of the patches. I think they're in pretty good \n> shape too, and I've gotten some review on those earlier and have \n> addressed the comments I got so far, but would still appreciate another \n> round of review.\n\n> - * *MyProcPort, because ConnCreate() allocated that space with malloc()\n> - * ... else we'd need to copy the Port data first. Also, subsidiary data\n> - * such as the username isn't lost either; see ProcessStartupPacket().\n> + * *MyProcPort, because that space is allocated in stack ... else we'd\n> + * need to copy the Port data first. Also, subsidiary data such as the\n> + * username isn't lost either; see ProcessStartupPacket().\n\ns/allocated in/allocated on the\n\nThe first 3 patches seem good to go, in my opinion.\n\n> @@ -225,14 +331,13 @@ internal_forkexec(int argc, char *argv[], ClientSocket *client_sock, BackgroundW\n> return -1;\n> }\n> \n> - /* Make sure caller set up argv properly */\n> - Assert(argc >= 3);\n> - Assert(argv[argc] == NULL);\n> - Assert(strncmp(argv[1], \"--fork\", 6) == 0);\n> - Assert(argv[2] == NULL);\n> -\n> - /* Insert temp file name after --fork argument */\n> + /* set up argv properly */\n> + argv[0] = \"postgres\";\n> + snprintf(forkav, MAXPGPATH, \"--forkchild=%s\", child_kind);\n> + argv[1] = forkav;\n> + /* Insert temp file name after --forkchild argument */\n> argv[2] = tmpfilename;\n> + argv[3] = NULL;\n\nShould we use postgres_exec_path instead of the naked \"postgres\" here?\n\n> + /* in postmaster, fork failed ... */\n> + ereport(LOG,\n> + (errmsg(\"could not fork worker process: %m\")));\n> + /* undo what assign_backendlist_entry did */\n> + ReleasePostmasterChildSlot(rw->rw_child_slot);\n> + rw->rw_child_slot = 0;\n> + pfree(rw->rw_backend);\n> + rw->rw_backend = NULL;\n> + /* mark entry as crashed, so we'll try again later */\n> + rw->rw_crashed_at = GetCurrentTimestamp();\n> + return false;\n\nI think the error message should include the word \"background.\" It would \nbe more consistent with the log message above it.\n\n> +typedef struct\n> +{\n> + int syslogFile;\n> + int csvlogFile;\n> + int jsonlogFile;\n> +} syslogger_startup_data;\n\nIt would be nice if all of these startup data structs were named \nsimilarly. For instance, a previous one was BackendStartupInfo. It would \nhelp with greppability.\n\nI noticed there were a few XXX comments left that you created. I'll \nhighlight them here for more visibility.\n\n> +/* XXX: where does this belong? 
*/\n> +extern bool LoadedSSL;\n\nPerhaps near the My* variables or maybe in the Port struct?\n\n> +#ifdef EXEC_BACKEND\n> +\n> + /*\n> + * Need to reinitialize the SSL library in the backend, since the context\n> + * structures contain function pointers and cannot be passed through the\n> + * parameter file.\n> + *\n> + * If for some reason reload fails (maybe the user installed broken key\n> + * files), soldier on without SSL; that's better than all connections\n> + * becoming impossible.\n> + *\n> + * XXX should we do this in all child processes? For the moment it's\n> + * enough to do it in backend children. XXX good question indeed\n> + */\n> +#ifdef USE_SSL\n> + if (EnableSSL)\n> + {\n> + if (secure_initialize(false) == 0)\n> + LoadedSSL = true;\n> + else\n> + ereport(LOG,\n> + (errmsg(\"SSL configuration could not be loaded in child process\")));\n> + }\n> +#endif\n> +#endif\n\nHere you added the \"good question indeed.\" I am not sure what the best \nanswer is either! :)\n\n> + /* XXX: translation? */\n> + ereport(LOG,\n> + (errmsg(\"could not fork %s process: %m\", PostmasterChildName(type))));\n\nI assume you are referring to the child name here?\n\n> XXX: We now have functions called AuxiliaryProcessInit() and\n> InitAuxiliaryProcess(). Confusing.\n\nBased on my analysis, the *Init() is called in the Main functions, while \nInit*() is called before the Main functions. Maybe \nAuxiliaryProcessInit() could be renamed to AuxiliaryProcessStartup()? \nRename the other to AuxiliaryProcessInit().\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Tue, 05 Mar 2024 17:02:55 -0600",
"msg_from": "\"Tristan Partin\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Refactoring backend fork+exec code"
},
{
"msg_contents": "On 06/03/2024 01:02, Tristan Partin wrote:\n> The first 3 patches seem good to go, in my opinion.\n\nCommitted these first patches, with a few more changes. Notably, I \nrealized that we should move the logic that I originally put in the new \nInitClientConnection function to the existing pq_init() function. It \nservers the same purpose, initialization of the socket in the child \nprocess. Thanks for the review!\n\n>> @@ -225,14 +331,13 @@ internal_forkexec(int argc, char *argv[], ClientSocket *client_sock, BackgroundW\n>> return -1;\n>> }\n>>\n>> - /* Make sure caller set up argv properly */\n>> - Assert(argc >= 3);\n>> - Assert(argv[argc] == NULL);\n>> - Assert(strncmp(argv[1], \"--fork\", 6) == 0);\n>> - Assert(argv[2] == NULL);\n>> -\n>> - /* Insert temp file name after --fork argument */\n>> + /* set up argv properly */\n>> + argv[0] = \"postgres\";\n>> + snprintf(forkav, MAXPGPATH, \"--forkchild=%s\", child_kind);\n>> + argv[1] = forkav;\n>> + /* Insert temp file name after --forkchild argument */\n>> argv[2] = tmpfilename;\n>> + argv[3] = NULL;\n> \n> Should we use postgres_exec_path instead of the naked \"postgres\" here?\n\nI don't know, but it's the same as on 'master' currently. The code just \ngot moved around.\n\n>> + /* in postmaster, fork failed ... */\n>> + ereport(LOG,\n>> + (errmsg(\"could not fork worker process: %m\")));\n>> + /* undo what assign_backendlist_entry did */\n>> + ReleasePostmasterChildSlot(rw->rw_child_slot);\n>> + rw->rw_child_slot = 0;\n>> + pfree(rw->rw_backend);\n>> + rw->rw_backend = NULL;\n>> + /* mark entry as crashed, so we'll try again later */\n>> + rw->rw_crashed_at = GetCurrentTimestamp();\n>> + return false;\n> \n> I think the error message should include the word \"background.\" It would\n> be more consistent with the log message above it.\n\nThis is also a pre-existing message I just moved around. But yeah, I \nagree, so changed.\n\n>> +typedef struct\n>> +{\n>> + int syslogFile;\n>> + int csvlogFile;\n>> + int jsonlogFile;\n>> +} syslogger_startup_data;\n> \n> It would be nice if all of these startup data structs were named\n> similarly. For instance, a previous one was BackendStartupInfo. It would\n> help with greppability.\n\nRenamed them to SysloggerStartupData and BackendStartupData. Background \nworker startup still passes a struct called BackgroundWorker, however. I \nleft that as it is, because the struct is used for other purposes too.\n\n> I noticed there were a few XXX comments left that you created. I'll\n> highlight them here for more visibility.\n> \n>> +/* XXX: where does this belong? */\n>> +extern bool LoadedSSL;\n> \n> Perhaps near the My* variables or maybe in the Port struct?\n\nIt is valid in the postmaster, too, though. The My* variables and Port \nstruct only make sense in the child process.\n\nI think this is the best place after all, so I just removed the XXX comment.\n\n>> +#ifdef EXEC_BACKEND\n>> +\n>> + /*\n>> + * Need to reinitialize the SSL library in the backend, since the context\n>> + * structures contain function pointers and cannot be passed through the\n>> + * parameter file.\n>> + *\n>> + * If for some reason reload fails (maybe the user installed broken key\n>> + * files), soldier on without SSL; that's better than all connections\n>> + * becoming impossible.\n>> + *\n>> + * XXX should we do this in all child processes? For the moment it's\n>> + * enough to do it in backend children. 
XXX good question indeed\n>> + */\n>> +#ifdef USE_SSL\n>> + if (EnableSSL)\n>> + {\n>> + if (secure_initialize(false) == 0)\n>> + LoadedSSL = true;\n>> + else\n>> + ereport(LOG,\n>> + (errmsg(\"SSL configuration could not be loaded in child process\")));\n>> + }\n>> +#endif\n>> +#endif\n> \n> Here you added the \"good question indeed.\" I am not sure what the best\n> answer is either! :)\n\nI just removed the extra XXX comment. It's still a valid question, but \nthis patch just moves it around, we don't need to answer it here.\n\n>> + /* XXX: translation? */\n>> + ereport(LOG,\n>> + (errmsg(\"could not fork %s process: %m\", PostmasterChildName(type))));\n> \n> I assume you are referring to the child name here?\n\nCorrect. Does the process name need to be translated? And this way of \nconstructing sentences is not translation-friendly anyway. In some \nlanguages, the word 'process' might need to be inflected differently \ndepending on the child name, for example.\n\nI put the process name in quotes, and didn't mark the process name for \ntranslation.\n\n>> XXX: We now have functions called AuxiliaryProcessInit() and\n>> InitAuxiliaryProcess(). Confusing.\n> \n> Based on my analysis, the *Init() is called in the Main functions, while\n> Init*() is called before the Main functions. Maybe\n> AuxiliaryProcessInit() could be renamed to AuxiliaryProcessStartup()?\n> Rename the other to AuxiliaryProcessInit().\n\nHmm. There's also BackendStartup() function in postmaster.c, which is \nvery different: it runs in the postmaster process and launches the \nbackend process. So the Startup suffix is not great either.\n\nI renamed AuxiliaryProcessInit() to AuxiliaryProcessMainCommon(). As in \n\"the common parts of the main functions of all the aux processes\".\n\n(We should perhaps merge InitProcess() and InitAuxiliaryProcess() into \none function. There's a lot of duplicated code between them. And the \nparts that differ should perhaps be refactored to be more similar \nanyway. I don't want to take on that refactoring right now though.)\n\nAttached is a new version of the remaining patches.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)",
"msg_date": "Wed, 13 Mar 2024 09:30:27 +0200",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Refactoring backend fork+exec code"
},
{
"msg_contents": "On 13/03/2024 09:30, Heikki Linnakangas wrote:\n> Attached is a new version of the remaining patches.\n\nCommitted, with some final cosmetic cleanups. Thanks everyone!\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Mon, 18 Mar 2024 11:41:41 +0200",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Refactoring backend fork+exec code"
},
{
"msg_contents": "Heikki Linnakangas <[email protected]> writes:\n> Committed, with some final cosmetic cleanups. Thanks everyone!\n\nA couple of buildfarm animals don't like these tests:\n\n\tAssert(child_type >= 0 && child_type < lengthof(child_process_kinds));\n\nfor example\n\n ayu | 2024-03-19 13:08:05 | launch_backend.c:211:39: warning: comparison of constant 16 with expression of type 'BackendType' (aka 'enum BackendType') is always true [-Wtautological-constant-out-of-range-compare]\n ayu | 2024-03-19 13:08:05 | launch_backend.c:233:39: warning: comparison of constant 16 with expression of type 'BackendType' (aka 'enum BackendType') is always true [-Wtautological-constant-out-of-range-compare]\n\nI'm not real sure why it's moaning about the \"<\" check but not the\n\">= 0\" check, which ought to be equally tautological given the\nassumption that the input is a valid member of the enum. But\nin any case, exactly how much value do these assertions carry?\nIf you're intent on keeping them, perhaps casting child_type to\nint here would suppress the warnings. But personally I think\nI'd lose the Asserts.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 20 Mar 2024 01:37:47 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Refactoring backend fork+exec code"
},
{
"msg_contents": "On 20/03/2024 07:37, Tom Lane wrote:\n> A couple of buildfarm animals don't like these tests:\n> \n> \tAssert(child_type >= 0 && child_type < lengthof(child_process_kinds));\n> \n> for example\n> \n> ayu | 2024-03-19 13:08:05 | launch_backend.c:211:39: warning: comparison of constant 16 with expression of type 'BackendType' (aka 'enum BackendType') is always true [-Wtautological-constant-out-of-range-compare]\n> ayu | 2024-03-19 13:08:05 | launch_backend.c:233:39: warning: comparison of constant 16 with expression of type 'BackendType' (aka 'enum BackendType') is always true [-Wtautological-constant-out-of-range-compare]\n> \n> I'm not real sure why it's moaning about the \"<\" check but not the\n> \">= 0\" check, which ought to be equally tautological given the\n> assumption that the input is a valid member of the enum. But\n> in any case, exactly how much value do these assertions carry?\n> If you're intent on keeping them, perhaps casting child_type to\n> int here would suppress the warnings. But personally I think\n> I'd lose the Asserts.\n\nYeah, it's not a very valuable assertion. Removed, thanks!\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Wed, 20 Mar 2024 09:16:13 +0200",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Refactoring backend fork+exec code"
},
{
"msg_contents": "On Wed, 20 Mar 2024 at 08:16, Heikki Linnakangas <[email protected]> wrote:\n> Yeah, it's not a very valuable assertion. Removed, thanks!\n\nHow about we add it as a static assert instead of removing it, like we\nhave for many other similar arrays.",
"msg_date": "Thu, 21 Mar 2024 11:31:17 +0100",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Refactoring backend fork+exec code"
},
{
"msg_contents": "Hello!\n\nMaybe add PGDLLIMPORT to\nextern bool LoadedSSL;\nand\nextern struct ClientSocket *MyClientSocket;\ndefinitions in the src/include/postmaster/postmaster.h ?\n\nWith the best regards,\n\n-- \nAnton A. Melnikov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n",
"msg_date": "Sat, 27 Apr 2024 11:27:01 +0300",
"msg_from": "\"Anton A. Melnikov\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Refactoring backend fork+exec code"
},
{
"msg_contents": "On 27/04/2024 11:27, Anton A. Melnikov wrote:\n> Hello!\n> \n> Maybe add PGDLLIMPORT to\n> extern bool LoadedSSL;\n> and\n> extern struct ClientSocket *MyClientSocket;\n> definitions in the src/include/postmaster/postmaster.h ?\nPeter E noticed and Michael fixed them in commit 768ceeeaa1 already.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Sun, 28 Apr 2024 22:36:33 +0300",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Refactoring backend fork+exec code"
},
{
"msg_contents": "\nOn 28.04.2024 22:36, Heikki Linnakangas wrote:\n> Peter E noticed and Michael fixed them in commit 768ceeeaa1 already.\n\nDidn't check that is already fixed in the current master. Sorry!\nThanks for pointing this out!\n\nWith the best wishes,\n\n-- \nAnton A. Melnikov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n",
"msg_date": "Wed, 1 May 2024 16:32:24 +0300",
"msg_from": "\"Anton A. Melnikov\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Refactoring backend fork+exec code"
},
{
"msg_contents": "On Mon, Mar 18, 2024 at 10:41 PM Heikki Linnakangas <[email protected]> wrote:\n> Committed, with some final cosmetic cleanups. Thanks everyone!\n\nNitpicking from UBSan with EXEC_BACKEND on Linux (line numbers may be\na bit off, from a branch of mine):\n\n../src/backend/postmaster/launch_backend.c:772:2: runtime error: null\npointer passed as argument 2, which is declared to never be null\n==13303==Using libbacktrace symbolizer.\n #0 0x5555564b0202 in save_backend_variables\n../src/backend/postmaster/launch_backend.c:772\n #1 0x5555564b0242 in internal_forkexec\n../src/backend/postmaster/launch_backend.c:311\n #2 0x5555564b0bdd in postmaster_child_launch\n../src/backend/postmaster/launch_backend.c:244\n #3 0x5555564b3121 in StartChildProcess\n../src/backend/postmaster/postmaster.c:3928\n #4 0x5555564b933a in PostmasterMain\n../src/backend/postmaster/postmaster.c:1357\n #5 0x5555562de4ad in main ../src/backend/main/main.c:197\n #6 0x7ffff667ad09 in __libc_start_main\n(/lib/x86_64-linux-gnu/libc.so.6+0x23d09)\n #7 0x555555e34279 in _start\n(/tmp/cirrus-ci-build/build/tmp_install/usr/local/pgsql/bin/postgres+0x8e0279)\n\nThis silences it:\n\n- memcpy(param->startup_data, startup_data, startup_data_len);\n+ if (startup_data_len > 0)\n+ memcpy(param->startup_data, startup_data, startup_data_len);\n\n(I found that out by testing EXEC_BACKEND on CI. I also learned that\nthe Mac and FreeBSD tasks fail with EXEC_BACKEND because of SysV shmem\nbleating. We probably should go and crank up the relevant sysctls in\nthe .cirrus.tasks.yml...)\n\n\n",
"msg_date": "Sat, 18 May 2024 17:24:45 +1200",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Refactoring backend fork+exec code"
},
{
"msg_contents": "While looking into [0], I noticed that main() still only checks for the\n--fork prefix, but IIUC commit aafc05d removed all --fork* options except\nfor --forkchild. I've attached a patch to strengthen the check in main().\nThis is definitely just a nitpick.\n\n[0] https://postgr.es/m/CAKAnmmJkZtZAiSryho%3DgYpbvC7H-HNjEDAh16F3SoC9LPu8rqQ%40mail.gmail.com\n\n-- \nnathan",
"msg_date": "Mon, 17 Jun 2024 13:36:00 -0500",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Refactoring backend fork+exec code"
},
{
"msg_contents": "On 18/05/2024 08:24, Thomas Munro wrote:\n> Nitpicking from UBSan with EXEC_BACKEND on Linux (line numbers may be\n> a bit off, from a branch of mine):\n> \n> ../src/backend/postmaster/launch_backend.c:772:2: runtime error: null\n> pointer passed as argument 2, which is declared to never be null\n> ==13303==Using libbacktrace symbolizer.\n> #0 0x5555564b0202 in save_backend_variables\n> ../src/backend/postmaster/launch_backend.c:772\n> #1 0x5555564b0242 in internal_forkexec\n> ../src/backend/postmaster/launch_backend.c:311\n> #2 0x5555564b0bdd in postmaster_child_launch\n> ../src/backend/postmaster/launch_backend.c:244\n> #3 0x5555564b3121 in StartChildProcess\n> ../src/backend/postmaster/postmaster.c:3928\n> #4 0x5555564b933a in PostmasterMain\n> ../src/backend/postmaster/postmaster.c:1357\n> #5 0x5555562de4ad in main ../src/backend/main/main.c:197\n> #6 0x7ffff667ad09 in __libc_start_main\n> (/lib/x86_64-linux-gnu/libc.so.6+0x23d09)\n> #7 0x555555e34279 in _start\n> (/tmp/cirrus-ci-build/build/tmp_install/usr/local/pgsql/bin/postgres+0x8e0279)\n> \n> This silences it:\n> \n> - memcpy(param->startup_data, startup_data, startup_data_len);\n> + if (startup_data_len > 0)\n> + memcpy(param->startup_data, startup_data, startup_data_len);\n\nFixed, thanks!\n\nOn 17/06/2024 21:36, Nathan Bossart wrote:\n> While looking into [0], I noticed that main() still only checks for the\n> --fork prefix, but IIUC commit aafc05d removed all --fork* options except\n> for --forkchild. I've attached a patch to strengthen the check in main().\n> This is definitely just a nitpick.\n\nFixed this too, thanks!\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Wed, 3 Jul 2024 16:25:18 +0300",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Refactoring backend fork+exec code"
}
] |
[
{
"msg_contents": "Hello,\n\nOver in \"Parallelize correlated subqueries that execute within each\nworker\" [1} Richard Guo found a bug in the current version of my patch\nin that thread. While debugging that issue I've been wondering why\nPath's param_info field seems to be NULL unless there is a LATERAL\nreference even though there may be non-lateral outer params\nreferenced.\n\nConsider the query:\nselect * from pg_description t1 where objoid in\n (select objoid from pg_description t2 where t2.description =\nt1.description);\n\nThe subquery's rel has a baserestrictinfo containing an OpExpr\ncomparing a Var (t2.description) to a Param of type PARAM_EXEC\n(t1.description). But the generated SeqScan path doesn't have its\nparam_info field set, which means PATH_REQ_OUTER returns NULL also\ndespite there being an obvious param referencing a required outer\nrelid. Looking at create_seqscan_path we see that param_info is\ninitialized with:\n\nget_baserel_parampathinfo(root, rel, required_outer)\n\nwhere required_outer is passed in from set_plain_rel_pathlist as\nrel->lateral_relids. And get_baserel_parampathinfo always returns NULL\nif required_outer is empty, so obviously with this query (no lateral\nreference) we're not going to get any ParamPathInfo added to the path\nor the rel.\n\nIs there a reason why we don't track the required relids providing the\nPARAM_EXEC params in this case?\n\nThanks,\nJames Coleman\n\n1: https://www.postgresql.org/message-id/CAMbWs4_evjcMzN8Gw78bHfhfo2FKJThqhEjRJRmoMZx%3DNXcJ7w%40mail.gmail.com\n\n\n",
"msg_date": "Sun, 18 Jun 2023 22:36:24 -0400",
"msg_from": "James Coleman <[email protected]>",
"msg_from_op": true,
"msg_subject": "path->param_info only set for lateral?"
},
{
"msg_contents": "James Coleman <[email protected]> writes:\n> Over in \"Parallelize correlated subqueries that execute within each\n> worker\" [1} Richard Guo found a bug in the current version of my patch\n> in that thread. While debugging that issue I've been wondering why\n> Path's param_info field seems to be NULL unless there is a LATERAL\n> reference even though there may be non-lateral outer params\n> referenced.\n\nPer pathnodes.h:\n\n * \"param_info\", if not NULL, links to a ParamPathInfo that identifies outer\n * relation(s) that provide parameter values to each scan of this path.\n * That means this path can only be joined to those rels by means of nestloop\n * joins with this path on the inside. ...\n\nWe're only interested in this for params that are coming from other\nrelations of the same query level, so that they affect join order and\njoin algorithm choices. Params coming down from outer query levels\nare much like EXTERN params to the planner: they are pseudoconstants\nfor any one execution of the current query level.\n\nThis isn't just LATERAL stuff; it's also intentionally-generated\nnestloop-with-inner-indexscan-cases. But it's not outer-level Params.\nEven though those are also PARAM_EXEC Params, they are fundamentally\ndifferent animals for the planner's purposes.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 18 Jun 2023 22:57:37 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: path->param_info only set for lateral?"
},
{
"msg_contents": "On Sun, Jun 18, 2023 at 10:57 PM Tom Lane <[email protected]> wrote:\n>\n> James Coleman <[email protected]> writes:\n> > Over in \"Parallelize correlated subqueries that execute within each\n> > worker\" [1} Richard Guo found a bug in the current version of my patch\n> > in that thread. While debugging that issue I've been wondering why\n> > Path's param_info field seems to be NULL unless there is a LATERAL\n> > reference even though there may be non-lateral outer params\n> > referenced.\n>\n> Per pathnodes.h:\n>\n> * \"param_info\", if not NULL, links to a ParamPathInfo that identifies outer\n> * relation(s) that provide parameter values to each scan of this path.\n> * That means this path can only be joined to those rels by means of nestloop\n> * joins with this path on the inside. ...\n>\n> We're only interested in this for params that are coming from other\n> relations of the same query level, so that they affect join order and\n> join algorithm choices. Params coming down from outer query levels\n> are much like EXTERN params to the planner: they are pseudoconstants\n> for any one execution of the current query level.\n>\n> This isn't just LATERAL stuff; it's also intentionally-generated\n> nestloop-with-inner-indexscan-cases. But it's not outer-level Params.\n> Even though those are also PARAM_EXEC Params, they are fundamentally\n> different animals for the planner's purposes.\n\nThanks for the explanation.\n\nI wonder if it'd be worth clarifying the comment slightly to hint in\nthat direction (like the attached)?\n\nThanks,\nJames Coleman",
"msg_date": "Tue, 20 Jun 2023 20:55:00 -0400",
"msg_from": "James Coleman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: path->param_info only set for lateral?"
}
] |
[
{
"msg_contents": "Hello.\n\nI'm starting a new thread for $subject per Alvaro's suggestion at [1].\n\nSo the following sql/json things still remain to be done:\n\n* sql/json query functions:\n json_exists()\n json_query()\n json_value()\n\n* other sql/json functions:\n json()\n json_scalar()\n json_serialize()\n\n* finally:\n json_table\n\nAttached is the rebased patch for the 1st part.\n\nIt also addresses Alvaro's review comments on Apr 4, though see my\ncomments below.\n\nOn Tue, Apr 4, 2023 at 9:36 PM Alvaro Herrera <[email protected]> wrote:\n> On 2023-Apr-04, Amit Langote wrote:\n> > On Tue, Apr 4, 2023 at 2:16 AM Alvaro Herrera <[email protected]> wrote:\n> > > - the gram.y solution to the \"ON ERROR/ON EMPTY\" clauses is quite ugly.\n> > > I think we could make that stuff use something similar to\n> > > ConstraintAttributeSpec with an accompanying post-processing function.\n> > > That would reduce the number of ad-hoc hacks, which seem excessive.\n>>\n> > Do you mean the solution involving the JsonBehavior node?\n>\n> Right. It has spilled as the separate on_behavior struct in the core\n> parser %union in addition to the raw jsbehavior, which is something\n> we've gone 30 years without having, and I don't see why we should start\n> now.\n\nI looked into trying to make this look like ConstraintAttributeSpec\nbut came to the conclusion that that's not quite doable in this case.\nA \"behavior\" cannot be represented simply as an integer flag, because\nthere's `DEFAULT a_expr` to fit in, so it's got to be this\nJsonBehavior node. However...\n\n> This stuff is terrible:\n>\n> json_exists_error_clause_opt:\n> json_exists_error_behavior ON ERROR_P { $$ = $1; }\n> | /* EMPTY */ { $$ = NULL; }\n> ;\n>\n> json_exists_error_behavior:\n> ERROR_P { $$ = makeJsonBehavior(JSON_BEHAVIOR_ERROR, NULL); }\n> | TRUE_P { $$ = makeJsonBehavior(JSON_BEHAVIOR_TRUE, NULL); }\n> | FALSE_P { $$ = makeJsonBehavior(JSON_BEHAVIOR_FALSE, NULL); }\n> | UNKNOWN { $$ = makeJsonBehavior(JSON_BEHAVIOR_UNKNOWN, NULL); }\n> ;\n>\n> json_value_behavior:\n> NULL_P { $$ = makeJsonBehavior(JSON_BEHAVIOR_NULL, NULL); }\n> | ERROR_P { $$ = makeJsonBehavior(JSON_BEHAVIOR_ERROR, NULL); }\n> | DEFAULT a_expr { $$ = makeJsonBehavior(JSON_BEHAVIOR_DEFAULT, $2); }\n> ;\n>\n> json_value_on_behavior_clause_opt:\n> json_value_behavior ON EMPTY_P\n> { $$.on_empty = $1; $$.on_error = NULL; }\n> | json_value_behavior ON EMPTY_P json_value_behavior ON ERROR_P\n> { $$.on_empty = $1; $$.on_error = $4; }\n> | json_value_behavior ON ERROR_P\n> { $$.on_empty = NULL; $$.on_error = $1; }\n> | /* EMPTY */\n> { $$.on_empty = NULL; $$.on_error = NULL; }\n> ;\n>\n> json_query_behavior:\n> ERROR_P { $$ = makeJsonBehavior(JSON_BEHAVIOR_ERROR, NULL); }\n> | NULL_P { $$ = makeJsonBehavior(JSON_BEHAVIOR_NULL, NULL); }\n> | EMPTY_P ARRAY { $$ = makeJsonBehavior(JSON_BEHAVIOR_EMPTY_ARRAY, NULL); }\n> /* non-standard, for Oracle compatibility only */\n> | EMPTY_P { $$ = makeJsonBehavior(JSON_BEHAVIOR_EMPTY_ARRAY, NULL); }\n> | EMPTY_P OBJECT_P { $$ = makeJsonBehavior(JSON_BEHAVIOR_EMPTY_OBJECT, NULL); }\n> | DEFAULT a_expr { $$ = makeJsonBehavior(JSON_BEHAVIOR_DEFAULT, $2); }\n> ;\n>\n> json_query_on_behavior_clause_opt:\n> json_query_behavior ON EMPTY_P\n> { $$.on_empty = $1; $$.on_error = NULL; }\n> | json_query_behavior ON EMPTY_P json_query_behavior ON ERROR_P\n> { $$.on_empty = $1; $$.on_error = $4; }\n> | json_query_behavior ON ERROR_P\n> { $$.on_empty = NULL; $$.on_error = $1; }\n> | /* EMPTY */\n> { $$.on_empty = NULL; $$.on_error = NULL; }\n> 
;\n>\n> Surely this can be made cleaner.\n\n...I've managed to reduce the above down to:\n\n MergeWhenClause *mergewhen;\n struct KeyActions *keyactions;\n struct KeyAction *keyaction;\n+ JsonBehavior *jsbehavior;\n...\n+%type <jsbehavior> json_value_behavior\n+ json_query_behavior\n+ json_exists_behavior\n...\n+json_query_behavior:\n+ ERROR_P { $$ = makeJsonBehavior(JSON_BEHAVIOR_ERROR, NULL); }\n+ | NULL_P { $$ = makeJsonBehavior(JSON_BEHAVIOR_NULL, NULL); }\n+ | DEFAULT a_expr { $$ =\nmakeJsonBehavior(JSON_BEHAVIOR_DEFAULT, $2); }\n+ | EMPTY_P ARRAY { $$ =\nmakeJsonBehavior(JSON_BEHAVIOR_EMPTY_ARRAY, NULL); }\n+ | EMPTY_P OBJECT_P { $$ =\nmakeJsonBehavior(JSON_BEHAVIOR_EMPTY_OBJECT, NULL); }\n+ /* non-standard, for Oracle compatibility only */\n+ | EMPTY_P { $$ =\nmakeJsonBehavior(JSON_BEHAVIOR_EMPTY_ARRAY, NULL); }\n+ ;\n+\n+json_exists_behavior:\n+ ERROR_P { $$ = makeJsonBehavior(JSON_BEHAVIOR_ERROR, NULL); }\n+ | TRUE_P { $$ = makeJsonBehavior(JSON_BEHAVIOR_TRUE, NULL); }\n+ | FALSE_P { $$ =\nmakeJsonBehavior(JSON_BEHAVIOR_FALSE, NULL); }\n+ | UNKNOWN { $$ =\nmakeJsonBehavior(JSON_BEHAVIOR_UNKNOWN, NULL); }\n+ ;\n+\n+json_value_behavior:\n+ NULL_P { $$ = makeJsonBehavior(JSON_BEHAVIOR_NULL, NULL); }\n+ | ERROR_P { $$ =\nmakeJsonBehavior(JSON_BEHAVIOR_ERROR, NULL); }\n+ | DEFAULT a_expr { $$ =\nmakeJsonBehavior(JSON_BEHAVIOR_DEFAULT, $2); }\n+ ;\n\nThough, that does mean that there are now more rules for\nfunc_expr_common_subexpr to implement the variations of ON ERROR, ON\nEMPTY clauses for each of JSON_EXISTS, JSON_QUERY, and JSON_VALUE.\n\n> By the way -- that comment about clauses being non-standard, can you\n> spot exactly *which* clauses that comment applies to?\n\nI've moved comment as shown above to make clear that a bare EMPTY_P is\nneeded for Oracle compatibility\n\nOn Tue, Apr 4, 2023 at 2:16 AM Alvaro Herrera <[email protected]> wrote:\n> - the changes in formatting.h have no explanation whatsoever. At the\n> very least, the new function should have a comment in the .c file.\n> (And why is it at end of file? I bet there's a better location)\n\nApparently, the newly exported routine is needed in the JSON-specific\nsubroutine for the planner's contain_mutable_functions_walker(), to\ncheck if a JsonExpr's path_spec contains any timezone-dependent\nconstant. In the attached, I've changed the newly exported function's\nname as follows:\n\ndatetime_format_flags -> datetime_format_has_tz\n\nwhich let me do away with exporting those DCH_* constants in formatting.h.\n\n> - some nasty hacks are being used in the ECPG grammar with no tests at\n> all. It's easy to add a few lines to the .pgc file I added in prior\n> commits.\n\nAh, those ecpg.trailer changes weren't in the original commit that\nadded added SQL/JSON query functions (1a36bc9dba8ea), but came in\n5f0adec2537d, 83f1c7b742e8 to fix some damage caused by the former's\nmaking STRING a keyword. If I don't include the ecpg.trailer bit,\ntest_informix.pgc fails, so I think the change is already covered.\n\n> - Some functions in jsonfuncs.c have changed from throwing hard errors\n> into soft ones. I think this deserves more commentary.\n\nI've merged the delta patch I had posted earlier addressing this [2]\ninto the attached.\n\n> - func.sgml: The new functions are documented in a separate table for no\n> reason that I can see. Needs to be merged into one of the existing\n> tables. 
I didn't actually review the docs.\n\nHmm, so we already have \"SQL/JSON Testing Functions\" that were\ncommitted into v16 in a separate table (Table 9.48) under \"9.16.1.\nProcessing and Creating JSON Data\". So, I don't see a problem with\nadding \"SQL/JSON Query Functions\" in a separate table, though maybe it\nshould not be under the same sub-section. Maybe under \"9.16.2. The\nSQL/JSON Path Language\" is more appropriate?\n\nI'll rebase and post the patches for \"other sql/json functions\" and\n\"json_table\" shortly.\n\n--\nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n[1] https://www.postgresql.org/message-id/20230503181732.26hx5ihbdkmzhlyw%40alvherre.pgsql\n[2] https://www.postgresql.org/message-id/CA%2BHiwqHGghuFpxE%3DpfUFPT%2BZzKvHWSN4BcrWr%3DZRjd4i4qubfQ%40mail.gmail.com",
"msg_date": "Mon, 19 Jun 2023 17:31:57 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": true,
"msg_subject": "remaining sql/json patches"
},
{
"msg_contents": "On Mon, Jun 19, 2023 at 5:31 PM Amit Langote <[email protected]> wrote:\n> So the following sql/json things still remain to be done:\n>\n> * sql/json query functions:\n> json_exists()\n> json_query()\n> json_value()\n>\n> * other sql/json functions:\n> json()\n> json_scalar()\n> json_serialize()\n>\n> * finally:\n> json_table\n>\n> Attached is the rebased patch for the 1st part.\n...\n> I'll rebase and post the patches for \"other sql/json functions\" and\n> \"json_table\" shortly.\n\nAnd here they are.\n\nI realized that the patch for the \"other sql/json functions\" part is\nrelatively straightforward and has no dependence on the \"sql/json\nquery functions\" part getting done first. So I've made that one the\n0001 patch. The patch I posted in the last email is now 0002, though\nit only has changes related to changing the order of the patch, so I\ndecided not to change the patch version marker (v1).\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com",
"msg_date": "Wed, 21 Jun 2023 17:25:32 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "On 21.06.23 10:25, Amit Langote wrote:\n> I realized that the patch for the \"other sql/json functions\" part is\n> relatively straightforward and has no dependence on the \"sql/json\n> query functions\" part getting done first. So I've made that one the\n> 0001 patch. The patch I posted in the last email is now 0002, though\n> it only has changes related to changing the order of the patch, so I\n> decided not to change the patch version marker (v1).\n\n(I suggest you change the version number anyway, next time. There are \nplenty of numbers available.)\n\nThe 0001 patch contains a change to \ndoc/src/sgml/keywords/sql2016-02-reserved.txt, which seems \ninappropriate. The additional keywords are already listed in the 2023 \nfile, and they are not SQL:2016 keywords.\n\nAnother thing, I noticed that the SQL/JSON patches in PG16 introduced \nsome nonstandard indentation in gram.y. I would like to apply the \nattached patch to straighten this out.",
"msg_date": "Fri, 7 Jul 2023 13:30:58 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "On Fri, Jul 7, 2023 at 8:31 PM Peter Eisentraut <[email protected]> wrote:\n> On 21.06.23 10:25, Amit Langote wrote:\n> > I realized that the patch for the \"other sql/json functions\" part is\n> > relatively straightforward and has no dependence on the \"sql/json\n> > query functions\" part getting done first. So I've made that one the\n> > 0001 patch. The patch I posted in the last email is now 0002, though\n> > it only has changes related to changing the order of the patch, so I\n> > decided not to change the patch version marker (v1).\n>\n> (I suggest you change the version number anyway, next time. There are\n> plenty of numbers available.)\n\nWill do. :)\n\n> The 0001 patch contains a change to\n> doc/src/sgml/keywords/sql2016-02-reserved.txt, which seems\n> inappropriate. The additional keywords are already listed in the 2023\n> file, and they are not SQL:2016 keywords.\n\nAh, indeed. Will remove.\n\n> Another thing, I noticed that the SQL/JSON patches in PG16 introduced\n> some nonstandard indentation in gram.y. I would like to apply the\n> attached patch to straighten this out.\n\nSounds fine to me.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 7 Jul 2023 20:59:32 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "On Fri, Jul 7, 2023 at 8:59 PM Amit Langote <[email protected]> wrote:\n> On Fri, Jul 7, 2023 at 8:31 PM Peter Eisentraut <[email protected]> wrote:\n> > On 21.06.23 10:25, Amit Langote wrote:\n> > > I realized that the patch for the \"other sql/json functions\" part is\n> > > relatively straightforward and has no dependence on the \"sql/json\n> > > query functions\" part getting done first. So I've made that one the\n> > > 0001 patch. The patch I posted in the last email is now 0002, though\n> > > it only has changes related to changing the order of the patch, so I\n> > > decided not to change the patch version marker (v1).\n> >\n> > (I suggest you change the version number anyway, next time. There are\n> > plenty of numbers available.)\n>\n> Will do. :)\n\nHere's v2.\n\n0001 and 0002 are new patches for some improvements of the existing code.\n\nIn the main patches (0003~), I've mainly removed a few nonterminals in\nfavor of new rules in the remaining nonterminals, especially in the\nJSON_TABLE patch.\n\nI've also removed additions to sql2016-02-reserved.txt as Peter suggested.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com",
"msg_date": "Fri, 7 Jul 2023 21:19:40 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "Looking at 0001 now.\n\nI noticed that it adds JSON, JSON_SCALAR and JSON_SERIALIZE as reserved\nkeywords to doc/src/sgml/keywords/sql2016-02-reserved.txt; but those\nkeywords do not appear in the 2016 standard as reserved. I see that\nthose keywords appear as reserved in sql2023-02-reserved.txt, so I\nsuppose you're covered as far as that goes; you don't need to patch\nsql2016, and indeed that's the wrong thing to do.\n\nI see that you add json_returning_clause_opt, but we already have\njson_output_clause_opt. Shouldn't these two be one and the same?\nI think the new name is more sensible than the old one, since the\ngoverning keyword is RETURNING; I suppose naming it \"output\" comes from\nthe fact that the standard calls this <JSON output clause>.\n\ntypo \"requeted\"\n\nI'm not in love with the fact that JSON and JSONB have pretty much\nparallel type categorizing functionality. It seems entirely artificial.\nMaybe this didn't matter when these were contained inside each .c file\nand nobody else had to deal with that, but I think it's not good to make\nthis an exported concept. Is it possible to do away with that? I mean,\nreduce both to a single categorization enum, and a single categorization\nAPI. Here you have to cast the enum value to int in order to make\nExecInitExprRec work, and that seems a bit lame; moreso when the\n\"is_jsonb\" is determined separately (cf. ExecEvalJsonConstructor)\n\nIn the 2023 standard, JSON_SCALAR is just\n\n<JSON scalar> ::= JSON_SCALAR <left paren> <value expression> <right paren>\n\nbut we seem to have added a <JSON output format> clause to it. Should\nwe really?\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"Entristecido, Wutra (canción de Las Barreras)\necha a Freyr a rodar\ny a nosotros al mar\"\n\n\n",
"msg_date": "Fri, 7 Jul 2023 14:28:20 +0200",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "Hi Alvaro,\n\nOn Fri, Jul 7, 2023 at 9:28 PM Alvaro Herrera <[email protected]> wrote:\n> Looking at 0001 now.\n\nThanks.\n\n> I noticed that it adds JSON, JSON_SCALAR and JSON_SERIALIZE as reserved\n> keywords to doc/src/sgml/keywords/sql2016-02-reserved.txt; but those\n> keywords do not appear in the 2016 standard as reserved. I see that\n> those keywords appear as reserved in sql2023-02-reserved.txt, so I\n> suppose you're covered as far as that goes; you don't need to patch\n> sql2016, and indeed that's the wrong thing to do.\n\nYeah, fixed that after Peter pointed it out.\n\n> I see that you add json_returning_clause_opt, but we already have\n> json_output_clause_opt. Shouldn't these two be one and the same?\n> I think the new name is more sensible than the old one, since the\n> governing keyword is RETURNING; I suppose naming it \"output\" comes from\n> the fact that the standard calls this <JSON output clause>.\n\nOne difference between the two is that json_output_clause_opt allows\nspecifying the FORMAT clause in addition to the RETURNING type name,\nwhile json_returning_clause_op only allows specifying the type name.\n\nI'm inclined to keep only json_returning_clause_opt as you suggest and\nmake parse_expr.c output an error if the FORMAT clause is specified in\nJSON() and JSON_SCALAR(), so turning the current syntax error on\nspecifying RETURNING ... FORMAT for these functions into a parsing\nerror. Done that way in the attached updated patch and also updated\nthe latter patch that adds JSON_EXISTS() and JSON_VALUE() to have\nsimilar behavior.\n\n> typo \"requeted\"\n\nFixed.\n\n> I'm not in love with the fact that JSON and JSONB have pretty much\n> parallel type categorizing functionality. It seems entirely artificial.\n> Maybe this didn't matter when these were contained inside each .c file\n> and nobody else had to deal with that, but I think it's not good to make\n> this an exported concept. Is it possible to do away with that? I mean,\n> reduce both to a single categorization enum, and a single categorization\n> API. Here you have to cast the enum value to int in order to make\n> ExecInitExprRec work, and that seems a bit lame; moreso when the\n> \"is_jsonb\" is determined separately (cf. ExecEvalJsonConstructor)\n\nOK, I agree that a unified categorizing API might be better. I'll\nlook at making this better. Btw, does src/include/common/jsonapi.h\nlook like an appropriate place for that?\n\n> In the 2023 standard, JSON_SCALAR is just\n>\n> <JSON scalar> ::= JSON_SCALAR <left paren> <value expression> <right paren>\n>\n> but we seem to have added a <JSON output format> clause to it. Should\n> we really?\n\nHmm, I am not seeing <JSON output format> in the rule for JSON_SCALAR,\nwhich looks like this in the current grammar:\n\nfunc_expr_common_subexpr:\n...\n | JSON_SCALAR '(' a_expr json_returning_clause_opt ')'\n {\n JsonScalarExpr *n = makeNode(JsonScalarExpr);\n\n n->expr = (Expr *) $3;\n n->output = (JsonOutput *) $4;\n n->location = @1;\n $$ = (Node *) n;\n }\n...\njson_returning_clause_opt:\n RETURNING Typename\n {\n JsonOutput *n = makeNode(JsonOutput);\n\n n->typeName = $2;\n n->returning = makeNode(JsonReturning);\n n->returning->format =\n makeJsonFormat(JS_FORMAT_DEFAULT, JS_ENC_DEFAULT, @2);\n $$ = (Node *) n;\n }\n | /* EMPTY */ { $$ = NULL; }\n ;\n\nPer what I wrote above, the grammar for JSON() and JSON_SCALAR() does\nnot allow specifying the FORMAT clause. 
Though considering what you\nwrote, the RETURNING clause does appear to be an extension to the\nstandard's spec. I can't find any reasoning in the original\ndiscussion as to how that came about, except an email from Andrew [1]\nsaying that he added it back to the patch.\n\nHere's v3 in the meantime.\n\n--\nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n[1] https://www.postgresql.org/message-id/flat/cd0bb935-0158-78a7-08b5-904886deac4b%40postgrespro.ru",
"msg_date": "Mon, 10 Jul 2023 21:56:27 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "On 2023-Jul-10, Amit Langote wrote:\n\n> > I see that you add json_returning_clause_opt, but we already have\n> > json_output_clause_opt. Shouldn't these two be one and the same?\n> > I think the new name is more sensible than the old one, since the\n> > governing keyword is RETURNING; I suppose naming it \"output\" comes from\n> > the fact that the standard calls this <JSON output clause>.\n> \n> One difference between the two is that json_output_clause_opt allows\n> specifying the FORMAT clause in addition to the RETURNING type name,\n> while json_returning_clause_op only allows specifying the type name.\n> \n> I'm inclined to keep only json_returning_clause_opt as you suggest and\n> make parse_expr.c output an error if the FORMAT clause is specified in\n> JSON() and JSON_SCALAR(), so turning the current syntax error on\n> specifying RETURNING ... FORMAT for these functions into a parsing\n> error. Done that way in the attached updated patch and also updated\n> the latter patch that adds JSON_EXISTS() and JSON_VALUE() to have\n> similar behavior.\n\nYeah, that's reasonable.\n\n> > I'm not in love with the fact that JSON and JSONB have pretty much\n> > parallel type categorizing functionality. It seems entirely artificial.\n> > Maybe this didn't matter when these were contained inside each .c file\n> > and nobody else had to deal with that, but I think it's not good to make\n> > this an exported concept. Is it possible to do away with that? I mean,\n> > reduce both to a single categorization enum, and a single categorization\n> > API. Here you have to cast the enum value to int in order to make\n> > ExecInitExprRec work, and that seems a bit lame; moreso when the\n> > \"is_jsonb\" is determined separately (cf. ExecEvalJsonConstructor)\n> \n> OK, I agree that a unified categorizing API might be better. I'll\n> look at making this better. Btw, does src/include/common/jsonapi.h\n> look like an appropriate place for that?\n\nHmm, that header is frontend-available, and the type-category appears to\nbe backend-only, so maybe no. Perhaps jsonfuncs.h is more apropos?\nexecExpr.c is already dealing with array internals, so having to deal\nwith json internals doesn't seem completely out of place.\n\n\n> > In the 2023 standard, JSON_SCALAR is just\n> >\n> > <JSON scalar> ::= JSON_SCALAR <left paren> <value expression> <right paren>\n> >\n> > but we seem to have added a <JSON output format> clause to it. Should\n> > we really?\n> \n> Hmm, I am not seeing <JSON output format> in the rule for JSON_SCALAR,\n\nAgh, yeah, I confused myself, sorry.\n\n> Per what I wrote above, the grammar for JSON() and JSON_SCALAR() does\n> not allow specifying the FORMAT clause. Though considering what you\n> wrote, the RETURNING clause does appear to be an extension to the\n> standard's spec.\n\nHmm, I see that <JSON output clause> (which is RETURNING plus optional\nFORMAT) appears included in JSON_OBJECT, JSON_ARRAY, JSON_QUERY,\nJSON_SERIALIZE, JSON_OBJECTAGG, JSON_ARRAYAGG. It's not necessarily a\nbad thing to have it in other places, but we should consider it\ncarefully. Do we really want/need it in JSON() and JSON_SCALAR()?\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Mon, 10 Jul 2023 16:47:12 +0200",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "I forgot to add:\n\n* 0001 looks an obvious improvement. You could just push it now, to\navoid carrying it forward anymore. I would just put the constructName\nahead of value expr in the argument list, though.\n\n* 0002: I have no idea what this is (though I probably should). I would\nalso push it right away -- if anything, so that we figure out sooner\nthat it was actually needed in the first place. Or maybe you just need\nthe right test cases?\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Mon, 10 Jul 2023 16:51:58 +0200",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "On Mon, Jul 10, 2023 at 11:52 PM Alvaro Herrera <[email protected]> wrote:\n> I forgot to add:\n\nThanks for the review of these.\n\n> * 0001 looks an obvious improvement. You could just push it now, to\n> avoid carrying it forward anymore. I would just put the constructName\n> ahead of value expr in the argument list, though.\n\nSure, that makes sense.\n\n> * 0002: I have no idea what this is (though I probably should). I would\n> also push it right away -- if anything, so that we figure out sooner\n> that it was actually needed in the first place. Or maybe you just need\n> the right test cases?\n\nHmm, I don't think having or not having CaseTestExpr makes a\ndifference to the result of evaluating JsonValueExpr.format_expr, so\nthere are no test cases to prove one way or the other.\n\nAfter staring at this again for a while, I think I figured out why the\nCaseTestExpr might have been put there in the first place. It seems\nto have to do with the fact that JsonValueExpr.raw_expr is currently\nevaluated independently of JsonValueExpr.formatted_expr and the\nCaseTestExpr propagates the result of the former to the evaluation of\nthe latter. Actually, formatted_expr is effectively\nformatting_function(<result-of-raw_expr>), so if we put raw_expr\nitself into formatted_expr such that it is evaluated as part of\nevaluating formatted_expr, then there is no need for the CaseTestExpr\nas the propagator for raw_expr's result.\n\nI've expanded the commit message to mention the details.\n\nI'll push these tomorrow.\n\nOn Mon, Jul 10, 2023 at 11:47 PM Alvaro Herrera <[email protected]> wrote:\n> On 2023-Jul-10, Amit Langote wrote:\n> > > I'm not in love with the fact that JSON and JSONB have pretty much\n> > > parallel type categorizing functionality. It seems entirely artificial.\n> > > Maybe this didn't matter when these were contained inside each .c file\n> > > and nobody else had to deal with that, but I think it's not good to make\n> > > this an exported concept. Is it possible to do away with that? I mean,\n> > > reduce both to a single categorization enum, and a single categorization\n> > > API. Here you have to cast the enum value to int in order to make\n> > > ExecInitExprRec work, and that seems a bit lame; moreso when the\n> > > \"is_jsonb\" is determined separately (cf. ExecEvalJsonConstructor)\n> >\n> > OK, I agree that a unified categorizing API might be better. I'll\n> > look at making this better. Btw, does src/include/common/jsonapi.h\n> > look like an appropriate place for that?\n>\n> Hmm, that header is frontend-available, and the type-category appears to\n> be backend-only, so maybe no. Perhaps jsonfuncs.h is more apropos?\n> execExpr.c is already dealing with array internals, so having to deal\n> with json internals doesn't seem completely out of place.\n\nOK, attached 0003 does it like that. Essentially, I decided to only\nkeep JsonTypeCategory and json_categorize_type(), with some\nmodifications to accommodate the callers in jsonb.c.\n\n> > > In the 2023 standard, JSON_SCALAR is just\n> > >\n> > > <JSON scalar> ::= JSON_SCALAR <left paren> <value expression> <right paren>\n> > >\n> > > but we seem to have added a <JSON output format> clause to it. Should\n> > > we really?\n> >\n> > Hmm, I am not seeing <JSON output format> in the rule for JSON_SCALAR,\n>\n> Agh, yeah, I confused myself, sorry.\n>\n> > Per what I wrote above, the grammar for JSON() and JSON_SCALAR() does\n> > not allow specifying the FORMAT clause. 
Though considering what you\n> > wrote, the RETURNING clause does appear to be an extension to the\n> > standard's spec.\n>\n> Hmm, I see that <JSON output clause> (which is RETURNING plus optional\n> FORMAT) appears included in JSON_OBJECT, JSON_ARRAY, JSON_QUERY,\n> JSON_SERIALIZE, JSON_OBJECTAGG, JSON_ARRAYAGG. It's not necessarily a\n> bad thing to have it in other places, but we should consider it\n> carefully. Do we really want/need it in JSON() and JSON_SCALAR()?\n\nI thought that removing that support breaks JSON_TABLE() or something\nbut it doesn't, so maybe we can do without the extension if there's no\nparticular reason it's there in the first place. Maybe Andrew (cc'd)\nremembers why he decided in [1] to (re-) add the RETURNING clause to\nJSON() and JSON_SCALAR()?\n\nUpdated patches, with 0003 being a new refactoring patch, are\nattached. Patches 0004~ contain a few updates around JsonValueExpr.\nSpecifically, I removed the case for T_JsonValueExpr in\ntransformExprRecurse(), because I realized that JsonValueExpr\nexpressions never appear embedded in other expressions. That allowed\nme to get rid of some needless refactoring around\ntransformJsonValueExpr() in the patch that adds JSON_VALUE() etc.\n\n--\nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n[1] https://www.postgresql.org/message-id/1d44d832-4ea9-1ec9-81e9-bc6b2bd8cc43%40dunslane.net",
"msg_date": "Wed, 12 Jul 2023 18:41:19 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "On Wed, Jul 12, 2023 at 6:41 PM Amit Langote <[email protected]> wrote:\n> On Mon, Jul 10, 2023 at 11:52 PM Alvaro Herrera <[email protected]> wrote:\n> > I forgot to add:\n>\n> Thanks for the review of these.\n>\n> > * 0001 looks an obvious improvement. You could just push it now, to\n> > avoid carrying it forward anymore. I would just put the constructName\n> > ahead of value expr in the argument list, though.\n>\n> Sure, that makes sense.\n>\n> > * 0002: I have no idea what this is (though I probably should). I would\n> > also push it right away -- if anything, so that we figure out sooner\n> > that it was actually needed in the first place. Or maybe you just need\n> > the right test cases?\n>\n> Hmm, I don't think having or not having CaseTestExpr makes a\n> difference to the result of evaluating JsonValueExpr.format_expr, so\n> there are no test cases to prove one way or the other.\n>\n> After staring at this again for a while, I think I figured out why the\n> CaseTestExpr might have been put there in the first place. It seems\n> to have to do with the fact that JsonValueExpr.raw_expr is currently\n> evaluated independently of JsonValueExpr.formatted_expr and the\n> CaseTestExpr propagates the result of the former to the evaluation of\n> the latter. Actually, formatted_expr is effectively\n> formatting_function(<result-of-raw_expr>), so if we put raw_expr\n> itself into formatted_expr such that it is evaluated as part of\n> evaluating formatted_expr, then there is no need for the CaseTestExpr\n> as the propagator for raw_expr's result.\n>\n> I've expanded the commit message to mention the details.\n>\n> I'll push these tomorrow.\n\nI updated it to make the code in makeJsonConstructorExpr() that *does*\nneed to use a CaseTestExpr a bit more readable. Also, updated the\ncomment above CaseTestExpr to mention this instance of its usage.\n\n> On Mon, Jul 10, 2023 at 11:47 PM Alvaro Herrera <[email protected]> wrote:\n> > On 2023-Jul-10, Amit Langote wrote:\n> > > > I'm not in love with the fact that JSON and JSONB have pretty much\n> > > > parallel type categorizing functionality. It seems entirely artificial.\n> > > > Maybe this didn't matter when these were contained inside each .c file\n> > > > and nobody else had to deal with that, but I think it's not good to make\n> > > > this an exported concept. Is it possible to do away with that? I mean,\n> > > > reduce both to a single categorization enum, and a single categorization\n> > > > API. Here you have to cast the enum value to int in order to make\n> > > > ExecInitExprRec work, and that seems a bit lame; moreso when the\n> > > > \"is_jsonb\" is determined separately (cf. ExecEvalJsonConstructor)\n> > >\n> > > OK, I agree that a unified categorizing API might be better. I'll\n> > > look at making this better. Btw, does src/include/common/jsonapi.h\n> > > look like an appropriate place for that?\n> >\n> > Hmm, that header is frontend-available, and the type-category appears to\n> > be backend-only, so maybe no. Perhaps jsonfuncs.h is more apropos?\n> > execExpr.c is already dealing with array internals, so having to deal\n> > with json internals doesn't seem completely out of place.\n>\n> OK, attached 0003 does it like that. 
Essentially, I decided to only\n> keep JsonTypeCategory and json_categorize_type(), with some\n> modifications to accommodate the callers in jsonb.c.\n>\n> > > > In the 2023 standard, JSON_SCALAR is just\n> > > >\n> > > > <JSON scalar> ::= JSON_SCALAR <left paren> <value expression> <right paren>\n> > > >\n> > > > but we seem to have added a <JSON output format> clause to it. Should\n> > > > we really?\n> > >\n> > > Hmm, I am not seeing <JSON output format> in the rule for JSON_SCALAR,\n> >\n> > Agh, yeah, I confused myself, sorry.\n> >\n> > > Per what I wrote above, the grammar for JSON() and JSON_SCALAR() does\n> > > not allow specifying the FORMAT clause. Though considering what you\n> > > wrote, the RETURNING clause does appear to be an extension to the\n> > > standard's spec.\n> >\n> > Hmm, I see that <JSON output clause> (which is RETURNING plus optional\n> > FORMAT) appears included in JSON_OBJECT, JSON_ARRAY, JSON_QUERY,\n> > JSON_SERIALIZE, JSON_OBJECTAGG, JSON_ARRAYAGG. It's not necessarily a\n> > bad thing to have it in other places, but we should consider it\n> > carefully. Do we really want/need it in JSON() and JSON_SCALAR()?\n>\n> I thought that removing that support breaks JSON_TABLE() or something\n> but it doesn't, so maybe we can do without the extension if there's no\n> particular reason it's there in the first place. Maybe Andrew (cc'd)\n> remembers why he decided in [1] to (re-) add the RETURNING clause to\n> JSON() and JSON_SCALAR()?\n>\n> Updated patches, with 0003 being a new refactoring patch, are\n> attached. Patches 0004~ contain a few updates around JsonValueExpr.\n> Specifically, I removed the case for T_JsonValueExpr in\n> transformExprRecurse(), because I realized that JsonValueExpr\n> expressions never appear embedded in other expressions. That allowed\n> me to get rid of some needless refactoring around\n> transformJsonValueExpr() in the patch that adds JSON_VALUE() etc.\n\nI noticed that 0003 was giving some warnings, which is fixed in the\nattached updated set of patches.\n\n--\nThanks, Amit Langote\nEDB: http://www.enterprisedb.com",
"msg_date": "Wed, 12 Jul 2023 22:23:42 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "On Wed, Jul 12, 2023 at 10:23 PM Amit Langote <[email protected]> wrote:\n> On Wed, Jul 12, 2023 at 6:41 PM Amit Langote <[email protected]> wrote:\n> > On Mon, Jul 10, 2023 at 11:52 PM Alvaro Herrera <[email protected]> wrote:\n> > > I forgot to add:\n> >\n> > Thanks for the review of these.\n> >\n> > > * 0001 looks an obvious improvement. You could just push it now, to\n> > > avoid carrying it forward anymore. I would just put the constructName\n> > > ahead of value expr in the argument list, though.\n> >\n> > Sure, that makes sense.\n> >\n> > > * 0002: I have no idea what this is (though I probably should). I would\n> > > also push it right away -- if anything, so that we figure out sooner\n> > > that it was actually needed in the first place. Or maybe you just need\n> > > the right test cases?\n> >\n> > Hmm, I don't think having or not having CaseTestExpr makes a\n> > difference to the result of evaluating JsonValueExpr.format_expr, so\n> > there are no test cases to prove one way or the other.\n> >\n> > After staring at this again for a while, I think I figured out why the\n> > CaseTestExpr might have been put there in the first place. It seems\n> > to have to do with the fact that JsonValueExpr.raw_expr is currently\n> > evaluated independently of JsonValueExpr.formatted_expr and the\n> > CaseTestExpr propagates the result of the former to the evaluation of\n> > the latter. Actually, formatted_expr is effectively\n> > formatting_function(<result-of-raw_expr>), so if we put raw_expr\n> > itself into formatted_expr such that it is evaluated as part of\n> > evaluating formatted_expr, then there is no need for the CaseTestExpr\n> > as the propagator for raw_expr's result.\n> >\n> > I've expanded the commit message to mention the details.\n> >\n> > I'll push these tomorrow.\n>\n> I updated it to make the code in makeJsonConstructorExpr() that *does*\n> need to use a CaseTestExpr a bit more readable. Also, updated the\n> comment above CaseTestExpr to mention this instance of its usage.\n\nPushed these two just now.\n\n> > On Mon, Jul 10, 2023 at 11:47 PM Alvaro Herrera <[email protected]> wrote:\n> > > On 2023-Jul-10, Amit Langote wrote:\n> > > > > I'm not in love with the fact that JSON and JSONB have pretty much\n> > > > > parallel type categorizing functionality. It seems entirely artificial.\n> > > > > Maybe this didn't matter when these were contained inside each .c file\n> > > > > and nobody else had to deal with that, but I think it's not good to make\n> > > > > this an exported concept. Is it possible to do away with that? I mean,\n> > > > > reduce both to a single categorization enum, and a single categorization\n> > > > > API. Here you have to cast the enum value to int in order to make\n> > > > > ExecInitExprRec work, and that seems a bit lame; moreso when the\n> > > > > \"is_jsonb\" is determined separately (cf. ExecEvalJsonConstructor)\n> > > >\n> > > > OK, I agree that a unified categorizing API might be better. I'll\n> > > > look at making this better. Btw, does src/include/common/jsonapi.h\n> > > > look like an appropriate place for that?\n> > >\n> > > Hmm, that header is frontend-available, and the type-category appears to\n> > > be backend-only, so maybe no. Perhaps jsonfuncs.h is more apropos?\n> > > execExpr.c is already dealing with array internals, so having to deal\n> > > with json internals doesn't seem completely out of place.\n> >\n> > OK, attached 0003 does it like that. 
Essentially, I decided to only\n> > keep JsonTypeCategory and json_categorize_type(), with some\n> > modifications to accommodate the callers in jsonb.c.\n> >\n> > > > > In the 2023 standard, JSON_SCALAR is just\n> > > > >\n> > > > > <JSON scalar> ::= JSON_SCALAR <left paren> <value expression> <right paren>\n> > > > >\n> > > > > but we seem to have added a <JSON output format> clause to it. Should\n> > > > > we really?\n> > > >\n> > > > Hmm, I am not seeing <JSON output format> in the rule for JSON_SCALAR,\n> > >\n> > > Agh, yeah, I confused myself, sorry.\n> > >\n> > > > Per what I wrote above, the grammar for JSON() and JSON_SCALAR() does\n> > > > not allow specifying the FORMAT clause. Though considering what you\n> > > > wrote, the RETURNING clause does appear to be an extension to the\n> > > > standard's spec.\n> > >\n> > > Hmm, I see that <JSON output clause> (which is RETURNING plus optional\n> > > FORMAT) appears included in JSON_OBJECT, JSON_ARRAY, JSON_QUERY,\n> > > JSON_SERIALIZE, JSON_OBJECTAGG, JSON_ARRAYAGG. It's not necessarily a\n> > > bad thing to have it in other places, but we should consider it\n> > > carefully. Do we really want/need it in JSON() and JSON_SCALAR()?\n> >\n> > I thought that removing that support breaks JSON_TABLE() or something\n> > but it doesn't, so maybe we can do without the extension if there's no\n> > particular reason it's there in the first place. Maybe Andrew (cc'd)\n> > remembers why he decided in [1] to (re-) add the RETURNING clause to\n> > JSON() and JSON_SCALAR()?\n> >\n> > Updated patches, with 0003 being a new refactoring patch, are\n> > attached. Patches 0004~ contain a few updates around JsonValueExpr.\n> > Specifically, I removed the case for T_JsonValueExpr in\n> > transformExprRecurse(), because I realized that JsonValueExpr\n> > expressions never appear embedded in other expressions. That allowed\n> > me to get rid of some needless refactoring around\n> > transformJsonValueExpr() in the patch that adds JSON_VALUE() etc.\n>\n> I noticed that 0003 was giving some warnings, which is fixed in the\n> attached updated set of patches.\n\nHere are the remaining patches, rebased. I'll remove the RETURNING\nclause from JSON() and JSON_SCALAR() in the next version that I will\npost tomorrow unless I hear objections.\n\n--\nThanks, Amit Langote\nEDB: http://www.enterprisedb.com",
"msg_date": "Thu, 13 Jul 2023 12:47:27 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "I looked at your 0001. My 0001 are some trivial comment cleanups to\nthat.\n\nI scrolled through all of jsonfuncs.c to see if there was a better place\nfor the new function than the end of the file. Man, is that one ugly\nfile. There are almost no comments! I almost wish you would create a\nnew file so that you don't have to put this new function in such bad\ncompany. But maybe it'll improve someday, so ... whatever.\n\nIn the original code, the functions here being (re)moved do not need to\nreturn a type output function in a few cases. This works okay when the\nfunctions are each contained in a single file (because each function\nknows that the respective datum_to_json/datum_to_jsonb user of the\nreturned values won't need the function OID in those other cases); but\nas an exported function, that strange API doesn't seem great. (It only\nworks for 0002 because the only thing that the executor does with these\ncached values is call datum_to_json/b). That seems easy to solve, since\nwe can return the hardcoded output function OID in those cases anyway.\nA possible complaint about this is that the OID so returned would be\nuntested code, so they might be wrong and we'd never know. However,\nISTM it's better to make a promise about always returning a function OID\nand later fixing any bogus function OID if we ever discover that we\nreturn one, rather than having to document in the function's comment\nthat \"we only return function OIDs in such and such cases\". So I made\nthis change my 0002.\n\nA similar complain can be made about which casts we look for. Right\nnow, only an explicit cast to JSON is useful, so that's the only thing\nwe do. But maybe one day a cast to JSONB would become useful if there's\nno cast to JSON for some datatype (in the is_jsonb case only?); and\nmaybe another type of cast would be useful. However, that seems like\ngoing too much into uncharted territory with no useful use case, so\nlet's just not go there for now. Maybe in the future we can improve\nthis aspect of it, if need arises.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/",
"msg_date": "Thu, 13 Jul 2023 18:54:13 +0200",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "Hi Alvaro,\n\nOn Fri, Jul 14, 2023 at 1:54 AM Alvaro Herrera <[email protected]> wrote:\n> I looked at your 0001. My 0001 are some trivial comment cleanups to\n> that.\n\nThanks.\n\n> I scrolled through all of jsonfuncs.c to see if there was a better place\n> for the new function than the end of the file. Man, is that one ugly\n> file. There are almost no comments! I almost wish you would create a\n> new file so that you don't have to put this new function in such bad\n> company. But maybe it'll improve someday, so ... whatever.\n\nI tried to put it somewhere that is not the end of the file, though\nanywhere would have looked arbitrary anyway for the reasons you\nmention, so I didn't after all.\n\n> In the original code, the functions here being (re)moved do not need to\n> return a type output function in a few cases. This works okay when the\n> functions are each contained in a single file (because each function\n> knows that the respective datum_to_json/datum_to_jsonb user of the\n> returned values won't need the function OID in those other cases); but\n> as an exported function, that strange API doesn't seem great. (It only\n> works for 0002 because the only thing that the executor does with these\n> cached values is call datum_to_json/b).\n\nAgreed about not tying the new API too closely to datum_to_json[b]'s needs.\n\n> That seems easy to solve, since\n> we can return the hardcoded output function OID in those cases anyway.\n> A possible complaint about this is that the OID so returned would be\n> untested code, so they might be wrong and we'd never know. However,\n> ISTM it's better to make a promise about always returning a function OID\n> and later fixing any bogus function OID if we ever discover that we\n> return one, rather than having to document in the function's comment\n> that \"we only return function OIDs in such and such cases\". So I made\n> this change my 0002.\n\n+1\n\n> A similar complaint can be made about which casts we look for. Right\n> now, only an explicit cast to JSON is useful, so that's the only thing\n> we do. But maybe one day a cast to JSONB would become useful if there's\n> no cast to JSON for some datatype (in the is_jsonb case only?); and\n> maybe another type of cast would be useful. However, that seems like\n> going too much into uncharted territory with no useful use case, so\n> let's just not go there for now. Maybe in the future we can improve\n> this aspect of it, if need arises.\n\nHmm, yes, the note in the nearby comment stresses \"to json (not to\njsonb)\", though the (historical) reason why is not so clear to me.\nI'm inclined to leave that as-is.\n\nI've merged your deltas in the attached 0001 and rebased the other\npatches. In 0002, I have now removed RETURNING support for JSON() and\nJSON_SCALAR().\n\n\n--\nThanks, Amit Langote\nEDB: http://www.enterprisedb.com",
"msg_date": "Fri, 14 Jul 2023 16:13:17 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "hi.\nseems there is no explanation about, json_api_common_syntax in\nfunctions-json.html\n\nI can get json_query full synopsis from functions-json.html as follows:\njson_query ( context_item, path_expression [ PASSING { value AS\nvarname } [, ...]] [ RETURNING data_type [ FORMAT JSON [ ENCODING UTF8\n] ] ] [ { WITHOUT | WITH { CONDITIONAL | [UNCONDITIONAL] } } [ ARRAY ]\nWRAPPER ] [ { KEEP | OMIT } QUOTES [ ON SCALAR STRING ] ] [ { ERROR |\nNULL | EMPTY { [ ARRAY ] | OBJECT } | DEFAULT expression } ON EMPTY ]\n[ { ERROR | NULL | EMPTY { [ ARRAY ] | OBJECT } | DEFAULT expression }\nON ERROR ])\n\nseems doesn't have a full synopsis for json_table? only partial one\nby one explanation.\n\n\n",
"msg_date": "Mon, 17 Jul 2023 13:00:00 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "Op 7/17/23 om 07:00 schreef jian he:\n> hi.\n> seems there is no explanation about, json_api_common_syntax in\n> functions-json.html\n> \n> I can get json_query full synopsis from functions-json.html as follows:\n> json_query ( context_item, path_expression [ PASSING { value AS\n> varname } [, ...]] [ RETURNING data_type [ FORMAT JSON [ ENCODING UTF8\n> ] ] ] [ { WITHOUT | WITH { CONDITIONAL | [UNCONDITIONAL] } } [ ARRAY ]\n> WRAPPER ] [ { KEEP | OMIT } QUOTES [ ON SCALAR STRING ] ] [ { ERROR |\n> NULL | EMPTY { [ ARRAY ] | OBJECT } | DEFAULT expression } ON EMPTY ]\n> [ { ERROR | NULL | EMPTY { [ ARRAY ] | OBJECT } | DEFAULT expression }\n> ON ERROR ])\n> \n> seems doesn't have a full synopsis for json_table? only partial one\n> by one explanation.\n> \n\nFWIW, Re: json_api_common_syntax\n\nAn (old) pdf that I have (ISO/IEC TR 19075-6 First edition 2017-03)\ncontains the below specification. It's probably the source of the \nparticular term. It's easy to see how it maps onto the current v7 \nSQL/JSON implementation. (I don't know if it has changed in later \nincarnations.)\n\n\n------ 8< ------------\n5.2 JSON API common syntax\n\nThe SQL/JSON query functions all need a path specification, the JSON \nvalue to be input to that path specification for querying and \nprocessing, and optional parameter values passed to the path \nspecification. They use a common syntax:\n\n<JSON API common syntax> ::=\n <JSON context item> <comma> <JSON path specification>\n [ AS <JSON table path name> ]\n [ <JSON passing clause> ]\n\n<JSON context item> ::=\n <JSON value expression>\n\n<JSON path specification> ::=\n <character string literal>\n\n<JSON passing clause> ::=\n PASSING <JSON argument> [ { <comma> <JSON argument> } ]\n\n<JSON argument> ::=\n <JSON value expression> AS <identifier>\n\n------ 8< ------------\n\nAnd yes, we might need a readable translation of that in the docs \nalthough it might be easier to just get get rid of the term \n'json_api_common_syntax'.\n\nHTH,\n\nErik Rijkers\n\n\n",
"msg_date": "Mon, 17 Jul 2023 09:15:34 +0200",
"msg_from": "Erik Rijkers <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "Hi,\n\nOn Mon, Jul 17, 2023 at 4:14 PM Erik Rijkers <[email protected]> wrote:\n> Op 7/17/23 om 07:00 schreef jian he:\n> > hi.\n> > seems there is no explanation about, json_api_common_syntax in\n> > functions-json.html\n> >\n> > I can get json_query full synopsis from functions-json.html as follows:\n> > json_query ( context_item, path_expression [ PASSING { value AS\n> > varname } [, ...]] [ RETURNING data_type [ FORMAT JSON [ ENCODING UTF8\n> > ] ] ] [ { WITHOUT | WITH { CONDITIONAL | [UNCONDITIONAL] } } [ ARRAY ]\n> > WRAPPER ] [ { KEEP | OMIT } QUOTES [ ON SCALAR STRING ] ] [ { ERROR |\n> > NULL | EMPTY { [ ARRAY ] | OBJECT } | DEFAULT expression } ON EMPTY ]\n> > [ { ERROR | NULL | EMPTY { [ ARRAY ] | OBJECT } | DEFAULT expression }\n> > ON ERROR ])\n> >\n> > seems doesn't have a full synopsis for json_table? only partial one\n> > by one explanation.\n\nI looked through the history of the docs portion of the patch and it\nlooks like the synopsis for JSON_TABLE(...) used to be there but was\ntaken out during one of the doc reworks [1].\n\nI've added it back in the patch as I agree that it would help to have\nit. Though, I am not totally sure where I've put it is the right\nplace for it. JSON_TABLE() is a beast that won't fit into the table\nthat JSON_QUERY() et al are in, so maybe that's how it will have to\nbe? I have no better idea.\n\n> FWIW, Re: json_api_common_syntax\n...\n> An (old) pdf that I have (ISO/IEC TR 19075-6 First edition 2017-03)\n> contains the below specification. It's probably the source of the\n> particular term. It's easy to see how it maps onto the current v7\n> SQL/JSON implementation. (I don't know if it has changed in later\n> incarnations.)\n>\n>\n> ------ 8< ------------\n> 5.2 JSON API common syntax\n>\n> The SQL/JSON query functions all need a path specification, the JSON\n> value to be input to that path specification for querying and\n> processing, and optional parameter values passed to the path\n> specification. They use a common syntax:\n>\n> <JSON API common syntax> ::=\n> <JSON context item> <comma> <JSON path specification>\n> [ AS <JSON table path name> ]\n> [ <JSON passing clause> ]\n>\n> <JSON context item> ::=\n> <JSON value expression>\n>\n> <JSON path specification> ::=\n> <character string literal>\n>\n> <JSON passing clause> ::=\n> PASSING <JSON argument> [ { <comma> <JSON argument> } ]\n>\n> <JSON argument> ::=\n> <JSON value expression> AS <identifier>\n>\n> ------ 8< ------------\n>\n> And yes, we might need a readable translation of that in the docs\n> although it might be easier to just get get rid of the term\n> 'json_api_common_syntax'.\n\nI found a patch proposed by Andrew Dunstan in the v15 dev cycle to get\nrid of the term in the JSON_TABLE docs that Erik seemed to agree with\n[2], so I've applied it.\n\nAttached updated patches. In 0002, I removed the mention of the\nRETURNING clause in the JSON(), JSON_SCALAR() documentation, which I\nhad forgotten to do in the last version which removed its support in\ncode.\n\nI think 0001 looks ready to go. 
Alvaro?\n\nAlso, I've been wondering if it isn't too late to apply the following\nto v16 too, so as to make the code look similar in both branches:\n\nb6e1157e7d Don't include CaseTestExpr in JsonValueExpr.formatted_expr\n785480c953 Pass constructName to transformJsonValueExpr()\n\n--\nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n[1] https://www.postgresql.org/message-id/044204fa-738d-d89a-0e81-1c04696ba676%40dunslane.net\n[2] https://www.postgresql.org/message-id/10c997db-9270-bdd5-04d5-0ffc1eefcdb7%40dunslane.net",
"msg_date": "Tue, 18 Jul 2023 18:11:06 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "On 2023-Jul-18, Amit Langote wrote:\n\n> Attached updated patches. In 0002, I removed the mention of the\n> RETURNING clause in the JSON(), JSON_SCALAR() documentation, which I\n> had forgotten to do in the last version which removed its support in\n> code.\n\n> I think 0001 looks ready to go. Alvaro?\n\nIt looks reasonable to me.\n\n> Also, I've been wondering if it isn't too late to apply the following\n> to v16 too, so as to make the code look similar in both branches:\n\nHmm.\n\n> 785480c953 Pass constructName to transformJsonValueExpr()\n\nI think 785480c953 can easily be considered a bugfix on 7081ac46ace8, so\nI agree it's better to apply it to 16.\n\n> b6e1157e7d Don't include CaseTestExpr in JsonValueExpr.formatted_expr\n\nI feel a bit uneasy about this one. It seems to assume that\nformatted_expr is always set, but at the same time it's not obvious that\nit is. (Maybe this aspect just needs some more commentary). I agree\nthat it would be better to make both branches identical, because if\nthere's a problem, we are better equipped to get a fix done to both.\n\nAs for the removal of makeCaseTestExpr(), I agree -- of the six callers\nof makeNode(CastTestExpr), only two of them would be able to use the new\nfunction, so it doesn't look of general enough usefulness.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\nY una voz del caos me habló y me dijo\n\"Sonríe y sé feliz, podría ser peor\".\nY sonreí. Y fui feliz.\nY fue peor.\n\n\n",
"msg_date": "Tue, 18 Jul 2023 17:53:13 +0200",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "On Wed, Jul 19, 2023 at 12:53 AM Alvaro Herrera <[email protected]> wrote:\n> On 2023-Jul-18, Amit Langote wrote:\n>\n> > Attached updated patches. In 0002, I removed the mention of the\n> > RETURNING clause in the JSON(), JSON_SCALAR() documentation, which I\n> > had forgotten to do in the last version which removed its support in\n> > code.\n>\n> > I think 0001 looks ready to go. Alvaro?\n>\n> It looks reasonable to me.\n\nThanks for taking another look.\n\nI will push this tomorrow.\n\n> > Also, I've been wondering if it isn't too late to apply the following\n> > to v16 too, so as to make the code look similar in both branches:\n>\n> Hmm.\n>\n> > 785480c953 Pass constructName to transformJsonValueExpr()\n>\n> I think 785480c953 can easily be considered a bugfix on 7081ac46ace8, so\n> I agree it's better to apply it to 16.\n\nOK.\n\n> > b6e1157e7d Don't include CaseTestExpr in JsonValueExpr.formatted_expr\n>\n> I feel a bit uneasy about this one. It seems to assume that\n> formatted_expr is always set, but at the same time it's not obvious that\n> it is. (Maybe this aspect just needs some more commentary).\n\nHmm, I agree that the comments about formatted_expr could be improved\nfurther, for which I propose the attached. Actually, staring some\nmore at this, I'm inclined to change makeJsonValueExpr() to allow\ncallers to pass it the finished 'formatted_expr' rather than set it by\nthemselves.\n\n> I agree\n> that it would be better to make both branches identical, because if\n> there's a problem, we are better equipped to get a fix done to both.\n>\n> As for the removal of makeCaseTestExpr(), I agree -- of the six callers\n> of makeNode(CastTestExpr), only two of them would be able to use the new\n> function, so it doesn't look of general enough usefulness.\n\nOK, so you agree with back-patching this one too, though perhaps only\nafter applying something like the aforementioned patch. Just to be\nsure, would the good practice in this case be to squash the fixup\npatch into b6e1157e7d before back-patching?\n\n\n--\nThanks, Amit Langote\nEDB: http://www.enterprisedb.com",
"msg_date": "Wed, 19 Jul 2023 17:17:21 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "On Wed, Jul 19, 2023 at 5:17 PM Amit Langote <[email protected]> wrote:\n> On Wed, Jul 19, 2023 at 12:53 AM Alvaro Herrera <[email protected]> wrote:\n> > On 2023-Jul-18, Amit Langote wrote:\n> > > b6e1157e7d Don't include CaseTestExpr in JsonValueExpr.formatted_expr\n> >\n> > I feel a bit uneasy about this one. It seems to assume that\n> > formatted_expr is always set, but at the same time it's not obvious that\n> > it is. (Maybe this aspect just needs some more commentary).\n>\n> Hmm, I agree that the comments about formatted_expr could be improved\n> further, for which I propose the attached. Actually, staring some\n> more at this, I'm inclined to change makeJsonValueExpr() to allow\n> callers to pass it the finished 'formatted_expr' rather than set it by\n> themselves.\n\nHmm, after looking some more, it may not be entirely right that\nformatted_expr is always set in the code paths that call\ntransformJsonValueExpr(). Will look at this some more tomorrow.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 19 Jul 2023 21:46:34 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "On Tue, Jul 18, 2023 at 5:11 PM Amit Langote <[email protected]>\nwrote:\n>\n> Hi,\n>\n> On Mon, Jul 17, 2023 at 4:14 PM Erik Rijkers <[email protected]> wrote:\n> > Op 7/17/23 om 07:00 schreef jian he:\n> > > hi.\n> > > seems there is no explanation about, json_api_common_syntax in\n> > > functions-json.html\n> > >\n> > > I can get json_query full synopsis from functions-json.html as\nfollows:\n> > > json_query ( context_item, path_expression [ PASSING { value AS\n> > > varname } [, ...]] [ RETURNING data_type [ FORMAT JSON [ ENCODING UTF8\n> > > ] ] ] [ { WITHOUT | WITH { CONDITIONAL | [UNCONDITIONAL] } } [ ARRAY ]\n> > > WRAPPER ] [ { KEEP | OMIT } QUOTES [ ON SCALAR STRING ] ] [ { ERROR |\n> > > NULL | EMPTY { [ ARRAY ] | OBJECT } | DEFAULT expression } ON EMPTY ]\n> > > [ { ERROR | NULL | EMPTY { [ ARRAY ] | OBJECT } | DEFAULT expression }\n> > > ON ERROR ])\n> > >\n> > > seems doesn't have a full synopsis for json_table? only partial one\n> > > by one explanation.\n>\n> I looked through the history of the docs portion of the patch and it\n> looks like the synopsis for JSON_TABLE(...) used to be there but was\n> taken out during one of the doc reworks [1].\n>\n> I've added it back in the patch as I agree that it would help to have\n> it. Though, I am not totally sure where I've put it is the right\n> place for it. JSON_TABLE() is a beast that won't fit into the table\n> that JSON_QUERY() et al are in, so maybe that's how it will have to\n> be? I have no better idea.\n>\n> >\n\nattached screenshot render json_table syntax almost plain html. It looks\nfine.\nbased on syntax, then I am kind of confused with following 2 cases:\n--1\nSELECT * FROM JSON_TABLE(jsonb '1', '$'\n COLUMNS (a int PATH 'strict $.a' default 1 ON EMPTY default 2 on\nerror)\n ERROR ON ERROR) jt;\n\n--2\nSELECT * FROM JSON_TABLE(jsonb '1', '$'\n COLUMNS (a int PATH 'strict $.a' default 1 ON EMPTY default 2 on\nerror)) jt;\n\nthe first one should yield syntax error?\n[image: json_table_v8.png]",
"msg_date": "Thu, 20 Jul 2023 09:35:45 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "Hello,\n\nOn Thu, Jul 20, 2023 at 10:35 AM jian he <[email protected]> wrote:\n> On Tue, Jul 18, 2023 at 5:11 PM Amit Langote <[email protected]> wrote:\n> > > Op 7/17/23 om 07:00 schreef jian he:\n> > > > hi.\n> > > > seems there is no explanation about, json_api_common_syntax in\n> > > > functions-json.html\n> > > >\n> > > > I can get json_query full synopsis from functions-json.html as follows:\n> > > > json_query ( context_item, path_expression [ PASSING { value AS\n> > > > varname } [, ...]] [ RETURNING data_type [ FORMAT JSON [ ENCODING UTF8\n> > > > ] ] ] [ { WITHOUT | WITH { CONDITIONAL | [UNCONDITIONAL] } } [ ARRAY ]\n> > > > WRAPPER ] [ { KEEP | OMIT } QUOTES [ ON SCALAR STRING ] ] [ { ERROR |\n> > > > NULL | EMPTY { [ ARRAY ] | OBJECT } | DEFAULT expression } ON EMPTY ]\n> > > > [ { ERROR | NULL | EMPTY { [ ARRAY ] | OBJECT } | DEFAULT expression }\n> > > > ON ERROR ])\n> > > >\n> > > > seems doesn't have a full synopsis for json_table? only partial one\n> > > > by one explanation.\n> >\n> > I looked through the history of the docs portion of the patch and it\n> > looks like the synopsis for JSON_TABLE(...) used to be there but was\n> > taken out during one of the doc reworks [1].\n> >\n> > I've added it back in the patch as I agree that it would help to have\n> > it. Though, I am not totally sure where I've put it is the right\n> > place for it. JSON_TABLE() is a beast that won't fit into the table\n> > that JSON_QUERY() et al are in, so maybe that's how it will have to\n> > be? I have no better idea.\n>\n> attached screenshot render json_table syntax almost plain html. It looks fine.\n\nThanks for checking.\n\n> based on syntax, then I am kind of confused with following 2 cases:\n> --1\n> SELECT * FROM JSON_TABLE(jsonb '1', '$'\n> COLUMNS (a int PATH 'strict $.a' default 1 ON EMPTY default 2 on error)\n> ERROR ON ERROR) jt;\n>\n> --2\n> SELECT * FROM JSON_TABLE(jsonb '1', '$'\n> COLUMNS (a int PATH 'strict $.a' default 1 ON EMPTY default 2 on error)) jt;\n>\n> the first one should yield syntax error?\n\nNo. Actually, the synopsis missed the optional ON ERROR clause that\ncan appear after COLUMNS(...). Will fix it.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 20 Jul 2023 16:03:32 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "On Wed, Jul 19, 2023 at 5:17 PM Amit Langote <[email protected]> wrote:\n> On Wed, Jul 19, 2023 at 12:53 AM Alvaro Herrera <[email protected]> wrote:\n> > On 2023-Jul-18, Amit Langote wrote:\n> >\n> > > Attached updated patches. In 0002, I removed the mention of the\n> > > RETURNING clause in the JSON(), JSON_SCALAR() documentation, which I\n> > > had forgotten to do in the last version which removed its support in\n> > > code.\n> >\n> > > I think 0001 looks ready to go. Alvaro?\n> >\n> > It looks reasonable to me.\n>\n> Thanks for taking another look.\n>\n> I will push this tomorrow.\n\nPushed.\n\n> > > Also, I've been wondering if it isn't too late to apply the following\n> > > to v16 too, so as to make the code look similar in both branches:\n> >\n> > Hmm.\n> >\n> > > 785480c953 Pass constructName to transformJsonValueExpr()\n> >\n> > I think 785480c953 can easily be considered a bugfix on 7081ac46ace8, so\n> > I agree it's better to apply it to 16.\n>\n> OK.\n\nPushed to 16.\n\n> > > b6e1157e7d Don't include CaseTestExpr in JsonValueExpr.formatted_expr\n> >\n> > I feel a bit uneasy about this one. It seems to assume that\n> > formatted_expr is always set, but at the same time it's not obvious that\n> > it is. (Maybe this aspect just needs some more commentary).\n>\n> Hmm, I agree that the comments about formatted_expr could be improved\n> further, for which I propose the attached. Actually, staring some\n> more at this, I'm inclined to change makeJsonValueExpr() to allow\n> callers to pass it the finished 'formatted_expr' rather than set it by\n> themselves.\n>\n> > I agree\n> > that it would be better to make both branches identical, because if\n> > there's a problem, we are better equipped to get a fix done to both.\n> >\n> > As for the removal of makeCaseTestExpr(), I agree -- of the six callers\n> > of makeNode(CastTestExpr), only two of them would be able to use the new\n> > function, so it doesn't look of general enough usefulness.\n>\n> OK, so you agree with back-patching this one too, though perhaps only\n> after applying something like the aforementioned patch.\n\nI looked at this some more and concluded that it's fine to think that\nall JsonValueExpr nodes leaving the parser have their formatted_expr\nset. I've updated the commentary some more in the patch attached as\n0001.\n\nRebased SQL/JSON patches also attached. I've fixed the JSON_TABLE\nsyntax synopsis in the documentation as mentioned in my other email.\n\n--\nThanks, Amit Langote\nEDB: http://www.enterprisedb.com",
"msg_date": "Thu, 20 Jul 2023 17:19:02 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "On Thu, Jul 20, 2023 at 17:19 Amit Langote <[email protected]> wrote:\n\n> On Wed, Jul 19, 2023 at 5:17 PM Amit Langote <[email protected]>\n> wrote:\n> > On Wed, Jul 19, 2023 at 12:53 AM Alvaro Herrera <[email protected]>\n> wrote:\n> > > On 2023-Jul-18, Amit Langote wrote:\n> > >\n> > > > Attached updated patches. In 0002, I removed the mention of the\n> > > > RETURNING clause in the JSON(), JSON_SCALAR() documentation, which I\n> > > > had forgotten to do in the last version which removed its support in\n> > > > code.\n> > >\n> > > > I think 0001 looks ready to go. Alvaro?\n> > >\n> > > It looks reasonable to me.\n> >\n> > Thanks for taking another look.\n> >\n> > I will push this tomorrow.\n>\n> Pushed.\n>\n> > > > Also, I've been wondering if it isn't too late to apply the following\n> > > > to v16 too, so as to make the code look similar in both branches:\n> > >\n> > > Hmm.\n> > >\n> > > > 785480c953 Pass constructName to transformJsonValueExpr()\n> > >\n> > > I think 785480c953 can easily be considered a bugfix on 7081ac46ace8,\n> so\n> > > I agree it's better to apply it to 16.\n> >\n> > OK.\n>\n> Pushed to 16.\n>\n> > > > b6e1157e7d Don't include CaseTestExpr in JsonValueExpr.formatted_expr\n> > >\n> > > I feel a bit uneasy about this one. It seems to assume that\n> > > formatted_expr is always set, but at the same time it's not obvious\n> that\n> > > it is. (Maybe this aspect just needs some more commentary).\n> >\n> > Hmm, I agree that the comments about formatted_expr could be improved\n> > further, for which I propose the attached. Actually, staring some\n> > more at this, I'm inclined to change makeJsonValueExpr() to allow\n> > callers to pass it the finished 'formatted_expr' rather than set it by\n> > themselves.\n> >\n> > > I agree\n> > > that it would be better to make both branches identical, because if\n> > > there's a problem, we are better equipped to get a fix done to both.\n> > >\n> > > As for the removal of makeCaseTestExpr(), I agree -- of the six callers\n> > > of makeNode(CastTestExpr), only two of them would be able to use the\n> new\n> > > function, so it doesn't look of general enough usefulness.\n> >\n> > OK, so you agree with back-patching this one too, though perhaps only\n> > after applying something like the aforementioned patch.\n>\n> I looked at this some more and concluded that it's fine to think that\n> all JsonValueExpr nodes leaving the parser have their formatted_expr\n> set. I've updated the commentary some more in the patch attached as\n> 0001.\n>\n> Rebased SQL/JSON patches also attached. I've fixed the JSON_TABLE\n> syntax synopsis in the documentation as mentioned in my other email.\n\n\nI’m thinking of pushing 0001 and 0002 tomorrow barring objections.\n\n> --\nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\nOn Thu, Jul 20, 2023 at 17:19 Amit Langote <[email protected]> wrote:On Wed, Jul 19, 2023 at 5:17 PM Amit Langote <[email protected]> wrote:\n> On Wed, Jul 19, 2023 at 12:53 AM Alvaro Herrera <[email protected]> wrote:\n> > On 2023-Jul-18, Amit Langote wrote:\n> >\n> > > Attached updated patches. In 0002, I removed the mention of the\n> > > RETURNING clause in the JSON(), JSON_SCALAR() documentation, which I\n> > > had forgotten to do in the last version which removed its support in\n> > > code.\n> >\n> > > I think 0001 looks ready to go. 
Alvaro?\n> >\n> > It looks reasonable to me.\n>\n> Thanks for taking another look.\n>\n> I will push this tomorrow.\n\nPushed.\n\n> > > Also, I've been wondering if it isn't too late to apply the following\n> > > to v16 too, so as to make the code look similar in both branches:\n> >\n> > Hmm.\n> >\n> > > 785480c953 Pass constructName to transformJsonValueExpr()\n> >\n> > I think 785480c953 can easily be considered a bugfix on 7081ac46ace8, so\n> > I agree it's better to apply it to 16.\n>\n> OK.\n\nPushed to 16.\n\n> > > b6e1157e7d Don't include CaseTestExpr in JsonValueExpr.formatted_expr\n> >\n> > I feel a bit uneasy about this one. It seems to assume that\n> > formatted_expr is always set, but at the same time it's not obvious that\n> > it is. (Maybe this aspect just needs some more commentary).\n>\n> Hmm, I agree that the comments about formatted_expr could be improved\n> further, for which I propose the attached. Actually, staring some\n> more at this, I'm inclined to change makeJsonValueExpr() to allow\n> callers to pass it the finished 'formatted_expr' rather than set it by\n> themselves.\n>\n> > I agree\n> > that it would be better to make both branches identical, because if\n> > there's a problem, we are better equipped to get a fix done to both.\n> >\n> > As for the removal of makeCaseTestExpr(), I agree -- of the six callers\n> > of makeNode(CastTestExpr), only two of them would be able to use the new\n> > function, so it doesn't look of general enough usefulness.\n>\n> OK, so you agree with back-patching this one too, though perhaps only\n> after applying something like the aforementioned patch.\n\nI looked at this some more and concluded that it's fine to think that\nall JsonValueExpr nodes leaving the parser have their formatted_expr\nset. I've updated the commentary some more in the patch attached as\n0001.\n\nRebased SQL/JSON patches also attached. I've fixed the JSON_TABLE\nsyntax synopsis in the documentation as mentioned in my other email.I’m thinking of pushing 0001 and 0002 tomorrow barring objections.-- Thanks, Amit LangoteEDB: http://www.enterprisedb.com",
"msg_date": "Fri, 21 Jul 2023 00:48:06 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "On 2023-Jul-21, Amit Langote wrote:\n\n> I’m thinking of pushing 0001 and 0002 tomorrow barring objections.\n\n0001 looks reasonable to me. I think you asked whether to squash that\none with the other bugfix commit for the same code that you already\npushed to master; I think there's no point in committing as separate\npatches, because the first one won't show up in the git_changelog output\nas a single entity with the one in 16, so it'll just be additional\nnoise.\n\nI've looked at 0002 at various points in time and I think it looks\ngenerally reasonable. I think your removal of a couple of newlines\n(where originally two appear in sequence) is unwarranted; that the name\nto_json[b]_worker is ugly for exported functions (maybe \"datum_to_json\"\nwould be better, or you may have better ideas); and that the omission of\nthe stock comment in the new stanzas in FigureColnameInternal() is\nstrange. But I don't have anything serious. Do add some ecpg tests ...\n\nAlso, remember to pgindent and bump catversion, if you haven't already.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"No hay hombre que no aspire a la plenitud, es decir,\nla suma de experiencias de que un hombre es capaz\"\n\n\n",
"msg_date": "Thu, 20 Jul 2023 18:02:52 +0200",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "Hi Alvaro,\n\nThanks for taking a look.\n\nOn Fri, Jul 21, 2023 at 1:02 AM Alvaro Herrera <[email protected]> wrote:\n> On 2023-Jul-21, Amit Langote wrote:\n>\n> > I’m thinking of pushing 0001 and 0002 tomorrow barring objections.\n>\n> 0001 looks reasonable to me. I think you asked whether to squash that\n> one with the other bugfix commit for the same code that you already\n> pushed to master; I think there's no point in committing as separate\n> patches, because the first one won't show up in the git_changelog output\n> as a single entity with the one in 16, so it'll just be additional\n> noise.\n\nOK, pushed 0001 to HEAD and b6e1157e7d + 0001 to 16.\n\n> I've looked at 0002 at various points in time and I think it looks\n> generally reasonable. I think your removal of a couple of newlines\n> (where originally two appear in sequence) is unwarranted; that the name\n> to_json[b]_worker is ugly for exported functions (maybe \"datum_to_json\"\n> would be better, or you may have better ideas);\n\nWent with datum_to_json[b]. Created a separate refactoring patch for\nthis, attached as 0001.\n\nCreated another refactoring patch for the hunks related to renaming of\na nonterminal in gram.y, attached as 0002.\n\n> and that the omission of\n> the stock comment in the new stanzas in FigureColnameInternal() is\n> strange.\n\nYes, fixed.\n\n> But I don't have anything serious. Do add some ecpg tests ...\n\nAdded.\n\n> Also, remember to pgindent and bump catversion, if you haven't already.\n\nWill do. Wasn't sure myself whether the catversion should be bumped,\nbut I suppose it must be because ruleutils.c has changed.\n\nAttaching latest patches. Will push 0001, 0002, and 0003 on Monday to\navoid worrying about the buildfarm on a Friday evening.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com",
"msg_date": "Fri, 21 Jul 2023 19:33:11 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "hi\nbased on v10*.patch. questions/ideas about the doc.\n\n> json_exists ( context_item, path_expression [ PASSING { value AS varname } [, ...]] [ RETURNING data_type ] [ { TRUE | FALSE | UNKNOWN | ERROR } ON ERROR ])\n> Returns true if the SQL/JSON path_expression applied to the context_item using the values yields any items. The ON ERROR clause specifies what is returned if an error occurs. Note that if the path_expression is strict, an error is generated if it yields no items. The default value is UNKNOWN which causes a NULL result.\n\nonly SELECT JSON_EXISTS(NULL::jsonb, '$'); will cause a null result.\nIn lex mode, if yield no items return false, no error will return,\neven error on error.\nOnly case error will happen, strict mode error on error. (select\njson_exists(jsonb '{\"a\": [1,2,3]}', 'strict $.b' error on error)\n\nso I came up with the following:\nReturns true if the SQL/JSON path_expression applied to the\ncontext_item using the values yields any items. The ON ERROR clause\nspecifies what is returned if an error occurs, if not specified, the\ndefault value is false when it yields no items.\nNote that if the path_expression is strict, ERROR ON ERROR specified,\nan error is generated if it yields no items.\n--------------------------------------------------------------------------------------------------\n/* --first branch of json_table_column spec.\n\nname type [ PATH json_path_specification ]\n [ { WITHOUT | WITH { CONDITIONAL | [UNCONDITIONAL] } } [ ARRAY\n] WRAPPER ]\n [ { KEEP | OMIT } QUOTES [ ON SCALAR STRING ] ]\n [ { ERROR | NULL | DEFAULT expression } ON EMPTY ]\n [ { ERROR | NULL | DEFAULT expression } ON ERROR ]\n*/\nI am not sure what \" [ ON SCALAR STRING ]\" means. There is no test on this.\ni wonder how to achieve the following query with json_table:\nselect json_query(jsonb '\"world\"', '$' returning text keep quotes) ;\n\nthe following case will fail.\nSELECT * FROM JSON_TABLE(jsonb '\"world\"', '$' COLUMNS (item text PATH\n'$' keep quotes ON SCALAR STRING ));\nERROR: cannot use OMIT QUOTES clause with scalar columns\nLINE 1: ...T * FROM JSON_TABLE(jsonb '\"world\"', '$' COLUMNS (item text ...\n ^\nerror should be ERROR: cannot use KEEP QUOTES clause with scalar columns?\nLINE1 should be: SELECT * FROM JSON_TABLE(jsonb '\"world\"', '$' COLUMNS\n(item text ...\n--------------------------------------------------------------------------------\nquote from json_query:\n> This function must return a JSON string, so if the path expression returns multiple SQL/JSON items, you must wrap the result using the\n> WITH WRAPPER clause.\n\nI think the final result will be: if the RETURNING clause is not\nspecified, then the returned data type is jsonb. 
if multiple SQL/JSON\nitems returned, if not specified WITH WRAPPER, null will be returned.\n------------------------------------------------------------------------------------\nquote from json_query:\n> The ON ERROR and ON EMPTY clauses have similar semantics to those clauses for json_value.\nquote from json_table:\n> These clauses have the same syntax and semantics as for json_value and json_query.\n\nit would be better in json_value syntax explicit mention: if not\nexplicitly mentioned, what will happen when on error, on empty\nhappened ?\n-------------------------------------------------------------------------------------\n> You can have only one ordinality column per table\nbut the regress test shows that you can have more than one ordinality column.\n----------------------------------------------------------------------------\nsimilar to here\nhttps://git.postgresql.org/cgit/postgresql.git/tree/src/test/regress/expected/sqljson.out#n804\nMaybe in file src/test/regress/sql/jsonb_sqljson.sql line 349, you can\nalso create a table first. insert corner case data.\nthen split the very wide select query (more than 26 columns) into 4\nsmall queries, better to view the expected result on the web.\n\n\n",
"msg_date": "Sun, 23 Jul 2023 16:17:25 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "On Fri, Jul 21, 2023 at 7:33 PM Amit Langote <[email protected]> wrote:\n> On Fri, Jul 21, 2023 at 1:02 AM Alvaro Herrera <[email protected]> wrote:\n> > On 2023-Jul-21, Amit Langote wrote:\n> >\n> > > I’m thinking of pushing 0001 and 0002 tomorrow barring objections.\n> >\n> > 0001 looks reasonable to me. I think you asked whether to squash that\n> > one with the other bugfix commit for the same code that you already\n> > pushed to master; I think there's no point in committing as separate\n> > patches, because the first one won't show up in the git_changelog output\n> > as a single entity with the one in 16, so it'll just be additional\n> > noise.\n>\n> OK, pushed 0001 to HEAD and b6e1157e7d + 0001 to 16.\n>\n> > I've looked at 0002 at various points in time and I think it looks\n> > generally reasonable. I think your removal of a couple of newlines\n> > (where originally two appear in sequence) is unwarranted; that the name\n> > to_json[b]_worker is ugly for exported functions (maybe \"datum_to_json\"\n> > would be better, or you may have better ideas);\n>\n> Went with datum_to_json[b]. Created a separate refactoring patch for\n> this, attached as 0001.\n>\n> Created another refactoring patch for the hunks related to renaming of\n> a nonterminal in gram.y, attached as 0002.\n>\n> > and that the omission of\n> > the stock comment in the new stanzas in FigureColnameInternal() is\n> > strange.\n>\n> Yes, fixed.\n>\n> > But I don't have anything serious. Do add some ecpg tests ...\n>\n> Added.\n>\n> > Also, remember to pgindent and bump catversion, if you haven't already.\n>\n> Will do. Wasn't sure myself whether the catversion should be bumped,\n> but I suppose it must be because ruleutils.c has changed.\n>\n> Attaching latest patches. Will push 0001, 0002, and 0003 on Monday to\n> avoid worrying about the buildfarm on a Friday evening.\n\nAnd pushed.\n\nWill post the remaining patches after addressing jian he's comments.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 26 Jul 2023 17:10:17 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "Hi,\r\nThank you for developing such a great feature. The attached patch formats the documentation like any other function definition:\r\n- Added right parenthesis to json function calls.\r\n- Added <returnvalue> to json functions.\r\n- Added a space to the 'expression' part of the json_scalar function.\r\n- Added a space to the 'expression' part of the json_serialize function.\r\n\r\nIt seems that the three functions added this time do not have tuples in the pg_proc catalog. Is it unnecessary?\r\n\r\nRegards,\r\nNoriyoshi Shinoda\r\n-----Original Message-----\r\nFrom: Amit Langote <[email protected]> \r\nSent: Wednesday, July 26, 2023 5:10 PM\r\nTo: Alvaro Herrera <[email protected]>\r\nCc: Andrew Dunstan <[email protected]>; Erik Rijkers <[email protected]>; PostgreSQL-development <[email protected]>; jian he <[email protected]>\r\nSubject: Re: remaining sql/json patches\r\n\r\nOn Fri, Jul 21, 2023 at 7:33 PM Amit Langote <[email protected]> wrote:\r\n> On Fri, Jul 21, 2023 at 1:02 AM Alvaro Herrera <[email protected]> wrote:\r\n> > On 2023-Jul-21, Amit Langote wrote:\r\n> >\r\n> > > I’m thinking of pushing 0001 and 0002 tomorrow barring objections.\r\n> >\r\n> > 0001 looks reasonable to me. I think you asked whether to squash \r\n> > that one with the other bugfix commit for the same code that you \r\n> > already pushed to master; I think there's no point in committing as \r\n> > separate patches, because the first one won't show up in the \r\n> > git_changelog output as a single entity with the one in 16, so it'll \r\n> > just be additional noise.\r\n>\r\n> OK, pushed 0001 to HEAD and b6e1157e7d + 0001 to 16.\r\n>\r\n> > I've looked at 0002 at various points in time and I think it looks \r\n> > generally reasonable. I think your removal of a couple of newlines \r\n> > (where originally two appear in sequence) is unwarranted; that the \r\n> > name to_json[b]_worker is ugly for exported functions (maybe \"datum_to_json\"\r\n> > would be better, or you may have better ideas);\r\n>\r\n> Went with datum_to_json[b]. Created a separate refactoring patch for \r\n> this, attached as 0001.\r\n>\r\n> Created another refactoring patch for the hunks related to renaming of \r\n> a nonterminal in gram.y, attached as 0002.\r\n>\r\n> > and that the omission of\r\n> > the stock comment in the new stanzas in FigureColnameInternal() is \r\n> > strange.\r\n>\r\n> Yes, fixed.\r\n>\r\n> > But I don't have anything serious. Do add some ecpg tests ...\r\n>\r\n> Added.\r\n>\r\n> > Also, remember to pgindent and bump catversion, if you haven't already.\r\n>\r\n> Will do. Wasn't sure myself whether the catversion should be bumped, \r\n> but I suppose it must be because ruleutils.c has changed.\r\n>\r\n> Attaching latest patches. Will push 0001, 0002, and 0003 on Monday to \r\n> avoid worrying about the buildfarm on a Friday evening.\r\n\r\nAnd pushed.\r\n\r\nWill post the remaining patches after addressing jian he's comments.\r\n\r\n--\r\nThanks, Amit Langote\r\nEDB: http://www.enterprisedb.com",
"msg_date": "Thu, 27 Jul 2023 09:36:35 +0000",
"msg_from": "\"Shinoda, Noriyoshi (HPE Services Japan - FSIP)\"\n\t<[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: remaining sql/json patches"
},
{
"msg_contents": "Hello,\n\nOn Thu, Jul 27, 2023 at 6:36 PM Shinoda, Noriyoshi (HPE Services Japan\n- FSIP) <[email protected]> wrote:\n> Hi,\n> Thank you for developing such a great feature. The attached patch formats the documentation like any other function definition:\n> - Added right parenthesis to json function calls.\n> - Added <returnvalue> to json functions.\n> - Added a space to the 'expression' part of the json_scalar function.\n> - Added a space to the 'expression' part of the json_serialize function.\n\nThanks for checking and the patch. Will push shortly.\n\n> It seems that the three functions added this time do not have tuples in the pg_proc catalog. Is it unnecessary?\n\nYes. These are not functions that get pg_proc entries, but SQL\nconstructs that *look like* functions, similar to XMLEXISTS(), etc.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 28 Jul 2023 15:57:14 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: remaining sql/json patches"
},
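A quick way to see the point made in the reply above, sketched as SQL rather than taken from the thread (it assumes a server built with these patches; the constant 3.14 is just an arbitrary example value): the new constructors are handled by the grammar, so they work even though no catalog function backs them.

    -- JSON_SCALAR() parses as a dedicated construct, like XMLEXISTS():
    SELECT json_scalar(3.14);                                     -- 3.14
    -- ... and, as observed above, there is no pg_proc row behind it:
    SELECT count(*) FROM pg_proc WHERE proname = 'json_scalar';   -- 0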
{
"msg_contents": "Op 7/21/23 om 12:33 schreef Amit Langote:\n> \n> Thanks for taking a look.\n> \n\nHi Amit,\n\nIs there any chance to rebase the outstanding SQL/JSON patches, (esp. \njson_query)?\n\nThanks!\n\nErik Rijkers\n\n\n\n",
"msg_date": "Fri, 4 Aug 2023 12:02:57 +0200",
"msg_from": "Erik Rijkers <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "Hi,\n\nOn Fri, Aug 4, 2023 at 19:01 Erik Rijkers <[email protected]> wrote:\n\n> Op 7/21/23 om 12:33 schreef Amit Langote:\n> >\n> > Thanks for taking a look.\n> >\n>\n> Hi Amit,\n>\n> Is there any chance to rebase the outstanding SQL/JSON patches, (esp.\n> json_query)?\n\n\nYes, working on it. Will post a WIP shortly.\n\n> --\nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\nHi,On Fri, Aug 4, 2023 at 19:01 Erik Rijkers <[email protected]> wrote:Op 7/21/23 om 12:33 schreef Amit Langote:\n> \n> Thanks for taking a look.\n> \n\nHi Amit,\n\nIs there any chance to rebase the outstanding SQL/JSON patches, (esp. \njson_query)?Yes, working on it. Will post a WIP shortly.-- Thanks, Amit LangoteEDB: http://www.enterprisedb.com",
"msg_date": "Fri, 4 Aug 2023 19:05:47 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "Hi,\n\nOn Sun, Jul 23, 2023 at 5:17 PM jian he <[email protected]> wrote:\n> hi\n> based on v10*.patch. questions/ideas about the doc.\n\nThanks for taking a look.\n\n> > json_exists ( context_item, path_expression [ PASSING { value AS varname } [, ...]] [ RETURNING data_type ] [ { TRUE | FALSE | UNKNOWN | ERROR } ON ERROR ])\n> > Returns true if the SQL/JSON path_expression applied to the context_item using the values yields any items. The ON ERROR clause specifies what is returned if an error occurs. Note that if the path_expression is strict, an error is generated if it yields no items. The default value is UNKNOWN which causes a NULL result.\n>\n> only SELECT JSON_EXISTS(NULL::jsonb, '$'); will cause a null result.\n> In lex mode, if yield no items return false, no error will return,\n> even error on error.\n> Only case error will happen, strict mode error on error. (select\n> json_exists(jsonb '{\"a\": [1,2,3]}', 'strict $.b' error on error)\n>\n> so I came up with the following:\n> Returns true if the SQL/JSON path_expression applied to the\n> context_item using the values yields any items. The ON ERROR clause\n> specifies what is returned if an error occurs, if not specified, the\n> default value is false when it yields no items.\n> Note that if the path_expression is strict, ERROR ON ERROR specified,\n> an error is generated if it yields no items.\n\nOK, will change the text to say that the default ON ERROR behavior is\nto return false.\n\n> --------------------------------------------------------------------------------------------------\n> /* --first branch of json_table_column spec.\n>\n> name type [ PATH json_path_specification ]\n> [ { WITHOUT | WITH { CONDITIONAL | [UNCONDITIONAL] } } [ ARRAY\n> ] WRAPPER ]\n> [ { KEEP | OMIT } QUOTES [ ON SCALAR STRING ] ]\n> [ { ERROR | NULL | DEFAULT expression } ON EMPTY ]\n> [ { ERROR | NULL | DEFAULT expression } ON ERROR ]\n> */\n> I am not sure what \" [ ON SCALAR STRING ]\" means. There is no test on this.\n\nON SCALAR STRING is just syntactic sugar. KEEP/OMIT QUOTES specifies\nthe behavior when the result of JSON_QUERY() is a JSON scalar value.\n\n> i wonder how to achieve the following query with json_table:\n> select json_query(jsonb '\"world\"', '$' returning text keep quotes) ;\n>\n> the following case will fail.\n> SELECT * FROM JSON_TABLE(jsonb '\"world\"', '$' COLUMNS (item text PATH\n> '$' keep quotes ON SCALAR STRING ));\n> ERROR: cannot use OMIT QUOTES clause with scalar columns\n> LINE 1: ...T * FROM JSON_TABLE(jsonb '\"world\"', '$' COLUMNS (item text ...\n> ^\n> error should be ERROR: cannot use KEEP QUOTES clause with scalar columns?\n> LINE1 should be: SELECT * FROM JSON_TABLE(jsonb '\"world\"', '$' COLUMNS\n> (item text ...\n\nHmm, yes, I think the code that produces the error is not trying hard\nenough to figure out the actually specified QUOTES clause. Fixed and\nadded new tests.\n\n> --------------------------------------------------------------------------------\n> quote from json_query:\n> > This function must return a JSON string, so if the path expression returns multiple SQL/JSON items, you must wrap the result using the\n> > WITH WRAPPER clause.\n>\n> I think the final result will be: if the RETURNING clause is not\n> specified, then the returned data type is jsonb. 
if multiple SQL/JSON\n> items returned, if not specified WITH WRAPPER, null will be returned.\n\nI suppose you mean the following case:\n\nSELECT JSON_QUERY(jsonb '[1,2]', '$[*]');\n json_query\n------------\n\n(1 row)\n\nwhich with ERROR ON ERROR gives:\n\nSELECT JSON_QUERY(jsonb '[1,2]', '$[*]' ERROR ON ERROR);\nERROR: JSON path expression in JSON_QUERY should return singleton\nitem without wrapper\nHINT: Use WITH WRAPPER clause to wrap SQL/JSON item sequence into array.\n\nThe default return value for JSON_QUERY when an error occurs during\npath expression evaluation is NULL. I don't think that it needs to be\nmentioned separately.\n\n> ------------------------------------------------------------------------------------\n> quote from json_query:\n> > The ON ERROR and ON EMPTY clauses have similar semantics to those clauses for json_value.\n> quote from json_table:\n> > These clauses have the same syntax and semantics as for json_value and json_query.\n>\n> it would be better in json_value syntax explicit mention: if not\n> explicitly mentioned, what will happen when on error, on empty\n> happened ?\n\nOK, I've improved the text here.\n\n> -------------------------------------------------------------------------------------\n> > You can have only one ordinality column per table\n> but the regress test shows that you can have more than one ordinality column.\n\nHmm, I am not sure why the code's allowing that. Anyway, for the lack\nany historical notes on why it should be allowed, I've fixed the code\nto allow only one ordinality columns and modified the tests.\n\n> ----------------------------------------------------------------------------\n> similar to here\n> https://git.postgresql.org/cgit/postgresql.git/tree/src/test/regress/expected/sqljson.out#n804\n> Maybe in file src/test/regress/sql/jsonb_sqljson.sql line 349, you can\n> also create a table first. insert corner case data.\n> then split the very wide select query (more than 26 columns) into 4\n> small queries, better to view the expected result on the web.\n\nOK, done.\n\nI'm still finding things to fix here and there, but here's what I have\ngot so far.\n\n\n\n\n--\nThanks, Amit Langote\nEDB: http://www.enterprisedb.com",
"msg_date": "Fri, 4 Aug 2023 21:21:49 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: remaining sql/json patches"
},
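To make the WRAPPER behaviour discussed above easier to follow, here is a small sketch (my own illustration, not an excerpt from the patch's regression tests; it assumes a server with the query-functions patch applied):

    -- The default NULL ON ERROR turns the "singleton required" error into NULL:
    SELECT JSON_QUERY(jsonb '[1,2]', '$[*]');                -- NULL
    -- WITH WRAPPER collects the multiple items into one JSON array instead:
    SELECT JSON_QUERY(jsonb '[1,2]', '$[*]' WITH WRAPPER);   -- [1, 2]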
{
"msg_contents": "Hi.\nin v11, json_query:\n+ The returned <replaceable>data_type</replaceable> has the\nsame semantics\n+ as for constructor functions like <function>json_objectagg</function>;\n+ the default returned type is <type>text</type>.\n+ The <literal>ON EMPTY</literal> clause specifies the behavior if the\n+ <replaceable>path_expression</replaceable> yields no value at all; the\n+ default when <literal>ON ERROR</literal> is not specified is\nto return a\n+ null value.\n\nthe default returned type is jsonb? Also in above quoted second last\nline should be <literal>ON EMPTY</literal> ?\nOther than that, the doc looks good.\n\n\n",
"msg_date": "Tue, 15 Aug 2023 16:58:01 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "On Tue, Aug 15, 2023 at 5:58 PM jian he <[email protected]> wrote:\n> Hi.\n> in v11, json_query:\n> + The returned <replaceable>data_type</replaceable> has the\n> same semantics\n> + as for constructor functions like <function>json_objectagg</function>;\n> + the default returned type is <type>text</type>.\n> + The <literal>ON EMPTY</literal> clause specifies the behavior if the\n> + <replaceable>path_expression</replaceable> yields no value at all; the\n> + default when <literal>ON ERROR</literal> is not specified is\n> to return a\n> + null value.\n>\n> the default returned type is jsonb?\n\nYou are correct.\n\n> Also in above quoted second last\n> line should be <literal>ON EMPTY</literal> ?\n\nCorrect too.\n\n> Other than that, the doc looks good.\n\nThanks for the review.\n\nI will post a new version after finishing working on a few other\nimprovements I am working on.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 16 Aug 2023 13:27:38 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: remaining sql/json patches"
},
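For reference, the default RETURNING type confirmed above can be checked directly; a sketch assuming the patched server, using pg_typeof() to report the result type:

    SELECT pg_typeof(json_query(jsonb '{"a": 1}', '$.a'));   -- jsonb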
{
"msg_contents": "Hello,\n\nOn Wed, Aug 16, 2023 at 1:27 PM Amit Langote <[email protected]> wrote:\n> I will post a new version after finishing working on a few other\n> improvements I am working on.\n\nSorry about the delay. Here's a new version.\n\nI found out that llvmjit_expr.c additions have been broken all along,\nI mean since I rewrote the JsonExpr evaluation code to use soft error\nhandling back in January or so. For example, I had made CoerceiViaIO\nevaluation code (EEOP_IOCOERCE ExprEvalStep) invoked by JsonCoercion\nnode's evaluation to pass an ErrorSaveContext to the type input\nfunctions so that any errors result in returning NULL instead of\nthrowing the error. Though the llvmjit_expr.c code was not modified\nto do the same, so the SQL/JSON query functions would return wrong\nresults when JITed. I have made many revisions to the JsonExpr\nexpression evaluation itself, not all of which were reflected in the\nllvmjit_expr.c counterparts. I've fixed all that in the attached.\n\nI've broken the parts to teach the CoerceViaIO evaluation code to\nhandle errors softly into a separate patch attached as 0001.\n\nOther notable changes in the SQL/JSON query functions patch (now 0002):\n\n* Significantly rewrote the parser changes to make it a bit more\nreadable than before. My main goal was to separate the code for each\nJSON_EXISTS_OP, JSON_QUERY_OP, and JSON_VALUE_OP such that the\nop-type-specific behaviors are more readily apparent by reading the\ncode.\n\n* Got rid of JsonItemCoercions struct/node, which contained a\nJsonCoercion field to store the coercion expressions for each JSON\nitem type that needs to be coerced to the RETURNING type, in favor of\nusing List of JsonCoercion nodes. That resulted in simpler code in\nmany places, most notably in the executor / llvmjit_expr.c.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com",
"msg_date": "Thu, 31 Aug 2023 21:57:27 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "Op 8/31/23 om 14:57 schreef Amit Langote:\n> Hello,\n> \n> On Wed, Aug 16, 2023 at 1:27 PM Amit Langote <[email protected]> wrote:\n>> I will post a new version after finishing working on a few other\n>> improvements I am working on.\n> \n> Sorry about the delay. Here's a new version.\n> \nHi,\n\nWhile compiling the new set\n\n[v12-0001-Support-soft-error-handling-during-CoerceViaIO-e.patch]\n[v12-0002-SQL-JSON-query-functions.patch]\n[v12-0003-JSON_TABLE.patch]\n[v12-0004-Claim-SQL-standard-compliance-for-SQL-JSON-featu.patch]\n\ngcc 13.2.0 is sputtering somewhat:\n\n--------------\nIn function ‘transformJsonFuncExpr’,\n inlined from ‘transformExprRecurse’ at parse_expr.c:374:13:\nparse_expr.c:4362:13: warning: ‘contextItemExpr’ may be used \nuninitialized [-Wmaybe-uninitialized]\n 4362 | if (exprType(contextItemExpr) != JSONBOID)\n | ^~~~~~~~~~~~~~~~~~~~~~~~~\nparse_expr.c: In function ‘transformExprRecurse’:\nparse_expr.c:4214:21: note: ‘contextItemExpr’ was declared here\n 4214 | Node *contextItemExpr;\n | ^~~~~~~~~~~~~~~\nnodeFuncs.c: In function ‘exprSetCollation’:\nnodeFuncs.c:1238:25: warning: this statement may fall through \n[-Wimplicit-fallthrough=]\n 1238 | {\n | ^\nnodeFuncs.c:1247:17: note: here\n 1247 | case T_JsonCoercion:\n | ^~~~\n--------------\n\nThose looks pretty unimportant, but I thought I'd let you know.\n\nTests (check, check-world and my own) still run fine.\n\nThanks,\n\nErik Rijkers\n\n\n\n\n\n\n> I found out that llvmjit_expr.c additions have been broken all along,\n> I mean since I rewrote the JsonExpr evaluation code to use soft error\n> handling back in January or so. For example, I had made CoerceiViaIO\n> evaluation code (EEOP_IOCOERCE ExprEvalStep) invoked by JsonCoercion\n> node's evaluation to pass an ErrorSaveContext to the type input\n> functions so that any errors result in returning NULL instead of\n> throwing the error. Though the llvmjit_expr.c code was not modified\n> to do the same, so the SQL/JSON query functions would return wrong\n> results when JITed. I have made many revisions to the JsonExpr\n> expression evaluation itself, not all of which were reflected in the\n> llvmjit_expr.c counterparts. I've fixed all that in the attached.\n> \n> I've broken the parts to teach the CoerceViaIO evaluation code to\n> handle errors softly into a separate patch attached as 0001.\n> \n> Other notable changes in the SQL/JSON query functions patch (now 0002):\n> \n> * Significantly rewrote the parser changes to make it a bit more\n> readable than before. My main goal was to separate the code for each\n> JSON_EXISTS_OP, JSON_QUERY_OP, and JSON_VALUE_OP such that the\n> op-type-specific behaviors are more readily apparent by reading the\n> code.\n> \n> * Got rid of JsonItemCoercions struct/node, which contained a\n> JsonCoercion field to store the coercion expressions for each JSON\n> item type that needs to be coerced to the RETURNING type, in favor of\n> using List of JsonCoercion nodes. That resulted in simpler code in\n> many places, most notably in the executor / llvmjit_expr.c.\n> \n\n\n",
"msg_date": "Thu, 31 Aug 2023 15:51:52 +0200",
"msg_from": "Erik Rijkers <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "Hi,\n\nOn Thu, Aug 31, 2023 at 10:49 PM Erik Rijkers <[email protected]> wrote:\n>\n> Op 8/31/23 om 14:57 schreef Amit Langote:\n> > Hello,\n> >\n> > On Wed, Aug 16, 2023 at 1:27 PM Amit Langote <[email protected]> wrote:\n> >> I will post a new version after finishing working on a few other\n> >> improvements I am working on.\n> >\n> > Sorry about the delay. Here's a new version.\n> >\n> Hi,\n>\n> While compiling the new set\n>\n> [v12-0001-Support-soft-error-handling-during-CoerceViaIO-e.patch]\n> [v12-0002-SQL-JSON-query-functions.patch]\n> [v12-0003-JSON_TABLE.patch]\n> [v12-0004-Claim-SQL-standard-compliance-for-SQL-JSON-featu.patch]\n>\n> gcc 13.2.0 is sputtering somewhat:\n>\n> --------------\n> In function ‘transformJsonFuncExpr’,\n> inlined from ‘transformExprRecurse’ at parse_expr.c:374:13:\n> parse_expr.c:4362:13: warning: ‘contextItemExpr’ may be used\n> uninitialized [-Wmaybe-uninitialized]\n> 4362 | if (exprType(contextItemExpr) != JSONBOID)\n> | ^~~~~~~~~~~~~~~~~~~~~~~~~\n> parse_expr.c: In function ‘transformExprRecurse’:\n> parse_expr.c:4214:21: note: ‘contextItemExpr’ was declared here\n> 4214 | Node *contextItemExpr;\n> | ^~~~~~~~~~~~~~~\n> nodeFuncs.c: In function ‘exprSetCollation’:\n> nodeFuncs.c:1238:25: warning: this statement may fall through\n> [-Wimplicit-fallthrough=]\n> 1238 | {\n> | ^\n> nodeFuncs.c:1247:17: note: here\n> 1247 | case T_JsonCoercion:\n> | ^~~~\n> --------------\n>\n> Those looks pretty unimportant, but I thought I'd let you know.\n\nOops, fixed in the attached. Thanks for checking.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com",
"msg_date": "Fri, 1 Sep 2023 13:52:15 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "0001 is quite mysterious to me. I've been reading it but I'm not sure I\ngrok it, so I don't have anything too intelligent to say about it at\nthis point. But here are my thoughts anyway.\n\nAssert()ing that a pointer is not null, and in the next line\ndereferencing that pointer, is useless: the process would crash anyway\nat the time of dereference, so the Assert() adds no value. Better to\nleave the assert out. (This appears both in ExecExprEnableErrorSafe and\nExecExprDisableErrorSafe).\n\nIs it not a problem to set just the node type, and not reset the\ncontents of the node to zeroes, in ExecExprEnableErrorSafe? I'm not\nsure if it's possible to enable error-safe on a node two times with an\nerror reported in between; would that result in the escontext filled\nwith junk the second time around? That might be dangerous. Maybe a\nsimple cross-check is to verify (assert) in ExecExprEnableErrorSafe()\nthat the struct is already all-zeroes, so that if this happens, we'll\nget reports about it. (After all, there are very few nodes that handle\nthe SOFT_ERROR_OCCURRED case).\n\nDo we need to have the ->details_wanted flag turned on? Maybe if we're\nhaving ExecExprEnableErrorSafe() as a generic tool, it should receive\nthe boolean to use as an argument.\n\nWhy palloc the escontext always, and not just when\nExecExprEnableErrorSafe is called? (At Disable time, just memset it to\nzero, and next time it is enabled for that node, we don't need to\nallocate it again, just set the nodetype.)\n\nExecExprEnableErrorSafe() is a strange name for this operation. Maybe\nyou mean ExecExprEnableSoftErrors()? Maybe it'd be better to leave it\nas NULL initially, so that for the majority of cases we don't even\nallocate it.\n\nIn 0002 you're adding soft-error support for a bunch of existing\noperations, in addition to introducing SQL/JSON query functions. Maybe\nthe soft-error stuff should be done separately in a preparatory patch.\n\nI think functions such as populate_array_element() that can now save\nsoft errors and which currently do not have a return value, should\nacquire a convention to let caller know that things failed: maybe return\nfalse if SOFT_ERROR_OCCURRED(). Otherwise it appears that, for instance\npopulate_array_dim_jsonb() can return happily if an error occurs when\nparsing the last element in the array. Splitting 0002 to have a\npreparatory patch where all such soft-error-saving changes are\nintroduced separately would help review that this is indeed being\nhandled by all their callers.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"Por suerte hoy explotó el califont porque si no me habría muerto\n de aburrido\" (Papelucho)\n\n\n",
"msg_date": "Wed, 6 Sep 2023 17:01:06 +0200",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "On 06.09.23 17:01, Alvaro Herrera wrote:\n> Assert()ing that a pointer is not null, and in the next line\n> dereferencing that pointer, is useless: the process would crash anyway\n> at the time of dereference, so the Assert() adds no value. Better to\n> leave the assert out.\n\nI don't think this is quite correct. If you dereference a pointer, the \ncompiler may assume that it is not null and rearrange code accordingly. \nSo it might not crash. Keeping the assertion would alter that assumption.\n\n\n\n",
"msg_date": "Tue, 12 Sep 2023 09:52:50 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n> On 06.09.23 17:01, Alvaro Herrera wrote:\n>> Assert()ing that a pointer is not null, and in the next line\n>> dereferencing that pointer, is useless: the process would crash anyway\n>> at the time of dereference, so the Assert() adds no value. Better to\n>> leave the assert out.\n\n> I don't think this is quite correct. If you dereference a pointer, the \n> compiler may assume that it is not null and rearrange code accordingly. \n> So it might not crash. Keeping the assertion would alter that assumption.\n\nUh ... only in assert-enabled builds. If your claim is correct,\nthis'd result in different behavior in debug and production builds,\nwhich would be even worse. But I don't believe the claim.\nI side with Alvaro's position here: such an assert is unhelpful.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 12 Sep 2023 10:43:41 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "Thanks for the review.\n\nOn Thu, Sep 7, 2023 at 12:01 AM Alvaro Herrera <[email protected]> wrote:\n> 0001 is quite mysterious to me. I've been reading it but I'm not sure I\n> grok it, so I don't have anything too intelligent to say about it at\n> this point. But here are my thoughts anyway.\n>\n> Assert()ing that a pointer is not null, and in the next line\n> dereferencing that pointer, is useless: the process would crash anyway\n> at the time of dereference, so the Assert() adds no value. Better to\n> leave the assert out. (This appears both in ExecExprEnableErrorSafe and\n> ExecExprDisableErrorSafe).\n>\n> Is it not a problem to set just the node type, and not reset the\n> contents of the node to zeroes, in ExecExprEnableErrorSafe? I'm not\n> sure if it's possible to enable error-safe on a node two times with an\n> error reported in between; would that result in the escontext filled\n> with junk the second time around? That might be dangerous. Maybe a\n> simple cross-check is to verify (assert) in ExecExprEnableErrorSafe()\n> that the struct is already all-zeroes, so that if this happens, we'll\n> get reports about it. (After all, there are very few nodes that handle\n> the SOFT_ERROR_OCCURRED case).\n>\n> Do we need to have the ->details_wanted flag turned on? Maybe if we're\n> having ExecExprEnableErrorSafe() as a generic tool, it should receive\n> the boolean to use as an argument.\n>\n> Why palloc the escontext always, and not just when\n> ExecExprEnableErrorSafe is called? (At Disable time, just memset it to\n> zero, and next time it is enabled for that node, we don't need to\n> allocate it again, just set the nodetype.)\n>\n> ExecExprEnableErrorSafe() is a strange name for this operation. Maybe\n> you mean ExecExprEnableSoftErrors()? Maybe it'd be better to leave it\n> as NULL initially, so that for the majority of cases we don't even\n> allocate it.\n\nI should have clarified earlier why the ErrorSaveContext must be\nallocated statically during the expression compilation phase. This is\nnecessary because llvm_compile_expr() requires a valid pointer to the\nErrorSaveContext to integrate into the compiled version. Thus, runtime\nallocation isn't feasible.\n\nAfter some consideration, I believe we shouldn't introduce the generic\nExecExprEnable/Disable* interface. Instead, we should let individual\nexpressions manage the ErrorSaveContext that they want to use\ndirectly, using ExprState.escontext just as a temporary global\nvariable, much like ExprState.innermost_caseval is used.\n\nThe revised 0001 now only contains the changes necessary to make\nCoerceViaIO evaluation code support soft error handling.\n\n> In 0002 you're adding soft-error support for a bunch of existing\n> operations, in addition to introducing SQL/JSON query functions. Maybe\n> the soft-error stuff should be done separately in a preparatory patch.\n\nHmm, there'd be only 1 ExecExprEnableErrorSafe() in 0002 -- that in\nExecEvalJsonExprCoercion(). I'm not sure which others you're\nreferring to.\n\nGiven what I said above, the code to reset the ErrorSaveContext\npresent in 0002 now looks different. 
It now resets the error_occurred\nflag directly instead of using memset-0-ing the whole struct.\ndetails_wanted and error_data are both supposed to be NULL in this\ncase anyway and remain set to NULL throughout the lifetime of the\nExprState.\n\n> I think functions such as populate_array_element() that can now save\n> soft errors and which currently do not have a return value, should\n> acquire a convention to let caller know that things failed: maybe return\n> false if SOFT_ERROR_OCCURRED(). Otherwise it appears that, for instance\n> populate_array_dim_jsonb() can return happily if an error occurs when\n> parsing the last element in the array. Splitting 0002 to have a\n> preparatory patch where all such soft-error-saving changes are\n> introduced separately would help review that this is indeed being\n> handled by all their callers.\n\nI've separated the changes to jsonfuncs.c into an independent patch.\nUpon reviewing the code accessible from populate_record_field() --\nwhich serves as the entry point for the executor via\njson_populate_type() -- I identified a few more instances where errors\ncould be thrown even with a non-NULL escontext. I've included tests\nfor these in patch 0003. While some error reports, like those in\nconstruct_md_array() (invoked by populate_array()), fall outside\njsonfuncs.c, I assume they're deliberately excluded from SQL/JSON's ON\nERROR support. I've opted not to modify any external interfaces.\n\n--\nThanks, Amit Langote\nEDB: http://www.enterprisedb.com",
"msg_date": "Thu, 14 Sep 2023 17:14:51 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: remaining sql/json patches"
},
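As a rough illustration of what the soft-error handling described above provides at the SQL level (a sketch under the assumption that the patched coercion path is in place, not an excerpt from the patches): a coercion failure in the RETURNING clause can be absorbed by the ON ERROR clause instead of aborting the query.

    -- DEFAULT ... ON ERROR absorbs the failed cast of '"not a number"' to int:
    SELECT json_value(jsonb '"not a number"', '$' RETURNING int DEFAULT -1 ON ERROR);  -- -1
    -- ERROR ON ERROR re-throws the same coercion error:
    SELECT json_value(jsonb '"not a number"', '$' RETURNING int ERROR ON ERROR);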
{
"msg_contents": "Op 9/14/23 om 10:14 schreef Amit Langote:\n> \n> \n\nHi Amit,\n\nJust now I built a v14-patched server and I found this crash:\n\nselect json_query(jsonb '\n{\n \"arr\": [\n {\"arr\": [2,3]}\n , {\"arr\": [4,5]}\n ]\n}'\n , '$.arr[*].arr ? (@ <= 3)' returning anyarray WITH WRAPPER) --crash\n;\nserver closed the connection unexpectedly\n\tThis probably means the server terminated abnormally\n\tbefore or while processing the request.\nconnection to server was lost\n\n\nCan you have a look?\n\nThanks,\n\nErik\n\n\n",
"msg_date": "Sun, 17 Sep 2023 08:37:11 +0200",
"msg_from": "Erik Rijkers <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "On Sun, Sep 17, 2023 at 3:34 PM Erik Rijkers <[email protected]> wrote:\n> Op 9/14/23 om 10:14 schreef Amit Langote:\n> >\n> >\n>\n> Hi Amit,\n>\n> Just now I built a v14-patched server and I found this crash:\n>\n> select json_query(jsonb '\n> {\n> \"arr\": [\n> {\"arr\": [2,3]}\n> , {\"arr\": [4,5]}\n> ]\n> }'\n> , '$.arr[*].arr ? (@ <= 3)' returning anyarray WITH WRAPPER) --crash\n> ;\n> server closed the connection unexpectedly\n> This probably means the server terminated abnormally\n> before or while processing the request.\n> connection to server was lost\n\nThanks for the report.\n\nAttached updated version fixes the crash, but you get an error as is\nto be expected:\n\nselect json_query(jsonb '\n{\n \"arr\": [\n {\"arr\": [2,3]}\n , {\"arr\": [4,5]}\n ]\n}'\n , '$.arr[*].arr ? (@ <= 3)' returning anyarray WITH WRAPPER);\nERROR: cannot accept a value of type anyarray\n\nunlike when using int[]:\n\nselect json_query(jsonb '\n{\n \"arr\": [\n {\"arr\": [2,3]}\n , {\"arr\": [4,5]}\n ]\n}'\n , '$.arr[*].arr ? (@ <= 3)' returning int[] WITH WRAPPER);\n json_query\n------------\n {2,3}\n(1 row)\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com",
"msg_date": "Mon, 18 Sep 2023 12:15:40 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "Op 9/18/23 om 05:15 schreef Amit Langote:\n> On Sun, Sep 17, 2023 at 3:34 PM Erik Rijkers <[email protected]> wrote:\n>> Op 9/14/23 om 10:14 schreef Amit Langote:\n>>>\n>>>\n>>\n>> Hi Amit,\n>>\n>> Just now I built a v14-patched server and I found this crash:\n>>\n>> select json_query(jsonb '\n>> {\n>> \"arr\": [\n>> {\"arr\": [2,3]}\n>> , {\"arr\": [4,5]}\n>> ]\n>> }'\n>> , '$.arr[*].arr ? (@ <= 3)' returning anyarray WITH WRAPPER) --crash\n>> ;\n>> server closed the connection unexpectedly\n>> This probably means the server terminated abnormally\n>> before or while processing the request.\n>> connection to server was lost\n> \n> Thanks for the report.\n> \n> Attached updated version fixes the crash, but you get an error as is\n> to be expected:\n> \n> select json_query(jsonb '\n> {\n> \"arr\": [\n> {\"arr\": [2,3]}\n> , {\"arr\": [4,5]}\n> ]\n> }'\n> , '$.arr[*].arr ? (@ <= 3)' returning anyarray WITH WRAPPER);\n> ERROR: cannot accept a value of type anyarray\n> \n> unlike when using int[]:\n> \n> select json_query(jsonb '\n> {\n> \"arr\": [\n> {\"arr\": [2,3]}\n> , {\"arr\": [4,5]}\n> ]\n> }'\n> , '$.arr[*].arr ? (@ <= 3)' returning int[] WITH WRAPPER);\n> json_query\n> ------------\n> {2,3}\n> (1 row)\n> \n\nThanks, Amit. Alas, there are more: for 'anyarray' I thought I'd \nsubstitute 'interval', 'int4range', 'int8range', and sure enough they \nall give similar crashes. Patched with v15:\n\npsql -qX -e << SQL\nselect json_query(jsonb'{\"a\":[{\"a\":[2,3]},{\"a\":[4,5]}]}',\n '$.a[*].a?(@<=3)'returning int[] with wrapper --ok\n);\n\nselect json_query(jsonb'{\"a\": [{\"a\": [2,3]}, {\"a\": [4,5]}]}',\n '$.a[*].a?(@<=3)'returning interval with wrapper --crash\n--'$.a[*].a?(@<=3)'returning int4range with wrapper --crash\n--'$.a[*].a?(@<=3)'returning int8range with wrapper --crash\n--'$.a[*].a?(@<=3)'returning numeric[] with wrapper --{2,3} =ok\n--'$.a[*].a?(@<=3)'returning anyarray with wrapper --fixed\n--'$.a[*].a?(@<=3)'returning anyarray --null =ok\n--'$.a[*].a?(@<=3)'returning int --null =ok\n--'$.a[*].a?(@<=3)'returning int with wrapper --error =ok\n--'$.a[*].a?(@<=3)'returning int[] with wrapper -- {2,3} =ok\n);\nSQL\n=> server closed the connection unexpectedly, etc\n\nBecause those first three tries gave a crash (*all three*), I'm a bit \nworried there may be many more.\n\nI am sorry to be bothering you with these somewhat idiotic SQL \nstatements but I suppose somehow it needs to be made more solid.\n\nThanks!\n\nErik\n\n\n",
"msg_date": "Mon, 18 Sep 2023 12:12:25 +0200",
"msg_from": "Erik Rijkers <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "Hi Erik,\n\nOn Mon, Sep 18, 2023 at 19:09 Erik Rijkers <[email protected]> wrote:\n\n> Op 9/18/23 om 05:15 schreef Amit Langote:\n> > On Sun, Sep 17, 2023 at 3:34 PM Erik Rijkers <[email protected]> wrote:\n> >> Op 9/14/23 om 10:14 schreef Amit Langote:\n> >>>\n> >>>\n> >>\n> >> Hi Amit,\n> >>\n> >> Just now I built a v14-patched server and I found this crash:\n> >>\n> >> select json_query(jsonb '\n> >> {\n> >> \"arr\": [\n> >> {\"arr\": [2,3]}\n> >> , {\"arr\": [4,5]}\n> >> ]\n> >> }'\n> >> , '$.arr[*].arr ? (@ <= 3)' returning anyarray WITH WRAPPER)\n> --crash\n> >> ;\n> >> server closed the connection unexpectedly\n> >> This probably means the server terminated abnormally\n> >> before or while processing the request.\n> >> connection to server was lost\n> >\n> > Thanks for the report.\n> >\n> > Attached updated version fixes the crash, but you get an error as is\n> > to be expected:\n> >\n> > select json_query(jsonb '\n> > {\n> > \"arr\": [\n> > {\"arr\": [2,3]}\n> > , {\"arr\": [4,5]}\n> > ]\n> > }'\n> > , '$.arr[*].arr ? (@ <= 3)' returning anyarray WITH WRAPPER);\n> > ERROR: cannot accept a value of type anyarray\n> >\n> > unlike when using int[]:\n> >\n> > select json_query(jsonb '\n> > {\n> > \"arr\": [\n> > {\"arr\": [2,3]}\n> > , {\"arr\": [4,5]}\n> > ]\n> > }'\n> > , '$.arr[*].arr ? (@ <= 3)' returning int[] WITH WRAPPER);\n> > json_query\n> > ------------\n> > {2,3}\n> > (1 row)\n> >\n>\n> Thanks, Amit. Alas, there are more: for 'anyarray' I thought I'd\n> substitute 'interval', 'int4range', 'int8range', and sure enough they\n> all give similar crashes. Patched with v15:\n>\n> psql -qX -e << SQL\n> select json_query(jsonb'{\"a\":[{\"a\":[2,3]},{\"a\":[4,5]}]}',\n> '$.a[*].a?(@<=3)'returning int[] with wrapper --ok\n> );\n>\n> select json_query(jsonb'{\"a\": [{\"a\": [2,3]}, {\"a\": [4,5]}]}',\n> '$.a[*].a?(@<=3)'returning interval with wrapper --crash\n> --'$.a[*].a?(@<=3)'returning int4range with wrapper --crash\n> --'$.a[*].a?(@<=3)'returning int8range with wrapper --crash\n> --'$.a[*].a?(@<=3)'returning numeric[] with wrapper --{2,3} =ok\n> --'$.a[*].a?(@<=3)'returning anyarray with wrapper --fixed\n> --'$.a[*].a?(@<=3)'returning anyarray --null =ok\n> --'$.a[*].a?(@<=3)'returning int --null =ok\n> --'$.a[*].a?(@<=3)'returning int with wrapper --error =ok\n> --'$.a[*].a?(@<=3)'returning int[] with wrapper -- {2,3} =ok\n> );\n> SQL\n> => server closed the connection unexpectedly, etc\n>\n> Because those first three tries gave a crash (*all three*), I'm a bit\n> worried there may be many more.\n>\n> I am sorry to be bothering you with these somewhat idiotic SQL\n> statements but I suppose somehow it needs to be made more solid.\n\n\nNo, thanks for your testing. I’ll look into these.\n\n>\n\nHi Erik,On Mon, Sep 18, 2023 at 19:09 Erik Rijkers <[email protected]> wrote:Op 9/18/23 om 05:15 schreef Amit Langote:\n> On Sun, Sep 17, 2023 at 3:34 PM Erik Rijkers <[email protected]> wrote:\n>> Op 9/14/23 om 10:14 schreef Amit Langote:\n>>>\n>>>\n>>\n>> Hi Amit,\n>>\n>> Just now I built a v14-patched server and I found this crash:\n>>\n>> select json_query(jsonb '\n>> {\n>> \"arr\": [\n>> {\"arr\": [2,3]}\n>> , {\"arr\": [4,5]}\n>> ]\n>> }'\n>> , '$.arr[*].arr ? 
(@ <= 3)' returning anyarray WITH WRAPPER) --crash\n>> ;\n>> server closed the connection unexpectedly\n>> This probably means the server terminated abnormally\n>> before or while processing the request.\n>> connection to server was lost\n> \n> Thanks for the report.\n> \n> Attached updated version fixes the crash, but you get an error as is\n> to be expected:\n> \n> select json_query(jsonb '\n> {\n> \"arr\": [\n> {\"arr\": [2,3]}\n> , {\"arr\": [4,5]}\n> ]\n> }'\n> , '$.arr[*].arr ? (@ <= 3)' returning anyarray WITH WRAPPER);\n> ERROR: cannot accept a value of type anyarray\n> \n> unlike when using int[]:\n> \n> select json_query(jsonb '\n> {\n> \"arr\": [\n> {\"arr\": [2,3]}\n> , {\"arr\": [4,5]}\n> ]\n> }'\n> , '$.arr[*].arr ? (@ <= 3)' returning int[] WITH WRAPPER);\n> json_query\n> ------------\n> {2,3}\n> (1 row)\n> \n\nThanks, Amit. Alas, there are more: for 'anyarray' I thought I'd \nsubstitute 'interval', 'int4range', 'int8range', and sure enough they \nall give similar crashes. Patched with v15:\n\npsql -qX -e << SQL\nselect json_query(jsonb'{\"a\":[{\"a\":[2,3]},{\"a\":[4,5]}]}',\n '$.a[*].a?(@<=3)'returning int[] with wrapper --ok\n);\n\nselect json_query(jsonb'{\"a\": [{\"a\": [2,3]}, {\"a\": [4,5]}]}',\n '$.a[*].a?(@<=3)'returning interval with wrapper --crash\n--'$.a[*].a?(@<=3)'returning int4range with wrapper --crash\n--'$.a[*].a?(@<=3)'returning int8range with wrapper --crash\n--'$.a[*].a?(@<=3)'returning numeric[] with wrapper --{2,3} =ok\n--'$.a[*].a?(@<=3)'returning anyarray with wrapper --fixed\n--'$.a[*].a?(@<=3)'returning anyarray --null =ok\n--'$.a[*].a?(@<=3)'returning int --null =ok\n--'$.a[*].a?(@<=3)'returning int with wrapper --error =ok\n--'$.a[*].a?(@<=3)'returning int[] with wrapper -- {2,3} =ok\n);\nSQL\n=> server closed the connection unexpectedly, etc\n\nBecause those first three tries gave a crash (*all three*), I'm a bit \nworried there may be many more.\n\nI am sorry to be bothering you with these somewhat idiotic SQL \nstatements but I suppose somehow it needs to be made more solid.No, thanks for your testing. I’ll look into these.",
"msg_date": "Mon, 18 Sep 2023 19:20:00 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "Op 9/18/23 om 12:20 schreef Amit Langote:\n> Hi Erik,\n> \n>> I am sorry to be bothering you with these somewhat idiotic SQL\n>> statements but I suppose somehow it needs to be made more solid.\n> \n\nFor 60 datatypes, I ran this statement:\n\nselect json_query(jsonb'{\"a\":[{\"a\":[2,3]},{\"a\":[4,5]}]}',\n '$.a[*].a?(@<=3)'returning ${datatype} with wrapper\n);\n\nagainst a 17devel server (a0a5) with json v15 patches and caught the \noutput, incl. 30+ crashes, in the attached .txt. I hope that's useful.\n\n\nErik",
"msg_date": "Mon, 18 Sep 2023 13:14:52 +0200",
"msg_from": "Erik Rijkers <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "Op 9/18/23 om 13:14 schreef Erik Rijkers:\n> Op 9/18/23 om 12:20 schreef Amit Langote:\n>> Hi Erik,\n>>\n>>> I am sorry to be bothering you with these somewhat idiotic SQL\n>>> statements but I suppose somehow it needs to be made more solid.\n>>\n> \n> For 60 datatypes, I ran this statement:\n> \n> select json_query(jsonb'{\"a\":[{\"a\":[2,3]},{\"a\":[4,5]}]}',\n> '$.a[*].a?(@<=3)'returning ${datatype} with wrapper\n> );\n>\n> against a 17devel server (a0a5) with json v15 patches and caught the \n> output, incl. 30+ crashes, in the attached .txt. I hope that's useful.\n> \n\nand FYI: None of these crashes occur when I leave off the 'WITH WRAPPER' \nclause.\n\n> \n> Erik\n> \n\n\n",
"msg_date": "Mon, 18 Sep 2023 13:53:55 +0200",
"msg_from": "Erik Rijkers <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "0001: I wonder why you used Node for the ErrorSaveContext pointer\ninstead of the specific struct you want. I propose the attached, for\nsome extra type-safety. Or did you have a reason to do it that way?\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"How amazing is that? I call it a night and come back to find that a bug has\nbeen identified and patched while I sleep.\" (Robert Davidson)\n http://archives.postgresql.org/pgsql-sql/2006-03/msg00378.php",
"msg_date": "Tue, 19 Sep 2023 12:18:49 +0200",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "On Mon, Sep 18, 2023 at 7:51 PM Erik Rijkers <[email protected]> wrote:\n>\n> and FYI: None of these crashes occur when I leave off the 'WITH WRAPPER'\n> clause.\n>\n> >\n> > Erik\n> >\n\nif specify with wrapper, then default behavior is keep quotes, so\njexpr->omit_quotes will be false, which make val_string NULL.\nin ExecEvalJsonExprCoercion: InputFunctionCallSafe, val_string is\nNULL, flinfo->fn_strict is true, it will return: *op->resvalue =\n(Datum) 0. but at the same time *op->resnull is still false!\n\nif not specify with wrapper, then JsonPathQuery will return NULL.\n(because after apply the path_expression, cannot multiple SQL/JSON\nitems)\n\nselect json_query(jsonb'{\"a\":[{\"a\":3},{\"a\":[4,5]}]}','$.a[*].a?(@<=3)'\n returning int4range);\nalso make server crash, because default is KEEP QUOTES, so in\nExecEvalJsonExprCoercion jexpr->omit_quotes will be false.\nval_string will be NULL again as mentioned above.\n\nanother funny case:\ncreate domain domain_int4range int4range;\nselect json_query(jsonb'{\"a\":[{\"a\":[2,3]},{\"a\":[4,5]}]}','$.a[*].a?(@<=3)'\n returning domain_int4range with wrapper);\n\nshould I expect it to return [2,4) ?\n -------------------\nhttps://www.postgresql.org/docs/current/extend-type-system.html#EXTEND-TYPES-POLYMORPHIC\n>> When the return value of a function is declared as a polymorphic type, there must be at least one argument position that is also\n>> polymorphic, and the actual data type(s) supplied for the polymorphic arguments determine the actual result type for that call.\n\nselect json_query(jsonb'{\"a\":[{\"a\":[2,3]},{\"a\":[4,5]}]}','$.a[*].a?(@<=3)'\nreturning anyrange);\nshould fail. Now it returns NULL. Maybe we can validate it in\ntransformJsonFuncExpr?\n-------------------\n\n\n",
"msg_date": "Tue, 19 Sep 2023 18:37:45 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "On Tue, Sep 19, 2023 at 7:18 PM Alvaro Herrera <[email protected]> wrote:\n> 0001: I wonder why you used Node for the ErrorSaveContext pointer\n> instead of the specific struct you want. I propose the attached, for\n> some extra type-safety. Or did you have a reason to do it that way?\n\nNo reason other than that most other headers use Node. I agree that\nmaking an exception for this patch might be better, so I've\nincorporated your patch into 0001.\n\nI've also updated the query functions patch (0003) to address the\ncrashing bug reported by Erik. Essentially, I made the coercion step\nof JSON_QUERY to always use json_populate_type() when WITH WRAPPER is\nused. You might get funny errors with ERROR OR ERROR for many types\nwhen used in RETURNING, but at least there should no longer be any\ncrashes.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com",
"msg_date": "Tue, 19 Sep 2023 20:56:43 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "On Tue, Sep 19, 2023 at 7:37 PM jian he <[email protected]> wrote:\n> On Mon, Sep 18, 2023 at 7:51 PM Erik Rijkers <[email protected]> wrote:\n> >\n> > and FYI: None of these crashes occur when I leave off the 'WITH WRAPPER'\n> > clause.\n> >\n> > >\n> > > Erik\n> > >\n>\n> if specify with wrapper, then default behavior is keep quotes, so\n> jexpr->omit_quotes will be false, which make val_string NULL.\n> in ExecEvalJsonExprCoercion: InputFunctionCallSafe, val_string is\n> NULL, flinfo->fn_strict is true, it will return: *op->resvalue =\n> (Datum) 0. but at the same time *op->resnull is still false!\n>\n> if not specify with wrapper, then JsonPathQuery will return NULL.\n> (because after apply the path_expression, cannot multiple SQL/JSON\n> items)\n>\n> select json_query(jsonb'{\"a\":[{\"a\":3},{\"a\":[4,5]}]}','$.a[*].a?(@<=3)'\n> returning int4range);\n> also make server crash, because default is KEEP QUOTES, so in\n> ExecEvalJsonExprCoercion jexpr->omit_quotes will be false.\n> val_string will be NULL again as mentioned above.\n\nThat's right.\n\n> another funny case:\n> create domain domain_int4range int4range;\n> select json_query(jsonb'{\"a\":[{\"a\":[2,3]},{\"a\":[4,5]}]}','$.a[*].a?(@<=3)'\n> returning domain_int4range with wrapper);\n>\n> should I expect it to return [2,4) ?\n\nThis is what you'll get with v16 that I just posted.\n\n> -------------------\n> https://www.postgresql.org/docs/current/extend-type-system.html#EXTEND-TYPES-POLYMORPHIC\n> >> When the return value of a function is declared as a polymorphic type, there must be at least one argument position that is also\n> >> polymorphic, and the actual data type(s) supplied for the polymorphic arguments determine the actual result type for that call.\n>\n> select json_query(jsonb'{\"a\":[{\"a\":[2,3]},{\"a\":[4,5]}]}','$.a[*].a?(@<=3)'\n> returning anyrange);\n> should fail. Now it returns NULL. Maybe we can validate it in\n> transformJsonFuncExpr?\n> -------------------\n\nI'm not sure whether we should make the parser complain about the\nweird types being specified in RETURNING. The NULL you get in the\nabove example is because of the following error:\n\nselect json_query(jsonb'{\"a\":[{\"a\":[2,3]},{\"a\":[4,5]}]}','$.a[*].a?(@<=3)'\nreturning anyrange error on error);\nERROR: JSON path expression in JSON_QUERY should return singleton\nitem without wrapper\nHINT: Use WITH WRAPPER clause to wrap SQL/JSON item sequence into array.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 19 Sep 2023 21:00:02 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "Op 9/19/23 om 13:56 schreef Amit Langote:\n> On Tue, Sep 19, 2023 at 7:18 PM Alvaro Herrera <[email protected]> wrote:\n>> 0001: I wonder why you used Node for the ErrorSaveContext pointer\n>> instead of the specific struct you want. I propose the attached, for\n>> some extra type-safety. Or did you have a reason to do it that way?\n> \n> No reason other than that most other headers use Node. I agree that\n> making an exception for this patch might be better, so I've\n> incorporated your patch into 0001.\n> \n> I've also updated the query functions patch (0003) to address the\n> crashing bug reported by Erik. Essentially, I made the coercion step\n> of JSON_QUERY to always use json_populate_type() when WITH WRAPPER is\n> used. You might get funny errors with ERROR OR ERROR for many types\n> when used in RETURNING, but at least there should no longer be any\n> crashes.\n> \n\nIndeed, with v16 those crashes are gone.\n\nSome lesser evil: gcc 13.2.0 gave some warnings, slightly different in \nassert vs non-assert build.\n\n--- assert build:\n\n-- [2023.09.19 14:06:35 json_table2/0] make core: make --quiet -j 4\nIn file included from ../../../src/include/postgres.h:45,\n from parse_expr.c:16:\nIn function ‘transformJsonFuncExpr’,\n inlined from ‘transformExprRecurse’ at parse_expr.c:374:13:\nparse_expr.c:4355:22: warning: ‘jsexpr’ may be used uninitialized \n[-Wmaybe-uninitialized]\n 4355 | Assert(jsexpr->formatted_expr);\n../../../src/include/c.h:864:23: note: in definition of macro ‘Assert’\n 864 | if (!(condition)) \\\n | ^~~~~~~~~\nparse_expr.c: In function ‘transformExprRecurse’:\nparse_expr.c:4212:21: note: ‘jsexpr’ was declared here\n 4212 | JsonExpr *jsexpr;\n | ^~~~~~\n\n\n--- non-assert build:\n\n-- [2023.09.19 14:11:03 json_table2/1] make core: make --quiet -j 4\nIn function ‘transformJsonFuncExpr’,\n inlined from ‘transformExprRecurse’ at parse_expr.c:374:13:\nparse_expr.c:4356:28: warning: ‘jsexpr’ may be used uninitialized \n[-Wmaybe-uninitialized]\n 4356 | if (exprType(jsexpr->formatted_expr) != JSONBOID)\n | ~~~~~~^~~~~~~~~~~~~~~~\nparse_expr.c: In function ‘transformExprRecurse’:\nparse_expr.c:4212:21: note: ‘jsexpr’ was declared here\n 4212 | JsonExpr *jsexpr;\n | ^~~~~~\n\n\nThank you,\n\nErik\n\n\n\n",
"msg_date": "Tue, 19 Sep 2023 14:34:04 +0200",
"msg_from": "Erik Rijkers <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "On Tue, Sep 19, 2023 at 9:31 PM Erik Rijkers <[email protected]> wrote:\n> Op 9/19/23 om 13:56 schreef Amit Langote:\n> > On Tue, Sep 19, 2023 at 7:18 PM Alvaro Herrera <[email protected]> wrote:\n> >> 0001: I wonder why you used Node for the ErrorSaveContext pointer\n> >> instead of the specific struct you want. I propose the attached, for\n> >> some extra type-safety. Or did you have a reason to do it that way?\n> >\n> > No reason other than that most other headers use Node. I agree that\n> > making an exception for this patch might be better, so I've\n> > incorporated your patch into 0001.\n> >\n> > I've also updated the query functions patch (0003) to address the\n> > crashing bug reported by Erik. Essentially, I made the coercion step\n> > of JSON_QUERY to always use json_populate_type() when WITH WRAPPER is\n> > used. You might get funny errors with ERROR OR ERROR for many types\n> > when used in RETURNING, but at least there should no longer be any\n> > crashes.\n> >\n>\n> Indeed, with v16 those crashes are gone.\n>\n> Some lesser evil: gcc 13.2.0 gave some warnings, slightly different in\n> assert vs non-assert build.\n>\n> --- assert build:\n>\n> -- [2023.09.19 14:06:35 json_table2/0] make core: make --quiet -j 4\n> In file included from ../../../src/include/postgres.h:45,\n> from parse_expr.c:16:\n> In function ‘transformJsonFuncExpr’,\n> inlined from ‘transformExprRecurse’ at parse_expr.c:374:13:\n> parse_expr.c:4355:22: warning: ‘jsexpr’ may be used uninitialized\n> [-Wmaybe-uninitialized]\n> 4355 | Assert(jsexpr->formatted_expr);\n> ../../../src/include/c.h:864:23: note: in definition of macro ‘Assert’\n> 864 | if (!(condition)) \\\n> | ^~~~~~~~~\n> parse_expr.c: In function ‘transformExprRecurse’:\n> parse_expr.c:4212:21: note: ‘jsexpr’ was declared here\n> 4212 | JsonExpr *jsexpr;\n> | ^~~~~~\n>\n>\n> --- non-assert build:\n>\n> -- [2023.09.19 14:11:03 json_table2/1] make core: make --quiet -j 4\n> In function ‘transformJsonFuncExpr’,\n> inlined from ‘transformExprRecurse’ at parse_expr.c:374:13:\n> parse_expr.c:4356:28: warning: ‘jsexpr’ may be used uninitialized\n> [-Wmaybe-uninitialized]\n> 4356 | if (exprType(jsexpr->formatted_expr) != JSONBOID)\n> | ~~~~~~^~~~~~~~~~~~~~~~\n> parse_expr.c: In function ‘transformExprRecurse’:\n> parse_expr.c:4212:21: note: ‘jsexpr’ was declared here\n> 4212 | JsonExpr *jsexpr;\n> | ^~~~~~\n\nThanks, fixed.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com",
"msg_date": "Tue, 19 Sep 2023 21:51:39 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "On Tue, Sep 19, 2023 at 9:00 PM Amit Langote <[email protected]> wrote:\n> On Tue, Sep 19, 2023 at 7:37 PM jian he <[email protected]> wrote:\n> > -------------------\n> > https://www.postgresql.org/docs/current/extend-type-system.html#EXTEND-TYPES-POLYMORPHIC\n> > >> When the return value of a function is declared as a polymorphic type, there must be at least one argument position that is also\n> > >> polymorphic, and the actual data type(s) supplied for the polymorphic arguments determine the actual result type for that call.\n> >\n> > select json_query(jsonb'{\"a\":[{\"a\":[2,3]},{\"a\":[4,5]}]}','$.a[*].a?(@<=3)'\n> > returning anyrange);\n> > should fail. Now it returns NULL. Maybe we can validate it in\n> > transformJsonFuncExpr?\n> > -------------------\n>\n> I'm not sure whether we should make the parser complain about the\n> weird types being specified in RETURNING.\n\nSleeping over this, maybe adding the following to\ntransformJsonOutput() does make sense?\n\n+ if (get_typtype(ret->typid) == TYPTYPE_PSEUDO)\n+ ereport(ERROR,\n+ errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n+ errmsg(\"returning pseudo-types is not supported in\nSQL/JSON functions\"));\n+\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 20 Sep 2023 12:07:23 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: remaining sql/json patches"
},
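If the check sketched above were adopted, the earlier anyrange example would be rejected at parse time instead of quietly returning NULL; roughly (hypothetical output, reusing the error text from the snippet above):

    SELECT json_query(jsonb '{"a": [1, 2]}', '$.a' RETURNING anyrange);
    -- ERROR:  returning pseudo-types is not supported in SQL/JSON functions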
{
"msg_contents": "On 2023-09-19 Tu 23:07, Amit Langote wrote:\n> On Tue, Sep 19, 2023 at 9:00 PM Amit Langote<[email protected]> wrote:\n>> On Tue, Sep 19, 2023 at 7:37 PM jian he<[email protected]> wrote:\n>>> -------------------\n>>> https://www.postgresql.org/docs/current/extend-type-system.html#EXTEND-TYPES-POLYMORPHIC\n>>>>> When the return value of a function is declared as a polymorphic type, there must be at least one argument position that is also\n>>>>> polymorphic, and the actual data type(s) supplied for the polymorphic arguments determine the actual result type for that call.\n>>> select json_query(jsonb'{\"a\":[{\"a\":[2,3]},{\"a\":[4,5]}]}','$.a[*].a?(@<=3)'\n>>> returning anyrange);\n>>> should fail. Now it returns NULL. Maybe we can validate it in\n>>> transformJsonFuncExpr?\n>>> -------------------\n>> I'm not sure whether we should make the parser complain about the\n>> weird types being specified in RETURNING.\n> Sleeping over this, maybe adding the following to\n> transformJsonOutput() does make sense?\n>\n> + if (get_typtype(ret->typid) == TYPTYPE_PSEUDO)\n> + ereport(ERROR,\n> + errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n> + errmsg(\"returning pseudo-types is not supported in\n> SQL/JSON functions\"));\n> +\n>\n\nSeems reasonable.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2023-09-19 Tu 23:07, Amit Langote\n wrote:\n\n\nOn Tue, Sep 19, 2023 at 9:00 PM Amit Langote <[email protected]> wrote:\n\n\nOn Tue, Sep 19, 2023 at 7:37 PM jian he <[email protected]> wrote:\n\n\n -------------------\nhttps://www.postgresql.org/docs/current/extend-type-system.html#EXTEND-TYPES-POLYMORPHIC\n\n\n\n When the return value of a function is declared as a polymorphic type, there must be at least one argument position that is also\npolymorphic, and the actual data type(s) supplied for the polymorphic arguments determine the actual result type for that call.\n\n\n\n\nselect json_query(jsonb'{\"a\":[{\"a\":[2,3]},{\"a\":[4,5]}]}','$.a[*].a?(@<=3)'\nreturning anyrange);\nshould fail. Now it returns NULL. Maybe we can validate it in\ntransformJsonFuncExpr?\n-------------------\n\n\n\nI'm not sure whether we should make the parser complain about the\nweird types being specified in RETURNING.\n\n\n\nSleeping over this, maybe adding the following to\ntransformJsonOutput() does make sense?\n\n+ if (get_typtype(ret->typid) == TYPTYPE_PSEUDO)\n+ ereport(ERROR,\n+ errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n+ errmsg(\"returning pseudo-types is not supported in\nSQL/JSON functions\"));\n+\n\n\n\n\n\nSeems reasonable.\n\n\ncheers\n\n\nandrew\n\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Wed, 20 Sep 2023 15:14:54 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "On Thu, Sep 21, 2023 at 4:14 AM Andrew Dunstan <[email protected]> wrote:\n> On 2023-09-19 Tu 23:07, Amit Langote wrote:\n> On Tue, Sep 19, 2023 at 9:00 PM Amit Langote <[email protected]> wrote:\n> On Tue, Sep 19, 2023 at 7:37 PM jian he <[email protected]> wrote:\n>\n> -------------------\n> https://www.postgresql.org/docs/current/extend-type-system.html#EXTEND-TYPES-POLYMORPHIC\n>\n> When the return value of a function is declared as a polymorphic type, there must be at least one argument position that is also\n> polymorphic, and the actual data type(s) supplied for the polymorphic arguments determine the actual result type for that call.\n>\n> select json_query(jsonb'{\"a\":[{\"a\":[2,3]},{\"a\":[4,5]}]}','$.a[*].a?(@<=3)'\n> returning anyrange);\n> should fail. Now it returns NULL. Maybe we can validate it in\n> transformJsonFuncExpr?\n> -------------------\n>\n> I'm not sure whether we should make the parser complain about the\n> weird types being specified in RETURNING.\n>\n> Sleeping over this, maybe adding the following to\n> transformJsonOutput() does make sense?\n>\n> + if (get_typtype(ret->typid) == TYPTYPE_PSEUDO)\n> + ereport(ERROR,\n> + errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n> + errmsg(\"returning pseudo-types is not supported in\n> SQL/JSON functions\"));\n> +\n>\n> Seems reasonable.\n\nOK, thanks for confirming.\n\nHere is a set where I've included the above change in 0003.\n\nI had some doubts about the following bit in 0001 but I've come to\nknow through some googling that LLVM handles this alright:\n\n+/*\n+ * Emit constant oid.\n+ */\n+static inline LLVMValueRef\n+l_oid_const(Oid i)\n+{\n+ return LLVMConstInt(LLVMInt32Type(), i, false);\n+}\n+\n\nThe doubt I had was whether the Oid that l_oid_const() takes, which is\nan unsigned int, might overflow the integer that LLVM provides through\nLLVMConstInt() here. Apparently, LLVM IR always uses the full 32-bit\nwidth to store the integer value, so there's no worry of the overflow\nif I'm understanding this correctly.\n\nPatches 0001 and 0002 look ready to me to go in. Please let me know\nif anyone thinks otherwise.\n\n--\nThanks, Amit Langote\nEDB: http://www.enterprisedb.com",
"msg_date": "Thu, 21 Sep 2023 17:32:01 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "I keep looking at 0001, and in the struct definition I think putting the\nescontext at the bottom is not great, because there's a comment a few\nlines above that says \"XXX: following fields only needed during\n\"compilation\"), could be thrown away afterwards\". This comment is not\nstrictly true, because innermost_caseval is actually used by\narray_map(); yet it seems that ->escontext should appear before that\ncomment.\n\nHowever, if you put it before steps_len, it would push members steps_len\nand steps_alloc beyond the struct's first cache line(*). If those\nstruct members are critical for expression init performance, then maybe\nit's not a good tradeoff. I don't know if this was struct laid out\ncarefully with that consideration in mind or not.\n\nAlso, ->escontext's own comment in ExprState seems to be saying too much\nand not saying enough. I would reword it as \"For expression nodes that\nsupport soft errors. NULL if caller wants them thrown instead\". The\nshortest I could make so that it fits in a single is \"For nodes that can\nerror softly. NULL if caller wants them thrown\", or \"For\nsoft-error-enabled nodes. NULL if caller wants errors thrown\". Not\nsure if those are good enough, or just make the comment the whole four\nlines ...\n\n\n(*) This is what pahole says about the struct as 0001 would put it:\n\nstruct ExprState {\n NodeTag type; /* 0 4 */\n uint8 flags; /* 4 1 */\n _Bool resnull; /* 5 1 */\n\n /* XXX 2 bytes hole, try to pack */\n\n Datum resvalue; /* 8 8 */\n TupleTableSlot * resultslot; /* 16 8 */\n struct ExprEvalStep * steps; /* 24 8 */\n ExprStateEvalFunc evalfunc; /* 32 8 */\n Expr * expr; /* 40 8 */\n void * evalfunc_private; /* 48 8 */\n int steps_len; /* 56 4 */\n int steps_alloc; /* 60 4 */\n /* --- cacheline 1 boundary (64 bytes) --- */\n struct PlanState * parent; /* 64 8 */\n ParamListInfo ext_params; /* 72 8 */\n Datum * innermost_caseval; /* 80 8 */\n _Bool * innermost_casenull; /* 88 8 */\n Datum * innermost_domainval; /* 96 8 */\n _Bool * innermost_domainnull; /* 104 8 */\n ErrorSaveContext * escontext; /* 112 8 */\n\n /* size: 120, cachelines: 2, members: 18 */\n /* sum members: 118, holes: 1, sum holes: 2 */\n /* last cacheline: 56 bytes */\n};\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"Nunca se desea ardientemente lo que solo se desea por razón\" (F. Alexandre)\n\n\n",
"msg_date": "Thu, 21 Sep 2023 10:57:54 +0200",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
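For anyone wanting to reproduce the layout check above, the pahole output can be regenerated from a debug-enabled build; something along these lines (the object file path is only an example and depends on the build tree):

    $ pahole -C ExprState src/backend/executor/execExpr.o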
{
"msg_contents": "On Thu, Sep 21, 2023 at 5:58 PM Alvaro Herrera <[email protected]> wrote:\n> I keep looking at 0001, and in the struct definition I think putting the\n> escontext at the bottom is not great, because there's a comment a few\n> lines above that says \"XXX: following fields only needed during\n> \"compilation\"), could be thrown away afterwards\". This comment is not\n> strictly true, because innermost_caseval is actually used by\n> array_map(); yet it seems that ->escontext should appear before that\n> comment.\n\nHmm. Actually, we can make it so that *escontext* is only needed\nduring ExecInitExprRec() and never after that. I've done that in the\nattached updated patch, where you can see that ExprState.escontext is\nonly ever touched in execExpr.c. Also, I noticed that I had\nforgotten to extract one more expression node type's conversion to use\nsoft errors from the main patch (0003). That is CoerceToDomain, which\nI've now moved into 0001.\n\n> Also, ->escontext's own comment in ExprState seems to be saying too much\n> and not saying enough. I would reword it as \"For expression nodes that\n> support soft errors. NULL if caller wants them thrown instead\". The\n> shortest I could make so that it fits in a single is \"For nodes that can\n> error softly. NULL if caller wants them thrown\", or \"For\n> soft-error-enabled nodes. NULL if caller wants errors thrown\". Not\n> sure if those are good enough, or just make the comment the whole four\n> lines ...\n\nHow about:\n\n+ /*\n+ * For expression nodes that support soft errors. Set to NULL before\n+ * calling ExecInitExprRec() if the caller wants errors thrown.\n+ */\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com",
"msg_date": "Thu, 21 Sep 2023 21:41:32 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "On Thu, Sep 21, 2023 at 9:41 PM Amit Langote <[email protected]> wrote:\n> On Thu, Sep 21, 2023 at 5:58 PM Alvaro Herrera <[email protected]> wrote:\n> > I keep looking at 0001, and in the struct definition I think putting the\n> > escontext at the bottom is not great, because there's a comment a few\n> > lines above that says \"XXX: following fields only needed during\n> > \"compilation\"), could be thrown away afterwards\". This comment is not\n> > strictly true, because innermost_caseval is actually used by\n> > array_map(); yet it seems that ->escontext should appear before that\n> > comment.\n>\n> Hmm. Actually, we can make it so that *escontext* is only needed\n> during ExecInitExprRec() and never after that. I've done that in the\n> attached updated patch, where you can see that ExprState.escontext is\n> only ever touched in execExpr.c. Also, I noticed that I had\n> forgotten to extract one more expression node type's conversion to use\n> soft errors from the main patch (0003). That is CoerceToDomain, which\n> I've now moved into 0001.\n>\n> > Also, ->escontext's own comment in ExprState seems to be saying too much\n> > and not saying enough. I would reword it as \"For expression nodes that\n> > support soft errors. NULL if caller wants them thrown instead\". The\n> > shortest I could make so that it fits in a single is \"For nodes that can\n> > error softly. NULL if caller wants them thrown\", or \"For\n> > soft-error-enabled nodes. NULL if caller wants errors thrown\". Not\n> > sure if those are good enough, or just make the comment the whole four\n> > lines ...\n>\n> How about:\n>\n> + /*\n> + * For expression nodes that support soft errors. Set to NULL before\n> + * calling ExecInitExprRec() if the caller wants errors thrown.\n> + */\n\nMaybe the following is better:\n\n+ /*\n+ * For expression nodes that support soft errors. Should be set to NULL\n+ * before calling ExecInitExprRec() if the caller wants errors thrown.\n+ */\n\n...as in the attached.\n\nAlvaro, do you think your concern regarding escontext not being in the\nright spot in the ExprState struct is addressed? It doesn't seem very\ncritical to me to place it in the struct's 1st cacheline, because\nescontext is not accessed in performance critical paths such as during\nexpression evaluation, especially with the latest version. (It would\nget accessed during evaluation with previous versions.)\n\nIf so, I'd like to move ahead with committing it. 0002 seems almost there too.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com",
"msg_date": "Wed, 27 Sep 2023 22:55:04 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "Op 9/27/23 om 15:55 schreef Amit Langote:\n> On Thu, Sep 21, 2023 at 9:41 PM Amit Langote <[email protected]> wrote:\n\nI don't knoe, maybe it's worthwhile to fix this (admittedly trivial) \nfail in the tests? It's been there for a while.\n\nThanks,\n\nErik",
"msg_date": "Wed, 27 Sep 2023 16:23:39 +0200",
"msg_from": "Erik Rijkers <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "On Wed, Sep 27, 2023 at 11:21 PM Erik Rijkers <[email protected]> wrote:\n> Op 9/27/23 om 15:55 schreef Amit Langote:\n> > On Thu, Sep 21, 2023 at 9:41 PM Amit Langote <[email protected]> wrote:\n>\n> I don't knoe, maybe it's worthwhile to fix this (admittedly trivial)\n> fail in the tests? It's been there for a while.\n\nThanks, fixed.\n\nPatches also needed to be rebased over some llvm changes that got in yesterday.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com",
"msg_date": "Thu, 28 Sep 2023 13:35:16 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "On 2023-Sep-27, Amit Langote wrote:\n\n> Maybe the following is better:\n> \n> + /*\n> + * For expression nodes that support soft errors. Should be set to NULL\n> + * before calling ExecInitExprRec() if the caller wants errors thrown.\n> + */\n> \n> ...as in the attached.\n\nThat's good.\n\n> Alvaro, do you think your concern regarding escontext not being in the\n> right spot in the ExprState struct is addressed? It doesn't seem very\n> critical to me to place it in the struct's 1st cacheline, because\n> escontext is not accessed in performance critical paths such as during\n> expression evaluation, especially with the latest version. (It would\n> get accessed during evaluation with previous versions.)\n> \n> If so, I'd like to move ahead with committing it.\n\nYeah, looks OK to me in v21.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Thu, 28 Sep 2023 13:04:54 +0200",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "On Thu, Sep 28, 2023 at 8:04 PM Alvaro Herrera <[email protected]> wrote:\n> On 2023-Sep-27, Amit Langote wrote:\n> > Maybe the following is better:\n> >\n> > + /*\n> > + * For expression nodes that support soft errors. Should be set to NULL\n> > + * before calling ExecInitExprRec() if the caller wants errors thrown.\n> > + */\n> >\n> > ...as in the attached.\n>\n> That's good.\n>\n> > Alvaro, do you think your concern regarding escontext not being in the\n> > right spot in the ExprState struct is addressed? It doesn't seem very\n> > critical to me to place it in the struct's 1st cacheline, because\n> > escontext is not accessed in performance critical paths such as during\n> > expression evaluation, especially with the latest version. (It would\n> > get accessed during evaluation with previous versions.)\n> >\n> > If so, I'd like to move ahead with committing it.\n>\n> Yeah, looks OK to me in v21.\n\nThanks. I will push the attached 0001 shortly.\n\nAlso, I've updated 0002's commit message to mention why it only\nchanges the functions local to jsonfuncs.c to add the Node *escontext\nparameter, but not any external ones that may be invoked, such as,\nmakeMdArrayResult(). The assumption behind that is that jsonfuncs.c\nfunctions validate any data that they pass to such external functions,\nso no *suppressible* errors should occur.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com",
"msg_date": "Fri, 29 Sep 2023 13:57:46 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "On Fri, Sep 29, 2023 at 1:57 PM Amit Langote <[email protected]> wrote:\n> On Thu, Sep 28, 2023 at 8:04 PM Alvaro Herrera <[email protected]> wrote:\n> > On 2023-Sep-27, Amit Langote wrote:\n> > > Maybe the following is better:\n> > >\n> > > + /*\n> > > + * For expression nodes that support soft errors. Should be set to NULL\n> > > + * before calling ExecInitExprRec() if the caller wants errors thrown.\n> > > + */\n> > >\n> > > ...as in the attached.\n> >\n> > That's good.\n> >\n> > > Alvaro, do you think your concern regarding escontext not being in the\n> > > right spot in the ExprState struct is addressed? It doesn't seem very\n> > > critical to me to place it in the struct's 1st cacheline, because\n> > > escontext is not accessed in performance critical paths such as during\n> > > expression evaluation, especially with the latest version. (It would\n> > > get accessed during evaluation with previous versions.)\n> > >\n> > > If so, I'd like to move ahead with committing it.\n> >\n> > Yeah, looks OK to me in v21.\n>\n> Thanks. I will push the attached 0001 shortly.\n\nPushed this 30 min ago (no email on -committers yet!) and am looking\nat the following llvm crash reported by buildfarm animal pogona [1]:\n\n#0 __pthread_kill_implementation (threadid=<optimized out>,\nsigno=signo@entry=6, no_tid=no_tid@entry=0) at\n./nptl/pthread_kill.c:44\n44 ./nptl/pthread_kill.c: No such file or directory.\n#0 __pthread_kill_implementation (threadid=<optimized out>,\nsigno=signo@entry=6, no_tid=no_tid@entry=0) at\n./nptl/pthread_kill.c:44\n#1 0x00007f5bcebcb15f in __pthread_kill_internal (signo=6,\nthreadid=<optimized out>) at ./nptl/pthread_kill.c:78\n#2 0x00007f5bceb7d472 in __GI_raise (sig=sig@entry=6) at\n../sysdeps/posix/raise.c:26\n#3 0x00007f5bceb674b2 in __GI_abort () at ./stdlib/abort.c:79\n#4 0x00007f5bceb673d5 in __assert_fail_base (fmt=0x7f5bcecdbdc8\n\"%s%s%s:%u: %s%sAssertion `%s' failed.\\\\n%n\",\nassertion=assertion@entry=0x7f5bc1336419 \"(i >= FTy->getNumParams() ||\nFTy->getParamType(i) == Args[i]->getType()) && \\\\\"Calling a function\nwith a bad signature!\\\\\"\", file=file@entry=0x7f5bc1336051\n\"/home/bf/src/llvm-project-5/llvm/lib/IR/Instructions.cpp\",\nline=line@entry=299, function=function@entry=0x7f5bc13362af \"void\nllvm::CallInst::init(llvm::FunctionType *, llvm::Value *,\nArrayRef<llvm::Value *>, ArrayRef<llvm::OperandBundleDef>, const\nllvm::Twine &)\") at ./assert/assert.c:92\n#5 0x00007f5bceb763a2 in __assert_fail (assertion=0x7f5bc1336419 \"(i\n>= FTy->getNumParams() || FTy->getParamType(i) == Args[i]->getType())\n&& \\\\\"Calling a function with a bad signature!\\\\\"\",\nfile=0x7f5bc1336051\n\"/home/bf/src/llvm-project-5/llvm/lib/IR/Instructions.cpp\", line=299,\nfunction=0x7f5bc13362af \"void llvm::CallInst::init(llvm::FunctionType\n*, llvm::Value *, ArrayRef<llvm::Value *>,\nArrayRef<llvm::OperandBundleDef>, const llvm::Twine &)\") at\n./assert/assert.c:101\n#6 0x00007f5bc110f138 in llvm::CallInst::init (this=0x557a91f3e508,\nFTy=0x557a91ed9ae0, Func=0x557a91f8be88, Args=..., Bundles=...,\nNameStr=...) 
at\n/home/bf/src/llvm-project-5/llvm/lib/IR/Instructions.cpp:297\n#7 0x00007f5bc0fa579d in llvm::CallInst::CallInst\n(this=0x557a91f3e508, Ty=0x557a91ed9ae0, Func=0x557a91f8be88,\nArgs=..., Bundles=..., NameStr=..., InsertBefore=0x0) at\n/home/bf/src/llvm-project-5/llvm/include/llvm/IR/Instructions.h:1934\n#8 0x00007f5bc0fa538c in llvm::CallInst::Create (Ty=0x557a91ed9ae0,\nFunc=0x557a91f8be88, Args=..., Bundles=..., NameStr=...,\nInsertBefore=0x0) at\n/home/bf/src/llvm-project-5/llvm/include/llvm/IR/Instructions.h:1444\n#9 0x00007f5bc0fa51f9 in llvm::IRBuilder<llvm::ConstantFolder,\nllvm::IRBuilderDefaultInserter>::CreateCall (this=0x557a91f9c6a0,\nFTy=0x557a91ed9ae0, Callee=0x557a91f8be88, Args=..., Name=...,\nFPMathTag=0x0) at\n/home/bf/src/llvm-project-5/llvm/include/llvm/IR/IRBuilder.h:1669\n#10 0x00007f5bc100edda in llvm::IRBuilder<llvm::ConstantFolder,\nllvm::IRBuilderDefaultInserter>::CreateCall (this=0x557a91f9c6a0,\nCallee=0x557a91f8be88, Args=..., Name=..., FPMathTag=0x0) at\n/home/bf/src/llvm-project-5/llvm/include/llvm/IR/IRBuilder.h:1663\n#11 0x00007f5bc100714e in LLVMBuildCall (B=0x557a91f9c6a0,\nFn=0x557a91f8be88, Args=0x7ffde6fa0b50, NumArgs=6, Name=0x7f5bc30b648c\n\"funccall_iocoerce_in_safe\") at\n/home/bf/src/llvm-project-5/llvm/lib/IR/Core.cpp:2964\n#12 0x00007f5bc30af861 in llvm_compile_expr (state=0x557a91fbeac0) at\n/home/bf/bf-build/pogona/HEAD/pgsql.build/../pgsql/src/backend/jit/llvm/llvmjit_expr.c:1373\n#13 0x0000557a915992db in jit_compile_expr\n(state=state@entry=0x557a91fbeac0) at\n/home/bf/bf-build/pogona/HEAD/pgsql.build/../pgsql/src/backend/jit/jit.c:177\n#14 0x0000557a9123071d in ExecReadyExpr\n(state=state@entry=0x557a91fbeac0) at\n/home/bf/bf-build/pogona/HEAD/pgsql.build/../pgsql/src/backend/executor/execExpr.c:880\n#15 0x0000557a912340d7 in ExecBuildProjectionInfo\n(targetList=0x557a91fa6b58, econtext=<optimized out>, slot=<optimized\nout>, parent=parent@entry=0x557a91f430a8,\ninputDesc=inputDesc@entry=0x0) at\n/home/bf/bf-build/pogona/HEAD/pgsql.build/../pgsql/src/backend/executor/execExpr.c:484\n#16 0x0000557a9124e61e in ExecAssignProjectionInfo\n(planstate=planstate@entry=0x557a91f430a8,\ninputDesc=inputDesc@entry=0x0) at\n/home/bf/bf-build/pogona/HEAD/pgsql.build/../pgsql/src/backend/executor/execUtils.c:547\n#17 0x0000557a91274961 in ExecInitNestLoop\n(node=node@entry=0x557a91f9e5d8, estate=estate@entry=0x557a91f425a0,\neflags=<optimized out>, eflags@entry=33) at\n/home/bf/bf-build/pogona/HEAD/pgsql.build/../pgsql/src/backend/executor/nodeNestloop.c:308\n#18 0x0000557a9124760f in ExecInitNode (node=0x557a91f9e5d8,\nestate=estate@entry=0x557a91f425a0, eflags=eflags@entry=33) at\n/home/bf/bf-build/pogona/HEAD/pgsql.build/../pgsql/src/backend/executor/execProcnode.c:298\n#19 0x0000557a91255d39 in ExecInitAgg (node=node@entry=0x557a91f91540,\nestate=estate@entry=0x557a91f425a0, eflags=eflags@entry=33) at\n/home/bf/bf-build/pogona/HEAD/pgsql.build/../pgsql/src/backend/executor/nodeAgg.c:3306\n#20 0x0000557a912476bf in ExecInitNode (node=0x557a91f91540,\nestate=estate@entry=0x557a91f425a0, eflags=eflags@entry=33) at\n/home/bf/bf-build/pogona/HEAD/pgsql.build/../pgsql/src/backend/executor/execProcnode.c:341\n#21 0x0000557a912770c3 in ExecInitSort\n(node=node@entry=0x557a91f9e850, estate=estate@entry=0x557a91f425a0,\neflags=eflags@entry=33) at\n/home/bf/bf-build/pogona/HEAD/pgsql.build/../pgsql/src/backend/executor/nodeSort.c:265\n#22 0x0000557a91247667 in ExecInitNode\n(node=node@entry=0x557a91f9e850, 
estate=estate@entry=0x557a91f425a0,\neflags=eflags@entry=33) at\n/home/bf/bf-build/pogona/HEAD/pgsql.build/../pgsql/src/backend/executor/execProcnode.c:321\n#23 0x0000557a912402f5 in InitPlan (eflags=33,\nqueryDesc=0x557a91fa6fb8) at\n/home/bf/bf-build/pogona/HEAD/pgsql.build/../pgsql/src/backend/executor/execMain.c:968\n#24 standard_ExecutorStart (queryDesc=queryDesc@entry=0x557a91fa6fb8,\neflags=33, eflags@entry=1) at\n/home/bf/bf-build/pogona/HEAD/pgsql.build/../pgsql/src/backend/executor/execMain.c:266\n#25 0x0000557a912403c9 in ExecutorStart\n(queryDesc=queryDesc@entry=0x557a91fa6fb8, eflags=1) at\n/home/bf/bf-build/pogona/HEAD/pgsql.build/../pgsql/src/backend/executor/execMain.c:145\n#26 0x0000557a911c2153 in ExplainOnePlan\n(plannedstmt=plannedstmt@entry=0x557a91fa6ea8, into=into@entry=0x0,\nes=es@entry=0x557a91f932e8,\nqueryString=queryString@entry=0x557a91dbd650 \"EXPLAIN (COSTS\nOFF)\\\\nSELECT DISTINCT (i || '/' || j)::pg_lsn f\\\\n FROM\ngenerate_series(1, 10) i,\\\\n generate_series(1, 10) j,\\\\n\ngenerate_series(1, 5) k\\\\n WHERE i <= 10 AND j > 0 AND j <= 10\\\\n\nO\"..., params=params@entry=0x0, queryEnv=queryEnv@entry=0x0,\nplanduration=0x7ffde6fa1258, bufusage=0x0) at\n/home/bf/bf-build/pogona/HEAD/pgsql.build/../pgsql/src/backend/commands/explain.c:590\n#27 0x0000557a911c23b2 in ExplainOneQuery (query=<optimized out>,\ncursorOptions=cursorOptions@entry=2048, into=into@entry=0x0,\nes=es@entry=0x557a91f932e8, queryString=0x557a91dbd650 \"EXPLAIN (COSTS\nOFF)\\\\nSELECT DISTINCT (i || '/' || j)::pg_lsn f\\\\n FROM\ngenerate_series(1, 10) i,\\\\n generate_series(1, 10) j,\\\\n\ngenerate_series(1, 5) k\\\\n WHERE i <= 10 AND j > 0 AND j <= 10\\\\n\nO\"..., params=params@entry=0x0, queryEnv=0x0) at\n/home/bf/bf-build/pogona/HEAD/pgsql.build/../pgsql/src/backend/commands/explain.c:419\n#28 0x0000557a911c2ddb in ExplainQuery\n(pstate=pstate@entry=0x557a91f3eb18, stmt=stmt@entry=0x557a91e881d0,\nparams=params@entry=0x0, dest=dest@entry=0x557a91f3ea88) at\n/home/bf/bf-build/pogona/HEAD/pgsql.build/../pgsql/src/include/nodes/nodes.h:193\n#29 0x0000557a91413811 in standard_ProcessUtility\n(pstmt=0x557a91e88280, queryString=0x557a91dbd650 \"EXPLAIN (COSTS\nOFF)\\\\nSELECT DISTINCT (i || '/' || j)::pg_lsn f\\\\n FROM\ngenerate_series(1, 10) i,\\\\n generate_series(1, 10) j,\\\\n\ngenerate_series(1, 5) k\\\\n WHERE i <= 10 AND j > 0 AND j <= 10\\\\n\nO\"..., readOnlyTree=<optimized out>, context=PROCESS_UTILITY_TOPLEVEL,\nparams=0x0, queryEnv=0x0, dest=0x557a91f3ea88, qc=0x7ffde6fa1500) at\n/home/bf/bf-build/pogona/HEAD/pgsql.build/../pgsql/src/backend/tcop/utility.c:870\n#30 0x0000557a91413ed9 in ProcessUtility\n(pstmt=pstmt@entry=0x557a91e88280, queryString=<optimized out>,\nreadOnlyTree=<optimized out>,\ncontext=context@entry=PROCESS_UTILITY_TOPLEVEL, params=<optimized\nout>, queryEnv=<optimized out>, dest=0x557a91f3ea88,\nqc=0x7ffde6fa1500) at\n/home/bf/bf-build/pogona/HEAD/pgsql.build/../pgsql/src/backend/tcop/utility.c:530\n#31 0x0000557a91411537 in PortalRunUtility\n(portal=portal@entry=0x557a91e35970, pstmt=0x557a91e88280,\nisTopLevel=true, setHoldSnapshot=setHoldSnapshot@entry=true,\ndest=dest@entry=0x557a91f3ea88, qc=qc@entry=0x7ffde6fa1500) at\n/home/bf/bf-build/pogona/HEAD/pgsql.build/../pgsql/src/backend/tcop/pquery.c:1158\n#32 0x0000557a914119a4 in FillPortalStore\n(portal=portal@entry=0x557a91e35970, isTopLevel=isTopLevel@entry=true)\nat /home/bf/bf-build/pogona/HEAD/pgsql.build/../pgsql/src/include/nodes/nodes.h:193\n#33 0x0000557a91411d6d in 
PortalRun\n(portal=portal@entry=0x557a91e35970,\ncount=count@entry=9223372036854775807,\nisTopLevel=isTopLevel@entry=true, run_once=run_once@entry=true,\ndest=dest@entry=0x557a91e88900, altdest=altdest@entry=0x557a91e88900,\nqc=0x7ffde6fa1700) at\n/home/bf/bf-build/pogona/HEAD/pgsql.build/../pgsql/src/backend/tcop/pquery.c:763\n#34 0x0000557a9140d65f in exec_simple_query\n(query_string=query_string@entry=0x557a91dbd650 \"EXPLAIN (COSTS\nOFF)\\\\nSELECT DISTINCT (i || '/' || j)::pg_lsn f\\\\n FROM\ngenerate_series(1, 10) i,\\\\n generate_series(1, 10) j,\\\\n\ngenerate_series(1, 5) k\\\\n WHERE i <= 10 AND j > 0 AND j <= 10\\\\n\nO\"...) at /home/bf/bf-build/pogona/HEAD/pgsql.build/../pgsql/src/backend/tcop/postgres.c:1272\n#35 0x0000557a9140e305 in PostgresMain (dbname=<optimized out>,\nusername=<optimized out>) at\n/home/bf/bf-build/pogona/HEAD/pgsql.build/../pgsql/src/backend/tcop/postgres.c:4652\n#36 0x0000557a91372bf0 in BackendRun (port=0x557a91de8730) at\n/home/bf/bf-build/pogona/HEAD/pgsql.build/../pgsql/src/backend/postmaster/postmaster.c:4439\n#37 BackendStartup (port=0x557a91de8730) at\n/home/bf/bf-build/pogona/HEAD/pgsql.build/../pgsql/src/backend/postmaster/postmaster.c:4167\n#38 ServerLoop () at\n/home/bf/bf-build/pogona/HEAD/pgsql.build/../pgsql/src/backend/postmaster/postmaster.c:1781\n#39 0x0000557a9137488e in PostmasterMain (argc=argc@entry=8,\nargv=argv@entry=0x557a91d7cc10) at\n/home/bf/bf-build/pogona/HEAD/pgsql.build/../pgsql/src/backend/postmaster/postmaster.c:1465\n#40 0x0000557a912a001e in main (argc=8, argv=0x557a91d7cc10) at\n/home/bf/bf-build/pogona/HEAD/pgsql.build/../pgsql/src/backend/main/main.c:198\n$1 = {si_signo = 6, si_errno = 0, si_code = -6, _sifields = {_pad =\n{3110875, 1000, 0 <repeats 26 times>}, _kill = {si_pid = 3110875,\nsi_uid = 1000}, _timer = {si_tid = 3110875, si_overrun = 1000,\nsi_sigval = {sival_int = 0, sival_ptr = 0x0}}, _rt = {si_pid =\n3110875, si_uid = 1000, si_sigval = {sival_int = 0, sival_ptr = 0x0}},\n_sigchld = {si_pid = 3110875, si_uid = 1000, si_status = 0, si_utime =\n0, si_stime = 0}, _sigfault = {si_addr = 0x3e8002f77db, _addr_lsb = 0,\n_addr_bnd = {_lower = 0x0, _upper = 0x0}}, _sigpoll = {si_band =\n4294970406875, si_fd = 0}, _sigsys = {_call_addr = 0x3e8002f77db,\n_syscall = 0, _arch = 0}}}\n\nThis seems to me to be complaining about the following addition:\n\n+ {\n+ Oid ioparam = op->d.iocoerce.typioparam;\n+ LLVMValueRef v_params[6];\n+ LLVMValueRef v_success;\n+\n+ v_params[0] = l_ptr_const(op->d.iocoerce.finfo_in,\n+ l_ptr(StructFmgrInfo));\n+ v_params[1] = v_output;\n+ v_params[2] = l_oid_const(lc, ioparam);\n+ v_params[3] = l_int32_const(lc, -1);\n+ v_params[4] = l_ptr_const(op->d.iocoerce.escontext,\n+\nl_ptr(StructErrorSaveContext));\n\n- LLVMBuildStore(b, v_retval, v_resvaluep);\n+ /*\n+ * InputFunctionCallSafe() will write directly into\n+ * *op->resvalue.\n+ */\n+ v_params[5] = v_resvaluep;\n+\n+ v_success = LLVMBuildCall(b, llvm_pg_func(mod,\n\"InputFunctionCallSafe\"),\n+ v_params, lengthof(v_params),\n+ \"funccall_iocoerce_in_safe\");\n+\n+ /*\n+ * Return null if InputFunctionCallSafe() encountered\n+ * an error.\n+ */\n+ v_resnullp = LLVMBuildICmp(b, LLVMIntEQ, v_success,\n+ l_sbool_const(0), \"\");\n+ }\n\n--\nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n[1] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=pogona&dt=2023-10-02%2003%3A50%3A20\n\n\n",
"msg_date": "Mon, 2 Oct 2023 13:24:05 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "On Mon, Oct 2, 2023 at 1:24 PM Amit Langote <[email protected]> wrote:\n> Pushed this 30 min ago (no email on -committers yet!) and am looking\n> at the following llvm crash reported by buildfarm animal pogona [1]:\n>\n> #4 0x00007f5bceb673d5 in __assert_fail_base (fmt=0x7f5bcecdbdc8\n> \"%s%s%s:%u: %s%sAssertion `%s' failed.\\\\n%n\",\n> assertion=assertion@entry=0x7f5bc1336419 \"(i >= FTy->getNumParams() ||\n> FTy->getParamType(i) == Args[i]->getType()) && \\\\\"Calling a function\n> with a bad signature!\\\\\"\", file=file@entry=0x7f5bc1336051\n> \"/home/bf/src/llvm-project-5/llvm/lib/IR/Instructions.cpp\",\n> line=line@entry=299, function=function@entry=0x7f5bc13362af \"void\n> llvm::CallInst::init(llvm::FunctionType *, llvm::Value *,\n> ArrayRef<llvm::Value *>, ArrayRef<llvm::OperandBundleDef>, const\n> llvm::Twine &)\") at ./assert/assert.c:92\n> #5 0x00007f5bceb763a2 in __assert_fail (assertion=0x7f5bc1336419 \"(i\n> >= FTy->getNumParams() || FTy->getParamType(i) == Args[i]->getType())\n> && \\\\\"Calling a function with a bad signature!\\\\\"\",\n> file=0x7f5bc1336051\n> \"/home/bf/src/llvm-project-5/llvm/lib/IR/Instructions.cpp\", line=299,\n> function=0x7f5bc13362af \"void llvm::CallInst::init(llvm::FunctionType\n> *, llvm::Value *, ArrayRef<llvm::Value *>,\n> ArrayRef<llvm::OperandBundleDef>, const llvm::Twine &)\") at\n> ./assert/assert.c:101\n> #6 0x00007f5bc110f138 in llvm::CallInst::init (this=0x557a91f3e508,\n> FTy=0x557a91ed9ae0, Func=0x557a91f8be88, Args=..., Bundles=...,\n> NameStr=...) at\n> /home/bf/src/llvm-project-5/llvm/lib/IR/Instructions.cpp:297\n> #7 0x00007f5bc0fa579d in llvm::CallInst::CallInst\n> (this=0x557a91f3e508, Ty=0x557a91ed9ae0, Func=0x557a91f8be88,\n> Args=..., Bundles=..., NameStr=..., InsertBefore=0x0) at\n> /home/bf/src/llvm-project-5/llvm/include/llvm/IR/Instructions.h:1934\n> #8 0x00007f5bc0fa538c in llvm::CallInst::Create (Ty=0x557a91ed9ae0,\n> Func=0x557a91f8be88, Args=..., Bundles=..., NameStr=...,\n> InsertBefore=0x0) at\n> /home/bf/src/llvm-project-5/llvm/include/llvm/IR/Instructions.h:1444\n> #9 0x00007f5bc0fa51f9 in llvm::IRBuilder<llvm::ConstantFolder,\n> llvm::IRBuilderDefaultInserter>::CreateCall (this=0x557a91f9c6a0,\n> FTy=0x557a91ed9ae0, Callee=0x557a91f8be88, Args=..., Name=...,\n> FPMathTag=0x0) at\n> /home/bf/src/llvm-project-5/llvm/include/llvm/IR/IRBuilder.h:1669\n> #10 0x00007f5bc100edda in llvm::IRBuilder<llvm::ConstantFolder,\n> llvm::IRBuilderDefaultInserter>::CreateCall (this=0x557a91f9c6a0,\n> Callee=0x557a91f8be88, Args=..., Name=..., FPMathTag=0x0) at\n> /home/bf/src/llvm-project-5/llvm/include/llvm/IR/IRBuilder.h:1663\n> #11 0x00007f5bc100714e in LLVMBuildCall (B=0x557a91f9c6a0,\n> Fn=0x557a91f8be88, Args=0x7ffde6fa0b50, NumArgs=6, Name=0x7f5bc30b648c\n> \"funccall_iocoerce_in_safe\") at\n> /home/bf/src/llvm-project-5/llvm/lib/IR/Core.cpp:2964\n> #12 0x00007f5bc30af861 in llvm_compile_expr (state=0x557a91fbeac0) at\n>\n/home/bf/bf-build/pogona/HEAD/pgsql.build/../pgsql/src/backend/jit/llvm/llvmjit_expr.c:1373\n>\n> This seems to me to be complaining about the following addition:\n>\n> + {\n> + Oid ioparam = op->d.iocoerce.typioparam;\n> + LLVMValueRef v_params[6];\n> + LLVMValueRef v_success;\n> +\n> + v_params[0] = l_ptr_const(op->d.iocoerce.finfo_in,\n> + l_ptr(StructFmgrInfo));\n> + v_params[1] = v_output;\n> + v_params[2] = l_oid_const(lc, ioparam);\n> + v_params[3] = l_int32_const(lc, -1);\n> + v_params[4] =\nl_ptr_const(op->d.iocoerce.escontext,\n> +\n> l_ptr(StructErrorSaveContext));\n>\n> - 
LLVMBuildStore(b, v_retval, v_resvaluep);\n> + /*\n> + * InputFunctionCallSafe() will write directly\ninto\n> + * *op->resvalue.\n> + */\n> + v_params[5] = v_resvaluep;\n> +\n> + v_success = LLVMBuildCall(b, llvm_pg_func(mod,\n> \"InputFunctionCallSafe\"),\n> + v_params,\nlengthof(v_params),\n> +\n \"funccall_iocoerce_in_safe\");\n> +\n> + /*\n> + * Return null if InputFunctionCallSafe()\nencountered\n> + * an error.\n> + */\n> + v_resnullp = LLVMBuildICmp(b, LLVMIntEQ,\nv_success,\n> + l_sbool_const(0), \"\");\n> + }\n\nAlthough most animals except pogona looked fine, I've decided to revert the\npatch for now.\n\nIIUC, LLVM is complaining that the code in the above block is not passing\nthe arguments of InputFunctionCallSafe() using the correct types. I'm not\nexactly sure which particular argument is not handled correctly in the\nabove code, but perhaps it's:\n\n\n+ v_params[1] = v_output;\n\nwhich maps to char *str argument of InputFunctionCallSafe(). v_output is\nset in the code preceding the above block as follows:\n\n /* and call output function (can never return NULL) */\n v_output = LLVMBuildCall(b, v_fn_out, &v_fcinfo_out,\n 1, \"funccall_coerce_out\");\n\nI thought that it would be fine to pass it as-is to the call of\nInputFunctionCallSafe() given that v_fn_out is a call to a function that\nreturns char *, but perhaps not.\n\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\nOn Mon, Oct 2, 2023 at 1:24 PM Amit Langote <[email protected]> wrote:\n> Pushed this 30 min ago (no email on -committers yet!) and am looking\n> at the following llvm crash reported by buildfarm animal pogona [1]:\n>\n> #4 0x00007f5bceb673d5 in __assert_fail_base (fmt=0x7f5bcecdbdc8\n> \"%s%s%s:%u: %s%sAssertion `%s' failed.\\\\n%n\",\n> assertion=assertion@entry=0x7f5bc1336419 \"(i >= FTy->getNumParams() ||\n> FTy->getParamType(i) == Args[i]->getType()) && \\\\\"Calling a function\n> with a bad signature!\\\\\"\", file=file@entry=0x7f5bc1336051\n> \"/home/bf/src/llvm-project-5/llvm/lib/IR/Instructions.cpp\",\n> line=line@entry=299, function=function@entry=0x7f5bc13362af \"void\n> llvm::CallInst::init(llvm::FunctionType *, llvm::Value *,\n> ArrayRef<llvm::Value *>, ArrayRef<llvm::OperandBundleDef>, const\n> llvm::Twine &)\") at ./assert/assert.c:92\n> #5 0x00007f5bceb763a2 in __assert_fail (assertion=0x7f5bc1336419 \"(i\n> >= FTy->getNumParams() || FTy->getParamType(i) == Args[i]->getType())\n> && \\\\\"Calling a function with a bad signature!\\\\\"\",\n> file=0x7f5bc1336051\n> \"/home/bf/src/llvm-project-5/llvm/lib/IR/Instructions.cpp\", line=299,\n> function=0x7f5bc13362af \"void llvm::CallInst::init(llvm::FunctionType\n> *, llvm::Value *, ArrayRef<llvm::Value *>,\n> ArrayRef<llvm::OperandBundleDef>, const llvm::Twine &)\") at\n> ./assert/assert.c:101\n> #6 0x00007f5bc110f138 in llvm::CallInst::init (this=0x557a91f3e508,\n> FTy=0x557a91ed9ae0, Func=0x557a91f8be88, Args=..., Bundles=...,\n> NameStr=...) 
at\n> /home/bf/src/llvm-project-5/llvm/lib/IR/Instructions.cpp:297\n> #7 0x00007f5bc0fa579d in llvm::CallInst::CallInst\n> (this=0x557a91f3e508, Ty=0x557a91ed9ae0, Func=0x557a91f8be88,\n> Args=..., Bundles=..., NameStr=..., InsertBefore=0x0) at\n> /home/bf/src/llvm-project-5/llvm/include/llvm/IR/Instructions.h:1934\n> #8 0x00007f5bc0fa538c in llvm::CallInst::Create (Ty=0x557a91ed9ae0,\n> Func=0x557a91f8be88, Args=..., Bundles=..., NameStr=...,\n> InsertBefore=0x0) at\n> /home/bf/src/llvm-project-5/llvm/include/llvm/IR/Instructions.h:1444\n> #9 0x00007f5bc0fa51f9 in llvm::IRBuilder<llvm::ConstantFolder,\n> llvm::IRBuilderDefaultInserter>::CreateCall (this=0x557a91f9c6a0,\n> FTy=0x557a91ed9ae0, Callee=0x557a91f8be88, Args=..., Name=...,\n> FPMathTag=0x0) at\n> /home/bf/src/llvm-project-5/llvm/include/llvm/IR/IRBuilder.h:1669\n> #10 0x00007f5bc100edda in llvm::IRBuilder<llvm::ConstantFolder,\n> llvm::IRBuilderDefaultInserter>::CreateCall (this=0x557a91f9c6a0,\n> Callee=0x557a91f8be88, Args=..., Name=..., FPMathTag=0x0) at\n> /home/bf/src/llvm-project-5/llvm/include/llvm/IR/IRBuilder.h:1663\n> #11 0x00007f5bc100714e in LLVMBuildCall (B=0x557a91f9c6a0,\n> Fn=0x557a91f8be88, Args=0x7ffde6fa0b50, NumArgs=6, Name=0x7f5bc30b648c\n> \"funccall_iocoerce_in_safe\") at\n> /home/bf/src/llvm-project-5/llvm/lib/IR/Core.cpp:2964\n> #12 0x00007f5bc30af861 in llvm_compile_expr (state=0x557a91fbeac0) at\n> /home/bf/bf-build/pogona/HEAD/pgsql.build/../pgsql/src/backend/jit/llvm/llvmjit_expr.c:1373\n>\n> This seems to me to be complaining about the following addition:\n>\n> + {\n> + Oid ioparam = op->d.iocoerce.typioparam;\n> + LLVMValueRef v_params[6];\n> + LLVMValueRef v_success;\n> +\n> + v_params[0] = l_ptr_const(op->d.iocoerce.finfo_in,\n> + l_ptr(StructFmgrInfo));\n> + v_params[1] = v_output;\n> + v_params[2] = l_oid_const(lc, ioparam);\n> + v_params[3] = l_int32_const(lc, -1);\n> + v_params[4] = l_ptr_const(op->d.iocoerce.escontext,\n> +\n> l_ptr(StructErrorSaveContext));\n>\n> - LLVMBuildStore(b, v_retval, v_resvaluep);\n> + /*\n> + * InputFunctionCallSafe() will write directly into\n> + * *op->resvalue.\n> + */\n> + v_params[5] = v_resvaluep;\n> +\n> + v_success = LLVMBuildCall(b, llvm_pg_func(mod,\n> \"InputFunctionCallSafe\"),\n> + v_params, lengthof(v_params),\n> + \"funccall_iocoerce_in_safe\");\n> +\n> + /*\n> + * Return null if InputFunctionCallSafe() encountered\n> + * an error.\n> + */\n> + v_resnullp = LLVMBuildICmp(b, LLVMIntEQ, v_success,\n> + l_sbool_const(0), \"\");\n> + }\n\nAlthough most animals except pogona looked fine, I've decided to revert the patch for now.\n\nIIUC, LLVM is complaining that the code in the above block is not passing the arguments of InputFunctionCallSafe() using the correct types. I'm not exactly sure which particular argument is not handled correctly in the above code, but perhaps it's:\n\n+ v_params[1] = v_output;\n\nwhich maps to char *str argument of InputFunctionCallSafe(). v_output is set in the code preceding the above block as follows:\n\n /* and call output function (can never return NULL) */\n v_output = LLVMBuildCall(b, v_fn_out, &v_fcinfo_out,\n 1, \"funccall_coerce_out\");\n\nI thought that it would be fine to pass it as-is to the call of InputFunctionCallSafe() given that v_fn_out is a call to a function that returns char *, but perhaps not.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com",
"msg_date": "Mon, 2 Oct 2023 14:26:49 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "On Mon, Oct 2, 2023 at 2:26 PM Amit Langote <[email protected]> wrote:\n> On Mon, Oct 2, 2023 at 1:24 PM Amit Langote <[email protected]> wrote:\n> > Pushed this 30 min ago (no email on -committers yet!) and am looking\n> > at the following llvm crash reported by buildfarm animal pogona [1]:\n> >\n> > #4 0x00007f5bceb673d5 in __assert_fail_base (fmt=0x7f5bcecdbdc8\n> > \"%s%s%s:%u: %s%sAssertion `%s' failed.\\\\n%n\",\n> > assertion=assertion@entry=0x7f5bc1336419 \"(i >= FTy->getNumParams() ||\n> > FTy->getParamType(i) == Args[i]->getType()) && \\\\\"Calling a function\n> > with a bad signature!\\\\\"\", file=file@entry=0x7f5bc1336051\n> > \"/home/bf/src/llvm-project-5/llvm/lib/IR/Instructions.cpp\",\n> > line=line@entry=299, function=function@entry=0x7f5bc13362af \"void\n> > llvm::CallInst::init(llvm::FunctionType *, llvm::Value *,\n> > ArrayRef<llvm::Value *>, ArrayRef<llvm::OperandBundleDef>, const\n> > llvm::Twine &)\") at ./assert/assert.c:92\n> > #5 0x00007f5bceb763a2 in __assert_fail (assertion=0x7f5bc1336419 \"(i\n> > >= FTy->getNumParams() || FTy->getParamType(i) == Args[i]->getType())\n> > && \\\\\"Calling a function with a bad signature!\\\\\"\",\n> > file=0x7f5bc1336051\n> > \"/home/bf/src/llvm-project-5/llvm/lib/IR/Instructions.cpp\", line=299,\n> > function=0x7f5bc13362af \"void llvm::CallInst::init(llvm::FunctionType\n> > *, llvm::Value *, ArrayRef<llvm::Value *>,\n> > ArrayRef<llvm::OperandBundleDef>, const llvm::Twine &)\") at\n> > ./assert/assert.c:101\n> > #6 0x00007f5bc110f138 in llvm::CallInst::init (this=0x557a91f3e508,\n> > FTy=0x557a91ed9ae0, Func=0x557a91f8be88, Args=..., Bundles=...,\n> > NameStr=...) at\n> > /home/bf/src/llvm-project-5/llvm/lib/IR/Instructions.cpp:297\n> > #7 0x00007f5bc0fa579d in llvm::CallInst::CallInst\n> > (this=0x557a91f3e508, Ty=0x557a91ed9ae0, Func=0x557a91f8be88,\n> > Args=..., Bundles=..., NameStr=..., InsertBefore=0x0) at\n> > /home/bf/src/llvm-project-5/llvm/include/llvm/IR/Instructions.h:1934\n> > #8 0x00007f5bc0fa538c in llvm::CallInst::Create (Ty=0x557a91ed9ae0,\n> > Func=0x557a91f8be88, Args=..., Bundles=..., NameStr=...,\n> > InsertBefore=0x0) at\n> > /home/bf/src/llvm-project-5/llvm/include/llvm/IR/Instructions.h:1444\n> > #9 0x00007f5bc0fa51f9 in llvm::IRBuilder<llvm::ConstantFolder,\n> > llvm::IRBuilderDefaultInserter>::CreateCall (this=0x557a91f9c6a0,\n> > FTy=0x557a91ed9ae0, Callee=0x557a91f8be88, Args=..., Name=...,\n> > FPMathTag=0x0) at\n> > /home/bf/src/llvm-project-5/llvm/include/llvm/IR/IRBuilder.h:1669\n> > #10 0x00007f5bc100edda in llvm::IRBuilder<llvm::ConstantFolder,\n> > llvm::IRBuilderDefaultInserter>::CreateCall (this=0x557a91f9c6a0,\n> > Callee=0x557a91f8be88, Args=..., Name=..., FPMathTag=0x0) at\n> > /home/bf/src/llvm-project-5/llvm/include/llvm/IR/IRBuilder.h:1663\n> > #11 0x00007f5bc100714e in LLVMBuildCall (B=0x557a91f9c6a0,\n> > Fn=0x557a91f8be88, Args=0x7ffde6fa0b50, NumArgs=6, Name=0x7f5bc30b648c\n> > \"funccall_iocoerce_in_safe\") at\n> > /home/bf/src/llvm-project-5/llvm/lib/IR/Core.cpp:2964\n> > #12 0x00007f5bc30af861 in llvm_compile_expr (state=0x557a91fbeac0) at\n> > /home/bf/bf-build/pogona/HEAD/pgsql.build/../pgsql/src/backend/jit/llvm/llvmjit_expr.c:1373\n> >\n> > This seems to me to be complaining about the following addition:\n> >\n> > + {\n> > + Oid ioparam = op->d.iocoerce.typioparam;\n> > + LLVMValueRef v_params[6];\n> > + LLVMValueRef v_success;\n> > +\n> > + v_params[0] = l_ptr_const(op->d.iocoerce.finfo_in,\n> > + l_ptr(StructFmgrInfo));\n> > + v_params[1] = v_output;\n> > 
+ v_params[2] = l_oid_const(lc, ioparam);\n> > + v_params[3] = l_int32_const(lc, -1);\n> > + v_params[4] = l_ptr_const(op->d.iocoerce.escontext,\n> > +\n> > l_ptr(StructErrorSaveContext));\n> >\n> > - LLVMBuildStore(b, v_retval, v_resvaluep);\n> > + /*\n> > + * InputFunctionCallSafe() will write directly into\n> > + * *op->resvalue.\n> > + */\n> > + v_params[5] = v_resvaluep;\n> > +\n> > + v_success = LLVMBuildCall(b, llvm_pg_func(mod,\n> > \"InputFunctionCallSafe\"),\n> > + v_params, lengthof(v_params),\n> > + \"funccall_iocoerce_in_safe\");\n> > +\n> > + /*\n> > + * Return null if InputFunctionCallSafe() encountered\n> > + * an error.\n> > + */\n> > + v_resnullp = LLVMBuildICmp(b, LLVMIntEQ, v_success,\n> > + l_sbool_const(0), \"\");\n> > + }\n>\n> Although most animals except pogona looked fine, I've decided to revert the patch for now.\n>\n> IIUC, LLVM is complaining that the code in the above block is not passing the arguments of InputFunctionCallSafe() using the correct types. I'm not exactly sure which particular argument is not handled correctly in the above code, but perhaps it's:\n>\n>\n> + v_params[1] = v_output;\n>\n> which maps to char *str argument of InputFunctionCallSafe(). v_output is set in the code preceding the above block as follows:\n>\n> /* and call output function (can never return NULL) */\n> v_output = LLVMBuildCall(b, v_fn_out, &v_fcinfo_out,\n> 1, \"funccall_coerce_out\");\n>\n> I thought that it would be fine to pass it as-is to the call of InputFunctionCallSafe() given that v_fn_out is a call to a function that returns char *, but perhaps not.\n\nOK, I think I could use some help from LLVM experts here.\n\nSo, the LLVM code involving setting up a call to\nInputFunctionCallSafe() seems to *work*, but BF animal pogona's debug\nbuild (?) is complaining that the parameter types don't match up.\nParameters are set up as follows:\n\n+ {\n+ Oid ioparam = op->d.iocoerce.typioparam;\n+ LLVMValueRef v_params[6];\n+ LLVMValueRef v_success;\n+\n+ v_params[0] = l_ptr_const(op->d.iocoerce.finfo_in,\n+ l_ptr(StructFmgrInfo));\n+ v_params[1] = v_output;\n+ v_params[2] = l_oid_const(lc, ioparam);\n+ v_params[3] = l_int32_const(lc, -1);\n+ v_params[4] = l_ptr_const(op->d.iocoerce.escontext,\n+\nl_ptr(StructErrorSaveContext));\n + /*\n+ * InputFunctionCallSafe() will write directly into\n+ * *op->resvalue.\n+ */\n+ v_params[5] = v_resvaluep;\n+\n+ v_success = LLVMBuildCall(b, llvm_pg_func(mod,\n\"InputFunctionCallSafe\"),\n+ v_params, lengthof(v_params),\n+ \"funccall_iocoerce_in_safe\");\n+\n+ /*\n+ * Return null if InputFunctionCallSafe() encountered\n+ * an error.\n+ */\n+ v_resnullp = LLVMBuildICmp(b, LLVMIntEQ, v_success,\n+ l_sbool_const(0), \"\");\n+ }\n\nAnd here's InputFunctionCallSafe's signature:\n\nbool\nInputFunctionCallSafe(FmgrInfo *flinfo, Datum d,\n Oid typioparam, int32 typmod,\n fmNodePtr escontext,\n Datum *result)\n\nI suspected that assignment to either param[1] or param[5] might be wrong.\n\nparam[1] in InputFunctionCallSafe's signature is char *, but the code\nassigns it v_output, which is an LLVMValueRef for the output\nfunction's output, a Datum, so I thought LLVM's type checker is\ncomplaining that I'm trying to pass the Datum to char * without\nappropriate conversion.\n\nparam[5] in InputFunctionCallSafe's signature is Node *, but the above\ncode is assigning it an LLVMValueRef for iocoerce's escontext whose\ntype is ErrorSaveContext.\n\nMaybe some other param is wrong.\n\nI tried various ways to fix both, but with no success. 
My way of\nchecking for failure is to disassemble the IR code in .bc files\n(generated with jit_dump_bitcode) with llvm-dis and finding that it\ngives me errors such as:\n\n$ llvm-dis 58536.0.bc\nllvm-dis: error: Invalid record (Producer: 'LLVM7.0.1' Reader: 'LLVM 7.0.1')\n\n$ llvm-dis 58536.0.bc\nllvm-dis: error: Invalid cast (Producer: 'LLVM7.0.1' Reader: 'LLVM 7.0.1')\n\n\n--\nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 3 Oct 2023 22:11:00 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "On Tue, Oct 3, 2023 at 10:11 PM Amit Langote <[email protected]> wrote:\n> On Mon, Oct 2, 2023 at 2:26 PM Amit Langote <[email protected]> wrote:\n> > On Mon, Oct 2, 2023 at 1:24 PM Amit Langote <[email protected]> wrote:\n> > > Pushed this 30 min ago (no email on -committers yet!) and am looking\n> > > at the following llvm crash reported by buildfarm animal pogona [1]:\n> > >\n> > > #4 0x00007f5bceb673d5 in __assert_fail_base (fmt=0x7f5bcecdbdc8\n> > > \"%s%s%s:%u: %s%sAssertion `%s' failed.\\\\n%n\",\n> > > assertion=assertion@entry=0x7f5bc1336419 \"(i >= FTy->getNumParams() ||\n> > > FTy->getParamType(i) == Args[i]->getType()) && \\\\\"Calling a function\n> > > with a bad signature!\\\\\"\", file=file@entry=0x7f5bc1336051\n> > > \"/home/bf/src/llvm-project-5/llvm/lib/IR/Instructions.cpp\",\n> > > line=line@entry=299, function=function@entry=0x7f5bc13362af \"void\n> > > llvm::CallInst::init(llvm::FunctionType *, llvm::Value *,\n> > > ArrayRef<llvm::Value *>, ArrayRef<llvm::OperandBundleDef>, const\n> > > llvm::Twine &)\") at ./assert/assert.c:92\n> > > #5 0x00007f5bceb763a2 in __assert_fail (assertion=0x7f5bc1336419 \"(i\n> > > >= FTy->getNumParams() || FTy->getParamType(i) == Args[i]->getType())\n> > > && \\\\\"Calling a function with a bad signature!\\\\\"\",\n> > > file=0x7f5bc1336051\n> > > \"/home/bf/src/llvm-project-5/llvm/lib/IR/Instructions.cpp\", line=299,\n> > > function=0x7f5bc13362af \"void llvm::CallInst::init(llvm::FunctionType\n> > > *, llvm::Value *, ArrayRef<llvm::Value *>,\n> > > ArrayRef<llvm::OperandBundleDef>, const llvm::Twine &)\") at\n> > > ./assert/assert.c:101\n> > > #6 0x00007f5bc110f138 in llvm::CallInst::init (this=0x557a91f3e508,\n> > > FTy=0x557a91ed9ae0, Func=0x557a91f8be88, Args=..., Bundles=...,\n> > > NameStr=...) 
at\n> > > /home/bf/src/llvm-project-5/llvm/lib/IR/Instructions.cpp:297\n> > > #7 0x00007f5bc0fa579d in llvm::CallInst::CallInst\n> > > (this=0x557a91f3e508, Ty=0x557a91ed9ae0, Func=0x557a91f8be88,\n> > > Args=..., Bundles=..., NameStr=..., InsertBefore=0x0) at\n> > > /home/bf/src/llvm-project-5/llvm/include/llvm/IR/Instructions.h:1934\n> > > #8 0x00007f5bc0fa538c in llvm::CallInst::Create (Ty=0x557a91ed9ae0,\n> > > Func=0x557a91f8be88, Args=..., Bundles=..., NameStr=...,\n> > > InsertBefore=0x0) at\n> > > /home/bf/src/llvm-project-5/llvm/include/llvm/IR/Instructions.h:1444\n> > > #9 0x00007f5bc0fa51f9 in llvm::IRBuilder<llvm::ConstantFolder,\n> > > llvm::IRBuilderDefaultInserter>::CreateCall (this=0x557a91f9c6a0,\n> > > FTy=0x557a91ed9ae0, Callee=0x557a91f8be88, Args=..., Name=...,\n> > > FPMathTag=0x0) at\n> > > /home/bf/src/llvm-project-5/llvm/include/llvm/IR/IRBuilder.h:1669\n> > > #10 0x00007f5bc100edda in llvm::IRBuilder<llvm::ConstantFolder,\n> > > llvm::IRBuilderDefaultInserter>::CreateCall (this=0x557a91f9c6a0,\n> > > Callee=0x557a91f8be88, Args=..., Name=..., FPMathTag=0x0) at\n> > > /home/bf/src/llvm-project-5/llvm/include/llvm/IR/IRBuilder.h:1663\n> > > #11 0x00007f5bc100714e in LLVMBuildCall (B=0x557a91f9c6a0,\n> > > Fn=0x557a91f8be88, Args=0x7ffde6fa0b50, NumArgs=6, Name=0x7f5bc30b648c\n> > > \"funccall_iocoerce_in_safe\") at\n> > > /home/bf/src/llvm-project-5/llvm/lib/IR/Core.cpp:2964\n> > > #12 0x00007f5bc30af861 in llvm_compile_expr (state=0x557a91fbeac0) at\n> > > /home/bf/bf-build/pogona/HEAD/pgsql.build/../pgsql/src/backend/jit/llvm/llvmjit_expr.c:1373\n> > >\n> > > This seems to me to be complaining about the following addition:\n> > >\n> > > + {\n> > > + Oid ioparam = op->d.iocoerce.typioparam;\n> > > + LLVMValueRef v_params[6];\n> > > + LLVMValueRef v_success;\n> > > +\n> > > + v_params[0] = l_ptr_const(op->d.iocoerce.finfo_in,\n> > > + l_ptr(StructFmgrInfo));\n> > > + v_params[1] = v_output;\n> > > + v_params[2] = l_oid_const(lc, ioparam);\n> > > + v_params[3] = l_int32_const(lc, -1);\n> > > + v_params[4] = l_ptr_const(op->d.iocoerce.escontext,\n> > > +\n> > > l_ptr(StructErrorSaveContext));\n> > >\n> > > - LLVMBuildStore(b, v_retval, v_resvaluep);\n> > > + /*\n> > > + * InputFunctionCallSafe() will write directly into\n> > > + * *op->resvalue.\n> > > + */\n> > > + v_params[5] = v_resvaluep;\n> > > +\n> > > + v_success = LLVMBuildCall(b, llvm_pg_func(mod,\n> > > \"InputFunctionCallSafe\"),\n> > > + v_params, lengthof(v_params),\n> > > + \"funccall_iocoerce_in_safe\");\n> > > +\n> > > + /*\n> > > + * Return null if InputFunctionCallSafe() encountered\n> > > + * an error.\n> > > + */\n> > > + v_resnullp = LLVMBuildICmp(b, LLVMIntEQ, v_success,\n> > > + l_sbool_const(0), \"\");\n> > > + }\n> >\n> > Although most animals except pogona looked fine, I've decided to revert the patch for now.\n> >\n> > IIUC, LLVM is complaining that the code in the above block is not passing the arguments of InputFunctionCallSafe() using the correct types. I'm not exactly sure which particular argument is not handled correctly in the above code, but perhaps it's:\n> >\n> >\n> > + v_params[1] = v_output;\n> >\n> > which maps to char *str argument of InputFunctionCallSafe(). 
v_output is set in the code preceding the above block as follows:\n> >\n> > /* and call output function (can never return NULL) */\n> > v_output = LLVMBuildCall(b, v_fn_out, &v_fcinfo_out,\n> > 1, \"funccall_coerce_out\");\n> >\n> > I thought that it would be fine to pass it as-is to the call of InputFunctionCallSafe() given that v_fn_out is a call to a function that returns char *, but perhaps not.\n>\n> OK, I think I could use some help from LLVM experts here.\n>\n> So, the LLVM code involving setting up a call to\n> InputFunctionCallSafe() seems to *work*, but BF animal pogona's debug\n> build (?) is complaining that the parameter types don't match up.\n> Parameters are set up as follows:\n>\n> + {\n> + Oid ioparam = op->d.iocoerce.typioparam;\n> + LLVMValueRef v_params[6];\n> + LLVMValueRef v_success;\n> +\n> + v_params[0] = l_ptr_const(op->d.iocoerce.finfo_in,\n> + l_ptr(StructFmgrInfo));\n> + v_params[1] = v_output;\n> + v_params[2] = l_oid_const(lc, ioparam);\n> + v_params[3] = l_int32_const(lc, -1);\n> + v_params[4] = l_ptr_const(op->d.iocoerce.escontext,\n> +\n> l_ptr(StructErrorSaveContext));\n> + /*\n> + * InputFunctionCallSafe() will write directly into\n> + * *op->resvalue.\n> + */\n> + v_params[5] = v_resvaluep;\n> +\n> + v_success = LLVMBuildCall(b, llvm_pg_func(mod,\n> \"InputFunctionCallSafe\"),\n> + v_params, lengthof(v_params),\n> + \"funccall_iocoerce_in_safe\");\n> +\n> + /*\n> + * Return null if InputFunctionCallSafe() encountered\n> + * an error.\n> + */\n> + v_resnullp = LLVMBuildICmp(b, LLVMIntEQ, v_success,\n> + l_sbool_const(0), \"\");\n> + }\n>\n> And here's InputFunctionCallSafe's signature:\n>\n> bool\n> InputFunctionCallSafe(FmgrInfo *flinfo, Datum d,\n> Oid typioparam, int32 typmod,\n> fmNodePtr escontext,\n> Datum *result)\n>\n> I suspected that assignment to either param[1] or param[5] might be wrong.\n>\n> param[1] in InputFunctionCallSafe's signature is char *, but the code\n> assigns it v_output, which is an LLVMValueRef for the output\n> function's output, a Datum, so I thought LLVM's type checker is\n> complaining that I'm trying to pass the Datum to char * without\n> appropriate conversion.\n>\n> param[5] in InputFunctionCallSafe's signature is Node *, but the above\n> code is assigning it an LLVMValueRef for iocoerce's escontext whose\n> type is ErrorSaveContext.\n>\n> Maybe some other param is wrong.\n>\n> I tried various ways to fix both, but with no success. 
My way of\n> checking for failure is to disassemble the IR code in .bc files\n> (generated with jit_dump_bitcode) with llvm-dis and finding that it\n> gives me errors such as:\n>\n> $ llvm-dis 58536.0.bc\n> llvm-dis: error: Invalid record (Producer: 'LLVM7.0.1' Reader: 'LLVM 7.0.1')\n>\n> $ llvm-dis 58536.0.bc\n> llvm-dis: error: Invalid cast (Producer: 'LLVM7.0.1' Reader: 'LLVM 7.0.1')\n\nSo I built LLVM sources to get asserts like pogona:\n\n$ llvm-config --version\n15.0.7\n$ llvm-config --assertion-mode\nON\n\nand I do now get a crash with bt that looks like this (not same as pogona):\n\n#0 0x00007fe31e83c387 in raise () from /lib64/libc.so.6\n#1 0x00007fe31e83da78 in abort () from /lib64/libc.so.6\n#2 0x00007fe31e8351a6 in __assert_fail_base () from /lib64/libc.so.6\n#3 0x00007fe31e835252 in __assert_fail () from /lib64/libc.so.6\n#4 0x00007fe3136d8132 in llvm::CallInst::init(llvm::FunctionType*,\nllvm::Value*, llvm::ArrayRef<llvm::Value*>,\nllvm::ArrayRef<llvm::OperandBundleDefT<llvm::Value*> >, llvm::Twine\nconst&) ()\n from /home/amit/llvm/lib/libLLVMCore.so.15\n#5 0x00007fe31362137a in\nllvm::IRBuilderBase::CreateCall(llvm::FunctionType*, llvm::Value*,\nllvm::ArrayRef<llvm::Value*>, llvm::Twine const&, llvm::MDNode*) ()\nfrom /home/amit/llvm/lib/libLLVMCore.so.15\n#6 0x00007fe31362d627 in LLVMBuildCall () from\n/home/amit/llvm/lib/libLLVMCore.so.15\n#7 0x00007fe3205e7e92 in llvm_compile_expr (state=0x1114e48) at\nllvmjit_expr.c:1374\n#8 0x0000000000bd3fbc in jit_compile_expr (state=0x1114e48) at jit.c:177\n#9 0x000000000072442b in ExecReadyExpr (state=0x1114e48) at execExpr.c:880\n#10 0x000000000072387c in ExecBuildProjectionInfo\n(targetList=0x1110840, econtext=0x1114a20, slot=0x1114db0,\n parent=0x1114830, inputDesc=0x1114ab0) at execExpr.c:484\n#11 0x000000000074e917 in ExecAssignProjectionInfo\n(planstate=0x1114830, inputDesc=0x1114ab0) at execUtils.c:547\n#12 0x000000000074ea02 in ExecConditionalAssignProjectionInfo\n(planstate=0x1114830, inputDesc=0x1114ab0, varno=2)\n at execUtils.c:585\n#13 0x0000000000749814 in ExecAssignScanProjectionInfo\n(node=0x1114830) at execScan.c:276\n#14 0x0000000000790bf0 in ExecInitValuesScan (node=0x1045020,\nestate=0x1114600, eflags=32)\n at nodeValuesscan.c:257\n#15 0x00000000007451c9 in ExecInitNode (node=0x1045020,\nestate=0x1114600, eflags=32) at execProcnode.c:265\n#16 0x000000000073a952 in InitPlan (queryDesc=0x1070760, eflags=32) at\nexecMain.c:968\n#17 0x0000000000739828 in standard_ExecutorStart (queryDesc=0x1070760,\neflags=32) at execMain.c:266\n#18 0x000000000073959d in ExecutorStart (queryDesc=0x1070760,\neflags=0) at execMain.c:145\n#19 0x00000000009c1aaa in PortalStart (portal=0x10bf7d0, params=0x0,\neflags=0, snapshot=0x0) at pquery.c:517\n#20 0x00000000009bbba8 in exec_simple_query (\n query_string=0x10433c0 \"select i::pg_lsn from (values ('x/a'),\n('b/b')) a(i);\") at postgres.c:1233\n#21 0x00000000009c0263 in PostgresMain (dbname=0x1079750 \"postgres\",\nusername=0x1079738 \"amit\")\n at postgres.c:4652\n#22 0x00000000008f72d6 in BackendRun (port=0x106e740) at postmaster.c:4439\n#23 0x00000000008f6c6f in BackendStartup (port=0x106e740) at postmaster.c:4167\n#24 0x00000000008f363e in ServerLoop () at postmaster.c:1781\n#25 0x00000000008f300e in PostmasterMain (argc=5, argv=0x103dc60) at\npostmaster.c:1465\n#26 0x00000000007bbfb4 in main (argc=5, argv=0x103dc60) at main.c:198\n\nThe LLVMBuildCall() in frame #6 is added by the patch that I also\nmentioned in the previous replies. 
I haven't yet pinpointed down\nwhich of the LLVM's asserts it is, nor have I been able to walk\nthrough LLVM source code using gdb to figure what the new code is\ndoing wrong. Maybe I'm still missing a trick or two...\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 4 Oct 2023 22:26:47 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: remaining sql/json patches"
},
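One way to catch this class of signature mismatch without an assert-enabled LLVM build (purely a debugging aid, not something the patch relies on) is to run the finished module through the IR verifier before handing it to the JIT. A rough sketch using the LLVM-C API; the helper name is made up:

    #include <llvm-c/Analysis.h>

    static void
    verify_generated_module(LLVMModuleRef mod)
    {
        char       *message = NULL;

        /* LLVMVerifyModule() returns nonzero if the module is malformed */
        if (LLVMVerifyModule(mod, LLVMReturnStatusAction, &message))
            elog(WARNING, "LLVM module failed verification: %s", message);
        LLVMDisposeMessage(message);
    }

In a non-assert LLVM the badly typed call is constructed silently and only misbehaves later, so the verifier's complaint (something along the lines of "Call parameter type does not match function signature") is often the quickest pointer to the offending instruction.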
{
"msg_contents": "On Wed, Oct 4, 2023 at 10:26 PM Amit Langote <[email protected]> wrote:\n> On Tue, Oct 3, 2023 at 10:11 PM Amit Langote <[email protected]> wrote:\n> > On Mon, Oct 2, 2023 at 2:26 PM Amit Langote <[email protected]> wrote:\n> > > On Mon, Oct 2, 2023 at 1:24 PM Amit Langote <[email protected]> wrote:\n> > > > Pushed this 30 min ago (no email on -committers yet!) and am looking\n> > > > at the following llvm crash reported by buildfarm animal pogona [1]:\n> > > > This seems to me to be complaining about the following addition:\n> > > >\n> > > > + {\n> > > > + Oid ioparam = op->d.iocoerce.typioparam;\n> > > > + LLVMValueRef v_params[6];\n> > > > + LLVMValueRef v_success;\n> > > > +\n> > > > + v_params[0] = l_ptr_const(op->d.iocoerce.finfo_in,\n> > > > + l_ptr(StructFmgrInfo));\n> > > > + v_params[1] = v_output;\n> > > > + v_params[2] = l_oid_const(lc, ioparam);\n> > > > + v_params[3] = l_int32_const(lc, -1);\n> > > > + v_params[4] = l_ptr_const(op->d.iocoerce.escontext,\n> > > > +\n> > > > l_ptr(StructErrorSaveContext));\n> > > >\n> > > > - LLVMBuildStore(b, v_retval, v_resvaluep);\n> > > > + /*\n> > > > + * InputFunctionCallSafe() will write directly into\n> > > > + * *op->resvalue.\n> > > > + */\n> > > > + v_params[5] = v_resvaluep;\n> > > > +\n> > > > + v_success = LLVMBuildCall(b, llvm_pg_func(mod,\n> > > > \"InputFunctionCallSafe\"),\n> > > > + v_params, lengthof(v_params),\n> > > > + \"funccall_iocoerce_in_safe\");\n> > > > +\n> > > > + /*\n> > > > + * Return null if InputFunctionCallSafe() encountered\n> > > > + * an error.\n> > > > + */\n> > > > + v_resnullp = LLVMBuildICmp(b, LLVMIntEQ, v_success,\n> > > > + l_sbool_const(0), \"\");\n> > > > + }\n> > >\n> ...I haven't yet pinpointed down\n> which of the LLVM's asserts it is, nor have I been able to walk\n> through LLVM source code using gdb to figure what the new code is\n> doing wrong. Maybe I'm still missing a trick or two...\n\nI finally managed to analyze the crash by getting the correct LLVM build.\n\nSo the following bits are the culprits:\n\n1. v_output needed to be converted from being reference to a Datum to\nbe reference to char * as follows before passing to\nInputFunctionCallSafe():\n\n- v_params[1] = v_output;\n+ v_params[1] = LLVMBuildIntToPtr(b, v_output,\n+\nl_ptr(LLVMInt8TypeInContext(lc)),\n+ \"\");\n\n2. Assignment of op->d.iocoerce.escontext needed to be changed like this:\n\n v_params[4] = l_ptr_const(op->d.iocoerce.escontext,\n-\nl_ptr(StructErrorSaveContext));\n+ l_ptr(StructNode));\n\n3. v_success needed to be \"zero-extended\" to match in type with\nwhatever s_bool_const() produces, as follows:\n\n+ v_success = LLVMBuildZExt(b, v_success,\nTypeStorageBool, \"\");\n v_resnullp = LLVMBuildICmp(b, LLVMIntEQ, v_success,\n l_sbool_const(0), \"\");\n\nNo more crashes with the above fixes.\n\nAttached shows the delta against the patch I reverted. I'll push the\nfixed up version on Monday.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com",
"msg_date": "Fri, 6 Oct 2023 18:23:37 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "On 2023-Oct-06, Amit Langote wrote:\n\n> 2. Assignment of op->d.iocoerce.escontext needed to be changed like this:\n> \n> v_params[4] = l_ptr_const(op->d.iocoerce.escontext,\n> -\n> l_ptr(StructErrorSaveContext));\n> + l_ptr(StructNode));\n\nOh, so you had to go back to using StructNode in order to get this\nfixed? That's weird. Is it just because InputFunctionCallSafe is\ndefined to take fmNodePtr? (I still fail to see that a pointer to\nErrorSaveContext would differ in any material way from a pointer to\nNode).\n\n\nAnother think I thought was weird is that it would only crash in LLVM5\ndebug and not the other LLVM-enabled animals, but looking closer at the\nbuildfarm results, I think that may have been only because you reverted\ntoo quickly, and phycodorus and petalura didn't actually run with\n7fbc75b26ed8 before you reverted it. Dragonet did make a run with it,\nbut it's marked as \"LLVM optimized\" instead of \"LLVM debug\". I suppose\nthat must be making a difference.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"World domination is proceeding according to plan\" (Andrew Morton)\n\n\n",
"msg_date": "Fri, 6 Oct 2023 12:01:05 +0200",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "On Fri, Oct 6, 2023 at 19:01 Alvaro Herrera <[email protected]> wrote:\n\n> On 2023-Oct-06, Amit Langote wrote:\n>\n> > 2. Assignment of op->d.iocoerce.escontext needed to be changed like this:\n> >\n> > v_params[4] =\n> l_ptr_const(op->d.iocoerce.escontext,\n> > -\n> > l_ptr(StructErrorSaveContext));\n> > + l_ptr(StructNode));\n>\n> Oh, so you had to go back to using StructNode in order to get this\n> fixed? That's weird. Is it just because InputFunctionCallSafe is\n> defined to take fmNodePtr? (I still fail to see that a pointer to\n> ErrorSaveContext would differ in any material way from a pointer to\n> Node).\n\n\nThe difference matters to LLVM’s type system, which considers Node to be a\ntype with 1 sub-type (struct member) and ErrorSaveContext with 4 sub-types.\nIt doesn’t seem to understand that both share the first member.\n\n\nAnother think I thought was weird is that it would only crash in LLVM5\n> debug and not the other LLVM-enabled animals, but looking closer at the\n> buildfarm results, I think that may have been only because you reverted\n> too quickly, and phycodorus and petalura didn't actually run with\n> 7fbc75b26ed8 before you reverted it. Dragonet did make a run with it,\n> but it's marked as \"LLVM optimized\" instead of \"LLVM debug\". I suppose\n> that must be making a difference.\n\n\nAFAICS, only assert-enabled LLVM builds crash.\n\n>\n\nOn Fri, Oct 6, 2023 at 19:01 Alvaro Herrera <[email protected]> wrote:On 2023-Oct-06, Amit Langote wrote:\n\n> 2. Assignment of op->d.iocoerce.escontext needed to be changed like this:\n> \n> v_params[4] = l_ptr_const(op->d.iocoerce.escontext,\n> -\n> l_ptr(StructErrorSaveContext));\n> + l_ptr(StructNode));\n\nOh, so you had to go back to using StructNode in order to get this\nfixed? That's weird. Is it just because InputFunctionCallSafe is\ndefined to take fmNodePtr? (I still fail to see that a pointer to\nErrorSaveContext would differ in any material way from a pointer to\nNode).The difference matters to LLVM’s type system, which considers Node to be a type with 1 sub-type (struct member) and ErrorSaveContext with 4 sub-types. It doesn’t seem to understand that both share the first member.\nAnother think I thought was weird is that it would only crash in LLVM5\ndebug and not the other LLVM-enabled animals, but looking closer at the\nbuildfarm results, I think that may have been only because you reverted\ntoo quickly, and phycodorus and petalura didn't actually run with\n7fbc75b26ed8 before you reverted it. Dragonet did make a run with it,\nbut it's marked as \"LLVM optimized\" instead of \"LLVM debug\". I suppose\nthat must be making a difference.AFAICS, only assert-enabled LLVM builds crash.",
"msg_date": "Fri, 6 Oct 2023 20:27:12 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "Hi,\n\nOn 2023-09-29 13:57:46 +0900, Amit Langote wrote:\n> Thanks. I will push the attached 0001 shortly.\n\nSorry for not looking at this earlier.\n\nHave you done benchmarking to verify that 0001 does not cause performance\nregressions? I'd not be suprised if it did. I'd split the soft-error path into\na separate opcode. For JIT it can largely be implemented using the same code,\neliding the check if it's the non-soft path. Or you can just put it into an\nout-of-line function.\n\nI don't like adding more stuff to ExprState. This one seems particularly\nawkward, because it might be used by more than one level in an expression\nsubtree, which means you really need to save/restore old values when\nrecursing.\n\n\n> @@ -1579,25 +1582,13 @@ ExecInitExprRec(Expr *node, ExprState *state,\n>\n> \t\t\t\t/* lookup the result type's input function */\n> \t\t\t\tscratch.d.iocoerce.finfo_in = palloc0(sizeof(FmgrInfo));\n> -\t\t\t\tscratch.d.iocoerce.fcinfo_data_in = palloc0(SizeForFunctionCallInfo(3));\n> -\n> \t\t\t\tgetTypeInputInfo(iocoerce->resulttype,\n> -\t\t\t\t\t\t\t\t &iofunc, &typioparam);\n> +\t\t\t\t\t\t\t\t &iofunc, &scratch.d.iocoerce.typioparam);\n> \t\t\t\tfmgr_info(iofunc, scratch.d.iocoerce.finfo_in);\n> \t\t\t\tfmgr_info_set_expr((Node *) node, scratch.d.iocoerce.finfo_in);\n> -\t\t\t\tInitFunctionCallInfoData(*scratch.d.iocoerce.fcinfo_data_in,\n> -\t\t\t\t\t\t\t\t\t\t scratch.d.iocoerce.finfo_in,\n> -\t\t\t\t\t\t\t\t\t\t 3, InvalidOid, NULL, NULL);\n>\n> -\t\t\t\t/*\n> -\t\t\t\t * We can preload the second and third arguments for the input\n> -\t\t\t\t * function, since they're constants.\n> -\t\t\t\t */\n> -\t\t\t\tfcinfo_in = scratch.d.iocoerce.fcinfo_data_in;\n> -\t\t\t\tfcinfo_in->args[1].value = ObjectIdGetDatum(typioparam);\n> -\t\t\t\tfcinfo_in->args[1].isnull = false;\n> -\t\t\t\tfcinfo_in->args[2].value = Int32GetDatum(-1);\n> -\t\t\t\tfcinfo_in->args[2].isnull = false;\n> +\t\t\t\t/* Use the ErrorSaveContext passed by the caller. */\n> +\t\t\t\tscratch.d.iocoerce.escontext = state->escontext;\n>\n> \t\t\t\tExprEvalPushStep(state, &scratch);\n> \t\t\t\tbreak;\n\nI think it's likely that removing the optimization of not needing to set these\narguments ahead of time will result in a performance regression. 
Not to speak\nof initializing the fcinfo from scratch on every evaluation of the expression.\n\nI think this shouldn't not be merged as is.\n\n\n\n> src/backend/parser/gram.y | 348 +++++-\n\nThis causes a nontrivial increase in the size of the parser (~5% in an\noptimized build here), I wonder if we can do better.\n\n\n> +/*\n> + * Push steps to evaluate a JsonExpr and its various subsidiary expressions.\n> + */\n> +static void\n> +ExecInitJsonExpr(JsonExpr *jexpr, ExprState *state,\n> +\t\t\t\t Datum *resv, bool *resnull,\n> +\t\t\t\t ExprEvalStep *scratch)\n> +{\n> +\tJsonExprState *jsestate = palloc0(sizeof(JsonExprState));\n> +\tJsonExprPreEvalState *pre_eval = &jsestate->pre_eval;\n> +\tListCell *argexprlc;\n> +\tListCell *argnamelc;\n> +\tint\t\t\tskip_step_off = -1;\n> +\tint\t\t\tpassing_args_step_off = -1;\n> +\tint\t\t\tcoercion_step_off = -1;\n> +\tint\t\t\tcoercion_finish_step_off = -1;\n> +\tint\t\t\tbehavior_step_off = -1;\n> +\tint\t\t\tonempty_expr_step_off = -1;\n> +\tint\t\t\tonempty_jump_step_off = -1;\n> +\tint\t\t\tonerror_expr_step_off = -1;\n> +\tint\t\t\tonerror_jump_step_off = -1;\n> +\tint\t\t\tresult_coercion_jump_step_off = -1;\n> +\tList\t *adjust_jumps = NIL;\n> +\tListCell *lc;\n> +\tExprEvalStep *as;\n> +\n> +\tjsestate->jsexpr = jexpr;\n> +\n> +\t/*\n> +\t * Add steps to compute formatted_expr, pathspec, and PASSING arg\n> +\t * expressions as things that must be evaluated *before* the actual JSON\n> +\t * path expression.\n> +\t */\n> +\tExecInitExprRec((Expr *) jexpr->formatted_expr, state,\n> +\t\t\t\t\t&pre_eval->formatted_expr.value,\n> +\t\t\t\t\t&pre_eval->formatted_expr.isnull);\n> +\tExecInitExprRec((Expr *) jexpr->path_spec, state,\n> +\t\t\t\t\t&pre_eval->pathspec.value,\n> +\t\t\t\t\t&pre_eval->pathspec.isnull);\n> +\n> +\t/*\n> +\t * Before pushing steps for PASSING args, push a step to decide whether to\n> +\t * skip evaluating the args and the JSON path expression depending on\n> +\t * whether either of formatted_expr and pathspec is NULL; see\n> +\t * ExecEvalJsonExprSkip().\n> +\t */\n> +\tscratch->opcode = EEOP_JSONEXPR_SKIP;\n> +\tscratch->d.jsonexpr_skip.jsestate = jsestate;\n> +\tskip_step_off = state->steps_len;\n> +\tExprEvalPushStep(state, scratch);\n\nCould SKIP be implemented using EEOP_JUMP_IF_NULL with a bit of work? I see\nthat it sets jsestate->post_eval.jcstate, but I don't understand why it needs\nto be done that way. /* ExecEvalJsonExprCoercion() depends on this. */ doesn't\nexplain that much.\n\n\n> +\t/* PASSING args. */\n> +\tjsestate->pre_eval.args = NIL;\n> +\tpassing_args_step_off = state->steps_len;\n> +\tforboth(argexprlc, jexpr->passing_values,\n> +\t\t\targnamelc, jexpr->passing_names)\n> +\t{\n> +\t\tExpr\t *argexpr = (Expr *) lfirst(argexprlc);\n> +\t\tString\t *argname = lfirst_node(String, argnamelc);\n> +\t\tJsonPathVariable *var = palloc(sizeof(*var));\n> +\n> +\t\tvar->name = pstrdup(argname->sval);\n\nWhy does this need to be strdup'd?\n\n\n> +\t/* Step for the actual JSON path evaluation; see ExecEvalJsonExpr(). */\n> +\tscratch->opcode = EEOP_JSONEXPR_PATH;\n> +\tscratch->d.jsonexpr.jsestate = jsestate;\n> +\tExprEvalPushStep(state, scratch);\n> +\n> +\t/*\n> +\t * Step to handle ON ERROR and ON EMPTY behavior. 
Also, to handle errors\n> +\t * that may occur during coercion handling.\n> +\t *\n> +\t * See ExecEvalJsonExprBehavior().\n> +\t */\n> +\tscratch->opcode = EEOP_JSONEXPR_BEHAVIOR;\n> +\tscratch->d.jsonexpr_behavior.jsestate = jsestate;\n> +\tbehavior_step_off = state->steps_len;\n> +\tExprEvalPushStep(state, scratch);\n\n From what I can tell there a) can never be a step between EEOP_JSONEXPR_PATH\nand EEOP_JSONEXPR_BEHAVIOR b) EEOP_JSONEXPR_PATH ends with an unconditional\nbranch. What's the point of the two different steps here?\n\n\n\n\n>\n> +\t\tEEO_CASE(EEOP_JSONEXPR_PATH)\n> +\t\t{\n> +\t\t\t/* too complex for an inline implementation */\n> +\t\t\tExecEvalJsonExpr(state, op, econtext);\n> +\t\t\tEEO_NEXT();\n> +\t\t}\n\nWhy does EEOP_JSONEXPR_PATH call ExecEvalJsonExpr, the names don't match...\n\n\n> +\t\tEEO_CASE(EEOP_JSONEXPR_SKIP)\n> +\t\t{\n> +\t\t\t/* too complex for an inline implementation */\n> +\t\t\tEEO_JUMP(ExecEvalJsonExprSkip(state, op));\n> +\t\t}\n...\n\n\n> +\t\tEEO_CASE(EEOP_JSONEXPR_COERCION_FINISH)\n> +\t\t{\n> +\t\t\t/* too complex for an inline implementation */\n> +\t\t\tEEO_JUMP(ExecEvalJsonExprCoercionFinish(state, op));\n> +\t\t}\n\nThis seems to just return op->d.jsonexpr_coercion_finish.jump_coercion_error\nor op->d.jsonexpr_coercion_finish.jump_coercion_done. Which makes me think\nit'd be better to return a boolean? Particularly because that's how you\nalready implemented it for JIT (except that you did it by hardcoding the jump\nstep to compare to, which seems odd).\n\n\nSeparately, why do we even need a jump for both cases, and not just for the\nerror case?\n\n\n> +\t\tEEO_CASE(EEOP_JSONEXPR_BEHAVIOR)\n> +\t\t{\n> +\t\t\t/* too complex for an inline implementation */\n> +\t\t\tEEO_JUMP(ExecEvalJsonExprBehavior(state, op));\n> +\t\t}\n> +\n> +\t\tEEO_CASE(EEOP_JSONEXPR_COERCION)\n> +\t\t{\n> +\t\t\t/* too complex for an inline implementation */\n> +\t\t\tEEO_JUMP(ExecEvalJsonExprCoercion(state, op, econtext,\n> +\t\t\t\t\t\t\t\t\t\t\t *op->resvalue, *op->resnull));\n> +\t\t}\n\nI wonder if this is the right design for this op - you're declaring this to be\nop not worth implementing inline, yet you then have it implemented by hand for JIT.\n\n\n> +/*\n> + * Evaluate given JsonExpr by performing the specified JSON operation.\n> + *\n> + * This also populates the JsonExprPostEvalState with the information needed\n> + * by the subsequent steps that handle the specified JsonBehavior.\n> + */\n> +void\n> +ExecEvalJsonExpr(ExprState *state, ExprEvalStep *op, ExprContext *econtext)\n> +{\n> +\tJsonExprState *jsestate = op->d.jsonexpr.jsestate;\n> +\tJsonExprPreEvalState *pre_eval = &jsestate->pre_eval;\n> +\tJsonExprPostEvalState *post_eval = &jsestate->post_eval;\n> +\tJsonExpr *jexpr = jsestate->jsexpr;\n> +\tDatum\t\titem;\n> +\tDatum\t\tres = (Datum) 0;\n> +\tbool\t\tresnull = true;\n> +\tJsonPath *path;\n> +\tbool\t\tthrow_error = (jexpr->on_error->btype == JSON_BEHAVIOR_ERROR);\n> +\tbool\t *error = &post_eval->error;\n> +\tbool\t *empty = &post_eval->empty;\n> +\n> +\titem = pre_eval->formatted_expr.value;\n> +\tpath = DatumGetJsonPathP(pre_eval->pathspec.value);\n> +\n> +\t/* Reset JsonExprPostEvalState for this evaluation. */\n> +\tmemset(post_eval, 0, sizeof(*post_eval));\n> +\n> +\tswitch (jexpr->op)\n> +\t{\n> +\t\tcase JSON_EXISTS_OP:\n> +\t\t\t{\n> +\t\t\t\tbool\t\texists = JsonPathExists(item, path,\n> +\t\t\t\t\t\t\t\t\t\t\t\t\t!throw_error ? 
error : NULL,\n> +\t\t\t\t\t\t\t\t\t\t\t\t\tpre_eval->args);\n> +\n> +\t\t\t\tpost_eval->jcstate = jsestate->result_jcstate;\n> +\t\t\t\tif (*error)\n> +\t\t\t\t{\n> +\t\t\t\t\t*op->resnull = true;\n> +\t\t\t\t\t*op->resvalue = (Datum) 0;\n> +\t\t\t\t\treturn;\n> +\t\t\t\t}\n> +\n> +\t\t\t\tresnull = false;\n> +\t\t\t\tres = BoolGetDatum(exists);\n> +\t\t\t\tbreak;\n> +\t\t\t}\n\nKinda seems there should be a EEOP_JSON_EXISTS/JSON_QUERY_OP op, instead of\nimplementing it all inside ExecEvalJsonExpr. I think this might obsolete\nneeding to rediscover that the value is null in SKIP etc?\n\n\n> +\t\tcase JSON_QUERY_OP:\n> +\t\t\tres = JsonPathQuery(item, path, jexpr->wrapper, empty,\n> +\t\t\t\t\t\t\t\t!throw_error ? error : NULL,\n> +\t\t\t\t\t\t\t\tpre_eval->args);\n> +\n> +\t\t\tpost_eval->jcstate = jsestate->result_jcstate;\n> +\t\t\tif (*error)\n> +\t\t\t{\n> +\t\t\t\t*op->resnull = true;\n> +\t\t\t\t*op->resvalue = (Datum) 0;\n> +\t\t\t\treturn;\n> +\t\t\t}\n> +\t\t\tresnull = !DatumGetPointer(res);\n\nShoulnd't this check empty?\n\nFWIW, it's also pretty odd that JsonPathQuery() once\n\t\treturn (Datum) 0;\nand later does\n\treturn PointerGetDatum(NULL);\n\n\n> +\t\tcase JSON_VALUE_OP:\n> +\t\t\t{\n> +\t\t\t\tJsonbValue *jbv = JsonPathValue(item, path, empty,\n> +\t\t\t\t\t\t\t\t\t\t\t\t!throw_error ? error : NULL,\n> +\t\t\t\t\t\t\t\t\t\t\t\tpre_eval->args);\n> +\n> +\t\t\t\t/* Might get overridden below by an item_jcstate. */\n> +\t\t\t\tpost_eval->jcstate = jsestate->result_jcstate;\n> +\t\t\t\tif (*error)\n> +\t\t\t\t{\n> +\t\t\t\t\t*op->resnull = true;\n> +\t\t\t\t\t*op->resvalue = (Datum) 0;\n> +\t\t\t\t\treturn;\n> +\t\t\t\t}\n> +\n> +\t\t\t\tif (!jbv)\t\t/* NULL or empty */\n> +\t\t\t\t{\n> +\t\t\t\t\tresnull = true;\n> +\t\t\t\t\tbreak;\n> +\t\t\t\t}\n> +\n> +\t\t\t\tAssert(!*empty);\n> +\n> +\t\t\t\tresnull = false;\n> +\n> +\t\t\t\t/* Coerce scalar item to the output type */\n> +\n> +\t\t\t\t/*\n> +\t\t\t\t * If the requested output type is json(b), use\n> +\t\t\t\t * JsonExprState.result_coercion to do the coercion.\n> +\t\t\t\t */\n> +\t\t\t\tif (jexpr->returning->typid == JSONOID ||\n> +\t\t\t\t\tjexpr->returning->typid == JSONBOID)\n> +\t\t\t\t{\n> +\t\t\t\t\t/* Use result_coercion from json[b] to the output type */\n> +\t\t\t\t\tres = JsonbPGetDatum(JsonbValueToJsonb(jbv));\n> +\t\t\t\t\tbreak;\n> +\t\t\t\t}\n> +\n> +\t\t\t\t/*\n> +\t\t\t\t * Else, use one of the item_coercions.\n> +\t\t\t\t *\n> +\t\t\t\t * Error out if no cast exists to coerce SQL/JSON item to the\n> +\t\t\t\t * the output type.\n> +\t\t\t\t */\n> +\t\t\t\tres = ExecPrepareJsonItemCoercion(jbv,\n> +\t\t\t\t\t\t\t\t\t\t\t\t jsestate->item_jcstates,\n> +\t\t\t\t\t\t\t\t\t\t\t\t &post_eval->jcstate);\n> +\t\t\t\tif (post_eval->jcstate &&\n> +\t\t\t\t\tpost_eval->jcstate->coercion &&\n> +\t\t\t\t\t(post_eval->jcstate->coercion->via_io ||\n> +\t\t\t\t\t post_eval->jcstate->coercion->via_populate))\n> +\t\t\t\t{\n> +\t\t\t\t\tif (!throw_error)\n> +\t\t\t\t\t{\n> +\t\t\t\t\t\t*op->resnull = true;\n> +\t\t\t\t\t\t*op->resvalue = (Datum) 0;\n> +\t\t\t\t\t\treturn;\n> +\t\t\t\t\t}\n> +\n> +\t\t\t\t\t/*\n> +\t\t\t\t\t * Coercion via I/O means here that the cast to the target\n> +\t\t\t\t\t * type simply does not exist.\n> +\t\t\t\t\t */\n> +\t\t\t\t\tereport(ERROR,\n> +\t\t\t\t\t\t\t(errcode(ERRCODE_SQL_JSON_ITEM_CANNOT_BE_CAST_TO_TARGET_TYPE),\n> +\t\t\t\t\t\t\t errmsg(\"SQL/JSON item cannot be cast to target type\")));\n> +\t\t\t\t}\n> +\t\t\t\tbreak;\n> +\t\t\t}\n> +\n> +\t\tdefault:\n> +\t\t\telog(ERROR, 
\"unrecognized SQL/JSON expression op %d\", jexpr->op);\n> +\t\t\t*op->resnull = true;\n> +\t\t\t*op->resvalue = (Datum) 0;\n> +\t\t\treturn;\n> +\t}\n> +\n> +\t/*\n> +\t * If the ON EMPTY behavior is to cause an error, do so here. Other\n> +\t * behaviors will be handled in ExecEvalJsonExprBehavior().\n> +\t */\n> +\tif (*empty)\n> +\t{\n> +\t\tAssert(jexpr->on_empty);\t/* it is not JSON_EXISTS */\n> +\n> +\t\tif (jexpr->on_empty->btype == JSON_BEHAVIOR_ERROR)\n> +\t\t{\n> +\t\t\tif (!throw_error)\n> +\t\t\t{\n> +\t\t\t\t*op->resnull = true;\n> +\t\t\t\t*op->resvalue = (Datum) 0;\n> +\t\t\t\treturn;\n> +\t\t\t}\n> +\n> +\t\t\tereport(ERROR,\n> +\t\t\t\t\t(errcode(ERRCODE_NO_SQL_JSON_ITEM),\n> +\t\t\t\t\t errmsg(\"no SQL/JSON item\")));\n> +\t\t}\n> +\t}\n> +\n> +\t*op->resvalue = res;\n> +\t*op->resnull = resnull;\n> +}\n> +\n> +/*\n> + * Skip calling ExecEvalJson() on the given JsonExpr?\n\nI don't think that function exists.\n\n\n> + * Returns the step address to be performed next.\n> + */\n> +int\n> +ExecEvalJsonExprSkip(ExprState *state, ExprEvalStep *op)\n> +{\n> +\tJsonExprState *jsestate = op->d.jsonexpr_skip.jsestate;\n> +\n> +\t/*\n> +\t * Skip if either of the input expressions has turned out to be NULL,\n> +\t * though do execute domain checks for NULLs, which are handled by the\n> +\t * coercion step.\n> +\t */\n> +\tif (jsestate->pre_eval.formatted_expr.isnull ||\n> +\t\tjsestate->pre_eval.pathspec.isnull)\n> +\t{\n> +\t\t*op->resvalue = (Datum) 0;\n> +\t\t*op->resnull = true;\n> +\n> +\t\t/* ExecEvalJsonExprCoercion() depends on this. */\n> +\t\tjsestate->post_eval.jcstate = jsestate->result_jcstate;\n> +\n> +\t\treturn op->d.jsonexpr_skip.jump_coercion;\n> +\t}\n> +\n> +\t/*\n> +\t * Go evaluate the PASSING args if any and subsequently JSON path itself.\n> +\t */\n> +\treturn op->d.jsonexpr_skip.jump_passing_args;\n> +}\n> +\n> +/*\n> + * Returns the step address to perform the JsonBehavior applicable to\n> + * the JSON item that resulted from evaluating the given JsonExpr.\n> + *\n> + * Returns the step address to be performed next.\n> + */\n> +int\n> +ExecEvalJsonExprBehavior(ExprState *state, ExprEvalStep *op)\n> +{\n> +\tJsonExprState *jsestate = op->d.jsonexpr_behavior.jsestate;\n> +\tJsonExprPostEvalState *post_eval = &jsestate->post_eval;\n> +\tJsonBehavior *behavior = NULL;\n> +\tint\t\t\tjump_to = -1;\n> +\n> +\tif (post_eval->error || post_eval->coercion_error)\n> +\t{\n> +\t\tbehavior = jsestate->jsexpr->on_error;\n> +\t\tjump_to = op->d.jsonexpr_behavior.jump_onerror_expr;\n> +\t}\n> +\telse if (post_eval->empty)\n> +\t{\n> +\t\tbehavior = jsestate->jsexpr->on_empty;\n> +\t\tjump_to = op->d.jsonexpr_behavior.jump_onempty_expr;\n> +\t}\n> +\telse if (!post_eval->coercion_done)\n> +\t{\n> +\t\t/*\n> +\t\t * If no error or the JSON item is not empty, directly go to the\n> +\t\t * coercion step to coerce the item as is.\n> +\t\t */\n> +\t\treturn op->d.jsonexpr_behavior.jump_coercion;\n> +\t}\n> +\n> +\tAssert(behavior);\n> +\n> +\t/*\n> +\t * Set up for coercion step that will run to coerce a non-default behavior\n> +\t * value. It should use result_coercion, if any. 
Errors that may occur\n> +\t * should be thrown for JSON ops other than JSON_VALUE_OP.\n> +\t */\n> +\tif (behavior->btype != JSON_BEHAVIOR_DEFAULT)\n> +\t{\n> +\t\tpost_eval->jcstate = jsestate->result_jcstate;\n> +\t\tpost_eval->coercing_behavior_expr = true;\n> +\t}\n> +\n> +\tAssert(jump_to >= 0);\n> +\treturn jump_to;\n> +}\n> +\n> +/*\n> + * Evaluate or return the step address to evaluate a coercion of a JSON item\n> + * to the target type. The former if the coercion must be done right away by\n> + * calling the target type's input function, and for some types, by calling\n> + * json_populate_type().\n> + *\n> + * Returns the step address to be performed next.\n> + */\n> +int\n> +ExecEvalJsonExprCoercion(ExprState *state, ExprEvalStep *op,\n> +\t\t\t\t\t\t ExprContext *econtext,\n> +\t\t\t\t\t\t Datum res, bool resnull)\n> +{\n> +\tJsonExprState *jsestate = op->d.jsonexpr_coercion.jsestate;\n> +\tJsonExpr *jexpr = jsestate->jsexpr;\n> +\tJsonExprPostEvalState *post_eval = &jsestate->post_eval;\n> +\tJsonCoercionState *jcstate = post_eval->jcstate;\n> +\tchar\t *val_string = NULL;\n> +\tbool\t\tomit_quotes = false;\n> +\n> +\tswitch (jexpr->op)\n> +\t{\n> +\t\tcase JSON_EXISTS_OP:\n> +\t\t\tif (jcstate && jcstate->jump_eval_expr >= 0)\n> +\t\t\t\treturn jcstate->jump_eval_expr;\n\nShouldn't this be a compile-time check and instead be handled by simply not\nemitting a step instead?\n\n\n> +\t\t\t/* No coercion needed. */\n> +\t\t\tpost_eval->coercion_done = true;\n> +\t\t\treturn op->d.jsonexpr_coercion.jump_coercion_done;\n\nWhich then means we also don't need to emit anything here, no?\n\n\n> +/*\n> + * Prepare SQL/JSON item coercion to the output type. Returned a datum of the\n> + * corresponding SQL type and a pointer to the coercion state.\n> + */\n> +static Datum\n> +ExecPrepareJsonItemCoercion(JsonbValue *item, List *item_jcstates,\n> +\t\t\t\t\t\t\tJsonCoercionState **p_item_jcstate)\n\nI might have missed it, but if not: The whole way the coercion stuff works\nneeds a decent comment explaining how things fit together.\n\nWhat does \"item\" really mean here?\n\n\n> +{\n> +\tJsonCoercionState *item_jcstate;\n> +\tDatum\t\tres;\n> +\tJsonbValue\tbuf;\n> +\n> +\tif (item->type == jbvBinary &&\n> +\t\tJsonContainerIsScalar(item->val.binary.data))\n> +\t{\n> +\t\tbool\t\tres PG_USED_FOR_ASSERTS_ONLY;\n> +\n> +\t\tres = JsonbExtractScalar(item->val.binary.data, &buf);\n> +\t\titem = &buf;\n> +\t\tAssert(res);\n> +\t}\n> +\n> +\t/* get coercion state reference and datum of the corresponding SQL type */\n> +\tswitch (item->type)\n> +\t{\n> +\t\tcase jbvNull:\n> +\t\t\titem_jcstate = list_nth(item_jcstates, JsonItemTypeNull);\n\nThis seems quite odd. We apparently have a fixed-length array, where specific\noffsets have specific meanings, yet it's encoded as a list that's then\naccessed with constant offsets?\n\n\nRight now ExecEvalJsonExpr() stores what ExecPrepareJsonItemCoercion() chooses\nin post_eval->jcstate. Which the immediately following\nExecEvalJsonExprBehavior() then digs out again. Then there's also control flow\nvia post_eval->coercing_behavior_expr. This is ... not nice.\n\n\nISTM that jsestate should have an array of jump targets, indexed by\nitem->type. Which, for llvm IR, you can encode as a switch statement, instead\nof doing control flow via JsonExprState/JsonExprPostEvalState. 
There's\nobviously a bit more needed, but I think something like that should work, and\nsimplify things a fair bit.\n\n\n\n> @@ -15711,6 +15721,192 @@ func_expr_common_subexpr:\n> \t\t\t\t\tn->location = @1;\n> \t\t\t\t\t$$ = (Node *) n;\n> \t\t\t\t}\n> +\t\t\t| JSON_QUERY '('\n> +\t\t\t\tjson_api_common_syntax\n> +\t\t\t\tjson_returning_clause_opt\n> +\t\t\t\tjson_wrapper_behavior\n> +\t\t\t\tjson_quotes_clause_opt\n> +\t\t\t')'\n> +\t\t\t\t{\n> +\t\t\t\t\tJsonFuncExpr *n = makeNode(JsonFuncExpr);\n> +\n> +\t\t\t\t\tn->op = JSON_QUERY_OP;\n> +\t\t\t\t\tn->common = (JsonCommon *) $3;\n> +\t\t\t\t\tn->output = (JsonOutput *) $4;\n> +\t\t\t\t\tn->wrapper = $5;\n> +\t\t\t\t\tif (n->wrapper != JSW_NONE && $6 != JS_QUOTES_UNSPEC)\n> +\t\t\t\t\t\tereport(ERROR,\n> +\t\t\t\t\t\t\t\t(errcode(ERRCODE_SYNTAX_ERROR),\n> +\t\t\t\t\t\t\t\t errmsg(\"SQL/JSON QUOTES behavior must not be specified when WITH WRAPPER is used\"),\n> +\t\t\t\t\t\t\t\t parser_errposition(@6)));\n> +\t\t\t\t\tn->quotes = $6;\n> +\t\t\t\t\tn->location = @1;\n> +\t\t\t\t\t$$ = (Node *) n;\n> +\t\t\t\t}\n> +\t\t\t| JSON_QUERY '('\n> +\t\t\t\tjson_api_common_syntax\n> +\t\t\t\tjson_returning_clause_opt\n> +\t\t\t\tjson_wrapper_behavior\n> +\t\t\t\tjson_quotes_clause_opt\n> +\t\t\t\tjson_query_behavior ON EMPTY_P\n> +\t\t\t')'\n> +\t\t\t\t{\n> +\t\t\t\t\tJsonFuncExpr *n = makeNode(JsonFuncExpr);\n> +\n> +\t\t\t\t\tn->op = JSON_QUERY_OP;\n> +\t\t\t\t\tn->common = (JsonCommon *) $3;\n> +\t\t\t\t\tn->output = (JsonOutput *) $4;\n> +\t\t\t\t\tn->wrapper = $5;\n> +\t\t\t\t\tif (n->wrapper != JSW_NONE && $6 != JS_QUOTES_UNSPEC)\n> +\t\t\t\t\t\tereport(ERROR,\n> +\t\t\t\t\t\t\t\t(errcode(ERRCODE_SYNTAX_ERROR),\n> +\t\t\t\t\t\t\t\t errmsg(\"SQL/JSON QUOTES behavior must not be specified when WITH WRAPPER is used\"),\n> +\t\t\t\t\t\t\t\t parser_errposition(@6)));\n> +\t\t\t\t\tn->quotes = $6;\n> +\t\t\t\t\tn->on_empty = $7;\n> +\t\t\t\t\tn->location = @1;\n> +\t\t\t\t\t$$ = (Node *) n;\n> +\t\t\t\t}\n> +\t\t\t| JSON_QUERY '('\n> +\t\t\t\tjson_api_common_syntax\n> +\t\t\t\tjson_returning_clause_opt\n> +\t\t\t\tjson_wrapper_behavior\n> +\t\t\t\tjson_quotes_clause_opt\n> +\t\t\t\tjson_query_behavior ON ERROR_P\n> +\t\t\t')'\n> +\t\t\t\t{\n> +\t\t\t\t\tJsonFuncExpr *n = makeNode(JsonFuncExpr);\n> +\n> +\t\t\t\t\tn->op = JSON_QUERY_OP;\n> +\t\t\t\t\tn->common = (JsonCommon *) $3;\n> +\t\t\t\t\tn->output = (JsonOutput *) $4;\n> +\t\t\t\t\tn->wrapper = $5;\n> +\t\t\t\t\tif (n->wrapper != JSW_NONE && $6 != JS_QUOTES_UNSPEC)\n> +\t\t\t\t\t\tereport(ERROR,\n> +\t\t\t\t\t\t\t\t(errcode(ERRCODE_SYNTAX_ERROR),\n> +\t\t\t\t\t\t\t\t errmsg(\"SQL/JSON QUOTES behavior must not be specified when WITH WRAPPER is used\"),\n> +\t\t\t\t\t\t\t\t parser_errposition(@6)));\n> +\t\t\t\t\tn->quotes = $6;\n> +\t\t\t\t\tn->on_error = $7;\n> +\t\t\t\t\tn->location = @1;\n> +\t\t\t\t\t$$ = (Node *) n;\n> +\t\t\t\t}\n> +\t\t\t| JSON_QUERY '('\n> +\t\t\t\tjson_api_common_syntax\n> +\t\t\t\tjson_returning_clause_opt\n> +\t\t\t\tjson_wrapper_behavior\n> +\t\t\t\tjson_quotes_clause_opt\n> +\t\t\t\tjson_query_behavior ON EMPTY_P\n> +\t\t\t\tjson_query_behavior ON ERROR_P\n> +\t\t\t')'\n> +\t\t\t\t{\n> +\t\t\t\t\tJsonFuncExpr *n = makeNode(JsonFuncExpr);\n> +\n> +\t\t\t\t\tn->op = JSON_QUERY_OP;\n> +\t\t\t\t\tn->common = (JsonCommon *) $3;\n> +\t\t\t\t\tn->output = (JsonOutput *) $4;\n> +\t\t\t\t\tn->wrapper = $5;\n> +\t\t\t\t\tif (n->wrapper != JSW_NONE && $6 != JS_QUOTES_UNSPEC)\n> +\t\t\t\t\t\tereport(ERROR,\n> +\t\t\t\t\t\t\t\t(errcode(ERRCODE_SYNTAX_ERROR),\n> 
+\t\t\t\t\t\t\t\t errmsg(\"SQL/JSON QUOTES behavior must not be specified when WITH WRAPPER is used\"),\n> +\t\t\t\t\t\t\t\t parser_errposition(@6)));\n> +\t\t\t\t\tn->quotes = $6;\n> +\t\t\t\t\tn->on_empty = $7;\n> +\t\t\t\t\tn->on_error = $10;\n> +\t\t\t\t\tn->location = @1;\n> +\t\t\t\t\t$$ = (Node *) n;\n> +\t\t\t\t}\n\nI'm sure we can find a way to deduplicate this.\n\n\n> +\t\t\t| JSON_EXISTS '('\n> +\t\t\t\tjson_api_common_syntax\n> +\t\t\t\tjson_returning_clause_opt\n> +\t\t\t')'\n> +\t\t\t\t{\n> +\t\t\t\t\tJsonFuncExpr *p = makeNode(JsonFuncExpr);\n> +\n> +\t\t\t\t\tp->op = JSON_EXISTS_OP;\n> +\t\t\t\t\tp->common = (JsonCommon *) $3;\n> +\t\t\t\t\tp->output = (JsonOutput *) $4;\n> +\t\t\t\t\tp->location = @1;\n> +\t\t\t\t\t$$ = (Node *) p;\n> +\t\t\t\t}\n> +\t\t\t| JSON_EXISTS '('\n> +\t\t\t\tjson_api_common_syntax\n> +\t\t\t\tjson_returning_clause_opt\n> +\t\t\t\tjson_exists_behavior ON ERROR_P\n> +\t\t\t')'\n> +\t\t\t\t{\n> +\t\t\t\t\tJsonFuncExpr *p = makeNode(JsonFuncExpr);\n> +\n> +\t\t\t\t\tp->op = JSON_EXISTS_OP;\n> +\t\t\t\t\tp->common = (JsonCommon *) $3;\n> +\t\t\t\t\tp->output = (JsonOutput *) $4;\n> +\t\t\t\t\tp->on_error = $5;\n> +\t\t\t\t\tp->location = @1;\n> +\t\t\t\t\t$$ = (Node *) p;\n> +\t\t\t\t}\n> +\t\t\t| JSON_VALUE '('\n> +\t\t\t\tjson_api_common_syntax\n> +\t\t\t\tjson_returning_clause_opt\n> +\t\t\t')'\n> +\t\t\t\t{\n> +\t\t\t\t\tJsonFuncExpr *n = makeNode(JsonFuncExpr);\n> +\n> +\t\t\t\t\tn->op = JSON_VALUE_OP;\n> +\t\t\t\t\tn->common = (JsonCommon *) $3;\n> +\t\t\t\t\tn->output = (JsonOutput *) $4;\n> +\t\t\t\t\tn->location = @1;\n> +\t\t\t\t\t$$ = (Node *) n;\n> +\t\t\t\t}\n> +\n> +\t\t\t| JSON_VALUE '('\n> +\t\t\t\tjson_api_common_syntax\n> +\t\t\t\tjson_returning_clause_opt\n> +\t\t\t\tjson_value_behavior ON EMPTY_P\n> +\t\t\t')'\n> +\t\t\t\t{\n> +\t\t\t\t\tJsonFuncExpr *n = makeNode(JsonFuncExpr);\n> +\n> +\t\t\t\t\tn->op = JSON_VALUE_OP;\n> +\t\t\t\t\tn->common = (JsonCommon *) $3;\n> +\t\t\t\t\tn->output = (JsonOutput *) $4;\n> +\t\t\t\t\tn->on_empty = $5;\n> +\t\t\t\t\tn->location = @1;\n> +\t\t\t\t\t$$ = (Node *) n;\n> +\t\t\t\t}\n> +\t\t\t| JSON_VALUE '('\n> +\t\t\t\tjson_api_common_syntax\n> +\t\t\t\tjson_returning_clause_opt\n> +\t\t\t\tjson_value_behavior ON ERROR_P\n> +\t\t\t')'\n> +\t\t\t\t{\n> +\t\t\t\t\tJsonFuncExpr *n = makeNode(JsonFuncExpr);\n> +\n> +\t\t\t\t\tn->op = JSON_VALUE_OP;\n> +\t\t\t\t\tn->common = (JsonCommon *) $3;\n> +\t\t\t\t\tn->output = (JsonOutput *) $4;\n> +\t\t\t\t\tn->on_error = $5;\n> +\t\t\t\t\tn->location = @1;\n> +\t\t\t\t\t$$ = (Node *) n;\n> +\t\t\t\t}\n> +\n> +\t\t\t| JSON_VALUE '('\n> +\t\t\t\tjson_api_common_syntax\n> +\t\t\t\tjson_returning_clause_opt\n> +\t\t\t\tjson_value_behavior ON EMPTY_P\n> +\t\t\t\tjson_value_behavior ON ERROR_P\n> +\t\t\t')'\n> +\t\t\t\t{\n> +\t\t\t\t\tJsonFuncExpr *n = makeNode(JsonFuncExpr);\n> +\n> +\t\t\t\t\tn->op = JSON_VALUE_OP;\n> +\t\t\t\t\tn->common = (JsonCommon *) $3;\n> +\t\t\t\t\tn->output = (JsonOutput *) $4;\n> +\t\t\t\t\tn->on_empty = $5;\n> +\t\t\t\t\tn->on_error = $8;\n> +\t\t\t\t\tn->location = @1;\n> +\t\t\t\t\t$$ = (Node *) n;\n> +\t\t\t\t}\n> \t\t\t;\n\nAnd this.\n\n\n\n> +json_query_behavior:\n> +\t\t\tERROR_P\t\t{ $$ = makeJsonBehavior(JSON_BEHAVIOR_ERROR, NULL, @1); }\n> +\t\t\t| NULL_P\t\t{ $$ = makeJsonBehavior(JSON_BEHAVIOR_NULL, NULL, @1); }\n> +\t\t\t| DEFAULT a_expr\t{ $$ = makeJsonBehavior(JSON_BEHAVIOR_DEFAULT, $2, @1); }\n> +\t\t\t| EMPTY_P ARRAY\t{ $$ = makeJsonBehavior(JSON_BEHAVIOR_EMPTY_ARRAY, NULL, @1); }\n> +\t\t\t| EMPTY_P OBJECT_P\t{ $$ = 
makeJsonBehavior(JSON_BEHAVIOR_EMPTY_OBJECT, NULL, @1); }\n> +\t\t\t/* non-standard, for Oracle compatibility only */\n> +\t\t\t| EMPTY_P\t\t{ $$ = makeJsonBehavior(JSON_BEHAVIOR_EMPTY_ARRAY, NULL, @1); }\n> +\t\t;\n\n\n> +json_exists_behavior:\n> +\t\t\tERROR_P\t\t{ $$ = makeJsonBehavior(JSON_BEHAVIOR_ERROR, NULL, @1); }\n> +\t\t\t| TRUE_P\t\t{ $$ = makeJsonBehavior(JSON_BEHAVIOR_TRUE, NULL, @1); }\n> +\t\t\t| FALSE_P\t\t{ $$ = makeJsonBehavior(JSON_BEHAVIOR_FALSE, NULL, @1); }\n> +\t\t\t| UNKNOWN\t\t{ $$ = makeJsonBehavior(JSON_BEHAVIOR_UNKNOWN, NULL, @1); }\n> +\t\t;\n> +\n> +json_value_behavior:\n> +\t\t\tNULL_P\t\t{ $$ = makeJsonBehavior(JSON_BEHAVIOR_NULL, NULL, @1); }\n> +\t\t\t| ERROR_P\t\t{ $$ = makeJsonBehavior(JSON_BEHAVIOR_ERROR, NULL, @1); }\n> +\t\t\t| DEFAULT a_expr\t{ $$ = makeJsonBehavior(JSON_BEHAVIOR_DEFAULT, $2, @1); }\n> +\t\t;\n\nThis also seems like it could use some dedup.\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 6 Oct 2023 14:48:56 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "On 2023-Oct-06, Andres Freund wrote:\n\n> > +json_query_behavior:\n> > +\t\t\tERROR_P\t\t{ $$ = makeJsonBehavior(JSON_BEHAVIOR_ERROR, NULL, @1); }\n> > +\t\t\t| NULL_P\t\t{ $$ = makeJsonBehavior(JSON_BEHAVIOR_NULL, NULL, @1); }\n> > +\t\t\t| DEFAULT a_expr\t{ $$ = makeJsonBehavior(JSON_BEHAVIOR_DEFAULT, $2, @1); }\n> > +\t\t\t| EMPTY_P ARRAY\t{ $$ = makeJsonBehavior(JSON_BEHAVIOR_EMPTY_ARRAY, NULL, @1); }\n> > +\t\t\t| EMPTY_P OBJECT_P\t{ $$ = makeJsonBehavior(JSON_BEHAVIOR_EMPTY_OBJECT, NULL, @1); }\n> > +\t\t\t/* non-standard, for Oracle compatibility only */\n> > +\t\t\t| EMPTY_P\t\t{ $$ = makeJsonBehavior(JSON_BEHAVIOR_EMPTY_ARRAY, NULL, @1); }\n> > +\t\t;\n> \n> > +json_exists_behavior:\n> > +\t\t\tERROR_P\t\t{ $$ = makeJsonBehavior(JSON_BEHAVIOR_ERROR, NULL, @1); }\n> > +\t\t\t| TRUE_P\t\t{ $$ = makeJsonBehavior(JSON_BEHAVIOR_TRUE, NULL, @1); }\n> > +\t\t\t| FALSE_P\t\t{ $$ = makeJsonBehavior(JSON_BEHAVIOR_FALSE, NULL, @1); }\n> > +\t\t\t| UNKNOWN\t\t{ $$ = makeJsonBehavior(JSON_BEHAVIOR_UNKNOWN, NULL, @1); }\n> > +\t\t;\n> > +\n> > +json_value_behavior:\n> > +\t\t\tNULL_P\t\t{ $$ = makeJsonBehavior(JSON_BEHAVIOR_NULL, NULL, @1); }\n> > +\t\t\t| ERROR_P\t\t{ $$ = makeJsonBehavior(JSON_BEHAVIOR_ERROR, NULL, @1); }\n> > +\t\t\t| DEFAULT a_expr\t{ $$ = makeJsonBehavior(JSON_BEHAVIOR_DEFAULT, $2, @1); }\n> > +\t\t;\n> \n> This also seems like it could use some dedup.\n\nYeah, I was looking at this the other day and thinking that we should\njust have a single json_behavior that's used by all these productions;\nat runtime we can check whether a value has been used that's improper\nfor that particular node, and error out with a syntax error or some\nsuch.\n\nOther parts of the grammar definitely needs more work, too. It appears\nto me that they were written by looking at what the standard says, more\nor less literally.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"Someone said that it is at least an order of magnitude more work to do\nproduction software than a prototype. I think he is wrong by at least\nan order of magnitude.\" (Brian Kernighan)\n\n\n",
"msg_date": "Sat, 7 Oct 2023 15:54:33 +0200",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "Hi Andres,\n\nOn Sat, Oct 7, 2023 at 6:49 AM Andres Freund <[email protected]> wrote:\n> Hi,\n>\n> On 2023-09-29 13:57:46 +0900, Amit Langote wrote:\n> > Thanks. I will push the attached 0001 shortly.\n>\n> Sorry for not looking at this earlier.\n\nThanks for the review. Replying here only to your comments on 0001.\n\n> Have you done benchmarking to verify that 0001 does not cause performance\n> regressions? I'd not be suprised if it did.\n\nI found that it indeed did once I benchmarked with something that\nwould stress EEOP_IOCOERCE:\n\ndo $$\nbegin\nfor i in 1..20000000 loop\ni := i::text;\nend loop; end; $$ language plpgsql;\nDO\n\nTimes and perf report:\n\nHEAD:\n\nTime: 1815.824 ms (00:01.816)\nTime: 1818.980 ms (00:01.819)\nTime: 1695.555 ms (00:01.696)\nTime: 1762.022 ms (00:01.762)\n\n --97.49%--exec_stmts\n |\n --85.97%--exec_assign_expr\n |\n |--65.56%--exec_eval_expr\n | |\n | |--53.71%--ExecInterpExpr\n | | |\n | | |--14.14%--textin\n\n\nPatched:\n\nTime: 1872.469 ms (00:01.872)\nTime: 1927.371 ms (00:01.927)\nTime: 1910.126 ms (00:01.910)\nTime: 1948.322 ms (00:01.948)\n\n --97.70%--exec_stmts\n |\n --88.13%--exec_assign_expr\n |\n |--73.27%--exec_eval_expr\n | |\n | |--58.29%--ExecInterpExpr\n | | |\n | |\n|--25.69%--InputFunctionCallSafe\n | | | |\n | | |\n|--14.75%--textin\n\nSo, yes, putting InputFunctionCallSafe() in the common path may not\nhave been such a good idea.\n\n> I'd split the soft-error path into\n> a separate opcode. For JIT it can largely be implemented using the same code,\n> eliding the check if it's the non-soft path. Or you can just put it into an\n> out-of-line function.\n\nDo you mean putting the execExprInterp.c code for the soft-error path\n(with a new opcode) into an out-of-line function? That definitely\nmakes the JIT version a tad simpler than if the error-checking is done\nin-line.\n\nSo, the existing code for EEOP_IOCOERCE in both execExprInterp.c and\nllvmjit_expr.c will remain unchanged. Also, I can write the code for\nthe new opcode such that it doesn't use InputFunctionCallSafe() at\nruntime, but rather passes the ErrorSaveContext directly by putting\nthat in the input function's FunctionCallInfo.context and checking\nSOFT_ERROR_OCCURRED() directly. That will have less overhead.\n\n> I don't like adding more stuff to ExprState. This one seems particularly\n> awkward, because it might be used by more than one level in an expression\n> subtree, which means you really need to save/restore old values when\n> recursing.\n\nHmm, I'd think that all levels will follow either soft or non-soft\nerror mode, so sharing the ErrorSaveContext passed via ExprState\ndoesn't look wrong to me. IOW, there's only one value, not one for\nevery level, so there doesn't appear to be any need to have the\nsave/restore convention as we have for innermost_domainval et al.\n\nI can see your point that adding another 8 bytes at the end of\nExprState might be undesirable. Note though that ExprState.escontext\nis only accessed in the ExecInitExpr phase, but during evaluation.\n\nThe alternative to not passing the ErrorSaveContext via ExprState is\nto add a new parameter to ExecInitExprRec() and to functions that call\nit. The footprint would be much larger though. 
Would you rather\nprefer that?\n\n> > @@ -1579,25 +1582,13 @@ ExecInitExprRec(Expr *node, ExprState *state,\n> >\n> > /* lookup the result type's input function */\n> > scratch.d.iocoerce.finfo_in = palloc0(sizeof(FmgrInfo));\n> > - scratch.d.iocoerce.fcinfo_data_in = palloc0(SizeForFunctionCallInfo(3));\n> > -\n> > getTypeInputInfo(iocoerce->resulttype,\n> > - &iofunc, &typioparam);\n> > + &iofunc, &scratch.d.iocoerce.typioparam);\n> > fmgr_info(iofunc, scratch.d.iocoerce.finfo_in);\n> > fmgr_info_set_expr((Node *) node, scratch.d.iocoerce.finfo_in);\n> > - InitFunctionCallInfoData(*scratch.d.iocoerce.fcinfo_data_in,\n> > - scratch.d.iocoerce.finfo_in,\n> > - 3, InvalidOid, NULL, NULL);\n> >\n> > - /*\n> > - * We can preload the second and third arguments for the input\n> > - * function, since they're constants.\n> > - */\n> > - fcinfo_in = scratch.d.iocoerce.fcinfo_data_in;\n> > - fcinfo_in->args[1].value = ObjectIdGetDatum(typioparam);\n> > - fcinfo_in->args[1].isnull = false;\n> > - fcinfo_in->args[2].value = Int32GetDatum(-1);\n> > - fcinfo_in->args[2].isnull = false;\n> > + /* Use the ErrorSaveContext passed by the caller. */\n> > + scratch.d.iocoerce.escontext = state->escontext;\n> >\n> > ExprEvalPushStep(state, &scratch);\n> > break;\n>\n> I think it's likely that removing the optimization of not needing to set these\n> arguments ahead of time will result in a performance regression. Not to speak\n> of initializing the fcinfo from scratch on every evaluation of the expression.\n\nYes, that's not good. I agree with separating out the soft-error path.\n\nI'll post the patch and benchmarking results with the new patch shortly.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 11 Oct 2023 14:08:25 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "On Wed, Oct 11, 2023 at 2:08 PM Amit Langote <[email protected]> wrote:\n> On Sat, Oct 7, 2023 at 6:49 AM Andres Freund <[email protected]> wrote:\n> > On 2023-09-29 13:57:46 +0900, Amit Langote wrote:\n> > > Thanks. I will push the attached 0001 shortly.\n> >\n> > Sorry for not looking at this earlier.\n>\n> Thanks for the review. Replying here only to your comments on 0001.\n>\n> > Have you done benchmarking to verify that 0001 does not cause performance\n> > regressions? I'd not be suprised if it did.\n>\n> I found that it indeed did once I benchmarked with something that\n> would stress EEOP_IOCOERCE:\n>\n> do $$\n> begin\n> for i in 1..20000000 loop\n> i := i::text;\n> end loop; end; $$ language plpgsql;\n> DO\n>\n> Times and perf report:\n>\n> HEAD:\n>\n> Time: 1815.824 ms (00:01.816)\n> Time: 1818.980 ms (00:01.819)\n> Time: 1695.555 ms (00:01.696)\n> Time: 1762.022 ms (00:01.762)\n>\n> --97.49%--exec_stmts\n> |\n> --85.97%--exec_assign_expr\n> |\n> |--65.56%--exec_eval_expr\n> | |\n> | |--53.71%--ExecInterpExpr\n> | | |\n> | | |--14.14%--textin\n>\n>\n> Patched:\n>\n> Time: 1872.469 ms (00:01.872)\n> Time: 1927.371 ms (00:01.927)\n> Time: 1910.126 ms (00:01.910)\n> Time: 1948.322 ms (00:01.948)\n>\n> --97.70%--exec_stmts\n> |\n> --88.13%--exec_assign_expr\n> |\n> |--73.27%--exec_eval_expr\n> | |\n> | |--58.29%--ExecInterpExpr\n> | | |\n> | |\n> |--25.69%--InputFunctionCallSafe\n> | | | |\n> | | |\n> |--14.75%--textin\n>\n> So, yes, putting InputFunctionCallSafe() in the common path may not\n> have been such a good idea.\n>\n> > I'd split the soft-error path into\n> > a separate opcode. For JIT it can largely be implemented using the same code,\n> > eliding the check if it's the non-soft path. Or you can just put it into an\n> > out-of-line function.\n>\n> Do you mean putting the execExprInterp.c code for the soft-error path\n> (with a new opcode) into an out-of-line function? That definitely\n> makes the JIT version a tad simpler than if the error-checking is done\n> in-line.\n>\n> So, the existing code for EEOP_IOCOERCE in both execExprInterp.c and\n> llvmjit_expr.c will remain unchanged. Also, I can write the code for\n> the new opcode such that it doesn't use InputFunctionCallSafe() at\n> runtime, but rather passes the ErrorSaveContext directly by putting\n> that in the input function's FunctionCallInfo.context and checking\n> SOFT_ERROR_OCCURRED() directly. That will have less overhead.\n>\n> > I don't like adding more stuff to ExprState. This one seems particularly\n> > awkward, because it might be used by more than one level in an expression\n> > subtree, which means you really need to save/restore old values when\n> > recursing.\n>\n> Hmm, I'd think that all levels will follow either soft or non-soft\n> error mode, so sharing the ErrorSaveContext passed via ExprState\n> doesn't look wrong to me. IOW, there's only one value, not one for\n> every level, so there doesn't appear to be any need to have the\n> save/restore convention as we have for innermost_domainval et al.\n>\n> I can see your point that adding another 8 bytes at the end of\n> ExprState might be undesirable. Note though that ExprState.escontext\n> is only accessed in the ExecInitExpr phase, but during evaluation.\n>\n> The alternative to not passing the ErrorSaveContext via ExprState is\n> to add a new parameter to ExecInitExprRec() and to functions that call\n> it. The footprint would be much larger though. 
Would you rather\n> prefer that?\n>\n> > > @@ -1579,25 +1582,13 @@ ExecInitExprRec(Expr *node, ExprState *state,\n> > >\n> > > /* lookup the result type's input function */\n> > > scratch.d.iocoerce.finfo_in = palloc0(sizeof(FmgrInfo));\n> > > - scratch.d.iocoerce.fcinfo_data_in = palloc0(SizeForFunctionCallInfo(3));\n> > > -\n> > > getTypeInputInfo(iocoerce->resulttype,\n> > > - &iofunc, &typioparam);\n> > > + &iofunc, &scratch.d.iocoerce.typioparam);\n> > > fmgr_info(iofunc, scratch.d.iocoerce.finfo_in);\n> > > fmgr_info_set_expr((Node *) node, scratch.d.iocoerce.finfo_in);\n> > > - InitFunctionCallInfoData(*scratch.d.iocoerce.fcinfo_data_in,\n> > > - scratch.d.iocoerce.finfo_in,\n> > > - 3, InvalidOid, NULL, NULL);\n> > >\n> > > - /*\n> > > - * We can preload the second and third arguments for the input\n> > > - * function, since they're constants.\n> > > - */\n> > > - fcinfo_in = scratch.d.iocoerce.fcinfo_data_in;\n> > > - fcinfo_in->args[1].value = ObjectIdGetDatum(typioparam);\n> > > - fcinfo_in->args[1].isnull = false;\n> > > - fcinfo_in->args[2].value = Int32GetDatum(-1);\n> > > - fcinfo_in->args[2].isnull = false;\n> > > + /* Use the ErrorSaveContext passed by the caller. */\n> > > + scratch.d.iocoerce.escontext = state->escontext;\n> > >\n> > > ExprEvalPushStep(state, &scratch);\n> > > break;\n> >\n> > I think it's likely that removing the optimization of not needing to set these\n> > arguments ahead of time will result in a performance regression. Not to speak\n> > of initializing the fcinfo from scratch on every evaluation of the expression.\n>\n> Yes, that's not good. I agree with separating out the soft-error path.\n>\n> I'll post the patch and benchmarking results with the new patch shortly.\n\nSo here's 0001, rewritten to address the above comments.\n\nIt adds a new eval opcode EEOP_IOCOERCE_SAFE, which basically copies\nthe implementation of EEOP_IOCOERCE but passes the ErrorSaveContext\npassed by the caller to the input function via the latter's\nFunctionCallInfo.context. However, unlike EEOP_IOCOERCE, it's\nimplemented in a separate function to encapsulate away the logic of\nreturning NULL when an error occurs. 
This makes JITing much simpler,\nbecause it now involves simply calling the function.\n\nHere are the benchmark results:\n\nSame DO block:\n\ndo $$\nbegin\nfor i in 1..20000000 loop\ni := i::text;\nend loop; end; $$ language plpgsql;\n\nHEAD:\nTime: 1629.461 ms (00:01.629)\nTime: 1635.439 ms (00:01.635)\nTime: 1634.432 ms (00:01.634)\n\nPatched:\nTime: 1657.657 ms (00:01.658)\nTime: 1686.779 ms (00:01.687)\nTime: 1626.985 ms (00:01.627)\n\nUsing the SQL/JSON query functions patch rebased over the new 0001, I\nalso compared the difference in performance between EEOP_IOCOERCE and\nEEOP_IOCOERCE_SAFE:\n\n-- uses EEOP_IOCOERCE because ERROR ON ERROR\ndo $$\nbegin\nfor i in 1..20000000 loop\ni := JSON_VALUE(jsonb '1', '$' RETURNING text ERROR ON ERROR );\nend loop; end; $$ language plpgsql;\n\n-- uses EEOP_IOCOERCE because ERROR ON ERROR\ndo $$\nbegin\nfor i in 1..20000000 loop\ni := JSON_VALUE(jsonb '1', '$' RETURNING text ERROR ON ERROR );\nend loop; end; $$ language plpgsql;\n\nTime: 2960.434 ms (00:02.960)\nTime: 2968.895 ms (00:02.969)\nTime: 3006.691 ms (00:03.007)\n\n-- uses EEOP_IOCOERCE_SAFE because NULL ON ERROR\ndo $$\nbegin\nfor i in 1..20000000 loop\ni := JSON_VALUE(jsonb '1', '$' RETURNING text NULL ON ERROR);\nend loop; end; $$ language plpgsql;\n\nTime: 3046.933 ms (00:03.047)\nTime: 3073.385 ms (00:03.073)\nTime: 3121.619 ms (00:03.122)\n\nThere's only a slight degradation with the SAFE variant presumably due\nto the extra whether-error-occurred check after calling the input\nfunction. I'd think the difference would have been more pronounced\nhad I continued to use InputFunctionCallSafe().\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com",
"msg_date": "Wed, 11 Oct 2023 21:34:07 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "Hi!\n\nWith the latest set of patches we encountered failure with the following\nquery:\n\npostgres@postgres=# SELECT JSON_QUERY(jsonpath '\"aaa\"', '$' RETURNING text);\nserver closed the connection unexpectedly\n This probably means the server terminated abnormally\n before or while processing the request.\nThe connection to the server was lost. Attempting reset: Failed.\nThe connection to the server was lost. Attempting reset: Failed.\nTime: 11.165 ms\n\nA colleague of mine, Anton Melnikov, proposed the following changes which\nslightly\nalter coercion functions to process this kind of error correctly.\n\nPlease check attached patch set.\n\n--\nRegards,\nNikita Malakhov\nPostgres Professional\nThe Russian Postgres Company\nhttps://postgrespro.ru/",
"msg_date": "Mon, 16 Oct 2023 11:20:58 +0300",
"msg_from": "Nikita Malakhov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "Hi,\n\nAlso FYI - the following case results in segmentation fault:\n\npostgres@postgres=# CREATE TABLE test_jsonb_constraints (\n js text,\n i int,\n x jsonb DEFAULT JSON_QUERY(jsonb '[1,2]', '$[*]' WITH WRAPPER)\n CONSTRAINT test_jsonb_constraint1\n CHECK (js IS JSON)\n CONSTRAINT test_jsonb_constraint5\n CHECK (JSON_QUERY(js::jsonb, '$.mm' RETURNING char(5) OMIT\nQUOTES EMPTY ARRAY ON EMPTY) > 'a' COLLATE \"C\")\n CONSTRAINT test_jsonb_constraint6\n CHECK (JSON_EXISTS(js::jsonb, 'strict $.a' RETURNING int\nTRUE ON ERROR) < 2)\n);\nCREATE TABLE\nTime: 13.518 ms\npostgres@postgres=# INSERT INTO test_jsonb_constraints VALUES ('[]');\nserver closed the connection unexpectedly\n This probably means the server terminated abnormally\n before or while processing the request.\nThe connection to the server was lost. Attempting reset: Failed.\nThe connection to the server was lost. Attempting reset: Failed.\nTime: 6.858 ms\n@!>\n\nWe're currently looking into this case.\n\n--\nRegards,\nNikita Malakhov\nPostgres Professional\nThe Russian Postgres Company\nhttps://postgrespro.ru/\n\nHi,Also FYI - the following case results in segmentation fault:postgres@postgres=# CREATE TABLE test_jsonb_constraints ( js text, i int, x jsonb DEFAULT JSON_QUERY(jsonb '[1,2]', '$[*]' WITH WRAPPER) CONSTRAINT test_jsonb_constraint1 CHECK (js IS JSON) CONSTRAINT test_jsonb_constraint5 CHECK (JSON_QUERY(js::jsonb, '$.mm' RETURNING char(5) OMIT QUOTES EMPTY ARRAY ON EMPTY) > 'a' COLLATE \"C\") CONSTRAINT test_jsonb_constraint6 CHECK (JSON_EXISTS(js::jsonb, 'strict $.a' RETURNING int TRUE ON ERROR) < 2));CREATE TABLETime: 13.518 mspostgres@postgres=# INSERT INTO test_jsonb_constraints VALUES ('[]');server closed the connection unexpectedly This probably means the server terminated abnormally before or while processing the request.The connection to the server was lost. Attempting reset: Failed.The connection to the server was lost. Attempting reset: Failed.Time: 6.858 ms@!>We're currently looking into this case.--Regards,Nikita MalakhovPostgres ProfessionalThe Russian Postgres Companyhttps://postgrespro.ru/",
"msg_date": "Mon, 16 Oct 2023 11:33:53 +0300",
"msg_from": "Nikita Malakhov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "Hi,\n\nOn Mon, Oct 16, 2023 at 5:34 PM Nikita Malakhov <[email protected]> wrote:\n>\n> Hi,\n>\n> Also FYI - the following case results in segmentation fault:\n>\n> postgres@postgres=# CREATE TABLE test_jsonb_constraints (\n> js text,\n> i int,\n> x jsonb DEFAULT JSON_QUERY(jsonb '[1,2]', '$[*]' WITH WRAPPER)\n> CONSTRAINT test_jsonb_constraint1\n> CHECK (js IS JSON)\n> CONSTRAINT test_jsonb_constraint5\n> CHECK (JSON_QUERY(js::jsonb, '$.mm' RETURNING char(5) OMIT QUOTES EMPTY ARRAY ON EMPTY) > 'a' COLLATE \"C\")\n> CONSTRAINT test_jsonb_constraint6\n> CHECK (JSON_EXISTS(js::jsonb, 'strict $.a' RETURNING int TRUE ON ERROR) < 2)\n> );\n> CREATE TABLE\n> Time: 13.518 ms\n> postgres@postgres=# INSERT INTO test_jsonb_constraints VALUES ('[]');\n> server closed the connection unexpectedly\n> This probably means the server terminated abnormally\n> before or while processing the request.\n> The connection to the server was lost. Attempting reset: Failed.\n> The connection to the server was lost. Attempting reset: Failed.\n> Time: 6.858 ms\n> @!>\n>\n> We're currently looking into this case.\n\nThanks for the report. I think I've figured out the problem --\nExecEvalJsonExprCoercion() mishandles the EMPTY ARRAY ON EMPTY case.\n\nI'm reading the other 2 patches...\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 16 Oct 2023 18:47:36 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "Hi,\n\nSorry, forgot to mention above - patches from our patch set should be\napplied\nonto SQL/JSON part 3 - v22-0003-SQL-JSON-query-functions.patch, thus\nthey are numbered as v23-0003-1 and -2.\n\n--\nRegards,\nNikita Malakhov\nPostgres Professional\nThe Russian Postgres Company\nhttps://postgrespro.ru/\n\nHi,Sorry, forgot to mention above - patches from our patch set should be appliedonto SQL/JSON part 3 - v22-0003-SQL-JSON-query-functions.patch, thusthey are numbered as v23-0003-1 and -2.--Regards,Nikita MalakhovPostgres ProfessionalThe Russian Postgres Companyhttps://postgrespro.ru/",
"msg_date": "Mon, 16 Oct 2023 12:59:01 +0300",
"msg_from": "Nikita Malakhov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "On Mon, Oct 16, 2023 at 5:47 PM Amit Langote <[email protected]> wrote:\n>\n> > We're currently looking into this case.\n>\n> Thanks for the report. I think I've figured out the problem --\n> ExecEvalJsonExprCoercion() mishandles the EMPTY ARRAY ON EMPTY case.\n>\n> I'm reading the other 2 patches...\n>\n> --\n> Thanks, Amit Langote\n> EDB: http://www.enterprisedb.com\n\nquery: select JSON_QUERY('[]'::jsonb, '$.mm' RETURNING text OMIT\nQUOTES EMPTY ON EMPTY);\n\nBreakpoint 2, ExecEvalJsonExpr (state=0x55e47ad685c0,\nop=0x55e47ad68818, econtext=0x55e47ad682e8) at\n../../Desktop/pg_sources/main/postgres/src/backend/executor/execExprInterp.c:4188\n4188 JsonExprState *jsestate = op->d.jsonexpr.jsestate;\n(gdb) fin\nRun till exit from #0 ExecEvalJsonExpr (state=0x55e47ad685c0,\n op=0x55e47ad68818, econtext=0x55e47ad682e8)\n at ../../Desktop/pg_sources/main/postgres/src/backend/executor/execExprInterp.c:4188\nExecInterpExpr (state=0x55e47ad685c0, econtext=0x55e47ad682e8,\nisnull=0x7ffe63659e2f) at\n../../Desktop/pg_sources/main/postgres/src/backend/executor/execExprInterp.c:1556\n1556 EEO_NEXT();\n(gdb) p *op->resnull\n$1 = true\n(gdb) cont\nContinuing.\n\nBreakpoint 1, ExecEvalJsonExprCoercion (state=0x55e47ad685c0,\nop=0x55e47ad68998, econtext=0x55e47ad682e8, res=94439801785192,\nresnull=false) at\n../../Desktop/pg_sources/main/postgres/src/backend/executor/execExprInterp.c:4453\n4453 {\n(gdb) i args\nstate = 0x55e47ad685c0\nop = 0x55e47ad68998\necontext = 0x55e47ad682e8\nres = 94439801785192\nresnull = false\n(gdb) p *op->resnull\n$2 = false\n-------------------------------------------------------\nin ExecEvalJsonExpr, *op->resnull is true.\nthen in ExecEvalJsonExprCoercion *op->resnull is false.\nI am not sure why *op->resnull value changes, when changes.\n-------------------------------------------------------\nin ExecEvalJsonExprCoercion, if resnull is true, then jb is null, but\nit seems there is no code to handle the case.\n-----------------------------\nadd the following code after ExecEvalJsonExprCoercion if\n(!InputFunctionCallSafe(...) works, but seems like a hack.\n\nif (!val_string)\n{\n*op->resnull = true;\n*op->resvalue = (Datum) 0;\n}\n\n\n",
"msg_date": "Mon, 16 Oct 2023 20:49:15 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "Hello!\n\nOn 16.10.2023 15:49, jian he wrote:\n> add the following code after ExecEvalJsonExprCoercion if\n> (!InputFunctionCallSafe(...) works, but seems like a hack.\n> \n> if (!val_string)\n> {\n> *op->resnull = true;\n> *op->resvalue = (Datum) 0;\n> }\n\nIt seems the constraint should work here:\n\nAfter\n\nCREATE TABLE test (\n\tjs text,\n\ti int,\n\tx jsonb DEFAULT JSON_QUERY(jsonb '[1,2]', '$[*]' WITH WRAPPER)\n\tCONSTRAINT test_constraint\n\t\tCHECK (JSON_QUERY(js::jsonb, '$.a' RETURNING char(5) OMIT QUOTES EMPTY ARRAY ON EMPTY) > 'a')\n);\n\nINSERT INTO test_jsonb_constraints VALUES ('[]');\n\none expected to see an error like that:\n\nERROR: new row for relation \"test\" violates check constraint \"test_constraint\"\nDETAIL: Failing row contains ([], null, [1, 2]).\n\nnot \"INSERT 0 1\"\n\nWith best regards,\n\n-- \nAnton A. Melnikov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n",
"msg_date": "Mon, 16 Oct 2023 16:44:27 +0300",
"msg_from": "\"Anton A. Melnikov\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "On Mon, Oct 16, 2023 at 10:44 PM Anton A. Melnikov\n<[email protected]> wrote:\n> On 16.10.2023 15:49, jian he wrote:\n> > add the following code after ExecEvalJsonExprCoercion if\n> > (!InputFunctionCallSafe(...) works, but seems like a hack.\n> >\n> > if (!val_string)\n> > {\n> > *op->resnull = true;\n> > *op->resvalue = (Datum) 0;\n> > }\n>\n> It seems the constraint should work here:\n>\n> After\n>\n> CREATE TABLE test (\n> js text,\n> i int,\n> x jsonb DEFAULT JSON_QUERY(jsonb '[1,2]', '$[*]' WITH WRAPPER)\n> CONSTRAINT test_constraint\n> CHECK (JSON_QUERY(js::jsonb, '$.a' RETURNING char(5) OMIT QUOTES EMPTY ARRAY ON EMPTY) > 'a')\n> );\n>\n> INSERT INTO test_jsonb_constraints VALUES ('[]');\n>\n> one expected to see an error like that:\n>\n> ERROR: new row for relation \"test\" violates check constraint \"test_constraint\"\n> DETAIL: Failing row contains ([], null, [1, 2]).\n>\n> not \"INSERT 0 1\"\n\nYes, the correct thing here is for the constraint to fail.\n\nOne thing jian he missed during the debugging is that\nExecEvalJsonExprCoersion() receives the EMPTY ARRAY value via\n*op->resvalue/resnull, set by ExecEvalJsonExprBehavior(), because\nthat's the ON EMPTY behavior specified in the constraint. The bug was\nthat the code in ExecEvalJsonExprCoercion() failed to set val_string\nto that value (\"[]\") before passing to InputFunctionCallSafe(), so the\nlatter would assume the input is NULL.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 17 Oct 2023 13:02:45 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "\nOn 17.10.2023 07:02, Amit Langote wrote:\n\n> One thing jian he missed during the debugging is that\n> ExecEvalJsonExprCoersion() receives the EMPTY ARRAY value via\n> *op->resvalue/resnull, set by ExecEvalJsonExprBehavior(), because\n> that's the ON EMPTY behavior specified in the constraint. The bug was\n> that the code in ExecEvalJsonExprCoercion() failed to set val_string\n> to that value (\"[]\") before passing to InputFunctionCallSafe(), so the\n> latter would assume the input is NULL.\n>\nThank a lot for this remark!\n\nI tried to dig to the transformJsonOutput() to fix it earlier at the analyze stage,\nbut it looks like a rather hard way.\n\nMaybe simple in accordance with you note remove the second condition from this line:\nif (jb && JB_ROOT_IS_SCALAR(jb)) ?\n\nThere is a simplified reproduction before such a fix:\npostgres=# select JSON_QUERY(jsonb '[]', '$' RETURNING char(5) OMIT QUOTES EMPTY ON EMPTY);\nserver closed the connection unexpectedly\n This probably means the server terminated abnormally\n\nafter:\npostgres=# select JSON_QUERY(jsonb '[]', '$' RETURNING char(5) OMIT QUOTES EMPTY ON EMPTY);\n json_query\n------------\n []\n(1 row)\n\nAnd at the moment i havn't found any side effects of that fix.\nPlease point me if i'm missing something.\n\nWith the best wishes!\n\n-- \nAnton A. Melnikov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n",
"msg_date": "Tue, 17 Oct 2023 10:11:12 +0300",
"msg_from": "\"Anton A. Melnikov\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "Hi Anton,\n\nOn Tue, Oct 17, 2023 at 4:11 PM Anton A. Melnikov\n<[email protected]> wrote:\n> On 17.10.2023 07:02, Amit Langote wrote:\n>\n> > One thing jian he missed during the debugging is that\n> > ExecEvalJsonExprCoersion() receives the EMPTY ARRAY value via\n> > *op->resvalue/resnull, set by ExecEvalJsonExprBehavior(), because\n> > that's the ON EMPTY behavior specified in the constraint. The bug was\n> > that the code in ExecEvalJsonExprCoercion() failed to set val_string\n> > to that value (\"[]\") before passing to InputFunctionCallSafe(), so the\n> > latter would assume the input is NULL.\n> >\n> Thank a lot for this remark!\n>\n> I tried to dig to the transformJsonOutput() to fix it earlier at the analyze stage,\n> but it looks like a rather hard way.\n\nIndeed. As I said, the problem was a bug in ExecEvalJsonExprCoercion().\n\n>\n> Maybe simple in accordance with you note remove the second condition from this line:\n> if (jb && JB_ROOT_IS_SCALAR(jb)) ?\n\nYeah, that's how I would fix it.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 17 Oct 2023 16:17:57 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "On Mon, Oct 16, 2023 at 5:21 PM Nikita Malakhov <[email protected]> wrote:\n>\n> Hi!\n>\n> With the latest set of patches we encountered failure with the following query:\n>\n> postgres@postgres=# SELECT JSON_QUERY(jsonpath '\"aaa\"', '$' RETURNING text);\n> server closed the connection unexpectedly\n> This probably means the server terminated abnormally\n> before or while processing the request.\n> The connection to the server was lost. Attempting reset: Failed.\n> The connection to the server was lost. Attempting reset: Failed.\n> Time: 11.165 ms\n>\n> A colleague of mine, Anton Melnikov, proposed the following changes which slightly\n> alter coercion functions to process this kind of error correctly.\n>\n> Please check attached patch set.\n\nThanks for the patches.\n\nI think I understand patch 1. It makes each of JSON_{QUERY | VALUE |\nEXISTS}() use FORMAT JSON for the context item by default, which I\nthink is the correct behavior.\n\nAs for patch 2, maybe the executor part is fine, but I'm not so sure\nabout the parser part. Could you please explain why you think the\nparser must check error-safety of the target type for allowing IO\ncoercion for non-ERROR behaviors?\n\nEven if we consider that that's what should be done, it doesn't seem\nlike a good idea for the parser to implement its own logic for\ndetermining error-safety. IOW, the parser should really be using some\ntype cache API. I thought there might have been a flag in pg_proc\n(prosafe) or pg_type (typinsafe), but apparently there isn't.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 17 Oct 2023 20:12:22 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "Hi.\nbased on v22.\n\nI added some tests again json_value for the sake of coverager test.\n\nA previous email thread mentioned needing to check *empty in ExecEvalJsonExpr.\nsince JSON_VALUE_OP, JSON_QUERY_OP, JSON_EXISTS_OP all need to have\n*empty cases, So I refactored a little bit.\nmight be helpful. Maybe we can also refactor *error cases.\n\nThe following part is not easy to understand.\nres = ExecPrepareJsonItemCoercion(jbv,\n+ jsestate->item_jcstates,\n+ &post_eval->jcstate);\n+ if (post_eval->jcstate &&\n+ post_eval->jcstate->coercion &&\n+ (post_eval->jcstate->coercion->via_io ||\n+ post_eval->jcstate->coercion->via_populate))",
"msg_date": "Wed, 18 Oct 2023 10:19:43 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
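A commented restatement of the fragment quoted above may help. The field names are those of the patch version under review here (a later revision replaces this scheme); the explanatory comments are an interpretation of the surrounding discussion, not text from the patch.

/* Pick the coercion that matches the SQL type of the JsonbValue item and
 * return the item as a Datum of that SQL type; also record which coercion
 * state was chosen in post_eval->jcstate. */
res = ExecPrepareJsonItemCoercion(jbv,
                                  jsestate->item_jcstates,
                                  &post_eval->jcstate);

/* via_io / via_populate mean there is no direct cast to the RETURNING
 * type: the value would have to go through the type's input function or
 * through json_populate_type().  For JSON_VALUE that is treated as
 * "SQL/JSON item cannot be cast to target type" (or NULL under ON ERROR). */
if (post_eval->jcstate &&
    post_eval->jcstate->coercion &&
    (post_eval->jcstate->coercion->via_io ||
     post_eval->jcstate->coercion->via_populate))
{
    /* error out, or return NULL when the ON ERROR behavior is not ERROR */
}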
{
"msg_contents": "Hi!\n\nAmit, on previous email, patch #2 - I agree that it is not the best idea to\nintroduce\nnew type of logic into the parser, so this logic could be moved to the\nexecutor,\nor removed at all. What do you think of these options?\n\nOn Wed, Oct 18, 2023 at 5:19 AM jian he <[email protected]> wrote:\n\n> Hi.\n> based on v22.\n>\n> I added some tests again json_value for the sake of coverager test.\n>\n> A previous email thread mentioned needing to check *empty in\n> ExecEvalJsonExpr.\n> since JSON_VALUE_OP, JSON_QUERY_OP, JSON_EXISTS_OP all need to have\n> *empty cases, So I refactored a little bit.\n> might be helpful. Maybe we can also refactor *error cases.\n>\n> The following part is not easy to understand.\n> res = ExecPrepareJsonItemCoercion(jbv,\n> + jsestate->item_jcstates,\n> + &post_eval->jcstate);\n> + if (post_eval->jcstate &&\n> + post_eval->jcstate->coercion &&\n> + (post_eval->jcstate->coercion->via_io ||\n> + post_eval->jcstate->coercion->via_populate))\n>\n\n\n-- \nRegards,\nNikita Malakhov\nPostgres Professional\nThe Russian Postgres Company\nhttps://postgrespro.ru/\n\nHi!Amit, on previous email, patch #2 - I agree that it is not the best idea to introducenew type of logic into the parser, so this logic could be moved to the executor,or removed at all. What do you think of these options?On Wed, Oct 18, 2023 at 5:19 AM jian he <[email protected]> wrote:Hi.\nbased on v22.\n\nI added some tests again json_value for the sake of coverager test.\n\nA previous email thread mentioned needing to check *empty in ExecEvalJsonExpr.\nsince JSON_VALUE_OP, JSON_QUERY_OP, JSON_EXISTS_OP all need to have\n*empty cases, So I refactored a little bit.\nmight be helpful. Maybe we can also refactor *error cases.\n\nThe following part is not easy to understand.\nres = ExecPrepareJsonItemCoercion(jbv,\n+ jsestate->item_jcstates,\n+ &post_eval->jcstate);\n+ if (post_eval->jcstate &&\n+ post_eval->jcstate->coercion &&\n+ (post_eval->jcstate->coercion->via_io ||\n+ post_eval->jcstate->coercion->via_populate))\n-- Regards,Nikita MalakhovPostgres ProfessionalThe Russian Postgres Companyhttps://postgrespro.ru/",
"msg_date": "Wed, 25 Oct 2023 20:13:07 +0300",
"msg_from": "Nikita Malakhov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "Hi Nikita,\n\nOn Thu, Oct 26, 2023 at 2:13 AM Nikita Malakhov <[email protected]> wrote:\n> Amit, on previous email, patch #2 - I agree that it is not the best idea to introduce\n> new type of logic into the parser, so this logic could be moved to the executor,\n> or removed at all. What do you think of these options?\n\nYes maybe, though I'd first like to have a good answer to why is that\nlogic necessary at all. Maybe you think it's better to emit an error\nin the SQL/JSON layer of code than in the type input function if it's\nunsafe?\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 26 Oct 2023 11:32:13 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "Hi,\n\nThe main goal was to correctly process invalid queries (as in examples\nabove).\nI'm not sure this could be done in type input functions. I thought that some\ncoercions could be checked before evaluating expressions for saving reasons.\n\n--\nRegards,\nNikita Malakhov\nPostgres Professional\nThe Russian Postgres Company\nhttps://postgrespro.ru/\n\nHi,The main goal was to correctly process invalid queries (as in examples above).I'm not sure this could be done in type input functions. I thought that somecoercions could be checked before evaluating expressions for saving reasons.--Regards,Nikita MalakhovPostgres ProfessionalThe Russian Postgres Companyhttps://postgrespro.ru/",
"msg_date": "Thu, 26 Oct 2023 15:19:50 +0300",
"msg_from": "Nikita Malakhov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "Hi,\n\nOn Thu, Oct 26, 2023 at 9:20 PM Nikita Malakhov <[email protected]> wrote:\n>\n> Hi,\n>\n> The main goal was to correctly process invalid queries (as in examples above).\n> I'm not sure this could be done in type input functions. I thought that some\n> coercions could be checked before evaluating expressions for saving reasons.\n\nI assume by \"invalid\" you mean queries specifying types in RETURNING\nthat don't support soft-error handling in their input function.\nAdding a check makes sense but its implementation should include a\ntype cache interface to check whether a given type has error-safe\ninput handling, possibly as a separate patch. IOW, the SQL/JSON patch\nshouldn't really make a list of types to report as unsupported.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 26 Oct 2023 21:53:27 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: remaining sql/json patches"
},
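For reference, a minimal sketch of the soft-error input path this subthread keeps circling around. InputFunctionCallSafe() and ErrorSaveContext are existing APIs; the wrapper function name is hypothetical. The missing piece being discussed is a cheap, centralized way (e.g. via the type cache) to know up front whether a given type's input function actually honors the error-save context, since unconverted input functions still throw instead of returning false.

#include "postgres.h"
#include "fmgr.h"
#include "nodes/miscnodes.h"
#include "utils/lsyscache.h"

/* Hypothetical wrapper: coerce a string to an arbitrary type, reporting
 * bad input as a soft error where the type's input function supports it. */
static bool
coerce_string_safely(char *str, Oid typid, int32 typmod, Datum *result)
{
    Oid         typinput;
    Oid         typioparam;
    FmgrInfo    flinfo;
    ErrorSaveContext escontext = {T_ErrorSaveContext};

    getTypeInputInfo(typid, &typinput, &typioparam);
    fmgr_info(typinput, &flinfo);

    /* Returns false only if the input function reported a soft error. */
    return InputFunctionCallSafe(&flinfo, str, typioparam, typmod,
                                 (Node *) &escontext, result);
}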
{
"msg_contents": "Hi,\n\nAgreed on the latter, that must not be the part of it for sure.\nWould think on how to make this part correct.\n\n-- \nRegards,\nNikita Malakhov\nPostgres Professional\nThe Russian Postgres Company\nhttps://postgrespro.ru/\n\nHi,Agreed on the latter, that must not be the part of it for sure.Would think on how to make this part correct.-- Regards,Nikita MalakhovPostgres ProfessionalThe Russian Postgres Companyhttps://postgrespro.ru/",
"msg_date": "Thu, 26 Oct 2023 16:23:11 +0300",
"msg_from": "Nikita Malakhov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "Hi!\n\nAccording to the discussion above, I've added the 'proerrsafe' attribute to\nthe PG_PROC relation.\nThe same was done some time ago by Nikita Glukhov but this part was\nreverted.\nThis is a WIP patch, I am new to this part of Postgres, so please correct\nme if I'm going the wrong way.\n\n--\nNikita Malakhov\nPostgres Professional\nThe Russian Postgres Company\nhttps://postgrespro.ru/",
"msg_date": "Wed, 1 Nov 2023 15:00:52 +0300",
"msg_from": "Nikita Malakhov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "Hi!\n\nRead Tom Lane's note in previous discussion (quite long, so I've missed it)\non pg_proc column -\n\n>I strongly recommend against having a new pg_proc column at all.\n>I doubt that you really need it, and having one will create\n>enormous mechanical burdens to making the conversion. (For example,\n>needing a catversion bump every time we convert one more function,\n>or an extension version bump to convert extensions.)\n\nso should figure out another way to do it.\n\nRegards,\n--\nNikita Malakhov\nPostgres Professional\nThe Russian Postgres Company\nhttps://postgrespro.ru/\n\nHi!Read Tom Lane's note in previous discussion (quite long, so I've missed it)on pg_proc column ->I strongly recommend against having a new pg_proc column at all.>I doubt that you really need it, and having one will create>enormous mechanical burdens to making the conversion. (For example,>needing a catversion bump every time we convert one more function,>or an extension version bump to convert extensions.)so should figure out another way to do it.Regards,--Nikita MalakhovPostgres ProfessionalThe Russian Postgres Companyhttps://postgrespro.ru/",
"msg_date": "Fri, 3 Nov 2023 18:20:03 +0300",
"msg_from": "Nikita Malakhov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "Hi,\n\nAt the moment, what is the patchset to be tested? The latest SQL/JSON \nserver I have is from September, and it's become unclear to me what \nbelongs to the SQL/JSON patchset. It seems to me cfbot erroneously \nshows green because it successfully compiles later detail-patches (i.e., \nnot the SQL/JSON set itself). Please correct me if I'm wrong and it is \nin fact possible to derive from cfbot a patchset that are the ones to \nuse to build the latest SQL/JSON server.\n\nThanks!\n\nErik\n\n\n",
"msg_date": "Sat, 11 Nov 2023 03:52:42 +0100",
"msg_from": "Erik Rijkers <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "Hi Erik,\n\nOn Sat, Nov 11, 2023 at 11:52 Erik Rijkers <[email protected]> wrote:\n\n> Hi,\n>\n> At the moment, what is the patchset to be tested? The latest SQL/JSON\n> server I have is from September, and it's become unclear to me what\n> belongs to the SQL/JSON patchset. It seems to me cfbot erroneously\n> shows green because it successfully compiles later detail-patches (i.e.,\n> not the SQL/JSON set itself). Please correct me if I'm wrong and it is\n> in fact possible to derive from cfbot a patchset that are the ones to\n> use to build the latest SQL/JSON server.\n\n\nI’ll be posting a new set that addresses Andres’ comments early next week.\n\n>\n\nHi Erik,On Sat, Nov 11, 2023 at 11:52 Erik Rijkers <[email protected]> wrote:Hi,\n\nAt the moment, what is the patchset to be tested? The latest SQL/JSON \nserver I have is from September, and it's become unclear to me what \nbelongs to the SQL/JSON patchset. It seems to me cfbot erroneously \nshows green because it successfully compiles later detail-patches (i.e., \nnot the SQL/JSON set itself). Please correct me if I'm wrong and it is \nin fact possible to derive from cfbot a patchset that are the ones to \nuse to build the latest SQL/JSON server.I’ll be posting a new set that addresses Andres’ comments early next week.",
"msg_date": "Sat, 11 Nov 2023 11:56:24 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "Hi,\n\nSorry for the late reply.\n\nOn Sat, Oct 7, 2023 at 6:49 AM Andres Freund <[email protected]> wrote:\n> On 2023-09-29 13:57:46 +0900, Amit Langote wrote:\n> > +/*\n> > + * Push steps to evaluate a JsonExpr and its various subsidiary expressions.\n> > + */\n> > +static void\n> > +ExecInitJsonExpr(JsonExpr *jexpr, ExprState *state,\n> > + Datum *resv, bool *resnull,\n> > + ExprEvalStep *scratch)\n> > +{\n> > + JsonExprState *jsestate = palloc0(sizeof(JsonExprState));\n> > + JsonExprPreEvalState *pre_eval = &jsestate->pre_eval;\n> > + ListCell *argexprlc;\n> > + ListCell *argnamelc;\n> > + int skip_step_off = -1;\n> > + int passing_args_step_off = -1;\n> > + int coercion_step_off = -1;\n> > + int coercion_finish_step_off = -1;\n> > + int behavior_step_off = -1;\n> > + int onempty_expr_step_off = -1;\n> > + int onempty_jump_step_off = -1;\n> > + int onerror_expr_step_off = -1;\n> > + int onerror_jump_step_off = -1;\n> > + int result_coercion_jump_step_off = -1;\n> > + List *adjust_jumps = NIL;\n> > + ListCell *lc;\n> > + ExprEvalStep *as;\n> > +\n> > + jsestate->jsexpr = jexpr;\n> > +\n> > + /*\n> > + * Add steps to compute formatted_expr, pathspec, and PASSING arg\n> > + * expressions as things that must be evaluated *before* the actual JSON\n> > + * path expression.\n> > + */\n> > + ExecInitExprRec((Expr *) jexpr->formatted_expr, state,\n> > + &pre_eval->formatted_expr.value,\n> > + &pre_eval->formatted_expr.isnull);\n> > + ExecInitExprRec((Expr *) jexpr->path_spec, state,\n> > + &pre_eval->pathspec.value,\n> > + &pre_eval->pathspec.isnull);\n> > +\n> > + /*\n> > + * Before pushing steps for PASSING args, push a step to decide whether to\n> > + * skip evaluating the args and the JSON path expression depending on\n> > + * whether either of formatted_expr and pathspec is NULL; see\n> > + * ExecEvalJsonExprSkip().\n> > + */\n> > + scratch->opcode = EEOP_JSONEXPR_SKIP;\n> > + scratch->d.jsonexpr_skip.jsestate = jsestate;\n> > + skip_step_off = state->steps_len;\n> > + ExprEvalPushStep(state, scratch);\n>\n> Could SKIP be implemented using EEOP_JUMP_IF_NULL with a bit of work? I see\n> that it sets jsestate->post_eval.jcstate, but I don't understand why it needs\n> to be done that way. /* ExecEvalJsonExprCoercion() depends on this. */ doesn't\n> explain that much.\n\nOK, I've managed to make this work using EEOP_JUMP_IF_NULL for each of\nthe two expressions that need checking: formatted_expr and pathspec.\n\n> > + /* PASSING args. */\n> > + jsestate->pre_eval.args = NIL;\n> > + passing_args_step_off = state->steps_len;\n> > + forboth(argexprlc, jexpr->passing_values,\n> > + argnamelc, jexpr->passing_names)\n> > + {\n> > + Expr *argexpr = (Expr *) lfirst(argexprlc);\n> > + String *argname = lfirst_node(String, argnamelc);\n> > + JsonPathVariable *var = palloc(sizeof(*var));\n> > +\n> > + var->name = pstrdup(argname->sval);\n>\n> Why does this need to be strdup'd?\n\nSeems unnecessary, so removed.\n\n> > + /* Step for the actual JSON path evaluation; see ExecEvalJsonExpr(). */\n> > + scratch->opcode = EEOP_JSONEXPR_PATH;\n> > + scratch->d.jsonexpr.jsestate = jsestate;\n> > + ExprEvalPushStep(state, scratch);\n> > +\n> > + /*\n> > + * Step to handle ON ERROR and ON EMPTY behavior. 
Also, to handle errors\n> > + * that may occur during coercion handling.\n> > + *\n> > + * See ExecEvalJsonExprBehavior().\n> > + */\n> > + scratch->opcode = EEOP_JSONEXPR_BEHAVIOR;\n> > + scratch->d.jsonexpr_behavior.jsestate = jsestate;\n> > + behavior_step_off = state->steps_len;\n> > + ExprEvalPushStep(state, scratch);\n>\n> From what I can tell there a) can never be a step between EEOP_JSONEXPR_PATH\n> and EEOP_JSONEXPR_BEHAVIOR b) EEOP_JSONEXPR_PATH ends with an unconditional\n> branch. What's the point of the two different steps here?\n\nA separate BEHAVIOR step is needed to jump to when the coercion step\ncatches an error which must be handled with the appropriate ON ERROR\nbehavior.\n\n> > + EEO_CASE(EEOP_JSONEXPR_PATH)\n> > + {\n> > + /* too complex for an inline implementation */\n> > + ExecEvalJsonExpr(state, op, econtext);\n> > + EEO_NEXT();\n> > + }\n>\n> Why does EEOP_JSONEXPR_PATH call ExecEvalJsonExpr, the names don't match...\n\nRenamed to ExecEvalJsonExprPath().\n\n> > + EEO_CASE(EEOP_JSONEXPR_SKIP)\n> > + {\n> > + /* too complex for an inline implementation */\n> > + EEO_JUMP(ExecEvalJsonExprSkip(state, op));\n> > + }\n> ...\n>\n>\n> > + EEO_CASE(EEOP_JSONEXPR_COERCION_FINISH)\n> > + {\n> > + /* too complex for an inline implementation */\n> > + EEO_JUMP(ExecEvalJsonExprCoercionFinish(state, op));\n> > + }\n>\n> This seems to just return op->d.jsonexpr_coercion_finish.jump_coercion_error\n> or op->d.jsonexpr_coercion_finish.jump_coercion_done. Which makes me think\n> it'd be better to return a boolean? Particularly because that's how you\n> already implemented it for JIT (except that you did it by hardcoding the jump\n> step to compare to, which seems odd).\n>\n> Separately, why do we even need a jump for both cases, and not just for the\n> error case?\n\nAgreed. I've redesigned all of the steps so that we need to remember\nonly a couple of jump addresses in JsonExprState for hard-coded\njumping:\n\n1. the address of the step that handles ON ERROR/EMPTY clause\n(statically set during compilation)\n2. the address of the step that evaluates coercion (dynamically set\ndepending on the type of the JSON value to coerce)\n\nThe redesign involved changing:\n\n* What each step does\n* Arranging steps in the order of operations that must be performed in\nthe following order:\n\n1. compute formatted_expr\n2. JUMP_IF_NULL (jumps to coerce the NULL result)\n3. compute pathspec\n4. JUMP_IF_NULL (jumps to coerce the NULL result)\n5. compute PASSING arg expressions or noop\n6. compute JsonPath{Exists|Query|Value} (hard-coded jump to step 9 if\nerror/empty or to appropriate coercion)\n7. evaluate coercion (via expression or via IO in\nExecEvalJsonCoercionViaPopulateOrIO) ->\n8. coercion finish\n9. JUMP_IF_NOT_TRUE (error) (jumps to skip the next expression if !error)\n10. ON ERROR expression\n12. JUMP_IF_NOT_TRUE (empty) (jumps to skip the next expression if !empty)\n13. 
ON EMPTY expression\n\nThere are also some unconditional JUMPs added in between above steps\nto skip to end or the appropriate target address as needed.\n\n> > + EEO_CASE(EEOP_JSONEXPR_BEHAVIOR)\n> > + {\n> > + /* too complex for an inline implementation */\n> > + EEO_JUMP(ExecEvalJsonExprBehavior(state, op));\n> > + }\n> > +\n> > + EEO_CASE(EEOP_JSONEXPR_COERCION)\n> > + {\n> > + /* too complex for an inline implementation */\n> > + EEO_JUMP(ExecEvalJsonExprCoercion(state, op, econtext,\n> > + *op->resvalue, *op->resnull));\n> > + }\n>\n> I wonder if this is the right design for this op - you're declaring this to be\n> op not worth implementing inline, yet you then have it implemented by hand for JIT.\n\nThis has been redesigned to not require the hard-coded jumps like these.\n\n> > +/*\n> > + * Evaluate given JsonExpr by performing the specified JSON operation.\n> > + *\n> > + * This also populates the JsonExprPostEvalState with the information needed\n> > + * by the subsequent steps that handle the specified JsonBehavior.\n> > + */\n> > +void\n> > +ExecEvalJsonExpr(ExprState *state, ExprEvalStep *op, ExprContext *econtext)\n> > +{\n> > + JsonExprState *jsestate = op->d.jsonexpr.jsestate;\n> > + JsonExprPreEvalState *pre_eval = &jsestate->pre_eval;\n> > + JsonExprPostEvalState *post_eval = &jsestate->post_eval;\n> > + JsonExpr *jexpr = jsestate->jsexpr;\n> > + Datum item;\n> > + Datum res = (Datum) 0;\n> > + bool resnull = true;\n> > + JsonPath *path;\n> > + bool throw_error = (jexpr->on_error->btype == JSON_BEHAVIOR_ERROR);\n> > + bool *error = &post_eval->error;\n> > + bool *empty = &post_eval->empty;\n> > +\n> > + item = pre_eval->formatted_expr.value;\n> > + path = DatumGetJsonPathP(pre_eval->pathspec.value);\n> > +\n> > + /* Reset JsonExprPostEvalState for this evaluation. */\n> > + memset(post_eval, 0, sizeof(*post_eval));\n> > +\n> > + switch (jexpr->op)\n> > + {\n> > + case JSON_EXISTS_OP:\n> > + {\n> > + bool exists = JsonPathExists(item, path,\n> > + !throw_error ? error : NULL,\n> > + pre_eval->args);\n> > +\n> > + post_eval->jcstate = jsestate->result_jcstate;\n> > + if (*error)\n> > + {\n> > + *op->resnull = true;\n> > + *op->resvalue = (Datum) 0;\n> > + return;\n> > + }\n> > +\n> > + resnull = false;\n> > + res = BoolGetDatum(exists);\n> > + break;\n> > + }\n>\n> Kinda seems there should be a EEOP_JSON_EXISTS/JSON_QUERY_OP op, instead of\n> implementing it all inside ExecEvalJsonExpr. I think this might obsolete\n> needing to rediscover that the value is null in SKIP etc?\n\nI tried but didn't really see the point of breaking\nExecEvalJsonExprPath() down into one step per JSON_*OP.\n\nThe skipping logic is based on the result of *2* input expressions\nformatted_expr and pathspec which are computed before getting to\nExecEvalJsonExprPath(). Also, the skipping logic also allows to skip\nthe evaluation of PASSING arguments which also need to be computed\nbefore ExecEvalJsonExprPath().\n\n> > + case JSON_QUERY_OP:\n> > + res = JsonPathQuery(item, path, jexpr->wrapper, empty,\n> > + !throw_error ? 
error : NULL,\n> > + pre_eval->args);\n> > +\n> > + post_eval->jcstate = jsestate->result_jcstate;\n> > + if (*error)\n> > + {\n> > + *op->resnull = true;\n> > + *op->resvalue = (Datum) 0;\n> > + return;\n> > + }\n> > + resnull = !DatumGetPointer(res);\n>\n> Shoulnd't this check empty?\n\nFixed.\n\n> FWIW, it's also pretty odd that JsonPathQuery() once\n> return (Datum) 0;\n> and later does\n> return PointerGetDatum(NULL);\n\nYes, fixed to use the former style at all returns.\n\n> > + case JSON_VALUE_OP:\n> > + {\n> > + JsonbValue *jbv = JsonPathValue(item, path, empty,\n> > + !throw_error ? error : NULL,\n> > + pre_eval->args);\n> > +\n> > + /* Might get overridden below by an item_jcstate. */\n> > + post_eval->jcstate = jsestate->result_jcstate;\n> > + if (*error)\n> > + {\n> > + *op->resnull = true;\n> > + *op->resvalue = (Datum) 0;\n> > + return;\n> > + }\n> > +\n> > + if (!jbv) /* NULL or empty */\n> > + {\n> > + resnull = true;\n> > + break;\n> > + }\n> > +\n> > + Assert(!*empty);\n> > +\n> > + resnull = false;\n> > +\n> > + /* Coerce scalar item to the output type */\n> > +\n> > + /*\n> > + * If the requested output type is json(b), use\n> > + * JsonExprState.result_coercion to do the coercion.\n> > + */\n> > + if (jexpr->returning->typid == JSONOID ||\n> > + jexpr->returning->typid == JSONBOID)\n> > + {\n> > + /* Use result_coercion from json[b] to the output type */\n> > + res = JsonbPGetDatum(JsonbValueToJsonb(jbv));\n> > + break;\n> > + }\n> > +\n> > + /*\n> > + * Else, use one of the item_coercions.\n> > + *\n> > + * Error out if no cast exists to coerce SQL/JSON item to the\n> > + * the output type.\n> > + */\n> > + res = ExecPrepareJsonItemCoercion(jbv,\n> > + jsestate->item_jcstates,\n> > + &post_eval->jcstate);\n> > + if (post_eval->jcstate &&\n> > + post_eval->jcstate->coercion &&\n> > + (post_eval->jcstate->coercion->via_io ||\n> > + post_eval->jcstate->coercion->via_populate))\n> > + {\n> > + if (!throw_error)\n> > + {\n> > + *op->resnull = true;\n> > + *op->resvalue = (Datum) 0;\n> > + return;\n> > + }\n> > +\n> > + /*\n> > + * Coercion via I/O means here that the cast to the target\n> > + * type simply does not exist.\n> > + */\n> > + ereport(ERROR,\n> > + (errcode(ERRCODE_SQL_JSON_ITEM_CANNOT_BE_CAST_TO_TARGET_TYPE),\n> > + errmsg(\"SQL/JSON item cannot be cast to target type\")));\n> > + }\n> > + break;\n> > + }\n> > +\n> > + default:\n> > + elog(ERROR, \"unrecognized SQL/JSON expression op %d\", jexpr->op);\n> > + *op->resnull = true;\n> > + *op->resvalue = (Datum) 0;\n> > + return;\n> > + }\n> > +\n> > + /*\n> > + * If the ON EMPTY behavior is to cause an error, do so here. 
Other\n> > + * behaviors will be handled in ExecEvalJsonExprBehavior().\n> > + */\n> > + if (*empty)\n> > + {\n> > + Assert(jexpr->on_empty); /* it is not JSON_EXISTS */\n> > +\n> > + if (jexpr->on_empty->btype == JSON_BEHAVIOR_ERROR)\n> > + {\n> > + if (!throw_error)\n> > + {\n> > + *op->resnull = true;\n> > + *op->resvalue = (Datum) 0;\n> > + return;\n> > + }\n> > +\n> > + ereport(ERROR,\n> > + (errcode(ERRCODE_NO_SQL_JSON_ITEM),\n> > + errmsg(\"no SQL/JSON item\")));\n> > + }\n> > + }\n> > +\n> > + *op->resvalue = res;\n> > + *op->resnull = resnull;\n> > +}\n> > +\n> > +/*\n> > + * Skip calling ExecEvalJson() on the given JsonExpr?\n>\n> I don't think that function exists.\n\nFixed.\n\n> > + * Returns the step address to be performed next.\n> > + */\n> > +int\n> > +ExecEvalJsonExprSkip(ExprState *state, ExprEvalStep *op)\n> > +{\n> > + JsonExprState *jsestate = op->d.jsonexpr_skip.jsestate;\n> > +\n> > + /*\n> > + * Skip if either of the input expressions has turned out to be NULL,\n> > + * though do execute domain checks for NULLs, which are handled by the\n> > + * coercion step.\n> > + */\n> > + if (jsestate->pre_eval.formatted_expr.isnull ||\n> > + jsestate->pre_eval.pathspec.isnull)\n> > + {\n> > + *op->resvalue = (Datum) 0;\n> > + *op->resnull = true;\n> > +\n> > + /* ExecEvalJsonExprCoercion() depends on this. */\n> > + jsestate->post_eval.jcstate = jsestate->result_jcstate;\n> > +\n> > + return op->d.jsonexpr_skip.jump_coercion;\n> > + }\n> > +\n> > + /*\n> > + * Go evaluate the PASSING args if any and subsequently JSON path itself.\n> > + */\n> > + return op->d.jsonexpr_skip.jump_passing_args;\n> > +}\n> > +\n> > +/*\n> > + * Returns the step address to perform the JsonBehavior applicable to\n> > + * the JSON item that resulted from evaluating the given JsonExpr.\n> > + *\n> > + * Returns the step address to be performed next.\n> > + */\n> > +int\n> > +ExecEvalJsonExprBehavior(ExprState *state, ExprEvalStep *op)\n> > +{\n> > + JsonExprState *jsestate = op->d.jsonexpr_behavior.jsestate;\n> > + JsonExprPostEvalState *post_eval = &jsestate->post_eval;\n> > + JsonBehavior *behavior = NULL;\n> > + int jump_to = -1;\n> > +\n> > + if (post_eval->error || post_eval->coercion_error)\n> > + {\n> > + behavior = jsestate->jsexpr->on_error;\n> > + jump_to = op->d.jsonexpr_behavior.jump_onerror_expr;\n> > + }\n> > + else if (post_eval->empty)\n> > + {\n> > + behavior = jsestate->jsexpr->on_empty;\n> > + jump_to = op->d.jsonexpr_behavior.jump_onempty_expr;\n> > + }\n> > + else if (!post_eval->coercion_done)\n> > + {\n> > + /*\n> > + * If no error or the JSON item is not empty, directly go to the\n> > + * coercion step to coerce the item as is.\n> > + */\n> > + return op->d.jsonexpr_behavior.jump_coercion;\n> > + }\n> > +\n> > + Assert(behavior);\n> > +\n> > + /*\n> > + * Set up for coercion step that will run to coerce a non-default behavior\n> > + * value. It should use result_coercion, if any. Errors that may occur\n> > + * should be thrown for JSON ops other than JSON_VALUE_OP.\n> > + */\n> > + if (behavior->btype != JSON_BEHAVIOR_DEFAULT)\n> > + {\n> > + post_eval->jcstate = jsestate->result_jcstate;\n> > + post_eval->coercing_behavior_expr = true;\n> > + }\n> > +\n> > + Assert(jump_to >= 0);\n> > + return jump_to;\n> > +}\n> > +\n> > +/*\n> > + * Evaluate or return the step address to evaluate a coercion of a JSON item\n> > + * to the target type. 
The former if the coercion must be done right away by\n> > + * calling the target type's input function, and for some types, by calling\n> > + * json_populate_type().\n> > + *\n> > + * Returns the step address to be performed next.\n> > + */\n> > +int\n> > +ExecEvalJsonExprCoercion(ExprState *state, ExprEvalStep *op,\n> > + ExprContext *econtext,\n> > + Datum res, bool resnull)\n> > +{\n> > + JsonExprState *jsestate = op->d.jsonexpr_coercion.jsestate;\n> > + JsonExpr *jexpr = jsestate->jsexpr;\n> > + JsonExprPostEvalState *post_eval = &jsestate->post_eval;\n> > + JsonCoercionState *jcstate = post_eval->jcstate;\n> > + char *val_string = NULL;\n> > + bool omit_quotes = false;\n> > +\n> > + switch (jexpr->op)\n> > + {\n> > + case JSON_EXISTS_OP:\n> > + if (jcstate && jcstate->jump_eval_expr >= 0)\n> > + return jcstate->jump_eval_expr;\n>\n> Shouldn't this be a compile-time check and instead be handled by simply not\n> emitting a step instead?\n\nYes and...\n\n> > + /* No coercion needed. */\n> > + post_eval->coercion_done = true;\n> > + return op->d.jsonexpr_coercion.jump_coercion_done;\n>\n> Which then means we also don't need to emit anything here, no?\n\nYes.\n\nBasically all jump address selection logic is now handled in\nExecInitJsonExpr() (compile-time) as described above.\n\n> > +/*\n> > + * Prepare SQL/JSON item coercion to the output type. Returned a datum of the\n> > + * corresponding SQL type and a pointer to the coercion state.\n> > + */\n> > +static Datum\n> > +ExecPrepareJsonItemCoercion(JsonbValue *item, List *item_jcstates,\n> > + JsonCoercionState **p_item_jcstate)\n>\n> I might have missed it, but if not: The whole way the coercion stuff works\n> needs a decent comment explaining how things fit together.\n>\n> What does \"item\" really mean here?\n\nThis term \"item\" I think refers to the JsonbValue returned by\nJsonPathValue(), which can be one of jbvType types. Because we need\nmultiple coercions to account for that, I assume the original authors\ndecided to use the term/phrase \"item coercions\" to distinguish from\nthe result_coercion which assumes either a Boolean (EXISTS) or\nJsonb/text (QUERY) result.\n\n> > +{\n> > + JsonCoercionState *item_jcstate;\n> > + Datum res;\n> > + JsonbValue buf;\n> > +\n> > + if (item->type == jbvBinary &&\n> > + JsonContainerIsScalar(item->val.binary.data))\n> > + {\n> > + bool res PG_USED_FOR_ASSERTS_ONLY;\n> > +\n> > + res = JsonbExtractScalar(item->val.binary.data, &buf);\n> > + item = &buf;\n> > + Assert(res);\n> > + }\n> > +\n> > + /* get coercion state reference and datum of the corresponding SQL type */\n> > + switch (item->type)\n> > + {\n> > + case jbvNull:\n> > + item_jcstate = list_nth(item_jcstates, JsonItemTypeNull);\n>\n> This seems quite odd. We apparently have a fixed-length array, where specific\n> offsets have specific meanings, yet it's encoded as a list that's then\n> accessed with constant offsets?\n>\n> Right now ExecEvalJsonExpr() stores what ExecPrepareJsonItemCoercion() chooses\n> in post_eval->jcstate. Which the immediately following\n> ExecEvalJsonExprBehavior() then digs out again. Then there's also control flow\n> via post_eval->coercing_behavior_expr. This is ... not nice.\n\nAgree.\n\nIn the new code, the struct JsonCoercionState is gone. 
So any given\ncoercion boils down to a steps address during runtime, which is\ndetermined by ExecEvalJsonExprPath().\n\n> ISTM that jsestate should have an array of jump targets, indexed by\n> item->type.\n\nYes, an array of jump targets seems better for these \"item coercions\".\nresult_coercion is a single address stored separately.\n\n> Which, for llvm IR, you can encode as a switch statement, instead\n> of doing control flow via JsonExprState/JsonExprPostEvalState. There's\n> obviously a bit more needed, but I think something like that should work, and\n> simplify things a fair bit.\n\nThanks for suggesting the switch-case idea. The LLVM IR for\nEEOP_JSONEXPR_PATH now includes one to jump to one of the coercion\naddresses between that for result_coercion and \"item coercions\" if\npresent.\n\n> > @@ -15711,6 +15721,192 @@ func_expr_common_subexpr:\n> > n->location = @1;\n> > $$ = (Node *) n;\n> > }\n> > + | JSON_QUERY '('\n> > + json_api_common_syntax\n> > + json_returning_clause_opt\n> > + json_wrapper_behavior\n> > + json_quotes_clause_opt\n> > + ')'\n> > + {\n> > + JsonFuncExpr *n = makeNode(JsonFuncExpr);\n> > +\n> > + n->op = JSON_QUERY_OP;\n> > + n->common = (JsonCommon *) $3;\n> > + n->output = (JsonOutput *) $4;\n> > + n->wrapper = $5;\n> > + if (n->wrapper != JSW_NONE && $6 != JS_QUOTES_UNSPEC)\n> > + ereport(ERROR,\n> > + (errcode(ERRCODE_SYNTAX_ERROR),\n> > + errmsg(\"SQL/JSON QUOTES behavior must not be specified when WITH WRAPPER is used\"),\n> > + parser_errposition(@6)));\n> > + n->quotes = $6;\n> > + n->location = @1;\n> > + $$ = (Node *) n;\n> > + }\n> > + | JSON_QUERY '('\n> > + json_api_common_syntax\n> > + json_returning_clause_opt\n> > + json_wrapper_behavior\n> > + json_quotes_clause_opt\n> > + json_query_behavior ON EMPTY_P\n> > + ')'\n> > + {\n> > + JsonFuncExpr *n = makeNode(JsonFuncExpr);\n> > +\n> > + n->op = JSON_QUERY_OP;\n> > + n->common = (JsonCommon *) $3;\n> > + n->output = (JsonOutput *) $4;\n> > + n->wrapper = $5;\n> > + if (n->wrapper != JSW_NONE && $6 != JS_QUOTES_UNSPEC)\n> > + ereport(ERROR,\n> > + (errcode(ERRCODE_SYNTAX_ERROR),\n> > + errmsg(\"SQL/JSON QUOTES behavior must not be specified when WITH WRAPPER is used\"),\n> > + parser_errposition(@6)));\n> > + n->quotes = $6;\n> > + n->on_empty = $7;\n> > + n->location = @1;\n> > + $$ = (Node *) n;\n> > + }\n> > + | JSON_QUERY '('\n> > + json_api_common_syntax\n> > + json_returning_clause_opt\n> > + json_wrapper_behavior\n> > + json_quotes_clause_opt\n> > + json_query_behavior ON ERROR_P\n> > + ')'\n> > + {\n> > + JsonFuncExpr *n = makeNode(JsonFuncExpr);\n> > +\n> > + n->op = JSON_QUERY_OP;\n> > + n->common = (JsonCommon *) $3;\n> > + n->output = (JsonOutput *) $4;\n> > + n->wrapper = $5;\n> > + if (n->wrapper != JSW_NONE && $6 != JS_QUOTES_UNSPEC)\n> > + ereport(ERROR,\n> > + (errcode(ERRCODE_SYNTAX_ERROR),\n> > + errmsg(\"SQL/JSON QUOTES behavior must not be specified when WITH WRAPPER is used\"),\n> > + parser_errposition(@6)));\n> > + n->quotes = $6;\n> > + n->on_error = $7;\n> > + n->location = @1;\n> > + $$ = (Node *) n;\n> > + }\n> > + | JSON_QUERY '('\n> > + json_api_common_syntax\n> > + json_returning_clause_opt\n> > + json_wrapper_behavior\n> > + json_quotes_clause_opt\n> > + json_query_behavior ON EMPTY_P\n> > + json_query_behavior ON ERROR_P\n> > + ')'\n> > + {\n> > + JsonFuncExpr *n = makeNode(JsonFuncExpr);\n> > +\n> > + n->op = JSON_QUERY_OP;\n> > + n->common = (JsonCommon *) $3;\n> > + n->output = (JsonOutput *) $4;\n> > + n->wrapper = $5;\n> > + if (n->wrapper != JSW_NONE && $6 
!= JS_QUOTES_UNSPEC)\n> > + ereport(ERROR,\n> > + (errcode(ERRCODE_SYNTAX_ERROR),\n> > + errmsg(\"SQL/JSON QUOTES behavior must not be specified when WITH WRAPPER is used\"),\n> > + parser_errposition(@6)));\n> > + n->quotes = $6;\n> > + n->on_empty = $7;\n> > + n->on_error = $10;\n> > + n->location = @1;\n> > + $$ = (Node *) n;\n> > + }\n>\n> I'm sure we can find a way to deduplicate this.\n>\n>\n> > + | JSON_EXISTS '('\n> > + json_api_common_syntax\n> > + json_returning_clause_opt\n> > + ')'\n> > + {\n> > + JsonFuncExpr *p = makeNode(JsonFuncExpr);\n> > +\n> > + p->op = JSON_EXISTS_OP;\n> > + p->common = (JsonCommon *) $3;\n> > + p->output = (JsonOutput *) $4;\n> > + p->location = @1;\n> > + $$ = (Node *) p;\n> > + }\n> > + | JSON_EXISTS '('\n> > + json_api_common_syntax\n> > + json_returning_clause_opt\n> > + json_exists_behavior ON ERROR_P\n> > + ')'\n> > + {\n> > + JsonFuncExpr *p = makeNode(JsonFuncExpr);\n> > +\n> > + p->op = JSON_EXISTS_OP;\n> > + p->common = (JsonCommon *) $3;\n> > + p->output = (JsonOutput *) $4;\n> > + p->on_error = $5;\n> > + p->location = @1;\n> > + $$ = (Node *) p;\n> > + }\n> > + | JSON_VALUE '('\n> > + json_api_common_syntax\n> > + json_returning_clause_opt\n> > + ')'\n> > + {\n> > + JsonFuncExpr *n = makeNode(JsonFuncExpr);\n> > +\n> > + n->op = JSON_VALUE_OP;\n> > + n->common = (JsonCommon *) $3;\n> > + n->output = (JsonOutput *) $4;\n> > + n->location = @1;\n> > + $$ = (Node *) n;\n> > + }\n> > +\n> > + | JSON_VALUE '('\n> > + json_api_common_syntax\n> > + json_returning_clause_opt\n> > + json_value_behavior ON EMPTY_P\n> > + ')'\n> > + {\n> > + JsonFuncExpr *n = makeNode(JsonFuncExpr);\n> > +\n> > + n->op = JSON_VALUE_OP;\n> > + n->common = (JsonCommon *) $3;\n> > + n->output = (JsonOutput *) $4;\n> > + n->on_empty = $5;\n> > + n->location = @1;\n> > + $$ = (Node *) n;\n> > + }\n> > + | JSON_VALUE '('\n> > + json_api_common_syntax\n> > + json_returning_clause_opt\n> > + json_value_behavior ON ERROR_P\n> > + ')'\n> > + {\n> > + JsonFuncExpr *n = makeNode(JsonFuncExpr);\n> > +\n> > + n->op = JSON_VALUE_OP;\n> > + n->common = (JsonCommon *) $3;\n> > + n->output = (JsonOutput *) $4;\n> > + n->on_error = $5;\n> > + n->location = @1;\n> > + $$ = (Node *) n;\n> > + }\n> > +\n> > + | JSON_VALUE '('\n> > + json_api_common_syntax\n> > + json_returning_clause_opt\n> > + json_value_behavior ON EMPTY_P\n> > + json_value_behavior ON ERROR_P\n> > + ')'\n> > + {\n> > + JsonFuncExpr *n = makeNode(JsonFuncExpr);\n> > +\n> > + n->op = JSON_VALUE_OP;\n> > + n->common = (JsonCommon *) $3;\n> > + n->output = (JsonOutput *) $4;\n> > + n->on_empty = $5;\n> > + n->on_error = $8;\n> > + n->location = @1;\n> > + $$ = (Node *) n;\n> > + }\n> > ;\n>\n> And this.\n>\n>\n>\n> > +json_query_behavior:\n> > + ERROR_P { $$ = makeJsonBehavior(JSON_BEHAVIOR_ERROR, NULL, @1); }\n> > + | NULL_P { $$ = makeJsonBehavior(JSON_BEHAVIOR_NULL, NULL, @1); }\n> > + | DEFAULT a_expr { $$ = makeJsonBehavior(JSON_BEHAVIOR_DEFAULT, $2, @1); }\n> > + | EMPTY_P ARRAY { $$ = makeJsonBehavior(JSON_BEHAVIOR_EMPTY_ARRAY, NULL, @1); }\n> > + | EMPTY_P OBJECT_P { $$ = makeJsonBehavior(JSON_BEHAVIOR_EMPTY_OBJECT, NULL, @1); }\n> > + /* non-standard, for Oracle compatibility only */\n> > + | EMPTY_P { $$ = makeJsonBehavior(JSON_BEHAVIOR_EMPTY_ARRAY, NULL, @1); }\n> > + ;\n>\n>\n> > +json_exists_behavior:\n> > + ERROR_P { $$ = makeJsonBehavior(JSON_BEHAVIOR_ERROR, NULL, @1); }\n> > + | TRUE_P { $$ = makeJsonBehavior(JSON_BEHAVIOR_TRUE, NULL, @1); }\n> > + | FALSE_P { $$ = makeJsonBehavior(JSON_BEHAVIOR_FALSE, 
NULL, @1); }\n> > + | UNKNOWN { $$ = makeJsonBehavior(JSON_BEHAVIOR_UNKNOWN, NULL, @1); }\n> > + ;\n> > +\n> > +json_value_behavior:\n> > + NULL_P { $$ = makeJsonBehavior(JSON_BEHAVIOR_NULL, NULL, @1); }\n> > + | ERROR_P { $$ = makeJsonBehavior(JSON_BEHAVIOR_ERROR, NULL, @1); }\n> > + | DEFAULT a_expr { $$ = makeJsonBehavior(JSON_BEHAVIOR_DEFAULT, $2, @1); }\n> > + ;\n>\n> This also seems like it could use some dedup.\n>\n> > src/backend/parser/gram.y | 348 +++++-\n\nI've given that a try and managed to reduce the gram.y footprint down to:\n\n src/backend/parser/gram.y | 217 +++-\n\n> This causes a nontrivial increase in the size of the parser (~5% in an\n> optimized build here), I wonder if we can do better.\n\nHmm, sorry if I sound ignorant but what do you mean by the parser here?\n\nI can see that the byte-size of gram.o increases by 1.66% after the\nabove additions (1.72% with previous versions). I've also checked\nusing log_parser_stats that there isn't much slowdown in the\nraw-parsing speed.\n\nAttached updated patch. The version of 0001 that I posted on Oct 11\nto add the error-safe version of CoerceViaIO contained many\nunnecessary bits that are now removed.\n\n--\nThanks, Amit Langote\nEDB: http://www.enterprisedb.com",
"msg_date": "Wed, 15 Nov 2023 22:00:41 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: remaining sql/json patches"
},
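A simplified sketch of the control flow described above, with illustrative names rather than the patch's: coercion steps get fixed addresses when the expression is compiled, and at run time the JSON path step only selects which address to jump to, based on the type of the item that came back, or jumps to the ON ERROR/ON EMPTY handling step.

#include "postgres.h"
#include "utils/jsonb.h"

/* Illustrative only; the real JsonExprState differs. */
typedef struct JsonExprJumpsSketch
{
    int         jump_behavior;          /* ON ERROR / ON EMPTY handling */
    int         jump_result_coercion;   /* coercion of the whole result */
    int         jump_coerce_null;       /* per-item-type coercion steps */
    int         jump_coerce_string;
    int         jump_coerce_numeric;
    int         jump_coerce_bool;
} JsonExprJumpsSketch;

static int
choose_next_step(const JsonExprJumpsSketch *jumps, const JsonbValue *item,
                 bool error_or_empty)
{
    if (error_or_empty)
        return jumps->jump_behavior;

    switch (item->type)
    {
        case jbvNull:
            return jumps->jump_coerce_null;
        case jbvString:
            return jumps->jump_coerce_string;
        case jbvNumeric:
            return jumps->jump_coerce_numeric;
        case jbvBool:
            return jumps->jump_coerce_bool;
        default:
            /* anything else in this sketch uses the RETURNING-type coercion */
            return jumps->jump_result_coercion;
    }
}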
{
"msg_contents": "Hi,\n\nThanks, this looks like a substantial improvement. I don't quite have time to\nlook right now, but I thought I'd answer one question below.\n\n\nOn 2023-11-15 22:00:41 +0900, Amit Langote wrote:\n> > This causes a nontrivial increase in the size of the parser (~5% in an\n> > optimized build here), I wonder if we can do better.\n> \n> Hmm, sorry if I sound ignorant but what do you mean by the parser here?\n\ngram.o, in an optimized build.\n\n\n> I can see that the byte-size of gram.o increases by 1.66% after the\n> above additions (1.72% with previous versions).\n\nI'm not sure anymore how I measured it, but if you just looked at the total\nfile size, that might not show the full gain, because of debug symbols\netc. You can use the size command to look at just the code and data size.\n\n\n> I've also checked\n> using log_parser_stats that there isn't much slowdown in the\n> raw-parsing speed.\n\nWhat does \"isn't much slowdown\" mean in numbers?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 15 Nov 2023 09:11:19 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "On 2023-11-15 09:11:19 -0800, Andres Freund wrote:\n> On 2023-11-15 22:00:41 +0900, Amit Langote wrote:\n> > > This causes a nontrivial increase in the size of the parser (~5% in an\n> > > optimized build here), I wonder if we can do better.\n> > \n> > Hmm, sorry if I sound ignorant but what do you mean by the parser here?\n> \n> gram.o, in an optimized build.\n\nOr, hm, maybe I meant the size of the generated gram.c actually.\n\nEither is worth looking at.\n\n\n",
"msg_date": "Wed, 15 Nov 2023 09:12:07 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "Op 11/15/23 om 14:00 schreef Amit Langote:\n> Hi,\n\n[..]\n\n> Attached updated patch. The version of 0001 that I posted on Oct 11\n> to add the error-safe version of CoerceViaIO contained many\n> unnecessary bits that are now removed.\n> \n> --\n> Thanks, Amit Langote\n> EDB: http://www.enterprisedb.com\n\n > [v24-0001-Add-soft-error-handling-to-some-expression-nodes.patch]\n > [v24-0002-Add-soft-error-handling-to-populate_record_field.patch]\n > [v24-0003-SQL-JSON-query-functions.patch]\n > [v24-0004-JSON_TABLE.patch]\n > [v24-0005-Claim-SQL-standard-compliance-for-SQL-JSON-featu.patch]\n\nHi Amit,\n\nHere is a statement that seems to gobble up all memory and to totally \nlock up the machine. No ctrl-C - only a power reset gets me out of that. \nIt was in one of my tests, so it used to work:\n\nselect json_query(\n jsonb '\"[3,4]\"'\n , '$[*]' returning bigint[] empty object on error\n);\n\nCan you have a look?\n\nThanks,\n\nErik\n\n\n\n\n\n\n",
"msg_date": "Thu, 16 Nov 2023 05:53:23 +0100",
"msg_from": "Erik Rijkers <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "Hi Erik,\n\nOn Thu, Nov 16, 2023 at 13:52 Erik Rijkers <[email protected]> wrote:\n\n> Op 11/15/23 om 14:00 schreef Amit Langote:\n> > Hi,\n>\n> [..]\n>\n> > Attached updated patch. The version of 0001 that I posted on Oct 11\n> > to add the error-safe version of CoerceViaIO contained many\n> > unnecessary bits that are now removed.\n> >\n> > --\n> > Thanks, Amit Langote\n> > EDB: http://www.enterprisedb.com\n>\n> > [v24-0001-Add-soft-error-handling-to-some-expression-nodes.patch]\n> > [v24-0002-Add-soft-error-handling-to-populate_record_field.patch]\n> > [v24-0003-SQL-JSON-query-functions.patch]\n> > [v24-0004-JSON_TABLE.patch]\n> > [v24-0005-Claim-SQL-standard-compliance-for-SQL-JSON-featu.patch]\n>\n> Hi Amit,\n>\n> Here is a statement that seems to gobble up all memory and to totally\n> lock up the machine. No ctrl-C - only a power reset gets me out of that.\n> It was in one of my tests, so it used to work:\n>\n> select json_query(\n> jsonb '\"[3,4]\"'\n> , '$[*]' returning bigint[] empty object on error\n> );\n>\n> Can you have a look?\n\n\nWow, will look. Thanks.\n\n>\n\nHi Erik,On Thu, Nov 16, 2023 at 13:52 Erik Rijkers <[email protected]> wrote:Op 11/15/23 om 14:00 schreef Amit Langote:\n> Hi,\n\n[..]\n\n> Attached updated patch. The version of 0001 that I posted on Oct 11\n> to add the error-safe version of CoerceViaIO contained many\n> unnecessary bits that are now removed.\n> \n> --\n> Thanks, Amit Langote\n> EDB: http://www.enterprisedb.com\n\n > [v24-0001-Add-soft-error-handling-to-some-expression-nodes.patch]\n > [v24-0002-Add-soft-error-handling-to-populate_record_field.patch]\n > [v24-0003-SQL-JSON-query-functions.patch]\n > [v24-0004-JSON_TABLE.patch]\n > [v24-0005-Claim-SQL-standard-compliance-for-SQL-JSON-featu.patch]\n\nHi Amit,\n\nHere is a statement that seems to gobble up all memory and to totally \nlock up the machine. No ctrl-C - only a power reset gets me out of that. \nIt was in one of my tests, so it used to work:\n\nselect json_query(\n jsonb '\"[3,4]\"'\n , '$[*]' returning bigint[] empty object on error\n);\n\nCan you have a look?Wow, will look. Thanks.",
"msg_date": "Thu, 16 Nov 2023 13:57:19 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "On Thu, Nov 16, 2023 at 1:57 PM Amit Langote <[email protected]> wrote:\n> On Thu, Nov 16, 2023 at 13:52 Erik Rijkers <[email protected]> wrote:\n>> Op 11/15/23 om 14:00 schreef Amit Langote:\n>> > [v24-0001-Add-soft-error-handling-to-some-expression-nodes.patch]\n>> > [v24-0002-Add-soft-error-handling-to-populate_record_field.patch]\n>> > [v24-0003-SQL-JSON-query-functions.patch]\n>> > [v24-0004-JSON_TABLE.patch]\n>> > [v24-0005-Claim-SQL-standard-compliance-for-SQL-JSON-featu.patch]\n>>\n>> Hi Amit,\n>>\n>> Here is a statement that seems to gobble up all memory and to totally\n>> lock up the machine. No ctrl-C - only a power reset gets me out of that.\n>> It was in one of my tests, so it used to work:\n>>\n>> select json_query(\n>> jsonb '\"[3,4]\"'\n>> , '$[*]' returning bigint[] empty object on error\n>> );\n>>\n>> Can you have a look?\n>\n> Wow, will look. Thanks.\n\nShould be fixed in the attached. The bug was caused by the recent\nredesign of JsonExpr evaluation steps.\n\nYour testing is very much appreciated. Thanks.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com",
"msg_date": "Thu, 16 Nov 2023 15:53:30 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "On Thu, Nov 16, 2023 at 2:11 AM Andres Freund <[email protected]> wrote:\n> On 2023-11-15 22:00:41 +0900, Amit Langote wrote:\n> > > This causes a nontrivial increase in the size of the parser (~5% in an\n> > > optimized build here), I wonder if we can do better.\n> >\n> > Hmm, sorry if I sound ignorant but what do you mean by the parser here?\n>\n> gram.o, in an optimized build.\n>\n> > I can see that the byte-size of gram.o increases by 1.66% after the\n> > above additions (1.72% with previous versions).\n>\n> I'm not sure anymore how I measured it, but if you just looked at the total\n> file size, that might not show the full gain, because of debug symbols\n> etc. You can use the size command to look at just the code and data size.\n\n$ size /tmp/gram.*\n text data bss dec hex filename\n 661808 0 0 661808 a1930 /tmp/gram.o.unpatched\n 672800 0 0 672800 a4420 /tmp/gram.o.patched\n\nThat's still a 1.66% increase in the code size:\n\n(672800 - 661808) / 661808 % = 1.66\n\nAs for gram.c, the increase is a bit larger:\n\n$ ll /tmp/gram.*\n-rw-rw-r--. 1 amit amit 2605925 Nov 16 16:18 /tmp/gram.c.unpatched\n-rw-rw-r--. 1 amit amit 2709422 Nov 16 16:22 /tmp/gram.c.patched\n\n(2709422 - 2605925) / 2605925 % = 3.97\n\n> > I've also checked\n> > using log_parser_stats that there isn't much slowdown in the\n> > raw-parsing speed.\n>\n> What does \"isn't much slowdown\" mean in numbers?\n\nSure, the benchmark I used measured the elapsed time (using\nlog_parser_stats) of parsing a simple select statement (*) averaged\nover 10000 repetitions of the same query performed with `psql -c`:\n\nUnpatched: 0.000061 seconds\nPatched: 0.000061 seconds\n\nHere's a look at the perf:\n\nUnpatched:\n 0.59% [.] AllocSetAlloc\n 0.51% [.] hash_search_with_hash_value\n 0.47% [.] base_yyparse\n\nPatched:\n 0.63% [.] AllocSetAlloc\n 0.52% [.] hash_search_with_hash_value\n 0.44% [.] base_yyparse\n\n(*) select 1, 2, 3 from foo where a = 1\n\nIs there a more relevant benchmark I could use?\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 16 Nov 2023 17:48:55 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "hi.\nminor issues.\n\nIn transformJsonFuncExpr(ParseState *pstate, JsonFuncExpr *func)\nfunc.behavior->on_empty->location and\nfunc.behavior->on_error->location are correct.\nbut in ExecInitJsonExpr, jsestate->jsexpr->on_empty->location is -1,\njsestate->jsexpr->on_error->location is -1.\nMaybe we can preserve these on_empty, on_error token locations in\ntransformJsonBehavior.\n\nsome enum declaration, ending element need an extra comma?\n\n+ /*\n+ * ExecEvalJsonExprPath() will set this to the address of the step to\n+ * use to coerce the result of JsonPath* evaluation to the RETURNING\n+ * type. Also see the description of possible step addresses this\n+ * could be set to in the definition of JsonExprState.ZZ\n+ */\n\n\"ZZ\", typo?\n\n\n",
"msg_date": "Fri, 17 Nov 2023 15:27:22 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "On Fri, Nov 17, 2023 at 4:27 PM jian he <[email protected]> wrote:\n> hi.\n> minor issues.\n\nThanks for checking.\n\n> In transformJsonFuncExpr(ParseState *pstate, JsonFuncExpr *func)\n> func.behavior->on_empty->location and\n> func.behavior->on_error->location are correct.\n> but in ExecInitJsonExpr, jsestate->jsexpr->on_empty->location is -1,\n> jsestate->jsexpr->on_error->location is -1.\n> Maybe we can preserve these on_empty, on_error token locations in\n> transformJsonBehavior.\n\nSure.\n\n> some enum declaration, ending element need an extra comma?\n\nDidn't know about the convention to have that comma, but I can see it\nis present in most enum definitions.\n\nChanged all enums that the patch adds to conform.\n\n> + /*\n> + * ExecEvalJsonExprPath() will set this to the address of the step to\n> + * use to coerce the result of JsonPath* evaluation to the RETURNING\n> + * type. Also see the description of possible step addresses this\n> + * could be set to in the definition of JsonExprState.ZZ\n> + */\n>\n> \"ZZ\", typo?\n\nIndeed.\n\nWill include the fixes in the next version.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 17 Nov 2023 18:17:22 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "On 2023-Nov-17, Amit Langote wrote:\n\n> On Fri, Nov 17, 2023 at 4:27 PM jian he <[email protected]> wrote:\n\n> > some enum declaration, ending element need an extra comma?\n> \n> Didn't know about the convention to have that comma, but I can see it\n> is present in most enum definitions.\n\nIt's new. See commit 611806cd726f.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Fri, 17 Nov 2023 10:40:38 +0100",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
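For readers unfamiliar with the convention from commit 611806cd726f: C99 permits a comma after the last enumerator, so adding a value later touches only one line. A made-up example, not an enum from the patches:

typedef enum JsonSketchQuotes
{
    JS_SKETCH_QUOTES_UNSPEC,
    JS_SKETCH_QUOTES_KEEP,
    JS_SKETCH_QUOTES_OMIT,      /* trailing comma is intentional */
} JsonSketchQuotes;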
{
"msg_contents": "On Nov 16, 2023, at 17:48, Amit Langote <[email protected]> wrote:\n> On Thu, Nov 16, 2023 at 2:11 AM Andres Freund <[email protected]> wrote:\n>> On 2023-11-15 22:00:41 +0900, Amit Langote wrote:\n>>>> This causes a nontrivial increase in the size of the parser (~5% in an\n>>>> optimized build here), I wonder if we can do better.\n>>> \n>>> Hmm, sorry if I sound ignorant but what do you mean by the parser here?\n>> \n>> gram.o, in an optimized build.\n>> \n>>> I can see that the byte-size of gram.o increases by 1.66% after the\n>>> above additions (1.72% with previous versions).\n>> \n>> I'm not sure anymore how I measured it, but if you just looked at the total\n>> file size, that might not show the full gain, because of debug symbols\n>> etc. You can use the size command to look at just the code and data size.\n> \n> $ size /tmp/gram.*\n> text data bss dec hex filename\n> 661808 0 0 661808 a1930 /tmp/gram.o.unpatched\n> 672800 0 0 672800 a4420 /tmp/gram.o.patched\n> \n> That's still a 1.66% increase in the code size:\n> \n> (672800 - 661808) / 661808 % = 1.66\n> \n> As for gram.c, the increase is a bit larger:\n> \n> $ ll /tmp/gram.*\n> -rw-rw-r--. 1 amit amit 2605925 Nov 16 16:18 /tmp/gram.c.unpatched\n> -rw-rw-r--. 1 amit amit 2709422 Nov 16 16:22 /tmp/gram.c.patched\n> \n> (2709422 - 2605925) / 2605925 % = 3.97\n> \n>>> I've also checked\n>>> using log_parser_stats that there isn't much slowdown in the\n>>> raw-parsing speed.\n>> \n>> What does \"isn't much slowdown\" mean in numbers?\n> \n> Sure, the benchmark I used measured the elapsed time (using\n> log_parser_stats) of parsing a simple select statement (*) averaged\n> over 10000 repetitions of the same query performed with `psql -c`:\n> \n> Unpatched: 0.000061 seconds\n> Patched: 0.000061 seconds\n> \n> Here's a look at the perf:\n> \n> Unpatched:\n> 0.59% [.] AllocSetAlloc\n> 0.51% [.] hash_search_with_hash_value\n> 0.47% [.] base_yyparse\n> \n> Patched:\n> 0.63% [.] AllocSetAlloc\n> 0.52% [.] hash_search_with_hash_value\n> 0.44% [.] 
base_yyparse\n> \n> (*) select 1, 2, 3 from foo where a = 1\n> \n> Is there a more relevant benchmark I could use?\n\nThought I’d share a few more numbers I collected to analyze the parser size increase over releases.\n\n* gram.o text bytes is from the output of `size gram.o`.\n* compiled with -O3\n\nversion gram.o text bytes %change gram.c bytes %change\n\n9.6 534010 - 2108984 -\n10 582554 9.09 2258313 7.08\n11 584596 0.35 2313475 2.44\n12 590957 1.08 2341564 1.21\n13 590381 -0.09 2357327 0.67\n14 600707 1.74 2428841 3.03\n15 633180 5.40 2495364 2.73\n16 653464 3.20 2575269 3.20\n17-sqljson 672800 2.95 2709422 3.97\n\nSo if we put SQL/JSON (including JSON_TABLE()) into 17, we end up with a gram.o 2.95% larger than v16, which granted is a somewhat larger bump, though also smaller than with some of recent releases.\n\n\n> --\n\nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\nOn Nov 16, 2023, at 17:48, Amit Langote <[email protected]> wrote:On Thu, Nov 16, 2023 at 2:11 AM Andres Freund <[email protected]> wrote:On 2023-11-15 22:00:41 +0900, Amit Langote wrote:This causes a nontrivial increase in the size of the parser (~5% in anoptimized build here), I wonder if we can do better.Hmm, sorry if I sound ignorant but what do you mean by the parser here?gram.o, in an optimized build.I can see that the byte-size of gram.o increases by 1.66% after theabove additions (1.72% with previous versions).I'm not sure anymore how I measured it, but if you just looked at the totalfile size, that might not show the full gain, because of debug symbolsetc. You can use the size command to look at just the code and data size.$ size /tmp/gram.* text data bss dec hex filename 661808 0 0 661808 a1930 /tmp/gram.o.unpatched 672800 0 0 672800 a4420 /tmp/gram.o.patchedThat's still a 1.66% increase in the code size:(672800 - 661808) / 661808 % = 1.66As for gram.c, the increase is a bit larger:$ ll /tmp/gram.*-rw-rw-r--. 1 amit amit 2605925 Nov 16 16:18 /tmp/gram.c.unpatched-rw-rw-r--. 1 amit amit 2709422 Nov 16 16:22 /tmp/gram.c.patched(2709422 - 2605925) / 2605925 % = 3.97I've also checkedusing log_parser_stats that there isn't much slowdown in theraw-parsing speed.What does \"isn't much slowdown\" mean in numbers?Sure, the benchmark I used measured the elapsed time (usinglog_parser_stats) of parsing a simple select statement (*) averagedover 10000 repetitions of the same query performed with `psql -c`:Unpatched: 0.000061 secondsPatched: 0.000061 secondsHere's a look at the perf:Unpatched: 0.59% [.] AllocSetAlloc 0.51% [.] hash_search_with_hash_value 0.47% [.] base_yyparsePatched: 0.63% [.] AllocSetAlloc 0.52% [.] hash_search_with_hash_value 0.44% [.] base_yyparse(*) select 1, 2, 3 from foo where a = 1Is there a more relevant benchmark I could use?Thought I’d share a few more numbers I collected to analyze the parser size increase over releases.* gram.o text bytes is from the output of `size gram.o`.* compiled with -O3version gram.o text bytes %change gram.c bytes %change9.6 534010 - 2108984 -10 582554 9.09 2258313 7.0811 584596 0.35 2313475 2.4412 590957 1.08 2341564 1.2113 590381 -0.09 2357327 0.6714 600707 1.74 2428841 3.0315 633180 5.40 2495364 2.7316 653464 3.20 2575269 3.2017-sqljson 672800 2.95 2709422 3.97So if we put SQL/JSON (including JSON_TABLE()) into 17, we end up with a gram.o 2.95% larger than v16, which granted is a somewhat larger bump, though also smaller than with some of recent releases.\n--Thanks, Amit LangoteEDB: http://www.enterprisedb.com",
"msg_date": "Tue, 21 Nov 2023 12:52:35 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "I looked a bit at the parser additions, because there were some concerns \nexpressed that they are quite big.\n\nIt looks like the parser rules were mostly literally copied from the BNF \nin the SQL standard. That's probably a reasonable place to start, but \nnow at the end, there is some room for simplification.\n\nAttached are a few patches that apply on top of the 0003 patch. (I \nhaven't gotten to 0004 in detail yet.) Some explanations:\n\n0001-Put-keywords-in-right-order.patch\n\nThis is just an unrelated cleanup.\n\n0002-Remove-js_quotes-union-entry.patch\n\nWe usually don't want to put every single node type into the gram.y \n%union. This one can be trivially removed.\n\n0003-Move-some-code-from-gram.y-to-parse-analysis.patch\n\nCode like this can be postponed to parse analysis, keeping gram.y \nsmaller. The error pointer loses a bit of precision, but I think that's \nok. (There is similar code in your 0004 patch, which could be similarly \nmoved.)\n\n0004-Remove-JsonBehavior-stuff-from-union.patch\n\nSimilar to my 0002. This adds a few casts as a result, but that is the \ntypical style in gram.y.\n\n0005-Get-rid-of-JsonBehaviorClause.patch\n\nI think this two-level wrapping of the behavior clauses is both \nconfusing and overkill. I was trying to just list the on-empty and \non-error clauses separately in the top-level productions (JSON_VALUE \netc.), but that led to shift/reduce errors. So the existing rule \nstructure is probably ok. But we don't need a separate node type just \nto combine two values and then unpack them again shortly thereafter. So \nI just replaced all this with a list.\n\n0006-Get-rid-of-JsonCommon.patch\n\nThis is an example where the SQL standard BNF is not sensible to apply \nliterally. I moved those clauses up directly into their callers, thus \nremoving one intermediate levels of rules and also nodes. Also, the \npath name (AS name) stuff is only for JSON_TABLE, so it's not needed in \nthis patch. I removed it here, but it would have to be readded in your \n0004 patch.\n\nAnother thing: In your patch, JSON_EXISTS has a RETURNING clause \n(json_returning_clause_opt), but I don't see that in the standard, and \nalso not in the Oracle or Db2 docs. Where did this come from?\n\nWith these changes, I think the grammar complexity in your 0003 patch is \nat an acceptable level. Similar simplification opportunities exist in \nthe 0004 patch, but I haven't worked on that yet. I suggest that you \nfocus on getting 0001..0003 committed around this commit fest and then \ndeal with 0004 in the next one. (Also split up the 0005 patch into the \npieces that apply to 0003 and 0004, respectively.)",
"msg_date": "Tue, 21 Nov 2023 08:09:17 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "On Tue, Nov 21, 2023 at 4:09 PM Peter Eisentraut <[email protected]> wrote:\n> I looked a bit at the parser additions, because there were some concerns\n> expressed that they are quite big.\n\nThanks Peter.\n\n> It looks like the parser rules were mostly literally copied from the BNF\n> in the SQL standard. That's probably a reasonable place to start, but\n> now at the end, there is some room for simplification.\n>\n> Attached are a few patches that apply on top of the 0003 patch. (I\n> haven't gotten to 0004 in detail yet.) Some explanations:\n>\n> 0001-Put-keywords-in-right-order.patch\n>\n> This is just an unrelated cleanup.\n>\n> 0002-Remove-js_quotes-union-entry.patch\n>\n> We usually don't want to put every single node type into the gram.y\n> %union. This one can be trivially removed.\n>\n> 0003-Move-some-code-from-gram.y-to-parse-analysis.patch\n>\n> Code like this can be postponed to parse analysis, keeping gram.y\n> smaller. The error pointer loses a bit of precision, but I think that's\n> ok. (There is similar code in your 0004 patch, which could be similarly\n> moved.)\n>\n> 0004-Remove-JsonBehavior-stuff-from-union.patch\n>\n> Similar to my 0002. This adds a few casts as a result, but that is the\n> typical style in gram.y.\n\nCheck.\n\n> 0005-Get-rid-of-JsonBehaviorClause.patch\n>\n> I think this two-level wrapping of the behavior clauses is both\n> confusing and overkill. I was trying to just list the on-empty and\n> on-error clauses separately in the top-level productions (JSON_VALUE\n> etc.), but that led to shift/reduce errors. So the existing rule\n> structure is probably ok. But we don't need a separate node type just\n> to combine two values and then unpack them again shortly thereafter. So\n> I just replaced all this with a list.\n\nOK, a List of two JsonBehavior nodes does sound better in this context\nthan a whole new parser node.\n\n> 0006-Get-rid-of-JsonCommon.patch\n>\n> This is an example where the SQL standard BNF is not sensible to apply\n> literally. I moved those clauses up directly into their callers, thus\n> removing one intermediate levels of rules and also nodes. Also, the\n> path name (AS name) stuff is only for JSON_TABLE, so it's not needed in\n> this patch. I removed it here, but it would have to be readded in your\n> 0004 patch.\n\nOK, done.\n\n> Another thing: In your patch, JSON_EXISTS has a RETURNING clause\n> (json_returning_clause_opt), but I don't see that in the standard, and\n> also not in the Oracle or Db2 docs. 
Where did this come from?\n\nTBH, I had no idea till I searched the original SQL/JSON development\nthread for a clue and found one at [1]:\n\n===\n* Added RETURNING clause to JSON_EXISTS() (\"side effect\" of\nimplementation EXISTS PATH columns in JSON_TABLE)\n===\n\nSo that's talking of EXISTS PATH columns of JSON_TABLE() being able to\nhave a non-default (\"bool\") type specified, as follows:\n\nJSON_TABLE(\n vals.js::jsonb, 'lax $[*]'\n COLUMNS (\n exists1 bool EXISTS PATH '$.aaa',\n exists2 int EXISTS PATH '$.aaa',\n\nI figured that JSON_EXISTS() doesn't really need a dedicated RETURNING\nclause for the above functionality to work.\n\nAttached patch 0004 to fix that; will squash into 0003 before committing.\n\n> With these changes, I think the grammar complexity in your 0003 patch is\n> at an acceptable level.\n\nThe last line in the chart I sent in the last email now look like this:\n\n17-sqljson 670262 2.57 2640912 1.34\n\nmeaning the gram.o text size changes by 2.57% as opposed to 2.97%\nbefore your fixes.\n\n> Similar simplification opportunities exist in\n> the 0004 patch, but I haven't worked on that yet. I suggest that you\n> focus on getting 0001..0003 committed around this commit fest and then\n> deal with 0004 in the next one.\n\nOK, I will keep polishing 0001-0003 with the intent to push it next\nweek barring objections / damning findings.\n\nI'll also start looking into further improving 0004.\n\n> (Also split up the 0005 patch into the\n> pieces that apply to 0003 and 0004, respectively.)\n\nDone.\n\n[1] https://www.postgresql.org/message-id/cf675d1b-47d2-04cd-30f7-c13830341347%40postgrespro.ru\n\n--\nThanks, Amit Langote\nEDB: http://www.enterprisedb.com",
"msg_date": "Wed, 22 Nov 2023 15:09:36 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: remaining sql/json patches"
},
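To make the EXISTS PATH point above concrete, here is a self-contained sketch of the kind of query being discussed (illustrative only: JSON_TABLE is part of the still-pending 0004 patch, and the input data is made up). The EXISTS PATH result is coerced to each column's declared type, which is why JSON_EXISTS() itself does not need a RETURNING clause:

SELECT jt.*
FROM (VALUES (jsonb '[{"aaa": 1}, {"bbb": 2}]')) AS vals(js),
     JSON_TABLE(vals.js, 'lax $[*]'
                COLUMNS (exists1 bool EXISTS PATH '$.aaa',
                         exists2 int  EXISTS PATH '$.aaa')) AS jt;
-- expected: (t, 1) for the first array element and (f, 0) for the second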
{
"msg_contents": "Hi,\n\nOn 2023-11-22 15:09:36 +0900, Amit Langote wrote:\n> OK, I will keep polishing 0001-0003 with the intent to push it next\n> week barring objections / damning findings.\n\nI don't think the patchset is quite there yet. It's definitely getting closer\nthough! I'll try to do another review next week.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 21 Nov 2023 23:37:30 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "On Wed, Nov 22, 2023 at 4:37 PM Andres Freund <[email protected]> wrote:\n> On 2023-11-22 15:09:36 +0900, Amit Langote wrote:\n> > OK, I will keep polishing 0001-0003 with the intent to push it next\n> > week barring objections / damning findings.\n>\n> I don't think the patchset is quite there yet. It's definitely getting closer\n> though! I'll try to do another review next week.\n\nThat would be great, thank you. I'll post an update on Friday.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 22 Nov 2023 17:16:46 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "On Fri, Nov 17, 2023 at 6:40 PM Alvaro Herrera <[email protected]> wrote:\n> On 2023-Nov-17, Amit Langote wrote:\n>\n> > On Fri, Nov 17, 2023 at 4:27 PM jian he <[email protected]> wrote:\n>\n> > > some enum declaration, ending element need an extra comma?\n> >\n> > Didn't know about the convention to have that comma, but I can see it\n> > is present in most enum definitions.\n>\n> It's new. See commit 611806cd726f.\n\nI see, thanks.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 22 Nov 2023 17:37:28 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "On Wed, Nov 22, 2023 at 3:09 PM Amit Langote <[email protected]> wrote:\n> The last line in the chart I sent in the last email now look like this:\n>\n> 17-sqljson 670262 2.57 2640912 1.34\n>\n> meaning the gram.o text size changes by 2.57% as opposed to 2.97%\n> before your fixes.\n\nAndrew asked off-list what the percent increase is compared to 17dev\nHEAD. It's 1.27% (was 1.66% with the previous version).\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 22 Nov 2023 22:23:38 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "Hi,\n\nOn 2023-11-21 12:52:35 +0900, Amit Langote wrote:\n> version gram.o text bytes %change gram.c bytes %change\n>\n> 9.6 534010 - 2108984 -\n> 10 582554 9.09 2258313 7.08\n> 11 584596 0.35 2313475 2.44\n> 12 590957 1.08 2341564 1.21\n> 13 590381 -0.09 2357327 0.67\n> 14 600707 1.74 2428841 3.03\n> 15 633180 5.40 2495364 2.73\n> 16 653464 3.20 2575269 3.20\n> 17-sqljson 672800 2.95 2709422 3.97\n>\n> So if we put SQL/JSON (including JSON_TABLE()) into 17, we end up with a gram.o 2.95% larger than v16, which granted is a somewhat larger bump, though also smaller than with some of recent releases.\n\nI think it's ok to increase the size if it's necessary increases - but I also\nthink we've been a bit careless at times, and that that has made the parser\nslower. There's probably also some \"infrastructure\" work we could do combat\nsome of the growth too.\n\nI know I triggered the use of the .c bytes and text size, but it'd probably\nmore sensible to look at the size of the important tables generated by bison.\nI think the most relevant defines are:\n\n#define YYLAST 117115\n#define YYNTOKENS 521\n#define YYNNTS 707\n#define YYNRULES 3300\n#define YYNSTATES 6255\n#define YYMAXUTOK 758\n\n\nI think a lot of the reason we end up with such a big \"state transition\" space\nis that a single addition to e.g. col_name_keyword or unreserved_keyword\nincreases the state space substantially, because it adds new transitions to so\nmany places. We're in quadratic territory, I think. We might be able to do\nsome lexer hackery to avoid that, but not sure.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 22 Nov 2023 11:38:48 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "minor issue.\nmaybe you can add the following after\n/src/test/regress/sql/jsonb_sqljson.sql: 127.\nTest coverage for ExecPrepareJsonItemCoercion function.\n\nSELECT JSON_VALUE(jsonb 'null', '$ts' PASSING date '2018-02-21\n12:34:56 +10' AS ts returning date);\nSELECT JSON_VALUE(jsonb 'null', '$ts' PASSING time '2018-02-21\n12:34:56 +10' AS ts returning time);\nSELECT JSON_VALUE(jsonb 'null', '$ts' PASSING timetz '2018-02-21\n12:34:56 +10' AS ts returning timetz);\nSELECT JSON_VALUE(jsonb 'null', '$ts' PASSING timestamp '2018-02-21\n12:34:56 +10' AS ts returning timestamp);\n\n\n",
"msg_date": "Thu, 23 Nov 2023 14:55:39 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "+/*\n+ * Evaluate or return the step address to evaluate a coercion of a JSON item\n+ * to the target type. The former if the coercion must be done right away by\n+ * calling the target type's input function, and for some types, by calling\n+ * json_populate_type().\n+ *\n+ * Returns the step address to be performed next.\n+ */\n+void\n+ExecEvalJsonCoercionViaPopulateOrIO(ExprState *state, ExprEvalStep *op,\n+ ExprContext *econtext)\n\nthe comment seems not right? it does return anything. it did the evaluation.\n\nsome logic in ExecEvalJsonCoercionViaPopulateOrIO, like if\n(SOFT_ERROR_OCCURRED(escontext_p)) and if\n(!InputFunctionCallSafe){...}, seems validated twice,\nExecEvalJsonCoercionFinish also did it. I uncommented the following\npart, and still passed the test.\n/src/backend/executor/execExprInterp.c\n4452: // if (SOFT_ERROR_OCCURRED(escontext_p))\n4453: // {\n4454: // post_eval->error.value = BoolGetDatum(true);\n4455: // *op->resvalue = (Datum) 0;\n4456: // *op->resnull = true;\n4457: // }\n\n4470: // post_eval->error.value = BoolGetDatum(true);\n4471: // *op->resnull = true;\n4472: // *op->resvalue = (Datum) 0;\n4473: return;\n\nCorrect me if I'm wrong.\nlike in \"empty array on empty empty object on error\", the \"empty\narray\" refers to constant literal '[]' the assumed data type is jsonb,\nthe \"empty object\" refers to const literal '{}', the assumed data type\nis jsonb.\n\n--these two queries will fail very early, before ExecEvalJsonExprPath.\nSELECT JSON_QUERY(jsonb '{\"a\":[3,4]}', '$.a' RETURNING int4range\ndefault '[1.1,2]' on error);\nSELECT JSON_QUERY(jsonb '{\"a\":[3,4]}', '$.a' RETURNING int4range\ndefault '[1.1,2]' on empty);\n\n-----these four will fail later, and will call\nExecEvalJsonCoercionViaPopulateOrIO twice.\nSELECT JSON_QUERY(jsonb '{\"a\":[3,4]}', '$.z' RETURNING int4range empty\nobject on empty empty object on error);\nSELECT JSON_QUERY(jsonb '{\"a\":[3,4]}', '$.z' RETURNING int4range empty\narray on empty empty array on error);\nSELECT JSON_QUERY(jsonb '{\"a\":[3,4]}', '$.z' RETURNING int4range empty\narray on empty empty object on error);\nSELECT JSON_QUERY(jsonb '{\"a\":[3,4]}', '$.z' RETURNING int4range empty\nobject on empty empty array on error);\n\n-----however these four will not fail.\nSELECT JSON_QUERY(jsonb '{\"a\":[3,4]}', '$.z' RETURNING int4range empty\nobject on error);\nSELECT JSON_QUERY(jsonb '{\"a\":[3,4]}', '$.z' RETURNING int4range empty\narray on error);\nSELECT JSON_QUERY(jsonb '{\"a\":[3,4]}', '$.z' RETURNING int4range empty\narray on empty);\nSELECT JSON_QUERY(jsonb '{\"a\":[3,4]}', '$.z' RETURNING int4range empty\nobject on empty);\n\nshould the last four query fail or just return null?\n\n\n",
"msg_date": "Thu, 23 Nov 2023 18:46:51 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
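For contrast with the int4range cases above, a sketch of the case where the implicit '[]' / '{}' fallback coerces cleanly, namely when the RETURNING type is jsonb (the default). Here '$.z' yields an empty result, so the ON EMPTY branch applies and the expected output is an empty array rather than an error:

SELECT JSON_QUERY(jsonb '{"a":[3,4]}', '$.z' RETURNING jsonb
                  EMPTY ARRAY ON EMPTY EMPTY OBJECT ON ERROR);
-- expected result: []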
{
"msg_contents": "hi.\n\n+ /*\n+ * Set information for RETURNING type's input function used by\n+ * ExecEvalJsonExprCoercion().\n+ */\n\"ExecEvalJsonExprCoercion\" comment is wrong?\n\n+ /*\n+ * Step to jump to the EEOP_JSONEXPR_FINISH step skipping over item\n+ * coercion steps that will be added below, if any.\n+ */\n\"EEOP_JSONEXPR_FINISH\" comment is wrong?\n\nseems on error, on empty behavior have some issues. The following are\ntests for json_value.\nselect json_value(jsonb '{\"a\":[123.45,1]}', '$.z' returning text\nerror on error);\nselect json_value(jsonb '{\"a\":[123.45,1]}', '$.z' returning text\nerror on empty); ---imho, this should fail?\nselect json_value(jsonb '{\"a\":[123.45,1]}', '$.z' returning text\nerror on empty error on error);\n\nI did some minor refactoring, please see the attached.\nIn transformJsonFuncExpr, only (jsexpr->result_coercion) is not null\nthen do InitJsonItemCoercions.\nThe ExecInitJsonExpr ending part is for Adjust EEOP_JUMP steps. so I\nmoved \"Set information for RETURNING type\" inside\nif (jexpr->result_coercion || jexpr->omit_quotes).\nthere are two if (jexpr->item_coercions). so I combined them together.",
"msg_date": "Fri, 24 Nov 2023 16:41:45 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "On Thu, Nov 23, 2023 at 4:38 AM Andres Freund <[email protected]> wrote:\n> On 2023-11-21 12:52:35 +0900, Amit Langote wrote:\n> > version gram.o text bytes %change gram.c bytes %change\n> >\n> > 9.6 534010 - 2108984 -\n> > 10 582554 9.09 2258313 7.08\n> > 11 584596 0.35 2313475 2.44\n> > 12 590957 1.08 2341564 1.21\n> > 13 590381 -0.09 2357327 0.67\n> > 14 600707 1.74 2428841 3.03\n> > 15 633180 5.40 2495364 2.73\n> > 16 653464 3.20 2575269 3.20\n> > 17-sqljson 672800 2.95 2709422 3.97\n> >\n> > So if we put SQL/JSON (including JSON_TABLE()) into 17, we end up with a gram.o 2.95% larger than v16, which granted is a somewhat larger bump, though also smaller than with some of recent releases.\n>\n> I think it's ok to increase the size if it's necessary increases - but I also\n> think we've been a bit careless at times, and that that has made the parser\n> slower. There's probably also some \"infrastructure\" work we could do combat\n> some of the growth too.\n>\n> I know I triggered the use of the .c bytes and text size, but it'd probably\n> more sensible to look at the size of the important tables generated by bison.\n> I think the most relevant defines are:\n>\n> #define YYLAST 117115\n> #define YYNTOKENS 521\n> #define YYNNTS 707\n> #define YYNRULES 3300\n> #define YYNSTATES 6255\n> #define YYMAXUTOK 758\n>\n>\n> I think a lot of the reason we end up with such a big \"state transition\" space\n> is that a single addition to e.g. col_name_keyword or unreserved_keyword\n> increases the state space substantially, because it adds new transitions to so\n> many places. We're in quadratic territory, I think. We might be able to do\n> some lexer hackery to avoid that, but not sure.\n\nOne thing I noticed when looking at the raw parsing times across\nversions is that they improved a bit around v12 and then some in v13:\n\n9.0 0.000060 s\n9.6 0.000061 s\n10 0.000061 s\n11 0.000063 s\n12 0.000055 s\n13 0.000054 s\n15 0.000057 s\n16 0.000059 s\n\nI think they might be due to the following commits in v12 and v13 resp.:\n\ncommit c64d0cd5ce24a344798534f1bc5827a9199b7a6e\nAuthor: Tom Lane <[email protected]>\nDate: Wed Jan 9 19:47:38 2019 -0500\n Use perfect hashing, instead of binary search, for keyword lookup.\n ...\n Discussion: https://postgr.es/m/[email protected]\n\ncommit 7f380c59f800f7e0fb49f45a6ff7787256851a59\nAuthor: Tom Lane <[email protected]>\nDate: Mon Jan 13 15:04:31 2020 -0500\n Reduce size of backend scanner's tables.\n ...\n Discussion:\nhttps://postgr.es/m/CACPNZCvaoa3EgVWm5yZhcSTX6RAtaLgniCPcBVOCwm8h3xpWkw@mail.gmail.com\n\nI haven't read the whole discussions there to see if the target(s)\nincluded the metrics you've mentioned though, either directly or\nindirectly.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 24 Nov 2023 18:32:06 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "Some quick grepping gave me this table,\n\n YYLAST YYNTOKENS YYNNTS YYNRULES YYNSTATES YYMAXUTOK\nREL9_1_STABLE 69680 429 546 2218 4179 666\nREL9_2_STABLE 73834 432 546 2261 4301 669\nREL9_3_STABLE 77969 437 558 2322 4471 674\nREL9_4_STABLE 79419 442 576 2369 4591 679\nREL9_5_STABLE 92495 456 612 2490 4946 693\nREL9_6_STABLE 92660 459 618 2515 5006 696\nREL_10_STABLE 99601 472 653 2663 5323 709\nREL_11_STABLE 102007 480 668 2728 5477 717\nREL_12_STABLE 103948 482 667 2724 5488 719\nREL_13_STABLE 104224 492 673 2760 5558 729\nREL_14_STABLE 108111 503 676 3159 5980 740\nREL_15_STABLE 111091 506 688 3206 6090 743\nREL_16_STABLE 115435 519 706 3283 6221 756\nmaster 117115 521 707 3300 6255 758\nmaster+v26 121817 537 738 3415 6470 774\n\nand the attached chart. (v26 is with all patches applied, including the\nJSON_TABLE one whose grammar has not yet been fully tweaked.)\n\nSo, while the jump from v26 is not a trivial one, it seems within\nreasonable bounds. For example, the jump between 13 and 14 looks worse.\n(I do wonder what happened there.)\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"Cada quien es cada cual y baja las escaleras como quiere\" (JMSerrat)",
"msg_date": "Fri, 24 Nov 2023 13:27:56 +0100",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "On Fri, Nov 24, 2023 at 9:28 PM Alvaro Herrera <[email protected]> wrote:\n> Some quick grepping gave me this table,\n>\n> YYLAST YYNTOKENS YYNNTS YYNRULES YYNSTATES YYMAXUTOK\n> REL9_1_STABLE 69680 429 546 2218 4179 666\n> REL9_2_STABLE 73834 432 546 2261 4301 669\n> REL9_3_STABLE 77969 437 558 2322 4471 674\n> REL9_4_STABLE 79419 442 576 2369 4591 679\n> REL9_5_STABLE 92495 456 612 2490 4946 693\n> REL9_6_STABLE 92660 459 618 2515 5006 696\n> REL_10_STABLE 99601 472 653 2663 5323 709\n> REL_11_STABLE 102007 480 668 2728 5477 717\n> REL_12_STABLE 103948 482 667 2724 5488 719\n> REL_13_STABLE 104224 492 673 2760 5558 729\n> REL_14_STABLE 108111 503 676 3159 5980 740\n> REL_15_STABLE 111091 506 688 3206 6090 743\n> REL_16_STABLE 115435 519 706 3283 6221 756\n> master 117115 521 707 3300 6255 758\n> master+v26 121817 537 738 3415 6470 774\n>\n> and the attached chart. (v26 is with all patches applied, including the\n> JSON_TABLE one whose grammar has not yet been fully tweaked.)\n\nThanks for the chart.\n\n> So, while the jump from v26 is not a trivial one, it seems within\n> reasonable bounds.\n\nAgreed.\n\n> For example, the jump between 13 and 14 looks worse.\n> (I do wonder what happened there.)\n\nThe following commit sounds like it might be related?\n\ncommit 06a7c3154f5bfad65549810cc84f0e3a77b408bf\nAuthor: Tom Lane <[email protected]>\nDate: Fri Sep 18 16:46:26 2020 -0400\n\n Allow most keywords to be used as column labels without requiring AS.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 27 Nov 2023 19:09:34 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "On 2023-Nov-27, Amit Langote wrote:\n\n> > For example, the jump between 13 and 14 looks worse.\n> > (I do wonder what happened there.)\n> \n> The following commit sounds like it might be related?\n\nYes, but not only that one. I did some more trolling in the commit log\nfor the 14 timeframe further and found that the following commits are\nthe ones with highest additions to YYLAST during that cycle:\n\n yylast │ yylast_addition │ commit │ subject \n────────┼─────────────────┼────────────┼────────────────────────────────────────────────────────────────────────────────\n 106051 │ 1883 │ 92bf7e2d02 │ Provide the OR REPLACE option for CREATE TRIGGER.\n 105325 │ 1869 │ 06a7c3154f │ Allow most keywords to be used as column labels without requiring AS.\n 104395 │ 1816 │ 45b9805706 │ Allow CURRENT_ROLE where CURRENT_USER is accepted\n 107537 │ 1139 │ a4d75c86bf │ Extended statistics on expressions\n 105410 │ 1067 │ b5913f6120 │ Refactor CLUSTER and REINDEX grammar to use DefElem for option lists\n 106007 │ 965 │ 3696a600e2 │ SEARCH and CYCLE clauses\n 106864 │ 733 │ be45be9c33 │ Implement GROUP BY DISTINCT\n 105886 │ 609 │ 844fe9f159 │ Add the ability for the core grammar to have more than one parse target.\n 108400 │ 571 │ ec48314708 │ Revert per-index collation version tracking feature.\n 108939 │ 539 │ e6241d8e03 │ Rethink definition of pg_attribute.attcompression.\n\nbut we also have these:\n\n 105521 │ -530 │ 926fa801ac │ Remove undocumented IS [NOT] OF syntax.\n 104202 │ -640 │ c4325cefba │ Fold AlterForeignTableStmt into AlterTableStmt\n 104168 │ -718 │ 40c24bfef9 │ Improve our ability to regurgitate SQL-syntax function calls.\n 108111 │ -828 │ e56bce5d43 │ Reconsider the handling of procedure OUT parameters.\n 106398 │ -834 │ 71f4c8c6f7 │ ALTER TABLE ... DETACH PARTITION ... CONCURRENTLY\n 104402 │ -923 │ 2453ea1422 │ Support for OUT parameters in procedures\n 103456 │ -939 │ 1ed6b89563 │ Remove support for postfix (right-unary) operators.\n 104343 │ -1178 │ 873ea9ee69 │ Refactor parsing rules for option lists of EXPLAIN, VACUUM and ANALYZE\n 102784 │ -1417 │ 8f5b596744 │ Refactor AlterExtensionContentsStmt grammar\n(59 filas)\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"How strange it is to find the words \"Perl\" and \"saner\" in such close\nproximity, with no apparent sense of irony. I doubt that Larry himself\ncould have managed it.\" (ncm, http://lwn.net/Articles/174769/)\n\n\n",
"msg_date": "Mon, 27 Nov 2023 11:42:10 +0100",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "\nOn 2023-11-27 Mo 05:42, Alvaro Herrera wrote:\n> On 2023-Nov-27, Amit Langote wrote:\n>\n>>> For example, the jump between 13 and 14 looks worse.\n>>> (I do wonder what happened there.)\n>> The following commit sounds like it might be related?\n> Yes, but not only that one. I did some more trolling in the commit log\n> for the 14 timeframe further and found that the following commits are\n> the ones with highest additions to YYLAST during that cycle:\n>\n> yylast │ yylast_addition │ commit │ subject\n> ────────┼─────────────────┼────────────┼────────────────────────────────────────────────────────────────────────────────\n> 106051 │ 1883 │ 92bf7e2d02 │ Provide the OR REPLACE option for CREATE TRIGGER.\n> 105325 │ 1869 │ 06a7c3154f │ Allow most keywords to be used as column labels without requiring AS.\n> 104395 │ 1816 │ 45b9805706 │ Allow CURRENT_ROLE where CURRENT_USER is accepted\n> 107537 │ 1139 │ a4d75c86bf │ Extended statistics on expressions\n> 105410 │ 1067 │ b5913f6120 │ Refactor CLUSTER and REINDEX grammar to use DefElem for option lists\n> 106007 │ 965 │ 3696a600e2 │ SEARCH and CYCLE clauses\n> 106864 │ 733 │ be45be9c33 │ Implement GROUP BY DISTINCT\n> 105886 │ 609 │ 844fe9f159 │ Add the ability for the core grammar to have more than one parse target.\n> 108400 │ 571 │ ec48314708 │ Revert per-index collation version tracking feature.\n> 108939 │ 539 │ e6241d8e03 │ Rethink definition of pg_attribute.attcompression.\n>\n> but we also have these:\n>\n> 105521 │ -530 │ 926fa801ac │ Remove undocumented IS [NOT] OF syntax.\n> 104202 │ -640 │ c4325cefba │ Fold AlterForeignTableStmt into AlterTableStmt\n> 104168 │ -718 │ 40c24bfef9 │ Improve our ability to regurgitate SQL-syntax function calls.\n> 108111 │ -828 │ e56bce5d43 │ Reconsider the handling of procedure OUT parameters.\n> 106398 │ -834 │ 71f4c8c6f7 │ ALTER TABLE ... DETACH PARTITION ... CONCURRENTLY\n> 104402 │ -923 │ 2453ea1422 │ Support for OUT parameters in procedures\n> 103456 │ -939 │ 1ed6b89563 │ Remove support for postfix (right-unary) operators.\n> 104343 │ -1178 │ 873ea9ee69 │ Refactor parsing rules for option lists of EXPLAIN, VACUUM and ANALYZE\n> 102784 │ -1417 │ 8f5b596744 │ Refactor AlterExtensionContentsStmt grammar\n> (59 filas)\n>\n\nInteresting. But inferring a speed effect from such changes is \ndifficult. I don't have a good idea about measuring parser speed, but a \ntool to do that would be useful. Amit has made a start on such \nmeasurements, but it's only a start. I'd prefer to have evidence rather \nthan speculation.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Mon, 27 Nov 2023 08:56:43 -0500",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "On 2023-Nov-27, Andrew Dunstan wrote:\n\n> Interesting. But inferring a speed effect from such changes is difficult. I\n> don't have a good idea about measuring parser speed, but a tool to do that\n> would be useful. Amit has made a start on such measurements, but it's only a\n> start. I'd prefer to have evidence rather than speculation.\n\nAt this point one thing that IMO we cannot afford to do, is stop feature\nprogress work on the name of parser speed. I mean, parser speed is\nimportant, and we need to be mindful that what we add is reasonable.\nBut at some point we'll probably have to fix that by parsing\ndifferently (a top-down parser, perhaps? Split the parser in smaller\npieces that each deal with subsets of the whole thing?)\n\nPeter told me earlier today that he noticed that the parser changes he\nproposed made the parser source code smaller, they result in larger\nparser tables (in terms of the number of states, I think he said). But\nsource code maintainability is also very important, so my suggestion\nwould be that those changes be absorbed into Amit's commits nonetheless.\n\nThe amount of effort spent on the parsing aspect on this thread seems in\nline with what we should always be doing: keep an eye on it, but not\ndisregard the work just because the parser tables have grown.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"La persona que no quería pecar / estaba obligada a sentarse\n en duras y empinadas sillas / desprovistas, por cierto\n de blandos atenuantes\" (Patricio Vogel)\n\n\n",
"msg_date": "Mon, 27 Nov 2023 15:06:12 +0100",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "Hi,\n\nOn 2023-11-27 15:06:12 +0100, Alvaro Herrera wrote:\n> On 2023-Nov-27, Andrew Dunstan wrote:\n>\n> > Interesting. But inferring a speed effect from such changes is difficult. I\n> > don't have a good idea about measuring parser speed, but a tool to do that\n> > would be useful. Amit has made a start on such measurements, but it's only a\n> > start. I'd prefer to have evidence rather than speculation.\n\nYea, the parser table sizes are influenced by the increase in complexity of\nthe grammar, but it's not a trivial correlation. Bison attempts to compress\nthe state space and it looks like there are some heuristics involved.\n\n\n> At this point one thing that IMO we cannot afford to do, is stop feature\n> progress work on the name of parser speed.\n\nAgreed - I don't think anyone advocated that though.\n\n\n> But at some point we'll probably have to fix that by parsing differently (a\n> top-down parser, perhaps? Split the parser in smaller pieces that each deal\n> with subsets of the whole thing?)\n\nYea. Both perhaps. Being able to have sub-grammars would be quite powerful I\nthink, and we might be able to do it without loosing cross-checking from bison\nthat our grammar is conflict free. Even if the resulting combined state space\nis larger, better locality should more than make up for that.\n\n\n\n> The amount of effort spent on the parsing aspect on this thread seems in\n> line with what we should always be doing: keep an eye on it, but not\n> disregard the work just because the parser tables have grown.\n\nI think we've, in other threads, not paid enough attention to it and just\nadded stuff to the grammar in the first way that didn't produce shift/reduce\nconflicts... Of course a decent part of the problem here is the SQL standard\nthat so seems to like adding one-off forms of grammar (yes,\nfunc_expr_common_subexpr, I'm looking at you)...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 27 Nov 2023 09:50:32 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "On Thu, Nov 23, 2023 at 6:46 PM jian he <[email protected]> wrote:\n>\n> -----however these four will not fail.\n> SELECT JSON_QUERY(jsonb '{\"a\":[3,4]}', '$.z' RETURNING int4range empty\n> object on error);\n> SELECT JSON_QUERY(jsonb '{\"a\":[3,4]}', '$.z' RETURNING int4range empty\n> array on error);\n> SELECT JSON_QUERY(jsonb '{\"a\":[3,4]}', '$.z' RETURNING int4range empty\n> array on empty);\n> SELECT JSON_QUERY(jsonb '{\"a\":[3,4]}', '$.z' RETURNING int4range empty\n> object on empty);\n>\n> should the last four query fail or just return null?\n\nI refactored making the above four queries fail.\nSELECT JSON_QUERY(jsonb '{\"a\":[3,4]}', '$.z' RETURNING int4range empty\nobject on error);\nThe new error is: ERROR: cannot cast DEFAULT expression of type jsonb\nto int4range.\n\nalso make the following query fail, which is as expected, imho.\nselect json_value(jsonb '{\"a\":[123.45,1]}', '$.z' returning text\nerror on empty);",
"msg_date": "Tue, 28 Nov 2023 10:00:56 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "On Mon, Nov 27, 2023 at 8:57 PM Andrew Dunstan <[email protected]> wrote:\n> Interesting. But inferring a speed effect from such changes is\n> difficult. I don't have a good idea about measuring parser speed, but a\n> tool to do that would be useful. Amit has made a start on such\n> measurements, but it's only a start. I'd prefer to have evidence rather\n> than speculation.\n\nTom shared this test a while back, and that's the one I've used in the\npast. The downside for a micro-benchmark like that is that it can\nmonopolize the CPU cache. Cache misses in real world queries are\nlikely much more dominant.\n\nhttps://www.postgresql.org/message-id/[email protected]\n\nAside on the relevance of parser speed: I've seen customers\nsuccessfully lower their monthly cloud bills by moving away from\nprepared statements, allowing smaller-memory instances.\n\n\n",
"msg_date": "Tue, 28 Nov 2023 12:10:18 +0700",
"msg_from": "John Naylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "\nOn 2023-11-28 Tu 00:10, John Naylor wrote:\n> On Mon, Nov 27, 2023 at 8:57 PM Andrew Dunstan <[email protected]> wrote:\n>> Interesting. But inferring a speed effect from such changes is\n>> difficult. I don't have a good idea about measuring parser speed, but a\n>> tool to do that would be useful. Amit has made a start on such\n>> measurements, but it's only a start. I'd prefer to have evidence rather\n>> than speculation.\n> Tom shared this test a while back, and that's the one I've used in the\n> past. The downside for a micro-benchmark like that is that it can\n> monopolize the CPU cache. Cache misses in real world queries are\n> likely much more dominant.\n>\n> https://www.postgresql.org/message-id/[email protected]\n\n\n\nCool, I took this and ran with it a bit. (See attached) Here are \ncomparative timings for 1000 iterations parsing most of the \ninformation_schema.sql, all the way back to 9.3:\n\n\n==== REL9_3_STABLE ====\nTime: 3998.701 ms\n==== REL9_4_STABLE ====\nTime: 3987.596 ms\n==== REL9_5_STABLE ====\nTime: 4129.049 ms\n==== REL9_6_STABLE ====\nTime: 4145.777 ms\n==== REL_10_STABLE ====\nTime: 4140.927 ms (00:04.141)\n==== REL_11_STABLE ====\nTime: 4145.078 ms (00:04.145)\n==== REL_12_STABLE ====\nTime: 3528.625 ms (00:03.529)\n==== REL_13_STABLE ====\nTime: 3356.067 ms (00:03.356)\n==== REL_14_STABLE ====\nTime: 3401.406 ms (00:03.401)\n==== REL_15_STABLE ====\nTime: 3372.491 ms (00:03.372)\n==== REL_16_STABLE ====\nTime: 1654.056 ms (00:01.654)\n==== HEAD ====\nTime: 1614.949 ms (00:01.615)\n\n\nThis is fairly repeatable.\n\nThe first good news is that the parser is pretty fast. Even 4ms to parse \nalmost all the information schema setup is pretty good.\n\nThe second piece of good news is that recent modifications have vastly \nimproved the speed. So even if the changes from the SQL/JSON patches eat \nup a bit of that gain, I think we're in good shape.\n\nIn a few days I'll re-run the test with the SQL/JSON patches applied.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Tue, 28 Nov 2023 15:49:04 -0500",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "\nOn 2023-11-28 Tu 15:49, Andrew Dunstan wrote:\n>\n> On 2023-11-28 Tu 00:10, John Naylor wrote:\n>> On Mon, Nov 27, 2023 at 8:57 PM Andrew Dunstan <[email protected]> \n>> wrote:\n>>> Interesting. But inferring a speed effect from such changes is\n>>> difficult. I don't have a good idea about measuring parser speed, but a\n>>> tool to do that would be useful. Amit has made a start on such\n>>> measurements, but it's only a start. I'd prefer to have evidence rather\n>>> than speculation.\n>> Tom shared this test a while back, and that's the one I've used in the\n>> past. The downside for a micro-benchmark like that is that it can\n>> monopolize the CPU cache. Cache misses in real world queries are\n>> likely much more dominant.\n>>\n>> https://www.postgresql.org/message-id/[email protected]\n>\n>\n>\n> Cool, I took this and ran with it a bit. (See attached) Here are \n> comparative timings for 1000 iterations parsing most of the \n> information_schema.sql, all the way back to 9.3:\n>\n>\n> ==== REL9_3_STABLE ====\n> Time: 3998.701 ms\n> ==== REL9_4_STABLE ====\n> Time: 3987.596 ms\n> ==== REL9_5_STABLE ====\n> Time: 4129.049 ms\n> ==== REL9_6_STABLE ====\n> Time: 4145.777 ms\n> ==== REL_10_STABLE ====\n> Time: 4140.927 ms (00:04.141)\n> ==== REL_11_STABLE ====\n> Time: 4145.078 ms (00:04.145)\n> ==== REL_12_STABLE ====\n> Time: 3528.625 ms (00:03.529)\n> ==== REL_13_STABLE ====\n> Time: 3356.067 ms (00:03.356)\n> ==== REL_14_STABLE ====\n> Time: 3401.406 ms (00:03.401)\n> ==== REL_15_STABLE ====\n> Time: 3372.491 ms (00:03.372)\n> ==== REL_16_STABLE ====\n> Time: 1654.056 ms (00:01.654)\n> ==== HEAD ====\n> Time: 1614.949 ms (00:01.615)\n>\n>\n> This is fairly repeatable.\n>\n> The first good news is that the parser is pretty fast. Even 4ms to \n> parse almost all the information schema setup is pretty good.\n>\n> The second piece of good news is that recent modifications have vastly \n> improved the speed. So even if the changes from the SQL/JSON patches \n> eat up a bit of that gain, I think we're in good shape.\n>\n> In a few days I'll re-run the test with the SQL/JSON patches applied.\n>\n>\n\nTo avoid upsetting the cfbot, I published the code here: \n<https://github.com/adunstan/parser_benchmark>\n\n\ncheers\n\n\nandrew\n\n\n\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Tue, 28 Nov 2023 15:57:45 -0500",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "Hi,\n\nOn 2023-11-28 15:57:45 -0500, Andrew Dunstan wrote:\n> To avoid upsetting the cfbot, I published the code here:\n> <https://github.com/adunstan/parser_benchmark>\n\nNeat. I wonder if we ought to include something like this into core, so that\nwe can more easily evaluate performance effects going forward.\n\nAndres\n\n\n",
"msg_date": "Tue, 28 Nov 2023 16:08:40 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "Andrew Dunstan <[email protected]> writes:\n> Cool, I took this and ran with it a bit. (See attached) Here are \n> comparative timings for 1000 iterations parsing most of the \n> information_schema.sql, all the way back to 9.3:\n> ...\n> ==== REL_15_STABLE ====\n> Time: 3372.491 ms (00:03.372)\n> ==== REL_16_STABLE ====\n> Time: 1654.056 ms (00:01.654)\n> ==== HEAD ====\n> Time: 1614.949 ms (00:01.615)\n> This is fairly repeatable.\n\nThese results astonished me, because I didn't recall us having done\nanything that'd be likely to double the speed of the raw parser.\nSo I set out to replicate them, intending to bisect to find where\nthe change happened. And ... I can't replicate them. What I got\nis essentially level performance from HEAD back to d10b19e22\n(Stamp HEAD as 14devel):\n\nHEAD: 3742.544 ms\nd31d30973a (16 stamp): 3871.441 ms\n596b5af1d (15 stamp): 3759.319 ms\nd10b19e22 (14 stamp): 3730.834 ms\n\nThe run-to-run variation is a couple percent, which means that\nthese differences are down in the noise. This is using your\ntest code from github (but with 5000 iterations not 1000).\nBuilds are pretty vanilla with asserts off, on an M1 MacBook Pro.\nThe bison version might matter here: it's 3.8.2 from MacPorts.\n\nI wondered if you'd tested assert-enabled builds, but there\ndoesn't seem to be much variation with that turned on either.\n\nSo I'm now a bit baffled. Can you provide more color on what\nyour test setup is?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 28 Nov 2023 19:32:07 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "\nOn 2023-11-28 Tu 19:32, Tom Lane wrote:\n> Andrew Dunstan <[email protected]> writes:\n>> Cool, I took this and ran with it a bit. (See attached) Here are\n>> comparative timings for 1000 iterations parsing most of the\n>> information_schema.sql, all the way back to 9.3:\n>> ...\n>> ==== REL_15_STABLE ====\n>> Time: 3372.491 ms (00:03.372)\n>> ==== REL_16_STABLE ====\n>> Time: 1654.056 ms (00:01.654)\n>> ==== HEAD ====\n>> Time: 1614.949 ms (00:01.615)\n>> This is fairly repeatable.\n> These results astonished me, because I didn't recall us having done\n> anything that'd be likely to double the speed of the raw parser.\n> So I set out to replicate them, intending to bisect to find where\n> the change happened. And ... I can't replicate them. What I got\n> is essentially level performance from HEAD back to d10b19e22\n> (Stamp HEAD as 14devel):\n>\n> HEAD: 3742.544 ms\n> d31d30973a (16 stamp): 3871.441 ms\n> 596b5af1d (15 stamp): 3759.319 ms\n> d10b19e22 (14 stamp): 3730.834 ms\n>\n> The run-to-run variation is a couple percent, which means that\n> these differences are down in the noise. This is using your\n> test code from github (but with 5000 iterations not 1000).\n> Builds are pretty vanilla with asserts off, on an M1 MacBook Pro.\n> The bison version might matter here: it's 3.8.2 from MacPorts.\n>\n> I wondered if you'd tested assert-enabled builds, but there\n> doesn't seem to be much variation with that turned on either.\n>\n> So I'm now a bit baffled. Can you provide more color on what\n> your test setup is?\n\n\n*sigh* yes, you're right. I inadvertently used a setup that used meson \nfor building REL16_STABLE and HEAD. When I switch it to autoconf I get \nresults that are similar to the earlier branches:\n\n\n==== REL_16_STABLE ====\nTime: 3401.625 ms (00:03.402)\n==== HEAD ====\nTime: 3419.088 ms (00:03.419)\n\n\nIt's not clear to me why that should be. I didn't have assertions \nenabled anywhere. It's the same version of bison, same compiler \nthroughout. Maybe meson sets a higher level of optimization? It \nshouldn't really matter, ISTM.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Tue, 28 Nov 2023 20:58:41 -0500",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "Hi,\n\nOn 2023-11-28 20:58:41 -0500, Andrew Dunstan wrote:\n> On 2023-11-28 Tu 19:32, Tom Lane wrote:\n> > Andrew Dunstan <[email protected]> writes:\n> > So I'm now a bit baffled. Can you provide more color on what\n> > your test setup is?\n> \n> \n> *sigh* yes, you're right. I inadvertently used a setup that used meson for\n> building REL16_STABLE and HEAD. When I switch it to autoconf I get results\n> that are similar to the earlier branches:\n> \n> \n> ==== REL_16_STABLE ====\n> Time: 3401.625 ms (00:03.402)\n> ==== HEAD ====\n> Time: 3419.088 ms (00:03.419)\n> \n> \n> It's not clear to me why that should be. I didn't have assertions enabled\n> anywhere. It's the same version of bison, same compiler throughout. Maybe\n> meson sets a higher level of optimization? It shouldn't really matter, ISTM.\n\nIs it possible that you have CFLAGS set in your environment? For reasons that\nI find very debatable, configure.ac only adds -O2 when CFLAGS is not set:\n\n# C[XX]FLAGS are selected so:\n# If the user specifies something in the environment, that is used.\n# else: If the template file set something, that is used.\n# else: If coverage was enabled, don't set anything.\n# else: If the compiler is GCC, then we use -O2.\n# else: If the compiler is something else, then we use -O, unless debugging.\n\nif test \"$ac_env_CFLAGS_set\" = set; then\n CFLAGS=$ac_env_CFLAGS_value\nelif test \"${CFLAGS+set}\" = set; then\n : # (keep what template set)\nelif test \"$enable_coverage\" = yes; then\n : # no optimization by default\nelif test \"$GCC\" = yes; then\n CFLAGS=\"-O2\"\nelse\n # if the user selected debug mode, don't use -O\n if test \"$enable_debug\" != yes; then\n CFLAGS=\"-O\"\n fi\nfi\n\nSo if you have CFLAGS set in the environment, we'll not add -O2 to the\ncompilation flags.\n\nI'd check what the actual flags are when building a some .o.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 28 Nov 2023 18:10:37 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "\nOn 2023-11-28 Tu 21:10, Andres Freund wrote:\n> Hi,\n>\n> On 2023-11-28 20:58:41 -0500, Andrew Dunstan wrote:\n>> On 2023-11-28 Tu 19:32, Tom Lane wrote:\n>>> Andrew Dunstan <[email protected]> writes:\n>>> So I'm now a bit baffled. Can you provide more color on what\n>>> your test setup is?\n>>\n>> *sigh* yes, you're right. I inadvertently used a setup that used meson for\n>> building REL16_STABLE and HEAD. When I switch it to autoconf I get results\n>> that are similar to the earlier branches:\n>>\n>>\n>> ==== REL_16_STABLE ====\n>> Time: 3401.625 ms (00:03.402)\n>> ==== HEAD ====\n>> Time: 3419.088 ms (00:03.419)\n>>\n>>\n>> It's not clear to me why that should be. I didn't have assertions enabled\n>> anywhere. It's the same version of bison, same compiler throughout. Maybe\n>> meson sets a higher level of optimization? It shouldn't really matter, ISTM.\n> Is it possible that you have CFLAGS set in your environment? For reasons that\n> I find very debatable, configure.ac only adds -O2 when CFLAGS is not set:\n>\n> # C[XX]FLAGS are selected so:\n> # If the user specifies something in the environment, that is used.\n> # else: If the template file set something, that is used.\n> # else: If coverage was enabled, don't set anything.\n> # else: If the compiler is GCC, then we use -O2.\n> # else: If the compiler is something else, then we use -O, unless debugging.\n>\n> if test \"$ac_env_CFLAGS_set\" = set; then\n> CFLAGS=$ac_env_CFLAGS_value\n> elif test \"${CFLAGS+set}\" = set; then\n> : # (keep what template set)\n> elif test \"$enable_coverage\" = yes; then\n> : # no optimization by default\n> elif test \"$GCC\" = yes; then\n> CFLAGS=\"-O2\"\n> else\n> # if the user selected debug mode, don't use -O\n> if test \"$enable_debug\" != yes; then\n> CFLAGS=\"-O\"\n> fi\n> fi\n>\n> So if you have CFLAGS set in the environment, we'll not add -O2 to the\n> compilation flags.\n>\n> I'd check what the actual flags are when building a some .o.\n>\n\nI do have a CFLAGS setting, but for meson I used '-Ddebug=true' and no \nbuildtype or optimization setting. However, I see that in meson.build \nwe're defaulting to \"buildtype=debugoptimized\" as opposed to the \nstandard meson \"buildtype=debug\", so I guess that accounts for it.\n\nStill getting used to this stuff.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Wed, 29 Nov 2023 07:37:53 -0500",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "\nOn 2023-11-28 Tu 20:58, Andrew Dunstan wrote:\n>\n> On 2023-11-28 Tu 19:32, Tom Lane wrote:\n>> Andrew Dunstan <[email protected]> writes:\n>>> Cool, I took this and ran with it a bit. (See attached) Here are\n>>> comparative timings for 1000 iterations parsing most of the\n>>> information_schema.sql, all the way back to 9.3:\n>>> ...\n>>> ==== REL_15_STABLE ====\n>>> Time: 3372.491 ms (00:03.372)\n>>> ==== REL_16_STABLE ====\n>>> Time: 1654.056 ms (00:01.654)\n>>> ==== HEAD ====\n>>> Time: 1614.949 ms (00:01.615)\n>>> This is fairly repeatable.\n>> These results astonished me, because I didn't recall us having done\n>> anything that'd be likely to double the speed of the raw parser.\n>> So I set out to replicate them, intending to bisect to find where\n>> the change happened. And ... I can't replicate them. What I got\n>> is essentially level performance from HEAD back to d10b19e22\n>> (Stamp HEAD as 14devel):\n>>\n>> HEAD: 3742.544 ms\n>> d31d30973a (16 stamp): 3871.441 ms\n>> 596b5af1d (15 stamp): 3759.319 ms\n>> d10b19e22 (14 stamp): 3730.834 ms\n>>\n>> The run-to-run variation is a couple percent, which means that\n>> these differences are down in the noise. This is using your\n>> test code from github (but with 5000 iterations not 1000).\n>> Builds are pretty vanilla with asserts off, on an M1 MacBook Pro.\n>> The bison version might matter here: it's 3.8.2 from MacPorts.\n>>\n>> I wondered if you'd tested assert-enabled builds, but there\n>> doesn't seem to be much variation with that turned on either.\n>>\n>> So I'm now a bit baffled. Can you provide more color on what\n>> your test setup is?\n>\n>\n> *sigh* yes, you're right. I inadvertently used a setup that used meson \n> for building REL16_STABLE and HEAD. When I switch it to autoconf I get \n> results that are similar to the earlier branches:\n>\n>\n> ==== REL_16_STABLE ====\n> Time: 3401.625 ms (00:03.402)\n> ==== HEAD ====\n> Time: 3419.088 ms (00:03.419)\n>\n>\n> It's not clear to me why that should be. I didn't have assertions \n> enabled anywhere. It's the same version of bison, same compiler \n> throughout. Maybe meson sets a higher level of optimization? It \n> shouldn't really matter, ISTM.\n\n\nOK, with completely vanilla autoconf builds, doing 5000 iterations, here \nare the timings I get, including a test with Amit's latest published \npatches (with a small fixup due to bitrot).\n\nEssentially, with the patches applied it's very slightly slower than \nmaster, about the same as release 16, faster than everything earlier. \nAnd we hope to improve the grammar impact of the JSON_TABLE piece before \nwe're done.\n\n\n\n==== REL_11_STABLE ====\nTime: 10381.814 ms (00:10.382)\n==== REL_12_STABLE ====\nTime: 8151.213 ms (00:08.151)\n==== REL_13_STABLE ====\nTime: 7774.034 ms (00:07.774)\n==== REL_14_STABLE ====\nTime: 7911.005 ms (00:07.911)\n==== REL_15_STABLE ====\nTime: 7868.483 ms (00:07.868)\n==== REL_16_STABLE ====\nTime: 7729.359 ms (00:07.729)\n==== master ====\nTime: 7615.815 ms (00:07.616)\n==== sqljson ====\nTime: 7715.652 ms (00:07.716)\n\n\nBottom line: I don't think grammar slowdown is a reason to be concerned \nabout these patches.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Wed, 29 Nov 2023 12:03:04 -0500",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "Hi,\n\nOn 2023-11-29 07:37:53 -0500, Andrew Dunstan wrote:\n> On 2023-11-28 Tu 21:10, Andres Freund wrote:\n> > Hi,\n> >\n> > On 2023-11-28 20:58:41 -0500, Andrew Dunstan wrote:\n> > > On 2023-11-28 Tu 19:32, Tom Lane wrote:\n> > > > Andrew Dunstan <[email protected]> writes:\n> > > > So I'm now a bit baffled. Can you provide more color on what\n> > > > your test setup is?\n> > >\n> > > *sigh* yes, you're right. I inadvertently used a setup that used meson for\n> > > building REL16_STABLE and HEAD. When I switch it to autoconf I get results\n> > > that are similar to the earlier branches:\n> > >\n> > >\n> > > ==== REL_16_STABLE ====\n> > > Time: 3401.625 ms (00:03.402)\n> > > ==== HEAD ====\n> > > Time: 3419.088 ms (00:03.419)\n> > >\n> > >\n> > > It's not clear to me why that should be. I didn't have assertions enabled\n> > > anywhere. It's the same version of bison, same compiler throughout. Maybe\n> > > meson sets a higher level of optimization? It shouldn't really matter, ISTM.\n> > Is it possible that you have CFLAGS set in your environment? For reasons that\n> > I find very debatable, configure.ac only adds -O2 when CFLAGS is not set:\n> >\n> > # C[XX]FLAGS are selected so:\n> > # If the user specifies something in the environment, that is used.\n> > # else: If the template file set something, that is used.\n> > # else: If coverage was enabled, don't set anything.\n> > # else: If the compiler is GCC, then we use -O2.\n> > # else: If the compiler is something else, then we use -O, unless debugging.\n> >\n> > if test \"$ac_env_CFLAGS_set\" = set; then\n> > CFLAGS=$ac_env_CFLAGS_value\n> > elif test \"${CFLAGS+set}\" = set; then\n> > : # (keep what template set)\n> > elif test \"$enable_coverage\" = yes; then\n> > : # no optimization by default\n> > elif test \"$GCC\" = yes; then\n> > CFLAGS=\"-O2\"\n> > else\n> > # if the user selected debug mode, don't use -O\n> > if test \"$enable_debug\" != yes; then\n> > CFLAGS=\"-O\"\n> > fi\n> > fi\n> >\n> > So if you have CFLAGS set in the environment, we'll not add -O2 to the\n> > compilation flags.\n> >\n> > I'd check what the actual flags are when building a some .o.\n> >\n>\n> I do have a CFLAGS setting, but for meson I used '-Ddebug=true' and no\n> buildtype� or optimization setting. However, I see that in meson.build we're\n> defaulting to \"buildtype=debugoptimized\" as opposed to the standard meson\n> \"buildtype=debug\", so I guess that accounts for it.\n>\n> Still getting used to this stuff.\n\nWhat I meant was whether you set CFLAGS for the *autoconf* build, because that\nwill result in an unoptimized build unless you explicitly add -O2 (or whatnot)\nto the flags. Doing benchmarking without compiler optimizations is pretty\npointless.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 29 Nov 2023 09:42:35 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "\nOn 2023-11-29 We 12:42, Andres Freund wrote:\n> Hi,\n>\n> On 2023-11-29 07:37:53 -0500, Andrew Dunstan wrote:\n>> On 2023-11-28 Tu 21:10, Andres Freund wrote:\n>>> Hi,\n>>>\n>>> On 2023-11-28 20:58:41 -0500, Andrew Dunstan wrote:\n>>>> On 2023-11-28 Tu 19:32, Tom Lane wrote:\n>>>>> Andrew Dunstan <[email protected]> writes:\n>>>>> So I'm now a bit baffled. Can you provide more color on what\n>>>>> your test setup is?\n>>>> *sigh* yes, you're right. I inadvertently used a setup that used meson for\n>>>> building REL16_STABLE and HEAD. When I switch it to autoconf I get results\n>>>> that are similar to the earlier branches:\n>>>>\n>>>>\n>>>> ==== REL_16_STABLE ====\n>>>> Time: 3401.625 ms (00:03.402)\n>>>> ==== HEAD ====\n>>>> Time: 3419.088 ms (00:03.419)\n>>>>\n>>>>\n>>>> It's not clear to me why that should be. I didn't have assertions enabled\n>>>> anywhere. It's the same version of bison, same compiler throughout. Maybe\n>>>> meson sets a higher level of optimization? It shouldn't really matter, ISTM.\n>>> Is it possible that you have CFLAGS set in your environment? For reasons that\n>>> I find very debatable, configure.ac only adds -O2 when CFLAGS is not set:\n>>>\n>>> # C[XX]FLAGS are selected so:\n>>> # If the user specifies something in the environment, that is used.\n>>> # else: If the template file set something, that is used.\n>>> # else: If coverage was enabled, don't set anything.\n>>> # else: If the compiler is GCC, then we use -O2.\n>>> # else: If the compiler is something else, then we use -O, unless debugging.\n>>>\n>>> if test \"$ac_env_CFLAGS_set\" = set; then\n>>> CFLAGS=$ac_env_CFLAGS_value\n>>> elif test \"${CFLAGS+set}\" = set; then\n>>> : # (keep what template set)\n>>> elif test \"$enable_coverage\" = yes; then\n>>> : # no optimization by default\n>>> elif test \"$GCC\" = yes; then\n>>> CFLAGS=\"-O2\"\n>>> else\n>>> # if the user selected debug mode, don't use -O\n>>> if test \"$enable_debug\" != yes; then\n>>> CFLAGS=\"-O\"\n>>> fi\n>>> fi\n>>>\n>>> So if you have CFLAGS set in the environment, we'll not add -O2 to the\n>>> compilation flags.\n>>>\n>>> I'd check what the actual flags are when building a some .o.\n>>>\n>> I do have a CFLAGS setting, but for meson I used '-Ddebug=true' and no\n>> buildtype or optimization setting. However, I see that in meson.build we're\n>> defaulting to \"buildtype=debugoptimized\" as opposed to the standard meson\n>> \"buildtype=debug\", so I guess that accounts for it.\n>>\n>> Still getting used to this stuff.\n> What I meant was whether you set CFLAGS for the *autoconf* build,\n\n\n\nThat's what I meant too.\n\n\n> because that\n> will result in an unoptimized build unless you explicitly add -O2 (or whatnot)\n> to the flags. Doing benchmarking without compiler optimizations is pretty\n> pointless.\n>\n\nRight. My latest reported results should all be at -O2.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Wed, 29 Nov 2023 14:21:59 -0500",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "Hi,\n\nOn 2023-11-29 14:21:59 -0500, Andrew Dunstan wrote:\n> On 2023-11-29 We 12:42, Andres Freund wrote:\n> > > I do have a CFLAGS setting, but for meson I used '-Ddebug=true' and no\n> > > buildtype� or optimization setting. However, I see that in meson.build we're\n> > > defaulting to \"buildtype=debugoptimized\" as opposed to the standard meson\n> > > \"buildtype=debug\", so I guess that accounts for it.\n> > > \n> > > Still getting used to this stuff.\n> > What I meant was whether you set CFLAGS for the *autoconf* build,\n>\n> That's what I meant too.\n> \n> > because that\n> > will result in an unoptimized build unless you explicitly add -O2 (or whatnot)\n> > to the flags. Doing benchmarking without compiler optimizations is pretty\n> > pointless.\n> > \n> \n> Right. My latest reported results should all be at -O2.\n\nWhy are the results suddenly so much slower?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 29 Nov 2023 11:41:42 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "\n\n\n> On Nov 29, 2023, at 2:41 PM, Andres Freund <[email protected]> wrote:\n> \n> Hi,\n> \n>> On 2023-11-29 14:21:59 -0500, Andrew Dunstan wrote:\n>> On 2023-11-29 We 12:42, Andres Freund wrote:\n>>>> I do have a CFLAGS setting, but for meson I used '-Ddebug=true' and no\n>>>> buildtype or optimization setting. However, I see that in meson.build we're\n>>>> defaulting to \"buildtype=debugoptimized\" as opposed to the standard meson\n>>>> \"buildtype=debug\", so I guess that accounts for it.\n>>>> \n>>>> Still getting used to this stuff.\n>>> What I meant was whether you set CFLAGS for the *autoconf* build,\n>> \n>> That's what I meant too.\n>> \n>>> because that\n>>> will result in an unoptimized build unless you explicitly add -O2 (or whatnot)\n>>> to the flags. Doing benchmarking without compiler optimizations is pretty\n>>> pointless.\n>>> \n>> \n>> Right. My latest reported results should all be at -O2.\n> \n> Why are the results suddenly so much slower?\n> \n> \n\n\nAs I mentioned I increased the iteration count to 5000.\n\nCheers \n\nAndrew\n\n",
"msg_date": "Wed, 29 Nov 2023 15:24:01 -0500",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "Hi,\n\nThanks for the reviews. Replying to all emails here.\n\nOn Thu, Nov 23, 2023 at 3:55 PM jian he <[email protected]> wrote:\n> minor issue.\n> maybe you can add the following after\n> /src/test/regress/sql/jsonb_sqljson.sql: 127.\n> Test coverage for ExecPrepareJsonItemCoercion function.\n>\n> SELECT JSON_VALUE(jsonb 'null', '$ts' PASSING date '2018-02-21\n> 12:34:56 +10' AS ts returning date);\n> SELECT JSON_VALUE(jsonb 'null', '$ts' PASSING time '2018-02-21\n> 12:34:56 +10' AS ts returning time);\n> SELECT JSON_VALUE(jsonb 'null', '$ts' PASSING timetz '2018-02-21\n> 12:34:56 +10' AS ts returning timetz);\n> SELECT JSON_VALUE(jsonb 'null', '$ts' PASSING timestamp '2018-02-21\n> 12:34:56 +10' AS ts returning timestamp);\n\nAdded, though I decided to not include the function name in the\ncomment and rather reworded the nearby comment a bit.\n\nOn Thu, Nov 23, 2023 at 7:47 PM jian he <[email protected]> wrote:\n> +/*\n> + * Evaluate or return the step address to evaluate a coercion of a JSON item\n> + * to the target type. The former if the coercion must be done right away by\n> + * calling the target type's input function, and for some types, by calling\n> + * json_populate_type().\n> + *\n> + * Returns the step address to be performed next.\n> + */\n> +void\n> +ExecEvalJsonCoercionViaPopulateOrIO(ExprState *state, ExprEvalStep *op,\n> + ExprContext *econtext)\n>\n> the comment seems not right? it does return anything. it did the evaluation.\n\nFixed the comment. Actually, I've also restored the old name of the\nfunction because of reworking coercion machinery to use a JsonCoercion\nnode only for cases where the coercion is performed using I/O or\njson_populdate_type().\n\n> some logic in ExecEvalJsonCoercionViaPopulateOrIO, like if\n> (SOFT_ERROR_OCCURRED(escontext_p)) and if\n> (!InputFunctionCallSafe){...}, seems validated twice,\n> ExecEvalJsonCoercionFinish also did it. I uncommented the following\n> part, and still passed the test.\n> /src/backend/executor/execExprInterp.c\n> 4452: // if (SOFT_ERROR_OCCURRED(escontext_p))\n> 4453: // {\n> 4454: // post_eval->error.value = BoolGetDatum(true);\n> 4455: // *op->resvalue = (Datum) 0;\n> 4456: // *op->resnull = true;\n> 4457: // }\n>\n> 4470: // post_eval->error.value = BoolGetDatum(true);\n> 4471: // *op->resnull = true;\n> 4472: // *op->resvalue = (Datum) 0;\n> 4473: return;\n\nYes, you're right. 
ExecEvalJsonCoercionFinish()'s check for\nsoft-error suffices.\n\n> Correct me if I'm wrong.\n> like in \"empty array on empty empty object on error\", the \"empty\n> array\" refers to constant literal '[]' the assumed data type is jsonb,\n> the \"empty object\" refers to const literal '{}', the assumed data type\n> is jsonb.\n\nThat's correct.\n\n> --these two queries will fail very early, before ExecEvalJsonExprPath.\n> SELECT JSON_QUERY(jsonb '{\"a\":[3,4]}', '$.a' RETURNING int4range\n> default '[1.1,2]' on error);\n> SELECT JSON_QUERY(jsonb '{\"a\":[3,4]}', '$.a' RETURNING int4range\n> default '[1.1,2]' on empty);\n\nThey fail early because the user-specified DEFAULT [ON ERROR/EMPTY]\nexpression is coerced at parse time.\n\n> -----these four will fail later, and will call\n> ExecEvalJsonCoercionViaPopulateOrIO twice.\n> SELECT JSON_QUERY(jsonb '{\"a\":[3,4]}', '$.z' RETURNING int4range empty\n> object on empty empty object on error);\n> SELECT JSON_QUERY(jsonb '{\"a\":[3,4]}', '$.z' RETURNING int4range empty\n> array on empty empty array on error);\n> SELECT JSON_QUERY(jsonb '{\"a\":[3,4]}', '$.z' RETURNING int4range empty\n> array on empty empty object on error);\n> SELECT JSON_QUERY(jsonb '{\"a\":[3,4]}', '$.z' RETURNING int4range empty\n> object on empty empty array on error);\n\nWith the latest version, you'll now get the following errors:\n\nERROR: cannot cast behavior expression of type jsonb to int4range\nLINE 1: ...RY(jsonb '{\"a\":[3,4]}', '$.z' RETURNING int4range empty obje...\n ^\nERROR: cannot cast behavior expression of type jsonb to int4range\nLINE 1: ...RY(jsonb '{\"a\":[3,4]}', '$.z' RETURNING int4range empty arra...\n ^\nERROR: cannot cast behavior expression of type jsonb to int4range\nLINE 1: ...RY(jsonb '{\"a\":[3,4]}', '$.z' RETURNING int4range empty arra...\n ^\nERROR: cannot cast behavior expression of type jsonb to int4range\nLINE 1: ...RY(jsonb '{\"a\":[3,4]}', '$.z' RETURNING int4range empty obje...\n\n> -----however these four will not fail.\n> SELECT JSON_QUERY(jsonb '{\"a\":[3,4]}', '$.z' RETURNING int4range empty\n> object on error);\n> SELECT JSON_QUERY(jsonb '{\"a\":[3,4]}', '$.z' RETURNING int4range empty\n> array on error);\n> SELECT JSON_QUERY(jsonb '{\"a\":[3,4]}', '$.z' RETURNING int4range empty\n> array on empty);\n> SELECT JSON_QUERY(jsonb '{\"a\":[3,4]}', '$.z' RETURNING int4range empty\n> object on empty);\n> should the last four query fail or just return null?\n\nYou'll get the following with the latest version:\n\nERROR: cannot cast behavior expression of type jsonb to int4range\nLINE 1: ...RY(jsonb '{\"a\":[3,4]}', '$.z' RETURNING int4range empty obje...\n ^\nERROR: cannot cast behavior expression of type jsonb to int4range\nLINE 1: ...RY(jsonb '{\"a\":[3,4]}', '$.z' RETURNING int4range empty arra...\n ^\nERROR: cannot cast behavior expression of type jsonb to int4range\nLINE 1: ...RY(jsonb '{\"a\":[3,4]}', '$.z' RETURNING int4range empty arra...\n ^\nERROR: cannot cast behavior expression of type jsonb to int4range\nLINE 1: ...RY(jsonb '{\"a\":[3,4]}', '$.z' RETURNING int4range empty obje...\n\nOn Fri, Nov 24, 2023 at 5:41 PM jian he <[email protected]> wrote:\n> + /*\n> + * Set information for RETURNING type's input function used by\n> + * ExecEvalJsonExprCoercion().\n> + */\n> \"ExecEvalJsonExprCoercion\" comment is wrong?\n\nComment removed in the latest version.\n\n> + /*\n> + * Step to jump to the EEOP_JSONEXPR_FINISH step skipping over item\n> + * coercion steps that will be added below, if any.\n> + */\n> \"EEOP_JSONEXPR_FINISH\" 
comment is wrong?\n\nNot wrong though the wording is misleading. It's describing what will\nhappen at runtime -- jump after performing result_coercion to skip\nover any steps that might be present between the last of the\nresult_coercion steps and the EEOP_JSONEXPR_FINISH step. You can see\nthe code that follows is adding steps for JSON_VALUE \"item\" coercions,\nwhich will be skipped by performing that jump.\n\n> seems on error, on empty behavior have some issues. The following are\n> tests for json_value.\n> select json_value(jsonb '{\"a\":[123.45,1]}', '$.z' returning text\n> error on error);\n> select json_value(jsonb '{\"a\":[123.45,1]}', '$.z' returning text\n> error on empty); ---imho, this should fail?\n> select json_value(jsonb '{\"a\":[123.45,1]}', '$.z' returning text\n> error on empty error on error);\n\nYes, I agree there are issues. I think these all should give an\nerror. So, the no-match scenario (empty=true) should give an error\nboth when ERROR ON EMPTY is specified and also if only ERROR ON ERROR\nis specified. With the current code, ON ERROR basically overrides ON\nEMPTY clause which seems wrong.\n\nWith the latest patch, you'll get the following:\n\nselect json_value(jsonb '{\"a\":[123.45,1]}', '$.z' returning text\nerror on error);\nERROR: no SQL/JSON item\n\nselect json_value(jsonb '{\"a\":[123.45,1]}', '$.z' returning text\nerror on empty); ---imho, this should fail?\nERROR: no SQL/JSON item\n\nselect json_value(jsonb '{\"a\":[123.45,1]}', '$.z' returning text\nerror on empty error on error);\nERROR: no SQL/JSON item\n\n> I did some minor refactoring, please see the attached.\n> In transformJsonFuncExpr, only (jsexpr->result_coercion) is not null\n> then do InitJsonItemCoercions.\n\nMakes sense.\n\n> The ExecInitJsonExpr ending part is for Adjust EEOP_JUMP steps. so I\n> moved \"Set information for RETURNING type\" inside\n> if (jexpr->result_coercion || jexpr->omit_quotes).\n> there are two if (jexpr->item_coercions). so I combined them together.\n\nThis code has moved to a different place with the latest patch,\nwherein I've redesigned the io/populate-based coercions.\n\nOn Tue, Nov 28, 2023 at 11:01 AM jian he <[email protected]> wrote:\n> On Thu, Nov 23, 2023 at 6:46 PM jian he <[email protected]> wrote:\n> >\n> > -----however these four will not fail.\n> > SELECT JSON_QUERY(jsonb '{\"a\":[3,4]}', '$.z' RETURNING int4range empty\n> > object on error);\n> > SELECT JSON_QUERY(jsonb '{\"a\":[3,4]}', '$.z' RETURNING int4range empty\n> > array on error);\n> > SELECT JSON_QUERY(jsonb '{\"a\":[3,4]}', '$.z' RETURNING int4range empty\n> > array on empty);\n> > SELECT JSON_QUERY(jsonb '{\"a\":[3,4]}', '$.z' RETURNING int4range empty\n> > object on empty);\n> >\n> > should the last four query fail or just return null?\n>\n> I refactored making the above four queries fail.\n> SELECT JSON_QUERY(jsonb '{\"a\":[3,4]}', '$.z' RETURNING int4range empty\n> object on error);\n> The new error is: ERROR: cannot cast DEFAULT expression of type jsonb\n> to int4range.\n>\n> also make the following query fail, which is as expected, imho.\n> select json_value(jsonb '{\"a\":[123.45,1]}', '$.z' returning text\n> error on empty);\n\nAgreed. I've incorporated your suggestions into the latest patch\nthough not using the exact code that you shared.\n\nAttached please find the latest patches. 
Other than the points\nmentioned above, I've made substantial changes to how JsonBehavior and\nJsonCoercion nodes work.\n\nI've attempted to trim down the JSON_TABLE grammar (0004), but this is\nall I've managed so far. Among other things, I couldn't refactor the\ngrammar to do away with the following:\n\n+%nonassoc NESTED\n+%left PATH\n\n--\nThanks, Amit Langote\nEDB: http://www.enterprisedb.com",
"msg_date": "Tue, 5 Dec 2023 21:25:01 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "On 2023-Dec-05, Amit Langote wrote:\n\n> I've attempted to trim down the JSON_TABLE grammar (0004), but this is\n> all I've managed so far. Among other things, I couldn't refactor the\n> grammar to do away with the following:\n> \n> +%nonassoc NESTED\n> +%left PATH\n\nTo recap, the reason we're arguing about this is that this creates two\nnew precedence classes, which are higher than everything else. Judging\nby the discussios in thread [1], this is not acceptable. Without either\nthose new classes or the two hacks I describe below, the grammar has the\nfollowing shift/reduce conflict:\n\nState 6220\n\n 2331 json_table_column_definition: NESTED . path_opt Sconst COLUMNS '(' json_table_column_definition_list ')'\n 2332 | NESTED . path_opt Sconst AS name COLUMNS '(' json_table_column_definition_list ')'\n 2636 unreserved_keyword: NESTED .\n\n PATH shift, and go to state 6286\n\n SCONST reduce using rule 2336 (path_opt)\n PATH [reduce using rule 2636 (unreserved_keyword)]\n $default reduce using rule 2636 (unreserved_keyword)\n\n path_opt go to state 6287\n\n\n\nFirst, while the grammar uses \"NESTED path_opt\" in the relevant productions, I\nnoticed that there's no test that uses NESTED without PATH, so if we break that\ncase, we won't notice. I propose we remove the PATH keyword from one of\nthe tests in jsonb_sqljson.sql in order to make sure the grammar\ncontinues to work after whatever hacking we do:\n\ndiff --git a/src/test/regress/expected/jsonb_sqljson.out b/src/test/regress/expected/jsonb_sqljson.out\nindex 7e8ae6a696..8fd2385cdc 100644\n--- a/src/test/regress/expected/jsonb_sqljson.out\n+++ b/src/test/regress/expected/jsonb_sqljson.out\n@@ -1548,7 +1548,7 @@ HINT: JSON_TABLE column names must be distinct from one another.\n SELECT * FROM JSON_TABLE(\n \tjsonb 'null', '$[*]' AS p0\n \tCOLUMNS (\n-\t\tNESTED PATH '$' AS p1 COLUMNS (\n+\t\tNESTED '$' AS p1 COLUMNS (\n \t\t\tNESTED PATH '$' AS p11 COLUMNS ( foo int ),\n \t\t\tNESTED PATH '$' AS p12 COLUMNS ( bar int )\n \t\t),\ndiff --git a/src/test/regress/sql/jsonb_sqljson.sql b/src/test/regress/sql/jsonb_sqljson.sql\nindex ea5db88b40..ea9b4ff8b6 100644\n--- a/src/test/regress/sql/jsonb_sqljson.sql\n+++ b/src/test/regress/sql/jsonb_sqljson.sql\n@@ -617,7 +617,7 @@ SELECT * FROM JSON_TABLE(\n SELECT * FROM JSON_TABLE(\n \tjsonb 'null', '$[*]' AS p0\n \tCOLUMNS (\n-\t\tNESTED PATH '$' AS p1 COLUMNS (\n+\t\tNESTED '$' AS p1 COLUMNS (\n \t\t\tNESTED PATH '$' AS p11 COLUMNS ( foo int ),\n \t\t\tNESTED PATH '$' AS p12 COLUMNS ( bar int )\n \t\t),\n\n\nHaving done that, AFAICS there are two possible fixes for the grammar.\nOne is to keep the idea of assigning precedence explicitly to these\nkeywords, but do something less hackish -- we can put NESTED together\nwith UNBOUNDED, and classify PATH in the IDENT group. This requires no\nfurther changes. This would make NESTED PATH follow the same rationale\nas UNBOUNDED FOLLOWING / UNBOUNDED PRECEDING. 
Here's is a preliminary\npatch for that (the large comment above needs to be updated.)\n\ndiff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y\nindex c15fcf2eb2..1493ac7580 100644\n--- a/src/backend/parser/gram.y\n+++ b/src/backend/parser/gram.y\n@@ -887,9 +887,9 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);\n * json_predicate_type_constraint and json_key_uniqueness_constraint_opt\n * productions (see comments there).\n */\n-%nonassoc\tUNBOUNDED\t\t/* ideally would have same precedence as IDENT */\n+%nonassoc\tUNBOUNDED NESTED\t\t/* ideally would have same precedence as IDENT */\n %nonassoc\tIDENT PARTITION RANGE ROWS GROUPS PRECEDING FOLLOWING CUBE ROLLUP\n-\t\t\tSET KEYS OBJECT_P SCALAR VALUE_P WITH WITHOUT\n+\t\t\tSET KEYS OBJECT_P SCALAR VALUE_P WITH WITHOUT PATH\n %left\t\tOp OPERATOR\t\t/* multi-character ops and user-defined operators */\n %left\t\t'+' '-'\n %left\t\t'*' '/' '%'\n@@ -911,8 +911,6 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);\n */\n %left\t\tJOIN CROSS LEFT FULL RIGHT INNER_P NATURAL\n \n-%nonassoc\tNESTED\n-%left\t\tPATH\n %%\n \n /*\n\n\nThe other thing we can do is use the two-token lookahead trick, by\ndeclaring\n%token NESTED_LA\nand using the parser.c code to replace NESTED with NESTED_LA when it is\nfollowed by PATH. This doesn't require assigning precedence to\nanything. We do need to expand the two rules that have \"NESTED\nopt_path Sconst\" to each be two rules, one for \"NESTED_LA PATH Sconst\"\nand another for \"NESTED Sconst\". So the opt_path production goes away.\nThis preliminary patch does that. (I did not touch the ecpg grammar, but\nit needs an update too.)\n\ndiff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y\nindex c15fcf2eb2..8e4c1d4ebe 100644\n--- a/src/backend/parser/gram.y\n+++ b/src/backend/parser/gram.y\n@@ -817,7 +817,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);\n * FORMAT_LA, NULLS_LA, WITH_LA, and WITHOUT_LA are needed to make the grammar\n * LALR(1).\n */\n-%token\t\tFORMAT_LA NOT_LA NULLS_LA WITH_LA WITHOUT_LA\n+%token\t\tFORMAT_LA NESTED_LA NOT_LA NULLS_LA WITH_LA WITHOUT_LA\n \n /*\n * The grammar likewise thinks these tokens are keywords, but they are never\n@@ -911,8 +911,6 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);\n */\n %left\t\tJOIN CROSS LEFT FULL RIGHT INNER_P NATURAL\n \n-%nonassoc\tNESTED\n-%left\t\tPATH\n %%\n \n /*\n@@ -16771,7 +16769,7 @@ json_table_column_definition:\n \t\t\t\t\tn->location = @1;\n \t\t\t\t\t$$ = (Node *) n;\n \t\t\t\t}\n-\t\t\t| NESTED path_opt Sconst\n+\t\t\t| NESTED_LA PATH Sconst\n \t\t\t\tCOLUMNS '('\tjson_table_column_definition_list ')'\n \t\t\t\t{\n \t\t\t\t\tJsonTableColumn *n = makeNode(JsonTableColumn);\n@@ -16783,7 +16781,19 @@ json_table_column_definition:\n \t\t\t\t\tn->location = @1;\n \t\t\t\t\t$$ = (Node *) n;\n \t\t\t\t}\n-\t\t\t| NESTED path_opt Sconst AS name\n+\t\t\t| NESTED Sconst\n+\t\t\t\tCOLUMNS '('\tjson_table_column_definition_list ')'\n+\t\t\t\t{\n+\t\t\t\t\tJsonTableColumn *n = makeNode(JsonTableColumn);\n+\n+\t\t\t\t\tn->coltype = JTC_NESTED;\n+\t\t\t\t\tn->pathspec = $2;\n+\t\t\t\t\tn->pathname = NULL;\n+\t\t\t\t\tn->columns = $5;\n+\t\t\t\t\tn->location = @1;\n+\t\t\t\t\t$$ = (Node *) n;\n+\t\t\t\t}\n+\t\t\t| NESTED_LA PATH Sconst AS name\n \t\t\t\tCOLUMNS '('\tjson_table_column_definition_list ')'\n \t\t\t\t{\n \t\t\t\t\tJsonTableColumn *n = makeNode(JsonTableColumn);\n@@ -16795,6 
+16805,19 @@ json_table_column_definition:\n \t\t\t\t\tn->location = @1;\n \t\t\t\t\t$$ = (Node *) n;\n \t\t\t\t}\n+\t\t\t| NESTED Sconst AS name\n+\t\t\t\tCOLUMNS '('\tjson_table_column_definition_list ')'\n+\t\t\t\t{\n+\t\t\t\t\tJsonTableColumn *n = makeNode(JsonTableColumn);\n+\n+\t\t\t\t\tn->coltype = JTC_NESTED;\n+\t\t\t\t\tn->pathspec = $2;\n+\t\t\t\t\tn->pathname = $4;\n+\t\t\t\t\tn->columns = $7;\n+\t\t\t\t\tn->location = @1;\n+\t\t\t\t\t$$ = (Node *) n;\n+\t\t\t\t}\n+\n \t\t;\n \n json_table_column_path_specification_clause_opt:\n@@ -16802,11 +16825,6 @@ json_table_column_path_specification_clause_opt:\n \t\t\t| /* EMPTY */\t\t\t\t\t\t\t{ $$ = NULL; }\n \t\t;\n \n-path_opt:\n-\t\t\tPATH\t\t\t\t\t\t\t\t{ }\n-\t\t\t| /* EMPTY */\t\t\t\t\t\t{ }\n-\t\t;\n-\n json_table_plan_clause_opt:\n \t\t\tPLAN '(' json_table_plan ')'\t\t\t{ $$ = $3; }\n \t\t\t| PLAN DEFAULT '(' json_table_default_plan_choices ')'\ndiff --git a/src/backend/parser/parser.c b/src/backend/parser/parser.c\nindex e17c310cc1..e3092f2c3e 100644\n--- a/src/backend/parser/parser.c\n+++ b/src/backend/parser/parser.c\n@@ -138,6 +138,7 @@ base_yylex(YYSTYPE *lvalp, YYLTYPE *llocp, core_yyscan_t yyscanner)\n \tswitch (cur_token)\n \t{\n \t\tcase FORMAT:\n+\t\tcase NESTED:\n \t\t\tcur_token_length = 6;\n \t\t\tbreak;\n \t\tcase NOT:\n@@ -204,6 +205,16 @@ base_yylex(YYSTYPE *lvalp, YYLTYPE *llocp, core_yyscan_t yyscanner)\n \t\t\t}\n \t\t\tbreak;\n \n+\t\tcase NESTED:\n+\t\t\t/* Replace NESTED by NESTED_LA if it's followed by PATH */\n+\t\t\tswitch (next_token)\n+\t\t\t{\n+\t\t\t\tcase PATH:\n+\t\t\t\t\tcur_token = NESTED_LA;\n+\t\t\t\t\tbreak;\n+\t\t\t}\n+\t\t\tbreak;\n+\n \t\tcase NOT:\n \t\t\t/* Replace NOT by NOT_LA if it's followed by BETWEEN, IN, etc */\n \t\t\tswitch (next_token)\n\n\nI don't know which of the two \"fixes\" is less bad. Like Amit, I was not\nable to find a solution to the problem by merely attaching precedences\nto rules. (I did not try to mess with the precedence of\nunreserved_keyword, because I'm pretty sure that would not be a good\nsolution even if I could make it work.)\n\n[1] https://postgr.es/m/CADT4RqBPdbsZW7HS1jJP319TMRHs1hzUiP=iRJYR6UqgHCrgNQ@mail.gmail.com\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Wed, 6 Dec 2023 11:42:29 +0100",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "Thanks Alvaro.\n\nOn Wed, Dec 6, 2023 at 7:43 PM Alvaro Herrera <[email protected]> wrote:\n> On 2023-Dec-05, Amit Langote wrote:\n>\n> > I've attempted to trim down the JSON_TABLE grammar (0004), but this is\n> > all I've managed so far. Among other things, I couldn't refactor the\n> > grammar to do away with the following:\n> >\n> > +%nonassoc NESTED\n> > +%left PATH\n>\n> To recap, the reason we're arguing about this is that this creates two\n> new precedence classes, which are higher than everything else. Judging\n> by the discussios in thread [1], this is not acceptable. Without either\n> those new classes or the two hacks I describe below, the grammar has the\n> following shift/reduce conflict:\n>\n> State 6220\n>\n> 2331 json_table_column_definition: NESTED . path_opt Sconst COLUMNS '(' json_table_column_definition_list ')'\n> 2332 | NESTED . path_opt Sconst AS name COLUMNS '(' json_table_column_definition_list ')'\n> 2636 unreserved_keyword: NESTED .\n>\n> PATH shift, and go to state 6286\n>\n> SCONST reduce using rule 2336 (path_opt)\n> PATH [reduce using rule 2636 (unreserved_keyword)]\n> $default reduce using rule 2636 (unreserved_keyword)\n>\n> path_opt go to state 6287\n>\n>\n>\n> First, while the grammar uses \"NESTED path_opt\" in the relevant productions, I\n> noticed that there's no test that uses NESTED without PATH, so if we break that\n> case, we won't notice. I propose we remove the PATH keyword from one of\n> the tests in jsonb_sqljson.sql in order to make sure the grammar\n> continues to work after whatever hacking we do:\n>\n> diff --git a/src/test/regress/expected/jsonb_sqljson.out b/src/test/regress/expected/jsonb_sqljson.out\n> index 7e8ae6a696..8fd2385cdc 100644\n> --- a/src/test/regress/expected/jsonb_sqljson.out\n> +++ b/src/test/regress/expected/jsonb_sqljson.out\n> @@ -1548,7 +1548,7 @@ HINT: JSON_TABLE column names must be distinct from one another.\n> SELECT * FROM JSON_TABLE(\n> jsonb 'null', '$[*]' AS p0\n> COLUMNS (\n> - NESTED PATH '$' AS p1 COLUMNS (\n> + NESTED '$' AS p1 COLUMNS (\n> NESTED PATH '$' AS p11 COLUMNS ( foo int ),\n> NESTED PATH '$' AS p12 COLUMNS ( bar int )\n> ),\n> diff --git a/src/test/regress/sql/jsonb_sqljson.sql b/src/test/regress/sql/jsonb_sqljson.sql\n> index ea5db88b40..ea9b4ff8b6 100644\n> --- a/src/test/regress/sql/jsonb_sqljson.sql\n> +++ b/src/test/regress/sql/jsonb_sqljson.sql\n> @@ -617,7 +617,7 @@ SELECT * FROM JSON_TABLE(\n> SELECT * FROM JSON_TABLE(\n> jsonb 'null', '$[*]' AS p0\n> COLUMNS (\n> - NESTED PATH '$' AS p1 COLUMNS (\n> + NESTED '$' AS p1 COLUMNS (\n> NESTED PATH '$' AS p11 COLUMNS ( foo int ),\n> NESTED PATH '$' AS p12 COLUMNS ( bar int )\n> ),\n\nFixed the test case like that in the attached.\n\n> Having done that, AFAICS there are two possible fixes for the grammar.\n> One is to keep the idea of assigning precedence explicitly to these\n> keywords, but do something less hackish -- we can put NESTED together\n> with UNBOUNDED, and classify PATH in the IDENT group. This requires no\n> further changes. This would make NESTED PATH follow the same rationale\n> as UNBOUNDED FOLLOWING / UNBOUNDED PRECEDING. 
Here's is a preliminary\n> patch for that (the large comment above needs to be updated.)\n>\n> diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y\n> index c15fcf2eb2..1493ac7580 100644\n> --- a/src/backend/parser/gram.y\n> +++ b/src/backend/parser/gram.y\n> @@ -887,9 +887,9 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);\n> * json_predicate_type_constraint and json_key_uniqueness_constraint_opt\n> * productions (see comments there).\n> */\n> -%nonassoc UNBOUNDED /* ideally would have same precedence as IDENT */\n> +%nonassoc UNBOUNDED NESTED /* ideally would have same precedence as IDENT */\n> %nonassoc IDENT PARTITION RANGE ROWS GROUPS PRECEDING FOLLOWING CUBE ROLLUP\n> - SET KEYS OBJECT_P SCALAR VALUE_P WITH WITHOUT\n> + SET KEYS OBJECT_P SCALAR VALUE_P WITH WITHOUT PATH\n> %left Op OPERATOR /* multi-character ops and user-defined operators */\n> %left '+' '-'\n> %left '*' '/' '%'\n> @@ -911,8 +911,6 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);\n> */\n> %left JOIN CROSS LEFT FULL RIGHT INNER_P NATURAL\n>\n> -%nonassoc NESTED\n> -%left PATH\n> %%\n>\n> /*\n>\n>\n> The other thing we can do is use the two-token lookahead trick, by\n> declaring\n> %token NESTED_LA\n> and using the parser.c code to replace NESTED with NESTED_LA when it is\n> followed by PATH. This doesn't require assigning precedence to\n> anything. We do need to expand the two rules that have \"NESTED\n> opt_path Sconst\" to each be two rules, one for \"NESTED_LA PATH Sconst\"\n> and another for \"NESTED Sconst\". So the opt_path production goes away.\n> This preliminary patch does that. (I did not touch the ecpg grammar, but\n> it needs an update too.)\n>\n> diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y\n> index c15fcf2eb2..8e4c1d4ebe 100644\n> --- a/src/backend/parser/gram.y\n> +++ b/src/backend/parser/gram.y\n> @@ -817,7 +817,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);\n> * FORMAT_LA, NULLS_LA, WITH_LA, and WITHOUT_LA are needed to make the grammar\n> * LALR(1).\n> */\n> -%token FORMAT_LA NOT_LA NULLS_LA WITH_LA WITHOUT_LA\n> +%token FORMAT_LA NESTED_LA NOT_LA NULLS_LA WITH_LA WITHOUT_LA\n>\n> /*\n> * The grammar likewise thinks these tokens are keywords, but they are never\n> @@ -911,8 +911,6 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);\n> */\n> %left JOIN CROSS LEFT FULL RIGHT INNER_P NATURAL\n>\n> -%nonassoc NESTED\n> -%left PATH\n> %%\n>\n> /*\n> @@ -16771,7 +16769,7 @@ json_table_column_definition:\n> n->location = @1;\n> $$ = (Node *) n;\n> }\n> - | NESTED path_opt Sconst\n> + | NESTED_LA PATH Sconst\n> COLUMNS '(' json_table_column_definition_list ')'\n> {\n> JsonTableColumn *n = makeNode(JsonTableColumn);\n> @@ -16783,7 +16781,19 @@ json_table_column_definition:\n> n->location = @1;\n> $$ = (Node *) n;\n> }\n> - | NESTED path_opt Sconst AS name\n> + | NESTED Sconst\n> + COLUMNS '(' json_table_column_definition_list ')'\n> + {\n> + JsonTableColumn *n = makeNode(JsonTableColumn);\n> +\n> + n->coltype = JTC_NESTED;\n> + n->pathspec = $2;\n> + n->pathname = NULL;\n> + n->columns = $5;\n> + n->location = @1;\n> + $$ = (Node *) n;\n> + }\n> + | NESTED_LA PATH Sconst AS name\n> COLUMNS '(' json_table_column_definition_list ')'\n> {\n> JsonTableColumn *n = makeNode(JsonTableColumn);\n> @@ -16795,6 +16805,19 @@ json_table_column_definition:\n> n->location = @1;\n> $$ = (Node *) n;\n> }\n> + | NESTED Sconst AS name\n> + COLUMNS '(' 
json_table_column_definition_list ')'\n> + {\n> + JsonTableColumn *n = makeNode(JsonTableColumn);\n> +\n> + n->coltype = JTC_NESTED;\n> + n->pathspec = $2;\n> + n->pathname = $4;\n> + n->columns = $7;\n> + n->location = @1;\n> + $$ = (Node *) n;\n> + }\n> +\n> ;\n>\n> json_table_column_path_specification_clause_opt:\n> @@ -16802,11 +16825,6 @@ json_table_column_path_specification_clause_opt:\n> | /* EMPTY */ { $$ = NULL; }\n> ;\n>\n> -path_opt:\n> - PATH { }\n> - | /* EMPTY */ { }\n> - ;\n> -\n> json_table_plan_clause_opt:\n> PLAN '(' json_table_plan ')' { $$ = $3; }\n> | PLAN DEFAULT '(' json_table_default_plan_choices ')'\n> diff --git a/src/backend/parser/parser.c b/src/backend/parser/parser.c\n> index e17c310cc1..e3092f2c3e 100644\n> --- a/src/backend/parser/parser.c\n> +++ b/src/backend/parser/parser.c\n> @@ -138,6 +138,7 @@ base_yylex(YYSTYPE *lvalp, YYLTYPE *llocp, core_yyscan_t yyscanner)\n> switch (cur_token)\n> {\n> case FORMAT:\n> + case NESTED:\n> cur_token_length = 6;\n> break;\n> case NOT:\n> @@ -204,6 +205,16 @@ base_yylex(YYSTYPE *lvalp, YYLTYPE *llocp, core_yyscan_t yyscanner)\n> }\n> break;\n>\n> + case NESTED:\n> + /* Replace NESTED by NESTED_LA if it's followed by PATH */\n> + switch (next_token)\n> + {\n> + case PATH:\n> + cur_token = NESTED_LA;\n> + break;\n> + }\n> + break;\n> +\n> case NOT:\n> /* Replace NOT by NOT_LA if it's followed by BETWEEN, IN, etc */\n> switch (next_token)\n>\n>\n> I don't know which of the two \"fixes\" is less bad.\n\nI think I'm inclined toward adapting the LA-token fix (attached 0005),\nbecause we've done that before with SQL/JSON constructors patch.\nAlso, if I understand the concerns that Tom mentioned at [1]\ncorrectly, maybe we'd be better off not assigning precedence to\nsymbols as much as possible, so there's that too against the approach\n#1.\n\nAlso I've attached 0006 to add news tests under ECPG for the SQL/JSON\nquery functions, which I haven't done so far but realized after you\nmentioned ECPG. It also includes the ECPG variant of the LA-token\nfix. I'll eventually merge it into 0003 and 0004 after expanding the\ntest cases some more. I do wonder what kinds of tests we normally add\nto ECPG suite but not others?\n\nFinally, I also fixed a couple of silly mistakes in 0003 around\ntransformJsonBehavior() and some further assorted tightening in the ON\nERROR / EMPTY expression coercion handling code.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com",
"msg_date": "Wed, 6 Dec 2023 23:02:21 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "On 2023-Dec-06, Amit Langote wrote:\n\n> I think I'm inclined toward adapting the LA-token fix (attached 0005),\n> because we've done that before with SQL/JSON constructors patch.\n> Also, if I understand the concerns that Tom mentioned at [1]\n> correctly, maybe we'd be better off not assigning precedence to\n> symbols as much as possible, so there's that too against the approach\n> #1.\n\nSounds ok to me, but I'm happy for this decision to be overridden by\nothers with more experience in parser code.\n\n> Also I've attached 0006 to add news tests under ECPG for the SQL/JSON\n> query functions, which I haven't done so far but realized after you\n> mentioned ECPG. It also includes the ECPG variant of the LA-token\n> fix. I'll eventually merge it into 0003 and 0004 after expanding the\n> test cases some more. I do wonder what kinds of tests we normally add\n> to ECPG suite but not others?\n\nWell, I only added tests to the ecpg suite in the previous round of\nSQL/JSON deeds because its grammar was being modified, so it seemed\npossible that it'd break. Because you're also going to modify its\nparser.c, it seems reasonable to expect tests to be added. I wouldn't\nexpect to have to do this for other patches, because it should behave\nlike straight SQL usage.\n\n\nLooking at 0002 I noticed that populate_array_assign_ndims() is called\nin some places and its return value is not checked, so we'd ultimately\nreturn JSON_SUCCESS even though there's actually a soft error stored\nsomewhere. I don't know if it's possible to hit this in practice, but\nit seems odd.\n\nLooking at get_json_object_as_hash(), I think its comment is not\nexplicit enough about its behavior when an error is stored in escontext,\nso its hard to judge whether its caller is doing the right thing (I\nthink it is). OTOH, populate_record seems to have the same issue, but\ncallers of that definitely seem to be doing the wrong thing -- namely,\nnot checking whether an error was saved; particularly populate_composite\nseems to rely on the returned tuple, even though an error might have\nbeen reported.\n\n(I didn't look at the subsequent patches in the series to see if these\nthings were fixed later.)\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Wed, 6 Dec 2023 16:26:21 +0100",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "On Wed, Dec 6, 2023 at 10:02 PM Amit Langote <[email protected]> wrote:\n>\n> Finally, I also fixed a couple of silly mistakes in 0003 around\n> transformJsonBehavior() and some further assorted tightening in the ON\n> ERROR / EMPTY expression coercion handling code.\n>\n\n\ntypo:\n+ * If a soft-error occurs, it will be checked by EEOP_JSONEXPR_COECION_FINISH\n\njson_exists no RETURNING clause.\nso the following part in src/backend/parser/parse_expr.c can be removed?\n\n+ else if (jsexpr->returning->typid != BOOLOID)\n+ {\n+ Node *coercion_expr;\n+ CaseTestExpr *placeholder = makeNode(CaseTestExpr);\n+ int location = exprLocation((Node *) jsexpr);\n+\n+ /*\n+ * We abuse CaseTestExpr here as placeholder to pass the\n+ * result of evaluating JSON_EXISTS to the coercion\n+ * expression.\n+ */\n+ placeholder->typeId = BOOLOID;\n+ placeholder->typeMod = -1;\n+ placeholder->collation = InvalidOid;\n+\n+ coercion_expr =\n+ coerce_to_target_type(pstate, (Node *) placeholder, BOOLOID,\n+ jsexpr->returning->typid,\n+ jsexpr->returning->typmod,\n+ COERCION_EXPLICIT,\n+ COERCE_IMPLICIT_CAST,\n+ location);\n+\n+ if (coercion_expr == NULL)\n+ ereport(ERROR,\n+ (errcode(ERRCODE_CANNOT_COERCE),\n+ errmsg(\"cannot cast type %s to %s\",\n+ format_type_be(BOOLOID),\n+ format_type_be(jsexpr->returning->typid)),\n+ parser_coercion_errposition(pstate, location, (Node *) jsexpr)));\n+\n+ if (coercion_expr != (Node *) placeholder)\n+ jsexpr->result_coercion = coercion_expr;\n+ }\n\nSimilarly, since JSON_EXISTS has no RETURNING clause, the following\nalso needs to be refactored?\n\n+ /*\n+ * Disallow FORMAT specification in the RETURNING clause of JSON_EXISTS()\n+ * and JSON_VALUE().\n+ */\n+ if (func->output &&\n+ (func->op == JSON_VALUE_OP || func->op == JSON_EXISTS_OP))\n+ {\n+ JsonFormat *format = func->output->returning->format;\n+\n+ if (format->format_type != JS_FORMAT_DEFAULT ||\n+ format->encoding != JS_ENC_DEFAULT)\n+ ereport(ERROR,\n+ (errcode(ERRCODE_SYNTAX_ERROR),\n+ errmsg(\"cannot specify FORMAT in RETURNING clause of %s\",\n+ func->op == JSON_VALUE_OP ? \"JSON_VALUE()\" :\n+ \"JSON_EXISTS()\"),\n+ parser_errposition(pstate, format->location)));\n\n\n",
"msg_date": "Thu, 7 Dec 2023 11:10:59 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "Here are a couple of small patches to tidy up the parser a bit in your \nv28-0004 (JSON_TABLE) patch. It's not a lot; the rest looks okay to me. \n (I don't have an opinion on the concurrent discussion on resolving \nsome precedence issues.)",
"msg_date": "Thu, 7 Dec 2023 09:25:41 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "On Thu, Dec 7, 2023 at 12:26 AM Alvaro Herrera <[email protected]> wrote:\n> On 2023-Dec-06, Amit Langote wrote:\n> > I think I'm inclined toward adapting the LA-token fix (attached 0005),\n> > because we've done that before with SQL/JSON constructors patch.\n> > Also, if I understand the concerns that Tom mentioned at [1]\n> > correctly, maybe we'd be better off not assigning precedence to\n> > symbols as much as possible, so there's that too against the approach\n> > #1.\n>\n> Sounds ok to me, but I'm happy for this decision to be overridden by\n> others with more experience in parser code.\n\nOK, I'll wait to hear from others.\n\n> > Also I've attached 0006 to add news tests under ECPG for the SQL/JSON\n> > query functions, which I haven't done so far but realized after you\n> > mentioned ECPG. It also includes the ECPG variant of the LA-token\n> > fix. I'll eventually merge it into 0003 and 0004 after expanding the\n> > test cases some more. I do wonder what kinds of tests we normally add\n> > to ECPG suite but not others?\n>\n> Well, I only added tests to the ecpg suite in the previous round of\n> SQL/JSON deeds because its grammar was being modified, so it seemed\n> possible that it'd break. Because you're also going to modify its\n> parser.c, it seems reasonable to expect tests to be added. I wouldn't\n> expect to have to do this for other patches, because it should behave\n> like straight SQL usage.\n\nAh, ok, so ecpg tests are only needed in the JSON_TABLE patch.\n\n> Looking at 0002 I noticed that populate_array_assign_ndims() is called\n> in some places and its return value is not checked, so we'd ultimately\n> return JSON_SUCCESS even though there's actually a soft error stored\n> somewhere. I don't know if it's possible to hit this in practice, but\n> it seems odd.\n\nIndeed, fixed. I think I missed the callbacks in JsonSemAction\nbecause I only looked at functions directly reachable from\njson_populate_record() or something.\n\n> Looking at get_json_object_as_hash(), I think its comment is not\n> explicit enough about its behavior when an error is stored in escontext,\n> so its hard to judge whether its caller is doing the right thing (I\n> think it is).\n\nI've modified get_json_object_as_hash() to return NULL if\npg_parse_json_or_errsave() returns false because of an error. Maybe\nthat's an overkill but that's at least a bit clearer than a hash table\nof indeterminate state. Added a comment too.\n\n> OTOH, populate_record seems to have the same issue, but\n> callers of that definitely seem to be doing the wrong thing -- namely,\n> not checking whether an error was saved; particularly populate_composite\n> seems to rely on the returned tuple, even though an error might have\n> been reported.\n\nRight, populate_composite() should return NULL after checking escontext. 
Fixed.\n\nOn Thu, Dec 7, 2023 at 12:11 PM jian he <[email protected]> wrote:\n> typo:\n> + * If a soft-error occurs, it will be checked by EEOP_JSONEXPR_COECION_FINISH\n\nFixed.\n\n> json_exists no RETURNING clause.\n> so the following part in src/backend/parser/parse_expr.c can be removed?\n>\n> + else if (jsexpr->returning->typid != BOOLOID)\n> + {\n> + Node *coercion_expr;\n> + CaseTestExpr *placeholder = makeNode(CaseTestExpr);\n> + int location = exprLocation((Node *) jsexpr);\n> +\n> + /*\n> + * We abuse CaseTestExpr here as placeholder to pass the\n> + * result of evaluating JSON_EXISTS to the coercion\n> + * expression.\n> + */\n> + placeholder->typeId = BOOLOID;\n> + placeholder->typeMod = -1;\n> + placeholder->collation = InvalidOid;\n> +\n> + coercion_expr =\n> + coerce_to_target_type(pstate, (Node *) placeholder, BOOLOID,\n> + jsexpr->returning->typid,\n> + jsexpr->returning->typmod,\n> + COERCION_EXPLICIT,\n> + COERCE_IMPLICIT_CAST,\n> + location);\n> +\n> + if (coercion_expr == NULL)\n> + ereport(ERROR,\n> + (errcode(ERRCODE_CANNOT_COERCE),\n> + errmsg(\"cannot cast type %s to %s\",\n> + format_type_be(BOOLOID),\n> + format_type_be(jsexpr->returning->typid)),\n> + parser_coercion_errposition(pstate, location, (Node *) jsexpr)));\n> +\n> + if (coercion_expr != (Node *) placeholder)\n> + jsexpr->result_coercion = coercion_expr;\n> + }\n\nThis is needed in the JSON_TABLE patch as explained in [1]. Moved\nthis part into patch 0004.\n\n> Similarly, since JSON_EXISTS has no RETURNING clause, the following\n> also needs to be refactored?\n>\n> + /*\n> + * Disallow FORMAT specification in the RETURNING clause of JSON_EXISTS()\n> + * and JSON_VALUE().\n> + */\n> + if (func->output &&\n> + (func->op == JSON_VALUE_OP || func->op == JSON_EXISTS_OP))\n> + {\n> + JsonFormat *format = func->output->returning->format;\n> +\n> + if (format->format_type != JS_FORMAT_DEFAULT ||\n> + format->encoding != JS_ENC_DEFAULT)\n> + ereport(ERROR,\n> + (errcode(ERRCODE_SYNTAX_ERROR),\n> + errmsg(\"cannot specify FORMAT in RETURNING clause of %s\",\n> + func->op == JSON_VALUE_OP ? \"JSON_VALUE()\" :\n> + \"JSON_EXISTS()\"),\n> + parser_errposition(pstate, format->location)));\n\nThis one needs to be fixed, so done.\n\nOn Thu, Dec 7, 2023 at 5:25 PM Peter Eisentraut <[email protected]> wrote:\n> Here are a couple of small patches to tidy up the parser a bit in your\n> v28-0004 (JSON_TABLE) patch. It's not a lot; the rest looks okay to me.\n\nThanks Peter. I've merged these into 0004.\n\n--\nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n[1] https://www.postgresql.org/message-id/CA%2BHiwqGsByGXLUniPxBgZjn6PeDr0Scp0jxxQOmBXy63tiJ60A%40mail.gmail.com",
"msg_date": "Thu, 7 Dec 2023 18:32:06 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "Op 12/7/23 om 10:32 schreef Amit Langote:\n> On Thu, Dec 7, 2023 at 12:26 AM Alvaro Herrera <[email protected]> wrote:\n>> On 2023-Dec-06, Amit Langote wrote:\n>>> I think I'm inclined toward adapting the LA-token fix (attached 0005),\n> This one needs to be fixed, so done.\n> \n> On Thu, Dec 7, 2023 at 5:25 PM Peter Eisentraut <[email protected]> wrote:\n>> Here are a couple of small patches to tidy up the parser a bit in your\n>> v28-0004 (JSON_TABLE) patch. It's not a lot; the rest looks okay to me.\n> \n> Thanks Peter. I've merged these into 0004.\n\nHm, this set doesn't apply for me. 0003 gives error, see below (sorrty \nfor my interspersed bash echoing - seemed best to leave it in.\n(I'm using patch; should be all right, no? Am I doing it wrong?)\n\n-- [2023.12.07 11:29:39 json_table2] patch 1 of 5 (json_table2) \n[/home/aardvark/download/pgpatches/0170/json_table/20231207/v29-0001-Add-soft-error-handling-to-some-expression-nodes.patch]\n rv [] # [ok]\nOK, patch returned [0] so now break and continue (all is well)\n-- [2023.12.07 11:29:39 json_table2] patch 2 of 5 (json_table2) \n[/home/aardvark/download/pgpatches/0170/json_table/20231207/v29-0002-Add-soft-error-handling-to-populate_record_field.patch]\n rv [0] # [ok]\nOK, patch returned [0] so now break and continue (all is well)\n-- [2023.12.07 11:29:39 json_table2] patch 3 of 5 (json_table2) \n[/home/aardvark/download/pgpatches/0170/json_table/20231207/v29-0003-SQL-JSON-query-functions.patch]\n rv [0] # [ok]\nFile src/interfaces/ecpg/test/sql/sqljson_queryfuncs: git binary diffs \nare not supported.\n patch apply failed: rv = 1 patch file: \n/home/aardvark/download/pgpatches/0170/json_table/20231207/v29-0003-SQL-JSON-query-functions.patch\n rv [1] # [ok]\nThe text leading up to this was:\n--------------------------\n|From 712b95c8a1a3dd683852ac151e229440af783243 Mon Sep 17 00:00:00 2001\n|From: Amit Langote <[email protected]>\n|Date: Tue, 5 Dec 2023 14:33:25 +0900\n|Subject: [PATCH v29 3/5] SQL/JSON query functions\n|MIME-Version: 1.0\n|Content-Type: text/plain; charset=UTF-8\n|Content-Transfer-Encoding: 8bit\n|\n|This introduces the SQL/JSON functions for querying JSON data using\n|jsonpath expressions. The functions are:\n|\n|JSON_EXISTS()\n|JSON_QUERY()\n|JSON_VALUE()\n|\n\n\nErik\n\n\n> --\n> Thanks, Amit Langote\n> EDB: http://www.enterprisedb.com\n> \n> [1] https://www.postgresql.org/message-id/CA%2BHiwqGsByGXLUniPxBgZjn6PeDr0Scp0jxxQOmBXy63tiJ60A%40mail.gmail.com\n\n\n",
"msg_date": "Thu, 7 Dec 2023 11:36:38 +0100",
"msg_from": "Erik Rijkers <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "two JsonCoercionState in src/tools/pgindent/typedefs.list.\n\n+JsonCoercionState\n JsonConstructorExpr\n JsonConstructorExprState\n JsonConstructorType\n JsonEncoding\n+JsonExpr\n+JsonExprOp\n+JsonExprPostEvalState\n+JsonExprState\n+JsonCoercionState\n\n+ post_eval->jump_eval_coercion = jsestate->jump_eval_result_coercion;\n+ if (jbv == NULL)\n+ {\n+ /* Will be coerced with result_coercion. */\n+ *op->resvalue = (Datum) 0;\n+ *op->resnull = true;\n+ }\n+ else if (!error && !empty)\n+ {\n+ Assert(jbv != NULL);\n\nthe above \"Assert(jbv != NULL);\" will always be true?\n\nbased on:\njson_behavior_clause_opt:\njson_behavior ON EMPTY_P\n{ $$ = list_make2($1, NULL); }\n| json_behavior ON ERROR_P\n{ $$ = list_make2(NULL, $1); }\n| json_behavior ON EMPTY_P json_behavior ON ERROR_P\n{ $$ = list_make2($1, $4); }\n| /* EMPTY */\n{ $$ = list_make2(NULL, NULL); }\n;\n\nso\n+ if (func->behavior)\n+ {\n+ on_empty = linitial(func->behavior);\n+ on_error = lsecond(func->behavior);\n+ }\n\n`if (func->behavior)` will always be true?\nBy the way, in the above \"{ $$ = list_make2($1, $4); }\" what does $4\nrefer to? (I don't know gram.y....)\n\n\n+ jsexpr->formatted_expr = transformJsonValueExpr(pstate, constructName,\n+ func->context_item,\n+ JS_FORMAT_JSON,\n+ InvalidOid, false);\n+\n+ Assert(jsexpr->formatted_expr != NULL);\nThis Assert is unnecessary? transformJsonValueExpr function already\nhas an assert in the end, will it fail that one first?\n\n\n",
"msg_date": "Thu, 7 Dec 2023 18:39:03 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "On 2023-Dec-07, Erik Rijkers wrote:\n\n> Hm, this set doesn't apply for me. 0003 gives error, see below (sorrty for\n> my interspersed bash echoing - seemed best to leave it in.\n> (I'm using patch; should be all right, no? Am I doing it wrong?)\n\nThere's definitely something wrong with the patch file; that binary file\nshould not be there. OTOH clearly if we ever start including binary\nfiles in our tree, `patch` is no longer going to cut it. Maybe we won't\never do that, though.\n\nThere's also a complaint about whitespace.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"El destino baraja y nosotros jugamos\" (A. Schopenhauer)\n\n\n",
"msg_date": "Thu, 7 Dec 2023 11:51:53 +0100",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "On Thu, Dec 7, 2023 at 7:51 PM Alvaro Herrera <[email protected]> wrote:\n> On 2023-Dec-07, Erik Rijkers wrote:\n>\n> > Hm, this set doesn't apply for me. 0003 gives error, see below (sorrty for\n> > my interspersed bash echoing - seemed best to leave it in.\n> > (I'm using patch; should be all right, no? Am I doing it wrong?)\n>\n> There's definitely something wrong with the patch file; that binary file\n> should not be there. OTOH clearly if we ever start including binary\n> files in our tree, `patch` is no longer going to cut it. Maybe we won't\n> ever do that, though.\n>\n> There's also a complaint about whitespace.\n\nLooks like I messed something up when using git (rebase -i). :-(\n\nApply-able patches attached, including fixes based on jian he's comments.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com",
"msg_date": "Thu, 7 Dec 2023 21:07:59 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "On Thu, Dec 7, 2023 at 7:39 PM jian he <[email protected]> wrote:\n> based on:\n> json_behavior_clause_opt:\n> json_behavior ON EMPTY_P\n> { $$ = list_make2($1, NULL); }\n> | json_behavior ON ERROR_P\n> { $$ = list_make2(NULL, $1); }\n> | json_behavior ON EMPTY_P json_behavior ON ERROR_P\n> { $$ = list_make2($1, $4); }\n> | /* EMPTY */\n> { $$ = list_make2(NULL, NULL); }\n> ;\n> so\n> + if (func->behavior)\n> + {\n> + on_empty = linitial(func->behavior);\n> + on_error = lsecond(func->behavior);\n> + }\n\nYeah, maybe.\n\n> `if (func->behavior)` will always be true?\n> By the way, in the above \"{ $$ = list_make2($1, $4); }\" what does $4\n> refer to? (I don't know gram.y....)\n\n$1 and $4 refer to the 1st and 4th symbols in the following:\n\njson_behavior ON EMPTY_P json_behavior ON ERROR_P\n\nSo $1 gives the json_behavior (JsonBehavior) node for ON EMPTY and $4\ngives that for ON ERROR.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 7 Dec 2023 21:13:55 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "On Wed, Dec 6, 2023 at 10:26 AM Alvaro Herrera <[email protected]> wrote:\n> > I think I'm inclined toward adapting the LA-token fix (attached 0005),\n> > because we've done that before with SQL/JSON constructors patch.\n> > Also, if I understand the concerns that Tom mentioned at [1]\n> > correctly, maybe we'd be better off not assigning precedence to\n> > symbols as much as possible, so there's that too against the approach\n> > #1.\n>\n> Sounds ok to me, but I'm happy for this decision to be overridden by\n> others with more experience in parser code.\n\nIn my experience, the lookahead solution is typically suitable when\nthe keywords involved aren't used very much in other parts of the\ngrammar. I think the situation that basically gets you into trouble is\nif there's some way to have a situation where NESTED shouldn't be\nchanged to NESTED_LA when PATH immediately follows. For example, if\nNESTED could be used like DISTINCT in a SELECT query:\n\nSELECT DISTINCT a, b, c FROM whatever\n\n...then that would be a strong indication in my mind that we shouldn't\nuse the lookahead solution, because what if you substitute \"path\" for\n\"a\"? Now you have a mess.\n\nI haven't gone over the grammar changes in a lot of detail so I'm not\nsure how much risk there is here. It looks to me like there's some\nsyntax that goes NESTED [PATH] 'literal string', and if that were the\nonly use of NESTED or PATH then I think we'd be completely fine. I see\nthat PATH b_expr also gets added to xmltable_column_option_el, and\nthat's a little more worrying, because you don't really want to see\nkeywords that are used for special lookahead rules in places where\nthey can creep into general expressions, but it seems like it might\nstill be OK as long as NESTED doesn't also start to get used in other\nplaces. If you ever create a situation where NESTED can bump up\nagainst PATH without wanting that to turn into NESTED_LA PATH, then I\nthink it's likely that this whole approach will unravel. As long as we\ndon't think that will ever happen, I think it's probably OK. If we do\nthink it's going to happen, then we should probably grit our teeth and\nuse precedence.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 7 Dec 2023 12:19:24 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "I noticed that JSON_TABLE uses an explicit FORMAT JSON in one of the\nrules, instead of using json_format_clause_opt like everywhere else. I\nwondered why, and noticed that it's because it wants to set coltype\nJTC_FORMATTED when the clause is present but JTC_REGULAR otherwise.\nThis seemed a little odd, but I thought to split json_format_clause_opt\nin two productions, one without the empty rule (json_format_clause) and\nanother with it. This is not a groundbreaking improvement, but it seems\nmore natural, and it helps contain the FORMAT stuff a little better.\n\nI also noticed while at it that we can do away not only with the\njson_encoding_clause_opt clause, but also with makeJsonEncoding().\n\nThe attach patch does it. This is not derived from the patches you're\ncurrently working on; it's more of a revise of the previous SQL/JSON\ncode I committed in 7081ac46ace8.\n\nIt goes before your 0003 and has a couple of easily resolved conflicts\nwith both 0003 and 0004; then in 0004 you have to edit the JSON_TABLE\nrule that has FORMAT_LA and replace that with json_format_clause.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"El que vive para el futuro es un iluso, y el que vive para el pasado,\nun imbécil\" (Luis Adler, \"Los tripulantes de la noche\")",
"msg_date": "Thu, 7 Dec 2023 19:41:55 +0100",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "On Fri, Dec 8, 2023 at 2:19 AM Robert Haas <[email protected]> wrote:\n> On Wed, Dec 6, 2023 at 10:26 AM Alvaro Herrera <[email protected]> wrote:\n> > > I think I'm inclined toward adapting the LA-token fix (attached 0005),\n> > > because we've done that before with SQL/JSON constructors patch.\n> > > Also, if I understand the concerns that Tom mentioned at [1]\n> > > correctly, maybe we'd be better off not assigning precedence to\n> > > symbols as much as possible, so there's that too against the approach\n> > > #1.\n> >\n> > Sounds ok to me, but I'm happy for this decision to be overridden by\n> > others with more experience in parser code.\n>\n> In my experience, the lookahead solution is typically suitable when\n> the keywords involved aren't used very much in other parts of the\n> grammar. I think the situation that basically gets you into trouble is\n> if there's some way to have a situation where NESTED shouldn't be\n> changed to NESTED_LA when PATH immediately follows. For example, if\n> NESTED could be used like DISTINCT in a SELECT query:\n>\n> SELECT DISTINCT a, b, c FROM whatever\n>\n> ...then that would be a strong indication in my mind that we shouldn't\n> use the lookahead solution, because what if you substitute \"path\" for\n> \"a\"? Now you have a mess.\n>\n> I haven't gone over the grammar changes in a lot of detail so I'm not\n> sure how much risk there is here. It looks to me like there's some\n> syntax that goes NESTED [PATH] 'literal string', and if that were the\n> only use of NESTED or PATH then I think we'd be completely fine. I see\n> that PATH b_expr also gets added to xmltable_column_option_el, and\n> that's a little more worrying, because you don't really want to see\n> keywords that are used for special lookahead rules in places where\n> they can creep into general expressions, but it seems like it might\n> still be OK as long as NESTED doesn't also start to get used in other\n> places. If you ever create a situation where NESTED can bump up\n> against PATH without wanting that to turn into NESTED_LA PATH, then I\n> think it's likely that this whole approach will unravel. As long as we\n> don't think that will ever happen, I think it's probably OK. If we do\n> think it's going to happen, then we should probably grit our teeth and\n> use precedence.\n\nWould it be messy to replace the lookahead approach by whatever's\nsuiable *in the future* when it becomes necessary to do so?\n\n\n--\nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 8 Dec 2023 15:59:27 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "On Fri, Dec 8, 2023 at 3:42 AM Alvaro Herrera <[email protected]> wrote:\n> I noticed that JSON_TABLE uses an explicit FORMAT JSON in one of the\n> rules, instead of using json_format_clause_opt like everywhere else. I\n> wondered why, and noticed that it's because it wants to set coltype\n> JTC_FORMATTED when the clause is present but JTC_REGULAR otherwise.\n> This seemed a little odd, but I thought to split json_format_clause_opt\n> in two productions, one without the empty rule (json_format_clause) and\n> another with it. This is not a groundbreaking improvement, but it seems\n> more natural, and it helps contain the FORMAT stuff a little better.\n>\n> I also noticed while at it that we can do away not only with the\n> json_encoding_clause_opt clause, but also with makeJsonEncoding().\n>\n> The attach patch does it. This is not derived from the patches you're\n> currently working on; it's more of a revise of the previous SQL/JSON\n> code I committed in 7081ac46ace8.\n>\n> It goes before your 0003 and has a couple of easily resolved conflicts\n> with both 0003 and 0004; then in 0004 you have to edit the JSON_TABLE\n> rule that has FORMAT_LA and replace that with json_format_clause.\n\nThanks. I've adapted that as the attached 0004.\n\nI started thinking that some changes to\nsrc/backend/utils/adt/jsonpath_exec.c made by SQL/JSON query functions\npatch belong in a separate refactoring patch, which I've attached as\npatch 0003. They are the changes related to how jsonpath executor\ntakes and extracts \"variables\".\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com",
"msg_date": "Fri, 8 Dec 2023 19:34:29 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "On Fri, Dec 8, 2023 at 1:59 AM Amit Langote <[email protected]> wrote:\n> Would it be messy to replace the lookahead approach by whatever's\n> suiable *in the future* when it becomes necessary to do so?\n\nIt might be. Changing grammar rules to tends to change corner-case\nbehavior if nothing else. We're best off picking the approach that we\nthink is correct long term.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 8 Dec 2023 11:37:29 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "\nOn 2023-12-08 Fr 11:37, Robert Haas wrote:\n> On Fri, Dec 8, 2023 at 1:59 AM Amit Langote <[email protected]> wrote:\n>> Would it be messy to replace the lookahead approach by whatever's\n>> suiable *in the future* when it becomes necessary to do so?\n> It might be. Changing grammar rules to tends to change corner-case\n> behavior if nothing else. We're best off picking the approach that we\n> think is correct long term.\n\n\nAll this makes me wonder if Alvaro's first suggested solution (adding \nNESTED to the UNBOUNDED precedence level) wouldn't be better after all.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Fri, 8 Dec 2023 12:05:43 -0500",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "Hi,\n\nOn 2023-12-07 21:07:59 +0900, Amit Langote wrote:\n> --- a/src/include/executor/execExpr.h\n> +++ b/src/include/executor/execExpr.h\n> @@ -16,6 +16,7 @@\n> \n> #include \"executor/nodeAgg.h\"\n> #include \"nodes/execnodes.h\"\n> +#include \"nodes/miscnodes.h\"\n> \n> /* forward references to avoid circularity */\n> struct ExprEvalStep;\n> @@ -168,6 +169,7 @@ typedef enum ExprEvalOp\n> \n> \t/* evaluate assorted special-purpose expression types */\n> \tEEOP_IOCOERCE,\n> +\tEEOP_IOCOERCE_SAFE,\n> \tEEOP_DISTINCT,\n> \tEEOP_NOT_DISTINCT,\n> \tEEOP_NULLIF,\n> @@ -547,6 +549,7 @@ typedef struct ExprEvalStep\n> \t\t\tbool\t *checknull;\n> \t\t\t/* OID of domain type */\n> \t\t\tOid\t\t\tresulttype;\n> +\t\t\tErrorSaveContext *escontext;\n> \t\t}\t\t\tdomaincheck;\n> \n> \t\t/* for EEOP_CONVERT_ROWTYPE */\n> @@ -776,6 +779,7 @@ extern void ExecEvalParamExec(ExprState *state, ExprEvalStep *op,\n> \t\t\t\t\t\t\t ExprContext *econtext);\n> extern void ExecEvalParamExtern(ExprState *state, ExprEvalStep *op,\n> \t\t\t\t\t\t\t\tExprContext *econtext);\n> +extern void ExecEvalCoerceViaIOSafe(ExprState *state, ExprEvalStep *op);\n> extern void ExecEvalSQLValueFunction(ExprState *state, ExprEvalStep *op);\n> extern void ExecEvalCurrentOfExpr(ExprState *state, ExprEvalStep *op);\n> extern void ExecEvalNextValueExpr(ExprState *state, ExprEvalStep *op);\n> diff --git a/src/include/nodes/execnodes.h b/src/include/nodes/execnodes.h\n> index 5d7f17dee0..6a7118d300 100644\n> --- a/src/include/nodes/execnodes.h\n> +++ b/src/include/nodes/execnodes.h\n> @@ -34,6 +34,7 @@\n> #include \"fmgr.h\"\n> #include \"lib/ilist.h\"\n> #include \"lib/pairingheap.h\"\n> +#include \"nodes/miscnodes.h\"\n> #include \"nodes/params.h\"\n> #include \"nodes/plannodes.h\"\n> #include \"nodes/tidbitmap.h\"\n> @@ -129,6 +130,12 @@ typedef struct ExprState\n> \n> \tDatum\t *innermost_domainval;\n> \tbool\t *innermost_domainnull;\n> +\n> +\t/*\n> +\t * For expression nodes that support soft errors. Should be set to NULL\n> +\t * before calling ExecInitExprRec() if the caller wants errors thrown.\n> +\t */\n> +\tErrorSaveContext *escontext;\n> } ExprState;\n\nWhy do we need this both in ExprState *and* in ExprEvalStep?\n\n\n\n> From 38b53297b2d435d5cebf78c1f81e4748fed6c8b6 Mon Sep 17 00:00:00 2001\n> From: Amit Langote <[email protected]>\n> Date: Wed, 22 Nov 2023 13:18:49 +0900\n> Subject: [PATCH v30 2/5] Add soft error handling to populate_record_field()\n> \n> An uncoming patch would like the ability to call it from the\n> executor for some SQL/JSON expression nodes and ask to suppress any\n> errors that may occur.\n> \n> This commit does two things mainly:\n> \n> * It modifies the various interfaces internal to jsonfuncs.c to pass\n> the ErrorSaveContext around.\n> \n> * Make necessary modifications to handle the cases where the\n> processing is aborted partway through various functions that take\n> an ErrorSaveContext when a soft error occurs.\n> \n> Note that the above changes are only intended to suppress errors in\n> the functions in jsonfuncs.c, but not those in any external functions\n> that the functions in jsonfuncs.c in turn call, such as those from\n> arrayfuncs.c. It is assumed that the various populate_* functions\n> validate the data before passing those to external functions.\n> \n> Discussion: https://postgr.es/m/CA+HiwqE4XTdfb1nW=Ojoy_tQSRhYt-q_kb6i5d4xcKyrLC1Nbg@mail.gmail.com\n\nThe code here is getting substantially more verbose / less readable. 
I wonder\nif there's something more general that could be improved to make this less\npainful?\n\nI'd not at all be surprised if this caused a measurable slowdown.\n\n\n> ---\n> src/backend/utils/adt/jsonfuncs.c | 310 +++++++++++++++++++++++-------\n> 1 file changed, 236 insertions(+), 74 deletions(-)\n\n> /* functions supporting jsonb_delete, jsonb_set and jsonb_concat */\n> static JsonbValue *IteratorConcat(JsonbIterator **it1, JsonbIterator **it2,\n> @@ -2484,12 +2491,12 @@ populate_array_report_expected_array(PopulateArrayContext *ctx, int ndim)\n> \tif (ndim <= 0)\n> \t{\n> \t\tif (ctx->colname)\n> -\t\t\tereport(ERROR,\n> +\t\t\terrsave(ctx->escontext,\n> \t\t\t\t\t(errcode(ERRCODE_INVALID_TEXT_REPRESENTATION),\n> \t\t\t\t\t errmsg(\"expected JSON array\"),\n> \t\t\t\t\t errhint(\"See the value of key \\\"%s\\\".\", ctx->colname)));\n> \t\telse\n> -\t\t\tereport(ERROR,\n> +\t\t\terrsave(ctx->escontext,\n> \t\t\t\t\t(errcode(ERRCODE_INVALID_TEXT_REPRESENTATION),\n> \t\t\t\t\t errmsg(\"expected JSON array\")));\n> \t}\n> @@ -2506,13 +2513,13 @@ populate_array_report_expected_array(PopulateArrayContext *ctx, int ndim)\n> \t\t\tappendStringInfo(&indices, \"[%d]\", ctx->sizes[i]);\n> \n> \t\tif (ctx->colname)\n> -\t\t\tereport(ERROR,\n> +\t\t\terrsave(ctx->escontext,\n> \t\t\t\t\t(errcode(ERRCODE_INVALID_TEXT_REPRESENTATION),\n> \t\t\t\t\t errmsg(\"expected JSON array\"),\n> \t\t\t\t\t errhint(\"See the array element %s of key \\\"%s\\\".\",\n> \t\t\t\t\t\t\t indices.data, ctx->colname)));\n> \t\telse\n> -\t\t\tereport(ERROR,\n> +\t\t\terrsave(ctx->escontext,\n> \t\t\t\t\t(errcode(ERRCODE_INVALID_TEXT_REPRESENTATION),\n> \t\t\t\t\t errmsg(\"expected JSON array\"),\n> \t\t\t\t\t errhint(\"See the array element %s.\",\n> @@ -2520,8 +2527,13 @@ populate_array_report_expected_array(PopulateArrayContext *ctx, int ndim)\n> \t}\n> }\n\nIt seems mildly errorprone to use errsave() but not have any returns in the\ncode after the errsave()s - it seems plausible that somebody later would come\nand add more code expecting to not reach the later code.\n\n\n\n> +/*\n> + * Validate and set ndims for populating an array with some\n> + * populate_array_*() function.\n> + *\n> + * Returns false if the input (ndims) is erratic.\n\nI don't think \"erratic\" is the right word, \"erroneous\" maybe?\n\n\n\n\n\n> From 35cf1759f67a1c8ca7691aa87727a9f2c404b7c2 Mon Sep 17 00:00:00 2001\n> From: Amit Langote <[email protected]>\n> Date: Tue, 5 Dec 2023 14:33:25 +0900\n> Subject: [PATCH v30 3/5] SQL/JSON query functions\n> MIME-Version: 1.0\n> Content-Type: text/plain; charset=UTF-8\n> Content-Transfer-Encoding: 8bit\n> \n> This introduces the SQL/JSON functions for querying JSON data using\n> jsonpath expressions. The functions are:\n> \n> JSON_EXISTS()\n> JSON_QUERY()\n> JSON_VALUE()\n> \n> JSON_EXISTS() tests if the jsonpath expression applied to the jsonb\n> value yields any values.\n> \n> JSON_VALUE() must return a single value, and an error occurs if it\n> tries to return multiple values.\n> \n> JSON_QUERY() must return a json object or array, and there are\n> various WRAPPER options for handling scalar or multi-value results.\n> Both these functions have options for handling EMPTY and ERROR\n> conditions.\n> \n> All of these functions only operate on jsonb. 
The workaround for now\n> is to cast the argument to jsonb.\n> \n> Author: Nikita Glukhov <[email protected]>\n> Author: Teodor Sigaev <[email protected]>\n> Author: Oleg Bartunov <[email protected]>\n> Author: Alexander Korotkov <[email protected]>\n> Author: Andrew Dunstan <[email protected]>\n> Author: Amit Langote <[email protected]>\n> Author: Peter Eisentraut <[email protected]>\n> Author: jian he <[email protected]>\n> \n> Reviewers have included (in no particular order) Andres Freund, Alexander\n> Korotkov, Pavel Stehule, Andrew Alsup, Erik Rijkers, Zihong Yu,\n> Himanshu Upadhyaya, Daniel Gustafsson, Justin Pryzby, �lvaro Herrera,\n> jian he, Anton A. Melnikov, Nikita Malakhov, Peter Eisentraut\n> \n> Discussion: https://postgr.es/m/[email protected]\n> Discussion: https://postgr.es/m/[email protected]\n> Discussion: https://postgr.es/m/abd9b83b-aa66-f230-3d6d-734817f0995d%40postgresql.org\n> Discussion: https://postgr.es/m/CA+HiwqE4XTdfb1nW=Ojoy_tQSRhYt-q_kb6i5d4xcKyrLC1Nbg@mail.gmail.com\n> ---\n> doc/src/sgml/func.sgml | 151 +++\n> src/backend/catalog/sql_features.txt | 12 +-\n> src/backend/executor/execExpr.c | 363 +++++++\n> src/backend/executor/execExprInterp.c | 365 ++++++-\n> src/backend/jit/llvm/llvmjit.c | 2 +\n> src/backend/jit/llvm/llvmjit_expr.c | 140 +++\n> src/backend/jit/llvm/llvmjit_types.c | 4 +\n> src/backend/nodes/makefuncs.c | 18 +\n> src/backend/nodes/nodeFuncs.c | 238 ++++-\n> src/backend/optimizer/path/costsize.c | 3 +-\n> src/backend/optimizer/util/clauses.c | 19 +\n> src/backend/parser/gram.y | 178 +++-\n> src/backend/parser/parse_expr.c | 621 ++++++++++-\n> src/backend/parser/parse_target.c | 15 +\n> src/backend/utils/adt/formatting.c | 44 +\n> src/backend/utils/adt/jsonb.c | 31 +\n> src/backend/utils/adt/jsonfuncs.c | 52 +-\n> src/backend/utils/adt/jsonpath.c | 255 +++++\n> src/backend/utils/adt/jsonpath_exec.c | 391 ++++++-\n> src/backend/utils/adt/ruleutils.c | 136 +++\n> src/include/executor/execExpr.h | 133 +++\n> src/include/fmgr.h | 1 +\n> src/include/jit/llvmjit.h | 1 +\n> src/include/nodes/makefuncs.h | 2 +\n> src/include/nodes/parsenodes.h | 47 +\n> src/include/nodes/primnodes.h | 130 +++\n> src/include/parser/kwlist.h | 11 +\n> src/include/utils/formatting.h | 1 +\n> src/include/utils/jsonb.h | 1 +\n> src/include/utils/jsonfuncs.h | 5 +\n> src/include/utils/jsonpath.h | 27 +\n> src/interfaces/ecpg/preproc/ecpg.trailer | 28 +\n> src/test/regress/expected/json_sqljson.out | 18 +\n> src/test/regress/expected/jsonb_sqljson.out | 1032 +++++++++++++++++++\n> src/test/regress/parallel_schedule | 2 +-\n> src/test/regress/sql/json_sqljson.sql | 11 +\n> src/test/regress/sql/jsonb_sqljson.sql | 337 ++++++\n> src/tools/pgindent/typedefs.list | 18 +\n> 38 files changed, 4767 insertions(+), 76 deletions(-)\n\nI think it'd be worth trying to break this into smaller bits - it's not easy\nto review this at once.\n\n\n\n\n> +/*\n> + * Information about the state of JsonPath* evaluation.\n> + */\n> +typedef struct JsonExprPostEvalState\n> +{\n> +\t/* Did JsonPath* evaluation cause an error? */\n> +\tNullableDatum\terror;\n> +\n> +\t/* Is the result of JsonPath* evaluation empty? 
*/\n> +\tNullableDatum\tempty;\n> +\n> +\t/*\n> +\t * ExecEvalJsonExprPath() will set this to the address of the step to\n> +\t * use to coerce the result of JsonPath* evaluation to the RETURNING type.\n> +\t * Also see the description of possible step addresses that this could be\n> +\t * set to in the definition of JsonExprState.\n> +\t */\n> +#define FIELDNO_JSONEXPRPOSTEVALSTATE_JUMP_EVAL_COERCION\t2\n> +\tint\t\t\tjump_eval_coercion;\n> +} JsonExprPostEvalState;\n> +\n> +/* State for evaluating a JsonExpr, too big to inline */\n> +typedef struct JsonExprState\n> +{\n> +\t/* original expression node */\n> +\tJsonExpr *jsexpr;\n> +\n> +\t/* value/isnull for formatted_expr */\n> +\tNullableDatum formatted_expr;\n> +\n> +\t/* value/isnull for pathspec */\n> +\tNullableDatum pathspec;\n> +\n> +\t/* JsonPathVariable entries for passing_values */\n> +\tList\t *args;\n> +\n> +\t/*\n> +\t * Per-row result status info populated by ExecEvalJsonExprPath()\n> +\t * and ExecEvalJsonCoercionFinish().\n> +\t */\n> +\tJsonExprPostEvalState post_eval;\n> +\n> +\t/*\n> +\t * Address of the step that implements the non-ERROR variant of ON ERROR\n> +\t * and ON EMPTY behaviors, to be jumped to when ExecEvalJsonExprPath()\n> +\t * returns false on encountering an error during JsonPath* evaluation\n> +\t * (ON ERROR) or on finding that no matching JSON item was returned (ON\n> +\t * EMPTY). The same steps are also performed on encountering an error\n> +\t * when coercing JsonPath* result to the RETURNING type.\n> +\t */\n> +\tint\t\t\tjump_error;\n> +\n> +\t/*\n> +\t * Addresses of steps to perform the coercion of the JsonPath* result value\n> +\t * to the RETURNING type. Each address points to either 1) a special\n> +\t * EEOP_JSONEXPR_COERCION step that handles coercion using the RETURNING\n> +\t * type's input function or by using json_via_populate(), or 2) an\n> +\t * expression such as CoerceViaIO. It may be -1 if no coercion is\n> +\t * necessary.\n> +\t *\n> +\t * jump_eval_result_coercion points to the step to evaluate the coercion\n> +\t * given in JsonExpr.result_coercion.\n> +\t */\n> +\tint\t\t\tjump_eval_result_coercion;\n> +\n> +\t/* eval_item_coercion_jumps is an array of num_item_coercions elements\n> +\t * each containing a step address to evaluate the coercion from a value of\n> +\t * the given JsonItemType to the RETURNING type, or -1 if no coercion is\n> +\t * necessary. item_coercion_via_expr is an array of boolean flags of the\n> +\t * same length that indicates whether each valid step address in the\n> +\t * eval_item_coercion_jumps array points to an expression or a\n> +\t * EEOP_JSONEXPR_COERCION step. ExecEvalJsonExprPath() will cause an\n> +\t * error if it's the latter, because that mode of coercion is not\n> +\t * supported for all JsonItemTypes.\n> +\t */\n> +\tint\t\t\tnum_item_coercions;\n> +\tint\t\t *eval_item_coercion_jumps;\n> +\tbool\t *item_coercion_via_expr;\n> +\n> +\t/*\n> +\t * For passing when initializing a EEOP_IOCOERCE_SAFE step for any\n> +\t * CoerceViaIO nodes in the expression that must be evaluated in an\n> +\t * error-safe manner.\n> +\t */\n> +\tErrorSaveContext escontext;\n> +} JsonExprState;\n> +\n> +/*\n> + * State for coercing a value to the target type specified in 'coercion' using\n> + * either json_populate_type() or by calling the type's input function.\n> + */\n> +typedef struct JsonCoercionState\n> +{\n> +\t/* original expression node */\n> +\tJsonCoercion *coercion;\n> +\n> +\t/* Input function info for the target type. 
*/\n> +\tstruct\n> +\t{\n> +\t\tFmgrInfo *finfo;\n> +\t\tOid\t\t\ttypioparam;\n> +\t}\t\t\tinput;\n> +\n> +\t/* Cache for json_populate_type() */\n> +\tvoid\t *cache;\n> +\n> +\t/*\n> +\t * For soft-error handling in json_populate_type() or\n> +\t * in InputFunctionCallSafe().\n> +\t */\n> +\tErrorSaveContext *escontext;\n> +} JsonCoercionState;\n\n\nDoes all of this stuff need to live in this header? Some of it seems like it\ndoesn't need to be in a header at all, and other bits seem like they belong\nsomewhere more json specific?\n\n\n> +/*\n> + * JsonItemType\n> + *\t\tRepresents type codes to identify a JsonCoercion node to use when\n> + *\t\tcoercing a given SQL/JSON items to the output SQL type\n> + *\n> + * The comment next to each item type mentions the JsonbValue.jbvType of the\n> + * source JsonbValue value to be coerced using the expression in the\n> + * JsonCoercion node.\n> + *\n> + * Also, see InitJsonItemCoercions() and ExecPrepareJsonItemCoercion().\n> + */\n> +typedef enum JsonItemType\n> +{\n> +\tJsonItemTypeNull = 0,\t\t/* jbvNull */\n> +\tJsonItemTypeString = 1,\t\t/* jbvString */\n> +\tJsonItemTypeNumeric = 2,\t/* jbvNumeric */\n> +\tJsonItemTypeBoolean = 3,\t/* jbvBool */\n> +\tJsonItemTypeDate = 4,\t\t/* jbvDatetime: DATEOID */\n> +\tJsonItemTypeTime = 5,\t\t/* jbvDatetime: TIMEOID */\n> +\tJsonItemTypeTimetz = 6,\t\t/* jbvDatetime: TIMETZOID */\n> +\tJsonItemTypeTimestamp = 7,\t/* jbvDatetime: TIMESTAMPOID */\n> +\tJsonItemTypeTimestamptz = 8,\t/* jbvDatetime: TIMESTAMPTZOID */\n> +\tJsonItemTypeComposite = 9,\t/* jbvArray, jbvObject, jbvBinary */\n> +\tJsonItemTypeInvalid = 10,\n> +} JsonItemType;\n\nWhy do we need manually assigned values here?\n\n\n> +/*\n> + * JsonCoercion -\n> + *\t\tcoercion from SQL/JSON item types to SQL types\n> + */\n> +typedef struct JsonCoercion\n> +{\n> +\tNodeTag\t\ttype;\n> +\n> +\tOid\t\t\ttargettype;\n> +\tint32\t\ttargettypmod;\n> +\tbool\t\tvia_populate;\t/* coerce result using json_populate_type()? */\n> +\tbool\t\tvia_io;\t\t\t/* coerce result using type input function? */\n> +\tOid\t\t\tcollation;\t\t/* collation for coercion via I/O or populate */\n> +} JsonCoercion;\n> +\n> +typedef struct JsonItemCoercion\n> +{\n> +\tNodeTag\t\ttype;\n> +\n> +\tJsonItemType item_type;\n> +\tNode\t *coercion;\n> +} JsonItemCoercion;\n\nWhat's the difference between an \"ItemCoercion\" and a \"Coercion\"?\n\n\n> +/*\n> + * JsonBehavior -\n> + * \t\trepresentation of a given JSON behavior\n\nMy editor warns about space-before-tab here.\n\n\n> + */\n> +typedef struct JsonBehavior\n> +{\n> +\tNodeTag\t\ttype;\n\n> +\tJsonBehaviorType btype;\t\t/* behavior type */\n> +\tNode\t *expr;\t\t\t/* behavior expression */\n\nThese comment don't seem helpful. I think there's need for comments here, but\nrestating the field name in different words isn't helpful. 
What's needed is an\nexplanation of how things interact, perhaps also why that's the appropriate\nrepresentation.\n\n> +\tJsonCoercion *coercion;\t\t/* to coerce behavior expression when there is\n> +\t\t\t\t\t\t\t\t * no cast to the target type */\n> +\tint\t\t\tlocation;\t\t/* token location, or -1 if unknown */\n\n> +} JsonBehavior;\n> +\n> +/*\n> + * JsonExpr -\n> + *\t\ttransformed representation of JSON_VALUE(), JSON_QUERY(), JSON_EXISTS()\n> + */\n> +typedef struct JsonExpr\n> +{\n> +\tExpr\t\txpr;\n> +\n> +\tJsonExprOp\top;\t\t\t\t/* json function ID */\n> +\tNode\t *formatted_expr; /* formatted context item expression */\n> +\tNode\t *result_coercion; /* resulting coercion to RETURNING type */\n> +\tJsonFormat *format;\t\t\t/* context item format (JSON/JSONB) */\n> +\tNode\t *path_spec;\t\t/* JSON path specification expression */\n> +\tList\t *passing_names;\t/* PASSING argument names */\n> +\tList\t *passing_values; /* PASSING argument values */\n> +\tJsonReturning *returning;\t/* RETURNING clause type/format info */\n> +\tJsonBehavior *on_empty;\t\t/* ON EMPTY behavior */\n> +\tJsonBehavior *on_error;\t\t/* ON ERROR behavior */\n> +\tList\t *item_coercions; /* coercions for JSON_VALUE */\n> +\tJsonWrapper wrapper;\t\t/* WRAPPER for JSON_QUERY */\n> +\tbool\t\tomit_quotes;\t/* KEEP/OMIT QUOTES for JSON_QUERY */\n> +\tint\t\t\tlocation;\t\t/* token location, or -1 if unknown */\n> +} JsonExpr;\n\nThese comments seem even worse.\n\n\n\n> +static void ExecInitJsonExpr(JsonExpr *jexpr, ExprState *state,\n> +\t\t\t\t\t\t\t Datum *resv, bool *resnull,\n> +\t\t\t\t\t\t\t ExprEvalStep *scratch);\n> +static int ExecInitJsonExprCoercion(ExprState *state, Node *coercion,\n> +\t\t\t\t\t\t ErrorSaveContext *escontext,\n> +\t\t\t\t\t\t Datum *resv, bool *resnull);\n> \n> \n> /*\n> @@ -2416,6 +2423,36 @@ ExecInitExprRec(Expr *node, ExprState *state,\n> \t\t\t\tbreak;\n> \t\t\t}\n> \n> +\t\tcase T_JsonExpr:\n> +\t\t\t{\n> +\t\t\t\tJsonExpr *jexpr = castNode(JsonExpr, node);\n> +\n> +\t\t\t\tExecInitJsonExpr(jexpr, state, resv, resnull, &scratch);\n> +\t\t\t\tbreak;\n> +\t\t\t}\n> +\n> +\t\tcase T_JsonCoercion:\n> +\t\t\t{\n> +\t\t\t\tJsonCoercion\t*coercion = castNode(JsonCoercion, node);\n> +\t\t\t\tJsonCoercionState *jcstate = palloc0(sizeof(JsonCoercionState));\n> +\t\t\t\tOid\t\t\ttypinput;\n> +\t\t\t\tFmgrInfo *finfo;\n> +\n> +\t\t\t\tgetTypeInputInfo(coercion->targettype, &typinput,\n> +\t\t\t\t\t\t\t\t &jcstate->input.typioparam);\n> +\t\t\t\tfinfo = palloc0(sizeof(FmgrInfo));\n> +\t\t\t\tfmgr_info(typinput, finfo);\n> +\t\t\t\tjcstate->input.finfo = finfo;\n> +\n> +\t\t\t\tjcstate->coercion = coercion;\n> +\t\t\t\tjcstate->escontext = state->escontext;\n> +\n> +\t\t\t\tscratch.opcode = EEOP_JSONEXPR_COERCION;\n> +\t\t\t\tscratch.d.jsonexpr_coercion.jcstate = jcstate;\n> +\t\t\t\tExprEvalPushStep(state, &scratch);\n> +\t\t\t\tbreak;\n> +\t\t\t}\n\nIt's confusing that we have ExecInitJsonExprCoercion, but aren't using that\nhere, but then use it later, in ExecInitJsonExpr().\n\n\n> \t\tcase T_NullTest:\n> \t\t\t{\n> \t\t\t\tNullTest *ntest = (NullTest *) node;\n> @@ -4184,3 +4221,329 @@ ExecBuildParamSetEqual(TupleDesc desc,\n> \n> \treturn state;\n> }\n> +\n> +/*\n> + * Push steps to evaluate a JsonExpr and its various subsidiary expressions.\n> + */\n> +static void\n> +ExecInitJsonExpr(JsonExpr *jexpr, ExprState *state,\n> +\t\t\t\t Datum *resv, bool *resnull,\n> +\t\t\t\t ExprEvalStep *scratch)\n> +{\n> +\tJsonExprState *jsestate = palloc0(sizeof(JsonExprState));\n> 
+\tJsonExprPostEvalState *post_eval = &jsestate->post_eval;\n> +\tListCell *argexprlc;\n> +\tListCell *argnamelc;\n> +\tList\t *jumps_if_skip = NIL;\n> +\tList\t *jumps_to_coerce_finish = NIL;\n> +\tList\t *jumps_to_end = NIL;\n> +\tListCell *lc;\n> +\tExprEvalStep *as;\n> +\n> +\tjsestate->jsexpr = jexpr;\n> +\n> +\t/*\n> +\t * Evaluate formatted_expr storing the result into\n> +\t * jsestate->formatted_expr.\n> +\t */\n> +\tExecInitExprRec((Expr *) jexpr->formatted_expr, state,\n> +\t\t\t\t\t&jsestate->formatted_expr.value,\n> +\t\t\t\t\t&jsestate->formatted_expr.isnull);\n> +\n> +\t/* Steps to jump to end if formatted_expr evaluates to NULL */\n> +\tscratch->opcode = EEOP_JUMP_IF_NULL;\n> +\tscratch->resnull = &jsestate->formatted_expr.isnull;\n> +\tscratch->d.jump.jumpdone = -1;\t/* set below */\n> +\tjumps_if_skip = lappend_int(jumps_if_skip, state->steps_len);\n> +\tExprEvalPushStep(state, scratch);\n> +\n> +\t/*\n> +\t * Evaluate pathspec expression storing the result into\n> +\t * jsestate->pathspec.\n> +\t */\n> +\tExecInitExprRec((Expr *) jexpr->path_spec, state,\n> +\t\t\t\t\t&jsestate->pathspec.value,\n> +\t\t\t\t\t&jsestate->pathspec.isnull);\n> +\n> +\t/* Steps to JUMP to end if pathspec evaluates to NULL */\n> +\tscratch->opcode = EEOP_JUMP_IF_NULL;\n> +\tscratch->resnull = &jsestate->pathspec.isnull;\n> +\tscratch->d.jump.jumpdone = -1;\t/* set below */\n> +\tjumps_if_skip = lappend_int(jumps_if_skip, state->steps_len);\n> +\tExprEvalPushStep(state, scratch);\n> +\n> +\t/* Steps to compute PASSING args. */\n> +\tjsestate->args = NIL;\n> +\tforboth(argexprlc, jexpr->passing_values,\n> +\t\t\targnamelc, jexpr->passing_names)\n> +\t{\n> +\t\tExpr\t *argexpr = (Expr *) lfirst(argexprlc);\n> +\t\tString\t *argname = lfirst_node(String, argnamelc);\n> +\t\tJsonPathVariable *var = palloc(sizeof(*var));\n> +\n> +\t\tvar->name = argname->sval;\n> +\t\tvar->typid = exprType((Node *) argexpr);\n> +\t\tvar->typmod = exprTypmod((Node *) argexpr);\n> +\n> +\t\tExecInitExprRec((Expr *) argexpr, state, &var->value, &var->isnull);\n> +\n> +\t\tjsestate->args = lappend(jsestate->args, var);\n> +\t}\n> +\n> +\t/* Step for JsonPath* evaluation; see ExecEvalJsonExprPath(). 
*/\n> +\tscratch->opcode = EEOP_JSONEXPR_PATH;\n> +\tscratch->resvalue = resv;\n> +\tscratch->resnull = resnull;\n> +\tscratch->d.jsonexpr.jsestate = jsestate;\n> +\tExprEvalPushStep(state, scratch);\n> +\n> +\t/*\n> +\t * Step to jump to end when there's neither an error when evaluating\n> +\t * JsonPath* nor any need to coerce the result because it's already\n> +\t * of the specified type.\n> +\t */\n> +\tscratch->opcode = EEOP_JUMP;\n> +\tscratch->d.jump.jumpdone = -1;\t/* set below */\n> +\tjumps_to_end = lappend_int(jumps_to_end, state->steps_len);\n> +\tExprEvalPushStep(state, scratch);\n> +\n> +\t/*\n> +\t * Steps to coerce the result value computed by EEOP_JSONEXPR_PATH.\n> +\t * To handle coercion errors softly, use the following ErrorSaveContext\n> +\t * when initializing the coercion expressions, including any JsonCoercion\n> +\t * nodes.\n> +\t */\n> +\tjsestate->escontext.type = T_ErrorSaveContext;\n> +\tif (jexpr->result_coercion || jexpr->omit_quotes)\n> +\t{\n> +\t\tjsestate->jump_eval_result_coercion =\n> +\t\t\tExecInitJsonExprCoercion(state, jexpr->result_coercion,\n> +\t\t\t\t\t\t\t\t\t jexpr->on_error->btype != JSON_BEHAVIOR_ERROR ?\n> +\t\t\t\t\t\t\t\t\t &jsestate->escontext : NULL,\n> +\t\t\t\t\t\t\t\t\t resv, resnull);\n> +\t}\n> +\telse\n> +\t\tjsestate->jump_eval_result_coercion = -1;\n> +\n> +\t/* Steps for coercing JsonItemType values returned by JsonPathValue(). */\n> +\tif (jexpr->item_coercions)\n> +\t{\n> +\t\t/*\n> +\t\t * Jump to COERCION_FINISH to skip over the following steps if\n> +\t\t * result_coercion is present.\n> +\t\t */\n> +\t\tif (jsestate->jump_eval_result_coercion >= 0)\n> +\t\t{\n> +\t\t\tscratch->opcode = EEOP_JUMP;\n> +\t\t\tscratch->d.jump.jumpdone = -1;\t/* set below */\n> +\t\t\tjumps_to_coerce_finish = lappend_int(jumps_to_coerce_finish,\n> +\t\t\t\t\t\t\t\t\t\t\t\t state->steps_len);\n> +\t\t\tExprEvalPushStep(state, scratch);\n> +\t\t}\n> +\n> +\t\t/*\n> +\t\t * Here we create the steps for each JsonItemType type's coercion\n> +\t\t * expression and also store a flag whether the expression is\n> +\t\t * a JsonCoercion node. ExecPrepareJsonItemCoercion() called by\n> +\t\t * ExecEvalJsonExprPath() will map a given JsonbValue returned by\n> +\t\t * JsonPathValue() to its JsonItemType's expression's step address\n> +\t\t * and the flag by indexing the following arrays with JsonItemType\n> +\t\t * enum value.\n> +\t\t */\n> +\t\tjsestate->num_item_coercions = list_length(jexpr->item_coercions);\n> +\t\tjsestate->eval_item_coercion_jumps = (int *)\n> +\t\t\tpalloc(jsestate->num_item_coercions * sizeof(int));\n> +\t\tjsestate->item_coercion_via_expr = (bool *)\n> +\t\t\tpalloc0(jsestate->num_item_coercions * sizeof(bool));\n> +\t\tforeach(lc, jexpr->item_coercions)\n> +\t\t{\n> +\t\t\tJsonItemCoercion *item_coercion = lfirst(lc);\n> +\t\t\tNode *coercion = item_coercion->coercion;\n> +\n> +\t\t\tjsestate->item_coercion_via_expr[item_coercion->item_type] =\n> +\t\t\t\t(coercion != NULL && !IsA(coercion, JsonCoercion));\n> +\t\t\tjsestate->eval_item_coercion_jumps[item_coercion->item_type] =\n> +\t\t\t\tExecInitJsonExprCoercion(state, coercion,\n> +\t\t\t\t\t\t\t\t\t\t jexpr->on_error->btype != JSON_BEHAVIOR_ERROR ?\n> +\t\t\t\t\t\t\t\t\t\t &jsestate->escontext : NULL,\n> +\t\t\t\t\t\t\t\t\t\t resv, resnull);\n> +\n> +\t\t\t/* Emit JUMP step to skip past other coercions' steps. 
*/\n> +\t\t\tscratch->opcode = EEOP_JUMP;\n> +\t\t\tscratch->d.jump.jumpdone = -1;\t/* set below */\n> +\t\t\tjumps_to_coerce_finish = lappend_int(jumps_to_coerce_finish,\n> +\t\t\t\t\t\t\t\t\t\t\t\t state->steps_len);\n> +\t\t\tExprEvalPushStep(state, scratch);\n> +\t\t}\n> +\t}\n> +\n> +\t/*\n> +\t * Add step to reset the ErrorSaveContext and set error flag if the\n> +\t * coercion steps encountered an error but was not thrown because of the\n> +\t * ON ERROR behavior.\n> +\t */\n> +\tif (jexpr->result_coercion || jexpr->item_coercions)\n> +\t{\n> +\t\tforeach(lc, jumps_to_coerce_finish)\n> +\t\t{\n> +\t\t\tas = &state->steps[lfirst_int(lc)];\n> +\t\t\tas->d.jump.jumpdone = state->steps_len;\n> +\t\t}\n> +\n> +\t\tscratch->opcode = EEOP_JSONEXPR_COERCION_FINISH;\n> +\t\tscratch->d.jsonexpr.jsestate = jsestate;\n> +\t\tExprEvalPushStep(state, scratch);\n> +\t}\n> +\n> +\t/*\n> +\t * Step to handle ON ERROR behaviors. This handles both the errors\n> +\t * that occur during EEOP_JSONEXPR_PATH evaluation and subsequent coercion\n> +\t * evaluation.\n> +\t */\n> +\tjsestate->jump_error = -1;\n> +\tif (jexpr->on_error &&\n> +\t\tjexpr->on_error->btype != JSON_BEHAVIOR_ERROR)\n> +\t{\n> +\t\tjsestate->jump_error = state->steps_len;\n> +\t\tscratch->opcode = EEOP_JUMP_IF_NOT_TRUE;\n> +\n> +\t\t/*\n> +\t\t * post_eval.error is set by ExecEvalJsonExprPath() and\n> +\t\t * ExecEvalJsonCoercionFinish().\n> +\t\t */\n> +\t\tscratch->resvalue = &post_eval->error.value;\n> +\t\tscratch->resnull = &post_eval->error.isnull;\n> +\n> +\t\tscratch->d.jump.jumpdone = -1;\t/* set below */\n> +\t\tExprEvalPushStep(state, scratch);\n> +\n> +\t\t/* Steps to evaluate the ON ERROR expression */\n> +\t\tExecInitExprRec((Expr *) jexpr->on_error->expr,\n> +\t\t\t\t\t\tstate, resv, resnull);\n> +\n> +\t\t/* Steps to coerce the ON ERROR expression if needed */\n> +\t\tif (jexpr->on_error->coercion)\n> +\t\t\tExecInitExprRec((Expr *) jexpr->on_error->coercion, state,\n> +\t\t\t\t\t\t\t resv, resnull);\n> +\n> +\t\tjumps_to_end = lappend_int(jumps_to_end, state->steps_len);\n> +\t\tscratch->opcode = EEOP_JUMP;\n> +\t\tscratch->d.jump.jumpdone = -1;\n> +\t\tExprEvalPushStep(state, scratch);\n> +\t}\n> +\n> +\t/* Step to handle ON EMPTY behaviors. 
*/\n> +\tif (jexpr->on_empty != NULL &&\n> +\t\tjexpr->on_empty->btype != JSON_BEHAVIOR_ERROR)\n> +\t{\n> +\t\t/*\n> +\t\t * Make the ON ERROR behavior JUMP to here after checking the error\n> +\t\t * and if it's not present then make EEOP_JSONEXPR_PATH directly\n> +\t\t * jump here.\n> +\t\t */\n> +\t\tif (jsestate->jump_error >= 0)\n> +\t\t{\n> +\t\t\tas = &state->steps[jsestate->jump_error];\n> +\t\t\tas->d.jump.jumpdone = state->steps_len;\n> +\t\t}\n> +\t\telse\n> +\t\t\tjsestate->jump_error = state->steps_len;\n> +\n> +\t\tscratch->opcode = EEOP_JUMP_IF_NOT_TRUE;\n> +\t\tscratch->resvalue = &post_eval->empty.value;\n> +\t\tscratch->resnull = &post_eval->empty.isnull;\n> +\t\tscratch->d.jump.jumpdone = -1;\t/* set below */\n> +\t\tjumps_to_end = lappend_int(jumps_to_end, state->steps_len);\n> +\t\tExprEvalPushStep(state, scratch);\n> +\n> +\t\t/* Steps to evaluate the ON EMPTY expression */\n> +\t\tExecInitExprRec((Expr *) jexpr->on_empty->expr,\n> +\t\t\t\t\t\tstate, resv, resnull);\n> +\n> +\t\t/* Steps to coerce the ON EMPTY expression if needed */\n> +\t\tif (jexpr->on_empty->coercion)\n> +\t\t\tExecInitExprRec((Expr *) jexpr->on_empty->coercion, state,\n> +\t\t\t\t\t\t\t resv, resnull);\n> +\n> +\t\tscratch->opcode = EEOP_JUMP;\n> +\t\tscratch->d.jump.jumpdone = -1;\t/* set below */\n> +\t\tjumps_to_end = lappend_int(jumps_to_end, state->steps_len);\n> +\t\tExprEvalPushStep(state, scratch);\n> +\t}\n> +\t/* Make EEOP_JSONEXPR_PATH jump to end if no ON EMPTY clause present. */\n> +\telse if (jsestate->jump_error >= 0)\n> +\t\tjumps_to_end = lappend_int(jumps_to_end, jsestate->jump_error);\n> +\n> +\t/*\n> +\t * If neither ON ERROR nor ON EMPTY jumps present, then add one to go\n> +\t * to end.\n> +\t */\n> +\tif (jsestate->jump_error < 0)\n> +\t{\n> +\t\tscratch->opcode = EEOP_JUMP;\n> +\t\tscratch->d.jump.jumpdone = -1;\t/* set below */\n> +\t\tjumps_to_end = lappend_int(jumps_to_end, state->steps_len);\n> +\t\tExprEvalPushStep(state, scratch);\n> +\t}\n> +\n> +\t/* Return NULL when either formatted_expr or pathspec is NULL. */\n> +\tforeach(lc, jumps_if_skip)\n> +\t{\n> +\t\tas = &state->steps[lfirst_int(lc)];\n> +\t\tas->d.jump.jumpdone = state->steps_len;\n> +\t}\n> +\tscratch->opcode = EEOP_CONST;\n> +\tscratch->resvalue = resv;\n> +\tscratch->resnull = resnull;\n> +\tscratch->d.constval.value = (Datum) 0;\n> +\tscratch->d.constval.isnull = true;\n> +\tExprEvalPushStep(state, scratch);\n> +\n> +\t/* Jump to coerce the NULL using result_coercion is present. */\n> +\tif (jsestate->jump_eval_result_coercion >= 0)\n> +\t{\n> +\t\tscratch->opcode = EEOP_JUMP;\n> +\t\tscratch->d.jump.jumpdone = jsestate->jump_eval_result_coercion;\n> +\t\tExprEvalPushStep(state, scratch);\n> +\t}\n> +\n> +\tforeach(lc, jumps_to_end)\n> +\t{\n> +\t\tas = &state->steps[lfirst_int(lc)];\n> +\t\tas->d.jump.jumpdone = state->steps_len;\n> +\t}\n> +}\n> +\n> +/* Initialize one JsonCoercion for execution. */\n> +static int\n> +ExecInitJsonExprCoercion(ExprState *state, Node *coercion,\n> +\t\t\t\t\t\t ErrorSaveContext *escontext,\n> +\t\t\t\t\t\t Datum *resv, bool *resnull)\n> +{\n> +\tint\t\t\tjump_eval_coercion;\n> +\tDatum\t *save_innermost_caseval;\n> +\tbool\t *save_innermost_casenull;\n> +\tErrorSaveContext *save_escontext;\n> +\n> +\tif (coercion == NULL)\n> +\t\treturn -1;\n> +\n> +\tjump_eval_coercion = state->steps_len;\n> +\n> +\t/* Push step(s) to compute cstate->coercion. 
*/\n> +\tsave_innermost_caseval = state->innermost_caseval;\n> +\tsave_innermost_casenull = state->innermost_casenull;\n> +\tsave_escontext = state->escontext;\n> +\n> +\tstate->innermost_caseval = resv;\n> +\tstate->innermost_casenull = resnull;\n> +\tstate->escontext = escontext;\n> +\n> +\tExecInitExprRec((Expr *) coercion, state, resv, resnull);\n> +\n> +\tstate->innermost_caseval = save_innermost_caseval;\n> +\tstate->innermost_casenull = save_innermost_casenull;\n> +\tstate->escontext = save_escontext;\n> +\n> +\treturn jump_eval_coercion;\n> +}\n> diff --git a/src/backend/executor/execExprInterp.c b/src/backend/executor/execExprInterp.c\n> index d5db96444c..a18662cbf9 100644\n> --- a/src/backend/executor/execExprInterp.c\n> +++ b/src/backend/executor/execExprInterp.c\n> @@ -73,8 +73,8 @@\n> #include \"utils/datum.h\"\n> #include \"utils/expandedrecord.h\"\n> #include \"utils/json.h\"\n> -#include \"utils/jsonb.h\"\n> #include \"utils/jsonfuncs.h\"\n> +#include \"utils/jsonpath.h\"\n> #include \"utils/lsyscache.h\"\n> #include \"utils/memutils.h\"\n> #include \"utils/timestamp.h\"\n> @@ -181,6 +181,10 @@ static pg_attribute_always_inline void ExecAggPlainTransByRef(AggState *aggstate\n> \t\t\t\t\t\t\t\t\t\t\t\t\t\t\t AggStatePerGroup pergroup,\n> \t\t\t\t\t\t\t\t\t\t\t\t\t\t\t ExprContext *aggcontext,\n> \t\t\t\t\t\t\t\t\t\t\t\t\t\t\t int setno);\n> +static void ExecPrepareJsonItemCoercion(JsonbValue *item, JsonExprState *jsestate,\n> +\t\t\t\t\t\t\tbool throw_error,\n> +\t\t\t\t\t\t\tint *jump_eval_item_coercion,\n> +\t\t\t\t\t\t\tDatum *resvalue, bool *resnull);\n> \n> /*\n> * ScalarArrayOpExprHashEntry\n> @@ -482,6 +486,9 @@ ExecInterpExpr(ExprState *state, ExprContext *econtext, bool *isnull)\n> \t\t&&CASE_EEOP_XMLEXPR,\n> \t\t&&CASE_EEOP_JSON_CONSTRUCTOR,\n> \t\t&&CASE_EEOP_IS_JSON,\n> +\t\t&&CASE_EEOP_JSONEXPR_PATH,\n> +\t\t&&CASE_EEOP_JSONEXPR_COERCION,\n> +\t\t&&CASE_EEOP_JSONEXPR_COERCION_FINISH,\n> \t\t&&CASE_EEOP_AGGREF,\n> \t\t&&CASE_EEOP_GROUPING_FUNC,\n> \t\t&&CASE_EEOP_WINDOW_FUNC,\n> @@ -1551,6 +1558,35 @@ ExecInterpExpr(ExprState *state, ExprContext *econtext, bool *isnull)\n> \t\t\tEEO_NEXT();\n> \t\t}\n> \n> +\t\tEEO_CASE(EEOP_JSONEXPR_PATH)\n> +\t\t{\n> +\t\t\tJsonExprState *jsestate = op->d.jsonexpr.jsestate;\n> +\n> +\t\t\t/* too complex for an inline implementation */\n> +\t\t\tif (!ExecEvalJsonExprPath(state, op, econtext))\n> +\t\t\t\tEEO_JUMP(jsestate->jump_error);\n> +\t\t\telse if (jsestate->post_eval.jump_eval_coercion >= 0)\n> +\t\t\t\tEEO_JUMP(jsestate->post_eval.jump_eval_coercion);\n> +\n> +\t\t\tEEO_NEXT();\n> +\t\t}\n\nWhy do we need post_eval.jump_eval_coercion? Seems like that could more\ncleanly be implemented by just emitting a constant JUMP step? Oh, I see -\nyou're changing post_eval.jump_eval_coercion at runtime. This seems like a\nBAD idea. 
I strongly suggest that instead of modifying the field, you instead\nreturn the target jump step as a return value from ExecEvalJsonExprPath or\nsuch.\n\n\n> +/*\n> + * Performs JsonPath{Exists|Query|Value}() for given context item and JSON\n> + * path.\n> + *\n> + * Result is set in *op->resvalue and *op->resnull.\n> + *\n> + * On return, JsonExprPostEvalState is populated with the following details:\n> + *\t- jump_eval_coercion: step address of coercion to apply to the result\n> + *\t- error.value: true if an error occurred during JsonPath evaluation\n> + *\t- empty.value: true if JsonPath{Query|Value}() found no matching item\n> + *\n> + * No return if the ON ERROR/EMPTY behavior is ERROR.\n> + */\n> +bool\n> +ExecEvalJsonExprPath(ExprState *state, ExprEvalStep *op,\n> +\t\t\t\t\t ExprContext *econtext)\n> +{\n> +\tJsonExprState *jsestate = op->d.jsonexpr.jsestate;\n> +\tJsonExprPostEvalState *post_eval = &jsestate->post_eval;\n> +\tJsonExpr *jexpr = jsestate->jsexpr;\n> +\tDatum\t\titem;\n> +\tJsonPath *path;\n> +\tbool\t\tthrow_error = (jexpr->on_error->btype == JSON_BEHAVIOR_ERROR);\n\nWhat's the deal with the parentheses here and in similar places below? There's\nno danger of ambiguity without, no?\n\n\n\n> +\t\tcase JSON_VALUE_OP:\n> +\t\t\t{\n> +\t\t\t\tJsonbValue *jbv = JsonPathValue(item, path, &empty,\n> +\t\t\t\t\t\t\t\t\t\t\t\t!throw_error ? &error : NULL,\n> +\t\t\t\t\t\t\t\t\t\t\t\tjsestate->args);\n> +\n> +\t\t\t\t/* Might get overridden by an item coercion below. */\n> +\t\t\t\tpost_eval->jump_eval_coercion = jsestate->jump_eval_result_coercion;\n> +\t\t\t\tif (jbv == NULL)\n> +\t\t\t\t{\n> +\t\t\t\t\t/* Will be coerced with result_coercion. */\n> +\t\t\t\t\t*op->resvalue = (Datum) 0;\n> +\t\t\t\t\t*op->resnull = true;\n> +\t\t\t\t}\n> +\t\t\t\telse if (!error && !empty)\n> +\t\t\t\t{\n> +\t\t\t\t\t/*\n> +\t\t\t\t\t * If the requested output type is json(b), use\n> +\t\t\t\t\t * result_coercion to do the coercion.\n> +\t\t\t\t\t */\n> +\t\t\t\t\tif (jexpr->returning->typid == JSONOID ||\n> +\t\t\t\t\t\tjexpr->returning->typid == JSONBOID)\n> +\t\t\t\t\t{\n> +\t\t\t\t\t\t*op->resvalue = JsonbPGetDatum(JsonbValueToJsonb(jbv));\n> +\t\t\t\t\t\t*op->resnull = false;\n> +\t\t\t\t\t}\n> +\t\t\t\t\telse\n> +\t\t\t\t\t{\n> +\t\t\t\t\t\t/*\n> +\t\t\t\t\t\t * Else, use one of the item_coercions.\n> +\t\t\t\t\t\t *\n> +\t\t\t\t\t\t * Error out if no cast expression exists.\n> +\t\t\t\t\t\t */\n> +\t\t\t\t\t\tExecPrepareJsonItemCoercion(jbv, jsestate, throw_error,\n> +\t\t\t\t\t\t\t\t\t\t\t\t\t&post_eval->jump_eval_coercion,\n> +\t\t\t\t\t\t\t\t\t\t\t\t\top->resvalue, op->resnull);\n\n\n> +\tif (empty)\n> +\t{\n> +\t\tif (jexpr->on_empty)\n> +\t\t{\n> +\t\t\tif (jexpr->on_empty->btype == JSON_BEHAVIOR_ERROR)\n> +\t\t\t\tereport(ERROR,\n> +\t\t\t\t\t\t(errcode(ERRCODE_NO_SQL_JSON_ITEM),\n> +\t\t\t\t\t\t errmsg(\"no SQL/JSON item\")));\n\nNo need for the parens around ereport() arguments anymore. 
Same in a few other places.\n\n\n\n> diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y\n> index d631ac89a9..4f92d000ec 100644\n> --- a/src/backend/parser/gram.y\n> +++ b/src/backend/parser/gram.y\n> @@ -650,11 +650,18 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);\n> \t\t\t\tjson_returning_clause_opt\n> \t\t\t\tjson_name_and_value\n> \t\t\t\tjson_aggregate_func\n> +\t\t\t\tjson_argument\n> +\t\t\t\tjson_behavior\n> %type <list>\tjson_name_and_value_list\n> \t\t\t\tjson_value_expr_list\n> \t\t\t\tjson_array_aggregate_order_by_clause_opt\n> +\t\t\t\tjson_arguments\n> +\t\t\t\tjson_behavior_clause_opt\n> +\t\t\t\tjson_passing_clause_opt\n> %type <ival>\tjson_encoding_clause_opt\n> \t\t\t\tjson_predicate_type_constraint\n> +\t\t\t\tjson_quotes_clause_opt\n> +\t\t\t\tjson_wrapper_behavior\n> %type <boolean>\tjson_key_uniqueness_constraint_opt\n> \t\t\t\tjson_object_constructor_null_clause_opt\n> \t\t\t\tjson_array_constructor_null_clause_opt\n> @@ -695,7 +702,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);\n> \tCACHE CALL CALLED CASCADE CASCADED CASE CAST CATALOG_P CHAIN CHAR_P\n> \tCHARACTER CHARACTERISTICS CHECK CHECKPOINT CLASS CLOSE\n> \tCLUSTER COALESCE COLLATE COLLATION COLUMN COLUMNS COMMENT COMMENTS COMMIT\n> -\tCOMMITTED COMPRESSION CONCURRENTLY CONFIGURATION CONFLICT\n> +\tCOMMITTED COMPRESSION CONCURRENTLY CONDITIONAL CONFIGURATION CONFLICT\n> \tCONNECTION CONSTRAINT CONSTRAINTS CONTENT_P CONTINUE_P CONVERSION_P COPY\n> \tCOST CREATE CROSS CSV CUBE CURRENT_P\n> \tCURRENT_CATALOG CURRENT_DATE CURRENT_ROLE CURRENT_SCHEMA\n> @@ -706,8 +713,8 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);\n> \tDETACH DICTIONARY DISABLE_P DISCARD DISTINCT DO DOCUMENT_P DOMAIN_P\n> \tDOUBLE_P DROP\n> \n> -\tEACH ELSE ENABLE_P ENCODING ENCRYPTED END_P ENUM_P ESCAPE EVENT EXCEPT\n> -\tEXCLUDE EXCLUDING EXCLUSIVE EXECUTE EXISTS EXPLAIN EXPRESSION\n> +\tEACH ELSE EMPTY_P ENABLE_P ENCODING ENCRYPTED END_P ENUM_P ERROR_P ESCAPE\n> +\tEVENT EXCEPT EXCLUDE EXCLUDING EXCLUSIVE EXECUTE EXISTS EXPLAIN EXPRESSION\n> \tEXTENSION EXTERNAL EXTRACT\n> \n> \tFALSE_P FAMILY FETCH FILTER FINALIZE FIRST_P FLOAT_P FOLLOWING FOR\n> @@ -722,10 +729,10 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);\n> \tINNER_P INOUT INPUT_P INSENSITIVE INSERT INSTEAD INT_P INTEGER\n> \tINTERSECT INTERVAL INTO INVOKER IS ISNULL ISOLATION\n> \n> -\tJOIN JSON JSON_ARRAY JSON_ARRAYAGG JSON_OBJECT JSON_OBJECTAGG\n> -\tJSON_SCALAR JSON_SERIALIZE\n> +\tJOIN JSON JSON_ARRAY JSON_ARRAYAGG JSON_EXISTS JSON_OBJECT JSON_OBJECTAGG\n> +\tJSON_QUERY JSON_SCALAR JSON_SERIALIZE JSON_VALUE\n> \n> -\tKEY KEYS\n> +\tKEEP KEY KEYS\n> \n> \tLABEL LANGUAGE LARGE_P LAST_P LATERAL_P\n> \tLEADING LEAKPROOF LEAST LEFT LEVEL LIKE LIMIT LISTEN LOAD LOCAL\n> @@ -739,7 +746,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);\n> \tNOT NOTHING NOTIFY NOTNULL NOWAIT NULL_P NULLIF\n> \tNULLS_P NUMERIC\n> \n> -\tOBJECT_P OF OFF OFFSET OIDS OLD ON ONLY OPERATOR OPTION OPTIONS OR\n> +\tOBJECT_P OF OFF OFFSET OIDS OLD OMIT ON ONLY OPERATOR OPTION OPTIONS OR\n> \tORDER ORDINALITY OTHERS OUT_P OUTER_P\n> \tOVER OVERLAPS OVERLAY OVERRIDING OWNED OWNER\n> \n> @@ -748,7 +755,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);\n> \tPOSITION PRECEDING PRECISION PRESERVE PREPARE PREPARED PRIMARY\n> \tPRIOR PRIVILEGES PROCEDURAL PROCEDURE PROCEDURES PROGRAM 
PUBLICATION\n> \n> -\tQUOTE\n> +\tQUOTE QUOTES\n> \n> \tRANGE READ REAL REASSIGN RECHECK RECURSIVE REF_P REFERENCES REFERENCING\n> \tREFRESH REINDEX RELATIVE_P RELEASE RENAME REPEATABLE REPLACE REPLICA\n> @@ -759,7 +766,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);\n> \tSEQUENCE SEQUENCES\n> \tSERIALIZABLE SERVER SESSION SESSION_USER SET SETS SETOF SHARE SHOW\n> \tSIMILAR SIMPLE SKIP SMALLINT SNAPSHOT SOME SQL_P STABLE STANDALONE_P\n> -\tSTART STATEMENT STATISTICS STDIN STDOUT STORAGE STORED STRICT_P STRIP_P\n> +\tSTART STATEMENT STATISTICS STDIN STDOUT STORAGE STORED STRICT_P STRING_P STRIP_P\n> \tSUBSCRIPTION SUBSTRING SUPPORT SYMMETRIC SYSID SYSTEM_P SYSTEM_USER\n> \n> \tTABLE TABLES TABLESAMPLE TABLESPACE TEMP TEMPLATE TEMPORARY TEXT_P THEN\n> @@ -767,7 +774,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);\n> \tTREAT TRIGGER TRIM TRUE_P\n> \tTRUNCATE TRUSTED TYPE_P TYPES_P\n> \n> -\tUESCAPE UNBOUNDED UNCOMMITTED UNENCRYPTED UNION UNIQUE UNKNOWN\n> +\tUESCAPE UNBOUNDED UNCONDITIONAL UNCOMMITTED UNENCRYPTED UNION UNIQUE UNKNOWN\n> \tUNLISTEN UNLOGGED UNTIL UPDATE USER USING\n> \n> \tVACUUM VALID VALIDATE VALIDATOR VALUE_P VALUES VARCHAR VARIADIC VARYING\n> @@ -15768,6 +15775,60 @@ func_expr_common_subexpr:\n> \t\t\t\t\tn->location = @1;\n> \t\t\t\t\t$$ = (Node *) n;\n> \t\t\t\t}\n> +\t\t\t| JSON_QUERY '('\n> +\t\t\t\tjson_value_expr ',' a_expr json_passing_clause_opt\n> +\t\t\t\tjson_returning_clause_opt\n> +\t\t\t\tjson_wrapper_behavior\n> +\t\t\t\tjson_quotes_clause_opt\n> +\t\t\t\tjson_behavior_clause_opt\n> +\t\t\t')'\n> +\t\t\t\t{\n> +\t\t\t\t\tJsonFuncExpr *n = makeNode(JsonFuncExpr);\n> +\n> +\t\t\t\t\tn->op = JSON_QUERY_OP;\n> +\t\t\t\t\tn->context_item = (JsonValueExpr *) $3;\n> +\t\t\t\t\tn->pathspec = $5;\n> +\t\t\t\t\tn->passing = $6;\n> +\t\t\t\t\tn->output = (JsonOutput *) $7;\n> +\t\t\t\t\tn->wrapper = $8;\n> +\t\t\t\t\tn->quotes = $9;\n> +\t\t\t\t\tn->behavior = $10;\n> +\t\t\t\t\tn->location = @1;\n> +\t\t\t\t\t$$ = (Node *) n;\n> +\t\t\t\t}\n> +\t\t\t| JSON_EXISTS '('\n> +\t\t\t\tjson_value_expr ',' a_expr json_passing_clause_opt\n> +\t\t\t\tjson_behavior_clause_opt\n> +\t\t\t')'\n> +\t\t\t\t{\n> +\t\t\t\t\tJsonFuncExpr *n = makeNode(JsonFuncExpr);\n> +\n> +\t\t\t\t\tn->op = JSON_EXISTS_OP;\n> +\t\t\t\t\tn->context_item = (JsonValueExpr *) $3;\n> +\t\t\t\t\tn->pathspec = $5;\n> +\t\t\t\t\tn->passing = $6;\n> +\t\t\t\t\tn->output = NULL;\n> +\t\t\t\t\tn->behavior = $7;\n> +\t\t\t\t\tn->location = @1;\n> +\t\t\t\t\t$$ = (Node *) n;\n> +\t\t\t\t}\n> +\t\t\t| JSON_VALUE '('\n> +\t\t\t\tjson_value_expr ',' a_expr json_passing_clause_opt\n> +\t\t\t\tjson_returning_clause_opt\n> +\t\t\t\tjson_behavior_clause_opt\n> +\t\t\t')'\n> +\t\t\t\t{\n> +\t\t\t\t\tJsonFuncExpr *n = makeNode(JsonFuncExpr);\n> +\n> +\t\t\t\t\tn->op = JSON_VALUE_OP;\n> +\t\t\t\t\tn->context_item = (JsonValueExpr *) $3;\n> +\t\t\t\t\tn->pathspec = $5;\n> +\t\t\t\t\tn->passing = $6;\n> +\t\t\t\t\tn->output = (JsonOutput *) $7;\n> +\t\t\t\t\tn->behavior = $8;\n> +\t\t\t\t\tn->location = @1;\n> +\t\t\t\t\t$$ = (Node *) n;\n> +\t\t\t\t}\n> \t\t\t;\n> \n> \n> @@ -16494,6 +16555,27 @@ opt_asymmetric: ASYMMETRIC\n> \t\t;\n> \n> /* SQL/JSON support */\n> +json_passing_clause_opt:\n> +\t\t\tPASSING json_arguments\t\t\t\t\t{ $$ = $2; }\n> +\t\t\t| /*EMPTY*/\t\t\t\t\t\t\t\t{ $$ = NIL; }\n> +\t\t;\n> +\n> +json_arguments:\n> +\t\t\tjson_argument\t\t\t\t\t\t\t{ $$ = list_make1($1); }\n> +\t\t\t| json_arguments ',' json_argument\t\t{ $$ = 
lappend($1, $3); }\n> +\t\t;\n> +\n> +json_argument:\n> +\t\t\tjson_value_expr AS ColLabel\n> +\t\t\t{\n> +\t\t\t\tJsonArgument *n = makeNode(JsonArgument);\n> +\n> +\t\t\t\tn->val = (JsonValueExpr *) $1;\n> +\t\t\t\tn->name = $3;\n> +\t\t\t\t$$ = (Node *) n;\n> +\t\t\t}\n> +\t\t;\n> +\n> json_value_expr:\n> \t\t\ta_expr json_format_clause_opt\n> \t\t\t{\n> @@ -16519,6 +16601,27 @@ json_encoding_clause_opt:\n> \t\t\t| /* EMPTY */\t\t\t\t\t{ $$ = JS_ENC_DEFAULT; }\n> \t\t;\n> \n> +/* ARRAY is a noise word */\n> +json_wrapper_behavior:\n> +\t\t\t WITHOUT WRAPPER\t\t\t\t\t{ $$ = JSW_NONE; }\n> +\t\t\t| WITHOUT ARRAY\tWRAPPER\t\t\t\t{ $$ = JSW_NONE; }\n> +\t\t\t| WITH WRAPPER\t\t\t\t\t\t{ $$ = JSW_UNCONDITIONAL; }\n> +\t\t\t| WITH ARRAY WRAPPER\t\t\t\t{ $$ = JSW_UNCONDITIONAL; }\n> +\t\t\t| WITH CONDITIONAL ARRAY WRAPPER\t{ $$ = JSW_CONDITIONAL; }\n> +\t\t\t| WITH UNCONDITIONAL ARRAY WRAPPER\t{ $$ = JSW_UNCONDITIONAL; }\n> +\t\t\t| WITH CONDITIONAL WRAPPER\t\t\t{ $$ = JSW_CONDITIONAL; }\n> +\t\t\t| WITH UNCONDITIONAL WRAPPER\t\t{ $$ = JSW_UNCONDITIONAL; }\n> +\t\t\t| /* empty */\t\t\t\t\t\t{ $$ = JSW_NONE; }\n> +\t\t;\n> +\n> +json_quotes_clause_opt:\n> +\t\t\tKEEP QUOTES ON SCALAR STRING_P\t\t{ $$ = JS_QUOTES_KEEP; }\n> +\t\t\t| KEEP QUOTES\t\t\t\t\t\t{ $$ = JS_QUOTES_KEEP; }\n> +\t\t\t| OMIT QUOTES ON SCALAR STRING_P\t{ $$ = JS_QUOTES_OMIT; }\n> +\t\t\t| OMIT QUOTES\t\t\t\t\t\t{ $$ = JS_QUOTES_OMIT; }\n> +\t\t\t| /* EMPTY */\t\t\t\t\t\t{ $$ = JS_QUOTES_UNSPEC; }\n> +\t\t;\n> +\n> json_returning_clause_opt:\n> \t\t\tRETURNING Typename json_format_clause_opt\n> \t\t\t\t{\n> @@ -16532,6 +16635,39 @@ json_returning_clause_opt:\n> \t\t\t| /* EMPTY */\t\t\t\t\t\t\t{ $$ = NULL; }\n> \t\t;\n> \n> +json_behavior:\n> +\t\t\tDEFAULT a_expr\n> +\t\t\t\t{ $$ = (Node *) makeJsonBehavior(JSON_BEHAVIOR_DEFAULT, $2, NULL, @1); }\n> +\t\t\t| ERROR_P\n> +\t\t\t\t{ $$ = (Node *) makeJsonBehavior(JSON_BEHAVIOR_ERROR, NULL, NULL, @1); }\n> +\t\t\t| NULL_P\n> +\t\t\t\t{ $$ = (Node *) makeJsonBehavior(JSON_BEHAVIOR_NULL, NULL, NULL, @1); }\n> +\t\t\t| TRUE_P\n> +\t\t\t\t{ $$ = (Node *) makeJsonBehavior(JSON_BEHAVIOR_TRUE, NULL, NULL, @1); }\n> +\t\t\t| FALSE_P\n> +\t\t\t\t{ $$ = (Node *) makeJsonBehavior(JSON_BEHAVIOR_FALSE, NULL, NULL, @1); }\n> +\t\t\t| UNKNOWN\n> +\t\t\t\t{ $$ = (Node *) makeJsonBehavior(JSON_BEHAVIOR_UNKNOWN, NULL, NULL, @1); }\n> +\t\t\t| EMPTY_P ARRAY\n> +\t\t\t\t{ $$ = (Node *) makeJsonBehavior(JSON_BEHAVIOR_EMPTY_ARRAY, NULL, NULL, @1); }\n> +\t\t\t| EMPTY_P OBJECT_P\n> +\t\t\t\t{ $$ = (Node *) makeJsonBehavior(JSON_BEHAVIOR_EMPTY_OBJECT, NULL, NULL, @1); }\n> +\t\t\t/* non-standard, for Oracle compatibility only */\n> +\t\t\t| EMPTY_P\n> +\t\t\t\t{ $$ = (Node *) makeJsonBehavior(JSON_BEHAVIOR_EMPTY_ARRAY, NULL, NULL, @1); }\n> +\t\t;\n\nSeems like this would look better if you made json_behavior return just the\nenum values and had one makeJsonBehavior() at the place referencing it?\n\n\n> +json_behavior_clause_opt:\n> +\t\t\tjson_behavior ON EMPTY_P\n> +\t\t\t\t{ $$ = list_make2($1, NULL); }\n> +\t\t\t| json_behavior ON ERROR_P\n> +\t\t\t\t{ $$ = list_make2(NULL, $1); }\n> +\t\t\t| json_behavior ON EMPTY_P json_behavior ON ERROR_P\n> +\t\t\t\t{ $$ = list_make2($1, $4); }\n> +\t\t\t| /* EMPTY */\n> +\t\t\t\t{ $$ = list_make2(NULL, NULL); }\n> +\t\t;\n\nThis seems like an odd representation - why represent the behavior as a two\nelement list where one needs to know what is stored at which list offset?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 8 Dec 2023 09:30:14 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "Hi.\nfunction JsonPathExecResult comment needs to be refactored? since it\nchanged a lot.\n\n\n",
"msg_date": "Sat, 9 Dec 2023 13:05:34 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "Hi. small issues I found...\n\ntypo:\n+-- Test mutabilily od query functions\n\n+ default:\n+ ereport(ERROR,\n+ (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n+ errmsg(\"only datetime, bool, numeric, and text types can be casted\nto jsonpath types\")));\n\ntransformJsonPassingArgs's function: transformJsonValueExpr will make\nthe above code unreached.\nalso based on the `switch (typid)` cases,\nI guess best message would be\nerrmsg(\"only datetime, bool, numeric, text, json, jsonb types can be\ncasted to jsonpath types\")));\n\n+ case JSON_QUERY_OP:\n+ jsexpr->wrapper = func->wrapper;\n+ jsexpr->omit_quotes = (func->quotes == JS_QUOTES_OMIT);\n+\n+ if (!OidIsValid(jsexpr->returning->typid))\n+ {\n+ JsonReturning *ret = jsexpr->returning;\n+\n+ ret->typid = JsonFuncExprDefaultReturnType(jsexpr);\n+ ret->typmod = -1;\n+ }\n+ jsexpr->result_coercion = coerceJsonFuncExprOutput(pstate, jsexpr);\n\nI noticed, if (!OidIsValid(jsexpr->returning->typid)) is the true\nfunction JsonFuncExprDefaultReturnType may be called twice, not sure\nif it's good or not..\n\n\n",
"msg_date": "Wed, 13 Dec 2023 17:59:16 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "Thanks for the review.\n\nOn Sat, Dec 9, 2023 at 2:30 AM Andres Freund <[email protected]> wrote:\n> On 2023-12-07 21:07:59 +0900, Amit Langote wrote:\n> > --- a/src/include/executor/execExpr.h\n> > +++ b/src/include/executor/execExpr.h\n> > @@ -547,6 +549,7 @@ typedef struct ExprEvalStep\n> > bool *checknull;\n> > /* OID of domain type */\n> > Oid resulttype;\n> > + ErrorSaveContext *escontext;\n> > } domaincheck;\n> >\n> > diff --git a/src/include/nodes/execnodes.h b/src/include/nodes/execnodes.h\n> > index 5d7f17dee0..6a7118d300 100644\n> > --- a/src/include/nodes/execnodes.h\n> > +++ b/src/include/nodes/execnodes.h\n> > @@ -34,6 +34,7 @@\n> > #include \"fmgr.h\"\n> > #include \"lib/ilist.h\"\n> > #include \"lib/pairingheap.h\"\n> > +#include \"nodes/miscnodes.h\"\n> > #include \"nodes/params.h\"\n> > #include \"nodes/plannodes.h\"\n> > #include \"nodes/tidbitmap.h\"\n> > @@ -129,6 +130,12 @@ typedef struct ExprState\n> >\n> > Datum *innermost_domainval;\n> > bool *innermost_domainnull;\n> > +\n> > + /*\n> > + * For expression nodes that support soft errors. Should be set to NULL\n> > + * before calling ExecInitExprRec() if the caller wants errors thrown.\n> > + */\n> > + ErrorSaveContext *escontext;\n> > } ExprState;\n>\n> Why do we need this both in ExprState *and* in ExprEvalStep?\n\nIn the current design, ExprState.escontext is only set when\ninitializing sub-expressions that should have their errors handled\nsoftly and is supposed to be NULL at the runtime. So, the design\nexpects the expressions to save the ErrorSaveContext pointer into\ntheir struct in ExecEvalStep or somewhere else (input function's\nFunctionCallInfo in CoerceViaIO's case).\n\n> > From 38b53297b2d435d5cebf78c1f81e4748fed6c8b6 Mon Sep 17 00:00:00 2001\n> > From: Amit Langote <[email protected]>\n> > Date: Wed, 22 Nov 2023 13:18:49 +0900\n> > Subject: [PATCH v30 2/5] Add soft error handling to populate_record_field()\n> >\n> > An uncoming patch would like the ability to call it from the\n> > executor for some SQL/JSON expression nodes and ask to suppress any\n> > errors that may occur.\n> >\n> > This commit does two things mainly:\n> >\n> > * It modifies the various interfaces internal to jsonfuncs.c to pass\n> > the ErrorSaveContext around.\n> >\n> > * Make necessary modifications to handle the cases where the\n> > processing is aborted partway through various functions that take\n> > an ErrorSaveContext when a soft error occurs.\n> >\n> > Note that the above changes are only intended to suppress errors in\n> > the functions in jsonfuncs.c, but not those in any external functions\n> > that the functions in jsonfuncs.c in turn call, such as those from\n> > arrayfuncs.c. It is assumed that the various populate_* functions\n> > validate the data before passing those to external functions.\n> >\n> > Discussion: https://postgr.es/m/CA+HiwqE4XTdfb1nW=Ojoy_tQSRhYt-q_kb6i5d4xcKyrLC1Nbg@mail.gmail.com\n>\n> The code here is getting substantially more verbose / less readable. I wonder\n> if there's something more general that could be improved to make this less\n> painful?\n\nHmm, I can't think of anything short of a rewrite of the code under\npopulate_record_field() so that any error-producing code is well\nisolated or adding a variant/wrapper with soft-error handling\ncapabilities. I'll give this some more thought, though I'm happy to\nhear ideas.\n\n> I'd not at all be surprised if this caused a measurable slowdown. Patches 0004, 0005, and 0006 are new.\n\nI don't notice a significant slowdown. 
The benchmark I used is the\ntime to run the following query:\n\nselect json_populate_record(row(1,1), '{\"f1\":1, \"f2\":1}') from\ngenerate_series(1, 1000000)\n\nHere are the times:\n\nUnpatched:\nTime: 1262.011 ms (00:01.262)\nTime: 1202.354 ms (00:01.202)\nTime: 1187.708 ms (00:01.188)\nTime: 1171.752 ms (00:01.172)\nTime: 1174.249 ms (00:01.174)\n\nPatched:\nTime: 1233.927 ms (00:01.234)\nTime: 1185.381 ms (00:01.185)\nTime: 1202.245 ms (00:01.202)\nTime: 1164.994 ms (00:01.165)\nTime: 1179.009 ms (00:01.179)\n\nperf shows that a significant amount of time is spent is json_lex()\ndwarfing the time spent in dispatching code that is being changed\nhere.\n\n> > ---\n> > src/backend/utils/adt/jsonfuncs.c | 310 +++++++++++++++++++++++-------\n> > 1 file changed, 236 insertions(+), 74 deletions(-)\n>\n> > /* functions supporting jsonb_delete, jsonb_set and jsonb_concat */\n> > static JsonbValue *IteratorConcat(JsonbIterator **it1, JsonbIterator **it2,\n> > @@ -2484,12 +2491,12 @@ populate_array_report_expected_array(PopulateArrayContext *ctx, int ndim)\n> > if (ndim <= 0)\n> > {\n> > if (ctx->colname)\n> > - ereport(ERROR,\n> > + errsave(ctx->escontext,\n> > (errcode(ERRCODE_INVALID_TEXT_REPRESENTATION),\n> > errmsg(\"expected JSON array\"),\n> > errhint(\"See the value of key \\\"%s\\\".\", ctx->colname)));\n> > else\n> > - ereport(ERROR,\n> > + errsave(ctx->escontext,\n> > (errcode(ERRCODE_INVALID_TEXT_REPRESENTATION),\n> > errmsg(\"expected JSON array\")));\n> > }\n> > @@ -2506,13 +2513,13 @@ populate_array_report_expected_array(PopulateArrayContext *ctx, int ndim)\n> > appendStringInfo(&indices, \"[%d]\", ctx->sizes[i]);\n> >\n> > if (ctx->colname)\n> > - ereport(ERROR,\n> > + errsave(ctx->escontext,\n> > (errcode(ERRCODE_INVALID_TEXT_REPRESENTATION),\n> > errmsg(\"expected JSON array\"),\n> > errhint(\"See the array element %s of key \\\"%s\\\".\",\n> > indices.data, ctx->colname)));\n> > else\n> > - ereport(ERROR,\n> > + errsave(ctx->escontext,\n> > (errcode(ERRCODE_INVALID_TEXT_REPRESENTATION),\n> > errmsg(\"expected JSON array\"),\n> > errhint(\"See the array element %s.\",\n> > @@ -2520,8 +2527,13 @@ populate_array_report_expected_array(PopulateArrayContext *ctx, int ndim)\n> > }\n> > }\n>\n> It seems mildly errorprone to use errsave() but not have any returns in the\n> code after the errsave()s - it seems plausible that somebody later would come\n> and add more code expecting to not reach the later code.\n\nHaving returns in the code blocks containing errsave() sounds prudent, so done.\n\n> > +/*\n> > + * Validate and set ndims for populating an array with some\n> > + * populate_array_*() function.\n> > + *\n> > + * Returns false if the input (ndims) is erratic.\n>\n> I don't think \"erratic\" is the right word, \"erroneous\" maybe?\n\n\"erroneous\" sounds better.\n\n> > ---\n> > doc/src/sgml/func.sgml | 151 +++\n> > src/backend/catalog/sql_features.txt | 12 +-\n> > src/backend/executor/execExpr.c | 363 +++++++\n> > src/backend/executor/execExprInterp.c | 365 ++++++-\n> > src/backend/jit/llvm/llvmjit.c | 2 +\n> > src/backend/jit/llvm/llvmjit_expr.c | 140 +++\n> > src/backend/jit/llvm/llvmjit_types.c | 4 +\n> > src/backend/nodes/makefuncs.c | 18 +\n> > src/backend/nodes/nodeFuncs.c | 238 ++++-\n> > src/backend/optimizer/path/costsize.c | 3 +-\n> > src/backend/optimizer/util/clauses.c | 19 +\n> > src/backend/parser/gram.y | 178 +++-\n> > src/backend/parser/parse_expr.c | 621 ++++++++++-\n> > src/backend/parser/parse_target.c | 15 +\n> > src/backend/utils/adt/formatting.c | 44 +\n> > 
src/backend/utils/adt/jsonb.c | 31 +\n> > src/backend/utils/adt/jsonfuncs.c | 52 +-\n> > src/backend/utils/adt/jsonpath.c | 255 +++++\n> > src/backend/utils/adt/jsonpath_exec.c | 391 ++++++-\n> > src/backend/utils/adt/ruleutils.c | 136 +++\n> > src/include/executor/execExpr.h | 133 +++\n> > src/include/fmgr.h | 1 +\n> > src/include/jit/llvmjit.h | 1 +\n> > src/include/nodes/makefuncs.h | 2 +\n> > src/include/nodes/parsenodes.h | 47 +\n> > src/include/nodes/primnodes.h | 130 +++\n> > src/include/parser/kwlist.h | 11 +\n> > src/include/utils/formatting.h | 1 +\n> > src/include/utils/jsonb.h | 1 +\n> > src/include/utils/jsonfuncs.h | 5 +\n> > src/include/utils/jsonpath.h | 27 +\n> > src/interfaces/ecpg/preproc/ecpg.trailer | 28 +\n> > src/test/regress/expected/json_sqljson.out | 18 +\n> > src/test/regress/expected/jsonb_sqljson.out | 1032 +++++++++++++++++++\n> > src/test/regress/parallel_schedule | 2 +-\n> > src/test/regress/sql/json_sqljson.sql | 11 +\n> > src/test/regress/sql/jsonb_sqljson.sql | 337 ++++++\n> > src/tools/pgindent/typedefs.list | 18 +\n> > 38 files changed, 4767 insertions(+), 76 deletions(-)\n>\n> I think it'd be worth trying to break this into smaller bits - it's not easy\n> to review this at once.\n\nISTM that the only piece that can be broken out at this point is the\nadditions under src/backend/utils/adt. I'm not entirely sure if it'd\nbe a good idea to commit the various bits on their own, that is,\nwithout tests which cannot be added without the rest of the\nparser/executor additions for JsonFuncExpr, JsonExpr, and the\nsupporting child nodes.\n\nI've extracted those bits as separate patches even if only for the\nease of review.\n\n> > +/*\n> > + * Information about the state of JsonPath* evaluation.\n> > + */\n> > +typedef struct JsonExprPostEvalState\n> > +{\n> > + /* Did JsonPath* evaluation cause an error? */\n> > + NullableDatum error;\n> > +\n> > + /* Is the result of JsonPath* evaluation empty? */\n> > + NullableDatum empty;\n> > +\n> > + /*\n> > + * ExecEvalJsonExprPath() will set this to the address of the step to\n> > + * use to coerce the result of JsonPath* evaluation to the RETURNING type.\n> > + * Also see the description of possible step addresses that this could be\n> > + * set to in the definition of JsonExprState.\n> > + */\n> > +#define FIELDNO_JSONEXPRPOSTEVALSTATE_JUMP_EVAL_COERCION 2\n> > + int jump_eval_coercion;\n> > +} JsonExprPostEvalState;\n> > +\n> > +/* State for evaluating a JsonExpr, too big to inline */\n> > +typedef struct JsonExprState\n> > +{\n> > + /* original expression node */\n> > + JsonExpr *jsexpr;\n> > +\n> > + /* value/isnull for formatted_expr */\n> > + NullableDatum formatted_expr;\n> > +\n> > + /* value/isnull for pathspec */\n> > + NullableDatum pathspec;\n> > +\n> > + /* JsonPathVariable entries for passing_values */\n> > + List *args;\n> > +\n> > + /*\n> > + * Per-row result status info populated by ExecEvalJsonExprPath()\n> > + * and ExecEvalJsonCoercionFinish().\n> > + */\n> > + JsonExprPostEvalState post_eval;\n> > +\n> > + /*\n> > + * Address of the step that implements the non-ERROR variant of ON ERROR\n> > + * and ON EMPTY behaviors, to be jumped to when ExecEvalJsonExprPath()\n> > + * returns false on encountering an error during JsonPath* evaluation\n> > + * (ON ERROR) or on finding that no matching JSON item was returned (ON\n> > + * EMPTY). 
The same steps are also performed on encountering an error\n> > + * when coercing JsonPath* result to the RETURNING type.\n> > + */\n> > + int jump_error;\n> > +\n> > + /*\n> > + * Addresses of steps to perform the coercion of the JsonPath* result value\n> > + * to the RETURNING type. Each address points to either 1) a special\n> > + * EEOP_JSONEXPR_COERCION step that handles coercion using the RETURNING\n> > + * type's input function or by using json_via_populate(), or 2) an\n> > + * expression such as CoerceViaIO. It may be -1 if no coercion is\n> > + * necessary.\n> > + *\n> > + * jump_eval_result_coercion points to the step to evaluate the coercion\n> > + * given in JsonExpr.result_coercion.\n> > + */\n> > + int jump_eval_result_coercion;\n> > +\n> > + /* eval_item_coercion_jumps is an array of num_item_coercions elements\n> > + * each containing a step address to evaluate the coercion from a value of\n> > + * the given JsonItemType to the RETURNING type, or -1 if no coercion is\n> > + * necessary. item_coercion_via_expr is an array of boolean flags of the\n> > + * same length that indicates whether each valid step address in the\n> > + * eval_item_coercion_jumps array points to an expression or a\n> > + * EEOP_JSONEXPR_COERCION step. ExecEvalJsonExprPath() will cause an\n> > + * error if it's the latter, because that mode of coercion is not\n> > + * supported for all JsonItemTypes.\n> > + */\n> > + int num_item_coercions;\n> > + int *eval_item_coercion_jumps;\n> > + bool *item_coercion_via_expr;\n> > +\n> > + /*\n> > + * For passing when initializing a EEOP_IOCOERCE_SAFE step for any\n> > + * CoerceViaIO nodes in the expression that must be evaluated in an\n> > + * error-safe manner.\n> > + */\n> > + ErrorSaveContext escontext;\n> > +} JsonExprState;\n> > +\n> > +/*\n> > + * State for coercing a value to the target type specified in 'coercion' using\n> > + * either json_populate_type() or by calling the type's input function.\n> > + */\n> > +typedef struct JsonCoercionState\n> > +{\n> > + /* original expression node */\n> > + JsonCoercion *coercion;\n> > +\n> > + /* Input function info for the target type. */\n> > + struct\n> > + {\n> > + FmgrInfo *finfo;\n> > + Oid typioparam;\n> > + } input;\n> > +\n> > + /* Cache for json_populate_type() */\n> > + void *cache;\n> > +\n> > + /*\n> > + * For soft-error handling in json_populate_type() or\n> > + * in InputFunctionCallSafe().\n> > + */\n> > + ErrorSaveContext *escontext;\n> > +} JsonCoercionState;\n>\n> Does all of this stuff need to live in this header? Some of it seems like it\n> doesn't need to be in a header at all, and other bits seem like they belong\n> somewhere more json specific?\n\nI've gotten rid of JsonCoercionState, moving the fields directly into\nExprEvalStep.d.jsonexpr_coercion.\n\nRegarding JsonExprState and JsonExprPostEvalState, maybe they're\nbetter put in execnodes.h to be near other expression state nodes like\nWindowFuncExprState, so have moved them there. I'm not sure of a\njson-specific place for this. 
All of the information contained in\nthose structs is populated and used by execInterpExpr.c, so\nexecnodes.h seems appropriate to me.\n\n> > +/*\n> > + * JsonItemType\n> > + * Represents type codes to identify a JsonCoercion node to use when\n> > + * coercing a given SQL/JSON items to the output SQL type\n> > + *\n> > + * The comment next to each item type mentions the JsonbValue.jbvType of the\n> > + * source JsonbValue value to be coerced using the expression in the\n> > + * JsonCoercion node.\n> > + *\n> > + * Also, see InitJsonItemCoercions() and ExecPrepareJsonItemCoercion().\n> > + */\n> > +typedef enum JsonItemType\n> > +{\n> > + JsonItemTypeNull = 0, /* jbvNull */\n> > + JsonItemTypeString = 1, /* jbvString */\n> > + JsonItemTypeNumeric = 2, /* jbvNumeric */\n> > + JsonItemTypeBoolean = 3, /* jbvBool */\n> > + JsonItemTypeDate = 4, /* jbvDatetime: DATEOID */\n> > + JsonItemTypeTime = 5, /* jbvDatetime: TIMEOID */\n> > + JsonItemTypeTimetz = 6, /* jbvDatetime: TIMETZOID */\n> > + JsonItemTypeTimestamp = 7, /* jbvDatetime: TIMESTAMPOID */\n> > + JsonItemTypeTimestamptz = 8, /* jbvDatetime: TIMESTAMPTZOID */\n> > + JsonItemTypeComposite = 9, /* jbvArray, jbvObject, jbvBinary */\n> > + JsonItemTypeInvalid = 10,\n> > +} JsonItemType;\n>\n> Why do we need manually assigned values here?\n\nNot really necessary here. I think I simply copied the style from\nsome other json-related enum where assigning values seems necessary.\n\n> > +/*\n> > + * JsonCoercion -\n> > + * coercion from SQL/JSON item types to SQL types\n> > + */\n> > +typedef struct JsonCoercion\n> > +{\n> > + NodeTag type;\n> > +\n> > + Oid targettype;\n> > + int32 targettypmod;\n> > + bool via_populate; /* coerce result using json_populate_type()? */\n> > + bool via_io; /* coerce result using type input function? */\n> > + Oid collation; /* collation for coercion via I/O or populate */\n> > +} JsonCoercion;\n> > +\n> > +typedef struct JsonItemCoercion\n> > +{\n> > + NodeTag type;\n> > +\n> > + JsonItemType item_type;\n> > + Node *coercion;\n> > +} JsonItemCoercion;\n>\n> What's the difference between an \"ItemCoercion\" and a \"Coercion\"?\n\nItemCoercion is used to store the coercion expression used at runtime\nto convert the value of given JsonItemType to the target type\nspecified in the JsonExpr.returning. It can either be a cast\nexpression node found by the parser or a JsonCoercion node.\n\nI'll update the comments.\n\n> > +/*\n> > + * JsonBehavior -\n> > + * representation of a given JSON behavior\n>\n> My editor warns about space-before-tab here.\n\nFixed.\n\n> > + */\n> > +typedef struct JsonBehavior\n> > +{\n> > + NodeTag type;\n>\n> > + JsonBehaviorType btype; /* behavior type */\n> > + Node *expr; /* behavior expression */\n>\n> These comment don't seem helpful. I think there's need for comments here, but\n> restating the field name in different words isn't helpful. 
What's needed is an\n> explanation of how things interact, perhaps also why that's the appropriate\n> representation.\n>\n> > + JsonCoercion *coercion; /* to coerce behavior expression when there is\n> > + * no cast to the target type */\n> > + int location; /* token location, or -1 if unknown */\n>\n> > +} JsonBehavior;\n> > +\n> > +/*\n> > + * JsonExpr -\n> > + * transformed representation of JSON_VALUE(), JSON_QUERY(), JSON_EXISTS()\n> > + */\n> > +typedef struct JsonExpr\n> > +{\n> > + Expr xpr;\n> > +\n> > + JsonExprOp op; /* json function ID */\n> > + Node *formatted_expr; /* formatted context item expression */\n> > + Node *result_coercion; /* resulting coercion to RETURNING type */\n> > + JsonFormat *format; /* context item format (JSON/JSONB) */\n> > + Node *path_spec; /* JSON path specification expression */\n> > + List *passing_names; /* PASSING argument names */\n> > + List *passing_values; /* PASSING argument values */\n> > + JsonReturning *returning; /* RETURNING clause type/format info */\n> > + JsonBehavior *on_empty; /* ON EMPTY behavior */\n> > + JsonBehavior *on_error; /* ON ERROR behavior */\n> > + List *item_coercions; /* coercions for JSON_VALUE */\n> > + JsonWrapper wrapper; /* WRAPPER for JSON_QUERY */\n> > + bool omit_quotes; /* KEEP/OMIT QUOTES for JSON_QUERY */\n> > + int location; /* token location, or -1 if unknown */\n> > +} JsonExpr;\n>\n> These comments seem even worse.\n\nOK, I've rewritten the comments about JsonBehavior and JsonExpr.\n\n> > +static void ExecInitJsonExpr(JsonExpr *jexpr, ExprState *state,\n> > + Datum *resv, bool *resnull,\n> > + ExprEvalStep *scratch);\n> > +static int ExecInitJsonExprCoercion(ExprState *state, Node *coercion,\n> > + ErrorSaveContext *escontext,\n> > + Datum *resv, bool *resnull);\n> >\n> >\n> > /*\n> > @@ -2416,6 +2423,36 @@ ExecInitExprRec(Expr *node, ExprState *state,\n> > break;\n> > }\n> >\n> > + case T_JsonExpr:\n> > + {\n> > + JsonExpr *jexpr = castNode(JsonExpr, node);\n> > +\n> > + ExecInitJsonExpr(jexpr, state, resv, resnull, &scratch);\n> > + break;\n> > + }\n> > +\n> > + case T_JsonCoercion:\n> > + {\n> > + JsonCoercion *coercion = castNode(JsonCoercion, node);\n> > + JsonCoercionState *jcstate = palloc0(sizeof(JsonCoercionState));\n> > + Oid typinput;\n> > + FmgrInfo *finfo;\n> > +\n> > + getTypeInputInfo(coercion->targettype, &typinput,\n> > + &jcstate->input.typioparam);\n> > + finfo = palloc0(sizeof(FmgrInfo));\n> > + fmgr_info(typinput, finfo);\n> > + jcstate->input.finfo = finfo;\n> > +\n> > + jcstate->coercion = coercion;\n> > + jcstate->escontext = state->escontext;\n> > +\n> > + scratch.opcode = EEOP_JSONEXPR_COERCION;\n> > + scratch.d.jsonexpr_coercion.jcstate = jcstate;\n> > + ExprEvalPushStep(state, &scratch);\n> > + break;\n> > + }\n>\n> It's confusing that we have ExecInitJsonExprCoercion, but aren't using that\n> here, but then use it later, in ExecInitJsonExpr().\n\nI had moved this code out of ExecInitJsonExprCoercion() into\nExecInitExprRec() to make the JsonCoercion node look like a first\nclass citizen of execExpr.c, but maybe that's not such a good idea\nafter all. 
I've moved it back to make it just another implementation\ndetail of JsonExpr.\n\n> > + EEO_CASE(EEOP_JSONEXPR_PATH)\n> > + {\n> > + JsonExprState *jsestate = op->d.jsonexpr.jsestate;\n> > +\n> > + /* too complex for an inline implementation */\n> > + if (!ExecEvalJsonExprPath(state, op, econtext))\n> > + EEO_JUMP(jsestate->jump_error);\n> > + else if (jsestate->post_eval.jump_eval_coercion >= 0)\n> > + EEO_JUMP(jsestate->post_eval.jump_eval_coercion);\n> > +\n> > + EEO_NEXT();\n> > + }\n>\n> Why do we need post_eval.jump_eval_coercion? Seems like that could more\n> cleanly be implemented by just emitting a constant JUMP step? Oh, I see -\n> you're changing post_eval.jump_eval_coercion at runtime. This seems like a\n> BAD idea. I strongly suggest that instead of modifying the field, you instead\n> return the target jump step as a return value from ExecEvalJsonExprPath or\n> such.\n\nOK, done that way.\n\n> > + bool throw_error = (jexpr->on_error->btype == JSON_BEHAVIOR_ERROR);\n>\n> What's the deal with the parentheses here and in similar places below? There's\n> no danger of ambiguity without, no?\n\nYes, this looks like a remnant of an old version of this condition.\n\n> > + if (empty)\n> > + {\n> > + if (jexpr->on_empty)\n> > + {\n> > + if (jexpr->on_empty->btype == JSON_BEHAVIOR_ERROR)\n> > + ereport(ERROR,\n> > + (errcode(ERRCODE_NO_SQL_JSON_ITEM),\n> > + errmsg(\"no SQL/JSON item\")));\n>\n> No need for the parens around ereport() arguments anymore. Same in a few other places.\n\nAll fixed.\n\n> > diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y\n> > index d631ac89a9..4f92d000ec 100644\n> > --- a/src/backend/parser/gram.y\n> > +++ b/src/backend/parser/gram.y\n> > +json_behavior:\n> > + DEFAULT a_expr\n> > + { $$ = (Node *) makeJsonBehavior(JSON_BEHAVIOR_DEFAULT, $2, NULL, @1); }\n> > + | ERROR_P\n> > + { $$ = (Node *) makeJsonBehavior(JSON_BEHAVIOR_ERROR, NULL, NULL, @1); }\n> > + | NULL_P\n> > + { $$ = (Node *) makeJsonBehavior(JSON_BEHAVIOR_NULL, NULL, NULL, @1); }\n> > + | TRUE_P\n> > + { $$ = (Node *) makeJsonBehavior(JSON_BEHAVIOR_TRUE, NULL, NULL, @1); }\n> > + | FALSE_P\n> > + { $$ = (Node *) makeJsonBehavior(JSON_BEHAVIOR_FALSE, NULL, NULL, @1); }\n> > + | UNKNOWN\n> > + { $$ = (Node *) makeJsonBehavior(JSON_BEHAVIOR_UNKNOWN, NULL, NULL, @1); }\n> > + | EMPTY_P ARRAY\n> > + { $$ = (Node *) makeJsonBehavior(JSON_BEHAVIOR_EMPTY_ARRAY, NULL, NULL, @1); }\n> > + | EMPTY_P OBJECT_P\n> > + { $$ = (Node *) makeJsonBehavior(JSON_BEHAVIOR_EMPTY_OBJECT, NULL, NULL, @1); }\n> > + /* non-standard, for Oracle compatibility only */\n> > + | EMPTY_P\n> > + { $$ = (Node *) makeJsonBehavior(JSON_BEHAVIOR_EMPTY_ARRAY, NULL, NULL, @1); }\n> > + ;\n>\n> Seems like this would look better if you made json_behavior return just the\n> enum values and had one makeJsonBehavior() at the place referencing it?\n\nYes, changed like that.\n\n> > +json_behavior_clause_opt:\n> > + json_behavior ON EMPTY_P\n> > + { $$ = list_make2($1, NULL); }\n> > + | json_behavior ON ERROR_P\n> > + { $$ = list_make2(NULL, $1); }\n> > + | json_behavior ON EMPTY_P json_behavior ON ERROR_P\n> > + { $$ = list_make2($1, $4); }\n> > + | /* EMPTY */\n> > + { $$ = list_make2(NULL, NULL); }\n> > + ;\n>\n> This seems like an odd representation - why represent the behavior as a two\n> element list where one needs to know what is stored at which list offset?\n\nA previous version had a JsonBehaviorClause containing 2 JsonBehavior\nnodes, but Peter didn't like it, so we have this. 
Like Peter, I\nprefer to use the List instead of a whole new parser node, but maybe\nthe damage would be less if we make the List be local to gram.y. I've\ndone that by adding two JsonBehavior nodes to JsonFuncExpr itself,\nwhich are assigned the appropriate values from the List within gram.y.\n\nUpdated patches attached.\n\n--\nThanks, Amit Langote\nEDB: http://www.enterprisedb.com",
"msg_date": "Thu, 14 Dec 2023 17:04:46 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "On Sat, Dec 9, 2023 at 2:05 AM Andrew Dunstan <[email protected]> wrote:\n> On 2023-12-08 Fr 11:37, Robert Haas wrote:\n> > On Fri, Dec 8, 2023 at 1:59 AM Amit Langote <[email protected]> wrote:\n> >> Would it be messy to replace the lookahead approach by whatever's\n> >> suiable *in the future* when it becomes necessary to do so?\n> > It might be. Changing grammar rules to tends to change corner-case\n> > behavior if nothing else. We're best off picking the approach that we\n> > think is correct long term.\n>\n> All this makes me wonder if Alvaro's first suggested solution (adding\n> NESTED to the UNBOUNDED precedence level) wouldn't be better after all.\n\nI've done just that in the latest v32.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 14 Dec 2023 17:05:31 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "On Sat, Dec 9, 2023 at 2:05 PM jian he <[email protected]> wrote:\n> Hi.\n\nThanks for the review.\n\n> function JsonPathExecResult comment needs to be refactored? since it\n> changed a lot.\n\nI suppose you meant executeJsonPath()'s comment. I've added a\ndescription of the new callback function arguments.\n\nOn Wed, Dec 13, 2023 at 6:59 PM jian he <[email protected]> wrote:\n> Hi. small issues I found...\n>\n> typo:\n> +-- Test mutabilily od query functions\n\nFixed.\n\n>\n> + default:\n> + ereport(ERROR,\n> + (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> + errmsg(\"only datetime, bool, numeric, and text types can be casted\n> to jsonpath types\")));\n>\n> transformJsonPassingArgs's function: transformJsonValueExpr will make\n> the above code unreached.\n\nIt's good to have the ereport to catch errors caused by any future changes.\n\n> also based on the `switch (typid)` cases,\n> I guess best message would be\n> errmsg(\"only datetime, bool, numeric, text, json, jsonb types can be\n> casted to jsonpath types\")));\n\nI've rewritten the message to mention the unsupported type. Maybe the\nsupported types can go in a DETAIL message. I might do that later.\n\n> + case JSON_QUERY_OP:\n> + jsexpr->wrapper = func->wrapper;\n> + jsexpr->omit_quotes = (func->quotes == JS_QUOTES_OMIT);\n> +\n> + if (!OidIsValid(jsexpr->returning->typid))\n> + {\n> + JsonReturning *ret = jsexpr->returning;\n> +\n> + ret->typid = JsonFuncExprDefaultReturnType(jsexpr);\n> + ret->typmod = -1;\n> + }\n> + jsexpr->result_coercion = coerceJsonFuncExprOutput(pstate, jsexpr);\n>\n> I noticed, if (!OidIsValid(jsexpr->returning->typid)) is the true\n> function JsonFuncExprDefaultReturnType may be called twice, not sure\n> if it's good or not..\n\nIf avoiding the double-calling means that we've to add more conditions\nin the code, I'm fine with leaving this as-is.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 14 Dec 2023 17:10:38 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: remaining sql/json patches"
},
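A small aside on the PASSING clause that the cast-to-jsonpath-types error above concerns: each PASSING argument becomes a jsonpath variable, which is why only types with a jsonpath representation are accepted. A minimal SQL sketch using only syntax from the patch set (the literal value 2 and the variable name x are arbitrary choices for illustration):

SELECT JSON_EXISTS(jsonb '{"key": [1, 2, 3]}',
                   '$.key[*] ? (@ > $x)'
                   PASSING 2 AS x);
-- expected to return true, since at least one array element exceeds the passed value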
{
"msg_contents": "hi.\nsince InitJsonItemCoercions cannot return NULL.\nper transformJsonFuncExpr, jsexpr->item_coercions not null imply\njsexpr->result_coercion not null.\nso I did the attached refactoring.\n\nnow every ExecInitJsonExprCoercion function call followed with:\n\nscratch->opcode = EEOP_JUMP;\nscratch->d.jump.jumpdone = -1; /* set below */\njumps_to_coerce_finish = lappend_int(jumps_to_coerce_finish,\nstate->steps_len);\nExprEvalPushStep(state, scratch);\n\nIt looks more consistent.\nwe can also change\n\n+ */\n+ if (jexpr->result_coercion || jexpr->item_coercions)\n+ {\n+\n\nto\n+ if (jexpr->result_coercion)\n\nsince jexpr->item_coercions not null imply jexpr->result_coercion not null.",
"msg_date": "Fri, 15 Dec 2023 16:36:41 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "Hi! another minor issue I found:\n\n+SELECT pg_get_expr(adbin, adrelid)\n+FROM pg_attrdef\n+WHERE adrelid = 'test_jsonb_constraints'::regclass\n+ORDER BY 1;\n+\n+SELECT pg_get_expr(adbin, adrelid) FROM pg_attrdef WHERE adrelid =\n'test_jsonb_constraints'::regclass;\n\nI think these two queries are the same? Why do we test it twice.....\n\n\n",
"msg_date": "Mon, 18 Dec 2023 17:45:29 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "On Thu, Dec 14, 2023 at 5:04 PM Amit Langote <[email protected]> wrote:\n> On Sat, Dec 9, 2023 at 2:30 AM Andres Freund <[email protected]> wrote:\n> > On 2023-12-07 21:07:59 +0900, Amit Langote wrote:\n> > > From 38b53297b2d435d5cebf78c1f81e4748fed6c8b6 Mon Sep 17 00:00:00 2001\n> > > From: Amit Langote <[email protected]>\n> > > Date: Wed, 22 Nov 2023 13:18:49 +0900\n> > > Subject: [PATCH v30 2/5] Add soft error handling to populate_record_field()\n> > >\n> > > An uncoming patch would like the ability to call it from the\n> > > executor for some SQL/JSON expression nodes and ask to suppress any\n> > > errors that may occur.\n> > >\n> > > This commit does two things mainly:\n> > >\n> > > * It modifies the various interfaces internal to jsonfuncs.c to pass\n> > > the ErrorSaveContext around.\n> > >\n> > > * Make necessary modifications to handle the cases where the\n> > > processing is aborted partway through various functions that take\n> > > an ErrorSaveContext when a soft error occurs.\n> > >\n> > > Note that the above changes are only intended to suppress errors in\n> > > the functions in jsonfuncs.c, but not those in any external functions\n> > > that the functions in jsonfuncs.c in turn call, such as those from\n> > > arrayfuncs.c. It is assumed that the various populate_* functions\n> > > validate the data before passing those to external functions.\n> > >\n> > > Discussion: https://postgr.es/m/CA+HiwqE4XTdfb1nW=Ojoy_tQSRhYt-q_kb6i5d4xcKyrLC1Nbg@mail.gmail.com\n> >\n> > The code here is getting substantially more verbose / less readable. I wonder\n> > if there's something more general that could be improved to make this less\n> > painful?\n>\n> Hmm, I can't think of anything short of a rewrite of the code under\n> populate_record_field() so that any error-producing code is well\n> isolated or adding a variant/wrapper with soft-error handling\n> capabilities. I'll give this some more thought, though I'm happy to\n> hear ideas.\n\nI looked at this and wasn't able to come up with alternative takes\nthat are better in terms of the verbosity/readability. I'd still want\nto hear if someone well-versed in the json(b) code has any advice.\n\nI also looked at some commits touching src/backend/utils/adt/json*\nfiles to add soft error handling and I can't help but notice that\nthose commits look not very different from this. For example, commits\nc60c9bad, 50428a30 contain changes like:\n\n@@ -454,7 +474,11 @@ parse_array_element(JsonLexContext *lex,\nJsonSemAction *sem)\n return result;\n\n if (aend != NULL)\n- (*aend) (sem->semstate, isnull);\n+ {\n+ result = (*aend) (sem->semstate, isnull);\n+ if (result != JSON_SUCCESS)\n+ return result;\n+ }\n\nAttached updated patches addressing jian he's comments, some minor\nfixes, and commit message updates.\n\n\n\n--\nThanks, Amit Langote\nEDB: http://www.enterprisedb.com",
"msg_date": "Wed, 20 Dec 2023 17:36:25 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: remaining sql/json patches"
},
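The executor-side soft error handling discussed above is what lets the non-ERROR variants of ON ERROR absorb failures that happen while coercing the extracted item to the RETURNING type. A short SQL sketch of the intended user-visible behavior (the commented results follow the patch's documented semantics and are not verified against any particular patch version):

SELECT JSON_VALUE(jsonb '"not a number"', '$' RETURNING int);
-- NULL, because the default behavior is NULL ON ERROR
SELECT JSON_VALUE(jsonb '"not a number"', '$' RETURNING int DEFAULT -1 ON ERROR);
-- -1
SELECT JSON_VALUE(jsonb '"not a number"', '$' RETURNING int ERROR ON ERROR);
-- the coercion failure is reported as an error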
{
"msg_contents": "Hi\nv33-0007-SQL-JSON-query-functions.patch, commit message:\nThis introduces the SQL/JSON functions for querying JSON data using\njsonpath expressions. The functions are:\n\nshould it be \"These functions are\"\n\n+ <para>\n+ Returns true if the SQL/JSON <replaceable>path_expression</replaceable>\n+ applied to the <replaceable>context_item</replaceable> using the\n+ <replaceable>value</replaceable>s yields any items.\n+ The <literal>ON ERROR</literal> clause specifies what is returned if\n+ an error occurs; the default is to return <literal>FALSE</literal>.\n+ Note that if the <replaceable>path_expression</replaceable>\n+ is <literal>strict</literal>, an error is generated if it\nyields no items.\n+ </para>\n\nI think the following description is more accurate.\n+ Note that if the <replaceable>path_expression</replaceable>\n+ is <literal>strict</literal> and the <literal>ON\nERROR</literal> clause is <literal> ERROR</literal>,\n+ an error is generated if it yields no items.\n+ </para>\n\n+/*\n+ * transformJsonTable -\n+ * Transform a raw JsonTable into TableFunc.\n+ *\n+ * Transform the document-generating expression, the row-generating expression,\n+ * the column-generating expressions, and the default value expressions.\n+ */\n+ParseNamespaceItem *\n+transformJsonTable(ParseState *pstate, JsonTable *jt)\n+{\n+ JsonTableParseContext cxt;\n+ TableFunc *tf = makeNode(TableFunc);\n+ JsonFuncExpr *jfe = makeNode(JsonFuncExpr);\n+ JsonExpr *je;\n+ JsonTablePlan *plan = jt->plan;\n+ char *rootPathName = jt->pathname;\n+ char *rootPath;\n+ bool is_lateral;\n+\n+ if (jt->on_empty)\n+ ereport(ERROR,\n+ (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n+ errmsg(\"ON EMPTY not allowed in JSON_TABLE\"),\n+ parser_errposition(pstate,\n+ exprLocation((Node *) jt->on_empty))));\n\nThis error may be slightly misleading?\nyou can add ON EMPTY inside the COLUMNS part, like the following:\nSELECT * FROM (VALUES ('1'), ('\"1\"')) vals(js) LEFT OUTER JOIN\nJSON_TABLE(vals.js::jsonb, '$' COLUMNS (a int PATH '$' default 1 ON\nempty)) jt ON true;\n\n+ <para>\n+ Each <literal>NESTED PATH</literal> clause can generate one or more\n+ columns. Columns produced by <literal>NESTED PATH</literal>s at the\n+ same level are considered to be <firstterm>siblings</firstterm>,\n+ while a column produced by a <literal>NESTED PATH</literal> is\n+ considered to be a child of the column produced by a\n+ <literal>NESTED PATH</literal> or row expression at a higher level.\n+ Sibling columns are always joined first. 
Once they are processed,\n+ the resulting rows are joined to the parent row.\n+ </para>\nDoes changing to the following make sense?\n+ considered to be a <firstterm>child</firstterm> of the column produced by a\n+ the resulting rows are joined to the <firstterm>parent</firstterm> row.\n\nseems like `format json_representation`, not listed in the\ndocumentation, but json_representation is \"Parameters\", do we need\nadd a section to explain it?\neven though I think currently we can only do `FORMAT JSON`.\n\nSELECT * FROM JSON_TABLE(jsonb '123', '$' COLUMNS (item int PATH '$'\nempty on empty)) bar;\nERROR: cannot cast jsonb array to type integer\nThe error is the same as the output of the following:\nSELECT * FROM JSON_TABLE(jsonb '123', '$' COLUMNS (item int PATH '$'\nempty array on empty )) bar;\nbut these two are different things?\n\n+ /* FALLTHROUGH */\n+ case JTC_EXISTS:\n+ case JTC_FORMATTED:\n+ {\n+ Node *je;\n+ CaseTestExpr *param = makeNode(CaseTestExpr);\n+\n+ param->collation = InvalidOid;\n+ param->typeId = cxt->contextItemTypid;\n+ param->typeMod = -1;\n+\n+ if (rawc->wrapper != JSW_NONE &&\n+ rawc->quotes != JS_QUOTES_UNSPEC)\n+ ereport(ERROR,\n+ (errcode(ERRCODE_SYNTAX_ERROR),\n+ errmsg(\"cannot use WITH WRAPPER clause for formatted colunmns\"\n+ \" without also specifying OMIT/KEEP QUOTES\"),\n+ parser_errposition(pstate, rawc->location)));\n\ntypo, should be \"formatted columns\".\nI suspect people will be confused with the meaning of \"formatted column\".\nmaybe we can replace this part:\"cannot use WITH WRAPPER clause for\nformatted column\"\nto\n\"SQL/JSON WITH WRAPPER behavior must not be specified when FORMAT\nclause is used\"\n\nSELECT * FROM JSON_TABLE(jsonb '\"world\"', '$' COLUMNS (item text\nFORMAT JSON PATH '$' with wrapper KEEP QUOTES));\nERROR: cannot use WITH WRAPPER clause for formatted colunmns without\nalso specifying OMIT/KEEP QUOTES\nLINE 1: ...T * FROM JSON_TABLE(jsonb '\"world\"', '$' COLUMNS (item text ...\n ^\nthis error is misleading, since now I am using WITH WRAPPER clause for\nformatted columns and specified KEEP QUOTES.\n\nin parse_expr.c, we have errmsg(\"SQL/JSON QUOTES behavior must not be\nspecified when WITH WRAPPER is used\").\n\n+/*\n+ * Fetch next row from a cross/union joined scan.\n+ *\n+ * Returns false at the end of a scan, true otherwise.\n+ */\n+static bool\n+JsonTablePlanNextRow(JsonTablePlanState * state)\n+{\n+ JsonTableJoinState *join;\n+\n+ if (state->type == JSON_TABLE_SCAN_STATE)\n+ return JsonTableScanNextRow((JsonTableScanState *) state);\n+\n+ join = (JsonTableJoinState *) state;\n+ if (join->advanceRight)\n+ {\n+ /* fetch next inner row */\n+ if (JsonTablePlanNextRow(join->right))\n+ return true;\n+\n+ /* inner rows are exhausted */\n+ if (join->cross)\n+ join->advanceRight = false; /* next outer row */\n+ else\n+ return false; /* end of scan */\n+ }\n+\n+ while (!join->advanceRight)\n+ {\n+ /* fetch next outer row */\n+ bool left = JsonTablePlanNextRow(join->left);\n\n+ bool left = JsonTablePlanNextRow(join->left);\nJsonTablePlanNextRow function comment says \"Returns false at the end\nof a scan, true otherwise.\",\nso bool variable name as \"left\" seems not so good?\n\nIt might help others understand the whole code by adding some comments on\nstruct JsonTableScanState and struct JsonTableJoinState.\nsince json_table patch is quite recursive, IMHO.\n\nI did some minor refactoring in parse_expr.c, since some code like\ntransformJsonExprCommon is duplicated.",
"msg_date": "Fri, 22 Dec 2023 21:01:05 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
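To make the WRAPPER/QUOTES interplay reviewed above concrete, here is a small SQL sketch; the results in the comments assume the defaults described in the quoted documentation (KEEP QUOTES, UNCONDITIONAL wrapper) and may differ between patch versions:

SELECT JSON_QUERY(jsonb '{"a": "[1,2]"}', '$.a');               -- "[1,2]"   (quotes kept by default)
SELECT JSON_QUERY(jsonb '{"a": "[1,2]"}', '$.a' OMIT QUOTES);   -- [1, 2]
SELECT JSON_QUERY(jsonb '{"a": [1, 2]}', '$.a' WITH WRAPPER);   -- [[1, 2]]  (unconditional wrapper)
SELECT JSON_QUERY(jsonb '{"a": [1, 2]}', '$.a' WITH CONDITIONAL WRAPPER);   -- [1, 2]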
{
"msg_contents": "Hi.\n\n+/*\n+ * JsonTableFetchRow\n+ * Prepare the next \"current\" tuple for upcoming GetValue calls.\n+ * Returns FALSE if the row-filter expression returned no more rows.\n+ */\n+static bool\n+JsonTableFetchRow(TableFuncScanState *state)\n+{\n+ JsonTableExecContext *cxt =\n+ GetJsonTableExecContext(state, \"JsonTableFetchRow\");\n+\n+ if (cxt->empty)\n+ return false;\n+\n+ return JsonTableScanNextRow(cxt->root);\n+}\n\nThe declaration of struct JsonbTableRoutine, SetRowFilter field is\nnull. So I am confused by the above comment.\nalso seems the `if (cxt->empty)` part never called.\n\n+static inline JsonTableExecContext *\n+GetJsonTableExecContext(TableFuncScanState *state, const char *fname)\n+{\n+ JsonTableExecContext *result;\n+\n+ if (!IsA(state, TableFuncScanState))\n+ elog(ERROR, \"%s called with invalid TableFuncScanState\", fname);\n+ result = (JsonTableExecContext *) state->opaque;\n+ if (result->magic != JSON_TABLE_EXEC_CONTEXT_MAGIC)\n+ elog(ERROR, \"%s called with invalid TableFuncScanState\", fname);\n+\n+ return result;\n+}\nI think Assert(IsA(state, TableFuncScanState)) would be better.\n\n+/*\n+ * JsonTablePlanType -\n+ * flags for JSON_TABLE plan node types representation\n+ */\n+typedef enum JsonTablePlanType\n+{\n+ JSTP_DEFAULT,\n+ JSTP_SIMPLE,\n+ JSTP_JOINED,\n+} JsonTablePlanType;\nit would be better to add some comments on it. thanks.\n\nJsonTablePlanNextRow is quite recursive! Adding more explanation would\nbe helpful, thanks.\n\n+/* Recursively reset scan and its child nodes */\n+static void\n+JsonTableRescanRecursive(JsonTablePlanState * state)\n+{\n+ if (state->type == JSON_TABLE_JOIN_STATE)\n+ {\n+ JsonTableJoinState *join = (JsonTableJoinState *) state;\n+\n+ JsonTableRescanRecursive(join->left);\n+ JsonTableRescanRecursive(join->right);\n+ join->advanceRight = false;\n+ }\n+ else\n+ {\n+ JsonTableScanState *scan = (JsonTableScanState *) state;\n+\n+ Assert(state->type == JSON_TABLE_SCAN_STATE);\n+ JsonTableRescan(scan);\n+ if (scan->plan.nested)\n+ JsonTableRescanRecursive(scan->plan.nested);\n+ }\n+}\n\n From the coverage report, I noticed the first IF branch in\nJsonTableRescanRecursive never called.\n\n+ foreach(col, columns)\n+ {\n+ JsonTableColumn *rawc = castNode(JsonTableColumn, lfirst(col));\n+ Oid typid;\n+ int32 typmod;\n+ Node *colexpr;\n+\n+ if (rawc->name)\n+ {\n+ /* make sure column names are unique */\n+ ListCell *colname;\n+\n+ foreach(colname, tf->colnames)\n+ if (!strcmp((const char *) colname, rawc->name))\n+ ereport(ERROR,\n+ (errcode(ERRCODE_SYNTAX_ERROR),\n+ errmsg(\"column name \\\"%s\\\" is not unique\",\n+ rawc->name),\n+ parser_errposition(pstate, rawc->location)));\n\nthis `/* make sure column names are unique */` logic part already\nvalidated in isJsonTablePathNameDuplicate, so we don't need it?\nactually isJsonTablePathNameDuplicate validates both column name and pathname.\n\nselect jt.* from jsonb_table_test jtt,\njson_table (jtt.js,'strict $[*]' as p\ncolumns (n for ordinality,\nnested path 'strict $.b[*]' as pb columns ( c int path '$' ),\nnested path 'strict $.b[*]' as pb columns ( s int path '$' ))\n) jt;\n\nERROR: duplicate JSON_TABLE column name: pb\nHINT: JSON_TABLE column names must be distinct from one another.\nthe error is not very accurate, since pb is a pathname?\n\n\n",
"msg_date": "Mon, 25 Dec 2023 13:03:27 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
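For context on the recursive scan/rescan code reviewed above, this is the kind of query that builds an outer scan with a NESTED PATH child scan (syntax as in the patch under review; the data is made up for illustration):

SELECT jt.*
FROM JSON_TABLE(
       jsonb '[{"a": 1, "b": [10, 20]}, {"a": 2, "b": [30]}]',
       '$[*]'
       COLUMNS (
         a int PATH '$.a',
         NESTED PATH '$.b[*]' COLUMNS (b int PATH '$')
       )) AS jt;
-- expected rows: (1,10), (1,20), (2,30); each child row is joined back to its parent row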
{
"msg_contents": "On Fri, Dec 22, 2023 at 9:01 PM jian he <[email protected]> wrote:\n>\n> Hi\n>\n> + /* FALLTHROUGH */\n> + case JTC_EXISTS:\n> + case JTC_FORMATTED:\n> + {\n> + Node *je;\n> + CaseTestExpr *param = makeNode(CaseTestExpr);\n> +\n> + param->collation = InvalidOid;\n> + param->typeId = cxt->contextItemTypid;\n> + param->typeMod = -1;\n> +\n> + if (rawc->wrapper != JSW_NONE &&\n> + rawc->quotes != JS_QUOTES_UNSPEC)\n> + ereport(ERROR,\n> + (errcode(ERRCODE_SYNTAX_ERROR),\n> + errmsg(\"cannot use WITH WRAPPER clause for formatted colunmns\"\n> + \" without also specifying OMIT/KEEP QUOTES\"),\n> + parser_errposition(pstate, rawc->location)));\n>\n> typo, should be \"formatted columns\".\n> I suspect people will be confused with the meaning of \"formatted column\".\n> maybe we can replace this part:\"cannot use WITH WRAPPER clause for\n> formatted column\"\n> to\n> \"SQL/JSON WITH WRAPPER behavior must not be specified when FORMAT\n> clause is used\"\n>\n> SELECT * FROM JSON_TABLE(jsonb '\"world\"', '$' COLUMNS (item text\n> FORMAT JSON PATH '$' with wrapper KEEP QUOTES));\n> ERROR: cannot use WITH WRAPPER clause for formatted colunmns without\n> also specifying OMIT/KEEP QUOTES\n> LINE 1: ...T * FROM JSON_TABLE(jsonb '\"world\"', '$' COLUMNS (item text ...\n> ^\n> this error is misleading, since now I am using WITH WRAPPER clause for\n> formatted columns and specified KEEP QUOTES.\n>\n\nHi. still based on v33.\nJSON_TABLE:\nI also refactor parse_jsontable.c error reporting, now the error\nmessage will be consistent with json_query.\nnow you can specify wrapper freely as long as you don't specify\nwrapper and quote at the same time.\noverall, json_table behavior is more consistent with json_query and json_value.\nI also added some tests.\n\n+void\n+ExecEvalJsonCoercion(ExprState *state, ExprEvalStep *op,\n+ ExprContext *econtext)\n+{\n+ JsonCoercion *coercion = op->d.jsonexpr_coercion.coercion;\n+ ErrorSaveContext *escontext = op->d.jsonexpr_coercion.escontext;\n+ Datum res = *op->resvalue;\n+ bool resnull = *op->resnull;\n+\n+ if (coercion->via_populate)\n+ {\n+ void *cache = op->d.jsonexpr_coercion.json_populate_type_cache;\n+\n+ *op->resvalue = json_populate_type(res, JSONBOID,\n+ coercion->targettype,\n+ coercion->targettypmod,\n+ &cache,\n+ econtext->ecxt_per_query_memory,\n+ op->resnull, (Node *) escontext);\n+ }\n+ else if (coercion->via_io)\n+ {\n+ FmgrInfo *input_finfo = op->d.jsonexpr_coercion.input_finfo;\n+ Oid typioparam = op->d.jsonexpr_coercion.typioparam;\n+ char *val_string = resnull ? 
NULL :\n+ JsonbUnquote(DatumGetJsonbP(res));\n+\n+ (void) InputFunctionCallSafe(input_finfo, val_string, typioparam,\n+ coercion->targettypmod,\n+ (Node *) escontext,\n+ op->resvalue);\n+ }\nvia_populate, via_io should be mutually exclusive.\nyour patch, in some cases, both (coercion->via_io) and\n(coercion->via_populate) are true.\n(we can use elog find out).\nI refactor coerceJsonFuncExprOutput, so now it will be mutually exclusive.\nI also add asserts to it.\n\nBy default, json_query keeps quotes, json_value omit quotes.\nHowever, json_table will be transformed to json_value or json_query\nbased on certain criteria,\nthat means we need to explicitly set the JsonExpr->omit_quotes in the\nfunction transformJsonFuncExpr\nfor case JSON_QUERY_OP and JSON_VALUE_OP.\n\nWe need to know the QUOTE behavior in the function ExecEvalJsonCoercion.\nBecause for ExecEvalJsonCoercion, the coercion datum source can be a\nscalar string item,\nscalar items means RETURNING clause is dependent on QUOTE behavior.\nkeep quotes, omit quotes the results are different.\nconsider\nJSON_QUERY(jsonb'{\"rec\": \"[1,2]\"}', '$.rec' returning int4range omit quotes);\nand\nJSON_QUERY(jsonb'{\"rec\": \"[1,2]\"}', '$.rec' returning int4range omit quotes);\n\nto make sure ExecEvalJsonCoercion can distinguish keep and omit quotes,\nI added a bool keep_quotes to struct JsonCoercion.\n(maybe there is a more simple way, so far, that's what I come up with).\nthe keep_quotes value will be settled in the function transformJsonFuncExpr.\nAfter refactoring, in ExecEvalJsonCoercion, keep_quotes is true then\ncall JsonbToCString, else call JsonbUnquote.\n\nexample:\nSELECT JSON_QUERY(jsonb'{\"rec\": \"{1,2,3}\"}', '$.rec' returning int[]\nomit quotes);\nwithout my changes, return NULL\nwith my changes:\n {1,2,3}\n\nJSON_VALUE:\nmain changes:\n--- a/src/test/regress/expected/jsonb_sqljson.out\n+++ b/src/test/regress/expected/jsonb_sqljson.out\n@@ -301,7 +301,11 @@ SELECT JSON_VALUE(jsonb '\"2017-02-20\"', '$'\nRETURNING date) + 9;\n -- Test NULL checks execution in domain types\n CREATE DOMAIN sqljsonb_int_not_null AS int NOT NULL;\n SELECT JSON_VALUE(jsonb 'null', '$' RETURNING sqljsonb_int_not_null);\n-ERROR: domain sqljsonb_int_not_null does not allow null values\n+ json_value\n+------------\n+\n+(1 row)\n+\nI think the change is correct, given `SELECT JSON_VALUE(jsonb 'null',\n'$' RETURNING int4range);` returns NULL.\n\nI also attached a test.sql, without_patch.out (apply v33 only),\nwith_patch.out (my changes based on v33).\nSo you can see the difference after applying the patch, in case, my\nwording is not clear.",
"msg_date": "Wed, 3 Jan 2024 18:50:26 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "some more minor issues:\nSELECT * FROM JSON_TABLE(jsonb '{\"a\":[123,2]}', '$'\n COLUMNS (item int[] PATH '$.a' error on error, foo text path '$'\nerror on error)) bar;\nERROR: JSON path expression in JSON_VALUE should return singleton scalar item\n\nthe error message seems not so great, imho.\nsince the JSON_TABLE doc entries didn't mention that\nJSON_TABLE actually transformed to json_value, json_query, json_exists.\n\nJSON_VALUE even though cannot specify KEEP | OMIT QUOTES.\nIt might be a good idea to mention the default is to omit quotes in the doc.\nbecause JSON_TABLE actually transformed to json_value, json_query, json_exists.\nJSON_TABLE can specify quotes behavior freely.\n\nbother again, i kind of get what the function transformJsonTableChildPlan do,\nbut adding more comments would make it easier to understand....\n\n(json_query)\n+ This function must return a JSON string, so if the path expression\n+ returns multiple SQL/JSON items, you must wrap the result using the\n+ <literal>WITH WRAPPER</literal> clause. If the wrapper is\n+ <literal>UNCONDITIONAL</literal>, an array wrapper will always\n+ be applied, even if the returned value is already a single JSON object\n+ or an array, but if it is <literal>CONDITIONAL</literal>, it\nwill not be\n+ applied to a single array or object. <literal>UNCONDITIONAL</literal>\n+ is the default. If the result is a scalar string, by default the value\n+ returned will have surrounding quotes making it a valid JSON value,\n+ which can be made explicit by specifying <literal>KEEP\nQUOTES</literal>.\n+ Conversely, quotes can be omitted by specifying <literal>OMIT\nQUOTES</literal>.\n+ The returned <replaceable>data_type</replaceable> has the\nsame semantics\n+ as for constructor functions like <function>json_objectagg</function>;\n+ the default returned type is <type>jsonb</type>.\n\n+ <para>\n+ Returns the result of applying the\n+ <replaceable>path_expression</replaceable> to the\n+ <replaceable>context_item</replaceable> using the\n+ <literal>PASSING</literal> <replaceable>value</replaceable>s. The\n+ extracted value must be a single <acronym>SQL/JSON</acronym> scalar\n+ item. For results that are objects or arrays, use the\n+ <function>json_query</function> function instead.\n+ The returned <replaceable>data_type</replaceable> has the\nsame semantics\n+ as for constructor functions like <function>json_objectagg</function>.\n+ The default returned type is <type>text</type>.\n+ The <literal>ON ERROR</literal> and <literal>ON EMPTY</literal>\n+ clauses have similar semantics as mentioned in the description of\n+ <function>json_query</function>.\n+ </para>\n\n+ The returned <replaceable>data_type</replaceable> has the\nsame semantics\n+ as for constructor functions like <function>json_objectagg</function>.\n\nIMHO, the above description is not so good, since the function\njson_objectagg is listed in functions-aggregate.html,\nusing Ctrl + F in the browser cannot find json_objectagg in functions-json.html.\n\nfor json_query, maybe we can rephrase like:\nthe RETURNING clause, which specifies the data type returned. It must\nbe a type for which there is a cast from text to that type.\nBy default, the <type>jsonb</type> type is returned.\n\njson_value:\nthe RETURNING clause, which specifies the data type returned. It must\nbe a type for which there is a cast from text to that type.\nBy default, the <type>text</type> type is returned.\n\n\n",
"msg_date": "Wed, 3 Jan 2024 18:53:34 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
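A short sketch of the RETURNING defaults being discussed (text for JSON_VALUE, jsonb for JSON_QUERY), per the quoted documentation; an explicit RETURNING type only needs a usable cast or input path from the extracted value:

SELECT JSON_VALUE(jsonb '{"a": 1.5}', '$.a');                    -- '1.5' as text (default)
SELECT JSON_VALUE(jsonb '{"a": 1.5}', '$.a' RETURNING numeric);  -- 1.5 as numeric
SELECT JSON_QUERY(jsonb '{"a": [1, 2]}', '$.a');                 -- [1, 2] as jsonb (default)
SELECT JSON_QUERY(jsonb '{"a": [1, 2]}', '$.a' RETURNING text);  -- '[1, 2]' as text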
{
"msg_contents": "some tests after applying V33 and my small changes.\nsetup:\ncreate table test_scalar1(js jsonb);\ninsert into test_scalar1 select jsonb '{\"a\":\"[12,13]\"}' FROM\ngenerate_series(1,1e5) g;\ncreate table test_scalar2(js jsonb);\ninsert into test_scalar2 select jsonb '{\"a\":12}' FROM generate_series(1,1e5) g;\ncreate table test_array1(js jsonb);\ninsert into test_array1 select jsonb '{\"a\":[1,2,3,4,5]}' FROM\ngenerate_series(1,1e5) g;\ncreate table test_array2(js jsonb);\ninsert into test_array2 select jsonb '{\"a\": \"{1,2,3,4,5}\"}' FROM\ngenerate_series(1,1e5) g;\n\ntests:\n----------------------------------------return a scalar int4range\nexplain(costs off,analyze) SELECT item FROM test_scalar1,\nJSON_TABLE(js, '$.a' COLUMNS (item int4range PATH '$' omit quotes))\n\\watch count=5\n237.753 ms\n\nexplain(costs off,analyze) select json_query(js, '$.a' returning\nint4range omit quotes) from test_scalar1 \\watch count=5\n462.379 ms\n\nexplain(costs off,analyze) select json_value(js,'$.a' returning\nint4range) from test_scalar1 \\watch count=5\n362.148 ms\n\nexplain(costs off,analyze) select (js->>'a')::int4range from\ntest_scalar1 \\watch count=5\n301.089 ms\n\nexplain(costs off,analyze) select trim(both '\"' from\njsonb_path_query_first(js,'$.a')::text)::int4range from test_scalar1\n\\watch count=5\n643.337 ms\n\n----------------------------return a numeric array from jsonb array.\nexplain(costs off,analyze) SELECT item FROM test_array1,\nJSON_TABLE(js, '$.a' COLUMNS (item numeric[] PATH '$')) \\watch count=5\n727.807 ms\n\nexplain(costs off,analyze) SELECT json_query(js, '$.a' returning\nnumeric[]) from test_array1 \\watch count=5\n2995.909 ms\n\nexplain(costs off,analyze) SELECT\nreplace(replace(js->>'a','[','{'),']','}')::numeric[] from test_array1\n\\watch count=5\n2990.114 ms\n\n----------------------------return a numeric array from jsonb string\nexplain(costs off,analyze) SELECT item FROM test_array2,\nJSON_TABLE(js, '$.a' COLUMNS (item numeric[] PATH '$' omit quotes))\n\\watch count=5\n237.863 ms\n\nexplain(costs off,analyze) SELECT json_query(js,'$.a' returning\nnumeric[] omit quotes) from test_array2 \\watch count=5\n893.888 ms\n\nexplain(costs off,analyze) SELECT trim(both '\"'\nfrom(jsonb_path_query(js,'$.a')::text))::numeric[] from test_array2\n\\watch count=5\n1329.713 ms\n\nexplain(costs off,analyze) SELECT (js->>'a')::numeric[] from\ntest_array2 \\watch count=5\n740.645 ms\n\nexplain(costs off,analyze) SELECT trim(both '\"' from\n(json_query(js,'$.a' returning text)))::numeric[] from test_array2\n\\watch count=5\n1085.230 ms\n----------------------------return a scalar numeric\nexplain(costs off,analyze) SELECT item FROM test_scalar2,\nJSON_TABLE(js, '$.a' COLUMNS (item numeric PATH '$' omit quotes)) \\watch count=5\n238.036 ms\n\nexplain(costs off,analyze) select json_query(js,'$.a' returning\nnumeric) from test_scalar2 \\watch count=5\n300.862 ms\n\nexplain(costs off,analyze) select json_value(js,'$.a' returning\nnumeric) from test_scalar2 \\watch count=5\n160.035 ms\n\nexplain(costs off,analyze) select\njsonb_path_query_first(js,'$.a')::numeric from test_scalar2 \\watch\ncount=5\n294.666 ms\n\nexplain(costs off,analyze) select jsonb_path_query(js,'$.a')::numeric\nfrom test_scalar2 \\watch count=5\n547.130 ms\n\nexplain(costs off,analyze) select (js->>'a')::numeric from\ntest_scalar2 \\watch count=5\n243.652 ms\n\nexplain(costs off,analyze) select (js->>'a')::numeric,\n(js->>'a')::numeric from test_scalar2 \\watch count=5\n403.183 ms\n\nexplain(costs 
off,analyze) select json_value(js,'$.a' returning numeric),\n json_value(js,'$.a' returning numeric) from test_scalar2 \\watch count=5\n246.405 ms\n\nexplain(costs off,analyze) select json_query(js,'$.a' returning numeric),\n json_query(js,'$.a' returning numeric) from test_scalar2 \\watch count=5\n520.754 ms\n\nexplain(costs off,analyze) SELECT item, item1 FROM test_scalar2,\nJSON_TABLE(js, '$.a' COLUMNS (item numeric PATH '$' omit quotes,\n item1 numeric PATH '$' omit quotes)) \\watch count=5\n242.586 ms\n---------------------------------\noverall, json_value is faster than json_query. but json_value can not\ndeal with arrays in some cases.\nbut as you can see, in some cases, json_value and json_query are not\nas fast as our current implementation.\n\nHere I only test simple nested levels. if you extra multiple values\nfrom jsonb to sql type, then json_table is faster.\nIn almost all cases, json_table is faster.\n\njson_table is actually called json_value_op, json_query_op under the hood.\nWithout json_value and json_query related code, json_table cannot be\nimplemented.\n\n\n",
"msg_date": "Sat, 6 Jan 2024 08:44:00 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "On Sat, Jan 6, 2024 at 8:44 AM jian he <[email protected]> wrote:\n>\n> some tests after applying V33 and my small changes.\n> setup:\n> create table test_scalar1(js jsonb);\n> insert into test_scalar1 select jsonb '{\"a\":\"[12,13]\"}' FROM\n> generate_series(1,1e5) g;\n> create table test_scalar2(js jsonb);\n> insert into test_scalar2 select jsonb '{\"a\":12}' FROM\ngenerate_series(1,1e5) g;\n> create table test_array1(js jsonb);\n> insert into test_array1 select jsonb '{\"a\":[1,2,3,4,5]}' FROM\n> generate_series(1,1e5) g;\n> create table test_array2(js jsonb);\n> insert into test_array2 select jsonb '{\"a\": \"{1,2,3,4,5}\"}' FROM\n> generate_series(1,1e5) g;\n>\nsame as before, v33 plus my 4 minor changes (dot no-cfbot in previous\nthread).\nI realized my previous tests were wrong.\nbecause I use build type=debug and also add a bunch of c_args.\nso the following test results have no c_args, just -Dbuildtype=release.\nI actually tested several times.\n\n----------------------------------------return a scalar int4range\nexplain(costs off,analyze) SELECT item FROM test_scalar1, JSON_TABLE(js,\n'$.a' COLUMNS (item int4range PATH '$' omit quotes)) \\watch count=5\n56.487 ms\n\nexplain(costs off,analyze) select json_query(js, '$.a' returning int4range\nomit quotes) from test_scalar1 \\watch count=5\n27.272 ms\n\nexplain(costs off,analyze) select json_value(js,'$.a' returning int4range)\nfrom test_scalar1 \\watch count=5\n22.775 ms\n\nexplain(costs off,analyze) select (js->>'a')::int4range from test_scalar1\n\\watch count=5\n17.520 ms\n\nexplain(costs off,analyze) select trim(both '\"' from\njsonb_path_query_first(js,'$.a')::text)::int4range from test_scalar1 \\watch\ncount=5\n36.946 ms\n\n----------------------------return a numeric array from jsonb array.\nexplain(costs off,analyze) SELECT item FROM test_array1, JSON_TABLE(js,\n'$.a' COLUMNS (item numeric[] PATH '$')) \\watch count=5\n20.197 ms\n\nexplain(costs off,analyze) SELECT json_query(js, '$.a' returning numeric[])\nfrom test_array1 \\watch count=5\n69.759 ms\n\nexplain(costs off,analyze) SELECT\nreplace(replace(js->>'a','[','{'),']','}')::numeric[] from test_array1\n\\watch count=5\n62.114 ms\n\n----------------------------return a numeric array from jsonb string\nexplain(costs off,analyze) SELECT item FROM test_array2, JSON_TABLE(js,\n'$.a' COLUMNS (item numeric[] PATH '$' omit quotes)) \\watch count=5\n18.770 ms\n\nexplain(costs off,analyze) SELECT json_query(js,'$.a' returning numeric[]\nomit quotes) from test_array2 \\watch count=5\n46.373 ms\n\nexplain(costs off,analyze) SELECT trim(both '\"'\nfrom(jsonb_path_query(js,'$.a')::text))::numeric[] from test_array2 \\watch\ncount=5\n71.901 ms\n\nexplain(costs off,analyze) SELECT (js->>'a')::numeric[] from test_array2\n\\watch count=5\n35.572 ms\n\nexplain(costs off,analyze) SELECT trim(both '\"' from (json_query(js,'$.a'\nreturning text)))::numeric[] from test_array2 \\watch count=5\n58.755 ms\n\n----------------------------return a scalar numeric\nexplain(costs off,analyze) SELECT item FROM test_scalar2,\nJSON_TABLE(js, '$.a' COLUMNS (item numeric PATH '$' omit quotes)) \\watch\ncount=5\n18.723 ms\n\nexplain(costs off,analyze) select json_query(js,'$.a' returning numeric)\nfrom test_scalar2 \\watch count=5\n18.234 ms\n\nexplain(costs off,analyze) select json_value(js,'$.a' returning numeric)\nfrom test_scalar2 \\watch count=5\n11.667 ms\n\nexplain(costs off,analyze) select jsonb_path_query_first(js,'$.a')::numeric\nfrom test_scalar2 \\watch count=5\n17.691 
ms\n\nexplain(costs off,analyze) select jsonb_path_query(js,'$.a')::numeric from\ntest_scalar2 \\watch count=5\n31.596 ms\n\nexplain(costs off,analyze) select (js->>'a')::numeric from test_scalar2\n\\watch count=5\n13.887 ms\n\n----------------------------return two scalar numeric\nexplain(costs off,analyze) select (js->>'a')::numeric, (js->>'a')::numeric\nfrom test_scalar2 \\watch count=5\n22.201 ms\n\nexplain(costs off,analyze) SELECT item, item1 FROM test_scalar2,\nJSON_TABLE(js, '$.a' COLUMNS (item numeric PATH '$' omit quotes,\n item1 numeric PATH '$' omit quotes)) \\watch\ncount=5\n19.108 ms\n\nexplain(costs off,analyze) select json_value(js,'$.a' returning numeric),\n json_value(js,'$.a' returning numeric) from test_scalar2 \\watch\ncount=5\n17.915 ms\n\nOn Sat, Jan 6, 2024 at 8:44 AM jian he <[email protected]> wrote:>> some tests after applying V33 and my small changes.> setup:> create table test_scalar1(js jsonb);> insert into test_scalar1 select jsonb '{\"a\":\"[12,13]\"}' FROM> generate_series(1,1e5) g;> create table test_scalar2(js jsonb);> insert into test_scalar2 select jsonb '{\"a\":12}' FROM generate_series(1,1e5) g;> create table test_array1(js jsonb);> insert into test_array1 select jsonb '{\"a\":[1,2,3,4,5]}' FROM> generate_series(1,1e5) g;> create table test_array2(js jsonb);> insert into test_array2 select jsonb '{\"a\": \"{1,2,3,4,5}\"}' FROM> generate_series(1,1e5) g;>same as before, v33 plus my 4 minor changes (dot no-cfbot in previous thread).I realized my previous tests were wrong.because I use build type=debug and also add a bunch of c_args.so the following test results have no c_args, just -Dbuildtype=release.I actually tested several times.----------------------------------------return a scalar int4rangeexplain(costs off,analyze) SELECT item FROM test_scalar1, JSON_TABLE(js, '$.a' COLUMNS (item int4range PATH '$' omit quotes)) \\watch count=556.487 msexplain(costs off,analyze) select json_query(js, '$.a' returning int4range omit quotes) from test_scalar1 \\watch count=527.272 msexplain(costs off,analyze) select json_value(js,'$.a' returning int4range) from test_scalar1 \\watch count=522.775 msexplain(costs off,analyze) select (js->>'a')::int4range from test_scalar1 \\watch count=517.520 msexplain(costs off,analyze) select trim(both '\"' from jsonb_path_query_first(js,'$.a')::text)::int4range from test_scalar1 \\watch count=536.946 ms----------------------------return a numeric array from jsonb array.explain(costs off,analyze) SELECT item FROM test_array1, JSON_TABLE(js, '$.a' COLUMNS (item numeric[] PATH '$')) \\watch count=520.197 msexplain(costs off,analyze) SELECT json_query(js, '$.a' returning numeric[]) from test_array1 \\watch count=569.759 msexplain(costs off,analyze) SELECT replace(replace(js->>'a','[','{'),']','}')::numeric[] from test_array1 \\watch count=562.114 ms----------------------------return a numeric array from jsonb stringexplain(costs off,analyze) SELECT item FROM test_array2, JSON_TABLE(js, '$.a' COLUMNS (item numeric[] PATH '$' omit quotes)) \\watch count=518.770 msexplain(costs off,analyze) SELECT json_query(js,'$.a' returning numeric[] omit quotes) from test_array2 \\watch count=546.373 msexplain(costs off,analyze) SELECT trim(both '\"' from(jsonb_path_query(js,'$.a')::text))::numeric[] from test_array2 \\watch count=571.901 msexplain(costs off,analyze) SELECT (js->>'a')::numeric[] from test_array2 \\watch count=535.572 msexplain(costs off,analyze) SELECT trim(both '\"' from (json_query(js,'$.a' returning text)))::numeric[] from 
test_array2 \\watch count=558.755 ms----------------------------return a scalar numeric explain(costs off,analyze) SELECT item FROM test_scalar2,JSON_TABLE(js, '$.a' COLUMNS (item numeric PATH '$' omit quotes)) \\watch count=518.723 msexplain(costs off,analyze) select json_query(js,'$.a' returning numeric) from test_scalar2 \\watch count=518.234 msexplain(costs off,analyze) select json_value(js,'$.a' returning numeric) from test_scalar2 \\watch count=511.667 msexplain(costs off,analyze) select jsonb_path_query_first(js,'$.a')::numeric from test_scalar2 \\watch count=517.691 msexplain(costs off,analyze) select jsonb_path_query(js,'$.a')::numeric from test_scalar2 \\watch count=531.596 msexplain(costs off,analyze) select (js->>'a')::numeric from test_scalar2 \\watch count=513.887 ms----------------------------return two scalar numeric explain(costs off,analyze) select (js->>'a')::numeric, (js->>'a')::numeric from test_scalar2 \\watch count=522.201 msexplain(costs off,analyze) SELECT item, item1 FROM test_scalar2, JSON_TABLE(js, '$.a' COLUMNS (item numeric PATH '$' omit quotes, item1 numeric PATH '$' omit quotes)) \\watch count=519.108 msexplain(costs off,analyze) select json_value(js,'$.a' returning numeric), json_value(js,'$.a' returning numeric) from test_scalar2 \\watch count=517.915 ms",
"msg_date": "Mon, 8 Jan 2024 08:00:00 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "Hi,\n\nThought I'd share an update.\n\nI've been going through Jian He's comments (thanks for the reviews!),\nmost of which affect the last JSON_TABLE() patch and in some cases the\nquery functions patch (0007). It seems I'll need to spend a little\nmore time, especially on the JSON_TABLE() patch, as I'm finding things\nto improve other than those mentioned in the comments.\n\nAs for the preliminary patches 0001-0006, I'm thinking that it would\nbe a good idea to get them out of the way sooner rather than waiting\ntill the main patches are in perfect shape. I'd like to get them\ncommitted by next week after a bit of polishing, so if anyone would\nlike to take a look, please let me know. I'll post a new set\ntomorrow.\n\n0007, the query functions patch, also looks close to ready, though I\nmight need to change a few things in it as I work through the\nJSON_TABLE() changes.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 16 Jan 2024 19:00:29 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "I've been eyeballing the coverage report generated after applying all\npatches (but I only checked the code added by the 0008 patch). AFAICS\nthe coverage is pretty good. Some uncovered paths:\n\ncommands/explain.c (Hmm, I think this is a preexisting bug actually)\n\n 3893 18 : case T_TableFuncScan:\n 3894 18 : Assert(rte->rtekind == RTE_TABLEFUNC);\n 3895 18 : if (rte->tablefunc)\n 3896 0 : if (rte->tablefunc->functype == TFT_XMLTABLE)\n 3897 0 : objectname = \"xmltable\";\n 3898 : else /* Must be TFT_JSON_TABLE */\n 3899 0 : objectname = \"json_table\";\n 3900 : else\n 3901 18 : objectname = NULL;\n 3902 18 : objecttag = \"Table Function Name\";\n 3903 18 : break;\n\nparser/gram.y:\n\n 16940 : json_table_plan_cross:\n 16941 : json_table_plan_primary CROSS json_table_plan_primary\n 16942 39 : { $$ = makeJsonTableJoinedPlan(JSTPJ_CROSS, $1, $3, @1); }\n 16943 : | json_table_plan_cross CROSS json_table_plan_primary\n 16944 0 : { $$ = makeJsonTableJoinedPlan(JSTPJ_CROSS, $1, $3, @1); }\nNot really sure how critical this one is TBH.\n\n\nutils/adt/jsonpath_exec.c:\n\n 3492 : /* Recursively reset scan and its child nodes */\n 3493 : static void\n 3494 120 : JsonTableRescanRecursive(JsonTablePlanState * state)\n 3495 : {\n 3496 120 : if (state->type == JSON_TABLE_JOIN_STATE)\n 3497 : {\n 3498 0 : JsonTableJoinState *join = (JsonTableJoinState *) state;\n 3499 : \n 3500 0 : JsonTableRescanRecursive(join->left);\n 3501 0 : JsonTableRescanRecursive(join->right);\n 3502 0 : join->advanceRight = false;\n 3503 : }\n\nI think this one had better be covered.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"The saddest aspect of life right now is that science gathers knowledge faster\n than society gathers wisdom.\" (Isaac Asimov)\n\n\n",
"msg_date": "Thu, 18 Jan 2024 12:46:24 +0100",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "Hi,\n\nOn Fri, Dec 22, 2023 at 10:01 PM jian he <[email protected]> wrote:\n> Hi\n\nThanks for the reviews.\n\n> v33-0007-SQL-JSON-query-functions.patch, commit message:\n> This introduces the SQL/JSON functions for querying JSON data using\n> jsonpath expressions. The functions are:\n>\n> should it be \"These functions are\"\n\nRewrote that sentence to say \"introduces the following SQL/JSON functions...\"\n\n> + <para>\n> + Returns true if the SQL/JSON <replaceable>path_expression</replaceable>\n> + applied to the <replaceable>context_item</replaceable> using the\n> + <replaceable>value</replaceable>s yields any items.\n> + The <literal>ON ERROR</literal> clause specifies what is returned if\n> + an error occurs; the default is to return <literal>FALSE</literal>.\n> + Note that if the <replaceable>path_expression</replaceable>\n> + is <literal>strict</literal>, an error is generated if it\n> yields no items.\n> + </para>\n>\n> I think the following description is more accurate.\n> + Note that if the <replaceable>path_expression</replaceable>\n> + is <literal>strict</literal> and the <literal>ON\n> ERROR</literal> clause is <literal> ERROR</literal>,\n> + an error is generated if it yields no items.\n> + </para>\n\nTrue, fixed.\n\n> +/*\n> + * transformJsonTable -\n> + * Transform a raw JsonTable into TableFunc.\n> + *\n> + * Transform the document-generating expression, the row-generating expression,\n> + * the column-generating expressions, and the default value expressions.\n> + */\n> +ParseNamespaceItem *\n> +transformJsonTable(ParseState *pstate, JsonTable *jt)\n> +{\n> + JsonTableParseContext cxt;\n> + TableFunc *tf = makeNode(TableFunc);\n> + JsonFuncExpr *jfe = makeNode(JsonFuncExpr);\n> + JsonExpr *je;\n> + JsonTablePlan *plan = jt->plan;\n> + char *rootPathName = jt->pathname;\n> + char *rootPath;\n> + bool is_lateral;\n> +\n> + if (jt->on_empty)\n> + ereport(ERROR,\n> + (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n> + errmsg(\"ON EMPTY not allowed in JSON_TABLE\"),\n> + parser_errposition(pstate,\n> + exprLocation((Node *) jt->on_empty))));\n>\n> This error may be slightly misleading?\n> you can add ON EMPTY inside the COLUMNS part, like the following:\n> SELECT * FROM (VALUES ('1'), ('\"1\"')) vals(js) LEFT OUTER JOIN\n> JSON_TABLE(vals.js::jsonb, '$' COLUMNS (a int PATH '$' default 1 ON\n> empty)) jt ON true;\n\nThat check is to catch an ON EMPTY specified *outside* the COLUMN(...)\nclause of a JSON_TABLE(...) expression. It was added during a recent\ngram.y refactoring, but maybe that wasn't a great idea. It seems\nbetter to disallow the ON EMPTY clause in the grammar itself.\n\n> + <para>\n> + Each <literal>NESTED PATH</literal> clause can generate one or more\n> + columns. Columns produced by <literal>NESTED PATH</literal>s at the\n> + same level are considered to be <firstterm>siblings</firstterm>,\n> + while a column produced by a <literal>NESTED PATH</literal> is\n> + considered to be a child of the column produced by a\n> + <literal>NESTED PATH</literal> or row expression at a higher level.\n> + Sibling columns are always joined first. 
Once they are processed,\n> + the resulting rows are joined to the parent row.\n> + </para>\n> Does changing to the following make sense?\n> + considered to be a <firstterm>child</firstterm> of the column produced by a\n> + the resulting rows are joined to the <firstterm>parent</firstterm> row.\n\nTerms \"child\" and \"parent\" are already introduced in previous\nparagraphs, so no need for the <firstterm> tag.\n\n> seems like `format json_representation`, not listed in the\n> documentation, but json_representation is \"Parameters\", do we need\n> add a section to explain it?\n> even though I think currently we can only do `FORMAT JSON`.\n\nThe syntax appears to allow an optional ENCODING UTF8 too, so I've\ngotten rid of json_representation and literally listed out what the\nsyntax says.\n\n> SELECT * FROM JSON_TABLE(jsonb '123', '$' COLUMNS (item int PATH '$'\n> empty on empty)) bar;\n> ERROR: cannot cast jsonb array to type integer\n> The error is the same as the output of the following:\n> SELECT * FROM JSON_TABLE(jsonb '123', '$' COLUMNS (item int PATH '$'\n> empty array on empty )) bar;\n> but these two are different things?\n\nEMPTY and EMPTY ARRAY both spell out an array:\n\njson_behavior_type:\n...\n | EMPTY_P ARRAY { $$ = JSON_BEHAVIOR_EMPTY_ARRAY; }\n /* non-standard, for Oracle compatibility only */\n | EMPTY_P { $$ = JSON_BEHAVIOR_EMPTY_ARRAY; }\n\n> + /* FALLTHROUGH */\n> + case JTC_EXISTS:\n> + case JTC_FORMATTED:\n> + {\n> + Node *je;\n> + CaseTestExpr *param = makeNode(CaseTestExpr);\n> +\n> + param->collation = InvalidOid;\n> + param->typeId = cxt->contextItemTypid;\n> + param->typeMod = -1;\n> +\n> + if (rawc->wrapper != JSW_NONE &&\n> + rawc->quotes != JS_QUOTES_UNSPEC)\n> + ereport(ERROR,\n> + (errcode(ERRCODE_SYNTAX_ERROR),\n> + errmsg(\"cannot use WITH WRAPPER clause for formatted colunmns\"\n> + \" without also specifying OMIT/KEEP QUOTES\"),\n> + parser_errposition(pstate, rawc->location)));\n>\n> typo, should be \"formatted columns\".\n\nOops.\n\n> I suspect people will be confused with the meaning of \"formatted column\".\n> maybe we can replace this part:\"cannot use WITH WRAPPER clause for\n> formatted column\"\n> to\n> \"SQL/JSON WITH WRAPPER behavior must not be specified when FORMAT\n> clause is used\"\n>\n> SELECT * FROM JSON_TABLE(jsonb '\"world\"', '$' COLUMNS (item text\n> FORMAT JSON PATH '$' with wrapper KEEP QUOTES));\n> ERROR: cannot use WITH WRAPPER clause for formatted colunmns without\n> also specifying OMIT/KEEP QUOTES\n> LINE 1: ...T * FROM JSON_TABLE(jsonb '\"world\"', '$' COLUMNS (item text ...\n> ^\n> this error is misleading, since now I am using WITH WRAPPER clause for\n> formatted columns and specified KEEP QUOTES.\n>\n> in parse_expr.c, we have errmsg(\"SQL/JSON QUOTES behavior must not be\n> specified when WITH WRAPPER is used\").\n\nIt seems to me that we should just remove the above check in\nappendJsonTableColumns() and let the check(s) in parse_expr.c take\ncare of the various allowed/disallowed scenarios for \"formatted\"\ncolumns. 
Also see further below...\n\n> +/*\n> + * Fetch next row from a cross/union joined scan.\n> + *\n> + * Returns false at the end of a scan, true otherwise.\n> + */\n> +static bool\n> +JsonTablePlanNextRow(JsonTablePlanState * state)\n> +{\n> + JsonTableJoinState *join;\n> +\n> + if (state->type == JSON_TABLE_SCAN_STATE)\n> + return JsonTableScanNextRow((JsonTableScanState *) state);\n> +\n> + join = (JsonTableJoinState *) state;\n> + if (join->advanceRight)\n> + {\n> + /* fetch next inner row */\n> + if (JsonTablePlanNextRow(join->right))\n> + return true;\n> +\n> + /* inner rows are exhausted */\n> + if (join->cross)\n> + join->advanceRight = false; /* next outer row */\n> + else\n> + return false; /* end of scan */\n> + }\n> +\n> + while (!join->advanceRight)\n> + {\n> + /* fetch next outer row */\n> + bool left = JsonTablePlanNextRow(join->left);\n>\n> + bool left = JsonTablePlanNextRow(join->left);\n> JsonTablePlanNextRow function comment says \"Returns false at the end\n> of a scan, true otherwise.\",\n> so bool variable name as \"left\" seems not so good?\n\nHmm, maybe, \"more\" might be more appropriate given the context.\n\n> It might help others understand the whole code by adding some comments on\n> struct JsonTableScanState and struct JsonTableJoinState.\n> since json_table patch is quite recursive, IMHO.\n\nAgree that the various JsonTable parser/executor comments are lacking.\nWorking on adding more commentary and improving the notation -- struct\nnames, etc.\n\n> I did some minor refactoring in parse_expr.c, since some code like\n> transformJsonExprCommon is duplicated.\n\nThanks, I've adopted some of the ideas in your patch.\n\nOn Mon, Dec 25, 2023 at 2:03 PM jian he <[email protected]> wrote:\n> +/*\n> + * JsonTableFetchRow\n> + * Prepare the next \"current\" tuple for upcoming GetValue calls.\n> + * Returns FALSE if the row-filter expression returned no more rows.\n> + */\n> +static bool\n> +JsonTableFetchRow(TableFuncScanState *state)\n> +{\n> + JsonTableExecContext *cxt =\n> + GetJsonTableExecContext(state, \"JsonTableFetchRow\");\n> +\n> + if (cxt->empty)\n> + return false;\n> +\n> + return JsonTableScanNextRow(cxt->root);\n> +}\n>\n> The declaration of struct JsonbTableRoutine, SetRowFilter field is\n> null. So I am confused by the above comment.\n\nYeah, it might be a leftover from copy-pasting the XML code. Reworded\nthe comment to not mention SetRowFilter.\n\n> also seems the `if (cxt->empty)` part never called.\n\nI don't understand why the context struct has that empty flag too, it\nmight be a leftover field. Removed.\n\n> +static inline JsonTableExecContext *\n> +GetJsonTableExecContext(TableFuncScanState *state, const char *fname)\n> +{\n> + JsonTableExecContext *result;\n> +\n> + if (!IsA(state, TableFuncScanState))\n> + elog(ERROR, \"%s called with invalid TableFuncScanState\", fname);\n> + result = (JsonTableExecContext *) state->opaque;\n> + if (result->magic != JSON_TABLE_EXEC_CONTEXT_MAGIC)\n> + elog(ERROR, \"%s called with invalid TableFuncScanState\", fname);\n> +\n> + return result;\n> +}\n> I think Assert(IsA(state, TableFuncScanState)) would be better.\n\nHmm, better to leave this as-is to be consistent with what the XML\ncode is doing. 
Though I also wonder why it's not an Assert in the\nfirst place.\n\n> +/*\n> + * JsonTablePlanType -\n> + * flags for JSON_TABLE plan node types representation\n> + */\n> +typedef enum JsonTablePlanType\n> +{\n> + JSTP_DEFAULT,\n> + JSTP_SIMPLE,\n> + JSTP_JOINED,\n> +} JsonTablePlanType;\n> it would be better to add some comments on it. thanks.\n>\n> JsonTablePlanNextRow is quite recursive! Adding more explanation would\n> be helpful, thanks.\n\nWill do.\n\n> +/* Recursively reset scan and its child nodes */\n> +static void\n> +JsonTableRescanRecursive(JsonTablePlanState * state)\n> +{\n> + if (state->type == JSON_TABLE_JOIN_STATE)\n> + {\n> + JsonTableJoinState *join = (JsonTableJoinState *) state;\n> +\n> + JsonTableRescanRecursive(join->left);\n> + JsonTableRescanRecursive(join->right);\n> + join->advanceRight = false;\n> + }\n> + else\n> + {\n> + JsonTableScanState *scan = (JsonTableScanState *) state;\n> +\n> + Assert(state->type == JSON_TABLE_SCAN_STATE);\n> + JsonTableRescan(scan);\n> + if (scan->plan.nested)\n> + JsonTableRescanRecursive(scan->plan.nested);\n> + }\n> +}\n>\n> From the coverage report, I noticed the first IF branch in\n> JsonTableRescanRecursive never called.\n\nWill look into this.\n\n> + foreach(col, columns)\n> + {\n> + JsonTableColumn *rawc = castNode(JsonTableColumn, lfirst(col));\n> + Oid typid;\n> + int32 typmod;\n> + Node *colexpr;\n> +\n> + if (rawc->name)\n> + {\n> + /* make sure column names are unique */\n> + ListCell *colname;\n> +\n> + foreach(colname, tf->colnames)\n> + if (!strcmp((const char *) colname, rawc->name))\n> + ereport(ERROR,\n> + (errcode(ERRCODE_SYNTAX_ERROR),\n> + errmsg(\"column name \\\"%s\\\" is not unique\",\n> + rawc->name),\n> + parser_errposition(pstate, rawc->location)));\n>\n> this `/* make sure column names are unique */` logic part already\n> validated in isJsonTablePathNameDuplicate, so we don't need it?\n> actually isJsonTablePathNameDuplicate validates both column name and pathname.\n\nI think you are right. All columns/path names are de-duplicated much\nearlier at the beginning of transformJsonTable(), so there's no need\nfor the above check.\n\nThat said, I don't know why column and path names share the namespace\nor whether that has any semantic issues. Maybe there aren't, but will\nthink some more on that.\n\n> select jt.* from jsonb_table_test jtt,\n> json_table (jtt.js,'strict $[*]' as p\n> columns (n for ordinality,\n> nested path 'strict $.b[*]' as pb columns ( c int path '$' ),\n> nested path 'strict $.b[*]' as pb columns ( s int path '$' ))\n> ) jt;\n>\n> ERROR: duplicate JSON_TABLE column name: pb\n> HINT: JSON_TABLE column names must be distinct from one another.\n> the error is not very accurate, since pb is a pathname?\n\nI think this can be improved by passing the information whether it's a\ncolumn or path name to the deduplication code. 
I've reworked that\ncode to get more useful error info.\n\nOn Wed, Jan 3, 2024 at 7:53 PM jian he <[email protected]> wrote:\n> some more minor issues:\n> SELECT * FROM JSON_TABLE(jsonb '{\"a\":[123,2]}', '$'\n> COLUMNS (item int[] PATH '$.a' error on error, foo text path '$'\n> error on error)) bar;\n> ERROR: JSON path expression in JSON_VALUE should return singleton scalar item\n>\n> the error message seems not so great, imho.\n> since the JSON_TABLE doc entries didn't mention that\n> JSON_TABLE actually transformed to json_value, json_query, json_exists.\n\nHmm, yes, the context whether the JSON_VALUE() is user-specified or\ninternally generated is not readily available where the error is\nreported.\n\nI'm inlinced to document this aspect of JSON_TABLE(), instead of\ncomplicating the executor interfaces in order to make the error\nmessage better.\n\n> JSON_VALUE even though cannot specify KEEP | OMIT QUOTES.\n> It might be a good idea to mention the default is to omit quotes in the doc.\n> because JSON_TABLE actually transformed to json_value, json_query, json_exists.\n> JSON_TABLE can specify quotes behavior freely.\n\nDone.\n\n> (json_query)\n> + This function must return a JSON string, so if the path expression\n> + returns multiple SQL/JSON items, you must wrap the result using the\n> + <literal>WITH WRAPPER</literal> clause. If the wrapper is\n> + <literal>UNCONDITIONAL</literal>, an array wrapper will always\n> + be applied, even if the returned value is already a single JSON object\n> + or an array, but if it is <literal>CONDITIONAL</literal>, it\n> will not be\n> + applied to a single array or object. <literal>UNCONDITIONAL</literal>\n> + is the default. If the result is a scalar string, by default the value\n> + returned will have surrounding quotes making it a valid JSON value,\n> + which can be made explicit by specifying <literal>KEEP\n> QUOTES</literal>.\n> + Conversely, quotes can be omitted by specifying <literal>OMIT\n> QUOTES</literal>.\n> + The returned <replaceable>data_type</replaceable> has the\n> same semantics\n> + as for constructor functions like <function>json_objectagg</function>;\n> + the default returned type is <type>jsonb</type>.\n>\n> + <para>\n> + Returns the result of applying the\n> + <replaceable>path_expression</replaceable> to the\n> + <replaceable>context_item</replaceable> using the\n> + <literal>PASSING</literal> <replaceable>value</replaceable>s. The\n> + extracted value must be a single <acronym>SQL/JSON</acronym> scalar\n> + item. For results that are objects or arrays, use the\n> + <function>json_query</function> function instead.\n> + The returned <replaceable>data_type</replaceable> has the\n> same semantics\n> + as for constructor functions like <function>json_objectagg</function>.\n> + The default returned type is <type>text</type>.\n> + The <literal>ON ERROR</literal> and <literal>ON EMPTY</literal>\n> + clauses have similar semantics as mentioned in the description of\n> + <function>json_query</function>.\n> + </para>\n>\n> + The returned <replaceable>data_type</replaceable> has the\n> same semantics\n> + as for constructor functions like <function>json_objectagg</function>.\n>\n> IMHO, the above description is not so good, since the function\n> json_objectagg is listed in functions-aggregate.html,\n> using Ctrl + F in the browser cannot find json_objectagg in functions-json.html.\n>\n> for json_query, maybe we can rephrase like:\n> the RETURNING clause, which specifies the data type returned. 
It must\n> be a type for which there is a cast from text to that type.\n> By default, the <type>jsonb</type> type is returned.\n>\n> json_value:\n> the RETURNING clause, which specifies the data type returned. It must\n> be a type for which there is a cast from text to that type.\n> By default, the <type>text</type> type is returned.\n\nFixed the description of returned type for both json_query() and\njson_value(). For the latter, the cast to the returned type must\nexist from each possible JSON scalar type viz. text, boolean, numeric,\nand various datetime types.\n\nOn Wed, Jan 3, 2024 at 7:50 PM jian he <[email protected]> wrote:\n> Hi. still based on v33.\n> JSON_TABLE:\n> I also refactor parse_jsontable.c error reporting, now the error\n> message will be consistent with json_query.\n> now you can specify wrapper freely as long as you don't specify\n> wrapper and quote at the same time.\n> overall, json_table behavior is more consistent with json_query and json_value.\n> I also added some tests.\n\nThanks for the patches. I've taken the tests, some of your suggested\ncode changes, and made some changes of my own. Some of the new tests\ngive a different error message than what your patch had but I think\nwhat I have is fine.\n\n> +void\n> +ExecEvalJsonCoercion(ExprState *state, ExprEvalStep *op,\n> + ExprContext *econtext)\n> +{\n> + JsonCoercion *coercion = op->d.jsonexpr_coercion.coercion;\n> + ErrorSaveContext *escontext = op->d.jsonexpr_coercion.escontext;\n> + Datum res = *op->resvalue;\n> + bool resnull = *op->resnull;\n> +\n> + if (coercion->via_populate)\n> + {\n> + void *cache = op->d.jsonexpr_coercion.json_populate_type_cache;\n> +\n> + *op->resvalue = json_populate_type(res, JSONBOID,\n> + coercion->targettype,\n> + coercion->targettypmod,\n> + &cache,\n> + econtext->ecxt_per_query_memory,\n> + op->resnull, (Node *) escontext);\n> + }\n> + else if (coercion->via_io)\n> + {\n> + FmgrInfo *input_finfo = op->d.jsonexpr_coercion.input_finfo;\n> + Oid typioparam = op->d.jsonexpr_coercion.typioparam;\n> + char *val_string = resnull ? NULL :\n> + JsonbUnquote(DatumGetJsonbP(res));\n> +\n> + (void) InputFunctionCallSafe(input_finfo, val_string, typioparam,\n> + coercion->targettypmod,\n> + (Node *) escontext,\n> + op->resvalue);\n> + }\n> via_populate, via_io should be mutually exclusive.\n> your patch, in some cases, both (coercion->via_io) and\n> (coercion->via_populate) are true.\n> (we can use elog find out).\n> I refactor coerceJsonFuncExprOutput, so now it will be mutually exclusive.\n> I also add asserts to it.\n\nI realized that we don't really need the via_io and via_populate\nflags. You can see in the latest patch that the decision of whether\nto call json_populate_type() or the RETURNING type's input function is\nnow deferred to run-time or ExecEvalJsonCoercion(). 
The new comment\nshould also make it clear why one or the other is used for a given\nsource datum passed to ExecEvalJsonCoercion().\n\n> By default, json_query keeps quotes, json_value omit quotes.\n> However, json_table will be transformed to json_value or json_query\n> based on certain criteria,\n> that means we need to explicitly set the JsonExpr->omit_quotes in the\n> function transformJsonFuncExpr\n> for case JSON_QUERY_OP and JSON_VALUE_OP.\n>\n> We need to know the QUOTE behavior in the function ExecEvalJsonCoercion.\n> Because for ExecEvalJsonCoercion, the coercion datum source can be a\n> scalar string item,\n> scalar items means RETURNING clause is dependent on QUOTE behavior.\n> keep quotes, omit quotes the results are different.\n> consider\n> JSON_QUERY(jsonb'{\"rec\": \"[1,2]\"}', '$.rec' returning int4range omit quotes);\n> and\n> JSON_QUERY(jsonb'{\"rec\": \"[1,2]\"}', '$.rec' returning int4range omit quotes);\n>\n> to make sure ExecEvalJsonCoercion can distinguish keep and omit quotes,\n> I added a bool keep_quotes to struct JsonCoercion.\n> (maybe there is a more simple way, so far, that's what I come up with).\n> the keep_quotes value will be settled in the function transformJsonFuncExpr.\n> After refactoring, in ExecEvalJsonCoercion, keep_quotes is true then\n> call JsonbToCString, else call JsonbUnquote.\n>\n> example:\n> SELECT JSON_QUERY(jsonb'{\"rec\": \"{1,2,3}\"}', '$.rec' returning int[]\n> omit quotes);\n> without my changes, return NULL\n> with my changes:\n> {1,2,3}\n>\n> JSON_VALUE:\n> main changes:\n> --- a/src/test/regress/expected/jsonb_sqljson.out\n> +++ b/src/test/regress/expected/jsonb_sqljson.out\n> @@ -301,7 +301,11 @@ SELECT JSON_VALUE(jsonb '\"2017-02-20\"', '$'\n> RETURNING date) + 9;\n> -- Test NULL checks execution in domain types\n> CREATE DOMAIN sqljsonb_int_not_null AS int NOT NULL;\n> SELECT JSON_VALUE(jsonb 'null', '$' RETURNING sqljsonb_int_not_null);\n> -ERROR: domain sqljsonb_int_not_null does not allow null values\n> + json_value\n> +------------\n> +\n> +(1 row)\n> +\n> I think the change is correct, given `SELECT JSON_VALUE(jsonb 'null',\n> '$' RETURNING int4range);` returns NULL.\n>\n> I also attached a test.sql, without_patch.out (apply v33 only),\n> with_patch.out (my changes based on v33).\n> So you can see the difference after applying the patch, in case, my\n> wording is not clear.\n\nTo address these points:\n\n* I've taken your idea to make omit/keep_quotes available to\nExecEvalJsonCoercion().\n\n* I've also taken your suggestion to fix parse_jsontable.c such that\nWRAPPER/QUOTES combinations specified with JSON_TABLE() columns work\nwithout many arbitrary-looking restrictions.\n\nPlease take a look at the attached latest patch and let me know if\nanything looks amiss.\n\nOn Sat, Jan 6, 2024 at 9:45 AM jian he <[email protected]> wrote:\n> some tests after applying V33 and my small changes.\n> setup:\n> create table test_scalar1(js jsonb);\n> insert into test_scalar1 select jsonb '{\"a\":\"[12,13]\"}' FROM\n> generate_series(1,1e5) g;\n> create table test_scalar2(js jsonb);\n> insert into test_scalar2 select jsonb '{\"a\":12}' FROM generate_series(1,1e5) g;\n> create table test_array1(js jsonb);\n> insert into test_array1 select jsonb '{\"a\":[1,2,3,4,5]}' FROM\n> generate_series(1,1e5) g;\n> create table test_array2(js jsonb);\n> insert into test_array2 select jsonb '{\"a\": \"{1,2,3,4,5}\"}' FROM\n> generate_series(1,1e5) g;\n>\n> tests:\n> ----------------------------------------return a scalar int4range\n> explain(costs 
off,analyze) SELECT item FROM test_scalar1,\n> JSON_TABLE(js, '$.a' COLUMNS (item int4range PATH '$' omit quotes))\n> \\watch count=5\n> 237.753 ms\n>\n> explain(costs off,analyze) select json_query(js, '$.a' returning\n> int4range omit quotes) from test_scalar1 \\watch count=5\n> 462.379 ms\n>\n> explain(costs off,analyze) select json_value(js,'$.a' returning\n> int4range) from test_scalar1 \\watch count=5\n> 362.148 ms\n>\n> explain(costs off,analyze) select (js->>'a')::int4range from\n> test_scalar1 \\watch count=5\n> 301.089 ms\n>\n> explain(costs off,analyze) select trim(both '\"' from\n> jsonb_path_query_first(js,'$.a')::text)::int4range from test_scalar1\n> \\watch count=5\n> 643.337 ms\n> ---------------------------------\n> overall, json_value is faster than json_query. but json_value can not\n> deal with arrays in some cases.\n\nI think that may be explained by the fact that JsonPathQuery() has\nthis step, which JsonPathValue() does not:\n\n if (singleton)\n return JsonbPGetDatum(JsonbValueToJsonb(singleton));\n\nI can see JsonbValueToJsonb() in perf profile when running the\nbenchmark you shared. I don't know if there's any way to make that\nbetter.\n\n> but as you can see, in some cases, json_value and json_query are not\n> as fast as our current implementation\n\nYeah, there *is* some expected overhead to using the new functions;\nExecEvalJsonExprPath() appears in the top 5 frames of perf profile,\nfor example. The times I see are similar to yours and I don't find\nthe difference to be very drastic.\n\npostgres=# \\o /dev/null\npostgres=# explain(costs off,analyze) select (js->>'a') from\ntest_scalar1 \\watch count=3\nTime: 21.581 ms\nTime: 18.838 ms\nTime: 21.589 ms\n\npostgres=# explain(costs off,analyze) select json_query(js,'$.a') from\ntest_scalar1 \\watch count=3\nTime: 38.562 ms\nTime: 34.251 ms\nTime: 32.681 ms\n\npostgres=# explain(costs off,analyze) select json_value(js,'$.a') from\ntest_scalar1 \\watch count=3\nTime: 28.595 ms\nTime: 23.947 ms\nTime: 25.334 ms\n\npostgres=# explain(costs off,analyze) select item from test_scalar1,\njson_table(js, '$.a' columns (item int4range path '$')); \\watch\ncount=3\nTime: 52.739 ms\nTime: 53.996 ms\nTime: 50.774 ms\n\nAttached v34 of all of the patches. 0008 may be considered to be WIP\ngiven the points I mentioned above -- need to add a bit more\ncommentary about JSON_TABLE plan implementation and other\nmiscellaneous fixes.\n\nAs said in my previous email, I'd like to commit 0001-0007 next week.\n\n--\nThanks, Amit Langote\nEDB: http://www.enterprisedb.com",
"msg_date": "Thu, 18 Jan 2024 22:12:57 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: remaining sql/json patches"
},
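As a concrete illustration of the sibling/parent join behavior described in the doc text quoted in the message above, here is a rough sketch using the NESTED PATH column syntax that appears elsewhere in this thread; the data is hypothetical and the exact output is not asserted here:

select jt.* from json_table(
  jsonb '[{"a": 1, "b": [11, 12], "c": [21, 22]}]',
  '$[*]' columns (
    n for ordinality,
    a int path '$.a',
    nested path '$.b[*]' columns (b int path '$'),
    nested path '$.c[*]' columns (c int path '$')
  )
) jt;
-- The two NESTED PATH clauses are siblings, so their rows are combined first
-- (the b rows and c rows are not cross-joined), and only then is the result
-- joined to the parent row carrying n and a, as the quoted paragraph describes.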
{
"msg_contents": "On Thu, Jan 18, 2024 at 10:12 PM Amit Langote <[email protected]> wrote:\n> Attached v34 of all of the patches. 0008 may be considered to be WIP\n> given the points I mentioned above -- need to add a bit more\n> commentary about JSON_TABLE plan implementation and other\n> miscellaneous fixes.\n\nOops, I had forgotten to update the ECPG test's expected output in\n0008. Fixed in the attached.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com",
"msg_date": "Thu, 18 Jan 2024 22:35:41 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "On 2024-Jan-18, Alvaro Herrera wrote:\n\n> commands/explain.c (Hmm, I think this is a preexisting bug actually)\n> \n> 3893 18 : case T_TableFuncScan:\n> 3894 18 : Assert(rte->rtekind == RTE_TABLEFUNC);\n> 3895 18 : if (rte->tablefunc)\n> 3896 0 : if (rte->tablefunc->functype == TFT_XMLTABLE)\n> 3897 0 : objectname = \"xmltable\";\n> 3898 : else /* Must be TFT_JSON_TABLE */\n> 3899 0 : objectname = \"json_table\";\n> 3900 : else\n> 3901 18 : objectname = NULL;\n> 3902 18 : objecttag = \"Table Function Name\";\n> 3903 18 : break;\n\nIndeed -- the problem seems to be that add_rte_to_flat_rtable is\ncreating a new RTE and zaps the ->tablefunc pointer for it. So when\nEXPLAIN goes to examine the struct, there's a NULL pointer there and\nnothing is printed.\n\nOne simple fix is to change add_rte_to_flat_rtable so that it doesn't\nzero out the tablefunc pointer, but this is straight against what that\nfunction is trying to do, namely to remove substructure. Which means\nthat we need to preserve the name somewhere else. I added a new member\nto RangeTblEntry for this, which perhaps is a little ugly. So here's\nthe patch for that. (I also added an alias to one XMLTABLE invocation\nunder EXPLAIN, to show what it looks like when an alias is specified.\nOtherwise they're always shown as \"XMLTABLE\" \"xmltable\" which is a bit\ndumb).\n\nAnother possible way out is to decide that we don't want the\n\"objectname\" to be reported here. I admit it's perhaps redundant. In\nthis case we'd just remove lines 3896-3899 shown above and let it be\nNULL.\n\nThoughts?\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/",
"msg_date": "Thu, 18 Jan 2024 18:11:18 +0100",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "On Fri, Jan 19, 2024 at 2:11 AM Alvaro Herrera <[email protected]> wrote:\n> On 2024-Jan-18, Alvaro Herrera wrote:\n>\n> > commands/explain.c (Hmm, I think this is a preexisting bug actually)\n> >\n> > 3893 18 : case T_TableFuncScan:\n> > 3894 18 : Assert(rte->rtekind == RTE_TABLEFUNC);\n> > 3895 18 : if (rte->tablefunc)\n> > 3896 0 : if (rte->tablefunc->functype == TFT_XMLTABLE)\n> > 3897 0 : objectname = \"xmltable\";\n> > 3898 : else /* Must be TFT_JSON_TABLE */\n> > 3899 0 : objectname = \"json_table\";\n> > 3900 : else\n> > 3901 18 : objectname = NULL;\n> > 3902 18 : objecttag = \"Table Function Name\";\n> > 3903 18 : break;\n>\n> Indeed -- the problem seems to be that add_rte_to_flat_rtable is\n> creating a new RTE and zaps the ->tablefunc pointer for it. So when\n> EXPLAIN goes to examine the struct, there's a NULL pointer there and\n> nothing is printed.\n\nAh yes.\n\n> One simple fix is to change add_rte_to_flat_rtable so that it doesn't\n> zero out the tablefunc pointer, but this is straight against what that\n> function is trying to do, namely to remove substructure.\n\nYes.\n\n> Which means\n> that we need to preserve the name somewhere else. I added a new member\n> to RangeTblEntry for this, which perhaps is a little ugly. So here's\n> the patch for that.\n>\n> (I also added an alias to one XMLTABLE invocation\n> under EXPLAIN, to show what it looks like when an alias is specified.\n> Otherwise they're always shown as \"XMLTABLE\" \"xmltable\" which is a bit\n> dumb).\n\nThanks for the patch. Seems alright to me.\n\n> Another possible way out is to decide that we don't want the\n> \"objectname\" to be reported here. I admit it's perhaps redundant. In\n> this case we'd just remove lines 3896-3899 shown above and let it be\n> NULL.\n\nShowing the function's name spelled out in the query (XMLTABLE /\nJSON_TABLE) seems fine to me, even though maybe a bit redundant, yes.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 19 Jan 2024 12:09:49 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "play with domain types.\nin ExecEvalJsonCoercion, seems func json_populate_type cannot cope\nwith domain type.\n\ntests:\ndrop domain test;\ncreate domain test as int[] check ( array_length(value,1) =2 and\n(value[1] = 1 or value[2] = 2));\nSELECT * from JSON_QUERY(jsonb'{\"rec\": \"{1,2,3}\"}', '$.rec' returning\ntest omit quotes);\nSELECT * from JSON_QUERY(jsonb'{\"rec\": \"{1,11}\"}', '$.rec' returning\ntest keep quotes);\nSELECT * from JSON_QUERY(jsonb'{\"rec\": \"{2,11}\"}', '$.rec' returning\ntest omit quotes error on error);\nSELECT * from JSON_QUERY(jsonb'{\"rec\": \"{2,2}\"}', '$.rec' returning\ntest keep quotes error on error);\n\nSELECT * from JSON_QUERY(jsonb'{\"rec\": [1,2,3]}', '$.rec' returning\ntest omit quotes );\nSELECT * from JSON_QUERY(jsonb'{\"rec\": [1,2,3]}', '$.rec' returning\ntest omit quotes null on error);\nSELECT * from JSON_QUERY(jsonb'{\"rec\": [1,2,3]}', '$.rec' returning\ntest null on error);\nSELECT * from JSON_QUERY(jsonb'{\"rec\": [1,11]}', '$.rec' returning\ntest omit quotes);\nSELECT * from JSON_QUERY(jsonb'{\"rec\": [2,2]}', '$.rec' returning test\nomit quotes);\n\nMany domain related tests seem not right.\nlike the following, i think it should just return null.\n+SELECT JSON_QUERY(jsonb '{\"a\": 1}', '$.b' RETURNING sqljsonb_int_not_null);\n+ERROR: domain sqljsonb_int_not_null does not allow null values\n\n--another example\nSELECT JSON_QUERY(jsonb '{\"a\": 1}', '$.b' RETURNING\nsqljsonb_int_not_null null on error);\n\nMaybe in node JsonCoercion, we don't need both via_io and\nvia_populate, but we can have one bool to indicate either call\nInputFunctionCallSafe or json_populate_type in ExecEvalJsonCoercion.\n\n\n",
"msg_date": "Fri, 19 Jan 2024 18:46:09 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "based on v35.\nNow I only applied from 0001 to 0007.\nFor {DEFAULT expression ON EMPTY} | {DEFAULT expression ON ERROR}\nrestrict DEFAULT expression be either Const node or FuncExpr node.\nso these 3 SQL/JSON functions can be used in the btree expression index.\n\nI made some big changes on the doc. (see attachment)\nlist (json_query, json_exists, json_value) as a new <section2> may be\na good idea.\n\nfollow these two links, we can see the difference.\nonly apply v35, 0001 to 0007: https://v35-functions-json-html.vercel.app\napply v35, 0001 to 0007 plus my changes:\nhttps://html-starter-seven-pied.vercel.app\n\n\nminor issues:\n+ Note that if the <replaceable>path_expression</replaceable>\n+ is <literal>strict</literal>, an error is generated if it yields no\n+ items, provided the specified <literal>ON ERROR</literal> behavior is\n+ <literal>ERROR</literal>.\n\nhow about something like this:\n+ Note that if the <replaceable>path_expression</replaceable>\n+ is <literal>strict</literal> and <literal>ON ERROR</literal>\nbehavior is specified\n+ <literal>ERROR</literal>, an error is generated if it yields no\n+ items\n\n+ <note>\n+ <para>\n+ SQL/JSON path expression can currently only accept values of the\n+ <type>jsonb</type> type, so it might be necessary to cast the\n+ <replaceable>context_item</replaceable> argument of these functions to\n+ <type>jsonb</type>.\n+ </para>\n+ </note>\nhere should it be \"SQL/JSON query functions\"?",
"msg_date": "Mon, 22 Jan 2024 08:00:00 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "2024-01 Commitfest.\n\nHi, This patch has a CF status of \"Needs Review\" [1], but it seems\nthere were CFbot test failures last time it was run [2]. Please have a\nlook and post an updated version if necessary.\n\n======\n[1] https://commitfest.postgresql.org/46/4377/\n[2] https://cirrus-ci.com/github/postgresql-cfbot/postgresql/commitfest/46/4377\n\nKind Regards,\nPeter Smith.\n\n\n",
"msg_date": "Mon, 22 Jan 2024 16:55:07 +1100",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "I found two main issues regarding cocece SQL/JSON function output to\nother data types.\n* returning typmod influence the returning result of JSON_VALUE | JSON_QUERY.\n* JSON_VALUE | JSON_QUERY handles returning type domains allowing null\nand not allowing null inconsistencies.\n\nin ExecInitJsonExprCoercion, there is IsA(coercion,JsonCoercion) or\nnot difference.\nfor the returning of (JSON_VALUE | JSON_QUERY),\n\"coercion\" is a JsonCoercion or not is set in coerceJsonFuncExprOutput.\n\nthis influence returning type with typmod is not -1.\nif set \"coercion\" as JsonCoercion Node then it may call the\nInputFunctionCallSafe to do the coercion.\nIf not, it may call ExecInitFunc related code which is wrapped in\nExecEvalCoerceViaIOSafe.\n\nfor example:\nSELECT JSON_QUERY(jsonb '[1,2]', '$' RETURNING char(3));\nwill ExecInitFunc, will init function bpchar(character, integer,\nboolean). it will set the third argument to true.\nso it will initiate related instructions like: `select\nbpchar('[,2]',7,true); ` which in the end will make the result be\n`[,2`\nHowever, InputFunctionCallSafe cannot handle that.\nsimple demo:\ncreate table t(a char(3));\n--fail.\nINSERT INTO t values ('test');\n--ok\nselect 'test'::char(3);\n\nhowever current ExecEvalCoerceViaIOSafe cannot handle omit quotes.\n\neven if I made the changes, still not bullet-proof.\nfor example:\ncreate domain char3_domain_not_null as char(3) NOT NULL;\ncreate domain hello as text NOT NULL check (value = 'hello');\ncreate domain int42 as int check (value = 42);\nCREATE TYPE comp_domain_with_typmod AS (a char3_domain_not_null, b int42);\n\nSELECT JSON_VALUE(jsonb'{\"rec\": \"(abcd,42)\"}', '$.rec' returning\ncomp_domain_with_typmod);\nwill return NULL\n\nhowever\nSELECT JSON_VALUE(jsonb'{\"rec\": \"abcd\"}', '$.rec' returning\nchar3_domain_not_null);\nwill return `abc`.\n\nI made the modification, you can see the difference.\nattached is test_coerce.sql is the test file.\ntest_coerce_only_v35.out is the test output of only applying v35 0001\nto 0007 plus my previous changes[0].\ntest_coerce_v35_plus_change.out is the test output of applying to v35\n0001 to 0007 plus changes (attachment) and previous changes[0].\n\n[0] https://www.postgresql.org/message-id/CACJufxHo1VVk_0th3AsFxqdMgjaUDz6s0F7%2Bj9rYA3d%3DURw97A%40mail.gmail.com",
"msg_date": "Mon, 22 Jan 2024 14:14:07 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "On Mon, Nov 27, 2023 at 9:06 PM Alvaro Herrera <[email protected]> wrote:\n> At this point one thing that IMO we cannot afford to do, is stop feature\n> progress work on the name of parser speed. I mean, parser speed is\n> important, and we need to be mindful that what we add is reasonable.\n> But at some point we'll probably have to fix that by parsing\n> differently (a top-down parser, perhaps? Split the parser in smaller\n> pieces that each deal with subsets of the whole thing?)\n\nI was reorganizing some old backups and rediscovered an experiment I\ndid four years ago when I had some extra time on my hands, to use a\nlexer generator that emits a state machine driven by code, rather than\na table. It made parsing 12% faster on the above info-schema test, but\nonly (maybe) 3% on parsing pgbench-like queries. My quick hack ended\nup a bit uglier and more verbose than Flex, but that could be\nimproved, and in fact small components could be shared across the\nwhole code base. I might work on it again; I might not.\n\n\n",
"msg_date": "Mon, 22 Jan 2024 13:19:17 +0700",
"msg_from": "John Naylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "Hi,\n\nOn Fri, Jan 19, 2024 at 7:46 PM jian he <[email protected]> wrote:\n> play with domain types.\n> in ExecEvalJsonCoercion, seems func json_populate_type cannot cope\n> with domain type.\n>\n> tests:\n> drop domain test;\n> create domain test as int[] check ( array_length(value,1) =2 and\n> (value[1] = 1 or value[2] = 2));\n> SELECT * from JSON_QUERY(jsonb'{\"rec\": \"{1,2,3}\"}', '$.rec' returning\n> test omit quotes);\n> SELECT * from JSON_QUERY(jsonb'{\"rec\": \"{1,11}\"}', '$.rec' returning\n> test keep quotes);\n> SELECT * from JSON_QUERY(jsonb'{\"rec\": \"{2,11}\"}', '$.rec' returning\n> test omit quotes error on error);\n> SELECT * from JSON_QUERY(jsonb'{\"rec\": \"{2,2}\"}', '$.rec' returning\n> test keep quotes error on error);\n>\n> SELECT * from JSON_QUERY(jsonb'{\"rec\": [1,2,3]}', '$.rec' returning\n> test omit quotes );\n> SELECT * from JSON_QUERY(jsonb'{\"rec\": [1,2,3]}', '$.rec' returning\n> test omit quotes null on error);\n> SELECT * from JSON_QUERY(jsonb'{\"rec\": [1,2,3]}', '$.rec' returning\n> test null on error);\n> SELECT * from JSON_QUERY(jsonb'{\"rec\": [1,11]}', '$.rec' returning\n> test omit quotes);\n> SELECT * from JSON_QUERY(jsonb'{\"rec\": [2,2]}', '$.rec' returning test\n> omit quotes);\n>\n> Many domain related tests seem not right.\n> like the following, i think it should just return null.\n> +SELECT JSON_QUERY(jsonb '{\"a\": 1}', '$.b' RETURNING sqljsonb_int_not_null);\n> +ERROR: domain sqljsonb_int_not_null does not allow null values\n>\n> --another example\n> SELECT JSON_QUERY(jsonb '{\"a\": 1}', '$.b' RETURNING\n> sqljsonb_int_not_null null on error);\n\nHmm, yes, I've thought the same thing, but the patch since it has\nexisted appears to have made an exception for the case when the\nRETURNING type is a domain for some reason; I couldn't find any\nmention of why in the old discussions. I suspect it might be because\na domain's constraints should always be enforced, irrespective of what\nthe SQL/JSON's ON ERROR says.\n\nThough, I'm inclined to classify the domain constraint failure errors\ninto the same class as any other error as far as the ON ERROR clause\nis concerned, so have adjusted the code to do so.\n\nPlease check if the attached looks fine.\n\n> Maybe in node JsonCoercion, we don't need both via_io and\n> via_populate, but we can have one bool to indicate either call\n> InputFunctionCallSafe or json_populate_type in ExecEvalJsonCoercion.\n\nI'm not sure if there's a way to set such a bool statically, because\nthe decision between calling input function or json_populate_type()\nmust be made at run-time based on whether the input jsonb datum is a\nscalar or not. That said, I think we should ideally be able to always\nuse json_populate_type(), but it can't handle OMIT QUOTES for scalars\nand I haven't been able to refactor it to do so\n\nOn Mon, Jan 22, 2024 at 9:00 AM jian he <[email protected]> wrote:\n>\n> based on v35.\n> Now I only applied from 0001 to 0007.\n> For {DEFAULT expression ON EMPTY} | {DEFAULT expression ON ERROR}\n> restrict DEFAULT expression be either Const node or FuncExpr node.\n> so these 3 SQL/JSON functions can be used in the btree expression index.\n\nI'm not really excited about adding these restrictions into the\ntransformJsonFuncExpr() path. Index or any other code that wants to\nput restrictions already have those in place, no need to add them\nhere. Moreover, by adding these restrictions, we might end up\npreventing users from doing useful things with this like specify\ncolumn references. 
If there are semantic issues with allowing that,\nwe should discuss them.\n\n> I made some big changes on the doc. (see attachment)\n> list (json_query, json_exists, json_value) as a new <section2> may be\n> a good idea.\n>\n> follow these two links, we can see the difference.\n> only apply v35, 0001 to 0007: https://v35-functions-json-html.vercel.app\n> apply v35, 0001 to 0007 plus my changes:\n> https://html-starter-seven-pied.vercel.app\n\nThanks for your patch. I've adapted some of your proposed changes.\n\n> minor issues:\n> + Note that if the <replaceable>path_expression</replaceable>\n> + is <literal>strict</literal>, an error is generated if it yields no\n> + items, provided the specified <literal>ON ERROR</literal> behavior is\n> + <literal>ERROR</literal>.\n>\n> how about something like this:\n> + Note that if the <replaceable>path_expression</replaceable>\n> + is <literal>strict</literal> and <literal>ON ERROR</literal>\n> behavior is specified\n> + <literal>ERROR</literal>, an error is generated if it yields no\n> + items\n\nSure.\n\n> + <note>\n> + <para>\n> + SQL/JSON path expression can currently only accept values of the\n> + <type>jsonb</type> type, so it might be necessary to cast the\n> + <replaceable>context_item</replaceable> argument of these functions to\n> + <type>jsonb</type>.\n> + </para>\n> + </note>\n> here should it be \"SQL/JSON query functions\"?\n\n\"path expressions\" is not wrong but I agree that \"query functions\"\nmight be better, so changed. I've also mentioned that the restriction\narises from the fact that SQL/JSON path langage expects the input\ndocument to be passed as jsonb.\n\nOn Mon, Jan 22, 2024 at 3:14 PM jian he <[email protected]> wrote:\n> I found two main issues regarding cocece SQL/JSON function output to\n> other data types.\n> * returning typmod influence the returning result of JSON_VALUE | JSON_QUERY.\n> * JSON_VALUE | JSON_QUERY handles returning type domains allowing null\n> and not allowing null inconsistencies.\n>\n> in ExecInitJsonExprCoercion, there is IsA(coercion,JsonCoercion) or\n> not difference.\n> for the returning of (JSON_VALUE | JSON_QUERY),\n> \"coercion\" is a JsonCoercion or not is set in coerceJsonFuncExprOutput.\n>\n> this influence returning type with typmod is not -1.\n> if set \"coercion\" as JsonCoercion Node then it may call the\n> InputFunctionCallSafe to do the coercion.\n> If not, it may call ExecInitFunc related code which is wrapped in\n> ExecEvalCoerceViaIOSafe.\n>\n> for example:\n> SELECT JSON_QUERY(jsonb '[1,2]', '$' RETURNING char(3));\n> will ExecInitFunc, will init function bpchar(character, integer,\n> boolean). 
it will set the third argument to true.\n> so it will initiate related instructions like: `select\n> bpchar('[,2]',7,true); ` which in the end will make the result be\n> `[,2`\n> However, InputFunctionCallSafe cannot handle that.\n> simple demo:\n> create table t(a char(3));\n> --fail.\n> INSERT INTO t values ('test');\n> --ok\n> select 'test'::char(3);\n>\n> however current ExecEvalCoerceViaIOSafe cannot handle omit quotes.\n>\n> even if I made the changes, still not bullet-proof.\n> for example:\n> create domain char3_domain_not_null as char(3) NOT NULL;\n> create domain hello as text NOT NULL check (value = 'hello');\n> create domain int42 as int check (value = 42);\n> CREATE TYPE comp_domain_with_typmod AS (a char3_domain_not_null, b int42);\n>\n> SELECT JSON_VALUE(jsonb'{\"rec\": \"(abcd,42)\"}', '$.rec' returning\n> comp_domain_with_typmod);\n> will return NULL\n>\n> however\n> SELECT JSON_VALUE(jsonb'{\"rec\": \"abcd\"}', '$.rec' returning\n> char3_domain_not_null);\n> will return `abc`.\n>\n> I made the modification, you can see the difference.\n> attached is test_coerce.sql is the test file.\n> test_coerce_only_v35.out is the test output of only applying v35 0001\n> to 0007 plus my previous changes[0].\n> test_coerce_v35_plus_change.out is the test output of applying to v35\n> 0001 to 0007 plus changes (attachment) and previous changes[0].\n>\n> [0] https://www.postgresql.org/message-id/CACJufxHo1VVk_0th3AsFxqdMgjaUDz6s0F7%2Bj9rYA3d%3DURw97A%40mail.gmail.com\n\nI'll think about this tomorrow.\n\nIn the meantime, here are the updated/reorganized patches with the\nfollowing changes:\n\n* I started having second thoughts about introducing\njson_populate_type(), jspIsMutable, and JsonbUnquote() in commits\nseparate from the commit introducing the SQL/JSON query functions\npatch where they are needed, so I moved them back into that patch. So\nthere are 2 fewer patches -- 0005, 0006 squashed into 0007.\n\n* Boke the test file jsonb_sqljson into 2 files named\nsqljson_queryfuncs and sqljson_jsontable. Also, the test files under\nECPG to sql_jsontable\n\n* Some cosmetic improvements in the JSON_TABLE() patch\n\nI'll push 0001-0004 tomorrow, barring objections.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com",
"msg_date": "Mon, 22 Jan 2024 23:27:58 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: remaining sql/json patches"
},
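A minimal sketch of the ON ERROR handling being settled on above for domain constraint failures, reusing the domain name from the thread's regression tests; this describes the proposed semantics rather than the behavior of any particular committed version:

create domain sqljsonb_int_not_null as int not null;
-- with domain-constraint failures classified like any other error, the implicit
-- NULL ON ERROR should swallow the NOT NULL violation and return NULL:
select json_query(jsonb '{"a": 1}', '$.b' returning sqljsonb_int_not_null);
-- while ERROR ON ERROR should still surface it:
select json_query(jsonb '{"a": 1}', '$.b' returning sqljsonb_int_not_null error on error);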
{
"msg_contents": "On Mon, Jan 22, 2024 at 10:28 PM Amit Langote <[email protected]> wrote:\n>\n> > based on v35.\n> > Now I only applied from 0001 to 0007.\n> > For {DEFAULT expression ON EMPTY} | {DEFAULT expression ON ERROR}\n> > restrict DEFAULT expression be either Const node or FuncExpr node.\n> > so these 3 SQL/JSON functions can be used in the btree expression index.\n>\n> I'm not really excited about adding these restrictions into the\n> transformJsonFuncExpr() path. Index or any other code that wants to\n> put restrictions already have those in place, no need to add them\n> here. Moreover, by adding these restrictions, we might end up\n> preventing users from doing useful things with this like specify\n> column references. If there are semantic issues with allowing that,\n> we should discuss them.\n>\n\nafter applying v36.\nThe following index creation and query operation works. I am not 100%\nsure about these cases.\njust want confirmation, sorry for bothering you....\n\ndrop table t;\ncreate table t(a jsonb, b int);\ninsert into t select '{\"hello\":11}',1;\ninsert into t select '{\"hello\":12}',2;\nCREATE INDEX t_idx2 ON t (JSON_query(a, '$.hello1' RETURNING int\ndefault b + random() on error));\nCREATE INDEX t_idx3 ON t (JSON_query(a, '$.hello1' RETURNING int\ndefault random()::int on error));\nSELECT JSON_query(a, '$.hello1' RETURNING int default ret_setint() on\nerror) from t;\nSELECT JSON_query(a, '$.hello1' RETURNING int default sum(b) over()\non error) from t;\nSELECT JSON_query(a, '$.hello1' RETURNING int default sum(b) on\nerror) from t group by a;\n\nbut the following cases will fail related to index and default expression.\ncreate table zz(a int, b int);\nCREATE INDEX zz_idx1 ON zz ( (b + random()::int));\ncreate table ssss(a int, b int default ret_setint());\ncreate table ssss(a int, b int default sum(b) over());\n\n\n",
"msg_date": "Mon, 22 Jan 2024 23:46:15 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "On 2024-Jan-18, Alvaro Herrera wrote:\n\n> > commands/explain.c (Hmm, I think this is a preexisting bug actually)\n> > \n> > 3893 18 : case T_TableFuncScan:\n> > 3894 18 : Assert(rte->rtekind == RTE_TABLEFUNC);\n> > 3895 18 : if (rte->tablefunc)\n> > 3896 0 : if (rte->tablefunc->functype == TFT_XMLTABLE)\n> > 3897 0 : objectname = \"xmltable\";\n> > 3898 : else /* Must be TFT_JSON_TABLE */\n> > 3899 0 : objectname = \"json_table\";\n> > 3900 : else\n> > 3901 18 : objectname = NULL;\n> > 3902 18 : objecttag = \"Table Function Name\";\n> > 3903 18 : break;\n> \n> Indeed \n\nI was completely wrong about this, and in order to gain coverage the\nonly thing we needed was to add an EXPLAIN that uses the JSON format.\nI did that just now. I think your addition here works just fine.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Mon, 22 Jan 2024 17:19:14 +0100",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
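For reference, a minimal statement of the kind that exercises the explain.c branch discussed above (the XML document is hypothetical and the plan output is not reproduced here):

explain (costs off, format json)
select * from xmltable('/rows/row'
                       passing '<rows><row><a>1</a></row></rows>'::xml
                       columns a int path 'a') as xt;
-- the T_TableFuncScan case quoted above is what decides whether a
-- "Table Function Name" of "xmltable" (or "json_table") shows up in this output.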
{
"msg_contents": "On Mon, Jan 22, 2024 at 11:46 PM jian he <[email protected]> wrote:\n>\n> On Mon, Jan 22, 2024 at 10:28 PM Amit Langote <[email protected]> wrote:\n> >\n> > > based on v35.\n> > > Now I only applied from 0001 to 0007.\n> > > For {DEFAULT expression ON EMPTY} | {DEFAULT expression ON ERROR}\n> > > restrict DEFAULT expression be either Const node or FuncExpr node.\n> > > so these 3 SQL/JSON functions can be used in the btree expression index.\n> >\n> > I'm not really excited about adding these restrictions into the\n> > transformJsonFuncExpr() path. Index or any other code that wants to\n> > put restrictions already have those in place, no need to add them\n> > here. Moreover, by adding these restrictions, we might end up\n> > preventing users from doing useful things with this like specify\n> > column references. If there are semantic issues with allowing that,\n> > we should discuss them.\n> >\n>\n> after applying v36.\n> The following index creation and query operation works. I am not 100%\n> sure about these cases.\n> just want confirmation, sorry for bothering you....\n>\n> drop table t;\n> create table t(a jsonb, b int);\n> insert into t select '{\"hello\":11}',1;\n> insert into t select '{\"hello\":12}',2;\n> CREATE INDEX t_idx2 ON t (JSON_query(a, '$.hello1' RETURNING int\n> default b + random() on error));\n> CREATE INDEX t_idx3 ON t (JSON_query(a, '$.hello1' RETURNING int\n> default random()::int on error));\n> SELECT JSON_query(a, '$.hello1' RETURNING int default ret_setint() on\n> error) from t;\n\nI forgot to attach ret_setint defition.\n\ncreate or replace function ret_setint() returns setof integer as\n$$\nbegin\n -- perform pg_sleep(0.1);\n return query execute 'select 1 union all select 1';\nend;\n$$\nlanguage plpgsql IMMUTABLE;\n\n-----------------------------------------\nIn the function transformJsonExprCommon, we have\n`JsonExpr *jsexpr = makeNode(JsonExpr);`\nthen the following 2 assignments are not necessary.\n\n/* Both set in the caller. */\njsexpr->result_coercion = NULL;\njsexpr->omit_quotes = false;\n\nSo I removed it.\n\nJSON_VALUE OMIT QUOTES by default, so I set it accordingly.\nI also changed coerceJsonFuncExprOutput accordingly",
"msg_date": "Tue, 23 Jan 2024 16:51:55 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "On Tue, Jan 23, 2024 at 1:19 AM Alvaro Herrera <[email protected]> wrote:\n> On 2024-Jan-18, Alvaro Herrera wrote:\n> > > commands/explain.c (Hmm, I think this is a preexisting bug actually)\n> > >\n> > > 3893 18 : case T_TableFuncScan:\n> > > 3894 18 : Assert(rte->rtekind == RTE_TABLEFUNC);\n> > > 3895 18 : if (rte->tablefunc)\n> > > 3896 0 : if (rte->tablefunc->functype == TFT_XMLTABLE)\n> > > 3897 0 : objectname = \"xmltable\";\n> > > 3898 : else /* Must be TFT_JSON_TABLE */\n> > > 3899 0 : objectname = \"json_table\";\n> > > 3900 : else\n> > > 3901 18 : objectname = NULL;\n> > > 3902 18 : objecttag = \"Table Function Name\";\n> > > 3903 18 : break;\n> >\n> > Indeed\n>\n> I was completely wrong about this, and in order to gain coverage the\n> only thing we needed was to add an EXPLAIN that uses the JSON format.\n> I did that just now. I think your addition here works just fine.\n\nI think we'd still need your RangeTblFunc.tablefunc_name in order for\nthe new code (with JSON_TABLE) to be able to set objectname to either\n\"XMLTABLE\" or \"JSON_TABLE\", no?\n\nAs you pointed out, rte->tablefunc is always NULL in\nExplainTargetRel() due to setrefs.c setting it to NULL, so the\nJSON_TABLE additions to explain.c in my patch as they were won't work.\nI've included your patch in the attached set and adjusted the\nJSON_TABLE patch to set tablefunc_name in the parser.\n\nI had intended to push 0001-0004 today, but held off to add a\nSQL-callable testing function for the changes in 0002. On that note,\nI'm now not so sure about committing jsonpath_exec.c functions\nJsonPathExists/Query/Value() from their SQL/JSON counterparts, so\ninclined to squash that one into the SQL/JSON query functions patch\nfrom a testability standpoint.\n\nI haven't looked at Jian He's comments yet.\n\n\n\n\n--\nThanks, Amit Langote\nEDB: http://www.enterprisedb.com",
"msg_date": "Tue, 23 Jan 2024 22:46:06 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "On Tue, Jan 23, 2024 at 10:46 PM Amit Langote <[email protected]> wrote:\n> On Tue, Jan 23, 2024 at 1:19 AM Alvaro Herrera <[email protected]> wrote:\n> > On 2024-Jan-18, Alvaro Herrera wrote:\n> > > > commands/explain.c (Hmm, I think this is a preexisting bug actually)\n> > > >\n> > > > 3893 18 : case T_TableFuncScan:\n> > > > 3894 18 : Assert(rte->rtekind == RTE_TABLEFUNC);\n> > > > 3895 18 : if (rte->tablefunc)\n> > > > 3896 0 : if (rte->tablefunc->functype == TFT_XMLTABLE)\n> > > > 3897 0 : objectname = \"xmltable\";\n> > > > 3898 : else /* Must be TFT_JSON_TABLE */\n> > > > 3899 0 : objectname = \"json_table\";\n> > > > 3900 : else\n> > > > 3901 18 : objectname = NULL;\n> > > > 3902 18 : objecttag = \"Table Function Name\";\n> > > > 3903 18 : break;\n> > >\n> > > Indeed\n> >\n> > I was completely wrong about this, and in order to gain coverage the\n> > only thing we needed was to add an EXPLAIN that uses the JSON format.\n> > I did that just now. I think your addition here works just fine.\n>\n> I think we'd still need your RangeTblFunc.tablefunc_name in order for\n> the new code (with JSON_TABLE) to be able to set objectname to either\n> \"XMLTABLE\" or \"JSON_TABLE\", no?\n>\n> As you pointed out, rte->tablefunc is always NULL in\n> ExplainTargetRel() due to setrefs.c setting it to NULL, so the\n> JSON_TABLE additions to explain.c in my patch as they were won't work.\n> I've included your patch in the attached set and adjusted the\n> JSON_TABLE patch to set tablefunc_name in the parser.\n>\n> I had intended to push 0001-0004 today, but held off to add a\n> SQL-callable testing function for the changes in 0002. On that note,\n> I'm now not so sure about committing jsonpath_exec.c functions\n> JsonPathExists/Query/Value() from their SQL/JSON counterparts, so\n> inclined to squash that one into the SQL/JSON query functions patch\n> from a testability standpoint.\n\nPushed 0001-0003 for now.\n\nRebased patches attached. I merged 0004 into the query functions\npatch after all.\n\n> I haven't looked at Jian He's comments yet.\n\nSee below...\n\nOn Tue, Jan 23, 2024 at 12:46 AM jian he <[email protected]> wrote:\n> On Mon, Jan 22, 2024 at 10:28 PM Amit Langote <[email protected]> wrote:\n> >\n> > > based on v35.\n> > > Now I only applied from 0001 to 0007.\n> > > For {DEFAULT expression ON EMPTY} | {DEFAULT expression ON ERROR}\n> > > restrict DEFAULT expression be either Const node or FuncExpr node.\n> > > so these 3 SQL/JSON functions can be used in the btree expression index.\n> >\n> > I'm not really excited about adding these restrictions into the\n> > transformJsonFuncExpr() path. Index or any other code that wants to\n> > put restrictions already have those in place, no need to add them\n> > here. Moreover, by adding these restrictions, we might end up\n> > preventing users from doing useful things with this like specify\n> > column references. If there are semantic issues with allowing that,\n> > we should discuss them.\n> >\n>\n> after applying v36.\n> The following index creation and query operation works. 
I am not 100%\n> sure about these cases.\n> just want confirmation, sorry for bothering you....\n\nNo worries; I really appreciate your testing and suggestions.\n\n> drop table t;\n> create table t(a jsonb, b int);\n> insert into t select '{\"hello\":11}',1;\n> insert into t select '{\"hello\":12}',2;\n> CREATE INDEX t_idx2 ON t (JSON_query(a, '$.hello1' RETURNING int\n> default b + random() on error));\n> CREATE INDEX t_idx3 ON t (JSON_query(a, '$.hello1' RETURNING int\n> default random()::int on error));\n> create or replace function ret_setint() returns setof integer as\n> $$\n> begin\n> -- perform pg_sleep(0.1);\n> return query execute 'select 1 union all select 1';\n> end;\n> $$\n> language plpgsql IMMUTABLE;\n> SELECT JSON_query(a, '$.hello1' RETURNING int default ret_setint() on\n> error) from t;\n> SELECT JSON_query(a, '$.hello1' RETURNING int default sum(b) over()\n> on error) from t;\n> SELECT JSON_query(a, '$.hello1' RETURNING int default sum(b) on\n> error) from t group by a;\n>\n> but the following cases will fail related to index and default expression.\n> create table zz(a int, b int);\n> CREATE INDEX zz_idx1 ON zz ( (b + random()::int));\n> create table ssss(a int, b int default ret_setint());\n> create table ssss(a int, b int default sum(b) over());\n\nI think your suggestion to add restrictions on what is allowed for\nDEFAULT makes sense. Also, immutability shouldn't be checked in\ntransformJsonBehavior(), but in contain_mutable_functions() as done in\nthe attached. Added some tests too.\n\nI still need to take a look at your other report regarding typmod but\nI'm out of energy today.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com",
"msg_date": "Wed, 24 Jan 2024 22:11:54 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: remaining sql/json patches"
},
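A small sketch of what the contain_mutable_functions() change mentioned above should mean in practice, reusing the table t(a jsonb, b int) from the earlier example; these are the expected outcomes, not output captured from a particular patch version:

-- a constant DEFAULT keeps the whole index expression immutable, so this should be accepted:
create index t_idx_const on t
  (json_query(a, '$.hello1' returning int default 0 on error));
-- a volatile DEFAULT such as random() should now be rejected, just like
-- "create index zz_idx1 on zz ((b + random()::int))" is:
create index t_idx_volatile on t
  (json_query(a, '$.hello1' returning int default random()::int on error));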
{
"msg_contents": "On Wed, Jan 24, 2024 at 10:11 PM Amit Langote <[email protected]> wrote:\n> On Tue, Jan 23, 2024 at 12:46 AM jian he <[email protected]> wrote:\n> > On Mon, Jan 22, 2024 at 10:28 PM Amit Langote <[email protected]> wrote:\n> > >\n> > > > based on v35.\n> > > > Now I only applied from 0001 to 0007.\n> > > > For {DEFAULT expression ON EMPTY} | {DEFAULT expression ON ERROR}\n> > > > restrict DEFAULT expression be either Const node or FuncExpr node.\n> > > > so these 3 SQL/JSON functions can be used in the btree expression index.\n> > >\n> > > I'm not really excited about adding these restrictions into the\n> > > transformJsonFuncExpr() path. Index or any other code that wants to\n> > > put restrictions already have those in place, no need to add them\n> > > here. Moreover, by adding these restrictions, we might end up\n> > > preventing users from doing useful things with this like specify\n> > > column references. If there are semantic issues with allowing that,\n> > > we should discuss them.\n> > >\n> >\n> > after applying v36.\n> > The following index creation and query operation works. I am not 100%\n> > sure about these cases.\n> > just want confirmation, sorry for bothering you....\n>\n> No worries; I really appreciate your testing and suggestions.\n>\n> > drop table t;\n> > create table t(a jsonb, b int);\n> > insert into t select '{\"hello\":11}',1;\n> > insert into t select '{\"hello\":12}',2;\n> > CREATE INDEX t_idx2 ON t (JSON_query(a, '$.hello1' RETURNING int\n> > default b + random() on error));\n> > CREATE INDEX t_idx3 ON t (JSON_query(a, '$.hello1' RETURNING int\n> > default random()::int on error));\n> > create or replace function ret_setint() returns setof integer as\n> > $$\n> > begin\n> > -- perform pg_sleep(0.1);\n> > return query execute 'select 1 union all select 1';\n> > end;\n> > $$\n> > language plpgsql IMMUTABLE;\n> > SELECT JSON_query(a, '$.hello1' RETURNING int default ret_setint() on\n> > error) from t;\n> > SELECT JSON_query(a, '$.hello1' RETURNING int default sum(b) over()\n> > on error) from t;\n> > SELECT JSON_query(a, '$.hello1' RETURNING int default sum(b) on\n> > error) from t group by a;\n> >\n> > but the following cases will fail related to index and default expression.\n> > create table zz(a int, b int);\n> > CREATE INDEX zz_idx1 ON zz ( (b + random()::int));\n> > create table ssss(a int, b int default ret_setint());\n> > create table ssss(a int, b int default sum(b) over());\n>\n> I think your suggestion to add restrictions on what is allowed for\n> DEFAULT makes sense. Also, immutability shouldn't be checked in\n> transformJsonBehavior(), but in contain_mutable_functions() as done in\n> the attached. 
Added some tests too.\n>\n> I still need to take a look at your other report regarding typmod but\n> I'm out of energy today.\n\nThe attached updated patch should address one of the concerns --\nJSON_QUERY() should now work appropriately with RETURNING type with\ntypmod whether or OMIT QUOTES is specified.\n\nBut I wasn't able to address the problems with RETURNING\nrecord_type_with_typmod, that is, the following example you shared\nupthread:\n\ncreate domain char3_domain_not_null as char(3) NOT NULL;\ncreate domain hello as text not null check (value = 'hello');\ncreate domain int42 as int check (value = 42);\ncreate type comp_domain_with_typmod AS (a char3_domain_not_null, b int42);\nselect json_value(jsonb'{\"rec\": \"(abcd,42)\"}', '$.rec' returning\ncomp_domain_with_typmod);\n json_value\n------------\n\n(1 row)\n\nselect json_value(jsonb'{\"rec\": \"(abcd,42)\"}', '$.rec' returning\ncomp_domain_with_typmod error on error);\nERROR: value too long for type character(3)\n\nselect json_value(jsonb'{\"rec\": \"abcd\"}', '$.rec' returning\nchar3_domain_not_null error on error);\n json_value\n------------\n abc\n(1 row)\n\nThe problem with returning comp_domain_with_typmod from json_value()\nseems to be that it's using a text-to-record CoerceViaIO expression\npicked from JsonExpr.item_coercions, which behaves differently than\nthe expression tree that the following uses:\n\nselect ('abcd', 42)::comp_domain_with_typmod;\n row\n----------\n (abc,42)\n(1 row)\n\nI don't see a good way to make RETURNING record_type_with_typmod to\nwork cleanly, so I am inclined to either simply disallow the feature\nor live with the limitation.\n\n\n\n\n\n\n\n\n\n\n--\nThanks, Amit Langote\nEDB: http://www.enterprisedb.com",
"msg_date": "Thu, 25 Jan 2024 18:09:42 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "On 9.16.4. JSON_TABLE\n`\nname type FORMAT JSON [ENCODING UTF8] [ PATH json_path_specification ]\nInserts a composite SQL/JSON item into the output row\n`\ni am not sure \"Inserts a composite SQL/JSON item into the output row\"\nI think it means, for any type's typecategory is TYPCATEGORY_STRING,\nif FORMAT JSON is specified explicitly, the output value (text type)\nwill be legal\njson type representation.\n\nI also did a minor refactor on JSON_VALUE_OP, jsexpr->omit_quotes.",
"msg_date": "Thu, 25 Jan 2024 17:20:00 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "On Thu, Jan 25, 2024 at 6:09 PM Amit Langote <[email protected]> wrote:\n> On Wed, Jan 24, 2024 at 10:11 PM Amit Langote <[email protected]> wrote:\n> > I still need to take a look at your other report regarding typmod but\n> > I'm out of energy today.\n>\n> The attached updated patch should address one of the concerns --\n> JSON_QUERY() should now work appropriately with RETURNING type with\n> typmod whether or OMIT QUOTES is specified.\n>\n> But I wasn't able to address the problems with RETURNING\n> record_type_with_typmod, that is, the following example you shared\n> upthread:\n>\n> create domain char3_domain_not_null as char(3) NOT NULL;\n> create domain hello as text not null check (value = 'hello');\n> create domain int42 as int check (value = 42);\n> create type comp_domain_with_typmod AS (a char3_domain_not_null, b int42);\n> select json_value(jsonb'{\"rec\": \"(abcd,42)\"}', '$.rec' returning\n> comp_domain_with_typmod);\n> json_value\n> ------------\n>\n> (1 row)\n>\n> select json_value(jsonb'{\"rec\": \"(abcd,42)\"}', '$.rec' returning\n> comp_domain_with_typmod error on error);\n> ERROR: value too long for type character(3)\n>\n> select json_value(jsonb'{\"rec\": \"abcd\"}', '$.rec' returning\n> char3_domain_not_null error on error);\n> json_value\n> ------------\n> abc\n> (1 row)\n>\n> The problem with returning comp_domain_with_typmod from json_value()\n> seems to be that it's using a text-to-record CoerceViaIO expression\n> picked from JsonExpr.item_coercions, which behaves differently than\n> the expression tree that the following uses:\n>\n> select ('abcd', 42)::comp_domain_with_typmod;\n> row\n> ----------\n> (abc,42)\n> (1 row)\n\nOh, it hadn't occurred to me to check what trying to coerce a \"string\"\ncontaining the record literal would do:\n\nselect '(''abcd'', 42)'::comp_domain_with_typmod;\nERROR: value too long for type character(3)\nLINE 1: select '(''abcd'', 42)'::comp_domain_with_typmod;\n\nwhich is the same thing as what the JSON_QUERY() and JSON_VALUE() are\nrunning into. So, it might be fair to think that the error is not a\nlimitation of the SQL/JSON patch but an underlying behavior that it\nhas to accept as is.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 25 Jan 2024 20:54:25 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "On Thu, Jan 25, 2024 at 7:54 PM Amit Langote <[email protected]> wrote:\n>\n> >\n> > The problem with returning comp_domain_with_typmod from json_value()\n> > seems to be that it's using a text-to-record CoerceViaIO expression\n> > picked from JsonExpr.item_coercions, which behaves differently than\n> > the expression tree that the following uses:\n> >\n> > select ('abcd', 42)::comp_domain_with_typmod;\n> > row\n> > ----------\n> > (abc,42)\n> > (1 row)\n>\n> Oh, it hadn't occurred to me to check what trying to coerce a \"string\"\n> containing the record literal would do:\n>\n> select '(''abcd'', 42)'::comp_domain_with_typmod;\n> ERROR: value too long for type character(3)\n> LINE 1: select '(''abcd'', 42)'::comp_domain_with_typmod;\n>\n> which is the same thing as what the JSON_QUERY() and JSON_VALUE() are\n> running into. So, it might be fair to think that the error is not a\n> limitation of the SQL/JSON patch but an underlying behavior that it\n> has to accept as is.\n>\n\nHi, I reconciled with these cases.\nWhat bugs me now is the first query of the following 4 cases (for comparison).\nSELECT JSON_QUERY(jsonb '[1,2]', '$' RETURNING char(3) omit quotes);\nSELECT JSON_QUERY(jsonb '[1,2]', '$' RETURNING char(3) keep quotes);\nSELECT JSON_QUERY(jsonb '[1,2]', '$' RETURNING text omit quotes);\nSELECT JSON_QUERY(jsonb '[1,2]', '$' RETURNING text keep quotes);\n\nI did some minor refactoring on the function coerceJsonFuncExprOutput.\nit will make the following queries return null instead of error. NULL\nis the return of json_value.\n\n SELECT JSON_QUERY(jsonb '\"123\"', '$' RETURNING int2);\n SELECT JSON_QUERY(jsonb '\"123\"', '$' RETURNING int4);\n SELECT JSON_QUERY(jsonb '\"123\"', '$' RETURNING int8);\n SELECT JSON_QUERY(jsonb '\"123\"', '$' RETURNING bool);\n SELECT JSON_QUERY(jsonb '\"123\"', '$' RETURNING numeric);\n SELECT JSON_QUERY(jsonb '\"123\"', '$' RETURNING real);\n SELECT JSON_QUERY(jsonb '\"123\"', '$' RETURNING float8);",
"msg_date": "Thu, 25 Jan 2024 22:39:30 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "Hi.\nminor issues.\nI am wondering do we need add `pg_node_attr(query_jumble_ignore)`\nto some of our created structs in src/include/nodes/parsenodes.h in\nv39-0001-Add-SQL-JSON-query-functions.patch\n\ndiff --git a/src/backend/parser/parse_jsontable.c\nb/src/backend/parser/parse_jsontable.c\nnew file mode 100644\nindex 0000000000..25b8204dc6\n--- /dev/null\n+++ b/src/backend/parser/parse_jsontable.c\n@@ -0,0 +1,718 @@\n+/*-------------------------------------------------------------------------\n+ *\n+ * parse_jsontable.c\n+ * parsing of JSON_TABLE\n+ *\n+ * Portions Copyright (c) 1996-2022, PostgreSQL Global Development Group\n+ * Portions Copyright (c) 1994, Regents of the University of California\n+ *\n+ *\n+ * IDENTIFICATION\n+ * src/backend/parser/parse_jsontable.c\n+ *\n+ *-------------------------------------------------------------------------\n+ */\n2022 should change to 2024.\n\n\n",
"msg_date": "Wed, 31 Jan 2024 22:51:52 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "based on this query:\nbegin;\nSET LOCAL TIME ZONE 10.5;\nwith cte(s) as (select jsonb '\"2023-08-15 12:34:56 +05:30\"')\nselect JSON_QUERY(s, '$.timestamp_tz()')::text,'+10.5'::text,\n'timestamp_tz'::text from cte\nunion all\nselect JSON_QUERY(s, '$.time()')::text,'+10.5'::text, 'time'::text from cte\nunion all\nselect JSON_QUERY(s, '$.timestamp()')::text,'+10.5'::text,\n'timestamp'::text from cte\nunion all\nselect JSON_QUERY(s, '$.date()')::text,'+10.5'::text, 'date'::text from cte\nunion all\nselect JSON_QUERY(s, '$.time_tz()')::text,'+10.5'::text,\n'time_tz'::text from cte;\n\nSET LOCAL TIME ZONE -8;\nwith cte(s) as (select jsonb '\"2023-08-15 12:34:56 +05:30\"')\nselect JSON_QUERY(s, '$.timestamp_tz()')::text,'+10.5'::text,\n'timestamp_tz'::text from cte\nunion all\nselect JSON_QUERY(s, '$.time()')::text,'+10.5'::text, 'time'::text from cte\nunion all\nselect JSON_QUERY(s, '$.timestamp()')::text,'+10.5'::text,\n'timestamp'::text from cte\nunion all\nselect JSON_QUERY(s, '$.date()')::text,'+10.5'::text, 'date'::text from cte\nunion all\nselect JSON_QUERY(s, '$.time_tz()')::text,'+10.5'::text,\n'time_tz'::text from cte;\ncommit;\n\nI made some changes on jspIsMutableWalker.\nvarious new jsonpath methods added:\nhttps://git.postgresql.org/cgit/postgresql.git/commit/?id=66ea94e8e606529bb334515f388c62314956739e\nso we need to change jspIsMutableWalker accordingly.\n\nbased on v39.",
"msg_date": "Mon, 5 Feb 2024 20:27:54 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "On Thu, Jan 25, 2024 at 10:39 PM jian he <[email protected]> wrote:\n>\n> On Thu, Jan 25, 2024 at 7:54 PM Amit Langote <[email protected]> wrote:\n> >\n> > >\n> > > The problem with returning comp_domain_with_typmod from json_value()\n> > > seems to be that it's using a text-to-record CoerceViaIO expression\n> > > picked from JsonExpr.item_coercions, which behaves differently than\n> > > the expression tree that the following uses:\n> > >\n> > > select ('abcd', 42)::comp_domain_with_typmod;\n> > > row\n> > > ----------\n> > > (abc,42)\n> > > (1 row)\n> >\n> > Oh, it hadn't occurred to me to check what trying to coerce a \"string\"\n> > containing the record literal would do:\n> >\n> > select '(''abcd'', 42)'::comp_domain_with_typmod;\n> > ERROR: value too long for type character(3)\n> > LINE 1: select '(''abcd'', 42)'::comp_domain_with_typmod;\n> >\n> > which is the same thing as what the JSON_QUERY() and JSON_VALUE() are\n> > running into. So, it might be fair to think that the error is not a\n> > limitation of the SQL/JSON patch but an underlying behavior that it\n> > has to accept as is.\n> >\n>\n> Hi, I reconciled with these cases.\n> What bugs me now is the first query of the following 4 cases (for comparison).\n> SELECT JSON_QUERY(jsonb '[1,2]', '$' RETURNING char(3) omit quotes);\n> SELECT JSON_QUERY(jsonb '[1,2]', '$' RETURNING char(3) keep quotes);\n> SELECT JSON_QUERY(jsonb '[1,2]', '$' RETURNING text omit quotes);\n> SELECT JSON_QUERY(jsonb '[1,2]', '$' RETURNING text keep quotes);\n>\n\nbased on v39.\nin ExecEvalJsonCoercion\ncoercion->targettypmod related function calls:\njson_populate_type calls populate_record_field, then populate_scalar,\nlater will eventually call InputFunctionCallSafe.\n\nso I make the following change:\n--- a/src/backend/executor/execExprInterp.c\n+++ b/src/backend/executor/execExprInterp.c\n@@ -4533,7 +4533,7 @@ ExecEvalJsonCoercion(ExprState *state, ExprEvalStep *op,\n * deed ourselves by calling the input function, that is, after removing\n * the quotes.\n */\n- if (jb && JB_ROOT_IS_SCALAR(jb) && coercion->omit_quotes)\n+ if ((jb && JB_ROOT_IS_SCALAR(jb) && coercion->omit_quotes) ||\ncoercion->targettypmod != -1)\n\nnow the following two return the same result: `[1,`\nSELECT JSON_QUERY(jsonb '[1,2]', '$' RETURNING char(3) omit quotes);\nSELECT JSON_QUERY(jsonb '[1,2]', '$' RETURNING char(3) keep quotes);\n\n\n",
"msg_date": "Tue, 6 Feb 2024 12:55:51 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "This part is already committed.\nereport(ERROR,\n(errcode(ERRCODE_UNDEFINED_OBJECT),\nerrmsg(\"could not find jsonpath variable \\\"%s\\\"\",\npnstrdup(varName, varNameLength))));\n\nbut, you can simply use:\nereport(ERROR,\n(errcode(ERRCODE_UNDEFINED_OBJECT),\nerrmsg(\"could not find jsonpath variable \\\"%s\\\"\",varName)));\n\nmaybe not worth the trouble.\nI kind of want to know, using `pnstrdup`, when the malloc related\nmemory will be freed?\n\njson_query and json_query doc explanation is kind of crammed together.\nDo you think it's a good idea to use </listitem> and </itemizedlist>?\nit will look like bullet points. but the distance between the bullet\npoint and the first text in the same line is a little bit long, so it\nmay not look elegant.\nI've attached the picture, json_query is using `</listitem> and\n</itemizedlist>`, json_value is as of the v39.\n\nother than this and previous points, v39, 0001 looks good to go.",
"msg_date": "Wed, 14 Feb 2024 08:00:00 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "Hi Jian,\n\nThanks for the reviews and sorry for the late reply. Replying to all\nemails in one.\n\nOn Thu, Jan 25, 2024 at 11:39 PM jian he <[email protected]> wrote:\n> On Thu, Jan 25, 2024 at 7:54 PM Amit Langote <[email protected]> wrote:\n> > > The problem with returning comp_domain_with_typmod from json_value()\n> > > seems to be that it's using a text-to-record CoerceViaIO expression\n> > > picked from JsonExpr.item_coercions, which behaves differently than\n> > > the expression tree that the following uses:\n> > >\n> > > select ('abcd', 42)::comp_domain_with_typmod;\n> > > row\n> > > ----------\n> > > (abc,42)\n> > > (1 row)\n> >\n> > Oh, it hadn't occurred to me to check what trying to coerce a \"string\"\n> > containing the record literal would do:\n> >\n> > select '(''abcd'', 42)'::comp_domain_with_typmod;\n> > ERROR: value too long for type character(3)\n> > LINE 1: select '(''abcd'', 42)'::comp_domain_with_typmod;\n> >\n> > which is the same thing as what the JSON_QUERY() and JSON_VALUE() are\n> > running into. So, it might be fair to think that the error is not a\n> > limitation of the SQL/JSON patch but an underlying behavior that it\n> > has to accept as is.\n>\n> Hi, I reconciled with these cases.\n> What bugs me now is the first query of the following 4 cases (for comparison).\n> SELECT JSON_QUERY(jsonb '[1,2]', '$' RETURNING char(3) omit quotes);\n> SELECT JSON_QUERY(jsonb '[1,2]', '$' RETURNING char(3) keep quotes);\n> SELECT JSON_QUERY(jsonb '[1,2]', '$' RETURNING text omit quotes);\n> SELECT JSON_QUERY(jsonb '[1,2]', '$' RETURNING text keep quotes);\n\nFixed:\n\nSELECT JSON_QUERY(jsonb '\"[1,2]\"', '$' RETURNING char(3) omit quotes);\n json_query\n------------\n [1,\n(1 row)\n\nSELECT JSON_QUERY(jsonb '\"[1,2]\"', '$' RETURNING char(3) keep quotes);\n json_query\n------------\n \"[1\n(1 row)\n\nSELECT JSON_QUERY(jsonb '\"[1,2]\"', '$' RETURNING text omit quotes);\n json_query\n------------\n [1,2]\n(1 row)\n\nSELECT JSON_QUERY(jsonb '\"[1,2]\"', '$' RETURNING text keep quotes);\n json_query\n------------\n \"[1,2]\"\n(1 row)\n\nI didn't go with your proposed solution to check targettypmod in\nExecEvalJsonCoercion() though.\n\n> I did some minor refactoring on the function coerceJsonFuncExprOutput.\n> it will make the following queries return null instead of error. NULL\n> is the return of json_value.\n>\n> SELECT JSON_QUERY(jsonb '\"123\"', '$' RETURNING int2);\n> SELECT JSON_QUERY(jsonb '\"123\"', '$' RETURNING int4);\n> SELECT JSON_QUERY(jsonb '\"123\"', '$' RETURNING int8);\n> SELECT JSON_QUERY(jsonb '\"123\"', '$' RETURNING bool);\n> SELECT JSON_QUERY(jsonb '\"123\"', '$' RETURNING numeric);\n> SELECT JSON_QUERY(jsonb '\"123\"', '$' RETURNING real);\n> SELECT JSON_QUERY(jsonb '\"123\"', '$' RETURNING float8);\n\nI didn't really want to add an exception in the parser for these\nspecific types, but I agree that it's not great that the current code\ndoesn't respect the default NULL ON ERROR behavior, so I've adopted\nyour fix. 
I'm not sure if we'll do so in the future but the code can\nbe removed if we someday make the non-IO cast functions handle errors\nsoftly too.\n\nOn Wed, Jan 31, 2024 at 11:52 PM jian he <[email protected]> wrote:\n>\n> Hi.\n> minor issues.\n> I am wondering do we need add `pg_node_attr(query_jumble_ignore)`\n> to some of our created structs in src/include/nodes/parsenodes.h in\n> v39-0001-Add-SQL-JSON-query-functions.patch\n\nWe haven't added those to the node structs of other SQL/JSON\nfunctions, so I'm inclined to skip adding them in this patch.\n\n> diff --git a/src/backend/parser/parse_jsontable.c\n> b/src/backend/parser/parse_jsontable.c\n> new file mode 100644\n> index 0000000000..25b8204dc6\n> --- /dev/null\n> +++ b/src/backend/parser/parse_jsontable.c\n> @@ -0,0 +1,718 @@\n> +/*-------------------------------------------------------------------------\n> + *\n> + * parse_jsontable.c\n> + * parsing of JSON_TABLE\n> + *\n> + * Portions Copyright (c) 1996-2022, PostgreSQL Global Development Group\n> + * Portions Copyright (c) 1994, Regents of the University of California\n> + *\n> + *\n> + * IDENTIFICATION\n> + * src/backend/parser/parse_jsontable.c\n> + *\n> + *-------------------------------------------------------------------------\n> + */\n> 2022 should change to 2024.\n\nOops, fixed.\n\nOn Mon, Feb 5, 2024 at 9:28 PM jian he <[email protected]> wrote:\n>\n> based on this query:\n> begin;\n> SET LOCAL TIME ZONE 10.5;\n> with cte(s) as (select jsonb '\"2023-08-15 12:34:56 +05:30\"')\n> select JSON_QUERY(s, '$.timestamp_tz()')::text,'+10.5'::text,\n> 'timestamp_tz'::text from cte\n> union all\n> select JSON_QUERY(s, '$.time()')::text,'+10.5'::text, 'time'::text from cte\n> union all\n> select JSON_QUERY(s, '$.timestamp()')::text,'+10.5'::text,\n> 'timestamp'::text from cte\n> union all\n> select JSON_QUERY(s, '$.date()')::text,'+10.5'::text, 'date'::text from cte\n> union all\n> select JSON_QUERY(s, '$.time_tz()')::text,'+10.5'::text,\n> 'time_tz'::text from cte;\n>\n> SET LOCAL TIME ZONE -8;\n> with cte(s) as (select jsonb '\"2023-08-15 12:34:56 +05:30\"')\n> select JSON_QUERY(s, '$.timestamp_tz()')::text,'+10.5'::text,\n> 'timestamp_tz'::text from cte\n> union all\n> select JSON_QUERY(s, '$.time()')::text,'+10.5'::text, 'time'::text from cte\n> union all\n> select JSON_QUERY(s, '$.timestamp()')::text,'+10.5'::text,\n> 'timestamp'::text from cte\n> union all\n> select JSON_QUERY(s, '$.date()')::text,'+10.5'::text, 'date'::text from cte\n> union all\n> select JSON_QUERY(s, '$.time_tz()')::text,'+10.5'::text,\n> 'time_tz'::text from cte;\n> commit;\n>\n> I made some changes on jspIsMutableWalker.\n> various new jsonpath methods added:\n> https://git.postgresql.org/cgit/postgresql.git/commit/?id=66ea94e8e606529bb334515f388c62314956739e\n> so we need to change jspIsMutableWalker accordingly.\n\nThanks for the heads up about that, merged.\n\nOn Wed, Feb 14, 2024 at 9:00 AM jian he <[email protected]> wrote:\n>\n> This part is already committed.\n> ereport(ERROR,\n> (errcode(ERRCODE_UNDEFINED_OBJECT),\n> errmsg(\"could not find jsonpath variable \\\"%s\\\"\",\n> pnstrdup(varName, varNameLength))));\n>\n> but, you can simply use:\n> ereport(ERROR,\n> (errcode(ERRCODE_UNDEFINED_OBJECT),\n> errmsg(\"could not find jsonpath variable \\\"%s\\\"\",varName)));\n>\n> maybe not worth the trouble.\n\nYeah, maybe the pnstrdup is unnecessary. 
I'm inclined to leave that\nalone for now and fix it later, not as part of this patch.\n\n> I kind of want to know, using `pnstrdup`, when the malloc related\n> memory will be freed?\n\nThat particular pnstrdup() will allocate somewhere in the\nExecutorState memory context, which gets reset during the transaction\nabort processing, releasing that memory.\n\n> json_query and json_query doc explanation is kind of crammed together.\n> Do you think it's a good idea to use </listitem> and </itemizedlist>?\n> it will look like bullet points. but the distance between the bullet\n> point and the first text in the same line is a little bit long, so it\n> may not look elegant.\n> I've attached the picture, json_query is using `</listitem> and\n> </itemizedlist>`, json_value is as of the v39.\n\nYeah, the bullet point list layout looks kind of neat, and is not\nunprecedented because we have a list in the description of\njson_poulate_record() for one. Though I wasn't able to come up with a\ngood breakdown of the points into sentences of appropriate length.\nI'm inclined to leave that beautification project to another day.\n\n> other than this and previous points, v39, 0001 looks good to go.\n\nI've attached the updated patches. I would like to get 0001 committed\nafter I spent a couple more days staring at it.\n\nAlvaro, do you still think that 0002 is a good idea and would you like\nto push it yourself?\n\n--\nThanks, Amit Langote",
"msg_date": "Mon, 4 Mar 2024 18:40:10 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "Op 3/4/24 om 10:40 schreef Amit Langote:\n> Hi Jian,\n> \n> Thanks for the reviews and sorry for the late reply. Replying to all\n> emails in one.\n\n > [v40-0001-Add-SQL-JSON-query-functions.patch]\n > [v40-0002-Show-function-name-in-TableFuncScan.patch]\n > [v40-0003-JSON_TABLE.patch]\n\nIn my hands (applying with patch), the patches, esp. 0001, do not apply. \n But I see the cfbot builds without problem so maybe just ignore these \nFAILED lines. Better get them merged - so I can test there...\n\nErik\n\n\nchecking file doc/src/sgml/func.sgml\nchecking file src/backend/catalog/sql_features.txt\nchecking file src/backend/executor/execExpr.c\nHunk #1 succeeded at 48 with fuzz 2 (offset -1 lines).\nHunk #2 succeeded at 88 (offset -1 lines).\nHunk #3 succeeded at 2419 (offset -1 lines).\nHunk #4 succeeded at 4195 (offset -1 lines).\nchecking file src/backend/executor/execExprInterp.c\nHunk #1 succeeded at 72 (offset -1 lines).\nHunk #2 succeeded at 180 (offset -1 lines).\nHunk #3 succeeded at 485 (offset -1 lines).\nHunk #4 succeeded at 1560 (offset -1 lines).\nHunk #5 succeeded at 4242 (offset -1 lines).\nchecking file src/backend/jit/llvm/llvmjit_expr.c\nchecking file src/backend/jit/llvm/llvmjit_types.c\nchecking file src/backend/nodes/makefuncs.c\nHunk #1 succeeded at 856 (offset -1 lines).\nchecking file src/backend/nodes/nodeFuncs.c\nHunk #1 succeeded at 233 (offset -1 lines).\nHunk #2 succeeded at 517 (offset -1 lines).\nHunk #3 succeeded at 1019 (offset -1 lines).\nHunk #4 succeeded at 1276 (offset -1 lines).\nHunk #5 succeeded at 1617 (offset -1 lines).\nHunk #6 succeeded at 2381 (offset -1 lines).\nHunk #7 succeeded at 3429 (offset -1 lines).\nHunk #8 succeeded at 4164 (offset -1 lines).\nchecking file src/backend/optimizer/path/costsize.c\nHunk #1 succeeded at 4878 (offset -1 lines).\nchecking file src/backend/optimizer/util/clauses.c\nHunk #1 succeeded at 50 (offset -3 lines).\nHunk #2 succeeded at 415 (offset -3 lines).\nchecking file src/backend/parser/gram.y\nchecking file src/backend/parser/parse_expr.c\nchecking file src/backend/parser/parse_target.c\nHunk #1 succeeded at 1988 (offset -1 lines).\nchecking file src/backend/utils/adt/formatting.c\nHunk #1 succeeded at 4465 (offset -1 lines).\nchecking file src/backend/utils/adt/jsonb.c\nHunk #1 succeeded at 2159 (offset -4 lines).\nchecking file src/backend/utils/adt/jsonfuncs.c\nchecking file src/backend/utils/adt/jsonpath.c\nHunk #1 FAILED at 68.\nHunk #2 succeeded at 1239 (offset -1 lines).\n1 out of 2 hunks FAILED\nchecking file src/backend/utils/adt/jsonpath_exec.c\nHunk #1 succeeded at 229 (offset -5 lines).\nHunk #2 succeeded at 2866 (offset -5 lines).\nHunk #3 succeeded at 3751 (offset -5 lines).\nchecking file src/backend/utils/adt/ruleutils.c\nHunk #1 succeeded at 474 (offset -1 lines).\nHunk #2 succeeded at 518 (offset -1 lines).\nHunk #3 succeeded at 8303 (offset -1 lines).\nHunk #4 succeeded at 8475 (offset -1 lines).\nHunk #5 succeeded at 8591 (offset -1 lines).\nHunk #6 succeeded at 9808 (offset -1 lines).\nHunk #7 succeeded at 9858 (offset -1 lines).\nHunk #8 succeeded at 10039 (offset -1 lines).\nHunk #9 succeeded at 10909 (offset -1 lines).\nchecking file src/include/executor/execExpr.h\nchecking file src/include/nodes/execnodes.h\nchecking file src/include/nodes/makefuncs.h\nchecking file src/include/nodes/parsenodes.h\nchecking file src/include/nodes/primnodes.h\nchecking file src/include/parser/kwlist.h\nchecking file src/include/utils/formatting.h\nchecking file 
src/include/utils/jsonb.h\nchecking file src/include/utils/jsonfuncs.h\nchecking file src/include/utils/jsonpath.h\nchecking file src/interfaces/ecpg/preproc/ecpg.trailer\nchecking file src/test/regress/expected/sqljson_queryfuncs.out\nchecking file src/test/regress/parallel_schedule\nchecking file src/test/regress/sql/sqljson_queryfuncs.sql\nchecking file src/tools/pgindent/typedefs.list\n\n\n",
"msg_date": "Mon, 4 Mar 2024 15:51:53 +0100",
"msg_from": "Erik Rijkers <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "On 2024-Mar-04, Erik Rijkers wrote:\n\n> In my hands (applying with patch), the patches, esp. 0001, do not apply.\n> But I see the cfbot builds without problem so maybe just ignore these FAILED\n> lines. Better get them merged - so I can test there...\n\nIt's because of dbbca2cf299b. It should apply cleanly if you do \"git\ncheckout dbbca2cf299b^\" first ... That commit is so recent that\nevidently the cfbot hasn't had a chance to try this patch again since it\nwent in, which is why it's still green.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"If you have nothing to say, maybe you need just the right tool to help you\nnot say it.\" (New York Times, about Microsoft PowerPoint)\n\n\n",
"msg_date": "Mon, 4 Mar 2024 16:03:34 +0100",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "On Tue, Mar 5, 2024 at 12:03 AM Alvaro Herrera <[email protected]> wrote:\n> On 2024-Mar-04, Erik Rijkers wrote:\n>\n> > In my hands (applying with patch), the patches, esp. 0001, do not apply.\n> > But I see the cfbot builds without problem so maybe just ignore these FAILED\n> > lines. Better get them merged - so I can test there...\n>\n> It's because of dbbca2cf299b. It should apply cleanly if you do \"git\n> checkout dbbca2cf299b^\" first ... That commit is so recent that\n> evidently the cfbot hasn't had a chance to try this patch again since it\n> went in, which is why it's still green.\n\nThanks for the heads up. Attaching rebased patches.\n\n-- \nThanks, Amit Langote",
"msg_date": "Tue, 5 Mar 2024 10:21:54 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "\nHi,\n\n> On Tue, Mar 5, 2024 at 12:03 AM Alvaro Herrera <[email protected]> wrote:\n>> On 2024-Mar-04, Erik Rijkers wrote:\n>>\n>> > In my hands (applying with patch), the patches, esp. 0001, do not apply.\n>> > But I see the cfbot builds without problem so maybe just ignore these FAILED\n>> > lines. Better get them merged - so I can test there...\n>>\n>> It's because of dbbca2cf299b. It should apply cleanly if you do \"git\n>> checkout dbbca2cf299b^\" first ... That commit is so recent that\n>> evidently the cfbot hasn't had a chance to try this patch again since it\n>> went in, which is why it's still green.\n>\n> Thanks for the heads up. Attaching rebased patches.\n\nIn the commit message of 0001, we have:\n\n\"\"\"\nBoth JSON_VALUE() and JSON_QUERY() functions have options for\nhandling EMPTY and ERROR conditions, which can be used to specify\nthe behavior when no values are matched and when an error occurs\nduring evaluation, respectively.\n\nAll of these functions only operate on jsonb values. The workaround\nfor now is to cast the argument to jsonb.\n\"\"\"\n\nwhich is not clear for me why we introduce JSON_VALUE() function, is it\nfor handling EMPTY or ERROR conditions? I think the existing cast\nworkaround have a similar capacity?\n\nThen I think if it is introduced as a performance improvement like [1],\nthen the test at [1] might be interesting. If this is the case, the\nmethod in [1] can avoid the user to modify these queries for the using\nthe new function. \n\n[1] https://www.postgresql.org/message-id/[email protected]\n\n-- \nBest Regards\nAndy Fan\n\n\n\n",
"msg_date": "Tue, 05 Mar 2024 12:28:32 +0800",
"msg_from": "Andy Fan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "On Tue, Mar 5, 2024 at 9:22 AM Amit Langote <[email protected]> wrote:\n>\n> Thanks for the heads up. Attaching rebased patches.\n>\n\nWalking through the v41-0001-Add-SQL-JSON-query-functions.patch documentation.\nI found some minor cosmetic issues.\n\n+ <para>\n+ <literal>select json_query(jsonb '{\"a\": \"[1, 2]\"}', 'lax $.a'\nRETURNING int[] OMIT QUOTES);</literal>\n+ <returnvalue></returnvalue>\n+ </para>\nthis example is not so good, it returns NULL, makes it harder to\nrender the result.\n\n+ <replaceable>context_item</replaceable> (the document); seen\n+ <xref linkend=\"functions-sqljson-path\"/> for more details on what\n+ <replaceable>path_expression</replaceable> can contain.\n\"seen\" should be \"see\"?\n\n+ <para>\n+ This function must return a JSON string, so if the path expression\n+ returns multiple SQL/JSON items, you must wrap the result using the\n+ <literal>WITH WRAPPER</literal> clause. If the wrapper is\n\"must\" may be not correct?\nsince we have a RETURNING clause.\n\"generally\" may be more accurate, I think.\nmaybe we can rephrase the sentence:\n+ This function generally return a JSON string, so if the path expression\n+ yield multiple SQL/JSON items, you must wrap the result using the\n+ <literal>WITH WRAPPER</literal> clause\n\n+ is spcified, the returned value will be of type <type>text</type>.\n+ If no <literal>RETURNING</literal> is spcified, the returned value will\ntwo typos, and should be \"specified\".\n\n+ Note that if the <replaceable>path_expression</replaceable>\n+ is <literal>strict</literal> and <literal>ON ERROR</literal> behavior\n+ is <literal>ON ERROR</literal>, an error is generated if it yields no\n+ items.\n\nmay be the following:\n+ Note that if the <replaceable>path_expression</replaceable>\n+ is <literal>strict</literal> and <literal>ON ERROR</literal> behavior\n+ is <literal>ERROR</literal>, an error is generated if it yields no\n+ items.\n\nmost of the place, you use\n <replaceable>path_expression</replaceable>\nbut there are two place you use:\n<type>path_expression</type>\nI guess that's ok, but the appearance is different.\n <replaceable> more prominent. Anyway, it is a minor issue.\n\n+ <function>json_query</function>. Note that scalar strings returned\n+ by <function>json_value</function> always have their quotes removed,\n+ equivalent to what one would get with <literal>OMIT QUOTES</literal>\n+ when using <function>json_query</function>.\n\nI think we can simplify it like the following:\n\n+ <function>json_query</function>. Note that scalar strings returned\n+ by <function>json_value</function> always have their quotes removed,\n+ equivalent to <literal>OMIT QUOTES</literal>\n+ when using <function>json_query</function>.\n\n\n",
"msg_date": "Tue, 5 Mar 2024 12:42:17 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "Hi,\n\nI know very little about sql/json and all the json internals, but I\ndecided to do some black box testing. I built a large JSONB table\n(single column, ~7GB of data after loading). And then I did a query\ntransforming the data into tabular form using JSON_TABLE.\n\nThe JSON_TABLE query looks like this:\n\nSELECT jt.* FROM\n title_jsonb t,\n json_table(t.info, '$'\n COLUMNS (\n \"id\" text path '$.\"id\"',\n \"type\" text path '$.\"type\"',\n \"title\" text path '$.\"title\"',\n \"original_title\" text path '$.\"original_title\"',\n \"is_adult\" text path '$.\"is_adult\"',\n \"start_year\" text path '$.\"start_year\"',\n \"end_year\" text path '$.\"end_year\"',\n \"minutes\" text path '$.\"minutes\"',\n \"genres\" text path '$.\"genres\"',\n \"aliases\" text path '$.\"aliases\"',\n \"directors\" text path '$.\"directors\"',\n \"writers\" text path '$.\"writers\"',\n \"ratings\" text path '$.\"ratings\"',\n NESTED PATH '$.\"aliases\"[*]'\n COLUMNS (\n \"alias_title\" text path '$.\"title\"',\n \"alias_region\" text path '$.\"region\"'\n ),\n NESTED PATH '$.\"directors\"[*]'\n COLUMNS (\n \"director_name\" text path '$.\"name\"',\n \"director_birth_year\" text path '$.\"birth_year\"',\n \"director_death_year\" text path '$.\"death_year\"'\n ),\n NESTED PATH '$.\"writers\"[*]'\n COLUMNS (\n \"writer_name\" text path '$.\"name\"',\n \"writer_birth_year\" text path '$.\"birth_year\"',\n \"writer_death_year\" text path '$.\"death_year\"'\n ),\n NESTED PATH '$.\"ratings\"[*]'\n COLUMNS (\n \"rating_average\" text path '$.\"average\"',\n \"rating_votes\" text path '$.\"votes\"'\n )\n )\n ) as jt;\n\nagain, not particularly complex. But if I run this, it consumes multiple\ngigabytes of memory, before it gets killed by OOM killer. This happens\neven when ran using\n\n COPY (...) TO '/dev/null'\n\nso there's nothing sent to the client. I did catch memory context info,\nwhere it looks like this (complete stats attached):\n\n------\nTopMemoryContext: 97696 total in 5 blocks; 13056 free (11 chunks);\n 84640 used\n ...\n TopPortalContext: 8192 total in 1 blocks; 7680 free (0 chunks); ...\n PortalContext: 1024 total in 1 blocks; 560 free (0 chunks); ...\n ExecutorState: 2541764672 total in 314 blocks; 6528176 free\n (1208 chunks); 2535236496 used\n printtup: 8192 total in 1 blocks; 7952 free (0 chunks); ...\n ...\n...\nGrand total: 2544132336 bytes in 528 blocks; 7484504 free\n (1340 chunks); 2536647832 used\n------\n\nI'd say 2.5GB in ExecutorState seems a bit excessive ... Seems there's\nsome memory management issue? My guess is we're not releasing memory\nallocated while parsing the JSON or building JSON output.\n\n\nI'm not attaching the data, but I can provide that if needed - it's\nabout 600MB compressed. The structure is not particularly complex, it's\nmovie info from [1] combined into a JSON document (one per movie).\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Tue, 5 Mar 2024 22:30:20 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "Hi Tomas,\n\nOn Wed, Mar 6, 2024 at 6:30 AM Tomas Vondra\n<[email protected]> wrote:\n>\n> Hi,\n>\n> I know very little about sql/json and all the json internals, but I\n> decided to do some black box testing. I built a large JSONB table\n> (single column, ~7GB of data after loading). And then I did a query\n> transforming the data into tabular form using JSON_TABLE.\n>\n> The JSON_TABLE query looks like this:\n>\n> SELECT jt.* FROM\n> title_jsonb t,\n> json_table(t.info, '$'\n> COLUMNS (\n> \"id\" text path '$.\"id\"',\n> \"type\" text path '$.\"type\"',\n> \"title\" text path '$.\"title\"',\n> \"original_title\" text path '$.\"original_title\"',\n> \"is_adult\" text path '$.\"is_adult\"',\n> \"start_year\" text path '$.\"start_year\"',\n> \"end_year\" text path '$.\"end_year\"',\n> \"minutes\" text path '$.\"minutes\"',\n> \"genres\" text path '$.\"genres\"',\n> \"aliases\" text path '$.\"aliases\"',\n> \"directors\" text path '$.\"directors\"',\n> \"writers\" text path '$.\"writers\"',\n> \"ratings\" text path '$.\"ratings\"',\n> NESTED PATH '$.\"aliases\"[*]'\n> COLUMNS (\n> \"alias_title\" text path '$.\"title\"',\n> \"alias_region\" text path '$.\"region\"'\n> ),\n> NESTED PATH '$.\"directors\"[*]'\n> COLUMNS (\n> \"director_name\" text path '$.\"name\"',\n> \"director_birth_year\" text path '$.\"birth_year\"',\n> \"director_death_year\" text path '$.\"death_year\"'\n> ),\n> NESTED PATH '$.\"writers\"[*]'\n> COLUMNS (\n> \"writer_name\" text path '$.\"name\"',\n> \"writer_birth_year\" text path '$.\"birth_year\"',\n> \"writer_death_year\" text path '$.\"death_year\"'\n> ),\n> NESTED PATH '$.\"ratings\"[*]'\n> COLUMNS (\n> \"rating_average\" text path '$.\"average\"',\n> \"rating_votes\" text path '$.\"votes\"'\n> )\n> )\n> ) as jt;\n>\n> again, not particularly complex. But if I run this, it consumes multiple\n> gigabytes of memory, before it gets killed by OOM killer. This happens\n> even when ran using\n>\n> COPY (...) TO '/dev/null'\n>\n> so there's nothing sent to the client. I did catch memory context info,\n> where it looks like this (complete stats attached):\n>\n> ------\n> TopMemoryContext: 97696 total in 5 blocks; 13056 free (11 chunks);\n> 84640 used\n> ...\n> TopPortalContext: 8192 total in 1 blocks; 7680 free (0 chunks); ...\n> PortalContext: 1024 total in 1 blocks; 560 free (0 chunks); ...\n> ExecutorState: 2541764672 total in 314 blocks; 6528176 free\n> (1208 chunks); 2535236496 used\n> printtup: 8192 total in 1 blocks; 7952 free (0 chunks); ...\n> ...\n> ...\n> Grand total: 2544132336 bytes in 528 blocks; 7484504 free\n> (1340 chunks); 2536647832 used\n> ------\n>\n> I'd say 2.5GB in ExecutorState seems a bit excessive ... Seems there's\n> some memory management issue? My guess is we're not releasing memory\n> allocated while parsing the JSON or building JSON output.\n>\n> I'm not attaching the data, but I can provide that if needed - it's\n> about 600MB compressed. The structure is not particularly complex, it's\n> movie info from [1] combined into a JSON document (one per movie).\n\nThanks for the report.\n\nYeah, I'd like to see the data to try to drill down into what's piling\nup in ExecutorState. I want to be sure of if the 1st, query functions\npatch, is not implicated in this, because I'd like to get that one out\nof the way sooner than later.\n\n-- \nThanks, Amit Langote\n\n\n",
"msg_date": "Wed, 6 Mar 2024 13:07:33 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "On Tue, Mar 5, 2024 at 6:52 AM Amit Langote <[email protected]> wrote:\n\nHi,\n\nI am doing some random testing with the latest patch and found one scenario\nthat I wanted to share.\nconsider a below case.\n\n‘postgres[102531]=#’SELECT * FROM JSON_TABLE(jsonb '{\n \"id\" : 12345678901,\n \"FULL_NAME\" : \"JOHN DOE\"}',\n '$'\n COLUMNS(\n name varchar(20) PATH 'lax $.FULL_NAME',\n id int PATH 'lax $.id'\n )\n )\n;\nERROR: 22003: integer out of range\nLOCATION: numeric_int4_opt_error, numeric.c:4385\n‘postgres[102531]=#’SELECT * FROM JSON_TABLE(jsonb '{\n \"id\" : \"12345678901\",\n \"FULL_NAME\" : \"JOHN DOE\"}',\n '$'\n COLUMNS(\n name varchar(20) PATH 'lax $.FULL_NAME',\n id int PATH 'lax $.id'\n )\n )\n;\n name | id\n----------+----\n JOHN DOE |\n(1 row)\n\nThe first query throws an error that the integer is \"out of range\" and is\nquite expected but in the second case(when the value is enclosed with \") it\nis able to process the JSON object but does not return any relevant\nerror(in fact processes the JSON but returns it with empty data for \"id\"\nfield). I think second query should fail with a similar error.\n\n\n-- \nRegards,\nHimanshu Upadhyaya\nEnterpriseDB: http://www.enterprisedb.com\n\nOn Tue, Mar 5, 2024 at 6:52 AM Amit Langote <[email protected]> wrote:Hi,I am doing some random testing with the latest patch and found one scenario that I wanted to share.consider a below case.‘postgres[102531]=#’SELECT * FROM JSON_TABLE(jsonb '{ \"id\" : 12345678901, \"FULL_NAME\" : \"JOHN DOE\"}', '$' COLUMNS( name varchar(20) PATH 'lax $.FULL_NAME', id int PATH 'lax $.id' ) );ERROR: 22003: integer out of rangeLOCATION: numeric_int4_opt_error, numeric.c:4385‘postgres[102531]=#’SELECT * FROM JSON_TABLE(jsonb '{ \"id\" : \"12345678901\", \"FULL_NAME\" : \"JOHN DOE\"}', '$' COLUMNS( name varchar(20) PATH 'lax $.FULL_NAME', id int PATH 'lax $.id' ) ); name | id ----------+---- JOHN DOE | (1 row)The first query throws an error that the integer is \"out of range\" and is quite expected but in the second case(when the value is enclosed with \") it is able to process the JSON object but does not return any relevant error(in fact processes the JSON but returns it with empty data for \"id\" field). I think second query should fail with a similar error.-- Regards,\nHimanshu Upadhyaya\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Wed, 6 Mar 2024 17:28:06 +0530",
"msg_from": "Himanshu Upadhyaya <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "On Wed, Mar 6, 2024 at 12:07 PM Amit Langote <[email protected]> wrote:\n>\n> Hi Tomas,\n>\n> On Wed, Mar 6, 2024 at 6:30 AM Tomas Vondra\n> <[email protected]> wrote:\n> >\n> > Hi,\n> >\n> > I know very little about sql/json and all the json internals, but I\n> > decided to do some black box testing. I built a large JSONB table\n> > (single column, ~7GB of data after loading). And then I did a query\n> > transforming the data into tabular form using JSON_TABLE.\n> >\n> > The JSON_TABLE query looks like this:\n> >\n> > SELECT jt.* FROM\n> > title_jsonb t,\n> > json_table(t.info, '$'\n> > COLUMNS (\n> > \"id\" text path '$.\"id\"',\n> > \"type\" text path '$.\"type\"',\n> > \"title\" text path '$.\"title\"',\n> > \"original_title\" text path '$.\"original_title\"',\n> > \"is_adult\" text path '$.\"is_adult\"',\n> > \"start_year\" text path '$.\"start_year\"',\n> > \"end_year\" text path '$.\"end_year\"',\n> > \"minutes\" text path '$.\"minutes\"',\n> > \"genres\" text path '$.\"genres\"',\n> > \"aliases\" text path '$.\"aliases\"',\n> > \"directors\" text path '$.\"directors\"',\n> > \"writers\" text path '$.\"writers\"',\n> > \"ratings\" text path '$.\"ratings\"',\n> > NESTED PATH '$.\"aliases\"[*]'\n> > COLUMNS (\n> > \"alias_title\" text path '$.\"title\"',\n> > \"alias_region\" text path '$.\"region\"'\n> > ),\n> > NESTED PATH '$.\"directors\"[*]'\n> > COLUMNS (\n> > \"director_name\" text path '$.\"name\"',\n> > \"director_birth_year\" text path '$.\"birth_year\"',\n> > \"director_death_year\" text path '$.\"death_year\"'\n> > ),\n> > NESTED PATH '$.\"writers\"[*]'\n> > COLUMNS (\n> > \"writer_name\" text path '$.\"name\"',\n> > \"writer_birth_year\" text path '$.\"birth_year\"',\n> > \"writer_death_year\" text path '$.\"death_year\"'\n> > ),\n> > NESTED PATH '$.\"ratings\"[*]'\n> > COLUMNS (\n> > \"rating_average\" text path '$.\"average\"',\n> > \"rating_votes\" text path '$.\"votes\"'\n> > )\n> > )\n> > ) as jt;\n> >\n> > again, not particularly complex. But if I run this, it consumes multiple\n> > gigabytes of memory, before it gets killed by OOM killer. This happens\n> > even when ran using\n> >\n> > COPY (...) TO '/dev/null'\n> >\n> > so there's nothing sent to the client. I did catch memory context info,\n> > where it looks like this (complete stats attached):\n> >\n> > ------\n> > TopMemoryContext: 97696 total in 5 blocks; 13056 free (11 chunks);\n> > 84640 used\n> > ...\n> > TopPortalContext: 8192 total in 1 blocks; 7680 free (0 chunks); ...\n> > PortalContext: 1024 total in 1 blocks; 560 free (0 chunks); ...\n> > ExecutorState: 2541764672 total in 314 blocks; 6528176 free\n> > (1208 chunks); 2535236496 used\n> > printtup: 8192 total in 1 blocks; 7952 free (0 chunks); ...\n> > ...\n> > ...\n> > Grand total: 2544132336 bytes in 528 blocks; 7484504 free\n> > (1340 chunks); 2536647832 used\n> > ------\n> >\n> > I'd say 2.5GB in ExecutorState seems a bit excessive ... Seems there's\n> > some memory management issue? My guess is we're not releasing memory\n> > allocated while parsing the JSON or building JSON output.\n> >\n> > I'm not attaching the data, but I can provide that if needed - it's\n> > about 600MB compressed. The structure is not particularly complex, it's\n> > movie info from [1] combined into a JSON document (one per movie).\n>\n> Thanks for the report.\n>\n> Yeah, I'd like to see the data to try to drill down into what's piling\n> up in ExecutorState. 
I want to be sure of if the 1st, query functions\n> patch, is not implicated in this, because I'd like to get that one out\n> of the way sooner than later.\n>\n\nI did some tests. it generally looks like:\n\ncreate or replace function random_text() returns text\nas $$select string_agg(md5(random()::text),'') from\ngenerate_Series(1,8) s $$ LANGUAGE SQL;\nDROP TABLE if exists s;\ncreate table s(a jsonb);\nINSERT INTO s SELECT (\n'{\"id\": \"' || random_text() || '\",'\n'\"type\": \"' || random_text() || '\",'\n'\"title\": \"' || random_text() || '\",'\n'\"original_title\": \"' || random_text() || '\",'\n'\"is_adult\": \"' || random_text() || '\",'\n'\"start_year\": \"' || random_text() || '\",'\n'\"end_year\": \"' || random_text() || '\",'\n'\"minutes\": \"' || random_text() || '\",'\n'\"genres\": \"' || random_text() || '\",'\n'\"aliases\": \"' || random_text() || '\",'\n'\"genres\": \"' || random_text() || '\",'\n'\"directors\": \"' || random_text() || '\",'\n'\"writers\": \"' || random_text() || '\",'\n'\"ratings\": \"' || random_text() || '\",'\n'\"director_name\": \"' || random_text() || '\",'\n'\"alias_title\": \"' || random_text() || '\",'\n'\"alias_region\": \"' || random_text() || '\",'\n'\"director_birth_year\": \"' || random_text() || '\",'\n'\"director_death_year\": \"' || random_text() || '\",'\n'\"rating_average\": \"' || random_text() || '\",'\n'\"rating_votes\": \"' || random_text() || '\"'\n||'}' )::jsonb\nFROM generate_series(1, 1e6);\nSELECT pg_size_pretty(pg_table_size('s')); -- 5975 MB\n\nIt's less complex than Tomas's version.\n\nattached, 3 test files:\n1e5 rows, each key's value is small. Total table size is 598 MB.\n1e6 rows, each key's value is small. Total table size is 5975 MB.\n27 rows, total table size is 5066 MB.\nThe test file's comment is the output I extracted using\npg_log_backend_memory_contexts,\nmainly ExecutorState and surrounding big number memory context.\n\nConclusion, I come from the test:\nif each json is big (5066 MB/27) , then it will take a lot of memory.\nif each json is small(here is 256 byte), then it won't take a lot of\nmemory to process.\n\nAnother case, I did test yet: more keys in a single json, but the\nvalue is small.",
"msg_date": "Wed, 6 Mar 2024 21:22:12 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "\n\nOn 3/6/24 12:58, Himanshu Upadhyaya wrote:\n> On Tue, Mar 5, 2024 at 6:52 AM Amit Langote <[email protected]> wrote:\n> \n> Hi,\n> \n> I am doing some random testing with the latest patch and found one scenario\n> that I wanted to share.\n> consider a below case.\n> \n> ‘postgres[102531]=#’SELECT * FROM JSON_TABLE(jsonb '{\n> \"id\" : 12345678901,\n> \"FULL_NAME\" : \"JOHN DOE\"}',\n> '$'\n> COLUMNS(\n> name varchar(20) PATH 'lax $.FULL_NAME',\n> id int PATH 'lax $.id'\n> )\n> )\n> ;\n> ERROR: 22003: integer out of range\n> LOCATION: numeric_int4_opt_error, numeric.c:4385\n> ‘postgres[102531]=#’SELECT * FROM JSON_TABLE(jsonb '{\n> \"id\" : \"12345678901\",\n> \"FULL_NAME\" : \"JOHN DOE\"}',\n> '$'\n> COLUMNS(\n> name varchar(20) PATH 'lax $.FULL_NAME',\n> id int PATH 'lax $.id'\n> )\n> )\n> ;\n> name | id\n> ----------+----\n> JOHN DOE |\n> (1 row)\n> \n> The first query throws an error that the integer is \"out of range\" and is\n> quite expected but in the second case(when the value is enclosed with \") it\n> is able to process the JSON object but does not return any relevant\n> error(in fact processes the JSON but returns it with empty data for \"id\"\n> field). I think second query should fail with a similar error.\n> \n\nI'm pretty sure this is the correct & expected behavior. The second\nquery treats the value as string (because that's what should happen for\nvalues in double quotes).\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 6 Mar 2024 16:34:41 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "On Wed, Mar 6, 2024 at 9:22 PM jian he <[email protected]> wrote:\n>\n> Another case, I did test yet: more keys in a single json, but the\n> value is small.\n\nAnother case attached. see the attached SQL file's comments.\na single simple jsonb, with 33 keys, each key's value with fixed length: 256.\ntotal table size: SELECT pg_size_pretty(pg_table_size('json33keys')); --5369 MB\nnumber of rows: 600001.\n\nusing the previously mentioned method: pg_log_backend_memory_contexts.\nall these tests under:\n-Dcassert=true \\\n-Db_coverage=true \\\n-Dbuildtype=debug \\\n\nI hope someone will tell me if the test method is correct or not.",
"msg_date": "Thu, 7 Mar 2024 10:50:09 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "On Tue, Mar 5, 2024 at 12:38 PM Andy Fan <[email protected]> wrote:\n>\n>\n> In the commit message of 0001, we have:\n>\n> \"\"\"\n> Both JSON_VALUE() and JSON_QUERY() functions have options for\n> handling EMPTY and ERROR conditions, which can be used to specify\n> the behavior when no values are matched and when an error occurs\n> during evaluation, respectively.\n>\n> All of these functions only operate on jsonb values. The workaround\n> for now is to cast the argument to jsonb.\n> \"\"\"\n>\n> which is not clear for me why we introduce JSON_VALUE() function, is it\n> for handling EMPTY or ERROR conditions? I think the existing cast\n> workaround have a similar capacity?\n>\n\nI guess because it's in the standard.\nbut I don't see individual sql standard Identifier, JSON_VALUE in\nsql_features.txt\nI do see JSON_QUERY.\nmysql also have JSON_VALUE, [1]\n\nEMPTY, ERROR: there is a standard Identifier: T825: SQL/JSON: ON EMPTY\nand ON ERROR clauses\n\n[1] https://dev.mysql.com/doc/refman/8.0/en/json-search-functions.html#function_json-value\n\n\n",
"msg_date": "Thu, 7 Mar 2024 12:08:21 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "two cosmetic minor issues.\n\n+/*\n+ * JsonCoercion\n+ * Information about coercing a SQL/JSON value to the specified\n+ * type at runtime\n+ *\n+ * A node of this type is created if the parser cannot find a cast expression\n+ * using coerce_type() or OMIT QUOTES is specified for JSON_QUERY. If the\n+ * latter, 'expr' may contain the cast expression; if not, the quote-stripped\n+ * scalar string will be coerced by calling the target type's input function.\n+ * See ExecEvalJsonCoercion.\n+ */\n+typedef struct JsonCoercion\n+{\n+ NodeTag type;\n+\n+ Oid targettype;\n+ int32 targettypmod;\n+ bool omit_quotes; /* OMIT QUOTES specified for JSON_QUERY? */\n+ Node *cast_expr; /* coercion cast expression or NULL */\n+ Oid collation;\n+} JsonCoercion;\n\ncomment: 'expr' may contain the cast expression;\nhere \"exr\" should be \"cast_expr\"?\n\"a SQL/JSON\" should be \" an SQL/JSON\"?\n\n\n",
"msg_date": "Thu, 7 Mar 2024 13:16:49 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "On Wed, Mar 6, 2024 at 9:04 PM Tomas Vondra <[email protected]>\nwrote:\n\n>\n>\n> I'm pretty sure this is the correct & expected behavior. The second\n> query treats the value as string (because that's what should happen for\n> values in double quotes).\n>\n> ok, Then why does the below query provide the correct conversion, even if\nwe enclose that in double quotes?\n‘postgres[102531]=#’SELECT * FROM JSON_TABLE(jsonb '{\n \"id\" : \"1234567890\",\n \"FULL_NAME\" : \"JOHN DOE\"}',\n '$'\n COLUMNS(\n name varchar(20) PATH 'lax $.FULL_NAME',\n id int PATH 'lax $.id'\n )\n )\n;\n name | id\n----------+------------\n JOHN DOE | 1234567890\n(1 row)\n\nand for bigger input(string) it will leave as empty as below.\n‘postgres[102531]=#’SELECT * FROM JSON_TABLE(jsonb '{\n \"id\" : \"12345678901\",\n \"FULL_NAME\" : \"JOHN DOE\"}',\n '$'\n COLUMNS(\n name varchar(20) PATH 'lax $.FULL_NAME',\n id int PATH 'lax $.id'\n )\n )\n;\n name | id\n----------+----\n JOHN DOE |\n(1 row)\n\nseems it is not something to do with data enclosed in double quotes but\nsomehow related with internal casting it to integer and I think in case of\nbigger input it is not able to cast it to integer(as defined under COLUMNS\nas id int PATH 'lax $.id')\n\n‘postgres[102531]=#’SELECT * FROM JSON_TABLE(jsonb '{\n \"id\" : \"12345678901\",\n \"FULL_NAME\" : \"JOHN DOE\"}',\n '$'\n COLUMNS(\n name varchar(20) PATH 'lax $.FULL_NAME',\n id int PATH 'lax $.id'\n )\n )\n;\n name | id\n----------+----\n JOHN DOE |\n(1 row)\n)\n\nif it is not able to represent it to integer because of bigger input, it\nshould error out with a similar error message instead of leaving it empty.\n\nThoughts?\n\n-- \nRegards,\nHimanshu Upadhyaya\nEnterpriseDB: http://www.enterprisedb.com\n\nOn Wed, Mar 6, 2024 at 9:04 PM Tomas Vondra <[email protected]> wrote:\nI'm pretty sure this is the correct & expected behavior. The second\nquery treats the value as string (because that's what should happen for\nvalues in double quotes).\n ok, Then why does the below query provide the correct conversion, even if we enclose that in double quotes?‘postgres[102531]=#’SELECT * FROM JSON_TABLE(jsonb '{ \"id\" : \"1234567890\", \"FULL_NAME\" : \"JOHN DOE\"}', '$' COLUMNS( name varchar(20) PATH 'lax $.FULL_NAME', id int PATH 'lax $.id' ) ); name | id ----------+------------ JOHN DOE | 1234567890(1 row)and for bigger input(string) it will leave as empty as below.‘postgres[102531]=#’SELECT * FROM JSON_TABLE(jsonb '{ \"id\" : \"12345678901\", \"FULL_NAME\" : \"JOHN DOE\"}', '$' COLUMNS( name varchar(20) PATH 'lax $.FULL_NAME', id int PATH 'lax $.id' ) ); name | id ----------+---- JOHN DOE | (1 row)seems it is not something to do with data enclosed in double quotes but somehow related with internal casting it to integer and I think in case of bigger input it is not able to cast it to integer(as defined under COLUMNS as id int PATH 'lax $.id') ‘postgres[102531]=#’SELECT * FROM JSON_TABLE(jsonb '{ \"id\" : \"12345678901\", \"FULL_NAME\" : \"JOHN DOE\"}', '$' COLUMNS( name varchar(20) PATH 'lax $.FULL_NAME', id int PATH 'lax $.id' ) ); name | id ----------+---- JOHN DOE | (1 row))if it is not able to represent it to integer because of bigger input, it should error out with a similar error message instead of leaving it empty.Thoughts?-- Regards,\nHimanshu Upadhyaya\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Thu, 7 Mar 2024 10:48:02 +0530",
"msg_from": "Himanshu Upadhyaya <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "On Wed, Mar 6, 2024 at 1:07 PM Amit Langote <[email protected]> wrote:\n> Hi Tomas,\n>\n> On Wed, Mar 6, 2024 at 6:30 AM Tomas Vondra\n> <[email protected]> wrote:\n> > I'd say 2.5GB in ExecutorState seems a bit excessive ... Seems there's\n> > some memory management issue? My guess is we're not releasing memory\n> > allocated while parsing the JSON or building JSON output.\n> >\n> > I'm not attaching the data, but I can provide that if needed - it's\n> > about 600MB compressed. The structure is not particularly complex, it's\n> > movie info from [1] combined into a JSON document (one per movie).\n>\n> Thanks for the report.\n>\n> Yeah, I'd like to see the data to try to drill down into what's piling\n> up in ExecutorState. I want to be sure of if the 1st, query functions\n> patch, is not implicated in this, because I'd like to get that one out\n> of the way sooner than later.\n\nI tracked this memory-hogging down to a bug in the query functions\npatch (0001) after all. The problem was with a query-lifetime cache\nvariable that was never set to point to the allocated memory. So a\nstruct was allocated and then not freed for every row where it should\nhave only been allocated once.\n\nI've fixed that bug in the attached. I've also addressed some of\nJian's comments and made quite a few cleanups of my own.\n\nNow I'll go look if Himanshu's concerns are a blocker for committing 0001. ;)\n\n\n--\nThanks, Amit Langote",
"msg_date": "Thu, 7 Mar 2024 16:26:08 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "On 3/7/24 08:26, Amit Langote wrote:\n> On Wed, Mar 6, 2024 at 1:07 PM Amit Langote <[email protected]> wrote:\n>> Hi Tomas,\n>>\n>> On Wed, Mar 6, 2024 at 6:30 AM Tomas Vondra\n>> <[email protected]> wrote:\n>>> I'd say 2.5GB in ExecutorState seems a bit excessive ... Seems there's\n>>> some memory management issue? My guess is we're not releasing memory\n>>> allocated while parsing the JSON or building JSON output.\n>>>\n>>> I'm not attaching the data, but I can provide that if needed - it's\n>>> about 600MB compressed. The structure is not particularly complex, it's\n>>> movie info from [1] combined into a JSON document (one per movie).\n>>\n>> Thanks for the report.\n>>\n>> Yeah, I'd like to see the data to try to drill down into what's piling\n>> up in ExecutorState. I want to be sure of if the 1st, query functions\n>> patch, is not implicated in this, because I'd like to get that one out\n>> of the way sooner than later.\n> \n> I tracked this memory-hogging down to a bug in the query functions\n> patch (0001) after all. The problem was with a query-lifetime cache\n> variable that was never set to point to the allocated memory. So a\n> struct was allocated and then not freed for every row where it should\n> have only been allocated once.\n> \n\nThanks! I can confirm the query works with the new patches.\n\nExporting the 7GB table takes ~250 seconds (the result is ~10.6GB). That\nseems maybe a bit much, but I'm not sure it's the fault of this patch.\nAttached is a flamegraph for the export, and clearly most of the time is\nspent in jsonpath. I wonder if there's a way to improve this, but I\ndon't think it's up to this patch.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Thu, 7 Mar 2024 12:02:27 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "\n\nOn 3/7/24 06:18, Himanshu Upadhyaya wrote:\n> On Wed, Mar 6, 2024 at 9:04 PM Tomas Vondra <[email protected]>\n> wrote:\n> \n>>\n>>\n>> I'm pretty sure this is the correct & expected behavior. The second\n>> query treats the value as string (because that's what should happen for\n>> values in double quotes).\n>>\n>> ok, Then why does the below query provide the correct conversion, even if\n> we enclose that in double quotes?\n> ‘postgres[102531]=#’SELECT * FROM JSON_TABLE(jsonb '{\n> \"id\" : \"1234567890\",\n> \"FULL_NAME\" : \"JOHN DOE\"}',\n> '$'\n> COLUMNS(\n> name varchar(20) PATH 'lax $.FULL_NAME',\n> id int PATH 'lax $.id'\n> )\n> )\n> ;\n> name | id\n> ----------+------------\n> JOHN DOE | 1234567890\n> (1 row)\n> \n> and for bigger input(string) it will leave as empty as below.\n> ‘postgres[102531]=#’SELECT * FROM JSON_TABLE(jsonb '{\n> \"id\" : \"12345678901\",\n> \"FULL_NAME\" : \"JOHN DOE\"}',\n> '$'\n> COLUMNS(\n> name varchar(20) PATH 'lax $.FULL_NAME',\n> id int PATH 'lax $.id'\n> )\n> )\n> ;\n> name | id\n> ----------+----\n> JOHN DOE |\n> (1 row)\n> \n> seems it is not something to do with data enclosed in double quotes but\n> somehow related with internal casting it to integer and I think in case of\n> bigger input it is not able to cast it to integer(as defined under COLUMNS\n> as id int PATH 'lax $.id')\n> \n> ‘postgres[102531]=#’SELECT * FROM JSON_TABLE(jsonb '{\n> \"id\" : \"12345678901\",\n> \"FULL_NAME\" : \"JOHN DOE\"}',\n> '$'\n> COLUMNS(\n> name varchar(20) PATH 'lax $.FULL_NAME',\n> id int PATH 'lax $.id'\n> )\n> )\n> ;\n> name | id\n> ----------+----\n> JOHN DOE |\n> (1 row)\n> )\n> \n> if it is not able to represent it to integer because of bigger input, it\n> should error out with a similar error message instead of leaving it empty.\n> \n> Thoughts?\n> \n\nAh, I see! Yes, that's a bit weird. Put slightly differently:\n\ntest=# SELECT * FROM JSON_TABLE(jsonb '{\"id\" : \"2000000000\"}',\n '$' COLUMNS(id int PATH '$.id'));\n id\n------------\n 2000000000\n(1 row)\n\nTime: 0.248 ms\ntest=# SELECT * FROM JSON_TABLE(jsonb '{\"id\" : \"3000000000\"}',\n '$' COLUMNS(id int PATH '$.id'));\n id\n----\n\n(1 row)\n\nClearly, when converting the string literal into int value, there's some\nsort of error handling that realizes 3B overflows, and returns NULL\ninstead. I'm not sure if this is intentional.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 7 Mar 2024 12:13:03 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "On Thu, Mar 7, 2024 at 8:13 PM Tomas Vondra\n<[email protected]> wrote:\n> On 3/7/24 06:18, Himanshu Upadhyaya wrote:\n\nThanks Himanshu for the testing.\n\n> > On Wed, Mar 6, 2024 at 9:04 PM Tomas Vondra <[email protected]>\n> > wrote:\n> >>\n> >> I'm pretty sure this is the correct & expected behavior. The second\n> >> query treats the value as string (because that's what should happen for\n> >> values in double quotes).\n> >>\n> >> ok, Then why does the below query provide the correct conversion, even if\n> > we enclose that in double quotes?\n> > ‘postgres[102531]=#’SELECT * FROM JSON_TABLE(jsonb '{\n> > \"id\" : \"1234567890\",\n> > \"FULL_NAME\" : \"JOHN DOE\"}',\n> > '$'\n> > COLUMNS(\n> > name varchar(20) PATH 'lax $.FULL_NAME',\n> > id int PATH 'lax $.id'\n> > )\n> > )\n> > ;\n> > name | id\n> > ----------+------------\n> > JOHN DOE | 1234567890\n> > (1 row)\n> >\n> > and for bigger input(string) it will leave as empty as below.\n> > ‘postgres[102531]=#’SELECT * FROM JSON_TABLE(jsonb '{\n> > \"id\" : \"12345678901\",\n> > \"FULL_NAME\" : \"JOHN DOE\"}',\n> > '$'\n> > COLUMNS(\n> > name varchar(20) PATH 'lax $.FULL_NAME',\n> > id int PATH 'lax $.id'\n> > )\n> > )\n> > ;\n> > name | id\n> > ----------+----\n> > JOHN DOE |\n> > (1 row)\n> >\n> > seems it is not something to do with data enclosed in double quotes but\n> > somehow related with internal casting it to integer and I think in case of\n> > bigger input it is not able to cast it to integer(as defined under COLUMNS\n> > as id int PATH 'lax $.id')\n> >\n> > ‘postgres[102531]=#’SELECT * FROM JSON_TABLE(jsonb '{\n> > \"id\" : \"12345678901\",\n> > \"FULL_NAME\" : \"JOHN DOE\"}',\n> > '$'\n> > COLUMNS(\n> > name varchar(20) PATH 'lax $.FULL_NAME',\n> > id int PATH 'lax $.id'\n> > )\n> > )\n> > ;\n> > name | id\n> > ----------+----\n> > JOHN DOE |\n> > (1 row)\n> > )\n> >\n> > if it is not able to represent it to integer because of bigger input, it\n> > should error out with a similar error message instead of leaving it empty.\n> >\n> > Thoughts?\n> >\n>\n> Ah, I see! Yes, that's a bit weird. Put slightly differently:\n>\n> test=# SELECT * FROM JSON_TABLE(jsonb '{\"id\" : \"2000000000\"}',\n> '$' COLUMNS(id int PATH '$.id'));\n> id\n> ------------\n> 2000000000\n> (1 row)\n>\n> Time: 0.248 ms\n> test=# SELECT * FROM JSON_TABLE(jsonb '{\"id\" : \"3000000000\"}',\n> '$' COLUMNS(id int PATH '$.id'));\n> id\n> ----\n>\n> (1 row)\n>\n> Clearly, when converting the string literal into int value, there's some\n> sort of error handling that realizes 3B overflows, and returns NULL\n> instead. I'm not sure if this is intentional.\n\nIndeed.\n\nThis boils down to the difference in the cast expression chosen to\nconvert the source value to int in the two cases.\n\nThe case where the source value has no quotes, the chosen cast\nexpression is a FuncExpr for function numeric_int4(), which has no way\nto suppress errors. 
When the source value has quotes, the cast\nexpression is a CoerceViaIO expression, which can suppress the error.\nThe default behavior is to suppress the error and return NULL, so the\ncorrect behavior is when the source value has quotes.\n\nI think we'll need either:\n\n* fix the code in 0001 to avoid getting numeric_int4() in this case,\nand generally cast functions that don't have soft-error handling\nsupport, in favor of using IO coercion.\n* fix FuncExpr (like CoerceViaIO) to respect SQL/JSON's request to\nsuppress errors and fix downstream functions like numeric_int4() to\ncomply by handling errors softly.\n\nI'm inclined to go with the 1st option as we already have the\ninfrastructure in place -- input functions can all handle errors\nsoftly.\n\n--\nThanks, Amit Langote\n\n\n",
"msg_date": "Thu, 7 Mar 2024 21:06:08 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "On Thu, Mar 7, 2024 at 8:06 PM Amit Langote <[email protected]> wrote:\n>\n>\n> Indeed.\n>\n> This boils down to the difference in the cast expression chosen to\n> convert the source value to int in the two cases.\n>\n> The case where the source value has no quotes, the chosen cast\n> expression is a FuncExpr for function numeric_int4(), which has no way\n> to suppress errors. When the source value has quotes, the cast\n> expression is a CoerceViaIO expression, which can suppress the error.\n> The default behavior is to suppress the error and return NULL, so the\n> correct behavior is when the source value has quotes.\n>\n> I think we'll need either:\n>\n> * fix the code in 0001 to avoid getting numeric_int4() in this case,\n> and generally cast functions that don't have soft-error handling\n> support, in favor of using IO coercion.\n> * fix FuncExpr (like CoerceViaIO) to respect SQL/JSON's request to\n> suppress errors and fix downstream functions like numeric_int4() to\n> comply by handling errors softly.\n>\n> I'm inclined to go with the 1st option as we already have the\n> infrastructure in place -- input functions can all handle errors\n> softly.\n\nnot sure this is the right way.\nbut attached patches solved this problem.\n\nAlso, can you share the previous memory-hogging bug issue\nwhen you are free, I want to know which part I am missing.....",
"msg_date": "Thu, 7 Mar 2024 21:46:40 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "Hi,\n\nI was experimenting with the v42 patches, and I think the handling of ON\nEMPTY / ON ERROR clauses may need some improvement. The grammar is\ncurrently defined like this:\n\n | json_behavior ON EMPTY_P json_behavior ON ERROR_P\n\nThis means the clauses have to be defined exactly in this order, and if\nsomeone does\n\n NULL ON ERROR NULL ON EMPTY\n\nit results in syntax error. I'm not sure what the SQL standard says\nabout this, but it seems other databases don't agree on the order. Is\nthere a particular reason to not allow both orderings?\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 7 Mar 2024 15:04:07 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "On Thu, Mar 7, 2024 at 22:46 jian he <[email protected]> wrote:\n\n> On Thu, Mar 7, 2024 at 8:06 PM Amit Langote <[email protected]>\n> wrote:\n> >\n> >\n> > Indeed.\n> >\n> > This boils down to the difference in the cast expression chosen to\n> > convert the source value to int in the two cases.\n> >\n> > The case where the source value has no quotes, the chosen cast\n> > expression is a FuncExpr for function numeric_int4(), which has no way\n> > to suppress errors. When the source value has quotes, the cast\n> > expression is a CoerceViaIO expression, which can suppress the error.\n> > The default behavior is to suppress the error and return NULL, so the\n> > correct behavior is when the source value has quotes.\n> >\n> > I think we'll need either:\n> >\n> > * fix the code in 0001 to avoid getting numeric_int4() in this case,\n> > and generally cast functions that don't have soft-error handling\n> > support, in favor of using IO coercion.\n> > * fix FuncExpr (like CoerceViaIO) to respect SQL/JSON's request to\n> > suppress errors and fix downstream functions like numeric_int4() to\n> > comply by handling errors softly.\n> >\n> > I'm inclined to go with the 1st option as we already have the\n> > infrastructure in place -- input functions can all handle errors\n> > softly.\n>\n> not sure this is the right way.\n> but attached patches solved this problem.\n>\n> Also, can you share the previous memory-hogging bug issue\n> when you are free, I want to know which part I am missing.....\n\n\nTake a look at the json_populate_type() call in ExecEvalJsonCoercion() or\nspecifically compare the new way of passing its void *cache parameter with\nthe earlier patches.\n\n>\n\nOn Thu, Mar 7, 2024 at 22:46 jian he <[email protected]> wrote:On Thu, Mar 7, 2024 at 8:06 PM Amit Langote <[email protected]> wrote:\n>\n>\n> Indeed.\n>\n> This boils down to the difference in the cast expression chosen to\n> convert the source value to int in the two cases.\n>\n> The case where the source value has no quotes, the chosen cast\n> expression is a FuncExpr for function numeric_int4(), which has no way\n> to suppress errors. When the source value has quotes, the cast\n> expression is a CoerceViaIO expression, which can suppress the error.\n> The default behavior is to suppress the error and return NULL, so the\n> correct behavior is when the source value has quotes.\n>\n> I think we'll need either:\n>\n> * fix the code in 0001 to avoid getting numeric_int4() in this case,\n> and generally cast functions that don't have soft-error handling\n> support, in favor of using IO coercion.\n> * fix FuncExpr (like CoerceViaIO) to respect SQL/JSON's request to\n> suppress errors and fix downstream functions like numeric_int4() to\n> comply by handling errors softly.\n>\n> I'm inclined to go with the 1st option as we already have the\n> infrastructure in place -- input functions can all handle errors\n> softly.\n\nnot sure this is the right way.\nbut attached patches solved this problem.\n\nAlso, can you share the previous memory-hogging bug issue\nwhen you are free, I want to know which part I am missing.....Take a look at the json_populate_type() call in ExecEvalJsonCoercion() or specifically compare the new way of passing its void *cache parameter with the earlier patches.",
"msg_date": "Thu, 7 Mar 2024 23:11:03 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "On 2024-Mar-07, Tomas Vondra wrote:\n\n> I was experimenting with the v42 patches, and I think the handling of ON\n> EMPTY / ON ERROR clauses may need some improvement.\n\nWell, the 2023 standard says things like\n\n<JSON value function> ::=\n JSON_VALUE <left paren>\n <JSON API common syntax>\n [ <JSON returning clause> ]\n [ <JSON value empty behavior> ON EMPTY ]\n [ <JSON value error behavior> ON ERROR ]\n <right paren>\n\nwhich implies that if you specify it the other way around, it's a syntax\nerror.\n\n> I'm not sure what the SQL standard says about this, but it seems other\n> databases don't agree on the order. Is there a particular reason to\n> not allow both orderings?\n\nI vaguely recall that trying to also support the other ordering leads to\nhaving more rules. Now maybe we do want that because of compatibility\nwith other DBMSs, but frankly at this stage I wouldn't bother.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"I am amazed at [the pgsql-sql] mailing list for the wonderful support, and\nlack of hesitasion in answering a lost soul's question, I just wished the rest\nof the mailing list could be like this.\" (Fotis)\n https://postgr.es/m/[email protected]\n\n\n",
"msg_date": "Thu, 7 Mar 2024 15:14:15 +0100",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "On Thu, Mar 7, 2024 at 23:14 Alvaro Herrera <[email protected]> wrote:\n\n> On 2024-Mar-07, Tomas Vondra wrote:\n>\n> > I was experimenting with the v42 patches, and I think the handling of ON\n> > EMPTY / ON ERROR clauses may need some improvement.\n>\n> Well, the 2023 standard says things like\n>\n> <JSON value function> ::=\n> JSON_VALUE <left paren>\n> <JSON API common syntax>\n> [ <JSON returning clause> ]\n> [ <JSON value empty behavior> ON EMPTY ]\n> [ <JSON value error behavior> ON ERROR ]\n> <right paren>\n>\n> which implies that if you specify it the other way around, it's a syntax\n> error.\n>\n> > I'm not sure what the SQL standard says about this, but it seems other\n> > databases don't agree on the order. Is there a particular reason to\n> > not allow both orderings?\n>\n> I vaguely recall that trying to also support the other ordering leads to\n> having more rules.\n\n\nYeah, I think that was it. At one point, I removed rules supporting syntax\nthat wasn’t documented.\n\nNow maybe we do want that because of compatibility\n> with other DBMSs, but frankly at this stage I wouldn't bother.\n\n\n+1.\n\n>\n\nOn Thu, Mar 7, 2024 at 23:14 Alvaro Herrera <[email protected]> wrote:On 2024-Mar-07, Tomas Vondra wrote:\n\n> I was experimenting with the v42 patches, and I think the handling of ON\n> EMPTY / ON ERROR clauses may need some improvement.\n\nWell, the 2023 standard says things like\n\n<JSON value function> ::=\n JSON_VALUE <left paren>\n <JSON API common syntax>\n [ <JSON returning clause> ]\n [ <JSON value empty behavior> ON EMPTY ]\n [ <JSON value error behavior> ON ERROR ]\n <right paren>\n\nwhich implies that if you specify it the other way around, it's a syntax\nerror.\n\n> I'm not sure what the SQL standard says about this, but it seems other\n> databases don't agree on the order. Is there a particular reason to\n> not allow both orderings?\n\nI vaguely recall that trying to also support the other ordering leads to\nhaving more rules.Yeah, I think that was it. At one point, I removed rules supporting syntax that wasn’t documented.Now maybe we do want that because of compatibility\nwith other DBMSs, but frankly at this stage I wouldn't bother.+1.",
"msg_date": "Thu, 7 Mar 2024 23:23:56 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "I looked at the documentation again.\none more changes for JSON_QUERY:\n\ndiff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml\nindex 3e58ebd2..0c49b321 100644\n--- a/doc/src/sgml/func.sgml\n+++ b/doc/src/sgml/func.sgml\n@@ -18715,8 +18715,8 @@ ERROR: jsonpath array subscript is out of bounds\n be of type <type>jsonb</type>.\n </para>\n <para>\n- The <literal>ON EMPTY</literal> clause specifies the behavior if the\n- <replaceable>path_expression</replaceable> yields no value at all; the\n+ The <literal>ON EMPTY</literal> clause specifies the behavior\nif applying the\n+ <replaceable>path_expression</replaceable> to the\n<replaceable>context_item</replaceable> yields no value at all; the\n default when <literal>ON EMPTY</literal> is not specified is to return\n a null value.\n </para>\n\n\n",
"msg_date": "Fri, 8 Mar 2024 12:04:31 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "\njian he <[email protected]> writes:\n\n> On Tue, Mar 5, 2024 at 12:38 PM Andy Fan <[email protected]> wrote:\n>>\n>>\n>> In the commit message of 0001, we have:\n>>\n>> \"\"\"\n>> Both JSON_VALUE() and JSON_QUERY() functions have options for\n>> handling EMPTY and ERROR conditions, which can be used to specify\n>> the behavior when no values are matched and when an error occurs\n>> during evaluation, respectively.\n>>\n>> All of these functions only operate on jsonb values. The workaround\n>> for now is to cast the argument to jsonb.\n>> \"\"\"\n>>\n>> which is not clear for me why we introduce JSON_VALUE() function, is it\n>> for handling EMPTY or ERROR conditions? I think the existing cast\n>> workaround have a similar capacity?\n>>\n>\n> I guess because it's in the standard.\n> but I don't see individual sql standard Identifier, JSON_VALUE in\n> sql_features.txt\n> I do see JSON_QUERY.\n> mysql also have JSON_VALUE, [1]\n>\n> EMPTY, ERROR: there is a standard Identifier: T825: SQL/JSON: ON EMPTY\n> and ON ERROR clauses\n>\n> [1] https://dev.mysql.com/doc/refman/8.0/en/json-search-functions.html#function_json-value\n\nThank you for this informatoin!\n\n-- \nBest Regards\nAndy Fan\n\n\n\n",
"msg_date": "Sun, 10 Mar 2024 07:12:27 +0800",
"msg_from": "Andy Fan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "one more issue.\n+ case JSON_VALUE_OP:\n+ /* Always omit quotes from scalar strings. */\n+ jsexpr->omit_quotes = (func->quotes == JS_QUOTES_OMIT);\n+\n+ /* JSON_VALUE returns text by default. */\n+ if (!OidIsValid(jsexpr->returning->typid))\n+ {\n+ jsexpr->returning->typid = TEXTOID;\n+ jsexpr->returning->typmod = -1;\n+ }\n\nby default, makeNode(JsonExpr), node initialization,\njsexpr->omit_quotes will initialize to false,\nEven though there was no implication to the JSON_TABLE patch (probably\nbecause coerceJsonFuncExprOutput), all tests still passed.\nbased on the above comment, and the regress test, you still need do (i think)\n`\njsexpr->omit_quotes = true;\n`\n\n\n",
"msg_date": "Sun, 10 Mar 2024 22:57:15 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "On Sun, Mar 10, 2024 at 10:57 PM jian he <[email protected]> wrote:\n>\n> one more issue.\n\nHi\none more documentation issue.\nafter applied V42, 0001 to 0003,\nthere are 11 appearance of `FORMAT JSON` in functions-json.html\nstill not a single place explained what it is for.\n\njson_query ( context_item, path_expression [ PASSING { value AS\nvarname } [, ...]] [ RETURNING data_type [ FORMAT JSON [ ENCODING UTF8\n] ] ] [ { WITHOUT | WITH { CONDITIONAL | [UNCONDITIONAL] } } [ ARRAY ]\nWRAPPER ] [ { KEEP | OMIT } QUOTES [ ON SCALAR STRING ] ] [ { ERROR |\nNULL | EMPTY { [ ARRAY ] | OBJECT } | DEFAULT expression } ON EMPTY ]\n[ { ERROR | NULL | EMPTY { [ ARRAY ] | OBJECT } | DEFAULT expression }\nON ERROR ])\n\nFORMAT JSON seems just a syntax sugar or for compatibility in json_query.\nbut it returns an error when the returning type category is not\nTYPCATEGORY_STRING.\n\nfor example, even the following will return an error.\n`\nCREATE TYPE regtest_comptype AS (b text);\nSELECT JSON_QUERY(jsonb '{\"a\":{\"b\":\"c\"}}', '$.a' RETURNING\nregtest_comptype format json);\n`\n\nseems only types in[0] will not generate an error, when specifying\nFORMAT JSON in JSON_QUERY.\n\nso it actually does something, not a syntax sugar?\n\n[0] https://www.postgresql.org/docs/current/datatype-character.html\n\n\n",
"msg_date": "Mon, 11 Mar 2024 11:30:09 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "one more issue.....\n\n+-- Extension: non-constant JSON path\n+SELECT JSON_EXISTS(jsonb '{\"a\": 123}', '$' || '.' || 'a');\n+SELECT JSON_VALUE(jsonb '{\"a\": 123}', '$' || '.' || 'a');\n+SELECT JSON_VALUE(jsonb '{\"a\": 123}', '$' || '.' || 'b' DEFAULT 'foo'\nON EMPTY);\n+SELECT JSON_QUERY(jsonb '{\"a\": 123}', '$' || '.' || 'a');\n+SELECT JSON_QUERY(jsonb '{\"a\": 123}', '$' || '.' || 'a' WITH WRAPPER);\n\njson path may not be a plain Const.\ndoes the following code in expression_tree_walker_impl need to consider\ncases when the `jexpr->path_spec` part is not a Const?\n\n+ case T_JsonExpr:\n+ {\n+ JsonExpr *jexpr = (JsonExpr *) node;\n+\n+ if (WALK(jexpr->formatted_expr))\n+ return true;\n+ if (WALK(jexpr->result_coercion))\n+ return true;\n+ if (WALK(jexpr->item_coercions))\n+ return true;\n+ if (WALK(jexpr->passing_values))\n+ return true;\n+ /* we assume walker doesn't care about passing_names */\n+ if (WALK(jexpr->on_empty))\n+ return true;\n+ if (WALK(jexpr->on_error))\n+ return true;\n+ }\n\n\n",
"msg_date": "Mon, 11 Mar 2024 15:45:11 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "Hi.\nmore minor issues.\n\nby searching `elog(ERROR, \"unrecognized node type: %d\"`\nI found that generally enum is cast to int, before printing it out.\nI also found a related post at [1].\n\nSo I add the typecast to int, before printing it out.\nmost of the refactored code is unlikely to be reachable, but still.\n\nI also refactored ExecPrepareJsonItemCoercion error message, to make\nthe error message more explicit.\n@@ -4498,7 +4498,9 @@ ExecPrepareJsonItemCoercion(JsonbValue *item,\nJsonExprState *jsestate,\n if (throw_error)\n ereport(ERROR,\n\nerrcode(ERRCODE_SQL_JSON_ITEM_CANNOT_BE_CAST_TO_TARGET_TYPE),\n- errmsg(\"SQL/JSON item cannot\nbe cast to target type\"));\n+\nerrcode(ERRCODE_SQL_JSON_ITEM_CANNOT_BE_CAST_TO_TARGET_TYPE),\n+ errmsg(\"SQL/JSON item cannot\nbe cast to type %s\",\n+\nformat_type_be(jsestate->jsexpr->returning->typid)));\n\n+ /*\n+ * We abuse CaseTestExpr here as placeholder to pass the result of\n+ * evaluating the JSON_VALUE/QUERY jsonpath expression as input to the\n+ * coercion expression.\n+ */\n+ CaseTestExpr *placeholder = makeNode(CaseTestExpr);\ntypo in comment, should it be `JSON_VALUE/JSON_QUERY`?\n\n[1] https://stackoverflow.com/questions/8012647/can-we-typecast-a-enum-variable-in-c",
"msg_date": "Mon, 11 Mar 2024 17:20:38 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "Hi,\nI was experimenting with the v42 patches, and I tried testing without\nproviding the path explicitly. There is one difference between the two test\ncases that I have highlighted in blue.\nThe full_name column is empty in the second test case result. Let me know\nif this is an issue or expected behaviour.\n\n*CASE 1:*\n-----------\nSELECT * FROM JSON_TABLE(jsonb '{\n \"id\" : 901,\n \"age\" : 30,\n \"*full_name*\" : \"KATE DANIEL\"}',\n '$'\n COLUMNS(\n FULL_NAME varchar(20),\n ID int,\n AGE int\n )\n\n ) as t;\n\n*RESULT:*\n full_name | id | age\n-------------+-----+-----\n KATE DANIEL | 901 | 30\n\n(1 row)\n\n*CASE 2:*\n------------------\nSELECT * FROM JSON_TABLE(jsonb '{\n \"id\" : 901,\n \"age\" : 30,\n \"*FULL_NAME*\" : \"KATE DANIEL\"}',\n '$'\n COLUMNS(\n FULL_NAME varchar(20),\n ID int,\n AGE int\n )\n\n ) as t;\n\n*RESULT:*\n full_name | id | age\n-----------+-----+-----\n | 901 | 30\n(1 row)\n\n\nThanks & Regards,\nShruthi K C\nEnterpriseDB: http://www.enterprisedb.com\n\nHi,I was experimenting with the v42 patches, and I tried testing without providing the path explicitly. There is one difference between the two test cases that I have highlighted in blue.The full_name column is empty in the second test case result. Let me know if this is an issue or expected behaviour.CASE 1:-----------SELECT * FROM JSON_TABLE(jsonb '{ \"id\" : 901, \"age\" : 30, \"full_name\" : \"KATE DANIEL\"}', '$' COLUMNS( FULL_NAME varchar(20), ID int, AGE int ) ) as t;RESULT: full_name | id | age -------------+-----+----- KATE DANIEL | 901 | 30(1 row)CASE 2:------------------SELECT * FROM JSON_TABLE(jsonb '{ \"id\" : 901, \"age\" : 30, \"FULL_NAME\" : \"KATE DANIEL\"}', '$' COLUMNS( FULL_NAME varchar(20), ID int, AGE int ) ) as t;RESULT: full_name | id | age -----------+-----+----- | 901 | 30(1 row)Thanks & Regards,Shruthi K CEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Mon, 11 Mar 2024 20:25:47 +0530",
"msg_from": "Shruthi Gowda <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "On 2024-Mar-11, Shruthi Gowda wrote:\n\n> *CASE 2:*\n> ------------------\n> SELECT * FROM JSON_TABLE(jsonb '{\n> \"id\" : 901,\n> \"age\" : 30,\n> \"*FULL_NAME*\" : \"KATE DANIEL\"}',\n> '$'\n> COLUMNS(\n> FULL_NAME varchar(20),\n> ID int,\n> AGE int\n> )\n> ) as t;\n\nI think this is expected: when you use FULL_NAME as a SQL identifier, it\nis down-cased, so it no longer matches the uppercase identifier in the\nJSON data. You'd have to do it like this:\n\nSELECT * FROM JSON_TABLE(jsonb '{\n \"id\" : 901,\n \"age\" : 30,\n \"*FULL_NAME*\" : \"KATE DANIEL\"}',\n '$'\n COLUMNS(\n \"FULL_NAME\" varchar(20),\n ID int,\n AGE int\n )\n ) as t;\n\nso that the SQL identifier is not downcased.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Mon, 11 Mar 2024 16:34:31 +0100",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "Thanka Alvaro. It works fine when quotes are used around the column name.\n\nOn Mon, Mar 11, 2024 at 9:04 PM Alvaro Herrera <[email protected]>\nwrote:\n\n> On 2024-Mar-11, Shruthi Gowda wrote:\n>\n> > *CASE 2:*\n> > ------------------\n> > SELECT * FROM JSON_TABLE(jsonb '{\n> > \"id\" : 901,\n> > \"age\" : 30,\n> > \"*FULL_NAME*\" : \"KATE DANIEL\"}',\n> > '$'\n> > COLUMNS(\n> > FULL_NAME varchar(20),\n> > ID int,\n> > AGE int\n> > )\n> > ) as t;\n>\n> I think this is expected: when you use FULL_NAME as a SQL identifier, it\n> is down-cased, so it no longer matches the uppercase identifier in the\n> JSON data. You'd have to do it like this:\n>\n> SELECT * FROM JSON_TABLE(jsonb '{\n> \"id\" : 901,\n> \"age\" : 30,\n> \"*FULL_NAME*\" : \"KATE DANIEL\"}',\n> '$'\n> COLUMNS(\n> \"FULL_NAME\" varchar(20),\n> ID int,\n> AGE int\n> )\n> ) as t;\n>\n> so that the SQL identifier is not downcased.\n>\n> --\n> Álvaro Herrera PostgreSQL Developer —\n> https://www.EnterpriseDB.com/\n>\n\nThanka Alvaro. It works fine when quotes are used around the column name.On Mon, Mar 11, 2024 at 9:04 PM Alvaro Herrera <[email protected]> wrote:On 2024-Mar-11, Shruthi Gowda wrote:\n\n> *CASE 2:*\n> ------------------\n> SELECT * FROM JSON_TABLE(jsonb '{\n> \"id\" : 901,\n> \"age\" : 30,\n> \"*FULL_NAME*\" : \"KATE DANIEL\"}',\n> '$'\n> COLUMNS(\n> FULL_NAME varchar(20),\n> ID int,\n> AGE int\n> )\n> ) as t;\n\nI think this is expected: when you use FULL_NAME as a SQL identifier, it\nis down-cased, so it no longer matches the uppercase identifier in the\nJSON data. You'd have to do it like this:\n\nSELECT * FROM JSON_TABLE(jsonb '{\n \"id\" : 901,\n \"age\" : 30,\n \"*FULL_NAME*\" : \"KATE DANIEL\"}',\n '$'\n COLUMNS(\n \"FULL_NAME\" varchar(20),\n ID int,\n AGE int\n )\n ) as t;\n\nso that the SQL identifier is not downcased.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/",
"msg_date": "Tue, 12 Mar 2024 00:37:34 +0530",
"msg_from": "Shruthi Gowda <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "Hi,\n\nwanted to share the below case:\n\n‘postgres[146443]=#’SELECT JSON_EXISTS(jsonb '{\"customer_name\": \"test\",\n\"salary\":1000, \"department_id\":1}', '$.* ? (@== $dept_id && @ == $sal)'\nPASSING 1000 AS sal, 1 as dept_id);\n json_exists\n-------------\n f\n(1 row)\n\nisn't it supposed to return \"true\" as json in input is matching with both\nthe condition dept_id and salary?\n\n-- \nRegards,\nHimanshu Upadhyaya\nEnterpriseDB: http://www.enterprisedb.com\n\nHi,wanted to share the below case:‘postgres[146443]=#’SELECT JSON_EXISTS(jsonb '{\"customer_name\": \"test\", \"salary\":1000, \"department_id\":1}', '$.* ? (@== $dept_id && @ == $sal)' PASSING 1000 AS sal, 1 as dept_id); json_exists ------------- f(1 row)isn't it supposed to return \"true\" as json in input is matching with both the condition dept_id and salary?-- Regards,\nHimanshu Upadhyaya\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Tue, 12 Mar 2024 15:11:58 +0530",
"msg_from": "Himanshu Upadhyaya <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "Hi Himanshu,\n\nOn Tue, Mar 12, 2024 at 6:42 PM Himanshu Upadhyaya\n<[email protected]> wrote:\n>\n> Hi,\n>\n> wanted to share the below case:\n>\n> ‘postgres[146443]=#’SELECT JSON_EXISTS(jsonb '{\"customer_name\": \"test\", \"salary\":1000, \"department_id\":1}', '$.* ? (@== $dept_id && @ == $sal)' PASSING 1000 AS sal, 1 as dept_id);\n> json_exists\n> -------------\n> f\n> (1 row)\n>\n> isn't it supposed to return \"true\" as json in input is matching with both the condition dept_id and salary?\n\nI think you meant to use || in your condition, not &&, because 1000 != 1.\n\nSee:\n\nSELECT JSON_EXISTS(jsonb '{\"customer_name\": \"test\", \"salary\":1000,\n\"department_id\":1}', '$.* ? (@ == $dept_id || @ == $sal)' PASSING 1000\nAS sal, 1 as dept_id);\n json_exists\n-------------\n t\n(1 row)\n\nOr you could've written the query as:\n\nSELECT JSON_EXISTS(jsonb '{\"customer_name\": \"test\", \"salary\":1000,\n\"department_id\":1}', '$ ? (@.department_id == $dept_id && @.salary ==\n$sal)' PASSING 1000 AS sal, 1 as dept_id);\n json_exists\n-------------\n t\n(1 row)\n\nDoes that make sense?\n\nIn any case, JSON_EXISTS() added by the patch here returns whatever\nthe jsonpath executor returns. The latter is not touched by this\npatch. PASSING args, which this patch adds, seem to be working\ncorrectly too.\n\n-- \nThanks, Amit Langote\n\n\n",
"msg_date": "Tue, 12 Mar 2024 21:07:01 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "About 0002:\n\nI think we should just drop it. Look at the changes it produces in the\nplans for aliases XMLTABLE:\n\n> @@ -1556,7 +1556,7 @@ SELECT f.* FROM xmldata, LATERAL xmltable('/ROWS/ROW[COUNTRY_NAME=\"Japan\" or COU\n> Output: f.\"COUNTRY_NAME\", f.\"REGION_ID\"\n> -> Seq Scan on public.xmldata\n> Output: xmldata.data\n> - -> Table Function Scan on \"xmltable\" f\n> + -> Table Function Scan on \"XMLTABLE\" f\n> Output: f.\"COUNTRY_NAME\", f.\"REGION_ID\"\n> Table Function Call: XMLTABLE(('/ROWS/ROW[COUNTRY_NAME=\"Japan\" or COUNTRY_NAME=\"India\"]'::text) PASSING (xmldata.data) COLUMNS \"COUNTRY_NAME\" text, \"REGION_ID\" integer)\n> Filter: (f.\"COUNTRY_NAME\" = 'Japan'::text)\n\nHere in text-format EXPLAIN, we already have the alias next to the\n\"xmltable\" moniker, when an alias is present. This matches the\nquery itself as well as the labels used in the \"Output:\" display.\nIf an alias is not present, then this says just 'Table Function Scan on \"xmltable\"'\nand the rest of the plans refers to this as \"xmltable\", so it's also\nfine.\n\n> @@ -1591,7 +1591,7 @@ SELECT f.* FROM xmldata, LATERAL xmltable('/ROWS/ROW[COUNTRY_NAME=\"Japan\" or COU\n> \"Parent Relationship\": \"Inner\", +\n> \"Parallel Aware\": false, +\n> \"Async Capable\": false, +\n> - \"Table Function Name\": \"xmltable\", +\n> + \"Table Function Name\": \"XMLTABLE\", +\n> \"Alias\": \"f\", +\n> \"Output\": [\"f.\\\"COUNTRY_NAME\\\"\", \"f.\\\"REGION_ID\\\"\"], +\n> \"Table Function Call\": \"XMLTABLE(('/ROWS/ROW[COUNTRY_NAME=\\\"Japan\\\" or COUNTRY_NAME=\\\"India\\\"]'::text) PASSING (xmldata.data) COLUMNS \\\"COUNTRY_NAME\\\" text, \\\"REGION_ID\\\" integer)\",+\n\nThis is the JSON-format explain. Notice that the \"Alias\" member already\nshows the alias \"f\", so the only thing this change is doing is\nuppercasing the \"xmltable\" to \"XMLTABLE\". We're not really achieving\nanything here.\n\nI think the only salvageable piece from this, **if anything**, is making\nthe \"xmltable\" literal string into uppercase. That might bring a little\nclarity to the fact that this is a keyword and not a user-introduced\nname.\n\n\nIn your 0003 I think this would only have relevance in this query,\n\n+-- JSON_TABLE() with alias\n+EXPLAIN (COSTS OFF, VERBOSE)\n+SELECT * FROM\n+ JSON_TABLE(\n+ jsonb 'null', 'lax $[*]' PASSING 1 + 2 AS a, json '\"foo\"' AS \"b c\"\n+ COLUMNS (\n+ id FOR ORDINALITY,\n+ \"int\" int PATH '$',\n+ \"text\" text PATH '$'\n+ )) json_table_func;\n+ QUERY PLAN \n \n+--------------------------------------------------------------------------------------------------------------------------------------------------------------\n----------------------------------------------------------\n+ Table Function Scan on \"JSON_TABLE\" json_table_func\n+ Output: id, \"int\", text\n+ Table Function Call: JSON_TABLE('null'::jsonb, '$[*]' AS json_table_path_0 PASSING 3 AS a, '\"foo\"'::jsonb AS \"b c\" COLUMNS (id FOR ORDINALITY, \"int\" integer PATH '$', text text PATH '$') PLAN (json_table_path_0))\n+(3 rows)\n\nand I'm curious to see what this would output if this was to be run\nwithout the 0002 patch. If I understand things correctly, the alias\nwould be displayed anyway, meaning 0002 doesn't get us anything.\n\nPlease do add a test with EXPLAIN (FORMAT JSON) in 0003.\n\nThanks\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"La vida es para el que se aventura\"\n\n\n",
"msg_date": "Tue, 12 Mar 2024 21:47:35 +0100",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "On Tue, Mar 12, 2024 at 5:37 PM Amit Langote <[email protected]>\nwrote:\n\n>\n>\n> SELECT JSON_EXISTS(jsonb '{\"customer_name\": \"test\", \"salary\":1000,\n> \"department_id\":1}', '$ ? (@.department_id == $dept_id && @.salary ==\n> $sal)' PASSING 1000 AS sal, 1 as dept_id);\n> json_exists\n> -------------\n> t\n> (1 row)\n>\n> Does that make sense?\n>\n> Yes, got it. Thanks for the clarification.\n\n-- \nRegards,\nHimanshu Upadhyaya\nEnterpriseDB: http://www.enterprisedb.com\n\nOn Tue, Mar 12, 2024 at 5:37 PM Amit Langote <[email protected]> wrote:\n\nSELECT JSON_EXISTS(jsonb '{\"customer_name\": \"test\", \"salary\":1000,\n\"department_id\":1}', '$ ? (@.department_id == $dept_id && @.salary ==\n$sal)' PASSING 1000 AS sal, 1 as dept_id);\n json_exists\n-------------\n t\n(1 row)\n\nDoes that make sense?\nYes, got it. Thanks for the clarification. -- Regards,\nHimanshu Upadhyaya\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Wed, 13 Mar 2024 09:49:04 +0530",
"msg_from": "Himanshu Upadhyaya <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "one more question...\nSELECT JSON_value(NULL::int, '$' returning int);\nERROR: cannot use non-string types with implicit FORMAT JSON clause\nLINE 1: SELECT JSON_value(NULL::int, '$' returning int);\n ^\n\nSELECT JSON_query(NULL::int, '$' returning int);\nERROR: cannot use non-string types with implicit FORMAT JSON clause\nLINE 1: SELECT JSON_query(NULL::int, '$' returning int);\n ^\n\nSELECT * FROM JSON_TABLE(NULL::int, '$' COLUMNS (foo text));\nERROR: cannot use non-string types with implicit FORMAT JSON clause\nLINE 1: SELECT * FROM JSON_TABLE(NULL::int, '$' COLUMNS (foo text));\n ^\n\nSELECT JSON_value(NULL::text, '$' returning int);\nERROR: JSON_VALUE() is not yet implemented for the json type\nLINE 1: SELECT JSON_value(NULL::text, '$' returning int);\n ^\nHINT: Try casting the argument to jsonb\n\n\nSELECT JSON_query(NULL::text, '$' returning int);\nERROR: JSON_QUERY() is not yet implemented for the json type\nLINE 1: SELECT JSON_query(NULL::text, '$' returning int);\n ^\nHINT: Try casting the argument to jsonb\n\nin all these cases, the error message seems strange.\n\nwe already mentioned:\n <note>\n <para>\n SQL/JSON query functions currently only accept values of the\n <type>jsonb</type> type, because the SQL/JSON path language only\n supports those, so it might be necessary to cast the\n <replaceable>context_item</replaceable> argument of these functions to\n <type>jsonb</type>.\n </para>\n </note>\n\nwe can simply say, only accept the first argument to be jsonb data type.\n\n\n",
"msg_date": "Thu, 14 Mar 2024 18:12:06 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "On Mon, Mar 11, 2024 at 11:30 AM jian he <[email protected]> wrote:\n>\n> On Sun, Mar 10, 2024 at 10:57 PM jian he <[email protected]> wrote:\n> >\n> > one more issue.\n>\n> Hi\n> one more documentation issue.\n> after applied V42, 0001 to 0003,\n> there are 11 appearance of `FORMAT JSON` in functions-json.html\n> still not a single place explained what it is for.\n>\n> json_query ( context_item, path_expression [ PASSING { value AS\n> varname } [, ...]] [ RETURNING data_type [ FORMAT JSON [ ENCODING UTF8\n> ] ] ] [ { WITHOUT | WITH { CONDITIONAL | [UNCONDITIONAL] } } [ ARRAY ]\n> WRAPPER ] [ { KEEP | OMIT } QUOTES [ ON SCALAR STRING ] ] [ { ERROR |\n> NULL | EMPTY { [ ARRAY ] | OBJECT } | DEFAULT expression } ON EMPTY ]\n> [ { ERROR | NULL | EMPTY { [ ARRAY ] | OBJECT } | DEFAULT expression }\n> ON ERROR ])\n>\n> FORMAT JSON seems just a syntax sugar or for compatibility in json_query.\n> but it returns an error when the returning type category is not\n> TYPCATEGORY_STRING.\n>\n> for example, even the following will return an error.\n> `\n> CREATE TYPE regtest_comptype AS (b text);\n> SELECT JSON_QUERY(jsonb '{\"a\":{\"b\":\"c\"}}', '$.a' RETURNING\n> regtest_comptype format json);\n> `\n>\n> seems only types in[0] will not generate an error, when specifying\n> FORMAT JSON in JSON_QUERY.\n>\n> so it actually does something, not a syntax sugar?\n>\n\nSELECT * FROM JSON_TABLE(jsonb'[{\"aaa\": 123}]', 'lax $[*]' COLUMNS\n(js2 text format json PATH '$' omit quotes));\nSELECT * FROM JSON_TABLE(jsonb'[{\"aaa\": 123}]', 'lax $[*]' COLUMNS\n(js2 text format json PATH '$' keep quotes));\nSELECT * FROM JSON_TABLE(jsonb'[{\"aaa\": 123}]', 'lax $[*]' COLUMNS\n(js2 text PATH '$' keep quotes)); -- JSON_QUERY_OP\nSELECT * FROM JSON_TABLE(jsonb'[{\"aaa\": 123}]', 'lax $[*]' COLUMNS\n(js2 text PATH '$' omit quotes)); -- JSON_QUERY_OP\nSELECT * FROM JSON_TABLE(jsonb'[{\"aaa\": 123}]', 'lax $[*]' COLUMNS\n(js2 text PATH '$')); -- JSON_VALUE_OP\nSELECT * FROM JSON_TABLE(jsonb'[{\"aaa\": 123}]', 'lax $[*]' COLUMNS\n(js2 json PATH '$')); -- JSON_QUERY_OP\ncomparing these queries, I think 'FORMAT JSON' main usage is in json_table.\n\nCREATE TYPE regtest_comptype AS (b text);\nSELECT JSON_QUERY(jsonb '{\"a\":{\"b\":\"c\"}}', '$.a' RETURNING\nregtest_comptype format json);\nERROR: cannot use JSON format with non-string output types\nLINE 1: ...\"a\":{\"b\":\"c\"}}', '$.a' RETURNING regtest_comptype format jso...\n ^\nthe error message is not good, but that's a minor issue. 
we can pursue it later.\n-----------------------------------------------------------------------------------------\nSELECT JSON_QUERY(jsonb 'true', '$' RETURNING int KEEP QUOTES );\nSELECT JSON_QUERY(jsonb 'true', '$' RETURNING int omit QUOTES );\nSELECT JSON_VALUE(jsonb 'true', '$' RETURNING int);\nthe third query returns integer 1, not sure this is the desired behavior.\nit obviously has an implication for json_table.\n-----------------------------------------------------------------------------------------\nin jsonb_get_element, we have something like:\nif (jbvp->type == jbvBinary)\n{\ncontainer = jbvp->val.binary.data;\nhave_object = JsonContainerIsObject(container);\nhave_array = JsonContainerIsArray(container);\nAssert(!JsonContainerIsScalar(container));\n}\n\n+ res = JsonValueListHead(&found);\n+ if (res->type == jbvBinary && JsonContainerIsScalar(res->val.binary.data))\n+ JsonbExtractScalar(res->val.binary.data, res);\nSo in JsonPathValue, the above (res->type == jbvBinary) is unreachable?\nalso see the comment in jbvBinary.\n\nmaybe we can just simply do:\nif (res->type == jbvBinary)\nAssert(!JsonContainerIsScalar(res->val.binary.data));\n-----------------------------------------------------------------------------------------\n+<synopsis>\n+JSON_TABLE (\n+ <replaceable>context_item</replaceable>,\n<replaceable>path_expression</replaceable> <optional> AS\n<replaceable>json_path_name</replaceable> </optional> <optional>\nPASSING { <replaceable>value</replaceable> AS\n<replaceable>varname</replaceable> } <optional>, ...</optional>\n</optional>\n+ COLUMNS ( <replaceable\nclass=\"parameter\">json_table_column</replaceable> <optional>,\n...</optional> )\n+ <optional> { <literal>ERROR</literal> | <literal>EMPTY</literal> }\n<literal>ON ERROR</literal> </optional>\n+ <optional>\n+ PLAN ( <replaceable class=\"parameter\">json_table_plan</replaceable> ) |\n+ PLAN DEFAULT ( { INNER | OUTER } <optional> , { CROSS | UNION } </optional>\n+ | { CROSS | UNION } <optional> , { INNER | OUTER }\n</optional> )\n+ </optional>\n+)\n\nbased on the synopsis\nthe following query should not be allowed?\nSELECT *FROM (VALUES ('\"11\"'), ('\"err\"')) vals(js)\nLEFT OUTER JOIN JSON_TABLE(vals.js::jsonb, '$' COLUMNS (a int PATH\n'$') default '11' ON ERROR) jt ON true;\n\naslo the synopsis need to reflect case like:\nSELECT *FROM (VALUES ('\"11\"'), ('\"err\"')) vals(js)\nLEFT OUTER JOIN JSON_TABLE(vals.js::jsonb, '$' COLUMNS (a int PATH\n'$') NULL ON ERROR) jt ON true;\n\n\n",
"msg_date": "Fri, 15 Mar 2024 18:30:18 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "On Wed, Mar 13, 2024 at 5:47 AM Alvaro Herrera <[email protected]> wrote:\n> About 0002:\n>\n> I think we should just drop it. Look at the changes it produces in the\n> plans for aliases XMLTABLE:\n>\n> > @@ -1556,7 +1556,7 @@ SELECT f.* FROM xmldata, LATERAL xmltable('/ROWS/ROW[COUNTRY_NAME=\"Japan\" or COU\n> > Output: f.\"COUNTRY_NAME\", f.\"REGION_ID\"\n> > -> Seq Scan on public.xmldata\n> > Output: xmldata.data\n> > - -> Table Function Scan on \"xmltable\" f\n> > + -> Table Function Scan on \"XMLTABLE\" f\n> > Output: f.\"COUNTRY_NAME\", f.\"REGION_ID\"\n> > Table Function Call: XMLTABLE(('/ROWS/ROW[COUNTRY_NAME=\"Japan\" or COUNTRY_NAME=\"India\"]'::text) PASSING (xmldata.data) COLUMNS \"COUNTRY_NAME\" text, \"REGION_ID\" integer)\n> > Filter: (f.\"COUNTRY_NAME\" = 'Japan'::text)\n>\n> Here in text-format EXPLAIN, we already have the alias next to the\n> \"xmltable\" moniker, when an alias is present. This matches the\n> query itself as well as the labels used in the \"Output:\" display.\n> If an alias is not present, then this says just 'Table Function Scan on \"xmltable\"'\n> and the rest of the plans refers to this as \"xmltable\", so it's also\n> fine.\n>\n> > @@ -1591,7 +1591,7 @@ SELECT f.* FROM xmldata, LATERAL xmltable('/ROWS/ROW[COUNTRY_NAME=\"Japan\" or COU\n> > \"Parent Relationship\": \"Inner\", +\n> > \"Parallel Aware\": false, +\n> > \"Async Capable\": false, +\n> > - \"Table Function Name\": \"xmltable\", +\n> > + \"Table Function Name\": \"XMLTABLE\", +\n> > \"Alias\": \"f\", +\n> > \"Output\": [\"f.\\\"COUNTRY_NAME\\\"\", \"f.\\\"REGION_ID\\\"\"], +\n> > \"Table Function Call\": \"XMLTABLE(('/ROWS/ROW[COUNTRY_NAME=\\\"Japan\\\" or COUNTRY_NAME=\\\"India\\\"]'::text) PASSING (xmldata.data) COLUMNS \\\"COUNTRY_NAME\\\" text, \\\"REGION_ID\\\" integer)\",+\n>\n> This is the JSON-format explain. Notice that the \"Alias\" member already\n> shows the alias \"f\", so the only thing this change is doing is\n> uppercasing the \"xmltable\" to \"XMLTABLE\". We're not really achieving\n> anything here.\n>\n> I think the only salvageable piece from this, **if anything**, is making\n> the \"xmltable\" literal string into uppercase. That might bring a little\n> clarity to the fact that this is a keyword and not a user-introduced\n> name.\n>\n>\n> In your 0003 I think this would only have relevance in this query,\n>\n> +-- JSON_TABLE() with alias\n> +EXPLAIN (COSTS OFF, VERBOSE)\n> +SELECT * FROM\n> + JSON_TABLE(\n> + jsonb 'null', 'lax $[*]' PASSING 1 + 2 AS a, json '\"foo\"' AS \"b c\"\n> + COLUMNS (\n> + id FOR ORDINALITY,\n> + \"int\" int PATH '$',\n> + \"text\" text PATH '$'\n> + )) json_table_func;\n> + QUERY PLAN\n>\n> +--------------------------------------------------------------------------------------------------------------------------------------------------------------\n> ----------------------------------------------------------\n> + Table Function Scan on \"JSON_TABLE\" json_table_func\n> + Output: id, \"int\", text\n> + Table Function Call: JSON_TABLE('null'::jsonb, '$[*]' AS json_table_path_0 PASSING 3 AS a, '\"foo\"'::jsonb AS \"b c\" COLUMNS (id FOR ORDINALITY, \"int\" integer PATH '$', text text PATH '$') PLAN (json_table_path_0))\n> +(3 rows)\n>\n> and I'm curious to see what this would output if this was to be run\n> without the 0002 patch. 
If I understand things correctly, the alias\n> would be displayed anyway, meaning 0002 doesn't get us anything.\n\nPatch 0002 came about because old versions of json_table patch were\nchanging ExplainTargetRel() incorrectly to use rte->tablefunc to get\nthe function type to set objectname, but rte->tablefunc is NULL\nbecause add_rte_to_flat_rtable() zaps it. You pointed that out in\n[1].\n\nHowever, we can get the TableFunc to get the function type from the\nPlan node instead of the RTE, as follows:\n\n- Assert(rte->rtekind == RTE_TABLEFUNC);\n- objectname = \"xmltable\";\n- objecttag = \"Table Function Name\";\n+ {\n+ TableFunc *tablefunc = ((TableFuncScan *) plan)->tablefunc;\n+\n+ Assert(rte->rtekind == RTE_TABLEFUNC);\n+ switch (tablefunc->functype)\n+ {\n+ case TFT_XMLTABLE:\n+ objectname = \"xmltable\";\n+ break;\n+ case TFT_JSON_TABLE:\n+ objectname = \"json_table\";\n+ break;\n+ default:\n+ elog(ERROR, \"invalid TableFunc type %d\",\n+ (int) tablefunc->functype);\n+ }\n+ objecttag = \"Table Function Name\";\n+ }\n\nSo that gets us what we need here.\n\nGiven that, 0002 does seem like an overkill and unnecessary, so I'll drop it.\n\n> Please do add a test with EXPLAIN (FORMAT JSON) in 0003.\n\nOK, will do.\n\n\n--\nThanks, Amit Langote\n\n[1] https://www.postgresql.org/message-id/202401181711.qxjxpnl3ohnw%40alvherre.pgsql\n\n\n",
"msg_date": "Mon, 18 Mar 2024 14:00:55 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "I have tested a nested case but why is the negative number allowed in\nsubscript(NESTED '$.phones[-1]'COLUMNS), it should error out if the number\nis negative.\n\n‘postgres[170683]=#’SELECT * FROM JSON_TABLE(jsonb '{\n‘...>’ \"id\" : \"0.234567897890\",\n‘...>’ \"name\" : {\n\"first\":\"Johnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnn\",\n\"last\":\"Doe\" },\n‘...>’ \"phones\" : [{\"type\":\"home\", \"number\":\"555-3762\"},\n‘...>’ {\"type\":\"work\", \"number\":\"555-7252\",\n\"test\":123}]}',\n‘...>’ '$'\n‘...>’ COLUMNS(\n‘...>’ id numeric(2,2) PATH 'lax $.id',\n‘...>’ last_name varCHAR(10) PATH 'lax $.name.last',\nfirst_name VARCHAR(10) PATH 'lax $.name.first',\n‘...>’ NESTED '$.phones[-1]'COLUMNS (\n‘...>’ \"type\" VARCHAR(10),\n‘...>’ \"number\" VARCHAR(10)\n‘...>’ )\n‘...>’ )\n‘...>’ ) as t;\n id | last_name | first_name | type | number\n------+-----------+------------+------+--------\n 0.23 | Doe | Johnnnnnnn | |\n(1 row)\n\nThanks,\nHimanshu\n\nI have tested a nested case but why is the negative number allowed in subscript(NESTED '$.phones[-1]'COLUMNS), it should error out if the number is negative.‘postgres[170683]=#’SELECT * FROM JSON_TABLE(jsonb '{‘...>’ \"id\" : \"0.234567897890\",‘...>’ \"name\" : { \"first\":\"Johnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnn\", \"last\":\"Doe\" },‘...>’ \"phones\" : [{\"type\":\"home\", \"number\":\"555-3762\"},‘...>’ {\"type\":\"work\", \"number\":\"555-7252\", \"test\":123}]}',‘...>’ '$'‘...>’ COLUMNS(‘...>’ id numeric(2,2) PATH 'lax $.id',‘...>’ last_name varCHAR(10) PATH 'lax $.name.last', first_name VARCHAR(10) PATH 'lax $.name.first',‘...>’ NESTED '$.phones[-1]'COLUMNS (‘...>’ \"type\" VARCHAR(10),‘...>’ \"number\" VARCHAR(10)‘...>’ )‘...>’ )‘...>’ ) as t; id | last_name | first_name | type | number ------+-----------+------------+------+-------- 0.23 | Doe | Johnnnnnnn | | (1 row)Thanks,Himanshu",
"msg_date": "Mon, 18 Mar 2024 13:26:59 +0530",
"msg_from": "Himanshu Upadhyaya <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "Himanshu,\n\nOn Mon, Mar 18, 2024 at 4:57 PM Himanshu Upadhyaya\n<[email protected]> wrote:\n> I have tested a nested case but why is the negative number allowed in subscript(NESTED '$.phones[-1]'COLUMNS), it should error out if the number is negative.\n>\n> ‘postgres[170683]=#’SELECT * FROM JSON_TABLE(jsonb '{\n> ‘...>’ \"id\" : \"0.234567897890\",\n> ‘...>’ \"name\" : { \"first\":\"Johnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnn\", \"last\":\"Doe\" },\n> ‘...>’ \"phones\" : [{\"type\":\"home\", \"number\":\"555-3762\"},\n> ‘...>’ {\"type\":\"work\", \"number\":\"555-7252\", \"test\":123}]}',\n> ‘...>’ '$'\n> ‘...>’ COLUMNS(\n> ‘...>’ id numeric(2,2) PATH 'lax $.id',\n> ‘...>’ last_name varCHAR(10) PATH 'lax $.name.last', first_name VARCHAR(10) PATH 'lax $.name.first',\n> ‘...>’ NESTED '$.phones[-1]'COLUMNS (\n> ‘...>’ \"type\" VARCHAR(10),\n> ‘...>’ \"number\" VARCHAR(10)\n> ‘...>’ )\n> ‘...>’ )\n> ‘...>’ ) as t;\n> id | last_name | first_name | type | number\n> ------+-----------+------------+------+--------\n> 0.23 | Doe | Johnnnnnnn | |\n> (1 row)\n\nYou're not getting an error because the default mode of handling\nstructural errors in SQL/JSON path expressions is \"lax\". If you say\n\"strict\" in the path string, you will get an error:\n\nSELECT * FROM JSON_TABLE(jsonb '{\n \"id\" : \"0.234567897890\",\n \"name\" : {\n\"first\":\"Johnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnn\",\n\"last\":\"Doe\" },\n \"phones\" : [{\"type\":\"home\", \"number\":\"555-3762\"},\n {\"type\":\"work\", \"number\":\"555-7252\", \"test\":123}]}',\n '$'\n COLUMNS(\n id numeric(2,2) PATH 'lax $.id',\n last_name varCHAR(10) PATH 'lax $.name.last',\nfirst_name VARCHAR(10) PATH 'lax $.name.first',\n NESTED 'strict $.phones[-1]'COLUMNS (\n \"type\" VARCHAR(10),\n \"number\" VARCHAR(10)\n )\n ) error on error\n ) as t;\nERROR: jsonpath array subscript is out of bounds\n\n-- \nThanks, Amit Langote\n\n\n",
"msg_date": "Mon, 18 Mar 2024 19:03:12 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "On Mon, Mar 18, 2024 at 3:33 PM Amit Langote <[email protected]>\nwrote:\n\n> Himanshu,\n>\n> On Mon, Mar 18, 2024 at 4:57 PM Himanshu Upadhyaya\n> <[email protected]> wrote:\n> > I have tested a nested case but why is the negative number allowed in\n> subscript(NESTED '$.phones[-1]'COLUMNS), it should error out if the number\n> is negative.\n> >\n> > ‘postgres[170683]=#’SELECT * FROM JSON_TABLE(jsonb '{\n> > ‘...>’ \"id\" : \"0.234567897890\",\n> > ‘...>’ \"name\" : {\n> \"first\":\"Johnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnn\",\n> \"last\":\"Doe\" },\n> > ‘...>’ \"phones\" : [{\"type\":\"home\", \"number\":\"555-3762\"},\n> > ‘...>’ {\"type\":\"work\", \"number\":\"555-7252\",\n> \"test\":123}]}',\n> > ‘...>’ '$'\n> > ‘...>’ COLUMNS(\n> > ‘...>’ id numeric(2,2) PATH 'lax $.id',\n> > ‘...>’ last_name varCHAR(10) PATH 'lax $.name.last',\n> first_name VARCHAR(10) PATH 'lax $.name.first',\n> > ‘...>’ NESTED '$.phones[-1]'COLUMNS (\n> > ‘...>’ \"type\" VARCHAR(10),\n> > ‘...>’ \"number\" VARCHAR(10)\n> > ‘...>’ )\n> > ‘...>’ )\n> > ‘...>’ ) as t;\n> > id | last_name | first_name | type | number\n> > ------+-----------+------------+------+--------\n> > 0.23 | Doe | Johnnnnnnn | |\n> > (1 row)\n>\n> You're not getting an error because the default mode of handling\n> structural errors in SQL/JSON path expressions is \"lax\". If you say\n> \"strict\" in the path string, you will get an error:\n>\n>\nok, got it, thanks.\n\n\n> SELECT * FROM JSON_TABLE(jsonb '{\n> \"id\" : \"0.234567897890\",\n> \"name\" : {\n> \"first\":\"Johnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnn\",\n> \"last\":\"Doe\" },\n> \"phones\" : [{\"type\":\"home\", \"number\":\"555-3762\"},\n> {\"type\":\"work\", \"number\":\"555-7252\", \"test\":123}]}',\n> '$'\n> COLUMNS(\n> id numeric(2,2) PATH 'lax $.id',\n> last_name varCHAR(10) PATH 'lax $.name.last',\n> first_name VARCHAR(10) PATH 'lax $.name.first',\n> NESTED 'strict $.phones[-1]'COLUMNS (\n> \"type\" VARCHAR(10),\n> \"number\" VARCHAR(10)\n> )\n> ) error on error\n> ) as t;\n> ERROR: jsonpath array subscript is out of bounds\n>\n> --\n> Thanks, Amit Langote\n>\n\n\n-- \nRegards,\nHimanshu Upadhyaya\nEnterpriseDB: http://www.enterprisedb.com\n\nOn Mon, Mar 18, 2024 at 3:33 PM Amit Langote <[email protected]> wrote:Himanshu,\n\nOn Mon, Mar 18, 2024 at 4:57 PM Himanshu Upadhyaya\n<[email protected]> wrote:\n> I have tested a nested case but why is the negative number allowed in subscript(NESTED '$.phones[-1]'COLUMNS), it should error out if the number is negative.\n>\n> ‘postgres[170683]=#’SELECT * FROM JSON_TABLE(jsonb '{\n> ‘...>’ \"id\" : \"0.234567897890\",\n> ‘...>’ \"name\" : { \"first\":\"Johnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnn\", \"last\":\"Doe\" },\n> ‘...>’ \"phones\" : [{\"type\":\"home\", \"number\":\"555-3762\"},\n> ‘...>’ {\"type\":\"work\", \"number\":\"555-7252\", \"test\":123}]}',\n> ‘...>’ '$'\n> ‘...>’ COLUMNS(\n> ‘...>’ id numeric(2,2) PATH 'lax $.id',\n> ‘...>’ last_name varCHAR(10) PATH 'lax $.name.last', first_name VARCHAR(10) PATH 'lax $.name.first',\n> ‘...>’ NESTED '$.phones[-1]'COLUMNS (\n> ‘...>’ \"type\" VARCHAR(10),\n> ‘...>’ \"number\" VARCHAR(10)\n> ‘...>’ )\n> ‘...>’ )\n> ‘...>’ ) as t;\n> id | last_name | first_name | type | number\n> ------+-----------+------------+------+--------\n> 0.23 | Doe | Johnnnnnnn | |\n> (1 row)\n\nYou're not getting an error because the default mode of handling\nstructural errors in SQL/JSON path expressions is \"lax\". 
If you say\n\"strict\" in the path string, you will get an error:\nok, got it, thanks. \nSELECT * FROM JSON_TABLE(jsonb '{\n \"id\" : \"0.234567897890\",\n \"name\" : {\n\"first\":\"Johnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnn\",\n\"last\":\"Doe\" },\n \"phones\" : [{\"type\":\"home\", \"number\":\"555-3762\"},\n {\"type\":\"work\", \"number\":\"555-7252\", \"test\":123}]}',\n '$'\n COLUMNS(\n id numeric(2,2) PATH 'lax $.id',\n last_name varCHAR(10) PATH 'lax $.name.last',\nfirst_name VARCHAR(10) PATH 'lax $.name.first',\n NESTED 'strict $.phones[-1]'COLUMNS (\n \"type\" VARCHAR(10),\n \"number\" VARCHAR(10)\n )\n ) error on error\n ) as t;\nERROR: jsonpath array subscript is out of bounds\n\n-- \nThanks, Amit Langote\n-- Regards,\nHimanshu Upadhyaya\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Tue, 19 Mar 2024 09:33:53 +0530",
"msg_from": "Himanshu Upadhyaya <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "Hi,\n\nOn Thu, Mar 7, 2024 at 9:06 PM Amit Langote <[email protected]> wrote:\n> This boils down to the difference in the cast expression chosen to\n> convert the source value to int in the two cases.\n>\n> The case where the source value has no quotes, the chosen cast\n> expression is a FuncExpr for function numeric_int4(), which has no way\n> to suppress errors. When the source value has quotes, the cast\n> expression is a CoerceViaIO expression, which can suppress the error.\n> The default behavior is to suppress the error and return NULL, so the\n> correct behavior is when the source value has quotes.\n>\n> I think we'll need either:\n>\n> * fix the code in 0001 to avoid getting numeric_int4() in this case,\n> and generally cast functions that don't have soft-error handling\n> support, in favor of using IO coercion.\n> * fix FuncExpr (like CoerceViaIO) to respect SQL/JSON's request to\n> suppress errors and fix downstream functions like numeric_int4() to\n> comply by handling errors softly.\n>\n> I'm inclined to go with the 1st option as we already have the\n> infrastructure in place -- input functions can all handle errors\n> softly.\n\nI've adjusted the coercion-handling code to deal with this and similar\ncases to use coercion by calling the target type's input function in\nmore cases. The resulting refactoring allowed me to drop a bunch of\ncode and node structs, notably, the JsonCoercion and JsonItemCoercion\nnodes. Going with input function based coercion as opposed to using\ncasts means the coercion may fail in more cases than before but I\nthink that's acceptable. For example, the following case did not fail\nbefore because they'd use numeric_int() cast function to convert 1.234\nto an integer:\n\nselect json_value('{\"a\": 1.234}', '$.a' returning int error on error);\nERROR: invalid input syntax for type integer: \"1.234\"\n\nIt is same error as this case, where the source numerical value is\nspecified as a string:\n\nselect json_value('{\"a\": \"1.234\"}', '$.a' returning int error on error);\nERROR: invalid input syntax for type integer: \"1.234\"\n\nI had hoped to get rid of all instances of using casts and standardize\non coercion at runtime using input functions and json_populate_type(),\nbut there are a few cases where casts produce saner results and also\nharmless (error-safe), such as cases where the target types are\ndomains or are types with typmod.\n\nI've also tried to address most of Jian He's comments and a bunch of\ncleanups of my own. Attaching 0002 as the delta over v42 containing\nall of those changes.\n\nI intend to commit 0001+0002 after a bit more polishing.\n\n-- \nThanks, Amit Langote",
"msg_date": "Tue, 19 Mar 2024 19:45:43 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "On Tue, Mar 19, 2024 at 6:46 PM Amit Langote <[email protected]> wrote:\n>\n> I intend to commit 0001+0002 after a bit more polishing.\n>\n\nV43 is far more intuitive! thanks!\n\nif (isnull ||\n(exprType(expr) == JSONBOID &&\nbtype == default_behavior))\ncoerce = true;\nelse\ncoerced_expr =\ncoerce_to_target_type(pstate, expr, exprType(expr),\n returning->typid, returning->typmod,\n COERCION_EXPLICIT, COERCE_EXPLICIT_CAST,\n exprLocation((Node *) behavior));\n\nobviously, there are cases where \"coerce\" is false, and \"coerced_expr\"\nis not null.\nso I think the bool \"coerce\" variable naming is not very intuitive.\nmaybe we can add some comments or change to a better name.\n\n\nJsonPathVariableEvalContext\nJsonPathVarCallback\nJsonItemType\nJsonExprPostEvalState\nthese should remove from src/tools/pgindent/typedefs.list\n\n\n+/*\n+ * Performs JsonPath{Exists|Query|Value}() for a given context_item and\n+ * jsonpath.\n+ *\n+ * Result is set in *op->resvalue and *op->resnull. Return value is the\n+ * step address to be performed next.\n+ *\n+ * On return, JsonExprPostEvalState is populated with the following details:\n+ * - error.value: true if an error occurred during JsonPath evaluation\n+ * - empty.value: true if JsonPath{Query|Value}() found no matching item\n+ *\n+ * No return if the ON ERROR/EMPTY behavior is ERROR.\n+ */\n+int\n+ExecEvalJsonExprPath(ExprState *state, ExprEvalStep *op,\n+ ExprContext *econtext)\n\n\" No return if the ON ERROR/EMPTY behavior is ERROR.\" is wrong?\ncounter example:\nSELECT JSON_QUERY(jsonb '{\"a\":[12,2]}', '$.a' RETURNING int4RANGE omit\nquotes error on error);\nalso \"JsonExprPostEvalState\" does not exist any more.\noverall feel like ExecEvalJsonExprPath comments need to be rephrased.\n\n\n",
"msg_date": "Wed, 20 Mar 2024 11:41:42 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "minor issues I found while looking through it.\nother than these issues, looks good!\n\n/*\n * Convert the a given JsonbValue to its C string representation\n *\n * Returns the string as a Datum setting *resnull if the JsonbValue is a\n * a jbvNull.\n */\nstatic char *\nExecGetJsonValueItemString(JsonbValue *item, bool *resnull)\n{\n}\nI think the comments are not right?\n\n/*\n * Checks if the coercion evaluation led to an error. If an error did occur,\n * this sets post_eval->error to trigger the ON ERROR handling steps.\n */\nvoid\nExecEvalJsonCoercionFinish(ExprState *state, ExprEvalStep *op)\n{\n}\nthese comments on ExecEvalJsonCoercionFinish also need to be updated?\n\n\n+ /*\n+ * Coerce the result value by calling the input function coercion.\n+ * *op->resvalue must point to C string in this case.\n+ */\n+ if (!*op->resnull && jsexpr->use_io_coercion)\n+ {\n+ FunctionCallInfo fcinfo;\n+\n+ fcinfo = jsestate->input_fcinfo;\n+ Assert(fcinfo != NULL);\n+ Assert(val_string != NULL);\n+ fcinfo->args[0].value = PointerGetDatum(val_string);\n+ fcinfo->args[0].isnull = *op->resnull;\n+ /* second and third arguments are already set up */\n+\n+ fcinfo->isnull = false;\n+ *op->resvalue = FunctionCallInvoke(fcinfo);\n+ if (SOFT_ERROR_OCCURRED(&jsestate->escontext))\n+ error = true;\n+\n+ jump_eval_coercion = -1;\n+ }\n\n+ /* second and third arguments are already set up */\nchange to\n/* second and third arguments are already set up in ExecInitJsonExpr */\nwould be great.\n\n\ncommit message\n<<<<\nAll of these functions only operate on jsonb values. The workaround\nfor now is to cast the argument to jsonb.\n<<<<\nshould be removed?\n\n\n+ case T_JsonFuncExpr:\n+ {\n+ JsonFuncExpr *jfe = (JsonFuncExpr *) node;\n+\n+ if (WALK(jfe->context_item))\n+ return true;\n+ if (WALK(jfe->pathspec))\n+ return true;\n+ if (WALK(jfe->passing))\n+ return true;\n+ if (jfe->output && WALK(jfe->output))\n+ return true;\n+ if (jfe->on_empty)\n+ return true;\n+ if (jfe->on_error)\n+ return true;\n+ }\n\n+ if (jfe->output && WALK(jfe->output))\n+ return true;\ncan be simplified:\n\n+ if (WALK(jfe->output))\n+ return true;\n\n\n",
"msg_date": "Wed, 20 Mar 2024 14:51:48 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "looking at documentation again.\none very minor question (issue)\n\n+ <para>\n+ The <literal>ON EMPTY</literal> clause specifies the behavior if the\n+ <replaceable>path_expression</replaceable> yields no value at all; the\n+ default when <literal>ON EMPTY</literal> is not specified is to return\n+ a null value.\n+ </para>\n\nI think it should be:\n\napplying <replaceable>path_expression</replaceable>\nor\nevaluating <replaceable>path_expression</replaceable>\n\nnot \"the <replaceable>path_expression</replaceable>\"\n?\n\n\n",
"msg_date": "Wed, 20 Mar 2024 19:46:00 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "On Wed, Mar 20, 2024 at 8:46 PM jian he <[email protected]> wrote:\n>\n> looking at documentation again.\n> one very minor question (issue)\n>\n> + <para>\n> + The <literal>ON EMPTY</literal> clause specifies the behavior if the\n> + <replaceable>path_expression</replaceable> yields no value at all; the\n> + default when <literal>ON EMPTY</literal> is not specified is to return\n> + a null value.\n> + </para>\n>\n> I think it should be:\n>\n> applying <replaceable>path_expression</replaceable>\n> or\n> evaluating <replaceable>path_expression</replaceable>\n>\n> not \"the <replaceable>path_expression</replaceable>\"\n> ?\n\nThanks. Fixed this, the other issues you mentioned, a bunch of typos\nand obsolete comments, etc.\n\nI'll push 0001 tomorrow.\n\n-- \nThanks, Amit Langote",
"msg_date": "Wed, 20 Mar 2024 21:53:52 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "On Wed, Mar 20, 2024 at 9:53 PM Amit Langote <[email protected]> wrote:\n> I'll push 0001 tomorrow.\n\nPushed that one. Here's the remaining JSON_TABLE() patch.\n\n-- \nThanks, Amit Langote",
"msg_date": "Fri, 22 Mar 2024 01:08:11 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "At Wed, 20 Mar 2024 21:53:52 +0900, Amit Langote <[email protected]> wrote in \n> I'll push 0001 tomorrow.\n\nThis patch (v44-0001-Add-SQL-JSON-query-functions.patch) introduced the following new erro message:\n\n+\t\t\t\t\t\t errmsg(\"can only specify constant, non-aggregate\"\n+\t\t\t\t\t\t\t\t\" function, or operator expression for\"\n+\t\t\t\t\t\t\t\t\" DEFAULT\"),\n\nI believe that our convention here is to write an error message in a\nsingle string literal, not split into multiple parts, for better\ngrep'ability.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 22 Mar 2024 09:51:49 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "Hi Horiguchi-san,\n\nOn Fri, Mar 22, 2024 at 9:51 AM Kyotaro Horiguchi\n<[email protected]> wrote:\n> At Wed, 20 Mar 2024 21:53:52 +0900, Amit Langote <[email protected]> wrote in\n> > I'll push 0001 tomorrow.\n>\n> This patch (v44-0001-Add-SQL-JSON-query-functions.patch) introduced the following new erro message:\n>\n> + errmsg(\"can only specify constant, non-aggregate\"\n> + \" function, or operator expression for\"\n> + \" DEFAULT\"),\n>\n> I believe that our convention here is to write an error message in a\n> single string literal, not split into multiple parts, for better\n> grep'ability.\n\nThanks for the heads up.\n\nMy bad, will push a fix shortly.\n\n-- \nThanks, Amit Langote\n\n\n",
"msg_date": "Fri, 22 Mar 2024 11:44:08 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "At Fri, 22 Mar 2024 11:44:08 +0900, Amit Langote <[email protected]> wrote in \n> Thanks for the heads up.\n> \n> My bad, will push a fix shortly.\n\nNo problem. Thank you for the prompt correction.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 22 Mar 2024 13:40:04 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "On Fri, Mar 22, 2024 at 12:08 AM Amit Langote <[email protected]> wrote:\n>\n> On Wed, Mar 20, 2024 at 9:53 PM Amit Langote <[email protected]> wrote:\n> > I'll push 0001 tomorrow.\n>\n> Pushed that one. Here's the remaining JSON_TABLE() patch.\n>\n\nhi. minor issues i found json_table patch.\n\n+ if (!IsA($5, A_Const) ||\n+ castNode(A_Const, $5)->val.node.type != T_String)\n+ ereport(ERROR,\n+ errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n+ errmsg(\"only string constants are supported in JSON_TABLE\"\n+ \" path specification\"),\n+ parser_errposition(@5));\nas mentioned in upthread, this error message should be one line.\n\n\n+const TableFuncRoutine JsonbTableRoutine =\n+{\n+ JsonTableInitOpaque,\n+ JsonTableSetDocument,\n+ NULL,\n+ NULL,\n+ NULL,\n+ JsonTableFetchRow,\n+ JsonTableGetValue,\n+ JsonTableDestroyOpaque\n+};\nshould be:\n\nconst TableFuncRoutine JsonbTableRoutine =\n{\n.InitOpaque = JsonTableInitOpaque,\n.SetDocument = JsonTableSetDocument,\n.SetNamespace = NULL,\n.SetRowFilter = NULL,\n.SetColumnFilter = NULL,\n.FetchRow = JsonTableFetchRow,\n.GetValue = JsonTableGetValue,\n.DestroyOpaque = JsonTableDestroyOpaque\n};\n\n+/*\n+ * JsonTablePathSpec\n+ * untransformed specification of JSON path expression with an optional\n+ * name\n+ */\n+typedef struct JsonTablePathSpec\n+{\n+ NodeTag type;\n+\n+ Node *string;\n+ char *name;\n+ int name_location;\n+ int location; /* location of 'string' */\n+} JsonTablePathSpec;\nthe comment still does not explain the distinction between \"location\"\nand \"name_location\"?\n\n\nJsonTablePathSpec needs to be added to typedefs.list.\nJsonPathSpec should be removed from typedefs.list.\n\n\n+/*\n+ * JsonTablePlanType -\n+ * flags for JSON_TABLE plan node types representation\n+ */\n+typedef enum JsonTablePlanType\n+{\n+ JSTP_DEFAULT,\n+ JSTP_SIMPLE,\n+ JSTP_JOINED,\n+} JsonTablePlanType;\n+\n+/*\n+ * JsonTablePlanJoinType -\n+ * JSON_TABLE join types for JSTP_JOINED plans\n+ */\n+typedef enum JsonTablePlanJoinType\n+{\n+ JSTP_JOIN_INNER,\n+ JSTP_JOIN_OUTER,\n+ JSTP_JOIN_CROSS,\n+ JSTP_JOIN_UNION,\n+} JsonTablePlanJoinType;\nI can guess the enum value meaning of JsonTablePlanJoinType,\nbut I can't guess the meaning of \"JSTP_SIMPLE\" or \"JSTP_JOINED\".\nadding some comments in JsonTablePlanType would make it more clear.\n\nI think I can understand JsonTableScanNextRow.\nbut i don't understand JsonTablePlanNextRow.\nmaybe we can add some comments on JsonTableJoinState.\n\n\n+-- unspecified plan (outer, union)\n+select\n+ jt.*\n+from\n+ jsonb_table_test jtt,\n+ json_table (\n+ jtt.js,'strict $[*]' as p\n+ columns (\n+ n for ordinality,\n+ a int path 'lax $.a' default -1 on empty,\n+ nested path 'strict $.b[*]' as pb columns ( b int path '$' ),\n+ nested path 'strict $.c[*]' as pc columns ( c int path '$' )\n+ )\n+ ) jt;\n+ n | a | b | c\n+---+----+---+----\n+ 1 | 1 | |\n+ 2 | 2 | 1 |\n+ 2 | 2 | 2 |\n+ 2 | 2 | 3 |\n+ 2 | 2 | | 10\n+ 2 | 2 | |\n+ 2 | 2 | | 20\n+ 3 | 3 | 1 |\n+ 3 | 3 | 2 |\n+ 4 | -1 | 1 |\n+ 4 | -1 | 2 |\n+(11 rows)\n+\n+-- default plan (outer, union)\n+select\n+ jt.*\n+from\n+ jsonb_table_test jtt,\n+ json_table (\n+ jtt.js,'strict $[*]' as p\n+ columns (\n+ n for ordinality,\n+ a int path 'lax $.a' default -1 on empty,\n+ nested path 'strict $.b[*]' as pb columns ( b int path '$' ),\n+ nested path 'strict $.c[*]' as pc columns ( c int path '$' )\n+ )\n+ plan default (outer, union)\n+ ) jt;\n+ n | a | b | c\n+---+----+---+----\n+ 1 | 1 | |\n+ 2 | 2 | 1 | 10\n+ 2 | 2 | 1 |\n+ 2 | 2 | 1 | 20\n+ 2 | 2 | 2 | 10\n+ 2 | 2 | 2 
|\n+ 2 | 2 | 2 | 20\n+ 2 | 2 | 3 | 10\n+ 2 | 2 | 3 |\n+ 2 | 2 | 3 | 20\n+ 3 | 3 | |\n+ 4 | -1 | |\n+(12 rows)\nthese two query results should be the same, if i understand it correctly.\n\n\n",
"msg_date": "Tue, 26 Mar 2024 18:16:00 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "On Tue, Mar 26, 2024 at 6:16 PM jian he <[email protected]> wrote:\n>\n> On Fri, Mar 22, 2024 at 12:08 AM Amit Langote <[email protected]> wrote:\n> >\n> > On Wed, Mar 20, 2024 at 9:53 PM Amit Langote <[email protected]> wrote:\n> > > I'll push 0001 tomorrow.\n> >\n> > Pushed that one. Here's the remaining JSON_TABLE() patch.\n> >\n\nhi.\nI don't fully understand all the code in json_table patch.\nmaybe we can split it into several patches, like:\n* no nested json_table_column.\n* nested json_table_column, with PLAN DEFAULT\n* nested json_table_column, with PLAN ( json_table_plan )\n\ni can understand the \"no nested json_table_column\" part,\nwhich seems to be how oracle[1] implemented it.\nI think we can make the \"no nested json_table_column\" part into v17.\ni am not sure about other complex parts.\nlack of comment, makes it kind of hard to fully understand.\n\n[1] https://docs.oracle.com/en/database/oracle/oracle-database/19/sqlrf/img_text/json_table.html\n\n\n\n+/* Reset context item of a scan, execute JSON path and reset a scan */\n+static void\n+JsonTableResetContextItem(JsonTableScanState *scan, Datum item)\n+{\n+ MemoryContext oldcxt;\n+ JsonPathExecResult res;\n+ Jsonb *js = (Jsonb *) DatumGetJsonbP(item);\n+\n+ JsonValueListClear(&scan->found);\n+\n+ MemoryContextResetOnly(scan->mcxt);\n+\n+ oldcxt = MemoryContextSwitchTo(scan->mcxt);\n+\n+ res = executeJsonPath(scan->path, scan->args,\n+ GetJsonPathVar, CountJsonPathVars,\n+ js, scan->errorOnError, &scan->found,\n+ false /* FIXME */ );\n+\n+ MemoryContextSwitchTo(oldcxt);\n+\n+ if (jperIsError(res))\n+ {\n+ Assert(!scan->errorOnError);\n+ JsonValueListClear(&scan->found); /* EMPTY ON ERROR case */\n+ }\n+\n+ JsonTableRescan(scan);\n+}\n\n\"FIXME\".\nset the last argument in executeJsonPath to true also works as expected.\nalso there is no test related to the \"FIXME\"\ni am not 100% sure about the \"FIXME\".\n\nsee demo (after set the executeJsonPath's \"useTz\" argument to true).\n\ncreate table ss(js jsonb);\nINSERT into ss select '{\"a\": \"2018-02-21 12:34:56 +10\"}';\nINSERT into ss select '{\"b\": \"2018-02-21 12:34:56 \"}';\nPREPARE q2 as SELECT jt.* FROM ss, JSON_TABLE(js, '$.a.datetime()'\nCOLUMNS (\"int7\" timestamptz PATH '$')) jt;\nPREPARE qb as SELECT jt.* FROM ss, JSON_TABLE(js, '$.b.datetime()'\nCOLUMNS (\"tstz\" timestamptz PATH '$')) jt;\nPREPARE q3 as SELECT jt.* FROM ss, JSON_TABLE(js, '$.a.datetime()'\nCOLUMNS (\"ts\" timestamp PATH '$')) jt;\n\nbegin;\nset time zone +10;\nEXECUTE q2;\nset time zone -10;\nEXECUTE q2;\nrollback;\n\nbegin;\nset time zone +10;\nSELECT JSON_VALUE(js, '$.a' returning timestamptz) from ss;\nset time zone -10;\nSELECT JSON_VALUE(js, '$.a' returning timestamptz) from ss;\nrollback;\n---------------------------------------------------------------------\nbegin;\nset time zone +10;\nEXECUTE qb;\nset time zone -10;\nEXECUTE qb;\nrollback;\n\nbegin;\nset time zone +10;\nSELECT JSON_VALUE(js, '$.b' returning timestamptz) from ss;\nset time zone -10;\nSELECT JSON_VALUE(js, '$.b' returning timestamptz) from ss;\nrollback;\n---------------------------------------------------------------------\nbegin;\nset time zone +10;\nEXECUTE q3;\nset time zone -10;\nEXECUTE q3;\nrollback;\n\nbegin;\nset time zone +10;\nSELECT JSON_VALUE(js, '$.b' returning timestamp) from ss;\nset time zone -10;\nSELECT JSON_VALUE(js, '$.b' returning timestamp) from ss;\nrollback;\n\n\n",
"msg_date": "Wed, 27 Mar 2024 11:41:50 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "On Wed, Mar 27, 2024 at 12:42 PM jian he <[email protected]> wrote:\n> hi.\n> I don't fully understand all the code in json_table patch.\n> maybe we can split it into several patches,\n\nI'm working on exactly that atm.\n\n> like:\n> * no nested json_table_column.\n> * nested json_table_column, with PLAN DEFAULT\n> * nested json_table_column, with PLAN ( json_table_plan )\n\nYes, I think it will end up something like this. I'll try to post the\nbreakdown tomorrow.\n\n-- \nThanks, Amit Langote\n\n\n",
"msg_date": "Wed, 27 Mar 2024 13:34:50 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "On Wed, Mar 27, 2024 at 1:34 PM Amit Langote <[email protected]> wrote:\n> On Wed, Mar 27, 2024 at 12:42 PM jian he <[email protected]> wrote:\n> > hi.\n> > I don't fully understand all the code in json_table patch.\n> > maybe we can split it into several patches,\n>\n> I'm working on exactly that atm.\n>\n> > like:\n> > * no nested json_table_column.\n> > * nested json_table_column, with PLAN DEFAULT\n> > * nested json_table_column, with PLAN ( json_table_plan )\n>\n> Yes, I think it will end up something like this. I'll try to post the\n> breakdown tomorrow.\n\nHere's patch 1 for the time being that implements barebones\nJSON_TABLE(), that is, without NESTED paths/columns and PLAN clause.\nI've tried to shape the interfaces so that those features can be added\nin future commits without significant rewrite of the code that\nimplements barebones JSON_TABLE() functionality. I'll know whether\nthat's really the case when I rebase the full patch over it.\n\nI'm still reading and polishing it and would be happy to get feedback\nand testing.\n\n-- \nThanks, Amit Langote",
"msg_date": "Thu, 28 Mar 2024 14:23:09 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "On 2024-Mar-28, Amit Langote wrote:\n\n> Here's patch 1 for the time being that implements barebones\n> JSON_TABLE(), that is, without NESTED paths/columns and PLAN clause.\n> I've tried to shape the interfaces so that those features can be added\n> in future commits without significant rewrite of the code that\n> implements barebones JSON_TABLE() functionality. I'll know whether\n> that's really the case when I rebase the full patch over it.\n\nI think this barebones patch looks much closer to something that can be\ncommitted for pg17, given the current commitfest timeline. Maybe we\nshould just slip NESTED and PLAN to pg18 to focus current efforts into\ngetting the basic functionality in 17. When I looked at the JSON_TABLE\npatch last month, it appeared far too large to be reviewable in\nreasonable time. The fact that this split now exists gives me hope that\nwe can get at least the first part of it.\n\n(A note that PLAN seems to correspond to separate features T824+T838, so\nleaving that one out would still let us claim T821 \"Basic SQL/JSON query\noperators\" ... however, the NESTED clause does not appear to be a\nseparate SQL feature; in particular it does not appear to correspond to\nT827, though I may be reading the standard wrong. So if we don't have\nNESTED, apparently we could not claim to support T821.)\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"La fuerza no está en los medios físicos\nsino que reside en una voluntad indomable\" (Gandhi)\n\n\n",
"msg_date": "Thu, 28 Mar 2024 18:04:51 +0100",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "On Thu, Mar 28, 2024 at 1:23 PM Amit Langote <[email protected]> wrote:\n>\n> On Wed, Mar 27, 2024 at 1:34 PM Amit Langote <[email protected]> wrote:\n> > On Wed, Mar 27, 2024 at 12:42 PM jian he <[email protected]> wrote:\n> > > hi.\n> > > I don't fully understand all the code in json_table patch.\n> > > maybe we can split it into several patches,\n> >\n> > I'm working on exactly that atm.\n> >\n> > > like:\n> > > * no nested json_table_column.\n> > > * nested json_table_column, with PLAN DEFAULT\n> > > * nested json_table_column, with PLAN ( json_table_plan )\n> >\n> > Yes, I think it will end up something like this. I'll try to post the\n> > breakdown tomorrow.\n>\n> Here's patch 1 for the time being that implements barebones\n> JSON_TABLE(), that is, without NESTED paths/columns and PLAN clause.\n> I've tried to shape the interfaces so that those features can be added\n> in future commits without significant rewrite of the code that\n> implements barebones JSON_TABLE() functionality. I'll know whether\n> that's really the case when I rebase the full patch over it.\n>\n> I'm still reading and polishing it and would be happy to get feedback\n> and testing.\n>\n\n+static void\n+JsonValueListClear(JsonValueList *jvl)\n+{\n+ jvl->singleton = NULL;\n+ jvl->list = NULL;\n+}\n jvl->list is a List structure, do we need to set it like \"jvl->list = NIL\"?\n\n+ if (jperIsError(res))\n+ {\n+ /* EMPTY ON ERROR case */\n+ Assert(!planstate->plan->errorOnError);\n+ JsonValueListClear(&planstate->found);\n+ }\ni am not sure the comment is right.\n`SELECT * FROM JSON_TABLE(jsonb'\"1.23\"', 'strict $.a' COLUMNS (js2 int\nPATH '$') );`\nwill execute jperIsError branch.\nalso\nSELECT * FROM JSON_TABLE(jsonb'\"1.23\"', 'strict $.a' COLUMNS (js2 int\nPATH '$') default '1' on error);\n\nI think it means applying path_expression, if the top level on_error\nbehavior is not on error\nthen ` if (jperIsError(res))` part may be executed.\n\n\n\n--- a/src/include/utils/jsonpath.h\n+++ b/src/include/utils/jsonpath.h\n@@ -15,6 +15,7 @@\n #define JSONPATH_H\n\n #include \"fmgr.h\"\n+#include \"executor/tablefunc.h\"\n #include \"nodes/pg_list.h\"\n #include \"nodes/primnodes.h\"\n #include \"utils/jsonb.h\"\n\nshould be:\n+#include \"executor/tablefunc.h\"\n #include \"fmgr.h\"\n\n\n+<synopsis>\n+JSON_TABLE (\n+ <replaceable>context_item</replaceable>,\n<replaceable>path_expression</replaceable> <optional> AS\n<replaceable>json_path_name</replaceable> </optional> <optional>\nPASSING { <replaceable>value</replaceable> AS\n<replaceable>varname</replaceable> } <optional>, ...</optional>\n</optional>\n+ COLUMNS ( <replaceable\nclass=\"parameter\">json_table_column</replaceable> <optional>,\n...</optional> )\n+ <optional> { <literal>ERROR</literal> | <literal>EMPTY</literal>\n} <literal>ON ERROR</literal> </optional>\n+)\ntop level (not in the COLUMN clause) also allows\n<literal>NULL</literal> <literal>ON ERROR</literal>.\n\nSELECT JSON_VALUE(jsonb'\"1.23\"', 'strict $.a' null on error);\nreturns one value.\nSELECT * FROM JSON_TABLE(jsonb'\"1.23\"', 'strict $.a' COLUMNS (js2 int\nPATH '$') NULL on ERROR);\nreturn zero rows.\nIs this what we expected?\n\n\nmain changes are in jsonpath_exec.c, parse_expr.c, parse_jsontable.c\noverall the coverage seems pretty good.\nI added some tests to improve the coverage.",
"msg_date": "Fri, 29 Mar 2024 11:20:00 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "On Fri, Mar 29, 2024 at 11:20 AM jian he <[email protected]> wrote:\n>\n>\n> +<synopsis>\n> +JSON_TABLE (\n> + <replaceable>context_item</replaceable>,\n> <replaceable>path_expression</replaceable> <optional> AS\n> <replaceable>json_path_name</replaceable> </optional> <optional>\n> PASSING { <replaceable>value</replaceable> AS\n> <replaceable>varname</replaceable> } <optional>, ...</optional>\n> </optional>\n> + COLUMNS ( <replaceable\n> class=\"parameter\">json_table_column</replaceable> <optional>,\n> ...</optional> )\n> + <optional> { <literal>ERROR</literal> | <literal>EMPTY</literal>\n> } <literal>ON ERROR</literal> </optional>\n> +)\n> top level (not in the COLUMN clause) also allows\n> <literal>NULL</literal> <literal>ON ERROR</literal>.\n>\nwe can also specify <literal>DEFAULT expression</literal> <literal>ON\nERROR</literal>.\nlike:\nSELECT * FROM JSON_TABLE(jsonb'\"1.23\"', 'strict $.a' COLUMNS (js2 int\nPATH '$') default '1' on error);\n\n+ <varlistentry>\n+ <term>\n+ <replaceable>name</replaceable> <replaceable>type</replaceable>\n<literal>FORMAT JSON</literal> <optional>ENCODING\n<literal>UTF8</literal></optional>\n+ <optional> <literal>PATH</literal>\n<replaceable>json_path_specification</replaceable> </optional>\n+ </term>\n+ <listitem>\n+ <para>\n+ Inserts a composite SQL/JSON item into the output row.\n+ </para>\n+ <para>\n+ The provided <literal>PATH</literal> expression is evaluated and\n+ the column is filled with the produced SQL/JSON item. If the\n+ <literal>PATH</literal> expression is omitted, path expression\n+ <literal>$.<replaceable>name</replaceable></literal> is used,\n+ where <replaceable>name</replaceable> is the provided column name.\n+ In this case, the column name must correspond to one of the\n+ keys within the SQL/JSON item produced by the row pattern.\n+ </para>\n+ <para>\n+ Optionally, you can specify <literal>WRAPPER</literal>,\n+ <literal>QUOTES</literal> clauses to format the output and\n+ <literal>ON EMPTY</literal> and <literal>ON ERROR</literal> to handle\n+ those scenarios appropriately.\n+ </para>\n\nSimilarly, I am not sure of the description of \"composite SQL/JSON item\".\nby observing the following 3 examples:\nSELECT * FROM JSON_TABLE(jsonb'{\"a\": \"z\"}', '$.a' COLUMNS (js2 text\nformat json PATH '$' omit quotes));\nSELECT * FROM JSON_TABLE(jsonb'{\"a\": \"z\"}', '$.a' COLUMNS (js2 text\nformat json PATH '$'));\nSELECT * FROM JSON_TABLE(jsonb'{\"a\": \"z\"}', '$.a' COLUMNS (js2 text PATH '$'));\n\ni think, FORMAT JSON specification means that,\nif your specified type is text or varchar related AND didn't specify\nquotes behavior\nthen FORMAT JSON produced output can be casted to json data type.\nso FORMAT JSON seems not related to array and records data type.\n\nalso the last para can be:\n+ <para>\n+ Optionally, you can specify <literal>WRAPPER</literal>,\n+ <literal>QUOTES</literal> clauses to format the output and\n+ <literal>ON EMPTY</literal> and <literal>ON ERROR</literal> to handle\n+ those missing values and structural errors, respectively.\n+ </para>\n\n\n+ ereport(ERROR,\n+ errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n+ errmsg(\"only string constants are supported in JSON_TABLE\"\n+ \" path specification\"),\nshould be:\n\n+ ereport(ERROR,\n+ errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n+ errmsg(\"only string constants are supported in JSON_TABLE path\nspecification\"),\n\n\n+ <varlistentry>\n+ <term>\n+ <literal>AS</literal> <replaceable>json_path_name</replaceable>\n+ </term>\n+ <listitem>\n+\n+ <para>\n+ The 
optional <replaceable>json_path_name</replaceable> serves as an\n+ identifier of the provided\n<replaceable>json_path_specification</replaceable>.\n+ The path name must be unique and distinct from the column names.\n+ When using the <literal>PLAN</literal> clause, you must specify the names\n+ for all the paths, including the row pattern. Each path name can appear in\n+ the <literal>PLAN</literal> clause only once.\n+ </para>\n+ </listitem>\n+ </varlistentry>\nas of v46, we don't have PLAN clause.\nalso \"must be unique and distinct from the column names.\" seems incorrect.\nfor example:\nSELECT * FROM JSON_TABLE(jsonb'\"1.23\"', '$.a' as js2 COLUMNS (js2 int\nPATH '$'));\n\n\n",
"msg_date": "Fri, 29 Mar 2024 17:59:11 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "On Fri, Mar 29, 2024 at 6:59 PM jian he <[email protected]> wrote:\n> On Fri, Mar 29, 2024 at 11:20 AM jian he <[email protected]> wrote:\n\nThanks for the reviews and the patch to add new test cases.\n\n> Similarly, I am not sure of the description of \"composite SQL/JSON item\".\n> by observing the following 3 examples:\n> SELECT * FROM JSON_TABLE(jsonb'{\"a\": \"z\"}', '$.a' COLUMNS (js2 text\n> format json PATH '$' omit quotes));\n> SELECT * FROM JSON_TABLE(jsonb'{\"a\": \"z\"}', '$.a' COLUMNS (js2 text\n> format json PATH '$'));\n> SELECT * FROM JSON_TABLE(jsonb'{\"a\": \"z\"}', '$.a' COLUMNS (js2 text PATH '$'));\n>\n> i think, FORMAT JSON specification means that,\n> if your specified type is text or varchar related AND didn't specify\n> quotes behavior\n> then FORMAT JSON produced output can be casted to json data type.\n> so FORMAT JSON seems not related to array and records data type.\n\nHmm, yes, \"composite\" can sound confusing. Maybe just drop the word?\n\nI've taken care of most of your other comments.\n\nI'm also attaching 0002 showing an attempt to salvage only NESTED PATH\nbut not the PLAN clause. Still needs some polishing, like adding a\ndetailed explanation in JsonTablePlanNextRow() of when the nested\nplans are involved, but thought it might be worth sharing at this\npoint.\n\nI'll continue polishing 0001 with the hope to commit it early next week.\n\n-- \nThanks, Amit Langote",
"msg_date": "Fri, 29 Mar 2024 23:01:05 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "Hi Alvaro,\n\nOn Fri, Mar 29, 2024 at 2:04 AM Alvaro Herrera <[email protected]> wrote:\n> On 2024-Mar-28, Amit Langote wrote:\n>\n> > Here's patch 1 for the time being that implements barebones\n> > JSON_TABLE(), that is, without NESTED paths/columns and PLAN clause.\n> > I've tried to shape the interfaces so that those features can be added\n> > in future commits without significant rewrite of the code that\n> > implements barebones JSON_TABLE() functionality. I'll know whether\n> > that's really the case when I rebase the full patch over it.\n>\n> I think this barebones patch looks much closer to something that can be\n> committed for pg17, given the current commitfest timeline. Maybe we\n> should just slip NESTED and PLAN to pg18 to focus current efforts into\n> getting the basic functionality in 17. When I looked at the JSON_TABLE\n> patch last month, it appeared far too large to be reviewable in\n> reasonable time. The fact that this split now exists gives me hope that\n> we can get at least the first part of it.\n\nThanks for chiming in. I agree that 0001 looks more manageable.\n\n> (A note that PLAN seems to correspond to separate features T824+T838, so\n> leaving that one out would still let us claim T821 \"Basic SQL/JSON query\n> operators\" ... however, the NESTED clause does not appear to be a\n> separate SQL feature; in particular it does not appear to correspond to\n> T827, though I may be reading the standard wrong. So if we don't have\n> NESTED, apparently we could not claim to support T821.)\n\nI've posted 0002 just now, which shows that adding just NESTED but not\nPLAN might be feasible.\n\n-- \nThanks, Amit Langote\n\n\n",
"msg_date": "Fri, 29 Mar 2024 23:03:14 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "FAILED: src/interfaces/ecpg/test/sql/sqljson_jsontable.c\n/home/jian/postgres/buildtest6/src/interfaces/ecpg/preproc/ecpg\n--regression -I../../Desktop/pg_src/src6/postgres/src/interfaces/ecpg/test/sql\n-I../../Desktop/pg_src/src6/postgres/src/interfaces/ecpg/include/ -o\nsrc/interfaces/ecpg/test/sql/sqljson_jsontable.c\n../../Desktop/pg_src/src6/postgres/src/interfaces/ecpg/test/sql/sqljson_jsontable.pgc\n../../Desktop/pg_src/src6/postgres/src/interfaces/ecpg/test/sql/sqljson_jsontable.pgc:21:\nWARNING: unsupported feature will be passed to server\n../../Desktop/pg_src/src6/postgres/src/interfaces/ecpg/test/sql/sqljson_jsontable.pgc:32:\nERROR: syntax error at or near \";\"\nneed an extra closing parenthesis?\n\n <para>\n The rows produced by <function>JSON_TABLE</function> are laterally\n joined to the row that generated them, so you do not have to explicitly join\n the constructed view with the original table holding <acronym>JSON</acronym>\n- data.\nneed closing para.\n\nSELECT * FROM JSON_TABLE('[]', 'strict $.a' COLUMNS (js2 text PATH\n'$' error on empty error on error) EMPTY ON ERROR);\nshould i expect it return one row?\nis there any example to make it return one row from top level \"EMPTY ON ERROR\"?\n\n\n+ {\n+ JsonTablePlan *scan = (JsonTablePlan *) plan;\n+\n+ JsonTableInitPathScan(cxt, planstate, args, mcxt);\n+\n+ planstate->nested = scan->child ?\n+ JsonTableInitPlan(cxt, scan->child, planstate, args, mcxt) : NULL;\n+ }\nfirst line seems strange, do we just simply change from \"plan\" to \"scan\"?\n\n\n+ case JTC_REGULAR:\n+ typenameTypeIdAndMod(pstate, rawc->typeName, &typid, &typmod);\n+\n+ /*\n+ * Use implicit FORMAT JSON for composite types (arrays and\n+ * records) or if a non-default WRAPPER / QUOTES behavior is\n+ * specified.\n+ */\n+ if (typeIsComposite(typid) ||\n+ rawc->quotes != JS_QUOTES_UNSPEC ||\n+ rawc->wrapper != JSW_UNSPEC)\n+ rawc->coltype = JTC_FORMATTED;\nper previous discussion, should we refactor the above comment?\n\n\n+/* Recursively set 'reset' flag of planstate and its child nodes */\n+static void\n+JsonTablePlanReset(JsonTablePlanState *planstate)\n+{\n+ if (IsA(planstate->plan, JsonTableSiblingJoin))\n+ {\n+ JsonTablePlanReset(planstate->left);\n+ JsonTablePlanReset(planstate->right);\n+ planstate->advanceRight = false;\n+ }\n+ else\n+ {\n+ planstate->reset = true;\n+ planstate->advanceNested = false;\n+\n+ if (planstate->nested)\n+ JsonTablePlanReset(planstate->nested);\n+ }\nper coverage, the first part of the IF branch never executed.\ni also found out that JsonTablePlanReset is quite similar to JsonTableRescan,\ni don't fully understand these two functions though.\n\n\nSELECT * FROM JSON_TABLE(jsonb'{\"a\": {\"z\":[1111]}, \"b\": 1,\"c\": 2, \"d\":\n91}', '$' COLUMNS (\nc int path '$.c',\nd int path '$.d',\nid1 for ordinality,\nNESTED PATH '$.a.z[*]' columns (z int path '$', id for ordinality)\n));\ndoc seems to say that duplicated ordinality columns in different nest\nlevels are not allowed?\n\n\n\"currentRow\" naming seems misleading, generally, when we think of \"row\",\nwe think of several (not one) datums, or several columns.\nbut here, we only have one datum.\nI don't have good optional naming though.\n\n\n+ case JTC_FORMATTED:\n+ case JTC_EXISTS:\n+ {\n+ Node *je;\n+ CaseTestExpr *param = makeNode(CaseTestExpr);\n+\n+ param->collation = InvalidOid;\n+ param->typeId = contextItemTypid;\n+ param->typeMod = -1;\n+\n+ je = transformJsonTableColumn(rawc, (Node *) param,\n+ NIL, errorOnError);\n+\n+ colexpr = 
transformExpr(pstate, je, EXPR_KIND_FROM_FUNCTION);\n+ assign_expr_collations(pstate, colexpr);\n+\n+ typid = exprType(colexpr);\n+ typmod = exprTypmod(colexpr);\n+ break;\n+ }\n+\n+ default:\n+ elog(ERROR, \"unknown JSON_TABLE column type: %d\", rawc->coltype);\n+ break;\n+ }\n+\n+ tf->coltypes = lappend_oid(tf->coltypes, typid);\n+ tf->coltypmods = lappend_int(tf->coltypmods, typmod);\n+ tf->colcollations = lappend_oid(tf->colcollations, get_typcollation(typid));\n+ tf->colvalexprs = lappend(tf->colvalexprs, colexpr);\n\nwhy not use exprCollation(colexpr) for tf->colcollations, similar to\nexprType(colexpr)?\n\n\n\n\n+-- Should fail (JSON arguments are not passed to column paths)\n+SELECT *\n+FROM JSON_TABLE(\n+ jsonb '[1,2,3]',\n+ '$[*] ? (@ < $x)'\n+ PASSING 10 AS x\n+ COLUMNS (y text FORMAT JSON PATH '$ ? (@ < $x)')\n+ ) jt;\n+ERROR: could not find jsonpath variable \"x\"\n\nthe error message does not correspond to the comments intention.\nalso \"y text FORMAT JSON\" should be fine?\n\nonly the second last example really using the PASSING clause.\nshould the following query work just fine in this context?\n\ncreate table s(js jsonb);\ninsert into s select '{\"a\":{\"za\":[{\"z1\": [11,2222]},{\"z21\": [22,\n234,2345]}]},\"c\": 3}';\nSELECT sub.* FROM s,JSON_TABLE(js, '$' passing 11 AS \"b c\", 1 + 2 as y\nCOLUMNS (xx int path '$.c ? (@ == $y)')) sub;\n\n\nI thought the json and text data type were quite similar.\nshould these following two queries return the same result?\n\nSELECT sub.* FROM s, JSON_TABLE(js, '$' COLUMNS(\nxx int path '$.c',\nnested PATH '$.a.za[1]' columns (NESTED PATH '$.z21[*]' COLUMNS (a12\njsonb path '$'))\n))sub;\n\nSELECT sub.* FROM s,JSON_TABLE(js, '$' COLUMNS (\nc int path '$.c',\nNESTED PATH '$.a.za[1]' columns (z json path '$')\n)) sub;\n\n\n",
"msg_date": "Mon, 1 Apr 2024 08:00:00 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "typedef struct JsonTableExecContext\n{\nint magic;\nJsonTablePlanState *rootplanstate;\nJsonTablePlanState **colexprplans;\n} JsonTableExecContext;\n\nimho, this kind of naming is kind of inconsistent.\n\"state\" and \"plan\" are mixed together.\nmaybe\n\ntypedef struct JsonTableExecContext\n{\nint magic;\nJsonTablePlanState *rootplanstate;\nJsonTablePlanState **colexprstates;\n} JsonTableExecContext;\n\n\n+ cxt->colexprplans = palloc(sizeof(JsonTablePlanState *) *\n+ list_length(tf->colvalexprs));\n+\n /* Initialize plan */\n- cxt->rootplanstate = JsonTableInitPlan(cxt, rootplan, args,\n+ cxt->rootplanstate = JsonTableInitPlan(cxt, (Node *) rootplan, NULL, args,\n CurrentMemoryContext);\nI think, the comments \"Initialize plan\" is not right, here we\ninitialize the rootplanstate (JsonTablePlanState)\nand also for each (no ordinality) columns, we also initialized the\nspecific JsonTablePlanState.\n\n static void JsonTableRescan(JsonTablePlanState *planstate);\n@@ -331,6 +354,9 @@ static Datum JsonTableGetValue(TableFuncScanState\n*state, int colnum,\n Oid typid, int32 typmod, bool *isnull);\n static void JsonTableDestroyOpaque(TableFuncScanState *state);\n static bool JsonTablePlanNextRow(JsonTablePlanState *planstate);\n+static bool JsonTablePlanPathNextRow(JsonTablePlanState *planstate);\n+static void JsonTableRescan(JsonTablePlanState *planstate);\n\nJsonTableRescan included twice?\n\n\n",
"msg_date": "Mon, 1 Apr 2024 11:47:37 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "On Mon, Apr 1, 2024 at 8:00 AM jian he <[email protected]> wrote:\n>\n> +-- Should fail (JSON arguments are not passed to column paths)\n> +SELECT *\n> +FROM JSON_TABLE(\n> + jsonb '[1,2,3]',\n> + '$[*] ? (@ < $x)'\n> + PASSING 10 AS x\n> + COLUMNS (y text FORMAT JSON PATH '$ ? (@ < $x)')\n> + ) jt;\n> +ERROR: could not find jsonpath variable \"x\"\n>\n> the error message does not correspond to the comments intention.\n> also \"y text FORMAT JSON\" should be fine?\n\nsorry for the noise, i've figured out why.\n\n> only the second last example really using the PASSING clause.\n> should the following query work just fine in this context?\n>\n> create table s(js jsonb);\n> insert into s select '{\"a\":{\"za\":[{\"z1\": [11,2222]},{\"z21\": [22,\n> 234,2345]}]},\"c\": 3}';\n> SELECT sub.* FROM s,JSON_TABLE(js, '$' passing 11 AS \"b c\", 1 + 2 as y\n> COLUMNS (xx int path '$.c ? (@ == $y)')) sub;\n>\n>\n> I thought the json and text data type were quite similar.\n> should these following two queries return the same result?\n>\n> SELECT sub.* FROM s, JSON_TABLE(js, '$' COLUMNS(\n> xx int path '$.c',\n> nested PATH '$.a.za[1]' columns (NESTED PATH '$.z21[*]' COLUMNS (a12\n> jsonb path '$'))\n> ))sub;\n>\n> SELECT sub.* FROM s,JSON_TABLE(js, '$' COLUMNS (\n> c int path '$.c',\n> NESTED PATH '$.a.za[1]' columns (z json path '$')\n> )) sub;\nsorry for the noise, i've figured out why.\n\nthere are 12 appearances of \"NESTED PATH\" in sqljson_jsontable.sql.\nbut we don't have a real example of NESTED PATH nested with NESTED PATH.\nso I added some real tests on it.\ni also added some tests about the PASSING clause.\nplease check the attachment.\n\n\n/*\n * JsonTableInitPlan\n * Initialize information for evaluating a jsonpath given in\n * JsonTablePlan\n */\nstatic void\nJsonTableInitPathScan(JsonTableExecContext *cxt,\n JsonTablePlanState *planstate,\n List *args, MemoryContext mcxt)\n{\nJsonTablePlan *plan = (JsonTablePlan *) planstate->plan;\nint i;\n\nplanstate->path = DatumGetJsonPathP(plan->path->value->constvalue);\nplanstate->args = args;\nplanstate->mcxt = AllocSetContextCreate(mcxt, \"JsonTableExecContext\",\nALLOCSET_DEFAULT_SIZES);\n\n/* No row pattern evaluated yet. */\nplanstate->currentRow = PointerGetDatum(NULL);\nplanstate->currentRowIsNull = true;\n\nfor (i = plan->colMin; i <= plan->colMax; i++)\ncxt->colexprplans[i] = planstate;\n}\n\nJsonTableInitPathScan's work is to init/assign struct\nJsonTablePlanState's elements.\nmaybe we should just put JsonTableInitPathScan's work into JsonTableInitPlan\nand also rename JsonTableInitPlan to \"JsonTableInitPlanState\" or\n\"InitJsonTablePlanState\".\n\n\n\nJsonTableSiblingJoin *join = (JsonTableSiblingJoin *) plan;\njust rename the variable name, seems unnecessary?",
"msg_date": "Mon, 1 Apr 2024 16:56:00 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "hi.\n\n+/*\n+ * Recursively transform child JSON_TABLE plan.\n+ *\n+ * Default plan is transformed into a cross/union join of its nested columns.\n+ * Simple and outer/inner plans are transformed into a JsonTablePlan by\n+ * finding and transforming corresponding nested column.\n+ * Sibling plans are recursively transformed into a JsonTableSibling.\n+ */\n+static Node *\n+transformJsonTableChildPlan(JsonTableParseContext *cxt,\n+ List *columns)\nthis comment is not the same as the function intention for now.\nmaybe we need to refactor it.\n\n\n/*\n* Each call to fetch a new set of rows - of which there may be very many\n* if XMLTABLE is being used in a lateral join - will allocate a possibly\n* substantial amount of memory, so we cannot use the per-query context\n* here. perTableCxt now serves the same function as \"argcontext\" does in\n* FunctionScan - a place to store per-one-call (i.e. one result table)\n* lifetime data (as opposed to per-query or per-result-tuple).\n*/\nMemoryContextSwitchTo(tstate->perTableCxt);\n\nmaybe we can replace \"XMLTABLE\" to \"XMLTABLE or JSON_TABLE\"?\n\n\n\n/* Transform and coerce the PASSING arguments to to jsonb. */\nthere should be only one \"to\"?\n\n-----------------------------------------------------------------------------------------------------------------------\njson_table_column clause doesn't have a passing clause.\nwe can only have one passing clause in json_table.\nbut during JsonTableInitPathScan, for each output columns associated\nJsonTablePlanState\nwe already initialized the PASSING arguments via `planstate->args = args;`\nalso transformJsonTableColumn already has a passingArgs argument.\ntechnically we can use the jsonpath variable for every output column\nregardless of whether it's nested or not.\n\nJsonTable already has the \"passing\" clause,\nwe just need to pass it to function transformJsonTableColumns and it's callees.\nbased on that, I implemented it. seems quite straightforward.\nI also wrote several contrived, slightly complicated tests.\nIt seems to work just fine.\n\nsimple explanation:\npreviously the following sql will fail, error message is that \"could\nnot find jsonpath variable %s\".\nnow it will work.\n\nSELECT sub.* FROM\nJSON_TABLE(jsonb '{\"a\":{\"za\":[{\"z1\": [11,2222]},{\"z21\": [22,\n234,2345]}]},\"c\": 3}',\n'$' PASSING 22 AS x, 234 AS y\nCOLUMNS(\nxx int path '$.c',\nNESTED PATH '$.a.za[1]' as n1 columns\n(NESTED PATH '$.z21[*]' as n2\nCOLUMNS (z21 int path '$?(@ == $\"x\" || @ == $\"y\" )' default 0 on empty)),\nNESTED PATH '$.a.za[0]' as n4 columns\n(NESTED PATH '$.z1[*]' as n3\nCOLUMNS (z1 int path '$?(@ > $\"y\" + 1988)' default 0 on empty)))\n)sub;",
"msg_date": "Tue, 2 Apr 2024 14:54:13 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "On Fri, Mar 22, 2024 at 12:08 AM Amit Langote <[email protected]> wrote:\n>\n> On Wed, Mar 20, 2024 at 9:53 PM Amit Langote <[email protected]> wrote:\n> > I'll push 0001 tomorrow.\n>\n> Pushed that one. Here's the remaining JSON_TABLE() patch.\n>\nI know v45 is very different from v47.\nbut v45 contains all the remaining features to be implemented.\n\nI've attached 2 files.\nv45-0001-propagate-passing-clause-to-every-json_ta.based_on_v45\nafter_apply_jsonpathvar.sql.\n\nthe first file should be applied after v45-0001-JSON_TABLE.patch\nthe second file has all kinds of tests to prove that\napplying JsonPathVariable to the NESTED PATH is ok.\n\nI know that v45 is not the whole patch we are going to push for postgres17.\nI just want to point out that applying the PASSING clause to the NESTED PATH\nworks fine with V45.\n\nthat means, I think, we can safely apply PASSING clause to NESTED PATH for\nfeature \"PLAN DEFAULT clause\", \"specific PLAN clause\" and \"sibling\nNESTED COLUMNS clauses\".",
"msg_date": "Tue, 2 Apr 2024 21:26:54 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "Hi Jian,\n\nThanks for your time on this.\n\nOn Mon, Apr 1, 2024 at 9:00 AM jian he <[email protected]> wrote:\n> SELECT * FROM JSON_TABLE('[]', 'strict $.a' COLUMNS (js2 text PATH\n> '$' error on empty error on error) EMPTY ON ERROR);\n> should i expect it return one row?\n> is there any example to make it return one row from top level \"EMPTY ON ERROR\"?\n\nI think that's expected. You get 0 rows instead of a single row with\none column containing an empty array, because the NULL returned by the\nerror-handling part of JSON_TABLE's top-level path is not returned\ndirectly to the user, but instead passed as an input document for the\nTableFunc.\n\nI think it suffices to add a note to the documentation of table-level\n(that is, not column-level) ON ERROR clause that EMPTY means an empty\n\"table\", not empty array, which is what you get with JSON_QUERY().\n\n> + {\n> + JsonTablePlan *scan = (JsonTablePlan *) plan;\n> +\n> + JsonTableInitPathScan(cxt, planstate, args, mcxt);\n> +\n> + planstate->nested = scan->child ?\n> + JsonTableInitPlan(cxt, scan->child, planstate, args, mcxt) : NULL;\n> + }\n> first line seems strange, do we just simply change from \"plan\" to \"scan\"?\n\nMostly to complement the \"join\" variable in the other block.\n\nAnyway, I've reworked this to make JsonTablePlan an abstract struct\nand make JsonTablePathScan and JsonTableSiblingJoin \"inherit\" from it.\n\n> + case JTC_REGULAR:\n> + typenameTypeIdAndMod(pstate, rawc->typeName, &typid, &typmod);\n> +\n> + /*\n> + * Use implicit FORMAT JSON for composite types (arrays and\n> + * records) or if a non-default WRAPPER / QUOTES behavior is\n> + * specified.\n> + */\n> + if (typeIsComposite(typid) ||\n> + rawc->quotes != JS_QUOTES_UNSPEC ||\n> + rawc->wrapper != JSW_UNSPEC)\n> + rawc->coltype = JTC_FORMATTED;\n> per previous discussion, should we refactor the above comment?\n\nDone. Instead of saying \"use implicit FORMAT JSON\" I've reworked the\ncomment to mention instead that we do this so that the column uses\nJSON_QUERY() as implementation for these cases.\n\n> +/* Recursively set 'reset' flag of planstate and its child nodes */\n> +static void\n> +JsonTablePlanReset(JsonTablePlanState *planstate)\n> +{\n> + if (IsA(planstate->plan, JsonTableSiblingJoin))\n> + {\n> + JsonTablePlanReset(planstate->left);\n> + JsonTablePlanReset(planstate->right);\n> + planstate->advanceRight = false;\n> + }\n> + else\n> + {\n> + planstate->reset = true;\n> + planstate->advanceNested = false;\n> +\n> + if (planstate->nested)\n> + JsonTablePlanReset(planstate->nested);\n> + }\n> per coverage, the first part of the IF branch never executed.\n> i also found out that JsonTablePlanReset is quite similar to JsonTableRescan,\n> i don't fully understand these two functions though.\n\nWorking on improving the documentation of the recursive algorithm,\nthough I want to focus on finishing 0001 first.\n\n> SELECT * FROM JSON_TABLE(jsonb'{\"a\": {\"z\":[1111]}, \"b\": 1,\"c\": 2, \"d\":\n> 91}', '$' COLUMNS (\n> c int path '$.c',\n> d int path '$.d',\n> id1 for ordinality,\n> NESTED PATH '$.a.z[*]' columns (z int path '$', id for ordinality)\n> ));\n> doc seems to say that duplicated ordinality columns in different nest\n> levels are not allowed?\n\nBoth the documentation and the code in JsonTableGetValue() to\ncalculate a FOR ORDINALITY column were wrong. 
A nested path's columns\nshould be able to have its own ordinal counter that runs separately\nfrom the other paths, including the parent path, all the way up to the\nroot path.\n\nI've fixed both. Added a test case too.\n\n> \"currentRow\" naming seems misleading, generally, when we think of \"row\",\n> we think of several (not one) datums, or several columns.\n> but here, we only have one datum.\n> I don't have good optional naming though.\n\nYeah, I can see the confusion. I've created a new struct called\nJsonTablePlanRowSource and different places now use a variable named\njust 'current' to refer to the currently active row source. It's\nhopefully clear from the context that the datum containing the JSON\nobject is acting as a source of values for evaluating column paths.\n\n> + case JTC_FORMATTED:\n> + case JTC_EXISTS:\n> + {\n> + Node *je;\n> + CaseTestExpr *param = makeNode(CaseTestExpr);\n> +\n> + param->collation = InvalidOid;\n> + param->typeId = contextItemTypid;\n> + param->typeMod = -1;\n> +\n> + je = transformJsonTableColumn(rawc, (Node *) param,\n> + NIL, errorOnError);\n> +\n> + colexpr = transformExpr(pstate, je, EXPR_KIND_FROM_FUNCTION);\n> + assign_expr_collations(pstate, colexpr);\n> +\n> + typid = exprType(colexpr);\n> + typmod = exprTypmod(colexpr);\n> + break;\n> + }\n> +\n> + default:\n> + elog(ERROR, \"unknown JSON_TABLE column type: %d\", rawc->coltype);\n> + break;\n> + }\n> +\n> + tf->coltypes = lappend_oid(tf->coltypes, typid);\n> + tf->coltypmods = lappend_int(tf->coltypmods, typmod);\n> + tf->colcollations = lappend_oid(tf->colcollations, get_typcollation(typid));\n> + tf->colvalexprs = lappend(tf->colvalexprs, colexpr);\n>\n> why not use exprCollation(colexpr) for tf->colcollations, similar to\n> exprType(colexpr)?\n\nYes, maybe.\n\nOn Tue, Apr 2, 2024 at 3:54 PM jian he <[email protected]> wrote:\n> +/*\n> + * Recursively transform child JSON_TABLE plan.\n> + *\n> + * Default plan is transformed into a cross/union join of its nested columns.\n> + * Simple and outer/inner plans are transformed into a JsonTablePlan by\n> + * finding and transforming corresponding nested column.\n> + * Sibling plans are recursively transformed into a JsonTableSibling.\n> + */\n> +static Node *\n> +transformJsonTableChildPlan(JsonTableParseContext *cxt,\n> + List *columns)\n> this comment is not the same as the function intention for now.\n> maybe we need to refactor it.\n\nFixed.\n\n> /*\n> * Each call to fetch a new set of rows - of which there may be very many\n> * if XMLTABLE is being used in a lateral join - will allocate a possibly\n> * substantial amount of memory, so we cannot use the per-query context\n> * here. perTableCxt now serves the same function as \"argcontext\" does in\n> * FunctionScan - a place to store per-one-call (i.e. one result table)\n> * lifetime data (as opposed to per-query or per-result-tuple).\n> */\n> MemoryContextSwitchTo(tstate->perTableCxt);\n>\n> maybe we can replace \"XMLTABLE\" to \"XMLTABLE or JSON_TABLE\"?\n\nGood catch, done.\n\n>\n> /* Transform and coerce the PASSING arguments to to jsonb. 
*/\n> there should be only one \"to\"?\n\nWill need to fix that separately.\n\n> -----------------------------------------------------------------------------------------------------------------------\n> json_table_column clause doesn't have a passing clause.\n> we can only have one passing clause in json_table.\n> but during JsonTableInitPathScan, for each output columns associated\n> JsonTablePlanState\n> we already initialized the PASSING arguments via `planstate->args = args;`\n> also transformJsonTableColumn already has a passingArgs argument.\n> technically we can use the jsonpath variable for every output column\n> regardless of whether it's nested or not.\n>\n> JsonTable already has the \"passing\" clause,\n> we just need to pass it to function transformJsonTableColumns and it's callees.\n> based on that, I implemented it. seems quite straightforward.\n> I also wrote several contrived, slightly complicated tests.\n> It seems to work just fine.\n>\n> simple explanation:\n> previously the following sql will fail, error message is that \"could\n> not find jsonpath variable %s\".\n> now it will work.\n>\n> SELECT sub.* FROM\n> JSON_TABLE(jsonb '{\"a\":{\"za\":[{\"z1\": [11,2222]},{\"z21\": [22,\n> 234,2345]}]},\"c\": 3}',\n> '$' PASSING 22 AS x, 234 AS y\n> COLUMNS(\n> xx int path '$.c',\n> NESTED PATH '$.a.za[1]' as n1 columns\n> (NESTED PATH '$.z21[*]' as n2\n> COLUMNS (z21 int path '$?(@ == $\"x\" || @ == $\"y\" )' default 0 on empty)),\n> NESTED PATH '$.a.za[0]' as n4 columns\n> (NESTED PATH '$.z1[*]' as n3\n> COLUMNS (z1 int path '$?(@ > $\"y\" + 1988)' default 0 on empty)))\n> )sub;\n\nThanks for the patch. Yeah, not allowing column paths (including\nnested ones) to use top-level PASSING args seems odd, so I wanted to\nfix it too.\n\nPlease let me know if you have further comments on 0001. I'd like to\nget that in before spending more energy on 0002.\n\n--\nThanks, Amit Langote",
"msg_date": "Tue, 2 Apr 2024 22:57:21 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "On Tue, Apr 2, 2024 at 9:57 PM Amit Langote <[email protected]> wrote:\n>\n> Please let me know if you have further comments on 0001. I'd like to\n> get that in before spending more energy on 0002.\n>\n\nhi. some issues with the doc.\ni think, some of the \"path expression\" can be replaced by\n\"<replaceable>path_expression</replaceable>\".\nmaybe not all of them.\n\n+ <variablelist>\n+ <varlistentry>\n+ <term>\n+ <literal><replaceable>context_item</replaceable>,\n<replaceable>path_expression</replaceable> <optional>\n<literal>AS</literal> <replaceable>json_path_name</replaceable>\n</optional> <optional> <literal>PASSING</literal> {\n<replaceable>value</replaceable> <literal>AS</literal>\n<replaceable>varname</replaceable> } <optional>,\n...</optional></optional></literal>\n+ </term>\n+ <listitem>\n+ <para>\n+ The input data to query, the JSON path expression defining the query,\n+ and an optional <literal>PASSING</literal> clause, which can provide data\n+ values to the <replaceable>path_expression</replaceable>.\n+ The result of the input data\n+ evaluation is called the <firstterm>row pattern</firstterm>. The row\n+ pattern is used as the source for row values in the constructed view.\n+ </para>\n+ </listitem>\n+ </varlistentry>\n\nmaybe\nchange this part \"The input data to query, the JSON path expression\ndefining the query,\"\nto\n`\n<replaceable>context_item</replaceable> is the input data to query,\n<replaceable>path_expression</replaceable> is the JSON path expression\ndefining the query,\n`\n\n+ <para>\n+ Specifying <literal>FORMAT JSON</literal> makes it explcit that you\n+ expect that the value to be a valid <type>json</type> object.\n+ </para>\n\"explcit\" change to \"explicit\", or should it be \"explicitly\"?\nalso FORMAT JSON can be override by OMIT QUOTES.\nSELECT sub.* FROM JSON_TABLE('{\"a\":{\"z1\": \"a\"}}', '$.a' COLUMNS(xx\nTEXT format json path '$.z1' omit quotes))sub;\nit return not double quoted literal 'a', which cannot be a valid json.\n\ncreate or replace FUNCTION test_format_json() returns table (thetype\ntext, is_ok bool) AS $$\ndeclare\n part1_sql text := $sql$SELECT sub.* FROM JSON_TABLE('{\"a\":{\"z1\":\n\"a\"}}', '$.a' COLUMNS(xx $sql$;\n part2_sql text := $sql$ format json path '$.z1' omit quotes))sub $sql$;\n run_status bool := true;\n r record;\n fin record;\nBEGIN\n for r in\n select format_type(oid, -1) as aa\n from pg_type where typtype = 'b' and typarray != 0 and\ntypnamespace = 11 and typnotnull is false\n loop\n begin\n -- raise notice '%',CONCAT_WS(' ', part1_sql, r.aa, part2_sql);\n -- raise notice 'r.aa %', r.aa;\n run_status := true;\n execute CONCAT_WS(' ', part1_sql, r.aa, part2_sql) into fin;\n return query select r.aa, run_status;\n exception when others then\n begin\n run_status := false;\n return query select r.aa, run_status;\n end;\n end;\n end loop;\nEND;\n$$ language plpgsql;\ncreate table sss_1 as select * from test_format_json();\nselect * from sss_1 where is_ok is true;\n\nuse the above query, I've figure out that FORMAT JSON can apply to the\nfollowing types:\nbytea\nname\ntext\njson\nbpchar\ncharacter varying\njsonb\nand these type's customized domain type.\n\noverall, the idea is that:\n Specifying <literal>FORMAT JSON</literal> makes it explicitly that you\n expect that the value to be a valid <type>json</type> object.\n <literal>FORMAT JSON</literal> can be overridden by OMIT QUOTES\nspecification, which can make the return value not a valid\n<type>json</type>.\n <literal>FORMAT JSON</literal> can only work 
with certain kinds of\ndata types.\n-----------------------------------------------------------------------------------------------\n+ <para>\n+ Optionally, you can add <literal>ON ERROR</literal> clause to define\n+ error behavior.\n+ </para>\nI think \"error behavior\" may refer to \"what kind of error message it will omit\"\nbut here, it's about what to do when an error happens.\nso I guess it's misleading.\n\nmaybe we can explain it similar to json_exist.\n+ <para>\n+ Optionally, you can add <literal>ON ERROR</literal> clause to define\n+ the behavior if an error occurs.\n+ </para>\n\n+ <para>\n+ The specified <parameter>type</parameter> should have a cast from the\n+ <type>boolean</type>.\n+ </para>\nshould be\n+ <para>\n+ The specified <replaceable>type</replaceable> should have a cast from the\n+ <type>boolean</type>.\n+ </para>\n\n\n+ <para>\n+ Inserts a SQL/JSON value into the output row.\n+ </para>\nmaybe\n+ <para>\n+ Inserts a value that the data type is\n<replaceable>type</replaceable> into the output row.\n+ </para>\n\n+ <para>\n+ Inserts a boolean item into each output row.\n+ </para>\nmaybe changed to:\n+ <para>\n+ Inserts a value that the data type is\n<replaceable>type</replaceable> into the output row.\n+ </para>\n\n\"name type EXISTS\" branch mentioned: \"The specified type should have a\ncast from the boolean.\"\nbut \"name type [FORMAT JSON [ENCODING UTF8]] [ PATH path_expression ]\"\nnever mentioned the \"type\"parameter.\nmaybe add one para, something like:\n\"after apply path_expression, the yield value cannot be coerce to\n<replaceable>type</replaceable> it will return null\"\n\n\n",
"msg_date": "Wed, 3 Apr 2024 11:30:08 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "On Wed, Apr 3, 2024 at 11:30 AM jian he <[email protected]> wrote:\n>\n> On Tue, Apr 2, 2024 at 9:57 PM Amit Langote <[email protected]> wrote:\n> >\n> > Please let me know if you have further comments on 0001. I'd like to\n> > get that in before spending more energy on 0002.\n> >\n\n-- a/src/backend/parser/parse_target.c\n+++ b/src/backend/parser/parse_target.c\n@@ -2019,6 +2019,9 @@ FigureColnameInternal(Node *node, char **name)\n case JSON_VALUE_OP:\n *name = \"json_value\";\n return 2;\n+ case JSON_TABLE_OP:\n+ *name = \"json_table\";\n+ return 2;\n default:\n elog(ERROR, \"unrecognized JsonExpr op: %d\",\n (int) ((JsonFuncExpr *) node)->op);\n\n\"case JSON_TABLE_OP part\", no need?\njson_table output must provide column name and type?\n\nI did some minor refactor transformJsonTableColumns, make the comments\nalign with the function intention.\nin v48-0001, in transformJsonTableColumns we can `Assert(rawc->name);`.\nsince non-nested JsonTableColumn must specify column name.\nin v48-0002, we can change to `if (rawc->coltype != JTC_NESTED)\nAssert(rawc->name);`\n\n\n\nSELECT * FROM JSON_TABLE(jsonb '1', '$' COLUMNS (a int PATH '$.a' )\nERROR ON ERROR) jt;\nERROR: no SQL/JSON item\n\nI thought it should just return NULL.\nIn this case, I thought that\n(not column-level) ERROR ON ERROR should not interfere with \"COLUMNS\n(a int PATH '$.a' )\".\n\n+-- Other miscellanous checks\n\"miscellanous\" should be \"miscellaneous\".\n\n\noverall the coverage is pretty high.\nthe current test output looks fine.",
"msg_date": "Wed, 3 Apr 2024 15:15:57 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "On Wed, Apr 3, 2024 at 3:15 PM jian he <[email protected]> wrote:\n>\n> On Wed, Apr 3, 2024 at 11:30 AM jian he <[email protected]> wrote:\n> >\n> > On Tue, Apr 2, 2024 at 9:57 PM Amit Langote <[email protected]> wrote:\n> > >\n> > > Please let me know if you have further comments on 0001. I'd like to\n> > > get that in before spending more energy on 0002.\n> > >\n\nmore doc issue with v48. 0001, 0002.\n <para>\n The optional <replaceable>json_path_name</replaceable> serves as an\n identifier of the provided <replaceable>path_expression</replaceable>.\n The path name must be unique and distinct from the column names.\n </para>\n\"path name\" should be\n<replaceable>json_path_name</replaceable>\n\n\ngit diff --check\ndoc/src/sgml/func.sgml:19192: trailing whitespace.\n+ id | kind | title | director\n\n\n+ <para>\n+ JSON data stored at a nested level of the row pattern can be extracted using\n+ the <literal>NESTED PATH</literal> clause. Each\n+ <literal>NESTED PATH</literal> clause can be used to generate one or more\n+ columns using the data from a nested level of the row pattern, which can be\n+ specified using a <literal>COLUMNS</literal> clause. Rows constructed from\n+ such columns are called <firstterm>child rows</firstterm> and are joined\n+ agaist the row constructed from the columns specified in the parent\n+ <literal>COLUMNS</literal> clause to get the row in the final view. Child\n+ columns may themselves contain a <literal>NESTED PATH</literal>\n+ specifification thus allowing to extract data located at arbitrary nesting\n+ levels. Columns produced by <literal>NESTED PATH</literal>s at the same\n+ level are considered to be <firstterm>siblings</firstterm> and are joined\n+ with each other before joining to the parent row.\n+ </para>\n\n\"agaist\" should be \"against\".\n\"specifification\" should be \"specification\".\n+ Rows constructed from\n+ such columns are called <firstterm>child rows</firstterm> and are joined\n+ agaist the row constructed from the columns specified in the parent\n+ <literal>COLUMNS</literal> clause to get the row in the final view.\nthis sentence is long, not easy to comprehend, maybe we can rephrase it\nor split it into two.\n\n\n\n+ | NESTED PATH <replaceable>json_path_specification</replaceable>\n<optional> AS <replaceable>path_name</replaceable> </optional>\n+ COLUMNS ( <replaceable>json_table_column</replaceable>\n<optional>, ...</optional> )\nv48, 0002 patch.\nin the json_table synopsis section, put these two lines into one line,\nI think would make it more readable.\nalso the following sgml code will render the html into one line.\n <term>\n <literal>NESTED PATH</literal>\n<replaceable>json_path_specification</replaceable> <optional>\n<literal>AS</literal> <replaceable>json_path_name</replaceable>\n</optional>\n <literal>COLUMNS</literal> (\n<replaceable>json_table_column</replaceable> <optional>,\n...</optional> )\n </term>\n\nalso <replaceable>path_name</replaceable> should be\n<replaceable>json_path_name</replaceable>.\n\n\n\n+ <para>\n+ The <literal>NESTED PATH</literal> syntax is recursive,\n+ so you can go down multiple nested levels by specifying several\n+ <literal>NESTED PATH</literal> subclauses within each other.\n+ It allows to unnest the hierarchy of JSON objects and arrays\n+ in a single function invocation rather than chaining several\n+ <function>JSON_TABLE</function> expressions in an SQL statement.\n+ </para>\n\"The <literal>NESTED PATH</literal> syntax is recursive\"\nshould be\n`\nThe <literal>NESTED 
PATH</literal> syntax can be recursive,\nso you can go down multiple nested levels by specifying several\n<literal>NESTED PATH</literal> subclauses within each other.\n`\n\n\n",
"msg_date": "Wed, 3 Apr 2024 17:36:50 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "On Wed, Apr 3, 2024 at 4:16 PM jian he <[email protected]> wrote:\n> On Wed, Apr 3, 2024 at 11:30 AM jian he <[email protected]> wrote:\n> >\n> > On Tue, Apr 2, 2024 at 9:57 PM Amit Langote <[email protected]> wrote:\n> > >\n> > > Please let me know if you have further comments on 0001. I'd like to\n> > > get that in before spending more energy on 0002.\n> > >\n>\n> -- a/src/backend/parser/parse_target.c\n> +++ b/src/backend/parser/parse_target.c\n> @@ -2019,6 +2019,9 @@ FigureColnameInternal(Node *node, char **name)\n> case JSON_VALUE_OP:\n> *name = \"json_value\";\n> return 2;\n> + case JSON_TABLE_OP:\n> + *name = \"json_table\";\n> + return 2;\n> default:\n> elog(ERROR, \"unrecognized JsonExpr op: %d\",\n> (int) ((JsonFuncExpr *) node)->op);\n>\n> \"case JSON_TABLE_OP part\", no need?\n> json_table output must provide column name and type?\n\nThat seems to be the case, so removed.\n\n> I did some minor refactor transformJsonTableColumns, make the comments\n> align with the function intention.\n\nThanks, but that seems a bit verbose. I've reduced it down to what\ngives enough information.\n\n> in v48-0001, in transformJsonTableColumns we can `Assert(rawc->name);`.\n> since non-nested JsonTableColumn must specify column name.\n> in v48-0002, we can change to `if (rawc->coltype != JTC_NESTED)\n> Assert(rawc->name);`\n\nOk, done.\n\n> SELECT * FROM JSON_TABLE(jsonb '1', '$' COLUMNS (a int PATH '$.a' )\n> ERROR ON ERROR) jt;\n> ERROR: no SQL/JSON item\n>\n> I thought it should just return NULL.\n> In this case, I thought that\n> (not column-level) ERROR ON ERROR should not interfere with \"COLUMNS\n> (a int PATH '$.a' )\".\n\nI think it does in another database's implementation, which must be\nwhy the original authors decided that the table-level ERROR should\nalso be used for columns unless overridden. But I agree that keeping\nthe two separate is better, so changed that way.\n\nAttached updated patches. I have addressed your doc comments on 0001,\nbut not 0002 yet.\n\n\n>\n> +-- Other miscellanous checks\n> \"miscellanous\" should be \"miscellaneous\".\n>\n>\n> overall the coverage is pretty high.\n> the current test output looks fine.\n\n\n\n--\nThanks, Amit Langote",
"msg_date": "Wed, 3 Apr 2024 21:38:59 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "hi.\n+ <para>\n+ <function>json_table</function> is an SQL/JSON function which\n+ queries <acronym>JSON</acronym> data\n+ and presents the results as a relational view, which can be accessed as a\n+ regular SQL table. You can only use\n<function>json_table</function> inside the\n+ <literal>FROM</literal> clause of a <literal>SELECT</literal>,\n+ <literal>UPDATE</literal>, <literal>DELETE</literal>, or\n<literal>MERGE</literal>\n+ statement.\n+ </para>\n\nthe only issue is that <literal>MERGE</literal> Synopsis don't have\n<literal>FROM</literal> clause.\nother than that, it's quite correct.\nsee following tests demo:\n\ndrop table ss;\ncreate table ss(a int);\ninsert into ss select 1;\ndelete from ss using JSON_TABLE(jsonb '1', '$' COLUMNS (a int PATH '$'\n) ERROR ON ERROR) jt where jt.a = 1;\ninsert into ss select 2;\nupdate ss set a = 1 from JSON_TABLE(jsonb '2', '$' COLUMNS (a int PATH\n'$')) jt where jt.a = 2;\nDROP TABLE IF EXISTS target;\nCREATE TABLE target (tid integer, balance integer) WITH\n(autovacuum_enabled=off);\nINSERT INTO target VALUES (1, 10),(2, 20),(3, 30);\nMERGE INTO target USING JSON_TABLE(jsonb '2', '$' COLUMNS (a int PATH\n'$' ) ERROR ON ERROR) source(sid)\nON target.tid = source.sid\nWHEN MATCHED THEN UPDATE SET balance = 0\nreturning *;\n--------------------------------------------------------------------------------------------------\n\n+ <para>\n+ To split the row pattern into columns, <function>json_table</function>\n+ provides the <literal>COLUMNS</literal> clause that defines the\n+ schema of the created view. For each column, a separate path expression\n+ can be specified to be evaluated against the row pattern to get a\n+ SQL/JSON value that will become the value for the specified column in\n+ a given output row.\n+ </para>\nshould be \"an SQL/JSON\".\n\n+ <para>\n+ Inserts a SQL/JSON value obtained by applying\n+ <replaceable>path_expression</replaceable> against the row pattern into\n+ the view's output row after coercing it to specified\n+ <replaceable>type</replaceable>.\n+ </para>\nshould be \"an SQL/JSON\".\n\n\"coercing it to specified <replaceable>type</replaceable>\"\nshould be\n\"coercing it to the specified <replaceable>type</replaceable>\"?\n---------------------------------------------------------------------------------------------------------------\n+ <para>\n+ The value corresponds to whether evaluating the <literal>PATH</literal>\n+ expression yields any SQL/JSON values.\n+ </para>\nmaybe we can change to\n+ <para>\n+ The value corresponds to whether applying the\n<replaceable>path_expression</replaceable>\n+ expression yields any SQL/JSON values.\n+ </para>\nso it looks more consistent with the preceding paragraph.\n\n+ <para>\n+ Optionally, <literal>ON ERROR</literal> can be used to specify whether\n+ to throw an error or return the specified value to handle structural\n+ errors, respectively. 
The default is to return a boolean value\n+ <literal>FALSE</literal>.\n+ </para>\nwe don't need \"respectively\" here?\n\n+ if (jt->on_error &&\n+ jt->on_error->btype != JSON_BEHAVIOR_ERROR &&\n+ jt->on_error->btype != JSON_BEHAVIOR_EMPTY &&\n+ jt->on_error->btype != JSON_BEHAVIOR_EMPTY_ARRAY)\n+ ereport(ERROR,\n+ errcode(ERRCODE_SYNTAX_ERROR),\n+ errmsg(\"invalid ON ERROR behavior\"),\n+ errdetail(\"Only EMPTY or ERROR is allowed for ON ERROR in JSON_TABLE().\"),\n+ parser_errposition(pstate, jt->on_error->location));\n\nerrdetail(\"Only EMPTY or ERROR is allowed for ON ERROR in JSON_TABLE().\"),\nmaybe change to something like:\n`\nerrdetail(\"Only EMPTY or ERROR is allowed for ON ERROR in the\ntop-level JSON_TABLE() \").\n`\ni guess mentioning \"top-level\" is fine.\nsince \"top-level\", we have 19 appearances in functions-json.html.\n\n\n",
"msg_date": "Wed, 3 Apr 2024 22:48:08 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "On Wed, Apr 3, 2024 at 8:39 PM Amit Langote <[email protected]> wrote:\n>\n> Attached updated patches. I have addressed your doc comments on 0001,\n> but not 0002 yet.\n>\n\nin v49, 0002.\n+\\sv jsonb_table_view1\n+CREATE OR REPLACE VIEW public.jsonb_table_view1 AS\n+ SELECT id,\n+ a1,\n+ b1,\n+ a11,\n+ a21,\n+ a22\n+ FROM JSON_TABLE(\n+ 'null'::jsonb, '$[*]' AS json_table_path_0\n+ PASSING\n+ 1 + 2 AS a,\n+ '\"foo\"'::json AS \"b c\"\n+ COLUMNS (\n+ id FOR ORDINALITY,\n+ a1 integer PATH '$.\"a1\"',\n+ b1 text PATH '$.\"b1\"',\n+ a11 text PATH '$.\"a11\"',\n+ a21 text PATH '$.\"a21\"',\n+ a22 text PATH '$.\"a22\"',\n+ NESTED PATH '$[1]' AS p1\n+ COLUMNS (\n+ id FOR ORDINALITY,\n+ a1 integer PATH '$.\"a1\"',\n+ b1 text PATH '$.\"b1\"',\n+ a11 text PATH '$.\"a11\"',\n+ a21 text PATH '$.\"a21\"',\n+ a22 text PATH '$.\"a22\"',\n+ NESTED PATH '$[*]' AS \"p1 1\"\n+ COLUMNS (\n+ id FOR ORDINALITY,\n+ a1 integer PATH '$.\"a1\"',\n+ b1 text PATH '$.\"b1\"',\n+ a11 text PATH '$.\"a11\"',\n+ a21 text PATH '$.\"a21\"',\n+ a22 text PATH '$.\"a22\"'\n+ )\n+ ),\n+ NESTED PATH '$[2]' AS p2\n+ COLUMNS (\n+ id FOR ORDINALITY,\n+ a1 integer PATH '$.\"a1\"',\n+ b1 text PATH '$.\"b1\"',\n+ a11 text PATH '$.\"a11\"',\n+ a21 text PATH '$.\"a21\"',\n+ a22 text PATH '$.\"a22\"'\n+ NESTED PATH '$[*]' AS \"p2:1\"\n+ COLUMNS (\n+ id FOR ORDINALITY,\n+ a1 integer PATH '$.\"a1\"',\n+ b1 text PATH '$.\"b1\"',\n+ a11 text PATH '$.\"a11\"',\n+ a21 text PATH '$.\"a21\"',\n+ a22 text PATH '$.\"a22\"'\n+ ),\n+ NESTED PATH '$[*]' AS p22\n+ COLUMNS (\n+ id FOR ORDINALITY,\n+ a1 integer PATH '$.\"a1\"',\n+ b1 text PATH '$.\"b1\"',\n+ a11 text PATH '$.\"a11\"',\n+ a21 text PATH '$.\"a21\"',\n+ a22 text PATH '$.\"a22\"'\n+ )\n+ )\n+ )\n+ )\n\nexecute this view definition (not the \"create view\") will have syntax error.\nThat means the changes in v49,0002 ruleutils.c are wrong.\nalso \\sv the output is quite long, not easy to validate it.\n\nwe need a way to validate that the view definition is equivalent to\n\"select * from view\".\nso I added a view validate function to it.\n\nwe can put it in v49, 0001.\nsince json data type don't equality operator,\nso I did some minor change to make the view validate function works with\njsonb_table_view2\njsonb_table_view3\njsonb_table_view4\njsonb_table_view5\njsonb_table_view6",
"msg_date": "Thu, 4 Apr 2024 14:41:48 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "On Thu, Apr 4, 2024 at 2:41 PM jian he <[email protected]> wrote:\n>\n> On Wed, Apr 3, 2024 at 8:39 PM Amit Langote <[email protected]> wrote:\n> >\n> > Attached updated patches. I have addressed your doc comments on 0001,\n> > but not 0002 yet.\n> >\n>\nabout v49, 0002.\n\n--tests setup.\ndrop table if exists s cascade;\ncreate table s(js jsonb);\ninsert into s values\n('{\"a\":{\"za\":[{\"z1\": [11,2222]},{\"z21\": [22, 234,2345]},{\"z22\": [32,\n204,145]}]},\"c\": 3}'),\n('{\"a\":{\"za\":[{\"z1\": [21,4222]},{\"z21\": [32, 134,1345]}]},\"c\": 10}');\n\nafter playing around, I found, the non-nested column will be sorted first,\nand the nested column will be ordered as is.\nthe below query, column \"xx1\" will be the first column, \"xx\" will be\nthe second column.\n\nSELECT sub.* FROM s,(values(23)) x(x),generate_series(13, 13) y,\nJSON_TABLE(js, '$' as c1 PASSING x AS x, y AS y COLUMNS(\nNESTED PATH '$.a.za[2]' as n3 columns (NESTED PATH '$.z22[*]' as z22\nCOLUMNS (c int path '$')),\nNESTED PATH '$.a.za[1]' as n4 columns (d int[] PATH '$.z21'),\nNESTED PATH '$.a.za[0]' as n1 columns (NESTED PATH '$.z1[*]' as z1\nCOLUMNS (a int path '$')),\nxx1 int path '$.c',\nNESTED PATH '$.a.za[1]' as n2 columns (NESTED PATH '$.z21[*]' as z21\nCOLUMNS (b int path '$')),\nxx int path '$.c'\n))sub;\nmaybe this behavior is fine. but there is no explanation?\n--------------------------------------------------------------------------------\n--- a/src/tools/pgindent/typedefs.list\n+++ b/src/tools/pgindent/typedefs.list\n@@ -1327,6 +1327,7 @@ JsonPathMutableContext\n JsonPathParseItem\n JsonPathParseResult\n JsonPathPredicateCallback\n+JsonPathSpec\nthis change is no need.\n\n--------------------------------------------------------------------------------\n+ if (scan->child)\n+ get_json_table_nested_columns(tf, scan->child, context, showimplicit,\n+ scan->colMax >= scan->colMin);\nexcept parse_jsontable.c, we only use colMin, colMax in get_json_table_columns.\naslo in parse_jsontable.c, we do it via:\n\n+ /* Start of column range */\n+ colMin = list_length(tf->colvalexprs);\n....\n+ /* End of column range */\n+ colMax = list_length(tf->colvalexprs) - 1;\n\nmaybe we can use (bool *) to tag whether this JsonTableColumn is nested or not\nin transformJsonTableColumns.\n\ncurrently colMin, colMax seems to make parsing back json_table (nested\npath only) not working.\n--------------------------------------------------------------------------------\nI also added some slightly complicated tests to prove that the PASSING\nclause works\nwith every level, aslo the multi level nesting clause works as intended.\n\nAs mentioned in the previous mail, parsing back nest columns\njson_table expression\nnot working as we expected.\n\nso the last view (jsonb_table_view7) I added, the view definition is WRONG!!\nthe good news is the output is what we expected, the coverage is pretty high.",
"msg_date": "Thu, 4 Apr 2024 15:50:02 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "On Thu, Apr 4, 2024 at 3:50 PM jian he <[email protected]> wrote:\n>\n> On Thu, Apr 4, 2024 at 2:41 PM jian he <[email protected]> wrote:\n> >\n> > On Wed, Apr 3, 2024 at 8:39 PM Amit Langote <[email protected]> wrote:\n> > >\n> > > Attached updated patches. I have addressed your doc comments on 0001,\n> > > but not 0002 yet.\n\nhi\nsome doc issue about v49, 0002.\n+ Each\n+ <literal>NESTED PATH</literal> clause can be used to generate one or more\n+ columns using the data from a nested level of the row pattern, which can be\n+ specified using a <literal>COLUMNS</literal> clause.\n maybe change to\n\n+ Each\n+ <literal>NESTED PATH</literal> clause can be used to generate one or more\n+ columns using the data from an upper nested level of the row\npattern, which can be\n+ specified using a <literal>COLUMNS</literal> clause\n\n\n+ Child\n+ columns may themselves contain a <literal>NESTED PATH</literal>\n+ specifification thus allowing to extract data located at arbitrary nesting\n+ levels.\nmaybe change to\n+ Child\n+ columns themselves may contain a <literal>NESTED PATH</literal>\n+ specification thus allowing to extract data located at any arbitrary nesting\n+ level.\n\n\n+</screen>\n+ </para>\n+ <para>\n+ The following is a modified version of the above query to show the usage\n+ of <literal>NESTED PATH</literal> for populating title and director\n+ columns, illustrating how they are joined to the parent columns id and\n+ kind:\n+<screen>\n+SELECT jt.* FROM\n+ my_films,\n+ JSON_TABLE ( js, '$.favorites[*] ? (@.films[*].director == $filter)'\n+ PASSING 'Alfred Hitchcock' AS filter\n+ COLUMNS (\n+ id FOR ORDINALITY,\n+ kind text PATH '$.kind',\n+ NESTED PATH '$.films[*]' COLUMNS (\n+ title text FORMAT JSON PATH '$.title' OMIT QUOTES,\n+ director text PATH '$.director' KEEP QUOTES))) AS jt;\n+ id | kind | title | director\n+----+----------+---------+--------------------\n+ 1 | horror | Psycho | \"Alfred Hitchcock\"\n+ 2 | thriller | Vertigo | \"Alfred Hitchcock\"\n+(2 rows)\n+</screen>\n+ </para>\n+ <para>\n+ The following is the same query but without the filter in the root\n+ path:\n+<screen>\n+SELECT jt.* FROM\n+ my_films,\n+ JSON_TABLE ( js, '$.favorites[*]'\n+ COLUMNS (\n+ id FOR ORDINALITY,\n+ kind text PATH '$.kind',\n+ NESTED PATH '$.films[*]' COLUMNS (\n+ title text FORMAT JSON PATH '$.title' OMIT QUOTES,\n+ director text PATH '$.director' KEEP QUOTES))) AS jt;\n+ id | kind | title | director\n+----+----------+-----------------+--------------------\n+ 1 | comedy | Bananas | \"Woody Allen\"\n+ 1 | comedy | The Dinner Game | \"Francis Veber\"\n+ 2 | horror | Psycho | \"Alfred Hitchcock\"\n+ 3 | thriller | Vertigo | \"Alfred Hitchcock\"\n+ 4 | drama | Yojimbo | \"Akira Kurosawa\"\n+(5 rows)\n </screen>\n\njust found out that the query and the query's output condensed together.\nin https://www.postgresql.org/docs/current/tutorial-window.html\nthe query we use <programlisting>, the output we use <screen>.\nmaybe we can do it the same way,\nor we could just have one or two empty new lines separate them.\nwe have the similar problem in v49, 0001.\n\n\n",
"msg_date": "Thu, 4 Apr 2024 16:24:01 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "On Wed, Apr 3, 2024 at 11:48 PM jian he <[email protected]> wrote:\n> hi.\n> + <para>\n> + <function>json_table</function> is an SQL/JSON function which\n> + queries <acronym>JSON</acronym> data\n> + and presents the results as a relational view, which can be accessed as a\n> + regular SQL table. You can only use\n> <function>json_table</function> inside the\n> + <literal>FROM</literal> clause of a <literal>SELECT</literal>,\n> + <literal>UPDATE</literal>, <literal>DELETE</literal>, or\n> <literal>MERGE</literal>\n> + statement.\n> + </para>\n>\n> the only issue is that <literal>MERGE</literal> Synopsis don't have\n> <literal>FROM</literal> clause.\n> other than that, it's quite correct.\n> see following tests demo:\n>\n> drop table ss;\n> create table ss(a int);\n> insert into ss select 1;\n> delete from ss using JSON_TABLE(jsonb '1', '$' COLUMNS (a int PATH '$'\n> ) ERROR ON ERROR) jt where jt.a = 1;\n> insert into ss select 2;\n> update ss set a = 1 from JSON_TABLE(jsonb '2', '$' COLUMNS (a int PATH\n> '$')) jt where jt.a = 2;\n> DROP TABLE IF EXISTS target;\n> CREATE TABLE target (tid integer, balance integer) WITH\n> (autovacuum_enabled=off);\n> INSERT INTO target VALUES (1, 10),(2, 20),(3, 30);\n> MERGE INTO target USING JSON_TABLE(jsonb '2', '$' COLUMNS (a int PATH\n> '$' ) ERROR ON ERROR) source(sid)\n> ON target.tid = source.sid\n> WHEN MATCHED THEN UPDATE SET balance = 0\n> returning *;\n> --------------------------------------------------------------------------------------------------\n>\n> + <para>\n> + To split the row pattern into columns, <function>json_table</function>\n> + provides the <literal>COLUMNS</literal> clause that defines the\n> + schema of the created view. For each column, a separate path expression\n> + can be specified to be evaluated against the row pattern to get a\n> + SQL/JSON value that will become the value for the specified column in\n> + a given output row.\n> + </para>\n> should be \"an SQL/JSON\".\n>\n> + <para>\n> + Inserts a SQL/JSON value obtained by applying\n> + <replaceable>path_expression</replaceable> against the row pattern into\n> + the view's output row after coercing it to specified\n> + <replaceable>type</replaceable>.\n> + </para>\n> should be \"an SQL/JSON\".\n>\n> \"coercing it to specified <replaceable>type</replaceable>\"\n> should be\n> \"coercing it to the specified <replaceable>type</replaceable>\"?\n> ---------------------------------------------------------------------------------------------------------------\n> + <para>\n> + The value corresponds to whether evaluating the <literal>PATH</literal>\n> + expression yields any SQL/JSON values.\n> + </para>\n> maybe we can change to\n> + <para>\n> + The value corresponds to whether applying the\n> <replaceable>path_expression</replaceable>\n> + expression yields any SQL/JSON values.\n> + </para>\n> so it looks more consistent with the preceding paragraph.\n>\n> + <para>\n> + Optionally, <literal>ON ERROR</literal> can be used to specify whether\n> + to throw an error or return the specified value to handle structural\n> + errors, respectively. 
The default is to return a boolean value\n> + <literal>FALSE</literal>.\n> + </para>\n> we don't need \"respectively\" here?\n>\n> + if (jt->on_error &&\n> + jt->on_error->btype != JSON_BEHAVIOR_ERROR &&\n> + jt->on_error->btype != JSON_BEHAVIOR_EMPTY &&\n> + jt->on_error->btype != JSON_BEHAVIOR_EMPTY_ARRAY)\n> + ereport(ERROR,\n> + errcode(ERRCODE_SYNTAX_ERROR),\n> + errmsg(\"invalid ON ERROR behavior\"),\n> + errdetail(\"Only EMPTY or ERROR is allowed for ON ERROR in JSON_TABLE().\"),\n> + parser_errposition(pstate, jt->on_error->location));\n>\n> errdetail(\"Only EMPTY or ERROR is allowed for ON ERROR in JSON_TABLE().\"),\n> maybe change to something like:\n> `\n> errdetail(\"Only EMPTY or ERROR is allowed for ON ERROR in the\n> top-level JSON_TABLE() \").\n> `\n> i guess mentioning \"top-level\" is fine.\n> since \"top-level\", we have 19 appearances in functions-json.html.\n\nThanks for checking.\n\nPushed after fixing these and a few other issues. I didn't include\nthe testing function you proposed in your other email. It sounds\nuseful for testing locally but will need some work before we can\ninclude it in the tree.\n\nI'll post the rebased 0002 tomorrow after addressing your comments.\n\n\n--\nThanks, Amit Langote\n\n\n",
"msg_date": "Thu, 4 Apr 2024 21:02:48 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "Hello Amit,\n\n04.04.2024 15:02, Amit Langote wrote:\n> Pushed after fixing these and a few other issues. I didn't include\n> the testing function you proposed in your other email. It sounds\n> useful for testing locally but will need some work before we can\n> include it in the tree.\n>\n> I'll post the rebased 0002 tomorrow after addressing your comments.\n\nPlease look at an assertion failure:\nTRAP: failed Assert(\"count <= tupdesc->natts\"), File: \"parse_relation.c\", Line: 3048, PID: 1325146\n\ntriggered by the following query:\nSELECT * FROM JSON_TABLE('0', '$' COLUMNS (js int PATH '$')),\n COALESCE(row(1)) AS (a int, b int);\n\nWithout JSON_TABLE() I get:\nERROR: function return row and query-specified return row do not match\nDETAIL: Returned row contains 1 attribute, but query expects 2.\n\nBest regards,\nAlexander\n\n\n",
"msg_date": "Fri, 5 Apr 2024 09:00:00 +0300",
"msg_from": "Alexander Lakhin <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "On Fri, Apr 05, 2024 at 09:00:00AM +0300, Alexander Lakhin wrote:\n> Please look at an assertion failure:\n> TRAP: failed Assert(\"count <= tupdesc->natts\"), File: \"parse_relation.c\", Line: 3048, PID: 1325146\n> \n> triggered by the following query:\n> SELECT * FROM JSON_TABLE('0', '$' COLUMNS (js int PATH '$')),\n> COALESCE(row(1)) AS (a int, b int);\n> \n> Without JSON_TABLE() I get:\n> ERROR: function return row and query-specified return row do not match\n> DETAIL: Returned row contains 1 attribute, but query expects 2.\n\nI've added an open item on this one. We need to keep track of all\nthat.\n--\nMichael",
"msg_date": "Fri, 5 Apr 2024 15:07:36 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "Hi Alexander,\n\nOn Fri, Apr 5, 2024 at 3:00 PM Alexander Lakhin <[email protected]> wrote:\n>\n> Hello Amit,\n>\n> 04.04.2024 15:02, Amit Langote wrote:\n> > Pushed after fixing these and a few other issues. I didn't include\n> > the testing function you proposed in your other email. It sounds\n> > useful for testing locally but will need some work before we can\n> > include it in the tree.\n> >\n> > I'll post the rebased 0002 tomorrow after addressing your comments.\n>\n> Please look at an assertion failure:\n> TRAP: failed Assert(\"count <= tupdesc->natts\"), File: \"parse_relation.c\", Line: 3048, PID: 1325146\n>\n> triggered by the following query:\n> SELECT * FROM JSON_TABLE('0', '$' COLUMNS (js int PATH '$')),\n> COALESCE(row(1)) AS (a int, b int);\n>\n> Without JSON_TABLE() I get:\n> ERROR: function return row and query-specified return row do not match\n> DETAIL: Returned row contains 1 attribute, but query expects 2.\n\nThanks for the report.\n\nSeems like it might be a pre-existing issue, because I can also\nreproduce the crash with:\n\nSELECT * FROM COALESCE(row(1)) AS (a int, b int);\nserver closed the connection unexpectedly\nThis probably means the server terminated abnormally\nbefore or while processing the request.\nThe connection to the server was lost. Attempting reset: Failed.\n!>\n\nBacktrace:\n\n#0 __pthread_kill_implementation (threadid=281472845250592,\nsigno=signo@entry=6, no_tid=no_tid@entry=0) at pthread_kill.c:44\n#1 0x0000ffff806c4334 in __pthread_kill_internal (signo=6,\nthreadid=<optimized out>) at pthread_kill.c:78\n#2 0x0000ffff8067c73c in __GI_raise (sig=sig@entry=6) at\n../sysdeps/posix/raise.c:26\n#3 0x0000ffff80669034 in __GI_abort () at abort.c:79\n#4 0x0000000000ad9d4c in ExceptionalCondition (conditionName=0xcbb368\n\"!(tupdesc->natts >= colcount)\", errorType=0xcbb278 \"FailedAssertion\",\nfileName=0xcbb2c8 \"nodeFunctionscan.c\",\n lineNumber=379) at assert.c:54\n#5 0x000000000073edec in ExecInitFunctionScan (node=0x293d4ed0,\nestate=0x293d51b8, eflags=16) at nodeFunctionscan.c:379\n#6 0x0000000000724bc4 in ExecInitNode (node=0x293d4ed0,\nestate=0x293d51b8, eflags=16) at execProcnode.c:248\n#7 0x000000000071b1cc in InitPlan (queryDesc=0x292f5d78, eflags=16)\nat execMain.c:1006\n#8 0x0000000000719f6c in standard_ExecutorStart\n(queryDesc=0x292f5d78, eflags=16) at execMain.c:252\n#9 0x0000000000719cac in ExecutorStart (queryDesc=0x292f5d78,\neflags=0) at execMain.c:134\n#10 0x0000000000945520 in PortalStart (portal=0x29399458, params=0x0,\neflags=0, snapshot=0x0) at pquery.c:527\n#11 0x000000000093ee50 in exec_simple_query (query_string=0x29332d38\n\"SELECT * FROM COALESCE(row(1)) AS (a int, b int);\") at\npostgres.c:1175\n#12 0x0000000000943cb8 in PostgresMain (argc=1, argv=0x2935d610,\ndbname=0x2935d450 \"postgres\", username=0x2935d430 \"amit\") at\npostgres.c:4297\n#13 0x000000000087e978 in BackendRun (port=0x29356c00) at postmaster.c:4517\n#14 0x000000000087e0bc in BackendStartup (port=0x29356c00) at postmaster.c:4200\n#15 0x0000000000879638 in ServerLoop () at postmaster.c:1725\n#16 0x0000000000878eb4 in PostmasterMain (argc=3, argv=0x292eeac0) at\npostmaster.c:1398\n#17 0x0000000000791db8 in main (argc=3, argv=0x292eeac0) at main.c:228\n\nBacktrace looks a bit different with a query similar to yours:\n\nSELECT * FROM generate_series(1, 1), COALESCE(row(1)) AS (a int, b int);\nserver closed the connection unexpectedly\nThis probably means the server terminated abnormally\nbefore or while processing the request.\nThe connection 
to the server was lost. Attempting reset: Failed.\n!>\n\n#0 __pthread_kill_implementation (threadid=281472845250592,\nsigno=signo@entry=6, no_tid=no_tid@entry=0) at pthread_kill.c:44\n#1 0x0000ffff806c4334 in __pthread_kill_internal (signo=6,\nthreadid=<optimized out>) at pthread_kill.c:78\n#2 0x0000ffff8067c73c in __GI_raise (sig=sig@entry=6) at\n../sysdeps/posix/raise.c:26\n#3 0x0000ffff80669034 in __GI_abort () at abort.c:79\n#4 0x0000000000ad9d4c in ExceptionalCondition (conditionName=0xc903b0\n\"!(count <= tupdesc->natts)\", errorType=0xc8f8c8 \"FailedAssertion\",\nfileName=0xc8f918 \"parse_relation.c\",\n lineNumber=2649) at assert.c:54\n#5 0x0000000000649664 in expandTupleDesc (tupdesc=0x293da188,\neref=0x293d7318, count=2, offset=0, rtindex=2, sublevels_up=0,\nlocation=-1, include_dropped=true, colnames=0x0,\n colvars=0xffffc39253c8) at parse_relation.c:2649\n#6 0x0000000000648d08 in expandRTE (rte=0x293d7390, rtindex=2,\nsublevels_up=0, location=-1, include_dropped=true, colnames=0x0,\ncolvars=0xffffc39253c8) at parse_relation.c:2361\n#7 0x0000000000849bd0 in build_physical_tlist (root=0x293d5318,\nrel=0x293d88e8) at plancat.c:1681\n#8 0x0000000000806ad0 in create_scan_plan (root=0x293d5318,\nbest_path=0x293cd888, flags=0) at createplan.c:605\n#9 0x000000000080666c in create_plan_recurse (root=0x293d5318,\nbest_path=0x293cd888, flags=0) at createplan.c:389\n#10 0x000000000080c4e8 in create_nestloop_plan (root=0x293d5318,\nbest_path=0x293d96f0) at createplan.c:4056\n#11 0x0000000000807464 in create_join_plan (root=0x293d5318,\nbest_path=0x293d96f0) at createplan.c:1037\n#12 0x0000000000806680 in create_plan_recurse (root=0x293d5318,\nbest_path=0x293d96f0, flags=1) at createplan.c:394\n#13 0x000000000080658c in create_plan (root=0x293d5318,\nbest_path=0x293d96f0) at createplan.c:326\n#14 0x0000000000816534 in standard_planner (parse=0x293d3728,\ncursorOptions=256, boundParams=0x0) at planner.c:413\n#15 0x00000000008162b4 in planner (parse=0x293d3728,\ncursorOptions=256, boundParams=0x0) at planner.c:275\n#16 0x000000000093e984 in pg_plan_query (querytree=0x293d3728,\ncursorOptions=256, boundParams=0x0) at postgres.c:877\n#17 0x000000000093eb04 in pg_plan_queries (querytrees=0x293d8018,\ncursorOptions=256, boundParams=0x0) at postgres.c:967\n#18 0x000000000093edc4 in exec_simple_query (query_string=0x29332d38\n\"SELECT * FROM generate_series(1, 1), COALESCE(row(1)) AS (a int, b\nint);\") at postgres.c:1142\n#19 0x0000000000943cb8 in PostgresMain (argc=1, argv=0x2935d4f8,\ndbname=0x2935d338 \"postgres\", username=0x2935d318 \"amit\") at\npostgres.c:4297\n#20 0x000000000087e978 in BackendRun (port=0x29356dd0) at postmaster.c:4517\n#21 0x000000000087e0bc in BackendStartup (port=0x29356dd0) at postmaster.c:4200\n#22 0x0000000000879638 in ServerLoop () at postmaster.c:1725\n#23 0x0000000000878eb4 in PostmasterMain (argc=3, argv=0x292eeac0) at\npostmaster.c:1398\n#24 0x0000000000791db8 in main (argc=3, argv=0x292eeac0) at main.c:228\n\nI suspect the underlying issue is the same, though I haven't figured\nout what it is, except a guess that addRangeTableEntryForFunction()\nmight be missing something to handle this sanely.\n\nReproducible down to v12.\n\n-- \nThanks, Amit Langote\n\n\n",
"msg_date": "Fri, 5 Apr 2024 16:09:29 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "05.04.2024 10:09, Amit Langote wrote:\n> Seems like it might be a pre-existing issue, because I can also\n> reproduce the crash with:\n>\n> SELECT * FROM COALESCE(row(1)) AS (a int, b int);\n> server closed the connection unexpectedly\n> This probably means the server terminated abnormally\n> before or while processing the request.\n> The connection to the server was lost. Attempting reset: Failed.\n> !>\n>\n> Backtrace:\n>\n> #0 __pthread_kill_implementation (threadid=281472845250592,\n> signo=signo@entry=6, no_tid=no_tid@entry=0) at pthread_kill.c:44\n> #1 0x0000ffff806c4334 in __pthread_kill_internal (signo=6,\n> threadid=<optimized out>) at pthread_kill.c:78\n> #2 0x0000ffff8067c73c in __GI_raise (sig=sig@entry=6) at\n> ../sysdeps/posix/raise.c:26\n> #3 0x0000ffff80669034 in __GI_abort () at abort.c:79\n> #4 0x0000000000ad9d4c in ExceptionalCondition (conditionName=0xcbb368\n> \"!(tupdesc->natts >= colcount)\", errorType=0xcbb278 \"FailedAssertion\",\n> fileName=0xcbb2c8 \"nodeFunctionscan.c\",\n> lineNumber=379) at assert.c:54\n\nThat's strange, because I get the error (on master, 6f132ed69).\nWith backtrace_functions = 'tupledesc_match', I see\n2024-04-05 10:48:27.827 MSK client backend[2898632] regress ERROR: function return row and query-specified return row do \nnot match\n2024-04-05 10:48:27.827 MSK client backend[2898632] regress DETAIL: Returned row contains 1 attribute, but query expects 2.\n2024-04-05 10:48:27.827 MSK client backend[2898632] regress BACKTRACE:\ntupledesc_match at execSRF.c:948:3\nExecMakeTableFunctionResult at execSRF.c:427:13\nFunctionNext at nodeFunctionscan.c:94:5\nExecScanFetch at execScan.c:131:10\nExecScan at execScan.c:180:10\nExecFunctionScan at nodeFunctionscan.c:272:1\nExecProcNodeFirst at execProcnode.c:465:1\nExecProcNode at executor.h:274:9\n (inlined by) ExecutePlan at execMain.c:1646:10\nstandard_ExecutorRun at execMain.c:363:3\nExecutorRun at execMain.c:305:1\nPortalRunSelect at pquery.c:926:26\nPortalRun at pquery.c:775:8\nexec_simple_query at postgres.c:1282:3\nPostgresMain at postgres.c:4684:27\nBackendMain at backend_startup.c:57:2\npgarch_die at pgarch.c:847:1\nBackendStartup at postmaster.c:3593:8\nServerLoop at postmaster.c:1674:6\nmain at main.c:184:3\n /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0x80) [0x7f37127f0e40]\n2024-04-05 10:48:27.827 MSK client backend[2898632] regress STATEMENT: SELECT * FROM COALESCE(row(1)) AS (a int, b int);\n\nThat's why I had attributed the failure to JSON_TABLE().\n\nThough SELECT * FROM generate_series(1, 1), COALESCE(row(1)) AS (a int, b int);\nreally triggers the assert too.\nSorry for the noise...\n\nBest regards,\nAlexander\n\n\n",
"msg_date": "Fri, 5 Apr 2024 11:00:00 +0300",
"msg_from": "Alexander Lakhin <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "On Fri, Apr 5, 2024 at 5:00 PM Alexander Lakhin <[email protected]> wrote:\n> 05.04.2024 10:09, Amit Langote wrote:\n> > Seems like it might be a pre-existing issue, because I can also\n> > reproduce the crash with:\n>\n> That's strange, because I get the error (on master, 6f132ed69).\n> With backtrace_functions = 'tupledesc_match', I see\n> 2024-04-05 10:48:27.827 MSK client backend[2898632] regress ERROR: function return row and query-specified return row do\n> not match\n> 2024-04-05 10:48:27.827 MSK client backend[2898632] regress DETAIL: Returned row contains 1 attribute, but query expects 2.\n> 2024-04-05 10:48:27.827 MSK client backend[2898632] regress BACKTRACE:\n> tupledesc_match at execSRF.c:948:3\n> ExecMakeTableFunctionResult at execSRF.c:427:13\n> FunctionNext at nodeFunctionscan.c:94:5\n> ExecScanFetch at execScan.c:131:10\n> ExecScan at execScan.c:180:10\n> ExecFunctionScan at nodeFunctionscan.c:272:1\n> ExecProcNodeFirst at execProcnode.c:465:1\n> ExecProcNode at executor.h:274:9\n> (inlined by) ExecutePlan at execMain.c:1646:10\n> standard_ExecutorRun at execMain.c:363:3\n> ExecutorRun at execMain.c:305:1\n> PortalRunSelect at pquery.c:926:26\n> PortalRun at pquery.c:775:8\n> exec_simple_query at postgres.c:1282:3\n> PostgresMain at postgres.c:4684:27\n> BackendMain at backend_startup.c:57:2\n> pgarch_die at pgarch.c:847:1\n> BackendStartup at postmaster.c:3593:8\n> ServerLoop at postmaster.c:1674:6\n> main at main.c:184:3\n> /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0x80) [0x7f37127f0e40]\n> 2024-04-05 10:48:27.827 MSK client backend[2898632] regress STATEMENT: SELECT * FROM COALESCE(row(1)) AS (a int, b int);\n>\n> That's why I had attributed the failure to JSON_TABLE().\n>\n> Though SELECT * FROM generate_series(1, 1), COALESCE(row(1)) AS (a int, b int);\n> really triggers the assert too.\n> Sorry for the noise...\n\nNo worries. Let's start another thread so that this gets more attention.\n\n-- \nThanks, Amit Langote\n\n\n",
"msg_date": "Fri, 5 Apr 2024 17:09:53 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "On Thu, Apr 4, 2024 at 9:02 PM Amit Langote <[email protected]> wrote:\n> I'll post the rebased 0002 tomorrow after addressing your comments.\n\nHere's one. Main changes:\n\n* Fixed a bug in get_table_json_columns() which caused nested columns\nto be deparsed incorrectly, something Jian reported upthread.\n* Simplified the algorithm in JsonTablePlanNextRow()\n\nI'll post another revision or two maybe tomorrow, but posting what I\nhave now in case Jian wants to do more testing.\n\n-- \nThanks, Amit Langote",
"msg_date": "Fri, 5 Apr 2024 21:34:58 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "On Fri, Apr 5, 2024 at 8:35 PM Amit Langote <[email protected]> wrote:\n> Here's one. Main changes:\n>\n> * Fixed a bug in get_table_json_columns() which caused nested columns\n> to be deparsed incorrectly, something Jian reported upthread.\n> * Simplified the algorithm in JsonTablePlanNextRow()\n>\n> I'll post another revision or two maybe tomorrow, but posting what I\n> have now in case Jian wants to do more testing.\n\ni am using the upthread view validation function.\nby comparing `execute the view definition` and `select * from the_view`,\nI did find 2 issues.\n\n* problem in transformJsonBehavior, JSON_BEHAVIOR_DEFAULT branch.\nI think we can fix this problem later, since sql/json query function\nalready committed?\n\nCREATE DOMAIN jsonb_test_domain AS text CHECK (value <> 'foo');\nnormally, we do:\nSELECT JSON_VALUE(jsonb '{\"d1\": \"H\"}', '$.a2' returning\njsonb_test_domain DEFAULT 'foo' ON ERROR);\n\nbut parsing back view def, we do:\nSELECT JSON_VALUE(jsonb '{\"d1\": \"H\"}', '$.a2' returning\njsonb_test_domain DEFAULT 'foo'::text::jsonb_test_domain ON ERROR);\n\nthen I found the following two queries should not be error out.\nSELECT JSON_VALUE(jsonb '{\"d1\": \"H\"}', '$.a2' returning\njsonb_test_domain DEFAULT 'foo1'::text::jsonb_test_domain ON ERROR);\nSELECT JSON_VALUE(jsonb '{\"d1\": \"H\"}', '$.a2' returning\njsonb_test_domain DEFAULT 'foo1'::jsonb_test_domain ON ERROR);\n--------------------------------------------------------------------------------------------------------------------\n\n* problem with type \"char\". the view def output is not the same as\nthe select * from v1.\n\ncreate or replace view v1 as\nSELECT col FROM s,\nJSON_TABLE(jsonb '{\"d\": [\"hello\", \"hello1\"]}', '$' as c1\nCOLUMNS(col \"char\" path '$.d' without wrapper keep quotes))sub;\n\n\\sv v1\nCREATE OR REPLACE VIEW public.v1 AS\n SELECT sub.col\n FROM s,\n JSON_TABLE(\n '{\"d\": [\"hello\", \"hello1\"]}'::jsonb, '$' AS c1\n COLUMNS (\n col \"char\" PATH '$.\"d\"'\n )\n ) sub\none under the hood called JSON_QUERY_OP, another called JSON_VALUE_OP.\n\nI will do extensive checking for other types later, so far, other than\nthese two issues,\nget_json_table_columns is pretty solid, I've tried nested columns with\nnested columns, it just works.\n\n\n",
"msg_date": "Sat, 6 Apr 2024 11:31:22 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "Hi Michael,\n\nOn Fri, Apr 5, 2024 at 3:07 PM Michael Paquier <[email protected]> wrote:\n> On Fri, Apr 05, 2024 at 09:00:00AM +0300, Alexander Lakhin wrote:\n> > Please look at an assertion failure:\n> > TRAP: failed Assert(\"count <= tupdesc->natts\"), File: \"parse_relation.c\", Line: 3048, PID: 1325146\n> >\n> > triggered by the following query:\n> > SELECT * FROM JSON_TABLE('0', '$' COLUMNS (js int PATH '$')),\n> > COALESCE(row(1)) AS (a int, b int);\n> >\n> > Without JSON_TABLE() I get:\n> > ERROR: function return row and query-specified return row do not match\n> > DETAIL: Returned row contains 1 attribute, but query expects 2.\n>\n> I've added an open item on this one. We need to keep track of all\n> that.\n\nWe figured out that this is an existing bug unrelated to JSON_TABLE(),\nwhich Alexander reported to -bugs:\nhttps://postgr.es/m/[email protected]\n\nI have moved the item to Older Bugs:\nhttps://wiki.postgresql.org/wiki/PostgreSQL_17_Open_Items#Live_issues\n\n-- \nThanks, Amit Langote\n\n\n",
"msg_date": "Sat, 6 Apr 2024 14:00:25 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "On Sat, Apr 6, 2024 at 12:31 PM jian he <[email protected]> wrote:\n> On Fri, Apr 5, 2024 at 8:35 PM Amit Langote <[email protected]> wrote:\n> > Here's one. Main changes:\n> >\n> > * Fixed a bug in get_table_json_columns() which caused nested columns\n> > to be deparsed incorrectly, something Jian reported upthread.\n> > * Simplified the algorithm in JsonTablePlanNextRow()\n> >\n> > I'll post another revision or two maybe tomorrow, but posting what I\n> > have now in case Jian wants to do more testing.\n>\n> i am using the upthread view validation function.\n> by comparing `execute the view definition` and `select * from the_view`,\n> I did find 2 issues.\n>\n> * problem in transformJsonBehavior, JSON_BEHAVIOR_DEFAULT branch.\n> I think we can fix this problem later, since sql/json query function\n> already committed?\n>\n> CREATE DOMAIN jsonb_test_domain AS text CHECK (value <> 'foo');\n> normally, we do:\n> SELECT JSON_VALUE(jsonb '{\"d1\": \"H\"}', '$.a2' returning\n> jsonb_test_domain DEFAULT 'foo' ON ERROR);\n>\n> but parsing back view def, we do:\n> SELECT JSON_VALUE(jsonb '{\"d1\": \"H\"}', '$.a2' returning\n> jsonb_test_domain DEFAULT 'foo'::text::jsonb_test_domain ON ERROR);\n>\n> then I found the following two queries should not be error out.\n> SELECT JSON_VALUE(jsonb '{\"d1\": \"H\"}', '$.a2' returning\n> jsonb_test_domain DEFAULT 'foo1'::text::jsonb_test_domain ON ERROR);\n> SELECT JSON_VALUE(jsonb '{\"d1\": \"H\"}', '$.a2' returning\n> jsonb_test_domain DEFAULT 'foo1'::jsonb_test_domain ON ERROR);\n\nYeah, added an open item for this:\nhttps://wiki.postgresql.org/wiki/PostgreSQL_17_Open_Items#Open_Issues\n\n> --------------------------------------------------------------------------------------------------------------------\n>\n> * problem with type \"char\". the view def output is not the same as\n> the select * from v1.\n>\n> create or replace view v1 as\n> SELECT col FROM s,\n> JSON_TABLE(jsonb '{\"d\": [\"hello\", \"hello1\"]}', '$' as c1\n> COLUMNS(col \"char\" path '$.d' without wrapper keep quotes))sub;\n>\n> \\sv v1\n> CREATE OR REPLACE VIEW public.v1 AS\n> SELECT sub.col\n> FROM s,\n> JSON_TABLE(\n> '{\"d\": [\"hello\", \"hello1\"]}'::jsonb, '$' AS c1\n> COLUMNS (\n> col \"char\" PATH '$.\"d\"'\n> )\n> ) sub\n> one under the hood called JSON_QUERY_OP, another called JSON_VALUE_OP.\n\nHmm, I don't see a problem as long as both are equivalent or produce\nthe same result. Though, perhaps we could make\nget_json_expr_options() also deparse JSW_NONE explicitly into \"WITHOUT\nWRAPPER\" instead of a blank. But that's existing code, so will take\ncare of it as part of the above open item.\n\n> I will do extensive checking for other types later, so far, other than\n> these two issues,\n> get_json_table_columns is pretty solid, I've tried nested columns with\n> nested columns, it just works.\n\nThanks for checking.\n\n-- \nThanks, Amit Langote\n\n\n",
"msg_date": "Sat, 6 Apr 2024 15:03:13 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "On Sat, Apr 6, 2024 at 2:03 PM Amit Langote <[email protected]> wrote:\n>\n> >\n> > * problem with type \"char\". the view def output is not the same as\n> > the select * from v1.\n> >\n> > create or replace view v1 as\n> > SELECT col FROM s,\n> > JSON_TABLE(jsonb '{\"d\": [\"hello\", \"hello1\"]}', '$' as c1\n> > COLUMNS(col \"char\" path '$.d' without wrapper keep quotes))sub;\n> >\n> > \\sv v1\n> > CREATE OR REPLACE VIEW public.v1 AS\n> > SELECT sub.col\n> > FROM s,\n> > JSON_TABLE(\n> > '{\"d\": [\"hello\", \"hello1\"]}'::jsonb, '$' AS c1\n> > COLUMNS (\n> > col \"char\" PATH '$.\"d\"'\n> > )\n> > ) sub\n> > one under the hood called JSON_QUERY_OP, another called JSON_VALUE_OP.\n>\n> Hmm, I don't see a problem as long as both are equivalent or produce\n> the same result. Though, perhaps we could make\n> get_json_expr_options() also deparse JSW_NONE explicitly into \"WITHOUT\n> WRAPPER\" instead of a blank. But that's existing code, so will take\n> care of it as part of the above open item.\n>\n> > I will do extensive checking for other types later, so far, other than\n> > these two issues,\n> > get_json_table_columns is pretty solid, I've tried nested columns with\n> > nested columns, it just works.\n>\n> Thanks for checking.\n>\nAfter applying v50, this type also has some issues.\nCREATE OR REPLACE VIEW t1 as\nSELECT sub.* FROM JSON_TABLE(jsonb '{\"d\": [\"hello\", \"hello1\"]}',\n'$' AS c1 COLUMNS (\n\"tsvector0\" tsvector path '$.d' without wrapper omit quotes,\n\"tsvector1\" tsvector path '$.d' without wrapper keep quotes))sub;\ntable t1;\n\nreturn\n tsvector0 | tsvector1\n-------------------------+-------------------------\n '\"hello1\"]' '[\"hello\",' | '\"hello1\"]' '[\"hello\",'\n(1 row)\n\nsrc5=# \\sv t1\nCREATE OR REPLACE VIEW public.t1 AS\n SELECT tsvector0,\n tsvector1\n FROM JSON_TABLE(\n '{\"d\": [\"hello\", \"hello1\"]}'::jsonb, '$' AS c1\n COLUMNS (\n tsvector0 tsvector PATH '$.\"d\"' OMIT QUOTES,\n tsvector1 tsvector PATH '$.\"d\"'\n )\n ) sub\n\nbut\n\n SELECT tsvector0,\n tsvector1\n FROM JSON_TABLE(\n '{\"d\": [\"hello\", \"hello1\"]}'::jsonb, '$' AS c1\n COLUMNS (\n tsvector0 tsvector PATH '$.\"d\"' OMIT QUOTES,\n tsvector1 tsvector PATH '$.\"d\"'\n )\n ) sub\n\nonly return\n tsvector0 | tsvector1\n-------------------------+-----------\n '\"hello1\"]' '[\"hello\",' |\n\n\n",
"msg_date": "Sat, 6 Apr 2024 14:55:32 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "On Fri, Apr 5, 2024 at 8:35 PM Amit Langote <[email protected]> wrote:\n>\n> On Thu, Apr 4, 2024 at 9:02 PM Amit Langote <[email protected]> wrote:\n> > I'll post the rebased 0002 tomorrow after addressing your comments.\n>\n> Here's one. Main changes:\n>\n> * Fixed a bug in get_table_json_columns() which caused nested columns\n> to be deparsed incorrectly, something Jian reported upthread.\n> * Simplified the algorithm in JsonTablePlanNextRow()\n>\n> I'll post another revision or two maybe tomorrow, but posting what I\n> have now in case Jian wants to do more testing.\n>\n\n+ else\n+ {\n+ /*\n+ * Parent and thus the plan has no more rows.\n+ */\n+ return false;\n+ }\nin JsonTablePlanNextRow, the above comment seems strange to me.\n\n+ /*\n+ * Re-evaluate a nested plan's row pattern using the new parent row\n+ * pattern, if present.\n+ */\n+ Assert(parent != NULL);\n+ if (!parent->current.isnull)\n+ JsonTableResetRowPattern(planstate, parent->current.value);\nIs this assertion useful?\nif parent is null, then parent->current.isnull will cause segmentation fault.\n\nI tested with 3 NESTED PATH, it works! (I didn't fully understand\nJsonTablePlanNextRow though).\nthe doc needs some polish work.\n\n\n",
"msg_date": "Sat, 6 Apr 2024 22:34:11 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "Hi,\n\nOn Sat, Apr 6, 2024 at 3:55 PM jian he <[email protected]> wrote:\n> On Sat, Apr 6, 2024 at 2:03 PM Amit Langote <[email protected]> wrote:\n> >\n> > >\n> > > * problem with type \"char\". the view def output is not the same as\n> > > the select * from v1.\n> > >\n> > > create or replace view v1 as\n> > > SELECT col FROM s,\n> > > JSON_TABLE(jsonb '{\"d\": [\"hello\", \"hello1\"]}', '$' as c1\n> > > COLUMNS(col \"char\" path '$.d' without wrapper keep quotes))sub;\n> > >\n> > > \\sv v1\n> > > CREATE OR REPLACE VIEW public.v1 AS\n> > > SELECT sub.col\n> > > FROM s,\n> > > JSON_TABLE(\n> > > '{\"d\": [\"hello\", \"hello1\"]}'::jsonb, '$' AS c1\n> > > COLUMNS (\n> > > col \"char\" PATH '$.\"d\"'\n> > > )\n> > > ) sub\n> > > one under the hood called JSON_QUERY_OP, another called JSON_VALUE_OP.\n> >\n> > Hmm, I don't see a problem as long as both are equivalent or produce\n> > the same result. Though, perhaps we could make\n> > get_json_expr_options() also deparse JSW_NONE explicitly into \"WITHOUT\n> > WRAPPER\" instead of a blank. But that's existing code, so will take\n> > care of it as part of the above open item.\n> >\n> > > I will do extensive checking for other types later, so far, other than\n> > > these two issues,\n> > > get_json_table_columns is pretty solid, I've tried nested columns with\n> > > nested columns, it just works.\n> >\n> > Thanks for checking.\n> >\n> After applying v50, this type also has some issues.\n> CREATE OR REPLACE VIEW t1 as\n> SELECT sub.* FROM JSON_TABLE(jsonb '{\"d\": [\"hello\", \"hello1\"]}',\n> '$' AS c1 COLUMNS (\n> \"tsvector0\" tsvector path '$.d' without wrapper omit quotes,\n> \"tsvector1\" tsvector path '$.d' without wrapper keep quotes))sub;\n> table t1;\n>\n> return\n> tsvector0 | tsvector1\n> -------------------------+-------------------------\n> '\"hello1\"]' '[\"hello\",' | '\"hello1\"]' '[\"hello\",'\n> (1 row)\n>\n> src5=# \\sv t1\n> CREATE OR REPLACE VIEW public.t1 AS\n> SELECT tsvector0,\n> tsvector1\n> FROM JSON_TABLE(\n> '{\"d\": [\"hello\", \"hello1\"]}'::jsonb, '$' AS c1\n> COLUMNS (\n> tsvector0 tsvector PATH '$.\"d\"' OMIT QUOTES,\n> tsvector1 tsvector PATH '$.\"d\"'\n> )\n> ) sub\n>\n> but\n>\n> SELECT tsvector0,\n> tsvector1\n> FROM JSON_TABLE(\n> '{\"d\": [\"hello\", \"hello1\"]}'::jsonb, '$' AS c1\n> COLUMNS (\n> tsvector0 tsvector PATH '$.\"d\"' OMIT QUOTES,\n> tsvector1 tsvector PATH '$.\"d\"'\n> )\n> ) sub\n>\n> only return\n> tsvector0 | tsvector1\n> -------------------------+-----------\n> '\"hello1\"]' '[\"hello\",' |\n\nYep, we *should* fix get_json_expr_options() to emit KEEP QUOTES and\nWITHOUT WRAPPER options so that transformJsonTableColumns() does the\ncorrect thing when you execute the \\sv output. Like this:\n\ndiff --git a/src/backend/utils/adt/ruleutils.c\nb/src/backend/utils/adt/ruleutils.c\nindex 283ca53cb5..5a6aabe100 100644\n--- a/src/backend/utils/adt/ruleutils.c\n+++ b/src/backend/utils/adt/ruleutils.c\n@@ -8853,9 +8853,13 @@ get_json_expr_options(JsonExpr *jsexpr,\ndeparse_context *context,\n appendStringInfo(context->buf, \" WITH CONDITIONAL WRAPPER\");\n else if (jsexpr->wrapper == JSW_UNCONDITIONAL)\n appendStringInfo(context->buf, \" WITH UNCONDITIONAL WRAPPER\");\n+ else if (jsexpr->wrapper == JSW_NONE)\n+ appendStringInfo(context->buf, \" WITHOUT WRAPPER\");\n\n if (jsexpr->omit_quotes)\n appendStringInfo(context->buf, \" OMIT QUOTES\");\n+ else\n+ appendStringInfo(context->buf, \" KEEP QUOTES\");\n }\n\nWill get that pushed tomorrow. 
Thanks for the test case.\n\n-- \nThanks, Amit Langote\n\n\n",
"msg_date": "Sun, 7 Apr 2024 00:10:36 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "hi.\nabout v50.\n+/*\n+ * JsonTableSiblingJoin -\n+ * Plan to union-join rows of nested paths of the same level\n+ */\n+typedef struct JsonTableSiblingJoin\n+{\n+ JsonTablePlan plan;\n+\n+ JsonTablePlan *lplan;\n+ JsonTablePlan *rplan;\n+} JsonTableSiblingJoin;\n\n\"Plan to union-join rows of nested paths of the same level\"\nsame level problem misleading?\nI think it means\n\"Plan to union-join rows of top level columns clause is a nested path\"\n\n+ if (IsA(planstate->plan, JsonTableSiblingJoin))\n+ {\n+ /* Fetch new from left sibling. */\n+ if (!JsonTablePlanNextRow(planstate->left))\n+ {\n+ /*\n+ * Left sibling ran out of rows, fetch new from right sibling.\n+ */\n+ if (!JsonTablePlanNextRow(planstate->right))\n+ {\n+ /* Right sibling and thus the plan has now more rows. */\n+ return false;\n+ }\n+ }\n+ }\n/* Right sibling and thus the plan has now more rows. */\nI think you mean:\n/* Right sibling ran out of rows and thus the plan has no more rows. */\n\n\nin <synopsis> section,\n+ | NESTED PATH <replaceable>json_path_specification</replaceable>\n<optional> AS <replaceable>path_name</replaceable> </optional>\n+ COLUMNS ( <replaceable>json_table_column</replaceable>\n<optional>, ...</optional> )\n\nmaybe make it into one line.\n\n | NESTED PATH <replaceable>json_path_specification</replaceable>\n<optional> AS <replaceable>path_name</replaceable> </optional> COLUMNS\n( <replaceable>json_table_column</replaceable> <optional>,\n...</optional> )\n\nsince the surrounding pattern is the next line beginning with \"[\",\nmeaning that next line is optional.\n\n\n+ at arbitrary nesting levels.\nmaybe\n+ at arbitrary nested level.\n\nin src/tools/pgindent/typedefs.list, \"JsonPathSpec\" is unnecessary.\n\nother than that, it looks good to me.\n\n\n",
"msg_date": "Sun, 7 Apr 2024 12:30:52 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "On Sun, Apr 7, 2024 at 12:30 PM jian he <[email protected]> wrote:\n>\n> other than that, it looks good to me.\nwhile looking at it again.\n\n+ | NESTED path_opt Sconst\n+ COLUMNS '(' json_table_column_definition_list ')'\n+ {\n+ JsonTableColumn *n = makeNode(JsonTableColumn);\n+\n+ n->coltype = JTC_NESTED;\n+ n->pathspec = (JsonTablePathSpec *)\n+ makeJsonTablePathSpec($3, NULL, @3, -1);\n+ n->columns = $6;\n+ n->location = @1;\n+ $$ = (Node *) n;\n+ }\n+ | NESTED path_opt Sconst AS name\n+ COLUMNS '(' json_table_column_definition_list ')'\n+ {\n+ JsonTableColumn *n = makeNode(JsonTableColumn);\n+\n+ n->coltype = JTC_NESTED;\n+ n->pathspec = (JsonTablePathSpec *)\n+ makeJsonTablePathSpec($3, $5, @3, @5);\n+ n->columns = $8;\n+ n->location = @1;\n+ $$ = (Node *) n;\n+ }\n+ ;\n+\n+path_opt:\n+ PATH\n+ | /* EMPTY */\n ;\n\nfor `NESTED PATH`, `PATH` is optional.\nSo for the doc, many places we need to replace `NESTED PATH` to `NESTED [PATH]`?\n\n\n",
"msg_date": "Sun, 7 Apr 2024 21:20:59 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "On Sun, Apr 7, 2024 at 10:21 PM jian he <[email protected]> wrote:\n> On Sun, Apr 7, 2024 at 12:30 PM jian he <[email protected]> wrote:\n> >\n> > other than that, it looks good to me.\n> while looking at it again.\n>\n> + | NESTED path_opt Sconst\n> + COLUMNS '(' json_table_column_definition_list ')'\n> + {\n> + JsonTableColumn *n = makeNode(JsonTableColumn);\n> +\n> + n->coltype = JTC_NESTED;\n> + n->pathspec = (JsonTablePathSpec *)\n> + makeJsonTablePathSpec($3, NULL, @3, -1);\n> + n->columns = $6;\n> + n->location = @1;\n> + $$ = (Node *) n;\n> + }\n> + | NESTED path_opt Sconst AS name\n> + COLUMNS '(' json_table_column_definition_list ')'\n> + {\n> + JsonTableColumn *n = makeNode(JsonTableColumn);\n> +\n> + n->coltype = JTC_NESTED;\n> + n->pathspec = (JsonTablePathSpec *)\n> + makeJsonTablePathSpec($3, $5, @3, @5);\n> + n->columns = $8;\n> + n->location = @1;\n> + $$ = (Node *) n;\n> + }\n> + ;\n> +\n> +path_opt:\n> + PATH\n> + | /* EMPTY */\n> ;\n>\n> for `NESTED PATH`, `PATH` is optional.\n> So for the doc, many places we need to replace `NESTED PATH` to `NESTED [PATH]`?\n\nThanks for checking.\n\nI've addressed most of your comments in the recent days including\ntoday's. Thanks for the patches for adding new test cases. That was\nvery helpful.\n\nI've changed the recursive structure of JsonTablePlanNextRow(). While\nit still may not be perfect, I think it's starting to look good now.\n\n0001 is a patch to fix up get_json_expr_options() so that it now emits\nWRAPPER and QUOTES such that they work correctly.\n\n0002 needs an expanded commit message but I've run out of energy today.\n\n-- \nThanks, Amit Langote",
"msg_date": "Sun, 7 Apr 2024 22:36:38 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "On Sun, Apr 7, 2024 at 9:36 PM Amit Langote <[email protected]> wrote:\n>\n>\n> 0002 needs an expanded commit message but I've run out of energy today.\n>\n\nsome cosmetic issues in v51, 0002.\n\nin struct JsonTablePathScan,\n/* ERROR/EMPTY ON ERROR behavior */\nbool errorOnError;\n\nthe comments seem not right.\nI think \"errorOnError\" means\nwhile evaluating the top level JSON path expression, whether \"error on\nerror\" is specified or not?\n\n\n+ | NESTED <optional> PATH </optional> ]\n<replaceable>json_path_specification</replaceable> <optional> AS\n<replaceable>json_path_name</replaceable> </optional> COLUMNS (\n<replaceable>json_table_column</replaceable> <optional>,\n...</optional> )\n </synopsis>\n\n\"NESTED <optional> PATH </optional> ] \"\nno need the closing bracket.\n\n\n\n+ /* Update the nested plan(s)'s row(s) using this new row. */\n+ if (planstate->nested)\n+ {\n+ JsonTableResetNestedPlan(planstate->nested);\n+ if (JsonTablePlanNextRow(planstate->nested))\n+ return true;\n+ }\n+\n return true;\n }\n\nthis part can be simplified as:\n+ if (planstate->nested)\n+{\n+ JsonTableResetNestedPlan(planstate->nested);\n+ JsonTablePlanNextRow(planstate->nested));\n+}\nsince the last part, if it returns false, eventually it returns true.\nalso the comments seem slightly confusing?\n\n\nv51 recursion function(JsonTablePlanNextRow, JsonTablePlanScanNextRow)\nis far clearer than v50!\nthanks. I think I get it.\n\n\n",
"msg_date": "Mon, 8 Apr 2024 00:34:58 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "On Mon, Apr 8, 2024 at 12:34 AM jian he <[email protected]> wrote:\n>\n> On Sun, Apr 7, 2024 at 9:36 PM Amit Langote <[email protected]> wrote:\n> > 0002 needs an expanded commit message but I've run out of energy today.\n> >\n>\n\n+/*\n+ * Fetch next row from a JsonTablePlan's path evaluation result and from\n+ * any child nested path(s).\n+ *\n+ * Returns true if the any of the paths (this or the nested) has more rows to\n+ * return.\n+ *\n+ * By fetching the nested path(s)'s rows based on the parent row at each\n+ * level, this essentially joins the rows of different levels. If any level\n+ * has no matching rows, the columns at that level will compute to NULL,\n+ * making it an OUTER join.\n+ */\n+static bool\n+JsonTablePlanScanNextRow(JsonTablePlanState *planstate)\n\n\"if the any\"\nshould be\n\"if any\" ?\n\nalso I think,\n + If any level\n+ * has no matching rows, the columns at that level will compute to NULL,\n+ * making it an OUTER join.\nmeans\n+ If any level rows do not match, the rows at that level will compute to NULL,\n+ making it an OUTER join.\n\nother than that, it looks good to me.\n\n\n",
"msg_date": "Mon, 8 Apr 2024 11:21:42 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "On Mon, Apr 8, 2024 at 11:21 AM jian he <[email protected]> wrote:\n>\n> On Mon, Apr 8, 2024 at 12:34 AM jian he <[email protected]> wrote:\n> >\n> > On Sun, Apr 7, 2024 at 9:36 PM Amit Langote <[email protected]> wrote:\n> > > 0002 needs an expanded commit message but I've run out of energy today.\n> > >\n>\n> other than that, it looks good to me.\n\none more tiny issue.\n+EXPLAIN (COSTS OFF, VERBOSE) SELECT * FROM jsonb_table_view1;\n+ERROR: relation \"jsonb_table_view1\" does not exist\n+LINE 1: EXPLAIN (COSTS OFF, VERBOSE) SELECT * FROM jsonb_table_view1...\n+ ^\nmaybe you want\nEXPLAIN (COSTS OFF, VERBOSE) SELECT * FROM jsonb_table_view7;\nat the end of the sqljson_jsontable.sql.\nI guess it will be fine, but the format json explain's out is quite big.\n\nyou also need to `drop table s cascade;` at the end of the test?\n\n\n",
"msg_date": "Mon, 8 Apr 2024 13:01:47 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "On Mon, Apr 8, 2024 at 2:02 PM jian he <[email protected]> wrote:\n> On Mon, Apr 8, 2024 at 11:21 AM jian he <[email protected]> wrote:\n> >\n> > On Mon, Apr 8, 2024 at 12:34 AM jian he <[email protected]> wrote:\n> > >\n> > > On Sun, Apr 7, 2024 at 9:36 PM Amit Langote <[email protected]> wrote:\n> > > > 0002 needs an expanded commit message but I've run out of energy today.\n> > > >\n> >\n> > other than that, it looks good to me.\n>\n> one more tiny issue.\n> +EXPLAIN (COSTS OFF, VERBOSE) SELECT * FROM jsonb_table_view1;\n> +ERROR: relation \"jsonb_table_view1\" does not exist\n> +LINE 1: EXPLAIN (COSTS OFF, VERBOSE) SELECT * FROM jsonb_table_view1...\n> + ^\n> maybe you want\n> EXPLAIN (COSTS OFF, VERBOSE) SELECT * FROM jsonb_table_view7;\n> at the end of the sqljson_jsontable.sql.\n> I guess it will be fine, but the format json explain's out is quite big.\n>\n> you also need to `drop table s cascade;` at the end of the test?\n\nPushed after fixing this and other issues. Thanks a lot for your\ncareful reviews.\n\nI've marked the CF entry for this as committed now:\nhttps://commitfest.postgresql.org/47/4377/\n\nLet's work on the remaining PLAN clause with a new entry in the next\nCF, possibly in a new email thread.\n\n-- \nThanks, Amit Langote\n\n\n",
"msg_date": "Mon, 8 Apr 2024 18:08:29 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "On Mon, 8 Apr 2024 at 10:09, Amit Langote <[email protected]> wrote:\n\n> On Mon, Apr 8, 2024 at 2:02 PM jian he <[email protected]>\n> wrote:\n> > On Mon, Apr 8, 2024 at 11:21 AM jian he <[email protected]>\n> wrote:\n> > >\n> > > On Mon, Apr 8, 2024 at 12:34 AM jian he <[email protected]>\n> wrote:\n> > > >\n> > > > On Sun, Apr 7, 2024 at 9:36 PM Amit Langote <[email protected]>\n> wrote:\n> > > > > 0002 needs an expanded commit message but I've run out of energy\n> today.\n> > > > >\n> > >\n> > > other than that, it looks good to me.\n> >\n> > one more tiny issue.\n> > +EXPLAIN (COSTS OFF, VERBOSE) SELECT * FROM jsonb_table_view1;\n> > +ERROR: relation \"jsonb_table_view1\" does not exist\n> > +LINE 1: EXPLAIN (COSTS OFF, VERBOSE) SELECT * FROM jsonb_table_view1...\n> > + ^\n> > maybe you want\n> > EXPLAIN (COSTS OFF, VERBOSE) SELECT * FROM jsonb_table_view7;\n> > at the end of the sqljson_jsontable.sql.\n> > I guess it will be fine, but the format json explain's out is quite big.\n> >\n> > you also need to `drop table s cascade;` at the end of the test?\n>\n> Pushed after fixing this and other issues. Thanks a lot for your\n> careful reviews.\n>\n> I've marked the CF entry for this as committed now:\n> https://commitfest.postgresql.org/47/4377/\n>\n> Let's work on the remaining PLAN clause with a new entry in the next\n> CF, possibly in a new email thread.\n>\n\nI've just taken a look at the doc changes, and I think we need to either\nremove the leading \"select\" keyword, or uppercase it in the examples.\n\nFor example (on\nhttps://www.postgresql.org/docs/devel/functions-json.html#SQLJSON-QUERY-FUNCTIONS\n):\n\njson_exists ( context_item, path_expression [ PASSING { value AS varname }\n[, ...]] [ { TRUE | FALSE | UNKNOWN | ERROR } ON ERROR ])\n\nReturns true if the SQL/JSON path_expression applied to the context_item\nusing the PASSING values yields any items.\nThe ON ERROR clause specifies the behavior if an error occurs; the default\nis to return the boolean FALSE value. Note that if the path_expression is\nstrict and ON ERROR behavior is ERROR, an error is generated if it yields\nno items.\nExamples:\nselect json_exists(jsonb '{\"key1\": [1,2,3]}', 'strict $.key1[*] ? (@ > 2)')\n→ t\nselect json_exists(jsonb '{\"a\": [1,2,3]}', 'lax $.a[5]' ERROR ON ERROR) → f\nselect json_exists(jsonb '{\"a\": [1,2,3]}', 'strict $.a[5]' ERROR ON ERROR) →\nERROR: jsonpath array subscript is out of bounds\n\nExamples are more difficult to read when keywords appear to be at the same\nlevel as predicates. Plus other examples within tables on the same page\ndon't start with \"select\", and further down, block examples uppercase\nkeywords. Either way, I don't like it as it is.\n\nSeparate from this, I think these tables don't scan well (see json_query\nfor an example of what I'm referring to). There is no clear separation of\nthe syntax definition, the description, and the example. 
This is more a\nmatter for the website mailing list, but I'm expressing it here to check\nwhether others agree.\n\nThom\n\n",
"msg_date": "Thu, 16 May 2024 00:49:44 +0100",
"msg_from": "Thom Brown <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "Hi Thom,\n\nOn Thu, May 16, 2024 at 8:50 AM Thom Brown <[email protected]> wrote:\n> On Mon, 8 Apr 2024 at 10:09, Amit Langote <[email protected]> wrote:\n>>\n>> On Mon, Apr 8, 2024 at 2:02 PM jian he <[email protected]> wrote:\n>> > On Mon, Apr 8, 2024 at 11:21 AM jian he <[email protected]> wrote:\n>> > >\n>> > > On Mon, Apr 8, 2024 at 12:34 AM jian he <[email protected]> wrote:\n>> > > >\n>> > > > On Sun, Apr 7, 2024 at 9:36 PM Amit Langote <[email protected]> wrote:\n>> > > > > 0002 needs an expanded commit message but I've run out of energy today.\n>> > > > >\n>> > >\n>> > > other than that, it looks good to me.\n>> >\n>> > one more tiny issue.\n>> > +EXPLAIN (COSTS OFF, VERBOSE) SELECT * FROM jsonb_table_view1;\n>> > +ERROR: relation \"jsonb_table_view1\" does not exist\n>> > +LINE 1: EXPLAIN (COSTS OFF, VERBOSE) SELECT * FROM jsonb_table_view1...\n>> > + ^\n>> > maybe you want\n>> > EXPLAIN (COSTS OFF, VERBOSE) SELECT * FROM jsonb_table_view7;\n>> > at the end of the sqljson_jsontable.sql.\n>> > I guess it will be fine, but the format json explain's out is quite big.\n>> >\n>> > you also need to `drop table s cascade;` at the end of the test?\n>>\n>> Pushed after fixing this and other issues. Thanks a lot for your\n>> careful reviews.\n>>\n>> I've marked the CF entry for this as committed now:\n>> https://commitfest.postgresql.org/47/4377/\n>>\n>> Let's work on the remaining PLAN clause with a new entry in the next\n>> CF, possibly in a new email thread.\n>\n>\n> I've just taken a look at the doc changes,\n\nThanks for taking a look.\n\n> and I think we need to either remove the leading \"select\" keyword, or uppercase it in the examples.\n>\n> For example (on https://www.postgresql.org/docs/devel/functions-json.html#SQLJSON-QUERY-FUNCTIONS):\n>\n> json_exists ( context_item, path_expression [ PASSING { value AS varname } [, ...]] [ { TRUE | FALSE | UNKNOWN | ERROR } ON ERROR ])\n>\n> Returns true if the SQL/JSON path_expression applied to the context_item using the PASSING values yields any items.\n> The ON ERROR clause specifies the behavior if an error occurs; the default is to return the boolean FALSE value. Note that if the path_expression is strict and ON ERROR behavior is ERROR, an error is generated if it yields no items.\n> Examples:\n> select json_exists(jsonb '{\"key1\": [1,2,3]}', 'strict $.key1[*] ? (@ > 2)') → t\n> select json_exists(jsonb '{\"a\": [1,2,3]}', 'lax $.a[5]' ERROR ON ERROR) → f\n> select json_exists(jsonb '{\"a\": [1,2,3]}', 'strict $.a[5]' ERROR ON ERROR) →\n> ERROR: jsonpath array subscript is out of bounds\n>\n> Examples are more difficult to read when keywords appear to be at the same level as predicates. Plus other examples within tables on the same page don't start with \"select\", and further down, block examples uppercase keywords. Either way, I don't like it as it is.\n\nI agree that the leading SELECT should be removed from these examples.\nAlso, the function names should be capitalized both in the syntax\ndescription and in the examples, even though other functions appearing\non this page aren't.\n\n> Separate from this, I think these tables don't scan well (see json_query for an example of what I'm referring to). There is no clear separation of the syntax definition, the description, and the example. 
This is more a matter for the website mailing list, but I'm expressing it here to check whether others agree.\n\nHmm, yes, I think I forgot to put <synopsis> around the syntax like\nit's done for a few other functions listed on the page.\n\nHow about the attached? Other than the above points, it removes the\n<para> tags from the description text of each function to turn it into\na single paragraph, because the multi-paragraph style only seems to\nappear in this table and it's looking a bit weird now. Though it's\nalso true that the functions in this table have the longest\ndescriptions.\n\n-- \nThanks, Amit Langote",
"msg_date": "Mon, 20 May 2024 20:51:12 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "On Mon, May 20, 2024 at 7:51 PM Amit Langote <[email protected]> wrote:\n>\n> Hi Thom,\n>>\n> > and I think we need to either remove the leading \"select\" keyword, or uppercase it in the examples.\n> >\n> > For example (on https://www.postgresql.org/docs/devel/functions-json.html#SQLJSON-QUERY-FUNCTIONS):\n> >\n> > json_exists ( context_item, path_expression [ PASSING { value AS varname } [, ...]] [ { TRUE | FALSE | UNKNOWN | ERROR } ON ERROR ])\n> >\n> > Returns true if the SQL/JSON path_expression applied to the context_item using the PASSING values yields any items.\n> > The ON ERROR clause specifies the behavior if an error occurs; the default is to return the boolean FALSE value. Note that if the path_expression is strict and ON ERROR behavior is ERROR, an error is generated if it yields no items.\n> > Examples:\n> > select json_exists(jsonb '{\"key1\": [1,2,3]}', 'strict $.key1[*] ? (@ > 2)') → t\n> > select json_exists(jsonb '{\"a\": [1,2,3]}', 'lax $.a[5]' ERROR ON ERROR) → f\n> > select json_exists(jsonb '{\"a\": [1,2,3]}', 'strict $.a[5]' ERROR ON ERROR) →\n> > ERROR: jsonpath array subscript is out of bounds\n> >\n> > Examples are more difficult to read when keywords appear to be at the same level as predicates. Plus other examples within tables on the same page don't start with \"select\", and further down, block examples uppercase keywords. Either way, I don't like it as it is.\n>\n> I agree that the leading SELECT should be removed from these examples.\n> Also, the function names should be capitalized both in the syntax\n> description and in the examples, even though other functions appearing\n> on this page aren't.\n>\n> > Separate from this, I think these tables don't scan well (see json_query for an example of what I'm referring to). There is no clear separation of the syntax definition, the description, and the example. This is more a matter for the website mailing list, but I'm expressing it here to check whether others agree.\n>\n> Hmm, yes, I think I forgot to put <synopsis> around the syntax like\n> it's done for a few other functions listed on the page.\n>\n> How about the attached? Other than the above points, it removes the\n> <para> tags from the description text of each function to turn it into\n> a single paragraph, because the multi-paragraph style only seems to\n> appear in this table and it's looking a bit weird now. Though it's\n> also true that the functions in this table have the longest\n> descriptions.\n>\n\n Note that scalar strings returned by <function>json_value</function>\n always have their quotes removed, equivalent to specifying\n- <literal>OMIT QUOTES</literal> in <function>json_query</function>.\n+ <literal>OMIT QUOTES</literal> in <function>JSON_QUERY</function>.\n\n\"Note that scalar strings returned by <function>json_value</function>\"\nshould be\n\"Note that scalar strings returned by <function>JSON_VALUE</function>\"\n\n\ngenerally <synopsis> section no need indentation?\n\nyou removed <para> tag for description of JSON_QUERY, JSON_VALUE, JSON_EXISTS.\nJSON_EXISTS is fine, but for\nJSON_QUERY, JSON_VALUE, the description section is very long.\nsplitting it to 2 paragraphs should be better than just a single paragraph.\n\nsince we are in the top level table section: <table\nid=\"functions-sqljson-querying\">\nso there will be no ambiguity of what we are referring to.\none para explaining what this function does, and its return value,\none para having a detailed explanation should be just fine?\n\n\n",
"msg_date": "Tue, 28 May 2024 08:00:00 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "Hello,\n\nI'm not sure I've chosen the most appropriate thread for reporting the\nissue, but maybe you would like to look at code comments related to\nSQL/JSON constructors:\n\n * Transform JSON_ARRAY() constructor.\n *\n * JSON_ARRAY() is transformed into json[b]_build_array[_ext]() call\n * depending on the output JSON format. The first argument of\n * json[b]_build_array_ext() is absent_on_null.\n\n\n * Transform JSON_OBJECT() constructor.\n *\n * JSON_OBJECT() is transformed into json[b]_build_object[_ext]() call\n * depending on the output JSON format. The first two arguments of\n * json[b]_build_object_ext() are absent_on_null and check_unique.\n\nBut the referenced functions were removed at [1]; Nikita Glukhov wrote:\n> I have removed json[b]_build_object_ext() and json[b]_build_array_ext().\n\n(That thread seems too old for the current discussion.)\n\nAlso, a comment above transformJsonObjectAgg() references\njson[b]_objectagg[_unique][_strict](key, value), but I could find\njson_objectagg() only.\n\n[1] https://www.postgresql.org/message-id/be40362b-7821-7422-d33f-fbf1c61bb3e3%40postgrespro.ru\n\nBest regards,\nAlexander\n\n\n",
"msg_date": "Wed, 26 Jun 2024 14:00:00 +0300",
"msg_from": "Alexander Lakhin <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "Hi Alexander,\n\nOn Wed, Jun 26, 2024 at 8:00 PM Alexander Lakhin <[email protected]> wrote:\n>\n> Hello,\n>\n> I'm not sure I've chosen the most appropriate thread for reporting the\n> issue, but maybe you would like to look at code comments related to\n> SQL/JSON constructors:\n>\n> * Transform JSON_ARRAY() constructor.\n> *\n> * JSON_ARRAY() is transformed into json[b]_build_array[_ext]() call\n> * depending on the output JSON format. The first argument of\n> * json[b]_build_array_ext() is absent_on_null.\n>\n>\n> * Transform JSON_OBJECT() constructor.\n> *\n> * JSON_OBJECT() is transformed into json[b]_build_object[_ext]() call\n> * depending on the output JSON format. The first two arguments of\n> * json[b]_build_object_ext() are absent_on_null and check_unique.\n>\n> But the referenced functions were removed at [1]; Nikita Glukhov wrote:\n> > I have removed json[b]_build_object_ext() and json[b]_build_array_ext().\n>\n> (That thread seems too old for the current discussion.)\n>\n> Also, a comment above transformJsonObjectAgg() references\n> json[b]_objectagg[_unique][_strict](key, value), but I could find\n> json_objectagg() only.\n>\n> [1] https://www.postgresql.org/message-id/be40362b-7821-7422-d33f-fbf1c61bb3e3%40postgrespro.ru\n\nThanks for the report. Yeah, those comments that got added in\n7081ac46ace are obsolete.\n\nAttached is a patch to fix that. Should be back-patched to v16.\n\n-- \nThanks, Amit Langote",
"msg_date": "Fri, 28 Jun 2024 15:15:16 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "Hi Amit,\n\n28.06.2024 09:15, Amit Langote wrote:\n> Hi Alexander,\n>\n>\n> Thanks for the report. Yeah, those comments that got added in\n> 7081ac46ace are obsolete.\n>\n\nThanks for paying attention to that!\n\nCould you also look at comments for transformJsonObjectAgg() and\ntransformJsonArrayAgg(), aren't they obsolete too?\n\nBest regards,\nAlexander\n\n\n",
"msg_date": "Fri, 28 Jun 2024 11:00:00 +0300",
"msg_from": "Alexander Lakhin <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: remaining sql/json patches"
},
{
"msg_contents": "Hi Alexander,\n\nOn Fri, Jun 28, 2024 at 5:00 PM Alexander Lakhin <[email protected]> wrote:\n>\n> Hi Amit,\n>\n> 28.06.2024 09:15, Amit Langote wrote:\n> > Hi Alexander,\n> >\n> >\n> > Thanks for the report. Yeah, those comments that got added in\n> > 7081ac46ace are obsolete.\n> >\n>\n> Thanks for paying attention to that!\n>\n> Could you also look at comments for transformJsonObjectAgg() and\n> transformJsonArrayAgg(), aren't they obsolete too?\n\nYou're right. I didn't think they needed to be similarly fixed,\nbecause I noticed the code like the following in in\ntransformJsonObjectAgg() which sets the OID of the function to call\nfrom, again, JsonConstructorExpr:\n\n {\n if (agg->absent_on_null)\n if (agg->unique)\n aggfnoid = F_JSONB_OBJECT_AGG_UNIQUE_STRICT;\n else\n aggfnoid = F_JSONB_OBJECT_AGG_STRICT;\n else if (agg->unique)\n aggfnoid = F_JSONB_OBJECT_AGG_UNIQUE;\n else\n aggfnoid = F_JSONB_OBJECT_AGG;\n\n aggtype = JSONBOID;\n }\n\nSo, yes, the comments for them should be fixed too like the other two\nto also mention JsonConstructorExpr.\n\nUpdated patch attached.\n\nWonder if Alvaro has any thoughts on this.\n\n-- \nThanks, Amit Langote",
"msg_date": "Fri, 28 Jun 2024 22:43:00 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: remaining sql/json patches"
}
] |
[
{
"msg_contents": "Hi all,\n\nthis is about a limitation of the current query planner implementation \nwhich causes big performance declines for certain types of queries. \nAffected queries will typically execute about 1000 times slower than \nthey could. Examples are given below.\n\nAfter talking about this with David Rowley on the pgsql-bugs mailing \nlist (bug #17964), it turns out that the reason for the problem \napparently is that eligible IN clauses are always converted into \nsemi-joins. This is done even in situations where such a conversion \nprevents further optimizations to be made.\n\nHence, it would be desirable that the planner would intelligently decide \nbased on estimated costs whether or not an IN clause should be converted \ninto a semi-join. The planner obviously can already correctly estimate \nwhich variant will be faster, as shown in the query plans below.\n\nThe tricky part is that it's unfortunately not guaranteed that we'll \nfind the BEST possible solution if we decide independently for each IN \nclause, because estimated total costs will depend on the other IN \nclauses of the query as well. However, a simple heuristic solution would \nbe to restrain from converting an IN clause into a semi-join if the \nestimated number of rows returned by the subselect is below a certain \nthreshold. Then, the planner should make its final decision based on the \nestimated total cost of the two possible query variants (i.e. between \napplying the heuristic vs. not applying the heuristic).\n\nExample queries follow. Full query plans are provided within the linked \ndatabase fiddles.\n\n\n\nExample 1: Combining an IN clause with OR.\n\nSELECT * FROM book WHERE\n author_id IS NULL OR\n author_id IN (SELECT id FROM author WHERE name = 'some_name');\n\nExecution time: 159.227 ms\nExecution time (optimized variant): 0.084 ms (1896 times faster)\n\nEstimated cost: 16933.31\nEstimated cost (optimized variant): 2467.85 (6.86 times lower)\n\nFull query plans here: https://dbfiddle.uk/SOOJBMwI\n\n\n\nExample 2: Combining two IN clauses with OR.\n\nSELECT * FROM book WHERE\n author_id IN (SELECT id FROM author WHERE name = 'some_name') OR\n publisher_id IN (SELECT id FROM publisher WHERE name = 'some_name');\n\nExecution time: 227.822 ms\nExecution time (optimized variant): 0.088 ms (2589 times faster)\n\nEstimated cost: 20422.61\nEstimated cost (optimized variant): 4113.39 (4.96 times lower)\n\nFull query plans here: https://dbfiddle.uk/q6_4NuDX\n\n\n\nExample 3: Combining an IN clause with UNION.\n\nSELECT * FROM\n (SELECT * FROM table1 UNION SELECT * FROM table2) AS q\n WHERE id IN (SELECT id FROM table3);\n\nExecution time: 932.412 ms\nExecution time (optimized variant): 0.728 ms (1281 times faster)\n\nEstimated cost: 207933.98\nEstimated cost (optimized variant): 97.40 (2135 times lower)\n\nFull query plans here: https://dbfiddle.uk/TXASgMZf\n\n\n\nExample 4: Complex real-life query from our project.\n\nThe full query is linked below.\n\nExecution time: 72436.509 ms\nExecution time (optimized variant): 0.201 ms (360381 times faster)\n\nEstimated cost: 3941783.92\nEstimated cost (optimized variant): 1515.62 (2601 times lower)\n\nOriginal query here: https://pastebin.com/raw/JsY1PzG3\nOptimized query here: https://pastebin.com/raw/Xvq7zUY2\n\n\n\nNow, I'm not familiar with the current planner implementation, but \nwanted to know whether there is anybody on this list who would be \nwilling to work on this. 
Having the planner consider the costs of \nconverting IN clauses into semi-joins obviously seems like a worthy \ngoal. As shown, the performance improvements are gigantic for certain \ntypes of queries.\n\nThank you very much!\n\nMathias\n\n\n",
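For readers who cannot open the fiddle links, the hand-optimized variants referred to above generally split the OR into a UNION so that each branch can use an index. A sketch of that kind of rewrite for Example 1 (not necessarily identical to the fiddle's version; the two branches are mutually exclusive, so UNION ALL is safe):

SELECT * FROM book
WHERE author_id IS NULL
UNION ALL
SELECT b.* FROM book b
WHERE b.author_id IN (SELECT id FROM author WHERE name = 'some_name');

-- With indexes on book(author_id) and author(name), each branch can be
-- answered with index lookups instead of scanning all of book.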
"msg_date": "Mon, 19 Jun 2023 12:14:45 +0200",
"msg_from": "Mathias Kunter <[email protected]>",
"msg_from_op": true,
"msg_subject": "Missed query planner optimization"
}
] |
[
{
"msg_contents": "An important goal of the work on nbtree that went into PostgreSQL 12\n(and to a lesser extent the work that went into 13) was to make sure\nthat index scans deal with \"boundary cases\" optimally. The simplest\nway of explaining what this means is through a practical worked\nexample.\n\nRecap, worked example:\n\nConsider a query such as \"select count(*) from tenk2 where hundred =\n$1\", where tenk1 is one of the Wisconsin benchmark tables left behind\nby the standard regression tests. In practice this query will show\nthat there are either 0 or 100 rows counted for every possible $1\nvalue. In practice query execution only ever needs to scan exactly one\nleaf page when executing this query -- for any possible $1 value. In\nparticular, it doesn't matter if $1 is a value that happens to be a\n\"boundary value\". By \"boundary value\" I mean a value that happens to\nmatch the leftmost or the rightmost tuple on some leaf page. In other\nwords, a value that happens to match a key from some internal page\nseen while descending the tree in _bt_search.\n\nThis effect is reliable for the tenk1_hundred index that my example\nquery uses (a single column index) because numerous optimizations are\nin place, which work together. This starts with _bt_search, which will\nreliably descend to exactly one leaf page -- the only one where we\ncould possibly find a match (thanks to the !pivotsearch optimization).\nAfter that, _bt_readpage() will reliably notice that it doesn't have\nto go to the page to the right at all (occasionally it'll need to\ncheck the high key to do this, and so will use the optimization\nintroduced to Postgres 12 by commit 29b64d1d).\n\n(It's easy to see this using \"EXPLAIN (ANALYZE, BUFFERS)\", which will\nreliably break out index pages when we're using a bitmap index scan --\nwhich happens to be the case here. You'll see one root page access and\none leaf page access for the tenk1_hundred index.)\n\nThis is just an example of a fairly general principle: we ought to\nexpect selective index scans to only need to access one leaf page. At\nleast for any index scan that only needs to access a single \"natural\ngrouping\" that is cleanly isolated onto a single leaf page.\n\nAnother example of such a \"natural grouping\" can also be seen in the\nTPC-C orderline table's primary key. In practice we shouldn't ever\nneed to touch more than a single leaf page when we go to read any\nindividual order's entries from the order lines table/PK. There will\nnever be more than 15 order lines per order in practice (10 on\naverage), which guarantees that suffix truncation will avoid splitting\nany individual order's order lines across two leaf pages (after a page\nsplit). Plus we're careful to exploit the index structure (and basic\nB-Tree index invariants) to maximum effect during index scans....as\nlong as they're forward scans.\n\n(This ends my recap of \"boundary case\" handling.)\n\nPatch:\n\nWe still fall short when it comes to handling boundary cases optimally\nduring backwards scans. This is at least true for a subset of\nbackwards scans that request \"goback=true\" processing inside\n_bt_first. Attached patch improves matters here. Again, the simplest\nway of explaining what this does is through a practical worked\nexample.\n\nConsider an example where we don't do as well as might be expected right now:\n\nEXPLAIN (ANALYZE, BUFFERS) select * from tenk1 where hundred < 12\norder by hundred desc limit 1;\n\nThis particular example shows \"Buffers: shared hit=4\". 
We see one more\nbuffer hit than is truly necessary though. If I tweak the query by\nreplacing 12 with the adjacent value 11 (i.e. same query but with\n\"hundred < 11\"), then I'll see \"Buffers: shared hit=3\" instead.\nSimilarly, \"hundred < 13\" will also show \"Buffers: shared hit=3\". Why\nshould \"hundred <12\" need to access an extra leaf page?\n\nSure enough, the problematic case shows \"Buffers: shared hit=3\" with\nmy patch applied, as expected (a buffer for the root page, a leaf\npage, and a heap page). The patch makes every variant of my query\ntouch the same number of leaf pages/buffers, as expected.\n\nThe patch teaches _bt_search (actually, _bt_compare) about the\n\"goback=true\" case. This allows it to exploit information about which\nleaf page accesses are truly necessary. The code in question already\nknows about \"nextkey=true\", which is a closely related concept. It\nfeels very natural to also teach it about \"goback=true\" now.\n\n(Actually, I lied. All the patch really does is rename existing\n\"pivotsearch\" logic, so that the symbol name \"goback\" is used instead\n-- the insertion scankey code doesn't need to change at all. The real\nbehavioral change takes place in _bt_first, the higher level calling\ncode. It has been taught to set its insertion/initial positioning\nscankey's pivotsearch/goback field to \"true\" in the patch. Before now,\nthis option was exclusively during VACUUM, for page deletion. It turns\nout that so-called \"pivotsearch\" behavior is far more general than\ncurrently supposed.)\n\nThoughts?\n-- \nPeter Geoghegan",
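For anyone wanting to reproduce the observation without a regression-test database, a rough stand-in with the same shape as tenk1's hundred column can be built as below (a sketch; whether the extra leaf-page access shows up at exactly hundred < 12 depends on where the leaf-page boundaries happen to fall in the rebuilt index):

CREATE TABLE tenk1_like (hundred int);
INSERT INTO tenk1_like SELECT i % 100 FROM generate_series(0, 9999) i;
CREATE INDEX tenk1_like_hundred ON tenk1_like (hundred);
VACUUM ANALYZE tenk1_like;

EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM tenk1_like
WHERE hundred < 12 ORDER BY hundred DESC LIMIT 1;
-- compare the shared-hit counts against the same query run with
-- hundred < 11 and hundred < 13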
"msg_date": "Mon, 19 Jun 2023 16:28:38 -0700",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Optimizing \"boundary cases\" during backward scan B-Tree index\n descents"
},
{
"msg_contents": "On Mon, Jun 19, 2023 at 4:28 PM Peter Geoghegan <[email protected]> wrote:\n> We still fall short when it comes to handling boundary cases optimally\n> during backwards scans. This is at least true for a subset of\n> backwards scans that request \"goback=true\" processing inside\n> _bt_first. Attached patch improves matters here. Again, the simplest\n> way of explaining what this does is through a practical worked\n> example.\n\nOn further reflection, we should go even further in the direction of\nteaching _bt_search (and related routines) about the initial leaf\nlevel positioning requirements of backwards scans. In fact, we should\ngive _bt_search and friends exclusive responsibility for dealing with\ninitial positioning on behalf of _bt_first. Attached revision shows\nhow this can work.\n\nThis new direction is partly based on the observation that \"goback\" is\nreally just a synonym of \"backwards scan initial positioning\nbehavior\": all backwards scans already use \"goback\", while all forward\nscans use \"!goback\". So why shouldn't we just change any \"goback\"\nsymbol names to \"backward\", and be done with it? Then we can move the\n\"step back one item on leaf level\" logic from _bt_first over to\n_bt_binsrch. Now _bt_search/_bt_binsrch/_bt_compare own everything to\ndo with initial positioning.\n\nThe main benefit of this approach is that it allows _bt_first to\ndescribe how its various initial positioning strategies work using\nhigh level language, while pretty much leaving the implementation\ndetails up to _bt_search. I've always thought that it was confusing\nthat the \"<= strategy\" uses \"nextkey=true\" -- how can it be \"next key\"\nwhile also returning keys that directly match those from the insertion\nscankey? It only makes sense once you see that the \"<= strategy\" uses\nboth \"nextkey=true\" and \"backwards/goback = true\" -- something that\nthe structure in the patch makes clear and explicit.\n\nThis revision also adds test coverage for the aforementioned \"<=\nstrategy\" (not to be confused with the strategy that we're\noptimizing), since it was missing before now. It also adds test\ncoverage for the \"< strategy\" (which *is* the strategy affected by the\nnew optimization). The \"< strategy\" already has decent enough coverage\n-- it just doesn't have coverage that exercises the new optimization.\n(Note that I use the term \"new optimization\" advisedly here -- the new\nbehavior is closer to \"how it's really supposed to work\".)\n\nI'm happy with the way that v2 came out, since the new structure makes\na lot more sense to me. The refactoring is probably the most important\naspect of this patch. The new structure seems like it might become\nimportant in a world with skip scan or other new MDAM techniques added\nto B-Tree. The important principle here is \"think in terms of logical\nkey space, not in terms of physical pages\".\n\nAdding this to the next CF.\n\n--\nPeter Geoghegan",
"msg_date": "Tue, 20 Jun 2023 17:12:10 -0700",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Optimizing \"boundary cases\" during backward scan B-Tree index\n descents"
},
{
"msg_contents": "On Tue, Jun 20, 2023 at 5:12 PM Peter Geoghegan <[email protected]> wrote:\n> I'm happy with the way that v2 came out, since the new structure makes\n> a lot more sense to me.\n\nAttached is v3, which is a straightforward rebase of v2. v3 is needed\nto get the patch to apply cleanly against HEAD - so no real changes\nhere.\n\n\n-- \nPeter Geoghegan",
"msg_date": "Mon, 18 Sep 2023 16:58:47 -0700",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Optimizing \"boundary cases\" during backward scan B-Tree index\n descents"
},
{
"msg_contents": "On Mon, Sep 18, 2023 at 4:58 PM Peter Geoghegan <[email protected]> wrote:\n> Attached is v3, which is a straightforward rebase of v2. v3 is needed\n> to get the patch to apply cleanly against HEAD - so no real changes\n> here.\n\nAttached is v4. Just to keep CFTester happy.\n\n-- \nPeter Geoghegan",
"msg_date": "Sun, 15 Oct 2023 13:54:25 -0700",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Optimizing \"boundary cases\" during backward scan B-Tree index\n descents"
},
{
"msg_contents": "On Sun, 15 Oct 2023 at 22:56, Peter Geoghegan <[email protected]> wrote:\n>\n> On Mon, Sep 18, 2023 at 4:58 PM Peter Geoghegan <[email protected]> wrote:\n> > Attached is v3, which is a straightforward rebase of v2. v3 is needed\n> > to get the patch to apply cleanly against HEAD - so no real changes\n> > here.\n>\n> Attached is v4. Just to keep CFTester happy.\n\n> @@ -402,10 +405,27 @@ _bt_binsrch(Relation rel,\n> + if (unlikely(key->backward))\n> + return OffsetNumberPrev(low);\n> +\n> return low;\n\nI wonder if this is (or can be) optimized to the mostly equivalent\n\"return low - (OffsetNumber) key->backward\", as that would remove an\n\"unlikely\" branch that isn't very unlikely during page deletion, even\nif page deletion by itself is quite rare.\nI'm not sure it's worth the additional cognitive overhead, or if there\nare any significant performance implications for the hot path.\n\n> @@ -318,9 +318,12 @@ _bt_moveright(Relation rel,\n> [...]\n> * On a leaf page, _bt_binsrch() returns the OffsetNumber of the first\n> [...]\n> + * key >= given scankey, or > scankey if nextkey is true for forward scans.\n> + * _bt_binsrch() also \"steps back\" by one item/tuple on the leaf level in the\n> + * case of backward scans. (NOTE: this means it is possible to return a value\n> + * that's 1 greater than the number of keys on the leaf page. It also means\n> + * that we can return an item 1 less than the first non-pivot tuple on any\n> + * leaf page.)\n\nI think this can use a bit more wordsmithing: the use of \"also\" with\n\"steps back\" implies we also step back in other cases, which aren't\nmentioned. Could you update the wording to be more clear about this?\n\n> @@ -767,7 +787,7 @@ _bt_compare(Relation rel,\n> [...]\n> - * Most searches have a scankey that is considered greater than a\n> + * Forward scans have a scankey that is considered greater than a\n\nAlthough it's not strictly an issue for this patch, the comment here\ndoesn't describe backward scans in as much detail as forward scans\nhere. The concepts are mostly \"do the same but in reverse\", but the\ndifference is noticable.\n\nApart from these comments, no further noteworthy comments. Looks good.\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Mon, 6 Nov 2023 14:16:09 +0100",
"msg_from": "Matthias van de Meent <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimizing \"boundary cases\" during backward scan B-Tree index\n descents"
}
] |
[
{
"msg_contents": "MERGE is now a data-modification command too.",
"msg_date": "Mon, 19 Jun 2023 23:32:46 -0700",
"msg_from": "Will Mortensen <[email protected]>",
"msg_from_op": true,
"msg_subject": "[PATCH] doc: add missing mention of MERGE in MVCC"
},
{
"msg_contents": "On Mon, Jun 19, 2023 at 11:32:46PM -0700, Will Mortensen wrote:\n> MERGE is now a data-modification command too.\n\nYes, this has been applied too.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n",
"msg_date": "Wed, 21 Jun 2023 19:08:19 -0400",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] doc: add missing mention of MERGE in MVCC"
},
{
"msg_contents": "I saw, thanks again!\n\nOn Wed, Jun 21, 2023 at 4:08 PM Bruce Momjian <[email protected]> wrote:\n>\n> On Mon, Jun 19, 2023 at 11:32:46PM -0700, Will Mortensen wrote:\n> > MERGE is now a data-modification command too.\n>\n> Yes, this has been applied too.\n>\n> --\n> Bruce Momjian <[email protected]> https://momjian.us\n> EDB https://enterprisedb.com\n>\n> Only you can decide what is important to you.\n\n\n",
"msg_date": "Wed, 21 Jun 2023 16:16:47 -0700",
"msg_from": "Will Mortensen <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] doc: add missing mention of MERGE in MVCC"
}
] |
[
{
"msg_contents": "\nStatus on collation loose ends:\n\n1. There's an open item \"Switch to ICU for 17\". It's a little bit\nconfusing exactly what that means, and the CF entry refers to two\nitems, one of which is the build-time default to --with-icu. As far as\nI know, building with ICU by default is a settled issue with no\nobjections. The second issue is the initdb default, which is covered by\nthe other open item. So I will just close that open item unless someone\nthinks I'm missing something.\n\n2. Open item about the unfriendly rules for choosing an ICU locale at\ninitdb time. Tom, Robert, and Daniel Verite have expressed concerns\n(and at least one objection) to initdb defaulting to icu for --locale-\nprovider. Some of the problems have been addressed, but the issue about\nC and C.UTF-8 locales is not settled. Even if it were settled I'm not\nsure we'd have a clear consensus on all the details. I don't think this\nshould proceed to beta2 in this state, so I intend to revert back to\nlibc as the default for initdb. [ I believe we do have a general\nconsensus that ICU is better, but we can signal it other ways: through\ndocumentation, packaging, etc. ]\n\n3. The ICU conversion from \"C\" to \"en-US-u-va-posix\": cut out this code\n(it was a small part of a larger change). It's only purpose was\nconsistency between ICU versions, and nobody liked it. It's only here\nright now to avoid test failures due to an order-of-commits issue; but\nif the initdb default goes back to libc it won't matter and I can\nremove it.\n\n4. icu_validation_level WARNING or ERROR: right now an invalid ICU\nlocale raises a WARNING, but Peter Eisentraut would prefer an ERROR.\nI'm still inclined to leave it as a WARNING for one release and\nincrease it to ERROR later. But if the default collation provider goes\nback to libc, the risk of ICU validation errors goes way down, so I\ndon't object if Peter would like to change it back to an ERROR.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Tue, 20 Jun 2023 02:02:36 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "collation-related loose ends before beta2"
},
{
"msg_contents": "Jeff Davis <[email protected]> writes:\n> Status on collation loose ends:\n\nThis all sounds good to me.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 20 Jun 2023 12:16:04 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: collation-related loose ends before beta2"
},
{
"msg_contents": "On Tue, 2023-06-20 at 12:16 -0400, Tom Lane wrote:\n> Jeff Davis <[email protected]> writes:\n> > Status on collation loose ends:\n> \n> This all sounds good to me.\n\nPatches attached.\n\n0001 also removes the code to get a default locale when ICU is being\nused, because that was a part of the same commit that changed the\ndefault provider to be ICU and I don't see a lot of value in keeping\njust that part.\n\nI'm planning to commit something similar to the attached patches\ntomorrow (Wednesday) unless I get more input.\n\nRegards,\n\tJeff Davis",
"msg_date": "Tue, 20 Jun 2023 13:46:29 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: collation-related loose ends before beta2"
},
{
"msg_contents": "On 6/20/23 5:02 AM, Jeff Davis wrote:\r\n> \r\n> Status on collation loose ends:\r\n> \r\n> 1. There's an open item \"Switch to ICU for 17\". It's a little bit\r\n> confusing exactly what that means, and the CF entry refers to two\r\n> items, one of which is the build-time default to --with-icu. As far as\r\n> I know, building with ICU by default is a settled issue with no\r\n> objections. The second issue is the initdb default, which is covered by\r\n> the other open item. So I will just close that open item unless someone\r\n> thinks I'm missing something.\r\n\r\n[RMT Hat]\r\n\r\nNo objections. The RMT had interpreted this as \"Punt on making ICU the \r\nbuilding default to v17\" but it seems the consensus is to continue to \r\nleave it in as the default for v16.\r\n\r\n> 2. Open item about the unfriendly rules for choosing an ICU locale at\r\n> initdb time. Tom, Robert, and Daniel Verite have expressed concerns\r\n> (and at least one objection) to initdb defaulting to icu for --locale-\r\n> provider. Some of the problems have been addressed, but the issue about\r\n> C and C.UTF-8 locales is not settled. Even if it were settled I'm not\r\n> sure we'd have a clear consensus on all the details. I don't think this\r\n> should proceed to beta2 in this state, so I intend to revert back to\r\n> libc as the default for initdb. [ I believe we do have a general\r\n> consensus that ICU is better, but we can signal it other ways: through\r\n> documentation, packaging, etc. ]\r\n\r\n[Personal hat]\r\n\r\n(Building...)\r\n\r\nI do think this raises a good point: it's really the packaging that will \r\nguide what users are using for v16. I don't know if we want to \r\ndiscuss/poll the packagers to see what they are thinking about this?\r\n\r\n> 3. The ICU conversion from \"C\" to \"en-US-u-va-posix\": cut out this code\r\n> (it was a small part of a larger change). It's only purpose was\r\n> consistency between ICU versions, and nobody liked it. It's only here\r\n> right now to avoid test failures due to an order-of-commits issue; but\r\n> if the initdb default goes back to libc it won't matter and I can\r\n> remove it.\r\n> \r\n> 4. icu_validation_level WARNING or ERROR: right now an invalid ICU\r\n> locale raises a WARNING, but Peter Eisentraut would prefer an ERROR.\r\n> I'm still inclined to leave it as a WARNING for one release and\r\n> increase it to ERROR later. But if the default collation provider goes\r\n> back to libc, the risk of ICU validation errors goes way down, so I\r\n> don't object if Peter would like to change it back to an ERROR.\r\n\r\n[Personal hat]\r\n\r\nI'd be inclined for \"WARNING\" until getting a sense of what packagers \r\nwho do an initdb as part of the installation process decide what \r\ncollation provider they're going to use.\r\n\r\nThanks,\r\n\r\nJonathan",
"msg_date": "Wed, 21 Jun 2023 13:17:33 -0400",
"msg_from": "\"Jonathan S. Katz\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: collation-related loose ends before beta2"
}
] |
[
{
"msg_contents": "Hi,\n\nAs I've mentioned earlier on-list [0][1] and off-list, I've been\nworking on reducing the volume of WAL that we write. This is one\nintermediate step towards that effort.\n\nAttached is a patchset that reduces the storage overhead of each WAL\nrecord, with theoretical total savings in pgbench transactions\nsomewhere between 4-5%, depending on platform and the locality of\nupdates. These savings are achieved by reducing the amount of data\nstored, and by not emitting bytes that don't carry data relevant to\neach record's redo- and decode routines.\n\nPatches contained:\n0001 is copied essentially verbatim from [1] and reduces overhead in\nthe registered block's length field where possible. It is included to\nimprove code commonality between varcoded integer fields. See [1] for\nmore details.\n\n0002 updates accesses to the rmgr-managed bits of xl_info with its own\nmacro returning only rmgr-managed bits, and updates XLogRecGetInfo()\nto return only the xlog-managed bits.\n\n0003 renames the rm_identify argument from 'info' to 'rmgrinfo'; and\nstops providing the xlog-managed bits to the function - rmgrs have no\nneed to know the xlog internal info bits.\n\n0004 continues on 0003 and moves the rmgr info bits into their own\nxl_rmgrinfo of type uint8, stored in the alignment hole in the\nXLogRecord struct.\n\n0005 updates the code to only include a valid XID in the record when\nthe rmgr actually needs to use that XID.\n\n0006 implements a new, variable length, WAL record header format,\npreviously discussed at [0] and [2]. This new WAL record header is a\nminimum of 14 bytes large, but generally will be 15 to 21 bytes in\nsize, depending on the data contained, the type of record, and whether\nthe record needs an XID.\n\nNotes:\n- The patchset includes [1] for its variable-length encoding of\nuint32, and this is included in the savings calculation.\n- Not all records include the backend's XID anymore. XLog API users\nmust explicitly request the inclusion of XID in the record with the\nXLOG_INCLUDE_XID record flag.\n- XLog records are now always aligned to 8 bytes. This was needed to\nreduce the complexity of var-coding the record length on 32-bit\nsystems. Savings on 32-bit systems still exist, but can be expected to\nbe less impactful.\n- XLog length is now varlength encoded. No more records with <255\nbytes of data storing 3 0-bytes - the length is now stored in 0, 1, 2\nor 4 bytes.\n- RMGRs now get their own uint8 info/flags field. No more sharing bits\nwith WAL infrastructure in xl_info. The byte is only stored if it is\nnon-0, and otherwise omitted (indicated by flag bits in xl_info).\n\nTodo:\n- Check for any needed documentation / code comments updates\n- benchmark this\n\nFuture work:\n- we could omit XLR_BLOCK_ID_DATA_[LONG,SHORT] if it is the only\n\"block ID\" in the record (such as checkpoint records, commit/rollback\nrecords, etc.). 
This would be indicated by a xl_info bit, and this\nwould save 2-5 bytes per applicable record.\n- This patch inherits [1]'s property in which we can release the\nBKPBLOCK_HAS_DATA flag bit (its value is already implied by\nXLR_BLOCKID_SZCLASS), allowing us to use it for something else, like\nindicating varcoded RelFileLocator/BlockId.\n\nKind regards,\n\nMatthias van de Meent\nNeon, Inc.\n\n[0] https://www.postgresql.org/message-id/flat/CAEze2Whf%3DfwAj7rosf6aDM9t%2B7MU1w-bJn28HFWYGkz%2Bics-hg%40mail.gmail.com\n[1] https://www.postgresql.org/message-id/flat/CAEze2WjuJqVeB6EUZ1z75_ittk54H6Lk7WtwRskEeGtZubr4bQ%40mail.gmail.com\n[2] https://www.postgresql.org/message-id/flat/CA+Tgmoaa9Yc9O-FP4vS_xTKf8Wgy8TzHpjnjN56_ShKE=jrP-Q@mail.gmail.com",
"msg_date": "Tue, 20 Jun 2023 22:01:00 +0200",
"msg_from": "Matthias van de Meent <[email protected]>",
"msg_from_op": true,
"msg_subject": "XLog size reductions: Reduced XLog record header size for PG17"
},
{
"msg_contents": "Hi,\n\nThe attached v2 patchset contains some small fixes for the failing\ncfbot 32-bit tests - at least locally it does so.\n\nI'd overlooked one remaining use of MAXALIGN64() in xlog.c in the last\npatch of the set, which has now been updated to XLP_ALIGN as well.\nAdditionally, XLP_ALIGN has been updated to use TYPEALIGN64 instead of\nTYPEALIGN so that we don't lose bits of the aligned value in 32-bit\nsystems.\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech/)",
"msg_date": "Fri, 30 Jun 2023 17:36:40 +0200",
"msg_from": "Matthias van de Meent <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: XLog size reductions: Reduced XLog record header size for PG17"
},
{
"msg_contents": "On Fri, 30 Jun 2023 at 17:36, Matthias van de Meent\n<[email protected]> wrote:\n>\n> Hi,\n>\n> The attached v2 patchset contains some small fixes for the failing\n> cfbot 32-bit tests - at least locally it does so.\n>\n> I'd overlooked one remaining use of MAXALIGN64() in xlog.c in the last\n> patch of the set, which has now been updated to XLP_ALIGN as well.\n> Additionally, XLP_ALIGN has been updated to use TYPEALIGN64 instead of\n> TYPEALIGN so that we don't lose bits of the aligned value in 32-bit\n> systems.\n\nApparently there was some usage of MAXALIGN() in xlogreader that I'd\nmissed, and which only shows up in TAP tests. In v3 I've fixed that,\ntogether with some improved early detection of invalid record headers.\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech/)",
"msg_date": "Mon, 3 Jul 2023 13:08:31 +0200",
"msg_from": "Matthias van de Meent <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: XLog size reductions: Reduced XLog record header size for PG17"
},
{
"msg_contents": "On Mon, 3 Jul 2023 at 13:08, Matthias van de Meent\n<[email protected]> wrote:\n>\n> On Fri, 30 Jun 2023 at 17:36, Matthias van de Meent\n> <[email protected]> wrote:\n> >\n> > Hi,\n> >\n> > The attached v2 patchset contains some small fixes for the failing\n> > cfbot 32-bit tests - at least locally it does so.\n> >\n> > I'd overlooked one remaining use of MAXALIGN64() in xlog.c in the last\n> > patch of the set, which has now been updated to XLP_ALIGN as well.\n> > Additionally, XLP_ALIGN has been updated to use TYPEALIGN64 instead of\n> > TYPEALIGN so that we don't lose bits of the aligned value in 32-bit\n> > systems.\n>\n> Apparently there was some usage of MAXALIGN() in xlogreader that I'd\n> missed, and which only shows up in TAP tests. In v3 I've fixed that,\n> together with some improved early detection of invalid record headers.\n\nAnother fix for CFBot - pg_waldump tests which were added in 96063e28\nexposed an issue in my patchset related to RM_INVALID_ID.\n\nv4 splits former patch 0006 into two: now 0006 adds RM_INVALID and\ndoes the rmgr-related changes in the code, and 0007 does the WAL disk\nformat overhaul.\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech/)",
"msg_date": "Wed, 12 Jul 2023 14:50:52 +0200",
"msg_from": "Matthias van de Meent <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: XLog size reductions: Reduced XLog record header size for PG17"
},
{
"msg_contents": "On Wed, 12 Jul 2023 at 14:50, Matthias van de Meent\n<[email protected]> wrote:\n>\n> On Mon, 3 Jul 2023 at 13:08, Matthias van de Meent\n> <[email protected]> wrote:\n> >\n> > On Fri, 30 Jun 2023 at 17:36, Matthias van de Meent\n> > <[email protected]> wrote:\n> > >\n> > > Hi,\n> > >\n> > > The attached v2 patchset contains some small fixes for the failing\n> > > cfbot 32-bit tests - at least locally it does so.\n> > >\n> > > I'd overlooked one remaining use of MAXALIGN64() in xlog.c in the last\n> > > patch of the set, which has now been updated to XLP_ALIGN as well.\n> > > Additionally, XLP_ALIGN has been updated to use TYPEALIGN64 instead of\n> > > TYPEALIGN so that we don't lose bits of the aligned value in 32-bit\n> > > systems.\n> >\n> > Apparently there was some usage of MAXALIGN() in xlogreader that I'd\n> > missed, and which only shows up in TAP tests. In v3 I've fixed that,\n> > together with some improved early detection of invalid record headers.\n>\n> Another fix for CFBot - pg_waldump tests which were added in 96063e28\n> exposed an issue in my patchset related to RM_INVALID_ID.\n>\n> v4 splits former patch 0006 into two: now 0006 adds RM_INVALID and\n> does the rmgr-related changes in the code, and 0007 does the WAL disk\n> format overhaul.\n\nV5 is a rebased version of v4, and includes the latest patch from\n\"smaller XLRec block header\" [0] as 0001.\n\nKind regards,\n\nMatthias van de Meent\n\n[0] https://www.postgresql.org/message-id/CAEze2WhG_qvs0_HPCKyGLjFSSeiLZJcFhT%3DrzEUd7AzyxnSfKw%40mail.gmail.com",
"msg_date": "Tue, 19 Sep 2023 12:07:07 +0200",
"msg_from": "Matthias van de Meent <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: XLog size reductions: Reduced XLog record header size for PG17"
},
{
"msg_contents": "On Tue, Sep 19, 2023 at 12:07:07PM +0200, Matthias van de Meent wrote:\n> V5 is a rebased version of v4, and includes the latest patch from\n> \"smaller XLRec block header\" [0] as 0001.\n\n0001 and 0007 are the meat of the changes.\n\n-#define XLR_CHECK_CONSISTENCY 0x02\n+#define XLR_CHECK_CONSISTENCY (0x20)\n\nI can't help but notice that there are a few stylistic choices like\nthis one that are part of the patch. Using parenthesis in the case of\nhexa values is inconsistent with the usual practices I've seen in the\ntree.\n\n #define COPY_HEADER_FIELD(_dst, _size) \\\n do { \\\n- if (remaining < _size) \\\n+ if (remaining < (_size)) \\\n goto shortdata_err; \\\n\nThere are a couple of stylistic changes like this one, that I guess\ncould just use their own patch to make these macros easier to use.\n\n-#define XLogRecGetInfo(decoder) ((decoder)->record->header.xl_info)\n+#define XLogRecGetInfo(decoder) ((decoder)->record->header.xl_info & XLR_INFO_MASK)\n+#define XLogRecGetRmgrInfo(decoder) (((decoder)->record->header.xl_info) & XLR_RMGR_INFO_MASK)\n\nThis stuff in 0002 is independent of 0001, am I right? Doing this\nsplit with an extra macro is okay by me, reducing the presence of\nXLR_INFO_MASK and bitwise operations based on it.\n\n0003 is also mechanical, but if you begin to enforce the use of\nXLR_RMGR_INFO_MASK as the bits allowed to be passed down to the RMGR\nidentity callback, we should have at least a validity check to make\nsure that nothing, even custom RMGRs, pass down unexpected bits?\n\nI am not convinced that XLOG_INCLUDE_XID is a good interface, TBH, and \nI fear that people are going to forget to set it. Wouldn't it be\nbetter to use an option where the XID is excluded instead, making the\ninclusing the an XID the default?\n\n> The resource manager has ID = 0, thus requiring some special\n> handling in other code. Apart from being generally useful, it is\n> used in future patches to detect the end of wal in lieu of a zero-ed\n> fixed-size xl_tot_len field.\n\nErr, no, that may not be true. See for example this thread where the\ntopic of improving the checks of xl_tot_len and rely on this value on\nwhen a record header has been validated, even across page borders:\nhttps://www.postgresql.org/message-id/[email protected]\n\nExcept that, in which cases could an invalid RMGR be useful?\n--\nMichael",
"msg_date": "Wed, 20 Sep 2023 14:06:25 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: XLog size reductions: Reduced XLog record header size for PG17"
},
{
"msg_contents": "On Wed, 20 Sept 2023 at 07:06, Michael Paquier <[email protected]> wrote:\n>\n> On Tue, Sep 19, 2023 at 12:07:07PM +0200, Matthias van de Meent wrote:\n> > V5 is a rebased version of v4, and includes the latest patch from\n> > \"smaller XLRec block header\" [0] as 0001.\n>\n> 0001 and 0007 are the meat of the changes.\n\nCorrect.\n\n> -#define XLR_CHECK_CONSISTENCY 0x02\n> +#define XLR_CHECK_CONSISTENCY (0x20)\n>\n> I can't help but notice that there are a few stylistic choices like\n> this one that are part of the patch. Using parenthesis in the case of\n> hexa values is inconsistent with the usual practices I've seen in the\n> tree.\n\nYes, I'll take another look at that.\n\n> #define COPY_HEADER_FIELD(_dst, _size) \\\n> do { \\\n> - if (remaining < _size) \\\n> + if (remaining < (_size)) \\\n> goto shortdata_err; \\\n>\n> There are a couple of stylistic changes like this one, that I guess\n> could just use their own patch to make these macros easier to use.\n\nThey actually fix complaints of my IDE, but are otherwise indeed stylistic.\n\n> -#define XLogRecGetInfo(decoder) ((decoder)->record->header.xl_info)\n> +#define XLogRecGetInfo(decoder) ((decoder)->record->header.xl_info & XLR_INFO_MASK)\n> +#define XLogRecGetRmgrInfo(decoder) (((decoder)->record->header.xl_info) & XLR_RMGR_INFO_MASK)\n>\n> This stuff in 0002 is independent of 0001, am I right? Doing this\n> split with an extra macro is okay by me, reducing the presence of\n> XLR_INFO_MASK and bitwise operations based on it.\n\nYes, that change is to stop making use of (~XLR_INFO_MASK) where\nXLR_RMGR_INFO_MASK is the correct bitmask (whilst also being quite\nuseful in the later patch).\n\n> 0003 is also mechanical, but if you begin to enforce the use of\n> XLR_RMGR_INFO_MASK as the bits allowed to be passed down to the RMGR\n> identity callback, we should have at least a validity check to make\n> sure that nothing, even custom RMGRs, pass down unexpected bits?\n\nI think that's already handled in XLogInsert(), but I'll make sure to\nadd more checks if they're not in place yet.\n\n> I am not convinced that XLOG_INCLUDE_XID is a good interface, TBH, and\n> I fear that people are going to forget to set it. Wouldn't it be\n> better to use an option where the XID is excluded instead, making the\n> inclusing the an XID the default?\n\nMost rmgrs don't actually use the XID. Only XACT, MULTIXACT, HEAP,\nHEAP2, and LOGICALMSG use the xid, so I thought it would be easier to\njust find the places where those RMGR's records were being logged than\nto update all other places.\n\nI don't mind changing how we decide to log the XID, but I don't think\nEXCLUDE_XID is a good alternative: most records just don't need the\ntransaction ID. There are many more index AMs with logging than table\nAMs, so I don't think it is that weird to default to 'not included'.\n\n> > The resource manager has ID = 0, thus requiring some special\n> > handling in other code. Apart from being generally useful, it is\n> > used in future patches to detect the end of wal in lieu of a zero-ed\n> > fixed-size xl_tot_len field.\n>\n> Err, no, that may not be true. 
See for example this thread where the\n> topic of improving the checks of xl_tot_len and rely on this value on\n> when a record header has been validated, even across page borders:\n> https://www.postgresql.org/message-id/[email protected]\n\nYes, there are indeed exceptions when reusing WAL segments, but it's\nstill a good canary, like xl_tot_len before this patch.\n\n> Except that, in which cases could an invalid RMGR be useful?\n\nA sentinel value that is obviously invalid is available for several\ntypes, e.g. BlockNumber, TransactionId, XLogRecPtr, Buffer, and this\nis quite useful if you want to check if something is definitely\ninvalid. I think that's fine in principle, we're already \"wasting\"\nsome IDs in the gap between RM_MAX_BUILTIN_ID and RM_MIN_CUSTOM_ID.\n\nIn the current xlog infrastructure, we use xl_tot_len as that sentinel\nto detect whether a new record may exist, but in this patch that can't\nbe used because the field may not exist and depends on other bytes. So\nI used xl_rmgr_id as the field to base the 'may a next record exist'\nchecks on, which required the 0 rmgr ID to be invalid.\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Mon, 25 Sep 2023 19:40:00 +0200",
"msg_from": "Matthias van de Meent <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: XLog size reductions: Reduced XLog record header size for PG17"
},
{
"msg_contents": "On Mon, Sep 25, 2023 at 07:40:00PM +0200, Matthias van de Meent wrote:\n> On Wed, 20 Sept 2023 at 07:06, Michael Paquier <[email protected]> wrote:\n>> #define COPY_HEADER_FIELD(_dst, _size) \\\n>> do { \\\n>> - if (remaining < _size) \\\n>> + if (remaining < (_size)) \\\n>> goto shortdata_err; \\\n>>\n>> There are a couple of stylistic changes like this one, that I guess\n>> could just use their own patch to make these macros easier to use.\n> \n> They actually fix complaints of my IDE, but are otherwise indeed stylistic.\n\nOh, OK. I just use an old-school terminal, but no objections in\nchanging these if they make life easier for some hackers. Still, that\nfeels independant of what you are proposing here.\n--\nMichael",
"msg_date": "Wed, 27 Sep 2023 08:51:26 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: XLog size reductions: Reduced XLog record header size for PG17"
},
{
"msg_contents": "Hi and Happy New Year!\n\nI've looked through the patches and the change seems quite small and\njustified. But at the second round, some doubt arises on whether this long\npatchset indeed introduces enough performance gain? I may be wrong, but it\nsaves only several bytes and the performance gain would be only in some\nspecific artificial workload. Did you do some measurements? Do we have\nseveral percent performance-wise?\n\nKind regards,\nPavel Borisov\n\nHi and Happy New Year!I've looked through the patches and the change seems quite small and justified. But at the second round, some doubt arises on whether this long patchset indeed introduces enough performance gain? I may be wrong, but it saves only several bytes and the performance gain would be only in some specific artificial workload. Did you do some measurements? Do we have several percent performance-wise?Kind regards,Pavel Borisov",
"msg_date": "Wed, 3 Jan 2024 14:15:05 +0400",
"msg_from": "Pavel Borisov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: XLog size reductions: Reduced XLog record header size for PG17"
},
{
"msg_contents": "\n\n> On 3 Jan 2024, at 15:15, Pavel Borisov <[email protected]> wrote:\n> \n> Hi and Happy New Year!\n> \n> I've looked through the patches and the change seems quite small and justified. But at the second round, some doubt arises on whether this long patchset indeed introduces enough performance gain? I may be wrong, but it saves only several bytes and the performance gain would be only in some specific artificial workload. Did you do some measurements? Do we have several percent performance-wise?\n> \n> Kind regards,\n> Pavel Borisov\n\nHi Matthias!\n\nThis is a kind reminder that the thread is waiting for your reply. Are you interesting in CF entry [0]?\n\nThanks!\n\n\nBest regards, Andrey Borodin.\n[0] https://commitfest.postgresql.org/47/4386/\n\n",
"msg_date": "Sun, 7 Apr 2024 10:37:25 +0500",
"msg_from": "\"Andrey M. Borodin\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: XLog size reductions: Reduced XLog record header size for PG17"
},
{
"msg_contents": "\n\n> On Jun 20, 2023, at 1:01 PM, Matthias van de Meent <[email protected]> wrote:\n> \n> 0001 is copied essentially verbatim from [1] and reduces overhead in\n> the registered block's length field where possible. It is included to\n> improve code commonality between varcoded integer fields. See [1] for\n> more details.\n\nHi Matthias! I am interested in seeing this patch move forward. We seem to be stuck.\n\nThe disagreement on the other thread seems to be about whether we can generalize and reuse variable integer encoding. Could you comment on whether perhaps we just need a few versions of that? Perhaps one version where the number of length bytes is encoded in the length itself (such as is used for varlena and by Andres' patch) and one where the number of length bytes is stored elsewhere? You are clearly using the \"elsewhere\" form, but perhaps you could pull out the logic of that into src/common? In struct XLogRecordBlockHeader.id <http://xlogrecordblockheader.id/>, you are reserving two bits for the size class. (The code comments aren't clear about this, by the way.) Perhaps if the generalized length encoding logic could take a couple arguments to represent where and how the size class bits are to be stored, and where the length itself is stored? I doubt you need to sacrifice any performance gains of this patch to make that happen. You'd just need to restructure the patch.\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Wed, 5 Jun 2024 08:12:47 -0700",
"msg_from": "Mark Dilger <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: XLog size reductions: Reduced XLog record header size for PG17"
}
] |
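The size-class scheme debated in this thread (spending two spare bits of another field, such as the block id byte, on "how wide is the length field", so that short lengths take one byte and zero-length data takes none) is easier to follow with code in front of you. The sketch below is illustrative only and is not taken from the patch set: the type, the function names and the 0/1/2/4-byte class boundaries are all assumptions made for this example.

/* Illustrative sketch only; not from the patch set. Values are stored in
 * native byte order, as WAL data is. */
#include <stdint.h>
#include <string.h>

typedef enum SizeClass
{
    SZCLASS_0 = 0,              /* value is 0; no length bytes stored */
    SZCLASS_1 = 1,              /* value fits in 1 byte */
    SZCLASS_2 = 2,              /* value fits in 2 bytes */
    SZCLASS_4 = 3               /* value stored as a full 4 bytes */
} SizeClass;

/* Encode len into out[]; returns the class, sets *nbytes actually written. */
static SizeClass
encode_length(uint32_t len, uint8_t *out, int *nbytes)
{
    if (len == 0)
    {
        *nbytes = 0;
        return SZCLASS_0;
    }
    if (len <= UINT8_MAX)
    {
        out[0] = (uint8_t) len;
        *nbytes = 1;
        return SZCLASS_1;
    }
    if (len <= UINT16_MAX)
    {
        uint16_t    v = (uint16_t) len;

        memcpy(out, &v, sizeof(v));
        *nbytes = 2;
        return SZCLASS_2;
    }
    memcpy(out, &len, sizeof(len));
    *nbytes = 4;
    return SZCLASS_4;
}

/* Decode using the class recovered from wherever those two bits live. */
static uint32_t
decode_length(SizeClass cls, const uint8_t *in)
{
    uint16_t    v16;
    uint32_t    v32;

    switch (cls)
    {
        case SZCLASS_0:
            return 0;
        case SZCLASS_1:
            return in[0];
        case SZCLASS_2:
            memcpy(&v16, in, sizeof(v16));
            return v16;
        default:
            memcpy(&v32, in, sizeof(v32));
            return v32;
    }
}

Because most registered blocks carry little or no data, the common case pays one byte (or none) for the length field, which is where the per-block savings in this line of patches come from.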
[
{
"msg_contents": "Hi all,\n(adding Daniel in CC.)\n\nCompiling Postgres up to 13 with OpenSSL 3.0 leads to a couple of\ncompilation warnings with what OpenSSL considers as deprecated, like:\nsha2_openssl.c: In function pg_sha384_init\nsha2_openssl.c:70:9: warning: SHA384_Init is deprecated =\nSince OpenSSL 3.0 [-Wdeprecated-declarations]\n 70 | SHA384_Init((SHA512_CTX *) ctx);\n | ^~~~~~~~~~~\n/usr/include/openssl/sha.h:119:27: note: declared here\n 119 | OSSL_DEPRECATEDIN_3_0 int SHA384_Init(SHA512_CTX *c);\n\nI was looking at the code of OpenSSL to see if there would be a way to\nsilenced these, and found about OPENSSL_SUPPRESS_DEPRECATED.\n\nI have been annoyed by these in the past when doing backpatches, as\nthis creates some noise, and the only place where this counts is\nsha2_openssl.c. Thoughts about doing something like the attached for\n~13?\n--\nMichael",
"msg_date": "Wed, 21 Jun 2023 11:53:44 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Remove deprecation warnings when compiling PG ~13 with OpenSSL 3.0~"
},
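For readers who have not met the knob: OPENSSL_SUPPRESS_DEPRECATED is honoured by the OpenSSL 3.0 headers themselves and simply drops the deprecation attributes, so the old digest entry points compile quietly. A minimal sketch of the effect being proposed here; the file layout and helper function are invented for illustration and are not the attached patch:

#include <stddef.h>

/* Illustrative example, not the attached patch.
 * Must be defined before any OpenSSL header is included. */
#define OPENSSL_SUPPRESS_DEPRECATED

#include <openssl/sha.h>

/*
 * With the define above, SHA256_Init()/SHA256_Update()/SHA256_Final() no
 * longer trigger -Wdeprecated-declarations when built against OpenSSL 3.0.
 */
static void
sha256_example(const unsigned char *data, size_t len,
               unsigned char out[SHA256_DIGEST_LENGTH])
{
    SHA256_CTX  ctx;

    SHA256_Init(&ctx);
    SHA256_Update(&ctx, data, len);
    SHA256_Final(out, &ctx);
}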
{
"msg_contents": "Hi,\n\nOn 2023-06-21 11:53:44 +0900, Michael Paquier wrote:\n> Compiling Postgres up to 13 with OpenSSL 3.0 leads to a couple of\n> compilation warnings with what OpenSSL considers as deprecated, like:\n> sha2_openssl.c: In function pg_sha384_init\n> sha2_openssl.c:70:9: warning: SHA384_Init is deprecated =\n> Since OpenSSL 3.0 [-Wdeprecated-declarations]\n> 70 | SHA384_Init((SHA512_CTX *) ctx);\n> | ^~~~~~~~~~~\n> /usr/include/openssl/sha.h:119:27: note: declared here\n> 119 | OSSL_DEPRECATEDIN_3_0 int SHA384_Init(SHA512_CTX *c);\n> \n> I was looking at the code of OpenSSL to see if there would be a way to\n> silenced these, and found about OPENSSL_SUPPRESS_DEPRECATED.\n> \n> I have been annoyed by these in the past when doing backpatches, as\n> this creates some noise, and the only place where this counts is\n> sha2_openssl.c. Thoughts about doing something like the attached for\n> ~13?\n\nWouldn't the proper fix be to backpatch 4d3db13621b? Just suppressing all\ndeprecations doesn't strike me as particularly wise, especially because we've\nchosen a different path for 14+?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 20 Jun 2023 22:44:59 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Remove deprecation warnings when compiling PG ~13 with OpenSSL\n 3.0~"
},
{
"msg_contents": "> On 21 Jun 2023, at 07:44, Andres Freund <[email protected]> wrote:\n> On 2023-06-21 11:53:44 +0900, Michael Paquier wrote:\n\n>> I have been annoyed by these in the past when doing backpatches, as\n>> this creates some noise, and the only place where this counts is\n>> sha2_openssl.c. Thoughts about doing something like the attached for\n>> ~13?\n> \n> Wouldn't the proper fix be to backpatch 4d3db13621b?\n\nAgreed, I'd be more inclined to go with OPENSSL_API_COMPAT. If we still get\nwarnings with that set then I feel those warrant special consideration rather\nthan a blanket suppression.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Wed, 21 Jun 2023 09:16:38 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Remove deprecation warnings when compiling PG ~13 with OpenSSL\n 3.0~"
},
{
"msg_contents": "On Wed, Jun 21, 2023 at 09:16:38AM +0200, Daniel Gustafsson wrote:\n> Agreed, I'd be more inclined to go with OPENSSL_API_COMPAT. If we still get\n> warnings with that set then I feel those warrant special consideration rather\n> than a blanket suppression.\n\n4d3db136 seems to be OK on REL_13_STABLE with a direct cherry-pick.\nREL_11_STABLE and REL_12_STABLE require two different changes:\n- pg_config.h.win32 needs to list OPENSSL_API_COMPAT.\n- Solution.pm needs an extra #define OPENSSL_API_COMPAT in\nGenerateFiles() whose value can be retrieved from configure.in like in\n13~.\n\nAnything I am missing perhaps?\n--\nMichael",
"msg_date": "Wed, 21 Jun 2023 16:43:59 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Remove deprecation warnings when compiling PG ~13 with OpenSSL\n 3.0~"
},
{
"msg_contents": "On 21.06.23 09:43, Michael Paquier wrote:\n> On Wed, Jun 21, 2023 at 09:16:38AM +0200, Daniel Gustafsson wrote:\n>> Agreed, I'd be more inclined to go with OPENSSL_API_COMPAT. If we still get\n>> warnings with that set then I feel those warrant special consideration rather\n>> than a blanket suppression.\n> \n> 4d3db136 seems to be OK on REL_13_STABLE with a direct cherry-pick.\n> REL_11_STABLE and REL_12_STABLE require two different changes:\n> - pg_config.h.win32 needs to list OPENSSL_API_COMPAT.\n> - Solution.pm needs an extra #define OPENSSL_API_COMPAT in\n> GenerateFiles() whose value can be retrieved from configure.in like in\n> 13~.\n> \n> Anything I am missing perhaps?\n\nBackpatching the OPENSSL_API_COMPAT change would set the minimum OpenSSL \nversion to 1.0.1, which is newer than what was so far required in those \nbranches. That is the reason we didn't do this.\n\n\n\n",
"msg_date": "Wed, 21 Jun 2023 10:11:33 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Remove deprecation warnings when compiling PG ~13 with OpenSSL\n 3.0~"
},
{
"msg_contents": "Hi,\n\nOn 2023-06-21 10:11:33 +0200, Peter Eisentraut wrote:\n> On 21.06.23 09:43, Michael Paquier wrote:\n> > On Wed, Jun 21, 2023 at 09:16:38AM +0200, Daniel Gustafsson wrote:\n> > > Agreed, I'd be more inclined to go with OPENSSL_API_COMPAT. If we still get\n> > > warnings with that set then I feel those warrant special consideration rather\n> > > than a blanket suppression.\n> > \n> > 4d3db136 seems to be OK on REL_13_STABLE with a direct cherry-pick.\n> > REL_11_STABLE and REL_12_STABLE require two different changes:\n> > - pg_config.h.win32 needs to list OPENSSL_API_COMPAT.\n> > - Solution.pm needs an extra #define OPENSSL_API_COMPAT in\n> > GenerateFiles() whose value can be retrieved from configure.in like in\n> > 13~.\n> > \n> > Anything I am missing perhaps?\n> \n> Backpatching the OPENSSL_API_COMPAT change would set the minimum OpenSSL\n> version to 1.0.1, which is newer than what was so far required in those\n> branches. That is the reason we didn't do this.\n\nWhat's the problem with just setting a different version in those branches?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 21 Jun 2023 09:50:00 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Remove deprecation warnings when compiling PG ~13 with OpenSSL\n 3.0~"
},
{
"msg_contents": "On Wed, Jun 21, 2023 at 10:11:33AM +0200, Peter Eisentraut wrote:\n> Backpatching the OPENSSL_API_COMPAT change would set the minimum OpenSSL\n> version to 1.0.1, which is newer than what was so far required in those\n> branches. That is the reason we didn't do this.\n\nLooking at the relevant thread from 2020, this was still at the point\nwhere we did not consider supporting 3.0 for all the stable branches\nbecause 3.0 was in alpha:\nhttps://www.postgresql.org/message-id/[email protected]\n\nHowever, recent fixes like cab553a have made that possible, and we do\nbuild with OpenSSL 3.0 across the whole set of stable branches.\nRegarding the versions of OpenSSL supported:\n- REL_13_STABLE requires 1.0.1 since 7b283d0e1.\n- REL_12_STABLE and REL_11_STABLE require 0.9.8.\n\nFor 0.9.8, OPENSSL_API_COMPAT needs to be set at 0x00908000L (see\nupstream's CHANGES.md). So I don't see a reason not to do as\nsuggested by Andres?\n\nI have tested the attached patches across 11~13 with various versions\nof OpenSSL (OPENSSL_API_COMPAT exists since 1.1.0), and this is\nworking here. Note that I don't have a MSVC environment at hand to\ntest this change on Windows, still `perl -cw Solution.pm` is OK with\nit.\n\nWhat do you think about the attached patch set (one for each branch)?\n--\nMichael",
"msg_date": "Thu, 22 Jun 2023 08:53:31 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Remove deprecation warnings when compiling PG ~13 with OpenSSL\n 3.0~"
},
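The difference with the OPENSSL_API_COMPAT route is that the macro states which API generation the code is written against, and the 3.0 headers then stop decorating interfaces that were deprecated after that generation, rather than suppressing the attribute wholesale. A sketch of the effect, with the caveat that in the real tree the define is injected by configure / Solution.pm and not written into a source file; the placement below only keeps the example self-contained:

#include <stddef.h>

/* Illustrative only: target the OpenSSL 0.9.8 API, the floor mentioned
 * upthread. In the real build this comes from configure / Solution.pm. */
#define OPENSSL_API_COMPAT 0x00908000L

#include <openssl/sha.h>

/* SHA-384 reuses the SHA-512 context type, just as sha2_openssl.c does. */
static void
sha384_example(const unsigned char *data, size_t len,
               unsigned char out[SHA384_DIGEST_LENGTH])
{
    SHA512_CTX  ctx;

    SHA384_Init(&ctx);
    SHA384_Update(&ctx, data, len);
    SHA384_Final(out, &ctx);
}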
{
"msg_contents": "> On 22 Jun 2023, at 01:53, Michael Paquier <[email protected]> wrote:\n\n> I have tested the attached patches across 11~13 with various versions\n> of OpenSSL (OPENSSL_API_COMPAT exists since 1.1.0), and this is\n> working here. Note that I don't have a MSVC environment at hand to\n> test this change on Windows, still `perl -cw Solution.pm` is OK with\n> it.\n\nThese patches LGTM from reading, but I think the Discussion link in the commit\nmessages should refer to this thread as well.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Thu, 22 Jun 2023 10:02:58 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Remove deprecation warnings when compiling PG ~13 with OpenSSL\n 3.0~"
},
{
"msg_contents": "On Thu, Jun 22, 2023 at 10:02:58AM +0200, Daniel Gustafsson wrote:\n> These patches LGTM from reading,\n\nThanks for double-checking.\n\n> but I think the Discussion link in the commit\n> messages should refer to this thread as well.\n\nOf course.\n--\nMichael",
"msg_date": "Thu, 22 Jun 2023 17:39:33 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Remove deprecation warnings when compiling PG ~13 with OpenSSL\n 3.0~"
},
{
"msg_contents": "On 22.06.23 01:53, Michael Paquier wrote:\n> Looking at the relevant thread from 2020, this was still at the point\n> where we did not consider supporting 3.0 for all the stable branches\n> because 3.0 was in alpha:\n> https://www.postgresql.org/message-id/[email protected]\n> \n> However, recent fixes like cab553a have made that possible, and we do\n> build with OpenSSL 3.0 across the whole set of stable branches.\n> Regarding the versions of OpenSSL supported:\n> - REL_13_STABLE requires 1.0.1 since 7b283d0e1.\n> - REL_12_STABLE and REL_11_STABLE require 0.9.8.\n> \n> For 0.9.8, OPENSSL_API_COMPAT needs to be set at 0x00908000L (see\n> upstream's CHANGES.md). So I don't see a reason not to do as\n> suggested by Andres?\n\nThe message linked to above also says:\n\n > I'm not sure. I don't have a good sense of what OpenSSL versions we\n > claim to support in branches older than PG13. We made a conscious\n > decision for 1.0.1 in PG13, but I seem to recall that that discussion\n > also revealed that the version assumptions before that were quite\n > inconsistent. Code in PG12 and before makes references to OpenSSL as\n > old as 0.9.6. But OpenSSL 3.0.0 will reject a compat level older than\n > 0.9.8.\n\n\n\n",
"msg_date": "Thu, 22 Jun 2023 20:08:54 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Remove deprecation warnings when compiling PG ~13 with OpenSSL\n 3.0~"
},
{
"msg_contents": "On Thu, Jun 22, 2023 at 08:08:54PM +0200, Peter Eisentraut wrote:\n> The message linked to above also says:\n> \n>> I'm not sure. I don't have a good sense of what OpenSSL versions we\n>> claim to support in branches older than PG13. We made a conscious\n>> decision for 1.0.1 in PG13, but I seem to recall that that discussion\n>> also revealed that the version assumptions before that were quite\n>> inconsistent. Code in PG12 and before makes references to OpenSSL as\n>> old as 0.9.6. But OpenSSL 3.0.0 will reject a compat level older than\n>> 0.9.8.\n\nWell, I highly doubt that anybody has tried to compile Postgres 12\nwith OpenSSL 0.9.7 for a few years. If they attempt to do so, the\ncompilation fails:\n<command-line>: note: this is the location of the previous definition\nIn file included from ../../src/include/common/scram-common.h:16,\n from scram-common.c:23:\n../../src/include/common/sha2.h:73:9: error: unknown type name ‘SHA256_CTX’\n 73 | typedef SHA256_CTX pg_sha256_ctx;\n\nOne reason is that SHA256_CTX is defined in OpenSSL 0.9.8\ncrypto/sha/sha.h, but this exists only in fips-1.0 in OpenSSL 0.9.7,\nwhile we rely on SHA256_CTX in src/common/ since SCRAM exists.\n\nAlso, note that the documentation claims that the minimum version of\nOpenSSL supported is 0.9.8, which is something that commit 9b7cd59 has\ndone, impacting Postgres 10~. So your argument looks incorrect to me?\n\nHonestly, I see no reason to not move on with this and remove these\ndeprecation warnings as proposed by the last patches sent. (I have\nrun builds with 0.9.8, FWIW.)\n--\nMichael",
"msg_date": "Fri, 23 Jun 2023 07:22:54 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Remove deprecation warnings when compiling PG ~13 with OpenSSL\n 3.0~"
},
{
"msg_contents": "On 23.06.23 00:22, Michael Paquier wrote:\n> Also, note that the documentation claims that the minimum version of\n> OpenSSL supported is 0.9.8, which is something that commit 9b7cd59 has\n> done, impacting Postgres 10~. So your argument looks incorrect to me?\n\nConsidering that, yes.\n\n\n",
"msg_date": "Fri, 23 Jun 2023 22:41:06 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Remove deprecation warnings when compiling PG ~13 with OpenSSL\n 3.0~"
},
{
"msg_contents": "On Fri, Jun 23, 2023 at 10:41:06PM +0200, Peter Eisentraut wrote:\n> Considering that, yes.\n\nThanks, applied to 11~13, then.\n--\nMichael",
"msg_date": "Sat, 24 Jun 2023 20:34:10 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Remove deprecation warnings when compiling PG ~13 with OpenSSL\n 3.0~"
}
] |
[
{
"msg_contents": "I define a table user_ranks as such:\n\nCREATE TABLE user_ranks (\n id INTEGER GENERATED ALWAYS AS IDENTITY PRIMARY KEY,\n rank INTEGER NOT NULL,\n CONSTRAINT \"by (rank, id)\" UNIQUE (rank, id)\n);\n\nINSERT INTO user_ranks (user_id, rank) SELECT generate_series(1, 10000),\ngenerate_series(1, 10000);\n\nHere's a query I'd like to optimize:\n\nexplain (analyze,verbose)\nSELECT\n t3_0.\"id\" AS \"id\",\n t3_0.\"rank\" AS \"rank\"\nFROM\n LATERAL (\n SELECT\n t4_0.\"rank\" AS \"rank\"\n FROM\n user_ranks AS t4_0\n WHERE\n (t4_0.\"id\" = 4732455)\n ) AS t3_1\n INNER JOIN user_ranks AS t3_0 ON true\nWHERE\n (\n ((t3_0.\"rank\", t3_0.\"id\") <= (t3_1.\"rank\", 4732455))\n AND true\n )\nORDER BY\n t3_0.\"rank\" DESC,\n t3_0.\"id\" DESC\nLIMIT\n 10\n\nIt compiles to the following plan:\n\n Limit (cost=0.56..250.94 rows=10 width=12) (actual time=8.078..8.078\nrows=1 loops=1)\n Output: t3_0.id, t3_0.rank\n -> Nested Loop (cost=0.56..41763.27 rows=1668 width=12) (actual\ntime=8.075..8.076 rows=1 loops=1)\n Output: t3_0.id, t3_0.rank\n Inner Unique: true\n Join Filter: (ROW(t3_0.rank, t3_0.id) <= ROW(t4_0.rank, 4732455))\n Rows Removed by Join Filter: 5002\n -> Index Only Scan Backward using \"by (rank,id)\" on\npublic.user_ranks t3_0 (cost=0.28..163.33 rows=5003 width=12) (actual\ntime=0.023..0.638 rows=5003 loops=1)\n Output: t3_0.rank, t3_0.id\n Heap Fetches: 0\n -> Index Scan using \"by id\" on public.user_ranks t4_0\n (cost=0.28..8.30 rows=1 width=8) (actual time=0.001..0.001 rows=1\nloops=5003)\n Output: t4_0.id, t4_0.rating, t4_0.rank\n Index Cond: (t4_0.id = 4732455)\n\nAs you can see, there are a lot of rows returned by t3_0, which are then\nfiltered by Join Filter. But it would have been better if instead of the\nfilter, the t3_0 table would have an Index Cond. Similar to how it happens\nwhen a correlated subquery is used (or a CTE)\n\nexplain (analyze,verbose)\nSELECT\n t3_0.\"id\" AS \"id\",\n t3_0.\"rank\" AS \"rank\"\nFROM\n user_ranks AS t3_0\nWHERE\n (\n ((t3_0.\"rank\", t3_0.\"id\") <= (\n SELECT\n t4_0.\"rank\" AS \"rank\",\n t4_0.\"id\" AS \"id\"\n FROM\n user_ranks AS t4_0\n WHERE\n (t4_0.\"id\" = 4732455)\n ))\n AND true\n )\nORDER BY\n t3_0.\"rank\" DESC,\n t3_0.\"id\" DESC\nLIMIT\n 10\n\n Limit (cost=8.58..8.95 rows=10 width=12) (actual time=0.062..0.064 rows=1\nloops=1)\n Output: t3_0.id, t3_0.rank\n InitPlan 1 (returns $0,$1)\n -> Index Scan using \"by id\" on public.user_ranks t4_0\n (cost=0.28..8.30 rows=1 width=12) (actual time=0.024..0.025 rows=1 loops=1)\n Output: t4_0.rank, t4_0.id\n Index Cond: (t4_0.id = 4732455)\n -> Index Only Scan Backward using \"by (rank,id)\" on public.user_ranks\nt3_0 (cost=0.28..61.47 rows=1668 width=12) (actual time=0.061..0.062\nrows=1 loops=1)\n Output: t3_0.id, t3_0.rank\n Index Cond: (ROW(t3_0.rank, t3_0.id) <= ROW($0, $1))\n Heap Fetches: 0\n\n\nI'm an opposite of a PostgreSQL expert, but it was surprising to me to see\nthat a correlated subquery behaves better than a join. Is this normal? 
Is\nit something worth fixing/easy to fix?\n\nSincerely,\nBakhtiyar\n\nI define a table user_ranks as such:CREATE TABLE user_ranks ( id INTEGER GENERATED ALWAYS AS IDENTITY PRIMARY KEY, rank INTEGER NOT NULL, CONSTRAINT \"by (rank, id)\" UNIQUE (rank, id));INSERT INTO user_ranks (user_id, rank) SELECT generate_series(1, 10000), generate_series(1, 10000);Here's a query I'd like to optimize:explain (analyze,verbose)SELECT t3_0.\"id\" AS \"id\", t3_0.\"rank\" AS \"rank\"FROM LATERAL ( SELECT t4_0.\"rank\" AS \"rank\" FROM user_ranks AS t4_0 WHERE (t4_0.\"id\" = 4732455) ) AS t3_1 INNER JOIN user_ranks AS t3_0 ON trueWHERE ( ((t3_0.\"rank\", t3_0.\"id\") <= (t3_1.\"rank\", 4732455)) AND true )ORDER BY t3_0.\"rank\" DESC, t3_0.\"id\" DESCLIMIT 10It compiles to the following plan: Limit (cost=0.56..250.94 rows=10 width=12) (actual time=8.078..8.078 rows=1 loops=1) Output: t3_0.id, t3_0.rank -> Nested Loop (cost=0.56..41763.27 rows=1668 width=12) (actual time=8.075..8.076 rows=1 loops=1) Output: t3_0.id, t3_0.rank Inner Unique: true Join Filter: (ROW(t3_0.rank, t3_0.id) <= ROW(t4_0.rank, 4732455)) Rows Removed by Join Filter: 5002 -> Index Only Scan Backward using \"by (rank,id)\" on public.user_ranks t3_0 (cost=0.28..163.33 rows=5003 width=12) (actual time=0.023..0.638 rows=5003 loops=1) Output: t3_0.rank, t3_0.id Heap Fetches: 0 -> Index Scan using \"by id\" on public.user_ranks t4_0 (cost=0.28..8.30 rows=1 width=8) (actual time=0.001..0.001 rows=1 loops=5003) Output: t4_0.id, t4_0.rating, t4_0.rank Index Cond: (t4_0.id = 4732455)As you can see, there are a lot of rows returned by t3_0, which are then filtered by Join Filter. But it would have been better if instead of the filter, the t3_0 table would have an Index Cond. Similar to how it happens when a correlated subquery is used (or a CTE)explain (analyze,verbose)SELECT t3_0.\"id\" AS \"id\", t3_0.\"rank\" AS \"rank\"FROM user_ranks AS t3_0 WHERE ( ((t3_0.\"rank\", t3_0.\"id\") <= ( SELECT t4_0.\"rank\" AS \"rank\", t4_0.\"id\" AS \"id\" FROM user_ranks AS t4_0 WHERE (t4_0.\"id\" = 4732455) )) AND true )ORDER BY t3_0.\"rank\" DESC, t3_0.\"id\" DESCLIMIT 10 Limit (cost=8.58..8.95 rows=10 width=12) (actual time=0.062..0.064 rows=1 loops=1) Output: t3_0.id, t3_0.rank InitPlan 1 (returns $0,$1) -> Index Scan using \"by id\" on public.user_ranks t4_0 (cost=0.28..8.30 rows=1 width=12) (actual time=0.024..0.025 rows=1 loops=1) Output: t4_0.rank, t4_0.id Index Cond: (t4_0.id = 4732455) -> Index Only Scan Backward using \"by (rank,id)\" on public.user_ranks t3_0 (cost=0.28..61.47 rows=1668 width=12) (actual time=0.061..0.062 rows=1 loops=1) Output: t3_0.id, t3_0.rank Index Cond: (ROW(t3_0.rank, t3_0.id) <= ROW($0, $1)) Heap Fetches: 0I'm an opposite of a PostgreSQL expert, but it was surprising to me to see that a correlated subquery behaves better than a join. Is this normal? Is it something worth fixing/easy to fix?Sincerely,Bakhtiyar",
"msg_date": "Tue, 20 Jun 2023 20:37:00 -0700",
"msg_from": "=?UTF-8?Q?B=C9=99xtiyar_Neyman?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Can JoinFilter condition be pushed down into IndexScan?"
},
{
"msg_contents": "\nOn 6/21/23 05:37, Bəxtiyar Neyman wrote:\n> I define a table user_ranks as such:\n> \n> CREATE TABLE user_ranks (\n> id INTEGER GENERATED ALWAYS AS IDENTITY PRIMARY KEY,\n> rank INTEGER NOT NULL,\n> CONSTRAINT \"by (rank, id)\" UNIQUE (rank, id)\n> );\n> \n> INSERT INTO user_ranks (user_id, rank) SELECT generate_series(1, 10000),\n> generate_series(1, 10000);\n> \n\nThis doesn't work, the INSERT needs to only insert into (rank).\n\n> Here's a query I'd like to optimize:\n> \n> explain (analyze,verbose)\n> SELECT\n> t3_0.\"id\" AS \"id\",\n> t3_0.\"rank\" AS \"rank\"\n> FROM\n> LATERAL (\n> SELECT\n> t4_0.\"rank\" AS \"rank\"\n> FROM\n> user_ranks AS t4_0\n> WHERE\n> (t4_0.\"id\" = 4732455)\n> ) AS t3_1\n> INNER JOIN user_ranks AS t3_0 ON true\n> WHERE\n> (\n> ((t3_0.\"rank\", t3_0.\"id\") <= (t3_1.\"rank\", 4732455))\n> AND true\n> )\n> ORDER BY\n> t3_0.\"rank\" DESC,\n> t3_0.\"id\" DESC\n> LIMIT\n> 10\n> \n\nNot sure why you make the query unnecessarily complicated - the LATERAL\nis pointless I believe, the \"AND true\" just make it harder to read.\nLet's rewrite it it like this to make discussion easier:\n\nexplain (analyze,verbose)\nSELECT\n t3_0.\"id\" AS \"id\",\n t3_0.\"rank\" AS \"rank\"\nFROM\n user_ranks AS t3_1\n INNER JOIN user_ranks AS t3_0\n ON ((t3_0.\"rank\", t3_0.\"id\") <= (t3_1.\"rank\", t3_1.\"id\"))\nWHERE\n t3_1.\"id\" = 4732455\nORDER BY\n t3_0.\"rank\" DESC,\n t3_0.\"id\" DESC\nLIMIT\n 10\n\nSame query, but perhaps easier to read.\n\n> It compiles to the following plan:\n> \n> Limit (cost=0.56..250.94 rows=10 width=12) (actual time=8.078..8.078\n> rows=1 loops=1)\n> Output: t3_0.id <http://t3_0.id>, t3_0.rank\n> -> Nested Loop (cost=0.56..41763.27 rows=1668 width=12) (actual\n> time=8.075..8.076 rows=1 loops=1)\n> Output: t3_0.id <http://t3_0.id>, t3_0.rank\n> Inner Unique: true\n> Join Filter: (ROW(t3_0.rank, t3_0.id <http://t3_0.id>) <=\n> ROW(t4_0.rank, 4732455))\n> Rows Removed by Join Filter: 5002\n> -> Index Only Scan Backward using \"by (rank,id)\" on\n> public.user_ranks t3_0 (cost=0.28..163.33 rows=5003 width=12) (actual\n> time=0.023..0.638 rows=5003 loops=1)\n> Output: t3_0.rank, t3_0.id <http://t3_0.id>\n> Heap Fetches: 0\n> -> Index Scan using \"by id\" on public.user_ranks t4_0\n> (cost=0.28..8.30 rows=1 width=8) (actual time=0.001..0.001 rows=1\n> loops=5003)\n> Output: t4_0.id <http://t4_0.id>, t4_0.rating, t4_0.rank\n> Index Cond: (t4_0.id <http://t4_0.id> = 4732455)\n> \n> As you can see, there are a lot of rows returned by t3_0, which are then\n> filtered by Join Filter. But it would have been better if instead of the\n> filter, the t3_0 table would have an Index Cond. 
Similar to how it\n> happens when a correlated subquery is used (or a CTE)\n> \n> explain (analyze,verbose)\n> SELECT\n> t3_0.\"id\" AS \"id\",\n> t3_0.\"rank\" AS \"rank\"\n> FROM\n> user_ranks AS t3_0\n> WHERE\n> (\n> ((t3_0.\"rank\", t3_0.\"id\") <= (\n> SELECT\n> t4_0.\"rank\" AS \"rank\",\n> t4_0.\"id\" AS \"id\"\n> FROM\n> user_ranks AS t4_0\n> WHERE\n> (t4_0.\"id\" = 4732455)\n> ))\n> AND true\n> )\n> ORDER BY\n> t3_0.\"rank\" DESC,\n> t3_0.\"id\" DESC\n> LIMIT\n> 10\n> \n> Limit (cost=8.58..8.95 rows=10 width=12) (actual time=0.062..0.064\n> rows=1 loops=1)\n> Output: t3_0.id <http://t3_0.id>, t3_0.rank\n> InitPlan 1 (returns $0,$1)\n> -> Index Scan using \"by id\" on public.user_ranks t4_0\n> (cost=0.28..8.30 rows=1 width=12) (actual time=0.024..0.025 rows=1 loops=1)\n> Output: t4_0.rank, t4_0.id <http://t4_0.id>\n> Index Cond: (t4_0.id <http://t4_0.id> = 4732455)\n> -> Index Only Scan Backward using \"by (rank,id)\" on\n> public.user_ranks t3_0 (cost=0.28..61.47 rows=1668 width=12) (actual\n> time=0.061..0.062 rows=1 loops=1)\n> Output: t3_0.id <http://t3_0.id>, t3_0.rank\n> Index Cond: (ROW(t3_0.rank, t3_0.id <http://t3_0.id>) <=\n> ROW($0, $1))\n> Heap Fetches: 0\n> \n> \n> I'm an opposite of a PostgreSQL expert, but it was surprising to me to\n> see that a correlated subquery behaves better than a join. Is this\n> normal? Is it something worth fixing/easy to fix?\n> \n\nBecause those queries are not doing the same thing. In the first query\nyou sort by t3_0 columns, while the \"id = 4732455\" condition is on the\nother table. And so it can't use the index scan for sorting.\n\nWhile in the second query it can do that, and it doesn't need to do the\nexplicit sort (which needs to fetch all the rows etc.). If you alter the\nfirst query to do\n\n ORDER BY\n t3_1.\"rank\" DESC,\n t3_1.\"id\" DESC\n\nit'll use the same plan as the second query. Well, not exactly the same,\nbut much closer to it.\n\n\nNevertheless, these example queries have other estimation issues, which\nmight result in poor plan choices. There's no row for (id = 4732455),\nand the cross-table inequality estimate is just some default estimate\n(33%). In reality, this produces no rows.\n\nSecondly, for LIMIT, the cost is assumed to be \"proportional\" fraction\nof the input costs. In other words, we expect the limit to terminate\nafter only seeing a fraction of rows - if we expect to see 10000 rows\nand the query has LIMIT 10, we expect to only do 1/1000 of the work. But\nif the subtree does not produce 10000 rows, that goes out of the window\nand we may need to do much more work.\n\nI'm not sure why the two queries actually use different plans even after\nthe ORDER BY change. I would have expected the second query (with\ncorrelated subquery) to be transformed to a join, but perhaps that\ntransformation would be invalid, or maybe the planner does that based on\ncost (and then it's not surprising due to the estimation issues).\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 21 Jun 2023 15:28:11 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Can JoinFilter condition be pushed down into IndexScan?"
},
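For anyone replaying these plans: as noted in the reply above, the setup from the opening mail does not load as written, because the table has no user_id column. A corrected version of the same setup (an approximation, since the plans also mention a rating column and a "by id" index that the original CREATE TABLE does not show) could be:

-- Illustrative reconstruction of the thread's setup, per the correction above.
CREATE TABLE user_ranks (
    id   INTEGER GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    rank INTEGER NOT NULL,
    CONSTRAINT "by (rank, id)" UNIQUE (rank, id)
);

INSERT INTO user_ranks (rank)
SELECT generate_series(1, 10000);

ANALYZE user_ranks;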
{
"msg_contents": "Thanks Tomas for the lengthy write-up!\n\nPardon the noise in the queries (LATERAL, AND true etc): they were\nautogenerated by the library we wrote.\n\n> Because those queries are not doing the same thing. In the first query\n> you sort by t3_0 columns, while the \"id = 4732455\" condition is on the\n> other table. And so it can't use the index scan for sorting.\n>\n> While in the second query it can do that, and it doesn't need to do the\n> explicit sort (which needs to fetch all the rows etc.).\n\nLet me try to explain what both of my queries do:\n1) Get the rank of the user using its id (id = 4732455 in this example, but\nit could have been one that exists, e.g. id = 500). This is LATERAL t3_1 in\nthe first query and subquery in the WHERE clause of the second query.\n2) Using that rank, get the next 10 users by rank. This is t3_0.\n\nThus I can't just change the first query to \"ORDER BY t3_1.\"rank\" DESC,\nt3_1.\"id\" DESC\" as you suggest, because then the order of returned rows\nwill not be guaranteed. In fact, such a clause will have no effect because\nthere is going to be at most one row supplied by t3_1 anyway.\n\nMy question thus still stands. The planner knows that t3_1 has at most one\nrow, and it knows that t3_0 can produce up to 5000 rows. Yet, it doesn't\nfigure out that it could have lowered the Join Filter condition from the\nfirst plan as an Index Cond of the Index Scan of t3_1. Is there a\nfundamental reason for this, or is this something worth improving in the\nplanner?\n\nSincerely,\nBakhtiyar\n\nOn Wed, Jun 21, 2023 at 6:28 AM Tomas Vondra <[email protected]>\nwrote:\n\n>\n> On 6/21/23 05:37, Bəxtiyar Neyman wrote:\n> > I define a table user_ranks as such:\n> >\n> > CREATE TABLE user_ranks (\n> > id INTEGER GENERATED ALWAYS AS IDENTITY PRIMARY KEY,\n> > rank INTEGER NOT NULL,\n> > CONSTRAINT \"by (rank, id)\" UNIQUE (rank, id)\n> > );\n> >\n> > INSERT INTO user_ranks (user_id, rank) SELECT generate_series(1, 10000),\n> > generate_series(1, 10000);\n> >\n>\n> This doesn't work, the INSERT needs to only insert into (rank).\n>\n> > Here's a query I'd like to optimize:\n> >\n> > explain (analyze,verbose)\n> > SELECT\n> > t3_0.\"id\" AS \"id\",\n> > t3_0.\"rank\" AS \"rank\"\n> > FROM\n> > LATERAL (\n> > SELECT\n> > t4_0.\"rank\" AS \"rank\"\n> > FROM\n> > user_ranks AS t4_0\n> > WHERE\n> > (t4_0.\"id\" = 4732455)\n> > ) AS t3_1\n> > INNER JOIN user_ranks AS t3_0 ON true\n> > WHERE\n> > (\n> > ((t3_0.\"rank\", t3_0.\"id\") <= (t3_1.\"rank\", 4732455))\n> > AND true\n> > )\n> > ORDER BY\n> > t3_0.\"rank\" DESC,\n> > t3_0.\"id\" DESC\n> > LIMIT\n> > 10\n> >\n>\n> Not sure why you make the query unnecessarily complicated - the LATERAL\n> is pointless I believe, the \"AND true\" just make it harder to read.\n> Let's rewrite it it like this to make discussion easier:\n>\n> explain (analyze,verbose)\n> SELECT\n> t3_0.\"id\" AS \"id\",\n> t3_0.\"rank\" AS \"rank\"\n> FROM\n> user_ranks AS t3_1\n> INNER JOIN user_ranks AS t3_0\n> ON ((t3_0.\"rank\", t3_0.\"id\") <= (t3_1.\"rank\", t3_1.\"id\"))\n> WHERE\n> t3_1.\"id\" = 4732455\n> ORDER BY\n> t3_0.\"rank\" DESC,\n> t3_0.\"id\" DESC\n> LIMIT\n> 10\n>\n> Same query, but perhaps easier to read.\n>\n> > It compiles to the following plan:\n> >\n> > Limit (cost=0.56..250.94 rows=10 width=12) (actual time=8.078..8.078\n> > rows=1 loops=1)\n> > Output: t3_0.id <http://t3_0.id>, t3_0.rank\n> > -> Nested Loop (cost=0.56..41763.27 rows=1668 width=12) (actual\n> > time=8.075..8.076 rows=1 loops=1)\n> > Output: t3_0.id 
<http://t3_0.id>, t3_0.rank\n> > Inner Unique: true\n> > Join Filter: (ROW(t3_0.rank, t3_0.id <http://t3_0.id>) <=\n> > ROW(t4_0.rank, 4732455))\n> > Rows Removed by Join Filter: 5002\n> > -> Index Only Scan Backward using \"by (rank,id)\" on\n> > public.user_ranks t3_0 (cost=0.28..163.33 rows=5003 width=12) (actual\n> > time=0.023..0.638 rows=5003 loops=1)\n> > Output: t3_0.rank, t3_0.id <http://t3_0.id>\n> > Heap Fetches: 0\n> > -> Index Scan using \"by id\" on public.user_ranks t4_0\n> > (cost=0.28..8.30 rows=1 width=8) (actual time=0.001..0.001 rows=1\n> > loops=5003)\n> > Output: t4_0.id <http://t4_0.id>, t4_0.rating, t4_0.rank\n> > Index Cond: (t4_0.id <http://t4_0.id> = 4732455)\n> >\n> > As you can see, there are a lot of rows returned by t3_0, which are then\n> > filtered by Join Filter. But it would have been better if instead of the\n> > filter, the t3_0 table would have an Index Cond. Similar to how it\n> > happens when a correlated subquery is used (or a CTE)\n> >\n> > explain (analyze,verbose)\n> > SELECT\n> > t3_0.\"id\" AS \"id\",\n> > t3_0.\"rank\" AS \"rank\"\n> > FROM\n> > user_ranks AS t3_0\n> > WHERE\n> > (\n> > ((t3_0.\"rank\", t3_0.\"id\") <= (\n> > SELECT\n> > t4_0.\"rank\" AS \"rank\",\n> > t4_0.\"id\" AS \"id\"\n> > FROM\n> > user_ranks AS t4_0\n> > WHERE\n> > (t4_0.\"id\" = 4732455)\n> > ))\n> > AND true\n> > )\n> > ORDER BY\n> > t3_0.\"rank\" DESC,\n> > t3_0.\"id\" DESC\n> > LIMIT\n> > 10\n> >\n> > Limit (cost=8.58..8.95 rows=10 width=12) (actual time=0.062..0.064\n> > rows=1 loops=1)\n> > Output: t3_0.id <http://t3_0.id>, t3_0.rank\n> > InitPlan 1 (returns $0,$1)\n> > -> Index Scan using \"by id\" on public.user_ranks t4_0\n> > (cost=0.28..8.30 rows=1 width=12) (actual time=0.024..0.025 rows=1\n> loops=1)\n> > Output: t4_0.rank, t4_0.id <http://t4_0.id>\n> > Index Cond: (t4_0.id <http://t4_0.id> = 4732455)\n> > -> Index Only Scan Backward using \"by (rank,id)\" on\n> > public.user_ranks t3_0 (cost=0.28..61.47 rows=1668 width=12) (actual\n> > time=0.061..0.062 rows=1 loops=1)\n> > Output: t3_0.id <http://t3_0.id>, t3_0.rank\n> > Index Cond: (ROW(t3_0.rank, t3_0.id <http://t3_0.id>) <=\n> > ROW($0, $1))\n> > Heap Fetches: 0\n> >\n> >\n> > I'm an opposite of a PostgreSQL expert, but it was surprising to me to\n> > see that a correlated subquery behaves better than a join. Is this\n> > normal? Is it something worth fixing/easy to fix?\n> >\n>\n> Because those queries are not doing the same thing. In the first query\n> you sort by t3_0 columns, while the \"id = 4732455\" condition is on the\n> other table. And so it can't use the index scan for sorting.\n>\n> While in the second query it can do that, and it doesn't need to do the\n> explicit sort (which needs to fetch all the rows etc.). If you alter the\n> first query to do\n>\n> ORDER BY\n> t3_1.\"rank\" DESC,\n> t3_1.\"id\" DESC\n>\n> it'll use the same plan as the second query. Well, not exactly the same,\n> but much closer to it.\n>\n>\n> Nevertheless, these example queries have other estimation issues, which\n> might result in poor plan choices. There's no row for (id = 4732455),\n> and the cross-table inequality estimate is just some default estimate\n> (33%). In reality, this produces no rows.\n>\n> Secondly, for LIMIT, the cost is assumed to be \"proportional\" fraction\n> of the input costs. In other words, we expect the limit to terminate\n> after only seeing a fraction of rows - if we expect to see 10000 rows\n> and the query has LIMIT 10, we expect to only do 1/1000 of the work. 
But\n> if the subtree does not produce 10000 rows, that goes out of the window\n> and we may need to do much more work.\n>\n> I'm not sure why the two queries actually use different plans even after\n> the ORDER BY change. I would have expected the second query (with\n> correlated subquery) to be transformed to a join, but perhaps that\n> transformation would be invalid, or maybe the planner does that based on\n> cost (and then it's not surprising due to the estimation issues).\n>\n>\n> regards\n>\n> --\n> Tomas Vondra\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n>\n\nThanks Tomas for the lengthy write-up!Pardon the noise in the queries (LATERAL, AND true etc): they were autogenerated by the library we wrote. > Because those queries are not doing the same thing. In the first query> you sort by t3_0 columns, while the \"id = 4732455\" condition is on the> other table. And so it can't use the index scan for sorting.>> While in the second query it can do that, and it doesn't need to do the> explicit sort (which needs to fetch all the rows etc.). Let me try to explain what both of my queries do: 1) Get the rank of the user using its id (id = 4732455 in this example, but it could have been one that exists, e.g. id = 500). This is LATERAL t3_1 in the first query and subquery in the WHERE clause of the second query.2) Using that rank, get the next 10 users by rank. This is t3_0.Thus I can't just change the first query to \"ORDER BY t3_1.\"rank\" DESC, t3_1.\"id\" DESC\" as you suggest, because then the order of returned rows will not be guaranteed. In fact, such a clause will have no effect because there is going to be at most one row supplied by t3_1 anyway.My question thus still stands. The planner knows that t3_1 has at most one row, and it knows that t3_0 can produce up to 5000 rows. Yet, it doesn't figure out that it could have lowered the Join Filter condition from the first plan as an Index Cond of the Index Scan of t3_1. 
Is there a fundamental reason for this, or is this something worth improving in the planner?Sincerely,BakhtiyarOn Wed, Jun 21, 2023 at 6:28 AM Tomas Vondra <[email protected]> wrote:\nOn 6/21/23 05:37, Bəxtiyar Neyman wrote:\n> I define a table user_ranks as such:\n> \n> CREATE TABLE user_ranks (\n> id INTEGER GENERATED ALWAYS AS IDENTITY PRIMARY KEY,\n> rank INTEGER NOT NULL,\n> CONSTRAINT \"by (rank, id)\" UNIQUE (rank, id)\n> );\n> \n> INSERT INTO user_ranks (user_id, rank) SELECT generate_series(1, 10000),\n> generate_series(1, 10000);\n> \n\nThis doesn't work, the INSERT needs to only insert into (rank).\n\n> Here's a query I'd like to optimize:\n> \n> explain (analyze,verbose)\n> SELECT\n> t3_0.\"id\" AS \"id\",\n> t3_0.\"rank\" AS \"rank\"\n> FROM\n> LATERAL (\n> SELECT\n> t4_0.\"rank\" AS \"rank\"\n> FROM\n> user_ranks AS t4_0\n> WHERE\n> (t4_0.\"id\" = 4732455)\n> ) AS t3_1\n> INNER JOIN user_ranks AS t3_0 ON true\n> WHERE\n> (\n> ((t3_0.\"rank\", t3_0.\"id\") <= (t3_1.\"rank\", 4732455))\n> AND true\n> )\n> ORDER BY\n> t3_0.\"rank\" DESC,\n> t3_0.\"id\" DESC\n> LIMIT\n> 10\n> \n\nNot sure why you make the query unnecessarily complicated - the LATERAL\nis pointless I believe, the \"AND true\" just make it harder to read.\nLet's rewrite it it like this to make discussion easier:\n\nexplain (analyze,verbose)\nSELECT\n t3_0.\"id\" AS \"id\",\n t3_0.\"rank\" AS \"rank\"\nFROM\n user_ranks AS t3_1\n INNER JOIN user_ranks AS t3_0\n ON ((t3_0.\"rank\", t3_0.\"id\") <= (t3_1.\"rank\", t3_1.\"id\"))\nWHERE\n t3_1.\"id\" = 4732455\nORDER BY\n t3_0.\"rank\" DESC,\n t3_0.\"id\" DESC\nLIMIT\n 10\n\nSame query, but perhaps easier to read.\n\n> It compiles to the following plan:\n> \n> Limit (cost=0.56..250.94 rows=10 width=12) (actual time=8.078..8.078\n> rows=1 loops=1)\n> Output: t3_0.id <http://t3_0.id>, t3_0.rank\n> -> Nested Loop (cost=0.56..41763.27 rows=1668 width=12) (actual\n> time=8.075..8.076 rows=1 loops=1)\n> Output: t3_0.id <http://t3_0.id>, t3_0.rank\n> Inner Unique: true\n> Join Filter: (ROW(t3_0.rank, t3_0.id <http://t3_0.id>) <=\n> ROW(t4_0.rank, 4732455))\n> Rows Removed by Join Filter: 5002\n> -> Index Only Scan Backward using \"by (rank,id)\" on\n> public.user_ranks t3_0 (cost=0.28..163.33 rows=5003 width=12) (actual\n> time=0.023..0.638 rows=5003 loops=1)\n> Output: t3_0.rank, t3_0.id <http://t3_0.id>\n> Heap Fetches: 0\n> -> Index Scan using \"by id\" on public.user_ranks t4_0\n> (cost=0.28..8.30 rows=1 width=8) (actual time=0.001..0.001 rows=1\n> loops=5003)\n> Output: t4_0.id <http://t4_0.id>, t4_0.rating, t4_0.rank\n> Index Cond: (t4_0.id <http://t4_0.id> = 4732455)\n> \n> As you can see, there are a lot of rows returned by t3_0, which are then\n> filtered by Join Filter. But it would have been better if instead of the\n> filter, the t3_0 table would have an Index Cond. 
Similar to how it\n> happens when a correlated subquery is used (or a CTE)\n> \n> explain (analyze,verbose)\n> SELECT\n> t3_0.\"id\" AS \"id\",\n> t3_0.\"rank\" AS \"rank\"\n> FROM\n> user_ranks AS t3_0\n> WHERE\n> (\n> ((t3_0.\"rank\", t3_0.\"id\") <= (\n> SELECT\n> t4_0.\"rank\" AS \"rank\",\n> t4_0.\"id\" AS \"id\"\n> FROM\n> user_ranks AS t4_0\n> WHERE\n> (t4_0.\"id\" = 4732455)\n> ))\n> AND true\n> )\n> ORDER BY\n> t3_0.\"rank\" DESC,\n> t3_0.\"id\" DESC\n> LIMIT\n> 10\n> \n> Limit (cost=8.58..8.95 rows=10 width=12) (actual time=0.062..0.064\n> rows=1 loops=1)\n> Output: t3_0.id <http://t3_0.id>, t3_0.rank\n> InitPlan 1 (returns $0,$1)\n> -> Index Scan using \"by id\" on public.user_ranks t4_0\n> (cost=0.28..8.30 rows=1 width=12) (actual time=0.024..0.025 rows=1 loops=1)\n> Output: t4_0.rank, t4_0.id <http://t4_0.id>\n> Index Cond: (t4_0.id <http://t4_0.id> = 4732455)\n> -> Index Only Scan Backward using \"by (rank,id)\" on\n> public.user_ranks t3_0 (cost=0.28..61.47 rows=1668 width=12) (actual\n> time=0.061..0.062 rows=1 loops=1)\n> Output: t3_0.id <http://t3_0.id>, t3_0.rank\n> Index Cond: (ROW(t3_0.rank, t3_0.id <http://t3_0.id>) <=\n> ROW($0, $1))\n> Heap Fetches: 0\n> \n> \n> I'm an opposite of a PostgreSQL expert, but it was surprising to me to\n> see that a correlated subquery behaves better than a join. Is this\n> normal? Is it something worth fixing/easy to fix?\n> \n\nBecause those queries are not doing the same thing. In the first query\nyou sort by t3_0 columns, while the \"id = 4732455\" condition is on the\nother table. And so it can't use the index scan for sorting.\n\nWhile in the second query it can do that, and it doesn't need to do the\nexplicit sort (which needs to fetch all the rows etc.). If you alter the\nfirst query to do\n\n ORDER BY\n t3_1.\"rank\" DESC,\n t3_1.\"id\" DESC\n\nit'll use the same plan as the second query. Well, not exactly the same,\nbut much closer to it.\n\n\nNevertheless, these example queries have other estimation issues, which\nmight result in poor plan choices. There's no row for (id = 4732455),\nand the cross-table inequality estimate is just some default estimate\n(33%). In reality, this produces no rows.\n\nSecondly, for LIMIT, the cost is assumed to be \"proportional\" fraction\nof the input costs. In other words, we expect the limit to terminate\nafter only seeing a fraction of rows - if we expect to see 10000 rows\nand the query has LIMIT 10, we expect to only do 1/1000 of the work. But\nif the subtree does not produce 10000 rows, that goes out of the window\nand we may need to do much more work.\n\nI'm not sure why the two queries actually use different plans even after\nthe ORDER BY change. I would have expected the second query (with\ncorrelated subquery) to be transformed to a join, but perhaps that\ntransformation would be invalid, or maybe the planner does that based on\ncost (and then it's not surprising due to the estimation issues).\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Wed, 21 Jun 2023 11:37:24 -0700",
"msg_from": "=?UTF-8?Q?B=C9=99xtiyar_Neyman?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Can JoinFilter condition be pushed down into IndexScan?"
},
{
"msg_contents": "On 6/21/23 20:37, Bəxtiyar Neyman wrote:\n> Thanks Tomas for the lengthy write-up!\n> \n> Pardon the noise in the queries (LATERAL, AND true etc): they were\n> autogenerated by the library we wrote.\n> \n\nI know, but it makes them harder to read for people. If you want people\nto respond it's generally a good idea to make it easy to understand the\nquestion. Don't make them waste their time - they'll just skip the\nmessage entirely.\n\n>> Because those queries are not doing the same thing. In the first query\n>> you sort by t3_0 columns, while the \"id = 4732455\" condition is on the\n>> other table. And so it can't use the index scan for sorting.\n>>\n>> While in the second query it can do that, and it doesn't need to do the\n>> explicit sort (which needs to fetch all the rows etc.).\n> \n> Let me try to explain what both of my queries do:\n> 1) Get the rank of the user using its id (id = 4732455 in this example,\n> but it could have been one that exists, e.g. id = 500). This is LATERAL\n> t3_1 in the first query and subquery in the WHERE clause of the second\n> query.\n> 2) Using that rank, get the next 10 users by rank. This is t3_0.\n> \n> Thus I can't just change the first query to \"ORDER BY t3_1.\"rank\" DESC,\n> t3_1.\"id\" DESC\" as you suggest, because then the order of returned rows\n> will not be guaranteed. In fact, such a clause will have no effect\n> because there is going to be at most one row supplied by t3_1 anyway.\n> \n\nAh, OK. I got this wrong.\n\n> My question thus still stands. The planner knows that t3_1 has at most\n> one row, and it knows that t3_0 can produce up to 5000 rows. Yet, it\n> doesn't figure out that it could have lowered the Join Filter condition\n> from the first plan as an Index Cond of the Index Scan of t3_1. 
Is there\n> a fundamental reason for this, or is this something worth improving in\n> the planner?\n> \n\nAs I tried to explain before, I don't think the problem is in the\nplanner not being able to do this transformation, but more likely in not\nbeing able to cost it correctly.\n\nConsider this (with 1M rows in the user_ranks table):\n\n1) subquery case\n=================\n\n Limit (cost=8.87..9.15 rows=10 width=8) (actual time=0.032..0.037\nrows=10 loops=1)\n Output: t3_0.id, t3_0.rank\n InitPlan 1 (returns $0,$1)\n -> Index Scan using user_ranks_pkey on public.user_ranks t4_0\n(cost=0.42..8.44 rows=1 width=8) (actual time=0.017..0.019 rows=1 loops=1)\n Output: t4_0.rank, t4_0.id\n Index Cond: (t4_0.id = 333333)\n -> Index Only Scan Backward using \"by (rank, id)\" on\npublic.user_ranks t3_0 (cost=0.42..9493.75 rows=333333 width=8) (actual\ntime=0.031..0.033 rows=10 loops=1)\n Output: t3_0.id, t3_0.rank\n Index Cond: (ROW(t3_0.rank, t3_0.id) <= ROW($0, $1))\n Heap Fetches: 0\n Planning Time: 0.072 ms\n Execution Time: 0.055 ms\n(12 rows)\n\n\n2) join\n=======\n\n Limit (cost=0.85..2.15 rows=10 width=8) (actual time=464.662..464.672\nrows=10 loops=1)\n Output: t3_0.id, t3_0.rank\n -> Nested Loop (cost=0.85..43488.87 rows=333333 width=8) (actual\ntime=464.660..464.667 rows=10 loops=1)\n Output: t3_0.id, t3_0.rank\n Inner Unique: true\n Join Filter: (ROW(t3_0.rank, t3_0.id) <= ROW(t4_0.rank, t4_0.id))\n Rows Removed by Join Filter: 666667\n -> Index Only Scan Backward using \"by (rank, id)\" on\npublic.user_ranks t3_0 (cost=0.42..25980.42 rows=1000000 width=8)\n(actual time=0.015..93.703 rows=666677 loops=1)\n Output: t3_0.rank, t3_0.id\n Heap Fetches: 0\n -> Materialize (cost=0.42..8.45 rows=1 width=8) (actual\ntime=0.000..0.000 rows=1 loops=666677)\n Output: t4_0.rank, t4_0.id\n -> Index Scan using user_ranks_pkey on public.user_ranks\nt4_0 (cost=0.42..8.44 rows=1 width=8) (actual time=0.010..0.011 rows=1\nloops=1)\n Output: t4_0.rank, t4_0.id\n Index Cond: (t4_0.id = 333333)\n Planning Time: 0.092 ms\n Execution Time: 464.696 ms\n(17 rows)\n\n\n3) join (with LEFT JOIN)\n========================\n\n Limit (cost=20038.73..20038.76 rows=10 width=8) (actual\ntime=180.714..180.720 rows=10 loops=1)\n Output: t3_0.id, t3_0.rank\n -> Sort (cost=20038.73..20872.06 rows=333333 width=8) (actual\ntime=180.712..180.715 rows=10 loops=1)\n Output: t3_0.id, t3_0.rank\n Sort Key: t3_0.rank DESC, t3_0.id DESC\n Sort Method: top-N heapsort Memory: 26kB\n -> Nested Loop Left Join (cost=0.85..12835.52 rows=333333\nwidth=8) (actual time=0.033..122.000 rows=333333 loops=1)\n Output: t3_0.id, t3_0.rank\n -> Index Scan using user_ranks_pkey on public.user_ranks\nt4_0 (cost=0.42..8.44 rows=1 width=8) (actual time=0.018..0.020 rows=1\nloops=1)\n Output: t4_0.id, t4_0.rank\n Index Cond: (t4_0.id = 333333)\n -> Index Only Scan using \"by (rank, id)\" on\npublic.user_ranks t3_0 (cost=0.42..9493.75 rows=333333 width=8) (actual\ntime=0.013..49.759 rows=333333 loops=1)\n Output: t3_0.rank, t3_0.id\n Index Cond: (ROW(t3_0.rank, t3_0.id) <=\nROW(t4_0.rank, t4_0.id))\n Heap Fetches: 0\n Planning Time: 0.087 ms\n Execution Time: 180.744 ms\n(17 rows)\n\n\nSo, the optimizer clearly believes the subquery case has cost 9.15,\nwhile the inner join case costs 2.15. So it believes the plan is\n\"cheaper\" than the subquery. 
So even if it knew how to do the\ntransformation / build the other plan (which I'm not sure it can), it\nprobably wouldn't do it.\n\nOTOH if you rewrite it to a left join, it costs 20038.76 - way more than\nthe inner join, but it's actually 2x faster.\n\n\nAFAICS there's no chance to make this bit smarter until the estimates\nget much better to reality.\n\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 21 Jun 2023 21:58:12 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Can JoinFilter condition be pushed down into IndexScan?"
},
{
"msg_contents": "Thanks, Tomas!\n\n> I know, but it makes them harder to read for people. If you want people\n> to respond it's generally a good idea to make it easy to understand the\n> question. Don't make them waste their time - they'll just skip the\n> message entirely.\n\nFair point.\n\n\n> So, the optimizer clearly believes the subquery case has cost 9.15,\n> while the inner join case costs 2.15. So it believes the plan is\n> \"cheaper\" than the subquery. So even if it knew how to do the\n> transformation / build the other plan (which I'm not sure it can), it\n> probably wouldn't do it.\n\n> AFAICS there's no chance to make this bit smarter until the estimates\n> get much better to reality.\n\nGot it. Thanks. I guess we'll have to emit correlated subqueries/CTEs.\n\nSincerely,\nBakhtiyar\n\nOn Wed, Jun 21, 2023 at 12:58 PM Tomas Vondra <[email protected]>\nwrote:\n\n> On 6/21/23 20:37, Bəxtiyar Neyman wrote:\n> > Thanks Tomas for the lengthy write-up!\n> >\n> > Pardon the noise in the queries (LATERAL, AND true etc): they were\n> > autogenerated by the library we wrote.\n> >\n>\n> I know, but it makes them harder to read for people. If you want people\n> to respond it's generally a good idea to make it easy to understand the\n> question. Don't make them waste their time - they'll just skip the\n> message entirely.\n>\n> >> Because those queries are not doing the same thing. In the first query\n> >> you sort by t3_0 columns, while the \"id = 4732455\" condition is on the\n> >> other table. And so it can't use the index scan for sorting.\n> >>\n> >> While in the second query it can do that, and it doesn't need to do the\n> >> explicit sort (which needs to fetch all the rows etc.).\n> >\n> > Let me try to explain what both of my queries do:\n> > 1) Get the rank of the user using its id (id = 4732455 in this example,\n> > but it could have been one that exists, e.g. id = 500). This is LATERAL\n> > t3_1 in the first query and subquery in the WHERE clause of the second\n> > query.\n> > 2) Using that rank, get the next 10 users by rank. This is t3_0.\n> >\n> > Thus I can't just change the first query to \"ORDER BY t3_1.\"rank\" DESC,\n> > t3_1.\"id\" DESC\" as you suggest, because then the order of returned rows\n> > will not be guaranteed. In fact, such a clause will have no effect\n> > because there is going to be at most one row supplied by t3_1 anyway.\n> >\n>\n> Ah, OK. I got this wrong.\n>\n> > My question thus still stands. The planner knows that t3_1 has at most\n> > one row, and it knows that t3_0 can produce up to 5000 rows. Yet, it\n> > doesn't figure out that it could have lowered the Join Filter condition\n> > from the first plan as an Index Cond of the Index Scan of t3_1. 
Is there\n> > a fundamental reason for this, or is this something worth improving in\n> > the planner?\n> >\n>\n> As I tried to explain before, I don't think the problem is in the\n> planner not being able to do this transformation, but more likely in not\n> being able to cost it correctly.\n>\n> Consider this (with 1M rows in the user_ranks table):\n>\n> 1) subquery case\n> =================\n>\n> Limit (cost=8.87..9.15 rows=10 width=8) (actual time=0.032..0.037\n> rows=10 loops=1)\n> Output: t3_0.id, t3_0.rank\n> InitPlan 1 (returns $0,$1)\n> -> Index Scan using user_ranks_pkey on public.user_ranks t4_0\n> (cost=0.42..8.44 rows=1 width=8) (actual time=0.017..0.019 rows=1 loops=1)\n> Output: t4_0.rank, t4_0.id\n> Index Cond: (t4_0.id = 333333)\n> -> Index Only Scan Backward using \"by (rank, id)\" on\n> public.user_ranks t3_0 (cost=0.42..9493.75 rows=333333 width=8) (actual\n> time=0.031..0.033 rows=10 loops=1)\n> Output: t3_0.id, t3_0.rank\n> Index Cond: (ROW(t3_0.rank, t3_0.id) <= ROW($0, $1))\n> Heap Fetches: 0\n> Planning Time: 0.072 ms\n> Execution Time: 0.055 ms\n> (12 rows)\n>\n>\n> 2) join\n> =======\n>\n> Limit (cost=0.85..2.15 rows=10 width=8) (actual time=464.662..464.672\n> rows=10 loops=1)\n> Output: t3_0.id, t3_0.rank\n> -> Nested Loop (cost=0.85..43488.87 rows=333333 width=8) (actual\n> time=464.660..464.667 rows=10 loops=1)\n> Output: t3_0.id, t3_0.rank\n> Inner Unique: true\n> Join Filter: (ROW(t3_0.rank, t3_0.id) <= ROW(t4_0.rank, t4_0.id))\n> Rows Removed by Join Filter: 666667\n> -> Index Only Scan Backward using \"by (rank, id)\" on\n> public.user_ranks t3_0 (cost=0.42..25980.42 rows=1000000 width=8)\n> (actual time=0.015..93.703 rows=666677 loops=1)\n> Output: t3_0.rank, t3_0.id\n> Heap Fetches: 0\n> -> Materialize (cost=0.42..8.45 rows=1 width=8) (actual\n> time=0.000..0.000 rows=1 loops=666677)\n> Output: t4_0.rank, t4_0.id\n> -> Index Scan using user_ranks_pkey on public.user_ranks\n> t4_0 (cost=0.42..8.44 rows=1 width=8) (actual time=0.010..0.011 rows=1\n> loops=1)\n> Output: t4_0.rank, t4_0.id\n> Index Cond: (t4_0.id = 333333)\n> Planning Time: 0.092 ms\n> Execution Time: 464.696 ms\n> (17 rows)\n>\n>\n> 3) join (with LEFT JOIN)\n> ========================\n>\n> Limit (cost=20038.73..20038.76 rows=10 width=8) (actual\n> time=180.714..180.720 rows=10 loops=1)\n> Output: t3_0.id, t3_0.rank\n> -> Sort (cost=20038.73..20872.06 rows=333333 width=8) (actual\n> time=180.712..180.715 rows=10 loops=1)\n> Output: t3_0.id, t3_0.rank\n> Sort Key: t3_0.rank DESC, t3_0.id DESC\n> Sort Method: top-N heapsort Memory: 26kB\n> -> Nested Loop Left Join (cost=0.85..12835.52 rows=333333\n> width=8) (actual time=0.033..122.000 rows=333333 loops=1)\n> Output: t3_0.id, t3_0.rank\n> -> Index Scan using user_ranks_pkey on public.user_ranks\n> t4_0 (cost=0.42..8.44 rows=1 width=8) (actual time=0.018..0.020 rows=1\n> loops=1)\n> Output: t4_0.id, t4_0.rank\n> Index Cond: (t4_0.id = 333333)\n> -> Index Only Scan using \"by (rank, id)\" on\n> public.user_ranks t3_0 (cost=0.42..9493.75 rows=333333 width=8) (actual\n> time=0.013..49.759 rows=333333 loops=1)\n> Output: t3_0.rank, t3_0.id\n> Index Cond: (ROW(t3_0.rank, t3_0.id) <=\n> ROW(t4_0.rank, t4_0.id))\n> Heap Fetches: 0\n> Planning Time: 0.087 ms\n> Execution Time: 180.744 ms\n> (17 rows)\n>\n>\n> So, the optimizer clearly believes the subquery case has cost 9.15,\n> while the inner join case costs 2.15. So it believes the plan is\n> \"cheaper\" than the subquery. 
So even if it knew how to do the\n> transformation / build the other plan (which I'm not sure it can), it\n> probably wouldn't do it.\n>\n> OTOH if you rewrite it to a left join, it costs 20038.76 - way more than\n> the inner join, but it's actually 2x faster.\n>\n>\n> AFAICS there's no chance to make this bit smarter until the estimates\n> get much better to reality.\n>\n>\n> --\n> Tomas Vondra\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n>\n\nThanks, Tomas!> I know, but it makes them harder to read for people. If you want people> to respond it's generally a good idea to make it easy to understand the> question. Don't make them waste their time - they'll just skip the> message entirely.Fair point.> So, the optimizer clearly believes the subquery case has cost 9.15,> while the inner join case costs 2.15. So it believes the plan is> \"cheaper\" than the subquery. So even if it knew how to do the> transformation / build the other plan (which I'm not sure it can), it> probably wouldn't do it.> AFAICS there's no chance to make this bit smarter until the estimates> get much better to reality.Got it. Thanks. I guess we'll have to emit correlated subqueries/CTEs.Sincerely,BakhtiyarOn Wed, Jun 21, 2023 at 12:58 PM Tomas Vondra <[email protected]> wrote:On 6/21/23 20:37, Bəxtiyar Neyman wrote:\n> Thanks Tomas for the lengthy write-up!\n> \n> Pardon the noise in the queries (LATERAL, AND true etc): they were\n> autogenerated by the library we wrote.\n> \n\nI know, but it makes them harder to read for people. If you want people\nto respond it's generally a good idea to make it easy to understand the\nquestion. Don't make them waste their time - they'll just skip the\nmessage entirely.\n\n>> Because those queries are not doing the same thing. In the first query\n>> you sort by t3_0 columns, while the \"id = 4732455\" condition is on the\n>> other table. And so it can't use the index scan for sorting.\n>>\n>> While in the second query it can do that, and it doesn't need to do the\n>> explicit sort (which needs to fetch all the rows etc.).\n> \n> Let me try to explain what both of my queries do:\n> 1) Get the rank of the user using its id (id = 4732455 in this example,\n> but it could have been one that exists, e.g. id = 500). This is LATERAL\n> t3_1 in the first query and subquery in the WHERE clause of the second\n> query.\n> 2) Using that rank, get the next 10 users by rank. This is t3_0.\n> \n> Thus I can't just change the first query to \"ORDER BY t3_1.\"rank\" DESC,\n> t3_1.\"id\" DESC\" as you suggest, because then the order of returned rows\n> will not be guaranteed. In fact, such a clause will have no effect\n> because there is going to be at most one row supplied by t3_1 anyway.\n> \n\nAh, OK. I got this wrong.\n\n> My question thus still stands. The planner knows that t3_1 has at most\n> one row, and it knows that t3_0 can produce up to 5000 rows. Yet, it\n> doesn't figure out that it could have lowered the Join Filter condition\n> from the first plan as an Index Cond of the Index Scan of t3_1. 
Is there\n> a fundamental reason for this, or is this something worth improving in\n> the planner?\n> \n\nAs I tried to explain before, I don't think the problem is in the\nplanner not being able to do this transformation, but more likely in not\nbeing able to cost it correctly.\n\nConsider this (with 1M rows in the user_ranks table):\n\n1) subquery case\n=================\n\n Limit (cost=8.87..9.15 rows=10 width=8) (actual time=0.032..0.037\nrows=10 loops=1)\n Output: t3_0.id, t3_0.rank\n InitPlan 1 (returns $0,$1)\n -> Index Scan using user_ranks_pkey on public.user_ranks t4_0\n(cost=0.42..8.44 rows=1 width=8) (actual time=0.017..0.019 rows=1 loops=1)\n Output: t4_0.rank, t4_0.id\n Index Cond: (t4_0.id = 333333)\n -> Index Only Scan Backward using \"by (rank, id)\" on\npublic.user_ranks t3_0 (cost=0.42..9493.75 rows=333333 width=8) (actual\ntime=0.031..0.033 rows=10 loops=1)\n Output: t3_0.id, t3_0.rank\n Index Cond: (ROW(t3_0.rank, t3_0.id) <= ROW($0, $1))\n Heap Fetches: 0\n Planning Time: 0.072 ms\n Execution Time: 0.055 ms\n(12 rows)\n\n\n2) join\n=======\n\n Limit (cost=0.85..2.15 rows=10 width=8) (actual time=464.662..464.672\nrows=10 loops=1)\n Output: t3_0.id, t3_0.rank\n -> Nested Loop (cost=0.85..43488.87 rows=333333 width=8) (actual\ntime=464.660..464.667 rows=10 loops=1)\n Output: t3_0.id, t3_0.rank\n Inner Unique: true\n Join Filter: (ROW(t3_0.rank, t3_0.id) <= ROW(t4_0.rank, t4_0.id))\n Rows Removed by Join Filter: 666667\n -> Index Only Scan Backward using \"by (rank, id)\" on\npublic.user_ranks t3_0 (cost=0.42..25980.42 rows=1000000 width=8)\n(actual time=0.015..93.703 rows=666677 loops=1)\n Output: t3_0.rank, t3_0.id\n Heap Fetches: 0\n -> Materialize (cost=0.42..8.45 rows=1 width=8) (actual\ntime=0.000..0.000 rows=1 loops=666677)\n Output: t4_0.rank, t4_0.id\n -> Index Scan using user_ranks_pkey on public.user_ranks\nt4_0 (cost=0.42..8.44 rows=1 width=8) (actual time=0.010..0.011 rows=1\nloops=1)\n Output: t4_0.rank, t4_0.id\n Index Cond: (t4_0.id = 333333)\n Planning Time: 0.092 ms\n Execution Time: 464.696 ms\n(17 rows)\n\n\n3) join (with LEFT JOIN)\n========================\n\n Limit (cost=20038.73..20038.76 rows=10 width=8) (actual\ntime=180.714..180.720 rows=10 loops=1)\n Output: t3_0.id, t3_0.rank\n -> Sort (cost=20038.73..20872.06 rows=333333 width=8) (actual\ntime=180.712..180.715 rows=10 loops=1)\n Output: t3_0.id, t3_0.rank\n Sort Key: t3_0.rank DESC, t3_0.id DESC\n Sort Method: top-N heapsort Memory: 26kB\n -> Nested Loop Left Join (cost=0.85..12835.52 rows=333333\nwidth=8) (actual time=0.033..122.000 rows=333333 loops=1)\n Output: t3_0.id, t3_0.rank\n -> Index Scan using user_ranks_pkey on public.user_ranks\nt4_0 (cost=0.42..8.44 rows=1 width=8) (actual time=0.018..0.020 rows=1\nloops=1)\n Output: t4_0.id, t4_0.rank\n Index Cond: (t4_0.id = 333333)\n -> Index Only Scan using \"by (rank, id)\" on\npublic.user_ranks t3_0 (cost=0.42..9493.75 rows=333333 width=8) (actual\ntime=0.013..49.759 rows=333333 loops=1)\n Output: t3_0.rank, t3_0.id\n Index Cond: (ROW(t3_0.rank, t3_0.id) <=\nROW(t4_0.rank, t4_0.id))\n Heap Fetches: 0\n Planning Time: 0.087 ms\n Execution Time: 180.744 ms\n(17 rows)\n\n\nSo, the optimizer clearly believes the subquery case has cost 9.15,\nwhile the inner join case costs 2.15. So it believes the plan is\n\"cheaper\" than the subquery. 
So even if it knew how to do the\ntransformation / build the other plan (which I'm not sure it can), it\nprobably wouldn't do it.\n\nOTOH if you rewrite it to a left join, it costs 20038.76 - way more than\nthe inner join, but it's actually 2x faster.\n\n\nAFAICS there's no chance to make this bit smarter until the estimates\nget much better to reality.\n\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Wed, 21 Jun 2023 15:58:23 -0700",
"msg_from": "=?UTF-8?Q?B=C9=99xtiyar_Neyman?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Can JoinFilter condition be pushed down into IndexScan?"
}
] |
[
{
"msg_contents": "Hi all,\n(Adding Evan in CC as he has reported the original issue with hstore.)\n\n$subject has showed up as a subject for discussion when looking at the\nset of whitespace characters that we use in the parsers:\nhttps://www.postgresql.org/message-id/CA+HWA9bTRDf52DHyU+JOoqEALgRGRo5uHUYTFuduoj3cBfer+Q@mail.gmail.com\n\nOn HEAD, these are \\t, \\n, \\r and \\f which is consistent with the list\nthat we use in scanner_isspace(). \n\nThis has quite some history, first in 9ae2661 that dealt with an old\nissue with BSD's isspace where whitespaces may not be detected\ncorrectly. hstore has been recently changed to fix the same problem\nwith d522b05, still depending on scanner_isspace() for the job makes\nthe handling of \\v kind of strange.\n\nThat's not the end of the story. There is an inconsistency with the\nway array values are handled for the same problem, where 95cacd1 added\nhandling for \\v in the list of what's considered a whitespace.\n\nAttached is a patch to bring a bit more consistency across the board,\nby adding \\v to the set of characters that are considered as\nwhitespace by the parser. Here are a few things that I have noticed\nin passing:\n- JSON should not escape \\v, as defined in RFC 7159.\n- syncrep_scanner.l already considered \\v as a whitespace. Its\nneighbor repl_scanner.l did not do that.\n- There are a few more copies that would need a refresh of what is\nconsidered as a whitespace in their respective lex scanners:\npsqlscan.l, psqlscanslash.l, cubescan.l, segscan.l, ECPG's pgc.l.\n\nOne thing I was wondering: has the SQL specification anything specific\nabout the way vertical tabs should be parsed?\n\nThoughts and comments are welcome.\nThanks,\n--\nMichael",
"msg_date": "Wed, 21 Jun 2023 15:45:32 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Consider \\v to the list of whitespace characters in the parser"
},
{
"msg_contents": "On 21.06.23 08:45, Michael Paquier wrote:\n> One thing I was wondering: has the SQL specification anything specific\n> about the way vertical tabs should be parsed?\n\nSQL has \"whitespace\", which includes any Unicode character with the \nWhite_Space property (which includes \\v), and <newline>, which is \nimplementation-defined.\n\nSo nothing there speaks against treating \\v as a (white)space character \nin the SQL scanner.\n\nIn scan.l, you might want to ponder horiz_space: Even though \\v is \nclearly not \"horizontal space\", horiz_space already includes \\f, which \nis also not horizontal IMO. I think horiz_space is really all space \ncharacters except newline characters. Maybe this should be rephrased.\n\n\n\n",
"msg_date": "Mon, 3 Jul 2023 12:17:10 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Consider \\v to the list of whitespace characters in the parser"
},
{
"msg_contents": "On Mon, Jul 03, 2023 at 12:17:10PM +0200, Peter Eisentraut wrote:\n> SQL has \"whitespace\", which includes any Unicode character with the\n> White_Space property (which includes \\v), and <newline>, which is\n> implementation-defined.\n> \n> So nothing there speaks against treating \\v as a (white)space character in\n> the SQL scanner.\n\nOkay, thanks for confirming. \n\n> In scan.l, you might want to ponder horiz_space: Even though \\v is clearly\n> not \"horizontal space\", horiz_space already includes \\f, which is also not\n> horizontal IMO. I think horiz_space is really all space characters except\n> newline characters. Maybe this should be rephrased.\n\nAnd a few lines above, there is a comment from 2000 (3cfdd8f)\npondering if \\f should be handled as a newline, which is kind of\nincorrect anyway?\n\nFWIW, I agree that horiz_space is confusing in this context because it\ndoes not completely reflect the reality, and \\v is not that so adding\nit to the existing list felt wrong to me. Form feed is also not a\nnewline, from what I understand.. From what the parser tells, there\nare two things we want to track to handle comments:\n- All space characters, which would be \\t\\n\\r\\f\\v.\n- All space characters that are not newlines, \\t\\f\\v.\n\nI don't really have a better idea this morning than using the\nfollowing terms in the parser, changing the surroundings with similar\nterms:\n-space [ \\t\\n\\r\\f]\n-horiz_space [ \\t\\f]\n+space [ \\t\\n\\r\\f\\v]\n+non_newline_space [ \\t\\f\\v]\n\nPerhaps somebody has a better idea of split?\n--\nMichael",
"msg_date": "Tue, 4 Jul 2023 09:00:51 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Consider \\v to the list of whitespace characters in the parser"
},
{
"msg_contents": "Michael Paquier <[email protected]> writes:\n> On Mon, Jul 03, 2023 at 12:17:10PM +0200, Peter Eisentraut wrote:\n>> In scan.l, you might want to ponder horiz_space: Even though \\v is clearly\n>> not \"horizontal space\", horiz_space already includes \\f, which is also not\n>> horizontal IMO. I think horiz_space is really all space characters except\n>> newline characters. Maybe this should be rephrased.\n\n> And a few lines above, there is a comment from 2000 (3cfdd8f)\n> pondering if \\f should be handled as a newline, which is kind of\n> incorrect anyway?\n\nIt looks to me like there are two places where these distinctions\nactually matter:\n\n1. Which characters terminate a \"--\" comment. Currently that's only\n[\\n\\r] (see {non_newline}).\n\n2. Which characters satisfy the SQL spec's requirement that there be a\nnewline in the whitespace separating string literals that are to be\nconcatenated. Currently, that's also only [\\n\\r].\n\nAssuming we don't want to change either of these distinctions,\nthe v2 patch looks about right to me.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 03 Jul 2023 20:15:03 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Consider \\v to the list of whitespace characters in the parser"
},
{
"msg_contents": "On Mon, Jul 03, 2023 at 08:15:03PM -0400, Tom Lane wrote:\n> Assuming we don't want to change either of these distinctions,\n> the v2 patch looks about right to me.\n\nYeah, thanks. Peter, what's your take?\n--\nMichael",
"msg_date": "Tue, 4 Jul 2023 09:28:21 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Consider \\v to the list of whitespace characters in the parser"
},
{
"msg_contents": "On Tue, Jul 04, 2023 at 09:28:21AM +0900, Michael Paquier wrote:\n> Yeah, thanks.\n\nI have looked again at that this morning, and did not notice any\nmissing spots, so applied.. Let's see how it goes.\n--\nMichael",
"msg_date": "Thu, 6 Jul 2023 08:34:00 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Consider \\v to the list of whitespace characters in the parser"
}
] |
[
{
"msg_contents": "Hi hackers,\n\nPlease find attached a patch to truncate (in ProcessStartupPacket())\nthe port->database_name and port->user_name in such a way to not break\nmultibyte character boundary.\n\nIndeed, currently, one could create a database that way:\n\npostgres=# create database ääääääääääääääääääääääääääääääää;\nNOTICE: identifier \"ääääääääääääääääääääääääääääääää\" will be truncated to \"äääääääääääääääääääääääääääääää\"\nCREATE DATABASE\n\nThe database name has been truncated from 64 bytes to 62 bytes thanks to pg_mbcliplen()\nwhich ensures to not break multibyte character boundary.\n\npostgres=# select datname, OCTET_LENGTH(datname),encoding from pg_database;\n datname | octet_length | encoding\n---------------------------------+--------------+----------\n äääääääääääääääääääääääääääääää | 62 | 6\n\nTrying to connect with the 64 bytes name:\n\n$ psql -d ääääääääääääääääääääääääääääääää\npsql: error: connection to server on socket \"/tmp/.s.PGSQL.55448\" failed: FATAL: database \"äääääääääääääääääääääääääääääää\" does not exist\n\n\nIt fails because the truncation done in ProcessStartupPacket():\n\n\"\nif (strlen(port→database_name) >= NAMEDATALEN)\nport→database_name[NAMEDATALEN - 1] = '\\0';\n\"\n\ndoes not take care about multibyte character boundary.\n\nOn the other hand it works with non multibyte character involved:\n\npostgres=# create database abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijke;\nNOTICE: identifier \"abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijke\" will be truncated to \"abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijk\"\nCREATE DATABASE\n\npostgres=# select datname, OCTET_LENGTH(datname),encoding from pg_database;\n datname | octet_length | encoding\n-----------------------------------------------------------------+--------------+----------\n abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijk | 63 | 6\n\nThe database name is truncated to 63 bytes and then using the 64 bytes name would work:\n\n$ psql -d abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijke\npsql (16beta1)\nType \"help\" for help.\n\nThe comment in ProcessStartupPacket() states:\n\n\"\n /*\n * Truncate given database and user names to length of a Postgres name.\n * This avoids lookup failures when overlength names are given.\n */\n\"\n\nThe last sentence is not right in case of mutlibyte character (as seen\nin the first example).\n\nAbout the patch:\n\nAs the database encoding is not known yet in ProcessStartupPacket() (\nand we are even not sure the database provided does exist), the proposed\npatch does not rely on pg_mbcliplen() but on pg_encoding_mbcliplen().\n\nThe proposed patch does use the client encoding that it retrieves that way:\n\n- use the one requested in the startup packet (if we come across it)\n- use the one from the locale (if we did not find a client encoding request\nin the startup packet)\n- use PG_SQL_ASCII (if none of the above have been satisfied)\n\nHappy to discuss any other thoughts or suggestions if any.\n\nWith the proposed patch in place, using the first example above (and the\n64 bytes name) we would get:\n\n$ PGCLIENTENCODING=LATIN1 psql -d ääääääääääääääääääääääääääääääää\npsql: error: connection to server on socket \"/tmp/.s.PGSQL.55448\" failed: FATAL: database \"äääääääääääääääääääääääääääääää\" does not exist\n\nbut this one would allow us to connect:\n\n$ PGCLIENTENCODING=UTF8 psql -d ääääääääääääääääääääääääääääääää\npsql (16beta1)\nType \"help\" for help.\n\nThe patch 
does not provide documentation update or related TAP test (but could be added\nif we feel the need).\n\nLooking forward to your feedback,\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Wed, 21 Jun 2023 09:43:50 +0200",
"msg_from": "\"Drouvot, Bertrand\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "ProcessStartupPacket(): database_name and user_name truncation"
},
{
"msg_contents": "At Wed, 21 Jun 2023 09:43:50 +0200, \"Drouvot, Bertrand\" <[email protected]> wrote in \n> Trying to connect with the 64 bytes name:\n> \n> $ psql -d ääääääääääääääääääääääääääääääää\n> psql: error: connection to server on socket \"/tmp/.s.PGSQL.55448\"\n> failed: FATAL: database \"äääääääääääääääääääääääääääääää\" does not\n> exist\n\nIMHO, I'm not sure we should allow connections without the exact name\nbeing provided. In that sense, I think we might want to consider\noutright rejecting the estblishment of a connection when the given\ndatabase name doesn't fit the startup packet, since the database with\nthe exact given name cannot be found.\n\nWhile it is somewhat off-topic, I cannot establish a connection if the\nconsole encoding differs from the template database even if I provide\nthe identical database name. (I don't mean I want that behavior to be\n\"fix\"ed.)\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 21 Jun 2023 17:54:59 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ProcessStartupPacket(): database_name and user_name truncation"
},
{
"msg_contents": "Kyotaro Horiguchi <[email protected]> writes:\n> At Wed, 21 Jun 2023 09:43:50 +0200, \"Drouvot, Bertrand\" <[email protected]> wrote in \n>> Trying to connect with the 64 bytes name:\n>> $ psql -d ääääääääääääääääääääääääääääääää\n>> psql: error: connection to server on socket \"/tmp/.s.PGSQL.55448\"\n>> failed: FATAL: database \"äääääääääääääääääääääääääääääää\" does not\n>> exist\n\n> IMHO, I'm not sure we should allow connections without the exact name\n> being provided. In that sense, I think we might want to consider\n> outright rejecting the estblishment of a connection when the given\n> database name doesn't fit the startup packet, since the database with\n> the exact given name cannot be found.\n\nI think I agree. I don't like the proposed patch at all, because it's\nmaking completely unsupportable assumptions about what encoding the\nnames are given in. Simply failing to match when a name is overlength\nsounds safer.\n\n(Our whole story about what is the encoding of names in shared catalogs\nis a mess. But this particular point doesn't seem like the place to\nstart if you want to clean that up.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 21 Jun 2023 09:43:38 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ProcessStartupPacket(): database_name and user_name truncation"
},
{
"msg_contents": "Hi,\n\nOn 6/21/23 3:43 PM, Tom Lane wrote:\n> Kyotaro Horiguchi <[email protected]> writes:\n>> At Wed, 21 Jun 2023 09:43:50 +0200, \"Drouvot, Bertrand\" <[email protected]> wrote in\n>>> Trying to connect with the 64 bytes name:\n>>> $ psql -d ääääääääääääääääääääääääääääääää\n>>> psql: error: connection to server on socket \"/tmp/.s.PGSQL.55448\"\n>>> failed: FATAL: database \"äääääääääääääääääääääääääääääää\" does not\n>>> exist\n> \n>> IMHO, I'm not sure we should allow connections without the exact name\n>> being provided. In that sense, I think we might want to consider\n>> outright rejecting the estblishment of a connection when the given\n>> database name doesn't fit the startup packet, since the database with\n>> the exact given name cannot be found.\n> \n> I think I agree. I don't like the proposed patch at all, because it's\n> making completely unsupportable assumptions about what encoding the\n> names are given in. Simply failing to match when a name is overlength\n> sounds safer.\n> \n\nYeah, that's another and \"cleaner\" option.\n\nI'll propose a patch to make it failing even for the non multibyte case then (\nso that multibyte and non multibyte behaves the same aka failing in case of overlength\nname is detected).\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 21 Jun 2023 16:22:47 +0200",
"msg_from": "\"Drouvot, Bertrand\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: ProcessStartupPacket(): database_name and user_name truncation"
},
{
"msg_contents": "On Wed, Jun 21, 2023 at 09:43:38AM -0400, Tom Lane wrote:\n> Kyotaro Horiguchi <[email protected]> writes:\n>> IMHO, I'm not sure we should allow connections without the exact name\n>> being provided. In that sense, I think we might want to consider\n>> outright rejecting the estblishment of a connection when the given\n>> database name doesn't fit the startup packet, since the database with\n>> the exact given name cannot be found.\n> \n> I think I agree. I don't like the proposed patch at all, because it's\n> making completely unsupportable assumptions about what encoding the\n> names are given in. Simply failing to match when a name is overlength\n> sounds safer.\n\n+1. Even if these assumptions were supportable, IMHO it's probably not\nworth the added complexity to keep the truncation consistent with CREATE\nROLE/DATABASE.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 21 Jun 2023 08:04:02 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ProcessStartupPacket(): database_name and user_name truncation"
},
{
"msg_contents": "On 6/21/23 4:22 PM, Drouvot, Bertrand wrote:\n> Hi,\n> \n> On 6/21/23 3:43 PM, Tom Lane wrote:\n>> Kyotaro Horiguchi <[email protected]> writes:\n>>> At Wed, 21 Jun 2023 09:43:50 +0200, \"Drouvot, Bertrand\" <[email protected]> wrote in\n>>>> Trying to connect with the 64 bytes name:\n>>>> $ psql -d ääääääääääääääääääääääääääääääää\n>>>> psql: error: connection to server on socket \"/tmp/.s.PGSQL.55448\"\n>>>> failed: FATAL: database \"äääääääääääääääääääääääääääääää\" does not\n>>>> exist\n>>\n>>> IMHO, I'm not sure we should allow connections without the exact name\n>>> being provided. In that sense, I think we might want to consider\n>>> outright rejecting the estblishment of a connection when the given\n>>> database name doesn't fit the startup packet, since the database with\n>>> the exact given name cannot be found.\n>>\n>> I think I agree. I don't like the proposed patch at all, because it's\n>> making completely unsupportable assumptions about what encoding the\n>> names are given in. Simply failing to match when a name is overlength\n>> sounds safer.\n>>\n> \n> Yeah, that's another and \"cleaner\" option.\n> \n> I'll propose a patch to make it failing even for the non multibyte case then (\n> so that multibyte and non multibyte behaves the same aka failing in case of overlength\n> name is detected).\n\nPlease find attached a patch doing so (which is basically a revert of d18c1d1f51).\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Wed, 21 Jun 2023 21:02:49 +0200",
"msg_from": "\"Drouvot, Bertrand\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: ProcessStartupPacket(): database_name and user_name truncation"
},
{
"msg_contents": "On Wed, Jun 21, 2023 at 09:02:49PM +0200, Drouvot, Bertrand wrote:\n> Please find attached a patch doing so (which is basically a revert of d18c1d1f51).\n\nLGTM. I think this can wait for v17 since the current behavior has been\naround since 2001 and AFAIK this is the first report. While it's arguably\na bug fix, the patch also breaks some cases that work today.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 21 Jun 2023 12:55:15 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ProcessStartupPacket(): database_name and user_name truncation"
},
{
"msg_contents": "On Wed, Jun 21, 2023 at 12:55:15PM -0700, Nathan Bossart wrote:\n> LGTM. I think this can wait for v17 since the current behavior has been\n> around since 2001 and AFAIK this is the first report. While it's arguably\n> a bug fix, the patch also breaks some cases that work today.\n\nAgreed that anything discussed on this thread does not warrant a\nbackpatch.\n--\nMichael",
"msg_date": "Thu, 22 Jun 2023 08:37:26 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ProcessStartupPacket(): database_name and user_name truncation"
},
{
"msg_contents": "Hi,\n\nOn 6/22/23 1:37 AM, Michael Paquier wrote:\n> On Wed, Jun 21, 2023 at 12:55:15PM -0700, Nathan Bossart wrote:\n>> LGTM. I think this can wait for v17 since the current behavior has been\n>> around since 2001 and AFAIK this is the first report. While it's arguably\n>> a bug fix, the patch also breaks some cases that work today.\n> \n> Agreed that anything discussed on this thread does not warrant a\n> backpatch.\n\nFully agree, the CF entry [1] has been tagged as \"Target Version 17\".\n\n[1] https://commitfest.postgresql.org/43/4383/\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 22 Jun 2023 08:10:30 +0200",
"msg_from": "\"Drouvot, Bertrand\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: ProcessStartupPacket(): database_name and user_name truncation"
},
{
"msg_contents": "After taking another look at this, I wonder if it'd be better to fail as\nsoon as we see the database or user name is too long instead of lugging\nthem around when authentication is destined to fail.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 30 Jun 2023 08:42:18 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ProcessStartupPacket(): database_name and user_name truncation"
},
{
"msg_contents": "Nathan Bossart <[email protected]> writes:\n> After taking another look at this, I wonder if it'd be better to fail as\n> soon as we see the database or user name is too long instead of lugging\n> them around when authentication is destined to fail.\n\nIf we're agreed that we aren't going to truncate these identifiers,\nthat seems like a reasonable way to handle it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 30 Jun 2023 11:54:12 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ProcessStartupPacket(): database_name and user_name truncation"
},
{
"msg_contents": "Hi,\n\nOn 6/30/23 5:54 PM, Tom Lane wrote:\n> Nathan Bossart <[email protected]> writes:\n>> After taking another look at this, I wonder if it'd be better to fail as\n>> soon as we see the database or user name is too long instead of lugging\n>> them around when authentication is destined to fail.\n> \n> If we're agreed that we aren't going to truncate these identifiers,\n> that seems like a reasonable way to handle it.\n> \n\nYeah agree, thanks Nathan for the idea.\nI'll work on a new patch version proposal.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 30 Jun 2023 19:32:50 +0200",
"msg_from": "\"Drouvot, Bertrand\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: ProcessStartupPacket(): database_name and user_name truncation"
},
{
"msg_contents": "Hi,\n\nOn 6/30/23 7:32 PM, Drouvot, Bertrand wrote:\n> Hi,\n> \n> On 6/30/23 5:54 PM, Tom Lane wrote:\n>> Nathan Bossart <[email protected]> writes:\n>>> After taking another look at this, I wonder if it'd be better to fail as\n>>> soon as we see the database or user name is too long instead of lugging\n>>> them around when authentication is destined to fail.\n>>\n>> If we're agreed that we aren't going to truncate these identifiers,\n>> that seems like a reasonable way to handle it.\n>>\n> \n> Yeah agree, thanks Nathan for the idea.\n> I'll work on a new patch version proposal.\n> \n\nPlease find V2 attached where it's failing as soon as the database name or\nuser name are detected as overlength.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Sat, 1 Jul 2023 16:02:06 +0200",
"msg_from": "\"Drouvot, Bertrand\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: ProcessStartupPacket(): database_name and user_name truncation"
},
{
"msg_contents": "At Fri, 30 Jun 2023 19:32:50 +0200, \"Drouvot, Bertrand\" <[email protected]> wrote in \n> Hi,\n> \n> On 6/30/23 5:54 PM, Tom Lane wrote:\n> > Nathan Bossart <[email protected]> writes:\n> >> After taking another look at this, I wonder if it'd be better to fail\n> >> as\n> >> soon as we see the database or user name is too long instead of\n> >> lugging\n> >> them around when authentication is destined to fail.\n\nFor the record, if I understand Nathan correctly, it is what I\nsuggested in my initial post. If this is correct, +1 for the suggestion.\n\nme> I think we might want to consider outright rejecting the\nme> estblishment of a connection when the given database name doesn't\nme> fit the startup packet\n\n> > If we're agreed that we aren't going to truncate these identifiers,\n> > that seems like a reasonable way to handle it.\n> > \n> \n> Yeah agree, thanks Nathan for the idea.\n> I'll work on a new patch version proposal.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Mon, 03 Jul 2023 10:50:45 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ProcessStartupPacket(): database_name and user_name truncation"
},
{
"msg_contents": "At Mon, 03 Jul 2023 10:50:45 +0900 (JST), Kyotaro Horiguchi <[email protected]> wrote in \n> For the record, if I understand Nathan correctly, it is what I\n> suggested in my initial post. If this is correct, +1 for the suggestion.\n> \n> me> I think we might want to consider outright rejecting the\n> me> estblishment of a connection when the given database name doesn't\n> me> fit the startup packet\n\nMmm. It's bit wrong. \"doesn't fit the startup packet\" is \"is long as a\ndatabase name\".\n\n\nAt Sat, 1 Jul 2023 16:02:06 +0200, \"Drouvot, Bertrand\" <[email protected]> wrote in \n> Please find V2 attached where it's failing as soon as the database\n> name or\n> user name are detected as overlength.\n\nI find another errocde \"ERRCODE_INVALID_ROLE_SPECIFICATION\". I don't\nfind a clear distinction between the usages of the two, but I think\n.._ROLE_.. might be a better fit.\n\n\nERRCODE_INVALID_ROLE_SPACIFICATION:\n auth.c:1507: \"could not transnlate name\"\n auth.c:1526: \"could not translate name\"\n auth.c:1539: \"realm name too long\"\n auth.c:1554: \"translated account name too long\"\n\nERRCODE_INVALID_AUTHORIZATION_SPECIFICATION:\npostmaster.c:2268: \"no PostgreSQL user name specified in startup packet\"\nmiscinit.c:756: \"role \\\"%s\\\" does not exist\"\nmiscinit.c:764: \"role with OID %u does not exist\"\nmiscinit.c:794: \"role \\\"%s\\\" is not permitted to log in\"\nauth.c:420: \"connection requires a valid client certificate\"\nauth.c:461,468,528,536: \"pg_hba.conf rejects ...\"\nauth.c:878: MD5 authentication is not supported when \\\"db_user_namespace\\\" is enabled\"\nauth-scram.c:1016: \"SCRAM channel binding negotiation error\"\nauth-scram.c:1349: \"SCRAM channel binding check failed\"\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Mon, 03 Jul 2023 11:09:58 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ProcessStartupPacket(): database_name and user_name truncation"
},
{
"msg_contents": "On Sat, Jul 01, 2023 at 04:02:06PM +0200, Drouvot, Bertrand wrote:\n> Please find V2 attached where it's failing as soon as the database name or\n> user name are detected as overlength.\n\nThanks, Bertrand. I chickened out and ended up committing v1 for now\n(i.e., simply removing the truncation code). I didn't like the idea of\ntrying to keep the new error messages consistent with code in faraway\nfiles, and the startup packet length limit is already pretty aggressive, so\nI'm a little less concerned about lugging around long names. Plus, I think\nv2 had some subtle interactions with db_user_namespace (maybe for the\nbetter), but I didn't spend too much time looking at that since\ndb_user_namespace will likely be removed soon.\n\nIf anyone disagrees and wants to see the FATALs emitted from\nProcessStartupPacket() directly, please let me know and we can work on\nadding them in a follow-up patch.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 3 Jul 2023 13:34:08 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ProcessStartupPacket(): database_name and user_name truncation"
},
{
"msg_contents": "Nathan Bossart <[email protected]> writes:\n> Thanks, Bertrand. I chickened out and ended up committing v1 for now\n> (i.e., simply removing the truncation code).\n\nWFM.\n\n> If anyone disagrees and wants to see the FATALs emitted from\n> ProcessStartupPacket() directly, please let me know and we can work on\n> adding them in a follow-up patch.\n\nI think the new behavior is fine.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 03 Jul 2023 18:33:46 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ProcessStartupPacket(): database_name and user_name truncation"
},
{
"msg_contents": "Hi,\n\nOn 7/3/23 10:34 PM, Nathan Bossart wrote:\n> On Sat, Jul 01, 2023 at 04:02:06PM +0200, Drouvot, Bertrand wrote:\n>> Please find V2 attached where it's failing as soon as the database name or\n>> user name are detected as overlength.\n> \n> Thanks, Bertrand. I chickened out and ended up committing v1 for now\n> (i.e., simply removing the truncation code). I didn't like the idea of\n> trying to keep the new error messages consistent with code in faraway\n> files, and the startup packet length limit is already pretty aggressive, so\n> I'm a little less concerned about lugging around long names. Plus, I think\n> v2 had some subtle interactions with db_user_namespace (maybe for the\n> better), but I didn't spend too much time looking at that since\n> db_user_namespace will likely be removed soon.\n\nThanks Nathan for the feedback and explanations, I think that makes fully sense.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 4 Jul 2023 08:06:37 +0200",
"msg_from": "\"Drouvot, Bertrand\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: ProcessStartupPacket(): database_name and user_name truncation"
}
] |
[
{
"msg_contents": "Hi,\r\n\r\nI was trying to add WAL stats to pg_stat_io. While doing that I was\r\ncomparing pg_stat_wal and pg_stat_io's WAL stats and there was some\r\ninequality between the total number of WALs. I found that the difference\r\ncomes from bgwriter's WALs. bgwriter generates WAL but it doesn't flush\r\nthem because the pgstat_report_wal() function isn't called in bgwriter. I\r\nattached a small patch for calling the pgstat_report_wal() function in\r\nbgwriter.\r\n\r\nbgwriter generates WAL by calling functions in this order:\r\nbgwriter.c -> BackgroundWriterMain() -> BgBufferSync() -> SyncOneBuffer()\r\n-> FlushBuffer() -> XLogFlush() -> XLogWrite()\r\n\r\nI used a query like BEGIN; followed by lots of(3000 in my case) INSERT,\r\nDELETE, or UPDATE, followed by a COMMIT while testing.\r\n\r\nExample output before patch applied:\r\n\r\n┌─────────────┬─────────────────┐\r\n│ view_name │ total_wal_write │\r\n├─────────────┼─────────────────┤\r\n│ pg_stat_wal │ 10318 │\r\n│ pg_stat_io │ 10321 │\r\n└─────────────┴─────────────────┘\r\n\r\n┌─────────────────────┬────────┬────────┐\r\n│ backend_type │ object │ writes │\r\n├─────────────────────┼────────┼────────┤\r\n│ autovacuum launcher │ wal │ 0 │\r\n│ autovacuum worker │ wal │ 691 │\r\n│ client backend │ wal │ 8170 │\r\n│ background worker │ wal │ 0 │\r\n│ background writer │ wal │ 3 │\r\n│ checkpointer │ wal │ 1 │\r\n│ standalone backend │ wal │ 737 │\r\n│ startup │ wal │ 0 │\r\n│ walsender │ wal │ 0 │\r\n│ walwriter │ wal │ 719 │\r\n└─────────────────────┴────────┴────────┘\r\n\r\nAfter the patch has been applied, there are no differences between\r\npg_stat_wal and pg_stat_io.\r\n\r\nI appreciate any comment/feedback on this patch.\r\n\r\nRegards,\r\nNazir Bilal Yavuz\r\nMicrosoft",
"msg_date": "Wed, 21 Jun 2023 14:04:17 +0300",
"msg_from": "Nazir Bilal Yavuz <[email protected]>",
"msg_from_op": true,
"msg_subject": "bgwriter doesn't flush WAL stats"
},
{
"msg_contents": "On Wed, 21 Jun 2023 at 13:04, Nazir Bilal Yavuz <[email protected]> wrote:\n>\n> Hi,\n>\n> I was trying to add WAL stats to pg_stat_io. While doing that I was comparing pg_stat_wal and pg_stat_io's WAL stats and there was some inequality between the total number of WALs. I found that the difference comes from bgwriter's WALs. bgwriter generates WAL but it doesn't flush them because the pgstat_report_wal() function isn't called in bgwriter. I attached a small patch for calling the pgstat_report_wal() function in bgwriter.\n>\n> bgwriter generates WAL by calling functions in this order:\n> bgwriter.c -> BackgroundWriterMain() -> BgBufferSync() -> SyncOneBuffer() -> FlushBuffer() -> XLogFlush() -> XLogWrite()\n\nI was quite confused here, as XLogWrite() does not generate any WAL;\nit only writes existing WAL from buffers to disk.\nIn a running PostgreSQL instance, WAL is only generated through\nXLogInsert(xloginsert.c) and serialized / written to buffers in its\ncall to XLogInsertRecord(xlog.c); XLogFlush and XLogWrite are only\nresponsible for writing those buffers to disk.\n\nThe only path that I see in XLogWrite() that could potentially put\nanything into WAL is through RequestCheckpoint(), but that only writes\nout a checkpoint when it is not in a postmaster environment - in all\nother cases it will wake up the checkpointer and wait for that\ncheckpoint to finish.\n\nI also got confused with your included views; they're not included in\nthe patch and the current master branch doesn't emit object=wal, so I\ncan't really check that the patch works as intended.\n\nBut on the topic of reporting the WAL stats in bgwriter; that seems\nlike a good idea to fix, yes.\n\n+1\n\nKind regards,\n\nMatthias van de Meent\nNeon, Inc.\n\n\n",
"msg_date": "Wed, 21 Jun 2023 17:02:50 +0200",
"msg_from": "Matthias van de Meent <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: bgwriter doesn't flush WAL stats"
},
{
"msg_contents": "Hi,\n\nThanks for the explanation.\n\nOn Wed, 21 Jun 2023 at 18:03, Matthias van de Meent <\[email protected]> wrote:\n>\n> On Wed, 21 Jun 2023 at 13:04, Nazir Bilal Yavuz <[email protected]>\nwrote:\n> > I was trying to add WAL stats to pg_stat_io. While doing that I was\ncomparing pg_stat_wal and pg_stat_io's WAL stats and there was some\ninequality between the total number of WALs. I found that the difference\ncomes from bgwriter's WALs. bgwriter generates WAL but it doesn't flush\nthem because the pgstat_report_wal() function isn't called in bgwriter. I\nattached a small patch for calling the pgstat_report_wal() function in\nbgwriter.\n> >\n> > bgwriter generates WAL by calling functions in this order:\n> > bgwriter.c -> BackgroundWriterMain() -> BgBufferSync() ->\nSyncOneBuffer() -> FlushBuffer() -> XLogFlush() -> XLogWrite()\n>\n> I was quite confused here, as XLogWrite() does not generate any WAL;\n> it only writes existing WAL from buffers to disk.\n> In a running PostgreSQL instance, WAL is only generated through\n> XLogInsert(xloginsert.c) and serialized / written to buffers in its\n> call to XLogInsertRecord(xlog.c); XLogFlush and XLogWrite are only\n> responsible for writing those buffers to disk.\n\nYes, you are right. Correct explanation should be \"bgwriter writes existing\nWAL from buffers to disk but pg_stat_wal doesn't count them because\nbgwriter doesn't call pgstat_report_wal() to update WAL statistics\".\n\n> I also got confused with your included views; they're not included in\n> the patch and the current master branch doesn't emit object=wal, so I\n> can't really check that the patch works as intended.\n\nI attached a WIP patch for showing WAL stats in pg_stat_io.\n\nAfter applying patch, I used these queries for the getting views I shared\nin the first mail;\n\nQuery for the first view:\nSELECT\n 'pg_stat_wal' AS view_name,\n SUM(wal_write) AS total_wal_write\nFROM\n pg_stat_wal\nUNION ALL\nSELECT\n 'pg_stat_io' AS view_name,\n SUM(writes) AS total_wal_write\nFROM\n pg_stat_io\nWHERE\n object = 'wal';\n\nQuery for the second view:\nSELECT backend_type, object, writes FROM pg_stat_io where object = 'wal';\n\nI also changed the description on the patch file and attached it.\n\nRegards,\nNazir Bilal Yavuz\nMicrosoft",
"msg_date": "Wed, 21 Jun 2023 18:52:26 +0300",
"msg_from": "Nazir Bilal Yavuz <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: bgwriter doesn't flush WAL stats"
},
{
"msg_contents": "At Wed, 21 Jun 2023 18:52:26 +0300, Nazir Bilal Yavuz <[email protected]> wrote in \n> I attached a WIP patch for showing WAL stats in pg_stat_io.\n\nYeah, your diagnosis appears accurate. I managed to trigger an\nassertion failure quite easily when I added\n\"Assert(!pgstat_have_pending_wal()) just after the call to\npgstat_report_bgwriter(). Good find!\n\nI slightly inclined to place the added call after smgrcloseall() but\nit doesn't seem to cause any io-stats updates so the proposed first\npatch as-is looks good to me.\n\n\nRegarding the second patch, it introduces WAL IO time as a\nIOCONTEXT_NORMAL/IOOBJECT_WAL, but it doesn't seem to follow the\nconvention or design of the pgstat_io component, which primarily\nfocuses on shared buffer IOs.\n\nThere was a brief mention about WAL IO during the development of\npgstat_io [1].\n\n>> It'd be different if we tracked WAL fsyncs more granularly - which would be\n>> quite interesting - but that's something for another day^Wpatch.\n>>\n>>\n> I do have a question about this.\n> So, if we were to start tracking WAL IO would it fit within this\n> paradigm to have a new IOPATH_WAL for WAL or would it add a separate\n> dimension?\n\n\n[1] https://www.postgresql.org/message-id/CAAKRu_bM55pj3pPRW0nd_-paWHLRkOU69r816AeztBBa-N1HLA%40mail.gmail.com\n\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 22 Jun 2023 10:48:54 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: bgwriter doesn't flush WAL stats"
},
{
"msg_contents": "On Wed, Jun 21, 2023 at 9:49 PM Kyotaro Horiguchi\n<[email protected]> wrote:\n> Regarding the second patch, it introduces WAL IO time as a\n> IOCONTEXT_NORMAL/IOOBJECT_WAL, but it doesn't seem to follow the\n> convention or design of the pgstat_io component, which primarily\n> focuses on shared buffer IOs.\n\nI haven't reviewed the patch yet, but in my opinion having an\nIOOBJECT_WAL makes sense. I imagined that we would add WAL as an\nIOObject along with others such as an IOOBJECT_BYPASS for \"bypass\" IO\n(IO done through the smgr API directly) and an IOOBJECT_SPILL or\nsomething like it for spill files from joins/aggregates/etc.\n\n> > I do have a question about this.\n> > So, if we were to start tracking WAL IO would it fit within this\n> > paradigm to have a new IOPATH_WAL for WAL or would it add a separate\n> > dimension?\n\nPersonally, I think WAL fits well as an IOObject. Then we can add\nIOCONTEXT_INIT and use that for WAL file initialization and\nIOCONTEXT_NORMAL for normal WAL writes/fysncs/etc. I don't think we\nneed a new dimension for it as it feels like an IO target just like\nshared buffers and temporary buffers do. I think we should save adding\nnew dimensions for relationships that we can't express in the existing\nparadigm.\n\n- Melanie\n\n\n",
"msg_date": "Thu, 22 Jun 2023 10:03:41 -0400",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: bgwriter doesn't flush WAL stats"
},
{
"msg_contents": "Hi,\n\nCreated a commitfest entry for this.\nLink: https://commitfest.postgresql.org/43/4405/\n\nRegards,\nNazir Bilal Yavuz\nMicrosoft\n\nOn Thu, 22 Jun 2023 at 17:03, Melanie Plageman <[email protected]>\nwrote:\n\n> On Wed, Jun 21, 2023 at 9:49 PM Kyotaro Horiguchi\n> <[email protected]> wrote:\n> > Regarding the second patch, it introduces WAL IO time as a\n> > IOCONTEXT_NORMAL/IOOBJECT_WAL, but it doesn't seem to follow the\n> > convention or design of the pgstat_io component, which primarily\n> > focuses on shared buffer IOs.\n>\n> I haven't reviewed the patch yet, but in my opinion having an\n> IOOBJECT_WAL makes sense. I imagined that we would add WAL as an\n> IOObject along with others such as an IOOBJECT_BYPASS for \"bypass\" IO\n> (IO done through the smgr API directly) and an IOOBJECT_SPILL or\n> something like it for spill files from joins/aggregates/etc.\n>\n> > > I do have a question about this.\n> > > So, if we were to start tracking WAL IO would it fit within this\n> > > paradigm to have a new IOPATH_WAL for WAL or would it add a separate\n> > > dimension?\n>\n> Personally, I think WAL fits well as an IOObject. Then we can add\n> IOCONTEXT_INIT and use that for WAL file initialization and\n> IOCONTEXT_NORMAL for normal WAL writes/fysncs/etc. I don't think we\n> need a new dimension for it as it feels like an IO target just like\n> shared buffers and temporary buffers do. I think we should save adding\n> new dimensions for relationships that we can't express in the existing\n> paradigm.\n>\n> - Melanie\n>\n\nHi,Created a commitfest entry for this.Link: https://commitfest.postgresql.org/43/4405/Regards,Nazir Bilal YavuzMicrosoftOn Thu, 22 Jun 2023 at 17:03, Melanie Plageman <[email protected]> wrote:On Wed, Jun 21, 2023 at 9:49 PM Kyotaro Horiguchi\n<[email protected]> wrote:\n> Regarding the second patch, it introduces WAL IO time as a\n> IOCONTEXT_NORMAL/IOOBJECT_WAL, but it doesn't seem to follow the\n> convention or design of the pgstat_io component, which primarily\n> focuses on shared buffer IOs.\n\nI haven't reviewed the patch yet, but in my opinion having an\nIOOBJECT_WAL makes sense. I imagined that we would add WAL as an\nIOObject along with others such as an IOOBJECT_BYPASS for \"bypass\" IO\n(IO done through the smgr API directly) and an IOOBJECT_SPILL or\nsomething like it for spill files from joins/aggregates/etc.\n\n> > I do have a question about this.\n> > So, if we were to start tracking WAL IO would it fit within this\n> > paradigm to have a new IOPATH_WAL for WAL or would it add a separate\n> > dimension?\n\nPersonally, I think WAL fits well as an IOObject. Then we can add\nIOCONTEXT_INIT and use that for WAL file initialization and\nIOCONTEXT_NORMAL for normal WAL writes/fysncs/etc. I don't think we\nneed a new dimension for it as it feels like an IO target just like\nshared buffers and temporary buffers do. I think we should save adding\nnew dimensions for relationships that we can't express in the existing\nparadigm.\n\n- Melanie",
"msg_date": "Tue, 27 Jun 2023 09:46:37 +0300",
"msg_from": "Nazir Bilal Yavuz <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: bgwriter doesn't flush WAL stats"
},
{
"msg_contents": "The first patch, to flush the bgwriter's WAL stats to the stats \ncollector, seems like a straightforward bug fix, so committed and \nbackpatched that. Thank you!\n\nI didn't look at the second patch.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Mon, 2 Oct 2023 13:08:36 +0300",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: bgwriter doesn't flush WAL stats"
},
{
"msg_contents": "Hi,\n\nOn Mon, 2 Oct 2023 at 13:08, Heikki Linnakangas <[email protected]> wrote:\n>\n> The first patch, to flush the bgwriter's WAL stats to the stats\n> collector, seems like a straightforward bug fix, so committed and\n> backpatched that. Thank you!\n>\n> I didn't look at the second patch.\n\nThanks for the push!\n\nActual commitfest entry for the second patch is:\nhttps://commitfest.postgresql.org/45/4416/. I sent a second patch to\nthis thread just to show how I found this bug. There is no need to\nreview it, this commitfest entry could be closed as committed.\n\nRegards,\nNazir Bilal Yavuz\nMicrosoft\n\n\n",
"msg_date": "Tue, 3 Oct 2023 16:08:37 +0300",
"msg_from": "Nazir Bilal Yavuz <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: bgwriter doesn't flush WAL stats"
}
] |
[
{
"msg_contents": "Hi,\n\nDavid Rowley wrote:\n>I've adjusted the attached patch to do that.\n\nI think that was room for more improvements.\n\n1. bms_member_index Bitmapset can be const.\n2. Only compute BITNUM when necessary.\n3. Avoid enlargement when nwords is equal wordnum.\n Can save cycles when in corner cases?\n\nJust for convenience I made a new version of the patch,\nIf want to use it.\n\nregards,\nRanier Vilela",
"msg_date": "Wed, 21 Jun 2023 09:16:01 -0300",
"msg_from": "Ranier Vilela <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Making empty Bitmapsets always be NULL"
},
{
"msg_contents": "On Thu, 22 Jun 2023 at 00:16, Ranier Vilela <[email protected]> wrote:\n> 2. Only compute BITNUM when necessary.\n\nI doubt this will help. The % 64 done by BITNUM will be transformed\nto an AND operation by the compiler which is likely going to be single\ninstruction latency on most CPUs which probably amounts to it being\n\"free\". There's maybe a bit of reading for you in [1] and [2] if\nyou're wondering how any operation could be free.\n\n(The compiler is able to transform the % into what is effectively &\nbecause 64 is a power of 2. uintvar % 64 is the same as uintvar & 63.\nPlay around with [3] to see what I mean)\n\n> 3. Avoid enlargement when nwords is equal wordnum.\n> Can save cycles when in corner cases?\n\nNo, you're just introducing a bug here. Arrays in C are zero-based,\nso \"wordnum >= a->nwords\" is exactly the correct way to check if\nwordnum falls outside the bounds of the existing allocated memory. By\nchanging that to \"wordnum > a->nwords\" we'll fail to enlarge the words\narray when it needs to be enlarged by 1 element.\n\nIt looks like you've introduced a bunch of random white space and\nchanged around a load of other random things in the patch too. I'm not\nsure why you think that's a good idea.\n\nFWIW, we normally only write \"if (somevar)\" as a shortcut when somevar\nis boolean and we want to know that it's true. The word value is not\na boolean type, so although \"if (someint)\" and \"if (someint != 0)\"\nwill compile to the same machine code, we don't normally write our C\ncode that way in PostgreSQL. We also tend to write \"if (someptr !=\nNULL)\" rather than \"if (someptr)\". The compiler will produce the same\ncode for each, but we write the former to assist people reading the\ncode so they know we're checking for NULL rather than checking if some\nboolean variable is true.\n\nOverall, I'm not really interested in sneaking any additional changes\nthat are unrelated to adjusting Bitmapsets so that don't carry\ntrailing zero words. If have other optimisations you think are\nworthwhile, please include them in another thread along with\nbenchmarks to show the performance increase. For learning, I'd\nencourage you to do some micro benchmarks outside of PostgreSQL and\nmock up some Bitmapset code in a single .c file and try out with any\nwithout your changes after calling the function in a tight loop to see\nif you can measure any performance gains. Just remember you'll never\nsee any gains in performance when your change compiles into the exact\nsame code as without your change. Between [1] and [2], you still\nmight not see performance changes even when the compiled code is\nchanged (I'm thinking of your #2 change here).\n\nDavid\n\n[1] https://en.wikipedia.org/wiki/Speculative_execution\n[2] https://en.wikipedia.org/wiki/Out-of-order_execution\n[3] https://godbolt.org/z/9vbbnMKEE\n\n\n",
"msg_date": "Thu, 22 Jun 2023 16:43:28 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Making empty Bitmapsets always be NULL"
},
{
"msg_contents": "Hello,\n\nOn Thu, Jun 22, 2023 at 1:43 PM David Rowley <[email protected]> wrote:\n> > 3. Avoid enlargement when nwords is equal wordnum.\n> > Can save cycles when in corner cases?\n>\n> No, you're just introducing a bug here. Arrays in C are zero-based,\n> so \"wordnum >= a->nwords\" is exactly the correct way to check if\n> wordnum falls outside the bounds of the existing allocated memory. By\n> changing that to \"wordnum > a->nwords\" we'll fail to enlarge the words\n> array when it needs to be enlarged by 1 element.\n\nI agree with David. Unfortunately, some of the regression tests failed\nwith the v5 patch. These failures are due to the bug introduced by the\n#3 change.\n\n-- \nBest regards,\nYuya Watari\n\n\n",
"msg_date": "Thu, 22 Jun 2023 17:49:34 +0900",
"msg_from": "Yuya Watari <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Making empty Bitmapsets always be NULL"
},
{
"msg_contents": "Em qui., 22 de jun. de 2023 às 01:43, David Rowley <[email protected]>\nescreveu:\n\n> On Thu, 22 Jun 2023 at 00:16, Ranier Vilela <[email protected]> wrote:\n> > 2. Only compute BITNUM when necessary.\n>\n> I doubt this will help. The % 64 done by BITNUM will be transformed\n> to an AND operation by the compiler which is likely going to be single\n> instruction latency on most CPUs which probably amounts to it being\n> \"free\". There's maybe a bit of reading for you in [1] and [2] if\n> you're wondering how any operation could be free.\n>\nI think the word free is not the right one.\nThe end result of the code is the same, so whatever you write it one way or\nthe other,\nthe compiler will transform it as if it were written without calculating\nBITNUM in advance.\n\nSee at:\nhttps://godbolt.org/z/39MdcP7M3\n\nThe issue is the code becomes clearer and more readable with the\ncalculation in advance.\nIn that case, I think so.\nBut this is on a case-by-case basis, in other contexts it can be more\nexpensive.\n\n\n>\n> (The compiler is able to transform the % into what is effectively &\n> because 64 is a power of 2. uintvar % 64 is the same as uintvar & 63.\n> Play around with [3] to see what I mean)\n>\n> > 3. Avoid enlargement when nwords is equal wordnum.\n> > Can save cycles when in corner cases?\n>\n> No, you're just introducing a bug here. Arrays in C are zero-based,\n> so \"wordnum >= a->nwords\" is exactly the correct way to check if\n> wordnum falls outside the bounds of the existing allocated memory. By\n> changing that to \"wordnum > a->nwords\" we'll fail to enlarge the words\n> array when it needs to be enlarged by 1 element.\n>\nYeah, this is my fault.\nUnfortunately, I missed the failure of the regression tests.\n\n\n> It looks like you've introduced a bunch of random white space and\n> changed around a load of other random things in the patch too. I'm not\n> sure why you think that's a good idea.\n>\nWeel, It is much easier to read and follows the general style of the other\nfonts.\n\n\n> FWIW, we normally only write \"if (somevar)\" as a shortcut when somevar\n> is boolean and we want to know that it's true. The word value is not\n> a boolean type, so although \"if (someint)\" and \"if (someint != 0)\"\n> will compile to the same machine code, we don't normally write our C\n> code that way in PostgreSQL. We also tend to write \"if (someptr !=\n> NULL)\" rather than \"if (someptr)\". The compiler will produce the same\n> code for each, but we write the former to assist people reading the\n> code so they know we're checking for NULL rather than checking if some\n> boolean variable is true.\n>\nNo, this is not the case.\nWith unsigned words, it can be a more appropriate test without == 0.\n\nSee:\nhttps://stackoverflow.com/questions/14267081/difference-between-je-jne-and-jz-jnz\n\nIn some contexts, it can be faster when it has CMP instruction before.\n\n\n> Overall, I'm not really interested in sneaking any additional changes\n> that are unrelated to adjusting Bitmapsets so that don't carry\n> trailing zero words. If have other optimisations you think are\n> worthwhile, please include them in another thread along with\n> benchmarks to show the performance increase. For learning, I'd\n> encourage you to do some micro benchmarks outside of PostgreSQL and\n> mock up some Bitmapset code in a single .c file and try out with any\n> without your changes after calling the function in a tight loop to see\n> if you can measure any performance gains. 
Just remember you'll never\n> see any gains in performance when your change compiles into the exact\n> same code as without your change. Between [1] and [2], you still\n> might not see performance changes even when the compiled code is\n> changed (I'm thinking of your #2 change here).\n>\nWell, *const* always is a good style and can prevent mistakes and\nallows the compiler to do optimizations.\n\nregards,\nRanier Vilela\n\nEm qui., 22 de jun. de 2023 às 01:43, David Rowley <[email protected]> escreveu:On Thu, 22 Jun 2023 at 00:16, Ranier Vilela <[email protected]> wrote:\n> 2. Only compute BITNUM when necessary.\n\nI doubt this will help. The % 64 done by BITNUM will be transformed\nto an AND operation by the compiler which is likely going to be single\ninstruction latency on most CPUs which probably amounts to it being\n\"free\". There's maybe a bit of reading for you in [1] and [2] if\nyou're wondering how any operation could be free.I think the word free is not the right one.The end result of the code is the same, so whatever you write it one way or the other, the compiler will transform it as if it were written without calculating BITNUM in advance.See at:https://godbolt.org/z/39MdcP7M3\nThe issue is the code becomes clearer and more readable with the calculation in advance. In that case, I think so.But this is on a case-by-case basis, in other contexts it can be more expensive.\n \n\n(The compiler is able to transform the % into what is effectively &\nbecause 64 is a power of 2. uintvar % 64 is the same as uintvar & 63.\nPlay around with [3] to see what I mean)\n\n> 3. Avoid enlargement when nwords is equal wordnum.\n> Can save cycles when in corner cases?\n\nNo, you're just introducing a bug here. Arrays in C are zero-based,\nso \"wordnum >= a->nwords\" is exactly the correct way to check if\nwordnum falls outside the bounds of the existing allocated memory. By\nchanging that to \"wordnum > a->nwords\" we'll fail to enlarge the words\narray when it needs to be enlarged by 1 element.Yeah, this is my fault.Unfortunately, I missed the failure of the regression tests. \n\nIt looks like you've introduced a bunch of random white space and\nchanged around a load of other random things in the patch too. I'm not\nsure why you think that's a good idea.Weel, \nIt is much easier to read and follows the general style of the other fonts. \n\nFWIW, we normally only write \"if (somevar)\" as a shortcut when somevar\nis boolean and we want to know that it's true. The word value is not\na boolean type, so although \"if (someint)\" and \"if (someint != 0)\"\nwill compile to the same machine code, we don't normally write our C\ncode that way in PostgreSQL. We also tend to write \"if (someptr !=\nNULL)\" rather than \"if (someptr)\". The compiler will produce the same\ncode for each, but we write the former to assist people reading the\ncode so they know we're checking for NULL rather than checking if some\nboolean variable is true.No, this is not the case.With unsigned words, it can be a more appropriate test without == 0.See:https://stackoverflow.com/questions/14267081/difference-between-je-jne-and-jz-jnzIn some contexts, it can be faster when it has CMP instruction before.\n\nOverall, I'm not really interested in sneaking any additional changes\nthat are unrelated to adjusting Bitmapsets so that don't carry\ntrailing zero words. If have other optimisations you think are\nworthwhile, please include them in another thread along with\nbenchmarks to show the performance increase. 
For learning, I'd\nencourage you to do some micro benchmarks outside of PostgreSQL and\nmock up some Bitmapset code in a single .c file and try out with any\nwithout your changes after calling the function in a tight loop to see\nif you can measure any performance gains. Just remember you'll never\nsee any gains in performance when your change compiles into the exact\nsame code as without your change. Between [1] and [2], you still\nmight not see performance changes even when the compiled code is\nchanged (I'm thinking of your #2 change here).Well, *const* always is a good style and can prevent mistakes andallows the compiler to do optimizations. regards,Ranier Vilela",
"msg_date": "Thu, 22 Jun 2023 08:57:40 -0300",
"msg_from": "Ranier Vilela <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Making empty Bitmapsets always be NULL"
},
{
"msg_contents": "Em qui., 22 de jun. de 2023 às 05:50, Yuya Watari <[email protected]>\nescreveu:\n\n> Hello,\n>\n> On Thu, Jun 22, 2023 at 1:43 PM David Rowley <[email protected]> wrote:\n> > > 3. Avoid enlargement when nwords is equal wordnum.\n> > > Can save cycles when in corner cases?\n> >\n> > No, you're just introducing a bug here. Arrays in C are zero-based,\n> > so \"wordnum >= a->nwords\" is exactly the correct way to check if\n> > wordnum falls outside the bounds of the existing allocated memory. By\n> > changing that to \"wordnum > a->nwords\" we'll fail to enlarge the words\n> > array when it needs to be enlarged by 1 element.\n>\n> I agree with David. Unfortunately, some of the regression tests failed\n> with the v5 patch. These failures are due to the bug introduced by the\n> #3 change.\n>\nYeah, this is my fault.\n\nAnyway thanks for the brilliant ideas about optimize bitmapset.\nI worked a bit more on the v4 version and made a new v6 version, with some\nchanges.\n\nI made some benchmarks with v4 and v6:\nWindows 64 bits\nmsvc 2019 64 bits\n\n== Query A ==\npsql -U postgres -f c:\\postgres_bench\\tmp\\bitmapset\\create-tables-a.sql\npsql -U postgres -f c:\\postgres_bench\\tmp\\bitmapset\\query-a.sql\n=============\n\nhead:\nTime: 3489,097 ms (00:03,489)\nTime: 3501,780 ms (00:03,502)\n\npatched v4:\nTime: 2434,873 ms (00:02,435)\nTime: 2310,832 ms (00:02,311)\nTime: 2305,445 ms (00:02,305)\nTime: 2185,972 ms (00:02,186)\nTime: 2177,434 ms (00:02,177)\nTime: 2169,883 ms (00:02,170)\n\npatched v6:\nTime: 2162,633 ms (00:02,163)\nTime: 2159,805 ms (00:02,160)\nTime: 2002,771 ms (00:02,003)\nTime: 1944,436 ms (00:01,944)\nTime: 1906,364 ms (00:01,906)\nTime: 1903,897 ms (00:01,904)\n\n== Query B ==\npsql -U postgres -f c:\\postgres_bench\\tmp\\bitmapset\\create-tables-b.sql\npsql -U postgres -f c:\\postgres_bench\\tmp\\bitmapset\\query-b.sql\n\npatched v4:\nTime: 2684,360 ms (00:02,684)\nTime: 2482,571 ms (00:02,483)\nTime: 2452,699 ms (00:02,453)\nTime: 2465,223 ms (00:02,465)\n\npatched v6:\nTime: 1837,775 ms (00:01,838)\nTime: 1801,274 ms (00:01,801)\nTime: 1800,802 ms (00:01,801)\nTime: 1798,786 ms (00:01,799)\n\nI can see some improvement, would you mind testing v6 and reporting back?\n\nregards,\nRanier Vilela",
"msg_date": "Fri, 23 Jun 2023 16:43:11 -0300",
"msg_from": "Ranier Vilela <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Making empty Bitmapsets always be NULL"
},
{
"msg_contents": "On Sat, 24 Jun 2023 at 07:43, Ranier Vilela <[email protected]> wrote:\n> I worked a bit more on the v4 version and made a new v6 version, with some changes.\n\n> I can see some improvement, would you mind testing v6 and reporting back?\n\nPlease don't bother. I've already mentioned that I'm not going to\nconsider any changes here which are unrelated to changing the rule\nthat Bitmapsets no longer can have trailing zero words. I've already\nsaid in [1] that if you have unrelated changes that you wish to pursue\nin regards to Bitmapset, then please do so on another thread.\n\nAlso, FWIW, from glancing over it, your v6 patch introduces a bunch of\nout-of-bounds memory access bugs and a few things are less efficient\nthan I'd made them. The number of bytes you're zeroing using memset in\nbms_add_member() and bms_add_range() is wrong. bms_del_member() now\nneedlessly rechecks if a->words[wordnum] is 0. We already know it is 0\nfrom the above check. You may have misunderstood the point of swapping\nfor loops for do/while loops? They're meant to save the needless loop\nbounds check on the initial loop due to the knowledge that the\nBitmapset contains at least 1 word.\n\nAdditionally, it looks like you've made various places that loop over\nthe set and check for the \"lastnonzero\" less efficiently by adding an\nadditional surplus check. Depending on the CPU architecture, looping\nbackwards over arrays can be less efficient due to lack of hardware\nprefetching when accessing memory in reverse order. It's not clear to\nme why you think looping backwards is faster. I've no desire to\nintroduce code that needlessly performs more slowly depending on the\nability of the hardware prefetcher on the CPU architecture PostgreSQL\nis running on.\n\nAlso, if you going to post benchmark results, they're not very\nmeaningful unless you can demonstrate what you actually tested. You've\nmentioned nothing here to say what query-b.sql contains.\n\nDavid\n\n[1] https://postgr.es/m/CAApHDvo65DXFZcGJZ7pvXS75vUT+1-wSaP_kvefWGsns2y2vsg@mail.gmail.com\n\n\n",
"msg_date": "Sat, 24 Jun 2023 12:47:44 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Making empty Bitmapsets always be NULL"
}
] |
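(Editor's sketch, outside the archived thread above: a standalone C illustration of the modulo point made in the Bitmapset discussion. For an unsigned operand and the power-of-two constant 64, x % 64 and x & 63 produce the same value and compile to the same single AND instruction, which is why precomputing BITNUM does not change the generated code. The macro names mirror the ones discussed in the thread; the program itself is an illustrative assumption, not PostgreSQL source.)

#include <stdio.h>

#define BITS_PER_BITMAPWORD 64
#define WORDNUM(x)	((x) / BITS_PER_BITMAPWORD)	/* which word holds bit x */
#define BITNUM(x)	((x) % BITS_PER_BITMAPWORD)	/* bit position inside that word */

int
main(void)
{
	/* For unsigned operands the compiler rewrites % 64 as & 63, so both
	 * columns below are always equal and each costs one AND instruction. */
	for (unsigned int x = 60; x < 70; x++)
		printf("x=%u wordnum=%u bitnum=%u x&63=%u\n",
			   x, WORDNUM(x), BITNUM(x), x & 63);
	return 0;
}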
[
{
"msg_contents": "Hello,\n\nWhile at PGCon I was chatting with Andres (and I think Peter G. and a\nfew others who I can't remember at the moment, apologies) and Andres\nnoted that while we opportunistically prune a page when inserting a\ntuple (before deciding we need a new page) we don't do the same for\nupdates.\n\nAttached is a patch series to do the following:\n\n0001: Make it possible to call heap_page_prune_opt already holding an\nexclusive lock on the buffer.\n0002: Opportunistically prune pages on update when the current tuple's\npage has no free space. If this frees up enough space, then we\ncontinue to put the new tuple on that page; if not, then we take the\nexisting code path and get a new page.\n\nOne would plausibly expect the following improvements:\n- Reduced table bloat\n- Increased HOT update rate\n- Improved performance on updates\n\nI started to work on benchmarking this, but haven't had time to devote\nproperly to that, so I'm wondering if there's anyone who might be\ninterested in collaborating on that part.\n\nOther TODOs:\n- Audit other callers of RelationSetTargetBlock() to ensure they don't\nhold pointers into the page.\n\nRegards,\nJames Coleman",
"msg_date": "Wed, 21 Jun 2023 08:51:39 -0400",
"msg_from": "James Coleman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Opportunistically pruning page before update"
},
{
"msg_contents": "On Wed, Jun 21, 2023 at 8:51 AM James Coleman <[email protected]> wrote:\n> While at PGCon I was chatting with Andres (and I think Peter G. and a\n> few others who I can't remember at the moment, apologies) and Andres\n> noted that while we opportunistically prune a page when inserting a\n> tuple (before deciding we need a new page) we don't do the same for\n> updates.\n>\n> Attached is a patch series to do the following:\n>\n> 0001: Make it possible to call heap_page_prune_opt already holding an\n> exclusive lock on the buffer.\n> 0002: Opportunistically prune pages on update when the current tuple's\n> page has no free space. If this frees up enough space, then we\n> continue to put the new tuple on that page; if not, then we take the\n> existing code path and get a new page.\n\nI've reviewed these patches and have questions.\n\nUnder what conditions would this be exercised for UPDATE? Could you\nprovide an example?\n\nWith your patch applied, when I create a table, the first time I update\nit heap_page_prune_opt() will return before actually doing any pruning\nbecause the page prune_xid hadn't been set (it is set after pruning as\nwell as later in heap_update() after RelationGetBufferForTuple() is\ncalled).\n\nI actually added an additional parameter to heap_page_prune() and\nheap_page_prune_opt() to identify if heap_page_prune() was called from\nRelationGetBufferForTuple() and logged a message when this was true.\nRunning the test suite, I didn't see any UPDATEs executing\nheap_page_prune() from RelationGetBufferForTuple(). I did, however, see\nother statement types doing so (see RelationGetBufferForTuple()'s other\ncallers). Was that intended?\n\n> I started to work on benchmarking this, but haven't had time to devote\n> properly to that, so I'm wondering if there's anyone who might be\n> interested in collaborating on that part.\n\nI'm interested in this feature and in helping with it/helping with\nbenchmarking it, but I don't yet understand the design in its current\nform.\n\n- Melanie\n\n\n",
"msg_date": "Tue, 5 Sep 2023 13:40:37 -0400",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Opportunistically pruning page before update"
},
{
"msg_contents": "On Tue, Sep 5, 2023 at 1:40 PM Melanie Plageman\n<[email protected]> wrote:\n>\n> On Wed, Jun 21, 2023 at 8:51 AM James Coleman <[email protected]> wrote:\n> > While at PGCon I was chatting with Andres (and I think Peter G. and a\n> > few others who I can't remember at the moment, apologies) and Andres\n> > noted that while we opportunistically prune a page when inserting a\n> > tuple (before deciding we need a new page) we don't do the same for\n> > updates.\n> >\n> > Attached is a patch series to do the following:\n> >\n> > 0001: Make it possible to call heap_page_prune_opt already holding an\n> > exclusive lock on the buffer.\n> > 0002: Opportunistically prune pages on update when the current tuple's\n> > page has no free space. If this frees up enough space, then we\n> > continue to put the new tuple on that page; if not, then we take the\n> > existing code path and get a new page.\n>\n> I've reviewed these patches and have questions.\n>\n> Under what conditions would this be exercised for UPDATE? Could you\n> provide an example?\n>\n> With your patch applied, when I create a table, the first time I update\n> it heap_page_prune_opt() will return before actually doing any pruning\n> because the page prune_xid hadn't been set (it is set after pruning as\n> well as later in heap_update() after RelationGetBufferForTuple() is\n> called).\n>\n> I actually added an additional parameter to heap_page_prune() and\n> heap_page_prune_opt() to identify if heap_page_prune() was called from\n> RelationGetBufferForTuple() and logged a message when this was true.\n> Running the test suite, I didn't see any UPDATEs executing\n> heap_page_prune() from RelationGetBufferForTuple(). I did, however, see\n> other statement types doing so (see RelationGetBufferForTuple()'s other\n> callers). Was that intended?\n>\n> > I started to work on benchmarking this, but haven't had time to devote\n> > properly to that, so I'm wondering if there's anyone who might be\n> > interested in collaborating on that part.\n>\n> I'm interested in this feature and in helping with it/helping with\n> benchmarking it, but I don't yet understand the design in its current\n> form.\n\nHi Melanie,\n\nThanks for taking a look at this! Apologies for the long delay in\nreplying: I started to take a look at your questions earlier, and it\nturned into more of a rabbit hole than I'd anticipated. I've since\nbeen distracted by other things. So -- I don't have any conclusions\nhere yet, but I'm hoping at or after PGConf NYC that I'll be able to\ndedicate the time this deserves.\n\nThanks,\nJames Coleman\n\n\n",
"msg_date": "Tue, 26 Sep 2023 08:30:49 -0400",
"msg_from": "James Coleman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Opportunistically pruning page before update"
},
{
"msg_contents": "On Tue, Sep 26, 2023 at 8:30 AM James Coleman <[email protected]> wrote:\n>\n> On Tue, Sep 5, 2023 at 1:40 PM Melanie Plageman\n> <[email protected]> wrote:\n> >\n> > On Wed, Jun 21, 2023 at 8:51 AM James Coleman <[email protected]> wrote:\n> > > While at PGCon I was chatting with Andres (and I think Peter G. and a\n> > > few others who I can't remember at the moment, apologies) and Andres\n> > > noted that while we opportunistically prune a page when inserting a\n> > > tuple (before deciding we need a new page) we don't do the same for\n> > > updates.\n> > >\n> > > Attached is a patch series to do the following:\n> > >\n> > > 0001: Make it possible to call heap_page_prune_opt already holding an\n> > > exclusive lock on the buffer.\n> > > 0002: Opportunistically prune pages on update when the current tuple's\n> > > page has no free space. If this frees up enough space, then we\n> > > continue to put the new tuple on that page; if not, then we take the\n> > > existing code path and get a new page.\n> >\n> > I've reviewed these patches and have questions.\n> >\n> > Under what conditions would this be exercised for UPDATE? Could you\n> > provide an example?\n> >\n> > With your patch applied, when I create a table, the first time I update\n> > it heap_page_prune_opt() will return before actually doing any pruning\n> > because the page prune_xid hadn't been set (it is set after pruning as\n> > well as later in heap_update() after RelationGetBufferForTuple() is\n> > called).\n> >\n> > I actually added an additional parameter to heap_page_prune() and\n> > heap_page_prune_opt() to identify if heap_page_prune() was called from\n> > RelationGetBufferForTuple() and logged a message when this was true.\n> > Running the test suite, I didn't see any UPDATEs executing\n> > heap_page_prune() from RelationGetBufferForTuple(). I did, however, see\n> > other statement types doing so (see RelationGetBufferForTuple()'s other\n> > callers). Was that intended?\n> >\n> > > I started to work on benchmarking this, but haven't had time to devote\n> > > properly to that, so I'm wondering if there's anyone who might be\n> > > interested in collaborating on that part.\n> >\n> > I'm interested in this feature and in helping with it/helping with\n> > benchmarking it, but I don't yet understand the design in its current\n> > form.\n>\n> Hi Melanie,\n>\n> Thanks for taking a look at this! Apologies for the long delay in\n> replying: I started to take a look at your questions earlier, and it\n> turned into more of a rabbit hole than I'd anticipated. I've since\n> been distracted by other things. So -- I don't have any conclusions\n> here yet, but I'm hoping at or after PGConf NYC that I'll be able to\n> dedicate the time this deserves.\n\nHi,\n\nI poked at this a decent amount last night and uncovered a couple of\nthings (whether or not Andres and I had discussed these details at\nPGCon...I don't remember):\n\n1. We don't ever opportunistically prune on INSERT, but we do\n(somewhat, see below) on UPDATE, since we call it the first time we\nread the page with the to-be-updated tuple on it.\n2. The reason that original testing on v1 didn't see any real changes\nis because PageClearHasFreeLinePointers() wasn't the right fastpath\ngate on this; I should have been using !PageIsFull().\n\nWith the change to use !PageIsFull() I can trivially show that there\nis improvement functionally. 
Consider the following commands:\n\ndrop table if exists foo;\ncreate table foo(pk serial primary key, t text);\ninsert into foo(t) select repeat('a', 250) from generate_series(1, 27);\nselect pg_relation_size('foo');\ndelete from foo where pk <= 10;\ninsert into foo(t) select repeat('b', 250) from generate_series(1, 10);\nselect pg_relation_size('foo');\n\nOn master this will result in a final relation size of 16384 while\nwith the patch applied the final relation size is 8192.\n\nI talked to Andres and Peter again today, and out of that conversation\nI have some observations and ideas for future improvements.\n\n1. The most trivial case where this is useful is INSERT: we have a\ntarget page, and it may have dead tuples, so trying to prune may\nresult in us being able to use the target page rather than getting a\nnew page.\n2. The next most trivial case is where UPDATE (potentially after\nfailing to find space for a HOT tuple on the source tuple's page);\nmuch like the INSERT case our backend's target page may benefit from\npruning.\n3. A more complex UPDATE case occurs when we check the tuple's page\nfor space in order to insert a HOT tuple and fail to find enough\nspace. While we've already opportunistically pruned the page on\ninitial read of the tuple, in complex queries this might be some time\nin the past, so it may be worth attempting again. Beyond that context\nis key: if we already know we could otherwise do a HOT update but for\nthe lack of free space on the page, then spending extra cycles\nrescuing that failed attempt is easier to justify. In order to do that\nwe ought to invent an \"aggressive\" flag to heap_page_prune_opt telling\nit that it doesn't need to be quite so careful about exiting fast.\nPerhaps we can rescue the HOT update optimization by pruning\naggressively.\n4. We can prune the target page when the current backend recently\naborted a transaction. Additionally we could prune the target page\nimmediately on rollback (potentially we could even get into the\ncomplexity of doing retail index tuple deletion when a transaction\naborts).\n\nIt may or may not be the case that I end up pursuing all of these in\nthis particular patch series, but I wanted to at least get it written\ndown here for history's sake.\n\nThe attached v2 patch series handles case 1 and likely case 2 (though\nI haven't tested case 2 yet). The \"log when pruning\" patch files may\nor may not be useful to you: they add a bunch of logging to make it\neasier to observe what's happening while playing around in psql.\n\nRegards,\nJames",
"msg_date": "Wed, 4 Oct 2023 17:01:14 -0400",
"msg_from": "James Coleman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Opportunistically pruning page before update"
},
{
"msg_contents": "On Thu, Oct 5, 2023 at 2:35 AM James Coleman <[email protected]> wrote:\n>\n> I talked to Andres and Peter again today, and out of that conversation\n> I have some observations and ideas for future improvements.\n>\n> 1. The most trivial case where this is useful is INSERT: we have a\n> target page, and it may have dead tuples, so trying to prune may\n> result in us being able to use the target page rather than getting a\n> new page.\n> 2. The next most trivial case is where UPDATE (potentially after\n> failing to find space for a HOT tuple on the source tuple's page);\n> much like the INSERT case our backend's target page may benefit from\n> pruning.\n\nBy looking at the patch I believe that v2-0003 is implementing these 2\nideas. So my question is are we planning to prune the backend's\ncurrent target page only or if we can not find space in that then we\nare targetting to prune the other target pages as well which we are\ngetting from FSM? Because in the patch you have put a check in a loop\nit will try to prune every page it gets from the FSM not just the\ncurrent target page of the backend. Just wanted to understand if this\nis intentional.\n\nIn general, all 4 ideas look promising.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 6 Oct 2023 10:48:16 +0530",
"msg_from": "Dilip Kumar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Opportunistically pruning page before update"
},
{
"msg_contents": "Hi,\n\nThanks for taking a look!\n\nOn Fri, Oct 6, 2023 at 1:18 AM Dilip Kumar <[email protected]> wrote:\n>\n> On Thu, Oct 5, 2023 at 2:35 AM James Coleman <[email protected]> wrote:\n> >\n> > I talked to Andres and Peter again today, and out of that conversation\n> > I have some observations and ideas for future improvements.\n> >\n> > 1. The most trivial case where this is useful is INSERT: we have a\n> > target page, and it may have dead tuples, so trying to prune may\n> > result in us being able to use the target page rather than getting a\n> > new page.\n> > 2. The next most trivial case is where UPDATE (potentially after\n> > failing to find space for a HOT tuple on the source tuple's page);\n> > much like the INSERT case our backend's target page may benefit from\n> > pruning.\n>\n> By looking at the patch I believe that v2-0003 is implementing these 2\n> ideas. So my question is are we planning to prune the backend's\n> current target page only or if we can not find space in that then we\n> are targetting to prune the other target pages as well which we are\n> getting from FSM? Because in the patch you have put a check in a loop\n> it will try to prune every page it gets from the FSM not just the\n> current target page of the backend. Just wanted to understand if this\n> is intentional.\n\nYes, just like with our opportunistically pruning on each read during\na select I think we should at least check when we have a new target\npage. This seems particularly true since we're hoping to write to the\npage anyway, and the cost of additionally making pruning changes to\nthat page is low. I looked at freespace.c, and it doesn't appear that\ngetting the block from the FSM does this already, so we're not\nduplicating any existing work.\n\nRegards,\nJames\n\n\n",
"msg_date": "Fri, 6 Oct 2023 09:08:55 -0400",
"msg_from": "James Coleman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Opportunistically pruning page before update"
},
{
"msg_contents": "2024-01 Commitfest.\n\nHi, This patch has a CF status of \"Needs Review\" [1], but it seems\nlike there was some CFbot test failure last time it was run [2].\nPlease have a look and post an updated version if necessary.\n\n======\n[1] https://commitfest.postgresql.org/46/4384//\n[2] https://cirrus-ci.com/github/postgresql-cfbot/postgresql/commitfest/46/4384\n\n\n",
"msg_date": "Mon, 22 Jan 2024 13:58:00 +1100",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Opportunistically pruning page before update"
},
{
"msg_contents": "On Sun, Jan 21, 2024 at 9:58 PM Peter Smith <[email protected]> wrote:\n>\n> 2024-01 Commitfest.\n>\n> Hi, This patch has a CF status of \"Needs Review\" [1], but it seems\n> like there was some CFbot test failure last time it was run [2].\n> Please have a look and post an updated version if necessary.\n>\n> ======\n> [1] https://commitfest.postgresql.org/46/4384//\n> [2] https://cirrus-ci.com/github/postgresql-cfbot/postgresql/commitfest/46/4384\n\nSee rebased patch attached.\n\nThanks,\nJames Coleman",
"msg_date": "Mon, 22 Jan 2024 20:21:23 -0500",
"msg_from": "James Coleman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Opportunistically pruning page before update"
},
{
"msg_contents": "On Mon, Jan 22, 2024 at 8:21 PM James Coleman <[email protected]> wrote:\n>\n> See rebased patch attached.\n\nI just realized I left a change in during the rebase that wasn't necessary.\n\nv4 attached.\n\nRegards,\nJames Coleman",
"msg_date": "Mon, 22 Jan 2024 20:47:55 -0500",
"msg_from": "James Coleman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Opportunistically pruning page before update"
},
{
"msg_contents": "On Tue, Jan 23, 2024 at 7:18 AM James Coleman <[email protected]> wrote:\n>\n> On Mon, Jan 22, 2024 at 8:21 PM James Coleman <[email protected]> wrote:\n> >\n> > See rebased patch attached.\n>\n> I just realized I left a change in during the rebase that wasn't necessary.\n>\n> v4 attached.\n\nI have noticed that you are performing the opportunistic pruning after\nwe decided that the updated tuple can not fit in the current page and\nthen we are performing the pruning on the new target page. Why don't\nwe first perform the pruning on the existing page of the tuple itself?\n Or this is already being done before this patch? I could not find\nsuch existing pruning so got this question because such pruning can\nconvert many non-hot updates to the HOT update right?\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 23 Jan 2024 13:16:19 +0530",
"msg_from": "Dilip Kumar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Opportunistically pruning page before update"
},
{
"msg_contents": "On Tue, Jan 23, 2024 at 2:46 AM Dilip Kumar <[email protected]> wrote:\n>\n> On Tue, Jan 23, 2024 at 7:18 AM James Coleman <[email protected]> wrote:\n> >\n> > On Mon, Jan 22, 2024 at 8:21 PM James Coleman <[email protected]> wrote:\n> > >\n> > > See rebased patch attached.\n> >\n> > I just realized I left a change in during the rebase that wasn't necessary.\n> >\n> > v4 attached.\n>\n> I have noticed that you are performing the opportunistic pruning after\n> we decided that the updated tuple can not fit in the current page and\n> then we are performing the pruning on the new target page. Why don't\n> we first perform the pruning on the existing page of the tuple itself?\n> Or this is already being done before this patch? I could not find\n> such existing pruning so got this question because such pruning can\n> convert many non-hot updates to the HOT update right?\n\nFirst off I noticed that I accidentally sent a different version of\nthe patch I'd originally worked on. Here's the one from the proper\nbranch. It's still similar, but I want to make sure the right one is\nbeing reviewed.\n\nI'm working on a demo case for updates (to go along with the insert\ncase I sent earlier) to test out your question, and I'll reply when I\nhave that.\n\nRegards,\nJames Coleman",
"msg_date": "Fri, 26 Jan 2024 20:33:37 -0500",
"msg_from": "James Coleman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Opportunistically pruning page before update"
},
{
"msg_contents": "On Fri, Jan 26, 2024 at 8:33 PM James Coleman <[email protected]> wrote:\n>\n> On Tue, Jan 23, 2024 at 2:46 AM Dilip Kumar <[email protected]> wrote:\n> >\n> > On Tue, Jan 23, 2024 at 7:18 AM James Coleman <[email protected]> wrote:\n> > >\n> > > On Mon, Jan 22, 2024 at 8:21 PM James Coleman <[email protected]> wrote:\n> > > >\n> > > > See rebased patch attached.\n> > >\n> > > I just realized I left a change in during the rebase that wasn't necessary.\n> > >\n> > > v4 attached.\n> >\n> > I have noticed that you are performing the opportunistic pruning after\n> > we decided that the updated tuple can not fit in the current page and\n> > then we are performing the pruning on the new target page. Why don't\n> > we first perform the pruning on the existing page of the tuple itself?\n> > Or this is already being done before this patch? I could not find\n> > such existing pruning so got this question because such pruning can\n> > convert many non-hot updates to the HOT update right?\n>\n> First off I noticed that I accidentally sent a different version of\n> the patch I'd originally worked on. Here's the one from the proper\n> branch. It's still similar, but I want to make sure the right one is\n> being reviewed.\n>\n> I'm working on a demo case for updates (to go along with the insert\n> case I sent earlier) to test out your question, and I'll reply when I\n> have that.\n\nAll right, getting all this loaded back into my head, as you noted\nearlier the patch currently implements points 1 and 2 of my list of\npossible improvements:\n\n> 1. The most trivial case where this is useful is INSERT: we have a\n> target page, and it may have dead tuples, so trying to prune may\n> result in us being able to use the target page rather than getting a\n> new page.\n> 2. The next most trivial case is where UPDATE (potentially after\n> failing to find space for a HOT tuple on the source tuple's page);\n> much like the INSERT case our backend's target page may benefit from\n> pruning.\n\nWhat you're describing above would be implementing (at least part of) point 3:\n\n> 3. A more complex UPDATE case occurs when we check the tuple's page\n> for space in order to insert a HOT tuple and fail to find enough\n> space. 
While we've already opportunistically pruned the page on\n> initial read of the tuple, in complex queries this might be some time\n> in the past, so it may be worth attempting again.\n> ...\n\nIf we try to design a simple test case for updates (like my insert\ntest case above) we might end up with something like:\n\ndrop table if exists foo;\ncreate table foo(pk serial primary key, t text);\ninsert into foo(t) select repeat('a', 250) from generate_series(1, 27);\nselect pg_relation_size('foo');\ndelete from foo where pk = 1;\nupdate foo set t = repeat('b', 250) where pk = 2;\nselect pg_relation_size('foo');\n\nBut that actually works as expected on master, because we call\nheap_page_prune_opt from heapam_index_fetch_tuple as part of the index\nscan that drives the update query.\n\nI was theorizing that if there are concurrent writes to the page we\nmight being able to trigger the need to re-prune a page in the for\nloop in heap_update(), and I tried to both regular pgbench and a\ncustom pgbench script with inserts/deletes/updates (including some\nartificial delays).\n\nWhat I concluded what this isn't isn't likely to be fruitful: we need\nthe buffer to be local to our backend (no other pins) to be able to\nclean it, but since we've already pruned it on read, we need to have\nhad another backend modify the page (and dropped its pin!) between our\nread and our write.\n\nIf someone believes there's a scenario that would demonstrate\notherwise, I would of course be interested to hear any ideas, but at\nthis point I think it's probably worth focusing on the first two cases\nthis patch already addresses.\n\nRegards,\nJames Coleman\n\n\n",
"msg_date": "Mon, 29 Jan 2024 21:39:18 -0500",
"msg_from": "James Coleman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Opportunistically pruning page before update"
},
{
"msg_contents": "On Fri, Jan 26, 2024 at 8:33 PM James Coleman <[email protected]> wrote:\n>\n> On Tue, Jan 23, 2024 at 2:46 AM Dilip Kumar <[email protected]> wrote:\n> >\n> > On Tue, Jan 23, 2024 at 7:18 AM James Coleman <[email protected]> wrote:\n> > >\n> > > On Mon, Jan 22, 2024 at 8:21 PM James Coleman <[email protected]> wrote:\n> > > >\n> > > > See rebased patch attached.\n> > >\n> > > I just realized I left a change in during the rebase that wasn't necessary.\n> > >\n> > > v4 attached.\n> >\n> > I have noticed that you are performing the opportunistic pruning after\n> > we decided that the updated tuple can not fit in the current page and\n> > then we are performing the pruning on the new target page. Why don't\n> > we first perform the pruning on the existing page of the tuple itself?\n> > Or this is already being done before this patch? I could not find\n> > such existing pruning so got this question because such pruning can\n> > convert many non-hot updates to the HOT update right?\n>\n> First off I noticed that I accidentally sent a different version of\n> the patch I'd originally worked on. Here's the one from the proper\n> branch. It's still similar, but I want to make sure the right one is\n> being reviewed.\n\nI finally got back around to looking at this. Sorry for the delay.\n\nI don't feel confident enough to say at a high level whether or not it\nis a good idea in the general case to try pruning every block\nRelationGetBufferForTuple() considers as a target block.\n\nBut, I did have a few thoughts on the implementation:\n\nheap_page_prune_opt() checks PageGetHeapFreeSpace() twice. You will\nhave already done that in RelationGetBufferForTuple(). And you don't\nneed even need to do it both of those times because you have a lock\n(which is why heap_page_prune_opt() does it twice). This all seems a\nbit wasteful. And then, you do it again after pruning.\n\nThis made me think, vacuum cares how much space heap_page_prune() (now\nheap_page_prune_and_freeze()) freed up. Now if we add another caller\nwho cares how much space pruning freed up, perhaps it is worth\ncalculating this while pruning and returning it. I know\nPageGetHeapFreeSpace() isn't the most expensive function in the world,\nbut it also seems like a useful, relevant piece of information to\ninform the caller of.\n\nYou don't have to implement the above, it was just something I was\nthinking about.\n\nLooking at the code also made me wonder if it is worth changing\nRelationGetBufferForTuple() to call PageGetHeapFreeSpace() before\ntaking a lock (which won't give a totally accurate result, but that's\nprobably okay) and then call heap_page_prune_opt() without a lock when\nPageGetHeapFreeSpace() says there isn't enough space.\n\nAlso do we want to do GetVisibilityMapPins() before calling\nheap_page_prune_opt()? I don't quite get why we do that before knowing\nif we are going to actually use the target block in the current code.\n\nAnyway, I'm not sure I like just adding a parameter to\nheap_page_prune_opt() to indicate it already has an exclusive lock. It\ndoes a bunch of stuff that was always done without a lock and now you\nare doing it with an exclusive lock.\n\n- Melanie\n\n\n",
"msg_date": "Wed, 3 Apr 2024 16:04:26 -0400",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Opportunistically pruning page before update"
}
] |
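(Editor's sketch, outside the archived thread above: the benefits claimed for opportunistic pruning on update -- less relation growth and a higher HOT-update rate -- can be observed with stock PostgreSQL statistics views. The table name foo matches the demo queries in the thread; pg_stat_user_tables, its n_tup_upd and n_tup_hot_upd columns, and pg_relation_size() are standard. Run before and after an update workload and compare:)

-- per-table update counters, including how many of the updates were HOT
SELECT relname, n_tup_upd, n_tup_hot_upd
FROM pg_stat_user_tables
WHERE relname = 'foo';

-- main-fork size in bytes; it should grow less when pruning frees space in place
SELECT pg_relation_size('foo');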
[
{
"msg_contents": " Hi,\n\nIn the \"Order changes in PG16 since ICU introduction\" discussion, one\nsub-thread [1] was about having a credible use case for tailoring collations\nwith custom rules, a new feature in v16.\n\nAt a conference this week I was asked if ICU could be able to\nsort like EBCDIC [2]. It turns out it has been already\tasked on\n-general a few years ago [3] with no satisfactory answer at the time ,\nand that it can be implemented with rules in v16.\n\nA collation like the following this seems to work (the rule simply enumerates\nUS-ASCII letters in the EBCDIC alphabet order, with adequate quoting)\n\nCREATE COLLATION ebcdic (provider='icu', locale='und',\nrules=$$&'\n'<'.'<'<'<'('<'+'<\\|<'&'<'!'<'$'<'*'<')'<';'<'-'<'/'<','<'%'<'_'<'>'<'?'<'`'<':'<'#'<'@'<\\'<'='<'\"'<a<b<c<d<e<f<g<h<i<j<k<l<m<n<o<p<q<r<'~'<s<t<u<v<w<x<y<z<'['<'^'<']'<'{'<A<B<C<D<E<F<G<H<I<'}'<J<K<L<M<N<O<P<Q<R<'\\'<S<T<U<V<W<X<Y<Z<0<1<2<3<4<5<6<7<8<9$$);\n\nThis can be useful for people who migrate from mainframes to Postgres\nand need their migration tests to produce the same sorted results as the\noriginal system.\nSince rules can be defined at the database level with the icu_rules option,\nthey don't even need to tweak their queries to add COLLATE clauses,\nwhich surely is appreciable in that kind of project.\n\nUS-ASCII when sorted in EBCDIC order comes out like this:\n\n.<(+|&!$*);-/,%_>?`:#@'=\"abcdefghijklmnopqr~stuvwxyz[^]{ABCDEFGHI}JKLMNOPQR\\ST\nUVWXYZ0123456789\n\nMaybe this example could be added to the documentation except for\nthe problem that the rule is very long and dollar-quoting cannot be split\ninto several lines. Literals enclosed by single quotes can be split that\nway, but would require escaping the single quotes in the rule, which\nwould lead to scary-looking over-quoted contents.\n\nI'm open to suggestions on whether this EBCDIC example is worth being in the\ndoc in some form or putting this in the wiki would be good enough.\n\n\n\n[1]\nhttps://www.postgresql.org/message-id/flat/a28aba5fa6bf1abfff96e40b6d6acff8412edb15.camel%40j-davis.com\n\n[2] https://en.wikipedia.org/wiki/EBCDIC\n\n[3]\nhttps://www.postgresql.org/message-id/flat/0A3221C70F24FB45833433255569204D1F84A7AD%40G01JPEXMBYT05\n\n\nBest regards,\n-- \nDaniel Vérité\nhttps://postgresql.verite.pro/\nTwitter: @DanielVerite\n\n\n",
"msg_date": "Wed, 21 Jun 2023 15:28:38 +0200",
"msg_from": "\"Daniel Verite\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "EBCDIC sorting as a use case for ICU rules"
},
{
"msg_contents": "On 6/21/23 09:28, Daniel Verite wrote:\n> In the \"Order changes in PG16 since ICU introduction\" discussion, one\n> sub-thread [1] was about having a credible use case for tailoring collations\n> with custom rules, a new feature in v16.\n> \n> At a conference this week I was asked if ICU could be able to\n> sort like EBCDIC [2]. It turns out it has been already\tasked on\n> -general a few years ago [3] with no satisfactory answer at the time ,\n> and that it can be implemented with rules in v16.\n\nOh, very cool! I have seen the requirement for EBCDIC come up multiple \ntimes over the years.\n\n<snip>\n\n> Maybe this example could be added to the documentation except for\n> the problem that the rule is very long and dollar-quoting cannot be split\n> into several lines. Literals enclosed by single quotes can be split that\n> way, but would require escaping the single quotes in the rule, which\n> would lead to scary-looking over-quoted contents.\n> \n> I'm open to suggestions on whether this EBCDIC example is worth being in the\n> doc in some form or putting this in the wiki would be good enough.\n\nI would definitely favor adding to the docs, but no idea how to deal \nwith the length issue.\n\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n",
"msg_date": "Wed, 21 Jun 2023 11:50:15 -0400",
"msg_from": "Joe Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: EBCDIC sorting as a use case for ICU rules"
},
{
"msg_contents": "On Wed, 2023-06-21 at 15:28 +0200, Daniel Verite wrote:\n> At a conference this week I was asked if ICU could be able to\n> sort like EBCDIC [2]. It turns out it has been already asked on\n> -general a few years ago [3] with no satisfactory answer at the time\n> ,\n> and that it can be implemented with rules in v16.\n\nInteresting, thank you!\n\n> This can be useful for people who migrate from mainframes to Postgres\n> and need their migration tests to produce the same sorted results as\n> the\n> original system.\n> Since rules can be defined at the database level with the icu_rules\n> option,\n> they don't even need to tweak their queries to add COLLATE clauses,\n> which surely is appreciable in that kind of project.\n\nI still had some technical concerns about the ICU rules feature,\nunfortunately, and one option is to only allow it for the collation\nobjects and not the database level collation. How much would that hurt\nthis use case?\n\n\n> I'm open to suggestions on whether this EBCDIC example is worth being\n> in the\n> doc in some form or putting this in the wiki would be good enough.\n\nI like the idea of having a real example. Ideally, we could add some\nexplanation along the way about how the rule is constructed to match\nEBCDIC, which would reduce the shock of a long rule like that.\n\nI wonder why the rule syntax is such that it cannot be broken up? Would\nit be incorrect for us to allow some whitespace in there?\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Wed, 21 Jun 2023 09:14:32 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: EBCDIC sorting as a use case for ICU rules"
},
{
"msg_contents": "On 6/21/23 12:14 PM, Jeff Davis wrote:\r\n> On Wed, 2023-06-21 at 15:28 +0200, Daniel Verite wrote:\r\n>> At a conference this week I was asked if ICU could be able to\r\n>> sort like EBCDIC [2]. It turns out it has been already asked on\r\n>> -general a few years ago [3] with no satisfactory answer at the time\r\n>> ,\r\n>> and that it can be implemented with rules in v16.\r\n> \r\n> Interesting, thank you!\r\n\r\n+1 -- this is very helpful framing the problem, thank you!\r\n\r\n>> This can be useful for people who migrate from mainframes to Postgres\r\n>> and need their migration tests to produce the same sorted results as\r\n>> the\r\n>> original system.\r\n>> Since rules can be defined at the database level with the icu_rules\r\n>> option,\r\n>> they don't even need to tweak their queries to add COLLATE clauses,\r\n>> which surely is appreciable in that kind of project.\r\n> \r\n> I still had some technical concerns about the ICU rules feature,\r\n> unfortunately, and one option is to only allow it for the collation\r\n> objects and not the database level collation. How much would that hurt\r\n> this use case?\r\n> \r\n> \r\n>> I'm open to suggestions on whether this EBCDIC example is worth being\r\n>> in the\r\n>> doc in some form or putting this in the wiki would be good enough.\r\n> \r\n> I like the idea of having a real example. Ideally, we could add some\r\n> explanation along the way about how the rule is constructed to match\r\n> EBCDIC, which would reduce the shock of a long rule like that.\r\n> \r\n> I wonder why the rule syntax is such that it cannot be broken up? Would\r\n> it be incorrect for us to allow some whitespace in there?\r\n\r\nI'll give the unhelpful comment of \"yes, I agree we should have a real \r\nworld example\", especially one that seems relevant to helping more \r\npeople adopt PostgreSQL.",
"msg_date": "Wed, 21 Jun 2023 13:13:01 -0400",
"msg_from": "\"Jonathan S. Katz\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: EBCDIC sorting as a use case for ICU rules"
},
{
"msg_contents": "Jeff Davis wrote:\n\n> I still had some technical concerns about the ICU rules feature,\n> unfortunately, and one option is to only allow it for the collation\n> objects and not the database level collation. How much would that hurt\n> this use case?\n\nFor a regression test suite that should produce results with the custom\norder, not being able to configure the sort rules at the db level means\nthat you'd have to change all the queries to add explicit COLLATE clauses.\nI guess that could be quite annoying if the test suite is large.\n\nAbout making a doc patch from this, I've came up with the attached,\nwhich generates a CREATE COLLATION statement with rules from an\narbitrary strings that just lists characters in whichever order is desired.\n\nIn the case of EBCDIC and code page 37, it turns out that there are\nseveral versions of \"code page 37\", with more or less additions of\ncharacters outside the US-ASCII range. This is why I decided\nto show code that generates the rules rather than an already generated\nrule. Users may simply change the codepage_37 string in the code\nto add or rearrange any characters.\n\n\nAlso the patch makes the relevant sections of \"CREATE COLLATION\" and\n\"CREATE DATABASE\" point to \"Collation Support\" with the idea to\ncentralize the information on tailoring rules.\n\nI'll add this to the next CF.\n\n\nBest regards,\n-- \nDaniel Vérité\nhttps://postgresql.verite.pro/\nTwitter: @DanielVerite",
"msg_date": "Fri, 30 Jun 2023 13:08:45 +0200",
"msg_from": "\"Daniel Verite\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: EBCDIC sorting as a use case for ICU rules"
},
{
"msg_contents": "On 21.06.23 15:28, Daniel Verite wrote:\n> A collation like the following this seems to work (the rule simply enumerates\n> US-ASCII letters in the EBCDIC alphabet order, with adequate quoting)\n> \n> CREATE COLLATION ebcdic (provider='icu', locale='und',\n> rules=$$&'\n> '<'.'<'<'<'('<'+'<\\|<'&'<'!'<'$'<'*'<')'<';'<'-'<'/'<','<'%'<'_'<'>'<'?'<'`'<':'<'#'<'@'<\\'<'='<'\"'<a<b<c<d<e<f<g<h<i<j<k<l<m<n<o<p<q<r<'~'<s<t<u<v<w<x<y<z<'['<'^'<']'<'{'<A<B<C<D<E<F<G<H<I<'}'<J<K<L<M<N<O<P<Q<R<'\\'<S<T<U<V<W<X<Y<Z<0<1<2<3<4<5<6<7<8<9$$);\n> \n> This can be useful for people who migrate from mainframes to Postgres\n> and need their migration tests to produce the same sorted results as the\n> original system.\n> Since rules can be defined at the database level with the icu_rules option,\n> they don't even need to tweak their queries to add COLLATE clauses,\n> which surely is appreciable in that kind of project.\n> \n> US-ASCII when sorted in EBCDIC order comes out like this:\n> \n> .<(+|&!$*);-/,%_>?`:#@'=\"abcdefghijklmnopqr~stuvwxyz[^]{ABCDEFGHI}JKLMNOPQR\\ST\n> UVWXYZ0123456789\n> \n> Maybe this example could be added to the documentation except for\n> the problem that the rule is very long and dollar-quoting cannot be split\n> into several lines. Literals enclosed by single quotes can be split that\n> way, but would require escaping the single quotes in the rule, which\n> would lead to scary-looking over-quoted contents.\n\nYou can use whitespace in the rules. For example,\n\nCREATE COLLATION ebcdic (provider='icu', locale='und',\nrules=$$\n& ' ' < '.' < '<' < '(' < '+' < \\|\n< '&' < '!' < '$' < '*' < ')' < ';'\n< '-' < '/' < ',' < '%' < '_' < '>' < '?'\n< '`' < ':' < '#' < '@' < \\' < '=' < '\"'\n< a < b < c < d < e < f < g < h < i\n< j < k < l < m < n < o < p < q < r\n< '~' < s < t < u < v < w < x < y < z\n< '[' < '^' < ']'\n< '{' < A < B < C < D < E < F < G < H < I\n< '}' < J < K < L < M < N < O < P < Q < R\n< '\\' < S < T < U < V < W < X < Y < Z\n< 0 < 1 < 2 < 3 < 4 < 5 < 6 < 7 < 8 < 9\n$$);\n\n(This particular layout is meant to match the rows in\nhttps://en.wikipedia.org/wiki/EBCDIC#Code_page_layout.)\n\n\n",
"msg_date": "Thu, 6 Jul 2023 11:32:32 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: EBCDIC sorting as a use case for ICU rules"
},
{
"msg_contents": "On 30.06.23 13:08, Daniel Verite wrote:\n> About making a doc patch from this, I've came up with the attached,\n> which generates a CREATE COLLATION statement with rules from an\n> arbitrary strings that just lists characters in whichever order is desired.\n\nI like adding more documentation and links around this. But I'm not \nsure how this code you are including is supposed to help users \nunderstand the rules language. Effectively, this would be adding \nanother rules mechanism on top of the existing one, but doesn't explain \neither one.\n\n\n\n",
"msg_date": "Thu, 6 Jul 2023 11:35:41 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: EBCDIC sorting as a use case for ICU rules"
},
{
"msg_contents": "On Thu, 2023-07-06 at 11:32 +0200, Peter Eisentraut wrote:\n> CREATE COLLATION ebcdic (provider='icu', locale='und',\n> rules=$$\n> & ' ' < '.' < '<' < '(' < '+' < \\|\n> < '&' < '!' < '$' < '*' < ')' < ';'\n> < '-' < '/' < ',' < '%' < '_' < '>' < '?'\n> < '`' < ':' < '#' < '@' < \\' < '=' < '\"'\n> < a < b < c < d < e < f < g < h < i\n> < j < k < l < m < n < o < p < q < r\n> < '~' < s < t < u < v < w < x < y < z\n> < '[' < '^' < ']'\n> < '{' < A < B < C < D < E < F < G < H < I\n> < '}' < J < K < L < M < N < O < P < Q < R\n> < '\\' < S < T < U < V < W < X < Y < Z\n> < 0 < 1 < 2 < 3 < 4 < 5 < 6 < 7 < 8 < 9\n> $$);\n\nThat looks much nicer and would go nicely in the documentation along\nwith some explanation.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Thu, 06 Jul 2023 11:14:49 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: EBCDIC sorting as a use case for ICU rules"
},
{
"msg_contents": "Peter Eisentraut wrote:\n\n> You can use whitespace in the rules. For example,\n> \n> CREATE COLLATION ebcdic (provider='icu', locale='und',\n> rules=$$\n\nNice, it's clearly better that the piece of code I had in the\nprevious patch.\nIt can also be made more compact by grouping consecutive\ncode points, for instance <*a-r for 'a' to 'r'\nI changed it that way, and also moved '^' before '[' and ']',\nsince according to [1], '^' is at location 0xB0 and '[' and ']'\nat 0xBA and 0xBB.\n\nUpdated patch attached.\n\n\n[1] https://en.wikipedia.org/wiki/EBCDIC#Code_page_layout\n\nBest regards,\n-- \nDaniel Vérité\nhttps://postgresql.verite.pro/\nTwitter: @DanielVerite",
"msg_date": "Mon, 17 Jul 2023 10:10:19 +0200",
"msg_from": "\"Daniel Verite\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: EBCDIC sorting as a use case for ICU rules"
},
{
"msg_contents": "On 17.07.23 10:10, Daniel Verite wrote:\n> \tPeter Eisentraut wrote:\n> \n>> You can use whitespace in the rules. For example,\n>>\n>> CREATE COLLATION ebcdic (provider='icu', locale='und',\n>> rules=$$\n> \n> Nice, it's clearly better that the piece of code I had in the\n> previous patch.\n> It can also be made more compact by grouping consecutive\n> code points, for instance <*a-r for 'a' to 'r'\n> I changed it that way, and also moved '^' before '[' and ']',\n> since according to [1], '^' is at location 0xB0 and '[' and ']'\n> at 0xBA and 0xBB.\n> \n> Updated patch attached.\n\nCommitted with some editing. I moved the existing rules example from \nthe CREATE COLLATION page into the new section you created, so we have a \nsimple example followed by the complex example.\n\n\n\n",
"msg_date": "Wed, 23 Aug 2023 11:30:52 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: EBCDIC sorting as a use case for ICU rules"
},
{
"msg_contents": "Hi,\n\nSorry to chime in so lately, I was waiting for some customer feedback.\n\nOn Wed, 21 Jun 2023 15:28:38 +0200\n\"Daniel Verite\" <[email protected]> wrote:\n\n> At a conference this week I was asked if ICU could be able to\n> sort like EBCDIC [2].\n> It turns out it has been already\tasked on\n> -general a few years ago [3] with no satisfactory answer at the time ,\n> and that it can be implemented with rules in v16.\n\nWe worked with a customer few months ago about this question and end up with a\nprocedure to build new locale/collation for glibc and load them in PostgreSQL\n[1].\n\nOur customer built the fr_ebcdic locale file themselves, based on the EBCDIC\nIBM500 codepage (including about the same characters than iso 8859-1) and share\nit under the BY-CC licence. See in attachment.\n\nThe procedure is quite simple:\n\n1. copy this file under \"/usr/share/i18n/locales/fr_ebcdic\"\n2. build it using \"localedef -c -i fr_ebcdic -f UTF-8 fr_ebcdic.UTF-8\"\n3. restart your PostgreSQL instance (because of localeset weird behavior)\n4. \"pg_import_system_collations('schema')\" or create the collation, eg.:\n CREATE COLLATION fr_ebcdic (\n PROVIDER = libc,\n LC_COLLATE = fr_ebcdic.utf8,\n LC_CTYPE = fr_ebcdic.utf8\n );\n\nNow, same question than for the ICU: do we want to provide documentation about\nthis? Online documentation about such feature are quite arid. In fact, this\ncould be useful in various other way than just EBCDIC.\n\nRegards,\n\n[1] https://www.postgresql.org/message-id/20230209144947.1dfad6c0%40karst",
"msg_date": "Thu, 24 Aug 2023 16:26:53 +0200",
"msg_from": "Jehan-Guillaume de Rorthais <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: EBCDIC sorting as a use case for ICU rules"
},
{
"msg_contents": "\tPeter Eisentraut wrote:\n\n> Committed with some editing. I moved the existing rules example from \n> the CREATE COLLATION page into the new section you created, so we have a \n> simple example followed by the complex example.\n\nOK, thanks for pushing this!\n\n\nBest regards,\n-- \nDaniel Vérité\nhttps://postgresql.verite.pro/\nTwitter: @DanielVerite\n\n\n",
"msg_date": "Wed, 30 Aug 2023 18:40:45 +0200",
"msg_from": "\"Daniel Verite\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: EBCDIC sorting as a use case for ICU rules"
}
] |
[
{
"msg_contents": "Hi all,\n\nI briefly mentioned this issue in another mailing thread [0].\n\nCurrently, a user is allowed to execute SET SESSION AUTHORIZATION [1]\nif the role they connected to PostgreSQL with was a superuser at the\ntime of connection. Even if the role is later altered to no longer be a\nsuperuser, the session can still execute SET SESSION AUTHORIZATION, as\nlong as the session isn't disconnected. As a consequence, if that role\nis altered to no longer be a superuser, then the user can use SET\nSESSION AUTHORIZATION to switch to another role that is a superuser and\nregain superuser privileges. They can even re-grant themselves the\nsuperuser attribute.\n\nIt is possible that the user had already run SET SESSION AUTHORIZATION\nto set their session to a superuser before their connecting role lost\nthe superuser attribute. In this case there's not much we can do.\n\nAlso, from looking at the code and documentation, it looks like SET\nSESSION AUTHORIZATION works this way intentionally. However, I'm unable\nto figure out why we'd want it to work this way.\n\nI've attached a patch that would fix this issue by checking the catalog\nto see if the connecting role is currently a superuser every time SET\nSESSION AUTHORIZATION is run. However, according to the comment I\ndeleted there's something invalid about reading the catalog from that\nfunction, though I wasn't able to understand it fully.\n\nOne downside is that if a user switches their session authorization to\nsome role, then loses the superuser attribute on their connecting role,\nthey may be stuck in a that role with no way to reset their session\nauthorization without disconnecting and reconnecting.\n\nThanks,\nJoe Koshakow\n\n[0]\nhttps://www.postgresql.org/message-id/CAAvxfHco7iGw4NarymhfLWN6PjzYRrbYFt2BnSFeSD5sFzqEJQ%40mail.gmail.com\n[1] https://www.postgresql.org/docs/15/sql-set-session-authorization.html",
"msg_date": "Wed, 21 Jun 2023 16:28:43 -0400",
"msg_from": "Joseph Koshakow <[email protected]>",
"msg_from_op": true,
"msg_subject": "Preventing non-superusers from altering session authorization"
},
{
"msg_contents": "On Wed, Jun 21, 2023 at 04:28:43PM -0400, Joseph Koshakow wrote:\n> Currently, a user is allowed to execute SET SESSION AUTHORIZATION [1]\n> if the role they connected to PostgreSQL with was a superuser at the\n> time of connection. Even if the role is later altered to no longer be a\n> superuser, the session can still execute SET SESSION AUTHORIZATION, as\n> long as the session isn't disconnected. As a consequence, if that role\n> is altered to no longer be a superuser, then the user can use SET\n> SESSION AUTHORIZATION to switch to another role that is a superuser and\n> regain superuser privileges. They can even re-grant themselves the\n> superuser attribute.\n\nI suspect most users aren't changing the superuser attribute on roles very\noften, so it's unlikely to be a problem. But it might still be worth\nrevisiting.\n\n> It is possible that the user had already run SET SESSION AUTHORIZATION\n> to set their session to a superuser before their connecting role lost\n> the superuser attribute. In this case there's not much we can do.\n\nRight.\n\n> Also, from looking at the code and documentation, it looks like SET\n> SESSION AUTHORIZATION works this way intentionally. However, I'm unable\n> to figure out why we'd want it to work this way.\n\nI found a brief mention in the archives about this implementation decision\n[0], but I don't think it explains the reasoning.\n\n> I've attached a patch that would fix this issue by checking the catalog\n> to see if the connecting role is currently a superuser every time SET\n> SESSION AUTHORIZATION is run. However, according to the comment I\n> deleted there's something invalid about reading the catalog from that\n> function, though I wasn't able to understand it fully.\n\nThis comment was added in e5d6b91. I see that RESET SESSION AUTHORIZATION\nwith a concurrently dropped role will FATAL with your patch but succeed\nwithout it, which could be part of the reason.\n\n> One downside is that if a user switches their session authorization to\n> some role, then loses the superuser attribute on their connecting role,\n> they may be stuck in a that role with no way to reset their session\n> authorization without disconnecting and reconnecting.\n\nIt looks like SetSessionAuthorization() skips the privilege checks if the\ntarget role is the authenticated role, so I don't think they'll get stuck.\n\n[0] https://postgr.es/m/Pine.LNX.4.30.0104182119290.762-100000%40peter.localdomain\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 21 Jun 2023 14:57:45 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Preventing non-superusers from altering session authorization"
},
{
"msg_contents": "On Wed, Jun 21, 2023 at 04:28:43PM -0400, Joseph Koshakow wrote:\n> +\troleTup = SearchSysCache1(AUTHOID, ObjectIdGetDatum(AuthenticatedUserId));\n> +\tif (!HeapTupleIsValid(roleTup))\n> +\t\tereport(FATAL,\n> +\t\t\t\t(errcode(ERRCODE_INVALID_AUTHORIZATION_SPECIFICATION),\n> +\t\t\t\t\t\terrmsg(\"role with OID %u does not exist\", AuthenticatedUserId)));\n> +\trform = (Form_pg_authid) GETSTRUCT(roleTup);\n\nI think \"superuser_arg(AuthenticatedUserId)\" would work here.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 21 Jun 2023 20:48:18 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Preventing non-superusers from altering session authorization"
},
{
"msg_contents": "On Wed, Jun 21, 2023 at 11:48 PM Nathan Bossart <[email protected]>\nwrote:\n>\n> On Wed, Jun 21, 2023 at 04:28:43PM -0400, Joseph Koshakow wrote:\n> > + roleTup = SearchSysCache1(AUTHOID,\nObjectIdGetDatum(AuthenticatedUserId));\n> > + if (!HeapTupleIsValid(roleTup))\n> > + ereport(FATAL,\n> > +\n(errcode(ERRCODE_INVALID_AUTHORIZATION_SPECIFICATION),\n> > + errmsg(\"role with OID\n%u does not exist\", AuthenticatedUserId)));\n> > + rform = (Form_pg_authid) GETSTRUCT(roleTup);\n>\n> I think \"superuser_arg(AuthenticatedUserId)\" would work here.\n\nYep, that worked. I've attached a patch with this change.\n\n> I see that RESET SESSION AUTHORIZATION\n> with a concurrently dropped role will FATAL with your patch but succeed\n> without it, which could be part of the reason.\n\nThat might be a good change? If the original authenticated role ID no\nlonger exists then we may want to return an error when trying to set\nyour session authorization to that role.\n\nThanks,\nJoe Koshakow",
"msg_date": "Thu, 22 Jun 2023 18:39:45 -0400",
"msg_from": "Joseph Koshakow <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Preventing non-superusers from altering session authorization"
},
{
"msg_contents": "Hi,\n\nI’ve just stumbled upon this patch and thread and thought I could share an idea of adding an optional temporary secret to SET SESSION AUTHORIZATION so that it is only possible to RESET SESSION AUTHORIZATION by providing the same secret ,like:\n\nSET SESSION AUTHORIZATION [role] GUARDED BY ‘[secret]’;\n\n...\n\nRESET SESSION AUTHORIZATION WITH ‘[secret]’;\n\n\nThe use case is: I have a set of Liquibase scripts I would like to execute as a different role each and make sure they cannot escape the sandbox.\n\nAs I am not a Postgres hacker I wonder how difficult to implement it might be…\n\nThanks,\nMichal\n\n> On 23 Jun 2023, at 00:39, Joseph Koshakow <[email protected]> wrote:\n> \n> \n> \n> On Wed, Jun 21, 2023 at 11:48 PM Nathan Bossart <[email protected] <mailto:[email protected]>> wrote:\n> >\n> > On Wed, Jun 21, 2023 at 04:28:43PM -0400, Joseph Koshakow wrote:\n> > > + roleTup = SearchSysCache1(AUTHOID, ObjectIdGetDatum(AuthenticatedUserId));\n> > > + if (!HeapTupleIsValid(roleTup))\n> > > + ereport(FATAL,\n> > > + (errcode(ERRCODE_INVALID_AUTHORIZATION_SPECIFICATION),\n> > > + errmsg(\"role with OID %u does not exist\", AuthenticatedUserId)));\n> > > + rform = (Form_pg_authid) GETSTRUCT(roleTup);\n> >\n> > I think \"superuser_arg(AuthenticatedUserId)\" would work here.\n> \n> Yep, that worked. I've attached a patch with this change.\n> \n> > I see that RESET SESSION AUTHORIZATION\n> > with a concurrently dropped role will FATAL with your patch but succeed\n> > without it, which could be part of the reason.\n> \n> That might be a good change? If the original authenticated role ID no\n> longer exists then we may want to return an error when trying to set\n> your session authorization to that role.\n> \n> Thanks,\n> Joe Koshakow\n> <v2-0001-Prevent-non-superusers-from-altering-session-auth.patch>\n\n\nHi,I’ve just stumbled upon this patch and thread and thought I could share an idea of adding an optional temporary secret to SET SESSION AUTHORIZATION so that it is only possible to RESET SESSION AUTHORIZATION by providing the same secret ,like:SET SESSION AUTHORIZATION [role] GUARDED BY ‘[secret]’;...RESET SESSION AUTHORIZATION WITH ‘[secret]’;The use case is: I have a set of Liquibase scripts I would like to execute as a different role each and make sure they cannot escape the sandbox.As I am not a Postgres hacker I wonder how difficult to implement it might be…Thanks,MichalOn 23 Jun 2023, at 00:39, Joseph Koshakow <[email protected]> wrote:On Wed, Jun 21, 2023 at 11:48 PM Nathan Bossart <[email protected]> wrote:>> On Wed, Jun 21, 2023 at 04:28:43PM -0400, Joseph Koshakow wrote:> > + roleTup = SearchSysCache1(AUTHOID, ObjectIdGetDatum(AuthenticatedUserId));> > + if (!HeapTupleIsValid(roleTup))> > + ereport(FATAL,> > + (errcode(ERRCODE_INVALID_AUTHORIZATION_SPECIFICATION),> > + errmsg(\"role with OID %u does not exist\", AuthenticatedUserId)));> > + rform = (Form_pg_authid) GETSTRUCT(roleTup);>> I think \"superuser_arg(AuthenticatedUserId)\" would work here.Yep, that worked. I've attached a patch with this change.> I see that RESET SESSION AUTHORIZATION> with a concurrently dropped role will FATAL with your patch but succeed> without it, which could be part of the reason.That might be a good change? If the original authenticated role ID nolonger exists then we may want to return an error when trying to setyour session authorization to that role.Thanks,Joe Koshakow\n<v2-0001-Prevent-non-superusers-from-altering-session-auth.patch>",
"msg_date": "Fri, 23 Jun 2023 05:51:34 +0200",
"msg_from": "=?utf-8?Q?Micha=C5=82_K=C5=82eczek?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Preventing non-superusers from altering session authorization"
},
{
"msg_contents": "On Thu, Jun 22, 2023 at 06:39:45PM -0400, Joseph Koshakow wrote:\n> On Wed, Jun 21, 2023 at 11:48 PM Nathan Bossart <[email protected]>\n> wrote:\n>> I see that RESET SESSION AUTHORIZATION\n>> with a concurrently dropped role will FATAL with your patch but succeed\n>> without it, which could be part of the reason.\n> \n> That might be a good change? If the original authenticated role ID no\n> longer exists then we may want to return an error when trying to set\n> your session authorization to that role.\n\nI was curious why we don't block DROP ROLE if there are active sessions for\nthe role or terminate any such sessions as part of the command, and I found\nthis discussion from 2016:\n\n\thttps://postgr.es/m/flat/56E87CD8.60007%40ohmu.fi\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 23 Jun 2023 10:54:16 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Preventing non-superusers from altering session authorization"
},
{
"msg_contents": ">> That might be a good change? If the original authenticated role ID no\n>> longer exists then we may want to return an error when trying to set\n>> your session authorization to that role.\n>\n> I was curious why we don't block DROP ROLE if there are active sessions\nfor\n> the role or terminate any such sessions as part of the command, and I\nfound\n> this discussion from 2016:\n>\n> https://postgr.es/m/flat/56E87CD8.60007%40ohmu.fi\n\nAh, that makes sense that we don't prevent DROP ROLE on active roles.\nThough, we do error when you try and set your role or session\nauthorization to a dropped role. So erroring on RESET SESSION\nAUTHORIZATION when the original role is dropped makes it consistent\nwith SET SESSION AUTHORIZATION TO <dropped-original-role>. On the other\nhand it makes it inconsistent with RESET ROLE, which does not error on\na dropped role.\n\n- Joe Koshakow\n\nOn Fri, Jun 23, 2023 at 1:54 PM Nathan Bossart <[email protected]>\nwrote:\n\n> On Thu, Jun 22, 2023 at 06:39:45PM -0400, Joseph Koshakow wrote:\n> > On Wed, Jun 21, 2023 at 11:48 PM Nathan Bossart <\n> [email protected]>\n> > wrote:\n> >> I see that RESET SESSION AUTHORIZATION\n> >> with a concurrently dropped role will FATAL with your patch but succeed\n> >> without it, which could be part of the reason.\n> >\n> > That might be a good change? If the original authenticated role ID no\n> > longer exists then we may want to return an error when trying to set\n> > your session authorization to that role.\n>\n> I was curious why we don't block DROP ROLE if there are active sessions for\n> the role or terminate any such sessions as part of the command, and I found\n> this discussion from 2016:\n>\n> https://postgr.es/m/flat/56E87CD8.60007%40ohmu.fi\n>\n> --\n> Nathan Bossart\n> Amazon Web Services: https://aws.amazon.com\n>\n\n>> That might be a good change? If the original authenticated role ID no>> longer exists then we may want to return an error when trying to set>> your session authorization to that role.> > I was curious why we don't block DROP ROLE if there are active sessions for> the role or terminate any such sessions as part of the command, and I found> this discussion from 2016:>> https://postgr.es/m/flat/56E87CD8.60007%40ohmu.fiAh, that makes sense that we don't prevent DROP ROLE on active roles.Though, we do error when you try and set your role or sessionauthorization to a dropped role. So erroring on RESET SESSIONAUTHORIZATION when the original role is dropped makes it consistentwith SET SESSION AUTHORIZATION TO <dropped-original-role>. On the otherhand it makes it inconsistent with RESET ROLE, which does not error ona dropped role.- Joe KoshakowOn Fri, Jun 23, 2023 at 1:54 PM Nathan Bossart <[email protected]> wrote:On Thu, Jun 22, 2023 at 06:39:45PM -0400, Joseph Koshakow wrote:\n> On Wed, Jun 21, 2023 at 11:48 PM Nathan Bossart <[email protected]>\n> wrote:\n>> I see that RESET SESSION AUTHORIZATION\n>> with a concurrently dropped role will FATAL with your patch but succeed\n>> without it, which could be part of the reason.\n> \n> That might be a good change? 
If the original authenticated role ID no\n> longer exists then we may want to return an error when trying to set\n> your session authorization to that role.\n\nI was curious why we don't block DROP ROLE if there are active sessions for\nthe role or terminate any such sessions as part of the command, and I found\nthis discussion from 2016:\n\n https://postgr.es/m/flat/56E87CD8.60007%40ohmu.fi\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Sat, 1 Jul 2023 11:33:51 -0400",
"msg_from": "Joseph Koshakow <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Preventing non-superusers from altering session authorization"
},
{
"msg_contents": "Nathan Bossart <[email protected]> wrote:\n\n> I see that RESET SESSION AUTHORIZATION\n> with a concurrently dropped role will FATAL with your patch but succeed\n> without it, which could be part of the reason.\n\nI didn't even realize it, but the change to superuser_arg() in v2 fixed\nthis issue. The catalog lookup is only done if\nuserid != AuthenticatedUserId. So RESET SESSION AUTHORIZATION with a\nconcurrently dropped role will no longer FATAL.\n\nThanks,\nJoe\n\nOn Sat, Jul 1, 2023 at 11:33 AM Joseph Koshakow <[email protected]> wrote:\n\n> >> That might be a good change? If the original authenticated role ID no\n> >> longer exists then we may want to return an error when trying to set\n> >> your session authorization to that role.\n> >\n> > I was curious why we don't block DROP ROLE if there are active sessions\n> for\n> > the role or terminate any such sessions as part of the command, and I\n> found\n> > this discussion from 2016:\n> >\n> > https://postgr.es/m/flat/56E87CD8.60007%40ohmu.fi\n>\n> Ah, that makes sense that we don't prevent DROP ROLE on active roles.\n> Though, we do error when you try and set your role or session\n> authorization to a dropped role. So erroring on RESET SESSION\n> AUTHORIZATION when the original role is dropped makes it consistent\n> with SET SESSION AUTHORIZATION TO <dropped-original-role>. On the other\n> hand it makes it inconsistent with RESET ROLE, which does not error on\n> a dropped role.\n>\n> - Joe Koshakow\n>\n> On Fri, Jun 23, 2023 at 1:54 PM Nathan Bossart <[email protected]>\n> wrote:\n>\n>> On Thu, Jun 22, 2023 at 06:39:45PM -0400, Joseph Koshakow wrote:\n>> > On Wed, Jun 21, 2023 at 11:48 PM Nathan Bossart <\n>> [email protected]>\n>> > wrote:\n>> >> I see that RESET SESSION AUTHORIZATION\n>> >> with a concurrently dropped role will FATAL with your patch but succeed\n>> >> without it, which could be part of the reason.\n>> >\n>> > That might be a good change? If the original authenticated role ID no\n>> > longer exists then we may want to return an error when trying to set\n>> > your session authorization to that role.\n>>\n>> I was curious why we don't block DROP ROLE if there are active sessions\n>> for\n>> the role or terminate any such sessions as part of the command, and I\n>> found\n>> this discussion from 2016:\n>>\n>> https://postgr.es/m/flat/56E87CD8.60007%40ohmu.fi\n>>\n>> --\n>> Nathan Bossart\n>> Amazon Web Services: https://aws.amazon.com\n>>\n>\n\nNathan Bossart <[email protected]> wrote:> I see that RESET SESSION AUTHORIZATION> with a concurrently dropped role will FATAL with your patch but succeed> without it, which could be part of the reason.I didn't even realize it, but the change to superuser_arg() in v2 fixedthis issue. The catalog lookup is only done if userid != AuthenticatedUserId. So RESET SESSION AUTHORIZATION with aconcurrently dropped role will no longer FATAL.Thanks,JoeOn Sat, Jul 1, 2023 at 11:33 AM Joseph Koshakow <[email protected]> wrote:>> That might be a good change? 
If the original authenticated role ID no>> longer exists then we may want to return an error when trying to set>> your session authorization to that role.> > I was curious why we don't block DROP ROLE if there are active sessions for> the role or terminate any such sessions as part of the command, and I found> this discussion from 2016:>> https://postgr.es/m/flat/56E87CD8.60007%40ohmu.fiAh, that makes sense that we don't prevent DROP ROLE on active roles.Though, we do error when you try and set your role or sessionauthorization to a dropped role. So erroring on RESET SESSIONAUTHORIZATION when the original role is dropped makes it consistentwith SET SESSION AUTHORIZATION TO <dropped-original-role>. On the otherhand it makes it inconsistent with RESET ROLE, which does not error ona dropped role.- Joe KoshakowOn Fri, Jun 23, 2023 at 1:54 PM Nathan Bossart <[email protected]> wrote:On Thu, Jun 22, 2023 at 06:39:45PM -0400, Joseph Koshakow wrote:\n> On Wed, Jun 21, 2023 at 11:48 PM Nathan Bossart <[email protected]>\n> wrote:\n>> I see that RESET SESSION AUTHORIZATION\n>> with a concurrently dropped role will FATAL with your patch but succeed\n>> without it, which could be part of the reason.\n> \n> That might be a good change? If the original authenticated role ID no\n> longer exists then we may want to return an error when trying to set\n> your session authorization to that role.\n\nI was curious why we don't block DROP ROLE if there are active sessions for\nthe role or terminate any such sessions as part of the command, and I found\nthis discussion from 2016:\n\n https://postgr.es/m/flat/56E87CD8.60007%40ohmu.fi\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Sat, 8 Jul 2023 14:03:41 -0400",
"msg_from": "Joseph Koshakow <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Preventing non-superusers from altering session authorization"
},
{
"msg_contents": "I've discovered an issue with this approach. Let's say you have some\nsession open that is connected as a superuser and you run the following\ncommands:\n\n - CREATE ROLE r1 LOGIN SUPERUSER;\n - CREATE ROLE r2;\n - CREATE ROLE r3;\n\nThen you open another session connected with user r1 and run the\nfollowing commands:\n\n - SET SESSION AUTHROIZATION r2;\n - BEGIN;\n - SET SESSION AUTHORIZATION r3;\n\nThen in your original session run:\n\n - ALTER ROLE r1 NOSUPERUSER;\n\nFinally in the r1 session run:\n\n - CREATE TABLE t ();\n\nPostgres will then panic with the following logs:\n\n2023-07-08 16:33:27.787 EDT [157141] ERROR: permission denied for schema\npublic at character 14\n2023-07-08 16:33:27.787 EDT [157141] STATEMENT: CREATE TABLE t ();\n2023-07-08 16:33:27.787 EDT [157141] ERROR: permission denied to set\nsession authorization\n2023-07-08 16:33:27.787 EDT [157141] WARNING: AbortTransaction while in\nABORT state\n2023-07-08 16:33:27.787 EDT [157141] ERROR: permission denied to set\nsession authorization\n2023-07-08 16:33:27.787 EDT [157141] WARNING: AbortTransaction while in\nABORT state\n2023-07-08 16:33:27.787 EDT [157141] ERROR: permission denied to set\nsession authorization\n2023-07-08 16:33:27.787 EDT [157141] WARNING: AbortTransaction while in\nABORT state\n2023-07-08 16:33:27.787 EDT [157141] ERROR: permission denied to set\nsession authorization\n2023-07-08 16:33:27.787 EDT [157141] PANIC: ERRORDATA_STACK_SIZE exceeded\n2023-07-08 16:33:27.882 EDT [156878] LOG: server process (PID 157141) was\nterminated by signal 6: Aborted\n2023-07-08 16:33:27.882 EDT [156878] DETAIL: Failed process was running:\nCREATE TABLE t ();\n\nI think the issue here is that if a session loses the ability to set\ntheir session authorization in the middle of a transaction, then\nrolling back the transaction may fail and cause the server to panic.\nThat's probably what the deleted comment mean when it said:\n\n> * It's OK because the check does not require catalog access and can't\n> * fail during an end-of-transaction GUC reversion\n\nInterestingly, if the r1 session manually types `ROLLBACK` instead of\nexecuting a command that fails, then everything is fine and there's no\npanic. I'm not familiar enough with transaction handling to know why\nthere would be a difference there.\n\nThanks,\nJoe Koshakow\n\nI've discovered an issue with this approach. 
Let's say you have somesession open that is connected as a superuser and you run the following commands: - CREATE ROLE r1 LOGIN SUPERUSER; - CREATE ROLE r2; - CREATE ROLE r3;Then you open another session connected with user r1 and run thefollowing commands: - SET SESSION AUTHROIZATION r2; - BEGIN; - SET SESSION AUTHORIZATION r3;Then in your original session run: - ALTER ROLE r1 NOSUPERUSER;Finally in the r1 session run: - CREATE TABLE t ();Postgres will then panic with the following logs:2023-07-08 16:33:27.787 EDT [157141] ERROR: permission denied for schema public at character 142023-07-08 16:33:27.787 EDT [157141] STATEMENT: CREATE TABLE t ();2023-07-08 16:33:27.787 EDT [157141] ERROR: permission denied to set session authorization2023-07-08 16:33:27.787 EDT [157141] WARNING: AbortTransaction while in ABORT state2023-07-08 16:33:27.787 EDT [157141] ERROR: permission denied to set session authorization2023-07-08 16:33:27.787 EDT [157141] WARNING: AbortTransaction while in ABORT state2023-07-08 16:33:27.787 EDT [157141] ERROR: permission denied to set session authorization2023-07-08 16:33:27.787 EDT [157141] WARNING: AbortTransaction while in ABORT state2023-07-08 16:33:27.787 EDT [157141] ERROR: permission denied to set session authorization2023-07-08 16:33:27.787 EDT [157141] PANIC: ERRORDATA_STACK_SIZE exceeded2023-07-08 16:33:27.882 EDT [156878] LOG: server process (PID 157141) was terminated by signal 6: Aborted2023-07-08 16:33:27.882 EDT [156878] DETAIL: Failed process was running: CREATE TABLE t ();I think the issue here is that if a session loses the ability to settheir session authorization in the middle of a transaction, thenrolling back the transaction may fail and cause the server to panic.That's probably what the deleted comment mean when it said:> * It's OK because the check does not require catalog access and can't> * fail during an end-of-transaction GUC reversionInterestingly, if the r1 session manually types `ROLLBACK` instead ofexecuting a command that fails, then everything is fine and there's nopanic. I'm not familiar enough with transaction handling to know whythere would be a difference there.Thanks,Joe Koshakow",
"msg_date": "Sat, 8 Jul 2023 16:44:06 -0400",
"msg_from": "Joseph Koshakow <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Preventing non-superusers from altering session authorization"
},
{
"msg_contents": "On Sat, Jul 08, 2023 at 04:44:06PM -0400, Joseph Koshakow wrote:\n> 2023-07-08 16:33:27.787 EDT [157141] PANIC: ERRORDATA_STACK_SIZE exceeded\n> 2023-07-08 16:33:27.882 EDT [156878] LOG: server process (PID 157141) was\n> terminated by signal 6: Aborted\n> 2023-07-08 16:33:27.882 EDT [156878] DETAIL: Failed process was running:\n> CREATE TABLE t ();\n> \n> I think the issue here is that if a session loses the ability to set\n> their session authorization in the middle of a transaction, then\n> rolling back the transaction may fail and cause the server to panic.\n> That's probably what the deleted comment mean when it said:\n> \n>> * It's OK because the check does not require catalog access and can't\n>> * fail during an end-of-transaction GUC reversion\n\nYeah. IIUC the ERROR longjmps to a block that calls AbortTransaction(),\nwhich ERRORs again when resetting the session authorization, which causes\nus to call AbortTransaction() again, etc., etc.\n\n> Interestingly, if the r1 session manually types `ROLLBACK` instead of\n> executing a command that fails, then everything is fine and there's no\n> panic. I'm not familiar enough with transaction handling to know why\n> there would be a difference there.\n\nI haven't had a chance to dig into this one yet, but that is indeed\ninteresting.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Sat, 8 Jul 2023 15:09:04 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Preventing non-superusers from altering session authorization"
},
{
"msg_contents": "On Sat, Jul 8, 2023 at 6:09 PM Nathan Bossart <[email protected]>\nwrote:\n\n>> I think the issue here is that if a session loses the ability to set\n>> their session authorization in the middle of a transaction, then\n>> rolling back the transaction may fail and cause the server to panic.\n>> That's probably what the deleted comment mean when it said:\n>>\n>>> * It's OK because the check does not require catalog access and can't\n>>> * fail during an end-of-transaction GUC reversion\n>\n> Yeah. IIUC the ERROR longjmps to a block that calls AbortTransaction(),\n> which ERRORs again when resetting the session authorization, which causes\n> us to call AbortTransaction() again, etc., etc.\n\nEverything seems to work fine if the privilege check is moved to\ncheck_session_authorization. Which is maybe what the comment meant\ninstead of assign_session_authorization.\n\nI've attached a patch with this change.\n\nThanks,\nJoe Koshakow",
"msg_date": "Sat, 8 Jul 2023 19:08:35 -0400",
"msg_from": "Joseph Koshakow <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Preventing non-superusers from altering session authorization"
},
{
"msg_contents": "On Sat, Jul 08, 2023 at 07:08:35PM -0400, Joseph Koshakow wrote:\n> On Sat, Jul 8, 2023 at 6:09 PM Nathan Bossart <[email protected]>\n> wrote:\n> \n>>> I think the issue here is that if a session loses the ability to set\n>>> their session authorization in the middle of a transaction, then\n>>> rolling back the transaction may fail and cause the server to panic.\n>>> That's probably what the deleted comment mean when it said:\n>>>\n>>>> * It's OK because the check does not require catalog access and can't\n>>>> * fail during an end-of-transaction GUC reversion\n>>\n>> Yeah. IIUC the ERROR longjmps to a block that calls AbortTransaction(),\n>> which ERRORs again when resetting the session authorization, which causes\n>> us to call AbortTransaction() again, etc., etc.\n\nsrc/backend/utils/misc/README has the following relevant text:\n\n\tNote that there is no provision for a failure result code. assign_hooks\n\tshould never fail except under the most dire circumstances, since a failure\n\tmay for example result in GUC settings not being rolled back properly during\n\ttransaction abort. In general, try to do anything that could conceivably\n\tfail in a check_hook instead, and pass along the results in an \"extra\"\n\tstruct, so that the assign hook has little to do beyond copying the data to\n\tsomeplace. This applies particularly to catalog lookups: any required\n\tlookups must be done in the check_hook, since the assign_hook may be\n\texecuted during transaction rollback when lookups will be unsafe.\n\n> Everything seems to work fine if the privilege check is moved to\n> check_session_authorization. Which is maybe what the comment meant\n> instead of assign_session_authorization.\n\nAh, that does make more sense.\n\nI think we should split this into two patches: one to move the permission\ncheck to check_session_authorization() and another for the behavior change.\nI've attached an attempt at the first one (that borrows heavily from your\nlatest patch). AFAICT the only reason that the permission check lives in\nSetSessionAuthorization() is because AuthenticatedUserIsSuperuser is static\nto miscinit.c and doesn't have an accessor function. I added one, but it\nwould probably just be removed by the following patch. WDYT?\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Sat, 8 Jul 2023 21:47:08 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Preventing non-superusers from altering session authorization"
},
{
"msg_contents": "On Sun, Jul 9, 2023 at 12:47 AM Nathan Bossart <[email protected]>\nwrote:\n\n> I think we should split this into two patches: one to move the permission\n> check to check_session_authorization() and another for the behavior\nchange.\n> I've attached an attempt at the first one (that borrows heavily from your\n> latest patch). AFAICT the only reason that the permission check lives in\n> SetSessionAuthorization() is because AuthenticatedUserIsSuperuser is\nstatic\n> to miscinit.c and doesn't have an accessor function. I added one, but it\n> would probably just be removed by the following patch. WDYT?\n\nI think that's a good idea. We could even keep around the accessor\nfunction as a good place to bundle the calls to\n Assert(OidIsValid(AuthenticatedUserId))\nand\n superuser_arg(AuthenticatedUserId)\n\n> * Only a superuser may set auth ID to something other than himself\n\nIs \"auth ID\" the right term here? Maybe something like \"Only a\nsuperuser may set their session authorization/ID to something other\nthan their authenticated ID.\"\n\n> But we set the GUC variable\n> * is_superuser to indicate whether the *current* session userid is a\n> * superuser.\n\nJust a small correction here, I believe the is_superuser GUC is meant\nto indicate whether the current user id is a superuser, not the current\nsession user id. We only update is_superuser in SetSessionAuthorization\nbecause we are also updating the current user id in SetSessionUserId.\nFor example,\n\n test=# CREATE ROLE r1 SUPERUSER;\n CREATE ROLE\n test=# CREATE ROLE r2;\n CREATE ROLE\n test=# SET SESSION AUTHORIZATION r1;\n SET\n test=# SET ROLE r2;\n SET\n test=> SELECT session_user, current_user;\n session_user | current_user\n --------------+--------------\n r1 | r2\n (1 row)\n\n test=> SHOW is_superuser;\n is_superuser\n --------------\n off\n (1 row)\n\nWhich has also made me realize that the comment on is_superuser in\nguc_tables.c is incorrect:\n\n> /* Not for general use --- used by SET SESSION AUTHORIZATION */\n\nAdditionally the C variable name for is_superuser is fairly misleading:\n\n> session_auth_is_superuser\n\nThe documentation for this GUC in show.sgml is correct:\n\n> True if the current role has superuser privileges.\n\nAs an aside, I'm starting to think we should consider removing this\nGUC. It sometimes reports an incorrect value [0], and potentially is\nnot used internally for anything.\n\nI've rebased my changes over your patch and attached them both.\n\n[0]\nhttps://www.postgresql.org/message-id/CAAvxfHcxH-hLndty6CRThGXL1hLsgCn%2BE3QuG_4Qi7GxrHmgKg%40mail.gmail.com",
"msg_date": "Sun, 9 Jul 2023 13:03:14 -0400",
"msg_from": "Joseph Koshakow <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Preventing non-superusers from altering session authorization"
},
{
"msg_contents": "On Sun, Jul 9, 2023 at 1:03 PM Joseph Koshakow <[email protected]> wrote:\n\n>> * Only a superuser may set auth ID to something other than himself\n\n> Is \"auth ID\" the right term here? Maybe something like \"Only a\n> superuser may set their session authorization/ID to something other\n> than their authenticated ID.\"\n\n>> But we set the GUC variable\n>> * is_superuser to indicate whether the *current* session userid is a\n>> * superuser.\n\n> Just a small correction here, I believe the is_superuser GUC is meant\n> to indicate whether the current user id is a superuser, not the current\n> session user id. We only update is_superuser in SetSessionAuthorization\n> because we are also updating the current user id in SetSessionUserId.\n\nI just realized that you moved this comment from\nSetSessionAuthorization. I think we should leave the part about setting\nthe GUC variable is_superuser on top of SetSessionAuthorization since\nthat's where we actually set the GUC.\n\nThanks,\nJoe Koshakow\n\nOn Sun, Jul 9, 2023 at 1:03 PM Joseph Koshakow <[email protected]> wrote:>> * Only a superuser may set auth ID to something other than himself> Is \"auth ID\" the right term here? Maybe something like \"Only a> superuser may set their session authorization/ID to something other> than their authenticated ID.\">> But we set the GUC variable>> * is_superuser to indicate whether the *current* session userid is a>> * superuser.> Just a small correction here, I believe the is_superuser GUC is meant> to indicate whether the current user id is a superuser, not the current> session user id. We only update is_superuser in SetSessionAuthorization> because we are also updating the current user id in SetSessionUserId.I just realized that you moved this comment fromSetSessionAuthorization. I think we should leave the part about settingthe GUC variable is_superuser on top of SetSessionAuthorization sincethat's where we actually set the GUC.Thanks,Joe Koshakow",
"msg_date": "Sun, 9 Jul 2023 20:54:30 -0400",
"msg_from": "Joseph Koshakow <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Preventing non-superusers from altering session authorization"
},
{
"msg_contents": "On Sun, Jul 09, 2023 at 08:54:30PM -0400, Joseph Koshakow wrote:\n> I just realized that you moved this comment from\n> SetSessionAuthorization. I think we should leave the part about setting\n> the GUC variable is_superuser on top of SetSessionAuthorization since\n> that's where we actually set the GUC.\n\nOkay. Here's a new patch set in which I believe I've addressed all\nfeedback. I didn't keep the GetAuthenticatedUserIsSuperuser() helper\nfunction around, as I didn't see a strong need for it. And I haven't\ntouched the \"is_superuser\" GUC, either. I figured we can take up any\nchanges for it in the other thread.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Mon, 10 Jul 2023 13:31:58 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Preventing non-superusers from altering session authorization"
},
{
"msg_contents": "On Mon, Jul 10, 2023 at 4:32 PM Nathan Bossart <[email protected]>\nwrote:\n> Okay. Here's a new patch set in which I believe I've addressed all\n> feedback. I didn't keep the GetAuthenticatedUserIsSuperuser() helper\n> function around, as I didn't see a strong need for it.\n\nThanks, I think the patch set looks good to go!\n\n> And I haven't\n> touched the \"is_superuser\" GUC, either. I figured we can take up any\n> changes for it in the other thread.\n\nYeah, I think that makes sense.\n\nThanks,\nJoe Koshakow\n\nOn Mon, Jul 10, 2023 at 4:32 PM Nathan Bossart <[email protected]> wrote:> Okay. Here's a new patch set in which I believe I've addressed all> feedback. I didn't keep the GetAuthenticatedUserIsSuperuser() helper> function around, as I didn't see a strong need for it. Thanks, I think the patch set looks good to go!> And I haven't> touched the \"is_superuser\" GUC, either. I figured we can take up any> changes for it in the other thread.Yeah, I think that makes sense.Thanks,Joe Koshakow",
"msg_date": "Mon, 10 Jul 2023 16:46:07 -0400",
"msg_from": "Joseph Koshakow <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Preventing non-superusers from altering session authorization"
},
{
"msg_contents": "On Mon, Jul 10, 2023 at 04:46:07PM -0400, Joseph Koshakow wrote:\n> Thanks, I think the patch set looks good to go!\n\nGreat. I'm going to wait a few more days in case anyone has additional\nfeedback, but otherwise I intend to commit this shortly.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 10 Jul 2023 13:49:55 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Preventing non-superusers from altering session authorization"
},
{
"msg_contents": "On Mon, Jul 10, 2023 at 01:49:55PM -0700, Nathan Bossart wrote:\n> Great. I'm going to wait a few more days in case anyone has additional\n> feedback, but otherwise I intend to commit this shortly.\n\nI've committed 0001 for now. I'm hoping to commit the other two patches\nwithin the next couple of days.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 12 Jul 2023 21:37:57 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Preventing non-superusers from altering session authorization"
},
{
"msg_contents": "On Wed, Jul 12, 2023 at 09:37:57PM -0700, Nathan Bossart wrote:\n> On Mon, Jul 10, 2023 at 01:49:55PM -0700, Nathan Bossart wrote:\n>> Great. I'm going to wait a few more days in case anyone has additional\n>> feedback, but otherwise I intend to commit this shortly.\n> \n> I've committed 0001 for now. I'm hoping to commit the other two patches\n> within the next couple of days.\n\nCommitted. I dwelled on whether to proceed with this change because it\ndoesn't completely solve the originally-stated problem; i.e., a role that\nhas changed its session authorization before losing superuser can still\ntake advantage of the privileges of the target role, which might include\nreaquiring superuser. However, I think SET ROLE is subject to basically\nthe same problem, and I'd argue that this change is strictly an\nimprovement, if for no other reason than it makes SET SESSION AUTHORIZATION\nmore consistent with SET ROLE.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 13 Jul 2023 21:16:08 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Preventing non-superusers from altering session authorization"
}
] |
[
{
"msg_contents": "Hi,\n\nWhen vac_truncate_clog() returns early, due to one of these paths:\n\n\t/*\n\t * Do not truncate CLOG if we seem to have suffered wraparound already;\n\t * the computed minimum XID might be bogus. This case should now be\n\t * impossible due to the defenses in GetNewTransactionId, but we keep the\n\t * test anyway.\n\t */\n\tif (frozenAlreadyWrapped)\n\t{\n\t\tereport(WARNING,\n\t\t\t\t(errmsg(\"some databases have not been vacuumed in over 2 billion transactions\"),\n\t\t\t\t errdetail(\"You might have already suffered transaction-wraparound data loss.\")));\n\t\treturn;\n\t}\n\n\t/* chicken out if data is bogus in any other way */\n\tif (bogus)\n\t\treturn;\n\nwe haven't released the lwlock that we acquired earlier:\n\n\t/* Restrict task to one backend per cluster; see SimpleLruTruncate(). */\n\tLWLockAcquire(WrapLimitsVacuumLock, LW_EXCLUSIVE);\n\nas this isn't a path raising an error, the lock isn't released during abort.\nUntil there's some cause for the session to call LWLockReleaseAll(), the lock\nis held. Until then neither the process holding the lock, nor any other\nprocess, can finish vacuuming. We don't even have an assert against a\nself-deadlock with an already held lock, oddly enough.\n\n\nThis is somewhat nasty - there's no real way to get out of this without an\nimmediate restart, and it's hard to pinpoint the problem as well :(.\n\n\nOk, the subject line is not the most precise, but it was just too good an\nopportunity.\n\n\nTo reproduce (only on a throwaway system please!):\n\nCREATE DATABASE invalid;\nUPDATE pg_database SET datfrozenxid = '10002' WHERE datname = 'invalid';\nDROP TABLE IF EXISTS foo_tbl; CREATE TABLE foo_tbl(); DROP TABLE foo_tbl; VACUUM FREEZE;\nDROP TABLE IF EXISTS foo_tbl; CREATE TABLE foo_tbl(); DROP TABLE foo_tbl; VACUUM FREEZE;\n<hang>\n\n\nFound this while writing a test for the fix for partial dropping of\ndatabases [1].\n\n\nSeparately, I think it's quite bad that we *silently* return from\nvac_truncate_clog() when finding a bogus xid. That's a quite severe condition,\nwe should at least tell the user about it.\n\n\nGreetings,\n\nAndres Freund\n\n[1] https://postgr.es/m/20230621190204.nsaelabojxppiuix%40awork3.anarazel.de\n\n\n",
"msg_date": "Wed, 21 Jun 2023 15:12:08 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "vac_truncate_clog()'s bogus check leads to bogusness"
},
{
"msg_contents": "Hi,\n\nOn 2023-06-21 15:12:08 -0700, Andres Freund wrote:\n> When vac_truncate_clog() returns early, due to one of these paths:\n>\n> [...]\n>\n> Separately, I think it's quite bad that we *silently* return from\n> vac_truncate_clog() when finding a bogus xid. That's a quite severe condition,\n> we should at least tell the user about it.\n\nA related issue is that as far as I can tell the determination of what is\nbogus is bogus.\n\nThe relevant cutoffs are determined vac_update_datfrozenxid() using:\n\n\t/*\n\t * Identify the latest relfrozenxid and relminmxid values that we could\n\t * validly see during the scan. These are conservative values, but it's\n\t * not really worth trying to be more exact.\n\t */\n\tlastSaneFrozenXid = ReadNextTransactionId();\n\tlastSaneMinMulti = ReadNextMultiXactId();\n\nbut doing checks based on thos is bogus, because:\n\na) a concurrent create table / truncate / vacuum can update\n pg_class.relfrozenxid of some relation in the current database to a newer\n value, after lastSaneFrozenXid already has been determined. If that\n happens, we skip updating pg_database.datfrozenxid.\n\nb) A concurrent vacuum in another database, ending up in vac_truncate_clog(),\n can compute a newer datfrozenxid. In that case the vac_truncate_clog() with\n the outdated lastSaneFrozenXid will not truncate the clog (and also forget\n to release WrapLimitsVacuumLock currently, as reported upthread) and not\n call SetTransactionIdLimit(). The latter is particularly bad, because that\n means we might not come out of \"database is not accepting commands\" land.\n\nI think in both cases a later call might fix the issue, but that could be some\nway out, if autovacuum doesn't see further writes being necessary, and no\nfurther write activity happens, because of \"\"database is not accepting\ncommands\".\n\n\nIt's not entirely obvious to me how to best fix these. For a second I thought\nwe just need to acquire a snapshot before determining the sane values, but\nthat doesn't work, since we update the relevant fields with\nheap_inplace_update().\n\nI guess we could just recompute the boundaries before actually believing the\ncatalog values are bogus?\n\nI think we also add warnings to these paths, so we actually have a chance to\nfind problems in the field.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 21 Jun 2023 17:46:37 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: vac_truncate_clog()'s bogus check leads to bogusness"
},
{
"msg_contents": "On Wed, Jun 21, 2023 at 03:12:08PM -0700, Andres Freund wrote:\n> When vac_truncate_clog() returns early\n...\n> we haven't released the lwlock that we acquired earlier\n\n> Until there's some cause for the session to call LWLockReleaseAll(), the lock\n> is held. Until then neither the process holding the lock, nor any other\n> process, can finish vacuuming. We don't even have an assert against a\n> self-deadlock with an already held lock, oddly enough.\n\nI agree with this finding. Would you like to add the lwlock releases, or\nwould you like me to?\n\nThe bug has been in all released versions for 2.5 years, yet it escaped\nnotice. That tells us something. Bogus values have gotten rare? The\naffected session tends to get lucky and call LWLockReleaseAll() soon?\n\nOn Wed, Jun 21, 2023 at 05:46:37PM -0700, Andres Freund wrote:\n> On 2023-06-21 15:12:08 -0700, Andres Freund wrote:\n> > Separately, I think it's quite bad that we *silently* return from\n> > vac_truncate_clog() when finding a bogus xid. That's a quite severe condition,\n> > we should at least tell the user about it.\n> \n> A related issue is that as far as I can tell the determination of what is\n> bogus is bogus.\n> \n> The relevant cutoffs are determined vac_update_datfrozenxid() using:\n> \n> \t/*\n> \t * Identify the latest relfrozenxid and relminmxid values that we could\n> \t * validly see during the scan. These are conservative values, but it's\n> \t * not really worth trying to be more exact.\n> \t */\n> \tlastSaneFrozenXid = ReadNextTransactionId();\n> \tlastSaneMinMulti = ReadNextMultiXactId();\n> \n> but doing checks based on thos is bogus, because:\n> \n> a) a concurrent create table / truncate / vacuum can update\n> pg_class.relfrozenxid of some relation in the current database to a newer\n> value, after lastSaneFrozenXid already has been determined. If that\n> happens, we skip updating pg_database.datfrozenxid.\n> \n> b) A concurrent vacuum in another database, ending up in vac_truncate_clog(),\n> can compute a newer datfrozenxid. In that case the vac_truncate_clog() with\n> the outdated lastSaneFrozenXid will not truncate the clog (and also forget\n> to release WrapLimitsVacuumLock currently, as reported upthread) and not\n> call SetTransactionIdLimit(). The latter is particularly bad, because that\n> means we might not come out of \"database is not accepting commands\" land.\n\n> I guess we could just recompute the boundaries before actually believing the\n> catalog values are bogus?\n\nThat's how I'd do it.\n\n\n",
"msg_date": "Wed, 21 Jun 2023 21:50:39 -0700",
"msg_from": "Noah Misch <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: vac_truncate_clog()'s bogus check leads to bogusness"
},
{
"msg_contents": "Hi,\n\nOn 2023-06-21 21:50:39 -0700, Noah Misch wrote:\n> On Wed, Jun 21, 2023 at 03:12:08PM -0700, Andres Freund wrote:\n> > When vac_truncate_clog() returns early\n> ...\n> > we haven't released the lwlock that we acquired earlier\n> \n> > Until there's some cause for the session to call LWLockReleaseAll(), the lock\n> > is held. Until then neither the process holding the lock, nor any other\n> > process, can finish vacuuming. We don't even have an assert against a\n> > self-deadlock with an already held lock, oddly enough.\n> \n> I agree with this finding. Would you like to add the lwlock releases, or\n> would you like me to?\n\nHappy with either. I do have code and testcase, so I guess it would make\nsense for me to do it?\n\n\n> The bug has been in all released versions for 2.5 years, yet it escaped\n> notice. That tells us something. Bogus values have gotten rare? The\n> affected session tends to get lucky and call LWLockReleaseAll() soon?\n\nI am not sure either. I suspect that part of it is that people couldn't even\npinpoint the problem when it happened. Process exit calls LWLockReleaseAll(),\nwhich I assume would avoid the problem in many cases.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 22 Jun 2023 09:45:18 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: vac_truncate_clog()'s bogus check leads to bogusness"
},
{
"msg_contents": "On Thu, Jun 22, 2023 at 09:45:18AM -0700, Andres Freund wrote:\n> On 2023-06-21 21:50:39 -0700, Noah Misch wrote:\n> > On Wed, Jun 21, 2023 at 03:12:08PM -0700, Andres Freund wrote:\n> > > When vac_truncate_clog() returns early\n> > ...\n> > > we haven't released the lwlock that we acquired earlier\n> > \n> > > Until there's some cause for the session to call LWLockReleaseAll(), the lock\n> > > is held. Until then neither the process holding the lock, nor any other\n> > > process, can finish vacuuming. We don't even have an assert against a\n> > > self-deadlock with an already held lock, oddly enough.\n> > \n> > I agree with this finding. Would you like to add the lwlock releases, or\n> > would you like me to?\n> \n> Happy with either. I do have code and testcase, so I guess it would make\n> sense for me to do it?\n\nSounds good. Thanks.\n\n\n",
"msg_date": "Thu, 22 Jun 2023 22:29:12 -0700",
"msg_from": "Noah Misch <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: vac_truncate_clog()'s bogus check leads to bogusness"
},
{
"msg_contents": "Hi,\n\nOn 2023-06-21 21:50:39 -0700, Noah Misch wrote:\n> On Wed, Jun 21, 2023 at 05:46:37PM -0700, Andres Freund wrote:\n> > A related issue is that as far as I can tell the determination of what is\n> > bogus is bogus.\n> > \n> > The relevant cutoffs are determined vac_update_datfrozenxid() using:\n> > \n> > \t/*\n> > \t * Identify the latest relfrozenxid and relminmxid values that we could\n> > \t * validly see during the scan. These are conservative values, but it's\n> > \t * not really worth trying to be more exact.\n> > \t */\n> > \tlastSaneFrozenXid = ReadNextTransactionId();\n> > \tlastSaneMinMulti = ReadNextMultiXactId();\n> > \n> > but doing checks based on thos is bogus, because:\n> > \n> > a) a concurrent create table / truncate / vacuum can update\n> > pg_class.relfrozenxid of some relation in the current database to a newer\n> > value, after lastSaneFrozenXid already has been determined. If that\n> > happens, we skip updating pg_database.datfrozenxid.\n> > \n> > b) A concurrent vacuum in another database, ending up in vac_truncate_clog(),\n> > can compute a newer datfrozenxid. In that case the vac_truncate_clog() with\n> > the outdated lastSaneFrozenXid will not truncate the clog (and also forget\n> > to release WrapLimitsVacuumLock currently, as reported upthread) and not\n> > call SetTransactionIdLimit(). The latter is particularly bad, because that\n> > means we might not come out of \"database is not accepting commands\" land.\n> \n> > I guess we could just recompute the boundaries before actually believing the\n> > catalog values are bogus?\n> \n> That's how I'd do it.\n\nI was looking at doing that and got confused by the current code. Am I missing\nsomething, or does vac_truncate_clog() have two pretty much identical attempts\nat a safety measures?\n\nvoid\nvac_update_datfrozenxid(void)\n...\n\tlastSaneFrozenXid = ReadNextTransactionId();\n...\n\t\tvac_truncate_clog(newFrozenXid, newMinMulti,\n\t\t\t\t\t\t lastSaneFrozenXid, lastSaneMinMulti);\n}\n...\nstatic void\nvac_truncate_clog(TransactionId frozenXID,\n\t\t\t\t MultiXactId minMulti,\n\t\t\t\t TransactionId lastSaneFrozenXid,\n\t\t\t\t MultiXactId lastSaneMinMulti)\n{\n\tTransactionId nextXID = ReadNextTransactionId();\n...\n\t\t/*\n\t\t * If things are working properly, no database should have a\n\t\t * datfrozenxid or datminmxid that is \"in the future\". However, such\n\t\t * cases have been known to arise due to bugs in pg_upgrade. If we\n\t\t * see any entries that are \"in the future\", chicken out and don't do\n\t\t * anything. This ensures we won't truncate clog before those\n\t\t * databases have been scanned and cleaned up. (We will issue the\n\t\t * \"already wrapped\" warning if appropriate, though.)\n\t\t */\n\t\tif (TransactionIdPrecedes(lastSaneFrozenXid, datfrozenxid) ||\n\t\t\tMultiXactIdPrecedes(lastSaneMinMulti, datminmxid))\n\t\t\tbogus = true;\n\n\t\tif (TransactionIdPrecedes(nextXID, datfrozenxid))\n\t\t\tfrozenAlreadyWrapped = true;\n\nlastSaneFrozenXid is a slightly older version of ReadNextTransactionId(),\nthat's the only difference afaict.\n\n\nI guess this might be caused by 78db307bb23 adding the check, but using\nGetOldestXmin(NULL, true) to determine lastSaneFrozenXid. That was changed\nsoon after, in 87f830e0ce03.\n\n\nAm I missing something?\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 23 Jun 2023 18:41:58 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: vac_truncate_clog()'s bogus check leads to bogusness"
},
{
"msg_contents": "Hi,\n\nOn 2023-06-23 18:41:58 -0700, Andres Freund wrote:\n> I guess this might be caused by 78db307bb23 adding the check, but using\n> GetOldestXmin(NULL, true) to determine lastSaneFrozenXid. That was changed\n> soon after, in 87f830e0ce03.\n\nFWIW, the discussion leading up to 87f830e0ce03 is\nhttps://postgr.es/m/4182.1405961004%40sss.pgh.pa.us\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 23 Jun 2023 18:48:13 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: vac_truncate_clog()'s bogus check leads to bogusness"
},
{
"msg_contents": "On Fri, Jun 23, 2023 at 06:41:58PM -0700, Andres Freund wrote:\n> On 2023-06-21 21:50:39 -0700, Noah Misch wrote:\n> > On Wed, Jun 21, 2023 at 05:46:37PM -0700, Andres Freund wrote:\n> > > A related issue is that as far as I can tell the determination of what is\n> > > bogus is bogus.\n> > > \n> > > The relevant cutoffs are determined vac_update_datfrozenxid() using:\n> > > \n> > > \t/*\n> > > \t * Identify the latest relfrozenxid and relminmxid values that we could\n> > > \t * validly see during the scan. These are conservative values, but it's\n> > > \t * not really worth trying to be more exact.\n> > > \t */\n> > > \tlastSaneFrozenXid = ReadNextTransactionId();\n> > > \tlastSaneMinMulti = ReadNextMultiXactId();\n> > > \n> > > but doing checks based on thos is bogus, because:\n> > > \n> > > a) a concurrent create table / truncate / vacuum can update\n> > > pg_class.relfrozenxid of some relation in the current database to a newer\n> > > value, after lastSaneFrozenXid already has been determined. If that\n> > > happens, we skip updating pg_database.datfrozenxid.\n> > > \n> > > b) A concurrent vacuum in another database, ending up in vac_truncate_clog(),\n> > > can compute a newer datfrozenxid. In that case the vac_truncate_clog() with\n> > > the outdated lastSaneFrozenXid will not truncate the clog (and also forget\n> > > to release WrapLimitsVacuumLock currently, as reported upthread) and not\n> > > call SetTransactionIdLimit(). The latter is particularly bad, because that\n> > > means we might not come out of \"database is not accepting commands\" land.\n> > \n> > > I guess we could just recompute the boundaries before actually believing the\n> > > catalog values are bogus?\n> > \n> > That's how I'd do it.\n> \n> I was looking at doing that and got confused by the current code. Am I missing\n> something, or does vac_truncate_clog() have two pretty much identical attempts\n> at a safety measures?\n> \n> void\n> vac_update_datfrozenxid(void)\n> ...\n> \tlastSaneFrozenXid = ReadNextTransactionId();\n> ...\n> \t\tvac_truncate_clog(newFrozenXid, newMinMulti,\n> \t\t\t\t\t\t lastSaneFrozenXid, lastSaneMinMulti);\n> }\n> ...\n> static void\n> vac_truncate_clog(TransactionId frozenXID,\n> \t\t\t\t MultiXactId minMulti,\n> \t\t\t\t TransactionId lastSaneFrozenXid,\n> \t\t\t\t MultiXactId lastSaneMinMulti)\n> {\n> \tTransactionId nextXID = ReadNextTransactionId();\n> ...\n> \t\t/*\n> \t\t * If things are working properly, no database should have a\n> \t\t * datfrozenxid or datminmxid that is \"in the future\". However, such\n> \t\t * cases have been known to arise due to bugs in pg_upgrade. If we\n> \t\t * see any entries that are \"in the future\", chicken out and don't do\n> \t\t * anything. This ensures we won't truncate clog before those\n> \t\t * databases have been scanned and cleaned up. (We will issue the\n> \t\t * \"already wrapped\" warning if appropriate, though.)\n> \t\t */\n> \t\tif (TransactionIdPrecedes(lastSaneFrozenXid, datfrozenxid) ||\n> \t\t\tMultiXactIdPrecedes(lastSaneMinMulti, datminmxid))\n> \t\t\tbogus = true;\n> \n> \t\tif (TransactionIdPrecedes(nextXID, datfrozenxid))\n> \t\t\tfrozenAlreadyWrapped = true;\n> \n> lastSaneFrozenXid is a slightly older version of ReadNextTransactionId(),\n> that's the only difference afaict.\n\nI don't think you missed anything. nextXID and lastSaneFrozenXid are both\njust caches of ReadNextTransactionId(). 
Each can become stale enough to make\nthose comparisons suggest trouble when all is fine.\n\n> I guess this might be caused by 78db307bb23 adding the check, but using\n> GetOldestXmin(NULL, true) to determine lastSaneFrozenXid. That was changed\n> soon after, in 87f830e0ce03.\n\nYeah. The nextXID check is from 9c54cfb (2002-04), and the newer check\nconverged with it in 87f830e0ce03 (2014-07).\n\n\nWhile less important, some other things look weak in these functions:\n\n- The only non-corruption cause to reach the \"don't want to let datfrozenxid\n go backward\" code is for GetOldestNonRemovableTransactionId(NULL) to go\n backward, e.g. if a walsender starts up and advertises an xmin. One could\n eliminate that cause by replacing \"newFrozenXid =\n GetOldestNonRemovableTransactionId(NULL)\" with initialization from the first\n relfrozenxid, analogous to how vac_truncate_clog() initializes.\n vac_update_datfrozenxid() could then warn if the prevention code intervenes.\n Perhaps, instead of preventing the go-backwards, it should apply the\n go-backward change after warning? (Unlike datfrozenxid, datminmxid going\n backward already implies corruption.)\n\n- The \"some databases have not been vacuumed in over 2 billion transactions\"\n message is false more often than not. More likely, something corrupted a\n frozen ID. The message is also missing the opportunity to indicate one of\n the affected databases.\n\n- vac_truncate_clog() bogosity checks examine XIDs only, not multis.\n\n\n",
"msg_date": "Sun, 25 Jun 2023 10:13:24 -0700",
"msg_from": "Noah Misch <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: vac_truncate_clog()'s bogus check leads to bogusness"
},
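A minimal sketch of the "recompute the boundaries before believing the catalog values are bogus" idea agreed above, reusing the variable names from the vac_truncate_clog() excerpts quoted in this thread (illustration only, not the committed fix):

```c
/*
 * Sketch: if a datfrozenxid/datminmxid looks "in the future", refresh the
 * cached horizons before concluding the catalog value is bogus, since
 * lastSaneFrozenXid may simply be stale relative to a concurrent advance
 * of relfrozenxid/datfrozenxid.
 */
if (TransactionIdPrecedes(lastSaneFrozenXid, datfrozenxid) ||
	MultiXactIdPrecedes(lastSaneMinMulti, datminmxid))
{
	lastSaneFrozenXid = ReadNextTransactionId();
	lastSaneMinMulti = ReadNextMultiXactId();

	if (TransactionIdPrecedes(lastSaneFrozenXid, datfrozenxid) ||
		MultiXactIdPrecedes(lastSaneMinMulti, datminmxid))
		bogus = true;
}
```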
{
"msg_contents": "On 2023-06-22 22:29:12 -0700, Noah Misch wrote:\n> On Thu, Jun 22, 2023 at 09:45:18AM -0700, Andres Freund wrote:\n> > On 2023-06-21 21:50:39 -0700, Noah Misch wrote:\n> > > On Wed, Jun 21, 2023 at 03:12:08PM -0700, Andres Freund wrote:\n> > > > When vac_truncate_clog() returns early\n> > > ...\n> > > > we haven't released the lwlock that we acquired earlier\n> > > \n> > > > Until there's some cause for the session to call LWLockReleaseAll(), the lock\n> > > > is held. Until then neither the process holding the lock, nor any other\n> > > > process, can finish vacuuming. We don't even have an assert against a\n> > > > self-deadlock with an already held lock, oddly enough.\n> > > \n> > > I agree with this finding. Would you like to add the lwlock releases, or\n> > > would you like me to?\n> > \n> > Happy with either. I do have code and testcase, so I guess it would make\n> > sense for me to do it?\n> \n> Sounds good. Thanks.\n\nDone.\n\n\n",
"msg_date": "Thu, 13 Jul 2023 13:45:41 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: vac_truncate_clog()'s bogus check leads to bogusness"
}
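For context, the lwlock leak fixed here ("Done") concerns the early returns in vac_truncate_clog(); the shape of such a fix is presumably along these lines (a sketch under that assumption, not the actual commit):

```c
/*
 * Sketch: vac_truncate_clog() acquires WrapLimitsVacuumLock near its start,
 * so any early return (e.g. the "bogus" case) must release it; otherwise the
 * lock stays held until the backend happens to run LWLockReleaseAll().
 */
if (bogus)
{
	LWLockRelease(WrapLimitsVacuumLock);
	return;
}
```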
] |
[
{
"msg_contents": "Dear hackers,\r\n(CC: Önder because he owned the related thread)\r\n\r\nThis is a follow-up thread of [1]. The commit allowed subscribers to use indexes\r\nother than PK and REPLICA IDENTITY when REPLICA IDENTITY is FULL on publisher,\r\nbut the index must be a B-tree. In this proposal, I aim to extend this functionality to allow\r\nfor hash indexes and potentially other types.\r\nI would like to share an initial patch to activate discussions.\r\n\r\n# Current problem\r\n\r\nThe current limitation comes from the function build_replindex_scan_key(), specifically these lines:\r\n\r\n\r\n```\r\n\t\t/*\r\n\t\t * Load the operator info. We need this to get the equality operator\r\n\t\t * function for the scan key.\r\n\t\t */\r\n\t\toptype = get_opclass_input_type(opclass->values[index_attoff]);\r\n\t\topfamily = get_opclass_family(opclass->values[index_attoff]);\r\n\t\toperator = get_opfamily_member(opfamily, optype,\r\n\t\t\t\t\t\t\t\t\t optype,\r\n\t\t\t\t\t\t\t\t\t BTEqualStrategyNumber);\r\n```\r\n\r\nThese lines retrieve an operator for equality comparisons. The \"strategy number\"\r\n[2] identifies this operator. For B-tree indexes, an equal-comparison operator\r\nis always associated with a fixed number (BTEqualStrategyNumber, 3). However,\r\nthis approach fails for other types of indexes because the strategy number is\r\ndetermined by the operator class, not fixed.\r\n\r\n# Proposed solution\r\n\r\nI propose a patch that chooses the correct strategy number based on the index\r\naccess method. We would extract the access method from the pg_opclass system\r\ncatalog, similar to the approach used for data types and operator families.\r\nAlso, this patch change the condition for using the index on the subscriber\r\n(see IsIndexUsableForReplicaIdentityFull()).\r\n\r\nHowever, this solution does not yet extend to GiST, SP-GiST, GIN, BRIN due to\r\nimplementation constraints.\r\n\r\n## Current difficulties\r\n\r\nThe challenge with supporting other indexes is that they lack a fixed set of strategies,\r\nmaking it difficult to choose the correct strategy number based on the index\r\naccess method. Even within the same index type, different operator classes can\r\nuse different strategy numbers for the same operation.\r\nE.g. [2] shows that number 6 can be used for the purpose, but other operator classes\r\nadded by btree_gist [3] seem to use number 3 for the euqlaity comparison.\r\n\r\n\r\nI've looked into using ExecIndexBuildScanKeys(), which is used for normal index scans.\r\nHowever, in this case, the operator and its family seem to be determined by the planner.\r\nBased on that, the associated strategy number is extracted. This is the opposite\r\nof what I am trying to achieve, so it doesn't seem helpful in this context.\r\n\r\n\r\n\r\nThe current patch only includes tests for hash indexes. These are separated into\r\nthe file 032_subscribe_use_index.pl for convenience, but will be integrated in a\r\nlater version.\r\n\r\n\r\nHow do you think? I want to know your opinion.\r\n\r\n[1]: https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=89e46da5e511a6970e26a020f265c9fb4b72b1d2\r\n[2]: https://www.postgresql.org/docs/devel/xindex.html#XINDEX-STRATEGIES\r\n[3]: https://www.postgresql.org/docs/devel/btree-gist.html\r\n\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED",
"msg_date": "Thu, 22 Jun 2023 01:36:47 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "[Patch] Use *other* indexes on the subscriber when REPLICA IDENTITY\n is FULL"
},
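To make the proposal concrete: in build_replindex_scan_key() the equality-operator lookup would roughly become the fragment below, where get_opclass_method() is the lsyscache helper added by this patch and the strategy number is chosen per access method rather than hard-coded to BTEqualStrategyNumber (sketch only):

```c
	Oid			optype;
	Oid			opfamily;
	Oid			operator;
	StrategyNumber eq_strategy;

	optype = get_opclass_input_type(opclass->values[index_attoff]);
	opfamily = get_opclass_family(opclass->values[index_attoff]);

	switch (get_opclass_method(opclass->values[index_attoff]))
	{
		case BTREE_AM_OID:
			eq_strategy = BTEqualStrategyNumber;	/* always 3 for btree */
			break;
		case HASH_AM_OID:
			eq_strategy = HTEqualStrategyNumber;	/* always 1 for hash */
			break;
		default:
			/* other AMs are expected to be rejected before we get here */
			eq_strategy = InvalidStrategy;
			break;
	}

	operator = get_opfamily_member(opfamily, optype, optype, eq_strategy);
```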
{
"msg_contents": "Hi Hayato, all\n\n>\n>\n> This is a follow-up thread of [1]. The commit allowed subscribers to use\n> indexes\n> other than PK and REPLICA IDENTITY when REPLICA IDENTITY is FULL on\n> publisher,\n> but the index must be a B-tree. In this proposal, I aim to extend this\n> functionality to allow\n> for hash indexes and potentially other types.\n>\n>\nCool, thanks for taking the time to work on this.\n\n\n> # Current problem\n>\n> The current limitation comes from the function build_replindex_scan_key(),\n> specifically these lines:\n>\n\nWhen I last dealt with the same issue, I was examining it from a slightly\nbroader perspective. I think\nmy conclusion was that RelationFindReplTupleByIndex() is designed for the\nconstraints of UNIQUE INDEX\nand Primary Key. Hence, btree limitation was given.\n\nSo, my main point is that it might be useful to check\nRelationFindReplTupleByIndex() once more in detail\nto see if there is anything else that is specific to btree indexes.\n\nbuild_replindex_scan_key() is definitely one of the major culprits but see\nbelow as well.\n\nI think we should also be mindful about tuples_equal() function. When an\nindex returns more than\none tuple, we rely on tuples_equal() function to make sure non-relevant\ntuples are skipped.\n\nFor btree indexes, it was safe to rely on that function as the columns that\nare indexed using btree\nalways have equality operator. I think we can safely assume the same for\nhash indexes.\n\nHowever, say we indexed \"point\" type using \"gist\" index. Then, if we let\nthis logic to kick in,\nI think tuples_equal() would fail saying that there is no equality operator\nexists.\n\nOne might argue that it is already the case for RelationFindReplTupleSeq()\nor when you\nhave index but the index on a different column. But still, it seems useful\nto make sure\nyou are aware of this limitation as well.\n\n\n>\n> ## Current difficulties\n>\n> The challenge with supporting other indexes is that they lack a fixed set\n> of strategies,\n> making it difficult to choose the correct strategy number based on the\n> index\n> access method. Even within the same index type, different operator classes\n> can\n> use different strategy numbers for the same operation.\n> E.g. [2] shows that number 6 can be used for the purpose, but other\n> operator classes\n> added by btree_gist [3] seem to use number 3 for the euqlaity comparison.\n>\n>\nAlso, build_replindex_scan_key() seems like a too late place to check this?\nI mean, what\nif there is no equality operator, how should code react to that? It\neffectively becomes\nRelationFindReplTupleSeq(), so maybe better follow that route upfront?\n\nIn other words, that decision should maybe\nhappen IsIndexUsableForReplicaIdentityFull()?\n\nFor the specific notes you raised about strategy numbers / operator\nclasses, I need to\nstudy a bit :) Though, I'll be available to do that early next week.\n\nThanks,\nOnder\n\nHi Hayato, all\n\nThis is a follow-up thread of [1]. The commit allowed subscribers to use indexes\nother than PK and REPLICA IDENTITY when REPLICA IDENTITY is FULL on publisher,\nbut the index must be a B-tree. In this proposal, I aim to extend this functionality to allow\nfor hash indexes and potentially other types.Cool, thanks for taking the time to work on this. \n# Current problem\n\nThe current limitation comes from the function build_replindex_scan_key(), specifically these lines:When I last dealt with the same issue, I was examining it from a slightly broader perspective. 
I thinkmy conclusion was that RelationFindReplTupleByIndex() is designed for the constraints of UNIQUE INDEX and Primary Key. Hence, btree limitation was given.So, my main point is that it might be useful to check RelationFindReplTupleByIndex() once more in detailto see if there is anything else that is specific to btree indexes.build_replindex_scan_key() is definitely one of the major culprits but see below as well.I think we should also be mindful about tuples_equal() function. When an index returns more thanone tuple, we rely on tuples_equal() function to make sure non-relevant tuples are skipped.For btree indexes, it was safe to rely on that function as the columns that are indexed using btreealways have equality operator. I think we can safely assume the same for hash indexes.However, say we indexed \"point\" type using \"gist\" index. Then, if we let this logic to kick in,I think tuples_equal() would fail saying that there is no equality operator exists. One might argue that it is already the case for RelationFindReplTupleSeq() or when youhave index but the index on a different column. But still, it seems useful to make sureyou are aware of this limitation as well. \n## Current difficulties\n\nThe challenge with supporting other indexes is that they lack a fixed set of strategies,\nmaking it difficult to choose the correct strategy number based on the index\naccess method. Even within the same index type, different operator classes can\nuse different strategy numbers for the same operation.\nE.g. [2] shows that number 6 can be used for the purpose, but other operator classes\nadded by btree_gist [3] seem to use number 3 for the euqlaity comparison.Also, build_replindex_scan_key() seems like a too late place to check this? I mean, whatif there is no equality operator, how should code react to that? It effectively becomesRelationFindReplTupleSeq(), so maybe better follow that route upfront?In other words, that decision should maybe happen IsIndexUsableForReplicaIdentityFull()?For the specific notes you raised about strategy numbers / operator classes, I need tostudy a bit :) Though, I'll be available to do that early next week.Thanks,Onder",
"msg_date": "Mon, 26 Jun 2023 10:52:31 +0300",
"msg_from": "=?UTF-8?B?w5ZuZGVyIEthbGFjxLE=?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] Use *other* indexes on the subscriber when REPLICA\n IDENTITY is FULL"
},
{
"msg_contents": "Dear Önder,\r\n\r\nThank you for giving comments! The author's comment is quite helpful for us.\r\n\r\n>\r\nWhen I last dealt with the same issue, I was examining it from a slightly broader perspective. I think\r\nmy conclusion was that RelationFindReplTupleByIndex() is designed for the constraints of UNIQUE INDEX \r\nand Primary Key.\r\n>\r\n\r\nI see. IIUC that's why you have added tuples_equal() in the RelationFindReplTupleByIndex().\r\n\r\n>\r\nI think we should also be mindful about tuples_equal() function. When an index returns more than\r\none tuple, we rely on tuples_equal() function to make sure non-relevant tuples are skipped.\r\n\r\nFor btree indexes, it was safe to rely on that function as the columns that are indexed using btree\r\nalways have equality operator. I think we can safely assume the same for hash indexes.\r\n\r\nHowever, say we indexed \"point\" type using \"gist\" index. Then, if we let this logic to kick in,\r\nI think tuples_equal() would fail saying that there is no equality operator exists.\r\n>\r\n\r\nGood point. Actually I have tested for \"point\" datatype but it have not worked well.\r\nNow I understand the reason.\r\nIt seemed that when TYPECACHE_EQ_OPR_FINFO is reuqesed to lookup_type_cache(),\r\nit could return valid value only if the datatype has operator class for Btree or Hash.\r\nSo tuples_equal() might not be able to use if tuples have point box circle types.\r\n\r\n\r\nBTW, I have doubt that the restriction is not related with your commit.\r\nIn other words, if the table has attributes which the datatype is not for operator\r\nclass of Btree, we could not use REPLICA IDENTITY FULL. IIUC it is not documented.\r\nPlease see attched script to reproduce that. The final DELETE statement cannot be\r\nreplicated at the subscriber on my env.\r\n\r\n>\r\nFor the specific notes you raised about strategy numbers / operator classes, I need to\r\nstudy a bit :) Though, I'll be available to do that early next week.\r\n>\r\n\r\nThanks! I'm looking forward to see your opinions...\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n",
"msg_date": "Mon, 26 Jun 2023 13:44:32 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [Patch] Use *other* indexes on the subscriber when REPLICA\n IDENTITY is FULL"
},
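The "point" failure described here comes from the equality lookup inside tuples_equal(); schematically it behaves like this fragment (a paraphrase of the existing check, where att is the attribute being compared; no new behaviour is introduced by the patch):

```c
	TypeCacheEntry *typentry;

	/*
	 * tuples_equal() asks the type cache for the type's default equality
	 * operator, which only exists for types having a btree or hash opclass;
	 * types such as point, box or circle therefore error out here.
	 */
	typentry = lookup_type_cache(att->atttypid, TYPECACHE_EQ_OPR_FINFO);
	if (!OidIsValid(typentry->eq_opr_finfo.fn_oid))
		ereport(ERROR,
				(errcode(ERRCODE_UNDEFINED_FUNCTION),
				 errmsg("could not identify an equality operator for type %s",
						format_type_be(att->atttypid))));
```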
{
"msg_contents": "On Thu, Jun 22, 2023 at 11:37 AM Hayato Kuroda (Fujitsu)\n<[email protected]> wrote:\n>\n> Dear hackers,\n> (CC: Önder because he owned the related thread)\n>\n...\n> The current patch only includes tests for hash indexes. These are separated into\n> the file 032_subscribe_use_index.pl for convenience, but will be integrated in a\n> later version.\n>\n\nHi Kuroda-san.\n\nI am still studying the docs references given in your initial post.\n\nMeanwhile, FWIW, here are some minor review comments for the patch you gave.\n\n======\nsrc/backend/executor/execReplication.c\n\n1. get_equal_strategy_number\n\n+/*\n+ * Return the appropriate strategy number which corresponds to the equality\n+ * comparisons.\n+ *\n+ * TODO: support other indexes: GiST, GIN, SP-GiST, BRIN\n+ */\n+static int\n+get_equal_strategy_number(Oid opclass)\n+{\n+ Oid am_method = get_opclass_method(opclass);\n+ int ret;\n+\n+ switch (am_method)\n+ {\n+ case BTREE_AM_OID:\n+ ret = BTEqualStrategyNumber;\n+ break;\n+ case HASH_AM_OID:\n+ ret = HTEqualStrategyNumber;\n+ break;\n+ case GIST_AM_OID:\n+ case GIN_AM_OID:\n+ case SPGIST_AM_OID:\n+ case BRIN_AM_OID:\n+ /* TODO */\n+ default:\n+ /* XXX: Do we have to support extended indexes? */\n+ ret = InvalidStrategy;\n+ break;\n+ }\n+\n+ return ret;\n+}\n\n1a.\nIn the file syscache.c there are already some other functions like\nget_op_opfamily_strategy so I am wondering if this new function also\nreally belongs in that file.\n\n~\n\n1b.\nShould that say \"operator\" instead of \"comparisons\"?\n\n~\n\n1c.\n\"am\" stands for \"access method\" so \"am_method\" is like \"access method\nmethod\" – is it correct?\n\n~~~\n\n2. build_replindex_scan_key\n\n int table_attno = indkey->values[index_attoff];\n+ int strategy_number;\n\n\nOught to say this is a strategy for \"equality\", so a better varname\nmight be \"equality_strategy_number\" or \"eq_strategy\" or similar.\n\n======\nsrc/backend/replication/logical/relation.c\n\n3. IsIndexUsableForReplicaIdentityFull\n\nIt seems there is some overlap between this code which hardwired 2\nvalid AMs and the switch statement in your other\nget_equal_strategy_number function which returns a strategy number for\nthose 2 AMs.\n\nWould it be better to make another common function like\nget_equality_strategy_for_am(), and then you don’t have to hardwire\nanything? Instead, you can say:\n\nis_usable_type = get_equality_strategy_for_am(indexInfo->ii_Am) !=\nInvalidStrategy;\n\n======\nsrc/backend/utils/cache/lsyscache.c\n\n4. get_opclass_method\n\n+/*\n+ * get_opclass_method\n+ *\n+ * Returns the OID of the index access method operator family is for.\n+ */\n+Oid\n+get_opclass_method(Oid opclass)\n+{\n+ HeapTuple tp;\n+ Form_pg_opclass cla_tup;\n+ Oid result;\n+\n+ tp = SearchSysCache1(CLAOID, ObjectIdGetDatum(opclass));\n+ if (!HeapTupleIsValid(tp))\n+ elog(ERROR, \"cache lookup failed for opclass %u\", opclass);\n+ cla_tup = (Form_pg_opclass) GETSTRUCT(tp);\n+\n+ result = cla_tup->opcmethod;\n+ ReleaseSysCache(tp);\n+ return result;\n+}\n\nIs the comment correct? 
IIUC, this function is not doing anything for\n\"families\".\n\n======\nsrc/test/subscription/t/034_hash.pl\n\n5.\n+# insert some initial data within the range 0-9 for y\n+$node_publisher->safe_psql('postgres',\n+ \"INSERT INTO test_replica_id_full SELECT i, (i%10)::text FROM\ngenerate_series(0,10) i\"\n+);\n\nWhy does the comment only say \"for y\"?\n\n~~~\n\n6.\n+# wait until the index is used on the subscriber\n+# XXX: the test will be suspended here\n+$node_publisher->wait_for_catchup($appname);\n+$node_subscriber->poll_query_until('postgres',\n+ q{select (idx_scan = 4) from pg_stat_all_indexes where indexrelname\n= 'test_replica_id_full_idx';}\n+ )\n+ or die\n+ \"Timed out while waiting for check subscriber tap_sub_rep_full\nupdates 4 rows via index\";\n+\n\nAFAIK this is OK but it was slightly misleading because it says\n\"updates 4 rows\" whereas the previous UPDATE was only for 2 rows. So\nhere I think the 4 also counts the 2 DELETED rows. The comment can be\nexpanded slightly to clarify this.\n\n~~~\n\n7.\nFYI, when I ran the TAP test the result was this:\n\nt/034_hash.pl ...................... 1/? # Tests were run but no plan\nwas declared and done_testing() was not seen.\nt/034_hash.pl ...................... All 2 subtests passed\nt/100_bugs.pl ...................... ok\n\nTest Summary Report\n-------------------\nt/034_hash.pl (Wstat: 0 Tests: 2 Failed: 0)\n Parse errors: No plan found in TAP output\nFiles=36, Tests=457, 338 wallclock secs ( 0.29 usr 0.07 sys + 206.73\ncusr 47.94 csys = 255.03 CPU)\nResult: FAIL\n\n~\n\nJust adding the missing done_testing() seemed to fix this.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Tue, 27 Jun 2023 11:02:06 +1000",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] Use *other* indexes on the subscriber when REPLICA\n IDENTITY is FULL"
},
{
"msg_contents": "On Mon, Jun 26, 2023 at 11:44 PM Hayato Kuroda (Fujitsu)\n<[email protected]> wrote:\n...\n> BTW, I have doubt that the restriction is not related with your commit.\n> In other words, if the table has attributes which the datatype is not for operator\n> class of Btree, we could not use REPLICA IDENTITY FULL. IIUC it is not documented.\n> Please see attched script to reproduce that. The final DELETE statement cannot be\n> replicated at the subscriber on my env.\n>\n\nMissing attachment?\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Tue, 27 Jun 2023 11:06:25 +1000",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] Use *other* indexes on the subscriber when REPLICA\n IDENTITY is FULL"
},
{
"msg_contents": "> Please see attched script to reproduce that. The final DELETE statement cannot\r\n> be\r\n> replicated at the subscriber on my env.\r\n\r\nSorry, I forgot to attach...\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED",
"msg_date": "Tue, 27 Jun 2023 01:09:42 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [Patch] Use *other* indexes on the subscriber when REPLICA\n IDENTITY is FULL"
},
{
"msg_contents": "Hi Hayato,\n\n\n> BTW, I have doubt that the restriction is not related with your commit.\n> In other words, if the table has attributes which the datatype is not for\n> operator\n> class of Btree, we could not use REPLICA IDENTITY FULL. IIUC it is not\n> documented.\n> Please see attched script to reproduce that. The final DELETE statement\n> cannot be\n> replicated at the subscriber on my env.\n>\n>\nYes, I agree, it is (and was before my patch as well)\nun-documented limitation of REPLICA IDENTITY FULL.\nAnd, as far as I can see, my patch actually didn't have any impact on the\nlimitation. The unsupported\ncases are still unsupported, but now the same error is thrown in a slightly\ndifferent place.\n\nI think that is a minor limitation, but maybe should be listed [1]?\n\n>\n> For the specific notes you raised about strategy numbers / operator\n> classes, I need to\n> study a bit :) Though, I'll be available to do that early next week.\n> >\n>\n> Thanks! I'm looking forward to see your opinions...\n>\n\nFor this one, I did some research in the code, but I'm not very\ncomfortable with the answer. Still, I wanted to share my observations so\nthat it might be useful for the discussion.\n\nFirst, I checked if the function get_op_btree_interpretation() could be\nused here. But, I think that is again btree-only and I couldn't find\nanything generic that does something similar.\n\nThen, I tried to come up with a SQL query, actually based on the link [2]\nyou shared. I think we should always have an \"equality\" strategy (e.g.,\n\"same\", \"overlaps\", \"contains\" etc sounds wrong to me).\n\nAnd, it seems btree, hash and brin supports \"equal\". So, a query like the\nfollowing seems to provide the list of (index type, strategy_number,\ndata_type) that we might be allowed to use.\n\n SELECT\n am.amname AS index_type,\n amop.amoplefttype::regtype,amop.amoprighttype::regtype,\n op.oprname AS operator,\n amop.amopstrategy AS strategy_number\nFROM\n pg_amop amop\nJOIN\n pg_am am ON am.oid = amop.amopmethod\nJOIN\n pg_operator op ON op.oid = amop.amopopr\nWHERE\n (am.amname = 'btree' and amop.amopstrategy = 3) OR\n (am.amname = 'hash' and amop.amopstrategy = 1) OR\n (am.amname = 'brin' and amop.amopstrategy = 3)\nORDER BY\n index_type,\n strategy_number;\n\n\nWhat do you think?\n\n\n[1]\nhttps://www.postgresql.org/docs/current/logical-replication-restrictions.html\n\n[2] https://www.postgresql.org/docs/devel/xindex.html#XINDEX-STRATEGIES\n\nHi Hayato,\n\nBTW, I have doubt that the restriction is not related with your commit.\nIn other words, if the table has attributes which the datatype is not for operator\nclass of Btree, we could not use REPLICA IDENTITY FULL. IIUC it is not documented.\nPlease see attched script to reproduce that. The final DELETE statement cannot be\nreplicated at the subscriber on my env.\nYes, I agree, it is (and was before my patch as well) un-documented limitation of REPLICA IDENTITY FULL.And, as far as I can see, my patch actually didn't have any impact on the limitation. The unsupportedcases are still unsupported, but now the same error is thrown in a slightly different place.I think that is a minor limitation, but maybe should be listed [1]? \n>\nFor the specific notes you raised about strategy numbers / operator classes, I need to\nstudy a bit :) Though, I'll be available to do that early next week.\n>\n\nThanks! I'm looking forward to see your opinions...For this one, I did some research in the code, but I'm not very comfortable with the answer. 
Still, I wanted to share my observations so that it might be useful for the discussion.First, I checked if the function get_op_btree_interpretation() could be used here. But, I think that is again btree-only and I couldn't find anything generic that does something similar.Then, I tried to come up with a SQL query, actually based on the link [2] you shared. I think we should always have an \"equality\" strategy (e.g., \"same\", \"overlaps\", \"contains\" etc sounds wrong to me). And, it seems btree, hash and brin supports \"equal\". So, a query like the following seems to provide the list of (index type, strategy_number, data_type) that we might be allowed to use. SELECT am.amname AS index_type, amop.amoplefttype::regtype,amop.amoprighttype::regtype, op.oprname AS operator, amop.amopstrategy AS strategy_numberFROM pg_amop amopJOIN pg_am am ON am.oid = amop.amopmethodJOIN pg_operator op ON op.oid = amop.amopoprWHERE (am.amname = 'btree' and amop.amopstrategy = 3) OR (am.amname = 'hash' and amop.amopstrategy = 1) OR (am.amname = 'brin' and amop.amopstrategy = 3)ORDER BY index_type, strategy_number;What do you think? [1] https://www.postgresql.org/docs/current/logical-replication-restrictions.html [2] https://www.postgresql.org/docs/devel/xindex.html#XINDEX-STRATEGIES",
"msg_date": "Thu, 6 Jul 2023 18:11:10 +0300",
"msg_from": "=?UTF-8?B?w5ZuZGVyIEthbGFjxLE=?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] Use *other* indexes on the subscriber when REPLICA\n IDENTITY is FULL"
},
{
"msg_contents": "Dear Önder,\r\n\r\nThank you for your analysis!\r\n\r\n>\r\nYes, I agree, it is (and was before my patch as well) un-documented limitation of REPLICA IDENTITY FULL.\r\nAnd, as far as I can see, my patch actually didn't have any impact on the limitation. The unsupported\r\ncases are still unsupported, but now the same error is thrown in a slightly different place.\r\nI think that is a minor limitation, but maybe should be listed [1]?\r\n>\r\n\r\nYes, your modification did not touch the restriction. It has existed before the\r\ncommit. I (or my colleague) will post the patch to add the description, maybe\r\nafter [1] is committed.\r\n\r\n>\r\nFor this one, I did some research in the code, but I'm not very\r\ncomfortable with the answer. Still, I wanted to share my observations so\r\nthat it might be useful for the discussion.\r\n\r\nFirst, I checked if the function get_op_btree_interpretation() could be\r\nused here. But, I think that is again btree-only and I couldn't find\r\nanything generic that does something similar.\r\n>\r\n\r\nThanks for checking. The function seems to return the list of operator family and\r\nits strategy number when the oid of the operator is given. But what we want to do\r\nhere is get the operator oid. I think that the input and output of the function\r\nseems opposite. And as you mentioned, the index must be btree.\r\n\r\n>\r\nThen, I tried to come up with a SQL query, actually based on the link [2]\r\nyou shared. I think we should always have an \"equality\" strategy (e.g.,\r\n\"same\", \"overlaps\", \"contains\" etc sounds wrong to me).\r\n>\r\n\r\nI could agree that \"overlaps\", \"contains\", are not \"equal\", but not sure about\r\nthe \"same\". Around here we must discuss, but not now.\r\n\r\n>\r\nAnd, it seems btree, hash and brin supports \"equal\". So, a query like the\r\nfollowing seems to provide the list of (index type, strategy_number,\r\ndata_type) that we might be allowed to use.\r\n>\r\n\r\nNote that strategy numbers listed in the doc are just an example - Other than BTree\r\nand Hash do not have a fixed set of strategies at all.\r\nE.g., operator classes for Btree, Hash and BRIN (Minmax) has \"equal\" and the\r\nstrategy number is documented. But other user-defined operator classes for BRIN\r\nmay have another number, or it does not have equality comparison.\r\n\r\n>\r\n SELECT\r\n am.amname AS index_type,\r\n amop.amoplefttype::regtype,amop.amoprighttype::regtype,\r\n op.oprname AS operator,\r\n amop.amopstrategy AS strategy_number\r\nFROM\r\n pg_amop amop\r\nJOIN\r\n pg_am am ON am.oid = amop.amopmethod\r\nJOIN\r\n pg_operator op ON op.oid = amop.amopopr\r\nWHERE\r\n (am.amname = 'btree' and amop.amopstrategy = 3) OR\r\n (am.amname = 'hash' and amop.amopstrategy = 1) OR\r\n (am.amname = 'brin' and amop.amopstrategy = 3)\r\nORDER BY\r\n index_type,\r\n strategy_number;\r\n\r\nWhat do you think?\r\n>\r\n\r\nGood SQL. You have listed the equality operator and related strategy number for given\r\noperator classes.\r\n\r\nWhile analyzing more, however, I found that it might be difficult to support GIN, BRIN,\r\nand bloom indexes in the first version. 
These indexes does not implement the\r\n\"amgettuple\" function, which is called in RelationFindReplTupleByIndex()->index_getnext_slot()->index_getnext_tid().\r\nFor example, in the brinhandler():\r\n\r\n```\r\n/*\r\n * BRIN handler function: return IndexAmRoutine with access method parameters\r\n * and callbacks.\r\n */\r\nDatum\r\nbrinhandler(PG_FUNCTION_ARGS)\r\n{\r\n...\r\n amroutine->amgettuple = NULL;\r\n amroutine->amgetbitmap = bringetbitmap;\r\n...\r\n```\r\n\r\nAccording to the document [2], all of index access methods must implement either\r\nof amgettuple or amgetbitmap API. \"amgettuple\" is used when the table is scaned\r\nand tuples are fetched one by one, RelationFindReplTupleByIndex() do that.\r\n\"amgetbitmap\" is used when all tuples are fetched at once and RelationFindReplTupleByIndex()\r\ndoes not support such indexes. To do that the implemented API must be checked and\r\nthe function must be changed depends on that. It may be difficult to add them in\r\nthe first step so that I want not to support them. Fortunately, Hash, GiST, and\r\nSP-GiST has implemented then so we can focus on them.\r\nIn the next patch I will add the mechanism for rejecting such indexes.\r\n\r\nAnyway, thank you for keeping the interest to the patch, nevertheless it is difficult theme.\r\n\r\n[1]: https://www.postgresql.org/message-id/CAHut%2BPsFdTZJ7DG1jyu7BpA_1d4hwEd-Q%2BmQAPWcj1ZLD_X5Dw%40mail.gmail.com\r\n[2]: https://www.postgresql.org/docs/current/index-functions.html\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Fri, 7 Jul 2023 08:01:21 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [Patch] Use *other* indexes on the subscriber when REPLICA\n IDENTITY is FULL"
},
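The "mechanism for rejecting such indexes" promised at the end of this mail presumably reduces to checking the access method's amgettuple callback; a sketch of that check (placement and exact form are assumptions, not the posted patch):

```c
	/*
	 * RelationFindReplTupleByIndex() fetches matching tuples one at a time
	 * via index_getnext_slot(), which requires the index AM to provide
	 * amgettuple.  AMs such as BRIN and GIN only provide amgetbitmap, so
	 * they cannot be used for REPLICA IDENTITY FULL scans.
	 */
	IndexAmRoutine *amroutine = GetIndexAmRoutineByAmId(indexInfo->ii_Am, false);

	if (amroutine->amgettuple == NULL)
		return false;			/* reject this index as unusable */
```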
{
"msg_contents": "Dear Peter,\r\n\r\nThanks for reviewing. PSA new version.\r\nI planned to post new version after supporting more indexes, but it may take more time.\r\nSo I want to address comments from you once.\r\n\r\n> ======\r\n> src/backend/executor/execReplication.c\r\n> \r\n> 1. get_equal_strategy_number\r\n> \r\n> +/*\r\n> + * Return the appropriate strategy number which corresponds to the equality\r\n> + * comparisons.\r\n> + *\r\n> + * TODO: support other indexes: GiST, GIN, SP-GiST, BRIN\r\n> + */\r\n> +static int\r\n> +get_equal_strategy_number(Oid opclass)\r\n> +{\r\n> + Oid am_method = get_opclass_method(opclass);\r\n> + int ret;\r\n> +\r\n> + switch (am_method)\r\n> + {\r\n> + case BTREE_AM_OID:\r\n> + ret = BTEqualStrategyNumber;\r\n> + break;\r\n> + case HASH_AM_OID:\r\n> + ret = HTEqualStrategyNumber;\r\n> + break;\r\n> + case GIST_AM_OID:\r\n> + case GIN_AM_OID:\r\n> + case SPGIST_AM_OID:\r\n> + case BRIN_AM_OID:\r\n> + /* TODO */\r\n> + default:\r\n> + /* XXX: Do we have to support extended indexes? */\r\n> + ret = InvalidStrategy;\r\n> + break;\r\n> + }\r\n> +\r\n> + return ret;\r\n> +}\r\n> \r\n> 1a.\r\n> In the file syscache.c there are already some other functions like\r\n> get_op_opfamily_strategy so I am wondering if this new function also\r\n> really belongs in that file.\r\n\r\nAccording to atop comment in the syscache.c, it contains routines which access\r\nsystem catalog cache. get_equal_strategy_number() does not check it, so I don't\r\nthink it should be at the file.\r\n\r\n> 1b.\r\n> Should that say \"operator\" instead of \"comparisons\"?\r\n\r\nChanged.\r\n\r\n> 1c.\r\n> \"am\" stands for \"access method\" so \"am_method\" is like \"access method\r\n> method\" – is it correct?\r\n\r\nChanged to \"am\".\r\n\r\n> 2. build_replindex_scan_key\r\n> \r\n> int table_attno = indkey->values[index_attoff];\r\n> + int strategy_number;\r\n> \r\n> \r\n> Ought to say this is a strategy for \"equality\", so a better varname\r\n> might be \"equality_strategy_number\" or \"eq_strategy\" or similar.\r\n\r\nChanged to \"eq_strategy\".\r\n\r\n> src/backend/replication/logical/relation.c\r\n> \r\n> 3. IsIndexUsableForReplicaIdentityFull\r\n> \r\n> It seems there is some overlap between this code which hardwired 2\r\n> valid AMs and the switch statement in your other\r\n> get_equal_strategy_number function which returns a strategy number for\r\n> those 2 AMs.\r\n> \r\n> Would it be better to make another common function like\r\n> get_equality_strategy_for_am(), and then you don’t have to hardwire\r\n> anything? Instead, you can say:\r\n> \r\n> is_usable_type = get_equality_strategy_for_am(indexInfo->ii_Am) !=\r\n> InvalidStrategy;\r\n\r\nAdded. How do you think?\r\n\r\n> src/backend/utils/cache/lsyscache.c\r\n> \r\n> 4. get_opclass_method\r\n> \r\n> +/*\r\n> + * get_opclass_method\r\n> + *\r\n> + * Returns the OID of the index access method operator family is for.\r\n> + */\r\n> +Oid\r\n> +get_opclass_method(Oid opclass)\r\n> +{\r\n> + HeapTuple tp;\r\n> + Form_pg_opclass cla_tup;\r\n> + Oid result;\r\n> +\r\n> + tp = SearchSysCache1(CLAOID, ObjectIdGetDatum(opclass));\r\n> + if (!HeapTupleIsValid(tp))\r\n> + elog(ERROR, \"cache lookup failed for opclass %u\", opclass);\r\n> + cla_tup = (Form_pg_opclass) GETSTRUCT(tp);\r\n> +\r\n> + result = cla_tup->opcmethod;\r\n> + ReleaseSysCache(tp);\r\n> + return result;\r\n> +}\r\n> \r\n> Is the comment correct? 
IIUC, this function is not doing anything for\r\n> \"families\".\r\n\r\nModified to \"class\".\r\n\r\n> src/test/subscription/t/034_hash.pl\r\n> \r\n> 5.\r\n> +# insert some initial data within the range 0-9 for y\r\n> +$node_publisher->safe_psql('postgres',\r\n> + \"INSERT INTO test_replica_id_full SELECT i, (i%10)::text FROM\r\n> generate_series(0,10) i\"\r\n> +);\r\n> \r\n> Why does the comment only say \"for y\"?\r\n\r\nAfter considering more, I thought we do not have to mention data.\r\nSo removed the part \" within the range 0-9 for y\".\r\n\r\n> 6.\r\n> +# wait until the index is used on the subscriber\r\n> +# XXX: the test will be suspended here\r\n> +$node_publisher->wait_for_catchup($appname);\r\n> +$node_subscriber->poll_query_until('postgres',\r\n> + q{select (idx_scan = 4) from pg_stat_all_indexes where indexrelname\r\n> = 'test_replica_id_full_idx';}\r\n> + )\r\n> + or die\r\n> + \"Timed out while waiting for check subscriber tap_sub_rep_full\r\n> updates 4 rows via index\";\r\n> +\r\n> \r\n> AFAIK this is OK but it was slightly misleading because it says\r\n> \"updates 4 rows\" whereas the previous UPDATE was only for 2 rows. So\r\n> here I think the 4 also counts the 2 DELETED rows. The comment can be\r\n> expanded slightly to clarify this.\r\n\r\nClarified two rows were deleted.\r\n\r\n> 7.\r\n> FYI, when I ran the TAP test the result was this:\r\n> \r\n> t/034_hash.pl ...................... 1/? # Tests were run but no plan\r\n> was declared and done_testing() was not seen.\r\n> t/034_hash.pl ...................... All 2 subtests passed\r\n> t/100_bugs.pl ...................... ok\r\n> \r\n> Test Summary Report\r\n> -------------------\r\n> t/034_hash.pl (Wstat: 0 Tests: 2 Failed: 0)\r\n> Parse errors: No plan found in TAP output\r\n> Files=36, Tests=457, 338 wallclock secs ( 0.29 usr 0.07 sys + 206.73\r\n> cusr 47.94 csys = 255.03 CPU)\r\n> Result: FAIL\r\n> \r\n> ~\r\n> \r\n> Just adding the missing done_testing() seemed to fix this.\r\n\r\nRight. I used meson build system and it said OK. Added.\r\n\r\nFurthermore, I added a check to reject indexes which do not implement \"amgettuple\" API.\r\nMore detail, please see [1].\r\n\r\n[1]: https://www.postgresql.org/message-id/TYAPR01MB5866E02638D40C4D198334B4F52DA%40TYAPR01MB5866.jpnprd01.prod.outlook.com\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED",
"msg_date": "Fri, 7 Jul 2023 12:54:53 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [Patch] Use *other* indexes on the subscriber when REPLICA\n IDENTITY is FULL"
},
{
"msg_contents": "On Fri, Jul 7, 2023 at 1:31 PM Hayato Kuroda (Fujitsu)\n<[email protected]> wrote:\n>\n> Thank you for your analysis!\n>\n> >\n> Yes, I agree, it is (and was before my patch as well) un-documented limitation of REPLICA IDENTITY FULL.\n> And, as far as I can see, my patch actually didn't have any impact on the limitation. The unsupported\n> cases are still unsupported, but now the same error is thrown in a slightly different place.\n> I think that is a minor limitation, but maybe should be listed [1]?\n> >\n\n+1.\n\n>\n> Yes, your modification did not touch the restriction. It has existed before the\n> commit. I (or my colleague) will post the patch to add the description, maybe\n> after [1] is committed.\n>\n\nI don't think there is any dependency between the two. So, feel free\nto post the patch separately.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Sat, 8 Jul 2023 11:06:49 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] Use *other* indexes on the subscriber when REPLICA\n IDENTITY is FULL"
},
{
"msg_contents": "Dear Amit,\r\n\r\n> > Yes, I agree, it is (and was before my patch as well) un-documented\r\n> limitation of REPLICA IDENTITY FULL.\r\n> > And, as far as I can see, my patch actually didn't have any impact on the\r\n> limitation. The unsupported\r\n> > cases are still unsupported, but now the same error is thrown in a slightly\r\n> different place.\r\n> > I think that is a minor limitation, but maybe should be listed [1]?\r\n> > >\r\n> \r\n> +1.\r\n> \r\n> >\r\n> > Yes, your modification did not touch the restriction. It has existed before the\r\n> > commit. I (or my colleague) will post the patch to add the description, maybe\r\n> > after [1] is committed.\r\n> >\r\n> \r\n> I don't think there is any dependency between the two. So, feel free\r\n> to post the patch separately.\r\n\r\nOkay, thank you for your confirmation. I have started the fork thread [1].\r\n\r\n[1]: https://www.postgresql.org/message-id/TYAPR01MB58662174ED62648E0D611194F530A%40TYAPR01MB5866.jpnprd01.prod.outlook.com\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Mon, 10 Jul 2023 03:35:49 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [Patch] Use *other* indexes on the subscriber when REPLICA\n IDENTITY is FULL"
},
{
"msg_contents": "On Mon, Jun 26, 2023 at 7:14 PM Hayato Kuroda (Fujitsu)\n<[email protected]> wrote:\n>\n> Thank you for giving comments! The author's comment is quite helpful for us.\n>\n> >\n> When I last dealt with the same issue, I was examining it from a slightly broader perspective. I think\n> my conclusion was that RelationFindReplTupleByIndex() is designed for the constraints of UNIQUE INDEX\n> and Primary Key.\n> >\n>\n> I see. IIUC that's why you have added tuples_equal() in the RelationFindReplTupleByIndex().\n>\n> >\n> I think we should also be mindful about tuples_equal() function. When an index returns more than\n> one tuple, we rely on tuples_equal() function to make sure non-relevant tuples are skipped.\n>\n> For btree indexes, it was safe to rely on that function as the columns that are indexed using btree\n> always have equality operator. I think we can safely assume the same for hash indexes.\n>\n> However, say we indexed \"point\" type using \"gist\" index. Then, if we let this logic to kick in,\n> I think tuples_equal() would fail saying that there is no equality operator exists.\n> >\n>\n> Good point. Actually I have tested for \"point\" datatype but it have not worked well.\n> Now I understand the reason.\n> It seemed that when TYPECACHE_EQ_OPR_FINFO is reuqesed to lookup_type_cache(),\n> it could return valid value only if the datatype has operator class for Btree or Hash.\n> So tuples_equal() might not be able to use if tuples have point box circle types.\n>\n\nI also think so. If this is true, how can we think of supporting\nindexes other than hash like GiST, and SP-GiST as mentioned by you in\nyour latest email? As per my understanding if we don't have PK or\nreplica identity then after the index scan, we do tuples_equal which\nwill fail for GIST or SP-GIST. Am, I missing something?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 10 Jul 2023 15:24:04 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] Use *other* indexes on the subscriber when REPLICA\n IDENTITY is FULL"
},
{
"msg_contents": "Hi,\n\nI also think so. If this is true, how can we think of supporting\n> indexes other than hash like GiST, and SP-GiST as mentioned by you in\n> your latest email? As per my understanding if we don't have PK or\n> replica identity then after the index scan, we do tuples_equal which\n> will fail for GIST or SP-GIST. Am, I missing something?\n>\n\nI also don't think we can support anything other than btree, hash and brin\nas those lack equality operators.\n\nAnd, for BRIN, Hayato brought up the amgettuple issue, which is fair. So,\noverall, as far as I can see, we can\neasily add hash indexes but all others are either very hard to add or not\npossible.\n\nI think if someone one day works on supporting primary keys using other\nindex types, we can use it here :p\n\nThanks,\nOnder\n\nHi,\nI also think so. If this is true, how can we think of supporting\nindexes other than hash like GiST, and SP-GiST as mentioned by you in\nyour latest email? As per my understanding if we don't have PK or\nreplica identity then after the index scan, we do tuples_equal which\nwill fail for GIST or SP-GIST. Am, I missing something?I also don't think we can support anything other than btree, hash and brin as those lack equality operators.And, for BRIN, Hayato brought up the amgettuple issue, which is fair. So, overall, as far as I can see, we can easily add hash indexes but all others are either very hard to add or not possible.I think if someone one day works on supporting primary keys using other index types, we can use it here :pThanks,Onder",
"msg_date": "Mon, 10 Jul 2023 17:13:42 +0300",
"msg_from": "=?UTF-8?B?w5ZuZGVyIEthbGFjxLE=?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] Use *other* indexes on the subscriber when REPLICA\n IDENTITY is FULL"
},
{
"msg_contents": "On Mon, Jul 10, 2023 at 7:43 PM Önder Kalacı <[email protected]> wrote:\n>\n>> I also think so. If this is true, how can we think of supporting\n>> indexes other than hash like GiST, and SP-GiST as mentioned by you in\n>> your latest email? As per my understanding if we don't have PK or\n>> replica identity then after the index scan, we do tuples_equal which\n>> will fail for GIST or SP-GIST. Am, I missing something?\n>\n>\n> I also don't think we can support anything other than btree, hash and brin as those lack equality operators.\n>\n> And, for BRIN, Hayato brought up the amgettuple issue, which is fair. So, overall, as far as I can see, we can\n> easily add hash indexes but all others are either very hard to add or not possible.\n>\n\nAgreed. So, let's update the patch with comments indicating the\nchallenges for supporting the other index types than btree and hash.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 11 Jul 2023 08:03:14 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] Use *other* indexes on the subscriber when REPLICA\n IDENTITY is FULL"
},
{
"msg_contents": "Dear Amit, Önder\r\n\r\nThanks for giving discussions. IIUC all have agreed that the patch focus on extending\r\nto Hash index. PSA the patch for that.\r\nThe basic workflow was not so changed, some comments were updated.\r\n\r\nRegarding the test code, I think it should be combined into 032_subscribe_use_index.pl\r\nbecause they have tested same feature. I have just copied tests to latter\r\npart of 032.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED",
"msg_date": "Tue, 11 Jul 2023 05:17:04 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [Patch] Use *other* indexes on the subscriber when REPLICA\n IDENTITY is FULL"
},
{
"msg_contents": "Hi, here are some review comments for the v3 patch\n\n======\nCommit message\n\n1.\n89e46d allowed using indexes other than PRIMARY KEY or REPLICA IDENTITY on the\nsubscriber, but only the BTree index could be used. This commit extends the\nlimitation, now the Hash index can be also used.\n\n~\n\nBefore giving details about the problems of the other index types it\nmight be good to summarize why these 2 types are OK.\n\nSUGGESTION\nThese 2 types of indexes are the only ones supported because only these\n- use fix strategy numbers\n- implement the \"equality\" strategy\n- implement the function amgettuple()\n\n~~~\n\n2.\nI'm not sure why the next paragraphs are numbered 1) and 2). Is that\nnecessary? It seems maybe a cut/paste hangover from the similar code\ncomment.\n\n~~~\n\n3.\n1) Other indexes do not have a fixed set of strategy numbers at all. In\nbuild_replindex_scan_key(), the operator which corresponds to\n\"equality comparison\"\nis searched by using the strategy number, but it is difficult for such indexes.\nFor example, GiST index operator classes for two-dimensional geometric\nobjects have\na comparison \"same\", but its strategy number is different from the\ngist_int4_ops,\nwhich is a GiST index operator class that implements the B-tree equivalent.\n\n~\n\nDon't need to say \"at all\"\n\n~~~\n\n4.\n2) Some other indexes do not implement \"amgettuple\".\nRelationFindReplTupleByIndex()\nassumes that tuples could be fetched one by one via\nindex_getnext_slot(), but such\nindexes are not supported.\n\n~\n\n4a.\n\"Some other indexes...\" --> Maybe give an example here (e.g. XXX, YYY)\nof indexes that do not implement the function.\n\n~\n\n4b.\nBEFORE\nRelationFindReplTupleByIndex() assumes that tuples could be fetched\none by one via index_getnext_slot(), but such indexes are not\nsupported.\n\nAFTER\nRelationFindReplTupleByIndex() assumes that tuples will be fetched one\nby one via index_getnext_slot(), but this currently requires the\n\"amgetuple\" function.\n\n\n======\nsrc/backend/executor/execReplication.c\n\n5.\n+ * 2) Some other indexes do not implement \"amgettuple\".\n+ * RelationFindReplTupleByIndex() assumes that tuples could be fetched one by\n+ * one via index_getnext_slot(), but such indexes are not supported. To make it\n+ * use index_getbitmap() must be used, but not done yet because of the above\n+ * reason.\n+ */\n+int\n+get_equal_strategy_number_for_am(Oid am)\n\n~\n\nChange this text to better match exactly in the commit message (see\nprevious review comments above). Also I am not sure it is necessary to\nsay how it *might* be implemented in the future if somebody wanted to\nenhance it to work also for \"amgetbitmap\" function. E.g. do we need\nthat sentence \"To make it...\"\n\n~~~\n\n6. get_equal_strategy_number_for_am\n\n+ case GIST_AM_OID:\n+ case SPGIST_AM_OID:\n+ case GIN_AM_OID:\n+ case BRIN_AM_OID:\n+ default:\n\nI am not sure it is necessary to spell out all these not-supported\ncases in the switch. If seems sufficient just to say 'default:'\ndoesn't it?\n\n~~~\n\n7. get_equal_strategy_number\n\nTwo blank lines are following this function\n\n~~~\n\n8. build_replindex_scan_key\n\n- * This is not generic routine, it expects the idxrel to be a btree,\nnon-partial\n- * and have at least one column reference (i.e. cannot consist of only\n- * expressions).\n+ * This is not generic routine, it expects the idxrel to be a btree or a hash,\n+ * non-partial and have at least one column reference (i.e. 
cannot consist of\n+ * only expressions).\n\nTake care. AFAIK this change will clash with changes Sawawa-san is\nmaking to the same code comment in another thread here [1].\n\n======\nsrc/backend/replication/logical/relation.c\n\n9. FindUsableIndexForReplicaIdentityFull\n\n * Returns the oid of an index that can be used by the apply worker to scan\n- * the relation. The index must be btree, non-partial, and have at least\n- * one column reference (i.e. cannot consist of only expressions). These\n+ * the relation. The index must be btree or hash, non-partial, and have at\n+ * least one column reference (i.e. cannot consist of only expressions). These\n * limitations help to keep the index scan similar to PK/RI index scans.\n\n~\n\nTake care. AFAIK this change will clash with changes Sawawa-san is\nmaking to the same code comment in another thread here [1].\n\n~\n\n10.\n+ /* Check whether the index is supported or not */\n+ is_suitable_type = ((get_equal_strategy_number_for_am(indexInfo->ii_Am)\n+ != InvalidStrategy));\n+\n+ is_partial = (indexInfo->ii_Predicate != NIL);\n+ is_only_on_expression = IsIndexOnlyOnExpression(indexInfo);\n+\n+ return is_suitable_type && !is_partial && !is_only_on_expression;\n\nI am not sure if the function IsIndexOnlyExpression() even needed\nanymore. Isn't it sufficient just to check up-front is the leftmost\nINDEX field is a column and that covers this condition also? Actually,\nthis same question is also open in the Sawada-san thread [1].\n\n======\n.../subscription/t/032_subscribe_use_index.pl\n\n11.\nAFAIK there is no test to verify that the leftmost index field must be\na column (e.g. cannot be an expression). Yes, there are some tests for\n*only* expressions like (expr, expr, expr), but IMO there should be\nanother test for an index like (expr, expr, column) which should also\nfail because the column is not the leftmost field.\n\n------\n[1] Sawada-san doc for RI restriction -\nhttps://www.postgresql.org/message-id/CAD21AoBzp9H2WV4kDagat2WUOsiYJLo10zJ1E5dZYnRTx1pc_g%40mail.gmail.com\n\nKind Regards,\nPeter Smith.\nFujitsu Austalia\n\n\n",
"msg_date": "Wed, 12 Jul 2023 10:04:09 +1000",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] Use *other* indexes on the subscriber when REPLICA\n IDENTITY is FULL"
},
{
"msg_contents": "Dear Peter,\r\n\r\nThank you for reviewing! PSA new version.\r\n\r\n> 1.\r\n> 89e46d allowed using indexes other than PRIMARY KEY or REPLICA IDENTITY\r\n> on the\r\n> subscriber, but only the BTree index could be used. This commit extends the\r\n> limitation, now the Hash index can be also used.\r\n> \r\n> ~\r\n> \r\n> Before giving details about the problems of the other index types it\r\n> might be good to summarize why these 2 types are OK.\r\n> \r\n> SUGGESTION\r\n> These 2 types of indexes are the only ones supported because only these\r\n> - use fix strategy numbers\r\n> - implement the \"equality\" strategy\r\n> - implement the function amgettuple()\r\n\r\nAdded.\r\n\r\n> \r\n> 2.\r\n> I'm not sure why the next paragraphs are numbered 1) and 2). Is that\r\n> necessary? It seems maybe a cut/paste hangover from the similar code\r\n> comment.\r\n\r\nYeah, this was just copied from code comments. Numbers were removed.\r\n\r\n> 3.\r\n> 1) Other indexes do not have a fixed set of strategy numbers at all. In\r\n> build_replindex_scan_key(), the operator which corresponds to\r\n> \"equality comparison\"\r\n> is searched by using the strategy number, but it is difficult for such indexes.\r\n> For example, GiST index operator classes for two-dimensional geometric\r\n> objects have\r\n> a comparison \"same\", but its strategy number is different from the\r\n> gist_int4_ops,\r\n> which is a GiST index operator class that implements the B-tree equivalent.\r\n> \r\n> ~\r\n> \r\n> Don't need to say \"at all\"\r\n\r\nRemoved.\r\n\r\n> 4.\r\n> 2) Some other indexes do not implement \"amgettuple\".\r\n> RelationFindReplTupleByIndex()\r\n> assumes that tuples could be fetched one by one via\r\n> index_getnext_slot(), but such\r\n> indexes are not supported.\r\n> \r\n> ~\r\n> \r\n> 4a.\r\n> \"Some other indexes...\" --> Maybe give an example here (e.g. XXX, YYY)\r\n> of indexes that do not implement the function.\r\n\r\nI clarified like \"BRIN and GIN indexes do not implement...\", which are the built-in\r\nindexes. Actually the bloom index cannot be supported due to the same reason, but\r\nI did not mention because it is not in core.\r\n\r\n> 4b.\r\n> BEFORE\r\n> RelationFindReplTupleByIndex() assumes that tuples could be fetched\r\n> one by one via index_getnext_slot(), but such indexes are not\r\n> supported.\r\n> \r\n> AFTER\r\n> RelationFindReplTupleByIndex() assumes that tuples will be fetched one\r\n> by one via index_getnext_slot(), but this currently requires the\r\n> \"amgetuple\" function.\r\n\r\n\r\nChanged.\r\n\r\n> src/backend/executor/execReplication.c\r\n> \r\n> 5.\r\n> + * 2) Some other indexes do not implement \"amgettuple\".\r\n> + * RelationFindReplTupleByIndex() assumes that tuples could be fetched one by\r\n> + * one via index_getnext_slot(), but such indexes are not supported. To make it\r\n> + * use index_getbitmap() must be used, but not done yet because of the above\r\n> + * reason.\r\n> + */\r\n> +int\r\n> +get_equal_strategy_number_for_am(Oid am)\r\n> \r\n> ~\r\n> \r\n> Change this text to better match exactly in the commit message (see\r\n> previous review comments above).\r\n\r\nCopied from commit message.\r\n\r\n> Also I am not sure it is necessary to\r\n> say how it *might* be implemented in the future if somebody wanted to\r\n> enhance it to work also for \"amgetbitmap\" function. E.g. do we need\r\n> that sentence \"To make it...\"\r\n\r\nAdded, how do you think?\r\n\r\n> 6. 
get_equal_strategy_number_for_am\r\n> \r\n> + case GIST_AM_OID:\r\n> + case SPGIST_AM_OID:\r\n> + case GIN_AM_OID:\r\n> + case BRIN_AM_OID:\r\n> + default:\r\n> \r\n> I am not sure it is necessary to spell out all these not-supported\r\n> cases in the switch. If seems sufficient just to say 'default:'\r\n> doesn't it?\r\n\r\nYeah, it's sufficient. This is a garbage for previous PoC.\r\n\r\n> 7. get_equal_strategy_number\r\n> \r\n> Two blank lines are following this function\r\n\r\nRemoved.\r\n\r\n> 8. build_replindex_scan_key\r\n> \r\n> - * This is not generic routine, it expects the idxrel to be a btree,\r\n> non-partial\r\n> - * and have at least one column reference (i.e. cannot consist of only\r\n> - * expressions).\r\n> + * This is not generic routine, it expects the idxrel to be a btree or a hash,\r\n> + * non-partial and have at least one column reference (i.e. cannot consist of\r\n> + * only expressions).\r\n> \r\n> Take care. AFAIK this change will clash with changes Sawawa-san is\r\n> making to the same code comment in another thread here [1].\r\n\r\nThanks for reminder. I thought that this change seems not needed anymore if the patch\r\nis pushed. But anyway I kept it because this may be pushed first.\r\n\r\n> src/backend/replication/logical/relation.c\r\n> \r\n> 9. FindUsableIndexForReplicaIdentityFull\r\n> \r\n> * Returns the oid of an index that can be used by the apply worker to scan\r\n> - * the relation. The index must be btree, non-partial, and have at least\r\n> - * one column reference (i.e. cannot consist of only expressions). These\r\n> + * the relation. The index must be btree or hash, non-partial, and have at\r\n> + * least one column reference (i.e. cannot consist of only expressions). These\r\n> * limitations help to keep the index scan similar to PK/RI index scans.\r\n> \r\n> ~\r\n> \r\n> Take care. AFAIK this change will clash with changes Sawawa-san is\r\n> making to the same code comment in another thread here [1].\r\n\r\nThanks for reminder. I thought that this change must be kept even if it will be\r\npushed. We must check the thread...\r\n\r\n> 10.\r\n> + /* Check whether the index is supported or not */\r\n> + is_suitable_type = ((get_equal_strategy_number_for_am(indexInfo->ii_Am)\r\n> + != InvalidStrategy));\r\n> +\r\n> + is_partial = (indexInfo->ii_Predicate != NIL);\r\n> + is_only_on_expression = IsIndexOnlyOnExpression(indexInfo);\r\n> +\r\n> + return is_suitable_type && !is_partial && !is_only_on_expression;\r\n> \r\n> I am not sure if the function IsIndexOnlyExpression() even needed\r\n> anymore. Isn't it sufficient just to check up-front is the leftmost\r\n> INDEX field is a column and that covers this condition also? Actually,\r\n> this same question is also open in the Sawada-san thread [1].\r\n> \r\n> ======\r\n> .../subscription/t/032_subscribe_use_index.pl\r\n> \r\n> 11.\r\n> AFAIK there is no test to verify that the leftmost index field must be\r\n> a column (e.g. cannot be an expression). Yes, there are some tests for\r\n> *only* expressions like (expr, expr, expr), but IMO there should be\r\n> another test for an index like (expr, expr, column) which should also\r\n> fail because the column is not the leftmost field.\r\n\r\nI'm not sure they should be fixed in the patch. You have raised these points\r\nat the Sawada-san's thread[1] and not sure he has done.\r\nFurthermore, these points are not related with our initial motivation.\r\nFork, or at least it should be done by another patch. 
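\r\n\r\nAs a reference for reviewers, the helper under discussion boils down to a small\r\n
switch over the index access method OID. A rough sketch of that shape (for\r\n
illustration only, not necessarily the exact patch text):\r\n\r\n
StrategyNumber\r\n
get_equal_strategy_number_for_am(Oid am)\r\n
{\r\n
    switch (am)\r\n
    {\r\n
        case BTREE_AM_OID:\r\n
            return BTEqualStrategyNumber;\r\n
        case HASH_AM_OID:\r\n
            return HTEqualStrategyNumber;\r\n
        default:\r\n
            /* other access methods do not have a fixed equality strategy number */\r\n
            return InvalidStrategy;\r\n
    }\r\n
}\r\n\r\n
Callers such as IsIndexUsableForReplicaIdentityFull() can then treat\r\n
InvalidStrategy as \"not supported\" without having to know about particular\r\n
access methods.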
\r\n\r\n[1]: https://www.postgresql.org/message-id/CAHut%2BPv3AgAnP%2BJTsPseuU-CT%2BOrSGiqzxqw4oQmWeKLkAea4Q%40mail.gmail.com\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED",
"msg_date": "Wed, 12 Jul 2023 03:07:04 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [Patch] Use *other* indexes on the subscriber when REPLICA\n IDENTITY is FULL"
},
{
"msg_contents": "On Wed, Jul 12, 2023 at 8:37 AM Hayato Kuroda (Fujitsu)\n<[email protected]> wrote:\n>\n> > 10.\n> > + /* Check whether the index is supported or not */\n> > + is_suitable_type = ((get_equal_strategy_number_for_am(indexInfo->ii_Am)\n> > + != InvalidStrategy));\n> > +\n> > + is_partial = (indexInfo->ii_Predicate != NIL);\n> > + is_only_on_expression = IsIndexOnlyOnExpression(indexInfo);\n> > +\n> > + return is_suitable_type && !is_partial && !is_only_on_expression;\n> >\n> > I am not sure if the function IsIndexOnlyExpression() even needed\n> > anymore. Isn't it sufficient just to check up-front is the leftmost\n> > INDEX field is a column and that covers this condition also? Actually,\n> > this same question is also open in the Sawada-san thread [1].\n> >\n> > ======\n> > .../subscription/t/032_subscribe_use_index.pl\n> >\n> > 11.\n> > AFAIK there is no test to verify that the leftmost index field must be\n> > a column (e.g. cannot be an expression). Yes, there are some tests for\n> > *only* expressions like (expr, expr, expr), but IMO there should be\n> > another test for an index like (expr, expr, column) which should also\n> > fail because the column is not the leftmost field.\n>\n> I'm not sure they should be fixed in the patch. You have raised these points\n> at the Sawada-san's thread[1] and not sure he has done.\n> Furthermore, these points are not related with our initial motivation.\n> Fork, or at least it should be done by another patch.\n>\n\nI think it is reasonable to discuss the existing code-related\nimprovements in a separate thread rather than trying to change this\npatch. I have some other comments about your patch:\n\n1.\n+ * 1) Other indexes do not have a fixed set of strategy numbers.\n+ * In build_replindex_scan_key(), the operator which corresponds to \"equality\n+ * comparison\" is searched by using the strategy number, but it is difficult\n+ * for such indexes. For example, GiST index operator classes for\n+ * two-dimensional geometric objects have a comparison \"same\", but its strategy\n+ * number is different from the gist_int4_ops, which is a GiST index operator\n+ * class that implements the B-tree equivalent.\n+ *\n...\n+ */\n+int\n+get_equal_strategy_number_for_am(Oid am)\n...\n\nI think this comment is slightly difficult to understand. Can we make\nit a bit generic by saying something like: \"The index access methods\nother than BTREE and HASH don't have a fixed strategy for equality\noperation. Note that in other index access methods, the support\nroutines of each operator class interpret the strategy numbers\naccording to the operator class's definition. So, we return\nInvalidStrategy number for index access methods other than BTREE and\nHASH.\"\n\n2.\n+ * 2) Moreover, BRIN and GIN indexes do not implement \"amgettuple\".\n+ * RelationFindReplTupleByIndex() assumes that tuples will be fetched one by\n+ * one via index_getnext_slot(), but this currently requires the \"amgetuple\"\n+ * function. To make it use index_getbitmap() must be used, which fetches all\n+ * the tuples at once.\n+ */\n+int\n+get_equal_strategy_number_for_am(Oid am)\n{\n..\n\nI don't think this is a good place for such a comment. We can probably\nmove this atop IsIndexUsableForReplicaIdentityFull(). 
I think you need\nto mention two reasons in IsIndexUsableForReplicaIdentityFull() why we\nsupport only BTREE and HASH index access methods (a) Refer to comments\natop get_equal_strategy_number_for_am(); (b) mention the reason\nrelated to tuples_equal() as discussed in email [1]. Then you can say\nthat we also need to ensure that the index access methods that we\nsupport must have an implementation \"amgettuple\" as later while\nsearching tuple via RelationFindReplTupleByIndex, we need the same. We\ncan probably have an assert for this as well.\n\n[1] - https://www.postgresql.org/message-id/CAA4eK1Jv8%2B8rax-bejd3Ej%2BT2fG3tuqP8v89byKCBPVQj9C9pg%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 12 Jul 2023 10:21:33 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] Use *other* indexes on the subscriber when REPLICA\n IDENTITY is FULL"
},
{
"msg_contents": "Here are my review comments for the patch v4.\n\n======\nGeneral\n\n1.\nFYI, this patch also needs some minor PG docs updates [1] because\nsection \"31.1 Publication\" currently refers to only btree - e.g.\n\"Candidate indexes must be btree, non-partial, and have...\"\n\n(this may also clash with Sawada-san's other thread as previously mentioned)\n\n======\nCommit message\n\n2.\nMoreover, BRIN and GIN indexes do not implement \"amgettuple\".\nRelationFindReplTupleByIndex()\nassumes that tuples will be fetched one by one via\nindex_getnext_slot(), but this\ncurrently requires the \"amgetuple\" function.\n\n~\n\nTypo:\n/\"amgetuple\"/\"amgettuple\"/\n\n======\nsrc/backend/executor/execReplication.c\n\n3.\n+ * 2) Moreover, BRIN and GIN indexes do not implement \"amgettuple\".\n+ * RelationFindReplTupleByIndex() assumes that tuples will be fetched one by\n+ * one via index_getnext_slot(), but this currently requires the \"amgetuple\"\n+ * function. To make it use index_getbitmap() must be used, which fetches all\n+ * the tuples at once.\n+ */\n+int\n\n3a.\nTypo:\n/\"amgetuple\"/\"amgettuple\"/\n\n~\n\n3b.\nI think my previous comment about this comment was misunderstood. I\nwas questioning why that last sentence (\"To make it...\") about\n\"index_getbitmap()\" is even needed in this patch. I thought it may be\noverreach to describe details how to make some future enhancement that\nmight never happen.\n\n\n======\nsrc/backend/replication/logical/relation.c\n\n4. IsIndexUsableForReplicaIdentityFull\n\n IsIndexUsableForReplicaIdentityFull(IndexInfo *indexInfo)\n {\n- bool is_btree = (indexInfo->ii_Am == BTREE_AM_OID);\n- bool is_partial = (indexInfo->ii_Predicate != NIL);\n- bool is_only_on_expression = IsIndexOnlyOnExpression(indexInfo);\n+ bool is_suitable_type;\n+ bool is_partial;\n+ bool is_only_on_expression;\n\n- return is_btree && !is_partial && !is_only_on_expression;\n+ /* Check whether the index is supported or not */\n+ is_suitable_type = ((get_equal_strategy_number_for_am(indexInfo->ii_Am)\n+ != InvalidStrategy));\n+\n+ is_partial = (indexInfo->ii_Predicate != NIL);\n+ is_only_on_expression = IsIndexOnlyOnExpression(indexInfo);\n+\n+ return is_suitable_type && !is_partial && !is_only_on_expression;\n\n4a.\nThe code can be written more optimally, because if it is deemed\nalready that 'is_suitable_type' is false, then there is no point to\ncontinue to do the other checks and function calls.\n\n~~~\n\n4b.\n\n+ /* Check whether the index is supported or not */\n+ is_suitable_type = ((get_equal_strategy_number_for_am(indexInfo->ii_Am)\n+ != InvalidStrategy));\n\nThe above statement has an extra set of nested parentheses for some reason.\n\n======\nsrc/backend/utils/cache/lsyscache.c\n\n5.\n/*\n * get_opclass_method\n *\n * Returns the OID of the index access method operator class is for.\n */\nOid\nget_opclass_method(Oid opclass)\n\nIMO the comment should be worded more like the previous one in this source file.\n\nSUGGESTION\nReturns the OID of the index access method the opclass belongs to.\n\n------\n[1] https://www.postgresql.org/docs/devel/logical-replication-publication.html\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Wed, 12 Jul 2023 15:22:10 +1000",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] Use *other* indexes on the subscriber when REPLICA\n IDENTITY is FULL"
},
{
"msg_contents": "Dear Peter,\r\n\r\nThanks for giving comment.\r\n\r\n> \r\n> 1.\r\n> FYI, this patch also needs some minor PG docs updates [1] because\r\n> section \"31.1 Publication\" currently refers to only btree - e.g.\r\n> \"Candidate indexes must be btree, non-partial, and have...\"\r\n> \r\n> (this may also clash with Sawada-san's other thread as previously mentioned)\r\n\r\nFixed that, but I could not find any other points. Do you have in mind others?\r\nI checked related commits like 89e46d and adedf5, but only the part was changed.\r\n\r\n> Commit message\r\n> \r\n> 2.\r\n> Moreover, BRIN and GIN indexes do not implement \"amgettuple\".\r\n> RelationFindReplTupleByIndex()\r\n> assumes that tuples will be fetched one by one via\r\n> index_getnext_slot(), but this\r\n> currently requires the \"amgetuple\" function.\r\n> \r\n> ~\r\n> \r\n> Typo:\r\n> /\"amgetuple\"/\"amgettuple\"/\r\n\r\nFixed.\r\n\r\n> src/backend/executor/execReplication.c\r\n> \r\n> 3.\r\n> + * 2) Moreover, BRIN and GIN indexes do not implement \"amgettuple\".\r\n> + * RelationFindReplTupleByIndex() assumes that tuples will be fetched one by\r\n> + * one via index_getnext_slot(), but this currently requires the \"amgetuple\"\r\n> + * function. To make it use index_getbitmap() must be used, which fetches all\r\n> + * the tuples at once.\r\n> + */\r\n> +int\r\n> \r\n> 3a.\r\n> Typo:\r\n> /\"amgetuple\"/\"amgettuple\"/\r\n\r\nPer suggestion from Amit [1], the paragraph was removed.\r\n\r\n\r\n> 3b.\r\n> I think my previous comment about this comment was misunderstood. I\r\n> was questioning why that last sentence (\"To make it...\") about\r\n> \"index_getbitmap()\" is even needed in this patch. I thought it may be\r\n> overreach to describe details how to make some future enhancement that\r\n> might never happen.\r\n\r\nRemoved.\r\n\r\n> src/backend/replication/logical/relation.c\r\n> \r\n> 4. 
IsIndexUsableForReplicaIdentityFull\r\n> \r\n> IsIndexUsableForReplicaIdentityFull(IndexInfo *indexInfo)\r\n> {\r\n> - bool is_btree = (indexInfo->ii_Am == BTREE_AM_OID);\r\n> - bool is_partial = (indexInfo->ii_Predicate != NIL);\r\n> - bool is_only_on_expression = IsIndexOnlyOnExpression(indexInfo);\r\n> + bool is_suitable_type;\r\n> + bool is_partial;\r\n> + bool is_only_on_expression;\r\n> \r\n> - return is_btree && !is_partial && !is_only_on_expression;\r\n> + /* Check whether the index is supported or not */\r\n> + is_suitable_type = ((get_equal_strategy_number_for_am(indexInfo->ii_Am)\r\n> + != InvalidStrategy));\r\n> +\r\n> + is_partial = (indexInfo->ii_Predicate != NIL);\r\n> + is_only_on_expression = IsIndexOnlyOnExpression(indexInfo);\r\n> +\r\n> + return is_suitable_type && !is_partial && !is_only_on_expression;\r\n> \r\n> 4a.\r\n> The code can be written more optimally, because if it is deemed\r\n> already that 'is_suitable_type' is false, then there is no point to\r\n> continue to do the other checks and function calls.\r\n\r\nRight, is_suitable_type is now removed.\r\n\r\n> 4b.\r\n> \r\n> + /* Check whether the index is supported or not */\r\n> + is_suitable_type = ((get_equal_strategy_number_for_am(indexInfo->ii_Am)\r\n> + != InvalidStrategy));\r\n> \r\n> The above statement has an extra set of nested parentheses for some reason.\r\n\r\nThis part was removed per above comment.\r\n\r\n> src/backend/utils/cache/lsyscache.c\r\n> \r\n> 5.\r\n> /*\r\n> * get_opclass_method\r\n> *\r\n> * Returns the OID of the index access method operator class is for.\r\n> */\r\n> Oid\r\n> get_opclass_method(Oid opclass)\r\n> \r\n> IMO the comment should be worded more like the previous one in this source file.\r\n> \r\n> SUGGESTION\r\n> Returns the OID of the index access method the opclass belongs to.\r\n\r\nFixed.\r\n\r\n[1]: https://www.postgresql.org/message-id/CAA4eK1%2B%2BR3WSfsGH0yFR9WEbkDfF__OccADR7qXDnHGTww%2BkvQ%40mail.gmail.com\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED",
"msg_date": "Wed, 12 Jul 2023 07:06:55 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [Patch] Use *other* indexes on the subscriber when REPLICA\n IDENTITY is FULL"
},
{
"msg_contents": "Dear Amit,\r\n\r\nThanks for checking my patch! The patch can be available at [1].\r\n\r\n> > > ======\r\n> > > .../subscription/t/032_subscribe_use_index.pl\r\n> > >\r\n> > > 11.\r\n> > > AFAIK there is no test to verify that the leftmost index field must be\r\n> > > a column (e.g. cannot be an expression). Yes, there are some tests for\r\n> > > *only* expressions like (expr, expr, expr), but IMO there should be\r\n> > > another test for an index like (expr, expr, column) which should also\r\n> > > fail because the column is not the leftmost field.\r\n> >\r\n> > I'm not sure they should be fixed in the patch. You have raised these points\r\n> > at the Sawada-san's thread[1] and not sure he has done.\r\n> > Furthermore, these points are not related with our initial motivation.\r\n> > Fork, or at least it should be done by another patch.\r\n> >\r\n> \r\n> I think it is reasonable to discuss the existing code-related\r\n> improvements in a separate thread rather than trying to change this\r\n> patch. \r\n\r\nOK, so I will not touch in this thread.\r\n\r\n> I have some other comments about your patch:\r\n> \r\n> 1.\r\n> + * 1) Other indexes do not have a fixed set of strategy numbers.\r\n> + * In build_replindex_scan_key(), the operator which corresponds to \"equality\r\n> + * comparison\" is searched by using the strategy number, but it is difficult\r\n> + * for such indexes. For example, GiST index operator classes for\r\n> + * two-dimensional geometric objects have a comparison \"same\", but its\r\n> strategy\r\n> + * number is different from the gist_int4_ops, which is a GiST index operator\r\n> + * class that implements the B-tree equivalent.\r\n> + *\r\n> ...\r\n> + */\r\n> +int\r\n> +get_equal_strategy_number_for_am(Oid am)\r\n> ...\r\n> \r\n> I think this comment is slightly difficult to understand. Can we make\r\n> it a bit generic by saying something like: \"The index access methods\r\n> other than BTREE and HASH don't have a fixed strategy for equality\r\n> operation. Note that in other index access methods, the support\r\n> routines of each operator class interpret the strategy numbers\r\n> according to the operator class's definition. So, we return\r\n> InvalidStrategy number for index access methods other than BTREE and\r\n> HASH.\"\r\n\r\nSounds better. Fixed with some adjustments.\r\n\r\n> 2.\r\n> + * 2) Moreover, BRIN and GIN indexes do not implement \"amgettuple\".\r\n> + * RelationFindReplTupleByIndex() assumes that tuples will be fetched one by\r\n> + * one via index_getnext_slot(), but this currently requires the \"amgetuple\"\r\n> + * function. To make it use index_getbitmap() must be used, which fetches all\r\n> + * the tuples at once.\r\n> + */\r\n> +int\r\n> +get_equal_strategy_number_for_am(Oid am)\r\n> {\r\n> ..\r\n> \r\n> I don't think this is a good place for such a comment. We can probably\r\n> move this atop IsIndexUsableForReplicaIdentityFull(). I think you need\r\n> to mention two reasons in IsIndexUsableForReplicaIdentityFull() why we\r\n> support only BTREE and HASH index access methods (a) Refer to comments\r\n> atop get_equal_strategy_number_for_am(); (b) mention the reason\r\n> related to tuples_equal() as discussed in email [1]. 
Then you can say\r\n> that we also need to ensure that the index access methods that we\r\n> support must have an implementation \"amgettuple\" as later while\r\n> searching tuple via RelationFindReplTupleByIndex, we need the same.\r\n\r\nFixed, and based on that I modified the commit message accordingly.\r\nHow do you feel?\r\n\r\n> We can probably have an assert for this as well.\r\n\r\nAdded.\r\n\r\n[1]: https://www.postgresql.org/message-id/TYAPR01MB5866B4F938ADD7088379633CF536A%40TYAPR01MB5866.jpnprd01.prod.outlook.com\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n\r\n\r\n",
"msg_date": "Wed, 12 Jul 2023 07:07:45 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [Patch] Use *other* indexes on the subscriber when REPLICA\n IDENTITY is FULL"
},
{
"msg_contents": "Hi Hayato, all\n\n\n> > - return is_btree && !is_partial && !is_only_on_expression;\n> > + /* Check whether the index is supported or not */\n> > + is_suitable_type = ((get_equal_strategy_number_for_am(indexInfo->ii_Am)\n> > + != InvalidStrategy));\n> > +\n> > + is_partial = (indexInfo->ii_Predicate != NIL);\n> > + is_only_on_expression = IsIndexOnlyOnExpression(indexInfo);\n> > +\n> > + return is_suitable_type && !is_partial && !is_only_on_expression;\n> >\n>\n\nI don't want to repeat this too much, as it is a minor note. Just\nsharing my perspective here.\n\nAs discussed in the other email [1], I feel like keeping\nIsIndexUsableForReplicaIdentityFull() function readable is useful\nfor documentation purposes as well.\n\nSo, I'm more inclined to see something like your old code, maybe with\na different variable name.\n\nbool is_btree = (indexInfo->ii_Am == BTREE_AM_OID);\n\n\nto\n\n> bool has_equal_strategy = get_equal_strategy_number_for_am...\n> ....\n> return has_equal_strategy && !is_partial && !is_only_on_expression;\n\n\n\n> 4a.\n> > The code can be written more optimally, because if it is deemed\n> > already that 'is_suitable_type' is false, then there is no point to\n> > continue to do the other checks and function calls.\n\n\nSure, there are maybe few cycles of CPU gains, but this code is executed\ninfrequently, and I don't see much value optimizing it. I think keeping it\nslightly\nmore readable is nicer.\n\nOther than that, I think the code/test looks good. For the\ncomments/documentation,\nI think Amit and Peter have already given quite a bit of useful feedback,\nso nothing\nmuch to add from my end.\n\nThanks,\nOnder\n\n[1]:\nhttps://www.postgresql.org/message-id/CACawEhUWH1qAZ8QNeCve737Qe1_ye%3DvTW9P22ePiFssT7%2BHaaQ%40mail.gmail.com\n\n\nHayato Kuroda (Fujitsu) <[email protected]>, 12 Tem 2023 Çar, 10:07\ntarihinde şunu yazdı:\n\n> Dear Amit,\n>\n> Thanks for checking my patch! The patch can be available at [1].\n>\n> > > > ======\n> > > > .../subscription/t/032_subscribe_use_index.pl\n> > > >\n> > > > 11.\n> > > > AFAIK there is no test to verify that the leftmost index field must\n> be\n> > > > a column (e.g. cannot be an expression). Yes, there are some tests\n> for\n> > > > *only* expressions like (expr, expr, expr), but IMO there should be\n> > > > another test for an index like (expr, expr, column) which should also\n> > > > fail because the column is not the leftmost field.\n> > >\n> > > I'm not sure they should be fixed in the patch. You have raised these\n> points\n> > > at the Sawada-san's thread[1] and not sure he has done.\n> > > Furthermore, these points are not related with our initial motivation.\n> > > Fork, or at least it should be done by another patch.\n> > >\n> >\n> > I think it is reasonable to discuss the existing code-related\n> > improvements in a separate thread rather than trying to change this\n> > patch.\n>\n> OK, so I will not touch in this thread.\n>\n> > I have some other comments about your patch:\n> >\n> > 1.\n> > + * 1) Other indexes do not have a fixed set of strategy numbers.\n> > + * In build_replindex_scan_key(), the operator which corresponds to\n> \"equality\n> > + * comparison\" is searched by using the strategy number, but it is\n> difficult\n> > + * for such indexes. 
For example, GiST index operator classes for\n> > + * two-dimensional geometric objects have a comparison \"same\", but its\n> > strategy\n> > + * number is different from the gist_int4_ops, which is a GiST index\n> operator\n> > + * class that implements the B-tree equivalent.\n> > + *\n> > ...\n> > + */\n> > +int\n> > +get_equal_strategy_number_for_am(Oid am)\n> > ...\n> >\n> > I think this comment is slightly difficult to understand. Can we make\n> > it a bit generic by saying something like: \"The index access methods\n> > other than BTREE and HASH don't have a fixed strategy for equality\n> > operation. Note that in other index access methods, the support\n> > routines of each operator class interpret the strategy numbers\n> > according to the operator class's definition. So, we return\n> > InvalidStrategy number for index access methods other than BTREE and\n> > HASH.\"\n>\n> Sounds better. Fixed with some adjustments.\n>\n> > 2.\n> > + * 2) Moreover, BRIN and GIN indexes do not implement \"amgettuple\".\n> > + * RelationFindReplTupleByIndex() assumes that tuples will be fetched\n> one by\n> > + * one via index_getnext_slot(), but this currently requires the\n> \"amgetuple\"\n> > + * function. To make it use index_getbitmap() must be used, which\n> fetches all\n> > + * the tuples at once.\n> > + */\n> > +int\n> > +get_equal_strategy_number_for_am(Oid am)\n> > {\n> > ..\n> >\n> > I don't think this is a good place for such a comment. We can probably\n> > move this atop IsIndexUsableForReplicaIdentityFull(). I think you need\n> > to mention two reasons in IsIndexUsableForReplicaIdentityFull() why we\n> > support only BTREE and HASH index access methods (a) Refer to comments\n> > atop get_equal_strategy_number_for_am(); (b) mention the reason\n> > related to tuples_equal() as discussed in email [1]. Then you can say\n> > that we also need to ensure that the index access methods that we\n> > support must have an implementation \"amgettuple\" as later while\n> > searching tuple via RelationFindReplTupleByIndex, we need the same.\n>\n> Fixed, and based on that I modified the commit message accordingly.\n> How do you feel?\n>\n> > We can probably have an assert for this as well.\n>\n> Added.\n>\n> [1]:\n> https://www.postgresql.org/message-id/TYAPR01MB5866B4F938ADD7088379633CF536A%40TYAPR01MB5866.jpnprd01.prod.outlook.com\n>\n> Best Regards,\n> Hayato Kuroda\n> FUJITSU LIMITED\n>\n>\n>\n>\n\nHi Hayato, all> - return is_btree && !is_partial && !is_only_on_expression;> + /* Check whether the index is supported or not */> + is_suitable_type = ((get_equal_strategy_number_for_am(indexInfo->ii_Am)> + != InvalidStrategy));> +> + is_partial = (indexInfo->ii_Predicate != NIL);> + is_only_on_expression = IsIndexOnlyOnExpression(indexInfo);> +> + return is_suitable_type && !is_partial && !is_only_on_expression;>I don't want to repeat this too much, as it is a minor note. Justsharing my perspective here.As discussed in the other email [1], I feel like keeping IsIndexUsableForReplicaIdentityFull() function readable is usefulfor documentation purposes as well.So, I'm more inclined to see something like your old code, maybe witha different variable name.bool\t\tis_btree = (indexInfo->ii_Am == BTREE_AM_OID);tobool has_equal_strategy = get_equal_strategy_number_for_am....... 
return \n\nhas_equal_strategy && !is_partial && !is_only_on_expression; > 4a.> The code can be written more optimally, because if it is deemed> already that 'is_suitable_type' is false, then there is no point to> continue to do the other checks and function calls.Sure, there are maybe few cycles of CPU gains, but this code is executedinfrequently, and I don't see much value optimizing it. I think keeping it slightlymore readable is nicer.Other than that, I think the code/test looks good. For the comments/documentation,I think Amit and Peter have already given quite a bit of useful feedback, so nothingmuch to add from my end.Thanks,Onder[1]: https://www.postgresql.org/message-id/CACawEhUWH1qAZ8QNeCve737Qe1_ye%3DvTW9P22ePiFssT7%2BHaaQ%40mail.gmail.comHayato Kuroda (Fujitsu) <[email protected]>, 12 Tem 2023 Çar, 10:07 tarihinde şunu yazdı:Dear Amit,\n\nThanks for checking my patch! The patch can be available at [1].\n\n> > > ======\n> > > .../subscription/t/032_subscribe_use_index.pl\n> > >\n> > > 11.\n> > > AFAIK there is no test to verify that the leftmost index field must be\n> > > a column (e.g. cannot be an expression). Yes, there are some tests for\n> > > *only* expressions like (expr, expr, expr), but IMO there should be\n> > > another test for an index like (expr, expr, column) which should also\n> > > fail because the column is not the leftmost field.\n> >\n> > I'm not sure they should be fixed in the patch. You have raised these points\n> > at the Sawada-san's thread[1] and not sure he has done.\n> > Furthermore, these points are not related with our initial motivation.\n> > Fork, or at least it should be done by another patch.\n> >\n> \n> I think it is reasonable to discuss the existing code-related\n> improvements in a separate thread rather than trying to change this\n> patch. \n\nOK, so I will not touch in this thread.\n\n> I have some other comments about your patch:\n> \n> 1.\n> + * 1) Other indexes do not have a fixed set of strategy numbers.\n> + * In build_replindex_scan_key(), the operator which corresponds to \"equality\n> + * comparison\" is searched by using the strategy number, but it is difficult\n> + * for such indexes. For example, GiST index operator classes for\n> + * two-dimensional geometric objects have a comparison \"same\", but its\n> strategy\n> + * number is different from the gist_int4_ops, which is a GiST index operator\n> + * class that implements the B-tree equivalent.\n> + *\n> ...\n> + */\n> +int\n> +get_equal_strategy_number_for_am(Oid am)\n> ...\n> \n> I think this comment is slightly difficult to understand. Can we make\n> it a bit generic by saying something like: \"The index access methods\n> other than BTREE and HASH don't have a fixed strategy for equality\n> operation. Note that in other index access methods, the support\n> routines of each operator class interpret the strategy numbers\n> according to the operator class's definition. So, we return\n> InvalidStrategy number for index access methods other than BTREE and\n> HASH.\"\n\nSounds better. Fixed with some adjustments.\n\n> 2.\n> + * 2) Moreover, BRIN and GIN indexes do not implement \"amgettuple\".\n> + * RelationFindReplTupleByIndex() assumes that tuples will be fetched one by\n> + * one via index_getnext_slot(), but this currently requires the \"amgetuple\"\n> + * function. 
To make it use index_getbitmap() must be used, which fetches all\n> + * the tuples at once.\n> + */\n> +int\n> +get_equal_strategy_number_for_am(Oid am)\n> {\n> ..\n> \n> I don't think this is a good place for such a comment. We can probably\n> move this atop IsIndexUsableForReplicaIdentityFull(). I think you need\n> to mention two reasons in IsIndexUsableForReplicaIdentityFull() why we\n> support only BTREE and HASH index access methods (a) Refer to comments\n> atop get_equal_strategy_number_for_am(); (b) mention the reason\n> related to tuples_equal() as discussed in email [1]. Then you can say\n> that we also need to ensure that the index access methods that we\n> support must have an implementation \"amgettuple\" as later while\n> searching tuple via RelationFindReplTupleByIndex, we need the same.\n\nFixed, and based on that I modified the commit message accordingly.\nHow do you feel?\n\n> We can probably have an assert for this as well.\n\nAdded.\n\n[1]: https://www.postgresql.org/message-id/TYAPR01MB5866B4F938ADD7088379633CF536A%40TYAPR01MB5866.jpnprd01.prod.outlook.com\n\nBest Regards,\nHayato Kuroda\nFUJITSU LIMITED",
"msg_date": "Wed, 12 Jul 2023 17:44:39 +0300",
"msg_from": "=?UTF-8?B?w5ZuZGVyIEthbGFjxLE=?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] Use *other* indexes on the subscriber when REPLICA\n IDENTITY is FULL"
},
{
"msg_contents": "Here are some review comments for the v5 patch\n\n======\nCommit message.\n\n1.\n89e46d allowed using indexes other than PRIMARY KEY or REPLICA IDENTITY on the\nsubscriber, but only the BTree index could be used. This commit extends the\nlimitation, now the Hash index can be also used. These 2 types of\nindexes are the\nonly ones supported because only these\n- use fix strategy numbers\n- implement the \"equality\" strategy\n- supported by tuples_equal()\n- implement the function amgettuple()\n\n~\n\n1a.\n/fix/fixed/\n\n~\n\n1b.\n/supported by tuples_equal()/are supported by tuples_equal()/\n\n~~~\n\n2.\nIndex access methods other than them don't have a fixed strategy for equality\noperation. Note that in other index access methods, the support routines of each\noperator class interpret the strategy numbers according to the operator class's\ndefinition. In build_replindex_scan_key(), the operator which corresponds to\n\"equality comparison\" is searched by using the strategy number, but it is\ndifficult for such indexes.\n\n~\n\n/Index access methods other than them/Other index access methods/\n\n~~~\n\n3.\ntuples_equal() cannot accept a datatype that does not have an operator class for\nBtree or Hash. One motivation for other types of indexes to be used is to define\nan index to attributes such as datatypes like point and box. But\nlookup_type_cache(),\nwhich is called from tuples_equal(), cannot return the valid value if\ninput datatypes\ndo not have such a opclass.\n\n~\n\nThat paragraph looks the same as the code comment in\nIsIndexUsableForReplicaIdentityFull(). I wrote some review comments\nabout that (see #5d below) so the same maybe applies here too.\n\n======\nsrc/backend/executor/execReplication.c\n\n4.\n+/*\n+ * Return the strategy number which corresponds to the equality operator for\n+ * given index access method.\n+ *\n+ * XXX: Currently, only Btree and Hash indexes are supported. This is because\n+ * index access methods other than them don't have a fixed strategy for\n+ * equality operation. Note that in other index access methods, the support\n+ * routines of each operator class interpret the strategy numbers according to\n+ * the operator class's definition. So, we return the InvalidStrategy number\n+ * for index access methods other than BTREE and HASH.\n+ */\n+int\n+get_equal_strategy_number_for_am(Oid am)\n\nThe function comment seems a bit long. Maybe it can be simplified a bit:\n\nSUGGESTION\nReturn the strategy number which corresponds to the equality operator\nfor the given index access method, otherwise, return InvalidStrategy.\n\nXXX: Currently, only Btree and Hash indexes are supported. The other\nindex access methods don't have a fixed strategy for equality\noperation - instead, the support routines of each operator class\ninterpret the strategy numbers according to the operator class's\ndefinition.\n\n======\nsrc/backend/replication/logical/relation.c\n\n5. FindUsableIndexForReplicaIdentityFull\n\n /*\n * Returns true if the index is usable for replica identity full. For details,\n * see FindUsableIndexForReplicaIdentityFull.\n+ *\n+ * XXX: Currently, only Btree and Hash indexes can be returned as usable one.\n+ * This is because mainly two reasons:\n+ *\n+ * 1) Other index access methods other than Btree and Hash don't have a fixed\n+ * strategy for equality operation. Refer comments atop\n+ * get_equal_strategy_number_for_am.\n+ * 2) tuples_equal cannot accept a datatype that does not have an operator\n+ * class for Btree or Hash. 
One motivation for other types of indexes to be\n+ * used is to define an index to attributes such as datatypes like point and\n+ * box. But lookup_type_cache, which is called from tuples_equal, cannot return\n+ * the valid value if input datatypes do not have such a opclass.\n+ *\n+ * Furthermore, BRIN and GIN indexes do not implement \"amgettuple\".\n+ * RelationFindReplTupleByIndex assumes that tuples will be fetched one by\n+ * one via index_getnext_slot, but this currently requires the \"amgettuple\"\n+ * function.\n */\n\n5a.\n/as usable one./as useable./\n\n~\n\n5b.\n/Other index access methods other than Btree and Hash/Other index\naccess methods/\n\n~\n\n5c.\nMaybe a blank line before 2) will help readability.\n\n~\n\n5d.\n\"One motivation for other types of indexes to be used is to define an\nindex to attributes such as datatypes like point and box.\"\n\nIsn't it enough to omit that sentence and just say:\n\nSUGGESTION\n2) tuples_equal() cannot accept a datatype (e.g. point or box) that\ndoes not have an operator class for Btree or Hash. This is because\nlookup_type_cache(), which is called from tuples_equal(), cannot\nreturn the valid value if input datatypes do not have such an opclass.\n\n~~~\n\n6. FindUsableIndexForReplicaIdentityFull\n\n+ /* Check whether the index is supported or not */\n+ if (get_equal_strategy_number_for_am(indexInfo->ii_Am) == InvalidStrategy)\n+ return false;\n\n6a.\n\nReally, this entire function is for checking \"is supported or not\" so\nIMO this is not the correct comment just for this statement. BTW, I\nnoted Onder suggested keeping this as a variable assignment (called\n'has_equal_strategy') [1]. I also think having a variable is better\nbecause then this extra comment would be unnecessary.\n\n~\n\n6b.\nIMO the code is readable with the early exit, but it is fine also if\nyou want to revert it to how Onder suggested [1]. I think it is not\nworth worrying too much here because it seems the Sawada-san patch [2]\nmight have intentions to refactor all this same function anyhow.\n\n------\n[1] Onder suggestion -\nhttps://www.postgresql.org/message-id/CACawEhXvVqxoaqj5aanaT02DHYUJwpkssS4RTZRSuqEOpT0zQg%40mail.gmail.com\n[2] Sawada-san other thread -\nhttps://www.postgresql.org/message-id/CAD21AoAKx%2BFY4OPPj%2BMEF0gM-TAV0%3Dfd3EfPoEsa%2BcMQLiWjyA%40mail.gmail.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Thu, 13 Jul 2023 11:43:19 +1000",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] Use *other* indexes on the subscriber when REPLICA\n IDENTITY is FULL"
},
{
"msg_contents": "On Wed, Jul 12, 2023 at 8:14 PM Önder Kalacı <[email protected]> wrote:\n>\n>>\n>> > - return is_btree && !is_partial && !is_only_on_expression;\n>> > + /* Check whether the index is supported or not */\n>> > + is_suitable_type = ((get_equal_strategy_number_for_am(indexInfo->ii_Am)\n>> > + != InvalidStrategy));\n>> > +\n>> > + is_partial = (indexInfo->ii_Predicate != NIL);\n>> > + is_only_on_expression = IsIndexOnlyOnExpression(indexInfo);\n>> > +\n>> > + return is_suitable_type && !is_partial && !is_only_on_expression;\n>> >\n>\n>\n> I don't want to repeat this too much, as it is a minor note. Just\n> sharing my perspective here.\n>\n> As discussed in the other email [1], I feel like keeping\n> IsIndexUsableForReplicaIdentityFull() function readable is useful\n> for documentation purposes as well.\n>\n> So, I'm more inclined to see something like your old code, maybe with\n> a different variable name.\n>\n>> bool is_btree = (indexInfo->ii_Am == BTREE_AM_OID);\n>\n>\n> to\n>>\n>> bool has_equal_strategy = get_equal_strategy_number_for_am...\n>> ....\n>> return has_equal_strategy && !is_partial && !is_only_on_expression;\n>\n\n+1 for the readability. I think we can as you suggest or I feel it\nwould be better if we return false as soon as we found any condition\nis false. The current patch has a mixed style such that for\nInvalidStrategy, it returns immediately but for others, it does a\ncombined check. The other point we should consider in this regard is\nthe below assert check:\n\n+#ifdef USE_ASSERT_CHECKING\n+ {\n+ /* Check that the given index access method has amgettuple routine */\n+ IndexAmRoutine *amroutine = GetIndexAmRoutineByAmId(indexInfo->ii_Am,\n+ false);\n+ Assert(amroutine->amgettuple != NULL);\n+ }\n+#endif\n\nApart from having an assert, we have the following two options (a)\ncheck this condition as well and return false if am doesn't support\namgettuple (b) report elog(ERROR, ..) in this case.\n\nI am of the opinion that we should either have an assert for this or\ndo (b) because if do (a) currently there is no case where it can\nreturn false. What do you think?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 13 Jul 2023 08:51:58 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] Use *other* indexes on the subscriber when REPLICA\n IDENTITY is FULL"
},
{
"msg_contents": "On Thu, Jul 13, 2023 at 8:51 AM Amit Kapila <[email protected]> wrote:\n>\n> On Wed, Jul 12, 2023 at 8:14 PM Önder Kalacı <[email protected]> wrote:\n> >\n> >>\n> >> > - return is_btree && !is_partial && !is_only_on_expression;\n> >> > + /* Check whether the index is supported or not */\n> >> > + is_suitable_type = ((get_equal_strategy_number_for_am(indexInfo->ii_Am)\n> >> > + != InvalidStrategy));\n> >> > +\n> >> > + is_partial = (indexInfo->ii_Predicate != NIL);\n> >> > + is_only_on_expression = IsIndexOnlyOnExpression(indexInfo);\n> >> > +\n> >> > + return is_suitable_type && !is_partial && !is_only_on_expression;\n> >> >\n> >\n> >\n> > I don't want to repeat this too much, as it is a minor note. Just\n> > sharing my perspective here.\n> >\n> > As discussed in the other email [1], I feel like keeping\n> > IsIndexUsableForReplicaIdentityFull() function readable is useful\n> > for documentation purposes as well.\n> >\n> > So, I'm more inclined to see something like your old code, maybe with\n> > a different variable name.\n> >\n> >> bool is_btree = (indexInfo->ii_Am == BTREE_AM_OID);\n> >\n> >\n> > to\n> >>\n> >> bool has_equal_strategy = get_equal_strategy_number_for_am...\n> >> ....\n> >> return has_equal_strategy && !is_partial && !is_only_on_expression;\n> >\n>\n> +1 for the readability. I think we can as you suggest or I feel it\n> would be better if we return false as soon as we found any condition\n> is false. The current patch has a mixed style such that for\n> InvalidStrategy, it returns immediately but for others, it does a\n> combined check.\n>\n\nI have followed the later style in the attached.\n\n> The other point we should consider in this regard is\n> the below assert check:\n>\n> +#ifdef USE_ASSERT_CHECKING\n> + {\n> + /* Check that the given index access method has amgettuple routine */\n> + IndexAmRoutine *amroutine = GetIndexAmRoutineByAmId(indexInfo->ii_Am,\n> + false);\n> + Assert(amroutine->amgettuple != NULL);\n> + }\n> +#endif\n>\n> Apart from having an assert, we have the following two options (a)\n> check this condition as well and return false if am doesn't support\n> amgettuple (b) report elog(ERROR, ..) in this case.\n>\n> I am of the opinion that we should either have an assert for this or\n> do (b) because if do (a) currently there is no case where it can\n> return false. What do you think?\n>\n\nFor now, I have kept the assert but moved it to the end of the function.\n\nApart from the above, I have made a number of minor changes (a)\nchanged the datatype for the strategy to StrategyNumber at various\nplaces in the patch; (b) made a number of changes in comments based on\nPeter's comments and otherwise; (c) ran pgindent and changed the\ncommit message; (d) few other cosmetic changes.\n\nLet me know what you think of the attached.\n\n-- \nWith Regards,\nAmit Kapila.",
"msg_date": "Thu, 13 Jul 2023 14:46:22 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] Use *other* indexes on the subscriber when REPLICA\n IDENTITY is FULL"
},
{
"msg_contents": "Hi Amit, Hayato, all\n\n> +1 for the readability. I think we can as you suggest or I feel it\n> > would be better if we return false as soon as we found any condition\n> > is false. The current patch has a mixed style such that for\n> > InvalidStrategy, it returns immediately but for others, it does a\n> > combined check.\n> >\n>\n> I have followed the later style in the attached.\n>\n\nLooks good to me!\n\nI agree with your note that the confusion was mostly\naround two different styles (e,g., some checks return early others wait\nuntil the final return). Now, the code is pretty easy to follow.\n\n\n> > The other point we should consider in this regard is\n> > the below assert check:\n> >\n> > +#ifdef USE_ASSERT_CHECKING\n> > + {\n> > + /* Check that the given index access method has amgettuple routine */\n> > + IndexAmRoutine *amroutine = GetIndexAmRoutineByAmId(indexInfo->ii_Am,\n> > + false);\n> > + Assert(amroutine->amgettuple != NULL);\n> > + }\n> > +#endif\n> >\n> > Apart from having an assert, we have the following two options (a)\n> > check this condition as well and return false if am doesn't support\n> > amgettuple (b) report elog(ERROR, ..) in this case.\n>\n\nI think with the current state of the patch (e.g., we only support btree\nand hash),\nAssert looks reasonable.\n\nIn the future, in case we have a future hypothetical index type that\nfulfills the\n\"if\" checks but fails on amgettuple, we could change the code to \"return\nfalse\"\nsuch that it gives a chance for the other indexes to satisfy the condition.\n\n\nLet me know what you think of the attached.\n>\n>\nLooks good to me. I have also done some testing, which works as\ndocumented/expected.\n\nThanks,\nOnder\n\nHi Amit, Hayato, all\n> +1 for the readability. I think we can as you suggest or I feel it\n> would be better if we return false as soon as we found any condition\n> is false. The current patch has a mixed style such that for\n> InvalidStrategy, it returns immediately but for others, it does a\n> combined check.\n>\n\nI have followed the later style in the attached.Looks good to me! I agree with your note that the confusion was mostlyaround two different styles (e,g., some checks return early others waituntil the final return). Now, the code is pretty easy to follow.\n\n> The other point we should consider in this regard is\n> the below assert check:\n>\n> +#ifdef USE_ASSERT_CHECKING\n> + {\n> + /* Check that the given index access method has amgettuple routine */\n> + IndexAmRoutine *amroutine = GetIndexAmRoutineByAmId(indexInfo->ii_Am,\n> + false);\n> + Assert(amroutine->amgettuple != NULL);\n> + }\n> +#endif\n>\n> Apart from having an assert, we have the following two options (a)\n> check this condition as well and return false if am doesn't support\n> amgettuple (b) report elog(ERROR, ..) in this case.I think with the current state of the patch (e.g., we only support btree and hash),Assert looks reasonable.In the future, in case we have a future hypothetical index type that fulfills the \"if\" checks but fails on amgettuple, we could change the code to \"return false\"such that it gives a chance for the other indexes to satisfy the condition.Let me know what you think of the attached.Looks good to me. I have also done some testing, which works as documented/expected.Thanks,Onder",
"msg_date": "Thu, 13 Jul 2023 17:37:42 +0300",
"msg_from": "=?UTF-8?B?w5ZuZGVyIEthbGFjxLE=?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] Use *other* indexes on the subscriber when REPLICA\n IDENTITY is FULL"
},
{
"msg_contents": "On Thu, Jul 13, 2023 at 8:07 PM Önder Kalacı <[email protected]> wrote:\n>\n> Looks good to me. I have also done some testing, which works as documented/expected.\n>\n\nThanks, I have pushed this after minor changes in the comments.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 14 Jul 2023 13:43:04 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] Use *other* indexes on the subscriber when REPLICA\n IDENTITY is FULL"
}
] |
[
{
"msg_contents": "Hello, \n\nFollowing some conversation with Tomas at PGCon, I decided to resurrect this \ntopic, which was previously discussed in the context of moving tuplesort to \nuse GenerationContext: https://www.postgresql.org/message-id/\n8046109.NyiUUSuA9g%40aivenronan\n\nThe idea for this patch is that the behaviour of glibc's malloc can be \ncounterproductive for us in some cases. To summarise, glibc's malloc offers \n(among others) two tunable parameters which greatly affects how it allocates \nmemory. From the mallopt manpage:\n\n M_TRIM_THRESHOLD\n When the amount of contiguous free memory at the top of\n the heap grows sufficiently large, free(3) employs sbrk(2)\n to release this memory back to the system. (This can be\n useful in programs that continue to execute for a long\n period after freeing a significant amount of memory.) \n\n M_MMAP_THRESHOLD\n For allocations greater than or equal to the limit\n specified (in bytes) by M_MMAP_THRESHOLD that can't be\n satisfied from the free list, the memory-allocation\n functions employ mmap(2) instead of increasing the program\n break using sbrk(2).\n\nThe thing is, by default, those parameters are adjusted dynamically by the \nglibc itself. It starts with quite small thresholds, and raises them when the \nprogram frees some memory, up to a certain limit. This patch proposes a new \nGUC allowing the user to adjust those settings according to their workload.\n\nThis can cause problems. Let's take for example a table with 10k rows, and 32 \ncolumns (as defined by a bench script David Rowley shared last year when \ndiscussing the GenerationContext for tuplesort), and execute the following \nquery, with 32MB of work_mem:\n\nselect * from t order by a offset 100000;\n\nOn unpatched master, attaching strace to the backend and grepping on brk|mmap, \nwe get the following syscalls:\n\nbrk(0x55b00df0c000) = 0x55b00df0c000\nbrk(0x55b00df05000) = 0x55b00df05000\nbrk(0x55b00df28000) = 0x55b00df28000\nbrk(0x55b00df52000) = 0x55b00df52000\nmmap(NULL, 266240, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = \n0x7fbc49254000\nbrk(0x55b00df7e000) = 0x55b00df7e000\nmmap(NULL, 528384, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = \n0x7fbc48f7f000\nmmap(NULL, 1052672, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = \n0x7fbc48e7e000\nmmap(NULL, 200704, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = \n0x7fbc4980f000\nbrk(0x55b00df72000) = 0x55b00df72000\nmmap(NULL, 2101248, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = \n0x7fbc3d56d000\n\nUsing systemtap, we can hook to glibc's mallocs static probes to log whenever \nit adjusts its values. During the above queries, glibc's malloc raised its \nthresholds:\n\n347704: New thresholds: mmap: 2101248 bytes, trim: 4202496 bytes\n\n\nIf we re-run the query again, we get: \n\nbrk(0x55b00dfe2000) = 0x55b00dfe2000\nbrk(0x55b00e042000) = 0x55b00e042000\nbrk(0x55b00e0ce000) = 0x55b00e0ce000\nbrk(0x55b00e1e6000) = 0x55b00e1e6000\nbrk(0x55b00e216000) = 0x55b00e216000\nbrk(0x55b00e416000) = 0x55b00e416000\nbrk(0x55b00e476000) = 0x55b00e476000\nbrk(0x55b00dfbc000) = 0x55b00dfbc000\n\nThis time, our allocations are below the new mmap_threshold, so malloc gets us \nour memory by repeatedly moving the brk pointer. 
\n\nWhen running with the attached patch, and setting the new GUC:\n\nset glibc_malloc_max_trim_threshold = '64MB';\n\nWe now get the following syscalls for the same query, for the first run:\n\nbrk(0x55b00df0c000) = 0x55b00df0c000\nbrk(0x55b00df2e000) = 0x55b00df2e000\nbrk(0x55b00df52000) = 0x55b00df52000\nbrk(0x55b00dfb2000) = 0x55b00dfb2000\nbrk(0x55b00e03e000) = 0x55b00e03e000\nbrk(0x55b00e156000) = 0x55b00e156000\nbrk(0x55b00e186000) = 0x55b00e186000\nbrk(0x55b00e386000) = 0x55b00e386000\nbrk(0x55b00e3e6000) = 0x55b00e3e6000\n\nBut for the second run, the memory allocated is kept by malloc's freelist \ninstead of being released to the kernel, generating no syscalls at all, which \nbrings us a significant performance improvement at the cost of more memory \nbeing used by the idle backend, up to twice as more tps.\n\nOn the other hand, the default behaviour can also be a problem if a backend \nmakes big allocations for a short time and then never needs that amount of \nmemory again.\n\nFor example, running this query: \n\nselect * from generate_series(1, 1000000);\n\nWe allocate some memory. The first time it's run, malloc will use mmap to \nsatisfy it. Once it's freed, it will raise it's threshold, and a second run \nwill allocate it on the heap instead. So if we run the query twice, we end up \nwith some memory in malloc's free lists that we may never use again. Using the \nnew GUC, we can actually control wether it will be given back to the OS by \nsetting a small value for the threshold.\n\nI attached the results of the 10k rows / 32 columns / 32MB work_mem benchmark \nwith different values for glibc_malloc_max_trim_threshold. \n\nI don't know how to write a test for this new feature so let me know if you \nhave suggestions. Documentation is not written yet, as I expect discussion on \nthis thread to lead to significant changes on the user-visible GUC or GUCs: \n - should we provide one for trim which also adjusts mmap_threshold (current \npatch) or several GUCs ?\n - should this be simplified to only offer the default behaviour (glibc's takes \ncare of the threshold) and some presets (\"greedy\", to set trim_threshold to \nwork_mem, \"frugal\" to set it to a really small value)\n\nBest regards,\n\n--\nRonan Dunklau",
"msg_date": "Thu, 22 Jun 2023 15:35:12 +0200",
"msg_from": "Ronan Dunklau <[email protected]>",
"msg_from_op": true,
"msg_subject": "Add GUC to tune glibc's malloc implementation."
},
{
"msg_contents": "Ronan Dunklau <[email protected]> writes:\n> Following some conversation with Tomas at PGCon, I decided to resurrect this \n> topic, which was previously discussed in the context of moving tuplesort to \n> use GenerationContext: https://www.postgresql.org/message-id/\n> 8046109.NyiUUSuA9g%40aivenronan\n\nThis seems like a pretty awful idea, mainly because there's no way\nto have such a GUC mean anything on non-glibc platforms, which is\ngoing to cause confusion or worse.\n\nAren't these same settings controllable via environment variables?\nI could see adding some docs suggesting that you set thus-and-such\nvalues in the postmaster's startup script. Admittedly, the confusion\nargument is perhaps still raisable; but we have a similar docs section\ndiscussing controlling Linux OOM behavior, and I've not heard much\ncomplaints about that.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 22 Jun 2023 09:49:36 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add GUC to tune glibc's malloc implementation."
},
{
"msg_contents": "Le jeudi 22 juin 2023, 15:49:36 CEST Tom Lane a écrit :\n> This seems like a pretty awful idea, mainly because there's no way\n> to have such a GUC mean anything on non-glibc platforms, which is\n> going to cause confusion or worse.\n\nI named the GUC glibc_malloc_max_trim_threshold, I hope this is enough to \nclear up the confusion. We already have at least event_source, which is \nwindows specific even if it's not clear from the name. \n\n> \n> Aren't these same settings controllable via environment variables?\n> I could see adding some docs suggesting that you set thus-and-such\n> values in the postmaster's startup script. Admittedly, the confusion\n> argument is perhaps still raisable; but we have a similar docs section\n> discussing controlling Linux OOM behavior, and I've not heard much\n> complaints about that.\n\nYes they are, but controlling them via an environment variable for the whole \ncluster defeats the point: different backends have different workloads, and \nbeing able to make sure for example the OLAP user is memory-greedy while the \nOLTP one is as conservative as possible is a worthwile goal. Or even a \nspecific backend may want to raise it's work_mem and adapt glibc behaviour \naccordingly, then get back to being conservative with memory until the next \nsuch transaction. \n\nRegards,\n\n--\nRonan Dunklau\n\n\n\n\n",
"msg_date": "Thu, 22 Jun 2023 16:02:06 +0200",
"msg_from": "Ronan Dunklau <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add GUC to tune glibc's malloc implementation."
},
{
"msg_contents": "Ronan Dunklau <[email protected]> writes:\n> Le jeudi 22 juin 2023, 15:49:36 CEST Tom Lane a écrit :\n>> Aren't these same settings controllable via environment variables?\n>> I could see adding some docs suggesting that you set thus-and-such\n>> values in the postmaster's startup script. Admittedly, the confusion\n>> argument is perhaps still raisable; but we have a similar docs section\n>> discussing controlling Linux OOM behavior, and I've not heard much\n>> complaints about that.\n\n> Yes they are, but controlling them via an environment variable for the whole \n> cluster defeats the point: different backends have different workloads, and \n> being able to make sure for example the OLAP user is memory-greedy while the \n> OLTP one is as conservative as possible is a worthwile goal.\n\nAnd what is going to happen when we switch to a thread model?\n(I don't personally think that's going to happen, but some other\npeople do.) If we only document how to adjust this cluster-wide,\nthen we won't have a problem with that. But I'm not excited about\nintroducing functionality that is both platform-dependent and\nunsupportable in a threaded system.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 22 Jun 2023 10:07:12 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add GUC to tune glibc's malloc implementation."
},
{
"msg_contents": "On 22.06.23 15:35, Ronan Dunklau wrote:\n> The thing is, by default, those parameters are adjusted dynamically by the\n> glibc itself. It starts with quite small thresholds, and raises them when the\n> program frees some memory, up to a certain limit. This patch proposes a new\n> GUC allowing the user to adjust those settings according to their workload.\n> \n> This can cause problems. Let's take for example a table with 10k rows, and 32\n> columns (as defined by a bench script David Rowley shared last year when\n> discussing the GenerationContext for tuplesort), and execute the following\n> query, with 32MB of work_mem:\n\nI don't follow what you are trying to achieve with this. The examples \nyou show appear to work sensibly in my mind. Using this setting, you \ncan save some of the adjustments that glibc does after the first query. \nBut that seems only useful if your session only does one query. Is that \nwhat you are doing?\n\n\n\n",
"msg_date": "Fri, 23 Jun 2023 22:55:51 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add GUC to tune glibc's malloc implementation."
},
{
"msg_contents": "Le vendredi 23 juin 2023, 22:55:51 CEST Peter Eisentraut a écrit :\n> On 22.06.23 15:35, Ronan Dunklau wrote:\n> > The thing is, by default, those parameters are adjusted dynamically by the\n> > glibc itself. It starts with quite small thresholds, and raises them when\n> > the program frees some memory, up to a certain limit. This patch proposes\n> > a new GUC allowing the user to adjust those settings according to their\n> > workload.\n> > \n> > This can cause problems. Let's take for example a table with 10k rows, and\n> > 32 columns (as defined by a bench script David Rowley shared last year\n> > when discussing the GenerationContext for tuplesort), and execute the\n> > following\n> > query, with 32MB of work_mem:\n\n> I don't follow what you are trying to achieve with this. The examples\n> you show appear to work sensibly in my mind. Using this setting, you\n> can save some of the adjustments that glibc does after the first query.\n> But that seems only useful if your session only does one query. Is that\n> what you are doing?\n\nNo, not at all: glibc does not do the right thing, we don't \"save\" it. \nI will try to rephrase that.\n\nIn the first test case I showed, we see that glibc adjusts its threshold, but \nto a suboptimal value since repeated executions of a query needing the same \namount of memory will release it back to the kernel, and move the brk pointer \nagain, and will not adjust it again. On the other hand, by manually adjusting \nthe thresholds, we can set them to a higher value which means that the memory \nwill be kept in malloc's freelist for reuse for the next queries. As shown in \nthe benchmark results I posted, this can have quite a dramatic effect, going \nfrom 396 tps to 894. For ease of benchmarking, it is a single query being \nexecuted over and over again, but the same thing would be true if different \nqueries allocating memories were executed by a single backend. \n\nThe worst part of this means it is unpredictable: depending on past memory \nallocation patterns, glibc will end up in different states, and exhibit \ncompletely different performance for all subsequent queries. In fact, this is \nwhat Tomas noticed last year, (see [0]), which led to investigation into \nthis. \n\nI also tried to show that for certain cases glibcs behaviour can be on the \ncontrary to greedy, and hold on too much memory if we just need the memory \nonce and never allocate it again. \n\nI hope what I'm trying to achieve is clearer that way. Maybe this patch is not \nthe best way to go about this, but since the memory allocator behaviour can \nhave such an impact it's a bit sad we have to leave half the performance on \nthe table because of it when there are easily accessible knobs to avoid it.\n\n[0] https://www.postgresql.org/message-id/bcdd4e3e-c12d-cd2b-7ead-a91ad416100a%40enterprisedb.com\n\n\n\n\n",
"msg_date": "Mon, 26 Jun 2023 08:38:35 +0200",
"msg_from": "Ronan Dunklau <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add GUC to tune glibc's malloc implementation."
},
{
"msg_contents": "Hi,\n\nOn 2023-06-22 15:35:12 +0200, Ronan Dunklau wrote:\n> This can cause problems. Let's take for example a table with 10k rows, and 32 \n> columns (as defined by a bench script David Rowley shared last year when \n> discussing the GenerationContext for tuplesort), and execute the following \n> query, with 32MB of work_mem:\n> \n> select * from t order by a offset 100000;\n>\n> I attached the results of the 10k rows / 32 columns / 32MB work_mem benchmark \n> with different values for glibc_malloc_max_trim_threshold.\n\nCould you provide instructions for the benchmark that don't require digging\ninto the archive to find an email by David?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 26 Jun 2023 13:59:07 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add GUC to tune glibc's malloc implementation."
},
{
"msg_contents": "Hi,\n\nOn 2023-06-26 08:38:35 +0200, Ronan Dunklau wrote:\n> I hope what I'm trying to achieve is clearer that way. Maybe this patch is not\n> the best way to go about this, but since the memory allocator behaviour can\n> have such an impact it's a bit sad we have to leave half the performance on\n> the table because of it when there are easily accessible knobs to avoid it.\n\nI'm *quite* doubtful this patch is the way to go. If we want to more tightly\ncontrol memory allocation patterns, because we have more information than\nglibc, we should do that, rather than try to nudge glibc's malloc in random\ndirection. In contrast a generic malloc() implementation we can have much\nmore information about memory lifetimes etc due to memory contexts.\n\nWe e.g. could keep a larger number of memory blocks reserved\nourselves. Possibly by delaying the release of additionally held blocks until\nwe have been idle for a few seconds or such.\n\n\nWRT to the difference in TPS in the benchmark you mention - I suspect that we\nare doing something bad that needs to be improved regardless of the underlying\nmemory allocator implementation. Due to the lack of detailed instructions I\ncouldn't reproduce the results immediately.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 26 Jun 2023 14:03:48 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add GUC to tune glibc's malloc implementation."
},
{
"msg_contents": "Le lundi 26 juin 2023, 23:03:48 CEST Andres Freund a écrit :\n> Hi,\n> \n> On 2023-06-26 08:38:35 +0200, Ronan Dunklau wrote:\n> > I hope what I'm trying to achieve is clearer that way. Maybe this patch is\n> > not the best way to go about this, but since the memory allocator\n> > behaviour can have such an impact it's a bit sad we have to leave half\n> > the performance on the table because of it when there are easily\n> > accessible knobs to avoid it.\n> I'm *quite* doubtful this patch is the way to go. If we want to more\n> tightly control memory allocation patterns, because we have more\n> information than glibc, we should do that, rather than try to nudge glibc's\n> malloc in random direction. In contrast a generic malloc() implementation\n> we can have much more information about memory lifetimes etc due to memory\n> contexts.\n\nYes this is probably much more appropriate, but a much larger change with \ngreater risks of regression. Especially as we have to make sure we're not \noverfitting our own code for a specific malloc implementation, to the detriment \nof others. Except if you hinted we should write our own directly instead ?\n\n> \n> We e.g. could keep a larger number of memory blocks reserved\n> ourselves. Possibly by delaying the release of additionally held blocks\n> until we have been idle for a few seconds or such.\n\nI think keeping work_mem around after it has been used a couple times make \nsense. This is the memory a user is willing to dedicate to operations, after \nall.\n\n> \n> \n> WRT to the difference in TPS in the benchmark you mention - I suspect that\n> we are doing something bad that needs to be improved regardless of the\n> underlying memory allocator implementation. Due to the lack of detailed\n> instructions I couldn't reproduce the results immediately.\n\nI re-attached the simple script I used. I've run this script with different \nvalues for glibc_malloc_max_trim_threshold. \n\nBest regards,\n\n--\nRonan Dunklau",
"msg_date": "Tue, 27 Jun 2023 08:35:28 +0200",
"msg_from": "Ronan Dunklau <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add GUC to tune glibc's malloc implementation."
},
{
"msg_contents": "Le mardi 27 juin 2023, 08:35:28 CEST Ronan Dunklau a écrit :\n> I re-attached the simple script I used. I've run this script with different\n> values for glibc_malloc_max_trim_threshold.\n\nI forgot to add that it was using default parametrers except for work_mem, set \nto 32M, and max_parallel_workers_per_gather set to zero. \n\n\n\n\n\n",
"msg_date": "Tue, 27 Jun 2023 08:39:59 +0200",
"msg_from": "Ronan Dunklau <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add GUC to tune glibc's malloc implementation."
},
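The benchmark script itself is only attached to the emails, not reproduced in the archive, so the following is a rough sketch of the workload described in this thread (a 10k-row, 32-column table, work_mem = 32MB, parallelism disabled, and one ORDER BY ... OFFSET query run repeatedly). Column names and the data generation are assumptions; the original script by David Rowley may differ.

    -- Assumed reconstruction of the benchmark shape described above.
    SET work_mem = '32MB';
    SET max_parallel_workers_per_gather = 0;

    CREATE TABLE t AS
    SELECT i AS a,
           i AS c02, i AS c03, i AS c04, i AS c05, i AS c06, i AS c07, i AS c08,
           i AS c09, i AS c10, i AS c11, i AS c12, i AS c13, i AS c14, i AS c15,
           i AS c16, i AS c17, i AS c18, i AS c19, i AS c20, i AS c21, i AS c22,
           i AS c23, i AS c24, i AS c25, i AS c26, i AS c27, i AS c28, i AS c29,
           i AS c30, i AS c31, i AS c32
    FROM generate_series(1, 10000) AS i;

    -- Run repeatedly (e.g. under pgbench) to exercise the sort's
    -- allocate/free cycle; OFFSET 100000 simply discards the result rows:
    SELECT * FROM t ORDER BY a OFFSET 100000;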
{
"msg_contents": "Hi,\n\nOn 2023-06-27 08:35:28 +0200, Ronan Dunklau wrote:\n> Le lundi 26 juin 2023, 23:03:48 CEST Andres Freund a �crit :\n> > Hi,\n> >\n> > On 2023-06-26 08:38:35 +0200, Ronan Dunklau wrote:\n> > > I hope what I'm trying to achieve is clearer that way. Maybe this patch is\n> > > not the best way to go about this, but since the memory allocator\n> > > behaviour can have such an impact it's a bit sad we have to leave half\n> > > the performance on the table because of it when there are easily\n> > > accessible knobs to avoid it.\n> > I'm *quite* doubtful this patch is the way to go. If we want to more\n> > tightly control memory allocation patterns, because we have more\n> > information than glibc, we should do that, rather than try to nudge glibc's\n> > malloc in random direction. In contrast a generic malloc() implementation\n> > we can have much more information about memory lifetimes etc due to memory\n> > contexts.\n>\n> Yes this is probably much more appropriate, but a much larger change with\n> greater risks of regression. Especially as we have to make sure we're not\n> overfitting our own code for a specific malloc implementation, to the detriment\n> of others.\n\nI think your approach is fundamentally overfitting our code to a specific\nmalloc implementation, in a way that's not tunable by mere mortals. It just\nseems like a dead end to me.\n\n\n> Except if you hinted we should write our own directly instead ?\n\nI don't think we should write our own malloc - we don't rely on it much\nourselves. And if we replace it, we need to care about mallocs performance\ncharacteristics a whole lot, because various libraries etc do heavily rely on\nit.\n\nHowever, I do think we should eventually avoid using malloc() for aset.c et\nal. malloc() is a general allocator, but at least for allocations below\nmaxBlockSize aset.c's doesn't do allocations in a way that really benefit from\nthat *at all*. It's not a lot of work to do such allocations on our own.\n\n\n> > We e.g. could keep a larger number of memory blocks reserved\n> > ourselves. Possibly by delaying the release of additionally held blocks\n> > until we have been idle for a few seconds or such.\n>\n> I think keeping work_mem around after it has been used a couple times make\n> sense. This is the memory a user is willing to dedicate to operations, after\n> all.\n\nThe biggest overhead of returning pages to the kernel is that that triggers\nzeroing the data during the next allocation. Particularly on multi-node\nservers that's surprisingly slow. It's most commonly not the brk() or mmap()\nthemselves that are the performance issue.\n\nIndeed, with your benchmark, I see that most of the time, on my dual Xeon Gold\n5215 workstation, is spent zeroing newly allocated pages during page\nfaults. That microarchitecture is worse at this than some others, but it's\nnever free (or cache friendly).\n\n\n> > WRT to the difference in TPS in the benchmark you mention - I suspect that\n> > we are doing something bad that needs to be improved regardless of the\n> > underlying memory allocator implementation. Due to the lack of detailed\n> > instructions I couldn't reproduce the results immediately.\n>\n> I re-attached the simple script I used. I've run this script with different\n> values for glibc_malloc_max_trim_threshold.\n\nFWIW, in my experience trimming the brk()ed region doesn't work reliably\nenough in real world postgres workloads to be worth relying on (from a memory\nusage POV). 
Sooner or later you're going to have longer lived allocations\nplaced that will prevent it from happening.\n\nI have played around with telling aset.c that certain contexts are long lived\nand using mmap() for those, to make it more likely that the libc malloc/free\ncan actually return memory to the system. I think that can be quite\nworthwhile.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 27 Jun 2023 11:17:46 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add GUC to tune glibc's malloc implementation."
},
{
"msg_contents": "Le mardi 27 juin 2023, 20:17:46 CEST Andres Freund a écrit :\n> > Yes this is probably much more appropriate, but a much larger change with\n> > greater risks of regression. Especially as we have to make sure we're not\n> > overfitting our own code for a specific malloc implementation, to the\n> > detriment of others.\n> \n> I think your approach is fundamentally overfitting our code to a specific\n> malloc implementation, in a way that's not tunable by mere mortals. It just\n> seems like a dead end to me.\n\nI see it as a way to have *some* sort of control over the malloc \nimplementation we use, instead of tuning our allocations pattern on top of it \nwhile treating it entirely as a black box. As for the tuning, I proposed \nearlier to replace this parameter expressed in terms of size as a \"profile\" \n(greedy / conservative) to make it easier to pick a sensible value.\n\n> \n> > Except if you hinted we should write our own directly instead ?\n> \n> I don't think we should write our own malloc - we don't rely on it much\n> ourselves. And if we replace it, we need to care about mallocs performance\n> characteristics a whole lot, because various libraries etc do heavily rely\n> on it.\n> \n> However, I do think we should eventually avoid using malloc() for aset.c et\n> al. malloc() is a general allocator, but at least for allocations below\n> maxBlockSize aset.c's doesn't do allocations in a way that really benefit\n> from that *at all*. It's not a lot of work to do such allocations on our\n> own.\n> > > We e.g. could keep a larger number of memory blocks reserved\n> > > ourselves. Possibly by delaying the release of additionally held blocks\n> > > until we have been idle for a few seconds or such.\n> > \n> > I think keeping work_mem around after it has been used a couple times make\n> > sense. This is the memory a user is willing to dedicate to operations,\n> > after all.\n> \n> The biggest overhead of returning pages to the kernel is that that triggers\n> zeroing the data during the next allocation. Particularly on multi-node\n> servers that's surprisingly slow. It's most commonly not the brk() or\n> mmap() themselves that are the performance issue.\n> \n> Indeed, with your benchmark, I see that most of the time, on my dual Xeon\n> Gold 5215 workstation, is spent zeroing newly allocated pages during page\n> faults. That microarchitecture is worse at this than some others, but it's\n> never free (or cache friendly).\n\nI'm not sure I see the practical difference between those, but that's \ninteresting. Were you able to reproduce my results ?\n\n> FWIW, in my experience trimming the brk()ed region doesn't work reliably\n> enough in real world postgres workloads to be worth relying on (from a\n> memory usage POV). Sooner or later you're going to have longer lived\n> allocations placed that will prevent it from happening.\n\nI'm not sure I follow: given our workload is clearly split at queries and \ntransactions boundaries, releasing memory at that time, I've assumed (and \nnoticed in practice, albeit not on a production system) that most memory at \nthe top of the heap would be trimmable as we don't keep much in between \nqueries / transactions.\n\n> \n> I have played around with telling aset.c that certain contexts are long\n> lived and using mmap() for those, to make it more likely that the libc\n> malloc/free can actually return memory to the system. 
I think that can be\n> > quite worthwhile.\n\nSo if I understand your different suggestions, we should: \n - use mmap ourselves for what we deem to be \"one-off\" allocations, to make \nsure that memory is not hanging around after we don't use\n - keep some pool allocated which will not be freed in between queries, but \nreused for the next time we need it. \n\nThank you for looking at this problem.\n\nRegards,\n\n--\nRonan Dunklau\n\n\n\n\n\n",
"msg_date": "Wed, 28 Jun 2023 07:26:03 +0200",
"msg_from": "Ronan Dunklau <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add GUC to tune glibc's malloc implementation."
},
{
"msg_contents": "Hi,\n\nOn 2023-06-28 07:26:03 +0200, Ronan Dunklau wrote:\n> I see it as a way to have *some* sort of control over the malloc\n> implementation we use, instead of tuning our allocations pattern on top of it\n> while treating it entirely as a black box. As for the tuning, I proposed\n> earlier to replace this parameter expressed in terms of size as a \"profile\"\n> (greedy / conservative) to make it easier to pick a sensible value.\n\nI don't think that makes it very usable - we'll still have idle connections\nuse up a lot more memory than now in some cases, and not in others, even\nthough it doesn't help. And it will be very heavily dependent on the OS and\nglibc version.\n\n\n> Le mardi 27 juin 2023, 20:17:46 CEST Andres Freund a �crit :\n> > > Except if you hinted we should write our own directly instead ?\n> > > > We e.g. could keep a larger number of memory blocks reserved\n> > > > ourselves. Possibly by delaying the release of additionally held blocks\n> > > > until we have been idle for a few seconds or such.\n> > >\n> > > I think keeping work_mem around after it has been used a couple times make\n> > > sense. This is the memory a user is willing to dedicate to operations,\n> > > after all.\n> >\n> > The biggest overhead of returning pages to the kernel is that that triggers\n> > zeroing the data during the next allocation. Particularly on multi-node\n> > servers that's surprisingly slow. It's most commonly not the brk() or\n> > mmap() themselves that are the performance issue.\n> >\n> > Indeed, with your benchmark, I see that most of the time, on my dual Xeon\n> > Gold 5215 workstation, is spent zeroing newly allocated pages during page\n> > faults. That microarchitecture is worse at this than some others, but it's\n> > never free (or cache friendly).\n>\n> I'm not sure I see the practical difference between those, but that's\n> interesting. Were you able to reproduce my results ?\n\nI see a bit smaller win than what you observed, but it is substantial.\n\n\nThe runtime difference between the \"default\" and \"cached\" malloc are almost\nentirely in these bits:\n\ncached:\n- 8.93% postgres libc.so.6 [.] __memmove_evex_unaligned_erms\n - __memmove_evex_unaligned_erms\n + 6.77% minimal_tuple_from_heap_tuple\n + 2.04% _int_realloc\n + 0.04% AllocSetRealloc\n 0.02% 0x56281094806f\n 0.02% 0x56281094e0bf\n\nvs\n\nuncached:\n\n- 14.52% postgres libc.so.6 [.] __memmove_evex_unaligned_erms\n 8.61% asm_exc_page_fault\n - 5.91% __memmove_evex_unaligned_erms\n + 5.78% minimal_tuple_from_heap_tuple\n 0.04% 0x560130a2900f\n 0.02% 0x560130a20faf\n + 0.02% AllocSetRealloc\n + 0.02% _int_realloc\n\n+ 3.81% postgres [kernel.vmlinux] [k] native_irq_return_iret\n+ 1.88% postgres [kernel.vmlinux] [k] __handle_mm_fault\n+ 1.76% postgres [kernel.vmlinux] [k] clear_page_erms\n+ 1.67% postgres [kernel.vmlinux] [k] get_mem_cgroup_from_mm\n+ 1.42% postgres [kernel.vmlinux] [k] cgroup_rstat_updated\n+ 1.00% postgres [kernel.vmlinux] [k] get_page_from_freelist\n+ 0.93% postgres [kernel.vmlinux] [k] mtree_range_walk\n\nNone of the latter are visible in a profile in the cached case.\n\nI.e. the overhead is encountering page faults and individually allocating the\nnecessary memory in the kernel.\n\n\nThis isn't surprising, I just wanted to make sure I entirely understand.\n\n\nPart of the reason this code is a bit worse is that it's using generation.c,\nwhich doesn't cache any part of of the context. 
Not that aset.c's level of\ncaching would help a lot, given that it caches the context itself, not later\nblocks.\n\n\n> > FWIW, in my experience trimming the brk()ed region doesn't work reliably\n> > enough in real world postgres workloads to be worth relying on (from a\n> > memory usage POV). Sooner or later you're going to have longer lived\n> > allocations placed that will prevent it from happening.\n>\n> I'm not sure I follow: given our workload is clearly split at queries and\n> transactions boundaries, releasing memory at that time, I've assumed (and\n> noticed in practice, albeit not on a production system) that most memory at\n> the top of the heap would be trimmable as we don't keep much in between\n> queries / transactions.\n\nThat's true for very simple workloads, but once you're beyond that you just\nneed some longer-lived allocation to happen. E.g. some relcache / catcache\nmiss during the query execution, and there's no exant memory in\nCacheMemoryContext, so a new block is allocated.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 28 Jun 2023 15:31:01 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add GUC to tune glibc's malloc implementation."
},
{
"msg_contents": "> On 29 Jun 2023, at 00:31, Andres Freund <[email protected]> wrote:\n> On 2023-06-28 07:26:03 +0200, Ronan Dunklau wrote:\n>> I see it as a way to have *some* sort of control over the malloc\n>> implementation we use, instead of tuning our allocations pattern on top of it\n>> while treating it entirely as a black box. As for the tuning, I proposed\n>> earlier to replace this parameter expressed in terms of size as a \"profile\"\n>> (greedy / conservative) to make it easier to pick a sensible value.\n> \n> I don't think that makes it very usable - we'll still have idle connections\n> use up a lot more memory than now in some cases, and not in others, even\n> though it doesn't help. And it will be very heavily dependent on the OS and\n> glibc version.\n\nBased on the comments in this thread and that no update has been posted\naddressing the objections I will mark this returned with feedback. Please feel\nfree to resubmit to a future CF.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Tue, 1 Aug 2023 22:54:29 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add GUC to tune glibc's malloc implementation."
}
] |
[
{
"msg_contents": "Hi,\r\n\r\n[1] is a ready-for-committer enhancement to pg_stat_progress_vacuum which exposes\r\nthe total number of indexes to vacuum and how many indexes have been vacuumed in\r\nthe current vacuum cycle.\r\n\r\nTo even further improve visibility into index vacuuming, it would be beneficial to have a\r\nfunction called pg_stat_get_vacuum_index(pid) that takes in a pid and returns the\r\nindexrelid of the index being processed.\r\n\r\nCurrently the only way to get the index being vacuumed by a process\r\nIs through os tools such as pstack.\r\n\r\nI had a patch for this as part of [1], but it was decided to handle this in a separate\r\ndiscussion.\r\n\r\nComments/feedback will be appreciated before sending out a v1 of the patch.\r\n\r\n\r\nRegards,\r\n\r\nSami Imseih\r\nAmazon Web Services (AWS)\r\n\r\n1. https://www.postgresql.org/message-id/flat/[email protected]\r\n\r\n\r\n\n\n\n\n\n\n\n\n\nHi,\n \n[1] is a ready-for-committer enhancement to pg_stat_progress_vacuum which exposes\nthe total number of indexes to vacuum and how many indexes have been vacuumed in\nthe current vacuum cycle. \n \nTo even further improve visibility into index vacuuming, it would be beneficial to have a\nfunction called pg_stat_get_vacuum_index(pid) that takes in a pid and returns the\r\n\nindexrelid of the index being processed.\n \nCurrently the only way to get the index being vacuumed by a process\r\n\nIs through os tools such as pstack.\n \nI had a patch for this as part of [1], but it was decided to handle this in a separate\ndiscussion.\n \nComments/feedback will be appreciated before sending out a v1 of the patch.\n \n \nRegards,\n \nSami Imseih\nAmazon Web Services (AWS)\n \n1. https://www.postgresql.org/message-id/flat/[email protected]",
"msg_date": "Thu, 22 Jun 2023 14:44:43 +0000",
"msg_from": "\"Imseih (AWS), Sami\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "New function to show index being vacuumed"
},
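To make the proposal concrete, here is a sketch of how the suggested function might be queried, assuming the interface described above (a pid in, the indexrelid out). pg_stat_get_vacuum_index() does not exist in core; the query is purely illustrative.

    -- Hypothetical query against the proposed function: map each vacuuming
    -- backend to the name of the index it is currently processing.
    SELECT a.pid,
           a.datname,
           c.relname AS index_being_vacuumed
    FROM pg_stat_activity AS a
    JOIN pg_class AS c ON c.oid = pg_stat_get_vacuum_index(a.pid);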
{
"msg_contents": "On Thu, 22 Jun 2023 at 16:45, Imseih (AWS), Sami <[email protected]> wrote:\n>\n> Hi,\n>\n> [1] is a ready-for-committer enhancement to pg_stat_progress_vacuum which exposes\n> the total number of indexes to vacuum and how many indexes have been vacuumed in\n> the current vacuum cycle.\n>\n> To even further improve visibility into index vacuuming, it would be beneficial to have a\n> function called pg_stat_get_vacuum_index(pid) that takes in a pid and returns the\n> indexrelid of the index being processed.\n\nI'm sorry for not having read (and not reading) the other thread yet,\nbut what was the reason we couldn't store that oid in a column in the\npg_s_p_vacuum-view?\n\nCould you summarize the other solutions that were considered for this issue?\n\nKind regards,\n\nMatthias van de Meent\nNeon, Inc.\n\n\n",
"msg_date": "Thu, 22 Jun 2023 17:55:27 +0200",
"msg_from": "Matthias van de Meent <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New function to show index being vacuumed"
},
{
"msg_contents": "> I'm sorry for not having read (and not reading) the other thread yet,\r\n> but what was the reason we couldn't store that oid in a column in the\r\n> pg_s_p_vacuum-view?\r\n\r\n\r\n> Could you summarize the other solutions that were considered for this issue?\r\n\r\nThanks for your feedback!\r\n\r\nThe reason we cannot stick the oid in pg_s_p_vacuum is because it will\r\nnot work for parallel vacuum as only the leader process has an entry\r\nin pg_s_p_vacuum.\r\n\r\nWith a function the leader or worker pid can be passed in to the function\r\nand will return the indexrelid being processed.\r\n\r\nRegards,\r\n\r\nSami\r\n\r\n\r\n\r\n\r\n\r\n\r\n",
"msg_date": "Thu, 22 Jun 2023 16:22:17 +0000",
"msg_from": "\"Imseih (AWS), Sami\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: New function to show index being vacuumed"
}
] |
[
{
"msg_contents": "Hi All,\n\nI've attached a couple of patches to allow ALTER OPERATOR to add\ncommutators, negators, hashes and merges to operators that lack them.\n\nThe need for this arose adding hash functions to the ltree type after the\noperator had been created without hash support[1]. There are potential\nissues with modifying these attributes that have been discussed\npreviously[2], but I understand that setting them, if they have not been\nset before, is ok.\n\nI belatedly realised that it may not be desirable or necessary to allow\nadding commutators and negators in ALTER OPERATOR because the linkage can\nalready be added when creating a new operator. I don't know what's best, so\nI thought I'd post this here and get feedback before removing anything.\n\nThe first patch is create_op_fixes_v1.patch and it includes some\nrefactoring in preparation for the ALTER OPERATOR changes and fixes a\ncouple of minor bugs in CREATE OPERATOR:\n- prevents self negation when filling in/creating an existing shell operator\n- remove reference to sort operator in the self negation error message as\nthe sort attribute appears to be deprecated in Postgres 8.3\n\nThe second patch is alter_op_v1.patch which contains the changes to ALTER\nOPERATOR and depends on create_op_fixes_v1.patch.\n\nAdditionally, I wasn't sure whether it was preferred to fail or succeed on\nALTERs that have no effect, such as adding hashes on an operator that\nalready allows them or disabling hashes on one that does not. I chose to\nraise an error when this happens, on the thinking it was more explicit and\nmade the code simpler, even though the end result would be what the user\nwanted.\n\nComments appreciated.\n\nThanks,\nTommy\n\n[1]\nhttps://www.postgresql.org/message-id/flat/CAEhP-W9ZEoHeaP_nKnPCVd_o1c3BAUvq1gWHrq8EbkNRiS9CvQ%40mail.gmail.com\n[2] https://www.postgresql.org/message-id/flat/3348985.V7xMLFDaJO@dinodell",
"msg_date": "Thu, 22 Jun 2023 18:35:10 +0200",
"msg_from": "Tommy Pavlicek <[email protected]>",
"msg_from_op": true,
"msg_subject": "[PATCH] Extend ALTER OPERATOR to support adding commutator, negator,\n hashes, and merges"
},
{
"msg_contents": "Tommy Pavlicek <[email protected]> writes:\n> I've attached a couple of patches to allow ALTER OPERATOR to add\n> commutators, negators, hashes and merges to operators that lack them.\n\nPlease add this to the upcoming commitfest [1], to ensure we don't\nlose track of it.\n\n> The first patch is create_op_fixes_v1.patch and it includes some\n> refactoring in preparation for the ALTER OPERATOR changes and fixes a\n> couple of minor bugs in CREATE OPERATOR:\n> - prevents self negation when filling in/creating an existing shell operator\n> - remove reference to sort operator in the self negation error message as\n> the sort attribute appears to be deprecated in Postgres 8.3\n\nHmm, yeah, I bet nobody has looked at those edge cases in awhile.\n\n> Additionally, I wasn't sure whether it was preferred to fail or succeed on\n> ALTERs that have no effect, such as adding hashes on an operator that\n> already allows them or disabling hashes on one that does not. I chose to\n> raise an error when this happens, on the thinking it was more explicit and\n> made the code simpler, even though the end result would be what the user\n> wanted.\n\nYou could argue that both ways I guess. We definitely need to raise error\nif the command tries to change an existing nondefault setting, since that\nmight break things as per previous discussion. But perhaps rejecting\nan attempt to set the existing setting is overly nanny-ish. Personally\nI think I'd lean to \"don't throw an error if we don't have to\", but I'm\nnot strongly set on that position.\n\n(Don't we have existing precedents that apply here? I can't offhand\nthink of any existing ALTER commands that would reject no-op requests,\nbut maybe that's not a direct precedent.)\n\n\t\t\tregards, tom lane\n\n[1] https://commitfest.postgresql.org/43/\n\n\n",
"msg_date": "Thu, 22 Jun 2023 12:47:48 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Extend ALTER OPERATOR to support adding commutator,\n negator, hashes, and merges"
},
{
"msg_contents": "Tom Lane <[email protected]> writes:\n\n> Tommy Pavlicek <[email protected]> writes:\n>\n>> Additionally, I wasn't sure whether it was preferred to fail or succeed on\n>> ALTERs that have no effect, such as adding hashes on an operator that\n>> already allows them or disabling hashes on one that does not. I chose to\n>> raise an error when this happens, on the thinking it was more explicit and\n>> made the code simpler, even though the end result would be what the user\n>> wanted.\n>\n> You could argue that both ways I guess. We definitely need to raise error\n> if the command tries to change an existing nondefault setting, since that\n> might break things as per previous discussion. But perhaps rejecting\n> an attempt to set the existing setting is overly nanny-ish. Personally\n> I think I'd lean to \"don't throw an error if we don't have to\", but I'm\n> not strongly set on that position.\n>\n> (Don't we have existing precedents that apply here? I can't offhand\n> think of any existing ALTER commands that would reject no-op requests,\n> but maybe that's not a direct precedent.)\n\nSince it only supports adding these operations if they don't already\nexist, should it not be ALTER OPERATOR ADD <thing>, not SET <thing>?\n\nThat makes it natural to add an IF NOT EXISTS clause, like ALTER TABLE\nADD COLUMN has, to make it a no-op instead of an error.\n\n> \t\t\tregards, tom lane\n\n- ilmari\n\n\n",
"msg_date": "Thu, 22 Jun 2023 17:54:54 +0100",
"msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Extend ALTER OPERATOR to support adding commutator,\n negator, hashes, and merges"
},
{
"msg_contents": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <[email protected]> writes:\n> Tom Lane <[email protected]> writes:\n>> (Don't we have existing precedents that apply here? I can't offhand\n>> think of any existing ALTER commands that would reject no-op requests,\n>> but maybe that's not a direct precedent.)\n\n> Since it only supports adding these operations if they don't already\n> exist, should it not be ALTER OPERATOR ADD <thing>, not SET <thing>?\n> That makes it natural to add an IF NOT EXISTS clause, like ALTER TABLE\n> ADD COLUMN has, to make it a no-op instead of an error.\n\nHmm, maybe. But it feels like choosing syntax and semantics based\non what might be only a temporary implementation restriction. We\ncertainly don't handle any other property-setting commands that way.\n\nAdmittedly, \"can't change an existing setting\" is likely to be pretty\npermanent in this case, just because I don't see a use-case for it\nthat'd justify the work involved. (My wife recently gave me a coffee\ncup that says \"Nothing is as permanent as a temporary fix.\") But\nstill, if someone did show up and do that work, we'd regret this\nchoice of syntax because it'd then be uselessly unlike every other\nALTER command.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 22 Jun 2023 13:36:14 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Extend ALTER OPERATOR to support adding commutator,\n negator, hashes, and merges"
},
{
"msg_contents": "Tom Lane <[email protected]> writes:\n\n> Please add this to the upcoming commitfest [1], to ensure we don't\n> lose track of it.\n\nI've added a single patch here: https://commitfest.postgresql.org/43/4389/\n\nIt wasn't obvious whether I should create a second commitfest entry\nbecause I've included 2 patches so I've just done 1 to begin with. On\nthat note, is it preferred here to split patches of this size into\nseparate patches, and if so, additionally, separate threads?\n\nTom Lane <[email protected]> writes:\n\n> > Additionally, I wasn't sure whether it was preferred to fail or succeed on\n> > ALTERs that have no effect, such as adding hashes on an operator that\n> > already allows them or disabling hashes on one that does not. I chose to\n> > raise an error when this happens, on the thinking it was more explicit and\n> > made the code simpler, even though the end result would be what the user\n> > wanted.\n>\n> You could argue that both ways I guess. We definitely need to raise error\n> if the command tries to change an existing nondefault setting, since that\n> might break things as per previous discussion. But perhaps rejecting\n> an attempt to set the existing setting is overly nanny-ish. Personally\n> I think I'd lean to \"don't throw an error if we don't have to\", but I'm\n> not strongly set on that position.\n>\n> (Don't we have existing precedents that apply here? I can't offhand\n> think of any existing ALTER commands that would reject no-op requests,\n> but maybe that's not a direct precedent.)\n\nMy initial thinking behind the error for a no-op was largely driven by\nthe existence of 'DROP.. IF EXISTS'. However, I did some ad hoc\ntesting on ALTER commands and it does seem that they mostly allow\nno-ops. I did find that renaming an object to the same name will fail\ndue to the object already existing, but that seems to be more of a\ncoincidence than a design decision to me. Given this, I also lean\ntowards allowing the no-ops and will change it unless there are\nobjections.\n\n\n",
"msg_date": "Fri, 23 Jun 2023 12:34:38 +0200",
"msg_from": "Tommy Pavlicek <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Extend ALTER OPERATOR to support adding commutator,\n negator, hashes, and merges"
},
{
"msg_contents": "Tommy Pavlicek <[email protected]> writes:\n> I've added a single patch here: https://commitfest.postgresql.org/43/4389/\n\n> It wasn't obvious whether I should create a second commitfest entry\n> because I've included 2 patches so I've just done 1 to begin with. On\n> that note, is it preferred here to split patches of this size into\n> separate patches, and if so, additionally, separate threads?\n\nNo, our commitfest infrastructure is unable to deal with patches that have\ninterdependencies unless they're presented in a single email. So just use\none thread, and be sure to attach all the patches each time.\n\n(BTW, while you seem to have gotten away with it so far, it's usually\nadvisable to name the patch files like 0001-foo.patch, 0002-bar.patch,\netc, to make sure the cfbot understands what order to apply them in.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 23 Jun 2023 07:21:46 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Extend ALTER OPERATOR to support adding commutator,\n negator, hashes, and merges"
},
{
"msg_contents": "On Fri, Jun 23, 2023 at 12:21 PM Tom Lane <[email protected]> wrote:\n>\n> Tommy Pavlicek <[email protected]> writes:\n> > I've added a single patch here: https://commitfest.postgresql.org/43/4389/\n>\n> > It wasn't obvious whether I should create a second commitfest entry\n> > because I've included 2 patches so I've just done 1 to begin with. On\n> > that note, is it preferred here to split patches of this size into\n> > separate patches, and if so, additionally, separate threads?\n>\n> No, our commitfest infrastructure is unable to deal with patches that have\n> interdependencies unless they're presented in a single email. So just use\n> one thread, and be sure to attach all the patches each time.\n>\n> (BTW, while you seem to have gotten away with it so far, it's usually\n> advisable to name the patch files like 0001-foo.patch, 0002-bar.patch,\n> etc, to make sure the cfbot understands what order to apply them in.)\n>\n> regards, tom lane\n\nThanks.\n\nI've attached a new version of the ALTER OPERATOR patch that allows\nno-ops. It should be ready to review now.",
"msg_date": "Sun, 2 Jul 2023 15:42:53 +0100",
"msg_from": "Tommy Pavlicek <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Extend ALTER OPERATOR to support adding commutator,\n negator, hashes, and merges"
},
{
"msg_contents": "/*\n * AlterOperator\n * routine implementing ALTER OPERATOR <operator> SET (option = ...).\n *\n * Currently, only RESTRICT and JOIN estimator functions can be changed.\n */\nObjectAddress\nAlterOperator(AlterOperatorStmt *stmt)\n\nThe above comment needs to change, other than that, it passed the\ntest, works as expected.\n\n\n",
"msg_date": "Mon, 25 Sep 2023 18:52:03 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Extend ALTER OPERATOR to support adding commutator,\n negator, hashes, and merges"
},
{
"msg_contents": "in doc/src/sgml/ref/alter_operator.sgml\n\n <varlistentry>\n <term><literal>HASHES</literal></term>\n <listitem>\n <para>\n Indicates this operator can support a hash join. Can only be set\nwhen the operator does not support a hash join.\n </para>\n </listitem>\n </varlistentry>\n\n <varlistentry>\n <term><literal>MERGES</literal></term>\n <listitem>\n <para>\n Indicates this operator can support a merge join. Can only be set\nwhen the operator does not support a merge join.\n </para>\n </listitem>\n </varlistentry>\n------------------------\nif the operator cannot support hash/merge join, it can't do ALTER\nOPERATOR oprname(left_arg, right_arg) SET (HASHES/MERGES = false);\n\nI think it means:\nCan only be set when the operator does support a hash/merge join. Once\nset to true, it cannot be reset to false.\n\n\n",
"msg_date": "Mon, 25 Sep 2023 22:55:01 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Extend ALTER OPERATOR to support adding commutator,\n negator, hashes, and merges"
},
{
"msg_contents": "Tommy Pavlicek <[email protected]> writes:\n> I've attached a new version of the ALTER OPERATOR patch that allows\n> no-ops. It should be ready to review now.\n\nI got around to looking through this finally (sorry about the delay).\nI'm mostly on board with the functionality, with the exception that\nI don't see why we should allow ALTER OPERATOR to cause new shell\noperators to be created. The argument for allowing that in CREATE\nOPERATOR was mainly to allow a linked pair of operators to be created\nwithout a lot of complexity (specifically, being careful to specify\nthe commutator or negator linkage only in the second CREATE, which\nis a rule that requires another exception for a self-commutator).\nHowever, if you're using ALTER OPERATOR then you might as well create\nboth operators first and then link them with an ALTER command.\nIn fact, I don't really see a use-case where the operators wouldn't\nboth exist; isn't this feature mainly to allow retrospective\ncorrection of omitted linkages? So I think allowing ALTER to create a\nsecond operator is more likely to allow mistakes to sneak by than to\ndo anything useful --- and they will be mistakes you can't correct\nexcept by starting over. I'd even question whether we want to let\nALTER establish a linkage to an existing shell operator, rather than\ninsisting you turn it into a valid operator via CREATE first.\n\nIf we implement it with that restriction then I don't think the\nrefactorization done in 0001 is correct, or at least not ideal.\n\n(In any case, it seems like a bad idea that the command reference\npages make no mention of this stuff about shell operators. It's\nexplained in 38.15. Operator Optimization Information, but it'd\nbe worth at least alluding to that section here. Or maybe we\nshould move that info to CREATE OPERATOR?)\n\nMore generally, you muttered something about 0001 fixing some\nexisting bugs, but if so I can't see those trees for the forest of\nrefactorization. I'd suggest splitting any bug fixes apart from\nthe pure-refactorization step.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 28 Sep 2023 16:18:34 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Extend ALTER OPERATOR to support adding commutator,\n negator, hashes, and merges"
},
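For readers following the thread, the "create both operators first, then link them" workflow Tom describes would look roughly like this with the patch applied. The operator and function names are made up, and the SET (COMMUTATOR = ..., NEGATOR = ...) options are the ones this patch adds, so the exact spelling reflects the patch under discussion rather than a pre-existing syntax.

    -- Sketch of retrospectively adding an omitted linkage (made-up names):
    CREATE FUNCTION my_eq(point, point) RETURNS boolean
        LANGUAGE sql IMMUTABLE AS 'SELECT $1 ~= $2';
    CREATE FUNCTION my_ne(point, point) RETURNS boolean
        LANGUAGE sql IMMUTABLE AS 'SELECT NOT ($1 ~= $2)';

    CREATE OPERATOR === (LEFTARG = point, RIGHTARG = point, FUNCTION = my_eq);
    CREATE OPERATOR !== (LEFTARG = point, RIGHTARG = point, FUNCTION = my_ne);

    -- Both operators already exist as valid (non-shell) operators;
    -- now add the missing optimization linkage:
    ALTER OPERATOR === (point, point) SET (COMMUTATOR = ===, NEGATOR = !==);
    -- Per the discussion, commutator/negator links come in pairs, so !==
    -- is expected to point back at === afterwards.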
{
"msg_contents": "On Thu, Sep 28, 2023 at 9:18 PM Tom Lane <[email protected]> wrote:\n>\n> Tommy Pavlicek <[email protected]> writes:\n> > I've attached a new version of the ALTER OPERATOR patch that allows\n> > no-ops. It should be ready to review now.\n>\n> I got around to looking through this finally (sorry about the delay).\n> I'm mostly on board with the functionality, with the exception that\n> I don't see why we should allow ALTER OPERATOR to cause new shell\n> operators to be created. The argument for allowing that in CREATE\n> OPERATOR was mainly to allow a linked pair of operators to be created\n> without a lot of complexity (specifically, being careful to specify\n> the commutator or negator linkage only in the second CREATE, which\n> is a rule that requires another exception for a self-commutator).\n> However, if you're using ALTER OPERATOR then you might as well create\n> both operators first and then link them with an ALTER command.\n> In fact, I don't really see a use-case where the operators wouldn't\n> both exist; isn't this feature mainly to allow retrospective\n> correction of omitted linkages? So I think allowing ALTER to create a\n> second operator is more likely to allow mistakes to sneak by than to\n> do anything useful --- and they will be mistakes you can't correct\n> except by starting over. I'd even question whether we want to let\n> ALTER establish a linkage to an existing shell operator, rather than\n> insisting you turn it into a valid operator via CREATE first.\n>\n> If we implement it with that restriction then I don't think the\n> refactorization done in 0001 is correct, or at least not ideal.\n>\n> (In any case, it seems like a bad idea that the command reference\n> pages make no mention of this stuff about shell operators. It's\n> explained in 38.15. Operator Optimization Information, but it'd\n> be worth at least alluding to that section here. Or maybe we\n> should move that info to CREATE OPERATOR?)\n>\n> More generally, you muttered something about 0001 fixing some\n> existing bugs, but if so I can't see those trees for the forest of\n> refactorization. I'd suggest splitting any bug fixes apart from\n> the pure-refactorization step.\n>\n> regards, tom lane\n\nThanks Tom.\n\nThe rationale behind the shell operator and that part of section 38.15\nof the documentation had escaped me, but what you're saying makes\ncomplete sense. Based on your comments, I've made some changes:\n\n1. I've isolated the bug fixes (fixing the error message and\ndisallowing self negation when filling in a shell operator) into\n0001-bug-fixes-v3.patch.\n2. I've scaled back most of the refactoring as I agree it no longer makes sense.\n3. I updated the logic to prevent the creation of or linking to shell operators.\n4. I made further updates to the documentation including referencing\n38.15 directly in the CREATE and ALTER pages (It's easy to miss if\nonly 38.14 is referenced) and moved the commentary about creating\ncommutators and negators into the CREATE section as with the the ALTER\nchanges it now seems specific to CREATE. I didn't move the rest of\n38.15 as I think this applies to both CREATE and ALTER.\n\nI did notice one further potential bug. 
When creating an operator and\nadding a commutator, PostgreSQL only links the commutator back to the\noperator if the commutator has no commutator of its own, but the\ncreate operation succeeds regardless of whether this linkage happens.\n\nIn other words, if A and B are a pair of commutators and one creates\nanother operator, C, with A as its commutator, then C will link to A,\nbut A will still link to B (and B to A). It's not clear to me if this\nin itself is a problem, but unless I've misunderstood something\noperator C must be the same as B so it implies an error by the user\nand there could also be issues if A was deleted since C would continue\nto refer to the deleted A.\n\nThe same applies for negators and alter operations.\n\nDo you know if this behaviour is intentional or if I've missed\nsomething because it seems undesirable to me. If it is a bug, then I\nthink I can see how to fix it, but wanted to ask before making any\nchanges.\n\nOn Mon, Sep 25, 2023 at 11:52 AM jian he <[email protected]> wrote:\n>\n> /*\n> * AlterOperator\n> * routine implementing ALTER OPERATOR <operator> SET (option = ...).\n> *\n> * Currently, only RESTRICT and JOIN estimator functions can be changed.\n> */\n> ObjectAddress\n> AlterOperator(AlterOperatorStmt *stmt)\n>\n> The above comment needs to change, other than that, it passed the\n> test, works as expected.\n\nThanks, added a comment.\n\n> Can only be set when the operator does support a hash/merge join. Once\n> set to true, it cannot be reset to false.\n\nYes, I updated the wording. Is it clearer now?",
"msg_date": "Tue, 10 Oct 2023 21:12:50 +0100",
"msg_from": "Tommy Pavlicek <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Extend ALTER OPERATOR to support adding commutator,\n negator, hashes, and merges"
},
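A concrete rendering of the A/B/C situation described above, with made-up operator and function names: before the follow-up fix, the third CREATE OPERATOR succeeds while leaving the links asymmetric; with the behaviour agreed later in the thread it would be rejected instead.

    -- "A" and "B" become a commutator pair; "C" then names A as its commutator.
    CREATE FUNCTION dummy_lt(int, int) RETURNS boolean
        LANGUAGE sql IMMUTABLE AS 'SELECT $1 < $2';
    CREATE FUNCTION dummy_gt(int, int) RETURNS boolean
        LANGUAGE sql IMMUTABLE AS 'SELECT $1 > $2';

    CREATE OPERATOR <<< (LEFTARG = int, RIGHTARG = int, FUNCTION = dummy_lt);  -- A
    CREATE OPERATOR >>> (LEFTARG = int, RIGHTARG = int, FUNCTION = dummy_gt,
                         COMMUTATOR = <<<);        -- B: A and B now link to each other
    CREATE OPERATOR ### (LEFTARG = int, RIGHTARG = int, FUNCTION = dummy_gt,
                         COMMUTATOR = <<<);        -- C: links to A, but A still links to B

    -- Inspect the resulting (asymmetric) linkage:
    SELECT oprname, oprcom::regoperator
    FROM pg_operator
    WHERE oprname IN ('<<<', '>>>', '###');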
{
"msg_contents": "Tommy Pavlicek <[email protected]> writes:\n> I did notice one further potential bug. When creating an operator and\n> adding a commutator, PostgreSQL only links the commutator back to the\n> operator if the commutator has no commutator of its own, but the\n> create operation succeeds regardless of whether this linkage happens.\n\n> In other words, if A and B are a pair of commutators and one creates\n> another operator, C, with A as its commutator, then C will link to A,\n> but A will still link to B (and B to A). It's not clear to me if this\n> in itself is a problem, but unless I've misunderstood something\n> operator C must be the same as B so it implies an error by the user\n> and there could also be issues if A was deleted since C would continue\n> to refer to the deleted A.\n\nYeah, it'd make sense to tighten that up. Per the discussion so far,\nwe should not allow an operator's commutator/negator links to change\nonce set, so overwriting the existing link with a different value\nwould be wrong. But allowing creation of the new operator to proceed\nwith a different outcome than expected isn't good either. I think\nwe should start throwing an error for that.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 10 Oct 2023 16:32:07 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Extend ALTER OPERATOR to support adding commutator,\n negator, hashes, and merges"
},
{
"msg_contents": "On Tue, Oct 10, 2023 at 9:32 PM Tom Lane <[email protected]> wrote:\n>\n> Tommy Pavlicek <[email protected]> writes:\n> > I did notice one further potential bug. When creating an operator and\n> > adding a commutator, PostgreSQL only links the commutator back to the\n> > operator if the commutator has no commutator of its own, but the\n> > create operation succeeds regardless of whether this linkage happens.\n>\n> > In other words, if A and B are a pair of commutators and one creates\n> > another operator, C, with A as its commutator, then C will link to A,\n> > but A will still link to B (and B to A). It's not clear to me if this\n> > in itself is a problem, but unless I've misunderstood something\n> > operator C must be the same as B so it implies an error by the user\n> > and there could also be issues if A was deleted since C would continue\n> > to refer to the deleted A.\n>\n> Yeah, it'd make sense to tighten that up. Per the discussion so far,\n> we should not allow an operator's commutator/negator links to change\n> once set, so overwriting the existing link with a different value\n> would be wrong. But allowing creation of the new operator to proceed\n> with a different outcome than expected isn't good either. I think\n> we should start throwing an error for that.\n>\n> regards, tom lane\n\nThanks.\n\nI've added another patch (0002-require_unused_neg_com-v1.patch) that\nprevents using a commutator or negator that's already part of a pair.\nThe only other changes from my email yesterday are that in the ALTER\ncommand I moved the post alter hook to after OperatorUpd and the\naddition of tests to verify that we can't use an existing commutator\nor negator with the ALTER command.\n\nI believe this can all be looked at again.\n\nCheers,\nTommy",
"msg_date": "Wed, 11 Oct 2023 16:11:00 +0100",
"msg_from": "Tommy Pavlicek <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Extend ALTER OPERATOR to support adding commutator,\n negator, hashes, and merges"
},
{
"msg_contents": "errmsg(\"operator attribute \\\"negator\\\" cannot be changed if it has\nalready been set\")));\nI feel like the above message is not very helpful.\n\nSomething like the following may be more helpful for diagnosis.\nerrmsg(\"operator %s's attribute \\\"negator\\\" cannot be changed if it\nhas already been set\", operatorname)));\n\nwhen I \"git apply\", I've noticed some minor whitespace warning.\n\nOther than that, it looks fine.\n\n\n",
"msg_date": "Thu, 12 Oct 2023 11:53:53 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Extend ALTER OPERATOR to support adding commutator,\n negator, hashes, and merges"
},
{
"msg_contents": "jian he <[email protected]> writes:\n> errmsg(\"operator attribute \\\"negator\\\" cannot be changed if it has\n> already been set\")));\n> I feel like the above message is not very helpful.\n\nI think it's okay to be concise about this as long as the operator\nwe're referring to is the target of the ALTER. I agree that when\nwe're complaining about some *other* operator, we'd better spell\nout which one we mean, and I made some changes to the patch to\nimprove that.\n\nPushed after a round of editorialization -- mostly cosmetic\nstuff, except for tweaking some error messages. I shortened the\ntest cases a bit too, as I thought they were somewhat excessive\nto have as a permanent thing.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 20 Oct 2023 12:33:03 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Extend ALTER OPERATOR to support adding commutator,\n negator, hashes, and merges"
},
{
"msg_contents": "Re: Tommy Pavlicek\n> I've added another patch (0002-require_unused_neg_com-v1.patch) that\n> prevents using a commutator or negator that's already part of a pair.\n\nHmm. I agree with the general idea of adding sanity checks, but this\nmight be overzealous:\n\nThis change is breaking pgsphere which has <@ @> operator pairs, but\nfor historical reasons also includes alternative spellings of these\noperators (both called @ with swapped operand types) which now\nexplodes because we can't add them with the \"proper\" commutator and\nnegators declared (which point to the canonical <@ @> !<@ !@>\noperators).\n\nhttps://github.com/postgrespro/pgsphere/blob/master/pgs_moc_compat.sql.in\n\nWe might be able to simply delete the @ operators, but doesn't this\nnew check break the general possibility to have more than one spelling\nfor the same operator?\n\nChristoph\n\n\n",
"msg_date": "Tue, 24 Oct 2023 15:51:16 +0200",
"msg_from": "Christoph Berg <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Extend ALTER OPERATOR to support adding commutator,\n negator, hashes, and merges"
},
{
"msg_contents": "Christoph Berg <[email protected]> writes:\n> This change is breaking pgsphere which has <@ @> operator pairs, but\n> for historical reasons also includes alternative spellings of these\n> operators (both called @ with swapped operand types) which now\n> explodes because we can't add them with the \"proper\" commutator and\n> negators declared (which point to the canonical <@ @> !<@ !@>\n> operators).\n\nShould have guessed that somebody might be depending on the previous\nsquishy behavior. Still, I can't see how the above situation is a\ngood idea. Commutators/negators should come in pairs, not have\ncompletely random links. I think it's only accidental that this\nsetup isn't triggering other strange behavior.\n\n> We might be able to simply delete the @ operators, but doesn't this\n> new check break the general possibility to have more than one spelling\n> for the same operator?\n\nYou can have more than one operator atop the same function.\nBut why didn't you make the @ operators commutators of each other,\nrather than this mess?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 24 Oct 2023 11:16:54 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Extend ALTER OPERATOR to support adding commutator,\n negator, hashes, and merges"
},
{
"msg_contents": "Re: Tom Lane\n> > We might be able to simply delete the @ operators, but doesn't this\n> > new check break the general possibility to have more than one spelling\n> > for the same operator?\n> \n> You can have more than one operator atop the same function.\n> But why didn't you make the @ operators commutators of each other,\n> rather than this mess?\n\nHistorical something.\n\nYou are right that the commutators could be fixed that way, but the\nnegators are a different question. There is no legacy spelling for\nthese.\n\nAnyway, if this doesn't raise any \"oh we didn't think of this\"\nconcerns, we'll just remove the old operators in pgsphere.\n\nChristoph\n\n\n",
"msg_date": "Tue, 24 Oct 2023 17:31:43 +0200",
"msg_from": "Christoph Berg <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Extend ALTER OPERATOR to support adding commutator,\n negator, hashes, and merges"
},
{
"msg_contents": "Christoph Berg <[email protected]> writes:\n> Anyway, if this doesn't raise any \"oh we didn't think of this\"\n> concerns, we'll just remove the old operators in pgsphere.\n\nWell, the idea was exactly to forbid that sort of setup.\nHowever, if we get sufficient pushback maybe we should\nreconsider --- for example, maybe it'd be sane to enforce\nthe restriction in ALTER but not CREATE?\n\nI'm inclined to wait and see if there are more complaints.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 24 Oct 2023 11:42:15 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Extend ALTER OPERATOR to support adding commutator,\n negator, hashes, and merges"
},
{
"msg_contents": "On Fri, Oct 20, 2023 at 5:33 PM Tom Lane <[email protected]> wrote:\n> Pushed after a round of editorialization -- mostly cosmetic\n> stuff, except for tweaking some error messages. I shortened the\n> test cases a bit too, as I thought they were somewhat excessive\n> to have as a permanent thing.\n\nThanks Tom.\n\nOn Tue, Oct 24, 2023 at 2:51 PM Christoph Berg <[email protected]> wrote:\n>\n> Re: Tommy Pavlicek\n> > I've added another patch (0002-require_unused_neg_com-v1.patch) that\n> > prevents using a commutator or negator that's already part of a pair.\n>\n> Hmm. I agree with the general idea of adding sanity checks, but this\n> might be overzealous:\n\nI can't add much beyond what Tom said, but I think this does go a bit\nbeyond a sanity check. I forgot to mention it in my previous message,\nbut the main reason I noticed this was because the DELETE operator\ncode cleans up commutator and negator links to the operator being\ndeleted and that code expects each to be part of exactly a pair.\n\n\n",
"msg_date": "Tue, 31 Oct 2023 16:10:34 +0000",
"msg_from": "Tommy Pavlicek <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Extend ALTER OPERATOR to support adding commutator,\n negator, hashes, and merges"
},
{
"msg_contents": "Re: Tom Lane\n> Well, the idea was exactly to forbid that sort of setup.\n\nFwiw, pgsphere has remove the problematic operators now:\n\nhttps://github.com/postgrespro/pgsphere/commit/e810f5ddd827881b06a92a303c5c9fbf997b892e\n\n> However, if we get sufficient pushback maybe we should\n> reconsider --- for example, maybe it'd be sane to enforce\n> the restriction in ALTER but not CREATE?\n\nHmm, that seems backwards, I'd expect that CREATE might have some\nchecks that could circumvent using ALTER if I really insisted. If\nCREATE can create things that I can't reach by ALTERing existing other\nthings, that's weird.\n\nLet's keep it like it is now in PG17.\n\nChristoph\n\n\n",
"msg_date": "Tue, 31 Oct 2023 17:15:52 +0100",
"msg_from": "Christoph Berg <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Extend ALTER OPERATOR to support adding commutator,\n negator, hashes, and merges"
},
{
"msg_contents": "Re: To Tom Lane\n> Let's keep it like it is now in PG17.\n\nLate followup news: This feature has actually found a bug in\npostgresql-debversion:\n\n CREATE OPERATOR > (\n LEFTARG = debversion,\n RIGHTARG = debversion,\n COMMUTATOR = <,\n- NEGATOR = >=,\n+ NEGATOR = <=,\n RESTRICT = scalargtsel,\n JOIN = scalargtjoinsel\n );\n\nhttps://salsa.debian.org/postgresql/postgresql-debversion/-/commit/8ef08ccbea1438468249b0e94048b1a8a25fc625#000e84a71f8a28b762658375c194b25d529336f3\n\nSo, thanks!\n\nChristoph\n\n\n",
"msg_date": "Thu, 12 Sep 2024 17:46:55 +0200",
"msg_from": "Christoph Berg <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Extend ALTER OPERATOR to support adding commutator,\n negator, hashes, and merges"
},
{
"msg_contents": "Hi,\n\nOn Tue, Oct 24, 2023 at 11:42:15AM -0400, Tom Lane wrote:\n> Christoph Berg <[email protected]> writes:\n> > Anyway, if this doesn't raise any \"oh we didn't think of this\"\n> > concerns, we'll just remove the old operators in pgsphere.\n> \n> Well, the idea was exactly to forbid that sort of setup.\n> However, if we get sufficient pushback maybe we should\n> reconsider --- for example, maybe it'd be sane to enforce\n> the restriction in ALTER but not CREATE?\n> \n> I'm inclined to wait and see if there are more complaints.\n\nFWIW, rdkit also fails, but that seems to be an ancient thing as well:\n\nhttps://github.com/rdkit/rdkit/issues/7843\n\nI guess there's no way to make that error a bit more helpful, like\nprinting out the offenbding SQL command, presumably because we are\nloding an extension?\n\n\nMichael\n\n\n",
"msg_date": "Wed, 25 Sep 2024 00:07:59 +0200",
"msg_from": "Michael Banck <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Extend ALTER OPERATOR to support adding commutator,\n negator, hashes, and merges"
},
{
"msg_contents": "Re: Michael Banck\n> I guess there's no way to make that error a bit more helpful, like\n> printing out the offenbding SQL command, presumably because we are\n> loding an extension?\n\nI wish there was. The error reporting from failing extension scripts\nis really bad with no context at all, it has hit me a few times in the\npast already.\n\nChristoph\n\n\n",
"msg_date": "Thu, 26 Sep 2024 16:51:54 +0200",
"msg_from": "Christoph Berg <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Extend ALTER OPERATOR to support adding commutator,\n negator, hashes, and merges"
},
{
"msg_contents": "Christoph Berg <[email protected]> writes:\n> I wish there was. The error reporting from failing extension scripts\n> is really bad with no context at all, it has hit me a few times in the\n> past already.\n\nNobody's spent any work on that :-(. A really basic reporting\nfacility is not hard to add, as in the attached finger exercise.\nThe trouble with it can be explained by showing what I get after\nintentionally breaking a script file command:\n\nregression=# create extension cube;\nERROR: syntax error at or near \"CREAT\"\nLINE 16: CREAT FUNCTION cube_send(cube)\n ^\nQUERY: /* contrib/cube/cube--1.4--1.5.sql */\n\n-- complain if script is sourced in psql, rather than via ALTER EXTENSION\n\n\n-- Remove @ and ~\nDROP OPERATOR @ (cube, cube);\nDROP OPERATOR ~ (cube, cube);\n\n-- Add binary input/output handlers\nCREATE FUNCTION cube_recv(internal)\nRETURNS cube\nAS '$libdir/cube'\nLANGUAGE C IMMUTABLE STRICT PARALLEL SAFE;\n\nCREAT FUNCTION cube_send(cube)\nRETURNS bytea\nAS '$libdir/cube'\nLANGUAGE C IMMUTABLE STRICT PARALLEL SAFE;\n\nALTER TYPE cube SET ( RECEIVE = cube_recv, SEND = cube_send );\n\nCONTEXT: extension script file \"/home/postgres/install/share/extension/cube--1.4--1.5.sql\"\n\nSo the first part of that is great, but if your script file is\nlarge you probably won't be happy about having the whole thing\nrepeated in the \"QUERY\" field. So this needs some work on\nuser-friendliness.\n\nI'm inclined to think that maybe we'd be best off keeping the server\nend of it straightforward, and trying to teach psql to abbreviate the\nQUERY field in a useful way. IIRC you get this same type of problem\nwith very large SQL-language functions and suchlike.\n\nAlso, I believe this doesn't help much for non-syntax errors\n(those that aren't reported with an error location). It might\nbe interesting to use the RawStmt.stmt_location/stmt_len fields\nfor the current parsetree to identify what to report, but\nI've not dug further than this.\n\n\t\t\tregards, tom lane",
"msg_date": "Thu, 26 Sep 2024 11:51:47 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Extend ALTER OPERATOR to support adding commutator,\n negator, hashes, and merges"
},
{
"msg_contents": "Re: Tom Lane\n> So the first part of that is great, but if your script file is\n> large you probably won't be happy about having the whole thing\n> repeated in the \"QUERY\" field. So this needs some work on\n> user-friendliness.\n\nDoes this really have to be addressed? It would be way better than it\nis now, and errors during extension creation are rare and mostly for\ndevelopers only, so it doesn't have to be pretty.\n\n> I'm inclined to think that maybe we'd be best off keeping the server\n> end of it straightforward, and trying to teach psql to abbreviate the\n> QUERY field in a useful way. IIRC you get this same type of problem\n> with very large SQL-language functions and suchlike.\n\nI'd treat this as a separate patch, if it's considered to be a good\nidea.\n\nChristoph\n\n\n",
"msg_date": "Fri, 27 Sep 2024 14:17:15 +0200",
"msg_from": "Christoph Berg <[email protected]>",
"msg_from_op": false,
"msg_subject": "Better error reporting from extension scripts (Was: Extend ALTER\n OPERATOR)"
},
{
"msg_contents": "Christoph Berg <[email protected]> writes:\n> Re: Tom Lane\n>> So the first part of that is great, but if your script file is\n>> large you probably won't be happy about having the whole thing\n>> repeated in the \"QUERY\" field. So this needs some work on\n>> user-friendliness.\n\n> Does this really have to be addressed? It would be way better than it\n> is now, and errors during extension creation are rare and mostly for\n> developers only, so it doesn't have to be pretty.\n\nPerhaps. I spent a little more effort on this and added code to\nreport errors that don't come with an error location. On those,\nwe don't have any constraints about what to report in the QUERY\nfield, so I made it trim the string to just the current query\nwithin the script, which makes things quite a bit better. You\ncan see the results in the test_extensions regression test changes.\n\n(It might be worth some effort to trim away comments appearing\njust before a command, but I didn't tackle that here.)\n\n\t\t\tregards, tom lane",
"msg_date": "Fri, 27 Sep 2024 12:31:08 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Better error reporting from extension scripts (Was: Extend ALTER\n OPERATOR)"
},
{
"msg_contents": "Re: Tom Lane\n> Perhaps. I spent a little more effort on this and added code to\n> report errors that don't come with an error location. On those,\n> we don't have any constraints about what to report in the QUERY\n> field, so I made it trim the string to just the current query\n> within the script, which makes things quite a bit better. You\n> can see the results in the test_extensions regression test changes.\n\nThat looks very good me, thanks!\n\n> (It might be worth some effort to trim away comments appearing\n> just before a command, but I didn't tackle that here.)\n\nThe \"error when psql\" comments do look confusing, but I guess in other\nplaces the comment just before the query adds valuable context, so I'd\nsay leaving the comments in is ok.\n\nChristoph\n\n\n",
"msg_date": "Fri, 27 Sep 2024 19:31:19 +0200",
"msg_from": "Christoph Berg <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Better error reporting from extension scripts (Was: Extend ALTER\n OPERATOR)"
},
{
"msg_contents": "Christoph Berg <[email protected]> writes:\n> Re: Tom Lane\n>> (It might be worth some effort to trim away comments appearing\n>> just before a command, but I didn't tackle that here.)\n\n> The \"error when psql\" comments do look confusing, but I guess in other\n> places the comment just before the query adds valuable context, so I'd\n> say leaving the comments in is ok.\n\nIt looks like if we did want to suppress that, the right fix is to\nmake gram.y track statement start locations more honestly, as in\n0002 attached (0001 is the same as before). This'd add a few cycles\nper grammar nonterminal reduction, which is kind of annoying but\nprobably is negligible in the grand scheme of things. Still, I'd\nnot propose it just for this. But if memory serves, we've had\nprevious complaints about pg_stat_statements failing to strip\nleading comments from queries, and this'd fix that. I think it\nlikely also improves error cursor positioning for cases involving\nempty productions --- I'm a tad surprised that no other regression\ncases changed.\n\n\t\t\tregards, tom lane",
"msg_date": "Fri, 27 Sep 2024 13:54:44 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Better error reporting from extension scripts (Was: Extend ALTER\n OPERATOR)"
},
{
"msg_contents": "I wrote:\n> It looks like if we did want to suppress that, the right fix is to\n> make gram.y track statement start locations more honestly, as in\n> 0002 attached (0001 is the same as before).\n\nI did a little bit of further work on this:\n\n* I ran some performance checks and convinced myself that indeed\nthe more complex definition of YYLLOC_DEFAULT has no measurable\ncost compared to the overall runtime of raw_parser(), never mind\nlater processing. So I see no reason not to go ahead with that\nchange. I swapped the two patches to make that 0001, and added\na regression test illustrating its effect on pg_stat_statements.\n(Without the gram.y change, the added slash-star comment shows\nup in the pg_stat_statements output, which is surely odd.)\n\n* I concluded that the best way to report the individual statement\nwhen we're able to do that is to include it in an errcontext()\nmessage, similar to what spi.c's _SPI_error_callback does.\nOtherwise it interacts badly if some more-tightly-nested error\ncontext function has already set up an \"internal error query\",\nas for example SQL function errors will do if you enable\ncheck_function_bodies = on.\n\nSo here's v3, now with commit messages and regression tests.\nI feel like this is out of the proof-of-concept stage and\nmight now actually be committable. There's still a question\nof whether reporting the whole script as the query is OK when\nwe have a syntax error, but I have no good ideas as to how to\nmake that terser. And I think you're right that we shouldn't let\nperfection stand in the way of making things noticeably better.\n\n\t\t\tregards, tom lane",
"msg_date": "Sat, 28 Sep 2024 15:45:36 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Better error reporting from extension scripts (Was: Extend ALTER\n OPERATOR)"
}
] |
[
{
"msg_contents": "Nearby I dissed psql's \\du command for its incoherent \"Attributes\"\ncolumn [1]. It's too late to think about changing that for v16,\nbut here's some things I think we should consider for v17:\n\n* It seems weird that some attributes are described in the negative\n(\"Cannot login\", \"No inheritance\"). I realize that this corresponds\nto the defaults, so that a user created by CREATE USER with no options\nshows nothing in the Attributes column; but I wonder how much that's\nworth. As far as \"Cannot login\" goes, you could argue that the silent\ndefault ought to be for the properties assigned by CREATE ROLE, since\nthe table describes itself as \"List of roles\". I'm not dead set on\nchanging this, but it seems like a topic that deserves a fresh look.\n\n* I do not like the display of rolconnlimit, ie \"No connections\" or\n\"%d connection(s)\". A person not paying close attention might think\nthat that means the number of *current* connections the user has.\nA minimal fix could be to word it like \"No connections allowed\" or\n\"%d connection(s) allowed\". But see below.\n\n* I do not like the display of rolvaliduntil, either. Consider\n\nregression=# create user alice password 'secret';\nCREATE ROLE\nregression=# create user bob valid until 'infinity';\nCREATE ROLE\nregression=# \\du\n...\n alice |\n bob | Password valid until infinity\n...\n\nThis output claims that bob has an indefinitely-valid password, when in\nfact he has no password at all. On the other hand, nothing is said about\nalice, who actually *does* have a password valid until infinity. It's\ndifficult to imagine a more misleading way to present this.\n\nNow, it's hard to do better given that the \\du command is examining the\nuniversally-readable pg_roles view, because that doesn't betray any hint\nof whether the user has a password or not. I wonder though what is the\nrationale for letting unprivileged users see the rolvaliduntil column\nbut not whether a password exists at all. I suggest that maybe it'd\nbe okay to change the pg_roles view along the lines of\n\n- '********'::text as rolpassword,\n+ case when rolpassword is not null then '********'::text end as rolpassword,\n\nThen we could fix \\du to say nothing if rolpassword is null,\nand when it isn't, print \"Password valid until infinity\" whenever\nthat is the case (ie, rolvaliduntil is null or infinity).\n\n* On a purely presentation level, how did we possibly arrive\nat the idea that the connection-limit and valid-until properties\ndeserve their own lines in the Attributes column while the other\nproperties are comma-separated? That makes no sense whatsoever,\nnor does it look nice in \\x display format.\n\nI do grasp the distinction that the other properties are permission\nbits while these two aren't, but that doesn't naturally lead to\nthis formatting. I'd vote for either\n\n(a) each property gets its own line, or\n\n(b) move these two things into separate columns. Some of the\nverbiage could then be dropped in favor of the column title.\n\nRight now (b) would lead to an undesirably wide table; but\nif we push the \"Member of\" column out to a different \\d command\nas discussed in the other thread, maybe it'd be practical.\n\nAnyway, for now I'm just throwing this topic out for discussion.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/4128935.1687478926%40sss.pgh.pa.us\n\n\n",
"msg_date": "Thu, 22 Jun 2023 20:50:16 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Things I don't like about \\du's \"Attributes\" column"
},
{
"msg_contents": "On 23.06.2023 03:50, Tom Lane wrote:\n> Nearby I dissed psql's \\du command for its incoherent \"Attributes\"\n> column [1]. It's too late to think about changing that for v16,\n> but here's some things I think we should consider for v17:\n\nIf there are no others willing, I am ready to take up this topic. There \nis definitely room for improvement here.\nBut first I want to finish with the \\du command.\n\n-- \nPavel Luzanov\nPostgres Professional: https://postgrespro.com\n\n\n\n",
"msg_date": "Sat, 24 Jun 2023 18:16:04 +0300",
"msg_from": "Pavel Luzanov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Things I don't like about \\du's \"Attributes\" column"
},
{
"msg_contents": "On 23.06.2023 03:50, Tom Lane wrote:\n> * On a purely presentation level, how did we possibly arrive\n> at the idea that the connection-limit and valid-until properties\n> deserve their own lines in the Attributes column while the other\n> properties are comma-separated? That makes no sense whatsoever,\n> nor does it look nice in \\x display format.\n\nI think this a reason why footer property explicitly disabled in the output.\nAs part of reworking footer should be enabled, as it worked for other \nmeta-commands.\n\nJust to don't forget.\n\n-- \nPavel Luzanov\nPostgres Professional: https://postgrespro.com\n\n\n\n",
"msg_date": "Mon, 10 Jul 2023 16:10:45 +0300",
"msg_from": "Pavel Luzanov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Things I don't like about \\du's \"Attributes\" column"
},
{
"msg_contents": "Now I'm ready for discussion.\n\nOn 23.06.2023 03:50, Tom Lane wrote:\n> Nearby I dissed psql's \\du command for its incoherent \"Attributes\"\n> column [1]. It's too late to think about changing that for v16,\n> but here's some things I think we should consider for v17:\n>\n> * It seems weird that some attributes are described in the negative\n> (\"Cannot login\", \"No inheritance\"). I realize that this corresponds\n> to the defaults, so that a user created by CREATE USER with no options\n> shows nothing in the Attributes column; but I wonder how much that's\n> worth. As far as \"Cannot login\" goes, you could argue that the silent\n> default ought to be for the properties assigned by CREATE ROLE, since\n> the table describes itself as \"List of roles\". I'm not dead set on\n> changing this, but it seems like a topic that deserves a fresh look.\n\nAgree. The negative form looks strange.\n\nFresh look suggests to move login attribute to own column.\nThe attribute separates users and group roles, this is very important \ninformation,\nit deserves to be placed in a separate column. Of course, it can be\nreturned back to \"Attributes\" if such change is very radical.\n\nOn the other hand, rolinherit attribute lost its importance in v16.\nI don't see serious reasons in changing the default value, so we can \nleave it\nin the \"Attributes\" column. In most cases it will be hidden.\n\n> * I do not like the display of rolconnlimit, ie \"No connections\" or\n> \"%d connection(s)\". A person not paying close attention might think\n> that that means the number of *current* connections the user has.\n> A minimal fix could be to word it like \"No connections allowed\" or\n> \"%d connection(s) allowed\". But see below.\n\nconnlimit attribute moved from \"Attributes\" column to separate column\n\"Max connections\" in extended mode. But without any modifications to \nit's values.\nFor me it looks normal.\n\n> * I do not like the display of rolvaliduntil, either.\n\nMoved from \"Attributes\" column to separate column \"Password expire time\"\nin extended mode (+).\n\n> I suggest that maybe it'd\n> be okay to change the pg_roles view along the lines of\n>\n> - '********'::text as rolpassword,\n> + case when rolpassword is not null then '********'::text end as rolpassword,\n\nDone.\nThe same changes to pg_user.passwd for consistency.\n> Then we could fix \\du to say nothing if rolpassword is null,\n> and when it isn't, print \"Password valid until infinity\" whenever\n> that is the case (ie, rolvaliduntil is null or infinity).\n\nI think that writing the value \"infinity\" in places where there is no \nvalue is\nnot a good thing. This hides the real value of the column. In addition,\nthere is no reason to set \"infinity\" when the password is always valid with\ndefault NULL.\n\nMy suggestion to add new column \"Has password?\" in extended mode with\nyes/no values and leave rolvaliduntil values as is.\n\n> * On a purely presentation level, how did we possibly arrive\n> at the idea that the connection-limit and valid-until properties\n> deserve their own lines in the Attributes column while the other\n> properties are comma-separated? That makes no sense whatsoever,\n> nor does it look nice in \\x display format.\n> (b) move these two things into separate columns.\n\nImplemented this approach.\n\nIn a result describeRoles function significantly simplified and \nrewritten for the convenience\nof printing the whole query result. 
All the magic of building \n\"Attributes\" column\nmoved to SELECT statement for easy viewing by users via ECHO_HIDDEN \nvariable.\n\nHere is an example output.\n\n--DROP ROLE alice, bob, charlie, admin;\n\nCREATE ROLE alice LOGIN SUPERUSER NOINHERIT PASSWORD 'alice' VALID UNTIL 'infinity' CONNECTION LIMIT 5;\nCREATE ROLE bob LOGIN REPLICATION BYPASSRLS CREATEDB VALID UNTIL '2022-01-01';\nCREATE ROLE charlie LOGIN CREATEROLE PASSWORD 'charlie' CONNECTION LIMIT 0;\nCREATE ROLE admin;\n\nCOMMENT ON ROLE alice IS 'Superuser but with connection limit and with no inheritance';\nCOMMENT ON ROLE bob IS 'No password but with expire time';\nCOMMENT ON ROLE charlie IS 'No connections allowed';\nCOMMENT ON ROLE admin IS 'Group role without login';\n\n\nMaster.\n=# \\du\n List of roles\n Role name | Attributes\n-----------+------------------------------------------------------------\n admin | Cannot login\n alice | Superuser, No inheritance +\n | 5 connections +\n | Password valid until infinity\n bob | Create DB, Replication, Bypass RLS +\n | Password valid until 2022-01-01 00:00:00+03\n charlie | Create role +\n | No connections\n postgres | Superuser, Create role, Create DB, Replication, Bypass RLS\n\n=# \\du+\n List of roles\n Role name | Attributes | Description\n-----------+------------------------------------------------------------+-------------------------------------------------------------\n admin | Cannot login | Group role without login\n alice | Superuser, No inheritance +| Superuser but with connection limit and with no inheritance\n | 5 connections +|\n | Password valid until infinity |\n bob | Create DB, Replication, Bypass RLS +| No password but with expire time\n | Password valid until 2022-01-01 00:00:00+03 |\n charlie | Create role +| No connections allowed\n | No connections |\n postgres | Superuser, Create role, Create DB, Replication, Bypass RLS |\n\n\nPatched.\n=# \\du\n List of roles\n Role name | Can login? | Attributes\n-----------+------------+------------------------------------------------------------\n admin | no |\n alice | yes | Superuser, No inheritance\n bob | yes | Create DB, Replication, Bypass RLS\n charlie | yes | Create role\n postgres | yes | Superuser, Create role, Create DB, Replication, Bypass RLS\n(5 rows)\n\n=# \\du+\n List of roles\n Role name | Can login? | Attributes | Has password? | Password expire time | Max connections | Description\n-----------+------------+------------------------------------------------------------+---------------+------------------------+-----------------+-------------------------------------------------------------\n admin | no | | no | | -1 | Group role without login\n alice | yes | Superuser, No inheritance | yes | infinity | 5 | Superuser but with connection limit and with no inheritance\n bob | yes | Create DB, Replication, Bypass RLS | no | 2022-01-01 00:00:00+03 | -1 | No password but with expire time\n charlie | yes | Create role | yes | | 0 | No connections allowed\n postgres | yes | Superuser, Create role, Create DB, Replication, Bypass RLS | yes | | -1 |\n(5 rows)\n\n\nv1 of the patch attached. There are no tests and documentation yet.\nmake check fall in 2 existing tests due changes in pg_roles and \\du. \nWill be corrected.\n\nAny opinions?\n\nI plan to add a patch to the January commitfest.\n\n-- \nPavel Luzanov\nPostgres Professional:https://postgrespro.com",
"msg_date": "Sat, 30 Dec 2023 17:23:40 +0300",
"msg_from": "Pavel Luzanov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Things I don't like about \\du's \"Attributes\" column"
},
{
"msg_contents": "On Sat, 30 Dec 2023 at 09:23, Pavel Luzanov <[email protected]>\nwrote:\n\n\n> I think that writing the value \"infinity\" in places where there is no\n> value is\n> not a good thing. This hides the real value of the column. In addition,\n> there is no reason to set \"infinity\" when the password is always valid with\n> default NULL.\n>\n\nWould it make sense to make the column non-nullable and always set it to\ninfinity when there is no expiry?\n\nIn this case, I think NULL simply means infinity, so why not write that? If\nthe timestamp type didn't have infinity, then NULL would be a natural way\nof saying that there is no expiry, but with infinity as a possible value, I\ndon't see any reason to think of no expiry as being the absence of an\nexpiry time rather than an infinite expiry time.\n\nOn Sat, 30 Dec 2023 at 09:23, Pavel Luzanov <[email protected]> wrote: \nI think that writing the value \"infinity\" in\n places where there is no value is\n not a good thing. This hides the real value of the column. In\n addition,\n there is no reason to set \"infinity\" when the password is always\n valid with\n default NULL.Would it make sense to make the column non-nullable and always set it to infinity when there is no expiry?In this case, I think NULL simply means infinity, so why not write that? If the timestamp type didn't have infinity, then NULL would be a natural way of saying that there is no expiry, but with infinity as a possible value, I don't see any reason to think of no expiry as being the absence of an expiry time rather than an infinite expiry time.",
"msg_date": "Sat, 30 Dec 2023 09:33:59 -0500",
"msg_from": "Isaac Morland <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Things I don't like about \\du's \"Attributes\" column"
},
{
"msg_contents": "On 30.12.2023 17:33, Isaac Morland wrote:\n> Would it make sense to make the column non-nullable and always set it \n> to infinity when there is no expiry?\n\nA password is not required for roles. In many cases, external \nauthentication is used in ph_hba.conf.\nI think it would be strange to have 'infinity' for roles without a password.\n\nTom suggested to have 'infinity' in the \\du output for roles with a \npassword.\nMy doubt is that this will hide the real values (absence of values). So \nI suggested a separate column\n'Has password?' to show roles with password and unmodified column \n'Password expire time'.\n\nYes, it's easy to replace NULL with \"infinity\" for roles with a \npassword, but why?\nWhat is the reason for this action? Absence of value for 'expire time' \nclear indicates that there is no time limit.\nAlso I don't see a practical reasons to execute next command, since it \ndo nothing:\n\nALTER ROLE .. PASSWORD 'infinity';\n\nSo I think that in most cases there is no \"infinity\" in the \nrolvaliduntil column.\n\nBut of course, I can be wrong.\n\nThank you for giving your opinion.\n\n-- \nPavel Luzanov\nPostgres Professional:https://postgrespro.com\n\n\n\n\n\n\nOn 30.12.2023 17:33, Isaac Morland wrote:\n\n\n\nWould it make sense to make the\n column non-nullable and always set it to infinity when\n there is no expiry?\n\n\n\n\n A password is not required for roles. In many cases, external\n authentication is used in ph_hba.conf.\n I think it would be strange to have 'infinity' for roles without a\n password.\n\n Tom suggested to have 'infinity' in the \\du output for roles with\n a password.\n My doubt is that this will hide the real values (absence of\n values). So I suggested a separate column\n 'Has password?' to show roles with password and unmodified column\n 'Password expire time'.\n\n Yes, it's easy to replace NULL with \"infinity\" for roles with a\n password, but why?\n What is the reason for this action? Absence of value for 'expire\n time' clear indicates that there is no time limit.\n Also I don't see a practical reasons to execute next command, since it do nothing:\n\n ALTER ROLE .. PASSWORD 'infinity';\n\n So I think that in most cases there is no \"infinity\" in the\n rolvaliduntil column.\n\n But of course, I can be wrong.\n\n\n Thank you for giving your opinion.\n\n\n\n\n-- \nPavel Luzanov\nPostgres Professional: https://postgrespro.com",
"msg_date": "Sun, 31 Dec 2023 13:52:28 +0300",
"msg_from": "Pavel Luzanov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Things I don't like about \\du's \"Attributes\" column"
},
{
"msg_contents": "On Thu, Jun 22, 2023 at 8:50 PM Tom Lane <[email protected]> wrote:\n> * It seems weird that some attributes are described in the negative\n> (\"Cannot login\", \"No inheritance\"). I realize that this corresponds\n> to the defaults, so that a user created by CREATE USER with no options\n> shows nothing in the Attributes column; but I wonder how much that's\n> worth. As far as \"Cannot login\" goes, you could argue that the silent\n> default ought to be for the properties assigned by CREATE ROLE, since\n> the table describes itself as \"List of roles\". I'm not dead set on\n> changing this, but it seems like a topic that deserves a fresh look.\n\nI wonder if we shouldn't try to display the roles's properties using\nSQL keywords rather than narrating. Someone can be confused by \"No\nconnections\" but \"CONNECTION LIMIT 0\" is pretty hard to mistake;\nlikewise \"LOGIN\" or \"NOLOGIN\" seems clear enough. If we took this\napproach, there would still be a question in my mind about whether to\nshow values where the configured value of the property matches the\ndefault, and maybe we would want to do that in some cases and skip it\nin others, or maybe we would end up with a uniform rule, but that\nissue could be considered somewhat separately from how to print the\nproperties we choose to display.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 2 Jan 2024 09:41:17 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Things I don't like about \\du's \"Attributes\" column"
},
{
"msg_contents": "Robert Haas <[email protected]> writes:\n> I wonder if we shouldn't try to display the roles's properties using\n> SQL keywords rather than narrating. Someone can be confused by \"No\n> connections\" but \"CONNECTION LIMIT 0\" is pretty hard to mistake;\n> likewise \"LOGIN\" or \"NOLOGIN\" seems clear enough.\n\nMmm ... maybe. I think those of us who are native English speakers\nmay overrate the intelligibility of SQL keywords to those who aren't.\nSo I'm inclined to feel that preserving translatability of the\nrole property descriptions is a good thing. But it'd be good to\nhear comments on that point from people who actually use it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 02 Jan 2024 13:17:15 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Things I don't like about \\du's \"Attributes\" column"
},
{
"msg_contents": "On Tue, Jan 2, 2024 at 1:17 PM Tom Lane <[email protected]> wrote:\n> Mmm ... maybe. I think those of us who are native English speakers\n> may overrate the intelligibility of SQL keywords to those who aren't.\n> So I'm inclined to feel that preserving translatability of the\n> role property descriptions is a good thing. But it'd be good to\n> hear comments on that point from people who actually use it.\n\n+1 for comments from people who use it.\n\nMy thought was that such people probably need to interpret LOGIN and\nNOLOGIN into their preferred language either way, but if \\du displays\nsomething else, then they also need to mentally construct a reverse\nmapping, from whatever string is showing up there to the corresponding\nSQL syntax. The current display has that problem even for English\nspeakers -- you have to know that \"Cannot login\" corresponds to\n\"NOLOGIN\" and that \"No connections\" corresponds to \"CONNECTION LIMIT\n0\" and so forth. No matter what we put there, if it's English or\nTurkish or Hindi rather than SQL, you have to try to figure out what\nthe displayed text corresponds to at the SQL level in order to fix\nanything that isn't the way you want it to be, or to recreate the\nconfiguration on another instance.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 2 Jan 2024 13:35:42 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Things I don't like about \\du's \"Attributes\" column"
},
{
"msg_contents": "Robert Haas <[email protected]> writes:\n> My thought was that such people probably need to interpret LOGIN and\n> NOLOGIN into their preferred language either way, but if \\du displays\n> something else, then they also need to mentally construct a reverse\n> mapping, from whatever string is showing up there to the corresponding\n> SQL syntax. The current display has that problem even for English\n> speakers -- you have to know that \"Cannot login\" corresponds to\n> \"NOLOGIN\" and that \"No connections\" corresponds to \"CONNECTION LIMIT\n> 0\" and so forth.\n\nTrue, although if you aren't happy with the current state then what\nyou actually need to construct is a SQL command to set a *different*\nstate from what \\du is saying. Going from LOGIN to NOLOGIN or vice\nversa can also be non-obvious. So you're likely to end up consulting\n\"\\h alter user\" no matter what, if you don't have it memorized.\n\nI think your argument does have relevance for the other issue about\nwhether it's good to be silent about the defaults. If \\du says\nnothing at all about a particular property, that certainly isn't\nhelping you to decide whether you want to change it and if so to what.\nI'm not convinced that point is dispositive, but it's something\nto consider.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 02 Jan 2024 13:46:53 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Things I don't like about \\du's \"Attributes\" column"
},
{
"msg_contents": "On Tue, Jan 2, 2024 at 1:46 PM Tom Lane <[email protected]> wrote:\n> True, although if you aren't happy with the current state then what\n> you actually need to construct is a SQL command to set a *different*\n> state from what \\du is saying. Going from LOGIN to NOLOGIN or vice\n> versa can also be non-obvious. So you're likely to end up consulting\n> \"\\h alter user\" no matter what, if you don't have it memorized.\n\nThat could be true in some cases, but I don't think it's true in all\ncases. If you're casually familiar with ALTER USER you probably\nremember that many of the properties are negated by sticking NO on the\nfront; you're less likely to forget that than you are to forget the\nname of some specific property. And certainly if you see CONNECTION\nLIMIT 24 and you want to change it to CONNECTION LIMIT 42 it's going\nto be pretty clear what to adjust.\n\n> I think your argument does have relevance for the other issue about\n> whether it's good to be silent about the defaults. If \\du says\n> nothing at all about a particular property, that certainly isn't\n> helping you to decide whether you want to change it and if so to what.\n> I'm not convinced that point is dispositive, but it's something\n> to consider.\n\nI agree with 100% of what you say here.\n\nTo add to that a bit, I would probably never ask a user to give me the\noutput of \\du to troubleshoot some issue. I would either ask them for\npg_dumpall -g output, or I'd ask them to give me the raw contents of\npg_authid. That's because I know that either of those things are going\nto tell me about ALL the properties of the role, or at least all of\nthe properties of the role that are stored in pg_authid, without\nomitting anything that some hacker thought was uninteresting. I don't\nknow that \\du is going to do that, and I'm not going to want to read\nthe code to figure out which cases it thinks are uninteresting,\n*especially* if it behaves differently by version.\n\nThe counterargument is that if you don't omit anything, the output\ngets very long, which is a problem, because either you go wide, and\nthen you get wrapping, or you use multiple-lines, and then if there\nare 500 users the output goes on forever.\n\nI think a key consideration here is how easy it will be for somebody\nto guess the value of a property that is not mentioned. Personally,\nI'd assume that if CONNECTION LIMIT isn't mentioned, it's unlimited.\nBut a lot of the other options are less clear. Probably NOSUPERUSER is\nthe default and SUPERUSER is the exception, but it's very unclear\nwhether LOGIN or NOLOGIN is should be treated as the \"normal\" case,\ngiven that the feature encompasses users and groups and non-login\nroles that people access via SET ROLE and things that look like groups\nbut are also used as login roles.\n\nAnd with some of the other options, it's just harder to remember\nwhether there's a default and what it is exactly than for other object\ntypes. With something like a table column, it feels intuitive that if\nyou just ask for a table column, you get a \"normal\" table column ...\nand then if you add a fillfactor or a CHECK constraint it will show up\nin the \\d output, and otherwise not. But to try to apply that concept\nhere means that we suppose the user knows whether the default is\nINHERIT or NOINHERIT, whether the default is BYPASSRLS or NOBYPASSRLS,\netc. And I'm just a little bit skeptical of that assumption. 
Perhaps\nit's just that I've spent less time doing user management than table\nadministration and so I'm the only one who finds this fuzzier than\nsome other kinds of SQL objects, but I'm not sure it's just that.\nRoles are pretty weird.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 2 Jan 2024 14:38:48 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Things I don't like about \\du's \"Attributes\" column"
},
{
"msg_contents": "\n\n\n\n\nOn 1/2/24 1:38 PM, Robert Haas wrote:\n\n\nBut to try to apply that concept\nhere means that we suppose the user knows whether the default is\nINHERIT or NOINHERIT, whether the default is BYPASSRLS or NOBYPASSRLS,\netc. And I'm just a little bit skeptical of that assumption. Perhaps\nit's just that I've spent less time doing user management than table\nadministration and so I'm the only one who finds this fuzzier than\nsome other kinds of SQL objects, but I'm not sure it's just that.\nRoles are pretty weird.\n\nIn my consulting experience, it's extremely rare for users to do\n anything remotely sophisticated with roles (I was always happy\n just to see apps weren't connecting as a superuser...).\nLike you, I view \\du and friends as more of a \"helping hand\" to\n seeing the state of things, without the expectation that every\n tiny nuance will always be visible, because I don't think it's\n practical to do that in psql. While that behavior might surprise\n some users, the good news is once they start exploring non-default\n options the behavior becomes self-evident.\nSome attributes are arguably important enough to warrant their\n own column. The most obvious is NOLOGIN, since those roles are\n generally used for a very different purpose than LOGIN roles.\n SUPERUSER might be another candidate (though, I much prefer a\n dedicated \"sudo role\" than explicit SU on roles).\nI'm on the fence when it comes to SQL syntax vs what we have now.\n What we currenly have is more readable, but off-hand I think the\n other places we list attributes we do it in SQL syntax. It might\n be worth changing just for consistency sake.\n--\n Jim Nasby, Data Architect, Austin TX\n\n\n\n",
"msg_date": "Tue, 2 Jan 2024 17:37:41 -0600",
"msg_from": "Jim Nasby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Things I don't like about \\du's \"Attributes\" column"
},
{
"msg_contents": "On 02.01.2024 22:38, Robert Haas wrote:\n\n> To add to that a bit, I would probably never ask a user to give me the\n> output of \\du to troubleshoot some issue. I would either ask them for\n> pg_dumpall -g output, or I'd ask them to give me the raw contents of\n> pg_authid. That's because I know that either of those things are going\n> to tell me about ALL the properties of the role, or at least all of\n> the properties of the role that are stored in pg_authid, without\n> omitting anything that some hacker thought was uninteresting. I don't\n> know that \\du is going to do that, and I'm not going to want to read\n> the code to figure out which cases it thinks are uninteresting,\n> *especially* if it behaves differently by version.\n\n\\d commands are a convenient way to see the contents of the system\ncatalogs. Short commands, instead of long SQL queries. Imo, this is their\nmain purpose.\n\nInterpreting values ('No connection' instead of 0 and so on)\ncan be useful if the actual values are easy to identify. If there is\ndoubt whether it will be clear, then it is better to show it as is.\nThe documentation contains a description of the system catalogs.\nIt tells you how to interpret the values correctly.\n\n\n> The counterargument is that if you don't omit anything, the output\n> gets very long, which is a problem, because either you go wide, and\n> then you get wrapping, or you use multiple-lines, and then if there\n> are 500 users the output goes on forever.\n\nThis can be mostly solved by using extended mode. Key properties for \\du,\nall others for \\du+. Also \\du+ can used with \\x.\nOf course, the question arises as to which properties are key and\nwhich are not. Here we need to reach a compromise.\n\n> Personally,\n> I'd assume that if CONNECTION LIMIT isn't mentioned, it's unlimited.\n> But a lot of the other options are less clear. Probably NOSUPERUSER is\n> the default and SUPERUSER is the exception, but it's very unclear\n> whether LOGIN or NOLOGIN is should be treated as the \"normal\" case,\n> given that the feature encompasses users and groups and non-login\n> roles that people access via SET ROLE and things that look like groups\n> but are also used as login roles.\n>\n> And with some of the other options, it's just harder to remember\n> whether there's a default and what it is exactly than for other object\n> types.\n\npsql provides a handy tool for solving such questions - ECHO_HIDDEN variable.\nBut it is very important that the query text is easily transformed into \nthe command output.\n\nProposed patch tries to implement this approach.\n\n-- \nPavel Luzanov\nPostgres Professional:https://postgrespro.com\n\n\n\n\n\n\nOn 02.01.2024 22:38, Robert Haas wrote:\n\n\nTo add to that a bit, I would probably never ask a user to give me the\noutput of \\du to troubleshoot some issue. I would either ask them for\npg_dumpall -g output, or I'd ask them to give me the raw contents of\npg_authid. That's because I know that either of those things are going\nto tell me about ALL the properties of the role, or at least all of\nthe properties of the role that are stored in pg_authid, without\nomitting anything that some hacker thought was uninteresting. I don't\nknow that \\du is going to do that, and I'm not going to want to read\nthe code to figure out which cases it thinks are uninteresting,\n*especially* if it behaves differently by version.\n\n\n\\d commands are a convenient way to see the contents of the system\ncatalogs. 
Short commands, instead of long SQL queries. Imo, this is their\nmain purpose.\n\nInterpreting values ('No connection' instead of 0 and so on)\ncan be useful if the actual values are easy to identify. If there is\ndoubt whether it will be clear, then it is better to show it as is.\nThe documentation contains a description of the system catalogs.\nIt tells you how to interpret the values correctly.\n\n\n\n\n\nThe counterargument is that if you don't omit anything, the output\ngets very long, which is a problem, because either you go wide, and\nthen you get wrapping, or you use multiple-lines, and then if there\nare 500 users the output goes on forever.\n\n\nThis can be mostly solved by using extended mode. Key properties for \\du,\nall others for \\du+. Also \\du+ can used with \\x.\nOf course, the question arises as to which properties are key and\nwhich are not. Here we need to reach a compromise.\n\n\n\nPersonally,\nI'd assume that if CONNECTION LIMIT isn't mentioned, it's unlimited.\nBut a lot of the other options are less clear. Probably NOSUPERUSER is\nthe default and SUPERUSER is the exception, but it's very unclear\nwhether LOGIN or NOLOGIN is should be treated as the \"normal\" case,\ngiven that the feature encompasses users and groups and non-login\nroles that people access via SET ROLE and things that look like groups\nbut are also used as login roles.\n\nAnd with some of the other options, it's just harder to remember\nwhether there's a default and what it is exactly than for other object\ntypes.\n\n\n\npsql provides a handy tool for solving such questions - ECHO_HIDDEN variable.\nBut it is very important that the query text is easily transformed into\nthe command output.\nProposed patch tries to implement this approach.\n\n\n-- \nPavel Luzanov\nPostgres Professional: https://postgrespro.com",
"msg_date": "Thu, 4 Jan 2024 00:22:45 +0300",
"msg_from": "Pavel Luzanov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Things I don't like about \\du's \"Attributes\" column"
},
{
"msg_contents": "On 03.01.2024 02:37, Jim Nasby wrote:\n>\n> Some attributes are arguably important enough to warrant their own \n> column. The most obvious is NOLOGIN, since those roles are generally \n> used for a very different purpose than LOGIN roles. SUPERUSER might be \n> another candidate (though, I much prefer a dedicated \"sudo role\" than \n> explicit SU on roles).\n>\nI like this idea.\nBut what if all the attributes are moved to separate columns?\nThis solves all issues except the wide output. Less significant attributes\ncan be moved to extended mode. Here's what it might look like:\n\npostgres@postgres(17.0)=# \\du\n List of roles\n Role name | Login | Superuser | Create role | Create DB | Replication\n-----------+-------+-----------+-------------+-----------+-------------\n admin | no | no | no | no | no\n alice | yes | yes | no | no | no\n bob | yes | no | no | yes | yes\n charlie | yes | no | yes | no | no\n postgres | yes | yes | yes | yes | yes\n(5 rows)\n\npostgres@postgres(17.0)=# \\du+\n List of roles\n Role name | Login | Superuser | Create role | Create DB | Replication | Bypass RLS | Inheritance | Password | Valid until | Connection limit | Description\n-----------+-------+-----------+-------------+-----------+-------------+------------+-------------+----------+------------------------+------------------+-------------------------------------------------------------\n admin | no | no | no | no | no | no | yes | no | | -1 | Group role without login\n alice | yes | yes | no | no | no | no | no | yes | infinity | 5 | Superuser but with connection limit and with no inheritance\n bob | yes | no | no | yes | yes | yes | yes | no | 2022-01-01 00:00:00+03 | -1 | No password but with expire time\n charlie | yes | no | yes | no | no | no | yes | yes | | 0 | No connections allowed\n postgres | yes | yes | yes | yes | yes | yes | yes | yes | | -1 |\n(5 rows)\n\npostgres@postgres(17.0)=# \\x \\du+ bob\nExpanded display is on.\nList of roles\n-[ RECORD 1 ]----+---------------------------------\nRole name | bob\nLogin | yes\nSuperuser | no\nCreate role | no\nCreate DB | yes\nReplication | yes\nBypass RLS | yes\nInheritance | yes\nPassword | no\nValid until | 2022-01-01 00:00:00+03\nConnection limit | -1\nDescription | No password but with expire time\n\n-- \nPavel Luzanov\nPostgres Professional:https://postgrespro.com",
"msg_date": "Tue, 9 Jan 2024 23:50:48 +0300",
"msg_from": "Pavel Luzanov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Things I don't like about \\du's \"Attributes\" column"
},
{
"msg_contents": "Another approach based on early suggestions.\n\nThe Attributes column includes only the enabled logical attributes.\nRegardless of whether the attribute is enabled by default or not.\nThis changes the current behavior, but makes it clearer: everything\nthat is enabled is displayed. This principle is easy to maintain in\nsubsequent versions, even if there is a desire to change the default\nvalue for any attribute. In addition, the issue with the 'LOGIN' attribute\nis being resolved, the default value of which depends on the command\n(CREATE ROLE or CREATE USER).\n\nThe attribute names correspond to the keywords of the CREATE ROLE command.\nThe attributes are listed in the same order as in the documentation.\n(I think that the LOGIN attribute should be moved to the first place,\nboth in the documentation and in the command.)\n\n\nThe \"Connection limit\" and \"Valid until\" attributes are placed in separate columns.\nThe \"Password?\" column has been added.\n\nSample output.\n\nPatch v3:\n=# \\du\n List of roles\n Role name | Attributes | Password? | Valid until | Connection limit\n-----------+-------------------------------------------------------------------+-----------+------------------------+------------------\n admin | INHERIT | no | | -1\n alice | SUPERUSER LOGIN | yes | infinity | 5\n bob | CREATEDB INHERIT LOGIN REPLICATION BYPASSRLS | no | 2022-01-01 00:00:00+03 | -1\n charlie | CREATEROLE INHERIT LOGIN | yes | | 0\n postgres | SUPERUSER CREATEDB CREATEROLE INHERIT LOGIN REPLICATION BYPASSRLS | no | | -1\n(5 rows)\n\n\nThe output of the command is long. But there are other commands of\ncomparable length: \\dApS, \\dfS, \\doS.\n\nSmall modification with newline separator for Attributes column:\n\nPatch v3 with newlines:\n=# \\du\n List of roles\n Role name | Attributes | Password? | Valid until | Connection limit\n-----------+-------------+-----------+------------------------+------------------\n admin | INHERIT | no | | -1\n alice | SUPERUSER +| yes | infinity | 5\n | LOGIN | | |\n bob | CREATEDB +| no | 2022-01-01 00:00:00+03 | -1\n | INHERIT +| | |\n | LOGIN +| | |\n | REPLICATION+| | |\n | BYPASSRLS | | |\n charlie | CREATEROLE +| yes | | 0\n | INHERIT +| | |\n | LOGIN | | |\n postgres | SUPERUSER +| no | | -1\n | CREATEDB +| | |\n | CREATEROLE +| | |\n | INHERIT +| | |\n | LOGIN +| | |\n | REPLICATION+| | |\n | BYPASSRLS | | |\n(5 rows)\n\nFor comparison, here's what it looks like now:\n\nmaster:\n=# \\du\n List of roles\n Role name | Attributes\n-----------+------------------------------------------------------------\n admin | Cannot login\n alice | Superuser, No inheritance +\n | 5 connections +\n | Password valid until infinity\n bob | Create DB, Replication, Bypass RLS +\n | Password valid until 2022-01-01 00:00:00+03\n charlie | Create role +\n | No connections\n postgres | Superuser, Create role, Create DB, Replication, Bypass RLS\n\n\n From my point of view, any of the proposed alternatives is better than what we have now.\nBut for moving forward we need to choose some approach.\n\nI will be glad of any opinions.\n\n-- \nPavel Luzanov\nPostgres Professional:https://postgrespro.com",
"msg_date": "Mon, 22 Jan 2024 00:34:58 +0300",
"msg_from": "Pavel Luzanov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Things I don't like about \\du's \"Attributes\" column"
},
{
"msg_contents": "2024-01 Commitfest.\n\nHi, This patch has a CF status of \"Needs Review\" [1], but it seems\nthere were CFbot test failures last time it was run [2]. Please have a\nlook and post an updated version if necessary.\n\n======\n[1] https://commitfest.postgresql.org/46/4738/\n[2] https://cirrus-ci.com/github/postgresql-cfbot/postgresql/commitfest/46/4738\n\nKind Regards,\nPeter Smith.\n\n\n",
"msg_date": "Mon, 22 Jan 2024 17:38:07 +1100",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Things I don't like about \\du's \"Attributes\" column"
},
{
"msg_contents": "On Sun, Jan 21, 2024 at 2:35 PM Pavel Luzanov <[email protected]>\nwrote:\n\n> Another approach based on early suggestions.\n>\n> The Attributes column includes only the enabled logical attributes.\n> Regardless of whether the attribute is enabled by default or not.\n>\n>\n\n> The attribute names correspond to the keywords of the CREATE ROLE command.\n> The attributes are listed in the same order as in the documentation.\n> (I think that the LOGIN attribute should be moved to the first place,\n> both in the documentation and in the command.)\n>\n> I'd just flip INHERIT and LOGIN\n\n\n> The \"Connection limit\" and \"Valid until\" attributes are placed in separate columns.\n> The \"Password?\" column has been added.\n>\n> Sample output.\n>\n> Patch v3:\n> =# \\du\n> List of roles\n> Role name | Attributes | Password? | Valid until | Connection limit\n> -----------+-------------------------------------------------------------------+-----------+------------------------+------------------\n> admin | INHERIT | no | | -1\n> alice | SUPERUSER LOGIN | yes | infinity | 5\n> bob | CREATEDB INHERIT LOGIN REPLICATION BYPASSRLS | no | 2022-01-01 00:00:00+03 | -1\n> charlie | CREATEROLE INHERIT LOGIN | yes | | 0\n> postgres | SUPERUSER CREATEDB CREATEROLE INHERIT LOGIN REPLICATION BYPASSRLS | no | | -1\n> (5 rows)\n>\n>\n\n> Small modification with newline separator for Attributes column:\n>\n> Patch v3 with newlines:\n> =# \\du\n> List of roles\n> Role name | Attributes | Password? | Valid until | Connection limit\n> -----------+-------------+-----------+------------------------+------------------\n> postgres | SUPERUSER +| no | | -1\n> | CREATEDB +| | |\n> | CREATEROLE +| | |\n> | INHERIT +| | |\n> | LOGIN +| | |\n> | REPLICATION+| | |\n> | BYPASSRLS | | |\n> (5 rows)\n>\n> I'm strongly in favor of using mixed-case for the attributes. The SQL\nCommand itself doesn't care about capitalization and it is much easier on\nthe eyes. I'm also strongly in favor of newlines, as can be seen by the\ndefault bootstrap superuser entry putting everything on one line eats up 65\ncharacters.\n\n List of roles\n Role name | Attributes | Password? | Valid until | Connection limit |\nDescription\n-----------+-------------+-----------+-------------+------------------+-------------\n davidj | Superuser +| no | | -1 |\n | CreateDB +| | | |\n | CreateRole +| | | |\n | Inherit +| | | |\n | Login +| | | |\n | Replication+| | | |\n | BypassRLS | | | |\n(1 row)\n\nAs noted by Peter this patch didn't update the two affected expected output\nfiles. psql.out and, due to the system view change, rules.out. That\nparticular change requires a documentation update to the pg_roles system\nview page. I'd suggest pulling out this system view change into its own\npatch.\n\nI will take another pass later when I get some more time. I want to\nre-review some of the older messages. 
But the tweaks I show and breaking\nout the view changes in to a separate patch both appeal to me right now.\n\nDavid J.",
"msg_date": "Mon, 22 Jan 2024 15:59:59 -0700",
"msg_from": "\"David G. Johnston\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Things I don't like about \\du's \"Attributes\" column"
},
{
"msg_contents": "Pavel Luzanov <[email protected]> writes:\n> Another approach based on early suggestions.\n\nI think expecting the pg_roles view to change for this is problematic.\nYou can't have that in the back branches, so with this patch psql\nwill show something different against a pre-17 server than later\nversions. At best, that's going to be confusing. Can you get the\nsame result without changing pg_roles?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 22 Jan 2024 20:18:36 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Things I don't like about \\du's \"Attributes\" column"
},
{
"msg_contents": "I wrote:\n> I think expecting the pg_roles view to change for this is problematic.\n> You can't have that in the back branches, so with this patch psql\n> will show something different against a pre-17 server than later\n> versions. At best, that's going to be confusing.\n\nActually, even more to the point: while this doesn't expose the\ncontents of a role's password, it does expose whether the role\n*has* a password to every user in the installation. I doubt\nthat that's okay from a security standpoint. It'd need debate\nat the least.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 22 Jan 2024 20:25:56 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Things I don't like about \\du's \"Attributes\" column"
},
{
"msg_contents": "On Mon, Jan 22, 2024 at 6:26 PM Tom Lane <[email protected]> wrote:\n\n> I wrote:\n> > I think expecting the pg_roles view to change for this is problematic.\n> > You can't have that in the back branches, so with this patch psql\n> > will show something different against a pre-17 server than later\n> > versions. At best, that's going to be confusing.\n>\n> Actually, even more to the point: while this doesn't expose the\n> contents of a role's password, it does expose whether the role\n> *has* a password to every user in the installation. I doubt\n> that that's okay from a security standpoint. It'd need debate\n> at the least.\n>\n>\nMakes sense, more reason to put it within its own patch. At present it\nseems like a createrole permissioned user is unable to determine whether a\ngiven role has a password or not even in the case when that role would be\nallowed to alter a role they've created to set or remove said password.\nKeeping with the changes made in v16 it does seem worthwhile modifying\npg_roles to be sensitive to the role querying the view having both\ncreaterole and admin membership on the role being displayed. With now\nthree possible outcomes: NULL if no password is in use, ********* if a\npassword is in use and the user has the ability to alter role, or\n<insufficient privileges> (alt. N/A).\n\nDavid J.\n\nOn Mon, Jan 22, 2024 at 6:26 PM Tom Lane <[email protected]> wrote:I wrote:\n> I think expecting the pg_roles view to change for this is problematic.\n> You can't have that in the back branches, so with this patch psql\n> will show something different against a pre-17 server than later\n> versions. At best, that's going to be confusing.\n\nActually, even more to the point: while this doesn't expose the\ncontents of a role's password, it does expose whether the role\n*has* a password to every user in the installation. I doubt\nthat that's okay from a security standpoint. It'd need debate\nat the least.Makes sense, more reason to put it within its own patch. At present it seems like a createrole permissioned user is unable to determine whether a given role has a password or not even in the case when that role would be allowed to alter a role they've created to set or remove said password. Keeping with the changes made in v16 it does seem worthwhile modifying pg_roles to be sensitive to the role querying the view having both createrole and admin membership on the role being displayed. With now three possible outcomes: NULL if no password is in use, ********* if a password is in use and the user has the ability to alter role, or <insufficient privileges> (alt. N/A).David J.",
"msg_date": "Mon, 22 Jan 2024 19:22:48 -0700",
"msg_from": "\"David G. Johnston\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Things I don't like about \\du's \"Attributes\" column"
},
{
"msg_contents": "On Sun, Jan 21, 2024 at 2:35 PM Pavel Luzanov <[email protected]>\nwrote:\n\n> List of roles\n> Role name | Attributes | Password? | Valid until | Connection limit\n> -----------+-------------+-----------+------------------------+------------------\n> admin | INHERIT | no | | -1\n> alice | SUPERUSER +| yes | infinity | 5\n>\n> I think I'm in the minority on believing that these describe outputs\nshould not be beholden to internal implementation details. But seeing a -1\nin the limit column is just jarring to my sensibilities. I suggest\ndisplaying blank (not null, \\pset should not influence this) if the\nconnection limit is \"no limit\", only showing positive numbers when there is\nmeaningful exceptional information for the user to absorb.\n\nDavid J.\n\nOn Sun, Jan 21, 2024 at 2:35 PM Pavel Luzanov <[email protected]> wrote:\n\n List of roles\n Role name | Attributes | Password? | Valid until | Connection limit \n-----------+-------------+-----------+------------------------+------------------\n admin | INHERIT | no | | -1\n alice | SUPERUSER +| yes | infinity | 5I think I'm in the minority on believing that these describe outputs should not be beholden to internal implementation details. But seeing a -1 in the limit column is just jarring to my sensibilities. I suggest displaying blank (not null, \\pset should not influence this) if the connection limit is \"no limit\", only showing positive numbers when there is meaningful exceptional information for the user to absorb.David J.",
"msg_date": "Mon, 22 Jan 2024 19:30:29 -0700",
"msg_from": "\"David G. Johnston\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Things I don't like about \\du's \"Attributes\" column"
},
{
"msg_contents": "On 23.01.2024 04:18, Tom Lane wrote:\n> I think expecting the pg_roles view to change for this is problematic.\n> You can't have that in the back branches, so with this patch psql\n> will show something different against a pre-17 server than later\n> versions. At best, that's going to be confusing.\n\nI've been thinking about it. Therefore, the column \"Password?\" is shown\nonly for version 17 and older.\n\n> Can you get the same result without changing pg_roles?\n\nHm. I'm not sure if this is possible.\n\n> Actually, even more to the point: while this doesn't expose the\n> contents of a role's password, it does expose whether the role\n> *has* a password to every user in the installation. I doubt\n> that that's okay from a security standpoint. It'd need debate\n> at the least.\n\nYes, I remember your caution about security from the original post.\nI'll try to explain why changing pg_roles is acceptable.\nNow \\du shows column \"Valid until\". We know that you can set\nthe password expiration date without having a password, but this is\nmore likely a mistake in role maintenance. In most cases, a non-null\nvalue indicates that the password has been set. Therefore, security\nshould not suffer much, but it will help the administrator to see\nincorrect values.\n\nOn 23.01.2024 05:22, David G. Johnston wrote:\n> At present it seems like a createrole permissioned user is unable \n> to determine whether a given role has a password or not even in the case\n> when that role would be allowed to alter a role they've created to set or\n> remove said password. Keeping with the changes made in v16 it does seem\n> worthwhile modifying pg_roles to be sensitive to the role querying the view\n> having both createrole and admin membership on the role being displayed.\n> With now three possible outcomes: NULL if no password is in use, *********\n> if a password is in use and the user has the ability to alter role, or\n> <insufficient privileges> (alt. N/A).\n\nGood point.\nBut what about \"Valid until\". Can roles without superuser or createrole\nattributes see it? The same about \"Connection limit\"?\n\nI'll think about it and try to implement in the next patch version within a few days.\nThank you for review.\n\n-- \nPavel Luzanov\nPostgres Professional:https://postgrespro.com\n\n\n\n\n\n\n On 23.01.2024 04:18, Tom Lane wrote:\n\n\nI think expecting the pg_roles view to change for this is problematic.\nYou can't have that in the back branches, so with this patch psql\nwill show something different against a pre-17 server than later\nversions. At best, that's going to be confusing.\n\n\nI've been thinking about it. Therefore, the column \"Password?\" is shown\nonly for version 17 and older.\n\n\n\n Can you get the same result without changing pg_roles?\n\nHm. I'm not sure if this is possible.\n\n\n\nActually, even more to the point: while this doesn't expose the\ncontents of a role's password, it does expose whether the role\n*has* a password to every user in the installation. I doubt\nthat that's okay from a security standpoint. It'd need debate\nat the least.\n\n\nYes, I remember your caution about security from the original post.\nI'll try to explain why changing pg_roles is acceptable.\nNow \\du shows column \"Valid until\". We know that you can set\nthe password expiration date without having a password, but this is\nmore likely a mistake in role maintenance. In most cases, a non-null\nvalue indicates that the password has been set. 
Therefore, security\nshould not suffer much, but it will help the administrator to see\nincorrect values.\n\nOn 23.01.2024 05:22, David G. Johnston wrote:\n> At present it seems like a createrole permissioned user is unable \n> to determine whether a given role has a password or not even in the case\n> when that role would be allowed to alter a role they've created to set or\n> remove said password. Keeping with the changes made in v16 it does seem\n> worthwhile modifying pg_roles to be sensitive to the role querying the view\n> having both createrole and admin membership on the role being displayed.\n> With now three possible outcomes: NULL if no password is in use, *********\n> if a password is in use and the user has the ability to alter role, or\n> <insufficient privileges> (alt. N/A).\n\nGood point.\nBut what about \"Valid until\". Can roles without superuser or createrole\nattributes see it? The same about \"Connection limit\"?\n\nI'll think about it and try to implement in the next patch version within a few days.\nThank you for review.\n\n-- \nPavel Luzanov\nPostgres Professional: https://postgrespro.com",
"msg_date": "Sun, 28 Jan 2024 22:51:14 +0300",
"msg_from": "Pavel Luzanov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Things I don't like about \\du's \"Attributes\" column"
},
{
"msg_contents": "On 23.01.2024 01:59, David G. Johnston wrote:\n>\n> The attribute names correspond to the keywords of the CREATE ROLE command.\n> The attributes are listed in the same order as in the documentation.\n> (I think that the LOGIN attribute should be moved to the first place,\n> both in the documentation and in the command.)\n>\n> I'd just flip INHERIT and LOGIN\n\nok\n\n\n> I'm strongly in favor of using mixed-case for the attributes. The SQL \n> Command itself doesn't care about capitalization and it is much easier \n> on the eyes. I'm also strongly in favor of newlines, as can be seen \n> by the default bootstrap superuser entry putting everything on one \n> line eats up 65 characters.\n>\n> List of roles\n> Role name | Attributes | Password? | Valid until | Connection limit \n> | Description\n> -----------+-------------+-----------+-------------+------------------+-------------\n> davidj | Superuser +| no | | -1 |\n> | CreateDB +| | | |\n> | CreateRole +| | | |\n> | Inherit +| | | |\n> | Login +| | | |\n> | Replication+| | | |\n> | BypassRLS | | | |\n> (1 row)\n\nok, I will continue with this display variant.\n\n\n>\n> As noted by Peter this patch didn't update the two affected expected \n> output files. psql.out and, due to the system view change, rules.out. \n> That particular change requires a documentation update to the pg_roles \n> system view page.\n\nYes, I was waiting for the direction of implementation to appear. Now it is there.\n\n\n> I'd suggest pulling out this system view change into its own patch.\n\nBut within this thread or new one?\n\n\nOn 23.01.2024 05:30, David G. Johnston wrote:\n> On Sun, Jan 21, 2024 at 2:35 PM Pavel Luzanov \n> <[email protected]> wrote:\n>\n> List of roles\n> Role name | Attributes | Password? | Valid until | Connection limit\n> -----------+-------------+-----------+------------------------+------------------\n> admin | INHERIT | no | | -1\n> alice | SUPERUSER +| yes | infinity | 5\n>\n> I think I'm in the minority on believing that these describe outputs \n> should not be beholden to internal implementation details.\n\nYes, I prefer real column values. But it can be discussed.\n\n\n> But seeing a -1 in the limit column is just jarring to my \n> sensibilities. I suggest displaying blank (not null, \\pset should not \n> influence this) if the connection limit is \"no limit\", only showing \n> positive numbers when there is meaningful exceptional information for \n> the user to absorb.\n\nThe connection limit can be set to 0. What should be displayed in this case, blank or 0?\nThe connection limit can be set for superusers. What should be displayed in this case,\nblank or actual non-effective value?\nCREATE|ALTER ROLE commands allow incorrect values to be set for 'Conn limit' and 'Valid until'.\nHow can the administrator see them and fix them?\n\nThese are my reasons for real column values.\n\n-- \nPavel Luzanov\nPostgres Professional:https://postgrespro.com\n\n\n\n\n\n\n On 23.01.2024 01:59, David G. Johnston wrote:\n\n\n\n\n\nThe attribute names correspond to the keywords of the CREATE ROLE command.\nThe attributes are listed in the same order as in the documentation.\n(I think that the LOGIN attribute should be moved to the first place,\nboth in the documentation and in the command.)\n\n\n\n\nI'd just\n flip INHERIT and LOGIN\n\n\n\n\n\nok\n\n\n\n\nI'm\n strongly in favor of using mixed-case for the\n attributes. The SQL Command itself doesn't care about\n capitalization and it is much easier on the eyes. 
I'm\n also strongly in favor of newlines, as can be seen by\n the default bootstrap superuser entry putting everything\n on one line eats up 65 characters.\n\n\n \n List of roles\n\n Role name | Attributes | Password? |\n Valid until | Connection limit | Description\n-----------+-------------+-----------+-------------+------------------+-------------\n davidj | Superuser +| no | | \n -1 |\n | CreateDB +| | | \n |\n | CreateRole +| | | \n |\n | Inherit +| | | \n |\n | Login +| | | \n |\n | Replication+| | | \n |\n | BypassRLS | | | \n |\n (1 row)\n\n\n\nok, I will continue with this display variant.\n\n\n\n\n\n\n\nAs noted by\n Peter this patch didn't update the two affected expected\n output files. psql.out and, due to the system view change,\n rules.out. That particular change requires a documentation\n update to the pg_roles system view page. \n\n\n\n\nYes, I was waiting for the direction of implementation to appear. Now it is there.\n\n\n\n\n I'd suggest\n pulling out this system view change into its own patch.\n\n\n\n\nBut within this thread or new one?\n\nOn 23.01.2024 05:30, David G. Johnston\n wrote:\n\n\n\n\nOn Sun, Jan\n 21, 2024 at 2:35 PM Pavel Luzanov <[email protected]>\n wrote:\n\n\n\n\n\n List of roles\n Role name | Attributes | Password? | Valid until | Connection limit \n-----------+-------------+-----------+------------------------+------------------\n admin | INHERIT | no | | -1\n alice | SUPERUSER +| yes | infinity | 5\n\n\n\nI think I'm\n in the minority on believing that these describe outputs\n should not be beholden to internal implementation\n details. \n\n\n\n\n\nYes, I prefer real column values. But it can be discussed.\n\n\n\n\n\n But seeing\n a -1 in the limit column is just jarring to my\n sensibilities. I suggest displaying blank (not null,\n \\pset should not influence this) if the connection limit\n is \"no limit\", only showing positive numbers when there is\n meaningful exceptional information for the user to absorb.\n\n\n\n\n\nThe connection limit can be set to 0. What should be displayed in this case, blank or 0?\nThe connection limit can be set for superusers. What should be displayed in this case,\nblank or actual non-effective value? \nCREATE|ALTER ROLE commands allow incorrect values to be set for 'Conn limit' and 'Valid until'.\nHow can the administrator see them and fix them?\n\nThese are my reasons for real column values.\n\n\n\n-- \nPavel Luzanov\nPostgres Professional: https://postgrespro.com",
"msg_date": "Sun, 28 Jan 2024 23:29:58 +0300",
"msg_from": "Pavel Luzanov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Things I don't like about \\du's \"Attributes\" column"
},
{
"msg_contents": "On Sun, Jan 28, 2024 at 1:29 PM Pavel Luzanov <[email protected]>\nwrote:\n\n> I'd suggest pulling out this system view change into its own patch.\n>\n>\n> But within this thread or new one?\n>\n>\n>\nThread. The subject line needs to make clear we are proposing changing a\nsystem view.\n\nThe connection limit can be set to 0. What should be displayed in this\ncase, blank or 0?\n>\n> 0 or even \"not allowed\" to make it clear\n\n\n> The connection limit can be set for superusers. What should be displayed in this case,\n> blank or actual non-effective value?\n>\n> print \"# (ignored)\" ?\n\n\nCREATE|ALTER ROLE commands allow incorrect values to be set for 'Conn\nlimit' and 'Valid until'.\n> How can the administrator see them and fix them?\n>\n>\nThat is unfortunate...but they can always go look at the actual system\nview. Or do what i showed above and add (invalid) after the real value.\nNote I'm only really talking about -1 here being the value that is simply\nhidden from display since it means unlimited and not actually -1\n\nI'd be more inclined to print \"forever\" for valid until since the existing\npresentation of a timestamp is already multiple characters. Using a word\nfor a column that is typically a number is less appealing.\n\nDavid J.\n\nOn Sun, Jan 28, 2024 at 1:29 PM Pavel Luzanov <[email protected]> wrote:\nI'd suggest\n pulling out this system view change into its own patch.\n\n\n\n\nBut within this thread or new one?\nThread. The subject line needs to make clear we are proposing changing a system view.\nThe connection limit can be set to 0. What should be displayed in this case, blank or 0?0 or even \"not allowed\" to make it clear The connection limit can be set for superusers. What should be displayed in this case,\nblank or actual non-effective value? print \"# (ignored)\" ?CREATE|ALTER ROLE commands allow incorrect values to be set for 'Conn limit' and 'Valid until'.\nHow can the administrator see them and fix them?\nThat is unfortunate...but they can always go look at the actual system view. Or do what i showed above and add (invalid) after the real value. Note I'm only really talking about -1 here being the value that is simply hidden from display since it means unlimited and not actually -1I'd be more inclined to print \"forever\" for valid until since the existing presentation of a timestamp is already multiple characters. Using a word for a column that is typically a number is less appealing.David J.",
"msg_date": "Sun, 28 Jan 2024 13:41:03 -0700",
"msg_from": "\"David G. Johnston\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Things I don't like about \\du's \"Attributes\" column"
},
{
"msg_contents": "On 28.01.2024 22:51, Pavel Luzanov wrote:\n> On 23.01.2024 04:18, Tom Lane wrote:\n>> I think expecting the pg_roles view to change for this is problematic.\n>> You can't have that in the back branches, so with this patch psql\n>> will show something different against a pre-17 server than later\n>> versions. At best, that's going to be confusing.\n>> Can you get the same result without changing pg_roles?\n> Hm. I'm not sure if this is possible.\n\nProbably there is a solution without changing pg_roles.\nThe \\du command can execute different queries for superusers and other roles.\nFor superusers, the query is based on pg_authid, for other roles on pg_roles.\nSo superusers will see the 'Password?' column and the rest won't see him.\nIn this approach, the \\du command will be able to work the same way for older\nversions.\n\nIs it worth going this way?\n\n> On 23.01.2024 05:22, David G. Johnston wrote:\n> > At present it seems like a createrole permissioned user is unable \n> > to determine whether a given role has a password or not even in the case\n> > when that role would be allowed to alter a role they've created to set or\n> > remove said password. Keeping with the changes made in v16 it does seem\n> > worthwhile modifying pg_roles to be sensitive to the role querying the view\n> > having both createrole and admin membership on the role being displayed.\n> > With now three possible outcomes: NULL if no password is in use, *********\n> > if a password is in use and the user has the ability to alter role, or\n> > <insufficient privileges> (alt. N/A).\n\nOnce again, this id a good point, but changes to pg_roles are required.\nAnd the behavior of \\du will be different for older versions.\n\n-- \nPavel Luzanov\nPostgres Professional:https://postgrespro.com\n\n\n\n\n\n\n On 28.01.2024 22:51, Pavel Luzanov wrote:\n\n\n On 23.01.2024 04:18, Tom Lane wrote:\n\n\nI think expecting the pg_roles view to change for this is problematic.\nYou can't have that in the back branches, so with this patch psql\nwill show something different against a pre-17 server than later\nversions. At best, that's going to be confusing.\n\n\n Can you get the same result without changing pg_roles?\n\nHm. I'm not sure if this is possible.\n\n\nProbably there is a solution without changing pg_roles.\nThe \\du command can execute different queries for superusers and other roles.\nFor superusers, the query is based on pg_authid, for other roles on pg_roles.\nSo superusers will see the 'Password?' column and the rest won't see him.\nIn this approach, the \\du command will be able to work the same way for older\nversions.\n\nIs it worth going this way?\n\n\n\nOn 23.01.2024 05:22, David G. Johnston wrote:\n> At present it seems like a createrole permissioned user is unable \n> to determine whether a given role has a password or not even in the case\n> when that role would be allowed to alter a role they've created to set or\n> remove said password. Keeping with the changes made in v16 it does seem\n> worthwhile modifying pg_roles to be sensitive to the role querying the view\n> having both createrole and admin membership on the role being displayed.\n> With now three possible outcomes: NULL if no password is in use, *********\n> if a password is in use and the user has the ability to alter role, or\n> <insufficient privileges> (alt. 
N/A).\n\n\n\nOnce again, this id a good point, but changes to pg_roles are required.\nAnd the behavior of \\du will be different for older versions.\n\n\n-- \nPavel Luzanov\nPostgres Professional: https://postgrespro.com",
"msg_date": "Mon, 29 Jan 2024 23:05:45 +0300",
"msg_from": "Pavel Luzanov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Things I don't like about \\du's \"Attributes\" column"
},
{
"msg_contents": "On 28.01.2024 22:51, Pavel Luzanov wrote:\n> I'll think about it and try to implement in the next patch version \n> within a few days.\n\nSorry for delay.\n\nPlease look at v4.\nI tried to implement all of David's suggestions.\nThe only addition - \"Login\" column. I still thinks this is important information to be highlighted.\nEspecially considering that the Attributes column small enough with a newline separator.\n\nThe changes are split into two patches.\n0001 - pg_roles view. I plan to organize a new thread for discussion.\n0002 - \\du command. It depends on 0001 for \"Password?\" and \"Valid until\" columns.\n\nOutput for superuser:\n\npostgres@postgres(17.0)=# \\du+\n List of roles\n Role name | Login | Attributes | Password? | Valid until | Connection limit | Description\n------------------+-------+-------------+-----------+---------------------------------+------------------+--------------------------------------------------\n postgres | yes | Superuser +| no | | |\n | | Create DB +| | | |\n | | Create role+| | | |\n | | Inherit +| | | |\n | | Replication+| | | |\n | | Bypass RLS | | | |\n regress_du_admin | yes | Create role+| yes | infinity | | User createrole attribute\n | | Inherit | | | |\n regress_du_role0 | yes | Create DB +| yes | 2024-12-31 00:00:00+03 | |\n | | Inherit +| | | |\n | | Replication+| | | |\n | | Bypass RLS | | | |\n regress_du_role1 | no | Inherit | no | 2024-12-31 00:00:00+03(invalid) | 50 | Group role without password but with valid until\n regress_du_role2 | yes | Inherit | yes | | Not allowed | No connections allowed\n regress_du_role3 | yes | | yes | | 10 | User without attributes\n regress_du_su | yes | Superuser +| yes | | 3(ignored) | Superuser but with connection limit\n | | Create DB +| | | |\n | | Create role+| | | |\n | | Inherit +| | | |\n | | Replication+| | | |\n | | Bypass RLS | | | |\n(7 rows)\n\nOutput for regress_du_admin (can see password for regress_du_role[0,1,2]\nbut not for regress_du_role3):\n\nregress_du_admin@postgres(17.0)=> \\du regress_du_role*\n List of roles\n Role name | Login | Attributes | Password? | Valid until | Connection limit\n------------------+-------+-------------+-----------+---------------------------------+------------------\n regress_du_role0 | yes | Create DB +| yes | 2024-12-31 00:00:00+03 |\n | | Inherit +| | |\n | | Replication+| | |\n | | Bypass RLS | | |\n regress_du_role1 | no | Inherit | no | 2024-12-31 00:00:00+03(invalid) | 50\n regress_du_role2 | yes | Inherit | yes | | Not allowed\n regress_du_role3 | yes | | | | 10\n(4 rows)\n\nOutput for regress_du_role3 (no password information):\n\nregress_du_role3@postgres(17.0)=> \\du regress_du_role*\n List of roles\n Role name | Login | Attributes | Password? | Valid until | Connection limit\n------------------+-------+-------------+-----------+------------------------+------------------\n regress_du_role0 | yes | Create DB +| | 2024-12-31 00:00:00+03 |\n | | Inherit +| | |\n | | Replication+| | |\n | | Bypass RLS | | |\n regress_du_role1 | no | Inherit | | 2024-12-31 00:00:00+03 | 50\n regress_du_role2 | yes | Inherit | | | Not allowed\n regress_du_role3 | yes | | | | 10\n(4 rows)\n\n\nAny comments. What did I miss?\n\n-- \nPavel Luzanov\nPostgres Professional:https://postgrespro.com",
"msg_date": "Tue, 13 Feb 2024 00:29:32 +0300",
"msg_from": "Pavel Luzanov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Things I don't like about \\du's \"Attributes\" column"
},
{
"msg_contents": "On 13.02.2024 00:29, Pavel Luzanov wrote:\n> The changes are split into two patches.\n> 0001 - pg_roles view. I plan to organize a new thread for discussion.\n\nPlease see it here:\nhttps://www.postgresql.org/message-id/db1d94ba-1e6e-4e86-baff-91e6e79071c1%40postgrespro.ru\n\n-- \nPavel Luzanov\nPostgres Professional:https://postgrespro.com\n\n\n\n\n\n\n On 13.02.2024 00:29, Pavel Luzanov wrote:\n\n\nThe changes are split into two patches.\n0001 - pg_roles view. I plan to organize a new thread for discussion.\n\nPlease see it here:\nhttps://www.postgresql.org/message-id/db1d94ba-1e6e-4e86-baff-91e6e79071c1%40postgrespro.ru\n-- \nPavel Luzanov\nPostgres Professional: https://postgrespro.com",
"msg_date": "Fri, 16 Feb 2024 13:04:15 +0300",
"msg_from": "Pavel Luzanov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Things I don't like about \\du's \"Attributes\" column"
},
{
"msg_contents": "On Mon, Feb 12, 2024 at 2:29 PM Pavel Luzanov <[email protected]>\nwrote:\n\n> regress_du_role1 | no | Inherit | no | 2024-12-31 00:00:00+03(invalid) | 50 | Group role without password but with valid until\n> regress_du_role2 | yes | Inherit | yes | | Not allowed | No connections allowed\n> regress_du_role3 | yes | | yes | | 10 | User without attributes\n> regress_du_su | yes | Superuser +| yes | | 3(ignored) | Superuser but with connection limit\n>\n>\nPer the recent bug report, we should probably add something like (ignored)\nafter the 50 connections for role1 since they are not allowed to login so\nthe value is indeed ignored. It is ignored to zero as opposed to unlimited\nfor the Superuser so maybe a different word (not allowed)?\n\nDavid J.\n\nOn Mon, Feb 12, 2024 at 2:29 PM Pavel Luzanov <[email protected]> wrote:\n regress_du_role1 | no | Inherit | no | 2024-12-31 00:00:00+03(invalid) | 50 | Group role without password but with valid until\n regress_du_role2 | yes | Inherit | yes | | Not allowed | No connections allowed\n regress_du_role3 | yes | | yes | | 10 | User without attributes\n regress_du_su | yes | Superuser +| yes | | 3(ignored) | Superuser but with connection limit\nPer the recent bug report, we should probably add something like (ignored) after the 50 connections for role1 since they are not allowed to login so the value is indeed ignored. It is ignored to zero as opposed to unlimited for the Superuser so maybe a different word (not allowed)?David J.",
"msg_date": "Fri, 16 Feb 2024 14:37:53 -0700",
"msg_from": "\"David G. Johnston\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Things I don't like about \\du's \"Attributes\" column"
},
{
"msg_contents": "\"David G. Johnston\" <[email protected]> writes:\n> Per the recent bug report, we should probably add something like (ignored)\n> after the 50 connections for role1 since they are not allowed to login so\n> the value is indeed ignored. It is ignored to zero as opposed to unlimited\n> for the Superuser so maybe a different word (not allowed)?\n\nNot sure it's worth worrying about, but if we do I'd not bother to\nshow the irrelevant value at all: it's just making the display wider\nto little purpose. We could make the column read as \"(irrelevant)\",\nor leave it blank. I'd argue the same for password expiration\ntime BTW.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 16 Feb 2024 16:44:50 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Things I don't like about \\du's \"Attributes\" column"
},
{
"msg_contents": "On 17.02.2024 00:44, Tom Lane wrote:\n> \"David G. Johnston\"<[email protected]> writes:\n>> Per the recent bug report, we should probably add something like (ignored)\n>> after the 50 connections for role1 since they are not allowed to login so\n>> the value is indeed ignored. It is ignored to zero as opposed to unlimited\n>> for the Superuser so maybe a different word (not allowed)?\n> Not sure it's worth worrying about, but if we do I'd not bother to\n> show the irrelevant value at all: it's just making the display wider\n> to little purpose. We could make the column read as \"(irrelevant)\",\n> or leave it blank. I'd argue the same for password expiration\n> time BTW.\n\nPlease look at v5.\n\nChanges:\n- 'XXX(ignored)' replaced by '(irrelevant)' for 'Connection limit'.\n\tfor superusers with Connection limit\n\tfor roles without login and Connection limit\n- 'XXX(invalid)' replaced by '(irrelevant)' for 'Valid until'.\n\tfor roles without password and Valid until\n- 'Not allowed' replaced by '(not allowed)' for consistency.\n\tfor roles with Connection limit = 0\n\npostgres@postgres(17.0)=# \\du regress*\n List of roles\n Role name | Login | Attributes | Password? | Valid until | Connection limit\n------------------+-------+-------------+-----------+------------------------+------------------\n regress_du_admin | yes | Create role+| yes | infinity |\n | | Inherit | | |\n regress_du_role0 | yes | Create DB +| yes | 2024-12-31 00:00:00+03 |\n | | Inherit +| | |\n | | Replication+| | |\n | | Bypass RLS | | |\n regress_du_role1 | no | Inherit | no | (irrelevant) | (irrelevant)\n regress_du_role2 | yes | Inherit | yes | | (not allowed)\n regress_du_role3 | yes | | yes | | 10\n regress_du_su | yes | Superuser +| yes | | (irrelevant)\n | | Create DB +| | |\n | | Create role+| | |\n | | Inherit +| | |\n | | Replication+| | |\n | | Bypass RLS | | |\n(6 rows)\n\n-- \nPavel Luzanov\nPostgres Professional:https://postgrespro.com",
"msg_date": "Sat, 17 Feb 2024 21:06:16 +0300",
"msg_from": "Pavel Luzanov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Things I don't like about \\du's \"Attributes\" column"
},
{
"msg_contents": "On 17.02.2024 00:37, David G. Johnston wrote:\n> On Mon, Feb 12, 2024 at 2:29 PM Pavel Luzanov \n> <[email protected]> wrote:\n>\n> regress_du_role1 | no | Inherit | no | 2024-12-31 00:00:00+03(invalid) | 50 | Group role without password but with valid until\n> regress_du_role2 | yes | Inherit | yes | | Not allowed | No connections allowed\n> regress_du_role3 | yes | | yes | | 10 | User without attributes\n> regress_du_su | yes | Superuser +| yes | | 3(ignored) | Superuser but with connection limit\n>\n>\n> Per the recent bug report, we should probably add something like \n> (ignored) after the 50 connections for role1 since they are not \n> allowed to login so the value is indeed ignored.\n\nHm, but the same logic applies to \"Password?\" and \"Valid until\" for role1 without login attribute.\nThe challenge is how to display it for unprivileged users. But they can't see password information.\nSo, displaying 'Valid until' as '(irrelevant)' for privileged users and real value for others looks badly.\n\nWhat can be done in this situation.\n0. Show different values as described above.\n1. Don't show 'Valid until' for unprivileged users at all. The same logic as for 'Password?'.\nWith possible exception: user can see 'Valid until' for himself.\nMay be too complicated?\n\n2. Tom's advise: \n\n> Not sure it's worth worrying about\n\nShow real values for 'Valid until' and 'Connection limit' without any hints.\n\n3. The best solution, which I can't see now.\n\n-- \nPavel Luzanov\nPostgres Professional:https://postgrespro.com\n\n\n\n\n\n\n On 17.02.2024 00:37, David G. Johnston wrote:\n\n\n\n\nOn Mon, Feb\n 12, 2024 at 2:29 PM Pavel Luzanov <[email protected]>\n wrote:\n\n\n\n\n\n regress_du_role1 | no | Inherit | no | 2024-12-31 00:00:00+03(invalid) | 50 | Group role without password but with valid until\n regress_du_role2 | yes | Inherit | yes | | Not allowed | No connections allowed\n regress_du_role3 | yes | | yes | | 10 | User without attributes\n regress_du_su | yes | Superuser +| yes | | 3(ignored) | Superuser but with connection limit\n\n\n\n\n\n\nPer the\n recent bug report, we should probably add something like\n (ignored) after the 50 connections for role1 since they are\n not allowed to login so the value is indeed ignored. \n\n\n\n\nHm, but the same logic applies to \"Password?\" and \"Valid until\" for role1 without login attribute.\nThe challenge is how to display it for unprivileged users. But they can't see password information.\nSo, displaying 'Valid until' as '(irrelevant)' for privileged users and real value for others looks badly.\n\nWhat can be done in this situation.\n0. Show different values as described above.\n1. Don't show 'Valid until' for unprivileged users at all. The same logic as for 'Password?'.\nWith possible exception: user can see 'Valid until' for himself. \nMay be too complicated?\n\n2. Tom's advise: \n\nNot sure it's worth worrying about\n\nShow real values for 'Valid until' and 'Connection limit' without any hints.\n\n3. The best solution, which I can't see now.\n\n\n-- \nPavel Luzanov\nPostgres Professional: https://postgrespro.com",
"msg_date": "Sun, 18 Feb 2024 14:14:21 +0300",
"msg_from": "Pavel Luzanov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Things I don't like about \\du's \"Attributes\" column"
},
{
"msg_contents": "Hi All,\r\n\r\n\r\nJust noticed that the definition:\r\npostgres=# \\d pg_shadow\r\n.....\r\n usebypassrls | boolean | | |\r\n passwd | text | C | |\r\n.....\r\n\r\nLooks like there is no length restriction for the password of a user.\r\n\r\n\r\nAnd in the code change history, 67a472d71c (\"Remove arbitrary restrictions on password length.\", 2020-09-03) \r\nseems having removed the length restriction. (in the history, there is 100 or even max length of 1024.)\r\n\r\nSo, here, just a minor question, can we consider there is no max length restriction for the password of a user? \r\nNeed some document to make a clarification or suggestion to the user?\r\n\r\nBR,\r\nSean He ([email protected])\nHi All,Just noticed that the definition:postgres=# \\d pg_shadow..... usebypassrls | boolean | | | passwd | text | C | |.....Looks like there is no length restriction for the password of a user.And in the code change history, 67a472d71c (\"Remove arbitrary restrictions on password length.\", 2020-09-03) seems having removed the length restriction. (in the history, there is 100 or even max length of 1024.)So, here, just a minor question, can we consider there is no max length restriction for the password of a user? Need some document to make a clarification or suggestion to the user?BR,Sean He ([email protected])",
"msg_date": "Mon, 18 Mar 2024 21:29:11 +0800",
"msg_from": "\"=?gb18030?B?U2Vhbg==?=\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Is there still password max length restrictions in PG?"
},
{
"msg_contents": "> On 18 Mar 2024, at 14:29, Sean <[email protected]> wrote:\n\n> Need some document to make a clarification or suggestion to the user?\n\nThe suggestion is to not use password authentication but instead use SCRAM.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Mon, 18 Mar 2024 14:33:46 +0100",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Is there still password max length restrictions in PG?"
},
{
"msg_contents": "Hi, \r\nThanks for your information. Even using SCRAM, when specified the content of \"password\", still there is a basic request about the length of it. From the source code, seems there is no restriction, right? \r\n\r\nIs it reasonable? \r\n\r\nBR,\r\nSean He\r\n\r\n\r\n------------------ Original ------------------\r\nFrom: \"Daniel Gustafsson\" <[email protected]>;\r\nDate: Mon, Mar 18, 2024 09:33 PM\r\nTo: \"Sean\"<[email protected]>;\r\nCc: \"pgsql-hackers\"<[email protected]>;\r\nSubject: Re: Is there still password max length restrictions in PG?\r\n\r\n\r\n\r\n> On 18 Mar 2024, at 14:29, Sean <[email protected]> wrote:\r\n\r\n> Need some document to make a clarification or suggestion to the user?\r\n\r\nThe suggestion is to not use password authentication but instead use SCRAM.\r\n\r\n--\r\nDaniel Gustafsson\nHi, Thanks for your information. Even using SCRAM, when specified the content of \"password\", still there is a basic request about the length of it. From the source code, seems there is no restriction, right? Is it reasonable? BR,Sean He------------------ Original ------------------From: \"Daniel Gustafsson\" <[email protected]>;Date: Mon, Mar 18, 2024 09:33 PMTo: \"Sean\"<[email protected]>;Cc: \"pgsql-hackers\"<[email protected]>;Subject: Re: Is there still password max length restrictions in PG?> On 18 Mar 2024, at 14:29, Sean <[email protected]> wrote:> Need some document to make a clarification or suggestion to the user?The suggestion is to not use password authentication but instead use SCRAM.--Daniel Gustafsson",
"msg_date": "Mon, 18 Mar 2024 21:43:27 +0800",
"msg_from": "\"=?gb18030?B?U2Vhbg==?=\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Is there still password max length restrictions in PG?"
},
{
"msg_contents": "> On 18 Mar 2024, at 14:43, Sean <[email protected]> wrote:\n> \n> Hi, \n> Thanks for your information. Even using SCRAM, when specified the content of \"password\", still there is a basic request about the length of it. From the source code, seems there is no restriction, right? \n\nSCRAM stores a hashed fixed-size representation of the password, so there is no\nrestriction in terms of length on the user supplied secret.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Mon, 18 Mar 2024 14:49:58 +0100",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Is there still password max length restrictions in PG?"
},
{
"msg_contents": "Thanks Daniel. \r\nThat's a big help to me!\r\n\r\n\r\n------------------ Original ------------------\r\nFrom: \"Daniel Gustafsson\" <[email protected]>;\r\nDate: Mon, Mar 18, 2024 09:49 PM\r\nTo: \"Sean\"<[email protected]>;\r\nCc: \"pgsql-hackers\"<[email protected]>;\r\nSubject: Re: Is there still password max length restrictions in PG?\r\n\r\n\r\n\r\n> On 18 Mar 2024, at 14:43, Sean <[email protected]> wrote:\r\n> \r\n> Hi, \r\n> Thanks for your information. Even using SCRAM, when specified the content of \"password\", still there is a basic request about the length of it. From the source code, seems there is no restriction, right? \r\n\r\nSCRAM stores a hashed fixed-size representation of the password, so there is no\r\nrestriction in terms of length on the user supplied secret.\r\n\r\n--\r\nDaniel Gustafsson\nThanks Daniel. That's a big help to me!------------------ Original ------------------From: \"Daniel Gustafsson\" <[email protected]>;Date: Mon, Mar 18, 2024 09:49 PMTo: \"Sean\"<[email protected]>;Cc: \"pgsql-hackers\"<[email protected]>;Subject: Re: Is there still password max length restrictions in PG?> On 18 Mar 2024, at 14:43, Sean <[email protected]> wrote:> > Hi, > Thanks for your information. Even using SCRAM, when specified the content of \"password\", still there is a basic request about the length of it. From the source code, seems there is no restriction, right? SCRAM stores a hashed fixed-size representation of the password, so there is norestriction in terms of length on the user supplied secret.--Daniel Gustafsson",
"msg_date": "Mon, 18 Mar 2024 21:51:27 +0800",
"msg_from": "\"=?gb18030?B?U2Vhbg==?=\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Is there still password max length restrictions in PG?"
},
{
"msg_contents": "I think we can change the output like this:\r\n\r\npostgres=# \\du\r\n List of roles\r\n Role name | Login | Attributes | Password | Valid until | Connection limit \r\n-----------+-------+-------------+----------+-------------+------------------\r\n test | | Inherit | | | \r\n test2 | Can | Inherit | Has | | \r\n wenyi | Can | Superuser +| | | \r\n | | Create DB +| | | \r\n | | Create role+| | | \r\n | | Inherit +| | | \r\n | | Replication+| | | \r\n | | Bypass RLS | | | \r\n(3 rows)\r\n\r\nAnd I submit my the patch, have a look?\r\nYours,\r\nWen Yi\r\n\r\n------------------ Original ------------------\r\nFrom: \"Pavel Luzanov\" <[email protected]>;\r\nDate: Sun, Feb 18, 2024 07:14 PM\r\nTo: \"David G. Johnston\"<[email protected]>;\r\nCc: \"Tom Lane\"<[email protected]>;\"Jim Nasby\"<[email protected]>;\"Robert Haas\"<[email protected]>;\"pgsql-hackers\"<[email protected]>;\r\nSubject: Re: Things I don't like about \\du's \"Attributes\" column\r\n\r\n\r\n\r\nOn 17.02.2024 00:37, David G. Johnston wrote:\r\n \r\n \r\n \r\nOn Mon, Feb 12, 2024 at 2:29 PM Pavel Luzanov <[email protected]> wrote:\r\n \r\n \r\n regress_du_role1 | no| Inherit | no| 2024-12-31 00:00:00+03(invalid) | 50 | Group role without password but with valid untilregress_du_role2 | yes | Inherit | yes | | Not allowed| No connections allowedregress_du_role3 | yes | | yes | | 10 | User without attributesregress_du_su| yes | Superuser+| yes | | 3(ignored) | Superuser but with connection limit\r\n \r\n \r\n Per the recent bug report, we should probably add something like (ignored) after the 50 connections for role1 since they are not allowed to login so the value is indeed ignored. \r\n \r\n \r\n Hm, but the same logic applies to \"Password?\" and \"Valid until\" for role1 without login attribute. The challenge is how to display it for unprivileged users. But they can't see password information. So, displaying 'Valid until' as '(irrelevant)' for privileged users and real value for others looks badly.What can be done in this situation. 0. Show different values as described above. 1. Don't show 'Valid until' for unprivileged users at all. The same logic as for 'Password?'. With possible exception: user can see 'Valid until' for himself.May be too complicated?2. Tom's advise:\r\n Not sure it's worth worrying about\r\n Show real values for 'Valid until' and 'Connection limit' without any hints.3. The best solution, which I can't see now.\r\n --Pavel Luzanov Postgres Professional: https://postgrespro.com",
"msg_date": "Sun, 14 Apr 2024 10:02:00 +0800",
"msg_from": "\"=?gb18030?B?V2VuIFlp?=\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Things I don't like about \\du's \"Attributes\" column"
},
{
"msg_contents": "I think the output need to change, like this:\r\n\r\npostgres=# \\du+\r\n List of roles\r\n Role name | Login | Attributes | Password | Valid until | Connection limit | Description \r\n-----------+-------+-------------+----------+-------------+------------------+-------------\r\n test | | Inherit | | | | \r\n test2 | Can | Inherit | Has | | | \r\n wenyi | Can | Superuser +| | | | \r\n | | Create DB +| | | | \r\n | | Create role+| | | | \r\n | | Inherit +| | | | \r\n | | Replication+| | | | \r\n | | Bypass RLS | | | | \r\n(3 rows)",
"msg_date": "Sun, 14 Apr 2024 07:47:55 +0000",
"msg_from": "Wen Yi <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Things I don't like about \\du's \"Attributes\" column"
},
{
"msg_contents": "On Sat, Apr 13, 2024 at 7:02 PM Wen Yi <[email protected]> wrote:\n\n> I think we can change the output like this:\n>\n> postgres=# \\du\n> List of roles\n> Role name | Login | Attributes | Password | Valid until | Connection\n> limit\n>\n> -----------+-------+-------------+----------+-------------+------------------\n> test | | Inherit | | |\n> test2 | Can | Inherit | Has | |\n> wenyi | Can | Superuser +| | |\n> | | Create DB +| | |\n> | | Create role+| | |\n> | | Inherit +| | |\n> | | Replication+| | |\n> | | Bypass RLS | | |\n> (3 rows)\n>\n> And I submit my the patch, have a look?\n>\n>\nWhy? I actually am generally open to false being encoded as blank where\nthere are only two possible values, but there is no precedence of choosing\nsomething besides 'yes' or 'true' to represent the boolean true value.\n\nWhether Password is truly two-valued is debatable per the ongoing\ndiscussion.\n\nDavid J.\n\nOn Sat, Apr 13, 2024 at 7:02 PM Wen Yi <[email protected]> wrote:I think we can change the output like this:\n\npostgres=# \\du\n List of roles\n Role name | Login | Attributes | Password | Valid until | Connection limit \n-----------+-------+-------------+----------+-------------+------------------\n test | | Inherit | | | \n test2 | Can | Inherit | Has | | \n wenyi | Can | Superuser +| | | \n | | Create DB +| | | \n | | Create role+| | | \n | | Inherit +| | | \n | | Replication+| | | \n | | Bypass RLS | | | \n(3 rows)\n\nAnd I submit my the patch, have a look?Why? I actually am generally open to false being encoded as blank where there are only two possible values, but there is no precedence of choosing something besides 'yes' or 'true' to represent the boolean true value.Whether Password is truly two-valued is debatable per the ongoing discussion.David J.",
"msg_date": "Mon, 15 Apr 2024 14:27:25 -0700",
"msg_from": "\"David G. Johnston\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Things I don't like about \\du's \"Attributes\" column"
},
{
"msg_contents": "On Sun, Feb 18, 2024 at 4:14 AM Pavel Luzanov <[email protected]>\nwrote:\n\n> 2. Tom's advise:\n>\n> Not sure it's worth worrying about\n>\n> Show real values for 'Valid until' and 'Connection limit' without any hints.\n>\n>\nAt this point I'm on board with retaining the \\dr charter of simply being\nan easy way to access the detail exposed in pg_roles with some display\nformatting but without any attempt to convey how the system uses said\ninformation. Without changing pg_roles. Our level of effort here, and\ndegree of dependence on superuser, doesn't seem to be bothering people\nenough to push more radical changes here through and we have good\nimprovements that are being held up in the hope of possible perfection.\n\nDavid J.\n\nOn Sun, Feb 18, 2024 at 4:14 AM Pavel Luzanov <[email protected]> wrote:\n2. Tom's advise: \n\nNot sure it's worth worrying about\n\nShow real values for 'Valid until' and 'Connection limit' without any hints.\nAt this point I'm on board with retaining the \\dr charter of simply being an easy way to access the detail exposed in pg_roles with some display formatting but without any attempt to convey how the system uses said information. Without changing pg_roles. Our level of effort here, and degree of dependence on superuser, doesn't seem to be bothering people enough to push more radical changes here through and we have good improvements that are being held up in the hope of possible perfection.David J.",
"msg_date": "Mon, 15 Apr 2024 15:06:26 -0700",
"msg_from": "\"David G. Johnston\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Things I don't like about \\du's \"Attributes\" column"
},
{
"msg_contents": "On 16.04.2024 01:06, David G. Johnston wrote:\n\n> At this point I'm on board with retaining the \\dr charter of simply being an easy way to access the detail exposed in pg_roles with some display formatting but without any attempt to convey how the system uses said information. Without changing pg_roles. Our level of effort here, and degree of dependence on superuser, doesn't seem to be bothering people enough to push more radical changes here through and we have good improvements that are being held up in the hope of possible perfection.\n\nI have similar thoughts. I decided to wait for the end of featurefreeze \nand propose a simpler version of the patch for v18, without changes in \npg_roles. I hope to send a new version soon. But about \\dr. Is it a typo \nand you mean \\du & \\dg? If we were choosing a name for the command now, \nthen \\dr would be ideal: \\dr - display roles \\drg - display role grants \nBut the long history of \\du & \\dg prevents from doing so, and creating a \nthird option is too excessive.\n\n-- \nPavel Luzanov\nPostgres Professional:https://postgrespro.com\n\n\n\n\n\n\nOn 16.04.2024 01:06, David G. Johnston wrote:\n\n\n\n\n\nAt this point I'm on board with retaining the \\dr charter of simply being an easy way to access the detail exposed in pg_roles with some display formatting but without any attempt to convey how the system uses said information. Without changing pg_roles. Our level of effort here, and degree of dependence on superuser, doesn't seem to be bothering people enough to push more radical changes here through and we have good improvements that are being held up in the hope of possible perfection.\n\n\n\n\nI have similar thoughts.\nI decided to wait for the end of feature freeze and propose a simpler version\nof the patch for v18, without changes in pg_roles.\nI hope to send a new version soon.\n\nBut about \\dr. Is it a typo and you mean \\du & \\dg?\nIf we were choosing a name for the command now, then \\dr would be ideal:\n\\dr - display roles\n\\drg - display role grants\n\nBut the long history of \\du & \\dg prevents from doing so, and creating a third option is too excessive.\n\n-- \nPavel Luzanov\nPostgres Professional: https://postgrespro.com",
"msg_date": "Tue, 16 Apr 2024 09:15:58 +0300",
"msg_from": "Pavel Luzanov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Things I don't like about \\du's \"Attributes\" column"
},
{
"msg_contents": "Hi,\n\nOn 14.04.2024 05:02, Wen Yi wrote:\n> I think we can change the output like this:\n>\n> postgres=# \\du\n> List of roles\n> Role name | Login | Attributes | Password | Valid until | Connection limit\n> -----------+-------+-------------+----------+-------------+------------------\n> test | | Inherit | | |\n> test2 | Can | Inherit | Has | |\n> wenyi | Can | Superuser +| | |\n> | | Create DB +| | |\n> | | Create role+| | |\n> | | Inherit +| | |\n> | | Replication+| | |\n> | | Bypass RLS | | |\n> (3 rows)\n>\n> And I submit my the patch, have a look?\n\nThanks for the patch.\n\nAs I understand, your patch is based on my previous version.\nThe main thing I'm wondering about my patch is if we need to change pg_roles,\nand it looks like we don't. So, in the next version of my patch,\nthe Password column will no longer be there.\n\nAs for the Login column and its values.\nI'm not sure about using \"Can\" instead of \"yes\" to represent true.\nIn other psql commands, boolean values are always shown as yes/no.\nNULL instead of false might be possible, but I'd rather check if this approach\nhas been used elsewhere. I prefer consistency everywhere.\n\n-- \nPavel Luzanov\nPostgres Professional:https://postgrespro.com\n\n\n\n\n\n\n Hi,\n\nOn 14.04.2024 05:02, Wen Yi wrote:\n\n\nI think we can change the output like this:\n\npostgres=# \\du\n List of roles\n Role name | Login | Attributes | Password | Valid until | Connection limit \n-----------+-------+-------------+----------+-------------+------------------\n test | | Inherit | | | \n test2 | Can | Inherit | Has | | \n wenyi | Can | Superuser +| | | \n | | Create DB +| | | \n | | Create role+| | | \n | | Inherit +| | | \n | | Replication+| | | \n | | Bypass RLS | | | \n(3 rows)\n\nAnd I submit my the patch, have a look?\n\n\nThanks for the patch.\n\nAs I understand, your patch is based on my previous version.\nThe main thing I'm wondering about my patch is if we need to change pg_roles,\nand it looks like we don't. So, in the next version of my patch,\nthe Password column will no longer be there.\n\nAs for the Login column and its values.\nI'm not sure about using \"Can\" instead of \"yes\" to represent true.\nIn other psql commands, boolean values are always shown as yes/no.\nNULL instead of false might be possible, but I'd rather check if this approach\nhas been used elsewhere. I prefer consistency everywhere.\n\n\n-- \nPavel Luzanov\nPostgres Professional: https://postgrespro.com",
"msg_date": "Tue, 16 Apr 2024 10:06:26 +0300",
"msg_from": "Pavel Luzanov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Things I don't like about \\du's \"Attributes\" column"
},
{
"msg_contents": "On Tue, Apr 16, 2024 at 3:06 AM Pavel Luzanov <[email protected]> wrote:\n> As for the Login column and its values.\n> I'm not sure about using \"Can\" instead of \"yes\" to represent true.\n> In other psql commands, boolean values are always shown as yes/no.\n> NULL instead of false might be possible, but I'd rather check if this approach\n> has been used elsewhere. I prefer consistency everywhere.\n\nI don't think we can use \"Can\" to mean \"yes\". That's going to be\nreally confusing.\n\nI don't like (irrelevant) either. I know Tom Lane suggested that, but\nI think he's got the wrong idea: we should just display the\ninformation we find in the catalogs and let the user decide what is\nand isn't relevant. If I see that the connection limit is 40 but the\nuser can't log in, I can figure out that the value of 40 doesn't\nmatter. If I see that the connection limit is labelled as (irrelevant)\nI don't know why it's labelled that way and, if it were me, I'd likely\nend up looking at the source code to figure out why it's showing it\nthat way.\n\nI think we should go back to the v4 version of this patch, minus the\n(ignored) stuff.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 14 May 2024 12:03:13 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Things I don't like about \\du's \"Attributes\" column"
},
{
"msg_contents": "On Tue, May 14, 2024 at 9:03 AM Robert Haas <[email protected]> wrote:\n\n> On Tue, Apr 16, 2024 at 3:06 AM Pavel Luzanov <[email protected]>\n> wrote:\n> > As for the Login column and its values.\n> > I'm not sure about using \"Can\" instead of \"yes\" to represent true.\n> > In other psql commands, boolean values are always shown as yes/no.\n> > NULL instead of false might be possible, but I'd rather check if this\n> approach\n> > has been used elsewhere. I prefer consistency everywhere.\n>\n> I don't think we can use \"Can\" to mean \"yes\". That's going to be\n> really confusing.\n>\n\nAgreed\n\n\n> If I see that the connection limit is labelled as (irrelevant)\n> I don't know why it's labelled that way and, if it were me, I'd likely\n> end up looking at the source code to figure out why it's showing it\n> that way.\n>\n\nOr we'd document what we've done and users that don't want to go looking at\nsource code can just read our specification.\n\n\n> I think we should go back to the v4 version of this patch, minus the\n> (ignored) stuff.\n>\n>\nAgreed, I'm past the point of wanting to have this behave more\nintelligently rather than a way for people to avoid having to go write a\ncatalog using query themselves.\n\nDavid J.\n\nOn Tue, May 14, 2024 at 9:03 AM Robert Haas <[email protected]> wrote:On Tue, Apr 16, 2024 at 3:06 AM Pavel Luzanov <[email protected]> wrote:\n> As for the Login column and its values.\n> I'm not sure about using \"Can\" instead of \"yes\" to represent true.\n> In other psql commands, boolean values are always shown as yes/no.\n> NULL instead of false might be possible, but I'd rather check if this approach\n> has been used elsewhere. I prefer consistency everywhere.\n\nI don't think we can use \"Can\" to mean \"yes\". That's going to be\nreally confusing.Agreed If I see that the connection limit is labelled as (irrelevant)\nI don't know why it's labelled that way and, if it were me, I'd likely\nend up looking at the source code to figure out why it's showing it\nthat way.Or we'd document what we've done and users that don't want to go looking at source code can just read our specification.\nI think we should go back to the v4 version of this patch, minus the\n(ignored) stuff.\nAgreed, I'm past the point of wanting to have this behave more intelligently rather than a way for people to avoid having to go write a catalog using query themselves.David J.",
"msg_date": "Tue, 14 May 2024 09:28:17 -0700",
"msg_from": "\"David G. Johnston\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Things I don't like about \\du's \"Attributes\" column"
},
{
"msg_contents": "On 14.05.2024 19:03, Robert Haas wrote:\n> I think we should go back to the v4 version of this patch, minus the\n> (ignored) stuff.\n\nThank you for looking into this.\nI can assume that you support the idea of changing pg_roles. It's great.\n\nBy the way, I have attached a separate thread[1] about pg_roles to this commitfest entry[2].\n\nI will return to work on the patch after my vacation.\n\n1.https://www.postgresql.org/message-id/flat/db1d94ba-1e6e-4e86-baff-91e6e79071c1%40postgrespro.ru\n2.https://commitfest.postgresql.org/48/4738/\n\n-- \nPavel Luzanov\nPostgres Professional:https://postgrespro.com\n\n\n\n\n\n\n On 14.05.2024 19:03, Robert Haas wrote:\n\n\n\nI think we should go back to the v4 version of this patch, minus the\n(ignored) stuff.\n\n\nThank you for looking into this.\nI can assume that you support the idea of changing pg_roles. It's great.\n\nBy the way, I have attached a separate thread[1] about pg_roles to this commitfest entry[2].\n\nI will return to work on the patch after my vacation.\n\n1. https://www.postgresql.org/message-id/flat/db1d94ba-1e6e-4e86-baff-91e6e79071c1%40postgrespro.ru\n2. https://commitfest.postgresql.org/48/4738/\n\n-- \nPavel Luzanov\nPostgres Professional: https://postgrespro.com",
"msg_date": "Wed, 15 May 2024 18:05:31 +0300",
"msg_from": "Pavel Luzanov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Things I don't like about \\du's \"Attributes\" column"
},
{
"msg_contents": "On 16.04.2024 09:15, Pavel Luzanov wrote:\n> On 16.04.2024 01:06, David G. Johnston wrote:\n>> At this point I'm on board with retaining the \\dr charter of simply being\n>> an easy way to access the detail exposed in pg_roles with some display\n>> formatting but without any attempt to convey how the system uses said\n>> information. Without changing pg_roles. Our level of effort here, and\n>> degree of dependence on superuser, doesn't seem to be bothering people\n>> enough to push more radical changes here through and we have good\n>> improvements that are being held up in the hope of possible perfection.\n> I have similar thoughts. I decided to wait for the end of \n> featurefreeze and propose a simpler version of the patch for v18, \n> without changes in pg_roles\n\nSince no votes for the changes in pg_roles, please look the simplified version.\nWe can return to this topic later.\n\nButnowthere are nochangesinpg_roles. Just a special interpretation\nof the two values of the \"Connection limit\" column:\n 0 - Now allowed (changed from 'No connections')\n -1 - empty string\n\nFull list of changes in commit message.\n\nExample output:\n\n\\du+ regress_du*\n List of roles\n Role name | Login | Attributes | Valid until | Connection limit | Description\n------------------+-------+-------------+------------------------------+------------------+------------------\n regress_du_admin | yes | Superuser +| | | some description\n | | Create DB +| | |\n | | Create role+| | |\n | | Inherit +| | |\n | | Replication+| | |\n | | Bypass RLS | | |\n regress_du_role0 | yes | Inherit | Tue Jun 04 00:00:00 2024 PDT | Not allowed |\n regress_du_role1 | no | Create role+| infinity | |\n | | Inherit | | |\n regress_du_role2 | yes | Inherit +| | 42 |\n | | Replication+| | |\n | | Bypass RLS | | |\n(4 rows)\n\nData:\nCREATE ROLE regress_du_role0 LOGIN PASSWORD '123' VALID UNTIL '2024-06-04' CONNECTION LIMIT 0;\nCREATE ROLE regress_du_role1 CREATEROLE CONNECTION LIMIT -1 VALID UNTIL 'infinity';\nCREATE ROLE regress_du_role2 LOGIN REPLICATION BYPASSRLS CONNECTION LIMIT 42;\nCREATE ROLE regress_du_admin LOGIN SUPERUSER CREATEROLE CREATEDB BYPASSRLS REPLICATION INHERIT;\nCOMMENT ON ROLE regress_du_admin IS 'some description';\n\n-- \nPavel Luzanov\nPostgres Professional:https://postgrespro.com",
"msg_date": "Thu, 6 Jun 2024 12:08:48 +0300",
"msg_from": "Pavel Luzanov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Things I don't like about \\du's \"Attributes\" column"
},
{
"msg_contents": "On Thu, Jun 6, 2024 at 5:08 AM Pavel Luzanov <[email protected]> wrote:\n> But now there are no changes in pg_roles. Just a special interpretation\n> of the two values of the \"Connection limit\" column:\n> 0 - Now allowed (changed from 'No connections')\n> -1 - empty string\n\nI think the first of these special interpretations is unnecessary and\nshould be removed. It seems pretty clear what 0 means.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 6 Jun 2024 10:29:59 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Things I don't like about \\du's \"Attributes\" column"
},
{
"msg_contents": "On 06.06.2024 17:29, Robert Haas wrote:\n> I think the first of these special interpretations is unnecessary and\n> should be removed. It seems pretty clear what 0 means.\n\nAgree.\nThere is an additional technical argument for removing this replacement.\nI don't like explicit cast to text of the \"Connection limit\" column.\nWithout 'Not allowed' it is no longerrequired.\nValue -1 can be replaced by NULL with an implicit cast to integer.\n\nNext version with this change attached.\n\nExample output:\n\n\\du+ regress_du*\n List of roles\n Role name | Login | Attributes | Valid until | Connection limit | Description\n------------------+-------+-------------+------------------------------+------------------+------------------\n regress_du_admin | yes | Superuser +| | | some description\n | | Create DB +| | |\n | | Create role+| | |\n | | Inherit +| | |\n | | Replication+| | |\n | | Bypass RLS | | |\n regress_du_role0 | yes | Inherit | Tue Jun 04 00:00:00 2024 PDT | 0 |\n regress_du_role1 | no | Create role+| infinity | |\n | | Inherit | | |\n regress_du_role2 | yes | Inherit +| | 42 |\n | | Replication+| | |\n | | Bypass RLS | | |\n(4 rows)\n\nCurrent version for comparison:\n\n List of roles\n Role name | Attributes | Description\n------------------+------------------------------------------------------------+------------------\n regress_du_admin | Superuser, Create role, Create DB, Replication, Bypass RLS | some description\n regress_du_role0 | No connections +|\n | Password valid until 2024-06-04 00:00:00+03 |\n regress_du_role1 | Create role, Cannot login +|\n | Password valid until infinity |\n regress_du_role2 | Replication, Bypass RLS +|\n | 42 connections |\n\n\nData:\nCREATE ROLE regress_du_role0 LOGIN PASSWORD '123' VALID UNTIL '2024-06-04' CONNECTION LIMIT 0;\nCREATE ROLE regress_du_role1 CREATEROLE CONNECTION LIMIT -1 VALID UNTIL 'infinity';\nCREATE ROLE regress_du_role2 LOGIN REPLICATION BYPASSRLS CONNECTION LIMIT 42;\nCREATE ROLE regress_du_admin LOGIN SUPERUSER CREATEROLE CREATEDB BYPASSRLS REPLICATION INHERIT;\nCOMMENT ON ROLE regress_du_admin IS 'some description';\n\n-- \nPavel Luzanov\nPostgres Professional:https://postgrespro.com",
"msg_date": "Fri, 7 Jun 2024 00:10:34 +0300",
"msg_from": "Pavel Luzanov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Things I don't like about \\du's \"Attributes\" column"
},
{
"msg_contents": "On Thu, Jun 6, 2024 at 5:10 PM Pavel Luzanov <[email protected]> wrote:\n> Agree.\n> There is an additional technical argument for removing this replacement.\n> I don't like explicit cast to text of the \"Connection limit\" column.\n> Without 'Not allowed' it is no longer required.\n> Value -1 can be replaced by NULL with an implicit cast to integer.\n\nYeah, +1 for that idea.\n\n> Example output:\n>\n> \\du+ regress_du*\n> List of roles\n> Role name | Login | Attributes | Valid until | Connection limit | Description\n> ------------------+-------+-------------+------------------------------+------------------+------------------\n> regress_du_admin | yes | Superuser +| | | some description\n> | | Create DB +| | |\n> | | Create role+| | |\n> | | Inherit +| | |\n> | | Replication+| | |\n> | | Bypass RLS | | |\n> regress_du_role0 | yes | Inherit | Tue Jun 04 00:00:00 2024 PDT | 0 |\n> regress_du_role1 | no | Create role+| infinity | |\n> | | Inherit | | |\n> regress_du_role2 | yes | Inherit +| | 42 |\n> | | Replication+| | |\n> | | Bypass RLS | | |\n> (4 rows)\n\nThis seems unobjectionable to me. I am not sure whether it is better\nthan the current verison, or whether it is what we want. But it seems\nreasonable.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 7 Jun 2024 08:35:14 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Things I don't like about \\du's \"Attributes\" column"
},
{
"msg_contents": "On 07.06.2024 15:35, Robert Haas wrote:\n\n> This seems unobjectionable to me. I am not sure whether it is better\n> than the current verison, or whether it is what we want. But it seems\n> reasonable.\n\nI consider this patch as a continuation of the work on \\drg command,\nwhen it was decided to remove the \"Member of\" column from \\du command.\n\nWithout \"Member of\" column, the output of the \\du command looks very short.\nOnly two columns: \"Role name\" and \"Attributes\". All the information about\nthe role is collected in just one \"Attributes\" column and it is not presented\nin the most convenient and obvious way. What exactly is wrong with\nthe Attribute column Tom wrote in the first message of this thread and I agree\nwith these arguments.\n\nThe current implementation offers some solutions for 3 of the 4 issues\nmentioned in Tom's initial message. Issue about display of rolvaliduntil\ncan't be resolved without changing pg_roles (or executing different queries\nfor different users).\n\nTherefore, I think the current patch offers a better version of the \\du command.\nHowever, I admit that these improvements are not enough to accept the patch.\nI would like to hear other opinions.\n\n-- \nPavel Luzanov\nPostgres Professional:https://postgrespro.com\n\n\n\n\n\n\nOn 07.06.2024 15:35, Robert Haas wrote:\n\nThis seems unobjectionable to me. I am not sure whether it is better\nthan the current verison, or whether it is what we want. But it seems\nreasonable.\n\n\nI consider this patch as a continuation of the work on \\drg command,\nwhen it was decided to remove the \"Member of\" column from \\du command. \n\nWithout \"Member of\" column, the output of the \\du command looks very short.\nOnly two columns: \"Role name\" and \"Attributes\". All the information about\nthe role is collected in just one \"Attributes\" column and it is not presented\nin the most convenient and obvious way. What exactly is wrong with\nthe Attribute column Tom wrote in the first message of this thread and I agree\nwith these arguments.\n\nThe current implementation offers some solutions for 3 of the 4 issues\nmentioned in Tom's initial message. Issue about display of rolvaliduntil\ncan't be resolved without changing pg_roles (or executing different queries\nfor different users).\n\nTherefore, I think the current patch offers a better version of the \\du command.\nHowever, I admit that these improvements are not enough to accept the patch.\nI would like to hear other opinions.\n\n-- \nPavel Luzanov\nPostgres Professional: https://postgrespro.com",
"msg_date": "Sat, 8 Jun 2024 17:02:16 +0300",
"msg_from": "Pavel Luzanov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Things I don't like about \\du's \"Attributes\" column"
},
{
"msg_contents": "On Sat, Jun 8, 2024 at 10:02 AM Pavel Luzanov <[email protected]> wrote:\n> Therefore, I think the current patch offers a better version of the \\du command.\n> However, I admit that these improvements are not enough to accept the patch.\n> I would like to hear other opinions.\n\nHmm, I don't think I quite agree with this. If people like this\nversion better than what we have now, that's all we need to accept the\npatch. I just don't really want to be the one to decide all by myself\nwhether this is, in fact, better. So, like you, I would like to hear\nother opinions.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Sat, 8 Jun 2024 14:09:11 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Things I don't like about \\du's \"Attributes\" column"
},
{
"msg_contents": "At Sat, 8 Jun 2024 14:09:11 -0400, Robert Haas <[email protected]> wrote in \r\n> On Sat, Jun 8, 2024 at 10:02 AM Pavel Luzanov <[email protected]> wrote:\r\n> > Therefore, I think the current patch offers a better version of the \\du command.\r\n> > However, I admit that these improvements are not enough to accept the patch.\r\n> > I would like to hear other opinions.\r\n> \r\n> Hmm, I don't think I quite agree with this. If people like this\r\n> version better than what we have now, that's all we need to accept the\r\n> patch. I just don't really want to be the one to decide all by myself\r\n> whether this is, in fact, better. So, like you, I would like to hear\r\n> other opinions.\r\n\r\n> regress_du_role0 | yes | Inherit | Tue Jun 04 00:00:00 2024 PDT | 0 |\r\n> regress_du_role1 | no | Create role+| infinity | |\r\n\r\nI guess that in English, when written as \"'Login' = 'yes/no'\", it can\r\nbe easily understood. However, in Japanese, \"'ログイン' = 'はい/いいえ'\"\r\nlooks somewhat awkward and is a bit difficult to understand at a\r\nglance. \"'ログイン' = '可/不可'\" (equivalent to \"Login is\r\n'can/cannot'\") sounds more natural in Japanese, but it was rejected\r\nupthread, and I also don't like 'can/cannot'. To give further\r\ncandidates, \"allowed/not allowed\" or \"granted/denied\" can be\r\nmentioned, and they would be easier to translate, at least to\r\nJapanese. However, is there a higher likelihood that 'granted/denied'\r\nwill be misunderstood as referring to database permissions?\r\n\r\nLikewise, \"'Valid until' = 'infinity'\" (equivalent to \"'有効期限' = '\r\n無限'\") also sounds awkward. Maybe that's the same in English. I guess\r\nthat 'unbounded' or 'indefinite' sounds better, and their Japanese\r\ntranslation '無期限' also sounds natural. However, I'm not sure we\r\nwant to go to that extent in transforming the table.\r\n\r\nregards.\r\n\r\n-- \r\nKyotaro Horiguchi\r\nNTT Open Source Software Center\r\n",
"msg_date": "Mon, 10 Jun 2024 15:25:35 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Things I don't like about \\du's \"Attributes\" column"
},
{
"msg_contents": "On 10.06.2024 09:25, Kyotaro Horiguchi wrote:\n\n> I guess that in English, when written as \"'Login' = 'yes/no'\", it can\n> be easily understood. However, in Japanese, \"'ログイン' = 'はい/いいえ'\"\n> looks somewhat awkward and is a bit difficult to understand at a\n> glance. \"'ログイン' = '可/不可'\" (equivalent to \"Login is\n> 'can/cannot'\") sounds more natural in Japanese, but it was rejected\n> upthread, and I also don't like 'can/cannot'. To give further\n> candidates, \"allowed/not allowed\" or \"granted/denied\" can be\n> mentioned, and they would be easier to translate, at least to\n> Japanese. However, is there a higher likelihood that 'granted/denied'\n> will be misunderstood as referring to database permissions?\n\nThank you for looking into this, translationis important.\n\nWhat do you think about the following options?\n\n1. Try to find a more appropriate name for the column.\nMaybe \"Can login?\" is better suited for yes/no and Japanese translation?\n\n2. Show the value only for true, for example \"Granted\" as you suggested.\nDo not show the \"false\" value at all. This will be consistent\nwith the \"Attributes\" column, which shows only enabled values.\n\nI would prefer the first option and look for the best name for the column.\nThe second option can also be implemented if we сhoose a value for 'true'.\n\nBTW, I went through all the \\d* commands and looked at how columns with\nlogical values are displayed. There are two approaches: yes/no and t/f.\n\nyes/no\n\\dAc \"Default?\"\n\\dc \"Default?\"\n\\dC \"Implicit?\"\n\\dO \"Deterministic?\"\n\nt/f\n\\dL \"Trusted\", \"Internal language\"\n\\dRp \"All tables\", \"Inserts\" \"Updates\" \"Deletes\" \"Truncates\" \"Via root\"\n\\dRs \"Enabled\", \"Binary\", \"Disable on error\", \"Password required\", \"Run as owner?\", \"Failover\"\n\n> Likewise, \"'Valid until' = 'infinity'\" (equivalent to \"'有効期限' = '\n> 無限'\") also sounds awkward. Maybe that's the same in English. I guess\n> that 'unbounded' or 'indefinite' sounds better, and their Japanese\n> translation '無期限' also sounds natural. However, I'm not sure we\n> want to go to that extent in transforming the table.\n\n'infinity' is the value in the table as any other dates.\nAs far as I understand, it is not translatable.\nSo you'll see '有効期限' = 'infinity'.\n\n\nButthis can be implemented usingthe followingexpression: case when rolvaliduntil = 'infinity' or rolvaliduntil is null then\n 'unbounded' -- translatable value\n else\n rolvaliduntil::pg_catalog.text\n end\n\nOr we can hide 'infinity':\n\n case when rolvaliduntil = 'infinity' then\n null\n else\n rolvaliduntil\n end\n\nThis is a little bit better, but I don't like both. Wewill notbe ableto \ndistinguishbetween nullandinfinity values inthe table. After all, I \nthink 'infinity' is a rare case for \"Valid until\". Whatis the reasonto \nset'Validuntil'='infinity'ifthe passwordisunlimitedbydefault? Therefore, \nmy opinion here is to leave \"infinity\" as is, but I am open to better \nalternatives.\n\n-- \nPavel Luzanov\nPostgres Professional:https://postgrespro.com\n\n\n\n\n\n\nOn 10.06.2024 09:25, Kyotaro Horiguchi wrote:\n\n\nI guess that in English, when written as \"'Login' = 'yes/no'\", it can\nbe easily understood. However, in Japanese, \"'ログイン' = 'はい/いいえ'\"\nlooks somewhat awkward and is a bit difficult to understand at a\nglance. \"'ログイン' = '可/不可'\" (equivalent to \"Login is\n'can/cannot'\") sounds more natural in Japanese, but it was rejected\nupthread, and I also don't like 'can/cannot'. 
To give further\ncandidates, \"allowed/not allowed\" or \"granted/denied\" can be\nmentioned, and they would be easier to translate, at least to\nJapanese. However, is there a higher likelihood that 'granted/denied'\nwill be misunderstood as referring to database permissions?\n\nThank you for looking into this, translation is important.\n\nWhat do you think about the following options?\n\n1. Try to find a more appropriate name for the column.\nMaybe \"Can login?\" is better suited for yes/no and Japanese translation?\n\n2. Show the value only for true, for example \"Granted\" as you suggested. \nDo not show the \"false\" value at all. This will be consistent\nwith the \"Attributes\" column, which shows only enabled values.\n\nI would prefer the first option and look for the best name for the column.\nThe second option can also be implemented if we сhoose a value for 'true'.\n\nBTW, I went through all the \\d* commands and looked at how columns with\nlogical values are displayed. There are two approaches: yes/no and t/f.\n\nyes/no\n\\dAc \"Default?\"\n\\dc \"Default?\"\n\\dC \"Implicit?\"\n\\dO \"Deterministic?\"\n\nt/f\n\\dL \"Trusted\", \"Internal language\"\n\\dRp \"All tables\", \"Inserts\" \"Updates\" \"Deletes\" \"Truncates\" \"Via root\"\n\\dRs \"Enabled\", \"Binary\", \"Disable on error\", \"Password required\", \"Run as owner?\", \"Failover\"\n\n\n\nLikewise, \"'Valid until' = 'infinity'\" (equivalent to \"'有効期限' = '\n無限'\") also sounds awkward. Maybe that's the same in English. I guess\nthat 'unbounded' or 'indefinite' sounds better, and their Japanese\ntranslation '無期限' also sounds natural. However, I'm not sure we\nwant to go to that extent in transforming the table.\n\n\n\n'infinity' is the value in the table as any other dates.\nAs far as I understand, it is not translatable.\nSo you'll see '有効期限' = 'infinity'.\n\n\nBut this can be implemented using the following expression:\n\n case when rolvaliduntil = 'infinity' or rolvaliduntil is null then\n 'unbounded' -- translatable value\n else\n rolvaliduntil::pg_catalog.text\n end\n\nOr we can hide 'infinity':\n\n case when rolvaliduntil = 'infinity' then\n null\n else\n rolvaliduntil\n end\n\nThis is a little bit better, but I don't like both.\nWe will not be able to distinguish between null and infinity\nvalues in the table.\n\nAfter all, I think 'infinity' is a rare case for \"Valid until\".\nWhat is the reason to set 'Valid until' = 'infinity' if the password\nis unlimited by default?\n\nTherefore, my opinion here is to leave \"infinity\" as is, but I am open\nto better alternatives.\n\n\n\n-- \nPavel Luzanov\nPostgres Professional: https://postgrespro.com",
"msg_date": "Mon, 10 Jun 2024 22:26:38 +0300",
"msg_from": "Pavel Luzanov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Things I don't like about \\du's \"Attributes\" column"
},
{
"msg_contents": "On Thu, 6 Jun 2024 at 23:10, Pavel Luzanov <[email protected]> wrote:\n>\n> On 06.06.2024 17:29, Robert Haas wrote:\n>\n> I think the first of these special interpretations is unnecessary and\n> should be removed. It seems pretty clear what 0 means.\n>\n> Agree.\n> There is an additional technical argument for removing this replacement.\n> I don't like explicit cast to text of the \"Connection limit\" column.\n> Without 'Not allowed' it is no longer required.\n> Value -1 can be replaced by NULL with an implicit cast to integer.\n>\n> Next version with this change attached.\n>\n> Example output:\n>\n> \\du+ regress_du*\n> List of roles\n> Role name | Login | Attributes | Valid until | Connection limit | Description\n> ------------------+-------+-------------+------------------------------+------------------+------------------\n> regress_du_admin | yes | Superuser +| | | some description\n> | | Create DB +| | |\n> | | Create role+| | |\n> | | Inherit +| | |\n> | | Replication+| | |\n> | | Bypass RLS | | |\n> regress_du_role0 | yes | Inherit | Tue Jun 04 00:00:00 2024 PDT | 0 |\n> regress_du_role1 | no | Create role+| infinity | |\n> | | Inherit | | |\n> regress_du_role2 | yes | Inherit +| | 42 |\n> | | Replication+| | |\n> | | Bypass RLS | | |\n> (4 rows)\n>\n> Current version for comparison:\nThis looks much better than the current version. Only thing is, I find\nthe column name Valid until confusing. With that name I am in danger\nof taking it as the role's validity and not the passwords'.\nHow about naming it to something like Password validity...?\n>\n> List of roles\n> Role name | Attributes | Description\n> ------------------+------------------------------------------------------------+------------------\n> regress_du_admin | Superuser, Create role, Create DB, Replication, Bypass RLS | some description\n> regress_du_role0 | No connections +|\n> | Password valid until 2024-06-04 00:00:00+03 |\n> regress_du_role1 | Create role, Cannot login +|\n> | Password valid until infinity |\n> regress_du_role2 | Replication, Bypass RLS +|\n> | 42 connections |\n>\n>\n> Data:\n> CREATE ROLE regress_du_role0 LOGIN PASSWORD '123' VALID UNTIL '2024-06-04' CONNECTION LIMIT 0;\n> CREATE ROLE regress_du_role1 CREATEROLE CONNECTION LIMIT -1 VALID UNTIL 'infinity';\n> CREATE ROLE regress_du_role2 LOGIN REPLICATION BYPASSRLS CONNECTION LIMIT 42;\n> CREATE ROLE regress_du_admin LOGIN SUPERUSER CREATEROLE CREATEDB BYPASSRLS REPLICATION INHERIT;\n> COMMENT ON ROLE regress_du_admin IS 'some description';\n>\n> --\n> Pavel Luzanov\n> Postgres Professional: https://postgrespro.com\n\n-- \nRegards,\nRafia Sabih\n\n\n",
"msg_date": "Thu, 11 Jul 2024 14:07:27 +0200",
"msg_from": "Rafia Sabih <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Things I don't like about \\du's \"Attributes\" column"
},
{
"msg_contents": "On 11.07.2024 15:07, Rafia Sabih wrote:\n> This looks much better than the current version.\n\nThank you, for looking into this.\n\n> Only thing is, I find\n> the column name Valid until confusing. With that name I am in danger\n> of taking it as the role's validity and not the passwords'.\n> How about naming it to something like Password validity...?\n\nYes, my first attempt was to name this column \"Passwordexpirationdate\" for the same reason. But then I decided that the \ncolumn name should match the attribute name. Otherwise, you need to make \nsome effort to understand which columns of the table correspond to which \nattributes of the roles. It is also worth considering translation into \nother languages. If the role attribute and the column have the same \nname, then they will probably be translated the same way. But the \ntranslation may be different for different terms, which will further \nconfuse the situation. We can probably change the column name, but still \nthe root of the confusion is caused by the attribute name, not the \ncolumn name. What do you think?\n\n-- \nPavel Luzanov\nPostgres Professional:https://postgrespro.com\n\n\n\n\n\n\n On 11.07.2024 15:07, Rafia Sabih wrote:\n\n\nThis looks much better than the current version. \n\n\nThank you, for looking into this.\n\n\n\nOnly thing is, I find\nthe column name Valid until confusing. With that name I am in danger\nof taking it as the role's validity and not the passwords'.\nHow about naming it to something like Password validity...?\n\n\nYes, my first attempt was to name this column \"Password expiration date\"\nfor the same reason.\n\nBut then I decided that the column name should match the attribute name.\nOtherwise, you need to make some effort to understand which columns\nof the table correspond to which attributes of the roles.\n\nIt is also worth considering translation into other languages.\nIf the role attribute and the column have the same name, then they will\nprobably be translated the same way. But the translation may be different\nfor different terms, which will further confuse the situation.\n\nWe can probably change the column name, but still the root of the confusion\nis caused by the attribute name, not the column name.\n\nWhat do you think?\n\n\n-- \nPavel Luzanov\nPostgres Professional: https://postgrespro.com",
"msg_date": "Fri, 12 Jul 2024 10:06:25 +0300",
"msg_from": "Pavel Luzanov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Things I don't like about \\du's \"Attributes\" column"
},
{
"msg_contents": "On Fri, 12 Jul 2024 at 09:06, Pavel Luzanov <[email protected]> wrote:\n>\n> On 11.07.2024 15:07, Rafia Sabih wrote:\n>\n> This looks much better than the current version.\n>\n> Thank you, for looking into this.\n>\n> Only thing is, I find\n> the column name Valid until confusing. With that name I am in danger\n> of taking it as the role's validity and not the passwords'.\n> How about naming it to something like Password validity...?\n>\n> Yes, my first attempt was to name this column \"Password expiration date\"\n> for the same reason.\n>\n> But then I decided that the column name should match the attribute name.\n> Otherwise, you need to make some effort to understand which columns\n> of the table correspond to which attributes of the roles.\n>\n> It is also worth considering translation into other languages.\n> If the role attribute and the column have the same name, then they will\n> probably be translated the same way. But the translation may be different\n> for different terms, which will further confuse the situation.\n>\n> We can probably change the column name, but still the root of the confusion\n> is caused by the attribute name, not the column name.\n>\n> What do you think?\nYes you are right in this. I too carry the opinion that column names\nshould be the same as attribute names as much as possible.\nSo, then it is good that way.\n\nOther thoughts came to my mind, should we have this column in \\du+\ninstead, maybe connection limit too.\nI know in the current version we have all this in \\du itself, but then\nall clubbed in one column. But now since\nour table has got wider, it might be aesthetic to have it in the\nextended version. Also, their usage wise might not\nbe the first thing to be looked at for a user/role.\n\nWhat are your thoughts on that?\n>\n> --\n> Pavel Luzanov\n> Postgres Professional: https://postgrespro.com\n\n\n\n-- \nRegards,\nRafia Sabih\n\n\n",
"msg_date": "Fri, 12 Jul 2024 11:22:37 +0200",
"msg_from": "Rafia Sabih <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Things I don't like about \\du's \"Attributes\" column"
},
{
"msg_contents": "On 12.07.2024 12:22, Rafia Sabih wrote:\n> Other thoughts came to my mind, should we have this column in \\du+\n> instead, maybe connection limit too.\n> I know in the current version we have all this in \\du itself, but then\n> all clubbed in one column. But now since\n> our table has got wider, it might be aesthetic to have it in the\n> extended version. Also, their usage wise might not\n> be the first thing to be looked at for a user/role.\n\nIn the first version of the patch, \"Valid until\" (named \"Password expire time\")\nand \"Connection limit\" (\"Max connections\") columns were in extended mode. [1]\n\nLater we decided to place each attribute in the \"Attributes\" column on a separate\nline. This allowed us to significantly reduce the overall width of the output.\nSo, I decided to move \"Valid until\" and \"Connection limit\" from extended mode\nto normal mode.\n\nJust compare output from patched \\du and popular \\list command:\n\npostgres@demo(17.0)=# \\du\n List of roles\n Role name | Login | Attributes | Valid until | Connection limit\n-----------+-------+-------------+------------------------+------------------\n alice | yes | Inherit | 2024-06-30 00:00:00+03 |\n bob | yes | Inherit | infinity |\n charlie | yes | Inherit | | 11\n postgres | yes | Superuser +| |\n | | Create DB +| |\n | | Create role+| |\n | | Inherit +| |\n | | Replication+| |\n | | Bypass RLS | |\n(4 rows)\n\npostgres@demo(17.0)=# \\list\n List of databases\n Name | Owner | Encoding | Locale Provider | Collate | Ctype | Locale | ICU Rules | Access privileges\n-----------+----------+----------+-----------------+-------------+-------------+--------+-----------+-----------------------\n demo | postgres | UTF8 | libc | en_US.UTF-8 | en_US.UTF-8 | | |\n postgres | postgres | UTF8 | libc | en_US.UTF-8 | en_US.UTF-8 | | |\n template0 | postgres | UTF8 | libc | en_US.UTF-8 | en_US.UTF-8 | | | =c/postgres +\n | | | | | | | | postgres=CTc/postgres\n template1 | postgres | UTF8 | libc | en_US.UTF-8 | en_US.UTF-8 | | | =c/postgres +\n | | | | | | | | postgres=CTc/postgres\n(4 rows)\n\nIf we decide to move \"Valid until\" and \"Connection limit\" to extended mode,\nthen the role attributes should be returned to their previous form by placing\nthem on one line separated by commas.\n\n1.https://www.postgresql.org/message-id/27f87cb9-229b-478b-81b2-157f94239d55%40postgrespro.ru\n\n-- \nPavel Luzanov\nPostgres Professional:https://postgrespro.com\n\n\n\n\n\n\n On 12.07.2024 12:22, Rafia Sabih wrote:\n\n\nOther thoughts came to my mind, should we have this column in \\du+\ninstead, maybe connection limit too.\nI know in the current version we have all this in \\du itself, but then\nall clubbed in one column. But now since\nour table has got wider, it might be aesthetic to have it in the\nextended version. Also, their usage wise might not\nbe the first thing to be looked at for a user/role.\n\n\n\nIn the first version of the patch, \"Valid until\" (named \"Password expire time\")\nand \"Connection limit\" (\"Max connections\") columns were in extended mode. [1]\n\nLater we decided to place each attribute in the \"Attributes\" column on a separate\nline. 
This allowed us to significantly reduce the overall width of the output.\nSo, I decided to move \"Valid until\" and \"Connection limit\" from extended mode\nto normal mode.\n\nJust compare output from patched \\du and popular \\list command:\n\npostgres@demo(17.0)=# \\du\n List of roles\n Role name | Login | Attributes | Valid until | Connection limit \n-----------+-------+-------------+------------------------+------------------\n alice | yes | Inherit | 2024-06-30 00:00:00+03 | \n bob | yes | Inherit | infinity | \n charlie | yes | Inherit | | 11\n postgres | yes | Superuser +| | \n | | Create DB +| | \n | | Create role+| | \n | | Inherit +| | \n | | Replication+| | \n | | Bypass RLS | | \n(4 rows)\n\npostgres@demo(17.0)=# \\list\n List of databases\n Name | Owner | Encoding | Locale Provider | Collate | Ctype | Locale | ICU Rules | Access privileges \n-----------+----------+----------+-----------------+-------------+-------------+--------+-----------+-----------------------\n demo | postgres | UTF8 | libc | en_US.UTF-8 | en_US.UTF-8 | | | \n postgres | postgres | UTF8 | libc | en_US.UTF-8 | en_US.UTF-8 | | | \n template0 | postgres | UTF8 | libc | en_US.UTF-8 | en_US.UTF-8 | | | =c/postgres +\n | | | | | | | | postgres=CTc/postgres\n template1 | postgres | UTF8 | libc | en_US.UTF-8 | en_US.UTF-8 | | | =c/postgres +\n | | | | | | | | postgres=CTc/postgres\n(4 rows)\n\nIf we decide to move \"Valid until\" and \"Connection limit\" to extended mode,\nthen the role attributes should be returned to their previous form by placing\nthem on one line separated by commas.\n\n1. https://www.postgresql.org/message-id/27f87cb9-229b-478b-81b2-157f94239d55%40postgrespro.ru\n\n-- \nPavel Luzanov\nPostgres Professional: https://postgrespro.com",
"msg_date": "Sat, 13 Jul 2024 15:21:05 +0300",
"msg_from": "Pavel Luzanov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Things I don't like about \\du's \"Attributes\" column"
},
{
"msg_contents": "On Sat, 13 Jul 2024 at 14:21, Pavel Luzanov <[email protected]> wrote:\n>\n> On 12.07.2024 12:22, Rafia Sabih wrote:\n>\n> Other thoughts came to my mind, should we have this column in \\du+\n> instead, maybe connection limit too.\n> I know in the current version we have all this in \\du itself, but then\n> all clubbed in one column. But now since\n> our table has got wider, it might be aesthetic to have it in the\n> extended version. Also, their usage wise might not\n> be the first thing to be looked at for a user/role.\n>\n> In the first version of the patch, \"Valid until\" (named \"Password expire time\")\n> and \"Connection limit\" (\"Max connections\") columns were in extended mode. [1]\n>\n> Later we decided to place each attribute in the \"Attributes\" column on a separate\n> line. This allowed us to significantly reduce the overall width of the output.\n> So, I decided to move \"Valid until\" and \"Connection limit\" from extended mode\n> to normal mode.\n>\n> Just compare output from patched \\du and popular \\list command:\n>\n> postgres@demo(17.0)=# \\du\n> List of roles\n> Role name | Login | Attributes | Valid until | Connection limit\n> -----------+-------+-------------+------------------------+------------------\n> alice | yes | Inherit | 2024-06-30 00:00:00+03 |\n> bob | yes | Inherit | infinity |\n> charlie | yes | Inherit | | 11\n> postgres | yes | Superuser +| |\n> | | Create DB +| |\n> | | Create role+| |\n> | | Inherit +| |\n> | | Replication+| |\n> | | Bypass RLS | |\n> (4 rows)\n>\n> postgres@demo(17.0)=# \\list\n> List of databases\n> Name | Owner | Encoding | Locale Provider | Collate | Ctype | Locale | ICU Rules | Access privileges\n> -----------+----------+----------+-----------------+-------------+-------------+--------+-----------+-----------------------\n> demo | postgres | UTF8 | libc | en_US.UTF-8 | en_US.UTF-8 | | |\n> postgres | postgres | UTF8 | libc | en_US.UTF-8 | en_US.UTF-8 | | |\n> template0 | postgres | UTF8 | libc | en_US.UTF-8 | en_US.UTF-8 | | | =c/postgres +\n> | | | | | | | | postgres=CTc/postgres\n> template1 | postgres | UTF8 | libc | en_US.UTF-8 | en_US.UTF-8 | | | =c/postgres +\n> | | | | | | | | postgres=CTc/postgres\n> (4 rows)\n>\n> If we decide to move \"Valid until\" and \"Connection limit\" to extended mode,\n> then the role attributes should be returned to their previous form by placing\n> them on one line separated by commas.\n>\n> 1. https://www.postgresql.org/message-id/27f87cb9-229b-478b-81b2-157f94239d55%40postgrespro.ru\n>\n> --\n> Pavel Luzanov\n> Postgres Professional: https://postgrespro.com\n\nWell, it was just my opinion of how I would have liked it better, but\nof course you may decide against it, there is no strong feeling around\nit.\nAnd if you are on the fence with the opinion of having them in normal\nor extended mode, then maybe we can ask more people to chip in.\n\nI certainly would not want you to update the patch back and forth for\nalmost no reason.\n-- \nRegards,\nRafia Sabih\n\n\n",
"msg_date": "Mon, 15 Jul 2024 11:50:39 +0200",
"msg_from": "Rafia Sabih <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Things I don't like about \\du's \"Attributes\" column"
},
{
"msg_contents": "On 15.07.2024 12:50, Rafia Sabih wrote:\n> Well, it was just my opinion of how I would have liked it better, but\n> of course you may decide against it, there is no strong feeling around\n> it.\n> And if you are on the fence with the opinion of having them in normal\n> or extended mode, then maybe we can ask more people to chip in.\n\nI am not against moving \"Valid until\" and \"Connection limit\" to extended mode.\nIt just seemed to me that without these two columns, the output of the command\nis too short, so there is no reason in hiding them.\n\nBut you are right, we need more opinions.\n\n-- \nPavel Luzanov\nPostgres Professional:https://postgrespro.com\n\n\n\n\n\n\n On 15.07.2024 12:50, Rafia Sabih wrote:\n \nWell, it was just my opinion of how I would have liked it better, but\nof course you may decide against it, there is no strong feeling around\nit.\nAnd if you are on the fence with the opinion of having them in normal\nor extended mode, then maybe we can ask more people to chip in.\n\n\nI am not against moving \"Valid until\" and \"Connection limit\" to extended mode.\nIt just seemed to me that without these two columns, the output of the command\nis too short, so there is no reason in hiding them.\n\nBut you are right, we need more opinions.\n\n\n-- \nPavel Luzanov\nPostgres Professional: https://postgrespro.com",
"msg_date": "Tue, 16 Jul 2024 11:53:46 +0300",
"msg_from": "Pavel Luzanov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Things I don't like about \\du's \"Attributes\" column"
},
{
"msg_contents": "On Tue, Jul 16, 2024 at 4:53 AM Pavel Luzanov <[email protected]> wrote:\n> On 15.07.2024 12:50, Rafia Sabih wrote:\n> Well, it was just my opinion of how I would have liked it better, but\n> of course you may decide against it, there is no strong feeling around\n> it.\n> And if you are on the fence with the opinion of having them in normal\n> or extended mode, then maybe we can ask more people to chip in.\n>\n> I am not against moving \"Valid until\" and \"Connection limit\" to extended mode.\n> It just seemed to me that without these two columns, the output of the command\n> is too short, so there is no reason in hiding them.\n>\n> But you are right, we need more opinions.\n\nWhich version of the patch is currently under discussion?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 16 Jul 2024 09:24:37 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Things I don't like about \\du's \"Attributes\" column"
},
{
"msg_contents": "On 16.07.2024 16:24, Robert Haas wrote:\n> Which version of the patch is currently under discussion?\n\nI believe we are talking about the latest v8 patch version. [1]\n\n1.https://www.postgresql.org/message-id/5341835b-e7be-44dc-b6e5-400e9e3f3c64%40postgrespro.ru\n\n-- \nPavel Luzanov\nPostgres Professional:https://postgrespro.com\n\n\n\n\n\n\n On 16.07.2024 16:24, Robert Haas wrote:\n\n\n\nWhich version of the patch is currently under discussion?\n\n\nI believe we are talking about the latest v8 patch version. [1]\n\n1. https://www.postgresql.org/message-id/5341835b-e7be-44dc-b6e5-400e9e3f3c64%40postgrespro.ru\n\n-- \nPavel Luzanov\nPostgres Professional: https://postgrespro.com",
"msg_date": "Tue, 16 Jul 2024 16:48:36 +0300",
"msg_from": "Pavel Luzanov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Things I don't like about \\du's \"Attributes\" column"
},
{
"msg_contents": "On Tue, Jul 16, 2024 at 9:48 AM Pavel Luzanov <[email protected]> wrote:\n> Which version of the patch is currently under discussion?\n>\n> I believe we are talking about the latest v8 patch version. [1]\n>\n> 1. https://www.postgresql.org/message-id/5341835b-e7be-44dc-b6e5-400e9e3f3c64%40postgrespro.ru\n\nThanks. For some reason (likely me being dumb) I was having a hard\ntime finding that in the thread.\n\nOn the question of display width, my personal opinion is that the\ncurrent patch is worse than what we have now. Right now, if I type\n\\du, the output fits in an 80-column terminal unless the role names\nare quite long, and \\du+ fits except for the Description field, which\nwraps. With the patch, even \\du wraps in an 80-column terminal. \"Role\nname\", \"Login,\" \"Attributes,\" and \"Valid until\" fit, but \"Connection\nlimit\" doesn't. I know some people think optimizing for 80-column\nterminals is obsolete in 2024, and I often use a wider window myself,\nbut I do still appreciate it when I don't have to make the window\nwider to read stuff. Especially, if I didn't use + when running a psql\ncommand, I'd prefer for it to fit.\n\nOne solution could be to move \"Valid until\" or \"Connection limit\" or\nboth to verbose mode, as proposed by Rafia. But that's also a\nregression over what we have now, where the output fits in 80 columns\nand includes that information.\n\nI'm starting to have some doubts about whether this effort is really\nworthwhile. It seems like what we have right now is a patch which uses\nboth more horizontal space and more vertical space than the current\nimplementation, without (IMHO) really offering any clear advantage. I\nknow this started out as an effort to address Tom's complaints in the\noriginal post, but it feels like we're losing as much as we're\ngaining, and Tom seems to have lost interest in the thread, too.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 16 Jul 2024 11:00:08 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Things I don't like about \\du's \"Attributes\" column"
},
{
"msg_contents": "On Tue, Jul 16, 2024 at 8:00 AM Robert Haas <[email protected]> wrote:\n\n> I'm starting to have some doubts about whether this effort is really\n> worthwhile. It seems like what we have right now is a patch which uses\n> both more horizontal space and more vertical space than the current\n> implementation, without (IMHO) really offering any clear advantage.\n\n\nIt's simple enough to only add the mandatory vertical space here (and\ndefinitely reduce the width) by leaving the current presentation of \"N\nconnections\" and \"password valid until\" unchanged but give them their own\nline in Attributes like all of the rest. The choice to move them to their\nown columns seems like where the contention lies.\n\ne.g.,\n\nregress_du_role0 | yes | Inherit | Tue Jun 04 00:00:00 2024 PDT\n| 0 |\n\nbecomes\nNo connections allowed +|\nInherit +|\nPassword valid until Tue Jun 04 ... |\n\n(pending confirmation of where the new inherit label fits slot-wise)\n\nversus:\n\nregress_du_role0 | No connections\n +|\n | Password valid until 2024-06-04 00:00:00+03\n |\n\n(Which actually look the same because of the automatic wrapping on the\ncurrent version versus explicit one-per-line in the proposed.)\n\nIn short, fix the complaint about comma-separated attributes and leave the\nrest alone as being accurate reflections of the catalog. In short, from\nthe original message, do \"a\" without \"b\". Tom suggested \"either\" as well\nand I agree with Robert that having done both we've made it both wider and\ntaller which ends up not being a great outcome.\n\nAlso, address the N connections [allowed] to make it clear this is a static\nconfiguration and not some derivation of current state.\n\nCan also deal with Password valid until infinity if a better phrasing can\nbe thought up that doesn't require knowledge of whether a password has been\nset or not.\n\nDavid J.\n\nOn Tue, Jul 16, 2024 at 8:00 AM Robert Haas <[email protected]> wrote:I'm starting to have some doubts about whether this effort is really\nworthwhile. It seems like what we have right now is a patch which uses\nboth more horizontal space and more vertical space than the current\nimplementation, without (IMHO) really offering any clear advantage.It's simple enough to only add the mandatory vertical space here (and definitely reduce the width) by leaving the current presentation of \"N connections\" and \"password valid until\" unchanged but give them their own line in Attributes like all of the rest. The choice to move them to their own columns seems like where the contention lies.e.g.,regress_du_role0 | yes | Inherit | Tue Jun 04 00:00:00 2024 PDT | 0 | becomesNo connections allowed +|Inherit +|Password valid until Tue Jun 04 ... |(pending confirmation of where the new inherit label fits slot-wise)versus:regress_du_role0 | No connections +| | Password valid until 2024-06-04 00:00:00+03 | (Which actually look the same because of the automatic wrapping on the current version versus explicit one-per-line in the proposed.)In short, fix the complaint about comma-separated attributes and leave the rest alone as being accurate reflections of the catalog. In short, from the original message, do \"a\" without \"b\". 
Tom suggested \"either\" as well and I agree with Robert that having done both we've made it both wider and taller which ends up not being a great outcome.Also, address the N connections [allowed] to make it clear this is a static configuration and not some derivation of current state.Can also deal with Password valid until infinity if a better phrasing can be thought up that doesn't require knowledge of whether a password has been set or not.David J.",
"msg_date": "Tue, 16 Jul 2024 10:11:29 -0700",
"msg_from": "\"David G. Johnston\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Things I don't like about \\du's \"Attributes\" column"
},
{
"msg_contents": "On 16.07.2024 18:00, Robert Haas wrote:\n\n> On the question of display width, my personal opinion is that the\n> current patch is worse than what we have now.\n\nRobert, David, thanks for the detailed explanation.\n\nI tried to remember all the thoughts that led to this version of the patch.\n\nSo the main issue that Robert points out is that the output of the command\ntakes up more space compared to the current version.\n(But I'm ready to debate that too :-), see below.)\n\nIn the proposed version, columns for rolconnlimit and rolvaliduntil occupy\na significant place. It really is. We can hide them in extended mode, but\nthey still take up a lot of space. In the current command, these attributes\nare very compactly arranged in the \"Attributes\" column on separate lines.\n\nHowever, the current placement of rolconnlimit and rolvaliduntil on separate\nlines is very bad, which Tom noted in the first letter and I completely\nagree with this. Also, I don't like that the values appear only if they\ndiffer from the default values. It's more compact, but less intuitive.\nIt seems to me that this approach is not used anywhere else in other\n\\d* commands (but I may be wrong, I did not check).\n\nLet me explain why I think rolconnlimit and rolvaliduntil are worthy\nof being placed as separate columns.\n\n1. Logical attributes (rolsuper, rolinherit, rolcreaterole, rolcreatedb,\nrolcanlogin, rolreplication, rolbypassrls) are uniform in nature and\npresenting them as a list in one column looks logical.\nBut rolconnlimit and rolvaliduntil do not fit into this company in any way.\nThey are strangers here in terms of data type and meaning.\n\n2. Logical attributes give the role additional capabilities,\nwhile rolconnlimit and rolvaliduntil rather limit the use of the role.\n\n3. After switching to a role with the SET ROLE command, you can use\nthe capabilities of logical attributes, but the restrictions of rolconnlimit\nand rolvaliduntil do not apply to SET ROLE:\n\npostgres@demo(17.0)=# grant bob to alice;\ngrant bob to alice;\nGRANT ROLE\npostgres@demo(17.0)=# alter role bob connection limit 0;\nalter role bob connection limit 0;\nALTER ROLE\npostgres@demo(17.0)=# \\c - bob\nconnection to server on socket \"/tmp/.s.PGSQL.5401\" failed: FATAL: too many connections for role \"bob\"\nPrevious connection kept\npostgres@demo(17.0)=# \\c - alice\nYou are now connected to database \"demo\" as user \"alice\".\nalice@demo(17.0)=> set role bob;\nset role bob;\nSET\n\nThis makes it reasonable to consider rolconnlimit and rolvaliduntil\nas separate properties of a role, rather than together with logical\nattributes.\n\nNow the hard part. What to do with the width of the command output?\nI also think that it is desirable to fit the output of any command in 80\ncharacters. And I was calm when I saw the 78-character output in my test\nsystem:\n\npostgres@demo(17.0)=# \\du\n List of roles\n Role name | Login | Attributes | Valid until | Connection limit\n-----------+-------+-------------+------------------------+------------------\n alice | yes | Inherit | 2024-06-30 00:00:00+03 |\n bob | yes | Inherit | infinity |\n charlie | yes | Inherit | | 1\n postgres | yes | Superuser +| |\n | | Create DB +| |\n | | Create role+| |\n | | Inherit +| |\n | | Replication+| |\n | | Bypass RLS | |\n(4 rows)\n\nBut, really, the width can exceed 80 with longer role names, as well as with\na wider default date output. 
Compare with the date output in the patch regression\ntests:\n\n2024-06-30 00:00:00+03\nTue Jun 04 00:00:00 2024 PDT\n\nTo be fair, I must say that among the \\d* commands there are many commands\nwhose output width exceeds 80 characters.\nNamely: \\da, \\dAc, \\dAf, \\dAo, \\dAp, \\dC, \\df, \\di, \\do, \\dO, \\dRp, \\dT, \\l\n\nBut let's go back to the current version. I consider this patch as\na continuation of the work on the \\drg command that appeared in version 16.\nAs part of that work, we removed the \"Member of\" column from the \\du command\nand introduced a new \\drg command to show membership in roles.\n From my point of view, the \\du command is currently in an intermediate and\nunfinished state. Therefore, it is more correct to compare the proposed patch\nwith psql 15, rather than 16.\n(I know there is nothing more permanent than a temporary solution,\nbut give me hope :-).)\n\nIn version 15, the output of the \\du command is wider than the proposed version!\n\n15=# \\du\n List of roles\n Role name | Attributes | Member of\n-----------+------------------------------------------------------------+-----------\n postgres | Superuser, Create role, Create DB, Replication, Bypass RLS | {}\n\nAnd this is with only one role. I can assume that there are usually several\nroles in systems and role membership is actively used to organize roles within\ngroups. Therefore, in real systems, the output of the \\du command in version 15\nis probably much wider. For example, output together with system objects:\n\n15=# \\duS\n List of roles\n Role name | Attributes | Member of\n---------------------------+------------------------------------------------------------+--------------------------------------------------------------\n pg_execute_server_program | Cannot login | {}\n pg_monitor | Cannot login | {pg_read_all_settings,pg_read_all_stats,pg_stat_scan_tables}\n pg_read_all_settings | Cannot login | {}\n pg_read_all_stats | Cannot login | {}\n pg_read_server_files | Cannot login | {}\n pg_signal_backend | Cannot login | {}\n pg_stat_scan_tables | Cannot login | {}\n pg_write_server_files | Cannot login | {}\n postgres | Superuser, Create role, Create DB, Replication, Bypass RLS | {}\n\nAll this allows me to believe that the proposed version has advantages over\nthe current version of the \\du command:\n- Solutions have been proposed for 3 of the 4 Tom's complaints.\n- The new \"Login\" column separates users from group roles, which is very useful (imho).\n- Tabular output is convenient to view both in normal mode and in expanded mode (\\x).\n The last line contains information about the number of roles.\n- Refactoring: code has become much simpler and clearer.\n\nBut if we don't find a compromise and just leave it as it is, then that's fine.\nSo the time for change has not come yet. 
In any case, this discussion may be useful\nin the future. But who knows, maybe now we can come to some kind of agreement.\n\n-- \nPavel Luzanov\nPostgres Professional: https://postgrespro.com",
"msg_date": "Thu, 18 Jul 2024 00:09:09 +0300",
"msg_from": "Pavel Luzanov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Things I don't like about \\du's \"Attributes\" column"
},
{
"msg_contents": "Hi Pavel,\n\nFirst, thanks for your dedication to this effort. I always find it\nhard to make time for things like psql backslash command improvements,\nbut I'm glad that we have people in our community who work on such\nthings.\n\nSecond, I think that the threshold question for this patch is: will\nusers, on average, be happier if this patch gets committed? If the\nanswer is yes, then the patch should be committed, and if the answer\nis no, the patch should not be committed. But I actually don't really\nhave any clear idea of what users in general are likely to think. My\nown reaction is essentially ... meh. I do not think that the proposed\nnew output is massively worse than what we have now, but I also don't\nthink it's a whole lot better. Now, if a bunch of other people show up\nand vote, well then we'll have a much better view of what the typical\nuser is likely to think. But right now, I can't hazard a guess as to\nwhat the general opinion will be, and therefore I'm unprepared to\ncommit anything. Because, and I can't say this often enough, it's not\nall about me. Even if I thought that this patch was way better than\nwhat we have now, I still wouldn't commit it unless I was relatively\nconfident that other people would agree.\n\nThird, if I can back away from this particular patch for a moment, I\nfeel like roles and permissions are one of the weaker areas in psql. I\nfeel, and I suspect most users agree, that the output of \"\\d my_table\"\nis top notch. It tells you everything that you need to know about a\nparticular table -- all the columns, column types, default values,\ntriggers, indexes, and whatever else there is. If someone adds a new\nobject that attaches to a table, they're going to understand that they\nneed to add a listing for that object to the \\d output, and users are\ngoing to understand that they should look for it there. That unstated\ncontract between developers and users is a thing of beauty: without\nany words being said, there is a meeting of the minds. But I don't\nthink the same thing can be said around roles.\n\nFor a long time, one of my big gripes in this area has been \\z. It\ndisplays information about the permissions of table-like objects. But\nI don't really imagine that being a thing that a user is actually\ngoing to want to see. I feel like there are two typical use cases\nhere. One is you want to see all the privileges associated with a\ntable. In that case, you'd like the information to show up in the\noutput of \\d or \\d+. The other is that you want to see the privileges\nassociated with a particular user -- and in that case you want to see\nnot just the table privileges but the privileges on every other kind\nof database object, too. I'm not saying there's a single PostgreSQL\nuser anywhere who has wanted to list information about tables with\nnames matching a certain wildcard and see just the privilege\ninformation for each one, but I bet it's rare. I also suspect that\nonly truly hard-core PostgreSQL fans want to see incantations like\n\"robert.haas\"=arwdDxtm/\"robert.haas\" in their output.\n\nSo, personally, if I were going to work on a redesign in this area, I\nwould look into making \\du <username> work like \\d <tablename>. That\nis, it would tell you every single thing there is to know about a\nuser. Role attributes. Roles in which this role has membership. Roles\nthat are a member of this row. Objects of all sorts this object owns.\nPermissions this role has on objects of all sorts. Role settings. 
All\nof it in SQL-ish format like we do with the footer when you run \\d.\nThen I would make \\du work like \\d: a minimal amount of basic\ninformation about every role in the list, like whether it's a\nsuperuser and whether they can log in. Then, I would consider removing\n\\dp \\drds \\drg and \\z entirely. I don't know if a plan like that would\nactually work out well. A particular problem is that if a user owns\n10k objects, we don't want to just list them all one per line. Maybe\nwhen the number of objects owned is large, we could just give a count\nper object type unless + is used, or something like that. But even if\nall of this could be sorted out, would users in general like it\nbetter, or would it just make Robert happy? Would even Robert end up\nliking the result?\n\nI don't really know, but I think that my general discontent with \\du\n\\dg \\dp \\drds \\drg \\z is part of why I find it hard to evaluate a\npatch like this. I look forward to seeing the output of \"\\d <table>\"\non whatever table is causing some customer a problem today. I never\nlook forward to seeing the output of \\du. Is that just me?\n\n...Robert\n\n\n",
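A minimal sketch of the kind of catalog lookups such a per-role detail view could be assembled from -- not taken from any posted patch, covering only part of what Robert lists, and using the role name 'alice' purely as an example:

-- Role attributes.
SELECT rolsuper, rolcreaterole, rolcreatedb, rolcanlogin,
       rolreplication, rolbypassrls, rolconnlimit, rolvaliduntil
FROM pg_roles
WHERE rolname = 'alice';

-- Roles this role is a member of, and roles that are members of it.
SELECT roleid::regrole AS member_of, admin_option
FROM pg_auth_members
WHERE member = 'alice'::regrole;

SELECT member::regrole AS has_member
FROM pg_auth_members
WHERE roleid = 'alice'::regrole;

-- Per-role (optionally per-database) settings.
SELECT coalesce(d.datname, 'all databases') AS database, s.setconfig
FROM pg_db_role_setting s
LEFT JOIN pg_database d ON d.oid = s.setdatabase
WHERE s.setrole = 'alice'::regrole;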
"msg_date": "Fri, 19 Jul 2024 09:26:39 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Things I don't like about \\du's \"Attributes\" column"
},
{
"msg_contents": "Robert,\n\nI am pleased that you are paying so much attention to this patch.\n\nOn 19.07.2024 16:26, Robert Haas wrote:\n> Second, I think that the threshold question for this patch is: will\n> users, on average, be happier if this patch gets committed? If the\n> answer is yes, then the patch should be committed, and if the answer\n> is no, the patch should not be committed. But I actually don't really\n> have any clear idea of what users in general are likely to think. My\n> own reaction is essentially ... meh. I do not think that the proposed\n> new output is massively worse than what we have now, but I also don't\n> think it's a whole lot better. Now, if a bunch of other people show up\n> and vote, well then we'll have a much better view of what the typical\n> user is likely to think.\n\nI share your opinion that the need for a patch should be decided by the \nvotes (or lack of votes) of practicing experts. I am mainly \ninvolved in educational projects, so in most cases I \nwork with demo systems. Therefore, I'm not sure that the patch I'm offering will make users happy. Perhaps it should be withdrawn.\n\n> Third, if I can back away from this particular patch for a moment, I\n> feel like roles and permissions are one of the weaker areas in psql.\n\n> So, personally, if I were going to work on a redesign in this area, I\n> would look into making \\du <username> work like \\d <tablename>. That\n> is, it would tell you every single thing there is to know about a\n> user. Role attributes. Roles in which this role has membership. Roles\n> that are a member of this row. Objects of all sorts this object owns.\n> Permissions this role has on objects of all sorts. Role settings. All\n> of it in SQL-ish format like we do with the footer when you run \\d.\n\nOh, that's very interesting. I will think about this approach,\nbut I do not know when and what result can be obtained...\n\nBut let me share my thoughts on roles, privileges and system catalogs\nfrom a different angle. This has nothing to do with the current patch,\nI just want to share my thoughts.\n\nI came to PostgreSQL from Oracle and it was unexpected for me that users\nhad almost complete access to the contents of the system catalogs.\nWith rare exceptions (pg_authid, pg_statistic), any unprivileged user sees\nthe full contents of any system catalog. (I'm not saying that access to system\ncatalogs needs to be reworked, it's probably impossible or very difficult.)\n\nVisible but inaccessible objects in system catalogs increase the volume\nof command output unnecessarily. Why do I need to know the list of all\nschemas in the database if I only have access to the public schema?\nThe same applies to inaccessible tables, views, functions, etc.\n\nNot for safety, but for convenience, it might be worth having a set of views\nthat show only those rows of the system catalog (with *acl column) that\nthe user has access to. Either as the object owner, or through the privileges.\nDirectly or indirectly through role membership.\n\nBy the way, this is exactly the approach implemented for the information\nschema. Here is a code fragment of the information_schema.schemata view:\n\nSELECT ...\n FROM pg_namespace n,\n pg_authid u\n WHERE n.nspowner = u.oid AND\n (pg_has_role(n.nspowner, 'USAGE'::text) OR\n has_schema_privilege(n.oid, 'CREATE, USAGE'::text))\n\nThen the commands like \\dt, \\df, \\dn, \\l, etc might use these views and show\nonly the objects accessible to the user. 
To do this, a new modifier to\nthe commands can be implemented, similar to the S modifier for system objects.\n\nFor example:\n\\dn - list of all schemas\n\\dnA - list of accessible schemas\n\nIn some way this approach can resolve your issue about roles and privileges.\nFamiliar psql commands will be able to display only the objects accessible\nfor current role, without pushing the whole output into \\du.\n\nSuch a set of views can be useful not only in psql, but also for third-party\napplications.\n\nI think I'm not the first one trying to bikeshedding in this area.\nIt's probably been discussed many times why this should not be done.\nBut such thoughts do come, and I don't know the answer yet.\n\n-- \nPavel Luzanov\nPostgres Professional: https://postgrespro.com\n\n\n\n\n\n\nRobert,\n\nI am pleased that you are paying so much attention to this patch.\n\n\nOn 19.07.2024 16:26, Robert Haas wrote:\n\n\nSecond, I think that the threshold question for this patch is: will\nusers, on average, be happier if this patch gets committed? If the\nanswer is yes, then the patch should be committed, and if the answer\nis no, the patch should not be committed. But I actually don't really\nhave any clear idea of what users in general are likely to think. My\nown reaction is essentially ... meh. I do not think that the proposed\nnew output is massively worse than what we have now, but I also don't\nthink it's a whole lot better. Now, if a bunch of other people show up\nand vote, well then we'll have a much better view of what the typical\nuser is likely to think.\n\n\nI share your opinion that the need for a patch should be decided\nby the votes (or lack of votes) of practicing experts. I am mainly\ninvolved in educational projects, so in most cases I work with\ndemo systems. Therefore, I'm not sure that the patch I'm offering\nwill make users happy. Perhaps it should be withdrawn.\n\n\n\nThird, if I can back away from this particular patch for a moment, I\nfeel like roles and permissions are one of the weaker areas in psql.\n\n\n\n\nSo, personally, if I were going to work on a redesign in this area, I\nwould look into making \\du <username> work like \\d <tablename>. That\nis, it would tell you every single thing there is to know about a\nuser. Role attributes. Roles in which this role has membership. Roles\nthat are a member of this row. Objects of all sorts this object owns.\nPermissions this role has on objects of all sorts. Role settings. All\nof it in SQL-ish format like we do with the footer when you run \\d.\n\nOh, that's very interesting. I will think about this approach,\nbut I do not know when and what result can be obtained...\n\nBut let me share my thoughts on roles, privileges and system catalogs\nfrom a different angle. This has nothing to do with the current patch,\nI just want to share my thoughts.\n\nI came to PostgreSQL from Oracle and it was unexpected for me that users\nhad almost complete access to the contents of the system catalogs.\nWith rare exceptions (pg_authid, pg_statistic), any unprivileged user sees\nthe full contents of any system catalog. (I'm not saying that access to system\ncatalogs needs to be reworked, it's probably impossible or very difficult.)\n\nVisible but inaccessible objects in system catalogs increase the volume\nof command output unnecessarily. 
Why do I need to know the list of all\nschemas in the database if I only have access to the public schema?\nThe same applies to inaccessible tables, views, functions, etc.\n\nNot for safety, but for convenience, it might be worth having a set of views\nthat show only those rows of the system catalog (with *acl column) that\nthe user has access to. Either as the object owner, or through the privileges.\nDirectly or indirectly through role membership.\n\nBy the way, this is exactly the approach implemented for the information\nschema. Here is a code fragment of the information_schema.schemata view:\n\nSELECT ...\n FROM pg_namespace n,\n pg_authid u\n WHERE n.nspowner = u.oid AND\n (pg_has_role(n.nspowner, 'USAGE'::text) OR\n has_schema_privilege(n.oid, 'CREATE, USAGE'::text))\n\nThen the commands like \\dt, \\df, \\dn, \\l, etc might use these views and show\nonly the objects accessible to the user. To do this, a new modifier to\nthe commands can be implemented, similar to the S modifier for system objects.\n\nFor example:\n\\dn - list of all schemas\n\\dnA - list of accessible schemas\n\nIn some way this approach can resolve your issue about roles and privileges.\nFamiliar psql commands will be able to display only the objects accessible\nfor current role, without pushing the whole output into \\du.\n\nSuch a set of views can be useful not only in psql, but also for third-party\napplications.\n\nI think I'm not the first one trying to bikeshedding in this area.\nIt's probably been discussed many times why this should not be done.\nBut such thoughts do come, and I don't know the answer yet.\n\n\n-- \nPavel Luzanov\nPostgres Professional: https://postgrespro.com",
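A rough sketch of what one such filtered view might look like for schemas, following the information_schema.schemata fragment quoted above (the view name and the \dnA modifier here are hypothetical, not part of any patch):

-- Keep only schemas the current role owns or has privileges on.
CREATE VIEW accessible_namespaces AS
SELECT n.oid, n.nspname, n.nspowner
FROM pg_namespace n
WHERE pg_has_role(n.nspowner, 'USAGE')
   OR has_schema_privilege(n.oid, 'CREATE, USAGE');

-- A \dnA-style listing could then reduce to something like:
SELECT nspname AS "Name", nspowner::regrole AS "Owner"
FROM accessible_namespaces
ORDER BY 1;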
"msg_date": "Tue, 23 Jul 2024 00:19:22 +0300",
"msg_from": "Pavel Luzanov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Things I don't like about \\du's \"Attributes\" column"
},
{
"msg_contents": "On Mon, Jul 22, 2024 at 5:19 PM Pavel Luzanov <[email protected]> wrote:\n> Visible but inaccessible objects in system catalogs increase the volume\n> of command output unnecessarily. Why do I need to know the list of all\n> schemas in the database if I only have access to the public schema?\n> The same applies to inaccessible tables, views, functions, etc.\n>\n> Not for safety, but for convenience, it might be worth having a set of views\n> that show only those rows of the system catalog (with *acl column) that\n> the user has access to. Either as the object owner, or through the privileges.\n> Directly or indirectly through role membership.\n\nSo, I wasn't actually aware that anyone had a big problem in this\narea. I thought that most of the junk you might see in \\d<whatever>\noutput would be hidden either because the objects you don't care about\nare not in your search_path or because they are system objects. I\nagree that doesn't help with schemas, but most people don't have a\nhuge number of schemas, and even if you do, you don't necessarily need\nto look at the list all that frequently.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 23 Jul 2024 08:53:31 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Things I don't like about \\du's \"Attributes\" column"
},
{
"msg_contents": "On 23.07.2024 15:53, Robert Haas wrote:\n> On Mon, Jul 22, 2024 at 5:19 PM Pavel Luzanov<[email protected]> wrote:\n>> Visible but inaccessible objects in system catalogs increase the volume\n>> of command output unnecessarily. Why do I need to know the list of all\n>> schemas in the database if I only have access to the public schema?\n>> The same applies to inaccessible tables, views, functions, etc.\n>>\n>> Not for safety, but for convenience, it might be worth having a set of views\n>> that show only those rows of the system catalog (with *acl column) that\n>> the user has access to. Either as the object owner, or through the privileges.\n>> Directly or indirectly through role membership.\n> So, I wasn't actually aware that anyone had a big problem in this\n> area. I thought that most of the junk you might see in \\d<whatever>\n> output would be hidden either because the objects you don't care about\n> are not in your search_path or because they are system objects. I\n> agree that doesn't help with schemas, but most people don't have a\n> huge number of schemas, and even if you do, you don't necessarily need\n> to look at the list all that frequently.\n\nMaybe. But it would be better not to see unnecessary objects in the \nsystem catalogs. Especially for GUI tools. Back to the subject.\n\n> So, personally, if I were going to work on a redesign in this area, I\n> would look into making \\du <username> work like \\d <tablename>. That\n> is, it would tell you every single thing there is to know about a\n> user. Role attributes. Roles in which this role has membership. Roles\n> that are a member of this row. Objects of all sorts this object owns.\n> Permissions this role has on objects of all sorts. Role settings. All\n> of it in SQL-ish format like we do with the footer when you run \\d.\n> Then I would make \\du work like \\d: a minimal amount of basic\n> information about every role in the list, like whether it's a\n> superuser and whether they can log in.\n\nYes, I still like this idea.\nA little later I will try to make a patch in this direction.\n\n\n-- \nPavel Luzanov\nPostgres Professional:https://postgrespro.com\n\n\n\n\n\n\n On 23.07.2024 15:53, Robert Haas wrote:\n\nOn Mon, Jul 22, 2024 at 5:19 PM Pavel Luzanov <[email protected]> wrote:\n\n\nVisible but inaccessible objects in system catalogs increase the volume\nof command output unnecessarily. Why do I need to know the list of all\nschemas in the database if I only have access to the public schema?\nThe same applies to inaccessible tables, views, functions, etc.\n\nNot for safety, but for convenience, it might be worth having a set of views\nthat show only those rows of the system catalog (with *acl column) that\nthe user has access to. Either as the object owner, or through the privileges.\nDirectly or indirectly through role membership.\n\n\n\nSo, I wasn't actually aware that anyone had a big problem in this\narea. I thought that most of the junk you might see in \\d<whatever>\noutput would be hidden either because the objects you don't care about\nare not in your search_path or because they are system objects. 
I\nagree that doesn't help with schemas, but most people don't have a\nhuge number of schemas, and even if you do, you don't necessarily need\nto look at the list all that frequently.\n\n\nMaybe.\nBut it would be better not to see unnecessary objects in the system catalogs.\nEspecially for GUI tools.\n\nBack to the subject.\n\n\n\nSo, personally, if I were going to work on a redesign in this area, I\nwould look into making \\du <username> work like \\d <tablename>. That\nis, it would tell you every single thing there is to know about a\nuser. Role attributes. Roles in which this role has membership. Roles\nthat are a member of this row. Objects of all sorts this object owns.\nPermissions this role has on objects of all sorts. Role settings. All\nof it in SQL-ish format like we do with the footer when you run \\d.\nThen I would make \\du work like \\d: a minimal amount of basic\ninformation about every role in the list, like whether it's a\nsuperuser and whether they can log in.\n\n\nYes, I still like this idea.\nA little later I will try to make a patch in this direction.\n\n\n-- \nPavel Luzanov\nPostgres Professional: https://postgrespro.com",
"msg_date": "Sat, 27 Jul 2024 09:18:45 +0300",
"msg_from": "Pavel Luzanov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Things I don't like about \\du's \"Attributes\" column"
}
] |
[
{
"msg_contents": "Hi hackers.\n\n_bt_readpage performs key check for each item on the page trying to \nlocate upper boundary.\nWhile comparison of simple integer keys are very fast, comparison of \nlong strings can be quite expensive.\nWe can first make check for the largest key on the page and if it is not \nlarger than upper boundary, then skip checks for all elements.\n\nAt this quite artificial example such optimization gives 3x time speed-up:\n\ncreate table t(t text primary key);\ninsert into t values ('primary key-'||generate_series(1,10000000)::text);\nselect count(*) from t where t between 'primary key-1000000' and 'primary key-2000000';\n\nAt my notebook with large enough shared buffers and disabled concurrency \nthe difference is 83 vs. 247 msec\nFor integer keys the difference is much smaller: 69 vs. 82 msec\n\nCertainly I realized that this example is quite exotic: most of DBAs \nprefer integer keys and such large ranges are quite rare.\nBut still such large range queries are used.\nAnd I have checked that the proposed patch doesn't cause slowdown of \nexact search.",
"msg_date": "Fri, 23 Jun 2023 10:35:50 +0300",
"msg_from": "Konstantin Knizhnik <[email protected]>",
"msg_from_op": true,
"msg_subject": "Index range search optimization"
},
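One way to reproduce the comparison from the message above on a given build, assuming the same test table t, is to run the range query and an exact-match lookup under EXPLAIN (timings and buffer counts will of course vary):

EXPLAIN (ANALYZE, BUFFERS)
SELECT count(*) FROM t
WHERE t BETWEEN 'primary key-1000000' AND 'primary key-2000000';

-- Exact search, which should not get slower:
EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM t WHERE t = 'primary key-5000000';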
{
"msg_contents": "Hi!\n\nOn Fri, Jun 23, 2023 at 10:36 AM Konstantin Knizhnik <[email protected]>\nwrote:\n\n> _bt_readpage performs key check for each item on the page trying to locate\n> upper boundary.\n> While comparison of simple integer keys are very fast, comparison of long\n> strings can be quite expensive.\n> We can first make check for the largest key on the page and if it is not\n> larger than upper boundary, then skip checks for all elements.\n>\n> At this quite artificial example such optimization gives 3x time speed-up:\n>\n> create table t(t text primary key);\n> insert into t values ('primary key-'||generate_series(1,10000000)::text);\n> select count(*) from t where t between 'primary key-1000000' and 'primary key-2000000';\n>\n> At my notebook with large enough shared buffers and disabled concurrency\n> the difference is 83 vs. 247 msec\n> For integer keys the difference is much smaller: 69 vs. 82 msec\n>\n> Certainly I realized that this example is quite exotic: most of DBAs\n> prefer integer keys and such large ranges are quite rare.\n> But still such large range queries are used.\n> And I have checked that the proposed patch doesn't cause slowdown of exact\n> search.\n>\n\nNeat optimization! But I wonder if we could do even better. The attached\npatch allows Postgres to skip scan keys required for directional scans\n(even when other keys are present in the scan). I'll soon post the testing\nresults and a more polished version of this patch.\n\n------\nRegards,\nAlexander Korotkov",
"msg_date": "Thu, 14 Sep 2023 13:22:50 +0300",
"msg_from": "Alexander Korotkov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index range search optimization"
},
{
"msg_contents": "On Thu, Sep 14, 2023 at 3:23 AM Alexander Korotkov <[email protected]> wrote:\n> The attached patch allows Postgres to skip scan keys required for directional scans (even when other keys are present in the scan). I'll soon post the testing results and a more polished version of this patch.\n\nThis is very interesting to me, partly because it seems related to my\nongoing work on SAOP execution within nbtree.\n\nMy patch gives _bt_readpage and particularly _bt_checkkeys more\nhigh-level context, which they use to intelligently control the scan.\nThat enables us to dynamically decide whether we should now perform\nanother descent of the index via another call to _bt_first, or if we\nshould prefer to continue on the leaf level for now. Maybe we will\nmatch many distinct sets of array keys on the same leaf page, in the\nsame call to _bt_readpage. We don't want to miss out on such\nopportunities, but we also want to quickly notice when we're on a page\nwhere matching any more array keys is just hopeless.\n\nThere is a need to keep these two things in balance. We need to notice\nthe hopeless cases before wasting too many cycles on it. That creates\na practical need to do an early precheck of the high key (roughly the\nsame check that we do already). If the high key indicates that\ncontinuing on this page is truly hopeless, then we should give up and\ndo another primitive index scan -- _bt_first will reposition us onto\nthe leaf page that we need to go to next, which is (hopefully) far\naway from the leaf page we started on.\n\nYour patch therefore has the potential to help my own patch. But, it\nalso has some potential to conflict with it, because my patch makes\nthe meaning of SK_BT_REQFWD and SK_BT_REQBKWD more complicated (though\nonly in cases where we have SK_SEARCHARRAY scan keys). I'm sure that\nthis can be managed sensibly, though.\n\nSome feedback on your patch:\n\n* I notice that you're not using the high key for this, even in a\nforward scan -- you're using the last non-pivot tuple on the page. Why\nis that? (I have some idea why, actually, but I'd like to hear your\nthoughts first.)\n\n* Separately, I don't think that it makes sense to use the same\nrequiredDirMatched value (which came from the last non-pivot tuple on\nthe page) when the special _bt_checkkeys call for the high key takes\nplace. I don't think that this will lead to wrong answers, but it's\nweird, and is likely to defeat the existing optimization in some\nimportant cases.\n\nDue to the influence of suffix truncation, it's relatively likely that\nthe most significant column in the high key will be different to the\ncorresponding value from the last few non-pivot tuples on the page --\nthe high key tends to be \"aligned with natural boundaries in the key\nspace\", and so \"gives us a good preview of the right sibling page\". We\ndon't want to treat it the same as non-pivot tuples here, because it's\nquite different, in ways that are subtle but still important.\n\n* I would avoid using the terminology \"preprocess scan keys\" for this.\nThat exact terminology is already used to describe what\n_bt_preprocess_keys() does.\n\nThat function is actually involved in Konstantin's patch, so that\ncould be very confusing. When we \"preprocess\" a scan key, it outputs a\nprocessed scan key with markings such as the required markings that\nyou're using in the patch -- it's something that acts on/changes the\nscan keys themselves. 
Whereas your patch is exploiting information\nfrom already-processed scan keys, by applying it to the key space of a\npage.\n\nI suggest calling it \"prechecking the page\", or something like that. I\ndon't feel very strongly about what you call it, provided it isn't\nconfusing or ambiguous.\n\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 18 Sep 2023 15:48:14 -0700",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index range search optimization"
},
{
"msg_contents": "Hi, Peter!\n\nThank you for your interest in this patch.\n\nOn Tue, Sep 19, 2023 at 1:48 AM Peter Geoghegan <[email protected]> wrote:\n>\n> On Thu, Sep 14, 2023 at 3:23 AM Alexander Korotkov <[email protected]> wrote:\n> > The attached patch allows Postgres to skip scan keys required for directional scans (even when other keys are present in the scan). I'll soon post the testing results and a more polished version of this patch.\n>\n> This is very interesting to me, partly because it seems related to my\n> ongoing work on SAOP execution within nbtree.\n>\n> My patch gives _bt_readpage and particularly _bt_checkkeys more\n> high-level context, which they use to intelligently control the scan.\n> That enables us to dynamically decide whether we should now perform\n> another descent of the index via another call to _bt_first, or if we\n> should prefer to continue on the leaf level for now. Maybe we will\n> match many distinct sets of array keys on the same leaf page, in the\n> same call to _bt_readpage. We don't want to miss out on such\n> opportunities, but we also want to quickly notice when we're on a page\n> where matching any more array keys is just hopeless.\n>\n> There is a need to keep these two things in balance. We need to notice\n> the hopeless cases before wasting too many cycles on it. That creates\n> a practical need to do an early precheck of the high key (roughly the\n> same check that we do already). If the high key indicates that\n> continuing on this page is truly hopeless, then we should give up and\n> do another primitive index scan -- _bt_first will reposition us onto\n> the leaf page that we need to go to next, which is (hopefully) far\n> away from the leaf page we started on.\n\nThis is a pretty neat optimization indeed!\n\n> Your patch therefore has the potential to help my own patch. But, it\n> also has some potential to conflict with it, because my patch makes\n> the meaning of SK_BT_REQFWD and SK_BT_REQBKWD more complicated (though\n> only in cases where we have SK_SEARCHARRAY scan keys). I'm sure that\n> this can be managed sensibly, though.\n\nOK! Let me know if you feel that I need to change something in this\npatch to lower the potential conflict.\n\n> Some feedback on your patch:\n>\n> * I notice that you're not using the high key for this, even in a\n> forward scan -- you're using the last non-pivot tuple on the page. Why\n> is that? (I have some idea why, actually, but I'd like to hear your\n> thoughts first.)\n\nI'm using the last non-pivot tuple on the page instead of hikey\nbecause it's lower than hikey. As you stated below, the most\nsignificant column in the hikey is likely different from that of the\nlast non-pivot tuple. So, it's more likely to use the optimization\nwith the last non-pivot tuple.\n\n> * Separately, I don't think that it makes sense to use the same\n> requiredDirMatched value (which came from the last non-pivot tuple on\n> the page) when the special _bt_checkkeys call for the high key takes\n> place. I don't think that this will lead to wrong answers, but it's\n> weird, and is likely to defeat the existing optimization in some\n> important cases.\n>\n> Due to the influence of suffix truncation, it's relatively likely that\n> the most significant column in the high key will be different to the\n> corresponding value from the last few non-pivot tuples on the page --\n> the high key tends to be \"aligned with natural boundaries in the key\n> space\", and so \"gives us a good preview of the right sibling page\". 
We\n> don't want to treat it the same as non-pivot tuples here, because it's\n> quite different, in ways that are subtle but still important.\n\nThis definitely makes sense. I've removed the usage of\nrequiredDirMatched from this _bt_checkkeys() call.\n\n> * I would avoid using the terminology \"preprocess scan keys\" for this.\n> That exact terminology is already used to describe what\n> _bt_preprocess_keys() does.\n>\n> That function is actually involved in Konstantin's patch, so that\n> could be very confusing. When we \"preprocess\" a scan key, it outputs a\n> processed scan key with markings such as the required markings that\n> you're using in the patch -- it's something that acts on/changes the\n> scan keys themselves. Whereas your patch is exploiting information\n> from already-processed scan keys, by applying it to the key space of a\n> page.\n>\n> I suggest calling it \"prechecking the page\", or something like that. I\n> don't feel very strongly about what you call it, provided it isn't\n> confusing or ambiguous.\n\n\n This also makes sense. I've rephrased the comment.\n\n------\nRegards,\nAlexander Korotkov",
"msg_date": "Wed, 20 Sep 2023 17:07:28 +0300",
"msg_from": "Alexander Korotkov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index range search optimization"
},
{
"msg_contents": "On Wed, Sep 20, 2023 at 5:07 PM Alexander Korotkov <[email protected]> wrote:\n> On Tue, Sep 19, 2023 at 1:48 AM Peter Geoghegan <[email protected]> wrote:\n> This also makes sense. I've rephrased the comment.\n\nThe revised patch is attached. It contains better comments and the\ncommit message. Peter, could you please check if you're OK with this?\n\n------\nRegards,\nAlexander Korotkov",
"msg_date": "Thu, 21 Sep 2023 12:14:17 +0300",
"msg_from": "Alexander Korotkov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index range search optimization"
},
{
"msg_contents": "On Thu, 21 Sept 2023 at 15:17, Alexander Korotkov <[email protected]> wrote:\n>\n> On Wed, Sep 20, 2023 at 5:07 PM Alexander Korotkov <[email protected]> wrote:\n> > On Tue, Sep 19, 2023 at 1:48 AM Peter Geoghegan <[email protected]> wrote:\n> > This also makes sense. I've rephrased the comment.\n>\n> The revised patch is attached. It contains better comments and the\n> commit message. Peter, could you please check if you're OK with this?\nHi, Alexander!\n\nI looked at the patch code and I agree with this optimization.\nImplementation also looks good to me except change :\n+ if (key->sk_flags & (SK_BT_REQFWD | SK_BT_REQBKWD) &&\n+ !(key->sk_flags & SK_ROW_HEADER))\n+ requiredDir = true;\n...\n- if ((key->sk_flags & SK_BT_REQFWD) &&\n- ScanDirectionIsForward(dir))\n- *continuescan = false;\n- else if ((key->sk_flags & SK_BT_REQBKWD) &&\n- ScanDirectionIsBackward(dir))\n+ if (requiredDir)\n *continuescan = false;\n\nlooks like changing behavior in the case when key->sk_flags &\nSK_BT_REQFWD && (! ScanDirectionIsForward(dir)) &&\n(!requiredDirMatched)\nOriginally it doesn't set *continuescan = false; and with the patch it will set.\n\nThis may be relevant for the first page when requiredDirMatched is\nintentionally skipped to be set and for call\n_bt_checkkeys(scan, itup, truncatt, dir, &continuescan, false);\n\nMaybe I missed something and this can not appear for some reason?\n\nAlso naming of requiredDirMatched and requiredDir seems semantically\nhard to understand the meaning without looking at the patch commit\nmessage. But I don't have better proposals yet, so maybe it's\nacceptable.\n\nKind regards,\nPavel Borisov\nSupabase.\n\n\n",
"msg_date": "Thu, 21 Sep 2023 16:10:58 +0400",
"msg_from": "Pavel Borisov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index range search optimization"
},
{
"msg_contents": "On Thu, Sep 21, 2023 at 5:11 AM Pavel Borisov <[email protected]> wrote:\n> I looked at the patch code and I agree with this optimization.\n> Implementation also looks good to me except change :\n> + if (key->sk_flags & (SK_BT_REQFWD | SK_BT_REQBKWD) &&\n> + !(key->sk_flags & SK_ROW_HEADER))\n> + requiredDir = true;\n> ...\n> - if ((key->sk_flags & SK_BT_REQFWD) &&\n> - ScanDirectionIsForward(dir))\n> - *continuescan = false;\n> - else if ((key->sk_flags & SK_BT_REQBKWD) &&\n> - ScanDirectionIsBackward(dir))\n> + if (requiredDir)\n> *continuescan = false;\n>\n> looks like changing behavior in the case when key->sk_flags &\n> SK_BT_REQFWD && (! ScanDirectionIsForward(dir)) &&\n> (!requiredDirMatched)\n> Originally it doesn't set *continuescan = false; and with the patch it will set.\n\nI agree that this is a problem. Inequality strategy scan keys are used\nwhen the initial positioning strategy used by _bt_first (for its\n_bt_search call) is based on an operator other than the \"=\" operator\nfor the opclass. These scan keys are required in one direction only\n(Konstantin's original patch just focussed on these cases, actually).\nObviously, that difference matters. I don't think that this patch\nshould do anything that even looks like it might be revising the\nformal definition of \"required in the current scan direction\".\n\nWhy is SK_ROW_HEADER treated as a special case by the patch? Could it\nbe related to the issues with required-ness and scan direction? Note\nthat we never use BTEqualStrategyNumber for SK_ROW_HEADER scan key row\ncomparisons, so they're only ever required for one scan direction.\n(Equality-type row constructor syntax can of course be used without\npreventing the system from using an index scan, but the nbtree code\nwill not see that case as a row comparison in the first place. This is\ndue to preprocessing by the planner -- nbtree just sees conventional\nscan keys with multiple simple equality scan keys with = row\ncomparisons.)\n\nAlso, what about NULLs? While \"key IS NULL\" is classified as an\nequality check (see _bt_preprocess_keys comments), the same isn't true\nwith \"key IS NOT NULL\". The latter case usually has scan key flags\n\"SK_ISNULL | SK_SEARCHNOTNULL | SK_BT_REQFWD\" -- there is no\nSK_BT_REQBKWD here.\n\n> This may be relevant for the first page when requiredDirMatched is\n> intentionally skipped to be set and for call\n> _bt_checkkeys(scan, itup, truncatt, dir, &continuescan, false);\n\nAlso, requiredDirMatched isn't initialized by _bt_readpage() when\n\"so->firstPage\". Shouldn't it be initialized to false?\n\nAlso, don't we need to take more care with a fully empty page? The \"if\n(!so->firstPage) ... \" block should be gated using a condition such as\n\"if (!so->firstPage && minoff < maxoff)\". (Adding a \"minoff <= maxoff\"\ntest would also work, but then the optimization will get applied on\npages with only one non-pivot tuple. That would be harmless, but a\nwaste of cycles.)\n\n> Also naming of requiredDirMatched and requiredDir seems semantically\n> hard to understand the meaning without looking at the patch commit\n> message. But I don't have better proposals yet, so maybe it's\n> acceptable.\n\nI agree. How about \"requiredMatchedByPrecheck\" instead of\n\"requiredDirMatched\", and \"required\" instead of \"requiredDir\"?\n\nIt would be nice if this patch worked in a way that could be verified\nby an assertion. 
Under this scheme, the optimization would only really\nbe used in release builds (builds without assertions enabled, really).\nWe'd only verify that the optimized case agreed with the slow path in\nassert-enabled builds. It might also make sense to always \"apply the\noptimization\" on assert-enabled builds, even for the first page seen\nby _bt_readpage by any _bt_first-wise scan. Maybe this sort of\napproach is impractical here for some reason, but I don't see why it\nshould be.\n\nObviously, the optimization should lower the amount of work in some\ncalls to _bt_checkkeys, without ever changing the answer _bt_checkkeys\ngives. Ideally, it should be done in a way that makes that very\nobvious. There are some very subtle interactions between _bt_checkkeys\nand other, distant code -- which makes me feel paranoid. Notably, with\nrequired equality strategy scan keys, we're crucially dependent on\n_bt_first using an equality strategy for its initial positioning call\nto _bt_search. This is described by comments in both _bt_checkkeys and\nin _bt_first.\n\nNote, in particular, that it is essential that the initial offnum\npassed to _bt_readpage doesn't allow a call to _bt_checkkeys to take\nplace that could cause it to become confused by a required equality\nstrategy scan key, leading to _bt_checkkeys terminating the whole scan\n\"early\" -- causing wrong answers. For a query \"WHERE foo = 5\" (and a\nforward scan), we had better not pass _bt_readpage an offset number\nfor a tuple with \"foo\" value 4. If that is ever allowed then\n_bt_checkkeys will terminate the scan immediately, leading to wrong\nanswers. All because _bt_checkkeys can't tell if 4 comes before 5 or\ncomes after 5 -- it only has an \"=\" operator to work with, so it can't\nactually make this distinction, so it likes to assume that anything !=\n5 must come after 5 (or before 5 during a backwards scan).\n\nI added a very similar _bt_compare()-based assertion in\n_bt_check_unique(), which went on to catch a very subtle bug in the\nPostgres 12 nbtree work -- the bug fixed by commit 74eb2176bf. So I\nhave put this particular idea about asserting agreement between a fast\npath and a slow comparison path into practice already.\n\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 21 Sep 2023 13:48:09 -0700",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index range search optimization"
},
{
"msg_contents": "On Fri, 22 Sept 2023 at 00:48, Peter Geoghegan <[email protected]> wrote:\n>\n> On Thu, Sep 21, 2023 at 5:11 AM Pavel Borisov <[email protected]> wrote:\n> > I looked at the patch code and I agree with this optimization.\n> > Implementation also looks good to me except change :\n> > + if (key->sk_flags & (SK_BT_REQFWD | SK_BT_REQBKWD) &&\n> > + !(key->sk_flags & SK_ROW_HEADER))\n> > + requiredDir = true;\n> > ...\n> > - if ((key->sk_flags & SK_BT_REQFWD) &&\n> > - ScanDirectionIsForward(dir))\n> > - *continuescan = false;\n> > - else if ((key->sk_flags & SK_BT_REQBKWD) &&\n> > - ScanDirectionIsBackward(dir))\n> > + if (requiredDir)\n> > *continuescan = false;\n> >\n> > looks like changing behavior in the case when key->sk_flags &\n> > SK_BT_REQFWD && (! ScanDirectionIsForward(dir)) &&\n> > (!requiredDirMatched)\n> > Originally it doesn't set *continuescan = false; and with the patch it will set.\n>\n> I agree that this is a problem. Inequality strategy scan keys are used\n> when the initial positioning strategy used by _bt_first (for its\n> _bt_search call) is based on an operator other than the \"=\" operator\n> for the opclass. These scan keys are required in one direction only\n> (Konstantin's original patch just focussed on these cases, actually).\n> Obviously, that difference matters. I don't think that this patch\n> should do anything that even looks like it might be revising the\n> formal definition of \"required in the current scan direction\".\nI think it's the simplification that changed code behavior - just an\noverlook and this could be fixed easily.\n\n> Also, requiredDirMatched isn't initialized by _bt_readpage() when\n> \"so->firstPage\". Shouldn't it be initialized to false?\nTrue.\n\n> > Also naming of requiredDirMatched and requiredDir seems semantically\n> > hard to understand the meaning without looking at the patch commit\n> > message. But I don't have better proposals yet, so maybe it's\n> > acceptable.\n>\n> I agree. How about \"requiredMatchedByPrecheck\" instead of\n> \"requiredDirMatched\", and \"required\" instead of \"requiredDir\"?\nFor me, the main semantic meaning is omitted and even more unclear,\ni.e. what exactly required and matched. I'd suppose scanDirRequired,\nscanDirMatched, but feel it's not ideal either. Or maybe trySkipRange,\ncanSkipRange etc.\n\nRegards,\nPavel Borisov,\nSupabase.\n\n\n",
"msg_date": "Fri, 22 Sep 2023 18:01:18 +0400",
"msg_from": "Pavel Borisov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index range search optimization"
},
{
"msg_contents": "Hi Peter,\nHi Pavel,\n\nThe v4 of the patch is attached.\n\nOn Thu, Sep 21, 2023 at 11:48 PM Peter Geoghegan <[email protected]> wrote:\n>\n> On Thu, Sep 21, 2023 at 5:11 AM Pavel Borisov <[email protected]> wrote:\n> > I looked at the patch code and I agree with this optimization.\n> > Implementation also looks good to me except change :\n> > + if (key->sk_flags & (SK_BT_REQFWD | SK_BT_REQBKWD) &&\n> > + !(key->sk_flags & SK_ROW_HEADER))\n> > + requiredDir = true;\n> > ...\n> > - if ((key->sk_flags & SK_BT_REQFWD) &&\n> > - ScanDirectionIsForward(dir))\n> > - *continuescan = false;\n> > - else if ((key->sk_flags & SK_BT_REQBKWD) &&\n> > - ScanDirectionIsBackward(dir))\n> > + if (requiredDir)\n> > *continuescan = false;\n> >\n> > looks like changing behavior in the case when key->sk_flags &\n> > SK_BT_REQFWD && (! ScanDirectionIsForward(dir)) &&\n> > (!requiredDirMatched)\n> > Originally it doesn't set *continuescan = false; and with the patch it will set.\n>\n> I agree that this is a problem. Inequality strategy scan keys are used\n> when the initial positioning strategy used by _bt_first (for its\n> _bt_search call) is based on an operator other than the \"=\" operator\n> for the opclass. These scan keys are required in one direction only\n> (Konstantin's original patch just focussed on these cases, actually).\n> Obviously, that difference matters. I don't think that this patch\n> should do anything that even looks like it might be revising the\n> formal definition of \"required in the current scan direction\".\n\nSorry, that was messed up from various attempts to write the patch.\nActually, I end up with two boolean variables indicating whether the\ncurrent key is required for the same direction or opposite direction\nscan. I believe that the key required for the opposite direction scan\nshould be already satisfied by _bt_first() except for NULLs case.\nI've implemented a skip of calling the key function for this case\n(with assert that result is the same).\n\n> Why is SK_ROW_HEADER treated as a special case by the patch? Could it\n> be related to the issues with required-ness and scan direction? Note\n> that we never use BTEqualStrategyNumber for SK_ROW_HEADER scan key row\n> comparisons, so they're only ever required for one scan direction.\n> (Equality-type row constructor syntax can of course be used without\n> preventing the system from using an index scan, but the nbtree code\n> will not see that case as a row comparison in the first place. This is\n> due to preprocessing by the planner -- nbtree just sees conventional\n> scan keys with multiple simple equality scan keys with = row\n> comparisons.)\n\nThe thing is that NULLs could appear in the middle of matching values.\n\n# WITH t (a, b) AS (VALUES ('a', 'b'), ('a', NULL), ('b', 'a'))\nSELECT a, b, (a, b) > ('a', 'a') FROM t ORDER BY (a, b);\n a | b | ?column?\n---+------+----------\n a | b | t\n a | NULL | NULL\n b | a | t\n(3 rows)\n\nSo we can't just skip the row comparison operator, because we can meet\nNULL at any place.\n\n> > This may be relevant for the first page when requiredDirMatched is\n> > intentionally skipped to be set and for call\n> > _bt_checkkeys(scan, itup, truncatt, dir, &continuescan, false);\n>\n> Also, requiredDirMatched isn't initialized by _bt_readpage() when\n> \"so->firstPage\". Shouldn't it be initialized to false?\n>\n> Also, don't we need to take more care with a fully empty page? The \"if\n> (!so->firstPage) ... 
\" block should be gated using a condition such as\n> \"if (!so->firstPage && minoff < maxoff)\". (Adding a \"minoff <= maxoff\"\n> test would also work, but then the optimization will get applied on\n> pages with only one non-pivot tuple. That would be harmless, but a\n> waste of cycles.)\n\nThis makes sense. I've added (minoff < maxoff) to the condition.\n\n> > Also naming of requiredDirMatched and requiredDir seems semantically\n> > hard to understand the meaning without looking at the patch commit\n> > message. But I don't have better proposals yet, so maybe it's\n> > acceptable.\n>\n> I agree. How about \"requiredMatchedByPrecheck\" instead of\n> \"requiredDirMatched\", and \"required\" instead of \"requiredDir\"?\n>\n> It would be nice if this patch worked in a way that could be verified\n> by an assertion. Under this scheme, the optimization would only really\n> be used in release builds (builds without assertions enabled, really).\n> We'd only verify that the optimized case agreed with the slow path in\n> assert-enabled builds. It might also make sense to always \"apply the\n> optimization\" on assert-enabled builds, even for the first page seen\n> by _bt_readpage by any _bt_first-wise scan. Maybe this sort of\n> approach is impractical here for some reason, but I don't see why it\n> should be.\n\nYes, this makes sense. I've added an assert check that results are\nthe same as with requiredMatchedByPrecheck == false.\n\n> Obviously, the optimization should lower the amount of work in some\n> calls to _bt_checkkeys, without ever changing the answer _bt_checkkeys\n> gives. Ideally, it should be done in a way that makes that very\n> obvious. There are some very subtle interactions between _bt_checkkeys\n> and other, distant code -- which makes me feel paranoid. Notably, with\n> required equality strategy scan keys, we're crucially dependent on\n> _bt_first using an equality strategy for its initial positioning call\n> to _bt_search. This is described by comments in both _bt_checkkeys and\n> in _bt_first.\n>\n> Note, in particular, that it is essential that the initial offnum\n> passed to _bt_readpage doesn't allow a call to _bt_checkkeys to take\n> place that could cause it to become confused by a required equality\n> strategy scan key, leading to _bt_checkkeys terminating the whole scan\n> \"early\" -- causing wrong answers. For a query \"WHERE foo = 5\" (and a\n> forward scan), we had better not pass _bt_readpage an offset number\n> for a tuple with \"foo\" value 4. If that is ever allowed then\n> _bt_checkkeys will terminate the scan immediately, leading to wrong\n> answers. All because _bt_checkkeys can't tell if 4 comes before 5 or\n> comes after 5 -- it only has an \"=\" operator to work with, so it can't\n> actually make this distinction, so it likes to assume that anything !=\n> 5 must come after 5 (or before 5 during a backwards scan).\n>\n> I added a very similar _bt_compare()-based assertion in\n> _bt_check_unique(), which went on to catch a very subtle bug in the\n> Postgres 12 nbtree work -- the bug fixed by commit 74eb2176bf. So I\n> have put this particular idea about asserting agreement between a fast\n> path and a slow comparison path into practice already.\n\nGood, thank you for the detailed clarification.\n\n------\nRegards,\nAlexander Korotkov",
"msg_date": "Fri, 22 Sep 2023 17:24:40 +0300",
"msg_from": "Alexander Korotkov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index range search optimization"
},
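A small self-contained illustration of the NULL point made in the message above (the table and values are invented for the example): with the default NULLS LAST ordering, a tuple containing a NULL sits physically inside the key range an index scan walks, even though the row comparison itself never matches it.

CREATE TABLE rowcmp (a text, b text);
INSERT INTO rowcmp VALUES ('a', 'a'), ('a', 'b'), ('a', NULL), ('b', 'a'), ('b', 'b');
CREATE INDEX ON rowcmp (a, b);

SET enable_seqscan = off;  -- coax an index scan on a tiny table
SELECT * FROM rowcmp
WHERE (a, b) > ('a', 'a') AND (a, b) < ('b', 'b');
-- Returns ('a','b') and ('b','a'); ('a',NULL) lies between them in index
-- order, so the scan still has to evaluate the row-comparison key for it.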
{
"msg_contents": "Hi, Alexander!\n\nI found and fixed a couple of naming issues that came to v4 from\nearlier patches.\nAlso, I added initialization of requiredMatchedByPrecheck in case of first page.\n\nPlease see patch v5.\n\nOne more doubt about naming. Calling function\n_bt_checkkeys(IndexScanDesc scan, IndexTuple tuple, int tupnatts,\nScanDirection dir, bool *continuescan, bool requiredMatchedByPrecheck)\nas\n(void) _bt_checkkeys(scan, itup, indnatts, dir,\n&requiredMatchedByPrecheck, false);\nlooks little bit misleading because of coincidence of names of 5 and 6\narguments.",
"msg_date": "Mon, 25 Sep 2023 13:58:02 +0400",
"msg_from": "Pavel Borisov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index range search optimization"
},
{
"msg_contents": "Sorry, I've mistaken with attached version previously. Correct v5 attached.\n\nOn Mon, 25 Sept 2023 at 13:58, Pavel Borisov <[email protected]> wrote:\n>\n> Hi, Alexander!\n>\n> I found and fixed a couple of naming issues that came to v4 from\n> earlier patches.\n> Also, I added initialization of requiredMatchedByPrecheck in case of first page.\n>\n> Please see patch v5.\n>\n> One more doubt about naming. Calling function\n> _bt_checkkeys(IndexScanDesc scan, IndexTuple tuple, int tupnatts,\n> ScanDirection dir, bool *continuescan, bool requiredMatchedByPrecheck)\n> as\n> (void) _bt_checkkeys(scan, itup, indnatts, dir,\n> &requiredMatchedByPrecheck, false);\n> looks little bit misleading because of coincidence of names of 5 and 6\n> arguments.",
"msg_date": "Mon, 25 Sep 2023 14:11:53 +0400",
"msg_from": "Pavel Borisov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index range search optimization"
},
{
"msg_contents": "On Mon, Sep 25, 2023 at 12:58 PM Pavel Borisov <[email protected]> wrote:\n> One more doubt about naming. Calling function\n> _bt_checkkeys(IndexScanDesc scan, IndexTuple tuple, int tupnatts,\n> ScanDirection dir, bool *continuescan, bool requiredMatchedByPrecheck)\n> as\n> (void) _bt_checkkeys(scan, itup, indnatts, dir,\n> &requiredMatchedByPrecheck, false);\n> looks little bit misleading because of coincidence of names of 5 and 6\n> arguments.\n\nI've added the comment clarifying this argument usage.\n\n------\nRegards,\nAlexander Korotkov",
"msg_date": "Mon, 25 Sep 2023 13:18:04 +0300",
"msg_from": "Alexander Korotkov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index range search optimization"
},
{
"msg_contents": "On Mon, Sep 25, 2023 at 1:18 PM Alexander Korotkov <[email protected]>\nwrote:\n\n> On Mon, Sep 25, 2023 at 12:58 PM Pavel Borisov <[email protected]>\n> wrote:\n> > One more doubt about naming. Calling function\n> > _bt_checkkeys(IndexScanDesc scan, IndexTuple tuple, int tupnatts,\n> > ScanDirection dir, bool *continuescan, bool requiredMatchedByPrecheck)\n> > as\n> > (void) _bt_checkkeys(scan, itup, indnatts, dir,\n> > &requiredMatchedByPrecheck, false);\n> > looks little bit misleading because of coincidence of names of 5 and 6\n> > arguments.\n>\n> I've added the comment clarifying this argument usage.\n>\n\nFixed typo inficating => indicating as pointed by Pavel.\nPeter, what do you think about the current shape of the patch?\n\n------\nRegards,\nAlexander Korotkov",
"msg_date": "Wed, 27 Sep 2023 19:41:31 +0300",
"msg_from": "Alexander Korotkov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index range search optimization"
},
{
"msg_contents": "On Wed, Sep 27, 2023 at 9:41 AM Alexander Korotkov <[email protected]> wrote:\n> Fixed typo inficating => indicating as pointed by Pavel.\n> Peter, what do you think about the current shape of the patch?\n\nI'll try to get to this tomorrow. I'm rather busy with moving home at\nthe moment, unfortunately.\n\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 27 Sep 2023 19:21:00 -0700",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index range search optimization"
},
{
"msg_contents": "On Thu, Sep 28, 2023 at 5:21 AM Peter Geoghegan <[email protected]> wrote:\n> On Wed, Sep 27, 2023 at 9:41 AM Alexander Korotkov <[email protected]> wrote:\n> > Fixed typo inficating => indicating as pointed by Pavel.\n> > Peter, what do you think about the current shape of the patch?\n>\n> I'll try to get to this tomorrow. I'm rather busy with moving home at\n> the moment, unfortunately.\n\nNo problem, thank you!\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Thu, 28 Sep 2023 22:03:41 +0300",
"msg_from": "Alexander Korotkov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index range search optimization"
},
{
"msg_contents": "On Fri, Sep 22, 2023 at 7:24 AM Alexander Korotkov <[email protected]> wrote:\n> The thing is that NULLs could appear in the middle of matching values.\n>\n> # WITH t (a, b) AS (VALUES ('a', 'b'), ('a', NULL), ('b', 'a'))\n> SELECT a, b, (a, b) > ('a', 'a') FROM t ORDER BY (a, b);\n> a | b | ?column?\n> ---+------+----------\n> a | b | t\n> a | NULL | NULL\n> b | a | t\n> (3 rows)\n>\n> So we can't just skip the row comparison operator, because we can meet\n> NULL at any place.\n\nBut why would SK_ROW_HEADER be any different? Is it related to this\nexisting case inside _bt_check_rowcompare()?:\n\n if (subkey->sk_flags & SK_ISNULL)\n {\n /*\n * Unlike the simple-scankey case, this isn't a disallowed case.\n * But it can never match. If all the earlier row comparison\n * columns are required for the scan direction, we can stop the\n * scan, because there can't be another tuple that will succeed.\n */\n if (subkey != (ScanKey) DatumGetPointer(skey->sk_argument))\n subkey--;\n if ((subkey->sk_flags & SK_BT_REQFWD) &&\n ScanDirectionIsForward(dir))\n *continuescan = false;\n else if ((subkey->sk_flags & SK_BT_REQBKWD) &&\n ScanDirectionIsBackward(dir))\n *continuescan = false;\n return false;\n }\n\nI noticed that you're not initializing so->firstPage correctly for the\n_bt_endpoint() path, which is used when the initial position of the\nscan is either the leftmost or rightmost page. That is, it's possible\nto reach _bt_readpage() without having reached the point in\n_bt_first() where you initialize so->firstPage to \"true\".\n\nIt would probably make sense if the flag was initialized to \"false\" in\nthe same way as most other scan state is already, somewhere in\nnbtree.c. Probably in btrescan().\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 28 Sep 2023 18:57:07 -0700",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index range search optimization"
},
{
"msg_contents": "Hi, Peter.\n\nOn Fri, Sep 29, 2023 at 4:57 AM Peter Geoghegan <[email protected]> wrote:\n> On Fri, Sep 22, 2023 at 7:24 AM Alexander Korotkov <[email protected]> wrote:\n> > The thing is that NULLs could appear in the middle of matching values.\n> >\n> > # WITH t (a, b) AS (VALUES ('a', 'b'), ('a', NULL), ('b', 'a'))\n> > SELECT a, b, (a, b) > ('a', 'a') FROM t ORDER BY (a, b);\n> > a | b | ?column?\n> > ---+------+----------\n> > a | b | t\n> > a | NULL | NULL\n> > b | a | t\n> > (3 rows)\n> >\n> > So we can't just skip the row comparison operator, because we can meet\n> > NULL at any place.\n>\n> But why would SK_ROW_HEADER be any different? Is it related to this\n> existing case inside _bt_check_rowcompare()?:\n>\n> if (subkey->sk_flags & SK_ISNULL)\n> {\n> /*\n> * Unlike the simple-scankey case, this isn't a disallowed case.\n> * But it can never match. If all the earlier row comparison\n> * columns are required for the scan direction, we can stop the\n> * scan, because there can't be another tuple that will succeed.\n> */\n> if (subkey != (ScanKey) DatumGetPointer(skey->sk_argument))\n> subkey--;\n> if ((subkey->sk_flags & SK_BT_REQFWD) &&\n> ScanDirectionIsForward(dir))\n> *continuescan = false;\n> else if ((subkey->sk_flags & SK_BT_REQBKWD) &&\n> ScanDirectionIsBackward(dir))\n> *continuescan = false;\n> return false;\n> }\n\nYes, exactly. Our row comparison operators don't match if there is any\nnull inside the row. But you can find these rows within the matching\nrange.\n\n> I noticed that you're not initializing so->firstPage correctly for the\n> _bt_endpoint() path, which is used when the initial position of the\n> scan is either the leftmost or rightmost page. That is, it's possible\n> to reach _bt_readpage() without having reached the point in\n> _bt_first() where you initialize so->firstPage to \"true\".\n\nGood catch, thank you!\n\n> It would probably make sense if the flag was initialized to \"false\" in\n> the same way as most other scan state is already, somewhere in\n> nbtree.c. Probably in btrescan().\n\nMakes sense, initialisation is added.\n\n------\nRegards,\nAlexander Korotkov",
"msg_date": "Fri, 29 Sep 2023 09:35:24 +0300",
"msg_from": "Alexander Korotkov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index range search optimization"
},
{
"msg_contents": "Hi!\n\nOn Fri, 29 Sept 2023 at 10:35, Alexander Korotkov <[email protected]> wrote:\n>\n> Hi, Peter.\n>\n> On Fri, Sep 29, 2023 at 4:57 AM Peter Geoghegan <[email protected]> wrote:\n> > On Fri, Sep 22, 2023 at 7:24 AM Alexander Korotkov <[email protected]> wrote:\n> > > The thing is that NULLs could appear in the middle of matching values.\n> > >\n> > > # WITH t (a, b) AS (VALUES ('a', 'b'), ('a', NULL), ('b', 'a'))\n> > > SELECT a, b, (a, b) > ('a', 'a') FROM t ORDER BY (a, b);\n> > > a | b | ?column?\n> > > ---+------+----------\n> > > a | b | t\n> > > a | NULL | NULL\n> > > b | a | t\n> > > (3 rows)\n> > >\n> > > So we can't just skip the row comparison operator, because we can meet\n> > > NULL at any place.\n> >\n> > But why would SK_ROW_HEADER be any different? Is it related to this\n> > existing case inside _bt_check_rowcompare()?:\n> >\n> > if (subkey->sk_flags & SK_ISNULL)\n> > {\n> > /*\n> > * Unlike the simple-scankey case, this isn't a disallowed case.\n> > * But it can never match. If all the earlier row comparison\n> > * columns are required for the scan direction, we can stop the\n> > * scan, because there can't be another tuple that will succeed.\n> > */\n> > if (subkey != (ScanKey) DatumGetPointer(skey->sk_argument))\n> > subkey--;\n> > if ((subkey->sk_flags & SK_BT_REQFWD) &&\n> > ScanDirectionIsForward(dir))\n> > *continuescan = false;\n> > else if ((subkey->sk_flags & SK_BT_REQBKWD) &&\n> > ScanDirectionIsBackward(dir))\n> > *continuescan = false;\n> > return false;\n> > }\n>\n> Yes, exactly. Our row comparison operators don't match if there is any\n> null inside the row. But you can find these rows within the matching\n> range.\n>\n> > I noticed that you're not initializing so->firstPage correctly for the\n> > _bt_endpoint() path, which is used when the initial position of the\n> > scan is either the leftmost or rightmost page. That is, it's possible\n> > to reach _bt_readpage() without having reached the point in\n> > _bt_first() where you initialize so->firstPage to \"true\".\n>\n> Good catch, thank you!\n>\n> > It would probably make sense if the flag was initialized to \"false\" in\n> > the same way as most other scan state is already, somewhere in\n> > nbtree.c. Probably in btrescan().\n>\n> Makes sense, initialisation is added.\nI've looked through the patch v8. I think it's good enough to be\npushed if Peter has no objections.\n\nRegards,\nPavel.\n\n\n",
"msg_date": "Wed, 4 Oct 2023 01:58:49 +0400",
"msg_from": "Pavel Borisov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index range search optimization"
},
{
"msg_contents": "On Wed, Oct 4, 2023 at 12:59 AM Pavel Borisov <[email protected]> wrote:\n> I've looked through the patch v8. I think it's good enough to be\n> pushed if Peter has no objections.\n\nThank you, Pavel.\nI'll push this if there are no objections.\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Wed, 4 Oct 2023 03:00:09 +0300",
"msg_from": "Alexander Korotkov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index range search optimization"
},
{
"msg_contents": "\nOn 04/10/2023 3:00 am, Alexander Korotkov wrote:\n> On Wed, Oct 4, 2023 at 12:59 AM Pavel Borisov <[email protected]> wrote:\n>> I've looked through the patch v8. I think it's good enough to be\n>> pushed if Peter has no objections.\n> Thank you, Pavel.\n> I'll push this if there are no objections.\n>\n> ------\n> Regards,\n> Alexander Korotkov\n\n\nSorry, can you please also mention that original idea of this \noptimization belongs to Ilya Anfimov (it was discussed in @pgsql \nTelegram chat).\n\n\n\n",
"msg_date": "Fri, 6 Oct 2023 21:44:05 +0300",
"msg_from": "Konstantin Knizhnik <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Index range search optimization"
},
{
"msg_contents": "Hi, Konstantin!\n\nOn Fri, 6 Oct 2023 at 22:44, Konstantin Knizhnik <[email protected]> wrote:\n>\n>\n> On 04/10/2023 3:00 am, Alexander Korotkov wrote:\n> > On Wed, Oct 4, 2023 at 12:59 AM Pavel Borisov <[email protected]> wrote:\n> >> I've looked through the patch v8. I think it's good enough to be\n> >> pushed if Peter has no objections.\n> > Thank you, Pavel.\n> > I'll push this if there are no objections.\n> >\n> > ------\n> > Regards,\n> > Alexander Korotkov\n>\n>\n> Sorry, can you please also mention that original idea of this\n> optimization belongs to Ilya Anfimov (it was discussed in @pgsql\n> Telegram chat).\n\nWhile it's no doubt correct to mention all authors of the patch, I\nlooked through the thread and saw no mentions of Ilya's\ncontributions/ideas before the patch became pushed. I'm not up to the\ncurrent policy for processing these requests, but I suppose it's\ncomplicated to introduce back changes into the main branch that is\nalready ahead of patch e0b1ee17dc3a38.\n\nRegards,\nPavel\n\n\n",
"msg_date": "Fri, 6 Oct 2023 22:59:16 +0400",
"msg_from": "Pavel Borisov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index range search optimization"
},
{
"msg_contents": "On Fri, Oct 6, 2023 at 9:59 PM Pavel Borisov <[email protected]> wrote:\n> On Fri, 6 Oct 2023 at 22:44, Konstantin Knizhnik <[email protected]> wrote:\n> >\n> >\n> > On 04/10/2023 3:00 am, Alexander Korotkov wrote:\n> > > On Wed, Oct 4, 2023 at 12:59 AM Pavel Borisov <[email protected]> wrote:\n> > >> I've looked through the patch v8. I think it's good enough to be\n> > >> pushed if Peter has no objections.\n> > > Thank you, Pavel.\n> > > I'll push this if there are no objections.\n> > >\n> > > ------\n> > > Regards,\n> > > Alexander Korotkov\n> >\n> >\n> > Sorry, can you please also mention that original idea of this\n> > optimization belongs to Ilya Anfimov (it was discussed in @pgsql\n> > Telegram chat).\n>\n> While it's no doubt correct to mention all authors of the patch, I\n> looked through the thread and saw no mentions of Ilya's\n> contributions/ideas before the patch became pushed. I'm not up to the\n> current policy for processing these requests, but I suppose it's\n> complicated to introduce back changes into the main branch that is\n> already ahead of patch e0b1ee17dc3a38.\n\nYep, that happened before. We don't do a force push to override\ncommit messages and credit missing contributors. I waited more than\n48 hours before pushing the final version of the patch, and that was\nthe time to propose changes like this. Now, I think all we can do is\ncredit Ilya on mailing lists. I believe we already did :)\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Sat, 7 Oct 2023 20:42:51 +0300",
"msg_from": "Alexander Korotkov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index range search optimization"
}
] |
[
{
"msg_contents": "Hi\n\nMild corner-case annoyance while doing Random Experimental Things:\n\n postgres=# SELECT * FROM parttest;\n ERROR: user mapping not found for \"postgres\"\n\nOkaaaay, but which server?\n\n postgres=# \\det\n List of foreign tables\n Schema | Table | Server\n --------+---------------+-----------\n public | parttest_10_1 | fdw_node2\n public | parttest_10_3 | fdw_node3\n public | parttest_10_5 | fdw_node4\n public | parttest_10_7 | fdw_node5\n public | parttest_10_9 | fdw_node6\n (5 rows)\n\n(Muffled sound of small patch hatching) aha:\n\n postgres=# SELECT * FROM parttest;\n ERROR: user mapping not found for user \"postgres\", server \"fdw_node5\"\n\nRegards\n\nIan Barwick",
"msg_date": "Fri, 23 Jun 2023 16:45:05 +0900",
"msg_from": "Ian Lawrence Barwick <[email protected]>",
"msg_from_op": true,
"msg_subject": "patch: improve \"user mapping not found\" error message"
},
{
"msg_contents": "On Fri, 2023-06-23 at 16:45 +0900, Ian Lawrence Barwick wrote:\n> Mild corner-case annoyance while doing Random Experimental Things:\n> \n> postgres=# SELECT * FROM parttest;\n> ERROR: user mapping not found for \"postgres\"\n> \n> Okaaaay, but which server?\n> \n> postgres=# \\det\n> List of foreign tables\n> Schema | Table | Server\n> --------+---------------+-----------\n> public | parttest_10_1 | fdw_node2\n> public | parttest_10_3 | fdw_node3\n> public | parttest_10_5 | fdw_node4\n> public | parttest_10_7 | fdw_node5\n> public | parttest_10_9 | fdw_node6\n> (5 rows)\n> \n> (Muffled sound of small patch hatching) aha:\n> \n> postgres=# SELECT * FROM parttest;\n> ERROR: user mapping not found for user \"postgres\", server \"fdw_node5\"\n\n+1\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Fri, 23 Jun 2023 11:58:01 +0200",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: patch: improve \"user mapping not found\" error message"
},
{
"msg_contents": "On 23.06.23 09:45, Ian Lawrence Barwick wrote:\n> \tif (!HeapTupleIsValid(tp))\n> +\t{\n> +\t\tForeignServer *server = GetForeignServer(serverid);\n> +\n> \t\tereport(ERROR,\n> \t\t\t\t(errcode(ERRCODE_UNDEFINED_OBJECT),\n> -\t\t\t\t errmsg(\"user mapping not found for \\\"%s\\\"\",\n> -\t\t\t\t\t\tMappingUserName(userid))));\n> +\t\t\t\t errmsg(\"user mapping not found for user \\\"%s\\\", server \\\"%s\\\"\",\n> +\t\t\t\t\t\tMappingUserName(userid),\n> +\t\t\t\t\t\tserver->servername)));\n> +\t}\n\nWhat if the foreign server does not exist either? Then this would show \na \"cache lookup failed\" error message, which I think we should avoid.\n\nThere is existing logic for handling this in \nget_object_address_usermapping().\n\n\n",
"msg_date": "Mon, 3 Jul 2023 11:22:48 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: patch: improve \"user mapping not found\" error message"
},
{
"msg_contents": "2023年7月3日(月) 18:22 Peter Eisentraut <[email protected]>:\n>\n> On 23.06.23 09:45, Ian Lawrence Barwick wrote:\n> > if (!HeapTupleIsValid(tp))\n> > + {\n> > + ForeignServer *server = GetForeignServer(serverid);\n> > +\n> > ereport(ERROR,\n> > (errcode(ERRCODE_UNDEFINED_OBJECT),\n> > - errmsg(\"user mapping not found for \\\"%s\\\"\",\n> > - MappingUserName(userid))));\n> > + errmsg(\"user mapping not found for user \\\"%s\\\", server \\\"%s\\\"\",\n> > + MappingUserName(userid),\n> > + server->servername)));\n> > + }\n>\n> What if the foreign server does not exist either? Then this would show\n> a \"cache lookup failed\" error message, which I think we should avoid.\n>\n> There is existing logic for handling this in\n> get_object_address_usermapping().\n\nApologies, missed this response somewhere. Does the attached fix that?\n\nRegards\n\nIan Barwick",
"msg_date": "Mon, 20 Nov 2023 10:25:42 +0900",
"msg_from": "Ian Lawrence Barwick <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: patch: improve \"user mapping not found\" error message"
},
{
"msg_contents": "On 20.11.23 02:25, Ian Lawrence Barwick wrote:\n> 2023年7月3日(月) 18:22 Peter Eisentraut <[email protected]>:\n>>\n>> On 23.06.23 09:45, Ian Lawrence Barwick wrote:\n>>> if (!HeapTupleIsValid(tp))\n>>> + {\n>>> + ForeignServer *server = GetForeignServer(serverid);\n>>> +\n>>> ereport(ERROR,\n>>> (errcode(ERRCODE_UNDEFINED_OBJECT),\n>>> - errmsg(\"user mapping not found for \\\"%s\\\"\",\n>>> - MappingUserName(userid))));\n>>> + errmsg(\"user mapping not found for user \\\"%s\\\", server \\\"%s\\\"\",\n>>> + MappingUserName(userid),\n>>> + server->servername)));\n>>> + }\n>>\n>> What if the foreign server does not exist either? Then this would show\n>> a \"cache lookup failed\" error message, which I think we should avoid.\n>>\n>> There is existing logic for handling this in\n>> get_object_address_usermapping().\n> \n> Apologies, missed this response somewhere. Does the attached fix that?\n\nHmm, now that I look at this again, under what circumstances would the \nserver not be found? Maybe the first patch was right and it should give \na \"scary\" error in that case, instead of just omitting it.\n\nIn any case, this patch appears to be missing an update in the \npostgres_fdw test output.\n\n\n\n",
"msg_date": "Thu, 23 Nov 2023 09:41:14 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: patch: improve \"user mapping not found\" error message"
},
{
"msg_contents": "On 23.11.23 09:41, Peter Eisentraut wrote:\n> On 20.11.23 02:25, Ian Lawrence Barwick wrote:\n>> 2023年7月3日(月) 18:22 Peter Eisentraut <[email protected]>:\n>>>\n>>> On 23.06.23 09:45, Ian Lawrence Barwick wrote:\n>>>> if (!HeapTupleIsValid(tp))\n>>>> + {\n>>>> + ForeignServer *server = GetForeignServer(serverid);\n>>>> +\n>>>> ereport(ERROR,\n>>>> (errcode(ERRCODE_UNDEFINED_OBJECT),\n>>>> - errmsg(\"user mapping not found for \n>>>> \\\"%s\\\"\",\n>>>> - \n>>>> MappingUserName(userid))));\n>>>> + errmsg(\"user mapping not found for \n>>>> user \\\"%s\\\", server \\\"%s\\\"\",\n>>>> + MappingUserName(userid),\n>>>> + server->servername)));\n>>>> + }\n>>>\n>>> What if the foreign server does not exist either? Then this would show\n>>> a \"cache lookup failed\" error message, which I think we should avoid.\n>>>\n>>> There is existing logic for handling this in\n>>> get_object_address_usermapping().\n>>\n>> Apologies, missed this response somewhere. Does the attached fix that?\n> \n> Hmm, now that I look at this again, under what circumstances would the \n> server not be found? Maybe the first patch was right and it should give \n> a \"scary\" error in that case, instead of just omitting it.\n> \n> In any case, this patch appears to be missing an update in the \n> postgres_fdw test output.\n\nI have committed the first version of the patch together with the \nrequired test changes.\n\n\n\n",
"msg_date": "Thu, 30 Nov 2023 05:42:26 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: patch: improve \"user mapping not found\" error message"
}
] |
[
{
"msg_contents": "Hi,\n\nI need to move some databases from a MySQL server to Postgresql.\n\nCan someone tell me the migration procedure, tools, and recommendations?\n\nThanks\n\nHi,I need to move some databases from a MySQL server to Postgresql.Can someone tell me the migration procedure, tools, and recommendations? Thanks",
"msg_date": "Fri, 23 Jun 2023 11:30:06 +0200",
"msg_from": "Alfredo Alcala <[email protected]>",
"msg_from_op": true,
"msg_subject": "Migration database from mysql to postgress"
},
{
"msg_contents": "Alfredo Alcala schrieb am 23.06.2023 um 11:30:\n> I need to move some databases from a MySQL server to Postgresql.\n>\n> Can someone tell me the migration procedure, tools, and recommendations? \n\n\nDespite its name, \"ora2pg\" can also migrate MySQL to Postgres\n\nhttps://ora2pg.darold.net/\n\n\n\n",
"msg_date": "Fri, 23 Jun 2023 11:44:58 +0200",
"msg_from": "Thomas Kellerer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Migration database from mysql to postgress"
},
{
"msg_contents": "On 2023-06-23 Fr 05:30, Alfredo Alcala wrote:\n> Hi,\n>\n> I need to move some databases from a MySQL server to Postgresql.\n>\n> Can someone tell me the migration procedure, tools, and recommendations?\n>\n>\n\n\nPlease ask questions on the correct forum. For this question I suggest \nthe pgsql-general mailing list. pgsql-hackers is for questions about \npostgresql development, not usage.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2023-06-23 Fr 05:30, Alfredo Alcala\n wrote:\n\n\n\n\nHi,\n \n\n\nI need to move some databases from a MySQL server to\n Postgresql.\n\n\nCan someone tell me the migration procedure, tools, and\n recommendations? \n\n\n\n\n\n\n\n\n\n\n\n\nPlease ask questions on the correct forum. For this question I\n suggest the pgsql-general mailing list. pgsql-hackers is for\n questions about postgresql development, not usage.\n\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Fri, 23 Jun 2023 07:28:52 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Migration database from mysql to postgress"
},
{
"msg_contents": "Hello\n\nNecesito mover algunas bases de datos de un servidor MySQL a Postgresql.\n\nCan anyone tell me the migration procedure, tools and recommendations?\n\nGracias\n\n\nEl vie, 23 jun 2023 a las 11:30, Alfredo Alcala (<[email protected]>)\nescribió:\n\n> Hola\n>\n> Necesito mover algunas bases de datos de un servidor MySQL a Postgresql.\n>\n> ¿Alguien puede decirme el procedimiento de migración, las herramientas y\n> las recomendaciones?\n>\n> Gracias\n>\n\nHelloNecesito mover algunas bases de datos de un servidor MySQL a Postgresql.Can anyone tell me the migration procedure, tools and recommendations?GraciasEl vie, 23 jun 2023 a las 11:30, Alfredo Alcala (<[email protected]>) escribió:HolaNecesito mover algunas bases de datos de un servidor MySQL a Postgresql.¿Alguien puede decirme el procedimiento de migración, las herramientas y las recomendaciones? Gracias",
"msg_date": "Fri, 23 Jun 2023 13:33:03 +0200",
"msg_from": "Alfredo Alcala <[email protected]>",
"msg_from_op": true,
"msg_subject": "Fwd: Migration database from mysql to postgress"
}
] |
[
{
"msg_contents": "Dear hackers,\n\nWhile discussing based on the article[1] with Japanese developers, \nI found inconsistencies between codes and documents.\n\n45b1a67a[2] changed the behavior when non-ASCII characters was set as application_name,\ncluster_name and postgres_fdw.application_name, but it seemed not to be documented.\nPreviously non-ASCII chars were replaed with question makrs '?', but now they are replaced\nwith a hex escape instead.\n\nHow do you think? Is my understanding correct?\n\nAcknowledgement:\nSawada-san and Shinoda-san led the developer's discussion.\nFujii-san was confirmed my points. Thank you for all of their works!\n\n[1]: https://h50146.www5.hpe.com/products/software/oe/linux/mainstream/support/lcc/pdf/PostgreSQL16Beta1_New_Features_en_20230528_1.pdf\n[2]: https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=45b1a67a0fcb3f1588df596431871de4c93cb76f;hp=da5d4ea5aaac4fc02f2e2aec272efe438dd4e171\n \nBest Regards,\nHayato Kuroda\nFUJITSU LIMITED",
"msg_date": "Fri, 23 Jun 2023 14:25:13 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "[PGdocs] fix description for handling pf non-ASCII characters"
},
{
"msg_contents": "On Fri, Jun 23, 2023 at 10:25 PM Hayato Kuroda (Fujitsu)\n<[email protected]> wrote:\n>\n> Dear hackers,\n>\n> While discussing based on the article[1] with Japanese developers,\n> I found inconsistencies between codes and documents.\n>\n> 45b1a67a[2] changed the behavior when non-ASCII characters was set as application_name,\n> cluster_name and postgres_fdw.application_name, but it seemed not to be documented.\n> Previously non-ASCII chars were replaed with question makrs '?', but now they are replaced\n> with a hex escape instead.\n>\n> How do you think? Is my understanding correct?\n>\n> Acknowledgement:\n> Sawada-san and Shinoda-san led the developer's discussion.\n> Fujii-san was confirmed my points. Thank you for all of their works!\n>\n> [1]: https://h50146.www5.hpe.com/products/software/oe/linux/mainstream/support/lcc/pdf/PostgreSQL16Beta1_New_Features_en_20230528_1.pdf\n> [2]: https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=45b1a67a0fcb3f1588df596431871de4c93cb76f;hp=da5d4ea5aaac4fc02f2e2aec272efe438dd4e171\n>\n> Best Regards,\n> Hayato Kuroda\n> FUJITSU LIMITED\n>\n\nin your patch:\n> printable ASCII characters will be replaced with a hex escape.\n\nMy wording is not good. I think the result will be: ASCII characters\nwill be as is, non-ASCII characters will be replaced with \"a hex\nescape\".\n\nset application_name to 'abc漢字Abc';\nSET\ntest16=# show application_name;\n application_name\n--------------------------------\n abc\\xe6\\xbc\\xa2\\xe5\\xad\\x97Abc\n(1 row)\n\nI see multi escape, so I am not sure \"a hex escape\".\n\nto properly render it back to 'abc漢字Abc'\nhere is how i do it:\nselect 'abc' || convert_from(decode(' e6bca2e5ad97','hex'), 'UTF8') || 'Abc';\n\nI guess it's still painful if your application_name has non-ASCII chars.\n\n\n",
"msg_date": "Thu, 29 Jun 2023 10:58:23 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PGdocs] fix description for handling pf non-ASCII characters"
},
{
"msg_contents": "Dear Jian,\r\n\r\nThank you for checking my patch!\r\n\r\n> \t\r\n> in your patch:\r\n> > printable ASCII characters will be replaced with a hex escape.\r\n> \r\n> My wording is not good. I think the result will be: ASCII characters\r\n> will be as is, non-ASCII characters will be replaced with \"a hex\r\n> escape\".\r\n\r\nYeah, your point was right. I have already said:\r\n\"anything other than printable ASCII characters will be replaced with a hex escape\"\r\nIIUC They have same meaning.\r\n\r\nYou might want to say the line was not good, so reworded like\r\n\"non-ASCII characters will be replaced with hexadecimal strings.\" How do you think?\r\n\r\n> set application_name to 'abc漢字Abc';\r\n> SET\r\n> test16=# show application_name;\r\n> application_name\r\n> --------------------------------\r\n> abc\\xe6\\xbc\\xa2\\xe5\\xad\\x97Abc\r\n> (1 row)\r\n> \r\n> I see multi escape, so I am not sure \"a hex escape\".\r\n\r\nNot sure what you said, but I could not find word \"hex escape\" in the document.\r\nSo I used \"hexadecimal string\" instead. Is it acceptable? \r\n\r\n> to properly render it back to 'abc漢字Abc'\r\n> here is how i do it:\r\n> select 'abc' || convert_from(decode(' e6bca2e5ad97','hex'), 'UTF8') || 'Abc';\r\n\r\nYeah, your approach seems right, but I'm not sure it is related with us.\r\nJust to confirm, I don't have interest the method for rendering non-ASCII characters.\r\nMy motivation of the patch was to document the the incompatibility noted in [1]:\r\n\r\n>\r\nChanged the conversion rules when non-ASCII characters are specified for ASCII-only\r\nstrings such as parameters application_name and cluster_name. Previously, it was\r\nconverted in byte units with a question mark (?), but in PostgreSQL 16, it is\r\nconverted to a hexadecimal string.\r\n>\r\n\r\n> I guess it's still painful if your application_name has non-ASCII chars.\r\n\r\nI agreed that, but no one has recommended to use non-ASCII.\r\n\r\n[1]: https://h50146.www5.hpe.com/products/software/oe/linux/mainstream/support/lcc/pdf/PostgreSQL16Beta1_New_Features_en_20230528_1.pdf\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED",
"msg_date": "Thu, 29 Jun 2023 07:51:49 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PGdocs] fix description for handling pf non-ASCII characters"
},
{
"msg_contents": "On Thu, Jun 29, 2023 at 3:51 PM Hayato Kuroda (Fujitsu)\n<[email protected]> wrote:\n>\n> Dear Jian,\n>\n> Thank you for checking my patch!\n>\n> >\n> > in your patch:\n> > > printable ASCII characters will be replaced with a hex escape.\n> >\n> > My wording is not good. I think the result will be: ASCII characters\n> > will be as is, non-ASCII characters will be replaced with \"a hex\n> > escape\".\n>\n> Yeah, your point was right. I have already said:\n> \"anything other than printable ASCII characters will be replaced with a hex escape\"\n> IIUC They have same meaning.\n>\n> You might want to say the line was not good, so reworded like\n> \"non-ASCII characters will be replaced with hexadecimal strings.\" How do you think?\n>\n> > set application_name to 'abc漢字Abc';\n> > SET\n> > test16=# show application_name;\n> > application_name\n> > --------------------------------\n> > abc\\xe6\\xbc\\xa2\\xe5\\xad\\x97Abc\n> > (1 row)\n> >\n> > I see multi escape, so I am not sure \"a hex escape\".\n>\n> Not sure what you said, but I could not find word \"hex escape\" in the document.\n> So I used \"hexadecimal string\" instead. Is it acceptable?\n>\n> > to properly render it back to 'abc漢字Abc'\n> > here is how i do it:\n> > select 'abc' || convert_from(decode(' e6bca2e5ad97','hex'), 'UTF8') || 'Abc';\n>\n> Yeah, your approach seems right, but I'm not sure it is related with us.\n> Just to confirm, I don't have interest the method for rendering non-ASCII characters.\n> My motivation of the patch was to document the the incompatibility noted in [1]:\n>\n> >\n> Changed the conversion rules when non-ASCII characters are specified for ASCII-only\n> strings such as parameters application_name and cluster_name. Previously, it was\n> converted in byte units with a question mark (?), but in PostgreSQL 16, it is\n> converted to a hexadecimal string.\n> >\n>\n> > I guess it's still painful if your application_name has non-ASCII chars.\n>\n> I agreed that, but no one has recommended to use non-ASCII.\n>\n> [1]: https://h50146.www5.hpe.com/products/software/oe/linux/mainstream/support/lcc/pdf/PostgreSQL16Beta1_New_Features_en_20230528_1.pdf\n>\n> Best Regards,\n> Hayato Kuroda\n> FUJITSU LIMITED\n\nlooks fine. Do you need to add to commitfest?\n\n\n",
"msg_date": "Mon, 3 Jul 2023 20:22:04 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PGdocs] fix description for handling pf non-ASCII characters"
},
{
"msg_contents": "Dear Jian,\r\n\r\n> On Thu, Jun 29, 2023 at 3:51 PM Hayato Kuroda (Fujitsu)\r\n> <[email protected]> wrote:\r\n> >\r\n> > Dear Jian,\r\n> >\r\n> > Thank you for checking my patch!\r\n> >\r\n> > >\r\n> > > in your patch:\r\n> > > > printable ASCII characters will be replaced with a hex escape.\r\n> > >\r\n> > > My wording is not good. I think the result will be: ASCII characters\r\n> > > will be as is, non-ASCII characters will be replaced with \"a hex\r\n> > > escape\".\r\n> >\r\n> > Yeah, your point was right. I have already said:\r\n> > \"anything other than printable ASCII characters will be replaced with a hex\r\n> escape\"\r\n> > IIUC They have same meaning.\r\n> >\r\n> > You might want to say the line was not good, so reworded like\r\n> > \"non-ASCII characters will be replaced with hexadecimal strings.\" How do you\r\n> think?\r\n> >\r\n> > > set application_name to 'abc漢字Abc';\r\n> > > SET\r\n> > > test16=# show application_name;\r\n> > > application_name\r\n> > > --------------------------------\r\n> > > abc\\xe6\\xbc\\xa2\\xe5\\xad\\x97Abc\r\n> > > (1 row)\r\n> > >\r\n> > > I see multi escape, so I am not sure \"a hex escape\".\r\n> >\r\n> > Not sure what you said, but I could not find word \"hex escape\" in the document.\r\n> > So I used \"hexadecimal string\" instead. Is it acceptable?\r\n> >\r\n> > > to properly render it back to 'abc漢字Abc'\r\n> > > here is how i do it:\r\n> > > select 'abc' || convert_from(decode(' e6bca2e5ad97','hex'), 'UTF8') || 'Abc';\r\n> >\r\n> > Yeah, your approach seems right, but I'm not sure it is related with us.\r\n> > Just to confirm, I don't have interest the method for rendering non-ASCII\r\n> characters.\r\n> > My motivation of the patch was to document the the incompatibility noted in [1]:\r\n> >\r\n> > >\r\n> > Changed the conversion rules when non-ASCII characters are specified for\r\n> ASCII-only\r\n> > strings such as parameters application_name and cluster_name. Previously, it\r\n> was\r\n> > converted in byte units with a question mark (?), but in PostgreSQL 16, it is\r\n> > converted to a hexadecimal string.\r\n> > >\r\n> >\r\n> > > I guess it's still painful if your application_name has non-ASCII chars.\r\n> >\r\n> > I agreed that, but no one has recommended to use non-ASCII.\r\n> >\r\n> > [1]:\r\n> https://h50146.www5.hpe.com/products/software/oe/linux/mainstream/suppo\r\n> rt/lcc/pdf/PostgreSQL16Beta1_New_Features_en_20230528_1.pdf\r\n> >\r\n> > Best Regards,\r\n> > Hayato Kuroda\r\n> > FUJITSU LIMITED\r\n> \r\n> looks fine. Do you need to add to commitfest?\r\n\r\nThank you for your confirmation. ! I registered to following:\r\n\r\nhttps://commitfest.postgresql.org/44/4437/\r\n\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n \r\n",
"msg_date": "Tue, 4 Jul 2023 01:30:56 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PGdocs] fix description for handling pf non-ASCII characters"
},
{
"msg_contents": "Hello Hayato and Jian,\n\nOn Tue, 4 Jul 2023 01:30:56 +0000\n\"Hayato Kuroda (Fujitsu)\" <[email protected]> wrote:\n\n> Dear Jian,\n\n> > looks fine. Do you need to add to commitfest? \n> \n> Thank you for your confirmation. ! I registered to following:\n> \n> https://commitfest.postgresql.org/44/4437/\n\nThe way the Postgres commitfest process works is that\nsomeone has to update the page to mark \"reviewed\" and the\nreviewer has to use the commitfest website to pass\nthe patches to a committer.\n\nI see a few problems with the English and style of the patches\nand am commenting below and have signed up as a reviewer. At\ncommitfest.postgresql.org I have marked the thread\nas needing author attention. Hayato, you will need\nto mark the thread as needing review when you have\nreplied to this message.\n\nJian, you might want to sign on as a reviewer as well.\nIt can be nice to have that record of your participation.\n\nNow, to reviewing the patch:\n\nFirst, it is now best practice in the PG docs to\nput a line break at the end of each sentence.\nAt least for the sentences on the lines you change.\n(No need to update the whole document!) Please\ndo this in the next version of your patch. I don't\nknow if this is a requirement for acceptance by\na committer, but it won't hurt.\n\ndiff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml\nindex e700782d3c..a4ce99ba4d 100644\n--- a/doc/src/sgml/config.sgml\n+++ b/doc/src/sgml/config.sgml\n@@ -7040,9 +7040,8 @@ local0.* /var/log/postgresql\n The name will be displayed in the\n<structname>pg_stat_activity</structname> view and included in CSV log\nentries. It can also be included in regular log entries via the <xref\nlinkend=\"guc-log-line-prefix\"/> parameter.\n- Only printable ASCII characters may be used in the\n- <varname>application_name</varname> value. Other characters\nwill be\n- replaced with question marks (<literal>?</literal>).\n+ Non-ASCII characters used in the\n<varname>application_name</varname>\n+ will be replaced with hexadecimal strings.\n </para>\n </listitem>\n </varlistentry>\n\nDon't use the future tense to describe the system's behavior. Instead\nof \"will be\" write \"are\". (Yes, this change would be an improvement\non the original text. We should fix it while we're working on it\nand our attention is focused.)\n\nIt is more accurate, if I understand the issue, to say that characters\nare replaced with hexadecimal _representations_ of the input bytes.\nFinally, it would be good to know what representation we're talking\nabout. Perhaps just give the \\xhh example and say: the Postgres\nC-style escaped hexadecimal byte value. And hyperlink to\nhttps://www.postgresql.org/docs/16/sql-syntax-lexical.html#SQL-SYNTAX-STRINGS-ESCAPE\n\n(The docbook would be, depending on text you want to link:\n\n<link linkend=\"sql-syntax-strings-escape\">C-style escaped hexadecimal\nbyte value</link>.\n\nI think. You link to id=\"someidvalue\" attribute values.)\n\n\n@@ -8037,10 +8036,9 @@ COPY postgres_log FROM\n'/full/path/to/logfile.csv' WITH csv; <para>\n The name can be any string of less\n than <symbol>NAMEDATALEN</symbol> characters (64 characters in\na standard\n- build). Only printable ASCII characters may be used in the\n- <varname>cluster_name</varname> value. Other characters will be\n- replaced with question marks (<literal>?</literal>). No name\nis shown\n- if this parameter is set to the empty string\n<literal>''</literal> (which is\n+ build). 
Non-ASCII characters used in the\n<varname>cluster_name</varname>\n+ will be replaced with hexadecimal strings. No name is shown if\nthis\n+ parameter is set to the empty string <literal>''</literal>\n(which is the default). This parameter can only be set at server start.\n </para>\n </listitem>\n\nSame review as for the first patch hunk.\n\ndiff --git a/doc/src/sgml/postgres-fdw.sgml\nb/doc/src/sgml/postgres-fdw.sgml index 5062d712e7..98785e87ea 100644\n--- a/doc/src/sgml/postgres-fdw.sgml\n+++ b/doc/src/sgml/postgres-fdw.sgml\n@@ -1067,9 +1067,8 @@ postgres=# SELECT postgres_fdw_disconnect_all();\n of any length and contain even non-ASCII characters. However\nwhen it's passed to and used as <varname>application_name</varname>\n in a foreign server, note that it will be truncated to less than\n- <symbol>NAMEDATALEN</symbol> characters and anything other than\n- printable ASCII characters will be replaced with question\n- marks (<literal>?</literal>).\n+ <symbol>NAMEDATALEN</symbol> characters and non-ASCII characters\nwill be\n+ replaced with hexadecimal strings.\n See <xref linkend=\"guc-application-name\"/> for details.\n </para>\n \nSame review as for the first patch hunk.\n\nSince the both of you have looked and confirmed that the\nactual behavior matches the new documentation I have not\ndone this.\n\nBut, have either of you checked that we're really talking about\nreplacing everything outside the 7-bit ASCII encodings? \nMy reading of the commit referenced in the first email of this\nthread says that it's everything outside of the _printable_\nASCII encodings, ASCII values outside of the range 32 to 127,\ninclusive. \n\nPlease check. The docs should probably say \"printable ASCII\",\nor \"non-printable ASCII\", depending. I think the meaning\nof \"printable ASCII\" is widely enough known to be 32-127.\nSo \"printable\" is good enough, right?\n\nRegards,\n\nKarl <[email protected]>\nFree Software: \"You don't pay back, you pay forward.\"\n -- Robert A. Heinlein\n\n\n",
"msg_date": "Tue, 26 Sep 2023 01:03:28 -0500",
"msg_from": "\"Karl O. Pinc\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PGdocs] fix description for handling pf non-ASCII characters"
},
{
"msg_contents": "Dear Karl,\n\nThank you for reviewing! PSA new version.\n\n> I see a few problems with the English and style of the patches\n> and am commenting below and have signed up as a reviewer.\n\nYour effort is quite helpful for me.\n\n> At\n> commitfest.postgresql.org I have marked the thread\n> as needing author attention. Hayato, you will need\n> to mark the thread as needing review when you have\n> replied to this message.\n\nSure. I will change the status after posting the patch.\n\nBefore replying your comments, I thought I should show the difference between\nversions. Regarding old versions (here PG15 was used), non-ASCIIs (like Japanese) are\nreplaced with '?'.\n\n```\npsql (15.4)\nType \"help\" for help.\n\npostgres=# SET application_name TO 'あああ';\nSET\npostgres=# SHOW application_name ;\n application_name \n------------------\n ?????????\n(1 row)\n```\n\nAs for the HEAD, as my patch said, non-ASCIIs are replaced\nwith hexadecimal representations. (Were my terminologies correct?).\n\n```\npsql (17devel)\nType \"help\" for help.\n\npostgres=# SET application_name TO 'あああ';\nSET\npostgres=# SHOW application_name ;\n application_name \n--------------------------------------\n \\xe3\\x81\\x82\\xe3\\x81\\x82\\xe3\\x81\\x82\n(1 row)\n```\n\nNote that non-printable ASCIIs are also replaced with the same rule.\n\n```\npsql (15.4)\nType \"help\" for help.\n\npostgres=# SET application_name TO E'\\x03';\nSET\npostgres=# SHOW application_name ;\n application_name \n------------------\n ?\n(1 row)\n\npsql (17devel)\nType \"help\" for help.\n\npostgres=# SET application_name TO E'\\x03';\nSET\npostgres=# SHOW application_name ;\n application_name \n------------------\n \\x03\n(1 row)\n```\n\n> Now, to reviewing the patch:\n> \n> First, it is now best practice in the PG docs to\n> put a line break at the end of each sentence.\n> At least for the sentences on the lines you change.\n> (No need to update the whole document!) Please\n> do this in the next version of your patch. I don't\n> know if this is a requirement for acceptance by\n> a committer, but it won't hurt.\n\nI didn't know that. Could you please tell me if you have a source? Anyway,\nI put a line break for each sentences for now.\n\n> diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml\n> index e700782d3c..a4ce99ba4d 100644\n> --- a/doc/src/sgml/config.sgml\n> +++ b/doc/src/sgml/config.sgml\n> @@ -7040,9 +7040,8 @@ local0.* /var/log/postgresql\n> The name will be displayed in the\n> <structname>pg_stat_activity</structname> view and included in CSV log\n> entries. It can also be included in regular log entries via the <xref\n> linkend=\"guc-log-line-prefix\"/> parameter.\n> - Only printable ASCII characters may be used in the\n> - <varname>application_name</varname> value. Other characters\n> will be\n> - replaced with question marks (<literal>?</literal>).\n> + Non-ASCII characters used in the\n> <varname>application_name</varname>\n> + will be replaced with hexadecimal strings.\n> </para>\n> </listitem>\n> </varlistentry>\n> \n> Don't use the future tense to describe the system's behavior. Instead\n> of \"will be\" write \"are\". (Yes, this change would be an improvement\n> on the original text. We should fix it while we're working on it\n> and our attention is focused.)\n> \n> It is more accurate, if I understand the issue, to say that characters\n> are replaced with hexadecimal _representations_ of the input bytes.\n> Finally, it would be good to know what representation we're talking\n> about. 
Perhaps just give the \\xhh example and say: the Postgres\n> C-style escaped hexadecimal byte value. And hyperlink to\n> https://www.postgresql.org/docs/16/sql-syntax-lexical.html#SQL-SYNTAX-ST\n> RINGS-ESCAPE\n> \n> (The docbook would be, depending on text you want to link:\n> \n> <link linkend=\"sql-syntax-strings-escape\">C-style escaped hexadecimal\n> byte value</link>.\n> \n> I think. You link to id=\"someidvalue\" attribute values.)\n\nIIUC the word \" Postgres\" cannot be used in the doc.\nBased on your all comments, I changed as below. How do you think?\n\n```\n Characters that are not printable ASCII, like <literal>\\x03</literal>,\n are replaced with the <productname>PostgreSQL</productname>\n <link linkend=\"sql-syntax-strings-escape\">C-style escaped hexadecimal byte value</link>.\n```\n\n\n> @@ -8037,10 +8036,9 @@ COPY postgres_log FROM\n> '/full/path/to/logfile.csv' WITH csv; <para>\n> The name can be any string of less\n> than <symbol>NAMEDATALEN</symbol> characters (64 characters\n> in\n> a standard\n> - build). Only printable ASCII characters may be used in the\n> - <varname>cluster_name</varname> value. Other characters will be\n> - replaced with question marks (<literal>?</literal>). No name\n> is shown\n> - if this parameter is set to the empty string\n> <literal>''</literal> (which is\n> + build). Non-ASCII characters used in the\n> <varname>cluster_name</varname>\n> + will be replaced with hexadecimal strings. No name is shown if\n> this\n> + parameter is set to the empty string <literal>''</literal>\n> (which is the default). This parameter can only be set at server start.\n> </para>\n> </listitem>\n> \n> Same review as for the first patch hunk.\n\nFixed like above. You can refer the patch.\n\n> diff --git a/doc/src/sgml/postgres-fdw.sgml\n> b/doc/src/sgml/postgres-fdw.sgml index 5062d712e7..98785e87ea 100644\n> --- a/doc/src/sgml/postgres-fdw.sgml\n> +++ b/doc/src/sgml/postgres-fdw.sgml\n> @@ -1067,9 +1067,8 @@ postgres=# SELECT postgres_fdw_disconnect_all();\n> of any length and contain even non-ASCII characters. However\n> when it's passed to and used as <varname>application_name</varname>\n> in a foreign server, note that it will be truncated to less than\n> - <symbol>NAMEDATALEN</symbol> characters and anything other than\n> - printable ASCII characters will be replaced with question\n> - marks (<literal>?</literal>).\n> + <symbol>NAMEDATALEN</symbol> characters and non-ASCII\n> characters\n> will be\n> + replaced with hexadecimal strings.\n> See <xref linkend=\"guc-application-name\"/> for details.\n> </para>\n> \n> Same review as for the first patch hunk.\n\nFixed like above.\n\n> Since the both of you have looked and confirmed that the\n> actual behavior matches the new documentation I have not\n> done this.\n\nI showed the result again, please see.\n\n> But, have either of you checked that we're really talking about\n> replacing everything outside the 7-bit ASCII encodings?\n> My reading of the commit referenced in the first email of this\n> thread says that it's everything outside of the _printable_\n> ASCII encodings, ASCII values outside of the range 32 to 127,\n> inclusive.\n> \n> Please check. The docs should probably say \"printable ASCII\",\n> or \"non-printable ASCII\", depending. I think the meaning\n> of \"printable ASCII\" is widely enough known to be 32-127.\n> So \"printable\" is good enough, right?\n\nFor me, \"non-printable ASCII\" sounds like control characters. So that I used the\nsentence \"Characters that are not printable ASCII ... 
are replaced with...\".\nPlease tell me if you have better explanation?\n\nBest Regards,\nHayato Kuroda\nFUJITSU LIMITED",
"msg_date": "Tue, 26 Sep 2023 13:40:26 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PGdocs] fix description for handling pf non-ASCII characters"
},
{
"msg_contents": "On Tue, 26 Sep 2023 13:40:26 +0000\n\"Hayato Kuroda (Fujitsu)\" <[email protected]> wrote:\n\n> Your effort is quite helpful for me.\n\nYou're welcome.\n\n> Before replying your comments, I thought I should show the difference\n> between versions. Regarding old versions (here PG15 was used),\n> non-ASCIIs (like Japanese) are replaced with '?'.\n> \n> ```\n> psql (15.4)\n> Type \"help\" for help.\n> \n> postgres=# SET application_name TO 'あああ';\n> SET\n> postgres=# SHOW application_name ;\n> application_name \n> ------------------\n> ?????????\n> (1 row)\n> ```\n> \n> As for the HEAD, as my patch said, non-ASCIIs are replaced\n> with hexadecimal representations. (Were my terminologies correct?).\n> \n> ```\n> psql (17devel)\n> Type \"help\" for help.\n> \n> postgres=# SET application_name TO 'あああ';\n> SET\n> postgres=# SHOW application_name ;\n> application_name \n> --------------------------------------\n> \\xe3\\x81\\x82\\xe3\\x81\\x82\\xe3\\x81\\x82\n> (1 row)\n> ```\n\nI think you're terminology is correct, but see my\nsuggestion at bottom.\n\nNever hurts to have output to look at. I noticed here\nand when reading the patch that changed the output --\nthe patch is operating on bytes, not characters per-se.\n\n> > First, it is now best practice in the PG docs to\n> > put a line break at the end of each sentence.\n> > At least for the sentences on the lines you change.\n> > (No need to update the whole document!) Please\n> > do this in the next version of your patch. I don't\n> > know if this is a requirement for acceptance by\n> > a committer, but it won't hurt. \n> \n> I didn't know that. Could you please tell me if you have a source?\n\nI thought I could find an email but the search is taking\nforever. If I find something I'll let you know.\nI could even be mis-remembering, but it's a nice\npractice regardless.\n\nThere are a number of undocumented conventions.\nAnother is the use of gender neutral language.\n\nThe place to discuss doc conventions/styles would\nbe the pgsql-docs list. (If you felt like\nasking there.)\n\nYou could try submitting another patch to add various\ndoc conventions to the official docs at\nhttps://www.postgresql.org/docs/current/docguide-style.html\n:-)\n\n> Anyway, I put a line break for each sentences for now.\n\nThanks.\n\nA related thing that's nice to have is to limit the line\nlength of the documentation source to 80 characters or less.\n79 is probably best. Since the source text around your patch\nconforms to this convention you should also.\n\n> IIUC the word \" Postgres\" cannot be used in the doc.\n\nI think you're right.\n\n> Based on your all comments, I changed as below. How do you think?\n> \n> ```\n> Characters that are not printable ASCII, like\n> <literal>\\x03</literal>, are replaced with the\n> <productname>PostgreSQL</productname> <link\n> linkend=\"sql-syntax-strings-escape\">C-style escaped hexadecimal byte\n> value</link>. ```\n\n> \n> > Since the both of you have looked and confirmed that the\n> > actual behavior matches the new documentation I have not\n> > done this. 
\n> \n> I showed the result again, please see.\n\nShould the committer be interested, your patch applies cleanly\nand the docs build as expected.\n\nAlso, based on the comments in the\npatch which changed the system's behavior, I believe that\nyour patch updates all the relevant places in the documentation.\n\n> > But, have either of you checked that we're really talking about\n> > replacing everything outside the 7-bit ASCII encodings?\n> > My reading of the commit referenced in the first email of this\n> > thread says that it's everything outside of the _printable_\n> > ASCII encodings, ASCII values outside of the range 32 to 127,\n> > inclusive.\n> > \n> > Please check. The docs should probably say \"printable ASCII\",\n> > or \"non-printable ASCII\", depending. I think the meaning\n> > of \"printable ASCII\" is widely enough known to be 32-127.\n> > So \"printable\" is good enough, right? \n> \n> For me, \"non-printable ASCII\" sounds like control characters. So that\n> I used the sentence \"Characters that are not printable ASCII ... are\n> replaced with...\". Please tell me if you have better explanation?\n\nYour explanation sounds good to me.\n\nI now think that you should consider another change to your wording.\nInstead of starting with \"Characters that are not printable ASCII ...\"\nconsider writing \"The bytes of the string which are not printable ASCII\n...\". Notice above that characters (e.g. あ) generate output for\neach non-ASCII byte (e.g. \\xe3\\x81\\x82). So my thought is that\nthe docs should be talking about bytes.\n\nFor the last hunk you'd change around \"anything\". Write:\n\"... it will be truncated to less than NAMEDATALEN characters and\nthe bytes of the string which are not printable ASCII characters ...\".\n\nNotice that I have also changed \"that\" to \"which\" just above. \nI _think_ this is better English. See sense 3 of:\nhttps://en.wiktionary.org/wiki/which\nSee also the first paragraph of:\nhttps://en.wikipedia.org/wiki/Relative_pronoun\n\nIf the comments above move you, send another patch. Seems to me\nwe're close to sending this on to a committer.\n\nRegards,\n\nKarl <[email protected]>\nFree Software: \"You don't pay back, you pay forward.\"\n -- Robert A. Heinlein\n\n\n",
"msg_date": "Tue, 26 Sep 2023 12:45:53 -0500",
"msg_from": "\"Karl O. Pinc\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PGdocs] fix description for handling pf non-ASCII characters"
},
{
"msg_contents": "\"Karl O. Pinc\" <[email protected]> writes:\n> For the last hunk you'd change around \"anything\". Write:\n> \"... it will be truncated to less than NAMEDATALEN characters and\n> the bytes of the string which are not printable ASCII characters ...\".\n\n> Notice that I have also changed \"that\" to \"which\" just above. \n> I _think_ this is better English.\n\nNo, I'm pretty sure you're mistaken. It's been a long time since\nhigh school English, but the way I think this works is that \"that\"\nintroduces a restrictive clause, which narrows the scope of what\nyou are saying. That is, you say \"that\" when you want to talk\nabout only the bytes of the string that aren't ASCII. But \"which\"\nintroduces a non-restrictive clause that adds information or\ncommentary. If you say \"bytes of the string which are not ASCII\",\nyou are effectively making a side assertion that no byte of the\nstring is ASCII. Which is not the meaning you want here.\n\nA smell test that works for native speakers (not sure how helpful\nit is for others) is: if the sentence would read well with commas\nor parens added before and after the clause, then it's probably\nnon-restrictive and should use \"which\". If it looks wrong that way\nthen it's a restrictive clause and should use \"that\".\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 26 Sep 2023 14:01:23 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PGdocs] fix description for handling pf non-ASCII characters"
},
{
"msg_contents": "Sep 26, 2023 1:10:55 PM Tom Lane <[email protected]>:\n\n> \"Karl O. Pinc\" <[email protected]> writes:\n>> For the last hunk you'd change around \"anything\". Write:\n>> \"... it will be truncated to less than NAMEDATALEN characters and\n>> the bytes of the string which are not printable ASCII characters ...\".\n>\n>> Notice that I have also changed \"that\" to \"which\" just above. \n>> I _think_ this is better English.\n>\n> No, I'm pretty sure you're mistaken. It's been a long time since\n> high school English, but the way I think this works is that \"that\"\n> introduces a restrictive clause, which narrows the scope of what\n> you are saying. That is, you say \"that\" when you want to talk\n> about only the bytes of the string that aren't ASCII. But \"which\"\n> introduces a non-restrictive clause that adds information or\n> commentary. If you say \"bytes of the string which are not ASCII\",\n> you are effectively making a side assertion that no byte of the\n> string is ASCII. Which is not the meaning you want here.\n\nMakes sense to me. \"That\" it is.\n\nThanks for the help. I never would have figured that out.\n\n\n\n",
"msg_date": "Tue, 26 Sep 2023 16:50:20 -0500 (CDT)",
"msg_from": "\"Karl O. Pinc\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PGdocs] fix description for handling pf non-ASCII characters"
},
{
"msg_contents": "Hi Kuroda-san.\n\nHere are my review comments for your v3 patch.\n\nTBH, I felt the new text descriptions deviated a bit too much from the\noriginals. IMO only quite a small tweak was needed, so my suggested\ntext in the comments below reflects that.\n\n======\nCommit message.\n\n1.\nmissing description\n\n======\nsrc/sgml/config.sgml\n\n2. application_name:\n\n- Only printable ASCII characters may be used in the\n- <varname>application_name</varname> value. Other characters will be\n- replaced with question marks (<literal>?</literal>).\n+ Characters that are not printable ASCII, like <literal>\\x03</literal>,\n+ are replaced with the <productname>PostgreSQL</productname>\n+ <link linkend=\"sql-syntax-strings-escape\">C-style escaped\nhexadecimal byte value</link>.\n\nBEFORE\nOther characters will be replaced with question marks (<literal>?</literal>).\n\nSUGGESTION\nOther characters will be replaced with <link\nlinkend=\"sql-syntax-strings-escape\">C-style escaped hexadecimal byte\nvalues</link>.\n\n~~~\n\n3. cluster_name:\n\n- build). Only printable ASCII characters may be used in the\n- <varname>cluster_name</varname> value. Other characters will be\n- replaced with question marks (<literal>?</literal>). No name is shown\n- if this parameter is set to the empty string\n<literal>''</literal> (which is\n- the default). This parameter can only be set at server start.\n+ build).\n+ Characters that are not printable ASCII, like <literal>\\x03</literal>,\n+ are replaced with the <productname>PostgreSQL</productname>\n+ <link linkend=\"sql-syntax-strings-escape\">C-style escaped\nhexadecimal byte value</link>.\n+ No name is shown if this parameter is set to the empty string\n+ <literal>''</literal> (which is the default). This parameter can only\n+ be set at server start.\n\n<same as previous review comment #2>\n\n======\nsrc/sgml/postgres-fdw.sgml\n\n4.\n <para>\n <varname>postgres_fdw.application_name</varname> can be any string\n- of any length and contain even non-ASCII characters. However when\n- it's passed to and used as <varname>application_name</varname>\n+ of any length and contain even characters that are not printable ASCII.\n+ However when it's passed to and used as\n<varname>application_name</varname>\n in a foreign server, note that it will be truncated to less than\n <symbol>NAMEDATALEN</symbol> characters and anything other than\n- printable ASCII characters will be replaced with question\n- marks (<literal>?</literal>).\n+ printable ASCII characters are replaced with the\n<productname>PostgreSQL</productname>\n+ <link linkend=\"sql-syntax-strings-escape\">C-style escaped\nhexadecimal byte value</link>.\n See <xref linkend=\"guc-application-name\"/> for details.\n </para>\n\n~\n\nAFAICT the first change wasn't necessary.\n\n~\n\nAs for the 2nd change:\n\nBEFORE\n... and anything other than printable ASCII characters will be\nreplaced with question marks (<literal>?</literal>).\n\nSUGGESTION\n... and anything other than printable ASCII characters will be\nreplaced with <link linkend=\"sql-syntax-strings-escape\">C-style\nescaped hexadecimal byte values</link>.\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Wed, 27 Sep 2023 16:48:30 +1000",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PGdocs] fix description for handling pf non-ASCII characters"
},
{
"msg_contents": "Dear Karl,\r\n\r\nThank you for reviewing!\r\n\r\n> A related thing that's nice to have is to limit the line\r\n> length of the documentation source to 80 characters or less.\r\n> 79 is probably best. Since the source text around your patch\r\n> conforms to this convention you should also.\r\n\r\nIIUC it is not hard limit, but I followed this.\r\n\r\n> Should the committer be interested, your patch applies cleanly\r\n> and the docs build as expected.\r\n\r\nYeah, but cfbot accepted previous version. Did you have anything in your mind?\r\n\r\n> Also, based on the comments in the\r\n> patch which changed the system's behavior, I believe that\r\n> your patch updates all the relevant places in the documentation.\r\n\r\nThanks. Actually, I think it should be backpatched to PG16 because the commit was\r\ndone last year. I will make the patch for it after deciding the explanation.\r\n\r\n> \r\n> I now think that you should consider another change to your wording.\r\n> Instead of starting with \"Characters that are not printable ASCII ...\"\r\n> consider writing \"The bytes of the string which are not printable ASCII\r\n> ...\". Notice above that characters (e.g. あ) generate output for\r\n> each non-ASCII byte (e.g. \\xe3\\x81\\x82). So my thought is that\r\n> the docs should be talking about bytes.\r\n> \r\n> For the last hunk you'd change around \"anything\". Write:\r\n> \"... it will be truncated to less than NAMEDATALEN characters and\r\n> the bytes of the string which are not printable ASCII characters ...\".\r\n> \r\n\r\nHmm, what you said looked right. But as Peter pointed out [1], the fix seems too\r\nmuch. So I attached three version of patches. How do you think?\r\nFor me, type C is best.\r\n\r\nA. A patch which completely follows your comments. The name is \"v3-0001-...patch\".\r\n Cfbot tests it.\r\nB. A patch which completely follows Peter's comments [1]. The name is \"Peter_v3-....txt\".\r\nC. A patch which follows both comments. Based on b, but some comments\r\n (Don't use the future tense, \"Other characters\"->\"The bytes of other characters\"...)\r\n were picked. The name is \"Both_v3-....txt\".\r\n\r\n[1]: https://www.postgresql.org/message-id/CAHut%2BPvEbKC8ABA_daX-XPNOTFzuAmHGhjPj%3DtPZYQskRHECOg%40mail.gmail.com\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED",
"msg_date": "Wed, 27 Sep 2023 12:58:54 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PGdocs] fix description for handling pf non-ASCII characters"
},
{
"msg_contents": "Dear Peter,\r\n\r\nThank you for reviewing!\r\n\r\n> \r\n> TBH, I felt the new text descriptions deviated a bit too much from the\r\n> originals. IMO only quite a small tweak was needed, so my suggested\r\n> text in the comments below reflects that.\r\n\r\nGood point, my patch may be too much.\r\n\r\n> Commit message.\r\n> \r\n> 1.\r\n> missing description\r\n\r\nAdded.\r\nIf we should use only printable ascii as a commit message, I can use '\\x03'\r\ninstead of 'あああ'.\r\n\r\n> src/sgml/config.sgml\r\n> \r\n> 2. application_name:\r\n> \r\n> - Only printable ASCII characters may be used in the\r\n> - <varname>application_name</varname> value. Other characters will\r\n> be\r\n> - replaced with question marks (<literal>?</literal>).\r\n> + Characters that are not printable ASCII, like <literal>\\x03</literal>,\r\n> + are replaced with the <productname>PostgreSQL</productname>\r\n> + <link linkend=\"sql-syntax-strings-escape\">C-style escaped\r\n> hexadecimal byte value</link>.\r\n> \r\n> BEFORE\r\n> Other characters will be replaced with question marks (<literal>?</literal>).\r\n> \r\n> SUGGESTION\r\n> Other characters will be replaced with <link\r\n> linkend=\"sql-syntax-strings-escape\">C-style escaped hexadecimal byte\r\n> values</link>.\r\n> \r\n> ~~~\r\n> \r\n> 3. cluster_name:\r\n> \r\n> - build). Only printable ASCII characters may be used in the\r\n> - <varname>cluster_name</varname> value. Other characters will be\r\n> - replaced with question marks (<literal>?</literal>). No name is\r\n> shown\r\n> - if this parameter is set to the empty string\r\n> <literal>''</literal> (which is\r\n> - the default). This parameter can only be set at server start.\r\n> + build).\r\n> + Characters that are not printable ASCII, like <literal>\\x03</literal>,\r\n> + are replaced with the <productname>PostgreSQL</productname>\r\n> + <link linkend=\"sql-syntax-strings-escape\">C-style escaped\r\n> hexadecimal byte value</link>.\r\n> + No name is shown if this parameter is set to the empty string\r\n> + <literal>''</literal> (which is the default). This parameter can only\r\n> + be set at server start.\r\n> \r\n> <same as previous review comment #2>\r\n> \r\n> ======\r\n> src/sgml/postgres-fdw.sgml\r\n> \r\n> 4.\r\n> <para>\r\n> <varname>postgres_fdw.application_name</varname> can be any\r\n> string\r\n> - of any length and contain even non-ASCII characters. However when\r\n> - it's passed to and used as <varname>application_name</varname>\r\n> + of any length and contain even characters that are not printable ASCII.\r\n> + However when it's passed to and used as\r\n> <varname>application_name</varname>\r\n> in a foreign server, note that it will be truncated to less than\r\n> <symbol>NAMEDATALEN</symbol> characters and anything other than\r\n> - printable ASCII characters will be replaced with question\r\n> - marks (<literal>?</literal>).\r\n> + printable ASCII characters are replaced with the\r\n> <productname>PostgreSQL</productname>\r\n> + <link linkend=\"sql-syntax-strings-escape\">C-style escaped\r\n> hexadecimal byte value</link>.\r\n> See <xref linkend=\"guc-application-name\"/> for details.\r\n> </para>\r\n> \r\n> ~\r\n> \r\n> AFAICT the first change wasn't necessary.\r\n> \r\n> ~\r\n> \r\n> As for the 2nd change:\r\n> \r\n> BEFORE\r\n> ... and anything other than printable ASCII characters will be\r\n> replaced with question marks (<literal>?</literal>).\r\n> \r\n> SUGGESTION\r\n> ... 
and anything other than printable ASCII characters will be\r\n> replaced with <link linkend=\"sql-syntax-strings-escape\">C-style\r\n> escaped hexadecimal byte values</link>.\r\n\r\nThey seem good, but they conflict with Karl's comments.\r\nI made three patches based on comments [1], could you check?\r\n\r\n[1]: https://www.postgresql.org/message-id/TYAPR01MB58663EB061888B2715A39217F5C2A%40TYAPR01MB5866.jpnprd01.prod.outlook.com\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n",
"msg_date": "Wed, 27 Sep 2023 12:59:47 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PGdocs] fix description for handling pf non-ASCII characters"
},
{
"msg_contents": "Dear Tom,\n\n> No, I'm pretty sure you're mistaken. It's been a long time since\n> high school English, but the way I think this works is that \"that\"\n> introduces a restrictive clause, which narrows the scope of what\n> you are saying. That is, you say \"that\" when you want to talk\n> about only the bytes of the string that aren't ASCII. But \"which\"\n> introduces a non-restrictive clause that adds information or\n> commentary. If you say \"bytes of the string which are not ASCII\",\n> you are effectively making a side assertion that no byte of the\n> string is ASCII. Which is not the meaning you want here.\n> \n> A smell test that works for native speakers (not sure how helpful\n> it is for others) is: if the sentence would read well with commas\n> or parens added before and after the clause, then it's probably\n> non-restrictive and should use \"which\". If it looks wrong that way\n> then it's a restrictive clause and should use \"that\".\n\nThanks for giving your opinion. The suggestion is quite helpful for me,\nnon-native speaker. If you check my patch [1] I'm very happy.\n\n[1]: https://www.postgresql.org/message-id/TYAPR01MB58663EB061888B2715A39217F5C2A%40TYAPR01MB5866.jpnprd01.prod.outlook.com\n\nBest Regards,\nHayato Kuroda\nFUJITSU LIMITED\n\n\n\n",
"msg_date": "Wed, 27 Sep 2023 13:00:41 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PGdocs] fix description for handling pf non-ASCII characters"
},
{
"msg_contents": "On Wed, 27 Sep 2023 12:58:54 +0000\n\"Hayato Kuroda (Fujitsu)\" <[email protected]> wrote:\n\n> > Should the committer be interested, your patch applies cleanly\n> > and the docs build as expected. \n> \n> Yeah, but cfbot accepted previous version. Did you have anything in\n> your mind?\n\nNo. I'm letting the committer know everything I've checked\nso that they can decide what they want to check.\n\n> Hmm, what you said looked right. But as Peter pointed out [1], the\n> fix seems too much. So I attached three version of patches. How do\n> you think? For me, type C is best.\n> \n> A. A patch which completely follows your comments. The name is\n> \"v3-0001-...patch\". Cfbot tests it.\n> B. A patch which completely follows Peter's comments [1]. The name is\n> \"Peter_v3-....txt\". \n> C. A patch which follows both comments. Based on\n> b, but some comments (Don't use the future tense, \"Other\n> characters\"->\"The bytes of other characters\"...) were picked. The\n> name is \"Both_v3-....txt\".\n\nI also like C. Fewer words is better. So long\nas nothing is left unsaid fewer words make for clarity.\n\nHowever, in the last hunk, \"of other than\" does not read well.\nInstead of writing\n\"and the bytes of other than printable ASCII characters\"\nyou want \"and the bytes that are not printable ASCII characters\".\nThat would be my suggestion. \n\nRegards,\n\nKarl <[email protected]>\nFree Software: \"You don't pay back, you pay forward.\"\n -- Robert A. Heinlein\n\n\n",
"msg_date": "Wed, 27 Sep 2023 08:59:24 -0500",
"msg_from": "\"Karl O. Pinc\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PGdocs] fix description for handling pf non-ASCII characters"
},
{
"msg_contents": "On Wed, Sep 27, 2023 at 11:59 PM Karl O. Pinc <[email protected]> wrote:\n>\n> On Wed, 27 Sep 2023 12:58:54 +0000\n> \"Hayato Kuroda (Fujitsu)\" <[email protected]> wrote:\n>\n> > > Should the committer be interested, your patch applies cleanly\n> > > and the docs build as expected.\n> >\n> > Yeah, but cfbot accepted previous version. Did you have anything in\n> > your mind?\n>\n> No. I'm letting the committer know everything I've checked\n> so that they can decide what they want to check.\n>\n> > Hmm, what you said looked right. But as Peter pointed out [1], the\n> > fix seems too much. So I attached three version of patches. How do\n> > you think? For me, type C is best.\n> >\n> > A. A patch which completely follows your comments. The name is\n> > \"v3-0001-...patch\". Cfbot tests it.\n> > B. A patch which completely follows Peter's comments [1]. The name is\n> > \"Peter_v3-....txt\".\n> > C. A patch which follows both comments. Based on\n> > b, but some comments (Don't use the future tense, \"Other\n> > characters\"->\"The bytes of other characters\"...) were picked. The\n> > name is \"Both_v3-....txt\".\n>\n> I also like C. Fewer words is better. So long\n> as nothing is left unsaid fewer words make for clarity.\n>\n> However, in the last hunk, \"of other than\" does not read well.\n> Instead of writing\n> \"and the bytes of other than printable ASCII characters\"\n> you want \"and the bytes that are not printable ASCII characters\".\n> That would be my suggestion.\n>\n\nI also prefer Option C, but...\n\n~~~\n\n+ <varname>application_name</varname> value.\n+ The bytes of other characters are replaced with\n+ <link linkend=\"sql-syntax-strings-escape\">C-style escaped hexadecimal\n+ byte values</link>.\n\nV\n\n+ <varname>cluster_name</varname> value.\n+ The bytes of other characters are replaced with\n+ <link linkend=\"sql-syntax-strings-escape\">C-style escaped hexadecimal\n+ byte values</link>.\n\nV\n\n+ <symbol>NAMEDATALEN</symbol> characters and the bytes of other than\n+ printable ASCII characters are replaced with <link\n+ linkend=\"sql-syntax-strings-escape\">C-style escaped hexadecimal byte\n+ values</link>.\n\n\nIIUC all of these 3 places can have exactly the same wording change\n(e.g. like Karl's last suggestion [1]).\n\nSUGGESTION\nAny bytes that are not printable ASCII characters are replaced with\n<link linkend=\"sql-syntax-strings-escape\">C-style escaped hexadecimal\nbyte values</link>.\n\n======\n[1] https://www.postgresql.org/message-id/20230927085924.4198c3d2%40slate.karlpinc.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Thu, 28 Sep 2023 09:49:03 +1000",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PGdocs] fix description for handling pf non-ASCII characters"
},
{
"msg_contents": "On Thu, 28 Sep 2023 09:49:03 +1000\nPeter Smith <[email protected]> wrote:\n\n> On Wed, Sep 27, 2023 at 11:59 PM Karl O. Pinc <[email protected]>\n> wrote:\n> >\n> > On Wed, 27 Sep 2023 12:58:54 +0000\n> > \"Hayato Kuroda (Fujitsu)\" <[email protected]> wrote:\n> > \n> > > > Should the committer be interested, your patch applies cleanly\n> > > > and the docs build as expected. \n> > >\n> > > Yeah, but cfbot accepted previous version. Did you have anything\n> > > in your mind? \n> >\n> > No. I'm letting the committer know everything I've checked\n> > so that they can decide what they want to check.\n> > \n> > > Hmm, what you said looked right. But as Peter pointed out [1], the\n> > > fix seems too much. So I attached three version of patches. How do\n> > > you think? For me, type C is best.\n> > >\n> > > A. A patch which completely follows your comments. The name is\n> > > \"v3-0001-...patch\". Cfbot tests it.\n> > > B. A patch which completely follows Peter's comments [1]. The\n> > > name is \"Peter_v3-....txt\".\n> > > C. A patch which follows both comments. Based on\n> > > b, but some comments (Don't use the future tense, \"Other\n> > > characters\"->\"The bytes of other characters\"...) were picked. The\n> > > name is \"Both_v3-....txt\". \n> >\n> > I also like C. Fewer words is better. So long\n> > as nothing is left unsaid fewer words make for clarity.\n> >\n> > However, in the last hunk, \"of other than\" does not read well.\n> > Instead of writing\n> > \"and the bytes of other than printable ASCII characters\"\n> > you want \"and the bytes that are not printable ASCII characters\".\n> > That would be my suggestion.\n> > \n> \n> I also prefer Option C, but...\n> \n> ~~~\n> \n> + <varname>application_name</varname> value.\n> + The bytes of other characters are replaced with\n> + <link linkend=\"sql-syntax-strings-escape\">C-style escaped\n> hexadecimal\n> + byte values</link>.\n> \n> V\n> \n> + <varname>cluster_name</varname> value.\n> + The bytes of other characters are replaced with\n> + <link linkend=\"sql-syntax-strings-escape\">C-style escaped\n> hexadecimal\n> + byte values</link>.\n> \n> V\n> \n> + <symbol>NAMEDATALEN</symbol> characters and the bytes of other\n> than\n> + printable ASCII characters are replaced with <link\n> + linkend=\"sql-syntax-strings-escape\">C-style escaped\n> hexadecimal byte\n> + values</link>.\n> \n> \n> IIUC all of these 3 places can have exactly the same wording change\n> (e.g. like Karl's last suggestion [1]).\n> \n> SUGGESTION\n> Any bytes that are not printable ASCII characters are replaced with\n> <link linkend=\"sql-syntax-strings-escape\">C-style escaped hexadecimal\n> byte values</link>.\n\nI don't see the utility in having exactly the same phrase everywhere,\nespecially since the last hunk is modifying the end of a long\nsentence. (Apologies if I'm mis-reading what Peter wrote above.)\n\nI like short sentences. So I prefer \"The bytes of other characters\"\nrather than \"Any bytes that are not printable ASCII characters\"\nfor the first 2 hunks. In context I don't see the need to repeat\nthe whole \"printable ASCII characters\" part that appears in the\npreceding sentence of both hunks. 
\"Other\" is clear, IMHO.\n\nBut because I like short sentences I now think that it's a good\nidea to break the long sentence of the last hunk into two.\nAdd a period and use the Peter's SUGGESTION above as the\ntext for the second sentence.\n\nIs this desireable?\n\nRegards,\n\nKarl <[email protected]>\nFree Software: \"You don't pay back, you pay forward.\"\n -- Robert A. Heinlein\n\nP.S. Hayato, it is good practice to cc everybody who has\nreplied to a thread. At least I think that's what I see,\nit's not just people being lazy with reply-all. So I'm\nadding Tom Lane back to the thread. He can tell us otherwise\nif I'm wrong to add him back.\n\n\n",
"msg_date": "Wed, 27 Sep 2023 19:30:36 -0500",
"msg_from": "\"Karl O. Pinc\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PGdocs] fix description for handling pf non-ASCII characters"
},
{
"msg_contents": "On Thu, Sep 28, 2023 at 10:30 AM Karl O. Pinc <[email protected]> wrote:\n>\n> On Thu, 28 Sep 2023 09:49:03 +1000\n> Peter Smith <[email protected]> wrote:\n>\n> > On Wed, Sep 27, 2023 at 11:59 PM Karl O. Pinc <[email protected]>\n> > wrote:\n> > >\n> > > On Wed, 27 Sep 2023 12:58:54 +0000\n> > > \"Hayato Kuroda (Fujitsu)\" <[email protected]> wrote:\n> > >\n> > > > > Should the committer be interested, your patch applies cleanly\n> > > > > and the docs build as expected.\n> > > >\n> > > > Yeah, but cfbot accepted previous version. Did you have anything\n> > > > in your mind?\n> > >\n> > > No. I'm letting the committer know everything I've checked\n> > > so that they can decide what they want to check.\n> > >\n> > > > Hmm, what you said looked right. But as Peter pointed out [1], the\n> > > > fix seems too much. So I attached three version of patches. How do\n> > > > you think? For me, type C is best.\n> > > >\n> > > > A. A patch which completely follows your comments. The name is\n> > > > \"v3-0001-...patch\". Cfbot tests it.\n> > > > B. A patch which completely follows Peter's comments [1]. The\n> > > > name is \"Peter_v3-....txt\".\n> > > > C. A patch which follows both comments. Based on\n> > > > b, but some comments (Don't use the future tense, \"Other\n> > > > characters\"->\"The bytes of other characters\"...) were picked. The\n> > > > name is \"Both_v3-....txt\".\n> > >\n> > > I also like C. Fewer words is better. So long\n> > > as nothing is left unsaid fewer words make for clarity.\n> > >\n> > > However, in the last hunk, \"of other than\" does not read well.\n> > > Instead of writing\n> > > \"and the bytes of other than printable ASCII characters\"\n> > > you want \"and the bytes that are not printable ASCII characters\".\n> > > That would be my suggestion.\n> > >\n> >\n> > I also prefer Option C, but...\n> >\n> > ~~~\n> >\n> > + <varname>application_name</varname> value.\n> > + The bytes of other characters are replaced with\n> > + <link linkend=\"sql-syntax-strings-escape\">C-style escaped\n> > hexadecimal\n> > + byte values</link>.\n> >\n> > V\n> >\n> > + <varname>cluster_name</varname> value.\n> > + The bytes of other characters are replaced with\n> > + <link linkend=\"sql-syntax-strings-escape\">C-style escaped\n> > hexadecimal\n> > + byte values</link>.\n> >\n> > V\n> >\n> > + <symbol>NAMEDATALEN</symbol> characters and the bytes of other\n> > than\n> > + printable ASCII characters are replaced with <link\n> > + linkend=\"sql-syntax-strings-escape\">C-style escaped\n> > hexadecimal byte\n> > + values</link>.\n> >\n> >\n> > IIUC all of these 3 places can have exactly the same wording change\n> > (e.g. like Karl's last suggestion [1]).\n> >\n> > SUGGESTION\n> > Any bytes that are not printable ASCII characters are replaced with\n> > <link linkend=\"sql-syntax-strings-escape\">C-style escaped hexadecimal\n> > byte values</link>.\n>\n> I don't see the utility in having exactly the same phrase everywhere,\n> especially since the last hunk is modifying the end of a long\n> sentence. (Apologies if I'm mis-reading what Peter wrote above.)\n>\n> I like short sentences. So I prefer \"The bytes of other characters\"\n> rather than \"Any bytes that are not printable ASCII characters\"\n> for the first 2 hunks. In context I don't see the need to repeat\n> the whole \"printable ASCII characters\" part that appears in the\n> preceding sentence of both hunks. 
\"Other\" is clear, IMHO.\n>\n\nI had in mind something like a SHIFT-JIS encoding where a single\n\"character\" may include some trail bytes that happen to be in the\nASCII printable range. AFAIK because the new logic is processing\nbytes, not characters, I thought the end result could be a mix of\nescaped and unescaped bytes for the single SJIS character. In that\ncontext, I felt \"The bytes of other characters\" was not quite\naccurate.\n\nBut now looking at PostgreSQL-supported character sets [1] I saw SJIS\nis not supported anyhow. Unfortunately, I am not familiar enough with\nother encodings to know if there is still a chance of similar\nprintable ASCII trail bytes so I am fine with whatever wording is\nchosen.\n\n> But because I like short sentences I now think that it's a good\n> idea to break the long sentence of the last hunk into two.\n> Add a period and use the Peter's SUGGESTION above as the\n> text for the second sentence.\n>\n> Is this desireable?\n>\n\n+1.\n\n======\n[1] https://www.postgresql.org/docs/current/multibyte.html\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Thu, 28 Sep 2023 11:13:40 +1000",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PGdocs] fix description for handling pf non-ASCII characters"
},
{
"msg_contents": "Peter Smith <[email protected]> writes:\n> I had in mind something like a SHIFT-JIS encoding where a single\n> \"character\" may include some trail bytes that happen to be in the\n> ASCII printable range. AFAIK because the new logic is processing\n> bytes, not characters, I thought the end result could be a mix of\n> escaped and unescaped bytes for the single SJIS character.\n\nIt will not, because ...\n\n> But now looking at PostgreSQL-supported character sets [1] I saw SJIS\n> is not supported anyhow. Unfortunately, I am not familiar enough with\n> other encodings to know if there is still a chance of similar\n> printable ASCII trail bytes so I am fine with whatever wording is\n> chosen.\n\n... trailing bytes that could be mistaken for ASCII are precisely\nthe property that causes us to reject an encoding as not backend-safe.\nSo this code doesn't need to consider that hazard, and processing the\nstring byte-by-byte is perfectly OK.\n\nI'd be inclined to keep the text as simple as possible and not focus on\nthe distinction between bytes and characters.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 27 Sep 2023 21:19:31 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PGdocs] fix description for handling pf non-ASCII characters"
},
{
"msg_contents": "On Thu, Sep 28, 2023 at 11:19 AM Tom Lane <[email protected]> wrote:\n>\n> ... trailing bytes that could be mistaken for ASCII are precisely\n> the property that causes us to reject an encoding as not backend-safe.\n\nOh, that is good to know. Thanks for the information.\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Thu, 28 Sep 2023 11:41:26 +1000",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PGdocs] fix description for handling pf non-ASCII characters"
},
{
"msg_contents": "Dear Karl,\r\n\r\nThank you for giving comments! PSA new version.\r\nI attached two patches - one is for HEAD, and another one is for REL_16_STABLE\r\nbranch. As shown below, PG16 has the same behavior.\r\n\r\n```\r\npsql (16beta3)\r\nType \"help\" for help.\r\n\r\npostgres=# SET application_name TO 'あああ';\r\nSET\r\npostgres=# SHOW application_name ;\r\n application_name \r\n--------------------------------------\r\n \\xe3\\x81\\x82\\xe3\\x81\\x82\\xe3\\x81\\x82\r\n(1 row)\r\n```\r\n\r\n\r\n> > > > A. A patch which completely follows your comments. The name is\r\n> > > > \"v3-0001-...patch\". Cfbot tests it.\r\n> > > > B. A patch which completely follows Peter's comments [1]. The\r\n> > > > name is \"Peter_v3-....txt\".\r\n> > > > C. A patch which follows both comments. Based on\r\n> > > > b, but some comments (Don't use the future tense, \"Other\r\n> > > > characters\"->\"The bytes of other characters\"...) were picked. The\r\n> > > > name is \"Both_v3-....txt\".\r\n> > >\r\n> > > I also like C. Fewer words is better. So long\r\n> > > as nothing is left unsaid fewer words make for clarity.\r\n\r\nOkay, basically I used C.\r\n\r\n> > >\r\n> > > However, in the last hunk, \"of other than\" does not read well.\r\n> > > Instead of writing\r\n> > > \"and the bytes of other than printable ASCII characters\"\r\n> > > you want \"and the bytes that are not printable ASCII characters\".\r\n> > > That would be my suggestion.\r\n> > >\r\n> >\r\n> > I also prefer Option C, but...\r\n> >\r\n> > ~~~\r\n> >\r\n> > + <varname>application_name</varname> value.\r\n> > + The bytes of other characters are replaced with\r\n> > + <link linkend=\"sql-syntax-strings-escape\">C-style escaped\r\n> > hexadecimal\r\n> > + byte values</link>.\r\n> >\r\n> > V\r\n> >\r\n> > + <varname>cluster_name</varname> value.\r\n> > + The bytes of other characters are replaced with\r\n> > + <link linkend=\"sql-syntax-strings-escape\">C-style escaped\r\n> > hexadecimal\r\n> > + byte values</link>.\r\n> >\r\n> > V\r\n> >\r\n> > + <symbol>NAMEDATALEN</symbol> characters and the bytes of\r\n> other\r\n> > than\r\n> > + printable ASCII characters are replaced with <link\r\n> > + linkend=\"sql-syntax-strings-escape\">C-style escaped\r\n> > hexadecimal byte\r\n> > + values</link>.\r\n> >\r\n> >\r\n> > IIUC all of these 3 places can have exactly the same wording change\r\n> > (e.g. like Karl's last suggestion [1]).\r\n> >\r\n> > SUGGESTION\r\n> > Any bytes that are not printable ASCII characters are replaced with\r\n> > <link linkend=\"sql-syntax-strings-escape\">C-style escaped hexadecimal\r\n> > byte values</link>.\r\n> \r\n> I don't see the utility in having exactly the same phrase everywhere,\r\n> especially since the last hunk is modifying the end of a long\r\n> sentence. (Apologies if I'm mis-reading what Peter wrote above.)\r\n\r\nRight, here we cannot use exactly the same sentence.\r\n\r\n> \r\n> I like short sentences. So I prefer \"The bytes of other characters\"\r\n> rather than \"Any bytes that are not printable ASCII characters\"\r\n> for the first 2 hunks. In context I don't see the need to repeat\r\n> the whole \"printable ASCII characters\" part that appears in the\r\n> preceding sentence of both hunks. 
\"Other\" is clear, IMHO.\r\n\r\nBased on the suggestion [1], I removed the word \"byte\".\r\n(Sorry, but a comment from senior members has higher priority)\r\n\r\n> \r\n> But because I like short sentences I now think that it's a good\r\n> idea to break the long sentence of the last hunk into two.\r\n> Add a period and use the Peter's SUGGESTION above as the\r\n> text for the second sentence.\r\n\r\nRight, the sentence is separated into two.\r\n\r\n[1]: https://www.postgresql.org/message-id/803569.1695863971%40sss.pgh.pa.us\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED",
"msg_date": "Thu, 28 Sep 2023 02:48:28 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PGdocs] fix description for handling pf non-ASCII characters"
},
{
"msg_contents": "Dear Peter,\r\n\r\nThank you for reviewing!\r\n\r\n> > > > > A. A patch which completely follows your comments. The name is\r\n> > > > > \"v3-0001-...patch\". Cfbot tests it.\r\n> > > > > B. A patch which completely follows Peter's comments [1]. The\r\n> > > > > name is \"Peter_v3-....txt\".\r\n> > > > > C. A patch which follows both comments. Based on\r\n> > > > > b, but some comments (Don't use the future tense, \"Other\r\n> > > > > characters\"->\"The bytes of other characters\"...) were picked. The\r\n> > > > > name is \"Both_v3-....txt\".\r\n> > > >\r\n> > > > I also like C. Fewer words is better. So long\r\n> > > > as nothing is left unsaid fewer words make for clarity.\r\n> > > >\r\n> > > > However, in the last hunk, \"of other than\" does not read well.\r\n> > > > Instead of writing\r\n> > > > \"and the bytes of other than printable ASCII characters\"\r\n> > > > you want \"and the bytes that are not printable ASCII characters\".\r\n> > > > That would be my suggestion.\r\n> > > >\r\n> > >\r\n> > > I also prefer Option C, but...\r\n\r\nOkay, C was chosen.\r\n\r\n\r\n> > >\r\n> > > ~~~\r\n> > >\r\n> > > + <varname>application_name</varname> value.\r\n> > > + The bytes of other characters are replaced with\r\n> > > + <link linkend=\"sql-syntax-strings-escape\">C-style escaped\r\n> > > hexadecimal\r\n> > > + byte values</link>.\r\n> > >\r\n> > > V\r\n> > >\r\n> > > + <varname>cluster_name</varname> value.\r\n> > > + The bytes of other characters are replaced with\r\n> > > + <link linkend=\"sql-syntax-strings-escape\">C-style escaped\r\n> > > hexadecimal\r\n> > > + byte values</link>.\r\n> > >\r\n> > > V\r\n> > >\r\n> > > + <symbol>NAMEDATALEN</symbol> characters and the bytes of\r\n> other\r\n> > > than\r\n> > > + printable ASCII characters are replaced with <link\r\n> > > + linkend=\"sql-syntax-strings-escape\">C-style escaped\r\n> > > hexadecimal byte\r\n> > > + values</link>.\r\n> > >\r\n> > >\r\n> > > IIUC all of these 3 places can have exactly the same wording change\r\n> > > (e.g. like Karl's last suggestion [1]).\r\n> > >\r\n> > > SUGGESTION\r\n> > > Any bytes that are not printable ASCII characters are replaced with\r\n> > > <link linkend=\"sql-syntax-strings-escape\">C-style escaped hexadecimal\r\n> > > byte values</link>.\r\n\r\nHmm, I felt that using exactly the same wording seemed strange here, so similar\r\nwords were used. Also, based on the comment [1], \"byte\" was removed.\r\n\r\n> \r\n> I had in mind something like a SHIFT-JIS encoding where a single\r\n> \"character\" may include some trail bytes that happen to be in the\r\n> ASCII printable range. AFAIK because the new logic is processing\r\n> bytes, not characters, I thought the end result could be a mix of\r\n> escaped and unescaped bytes for the single SJIS character. In that\r\n> context, I felt \"The bytes of other characters\" was not quite\r\n> accurate.\r\n> \r\n> But now looking at PostgreSQL-supported character sets [1] I saw SJIS\r\n> is not supported anyhow. 
Unfortunately, I am not familiar enough with\r\n> other encodings to know if there is still a chance of similar\r\n> printable ASCII trail bytes so I am fine with whatever wording is\r\n> chosen.\r\n\r\nBased on the discussion [1], I did not handle the part.\r\n\r\n> \r\n> > But because I like short sentences I now think that it's a good\r\n> > idea to break the long sentence of the last hunk into two.\r\n> > Add a period and use the Peter's SUGGESTION above as the\r\n> > text for the second sentence.\r\n> >\r\n> > Is this desireable?\r\n> >\r\n> \r\n> +1.\r\n\r\nOK, divided.\r\n\r\nNew patch is available in [2].\r\n\r\n[1]: https://www.postgresql.org/message-id/803569.1695863971%40sss.pgh.pa.us\r\n[2]: https://www.postgresql.org/message-id/TYAPR01MB5866DD962CA4FC03E338C6BBF5C1A%40TYAPR01MB5866.jpnprd01.prod.outlook.com\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Thu, 28 Sep 2023 02:50:55 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PGdocs] fix description for handling pf non-ASCII characters"
},
{
"msg_contents": "Dear Tom,\n\nThank you for giving a comment!\nNew patch is available in [1].\n\n> I'd be inclined to keep the text as simple as possible and not focus on\n> the distinction between bytes and characters.\n>\n\nOkay, in the latest version, the word \"byte\" was removed.\n\n[1]: https://www.postgresql.org/message-id/TYAPR01MB5866DD962CA4FC03E338C6BBF5C1A%40TYAPR01MB5866.jpnprd01.prod.outlook.com\n\nBest Regards,\nHayato Kuroda\nFUJITSU LIMITED\n\n\n\n",
"msg_date": "Thu, 28 Sep 2023 02:51:32 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PGdocs] fix description for handling pf non-ASCII characters"
},
{
"msg_contents": "> I attached two patches - one is for HEAD, and another one is for REL_16_STABLE\r\n> branch. As shown below, PG16 has the same behavior.\r\n\r\nHmm, cfbot got angry because it tried to apply both of patches. To avoid it, I repost renamed patch.\r\n(I'm happy if we can specify the target branch of patches)\r\n\r\nSorry for inconvenience.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED",
"msg_date": "Thu, 28 Sep 2023 03:23:30 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PGdocs] fix description for handling pf non-ASCII characters"
},
{
"msg_contents": "On Thu, Sep 28, 2023 at 03:23:30AM +0000, Hayato Kuroda (Fujitsu) wrote:\n> Hmm, cfbot got angry because it tried to apply both of patches. To avoid it, I repost renamed patch.\n> (I'm happy if we can specify the target branch of patches)\n\nI was looking at this thread overall, the three v3 flavors of the doc\nchanges and v4.\n\n- <varname>application_name</varname> value. Other characters will be\n- replaced with question marks (<literal>?</literal>).\n+ <varname>application_name</varname> value.\n+ Other characters are replaced with <link\n+ linkend=\"sql-syntax-strings-escape\">C-style hexadecimal escapes</link>.\n\nThe simplicity of the change in v4 seems like the best approach to me,\nso +1 for that (including the mention to \"C-style\").\n--\nMichael",
"msg_date": "Thu, 28 Sep 2023 12:54:33 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PGdocs] fix description for handling pf non-ASCII characters"
},
{
"msg_contents": "v4 LGTM.\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Thu, 28 Sep 2023 14:23:59 +1000",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PGdocs] fix description for handling pf non-ASCII characters"
},
{
"msg_contents": "On Thu, 28 Sep 2023 12:54:33 +0900\nMichael Paquier <[email protected]> wrote:\n\n> I was looking at this thread overall, the three v3 flavors of the doc\n> changes and v4.\n> \n> - <varname>application_name</varname> value. Other characters\n> will be\n> - replaced with question marks (<literal>?</literal>).\n> + <varname>application_name</varname> value.\n> + Other characters are replaced with <link\n> + linkend=\"sql-syntax-strings-escape\">C-style hexadecimal\n> escapes</link>.\n> \n> The simplicity of the change in v4 seems like the best approach to me,\n> so +1 for that (including the mention to \"C-style\").\n\nI agree with Tom that it's not worth spending anyone's attention\non bytes v.s. characters.\n\nSo I'm marking the patch ready for committer.\n(I have not tried the version that patches against PGv16.)\n\nThank you everyone, especially Hayato, for spending time\nand making this better.\n\nRegards,\n\nKarl <[email protected]>\nFree Software: \"You don't pay back, you pay forward.\"\n -- Robert A. Heinlein\n\n\n",
"msg_date": "Thu, 28 Sep 2023 00:58:28 -0500",
"msg_from": "\"Karl O. Pinc\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PGdocs] fix description for handling pf non-ASCII characters"
},
{
"msg_contents": "On Thu, Sep 28, 2023 at 02:23:59PM +1000, Peter Smith wrote:\n> v4 LGTM.\n\nApplied v4 down to 16, then.\n--\nMichael",
"msg_date": "Fri, 29 Sep 2023 10:35:09 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PGdocs] fix description for handling pf non-ASCII characters"
},
{
"msg_contents": "Dear Michael,\n\nI confirmed your commit. Thanks!\nCF entry was closed as \"Committed\".\n\nBest Regards,\nHayato Kuroda\nFUJITSU LIMITED\n\n\n\n",
"msg_date": "Fri, 29 Sep 2023 02:05:46 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PGdocs] fix description for handling pf non-ASCII characters"
}
]
[
{
"msg_contents": "Hello,\n\nWe recently brought online a new database cluster, and in the course\nof ramping up traffic to it encountered a situation where a misplanned\nquery (analyzing helped with this, but I think the issue is still\nrelevant) resulted in that query being compiled with JIT, and soon a\nlarge number of backends were running that same shape of query, all of\nthem JIT compiling it. Since each JIT compilation took ~2s, this\nstarved the server of resources.\n\nThere are a couple of issues here. I'm sure it's been discussed\nbefore, and it's not the point of my thread, but I can't help but note\nthat the default value of jit_above_cost of 100000 seems absurdly low.\nOn good hardware like we have even well-planned queries with costs\nwell above that won't be taking as long as JIT compilation does.\n\nBut on the topic of the thread: I'd like to know if anyone has ever\nconsidered implemented a GUC/feature like\n\"max_concurrent_jit_compilations\" to cap the number of backends that\nmay be compiling a query at any given point so that we avoid an\noptimization from running amok and consuming all of a servers\nresources?\n\nRegards,\nJames Coleman\n\n\n",
"msg_date": "Fri, 23 Jun 2023 10:27:57 -0400",
"msg_from": "James Coleman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Stampede of the JIT compilers"
},
{
"msg_contents": "On Sat, 24 Jun 2023 at 02:28, James Coleman <[email protected]> wrote:\n> There are a couple of issues here. I'm sure it's been discussed\n> before, and it's not the point of my thread, but I can't help but note\n> that the default value of jit_above_cost of 100000 seems absurdly low.\n> On good hardware like we have even well-planned queries with costs\n> well above that won't be taking as long as JIT compilation does.\n\nIt would be good to know your evidence for thinking it's too low.\n\nThe main problem I see with it is that the costing does not account\nfor how many expressions will be compiled. It's quite different to\ncompile JIT expressions for a query to a single table with a simple\nWHERE clause vs some query with many joins which scans a partitioned\ntable with 1000 partitions, for example.\n\n> But on the topic of the thread: I'd like to know if anyone has ever\n> considered implemented a GUC/feature like\n> \"max_concurrent_jit_compilations\" to cap the number of backends that\n> may be compiling a query at any given point so that we avoid an\n> optimization from running amok and consuming all of a servers\n> resources?\n\nWhy do the number of backends matter? JIT compilation consumes the\nsame CPU resources that the JIT compilation is meant to save. If the\nJIT compilation in your query happened to be a net win rather than a\nnet loss in terms of CPU usage, then why would\nmax_concurrent_jit_compilations be useful? It would just restrict us\non what we could save. This idea just covers up the fact that the JIT\ncosting is disconnected from reality. It's a bit like trying to tune\nyour radio with the volume control.\n\nI think the JIT costs would be better taking into account how useful\neach expression will be to JIT compile. There were some ideas thrown\naround in [1].\n\nDavid\n\n[1] https://www.postgresql.org/message-id/CAApHDvpQJqLrNOSi8P1JLM8YE2C%2BksKFpSdZg%3Dq6sTbtQ-v%3Daw%40mail.gmail.com\n\n\n",
"msg_date": "Sat, 24 Jun 2023 12:33:46 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Stampede of the JIT compilers"
},
{
"msg_contents": "\n\nOn 6/24/23 02:33, David Rowley wrote:\n> On Sat, 24 Jun 2023 at 02:28, James Coleman <[email protected]> wrote:\n>> There are a couple of issues here. I'm sure it's been discussed\n>> before, and it's not the point of my thread, but I can't help but note\n>> that the default value of jit_above_cost of 100000 seems absurdly low.\n>> On good hardware like we have even well-planned queries with costs\n>> well above that won't be taking as long as JIT compilation does.\n> \n> It would be good to know your evidence for thinking it's too low.\n> \n> The main problem I see with it is that the costing does not account\n> for how many expressions will be compiled. It's quite different to\n> compile JIT expressions for a query to a single table with a simple\n> WHERE clause vs some query with many joins which scans a partitioned\n> table with 1000 partitions, for example.\n> \n\nI think it's both - as explained by James, there are queries with much\nhigher cost, but the JIT compilation takes much more than just running\nthe query without JIT. So the idea that 100k difference is clearly not\nsufficient to make up for the extra JIT compilation cost.\n\nBut it's true that's because the JIT costing is very crude, and there's\nlittle effort to account for how expensive the compilation will be (say,\nhow many expressions, ...).\n\nIMHO there's no \"good\" default that wouldn't hurt an awful lot of cases.\n\nThere's also a lot of bias - people are unlikely to notice/report cases\nwhen the JIT (including costing) works fine. But they sure are annoyed\nwhen it makes the wrong choice.\n\n>> But on the topic of the thread: I'd like to know if anyone has ever\n>> considered implemented a GUC/feature like\n>> \"max_concurrent_jit_compilations\" to cap the number of backends that\n>> may be compiling a query at any given point so that we avoid an\n>> optimization from running amok and consuming all of a servers\n>> resources?\n> \n> Why do the number of backends matter? JIT compilation consumes the\n> same CPU resources that the JIT compilation is meant to save. If the\n> JIT compilation in your query happened to be a net win rather than a\n> net loss in terms of CPU usage, then why would\n> max_concurrent_jit_compilations be useful? It would just restrict us\n> on what we could save. This idea just covers up the fact that the JIT\n> costing is disconnected from reality. It's a bit like trying to tune\n> your radio with the volume control.\n> \n\nYeah, I don't quite get this point either. If JIT for a given query\nhelps (i.e. makes execution shorter), it'd be harmful to restrict the\nmaximum number of concurrent compilations. It we just disable JIT after\nsome threshold is reached, that'd make queries longer and just made the\npileup worse.\n\nIf it doesn't help for a given query, we shouldn't be doing it at all.\nBut that should be based on better costing, not some threshold.\n\nIn practice there'll be a mix of queries where JIT does/doesn't help,\nand this threshold would just arbitrarily (and quite unpredictably)\nenable/disable costing, making it yet harder to investigate slow queries\n(as if we didn't have enough trouble with that already).\n\n> I think the JIT costs would be better taking into account how useful\n> each expression will be to JIT compile. There were some ideas thrown\n> around in [1].\n> \n\n+1 to that\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sat, 24 Jun 2023 13:40:08 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Stampede of the JIT compilers"
},
{
"msg_contents": "On Sat, Jun 24, 2023 at 7:40 AM Tomas Vondra\n<[email protected]> wrote:\n>\n>\n>\n> On 6/24/23 02:33, David Rowley wrote:\n> > On Sat, 24 Jun 2023 at 02:28, James Coleman <[email protected]> wrote:\n> >> There are a couple of issues here. I'm sure it's been discussed\n> >> before, and it's not the point of my thread, but I can't help but note\n> >> that the default value of jit_above_cost of 100000 seems absurdly low.\n> >> On good hardware like we have even well-planned queries with costs\n> >> well above that won't be taking as long as JIT compilation does.\n> >\n> > It would be good to know your evidence for thinking it's too low.\n\nIt's definitely possible that I stated this much more emphatically\nthan I should have -- it was coming out of my frustration with this\nsituation after all.\n\nI think, though, that my later comments here will provide some\nphilosophical justification for it.\n\n> > The main problem I see with it is that the costing does not account\n> > for how many expressions will be compiled. It's quite different to\n> > compile JIT expressions for a query to a single table with a simple\n> > WHERE clause vs some query with many joins which scans a partitioned\n> > table with 1000 partitions, for example.\n> >\n>\n> I think it's both - as explained by James, there are queries with much\n> higher cost, but the JIT compilation takes much more than just running\n> the query without JIT. So the idea that 100k difference is clearly not\n> sufficient to make up for the extra JIT compilation cost.\n>\n> But it's true that's because the JIT costing is very crude, and there's\n> little effort to account for how expensive the compilation will be (say,\n> how many expressions, ...).\n>\n> IMHO there's no \"good\" default that wouldn't hurt an awful lot of cases.\n>\n> There's also a lot of bias - people are unlikely to notice/report cases\n> when the JIT (including costing) works fine. But they sure are annoyed\n> when it makes the wrong choice.\n>\n> >> But on the topic of the thread: I'd like to know if anyone has ever\n> >> considered implemented a GUC/feature like\n> >> \"max_concurrent_jit_compilations\" to cap the number of backends that\n> >> may be compiling a query at any given point so that we avoid an\n> >> optimization from running amok and consuming all of a servers\n> >> resources?\n> >\n> > Why do the number of backends matter? JIT compilation consumes the\n> > same CPU resources that the JIT compilation is meant to save. If the\n> > JIT compilation in your query happened to be a net win rather than a\n> > net loss in terms of CPU usage, then why would\n> > max_concurrent_jit_compilations be useful? It would just restrict us\n> > on what we could save. This idea just covers up the fact that the JIT\n> > costing is disconnected from reality. It's a bit like trying to tune\n> > your radio with the volume control.\n> >\n>\n> Yeah, I don't quite get this point either. If JIT for a given query\n> helps (i.e. makes execution shorter), it'd be harmful to restrict the\n> maximum number of concurrent compilations. 
It we just disable JIT after\n> some threshold is reached, that'd make queries longer and just made the\n> pileup worse.\n\nMy thought process here is that given the poor modeling of JIT costing\nyou've both described that we're likely to estimate the cost of \"easy\"\nJIT compilation acceptably well but also likely to estimate \"complex\"\nJIT compilation far lower than actual cost.\n\nAnother way of saying this is that our range of JIT compilation costs\nmay well be fine on the bottom end but clamped on the high end, and\nthat means that our failure modes will tend towards the worst\nmis-costings being the most painful (e.g., 2s compilation time for a\n100ms query). This is even more the case in an OLTP system where the\nmajority of queries are already known to be quite fast.\n\nIn that context capping the number of backends compiling, particularly\nwhere plans (and JIT?) might be cached, could well save us (depending\non workload).\n\nThat being said, I could imagine an alternative approach solving a\nsimilar problem -- a way of exiting early from compilation if it takes\nlonger than we expect.\n\n> If it doesn't help for a given query, we shouldn't be doing it at all.\n> But that should be based on better costing, not some threshold.\n>\n> In practice there'll be a mix of queries where JIT does/doesn't help,\n> and this threshold would just arbitrarily (and quite unpredictably)\n> enable/disable costing, making it yet harder to investigate slow queries\n> (as if we didn't have enough trouble with that already).\n>\n> > I think the JIT costs would be better taking into account how useful\n> > each expression will be to JIT compile. There were some ideas thrown\n> > around in [1].\n> >\n>\n> +1 to that\n\nThat does sound like an improvement.\n\nOne thing about our JIT that is different from e.g. browser JS engine\nJITing is that we don't substitute in the JIT code \"on the fly\" while\nexecution is already underway. That'd be another, albeit quite\ndifficult, way to solve these issues.\n\nRegards,\nJames Coleman\n\n\n",
"msg_date": "Sat, 24 Jun 2023 13:12:29 -0400",
"msg_from": "James Coleman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Stampede of the JIT compilers"
},
{
"msg_contents": "James Coleman <[email protected]> writes:\n> On Sat, Jun 24, 2023 at 7:40 AM Tomas Vondra\n> <[email protected]> wrote:\n>> On 6/24/23 02:33, David Rowley wrote:\n>>> On Sat, 24 Jun 2023 at 02:28, James Coleman <[email protected]> wrote:\n>>>> There are a couple of issues here. I'm sure it's been discussed\n>>>> before, and it's not the point of my thread, but I can't help but note\n>>>> that the default value of jit_above_cost of 100000 seems absurdly low.\n>>>> On good hardware like we have even well-planned queries with costs\n>>>> well above that won't be taking as long as JIT compilation does.\n\n>>> It would be good to know your evidence for thinking it's too low.\n\n> It's definitely possible that I stated this much more emphatically\n> than I should have -- it was coming out of my frustration with this\n> situation after all.\n\nI think there is *plenty* of evidence that it is too low, or at least\nthat for some reason we are too willing to invoke JIT when the result\nis to make the overall cost of a query far higher than it is without.\nJust see all the complaints on the mailing lists that have been\nresolved by advice to turn off JIT. You do not even have to look\nfurther than our own regression tests: on my machine with current\nHEAD, \"time make installcheck-parallel\" reports\n\nreal 0m8.544s\nuser 0m0.906s\nsys 0m0.863s\n\nfor a build without --with-llvm, and\n\nreal 0m13.211s\nuser 0m0.917s\nsys 0m0.811s\n\nfor a build with it (and all JIT settings left at defaults). If you\ndo non-parallel \"installcheck\" the ratio is similar. I don't see how\nanyone can claim that 50% slowdown is just fine.\n\nI don't know whether raising the default would be enough to fix that\nin a nice way, and I certainly don't pretend to have a specific value\nto offer. But it's undeniable that we have a serious problem here,\nto the point where JIT is a net negative for quite a few people.\n\n\n> In that context capping the number of backends compiling, particularly\n> where plans (and JIT?) might be cached, could well save us (depending\n> on workload).\n\nTBH I do not find this proposal attractive in the least. We have\na problem here even when you consider a single backend. If we fixed\nthat, so that we don't invoke JIT unless it really helps, then it's\nnot going to help less just because you have a lot of backends.\nPlus, the overhead of managing a system-wide limit is daunting.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 24 Jun 2023 13:54:53 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Stampede of the JIT compilers"
},
{
"msg_contents": "On Sun, 25 Jun 2023 at 05:54, Tom Lane <[email protected]> wrote:\n>\n> James Coleman <[email protected]> writes:\n> > On Sat, Jun 24, 2023 at 7:40 AM Tomas Vondra\n> > <[email protected]> wrote:\n> >> On 6/24/23 02:33, David Rowley wrote:\n> >>> On Sat, 24 Jun 2023 at 02:28, James Coleman <[email protected]> wrote:\n> >>>> There are a couple of issues here. I'm sure it's been discussed\n> >>>> before, and it's not the point of my thread, but I can't help but note\n> >>>> that the default value of jit_above_cost of 100000 seems absurdly low.\n> >>>> On good hardware like we have even well-planned queries with costs\n> >>>> well above that won't be taking as long as JIT compilation does.\n>\n> >>> It would be good to know your evidence for thinking it's too low.\n>\n> > It's definitely possible that I stated this much more emphatically\n> > than I should have -- it was coming out of my frustration with this\n> > situation after all.\n>\n> I think there is *plenty* of evidence that it is too low, or at least\n> that for some reason we are too willing to invoke JIT when the result\n> is to make the overall cost of a query far higher than it is without.\n\nI've seen plenty of other reports and I do agree there is a problem,\nbut I think you're jumping to conclusions in this particular case.\nI've seen nothing here that couldn't equally indicate the planner\ndidn't overestimate the costs or some row estimate for the given\nquery. The solution to those problems shouldn't be bumping up the\ndefault JIT thresholds it could be to fix the costs or tune/add\nstatistics to get better row estimates.\n\nI don't think it's too big an ask to see a few more details so that we\ncan confirm what the actual problem is.\n\nDavid\n\n\n",
"msg_date": "Sun, 25 Jun 2023 12:14:34 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Stampede of the JIT compilers"
},
{
"msg_contents": "On Sat, Jun 24, 2023 at 1:54 PM Tom Lane <[email protected]> wrote:\n>\n> James Coleman <[email protected]> writes:\n> > In that context capping the number of backends compiling, particularly\n> > where plans (and JIT?) might be cached, could well save us (depending\n> > on workload).\n>\n> TBH I do not find this proposal attractive in the least. We have\n> a problem here even when you consider a single backend. If we fixed\n> that, so that we don't invoke JIT unless it really helps, then it's\n> not going to help less just because you have a lot of backends.\n> Plus, the overhead of managing a system-wide limit is daunting.\n>\n> regards, tom lane\n\nI'm happy to withdraw that particular idea. My mental model was along\nthe lines \"this is a startup cost, and then we'll have it cached, so\nthe higher than expected cost won't matter as much when the system\nsettles down\", and in that scenario limiting the size of the herd can\nmake sense.\n\nBut that's not the broader problem, so...\n\nRegards,\nJames Coleman\n\n\n",
"msg_date": "Sat, 24 Jun 2023 22:24:37 -0400",
"msg_from": "James Coleman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Stampede of the JIT compilers"
},
{
"msg_contents": "On Sat, Jun 24, 2023 at 8:14 PM David Rowley <[email protected]> wrote:\n>\n> On Sun, 25 Jun 2023 at 05:54, Tom Lane <[email protected]> wrote:\n> >\n> > James Coleman <[email protected]> writes:\n> > > On Sat, Jun 24, 2023 at 7:40 AM Tomas Vondra\n> > > <[email protected]> wrote:\n> > >> On 6/24/23 02:33, David Rowley wrote:\n> > >>> On Sat, 24 Jun 2023 at 02:28, James Coleman <[email protected]> wrote:\n> > >>>> There are a couple of issues here. I'm sure it's been discussed\n> > >>>> before, and it's not the point of my thread, but I can't help but note\n> > >>>> that the default value of jit_above_cost of 100000 seems absurdly low.\n> > >>>> On good hardware like we have even well-planned queries with costs\n> > >>>> well above that won't be taking as long as JIT compilation does.\n> >\n> > >>> It would be good to know your evidence for thinking it's too low.\n> >\n> > > It's definitely possible that I stated this much more emphatically\n> > > than I should have -- it was coming out of my frustration with this\n> > > situation after all.\n> >\n> > I think there is *plenty* of evidence that it is too low, or at least\n> > that for some reason we are too willing to invoke JIT when the result\n> > is to make the overall cost of a query far higher than it is without.\n>\n> I've seen plenty of other reports and I do agree there is a problem,\n> but I think you're jumping to conclusions in this particular case.\n> I've seen nothing here that couldn't equally indicate the planner\n> didn't overestimate the costs or some row estimate for the given\n> query. The solution to those problems shouldn't be bumping up the\n> default JIT thresholds it could be to fix the costs or tune/add\n> statistics to get better row estimates.\n>\n> I don't think it's too big an ask to see a few more details so that we\n> can confirm what the actual problem is.\n\nI did say in the original email \"encountered a situation where a\nmisplanned query (analyzing helped with this, but I think the issue is\nstill relevant)\".\n\nI'll look at specifics again on Monday, but what I do remember is that\nthere were a lot of joins, and we already know we have cases where\nthose are planned poorly too (even absent bad stats).\n\nWhat I wanted to get at more broadly here was thinking along the lines\nof how to prevent the misplanning from causing such a disaster.\n\nRegards,\nJames Coleman\n\n\n",
"msg_date": "Sat, 24 Jun 2023 22:27:43 -0400",
"msg_from": "James Coleman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Stampede of the JIT compilers"
},
{
"msg_contents": "Hi,\n\nOn Sat, Jun 24, 2023 at 01:54:53PM -0400, Tom Lane wrote:\n> I don't know whether raising the default would be enough to fix that\n> in a nice way, and I certainly don't pretend to have a specific value\n> to offer. But it's undeniable that we have a serious problem here,\n> to the point where JIT is a net negative for quite a few people.\n\nSome further data: to my knowledge, most major managed postgres\nproviders disable jit for their users. Azure certainly does, but I don't\nhave a Google Cloud SQL or RDS instance running right to verify their\nsettings. I do seem to remember that they did as well though, at least a\nwhile back.\n\n\nMichael\n\n\n",
"msg_date": "Sun, 25 Jun 2023 11:10:00 +0200",
"msg_from": "Michael Banck <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Stampede of the JIT compilers"
},
{
"msg_contents": "David Rowley <[email protected]> writes:\n> I've seen plenty of other reports and I do agree there is a problem,\n> but I think you're jumping to conclusions in this particular case.\n> I've seen nothing here that couldn't equally indicate the planner\n> didn't overestimate the costs or some row estimate for the given\n> query. The solution to those problems shouldn't be bumping up the\n> default JIT thresholds it could be to fix the costs or tune/add\n> statistics to get better row estimates.\n> I don't think it's too big an ask to see a few more details so that we\n> can confirm what the actual problem is.\n\nOkay, I re-did the regression tests with log_min_duration_statement set to\nzero, and then collected the reported runtimes. (This time, the builds\nalso had --enable-cassert turned off, unlike my quick check yesterday.)\nI attach the results for anyone interested in doing their own analysis,\nbut my preliminary impression is:\n\n(1) There is *no* command in the core regression tests where it makes\nsense to invoke JIT. This is unsurprising really, because we don't\nallow any long-running queries there. The places where the time\nwith LLVM beats the time without LLVM look to be noise. (I didn't\ngo so far as to average the results from several runs, but perhaps\nsomeone else would wish to.)\n\n(2) Nonetheless, we clearly do invoke JIT in some places, and it adds\nas much as a couple hundred ms to what had been a query requiring a few\nms. I've investigated several of the ones with the worst time penalties,\nand they indeed look to be estimation errors. The planner is guessing\nthat a join for which it lacks any stats will produce some tens of\nthousands of rows, which it doesn't really, but that's enough to persuade\nit to apply JIT.\n\n(3) I still think this is evidence that the cost thresholds are too low,\nbecause even if these joins actually did produce some tens of thousands\nof rows, I think we'd be well shy of breakeven to use JIT. We'd have\nto do some more invasive testing to prove that guess, of course. But\nit looks to me like the current situation is effectively biased towards\nusing JIT when we're in the gray zone, and we'd be better off reversing\nthat bias.\n\n\t\t\tregards, tom lane",
"msg_date": "Sun, 25 Jun 2023 14:06:02 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Stampede of the JIT compilers"
},
{
"msg_contents": "On Sun, Jun 25, 2023 at 5:10 AM Michael Banck <[email protected]> wrote:\n>\n> Hi,\n>\n> On Sat, Jun 24, 2023 at 01:54:53PM -0400, Tom Lane wrote:\n> > I don't know whether raising the default would be enough to fix that\n> > in a nice way, and I certainly don't pretend to have a specific value\n> > to offer. But it's undeniable that we have a serious problem here,\n> > to the point where JIT is a net negative for quite a few people.\n>\n> Some further data: to my knowledge, most major managed postgres\n> providers disable jit for their users. Azure certainly does, but I don't\n> have a Google Cloud SQL or RDS instance running right to verify their\n> settings. I do seem to remember that they did as well though, at least a\n> while back.\n>\n>\n> Michael\n\nI believe it's off by default in Aurora Postgres also.\n\nRegards,\nJames Coleman\n\n\n",
"msg_date": "Sun, 25 Jun 2023 15:21:24 -0400",
"msg_from": "James Coleman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Stampede of the JIT compilers"
},
{
"msg_contents": "On Sun, 2023-06-25 at 11:10 +0200, Michael Banck wrote:\n> On Sat, Jun 24, 2023 at 01:54:53PM -0400, Tom Lane wrote:\n> > I don't know whether raising the default would be enough to fix that\n> > in a nice way, and I certainly don't pretend to have a specific value\n> > to offer. But it's undeniable that we have a serious problem here,\n> > to the point where JIT is a net negative for quite a few people.\n> \n> Some further data: to my knowledge, most major managed postgres\n> providers disable jit for their users.\n\nI have also started recommending jit=off for all but analytic workloads.\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Mon, 26 Jun 2023 13:10:50 +0200",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Stampede of the JIT compilers"
},
{
"msg_contents": "Hi,\n\nOn 2023-06-24 13:54:53 -0400, Tom Lane wrote:\n> I think there is *plenty* of evidence that it is too low, or at least\n> that for some reason we are too willing to invoke JIT when the result\n> is to make the overall cost of a query far higher than it is without.\n> Just see all the complaints on the mailing lists that have been\n> resolved by advice to turn off JIT. You do not even have to look\n> further than our own regression tests: on my machine with current\n> HEAD, \"time make installcheck-parallel\" reports\n> \n> real 0m8.544s\n> user 0m0.906s\n> sys 0m0.863s\n> \n> for a build without --with-llvm, and\n> \n> real 0m13.211s\n> user 0m0.917s\n> sys 0m0.811s\n\nIIRC those are all, or nearly all, cases where we have no stats and the plans\nhave ridiculous costs (and other reasons like enable_seqscans = false and\nusing seqscans nonetheless). In those cases no cost based approach will work\n:(.\n\n\n> I don't know whether raising the default would be enough to fix that\n> in a nice way, and I certainly don't pretend to have a specific value\n> to offer. But it's undeniable that we have a serious problem here,\n> to the point where JIT is a net negative for quite a few people.\n\nYea, I think at the moment it's not working well enough to be worth having on\nby default. Some of that is due to partitioning having become much more\ncommon, leading to much bigger plan trees, some of it is just old stuff that I\nhad hoped could be addressed more easily.\n\nFWIW, Daniel Gustafsson is hacking on an old patch of mine that was working\ntowards making the JIT result cacheable (and providing noticeably bigger\nperformance gains).\n\n\n> > In that context capping the number of backends compiling, particularly\n> > where plans (and JIT?) might be cached, could well save us (depending\n> > on workload).\n> \n> TBH I do not find this proposal attractive in the least.\n\nYea, me neither. It doesn't address any of the actual problems and will add\nnew contention.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 26 Jun 2023 13:12:34 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Stampede of the JIT compilers"
}
] |
[
{
"msg_contents": "Hi,\n\nI ran into a pretty terrible case of LEFT JOIN estimate, resulting in\npretty arbitrary underestimate. The query is pretty trivial, nothing\noverly complex, and the more I think about it the more I think this is\na fairly fundamental flaw in how we estimate this type of joins.\n\nImagine you have two trivial tables:\n\n CREATE TABLE large (id INT, a INT);\n INSERT INTO large SELECT i, i FROM generate_series(1,1000000) s(i);\n\n CREATE TABLE small (id INT, b INT);\n INSERT INTO small SELECT i, i FROM generate_series(1,100) s(i);\n\nThe business meaning may be that \"large\" stores orders and \"small\" is\nfor events related to tiny fraction of the large table (e.g. returns).\nAnd let's do a couple simple LEFT JOIN queries, adding conditions to it.\n\nLet's start with no condition at all:\n\n EXPLAIN ANALYZE\n SELECT * FROM large LEFT JOIN small ON (large.id = small.id)\n\n QUERY PLAN\n ----------------------------------------------------------------------\n Hash Left Join (cost=3.25..18179.25 rows=1000000 width=16)\n (actual time=0.069..550.290 rows=1000000 loops=1)\n Hash Cond: (large.id = small.id)\n -> Seq Scan on large (cost=0.00..14425.00 rows=1000000 width=8)\n (actual time=0.010..174.056 rows=1000000 loops=1)\n -> Hash (cost=2.00..2.00 rows=100 width=8) (actual time=0.052...\n Buckets: 1024 Batches: 1 Memory Usage: 12kB\n -> Seq Scan on small (cost=0.00..2.00 rows=100 width=8) ...\n Planning Time: 0.291 ms\n Execution Time: 663.551 ms\n (8 rows)\n\nSo great, this estimate is perfect. Now, let's add IS NULL condition on\nthe small table, to find rows without a match (e.g. orders that were not\nreturned):\n\n EXPLAIN ANALYZE\n SELECT * FROM large LEFT JOIN small ON (large.id = small.id)\n WHERE (small.id IS NULL);\n\n QUERY PLAN\n ----------------------------------------------------------------------\n Hash Anti Join (cost=3.25..27052.36 rows=999900 width=16)\n (actual time=0.071..544.568 rows=999900 loops=1)\n Hash Cond: (large.id = small.id)\n -> Seq Scan on large (cost=0.00..14425.00 rows=1000000 width=8)\n (actual time=0.015..166.019 rows=1000000 loops=1)\n -> Hash (cost=2.00..2.00 rows=100 width=8) (actual time=0.051...\n Buckets: 1024 Batches: 1 Memory Usage: 12kB\n -> Seq Scan on small (cost=0.00..2.00 rows=100 width=8) ...\n Planning Time: 0.260 ms\n Execution Time: 658.379 ms\n (8 rows)\n\nAlso very accurate, great! Now let's do a condition on the large table\ninstead, filtering some the rows:\n\n EXPLAIN ANALYZE\n SELECT * FROM large LEFT JOIN small ON (large.id = small.id)\n WHERE (large.a IN (1000, 2000, 3000, 4000, 5000));\n\n QUERY PLAN\n ----------------------------------------------------------------------\n Nested Loop Left Join (cost=0.00..20684.75 rows=5 width=16)\n (actual time=0.957..127.376 rows=5 loops=1)\n Join Filter: (large.id = small.id)\n Rows Removed by Join Filter: 500\n -> Seq Scan on large (cost=0.00..20675.00 rows=5 width=8)\n (actual time=0.878..127.171 rows=5 loops=1)\n Filter: (a = ANY ('{1000,2000,3000,4000,5000}'::integer[]))\n Rows Removed by Filter: 999995\n -> Materialize (cost=0.00..2.50 rows=100 width=8) ...\n -> Seq Scan on small (cost=0.00..2.00 rows=100 width=8) ...\n Planning Time: 0.223 ms\n Execution Time: 127.407 ms\n (10 rows)\n\nAlso great estimate! 
Surely, if we do both conditions with OR, we'll get\na decent estimate too?\n\n EXPLAIN ANALYZE\n SELECT * FROM large LEFT JOIN small ON (large.id = small.id)\n WHERE (small.id IS NULL)\n OR (large.a IN (1000, 2000, 3000, 4000, 5000));\n\n QUERY PLAN\n ----------------------------------------------------------------------\n Hash Left Join (cost=3.25..18179.88 rows=5 width=16)\n (actual time=0.073..580.827 rows=999900 loops=1)\n Hash Cond: (large.id = small.id)\n Filter: ((small.id IS NULL) OR\n (large.a = ANY ('{1000,2000,3000,4000,5000}'::integer[])))\n Rows Removed by Filter: 100\n -> Seq Scan on large (cost=0.00..14425.00 rows=1000000 width=8)\n (actual time=0.015..166.809 rows=1000000 loops=1)\n -> Hash (cost=2.00..2.00 rows=100 width=8) (actual time=0.052...\n Buckets: 1024 Batches: 1 Memory Usage: 12kB\n -> Seq Scan on small (cost=0.00..2.00 rows=100 width=8) ...\n Planning Time: 0.309 ms\n Execution Time: 694.427 ms\n (10 rows)\n\nWell, bummer! This is pretty surprising, because if we know that clause\nA produces estimate 1M and clause B estimates as 5, then it's expected\nthat (A OR B) should be estimated as something >= max(1M, 5). For users\nrunning this, this has to be really surprising.\n\nIt's also quite serious, because with underestimates like this the\nplanner is likely to pick nestloops for additional joins, and we all\nknow how that performs for millions of rows ...\n\nSo, how does this happen? Well, the simple reason is that joins are\nestimated by applying selectivities for all the clauses on a cartesian\nproduct. So we calculate product (100 * 1M), and then apply selectivity\nfor the join condition, and the two single-table clauses.\n\nThe problem is that the selectivity for \"IS NULL\" is estimated using the\ntable-level statistics. But the LEFT JOIN entirely breaks the idea that\nthe null_frac has anything to do with NULLs in the join result. Because\nthe join result is not a subset of cartesian product - it's a superset.\nEven if the small table has no NULLs, the join can have them - that's\nthe whole point of outer joins.\n\nWhen there's only the IS NULL condition, we actually recognize this as\na special case and treat the join as antijoin (Hash Anti Join), and that\nalso fixes the estimates - antijoins do handle this fine. But as soon as\nyou change the condition a bit and do the IS NULL check on the other\ncolumn of the table, it goes wrong too:\n\n EXPLAIN ANALYZE\n SELECT * FROM large LEFT JOIN small ON (large.id = small.id)\n WHERE (small.b IS NULL);\n\n QUERY PLAN\n ----------------------------------------------------------------------\n Hash Left Join (cost=3.25..18179.25 rows=1 width=16)\n (actual time=0.311..3110.298 rows=999900 loops=1)\n Hash Cond: (large.id = small.id)\n Filter: (small.b IS NULL)\n Rows Removed by Filter: 100\n -> Seq Scan on large (cost=0.00..14425.00 rows=1000000 width=8)\n (actual time=0.014..1032.497 rows=1000000 loops=1)\n -> Hash (cost=2.00..2.00 rows=100 width=8) (actual time=0.287...\n Buckets: 1024 Batches: 1 Memory Usage: 12kB\n -> Seq Scan on small (cost=0.00..2.00 rows=100 width=8) ...\n Planning Time: 0.105 ms\n Execution Time: 4083.634 ms\n (10 rows)\n\nI'd bet most users would be rather surprised to learn this subtle change\nmakes a difference.\n\nI wonder how to improve this, say by adjusting the IS NULL selectivity\nwhen we know to operate on the outer side of the join. 
We're able to\ndo this for antijoins, so maybe we could do that here, somehow?\n\nUnfortunately the important things (join type detection, IS NULL clause\nestimation) happen pretty far away, but maybe we could pass that info\nabout fraction of NULLs introduced by he join to the nulltestsel, and\nuse it there (essentially by doing null_frac + join_null_frac). And\nmaybe we could do something like that for NULL tests on other columns\nfrom the outer side ...\n\nThe other thing that might be beneficial is calculating boundaries for\nthe estimate. In this case we're capable of estimating the join size for\nindividual conditions, and then we could make ensure the final estimate\nfor the \"a OR b\" join is >= max of these cardinalities.\n\nOf course, that might be expensive if we have to redo some of the join\nplanning/estimates for each individual condition (otherwise might not\nnotice the antijoin transformation and stuff like that).\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 23 Jun 2023 21:23:43 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": true,
"msg_subject": "Problems with estimating OR conditions, IS NULL on LEFT JOINs"
},
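An illustrative aside, not something proposed in the thread: splitting the OR into its two disjoint branches by hand lets the planner estimate each branch separately, which makes it easy to confirm that only the combined estimate goes wrong. Using the same large/small tables defined above:

  EXPLAIN
  SELECT * FROM large LEFT JOIN small ON (large.id = small.id)
   WHERE small.id IS NULL
  UNION ALL
  SELECT * FROM large LEFT JOIN small ON (large.id = small.id)
   WHERE small.id IS NOT NULL
     AND large.a IN (1000, 2000, 3000, 4000, 5000);

The first branch should again be planned as an anti join (~999900 rows) and the second stays tiny, so the UNION ALL total lands near the true row count of the original OR query.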
{
"msg_contents": "Tomas Vondra <[email protected]> writes:\n> The problem is that the selectivity for \"IS NULL\" is estimated using the\n> table-level statistics. But the LEFT JOIN entirely breaks the idea that\n> the null_frac has anything to do with NULLs in the join result.\n\nRight.\n\n> I wonder how to improve this, say by adjusting the IS NULL selectivity\n> when we know to operate on the outer side of the join. We're able to\n> do this for antijoins, so maybe we could do that here, somehow?\n\nThis mess is part of the long-term plan around the work I've been doing\non outer-join-aware Vars. We now have infrastructure that can let\nthe estimator routines see \"oh, this Var isn't directly from a scan\nof its table, it's been passed through a potentially-nulling outer\njoin --- and I can see which one\". I don't have more than vague ideas\nabout what happens next, but that is clearly an essential step on the\nroad to doing better.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 23 Jun 2023 20:08:34 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Problems with estimating OR conditions, IS NULL on LEFT JOINs"
},
{
"msg_contents": "On 6/24/23 02:08, Tom Lane wrote:\n> Tomas Vondra <[email protected]> writes:\n>> The problem is that the selectivity for \"IS NULL\" is estimated using the\n>> table-level statistics. But the LEFT JOIN entirely breaks the idea that\n>> the null_frac has anything to do with NULLs in the join result.\n> \n> Right.\n> \n>> I wonder how to improve this, say by adjusting the IS NULL selectivity\n>> when we know to operate on the outer side of the join. We're able to\n>> do this for antijoins, so maybe we could do that here, somehow?\n> \n> This mess is part of the long-term plan around the work I've been doing\n> on outer-join-aware Vars. We now have infrastructure that can let\n> the estimator routines see \"oh, this Var isn't directly from a scan\n> of its table, it's been passed through a potentially-nulling outer\n> join --- and I can see which one\". I don't have more than vague ideas\n> about what happens next, but that is clearly an essential step on the\n> road to doing better.\n> \n\nI was wondering if that work on outer-join-aware Vars could help with\nthis, but I wasn't following it very closely. I agree the ability to\ncheck if the Var could be NULL due to an outer join seems useful, as it\nsays whether applying raw attribute statistics makes sense or not.\n\nI was thinking about what to do for the case when that's not possible,\ni.e. when the Var refers to nullable side of the join. Knowing that this\nis happening is clearly not enough - we need to know how many new NULLs\nare \"injected\" into the join result, and \"communicate\" that to the\nestimation routines.\n\nAttached is a very ugly experimental patch doing that, and with it the\nestimate changes to this:\n\n QUERY PLAN\n ----------------------------------------------------------------------\n Hash Left Join (cost=3.25..18179.88 rows=999900 width=16)\n (actual time=0.528..596.151 rows=999900 loops=1)\n Hash Cond: (large.id = small.id)\n Filter: ((small.id IS NULL) OR\n (large.a = ANY ('{1000,2000,3000,4000,5000}'::integer[])))\n Rows Removed by Filter: 100\n -> Seq Scan on large (cost=0.00..14425.00 rows=1000000 width=8)\n (actual time=0.069..176.138 rows=1000000 loops=1)\n -> Hash (cost=2.00..2.00 rows=100 width=8)\n (actual time=0.371..0.373 rows=100 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 12kB\n -> Seq Scan on small (cost=0.00..2.00 rows=100 width=8)\n (actual time=0.032..0.146 rows=100 loops=1)\n Planning Time: 3.845 ms\n Execution Time: 712.405 ms\n (10 rows)\n\nSeems nice, but. The patch is pretty ugly, I don't claim it works for\nother queries or that this is exactly what we should do. It calculates\n\"unmatched frequency\" next to eqjoinsel_inner, stashes that info into\nsjinfo and the estimator (nulltestsel) then uses that to adjust the\nnullfrac it gets from the statistics.\n\nThe good thing is this helps even for IS NULL checks on non-join-key\ncolumns (where we don't switch to an antijoin), but there's a couple\nthings that I dislike ...\n\n1) It's not restricted to outer joins or anything like that (this is\nmostly just my laziness / interest in one particular query, but also\nsomething the outer-join-aware patch might help with).\n\n2) We probably don't want to pass this kind of information through\nsjinfo. 
That was the simplest thing for an experimental patch, but I\nsuspect it's not the only piece of information we may need to pass to\nthe lower levels of estimation code.\n\n3) I kinda doubt we actually want to move this responsibility (to\nconsider fraction of unmatched rows) to the low-level estimation\nroutines (e.g. nulltestsel and various others). AFAICS this just\n\"introduces NULLs\" into the relation, so maybe we could \"adjust\" the\nattribute statistics (in examine_variable?) by inflating null_frac and\nmodifying the other frequencies in MCV/histogram.\n\n4) But I'm not sure we actually want to do that in these low-level\nselectivity functions. The outer join essentially produces output with\ntwo subsets - one with matches on the outer side, one without them. But\nthe side without matches has NULLs in all columns. In a way, we know\nexactly how are these columns correlated - if we do the usual estimation\n(even with the null_frac adjusted), we just throw this information away.\nAnd when there's a lot of rows without a match, that seems bad.\n\nSo maybe we should split the join estimate into two parts, one for each\nsubset of the join result. One for the rows with a match (and then we\ncan just do what we do now, with the attribute stats we already have).\nAnd one for the \"unmatched part\" where we know the values on the outer\nside are NULL (and then we can easily \"fake\" stats with null_frac=1.0).\n\n\nI really hope what I just wrote makes at least a little bit of sense.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Sat, 24 Jun 2023 13:23:23 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Problems with estimating OR conditions, IS NULL on LEFT JOINs"
},
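The "injected NULLs" effect described above is easy to observe directly. A small check, reusing the large/small tables from the first message (an illustrative sketch, not part of the patch):

  -- Fraction of NULLs in small.id as seen in the join result:
  SELECT (count(*) FILTER (WHERE small.id IS NULL))::float8 / count(*)
    FROM large LEFT JOIN small ON (large.id = small.id);
  -- ~0.9999 here, yet the per-column statistics see no NULLs at all:
  SELECT null_frac FROM pg_stats
   WHERE tablename = 'small' AND attname = 'id';
  -- null_frac = 0, which is exactly the mismatch nulltestsel has to be
  -- told about (via sjinfo in the experimental patch).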
{
"msg_contents": "On 24/6/2023 17:23, Tomas Vondra wrote:\n> I really hope what I just wrote makes at least a little bit of sense.\nThrow in one more example:\n\nSELECT i AS id INTO l FROM generate_series(1,100000) i;\nCREATE TABLE r (id int8, v text);\nINSERT INTO r (id, v) VALUES (1, 't'), (-1, 'f');\nANALYZE l,r;\nEXPLAIN ANALYZE\nSELECT * FROM l LEFT OUTER JOIN r ON (r.id = l.id) WHERE r.v IS NULL;\n\nHere you can see the same kind of underestimation:\nHash Left Join (... rows=500 width=14) (... rows=99999 ...)\n\nSo the eqjoinsel_unmatch_left() function should be modified for the case \nwhere nd1<nd2.\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional\n\n\n\n",
"msg_date": "Mon, 26 Jun 2023 15:22:03 +0600",
"msg_from": "Andrey Lepikhov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Problems with estimating OR conditions, IS NULL on LEFT JOINs"
},
{
"msg_contents": "Hi, all!\n\nOn 24.06.2023 14:23, Tomas Vondra wrote:\n> On 6/24/23 02:08, Tom Lane wrote:\n>> Tomas Vondra<[email protected]> writes:\n>>> The problem is that the selectivity for \"IS NULL\" is estimated using the\n>>> table-level statistics. But the LEFT JOIN entirely breaks the idea that\n>>> the null_frac has anything to do with NULLs in the join result.\n>> Right.\n>>\n>>> I wonder how to improve this, say by adjusting the IS NULL selectivity\n>>> when we know to operate on the outer side of the join. We're able to\n>>> do this for antijoins, so maybe we could do that here, somehow?\n>> This mess is part of the long-term plan around the work I've been doing\n>> on outer-join-aware Vars. We now have infrastructure that can let\n>> the estimator routines see \"oh, this Var isn't directly from a scan\n>> of its table, it's been passed through a potentially-nulling outer\n>> join --- and I can see which one\". I don't have more than vague ideas\n>> about what happens next, but that is clearly an essential step on the\n>> road to doing better.\n>>\n> I was wondering if that work on outer-join-aware Vars could help with\n> this, but I wasn't following it very closely. I agree the ability to\n> check if the Var could be NULL due to an outer join seems useful, as it\n> says whether applying raw attribute statistics makes sense or not.\n>\n> I was thinking about what to do for the case when that's not possible,\n> i.e. when the Var refers to nullable side of the join. Knowing that this\n> is happening is clearly not enough - we need to know how many new NULLs\n> are \"injected\" into the join result, and \"communicate\" that to the\n> estimation routines.\n>\n> Attached is a very ugly experimental patch doing that, and with it the\n> estimate changes to this:\n>\n> QUERY PLAN\n> ----------------------------------------------------------------------\n> Hash Left Join (cost=3.25..18179.88 rows=999900 width=16)\n> (actual time=0.528..596.151 rows=999900 loops=1)\n> Hash Cond: (large.id = small.id)\n> Filter: ((small.id IS NULL) OR\n> (large.a = ANY ('{1000,2000,3000,4000,5000}'::integer[])))\n> Rows Removed by Filter: 100\n> -> Seq Scan on large (cost=0.00..14425.00 rows=1000000 width=8)\n> (actual time=0.069..176.138 rows=1000000 loops=1)\n> -> Hash (cost=2.00..2.00 rows=100 width=8)\n> (actual time=0.371..0.373 rows=100 loops=1)\n> Buckets: 1024 Batches: 1 Memory Usage: 12kB\n> -> Seq Scan on small (cost=0.00..2.00 rows=100 width=8)\n> (actual time=0.032..0.146 rows=100 loops=1)\n> Planning Time: 3.845 ms\n> Execution Time: 712.405 ms\n> (10 rows)\n>\n> Seems nice, but. The patch is pretty ugly, I don't claim it works for\n> other queries or that this is exactly what we should do. It calculates\n> \"unmatched frequency\" next to eqjoinsel_inner, stashes that info into\n> sjinfo and the estimator (nulltestsel) then uses that to adjust the\n> nullfrac it gets from the statistics.\n>\n> The good thing is this helps even for IS NULL checks on non-join-key\n> columns (where we don't switch to an antijoin), but there's a couple\n> things that I dislike ...\n>\n> 1) It's not restricted to outer joins or anything like that (this is\n> mostly just my laziness / interest in one particular query, but also\n> something the outer-join-aware patch might help with).\n>\n> 2) We probably don't want to pass this kind of information through\n> sjinfo. 
That was the simplest thing for an experimental patch, but I\n> suspect it's not the only piece of information we may need to pass to\n> the lower levels of estimation code.\n>\n> 3) I kinda doubt we actually want to move this responsibility (to\n> consider fraction of unmatched rows) to the low-level estimation\n> routines (e.g. nulltestsel and various others). AFAICS this just\n> \"introduces NULLs\" into the relation, so maybe we could \"adjust\" the\n> attribute statistics (in examine_variable?) by inflating null_frac and\n> modifying the other frequencies in MCV/histogram.\n>\n> 4) But I'm not sure we actually want to do that in these low-level\n> selectivity functions. The outer join essentially produces output with\n> two subsets - one with matches on the outer side, one without them. But\n> the side without matches has NULLs in all columns. In a way, we know\n> exactly how are these columns correlated - if we do the usual estimation\n> (even with the null_frac adjusted), we just throw this information away.\n> And when there's a lot of rows without a match, that seems bad.\n>\n> So maybe we should split the join estimate into two parts, one for each\n> subset of the join result. One for the rows with a match (and then we\n> can just do what we do now, with the attribute stats we already have).\n> And one for the \"unmatched part\" where we know the values on the outer\n> side are NULL (and then we can easily \"fake\" stats with null_frac=1.0).\n>\n>\n> I really hope what I just wrote makes at least a little bit of sense.\n>\n>\n> regards\n>\nI am also interested in this problem.\n\nI did some refactoring of the source code in the patch, moved the \ncalculation of unmatched_fraction to eqjoinsel_inner.\nI wrote myself in this commit as a co-author, if you don't mind, and I'm \ngoing to continue working.\n\n\nOn 26.06.2023 12:22, Andrey Lepikhov wrote:\n> On 24/6/2023 17:23, Tomas Vondra wrote:\n>> I really hope what I just wrote makes at least a little bit of sense.\n> Throw in one more example:\n>\n> SELECT i AS id INTO l FROM generate_series(1,100000) i;\n> CREATE TABLE r (id int8, v text);\n> INSERT INTO r (id, v) VALUES (1, 't'), (-1, 'f');\n> ANALYZE l,r;\n> EXPLAIN ANALYZE\n> SELECT * FROM l LEFT OUTER JOIN r ON (r.id = l.id) WHERE r.v IS NULL;\n>\n> Here you can see the same kind of underestimation:\n> Hash Left Join (... rows=500 width=14) (... 
rows=99999 ...)\n>\n> So the eqjoinsel_unmatch_left() function should be modified for the \n> case where nd1<nd2.\n>\nUnfortunately, this patch could not fix the cardinality calculation in \nthis request, I'll try to look and figure out what is missing here.\n\n*postgres=# SELECT i AS id INTO l FROM generate_series(1,100000) i;\nCREATE TABLE r (id int8, v text);\nINSERT INTO r (id, v) VALUES (1, 't'), (-1, 'f');\nANALYZE l,r;\nEXPLAIN ANALYZE\nSELECT * FROM l LEFT OUTER JOIN r ON (r.id = l.id) WHERE r.v IS NULL;\nSELECT 100000\nCREATE TABLE\nINSERT 0 2\nANALYZE\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------\n Hash Left Join (cost=1.04..1819.07 rows=1 width=14) (actual \ntime=0.143..114.792 rows=99999 loops=1)\n Hash Cond: (l.id = r.id)\n Filter: (r.v IS NULL)\n Rows Removed by Filter: 1\n -> Seq Scan on l (cost=0.00..1443.00 rows=100000 width=4) (actual \ntime=0.027..35.278 rows=100000 loops=1)\n -> Hash (cost=1.02..1.02 rows=2 width=10) (actual \ntime=0.014..0.017 rows=2 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 9kB\n -> Seq Scan on r (cost=0.00..1.02 rows=2 width=10) (actual \ntime=0.005..0.007 rows=2 loops=1)\n Planning Time: 0.900 ms\n Execution Time: 126.180 ms\n(10 rows)*\n\n\nAs in the previous query, even with applied the patch, the cardinality \nis calculated poorly here, I would even say that it has become worse:\n\nEXPLAIN ANALYZE\n SELECT * FROM large FULL JOIN small ON (large.id = small.id)\nWHERE (large.a IS NULL);\n\nMASTER:\n\n*QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------\n Merge Full Join (cost=127921.69..299941.59 rows=56503 width=16) \n(actual time=795.092..795.094 rows=0 loops=1)\n Merge Cond: (small.id = large.id)\n Filter: (large.a IS NULL)\n Rows Removed by Filter: 1000000\n -> Sort (cost=158.51..164.16 rows=2260 width=8) (actual \ntime=0.038..0.046 rows=100 loops=1)\n Sort Key: small.id\n Sort Method: quicksort Memory: 29kB\n -> Seq Scan on small (cost=0.00..32.60 rows=2260 width=8) \n(actual time=0.013..0.022 rows=100 loops=1)\n -> Materialize (cost=127763.19..132763.44 rows=1000050 width=8) \n(actual time=363.016..649.103 rows=1000000 loops=1)\n -> Sort (cost=127763.19..130263.31 rows=1000050 width=8) \n(actual time=363.012..481.480 rows=1000000 loops=1)\n Sort Key: large.id\n Sort Method: external merge Disk: 17664kB\n -> Seq Scan on large (cost=0.00..14425.50 rows=1000050 \nwidth=8) (actual time=0.009..111.166 rows=1000000 loops=1)\n Planning Time: 0.124 ms\n Execution Time: 797.139 ms\n(15 rows)*\n\nWith patch:\n\n*QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------\n Hash Full Join (cost=3.25..18179.25 rows=999900 width=16) (actual \ntime=261.480..261.482 rows=0 loops=1)\n Hash Cond: (large.id = small.id)\n Filter: (large.a IS NULL)\n Rows Removed by Filter: 1000000\n -> Seq Scan on large (cost=0.00..14425.00 rows=1000000 width=8) \n(actual time=0.006..92.827 rows=1000000 loops=1)\n -> Hash (cost=2.00..2.00 rows=100 width=8) (actual \ntime=0.032..0.034 rows=100 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 12kB\n -> Seq Scan on small (cost=0.00..2.00 rows=100 width=8) \n(actual time=0.008..0.015 rows=100 loops=1)\n Planning Time: 0.151 ms\n Execution Time: 261.529 ms\n(10 rows)\n*\n\nIn addition, I found a few more queries, where the estimation of \ncardinality with the patch has become 
better:\n\n\nEXPLAIN ANALYZE\n SELECT * FROM small LEFT JOIN large ON (large.id = small.id)\nWHERE (small.b IS NULL);\n\nMASTER:\n\n*QUERY PLAN\n-------------------------------------------------------------------------------------------------------------\n Hash Right Join (cost=32.74..18758.45 rows=55003 width=16) (actual \ntime=0.100..0.104 rows=0 loops=1)\n Hash Cond: (large.id = small.id)\n -> Seq Scan on large (cost=0.00..14425.50 rows=1000050 width=8) \n(never executed)\n -> Hash (cost=32.60..32.60 rows=11 width=8) (actual \ntime=0.089..0.091 rows=0 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 8kB\n -> Seq Scan on small (cost=0.00..32.60 rows=11 width=8) \n(actual time=0.088..0.088 rows=0 loops=1)\n Filter: (b IS NULL)\n Rows Removed by Filter: 100\n Planning Time: 0.312 ms\n Execution Time: 0.192 ms\n(10 rows)*\n\nWith patch:\n\n*QUERY PLAN\n-----------------------------------------------------------------------------------------------------------\n Hash Right Join (cost=2.01..18177.02 rows=1 width=16) (actual \ntime=0.127..0.132 rows=0 loops=1)\n Hash Cond: (large.id = small.id)\n -> Seq Scan on large (cost=0.00..14425.00 rows=1000000 width=8) \n(never executed)\n -> Hash (cost=2.00..2.00 rows=1 width=8) (actual time=0.112..0.114 \nrows=0 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 8kB\n -> Seq Scan on small (cost=0.00..2.00 rows=1 width=8) \n(actual time=0.111..0.111 rows=0 loops=1)\n Filter: (b IS NULL)\n Rows Removed by Filter: 100\n Planning Time: 0.984 ms\n Execution Time: 0.237 ms\n(10 rows)*\n\nEXPLAIN ANALYZE\n SELECT * FROM large FULL JOIN small ON (large.id = small.id)\nWHERE (small.b IS NULL);\n\nMASTER:\n\n*QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------\n Merge Full Join (cost=127921.69..299941.59 rows=56503 width=16) \n(actual time=339.478..819.232 rows=999900 loops=1)\n Merge Cond: (small.id = large.id)\n Filter: (small.b IS NULL)\n Rows Removed by Filter: 100\n -> Sort (cost=158.51..164.16 rows=2260 width=8) (actual \ntime=0.129..0.136 rows=100 loops=1)\n Sort Key: small.id\n Sort Method: quicksort Memory: 29kB\n -> Seq Scan on small (cost=0.00..32.60 rows=2260 width=8) \n(actual time=0.044..0.075 rows=100 loops=1)\n -> Materialize (cost=127763.19..132763.44 rows=1000050 width=8) \n(actual time=339.260..605.444 rows=1000000 loops=1)\n -> Sort (cost=127763.19..130263.31 rows=1000050 width=8) \n(actual time=339.254..449.930 rows=1000000 loops=1)\n Sort Key: large.id\n Sort Method: external merge Disk: 17664kB\n -> Seq Scan on large (cost=0.00..14425.50 rows=1000050 \nwidth=8) (actual time=0.032..104.484 rows=1000000 loops=1)\n Planning Time: 0.324 ms\n Execution Time: 859.705 ms\n(15 rows)\n*\n\nWith patch:\n\n*QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------\n Hash Full Join (cost=3.25..18179.25 rows=999900 width=16) (actual \ntime=0.162..349.683 rows=999900 loops=1)\n Hash Cond: (large.id = small.id)\n Filter: (small.b IS NULL)\n Rows Removed by Filter: 100\n -> Seq Scan on large (cost=0.00..14425.00 rows=1000000 width=8) \n(actual time=0.021..95.972 rows=1000000 loops=1)\n -> Hash (cost=2.00..2.00 rows=100 width=8) (actual \ntime=0.125..0.127 rows=100 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 12kB\n -> Seq Scan on small (cost=0.00..2.00 rows=100 width=8) \n(actual time=0.030..0.059 rows=100 loops=1)\n Planning Time: 0.218 ms\n Execution Time: 385.819 ms\n(10 
rows)\n*\n\n**\n\nEXPLAIN ANALYZE\n SELECT * FROM large RIGHT JOIN small ON (large.id = small.id)\n WHERE (large.a IS NULL);\n\nMASTER:\n\n*QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------\n Merge Left Join (cost=127921.69..299941.59 rows=56503 width=16) \n(actual time=345.403..345.404 rows=0 loops=1)\n Merge Cond: (small.id = large.id)\n Filter: (large.a IS NULL)\n Rows Removed by Filter: 100\n -> Sort (cost=158.51..164.16 rows=2260 width=8) (actual \ntime=0.033..0.039 rows=100 loops=1)\n Sort Key: small.id\n Sort Method: quicksort Memory: 29kB\n -> Seq Scan on small (cost=0.00..32.60 rows=2260 width=8) \n(actual time=0.012..0.020 rows=100 loops=1)\n -> Materialize (cost=127763.19..132763.44 rows=1000050 width=8) \n(actual time=345.287..345.315 rows=101 loops=1)\n -> Sort (cost=127763.19..130263.31 rows=1000050 width=8) \n(actual time=345.283..345.295 rows=101 loops=1)\n Sort Key: large.id\n Sort Method: external merge Disk: 17664kB\n -> Seq Scan on large (cost=0.00..14425.50 rows=1000050 \nwidth=8) (actual time=0.009..104.648 rows=1000000 loops=1)\n Planning Time: 0.098 ms\n Execution Time: 347.807 ms\n(15 rows)*\n\nWith patch:\n\n*QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------\n Hash Right Join (cost=3.25..18179.25 rows=100 width=16) (actual \ntime=209.838..209.842 rows=0 loops=1)\n Hash Cond: (large.id = small.id)\n Filter: (large.a IS NULL)\n Rows Removed by Filter: 100\n -> Seq Scan on large (cost=0.00..14425.00 rows=1000000 width=8) \n(actual time=0.006..91.571 rows=1000000 loops=1)\n -> Hash (cost=2.00..2.00 rows=100 width=8) (actual \ntime=0.034..0.036 rows=100 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 12kB\n -> Seq Scan on small (cost=0.00..2.00 rows=100 width=8) \n(actual time=0.008..0.016 rows=100 loops=1)\n Planning Time: 0.168 ms\n Execution Time: 209.883 ms\n(10 rows)*",
"msg_date": "Mon, 26 Jun 2023 21:15:07 +0300",
"msg_from": "Alena Rybakina <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Problems with estimating OR conditions, IS NULL on LEFT JOINs"
},
{
"msg_contents": "\n\nOn 6/26/23 20:15, Alena Rybakina wrote:\n> Hi, all!\n> \n> On 24.06.2023 14:23, Tomas Vondra wrote:\n>> On 6/24/23 02:08, Tom Lane wrote:\n>>> Tomas Vondra <[email protected]> writes:\n>>>> The problem is that the selectivity for \"IS NULL\" is estimated using the\n>>>> table-level statistics. But the LEFT JOIN entirely breaks the idea that\n>>>> the null_frac has anything to do with NULLs in the join result.\n>>> Right.\n>>>\n>>>> I wonder how to improve this, say by adjusting the IS NULL selectivity\n>>>> when we know to operate on the outer side of the join. We're able to\n>>>> do this for antijoins, so maybe we could do that here, somehow?\n>>> This mess is part of the long-term plan around the work I've been doing\n>>> on outer-join-aware Vars. We now have infrastructure that can let\n>>> the estimator routines see \"oh, this Var isn't directly from a scan\n>>> of its table, it's been passed through a potentially-nulling outer\n>>> join --- and I can see which one\". I don't have more than vague ideas\n>>> about what happens next, but that is clearly an essential step on the\n>>> road to doing better.\n>>>\n>> I was wondering if that work on outer-join-aware Vars could help with\n>> this, but I wasn't following it very closely. I agree the ability to\n>> check if the Var could be NULL due to an outer join seems useful, as it\n>> says whether applying raw attribute statistics makes sense or not.\n>>\n>> I was thinking about what to do for the case when that's not possible,\n>> i.e. when the Var refers to nullable side of the join. Knowing that this\n>> is happening is clearly not enough - we need to know how many new NULLs\n>> are \"injected\" into the join result, and \"communicate\" that to the\n>> estimation routines.\n>>\n>> Attached is a very ugly experimental patch doing that, and with it the\n>> estimate changes to this:\n>>\n>> QUERY PLAN\n>> ----------------------------------------------------------------------\n>> Hash Left Join (cost=3.25..18179.88 rows=999900 width=16)\n>> (actual time=0.528..596.151 rows=999900 loops=1)\n>> Hash Cond: (large.id = small.id)\n>> Filter: ((small.id IS NULL) OR\n>> (large.a = ANY ('{1000,2000,3000,4000,5000}'::integer[])))\n>> Rows Removed by Filter: 100\n>> -> Seq Scan on large (cost=0.00..14425.00 rows=1000000 width=8)\n>> (actual time=0.069..176.138 rows=1000000 loops=1)\n>> -> Hash (cost=2.00..2.00 rows=100 width=8)\n>> (actual time=0.371..0.373 rows=100 loops=1)\n>> Buckets: 1024 Batches: 1 Memory Usage: 12kB\n>> -> Seq Scan on small (cost=0.00..2.00 rows=100 width=8)\n>> (actual time=0.032..0.146 rows=100 loops=1)\n>> Planning Time: 3.845 ms\n>> Execution Time: 712.405 ms\n>> (10 rows)\n>>\n>> Seems nice, but. The patch is pretty ugly, I don't claim it works for\n>> other queries or that this is exactly what we should do. It calculates\n>> \"unmatched frequency\" next to eqjoinsel_inner, stashes that info into\n>> sjinfo and the estimator (nulltestsel) then uses that to adjust the\n>> nullfrac it gets from the statistics.\n>>\n>> The good thing is this helps even for IS NULL checks on non-join-key\n>> columns (where we don't switch to an antijoin), but there's a couple\n>> things that I dislike ...\n>>\n>> 1) It's not restricted to outer joins or anything like that (this is\n>> mostly just my laziness / interest in one particular query, but also\n>> something the outer-join-aware patch might help with).\n>>\n>> 2) We probably don't want to pass this kind of information through\n>> sjinfo. 
That was the simplest thing for an experimental patch, but I\n>> suspect it's not the only piece of information we may need to pass to\n>> the lower levels of estimation code.\n>>\n>> 3) I kinda doubt we actually want to move this responsibility (to\n>> consider fraction of unmatched rows) to the low-level estimation\n>> routines (e.g. nulltestsel and various others). AFAICS this just\n>> \"introduces NULLs\" into the relation, so maybe we could \"adjust\" the\n>> attribute statistics (in examine_variable?) by inflating null_frac and\n>> modifying the other frequencies in MCV/histogram.\n>>\n>> 4) But I'm not sure we actually want to do that in these low-level\n>> selectivity functions. The outer join essentially produces output with\n>> two subsets - one with matches on the outer side, one without them. But\n>> the side without matches has NULLs in all columns. In a way, we know\n>> exactly how are these columns correlated - if we do the usual estimation\n>> (even with the null_frac adjusted), we just throw this information away.\n>> And when there's a lot of rows without a match, that seems bad.\n>>\n>> So maybe we should split the join estimate into two parts, one for each\n>> subset of the join result. One for the rows with a match (and then we\n>> can just do what we do now, with the attribute stats we already have).\n>> And one for the \"unmatched part\" where we know the values on the outer\n>> side are NULL (and then we can easily \"fake\" stats with null_frac=1.0).\n>>\n>>\n>> I really hope what I just wrote makes at least a little bit of sense.\n>>\n>>\n>> regards\n>>\n> I am also interested in this problem.\n> \n> I did some refactoring of the source code in the patch, moved the\n> calculation of unmatched_fraction to eqjoinsel_inner.\n> I wrote myself in this commit as a co-author, if you don't mind, and I'm\n> going to continue working.\n> \n\nSure, if you want to take over the patch and continue working on it,\nthat's perfectly fine with me. I'm happy to cooperate on it, doing\nreviews etc.\n\n> \n> On 26.06.2023 12:22, Andrey Lepikhov wrote:\n>> On 24/6/2023 17:23, Tomas Vondra wrote:\n>>> I really hope what I just wrote makes at least a little bit of sense.\n>> Throw in one more example:\n>>\n>> SELECT i AS id INTO l FROM generate_series(1,100000) i;\n>> CREATE TABLE r (id int8, v text);\n>> INSERT INTO r (id, v) VALUES (1, 't'), (-1, 'f');\n>> ANALYZE l,r;\n>> EXPLAIN ANALYZE\n>> SELECT * FROM l LEFT OUTER JOIN r ON (r.id = l.id) WHERE r.v IS NULL;\n>>\n>> Here you can see the same kind of underestimation:\n>> Hash Left Join (... rows=500 width=14) (... 
rows=99999 ...)\n>>\n>> So the eqjoinsel_unmatch_left() function should be modified for the\n>> case where nd1<nd2.\n>>\n> Unfortunately, this patch could not fix the cardinality calculation in\n> this request, I'll try to look and figure out what is missing here.\n> \n> *postgres=# SELECT i AS id INTO l FROM generate_series(1,100000) i;\n> CREATE TABLE r (id int8, v text);\n> INSERT INTO r (id, v) VALUES (1, 't'), (-1, 'f');\n> ANALYZE l,r;\n> EXPLAIN ANALYZE\n> SELECT * FROM l LEFT OUTER JOIN r ON (r.id = l.id) WHERE r.v IS NULL;\n> SELECT 100000\n> CREATE TABLE\n> INSERT 0 2\n> ANALYZE\n> QUERY\n> PLAN \n> ---------------------------------------------------------------------------------------------------------------\n> Hash Left Join (cost=1.04..1819.07 rows=1 width=14) (actual\n> time=0.143..114.792 rows=99999 loops=1)\n> Hash Cond: (l.id = r.id)\n> Filter: (r.v IS NULL)\n> Rows Removed by Filter: 1\n> -> Seq Scan on l (cost=0.00..1443.00 rows=100000 width=4) (actual\n> time=0.027..35.278 rows=100000 loops=1)\n> -> Hash (cost=1.02..1.02 rows=2 width=10) (actual time=0.014..0.017\n> rows=2 loops=1)\n> Buckets: 1024 Batches: 1 Memory Usage: 9kB\n> -> Seq Scan on r (cost=0.00..1.02 rows=2 width=10) (actual\n> time=0.005..0.007 rows=2 loops=1)\n> Planning Time: 0.900 ms\n> Execution Time: 126.180 ms\n> (10 rows)*\n> \n> \n> As in the previous query, even with applied the patch, the cardinality\n> is calculated poorly here, I would even say that it has become worse:\n> \n> EXPLAIN ANALYZE\n> SELECT * FROM large FULL JOIN small ON (large.id = small.id)\n> WHERE (large.a IS NULL);\n> \n> MASTER:\n> \n> * QUERY\n> PLAN \n> -----------------------------------------------------------------------------------------------------------------------------------\n> Merge Full Join (cost=127921.69..299941.59 rows=56503 width=16)\n> (actual time=795.092..795.094 rows=0 loops=1)\n> Merge Cond: (small.id = large.id)\n> Filter: (large.a IS NULL)\n> Rows Removed by Filter: 1000000\n> -> Sort (cost=158.51..164.16 rows=2260 width=8) (actual\n> time=0.038..0.046 rows=100 loops=1)\n> Sort Key: small.id\n> Sort Method: quicksort Memory: 29kB\n> -> Seq Scan on small (cost=0.00..32.60 rows=2260 width=8)\n> (actual time=0.013..0.022 rows=100 loops=1)\n> -> Materialize (cost=127763.19..132763.44 rows=1000050 width=8)\n> (actual time=363.016..649.103 rows=1000000 loops=1)\n> -> Sort (cost=127763.19..130263.31 rows=1000050 width=8)\n> (actual time=363.012..481.480 rows=1000000 loops=1)\n> Sort Key: large.id\n> Sort Method: external merge Disk: 17664kB\n> -> Seq Scan on large (cost=0.00..14425.50 rows=1000050\n> width=8) (actual time=0.009..111.166 rows=1000000 loops=1)\n> Planning Time: 0.124 ms\n> Execution Time: 797.139 ms\n> (15 rows)*\n> \n> With patch:\n> \n> * QUERY\n> PLAN \n> ----------------------------------------------------------------------------------------------------------------------\n> Hash Full Join (cost=3.25..18179.25 rows=999900 width=16) (actual\n> time=261.480..261.482 rows=0 loops=1)\n> Hash Cond: (large.id = small.id)\n> Filter: (large.a IS NULL)\n> Rows Removed by Filter: 1000000\n> -> Seq Scan on large (cost=0.00..14425.00 rows=1000000 width=8)\n> (actual time=0.006..92.827 rows=1000000 loops=1)\n> -> Hash (cost=2.00..2.00 rows=100 width=8) (actual\n> time=0.032..0.034 rows=100 loops=1)\n> Buckets: 1024 Batches: 1 Memory Usage: 12kB\n> -> Seq Scan on small (cost=0.00..2.00 rows=100 width=8)\n> (actual time=0.008..0.015 rows=100 loops=1)\n> Planning Time: 0.151 ms\n> Execution Time: 261.529 
ms\n> (10 rows)\n> *\n> \n> In addition, I found a few more queries, where the estimation of\n> cardinality with the patch has become better:\n> \n> \n\nYes, this does not surprise me at all - the patch was very early /\nexperimental code, and I only really aimed it at that single example\nquery. So don't hesitate to rethink what/how the patch works.\n\nI do think collecting / constructing a wider set of queries with joins\nof different types is going to be an important step in working on this\npatch and making sure it doesn't break something.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 28 Jun 2023 23:53:32 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Problems with estimating OR conditions, IS NULL on LEFT JOINs"
},
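A starting point for such a query collection might look like the sketch below; it is purely editorial, built from the tables already defined in this thread rather than taken from the patch:

  -- IS NULL on the join key vs. a non-key column, across join types:
  EXPLAIN SELECT * FROM large LEFT JOIN small ON large.id = small.id
   WHERE small.id IS NULL;
  EXPLAIN SELECT * FROM large LEFT JOIN small ON large.id = small.id
   WHERE small.b IS NULL;
  EXPLAIN SELECT * FROM large RIGHT JOIN small ON large.id = small.id
   WHERE large.a IS NULL;
  EXPLAIN SELECT * FROM large FULL JOIN small ON large.id = small.id
   WHERE small.b IS NULL;
  -- Plus Andrey's l/r example, which exercises the other ordering of
  -- nd1 vs. nd2:
  EXPLAIN SELECT * FROM l LEFT JOIN r ON r.id = l.id WHERE r.v IS NULL;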
{
"msg_contents": "Hi, all!\n\nOn 26.06.2023 12:22, Andrey Lepikhov wrote:\n> On 24/6/2023 17:23, Tomas Vondra wrote:\n>> I really hope what I just wrote makes at least a little bit of sense.\n> Throw in one more example:\n>\n> SELECT i AS id INTO l FROM generate_series(1,100000) i;\n> CREATE TABLE r (id int8, v text);\n> INSERT INTO r (id, v) VALUES (1, 't'), (-1, 'f');\n> ANALYZE l,r;\n> EXPLAIN ANALYZE\n> SELECT * FROM l LEFT OUTER JOIN r ON (r.id = l.id) WHERE r.v IS NULL;\n>\n> Here you can see the same kind of underestimation:\n> Hash Left Join (... rows=500 width=14) (... rows=99999 ...)\n>\n> So the eqjoinsel_unmatch_left() function should be modified for the \n> case where nd1<nd2.\n>\n>\n> Unfortunately, this patch could not fix the cardinality calculation in \n> this request, I'll try to look and figure out what is missing here.\n\nI tried to fix the cardinality score in the query above by changing:\n\ndiff --git a/src/backend/utils/adt/selfuncs.c \nb/src/backend/utils/adt/selfuncs.c\nindex 8e18aa1dd2b..40901836146 100644\n--- a/src/backend/utils/adt/selfuncs.c\n+++ b/src/backend/utils/adt/selfuncs.c\n@@ -2604,11 +2604,16 @@ eqjoinsel_inner(Oid opfuncoid, Oid collation,\n * if we're calculating fraction of NULLs or fraction \nof unmatched rows.\n */\n // unmatchfreq = (1.0 - nullfrac1) * (1.0 - nullfrac2);\n- if (nd1 > nd2)\n+ if (nd1 != nd2)\n {\n- selec /= nd1;\n- *unmatched_frac = (nd1 - nd2) * 1.0 / nd1;\n+ selec /= Max(nd1, nd2);\n+ *unmatched_frac = abs(nd1 - nd2) * 1.0 / \nMax(nd1, nd2);\n }\n+ /*if (nd1 > nd2)\n+ {\n+ selec /= nd1;\n+ *unmatched_frac = nd1 - nd2 * 1.0 / nd1;\n+ }*/\n else\n {\n selec /= nd2;\n\nand it worked:\n\nSELECT i AS id INTO l FROM generate_series(1,100000) i;\nCREATE TABLE r (id int8, v text);\nINSERT INTO r (id, v) VALUES (1, 't'), (-1, 'f');\nANALYZE l,r;\nEXPLAIN ANALYZE\nSELECT * FROM l LEFT OUTER JOIN r ON (r.id = l.id) WHERE r.v IS NULL;\nERROR: relation \"l\" already exists\nERROR: relation \"r\" already exists\nINSERT 0 2\nANALYZE\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------\n Hash Left Join (cost=1.09..1944.13 rows=99998 width=14) (actual \ntime=0.152..84.184 rows=99999 loops=1)\n Hash Cond: (l.id = r.id)\n Filter: (r.v IS NULL)\n Rows Removed by Filter: 2\n -> Seq Scan on l (cost=0.00..1443.00 rows=100000 width=4) (actual \ntime=0.040..27.635 rows=100000 loops=1)\n -> Hash (cost=1.04..1.04 rows=4 width=10) (actual \ntime=0.020..0.022 rows=4 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 9kB\n -> Seq Scan on r (cost=0.00..1.04 rows=4 width=10) (actual \ntime=0.009..0.011 rows=4 loops=1)\n Planning Time: 0.954 ms\n Execution Time: 92.309 ms\n(10 rows)\n\nIt looks too simple and I suspect that I might have missed something \nsomewhere, but so far I haven't found any examples of queries where it \ndoesn't work.\n\nI didn't see it breaking anything in the examples from my previous \nletter [1].\n\n1. 
\nhttps://www.postgresql.org/message-id/7af1464e-2e24-cfb1-b6d4-1544757f8cfa%40yandex.ru\n\n\nUnfortunately, I can't understand your idea from point 4, please explain it?\n\nThe good thing is this helps even for IS NULL checks on non-join-key\ncolumns (where we don't switch to an antijoin), but there's a couple\nthings that I dislike ...\n\n1) It's not restricted to outer joins or anything like that (this is\nmostly just my laziness / interest in one particular query, but also\nsomething the outer-join-aware patch might help with).\n\n2) We probably don't want to pass this kind of information through\nsjinfo. That was the simplest thing for an experimental patch, but I\nsuspect it's not the only piece of information we may need to pass to\nthe lower levels of estimation code.\n\n3) I kinda doubt we actually want to move this responsibility (to\nconsider fraction of unmatched rows) to the low-level estimation\nroutines (e.g. nulltestsel and various others). AFAICS this just\n\"introduces NULLs\" into the relation, so maybe we could \"adjust\" the\nattribute statistics (in examine_variable?) by inflating null_frac and\nmodifying the other frequencies in MCV/histogram.\n\n4) But I'm not sure we actually want to do that in these low-level\nselectivity functions. The outer join essentially produces output with\ntwo subsets - one with matches on the outer side, one without them. But\nthe side without matches has NULLs in all columns. In a way, we know\nexactly how are these columns correlated - if we do the usual estimation\n(even with the null_frac adjusted), we just throw this information away.\nAnd when there's a lot of rows without a match, that seems bad.\n\n-- \nRegards,\nAlena Rybakina\nPostgres Professional",
"msg_date": "Thu, 6 Jul 2023 16:51:48 +0300",
"msg_from": "Alena Rybakina <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Problems with estimating OR conditions, IS NULL on LEFT JOINs"
},
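For anyone reproducing Andrey's l/r example, the asymmetry the diff above is aimed at shows up directly in the statistics (an illustrative aside; pg_stats is the standard statistics view):

  SELECT tablename, attname, n_distinct
    FROM pg_stats
   WHERE (tablename, attname) IN (('l', 'id'), ('r', 'id'));
  -- Expect n_distinct = -1 (i.e. treated as all-distinct) for l.id and
  -- 2 for r.id; which side ends up as nd1 vs. nd2 decides which branch
  -- of the (un)patched eqjoinsel code runs, hence the need to handle
  -- both orderings symmetrically.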
{
"msg_contents": "\n\nOn 7/6/23 15:51, Alena Rybakina wrote:\n> Hi, all!\n> \n> On 26.06.2023 12:22, Andrey Lepikhov wrote:\n>> On 24/6/2023 17:23, Tomas Vondra wrote:\n>>> I really hope what I just wrote makes at least a little bit of sense.\n>> Throw in one more example:\n>>\n>> SELECT i AS id INTO l FROM generate_series(1,100000) i;\n>> CREATE TABLE r (id int8, v text);\n>> INSERT INTO r (id, v) VALUES (1, 't'), (-1, 'f');\n>> ANALYZE l,r;\n>> EXPLAIN ANALYZE\n>> SELECT * FROM l LEFT OUTER JOIN r ON (r.id = l.id) WHERE r.v IS NULL;\n>>\n>> Here you can see the same kind of underestimation:\n>> Hash Left Join (... rows=500 width=14) (... rows=99999 ...)\n>>\n>> So the eqjoinsel_unmatch_left() function should be modified for the\n>> case where nd1<nd2.\n>>\n>>\n>> Unfortunately, this patch could not fix the cardinality calculation in\n>> this request, I'll try to look and figure out what is missing here.\n> \n> I tried to fix the cardinality score in the query above by changing:\n> \n> diff --git a/src/backend/utils/adt/selfuncs.c\n> b/src/backend/utils/adt/selfuncs.c\n> index 8e18aa1dd2b..40901836146 100644\n> --- a/src/backend/utils/adt/selfuncs.c\n> +++ b/src/backend/utils/adt/selfuncs.c\n> @@ -2604,11 +2604,16 @@ eqjoinsel_inner(Oid opfuncoid, Oid collation,\n> * if we're calculating fraction of NULLs or fraction of\n> unmatched rows.\n> */\n> // unmatchfreq = (1.0 - nullfrac1) * (1.0 - nullfrac2);\n> - if (nd1 > nd2)\n> + if (nd1 != nd2)\n> {\n> - selec /= nd1;\n> - *unmatched_frac = (nd1 - nd2) * 1.0 / nd1;\n> + selec /= Max(nd1, nd2);\n> + *unmatched_frac = abs(nd1 - nd2) * 1.0 /\n> Max(nd1, nd2);\n> }\n> + /*if (nd1 > nd2)\n> + {\n> + selec /= nd1;\n> + *unmatched_frac = nd1 - nd2 * 1.0 / nd1;\n> + }*/\n> else\n> {\n> selec /= nd2;\n> \n> and it worked:\n> \n> SELECT i AS id INTO l FROM generate_series(1,100000) i;\n> CREATE TABLE r (id int8, v text);\n> INSERT INTO r (id, v) VALUES (1, 't'), (-1, 'f');\n> ANALYZE l,r;\n> EXPLAIN ANALYZE\n> SELECT * FROM l LEFT OUTER JOIN r ON (r.id = l.id) WHERE r.v IS NULL;\n> ERROR: relation \"l\" already exists\n> ERROR: relation \"r\" already exists\n> INSERT 0 2\n> ANALYZE\n> QUERY\n> PLAN \n> ---------------------------------------------------------------------------------------------------------------\n> Hash Left Join (cost=1.09..1944.13 rows=99998 width=14) (actual\n> time=0.152..84.184 rows=99999 loops=1)\n> Hash Cond: (l.id = r.id)\n> Filter: (r.v IS NULL)\n> Rows Removed by Filter: 2\n> -> Seq Scan on l (cost=0.00..1443.00 rows=100000 width=4) (actual\n> time=0.040..27.635 rows=100000 loops=1)\n> -> Hash (cost=1.04..1.04 rows=4 width=10) (actual time=0.020..0.022\n> rows=4 loops=1)\n> Buckets: 1024 Batches: 1 Memory Usage: 9kB\n> -> Seq Scan on r (cost=0.00..1.04 rows=4 width=10) (actual\n> time=0.009..0.011 rows=4 loops=1)\n> Planning Time: 0.954 ms\n> Execution Time: 92.309 ms\n> (10 rows)\n> \n> It looks too simple and I suspect that I might have missed something\n> somewhere, but so far I haven't found any examples of queries where it\n> doesn't work.\n> \n> I didn't see it breaking anything in the examples from my previous\n> letter [1].\n> \n\nI think it's correct. Or at least it doesn't break anything my patch\ndidn't already break. 
My patch was simply written for one specific\nquery, so it didn't consider the option that the nd1 and nd2 values\nmight be in the opposite direction ...\n\n> 1.\n> https://www.postgresql.org/message-id/7af1464e-2e24-cfb1-b6d4-1544757f8cfa%40yandex.ru\n> \n> \n> Unfortunately, I can't understand your idea from point 4, please explain it?\n> \n> The good thing is this helps even for IS NULL checks on non-join-key\n> columns (where we don't switch to an antijoin), but there's a couple\n> things that I dislike ...\n> \n> 1) It's not restricted to outer joins or anything like that (this is\n> mostly just my laziness / interest in one particular query, but also\n> something the outer-join-aware patch might help with).\n> \n> 2) We probably don't want to pass this kind of information through\n> sjinfo. That was the simplest thing for an experimental patch, but I\n> suspect it's not the only piece of information we may need to pass to\n> the lower levels of estimation code.\n> \n> 3) I kinda doubt we actually want to move this responsibility (to\n> consider fraction of unmatched rows) to the low-level estimation\n> routines (e.g. nulltestsel and various others). AFAICS this just\n> \"introduces NULLs\" into the relation, so maybe we could \"adjust\" the\n> attribute statistics (in examine_variable?) by inflating null_frac and\n> modifying the other frequencies in MCV/histogram.\n> \n> 4) But I'm not sure we actually want to do that in these low-level\n> selectivity functions. The outer join essentially produces output with\n> two subsets - one with matches on the outer side, one without them. But\n> the side without matches has NULLs in all columns. In a way, we know\n> exactly how are these columns correlated - if we do the usual estimation\n> (even with the null_frac adjusted), we just throw this information away.\n> And when there's a lot of rows without a match, that seems bad.\n> \n\nWell, one option would be to modify all selectivity functions to do\nsomething like the patch does for nulltestsel(). That seems a bit\ncumbersome because why should those places care about maybe running on\nthe outer side of a join, or what? For code in extensions this would be\nparticularly problematic, I think.\n\nSo what I was thinking about doing this in a way that'd make this\nautomatic, without having to modify the selectivity functions.\n\nOption (3) is very simple - examine_variable would simply adjust the\nstatistics by tweaking the null_frac field, when looking at variables on\nthe outer side of the join. But it has issues when estimating multiple\nconditions.\n\nImagine t1 has 1M rows, and we want to estimate\n\n SELECT * FROM t1 LEFT JOIN t2 ON (t1.id = t2.id)\n WHERE ((t2.a=1) AND (t2.b=1))\n\nbut only 50% of the t1 rows has a match in t2. Assume each of the t2\nconditions matches 100% rows in the table. With the correction, this\nmeans 50% selectivity for each condition. And if we combine them the\nusual way, it's 0.5 * 0.5 = 0.25.\n\nBut we know all the rows in the \"matching\" part match the condition, so\nthe correct selectivity should be 0.5.\n\nIn a way, this is just another case of estimation issues due to the\nassumption of independence.\n\nBut (4) was suggesting we could improve this essentially by treating the\njoin as two distinct sets of rows\n\n - the inner join result\n\n - rows without match on the outer side\n\nFor the inner part, we would do estimates as now (using the regular\nper-column statistics). 
If we knew the conditions match 100% rows, we'd\nstill get 100% when the conditions are combined.\n\nFor the second part of the join we know the outer side is just NULLs in\nall columns, and that'd make the estimation much simpler for most\nclauses. We'd just need to have \"fake\" statistics with null_frac=1.0 and\nthat's it.\n\nAnd then we'd just combine these two selectivities. If we know the inner\nside is 50% and all rows match the conditions, and no rows in the other\n50% match, the selectivity is 50%.\n\ninner_part * inner_sel + outer_part * outer_sel = 0.5 * 1.0 + 0.0 = 0.5\n\nNow, we still have issues with independence assumption in each of these\nparts separately. But that's OK, I think.\n\nI think (4) could be implemented by doing the current estimation for the\n inner part, and by tweaking examine_variable in the \"outer\" part in a\nway similar to (3). Except that it just sets null_frac=1.0 everywhere.\n\n\n\nFWIW, I used \"AND\" in the example for simplicity, but that'd probably be\npushed to the baserel level. There'd need to be OR to keep it at the\njoin level, but the overall issue is the same, I think.\n\nAlso, this entirely ignores extended statistics - I have no idea how we\nmight tweak those in (3). For (4) we don't need to tweak those at all,\nbecause for inner part we can just apply them as is, and for outer part\nit's irrelevant because everything is NULL.\n\n\nI hope this makes more sense. If not, let me know and I'll try to\nexplain it better.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 6 Jul 2023 17:38:38 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Problems with estimating OR conditions, IS NULL on LEFT JOINs"
},
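The 0.5 figure from the two-subset formula above is easy to sanity-check with back-of-the-envelope arithmetic; a throwaway sketch using the hypothetical t1/t2 numbers from the message (not measurements):

  -- matched half: every row satisfies (t2.a = 1 AND t2.b = 1);
  -- unmatched half: the t2 columns are all NULL, so the AND is never true.
  SELECT 0.5 * 1.0 + 0.5 * 0.0 AS split_estimate,  -- = 0.50
         0.5 * 0.5             AS naive_estimate;  -- = 0.25 under independence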
{
"msg_contents": "\n> Well, one option would be to modify all selectivity functions to do\n> something like the patch does for nulltestsel(). That seems a bit\n> cumbersome because why should those places care about maybe running on\n> the outer side of a join, or what? For code in extensions this would be\n> particularly problematic, I think.\nAgree. I would say that we can try it if nothing else works out.\n> So what I was thinking about doing this in a way that'd make this\n> automatic, without having to modify the selectivity functions.\n>\n> Option (3) is very simple - examine_variable would simply adjust the\n> statistics by tweaking the null_frac field, when looking at variables on\n> the outer side of the join. But it has issues when estimating multiple\n> conditions.\n>\n> Imagine t1 has 1M rows, and we want to estimate\n>\n> SELECT * FROM t1 LEFT JOIN t2 ON (t1.id = t2.id)\n> WHERE ((t2.a=1) AND (t2.b=1))\n>\n> but only 50% of the t1 rows has a match in t2. Assume each of the t2\n> conditions matches 100% rows in the table. With the correction, this\n> means 50% selectivity for each condition. And if we combine them the\n> usual way, it's 0.5 * 0.5 = 0.25.\n>\n> But we know all the rows in the \"matching\" part match the condition, so\n> the correct selectivity should be 0.5.\n>\n> In a way, this is just another case of estimation issues due to the\n> assumption of independence.\n> FWIW, I used \"AND\" in the example for simplicity, but that'd probably be\n> pushed to the baserel level. There'd need to be OR to keep it at the\n> join level, but the overall issue is the same, I think.\n>\n> Also, this entirely ignores extended statistics - I have no idea how we\n> might tweak those in (3).\n\nI understood the idea - it is very similar to what is implemented in the \ncurrent patch.\n\nBut I don't understand how to do it in the examine_variable function, to \nbe honest.\n\n> But (4) was suggesting we could improve this essentially by treating the\n> join as two distinct sets of rows\n>\n> - the inner join result\n>\n> - rows without match on the outer side\n>\n> For the inner part, we would do estimates as now (using the regular\n> per-column statistics). If we knew the conditions match 100% rows, we'd\n> still get 100% when the conditions are combined.\n>\n> For the second part of the join we know the outer side is just NULLs in\n> all columns, and that'd make the estimation much simpler for most\n> clauses. We'd just need to have \"fake\" statistics with null_frac=1.0 and\n> that's it.\n>\n> And then we'd just combine these two selectivities. If we know the inner\n> side is 50% and all rows match the conditions, and no rows in the other\n> 50% match, the selectivity is 50%.\n>\n> inner_part * inner_sel + outer_part * outer_sel = 0.5 * 1.0 + 0.0 = 0.5\n>\n> Now, we still have issues with independence assumption in each of these\n> parts separately. But that's OK, I think.\n>\n> I think (4) could be implemented by doing the current estimation for the\n> inner part, and by tweaking examine_variable in the \"outer\" part in a\n> way similar to (3). Except that it just sets null_frac=1.0 everywhere.\n>\n> For (4) we don't need to tweak those at all,\n> because for inner part we can just apply them as is, and for outer part\n> it's irrelevant because everything is NULL.\nI like this idea the most) I'll try to start with this and implement the \npatch.\n> I hope this makes more sense. 
If not, let me know and I'll try to\n> explain it better.\n\nThank you for your explanation)\n\nI will report back soon with the results or if I have any questions.\n\n-- \nRegards,\nAlena Rybakina\nPostgres Professional\n\n\n\n",
"msg_date": "Sat, 8 Jul 2023 11:29:52 +0300",
"msg_from": "Alena Rybakina <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Problems with estimating OR conditions, IS NULL on LEFT JOINs"
},
{
"msg_contents": "\n\nOn 7/8/23 10:29, Alena Rybakina wrote:\n> \n>> Well, one option would be to modify all selectivity functions to do\n>> something like the patch does for nulltestsel(). That seems a bit\n>> cumbersome because why should those places care about maybe running on\n>> the outer side of a join, or what? For code in extensions this would be\n>> particularly problematic, I think.\n> Agree. I would say that we can try it if nothing else works out.\n>> So what I was thinking about doing this in a way that'd make this\n>> automatic, without having to modify the selectivity functions.\n>>\n>> Option (3) is very simple - examine_variable would simply adjust the\n>> statistics by tweaking the null_frac field, when looking at variables on\n>> the outer side of the join. But it has issues when estimating multiple\n>> conditions.\n>>\n>> Imagine t1 has 1M rows, and we want to estimate\n>>\n>> SELECT * FROM t1 LEFT JOIN t2 ON (t1.id = t2.id)\n>> WHERE ((t2.a=1) AND (t2.b=1))\n>>\n>> but only 50% of the t1 rows has a match in t2. Assume each of the t2\n>> conditions matches 100% rows in the table. With the correction, this\n>> means 50% selectivity for each condition. And if we combine them the\n>> usual way, it's 0.5 * 0.5 = 0.25.\n>>\n>> But we know all the rows in the \"matching\" part match the condition, so\n>> the correct selectivity should be 0.5.\n>>\n>> In a way, this is just another case of estimation issues due to the\n>> assumption of independence.\n>> FWIW, I used \"AND\" in the example for simplicity, but that'd probably be\n>> pushed to the baserel level. There'd need to be OR to keep it at the\n>> join level, but the overall issue is the same, I think.\n>>\n>> Also, this entirely ignores extended statistics - I have no idea how we\n>> might tweak those in (3).\n> \n> I understood the idea - it is very similar to what is implemented in the\n> current patch.\n> \n> But I don't understand how to do it in the examine_variable function, to\n> be honest.\n> \n\nWell, I don't have a detailed plan either. In principle it shouldn't be\nthat hard, I think - examine_variable is loading the statistics, so it\ncould apply the same null_frac correction, just like nulltestsel would\ndo a bit later.\n\nThe main question is how to pass the information to examine_variable. It\ndoesn't get the SpecialJoinInfo (which is what nulltestsel used), so\nwe'd need to invent something new ... add a new argument?\n\n>> But (4) was suggesting we could improve this essentially by treating the\n>> join as two distinct sets of rows\n>>\n>> - the inner join result\n>>\n>> - rows without match on the outer side\n>>\n>> For the inner part, we would do estimates as now (using the regular\n>> per-column statistics). If we knew the conditions match 100% rows, we'd\n>> still get 100% when the conditions are combined.\n>>\n>> For the second part of the join we know the outer side is just NULLs in\n>> all columns, and that'd make the estimation much simpler for most\n>> clauses. We'd just need to have \"fake\" statistics with null_frac=1.0 and\n>> that's it.\n>>\n>> And then we'd just combine these two selectivities. If we know the inner\n>> side is 50% and all rows match the conditions, and no rows in the other\n>> 50% match, the selectivity is 50%.\n>>\n>> inner_part * inner_sel + outer_part * outer_sel = 0.5 * 1.0 + 0.0 = 0.5\n>>\n>> Now, we still have issues with independence assumption in each of these\n>> parts separately. 
But that's OK, I think.\n>>\n>> I think (4) could be implemented by doing the current estimation for the\n>> inner part, and by tweaking examine_variable in the \"outer\" part in a\n>> way similar to (3). Except that it just sets null_frac=1.0 everywhere.\n>>\n>> For (4) we don't need to tweak those at all,\n>> because for inner part we can just apply them as is, and for outer part\n>> it's irrelevant because everything is NULL.\n> I like this idea the most) I'll try to start with this and implement the\n> patch.\n\nGood to hear.\n\n>> I hope this makes more sense. If not, let me know and I'll try to\n>> explain it better.\n> \n> Thank you for your explanation)\n> \n> I will unsubscribe soon based on the results or if I have any questions.\n> \n\nOK\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sat, 8 Jul 2023 11:10:26 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Problems with estimating OR conditions, IS NULL on LEFT JOINs"
},
{
"msg_contents": "> Well, I don't have a detailed plan either. In principle it shouldn't be\n> that hard, I think - examine_variable is loading the statistics, so it\n> could apply the same null_frac correction, just like nulltestsel would\n> do a bit later.\n>\n> The main question is how to pass the information to examine_variable. It\n> doesn't get the SpecialJoinInfo (which is what nulltestsel used), so\n> we'd need to invent something new ... add a new argument?\n\nSorry I didn't answer right away, I could adapt the last version of the \npatch [2] to the current idea, but so far I have implemented\nit only for the situation where we save the number of zero values in \nSpecialJoinInfo variable.\n\nI'm starting to look for different functions scalararraysel_containment, \nboolvarsel and I try to find some bad cases for current problem,\nwhen I can fix in similar way it in those functions. But I'm not sure \nabout different ones functions:\n(mergejoinscansel, estimate_num_groups, estimate_hash_bucket_stats, \nget_restriction_variable, get_join_variables).\n\nThe examine_variable function is also called in them, and even though \nthere is no being outer join in them\n(the absence of a SpecialJoinInfo variable), I can't think of similar \ncases, when we have such problem caused by the same reasons.\n\n\nThe code passes all regression tests and I found no deterioration in \ncardinality prediction for queries from [1], except for one:\n\nEXPLAIN ANALYZE\n SELECT * FROM large FULL JOIN small ON (large.id = small.id)\nWHERE (large.a IS NULL);\n\nMASTER:\n\n*QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------\n\n Merge Full Join (cost=127921.69..299941.59 rows=56503 \nwidth=16)(actual time=795.092..795.094 rows=0 loops=1)\n\n Merge Cond: (small.id = large.id)\n Filter: (large.a IS NULL)\n Rows Removed by Filter: 1000000\n\n -> Sort (cost=158.51..164.16 rows=2260 width=8) \n(actualtime=0.038..0.046 rows=100 loops=1)\n\n Sort Key: small.id\n Sort Method: quicksort Memory: 29kB\n\n -> Seq Scan on small (cost=0.00..32.60 rows=2260 \nwidth=8)(actual time=0.013..0.022 rows=100 loops=1) -> Materialize \n(cost=127763.19..132763.44 rows=1000050 width=8)(actual \ntime=363.016..649.103 rows=1000000 loops=1) -> Sort \n(cost=127763.19..130263.31 rows=1000050 width=8)(actual \ntime=363.012..481.480 rows=1000000 loops=1)\n\n Sort Key: large.id\n Sort Method: external merge Disk: 17664kB\n\n -> Seq Scan on large (cost=0.00..14425.50 \nrows=1000050width=8) (actual time=0.009..111.166 rows=1000000 loops=1)\n\n Planning Time: 0.124 ms\n Execution Time: 797.139 ms\n(15 rows)*\n\nWith patch:\n\n*QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------\n\n Hash Full Join (cost=3.25..18179.25 rows=999900 width=16) \n(actualtime=261.480..261.482 rows=0 loops=1)\n\n Hash Cond: (large.id = small.id)\n Filter: (large.a IS NULL)\n Rows Removed by Filter: 1000000\n\n -> Seq Scan on large (cost=0.00..14425.00 rows=1000000 \nwidth=8)(actual time=0.006..92.827 rows=1000000 loops=1) -> Hash \n(cost=2.00..2.00 rows=100 width=8) (actualtime=0.032..0.034 rows=100 \nloops=1)\n\n Buckets: 1024 Batches: 1 Memory Usage: 12kB\n\n -> Seq Scan on small (cost=0.00..2.00 rows=100 \nwidth=8)(actual time=0.008..0.015 rows=100 loops=1)\n\n Planning Time: 0.151 ms\n Execution Time: 261.529 ms\n(10 rows)\n\n[1] \nhttps://www.mail-archive.com/[email protected]/msg146044.html\n\n[2] 
\nhttps://www.postgresql.org/message-id/148ff8f1-067b-1409-c754-af6117de9b7d%40yandex.ru\n\n\nUnfortunately, I found that my previous change leads to a big error in \npredicting selectivity.\nI will investigate this case in more detail and try to find a solution \n(I wrote the code and query below).\n\n// unmatchfreq = (1.0 - nullfrac1) * (1.0 - nullfrac2);\nif (nd1 != nd2)\n{\n selec /= Max(nd1, nd2);\n *unmatched_frac = abs(nd1 - nd2) * 1.0 / Max(nd1, nd2);\n}\nelse\n{\n selec /= nd2;\n *unmatched_frac = 0.0;\n}\n\n\npostgres=# explain analyze select * from large l1 inner join large l2 on \nl1.a is null where l1.a <100;\n\nMASTER:\n\nQUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=1000.00..35058.43 rows=1000000 width=16) (actual \ntime=91.846..93.622 rows=0 loops=1)\n -> Gather (cost=1000.00..10633.43 rows=1 width=8) (actual \ntime=91.844..93.619 rows=0 loops=1)\n Workers Planned: 2\n Workers Launched: 2\n -> Parallel Seq Scan on large l1 (cost=0.00..9633.33 rows=1 \nwidth=8) (actual time=86.153..86.154 rows=0 loops=3)\n Filter: ((a IS NULL) AND (a < 100))\n Rows Removed by Filter: 333333\n -> Seq Scan on large l2 (cost=0.00..14425.00 rows=1000000 width=8) \n(never executed)\n Planning Time: 0.299 ms\n Execution Time: 93.689 ms\n\n(10 rows)\n\n\nWith patch:\n\nQUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=1000.00..20863771.84 rows=1667083350 width=16) \n(actual time=290.255..290.323 rows=0 loops=1)\n -> Seq Scan on large l2 (cost=0.00..14425.50 rows=1000050 width=8) \n(actual time=0.104..94.037 rows=1000000 loops=1)\n -> Materialize (cost=1000.00..10808.63 rows=1667 width=8) (actual \ntime=0.000..0.000 rows=0 loops=1000000)\n -> Gather (cost=1000.00..10800.29 rows=1667 width=8) (actual \ntime=79.472..79.539 rows=0 loops=1)\n Workers Planned: 2\n Workers Launched: 2\n -> Parallel Seq Scan on large l1 (cost=0.00..9633.59 \nrows=695 width=8) (actual time=75.337..75.338 rows=0 loops=3)\n Filter: ((a IS NULL) AND (a < 100))\n Rows Removed by Filter: 333333\n Planning Time: 0.721 ms\n Execution Time: 290.425 ms\n\n(11 rows)\n\nI remember, it could fix this one:\n\nSELECT i AS id INTO l FROM generate_series(1,100000) i;\nCREATE TABLE r (id int8, v text);\nINSERT INTO r (id, v) VALUES (1, 't'), (-1, 'f');\nANALYZE l,r;\nEXPLAIN ANALYZE\nSELECT * FROM l LEFT OUTER JOIN r ON (r.id = l.id) WHERE r.v IS NULL;\nERROR: relation \"l\" already exists\nERROR: relation \"r\" already exists\nINSERT 0 2\nANALYZE\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------\n\n Hash Left Join (cost=1.09..1944.13 rows=99998 width=14) \n(actualtime=0.152..84.184 rows=99999 loops=1)\n\n Hash Cond: (l.id = r.id)\n Filter: (r.v IS NULL)\n Rows Removed by Filter: 2\n\n -> Seq Scan on l (cost=0.00..1443.00 rows=100000 width=4) \n(actualtime=0.040..27.635 rows=100000 loops=1) -> Hash \n(cost=1.04..1.04 rows=4 width=10) (actualtime=0.020..0.022 rows=4 loops=1)\n\n Buckets: 1024 Batches: 1 Memory Usage: 9kB\n\n -> Seq Scan on r (cost=0.00..1.04 rows=4 width=10) \n(actualtime=0.009..0.011 rows=4 loops=1)\n\n Planning Time: 0.954 ms\n Execution Time: 92.309 ms\n(10 rows)\n\nDo you think it's worth trying to apply the fourth method now?\nAs far as I remember, here we will face the problem of estimating \nmultiple conditions, in the 
4th approach there is a chance to avoid this.\n\nI couldn't get such case. I found only one that I also found \ninteresting, but so far I haven't been able to figure out well enough \nwhat influenced the prediction of cardinality and, accordingly, the \nchoice of a better plan.\n\nCREATE TABLE large (id INT, a INT);\n INSERT INTO large SELECT i, 1 FROM generate_series(1,50000) s(i);\nCREATE TABLE small (id INT, b INT);\n INSERT INTO small SELECT i, 1 FROM generate_series(1,25000) s(i);\nINSERT INTO large SELECT i+50000, i+2 FROM generate_series(1,50000) s(i);\nINSERT INTO small SELECT i+25000, i+2 FROM generate_series(1,25000) s(i);\n\nexplain analyze SELECT * FROM large LEFT JOIN small ON (large.id = small.id)\n WHERE ((large.a=1) OR (small.b=1));\n\nWith patch:\n\nQUERY PLAN\n----------------------------------------------------------------------------------------------------------------------\n Hash Left Join (cost=1347.00..3911.91 rows=99775 width=16) (actual \ntime=36.864..82.634 rows=50000 loops=1)\n Hash Cond: (large.id = small.id)\n Filter: ((large.a = 1) OR (small.b = 1))\n Rows Removed by Filter: 50000\n -> Seq Scan on large (cost=0.00..1440.75 rows=99775 width=8) \n(actual time=0.034..12.536 rows=100000 loops=1)\n -> Hash (cost=722.00..722.00 rows=50000 width=8) (actual \ntime=36.752..36.754 rows=50000 loops=1)\n Buckets: 65536 Batches: 1 Memory Usage: 2466kB\n -> Seq Scan on small (cost=0.00..722.00 rows=50000 width=8) \n(actual time=0.028..13.337 rows=50000 loops=1)\n Planning Time: 2.363 ms\n Execution Time: 84.790 ms\n(10 rows)\n\noriginal:\n\nQUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------\n Merge Right Join (cost=14400.45..516963.33 rows=250528 width=16) \n(actual time=85.498..126.799 rows=50000 loops=1)\n Merge Cond: (small.id = large.id)\n Filter: ((large.a = 1) OR (small.b = 1))\n Rows Removed by Filter: 50000\n -> Sort (cost=4640.80..4766.23 rows=50172 width=8) (actual \ntime=41.538..44.204 rows=50000 loops=1)\n Sort Key: small.id\n Sort Method: quicksort Memory: 3710kB\n -> Seq Scan on small (cost=0.00..723.72 rows=50172 width=8) \n(actual time=0.068..15.182 rows=50000 loops=1)\n -> Sort (cost=9759.65..10009.95 rows=100118 width=8) (actual \ntime=43.939..53.697 rows=100000 loops=1)\n Sort Key: large.id\n Sort Method: external sort Disk: 2160kB\n -> Seq Scan on large (cost=0.00..1444.18 rows=100118 \nwidth=8) (actual time=0.046..10.378 rows=100000 loops=1)\n Planning Time: 0.406 ms\n Execution Time: 130.109 ms\n(14 rows)\n\nIf you disable merge join with the original code, then the plans \ncoincide, as well as the error value approximately, only in one case \ncardinality\noverestimation occurs in the other vice versa.\n\nQUERY PLAN\n---------------------------------------------------------------------------------------------------------------------\n Hash Left Join (cost=1347.00..3915.00 rows=75082 width=16) (actual \ntime=43.625..68.901 rows=50000 loops=1)\n Hash Cond: (large.id = small.id)\n Filter: ((large.a = 1) OR (small.b = 1))\n Rows Removed by Filter: 50000\n -> Seq Scan on large (cost=0.00..1443.00 rows=100000 width=8) \n(actual time=0.008..9.895 rows=100000 loops=1)\n -> Hash (cost=722.00..722.00 rows=50000 width=8) (actual \ntime=22.546..22.548 rows=50000 loops=1)\n Buckets: 65536 Batches: 1 Memory Usage: 2466kB\n -> Seq Scan on small (cost=0.00..722.00 rows=50000 width=8) \n(actual time=0.006..7.218 rows=50000 loops=1)\n Planning Time: 0.239 ms\n Execution Time: 70.860 
ms\n(10 rows)\n\n-- \nRegards,\nAlena Rybakina\nPostgres Professional",
"msg_date": "Mon, 10 Jul 2023 20:25:36 +0300",
"msg_from": "Alena Rybakina <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Problems with estimating OR conditions, IS NULL on LEFT JOINs"
},
{
"msg_contents": "Hi,\n\nI'm still working on it, but, unfortunately, I didn't have much time to \nwork with it well enough that there would be something that could be shown.\nNow I am trying to sort out the problems that I drew attention to in the \nprevious letter.\n\n-- \nRegards,\nAlena Rybakina\nPostgres Professional\n\n\n\n",
"msg_date": "Sun, 16 Jul 2023 21:21:25 +0300",
"msg_from": "Alena Rybakina <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Problems with estimating OR conditions, IS NULL on LEFT JOINs"
}
] |
[
{
"msg_contents": "Hello hackers,\n\nCurrently domain casts are ignored. Yet this would be very useful for\nrepresenting data in different formats such as json.\n\nLet's take a tsrange as an example. Its json output by default:\n\nselect to_json('(2022-12-31 11:00, 2023-01-01 06:00)'::tsrange);\n to_json\n-----------------------------------------------------\n \"(\\\"2022-12-31 11:00:00\\\",\\\"2023-01-01 06:00:00\\\")\"\n\nWe can refine its representation in a custom way as:\n\n-- using a custom type for this example\ncreate type mytsrange as range (subtype = timestamp, subtype_diff =\ntsrange_subdiff);\n\ncreate or replace function mytsrange_to_json(mytsrange) returns json as $$\n select json_build_object(\n 'lower', lower($1)\n , 'upper', upper($1)\n , 'lower_inc', lower_inc($1)\n , 'upper_inc', upper_inc($1)\n );\n$$ language sql;\n\ncreate cast (mytsrange as json) with function mytsrange_to_json(mytsrange)\nas assignment;\n\n-- now we get the custom representation\nselect to_json('(2022-12-31 11:00, 2023-01-01 06:00)'::mytsrange);\n to_json\n--------------------------------------------------------------------------------------------------------------\n {\"lower\" : \"2022-12-31T11:00:00\", \"upper\" : \"2023-01-01T06:00:00\",\n\"lower_inc\" : false, \"upper_inc\" : false}\n(1 row)\n\nAlthough this works for this example, using a custom type requires\nknowledge of the `tsrange` internals. It would be much simpler to do:\n\ncreate domain mytsrange as range;\n\nBut casts on domains are currently ignored:\n\ncreate cast (mytsrange as json) with function mytsrange_to_json(mytsrange)\nas assignment;\nWARNING: cast will be ignored because the source data type is a domain\nCREATE CAST\n\nChecking the code seems supporting this is a TODO? Or are there any other\nconcerns of why this shouldn't be done?\n\nI would like to work on this if there is an agreement.\n\nBest regards,\nSteve\n\nHello hackers,Currently domain casts are ignored. Yet this would be very useful for representing data in different formats such as json.Let's take a tsrange as an example. Its json output by default:select to_json('(2022-12-31 11:00, 2023-01-01 06:00)'::tsrange); to_json----------------------------------------------------- \"(\\\"2022-12-31 11:00:00\\\",\\\"2023-01-01 06:00:00\\\")\"We can refine its representation in a custom way as:-- using a custom type for this examplecreate type mytsrange as range (subtype = timestamp, subtype_diff = tsrange_subdiff);create or replace function mytsrange_to_json(mytsrange) returns json as $$ select json_build_object( 'lower', lower($1) , 'upper', upper($1) , 'lower_inc', lower_inc($1) , 'upper_inc', upper_inc($1) );$$ language sql;create cast (mytsrange as json) with function mytsrange_to_json(mytsrange) as assignment;-- now we get the custom representationselect to_json('(2022-12-31 11:00, 2023-01-01 06:00)'::mytsrange); to_json-------------------------------------------------------------------------------------------------------------- {\"lower\" : \"2022-12-31T11:00:00\", \"upper\" : \"2023-01-01T06:00:00\", \"lower_inc\" : false, \"upper_inc\" : false}(1 row)Although this works for this example, using a custom type requires knowledge of the `tsrange` internals. 
It would be much simpler to do:create domain mytsrange as range;But casts on domains are currently ignored:create cast (mytsrange as json) with function mytsrange_to_json(mytsrange) as assignment;WARNING: cast will be ignored because the source data type is a domainCREATE CASTChecking the code seems supporting this is a TODO? Or are there any other concerns of why this shouldn't be done?I would like to work on this if there is an agreement.Best regards,Steve",
"msg_date": "Sat, 24 Jun 2023 16:35:38 -0500",
"msg_from": "Steve Chavez <[email protected]>",
"msg_from_op": true,
"msg_subject": "Castable Domains for different JSON representations"
},
{
"msg_contents": "Steve Chavez <[email protected]> writes:\n> Currently domain casts are ignored. Yet this would be very useful for\n> representing data in different formats such as json.\n\nHm. Usually what people ask for in this space is custom casts\n*to* a domain type, which is problematic because it's not clear\nhow that should interact with the default behavior of promotion\nto a domain (namely, applying any relevant domain constraints).\nI'd also be suspicious of allowing custom casts from a domain\nto any of its base types, because the assumption that that\ndirection is a no-op is wired into a lot of places. The\nparticular example you are proposing doesn't fall into either\nof those categories; but I wonder if people would find it weird\nif we allowed only other cases.\n\nThe bigger picture here, though, is what are you really buying\ncompared to just invoking the special conversion function explicitly?\nIf you have to write \"sometsrangecolumn::mytsrange::json\", that's\nnot shorter and certainly not clearer than writing a function call.\nAdmittedly, if the column is declared as mytsrange to begin with,\nyou can save one step --- but we smash domains to their base types\nin enough places that I wonder how often you'd end up needing the\nextra explicit cast anyway. And I don't think you'd want to tone\ndown that behavior, because anytime you use a domain column you\nare going to be relying on it very heavily to avoid writing lots\nof explicit casts to the base type. So I think this might prove\na lot less natural/transparent to use than you're hoping.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 25 Jun 2023 12:24:01 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Castable Domains for different JSON representations"
},
{
"msg_contents": "> The bigger picture here, though, is what are you really buying\ncompared to just invoking the special conversion function explicitly?\n> If you have to write \"sometsrangecolumn::mytsrange::json\", that's\nnot shorter and certainly not clearer than writing a function call.\n\nThe main benefit is to be able to call `json_agg` on tables with these\ncustom json representations. Then the defined json casts work\ntransparently when doing:\n\nselect json_agg(x) from mytbl x;\n json_agg\n-------------------------------------------------------------------------------------------------------------------------------\n [{\"id\":1,\"val\":{\"lower\" : \"2022-12-31T11:00:00\", \"upper\" :\n\"2023-01-01T06:00:00\", \"lower_inc\" : false, \"upper_inc\" : false}}]\n\n-- example table\ncreate table mytbl(id int, val mytsrange);\ninsert into mytbl values (1, '(2022-12-31 11:00, 2023-01-01 06:00)');\n\nThis output is directly consumable on web applications and as\nyou can see the expression is pretty short, with no need to use\nthe explicit casts as `json_agg` already does them internally.\n\nBest regards,\nSteve\n\n> The bigger picture here, though, is what are you really buyingcompared to just invoking the special conversion function explicitly?> If you have to write \"sometsrangecolumn::mytsrange::json\", that'snot shorter and certainly not clearer than writing a function call.The main benefit is to be able to call `json_agg` on tables with these custom json representations. Then the defined json casts work transparently when doing:select json_agg(x) from mytbl x; json_agg------------------------------------------------------------------------------------------------------------------------------- [{\"id\":1,\"val\":{\"lower\" : \"2022-12-31T11:00:00\", \"upper\" : \"2023-01-01T06:00:00\", \"lower_inc\" : false, \"upper_inc\" : false}}]-- example tablecreate table mytbl(id int, val mytsrange);insert into mytbl values (1, '(2022-12-31 11:00, 2023-01-01 06:00)');This output is directly consumable on web applications and as you can see the expression is pretty short, with no need to use the explicit casts as `json_agg` already does them internally.Best regards,Steve",
"msg_date": "Sun, 25 Jun 2023 13:36:15 -0500",
"msg_from": "Steve Chavez <[email protected]>",
"msg_from_op": true,
"msg_subject": "Fwd: Castable Domains for different JSON representations"
}
] |
[
{
"msg_contents": "Hi everyone!\n\nI am new to PostgreSQL community and working currently on project\npg_adviser [https://github.com/DrPostgres/pg_adviser/]\n\nThe extension last worked with version 8.3, and currently I am working to\nmake it support version 16 and then the other active versions.\n\nI will give a brief about the extension:\nIt's used to recommend useful indexes for a set of queries. It does that\nby planning the query initially and seeing the initial cost and then\ncreating *virtual* indexes (based on the query and columns used in it,\n..etc) and planning again to see how those indexes changed the cost.\n\nThe problem I am facing is in creating those indexes in Postgres 16 (while\ncalling *index_create*), and you can find here a detail description about\nthe problem along with the code/PR\nhttps://drive.google.com/file/d/1x2PnDEfEo094vgNiBd1-BfJtB5Fovrih/view\n\nI would appreciate any help. Thanks :)\n\nHi everyone!I am new to PostgreSQL community and working currently on project pg_adviser [https://github.com/DrPostgres/pg_adviser/]The extension last worked with version 8.3, and currently I am working to make it support version 16 and then the other active versions.I will give a brief about the extension:It's used to recommend useful indexes for a set of queries. It does that by planning the query initially and seeing the initial cost and then creating *virtual* indexes (based on the query and columns used in it, ..etc) and planning again to see how those indexes changed the cost.The problem I am facing is in creating those indexes in Postgres 16 (while calling *index_create*), and you can find here a detail description about the problem along with the code/PRhttps://drive.google.com/file/d/1x2PnDEfEo094vgNiBd1-BfJtB5Fovrih/viewI would appreciate any help. Thanks :)",
"msg_date": "Sun, 25 Jun 2023 02:50:51 +0300",
"msg_from": "Ahmed Ibrahim <[email protected]>",
"msg_from_op": true,
"msg_subject": "Inquiry/Help with pg_adviser (problem in index_create function for\n creating indexes)"
},
{
"msg_contents": "Hi,\n\nSince some people prefer plain text over screenshots/pdfs (but I think the\npdf is more readable), I will post the problem here, in case anyone can\nhelp. I will appreciate that :)\n\nThe full current code (PR is still draft) can be found at\nhttps://github.com/DrPostgres/pg_adviser/pull/4\n\nThe idea behind what is being done is creating virtual indexes, and\nmeasuring the query cost after creating those indexes, and see whether we\nwill get a better cost or not, and maximize the benefit from those choices.\nSo far, the project is okay and compiling/working successfully (with\nPostgres 16), but the problem is when creating\nthe virtual indexes (with version 16), I give it flag\n*INDEX_CREATE_SKIP_BUILD* (just like it was with version 8.3 and was\nworking)\n\nAfter that, the index gets created successfully, but when trying to call\n*standard_planner* for the same query with the new index created (to see\nhow the query cost changed), I get the following error\n==================================================\n2023-06-24 19:09:21.843 EEST [45000] ERROR: could not read block 0 in file\n\"base/16384/139323\": read only 0 of 8192 bytes\n2023-06-24 19:09:21.843 EEST [45000] STATEMENT: explain select * from t\nwhere a > 5000;\nERROR: could not read block 0 in file \"base/16384/139323\": read only 0 of\n8192 bytes\n=====================================================\n\nI tried too many things, like letting it build the whole index, or\n*REINDEX *ing it after being created. I also debugged\nPostgreSQL source code to see where it stops, but wasn’t able to solve the\nproblem.\nWhen trying to let it build the Index, the function *index_build* gets\nerrors\n\nOne last thing I tried is giving it flag *INDEX_CREATE_SKIP_BUILD* and\n*INDEX_CREATE_CONCURRENT\n*, the index gets created\nsuccessfully but when doing so, the query cost never changes, and the query\nnever uses the index. When I try to\n*REINDEX* it, I just get that query is aborted.\n\nAlthough I think it might be a trivial thing I might have forgotten :D, I\nwould appreciate any help as I have been\ntrying to fix this for more than 2 days.\n\nSome screenshots can be found in the pdf mentioned in the first mail.\n\nThanks all\n\nOn Sun, Jun 25, 2023 at 2:50 AM Ahmed Ibrahim <[email protected]>\nwrote:\n\n> Hi everyone!\n>\n> I am new to PostgreSQL community and working currently on project\n> pg_adviser [https://github.com/DrPostgres/pg_adviser/]\n>\n> The extension last worked with version 8.3, and currently I am working to\n> make it support version 16 and then the other active versions.\n>\n> I will give a brief about the extension:\n> It's used to recommend useful indexes for a set of queries. It does that\n> by planning the query initially and seeing the initial cost and then\n> creating *virtual* indexes (based on the query and columns used in it,\n> ..etc) and planning again to see how those indexes changed the cost.\n>\n> The problem I am facing is in creating those indexes in Postgres 16 (while\n> calling *index_create*), and you can find here a detail description about\n> the problem along with the code/PR\n> https://drive.google.com/file/d/1x2PnDEfEo094vgNiBd1-BfJtB5Fovrih/view\n>\n> I would appreciate any help. Thanks :)\n>\n>\n\nHi,Since some people prefer plain text over screenshots/pdfs (but I think the pdf is more readable), I will post the problem here, in case anyone can help. 
I will appreciate that :)The full current code (PR is still draft) can be found at https://github.com/DrPostgres/pg_adviser/pull/4The idea behind what is being done is creating virtual indexes, and measuring the query cost after creating those indexes, and see whether we will get a better cost or not, and maximize the benefit from those choices.So far, the project is okay and compiling/working successfully (with Postgres 16), but the problem is when creatingthe virtual indexes (with version 16), I give it flag INDEX_CREATE_SKIP_BUILD (just like it was with version 8.3 and wasworking)After that, the index gets created successfully, but when trying to call standard_planner for the same query with the new index created (to seehow the query cost changed), I get the following error==================================================2023-06-24 19:09:21.843 EEST [45000] ERROR: could not read block 0 in file \"base/16384/139323\": read only 0 of 8192 bytes2023-06-24 19:09:21.843 EEST [45000] STATEMENT: explain select * from t where a > 5000;ERROR: could not read block 0 in file \"base/16384/139323\": read only 0 of 8192 bytes=====================================================I tried too many things, like letting it build the whole index, or REINDEX ing it after being created. I also debuggedPostgreSQL source code to see where it stops, but wasn’t able to solve the problem.When trying to let it build the Index, the function index_build gets errorsOne last thing I tried is giving it flag INDEX_CREATE_SKIP_BUILD and INDEX_CREATE_CONCURRENT , the index gets createdsuccessfully but when doing so, the query cost never changes, and the query never uses the index. When I try toREINDEX it, I just get that query is aborted.Although I think it might be a trivial thing I might have forgotten :D, I would appreciate any help as I have beentrying to fix this for more than 2 days.Some screenshots can be found in the pdf mentioned in the first mail.Thanks allOn Sun, Jun 25, 2023 at 2:50 AM Ahmed Ibrahim <[email protected]> wrote:Hi everyone!I am new to PostgreSQL community and working currently on project pg_adviser [https://github.com/DrPostgres/pg_adviser/]The extension last worked with version 8.3, and currently I am working to make it support version 16 and then the other active versions.I will give a brief about the extension:It's used to recommend useful indexes for a set of queries. It does that by planning the query initially and seeing the initial cost and then creating *virtual* indexes (based on the query and columns used in it, ..etc) and planning again to see how those indexes changed the cost.The problem I am facing is in creating those indexes in Postgres 16 (while calling *index_create*), and you can find here a detail description about the problem along with the code/PRhttps://drive.google.com/file/d/1x2PnDEfEo094vgNiBd1-BfJtB5Fovrih/viewI would appreciate any help. Thanks :)",
"msg_date": "Sun, 25 Jun 2023 17:30:37 +0300",
"msg_from": "Ahmed Ibrahim <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Inquiry/Help with pg_adviser (problem in index_create function\n for creating indexes)"
},
{
"msg_contents": "On 25/06/2023 17:30, Ahmed Ibrahim wrote:\n> The full current code (PR is still draft) can be found at \n> https://github.com/DrPostgres/pg_adviser/pull/4 \n> <https://github.com/DrPostgres/pg_adviser/pull/4>\n> \n> The idea behind what is being done is creating virtual indexes, and \n> measuring the query cost after creating those indexes, and see whether \n> we will get a better cost or not, and maximize the benefit from those \n> choices.\n> So far, the project is okay and compiling/working successfully (with \n> Postgres 16), but the problem is when creating\n> the virtual indexes (with version 16), I give it flag \n> /INDEX_CREATE_SKIP_BUILD/ (just like it was with version 8.3 and was\n> working)\n\nhttps://github.com/HypoPG/hypopg might be of interest to you. It also \ncreates virtual or \"hypothetical\" indexes.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Mon, 26 Jun 2023 08:50:09 +0300",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inquiry/Help with pg_adviser (problem in index_create function\n for creating indexes)"
}
] |
[
{
"msg_contents": "Attached is a PoC patch to implement \"Row pattern recognition\" (RPR)\nin SQL:2016 (I know SQL:2023 is already out, but I don't have access\nto it). Actually SQL:2016 defines RPR in two places[1]:\n\n Feature R010, “Row pattern recognition: FROM clause”\n Feature R020, “Row pattern recognition: WINDOW clause”\n\nThe patch includes partial support for R020 part.\n\n- What is RPR?\n\nRPR provides a way to search series of data using regular expression\npatterns. Suppose you have a stock database.\n\nCREATE TABLE stock (\n company TEXT,\n tdate DATE,\n price BIGINT);\n\nYou want to find a \"V-shaped\" pattern: i.e. price goes down for 1 or\nmore days, then goes up for 1 or more days. If we try to implement the\nquery in PostgreSQL, it could be quite complex and inefficient.\n\nRPR provides convenient way to implement the query.\n\nSELECT company, tdate, price, rpr(price) OVER w FROM stock\n WINDOW w AS (\n PARTITION BY company\n ROWS BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING\n PATTERN (START DOWN+ UP+)\n DEFINE\n START AS TRUE,\n UP AS price > PREV(price),\n DOWN AS price < PREV(price)\n);\n\n\"PATTERN\" and \"DEFINE\" are the key clauses of RPR. DEFINE defines 3\n\"row pattern variables\" namely START, UP and DOWN. They are associated\nwith logical expressions namely \"TRUE\", \"price > PREV(price)\", and\n\"price < PREV(price)\". Note that \"PREV\" function returns price column\nin the previous row. So, UP is true if price is higher than previous\nday. On the other hand, DOWN is true if price is lower than previous\nday. PATTERN uses the row pattern variables to create a necessary\npattern. In this case, the first row is always match because START is\nalways true, and second or more rows match with \"UP\" ('+' is a regular\nexpression representing one or more), and subsequent rows match with\n\"DOWN\".\n\nHere is the sample output.\n\n company | tdate | price | rpr \n----------+------------+-------+------\n company1 | 2023-07-01 | 100 | \n company1 | 2023-07-02 | 200 | 200 -- 200->150->140->150\n company1 | 2023-07-03 | 150 | 150 -- 150->140->150\n company1 | 2023-07-04 | 140 | \n company1 | 2023-07-05 | 150 | 150 -- 150->90->110->130\n company1 | 2023-07-06 | 90 | \n company1 | 2023-07-07 | 110 | \n company1 | 2023-07-08 | 130 | \n company1 | 2023-07-09 | 120 | \n company1 | 2023-07-10 | 130 | \n\nrpr shows the first row if all the patterns are satisfied. In the\nexample above 200, 150, 150 are the cases. Other rows are shown as\nNULL. For example, on 2023-07-02 price starts with 200, then goes down\nto 150 then 140 but goes up 150 next day.\n\nAs far as I know, only Oracle implements RPR (only R010. R020 is not\nimplemented) among OSS/commercial RDBMSs. There are a few DWH software\nhaving RPR. According to [2] they are Snowflake and MS Stream\nAnalytics. It seems Trino is another one[3].\n\n- Note about the patch\n\nThe current implementation is not only a subset of the standard, but\nis different from it in some places. The example above is written as\nfollows according to the standard:\n\nSELECT company, tdate, startprice OVER w FROM stock\n WINDOW w AS (\n PARTITION BY company\n MEASURES\n START.price AS startprice\n ROWS BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING\n PATTERN (START DOWN+ UP+)\n DEFINE\n START AS TRUE,\n UP AS UP.price > PREV(UP.price),\n DOWN AS DOWN.price < PREV(DOWN.price)\n);\n\nNotice that rpr(price) is written as START.price and startprice in the\nstandard. MEASURES defines variable names used in the target list used\nwith \"OVER window\". 
As OVER only allows functions in PostgreSQL, I had\nto make up a window function \"rpr\" which performs the row pattern\nrecognition task. I was not able to find a way to implement\nexpressions like START.price (START is not a table alias). Any\nsuggestion is greatly appreciated.\n\nThe differences from the standard include:\n\no MEASURES is not supported\no SUBSET is not supported\no FIRST, LAST, CLASSIFIER are not supported\no PREV/NEXT in the standard accept more complex arguments\no Regular expressions other than \"+\" are not supported\no Only AFTER MATCH SKIP TO NEXT ROW is supported (if AFTER MATCH is\n not specified, AFTER MATCH SKIP TO NEXT ROW is assumed. In the\n standard AFTER MATCH SKIP PAST LAST ROW is assumed in this case). I\n have a plan to implement AFTER MATCH SKIP PAST LAST ROW though.\no INITIAL or SEEK are not supported ((behaves as if INITIAL is specified)\no Aggregate functions associated with window clause using RPR do not respect RPR\n\nIt seems RPR in the standard is quite complex. I think we can start\nwith a small subset of RPR then we could gradually enhance the\nimplementation.\n\nComments and suggestions are welcome.\n\n[1] https://sqlperformance.com/2019/04/t-sql-queries/row-pattern-recognition-in-sql\n[2] https://link.springer.com/article/10.1007/s13222-022-00404-3\n[3] https://trino.io/docs/current/sql/pattern-recognition-in-window.html\n--\nTatsuo Ishii\nSRA OSS LLC\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp",
"msg_date": "Sun, 25 Jun 2023 21:05:09 +0900 (JST)",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Row pattern recognition"
},
{
"msg_contents": "On 6/25/23 14:05, Tatsuo Ishii wrote:\n> Attached is a PoC patch to implement \"Row pattern recognition\" (RPR)\n> in SQL:2016 (I know SQL:2023 is already out, but I don't have access\n> to it). Actually SQL:2016 defines RPR in two places[1]:\n> \n> Feature R010, “Row pattern recognition: FROM clause”\n> Feature R020, “Row pattern recognition: WINDOW clause”\n> \n> The patch includes partial support for R020 part.\n\nI have been dreaming of and lobbying for someone to pick up this \nfeature. I will be sure to review it from a standards perspective and \nwill try my best to help with the technical aspect, but I am not sure to \nhave the qualifications for that.\n\nTHANK YOU!\n\n > (I know SQL:2023 is already out, but I don't have access to it)\n\nIf you can, try to get ISO/IEC 19075-5 which is a guide to RPR instead \nof just its technical specification.\n\nhttps://www.iso.org/standard/78936.html\n\n> - What is RPR?\n> \n> RPR provides a way to search series of data using regular expression\n> patterns. Suppose you have a stock database.\n> \n> CREATE TABLE stock (\n> company TEXT,\n> tdate DATE,\n> price BIGINT);\n> \n> You want to find a \"V-shaped\" pattern: i.e. price goes down for 1 or\n> more days, then goes up for 1 or more days. If we try to implement the\n> query in PostgreSQL, it could be quite complex and inefficient.\n> \n> RPR provides convenient way to implement the query.\n> \n> SELECT company, tdate, price, rpr(price) OVER w FROM stock\n> WINDOW w AS (\n> PARTITION BY company\n> ROWS BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING\n> PATTERN (START DOWN+ UP+)\n> DEFINE\n> START AS TRUE,\n> UP AS price > PREV(price),\n> DOWN AS price < PREV(price)\n> );\n> \n> \"PATTERN\" and \"DEFINE\" are the key clauses of RPR. DEFINE defines 3\n> \"row pattern variables\" namely START, UP and DOWN. They are associated\n> with logical expressions namely \"TRUE\", \"price > PREV(price)\", and\n> \"price < PREV(price)\". Note that \"PREV\" function returns price column\n> in the previous row. So, UP is true if price is higher than previous\n> day. On the other hand, DOWN is true if price is lower than previous\n> day. PATTERN uses the row pattern variables to create a necessary\n> pattern. In this case, the first row is always match because START is\n> always true, and second or more rows match with \"UP\" ('+' is a regular\n> expression representing one or more), and subsequent rows match with\n> \"DOWN\".\n> \n> Here is the sample output.\n> \n> company | tdate | price | rpr\n> ----------+------------+-------+------\n> company1 | 2023-07-01 | 100 |\n> company1 | 2023-07-02 | 200 | 200 -- 200->150->140->150\n> company1 | 2023-07-03 | 150 | 150 -- 150->140->150\n> company1 | 2023-07-04 | 140 |\n> company1 | 2023-07-05 | 150 | 150 -- 150->90->110->130\n> company1 | 2023-07-06 | 90 |\n> company1 | 2023-07-07 | 110 |\n> company1 | 2023-07-08 | 130 |\n> company1 | 2023-07-09 | 120 |\n> company1 | 2023-07-10 | 130 |\n> \n> rpr shows the first row if all the patterns are satisfied. In the\n> example above 200, 150, 150 are the cases. Other rows are shown as\n> NULL. For example, on 2023-07-02 price starts with 200, then goes down\n> to 150 then 140 but goes up 150 next day.\n\nI don't understand this. RPR in a window specification limits the \nwindow to the matched rows, so this looks like your rpr() function is \njust the regular first_value() window function that we already have?\n\n> As far as I know, only Oracle implements RPR (only R010. 
R020 is not\n> implemented) among OSS/commercial RDBMSs. There are a few DWH software\n> having RPR. According to [2] they are Snowflake and MS Stream\n> Analytics. It seems Trino is another one[3].\n\nI thought DuckDB had it already, but it looks like I was wrong.\n\n> - Note about the patch\n> \n> The current implementation is not only a subset of the standard, but\n> is different from it in some places. The example above is written as\n> follows according to the standard:\n> \n> SELECT company, tdate, startprice OVER w FROM stock\n> WINDOW w AS (\n> PARTITION BY company\n> MEASURES\n> START.price AS startprice\n> ROWS BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING\n> PATTERN (START DOWN+ UP+)\n> DEFINE\n> START AS TRUE,\n> UP AS UP.price > PREV(UP.price),\n> DOWN AS DOWN.price < PREV(DOWN.price)\n> );\n> \n> Notice that rpr(price) is written as START.price and startprice in the\n> standard. MEASURES defines variable names used in the target list used\n> with \"OVER window\". As OVER only allows functions in PostgreSQL, I had\n> to make up a window function \"rpr\" which performs the row pattern\n> recognition task. I was not able to find a way to implement\n> expressions like START.price (START is not a table alias). Any\n> suggestion is greatly appreciated.\n\nAs in your example, you cannot have START.price outside of the window \nspecification; it can only go in the MEASURES clause. Only startprice \nis allowed outside and it gets its qualification from the OVER. Using \nw.startprice might have been better but that would require window names \nto be in the same namespace as range tables.\n\nThis currently works in Postgres:\n\n SELECT RANK() OVER w\n FROM (VALUES (1)) AS w (x)\n WINDOW w AS (ORDER BY w.x);\n\n> The differences from the standard include:\n> \n> o MEASURES is not supported\n > o FIRST, LAST, CLASSIFIER are not supported\n > o PREV/NEXT in the standard accept more complex arguments\n > o INITIAL or SEEK are not supported ((behaves as if INITIAL is specified)\n\nOkay, for now.\n\n> o SUBSET is not supported\n\nIs this because you haven't done it yet, or because you ran into \nproblems trying to do it?\n\n> o Regular expressions other than \"+\" are not supported\n\nThis is what I had a hard time imagining how to do while thinking about \nit. The grammar is so different here and we allow many more operators \n(like \"?\" which is also the standard parameter symbol). People more \nexpert than me will have to help here.\n\n> o Only AFTER MATCH SKIP TO NEXT ROW is supported (if AFTER MATCH is\n> not specified, AFTER MATCH SKIP TO NEXT ROW is assumed. In the\n> standard AFTER MATCH SKIP PAST LAST ROW is assumed in this case). I\n> have a plan to implement AFTER MATCH SKIP PAST LAST ROW though.\n\nIn this case, we should require the user to specify AFTER MATCH SKIP TO \nNEXT ROW so that behavior doesn't change when we implement the standard \ndefault. (Your patch might do this already.)\n\n> o Aggregate functions associated with window clause using RPR do not respect RPR\n\nI do not understand what this means.\n\n> It seems RPR in the standard is quite complex. 
I think we can start\n> with a small subset of RPR then we could gradually enhance the\n> implementation.\n\nI have no problem with that as long as we don't paint ourselves into a \ncorner.\n\n> Comments and suggestions are welcome.\n\nI have not looked at the patch yet, but is the reason for doing R020 \nbefore R010 because you haven't done the MEASURES clause yet?\n\nIn any case, I will be watching this with a close eye, and I am eager to \nhelp in any way I can.\n-- \nVik Fearing\n\n\n\n",
"msg_date": "Sun, 25 Jun 2023 23:08:35 +0200",
"msg_from": "Vik Fearing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Row pattern recognition"
},
{
"msg_contents": "> I have been dreaming of and lobbying for someone to pick up this\n> feature. I will be sure to review it from a standards perspective and\n> will try my best to help with the technical aspect, but I am not sure\n> to have the qualifications for that.\n> \n> THANK YOU!\n\nThank you for looking into my proposal.\n\n>> (I know SQL:2023 is already out, but I don't have access to it)\n> \n> If you can, try to get ISO/IEC 19075-5 which is a guide to RPR instead\n> of just its technical specification.\n> \n> https://www.iso.org/standard/78936.html\n\nThanks for the info.\n\n> I don't understand this. RPR in a window specification limits the\n> window to the matched rows, so this looks like your rpr() function is\n> just the regular first_value() window function that we already have?\n\nNo, rpr() is different from first_value(). rpr() returns the argument\nvalue at the first row in a frame only when matched rows found. On the\nother hand first_value() returns the argument value at the first row\nin a frame unconditionally.\n\ncompany | tdate | price | rpr | first_value \n----------+------------+-------+------+-------------\n company1 | 2023-07-01 | 100 | | 100\n company1 | 2023-07-02 | 200 | 200 | 200\n company1 | 2023-07-03 | 150 | 150 | 150\n company1 | 2023-07-04 | 140 | | 140\n company1 | 2023-07-05 | 150 | 150 | 150\n company1 | 2023-07-06 | 90 | | 90\n company1 | 2023-07-07 | 110 | | 110\n company1 | 2023-07-08 | 130 | | 130\n company1 | 2023-07-09 | 120 | | 120\n company1 | 2023-07-10 | 130 | | 130\n\nFor example, a frame starting with (tdate = 2023-07-02, price = 200)\nconsists of rows (price = 200, 150, 140, 150) satisfying the pattern,\nthus rpr returns 200. Since in this example frame option \"ROWS BETWEEN\nCURRENT ROW AND UNBOUNDED FOLLOWING\" is specified, next frame starts\nwith (tdate = 2023-07-03, price = 150). This frame satisfies the\npattern too (price = 150, 140, 150), and rpr retus 150... and so on.\n\n> As in your example, you cannot have START.price outside of the window\n> specification; it can only go in the MEASURES clause. Only startprice\n> is allowed outside and it gets its qualification from the OVER. Using\n> w.startprice might have been better but that would require window\n> names to be in the same namespace as range tables.\n> \n> This currently works in Postgres:\n> \n> SELECT RANK() OVER w\n> FROM (VALUES (1)) AS w (x)\n> WINDOW w AS (ORDER BY w.x);\n\nInteresting.\n\n>> o SUBSET is not supported\n> \n> Is this because you haven't done it yet, or because you ran into\n> problems trying to do it?\n\nBecause it seems SUBSET is not useful without MEASURES support. Thus\nmy plan is, firstly implement MEASURES, then SUBSET. What do you\nthink?\n\n>> o Regular expressions other than \"+\" are not supported\n> \n> This is what I had a hard time imagining how to do while thinking\n> about it. The grammar is so different here and we allow many more\n> operators (like \"?\" which is also the standard parameter symbol).\n> People more expert than me will have to help here.\n\nYes, that is a problem.\n\n> In this case, we should require the user to specify AFTER MATCH SKIP\n> TO NEXT ROW so that behavior doesn't change when we implement the\n> standard default. (Your patch might do this already.)\n\nAgreed. 
I will implement AFTER MATCH SKIP PAST LAST ROW in the next\npatch and I will change the default to AFTER MATCH SKIP PAST LAST ROW.\n\n>> o Aggregate functions associated with window clause using RPR do not\n>> respect RPR\n> \n> I do not understand what this means.\n\nOk, let me explain. See example below. In my understanding \"count\"\nshould retun the number of rows in a frame restriced by the match\ncondition. For example at the first line (2023-07-01 | 100) count\nreturns 10. I think this should be 0 because the \"restriced\" frame\nstarting at the line contains no matched row. On the other hand the\n(restricted) frame starting at second line (2023-07-02 | 200) contains\n4 rows, thus count should return 4, instead of 9.\n\nSELECT company, tdate, price, rpr(price) OVER w, count(*) OVER w FROM stock\n WINDOW w AS (\n PARTITION BY company\n ROWS BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING\n PATTERN (START DOWN+ UP+)\n DEFINE\n START AS TRUE,\n UP AS price > PREV(price),\n DOWN AS price < PREV(price)\n);\n\ncompany | tdate | price | rpr | count \n----------+------------+-------+------+-------\n company1 | 2023-07-01 | 100 | | 10\n company1 | 2023-07-02 | 200 | 200 | 9\n company1 | 2023-07-03 | 150 | 150 | 8\n company1 | 2023-07-04 | 140 | | 7\n company1 | 2023-07-05 | 150 | 150 | 6\n company1 | 2023-07-06 | 90 | | 5\n company1 | 2023-07-07 | 110 | | 4\n company1 | 2023-07-08 | 130 | | 3\n company1 | 2023-07-09 | 120 | | 2\n company1 | 2023-07-10 | 130 | | 1\n\n>> It seems RPR in the standard is quite complex. I think we can start\n>> with a small subset of RPR then we could gradually enhance the\n>> implementation.\n> \n> I have no problem with that as long as we don't paint ourselves into a\n> corner.\n\nTotally agreed.\n\n>> Comments and suggestions are welcome.\n> \n> I have not looked at the patch yet, but is the reason for doing R020\n> before R010 because you haven't done the MEASURES clause yet?\n\nOne of the reasons is, implementing MATCH_RECOGNIZE (R010) looked\nharder for me because modifying main SELECT clause could be a hard\nwork. Another reason is, I had no idea how to implement PREV/NEXT in\nother than in WINDOW clause. Other people might feel differently\nthough.\n\n> In any case, I will be watching this with a close eye, and I am eager\n> to help in any way I can.\n\nThank you! I am looking forward to comments on my patch. Also any\nidea how to implement MEASURES clause is welcome.\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS LLC\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp\n\n\n",
"msg_date": "Mon, 26 Jun 2023 10:05:20 +0900 (JST)",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Row pattern recognition"
},
{
"msg_contents": ">> In this case, we should require the user to specify AFTER MATCH SKIP\n>> TO NEXT ROW so that behavior doesn't change when we implement the\n>> standard default. (Your patch might do this already.)\n> \n> Agreed. I will implement AFTER MATCH SKIP PAST LAST ROW in the next\n> patch and I will change the default to AFTER MATCH SKIP PAST LAST ROW.\n\nAttached is the v2 patch to add support for AFTER MATCH SKIP PAST LAST\nROW and AFTER MATCH SKIP PAST LAST ROW. The default is AFTER MATCH\nSKIP PAST LAST ROW as the standard default. Here are some examples to\ndemonstrate how those clauses affect the query result.\n\nSELECT i, rpr(i) OVER w\n FROM (VALUES (1), (2), (3), (4)) AS v (i)\n WINDOW w AS (\n ROWS BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING\n AFTER MATCH SKIP PAST LAST ROW\n PATTERN (A B)\n DEFINE\n A AS i <= 2,\n B AS i <= 3\n);\n i | rpr \n---+-----\n 1 | 1\n 2 | \n 3 | \n 4 | \n(4 rows)\n\nIn this example rpr starts from i = 1 and find that row i = 1\nsatisfies A, and row i = 2 satisfies B. Then rpr moves to row i = 3\nand find that it does not satisfy A, thus the result is NULL. Same\nthing can be said to row i = 4.\n\nSELECT i, rpr(i) OVER w\n FROM (VALUES (1), (2), (3), (4)) AS v (i)\n WINDOW w AS (\n ROWS BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING\n AFTER MATCH SKIP TO NEXT ROW\n PATTERN (A B)\n DEFINE\n A AS i <= 2,\n B AS i <= 3\n);\n i | rpr \n---+-----\n 1 | 1\n 2 | 2\n 3 | \n 4 | \n(4 rows)\n\nIn this example rpr starts from i = 1 and find that row i = 1\nsatisfies A, and row i = 2 satisfies B (same as above). Then rpr moves\nto row i = 2, rather than 3 because AFTER MATCH SKIP TO NEXT ROW is\nspecified.\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS LLC\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp",
"msg_date": "Mon, 26 Jun 2023 17:45:07 +0900 (JST)",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Row pattern recognition"
},
{
"msg_contents": "On 6/26/23 03:05, Tatsuo Ishii wrote:\n>> I don't understand this. RPR in a window specification limits the\n>> window to the matched rows, so this looks like your rpr() function is\n>> just the regular first_value() window function that we already have?\n> \n> No, rpr() is different from first_value(). rpr() returns the argument\n> value at the first row in a frame only when matched rows found. On the\n> other hand first_value() returns the argument value at the first row\n> in a frame unconditionally.\n> \n> company | tdate | price | rpr | first_value\n> ----------+------------+-------+------+-------------\n> company1 | 2023-07-01 | 100 | | 100\n> company1 | 2023-07-02 | 200 | 200 | 200\n> company1 | 2023-07-03 | 150 | 150 | 150\n> company1 | 2023-07-04 | 140 | | 140\n> company1 | 2023-07-05 | 150 | 150 | 150\n> company1 | 2023-07-06 | 90 | | 90\n> company1 | 2023-07-07 | 110 | | 110\n> company1 | 2023-07-08 | 130 | | 130\n> company1 | 2023-07-09 | 120 | | 120\n> company1 | 2023-07-10 | 130 | | 130\n> \n> For example, a frame starting with (tdate = 2023-07-02, price = 200)\n> consists of rows (price = 200, 150, 140, 150) satisfying the pattern,\n> thus rpr returns 200. Since in this example frame option \"ROWS BETWEEN\n> CURRENT ROW AND UNBOUNDED FOLLOWING\" is specified, next frame starts\n> with (tdate = 2023-07-03, price = 150). This frame satisfies the\n> pattern too (price = 150, 140, 150), and rpr retus 150... and so on.\n\n\nOkay, I see the problem now, and why you need the rpr() function.\n\nYou are doing this as something that happens over a window frame, but it \nis actually something that *reduces* the window frame. The pattern \nmatching needs to be done when the frame is calculated and not when any \nparticular function is applied over it.\n\nThis query (with all the defaults made explicit):\n\nSELECT s.company, s.tdate, s.price,\n FIRST_VALUE(s.tdate) OVER w,\n LAST_VALUE(s.tdate) OVER w,\n lowest OVER w\nFROM stock AS s\nWINDOW w AS (\n PARTITION BY s.company\n ORDER BY s.tdate\n ROWS BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING\n EXCLUDE NO OTHERS\n MEASURES\n LAST(DOWN) AS lowest\n AFTER MATCH SKIP PAST LAST ROW\n INITIAL PATTERN (START DOWN+ UP+)\n DEFINE\n START AS TRUE,\n UP AS price > PREV(price),\n DOWN AS price < PREV(price)\n);\n\nShould produce this result:\n\n company | tdate | price | first_value | last_value | lowest\n----------+------------+-------+-------------+------------+--------\n company1 | 07-01-2023 | 100 | | |\n company1 | 07-02-2023 | 200 | 07-02-2023 | 07-05-2023 | 140\n company1 | 07-03-2023 | 150 | | |\n company1 | 07-04-2023 | 140 | | |\n company1 | 07-05-2023 | 150 | | |\n company1 | 07-06-2023 | 90 | | |\n company1 | 07-07-2023 | 110 | | |\n company1 | 07-08-2023 | 130 | 07-05-2023 | 07-05-2023 | 120\n company1 | 07-09-2023 | 120 | | |\n company1 | 07-10-2023 | 130 | | |\n(10 rows)\n\nOr if we switch to AFTER MATCH SKIP TO NEXT ROW, then we get:\n\n company | tdate | price | first_value | last_value | lowest\n----------+------------+-------+-------------+------------+--------\n company1 | 07-01-2023 | 100 | | |\n company1 | 07-02-2023 | 200 | 07-02-2023 | 07-05-2023 | 140\n company1 | 07-03-2023 | 150 | 07-03-2023 | 07-05-2023 | 140\n company1 | 07-04-2023 | 140 | | |\n company1 | 07-05-2023 | 150 | 07-05-2023 | 07-08-2023 | 90\n company1 | 07-06-2023 | 90 | | |\n company1 | 07-07-2023 | 110 | | |\n company1 | 07-08-2023 | 130 | 07-08-2023 | 07-10-2023 | 120\n company1 | 07-09-2023 | 120 | | |\n company1 | 07-10-2023 | 130 | | 
|\n(10 rows)\n\nAnd then if we change INITIAL to SEEK:\n\n company | tdate | price | first_value | last_value | lowest\n----------+------------+-------+-------------+------------+--------\n company1 | 07-01-2023 | 100 | 07-02-2023 | 07-05-2023 | 140\n company1 | 07-02-2023 | 200 | 07-02-2023 | 07-05-2023 | 140\n company1 | 07-03-2023 | 150 | 07-03-2023 | 07-05-2023 | 140\n company1 | 07-04-2023 | 140 | 07-05-2023 | 07-08-2023 | 90\n company1 | 07-05-2023 | 150 | 07-05-2023 | 07-08-2023 | 90\n company1 | 07-06-2023 | 90 | 07-08-2023 | 07-10-2023 | 120\n company1 | 07-07-2023 | 110 | 07-08-2023 | 07-10-2023 | 120\n company1 | 07-08-2023 | 130 | 07-08-2023 | 07-10-2023 | 120\n company1 | 07-09-2023 | 120 | | |\n company1 | 07-10-2023 | 130 | | |\n(10 rows)\n\nSince the pattern recognition is part of the frame, the window \naggregates should Just Work.\n\n\n>>> o SUBSET is not supported\n>>\n>> Is this because you haven't done it yet, or because you ran into\n>> problems trying to do it?\n> \n> Because it seems SUBSET is not useful without MEASURES support. Thus\n> my plan is, firstly implement MEASURES, then SUBSET. What do you\n> think?\n\n\nSUBSET elements can be used in DEFINE clauses, but I do not think this \nis important compared to other features.\n\n\n>>> Comments and suggestions are welcome.\n>>\n>> I have not looked at the patch yet, but is the reason for doing R020\n>> before R010 because you haven't done the MEASURES clause yet?\n> \n> One of the reasons is, implementing MATCH_RECOGNIZE (R010) looked\n> harder for me because modifying main SELECT clause could be a hard\n> work. Another reason is, I had no idea how to implement PREV/NEXT in\n> other than in WINDOW clause. Other people might feel differently\n> though.\n\n\nI think we could do this with a single tuplesort if we use backtracking \n(which might be really slow for some patterns). I have not looked into \nit in any detail.\n\nWe would need to be able to remove tuples from the end (even if only \nlogically), and be able to update tuples inside the store. Both of \nthose needs come from backtracking and possibly changing the classifier.\n\nWithout backtracking, I don't see how we could do it without have a \nseparate tuplestore for every current possible match.\n\n\n>> In any case, I will be watching this with a close eye, and I am eager\n>> to help in any way I can.\n> \n> Thank you! I am looking forward to comments on my patch. Also any\n> idea how to implement MEASURES clause is welcome.\n\n\nI looked at your v2 patches a little bit and the only comment that I \ncurrently have on the code is you spelled PERMUTE as PREMUTE. \nEverything else is hopefully explained above.\n-- \nVik Fearing\n\n\n\n",
"msg_date": "Tue, 27 Jun 2023 00:38:20 +0200",
"msg_from": "Vik Fearing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Row pattern recognition"
},
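For reference, the stock table used throughout these examples can be recreated with something like the following. The exact DDL lives in the patch's rpr.sql and is not quoted in this thread, so the column types here are assumptions; the company1 values are taken from the result sets above, and the patch's test data also contains a parallel company2 series.

CREATE TABLE stock (
    company text,
    tdate   date,
    price   integer
);

INSERT INTO stock VALUES
    ('company1', '2023-07-01', 100),
    ('company1', '2023-07-02', 200),
    ('company1', '2023-07-03', 150),
    ('company1', '2023-07-04', 140),
    ('company1', '2023-07-05', 150),
    ('company1', '2023-07-06',  90),
    ('company1', '2023-07-07', 110),
    ('company1', '2023-07-08', 130),
    ('company1', '2023-07-09', 120),
    ('company1', '2023-07-10', 130);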
{
"msg_contents": "> Okay, I see the problem now, and why you need the rpr() function.\n> \n> You are doing this as something that happens over a window frame, but\n> it is actually something that *reduces* the window frame. The pattern\n> matching needs to be done when the frame is calculated and not when\n> any particular function is applied over it.\n\nYes. (I think the standard calls the window frame as \"full window\nframe\" in context of RPR to make a contrast with the subset of the\nframe rows restricted by RPR. The paper I refered to as [2] claims\nthat the latter window frame is called \"reduced window frame\" in the\nstandard but I wasn't able to find the term in the standard.)\n\nI wanted to demonstate that pattern matching logic is basically\ncorrect in the PoC patch. Now what I need to do is, move the row\npattern matching logic to somewhere inside nodeWindowAgg so that\n\"restricted window frame\" can be applied to all window functions and\nwindow aggregates. Currently I am looking into update_frameheadpos()\nand update_frametailpos() which calculate the frame head and tail\nagainst current row. What do you think?\n\n> This query (with all the defaults made explicit):\n> \n> SELECT s.company, s.tdate, s.price,\n> FIRST_VALUE(s.tdate) OVER w,\n> LAST_VALUE(s.tdate) OVER w,\n> lowest OVER w\n> FROM stock AS s\n> WINDOW w AS (\n> PARTITION BY s.company\n> ORDER BY s.tdate\n> ROWS BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING\n> EXCLUDE NO OTHERS\n> MEASURES\n> LAST(DOWN) AS lowest\n> AFTER MATCH SKIP PAST LAST ROW\n> INITIAL PATTERN (START DOWN+ UP+)\n> DEFINE\n> START AS TRUE,\n> UP AS price > PREV(price),\n> DOWN AS price < PREV(price)\n> );\n> \n> Should produce this result:\n\n[snip]\n\nThanks for the examples. I agree with the expected query results.\n\n>>>> o SUBSET is not supported\n>>>\n>>> Is this because you haven't done it yet, or because you ran into\n>>> problems trying to do it?\n>> Because it seems SUBSET is not useful without MEASURES support. Thus\n>> my plan is, firstly implement MEASURES, then SUBSET. What do you\n>> think?\n> \n> \n> SUBSET elements can be used in DEFINE clauses, but I do not think this\n> is important compared to other features.\n\nOk.\n\n>>> I have not looked at the patch yet, but is the reason for doing R020\n>>> before R010 because you haven't done the MEASURES clause yet?\n>> One of the reasons is, implementing MATCH_RECOGNIZE (R010) looked\n>> harder for me because modifying main SELECT clause could be a hard\n>> work. Another reason is, I had no idea how to implement PREV/NEXT in\n>> other than in WINDOW clause. Other people might feel differently\n>> though.\n> \n> \n> I think we could do this with a single tuplesort if we use\n> backtracking (which might be really slow for some patterns). I have\n> not looked into it in any detail.\n> \n> We would need to be able to remove tuples from the end (even if only\n> logically), and be able to update tuples inside the store. Both of\n> those needs come from backtracking and possibly changing the\n> classifier.\n> \n> Without backtracking, I don't see how we could do it without have a\n> separate tuplestore for every current possible match.\n\nMaybe an insane idea but what about rewriting MATCH_RECOGNIZE clause\ninto Window clause with RPR?\n\n> I looked at your v2 patches a little bit and the only comment that I\n> currently have on the code is you spelled PERMUTE as\n> PREMUTE. Everything else is hopefully explained above.\n\nThanks. 
Will fix.\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS LLC\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp\n\n\n",
"msg_date": "Wed, 28 Jun 2023 09:58:19 +0900 (JST)",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Row pattern recognition"
},
{
"msg_contents": "Small question.\n\n> This query (with all the defaults made explicit):\n> \n> SELECT s.company, s.tdate, s.price,\n> FIRST_VALUE(s.tdate) OVER w,\n> LAST_VALUE(s.tdate) OVER w,\n> lowest OVER w\n> FROM stock AS s\n> WINDOW w AS (\n> PARTITION BY s.company\n> ORDER BY s.tdate\n> ROWS BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING\n> EXCLUDE NO OTHERS\n> MEASURES\n> LAST(DOWN) AS lowest\n> AFTER MATCH SKIP PAST LAST ROW\n> INITIAL PATTERN (START DOWN+ UP+)\n> DEFINE\n> START AS TRUE,\n> UP AS price > PREV(price),\n> DOWN AS price < PREV(price)\n> );\n\n> LAST(DOWN) AS lowest\n\nshould be \"LAST(DOWN.price) AS lowest\"?\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS LLC\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp\n\n\n",
"msg_date": "Wed, 28 Jun 2023 21:17:00 +0900 (JST)",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Row pattern recognition"
},
{
"msg_contents": "On 6/28/23 14:17, Tatsuo Ishii wrote:\n> Small question.\n> \n>> This query (with all the defaults made explicit):\n>>\n>> SELECT s.company, s.tdate, s.price,\n>> FIRST_VALUE(s.tdate) OVER w,\n>> LAST_VALUE(s.tdate) OVER w,\n>> lowest OVER w\n>> FROM stock AS s\n>> WINDOW w AS (\n>> PARTITION BY s.company\n>> ORDER BY s.tdate\n>> ROWS BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING\n>> EXCLUDE NO OTHERS\n>> MEASURES\n>> LAST(DOWN) AS lowest\n>> AFTER MATCH SKIP PAST LAST ROW\n>> INITIAL PATTERN (START DOWN+ UP+)\n>> DEFINE\n>> START AS TRUE,\n>> UP AS price > PREV(price),\n>> DOWN AS price < PREV(price)\n>> );\n> \n>> LAST(DOWN) AS lowest\n> \n> should be \"LAST(DOWN.price) AS lowest\"?\n\nYes, it should be. And the tdate='07-08-2023' row in the first \nresultset should have '07-08-2023' and '07-10-2023' as its 4th and 5th \ncolumns.\n\nSince my brain is doing the processing instead of postgres, I made some \nhuman errors. :-)\n-- \nVik Fearing\n\n\n\n",
"msg_date": "Thu, 29 Jun 2023 00:30:43 +0200",
"msg_from": "Vik Fearing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Row pattern recognition"
},
{
"msg_contents": "Hello,\n\nThanks for working on this! We're interested in RPR as well, and I've\nbeen trying to get up to speed with the specs, to maybe make myself\nuseful.\n\nOn 6/27/23 17:58, Tatsuo Ishii wrote:\n> Yes. (I think the standard calls the window frame as \"full window\n> frame\" in context of RPR to make a contrast with the subset of the\n> frame rows restricted by RPR. The paper I refered to as [2] claims\n> that the latter window frame is called \"reduced window frame\" in the\n> standard but I wasn't able to find the term in the standard.)\n\n19075-5 discusses that, at least; not sure about other parts of the spec.\n\n> Maybe an insane idea but what about rewriting MATCH_RECOGNIZE clause\n> into Window clause with RPR?\n\nAre we guaranteed to always have an equivalent window clause? There seem\nto be many differences between the two, especially when it comes to ONE\nROW/ALL ROWS PER MATCH.\n\n--\n\nTo add onto what Vik said above:\n\n>> It seems RPR in the standard is quite complex. I think we can start\n>> with a small subset of RPR then we could gradually enhance the\n>> implementation.\n> \n> I have no problem with that as long as we don't paint ourselves into a \n> corner.\n\nTo me, PATTERN looks like an area where we may want to support a broader\nset of operations in the first version. The spec has a bunch of\ndiscussion around cases like empty matches, match order of alternation\nand permutation, etc., which are not possible to express or test with\nonly the + quantifier. Those might be harder to get right in a v2, if we\ndon't at least keep them in mind for v1?\n\n> +static List *\n> +transformPatternClause(ParseState *pstate, WindowClause *wc, WindowDef *windef)\n> +{\n> + List *patterns;\n\nMy compiler complains about the `patterns` variable here, which is\nreturned without ever being initialized. (The caller doesn't seem to use\nit.)\n\n> +-- basic test using PREV\n> +SELECT company, tdate, price, rpr(price) OVER w FROM stock\n> + WINDOW w AS (\n> + PARTITION BY company\n> + ROWS BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING\n> + INITIAL\n> + PATTERN (START UP+ DOWN+)\n> + DEFINE\n> + START AS TRUE,\n> + UP AS price > PREV(price),\n> + DOWN AS price < PREV(price)\n> +);\n\nnitpick: IMO the tests should be making use of ORDER BY in the window\nclauses.\n\nThis is a very big feature. I agree with you that MEASURES seems like a\nvery important \"next piece\" to add. Are there other areas where you\nwould like reviewers to focus on right now (or avoid)?\n\nThanks!\n--Jacob\n\n\n",
"msg_date": "Wed, 19 Jul 2023 09:30:40 -0700",
"msg_from": "Jacob Champion <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Row pattern recognition"
},
{
"msg_contents": "> Hello,\n> \n> Thanks for working on this! We're interested in RPR as well, and I've\n> been trying to get up to speed with the specs, to maybe make myself\n> useful.\n\nThank you for being interested in this.\n\n> 19075-5 discusses that, at least; not sure about other parts of the spec.\n\nThanks for the info. Unfortunately I don't have 19075-5 though.\n\n>> Maybe an insane idea but what about rewriting MATCH_RECOGNIZE clause\n>> into Window clause with RPR?\n> \n> Are we guaranteed to always have an equivalent window clause? There seem\n> to be many differences between the two, especially when it comes to ONE\n> ROW/ALL ROWS PER MATCH.\n\nYou are right. I am not 100% sure if the rewriting is possible at this\npoint.\n\n> To add onto what Vik said above:\n> \n>>> It seems RPR in the standard is quite complex. I think we can start\n>>> with a small subset of RPR then we could gradually enhance the\n>>> implementation.\n>> \n>> I have no problem with that as long as we don't paint ourselves into a \n>> corner.\n> \n> To me, PATTERN looks like an area where we may want to support a broader\n> set of operations in the first version.\n\nMe too but...\n\n> The spec has a bunch of\n> discussion around cases like empty matches, match order of alternation\n> and permutation, etc., which are not possible to express or test with\n> only the + quantifier. Those might be harder to get right in a v2, if we\n> don't at least keep them in mind for v1?\n\nCurrently my patch has a limitation for the sake of simple\nimplementation: a pattern like \"A+\" is parsed and analyzed in the raw\nparser. This makes subsequent process much easier because the pattern\nelement variable (in this case \"A\") and the quantifier (in this case\n\"+\") is already identified by the raw parser. However there are much\nmore cases are allowed in the standard as you already pointed out. For\nthose cases probably we should give up to parse PATTERN items in the\nraw parser, and instead the raw parser just accepts the elements as\nSconst?\n\n>> +static List *\n>> +transformPatternClause(ParseState *pstate, WindowClause *wc, WindowDef *windef)\n>> +{\n>> + List *patterns;\n> \n> My compiler complains about the `patterns` variable here, which is\n> returned without ever being initialized. (The caller doesn't seem to use\n> it.)\n\nWill fix.\n\n>> +-- basic test using PREV\n>> +SELECT company, tdate, price, rpr(price) OVER w FROM stock\n>> + WINDOW w AS (\n>> + PARTITION BY company\n>> + ROWS BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING\n>> + INITIAL\n>> + PATTERN (START UP+ DOWN+)\n>> + DEFINE\n>> + START AS TRUE,\n>> + UP AS price > PREV(price),\n>> + DOWN AS price < PREV(price)\n>> +);\n> \n> nitpick: IMO the tests should be making use of ORDER BY in the window\n> clauses.\n\nRight. Will fix.\n\n> This is a very big feature. I agree with you that MEASURES seems like a\n> very important \"next piece\" to add. Are there other areas where you\n> would like reviewers to focus on right now (or avoid)?\n\nAny comments, especially on the PREV/NEXT implementation part is\nwelcome. Currently the DEFINE expression like \"price > PREV(price)\" is\nprepared in ExecInitWindowAgg using ExecInitExpr,tweaking var->varno\nin Var node so that PREV uses OUTER_VAR, NEXT uses INNER_VAR. Then\nevaluate the expression in ExecWindowAgg using ExecEvalExpr, setting\nprevious row TupleSlot to ExprContext->ecxt_outertuple, and next row\nTupleSlot to ExprContext->ecxt_innertuple. 
I think this is temporary\nhack and should be gotten rid of before v1 is committed. Better idea?\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS LLC\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp\n\n\n",
"msg_date": "Thu, 20 Jul 2023 14:15:13 +0900 (JST)",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Row pattern recognition"
},
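To illustrate only the row-by-row semantics of PREV(price) and NEXT(price) discussed above (not the OUTER_VAR/INNER_VAR mechanism the patch actually uses), the truth value of each DEFINE condition can be previewed with the existing lag() and lead() window functions:

SELECT company, tdate, price,
       price > lag(price)  OVER w AS is_up,        -- UP   AS price > PREV(price)
       price < lag(price)  OVER w AS is_down,      -- DOWN AS price < PREV(price)
       price > lead(price) OVER w AS is_above_next -- a NEXT(price) comparison
FROM stock
WINDOW w AS (PARTITION BY company ORDER BY tdate);

This only shows which rows would satisfy each condition in isolation; the pattern matching itself still has to walk the frame and decide which variable classifies each row.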
{
"msg_contents": "Hi Ishii-san,\n\nOn 7/19/23 22:15, Tatsuo Ishii wrote:\n> Currently my patch has a limitation for the sake of simple\n> implementation: a pattern like \"A+\" is parsed and analyzed in the raw\n> parser. This makes subsequent process much easier because the pattern\n> element variable (in this case \"A\") and the quantifier (in this case\n> \"+\") is already identified by the raw parser. However there are much\n> more cases are allowed in the standard as you already pointed out. For\n> those cases probably we should give up to parse PATTERN items in the\n> raw parser, and instead the raw parser just accepts the elements as\n> Sconst?\n\nIs there a concern that the PATTERN grammar can't be represented in\nBison? I thought it was all context-free... Or is the concern that the\nparse tree of the pattern is hard to feed into a regex engine?\n\n> Any comments, especially on the PREV/NEXT implementation part is\n> welcome. Currently the DEFINE expression like \"price > PREV(price)\" is\n> prepared in ExecInitWindowAgg using ExecInitExpr,tweaking var->varno\n> in Var node so that PREV uses OUTER_VAR, NEXT uses INNER_VAR. Then\n> evaluate the expression in ExecWindowAgg using ExecEvalExpr, setting\n> previous row TupleSlot to ExprContext->ecxt_outertuple, and next row\n> TupleSlot to ExprContext->ecxt_innertuple. I think this is temporary\n> hack and should be gotten ride of before v1 is committed. Better idea?\n\nI'm not familiar enough with this code yet to offer very concrete\nsuggestions, sorry... But at some point in the future, we need to be\nable to skip forward and backward from arbitrary points, like\n\n DEFINE B AS B.price > PREV(FIRST(A.price), 3)\n\nso there won't be just one pair of \"previous and next\" tuples. Maybe\nthat can help clarify the design? It feels like it'll need to eventually\nbe a \"real\" function that operates on the window state, even if it\ndoesn't support all of the possible complexities in v1.\n\n--\n\nTaking a closer look at the regex engine:\n\nIt looks like the + qualifier has trouble when it meets the end of the\nframe. For instance, try removing the last row of the 'stock' table in\nrpr.sql; some of the final matches will disappear unexpectedly. Or try a\npattern like\n\n PATTERN ( a+ )\n DEFINE a AS TRUE\n\nwhich doesn't seem to match anything in my testing.\n\nThere's also the issue of backtracking in the face of reclassification,\nas I think Vik was alluding to upthread. The pattern\n\n PATTERN ( a+ b+ )\n DEFINE a AS col = 2,\n b AS col = 2\n\ndoesn't match a sequence of values (2 2 ...) with the current\nimplementation, even with a dummy row at the end to avoid the\nend-of-frame bug.\n\n(I've attached two failing tests against v2, to hopefully better\nillustrate, along with what I _think_ should be the correct results.)\n\nI'm not quite understanding the match loop in evaluate_pattern(). It\nlooks like we're building up a string to pass to the regex engine, but\nby the we call regexp_instr, don't we already know whether or not the\npattern will match based on the expression evaluation we've done?\n\nThanks,\n--Jacob",
"msg_date": "Thu, 20 Jul 2023 16:36:37 -0700",
"msg_from": "Jacob Champion <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Row pattern recognition"
},
{
"msg_contents": "On 7/21/23 01:36, Jacob Champion wrote:\n> There's also the issue of backtracking in the face of reclassification,\n> as I think Vik was alluding to upthread.\n\nWe definitely need some kind of backtracking or other reclassification \nmethod.\n\n> (I've attached two failing tests against v2, to hopefully better\n> illustrate, along with what I_think_ should be the correct results.)\n\nAlmost. You are matching 07-01-2023 but the condition is \"price > 100\".\n-- \nVik Fearing\n\n\n\n",
"msg_date": "Fri, 21 Jul 2023 02:07:44 +0200",
"msg_from": "Vik Fearing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Row pattern recognition"
},
{
"msg_contents": "Hi,\n\n> Hi Ishii-san,\n> \n> On 7/19/23 22:15, Tatsuo Ishii wrote:\n>> Currently my patch has a limitation for the sake of simple\n>> implementation: a pattern like \"A+\" is parsed and analyzed in the raw\n>> parser. This makes subsequent process much easier because the pattern\n>> element variable (in this case \"A\") and the quantifier (in this case\n>> \"+\") is already identified by the raw parser. However there are much\n>> more cases are allowed in the standard as you already pointed out. For\n>> those cases probably we should give up to parse PATTERN items in the\n>> raw parser, and instead the raw parser just accepts the elements as\n>> Sconst?\n> \n> Is there a concern that the PATTERN grammar can't be represented in\n> Bison? I thought it was all context-free...\n\nI don't know at this point. I think context-free is not enough to be\nrepsented in Bison. The grammer also needs to be LALR(1). Moreover,\nadding the grammer to existing parser may generate shift/reduce\nerrors.\n\n> Or is the concern that the\n> parse tree of the pattern is hard to feed into a regex engine?\n\nOne small concern is how to convert pattern variables to regex\nexpression which our regex enginer understands. Suppose,\n\nPATTERN UP+\n\nCurrently I convert \"UP+\" to \"U+\" so that it can be fed to the regexp\nengine. In order to do that, we need to know which part of the pattern\n(UP+) is the pattern variable (\"UP\"). For \"UP+\" it's quite easy. But\nfor more complex regular expressions it would be not, unless PATTERN\ngrammer can be analyzed by our parser to know which part is the\npattern variable.\n\n>> Any comments, especially on the PREV/NEXT implementation part is\n>> welcome. Currently the DEFINE expression like \"price > PREV(price)\" is\n>> prepared in ExecInitWindowAgg using ExecInitExpr,tweaking var->varno\n>> in Var node so that PREV uses OUTER_VAR, NEXT uses INNER_VAR. Then\n>> evaluate the expression in ExecWindowAgg using ExecEvalExpr, setting\n>> previous row TupleSlot to ExprContext->ecxt_outertuple, and next row\n>> TupleSlot to ExprContext->ecxt_innertuple. I think this is temporary\n>> hack and should be gotten ride of before v1 is committed. Better idea?\n> \n> I'm not familiar enough with this code yet to offer very concrete\n> suggestions, sorry... But at some point in the future, we need to be\n> able to skip forward and backward from arbitrary points, like\n> \n> DEFINE B AS B.price > PREV(FIRST(A.price), 3)\n> \n> so there won't be just one pair of \"previous and next\" tuples.\n\nYes, I know.\n\n> Maybe\n> that can help clarify the design? It feels like it'll need to eventually\n> be a \"real\" function that operates on the window state, even if it\n> doesn't support all of the possible complexities in v1.\n\nUnfortunately an window function can not call other window functions.\n\n> Taking a closer look at the regex engine:\n> \n> It looks like the + qualifier has trouble when it meets the end of the\n> frame. For instance, try removing the last row of the 'stock' table in\n> rpr.sql; some of the final matches will disappear unexpectedly. Or try a\n> pattern like\n> \n> PATTERN ( a+ )\n> DEFINE a AS TRUE\n> \n> which doesn't seem to match anything in my testing.\n> \n> There's also the issue of backtracking in the face of reclassification,\n> as I think Vik was alluding to upthread. The pattern\n> \n> PATTERN ( a+ b+ )\n> DEFINE a AS col = 2,\n> b AS col = 2\n> \n> doesn't match a sequence of values (2 2 ...) 
with the current\n> implementation, even with a dummy row at the end to avoid the\n> end-of-frame bug.\n> \n> (I've attached two failing tests against v2, to hopefully better\n> illustrate, along with what I _think_ should be the correct results.)\n\nThanks. I will look into this.\n\n> I'm not quite understanding the match loop in evaluate_pattern(). It\n> looks like we're building up a string to pass to the regex engine, but\n> by the we call regexp_instr, don't we already know whether or not the\n> pattern will match based on the expression evaluation we've done?\n\nFor \"+\" yes. But for more complex regular expression like '{n}', we\nneed to call our regexp engine to check if the pattern matches.\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS LLC\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp\n\n\n",
"msg_date": "Fri, 21 Jul 2023 15:16:48 +0900 (JST)",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Row pattern recognition"
},
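A minimal sketch of the conversion described above, assuming each pattern variable is reduced to its initial letter (START -> S, UP -> U, DOWN -> D) and the per-row classification string is tested with an anchored POSIX regular expression; the patch's internal representation may differ:

-- PATTERN (START UP+ DOWN+) becomes the regular expression '^SU+D+'
SELECT substring('SUUDD' from '^SU+D+');  -- returns 'SUUDD': all five rows match
SELECT substring('SU'    from '^SU+D+');  -- returns NULL: DOWN+ needs at least one row

This also shows why some analysis step has to know which identifiers are pattern variables: without that knowledge, "UP+" cannot be reliably rewritten as "U+".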
{
"msg_contents": "On 7/20/23 17:07, Vik Fearing wrote:\n> On 7/21/23 01:36, Jacob Champion wrote:\n>> (I've attached two failing tests against v2, to hopefully better\n>> illustrate, along with what I_think_ should be the correct results.)\n> \n> Almost. You are matching 07-01-2023 but the condition is \"price > 100\".\n\nD'oh. Correction attached. I think :)\n\nThanks,\n--Jacob",
"msg_date": "Fri, 21 Jul 2023 16:14:12 -0700",
"msg_from": "Jacob Champion <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Row pattern recognition"
},
{
"msg_contents": "On 7/20/23 23:16, Tatsuo Ishii wrote:\n> I don't know at this point. I think context-free is not enough to be\n> repsented in Bison. The grammer also needs to be LALR(1). Moreover,\n> adding the grammer to existing parser may generate shift/reduce\n> errors.\n\nAh. It's been too long since my compilers classes; I will pipe down.\n\n> One small concern is how to convert pattern variables to regex\n> expression which our regex enginer understands. Suppose,\n> \n> PATTERN UP+\n> \n> Currently I convert \"UP+\" to \"U+\" so that it can be fed to the regexp\n> engine. In order to do that, we need to know which part of the pattern\n> (UP+) is the pattern variable (\"UP\"). For \"UP+\" it's quite easy. But\n> for more complex regular expressions it would be not, unless PATTERN\n> grammer can be analyzed by our parser to know which part is the\n> pattern variable.\n\nIs the eventual plan to generate multiple alternatives, and run the\nregex against them one at a time?\n\n>> I'm not familiar enough with this code yet to offer very concrete\n>> suggestions, sorry... But at some point in the future, we need to be\n>> able to skip forward and backward from arbitrary points, like\n>>\n>> DEFINE B AS B.price > PREV(FIRST(A.price), 3)\n>>\n>> so there won't be just one pair of \"previous and next\" tuples.\n> \n> Yes, I know.\n\nI apologize. I got overexplain-y.\n\n>> Maybe\n>> that can help clarify the design? It feels like it'll need to eventually\n>> be a \"real\" function that operates on the window state, even if it\n>> doesn't support all of the possible complexities in v1.\n> \n> Unfortunately an window function can not call other window functions.\n\nCan that restriction be lifted for the EXPR_KIND_RPR_DEFINE case? Or\ndoes it make sense to split the pattern navigation \"functions\" into\ntheir own new concept, and maybe borrow some of the window function\ninfrastructure for it?\n\nThanks!\n--Jacob\n\n\n",
"msg_date": "Fri, 21 Jul 2023 16:16:18 -0700",
"msg_from": "Jacob Champion <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Row pattern recognition"
},
{
"msg_contents": "On 7/22/23 01:14, Jacob Champion wrote:\n> On 7/20/23 17:07, Vik Fearing wrote:\n>> On 7/21/23 01:36, Jacob Champion wrote:\n>>> (I've attached two failing tests against v2, to hopefully better\n>>> illustrate, along with what I_think_ should be the correct results.)\n>>\n>> Almost. You are matching 07-01-2023 but the condition is \"price > 100\".\n> \n> D'oh. Correction attached. I think :)\n\nThis looks correct to my human brain. Thanks!\n-- \nVik Fearing\n\n\n\n",
"msg_date": "Sat, 22 Jul 2023 01:38:01 +0200",
"msg_from": "Vik Fearing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Row pattern recognition"
},
{
"msg_contents": ">> One small concern is how to convert pattern variables to regex\n>> expression which our regex enginer understands. Suppose,\n>> \n>> PATTERN UP+\n>> \n>> Currently I convert \"UP+\" to \"U+\" so that it can be fed to the regexp\n>> engine. In order to do that, we need to know which part of the pattern\n>> (UP+) is the pattern variable (\"UP\"). For \"UP+\" it's quite easy. But\n>> for more complex regular expressions it would be not, unless PATTERN\n>> grammer can be analyzed by our parser to know which part is the\n>> pattern variable.\n> \n> Is the eventual plan to generate multiple alternatives, and run the\n> regex against them one at a time?\n\nYes, that's my plan.\n\n>>> I'm not familiar enough with this code yet to offer very concrete\n>>> suggestions, sorry... But at some point in the future, we need to be\n>>> able to skip forward and backward from arbitrary points, like\n>>>\n>>> DEFINE B AS B.price > PREV(FIRST(A.price), 3)\n>>>\n>>> so there won't be just one pair of \"previous and next\" tuples.\n>> \n>> Yes, I know.\n> \n> I apologize. I got overexplain-y.\n\nNo problem. Thank you for reminding me it.\n\n>>> Maybe\n>>> that can help clarify the design? It feels like it'll need to eventually\n>>> be a \"real\" function that operates on the window state, even if it\n>>> doesn't support all of the possible complexities in v1.\n>> \n>> Unfortunately an window function can not call other window functions.\n> \n> Can that restriction be lifted for the EXPR_KIND_RPR_DEFINE case?\n\nI am not sure at this point. Current PostgreSQL executor creates\nWindowStatePerFuncData for each window function and aggregate\nappearing in OVER clause. This means PREV/NEXT and other row pattern\nnavigation operators cannot have their own WindowStatePerFuncData if\nthey do not appear in OVER clauses in a query even if PREV/NEXT\netc. are defined as window function.\n\n> Or\n> does it make sense to split the pattern navigation \"functions\" into\n> their own new concept, and maybe borrow some of the window function\n> infrastructure for it?\n\nMaybe. Suppose a window function executes row pattern matching using\nprice > PREV(price). The window function already receives\nWindowStatePerFuncData. If we can pass the WindowStatePerFuncData to\nPREV, we could let PREV do the real work (getting previous tuple).\nI have not tried this yet, though.\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS LLC\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp\n\n\n",
"msg_date": "Sat, 22 Jul 2023 10:11:49 +0900 (JST)",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Row pattern recognition"
},
{
"msg_contents": "On 7/22/23 03:11, Tatsuo Ishii wrote:\n>>>> Maybe\n>>>> that can help clarify the design? It feels like it'll need to eventually\n>>>> be a \"real\" function that operates on the window state, even if it\n>>>> doesn't support all of the possible complexities in v1.\n>>> Unfortunately an window function can not call other window functions.\n>> Can that restriction be lifted for the EXPR_KIND_RPR_DEFINE case?\n\n> I am not sure at this point. Current PostgreSQL executor creates\n> WindowStatePerFuncData for each window function and aggregate\n> appearing in OVER clause. This means PREV/NEXT and other row pattern\n> navigation operators cannot have their own WindowStatePerFuncData if\n> they do not appear in OVER clauses in a query even if PREV/NEXT\n> etc. are defined as window function.\n> \n>> Or\n>> does it make sense to split the pattern navigation \"functions\" into\n>> their own new concept, and maybe borrow some of the window function\n>> infrastructure for it?\n\n> Maybe. Suppose a window function executes row pattern matching using\n> price > PREV(price). The window function already receives\n> WindowStatePerFuncData. If we can pass the WindowStatePerFuncData to\n> PREV, we could let PREV do the real work (getting previous tuple).\n> I have not tried this yet, though.\n\n\nI don't understand this logic. Window functions work over a window \nframe. What we are talking about here is *defining* a window frame. \nHow can a window function execute row pattern matching?\n-- \nVik Fearing\n\n\n\n",
"msg_date": "Sat, 22 Jul 2023 04:54:43 +0200",
"msg_from": "Vik Fearing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Row pattern recognition"
},
{
"msg_contents": "> On 7/22/23 03:11, Tatsuo Ishii wrote:\n>>>>> Maybe\n>>>>> that can help clarify the design? It feels like it'll need to\n>>>>> eventually\n>>>>> be a \"real\" function that operates on the window state, even if it\n>>>>> doesn't support all of the possible complexities in v1.\n>>>> Unfortunately an window function can not call other window functions.\n>>> Can that restriction be lifted for the EXPR_KIND_RPR_DEFINE case?\n> \n>> I am not sure at this point. Current PostgreSQL executor creates\n>> WindowStatePerFuncData for each window function and aggregate\n>> appearing in OVER clause. This means PREV/NEXT and other row pattern\n>> navigation operators cannot have their own WindowStatePerFuncData if\n>> they do not appear in OVER clauses in a query even if PREV/NEXT\n>> etc. are defined as window function.\n>> \n>>> Or\n>>> does it make sense to split the pattern navigation \"functions\" into\n>>> their own new concept, and maybe borrow some of the window function\n>>> infrastructure for it?\n> \n>> Maybe. Suppose a window function executes row pattern matching using\n>> price > PREV(price). The window function already receives\n>> WindowStatePerFuncData. If we can pass the WindowStatePerFuncData to\n>> PREV, we could let PREV do the real work (getting previous tuple).\n>> I have not tried this yet, though.\n> \n> \n> I don't understand this logic. Window functions work over a window\n> frame.\n\nYes.\n\n> What we are talking about here is *defining* a window\n> frame.\n\nWell, we are defining a \"reduced\" window frame within a (full) window\nframe. A \"reduced\" window frame is calculated each time when a window\nfunction is called.\n\n> How can a window function execute row pattern matching?\n\nA window function is called for each row fed by an outer plan. It\nfetches current, previous and next row to execute pattern matching. If\nit matches, the window function moves to next row and repeat the\nprocess, until pattern match fails.\n\nBelow is an example window function to execute pattern matching (I\nwill include this in the v3 patch). row_is_in_reduced_frame() is a\nfunction to execute pattern matching. It returns the number of rows in\nthe reduced frame if pattern match succeeds. If succeeds, the function\nreturns the last row in the reduced frame instead of the last row in\nthe full window frame.\n\n/*\n * last_value\n * return the value of VE evaluated on the last row of the\n * window frame, per spec.\n */\nDatum\nwindow_last_value(PG_FUNCTION_ARGS)\n{\n\tWindowObject winobj = PG_WINDOW_OBJECT();\n\tDatum\t\tresult;\n\tbool\t\tisnull;\n\tint64\t\tabspos;\n\tint\t\t\tnum_reduced_frame;\n\n\tabspos = WinGetCurrentPosition(winobj);\n\tnum_reduced_frame = row_is_in_reduced_frame(winobj, abspos);\n\n\tif (num_reduced_frame == 0)\n\t\t/* no RPR is involved */\n\t\tresult = WinGetFuncArgInFrame(winobj, 0,\n\t\t\t\t\t\t\t\t\t 0, WINDOW_SEEK_TAIL, true,\n\t\t\t\t\t\t\t\t\t &isnull, NULL);\n\telse if (num_reduced_frame > 0)\n\t\t/* get last row value in the reduced frame */\n\t\tresult = WinGetFuncArgInFrame(winobj, 0,\n\t\t\t\t\t\t\t\t\t num_reduced_frame - 1, WINDOW_SEEK_HEAD, true,\n\t\t\t\t\t\t\t\t\t &isnull, NULL);\n\telse\n\t\t/* RPR is involved and current row is unmatched or skipped */\n\t\tisnull = true;\n\n\tif (isnull)\n\t\tPG_RETURN_NULL();\n\n\tPG_RETURN_DATUM(result);\n}\n\n\n",
"msg_date": "Sat, 22 Jul 2023 15:14:46 +0900 (JST)",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Row pattern recognition"
},
{
"msg_contents": "On 7/22/23 08:14, Tatsuo Ishii wrote:\n>> On 7/22/23 03:11, Tatsuo Ishii wrote:\n>>> Maybe. Suppose a window function executes row pattern matching using\n>>> price > PREV(price). The window function already receives\n>>> WindowStatePerFuncData. If we can pass the WindowStatePerFuncData to\n>>> PREV, we could let PREV do the real work (getting previous tuple).\n>>> I have not tried this yet, though.\n>>\n>> I don't understand this logic. Window functions work over a window\n>> frame.\n> \n> Yes.\n> \n>> What we are talking about here is *defining* a window\n>> frame.\n> \n> Well, we are defining a \"reduced\" window frame within a (full) window\n> frame. A \"reduced\" window frame is calculated each time when a window\n> function is called.\n\n\nWhy? It should only be recalculated when the current row changes and we \nneed a new frame. The reduced window frame *is* the window frame for \nall functions over that window.\n\n\n>> How can a window function execute row pattern matching?\n> \n> A window function is called for each row fed by an outer plan. It\n> fetches current, previous and next row to execute pattern matching. If\n> it matches, the window function moves to next row and repeat the\n> process, until pattern match fails.\n> \n> Below is an example window function to execute pattern matching (I\n> will include this in the v3 patch). row_is_in_reduced_frame() is a\n> function to execute pattern matching. It returns the number of rows in\n> the reduced frame if pattern match succeeds. If succeeds, the function\n> returns the last row in the reduced frame instead of the last row in\n> the full window frame.\n\n\nI strongly disagree with this. Window function do not need to know how \nthe frame is defined, and indeed they should not. WinGetFuncArgInFrame \nshould answer yes or no and the window function just works on that. \nOtherwise we will get extension (and possibly even core) functions that \ndon't handle the frame properly.\n-- \nVik Fearing\n\n\n\n",
"msg_date": "Sun, 23 Jul 2023 23:29:46 +0200",
"msg_from": "Vik Fearing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Row pattern recognition"
},
{
"msg_contents": ">>> What we are talking about here is *defining* a window\n>>> frame.\n>> Well, we are defining a \"reduced\" window frame within a (full) window\n>> frame. A \"reduced\" window frame is calculated each time when a window\n>> function is called.\n> \n> \n> Why? It should only be recalculated when the current row changes and\n> we need a new frame. The reduced window frame *is* the window frame\n> for all functions over that window.\n\nWe already recalculate a frame each time a row is processed even\nwithout RPR. See ExecWindowAgg.\n\nAlso RPR always requires a frame option ROWS BETWEEN CURRENT ROW,\nwhich means the frame head is changed each time current row position\nchanges.\n\n> I strongly disagree with this. Window function do not need to know\n> how the frame is defined, and indeed they should not.\n\nWe already break the rule by defining *support functions. See\nwindowfuncs.c.\n\n> WinGetFuncArgInFrame should answer yes or no and the window function\n> just works on that. Otherwise we will get extension (and possibly even\n> core) functions that don't handle the frame properly.\n\nMaybe I can move row_is_in_reduced_frame into WinGetFuncArgInFrame\njust for convenience.\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS LLC\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp\n\n\n",
"msg_date": "Mon, 24 Jul 2023 09:22:40 +0900 (JST)",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Row pattern recognition"
},
{
"msg_contents": "On 7/24/23 02:22, Tatsuo Ishii wrote:\n>>>> What we are talking about here is *defining* a window\n>>>> frame.\n>>> Well, we are defining a \"reduced\" window frame within a (full) window\n>>> frame. A \"reduced\" window frame is calculated each time when a window\n>>> function is called.\n>>\n>>\n>> Why? It should only be recalculated when the current row changes and\n>> we need a new frame. The reduced window frame *is* the window frame\n>> for all functions over that window.\n> \n> We already recalculate a frame each time a row is processed even\n> without RPR. See ExecWindowAgg.\n\nYes, after each row. Not for each function.\n\n> Also RPR always requires a frame option ROWS BETWEEN CURRENT ROW,\n> which means the frame head is changed each time current row position\n> changes.\n\nOff topic for now: I wonder why this restriction is in place and whether \nwe should respect or ignore it. That is a discussion for another time, \nthough.\n\n>> I strongly disagree with this. Window function do not need to know\n>> how the frame is defined, and indeed they should not.\n> \n> We already break the rule by defining *support functions. See\n> windowfuncs.c.\n\nThe support functions don't know anything about the frame, they just \nknow when a window function is monotonically increasing and execution \ncan either stop or be \"passed through\".\n\n>> WinGetFuncArgInFrame should answer yes or no and the window function\n>> just works on that. Otherwise we will get extension (and possibly even\n>> core) functions that don't handle the frame properly.\n> \n> Maybe I can move row_is_in_reduced_frame into WinGetFuncArgInFrame\n> just for convenience.\n\nI have two comments about this:\n\nIt isn't just for convenience, it is for correctness. The window \nfunctions do not need to know which rows they are *not* operating on.\n\nThere is no such thing as a \"full\" or \"reduced\" frame. The standard \nuses those terms to explain the difference between before and after RPR \nis applied, but window functions do not get to choose which frame they \napply over. They only ever apply over the reduced window frame.\n-- \nVik Fearing\n\n\n\n",
"msg_date": "Tue, 25 Jul 2023 01:14:37 +0200",
"msg_from": "Vik Fearing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Row pattern recognition"
},
{
"msg_contents": "Hi,\n\n> diff --git a/src/test/regress/expected/rpr.out b/src/test/regress/expected/rpr.out\n> index 6bf8818911..f3fd22de2a 100644\n> --- a/src/test/regress/expected/rpr.out\n> +++ b/src/test/regress/expected/rpr.out\n> @@ -230,6 +230,79 @@ SELECT company, tdate, price, rpr(price) OVER w FROM stock\n> company2 | 07-10-2023 | 1300 | \n> (20 rows)\n> \n> +-- match everything\n> +SELECT company, tdate, price, rpr(price) OVER w FROM stock\n> + WINDOW w AS (\n> + PARTITION BY company\n> + ORDER BY tdate\n> + ROWS BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING\n> + AFTER MATCH SKIP TO NEXT ROW\n\nIt seems it's a result with AFTER MATCH SKIP PAST LAST ROW.\n\n> + INITIAL\n> + PATTERN (A+)\n> + DEFINE\n> + A AS TRUE\n> +);\n> + company | tdate | price | rpr \n> +----------+------------+-------+-----\n> + company1 | 07-01-2023 | 100 | 100\n> + company1 | 07-02-2023 | 200 | \n> + company1 | 07-03-2023 | 150 | \n> + company1 | 07-04-2023 | 140 | \n> + company1 | 07-05-2023 | 150 | \n> + company1 | 07-06-2023 | 90 | \n> + company1 | 07-07-2023 | 110 | \n> + company1 | 07-08-2023 | 130 | \n> + company1 | 07-09-2023 | 120 | \n> + company1 | 07-10-2023 | 130 | \n> + company2 | 07-01-2023 | 50 | 50\n> + company2 | 07-02-2023 | 2000 | \n> + company2 | 07-03-2023 | 1500 | \n> + company2 | 07-04-2023 | 1400 | \n> + company2 | 07-05-2023 | 1500 | \n> + company2 | 07-06-2023 | 60 | \n> + company2 | 07-07-2023 | 1100 | \n> + company2 | 07-08-2023 | 1300 | \n> + company2 | 07-09-2023 | 1200 | \n> + company2 | 07-10-2023 | 1300 | \n> +(20 rows)\n> +\n> +-- backtracking with reclassification of rows\n> +SELECT company, tdate, price, rpr(price) OVER w FROM stock\n> + WINDOW w AS (\n> + PARTITION BY company\n> + ORDER BY tdate\n> + ROWS BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING\n> + AFTER MATCH SKIP TO NEXT ROW\n> + INITIAL\n> + PATTERN (A+ B+)\n> + DEFINE\n> + A AS price > 100,\n> + B AS price > 100\n> +);\n> + company | tdate | price | rpr \n> +----------+------------+-------+------\n> + company1 | 07-01-2023 | 100 | \n> + company1 | 07-02-2023 | 200 | 200\n> + company1 | 07-03-2023 | 150 | \n> + company1 | 07-04-2023 | 140 | \n> + company1 | 07-05-2023 | 150 | \n> + company1 | 07-06-2023 | 90 | \n> + company1 | 07-07-2023 | 110 | 110\n> + company1 | 07-08-2023 | 130 | \n> + company1 | 07-09-2023 | 120 | \n> + company1 | 07-10-2023 | 130 | \n> + company2 | 07-01-2023 | 50 | \n> + company2 | 07-02-2023 | 2000 | 2000\n> + company2 | 07-03-2023 | 1500 | \n> + company2 | 07-04-2023 | 1400 | \n> + company2 | 07-05-2023 | 1500 | \n> + company2 | 07-06-2023 | 60 | \n> + company2 | 07-07-2023 | 1100 | 1100\n> + company2 | 07-08-2023 | 1300 | \n> + company2 | 07-09-2023 | 1200 | \n> + company2 | 07-10-2023 | 1300 | \n> +(20 rows)\n> +\n> --\n> -- Error cases\n> --\n> diff --git a/src/test/regress/sql/rpr.sql b/src/test/regress/sql/rpr.sql\n> index 951c9abfe9..f1cd0369f4 100644\n> --- a/src/test/regress/sql/rpr.sql\n> +++ b/src/test/regress/sql/rpr.sql\n> @@ -94,6 +94,33 @@ SELECT company, tdate, price, rpr(price) OVER w FROM stock\n> UPDOWN AS price > PREV(price) AND price > NEXT(price)\n> );\n> \n> +-- match everything\n> +SELECT company, tdate, price, rpr(price) OVER w FROM stock\n> + WINDOW w AS (\n> + PARTITION BY company\n> + ORDER BY tdate\n> + ROWS BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING\n> + AFTER MATCH SKIP TO NEXT ROW\n> + INITIAL\n> + PATTERN (A+)\n> + DEFINE\n> + A AS TRUE\n> +);\n> +\n> +-- backtracking with reclassification of rows\n> +SELECT company, tdate, price, rpr(price) OVER w FROM 
stock\n> + WINDOW w AS (\n> + PARTITION BY company\n> + ORDER BY tdate\n> + ROWS BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING\n> + AFTER MATCH SKIP TO NEXT ROW\n> + INITIAL\n> + PATTERN (A+ B+)\n> + DEFINE\n> + A AS price > 100,\n> + B AS price > 100\n> +);\n> +\n> --\n> -- Error cases\n> --\n\n\n",
"msg_date": "Tue, 25 Jul 2023 21:35:04 +0900 (JST)",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Row pattern recognition"
},
{
"msg_contents": "Attached is the v3 patch. In this patch following changes are made.\n\n(1) I completely changed the pattern matching engine so that it\nperforms backtracking. Now the engine evaluates all pattern elements\ndefined in PATTER against each row, saving matched pattern variables\nin a string per row. For example if the pattern element A and B\nevaluated to true, a string \"AB\" is created for current row.\n\nThis continues until all pattern matching fails or encounters the end\nof full window frame/partition. After that, the pattern matching\nengine creates all possible \"pattern strings\" and apply the regular\nexpression matching to each. For example if we have row 0 = \"AB\" row 1\n= \"C\", possible pattern strings are: \"AC\" and \"BC\".\n\nIf it matches, the length of matching substring is saved. After all\npossible trials are done, the longest matching substring is chosen and\nit becomes the width (number of rows) in the reduced window frame.\n\nSee row_is_in_reduced_frame, search_str_set and search_str_set_recurse\nin nodeWindowAggs.c for more details. For now I use a naive depth\nfirst search and probably there is a lot of rooms for optimization\n(for example rewriting it without using\nrecursion). Suggestions/patches are welcome.\n\nJacob Champion wrote:\n> It looks like the + qualifier has trouble when it meets the end of the\n> frame. For instance, try removing the last row of the 'stock' table in\n> rpr.sql; some of the final matches will disappear unexpectedly. Or try a\n> pattern like\n> \n> PATTERN ( a+ )\n> DEFINE a AS TRUE\n> \n> which doesn't seem to match anything in my testing.\n> \n> There's also the issue of backtracking in the face of reclassification,\n> as I think Vik was alluding to upthread. The pattern\n> \n> PATTERN ( a+ b+ )\n> DEFINE a AS col = 2,\n> b AS col = 2\n\nWith the new engine, cases above do not fail anymore. See new\nregression test cases. Thanks for providing valuable test cases!\n\n(2) Make window functions RPR aware. Now first_value, last_value, and\nnth_value recognize RPR (maybe first_value do not need any change?)\n\nVik Fearing wrote:\n> I strongly disagree with this. Window function do not need to know\n> how the frame is defined, and indeed they should not.\n> WinGetFuncArgInFrame should answer yes or no and the window function\n> just works on that. Otherwise we will get extension (and possibly even\n> core) functions that don't handle the frame properly.\n\nSo I moved row_is_in_reduce_frame into WinGetFuncArgInFrame so that\nthose window functions are not needed to be changed.\n\n(3) Window function rpr was removed. We can use first_value instead.\n\n(4) Remaining tasks/issues.\n\n- For now I disable WinSetMarkPosition because RPR often needs to\n access a row before the mark is set. We need to fix this in the\n future.\n\n- I am working on making window aggregates RPR aware now. The\n implementation is in progress and far from completeness. An example\n is below. I think row 2, 3, 4 of \"count\" column should be NULL\n instead of 3, 2, 0, 0. Same thing can be said to other\n rows. 
Probably this is an effect of moving aggregate but I still\n studying the window aggregation code.\n\nSELECT company, tdate, first_value(price) OVER W, count(*) OVER w FROM stock\n WINDOW w AS (\n PARTITION BY company\n ORDER BY tdate\n ROWS BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING\n AFTER MATCH SKIP PAST LAST ROW\n INITIAL\n PATTERN (START UP+ DOWN+)\n DEFINE\n START AS TRUE,\n UP AS price > PREV(price),\n DOWN AS price < PREV(price)\n);\n company | tdate | first_value | count \n----------+------------+-------------+-------\n company1 | 2023-07-01 | 100 | 4\n company1 | 2023-07-02 | | 3\n company1 | 2023-07-03 | | 2\n company1 | 2023-07-04 | | 0\n company1 | 2023-07-05 | | 0\n company1 | 2023-07-06 | 90 | 4\n company1 | 2023-07-07 | | 3\n company1 | 2023-07-08 | | 2\n company1 | 2023-07-09 | | 0\n company1 | 2023-07-10 | | 0\n company2 | 2023-07-01 | 50 | 4\n company2 | 2023-07-02 | | 3\n company2 | 2023-07-03 | | 2\n company2 | 2023-07-04 | | 0\n company2 | 2023-07-05 | | 0\n company2 | 2023-07-06 | 60 | 4\n company2 | 2023-07-07 | | 3\n company2 | 2023-07-08 | | 2\n company2 | 2023-07-09 | | 0\n company2 | 2023-07-10 | | 0\n\n- If attributes appearing in DEFINE are not used in the target list, query fails.\n\nSELECT company, tdate, count(*) OVER w FROM stock\n WINDOW w AS (\n PARTITION BY company\n ORDER BY tdate\n ROWS BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING\n AFTER MATCH SKIP PAST LAST ROW\n INITIAL\n PATTERN (START UP+ DOWN+)\n DEFINE\n START AS TRUE,\n UP AS price > PREV(price),\n DOWN AS price < PREV(price)\n);\nERROR: attribute number 3 exceeds number of columns 2\n\nThis is because attributes appearing in DEFINE are not added to the\ntarget list. I am looking for way to teach planner to add attributes\nappearing in DEFINE.\n\nI am going to add this thread to CommitFest and plan to add both of\nyou as reviewers. Thanks in advance.\n--\nTatsuo Ishii\nSRA OSS LLC\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp",
"msg_date": "Wed, 26 Jul 2023 21:21:34 +0900 (JST)",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Row pattern recognition"
},
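The search over candidate pattern strings described above can be pictured with a toy SQL query (the patch does this in C, in search_str_set()/search_str_set_recurse()). Taking the example from the mail, row 0 matched both A and B and row 1 matched only C; with a hypothetical PATTERN (A+ C+), i.e. the regular expression '^A+C+', the candidates 'AC' and 'BC' are tried and the longest match wins:

WITH row0(v) AS (VALUES ('A'), ('B')),
     row1(v) AS (VALUES ('C')),
     candidates AS (
         SELECT row0.v || row1.v AS pattern_str FROM row0, row1
     )
SELECT pattern_str,
       coalesce(length(substring(pattern_str from '^A+C+')), 0) AS matched_rows
FROM candidates
ORDER BY matched_rows DESC;

Here 'AC' matches two rows and 'BC' matches none, so the reduced window frame for that starting row would span two rows.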
{
"msg_contents": "> I am going to add this thread to CommitFest and plan to add both of\n> you as reviewers. Thanks in advance.\n\nDone.\nhttps://commitfest.postgresql.org/44/4460/\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS LLC\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp\n\n\n",
"msg_date": "Thu, 27 Jul 2023 05:22:30 +0900 (JST)",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Row pattern recognition"
},
{
"msg_contents": ">> We already recalculate a frame each time a row is processed even\n>> without RPR. See ExecWindowAgg.\n> \n> Yes, after each row. Not for each function.\n\nOk, I understand now. Closer look at the code, I realized that each\nwindow function calls update_frameheadpos, which computes the frame\nhead position. But actually it checks winstate->framehead_valid and if\nit's already true (probably by other window function), then it does\nnothing.\n\n>> Also RPR always requires a frame option ROWS BETWEEN CURRENT ROW,\n>> which means the frame head is changed each time current row position\n>> changes.\n> \n> Off topic for now: I wonder why this restriction is in place and\n> whether we should respect or ignore it. That is a discussion for\n> another time, though.\n\nMy guess is, it is because other than ROWS BETWEEN CURRENT ROW has\nlittle or no meaning. Consider following example:\n\nSELECT i, first_value(i) OVER w\n FROM (VALUES (1), (2), (3), (4)) AS v (i)\n WINDOW w AS (\n ROWS BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING\n AFTER MATCH SKIP PAST LAST ROW\n PATTERN (A)\n DEFINE\n A AS i = 1 OR i = 3\n);\n\nIn this example ROWS BETWEEN CURRENT ROW gives frames with i = 1 and i\n= 3.\n\n i | first_value \n---+-------------\n 1 | 1\n 2 | \n 3 | 3\n 4 | \n(4 rows)\n\nBut what would happen with ROWS BETWEEN UNBOUNDED PRECEDING AND\nUNBOUNDED FOLLOWING? Probably the frame i = 3 will be missed as\nat i = 2, PATTERN is not satisfied and compution of the reduced frame\nstops.\n\n i | first_value \n---+-------------\n 1 | 1\n 2 | \n 3 | \n 4 | \n(4 rows)\n\nThis is not very useful for users.\n\n>>> I strongly disagree with this. Window function do not need to know\n>>> how the frame is defined, and indeed they should not.\n>> We already break the rule by defining *support functions. See\n>> windowfuncs.c.\n> The support functions don't know anything about the frame, they just\n> know when a window function is monotonically increasing and execution\n> can either stop or be \"passed through\".\n\nI see following code in window_row_number_support:\n\n\t\t/*\n\t\t * The frame options can always become \"ROWS BETWEEN UNBOUNDED\n\t\t * PRECEDING AND CURRENT ROW\". row_number() always just increments by\n\t\t * 1 with each row in the partition. Using ROWS instead of RANGE\n\t\t * saves effort checking peer rows during execution.\n\t\t */\n\t\treq->frameOptions = (FRAMEOPTION_NONDEFAULT |\n\t\t\t\t\t\t\t FRAMEOPTION_ROWS |\n\t\t\t\t\t\t\t FRAMEOPTION_START_UNBOUNDED_PRECEDING |\n\t\t\t\t\t\t\t FRAMEOPTION_END_CURRENT_ROW);\n\nI think it not only knows about frame but it even changes the frame\noptions. This seems far from \"don't know anything about the frame\", no?\n\n> I have two comments about this:\n> \n> It isn't just for convenience, it is for correctness. The window\n> functions do not need to know which rows they are *not* operating on.\n> \n> There is no such thing as a \"full\" or \"reduced\" frame. The standard\n> uses those terms to explain the difference between before and after\n> RPR is applied, but window functions do not get to choose which frame\n> they apply over. They only ever apply over the reduced window frame.\n\nI agree that \"full window frame\" and \"reduced window frame\" do not\nexist at the same time, and in the end (after computation of reduced\nframe), only \"reduced\" frame is visible to window\nfunctions/aggregates. 
But I still do think that \"full window frame\"\nand \"reduced window frame\" are important concepts to explain/understand\nhow RPR works.\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS LLC\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp\n\n\n",
"msg_date": "Fri, 28 Jul 2023 16:09:53 +0900 (JST)",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Row pattern recognition"
},
{
"msg_contents": "On 7/28/23 09:09, Tatsuo Ishii wrote:\n>>> We already recalculate a frame each time a row is processed even\n>>> without RPR. See ExecWindowAgg.\n>>\n>> Yes, after each row. Not for each function.\n> \n> Ok, I understand now. Closer look at the code, I realized that each\n> window function calls update_frameheadpos, which computes the frame\n> head position. But actually it checks winstate->framehead_valid and if\n> it's already true (probably by other window function), then it does\n> nothing.\n> \n>>> Also RPR always requires a frame option ROWS BETWEEN CURRENT ROW,\n>>> which means the frame head is changed each time current row position\n>>> changes.\n>>\n>> Off topic for now: I wonder why this restriction is in place and\n>> whether we should respect or ignore it. That is a discussion for\n>> another time, though.\n> \n> My guess is, it is because other than ROWS BETWEEN CURRENT ROW has\n> little or no meaning. Consider following example:\n\nYes, that makes sense.\n\n>>>> I strongly disagree with this. Window function do not need to know\n>>>> how the frame is defined, and indeed they should not.\n>>> We already break the rule by defining *support functions. See\n>>> windowfuncs.c.\n>> The support functions don't know anything about the frame, they just\n>> know when a window function is monotonically increasing and execution\n>> can either stop or be \"passed through\".\n> \n> I see following code in window_row_number_support:\n> \n> \t\t/*\n> \t\t * The frame options can always become \"ROWS BETWEEN UNBOUNDED\n> \t\t * PRECEDING AND CURRENT ROW\". row_number() always just increments by\n> \t\t * 1 with each row in the partition. Using ROWS instead of RANGE\n> \t\t * saves effort checking peer rows during execution.\n> \t\t */\n> \t\treq->frameOptions = (FRAMEOPTION_NONDEFAULT |\n> \t\t\t\t\t\t\t FRAMEOPTION_ROWS |\n> \t\t\t\t\t\t\t FRAMEOPTION_START_UNBOUNDED_PRECEDING |\n> \t\t\t\t\t\t\t FRAMEOPTION_END_CURRENT_ROW);\n> \n> I think it not only knows about frame but it even changes the frame\n> options. This seems far from \"don't know anything about the frame\", no?\n\nThat's the planner support function. The row_number() function itself \nis not even allowed to *have* a frame, per spec. We allow it, but as \nyou can see from that support function, we completely replace it.\n\nSo all of the partition-level window functions are not affected by RPR \nanyway.\n\n>> I have two comments about this:\n>>\n>> It isn't just for convenience, it is for correctness. The window\n>> functions do not need to know which rows they are *not* operating on.\n>>\n>> There is no such thing as a \"full\" or \"reduced\" frame. The standard\n>> uses those terms to explain the difference between before and after\n>> RPR is applied, but window functions do not get to choose which frame\n>> they apply over. They only ever apply over the reduced window frame.\n> \n> I agree that \"full window frame\" and \"reduced window frame\" do not\n> exist at the same time, and in the end (after computation of reduced\n> frame), only \"reduced\" frame is visible to window\n> functions/aggregates. But I still do think that \"full window frame\"\n> and \"reduced window frame\" are important concept to explain/understand\n> how PRP works.\n\nIf we are just using those terms for documentation, then okay.\n-- \nVik Fearing\n\n\n\n",
"msg_date": "Fri, 28 Jul 2023 10:56:26 +0200",
"msg_from": "Vik Fearing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Row pattern recognition"
},
{
"msg_contents": "On 7/26/23 14:21, Tatsuo Ishii wrote:\n> Attached is the v3 patch. In this patch following changes are made.\n\nExcellent. Thanks!\n\nA few quick comments:\n\n- PERMUTE is still misspelled as PREMUTE\n\n- PATTERN variables do not have to exist in the DEFINE clause. They are \nconsidered TRUE if not present.\n\n> (1) I completely changed the pattern matching engine so that it\n> performs backtracking. Now the engine evaluates all pattern elements\n> defined in PATTER against each row, saving matched pattern variables\n> in a string per row. For example if the pattern element A and B\n> evaluated to true, a string \"AB\" is created for current row.\n> \n> This continues until all pattern matching fails or encounters the end\n> of full window frame/partition. After that, the pattern matching\n> engine creates all possible \"pattern strings\" and apply the regular\n> expression matching to each. For example if we have row 0 = \"AB\" row 1\n> = \"C\", possible pattern strings are: \"AC\" and \"BC\".\n> \n> If it matches, the length of matching substring is saved. After all\n> possible trials are done, the longest matching substring is chosen and\n> it becomes the width (number of rows) in the reduced window frame.\n> \n> See row_is_in_reduced_frame, search_str_set and search_str_set_recurse\n> in nodeWindowAggs.c for more details. For now I use a naive depth\n> first search and probably there is a lot of rooms for optimization\n> (for example rewriting it without using\n> recursion). Suggestions/patches are welcome.\n\nMy own reviews will only focus on correctness for now. Once we get a \ngood set of regression tests all passing, I will focus more on \noptimization. Of course, others might want to review the performance now.\n\n> Vik Fearing wrote:\n>> I strongly disagree with this. Window function do not need to know\n>> how the frame is defined, and indeed they should not.\n>> WinGetFuncArgInFrame should answer yes or no and the window function\n>> just works on that. Otherwise we will get extension (and possibly even\n>> core) functions that don't handle the frame properly.\n> \n> So I moved row_is_in_reduce_frame into WinGetFuncArgInFrame so that\n> those window functions are not needed to be changed.\n> \n> (3) Window function rpr was removed. We can use first_value instead.\n\nExcellent.\n\n> (4) Remaining tasks/issues.\n> \n> - For now I disable WinSetMarkPosition because RPR often needs to\n> access a row before the mark is set. We need to fix this in the\n> future.\n\nNoted, and agreed.\n\n> - I am working on making window aggregates RPR aware now. The\n> implementation is in progress and far from completeness. An example\n> is below. I think row 2, 3, 4 of \"count\" column should be NULL\n> instead of 3, 2, 0, 0. Same thing can be said to other\n> rows. Probably this is an effect of moving aggregate but I still\n> studying the window aggregation code.\n\nThis tells me again that RPR is not being run in the right place. 
All \nwindowed aggregates and frame-level window functions should Just Work \nwith no modification.\n\n> SELECT company, tdate, first_value(price) OVER W, count(*) OVER w FROM stock\n> WINDOW w AS (\n> PARTITION BY company\n> ORDER BY tdate\n> ROWS BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING\n> AFTER MATCH SKIP PAST LAST ROW\n> INITIAL\n> PATTERN (START UP+ DOWN+)\n> DEFINE\n> START AS TRUE,\n> UP AS price > PREV(price),\n> DOWN AS price < PREV(price)\n> );\n> company | tdate | first_value | count\n> ----------+------------+-------------+-------\n> company1 | 2023-07-01 | 100 | 4\n> company1 | 2023-07-02 | | 3\n> company1 | 2023-07-03 | | 2\n> company1 | 2023-07-04 | | 0\n> company1 | 2023-07-05 | | 0\n> company1 | 2023-07-06 | 90 | 4\n> company1 | 2023-07-07 | | 3\n> company1 | 2023-07-08 | | 2\n> company1 | 2023-07-09 | | 0\n> company1 | 2023-07-10 | | 0\n> company2 | 2023-07-01 | 50 | 4\n> company2 | 2023-07-02 | | 3\n> company2 | 2023-07-03 | | 2\n> company2 | 2023-07-04 | | 0\n> company2 | 2023-07-05 | | 0\n> company2 | 2023-07-06 | 60 | 4\n> company2 | 2023-07-07 | | 3\n> company2 | 2023-07-08 | | 2\n> company2 | 2023-07-09 | | 0\n> company2 | 2023-07-10 | | 0\n\nIn this scenario, row 1's frame is the first 5 rows and specified SKIP \nPAST LAST ROW, so rows 2-5 don't have *any* frame (because they are \nskipped) and the result of the outer count should be 0 for all of them \nbecause there are no rows in the frame.\n\nWhen we get to adding count in the MEASURES clause, there will be a \ndifference between no match and empty match, but that does not apply here.\n\n> I am going to add this thread to CommitFest and plan to add both of\n> you as reviewers. Thanks in advance.\n\nMy pleasure. Thank you for working on this difficult feature.\n-- \nVik Fearing\n\n\n\n",
"msg_date": "Fri, 28 Jul 2023 11:21:25 +0200",
"msg_from": "Vik Fearing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Row pattern recognition"
},
{
"msg_contents": ">> Attached is the v3 patch. In this patch following changes are made.\n> \n> Excellent. Thanks!\n\nYou are welcome!\n\n> A few quick comments:\n> \n> - PERMUTE is still misspelled as PREMUTE\n\nOops. Will fix.\n\n> - PATTERN variables do not have to exist in the DEFINE clause. They are\n> - considered TRUE if not present.\n\nDo you think we really need this? I found a criticism regarding this.\n\nhttps://link.springer.com/article/10.1007/s13222-022-00404-3\n\"3.2 Explicit Definition of All Row Pattern Variables\"\n\nWhat do you think?\n\n>> - I am working on making window aggregates RPR aware now. The\n>> implementation is in progress and far from completeness. An example\n>> is below. I think row 2, 3, 4 of \"count\" column should be NULL\n>> instead of 3, 2, 0, 0. Same thing can be said to other\n>> rows. Probably this is an effect of moving aggregate but I still\n>> studying the window aggregation code.\n> \n> This tells me again that RPR is not being run in the right place. All\n> windowed aggregates and frame-level window functions should Just Work\n> with no modification.\n\nI am not touching each aggregate function. I am modifying\neval_windowaggregates() in nodeWindowAgg.c, which calls each aggregate\nfunction. Do you think it's not the right place to make window\naggregates RPR aware?\n\n>> SELECT company, tdate, first_value(price) OVER W, count(*) OVER w FROM\n>> stock\n>> WINDOW w AS (\n>> PARTITION BY company\n>> ORDER BY tdate\n>> ROWS BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING\n>> AFTER MATCH SKIP PAST LAST ROW\n>> INITIAL\n>> PATTERN (START UP+ DOWN+)\n>> DEFINE\n>> START AS TRUE,\n>> UP AS price > PREV(price),\n>> DOWN AS price < PREV(price)\n>> );\n>> company | tdate | first_value | count\n>> ----------+------------+-------------+-------\n>> company1 | 2023-07-01 | 100 | 4\n>> company1 | 2023-07-02 | | 3\n>> company1 | 2023-07-03 | | 2\n>> company1 | 2023-07-04 | | 0\n>> company1 | 2023-07-05 | | 0\n>> company1 | 2023-07-06 | 90 | 4\n>> company1 | 2023-07-07 | | 3\n>> company1 | 2023-07-08 | | 2\n>> company1 | 2023-07-09 | | 0\n>> company1 | 2023-07-10 | | 0\n>> company2 | 2023-07-01 | 50 | 4\n>> company2 | 2023-07-02 | | 3\n>> company2 | 2023-07-03 | | 2\n>> company2 | 2023-07-04 | | 0\n>> company2 | 2023-07-05 | | 0\n>> company2 | 2023-07-06 | 60 | 4\n>> company2 | 2023-07-07 | | 3\n>> company2 | 2023-07-08 | | 2\n>> company2 | 2023-07-09 | | 0\n>> company2 | 2023-07-10 | | 0\n> \n> In this scenario, row 1's frame is the first 5 rows and specified SKIP\n> PAST LAST ROW, so rows 2-5 don't have *any* frame (because they are\n> skipped) and the result of the outer count should be 0 for all of them\n> because there are no rows in the frame.\n\nOk. Just I want to make sure. If it's other aggregates like sum or\navg, the result of the outer aggregates should be NULL.\n\n> When we get to adding count in the MEASURES clause, there will be a\n> difference between no match and empty match, but that does not apply\n> here.\n\nCan you elaborate more? I understand that \"no match\" and \"empty match\"\nare different things. But I do not understand how the difference\naffects the result of count.\n\n>> I am going to add this thread to CommitFest and plan to add both of\n>> you as reviewers. Thanks in advance.\n> \n> My pleasure. Thank you for working on this difficult feature.\n\nThank you for accepting being registered as a reviewer. 
Your comments\nare really helpful.\n--\nTatsuo Ishii\nSRA OSS LLC\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp\n\n\n",
"msg_date": "Fri, 28 Jul 2023 20:02:30 +0900 (JST)",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Row pattern recognition"
},
{
"msg_contents": "On 7/28/23 13:02, Tatsuo Ishii wrote:\n>>> Attached is the v3 patch. In this patch following changes are made.\n>>\n>> - PATTERN variables do not have to exist in the DEFINE clause. They are\n>> - considered TRUE if not present.\n> \n> Do you think we really need this? I found a criticism regarding this.\n> \n> https://link.springer.com/article/10.1007/s13222-022-00404-3\n> \"3.2 Explicit Definition of All Row Pattern Variables\"\n> \n> What do you think?\n\nI think that a large part of obeying the standard is to allow queries \nfrom other engines to run the same on ours. The standard does not \nrequire the pattern variables to be defined and so there are certainly \nqueries out there without them, and that hurts migrating to PostgreSQL.\n\n>>> - I am working on making window aggregates RPR aware now. The\n>>> implementation is in progress and far from completeness. An example\n>>> is below. I think row 2, 3, 4 of \"count\" column should be NULL\n>>> instead of 3, 2, 0, 0. Same thing can be said to other\n>>> rows. Probably this is an effect of moving aggregate but I still\n>>> studying the window aggregation code.\n>>\n>> This tells me again that RPR is not being run in the right place. All\n>> windowed aggregates and frame-level window functions should Just Work\n>> with no modification.\n> \n> I am not touching each aggregate function. I am modifying\n> eval_windowaggregates() in nodeWindowAgg.c, which calls each aggregate\n> function. Do you think it's not the right place to make window\n> aggregates RPR aware?\n\nOh, okay.\n\n>>> SELECT company, tdate, first_value(price) OVER W, count(*) OVER w FROM\n>>> stock\n>>> WINDOW w AS (\n>>> PARTITION BY company\n>>> ORDER BY tdate\n>>> ROWS BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING\n>>> AFTER MATCH SKIP PAST LAST ROW\n>>> INITIAL\n>>> PATTERN (START UP+ DOWN+)\n>>> DEFINE\n>>> START AS TRUE,\n>>> UP AS price > PREV(price),\n>>> DOWN AS price < PREV(price)\n>>> );\n>>> company | tdate | first_value | count\n>>> ----------+------------+-------------+-------\n>>> company1 | 2023-07-01 | 100 | 4\n>>> company1 | 2023-07-02 | | 3\n>>> company1 | 2023-07-03 | | 2\n>>> company1 | 2023-07-04 | | 0\n>>> company1 | 2023-07-05 | | 0\n>>> company1 | 2023-07-06 | 90 | 4\n>>> company1 | 2023-07-07 | | 3\n>>> company1 | 2023-07-08 | | 2\n>>> company1 | 2023-07-09 | | 0\n>>> company1 | 2023-07-10 | | 0\n>>> company2 | 2023-07-01 | 50 | 4\n>>> company2 | 2023-07-02 | | 3\n>>> company2 | 2023-07-03 | | 2\n>>> company2 | 2023-07-04 | | 0\n>>> company2 | 2023-07-05 | | 0\n>>> company2 | 2023-07-06 | 60 | 4\n>>> company2 | 2023-07-07 | | 3\n>>> company2 | 2023-07-08 | | 2\n>>> company2 | 2023-07-09 | | 0\n>>> company2 | 2023-07-10 | | 0\n>>\n>> In this scenario, row 1's frame is the first 5 rows and specified SKIP\n>> PAST LAST ROW, so rows 2-5 don't have *any* frame (because they are\n>> skipped) and the result of the outer count should be 0 for all of them\n>> because there are no rows in the frame.\n> \n> Ok. Just I want to make sure. If it's other aggregates like sum or\n> avg, the result of the outer aggregates should be NULL.\n\nThey all behave the same way as in a normal query when they receive no \nrows as input.\n\n>> When we get to adding count in the MEASURES clause, there will be a\n>> difference between no match and empty match, but that does not apply\n>> here.\n> \n> Can you elaborate more? I understand that \"no match\" and \"empty match\"\n> are different things. 
But I do not understand how the difference\n> affects the result of count.\n\nThis query:\n\nSELECT v.a, wcnt OVER w, count(*) OVER w\nFROM (VALUES ('A')) AS v (a)\nWINDOW w AS (\n ORDER BY v.a\n MEASURES count(*) AS wcnt\n ROWS BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING\n PATTERN (B)\n DEFINE B AS B.a = 'B'\n)\n\nproduces this result:\n\n a | wcnt | count\n---+------+-------\n A | | 0\n(1 row)\n\nInside the window specification, *no match* was found and so all of the \nMEASURES are null. The count(*) in the target list however, still \nexists and operates over zero rows.\n\nThis very similar query:\n\nSELECT v.a, wcnt OVER w, count(*) OVER w\nFROM (VALUES ('A')) AS v (a)\nWINDOW w AS (\n ORDER BY v.a\n MEASURES count(*) AS wcnt\n ROWS BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING\n PATTERN (B?)\n DEFINE B AS B.a = 'B'\n)\n\nproduces this result:\n\n a | wcnt | count\n---+------+-------\n A | 0 | 0\n(1 row)\n\nIn this case, the pattern is B? instead of just B, which produces an \n*empty match* for the MEASURES to be applied over.\n-- \nVik Fearing\n\n\n\n",
"msg_date": "Fri, 28 Jul 2023 14:36:58 +0200",
"msg_from": "Vik Fearing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Row pattern recognition"
},
{
"msg_contents": ">>> - PATTERN variables do not have to exist in the DEFINE clause. They are\n>>> - considered TRUE if not present.\n>> Do you think we really need this? I found a criticism regarding this.\n>> https://link.springer.com/article/10.1007/s13222-022-00404-3\n>> \"3.2 Explicit Definition of All Row Pattern Variables\"\n>> What do you think?\n> \n> I think that a large part of obeying the standard is to allow queries\n> from other engines to run the same on ours. The standard does not\n> require the pattern variables to be defined and so there are certainly\n> queries out there without them, and that hurts migrating to\n> PostgreSQL.\n\nYeah, migration is good point. I agree we should have the feature.\n\n>>> When we get to adding count in the MEASURES clause, there will be a\n>>> difference between no match and empty match, but that does not apply\n>>> here.\n>> Can you elaborate more? I understand that \"no match\" and \"empty match\"\n>> are different things. But I do not understand how the difference\n>> affects the result of count.\n> \n> This query:\n> \n> SELECT v.a, wcnt OVER w, count(*) OVER w\n> FROM (VALUES ('A')) AS v (a)\n> WINDOW w AS (\n> ORDER BY v.a\n> MEASURES count(*) AS wcnt\n> ROWS BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING\n> PATTERN (B)\n> DEFINE B AS B.a = 'B'\n> )\n> \n> produces this result:\n> \n> a | wcnt | count\n> ---+------+-------\n> A | | 0\n> (1 row)\n> \n> Inside the window specification, *no match* was found and so all of\n> the MEASURES are null. The count(*) in the target list however, still\n> exists and operates over zero rows.\n> \n> This very similar query:\n> \n> SELECT v.a, wcnt OVER w, count(*) OVER w\n> FROM (VALUES ('A')) AS v (a)\n> WINDOW w AS (\n> ORDER BY v.a\n> MEASURES count(*) AS wcnt\n> ROWS BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING\n> PATTERN (B?)\n> DEFINE B AS B.a = 'B'\n> )\n> \n> produces this result:\n> \n> a | wcnt | count\n> ---+------+-------\n> A | 0 | 0\n> (1 row)\n> \n> In this case, the pattern is B? instead of just B, which produces an\n> *empty match* for the MEASURES to be applied over.\n\nThank you for the detailed explanation. I think I understand now.\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS LLC\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp\n\n\n",
"msg_date": "Sat, 29 Jul 2023 12:05:08 +0900 (JST)",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Row pattern recognition"
},
{
"msg_contents": "Attached is the v4 patch. Differences from previous patch include:\n\n> - PERMUTE is still misspelled as PREMUTE\n\nFixed.\n\n> - PATTERN variables do not have to exist in the DEFINE clause. They are\n> - considered TRUE if not present.\n\nFixed. Moreover new regression test case is added.\n\n- It was possible that tle nodes in DEFINE clause do not appear in the\n plan's target list. This makes impossible to evaluate expressions in\n the DEFINE because it does not appear in the outer plan's target\n list. To fix this, call findTargetlistEntrySQL99 (with resjunk is\n true) so that the missing TargetEntry is added to the outer plan\n later on.\n\n- I eliminated some hacks in handling the Var node in DEFINE\n clause. Previously I replaced varattno of Var node in a plan tree by\n hand so that it refers to correct varattno in the outer plan\n node. In this patch I modified set_upper_references so that it calls\n fix_upper_expr for those Var nodes in the DEFINE clause. See v4-0003\n patch for more details.\n\n- I found a bug with pattern matching code. It creates a string for\n subsequent regular expression matching. It uses the initial letter\n of each define variable name. For example, if the varname is \"foo\",\n then \"f\" is used. Obviously this makes trouble if we have two or\n more variables starting with same \"f\" (e.g. \"food\"). To fix this, I\n assign [a-z] to each variable instead of its initial letter. However\n this way limits us not to have more than 26 variables. I hope 26 is\n enough for most use cases.\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS LLC\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp",
"msg_date": "Wed, 09 Aug 2023 17:41:12 +0900 (JST)",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Row pattern recognition"
},
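A small sketch of the name-collision case mentioned in the last point of the v4 notes above, reusing the stock table from the earlier examples in this thread. The variable names FOO and FOOD are hypothetical, chosen only because they share the same initial letter, which is exactly what broke the old initial-letter encoding:

    SELECT company, tdate, price, first_value(price) OVER w
    FROM stock
    WINDOW w AS (
      PARTITION BY company
      ROWS BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING
      INITIAL
      PATTERN (FOO FOOD+)
      DEFINE
        FOO  AS price <= 100,
        FOOD AS price > PREV(price)
    );

With the v4 change each pattern variable is assigned its own letter from [a-z], so the two names no longer collide in the string used for regular expression matching.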
{
"msg_contents": "Attached is the v5 patch. Differences from previous patch include:\n\n* Resolve complaint from \"PostgreSQL Patch Tester\"\n https://commitfest.postgresql.org/44/4460/\n\n- Change gram.y to use PATTERN_P instead of PATTERN. Using PATTERN seems\n to make trouble with Visual Studio build.\n\n:\n:\n[10:07:57.853] FAILED: src/backend/parser/parser.a.p/meson-generated_.._gram.c.obj \n[10:07:57.853] \"cl\" \"-Isrc\\backend\\parser\\parser.a.p\" \"-Isrc\\backend\\parser\" \"-I..\\src\\backend\\parser\" \"-Isrc\\include\" \"-I..\\src\\include\" \"-Ic:\\openssl\\1.1\\include\" \"-I..\\src\\include\\port\\win32\" \"-I..\\src\\include\\port\\win32_msvc\" \"/MDd\" \"/nologo\" \"/showIncludes\" \"/utf-8\" \"/W2\" \"/Od\" \"/Zi\" \"/DWIN32\" \"/DWINDOWS\" \"/D__WINDOWS__\" \"/D__WIN32__\" \"/D_CRT_SECURE_NO_DEPRECATE\" \"/D_CRT_NONSTDC_NO_DEPRECATE\" \"/wd4018\" \"/wd4244\" \"/wd4273\" \"/wd4101\" \"/wd4102\" \"/wd4090\" \"/wd4267\" \"-DBUILDING_DLL\" \"/Fdsrc\\backend\\parser\\parser.a.p\\meson-generated_.._gram.c.pdb\" /Fosrc/backend/parser/parser.a.p/meson-generated_.._gram.c.obj \"/c\" src/backend/parser/gram.c\n[10:07:57.860] c:\\cirrus\\build\\src\\backend\\parser\\gram.h(379): error C2365: 'PATTERN': redefinition; previous definition was 'typedef'\n[10:07:57.860] C:\\Program Files (x86)\\Windows Kits\\10\\include\\10.0.20348.0\\um\\wingdi.h(1375): note: see declaration of 'PATTERN'\n[10:07:57.860] c:\\cirrus\\build\\src\\backend\\parser\\gram.h(379): error C2086: 'yytokentype PATTERN': redefinition\n[10:07:57.860] c:\\cirrus\\build\\src\\backend\\parser\\gram.h(379): note: see declaration of 'PATTERN'\n[10:07:57.860] ninja: build stopped: subcommand failed.\n\n* Resolve complaint from \"make headerscheck\"\n\n- Change Windowapi.h and nodeWindowAgg.c to remove unecessary extern\n and public functions.\n \nBest reagards,\n--\nTatsuo Ishii\nSRA OSS LLC\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp",
"msg_date": "Sat, 02 Sep 2023 15:52:35 +0900 (JST)",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Row pattern recognition"
},
{
"msg_contents": "Op 9/2/23 om 08:52 schreef Tatsuo Ishii:\n\n> Attached is the v5 patch. Differences from previous patch include:\n> \n\nHi,\n\nThe patches compile & tests run fine but this statement from the \ndocumentation crashes an assert-enabled server:\n\nSELECT company, tdate, price, max(price) OVER w FROM stock\nWINDOW w AS (\nPARTITION BY company\nROWS BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING\nAFTER MATCH SKIP PAST LAST ROW\nINITIAL\nPATTERN (LOWPRICE UP+ DOWN+)\nDEFINE\nLOWPRICE AS price <= 100,\nUP AS price > PREV(price),\nDOWN AS price < PREV(price)\n);\nserver closed the connection unexpectedly\n\tThis probably means the server terminated abnormally\n\tbefore or while processing the request.\nconnection to server was lost\n\n\nLog file:\n\nTRAP: failed Assert(\"aggregatedupto_nonrestarted <= \nwinstate->aggregatedupto\"), File: \"nodeWindowAgg.c\", Line: 1054, PID: 68975\npostgres: 17_rpr_d0ec_gulo: aardvark testdb ::1(34808) \nSELECT(ExceptionalCondition+0x54)[0x9b0824]\npostgres: 17_rpr_d0ec_gulo: aardvark testdb ::1(34808) SELECT[0x71ae8d]\npostgres: 17_rpr_d0ec_gulo: aardvark testdb ::1(34808) \nSELECT(standard_ExecutorRun+0x13a)[0x6def9a]\n/home/aardvark/pg_stuff/pg_installations/pgsql.rpr/lib/pg_stat_statements.so(+0x55e5)[0x7ff3798b95e5]\n/home/aardvark/pg_stuff/pg_installations/pgsql.rpr/lib/auto_explain.so(+0x2680)[0x7ff3798ab680]\npostgres: 17_rpr_d0ec_gulo: aardvark testdb ::1(34808) SELECT[0x88a4ff]\npostgres: 17_rpr_d0ec_gulo: aardvark testdb ::1(34808) \nSELECT(PortalRun+0x240)[0x88bb50]\npostgres: 17_rpr_d0ec_gulo: aardvark testdb ::1(34808) SELECT[0x887cca]\npostgres: 17_rpr_d0ec_gulo: aardvark testdb ::1(34808) \nSELECT(PostgresMain+0x14dc)[0x88958c]\npostgres: 17_rpr_d0ec_gulo: aardvark testdb ::1(34808) SELECT[0x7fb0da]\npostgres: 17_rpr_d0ec_gulo: aardvark testdb ::1(34808) \nSELECT(PostmasterMain+0xd2d)[0x7fc01d]\npostgres: 17_rpr_d0ec_gulo: aardvark testdb ::1(34808) \nSELECT(main+0x1e0)[0x5286d0]\n/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xea)[0x7ff378e9dd0a]\npostgres: 17_rpr_d0ec_gulo: aardvark testdb ::1(34808) \nSELECT(_start+0x2a)[0x5289aa]\n2023-09-02 19:59:05.329 CEST 46723 LOG: server process (PID 68975) was \nterminated by signal 6: Aborted\n2023-09-02 19:59:05.329 CEST 46723 DETAIL: Failed process was running: \nSELECT company, tdate, price, max(price) OVER w FROM stock\n WINDOW w AS (\n PARTITION BY company\n ROWS BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING\n AFTER MATCH SKIP PAST LAST ROW\n INITIAL\n PATTERN (LOWPRICE UP+ DOWN+)\n DEFINE\n LOWPRICE AS price <= 100,\n UP AS price > PREV(price),\n DOWN AS price < PREV(price)\n );\n2023-09-02 19:59:05.329 CEST 46723 LOG: terminating any other active \nserver processes\n\n\n\nErik Rijkers\n\n\n",
"msg_date": "Sat, 2 Sep 2023 20:04:02 +0200",
"msg_from": "Erik Rijkers <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Row pattern recognition"
},
{
"msg_contents": "> Hi,\n> \n> The patches compile & tests run fine but this statement from the\n> documentation crashes an assert-enabled server:\n> \n> SELECT company, tdate, price, max(price) OVER w FROM stock\n> WINDOW w AS (\n> PARTITION BY company\n> ROWS BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING\n> AFTER MATCH SKIP PAST LAST ROW\n> INITIAL\n> PATTERN (LOWPRICE UP+ DOWN+)\n> DEFINE\n> LOWPRICE AS price <= 100,\n> UP AS price > PREV(price),\n> DOWN AS price < PREV(price)\n> );\n> server closed the connection unexpectedly\n> \tThis probably means the server terminated abnormally\n> \tbefore or while processing the request.\n> connection to server was lost\n\nThank you for the report. Currently the patch has an issue with\naggregate functions including max. I have been working on aggregations\nin row pattern recognition but will take more time to complete the\npart.\n\nIn the mean time if you want to play with RPR, you can try window\nfunctions. Examples can be found in src/test/regress/sql/rpr.sql.\nHere is one of this:\n\n-- the first row start with less than or equal to 100\nSELECT company, tdate, price, first_value(price) OVER w, last_value(price) OVER w\n FROM stock\n WINDOW w AS (\n PARTITION BY company\n ROWS BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING\n INITIAL\n PATTERN (LOWPRICE UP+ DOWN+)\n DEFINE\n LOWPRICE AS price <= 100,\n UP AS price > PREV(price),\n DOWN AS price < PREV(price)\n);\n\n-- second row raises 120%\nSELECT company, tdate, price, first_value(price) OVER w, last_value(price) OVER w\n FROM stock\n WINDOW w AS (\n PARTITION BY company\n ROWS BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING\n INITIAL\n PATTERN (LOWPRICE UP+ DOWN+)\n DEFINE\n LOWPRICE AS price <= 100,\n UP AS price > PREV(price) * 1.2,\n DOWN AS price < PREV(price)\n);\n\nSorry for inconvenience.\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS LLC\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp\n\n\n",
"msg_date": "Sun, 03 Sep 2023 09:03:44 +0900 (JST)",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Row pattern recognition"
},
{
"msg_contents": "Hello!\n\n> (1) I completely changed the pattern matching engine so that it\n> performs backtracking. Now the engine evaluates all pattern elements\n> defined in PATTER against each row, saving matched pattern variables\n> in a string per row. For example if the pattern element A and B\n> evaluated to true, a string \"AB\" is created for current row.\n\nIf I understand correctly, this strategy assumes that one row's\nmembership in a pattern variable is independent of the other rows'\nmembership. But I don't think that's necessarily true:\n\n DEFINE\n A AS PREV(CLASSIFIER()) IS DISTINCT FROM 'A',\n ...\n\n> See row_is_in_reduced_frame, search_str_set and search_str_set_recurse\n> in nodeWindowAggs.c for more details. For now I use a naive depth\n> first search and probably there is a lot of rooms for optimization\n> (for example rewriting it without using\n> recursion).\n\nThe depth-first match is doing a lot of subtle work here. For example, with\n\n PATTERN ( A+ B+ )\n DEFINE A AS TRUE,\n B AS TRUE\n\n(i.e. all rows match both variables), and three rows in the partition,\nour candidates will be tried in the order\n\n aaa\n aab\n aba\n abb\n ...\n bbb\n\nThe only possible matches against our regex `^a+b+` are \"aab\" and \"abb\",\nand that order matches the preferment order, so it's fine. But it's easy\nto come up with a pattern where that's the wrong order, like\n\n PATTERN ( A+ (B|A)+ )\n\nNow \"aaa\" will be considered before \"aab\", which isn't correct.\n\nSimilarly, the assumption that we want to match the longest string only\nworks because we don't allow alternation yet.\n\n> Suggestions/patches are welcome.\n\nCool, I will give this piece some more thought. Do you mind if I try to\nadd some more complicated pattern quantifiers to stress the\narchitecture, or would you prefer to tackle that later? Just alternation\nby itself will open up a world of corner cases.\n\n> With the new engine, cases above do not fail anymore. See new\n> regression test cases. Thanks for providing valuable test cases!\n\nYou're very welcome!\n\nOn 8/9/23 01:41, Tatsuo Ishii wrote:\n> - I found a bug with pattern matching code. It creates a string for\n> subsequent regular expression matching. It uses the initial letter\n> of each define variable name. For example, if the varname is \"foo\",\n> then \"f\" is used. Obviously this makes trouble if we have two or\n> more variables starting with same \"f\" (e.g. \"food\"). To fix this, I\n> assign [a-z] to each variable instead of its initial letter. However\n> this way limits us not to have more than 26 variables. I hope 26 is\n> enough for most use cases.\n\nThere are still plenty of alphanumerics left that could be assigned...\n\nBut I'm wondering if we might want to just implement the NFA directly?\nThe current implementation's Cartesian explosion can probably be pruned\naggressively, but replaying the entire regex match once for every\nbacktracked step will still duplicate a lot of work.\n\n--\n\nI've attached another test case; it looks like last_value() is depending\non some sort of side effect from either first_value() or nth_value(). I\nknow the window frame itself is still under construction, so apologies\nif this is an expected failure.\n\nThanks!\n--Jacob",
"msg_date": "Thu, 7 Sep 2023 17:00:07 -0700",
"msg_from": "Jacob Champion <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Row pattern recognition"
},
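The all-rows-match case discussed above can be spelled out as a query in the window syntax used elsewhere in this thread; the three-row VALUES data and the column name x are made up for illustration. With DEFINE marking every row as both A and B, the only candidate strings the pattern A+ B+ can accept are 'aab' and 'abb', as noted in the message above:

    SELECT v.x, first_value(v.x) OVER w, last_value(v.x) OVER w
    FROM (VALUES (1), (2), (3)) AS v (x)
    WINDOW w AS (
      ORDER BY v.x
      ROWS BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING
      PATTERN (A+ B+)
      DEFINE A AS TRUE,
             B AS TRUE
    );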
{
"msg_contents": "Hi,\n\n> Hello!\n> \n>> (1) I completely changed the pattern matching engine so that it\n>> performs backtracking. Now the engine evaluates all pattern elements\n>> defined in PATTER against each row, saving matched pattern variables\n>> in a string per row. For example if the pattern element A and B\n>> evaluated to true, a string \"AB\" is created for current row.\n> \n> If I understand correctly, this strategy assumes that one row's\n> membership in a pattern variable is independent of the other rows'\n> membership. But I don't think that's necessarily true:\n> \n> DEFINE\n> A AS PREV(CLASSIFIER()) IS DISTINCT FROM 'A',\n> ...\n\nBut:\n\nUP AS price > PREV(price)\n\nalso depends on previous row, no? Can you please elaborate how your\nexample could break current implementation? I cannot test it because\nCLASSIFIER is not implemented yet.\n\n>> See row_is_in_reduced_frame, search_str_set and search_str_set_recurse\n>> in nodeWindowAggs.c for more details. For now I use a naive depth\n>> first search and probably there is a lot of rooms for optimization\n>> (for example rewriting it without using\n>> recursion).\n> \n> The depth-first match is doing a lot of subtle work here. For example, with\n> \n> PATTERN ( A+ B+ )\n> DEFINE A AS TRUE,\n> B AS TRUE\n> \n> (i.e. all rows match both variables), and three rows in the partition,\n> our candidates will be tried in the order\n> \n> aaa\n> aab\n> aba\n> abb\n> ...\n> bbb\n> \n> The only possible matches against our regex `^a+b+` are \"aab\" and \"abb\",\n> and that order matches the preferment order, so it's fine. But it's easy\n> to come up with a pattern where that's the wrong order, like\n> \n> PATTERN ( A+ (B|A)+ )\n> \n> Now \"aaa\" will be considered before \"aab\", which isn't correct.\n\nCan you explain a little bit more? I think 'aaa' matches a regular\nexpression 'a+(b|a)+' and should be no problem before \"aab\" is\nconsidered.\n\n> Similarly, the assumption that we want to match the longest string only\n> works because we don't allow alternation yet.\n\nCan you please clarify more on this?\n\n> Cool, I will give this piece some more thought. Do you mind if I try to\n> add some more complicated pattern quantifiers to stress the\n> architecture, or would you prefer to tackle that later? Just alternation\n> by itself will open up a world of corner cases.\n\nDo you mean you want to provide a better patch for the pattern\nmatching part? That will be helpfull. Because I am currently working\non the aggregation part and have no time to do it. However, the\naggregation work affects the v5 patch: it needs a refactoring. So can\nyou wait until I release v6 patch? I hope it will be released in two\nweeks or so.\n\n> On 8/9/23 01:41, Tatsuo Ishii wrote:\n>> - I found a bug with pattern matching code. It creates a string for\n>> subsequent regular expression matching. It uses the initial letter\n>> of each define variable name. For example, if the varname is \"foo\",\n>> then \"f\" is used. Obviously this makes trouble if we have two or\n>> more variables starting with same \"f\" (e.g. \"food\"). To fix this, I\n>> assign [a-z] to each variable instead of its initial letter. However\n>> this way limits us not to have more than 26 variables. 
I hope 26 is\n>> enough for most use cases.\n> \n> There are still plenty of alphanumerics left that could be assigned...\n> \n> But I'm wondering if we might want to just implement the NFA directly?\n> The current implementation's Cartesian explosion can probably be pruned\n> aggressively, but replaying the entire regex match once for every\n> backtracked step will still duplicate a lot of work.\n\nNot sure if you mean implementing new regular expression engine\nbesides src/backend/regexp. I am afraid it's not a trivial work. The\ncurrent regexp code consists of over 10k lines. What do you think?\n\n> I've attached another test case; it looks like last_value() is depending\n> on some sort of side effect from either first_value() or nth_value(). I\n> know the window frame itself is still under construction, so apologies\n> if this is an expected failure.\n\nThanks. Fortunately current code which I am working passes the new\ntest. I will include it in the next (v6) patch.\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS LLC\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp\n\n\n",
"msg_date": "Fri, 08 Sep 2023 12:54:47 +0900 (JST)",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Row pattern recognition"
},
{
"msg_contents": "On 9/7/23 20:54, Tatsuo Ishii wrote:\n>> DEFINE\n>> A AS PREV(CLASSIFIER()) IS DISTINCT FROM 'A',\n>> ...\n> \n> But:\n> \n> UP AS price > PREV(price)\n> \n> also depends on previous row, no?\n\nPREV(CLASSIFIER()) depends not on the value of the previous row but the\nstate of the match so far. To take an example from the patch:\n\n> * Example:\n> * str_set[0] = \"AB\";\n> * str_set[1] = \"AC\";\n> * In this case at row 0 A and B are true, and A and C are true in row 1.\n\nWith these str_sets and my example DEFINE, row[1] is only classifiable\nas 'A' if row[0] is *not* classified as 'A' at this point in the match.\n\"AA\" is not a valid candidate string, even if it matches the PATTERN.\n\nSo if we don't reevaluate the pattern variable condition for the row, we\nat least have to prune the combinations that search_str_set() visits, so\nthat we don't generate a logically impossible combination. That seems\nlike it could be pretty fragile, and it may be difficult for us to prove\ncompliance.\n\n>> But it's easy\n>> to come up with a pattern where that's the wrong order, like\n>>\n>> PATTERN ( A+ (B|A)+ )\n>>\n>> Now \"aaa\" will be considered before \"aab\", which isn't correct.\n> \n> Can you explain a little bit more? I think 'aaa' matches a regular\n> expression 'a+(b|a)+' and should be no problem before \"aab\" is\n> considered.\n\nAssuming I've understood the rules correctly, we're not allowed to\nclassify the last row as 'A' if it also matches 'B'. Lexicographic\nordering takes precedence, so we have to try \"aab\" first. Otherwise our\nquery could return different results compared to another implementation.\n\n>> Similarly, the assumption that we want to match the longest string only\n>> works because we don't allow alternation yet.\n> \n> Can you please clarify more on this?\n\nSure: for the pattern\n\n PATTERN ( (A|B)+ )\n\nwe have to consider the candidate \"a\" over the candidate \"ba\", even\nthough the latter is longer. Like the prior example, lexicographic\nordering is considered more important than the greedy quantifier.\nQuoting ISO/IEC 9075-2:2016:\n\n More precisely, with both reluctant and greedy quantifiers, the set\n of matches is ordered lexicographically, but when one match is an\n initial substring of another match, reluctant quantifiers prefer the\n shorter match (the substring), whereas greedy quantifiers prefer the\n longer match (the “superstring”).\n\nHere, \"ba\" doesn't have \"a\" as a prefix, so \"ba\" doesn't get priority.\nISO/IEC 19075-5:2021 has a big section on this (7.2) with worked examples.\n\n(The \"lexicographic order matters more than greediness\" concept was the\nmost mind-bending part for me so far, probably because I haven't figured\nout how to translate the concept into POSIX EREs. It wouldn't make sense\nto say \"the letter 't' can match 'a', 'B', or '3' in this regex\", but\nthat's what RPR is doing.)\n\n>> Cool, I will give this piece some more thought. Do you mind if I try to\n>> add some more complicated pattern quantifiers to stress the\n>> architecture, or would you prefer to tackle that later? Just alternation\n>> by itself will open up a world of corner cases.\n> \n> Do you mean you want to provide a better patch for the pattern\n> matching part? That will be helpfull.\n\nNo guarantees that I'll find a better patch :D But yes, I will give it a\ntry.\n\n> Because I am currently working\n> on the aggregation part and have no time to do it. However, the\n> aggregation work affects the v5 patch: it needs a refactoring. 
So can\n> you wait until I release v6 patch? I hope it will be released in two\n> weeks or so.\n\nAbsolutely!\n\n>> But I'm wondering if we might want to just implement the NFA directly?\n>> The current implementation's Cartesian explosion can probably be pruned\n>> aggressively, but replaying the entire regex match once for every\n>> backtracked step will still duplicate a lot of work.\n> \n> Not sure if you mean implementing new regular expression engine\n> besides src/backend/regexp. I am afraid it's not a trivial work. The\n> current regexp code consists of over 10k lines. What do you think?\n\nHeh, I think it would be pretty foolish for me to code an NFA, from\nscratch, and then try to convince the community to maintain it.\n\nBut:\n- I think we have to implement a parallel parser regardless (RPR PATTERN\nsyntax looks incompatible with POSIX)\n- I suspect we need more control over the backtracking than the current\npg_reg* API is going to give us, or else I'm worried performance is\ngoing to fall off a cliff with usefully-large partitions\n- there's a lot of stuff in POSIX EREs that we don't need, and of the\nfeatures we do need, the + quantifier is probably one of the easiest\n- it seems easier to prove the correctness of a slow, naive,\nrow-at-a-time engine, because we can compare it directly to the spec\n\nSo what I'm thinking is: if I start by open-coding the + quantifier, and\nslowly add more pieces in, then it might be easier to see the parts of\nsrc/backend/regex that I've duplicated. We can try to expose those parts\ndirectly from the internal API to replace my bad implementation. And if\nthere are parts that aren't duplicated, then it'll be easier to explain\nwhy we need something different from the current engine.\n\nDoes that seem like a workable approach? (Worst-case, my code is just\nhorrible, and we throw it in the trash.)\n\n>> I've attached another test case; it looks like last_value() is depending\n>> on some sort of side effect from either first_value() or nth_value(). I\n>> know the window frame itself is still under construction, so apologies\n>> if this is an expected failure.\n> \n> Thanks. Fortunately current code which I am working passes the new\n> test. I will include it in the next (v6) patch.\n\nGreat!\n\nThanks,\n--Jacob\n\n\n",
"msg_date": "Fri, 8 Sep 2023 12:27:05 -0700",
"msg_from": "Jacob Champion <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Row pattern recognition"
},
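To make the ordering rule concrete, here is a hypothetical query over made-up two-row data (alternation is not supported by the patch at this point, so this is only an illustration of the standard's rule quoted above). With every row matching both variables, the candidate 'a' is considered before 'ba' even though 'ba' is longer, so if the rule is applied as described the match ends up classifying both rows as A:

    SELECT v.x, first_value(v.x) OVER w, last_value(v.x) OVER w
    FROM (VALUES (1), (2)) AS v (x)
    WINDOW w AS (
      ORDER BY v.x
      ROWS BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING
      PATTERN ( (A|B)+ )
      DEFINE A AS TRUE,
             B AS TRUE
    );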
{
"msg_contents": "On 9/8/23 21:27, Jacob Champion wrote:\n> On 9/7/23 20:54, Tatsuo Ishii wrote:\n\n>>> But it's easy\n>>> to come up with a pattern where that's the wrong order, like\n>>>\n>>> PATTERN ( A+ (B|A)+ )\n>>>\n>>> Now \"aaa\" will be considered before \"aab\", which isn't correct.\n>>\n>> Can you explain a little bit more? I think 'aaa' matches a regular\n>> expression 'a+(b|a)+' and should be no problem before \"aab\" is\n>> considered.\n> \n> Assuming I've understood the rules correctly, we're not allowed to\n> classify the last row as 'A' if it also matches 'B'. Lexicographic\n> ordering takes precedence, so we have to try \"aab\" first. Otherwise our\n> query could return different results compared to another implementation.\n\n\nYour understanding is correct.\n-- \nVik Fearing\n\n\n\n",
"msg_date": "Fri, 8 Sep 2023 23:43:10 +0200",
"msg_from": "Vik Fearing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Row pattern recognition"
},
{
"msg_contents": "Hi,\n\n>> But:\n>> \n>> UP AS price > PREV(price)\n>> \n>> also depends on previous row, no?\n> \n> PREV(CLASSIFIER()) depends not on the value of the previous row but the\n> state of the match so far. To take an example from the patch:\n> \n>> * Example:\n>> * str_set[0] = \"AB\";\n>> * str_set[1] = \"AC\";\n>> * In this case at row 0 A and B are true, and A and C are true in row 1.\n> \n> With these str_sets and my example DEFINE, row[1] is only classifiable\n> as 'A' if row[0] is *not* classified as 'A' at this point in the match.\n> \"AA\" is not a valid candidate string, even if it matches the PATTERN.\n\nOk, Let me clarify my understanding. Suppose we have:\n\nPATTER (A B)\nDEFINE A AS PREV(CLASSIFIER()) IS DISTINCT FROM 'A',\nB AS price > 100\n\nand the target table has price column values:\n\nrow[0]: 110\nrow[1]: 110\nrow[2]: 110\nrow[3]: 110\n\nThen we will get for str_set:\nr0: B\nr1: AB\n\nBecause r0 only has classifier B, r1 can have A and B. Problem is,\nr2. If we choose A at r1, then r2 = B. But if we choose B at t1, then\nr2 = AB. I guess this is the issue you pointed out.\n\n> So if we don't reevaluate the pattern variable condition for the row, we\n> at least have to prune the combinations that search_str_set() visits, so\n> that we don't generate a logically impossible combination. That seems\n> like it could be pretty fragile, and it may be difficult for us to prove\n> compliance.\n\nYeah, probably we have delay evaluation of such pattern variables like\nA, then reevaluate A after the first scan.\n\nWhat about leaving this (reevaluation) for now? Because:\n\n1) we don't have CLASSIFIER\n2) we don't allow to give CLASSIFIER to PREV as its arggument\n\nso I think we don't need to worry about this for now.\n\n>> Can you explain a little bit more? I think 'aaa' matches a regular\n>> expression 'a+(b|a)+' and should be no problem before \"aab\" is\n>> considered.\n> \n> Assuming I've understood the rules correctly, we're not allowed to\n> classify the last row as 'A' if it also matches 'B'. Lexicographic\n> ordering takes precedence, so we have to try \"aab\" first. Otherwise our\n> query could return different results compared to another implementation.\n> \n>>> Similarly, the assumption that we want to match the longest string only\n>>> works because we don't allow alternation yet.\n>> \n>> Can you please clarify more on this?\n> \n> Sure: for the pattern\n> \n> PATTERN ( (A|B)+ )\n> \n> we have to consider the candidate \"a\" over the candidate \"ba\", even\n> though the latter is longer. Like the prior example, lexicographic\n> ordering is considered more important than the greedy quantifier.\n> Quoting ISO/IEC 9075-2:2016:\n> \n> More precisely, with both reluctant and greedy quantifiers, the set\n> of matches is ordered lexicographically, but when one match is an\n> initial substring of another match, reluctant quantifiers prefer the\n> shorter match (the substring), whereas greedy quantifiers prefer the\n> longer match (the “superstring”).\n> \n> Here, \"ba\" doesn't have \"a\" as a prefix, so \"ba\" doesn't get priority.\n> ISO/IEC 19075-5:2021 has a big section on this (7.2) with worked examples.\n> \n> (The \"lexicographic order matters more than greediness\" concept was the\n> most mind-bending part for me so far, probably because I haven't figured\n> out how to translate the concept into POSIX EREs. 
It wouldn't make sense\n> to say \"the letter 't' can match 'a', 'B', or '3' in this regex\", but\n> that's what RPR is doing.)\n\nThanks for the explanation. Surprising concet of the standard:-) Is\nit different from SIMILAR TO REs too?\n\nWhat if we don't follow the standard, instead we follow POSIX EREs? I\nthink this is better for users unless RPR's REs has significant merit\nfor users.\n\n>> Do you mean you want to provide a better patch for the pattern\n>> matching part? That will be helpfull.\n> \n> No guarantees that I'll find a better patch :D But yes, I will give it a\n> try.\n\nOk.\n\n>> Because I am currently working\n>> on the aggregation part and have no time to do it. However, the\n>> aggregation work affects the v5 patch: it needs a refactoring. So can\n>> you wait until I release v6 patch? I hope it will be released in two\n>> weeks or so.\n> \n> Absolutely!\n\nThanks.\n\n> Heh, I think it would be pretty foolish for me to code an NFA, from\n> scratch, and then try to convince the community to maintain it.\n> \n> But:\n> - I think we have to implement a parallel parser regardless (RPR PATTERN\n> syntax looks incompatible with POSIX)\n\nI am not sure if we need to worry about this because of the reason I\nmentioned above.\n\n> - I suspect we need more control over the backtracking than the current\n> pg_reg* API is going to give us, or else I'm worried performance is\n> going to fall off a cliff with usefully-large partitions\n\nAgreed.\n\n> - there's a lot of stuff in POSIX EREs that we don't need, and of the\n> features we do need, the + quantifier is probably one of the easiest\n> - it seems easier to prove the correctness of a slow, naive,\n> row-at-a-time engine, because we can compare it directly to the spec\n> \n> So what I'm thinking is: if I start by open-coding the + quantifier, and\n> slowly add more pieces in, then it might be easier to see the parts of\n> src/backend/regex that I've duplicated. We can try to expose those parts\n> directly from the internal API to replace my bad implementation. And if\n> there are parts that aren't duplicated, then it'll be easier to explain\n> why we need something different from the current engine.\n> \n> Does that seem like a workable approach? (Worst-case, my code is just\n> horrible, and we throw it in the trash.)\n\nYes, it seems workable. I think for the first cut of RPR needs at\nleast the +quantifier with reasonable performance. The current naive\nimplementation seems to have issue because of exhaustive search.\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS LLC\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp\n\n\n",
"msg_date": "Sat, 09 Sep 2023 20:21:21 +0900 (JST)",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Row pattern recognition"
},
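For reference, the deferred case worked through above corresponds to a query along these lines (CLASSIFIER is not implemented in the patch, as noted, so this is only a sketch of the scenario being set aside; the four identical price rows match the example values given above):

    SELECT price, first_value(price) OVER w
    FROM (VALUES (110), (110), (110), (110)) AS s (price)
    WINDOW w AS (
      ORDER BY price
      ROWS BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING
      PATTERN (A B)
      DEFINE
        A AS PREV(CLASSIFIER()) IS DISTINCT FROM 'A',
        B AS price > 100
    );

Here the classification of the first row as A or B changes whether the second row can be classified as A, which is why a single per-row evaluation pass is not enough for this kind of DEFINE clause.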
{
"msg_contents": "On 9/9/23 13:21, Tatsuo Ishii wrote:\n> Thanks for the explanation. Surprising concet of the standard:-)\n\n<quote from 19075-5:2023>\n\nThis leaves the choice between traditional NFA and Posix NFA. The \ndifference between these is that a traditional NFA exits (declares a \nmatch) as soon as it finds the first possible match, whereas a Posix NFA \nis obliged to find all possible matches and then choose the “leftmost \nlongest”. There are examples that show that, even for conventional \nregular expression matching on text strings and without back references, \nthere are patterns for which a Posix NFA is orders of magnitude slower \nthan a traditional NFA. In addition, reluctant quantifiers cannot be \ndefined in a Posix NFA, because of the leftmost longest rule.\n\nTherefore it was decided not to use the Posix NFA model, which leaves \nthe traditional NFA as the model for row pattern matching. Among \navailable tools that use traditional NFA engines, Perl is the most \ninfluential; therefore Perl was adopted as the design target for pattern \nmatching rules.\n\n</quote>\n\n> Is it different from SIMILAR TO REs too?\n\nOf course it is. :-) SIMILAR TO uses its own language and rules.\n\n> What if we don't follow the standard, instead we follow POSIX EREs? I\n> think this is better for users unless RPR's REs has significant merit\n> for users.\n\nThis would get big pushback from me.\n-- \nVik Fearing\n\n\n\n",
"msg_date": "Sat, 9 Sep 2023 15:32:41 +0200",
"msg_from": "Vik Fearing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Row pattern recognition"
},
{
"msg_contents": "On Sat, Sep 9, 2023 at 4:21 AM Tatsuo Ishii <[email protected]> wrote:\n> Then we will get for str_set:\n> r0: B\n> r1: AB\n>\n> Because r0 only has classifier B, r1 can have A and B. Problem is,\n> r2. If we choose A at r1, then r2 = B. But if we choose B at t1, then\n> r2 = AB. I guess this is the issue you pointed out.\n\nRight.\n\n> Yeah, probably we have delay evaluation of such pattern variables like\n> A, then reevaluate A after the first scan.\n>\n> What about leaving this (reevaluation) for now? Because:\n>\n> 1) we don't have CLASSIFIER\n> 2) we don't allow to give CLASSIFIER to PREV as its arggument\n>\n> so I think we don't need to worry about this for now.\n\nSure. I'm all for deferring features to make it easier to iterate; I\njust want to make sure the architecture doesn't hit a dead end. Or at\nleast, not without being aware of it.\n\nAlso: is CLASSIFIER the only way to run into this issue?\n\n> What if we don't follow the standard, instead we follow POSIX EREs? I\n> think this is better for users unless RPR's REs has significant merit\n> for users.\n\nPiggybacking off of what Vik wrote upthread, I think we would not be\ndoing ourselves any favors by introducing a non-compliant\nimplementation that performs worse than a traditional NFA. Those would\nbe some awful bug reports.\n\n> > - I think we have to implement a parallel parser regardless (RPR PATTERN\n> > syntax looks incompatible with POSIX)\n>\n> I am not sure if we need to worry about this because of the reason I\n> mentioned above.\n\nEven if we adopted POSIX NFA semantics, we'd still have to implement\nour own parser for the PATTERN part of the query. I don't think\nthere's a good way for us to reuse the parser in src/backend/regex.\n\n> > Does that seem like a workable approach? (Worst-case, my code is just\n> > horrible, and we throw it in the trash.)\n>\n> Yes, it seems workable. I think for the first cut of RPR needs at\n> least the +quantifier with reasonable performance. The current naive\n> implementation seems to have issue because of exhaustive search.\n\n+1\n\nThanks!\n--Jacob\n\n\n",
"msg_date": "Mon, 11 Sep 2023 15:13:43 -0700",
"msg_from": "Jacob Champion <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Row pattern recognition"
},
{
"msg_contents": ">> What about leaving this (reevaluation) for now? Because:\n>>\n>> 1) we don't have CLASSIFIER\n>> 2) we don't allow to give CLASSIFIER to PREV as its arggument\n>>\n>> so I think we don't need to worry about this for now.\n> \n> Sure. I'm all for deferring features to make it easier to iterate; I\n> just want to make sure the architecture doesn't hit a dead end. Or at\n> least, not without being aware of it.\n\nOk, let's defer this issue. Currently the patch already exceeds 3k\nlines. I am afraid too big patch cannot be reviewed by anyone, which\nmeans it will never be committed.\n\n> Also: is CLASSIFIER the only way to run into this issue?\n\nGood question. I would like to know.\n\n>> What if we don't follow the standard, instead we follow POSIX EREs? I\n>> think this is better for users unless RPR's REs has significant merit\n>> for users.\n> \n> Piggybacking off of what Vik wrote upthread, I think we would not be\n> doing ourselves any favors by introducing a non-compliant\n> implementation that performs worse than a traditional NFA. Those would\n> be some awful bug reports.\n\nWhat I am not sure about is, you and Vik mentioned that the\ntraditional NFA is superior that POSIX NFA in terms of performance.\nBut how \"lexicographic ordering\" is related to performance?\n\n>> I am not sure if we need to worry about this because of the reason I\n>> mentioned above.\n> \n> Even if we adopted POSIX NFA semantics, we'd still have to implement\n> our own parser for the PATTERN part of the query. I don't think\n> there's a good way for us to reuse the parser in src/backend/regex.\n\nOk.\n\n>> > Does that seem like a workable approach? (Worst-case, my code is just\n>> > horrible, and we throw it in the trash.)\n>>\n>> Yes, it seems workable. I think for the first cut of RPR needs at\n>> least the +quantifier with reasonable performance. The current naive\n>> implementation seems to have issue because of exhaustive search.\n> \n> +1\n\nBTW, attched is the v6 patch. The differences from v5 include:\n\n- Now aggregates can be used with RPR. 
Below is an example from the\n regression test cases, which is added by v6 patch.\n\n- Fix assersion error pointed out by Erik.\n\nSELECT company, tdate, price,\n first_value(price) OVER w,\n last_value(price) OVER w,\n max(price) OVER w,\n min(price) OVER w,\n sum(price) OVER w,\n avg(price) OVER w,\n count(price) OVER w\nFROM stock\nWINDOW w AS (\nPARTITION BY company\nROWS BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING\nAFTER MATCH SKIP PAST LAST ROW\nINITIAL\nPATTERN (START UP+ DOWN+)\nDEFINE\nSTART AS TRUE,\nUP AS price > PREV(price),\nDOWN AS price < PREV(price)\n);\n company | tdate | price | first_value | last_value | max | min | sum | avg | count \n----------+------------+-------+-------------+------------+------+-----+------+-----------------------+-------\n company1 | 07-01-2023 | 100 | 100 | 140 | 200 | 100 | 590 | 147.5000000000000000 | 4\n company1 | 07-02-2023 | 200 | | | | | | | \n company1 | 07-03-2023 | 150 | | | | | | | \n company1 | 07-04-2023 | 140 | | | | | | | \n company1 | 07-05-2023 | 150 | | | | | | | \n company1 | 07-06-2023 | 90 | 90 | 120 | 130 | 90 | 450 | 112.5000000000000000 | 4\n company1 | 07-07-2023 | 110 | | | | | | | \n company1 | 07-08-2023 | 130 | | | | | | | \n company1 | 07-09-2023 | 120 | | | | | | | \n company1 | 07-10-2023 | 130 | | | | | | | \n company2 | 07-01-2023 | 50 | 50 | 1400 | 2000 | 50 | 4950 | 1237.5000000000000000 | 4\n company2 | 07-02-2023 | 2000 | | | | | | | \n company2 | 07-03-2023 | 1500 | | | | | | | \n company2 | 07-04-2023 | 1400 | | | | | | | \n company2 | 07-05-2023 | 1500 | | | | | | | \n company2 | 07-06-2023 | 60 | 60 | 1200 | 1300 | 60 | 3660 | 915.0000000000000000 | 4\n company2 | 07-07-2023 | 1100 | | | | | | | \n company2 | 07-08-2023 | 1300 | | | | | | | \n company2 | 07-09-2023 | 1200 | | | | | | | \n company2 | 07-10-2023 | 1300 | | | | | | | \n(20 rows)\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS LLC\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp",
"msg_date": "Tue, 12 Sep 2023 15:18:43 +0900 (JST)",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Row pattern recognition"
},
{
"msg_contents": "Regarding v6 patch:\n\n> SELECT company, tdate, price,\n> first_value(price) OVER w,\n> last_value(price) OVER w,\n> max(price) OVER w,\n> min(price) OVER w,\n> sum(price) OVER w,\n> avg(price) OVER w,\n> count(price) OVER w\n> FROM stock\n> WINDOW w AS (\n> PARTITION BY company\n> ROWS BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING\n> AFTER MATCH SKIP PAST LAST ROW\n> INITIAL\n> PATTERN (START UP+ DOWN+)\n> DEFINE\n> START AS TRUE,\n> UP AS price > PREV(price),\n> DOWN AS price < PREV(price)\n> );\n> company | tdate | price | first_value | last_value | max | min | sum | avg | count \n> ----------+------------+-------+-------------+------------+------+-----+------+-----------------------+-------\n> company1 | 07-01-2023 | 100 | 100 | 140 | 200 | 100 | 590 | 147.5000000000000000 | 4\n> company1 | 07-02-2023 | 200 | | | | | | | \n> company1 | 07-03-2023 | 150 | | | | | | | \n> company1 | 07-04-2023 | 140 | | | | | | | \n> company1 | 07-05-2023 | 150 | | | | | | | \n> company1 | 07-06-2023 | 90 | 90 | 120 | 130 | 90 | 450 | 112.5000000000000000 | 4\n> company1 | 07-07-2023 | 110 | | | | | | | \n> company1 | 07-08-2023 | 130 | | | | | | | \n> company1 | 07-09-2023 | 120 | | | | | | | \n> company1 | 07-10-2023 | 130 | | | | | | | \n> company2 | 07-01-2023 | 50 | 50 | 1400 | 2000 | 50 | 4950 | 1237.5000000000000000 | 4\n> company2 | 07-02-2023 | 2000 | | | | | | | \n> company2 | 07-03-2023 | 1500 | | | | | | | \n> company2 | 07-04-2023 | 1400 | | | | | | | \n> company2 | 07-05-2023 | 1500 | | | | | | | \n> company2 | 07-06-2023 | 60 | 60 | 1200 | 1300 | 60 | 3660 | 915.0000000000000000 | 4\n> company2 | 07-07-2023 | 1100 | | | | | | | \n> company2 | 07-08-2023 | 1300 | | | | | | | \n> company2 | 07-09-2023 | 1200 | | | | | | | \n> company2 | 07-10-2023 | 1300 | | | | | | | \n> (20 rows)\n\ncount column for unmatched rows should have been 0, rather than\nNULL. i.e.\n\n company | tdate | price | first_value | last_value | max | min | sum | avg | count \n----------+------------+-------+-------------+------------+------+-----+------+-----------------------+-------\n company1 | 07-01-2023 | 100 | 100 | 140 | 200 | 100 | 590 | 147.5000000000000000 | 4\n company1 | 07-02-2023 | 200 | | | | | | | \n company1 | 07-03-2023 | 150 | | | | | | | \n company1 | 07-04-2023 | 140 | | | | | | | \n company1 | 07-05-2023 | 150 | | | | | | | 0\n company1 | 07-06-2023 | 90 | 90 | 120 | 130 | 90 | 450 | 112.5000000000000000 | 4\n company1 | 07-07-2023 | 110 | | | | | | | \n company1 | 07-08-2023 | 130 | | | | | | | \n company1 | 07-09-2023 | 120 | | | | | | | \n company1 | 07-10-2023 | 130 | | | | | | | 0\n company2 | 07-01-2023 | 50 | 50 | 1400 | 2000 | 50 | 4950 | 1237.5000000000000000 | 4\n company2 | 07-02-2023 | 2000 | | | | | | | \n company2 | 07-03-2023 | 1500 | | | | | | | \n company2 | 07-04-2023 | 1400 | | | | | | | \n company2 | 07-05-2023 | 1500 | | | | | | | 0\n company2 | 07-06-2023 | 60 | 60 | 1200 | 1300 | 60 | 3660 | 915.0000000000000000 | 4\n company2 | 07-07-2023 | 1100 | | | | | | | \n company2 | 07-08-2023 | 1300 | | | | | | | \n company2 | 07-09-2023 | 1200 | | | | | | | \n company2 | 07-10-2023 | 1300 | | | | | | | 0\n(20 rows)\n\nAttached is the fix against v6 patch. I will include this in upcoming v7 patch.\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS LLC\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp",
"msg_date": "Tue, 12 Sep 2023 17:44:57 +0900 (JST)",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Row pattern recognition"
},
{
"msg_contents": "On Mon, Sep 11, 2023 at 11:18 PM Tatsuo Ishii <[email protected]> wrote:\n> What I am not sure about is, you and Vik mentioned that the\n> traditional NFA is superior that POSIX NFA in terms of performance.\n> But how \"lexicographic ordering\" is related to performance?\n\nI think they're only tangentially related. POSIX NFAs have to fully\nbacktrack even after the first match is found, so that's where the\nperformance difference comes in. (We would be introducing new ways to\ncatastrophically backtrack if we used that approach.) But since you\ndon't visit every possible path through the graph with a traditional\nNFA, it makes sense to define an order in which you visit the nodes,\nso that you can reason about which string is actually going to be\nmatched in the end.\n\n> BTW, attched is the v6 patch. The differences from v5 include:\n>\n> - Now aggregates can be used with RPR. Below is an example from the\n> regression test cases, which is added by v6 patch.\n\nGreat, thank you!\n\n--Jacob\n\n\n",
"msg_date": "Tue, 12 Sep 2023 15:09:29 -0700",
"msg_from": "Jacob Champion <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Row pattern recognition"
},
{
"msg_contents": "> <quote from 19075-5:2023>\n\nI was looking for this but I only found ISO/IEC 19075-5:2021.\nhttps://www.iso.org/standard/78936.html\n\nMaybe 19075-5:2021 is the latest one?\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS LLC\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp\n\n\n",
"msg_date": "Wed, 13 Sep 2023 14:14:07 +0900 (JST)",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Row pattern recognition"
},
{
"msg_contents": "On 9/13/23 07:14, Tatsuo Ishii wrote:\n>> <quote from 19075-5:2023>\n> \n> I was looking for this but I only found ISO/IEC 19075-5:2021.\n> https://www.iso.org/standard/78936.html\n> \n> Maybe 19075-5:2021 is the latest one?\n\nYes, probably. Sorry.\n-- \nVik Fearing\n\n\n\n",
"msg_date": "Wed, 13 Sep 2023 13:28:45 +0200",
"msg_from": "Vik Fearing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Row pattern recognition"
},
{
"msg_contents": "> On 9/13/23 07:14, Tatsuo Ishii wrote:\n>>> <quote from 19075-5:2023>\n>> I was looking for this but I only found ISO/IEC 19075-5:2021.\n>> https://www.iso.org/standard/78936.html\n>> Maybe 19075-5:2021 is the latest one?\n> \n> Yes, probably. Sorry.\n\nNo problem. Thanks for confirmation.\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS LLC\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp\n\n\n",
"msg_date": "Wed, 13 Sep 2023 21:35:53 +0900 (JST)",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Row pattern recognition"
},
{
"msg_contents": "> Attached is the fix against v6 patch. I will include this in upcoming v7 patch.\n\nAttached is the v7 patch. It includes the fix mentioned above. Also\nthis time the pattern matching engine is enhanced: previously it\nrecursed to row direction, which means if the number of rows in a\nframe is large, it could exceed the limit of stack depth. The new\nversion recurses over matched pattern variables in a row, which is at\nmost 26 which should be small enough.\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS LLC\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp",
"msg_date": "Fri, 22 Sep 2023 14:16:40 +0900 (JST)",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Row pattern recognition"
},
{
"msg_contents": "Op 9/22/23 om 07:16 schreef Tatsuo Ishii:\n>> Attached is the fix against v6 patch. I will include this in upcoming v7 patch.\n> \n> Attached is the v7 patch. It includes the fix mentioned above. Also\n\nHi,\n\nIn my hands, make check fails on the rpr test; see attached .diff file.\nIn these two statements:\n-- using NEXT\n-- using AFTER MATCH SKIP TO NEXT ROW\n result of first_value(price) and next_value(price) are empty.\n\n\nErik Rijkers\n\n\n> this time the pattern matching engine is enhanced: previously it\n> recursed to row direction, which means if the number of rows in a\n> frame is large, it could exceed the limit of stack depth. The new\n> version recurses over matched pattern variables in a row, which is at\n> most 26 which should be small enough.\n> \n> Best reagards,\n> --\n> Tatsuo Ishii\n> SRA OSS LLC\n> English: http://www.sraoss.co.jp/index_en/\n> Japanese:http://www.sraoss.co.jp",
"msg_date": "Fri, 22 Sep 2023 10:12:38 +0200",
"msg_from": "Erik Rijkers <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Row pattern recognition"
},
{
"msg_contents": "Op 9/22/23 om 07:16 schreef Tatsuo Ishii:\n>> Attached is the fix against v6 patch. I will include this in upcoming v7 patch.\n> \n> Attached is the v7 patch. It includes the fix mentioned above. Also\n(Champion's address bounced; removed)\n\nHi,\n\nIn my hands, make check fails on the rpr test; see attached .diff file.\nIn these two statements:\n-- using NEXT\n-- using AFTER MATCH SKIP TO NEXT ROW\n result of first_value(price) and next_value(price) are empty.\n\nErik Rijkers\n\n\n> this time the pattern matching engine is enhanced: previously it\n> recursed to row direction, which means if the number of rows in a\n> frame is large, it could exceed the limit of stack depth. The new\n> version recurses over matched pattern variables in a row, which is at\n> most 26 which should be small enough.\n> \n> Best reagards,\n> --\n> Tatsuo Ishii\n> SRA OSS LLC\n> English: http://www.sraoss.co.jp/index_en/\n> Japanese:http://www.sraoss.co.jp\n\n\n",
"msg_date": "Fri, 22 Sep 2023 10:23:11 +0200",
"msg_from": "Erik Rijkers <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Row pattern recognition"
},
{
"msg_contents": "Op 9/22/23 om 10:23 schreef Erik Rijkers:\n> Op 9/22/23 om 07:16 schreef Tatsuo Ishii:\n>>> Attached is the fix against v6 patch. I will include this in upcoming \n>>> v7 patch.\n>>\n>> Attached is the v7 patch. It includes the fix mentioned above. Also\n> (Champion's address bounced; removed)\n> \n\nSorry, I forgot to re-attach the regression.diffs with resend...\n\nErik\n\n> Hi,\n> \n> In my hands, make check fails on the rpr test; see attached .diff file.\n> In these two statements:\n> -- using NEXT\n> -- using AFTER MATCH SKIP TO NEXT ROW\n> result of first_value(price) and next_value(price) are empty.\n> \n> Erik Rijkers\n> \n> \n>> this time the pattern matching engine is enhanced: previously it\n>> recursed to row direction, which means if the number of rows in a\n>> frame is large, it could exceed the limit of stack depth. The new\n>> version recurses over matched pattern variables in a row, which is at\n>> most 26 which should be small enough.\n>>\n>> Best reagards,\n>> -- \n>> Tatsuo Ishii\n>> SRA OSS LLC\n>> English: http://www.sraoss.co.jp/index_en/\n>> Japanese:http://www.sraoss.co.jp\n> \n>",
"msg_date": "Fri, 22 Sep 2023 10:26:49 +0200",
"msg_from": "Erik Rijkers <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Row pattern recognition"
},
{
"msg_contents": "> Op 9/22/23 om 07:16 schreef Tatsuo Ishii:\n>>> Attached is the fix against v6 patch. I will include this in upcoming\n>>> v7 patch.\n>> Attached is the v7 patch. It includes the fix mentioned above. Also\n> (Champion's address bounced; removed)\n\nOn my side his adress bounced too:-<\n\n> Hi,\n> \n> In my hands, make check fails on the rpr test; see attached .diff\n> file.\n> In these two statements:\n> -- using NEXT\n> -- using AFTER MATCH SKIP TO NEXT ROW\n> result of first_value(price) and next_value(price) are empty.\n\nStrange. I have checked out fresh master branch and applied the v7\npatches, then ran make check. All tests including the rpr test\npassed. This is Ubuntu 20.04.\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS LLC\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp\n\n\n",
"msg_date": "Fri, 22 Sep 2023 19:12:50 +0900 (JST)",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Row pattern recognition"
},
{
"msg_contents": "Op 9/22/23 om 12:12 schreef Tatsuo Ishii:\n>> Op 9/22/23 om 07:16 schreef Tatsuo Ishii:\n>>>> Attached is the fix against v6 patch. I will include this in upcoming\n>>>> v7 patch.\n>>> Attached is the v7 patch. It includes the fix mentioned above. Also\n>> (Champion's address bounced; removed)\n> \n> On my side his adress bounced too:-<\n> \n>> Hi,\n>>\n>> In my hands, make check fails on the rpr test; see attached .diff\n>> file.\n>> In these two statements:\n>> -- using NEXT\n>> -- using AFTER MATCH SKIP TO NEXT ROW\n>> result of first_value(price) and next_value(price) are empty.\n> \n> Strange. I have checked out fresh master branch and applied the v7\n> patches, then ran make check. All tests including the rpr test\n> passed. This is Ubuntu 20.04.\n\nThe curious thing is that the server otherwise builds ok, and if I \nexplicitly run on that server 'CREATE TEMP TABLE stock' + the 20 INSERTS \n (just to make sure to have known data), those two statements now both \nreturn the correct result.\n\nSo maybe the testing/timing is wonky (not necessarily the server).\n\nErik\n\n> \n> Best reagards,\n> --\n> Tatsuo Ishii\n> SRA OSS LLC\n> English: http://www.sraoss.co.jp/index_en/\n> Japanese:http://www.sraoss.co.jp\n\n\n",
"msg_date": "Fri, 22 Sep 2023 13:28:11 +0200",
"msg_from": "Erik Rijkers <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Row pattern recognition"
},
{
"msg_contents": "On Fri, Sep 22, 2023, 3:13 AM Tatsuo Ishii <[email protected]> wrote:\n\n> > Op 9/22/23 om 07:16 schreef Tatsuo Ishii:\n> >>> Attached is the fix against v6 patch. I will include this in upcoming\n> >>> v7 patch.\n> >> Attached is the v7 patch. It includes the fix mentioned above. Also\n> > (Champion's address bounced; removed)\n>\n> On my side his adress bounced too:-<\n>\n\nYep. I'm still here, just lurking for now. It'll take a little time for me\nto get back to this thread, as my schedule has changed significantly. :D\n\nThanks,\n--Jacob\n\n>\n\nOn Fri, Sep 22, 2023, 3:13 AM Tatsuo Ishii <[email protected]> wrote:> Op 9/22/23 om 07:16 schreef Tatsuo Ishii:\n>>> Attached is the fix against v6 patch. I will include this in upcoming\n>>> v7 patch.\n>> Attached is the v7 patch. It includes the fix mentioned above. Also\n> (Champion's address bounced; removed)\n\nOn my side his adress bounced too:-<Yep. I'm still here, just lurking for now. It'll take a little time for me to get back to this thread, as my schedule has changed significantly. :DThanks,--Jacob",
"msg_date": "Fri, 22 Sep 2023 07:48:22 -0700",
"msg_from": "Jacob Champion <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Row pattern recognition"
},
{
"msg_contents": "> On Fri, Sep 22, 2023, 3:13 AM Tatsuo Ishii <[email protected]> wrote:\n> \n>> > Op 9/22/23 om 07:16 schreef Tatsuo Ishii:\n>> >>> Attached is the fix against v6 patch. I will include this in upcoming\n>> >>> v7 patch.\n>> >> Attached is the v7 patch. It includes the fix mentioned above. Also\n>> > (Champion's address bounced; removed)\n>>\n>> On my side his adress bounced too:-<\n>>\n> \n> Yep. I'm still here, just lurking for now. It'll take a little time for me\n> to get back to this thread, as my schedule has changed significantly. :D\n\nHope you get back soon...\n\nBy the way, I was thinking about eliminating recusrive calls in\npattern matching. Attached is the first cut of the implementation. In\nthe attached v8 patch:\n\n- No recursive calls anymore. search_str_set_recurse was removed.\n\n- Instead it generates all possible pattern variable name initial\n strings before pattern matching. Suppose we have \"ab\" (row 0) and\n \"ac\" (row 1). \"a\" and \"b\" represents the pattern variable names\n which are evaluated to true. In this case it will generate \"aa\",\n \"ac\", \"ba\" and \"bc\" and they are examined by do_pattern_match one by\n one, which performs pattern matching.\n\n- To implement this, an infrastructure string_set* are created. They\n take care of set of StringInfo.\n\nI found that the previous implementation did not search all possible\ncases. I believe the bug is fixed in v8.\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS LLC\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp",
"msg_date": "Mon, 25 Sep 2023 14:26:30 +0900 (JST)",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Row pattern recognition"
},
{
"msg_contents": "> By the way, I was thinking about eliminating recusrive calls in\n> pattern matching. Attached is the first cut of the implementation. In\n> the attached v8 patch:\n> \n> - No recursive calls anymore. search_str_set_recurse was removed.\n> \n> - Instead it generates all possible pattern variable name initial\n> strings before pattern matching. Suppose we have \"ab\" (row 0) and\n> \"ac\" (row 1). \"a\" and \"b\" represents the pattern variable names\n> which are evaluated to true. In this case it will generate \"aa\",\n> \"ac\", \"ba\" and \"bc\" and they are examined by do_pattern_match one by\n> one, which performs pattern matching.\n> \n> - To implement this, an infrastructure string_set* are created. They\n> take care of set of StringInfo.\n> \n> I found that the previous implementation did not search all possible\n> cases. I believe the bug is fixed in v8.\n\nThe v8 patch does not apply anymore due to commit d060e921ea \"Remove obsolete executor cleanup code\".\nSo I rebased and created v9 patch. The differences from the v8 include:\n\n- Fix bug with get_slots. It did not correctly detect the end of full frame.\n- Add test case using \"ROWS BETWEEN CURRENT ROW AND offset FOLLOWING\".\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS LLC\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp",
"msg_date": "Wed, 04 Oct 2023 15:03:28 +0900 (JST)",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Row pattern recognition"
},
{
"msg_contents": "Attached is the v10 patch. This version enhances the performance of\npattern matching. Previously it generated all possible pattern string\ncandidates. This resulted in unnecessarily large number of\ncandidates. For example if you have 2 pattern variables and the target\nframe includes 100 rows, the number of candidates can reach to 2^100\nin the worst case. To avoid this, I do a pruning in the v10\npatch. Suppose you have:\n\nPATTERN (A B+ C+)\n\nCandidates like \"BAC\" \"CAB\" cannot survive because they never satisfy\nthe search pattern. To judge this, I assign sequence numbers (0, 1, 2)\nto (A B C). If the pattern generator tries to generate BA, this is\nnot allowed because the sequence number for B is 1 and for A is 0, and\n0 < 1: B cannot be followed by A. Note that this technique can be\napplied when the quantifiers are \"+\" or \"*\". Maybe other quantifiers\nsuch as '?' or '{n, m}' can be applied too but I don't confirm yet\nbecause I have not implemented them yet.\n\nBesides this improvement, I fixed a bug in the previous and older\npatches: when an expression in DEFINE uses text operators, it errors\nout:\n\nERROR: could not determine which collation to use for string comparison\nHINT: Use the COLLATE clause to set the collation explicitly.\n\nThis was fixed by adding assign_expr_collations() in\ntransformDefineClause().\n\nAlso I have updated documentation \"3.5. Window Functions\"\n\n- It still mentioned about rpr(). It's not applied anymore.\n- Enhance the description about DEFINE and PATTERN.\n- Mention that quantifier '*' is supported.\n\nFinally I have added more test cases to the regression test.\n- same pattern variable appears twice\n- case for quantifier '*'\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS LLC\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp",
"msg_date": "Sun, 22 Oct 2023 11:39:20 +0900 (JST)",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Row pattern recognition"
},
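A minimal sketch of the pruning rule described in the v10 message above (an illustration only, not code from the patch): each pattern variable is assigned its position in the PATTERN clause, and a candidate string is rejected as soon as a variable with a smaller sequence number follows a larger one, e.g. "ba" can never satisfy PATTERN (a b+). The 'a'..'z' variable naming and the seqno_of_var lookup table are assumptions made for this sketch, and the rule only applies when the quantifiers are '+' or '*', as noted above.

#include <stdbool.h>

/* Return false if the candidate's variable order can never satisfy the pattern. */
static bool
candidate_can_match(const char *candidate, const int *seqno_of_var)
{
	int			prev = -1;

	for (const char *p = candidate; *p; p++)
	{
		int			cur = seqno_of_var[*p - 'a'];	/* variables named 'a'..'z' */

		if (cur < prev)
			return false;	/* e.g. "ba" against PATTERN (a b+) */
		prev = cur;
	}
	return true;
}

With this check, a candidate such as "bac" for PATTERN (a b+ c+) is discarded before any row-by-row matching is attempted.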
{
"msg_contents": "On Sat, Oct 21, 2023 at 7:39 PM Tatsuo Ishii <[email protected]> wrote:\n> Attached is the v10 patch. This version enhances the performance of\n> pattern matching.\n\nNice! I've attached a couple of more stressful tests (window\npartitions of 1000 rows each). Beware that the second one runs my\ndesktop out of memory fairly quickly with the v10 implementation.\n\nI was able to carve out some time this week to implement a very basic\nrecursive NFA, which handles both the + and * qualifiers (attached).\nIt's not production quality -- a frame on the call stack for every row\nisn't going to work -- but with only those two features, it's pretty\ntiny, and it's able to run the new stress tests with no issue. If I\ncontinue to have time, I hope to keep updating this parallel\nimplementation as you add features to the StringSet implementation,\nand we can see how it evolves. I expect that alternation and grouping\nwill ratchet up the complexity.\n\nThanks!\n--Jacob",
"msg_date": "Tue, 24 Oct 2023 11:51:19 -0700",
"msg_from": "Jacob Champion <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Row pattern recognition"
},
{
"msg_contents": "> On Sat, Oct 21, 2023 at 7:39 PM Tatsuo Ishii <[email protected]> wrote:\r\n>> Attached is the v10 patch. This version enhances the performance of\r\n>> pattern matching.\r\n> \r\n> Nice! I've attached a couple of more stressful tests (window\r\n> partitions of 1000 rows each). Beware that the second one runs my\r\n> desktop out of memory fairly quickly with the v10 implementation.\r\n> \r\n> I was able to carve out some time this week to implement a very basic\r\n> recursive NFA, which handles both the + and * qualifiers (attached).\r\n\r\nGreat. I will look into this.\r\n\r\n> It's not production quality -- a frame on the call stack for every row\r\n> isn't going to work\r\n\r\nYeah.\r\n\r\n> -- but with only those two features, it's pretty\r\n> tiny, and it's able to run the new stress tests with no issue. If I\r\n> continue to have time, I hope to keep updating this parallel\r\n> implementation as you add features to the StringSet implementation,\r\n> and we can see how it evolves. I expect that alternation and grouping\r\n> will ratchet up the complexity.\r\n\r\nSounds like a plan.\r\n\r\nBy the way, I tested my patch (v10) to handle more large data set and\r\ntried to following query with pgbench database. On my laptop it works\r\nwith 100k rows pgbench_accounts table but with beyond the number I got\r\nOOM killer. I would like to enhance this in the next patch.\r\n\r\nSELECT aid, first_value(aid) OVER w,\r\ncount(*) OVER w\r\nFROM pgbench_accounts\r\nWINDOW w AS (\r\nPARTITION BY bid\r\nORDER BY aid\r\nROWS BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING\r\nAFTER MATCH SKIP PAST LAST ROW\r\nINITIAL\r\nPATTERN (START UP+)\r\nDEFINE\r\nSTART AS TRUE,\r\nUP AS aid > PREV(aid)\r\n);\r\n\r\nBest reagards,\r\n--\r\nTatsuo Ishii\r\nSRA OSS LLC\r\nEnglish: http://www.sraoss.co.jp/index_en/\r\nJapanese:http://www.sraoss.co.jp\r\n",
"msg_date": "Wed, 25 Oct 2023 09:11:05 +0900 (JST)",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Row pattern recognition"
},
{
"msg_contents": "> Great. I will look into this.\n\nI am impressed the simple NFA implementation. It would be nicer if it\ncould be implemented without using recursion.\n\n> By the way, I tested my patch (v10) to handle more large data set and\n> tried to following query with pgbench database. On my laptop it works\n> with 100k rows pgbench_accounts table but with beyond the number I got\n ~~~ I meant 10k.\n\n> OOM killer. I would like to enhance this in the next patch.\n> \n> SELECT aid, first_value(aid) OVER w,\n> count(*) OVER w\n> FROM pgbench_accounts\n> WINDOW w AS (\n> PARTITION BY bid\n> ORDER BY aid\n> ROWS BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING\n> AFTER MATCH SKIP PAST LAST ROW\n> INITIAL\n> PATTERN (START UP+)\n> DEFINE\n> START AS TRUE,\n> UP AS aid > PREV(aid)\n> );\n\nI ran this against your patch. It failed around > 60k rows.\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS LLC\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp\n\n\n",
"msg_date": "Wed, 25 Oct 2023 11:49:30 +0900 (JST)",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Row pattern recognition"
},
{
"msg_contents": "On Tue, Oct 24, 2023 at 7:49 PM Tatsuo Ishii <[email protected]> wrote:\n> I am impressed the simple NFA implementation.\n\nThanks!\n\n> It would be nicer if it\n> could be implemented without using recursion.\n\nYeah. If for some reason we end up going with a bespoke\nimplementation, I assume we'd just convert the algorithm to an\niterative one and optimize it heavily. But I didn't want to do that\ntoo early, since it'd probably make it harder to add new features...\nand anyway my goal is still to try to reuse src/backend/regex\neventually.\n\n> > SELECT aid, first_value(aid) OVER w,\n> > count(*) OVER w\n> > FROM pgbench_accounts\n> > WINDOW w AS (\n> > PARTITION BY bid\n> > ORDER BY aid\n> > ROWS BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING\n> > AFTER MATCH SKIP PAST LAST ROW\n> > INITIAL\n> > PATTERN (START UP+)\n> > DEFINE\n> > START AS TRUE,\n> > UP AS aid > PREV(aid)\n> > );\n>\n> I ran this against your patch. It failed around > 60k rows.\n\nNice, that's actually more frames than I expected. Looks like I have\nsimilar results here with my second test query (segfault at ~58k\nrows).\n\nThanks,\n--Jacob\n\n\n",
"msg_date": "Mon, 30 Oct 2023 12:49:18 -0700",
"msg_from": "Jacob Champion <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Row pattern recognition"
},
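For reference, one non-recursive way to match the simple quantified-sequence patterns discussed so far is a Thompson-style NFA simulation: rewrite X+ as X X*, then advance row by row while tracking only the set of reachable pattern positions. This is a hedged sketch for illustration, not the patch's implementation and not the src/backend/regex approach mentioned above; PatElem, MAX_ELEMS, and rows[r][v] (the precomputed DEFINE results for variable v on row r) are assumptions of the sketch.

#include <stdbool.h>

#define MAX_ELEMS 64			/* arbitrary bound, enough for this sketch */

typedef struct PatElem
{
	int			var;			/* index of the DEFINE variable */
	bool		star;			/* true: zero or more rows; false: exactly one row */
} PatElem;

/* Mark element i as reachable, plus anything reachable by skipping '*' elements. */
static void
add_with_closure(bool *active, const PatElem *pat, int nelems, int i)
{
	for (;;)
	{
		active[i] = true;
		if (i >= nelems || !pat[i].star)
			break;
		i++;					/* a '*' element may match zero rows */
	}
}

/*
 * Longest prefix of rows[0..nrows-1] matched by the pattern, or -1 if none.
 * rows[r][v] holds the already-evaluated DEFINE condition of variable v on row r.
 */
static int
longest_match(const PatElem *pat, int nelems, bool **rows, int nrows)
{
	bool		active[MAX_ELEMS + 1] = {false};
	bool		next[MAX_ELEMS + 1];
	int			best = -1;

	add_with_closure(active, pat, nelems, 0);

	for (int r = 0; r <= nrows; r++)
	{
		if (active[nelems])
			best = r;			/* the whole pattern has consumed r rows */
		if (r == nrows)
			break;

		for (int i = 0; i <= nelems; i++)
			next[i] = false;
		for (int i = 0; i < nelems; i++)
		{
			if (active[i] && rows[r][pat[i].var])
				add_with_closure(next, pat, nelems, pat[i].star ? i : i + 1);
		}
		for (int i = 0; i <= nelems; i++)
			active[i] = next[i];
	}
	return best;
}

In this scheme PATTERN (START UP+) would be passed as three elements: START (one row), UP (one row), UP (zero or more rows). Memory and stack usage are bounded by the pattern length rather than by the number of rows in the frame.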
{
"msg_contents": ">> It would be nicer if it\n>> could be implemented without using recursion.\n> \n> Yeah. If for some reason we end up going with a bespoke\n> implementation, I assume we'd just convert the algorithm to an\n> iterative one and optimize it heavily. But I didn't want to do that\n> too early, since it'd probably make it harder to add new features...\n> and anyway my goal is still to try to reuse src/backend/regex\n> eventually.\n\nOk.\n\nAttached is the v11 patch. Below are the summary of the changes from\nprevious version.\n\n- rebase.\n\n- Reduce memory allocation in pattern matching (search_str_set()). But\n still Champion's second stress test gives OOM killer.\n \n - While keeping an old set to next round, move the StringInfo to\n new_str_set, rather than copying from old_str_set. This allows to\n run pgbench.sql against up to 60k rows on my laptop (previously\n 20k).\n \n - Use enlargeStringInfo to set the buffer size, rather than\n incrementally enlarge the buffer. This does not seem to give big\n enhancement but it should theoretically an enhancement.\n\n- Fix \"variable not found in subplan target list\" error if WITH is\n used.\n \n - To fix this apply pullup_replace_vars() against DEFINE clause in\n planning phase (perform_pullup_replace_vars()). Also add\n regression test cases for WITH that caused the error in the\n previous version.\n\n- Fix the case when no greedy quantifiers ('+' or '*') are included in\n PATTERN.\n \n - Previously update_reduced_frame() did not consider the case and\n produced wrong results. Add another code path which is dedicated\n to none greedy PATTERN (at this point, it means there's no\n quantifier case). Also add a test case for this.\n\n- Remove unnecessary check in transformPatternClause().\n\n - Previously it checked if all pattern variables are defined in\n DEFINE clause. But currently RPR allows to \"auto define\" such\n variables as \"varname AS TRUE\". So the check was not necessary.\n\n- FYI here is the list to explain what was changed in each patch file.\n\n0001-Row-pattern-recognition-patch-for-raw-parser.patch\n- same\n\n0002-Row-pattern-recognition-patch-parse-analysis.patch\n- Add markTargetListOrigins() to transformFrameOffset().\n- Change transformPatternClause().\n\n0003-Row-pattern-recognition-patch-planner.patch\n- Fix perform_pullup_replace_vars()\n\n0004-Row-pattern-recognition-patch-executor.patch\n- Fix update_reduced_frame()\n- Fix search_str_set()\n\n0005-Row-pattern-recognition-patch-docs.patch\n- same\n\n0006-Row-pattern-recognition-patch-tests.patch\n- Add test case for non-greedy and WITH cases\n\n0007-Allow-to-print-raw-parse-tree.patch\n- same\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS LLC\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp",
"msg_date": "Wed, 08 Nov 2023 16:37:05 +0900 (JST)",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Row pattern recognition"
},
{
"msg_contents": "Sorry for posting v12 patch again. It seems the previous post of v12\npatch email lost mail threading information and was not recognized as\na part of the thread by CF application and CFbot.\nhttps://www.postgresql.org/message-id/20231204.204048.1998548830490453126.t-ishii%40sranhm.sra.co.jp\n\nAttached is the v12 patch. Below are the summary of the changes from\nprevious version.\n\n- Rebase. CFbot says v11 patch needs rebase since Nov 30, 2023.\n \n- Apply preprocess_expression() to DEFINE clause in the planning\n phase. This is necessary to simply const expressions like:\n\n DEFINE A price < (99 + 1)\n to:\n DEFINE A price < 100\n\n- Re-allow to use WinSetMarkPosition() in eval_windowaggregates().\n\n- FYI here is the list to explain what were changed in each patch file.\n\n0001-Row-pattern-recognition-patch-for-raw-parser.patch\n- Fix conflict.\n\n0002-Row-pattern-recognition-patch-parse-analysis.patch\n- Same as before.\n\n0003-Row-pattern-recognition-patch-planner.patch\n- Call preprocess_expression() for DEFINE clause in subquery_planner().\n\n0004-Row-pattern-recognition-patch-executor.patch\n- Re-allow to use WinSetMarkPosition() in eval_windowaggregates().\n\n0005-Row-pattern-recognition-patch-docs.patch\n- Same as before.\n\n0006-Row-pattern-recognition-patch-tests.patch\n- Same as before.\n\n0007-Allow-to-print-raw-parse-tree.patch\n- Same as before.\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS LLC\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp",
"msg_date": "Fri, 08 Dec 2023 10:16:13 +0900 (JST)",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Row pattern recognition"
},
{
"msg_contents": "> On 04.12.23 12:40, Tatsuo Ishii wrote:\n>> diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y\n>> index d631ac89a9..5a77fca17f 100644\n>> --- a/src/backend/parser/gram.y\n>> +++ b/src/backend/parser/gram.y\n>> @@ -251,6 +251,8 @@ static Node *makeRecursiveViewSelect(char\n>> *relname, List *aliases, Node *query);\n>> \tDefElem\t *defelt;\n>> \tSortBy\t *sortby;\n>> \tWindowDef *windef;\n>> +\tRPCommonSyntax\t*rpcom;\n>> +\tRPSubsetItem\t*rpsubset;\n>> \tJoinExpr *jexpr;\n>> \tIndexElem *ielem;\n>> \tStatsElem *selem;\n>> @@ -278,6 +280,7 @@ static Node *makeRecursiveViewSelect(char\n>> *relname, List *aliases, Node *query);\n>> \tMergeWhenClause *mergewhen;\n>> \tstruct KeyActions *keyactions;\n>> \tstruct KeyAction *keyaction;\n>> +\tRPSkipTo\tskipto;\n>> }\n>> %type <node>\tstmt toplevel_stmt schema_stmt routine_body_stmt\n> \n> It is usually not the style to add an entry for every node type to the\n> %union. Otherwise, we'd have hundreds of entries in there.\n\nOk, I have removed the node types and used existing node types. Also\nI have moved RPR related %types to same place to make it easier to know\nwhat are added by RPR.\n\n>> @@ -866,6 +878,7 @@ static Node *makeRecursiveViewSelect(char\n>> *relname, List *aliases, Node *query);\n>> %nonassoc UNBOUNDED /* ideally would have same precedence as IDENT */\n>> %nonassoc IDENT PARTITION RANGE ROWS GROUPS PRECEDING FOLLOWING CUBE\n>> %ROLLUP\n>> \t\t\tSET KEYS OBJECT_P SCALAR VALUE_P WITH WITHOUT\n>> +%nonassoc\tMEASURES AFTER INITIAL SEEK PATTERN_P\n>> %left Op OPERATOR /* multi-character ops and user-defined operators */\n>> %left\t\t'+' '-'\n>> %left\t\t'*' '/' '%'\n> \n> It was recently discussed that these %nonassoc should ideally all have\n> the same precedence. Did you consider that here?\n\nNo, I didn't realize it. Thanks for pointing it out. I have removed\n%nonassoc so that MEASURES etc. have the same precedence as IDENT etc.\n\nAttached is the new diff of gram.y against master branch.\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS LLC\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp\n\n\ndiff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y\nindex d631ac89a9..6c41aa2e9f 100644\n--- a/src/backend/parser/gram.y\n+++ b/src/backend/parser/gram.y\n@@ -659,6 +659,21 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);\n \t\t\t\tjson_object_constructor_null_clause_opt\n \t\t\t\tjson_array_constructor_null_clause_opt\n \n+%type <target>\trow_pattern_measure_item\n+\t\t\t\trow_pattern_definition\n+%type <node>\topt_row_pattern_common_syntax\n+\t\t\t\topt_row_pattern_skip_to\n+\t\t\t\trow_pattern_subset_item\n+\t\t\t\trow_pattern_term\n+%type <list>\topt_row_pattern_measures\n+\t\t\t\trow_pattern_measure_list\n+\t\t\t\trow_pattern_definition_list\n+\t\t\t\topt_row_pattern_subset_clause\n+\t\t\t\trow_pattern_subset_list\n+\t\t\t\trow_pattern_subset_rhs\n+\t\t\t\trow_pattern\n+%type <boolean>\topt_row_pattern_initial_or_seek\n+\t\t\t\tfirst_or_last\n \n /*\n * Non-keyword token types. 
These are hard-wired into the \"flex\" lexer.\n@@ -702,7 +717,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);\n \tCURRENT_TIME CURRENT_TIMESTAMP CURRENT_USER CURSOR CYCLE\n \n \tDATA_P DATABASE DAY_P DEALLOCATE DEC DECIMAL_P DECLARE DEFAULT DEFAULTS\n-\tDEFERRABLE DEFERRED DEFINER DELETE_P DELIMITER DELIMITERS DEPENDS DEPTH DESC\n+\tDEFERRABLE DEFERRED DEFINE DEFINER DELETE_P DELIMITER DELIMITERS DEPENDS DEPTH DESC\n \tDETACH DICTIONARY DISABLE_P DISCARD DISTINCT DO DOCUMENT_P DOMAIN_P\n \tDOUBLE_P DROP\n \n@@ -718,7 +733,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);\n \tHANDLER HAVING HEADER_P HOLD HOUR_P\n \n \tIDENTITY_P IF_P ILIKE IMMEDIATE IMMUTABLE IMPLICIT_P IMPORT_P IN_P INCLUDE\n-\tINCLUDING INCREMENT INDENT INDEX INDEXES INHERIT INHERITS INITIALLY INLINE_P\n+\tINCLUDING INCREMENT INDENT INDEX INDEXES INHERIT INHERITS INITIAL INITIALLY INLINE_P\n \tINNER_P INOUT INPUT_P INSENSITIVE INSERT INSTEAD INT_P INTEGER\n \tINTERSECT INTERVAL INTO INVOKER IS ISNULL ISOLATION\n \n@@ -731,7 +746,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);\n \tLEADING LEAKPROOF LEAST LEFT LEVEL LIKE LIMIT LISTEN LOAD LOCAL\n \tLOCALTIME LOCALTIMESTAMP LOCATION LOCK_P LOCKED LOGGED\n \n-\tMAPPING MATCH MATCHED MATERIALIZED MAXVALUE MERGE METHOD\n+\tMAPPING MATCH MATCHED MATERIALIZED MAXVALUE MEASURES MERGE METHOD\n \tMINUTE_P MINVALUE MODE MONTH_P MOVE\n \n \tNAME_P NAMES NATIONAL NATURAL NCHAR NEW NEXT NFC NFD NFKC NFKD NO NONE\n@@ -743,8 +758,8 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);\n \tORDER ORDINALITY OTHERS OUT_P OUTER_P\n \tOVER OVERLAPS OVERLAY OVERRIDING OWNED OWNER\n \n-\tPARALLEL PARAMETER PARSER PARTIAL PARTITION PASSING PASSWORD\n-\tPLACING PLANS POLICY\n+\tPARALLEL PARAMETER PARSER PARTIAL PARTITION PASSING PASSWORD PAST\n+\tPATTERN_P PERMUTE PLACING PLANS POLICY\n \tPOSITION PRECEDING PRECISION PRESERVE PREPARE PREPARED PRIMARY\n \tPRIOR PRIVILEGES PROCEDURAL PROCEDURE PROCEDURES PROGRAM PUBLICATION\n \n@@ -755,12 +770,13 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);\n \tRESET RESTART RESTRICT RETURN RETURNING RETURNS REVOKE RIGHT ROLE ROLLBACK ROLLUP\n \tROUTINE ROUTINES ROW ROWS RULE\n \n-\tSAVEPOINT SCALAR SCHEMA SCHEMAS SCROLL SEARCH SECOND_P SECURITY SELECT\n+\tSAVEPOINT SCALAR SCHEMA SCHEMAS SCROLL SEARCH SECOND_P SECURITY SEEK SELECT\n \tSEQUENCE SEQUENCES\n+\n \tSERIALIZABLE SERVER SESSION SESSION_USER SET SETS SETOF SHARE SHOW\n \tSIMILAR SIMPLE SKIP SMALLINT SNAPSHOT SOME SQL_P STABLE STANDALONE_P\n \tSTART STATEMENT STATISTICS STDIN STDOUT STORAGE STORED STRICT_P STRIP_P\n-\tSUBSCRIPTION SUBSTRING SUPPORT SYMMETRIC SYSID SYSTEM_P SYSTEM_USER\n+\tSUBSCRIPTION SUBSET SUBSTRING SUPPORT SYMMETRIC SYSID SYSTEM_P SYSTEM_USER\n \n \tTABLE TABLES TABLESAMPLE TABLESPACE TEMP TEMPLATE TEMPORARY TEXT_P THEN\n \tTIES TIME TIMESTAMP TO TRAILING TRANSACTION TRANSFORM\n@@ -866,6 +882,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);\n %nonassoc\tUNBOUNDED\t\t/* ideally would have same precedence as IDENT */\n %nonassoc\tIDENT PARTITION RANGE ROWS GROUPS PRECEDING FOLLOWING CUBE ROLLUP\n \t\t\tSET KEYS OBJECT_P SCALAR VALUE_P WITH WITHOUT\n+\t\t\tMEASURES AFTER INITIAL SEEK PATTERN_P\n %left\t\tOp OPERATOR\t\t/* multi-character ops and user-defined operators */\n %left\t\t'+' '-'\n %left\t\t'*' '/' '%'\n@@ -15914,7 +15931,8 @@ over_clause: OVER window_specification\n 
\t\t;\n \n window_specification: '(' opt_existing_window_name opt_partition_clause\n-\t\t\t\t\t\topt_sort_clause opt_frame_clause ')'\n+\t\t\t\t\t\topt_sort_clause opt_row_pattern_measures opt_frame_clause\n+\t\t\t\t\t\topt_row_pattern_common_syntax ')'\n \t\t\t\t{\n \t\t\t\t\tWindowDef *n = makeNode(WindowDef);\n \n@@ -15922,10 +15940,12 @@ window_specification: '(' opt_existing_window_name opt_partition_clause\n \t\t\t\t\tn->refname = $2;\n \t\t\t\t\tn->partitionClause = $3;\n \t\t\t\t\tn->orderClause = $4;\n+\t\t\t\t\tn->rowPatternMeasures = $5;\n \t\t\t\t\t/* copy relevant fields of opt_frame_clause */\n-\t\t\t\t\tn->frameOptions = $5->frameOptions;\n-\t\t\t\t\tn->startOffset = $5->startOffset;\n-\t\t\t\t\tn->endOffset = $5->endOffset;\n+\t\t\t\t\tn->frameOptions = $6->frameOptions;\n+\t\t\t\t\tn->startOffset = $6->startOffset;\n+\t\t\t\t\tn->endOffset = $6->endOffset;\n+\t\t\t\t\tn->rpCommonSyntax = (RPCommonSyntax *)$7;\n \t\t\t\t\tn->location = @1;\n \t\t\t\t\t$$ = n;\n \t\t\t\t}\n@@ -15949,6 +15969,31 @@ opt_partition_clause: PARTITION BY expr_list\t\t{ $$ = $3; }\n \t\t\t| /*EMPTY*/\t\t\t\t\t\t\t\t{ $$ = NIL; }\n \t\t;\n \n+/*\n+ * ROW PATTERN_P MEASURES\n+ */\n+opt_row_pattern_measures: MEASURES row_pattern_measure_list\t{ $$ = $2; }\n+\t\t\t| /*EMPTY*/\t\t\t\t\t\t\t\t{ $$ = NIL; }\n+\t\t;\n+\n+row_pattern_measure_list:\n+\t\t\trow_pattern_measure_item\n+\t\t\t\t\t{ $$ = list_make1($1); }\n+\t\t\t| row_pattern_measure_list ',' row_pattern_measure_item\n+\t\t\t\t\t{ $$ = lappend($1, $3); }\n+\t\t;\n+\n+row_pattern_measure_item:\n+\t\t\ta_expr AS ColLabel\n+\t\t\t\t{\n+\t\t\t\t\t$$ = makeNode(ResTarget);\n+\t\t\t\t\t$$->name = $3;\n+\t\t\t\t\t$$->indirection = NIL;\n+\t\t\t\t\t$$->val = (Node *) $1;\n+\t\t\t\t\t$$->location = @1;\n+\t\t\t\t}\n+\t\t;\n+\n /*\n * For frame clauses, we return a WindowDef, but only some fields are used:\n * frameOptions, startOffset, and endOffset.\n@@ -16108,6 +16153,143 @@ opt_window_exclusion_clause:\n \t\t\t| /*EMPTY*/\t\t\t\t{ $$ = 0; }\n \t\t;\n \n+opt_row_pattern_common_syntax:\n+opt_row_pattern_skip_to opt_row_pattern_initial_or_seek\n+\t\t\t\tPATTERN_P '(' row_pattern ')'\n+\t\t\t\topt_row_pattern_subset_clause\n+\t\t\t\tDEFINE row_pattern_definition_list\n+\t\t\t{\n+\t\t\t\tRPCommonSyntax *n = makeNode(RPCommonSyntax);\n+\t\t\t\tn->rpSkipTo = ((RPCommonSyntax *)$1)->rpSkipTo;\n+\t\t\t\tn->rpSkipVariable = ((RPCommonSyntax *)$1)->rpSkipVariable;\n+\t\t\t\tn->initial = $2;\n+\t\t\t\tn->rpPatterns = $5;\n+\t\t\t\tn->rpSubsetClause = $7;\n+\t\t\t\tn->rpDefs = $9;\n+\t\t\t\t$$ = (Node *) n;\n+\t\t\t}\n+\t\t\t| /*EMPTY*/\t\t{ $$ = NULL; }\n+\t;\n+\n+opt_row_pattern_skip_to:\n+\t\t\tAFTER MATCH SKIP TO NEXT ROW\n+\t\t\t\t{\n+\t\t\t\t\tRPCommonSyntax *n = makeNode(RPCommonSyntax);\n+\t\t\t\t\tn->rpSkipTo = ST_NEXT_ROW;\n+\t\t\t\t\tn->rpSkipVariable = NULL;\n+\t\t\t\t\t$$ = (Node *) n;\n+\t\t\t}\n+\t\t\t| AFTER MATCH SKIP PAST LAST_P ROW\n+\t\t\t\t{\n+\t\t\t\t\tRPCommonSyntax *n = makeNode(RPCommonSyntax);\n+\t\t\t\t\tn->rpSkipTo = ST_PAST_LAST_ROW;\n+\t\t\t\t\tn->rpSkipVariable = NULL;\n+\t\t\t\t\t$$ = (Node *) n;\n+\t\t\t\t}\n+\t\t\t| AFTER MATCH SKIP TO first_or_last ColId\n+\t\t\t\t{\n+\t\t\t\t\tRPCommonSyntax *n = makeNode(RPCommonSyntax);\n+\t\t\t\t\tn->rpSkipTo = $5? 
ST_FIRST_VARIABLE : ST_LAST_VARIABLE;\n+\t\t\t\t\tn->rpSkipVariable = $6;\n+\t\t\t\t\t$$ = (Node *) n;\n+\t\t\t\t}\n+/*\n+\t\t\t| AFTER MATCH SKIP TO LAST_P ColId\t\t%prec LAST_P\n+\t\t\t\t{\n+\t\t\t\t\tRPCommonSyntax *n = makeNode(RPCommonSyntax);\n+\t\t\t\t\tn->rpSkipTo = ST_LAST_VARIABLE;\n+\t\t\t\t\tn->rpSkipVariable = $6;\n+\t\t\t\t\t$$ = n;\n+\t\t\t\t}\n+\t\t\t| AFTER MATCH SKIP TO ColId\n+\t\t\t\t{\n+\t\t\t\t\tRPCommonSyntax *n = makeNode(RPCommonSyntax);\n+\t\t\t\t\tn->rpSkipTo = ST_VARIABLE;\n+\t\t\t\t\tn->rpSkipVariable = $5;\n+\t\t\t\t\t$$ = n;\n+\t\t\t\t}\n+*/\n+\t\t\t| /*EMPTY*/\n+\t\t\t\t{\n+\t\t\t\t\tRPCommonSyntax *n = makeNode(RPCommonSyntax);\n+\t\t\t\t\t/* temporary set default to ST_NEXT_ROW */\n+\t\t\t\t\tn->rpSkipTo = ST_PAST_LAST_ROW;\n+\t\t\t\t\tn->rpSkipVariable = NULL;\n+\t\t\t\t\t$$ = (Node *) n;\n+\t\t\t\t}\n+\t;\n+\n+first_or_last:\n+\t\t\tFIRST_P\t\t{ $$ = true; }\n+\t\t\t| LAST_P\t{ $$ = false; }\n+\t;\n+\n+opt_row_pattern_initial_or_seek:\n+\t\t\tINITIAL\t\t\t{ $$ = true; }\n+\t\t\t| SEEK\n+\t\t\t\t{\n+\t\t\t\t\tereport(ERROR,\n+\t\t\t\t\t\t\t(errcode(ERRCODE_SYNTAX_ERROR),\n+\t\t\t\t\t\t\t errmsg(\"SEEK is not supported\"),\n+\t\t\t\t\t\t\t errhint(\"Use INITIAL.\"),\n+\t\t\t\t\t\t\t parser_errposition(@1)));\n+\t\t\t\t}\n+\t\t\t| /*EMPTY*/\t\t{ $$ = true; }\n+\t\t;\n+\n+row_pattern:\n+\t\t\trow_pattern_term\t\t\t\t\t\t\t{ $$ = list_make1($1); }\n+\t\t\t| row_pattern row_pattern_term\t\t\t\t{ $$ = lappend($1, $2); }\n+\t\t;\n+\n+row_pattern_term:\n+\t\t\tColId\t{ $$ = (Node *) makeSimpleA_Expr(AEXPR_OP, \"\", (Node *)makeString($1), NULL, @1); }\n+\t\t\t| ColId '*'\t{ $$ = (Node *) makeSimpleA_Expr(AEXPR_OP, \"*\", (Node *)makeString($1), NULL, @1); }\n+\t\t\t| ColId '+'\t{ $$ = (Node *) makeSimpleA_Expr(AEXPR_OP, \"+\", (Node *)makeString($1), NULL, @1); }\n+\t\t\t| ColId '?'\t{ $$ = (Node *) makeSimpleA_Expr(AEXPR_OP, \"?\", (Node *)makeString($1), NULL, @1); }\n+\t\t;\n+\n+opt_row_pattern_subset_clause:\n+\t\t\tSUBSET row_pattern_subset_list\t{ $$ = $2; }\n+\t\t\t| /*EMPTY*/\t\t\t\t\t\t\t\t\t\t\t\t{ $$ = NIL; }\n+\t\t;\n+\n+row_pattern_subset_list:\n+\t\t\trow_pattern_subset_item\t\t\t\t\t\t\t\t\t{ $$ = list_make1($1); }\n+\t\t\t| row_pattern_subset_list ',' row_pattern_subset_item\t{ $$ = lappend($1, $3); }\n+\t\t\t| /*EMPTY*/\t\t\t\t\t\t\t\t\t\t\t\t{ $$ = NIL; }\n+\t\t;\n+\n+row_pattern_subset_item: ColId '=' '(' row_pattern_subset_rhs ')'\n+\t\t\t{\n+\t\t\t\tRPSubsetItem *n = makeNode(RPSubsetItem);\n+\t\t\t\tn->name = $1;\n+\t\t\t\tn->rhsVariable = $4;\n+\t\t\t\t$$ = (Node *) n;\n+\t\t\t}\n+\t\t;\n+\n+row_pattern_subset_rhs:\n+\t\t\tColId\t\t\t\t\t\t\t\t{ $$ = list_make1(makeStringConst($1, @1)); }\n+\t\t\t| row_pattern_subset_rhs ',' ColId\t{ $$ = lappend($1, makeStringConst($3, @1)); }\n+\t\t\t| /*EMPTY*/\t\t\t\t\t\t\t{ $$ = NIL; }\n+\t\t;\n+\n+row_pattern_definition_list:\n+\t\t\trow_pattern_definition\t\t\t\t\t\t\t\t\t\t{ $$ = list_make1($1); }\n+\t\t\t| row_pattern_definition_list ',' row_pattern_definition\t{ $$ = lappend($1, $3); }\n+\t\t;\n+\n+row_pattern_definition:\n+\t\t\tColId AS a_expr\n+\t\t\t\t{\n+\t\t\t\t\t$$ = makeNode(ResTarget);\n+\t\t\t\t\t$$->name = $1;\n+\t\t\t\t\t$$->indirection = NIL;\n+\t\t\t\t\t$$->val = (Node *) $3;\n+\t\t\t\t\t$$->location = @1;\n+\t\t\t\t}\n+\t\t;\n \n /*\n * Supporting nonterminals for expressions.\n@@ -17217,6 +17399,7 @@ unreserved_keyword:\n \t\t\t| INDEXES\n \t\t\t| INHERIT\n \t\t\t| INHERITS\n+\t\t\t| INITIAL\n \t\t\t| INLINE_P\n \t\t\t| INPUT_P\n \t\t\t| INSENSITIVE\n@@ -17244,6 +17427,7 @@ 
unreserved_keyword:\n \t\t\t| MATCHED\n \t\t\t| MATERIALIZED\n \t\t\t| MAXVALUE\n+\t\t\t| MEASURES\n \t\t\t| MERGE\n \t\t\t| METHOD\n \t\t\t| MINUTE_P\n@@ -17286,6 +17470,9 @@ unreserved_keyword:\n \t\t\t| PARTITION\n \t\t\t| PASSING\n \t\t\t| PASSWORD\n+\t\t\t| PAST\n+\t\t\t| PATTERN_P\n+\t\t\t| PERMUTE\n \t\t\t| PLANS\n \t\t\t| POLICY\n \t\t\t| PRECEDING\n@@ -17336,6 +17523,7 @@ unreserved_keyword:\n \t\t\t| SEARCH\n \t\t\t| SECOND_P\n \t\t\t| SECURITY\n+\t\t\t| SEEK\n \t\t\t| SEQUENCE\n \t\t\t| SEQUENCES\n \t\t\t| SERIALIZABLE\n@@ -17361,6 +17549,7 @@ unreserved_keyword:\n \t\t\t| STRICT_P\n \t\t\t| STRIP_P\n \t\t\t| SUBSCRIPTION\n+\t\t\t| SUBSET\n \t\t\t| SUPPORT\n \t\t\t| SYSID\n \t\t\t| SYSTEM_P\n@@ -17548,6 +17737,7 @@ reserved_keyword:\n \t\t\t| CURRENT_USER\n \t\t\t| DEFAULT\n \t\t\t| DEFERRABLE\n+\t\t\t| DEFINE\n \t\t\t| DESC\n \t\t\t| DISTINCT\n \t\t\t| DO\n@@ -17710,6 +17900,7 @@ bare_label_keyword:\n \t\t\t| DEFAULTS\n \t\t\t| DEFERRABLE\n \t\t\t| DEFERRED\n+\t\t\t| DEFINE\n \t\t\t| DEFINER\n \t\t\t| DELETE_P\n \t\t\t| DELIMITER\n@@ -17785,6 +17976,7 @@ bare_label_keyword:\n \t\t\t| INDEXES\n \t\t\t| INHERIT\n \t\t\t| INHERITS\n+\t\t\t| INITIAL\n \t\t\t| INITIALLY\n \t\t\t| INLINE_P\n \t\t\t| INNER_P\n@@ -17834,6 +18026,7 @@ bare_label_keyword:\n \t\t\t| MATCHED\n \t\t\t| MATERIALIZED\n \t\t\t| MAXVALUE\n+\t\t\t| MEASURES\n \t\t\t| MERGE\n \t\t\t| METHOD\n \t\t\t| MINVALUE\n@@ -17887,6 +18080,9 @@ bare_label_keyword:\n \t\t\t| PARTITION\n \t\t\t| PASSING\n \t\t\t| PASSWORD\n+\t\t\t| PAST\n+\t\t\t| PATTERN_P\n+\t\t\t| PERMUTE\n \t\t\t| PLACING\n \t\t\t| PLANS\n \t\t\t| POLICY\n@@ -17943,6 +18139,7 @@ bare_label_keyword:\n \t\t\t| SCROLL\n \t\t\t| SEARCH\n \t\t\t| SECURITY\n+\t\t\t| SEEK\n \t\t\t| SELECT\n \t\t\t| SEQUENCE\n \t\t\t| SEQUENCES\n@@ -17974,6 +18171,7 @@ bare_label_keyword:\n \t\t\t| STRICT_P\n \t\t\t| STRIP_P\n \t\t\t| SUBSCRIPTION\n+\t\t\t| SUBSET\n \t\t\t| SUBSTRING\n \t\t\t| SUPPORT\n \t\t\t| SYMMETRIC",
"msg_date": "Sat, 09 Dec 2023 07:22:58 +0900 (JST)",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Row pattern recognition"
},
{
"msg_contents": "On Sat, 09 Dec 2023 07:22:58 +0900 (JST)\nTatsuo Ishii <[email protected]> wrote:\n\n> > On 04.12.23 12:40, Tatsuo Ishii wrote:\n> >> diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y\n> >> index d631ac89a9..5a77fca17f 100644\n> >> --- a/src/backend/parser/gram.y\n> >> +++ b/src/backend/parser/gram.y\n> >> @@ -251,6 +251,8 @@ static Node *makeRecursiveViewSelect(char\n> >> *relname, List *aliases, Node *query);\n> >> \tDefElem\t *defelt;\n> >> \tSortBy\t *sortby;\n> >> \tWindowDef *windef;\n> >> +\tRPCommonSyntax\t*rpcom;\n> >> +\tRPSubsetItem\t*rpsubset;\n> >> \tJoinExpr *jexpr;\n> >> \tIndexElem *ielem;\n> >> \tStatsElem *selem;\n> >> @@ -278,6 +280,7 @@ static Node *makeRecursiveViewSelect(char\n> >> *relname, List *aliases, Node *query);\n> >> \tMergeWhenClause *mergewhen;\n> >> \tstruct KeyActions *keyactions;\n> >> \tstruct KeyAction *keyaction;\n> >> +\tRPSkipTo\tskipto;\n> >> }\n> >> %type <node>\tstmt toplevel_stmt schema_stmt routine_body_stmt\n> > \n> > It is usually not the style to add an entry for every node type to the\n> > %union. Otherwise, we'd have hundreds of entries in there.\n> \n> Ok, I have removed the node types and used existing node types. Also\n> I have moved RPR related %types to same place to make it easier to know\n> what are added by RPR.\n> \n> >> @@ -866,6 +878,7 @@ static Node *makeRecursiveViewSelect(char\n> >> *relname, List *aliases, Node *query);\n> >> %nonassoc UNBOUNDED /* ideally would have same precedence as IDENT */\n> >> %nonassoc IDENT PARTITION RANGE ROWS GROUPS PRECEDING FOLLOWING CUBE\n> >> %ROLLUP\n> >> \t\t\tSET KEYS OBJECT_P SCALAR VALUE_P WITH WITHOUT\n> >> +%nonassoc\tMEASURES AFTER INITIAL SEEK PATTERN_P\n> >> %left Op OPERATOR /* multi-character ops and user-defined operators */\n> >> %left\t\t'+' '-'\n> >> %left\t\t'*' '/' '%'\n> > \n> > It was recently discussed that these %nonassoc should ideally all have\n> > the same precedence. Did you consider that here?\n> \n> No, I didn't realize it. Thanks for pointing it out. I have removed\n> %nonassoc so that MEASURES etc. have the same precedence as IDENT etc.\n> \n> Attached is the new diff of gram.y against master branch.\n\nThank you very much for providing the patch for the RPR implementation.\n\nAfter applying the v12-patches, I noticed an issue that\nthe rpr related parts in window clauses were not displayed in the\nview definitions (the definition column of pg_views).\n\nTo address this, I have taken the liberty of adding an additional patch\nthat modifies the relevant rewriter source code.\nI have attached the rewriter patch for your review and would greatly appreciate your feedback.\n\nThank you for your time and consideration.\n\n-- \nSRA OSS LLC\nNingwei Chen <[email protected]>\nTEL: 03-5979-2701 FAX: 03-5979-2702",
"msg_date": "Mon, 22 Jan 2024 14:51:49 +0900",
"msg_from": "NINGWEI CHEN <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Row pattern recognition"
},
{
"msg_contents": "> Thank you very much for providing the patch for the RPR implementation.\n> \n> After applying the v12-patches, I noticed an issue that\n> the rpr related parts in window clauses were not displayed in the\n> view definitions (the definition column of pg_views).\n> \n> To address this, I have taken the liberty of adding an additional patch\n> that modifies the relevant rewriter source code.\n> I have attached the rewriter patch for your review and would greatly appreciate your feedback.\n> \n> Thank you for your time and consideration.\n\nThank you so much for spotting the issue and creating the patch. I\nconfirmed that your patch applies cleanly and solve the issue. I will\ninclude the patches into upcoming v13 patches.\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS LLC\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp\n\n\n",
"msg_date": "Mon, 22 Jan 2024 15:22:11 +0900 (JST)",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Row pattern recognition"
},
{
"msg_contents": "Attached is the v13 patch. Below are the summary of the changes from\nprevious version (besides rebase).\n\n0001-Row-pattern-recognition-patch-for-raw-parser.patch\n- Fix raw paser per Peter Eisentraut's review. Remove the new node\n types and use existing ones. Also remove %nonassoc so that\n MEASURES etc. have the same precedence as IDENT etc.\n\nPeter's comment:\n> It is usually not the style to add an entry for every node type to the\n> %union. Otherwise, we'd have hundreds of entries in there.\n\n> It was recently discussed that these %nonassoc should ideally all have\n> the same precedence. Did you consider that here?\n\n0002-Row-pattern-recognition-patch-parse-analysis.patch\n- Fix transformRPR so that SKIP variable name in the AFTER MATCH SKIP\n TO clause is tracked. This is added by Ningwei Chen.\n\n0003-Row-pattern-recognition-patch-rewriter.patch\nThis is a new patch for rewriter. Contributed by Ningwei Chen.\n\nChen's comment:\n> After applying the v12-patches, I noticed an issue that\n> the rpr related parts in window clauses were not displayed in the\n> view definitions (the definition column of pg_views).\n\n0004-Row-pattern-recognition-patch-planner.patch\n- same as before (previously it was 0003-Row-pattern-recognition-patch-planner.patch)\n\n0005-Row-pattern-recognition-patch-executor.patch\n- same as before (previously it was 0004-Row-pattern-recognition-patch-executor.patch)\n\n0006-Row-pattern-recognition-patch-docs.patch\n- Same as before. (previously it was 0005-Row-pattern-recognition-patch-docs.patch)\n\n0007-Row-pattern-recognition-patch-tests.patch\n- Same as before. (previously it was 0006-Row-pattern-recognition-patch-tests.patch)\n\n0008-Allow-to-print-raw-parse-tree.patch\n- Same as before. (previously it was 0007-Allow-to-print-raw-parse-tree.patch).\n Note that patch is not intended to be incorporated into main\n tree. This is just for debugging purpose. With this patch, raw parse\n tree is printed if debug_print_parse is enabled.\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS LLC\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp",
"msg_date": "Mon, 22 Jan 2024 19:26:18 +0900 (JST)",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Row pattern recognition"
},
{
"msg_contents": "Attached is the v14 patch. Below are the summary of the changes from\nprevious version (besides rebase).\nV14 patches are mainly for coding style fixes.\n\n0001-Row-pattern-recognition-patch-for-raw-parser.patch\n- Fold too long lines and run pgindent.\n\n0002-Row-pattern-recognition-patch-parse-analysis.patch\n- Fold too long lines and run pgindent.\n\n0003-Row-pattern-recognition-patch-rewriter.patch\n- Fold too long lines and run pgindent.\n\n0004-Row-pattern-recognition-patch-planner.patch\n- Fold too long lines and run pgindent.\n\n0005-Row-pattern-recognition-patch-executor.patch\n- Fold too long lines and run pgindent.\n\n- Surround debug lines using \"ifdef RPR_DEBUG\" so that logs are not\n contaminated by RPR debug logs when log_min_messages are set to\n DEBUG1 or higher.\n\n0006-Row-pattern-recognition-patch-docs.patch\n- Same as before. (previously it was 0005-Row-pattern-recognition-patch-docs.patch)\n\n0007-Row-pattern-recognition-patch-tests.patch\n- Same as before. (previously it was 0006-Row-pattern-recognition-patch-tests.patch)\n\n0008-Allow-to-print-raw-parse-tree.patch\n- Same as before.\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS LLC\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp",
"msg_date": "Thu, 29 Feb 2024 09:19:54 +0900 (JST)",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Row pattern recognition"
},
{
"msg_contents": "Attached is the v15 patch. No changes are made except rebasing due to\nrecent grammar changes.\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS LLC\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp",
"msg_date": "Thu, 28 Mar 2024 19:59:25 +0900 (JST)",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Row pattern recognition"
},
{
"msg_contents": "Attached is the v16 patch. No changes are made except rebasing due to\nrecent grammar changes.\n\nAlso I removed [email protected] from the Cc: list. The email address\nis no longer valid.\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS LLC\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp",
"msg_date": "Fri, 12 Apr 2024 16:09:08 +0900 (JST)",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Row pattern recognition"
},
{
"msg_contents": "Hi Vik and Champion,\n\nI think the current RPR patch is not quite correct in handling\ncount(*).\n\n(using slightly modified version of Vik's example query)\n\nSELECT v.a, count(*) OVER w\nFROM (VALUES ('A'),('B'),('B'),('C')) AS v (a)\nWINDOW w AS (\n ORDER BY v.a\n ROWS BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING\n PATTERN (B+)\n DEFINE B AS a = 'B'\n)\n a | count \n---+-------\n A | 0\n B | 2\n B | \n C | 0\n(4 rows)\n\nHere row 3 is skipped because the pattern B matches row 2 and 3. In\nthis case I think cont(*) should return 0 rathern than NULL for row 3.\n\nWhat do you think?\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS LLC\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp\n\n\n",
"msg_date": "Wed, 24 Apr 2024 12:12:44 +0900 (JST)",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Row pattern recognition"
},
{
"msg_contents": "On Tue, Apr 23, 2024 at 8:13 PM Tatsuo Ishii <[email protected]> wrote:\n> SELECT v.a, count(*) OVER w\n> FROM (VALUES ('A'),('B'),('B'),('C')) AS v (a)\n> WINDOW w AS (\n> ORDER BY v.a\n> ROWS BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING\n> PATTERN (B+)\n> DEFINE B AS a = 'B'\n> )\n> a | count\n> ---+-------\n> A | 0\n> B | 2\n> B |\n> C | 0\n> (4 rows)\n>\n> Here row 3 is skipped because the pattern B matches row 2 and 3. In\n> this case I think cont(*) should return 0 rathern than NULL for row 3.\n\nI think returning zero would match Vik's explanation upthread [1],\nyes. Unfortunately I don't have a spec handy to double-check for\nmyself right now.\n\n--Jacob\n\n[1] https://www.postgresql.org/message-id/c9ebc3d0-c3d1-e8eb-4a57-0ec099cbda17%40postgresfriends.org\n\n\n",
"msg_date": "Wed, 24 Apr 2024 10:55:29 -0700",
"msg_from": "Jacob Champion <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Row pattern recognition"
},
{
"msg_contents": "> On Tue, Apr 23, 2024 at 8:13 PM Tatsuo Ishii <[email protected]> wrote:\r\n>> SELECT v.a, count(*) OVER w\r\n>> FROM (VALUES ('A'),('B'),('B'),('C')) AS v (a)\r\n>> WINDOW w AS (\r\n>> ORDER BY v.a\r\n>> ROWS BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING\r\n>> PATTERN (B+)\r\n>> DEFINE B AS a = 'B'\r\n>> )\r\n>> a | count\r\n>> ---+-------\r\n>> A | 0\r\n>> B | 2\r\n>> B |\r\n>> C | 0\r\n>> (4 rows)\r\n>>\r\n>> Here row 3 is skipped because the pattern B matches row 2 and 3. In\r\n>> this case I think cont(*) should return 0 rathern than NULL for row 3.\r\n> \r\n> I think returning zero would match Vik's explanation upthread [1],\r\n> yes. Unfortunately I don't have a spec handy to double-check for\r\n> myself right now.\r\n\r\nOk. I believe you and Vik are correct.\r\nI am modifying the patch in this direction.\r\nAttached is the regression diff after modifying the patch.\r\n\r\nBest reagards,\r\n--\r\nTatsuo Ishii\r\nSRA OSS LLC\r\nEnglish: http://www.sraoss.co.jp/index_en/\r\nJapanese:http://www.sraoss.co.jp\r\n\ndiff -U3 /usr/local/src/pgsql/current/postgresql/src/test/regress/expected/rpr.out /usr/local/src/pgsql/current/postgresql/src/test/regress/results/rpr.out\n--- /usr/local/src/pgsql/current/postgresql/src/test/regress/expected/rpr.out\t2024-04-24 11:30:27.710523139 +0900\n+++ /usr/local/src/pgsql/current/postgresql/src/test/regress/results/rpr.out\t2024-04-26 14:39:03.543759205 +0900\n@@ -181,8 +181,8 @@\n company1 | 07-01-2023 | 100 | 0\n company1 | 07-02-2023 | 200 | 0\n company1 | 07-03-2023 | 150 | 3\n- company1 | 07-04-2023 | 140 | \n- company1 | 07-05-2023 | 150 | \n+ company1 | 07-04-2023 | 140 | 0\n+ company1 | 07-05-2023 | 150 | 0\n company1 | 07-06-2023 | 90 | 0\n company1 | 07-07-2023 | 110 | 0\n company1 | 07-08-2023 | 130 | 0\n@@ -556,24 +556,24 @@\n company | tdate | price | first_value | last_value | count \n ----------+------------+-------+-------------+------------+-------\n company1 | 07-01-2023 | 100 | 07-01-2023 | 07-03-2023 | 3\n- company1 | 07-02-2023 | 200 | | | \n- company1 | 07-03-2023 | 150 | | | \n+ company1 | 07-02-2023 | 200 | | | 0\n+ company1 | 07-03-2023 | 150 | | | 0\n company1 | 07-04-2023 | 140 | 07-04-2023 | 07-06-2023 | 3\n- company1 | 07-05-2023 | 150 | | | \n- company1 | 07-06-2023 | 90 | | | \n+ company1 | 07-05-2023 | 150 | | | 0\n+ company1 | 07-06-2023 | 90 | | | 0\n company1 | 07-07-2023 | 110 | 07-07-2023 | 07-09-2023 | 3\n- company1 | 07-08-2023 | 130 | | | \n- company1 | 07-09-2023 | 120 | | | \n+ company1 | 07-08-2023 | 130 | | | 0\n+ company1 | 07-09-2023 | 120 | | | 0\n company1 | 07-10-2023 | 130 | | | 0\n company2 | 07-01-2023 | 50 | 07-01-2023 | 07-03-2023 | 3\n- company2 | 07-02-2023 | 2000 | | | \n- company2 | 07-03-2023 | 1500 | | | \n+ company2 | 07-02-2023 | 2000 | | | 0\n+ company2 | 07-03-2023 | 1500 | | | 0\n company2 | 07-04-2023 | 1400 | 07-04-2023 | 07-06-2023 | 3\n- company2 | 07-05-2023 | 1500 | | | \n- company2 | 07-06-2023 | 60 | | | \n+ company2 | 07-05-2023 | 1500 | | | 0\n+ company2 | 07-06-2023 | 60 | | | 0\n company2 | 07-07-2023 | 1100 | 07-07-2023 | 07-09-2023 | 3\n- company2 | 07-08-2023 | 1300 | | | \n- company2 | 07-09-2023 | 1200 | | | \n+ company2 | 07-08-2023 | 1300 | | | 0\n+ company2 | 07-09-2023 | 1200 | | | 0\n company2 | 07-10-2023 | 1300 | | | 0\n (20 rows)\n \n@@ -604,24 +604,24 @@\n company | tdate | price | first_value | last_value | max | min | sum | avg | count \n 
----------+------------+-------+-------------+------------+------+-----+------+-----------------------+-------\n company1 | 07-01-2023 | 100 | 100 | 140 | 200 | 100 | 590 | 147.5000000000000000 | 4\n- company1 | 07-02-2023 | 200 | | | | | | | \n- company1 | 07-03-2023 | 150 | | | | | | | \n- company1 | 07-04-2023 | 140 | | | | | | | \n+ company1 | 07-02-2023 | 200 | | | | | | | 0\n+ company1 | 07-03-2023 | 150 | | | | | | | 0\n+ company1 | 07-04-2023 | 140 | | | | | | | 0\n company1 | 07-05-2023 | 150 | | | | | | | 0\n company1 | 07-06-2023 | 90 | 90 | 120 | 130 | 90 | 450 | 112.5000000000000000 | 4\n- company1 | 07-07-2023 | 110 | | | | | | | \n- company1 | 07-08-2023 | 130 | | | | | | | \n- company1 | 07-09-2023 | 120 | | | | | | | \n+ company1 | 07-07-2023 | 110 | | | | | | | 0\n+ company1 | 07-08-2023 | 130 | | | | | | | 0\n+ company1 | 07-09-2023 | 120 | | | | | | | 0\n company1 | 07-10-2023 | 130 | | | | | | | 0\n company2 | 07-01-2023 | 50 | 50 | 1400 | 2000 | 50 | 4950 | 1237.5000000000000000 | 4\n- company2 | 07-02-2023 | 2000 | | | | | | | \n- company2 | 07-03-2023 | 1500 | | | | | | | \n- company2 | 07-04-2023 | 1400 | | | | | | | \n+ company2 | 07-02-2023 | 2000 | | | | | | | 0\n+ company2 | 07-03-2023 | 1500 | | | | | | | 0\n+ company2 | 07-04-2023 | 1400 | | | | | | | 0\n company2 | 07-05-2023 | 1500 | | | | | | | 0\n company2 | 07-06-2023 | 60 | 60 | 1200 | 1300 | 60 | 3660 | 915.0000000000000000 | 4\n- company2 | 07-07-2023 | 1100 | | | | | | | \n- company2 | 07-08-2023 | 1300 | | | | | | | \n- company2 | 07-09-2023 | 1200 | | | | | | | \n+ company2 | 07-07-2023 | 1100 | | | | | | | 0\n+ company2 | 07-08-2023 | 1300 | | | | | | | 0\n+ company2 | 07-09-2023 | 1200 | | | | | | | 0\n company2 | 07-10-2023 | 1300 | | | | | | | 0\n (20 rows)\n \n@@ -732,16 +732,16 @@\n tdate | price | first_value | count \n ------------+-------+-------------+-------\n 07-01-2023 | 100 | 07-01-2023 | 4\n- 07-02-2023 | 200 | | \n- 07-03-2023 | 150 | | \n- 07-04-2023 | 140 | | \n+ 07-02-2023 | 200 | | 0\n+ 07-03-2023 | 150 | | 0\n+ 07-04-2023 | 140 | | 0\n 07-05-2023 | 150 | | 0\n 07-06-2023 | 90 | | 0\n 07-07-2023 | 110 | | 0\n 07-01-2023 | 50 | 07-01-2023 | 4\n- 07-02-2023 | 2000 | | \n- 07-03-2023 | 1500 | | \n- 07-04-2023 | 1400 | | \n+ 07-02-2023 | 2000 | | 0\n+ 07-03-2023 | 1500 | | 0\n+ 07-04-2023 | 1400 | | 0\n 07-05-2023 | 1500 | | 0\n 07-06-2023 | 60 | | 0\n 07-07-2023 | 1100 | | 0",
"msg_date": "Fri, 26 Apr 2024 15:09:32 +0900 (JST)",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Row pattern recognition"
},
{
"msg_contents": ">> I think returning zero would match Vik's explanation upthread [1],\n>> yes. Unfortunately I don't have a spec handy to double-check for\n>> myself right now.\n> \n> Ok. I believe you and Vik are correct.\n> I am modifying the patch in this direction.\n\nAttached are the v17 patches in the direction. Differences from v16\ninclude:\n\n- In 0005 executor patch, aggregation in RPR always restarts for each\n row. This is necessary to run aggregates on no matching (due to\n skipping) or empty matching (due to no pattern variables matches)\n rows to produce NULL (most aggregates) or 0 (count) properly. In v16\n I had a hack using a flag to force the aggregation results to be\n NULL in case of no match or empty match in\n finalize_windowaggregate(). v17 eliminates the dirty hack.\n\n- 0006 docs and 0007 test patches are adjusted to reflect the RPR\n output chages in 0005.\n \nBest reagards,\n--\nTatsuo Ishii\nSRA OSS LLC\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp",
"msg_date": "Sun, 28 Apr 2024 20:28:26 +0900 (JST)",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Row pattern recognition"
},
{
"msg_contents": "Attached are the v18 patches. To fix conflicts due to recent commit:\n\n7d2c7f08d9 Fix query pullup issue with WindowClause runCondition\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS LLC\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp",
"msg_date": "Sat, 11 May 2024 16:23:07 +0900 (JST)",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Row pattern recognition"
},
{
"msg_contents": "Attached are the v19 patches. Changes from v18 include:\n\n0002:\n- add a check whether DEFINE clause includes subqueries. If so, error out.\n0007:\n- fix wrong test (row pattern definition variable name must not appear\n more than once)\n- remove unnessary test (undefined define variable is not allowed).\n We have already allowed the undefined variables.\n- add tests: subqueries and aggregates in DEFINE clause are not\n supported. The standard allows them but I have not implemented them\n yet.\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS LLC\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp",
"msg_date": "Wed, 15 May 2024 09:02:03 +0900 (JST)",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Row pattern recognition"
},
{
"msg_contents": "Attached are the v20 patches. Just rebased.\n(The conflict was in 0001 patch.)\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS LLC\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp",
"msg_date": "Fri, 24 May 2024 11:39:19 +0900 (JST)",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Row pattern recognition"
},
{
"msg_contents": "I gave a talk on RPR in PGConf.dev 2024.\nhttps://www.pgevents.ca/events/pgconfdev2024/schedule/session/114-implementing-row-pattern-recognition/\n(Slides are available from the link).\n\nVik Faring and Jacob Champion were one of the audiences and we had a\nsmall discussion after the talk. We continued the discussion off list\non how to move forward the RPR implementation project. One of the\nideas is, to summarize what are in the patch and what are not from the\nSQL standard specification's point of view. This should help us to\nreach the consensus regarding \"minimum viable\" feature set if we want\nto bring the patch in upcoming PostgreSQL v18.\n\nHere is the first cut of the document. Comments/feedback are welcome.\n\n-------------------------------------------------------------------------\nThis memo describes the current status of implementation of SQL/RPR\n(Row Pattern Recognition), as of June 13, 2024 (the latest patch is v20).\n\n- RPR in FROM clause and WINDOW clause\n\nThe SQL standard defines two features regarding SQL/RPR - R010 (RPR in\nFROM clause) and R020 (RPR in WINDOW clause). Only R020 is\nimplemented. From now on, we discuss on R020.\n\n- Overview of R020 syntax\n\nWINDOW window_name AS (\n[ PARTITION BY ... ]\n[ ORDER BY... ]\n[ MEASURES ... ]\nROWS BETWEEN CURRENT ROW AND ...\n[ AFTER MATCH SKIP ... ]\n[ INITIAL|SEEK ]\nPATTERN (...)\n[ SUBSET ... ]\nDEFINE ...\n)\n\n-- PARTITION BY and ORDER BY are not specific to RPR and has been\n already there in current PostgreSQL.\n\n-- What are (partially) implemented:\n\nAFTER MATCH SKIP\nINITIAL|SEEK\nPATTERN\nDEFINE\n\n-- What are not implemented at all:\nMEASURES\nSUBSET\n\nFollowings are detailed status of the each clause.\n\n- AFTER MATCH SKIP\n\n-- Implemented:\nAFTER MATCH SKIP TO NEXT ROW\nAFTER MATCH SKIP PAST LAST ROW\n\n-- Not implemented:\nAFTER MATCH SKIP TO FIRST|LAST pattern_variable\n\n- INITIAL|SEEK\n\n--Implemented:\nINITIAL\n\n-- Not implemented:\nSEEK\n\n- DEFINE\n\n-- Partially implemented row pattern navigation operations are PREV and\n NEXT. FIRST and LAST are not implemented.\n\n-- The standard says PREV and NEXT accepts optional argument \"offset\"\n but it's not implemented.\n\n-- The standard says the row pattern navigation operations can be\n nested but it's not implemented.\n\n-- CLLASSIFIER, use of aggregate functions and subqueries in DEFINE\n clause are not implemented.\n\n- PATTERN\n\n-- Followings are implemented:\n+: 1 or more rows\n*: 0 or more rows\n\n-- Followings are not implemented:\n?: 0 or 1 row\nA | B: OR condition\n(A B): grouping\n{n}: n rows\n{n,}: n or more rows\n{n,m}: greater or equal to n rows and less than or equal to m rows\n{,m}: more than 0 and less than or equal to m rows\n-------------------------------------------------------------------------\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS LLC\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp\n\n\n",
"msg_date": "Thu, 13 Jun 2024 09:25:01 +0900 (JST)",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Row pattern recognition"
},
{
"msg_contents": "Attached are the v21 patches. Just rebased.\n(The conflict was in 0001 patch.)\n\nThe 0008 patch is just for debugging purpose. You can ignore it.\nThis hasn't been changed, but I would like to notice just in case.\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS K.K.\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp",
"msg_date": "Mon, 26 Aug 2024 13:39:47 +0900 (JST)",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Row pattern recognition"
},
{
"msg_contents": "Attached are the v22 patches. Just rebased. The conflict was in 0001\npatch due to commit 89f908a6d0 \"Add temporal FOREIGN KEY contraints\".\n\nThe 0008 patch is just for debugging purpose. You can ignore it.\nThis hasn't been changed, but I would like to notice just in case.\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS K.K.\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp",
"msg_date": "Thu, 19 Sep 2024 13:59:47 +0900 (JST)",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Row pattern recognition"
},
{
"msg_contents": "On Wed, Sep 18, 2024 at 10:00 PM Tatsuo Ishii <[email protected]> wrote:\n>\n> Attached are the v22 patches. Just rebased.\n\nThanks!\n\nWith some bigger partitions, I hit an `ERROR: wrong pos: 1024`. A\ntest that reproduces it is attached.\n\nWhile playing with the feature, I've been trying to identify runs of\nmatched rows by eye. But it's pretty difficult -- the best I can do is\nmanually count rows using a `COUNT(*) OVER ...`. So I'd like to\nsuggest that MEASURES be part of the eventual v1 feature, if there's\nno other way to determine whether a row was skipped by a previous\nmatch. (That was less obvious to me before the fix in v17.)\n\n--\n\nI've been working on an implementation [1] of SQL/RPR's \"parenthesized\nlanguage\" and preferment order. (These are defined in SQL/Foundation\n2023, section 9.41.) The tool gives you a way to figure out, for a\ngiven pattern, what matches are supposed to be attempted and in what\norder:\n\n $ ./src/test/modules/rpr/rpr_prefer \"a b? a\"\n ( ( a ( b ) ) a )\n ( ( a ( ) ) a )\n\nMany simple patterns result in an infinite set of possible matches. So\nif you use an unbounded quantifiers, you have to also use --max-rows\nto limit the size of the hypothetical window frame:\n\n $ ./src/test/modules/rpr/rpr_prefer --max-rows 2 \"^ PERMUTE(a*, b+)? $\"\n ( ( ^ ( ( ( ( ( ( a ) ( b ) ) ) - ) ) ) ) $ )\n ( ( ^ ( ( ( ( ( ( ) ( b b ) ) ) - ) ) ) ) $ )\n ( ( ^ ( ( ( ( ( ( ) ( b ) ) ) - ) ) ) ) $ )\n ( ( ^ ( ( ( - ( ( ( b b ) ( ) ) ) ) ) ) ) $ )\n ( ( ^ ( ( ( - ( ( ( b ) ( a ) ) ) ) ) ) ) $ )\n ( ( ^ ( ( ( - ( ( ( b ) ( ) ) ) ) ) ) ) $ )\n ( ( ^ ( ) ) $ )\n\nI've found this useful to check my personal understanding of the spec\nand the match behavior, but it could also potentially be used to\ngenerate test cases, or to help users debug their own patterns. For\nexample, a pattern that has a bunch of duplicate sequences in its PL\nis probably not very well optimized:\n\n $ ./src/test/modules/rpr/rpr_prefer --max-rows 4 \"a+ a+\"\n ( ( a a a ) ( a ) )\n ( ( a a ) ( a a ) )\n ( ( a a ) ( a ) )\n ( ( a ) ( a a a ) )\n ( ( a ) ( a a ) )\n ( ( a ) ( a ) )\n\nAnd patterns with catastrophic backtracking behavior tend to show a\n\"sawtooth\" pattern in the output, with a huge number of potential\nmatches being generated relative to the number of rows in the frame.\n\nMy implementation is really messy -- it leaks memory like a sieve, and\nI cannibalized the parser from ECPG, which just ended up as an\nexercise in teaching myself flex/bison. But if there's interest in\nhaving this kind of tool in the tree, I can work on making it\nreviewable. Either way, I should be able to use it to double-check\nmore complicated test cases.\n\nA while back [2], you were wondering whether our Bison implementation\nwould be able to parse the PATTERN grammar directly. I think this tool\nproves that the answer is \"yes\", but PERMUTE in particular causes a\nshift/reduce conflict. To fix it, I applied the same precedence\nworkaround that we use for CUBE and ROLLUP.\n\nThanks again!\n--Jacob\n\n[1] https://github.com/jchampio/postgres/tree/dev/rpr\n[2] https://www.postgresql.org/message-id/20230721.151648.412762379013769790.t-ishii%40sranhm.sra.co.jp",
"msg_date": "Fri, 27 Sep 2024 15:27:07 -0700",
"msg_from": "Jacob Champion <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Row pattern recognition"
},
{
"msg_contents": "> With some bigger partitions, I hit an `ERROR: wrong pos: 1024`. A\n> test that reproduces it is attached.\n\nThanks for the report. Attached is a patch on top of v22 patches to\nfix the bug. We keep info in an array\n(WindowAggState.reduced_frame_map) to track the rpr pattern match\nresult status for each row in a frame. If pattern match succeeds, the\nfirst row in the reduced frame has status RF_FRAME_HEAD and rest of\nrows have RF_SKIPPED state. A row with pattern match failure state has\nRF_UNMATCHED state. Any row which is never tested has state\nRF_NOT_DETERMINED. At begining the map is initialized with 1024\nentries with all RF_NOT_DETERMINED state. Eventually they are replaced\nwith other than RF_NOT_DETERMINED state. In the error case rpr engine\ntries to find 1024 th row's state in the map and failed because the\nrow's state has not been tested yet. I think we should treat it as\nRF_NOT_DETERMINED rather than an error. Attached patch does it.\n\n> While playing with the feature, I've been trying to identify runs of\n> matched rows by eye. But it's pretty difficult -- the best I can do is\n> manually count rows using a `COUNT(*) OVER ...`. So I'd like to\n> suggest that MEASURES be part of the eventual v1 feature, if there's\n> no other way to determine whether a row was skipped by a previous\n> match. (That was less obvious to me before the fix in v17.)\n\nI think implementing MEASURES is challenging. Especially we need to\nfind how our parser accepts \"colname OVER\nwindow_definition\". Currently PostgreSQL's parser only accepts \"func()\nOVER window_definition\" Even it is technically possible, I think the\nv1 patch size will become much larger than now due to this.\n\nHow about inventing new window function that returns row state instead?\n\n- match found (yes/no)\n- skipped due to AFTER MATCH SKIP PAST LAST ROW (no match)\n\nFor the rest of the mail I need more time to understand. I will reply\nback after studying it. For now, I just want to thank you for the\nvaluable information!\n\n> --\n> \n> I've been working on an implementation [1] of SQL/RPR's \"parenthesized\n> language\" and preferment order. (These are defined in SQL/Foundation\n> 2023, section 9.41.) The tool gives you a way to figure out, for a\n> given pattern, what matches are supposed to be attempted and in what\n> order:\n> \n> $ ./src/test/modules/rpr/rpr_prefer \"a b? a\"\n> ( ( a ( b ) ) a )\n> ( ( a ( ) ) a )\n> \n> Many simple patterns result in an infinite set of possible matches. So\n> if you use an unbounded quantifiers, you have to also use --max-rows\n> to limit the size of the hypothetical window frame:\n> \n> $ ./src/test/modules/rpr/rpr_prefer --max-rows 2 \"^ PERMUTE(a*, b+)? $\"\n> ( ( ^ ( ( ( ( ( ( a ) ( b ) ) ) - ) ) ) ) $ )\n> ( ( ^ ( ( ( ( ( ( ) ( b b ) ) ) - ) ) ) ) $ )\n> ( ( ^ ( ( ( ( ( ( ) ( b ) ) ) - ) ) ) ) $ )\n> ( ( ^ ( ( ( - ( ( ( b b ) ( ) ) ) ) ) ) ) $ )\n> ( ( ^ ( ( ( - ( ( ( b ) ( a ) ) ) ) ) ) ) $ )\n> ( ( ^ ( ( ( - ( ( ( b ) ( ) ) ) ) ) ) ) $ )\n> ( ( ^ ( ) ) $ )\n> \n> I've found this useful to check my personal understanding of the spec\n> and the match behavior, but it could also potentially be used to\n> generate test cases, or to help users debug their own patterns. 
For\n> example, a pattern that has a bunch of duplicate sequences in its PL\n> is probably not very well optimized:\n> \n> $ ./src/test/modules/rpr/rpr_prefer --max-rows 4 \"a+ a+\"\n> ( ( a a a ) ( a ) )\n> ( ( a a ) ( a a ) )\n> ( ( a a ) ( a ) )\n> ( ( a ) ( a a a ) )\n> ( ( a ) ( a a ) )\n> ( ( a ) ( a ) )\n> \n> And patterns with catastrophic backtracking behavior tend to show a\n> \"sawtooth\" pattern in the output, with a huge number of potential\n> matches being generated relative to the number of rows in the frame.\n> \n> My implementation is really messy -- it leaks memory like a sieve, and\n> I cannibalized the parser from ECPG, which just ended up as an\n> exercise in teaching myself flex/bison. But if there's interest in\n> having this kind of tool in the tree, I can work on making it\n> reviewable. Either way, I should be able to use it to double-check\n> more complicated test cases.\n> \n> A while back [2], you were wondering whether our Bison implementation\n> would be able to parse the PATTERN grammar directly. I think this tool\n> proves that the answer is \"yes\", but PERMUTE in particular causes a\n> shift/reduce conflict. To fix it, I applied the same precedence\n> workaround that we use for CUBE and ROLLUP.\n> \n> Thanks again!\n> --Jacob\n> \n> [1] https://github.com/jchampio/postgres/tree/dev/rpr\n> [2] https://www.postgresql.org/message-id/20230721.151648.412762379013769790.t-ishii%40sranhm.sra.co.jp\n\ndiff --git a/src/backend/executor/nodeWindowAgg.c b/src/backend/executor/nodeWindowAgg.c\nindex e46a3dd1b7..4a5a6fbf07 100644\n--- a/src/backend/executor/nodeWindowAgg.c\n+++ b/src/backend/executor/nodeWindowAgg.c\n@@ -4148,9 +4148,15 @@ int\n get_reduced_frame_map(WindowAggState *winstate, int64 pos)\n {\n \tAssert(winstate->reduced_frame_map != NULL);\n+\tAssert(pos >= 0);\n \n-\tif (pos < 0 || pos >= winstate->alloc_sz)\n-\t\telog(ERROR, \"wrong pos: \" INT64_FORMAT, pos);\n+\t/*\n+\t * If pos is not in the reduced frame map, it means that any info\n+\t * regarding the pos has not been registered yet. So we return\n+\t * RF_NOT_DETERMINED.\n+\t */\n+\tif (pos >= winstate->alloc_sz)\n+\t\treturn RF_NOT_DETERMINED;\n \n \treturn winstate->reduced_frame_map[pos];\n }",
"msg_date": "Sat, 28 Sep 2024 19:43:59 +0900 (JST)",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Row pattern recognition"
},
{
"msg_contents": ">> While playing with the feature, I've been trying to identify runs of\n>> matched rows by eye. But it's pretty difficult -- the best I can do is\n>> manually count rows using a `COUNT(*) OVER ...`. So I'd like to\n>> suggest that MEASURES be part of the eventual v1 feature, if there's\n>> no other way to determine whether a row was skipped by a previous\n>> match. (That was less obvious to me before the fix in v17.)\n> \n> I think implementing MEASURES is challenging. Especially we need to\n> find how our parser accepts \"colname OVER\n> window_definition\". Currently PostgreSQL's parser only accepts \"func()\n> OVER window_definition\" Even it is technically possible, I think the\n> v1 patch size will become much larger than now due to this.\n> \n> How about inventing new window function that returns row state instead?\n> \n> - match found (yes/no)\n> - skipped due to AFTER MATCH SKIP PAST LAST ROW (no match)\n\nPlease disregard my proposal. Even if we make such a function, it\nwould always return NULL for unmatched rows or skipped rows, and I\nthink the function does not solve your problem.\n\nHowever, I wonder if supporting MEASURES solves the problem either\nbecause any columns defined by MEASURES will return NULL except the\nfirst row in a reduced frame. Can you please show an example how to\nidentify runs of matched rows using MEASURES?\n\n> For the rest of the mail I need more time to understand. I will reply\n> back after studying it. For now, I just want to thank you for the\n> valuable information!\n> \n>> --\n>> \n>> I've been working on an implementation [1] of SQL/RPR's \"parenthesized\n>> language\" and preferment order. (These are defined in SQL/Foundation\n>> 2023, section 9.41.) The tool gives you a way to figure out, for a\n>> given pattern, what matches are supposed to be attempted and in what\n>> order:\n>> \n>> $ ./src/test/modules/rpr/rpr_prefer \"a b? a\"\n>> ( ( a ( b ) ) a )\n>> ( ( a ( ) ) a )\n>> \n>> Many simple patterns result in an infinite set of possible matches. So\n>> if you use an unbounded quantifiers, you have to also use --max-rows\n>> to limit the size of the hypothetical window frame:\n>> \n>> $ ./src/test/modules/rpr/rpr_prefer --max-rows 2 \"^ PERMUTE(a*, b+)? $\"\n>> ( ( ^ ( ( ( ( ( ( a ) ( b ) ) ) - ) ) ) ) $ )\n>> ( ( ^ ( ( ( ( ( ( ) ( b b ) ) ) - ) ) ) ) $ )\n>> ( ( ^ ( ( ( ( ( ( ) ( b ) ) ) - ) ) ) ) $ )\n>> ( ( ^ ( ( ( - ( ( ( b b ) ( ) ) ) ) ) ) ) $ )\n>> ( ( ^ ( ( ( - ( ( ( b ) ( a ) ) ) ) ) ) ) $ )\n>> ( ( ^ ( ( ( - ( ( ( b ) ( ) ) ) ) ) ) ) $ )\n>> ( ( ^ ( ) ) $ )\n\nI wonder how Oracle solves the problem (an infinite set of possible\nmatches) without using \"--max-rows\" or something like that because in\nmy understanding Oracle supports the regular expressions and PERMUTE.\n\n>> I've found this useful to check my personal understanding of the spec\n>> and the match behavior, but it could also potentially be used to\n>> generate test cases, or to help users debug their own patterns. 
For\n>> example, a pattern that has a bunch of duplicate sequences in its PL\n>> is probably not very well optimized:\n>> \n>> $ ./src/test/modules/rpr/rpr_prefer --max-rows 4 \"a+ a+\"\n>> ( ( a a a ) ( a ) )\n>> ( ( a a ) ( a a ) )\n>> ( ( a a ) ( a ) )\n>> ( ( a ) ( a a a ) )\n>> ( ( a ) ( a a ) )\n>> ( ( a ) ( a ) )\n>> \n>> And patterns with catastrophic backtracking behavior tend to show a\n>> \"sawtooth\" pattern in the output, with a huge number of potential\n>> matches being generated relative to the number of rows in the frame.\n>> \n>> My implementation is really messy -- it leaks memory like a sieve, and\n>> I cannibalized the parser from ECPG, which just ended up as an\n>> exercise in teaching myself flex/bison. But if there's interest in\n>> having this kind of tool in the tree, I can work on making it\n>> reviewable. Either way, I should be able to use it to double-check\n>> more complicated test cases.\n\nI definitely am interested in the tool!\n\n>> A while back [2], you were wondering whether our Bison implementation\n>> would be able to parse the PATTERN grammar directly. I think this tool\n>> proves that the answer is \"yes\", but PERMUTE in particular causes a\n>> shift/reduce conflict. To fix it, I applied the same precedence\n>> workaround that we use for CUBE and ROLLUP.\n\nThat's a good news!\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS K.K.\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp\n\n\n",
"msg_date": "Mon, 30 Sep 2024 09:07:51 +0900 (JST)",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Row pattern recognition"
}
] |
[
{
"msg_contents": "Hi,\n\nYuya Watari presented a series of patches, with the objective of improving\nthe Bitmapset [1].\nAfter reading through the patches, I saw a lot of good ideas and thought\nI'd help.\nUnfortunately, my suggestions were not well received.\nEven so, I decided to work on these patches and see what could be improved.\n\nEventually it arrived at the attached patch, which I'm naming v7, because\nof the sequence it had established.\n\nThose who follow the other thread will see that the original patch actually\nimproves the overall performance of Bitmapset,\nbut I believe there is room for further improvement.\n\nI ran the same tests provided by Yuya Watari and the results are attached.\nBoth on Windows and Linux Ubuntu, the performance of v7 outperforms head\nand v4.\nAt Ubuntu 64 bits:\nv7 outperforms v4 by 29% (Query A)\nv7 outperforms v4 by 19% (Query B)\n\nAt Windows 64 bits:\nv7 outperforms v4 by 22% (Query A)\nv7 outperforms v4 by 33% (Query B)\n\nI believe patch v7 leaves the Bitmapset in good shape and readable, well\nwritten.\n\nQuestions arose regarding possible regression when using the backward loop,\nbut in all the tests I ran this version with backward performed better.\nPossibly because of the smaller number of variables and the efficient test\n(--i <= 0), which both the msvc and gcc compilers successfully optimize.\n\nIf the v7 version with loop forward performs worse on x86_64 cpus,\nI don't see how it will perform better on other architectures, since the\nvast majority of modern ones and with great cache support.\n\nAnother question is related to an alleged \"spurius test\", whose objective\nis to avoid test and set instructions for each element of the array.\nAgain I ran the tests without the test and the performance was worse,\nshowing its value.\n\nregards,\nRanier Vilela\n\n[1]\nhttps://www.postgresql.org/message-id/CAJ2pMkZ0HfhfA0QNa_msknL%3D_-PavZmPHRWnW7yOb3_PWUoB%2Bg%40mail.gmail.com",
"msg_date": "Sun, 25 Jun 2023 09:28:36 -0300",
"msg_from": "Ranier Vilela <[email protected]>",
"msg_from_op": true,
"msg_subject": "Speeding Up Bitmapset"
},
{
"msg_contents": "On Mon, 26 Jun 2023 at 00:29, Ranier Vilela <[email protected]> wrote:\n> Yuya Watari presented a series of patches, with the objective of improving the Bitmapset [1].\n> After reading through the patches, I saw a lot of good ideas and thought I'd help.\n> Unfortunately, my suggestions were not well received.\n\nFor the future, I recommend if you do have ideas that you wish to\nconvey and think a patch is the easiest way to do that, then it's best\nto send those with an extension that won't cause the CFbot to pickup\nyour patch instead. You patch did and still does seem to have a load\nof extra changes that are unjustified by your claims. There are quite\na lot of white space changes going on. There's no reason in the world\nthat those will speed up Bitmapsets, so why include them?\n\n> v7 outperforms v4 by 29% (Query A)\n\nI tested v7 with query-a and I also see additional gains. However,\nit's entirely down to your changes to bms_is_subset(). It seems, by\nchance, with the given Bitmapsets that looping backwards for the given\nsets is able to determine the result more quickly\n\nHere's some results from \"perf top\"\n\nquery-a\nv4\n\n 30.08% postgres [.] bms_is_subset\n 15.84% postgres [.] create_join_clause\n 13.54% postgres [.] bms_equal\n 11.03% postgres [.] get_eclass_for_sort_expr\n 8.53% postgres [.] generate_implied_equalities_for_column\n 3.11% postgres [.] generate_join_implied_equalities_normal\n 1.03% postgres [.] add_child_rel_equivalences\n 0.82% postgres [.] SearchCatCacheInternal\n 0.73% postgres [.] AllocSetAlloc\n 0.53% postgres [.] find_ec_member_matching_expr\n 0.40% postgres [.] hash_search_with_hash_value\n 0.36% postgres [.] palloc\n 0.36% postgres [.] palloc0\n\nlatency average = 452.480 ms\n\nv7\n 20.51% postgres [.] create_join_clause\n 15.33% postgres [.] bms_equal\n 14.17% postgres [.] get_eclass_for_sort_expr\n 12.05% postgres [.] bms_is_subset\n 10.40% postgres [.] generate_implied_equalities_for_column\n 3.90% postgres [.] generate_join_implied_equalities_normal\n 1.34% postgres [.] add_child_rel_equivalences\n 1.06% postgres [.] AllocSetAlloc\n 1.00% postgres [.] SearchCatCacheInternal\n 0.72% postgres [.] find_ec_member_matching_expr\n 0.58% postgres [.] palloc0\n 0.49% postgres [.] palloc\n 0.47% postgres [.] hash_search_with_hash_value\n 0.44% libc.so.6 [.] __memmove_avx_unaligned_erms\n\n\nlatency average = 350.543 ms\n\nmodified v7's bms_is_subset to go forwards then I get latency average\n= 445.987 ms.\n\nIf I add some debugging to bms_is_subset to have it record how many\nwords it checks, I see:\n\npostgres=# select sum(nwords) from forward;\n sum\n-----------\n 181490660\n(1 row)\n\npostgres=# select sum(nwords) from backwards;\n sum\n----------\n 11322564\n(1 row)\n\nSo, it took about 181 million loops in bms_is_member to plan query-a\nwhen looping forward, but only 11 million when looping backwards.\n\nI think unless you've got some reason that you're able to justify why\nwe're always more likely to have to perform fewer word checks in\nbms_is_subset() by looping backwards instead of forwards then I think\nthe performance gains you're showing here only happen to show better\nresults due to the given workload. 
It's just as easy to imagine that\nyou'll apply the equivalent slowdown for some other workload where the\ninitial words differ but all the remaining words all match.\n\nThis seems equivalent to someone suggesting that looping backwards in\nlist_member() is better than looping forwards because it'll find the\nmember more quickly. I can't imagine us ever considering that would\nbe a good change to make and I think the same for bms_is_member().\n\nDavid\n\n\n",
"msg_date": "Mon, 26 Jun 2023 08:49:04 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Speeding Up Bitmapset"
},
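To make the direction-dependence concrete, here is a simplified sketch of a word-by-word subset test (illustration only, not the actual bms_is_subset() code; among other things it assumes both sets have the same number of words). Both variants bail out at the first word of a that has bits not present in b, so which scan direction finishes sooner depends entirely on where such a word happens to sit for the given workload.

#include <stdbool.h>
#include <stdint.h>

typedef uint64_t bitmapword;

/* Is a a subset of b?  Simplified: both sets have exactly nwords words. */
static bool
is_subset_forward(const bitmapword *a, const bitmapword *b, int nwords)
{
	for (int i = 0; i < nwords; i++)
	{
		/* a has a member that b lacks: early exit */
		if ((a[i] & ~b[i]) != 0)
			return false;
	}
	return true;
}

static bool
is_subset_backward(const bitmapword *a, const bitmapword *b, int nwords)
{
	for (int i = nwords - 1; i >= 0; i--)
	{
		/* same test, opposite scan order */
		if ((a[i] & ~b[i]) != 0)
			return false;
	}
	return true;
}

With query-a the mismatching words evidently cluster near the high end of the sets, which is why the backward scan does so much less work there; a workload whose mismatches sit near the low end would favour the forward scan by the same margin.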
{
"msg_contents": "Em dom., 25 de jun. de 2023 às 17:49, David Rowley <[email protected]>\nescreveu:\n\n> There's no reason in the world\n> that those will speed up Bitmapsets, so why include them?\n>\nOf course optimization is the most important thing,\nbut since you're going to touch the source, why not make it more readable.\n\n\n>\n> > v7 outperforms v4 by 29% (Query A)\n>\n> I tested v7 with query-a and I also see additional gains. However,\n> it's entirely down to your changes to bms_is_subset(). It seems, by\n> chance, with the given Bitmapsets that looping backwards for the given\n> sets is able to determine the result more quickly\n>\nI redid the tests and it seems that most of the difference comes from\nbms_subset.\n\n\n>\n> Here's some results from \"perf top\"\n>\n> query-a\n> v4\n>\n> 30.08% postgres [.] bms_is_subset\n> 15.84% postgres [.] create_join_clause\n> 13.54% postgres [.] bms_equal\n> 11.03% postgres [.] get_eclass_for_sort_expr\n> 8.53% postgres [.] generate_implied_equalities_for_column\n> 3.11% postgres [.] generate_join_implied_equalities_normal\n> 1.03% postgres [.] add_child_rel_equivalences\n> 0.82% postgres [.] SearchCatCacheInternal\n> 0.73% postgres [.] AllocSetAlloc\n> 0.53% postgres [.] find_ec_member_matching_expr\n> 0.40% postgres [.] hash_search_with_hash_value\n> 0.36% postgres [.] palloc\n> 0.36% postgres [.] palloc0\n>\n> latency average = 452.480 ms\n>\n> v7\n> 20.51% postgres [.] create_join_clause\n> 15.33% postgres [.] bms_equal\n> 14.17% postgres [.] get_eclass_for_sort_expr\n> 12.05% postgres [.] bms_is_subset\n> 10.40% postgres [.] generate_implied_equalities_for_column\n> 3.90% postgres [.] generate_join_implied_equalities_normal\n> 1.34% postgres [.] add_child_rel_equivalences\n> 1.06% postgres [.] AllocSetAlloc\n> 1.00% postgres [.] SearchCatCacheInternal\n> 0.72% postgres [.] find_ec_member_matching_expr\n> 0.58% postgres [.] palloc0\n> 0.49% postgres [.] palloc\n> 0.47% postgres [.] hash_search_with_hash_value\n> 0.44% libc.so.6 [.] __memmove_avx_unaligned_erms\n>\n>\n> latency average = 350.543 ms\n>\n> modified v7's bms_is_subset to go forwards then I get latency average\n> = 445.987 ms.\n>\n> If I add some debugging to bms_is_subset to have it record how many\n> words it checks, I see:\n>\n> postgres=# select sum(nwords) from forward;\n> sum\n> -----------\n> 181490660\n> (1 row)\n>\n> postgres=# select sum(nwords) from backwards;\n> sum\n> ----------\n> 11322564\n> (1 row)\n>\n> So, it took about 181 million loops in bms_is_member to plan query-a\n> when looping forward, but only 11 million when looping backwards.\n>\n> I think unless you've got some reason that you're able to justify why\n> we're always more likely to have to perform fewer word checks in\n> bms_is_subset() by looping backwards instead of forwards then I think\n> the performance gains you're showing here only happen to show better\n> results due to the given workload. 
It's just as easy to imagine that\n> you'll apply the equivalent slowdown for some other workload where the\n> initial words differ but all the remaining words all match.\n>\nHave you seen bms_compare?\nFor some reason someone thought it would be better to loop backward the\narray.\n\nSince bms_subset performed very well with backward, I think that's a good\nreason to leave it as bms_compare.\nAs in most cases, the array size is small, in general in both modes the\nperformance will be equivalent.\n\nAnyway, I made a new patch v8, based on v4 but with some changes that I\nbelieve improve it.\n\nWindows 64 bits\nmsvc 2019 64 bits\n\n== Query A ==\npsql -U postgres -f c:\\postgres_bench\\tmp\\bitmapset\\create-tables-a.sql\npsql -U postgres -f c:\\postgres_bench\\tmp\\bitmapset\\query-a.sql\n=============\n\npatched v4:\nTime: 2305,445 ms (00:02,305)\nTime: 2185,972 ms (00:02,186)\nTime: 2177,434 ms (00:02,177)\nTime: 2169,883 ms (00:02,170)\n\npatched v8:\nTime: 2143,532 ms (00:02,144)\nTime: 2140,313 ms (00:02,140)\nTime: 2138,481 ms (00:02,138)\nTime: 2130,290 ms (00:02,130)\n\n== Query B ==\npsql -U postgres -f c:\\postgres_bench\\tmp\\bitmapset\\create-tables-b.sql\npsql -U postgres -f c:\\postgres_bench\\tmp\\bitmapset\\query-b.sql\n\npatched v4:\nTime: 2684,360 ms (00:02,684)\nTime: 2482,571 ms (00:02,483)\nTime: 2452,699 ms (00:02,453)\nTime: 2465,223 ms (00:02,465)\n\npatched v8:\nTime: 2493,281 ms (00:02,493)\nTime: 2490,090 ms (00:02,490)\nTime: 2432,515 ms (00:02,433)\nTime: 2426,860 ms (00:02,427)\n\n\nlinux Ubuntu 64 bit\ngcc 64 bits\n\n== Query A ==\n/usr/local/pgsql/bin/psql -U postgres -f\n/home/ranier/Documentos/benchmarks/bitmapset/create-tables-a.sql\n/usr/local/pgsql/bin/psql -U postgres -f\n/home/ranier/Documentos/benchmarks/bitmapset/query-a.sql\n=============\n\npatched v4:\nTime: 933,181 ms\nTime: 931,520 ms\nTime: 906,496 ms\nTime: 872,446 ms\n\npatched v8:\nTime: 937,079 ms\nTime: 930,408 ms\nTime: 865,548 ms\nTime: 865,382 ms\n\n== Query B ==\n/usr/local/pgsql/bin/psql -U postgres -f\n/home/ranier/Documentos/benchmarks/bitmapset/create-tables-b.sql\n/usr/local/pgsql/bin/psql -U postgres -f\n/home/ranier/Documentos/benchmarks/bitmapset/query-b.sql\n\npatched v4:\nTime: 1581,317 ms (00:01,581)\nTime: 1568,371 ms (00:01,568)\nTime: 1468,036 ms (00:01,468)\nTime: 1445,698 ms (00:01,446)\n\npatched v8:\nTime: 1437,997 ms (00:01,438)\nTime: 1437,435 ms (00:01,437)\nTime: 1440,422 ms (00:01,440)\nTime: 1436,112 ms (00:01,436)\n\nregards,\nRanier Vilela",
"msg_date": "Sun, 25 Jun 2023 21:55:04 -0300",
"msg_from": "Ranier Vilela <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Speeding Up Bitmapset"
},
{
"msg_contents": "On Mon, 26 Jun 2023 at 12:55, Ranier Vilela <[email protected]> wrote:\n> Have you seen bms_compare?\n> For some reason someone thought it would be better to loop backward the array.\n\nThat's nothing to do with efficiency. It's related to behaviour. Have\na look at the function's header comment, it's trying to find the set\nwith the highest value member. It wouldn't make much sense to start\nat the lowest-order words for that.\n\nDavid\n\n\n",
"msg_date": "Mon, 26 Jun 2023 14:38:43 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Speeding Up Bitmapset"
},
{
"msg_contents": "On Mon, 26 Jun 2023 at 12:55, Ranier Vilela <[email protected]> wrote:\n>\n> Em dom., 25 de jun. de 2023 às 17:49, David Rowley <[email protected]> escreveu:\n>>\n>> There's no reason in the world\n>> that those will speed up Bitmapsets, so why include them?\n>\n> Of course optimization is the most important thing,\n> but since you're going to touch the source, why not make it more readable.\n\nHave a look at [1]. In particular \"Reasons your patch might be\nreturned\" and precisely \"Reformatting lines that haven't changed\"\n\nThat entire page is quite valuable and I'd recommend having a read of all of it.\n\nDavid\n\n[1] https://wiki.postgresql.org/wiki/Submitting_a_Patch\n\n\n",
"msg_date": "Mon, 26 Jun 2023 16:20:42 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Speeding Up Bitmapset"
}
] |
[
{
"msg_contents": "Hi all,\n\nJoe has reported me offlist that JumbleQuery() includes a dependency\nto the query text, but we don't use that anymore as the query ID is\ngenerated from the Query structure instead.\n\nAny thoughts about the cleanup attached? But at the same time, this\nis simple and a new thing, so I'd rather clean up that now rather than\nlater. \n\nIt is not urgent, so I am fine to postpone that after beta2 is\nreleased on 17~ if there are any objections to that, of course.\n\nThoughts?\n--\nMichael",
"msg_date": "Mon, 26 Jun 2023 17:44:49 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Clean up JumbleQuery() from query text"
},
{
"msg_contents": "On Mon, Jun 26, 2023 at 05:44:49PM +0900, Michael Paquier wrote:\n> Joe has reported me offlist that JumbleQuery() includes a dependency\n> to the query text, but we don't use that anymore as the query ID is\n> generated from the Query structure instead.\n> \n> Any thoughts about the cleanup attached? But at the same time, this\n> is simple and a new thing, so I'd rather clean up that now rather than\n> later. \n\nLGTM\n\n> It is not urgent, so I am fine to postpone that after beta2 is\n> released on 17~ if there are any objections to that, of course.\n\nEven if extensions are using it, GA for v16 is still a few months away, so\nIMO it's okay to apply it to v16.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 26 Jun 2023 08:51:17 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Clean up JumbleQuery() from query text"
},
{
"msg_contents": "On Mon, Jun 26, 2023 at 08:51:17AM -0700, Nathan Bossart wrote:\n> On Mon, Jun 26, 2023 at 05:44:49PM +0900, Michael Paquier wrote:\n>> It is not urgent, so I am fine to postpone that after beta2 is\n>> released on 17~ if there are any objections to that, of course.\n> \n> Even if extensions are using it, GA for v16 is still a few months away, so\n> IMO it's okay to apply it to v16.\n\nOkay, thanks. I am adding the RMT in CC in case there are any\nobjections, but I guess that this one will be OK even if applied in\nthe next couple of days for 16.\n--\nMichael",
"msg_date": "Tue, 27 Jun 2023 15:26:43 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Clean up JumbleQuery() from query text"
}
] |
[
{
"msg_contents": "This is a small code cleanup patch.\n\nSeveral commands internally assemble command lines to call other \ncommands. This includes initdb, pg_dumpall, and pg_regress. (Also \npg_ctl, but that is different enough that I didn't consider it here.) \nThis has all evolved a bit organically, with fixed-size buffers, and \nvarious optional command-line arguments being injected with \nconfusing-looking code, and the spacing between options handled in \ninconsistent ways. This patch cleans all this up a bit to look clearer \nand be more easily extensible with new arguments and options. We start \neach command with printfPQExpBuffer(), and then append arguments as \nnecessary with appendPQExpBuffer(). Also standardize on using \ninitPQExpBuffer() over createPQExpBuffer() where possible. pg_regress \nuses StringInfo instead of PQExpBuffer, but many of the same ideas apply.",
"msg_date": "Mon, 26 Jun 2023 11:33:28 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Clean up command argument assembly"
},
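As a concrete illustration of the pattern described above (the program and option names here are made up for the example and are not taken from the patch), command lines are built in an extensible buffer, with optional arguments appended one by one and each fragment supplying its own leading space:

#include <stdbool.h>
#include "pqexpbuffer.h"

static void
assemble_command(const char *bindir, const char *datadir, bool debug)
{
	PQExpBufferData cmd;

	initPQExpBuffer(&cmd);

	/* start with the fixed part of the command */
	printfPQExpBuffer(&cmd, "\"%s/some-program\" --mode=fast", bindir);

	/* optional arguments are appended only when needed */
	appendPQExpBuffer(&cmd, " -D \"%s\"", datadir);
	if (debug)
		appendPQExpBufferStr(&cmd, " --debug");

	/* ... hand cmd.data to system() or similar ... */

	termPQExpBuffer(&cmd);
}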
{
"msg_contents": "On 26/06/2023 12:33, Peter Eisentraut wrote:\n> This is a small code cleanup patch.\n> \n> Several commands internally assemble command lines to call other\n> commands. This includes initdb, pg_dumpall, and pg_regress. (Also\n> pg_ctl, but that is different enough that I didn't consider it here.)\n> This has all evolved a bit organically, with fixed-size buffers, and\n> various optional command-line arguments being injected with\n> confusing-looking code, and the spacing between options handled in\n> inconsistent ways. This patch cleans all this up a bit to look clearer\n> and be more easily extensible with new arguments and options.\n\n+1\n\n> We start each command with printfPQExpBuffer(), and then append\n> arguments as necessary with appendPQExpBuffer(). Also standardize on\n> using initPQExpBuffer() over createPQExpBuffer() where possible.\n> pg_regress uses StringInfo instead of PQExpBuffer, but many of the\n> same ideas apply.\n\nIt's a bit bogus to use PQExpBuffer for these. If you run out of memory, \nyou silently get an empty string instead. StringInfo, which exits the \nprocess on OOM, would be more appropriate. We have tons of such \ninappropriate uses of PQExpBuffer in all our client programs, though, so \nI don't insist on fixing this particular case right now.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Tue, 4 Jul 2023 15:14:41 +0300",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Clean up command argument assembly"
},
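For instance (a hypothetical caller, not code from the patch), the behavioural difference mentioned here shows up as follows: with PQExpBuffer an out-of-memory condition merely marks the buffer "broken" and leaves it looking like an empty string, so a caller that cares must check explicitly, whereas the StringInfo routines exit the process on OOM.

#include "pqexpbuffer.h"
#include "common/logging.h"

static void
build_something(void)
{
	PQExpBufferData buf;

	initPQExpBuffer(&buf);
	appendPQExpBufferStr(&buf, "some text");

	/* without this check, OOM would silently yield an empty string */
	if (PQExpBufferDataBroken(buf))
		pg_fatal("out of memory");

	/* ... use buf.data ... */
	termPQExpBuffer(&buf);
}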
{
"msg_contents": "On 04.07.23 14:14, Heikki Linnakangas wrote:\n> On 26/06/2023 12:33, Peter Eisentraut wrote:\n>> This is a small code cleanup patch.\n>>\n>> Several commands internally assemble command lines to call other\n>> commands. This includes initdb, pg_dumpall, and pg_regress. (Also\n>> pg_ctl, but that is different enough that I didn't consider it here.)\n>> This has all evolved a bit organically, with fixed-size buffers, and\n>> various optional command-line arguments being injected with\n>> confusing-looking code, and the spacing between options handled in\n>> inconsistent ways. This patch cleans all this up a bit to look clearer\n>> and be more easily extensible with new arguments and options.\n> \n> +1\n\ncommitted\n\n>> We start each command with printfPQExpBuffer(), and then append\n>> arguments as necessary with appendPQExpBuffer(). Also standardize on\n>> using initPQExpBuffer() over createPQExpBuffer() where possible.\n>> pg_regress uses StringInfo instead of PQExpBuffer, but many of the\n>> same ideas apply.\n> \n> It's a bit bogus to use PQExpBuffer for these. If you run out of memory, \n> you silently get an empty string instead. StringInfo, which exits the \n> process on OOM, would be more appropriate. We have tons of such \n> inappropriate uses of PQExpBuffer in all our client programs, though, so \n> I don't insist on fixing this particular case right now.\n\nInteresting point. But as you say better dealt with as a separate problem.\n\n\n\n",
"msg_date": "Wed, 5 Jul 2023 07:22:33 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Clean up command argument assembly"
},
{
"msg_contents": "On 05.07.23 07:22, Peter Eisentraut wrote:\n>> It's a bit bogus to use PQExpBuffer for these. If you run out of \n>> memory, you silently get an empty string instead. StringInfo, which \n>> exits the process on OOM, would be more appropriate. We have tons of \n>> such inappropriate uses of PQExpBuffer in all our client programs, \n>> though, so I don't insist on fixing this particular case right now.\n> \n> Interesting point. But as you say better dealt with as a separate problem.\n\nI was inspired by a33e17f210 (for pg_rewind) to clean up some more \nfixed-buffer command assembly and replace it with extensible buffers and \nsome more elegant code. And then I remembered this thread, and it's \nreally a continuation of this.\n\nThe first patch deals with pg_regress and pg_isolation_regress. It is \npretty straightforward.\n\nThe second patch deals with pg_upgrade. It would require exporting \nappendPQExpBufferVA() from libpq, which might be overkill. But this \ngets to your point earlier: Should pg_upgrade rather be using \nStringInfo instead of PQExpBuffer? (Then we'd use appendStringInfoVA(), \nwhich already exists, but even if not we wouldn't need to change libpq \nto get it.) Should anything outside of libpq be using PQExpBuffer?",
"msg_date": "Sun, 4 Feb 2024 18:48:08 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Clean up command argument assembly"
},
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n> Should anything outside of libpq be using PQExpBuffer?\n\nPerhaps not. PQExpBuffer's behavior for OOM cases is designed\nspecifically for libpq, where exit-on-OOM is not okay and we\ncan hope to include failure checks wherever needed. For most\nof our application code, we'd much rather just exit-on-OOM\nand not have to think about failure checks at the call sites.\n\nHaving said that, converting stuff like pg_dump would be quite awful\nin terms of code churn and creating a back-patching nightmare.\n\nWould it make any sense to think about having two sets of\nroutines with identical call APIs, but different failure\nbehavior, so that we don't have to touch the callers?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 04 Feb 2024 13:02:03 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Clean up command argument assembly"
}
] |
[
{
"msg_contents": "Hi All,\nEvery pg_decode routine except pg_decode_message that decodes a\ntransactional change, has following block\n/* output BEGIN if we haven't yet */\nif (data->skip_empty_xacts && !txndata->xact_wrote_changes)\n{\npg_output_begin(ctx, data, txn, false);\n}\ntxndata->xact_wrote_changes = true;\n\nBut pg_decode_message() doesn't call pg_output_begin(). If a WAL\nmessage is the first change in the transaction, it won't have a BEGIN\nbefore it. That looks like a bug. Why is pg_decode_message()\nexception?\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Mon, 26 Jun 2023 15:06:56 +0530",
"msg_from": "Ashutosh Bapat <[email protected]>",
"msg_from_op": true,
"msg_subject": "pg_decode_message vs skip_empty_xacts and xact_wrote_changes"
},
{
"msg_contents": "On Mon, Jun 26, 2023 at 3:07 PM Ashutosh Bapat\n<[email protected]> wrote:\n>\n> Hi All,\n> Every pg_decode routine except pg_decode_message that decodes a\n> transactional change, has following block\n> /* output BEGIN if we haven't yet */\n> if (data->skip_empty_xacts && !txndata->xact_wrote_changes)\n> {\n> pg_output_begin(ctx, data, txn, false);\n> }\n> txndata->xact_wrote_changes = true;\n>\n> But pg_decode_message() doesn't call pg_output_begin(). If a WAL\n> message is the first change in the transaction, it won't have a BEGIN\n> before it. That looks like a bug. Why is pg_decode_message()\n> exception?\n>\n\nI can't see a reason why we shouldn't have a similar check for\ntransactional messages. So, agreed this is a bug.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 26 Jun 2023 15:51:10 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_decode_message vs skip_empty_xacts and xact_wrote_changes"
},
{
"msg_contents": "On Mon, 26 Jun 2023 at 15:51, Amit Kapila <[email protected]> wrote:\n>\n> On Mon, Jun 26, 2023 at 3:07 PM Ashutosh Bapat\n> <[email protected]> wrote:\n> >\n> > Hi All,\n> > Every pg_decode routine except pg_decode_message that decodes a\n> > transactional change, has following block\n> > /* output BEGIN if we haven't yet */\n> > if (data->skip_empty_xacts && !txndata->xact_wrote_changes)\n> > {\n> > pg_output_begin(ctx, data, txn, false);\n> > }\n> > txndata->xact_wrote_changes = true;\n> >\n> > But pg_decode_message() doesn't call pg_output_begin(). If a WAL\n> > message is the first change in the transaction, it won't have a BEGIN\n> > before it. That looks like a bug. Why is pg_decode_message()\n> > exception?\n> >\n>\n> I can't see a reason why we shouldn't have a similar check for\n> transactional messages. So, agreed this is a bug.\n\nHere is a patch having the fix for the same. I have not added any\ntests as the existing tests cover this scenario. The same issue is\npresent in back branches too.\nv1-0001-Call-pg_output_begin-in-pg_decode_message-if-it-i_master.patch\ncan be applied on master, PG15 and PG14,\nv1-0001-Call-pg_output_begin-in-pg_decode_message-if-it-i_PG13.patch\npatch can be applied on PG13, PG12 and PG11.\nThoughts?\n\nRegards,\nVignesh",
"msg_date": "Wed, 28 Jun 2023 16:52:38 +0530",
"msg_from": "vignesh C <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_decode_message vs skip_empty_xacts and xact_wrote_changes"
},
{
"msg_contents": "Hi Vignesh,\nThanks for working on this.\n\nOn Wed, Jun 28, 2023 at 4:52 PM vignesh C <[email protected]> wrote:\n>\n> Here is a patch having the fix for the same. I have not added any\n> tests as the existing tests cover this scenario. The same issue is\n> present in back branches too.\n\nInteresting, we have a test for this scenario and it accepts erroneous\noutput :).\n\n> v1-0001-Call-pg_output_begin-in-pg_decode_message-if-it-i_master.patch\n> can be applied on master, PG15 and PG14,\n> v1-0001-Call-pg_output_begin-in-pg_decode_message-if-it-i_PG13.patch\n> patch can be applied on PG13, PG12 and PG11.\n> Thoughts?\n\nI noticed this when looking at Tomas's patches for logical decoding of\nsequences. The code block you have added is repeated in\npg_decode_change() and pg_decode_truncate(). It might be better to\npush the conditions in pg_output_begin() itself so that any future\ncallsite of pg_output_begin() automatically takes care of these\nconditions.\n\nOtherwise the patches look good to me.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Wed, 28 Jun 2023 19:25:48 +0530",
"msg_from": "Ashutosh Bapat <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_decode_message vs skip_empty_xacts and xact_wrote_changes"
},
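For what it's worth, a minimal sketch of that suggestion, assuming the existing test_decoding structures (the helper name here is invented; the real patch may differ):

/* hypothetical helper consolidating the repeated block */
static void
pg_output_begin_once(LogicalDecodingContext *ctx, TestDecodingData *data,
					 ReorderBufferTXN *txn)
{
	TestDecodingTxnData *txndata = txn->output_plugin_private;

	/* output BEGIN if we haven't yet */
	if (data->skip_empty_xacts && !txndata->xact_wrote_changes)
		pg_output_begin(ctx, data, txn, false);
	txndata->xact_wrote_changes = true;
}

pg_decode_change(), pg_decode_truncate() and the transactional branch of pg_decode_message() would then each call this helper before emitting their own output, instead of repeating the block at every call site.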
{
"msg_contents": "On Wed, 28 Jun 2023 at 19:26, Ashutosh Bapat\n<[email protected]> wrote:\n>\n> Hi Vignesh,\n> Thanks for working on this.\n>\n> On Wed, Jun 28, 2023 at 4:52 PM vignesh C <[email protected]> wrote:\n> >\n> > Here is a patch having the fix for the same. I have not added any\n> > tests as the existing tests cover this scenario. The same issue is\n> > present in back branches too.\n>\n> Interesting, we have a test for this scenario and it accepts erroneous\n> output :).\n>\n> > v1-0001-Call-pg_output_begin-in-pg_decode_message-if-it-i_master.patch\n> > can be applied on master, PG15 and PG14,\n> > v1-0001-Call-pg_output_begin-in-pg_decode_message-if-it-i_PG13.patch\n> > patch can be applied on PG13, PG12 and PG11.\n> > Thoughts?\n>\n> I noticed this when looking at Tomas's patches for logical decoding of\n> sequences. The code block you have added is repeated in\n> pg_decode_change() and pg_decode_truncate(). It might be better to\n> push the conditions in pg_output_begin() itself so that any future\n> callsite of pg_output_begin() automatically takes care of these\n> conditions.\n\nThanks for the comments, here is an updated patch handling the above issue.\n\nRegards,\nVignesh",
"msg_date": "Thu, 29 Jun 2023 09:35:34 +0530",
"msg_from": "vignesh C <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_decode_message vs skip_empty_xacts and xact_wrote_changes"
},
{
"msg_contents": "On Thursday, June 29, 2023 12:06 PM vignesh C <[email protected]> wrote:\r\n> \r\n> On Wed, 28 Jun 2023 at 19:26, Ashutosh Bapat\r\n> <[email protected]> wrote:\r\n> >\r\n> > Hi Vignesh,\r\n> > Thanks for working on this.\r\n> >\r\n> > On Wed, Jun 28, 2023 at 4:52 PM vignesh C <[email protected]> wrote:\r\n> > >\r\n> > > Here is a patch having the fix for the same. I have not added any\r\n> > > tests as the existing tests cover this scenario. The same issue is\r\n> > > present in back branches too.\r\n> >\r\n> > Interesting, we have a test for this scenario and it accepts erroneous\r\n> > output :).\r\n> >\r\n> > > v1-0001-Call-pg_output_begin-in-pg_decode_message-if-it-i_master.pat\r\n> > > ch can be applied on master, PG15 and PG14,\r\n> > > v1-0001-Call-pg_output_begin-in-pg_decode_message-if-it-i_PG13.patch\r\n> > > patch can be applied on PG13, PG12 and PG11.\r\n> > > Thoughts?\r\n> >\r\n> > I noticed this when looking at Tomas's patches for logical decoding of\r\n> > sequences. The code block you have added is repeated in\r\n> > pg_decode_change() and pg_decode_truncate(). It might be better to\r\n> > push the conditions in pg_output_begin() itself so that any future\r\n> > callsite of pg_output_begin() automatically takes care of these\r\n> > conditions.\r\n> \r\n> Thanks for the comments, here is an updated patch handling the above issue.\r\n\r\nThanks for the patches.\r\n\r\nI tried to understand the following check:\r\n\r\n\t/*\r\n \t * If asked to skip empty transactions, we'll emit BEGIN at the point\r\n \t * where the first operation is received for this transaction.\r\n \t */\r\n-\tif (data->skip_empty_xacts)\r\n+\tif (!(last_write ^ data->skip_empty_xacts) || txndata->xact_wrote_changes)\r\n \t\treturn;\r\n\r\nI might miss something, but would you mind elaborating on why we use \"last_write\" in this check?\r\n\r\nBest Regard,\r\nHou zj\r\n",
"msg_date": "Thu, 29 Jun 2023 04:28:52 +0000",
"msg_from": "\"Zhijie Hou (Fujitsu)\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: pg_decode_message vs skip_empty_xacts and xact_wrote_changes"
},
{
"msg_contents": "On Thu, 29 Jun 2023 at 09:58, Zhijie Hou (Fujitsu)\n<[email protected]> wrote:\n>\n> On Thursday, June 29, 2023 12:06 PM vignesh C <[email protected]> wrote:\n> >\n> > On Wed, 28 Jun 2023 at 19:26, Ashutosh Bapat\n> > <[email protected]> wrote:\n> > >\n> > > Hi Vignesh,\n> > > Thanks for working on this.\n> > >\n> > > On Wed, Jun 28, 2023 at 4:52 PM vignesh C <[email protected]> wrote:\n> > > >\n> > > > Here is a patch having the fix for the same. I have not added any\n> > > > tests as the existing tests cover this scenario. The same issue is\n> > > > present in back branches too.\n> > >\n> > > Interesting, we have a test for this scenario and it accepts erroneous\n> > > output :).\n> > >\n> > > > v1-0001-Call-pg_output_begin-in-pg_decode_message-if-it-i_master.pat\n> > > > ch can be applied on master, PG15 and PG14,\n> > > > v1-0001-Call-pg_output_begin-in-pg_decode_message-if-it-i_PG13.patch\n> > > > patch can be applied on PG13, PG12 and PG11.\n> > > > Thoughts?\n> > >\n> > > I noticed this when looking at Tomas's patches for logical decoding of\n> > > sequences. The code block you have added is repeated in\n> > > pg_decode_change() and pg_decode_truncate(). It might be better to\n> > > push the conditions in pg_output_begin() itself so that any future\n> > > callsite of pg_output_begin() automatically takes care of these\n> > > conditions.\n> >\n> > Thanks for the comments, here is an updated patch handling the above issue.\n>\n> Thanks for the patches.\n>\n> I tried to understand the following check:\n>\n> /*\n> * If asked to skip empty transactions, we'll emit BEGIN at the point\n> * where the first operation is received for this transaction.\n> */\n> - if (data->skip_empty_xacts)\n> + if (!(last_write ^ data->skip_empty_xacts) || txndata->xact_wrote_changes)\n> return;\n>\n> I might miss something, but would you mind elaborating on why we use \"last_write\" in this check?\n\nlast_write is used to indicate if it is begin/\"begin\nprepare\"(last_write is true) or change/truncate/message(last_write is\nfalse).\n\nWe have specified logical XNOR which will be true for the following conditions:\nCondition1: last_write && data->skip_empty_xacts -> If it is\nbegin/begin prepare and user has specified skip empty transactions, we\nwill return from here, so that the begin message can be appended at\nthe point where the first operation is received for this transaction.\nCondition2: !last_write && !data->skip_empty_xacts -> If it is\nchange/truncate or message and user has not specified skip empty\ntransactions, we will return from here as we would have appended the\nbegin earlier itself.\nThe txndata->xact_w6rote_changes will be set after the first operation\nis received for this transaction during which we would have outputted\nthe begin message, this condition is to skip outputting begin message\nif the begin message was already outputted.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Thu, 29 Jun 2023 21:39:46 +0530",
"msg_from": "vignesh C <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_decode_message vs skip_empty_xacts and xact_wrote_changes"
},
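An annotated version of that condition may make the cases easier to follow (this restates the v2 logic being discussed, assuming it sits inside pg_output_begin() with last_write true when called from the begin/begin-prepare callbacks and false when called from change/truncate/message):

/*
 *   last_write  skip_empty_xacts   !(last_write ^ skip_empty_xacts)
 *   ----------  ----------------   --------------------------------
 *   true        true               true  -> defer BEGIN until the first change
 *   false       false              true  -> BEGIN was already emitted eagerly
 *   true        false              false -> emit BEGIN now, from the callback
 *   false       true               false -> first change: emit deferred BEGIN
 *
 * The "|| txndata->xact_wrote_changes" part prevents BEGIN from being
 * emitted a second time once some change has already been output.
 */
if (!(last_write ^ data->skip_empty_xacts) || txndata->xact_wrote_changes)
	return;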
{
"msg_contents": "On Wed, Jun 28, 2023 at 7:26 PM Ashutosh Bapat\n<[email protected]> wrote:\n>\n> Hi Vignesh,\n> Thanks for working on this.\n>\n> On Wed, Jun 28, 2023 at 4:52 PM vignesh C <[email protected]> wrote:\n> >\n> > Here is a patch having the fix for the same. I have not added any\n> > tests as the existing tests cover this scenario. The same issue is\n> > present in back branches too.\n>\n> Interesting, we have a test for this scenario and it accepts erroneous\n> output :).\n>\n\nThis made me look at the original commit d6fa44fc which has introduced\nthis check and it seems this is done primarily to avoid spurious test\nfailures due to empty transactions. The proposed change won't help\nwith that. So, I am not sure if it is worth backpatching this change\nas proposed. Though, I see the reasons to improve the code in HEAD due\nto the following reasons (a) to maintain consistency among\ntransactional messages/changes (b) we will still emit Begin/Commit\nwith transactional messages when skip_empty_xacts is '0', see below\ntest:\n\nSELECT 'init' FROM\npg_create_logical_replication_slot('regression_slot',\n'test_decoding');\nSELECT 'msg1' FROM pg_logical_emit_message(true, 'test', 'msg1');\nSELECT 'msg2' FROM pg_logical_emit_message(false, 'test', 'msg2');\n\nSELECT data FROM pg_logical_slot_peek_changes('regression_slot', NULL,\nNULL, 'force-binary', '0', 'skip-empty-xacts', '1');\n data\n------------------------------------------------------------\n message: transactional: 1 prefix: test, sz: 4 content:msg1\n message: transactional: 0 prefix: test, sz: 4 content:msg2\n(2 rows)\n\nSELECT data FROM pg_logical_slot_peek_changes('regression_slot', NULL,\nNULL, 'force-binary', '0', 'skip-empty-xacts', '0');\n data\n------------------------------------------------------------\n BEGIN 739\n message: transactional: 1 prefix: test, sz: 4 content:msg1\n COMMIT 739\n message: transactional: 0 prefix: test, sz: 4 content:msg2\n(4 rows)\n\nThoughts?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 30 Jun 2023 09:38:28 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_decode_message vs skip_empty_xacts and xact_wrote_changes"
},
{
"msg_contents": "On Thu, Jun 29, 2023 at 9:40 PM vignesh C <[email protected]> wrote:\n>\n> On Thu, 29 Jun 2023 at 09:58, Zhijie Hou (Fujitsu)\n> <[email protected]> wrote:\n> >\n> > On Thursday, June 29, 2023 12:06 PM vignesh C <[email protected]> wrote:\n> > >\n> >\n> > Thanks for the patches.\n> >\n> > I tried to understand the following check:\n> >\n> > /*\n> > * If asked to skip empty transactions, we'll emit BEGIN at the point\n> > * where the first operation is received for this transaction.\n> > */\n> > - if (data->skip_empty_xacts)\n> > + if (!(last_write ^ data->skip_empty_xacts) || txndata->xact_wrote_changes)\n> > return;\n> >\n> > I might miss something, but would you mind elaborating on why we use \"last_write\" in this check?\n>\n> last_write is used to indicate if it is begin/\"begin\n> prepare\"(last_write is true) or change/truncate/message(last_write is\n> false).\n>\n> We have specified logical XNOR which will be true for the following conditions:\n> Condition1: last_write && data->skip_empty_xacts -> If it is\n> begin/begin prepare and user has specified skip empty transactions, we\n> will return from here, so that the begin message can be appended at\n> the point where the first operation is received for this transaction.\n> Condition2: !last_write && !data->skip_empty_xacts -> If it is\n> change/truncate or message and user has not specified skip empty\n> transactions, we will return from here as we would have appended the\n> begin earlier itself.\n> The txndata->xact_w6rote_changes will be set after the first operation\n> is received for this transaction during which we would have outputted\n> the begin message, this condition is to skip outputting begin message\n> if the begin message was already outputted.\n>\n\nI feel the use of last_write has reduced the readability of this part\nof the code. It may be that we can add comments to make it clear but I\nfeel your previous version was much easier to understand.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 30 Jun 2023 09:55:03 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_decode_message vs skip_empty_xacts and xact_wrote_changes"
},
{
"msg_contents": "On Fri, 30 Jun 2023 at 09:55, Amit Kapila <[email protected]> wrote:\n>\n> On Thu, Jun 29, 2023 at 9:40 PM vignesh C <[email protected]> wrote:\n> >\n> > On Thu, 29 Jun 2023 at 09:58, Zhijie Hou (Fujitsu)\n> > <[email protected]> wrote:\n> > >\n> > > On Thursday, June 29, 2023 12:06 PM vignesh C <[email protected]> wrote:\n> > > >\n> > >\n> > > Thanks for the patches.\n> > >\n> > > I tried to understand the following check:\n> > >\n> > > /*\n> > > * If asked to skip empty transactions, we'll emit BEGIN at the point\n> > > * where the first operation is received for this transaction.\n> > > */\n> > > - if (data->skip_empty_xacts)\n> > > + if (!(last_write ^ data->skip_empty_xacts) || txndata->xact_wrote_changes)\n> > > return;\n> > >\n> > > I might miss something, but would you mind elaborating on why we use \"last_write\" in this check?\n> >\n> > last_write is used to indicate if it is begin/\"begin\n> > prepare\"(last_write is true) or change/truncate/message(last_write is\n> > false).\n> >\n> > We have specified logical XNOR which will be true for the following conditions:\n> > Condition1: last_write && data->skip_empty_xacts -> If it is\n> > begin/begin prepare and user has specified skip empty transactions, we\n> > will return from here, so that the begin message can be appended at\n> > the point where the first operation is received for this transaction.\n> > Condition2: !last_write && !data->skip_empty_xacts -> If it is\n> > change/truncate or message and user has not specified skip empty\n> > transactions, we will return from here as we would have appended the\n> > begin earlier itself.\n> > The txndata->xact_w6rote_changes will be set after the first operation\n> > is received for this transaction during which we would have outputted\n> > the begin message, this condition is to skip outputting begin message\n> > if the begin message was already outputted.\n> >\n>\n> I feel the use of last_write has reduced the readability of this part\n> of the code. It may be that we can add comments to make it clear but I\n> feel your previous version was much easier to understand.\n\n+1 for the first version patch, I also felt the first version is\neasily understandable.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Mon, 3 Jul 2023 16:49:40 +0530",
"msg_from": "vignesh C <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_decode_message vs skip_empty_xacts and xact_wrote_changes"
},
{
"msg_contents": "On Mon, Jul 3, 2023 at 4:49 PM vignesh C <[email protected]> wrote:\n>\n> +1 for the first version patch, I also felt the first version is\n> easily understandable.\n>\n\nOkay, please find the slightly updated version (changed a comment and\ncommit message). Unless there are more comments, I'll push this in a\nday or two.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 5 Jul 2023 14:28:00 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_decode_message vs skip_empty_xacts and xact_wrote_changes"
},
{
"msg_contents": "On Wed, Jul 5, 2023 at 2:28 PM Amit Kapila <[email protected]> wrote:\n>\n> On Mon, Jul 3, 2023 at 4:49 PM vignesh C <[email protected]> wrote:\n> >\n> > +1 for the first version patch, I also felt the first version is\n> > easily understandable.\n> >\n>\n> Okay, please find the slightly updated version (changed a comment and\n> commit message). Unless there are more comments, I'll push this in a\n> day or two.\n>\n\noops, forgot to attach the patch.\n\n-- \nWith Regards,\nAmit Kapila.",
"msg_date": "Wed, 5 Jul 2023 14:28:56 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_decode_message vs skip_empty_xacts and xact_wrote_changes"
},
{
"msg_contents": "On Wed, Jul 5, 2023 at 2:29 PM Amit Kapila <[email protected]> wrote:\n>\n> On Wed, Jul 5, 2023 at 2:28 PM Amit Kapila <[email protected]> wrote:\n> >\n> > On Mon, Jul 3, 2023 at 4:49 PM vignesh C <[email protected]> wrote:\n> > >\n> > > +1 for the first version patch, I also felt the first version is\n> > > easily understandable.\n> > >\n> >\n> > Okay, please find the slightly updated version (changed a comment and\n> > commit message). Unless there are more comments, I'll push this in a\n> > day or two.\n> >\n>\n> oops, forgot to attach the patch.\n\nI still think that we need to do something so that a new call to\npg_output_begin() automatically takes care of the conditions under\nwhich it should be called. Otherwise, we will introduce a similar\nproblem in some other place in future.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Wed, 5 Jul 2023 19:20:06 +0530",
"msg_from": "Ashutosh Bapat <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_decode_message vs skip_empty_xacts and xact_wrote_changes"
},
{
"msg_contents": "On Wed, Jul 5, 2023 at 7:20 PM Ashutosh Bapat\n<[email protected]> wrote:\n>\n> On Wed, Jul 5, 2023 at 2:29 PM Amit Kapila <[email protected]> wrote:\n> >\n> > On Wed, Jul 5, 2023 at 2:28 PM Amit Kapila <[email protected]> wrote:\n> > >\n> > > On Mon, Jul 3, 2023 at 4:49 PM vignesh C <[email protected]> wrote:\n> > > >\n> > > > +1 for the first version patch, I also felt the first version is\n> > > > easily understandable.\n> > > >\n> > >\n> > > Okay, please find the slightly updated version (changed a comment and\n> > > commit message). Unless there are more comments, I'll push this in a\n> > > day or two.\n> > >\n> >\n> > oops, forgot to attach the patch.\n>\n> I still think that we need to do something so that a new call to\n> pg_output_begin() automatically takes care of the conditions under\n> which it should be called. Otherwise, we will introduce a similar\n> problem in some other place in future.\n>\n\nAFAIU, this problem is because we forget to conditionally call\npg_output_begin() from pg_decode_message() which can happen with or\nwithout moving conditions inside pg_output_begin(). Also, it shouldn't\nbe done at the expense of complexity. I find the check added by\nVignesh's v2 patch (+ if (!(last_write ^ data->skip_empty_xacts) ||\ntxndata->xact_wrote_changes)) a bit difficult to understand and more\nerror-prone. The others like Hou-San also couldn't understand unless\nVignesh gave an explanation. I also thought of avoiding that check.\nBasically, IIUC, the check is added because the patch also removed\n'data->skip_empty_xacts' check from\npg_decode_begin_txn()/pg_decode_begin_prepare_txn(). Now, if retain\nthose checks then it is probably okay but again the similar checks are\nstill split and that doesn't appear to be better than the v1 or v3\npatch version. I am not against improving code in this area and\nprobably we can consider doing it as a separate patch if we have\nbetter ideas instead of combining it with this patch.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 6 Jul 2023 14:06:32 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_decode_message vs skip_empty_xacts and xact_wrote_changes"
},
{
"msg_contents": "On Thu, Jul 6, 2023 at 2:06 PM Amit Kapila <[email protected]> wrote:\n>\n> On Wed, Jul 5, 2023 at 7:20 PM Ashutosh Bapat\n> <[email protected]> wrote:\n> >\n> > On Wed, Jul 5, 2023 at 2:29 PM Amit Kapila <[email protected]> wrote:\n> > >\n> > > On Wed, Jul 5, 2023 at 2:28 PM Amit Kapila <[email protected]> wrote:\n> > > >\n> > > > On Mon, Jul 3, 2023 at 4:49 PM vignesh C <[email protected]> wrote:\n> > > > >\n> > > > > +1 for the first version patch, I also felt the first version is\n> > > > > easily understandable.\n> > > > >\n> > > >\n> > > > Okay, please find the slightly updated version (changed a comment and\n> > > > commit message). Unless there are more comments, I'll push this in a\n> > > > day or two.\n> > > >\n> > >\n> > > oops, forgot to attach the patch.\n> >\n> > I still think that we need to do something so that a new call to\n> > pg_output_begin() automatically takes care of the conditions under\n> > which it should be called. Otherwise, we will introduce a similar\n> > problem in some other place in future.\n> >\n>\n> AFAIU, this problem is because we forget to conditionally call\n> pg_output_begin() from pg_decode_message() which can happen with or\n> without moving conditions inside pg_output_begin(). Also, it shouldn't\n> be done at the expense of complexity. I find the check added by\n> Vignesh's v2 patch (+ if (!(last_write ^ data->skip_empty_xacts) ||\n> txndata->xact_wrote_changes)) a bit difficult to understand and more\n> error-prone. The others like Hou-San also couldn't understand unless\n> Vignesh gave an explanation. I also thought of avoiding that check.\n> Basically, IIUC, the check is added because the patch also removed\n> 'data->skip_empty_xacts' check from\n> pg_decode_begin_txn()/pg_decode_begin_prepare_txn(). Now, if retain\n> those checks then it is probably okay but again the similar checks are\n> still split and that doesn't appear to be better than the v1 or v3\n> patch version. I am not against improving code in this area and\n> probably we can consider doing it as a separate patch if we have\n> better ideas instead of combining it with this patch.\n>\n\nI have pushed this work. But feel free to propose further\nimprovements, if you have any better ideas.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 11 Jul 2023 17:59:07 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_decode_message vs skip_empty_xacts and xact_wrote_changes"
},
{
"msg_contents": "On Tue, Jul 11, 2023 at 5:59 PM Amit Kapila <[email protected]> wrote:\n\n> >\n>\n> I have pushed this work. But feel free to propose further\n> improvements, if you have any better ideas.\n>\n\nThanks. We have fixed the problem. So things are better than they\nwere. I have been busy with something else so couldn't reply.\n\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Tue, 11 Jul 2023 20:30:41 +0530",
"msg_from": "Ashutosh Bapat <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_decode_message vs skip_empty_xacts and xact_wrote_changes"
}
] |
[
{
"msg_contents": "hi. simple question....\n\nThe following one works.\n------------------------------------------------------------\nDatum\ntest_direct_inputcall(PG_FUNCTION_ARGS)\n{\n char *token = PG_GETARG_CSTRING(0);\n Datum numd;\n if (!DirectInputFunctionCallSafe(numeric_in, token,\n InvalidOid, -1,\n fcinfo->context,\n &numd))\n {\n elog(INFO,\"convert to cstring failed\");\n PG_RETURN_BOOL(false);\n }\n elog(INFO,\"%s\",DatumGetCString(DirectFunctionCall1(numeric_out,numd)));\n PG_RETURN_BOOL(true);\n}\n------------------------------------------------------------\n--the following one does not work. will print out something is wrong\n\nDatum\ntest_direct_inputcall(PG_FUNCTION_ARGS)\n{\n char *token = PG_GETARG_CSTRING(0);\n Datum numd;\n numd = return_numeric_datum(token);\n elog(INFO,\"%s\",DatumGetCString(DirectFunctionCall1(numeric_out,numd)));\n PG_RETURN_BOOL(true);\n}\n\nstatic\nDatum return_numeric_datum(char *token)\n{\n Datum numd;\n Node *escontext;\n\n if (!DirectInputFunctionCallSafe(numeric_in, token,\n InvalidOid, -1,\n escontext,\n &numd));\n elog(INFO,\"something is wrong\");\n return numd;\n}\n------------------------------------------------------------\nI wonder how to make it return_numeric_datum works in functions that\narguments are not PG_FUNCTION_ARGS.\n\nTo make it work, I need to understand the Node *context, which is kind\nof a vague idea for me.\nIn the top level function (input as PG_FUNCTION_ARGS) the Node\n*context can be used to print out errors and back to normal state, I\nkind of get it.\n\nI really appreciate someone showing a minimal, reproducible example\nbased on return_numeric_datum....\n\n\n",
"msg_date": "Mon, 26 Jun 2023 19:20:00 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": true,
"msg_subject": "\n =?UTF-8?Q?=E2=80=8Bfunction_arguments_are_not_PG=5FFUNCTION=5FARGS=2C_how_?=\n\t=?UTF-8?Q?to_pass_Node_=2Aescontext?="
},
{
"msg_contents": "On 2023-06-26 Mo 07:20, jian he wrote:\n> static\n> Datum return_numeric_datum(char *token)\n> {\n> Datum numd;\n> Node *escontext;\n>\n> if (!DirectInputFunctionCallSafe(numeric_in, token,\n> InvalidOid, -1,\n> escontext,\n> &numd));\n> elog(INFO,\"something is wrong\");\n> return numd;\n> }\n\n\nTo start with, the semicolon at the end of that if appears bogus. The \nelog is indented to look like it's conditioned by the if but the \nsemicolon makes it not be.\n\nThere are compiler switches in modern gcc at least that help you detect \nthings like this.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2023-06-26 Mo 07:20, jian he wrote:\n\n\n\nstatic\nDatum return_numeric_datum(char *token)\n{\n Datum numd;\n Node *escontext;\n\n if (!DirectInputFunctionCallSafe(numeric_in, token,\n InvalidOid, -1,\n escontext,\n &numd));\n elog(INFO,\"something is wrong\");\n return numd;\n}\n\n\n\n\nTo start with, the semicolon at the end of that if appears bogus.\n The elog is indented to look like it's conditioned by the if but\n the semicolon makes it not be.\n\nThere are compiler switches in modern gcc at least that help you\n detect things like this.\n\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Mon, 26 Jun 2023 07:50:55 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "=?UTF-8?Q?Re=3a_=e2=80=8bfunction_arguments_are_not_PG=5fFUNCTION?=\n =?UTF-8?Q?=5fARGS=2c_how_to_pass_Node_*escontext?="
},
{
"msg_contents": "On Mon, Jun 26, 2023 at 7:50 PM Andrew Dunstan <[email protected]> wrote:\n>\n>\n> On 2023-06-26 Mo 07:20, jian he wrote:\n>\n> static\n> Datum return_numeric_datum(char *token)\n> {\n> Datum numd;\n> Node *escontext;\n>\n> if (!DirectInputFunctionCallSafe(numeric_in, token,\n> InvalidOid, -1,\n> escontext,\n> &numd));\n> elog(INFO,\"something is wrong\");\n> return numd;\n> }\n>\n>\n> To start with, the semicolon at the end of that if appears bogus. The elog is indented to look like it's conditioned by the if but the semicolon makes it not be.\n>\n> There are compiler switches in modern gcc at least that help you detect things like this.\n>\n>\n> cheers\n>\n>\n> andrew\n>\n> --\n> Andrew Dunstan\n> EDB: https://www.enterprisedb.com\n\n\nsorry. It was my mistake.\n\n> Node *escontext;\n> if (!DirectInputFunctionCallSafe(numeric_in, token,\n> InvalidOid, -1,\n> escontext,\n> &numd))\n> elog(INFO,\"something is wrong\");\n\nI wonder about the implication of just declaring Node *escontext in here.\nIn this DirectInputFunctionCallSafe, what does escontext point to.\n\n\n",
"msg_date": "Mon, 26 Jun 2023 21:32:02 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": true,
"msg_subject": "\n =?UTF-8?Q?Re=3A_=E2=80=8Bfunction_arguments_are_not_PG=5FFUNCTION=5FARGS=2C_?=\n\t=?UTF-8?Q?how_to_pass_Node_=2Aescontext?="
},
{
"msg_contents": "On Mon, Jun 26, 2023 at 9:32 PM jian he <[email protected]> wrote:\n>\n> On Mon, Jun 26, 2023 at 7:50 PM Andrew Dunstan <[email protected]>\nwrote:\n> >\n> >\n> > On 2023-06-26 Mo 07:20, jian he wrote:\n> >\n> > static\n> > Datum return_numeric_datum(char *token)\n> > {\n> > Datum numd;\n> > Node *escontext;\n> >\n> > if (!DirectInputFunctionCallSafe(numeric_in, token,\n> > InvalidOid, -1,\n> > escontext,\n> > &numd));\n> > elog(INFO,\"something is wrong\");\n> > return numd;\n> > }\n> >\n> >\n> > To start with, the semicolon at the end of that if appears bogus. The\nelog is indented to look like it's conditioned by the if but the semicolon\nmakes it not be.\n> >\n> > There are compiler switches in modern gcc at least that help you detect\nthings like this.\n> >\n> >\n> > cheers\n> >\n> >\n> > andrew\n> >\n> > --\n> > Andrew Dunstan\n> > EDB: https://www.enterprisedb.com\n>\n>\n> sorry. It was my mistake.\n>\n> > Node *escontext;\n> > if (!DirectInputFunctionCallSafe(numeric_in, token,\n> > InvalidOid, -1,\n> > escontext,\n> > &numd))\n> > elog(INFO,\"something is wrong\");\n>\n> I wonder about the implication of just declaring Node *escontext in here.\n> In this DirectInputFunctionCallSafe, what does escontext point to.\n\ngcc -Wempty-body will detect my error.\n\nHowever, gcc -Wextra includes -Wempty-body and -Wuninitialized and others.\ni guess in here, the real question in here is how to initialize escontext.\n\nOn Mon, Jun 26, 2023 at 9:32 PM jian he <[email protected]> wrote:>> On Mon, Jun 26, 2023 at 7:50 PM Andrew Dunstan <[email protected]> wrote:> >> >> > On 2023-06-26 Mo 07:20, jian he wrote:> >> > static> > Datum return_numeric_datum(char *token)> > {> > Datum numd;> > Node *escontext;> >> > if (!DirectInputFunctionCallSafe(numeric_in, token,> > InvalidOid, -1,> > escontext,> > &numd));> > elog(INFO,\"something is wrong\");> > return numd;> > }> >> >> > To start with, the semicolon at the end of that if appears bogus. The elog is indented to look like it's conditioned by the if but the semicolon makes it not be.> >> > There are compiler switches in modern gcc at least that help you detect things like this.> >> >> > cheers> >> >> > andrew> >> > --> > Andrew Dunstan> > EDB: https://www.enterprisedb.com>>> sorry. It was my mistake.>> > Node *escontext;> > if (!DirectInputFunctionCallSafe(numeric_in, token,> > InvalidOid, -1,> > escontext,> > &numd))> > elog(INFO,\"something is wrong\");>> I wonder about the implication of just declaring Node *escontext in here.> In this DirectInputFunctionCallSafe, what does escontext point to.gcc -Wempty-body will detect my error.However, gcc -Wextra includes -Wempty-body and -Wuninitialized and others.i guess in here, the real question in here is how to initialize escontext.",
"msg_date": "Mon, 26 Jun 2023 21:45:51 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": true,
"msg_subject": "\n =?UTF-8?Q?Re=3A_=E2=80=8Bfunction_arguments_are_not_PG=5FFUNCTION=5FARGS=2C_?=\n\t=?UTF-8?Q?how_to_pass_Node_=2Aescontext?="
},
{
"msg_contents": "On 2023-06-26 Mo 09:45, jian he wrote:\n>\n>\n> On Mon, Jun 26, 2023 at 9:32 PM jian he <[email protected]> \n> wrote:\n> >\n> > On Mon, Jun 26, 2023 at 7:50 PM Andrew Dunstan <[email protected]> \n> wrote:\n> > >\n> > >\n> > > On 2023-06-26 Mo 07:20, jian he wrote:\n> > >\n> > > static\n> > > Datum return_numeric_datum(char *token)\n> > > {\n> > > Datum numd;\n> > > Node *escontext;\n> > >\n> > > if (!DirectInputFunctionCallSafe(numeric_in, token,\n> > > InvalidOid, -1,\n> > > escontext,\n> > > &numd));\n> > > elog(INFO,\"something is wrong\");\n> > > return numd;\n> > > }\n> > >\n> > >\n> > > To start with, the semicolon at the end of that if appears bogus. \n> The elog is indented to look like it's conditioned by the if but the \n> semicolon makes it not be.\n> > >\n> > > There are compiler switches in modern gcc at least that help you \n> detect things like this.\n> > >\n> > >\n> > > cheers\n> > >\n> > >\n> > > andrew\n> > >\n> > > --\n> > > Andrew Dunstan\n> > > EDB: https://www.enterprisedb.com\n> >\n> >\n> > sorry. It was my mistake.\n> >\n> > > Node *escontext;\n> > > if (!DirectInputFunctionCallSafe(numeric_in, token,\n> > > InvalidOid, -1,\n> > > escontext,\n> > > &numd))\n> > > elog(INFO,\"something is wrong\");\n> >\n> > I wonder about the implication of just declaring Node *escontext in \n> here.\n> > In this DirectInputFunctionCallSafe, what does escontext point to.\n>\n> gcc -Wempty-body will detect my error.\n>\n> However, gcc -Wextra includes -Wempty-body and -Wuninitializedand \n> others.\n> i guess in here, the real question in here is how to initializeescontext.\n\n\nSee code for pg_input_error_info and friends for examples.\n\n\ncheers\n\n\nandrew\n\n\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2023-06-26 Mo 09:45, jian he wrote:\n\n\n\n\n\n On Mon, Jun 26, 2023 at 9:32 PM jian he <[email protected]>\n wrote:\n >\n > On Mon, Jun 26, 2023 at 7:50 PM Andrew Dunstan <[email protected]>\n wrote:\n > >\n > >\n > > On 2023-06-26 Mo 07:20, jian he wrote:\n > >\n > > static\n > > Datum return_numeric_datum(char *token)\n > > {\n > > Datum numd;\n > > Node *escontext;\n > >\n > > if (!DirectInputFunctionCallSafe(numeric_in,\n token,\n > > InvalidOid, -1,\n > > escontext,\n > > &numd));\n > > elog(INFO,\"something is wrong\");\n > > return numd;\n > > }\n > >\n > >\n > > To start with, the semicolon at the end of that if\n appears bogus. The elog is indented to look like it's\n conditioned by the if but the semicolon makes it not be.\n > >\n > > There are compiler switches in modern gcc at least\n that help you detect things like this.\n > >\n > >\n > > cheers\n > >\n > >\n > > andrew\n > >\n > > --\n > > Andrew Dunstan\n > > EDB: https://www.enterprisedb.com\n >\n >\n > sorry. It was my mistake.\n >\n > > Node *escontext;\n > > if (!DirectInputFunctionCallSafe(numeric_in,\n token,\n > > InvalidOid, -1,\n > > escontext,\n > > &numd))\n > > elog(INFO,\"something is wrong\");\n >\n > I wonder about the implication of just declaring Node\n *escontext in here.\n > In this DirectInputFunctionCallSafe, what does escontext\n point to.\n\n gcc -Wempty-body will detect my error.\n \n However, gcc -Wextra includes -Wempty-body and -Wuninitialized\n and others.\ni guess in here,\n the real question in here is how to initialize escontext.\n\n\n\n\n\nSee code for pg_input_error_info and friends for examples.\n\n\ncheers\n\n\nandrew\n\n\n\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Mon, 26 Jun 2023 11:48:26 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "=?UTF-8?Q?Re=3a_=e2=80=8bfunction_arguments_are_not_PG=5fFUNCTION?=\n =?UTF-8?Q?=5fARGS=2c_how_to_pass_Node_*escontext?="
}
] |
[
{
"msg_contents": "Hello,\n\nThis attached patch enables pgbench to cancel queries during benchmark.\n\nFormerly, Ctrl+C during benchmark killed pgbench immediately, but backend\nprocesses executing long queries remained for a while. You can simply\nreproduce this problem by cancelling the pgbench running a custom script\nexecuting \"SELECT pg_sleep(10)\".\n\nThe patch fixes this so that cancel requests are sent for all connections on\nCtrl+C, and all running queries are cancelled before pgbench exits.\n\nIn thread #0, setup_cancel_handler is called before the loop, the\nCancelRequested flag is set when Ctrl+C is issued. In the loop, cancel\nrequests are sent when the flag is set only in thread #0. SIGINT is\nblocked in other threads, but the threads will exit after their query\nare cancelled. If thread safety is disabled or OS is Windows, the signal\nis not blocked because pthread_sigmask cannot be used. \n(I didn't test the patch on WIndows yet, though.)\n\nI choose the design that the signal handler and the query cancel are\nperformed only in thread #0 because I wanted to make the behavior as\npredicable as possible. However, another design that all thread can\nreceived SIGINT and that the first thread that catches the signal is\nresponsible to sent cancel requests for all connections may also work.\n\nAlso, the array of CState that contains all clients state is changed to\na global variable so that all connections can be accessed within a thread.\n\nRegards,\nYugo Nagata\n\n-- \nYugo NAGATA <[email protected]>",
"msg_date": "Mon, 26 Jun 2023 22:46:28 +0900",
"msg_from": "Yugo NAGATA <[email protected]>",
"msg_from_op": true,
"msg_subject": "pgbnech: allow to cancel queries during benchmark"
},
{
"msg_contents": "On Mon, Jun 26, 2023 at 9:46 AM Yugo NAGATA <[email protected]> wrote:\n\n> Hello,\n>\n> This attached patch enables pgbench to cancel queries during benchmark.\n>\n> Formerly, Ctrl+C during benchmark killed pgbench immediately, but backend\n> processes executing long queries remained for a while. You can simply\n> reproduce this problem by cancelling the pgbench running a custom script\n> executing \"SELECT pg_sleep(10)\".\n>\n> The patch fixes this so that cancel requests are sent for all connections\n> on\n> Ctrl+C, and all running queries are cancelled before pgbench exits.\n>\n> In thread #0, setup_cancel_handler is called before the loop, the\n> CancelRequested flag is set when Ctrl+C is issued. In the loop, cancel\n> requests are sent when the flag is set only in thread #0. SIGINT is\n> blocked in other threads, but the threads will exit after their query\n> are cancelled. If thread safety is disabled or OS is Windows, the signal\n> is not blocked because pthread_sigmask cannot be used.\n> (I didn't test the patch on WIndows yet, though.)\n>\n> I choose the design that the signal handler and the query cancel are\n> performed only in thread #0 because I wanted to make the behavior as\n> predicable as possible. However, another design that all thread can\n> received SIGINT and that the first thread that catches the signal is\n> responsible to sent cancel requests for all connections may also work.\n>\n> Also, the array of CState that contains all clients state is changed to\n> a global variable so that all connections can be accessed within a thread.\n>\n>\n> +1\n I also like the thread #0 handling design. I have NOT reviewed/tested\nthis yet.\n\nOn Mon, Jun 26, 2023 at 9:46 AM Yugo NAGATA <[email protected]> wrote:Hello,\n\nThis attached patch enables pgbench to cancel queries during benchmark.\n\nFormerly, Ctrl+C during benchmark killed pgbench immediately, but backend\nprocesses executing long queries remained for a while. You can simply\nreproduce this problem by cancelling the pgbench running a custom script\nexecuting \"SELECT pg_sleep(10)\".\n\nThe patch fixes this so that cancel requests are sent for all connections on\nCtrl+C, and all running queries are cancelled before pgbench exits.\n\nIn thread #0, setup_cancel_handler is called before the loop, the\nCancelRequested flag is set when Ctrl+C is issued. In the loop, cancel\nrequests are sent when the flag is set only in thread #0. SIGINT is\nblocked in other threads, but the threads will exit after their query\nare cancelled. If thread safety is disabled or OS is Windows, the signal\nis not blocked because pthread_sigmask cannot be used. \n(I didn't test the patch on WIndows yet, though.)\n\nI choose the design that the signal handler and the query cancel are\nperformed only in thread #0 because I wanted to make the behavior as\npredicable as possible. However, another design that all thread can\nreceived SIGINT and that the first thread that catches the signal is\nresponsible to sent cancel requests for all connections may also work.\n\nAlso, the array of CState that contains all clients state is changed to\na global variable so that all connections can be accessed within a thread.\n+1 I also like the thread #0 handling design. I have NOT reviewed/tested this yet.",
"msg_date": "Mon, 26 Jun 2023 12:59:21 -0400",
"msg_from": "Kirk Wolak <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgbnech: allow to cancel queries during benchmark"
},
{
"msg_contents": "On Mon, 26 Jun 2023 12:59:21 -0400\nKirk Wolak <[email protected]> wrote:\n\n> On Mon, Jun 26, 2023 at 9:46 AM Yugo NAGATA <[email protected]> wrote:\n> \n> > Hello,\n> >\n> > This attached patch enables pgbench to cancel queries during benchmark.\n> >\n> > Formerly, Ctrl+C during benchmark killed pgbench immediately, but backend\n> > processes executing long queries remained for a while. You can simply\n> > reproduce this problem by cancelling the pgbench running a custom script\n> > executing \"SELECT pg_sleep(10)\".\n> >\n> > The patch fixes this so that cancel requests are sent for all connections\n> > on\n> > Ctrl+C, and all running queries are cancelled before pgbench exits.\n> >\n> > In thread #0, setup_cancel_handler is called before the loop, the\n> > CancelRequested flag is set when Ctrl+C is issued. In the loop, cancel\n> > requests are sent when the flag is set only in thread #0. SIGINT is\n> > blocked in other threads, but the threads will exit after their query\n> > are cancelled. If thread safety is disabled or OS is Windows, the signal\n> > is not blocked because pthread_sigmask cannot be used.\n> > (I didn't test the patch on WIndows yet, though.)\n> >\n> > I choose the design that the signal handler and the query cancel are\n> > performed only in thread #0 because I wanted to make the behavior as\n> > predicable as possible. However, another design that all thread can\n> > received SIGINT and that the first thread that catches the signal is\n> > responsible to sent cancel requests for all connections may also work.\n> >\n> > Also, the array of CState that contains all clients state is changed to\n> > a global variable so that all connections can be accessed within a thread.\n> >\n> >\n> > +1\n> I also like the thread #0 handling design. I have NOT reviewed/tested\n> this yet.\n\nThank you for your comment.\n\nAs a side note, actually I thought another design to create a special thread\nfor handling query cancelling in which SIGINT would be catched by sigwait,\nI quit the idea because it would add complexity due to code for Windows and\n--disabled-thread-safe.\n\nI would appreciate it if you could kindly review and test the patch!\n\nRegards,\nYugo Nagata\n\n-- \nYugo NAGATA <[email protected]>\n\n\n",
"msg_date": "Tue, 27 Jun 2023 16:48:45 +0900",
"msg_from": "Yugo NAGATA <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pgbnech: allow to cancel queries during benchmark"
},
{
"msg_contents": "\nHello Yugo-san,\n\n>>> In thread #0, setup_cancel_handler is called before the loop, the\n>>> CancelRequested flag is set when Ctrl+C is issued. In the loop, cancel\n>>> requests are sent when the flag is set only in thread #0. SIGINT is\n>>> blocked in other threads, but the threads will exit after their query\n>>> are cancelled. If thread safety is disabled or OS is Windows, the signal\n>>> is not blocked because pthread_sigmask cannot be used.\n>>> (I didn't test the patch on WIndows yet, though.)\n>>>\n>>> I choose the design that the signal handler and the query cancel are\n>>> performed only in thread #0 because I wanted to make the behavior as\n>>> predicable as possible. However, another design that all thread can\n>>> received SIGINT and that the first thread that catches the signal is\n>>> responsible to sent cancel requests for all connections may also work.\n>>>\n>>> Also, the array of CState that contains all clients state is changed to\n>>> a global variable so that all connections can be accessed within a thread.\n\n> As a side note, actually I thought another design to create a special thread\n> for handling query cancelling in which SIGINT would be catched by sigwait,\n> I quit the idea because it would add complexity due to code for Windows and\n> --disabled-thread-safe.\n\nI agree that the simpler the better.\n\n> I would appreciate it if you could kindly review and test the patch!\n\nI'll try to have a look at it.\n\n-- \nFabien.\n\n\n",
"msg_date": "Mon, 3 Jul 2023 14:11:36 +0200 (CEST)",
"msg_from": "Fabien COELHO <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgbnech: allow to cancel queries during benchmark"
},
{
"msg_contents": "\nYugo-san,\n\nSome feedback about v1 of this patch.\n\nPatch applies cleanly, compiles.\n\nThere are no tests, could there be one? ISTM that one could be done with a \n\"SELECT pg_sleep(...)\" script??\n\nThe global name \"all_state\" is quite ambiguous, what about \"client_states\" \ninstead? Or maybe it could be avoided, see below.\n\nInstead of renaming \"state\" to \"all_state\" (or client_states as suggested \nabove), I'd suggest to minimize the patch by letting \"state\" inside the \nmain and adding a \"client_states = state;\" just after the allocation, or \nanother approach, see below.\n\nShould PQfreeCancel be called on deconnections, in finishCon? I think that \nthere may be a memory leak with the current implementation??\n\nMaybe it should check that cancel is not NULL before calling PQcancel?\n\nAfter looking at the code, I'm very unclear whether they may be some \nunderlying race conditions, or not, depending on when the cancel is \ntriggered. I think that some race conditions are still likely given the \ncurrent thread 0 implementation, and dealing with them with a barrier or \nwhatever is not desirable at all.\n\nIn order to work around this issue, ISTM that we should go away from the \nsimple and straightforward thread 0 approach, and the only clean way is \nthat the cancelation should be managed by each thread for its own client.\n\nI'd suggest to have the advanceState to call PQcancel when CancelRequested \nis set and switch to CSTATE_ABORTED to end properly. This means that there \nwould be no need for the global client states pointer, so the patch should \nbe smaller and simpler. Possibly there would be some shortcuts added here \nand there to avoid lingering after the control-C, in threadRun.\n\n-- \nFabien.\n\n\n",
"msg_date": "Mon, 3 Jul 2023 20:39:23 +0200 (CEST)",
"msg_from": "Fabien COELHO <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgbnech: allow to cancel queries during benchmark"
},
{
"msg_contents": "Hello Fabien,\n\nThank you for your review!\n\nOn Mon, 3 Jul 2023 20:39:23 +0200 (CEST)\nFabien COELHO <[email protected]> wrote:\n\n> \n> Yugo-san,\n> \n> Some feedback about v1 of this patch.\n> \n> Patch applies cleanly, compiles.\n> \n> There are no tests, could there be one? ISTM that one could be done with a \n> \"SELECT pg_sleep(...)\" script??\n\nAgreed. I will add the test.\n\n> The global name \"all_state\" is quite ambiguous, what about \"client_states\" \n> instead? Or maybe it could be avoided, see below.\n> \n> Instead of renaming \"state\" to \"all_state\" (or client_states as suggested \n> above), I'd suggest to minimize the patch by letting \"state\" inside the \n> main and adding a \"client_states = state;\" just after the allocation, or \n> another approach, see below.\n\nOk, I'll fix to add a global variable \"client_states\" and make this point to\n\"state\" instead of changing \"state\" to global.\n \n> Should PQfreeCancel be called on deconnections, in finishCon? I think that \n> there may be a memory leak with the current implementation??\n\nAgreed. I'll fix.\n \n> Maybe it should check that cancel is not NULL before calling PQcancel?\n\nI think this is already checked as below, but am I missing something?\n\n+ if (all_state[i].cancel != NULL)\n+ (void) PQcancel(all_state[i].cancel, errbuf, sizeof(errbuf));\n\n> After looking at the code, I'm very unclear whether they may be some \n> underlying race conditions, or not, depending on when the cancel is \n> triggered. I think that some race conditions are still likely given the \n> current thread 0 implementation, and dealing with them with a barrier or \n> whatever is not desirable at all.\n> \n> In order to work around this issue, ISTM that we should go away from the \n> simple and straightforward thread 0 approach, and the only clean way is \n> that the cancelation should be managed by each thread for its own client.\n> \n> I'd suggest to have the advanceState to call PQcancel when CancelRequested \n> is set and switch to CSTATE_ABORTED to end properly. This means that there \n> would be no need for the global client states pointer, so the patch should \n> be smaller and simpler. Possibly there would be some shortcuts added here \n> and there to avoid lingering after the control-C, in threadRun.\n\nI am not sure this approach is simpler than mine. \n\nIn multi-threads, only one thread can catches the signal and other threads\ncontinue to run. Therefore, if Ctrl+C is pressed while threads are waiting\nresponses from the backend in wait_on_socket_set, only one thread can be\ninterrupted and return, but other threads will continue to wait and cannot\ncheck CancelRequested. So, for implementing your suggestion, we need any hack\nto make all threads return from wait_on_socket_set when the event occurs, but\nI don't have idea to do this in simpler way. \n\nIn my patch, all threads can return from wait_on_socket_set at Ctrl+C\nbecause when thread #0 cancels all connections, the following error is\nsent to all sessions:\n\n ERROR: canceling statement due to user request\n\nand all threads will receive the response from the backend.\n\nRegards,\nYugo Nagata\n\n-- \nYugo NAGATA <[email protected]>\n\n\n",
"msg_date": "Fri, 14 Jul 2023 20:32:01 +0900",
"msg_from": "Yugo NAGATA <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pgbnech: allow to cancel queries during benchmark"
},
{
"msg_contents": "Hello Fabien,\n\nOn Fri, 14 Jul 2023 20:32:01 +0900\nYugo NAGATA <[email protected]> wrote:\n\nI attached the updated patch.\n\n> Hello Fabien,\n> \n> Thank you for your review!\n> \n> On Mon, 3 Jul 2023 20:39:23 +0200 (CEST)\n> Fabien COELHO <[email protected]> wrote:\n> \n> > \n> > Yugo-san,\n> > \n> > Some feedback about v1 of this patch.\n> > \n> > Patch applies cleanly, compiles.\n> > \n> > There are no tests, could there be one? ISTM that one could be done with a \n> > \"SELECT pg_sleep(...)\" script??\n> \n> Agreed. I will add the test.\n\nI added a TAP test.\n\n> \n> > The global name \"all_state\" is quite ambiguous, what about \"client_states\" \n> > instead? Or maybe it could be avoided, see below.\n> > \n> > Instead of renaming \"state\" to \"all_state\" (or client_states as suggested \n> > above), I'd suggest to minimize the patch by letting \"state\" inside the \n> > main and adding a \"client_states = state;\" just after the allocation, or \n> > another approach, see below.\n> \n> Ok, I'll fix to add a global variable \"client_states\" and make this point to\n> \"state\" instead of changing \"state\" to global.\n\nDone.\n\n> \n> > Should PQfreeCancel be called on deconnections, in finishCon? I think that \n> > there may be a memory leak with the current implementation??\n> \n> Agreed. I'll fix.\n\nDone.\n\nRegards,\nYugo Nagata\n\n> \n> > Maybe it should check that cancel is not NULL before calling PQcancel?\n> \n> I think this is already checked as below, but am I missing something?\n> \n> + if (all_state[i].cancel != NULL)\n> + (void) PQcancel(all_state[i].cancel, errbuf, sizeof(errbuf));\n> \n> > After looking at the code, I'm very unclear whether they may be some \n> > underlying race conditions, or not, depending on when the cancel is \n> > triggered. I think that some race conditions are still likely given the \n> > current thread 0 implementation, and dealing with them with a barrier or \n> > whatever is not desirable at all.\n> > \n> > In order to work around this issue, ISTM that we should go away from the \n> > simple and straightforward thread 0 approach, and the only clean way is \n> > that the cancelation should be managed by each thread for its own client.\n> > \n> > I'd suggest to have the advanceState to call PQcancel when CancelRequested \n> > is set and switch to CSTATE_ABORTED to end properly. This means that there \n> > would be no need for the global client states pointer, so the patch should \n> > be smaller and simpler. Possibly there would be some shortcuts added here \n> > and there to avoid lingering after the control-C, in threadRun.\n> \n> I am not sure this approach is simpler than mine. \n> \n> In multi-threads, only one thread can catches the signal and other threads\n> continue to run. Therefore, if Ctrl+C is pressed while threads are waiting\n> responses from the backend in wait_on_socket_set, only one thread can be\n> interrupted and return, but other threads will continue to wait and cannot\n> check CancelRequested. So, for implementing your suggestion, we need any hack\n> to make all threads return from wait_on_socket_set when the event occurs, but\n> I don't have idea to do this in simpler way. 
\n> \n> In my patch, all threads can return from wait_on_socket_set at Ctrl+C\n> because when thread #0 cancels all connections, the following error is\n> sent to all sessions:\n> \n> ERROR: canceling statement due to user request\n> \n> and all threads will receive the response from the backend.\n> \n> Regards,\n> Yugo Nagata\n> \n> -- \n> Yugo NAGATA <[email protected]>\n> \n> \n\n\n-- \nYugo NAGATA <[email protected]>\n\n\n",
"msg_date": "Wed, 2 Aug 2023 16:37:53 +0900",
"msg_from": "Yugo NAGATA <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pgbnech: allow to cancel queries during benchmark"
},
{
"msg_contents": "On Wed, 2 Aug 2023 16:37:53 +0900\nYugo NAGATA <[email protected]> wrote:\n\n> Hello Fabien,\n> \n> On Fri, 14 Jul 2023 20:32:01 +0900\n> Yugo NAGATA <[email protected]> wrote:\n> \n> I attached the updated patch.\n\nI'm sorry. I forgot to attach the patch.\n\nRegards,\nYugo Nagata\n\n> \n> > Hello Fabien,\n> > \n> > Thank you for your review!\n> > \n> > On Mon, 3 Jul 2023 20:39:23 +0200 (CEST)\n> > Fabien COELHO <[email protected]> wrote:\n> > \n> > > \n> > > Yugo-san,\n> > > \n> > > Some feedback about v1 of this patch.\n> > > \n> > > Patch applies cleanly, compiles.\n> > > \n> > > There are no tests, could there be one? ISTM that one could be done with a \n> > > \"SELECT pg_sleep(...)\" script??\n> > \n> > Agreed. I will add the test.\n> \n> I added a TAP test.\n> \n> > \n> > > The global name \"all_state\" is quite ambiguous, what about \"client_states\" \n> > > instead? Or maybe it could be avoided, see below.\n> > > \n> > > Instead of renaming \"state\" to \"all_state\" (or client_states as suggested \n> > > above), I'd suggest to minimize the patch by letting \"state\" inside the \n> > > main and adding a \"client_states = state;\" just after the allocation, or \n> > > another approach, see below.\n> > \n> > Ok, I'll fix to add a global variable \"client_states\" and make this point to\n> > \"state\" instead of changing \"state\" to global.\n> \n> Done.\n> \n> > \n> > > Should PQfreeCancel be called on deconnections, in finishCon? I think that \n> > > there may be a memory leak with the current implementation??\n> > \n> > Agreed. I'll fix.\n> \n> Done.\n> \n> Regards,\n> Yugo Nagata\n> \n> > \n> > > Maybe it should check that cancel is not NULL before calling PQcancel?\n> > \n> > I think this is already checked as below, but am I missing something?\n> > \n> > + if (all_state[i].cancel != NULL)\n> > + (void) PQcancel(all_state[i].cancel, errbuf, sizeof(errbuf));\n> > \n> > > After looking at the code, I'm very unclear whether they may be some \n> > > underlying race conditions, or not, depending on when the cancel is \n> > > triggered. I think that some race conditions are still likely given the \n> > > current thread 0 implementation, and dealing with them with a barrier or \n> > > whatever is not desirable at all.\n> > > \n> > > In order to work around this issue, ISTM that we should go away from the \n> > > simple and straightforward thread 0 approach, and the only clean way is \n> > > that the cancelation should be managed by each thread for its own client.\n> > > \n> > > I'd suggest to have the advanceState to call PQcancel when CancelRequested \n> > > is set and switch to CSTATE_ABORTED to end properly. This means that there \n> > > would be no need for the global client states pointer, so the patch should \n> > > be smaller and simpler. Possibly there would be some shortcuts added here \n> > > and there to avoid lingering after the control-C, in threadRun.\n> > \n> > I am not sure this approach is simpler than mine. \n> > \n> > In multi-threads, only one thread can catches the signal and other threads\n> > continue to run. Therefore, if Ctrl+C is pressed while threads are waiting\n> > responses from the backend in wait_on_socket_set, only one thread can be\n> > interrupted and return, but other threads will continue to wait and cannot\n> > check CancelRequested. So, for implementing your suggestion, we need any hack\n> > to make all threads return from wait_on_socket_set when the event occurs, but\n> > I don't have idea to do this in simpler way. 
\n> > \n> > In my patch, all threads can return from wait_on_socket_set at Ctrl+C\n> > because when thread #0 cancels all connections, the following error is\n> > sent to all sessions:\n> > \n> > ERROR: canceling statement due to user request\n> > \n> > and all threads will receive the response from the backend.\n> > \n> > Regards,\n> > Yugo Nagata\n> > \n> > -- \n> > Yugo NAGATA <[email protected]>\n> > \n> > \n> \n> \n> -- \n> Yugo NAGATA <[email protected]>\n> \n> \n\n\n-- \nYugo NAGATA <[email protected]>",
"msg_date": "Wed, 2 Aug 2023 19:01:40 +0900",
"msg_from": "Yugo NAGATA <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pgbnech: allow to cancel queries during benchmark"
},
{
"msg_contents": "\nHello Yugo-san,\n\nSome feedback about v2.\n\nThere is some dead code (&& false) which should be removed.\n\n>>>> Maybe it should check that cancel is not NULL before calling PQcancel?\n>>>\n>>> I think this is already checked as below, but am I missing something?\n>>>\n>>> + if (all_state[i].cancel != NULL)\n>>> + (void) PQcancel(all_state[i].cancel, errbuf, sizeof(errbuf));\n>>>\n>>>> After looking at the code, I'm very unclear whether they may be some\n>>>> underlying race conditions, or not, depending on when the cancel is\n>>>> triggered. I think that some race conditions are still likely given the\n>>>> current thread 0 implementation, and dealing with them with a barrier or\n>>>> whatever is not desirable at all.\n>>>>\n>>>> In order to work around this issue, ISTM that we should go away from the\n>>>> simple and straightforward thread 0 approach, and the only clean way is\n>>>> that the cancelation should be managed by each thread for its own client.\n>>>>\n>>>> I'd suggest to have the advanceState to call PQcancel when CancelRequested\n>>>> is set and switch to CSTATE_ABORTED to end properly. This means that there\n>>>> would be no need for the global client states pointer, so the patch should\n>>>> be smaller and simpler. Possibly there would be some shortcuts added here\n>>>> and there to avoid lingering after the control-C, in threadRun.\n>>>\n>>> I am not sure this approach is simpler than mine.\n\nMy argument is more about latent race conditions and inter-thread \ninterference than code simplicity.\n\n>>> In multi-threads, only one thread can catches the signal and other threads\n>>> continue to run.\n\nYep. This why I see plenty uncontrolled race conditions if thread 0 \ncancels clients which are managed in parallel by other threads and may be \nactive. 
I'm not really motivated to delve into libpq internals to check \nwhether there are possibly bad issues or not, but if two threads write \nmessage at the same time in the same socket, I assume that this can be \nbad if you are unlucky.\n\nISTM that the rational convention should be that each thread cancels its \nown clients, which ensures that there is no bad interaction between \nthreads.\n\n>>> Therefore, if Ctrl+C is pressed while threads are waiting\n>>> responses from the backend in wait_on_socket_set, only one thread can be\n>>> interrupted and return, but other threads will continue to wait and cannot\n>>> check CancelRequested.\n\n>>> So, for implementing your suggestion, we need any hack\n>>> to make all threads return from wait_on_socket_set when the event occurs, but\n>>> I don't have idea to do this in simpler way.\n\n>>> In my patch, all threads can return from wait_on_socket_set at Ctrl+C\n>>> because when thread #0 cancels all connections, the following error is\n>>> sent to all sessions:\n>>>\n>>> ERROR: canceling statement due to user request\n>>>\n>>> and all threads will receive the response from the backend.\n\nHmmm.\n\nI understand that the underlying issue you are raising is that other \nthreads may be stuck while waiting on socket events and that with your \napproach they will be cleared somehow by socket 0.\n\nI'll say that (1) this point does not address potential race condition \nissues with several thread interacting directly on the same client ;\n(2) thread 0 may also be stuck waiting for events so the cancel is only \ntaken into account when it is woken up.\n\nIf we accept that each thread cancels its clients when it is woken up, \nwhich may imply some (usually small) delay, then it is not so different \nfrom the current version because the code must wait for 0 to wake up \nanyway, and it solves (1). The current version does not address potential \nthread interactions.\n\n-- \nFabien.\n\n\n",
"msg_date": "Wed, 9 Aug 2023 11:06:24 +0200 (CEST)",
"msg_from": "Fabien COELHO <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgbnech: allow to cancel queries during benchmark"
},
{
"msg_contents": "I forgot, about the test:\n\nI think that it should be integrated in the existing \n\"001_pgbench_with_server.pl\" script, because a TAP script is pretty \nexpensive as it creates a cluster, starts it… before running the test.\n\nI'm surprise that IPC::Run does not give any access to the process number. \nAnyway, its object interface seems to allow sending signal:\n\n \t$h->signal(\"...\")\n\nSo the code could be simplified to use that after a small delay.\n\n-- \nFabien.",
"msg_date": "Wed, 9 Aug 2023 11:18:43 +0200 (CEST)",
"msg_from": "Fabien COELHO <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgbnech: allow to cancel queries during benchmark"
},
{
"msg_contents": "Hello Fabien,\n\nOn Wed, 9 Aug 2023 11:06:24 +0200 (CEST)\nFabien COELHO <[email protected]> wrote:\n\n> \n> Hello Yugo-san,\n> \n> Some feedback about v2.\n> \n> There is some dead code (&& false) which should be removed.\n\nI forgot to remove the debug code. I'll remove it.\n\n> >>> In multi-threads, only one thread can catches the signal and other threads\n> >>> continue to run.\n> \n> Yep. This why I see plenty uncontrolled race conditions if thread 0 \n> cancels clients which are managed in parallel by other threads and may be \n> active. I'm not really motivated to delve into libpq internals to check \n> whether there are possibly bad issues or not, but if two threads write \n> message at the same time in the same socket, I assume that this can be \n> bad if you are unlucky.\n> \n> ISTM that the rational convention should be that each thread cancels its \n> own clients, which ensures that there is no bad interaction between \n> threads.\n\nActually, thread #0 and other threads never write message at the same time\nin the same socket. When thread #0 sends cancel requests, they are not sent\nto sockets that other threads are reading or writing. Rather, new another\nsocket for cancel is created for each client, and the backend PID and cancel\nrequest are sent to the socket. PostgreSQL establishes a new connection for\nthe cancel request, and sent a cancel signal to the specified backend.\n\nTherefore, thread #0 and other threads don't access any resources in the same\ntime except to CancelRequested. Is still there any concern about race condition?\n\n> >>> Therefore, if Ctrl+C is pressed while threads are waiting\n> >>> responses from the backend in wait_on_socket_set, only one thread can be\n> >>> interrupted and return, but other threads will continue to wait and cannot\n> >>> check CancelRequested.\n> \n> >>> So, for implementing your suggestion, we need any hack\n> >>> to make all threads return from wait_on_socket_set when the event occurs, but\n> >>> I don't have idea to do this in simpler way.\n> \n> >>> In my patch, all threads can return from wait_on_socket_set at Ctrl+C\n> >>> because when thread #0 cancels all connections, the following error is\n> >>> sent to all sessions:\n> >>>\n> >>> ERROR: canceling statement due to user request\n> >>>\n> >>> and all threads will receive the response from the backend.\n> \n> Hmmm.\n> \n> I understand that the underlying issue you are raising is that other \n> threads may be stuck while waiting on socket events and that with your \n> approach they will be cleared somehow by socket 0.\n> \n> I'll say that (1) this point does not address potential race condition \n> issues with several thread interacting directly on the same client ;\n> (2) thread 0 may also be stuck waiting for events so the cancel is only \n> taken into account when it is woken up.\n\nI answered to (1) the consern about race condition above.\n\nAnd, as to (2), the SIGINT signal is handle only in thread #0 because it is\nblocked in other threads. So, when SIGINT is delivered, thread #0 will be\ninterrupted and woken up immediately from waiting on socket, returning EINTR. \nTherefore, thread #0 would not be stuck.\n \nRegards,\nYugo Nagata\n\n> If we accept that each thread cancels its clients when it is woken up, \n> which may imply some (usually small) delay, then it is not so different \n> from the current version because the code must wait for 0 to wake up \n> anyway, and it solves (1). 
The current version does not address potential \n> thread interactions.\n> \n> -- \n> Fabien.\n\n\n-- \nYugo NAGATA <[email protected]>\n\n\n",
"msg_date": "Thu, 10 Aug 2023 12:23:11 +0900",
"msg_from": "Yugo NAGATA <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pgbnech: allow to cancel queries during benchmark"
},
{
"msg_contents": "On Wed, 9 Aug 2023 11:18:43 +0200 (CEST)\nFabien COELHO <[email protected]> wrote:\n\n> \n> I forgot, about the test:\n> \n> I think that it should be integrated in the existing \n> \"001_pgbench_with_server.pl\" script, because a TAP script is pretty \n> expensive as it creates a cluster, starts it… before running the test.\n\nOk. I'll integrate the test into 001.\n \n> I'm surprise that IPC::Run does not give any access to the process number. \n> Anyway, its object interface seems to allow sending signal:\n> \n> \t$h->signal(\"...\")\n> \n> So the code could be simplified to use that after a small delay.\n\nThank you for your information.\n\nI didn't know $h->signal() and I mimicked the way of\nsrc/bin/psql/t/020_cancel.pl to send SIGINT to a running program. I don't\nknow why the psql test doesn't use the interface, I'll investigate whether\nthis can be used in our purpose, anyway.\n\nRegards,\nYugo Nagata\n\n> \n> -- \n> Fabien.\n\n\n-- \nYugo NAGATA <[email protected]>\n\n\n",
"msg_date": "Thu, 10 Aug 2023 12:32:44 +0900",
"msg_from": "Yugo NAGATA <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pgbnech: allow to cancel queries during benchmark"
},
{
"msg_contents": "Hi Fabien,\n\nOn Thu, 10 Aug 2023 12:32:44 +0900\nYugo NAGATA <[email protected]> wrote:\n\n> On Wed, 9 Aug 2023 11:18:43 +0200 (CEST)\n> Fabien COELHO <[email protected]> wrote:\n> \n> > \n> > I forgot, about the test:\n> > \n> > I think that it should be integrated in the existing \n> > \"001_pgbench_with_server.pl\" script, because a TAP script is pretty \n> > expensive as it creates a cluster, starts it… before running the test.\n> \n> Ok. I'll integrate the test into 001.\n> \n> > I'm surprise that IPC::Run does not give any access to the process number. \n> > Anyway, its object interface seems to allow sending signal:\n> > \n> > \t$h->signal(\"...\")\n> > \n> > So the code could be simplified to use that after a small delay.\n> \n> Thank you for your information.\n> \n> I didn't know $h->signal() and I mimicked the way of\n> src/bin/psql/t/020_cancel.pl to send SIGINT to a running program. I don't\n> know why the psql test doesn't use the interface, I'll investigate whether\n> this can be used in our purpose, anyway.\n\nI attached the updated patch v3. The changes since the previous\npatch includes the following;\n\nI removed the unnecessary condition (&& false) that you\npointed out in [1].\n\nThe test was rewritten by using IPC::Run signal() and integrated\nto \"001_pgbench_with_server.pl\". This test is skipped on Windows\nbecause SIGINT causes to terminate the test itself as discussed\nin [2] about query cancellation test in psql.\n\nI added some comments to describe how query cancellation is\nhandled as I explained in [1].\n\nAlso, I found the previous patch didn't work on Windows so fixed it.\nOn non-Windows system, a thread waiting a response of long query can\nbe interrupted by SIGINT, but on Windows, threads do not return from\nwaiting until queries they are running are cancelled. This is because,\nwhen the signal is received, the system just creates a new thread to\nexecute the callback function specified by setup_cancel_handler, and\nother thread continue to run[3]. Therefore, the queries have to be\ncancelled in the callback function.\n\n[1] https://www.postgresql.org/message-id/a58388ac-5411-4760-ea46-71324d8324cb%40mines-paristech.fr\n[2] https://www.postgresql.org/message-id/20230906004524.2fd6ee049f8a6c6f2690b99c%40sraoss.co.jp\n[3] https://learn.microsoft.com/en-us/windows/console/handlerroutine\n\nRegards,\nYugo Nagata\n\n> \n> -- \n> Yugo NAGATA <[email protected]>\n> \n> \n\n\n-- \nYugo NAGATA <[email protected]>",
"msg_date": "Wed, 6 Sep 2023 20:13:34 +0900",
"msg_from": "Yugo NAGATA <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pgbnech: allow to cancel queries during benchmark"
},
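For illustration, a minimal sketch of the Windows-side handling described in the message above. The names benchmark_cancel_handler and install_cancel_handler are made up for this sketch, and cancel_all() stands for the patch's helper that sends PQcancel to every connection; the actual patch may wire this up differently.

#ifdef WIN32
#include <windows.h>

extern void cancel_all(void);	/* the patch's helper: PQcancel for every client */

/*
 * On Windows, Ctrl+C runs this handler in a freshly created thread while the
 * benchmark threads keep waiting on their sockets, so the cancel requests
 * have to be issued from the handler itself.
 */
static BOOL WINAPI
benchmark_cancel_handler(DWORD dwCtrlType)
{
	if (dwCtrlType == CTRL_C_EVENT || dwCtrlType == CTRL_BREAK_EVENT)
	{
		cancel_all();
		return TRUE;			/* report the event as handled */
	}
	return FALSE;				/* let default processing happen */
}

static void
install_cancel_handler(void)
{
	SetConsoleCtrlHandler(benchmark_cancel_handler, TRUE);
}
#endif							/* WIN32 */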
{
"msg_contents": "On Wed, 6 Sep 2023 20:13:34 +0900\nYugo NAGATA <[email protected]> wrote:\n \n> I attached the updated patch v3. The changes since the previous\n> patch includes the following;\n> \n> I removed the unnecessary condition (&& false) that you\n> pointed out in [1].\n> \n> The test was rewritten by using IPC::Run signal() and integrated\n> to \"001_pgbench_with_server.pl\". This test is skipped on Windows\n> because SIGINT causes to terminate the test itself as discussed\n> in [2] about query cancellation test in psql.\n> \n> I added some comments to describe how query cancellation is\n> handled as I explained in [1].\n> \n> Also, I found the previous patch didn't work on Windows so fixed it.\n> On non-Windows system, a thread waiting a response of long query can\n> be interrupted by SIGINT, but on Windows, threads do not return from\n> waiting until queries they are running are cancelled. This is because,\n> when the signal is received, the system just creates a new thread to\n> execute the callback function specified by setup_cancel_handler, and\n> other thread continue to run[3]. Therefore, the queries have to be\n> cancelled in the callback function.\n> \n> [1] https://www.postgresql.org/message-id/a58388ac-5411-4760-ea46-71324d8324cb%40mines-paristech.fr\n> [2] https://www.postgresql.org/message-id/20230906004524.2fd6ee049f8a6c6f2690b99c%40sraoss.co.jp\n> [3] https://learn.microsoft.com/en-us/windows/console/handlerroutine\n\nI found that --disable-thread-safety option was removed in 68a4b58eca0329.\nSo, I removed codes involving ENABLE_THREAD_SAFETY from the patch.\n\nAlso, I wrote a commit log draft.\n\nAttached is the updated version, v4.\n\nRegards,\nYugo Nagata\n\n\n-- \nYugo NAGATA <[email protected]>",
"msg_date": "Tue, 19 Sep 2023 17:30:11 +0900",
"msg_from": "Yugo NAGATA <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pgbnech: allow to cancel queries during benchmark"
},
{
"msg_contents": "> On Wed, 6 Sep 2023 20:13:34 +0900\n> Yugo NAGATA <[email protected]> wrote:\n> \n>> I attached the updated patch v3. The changes since the previous\n>> patch includes the following;\n>> \n>> I removed the unnecessary condition (&& false) that you\n>> pointed out in [1].\n>> \n>> The test was rewritten by using IPC::Run signal() and integrated\n>> to \"001_pgbench_with_server.pl\". This test is skipped on Windows\n>> because SIGINT causes to terminate the test itself as discussed\n>> in [2] about query cancellation test in psql.\n>> \n>> I added some comments to describe how query cancellation is\n>> handled as I explained in [1].\n>> \n>> Also, I found the previous patch didn't work on Windows so fixed it.\n>> On non-Windows system, a thread waiting a response of long query can\n>> be interrupted by SIGINT, but on Windows, threads do not return from\n>> waiting until queries they are running are cancelled. This is because,\n>> when the signal is received, the system just creates a new thread to\n>> execute the callback function specified by setup_cancel_handler, and\n>> other thread continue to run[3]. Therefore, the queries have to be\n>> cancelled in the callback function.\n>> \n>> [1] https://www.postgresql.org/message-id/a58388ac-5411-4760-ea46-71324d8324cb%40mines-paristech.fr\n>> [2] https://www.postgresql.org/message-id/20230906004524.2fd6ee049f8a6c6f2690b99c%40sraoss.co.jp\n>> [3] https://learn.microsoft.com/en-us/windows/console/handlerroutine\n> \n> I found that --disable-thread-safety option was removed in 68a4b58eca0329.\n> So, I removed codes involving ENABLE_THREAD_SAFETY from the patch.\n> \n> Also, I wrote a commit log draft.\n\n> Previously, Ctrl+C during benchmark killed pgbench immediately,\n> but queries running at that time were not cancelled.\n\nBetter to mention explicitely that queries keep on running on the\nbackend. What about this?\n\nPreviously, Ctrl+C during benchmark killed pgbench immediately, but\nqueries were not canceled and they keep on running on the backend\nuntil they tried to send the result to pgbench.\n\n> The commit\n> fixes this so that cancel requests are sent for all connections\n> before pgbench exits.\n\n\"sent for\" -> \"sent to\"\n\n> Attached is the updated version, v4.\n\n+/* send cancel requests to all connections */\n+static void\n+cancel_all()\n+{\n+\tfor (int i = 0; i < nclients; i++)\n+\t{\n+\t\tchar errbuf[1];\n+\t\tif (client_states[i].cancel != NULL)\n+\t\t\t(void) PQcancel(client_states[i].cancel, errbuf, sizeof(errbuf));\n+\t}\n+}\n+\n\nWhy in case of errors from PQCancel the error message is neglected? I\nthink it's better to print out the error message in case of error.\n\n+\t * On non-Windows, any callback function is not set. When SIGINT is\n+\t * received, CancelRequested is just set, and only thread #0 is interrupted\n+\t * and returns from waiting input from the backend. After that, the thread\n+\t * sends cancel requests to all benchmark queries.\n\nThe second line is a little bit long according to the coding\nstandard. Fix like this?\n\n\t * On non-Windows, any callback function is not set. When SIGINT is\n\t * received, CancelRequested is just set, and only thread #0 is\n\t * interrupted and returns from waiting input from the backend. After\n\t * that, the thread sends cancel requests to all benchmark queries.\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS LLC\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp\n\n\n",
"msg_date": "Mon, 15 Jan 2024 16:49:44 +0900 (JST)",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgbnech: allow to cancel queries during benchmark"
},
{
"msg_contents": "On Mon, 15 Jan 2024 16:49:44 +0900 (JST)\nTatsuo Ishii <[email protected]> wrote:\n\n> > On Wed, 6 Sep 2023 20:13:34 +0900\n> > Yugo NAGATA <[email protected]> wrote:\n> > \n> >> I attached the updated patch v3. The changes since the previous\n> >> patch includes the following;\n> >> \n> >> I removed the unnecessary condition (&& false) that you\n> >> pointed out in [1].\n> >> \n> >> The test was rewritten by using IPC::Run signal() and integrated\n> >> to \"001_pgbench_with_server.pl\". This test is skipped on Windows\n> >> because SIGINT causes to terminate the test itself as discussed\n> >> in [2] about query cancellation test in psql.\n> >> \n> >> I added some comments to describe how query cancellation is\n> >> handled as I explained in [1].\n> >> \n> >> Also, I found the previous patch didn't work on Windows so fixed it.\n> >> On non-Windows system, a thread waiting a response of long query can\n> >> be interrupted by SIGINT, but on Windows, threads do not return from\n> >> waiting until queries they are running are cancelled. This is because,\n> >> when the signal is received, the system just creates a new thread to\n> >> execute the callback function specified by setup_cancel_handler, and\n> >> other thread continue to run[3]. Therefore, the queries have to be\n> >> cancelled in the callback function.\n> >> \n> >> [1] https://www.postgresql.org/message-id/a58388ac-5411-4760-ea46-71324d8324cb%40mines-paristech.fr\n> >> [2] https://www.postgresql.org/message-id/20230906004524.2fd6ee049f8a6c6f2690b99c%40sraoss.co.jp\n> >> [3] https://learn.microsoft.com/en-us/windows/console/handlerroutine\n> > \n> > I found that --disable-thread-safety option was removed in 68a4b58eca0329.\n> > So, I removed codes involving ENABLE_THREAD_SAFETY from the patch.\n> > \n> > Also, I wrote a commit log draft.\n> \n> > Previously, Ctrl+C during benchmark killed pgbench immediately,\n> > but queries running at that time were not cancelled.\n> \n> Better to mention explicitely that queries keep on running on the\n> backend. What about this?\n> \n> Previously, Ctrl+C during benchmark killed pgbench immediately, but\n> queries were not canceled and they keep on running on the backend\n> until they tried to send the result to pgbench.\n\nThank you for your comments. I agree with you, so I fixed the message\nas your suggestion.\n\n> > The commit\n> > fixes this so that cancel requests are sent for all connections\n> > before pgbench exits.\n> \n> \"sent for\" -> \"sent to\"\n\nFixed.\n\n> > Attached is the updated version, v4.\n> \n> +/* send cancel requests to all connections */\n> +static void\n> +cancel_all()\n> +{\n> +\tfor (int i = 0; i < nclients; i++)\n> +\t{\n> +\t\tchar errbuf[1];\n> +\t\tif (client_states[i].cancel != NULL)\n> +\t\t\t(void) PQcancel(client_states[i].cancel, errbuf, sizeof(errbuf));\n> +\t}\n> +}\n> +\n> \n> Why in case of errors from PQCancel the error message is neglected? I\n> think it's better to print out the error message in case of error.\n\nIs the message useful for pgbench users? I saw the error is ignored\nin pg_dump, for example in bin/pg_dump/parallel.c\n\n /*\n * Send QueryCancel to leader connection, if enabled. Ignore errors,\n * there's not much we can do about them anyway.\n */\n if (signal_info.myAH != NULL && signal_info.myAH->connCancel != NULL)\n (void) PQcancel(signal_info.myAH->connCancel,\n errbuf, sizeof(errbuf));\n\n\n> +\t * On non-Windows, any callback function is not set. 
When SIGINT is\n> +\t * received, CancelRequested is just set, and only thread #0 is interrupted\n> +\t * and returns from waiting input from the backend. After that, the thread\n> +\t * sends cancel requests to all benchmark queries.\n> \n> The second line is a little bit long according to the coding\n> standard. Fix like this?\n> \n> \t * On non-Windows, any callback function is not set. When SIGINT is\n> \t * received, CancelRequested is just set, and only thread #0 is\n> \t * interrupted and returns from waiting input from the backend. After\n> \t * that, the thread sends cancel requests to all benchmark queries.\n\nFixed.\n\nThe attached is the updated patch, v5.\n\nRegards,\nYugo Nagata\n\n> Best reagards,\n> --\n> Tatsuo Ishii\n> SRA OSS LLC\n> English: http://www.sraoss.co.jp/index_en/\n> Japanese:http://www.sraoss.co.jp\n\n\n-- \nYugo NAGATA <[email protected]>",
"msg_date": "Fri, 19 Jan 2024 16:51:20 +0900",
"msg_from": "Yugo NAGATA <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pgbnech: allow to cancel queries during benchmark"
},
{
"msg_contents": ">> +/* send cancel requests to all connections */\n>> +static void\n>> +cancel_all()\n>> +{\n>> +\tfor (int i = 0; i < nclients; i++)\n>> +\t{\n>> +\t\tchar errbuf[1];\n>> +\t\tif (client_states[i].cancel != NULL)\n>> +\t\t\t(void) PQcancel(client_states[i].cancel, errbuf, sizeof(errbuf));\n>> +\t}\n>> +}\n>> +\n>> \n>> Why in case of errors from PQCancel the error message is neglected? I\n>> think it's better to print out the error message in case of error.\n> \n> Is the message useful for pgbench users? I saw the error is ignored\n> in pg_dump, for example in bin/pg_dump/parallel.c\n\nI think the situation is different from pg_dump. Unlike pg_dump, if\nPQcancel does not work, users can fix the problem by using\npg_terminate_backend or kill command. In order to make this work, an\nappropriate error message is essential.\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS LLC\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp\n\n\n",
"msg_date": "Fri, 19 Jan 2024 17:46:03 +0900 (JST)",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgbnech: allow to cancel queries during benchmark"
},
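For illustration, a sketch of how the error reporting suggested here could look, based on the cancel_all() excerpt quoted above: the error buffer is enlarged (PQcancel fills it and returns 0 on failure), and pg_log_error() is assumed to be available as elsewhere in pgbench. This is only a sketch, not the code that ended up in the later patch versions.

/* send cancel requests to all connections, reporting any failure */
static void
cancel_all(void)
{
	for (int i = 0; i < nclients; i++)
	{
		char		errbuf[256];

		if (client_states[i].cancel == NULL)
			continue;
		if (!PQcancel(client_states[i].cancel, errbuf, sizeof(errbuf)))
			pg_log_error("could not send cancel request: %s", errbuf);
	}
}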
{
"msg_contents": "On Fri, 19 Jan 2024 17:46:03 +0900 (JST)\nTatsuo Ishii <[email protected]> wrote:\n\n> >> +/* send cancel requests to all connections */\n> >> +static void\n> >> +cancel_all()\n> >> +{\n> >> +\tfor (int i = 0; i < nclients; i++)\n> >> +\t{\n> >> +\t\tchar errbuf[1];\n> >> +\t\tif (client_states[i].cancel != NULL)\n> >> +\t\t\t(void) PQcancel(client_states[i].cancel, errbuf, sizeof(errbuf));\n> >> +\t}\n> >> +}\n> >> +\n> >> \n> >> Why in case of errors from PQCancel the error message is neglected? I\n> >> think it's better to print out the error message in case of error.\n> > \n> > Is the message useful for pgbench users? I saw the error is ignored\n> > in pg_dump, for example in bin/pg_dump/parallel.c\n> \n> I think the situation is different from pg_dump. Unlike pg_dump, if\n> PQcancel does not work, users can fix the problem by using\n> pg_terminate_backend or kill command. In order to make this work, an\n> appropriate error message is essential.\n\nMakes sense. I fixed to emit an error message when PQcancel fails.\n\nAlso, I added some comments about the signal handling on Windows\nto explain why the different way than non-Windows is required;\n\n+ * On Windows, a callback function is set in which query cancel requests\n+ * are sent to all benchmark queries running in the backend. This is\n+ * required because all threads running queries continue to run without\n+ * interrupted even when the signal is received.\n+ *\n\nAttached is the updated patch, v6.\n\n> Best reagards,\n> --\n> Tatsuo Ishii\n> SRA OSS LLC\n> English: http://www.sraoss.co.jp/index_en/\n> Japanese:http://www.sraoss.co.jp\n> \n> \n\n\n-- \nYugo NAGATA <[email protected]>",
"msg_date": "Wed, 24 Jan 2024 22:17:44 +0900",
"msg_from": "Yugo NAGATA <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pgbnech: allow to cancel queries during benchmark"
},
{
"msg_contents": "On Wed, 24 Jan 2024 22:17:44 +0900\nYugo NAGATA <[email protected]> wrote:\n \n> Attached is the updated patch, v6.\n\nCurrently, on non-Windows, SIGINT is received only by thread #0. \nCancelRequested is checked during the loop in the thread, and\nqueries are cancelled if it is set. However, once thread #0 exits\nthe loop due to some runtime error and starts waiting in pthread_join,\nthere is no opportunity to cancel queries run by other threads. \n\nIn addition, if -C option is specified, connections are created for\neach transaction, so cancel objects (PGcancel) also have to be\nrecreated at each time in each thread. However, these cancel objects\nare used in a specific thread to perform cancel for all queries,\nwhich is not safe because a thread refers to objects updated by other\nthreads.\n\nI think the first problem would be addressed by any of followings.\n\n(1a) Perform cancels in the signal handler. The signal handler will be\ncalled even while the thread 0 is blocked in pthread_join. This is safe\nbecause PQcancel is callable from a signal handler.\n\n(1b) Create an additional dedicated thread that calls sigwait on SIGINT\nand performs query cancel. As far as I studied, creating such dedicated\nthread calling sigwait is a typical way to handle signal in multi-threaded\nprogramming.\n\n(1c) Each thread performs cancel for queries run by each own, rather than\nthat thread 0 cancels all queries. For the purpose, pthread_kill might be\nused to interrupt threads blocked in wait_on_socket_set. \n\nThe second one would be addressed by any of followings. \n\n(2a) Use critical section when accessing PGcancel( e.g by using\npthread_mutex (non-Windows) or EnterCriticalSection (Windows)). On\nnon-Windows, we cannot use this way when calling PQcancel in a signal\nhandler ((1a) above) because acquiring a lock is not re-entrant.\n\n(2b) Perform query cancel in each thread that has created the connection\n(same as (1c) above).\n\nConsidering both, possible combination would be either (1b)&(2a) or\n(1c)&(2b). I would prefer the former way, because creating the\ndedicated thread handling SIGINT signal and canceling all queries seems\nsimpler and safer than calling pthread_kill in the SIGINT signal handler\nto send another signal to other threads. I'll update the patch in\nthis way soon.\n\nRegards,\nYugo Nagata\n\n\n> \n> > Best reagards,\n> > --\n> > Tatsuo Ishii\n> > SRA OSS LLC\n> > English: http://www.sraoss.co.jp/index_en/\n> > Japanese:http://www.sraoss.co.jp\n> > \n> > \n> \n> \n> -- \n> Yugo NAGATA <[email protected]>\n\n\n-- \nYugo NAGATA <[email protected]>\n\n\n",
"msg_date": "Wed, 7 Feb 2024 10:19:03 +0900",
"msg_from": "Yugo NAGATA <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pgbnech: allow to cancel queries during benchmark"
},
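For illustration, a minimal sketch of option (1b) above: a dedicated thread that blocks SIGINT everywhere and waits for it with sigwait(). cancel_all() again stands for the patch's helper, and the function names here are made up; error handling and how pgbench shuts down afterwards are left out.

#include <pthread.h>
#include <signal.h>

extern void cancel_all(void);	/* the patch's helper */

static sigset_t sigint_set;

/* runs for the whole benchmark; wakes up only when SIGINT arrives */
static void *
sigint_wait_thread(void *arg)
{
	int			sig;

	for (;;)
	{
		if (sigwait(&sigint_set, &sig) == 0 && sig == SIGINT)
			cancel_all();
	}
	return NULL;
}

/* call before any benchmark thread is created, so all inherit the mask */
static void
start_sigint_thread(void)
{
	pthread_t	thread;

	sigemptyset(&sigint_set);
	sigaddset(&sigint_set, SIGINT);
	pthread_sigmask(SIG_BLOCK, &sigint_set, NULL);
	pthread_create(&thread, NULL, sigint_wait_thread, NULL);
}

Because every later thread inherits the blocked mask, only the dedicated thread ever sees SIGINT, which sidesteps the questions about what is safe to do in an asynchronous signal handler.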
{
"msg_contents": "Due to commit 61461a300c1c, this patch needs to be reworked.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"La fuerza no está en los medios físicos\nsino que reside en una voluntad indomable\" (Gandhi)\n\n\n",
"msg_date": "Sat, 30 Mar 2024 14:35:37 +0100",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgbnech: allow to cancel queries during benchmark"
},
{
"msg_contents": "On Sat, 30 Mar 2024 14:35:37 +0100\nAlvaro Herrera <[email protected]> wrote:\n\n> Due to commit 61461a300c1c, this patch needs to be reworked.\n\nThank you for pointing out this.\n\nAlthough cfbot doesn't report any failure, but PQcancel is now\ndeprecated and insecure. I'll consider it too while fixing a\nproblem I found in [1].\n\n[1] https://www.postgresql.org/message-id/20240207101903.b5846c25808f64a91ee2e7a2%40sraoss.co.jp\n\nRegards,\nYugo Nagata\n\n> -- \n> Álvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n> \"La fuerza no está en los medios físicos\n> sino que reside en una voluntad indomable\" (Gandhi)\n\n\n-- \nYugo NAGATA <[email protected]>\n\n\n",
"msg_date": "Sun, 31 Mar 2024 22:25:02 +0900",
"msg_from": "Yugo NAGATA <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pgbnech: allow to cancel queries during benchmark"
}
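For illustration, what per-connection cancellation might look like on top of the non-deprecated API added by commit 61461a300c1c. This is only a sketch under assumptions: it supposes the reworked patch still has a PGconn per client to hand in, uses the blocking variant for simplicity, and reuses pgbench's pg_log_error() for reporting.

#include <libpq-fe.h>

/* cancel one client's in-flight query using the newer cancel API */
static void
cancel_one(PGconn *conn)
{
	PGcancelConn *cancel_conn = PQcancelCreate(conn);

	if (cancel_conn == NULL)
		return;					/* out of memory; nothing more we can do */
	if (!PQcancelBlocking(cancel_conn))
		pg_log_error("could not send cancel request: %s",
					 PQcancelErrorMessage(cancel_conn));
	PQcancelFinish(cancel_conn);
}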
] |
[
{
"msg_contents": "Hi hackers,\n\nIn memory contexts, block and chunk sizes are likely to be limited by\nsome upper bounds. Some examples of those bounds can be\nMEMORYCHUNK_MAX_BLOCKOFFSET and MEMORYCHUNK_MAX_VALUE. Both values are\nonly 1 less than 1GB.\nThis makes memory contexts to have blocks/chunks with sizes less than\n1GB. Such sizes can be stored in 32-bits. Currently, \"Size\" type,\nwhich is 64-bit, is used, but 32-bit integers should be enough to\nstore any value less than 1GB.\n\nAttached patch is an attempt to change the types of some fields to\nuint32 from Size in aset, slab and generation memory contexts.\nI tried to find most of the places that needed to be changed to\nuint32, but I probably missed some. I can add more places if you feel\nlike it.\n\nI would appreciate any feedback.\n\nThanks,\n-- \nMelih Mutlu\nMicrosoft",
"msg_date": "Mon, 26 Jun 2023 17:59:09 +0300",
"msg_from": "Melih Mutlu <[email protected]>",
"msg_from_op": true,
"msg_subject": "Changing types of block and chunk sizes in memory contexts"
},
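For illustration, the observation behind the proposal written out as a stand-alone check: the block-offset/value limits in MemoryChunk sit just below 1GB, so any size bounded by them fits in 32 bits. The struct below is a simplified stand-in, not the real AllocSetContext, and the constant only mirrors the value described above.

#include <assert.h>
#include <stdint.h>

/* mirrors MEMORYCHUNK_MAX_BLOCKOFFSET / MEMORYCHUNK_MAX_VALUE (1GB - 1) */
#define MAX_MEMORYCHUNK_VALUE	((uint64_t) 0x3FFFFFFF)

static_assert(MAX_MEMORYCHUNK_VALUE <= UINT32_MAX,
			  "block and chunk sizes fit in 32 bits");

typedef struct ExampleContextSizes
{
	uint32_t	initBlockSize;	/* previously Size (64-bit) */
	uint32_t	nextBlockSize;	/* previously Size */
	uint32_t	maxBlockSize;	/* previously Size */
} ExampleContextSizes;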
{
"msg_contents": "> In memory contexts, block and chunk sizes are likely to be limited by\n> some upper bounds. Some examples of those bounds can be\n> MEMORYCHUNK_MAX_BLOCKOFFSET and MEMORYCHUNK_MAX_VALUE. Both values are\n> only 1 less than 1GB.\n> This makes memory contexts to have blocks/chunks with sizes less than\n> 1GB. Such sizes can be stored in 32-bits. Currently, \"Size\" type,\n> which is 64-bit, is used, but 32-bit integers should be enough to\n> store any value less than 1GB.\n\nsize_t (= Size) is the correct type in C to store the size of an object \nin memory. This is partially a self-documentation issue: If I see \nsize_t in a function signature, I know what is intended; if I see \nuint32, I have to wonder what the intent was.\n\nYou could make an argument that using shorter types would save space for \nsome internal structs, but then you'd have to show some more information \nwhere and why that would be beneficial. (But again, self-documentation: \nIf one were to do that, I would argue for introducing a custom type like \npg_short_size_t.)\n\nAbsent any strong performance argument, I don't see the benefit of this \nchange. People might well want to experiment with MEMORYCHUNK_... \nsettings larger than 1GB.\n\n\n\n",
"msg_date": "Wed, 28 Jun 2023 10:13:38 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Changing types of block and chunk sizes in memory contexts"
},
{
"msg_contents": "On Wed, 28 Jun 2023 at 20:13, Peter Eisentraut <[email protected]> wrote:\n> size_t (= Size) is the correct type in C to store the size of an object\n> in memory. This is partially a self-documentation issue: If I see\n> size_t in a function signature, I know what is intended; if I see\n> uint32, I have to wonder what the intent was.\n\nPerhaps it's ok to leave the context creation functions with Size\ntyped parameters and then just Assert the passed-in sizes are not\nlarger than 1GB within the context creation function. That way we\ncould keep this change self contained in the .c file for the given\nmemory context. That would mean there's no less readability. If we\never wanted to lift the 1GB limit on block sizes then we'd not need to\nswitch the function signature again. There's documentation where the\nstruct's field is declared, so having a smaller type in the struct\nitself does not seem like a reduction of documentation quality.\n\n> You could make an argument that using shorter types would save space for\n> some internal structs, but then you'd have to show some more information\n> where and why that would be beneficial.\n\nI think there's not much need to go proving this speeds something up.\nThere's just simply no point in the struct fields being changed in\nMelih's patch to be bigger than 32 bits as we never need to store more\nthan 1GB in them. Reducing these down means we may have to touch\nfewer cache lines and we'll also have more space on the keeper blocks\nto store allocations. Memory allocation performance is fairly\nfundamental to Postgres's performance. In my view, we shouldn't have\nfields that are twice as large as they need to be in code as hot as\nthis.\n\n> Absent any strong performance argument, I don't see the benefit of this\n> change. People might well want to experiment with MEMORYCHUNK_...\n> settings larger than 1GB.\n\nAnyone doing so will be editing C code anyway. They can adjust these\nfields then.\n\nDavid\n\n\n",
"msg_date": "Wed, 28 Jun 2023 21:37:39 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Changing types of block and chunk sizes in memory contexts"
},
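For illustration, the shape of what is being suggested here: keep the wide type in the public signature and narrow only at the assignment, with the limits enforced by assertions. This is a simplified stand-in with invented names; the real creation functions validate more than shown.

#include <assert.h>
#include <stddef.h>
#include <stdint.h>

typedef struct ExampleSet
{
	uint32_t	initBlockSize;
	uint32_t	nextBlockSize;
	uint32_t	maxBlockSize;
} ExampleSet;

/* callers keep passing size_t; storage is uint32 once the bounds are checked */
static void
example_set_block_sizes(ExampleSet *set, size_t initBlockSize, size_t maxBlockSize)
{
	assert(initBlockSize >= 1024);
	assert(maxBlockSize >= initBlockSize);
	assert(maxBlockSize <= (size_t) 0x3FFFFFFF);	/* stays below 1GB */

	set->initBlockSize = (uint32_t) initBlockSize;
	set->nextBlockSize = (uint32_t) initBlockSize;
	set->maxBlockSize = (uint32_t) maxBlockSize;
}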
{
"msg_contents": "David Rowley <[email protected]> writes:\n> Perhaps it's ok to leave the context creation functions with Size\n> typed parameters and then just Assert the passed-in sizes are not\n> larger than 1GB within the context creation function.\n\nYes, I'm strongly opposed to not using Size/size_t in the mmgr APIs.\nIf we go that road, we're going to have a problem when someone\ninevitably wants to pass a larger-than-GB value for some context\ntype.\n\nWhat happens in semi-private structs is a different matter, although\nI'm a little dubious that shaving a couple of bytes from context\nheaders is a useful activity. The self-documentation argument\nstill has some force there, so I agree with Peter that some positive\nbenefit has to be shown.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 28 Jun 2023 06:59:43 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Changing types of block and chunk sizes in memory contexts"
},
{
"msg_contents": "On 6/28/23 12:59, Tom Lane wrote:\n> David Rowley <[email protected]> writes:\n>> Perhaps it's ok to leave the context creation functions with Size\n>> typed parameters and then just Assert the passed-in sizes are not\n>> larger than 1GB within the context creation function.\n> \n> Yes, I'm strongly opposed to not using Size/size_t in the mmgr APIs.\n> If we go that road, we're going to have a problem when someone\n> inevitably wants to pass a larger-than-GB value for some context\n> type.\n\n+1\n\n> What happens in semi-private structs is a different matter, although\n> I'm a little dubious that shaving a couple of bytes from context\n> headers is a useful activity. The self-documentation argument\n> still has some force there, so I agree with Peter that some positive\n> benefit has to be shown.\n> \n\nYeah. FWIW I was interested what the patch does in practice, so I\nchecked what pahole says about impact on struct sizes:\n\nAllocSetContext 224B -> 208B (4 cachelines)\nGenerationContext 152B -> 136B (3 cachelines)\nSlabContext 200B -> 200B (no change, adds 4B hole)\n\nNothing else changes, AFAICS. I find it hard to believe this could have\nany sort of positive benefit - I doubt we ever have enough contexts for\nthis to matter.\n\nWhen I first saw the patch I was thinking it's probably changing how we\nstore the per-chunk requested_size. Maybe that'd make a difference,\nalthough 4B is tiny compared to what we waste due to the doubling.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 28 Jun 2023 23:26:00 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Changing types of block and chunk sizes in memory contexts"
},
{
"msg_contents": "Tomas Vondra <[email protected]> writes:\n> ... 4B is tiny compared to what we waste due to the doubling.\n\nYeah. I've occasionally wondered if we should rethink aset.c's\n\"only power-of-2 chunk sizes\" rule. Haven't had the bandwidth\nto pursue the idea though.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 28 Jun 2023 17:56:55 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Changing types of block and chunk sizes in memory contexts"
},
{
"msg_contents": "Hi,\n\nOn 2023-06-28 23:26:00 +0200, Tomas Vondra wrote:\n> Yeah. FWIW I was interested what the patch does in practice, so I\n> checked what pahole says about impact on struct sizes:\n> \n> AllocSetContext 224B -> 208B (4 cachelines)\n> GenerationContext 152B -> 136B (3 cachelines)\n> SlabContext 200B -> 200B (no change, adds 4B hole)\n> \n> Nothing else changes, AFAICS. I find it hard to believe this could have\n> any sort of positive benefit - I doubt we ever have enough contexts for\n> this to matter.\n\nI don't think it's that hard to believe. We create a lot of memory contexts\nthat we never or just barely use. Just reducing the number of cachelines\ntouched for that can't hurt. This does't quite get us to reducing the size to\na lower number of cachelines, but it's a good step.\n\nThere are a few other fields that we can get rid of.\n\n- Afaics AllocSet->keeper is unnecessary these days, as it is always allocated\n together with the context itself. Saves 8 bytes.\n\n- The set of memory context types isn't runtime extensible. We could replace\n MemoryContextData->methods with a small integer index into mcxt_methods. I\n think that might actually end up being as-cheap or even cheaper than the\n current approach. Saves 8 bytes.\n\nTthat's sufficient for going to 3 cachelines.\n\n\n- We could store the power of 2 for initBlockSize, nextBlockSize,\n maxBlockSize, instead of the \"raw\" value. That'd make them one byte\n each. Which also would get rid of the concerns around needing a\n \"mini_size_t\" type.\n\n- freeListIndex could be a single byte as well (saving 7 bytes, as right now\n we loose 4 trailing bytes due to padding).\n\nThat would save another 12 bytes, if I calculate correctly. 25% shrinkage\ntogether ain't bad.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 28 Jun 2023 16:34:09 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Changing types of block and chunk sizes in memory contexts"
},
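For illustration, a sketch of the power-of-2 idea from the list above: store each block size as a one-byte exponent and expand it on use. It assumes block sizes would be required to be powers of two (as the standard ALLOCSET_*_SIZES are); all names here are invented.

#include <assert.h>
#include <stddef.h>
#include <stdint.h>

typedef struct ExampleBlockExps
{
	uint8_t		initBlockExp;	/* initBlockSize == (size_t) 1 << initBlockExp */
	uint8_t		nextBlockExp;
	uint8_t		maxBlockExp;
} ExampleBlockExps;

static inline uint8_t
size_to_exp(size_t size)
{
	uint8_t		exp = 0;

	assert(size > 0 && (size & (size - 1)) == 0);	/* powers of two only */
	while (((size_t) 1 << exp) < size)
		exp++;
	return exp;
}

static inline size_t
exp_to_size(uint8_t exp)
{
	return (size_t) 1 << exp;
}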
{
"msg_contents": "Hi,\n\nOn 2023-06-28 17:56:55 -0400, Tom Lane wrote:\n> Tomas Vondra <[email protected]> writes:\n> > ... 4B is tiny compared to what we waste due to the doubling.\n> \n> Yeah. I've occasionally wondered if we should rethink aset.c's\n> \"only power-of-2 chunk sizes\" rule. Haven't had the bandwidth\n> to pursue the idea though.\n\nMe too. It'd not be trivial to do without also incurring performance overhead.\n\nA somewhat easier thing we could try is to carve the \"rounding up\" space into\nsmaller chunks, similar to what we do for full blocks. It wouldn't make sense\nto do that for the smaller size classes, but above 64-256 bytes or such, I\nthink the wins might be big enough to outweight the costs?\n\nOf course that doesn't guarantee that that memory in those smaller size\nclasses is going to be used...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 28 Jun 2023 16:42:09 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Changing types of block and chunk sizes in memory contexts"
},
{
"msg_contents": "On Thu, 29 Jun 2023 at 09:26, Tomas Vondra\n<[email protected]> wrote:\n> AllocSetContext 224B -> 208B (4 cachelines)\n> GenerationContext 152B -> 136B (3 cachelines)\n> SlabContext 200B -> 200B (no change, adds 4B hole)\n>\n> Nothing else changes, AFAICS.\n\nI don't think a lack of a reduction in the number of cache lines is\nthe important part. Allowing more space on the keeper block, which is\nat the end of the context struct seems more useful. I understand that\nthe proposal is just to shave off 12 bytes and that's not exactly huge\nwhen it's just once per context, but we do create quite a large number\nof contexts with ALLOCSET_SMALL_SIZES which have a 1KB initial block\nsize. 12 bytes in 1024 is not terrible.\n\nIt's not exactly an invasive change. It does not add any complexity\nto the code and as far as I can see, about zero risk of it slowing\nanything down.\n\nDavid\n\n\n",
"msg_date": "Thu, 29 Jun 2023 15:46:47 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Changing types of block and chunk sizes in memory contexts"
},
{
"msg_contents": "On 6/29/23 01:34, Andres Freund wrote:\n> Hi,\n> \n> On 2023-06-28 23:26:00 +0200, Tomas Vondra wrote:\n>> Yeah. FWIW I was interested what the patch does in practice, so I\n>> checked what pahole says about impact on struct sizes:\n>>\n>> AllocSetContext 224B -> 208B (4 cachelines)\n>> GenerationContext 152B -> 136B (3 cachelines)\n>> SlabContext 200B -> 200B (no change, adds 4B hole)\n>>\n>> Nothing else changes, AFAICS. I find it hard to believe this could have\n>> any sort of positive benefit - I doubt we ever have enough contexts for\n>> this to matter.\n> \n> I don't think it's that hard to believe. We create a lot of memory contexts\n> that we never or just barely use. Just reducing the number of cachelines\n> touched for that can't hurt. This does't quite get us to reducing the size to\n> a lower number of cachelines, but it's a good step.\n> \n> There are a few other fields that we can get rid of.\n> \n> - Afaics AllocSet->keeper is unnecessary these days, as it is always allocated\n> together with the context itself. Saves 8 bytes.\n> \n> - The set of memory context types isn't runtime extensible. We could replace\n> MemoryContextData->methods with a small integer index into mcxt_methods. I\n> think that might actually end up being as-cheap or even cheaper than the\n> current approach. Saves 8 bytes.\n> \n> Tthat's sufficient for going to 3 cachelines.\n> \n> \n> - We could store the power of 2 for initBlockSize, nextBlockSize,\n> maxBlockSize, instead of the \"raw\" value. That'd make them one byte\n> each. Which also would get rid of the concerns around needing a\n> \"mini_size_t\" type.\n> \n> - freeListIndex could be a single byte as well (saving 7 bytes, as right now\n> we loose 4 trailing bytes due to padding).\n> \n> That would save another 12 bytes, if I calculate correctly. 25% shrinkage\n> together ain't bad.\n> \n\nI don't oppose these changes, but I still don't quite believe it'll make\na measurable difference (even if we manage to save a cacheline or two).\nI'd definitely like to see some measurements demonstrating it's worth\nthe extra complexity.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 29 Jun 2023 11:58:27 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Changing types of block and chunk sizes in memory contexts"
},
{
"msg_contents": "Hi,\n\nOn 2023-06-29 11:58:27 +0200, Tomas Vondra wrote:\n> On 6/29/23 01:34, Andres Freund wrote:\n> > On 2023-06-28 23:26:00 +0200, Tomas Vondra wrote:\n> >> Yeah. FWIW I was interested what the patch does in practice, so I\n> >> checked what pahole says about impact on struct sizes:\n> >>\n> >> AllocSetContext 224B -> 208B (4 cachelines)\n> >> GenerationContext 152B -> 136B (3 cachelines)\n> >> SlabContext 200B -> 200B (no change, adds 4B hole)\n> ...\n> > That would save another 12 bytes, if I calculate correctly. 25% shrinkage\n> > together ain't bad.\n> >\n>\n> I don't oppose these changes, but I still don't quite believe it'll make\n> a measurable difference (even if we manage to save a cacheline or two).\n> I'd definitely like to see some measurements demonstrating it's worth\n> the extra complexity.\n\nI hacked (emphasis on that) a version together that shrinks AllocSetContext\ndown to 176 bytes.\n\nThere seem to be some minor performance gains, and some not too shabby memory\nsavings.\n\nE.g. a backend after running readonly pgbench goes from (results repeat\nprecisely across runs):\n\npgbench: Grand total: 1361528 bytes in 289 blocks; 367480 free (206 chunks); 994048 used\nto:\npgbench: Grand total: 1339000 bytes in 278 blocks; 352352 free (188 chunks); 986648 used\n\n\nRunning a total over all connections in the main regression tests gives less\nof a win (best of three):\n\nbackends grand blocks free chunks used\n690 1046956664 111373 370680728 291436 676275936\n\nto:\n\nbackends grand blocks free chunks used\n690 1045226056 111099 372972120 297969 672253936\n\n\n\nthe latter is produced with this beauty:\nninja && m test --suite setup --no-rebuild && m test --no-rebuild --print-errorlogs regress/regress -v && grep \"Grand total\" testrun/regress/regress/log/postmaster.log|sed -E -e 's/.*Grand total: (.*) bytes in (.*) blocks; (.*) free \\((.*) chunks\\); (.*) used/\\1\\t\\2\\t\\3\\t\\4\\t\\5/'|awk '{backends += 1; grand += $1; blocks += $2; free += $3; chunks += $4; used += $5} END{print backends, grand, blocks, free, chunks, used}'\n\n\nThere's more to get. The overhead of AllocSetBlock also plays into this. Both\ndue to the keeper block and obviously separate blocks getting allocated\nsubsequently. We e.g. don't need AllocBlockData->next,prev as 8 byte pointers\n(some trickiness would be required for external blocks, but they could combine\nboth).\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 29 Jun 2023 17:29:52 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Changing types of block and chunk sizes in memory contexts"
},
{
"msg_contents": "Hi,\n\nThanks for your comments.\n\nTom Lane <[email protected]>, 28 Haz 2023 Çar, 13:59 tarihinde şunu yazdı:\n>\n> David Rowley <[email protected]> writes:\n> > Perhaps it's ok to leave the context creation functions with Size\n> > typed parameters and then just Assert the passed-in sizes are not\n> > larger than 1GB within the context creation function.\n>\n> Yes, I'm strongly opposed to not using Size/size_t in the mmgr APIs.\n> If we go that road, we're going to have a problem when someone\n> inevitably wants to pass a larger-than-GB value for some context\n> type.\n\nI reverted changes in the context creation functions and only changed\nthe types in structs.\nI believe there are already lines to assert whether the sizes are less\nthan 1GB, so we should be safe there.\n\nAndres Freund <[email protected]>, 29 Haz 2023 Per, 02:34 tarihinde şunu yazdı:\n> There are a few other fields that we can get rid of.\n>\n> - Afaics AllocSet->keeper is unnecessary these days, as it is always allocated\n> together with the context itself. Saves 8 bytes.\n\nThis seemed like a safe change and removed the keeper field in\nAllocSet and Generation contexts. It saves an additional 8 bytes.\n\nBest,\n-- \nMelih Mutlu\nMicrosoft",
"msg_date": "Mon, 10 Jul 2023 17:41:11 +0300",
"msg_from": "Melih Mutlu <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Changing types of block and chunk sizes in memory contexts"
},
{
"msg_contents": "On Tue, 11 Jul 2023 at 02:41, Melih Mutlu <[email protected]> wrote:\n> > - Afaics AllocSet->keeper is unnecessary these days, as it is always allocated\n> > together with the context itself. Saves 8 bytes.\n>\n> This seemed like a safe change and removed the keeper field in\n> AllocSet and Generation contexts. It saves an additional 8 bytes.\n\nSeems like a good idea for an additional 8-bytes.\n\nI looked at your v2 patch. The only thing that really looked wrong\nwere the (Size) casts in the context creation functions. These should\nhave been casts to uint32 rather than Size. Basically, all the casts\ndo is say to the compiler \"Yes, I know this could cause truncation due\nto assigning to a size smaller than the source type's size\". Some\ncompilers will likely warn without that and the cast will stop them.\nWe know there can't be any truncation due to the Asserts. There's also\nthe fundamental limitation that MemoryChunk can't store block offsets\nlarger than 1GBs anyway, so things will go bad if we tried to have\nblocks bigger than 1GB.\n\nAside from that, I thought that a couple of other slab.c fields could\nbe shrunken to uint32 as the v2 patch just reduces the size of 1 field\nwhich just creates a 4-byte hole in SlabContext. The fullChunkSize\nfield is just the MAXALIGN(chunkSize) + sizeof(MemoryChunk). We\nshould never be using slab contexts for any chunks anywhere near that\nsize. aset.c would be a better context for that, so it seems fine to\nme to further restrict the maximum supported chunk size by another 8\nbytes.\n\nI've attached your patch again along with a small delta of what I adjusted.\n\nMy thoughts on these changes are that it's senseless to have Size\ntyped fields for storing a value that's never larger than 2^30.\nGetting rid of the keeper pointer seems like a cleanup as it's pretty\nmuch a redundant field. For small sized contexts like the ones used\nfor storing index relcache entries, I think it makes sense to save 20\nmore bytes. Each backend can have many thousand of those and there\ncould be many hundred backends. If we can fit more allocations on that\ninitial 1 kilobyte keeper block without having to allocate any\nadditional blocks, then that's great.\n\nI feel that Andres's results showing several hundred fewer block\nallocations shows this working. Albeit, his patch reduced the size of\nthe structs even further than what v3 does. I think v3 is enough for\nnow as the additional changes Andres mentioned require some more\ninvasive code changes to make work.\n\nIf nobody objects or has other ideas about this, modulo commit\nmessage, I plan to push the attached on Monday.\n\nDavid",
"msg_date": "Thu, 13 Jul 2023 17:03:50 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Changing types of block and chunk sizes in memory contexts"
},
{
"msg_contents": "Hi David,\n\nDavid Rowley <[email protected]>, 13 Tem 2023 Per, 08:04 tarihinde şunu\nyazdı:\n\n> I looked at your v2 patch. The only thing that really looked wrong\n> were the (Size) casts in the context creation functions. These should\n> have been casts to uint32 rather than Size. Basically, all the casts\n> do is say to the compiler \"Yes, I know this could cause truncation due\n> to assigning to a size smaller than the source type's size\". Some\n> compilers will likely warn without that and the cast will stop them.\n> We know there can't be any truncation due to the Asserts. There's also\n> the fundamental limitation that MemoryChunk can't store block offsets\n> larger than 1GBs anyway, so things will go bad if we tried to have\n> blocks bigger than 1GB.\n>\n\nRight! I don't know why I cast them to Size. Thanks for the fix.\n\nBest,\n-- \nMelih Mutlu\nMicrosoft\n\nHi David,David Rowley <[email protected]>, 13 Tem 2023 Per, 08:04 tarihinde şunu yazdı:\nI looked at your v2 patch. The only thing that really looked wrong\nwere the (Size) casts in the context creation functions. These should\nhave been casts to uint32 rather than Size. Basically, all the casts\ndo is say to the compiler \"Yes, I know this could cause truncation due\nto assigning to a size smaller than the source type's size\". Some\ncompilers will likely warn without that and the cast will stop them.\nWe know there can't be any truncation due to the Asserts. There's also\nthe fundamental limitation that MemoryChunk can't store block offsets\nlarger than 1GBs anyway, so things will go bad if we tried to have\nblocks bigger than 1GB.Right! I don't know why I cast them to Size. Thanks for the fix.Best,-- Melih MutluMicrosoft",
"msg_date": "Fri, 14 Jul 2023 09:53:24 +0300",
"msg_from": "Melih Mutlu <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Changing types of block and chunk sizes in memory contexts"
},
{
"msg_contents": "On Fri, 14 Jul 2023 at 18:53, Melih Mutlu <[email protected]> wrote:\n> David Rowley <[email protected]>, 13 Tem 2023 Per, 08:04 tarihinde şunu yazdı:\n>>\n>> I looked at your v2 patch. The only thing that really looked wrong\n>> were the (Size) casts in the context creation functions. These should\n>> have been casts to uint32 rather than Size. Basically, all the casts\n>> do is say to the compiler \"Yes, I know this could cause truncation due\n>> to assigning to a size smaller than the source type's size\". Some\n>> compilers will likely warn without that and the cast will stop them.\n>> We know there can't be any truncation due to the Asserts. There's also\n>> the fundamental limitation that MemoryChunk can't store block offsets\n>> larger than 1GBs anyway, so things will go bad if we tried to have\n>> blocks bigger than 1GB.\n>\n>\n> Right! I don't know why I cast them to Size. Thanks for the fix.\n\nPushed.\n\nDavid\n\n\n",
"msg_date": "Mon, 17 Jul 2023 11:18:38 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Changing types of block and chunk sizes in memory contexts"
}
] |
[
{
"msg_contents": "Hello,\n\nHave we ever discussed running an analyze immediately after creating a table?\n\nConsider the following:\n\ncreate table stats(i int, t text not null);\nexplain select * from stats;\n Seq Scan on stats (cost=0.00..22.70 rows=1270 width=36\nanalyze stats;\nexplain select * from stats;\n Seq Scan on stats (cost=0.00..0.00 rows=1 width=36)\n\nCombined with rapidly increasing error margin on row estimates when\nadding joins means that a query joining to a bunch of empty tables\nwhen a database first starts up can result in some pretty wild plan\ncosts.\n\nThis feels like a simple idea to me, and so I assume people have\nconsidered it before. If so, I'd like to understand why the conclusion\nwas not to do it, or, alternatively if it's a lack of tuits.\n\nRegards,\nJames Coleman\n\n\n",
"msg_date": "Mon, 26 Jun 2023 13:40:49 -0400",
"msg_from": "James Coleman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Analyze on table creation?"
},
{
"msg_contents": "Hi\n\npo 26. 6. 2023 v 19:41 odesílatel James Coleman <[email protected]> napsal:\n\n> Hello,\n>\n> Have we ever discussed running an analyze immediately after creating a\n> table?\n>\n> Consider the following:\n>\n> create table stats(i int, t text not null);\n> explain select * from stats;\n> Seq Scan on stats (cost=0.00..22.70 rows=1270 width=36\n> analyze stats;\n> explain select * from stats;\n> Seq Scan on stats (cost=0.00..0.00 rows=1 width=36)\n>\n> Combined with rapidly increasing error margin on row estimates when\n> adding joins means that a query joining to a bunch of empty tables\n> when a database first starts up can result in some pretty wild plan\n> costs.\n>\n> This feels like a simple idea to me, and so I assume people have\n> considered it before. If so, I'd like to understand why the conclusion\n> was not to do it, or, alternatively if it's a lack of tuits.\n>\n\nI like this. On the second hand, described behaviour is designed for\nensuring of back compatibility.\n\nRegards\n\nPavel\n\n\n\n> Regards,\n> James Coleman\n>\n>\n>\n\nHipo 26. 6. 2023 v 19:41 odesílatel James Coleman <[email protected]> napsal:Hello,\n\nHave we ever discussed running an analyze immediately after creating a table?\n\nConsider the following:\n\ncreate table stats(i int, t text not null);\nexplain select * from stats;\n Seq Scan on stats (cost=0.00..22.70 rows=1270 width=36\nanalyze stats;\nexplain select * from stats;\n Seq Scan on stats (cost=0.00..0.00 rows=1 width=36)\n\nCombined with rapidly increasing error margin on row estimates when\nadding joins means that a query joining to a bunch of empty tables\nwhen a database first starts up can result in some pretty wild plan\ncosts.\n\nThis feels like a simple idea to me, and so I assume people have\nconsidered it before. If so, I'd like to understand why the conclusion\nwas not to do it, or, alternatively if it's a lack of tuits.I like this. On the second hand, described behaviour is designed for ensuring of back compatibility.RegardsPavel \n\nRegards,\nJames Coleman",
"msg_date": "Mon, 26 Jun 2023 19:43:58 +0200",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Analyze on table creation?"
},
{
"msg_contents": "po 26. 6. 2023 v 19:43 odesílatel Pavel Stehule <[email protected]>\nnapsal:\n\n> Hi\n>\n> po 26. 6. 2023 v 19:41 odesílatel James Coleman <[email protected]> napsal:\n>\n>> Hello,\n>>\n>> Have we ever discussed running an analyze immediately after creating a\n>> table?\n>>\n>> Consider the following:\n>>\n>> create table stats(i int, t text not null);\n>> explain select * from stats;\n>> Seq Scan on stats (cost=0.00..22.70 rows=1270 width=36\n>> analyze stats;\n>> explain select * from stats;\n>> Seq Scan on stats (cost=0.00..0.00 rows=1 width=36)\n>>\n>> Combined with rapidly increasing error margin on row estimates when\n>> adding joins means that a query joining to a bunch of empty tables\n>> when a database first starts up can result in some pretty wild plan\n>> costs.\n>>\n>> This feels like a simple idea to me, and so I assume people have\n>> considered it before. If so, I'd like to understand why the conclusion\n>> was not to do it, or, alternatively if it's a lack of tuits.\n>>\n>\n> I like this. On the second hand, described behaviour is designed for\n> ensuring of back compatibility.\n>\n\nif you break this back compatibility, then the immediate ANALYZE is not\nnecessary\n\n\n\n>\n> Regards\n>\n> Pavel\n>\n>\n>\n>> Regards,\n>> James Coleman\n>>\n>>\n>>\n\npo 26. 6. 2023 v 19:43 odesílatel Pavel Stehule <[email protected]> napsal:Hipo 26. 6. 2023 v 19:41 odesílatel James Coleman <[email protected]> napsal:Hello,\n\nHave we ever discussed running an analyze immediately after creating a table?\n\nConsider the following:\n\ncreate table stats(i int, t text not null);\nexplain select * from stats;\n Seq Scan on stats (cost=0.00..22.70 rows=1270 width=36\nanalyze stats;\nexplain select * from stats;\n Seq Scan on stats (cost=0.00..0.00 rows=1 width=36)\n\nCombined with rapidly increasing error margin on row estimates when\nadding joins means that a query joining to a bunch of empty tables\nwhen a database first starts up can result in some pretty wild plan\ncosts.\n\nThis feels like a simple idea to me, and so I assume people have\nconsidered it before. If so, I'd like to understand why the conclusion\nwas not to do it, or, alternatively if it's a lack of tuits.I like this. On the second hand, described behaviour is designed for ensuring of back compatibility.if you break this back compatibility, then the immediate ANALYZE is not necessary RegardsPavel \n\nRegards,\nJames Coleman",
"msg_date": "Mon, 26 Jun 2023 19:44:53 +0200",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Analyze on table creation?"
},
{
"msg_contents": "On Mon, Jun 26, 2023 at 1:45 PM Pavel Stehule <[email protected]> wrote:\n>\n>\n>\n> po 26. 6. 2023 v 19:43 odesílatel Pavel Stehule <[email protected]> napsal:\n>>\n>> Hi\n>>\n>> po 26. 6. 2023 v 19:41 odesílatel James Coleman <[email protected]> napsal:\n>>>\n>>> Hello,\n>>>\n>>> Have we ever discussed running an analyze immediately after creating a table?\n>>>\n>>> Consider the following:\n>>>\n>>> create table stats(i int, t text not null);\n>>> explain select * from stats;\n>>> Seq Scan on stats (cost=0.00..22.70 rows=1270 width=36\n>>> analyze stats;\n>>> explain select * from stats;\n>>> Seq Scan on stats (cost=0.00..0.00 rows=1 width=36)\n>>>\n>>> Combined with rapidly increasing error margin on row estimates when\n>>> adding joins means that a query joining to a bunch of empty tables\n>>> when a database first starts up can result in some pretty wild plan\n>>> costs.\n>>>\n>>> This feels like a simple idea to me, and so I assume people have\n>>> considered it before. If so, I'd like to understand why the conclusion\n>>> was not to do it, or, alternatively if it's a lack of tuits.\n>>\n>>\n>> I like this. On the second hand, described behaviour is designed for ensuring of back compatibility.\n>\n>\n> if you break this back compatibility, then the immediate ANALYZE is not necessary\n\nI don't follow what backwards compatibility you're referencing. Could\nyou expand on that?\n\nRegards,\nJames Coleman\n\n\n",
"msg_date": "Mon, 26 Jun 2023 13:48:37 -0400",
"msg_from": "James Coleman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Analyze on table creation?"
},
{
"msg_contents": "po 26. 6. 2023 v 19:48 odesílatel James Coleman <[email protected]> napsal:\n\n> On Mon, Jun 26, 2023 at 1:45 PM Pavel Stehule <[email protected]>\n> wrote:\n> >\n> >\n> >\n> > po 26. 6. 2023 v 19:43 odesílatel Pavel Stehule <[email protected]>\n> napsal:\n> >>\n> >> Hi\n> >>\n> >> po 26. 6. 2023 v 19:41 odesílatel James Coleman <[email protected]>\n> napsal:\n> >>>\n> >>> Hello,\n> >>>\n> >>> Have we ever discussed running an analyze immediately after creating a\n> table?\n> >>>\n> >>> Consider the following:\n> >>>\n> >>> create table stats(i int, t text not null);\n> >>> explain select * from stats;\n> >>> Seq Scan on stats (cost=0.00..22.70 rows=1270 width=36\n> >>> analyze stats;\n> >>> explain select * from stats;\n> >>> Seq Scan on stats (cost=0.00..0.00 rows=1 width=36)\n> >>>\n> >>> Combined with rapidly increasing error margin on row estimates when\n> >>> adding joins means that a query joining to a bunch of empty tables\n> >>> when a database first starts up can result in some pretty wild plan\n> >>> costs.\n> >>>\n> >>> This feels like a simple idea to me, and so I assume people have\n> >>> considered it before. If so, I'd like to understand why the conclusion\n> >>> was not to do it, or, alternatively if it's a lack of tuits.\n> >>\n> >>\n> >> I like this. On the second hand, described behaviour is designed for\n> ensuring of back compatibility.\n> >\n> >\n> > if you break this back compatibility, then the immediate ANALYZE is not\n> necessary\n>\n> I don't follow what backwards compatibility you're referencing. Could\n> you expand on that?\n>\n\nOriginally, until the table had minimally one row, the PostgreSQL\ncalculated with 10 pages. It was fixed (changed) in PostgreSQL 14.\n\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=3d351d916b20534f973eda760cde17d96545d4c4\n\nRegards\n\nPavel\n\n\n> Regards,\n> James Coleman\n>\n\npo 26. 6. 2023 v 19:48 odesílatel James Coleman <[email protected]> napsal:On Mon, Jun 26, 2023 at 1:45 PM Pavel Stehule <[email protected]> wrote:\n>\n>\n>\n> po 26. 6. 2023 v 19:43 odesílatel Pavel Stehule <[email protected]> napsal:\n>>\n>> Hi\n>>\n>> po 26. 6. 2023 v 19:41 odesílatel James Coleman <[email protected]> napsal:\n>>>\n>>> Hello,\n>>>\n>>> Have we ever discussed running an analyze immediately after creating a table?\n>>>\n>>> Consider the following:\n>>>\n>>> create table stats(i int, t text not null);\n>>> explain select * from stats;\n>>> Seq Scan on stats (cost=0.00..22.70 rows=1270 width=36\n>>> analyze stats;\n>>> explain select * from stats;\n>>> Seq Scan on stats (cost=0.00..0.00 rows=1 width=36)\n>>>\n>>> Combined with rapidly increasing error margin on row estimates when\n>>> adding joins means that a query joining to a bunch of empty tables\n>>> when a database first starts up can result in some pretty wild plan\n>>> costs.\n>>>\n>>> This feels like a simple idea to me, and so I assume people have\n>>> considered it before. If so, I'd like to understand why the conclusion\n>>> was not to do it, or, alternatively if it's a lack of tuits.\n>>\n>>\n>> I like this. On the second hand, described behaviour is designed for ensuring of back compatibility.\n>\n>\n> if you break this back compatibility, then the immediate ANALYZE is not necessary\n\nI don't follow what backwards compatibility you're referencing. Could\nyou expand on that?Originally, until the table had minimally one row, the PostgreSQL calculated with 10 pages. 
It was fixed (changed) in PostgreSQL 14.https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=3d351d916b20534f973eda760cde17d96545d4c4RegardsPavel\n\nRegards,\nJames Coleman",
"msg_date": "Mon, 26 Jun 2023 20:16:09 +0200",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Analyze on table creation?"
},
{
"msg_contents": "cc'ing Tom because I'm curious if he's willing to provide some greater\ncontext on the commit in question.\n\nOn Mon, Jun 26, 2023 at 2:16 PM Pavel Stehule <[email protected]> wrote:\n>\n>\n>\n> po 26. 6. 2023 v 19:48 odesílatel James Coleman <[email protected]> napsal:\n>>\n>> On Mon, Jun 26, 2023 at 1:45 PM Pavel Stehule <[email protected]> wrote:\n>> >\n>> >\n>> >\n>> > po 26. 6. 2023 v 19:43 odesílatel Pavel Stehule <[email protected]> napsal:\n>> >>\n>> >> Hi\n>> >>\n>> >> po 26. 6. 2023 v 19:41 odesílatel James Coleman <[email protected]> napsal:\n>> >>>\n>> >>> Hello,\n>> >>>\n>> >>> Have we ever discussed running an analyze immediately after creating a table?\n>> >>>\n>> >>> Consider the following:\n>> >>>\n>> >>> create table stats(i int, t text not null);\n>> >>> explain select * from stats;\n>> >>> Seq Scan on stats (cost=0.00..22.70 rows=1270 width=36\n>> >>> analyze stats;\n>> >>> explain select * from stats;\n>> >>> Seq Scan on stats (cost=0.00..0.00 rows=1 width=36)\n>> >>>\n>> >>> Combined with rapidly increasing error margin on row estimates when\n>> >>> adding joins means that a query joining to a bunch of empty tables\n>> >>> when a database first starts up can result in some pretty wild plan\n>> >>> costs.\n>> >>>\n>> >>> This feels like a simple idea to me, and so I assume people have\n>> >>> considered it before. If so, I'd like to understand why the conclusion\n>> >>> was not to do it, or, alternatively if it's a lack of tuits.\n>> >>\n>> >>\n>> >> I like this. On the second hand, described behaviour is designed for ensuring of back compatibility.\n>> >\n>> >\n>> > if you break this back compatibility, then the immediate ANALYZE is not necessary\n>>\n>> I don't follow what backwards compatibility you're referencing. Could\n>> you expand on that?\n>\n>\n> Originally, until the table had minimally one row, the PostgreSQL calculated with 10 pages. It was fixed (changed) in PostgreSQL 14.\n>\n> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=3d351d916b20534f973eda760cde17d96545d4c4\n>\n\n From that commit message:\n> Historically, we've considered the state with relpages and reltuples\n> both zero as indicating that we do not know the table's tuple density.\n> This is problematic because it's impossible to distinguish \"never yet\n> vacuumed\" from \"vacuumed and seen to be empty\". In particular, a user\n> cannot use VACUUM or ANALYZE to override the planner's normal heuristic\n> that an empty table should not be believed to be empty because it is\n> probably about to get populated. That heuristic is a good safety\n> measure, so I don't care to abandon it, but there should be a way to\n> override it if the table is indeed intended to stay empty.\n\nSo that implicitly provides our reasoning for not analyzing up-front\non table creation.\n\nI haven't thought about this too deeply yet, but it seems plausible to\nme that the dangers of overestimating row count here (at minimum in\nqueries like I described with lots of joins) are higher than the\ndangers of underestimating, which we would do if we believed the table\nwas empty. One critical question would be how fast we can assume the\ntable will be auto-analyzed (i.e., how fast would the underestimate be\ncorrected.\n\nRegards,\nJames Coleman\n\n\n",
"msg_date": "Mon, 26 Jun 2023 14:59:21 -0400",
"msg_from": "James Coleman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Analyze on table creation?"
},
{
"msg_contents": ">\n> >\n> > Originally, until the table had minimally one row, the PostgreSQL\n> calculated with 10 pages. It was fixed (changed) in PostgreSQL 14.\n> >\n> >\n> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=3d351d916b20534f973eda760cde17d96545d4c4\n> >\n>\n> From that commit message:\n> > Historically, we've considered the state with relpages and reltuples\n> > both zero as indicating that we do not know the table's tuple density.\n> > This is problematic because it's impossible to distinguish \"never yet\n> > vacuumed\" from \"vacuumed and seen to be empty\". In particular, a user\n> > cannot use VACUUM or ANALYZE to override the planner's normal heuristic\n> > that an empty table should not be believed to be empty because it is\n> > probably about to get populated. That heuristic is a good safety\n> > measure, so I don't care to abandon it, but there should be a way to\n> > override it if the table is indeed intended to stay empty.\n>\n> So that implicitly provides our reasoning for not analyzing up-front\n> on table creation.\n>\n> I haven't thought about this too deeply yet, but it seems plausible to\n> me that the dangers of overestimating row count here (at minimum in\n> queries like I described with lots of joins) are higher than the\n> dangers of underestimating, which we would do if we believed the table\n> was empty. One critical question would be how fast we can assume the\n> table will be auto-analyzed (i.e., how fast would the underestimate be\n> corrected.\n>\n\nI found this issue a few years ago. This application had 40% of tables with\none or zero row, 30% was usual size, and 30% was sometimes really big. It\ncan be \"relative\" common in OLAP applications.\n\nThe estimation was terrible. I don't think there can be some better\nheuristic. Maybe we can introduce some table option like expected size,\nthat can be used when real statistics are not available.\n\nSome like\n\nCREATE TABLE foo(...) WITH (default_relpages = x)\n\nIt is not a perfect solution, but it allows fix this issue by one command.\n\n\n> Regards,\n> James Coleman\n>\n\n\n>\n>\n> Originally, until the table had minimally one row, the PostgreSQL calculated with 10 pages. It was fixed (changed) in PostgreSQL 14.\n>\n> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=3d351d916b20534f973eda760cde17d96545d4c4\n>\n\n From that commit message:\n> Historically, we've considered the state with relpages and reltuples\n> both zero as indicating that we do not know the table's tuple density.\n> This is problematic because it's impossible to distinguish \"never yet\n> vacuumed\" from \"vacuumed and seen to be empty\". In particular, a user\n> cannot use VACUUM or ANALYZE to override the planner's normal heuristic\n> that an empty table should not be believed to be empty because it is\n> probably about to get populated. That heuristic is a good safety\n> measure, so I don't care to abandon it, but there should be a way to\n> override it if the table is indeed intended to stay empty.\n\nSo that implicitly provides our reasoning for not analyzing up-front\non table creation.\n\nI haven't thought about this too deeply yet, but it seems plausible to\nme that the dangers of overestimating row count here (at minimum in\nqueries like I described with lots of joins) are higher than the\ndangers of underestimating, which we would do if we believed the table\nwas empty. 
One critical question would be how fast we can assume the\ntable will be auto-analyzed (i.e., how fast would the underestimate be\ncorrected.I found this issue a few years ago. This application had 40% of tables with one or zero row, 30% was usual size, and 30% was sometimes really big. It can be \"relative\" common in OLAP applications.The estimation was terrible. I don't think there can be some better heuristic. Maybe we can introduce some table option like expected size, that can be used when real statistics are not available.Some likeCREATE TABLE foo(...) WITH (default_relpages = x)It is not a perfect solution, but it allows fix this issue by one command.\n\nRegards,\nJames Coleman",
"msg_date": "Mon, 26 Jun 2023 21:38:03 +0200",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Analyze on table creation?"
},
{
"msg_contents": "Hi,\n\nOn 2023-06-26 13:40:49 -0400, James Coleman wrote:\n> Have we ever discussed running an analyze immediately after creating a table?\n\nThat doesn't make a whole lot of sense to me - we could just insert the\nconstants stats we wanted in that case.\n\n\n> Consider the following:\n> \n> create table stats(i int, t text not null);\n> explain select * from stats;\n> Seq Scan on stats (cost=0.00..22.70 rows=1270 width=36\n> analyze stats;\n> explain select * from stats;\n> Seq Scan on stats (cost=0.00..0.00 rows=1 width=36)\n> \n> Combined with rapidly increasing error margin on row estimates when\n> adding joins means that a query joining to a bunch of empty tables\n> when a database first starts up can result in some pretty wild plan\n> costs.\n\nThe issue is that the table stats are likely going to quickly out of date in\nthat case, even a hand full of inserts (which wouldn't trigger\nautovacuum analyzing) would lead to the \"0 rows\" stats causing very bad plans.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 26 Jun 2023 13:00:23 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Analyze on table creation?"
},
{
"msg_contents": "On Mon, Jun 26, 2023 at 4:00 PM Andres Freund <[email protected]> wrote:\n>\n> Hi,\n>\n> On 2023-06-26 13:40:49 -0400, James Coleman wrote:\n> > Have we ever discussed running an analyze immediately after creating a table?\n>\n> That doesn't make a whole lot of sense to me - we could just insert the\n> constants stats we wanted in that case.\n>\n\nI thought that was implicit in that, but fair enough :)\n\n> > Consider the following:\n> >\n> > create table stats(i int, t text not null);\n> > explain select * from stats;\n> > Seq Scan on stats (cost=0.00..22.70 rows=1270 width=36\n> > analyze stats;\n> > explain select * from stats;\n> > Seq Scan on stats (cost=0.00..0.00 rows=1 width=36)\n> >\n> > Combined with rapidly increasing error margin on row estimates when\n> > adding joins means that a query joining to a bunch of empty tables\n> > when a database first starts up can result in some pretty wild plan\n> > costs.\n>\n> The issue is that the table stats are likely going to quickly out of date in\n> that case, even a hand full of inserts (which wouldn't trigger\n> autovacuum analyzing) would lead to the \"0 rows\" stats causing very bad plans.\n>\n\nIt's not obvious to me (as noted elsewhere in the thread) which is\nworse: a bunch of JOINs on empty tables can result in (specific\nexample) plans with cost=15353020, and then trigger JIT, and...here we\ncollide with my other thread about JIT [1].\n\nRegards,\nJames Coleman\n\n1: https://www.postgresql.org/message-id/CAAaqYe-g-Q0Mm5H9QLcu8cHeMwok%2BHaxS4-UC9Oj3bK3a5jPvg%40mail.gmail.com\n\n\n",
"msg_date": "Mon, 26 Jun 2023 16:16:33 -0400",
"msg_from": "James Coleman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Analyze on table creation?"
},
{
"msg_contents": "On Mon, Jun 26, 2023 at 4:16 PM James Coleman <[email protected]> wrote:\n>\n> On Mon, Jun 26, 2023 at 4:00 PM Andres Freund <[email protected]> wrote:\n> >\n> > Hi,\n> >\n> > On 2023-06-26 13:40:49 -0400, James Coleman wrote:\n> > > Have we ever discussed running an analyze immediately after creating a table?\n> >\n> > That doesn't make a whole lot of sense to me - we could just insert the\n> > constants stats we wanted in that case.\n> >\n>\n> I thought that was implicit in that, but fair enough :)\n>\n> > > Consider the following:\n> > >\n> > > create table stats(i int, t text not null);\n> > > explain select * from stats;\n> > > Seq Scan on stats (cost=0.00..22.70 rows=1270 width=36\n> > > analyze stats;\n> > > explain select * from stats;\n> > > Seq Scan on stats (cost=0.00..0.00 rows=1 width=36)\n> > >\n> > > Combined with rapidly increasing error margin on row estimates when\n> > > adding joins means that a query joining to a bunch of empty tables\n> > > when a database first starts up can result in some pretty wild plan\n> > > costs.\n> >\n> > The issue is that the table stats are likely going to quickly out of date in\n> > that case, even a hand full of inserts (which wouldn't trigger\n> > autovacuum analyzing) would lead to the \"0 rows\" stats causing very bad plans.\n> >\n>\n> It's not obvious to me (as noted elsewhere in the thread) which is\n> worse: a bunch of JOINs on empty tables can result in (specific\n> example) plans with cost=15353020, and then trigger JIT, and...here we\n> collide with my other thread about JIT [1].\n>\n> Regards,\n> James Coleman\n>\n> 1: https://www.postgresql.org/message-id/CAAaqYe-g-Q0Mm5H9QLcu8cHeMwok%2BHaxS4-UC9Oj3bK3a5jPvg%40mail.gmail.com\n\nThinking about this a bit more: it seems like what we're missing is either:\n\n1. A heuristic for \"this table will probably remain empty\", or\n2. A way to invalidate \"0 rows\" stats more quickly on even a handful of inserts.\n\nI think one of those (ignoring questions about \"how\" for now) would\nsolve both cases?\n\nRegards,\nJames Coleman\n\n\n",
"msg_date": "Tue, 27 Jun 2023 10:14:22 -0400",
"msg_from": "James Coleman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Analyze on table creation?"
}
] |
[
{
"msg_contents": "Hi,\n\nI played around with adding\n __attribute__((malloc(free_func), malloc(another_free_func)))\nannotations to a few functions in pg. See\nhttps://gcc.gnu.org/onlinedocs/gcc/Common-Function-Attributes.html\n\n\nAdding them to pg_list.h seems to have found two valid issues when compiling\nwithout optimization:\n\n[1001/2331 22 42%] Compiling C object src/backend/postgres_lib.a.p/commands_tablecmds.c.o\n../../../../home/andres/src/postgresql/src/backend/commands/tablecmds.c: In function ‘ATExecAttachPartition’:\n../../../../home/andres/src/postgresql/src/backend/commands/tablecmds.c:17966:25: warning: pointer ‘partBoundConstraint’ may be used after ‘list_concat’ [-Wuse-after-free]\n17966 | get_proposed_default_constraint(partBoundConstraint);\n | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n../../../../home/andres/src/postgresql/src/backend/commands/tablecmds.c:17919:26: note: call to ‘list_concat’ here\n17919 | partConstraint = list_concat(partBoundConstraint,\n | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n17920 | RelationGetPartitionQual(rel));\n | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n[1233/2331 22 52%] Compiling C object src/backend/postgres_lib.a.p/rewrite_rewriteHandler.c.o\n../../../../home/andres/src/postgresql/src/backend/rewrite/rewriteHandler.c: In function ‘rewriteRuleAction’:\n../../../../home/andres/src/postgresql/src/backend/rewrite/rewriteHandler.c:550:41: warning: pointer ‘newjointree’ may be used after ‘list_concat’ [-Wuse-after-free]\n 550 | checkExprHasSubLink((Node *) newjointree);\n | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n../../../../home/andres/src/postgresql/src/backend/rewrite/rewriteHandler.c:542:33: note: call to ‘list_concat’ here\n 542 | list_concat(newjointree, sub_action->jointree->fromlist);\n | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\n\nBoth of these bugs seem somewhat older, the latter going back to 19ff959bff0,\nin 2005. I'm a bit surprised that we haven't hit them before, via\nDEBUG_LIST_MEMORY_USAGE?\n\n\nWhen compiling with optimization, another issue is reported:\n\n[508/1814 22 28%] Compiling C object src/backend/postgres_lib.a.p/commands_explain.c.o\n../../../../home/andres/src/postgresql/src/backend/commands/explain.c: In function 'ExplainNode':\n../../../../home/andres/src/postgresql/src/backend/commands/explain.c:2037:25: warning: pointer 'ancestors' may be used after 'lcons' [-Wuse-after-free]\n 2037 | show_upper_qual(plan->qual, \"Filter\", planstate, ancestors, es);\n | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nIn function 'show_group_keys',\n inlined from 'ExplainNode' at ../../../../home/andres/src/postgresql/src/backend/commands/explain.c:2036:4:\n../../../../home/andres/src/postgresql/src/backend/commands/explain.c:2564:21: note: call to 'lcons' here\n 2564 | ancestors = lcons(plan, ancestors);\n | ^~~~~~~~~~~~~~~~~~~~~~\n\nwhich looks like it might be valid - the caller's \"ancestors\" variable could\nnow be freed? There do appear to be further instances of the issue, e.g. 
in\nshow_agg_keys(), that aren't flagged for some reason.\n\n\n\nFor something like pg_list.h the malloc(free) attribute is a bit awkward to\nuse, because one a) needs to list ~30 functions that can free a list and b)\nthe referenced functions need to be declared.\n\nIn my quick hack I just duplicated the relevant part of pg_list.h and added\nthe appropriate attributes to the copy of the original declaration.\n\n\nI also added such attributes to bitmapset.h and palloc() et al, but that\ndidn't find existing bugs. It does find use-after-free instances if I add\nsome, similarly it does find cases of mismatching palloc with free etc.\n\n\nThe attached prototype is quite rough and will likely fail on anything but a\nrecent gcc (likely >= 12).\n\n\nDo others think this would be useful enough to be worth polishing? And do you\nagree the warnings above are bugs?\n\nGreetings,\n\nAndres Freund",
"msg_date": "Mon, 26 Jun 2023 12:54:44 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "Detecting use-after-free bugs using gcc's malloc() attribute"
},
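For context on the mechanism discussed above, here is a minimal sketch of how gcc's paired allocator/deallocator attributes drive these warnings. The list type and function names below are invented stand-ins for illustration, not the actual pg_list.h declarations that were annotated:

    typedef struct my_list { int length; } my_list;

    void my_list_free(my_list *list);
    my_list *my_list_concat(my_list *list1, const my_list *list2);

    /*
     * Associate the allocator with the functions that release (or, in the
     * realloc-like case of my_list_concat, consume) the returned pointer.
     * gcc 12+ uses these associations for -Wmismatched-dealloc and
     * -Wuse-after-free.  The deallocators must be declared beforehand.
     */
    __attribute__((malloc,
                   malloc(my_list_free, 1),
                   malloc(my_list_concat, 1)))
    my_list *my_list_make(void);

    int
    length_after_concat(void)
    {
        my_list *a = my_list_make();
        my_list *b = my_list_make();
        my_list *c = my_list_concat(a, b);

        /* gcc 12+ with -Wuse-after-free enabled can warn here: 'a' may be
         * used after 'my_list_concat', because the attribute above marks
         * my_list_concat as a deallocator for its first argument. */
        return a->length + c->length;
    }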
{
"msg_contents": "On 26.06.23 21:54, Andres Freund wrote:\n> For something like pg_list.h the malloc(free) attribute is a bit awkward to\n> use, because one a) needs to list ~30 functions that can free a list and b)\n> the referenced functions need to be declared.\n\nHmm. Saying list_concat() \"deallocates\" a list is mighty confusing \nbecause 1) it doesn't, and 2) it might actually allocate a new list. So \nwhile you get the useful behavior of \"you probably didn't mean to use \nthis variable again after passing it into list_concat()\", if some other \ntool actually took these allocate/deallocate decorations at face value \nand did a memory leak analysis with them, they'd get completely bogus \nresults.\n\n> I also added such attributes to bitmapset.h and palloc() et al, but that\n> didn't find existing bugs. It does find use-after-free instances if I add\n> some, similarly it does find cases of mismatching palloc with free etc.\n\nThis seems more straightforward. Even if it didn't find any bugs, I'd \nimagine it would be useful during development.\n\n> Do others think this would be useful enough to be worth polishing? And do you\n> agree the warnings above are bugs?\n\nI actually just played with this the other day, because I can never \nremember termPQExpBuffer() vs. destroyPQExpBuffer(). I couldn't quite \nmake it work for that, but I found the feature overall useful, so I'd \nwelcome support for it.\n\n\n\n\n",
"msg_date": "Wed, 28 Jun 2023 10:40:22 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Detecting use-after-free bugs using gcc's malloc() attribute"
},
{
"msg_contents": "Hi,\n\nOn 2023-06-28 10:40:22 +0200, Peter Eisentraut wrote:\n> On 26.06.23 21:54, Andres Freund wrote:\n> > For something like pg_list.h the malloc(free) attribute is a bit awkward to\n> > use, because one a) needs to list ~30 functions that can free a list and b)\n> > the referenced functions need to be declared.\n>\n> Hmm. Saying list_concat() \"deallocates\" a list is mighty confusing because\n> 1) it doesn't, and 2) it might actually allocate a new list.\n\nlist_concat() basically behaves like realloc(), except that the \"pointer is\nstill valid\" case is much more common. And the way that's modelled in the\nannotations is to say a function frees and allocates.\n\nNote that the free attribute references the first element for list_concat(),\nnot the second.\n\n\n> So while you get the useful behavior of \"you probably didn't mean to use\n> this variable again after passing it into list_concat()\", if some other tool\n> actually took these allocate/deallocate decorations at face value and did a\n> memory leak analysis with them, they'd get completely bogus results.\n\nHow would the annotations possibly lead to a bogus result? I see neither how\nit could lead to false negatives nor false positives?\n\nThe gcc attributes are explicitly intended to track not just plain memory\nallocations, the example in the docs\nhttps://gcc.gnu.org/onlinedocs/gcc/Common-Function-Attributes.html#Common-Function-Attributes\nis to add them for fopen() etc. So I don't think it's likely that external\ntools will interpret this is a much more stringent way.\n\n\n> > I also added such attributes to bitmapset.h and palloc() et al, but that\n> > didn't find existing bugs. It does find use-after-free instances if I add\n> > some, similarly it does find cases of mismatching palloc with free etc.\n>\n> This seems more straightforward. Even if it didn't find any bugs, I'd\n> imagine it would be useful during development.\n\nAgreed. Given our testing regimen (valgrind etc), I'd expect to find many such\nbugs before long in the tree anyway. But it's much nicer to get that far. And\nto find paths that aren't covered by tests.\n\n\n> > Do others think this would be useful enough to be worth polishing? And do you\n> > agree the warnings above are bugs?\n>\n> I actually just played with this the other day, because I can never remember\n> termPQExpBuffer() vs. destroyPQExpBuffer().\n\nThat's a pretty nasty one :(\n\n\n> I couldn't quite make it work for that, but I found the feature overall\n> useful, so I'd welcome support for it.\n\nYea, I don't think the attributes can comfortable handle initPQExpBuffer()\nstyle allocation. It's somewhat posible by moving the allocation to an inline\nfunction, and then making the thing that's allocated ->data. 
But it ends up\npretty messy, particularly because we need ABI stability for pqexpbuffer.h.\n\nBut createPQExpBuffer() can be dealt with reasonably.\n\nDoing so points out:\n\n[51/354 42 14%] Compiling C object src/bin/initdb/initdb.p/initdb.c.o\n../../../../home/andres/src/postgresql/src/bin/initdb/initdb.c: In function ‘replace_guc_value’:\n../../../../home/andres/src/postgresql/src/bin/initdb/initdb.c:566:9: warning: ‘free’ called on pointer returned from a mismatched allocation function [-Wmismatched-dealloc]\n 566 | free(newline); /* but don't free newline->data */\n | ^~~~~~~~~~~~~\n../../../../home/andres/src/postgresql/src/bin/initdb/initdb.c:470:31: note: returned from ‘createPQExpBuffer’\n 470 | PQExpBuffer newline = createPQExpBuffer();\n | ^~~~~~~~~~~~~~~~~~~\n\nwhich is intentional, but ... not pretty, and could very well be a bug in\nother cases. If we want to do stuff like that, we'd probably better off\nhaving a dedicated version of destroyPQExpBuffer(). Although here it looks\nlike the code should just use an on-stack PQExpBuffer.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 28 Jun 2023 11:15:37 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Detecting use-after-free bugs using gcc's malloc() attribute"
},
{
"msg_contents": "On 28.06.23 20:15, Andres Freund wrote:\n> On 2023-06-28 10:40:22 +0200, Peter Eisentraut wrote:\n>> On 26.06.23 21:54, Andres Freund wrote:\n>>> For something like pg_list.h the malloc(free) attribute is a bit awkward to\n>>> use, because one a) needs to list ~30 functions that can free a list and b)\n>>> the referenced functions need to be declared.\n>>\n>> Hmm. Saying list_concat() \"deallocates\" a list is mighty confusing because\n>> 1) it doesn't, and 2) it might actually allocate a new list.\n> \n> list_concat() basically behaves like realloc(), except that the \"pointer is\n> still valid\" case is much more common. And the way that's modelled in the\n> annotations is to say a function frees and allocates.\n> \n> Note that the free attribute references the first element for list_concat(),\n> not the second.\n\nYeah, I think that would be ok. I was worried about the cases where it \ndoesn't actually free the first argument, but in all those cases it \npasses it as a result, so as far as a caller is concerned, it would \nappear as freed and allocated, even if it's really the same.\n\n\n\n",
"msg_date": "Thu, 6 Jul 2023 13:52:59 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Detecting use-after-free bugs using gcc's malloc() attribute"
}
] |
[
{
"msg_contents": "Hello,\n\nI was running the test_pg_dump extension suite, and I got annoyed that\nI couldn't keep it from deleting its dump artifacts after a successful\nrun. Here's a patch to make use of PG_TEST_NOCLEAN (which currently\ncovers the test cluster's base directory) with the Test::Utils\ntempdirs too.\n\n(Looks like this idea was also discussed last year [1]; let me know if\nI missed any more recent suggestions.)\n\nThanks,\n--Jacob\n\n[1] https://www.postgresql.org/message-id/[email protected]",
"msg_date": "Mon, 26 Jun 2023 16:55:47 -0700",
"msg_from": "Jacob Champion <[email protected]>",
"msg_from_op": true,
"msg_subject": "[PATCH] Honor PG_TEST_NOCLEAN for tempdirs"
},
{
"msg_contents": "On Mon, Jun 26, 2023 at 04:55:47PM -0700, Jacob Champion wrote:\n> I was running the test_pg_dump extension suite, and I got annoyed that\n> I couldn't keep it from deleting its dump artifacts after a successful\n> run. Here's a patch to make use of PG_TEST_NOCLEAN (which currently\n> covers the test cluster's base directory) with the Test::Utils\n> tempdirs too.\n\nI am still +1 in doing that.\n\n> (Looks like this idea was also discussed last year [1]; let me know if\n> I missed any more recent suggestions.)\n\nI don't recall any specific suggestions related to that, but perhaps\nit got mentioned somewhere else.\n\nsrc/test/perl/README and regress.sgml both describe what\nPG_TEST_NOCLEAN does, and it seems to me that these should be updated\nto tell that temporary files are not removed on top of the data\nfolders?\n--\nMichael",
"msg_date": "Tue, 27 Jun 2023 14:47:08 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Honor PG_TEST_NOCLEAN for tempdirs"
},
{
"msg_contents": "> On 27 Jun 2023, at 07:47, Michael Paquier <[email protected]> wrote:\n> \n> On Mon, Jun 26, 2023 at 04:55:47PM -0700, Jacob Champion wrote:\n>> I was running the test_pg_dump extension suite, and I got annoyed that\n>> I couldn't keep it from deleting its dump artifacts after a successful\n>> run. Here's a patch to make use of PG_TEST_NOCLEAN (which currently\n>> covers the test cluster's base directory) with the Test::Utils\n>> tempdirs too.\n> \n> I am still +1 in doing that.\n> \n>> (Looks like this idea was also discussed last year [1]; let me know if\n>> I missed any more recent suggestions.)\n\n+1. I think it simply got lost in that thread which had a lot of moving parts\nas it was.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Tue, 27 Jun 2023 08:10:42 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Honor PG_TEST_NOCLEAN for tempdirs"
},
{
"msg_contents": "On 2023-06-26 Mo 19:55, Jacob Champion wrote:\n> Hello,\n>\n> I was running the test_pg_dump extension suite, and I got annoyed that\n> I couldn't keep it from deleting its dump artifacts after a successful\n> run. Here's a patch to make use of PG_TEST_NOCLEAN (which currently\n> covers the test cluster's base directory) with the Test::Utils\n> tempdirs too.\n>\n> (Looks like this idea was also discussed last year [1]; let me know if\n> I missed any more recent suggestions.)\n\n\n- CLEANUP => 1);\n+ CLEANUP => not defined $ENV{'PG_TEST_NOCLEAN'});\n\n\nThis doesn't look quite right. If PG_TEST_CLEAN had a value of 0 we \nwould still do the cleanup. I would probably use something like:\n\n CLEANUP => $ENV{'PG_TEST_NOCLEAN'} // 1\n\ni.e. if it's not defined at all or has a value of undef, do the cleanup, \notherwise use the value.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2023-06-26 Mo 19:55, Jacob Champion\n wrote:\n\n\nHello,\n\nI was running the test_pg_dump extension suite, and I got annoyed that\nI couldn't keep it from deleting its dump artifacts after a successful\nrun. Here's a patch to make use of PG_TEST_NOCLEAN (which currently\ncovers the test cluster's base directory) with the Test::Utils\ntempdirs too.\n\n(Looks like this idea was also discussed last year [1]; let me know if\nI missed any more recent suggestions.)\n\n\n\n- CLEANUP => 1);\n + CLEANUP => not defined $ENV{'PG_TEST_NOCLEAN'});\n\n\nThis doesn't look quite right. If PG_TEST_CLEAN had a value of 0\n we would still do the cleanup. I would probably use something\n like:\n CLEANUP => $ENV{'PG_TEST_NOCLEAN'} // 1\ni.e. if it's not defined at all or has a value of undef, do the\n cleanup, otherwise use the value.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Tue, 27 Jun 2023 11:20:20 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Honor PG_TEST_NOCLEAN for tempdirs"
},
{
"msg_contents": "Andrew Dunstan <[email protected]> writes:\n\n> On 2023-06-26 Mo 19:55, Jacob Champion wrote:\n>> Hello,\n>>\n>> I was running the test_pg_dump extension suite, and I got annoyed that\n>> I couldn't keep it from deleting its dump artifacts after a successful\n>> run. Here's a patch to make use of PG_TEST_NOCLEAN (which currently\n>> covers the test cluster's base directory) with the Test::Utils\n>> tempdirs too.\n>>\n>> (Looks like this idea was also discussed last year [1]; let me know if\n>> I missed any more recent suggestions.)\n>\n>\n> - CLEANUP => 1);\n> + CLEANUP => not defined $ENV{'PG_TEST_NOCLEAN'});\n>\n>\n> This doesn't look quite right. If PG_TEST_CLEAN had a value of 0 we\n> would still do the cleanup. I would probably use something like:\n>\n> CLEANUP => $ENV{'PG_TEST_NOCLEAN'} // 1\n>\n> i.e. if it's not defined at all or has a value of undef, do the cleanup,\n> otherwise use the value.\n\nIf the environment varible were used as a boolean, it should be\n\n\tCLEANUP => not $ENV{PG_TEST_NOCLEAN}\n\nsince `not undef` returns true with no warning, and the senses of the\ntwo flags are inverted.\n\nHowever, the docs\n(https://www.postgresql.org/docs/16/regress-tap.html#REGRESS-TAP-VARS)\nsay \"If the environment variable PG_TEST_NOCLEAN is set\", not \"is set to\na true value\", and the existing test in PostgreSQL::Test::Cluster's END\nblock is:\n\n\t# skip clean if we are requested to retain the basedir\n\tnext if defined $ENV{'PG_TEST_NOCLEAN'};\n \nSo the original `not defined` test is consistent with that.\n\nTangentially, even though the above line contradicts it, the general\nperl style is to not unnecessarily quote hash keys or words before `=>`:\n\n ~/src/postgresql $ rg -P -t perl '\\{\\s*\\w+\\s*\\}' | wc -l\n 1662\n ~/src/postgresql $ rg -P -t perl '\\{\\s*([\"'\\''])\\w+\\1\\s*\\}' | wc -l\n 155\n ~/src/postgresql $ rg -P -t perl '\\w+\\s*=>' | wc -l\n 3842\n ~/src/postgresql $ rg -P -t perl '([\"'\\''])\\w+\\1\\s*=>' | wc -l\n 310\n\n- ilmari\n\n\n",
"msg_date": "Tue, 27 Jun 2023 16:54:20 +0100",
"msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Honor PG_TEST_NOCLEAN for tempdirs"
},
{
"msg_contents": "On 6/26/23 22:47, Michael Paquier wrote:\n> src/test/perl/README and regress.sgml both describe what\n> PG_TEST_NOCLEAN does, and it seems to me that these should be updated\n> to tell that temporary files are not removed on top of the data\n> folders?\n\nI've added a couple of quick lines to the docs in v2; see what you think.\n\nOn 6/26/23 23:10, Daniel Gustafsson wrote:\n> I think it simply got lost in that thread which had a lot of moving\n> parts as it was.\n\nI'll make sure to register it for the CF. :D\n\nOn 6/27/23 08:20, Andrew Dunstan wrote:\n> This doesn't look quite right. If PG_TEST_CLEAN had a value of 0 we\nwould still do the cleanup.\n\nThat's how it currently works for the data directories, but Dagfinn beat\nme to the punch:\n\nOn 6/27/23 08:54, Dagfinn Ilmari Mannsåker wrote:\n> However, the docs\n> (https://www.postgresql.org/docs/16/regress-tap.html#REGRESS-TAP-VARS)\n> say \"If the environment variable PG_TEST_NOCLEAN is set\", not \"is set to\n> a true value\", and the existing test in PostgreSQL::Test::Cluster's END\n> block is:\n> \n> \t# skip clean if we are requested to retain the basedir\n> \tnext if defined $ENV{'PG_TEST_NOCLEAN'};\n> \n> So the original `not defined` test is consistent with that.\n\nRight. The second patch in v2 now changes that behavior across the\nboard, so we handle false values. I'm ambivalent on changing the wording\nof the docs, but I can do that too if needed. (I'm pretty used to the\nphrase \"setting an environment variable\" implying some sort of\ntrue/false handling, when the envvar is a boolean toggle.)\n\nThanks all!\n--Jacob",
"msg_date": "Tue, 27 Jun 2023 09:47:09 -0700",
"msg_from": "Jacob Champion <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Honor PG_TEST_NOCLEAN for tempdirs"
},
{
"msg_contents": "On 2023-06-27 Tu 11:54, Dagfinn Ilmari Mannsåker wrote:\n> Andrew Dunstan<[email protected]> writes:\n>\n>> On 2023-06-26 Mo 19:55, Jacob Champion wrote:\n>>> Hello,\n>>>\n>>> I was running the test_pg_dump extension suite, and I got annoyed that\n>>> I couldn't keep it from deleting its dump artifacts after a successful\n>>> run. Here's a patch to make use of PG_TEST_NOCLEAN (which currently\n>>> covers the test cluster's base directory) with the Test::Utils\n>>> tempdirs too.\n>>>\n>>> (Looks like this idea was also discussed last year [1]; let me know if\n>>> I missed any more recent suggestions.)\n>>\n>> - CLEANUP => 1);\n>> + CLEANUP => not defined $ENV{'PG_TEST_NOCLEAN'});\n>>\n>>\n>> This doesn't look quite right. If PG_TEST_CLEAN had a value of 0 we\n>> would still do the cleanup. I would probably use something like:\n>>\n>> CLEANUP => $ENV{'PG_TEST_NOCLEAN'} // 1\n>>\n>> i.e. if it's not defined at all or has a value of undef, do the cleanup,\n>> otherwise use the value.\n> If the environment varible were used as a boolean, it should be\n>\n> \tCLEANUP => not $ENV{PG_TEST_NOCLEAN}\n>\n> since `not undef` returns true with no warning, and the senses of the\n> two flags are inverted.\n>\n> However, the docs\n> (https://www.postgresql.org/docs/16/regress-tap.html#REGRESS-TAP-VARS)\n> say \"If the environment variable PG_TEST_NOCLEAN is set\", not \"is set to\n> a true value\", and the existing test in PostgreSQL::Test::Cluster's END\n> block is:\n>\n> \t# skip clean if we are requested to retain the basedir\n> \tnext if defined $ENV{'PG_TEST_NOCLEAN'};\n> \n> So the original `not defined` test is consistent with that.\n\n\nok, but ...\n\nI think it's unwise to encourage setting environment variables without \nvalues. Some years ago I had to work around some ugly warnings in \nbuildfarm logs by removing one such. I guess in the end it's a minor \nissue, but if someone actually sets it to 0 it would seem to me like a \nPOLA violation still to skip the cleanup.\n\n\ncheers\n\n\nandew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2023-06-27 Tu 11:54, Dagfinn Ilmari\n Mannsåker wrote:\n\n\nAndrew Dunstan <[email protected]> writes:\n\n\n\nOn 2023-06-26 Mo 19:55, Jacob Champion wrote:\n\n\nHello,\n\nI was running the test_pg_dump extension suite, and I got annoyed that\nI couldn't keep it from deleting its dump artifacts after a successful\nrun. Here's a patch to make use of PG_TEST_NOCLEAN (which currently\ncovers the test cluster's base directory) with the Test::Utils\ntempdirs too.\n\n(Looks like this idea was also discussed last year [1]; let me know if\nI missed any more recent suggestions.)\n\n\n\n\n- CLEANUP => 1);\n+ CLEANUP => not defined $ENV{'PG_TEST_NOCLEAN'});\n\n\nThis doesn't look quite right. If PG_TEST_CLEAN had a value of 0 we\nwould still do the cleanup. I would probably use something like:\n\n CLEANUP => $ENV{'PG_TEST_NOCLEAN'} // 1\n\ni.e. 
if it's not defined at all or has a value of undef, do the cleanup,\notherwise use the value.\n\n\n\nIf the environment varible were used as a boolean, it should be\n\n\tCLEANUP => not $ENV{PG_TEST_NOCLEAN}\n\nsince `not undef` returns true with no warning, and the senses of the\ntwo flags are inverted.\n\nHowever, the docs\n(https://www.postgresql.org/docs/16/regress-tap.html#REGRESS-TAP-VARS)\nsay \"If the environment variable PG_TEST_NOCLEAN is set\", not \"is set to\na true value\", and the existing test in PostgreSQL::Test::Cluster's END\nblock is:\n\n\t# skip clean if we are requested to retain the basedir\n\tnext if defined $ENV{'PG_TEST_NOCLEAN'};\n \nSo the original `not defined` test is consistent with that.\n\n\n\nok, but ...\nI think it's unwise to encourage setting environment variables\n without values. Some years ago I had to work around some ugly\n warnings in buildfarm logs by removing one such. I guess in the\n end it's a minor issue, but if someone actually sets it to 0 it\n would seem to me like a POLA violation still to skip the cleanup.\n\n\n\ncheers\n\n\nandew\n\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Tue, 27 Jun 2023 14:45:26 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Honor PG_TEST_NOCLEAN for tempdirs"
},
{
"msg_contents": "On 27.06.23 17:54, Dagfinn Ilmari Mannsåker wrote:\n> However, the docs\n> (https://www.postgresql.org/docs/16/regress-tap.html#REGRESS-TAP-VARS)\n> say \"If the environment variable PG_TEST_NOCLEAN is set\", not \"is set to\n> a true value\", and the existing test in PostgreSQL::Test::Cluster's END\n> block is:\n> \n> \t# skip clean if we are requested to retain the basedir\n> \tnext if defined $ENV{'PG_TEST_NOCLEAN'};\n> \n> So the original `not defined` test is consistent with that.\n\nRight, the usual style is just to check whether an environment variable \nis set to something, not what it is.\n\nAlso note that in general not all environment variables are processed by \nPerl, so I would avoid encoding Perl semantics about what is \"true\" or \nwhatever into it.\n\n\n\n",
"msg_date": "Wed, 28 Jun 2023 10:45:02 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Honor PG_TEST_NOCLEAN for tempdirs"
},
{
"msg_contents": "On Wed, Jun 28, 2023 at 10:45:02AM +0200, Peter Eisentraut wrote:\n> Right, the usual style is just to check whether an environment variable is\n> set to something, not what it is.\n> \n> Also note that in general not all environment variables are processed by\n> Perl, so I would avoid encoding Perl semantics about what is \"true\" or\n> whatever into it.\n\nAgreed. I am not sure that this is worth changing to have\nboolean-like checks. Hence, I would also to keep the patch that\nchecks if the environment variable is defined to enforce the behavior,\nwithout checking for a specific value.\n--\nMichael",
"msg_date": "Thu, 29 Jun 2023 09:40:53 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Honor PG_TEST_NOCLEAN for tempdirs"
},
{
"msg_contents": "On Wed, Jun 28, 2023 at 5:41 PM Michael Paquier <[email protected]> wrote:\n> Agreed. I am not sure that this is worth changing to have\n> boolean-like checks. Hence, I would also to keep the patch that\n> checks if the environment variable is defined to enforce the behavior,\n> without checking for a specific value.\n\nSounds good -- 0002 can be ignored as needed, then. (Or I can resend a\nv3 for CI purposes, if you'd like.)\n\n--Jacob\n\n\n",
"msg_date": "Thu, 29 Jun 2023 09:05:59 -0700",
"msg_from": "Jacob Champion <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Honor PG_TEST_NOCLEAN for tempdirs"
},
{
"msg_contents": "On Thu, Jun 29, 2023 at 09:05:59AM -0700, Jacob Champion wrote:\n> Sounds good -- 0002 can be ignored as needed, then. (Or I can resend a\n> v3 for CI purposes, if you'd like.)\n\nI am assuming that this is 0001 posted here:\nhttps://www.postgresql.org/message-id/[email protected]\n\nAnd that looks OK to me. This is something I'd rather backpatch down\nto v11 on usability ground for developers. Any comments or objections\nabout that?\n--\nMichael",
"msg_date": "Fri, 30 Jun 2023 16:09:03 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Honor PG_TEST_NOCLEAN for tempdirs"
},
{
"msg_contents": "> On 30 Jun 2023, at 09:09, Michael Paquier <[email protected]> wrote:\n> \n> On Thu, Jun 29, 2023 at 09:05:59AM -0700, Jacob Champion wrote:\n>> Sounds good -- 0002 can be ignored as needed, then. (Or I can resend a\n>> v3 for CI purposes, if you'd like.)\n> \n> I am assuming that this is 0001 posted here:\n> https://www.postgresql.org/message-id/[email protected]\n> \n> And that looks OK to me. This is something I'd rather backpatch down\n> to v11 on usability ground for developers. Any comments or objections\n> about that?\n\nAgreed, I'd prefer all branches to work the same for this.\n\nReading the patch, only one thing stood out:\n\n-variable PG_TEST_NOCLEAN is set, data directories will be retained\n-regardless of test status.\n+variable PG_TEST_NOCLEAN is set, those directories will be retained\n\nI would've written \"the data directories\" instead of \"those directories\" here.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Fri, 30 Jun 2023 09:42:13 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Honor PG_TEST_NOCLEAN for tempdirs"
},
{
"msg_contents": "On Fri, Jun 30, 2023 at 09:42:13AM +0200, Daniel Gustafsson wrote:\n> Agreed, I'd prefer all branches to work the same for this.\n\nThanks, done this way across all the branches, then.\n\n> Reading the patch, only one thing stood out:\n> \n> -variable PG_TEST_NOCLEAN is set, data directories will be retained\n> -regardless of test status.\n> +variable PG_TEST_NOCLEAN is set, those directories will be retained\n> \n> I would've written \"the data directories\" instead of \"those directories\" here.\n\nAdjusted that as well, on top of an extra comment.\n--\nMichael",
"msg_date": "Mon, 3 Jul 2023 10:17:24 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Honor PG_TEST_NOCLEAN for tempdirs"
},
{
"msg_contents": "On Sun, Jul 2, 2023 at 6:17 PM Michael Paquier <[email protected]> wrote:\n> Adjusted that as well, on top of an extra comment.\n\nThanks all!\n\n--Jacob\n\n\n",
"msg_date": "Wed, 5 Jul 2023 12:55:16 -0700",
"msg_from": "Jacob Champion <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Honor PG_TEST_NOCLEAN for tempdirs"
}
] |
[
{
"msg_contents": "Hi,\n\nOn Twitter Thomas thoroughly nerdsniped me [1]. As part of that I ran a\nconcurrent readonly pgbench workload and analyzed cacheline \"contention\" using\nperf c2c.\n\nOne of the cacheline conflicts, by far not the most common, but significant,\nis one I hadn't noticed in the past. The cacheline accesses are by\npgstat_report_query_id() and pgstat_report_activity().\n\nThe reason for the conflict is simple - we don't ensure any aligment for\nPgBackendStatus. Thus the end of one backend's PgBackendStatus will be in the\nsame cacheline as another backend's start of PgBackendStatus.\n\nHistorically that didn't show up too much, because the end of PgBackendStatus\nhappened to contain less-frequently changing data since at least 9.6, namely\n\tint64\t\tst_progress_param[PGSTAT_NUM_PROGRESS_PARAM];\n\nwhich effectively avoided any relevant false sharing.\n\n\nBut in 4f0b0966c866 a new trailing element was added to PgBackendStatus:\n\n\t/* query identifier, optionally computed using post_parse_analyze_hook */\n\tuint64\t\tst_query_id;\n\nwhich is very frequently set, due to the following in ExecutorStart:\n\t/*\n\t * In some cases (e.g. an EXECUTE statement) a query execution will skip\n\t * parse analysis, which means that the query_id won't be reported. Note\n\t * that it's harmless to report the query_id multiple times, as the call\n\t * will be ignored if the top level query_id has already been reported.\n\t */\n\tpgstat_report_query_id(queryDesc->plannedstmt->queryId, false);\n\n\n\nThe benchmarks I ran used -c 48 -j 48 clients on my two socket workstation, 2x\n10/20 cores/threads.\n\nWith a default pgbench -S workload, the regression is barely visible - the\ncontext switches between pgbench and backend use too many resources. But a\npipelined pgbench -S shows a 1-2% regression and server-side query execution\nof a simple statement [2] regresses by ~5.5%.\n\nNote that this is with compute_query_id = auto, without any extensions\nloaded.\n\nThe fix for this is quite simple, something like:\n#ifdef pg_attribute_aligned\n\tpg_attribute_aligned(PG_CACHE_LINE_SIZE)\n#endif\n\nat the start of PgBackendStatus.\n\n\nUnfortunately we can't fix that in the backbranches, as it obviously is an ABI\nviolation.\n\n\nLeaving the performance issue aside for a moment, I'm somewhat confused by the\nmaintenance of PgBackendStatus->st_query_id:\n\n1) Why are there pgstat_report_query_id() calls in parse_analyze_*()? We aren't\n executing the statements at that point?\n\n2) pgstat_report_query_id() doesn't overwrite a non-zero query_id unless force\n is passed in. Force is only passed in exec_simple_query(). query_id is also\n reset when pgstat_report_activity(STATE_RUNNING) is called.\n\n I think this means that e.g. bgworkers issuing queries will often get stuck\n on the first query_id used, unless they call pgstat_report_activity()?\n\nGreetings,\n\nAndres Freund\n\n[1] https://twitter.com/MengTangmu/status/1673439083518115840\n[2] DO $$ BEGIN FOR i IN 1..10000 LOOP EXECUTE 'SELECT'; END LOOP;END;$$;\n\n\n",
"msg_date": "Mon, 26 Jun 2023 18:34:58 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "False sharing for PgBackendStatus, made worse by in-core query_id\n handling"
}
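A small illustrative sketch of the alignment fix described above; the struct layout, field names, and 64-byte line size are assumptions for illustration, not the real PgBackendStatus definition:

    #include <stdint.h>

    #define CACHE_LINE_SIZE 64      /* assumed value for illustration */

    typedef struct __attribute__((aligned(CACHE_LINE_SIZE))) backend_slot
    {
        int64_t  st_progress[20];   /* written occasionally */
        uint64_t st_query_id;       /* trailing field, written per statement */
    } backend_slot;

    /*
     * Each backend owns one element and writes it frequently.  Because the
     * struct is aligned (and therefore padded) to the cache-line size, the
     * frequently-written tail of slots[i] can no longer land in the same
     * cache line as the start of slots[i + 1], which is the false sharing
     * described above.
     */
    static backend_slot slots[128];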
] |
[
{
"msg_contents": "Hi,\n\nAs mentioned nearby [1], Thomas brought up [2] the idea of using\nReadRecentBuffer() _bt_getroot(). I couldn't resist and prototyped it.\n\nUnfortunately it scaled way worse at first. This is not an inherent issue, but\ndue to an implementation choice in ReadRecentBuffer(). Whereas the normal\nBufferAlloc() path uses PinBuffer(), ReadRecentBuffer() first does\nLockBufHdr(), checks if the buffer ID is the same and then uses\nPinBuffer_Locked().\n\nThe problem with that is that PinBuffer() takes care to not hold the buffer\nheader spinlock, it uses compare_exchange to atomically acquire the pin, while\nguaranteing nobody holds the lock. When holding the buffer header spinlock,\nthere obviously is the risk of being scheduled out (or even just not have\nexclusive access to the cacheline).\n\nReadRecentBuffer() scales worse even if LockBufHdr() is immediately followed\nby PinBuffer_Locked(), so it's really just holding the lock that is the issue.\n\n\nThe fairly obvious solution to this is to just use PinBuffer() and just unpin\nthe buffer if its identity was changed concurrently. There could be an\nunlocked pre-check as well. However, there's the following comment in\nReadRecentBuffer():\n\t\t\t * It's now safe to pin the buffer. We can't pin first and ask\n\t\t\t * questions later, because it might confuse code paths like\n\t\t\t * InvalidateBuffer() if we pinned a random non-matching buffer.\n\t\t\t */\n\nBut I'm not sure I buy that - there's plenty other things that can briefly\nacquire a buffer pin (e.g. checkpointer, reclaiming the buffer for other\ncontents, etc).\n\n\n\nAnother difference between using PinBuffer() and PinBuffer_locked() is that\nthe latter does not adjust a buffer's usagecount.\n\nLeaving the scalability issue aside, isn't it somewhat odd that optimizing a\ncodepath to use ReadRecentBuffer() instead of ReadBuffer() leads to not\nincreasing usagecount anymore?\n\n\nFWIW, once that's fixed, using ReadRecentBuffer() for _bt_getroot(), caching\nthe root page's buffer id in RelationData, seems a noticeable win. About 7% in\na concurrent, read-only pgbench that utilizes batches of 10. And it should be\neasy to get much bigger wins, e.g. with a index nested loop with a relatively\nsmall index on the inner side.\n\n\nGreetings,\n\nAndres Freund\n\n[1] https://www.postgresql.org/message-id/20230627013458.axge7iylw7llyvww%40awork3.anarazel.de\n[2] https://twitter.com/MengTangmu/status/1673439083518115840\n\n\n",
"msg_date": "Mon, 26 Jun 2023 19:05:46 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "ReadRecentBuffer() doesn't scale well"
},
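A toy, self-contained version of the "pin first, ask questions later" scheme sketched above. The types and helpers are simplified stand-ins rather than the bufmgr.c API, and the real PinBuffer() uses a compare-exchange loop with the appropriate memory ordering rather than the bare atomic increment shown here:

    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdint.h>

    typedef struct
    {
        uint32_t rel;
        uint32_t block;
    } buf_tag;

    typedef struct
    {
        _Atomic uint32_t refcount;   /* pin count (simplified) */
        buf_tag          tag;        /* which page this slot currently holds */
    } buf_desc;

    static bool
    tag_equal(buf_tag a, buf_tag b)
    {
        return a.rel == b.rel && a.block == b.block;
    }

    /*
     * Optimistically reuse a remembered buffer slot: cheap unlocked
     * pre-check, then pin without holding the header spinlock, then verify
     * the identity again.  If the slot was recycled for another page in the
     * meantime, drop the pin and let the caller fall back to a normal read.
     */
    static bool
    read_recent_buffer_sketch(buf_desc *buf, buf_tag expected)
    {
        if (!tag_equal(buf->tag, expected))
            return false;                       /* unlocked pre-check failed */

        atomic_fetch_add(&buf->refcount, 1);    /* pin */

        if (tag_equal(buf->tag, expected))
            return true;                        /* still the page we wanted */

        atomic_fetch_sub(&buf->refcount, 1);    /* lost a race: unpin */
        return false;
    }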
{
"msg_contents": "On Tue, Jun 27, 2023 at 2:05 PM Andres Freund <[email protected]> wrote:\n> As mentioned nearby [1], Thomas brought up [2] the idea of using\n> ReadRecentBuffer() _bt_getroot(). I couldn't resist and prototyped it.\n\nThanks!\n\n> Unfortunately it scaled way worse at first. This is not an inherent issue, but\n> due to an implementation choice in ReadRecentBuffer(). Whereas the normal\n> BufferAlloc() path uses PinBuffer(), ReadRecentBuffer() first does\n> LockBufHdr(), checks if the buffer ID is the same and then uses\n> PinBuffer_Locked().\n>\n> The problem with that is that PinBuffer() takes care to not hold the buffer\n> header spinlock, it uses compare_exchange to atomically acquire the pin, while\n> guaranteing nobody holds the lock. When holding the buffer header spinlock,\n> there obviously is the risk of being scheduled out (or even just not have\n> exclusive access to the cacheline).\n\nYeah. Aside from inherent nastiness of user-space spinlocks, this new\nuse case is also enormously more likely to contend and then get into\ntrouble by being preempted due to btree root pages being about the\nhottest pages in the universe than the use case I was focusing on at\nthe time.\n\n> The fairly obvious solution to this is to just use PinBuffer() and just unpin\n> the buffer if its identity was changed concurrently. There could be an\n> unlocked pre-check as well. However, there's the following comment in\n> ReadRecentBuffer():\n> * It's now safe to pin the buffer. We can't pin first and ask\n> * questions later, because it might confuse code paths like\n> * InvalidateBuffer() if we pinned a random non-matching buffer.\n> */\n>\n> But I'm not sure I buy that - there's plenty other things that can briefly\n> acquire a buffer pin (e.g. checkpointer, reclaiming the buffer for other\n> contents, etc).\n\nI may well have been too cautious with that. The worst thing I can\nthink of right now is that InvalidateBuffer() would busy loop (as it\nalready does in other rare cases) when it sees a pin.\n\n> Another difference between using PinBuffer() and PinBuffer_locked() is that\n> the latter does not adjust a buffer's usagecount.\n>\n> Leaving the scalability issue aside, isn't it somewhat odd that optimizing a\n> codepath to use ReadRecentBuffer() instead of ReadBuffer() leads to not\n> increasing usagecount anymore?\n\nYeah, that is not great. The simplification you suggest would fix\nthat too, though I guess it would also bump the usage count of buffers\nthat don't have the tag we expected; that's obviously rare and erring\non a better side though.\n\n> FWIW, once that's fixed, using ReadRecentBuffer() for _bt_getroot(), caching\n> the root page's buffer id in RelationData, seems a noticeable win. About 7% in\n> a concurrent, read-only pgbench that utilizes batches of 10. And it should be\n> easy to get much bigger wins, e.g. with a index nested loop with a relatively\n> small index on the inner side.\n\nWooo, that's better than I was hoping. Thanks for trying it out! I\nthink, for the complexity involved (ie very little), it's a nice\nresult, and worth considering even though it's also a solid clue that\nwe could do much better than this with a (yet to be designed)\nlonger-lived pin scheme. smgr_targblock could be another\neasy-to-cache candidate, ie a place where there is a single\ninteresting hot page that we're already keeping track of with no\nrequirement for new backend-local mapping machinery.\n\n\n",
"msg_date": "Tue, 27 Jun 2023 15:33:57 +1200",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ReadRecentBuffer() doesn't scale well"
},
{
"msg_contents": "On Mon, Jun 26, 2023 at 8:34 PM Thomas Munro <[email protected]> wrote:\n> Yeah. Aside from inherent nastiness of user-space spinlocks, this new\n> use case is also enormously more likely to contend and then get into\n> trouble by being preempted due to btree root pages being about the\n> hottest pages in the universe than the use case I was focusing on at\n> the time.\n\nThey're not just the hottest. They're also among the least likely to\nchange from one moment to the next. (If that ever failed to hold then\nit wouldn't take long for the index to become grotesquely tall.)\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 26 Jun 2023 20:44:27 -0700",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ReadRecentBuffer() doesn't scale well"
},
{
"msg_contents": "Hi,\n\nOn 2023-06-27 15:33:57 +1200, Thomas Munro wrote:\n> On Tue, Jun 27, 2023 at 2:05 PM Andres Freund <[email protected]> wrote:\n> > Unfortunately it scaled way worse at first. This is not an inherent issue, but\n> > due to an implementation choice in ReadRecentBuffer(). Whereas the normal\n> > BufferAlloc() path uses PinBuffer(), ReadRecentBuffer() first does\n> > LockBufHdr(), checks if the buffer ID is the same and then uses\n> > PinBuffer_Locked().\n> >\n> > The problem with that is that PinBuffer() takes care to not hold the buffer\n> > header spinlock, it uses compare_exchange to atomically acquire the pin, while\n> > guaranteing nobody holds the lock. When holding the buffer header spinlock,\n> > there obviously is the risk of being scheduled out (or even just not have\n> > exclusive access to the cacheline).\n> \n> Yeah. Aside from inherent nastiness of user-space spinlocks\n\nI've been wondering about making our backoff path use futexes, after some\nadaptive spinning.\n\n\n> > The fairly obvious solution to this is to just use PinBuffer() and just unpin\n> > the buffer if its identity was changed concurrently. There could be an\n> > unlocked pre-check as well. However, there's the following comment in\n> > ReadRecentBuffer():\n> > * It's now safe to pin the buffer. We can't pin first and ask\n> > * questions later, because it might confuse code paths like\n> > * InvalidateBuffer() if we pinned a random non-matching buffer.\n> > */\n> >\n> > But I'm not sure I buy that - there's plenty other things that can briefly\n> > acquire a buffer pin (e.g. checkpointer, reclaiming the buffer for other\n> > contents, etc).\n> \n> I may well have been too cautious with that. The worst thing I can\n> think of right now is that InvalidateBuffer() would busy loop (as it\n> already does in other rare cases) when it sees a pin.\n\nRight. Particularly if we were to add a pre-check for the tag to match, that\nshould be extremely rare.\n\n\n> > Another difference between using PinBuffer() and PinBuffer_locked() is that\n> > the latter does not adjust a buffer's usagecount.\n> >\n> > Leaving the scalability issue aside, isn't it somewhat odd that optimizing a\n> > codepath to use ReadRecentBuffer() instead of ReadBuffer() leads to not\n> > increasing usagecount anymore?\n> \n> Yeah, that is not great. The simplification you suggest would fix\n> that too, though I guess it would also bump the usage count of buffers\n> that don't have the tag we expected; that's obviously rare and erring\n> on a better side though.\n\nYea, I'm not worried about that. If somebody is, we could just add code to\ndecrement the usagecount again.\n\n\n> > FWIW, once that's fixed, using ReadRecentBuffer() for _bt_getroot(), caching\n> > the root page's buffer id in RelationData, seems a noticeable win. About 7% in\n> > a concurrent, read-only pgbench that utilizes batches of 10. And it should be\n> > easy to get much bigger wins, e.g. with a index nested loop with a relatively\n> > small index on the inner side.\n> \n> Wooo, that's better than I was hoping. Thanks for trying it out! I\n> think, for the complexity involved (ie very little)\n\nI don't really have a concrete thought for where to store the id of the recent\nbuffer. 
I just added a new field into some padding in RelationData, but we\nmight go for something fancier.\n\n\n> smgr_targblock could be another easy-to-cache candidate, ie a place where\n> there is a single interesting hot page that we're already keeping track of\n> with no requirement for new backend-local mapping machinery.\n\nI wonder if we should simple add a generic field for such a Buffer to\nRelationData, that the AM can use as it desires. For btree that would be the\nroot page, for heap the target block ...\n\n\n> it's a nice result, and worth considering even though it's also a solid clue\n> that we could do much better than this with a (yet to be designed)\n> longer-lived pin scheme.\n\nIndeed. PinBuffer() is pretty hot after the change. As is the buffer content\nlock.\n\nParticularly for the root page, it'd be really interesting to come up with a\nscheme that keeps an offline copy of the root page while also pinning the real\nroot page. I think we should be able to have a post-check that can figure out\nif the copied root page is out of date after searching it, without needing the\ncontent lock.\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 26 Jun 2023 21:09:31 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: ReadRecentBuffer() doesn't scale well"
},
{
"msg_contents": "On Mon, Jun 26, 2023 at 9:09 PM Andres Freund <[email protected]> wrote:\n> I think we should be able to have a post-check that can figure out\n> if the copied root page is out of date after searching it, without needing the\n> content lock.\n\nI'm guessing that you're thinking of doing something with the page LSN?\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 26 Jun 2023 21:32:22 -0700",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ReadRecentBuffer() doesn't scale well"
},
{
"msg_contents": "On Tue, Jun 27, 2023 at 4:32 PM Peter Geoghegan <[email protected]> wrote:\n> On Mon, Jun 26, 2023 at 9:09 PM Andres Freund <[email protected]> wrote:\n> > I think we should be able to have a post-check that can figure out\n> > if the copied root page is out of date after searching it, without needing the\n> > content lock.\n>\n> I'm guessing that you're thinking of doing something with the page LSN?\n\nIf the goal is to get rid of both pins and content locks, LSN isn't\nenough. A page might be evicted and replaced by another page that has\nthe same LSN because they were modified by the same record. Maybe\nthat's vanishingly rare, but the correct thing would be counter that\ngoes up on modification AND eviction. (FWIW I toyed with variants of\nthis concept in the context of SLRU -> buffer pool migration, where I\nwas trying to do zero-lock CLOG lookups; in that case I didn't need\nthe copy of the page being discussed here due to the data being\natomically readable, but I had the same requirement for a\n\"did-it-change-under-my-feet?\" check).\n\n\n",
"msg_date": "Tue, 27 Jun 2023 16:40:08 +1200",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ReadRecentBuffer() doesn't scale well"
},
{
"msg_contents": "On Mon, Jun 26, 2023 at 9:40 PM Thomas Munro <[email protected]> wrote:\n> If the goal is to get rid of both pins and content locks, LSN isn't\n> enough. A page might be evicted and replaced by another page that has\n> the same LSN because they were modified by the same record. Maybe\n> that's vanishingly rare, but the correct thing would be counter that\n> goes up on modification AND eviction.\n\nIt should be safe to allow searchers to see a version of the root page\nthat is out of date. The Lehman & Yao design is very permissive about\nthese things. There aren't any special cases where the general rules\nare weakened in some way that might complicate this approach.\nSearchers need to check the high key to determine if they need to move\nright -- same as always.\n\nMore concretely: A root page can be concurrently split when there is\nan in-flight index scan that is about to land on it (which becomes the\nleft half of the split). It doesn't matter if it's a searcher that is\n\"between\" the meta page and the root page. It doesn't matter if a\nlevel was added. This is true even though nothing that you'd usually\nthink of as an interlock is held \"between levels\". The root page isn't\nreally special, except in the obvious way. We can even have two roots\nat the same time (the true root, and the fast root).\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 26 Jun 2023 21:53:12 -0700",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ReadRecentBuffer() doesn't scale well"
},
{
"msg_contents": "On Tue, Jun 27, 2023 at 4:53 PM Peter Geoghegan <[email protected]> wrote:\n> On Mon, Jun 26, 2023 at 9:40 PM Thomas Munro <[email protected]> wrote:\n> > If the goal is to get rid of both pins and content locks, LSN isn't\n> > enough. A page might be evicted and replaced by another page that has\n> > the same LSN because they were modified by the same record. Maybe\n> > that's vanishingly rare, but the correct thing would be counter that\n> > goes up on modification AND eviction.\n>\n> It should be safe to allow searchers to see a version of the root page\n> that is out of date. The Lehman & Yao design is very permissive about\n> these things. There aren't any special cases where the general rules\n> are weakened in some way that might complicate this approach.\n> Searchers need to check the high key to determine if they need to move\n> right -- same as always.\n\nOK. I guess I'm talking about a slightly more general version of the\nproblem inspired by the stuff I mentioned in parentheses, which would\nsimply get the wrong answer if the mapping changed, whereas here you'd\nuse the cached copy in a race case which should still work for\nsearches.\n\nSo I guess the question for this thread is: do we want to work on\nReadRecentBuffer(), or just take this experiment as evidence of even\nmore speed-up available and aim for that directly?\n\n\n",
"msg_date": "Tue, 27 Jun 2023 17:23:44 +1200",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ReadRecentBuffer() doesn't scale well"
},
{
"msg_contents": "Hi,\n\nOn 2023-06-27 16:40:08 +1200, Thomas Munro wrote:\n> On Tue, Jun 27, 2023 at 4:32 PM Peter Geoghegan <[email protected]> wrote:\n> > On Mon, Jun 26, 2023 at 9:09 PM Andres Freund <[email protected]> wrote:\n> > > I think we should be able to have a post-check that can figure out\n> > > if the copied root page is out of date after searching it, without needing the\n> > > content lock.\n> >\n> > I'm guessing that you're thinking of doing something with the page LSN?\n\nYes, that seems to be the most obvious.\n\n\n> If the goal is to get rid of both pins and content locks, LSN isn't\n> enough.\n\nI was imaginging you'd have a long-lived pin. I don't think trying to make it\nwork without that is particularly promising in this context, where it seems\nquite feasible to keep pins around for a while.\n\n\n> A page might be evicted and replaced by another page that has the same LSN\n> because they were modified by the same record. Maybe that's vanishingly\n> rare, but the correct thing would be counter that goes up on modification\n> AND eviction.\n\nI don't think it would need to be a single counter. If we wanted to do\nsomething like this, I think you'd have to have a counter in the buffer desc\nthat's incremented whenever the page is replaced. Plus the LSN for the page\ncontent change \"counter\".\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 26 Jun 2023 22:39:14 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: ReadRecentBuffer() doesn't scale well"
},
{
"msg_contents": "Hi,\n\nOn 2023-06-26 21:53:12 -0700, Peter Geoghegan wrote:\n> It should be safe to allow searchers to see a version of the root page\n> that is out of date. The Lehman & Yao design is very permissive about\n> these things. There aren't any special cases where the general rules\n> are weakened in some way that might complicate this approach.\n> Searchers need to check the high key to determine if they need to move\n> right -- same as always.\n\nWouldn't we at least need a pin on the root page, or hold a snapshot, to\ndefend against page deletions?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 26 Jun 2023 23:27:01 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: ReadRecentBuffer() doesn't scale well"
},
{
"msg_contents": "On Mon, Jun 26, 2023 at 11:27 PM Andres Freund <[email protected]> wrote:\n> On 2023-06-26 21:53:12 -0700, Peter Geoghegan wrote:\n> > It should be safe to allow searchers to see a version of the root page\n> > that is out of date. The Lehman & Yao design is very permissive about\n> > these things. There aren't any special cases where the general rules\n> > are weakened in some way that might complicate this approach.\n> > Searchers need to check the high key to determine if they need to move\n> > right -- same as always.\n>\n> Wouldn't we at least need a pin on the root page, or hold a snapshot, to\n> defend against page deletions?\n\nYou need to hold a snapshot to prevent concurrent page recycling --\nthough not page deletion itself (I did say \"anything that you'd\nusually think of as an interlock\"). I'm pretty sure that a concurrent\npage deletion is possible, even when you hold a pin on the page.\n(Perhaps not, but if not then it's just an accident -- a side-effect\nof the interlock that protects against concurrent heap TID recycling.)\n\nYou can't delete a rightmost page (on any level). Every root page is a\nrightmost page. So the root would have to be split, and then once\nagain emptied before it could be deleted -- only then would there be a\ndanger of some backend with a locally cached root page having an\nirredeemably bad picture of what's going on with the index. That's\nanother angle that you could approach the problem from, I suppose.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Tue, 27 Jun 2023 01:10:25 -0700",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ReadRecentBuffer() doesn't scale well"
},
{
"msg_contents": "On Tue, 27 Jun 2023 at 07:09, Andres Freund <[email protected]> wrote:\n> On 2023-06-27 15:33:57 +1200, Thomas Munro wrote:\n> > On Tue, Jun 27, 2023 at 2:05 PM Andres Freund <[email protected]> wrote:\n> > > Unfortunately it scaled way worse at first. This is not an inherent issue, but\n> > > due to an implementation choice in ReadRecentBuffer(). Whereas the normal\n> > > BufferAlloc() path uses PinBuffer(), ReadRecentBuffer() first does\n> > > LockBufHdr(), checks if the buffer ID is the same and then uses\n> > > PinBuffer_Locked().\n> > >\n> > > The problem with that is that PinBuffer() takes care to not hold the buffer\n> > > header spinlock, it uses compare_exchange to atomically acquire the pin, while\n> > > guaranteing nobody holds the lock. When holding the buffer header spinlock,\n> > > there obviously is the risk of being scheduled out (or even just not have\n> > > exclusive access to the cacheline).\n> >\n> > Yeah. Aside from inherent nastiness of user-space spinlocks\n>\n> I've been wondering about making our backoff path use futexes, after some\n> adaptive spinning.\n\nIf you want to experiment, here is a rebased version of something I\nhacked up a couple of years back on the way to Fosdem Pgday. I didn't\npursue it further because I didn't have a use case where it showed a\nsignificant difference.\n\n--\nAnts",
"msg_date": "Tue, 27 Jun 2023 14:49:48 +0300",
"msg_from": "Ants Aasma <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ReadRecentBuffer() doesn't scale well"
},
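For context, a minimal futex-backed lock of the sort the attached patch is aiming at might look roughly like the classic three-state design below. This is a generic Linux-only sketch (spin briefly, then sleep in the kernel), not the patch itself; the lock-word encoding, the spin count, and the function names are all arbitrary choices for illustration.

#define _GNU_SOURCE
#include <linux/futex.h>
#include <stdatomic.h>
#include <stdint.h>
#include <sys/syscall.h>
#include <unistd.h>

static _Atomic uint32_t lock_word;   /* 0 = free, 1 = held, 2 = held + waiters */

static long
futex(uint32_t *addr, int op, uint32_t val)
{
    return syscall(SYS_futex, addr, op, val, NULL, NULL, 0);
}

static void
lock_acquire(void)
{
    uint32_t expected = 0;

    /* Fast path: uncontended compare-and-swap. */
    if (atomic_compare_exchange_strong(&lock_word, &expected, 1))
        return;

    /* Adaptive spin before giving up and sleeping. */
    for (int i = 0; i < 100; i++)
    {
        expected = 0;
        if (atomic_compare_exchange_strong(&lock_word, &expected, 1))
            return;
    }

    /* Slow path: advertise a waiter and sleep until woken. */
    while (atomic_exchange(&lock_word, 2) != 0)
        futex((uint32_t *) &lock_word, FUTEX_WAIT, 2);
}

static void
lock_release(void)
{
    /* Hand the lock back and wake one sleeper if anyone advertised waiting. */
    if (atomic_exchange(&lock_word, 0) == 2)
        futex((uint32_t *) &lock_word, FUTEX_WAKE, 1);
}

The interesting part is the release side: the owner has to learn atomically whether anyone is sleeping, which is where the extra cost discussed in the follow-up message comes from.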
{
"msg_contents": "Hi,\n\nOn 2023-06-27 14:49:48 +0300, Ants Aasma wrote:\n> If you want to experiment, here is a rebased version of something I\n> hacked up a couple of years back on the way to Fosdem Pgday. I didn't\n> pursue it further because I didn't have a use case where it showed a\n> significant difference.\n\nThanks for posting!\n\nBased on past experiments, anything that requires an atomic op during spinlock\nrelease on x86 will be painful :/. I'm not sure there's a realistic way to\navoid that with futexes though :(.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 27 Jun 2023 08:40:04 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: ReadRecentBuffer() doesn't scale well"
},
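The release-path cost being discussed can be seen in a small stand-alone comparison. A plain test-and-set spinlock can be released with an ordinary release store, while a futex-style lock has to learn at release time whether anyone is sleeping, which forces an atomic read-modify-write (a locked xchg on x86). The names and lock-word encodings below are invented for illustration.

#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

static _Atomic uint32_t simple_lock;      /* 0 = free, 1 = held */
static _Atomic uint32_t futex_like_lock;  /* 0 = free, 1 = held, 2 = held + waiters */

static void
simple_unlock(void)
{
    /* Plain release store: an ordinary MOV on x86, no bus-locked instruction. */
    atomic_store_explicit(&simple_lock, 0, memory_order_release);
}

static bool
futex_like_unlock(void)
{
    /*
     * Must atomically clear the lock and observe the old value in one step,
     * so a plain store is not enough; the exchange compiles to a locked XCHG
     * on x86.  Returns whether the caller needs to issue a wake-up.
     */
    return atomic_exchange_explicit(&futex_like_lock, 0, memory_order_release) == 2;
}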
{
"msg_contents": "On Tue, 27 Jun 2023 at 18:40, Andres Freund <[email protected]> wrote:\n> On 2023-06-27 14:49:48 +0300, Ants Aasma wrote:\n> > If you want to experiment, here is a rebased version of something I\n> > hacked up a couple of years back on the way to Fosdem Pgday. I didn't\n> > pursue it further because I didn't have a use case where it showed a\n> > significant difference.\n>\n> Thanks for posting!\n>\n> Based on past experiments, anything that requires an atomic op during spinlock\n> release on x86 will be painful :/. I'm not sure there's a realistic way to\n> avoid that with futexes though :(.\n\nDo you happen to know if a plain xchg instruction counts as an atomic\nfor this? I haven't done atomics stuff in a while, so I might be\nmissing something, but at first glance I think using a plain xchg\nwould be enough for the releasing side.\n\n-- \nAnts\n\n\n",
"msg_date": "Tue, 27 Jun 2023 19:04:31 +0300",
"msg_from": "Ants Aasma <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ReadRecentBuffer() doesn't scale well"
},
{
"msg_contents": "Hi,\n\nOn 2023-06-27 19:04:31 +0300, Ants Aasma wrote:\n> On Tue, 27 Jun 2023 at 18:40, Andres Freund <[email protected]> wrote:\n> > On 2023-06-27 14:49:48 +0300, Ants Aasma wrote:\n> > > If you want to experiment, here is a rebased version of something I\n> > > hacked up a couple of years back on the way to Fosdem Pgday. I didn't\n> > > pursue it further because I didn't have a use case where it showed a\n> > > significant difference.\n> >\n> > Thanks for posting!\n> >\n> > Based on past experiments, anything that requires an atomic op during spinlock\n> > release on x86 will be painful :/. I'm not sure there's a realistic way to\n> > avoid that with futexes though :(.\n> \n> Do you happen to know if a plain xchg instruction counts as an atomic\n> for this? I haven't done atomics stuff in a while, so I might be\n> missing something, but at first glance I think using a plain xchg\n> would be enough for the releasing side.\n\nIt is automatically an atomic op when referencing memory:\n\nIntel SDM 9.1.2.1 Automatic Locking:\n\"The operations on which the processor automatically follows the LOCK semantics are as follows:\n• When executing an XCHG instruction that references memory.\n...\n\"\n\nTheoretically cmpxchg can be used in a non-atomic fashion. I'm not sure that\ncan be done correctly though, if you want to also store a separate value for\nthe futex. This stuff is hard to think though :)\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 27 Jun 2023 10:07:10 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: ReadRecentBuffer() doesn't scale well"
},
{
"msg_contents": "Hi,\n\nOn 2023-06-27 01:10:25 -0700, Peter Geoghegan wrote:\n> On Mon, Jun 26, 2023 at 11:27 PM Andres Freund <[email protected]> wrote:\n> > On 2023-06-26 21:53:12 -0700, Peter Geoghegan wrote:\n> > > It should be safe to allow searchers to see a version of the root page\n> > > that is out of date. The Lehman & Yao design is very permissive about\n> > > these things. There aren't any special cases where the general rules\n> > > are weakened in some way that might complicate this approach.\n> > > Searchers need to check the high key to determine if they need to move\n> > > right -- same as always.\n> >\n> > Wouldn't we at least need a pin on the root page, or hold a snapshot, to\n> > defend against page deletions?\n>\n> You need to hold a snapshot to prevent concurrent page recycling --\n> though not page deletion itself (I did say \"anything that you'd\n> usually think of as an interlock\").\n\nI don't think we'd want to have a snapshot for this, that make it much less\nbeneficial.\n\n\n> I'm pretty sure that a concurrent page deletion is possible, even when you\n> hold a pin on the page. (Perhaps not, but if not then it's just an accident\n> -- a side-effect of the interlock that protects against concurrent heap TID\n> recycling.)\n\nLooks like the pin should prevent the danger, but wouldn't be very attractive,\ndue to blocking vacuum...\n\n\nI've wondered before about a type of pin that just prevents buffer\nreplacement, but not cleaning up page contents. I think that'd be beneficial\nin quite a few places.\n\n\n> You can't delete a rightmost page (on any level). Every root page is a\n> rightmost page. So the root would have to be split, and then once\n> again emptied before it could be deleted -- only then would there be a\n> danger of some backend with a locally cached root page having an\n> irredeemably bad picture of what's going on with the index. That's\n> another angle that you could approach the problem from, I suppose.\n\nIf we had a way of just preventing the page from being replaced, or reliably\ndetecting that happening, without blocking btree vacuum, the easiest path\nseems to be to use the cached version of the root page, and re-do the work\nwhenever a) the LSN of the page has changed or b) the buffer has been\nreplaced. To me that seems like it'd likely be simpler and more general than\nrelying on being able to step right from any outdated, but not deleted,\nversion of the page (due to the page deletion issues).\n\nObviously that'd lead to retries more often - but realistically it's still\ngoing to be vanishingly rare, root pages don't get modified that much once the\nindex is beyond toy size.\n\n\nI think a replacement counter in the buffer desc is the easiest way to achieve\nthat? We'd have to store the buffer ID, buffer replacement counter and page\nLSN in RelationData. I think the protocol would have to be something like\n\n1) do search on the copy of the root page\n\n2) get page LSN from the relevant buffer contents - this could be from a\n different relation / block or even an empty page, but will never be an\n invalid memory access, as we don't free shared buffers before shutdown. 
If\n the LSN changed since taking the copy, take a new copy of the root page and\n start at 1)\n\n3) check if buffer replacement counter is the same as at the time of the copy,\n if not take a new copy of the root page and start at 1)\n\n4) happiness\n\n\nFor optimization reasons it might make sense to store the buffer replacement\ncounter on a separate cacheline from BufferDesc.{state,content_lock}, so\nreading the buffer replacement counter doesn't cause cacheline contention with\nbackends working with state/lock. But that's an implementation detail, and it\nmight not matter much, because the pressure on state,content_lock would be\nreduced drastically.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 27 Jun 2023 10:42:19 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: ReadRecentBuffer() doesn't scale well"
},
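The four numbered steps above can be rendered as a toy retry loop. Everything here is invented for illustration (no real buffer manager, no barriers, simple structs standing in for the buffer descriptor and the relcache slot, and the cache is assumed to have been primed with an initial copy); it only shows the control flow: search the private copy, validate it against the live page's LSN and the buffer's replacement counter, and refresh-and-retry on a mismatch.

#include <stdbool.h>
#include <stdint.h>
#include <string.h>

#define TOY_PAGESZ 8192

typedef struct ToyLiveBuffer
{
    uint64_t page_lsn;            /* advanced on every page modification */
    uint64_t replacement_count;   /* advanced whenever the buffer is reused */
    char     page[TOY_PAGESZ];
} ToyLiveBuffer;

typedef struct ToyRootCache
{
    uint64_t lsn_at_copy;
    uint64_t replacements_at_copy;
    char     copy[TOY_PAGESZ];
} ToyRootCache;

/* Stand-in for the actual binary search over the copied root page. */
static int
search_copy(const char *page, int key)
{
    (void) page;
    return key;                 /* pretend we found a child pointer */
}

static int
search_root(ToyRootCache *cache, ToyLiveBuffer *live, int key)
{
    for (;;)
    {
        /* 1) do the search on the local copy */
        int result = search_copy(cache->copy, key);

        /* 2) page content unchanged since the copy was taken? */
        /* 3) buffer still holding the same page? */
        if (cache->lsn_at_copy == live->page_lsn &&
            cache->replacements_at_copy == live->replacement_count)
            return result;      /* 4) happiness */

        /* Stale: take a fresh copy and try again. */
        cache->lsn_at_copy = live->page_lsn;
        cache->replacements_at_copy = live->replacement_count;
        memcpy(cache->copy, live->page, TOY_PAGESZ);
    }
}

In the real thing the two checks would read the live LSN and counter with appropriate ordering, and the retry should be vanishingly rare since a root page changes so infrequently once the index is beyond toy size.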
{
"msg_contents": "I (re)discovered why I used the lock-then-pin approach. In the\ncomments I mentioned InvalidBuffer(), but the main problem is in its\ncaller GetVictimBuffer() which has various sanity checks about\nreference counts that can occasionally fail if you have code randomly\npinning any old buffer.\n\nNew idea: use the standard PinBuffer() function, but add a mode that\ndoesn't pin invalid buffers (with caveat that you can perhaps get a\nfalse negative due to unlocked read, but never a false positive; see\ncommit message). Otherwise we'd have to duplicate all the same logic\nto use cmpxchg for ReadRecentBuffer(), or rethink the assumptions in\nthat other code.\n\nAs for the lack of usage bump in the back-branches, I think the\noptions are: teach PinBuffer_Locked() to increment it optionally, or\nback-patch whatever we come up with for this.\n\nFor the root buffer optimisation, the obvious place for storage seems\nto be under rd_amcache. It was originally invented for the cached\nmetapage (commit d2896a9ed14) but could accommodate a new struct\nholding whatever we want. Here is a patch to try that out.\nBTAMCacheData would also be a natural place to put future things\nincluding a copy of the root page itself, in later work on lock-free\ntricks.\n\nExperimental patches attached.",
"msg_date": "Thu, 29 Jun 2023 19:35:30 +1200",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ReadRecentBuffer() doesn't scale well"
},
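The "pin only if valid" mode described above boils down to a compare-and-swap loop that refuses to pin when the valid bit is clear. The sketch below uses an invented one-word state layout (one valid bit plus a pin count), so it is far simpler than the real buffer state word, but it shows the false-negative/no-false-positive property: a stale unlocked read can only make us give up, never pin an invalid buffer.

#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

#define TOY_VALID    (1u << 31)          /* buffer holds a valid page */
#define TOY_REFMASK  (TOY_VALID - 1)     /* low bits: pin count */

static bool
toy_pin_if_valid(_Atomic uint32_t *state)
{
    uint32_t old = atomic_load_explicit(state, memory_order_relaxed);

    for (;;)
    {
        /*
         * The unlocked read may be momentarily stale, so we can refuse a
         * buffer that just became valid (false negative), but the CAS only
         * succeeds on a value whose valid bit we checked, so we never pin a
         * buffer that is invalid at the moment the pin is taken.
         */
        if (!(old & TOY_VALID))
            return false;

        if (atomic_compare_exchange_weak_explicit(state, &old, old + 1,
                                                  memory_order_acquire,
                                                  memory_order_relaxed))
            return true;
        /* CAS failed: 'old' now holds the latest value; re-check and retry. */
    }
}

static void
toy_unpin(_Atomic uint32_t *state)
{
    atomic_fetch_sub_explicit(state, 1, memory_order_release);
}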
{
"msg_contents": "Hi,\n\nOn 2023-06-29 19:35:30 +1200, Thomas Munro wrote:\n> I (re)discovered why I used the lock-then-pin approach. In the\n> comments I mentioned InvalidBuffer(), but the main problem is in its\n> caller GetVictimBuffer() which has various sanity checks about\n> reference counts that can occasionally fail if you have code randomly\n> pinning any old buffer.\n\nYou're right. Specifically non-valid buffers are the issue.\n\n\n> New idea: use the standard PinBuffer() function, but add a mode that\n> doesn't pin invalid buffers (with caveat that you can perhaps get a\n> false negative due to unlocked read, but never a false positive; see\n> commit message). Otherwise we'd have to duplicate all the same logic\n> to use cmpxchg for ReadRecentBuffer(), or rethink the assumptions in\n> that other code.\n\nIt might be worth using lock free code in more places before long, but I agree\nwith the solution here.\n\n\n> As for the lack of usage bump in the back-branches, I think the\n> options are: teach PinBuffer_Locked() to increment it optionally, or\n> back-patch whatever we come up with for this.\n\nHm, or just leave it as is.\n\n\n> For the root buffer optimisation, the obvious place for storage seems\n> to be under rd_amcache. It was originally invented for the cached\n> metapage (commit d2896a9ed14) but could accommodate a new struct\n> holding whatever we want. Here is a patch to try that out.\n> BTAMCacheData would also be a natural place to put future things\n> including a copy of the root page itself, in later work on lock-free\n> tricks.\n\nI am wondering if we don't want something more generic than stashing this in\nrd_amcache. Don't want to end up duplicating relevant code across the uses of\nrd_amcache in every AM.\n\n\n> @@ -663,38 +663,17 @@ ReadRecentBuffer(RelFileLocator rlocator, ForkNumber forkNum, BlockNumber blockN\n> \telse\n> \t{\n> \t\tbufHdr = GetBufferDescriptor(recent_buffer - 1);\n> -\t\thave_private_ref = GetPrivateRefCount(recent_buffer) > 0;\n> \n> -\t\t/*\n> -\t\t * Do we already have this buffer pinned with a private reference? If\n> -\t\t * so, it must be valid and it is safe to check the tag without\n> -\t\t * locking. If not, we have to lock the header first and then check.\n> -\t\t */\n> -\t\tif (have_private_ref)\n> -\t\t\tbuf_state = pg_atomic_read_u32(&bufHdr->state);\n> -\t\telse\n> -\t\t\tbuf_state = LockBufHdr(bufHdr);\n> -\n> -\t\tif ((buf_state & BM_VALID) && BufferTagsEqual(&tag, &bufHdr->tag))\n> +\t\t/* Is it still valid and holding the right tag? */\n> +\t\tif (PinBuffer(bufHdr, NULL, true))\n\nI do wonder if we should have an unlocked pre-check for a) the buffer being\nvalid and b) BufferTagsEqual() matching. With such a pre-check the race for\nincreasing the usage count of the wrong buffer is quite small, without the\npre-check it doesn't seem that small anymore.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 29 Jun 2023 08:39:48 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: ReadRecentBuffer() doesn't scale well"
},
{
"msg_contents": "On Fri, Jun 30, 2023 at 3:39 AM Andres Freund <[email protected]> wrote:\n> I am wondering if we don't want something more generic than stashing this in\n> rd_amcache. Don't want to end up duplicating relevant code across the uses of\n> rd_amcache in every AM.\n\nI suppose we could try to track hot pages automatically. Ants Aasma\nmentioned that he was working on a tiny SIMD-based LRU that could be\nuseful for something like that, without being too slow. Just for\nfun/experimentation, here's a simple attempt to use a very stupid\nstand-in LRU to cache the most recent 8 lookups for each fork of each\nrelation. Obviously that will miss every time for many workloads so\nit'd have to be pretty close to free and this code probably isn't good\nenough, but perhaps Ants knows how to sprinkle the right magic fairy\ndust on it. It should automatically discover things like root pages,\nthe heap target block during repeat inserts etc, and visibility map\npages would stick around, and perhaps a few more pages of btrees that\nhave a very hot key range (but not pgbench).\n\n> I do wonder if we should have an unlocked pre-check for a) the buffer being\n> valid and b) BufferTagsEqual() matching. With such a pre-check the race for\n> increasing the usage count of the wrong buffer is quite small, without the\n> pre-check it doesn't seem that small anymore.\n\nYeah, that makes sense. Done in this version.",
"msg_date": "Fri, 30 Jun 2023 14:13:11 +1200",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ReadRecentBuffer() doesn't scale well"
},
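As a rough picture of the "most recent 8 lookups" experiment, here is a stand-alone toy cache mapping block numbers to buffer ids with least-recently-used replacement over eight slots. The linear scans are the part a SIMD-friendly version would vectorize; none of this is the actual patch code, and all names are invented.

#include <stdint.h>
#include <stdio.h>

#define RECENT_SLOTS 8

typedef struct RecentBufferCache
{
    uint32_t blocks[RECENT_SLOTS];
    int      buffers[RECENT_SLOTS];
    uint64_t stamps[RECENT_SLOTS];   /* last-used tick, 0 = empty slot */
    uint64_t tick;
} RecentBufferCache;

/* Returns the cached buffer id for a block, or -1 on a miss. */
static int
recent_lookup(RecentBufferCache *c, uint32_t block)
{
    for (int i = 0; i < RECENT_SLOTS; i++)
    {
        if (c->stamps[i] != 0 && c->blocks[i] == block)
        {
            c->stamps[i] = ++c->tick;        /* refresh recency */
            return c->buffers[i];
        }
    }
    return -1;
}

/* Remember a block -> buffer mapping, evicting the least recently used slot. */
static void
recent_remember(RecentBufferCache *c, uint32_t block, int buffer)
{
    int victim = 0;

    for (int i = 1; i < RECENT_SLOTS; i++)
        if (c->stamps[i] < c->stamps[victim])
            victim = i;

    c->blocks[victim] = block;
    c->buffers[victim] = buffer;
    c->stamps[victim] = ++c->tick;
}

int
main(void)
{
    RecentBufferCache cache = {0};

    recent_remember(&cache, 42, 7);
    printf("block 42 -> buffer %d\n", recent_lookup(&cache, 42));   /* hit: 7 */
    printf("block 43 -> buffer %d\n", recent_lookup(&cache, 43));   /* miss: -1 */
    return 0;
}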
{
"msg_contents": "Hi,\n\nOn 2023-06-30 14:13:11 +1200, Thomas Munro wrote:\n> On Fri, Jun 30, 2023 at 3:39 AM Andres Freund <[email protected]> wrote:\n> > I am wondering if we don't want something more generic than stashing this in\n> > rd_amcache. Don't want to end up duplicating relevant code across the uses of\n> > rd_amcache in every AM.\n> \n> I suppose we could try to track hot pages automatically.\n\nI think that could be useful - but as a separate facility. The benefit of\nstashing the root page buffer in the relcache is that it's practically free of\noverhead and doesn't have complications from how many other intervening\naccesses there are etc.\n\nI was more thinking of just making the relevant fields part of RelationData\nand delegating the precise use to the individual AMs.\n\n\n> > I do wonder if we should have an unlocked pre-check for a) the buffer being\n> > valid and b) BufferTagsEqual() matching. With such a pre-check the race for\n> > increasing the usage count of the wrong buffer is quite small, without the\n> > pre-check it doesn't seem that small anymore.\n> \n> Yeah, that makes sense. Done in this version.\n\nCool.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 29 Jun 2023 20:09:00 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: ReadRecentBuffer() doesn't scale well"
},
{
"msg_contents": "On Fri, 30 Jun 2023 at 07:43, Thomas Munro <[email protected]> wrote:\n>\n> On Fri, Jun 30, 2023 at 3:39 AM Andres Freund <[email protected]> wrote:\n> > I am wondering if we don't want something more generic than stashing this in\n> > rd_amcache. Don't want to end up duplicating relevant code across the uses of\n> > rd_amcache in every AM.\n>\n> I suppose we could try to track hot pages automatically. Ants Aasma\n> mentioned that he was working on a tiny SIMD-based LRU that could be\n> useful for something like that, without being too slow. Just for\n> fun/experimentation, here's a simple attempt to use a very stupid\n> stand-in LRU to cache the most recent 8 lookups for each fork of each\n> relation. Obviously that will miss every time for many workloads so\n> it'd have to be pretty close to free and this code probably isn't good\n> enough, but perhaps Ants knows how to sprinkle the right magic fairy\n> dust on it. It should automatically discover things like root pages,\n> the heap target block during repeat inserts etc, and visibility map\n> pages would stick around, and perhaps a few more pages of btrees that\n> have a very hot key range (but not pgbench).\n>\n> > I do wonder if we should have an unlocked pre-check for a) the buffer being\n> > valid and b) BufferTagsEqual() matching. With such a pre-check the race for\n> > increasing the usage count of the wrong buffer is quite small, without the\n> > pre-check it doesn't seem that small anymore.\n>\n> Yeah, that makes sense. Done in this version.\n\nI have changed the status of commitfest entry to \"Waiting on Author\"\nas Andres's comments were not discussed further. Feel free to handle\nthe comments and change the status accordingly for the commitfest\nentry.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Sun, 21 Jan 2024 07:47:49 +0530",
"msg_from": "vignesh C <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ReadRecentBuffer() doesn't scale well"
}
] |
[
{
"msg_contents": "Hi,\n\nThe attached query makes beta2 crash with attached backtrace.\nInterestingly the index on ref_6 is needed to make it crash, without\nit the query works fine.\n\n-- \nJaime Casanova\nDirector de Servicios Profesionales\nSYSTEMGUARDS - Consultores de PostgreSQL",
"msg_date": "Mon, 26 Jun 2023 23:05:43 -0500",
"msg_from": "Jaime Casanova <[email protected]>",
"msg_from_op": true,
"msg_subject": "Assert !bms_overlap(joinrel->relids, required_outer)"
},
{
"msg_contents": "On Mon, Jun 26, 2023 at 11:05:43PM -0500, Jaime Casanova wrote:\n> The attached query makes beta2 crash with attached backtrace.\n> Interestingly the index on ref_6 is needed to make it crash, without\n> it the query works fine.\n\nIssue reproduced here. I am adding an open item, whose owner should\nbe Tom?\n--\nMichael",
"msg_date": "Tue, 27 Jun 2023 14:35:19 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Assert !bms_overlap(joinrel->relids, required_outer)"
},
{
"msg_contents": "On Tue, Jun 27, 2023 at 1:35 PM Michael Paquier <[email protected]> wrote:\n\n> On Mon, Jun 26, 2023 at 11:05:43PM -0500, Jaime Casanova wrote:\n> > The attached query makes beta2 crash with attached backtrace.\n> > Interestingly the index on ref_6 is needed to make it crash, without\n> > it the query works fine.\n>\n> Issue reproduced here. I am adding an open item, whose owner should\n> be Tom?\n\n\nThat's right. This issue has something to do with the\nouter-join-aware-Var changes. I reduced the repro to the query below.\n\ncreate table t (a int);\ncreate index on t(a);\n\nexplain (costs off)\nselect 1 from t t1\n join lateral\n (select t1.a from (select 1) foo offset 0) s1 on true\n join\n (select 1 from t t2\n inner join t t3\n left join t t4 left join t t5 on t4.a = 1\n on t4.a = 1 on false\n where t3.a = coalesce(t5.a,1)) as s2\n on true;\n\nWhen joining s1/t3 to t4, the relid of outer join t3/t4 appears both in\nthe joinrel's relids and in the joinrel's required outer rels, which\ncauses the Assert failure. I think it's reasonable for it to appear in\nthe joinrel's relids, because we're forming this outer join. I doubt\nthat it should appear in the joinrel's required outer rels. So I'm\nwondering if we can fix this issue by manually removing the outer join's\nrelid from the joinrel's required_outer, something like:\n\n if (bms_is_member(extra->sjinfo->ojrelid, joinrel->relids))\n required_outer = bms_del_member(required_outer,\nextra->sjinfo->ojrelid);\n\nThis would be needed in try_nestloop_path, try_mergejoin_path and\ntry_hashjoin_path after the required_outer set is computed for the join\npath. It seems quite hacky though, not sure if this is the right thing\nto do.\n\nThanks\nRichard\n\nOn Tue, Jun 27, 2023 at 1:35 PM Michael Paquier <[email protected]> wrote:On Mon, Jun 26, 2023 at 11:05:43PM -0500, Jaime Casanova wrote:\n> The attached query makes beta2 crash with attached backtrace.\n> Interestingly the index on ref_6 is needed to make it crash, without\n> it the query works fine.\n\nIssue reproduced here. I am adding an open item, whose owner should\nbe Tom?That's right. This issue has something to do with theouter-join-aware-Var changes. I reduced the repro to the query below.create table t (a int);create index on t(a);explain (costs off)select 1 from t t1 join lateral (select t1.a from (select 1) foo offset 0) s1 on true join (select 1 from t t2 inner join t t3 left join t t4 left join t t5 on t4.a = 1 on t4.a = 1 on false where t3.a = coalesce(t5.a,1)) as s2 on true;When joining s1/t3 to t4, the relid of outer join t3/t4 appears both inthe joinrel's relids and in the joinrel's required outer rels, whichcauses the Assert failure. I think it's reasonable for it to appear inthe joinrel's relids, because we're forming this outer join. I doubtthat it should appear in the joinrel's required outer rels. So I'mwondering if we can fix this issue by manually removing the outer join'srelid from the joinrel's required_outer, something like: if (bms_is_member(extra->sjinfo->ojrelid, joinrel->relids)) required_outer = bms_del_member(required_outer, extra->sjinfo->ojrelid);This would be needed in try_nestloop_path, try_mergejoin_path andtry_hashjoin_path after the required_outer set is computed for the joinpath. It seems quite hacky though, not sure if this is the right thingto do.ThanksRichard",
"msg_date": "Tue, 27 Jun 2023 19:17:16 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Assert !bms_overlap(joinrel->relids, required_outer)"
},
{
"msg_contents": "Richard Guo <[email protected]> writes:\n> That's right. This issue has something to do with the\n> outer-join-aware-Var changes. I reduced the repro to the query below.\n\nThanks for the simplified test case.\n\n> When joining s1/t3 to t4, the relid of outer join t3/t4 appears both in\n> the joinrel's relids and in the joinrel's required outer rels, which\n> causes the Assert failure. I think it's reasonable for it to appear in\n> the joinrel's relids, because we're forming this outer join. I doubt\n> that it should appear in the joinrel's required outer rels.\n\nIt looks to me like we are trying to join (2 7), that is s1 and t3,\nto 8 (t4), which would necessitate forming the outer join with relid 11.\nThat's fine as far as it goes, but the path we're trying to use for\n(2 7) is\n\n {NESTPATH \n :jpath.path.pathtype 335 \n :parent_relids (b 2 7)\n :required_outer (b 1 9 10 11)\n :jpath.outerjoinpath \n {SUBQUERYSCANPATH \n :path.pathtype 326 \n :parent_relids (b 2)\n :required_outer (b 1)\n :jpath.innerjoinpath \n {INDEXPATH \n :path.pathtype 321 \n :parent_relids (b 7) t3\n :required_outer (b 9 10 11) t5 and both outer joins\n\nThat is, the path involves an indexscan on t3 that evidently is using\nthe \"t3.a = coalesce(t5.a,1)\" condition, so it needs a post-join value\nof t5.a. So it's completely not legit to use this path as an input\nfor this join. (You could quibble about whether the path could be\nmarked as needing only one of the two outer joins, but that doesn't\nreally matter here. It certainly shouldn't be used when we've not\nyet formed either OJ.)\n\nSo it looks to me like something further up should have rejected this\npath as not being usable here. Not sure what's dropping the ball.\n\nAnother way to look at it is we should never have formed this index\npath at all, because it's not clear to me that it can have any valid\nuse. We clearly cannot form OJ 11 (t3/t4) without having already\nscanned t3, so a path for t3 that requires 11 as an input is silly on\nits face. Even if you argue that the required_outer marking for the\npath could be reduced to (9 10) on the grounds of identity 3, I still\ndon't see a valid join order that can use this path. So ideally the\npath wouldn't have been made in the first place, it's just a waste\nof planner cycles. That's a separate issue though.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 27 Jun 2023 10:12:40 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Assert !bms_overlap(joinrel->relids, required_outer)"
},
{
"msg_contents": "I wrote:\n> So it looks to me like something further up should have rejected this\n> path as not being usable here. Not sure what's dropping the ball.\n\nAfter further digging, I've concluded that what usually stops us\nfrom generating this bogus path for the t3/t4 join is the\n\"param_source_rels\" heuristic in joinpath.c. It does stop it in\nthe simplified query\n\nselect 1 from t t2\n inner join (t t3\n left join (t t4 left join t t5 on t4.a = 1)\n on t4.a = 1) on false\n where t3.a = coalesce(t5.a,1);\n\nHowever, once we add the lateral reference in s1, that heuristic\nuncritically lets the path go through, and then later we have trouble.\nOf course, that heuristic is only supposed to be a heuristic that\nhelps winnow valid paths, not a defense against invalid paths,\nso it's not its fault that this goes wrong. (I think that the old\ndelay_upper_joins mechanism is what prevented this error before v16.)\n\nFor a real fix, I'm inclined to extend the loop that calculates\nparam_source_rels (in add_paths_to_joinrel) so that it also tracks\na set of incompatible relids that *must not* be present in the\nparameterization of a proposed path. This would basically include\nOJ relids of OJs that partially overlap the target joinrel; maybe\nwe should also include the min RHS of such OJs. Then we could\ncheck that in try_nestloop_path. I've not tried to code this yet.\n\nThere's also the question of why we generated the bogus indexscan\nin the first place; but it seems advisable to fix the join-level\nissue before touching that, else we'll have nothing to test with.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 27 Jun 2023 18:28:58 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Assert !bms_overlap(joinrel->relids, required_outer)"
},
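To make the proposed rule easier to follow, here is a bitmask illustration with invented helper names and with relid sets shrunk to plain 64-bit masks instead of Bitmapsets: an outer join whose relids partially overlap the join being formed contributes its own relid (and, per the suggestion above, its min RHS) to a set that must not appear in the parameterization of any input path. This sketches the idea under discussion, not the fix that was eventually committed.

#include <stdbool.h>
#include <stdint.h>

typedef struct ToyOuterJoin
{
    uint64_t ojrelid;        /* single bit for the outer join itself */
    uint64_t min_righthand;  /* minimal RHS relids of the outer join */
    uint64_t syn_relids;     /* all relids the outer join covers */
} ToyOuterJoin;

/* Collect relids that must not appear in a proposed path's parameterization. */
static uint64_t
collect_incompatible_relids(uint64_t joinrelids,
                            const ToyOuterJoin *ojs, int noj)
{
    uint64_t incompatible = 0;

    for (int i = 0; i < noj; i++)
    {
        bool overlaps = (ojs[i].syn_relids & joinrelids) != 0;
        bool contained = (ojs[i].syn_relids & ~joinrelids) == 0;

        if (overlaps && !contained)      /* partially overlaps the joinrel */
            incompatible |= ojs[i].ojrelid | ojs[i].min_righthand;
    }
    return incompatible;
}

/* The check a try_xxx_path-style function would then apply. */
static bool
parameterization_is_usable(uint64_t required_outer, uint64_t incompatible)
{
    return (required_outer & incompatible) == 0;
}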
{
"msg_contents": "On Tue, Jun 27, 2023 at 10:12 PM Tom Lane <[email protected]> wrote:\n\n> Richard Guo <[email protected]> writes:\n> > That's right. This issue has something to do with the\n> > outer-join-aware-Var changes. I reduced the repro to the query below.\n>\n> Thanks for the simplified test case.\n>\n> > When joining s1/t3 to t4, the relid of outer join t3/t4 appears both in\n> > the joinrel's relids and in the joinrel's required outer rels, which\n> > causes the Assert failure. I think it's reasonable for it to appear in\n> > the joinrel's relids, because we're forming this outer join. I doubt\n> > that it should appear in the joinrel's required outer rels.\n>\n> It looks to me like we are trying to join (2 7), that is s1 and t3,\n> to 8 (t4), which would necessitate forming the outer join with relid 11.\n> That's fine as far as it goes, but the path we're trying to use for\n> (2 7) is\n>\n> {NESTPATH\n> :jpath.path.pathtype 335\n> :parent_relids (b 2 7)\n> :required_outer (b 1 9 10 11)\n> :jpath.outerjoinpath\n> {SUBQUERYSCANPATH\n> :path.pathtype 326\n> :parent_relids (b 2)\n> :required_outer (b 1)\n> :jpath.innerjoinpath\n> {INDEXPATH\n> :path.pathtype 321\n> :parent_relids (b 7) t3\n> :required_outer (b 9 10 11) t5 and both outer joins\n>\n> That is, the path involves an indexscan on t3 that evidently is using\n> the \"t3.a = coalesce(t5.a,1)\" condition, so it needs a post-join value\n> of t5.a. So it's completely not legit to use this path as an input\n> for this join. (You could quibble about whether the path could be\n> marked as needing only one of the two outer joins, but that doesn't\n> really matter here. It certainly shouldn't be used when we've not\n> yet formed either OJ.)\n\n\nI tried this query on v15 and found that we'd also generate this bogus\npath for the t3/t4 join.\n\n {NESTPATH\n :pathtype 38\n :parent_relids (b 2 7)\n :required_outer (b 1 9)\n :outerjoinpath\n {SUBQUERYSCANPATH\n :pathtype 28\n :parent_relids (b 2)\n :required_outer (b 1)\n :innerjoinpath\n {INDEXPATH\n :pathtype 23\n :parent_relids (b 7) t3\n :required_outer (b 9) t5\n\nThe Assert failure is not seen on v15 because outer join relids are not\nincluded in joinrel's relids and required_outer sets.\n\nThanks\nRichard\n\nOn Tue, Jun 27, 2023 at 10:12 PM Tom Lane <[email protected]> wrote:Richard Guo <[email protected]> writes:\n> That's right. This issue has something to do with the\n> outer-join-aware-Var changes. I reduced the repro to the query below.\n\nThanks for the simplified test case.\n\n> When joining s1/t3 to t4, the relid of outer join t3/t4 appears both in\n> the joinrel's relids and in the joinrel's required outer rels, which\n> causes the Assert failure. I think it's reasonable for it to appear in\n> the joinrel's relids, because we're forming this outer join. 
I doubt\n> that it should appear in the joinrel's required outer rels.\n\nIt looks to me like we are trying to join (2 7), that is s1 and t3,\nto 8 (t4), which would necessitate forming the outer join with relid 11.\nThat's fine as far as it goes, but the path we're trying to use for\n(2 7) is\n\n {NESTPATH \n :jpath.path.pathtype 335 \n :parent_relids (b 2 7)\n :required_outer (b 1 9 10 11)\n :jpath.outerjoinpath \n {SUBQUERYSCANPATH \n :path.pathtype 326 \n :parent_relids (b 2)\n :required_outer (b 1)\n :jpath.innerjoinpath \n {INDEXPATH \n :path.pathtype 321 \n :parent_relids (b 7) t3\n :required_outer (b 9 10 11) t5 and both outer joins\n\nThat is, the path involves an indexscan on t3 that evidently is using\nthe \"t3.a = coalesce(t5.a,1)\" condition, so it needs a post-join value\nof t5.a. So it's completely not legit to use this path as an input\nfor this join. (You could quibble about whether the path could be\nmarked as needing only one of the two outer joins, but that doesn't\nreally matter here. It certainly shouldn't be used when we've not\nyet formed either OJ.)I tried this query on v15 and found that we'd also generate this boguspath for the t3/t4 join. {NESTPATH :pathtype 38 :parent_relids (b 2 7) :required_outer (b 1 9) :outerjoinpath {SUBQUERYSCANPATH :pathtype 28 :parent_relids (b 2) :required_outer (b 1) :innerjoinpath {INDEXPATH :pathtype 23 :parent_relids (b 7) t3 :required_outer (b 9) t5The Assert failure is not seen on v15 because outer join relids are notincluded in joinrel's relids and required_outer sets.ThanksRichard",
"msg_date": "Wed, 28 Jun 2023 10:16:19 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Assert !bms_overlap(joinrel->relids, required_outer)"
},
{
"msg_contents": "On Wed, Jun 28, 2023 at 6:28 AM Tom Lane <[email protected]> wrote:\n\n> For a real fix, I'm inclined to extend the loop that calculates\n> param_source_rels (in add_paths_to_joinrel) so that it also tracks\n> a set of incompatible relids that *must not* be present in the\n> parameterization of a proposed path. This would basically include\n> OJ relids of OJs that partially overlap the target joinrel; maybe\n> we should also include the min RHS of such OJs. Then we could\n> check that in try_nestloop_path. I've not tried to code this yet.\n\n\nI went ahead and drafted a patch based on this idea. A little\ndifferences include\n\n* You mentioned that the incompatible relids might need to also include\nthe min_righthand of the OJs that are part of the target joinrel. It\nseems to me that we may need to also include the min_lefthand of such\nOJs, because the parameterization of any proposed join path for the\ntarget joinrel should not overlap anything in an OJ if the OJ is part of\nthis joinrel.\n\n* I think we need to check the incompatible relids also in\ntry_hashjoin_path and try_mergejoin_path besides try_nestloop_path.\n\nThanks\nRichard",
"msg_date": "Wed, 28 Jun 2023 14:51:54 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Assert !bms_overlap(joinrel->relids, required_outer)"
},
{
"msg_contents": "Richard Guo <[email protected]> writes:\n> On Wed, Jun 28, 2023 at 6:28 AM Tom Lane <[email protected]> wrote:\n>> For a real fix, I'm inclined to extend the loop that calculates\n>> param_source_rels (in add_paths_to_joinrel) so that it also tracks\n>> a set of incompatible relids that *must not* be present in the\n>> parameterization of a proposed path. This would basically include\n>> OJ relids of OJs that partially overlap the target joinrel; maybe\n>> we should also include the min RHS of such OJs. Then we could\n>> check that in try_nestloop_path. I've not tried to code this yet.\n\n> I went ahead and drafted a patch based on this idea.\n\nHmm. This patch is the opposite of what I'd been imagining, because\nI was thinking we needed to add OJs to param_incompatible_relids if\nthey were *not* already in the join, rather than if they were.\nHowever, I tried it like that and while it did stop the assertion\nfailure, it also broke a bunch of other test cases that no longer\nfound the parameterized-nestloop plans they were supposed to find.\nSo clearly I just didn't have my head screwed on in the correct\ndirection yesterday.\n\nHowever, given that what we need is to exclude parameterization\nthat depends on the currently-formed OJ, it seems to me we can do\nit more simply and without any new JoinPathExtraData field,\nas attached. What do you think?\n\n> * I think we need to check the incompatible relids also in\n> try_hashjoin_path and try_mergejoin_path besides try_nestloop_path.\n\nI think this isn't necessary, at least in my formulation.\nThose cases will go through calc_non_nestloop_required_outer\nwhich has\n\n\t/* neither path can require rels from the other */\n\tAssert(!bms_overlap(outer_paramrels, inner_path->parent->relids));\n\tAssert(!bms_overlap(inner_paramrels, outer_path->parent->relids));\n\nIn order to have a dependency on an OJ, a path would have to have\na dependency on at least one of the OJ's base relations too, so\nI think these assertions show that the case won't arise. (Of\ncourse, if someone can trip one of these assertions, I'm wrong.)\n\n\t\t\tregards, tom lane",
"msg_date": "Wed, 28 Jun 2023 10:09:41 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Assert !bms_overlap(joinrel->relids, required_outer)"
},
{
"msg_contents": "On Wed, Jun 28, 2023 at 10:09 PM Tom Lane <[email protected]> wrote:\n\n> However, given that what we need is to exclude parameterization\n> that depends on the currently-formed OJ, it seems to me we can do\n> it more simply and without any new JoinPathExtraData field,\n> as attached. What do you think?\n\n\nI think it makes sense. At first I wondered if we should also exclude\nparameterization that depends on OJs that have already been formed as\npart of this joinrel. But it seems not possible that the input paths\nhave parameterization dependency on these OJs. So it should be\nsufficient to only consider the currently-formed OJ.\n\n\n> > * I think we need to check the incompatible relids also in\n> > try_hashjoin_path and try_mergejoin_path besides try_nestloop_path.\n>\n> I think this isn't necessary, at least in my formulation.\n> Those cases will go through calc_non_nestloop_required_outer\n> which has\n>\n> /* neither path can require rels from the other */\n> Assert(!bms_overlap(outer_paramrels, inner_path->parent->relids));\n> Assert(!bms_overlap(inner_paramrels, outer_path->parent->relids));\n>\n> In order to have a dependency on an OJ, a path would have to have\n> a dependency on at least one of the OJ's base relations too, so\n> I think these assertions show that the case won't arise. (Of\n> course, if someone can trip one of these assertions, I'm wrong.)\n\n\nHmm, while this holds in most cases, it does not if the joins have been\ncommuted according to identity 3. If we change the t3/t4 join's qual to\n't3.a = t4.a' to make hashjoin possible, we'd see the same Assert\nfailure through try_hashjoin_path. I think it's also possible for merge\njoin.\n\nexplain (costs off)\nselect 1 from t t1\n join lateral\n (select t1.a from (select 1) foo offset 0) s1 on true\n join\n (select 1 from t t2\n inner join t t3\n left join t t4 left join t t5 on t4.a = 1\n on t3.a = t4.a on false\n where t3.a = coalesce(t5.a,1)) as s2\n on true;\nserver closed the connection unexpectedly\n\nThanks\nRichard\n\nOn Wed, Jun 28, 2023 at 10:09 PM Tom Lane <[email protected]> wrote:\nHowever, given that what we need is to exclude parameterization\nthat depends on the currently-formed OJ, it seems to me we can do\nit more simply and without any new JoinPathExtraData field,\nas attached. What do you think?I think it makes sense. At first I wondered if we should also excludeparameterization that depends on OJs that have already been formed aspart of this joinrel. But it seems not possible that the input pathshave parameterization dependency on these OJs. So it should besufficient to only consider the currently-formed OJ. \n> * I think we need to check the incompatible relids also in\n> try_hashjoin_path and try_mergejoin_path besides try_nestloop_path.\n\nI think this isn't necessary, at least in my formulation.\nThose cases will go through calc_non_nestloop_required_outer\nwhich has\n\n /* neither path can require rels from the other */\n Assert(!bms_overlap(outer_paramrels, inner_path->parent->relids));\n Assert(!bms_overlap(inner_paramrels, outer_path->parent->relids));\n\nIn order to have a dependency on an OJ, a path would have to have\na dependency on at least one of the OJ's base relations too, so\nI think these assertions show that the case won't arise. (Of\ncourse, if someone can trip one of these assertions, I'm wrong.)Hmm, while this holds in most cases, it does not if the joins have beencommuted according to identity 3. 
If we change the t3/t4 join's qual to't3.a = t4.a' to make hashjoin possible, we'd see the same Assertfailure through try_hashjoin_path. I think it's also possible for mergejoin.explain (costs off)select 1 from t t1 join lateral (select t1.a from (select 1) foo offset 0) s1 on true join (select 1 from t t2 inner join t t3 left join t t4 left join t t5 on t4.a = 1 on t3.a = t4.a on false where t3.a = coalesce(t5.a,1)) as s2 on true;server closed the connection unexpectedlyThanksRichard",
"msg_date": "Thu, 29 Jun 2023 10:39:40 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Assert !bms_overlap(joinrel->relids, required_outer)"
},
{
"msg_contents": "On Thu, Jun 29, 2023 at 10:39 AM Richard Guo <[email protected]> wrote:\n\n> On Wed, Jun 28, 2023 at 10:09 PM Tom Lane <[email protected]> wrote:\n>\n>> However, given that what we need is to exclude parameterization\n>> that depends on the currently-formed OJ, it seems to me we can do\n>> it more simply and without any new JoinPathExtraData field,\n>> as attached. What do you think?\n>\n>\n> I think it makes sense. At first I wondered if we should also exclude\n> parameterization that depends on OJs that have already been formed as\n> part of this joinrel. But it seems not possible that the input paths\n> have parameterization dependency on these OJs. So it should be\n> sufficient to only consider the currently-formed OJ.\n>\n\nBTW, it seems that extra->sjinfo would always have a valid value here.\nSo maybe we do not need to check if it is not NULL explicitly?\n\nThanks\nRichard\n\nOn Thu, Jun 29, 2023 at 10:39 AM Richard Guo <[email protected]> wrote:On Wed, Jun 28, 2023 at 10:09 PM Tom Lane <[email protected]> wrote:\nHowever, given that what we need is to exclude parameterization\nthat depends on the currently-formed OJ, it seems to me we can do\nit more simply and without any new JoinPathExtraData field,\nas attached. What do you think?I think it makes sense. At first I wondered if we should also excludeparameterization that depends on OJs that have already been formed aspart of this joinrel. But it seems not possible that the input pathshave parameterization dependency on these OJs. So it should besufficient to only consider the currently-formed OJ.BTW, it seems that extra->sjinfo would always have a valid value here.So maybe we do not need to check if it is not NULL explicitly?ThanksRichard",
"msg_date": "Thu, 29 Jun 2023 14:44:43 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Assert !bms_overlap(joinrel->relids, required_outer)"
},
{
"msg_contents": "On Wed, Jun 28, 2023 at 10:09 PM Tom Lane <[email protected]> wrote:\n\n> Those cases will go through calc_non_nestloop_required_outer\n> which has\n>\n> /* neither path can require rels from the other */\n> Assert(!bms_overlap(outer_paramrels, inner_path->parent->relids));\n> Assert(!bms_overlap(inner_paramrels, outer_path->parent->relids));\n\n\nLooking at these two assertions it occurred to me that shouldn't we\ncheck against top_parent_relids for an otherrel since paths are\nparameterized by top-level parents? We do that in try_nestloop_path.\n\n /* neither path can require rels from the other */\n- Assert(!bms_overlap(outer_paramrels, inner_path->parent->relids));\n- Assert(!bms_overlap(inner_paramrels, outer_path->parent->relids));\n+ Assert(!bms_overlap(outer_paramrels,\n+ inner_path->parent->top_parent_relids ?\n+ inner_path->parent->top_parent_relids :\n+ inner_path->parent->relids));\n+ Assert(!bms_overlap(inner_paramrels,\n+ outer_path->parent->top_parent_relids ?\n+ outer_path->parent->top_parent_relids :\n+ outer_path->parent->relids));\n\nThis is not related to the issue being discussed here. Maybe it should\nbe a separate issue.\n\nThanks\nRichard\n\nOn Wed, Jun 28, 2023 at 10:09 PM Tom Lane <[email protected]> wrote:\nThose cases will go through calc_non_nestloop_required_outer\nwhich has\n\n /* neither path can require rels from the other */\n Assert(!bms_overlap(outer_paramrels, inner_path->parent->relids));\n Assert(!bms_overlap(inner_paramrels, outer_path->parent->relids));Looking at these two assertions it occurred to me that shouldn't wecheck against top_parent_relids for an otherrel since paths areparameterized by top-level parents? We do that in try_nestloop_path. /* neither path can require rels from the other */- Assert(!bms_overlap(outer_paramrels, inner_path->parent->relids));- Assert(!bms_overlap(inner_paramrels, outer_path->parent->relids));+ Assert(!bms_overlap(outer_paramrels,+ inner_path->parent->top_parent_relids ?+ inner_path->parent->top_parent_relids :+ inner_path->parent->relids));+ Assert(!bms_overlap(inner_paramrels,+ outer_path->parent->top_parent_relids ?+ outer_path->parent->top_parent_relids :+ outer_path->parent->relids));This is not related to the issue being discussed here. Maybe it shouldbe a separate issue.ThanksRichard",
"msg_date": "Thu, 29 Jun 2023 15:39:38 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Assert !bms_overlap(joinrel->relids, required_outer)"
},
{
"msg_contents": "Richard Guo <[email protected]> writes:\n> BTW, it seems that extra->sjinfo would always have a valid value here.\n> So maybe we do not need to check if it is not NULL explicitly?\n\nRight, I was being conservative but this module expects that to\nalways be provided.\n\nPushed with that and defenses added to try_mergejoin_path and\ntry_hashjoin_path. It looks like the various try_partial_xxx_path\nfunctions already reject cases that could be problematic. (They\nwill not accept inner parameterization that would lead to the\nresult being parameterized differently from the outer path.\nBy induction, that should be fine.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 29 Jun 2023 12:16:49 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Assert !bms_overlap(joinrel->relids, required_outer)"
},
{
"msg_contents": "Richard Guo <[email protected]> writes:\n> On Wed, Jun 28, 2023 at 10:09 PM Tom Lane <[email protected]> wrote:\n>> Those cases will go through calc_non_nestloop_required_outer\n>> which has\n>> /* neither path can require rels from the other */\n>> Assert(!bms_overlap(outer_paramrels, inner_path->parent->relids));\n>> Assert(!bms_overlap(inner_paramrels, outer_path->parent->relids));\n\n> Looking at these two assertions it occurred to me that shouldn't we\n> check against top_parent_relids for an otherrel since paths are\n> parameterized by top-level parents? We do that in try_nestloop_path.\n\nYeah, while looking at this I was wondering why try_mergejoin_path and\ntry_hashjoin_path don't do the same \"Paths are parameterized by\ntop-level parents, so run parameterization tests on the parent relids\"\ndance that try_nestloop_path does. This omission is consistent with\nthat, but it's not obvious why it'd be okay to skip it for\nnon-nestloop joins. I guess we'd have noticed by now if it wasn't\nokay, but ...\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 29 Jun 2023 12:20:50 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Assert !bms_overlap(joinrel->relids, required_outer)"
},
{
"msg_contents": "On Fri, Jun 30, 2023 at 12:16 AM Tom Lane <[email protected]> wrote:\n\n> Pushed with that and defenses added to try_mergejoin_path and\n> try_hashjoin_path. It looks like the various try_partial_xxx_path\n> functions already reject cases that could be problematic. (They\n> will not accept inner parameterization that would lead to the\n> result being parameterized differently from the outer path.\n> By induction, that should be fine.)\n\n\nThanks for pushing it!\n\nYeah, I also checked that and there is no problem with partial join\npaths. However I found some opportunities for trivial revises there and\ncreated a new patch for those at [1].\n\n[1]\nhttps://www.postgresql.org/message-id/flat/CAMbWs48mKJ6g_GnYNa7dnw04MHaMK-jnAEBrMVhTp2uUg3Ut4A%40mail.gmail.com\n\nThanks\nRichard\n\nOn Fri, Jun 30, 2023 at 12:16 AM Tom Lane <[email protected]> wrote:\nPushed with that and defenses added to try_mergejoin_path and\ntry_hashjoin_path. It looks like the various try_partial_xxx_path\nfunctions already reject cases that could be problematic. (They\nwill not accept inner parameterization that would lead to the\nresult being parameterized differently from the outer path.\nBy induction, that should be fine.)Thanks for pushing it!Yeah, I also checked that and there is no problem with partial joinpaths. However I found some opportunities for trivial revises there andcreated a new patch for those at [1].[1] https://www.postgresql.org/message-id/flat/CAMbWs48mKJ6g_GnYNa7dnw04MHaMK-jnAEBrMVhTp2uUg3Ut4A%40mail.gmail.comThanksRichard",
"msg_date": "Fri, 30 Jun 2023 11:02:58 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Assert !bms_overlap(joinrel->relids, required_outer)"
},
{
"msg_contents": "On Fri, Jun 30, 2023 at 12:20 AM Tom Lane <[email protected]> wrote:\n\n> Richard Guo <[email protected]> writes:\n> > On Wed, Jun 28, 2023 at 10:09 PM Tom Lane <[email protected]> wrote:\n> >> Those cases will go through calc_non_nestloop_required_outer\n> >> which has\n> >> /* neither path can require rels from the other */\n> >> Assert(!bms_overlap(outer_paramrels, inner_path->parent->relids));\n> >> Assert(!bms_overlap(inner_paramrels, outer_path->parent->relids));\n>\n> > Looking at these two assertions it occurred to me that shouldn't we\n> > check against top_parent_relids for an otherrel since paths are\n> > parameterized by top-level parents? We do that in try_nestloop_path.\n>\n> Yeah, while looking at this I was wondering why try_mergejoin_path and\n> try_hashjoin_path don't do the same \"Paths are parameterized by\n> top-level parents, so run parameterization tests on the parent relids\"\n> dance that try_nestloop_path does. This omission is consistent with\n> that, but it's not obvious why it'd be okay to skip it for\n> non-nestloop joins. I guess we'd have noticed by now if it wasn't\n> okay, but ...\n\n\nI think it just makes these two assertions meaningless to skip it for\nnon-nestloop joins if the input paths are for otherrels, because paths\nwould never be parameterized by the member relations. So these two\nassertions would always be true for otherrel paths. I think this is why\nwe have not noticed any problem by now.\n\nHowever, this is not what we want. What we want is to verify that\nneither input path for the joinrel can require rels from the other, even\nfor otherrel paths. So I think the current code is not right for that.\nWe need to check against top_parent_relids for otherrel paths, and that\nwould make these assertions meaningful.\n\nThanks\nRichard\n\nOn Fri, Jun 30, 2023 at 12:20 AM Tom Lane <[email protected]> wrote:Richard Guo <[email protected]> writes:\n> On Wed, Jun 28, 2023 at 10:09 PM Tom Lane <[email protected]> wrote:\n>> Those cases will go through calc_non_nestloop_required_outer\n>> which has\n>> /* neither path can require rels from the other */\n>> Assert(!bms_overlap(outer_paramrels, inner_path->parent->relids));\n>> Assert(!bms_overlap(inner_paramrels, outer_path->parent->relids));\n\n> Looking at these two assertions it occurred to me that shouldn't we\n> check against top_parent_relids for an otherrel since paths are\n> parameterized by top-level parents? We do that in try_nestloop_path.\n\nYeah, while looking at this I was wondering why try_mergejoin_path and\ntry_hashjoin_path don't do the same \"Paths are parameterized by\ntop-level parents, so run parameterization tests on the parent relids\"\ndance that try_nestloop_path does. This omission is consistent with\nthat, but it's not obvious why it'd be okay to skip it for\nnon-nestloop joins. I guess we'd have noticed by now if it wasn't\nokay, but ...I think it just makes these two assertions meaningless to skip it fornon-nestloop joins if the input paths are for otherrels, because pathswould never be parameterized by the member relations. So these twoassertions would always be true for otherrel paths. I think this is whywe have not noticed any problem by now.However, this is not what we want. What we want is to verify thatneither input path for the joinrel can require rels from the other, evenfor otherrel paths. 
So I think the current code is not right for that.We need to check against top_parent_relids for otherrel paths, and thatwould make these assertions meaningful.ThanksRichard",
"msg_date": "Fri, 30 Jun 2023 11:05:36 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Assert !bms_overlap(joinrel->relids, required_outer)"
},
{
"msg_contents": "Richard Guo <[email protected]> writes:\n> On Fri, Jun 30, 2023 at 12:20 AM Tom Lane <[email protected]> wrote:\n>> Yeah, while looking at this I was wondering why try_mergejoin_path and\n>> try_hashjoin_path don't do the same \"Paths are parameterized by\n>> top-level parents, so run parameterization tests on the parent relids\"\n>> dance that try_nestloop_path does. This omission is consistent with\n>> that, but it's not obvious why it'd be okay to skip it for\n>> non-nestloop joins. I guess we'd have noticed by now if it wasn't\n>> okay, but ...\n\n> I think it just makes these two assertions meaningless to skip it for\n> non-nestloop joins if the input paths are for otherrels, because paths\n> would never be parameterized by the member relations. So these two\n> assertions would always be true for otherrel paths. I think this is why\n> we have not noticed any problem by now.\n\nAfter studying this some more, I think that maybe it's the \"run\nparameterization tests on the parent relids\" bit that is misguided.\nI believe the way it's really working is that all paths arriving\nhere are parameterized by top parents, because that's the only thing\nwe generate to start with. A path can only become parameterized\nby an otherrel when we apply reparameterize_path_by_child to it.\nBut the only place that happens is in try_nestloop_path itself\n(or try_partial_nestloop_path), and then we immediately wrap it in\na nestloop join node, which becomes a child of an append that's\nforming a partitionwise join. The partitionwise join as a\nwhole won't be parameterized by any child rels. So I think that\na path that's parameterized by a child rel can't exist \"in the wild\"\nin a way that would allow it to get fed to one of the try_xxx_path\nfunctions. This explains why the seeming oversights in the merge\nand hash cases aren't causing a problem.\n\nIf this theory is correct, we could simplify try_nestloop_path a\nbit. I doubt the code savings would matter, but maybe it's\nworth changing for clarity's sake.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 30 Jun 2023 11:00:09 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Assert !bms_overlap(joinrel->relids, required_outer)"
},
{
"msg_contents": "On Fri, Jun 30, 2023 at 11:00 PM Tom Lane <[email protected]> wrote:\n\n> Richard Guo <[email protected]> writes:\n> > I think it just makes these two assertions meaningless to skip it for\n> > non-nestloop joins if the input paths are for otherrels, because paths\n> > would never be parameterized by the member relations. So these two\n> > assertions would always be true for otherrel paths. I think this is why\n> > we have not noticed any problem by now.\n>\n> After studying this some more, I think that maybe it's the \"run\n> parameterization tests on the parent relids\" bit that is misguided.\n> I believe the way it's really working is that all paths arriving\n> here are parameterized by top parents, because that's the only thing\n> we generate to start with. A path can only become parameterized\n> by an otherrel when we apply reparameterize_path_by_child to it.\n> But the only place that happens is in try_nestloop_path itself\n> (or try_partial_nestloop_path), and then we immediately wrap it in\n> a nestloop join node, which becomes a child of an append that's\n> forming a partitionwise join. The partitionwise join as a\n> whole won't be parameterized by any child rels. So I think that\n> a path that's parameterized by a child rel can't exist \"in the wild\"\n> in a way that would allow it to get fed to one of the try_xxx_path\n> functions. This explains why the seeming oversights in the merge\n> and hash cases aren't causing a problem.\n>\n> If this theory is correct, we could simplify try_nestloop_path a\n> bit. I doubt the code savings would matter, but maybe it's\n> worth changing for clarity's sake.\n\n\nYeah, I think this theory is correct that all paths arriving at\ntry_xxx_path are parameterized by top parents. But I do not get how to\nsimplify try_nestloop_path on the basis of that. Would you please\nelaborate on that?\n\nThanks\nRichard\n\nOn Fri, Jun 30, 2023 at 11:00 PM Tom Lane <[email protected]> wrote:Richard Guo <[email protected]> writes:\n> I think it just makes these two assertions meaningless to skip it for\n> non-nestloop joins if the input paths are for otherrels, because paths\n> would never be parameterized by the member relations. So these two\n> assertions would always be true for otherrel paths. I think this is why\n> we have not noticed any problem by now.\n\nAfter studying this some more, I think that maybe it's the \"run\nparameterization tests on the parent relids\" bit that is misguided.\nI believe the way it's really working is that all paths arriving\nhere are parameterized by top parents, because that's the only thing\nwe generate to start with. A path can only become parameterized\nby an otherrel when we apply reparameterize_path_by_child to it.\nBut the only place that happens is in try_nestloop_path itself\n(or try_partial_nestloop_path), and then we immediately wrap it in\na nestloop join node, which becomes a child of an append that's\nforming a partitionwise join. The partitionwise join as a\nwhole won't be parameterized by any child rels. So I think that\na path that's parameterized by a child rel can't exist \"in the wild\"\nin a way that would allow it to get fed to one of the try_xxx_path\nfunctions. This explains why the seeming oversights in the merge\nand hash cases aren't causing a problem.\n\nIf this theory is correct, we could simplify try_nestloop_path a\nbit. 
I doubt the code savings would matter, but maybe it's\nworth changing for clarity's sake.Yeah, I think this theory is correct that all paths arriving attry_xxx_path are parameterized by top parents. But I do not get how tosimplify try_nestloop_path on the basis of that. Would you pleaseelaborate on that?ThanksRichard",
"msg_date": "Mon, 3 Jul 2023 13:53:20 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Assert !bms_overlap(joinrel->relids, required_outer)"
}
] |
[
{
"msg_contents": "Hi,\n\nCommit 6b4d23feef introduces a toplevel field in pgssHashKey, which leads\npadding. In pgss_store(), it comments that memset() is required when\npgssHashKey is without padding only.\n\n@@ -1224,9 +1227,14 @@ pgss_store(const char *query, uint64 queryId,\n query = CleanQuerytext(query, &query_location, &query_len);\n\n /* Set up key for hashtable search */\n+\n+ /* memset() is required when pgssHashKey is without padding only */\n+ memset(&key, 0, sizeof(pgssHashKey));\n+\n key.userid = GetUserId();\n key.dbid = MyDatabaseId;\n key.queryid = queryId;\n+ key.toplevel = (exec_nested_level == 0);\n\n /* Lookup the hash table entry with shared lock. */\n LWLockAcquire(pgss->lock, LW_SHARED);\n\nHowever, we need memset() only when pgssHashKey has padding, right?\n\n-- \nRegrads,\nJapin Li.",
"msg_date": "Tue, 27 Jun 2023 12:32:10 +0800",
"msg_from": "Japin Li <[email protected]>",
"msg_from_op": true,
"msg_subject": "Incorrect comment for memset() on pgssHashKey?"
},
{
"msg_contents": "On 27/06/2023 07:32, Japin Li wrote:\n> \n> Hi,\n> \n> Commit 6b4d23feef introduces a toplevel field in pgssHashKey, which leads\n> padding. In pgss_store(), it comments that memset() is required when\n> pgssHashKey is without padding only.\n> \n> @@ -1224,9 +1227,14 @@ pgss_store(const char *query, uint64 queryId,\n> query = CleanQuerytext(query, &query_location, &query_len);\n> \n> /* Set up key for hashtable search */\n> +\n> + /* memset() is required when pgssHashKey is without padding only */\n> + memset(&key, 0, sizeof(pgssHashKey));\n> +\n> key.userid = GetUserId();\n> key.dbid = MyDatabaseId;\n> key.queryid = queryId;\n> + key.toplevel = (exec_nested_level == 0);\n> \n> /* Lookup the hash table entry with shared lock. */\n> LWLockAcquire(pgss->lock, LW_SHARED);\n> \n> However, we need memset() only when pgssHashKey has padding, right?\n\nYep. I changed the comment to just \"clear padding\". Thanks!\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Tue, 27 Jun 2023 10:18:11 +0300",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Incorrect comment for memset() on pgssHashKey?"
}
] |
[
{
"msg_contents": "Hi all,\n(Fujii-san and David in CC.)\n\nFujii-san has reported on Twitter that we had better add the TLI\nnumber to what pg_waldump --save-fullpage generates for the file names\nof the blocks, as it could be possible that we overwrite some blocks.\nThis information can be added thanks to ws_tli, that tracks the TLI of\nthe opened segment.\n\nAttached is a patch to fix this issue, adding an open item assigned to\nme. The file format is documented in the TAP test and the docs, the\ntwo only places that would need a refresh.\n\nThoughts or comments?\n--\nMichael",
"msg_date": "Tue, 27 Jun 2023 15:12:43 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Add TLI number to name of files generated by pg_waldump\n --save-fullpage"
},
{
"msg_contents": "At Tue, 27 Jun 2023 15:12:43 +0900, Michael Paquier <[email protected]> wrote in \n> Hi all,\n> (Fujii-san and David in CC.)\n> \n> Fujii-san has reported on Twitter that we had better add the TLI\n> number to what pg_waldump --save-fullpage generates for the file names\n> of the blocks, as it could be possible that we overwrite some blocks.\n> This information can be added thanks to ws_tli, that tracks the TLI of\n> the opened segment.\n> \n> Attached is a patch to fix this issue, adding an open item assigned to\n> me. The file format is documented in the TAP test and the docs, the\n> two only places that would need a refresh.\n> \n> Thoughts or comments?\n\nIt's sensible to add TLI to the file name. So +1 from me.\n\n+# - Timeline number in hex format.\n\nArn't we reffering to it as \"Timeline ID\"? (I remember there was a\ndiscussion about redefining the \"timeline ID\" to use non-orderable\nIDs. That is, making it non-numbers.)\n\nOtherwise it looks fine to me.\n\n\nBy the way, somewhat irrelevant to this patch, regading the the file\nname for the output.\n\n\nThe file name was \"LSNh-LSNl.spcOid.dbOid.relNumber.blk_forkname\", but\nthe comment in the TAP script read as:\n\n-# XXXXXXXX-XXXXXXXX.DBOID.TLOID.NODEOID.dd_fork with the components being:\n\nwhich looks wrong. I'm not sure it is a business of this patch, though..\n\n# Documentation looks coorect.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 27 Jun 2023 15:44:04 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add TLI number to name of files generated by pg_waldump\n --save-fullpage"
},
{
"msg_contents": "On Tue, Jun 27, 2023 at 03:44:04PM +0900, Kyotaro Horiguchi wrote:\n> +# - Timeline number in hex format.\n> \n> Arn't we reffering to it as \"Timeline ID\"? (I remember there was a\n> discussion about redefining the \"timeline ID\" to use non-orderable\n> IDs. That is, making it non-numbers.)\n\nUsing ID is fine by me.\n\n> The file name was \"LSNh-LSNl.spcOid.dbOid.relNumber.blk_forkname\", but\n> the comment in the TAP script read as:\n> \n> -# XXXXXXXX-XXXXXXXX.DBOID.TLOID.NODEOID.dd_fork with the components being:\n> \n> which looks wrong. I'm not sure it is a business of this patch, though..\n\nThis part is getting changed here anyway, so improving it is fine by\nme with the terms you are suggesting for these two 4-byte values in\nthis comment.\n--\nMichael",
"msg_date": "Tue, 27 Jun 2023 15:58:38 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add TLI number to name of files generated by pg_waldump\n --save-fullpage"
},
{
"msg_contents": "At Tue, 27 Jun 2023 15:58:38 +0900, Michael Paquier <[email protected]> wrote in \n> On Tue, Jun 27, 2023 at 03:44:04PM +0900, Kyotaro Horiguchi wrote:\n> > The file name was \"LSNh-LSNl.spcOid.dbOid.relNumber.blk_forkname\", but\n> > the comment in the TAP script read as:\n> > \n> > -# XXXXXXXX-XXXXXXXX.DBOID.TLOID.NODEOID.dd_fork with the components being:\n> > \n> > which looks wrong. I'm not sure it is a business of this patch, though..\n> \n> This part is getting changed here anyway, so improving it is fine by\n> me with the terms you are suggesting for these two 4-byte values in\n> this comment.\n\nI meant that the name is structured as\nTLIh-TLIl.<tablespace>.<database>.<relnumber>.<blk>._<fork>, which\nappears to be inconsistent with the comment. (And I'm not sure what\n\"TLOID\" is..)\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 27 Jun 2023 16:39:52 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add TLI number to name of files generated by pg_waldump\n --save-fullpage"
},
{
"msg_contents": "Of course, it's wrong.\n\nAt Tue, 27 Jun 2023 16:39:52 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail\n.com> wrote in \n> I meant that the name is structured as\n- TLIh-TLIl.<tablespace>.<database>.<relnumber>.<blk>._<fork>, which\n+ LSNh-LSNl.<tablespace>.<database>.<relnumber>.<blk>._<fork>, which\n\n> appears to be inconsistent with the comment. (And I'm not sure what\n> \"TLOID\" is..)\n> \n> regards.\n> \n> -- \n> Kyotaro Horiguchi\n> NTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 27 Jun 2023 16:41:00 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add TLI number to name of files generated by pg_waldump\n --save-fullpage"
},
{
"msg_contents": "On Tue, Jun 27, 2023 at 1:12 AM Michael Paquier <[email protected]> wrote:\n>\n> Hi all,\n> (Fujii-san and David in CC.)\n>\n> Fujii-san has reported on Twitter that we had better add the TLI\n> number to what pg_waldump --save-fullpage generates for the file names\n> of the blocks, as it could be possible that we overwrite some blocks.\n> This information can be added thanks to ws_tli, that tracks the TLI of\n> the opened segment.\n>\n> Attached is a patch to fix this issue, adding an open item assigned to\n> me. The file format is documented in the TAP test and the docs, the\n> two only places that would need a refresh.\n>\n> Thoughts or comments?\n\nPatch looks good, but agreed that that comment should also be fixed.\n\nThanks!\n\nDavid\n\n\n",
"msg_date": "Tue, 27 Jun 2023 11:53:10 -0500",
"msg_from": "David Christensen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add TLI number to name of files generated by pg_waldump\n --save-fullpage"
},
{
"msg_contents": "On Tue, Jun 27, 2023 at 11:53:10AM -0500, David Christensen wrote:\n> Patch looks good, but agreed that that comment should also be fixed.\n\nOkay, thanks for checking!\n--\nMichael",
"msg_date": "Wed, 28 Jun 2023 08:29:19 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add TLI number to name of files generated by pg_waldump\n --save-fullpage"
},
{
"msg_contents": "On Tue, Jun 27, 2023 at 04:39:52PM +0900, Kyotaro Horiguchi wrote:\n> I meant that the name is structured as\n> TLIh-TLIl.<tablespace>.<database>.<relnumber>.<blk>._<fork>, which\n> appears to be inconsistent with the comment. (And I'm not sure what\n> \"TLOID\" is..)\n\nWell, to be clear, it should not be TLIh-TLIl but LSNh-LSNl :)\n\nI'm OK with these terms for the comments. This is very internal\nanyway so anybody using this feature should know what that means.\n\nAnd yes, the order of the items is wrong, and I agree that TLOID is a\nbit confusing once the TLI is added in the set. I have just used\nTBLSPCOID as term in the comment, and adjusted the XXX to be about the\nLSN numbers.\n\nAdjusted as per the v2 attached.\n--\nMichael",
"msg_date": "Wed, 28 Jun 2023 08:47:21 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add TLI number to name of files generated by pg_waldump\n --save-fullpage"
},
{
"msg_contents": "> Adjusted as per the v2 attached.\n\n+1\n\n\n\n",
"msg_date": "Tue, 27 Jun 2023 18:58:39 -0500",
"msg_from": "David Christensen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add TLI number to name of files generated by pg_waldump\n --save-fullpage"
},
{
"msg_contents": "At Tue, 27 Jun 2023 18:58:39 -0500, David Christensen <[email protected]> wrote in \n> > Adjusted as per the v2 attached.\n> \n> +1\n\n+1\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 28 Jun 2023 09:20:27 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add TLI number to name of files generated by pg_waldump\n --save-fullpage"
},
{
"msg_contents": "On Wed, Jun 28, 2023 at 09:20:27AM +0900, Kyotaro Horiguchi wrote:\n> At Tue, 27 Jun 2023 18:58:39 -0500, David Christensen <[email protected]> wrote in \n>>> Adjusted as per the v2 attached.\n>> \n>> +1\n> \n> +1\n\nOkay, cool. Both of you seem happy with it, so I have applied it.\nThanks for the quick checks.\n--\nMichael",
"msg_date": "Wed, 28 Jun 2023 16:35:23 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add TLI number to name of files generated by pg_waldump\n --save-fullpage"
}
] |
[
{
"msg_contents": "Background:\n\nThe idea with 40af10b57 (Use Generation memory contexts to store\ntuples in sorts) was to reduce the memory wastage in tuplesort.c\ncaused by aset.c's power-of-2 rounding up behaviour and allow more\ntuples to be stored per work_mem in tuplesort.c\n\nLater, in (v16's) c6e0fe1f2a (Improve performance of and reduce\noverheads of memory management) that commit reduced the palloc chunk\nheader overhead down to 8 bytes. For generation.c contexts (as is now\nused by non-bounded tuplesorts as of 40af10b57), the overhead was 24\nbytes. So this allowed even more tuples to be stored in a work_mem by\nreducing the chunk overheads for non-bounded tuplesorts by 2/3rds down\nto 16 bytes.\n\n1083f94da (Be smarter about freeing tuples during tuplesorts) removed\nthe useless pfree() calls from tuplesort.c which pfree'd the tuples\njust before we reset the context. So, as of now, we never pfree()\nmemory allocated to store tuples in non-bounded tuplesorts.\n\nMy thoughts are, if we never pfree tuples in tuplesorts, then why\nbother having a chunk header at all?\n\nProposal:\n\nBecause of all of what is mentioned above about the current state of\ntuplesort, there does not really seem to be much need to have chunk\nheaders in memory we allocate for tuples at all. Not having these\nsaves us a further 8 bytes per tuple.\n\nIn the attached patch, I've added a bump memory allocator which\nallocates chunks without and chunk header. This means the chunks\ncannot be pfree'd or realloc'd. That seems fine for the use case for\nstoring tuples in tuplesort. I've coded bump.c in such a way that when\nbuilt with MEMORY_CONTEXT_CHECKING, we *do* have chunk headers. That\nshould allow us to pick up any bugs that are introduced by any code\nwhich accidentally tries to pfree a bump.c chunk.\n\nI'd expect a bump.c context only to be used for fairly short-lived and\nmemory that's only used by a small amount of code (e.g. only accessed\nfrom a single .c file, like tuplesort.c). That should reduce the risk\nof any code accessing the memory which might be tempted into calling\npfree or some other unsupported function.\n\nGetting away from the complexity of freelists (aset.c) and tracking\nallocations per block (generation.c) allows much better allocation\nperformance. All we need to do is check there's enough space then\nbump the free pointer when performing an allocation. See the attached\ntime_to_allocate_10gbs_memory.png to see how bump.c compares to aset.c\nand generation.c to allocate 10GBs of memory resetting the context\nafter 1MB. It's not far off twice as fast in raw allocation speed.\n(See 3rd tab in the attached spreadsheet)\n\nPerformance:\n\nIn terms of the speed of palloc(), the performance tested on an AMD\n3990x CPU on Linux for 8-byte chunks:\n\naset.c 9.19 seconds\ngeneration.c 8.68 seconds\nbump.c 4.95 seconds\n\nThese numbers were generated by calling:\nselect stype,chksz,pg_allocate_memory_test_reset(chksz,1024*1024,10::bigint*1024*1024*1024,stype)\nfrom (values(8),(16),(32),(64),(128)) t1(chksz) cross join\n(values('aset'),('generation'),('bump')) t2(stype) order by\nstype,chksz;\n\nThis allocates a total of 10GBs of chunks but calls a context reset\nafter 1MB so as not to let the memory usage get out of hand. The\nfunction is in the attached membench.patch.txt file.\n\nIn terms of performance of tuplesort, there's a small (~5-6%)\nperformance gain. 
Not as much as I'd hoped, but I'm also doing a bit\nof other work on tuplesort to make it more efficient in terms of CPU,\nso I suspect the cache efficiency improvements might be more\npronounced after those.\nPlease see the attached bump_context_tuplesort_2023-06-27.ods for my\ncomplete benchmark.\n\nOne thing that might need more thought is that we're running a bit low\non MemoryContextMethodIDs. I had to use an empty slot that has a bit\npattern like glibc malloc'd chunks sized 128kB. Maybe it's worth\nfreeing up a bit from the block offset in MemoryChunk. This is\ncurrently 30 bits allowing 1GB offset, but these offsets are always\nMAXALIGNED, so we could free up a couple of bits since those 2\nlowest-order bits will always be 0 anyway.\n\nI've attached the bump allocator patch and also the script I used to\ngather the performance results in the first 2 tabs in the attached\nspreadsheet.\n\nDavid",
"msg_date": "Tue, 27 Jun 2023 21:19:26 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": true,
"msg_subject": "Add bump memory context type and use it for tuplesorts"
},
{
"msg_contents": "On Tue, 27 Jun 2023 at 21:19, David Rowley <[email protected]> wrote:\n> I've attached the bump allocator patch and also the script I used to\n> gather the performance results in the first 2 tabs in the attached\n> spreadsheet.\n\nI've attached a v2 patch which changes the BumpContext a little to\nremove some of the fields that are not really required. There was no\nneed for the \"keeper\" field as the keeper block always comes at the\nend of the BumpContext as these are allocated in a single malloc().\nThe pointer to the \"block\" also isn't really needed. This is always\nthe same as the head element in the blocks dlist.\n\nDavid",
"msg_date": "Tue, 11 Jul 2023 11:51:11 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add bump memory context type and use it for tuplesorts"
},
{
"msg_contents": "On Tue, Jun 27, 2023 at 09:19:26PM +1200, David Rowley wrote:\n> Because of all of what is mentioned above about the current state of\n> tuplesort, there does not really seem to be much need to have chunk\n> headers in memory we allocate for tuples at all. Not having these\n> saves us a further 8 bytes per tuple.\n> \n> In the attached patch, I've added a bump memory allocator which\n> allocates chunks without and chunk header. This means the chunks\n> cannot be pfree'd or realloc'd. That seems fine for the use case for\n> storing tuples in tuplesort. I've coded bump.c in such a way that when\n> built with MEMORY_CONTEXT_CHECKING, we *do* have chunk headers. That\n> should allow us to pick up any bugs that are introduced by any code\n> which accidentally tries to pfree a bump.c chunk.\n\nThis is a neat idea.\n\n> In terms of performance of tuplesort, there's a small (~5-6%)\n> performance gain. Not as much as I'd hoped, but I'm also doing a bit\n> of other work on tuplesort to make it more efficient in terms of CPU,\n> so I suspect the cache efficiency improvements might be more\n> pronounced after those.\n\nNice.\n\n> One thing that might need more thought is that we're running a bit low\n> on MemoryContextMethodIDs. I had to use an empty slot that has a bit\n> pattern like glibc malloc'd chunks sized 128kB. Maybe it's worth\n> freeing up a bit from the block offset in MemoryChunk. This is\n> currently 30 bits allowing 1GB offset, but these offsets are always\n> MAXALIGNED, so we could free up a couple of bits since those 2\n> lowest-order bits will always be 0 anyway.\n\nI think it'd be okay to steal those bits. AFAICT it'd complicate the\nmacros in memutils_memorychunk.h a bit, but that doesn't seem like such a\nterrible price to pay to allow us to keep avoiding the glibc bit patterns.\n\n> +\tif (base->sortopt & TUPLESORT_ALLOWBOUNDED)\n> +\t\ttuplen = GetMemoryChunkSpace(tuple);\n> +\telse\n> +\t\ttuplen = MAXALIGN(tuple->t_len);\n\nnitpick: I see this repeated in a few places, and I wonder if it might\ndeserve a comment.\n\nI haven't had a chance to try out your benchmark, but I'm hoping to do so\nin the near future.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 25 Jul 2023 17:11:49 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add bump memory context type and use it for tuplesorts"
},
{
"msg_contents": "On Tue, 11 Jul 2023 at 01:51, David Rowley <[email protected]> wrote:\n>\n> On Tue, 27 Jun 2023 at 21:19, David Rowley <[email protected]> wrote:\n> > I've attached the bump allocator patch and also the script I used to\n> > gather the performance results in the first 2 tabs in the attached\n> > spreadsheet.\n>\n> I've attached a v2 patch which changes the BumpContext a little to\n> remove some of the fields that are not really required. There was no\n> need for the \"keeper\" field as the keeper block always comes at the\n> end of the BumpContext as these are allocated in a single malloc().\n> The pointer to the \"block\" also isn't really needed. This is always\n> the same as the head element in the blocks dlist.\n\nNeat idea, +1.\n\nI think it would make sense to split the \"add a bump allocator\"\nchanges from the \"use the bump allocator in tuplesort\" patches.\n\nTangent: Do we have specific notes on worst-case memory usage of\nmemory contexts with various allocation patterns? This new bump\nallocator seems to be quite efficient, but in a worst-case allocation\npattern it can still waste about 1/3 of its allocated memory due to\nnever using free space on previous blocks after an allocation didn't\nfit on that block.\nIt probably isn't going to be a huge problem in general, but this\nseems like something that could be documented as a potential problem\nwhen you're looking for which allocator to use and compare it with\nother allocators that handle different allocation sizes more\ngracefully.\n\n> +++ b/src/backend/utils/mmgr/bump.c\n> +BumpBlockIsEmpty(BumpBlock *block)\n> +{\n> + /* it's empty if the freeptr has not moved */\n> + return (block->freeptr == (char *) block + Bump_BLOCKHDRSZ);\n> [...]\n> +static inline void\n> +BumpBlockMarkEmpty(BumpBlock *block)\n> +{\n> +#if defined(USE_VALGRIND) || defined(CLOBBER_FREED_MEMORY)\n> + char *datastart = ((char *) block) + Bump_BLOCKHDRSZ;\n\nThese two use different definitions of the start pointer. Is that deliberate?\n\n> +++ b/src/include/utils/tuplesort.h\n> @@ -109,7 +109,8 @@ typedef struct TuplesortInstrumentation\n> * a pointer to the tuple proper (might be a MinimalTuple or IndexTuple),\n> * which is a separate palloc chunk --- we assume it is just one chunk and\n> * can be freed by a simple pfree() (except during merge, when we use a\n> - * simple slab allocator). SortTuples also contain the tuple's first key\n> + * simple slab allocator and when performing a non-bounded sort where we\n> + * use a bump allocator). SortTuples also contain the tuple's first key\n\nI'd go with something like the following:\n\n+ * ...(except during merge *where* we use a\n+ * simple slab allocator, and during a non-bounded sort where we\n+ * use a bump allocator).\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Mon, 6 Nov 2023 19:54:49 +0100",
"msg_from": "Matthias van de Meent <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add bump memory context type and use it for tuplesorts"
},
{
"msg_contents": "On Mon, 6 Nov 2023 at 19:54, Matthias van de Meent\n<[email protected]> wrote:\n>\n> On Tue, 11 Jul 2023 at 01:51, David Rowley <[email protected]> wrote:\n>>\n>> On Tue, 27 Jun 2023 at 21:19, David Rowley <[email protected]> wrote:\n>>> I've attached the bump allocator patch and also the script I used to\n>>> gather the performance results in the first 2 tabs in the attached\n>>> spreadsheet.\n>>\n>> I've attached a v2 patch which changes the BumpContext a little to\n>> remove some of the fields that are not really required. There was no\n>> need for the \"keeper\" field as the keeper block always comes at the\n>> end of the BumpContext as these are allocated in a single malloc().\n>> The pointer to the \"block\" also isn't really needed. This is always\n>> the same as the head element in the blocks dlist.\n\n>> +++ b/src/backend/utils/mmgr/bump.c\n>> [...]\n>> +MemoryContext\n>> +BumpContextCreate(MemoryContext parent,\n>> + const char *name,\n>> + Size minContextSize,\n>> + Size initBlockSize,\n>> + Size maxBlockSize)\n>> [...]\n>> + /* Determine size of initial block */\n>> + allocSize = MAXALIGN(sizeof(BumpContext)) + Bump_BLOCKHDRSZ +\n>> + if (minContextSize != 0)\n>> + allocSize = Max(allocSize, minContextSize);\n>> + else\n>> + allocSize = Max(allocSize, initBlockSize);\n\nShouldn't this be the following, considering the meaning of \"initBlockSize\"?\n\n+ allocSize = MAXALIGN(sizeof(BumpContext)) + Bump_BLOCKHDRSZ +\n+ Bump_CHUNKHDRSZ + initBlockSize;\n+ if (minContextSize != 0)\n+ allocSize = Max(allocSize, minContextSize);\n\n>> + * BumpFree\n>> + * Unsupported.\n>> [...]\n>> + * BumpRealloc\n>> + * Unsupported.\n\nRather than the error, can't we make this a no-op (potentially\noptionally, or in a different memory context?)\n\nWhat I mean is, I get that it is an easy validator check that the code\nthat uses this context doesn't accidentally leak memory through\nassumptions about pfree, but this does make this memory context\nunusable for more generic operations where leaking a little memory is\npreferred over the overhead of other memory contexts, as\nMemoryContextReset is quite cheap in the grand scheme of things.\n\nE.g. using a bump context in btree descent could speed up queries when\nwe use compare operators that do allocate memory (e.g. numeric, text),\nbecause btree operators must not leak memory and thus always have to\nmanually keep track of all allocations, which can be expensive.\n\nI understand that allowing pfree/repalloc in bump contexts requires\neach allocation to have a MemoryChunk prefix in overhead, but I think\nit's still a valid use case to have a very low overhead allocator with\nno-op deallocator (except context reset). Do you have performance\ncomparison results between with and without the overhead of\nMemoryChunk?\n\n\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Thu, 25 Jan 2024 13:29:24 +0100",
"msg_from": "Matthias van de Meent <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add bump memory context type and use it for tuplesorts"
},
{
"msg_contents": "Hi,\n\nI wanted to take a look at this patch but it seems to need a rebase,\nbecause of a seemingly trivial conflict in MemoryContextMethodID:\n\n--- src/include/utils/memutils_internal.h\n+++ src/include/utils/memutils_internal.h\n@@ -123,12 +140,13 @@ typedef enum MemoryContextMethodID\n {\n MCTX_UNUSED1_ID, /* 000 occurs in never-used memory */\n MCTX_UNUSED2_ID, /* glibc malloc'd chunks usually match 001 */\n- MCTX_UNUSED3_ID, /* glibc malloc'd chunks > 128kB match 010 */\n+ MCTX_BUMP_ID, /* glibc malloc'd chunks > 128kB match 010\n+ * XXX? */\n MCTX_ASET_ID,\n MCTX_GENERATION_ID,\n MCTX_SLAB_ID,\n MCTX_ALIGNED_REDIRECT_ID,\n- MCTX_UNUSED4_ID /* 111 occurs in wipe_mem'd memory */\n+ MCTX_UNUSED3_ID /* 111 occurs in wipe_mem'd memory */\n } MemoryContextMethodID;\n\n\nI wasn't paying much attention to these memcontext reworks in 2022, so\nmy instinct was simply to use one of those \"UNUSED\" IDs. But after\nlooking at the 80ef92675823 a bit more, are those IDs really unused? I\nmean, we're relying on those values to detect bogus pointers, right? So\nif we instead start using those values for a new memory context, won't\nwe lose the ability to detect those issues?\n\nMaybe I'm completely misunderstanding the implication of those limits,\nbut doesn't this mean the claim that we can support 8 memory context\ntypes is not quite true, and the limit is 4, because the 4 IDs are\nalready used for malloc stuff?\n\nOne thing that confuses me a bit is that the comments introduced by\n80ef92675823 talk about glibc, but the related discussion in [1] talks a\nlot about FreeBSD, NetBSD, ... which don't actually use glibc (AFAIK).\nSo how portable are those unused IDs, actually?\n\nOr am I just too caffeine-deprived and missing something obvious?\n\nregards\n\n\n[1] https://postgr.es/m/[email protected]\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 16 Feb 2024 23:46:03 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add bump memory context type and use it for tuplesorts"
},
{
"msg_contents": "On 11/6/23 19:54, Matthias van de Meent wrote:\n>\n> ...\n>\n> Tangent: Do we have specific notes on worst-case memory usage of\n> memory contexts with various allocation patterns? This new bump\n> allocator seems to be quite efficient, but in a worst-case allocation\n> pattern it can still waste about 1/3 of its allocated memory due to\n> never using free space on previous blocks after an allocation didn't\n> fit on that block.\n> It probably isn't going to be a huge problem in general, but this\n> seems like something that could be documented as a potential problem\n> when you're looking for which allocator to use and compare it with\n> other allocators that handle different allocation sizes more\n> gracefully.\n> \n\nI don't think it's documented anywhere, but I agree it might be an\ninteresting piece of information. It probably did not matter too much\nwhen we had just AllocSet, but now we have 3 very different allocators,\nso maybe we should explain this.\n\nWhen implementing these allocators, it didn't feel that important,\nbecause the new allocators started as intended for a very specific part\nof the code (as in \"This piece of code has a very unique allocation\npattern, let's develop a custom allocator for it.\"), but if we feel we\nwant to make it simpler to use the allocators elsewhere ...\n\nI think there are two obvious places where to document this - either in\nthe header of each memory context .c file, or a README in the mmgr\ndirectory. Or some combination of it.\n\nAt some point I was thinking about writing a \"proper paper\" comparing\nthese allocators in a more scientific / thorough way, but I never got to\ndo it. I wonder if that'd be interesting for enough people.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 16 Feb 2024 23:54:40 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add bump memory context type and use it for tuplesorts"
},
{
"msg_contents": "Tomas Vondra <[email protected]> writes:\n> Maybe I'm completely misunderstanding the implication of those limits,\n> but doesn't this mean the claim that we can support 8 memory context\n> types is not quite true, and the limit is 4, because the 4 IDs are\n> already used for malloc stuff?\n\nWell, correct code would still work, but we will take a hit in our\nability to detect bogus chunk pointers if we convert any of the four\nremaining bit-patterns to valid IDs. That has costs for debugging.\nThe particular bit patterns we left unused were calculated to make it\nlikely that we could detect a malloced-instead-of-palloced chunk (at\nleast with glibc); but in general, reducing the number of invalid\npatterns makes it more likely that a just-plain-bad pointer would\nescape detection.\n\nI am less concerned about that than I was in 2022, because people\nhave already had some time to flush out bugs associated with the\nGUC malloc->palloc conversion. Still, maybe we should think harder\nabout whether we can free up another ID bit before we go eating\nmore ID types. It's not apparent to me that the \"bump context\"\nidea is valuable enough to foreclose ever adding more context types,\nyet it will be pretty close to doing that if we commit it as-is.\n\nIf we do kick this can down the road, then I concur with eating 010\nnext, as it seems the least likely to occur in glibc-malloced\nchunks.\n\n> One thing that confuses me a bit is that the comments introduced by\n> 80ef92675823 talk about glibc, but the related discussion in [1] talks a\n> lot about FreeBSD, NetBSD, ... which don't actually use glibc (AFAIK).\n\nThe conclusion was that the specific invalid values didn't matter as\nmuch on the other platforms as they do with glibc. But right now you\nhave a fifty-fifty chance that a pointer to garbage will look valid.\nDo we want to increase those odds?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 16 Feb 2024 18:14:44 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add bump memory context type and use it for tuplesorts"
},
{
"msg_contents": "\n\nOn 2/17/24 00:14, Tom Lane wrote:\n> Tomas Vondra <[email protected]> writes:\n>> Maybe I'm completely misunderstanding the implication of those limits,\n>> but doesn't this mean the claim that we can support 8 memory context\n>> types is not quite true, and the limit is 4, because the 4 IDs are\n>> already used for malloc stuff?\n> \n> Well, correct code would still work, but we will take a hit in our\n> ability to detect bogus chunk pointers if we convert any of the four\n> remaining bit-patterns to valid IDs. That has costs for debugging.\n> The particular bit patterns we left unused were calculated to make it\n> likely that we could detect a malloced-instead-of-palloced chunk (at\n> least with glibc); but in general, reducing the number of invalid\n> patterns makes it more likely that a just-plain-bad pointer would\n> escape detection.\n> \n> I am less concerned about that than I was in 2022, because people\n> have already had some time to flush out bugs associated with the\n> GUC malloc->palloc conversion. Still, maybe we should think harder\n> about whether we can free up another ID bit before we go eating\n> more ID types. It's not apparent to me that the \"bump context\"\n> idea is valuable enough to foreclose ever adding more context types,\n> yet it will be pretty close to doing that if we commit it as-is.\n> \n> If we do kick this can down the road, then I concur with eating 010\n> next, as it seems the least likely to occur in glibc-malloced\n> chunks.\n> \n\nI don't know if the bump context for tuplesorts alone is worth it, but\nI've been thinking it's not the only place doing something like that.\nI'm aware of two other places doing this \"dense allocation\" - spell.c\nand nodeHash.c. And in those cases it definitely made a big difference\n(ofc, the question is how big the difference would be now, with all the\npalloc improvements).\n\nBut maybe we could switch all those places to a proper memcontext\n(instead of something built on top of a memory context) ... Of course,\nthe code in spell.c/nodeHash.c is quite stable, so the custom code does\nnot cost much.\n\n>> One thing that confuses me a bit is that the comments introduced by\n>> 80ef92675823 talk about glibc, but the related discussion in [1] talks a\n>> lot about FreeBSD, NetBSD, ... which don't actually use glibc (AFAIK).\n> \n> The conclusion was that the specific invalid values didn't matter as\n> much on the other platforms as they do with glibc. But right now you\n> have a fifty-fifty chance that a pointer to garbage will look valid.\n> Do we want to increase those odds?\n> \n\nNot sure. The ability to detect bogus pointers seems valuable, but is\nthe difference between 4/8 and 3/8 really qualitatively different? If it\nis, maybe we should try to increase it by simply adding a bit.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sat, 17 Feb 2024 01:31:07 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add bump memory context type and use it for tuplesorts"
},
{
"msg_contents": "Tomas Vondra <[email protected]> writes:\n> On 2/17/24 00:14, Tom Lane wrote:\n>> The conclusion was that the specific invalid values didn't matter as\n>> much on the other platforms as they do with glibc. But right now you\n>> have a fifty-fifty chance that a pointer to garbage will look valid.\n>> Do we want to increase those odds?\n\n> Not sure. The ability to detect bogus pointers seems valuable, but is\n> the difference between 4/8 and 3/8 really qualitatively different? If it\n> is, maybe we should try to increase it by simply adding a bit.\n\nI think it'd be worth taking a fresh look at the bit allocation in the\nheader word to see if we can squeeze another bit without too much\npain. There's basically no remaining headroom in the current design,\nand it starts to seem like we want some. (I'm also wondering whether\nthe palloc_aligned stuff should have been done some other way than\nby consuming a context type ID.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 16 Feb 2024 20:10:48 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add bump memory context type and use it for tuplesorts"
},
{
"msg_contents": "Hi,\n\nOn 2024-02-16 20:10:48 -0500, Tom Lane wrote:\n> Tomas Vondra <[email protected]> writes:\n> > On 2/17/24 00:14, Tom Lane wrote:\n> >> The conclusion was that the specific invalid values didn't matter as\n> >> much on the other platforms as they do with glibc. But right now you\n> >> have a fifty-fifty chance that a pointer to garbage will look valid.\n> >> Do we want to increase those odds?\n> \n> > Not sure. The ability to detect bogus pointers seems valuable, but is\n> > the difference between 4/8 and 3/8 really qualitatively different? If it\n> > is, maybe we should try to increase it by simply adding a bit.\n> \n> I think it'd be worth taking a fresh look at the bit allocation in the\n> header word to see if we can squeeze another bit without too much\n> pain. There's basically no remaining headroom in the current design,\n> and it starts to seem like we want some.\n\nI think we could fairly easily \"move\" some bits around, by restricting the\nmaximum size of a non-external chunk (i.e. allocations coming out of a larger\nblock, not a separate allocation). Right now we reserve 30 bits for the offset\nfrom the block header to the allocation.\n\nIt seems unlikely that it's ever worth having an undivided 1GB block. Even if\nwe wanted something that large - say because we want to use 1GB huge pages to\nback the block - we could just add a few block headers ever couple hundred\nMBs.\n\nAnother avenue is that presumably the chunk<->block header offset always has\nat least the two lower bits set to zero, so perhaps we could just shift\nblockoffset right by two bits in MemoryChunkSetHdrMask() and left in\nMemoryChunkGetBlock()?\n\n\n> (I'm also wondering whether the palloc_aligned stuff should have been done\n> some other way than by consuming a context type ID.)\n\nPossibly, I just don't quite know how.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 17 Feb 2024 12:08:45 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add bump memory context type and use it for tuplesorts"
},
{
"msg_contents": "Thanks for having a look at this.\n\nOn Tue, 7 Nov 2023 at 07:55, Matthias van de Meent\n<[email protected]> wrote:\n> I think it would make sense to split the \"add a bump allocator\"\n> changes from the \"use the bump allocator in tuplesort\" patches.\n\nI've done this and will post updated patches after replying to the\nother comments.\n\n> Tangent: Do we have specific notes on worst-case memory usage of\n> memory contexts with various allocation patterns? This new bump\n> allocator seems to be quite efficient, but in a worst-case allocation\n> pattern it can still waste about 1/3 of its allocated memory due to\n> never using free space on previous blocks after an allocation didn't\n> fit on that block.\n> It probably isn't going to be a huge problem in general, but this\n> seems like something that could be documented as a potential problem\n> when you're looking for which allocator to use and compare it with\n> other allocators that handle different allocation sizes more\n> gracefully.\n\nIt might be a good idea to document this. The more memory allocator\ntypes we add, the harder it is to decide which one to use when writing\nnew code.\n\n> > +++ b/src/backend/utils/mmgr/bump.c\n> > +BumpBlockIsEmpty(BumpBlock *block)\n> > +{\n> > + /* it's empty if the freeptr has not moved */\n> > + return (block->freeptr == (char *) block + Bump_BLOCKHDRSZ);\n> > [...]\n> > +static inline void\n> > +BumpBlockMarkEmpty(BumpBlock *block)\n> > +{\n> > +#if defined(USE_VALGRIND) || defined(CLOBBER_FREED_MEMORY)\n> > + char *datastart = ((char *) block) + Bump_BLOCKHDRSZ;\n>\n> These two use different definitions of the start pointer. Is that deliberate?\n\nhmm, I'm not sure if I follow what you mean. Are you talking about\nthe \"datastart\" variable and the assignment of block->freeptr (which\nyou've not quoted?)\n\n> > +++ b/src/include/utils/tuplesort.h\n> > @@ -109,7 +109,8 @@ typedef struct TuplesortInstrumentation\n> > * a pointer to the tuple proper (might be a MinimalTuple or IndexTuple),\n> > * which is a separate palloc chunk --- we assume it is just one chunk and\n> > * can be freed by a simple pfree() (except during merge, when we use a\n> > - * simple slab allocator). SortTuples also contain the tuple's first key\n> > + * simple slab allocator and when performing a non-bounded sort where we\n> > + * use a bump allocator). SortTuples also contain the tuple's first key\n>\n> I'd go with something like the following:\n>\n> + * ...(except during merge *where* we use a\n> + * simple slab allocator, and during a non-bounded sort where we\n> + * use a bump allocator).\n\nAdjusted.\n\n\n",
"msg_date": "Tue, 20 Feb 2024 22:41:03 +1300",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add bump memory context type and use it for tuplesorts"
},
{
"msg_contents": "On Wed, 26 Jul 2023 at 12:11, Nathan Bossart <[email protected]> wrote:\n> I think it'd be okay to steal those bits. AFAICT it'd complicate the\n> macros in memutils_memorychunk.h a bit, but that doesn't seem like such a\n> terrible price to pay to allow us to keep avoiding the glibc bit patterns.\n\nI've not adjusted anything here and I've kept the patch using the\n>128KB glibc bit pattern. I think it was a good idea to make our\nlives easier if someone came to us with a bug report, but it's not\nlike the reserved patterns are guaranteed to cover all malloc\nimplementations. What's there is just to cover the likely candidates.\nI'd like to avoid adding any bit shift instructions in the code that\ndecodes the hdrmask.\n\n> > + if (base->sortopt & TUPLESORT_ALLOWBOUNDED)\n> > + tuplen = GetMemoryChunkSpace(tuple);\n> > + else\n> > + tuplen = MAXALIGN(tuple->t_len);\n>\n> nitpick: I see this repeated in a few places, and I wonder if it might\n> deserve a comment.\n\nI ended up adding a macro and a comment in each location that does this.\n\n> I haven't had a chance to try out your benchmark, but I'm hoping to do so\n> in the near future.\n\nGreat. It would be good to get a 2nd opinion.\n\nDavid\n\n\n",
"msg_date": "Tue, 20 Feb 2024 22:46:08 +1300",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add bump memory context type and use it for tuplesorts"
},
{
"msg_contents": "On Fri, 26 Jan 2024 at 01:29, Matthias van de Meent\n<[email protected]> wrote:\n> >> + allocSize = MAXALIGN(sizeof(BumpContext)) + Bump_BLOCKHDRSZ +\n> >> + if (minContextSize != 0)\n> >> + allocSize = Max(allocSize, minContextSize);\n> >> + else\n> >> + allocSize = Max(allocSize, initBlockSize);\n>\n> Shouldn't this be the following, considering the meaning of \"initBlockSize\"?\n\nNo, we want to make the blocksize exactly initBlockSize if we can. Not\ninitBlockSize plus all the header stuff. We do it that way for all\nthe other contexts and I agree that it's a good idea as it keeps the\nmalloc request sizes powers of 2.\n\n> >> + * BumpFree\n> >> + * Unsupported.\n> >> [...]\n> >> + * BumpRealloc\n> >> + * Unsupported.\n>\n> Rather than the error, can't we make this a no-op (potentially\n> optionally, or in a different memory context?)\n\nUnfortunately not. There are no MemoryChunks on bump chunks so we've\nno way to determine the context type a given pointer belongs to. I've\nleft the MemoryChunk on there for debug builds so we can get the\nERRORs to allow us to fix the broken code that is pfreeing these\nchunks.\n\n> I understand that allowing pfree/repalloc in bump contexts requires\n> each allocation to have a MemoryChunk prefix in overhead, but I think\n> it's still a valid use case to have a very low overhead allocator with\n> no-op deallocator (except context reset). Do you have performance\n> comparison results between with and without the overhead of\n> MemoryChunk?\n\nOh right, you've taken this into account. I was hoping not to have\nthe headers otherwise the only gains we see over using generation.c is\nthat of the allocation function being faster.\n\nI certainly did do benchmarks in [1] and saw the 338% increase due to\nthe reduction in memory. That massive jump happened by accident as\nthe sort on tenk1 went from not fitting into default 4MB work_mem to\nfitting in, so what I happened to measure there was the difference of\nspilling to disk and not. The same could happen for this case, so the\noverhead of having the chunk headers really depends on what the test\nis. Probably, \"potentially large\" is likely a good way to describe the\noverhead of having chunk headers. However, to a lesser extent, there\nwill be a difference for large sorts as we'll be able to fit more\ntuples per batch and do fewer batches. The smaller the tuple, the\nmore that will be noticeable as the chunk header is a larger portion\nof the overall allocation with those.\n\nDavid\n\n[1] https://postgr.es/m/CAApHDvoH4ASzsAOyHcxkuY01Qf++8JJ0paw+03dk+W25tQEcNQ@mail.gmail.com\n\n\n",
"msg_date": "Tue, 20 Feb 2024 23:02:05 +1300",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add bump memory context type and use it for tuplesorts"
},
{
"msg_contents": "Thanks for taking an interest in this.\n\nOn Sat, 17 Feb 2024 at 11:46, Tomas Vondra\n<[email protected]> wrote:\n> I wasn't paying much attention to these memcontext reworks in 2022, so\n> my instinct was simply to use one of those \"UNUSED\" IDs. But after\n> looking at the 80ef92675823 a bit more, are those IDs really unused? I\n> mean, we're relying on those values to detect bogus pointers, right? So\n> if we instead start using those values for a new memory context, won't\n> we lose the ability to detect those issues?\n\nI wouldn't say we're \"relying\" on them. Really there just there to\nimprove debugability. If we call any code that tries to look at the\nMemoryChunk header of a malloc'd chunk, then we can expect bad things\nto happen. We no longer have any code which does this.\nMemoryContextContains() did, and it's now gone.\n\n> Maybe I'm completely misunderstanding the implication of those limits,\n> but doesn't this mean the claim that we can support 8 memory context\n> types is not quite true, and the limit is 4, because the 4 IDs are\n> already used for malloc stuff?\n\nI think we all expected a bit more pain from the memory context\nchange. I was happy that Tom did the extra work to look at the malloc\npatterns of glibc, but I think there's been very little gone wrong.\nThe reserved MemoryContextMethodIDs do seem to have allowed [1] to be\nfound, but I guess there'd have been a segfault instead of an ERROR\nwithout the reserved IDs.\n\nI've attached version 2, now split into 2 patches.\n\n0001 for the bump allocator\n0002 to use the new allocator for tuplesorts\n\nDavid\n\n[1] https://postgr.es/m/[email protected]",
"msg_date": "Tue, 20 Feb 2024 23:18:59 +1300",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add bump memory context type and use it for tuplesorts"
},
{
"msg_contents": "On Tue, 20 Feb 2024 at 10:41, David Rowley <[email protected]> wrote:\n> On Tue, 7 Nov 2023 at 07:55, Matthias van de Meent\n> <[email protected]> wrote:\n> > > +++ b/src/backend/utils/mmgr/bump.c\n> > > +BumpBlockIsEmpty(BumpBlock *block)\n> > > +{\n> > > + /* it's empty if the freeptr has not moved */\n> > > + return (block->freeptr == (char *) block + Bump_BLOCKHDRSZ);\n> > > [...]\n> > > +static inline void\n> > > +BumpBlockMarkEmpty(BumpBlock *block)\n> > > +{\n> > > +#if defined(USE_VALGRIND) || defined(CLOBBER_FREED_MEMORY)\n> > > + char *datastart = ((char *) block) + Bump_BLOCKHDRSZ;\n> >\n> > These two use different definitions of the start pointer. Is that deliberate?\n>\n> hmm, I'm not sure if I follow what you mean. Are you talking about\n> the \"datastart\" variable and the assignment of block->freeptr (which\n> you've not quoted?)\n\nWhat I meant was that\n\n> (char *) block + Bump_BLOCKHDRSZ\nvs\n> ((char *) block) + Bump_BLOCKHDRSZ\n\n, when combined with my little experience with pointer addition and\nprecedence, and a lack of compiler at the ready at that point in time,\nI was afraid that \"(char *) block + Bump_BLOCKHDRSZ\" would be parsed\nas \"(char *) (block + Bump_BLOCKHDRSZ)\", which would get different\noffsets across the two statements.\nGodbolt has since helped me understand that both are equivalent.\n\nKind regards,\n\nMatthias van de Meent\n\n\n",
"msg_date": "Tue, 20 Feb 2024 11:52:14 +0100",
"msg_from": "Matthias van de Meent <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add bump memory context type and use it for tuplesorts"
},
{
"msg_contents": "On Tue, 20 Feb 2024 at 11:02, David Rowley <[email protected]> wrote:\n> On Fri, 26 Jan 2024 at 01:29, Matthias van de Meent\n> <[email protected]> wrote:\n> > >> + allocSize = MAXALIGN(sizeof(BumpContext)) + Bump_BLOCKHDRSZ +\n> > >> + if (minContextSize != 0)\n> > >> + allocSize = Max(allocSize, minContextSize);\n> > >> + else\n> > >> + allocSize = Max(allocSize, initBlockSize);\n> >\n> > Shouldn't this be the following, considering the meaning of \"initBlockSize\"?\n>\n> No, we want to make the blocksize exactly initBlockSize if we can. Not\n> initBlockSize plus all the header stuff. We do it that way for all\n> the other contexts and I agree that it's a good idea as it keeps the\n> malloc request sizes powers of 2.\n\nOne part of the reason of my comment was that initBlockSize was\nignored in favour of minContextSize if that was configured, regardless\nof the value of initBlockSize. Is it deliberately ignored when\nminContextSize is set?\n\n> > >> + * BumpFree\n> > >> + * Unsupported.\n> > >> [...]\n> > >> + * BumpRealloc\n> > >> + * Unsupported.\n> >\n> > Rather than the error, can't we make this a no-op (potentially\n> > optionally, or in a different memory context?)\n>\n> Unfortunately not. There are no MemoryChunks on bump chunks so we've\n> no way to determine the context type a given pointer belongs to. I've\n> left the MemoryChunk on there for debug builds so we can get the\n> ERRORs to allow us to fix the broken code that is pfreeing these\n> chunks.\n>\n> > I understand that allowing pfree/repalloc in bump contexts requires\n> > each allocation to have a MemoryChunk prefix in overhead, but I think\n> > it's still a valid use case to have a very low overhead allocator with\n> > no-op deallocator (except context reset). Do you have performance\n> > comparison results between with and without the overhead of\n> > MemoryChunk?\n>\n> Oh right, you've taken this into account. I was hoping not to have\n> the headers otherwise the only gains we see over using generation.c is\n> that of the allocation function being faster.\n>\n> [...] The smaller the tuple, the\n> more that will be noticeable as the chunk header is a larger portion\n> of the overall allocation with those.\n\nI see. Thanks for the explanation.\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Tue, 20 Feb 2024 12:02:41 +0100",
"msg_from": "Matthias van de Meent <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add bump memory context type and use it for tuplesorts"
},
{
"msg_contents": "On Tue, 20 Feb 2024 at 23:52, Matthias van de Meent\n<[email protected]> wrote:\n> What I meant was that\n>\n> > (char *) block + Bump_BLOCKHDRSZ\n> vs\n> > ((char *) block) + Bump_BLOCKHDRSZ\n>\n> , when combined with my little experience with pointer addition and\n> precedence, and a lack of compiler at the ready at that point in time,\n> I was afraid that \"(char *) block + Bump_BLOCKHDRSZ\" would be parsed\n> as \"(char *) (block + Bump_BLOCKHDRSZ)\", which would get different\n> offsets across the two statements.\n> Godbolt has since helped me understand that both are equivalent.\n\nI failed to notice this. I've made them the same regardless to\nprevent future questions from being raised about the discrepancy\nbetween the two.\n\nDavid\n\n\n",
"msg_date": "Wed, 21 Feb 2024 12:29:04 +1300",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add bump memory context type and use it for tuplesorts"
},
{
"msg_contents": "On Wed, 21 Feb 2024 at 00:02, Matthias van de Meent\n<[email protected]> wrote:\n>\n> On Tue, 20 Feb 2024 at 11:02, David Rowley <[email protected]> wrote:\n> > On Fri, 26 Jan 2024 at 01:29, Matthias van de Meent\n> > <[email protected]> wrote:\n> > > >> + allocSize = MAXALIGN(sizeof(BumpContext)) + Bump_BLOCKHDRSZ +\n> > > >> + if (minContextSize != 0)\n> > > >> + allocSize = Max(allocSize, minContextSize);\n> > > >> + else\n> > > >> + allocSize = Max(allocSize, initBlockSize);\n> > >\n> > > Shouldn't this be the following, considering the meaning of \"initBlockSize\"?\n> >\n> > No, we want to make the blocksize exactly initBlockSize if we can. Not\n> > initBlockSize plus all the header stuff. We do it that way for all\n> > the other contexts and I agree that it's a good idea as it keeps the\n> > malloc request sizes powers of 2.\n>\n> One part of the reason of my comment was that initBlockSize was\n> ignored in favour of minContextSize if that was configured, regardless\n> of the value of initBlockSize. Is it deliberately ignored when\n> minContextSize is set?\n\nOk, it's a good question. It's to allow finer-grained control over the\ninitial block as it allows it to be a fixed given size without\naffecting the number that we double for the subsequent blocks.\n\ne.g BumpContextCreate(64*1024, 8*1024, 1024*1024);\n\nwould make the first block 64K and the next block 16K, followed by\n32K, 64K ... 1MB.\n\nWhereas, BumpContextCreate(0, 8*1024, 1024*1024) will start at 8K, 16K ... 1MB.\n\nIt seems useful as you might have a good idea of how much memory the\ncommon case has and want to do that without having to allocate\nsubsequent blocks, but if slightly more memory is required sometimes,\nyou probably don't want the next malloc to be double the common size,\nespecially if the common size is large.\n\nApart from slab.c, this is how all the other contexts work. It seems\nbest to keep this and not to go against the grain on this as there's\nmore to consider if we opt to change the context types of existing\ncontexts.\n\nDavid\n\n\n",
"msg_date": "Wed, 21 Feb 2024 12:48:13 +1300",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add bump memory context type and use it for tuplesorts"
},
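To make the sizing rules concrete, here is a standalone sketch that reproduces the sequences quoted above. It is illustrative only, not the actual context-create code; whether the doubling is applied before or after the first non-keeper block is handed out is a detail of the real implementation.

#include <stdio.h>
#include <stddef.h>

/*
 * Print the first nblocks block sizes under the described policy:
 * minContextSize (if non-zero) pins only the first block, while later
 * blocks double from initBlockSize up to maxBlockSize.
 */
static void
show_block_sizes(size_t minContextSize, size_t initBlockSize,
                 size_t maxBlockSize, int nblocks)
{
    size_t  nextBlockSize = initBlockSize;

    printf("%zu", (minContextSize != 0) ? minContextSize : initBlockSize);
    for (int i = 1; i < nblocks; i++)
    {
        if (nextBlockSize < maxBlockSize)
            nextBlockSize <<= 1;
        printf(" %zu", nextBlockSize);
    }
    printf("\n");
}

int
main(void)
{
    show_block_sizes(64 * 1024, 8 * 1024, 1024 * 1024, 8);  /* 65536 16384 32768 65536 ... 1048576 */
    show_block_sizes(0, 8 * 1024, 1024 * 1024, 8);          /* 8192 16384 32768 ... 1048576 */
    return 0;
}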
{
"msg_contents": "There've been a few changes to the memory allocators in the past week\nand some of these changes also need to be applied to bump.c. So, I've\nrebased the patches on top of today's master. See attached.\n\nI also re-ran the performance tests to check the allocation\nperformance against the recently optimised aset, generation and slab\ncontexts. The attached graph shows the time it took in seconds to\nallocate 1GB of memory performing a context reset after 1MB. The\nfunction I ran the test on is in the attached\npg_allocate_memory_test.patch.txt file.\n\nThe query I ran was:\n\nselect chksz,mtype,pg_allocate_memory_test_reset(chksz,\n1024*1024,1024*1024*1024, mtype) from (values(8),(16),(32),(64))\nsizes(chksz),(values('aset'),('generation'),('slab'),('bump'))\ncxt(mtype) order by mtype,chksz;\n\nDavid",
"msg_date": "Tue, 5 Mar 2024 15:42:10 +1300",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add bump memory context type and use it for tuplesorts"
},
{
"msg_contents": "On Tue, Mar 5, 2024 at 9:42 AM David Rowley <[email protected]> wrote:\n> performance against the recently optimised aset, generation and slab\n> contexts. The attached graph shows the time it took in seconds to\n> allocate 1GB of memory performing a context reset after 1MB. The\n> function I ran the test on is in the attached\n> pg_allocate_memory_test.patch.txt file.\n>\n> The query I ran was:\n>\n> select chksz,mtype,pg_allocate_memory_test_reset(chksz,\n> 1024*1024,1024*1024*1024, mtype) from (values(8),(16),(32),(64))\n> sizes(chksz),(values('aset'),('generation'),('slab'),('bump'))\n> cxt(mtype) order by mtype,chksz;\n\nI ran the test function, but using 256kB and 3MB for the reset\nfrequency, and with 8,16,24,32 byte sizes (patched against a commit\nafter the recent hot/cold path separation). Images attached. I also\nget a decent speedup with the bump context, but not quite as dramatic\nas on your machine. It's worth noting that slab is the slowest for me.\nThis is an Intel i7-10750H.",
"msg_date": "Mon, 11 Mar 2024 16:09:29 +0700",
"msg_from": "John Naylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add bump memory context type and use it for tuplesorts"
},
{
"msg_contents": "On 3/11/24 10:09, John Naylor wrote:\n> On Tue, Mar 5, 2024 at 9:42 AM David Rowley <[email protected]> wrote:\n>> performance against the recently optimised aset, generation and slab\n>> contexts. The attached graph shows the time it took in seconds to\n>> allocate 1GB of memory performing a context reset after 1MB. The\n>> function I ran the test on is in the attached\n>> pg_allocate_memory_test.patch.txt file.\n>>\n>> The query I ran was:\n>>\n>> select chksz,mtype,pg_allocate_memory_test_reset(chksz,\n>> 1024*1024,1024*1024*1024, mtype) from (values(8),(16),(32),(64))\n>> sizes(chksz),(values('aset'),('generation'),('slab'),('bump'))\n>> cxt(mtype) order by mtype,chksz;\n> \n> I ran the test function, but using 256kB and 3MB for the reset\n> frequency, and with 8,16,24,32 byte sizes (patched against a commit\n> after the recent hot/cold path separation). Images attached. I also\n> get a decent speedup with the bump context, but not quite as dramatic\n> as on your machine. It's worth noting that slab is the slowest for me.\n> This is an Intel i7-10750H.\n\nThat's interesting! Obviously, I can't miss a benchmarking party like\nthis, so I ran this on my two machines, and I got very similar results\non both - see the attached charts.\n\nIt seems that compared to the other memory context types:\n\n(a) bump context is much faster\n\n(b) slab is considerably slower\n\nI wonder if this is due to the microbenchmark being a particularly poor\nfit for Slab (but I don't see why would that be), or if this is simply\nhow Slab works. I vaguely recall it was meant handle this much better\nthan AllocSet, both in terms of time and memory usage, but we improved\nAllocSet since then, so maybe it's no longer true / needed?\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Tue, 12 Mar 2024 00:25:01 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add bump memory context type and use it for tuplesorts"
},
{
"msg_contents": "On Tue, 12 Mar 2024 at 12:25, Tomas Vondra\n<[email protected]> wrote:\n> (b) slab is considerably slower\n\nIt would be interesting to modify SlabReset() to, instead of free()ing\nthe blocks, push the first SLAB_MAXIMUM_EMPTY_BLOCKS of them onto the\nemptyblocks list.\n\nThat might give us an idea of how much overhead comes from malloc/free.\n\nHaving something like this as an option when creating a context might\nbe a good idea. generation.c now keeps 1 \"freeblock\" which currently\ndoes not persist during context resets. Some memory context usages\nmight suit having an option like this. Maybe something like the\nexecutor's per-tuple context, which perhaps (c|sh)ould be a generation\ncontext... However, saying that, I see you measure it to be slightly\nslower than aset.\n\nDavid\n\n\n",
"msg_date": "Tue, 12 Mar 2024 12:40:59 +1300",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add bump memory context type and use it for tuplesorts"
},
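The shape of that experiment, keeping a handful of blocks across resets instead of handing them straight back to libc, looks roughly like the standalone sketch below. It is not the actual slab.c code: the names and the intrusive free list are stand-ins, and the real change would reuse slab's existing emptyblocks list and the SLAB_MAXIMUM_EMPTY_BLOCKS cap.

#include <stdlib.h>

#define MAX_KEPT_BLOCKS 10        /* in the spirit of SLAB_MAXIMUM_EMPTY_BLOCKS */

typedef struct kept_block { struct kept_block *next; } kept_block;

static kept_block *kept_list = NULL;
static int         nkept = 0;

/* at reset time, stash the first few blocks instead of free()ing them */
static void
release_block_at_reset(void *block)
{
    if (nkept < MAX_KEPT_BLOCKS)
    {
        kept_block *kb = (kept_block *) block;

        kb->next = kept_list;
        kept_list = kb;
        nkept++;
    }
    else
        free(block);
}

/* later allocations reuse a kept block before falling back to malloc() */
static void *
get_block(size_t blockSize)
{
    if (kept_list != NULL)
    {
        kept_block *kb = kept_list;

        kept_list = kb->next;
        nkept--;
        return kb;
    }
    return malloc(blockSize);
}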
{
"msg_contents": "On Mon, 11 Mar 2024 at 22:09, John Naylor <[email protected]> wrote:\n> I ran the test function, but using 256kB and 3MB for the reset\n> frequency, and with 8,16,24,32 byte sizes (patched against a commit\n> after the recent hot/cold path separation). Images attached. I also\n> get a decent speedup with the bump context, but not quite as dramatic\n> as on your machine. It's worth noting that slab is the slowest for me.\n> This is an Intel i7-10750H.\n\nThanks for trying this out. I didn't check if the performance was\nsusceptible to the memory size before the reset. It certainly would\nbe once the allocation crosses some critical threshold of CPU cache\nsize, but probably it will also be to some extent regarding the number\nof actual mallocs that are required underneath.\n\nI see there's some discussion of bump in [1]. Do you still have a\nvalid use case for bump for performance/memory usage reasons?\n\nThe reason I ask is due to what Tom mentioned in [2] (\"It's not\napparent to me that the \"bump context\" idea is valuable enough to\nforeclose ever adding more context types\"). So, I'm just probing to\nfind other possible use cases that reinforce the usefulness of bump.\nIt would be interesting to try it in a few places to see what\nperformance gains could be had. I've not done much scouting around\nthe codebase for other uses other than non-bounded tuplesorts.\n\nDavid\n\n[1] https://postgr.es/m/CANWCAZbxxhysYtrPYZ-wZbDtvRPWoeTe7RQM1g_+4CB8Z6KYSQ@mail.gmail.com\n[2] https://postgr.es/m/[email protected]\n\n\n",
"msg_date": "Tue, 12 Mar 2024 12:41:38 +1300",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add bump memory context type and use it for tuplesorts"
},
{
"msg_contents": "On Tue, Mar 12, 2024 at 6:41 AM David Rowley <[email protected]> wrote:\n> Thanks for trying this out. I didn't check if the performance was\n> susceptible to the memory size before the reset. It certainly would\n> be once the allocation crosses some critical threshold of CPU cache\n> size, but probably it will also be to some extent regarding the number\n> of actual mallocs that are required underneath.\n\nI neglected to mention it, but the numbers I chose did have the L2/L3\ncache in mind, but the reset frequency didn't seem to make much\ndifference.\n\n> I see there's some discussion of bump in [1]. Do you still have a\n> valid use case for bump for performance/memory usage reasons?\n\nYeah, that was part of my motivation for helping test, although my\ninterest is in saving memory in cases of lots of small allocations. It\nmight help if I make this a little more concrete, so I wrote a\nquick-and-dirty function to measure the bytes used by the proposed TID\nstore and the vacuum's current array.\n\nUsing bitmaps really shines with a high number of offsets per block,\ne.g. with about a million sequential blocks, and 49 offsets per block\n(last parameter is a bound):\n\nselect * from tidstore_memory(0,1*1001*1000, 1,50);\n array_mem | ts_mem\n-----------+----------\n 294294000 | 42008576\n\nThe break-even point with this scenario is around 7 offsets per block:\n\nselect * from tidstore_memory(0,1*1001*1000, 1,8);\n array_mem | ts_mem\n-----------+----------\n 42042000 | 42008576\n\nBelow that, the array gets smaller, but the bitmap just has more empty\nspace. Here, 8 million bytes are used by the chunk header in bitmap\nallocations, so the bump context would help there (I haven't actually\ntried). Of course, the best allocation is no allocation at all, and I\nhave a draft patch to store up to 3 offsets in the last-level node's\npointer array, so for 2 or 3 offsets per block we're smaller than the\narray again:\n\nselect * from tidstore_memory(0,1*1001*1000, 1,4);\n array_mem | ts_mem\n-----------+---------\n 18018000 | 8462336\n\nSequential blocks are not the worst case scenario for memory use, but\nthis gives an idea of what's involved. So, with aset, on average I\nstill expect to use quite a bit less memory, with some corner cases\nthat use more. The bump context would be some extra insurance to\nreduce those corner cases, where there are a large number of blocks in\nplay.\n\n\n",
"msg_date": "Tue, 12 Mar 2024 16:58:39 +0700",
"msg_from": "John Naylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add bump memory context type and use it for tuplesorts"
},
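For reference, the array_mem figures above line up with the assumption that the current dead-TID array stores one 6-byte ItemPointerData per TID: 1,001,000 blocks x 49 offsets x 6 bytes = 294,294,000 bytes, 1,001,000 x 7 x 6 = 42,042,000 bytes (hence the break-even against the roughly 42 MB bitmap), and 1,001,000 x 3 x 6 = 18,018,000 bytes for the 2-3 offsets case.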
{
"msg_contents": "On 3/12/24 00:40, David Rowley wrote:\n> On Tue, 12 Mar 2024 at 12:25, Tomas Vondra\n> <[email protected]> wrote:\n>> (b) slab is considerably slower\n> \n> It would be interesting to modify SlabReset() to, instead of free()ing\n> the blocks, push the first SLAB_MAXIMUM_EMPTY_BLOCKS of them onto the\n> emptyblocks list.\n> \n> That might give us an idea of how much overhead comes from malloc/free.\n> \n> Having something like this as an option when creating a context might\n> be a good idea. generation.c now keeps 1 \"freeblock\" which currently\n> does not persist during context resets. Some memory context usages\n> might suit having an option like this. Maybe something like the\n> executor's per-tuple context, which perhaps (c|sh)ould be a generation\n> context... However, saying that, I see you measure it to be slightly\n> slower than aset.\n> \n\nIIUC you're suggesting maybe it's a problem we free the blocks during\ncontext reset, only to allocate them again shortly after, paying the\nmalloc overhead. This reminded the mempool idea I recently shared in the\nnearby \"scalability bottlenecks\" thread [1]. So I decided to give this a\ntry and see how it affects this benchmark.\n\nAttached is an updated version of the mempool patch, modifying all the\nmemory contexts (not just AllocSet), including the bump context. And\nthen also PDF with results from the two machines, comparing results\nwithout and with the mempool. There's very little impact on small reset\nvalues (128kB, 1MB), but pretty massive improvements on the 8MB test\n(where it's a 2x improvement).\n\nNevertheless, it does not affect the relative performance very much. The\nbump context is still the fastest, but the gap is much smaller.\n\n\nConsidering the mempool serves as a cache in between memory contexts and\nglibc, eliminating most of the malloc/free calls, and essentially\nkeeping the blocks allocated, I doubt slab is slow because of malloc\noverhead - at least in the \"small\" tests (but I haven't looked closer).\n\n\nregards\n\n[1]\nhttps://www.postgresql.org/message-id/510b887e-c0ce-4a0c-a17a-2c6abb8d9a5c%40enterprisedb.com\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Tue, 12 Mar 2024 11:57:07 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add bump memory context type and use it for tuplesorts"
},
{
"msg_contents": "On Tue, 12 Mar 2024 at 23:57, Tomas Vondra\n<[email protected]> wrote:\n> Attached is an updated version of the mempool patch, modifying all the\n> memory contexts (not just AllocSet), including the bump context. And\n> then also PDF with results from the two machines, comparing results\n> without and with the mempool. There's very little impact on small reset\n> values (128kB, 1MB), but pretty massive improvements on the 8MB test\n> (where it's a 2x improvement).\n\nI think it would be good to have something like this. I've done some\nexperiments before with something like this [1]. However, mine was\nmuch more trivial.\n\nOne thing my version did was get rid of the *context* freelist stuff\nin aset. I wondered if we'd need that anymore as, if I understand\ncorrectly, it's just there to stop malloc/free thrashing, which is\nwhat the patch aims to do anyway. Aside from that, it's now a little\nweird that aset.c has that but generation.c and slab.c do not.\n\nOne thing I found was that in btbeginscan(), we have \"so =\n(BTScanOpaque) palloc(sizeof(BTScanOpaqueData));\", which on this\nmachine is 27344 bytes and results in a call to AllocSetAllocLarge()\nand therefore a malloc(). Ideally, there'd be no malloc() calls in a\nstandard pgbench run, at least once the rel and cat caches have been\nwarmed up.\n\nI think there are a few things in your patch that could be improved,\nhere's a quick review.\n\n1. MemoryPoolEntryIndex() could follow the lead of\nAllocSetFreeIndex(), which is quite well-tuned and has no looping. I\nthink you can get rid of MemoryPoolEntrySize() and just have\nMemoryPoolEntryIndex() round up to the next power of 2.\n\n2. The following could use \"result = Min(MEMPOOL_MIN_BLOCK,\npg_nextpower2_size_t(size));\"\n\n+ * should be very low, though (less than MEMPOOL_SIZES, i.e. 14).\n+ */\n+ result = MEMPOOL_MIN_BLOCK;\n+ while (size > result)\n+ result *= 2;\n\n3. \"MemoryPoolFree\": I wonder if this is a good name for such a\nfunction. Really you want to return it to the pool. \"Free\" sounds\nlike you're going to free() it. I went for \"Fetch\" and \"Release\"\nwhich I thought was less confusing.\n\n4. MemoryPoolRealloc(), could this just do nothing if the old and new\nindexes are the same?\n\n5. It might be good to put a likely() around this:\n\n+ /* only do this once every MEMPOOL_REBALANCE_DISTANCE allocations */\n+ if (pool->num_requests < MEMPOOL_REBALANCE_DISTANCE)\n+ return;\n\nOtherwise, if that function is inlined then you'll bloat the functions\nthat inline it for not much good reason. Another approach would be to\nhave a static inline function which checks and calls a noinline\nfunction that does the work so that the rebalance stuff is never\ninlined.\n\nOverall, I wonder if the rebalance stuff might make performance\ntesting quite tricky. I see:\n\n+/*\n+ * How often to rebalance the memory pool buckets (number of allocations).\n+ * This is a tradeoff between the pool being adaptive and more overhead.\n+ */\n+#define MEMPOOL_REBALANCE_DISTANCE 25000\n\nWill TPS take a sudden jump after 25k transactions doing the same\nthing? I'm not saying this shouldn't happen, but... benchmarking is\npretty hard already. I wonder if there's something more fine-grained\nthat can be done which makes the pool adapt faster but not all at\nonce. (I've not studied your algorithm for the rebalance.)\n\nDavid\n\n[1] https://github.com/david-rowley/postgres/tree/malloccache\n\n\n",
"msg_date": "Fri, 15 Mar 2024 15:21:02 +1300",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add bump memory context type and use it for tuplesorts"
},
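Point 5 is the usual hot/cold split. A standalone sketch of the suggested shape follows; the type and function names are stand-ins rather than anything from the mempool patch, and in PostgreSQL code the annotations would be spelled pg_noinline and unlikely().

#include <stdint.h>

#define MEMPOOL_REBALANCE_DISTANCE 25000      /* value quoted from the patch */

typedef struct mempool_stub { uint64_t num_requests; } mempool_stub;

/* cold path: kept out of line so the inline caller stays small */
__attribute__((noinline)) static void
mempool_rebalance_slow(mempool_stub *pool)
{
    /* ... the actual rebalancing work would go here ... */
    pool->num_requests = 0;
}

/* hot path: a cheap, rarely-taken check that callers can inline freely */
static inline void
mempool_maybe_rebalance(mempool_stub *pool)
{
    if (__builtin_expect(pool->num_requests >= MEMPOOL_REBALANCE_DISTANCE, 0))
        mempool_rebalance_slow(pool);
}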
{
"msg_contents": "On 3/15/24 03:21, David Rowley wrote:\n> On Tue, 12 Mar 2024 at 23:57, Tomas Vondra\n> <[email protected]> wrote:\n>> Attached is an updated version of the mempool patch, modifying all the\n>> memory contexts (not just AllocSet), including the bump context. And\n>> then also PDF with results from the two machines, comparing results\n>> without and with the mempool. There's very little impact on small reset\n>> values (128kB, 1MB), but pretty massive improvements on the 8MB test\n>> (where it's a 2x improvement).\n> \n> I think it would be good to have something like this. I've done some\n> experiments before with something like this [1]. However, mine was\n> much more trivial.\n> \n\nInteresting. My thing is a bit more complex because it was not meant to\nbe a cache initially, but more a way to limit the memory allocated by a\nbackend (discussed in [1]), or perhaps even a smaller part of a plan.\n\nI only added the caching after I ran into some bottlenecks [2], and\nmalloc turned out to be a scalability issue.\n\n> One thing my version did was get rid of the *context* freelist stuff\n> in aset. I wondered if we'd need that anymore as, if I understand\n> correctly, it's just there to stop malloc/free thrashing, which is\n> what the patch aims to do anyway. Aside from that, it's now a little\n> weird that aset.c has that but generation.c and slab.c do not.\n> \n\nTrue. I think the \"memory pool\" shared by all memory contexts would be a\nmore principled way to do this - not only it works for all memory\ncontext types, but it's also part of the \"regular\" cache eviction like\neverything else (which the context freelist is not).\n\n> One thing I found was that in btbeginscan(), we have \"so =\n> (BTScanOpaque) palloc(sizeof(BTScanOpaqueData));\", which on this\n> machine is 27344 bytes and results in a call to AllocSetAllocLarge()\n> and therefore a malloc(). Ideally, there'd be no malloc() calls in a\n> standard pgbench run, at least once the rel and cat caches have been\n> warmed up.\n> \n\nRight. That's exactly what I found in [2], where it's a massive problem\nwith many partitions and many concurrent connections.\n\n> I think there are a few things in your patch that could be improved,\n> here's a quick review.\n> \n> 1. MemoryPoolEntryIndex() could follow the lead of\n> AllocSetFreeIndex(), which is quite well-tuned and has no looping. I\n> think you can get rid of MemoryPoolEntrySize() and just have\n> MemoryPoolEntryIndex() round up to the next power of 2.\n> \n> 2. The following could use \"result = Min(MEMPOOL_MIN_BLOCK,\n> pg_nextpower2_size_t(size));\"\n> \n> + * should be very low, though (less than MEMPOOL_SIZES, i.e. 14).\n> + */\n> + result = MEMPOOL_MIN_BLOCK;\n> + while (size > result)\n> + result *= 2;\n> \n> 3. \"MemoryPoolFree\": I wonder if this is a good name for such a\n> function. Really you want to return it to the pool. \"Free\" sounds\n> like you're going to free() it. I went for \"Fetch\" and \"Release\"\n> which I thought was less confusing.\n> \n> 4. MemoryPoolRealloc(), could this just do nothing if the old and new\n> indexes are the same?\n> \n> 5. It might be good to put a likely() around this:\n> \n> + /* only do this once every MEMPOOL_REBALANCE_DISTANCE allocations */\n> + if (pool->num_requests < MEMPOOL_REBALANCE_DISTANCE)\n> + return;\n> \n> Otherwise, if that function is inlined then you'll bloat the functions\n> that inline it for not much good reason. 
Another approach would be to\n> have a static inline function which checks and calls a noinline\n> function that does the work so that the rebalance stuff is never\n> inlined.\n> \n\nYes, I agree with all of that. I was a bit lazy when doing the PoC, so I\nignored these things.\n\n> Overall, I wonder if the rebalance stuff might make performance\n> testing quite tricky. I see:\n> \n> +/*\n> + * How often to rebalance the memory pool buckets (number of allocations).\n> + * This is a tradeoff between the pool being adaptive and more overhead.\n> + */\n> +#define MEMPOOL_REBALANCE_DISTANCE 25000\n> \n> Will TPS take a sudden jump after 25k transactions doing the same\n> thing? I'm not saying this shouldn't happen, but... benchmarking is\n> pretty hard already. I wonder if there's something more fine-grained\n> that can be done which makes the pool adapt faster but not all at\n> once. (I've not studied your algorithm for the rebalance.)\n> \n\nI don't think so, or at least I haven't observed anything like that. My\nintent was to make the rebalancing fairly frequent but incremental, with\neach increment doing only a tiny amount of work.\n\nIt does not do any more malloc/free calls than without the cache - it\nmay just delay them a bit, and the assumption is the rest of the\nrebalancing (walking the size slots and adjusting counters based on\nactivity since the last run) is super cheap.\n\nSo it shouldn't be the case that the rebalancing is so expensive to\ncause a measurable drop in throughput, or something like that. I can\nimagine spreading it even more (doing it in smaller steps), but on the\nother hand the interval must not be too short - we need to do enough\nallocations to provide good \"heuristics\" how to adjust the cache.\n\nBTW it's not directly tied to transactions - it's triggered by block\nallocations, and each transaction likely needs many of those.\n\n\n\nregards\n\n\n[1]\nhttps://www.postgresql.org/message-id/bd57d9a4c219cc1392665fd5fba61dde8027b3da.camel%40crunchydata.com\n\n[2]\nhttps://www.postgresql.org/message-id/[email protected]\n\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 15 Mar 2024 12:38:26 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add bump memory context type and use it for tuplesorts"
},
{
"msg_contents": "On Tue, 5 Mar 2024 at 15:42, David Rowley <[email protected]> wrote:\n> The query I ran was:\n>\n> select chksz,mtype,pg_allocate_memory_test_reset(chksz,\n> 1024*1024,1024*1024*1024, mtype) from (values(8),(16),(32),(64))\n> sizes(chksz),(values('aset'),('generation'),('slab'),('bump'))\n> cxt(mtype) order by mtype,chksz;\n\nAndres and I were discussing this patch offlist in the context of\n\"should we have bump\". Andres wonders if it would be better to have a\nfunction such as palloc_nofree() (we didn't actually discuss the\nname), which for aset, would forego rounding up to the next power of 2\nand not bother checking the freelist and only have a chunk header for\nMEMORY_CONTEXT_CHECKING builds. For non-MEMORY_CONTEXT_CHECKING\nbuilds, the chunk header could be set to some other context type such\nas one of the unused ones or perhaps a dedicated new one that does\nsomething along the lines of BogusFree() which raises an ERROR if\nanything tries to pfree or repalloc it.\n\nAn advantage of having this instead of bump would be that it could be\nused for things such as the parser, where we make a possibly large\nseries of small allocations and never free them again.\n\nAndres ask me to run some benchmarks to mock up AllocSetAlloc() to\nhave it not check the freelist to see how the performance of it\ncompares to BumpAlloc(). I did this in the attached 2 patches. The\n0001 patch just #ifdefs that part of AllocSetAlloc out, however\nproperly implementing this is more complex as aset.c currently stores\nthe freelist index in the MemoryChunk rather than the chunk_size. I\ndid this because it saved having to call AllocSetFreeIndex() in\nAllocSetFree() which made a meaningful performance improvement in\npfree(). The 0002 patch effectively reverses that change out so that\nthe chunk_size is stored. Again, these patches are only intended to\ndemonstrate the performance and check how it compares to bump.\n\nI'm yet uncertain why, but I find that the first time I run the query\nquoted above, the aset results are quite a bit slower than on\nsubsequent runs. Other context types don't seem to suffer from this.\nThe previous results I sent in [1] were of the initial run after\nstarting up the database.\n\nThe attached graph shows the number of seconds it takes to allocate a\ntotal of 1GBs of memory in various chunk sizes, resetting the context\nafter 1MBs has been allocated, so as to keep the test sized so it fits\nin CPU caches.\n\nI'm not drawing any particular conclusion from the results aside from\nit's not quite as fast as bump. I also have some reservations about\nhow easy it would be to actually use something like palloc_nofree().\nFor example heap_form_minimal_tuple() does palloc0(). What if I\nwanted to call ExecCopySlotMinimalTuple() and use palloc0_nofree().\nWould we need new versions of various functions to give us control\nover this?\n\nDavid\n\n[1] https://www.postgresql.org/message-id/CAApHDvr_hGT=kaP0YXbKSNZtbRX+6hUkieCWEn2BULwW1uTr_Q@mail.gmail.com",
"msg_date": "Tue, 26 Mar 2024 00:41:57 +1300",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add bump memory context type and use it for tuplesorts"
},
{
"msg_contents": "On 3/25/24 12:41, David Rowley wrote:\n> On Tue, 5 Mar 2024 at 15:42, David Rowley <[email protected]> wrote:\n>> The query I ran was:\n>>\n>> select chksz,mtype,pg_allocate_memory_test_reset(chksz,\n>> 1024*1024,1024*1024*1024, mtype) from (values(8),(16),(32),(64))\n>> sizes(chksz),(values('aset'),('generation'),('slab'),('bump'))\n>> cxt(mtype) order by mtype,chksz;\n> \n> Andres and I were discussing this patch offlist in the context of\n> \"should we have bump\". Andres wonders if it would be better to have a\n> function such as palloc_nofree() (we didn't actually discuss the\n> name), which for aset, would forego rounding up to the next power of 2\n> and not bother checking the freelist and only have a chunk header for\n> MEMORY_CONTEXT_CHECKING builds. For non-MEMORY_CONTEXT_CHECKING\n> builds, the chunk header could be set to some other context type such\n> as one of the unused ones or perhaps a dedicated new one that does\n> something along the lines of BogusFree() which raises an ERROR if\n> anything tries to pfree or repalloc it.\n> \n> An advantage of having this instead of bump would be that it could be\n> used for things such as the parser, where we make a possibly large\n> series of small allocations and never free them again.\n> \n\nI may be missing something, but I don't quite understand how this would\nbe simpler to use in places like parser. Wouldn't it require all the\nplaces to start explicitly calling palloc_nofree()? How is that better\nthan having a specialized memory context?\n\n> Andres ask me to run some benchmarks to mock up AllocSetAlloc() to\n> have it not check the freelist to see how the performance of it\n> compares to BumpAlloc(). I did this in the attached 2 patches. The\n> 0001 patch just #ifdefs that part of AllocSetAlloc out, however\n> properly implementing this is more complex as aset.c currently stores\n> the freelist index in the MemoryChunk rather than the chunk_size. I\n> did this because it saved having to call AllocSetFreeIndex() in\n> AllocSetFree() which made a meaningful performance improvement in\n> pfree(). The 0002 patch effectively reverses that change out so that\n> the chunk_size is stored. Again, these patches are only intended to\n> demonstrate the performance and check how it compares to bump.\n> \n> I'm yet uncertain why, but I find that the first time I run the query\n> quoted above, the aset results are quite a bit slower than on\n> subsequent runs. Other context types don't seem to suffer from this.\n> The previous results I sent in [1] were of the initial run after\n> starting up the database.\n> \n> The attached graph shows the number of seconds it takes to allocate a\n> total of 1GBs of memory in various chunk sizes, resetting the context\n> after 1MBs has been allocated, so as to keep the test sized so it fits\n> in CPU caches.\n> \n\nYeah, strange and interesting. My guess is it's some sort of caching\neffect, where the first run has to initialize stuff that's not in any of\nthe CPU caches yet, likely something specific to AllocSet (freelist?).\nOr maybe memory prefetching does not work that well for AllocSet?\n\nI'd try perf-stat, that might tell us more ... but who knows.\n\nAlternatively, it might be some interaction with the glibc allocator.\nHave you tried using jemalloc using LD_PRELOAD, or tweaking the glibc\nparameters using environment variables? 
If you feel adventurous, you\nmight even try the memory pool stuff, although I'm not sure that can\nhelp with the first run.\n\n> I'm not drawing any particular conclusion from the results aside from\n> it's not quite as fast as bump. I also have some reservations about\n> how easy it would be to actually use something like palloc_nofree().\n> For example heap_form_minimal_tuple() does palloc0(). What if I\n> wanted to call ExecCopySlotMinimalTuple() and use palloc0_nofree().\n> Would we need new versions of various functions to give us control\n> over this?\n> \n\nThat's kinda the problem that I mentioned above - is this really any\nsimpler/better than just having a separate memory context type? I don't\nsee what benefits this is supposed to have.\n\nIMHO the idea of having a general purpose memory context and then also\nspecialized memory contexts for particular allocation patterns is great,\nand we should embrace it. Adding more and more special cases into\nAllocSet seems to go directly against that idea, makes the code more\ncomplex, and I don't quite see how is that better or easier to use than\nhaving a separate BumpContext ...\n\nHaving an AllocSet that mixes chunks that may be freed and chunks that\ncan't be freed, and have a different context type in the chunk header,\nseems somewhat confusing and \"not great\" for debugging, for example.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 25 Mar 2024 15:35:10 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add bump memory context type and use it for tuplesorts"
},
{
"msg_contents": "Tomas Vondra <[email protected]> writes:\n> IMHO the idea of having a general purpose memory context and then also\n> specialized memory contexts for particular allocation patterns is great,\n> and we should embrace it. Adding more and more special cases into\n> AllocSet seems to go directly against that idea, makes the code more\n> complex, and I don't quite see how is that better or easier to use than\n> having a separate BumpContext ...\n\nI agree with this completely. However, the current design for chunk\nheaders is mighty restrictive about how many kinds of contexts we can\nhave. We need to open that back up.\n\nCould we move the knowledge of exactly which context type it is out\nof the per-chunk header and keep it in the block header? This'd\nrequire that every context type have a standardized way of finding\nthe block header from a chunk. We could repurpose the existing\nMemoryContextMethodID bits to allow having a small number of different\nways, perhaps.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 25 Mar 2024 10:53:12 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add bump memory context type and use it for tuplesorts"
},
{
"msg_contents": "On Tue, 26 Mar 2024 at 03:53, Tom Lane <[email protected]> wrote:\n> I agree with this completely. However, the current design for chunk\n> headers is mighty restrictive about how many kinds of contexts we can\n> have. We need to open that back up.\n\nAndres mentioned how we could do this in [1]. One possible issue with\nthat is that slab.c has no external chunks so would restrict slab to\n512MB chunks. I doubt that's ever going to realistically be an issue.\nThat's just not a good use case for slab, so I'd be ok with that.\n\n> Could we move the knowledge of exactly which context type it is out\n> of the per-chunk header and keep it in the block header? This'd\n> require that every context type have a standardized way of finding\n> the block header from a chunk. We could repurpose the existing\n> MemoryContextMethodID bits to allow having a small number of different\n> ways, perhaps.\n\nI wasn't 100% clear on your opinion about using 010 vs expanding the\nbit-space. Based on the following it sounded like you were not\noutright rejecting the idea of consuming the 010 pattern.\n\nOn Sat, 17 Feb 2024 at 12:14, Tom Lane <[email protected]> wrote:\n> If we do kick this can down the road, then I concur with eating 010\n> next, as it seems the least likely to occur in glibc-malloced\n> chunks.\n\nDavid\n\n[1] https://postgr.es/m/[email protected]\n\n\n",
"msg_date": "Tue, 26 Mar 2024 09:44:03 +1300",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add bump memory context type and use it for tuplesorts"
},
{
"msg_contents": "David Rowley <[email protected]> writes:\n> On Tue, 26 Mar 2024 at 03:53, Tom Lane <[email protected]> wrote:\n>> Could we move the knowledge of exactly which context type it is out\n>> of the per-chunk header and keep it in the block header?\n\n> I wasn't 100% clear on your opinion about using 010 vs expanding the\n> bit-space. Based on the following it sounded like you were not\n> outright rejecting the idea of consuming the 010 pattern.\n\nWhat I said earlier was that 010 was the least bad choice if we\nfail to do any expansibility work; but I'm not happy with failing\nto do that.\n\nBasically, I'm not happy with consuming the last reasonably-available\npattern for a memory context type that has little claim to being the\nLast Context Type We Will Ever Want. Rather than making a further\ndent in our ability to detect corrupted chunks, we should do something\ntowards restoring the expansibility that existed in the original\ndesign. Then we can add bump contexts and whatever else we want.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 25 Mar 2024 17:44:08 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add bump memory context type and use it for tuplesorts"
},
{
"msg_contents": "On Mon, 25 Mar 2024 at 22:44, Tom Lane <[email protected]> wrote:\n>\n> David Rowley <[email protected]> writes:\n> > On Tue, 26 Mar 2024 at 03:53, Tom Lane <[email protected]> wrote:\n> >> Could we move the knowledge of exactly which context type it is out\n> >> of the per-chunk header and keep it in the block header?\n>\n> > I wasn't 100% clear on your opinion about using 010 vs expanding the\n> > bit-space. Based on the following it sounded like you were not\n> > outright rejecting the idea of consuming the 010 pattern.\n>\n> What I said earlier was that 010 was the least bad choice if we\n> fail to do any expansibility work; but I'm not happy with failing\n> to do that.\n\nOkay.\n\n> Basically, I'm not happy with consuming the last reasonably-available\n> pattern for a memory context type that has little claim to being the\n> Last Context Type We Will Ever Want. Rather than making a further\n> dent in our ability to detect corrupted chunks, we should do something\n> towards restoring the expansibility that existed in the original\n> design. Then we can add bump contexts and whatever else we want.\n\nSo, would something like the attached make enough IDs available so\nthat we can add the bump context anyway?\n\nIt extends memory context IDs to 5 bits (32 values), of which\n- 8 have glibc's malloc pattern of 001/010;\n- 1 is unused memory's 00000\n- 1 is wipe_mem's 11111\n- 4 are used by existing contexts (Aset/Generation/Slab/AlignedRedirect)\n- 18 are newly available.\n\nKind regards,\n\nMatthias",
"msg_date": "Thu, 4 Apr 2024 21:42:12 +0200",
"msg_from": "Matthias van de Meent <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add bump memory context type and use it for tuplesorts"
},
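As an illustration of what widening the ID means at runtime: the context-method ID is recovered by masking the low bits of the 8-byte word stored just before the allocated pointer, so going from 3 to 5 bits is essentially a wider mask. The sketch below is standalone and simplifies the real header layout.

#include <stdint.h>
#include <stdio.h>

/* extract the low methodid_bits bits of the chunk header word */
static unsigned
method_id_from_header(uint64_t header, int methodid_bits)
{
    uint64_t mask = (UINT64_C(1) << methodid_bits) - 1;

    return (unsigned) (header & mask);
}

int
main(void)
{
    uint64_t header = UINT64_C(0x123456789ABCDE13);   /* low bits ...10011 */

    printf("3-bit id: %u\n", method_id_from_header(header, 3));   /* 3  */
    printf("5-bit id: %u\n", method_id_from_header(header, 5));   /* 19 */
    return 0;
}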
{
"msg_contents": "Matthias van de Meent <[email protected]> writes:\n> On Mon, 25 Mar 2024 at 22:44, Tom Lane <[email protected]> wrote:\n>> Basically, I'm not happy with consuming the last reasonably-available\n>> pattern for a memory context type that has little claim to being the\n>> Last Context Type We Will Ever Want. Rather than making a further\n>> dent in our ability to detect corrupted chunks, we should do something\n>> towards restoring the expansibility that existed in the original\n>> design. Then we can add bump contexts and whatever else we want.\n\n> So, would something like the attached make enough IDs available so\n> that we can add the bump context anyway?\n\n> It extends memory context IDs to 5 bits (32 values), of which\n> - 8 have glibc's malloc pattern of 001/010;\n> - 1 is unused memory's 00000\n> - 1 is wipe_mem's 11111\n> - 4 are used by existing contexts (Aset/Generation/Slab/AlignedRedirect)\n> - 18 are newly available.\n\nThis seems like it would solve the problem for a good long time\nto come; and if we ever need more IDs, we could steal one more bit\nby requiring the offset to the block header to be a multiple of 8.\n(Really, we could just about do that today at little or no cost ...\nmachines with MAXALIGN less than 8 are very thin on the ground.)\n\nThe only objection I can think of is that perhaps this would slow\nthings down a tad by requiring more complicated shifting/masking.\nI wonder if we could redo the performance checks that were done\non the way to accepting the current design.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 04 Apr 2024 16:43:36 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add bump memory context type and use it for tuplesorts"
},
{
"msg_contents": "On Thu, 4 Apr 2024 at 22:43, Tom Lane <[email protected]> wrote:\n>\n> Matthias van de Meent <[email protected]> writes:\n> > On Mon, 25 Mar 2024 at 22:44, Tom Lane <[email protected]> wrote:\n> >> Basically, I'm not happy with consuming the last reasonably-available\n> >> pattern for a memory context type that has little claim to being the\n> >> Last Context Type We Will Ever Want. Rather than making a further\n> >> dent in our ability to detect corrupted chunks, we should do something\n> >> towards restoring the expansibility that existed in the original\n> >> design. Then we can add bump contexts and whatever else we want.\n>\n> > So, would something like the attached make enough IDs available so\n> > that we can add the bump context anyway?\n>\n> > It extends memory context IDs to 5 bits (32 values), of which\n> > - 8 have glibc's malloc pattern of 001/010;\n> > - 1 is unused memory's 00000\n> > - 1 is wipe_mem's 11111\n> > - 4 are used by existing contexts (Aset/Generation/Slab/AlignedRedirect)\n> > - 18 are newly available.\n>\n> This seems like it would solve the problem for a good long time\n> to come; and if we ever need more IDs, we could steal one more bit\n> by requiring the offset to the block header to be a multiple of 8.\n> (Really, we could just about do that today at little or no cost ...\n> machines with MAXALIGN less than 8 are very thin on the ground.)\n\nHmm, it seems like a decent idea, but I didn't want to deal with the\nrepercussions of that this late in the cycle when these 2 bits were\nstill relatively easy to get hold of.\n\n> The only objection I can think of is that perhaps this would slow\n> things down a tad by requiring more complicated shifting/masking.\n> I wonder if we could redo the performance checks that were done\n> on the way to accepting the current design.\n\nI didn't do very extensive testing, but the light performance tests\nthat I did with the palloc performance benchmark patch & script shared\nabove indicate didn't measure an observable negative effect.\nAn adapted version of the test that uses repalloc() to check\nperformance differences in MCXT_METHOD() doesn't show a significant\nperformance difference from master either. That test case is attached\nas repalloc-performance-test-function.patch.txt.\n\nThe full set of patches would then accumulate to the attached v5 of\nthe patchset.\n0001 is an update of my patch from yesterday, in which I update\nMemoryContextMethodID infrastructure for more IDs, and use a new\nnaming scheme for unused/reserved IDs.\n0002 and 0003 are David's patches, with minor changes to work with\n0001 (rebasing, and I moved the location around to keep function\ndeclaration in order with memctx ids)\n\nKind regards,\n\nMatthias van de Meent",
"msg_date": "Fri, 5 Apr 2024 15:29:59 +0200",
"msg_from": "Matthias van de Meent <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add bump memory context type and use it for tuplesorts"
},
{
"msg_contents": "Matthias van de Meent <[email protected]> writes:\n> On Thu, 4 Apr 2024 at 22:43, Tom Lane <[email protected]> wrote:\n>> The only objection I can think of is that perhaps this would slow\n>> things down a tad by requiring more complicated shifting/masking.\n>> I wonder if we could redo the performance checks that were done\n>> on the way to accepting the current design.\n\n> I didn't do very extensive testing, but the light performance tests\n> that I did with the palloc performance benchmark patch & script shared\n> above indicate didn't measure an observable negative effect.\n\nOK. I did not read the patch very closely, but at least in principle\nI have no further objections. David, are you planning to take point\non getting this in?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 05 Apr 2024 10:24:31 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add bump memory context type and use it for tuplesorts"
},
{
"msg_contents": "On Sat, 6 Apr 2024 at 03:24, Tom Lane <[email protected]> wrote:\n> OK. I did not read the patch very closely, but at least in principle\n> I have no further objections. David, are you planning to take point\n> on getting this in?\n\nYes. I'll be looking soon.\n\nDavid\n\n\n",
"msg_date": "Sat, 6 Apr 2024 15:42:11 +1300",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add bump memory context type and use it for tuplesorts"
},
{
"msg_contents": "On Sat, 6 Apr 2024 at 02:30, Matthias van de Meent\n<[email protected]> wrote:\n>\n> On Thu, 4 Apr 2024 at 22:43, Tom Lane <[email protected]> wrote:\n> >\n> > Matthias van de Meent <[email protected]> writes:\n> > > It extends memory context IDs to 5 bits (32 values), of which\n> > > - 8 have glibc's malloc pattern of 001/010;\n> > > - 1 is unused memory's 00000\n> > > - 1 is wipe_mem's 11111\n> > > - 4 are used by existing contexts (Aset/Generation/Slab/AlignedRedirect)\n> > > - 18 are newly available.\n> >\n> > This seems like it would solve the problem for a good long time\n> > to come; and if we ever need more IDs, we could steal one more bit\n> > by requiring the offset to the block header to be a multiple of 8.\n> > (Really, we could just about do that today at little or no cost ...\n> > machines with MAXALIGN less than 8 are very thin on the ground.)\n>\n> Hmm, it seems like a decent idea, but I didn't want to deal with the\n> repercussions of that this late in the cycle when these 2 bits were\n> still relatively easy to get hold of.\n\nThanks for writing the patch.\n\nI think 5 bits is 1 too many. 4 seems fine. I also think you've\nreserved too many slots in your patch as I disagree that we need to\nreserve the glibc malloc pattern anywhere but in the 1 and 2 slots of\nthe mcxt_methods[] array. I looked again at the 8 bytes prior to a\nglibc malloc'd chunk and I see the lowest 4 bits of the headers\nconsistently set to 0001 for all powers of 2 starting at 8 up to\n65536. 131072 seems to vary and beyond that, they seem to be set to\n0010.\n\nWith that, there's no increase in the number of reserved slots from\nwhat we have reserved today. Still 4. So having 4 bits instead of 3\nbits gives us a total of 12 slots rather than 4 slots. Having 3x\nslots seems enough. We might need an extra bit for something else\nsometime. I think keeping it up our sleeve is a good idea.\n\nAnother reason not to make it 5 bits is that I believe that would make\nthe mcxt_methods[] array 2304 bytes rather than 576 bytes. 4 bits\nmakes it 1152 bytes, if I'm counting correctly.\n\nI revised the patch to simplify hdrmask logic. This started with me\nhaving trouble finding the best set of words to document that the\noffset is \"half the bytes between the chunk and block\". So, instead\nof doing that, I've just made it so these two fields effectively\noverlap. The lowest bit of the block offset is the same bit as the\nhigh bit of what MemoryChunkGetValue returns. I've just added an\nAssert to MemoryChunkSetHdrMask to ensure that the low bit is never\nset in the offset. Effectively, using this method, there's no new *C*\ncode to split the values out. However, the compiler will emit one\nadditional bitwise-AND to implement the following, which I'll express\nusing fragments from the 0001 patch:\n\n #define MEMORYCHUNK_MAX_BLOCKOFFSET UINT64CONST(0x3FFFFFFF)\n\n+#define MEMORYCHUNK_BLOCKOFFSET_MASK UINT64CONST(0x3FFFFFFE)\n\n#define HdrMaskBlockOffset(hdrmask) \\\n- (((hdrmask) >> MEMORYCHUNK_BLOCKOFFSET_BASEBIT) & MEMORYCHUNK_MAX_BLOCKOFFSET)\n+ (((hdrmask) >> MEMORYCHUNK_BLOCKOFFSET_BASEBIT) &\nMEMORYCHUNK_BLOCKOFFSET_MASK)\n\nPreviously most compilers would have optimised the bitwise-AND away as\nit was effectively similar to doing something like (0xFFFFFFFF >> 16)\n& 0xFFFF. The compiler should know that no bits can be masked out by\nthe bitwise-AND due to the left shift zeroing them all. 
If you swap\n0xFFFF for 0xFFFE then that's no longer true.\n\nI also updated src/backend/utils/mmgr/README to explain this and\nadjust the mentions of 3-bits and 61-bits to 4-bits and 60-bits. I\nalso explained the overlapping part.\n\nI spent quite a bit of time benchmarking this. There is a small\nperformance regression from widening to 4 bits, but it's not huge.\nPlease see the 3 attached graphs. All times in the graph are the\naverage of the time taken for each test over 9 runs.\n\nbump_palloc_reset.png: Shows the results from:\n\nselect stype,chksz,avg(pg_allocate_memory_test_reset(chksz,1024*1024,10::bigint*1024*1024*1024,stype))\nfrom (values(8),(16),(32),(64),(128)) t1(chksz)\ncross join (values('bump')) t2(stype)\ncross join generate_series(1,3) r(run)\ngroup by stype,chksz\norder by stype,chksz;\n\nThere's no performance regression here. Bump does not have headers so\nno extra bits are used anywhere.\n\naset_palloc_pfree.png: Shows the results from:\n\nselect stype,chksz,avg(pg_allocate_memory_test(chksz,1024*1024,10::bigint*1024*1024*1024,stype))\nfrom (values(8),(16),(32),(64),(128)) t1(chksz)\ncross join (values('aset')) t2(stype)\ncross join generate_series(1,3) r(run)\ngroup by stype,chksz\norder by stype,chksz;\n\nThis exercises palloc and pfree. Effectively it's allocating 10GB of\nmemory but starting to pfree before each new palloc after we get to\n1MB of concurrent allocations. Because this test calls pfree, we need\nto look at the chunk header and into the mcxt_methods[] array. It's\nimportant to test this part.\n\nThe graph shows a small performance regression of about 1-2%.\n\ngeneration_palloc_pfree.png: Same as aset_palloc_pfree.png but for the\ngeneration context. The regression here is slightly more than aset.\nSeems to be about 2-3.5%. I don't think this is too surprising as\nthere's more room for instruction-level parallelism in AllocSetFree()\nwhen calling MemoryChunkGetBlock() than there is in GenerationFree().\nIn GenerationFree() we get the block and then immediately do\n\"block->nfree += 1;\", whereas in AllocSetFree() we also call\nMemoryChunkGetValue().\n\nI've attached an updated set of patches, plus graphs, plus entire\nbenchmark results as a .txt file.\n\nNote the v6-0003 patch is just v4-0002 renamed so the CFbot applies in\nthe correct order.\n\nI'm planning on pushing these, pending a final look at 0002 and 0003\non Sunday morning NZ time (UTC+12), likely in about 10 hours time.\n\nDavid",
"msg_date": "Sun, 7 Apr 2024 01:36:28 +1300",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add bump memory context type and use it for tuplesorts"
},
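The single extra instruction mentioned above can be seen in a standalone sketch. The shift amount and constants mirror the fragments quoted in the message but are illustrative rather than the exact header layout: with a mask covering every bit that can survive the shift, the AND is redundant and a compiler may drop it, while clearing the low (overlapped) bit forces the AND to be emitted.

#include <stdint.h>

#define BASEBIT          34                       /* illustrative shift amount */
#define MAX_BLOCKOFFSET  UINT64_C(0x3FFFFFFF)     /* all 30 surviving bits set: AND is a no-op */
#define BLOCKOFFSET_MASK UINT64_C(0x3FFFFFFE)     /* low bit cleared: AND does real work */

/* old form: the mask cannot clear anything after the shift, so it can be optimised away */
static inline uint64_t
blockoffset_full_mask(uint64_t hdrmask)
{
    return (hdrmask >> BASEBIT) & MAX_BLOCKOFFSET;
}

/* new form: the low bit overlaps the value field, so the AND must stay */
static inline uint64_t
blockoffset_overlap_mask(uint64_t hdrmask)
{
    return (hdrmask >> BASEBIT) & BLOCKOFFSET_MASK;
}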
{
"msg_contents": "On Sat, 6 Apr 2024, 14:36 David Rowley, <[email protected]> wrote:\n\n> On Sat, 6 Apr 2024 at 02:30, Matthias van de Meent\n> <[email protected]> wrote:\n> >\n> > On Thu, 4 Apr 2024 at 22:43, Tom Lane <[email protected]> wrote:\n> > >\n> > > Matthias van de Meent <[email protected]> writes:\n> > > > It extends memory context IDs to 5 bits (32 values), of which\n> > > > - 8 have glibc's malloc pattern of 001/010;\n> > > > - 1 is unused memory's 00000\n> > > > - 1 is wipe_mem's 11111\n> > > > - 4 are used by existing contexts\n> (Aset/Generation/Slab/AlignedRedirect)\n> > > > - 18 are newly available.\n> > >\n> > > This seems like it would solve the problem for a good long time\n> > > to come; and if we ever need more IDs, we could steal one more bit\n> > > by requiring the offset to the block header to be a multiple of 8.\n> > > (Really, we could just about do that today at little or no cost ...\n> > > machines with MAXALIGN less than 8 are very thin on the ground.)\n> >\n> > Hmm, it seems like a decent idea, but I didn't want to deal with the\n> > repercussions of that this late in the cycle when these 2 bits were\n> > still relatively easy to get hold of.\n>\n> Thanks for writing the patch.\n>\n> I think 5 bits is 1 too many. 4 seems fine. I also think you've\n> reserved too many slots in your patch as I disagree that we need to\n> reserve the glibc malloc pattern anywhere but in the 1 and 2 slots of\n> the mcxt_methods[] array. I looked again at the 8 bytes prior to a\n> glibc malloc'd chunk and I see the lowest 4 bits of the headers\n> consistently set to 0001 for all powers of 2 starting at 8 up to\n> 65536.\n\n\nMalloc's docs specify the minimum chunk size at 4*sizeof(void*) and itself\nuses , so using powers of 2 for chunks would indeed fail to detect 1s in\nthe 4th bit. I suspect you'll get different results when you check the\nallocation patterns of multiples of 8 bytes, starting from 40, especially\non 32-bit arm (where MALLOC_ALIGNMENT is 8 bytes, rather than the 16 bytes\non i386 and 64-bit architectures, assuming [0] is accurate)\n\n131072 seems to vary and beyond that, they seem to be set to\n> 0010.\n>\n\nIn your updated 0001, you don't seem to fill the RESERVED_GLIBC memctx\narray entries with BOGUS_MCTX().\n\nWith that, there's no increase in the number of reserved slots from\n> what we have reserved today. Still 4. So having 4 bits instead of 3\n> bits gives us a total of 12 slots rather than 4 slots. Having 3x\n> slots seems enough. We might need an extra bit for something else\n> sometime. I think keeping it up our sleeve is a good idea.\n>\n> Another reason not to make it 5 bits is that I believe that would make\n> the mcxt_methods[] array 2304 bytes rather than 576 bytes. 4 bits\n> makes it 1152 bytes, if I'm counting correctly.\n>\n\nI don't think I understand why this would be relevant when only 5 of the\ncontexts are actually in use (thus in caches). Is that size concern about\nTLB entries then?\n\n\n> I revised the patch to simplify hdrmask logic. This started with me\n> having trouble finding the best set of words to document that the\n> offset is \"half the bytes between the chunk and block\". So, instead\n> of doing that, I've just made it so these two fields effectively\n> overlap. 
The lowest bit of the block offset is the same bit as the\n> high bit of what MemoryChunkGetValue returns.\n\n\nWorks for me, I suppose.\n\nI also updated src/backend/utils/mmgr/README to explain this and\n> adjust the mentions of 3-bits and 61-bits to 4-bits and 60-bits. I\n> also explained the overlapping part.\n>\n\nThanks!\n\n[0]\nhttps://sourceware.org/glibc/wiki/MallocInternals#Platform-specific_Thresholds_and_Constants",
"msg_date": "Sat, 6 Apr 2024 19:45:31 +0200",
"msg_from": "Matthias van de Meent <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add bump memory context type and use it for tuplesorts"
},
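The bit-pattern observations above are easy to reproduce with a small throwaway program. The sketch below is glibc-specific and peeks at allocator-internal memory (the size/flags word glibc keeps just before the returned pointer), so it is undefined behaviour in general and only useful as an observation aid; the request sizes chosen and the flag bits seen can vary by platform, glibc version, and malloc alignment, which is exactly the point being debated above.

#include <stdio.h>
#include <stdlib.h>

int
main(void)
{
	/* mix powers of two with other multiples of 8, as suggested above */
	size_t		sizes[] = {8, 16, 24, 32, 40, 48, 64, 1024, 65536, 131072};
	size_t		nsizes = sizeof(sizes) / sizeof(sizes[0]);

	for (size_t i = 0; i < nsizes; i++)
	{
		void	   *p = malloc(sizes[i]);

		if (p == NULL)
			return 1;

		/*
		 * glibc's malloc stores the chunk size plus flag bits in the size_t
		 * immediately before the returned pointer; the low 4 bits are what
		 * matter for the method-ID discussion above.
		 */
		printf("request %7zu -> header low bits 0x%lx\n",
			   sizes[i], (unsigned long) (((size_t *) p)[-1] & 0xF));
		free(p);
	}
	return 0;
}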
{
"msg_contents": "On Sun, 7 Apr 2024 at 05:45, Matthias van de Meent\n<[email protected]> wrote:\n> Malloc's docs specify the minimum chunk size at 4*sizeof(void*) and itself uses , so using powers of 2 for chunks would indeed fail to detect 1s in the 4th bit. I suspect you'll get different results when you check the allocation patterns of multiples of 8 bytes, starting from 40, especially on 32-bit arm (where MALLOC_ALIGNMENT is 8 bytes, rather than the 16 bytes on i386 and 64-bit architectures, assuming [0] is accurate)\n\nI'm prepared to be overruled, but I just don't have strong feelings\nthat 32-bit is worth making these reservations for. Especially so\ngiven the rate we're filling these slots. The only system that I see\nthe 4th bit change is Cygwin. It doesn't look like a very easy system\nto protect against pfreeing of malloc'd chunks as the prior 8-bytes\nseem to vary depending on the malloc'd size and I see all bit patterns\nthere, including the ones we use for our memory contexts.\n\nSince we can't protect against every possible bit pattern there, we\nneed to draw the line somewhere. I don't think 32-bit systems are\nworth reserving these precious slots for. I'd hazard a guess that\nthere are more instances of Postgres running on Windows today than on\n32-bit CPUs and we don't seem too worried about the bit-patterns used\nfor Windows.\n\n> In your updated 0001, you don't seem to fill the RESERVED_GLIBC memctx array entries with BOGUS_MCTX().\n\nOops. Thanks\n\n>> Another reason not to make it 5 bits is that I believe that would make\n>> the mcxt_methods[] array 2304 bytes rather than 576 bytes. 4 bits\n>> makes it 1152 bytes, if I'm counting correctly.\n>\n>\n> I don't think I understand why this would be relevant when only 5 of the contexts are actually in use (thus in caches). Is that size concern about TLB entries then?\n\nIt's a static const array. I don't want to bloat the binary with\nsomething we'll likely never need. If we one day need it, we can\nreserve another bit using the same overlapping method.\n\n>> I revised the patch to simplify hdrmask logic. This started with me\n>> having trouble finding the best set of words to document that the\n>> offset is \"half the bytes between the chunk and block\". So, instead\n>> of doing that, I've just made it so these two fields effectively\n>> overlap. The lowest bit of the block offset is the same bit as the\n>> high bit of what MemoryChunkGetValue returns.\n>\n>\n> Works for me, I suppose.\n\nhmm. I don't detect much enthusiasm for it.\n\nPersonally, I quite like the method as it adds no extra instructions\nwhen encoding the MemoryChunk and only a simple bitwise-AND when\ndecoding it. Your method added extra instructions in the encode and\ndecode. I went to great lengths to make this code as fast as\npossible, so I know which method that I prefer. We often palloc and\nnever do anything that requires the chunk header to be decoded, so not\nadding extra instructions on the encoding stage is a big win.\n\nThe only method I see to avoid adding instructions in encoding and\ndecoding is to reduce the bit-space for the MemoryChunkGetValue field\nto 29 bits. Effectively, that means non-external chunks can only be\n512MB rather than 1GB. As far as I know, that just limits slab.c to\nonly being able to do 512MB pallocs as generation.c and aset.c use\nexternal chunks well below that threshold. Restricting slab to 512MB\nis probably far from the end of the world. Anything close to that\nwould be a terrible use case for slab. 
I was just less keen on using\na bit from there as that's a field we allow the context implementation\nto do what they like with. Having bitspace for 2^30 possible values in\nthere just seems nice given that it can store any possible value from\nzero up to MaxAllocSize.\n\nDavid\n\n\n",
"msg_date": "Sun, 7 Apr 2024 11:59:33 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add bump memory context type and use it for tuplesorts"
},
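To make the "reserved slot" idea above concrete, here is a hedged, standalone sketch of the general technique, not the actual mcxt.c code: the low bits of the word stored directly before a chunk index a method table, and the slots that no context type owns (including the 0001/0010 patterns glibc's malloc tends to leave in that word) point at handlers that fail with a clear error instead of crashing. The slot assignments, names, and the 4-bit width here are illustrative only.

#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

#define METHODID_MASK	0x0F		/* 4-bit method ID, as discussed above */

typedef void (*free_fn) (void *chunk);

static void
bogus_free(void *chunk)
{
	fprintf(stderr, "%p is not a valid palloc'd chunk\n", chunk);
	exit(1);
}

static void
aset_free(void *chunk)
{
	printf("chunk %p would be returned to its aset-style context\n", chunk);
}

/*
 * Every one of the 16 slots is filled; the ones no allocator owns get the
 * "bogus" handler, so a stray free of malloc'd (or uninitialised) memory
 * produces an error rather than undefined behaviour.  Slots 1 and 2 stay
 * bogus because glibc malloc headers commonly end in 0001/0010.
 */
static const free_fn method_table[METHODID_MASK + 1] = {
	bogus_free, bogus_free, bogus_free, aset_free,
	bogus_free, bogus_free, bogus_free, bogus_free,
	bogus_free, bogus_free, bogus_free, bogus_free,
	bogus_free, bogus_free, bogus_free, bogus_free,
};

static void
dispatch_free(void *chunk)
{
	uint64_t	hdr = ((uint64_t *) chunk)[-1];	/* header precedes the chunk */

	method_table[hdr & METHODID_MASK] (chunk);
}

int
main(void)
{
	/* fake a chunk whose header names slot 3 ("aset" in this sketch) */
	uint64_t	buf[2] = {3, 0};

	dispatch_free(&buf[1]);
	return 0;
}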
{
"msg_contents": "On Sun, 7 Apr 2024, 01:59 David Rowley, <[email protected]> wrote:\n\n> On Sun, 7 Apr 2024 at 05:45, Matthias van de Meent\n> <[email protected]> wrote:\n> > Malloc's docs specify the minimum chunk size at 4*sizeof(void*) and\n> itself uses , so using powers of 2 for chunks would indeed fail to detect\n> 1s in the 4th bit. I suspect you'll get different results when you check\n> the allocation patterns of multiples of 8 bytes, starting from 40,\n> especially on 32-bit arm (where MALLOC_ALIGNMENT is 8 bytes, rather than\n> the 16 bytes on i386 and 64-bit architectures, assuming [0] is accurate)\n\n\nI'd hazard a guess that\n> there are more instances of Postgres running on Windows today than on\n> 32-bit CPUs and we don't seem too worried about the bit-patterns used\n> for Windows.\n>\n\nYeah, that is something I had some thoughts about too, but didn't check if\nthere was historical context around. I don't think it's worth bothering\nright now though.\n\n>> Another reason not to make it 5 bits is that I believe that would make\n> >> the mcxt_methods[] array 2304 bytes rather than 576 bytes. 4 bits\n> >> makes it 1152 bytes, if I'm counting correctly.\n> >\n> >\n> > I don't think I understand why this would be relevant when only 5 of the\n> contexts are actually in use (thus in caches). Is that size concern about\n> TLB entries then?\n>\n> It's a static const array. I don't want to bloat the binary with\n> something we'll likely never need. If we one day need it, we can\n> reserve another bit using the same overlapping method.\n>\n\nFair points.\n\n>> I revised the patch to simplify hdrmask logic. This started with me\n> >> having trouble finding the best set of words to document that the\n> >> offset is \"half the bytes between the chunk and block\". So, instead\n> >> of doing that, I've just made it so these two fields effectively\n> >> overlap. The lowest bit of the block offset is the same bit as the\n> >> high bit of what MemoryChunkGetValue returns.\n> >\n> >\n> > Works for me, I suppose.\n>\n> hmm. I don't detect much enthusiasm for it.\n>\n\nI had a tiring day leaving me short on enthousiasm, after which I realised\nthere were some things to this patch that would need fixing.\n\nI could've worded this better, but nothing against this code.\n\n-Matthias\n\nOn Sun, 7 Apr 2024, 01:59 David Rowley, <[email protected]> wrote:On Sun, 7 Apr 2024 at 05:45, Matthias van de Meent\n<[email protected]> wrote:\n> Malloc's docs specify the minimum chunk size at 4*sizeof(void*) and itself uses , so using powers of 2 for chunks would indeed fail to detect 1s in the 4th bit. I suspect you'll get different results when you check the allocation patterns of multiples of 8 bytes, starting from 40, especially on 32-bit arm (where MALLOC_ALIGNMENT is 8 bytes, rather than the 16 bytes on i386 and 64-bit architectures, assuming [0] is accurate) I'd hazard a guess that\nthere are more instances of Postgres running on Windows today than on\n32-bit CPUs and we don't seem too worried about the bit-patterns used\nfor Windows.Yeah, that is something I had some thoughts about too, but didn't check if there was historical context around. I don't think it's worth bothering right now though.\n>> Another reason not to make it 5 bits is that I believe that would make\n>> the mcxt_methods[] array 2304 bytes rather than 576 bytes. 
4 bits\n>> makes it 1152 bytes, if I'm counting correctly.\n>\n>\n> I don't think I understand why this would be relevant when only 5 of the contexts are actually in use (thus in caches). Is that size concern about TLB entries then?\n\nIt's a static const array. I don't want to bloat the binary with\nsomething we'll likely never need. If we one day need it, we can\nreserve another bit using the same overlapping method.Fair points.\n>> I revised the patch to simplify hdrmask logic. This started with me\n>> having trouble finding the best set of words to document that the\n>> offset is \"half the bytes between the chunk and block\". So, instead\n>> of doing that, I've just made it so these two fields effectively\n>> overlap. The lowest bit of the block offset is the same bit as the\n>> high bit of what MemoryChunkGetValue returns.\n>\n>\n> Works for me, I suppose.\n\nhmm. I don't detect much enthusiasm for it.I had a tiring day leaving me short on enthousiasm, after which I realised there were some things to this patch that would need fixing.I could've worded this better, but nothing against this code.-Matthias",
"msg_date": "Sun, 7 Apr 2024 02:34:30 +0200",
"msg_from": "Matthias van de Meent <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add bump memory context type and use it for tuplesorts"
},
{
"msg_contents": "On Sat, Apr 6, 2024 at 7:37 PM David Rowley <[email protected]> wrote:\n>\n I'm planning on pushing these, pending a final look at 0002 and 0003\n> on Sunday morning NZ time (UTC+12), likely in about 10 hours time.\n\n+1\n\nI haven't looked at v6, but I've tried using it in situ, and it seems\nto work as well as hoped:\n\nhttps://www.postgresql.org/message-id/CANWCAZZQFfxvzO8yZHFWtQV%2BZ2gAMv1ku16Vu7KWmb5kZQyd1w%40mail.gmail.com\n\n\n",
"msg_date": "Sun, 7 Apr 2024 17:05:36 +0700",
"msg_from": "John Naylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add bump memory context type and use it for tuplesorts"
},
{
"msg_contents": "On Sun, 7 Apr 2024 at 22:05, John Naylor <[email protected]> wrote:\n>\n> On Sat, Apr 6, 2024 at 7:37 PM David Rowley <[email protected]> wrote:\n> >\n> I'm planning on pushing these, pending a final look at 0002 and 0003\n> > on Sunday morning NZ time (UTC+12), likely in about 10 hours time.\n>\n> +1\n\nI've now pushed all 3 patches. Thank you for all the reviews on\nthese and for the extra MemoryContextMethodID bit, Matthias.\n\n> I haven't looked at v6, but I've tried using it in situ, and it seems\n> to work as well as hoped:\n>\n> https://www.postgresql.org/message-id/CANWCAZZQFfxvzO8yZHFWtQV%2BZ2gAMv1ku16Vu7KWmb5kZQyd1w%40mail.gmail.com\n\nI'm already impressed with the radix tree work. Nice to see bump\nallowing a little more memory to be saved for TID storage.\n\nDavid\n\n\n",
"msg_date": "Mon, 8 Apr 2024 00:37:53 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add bump memory context type and use it for tuplesorts"
},
{
"msg_contents": "On 4/7/24 14:37, David Rowley wrote:\n> On Sun, 7 Apr 2024 at 22:05, John Naylor <[email protected]> wrote:\n>>\n>> On Sat, Apr 6, 2024 at 7:37 PM David Rowley <[email protected]> wrote:\n>>>\n>> I'm planning on pushing these, pending a final look at 0002 and 0003\n>>> on Sunday morning NZ time (UTC+12), likely in about 10 hours time.\n>>\n>> +1\n> \n> I've now pushed all 3 patches. Thank you for all the reviews on\n> these and for the extra MemoryContextMethodID bit, Matthias.\n> \n>> I haven't looked at v6, but I've tried using it in situ, and it seems\n>> to work as well as hoped:\n>>\n>> https://www.postgresql.org/message-id/CANWCAZZQFfxvzO8yZHFWtQV%2BZ2gAMv1ku16Vu7KWmb5kZQyd1w%40mail.gmail.com\n> \n> I'm already impressed with the radix tree work. Nice to see bump\n> allowing a little more memory to be saved for TID storage.\n> \n> David\n\nThere seems to be some issue with this on 32-bit machines. A couple\nanimals (grison, mamba) already complained about an assert int\nBumpCheck() during initdb, I get the same crash on my rpi5 running\n32-bit debian - see the backtrace attached.\n\nI haven't investigated, but I'd considering it works on 64-bit, I guess\nit's not considering alignment somewhere. I can dig more if needed.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Sun, 7 Apr 2024 22:35:47 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add bump memory context type and use it for tuplesorts"
},
{
"msg_contents": "\n\n\nOn 4/7/24 22:35, Tomas Vondra wrote:\n> On 4/7/24 14:37, David Rowley wrote:\n>> On Sun, 7 Apr 2024 at 22:05, John Naylor <[email protected]> wrote:\n>>>\n>>> On Sat, Apr 6, 2024 at 7:37 PM David Rowley <[email protected]> wrote:\n>>>>\n>>> I'm planning on pushing these, pending a final look at 0002 and 0003\n>>>> on Sunday morning NZ time (UTC+12), likely in about 10 hours time.\n>>>\n>>> +1\n>>\n>> I've now pushed all 3 patches. Thank you for all the reviews on\n>> these and for the extra MemoryContextMethodID bit, Matthias.\n>>\n>>> I haven't looked at v6, but I've tried using it in situ, and it seems\n>>> to work as well as hoped:\n>>>\n>>> https://www.postgresql.org/message-id/CANWCAZZQFfxvzO8yZHFWtQV%2BZ2gAMv1ku16Vu7KWmb5kZQyd1w%40mail.gmail.com\n>>\n>> I'm already impressed with the radix tree work. Nice to see bump\n>> allowing a little more memory to be saved for TID storage.\n>>\n>> David\n> \n> There seems to be some issue with this on 32-bit machines. A couple\n> animals (grison, mamba) already complained about an assert int\n> BumpCheck() during initdb, I get the same crash on my rpi5 running\n> 32-bit debian - see the backtrace attached.\n> \n> I haven't investigated, but I'd considering it works on 64-bit, I guess\n> it's not considering alignment somewhere. I can dig more if needed.\n> \n\nI did try running it under valgrind, and there doesn't seem to be\nanything particularly wrong - just a bit of noise about calculating CRC\non uninitialized bytes.\n\nregards\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sun, 7 Apr 2024 22:54:23 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add bump memory context type and use it for tuplesorts"
},
{
"msg_contents": "Hi,\n\nOn 2024-04-07 22:35:47 +0200, Tomas Vondra wrote:\n> I haven't investigated, but I'd considering it works on 64-bit, I guess\n> it's not considering alignment somewhere. I can dig more if needed.\n\nI think I may the problem:\n\n\n#define KeeperBlock(set) ((BumpBlock *) ((char *) (set) + sizeof(BumpContext)))\n#define IsKeeperBlock(set, blk) (KeeperBlock(set) == (blk))\n\nBumpContextCreate():\n...\n\t/* Fill in the initial block's block header */\n\tblock = (BumpBlock *) (((char *) set) + MAXALIGN(sizeof(BumpContext)));\n\t/* determine the block size and initialize it */\n\tfirstBlockSize = allocSize - MAXALIGN(sizeof(BumpContext));\n\tBumpBlockInit(set, block, firstBlockSize);\n...\n\t((MemoryContext) set)->mem_allocated = allocSize;\n\nvoid\nBumpCheck(MemoryContext context)\n...\n\t\tif (IsKeeperBlock(bump, block))\n\t\t\ttotal_allocated += block->endptr - (char *) bump;\n...\n\nI suspect that KeeperBlock() isn't returning true, because IsKeeperBlock misses\nthe MAXALIGN(). I think that about fits with:\n\n> #4 0x008f0088 in BumpCheck (context=0x131e330) at bump.c:808\n> 808 Assert(total_allocated == context->mem_allocated);\n> (gdb) p total_allocated\n> $1 = 8120\n> (gdb) p context->mem_allocated\n> $2 = 8192\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 7 Apr 2024 14:09:24 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add bump memory context type and use it for tuplesorts"
},
{
"msg_contents": "\n\nOn 4/7/24 23:09, Andres Freund wrote:\n> Hi,\n> \n> On 2024-04-07 22:35:47 +0200, Tomas Vondra wrote:\n>> I haven't investigated, but I'd considering it works on 64-bit, I guess\n>> it's not considering alignment somewhere. I can dig more if needed.\n> \n> I think I may the problem:\n> \n> \n> #define KeeperBlock(set) ((BumpBlock *) ((char *) (set) + sizeof(BumpContext)))\n> #define IsKeeperBlock(set, blk) (KeeperBlock(set) == (blk))\n> \n> BumpContextCreate():\n> ...\n> \t/* Fill in the initial block's block header */\n> \tblock = (BumpBlock *) (((char *) set) + MAXALIGN(sizeof(BumpContext)));\n> \t/* determine the block size and initialize it */\n> \tfirstBlockSize = allocSize - MAXALIGN(sizeof(BumpContext));\n> \tBumpBlockInit(set, block, firstBlockSize);\n> ...\n> \t((MemoryContext) set)->mem_allocated = allocSize;\n> \n> void\n> BumpCheck(MemoryContext context)\n> ...\n> \t\tif (IsKeeperBlock(bump, block))\n> \t\t\ttotal_allocated += block->endptr - (char *) bump;\n> ...\n> \n> I suspect that KeeperBlock() isn't returning true, because IsKeeperBlock misses\n> the MAXALIGN(). I think that about fits with:\n> \n>> #4 0x008f0088 in BumpCheck (context=0x131e330) at bump.c:808\n>> 808 Assert(total_allocated == context->mem_allocated);\n>> (gdb) p total_allocated\n>> $1 = 8120\n>> (gdb) p context->mem_allocated\n>> $2 = 8192\n> \n\nYup, changing it to this:\n\n#define KeeperBlock(set) ((BumpBlock *) ((char *) (set) +\nMAXALIGN(sizeof(BumpContext))))\n\nfixes the issue for me.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sun, 7 Apr 2024 23:27:18 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add bump memory context type and use it for tuplesorts"
},
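The difference between sizeof() and MAXALIGN(sizeof()) that tripped the Assert above is easy to see in isolation. The following standalone sketch copies the MAXALIGN arithmetic (assuming an 8-byte MAXIMUM_ALIGNOF) and applies it to a stand-in struct that, on a typical 32-bit ABI, has a size that is a multiple of 4 but not of 8; on such a platform the keeper-block address computed with plain sizeof() lands 4 bytes before the one computed with MAXALIGN(sizeof()), which is exactly the mismatch between KeeperBlock() and BumpContextCreate() described above. On most 64-bit ABIs the two values coincide, which is why the problem only surfaced on 32-bit animals. The struct and macros here are illustrative, not the real BumpContext.

#include <stdio.h>
#include <stdint.h>

/* same arithmetic as PostgreSQL's TYPEALIGN/MAXALIGN, assuming 8-byte MAXALIGN */
#define MAXIMUM_ALIGNOF	8
#define TYPEALIGN(a, len)	(((uintptr_t) (len) + ((a) - 1)) & ~((uintptr_t) ((a) - 1)))
#define MAXALIGN(len)		TYPEALIGN(MAXIMUM_ALIGNOF, (len))

/* stand-in for a context header: on 32-bit ABIs pointers are 4 bytes, so
 * sizeof() can come out as a multiple of 4 that is not a multiple of 8 */
typedef struct FakeContextHeader
{
	void	   *methods;
	void	   *firstblock;
	int			flags;
} FakeContextHeader;

int
main(void)
{
	char		set[64];	/* pretend this is the start of the context */

	printf("sizeof             = %zu\n", sizeof(FakeContextHeader));
	printf("MAXALIGN(sizeof)   = %zu\n", (size_t) MAXALIGN(sizeof(FakeContextHeader)));
	printf("keeper via sizeof   = %p\n", (void *) (set + sizeof(FakeContextHeader)));
	printf("keeper via MAXALIGN = %p\n", (void *) (set + MAXALIGN(sizeof(FakeContextHeader))));
	return 0;
}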
{
"msg_contents": "Tomas Vondra <[email protected]> writes:\n> Yup, changing it to this:\n\n> #define KeeperBlock(set) ((BumpBlock *) ((char *) (set) +\n> MAXALIGN(sizeof(BumpContext))))\n\n> fixes the issue for me.\n\nMamba is happy with that change, too.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 07 Apr 2024 18:41:16 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add bump memory context type and use it for tuplesorts"
},
{
"msg_contents": "> On 8 Apr 2024, at 00:41, Tom Lane <[email protected]> wrote:\n> \n> Tomas Vondra <[email protected]> writes:\n>> Yup, changing it to this:\n> \n>> #define KeeperBlock(set) ((BumpBlock *) ((char *) (set) +\n>> MAXALIGN(sizeof(BumpContext))))\n> \n>> fixes the issue for me.\n> \n> Mamba is happy with that change, too.\n\nUnrelated to that one, seems like turaco ran into another issue:\n\nrunning bootstrap script ... TRAP: failed Assert(\"total_allocated == context->mem_allocated\"), File: \"bump.c\", Line: 808, PID: 7809\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=turaco&dt=2024-04-07%2022%3A42%3A54\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Mon, 8 Apr 2024 00:55:42 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add bump memory context type and use it for tuplesorts"
},
{
"msg_contents": "Hi,\n\nOn 2024-04-08 00:55:42 +0200, Daniel Gustafsson wrote:\n> > On 8 Apr 2024, at 00:41, Tom Lane <[email protected]> wrote:\n> > \n> > Tomas Vondra <[email protected]> writes:\n> >> Yup, changing it to this:\n> > \n> >> #define KeeperBlock(set) ((BumpBlock *) ((char *) (set) +\n> >> MAXALIGN(sizeof(BumpContext))))\n> > \n> >> fixes the issue for me.\n> > \n> > Mamba is happy with that change, too.\n> \n> Unrelated to that one, seems like turaco ran into another issue:\n> \n> running bootstrap script ... TRAP: failed Assert(\"total_allocated == context->mem_allocated\"), File: \"bump.c\", Line: 808, PID: 7809\n> \n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=turaco&dt=2024-04-07%2022%3A42%3A54\n\nWhat makes you think that's unrelated? To me that looks like the same issue?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 7 Apr 2024 16:04:56 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add bump memory context type and use it for tuplesorts"
},
{
"msg_contents": "On Mon, 8 Apr 2024 at 09:09, Andres Freund <[email protected]> wrote:\n> I suspect that KeeperBlock() isn't returning true, because IsKeeperBlock misses\n> the MAXALIGN(). I think that about fits with:\n\nThanks for investigating that.\n\nI've just pushed a fix for the macro and also adjusted a location\nwhich was *correctly* calculating the keeper block address manually to\nuse the macro. If I'd used the macro there to start with the Assert\nlikely wouldn't have failed, but there'd have been memory alignment\nissues.\n\nDavid\n\n\n",
"msg_date": "Mon, 8 Apr 2024 11:12:10 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add bump memory context type and use it for tuplesorts"
},
{
"msg_contents": "> On 8 Apr 2024, at 01:04, Andres Freund <[email protected]> wrote:\n\n> What makes you think that's unrelated? To me that looks like the same issue?\n\nNvm, I misread the assert, ETOOLITTLESLEEP. Sorry for the noise.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Mon, 8 Apr 2024 01:14:23 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add bump memory context type and use it for tuplesorts"
},
{
"msg_contents": "Attached is a small patch adding the missing BumpContext description to the\nREADME.\n\nRegards,\nAmul",
"msg_date": "Tue, 16 Apr 2024 10:42:53 +0530",
"msg_from": "Amul Sul <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add bump memory context type and use it for tuplesorts"
},
{
"msg_contents": "> On 16 Apr 2024, at 07:12, Amul Sul <[email protected]> wrote:\n> \n> Attached is a small patch adding the missing BumpContext description to the\n> README.\n\nNice catch, we should add it to the README.\n\n+ pfree'd or realloc'd.\nI think it's best to avoid mixing API:s, \"pfree'd or repalloc'd\" keeps it to\nfunctions in our API instead.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Tue, 16 Apr 2024 10:28:02 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add bump memory context type and use it for tuplesorts"
},
{
"msg_contents": "On Tue, 16 Apr 2024 at 17:13, Amul Sul <[email protected]> wrote:\n> Attached is a small patch adding the missing BumpContext description to the\n> README.\n\nThanks for noticing and working on the patch.\n\nThere were a few things that were not quite accurate or are misleading:\n\n1.\n\n> +These three memory contexts aim to free memory back to the operating system\n\nThat's not true for bump. It's the worst of the 4. Worse than aset.\nIt only returns memory when the context is reset/deleted.\n\n2.\n\n\"These memory contexts were initially developed for ReorderBuffer, but\nmay be useful elsewhere as long as the allocation patterns match.\"\n\nThe above isn't true for bump. It was written for tuplesort. I think\nwe can just remove that part now. Slab and generation are both old\nenough not to care why they were conceived.\n\nAlso since adding bump, I think the choice of which memory context to\nuse is about 33% harder than it used to be when there were only 3\ncontext types. I think this warrants giving more detail on what these\n3 special-purpose memory allocators are good for. I've added more\ndetails in the attached patch. This includes more details about\nfreeing malloc'd blocks\n\nI've tried to detail out enough of the specialities of the context\ntype without going into extensive detail. My hope is that there will\nbe enough detail for someone to choose the most suitable looking one\nand head over to the corresponding .c file to find out more.\n\nIs that about the right level of detail?\n\nDavid",
"msg_date": "Tue, 16 Apr 2024 22:14:13 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add bump memory context type and use it for tuplesorts"
},
{
"msg_contents": "On Mon, 8 Apr 2024 at 00:37, David Rowley <[email protected]> wrote:\n> I've now pushed all 3 patches. Thank you for all the reviews on\n> these and for the extra MemoryContextMethodID bit, Matthias.\n\nI realised earlier today when working on [1] that bump makes a pretty\nbrain-dead move when adding dedicated blocks to the blocks list. The\nproblem is that I opted to not have a current block field in\nBumpContext and just rely on the head pointer of the blocks list to be\nthe \"current\" block. The head block is the block we look at to see if\nwe've any space left when new allocations come in. The problem there\nis when adding a dedicated block in BumpAllocLarge(), the code adds\nthis to the head of the blocks list so that when a new allocation\ncomes in that's normal-sized, the block at the top of the list is full\nand we have to create a new block for the allocation.\n\nThe attached fixes this by pushing these large/dedicated blocks to the\n*tail* of the blocks list. This means the partially filled block\nremains at the head and is available for any new allocation which will\nfit. This behaviour is evident by the regression test change that I\nadded earlier today when working on [1]. The 2nd and smaller\nallocation in that text goes onto the keeper block rather than a new\nblock.\n\nI plan to push this tomorrow.\n\nDavid\n\n[1] https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=bea97cd02ebb347ab469b78673c2b33a72109669",
"msg_date": "Tue, 16 Apr 2024 23:01:28 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add bump memory context type and use it for tuplesorts"
},
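A standalone sketch of the list-ordering point above, using made-up types rather than the real BumpContext: if oversized, dedicated blocks were pushed onto the head of the blocks list, the partially filled block that ordinary allocations consult would no longer be first and the next normal-sized request would force a new block; pushing dedicated blocks onto the tail keeps the current block at the head. This only illustrates the ordering, not the committed code.

#include <stdio.h>

typedef struct Block
{
	struct Block *next;
	const char *name;
} Block;

typedef struct BlockList
{
	Block	   *head;			/* the block new allocations look at first */
	Block	   *tail;
} BlockList;

static void
push_head(BlockList *list, Block *b)
{
	b->next = list->head;
	list->head = b;
	if (list->tail == NULL)
		list->tail = b;
}

static void
push_tail(BlockList *list, Block *b)
{
	b->next = NULL;
	if (list->tail == NULL)
		list->head = b;
	else
		list->tail->next = b;
	list->tail = b;
}

int
main(void)
{
	BlockList	list = {NULL, NULL};
	Block		keeper = {NULL, "keeper (partially filled)"};
	Block		dedicated = {NULL, "dedicated (oversized chunk)"};

	push_head(&list, &keeper);

	/*
	 * push_head(&list, &dedicated) would hide the keeper block and force a
	 * new block for the next normal-sized request; push_tail keeps the
	 * partially filled block current.
	 */
	push_tail(&list, &dedicated);

	printf("block consulted first: %s\n", list.head->name);
	return 0;
}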
{
"msg_contents": "On Tue, Apr 16, 2024 at 3:44 PM David Rowley <[email protected]> wrote:\n\n> On Tue, 16 Apr 2024 at 17:13, Amul Sul <[email protected]> wrote:\n> > Attached is a small patch adding the missing BumpContext description to\n> the\n> > README.\n>\n> Thanks for noticing and working on the patch.\n>\n> There were a few things that were not quite accurate or are misleading:\n>\n> 1.\n>\n> > +These three memory contexts aim to free memory back to the operating\n> system\n>\n> That's not true for bump. It's the worst of the 4. Worse than aset.\n> It only returns memory when the context is reset/deleted.\n>\n> 2.\n>\n> \"These memory contexts were initially developed for ReorderBuffer, but\n> may be useful elsewhere as long as the allocation patterns match.\"\n>\n> The above isn't true for bump. It was written for tuplesort. I think\n> we can just remove that part now. Slab and generation are both old\n> enough not to care why they were conceived.\n>\n> Also since adding bump, I think the choice of which memory context to\n> use is about 33% harder than it used to be when there were only 3\n> context types. I think this warrants giving more detail on what these\n> 3 special-purpose memory allocators are good for. I've added more\n> details in the attached patch. This includes more details about\n> freeing malloc'd blocks\n>\n> I've tried to detail out enough of the specialities of the context\n> type without going into extensive detail. My hope is that there will\n> be enough detail for someone to choose the most suitable looking one\n> and head over to the corresponding .c file to find out more.\n>\n> Is that about the right level of detail?\n>\n\nYes, it looks much better now, thank you.\n\nRegards,\nAmul\n\nOn Tue, Apr 16, 2024 at 3:44 PM David Rowley <[email protected]> wrote:On Tue, 16 Apr 2024 at 17:13, Amul Sul <[email protected]> wrote:\n> Attached is a small patch adding the missing BumpContext description to the\n> README.\n\nThanks for noticing and working on the patch.\n\nThere were a few things that were not quite accurate or are misleading:\n\n1.\n\n> +These three memory contexts aim to free memory back to the operating system\n\nThat's not true for bump. It's the worst of the 4. Worse than aset.\nIt only returns memory when the context is reset/deleted.\n\n2.\n\n\"These memory contexts were initially developed for ReorderBuffer, but\nmay be useful elsewhere as long as the allocation patterns match.\"\n\nThe above isn't true for bump. It was written for tuplesort. I think\nwe can just remove that part now. Slab and generation are both old\nenough not to care why they were conceived.\n\nAlso since adding bump, I think the choice of which memory context to\nuse is about 33% harder than it used to be when there were only 3\ncontext types. I think this warrants giving more detail on what these\n3 special-purpose memory allocators are good for. I've added more\ndetails in the attached patch. This includes more details about\nfreeing malloc'd blocks\n\nI've tried to detail out enough of the specialities of the context\ntype without going into extensive detail. My hope is that there will\nbe enough detail for someone to choose the most suitable looking one\nand head over to the corresponding .c file to find out more.\n\nIs that about the right level of detail?Yes, it looks much better now, thank you.Regards,Amul",
"msg_date": "Tue, 16 Apr 2024 16:34:54 +0530",
"msg_from": "Amul Sul <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add bump memory context type and use it for tuplesorts"
}
] |
[
{
"msg_contents": "Hi,\n\nIn an off-list chat, Robert suggested that it might be a good idea to\nlook more closely into $subject, especially in the context of the\nproject of moving the locking of child tables / partitions to the\nExecInitNode() phase when executing cached generic plans [1].\n\nRobert's point is that a worker's output of initial pruning which\nconsists of the set of child subplans (of a parallel-aware Append or\nMergeAppend) it considers as valid for execution may not be the same\nas the leader's and that of other workers. If that does indeed\nhappen, it may confuse the Append's parallel-execution code, possibly\neven cause crashes, because the ParallelAppendState set up by the\nleader assumes a certain number and identity (?) of\nvalid-for-execution subplans.\n\nSo he suggests that initial pruning should only be done once in the\nleader and the result of that put in the EState for\nExecInitParallelPlan() to serialize to pass down to workers. Workers\nwould simply consume that as-is to set the valid-for-execution child\nsubplans in its copy of AppendState, instead of doing the initial\npruning again. Actually, earlier patches at [1] had implemented that\nmechanism (remembering the result of initial pruning and using it at a\nlater time and place), because the earlier design there was to move\nthe initial pruning on the nodes in a cached generic plan tree from\nExecInitNode() to GetCachedPlan(). The result of initial pruning done\nin the latter would be passed down to and consumed in the former using\nwhat was called PartitionPruneResult nodes.\n\nMaybe that stuff could be resurrected, though I was wondering if the\nrisk of the same initial pruning steps returning different results\nwhen performed repeatedly in *one query lifetime* aren't pretty\nminimal or maybe rather non-existent? I think that's because\nperforming initial pruning steps entails computing constant and/or\nstable expressions and comparing them with an unchanging set of\npartition bound values, with comparison functions whose result is also\npresumed to be stable. Then there's also the step of mapping the\npartition indexes as they appear in the PartitionDesc to the indexes\nof their subplans under Append/MergeAppend using the information\ncontained in PartitionPruneInfo (subplan_map) and the result of\nmapping should be immutable too.\n\nI considered that the comparison functions that\nmatch_clause_to_partition_key() obtains by calling get_opfamily_proc()\nmay in fact not be stable, though that doesn't seem to be a worry at\nleast with the out-of-the-box pg_amproc collection:\n\nselect amproc, p.provolatile from pg_amproc, pg_proc p where amproc =\np.oid and p.provolatile <> 'i';\n amproc | provolatile\n---------------------------+-------------\n date_cmp_timestamptz | s\n timestamp_cmp_timestamptz | s\n timestamptz_cmp_date | s\n timestamptz_cmp_timestamp | s\n pg_catalog.in_range | s\n(5 rows)\n\nIs it possible for a user to add a volatile procedure to pg_amproc?\nIf that's possible, match_clause_to_partition_key() may pick one as a\ncomparison function for pruning, because it doesn't actually check the\nprocedure's provolatile before doing so. I'd hope not, though would\nlike to be sure to support what I wrote above.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n[1] https://commitfest.postgresql.org/43/3478/\n\n\n",
"msg_date": "Tue, 27 Jun 2023 22:22:33 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": true,
"msg_subject": "initial pruning in parallel append"
},
{
"msg_contents": "On Tue, Jun 27, 2023 at 9:23 AM Amit Langote <[email protected]> wrote:\n> Maybe that stuff could be resurrected, though I was wondering if the\n> risk of the same initial pruning steps returning different results\n> when performed repeatedly in *one query lifetime* aren't pretty\n> minimal or maybe rather non-existent? I think that's because\n> performing initial pruning steps entails computing constant and/or\n> stable expressions and comparing them with an unchanging set of\n> partition bound values, with comparison functions whose result is also\n> presumed to be stable. Then there's also the step of mapping the\n> partition indexes as they appear in the PartitionDesc to the indexes\n> of their subplans under Append/MergeAppend using the information\n> contained in PartitionPruneInfo (subplan_map) and the result of\n> mapping should be immutable too.\n>\n> I considered that the comparison functions that\n> match_clause_to_partition_key() obtains by calling get_opfamily_proc()\n> may in fact not be stable, though that doesn't seem to be a worry at\n> least with the out-of-the-box pg_amproc collection:\n\nI think it could be acceptable if a stable function not actually being\nstable results in some kind of internal error message, hopefully one\nthat in some way hints to the user what the problem was. But crashing\nbecause some expression was supposed to be stable and wasn't is a\nbridge too far, especially if, as I think would be the case here, the\ncrash happens in a part of the code that is far removed from where the\nproblem was introduced.\n\nThe real issue here is about how much trust you can place in a given\ninvariant. If, in a single function, we initialize a value to 0 and\nthereafter only ever increment it, we can logically reason that if we\never see a value less than zero, there must have been an overflow. It\nis true that if our function calls some other function, that other\nfunction could access data through a garbage pointer and possibly\ncorrupt the value of our function's local variable, but that's\nextremely unlikely, and we can basically decide that we're not going\nto are about it, because such code is likely to crash anyway before\ntoo long.\n\nBut now consider an invariant that implicates a larger amount of code\ne.g. you must always hold a buffer pin before accessing the buffer\ncontents. In many cases, it's fairly easy to verify that this must be\nso in any given piece of code, but there are problems: some code that\ndoes buffer access is complicated enough that it's hard to fully\nverify, especially when buffer pins are held across long periods, and\nwhat is probably worse, there are tons of different places in the code\nthat access buffers. Hence, we've had bugs in this area, and likely\nwill have bugs in this area again. In theory, with a sufficient amount\nof really careful work, you can find all of the problems, but in\npractice it's pretty difficult. Nonetheless, we just have to just\naccept the risk that we're going to crash if a bug in this area does\nexist, because there's no real way to cope with the contents of the\nbuffer that you're accessing being swapped out while you're in the\nmiddle of looking at it, or even modifying it.\n\nBut the present case is different in a couple of ways. First, there's\nprobably even more code involved, including a good bit of it that's\nnot in core but is user-defined. 
Second, we've generally made a\ndecision up until now that we don't want to have a hard dependency on\nstable functions actually being stable. If they aren't, and for\nexample you're using index expressions, your queries may return wrong\nanswers, but you won't get weird internal error messages, and you\nwon't get a crash. I think the bar for this feature is the same.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 7 Aug 2023 09:21:12 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: initial pruning in parallel append"
},
{
"msg_contents": "Robert Haas <[email protected]> writes:\n> ... Second, we've generally made a\n> decision up until now that we don't want to have a hard dependency on\n> stable functions actually being stable. If they aren't, and for\n> example you're using index expressions, your queries may return wrong\n> answers, but you won't get weird internal error messages, and you\n> won't get a crash. I think the bar for this feature is the same.\n\nYeah, I agree --- wrong answers may be acceptable in such a case, but\ncrashes or unintelligible error messages aren't. There are practical\nreasons for being tolerant here, notably that it's not that easy\nfor users to get their volatility markings right.\n\nIn the case at hand, I think that means that allowing workers to do\npruning would require hardening the parallel append code against the\nsituation where their pruning results vary. Maybe, instead of passing\nthe pruning results *down*, we could pass them *up* to the leader and\nthen throw an error if they're different?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 07 Aug 2023 09:29:29 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: initial pruning in parallel append"
},
{
"msg_contents": "On Mon, Aug 7, 2023 at 22:29 Tom Lane <[email protected]> wrote:\n\n> Robert Haas <[email protected]> writes:\n> > ... Second, we've generally made a\n> > decision up until now that we don't want to have a hard dependency on\n> > stable functions actually being stable. If they aren't, and for\n> > example you're using index expressions, your queries may return wrong\n> > answers, but you won't get weird internal error messages, and you\n> > won't get a crash. I think the bar for this feature is the same.\n>\n> Yeah, I agree --- wrong answers may be acceptable in such a case, but\n> crashes or unintelligible error messages aren't. There are practical\n> reasons for being tolerant here, notably that it's not that easy\n> for users to get their volatility markings right.\n>\n> In the case at hand, I think that means that allowing workers to do\n> pruning would require hardening the parallel append code against the\n> situation where their pruning results vary. Maybe, instead of passing\n> the pruning results *down*, we could pass them *up* to the leader and\n> then throw an error if they're different?\n\n\nNote we’re talking here about “initial” pruning that occurs during\nExecInitNode(). Workers are only launched during ExecGather[Merge]() which\nthereafter do ExecInitNode() on their copy of the the plan tree. So if we\nare to pass the pruning results for cross-checking, it will have to be from\nthe leader to workers.\n\n> --\nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\nOn Mon, Aug 7, 2023 at 22:29 Tom Lane <[email protected]> wrote:Robert Haas <[email protected]> writes:\n> ... Second, we've generally made a\n> decision up until now that we don't want to have a hard dependency on\n> stable functions actually being stable. If they aren't, and for\n> example you're using index expressions, your queries may return wrong\n> answers, but you won't get weird internal error messages, and you\n> won't get a crash. I think the bar for this feature is the same.\n\nYeah, I agree --- wrong answers may be acceptable in such a case, but\ncrashes or unintelligible error messages aren't. There are practical\nreasons for being tolerant here, notably that it's not that easy\nfor users to get their volatility markings right.\n\nIn the case at hand, I think that means that allowing workers to do\npruning would require hardening the parallel append code against the\nsituation where their pruning results vary. Maybe, instead of passing\nthe pruning results *down*, we could pass them *up* to the leader and\nthen throw an error if they're different?Note we’re talking here about “initial” pruning that occurs during ExecInitNode(). Workers are only launched during ExecGather[Merge]() which thereafter do ExecInitNode() on their copy of the the plan tree. So if we are to pass the pruning results for cross-checking, it will have to be from the leader to workers.-- Thanks, Amit LangoteEDB: http://www.enterprisedb.com",
"msg_date": "Mon, 7 Aug 2023 23:25:30 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: initial pruning in parallel append"
},
{
"msg_contents": "On Mon, Aug 7, 2023 at 10:25 AM Amit Langote <[email protected]> wrote:\n> Note we’re talking here about “initial” pruning that occurs during ExecInitNode(). Workers are only launched during ExecGather[Merge]() which thereafter do ExecInitNode() on their copy of the the plan tree. So if we are to pass the pruning results for cross-checking, it will have to be from the leader to workers.\n\nThat doesn't seem like a big problem because there aren't many node\ntypes that do pruning, right? I think we're just talking about Append\nand MergeAppend, or something like that, right? You just need the\nExecWhateverEstimate function to budget some DSM space to store the\ninformation, which can basically just be a bitmap over the set of\nchild plans, and the ExecWhateverInitializeDSM copies the information\ninto that DSM space, and ExecWhateverInitializeWorker() copies the\ninformation from the shared space back into the local node (or maybe\njust points to it, if the representation is sufficiently compatible).\nI feel like this is an hour or two's worth of coding, unless I'm\nmissing something, and WAY easier than trying to reason about what\nhappens if expression evaluation isn't as stable as we'd like it to\nbe.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 7 Aug 2023 11:52:51 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: initial pruning in parallel append"
},
{
"msg_contents": "On Tue, Aug 8, 2023 at 12:53 AM Robert Haas <[email protected]> wrote:\n> On Mon, Aug 7, 2023 at 10:25 AM Amit Langote <[email protected]> wrote:\n> > Note we’re talking here about “initial” pruning that occurs during ExecInitNode(). Workers are only launched during ExecGather[Merge]() which thereafter do ExecInitNode() on their copy of the the plan tree. So if we are to pass the pruning results for cross-checking, it will have to be from the leader to workers.\n>\n> That doesn't seem like a big problem because there aren't many node\n> types that do pruning, right? I think we're just talking about Append\n> and MergeAppend, or something like that, right?\n\nMergeAppend can't be parallel-aware atm, so only Append.\n\n> You just need the\n> ExecWhateverEstimate function to budget some DSM space to store the\n> information, which can basically just be a bitmap over the set of\n> child plans, and the ExecWhateverInitializeDSM copies the information\n> into that DSM space, and ExecWhateverInitializeWorker() copies the\n> information from the shared space back into the local node (or maybe\n> just points to it, if the representation is sufficiently compatible).\n> I feel like this is an hour or two's worth of coding, unless I'm\n> missing something, and WAY easier than trying to reason about what\n> happens if expression evaluation isn't as stable as we'd like it to\n> be.\n\nOK, I agree that we'd better share the pruning result between the\nleader and workers.\n\nI hadn't thought about putting the pruning result into Append's DSM\n(ParallelAppendState), which is what you're describing IIUC. I looked\ninto it, though I'm not sure if it can be made to work given the way\nthings are on the worker side, or at least not without some\nreshuffling of code in ParallelQueryMain(). The pruning result will\nhave to be available in ExecInitAppend, but because the worker reads\nthe DSM only after finishing the plan tree initialization, it won't.\nPerhaps, we can integrate ExecParallelInitializeWorker()'s\nresponsibilities into ExecutorStart() / ExecInitNode() somehow?\n\nSo change the ordering of the following code in ParallelQueryMain():\n\n /* Start up the executor */\n queryDesc->plannedstmt->jitFlags = fpes->jit_flags;\n ExecutorStart(queryDesc, fpes->eflags);\n\n /* Special executor initialization steps for parallel workers */\n queryDesc->planstate->state->es_query_dsa = area;\n if (DsaPointerIsValid(fpes->param_exec))\n {\n char *paramexec_space;\n\n paramexec_space = dsa_get_address(area, fpes->param_exec);\n RestoreParamExecParams(paramexec_space, queryDesc->estate);\n }\n pwcxt.toc = toc;\n pwcxt.seg = seg;\n ExecParallelInitializeWorker(queryDesc->planstate, &pwcxt);\n\nLooking inside ExecParallelInitializeWorker():\n\nstatic bool\nExecParallelInitializeWorker(PlanState *planstate, ParallelWorkerContext *pwcxt)\n{\n if (planstate == NULL)\n return false;\n\n switch (nodeTag(planstate))\n {\n case T_SeqScanState:\n if (planstate->plan->parallel_aware)\n ExecSeqScanInitializeWorker((SeqScanState *) planstate, pwcxt);\n\nI guess that'd mean putting the if (planstate->plan->parallel_aware)\nblock seen here at the end of ExecInitSeqScan() and so on.\n\nOr we could consider something like the patch I mentioned in my 1st\nemail. The idea there was to pass the pruning result via a separate\nchannel, not the DSM chunk linked into the PlanState tree. 
To wit, on\nthe leader side, ExecInitParallelPlan() puts the serialized\nList-of-Bitmapset into the shm_toc with a dedicated PARALLEL_KEY,\nalongside PlannedStmt, ParamListInfo, etc. The List-of-Bitmpaset is\ninitialized during the leader's ExecInitNode(). On the worker side,\nExecParallelGetQueryDesc() reads the List-of-Bitmapset string and puts\nthe resulting node into the QueryDesc, that ParallelQueryMain() then\nuses to do ExecutorStart() which copies the pointer to\nEState.es_part_prune_results. ExecInitAppend() consults\nEState.es_part_prune_results and uses the Bitmapset from there, if\npresent, instead of performing initial pruning. I'm assuming it's not\ntoo ugly if ExecInitAppend() uses IsParallelWorker() to decide whether\nit should be writing to EState.es_part_prune_results or reading from\nit -- the former if in the leader and the latter in a worker. If we\nare to go with this approach we will need to un-revert ec386948948c,\nwhich moved PartitionPruneInfo nodes out of Append/MergeAppend nodes\nto a List in PlannedStmt (copied into EState.es_part_prune_infos),\nsuch that es_part_prune_results mirrors es_part_prune_infos.\n\nThoughts?\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 8 Aug 2023 15:58:02 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: initial pruning in parallel append"
},
{
"msg_contents": "On Tue, Aug 8, 2023 at 2:58 AM Amit Langote <[email protected]> wrote:\n> > That doesn't seem like a big problem because there aren't many node\n> > types that do pruning, right? I think we're just talking about Append\n> > and MergeAppend, or something like that, right?\n>\n> MergeAppend can't be parallel-aware atm, so only Append.\n\nWell, the question isn't whether it's parallel-aware, but whether\nstartup-time pruning happens there.\n\n> So change the ordering of the following code in ParallelQueryMain():\n\nYeah, that would be a reasonable thing to do.\n\n> Or we could consider something like the patch I mentioned in my 1st\n> email. The idea there was to pass the pruning result via a separate\n> channel, not the DSM chunk linked into the PlanState tree. To wit, on\n> the leader side, ExecInitParallelPlan() puts the serialized\n> List-of-Bitmapset into the shm_toc with a dedicated PARALLEL_KEY,\n> alongside PlannedStmt, ParamListInfo, etc. The List-of-Bitmpaset is\n> initialized during the leader's ExecInitNode(). On the worker side,\n> ExecParallelGetQueryDesc() reads the List-of-Bitmapset string and puts\n> the resulting node into the QueryDesc, that ParallelQueryMain() then\n> uses to do ExecutorStart() which copies the pointer to\n> EState.es_part_prune_results. ExecInitAppend() consults\n> EState.es_part_prune_results and uses the Bitmapset from there, if\n> present, instead of performing initial pruning.\n\nThis also seems reasonable.\n\n> I'm assuming it's not\n> too ugly if ExecInitAppend() uses IsParallelWorker() to decide whether\n> it should be writing to EState.es_part_prune_results or reading from\n> it -- the former if in the leader and the latter in a worker.\n\nI don't think that's too ugly. I mean you have to have an if statement\nsomeplace.\n\n> If we\n> are to go with this approach we will need to un-revert ec386948948c,\n> which moved PartitionPruneInfo nodes out of Append/MergeAppend nodes\n> to a List in PlannedStmt (copied into EState.es_part_prune_infos),\n> such that es_part_prune_results mirrors es_part_prune_infos.\n\nThe comment for the revert (which was\n5472743d9e8583638a897b47558066167cc14583) points to\nhttps://www.postgresql.org/message-id/[email protected]\nas the reason, but it's not very clear to me why that email led to\nthis being reverted. In any event, I agree that if we go with your\nidea to pass this via a separate PARALLEL_KEY, unreverting that patch\nseems to make sense, because otherwise I think we don't have a fast\nway to find the nodes that contain the state that we care about.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 8 Aug 2023 10:16:07 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: initial pruning in parallel append"
},
{
"msg_contents": "On Tue, Aug 8, 2023 at 11:16 PM Robert Haas <[email protected]> wrote:\n> On Tue, Aug 8, 2023 at 2:58 AM Amit Langote <[email protected]> wrote:\n> > Or we could consider something like the patch I mentioned in my 1st\n> > email. The idea there was to pass the pruning result via a separate\n> > channel, not the DSM chunk linked into the PlanState tree. To wit, on\n> > the leader side, ExecInitParallelPlan() puts the serialized\n> > List-of-Bitmapset into the shm_toc with a dedicated PARALLEL_KEY,\n> > alongside PlannedStmt, ParamListInfo, etc. The List-of-Bitmpaset is\n> > initialized during the leader's ExecInitNode(). On the worker side,\n> > ExecParallelGetQueryDesc() reads the List-of-Bitmapset string and puts\n> > the resulting node into the QueryDesc, that ParallelQueryMain() then\n> > uses to do ExecutorStart() which copies the pointer to\n> > EState.es_part_prune_results. ExecInitAppend() consults\n> > EState.es_part_prune_results and uses the Bitmapset from there, if\n> > present, instead of performing initial pruning.\n>\n> This also seems reasonable.\n>\n> > I'm assuming it's not\n> > too ugly if ExecInitAppend() uses IsParallelWorker() to decide whether\n> > it should be writing to EState.es_part_prune_results or reading from\n> > it -- the former if in the leader and the latter in a worker.\n>\n> I don't think that's too ugly. I mean you have to have an if statement\n> someplace.\n\nYes, that makes sense.\n\nIt's just that I thought maybe I haven't thought hard enough about\noptions before adding a new IsParallelWorker(), because I don't find\ntoo many instances of IsParallelWorker() in the generic executor code.\nI think that's because most parallel worker-specific logic lives in\nexecParallel.c or in Exec*Worker() functions outside that file, so the\ngeneric code remains parallel query agnostic as much as possible.\n\n> > If we\n> > are to go with this approach we will need to un-revert ec386948948c,\n> > which moved PartitionPruneInfo nodes out of Append/MergeAppend nodes\n> > to a List in PlannedStmt (copied into EState.es_part_prune_infos),\n> > such that es_part_prune_results mirrors es_part_prune_infos.\n>\n> The comment for the revert (which was\n> 5472743d9e8583638a897b47558066167cc14583) points to\n> https://www.postgresql.org/message-id/[email protected]\n> as the reason, but it's not very clear to me why that email led to\n> this being reverted. In any event, I agree that if we go with your\n> idea to pass this via a separate PARALLEL_KEY, unreverting that patch\n> seems to make sense, because otherwise I think we don't have a fast\n> way to find the nodes that contain the state that we care about.\n\nOK, I've attached the unreverted patch that adds\nEState.es_part_prune_infos as 0001.\n\n0002 adds EState.es_part_prune_results. Parallel query leader stores\nthe bitmapset of initially valid subplans by performing initial\npruning steps contained in a given PartitionPruneInfo into that list\nat the same index as the PartitionPruneInfo's index in\nes_part_prune_infos. ExecInitParallelPlan() serializes\nes_part_prune_results and stores it in the DSM. 
A worker initializes\nes_part_prune_results in its own EState by reading the leader's value\nfrom the DSM and for each PartitionPruneInfo in its own copy of\nEState.es_part_prune_infos, gets the set of initially valid subplans\nby referring to es_part_prune_results in lieu of performing initial\npruning again.\n\nShould workers, as Tom says, instead do the pruning and cross-check\nthe result to give an error if it doesn't match the leader's? The\nerror message can't specifically point out which, though a user would\nat least know that they have functions in their database with wrong\nvolatility markings.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com",
"msg_date": "Wed, 9 Aug 2023 19:22:39 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: initial pruning in parallel append"
},
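As a rough illustration of the "store it in the DSM and let workers read it back" idea being discussed, here is a hedged sketch in the style of execParallel.c. It is not the attached patch: the PARALLEL_KEY_PRUNE_RESULT constant and the function names are made up, only a single Append's result is handled, the bitmapset is flattened into a plain int array rather than serialized as a node, and the matching shm_toc_estimate_chunk()/shm_toc_estimate_keys() calls in the estimate phase are omitted.

#include "postgres.h"

#include "access/parallel.h"
#include "nodes/bitmapset.h"
#include "storage/shm_toc.h"

/* hypothetical key; real keys live in execParallel.c's PARALLEL_KEY_* list */
#define PARALLEL_KEY_PRUNE_RESULT	UINT64CONST(0xE000000000000100)

/*
 * Leader side: flatten the set of valid subplan indexes (computed once by
 * initial pruning during ExecInitAppend) into the toc so workers need not
 * repeat the pruning.
 */
static void
prune_result_store(ParallelContext *pcxt, Bitmapset *validsubplans, int nsubplans)
{
	int		   *space;
	int			i = -1;
	int			n = 0;

	space = shm_toc_allocate(pcxt->toc, sizeof(int) * (nsubplans + 1));
	while ((i = bms_next_member(validsubplans, i)) >= 0)
		space[++n] = i;
	space[0] = n;

	shm_toc_insert(pcxt->toc, PARALLEL_KEY_PRUNE_RESULT, space);
}

/*
 * Worker side: rebuild the bitmapset from the leader's copy instead of
 * performing initial pruning again, so every process agrees on which
 * subplans are valid for execution.
 */
static Bitmapset *
prune_result_restore(shm_toc *toc)
{
	int		   *space = shm_toc_lookup(toc, PARALLEL_KEY_PRUNE_RESULT, false);
	Bitmapset  *validsubplans = NULL;

	for (int n = 1; n <= space[0]; n++)
		validsubplans = bms_add_member(validsubplans, space[n]);

	return validsubplans;
}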
{
"msg_contents": "On Wed, Aug 9, 2023 at 6:22 AM Amit Langote <[email protected]> wrote:\n> > > I'm assuming it's not\n> > > too ugly if ExecInitAppend() uses IsParallelWorker() to decide whether\n> > > it should be writing to EState.es_part_prune_results or reading from\n> > > it -- the former if in the leader and the latter in a worker.\n> >\n> > I don't think that's too ugly. I mean you have to have an if statement\n> > someplace.\n>\n> Yes, that makes sense.\n>\n> It's just that I thought maybe I haven't thought hard enough about\n> options before adding a new IsParallelWorker(), because I don't find\n> too many instances of IsParallelWorker() in the generic executor code.\n> I think that's because most parallel worker-specific logic lives in\n> execParallel.c or in Exec*Worker() functions outside that file, so the\n> generic code remains parallel query agnostic as much as possible.\n\nOh, actually, there is an issue here. IsParallelWorker() is not the\nright test. Imagine that there's a parallel query which launches some\nworkers, and one of those calls a user-defined function which again\nuses parallelism, launching more workers. This may not be possible\ntoday, I don't really remember, but the test should be \"am I a\nparallel worker with respect to this plan?\" not \"am I a parallel\nworker at all?\".Not quite sure what the best way to code that is. If\nwe could just test whether we have a ParallelWorkerContext, it would\nbe easy...\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 9 Aug 2023 08:48:27 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: initial pruning in parallel append"
},
{
"msg_contents": "On Wed, Aug 9, 2023 at 9:48 PM Robert Haas <[email protected]> wrote:\n> On Wed, Aug 9, 2023 at 6:22 AM Amit Langote <[email protected]> wrote:\n> > > > I'm assuming it's not\n> > > > too ugly if ExecInitAppend() uses IsParallelWorker() to decide whether\n> > > > it should be writing to EState.es_part_prune_results or reading from\n> > > > it -- the former if in the leader and the latter in a worker.\n> > >\n> > > I don't think that's too ugly. I mean you have to have an if statement\n> > > someplace.\n> >\n> > Yes, that makes sense.\n> >\n> > It's just that I thought maybe I haven't thought hard enough about\n> > options before adding a new IsParallelWorker(), because I don't find\n> > too many instances of IsParallelWorker() in the generic executor code.\n> > I think that's because most parallel worker-specific logic lives in\n> > execParallel.c or in Exec*Worker() functions outside that file, so the\n> > generic code remains parallel query agnostic as much as possible.\n>\n> Oh, actually, there is an issue here. IsParallelWorker() is not the\n> right test. Imagine that there's a parallel query which launches some\n> workers, and one of those calls a user-defined function which again\n> uses parallelism, launching more workers. This may not be possible\n> today, I don't really remember, but the test should be \"am I a\n> parallel worker with respect to this plan?\" not \"am I a parallel\n> worker at all?\".Not quite sure what the best way to code that is. If\n> we could just test whether we have a ParallelWorkerContext, it would\n> be easy...\n\nI checked enough to be sure that IsParallelWorker() is reliable at the\ntime of ExecutorStart() / ExecInitNode() in ParallelQueryMain() in a\nworker. However, ParallelWorkerContext is not available at that\npoint. Here's the relevant part of ParallelQueryMain():\n\n /* Start up the executor */\n queryDesc->plannedstmt->jitFlags = fpes->jit_flags;\n ExecutorStart(queryDesc, fpes->eflags);\n\n /* Special executor initialization steps for parallel workers */\n queryDesc->planstate->state->es_query_dsa = area;\n if (DsaPointerIsValid(fpes->param_exec))\n {\n char *paramexec_space;\n\n paramexec_space = dsa_get_address(area, fpes->param_exec);\n RestoreParamExecParams(paramexec_space, queryDesc->estate);\n }\n pwcxt.toc = toc;\n pwcxt.seg = seg;\n ExecParallelInitializeWorker(queryDesc->planstate, &pwcxt);\n\nBTW, we do also use IsParallelWorker() in ExecGetRangeTableRelation()\nwhich also probably only runs during ExecInitNode(), same as\nExecInitPartitionPruning() that this patch adds it to.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 9 Aug 2023 21:56:56 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: initial pruning in parallel append"
},
{
"msg_contents": "On Wed, Aug 9, 2023 at 8:57 AM Amit Langote <[email protected]> wrote:\n> I checked enough to be sure that IsParallelWorker() is reliable at the\n> time of ExecutorStart() / ExecInitNode() in ParallelQueryMain() in a\n> worker. However, ParallelWorkerContext is not available at that\n> point. Here's the relevant part of ParallelQueryMain():\n>\n> /* Start up the executor */\n> queryDesc->plannedstmt->jitFlags = fpes->jit_flags;\n> ExecutorStart(queryDesc, fpes->eflags);\n>\n> /* Special executor initialization steps for parallel workers */\n> queryDesc->planstate->state->es_query_dsa = area;\n> if (DsaPointerIsValid(fpes->param_exec))\n> {\n> char *paramexec_space;\n>\n> paramexec_space = dsa_get_address(area, fpes->param_exec);\n> RestoreParamExecParams(paramexec_space, queryDesc->estate);\n> }\n> pwcxt.toc = toc;\n> pwcxt.seg = seg;\n> ExecParallelInitializeWorker(queryDesc->planstate, &pwcxt);\n>\n> BTW, we do also use IsParallelWorker() in ExecGetRangeTableRelation()\n> which also probably only runs during ExecInitNode(), same as\n> ExecInitPartitionPruning() that this patch adds it to.\n\nI don't know if that's a great idea, but I guess if we're already\ndoing it, it doesn't hurt to expand the use a little bit.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 15 Aug 2023 13:08:36 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: initial pruning in parallel append"
}
] |
[
{
"msg_contents": "Hi,\r\n\r\nPlease see the attached draft of the PostgreSQL 16 Beta 2 release \r\nannouncement.\r\n\r\nI used the open items list[1] to build the draft. If there are any \r\nnotable please omissions, please let me know.\r\n\r\nPlease leave all feedback by June 29, 0:00 AoE.\r\n\r\nThanks,\r\n\r\nJonathan\r\n\r\n[1] https://wiki.postgresql.org/wiki/PostgreSQL_16_Open_Items",
"msg_date": "Tue, 27 Jun 2023 10:32:27 -0400",
"msg_from": "\"Jonathan S. Katz\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "PostgreSQL 16 Beta 2 release announcement draft"
},
{
"msg_contents": "On Tue, Jun 27, 2023 at 8:32 AM Jonathan S. Katz <[email protected]> wrote:\n>\n> I used the open items list[1] to build the draft. If there are any\n> notable please omissions, please let me know.\n\nI noticed that ldap_password_hook [1] was omitted from the release\nnotes. I believe it should be included if nothing else so that it's\nwritten somewhere that it's there. AFAIK there's no other\ndocumentation about it.\n\nAndrew Dunstan and John Naylor can add comments here, but a paragraph\nlike this could be added:\n\n\"A hook for modifying the ldapbind password was added to libpq. The\nhook can be installed by a shared_preload library. This will allow\nusers who have to work with LDAP authentication to create their own\nmethods of dealing with ldap bind passwords. An example is provided at\ntest/modules/ldap_password_func/ldap_password_func.c\"\n\nThanks,\n\nRoberto\n\n[1] https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=419a8dd8142afef790dafd91ba39afac2ca48aaf\n\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=src/test/modules/ldap_password_func/ldap_password_func.c;h=4d980d28b1ef3e37da365ebbd4ca998f4786b827;hb=419a8dd8142afef790dafd91ba39afac2ca48aaf\n\n\n",
"msg_date": "Tue, 27 Jun 2023 10:54:28 -0600",
"msg_from": "Roberto Mello <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 16 Beta 2 release announcement draft"
},
{
"msg_contents": "On 6/27/23 12:54 PM, Roberto Mello wrote:\r\n> On Tue, Jun 27, 2023 at 8:32 AM Jonathan S. Katz <[email protected]> wrote:\r\n>>\r\n>> I used the open items list[1] to build the draft. If there are any\r\n>> notable please omissions, please let me know.\r\n> \r\n> I noticed that ldap_password_hook [1] was omitted from the release\r\n> notes. I believe it should be included if nothing else so that it's\r\n> written somewhere that it's there. AFAIK there's no other\r\n> documentation about it.\r\n\r\nWas this discussed on the release notes thread?[1]. It can always be \r\nadded to the release notes -- those aren't finalized until GA.\r\n\r\nAfter Beta 1, the announcements are either about new feature \r\nadditions/removals since the last beta release, or bug fixes. I don't \r\nthink it makes sense to include this here.\r\n\r\nThanks,\r\n\r\nJonathan\r\n\r\n[1] https://www.postgresql.org/message-id/ZGaPa7M3gc2THeDJ%40momjian.us",
"msg_date": "Tue, 27 Jun 2023 13:40:35 -0400",
"msg_from": "\"Jonathan S. Katz\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL 16 Beta 2 release announcement draft"
},
{
"msg_contents": "On Tue, Jun 27, 2023 at 11:40 AM Jonathan S. Katz <[email protected]> wrote:\n>\n> Was this discussed on the release notes thread?[1]. It can always be\n> added to the release notes -- those aren't finalized until GA.\n>\n> After Beta 1, the announcements are either about new feature\n> additions/removals since the last beta release, or bug fixes. I don't\n> think it makes sense to include this here.\n\nIt wasn't. I'll add it to that thread.\n\nThank you,\n\nRoberto\n\n\n",
"msg_date": "Tue, 27 Jun 2023 15:44:28 -0600",
"msg_from": "Roberto Mello <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 16 Beta 2 release announcement draft"
}
] |
[
{
"msg_contents": "This is a patch which implements an issue discussed in bug #17946[0]. It\ndoesn't fix the overarching issue of the bug, but merely a consistency\nissue which was found while analyzing code by Heikki. I had originally\nsubmitted the patch within that thread, but for visibility and the\npurposes of the commitfest, I have re-sent it in its own thread.\n\n[0]: https://www.postgresql.org/message-id/[email protected]\n\n-- \nTristan Partin\nNeon (https://neon.tech)",
"msg_date": "Tue, 27 Jun 2023 10:02:05 -0500",
"msg_from": "\"Tristan Partin\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Make uselocale protection more consistent"
},
{
"msg_contents": "On 27.06.23 17:02, Tristan Partin wrote:\n> This is a patch which implements an issue discussed in bug #17946[0]. It\n> doesn't fix the overarching issue of the bug, but merely a consistency\n> issue which was found while analyzing code by Heikki. I had originally\n> submitted the patch within that thread, but for visibility and the\n> purposes of the commitfest, I have re-sent it in its own thread.\n> \n> [0]: https://www.postgresql.org/message-id/[email protected]\n\nI notice that HAVE_USELOCALE was introduced much later than \nHAVE_LOCALE_T, and at the time the code was already using uselocale(), \nso perhaps the introduction of HAVE_USELOCALE was unnecessary and should \nbe reverted.\n\nI think it would be better to keep HAVE_LOCALE_T as encompassing any of \nthe various locale_t-using functions, rather than using HAVE_USELOCALE \nas a proxy for them. Otherwise you create weird situations like having \n#ifdef HAVE_WCSTOMBS_L inside #ifdef HAVE_USELOCALE, which doesn't make \nsense, I think.\n\n\n\n",
"msg_date": "Mon, 3 Jul 2023 08:13:40 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Make uselocale protection more consistent"
},
{
"msg_contents": "On Mon, Jul 3, 2023 at 6:13 PM Peter Eisentraut <[email protected]> wrote:\n> On 27.06.23 17:02, Tristan Partin wrote:\n> > This is a patch which implements an issue discussed in bug #17946[0]. It\n> > doesn't fix the overarching issue of the bug, but merely a consistency\n> > issue which was found while analyzing code by Heikki. I had originally\n> > submitted the patch within that thread, but for visibility and the\n> > purposes of the commitfest, I have re-sent it in its own thread.\n> >\n> > [0]: https://www.postgresql.org/message-id/[email protected]\n>\n> I notice that HAVE_USELOCALE was introduced much later than\n> HAVE_LOCALE_T, and at the time the code was already using uselocale(),\n> so perhaps the introduction of HAVE_USELOCALE was unnecessary and should\n> be reverted.\n>\n> I think it would be better to keep HAVE_LOCALE_T as encompassing any of\n> the various locale_t-using functions, rather than using HAVE_USELOCALE\n> as a proxy for them. Otherwise you create weird situations like having\n> #ifdef HAVE_WCSTOMBS_L inside #ifdef HAVE_USELOCALE, which doesn't make\n> sense, I think.\n\nI propose[1] that we get rid of HAVE_LOCALE_T completely and make\n\"libc\" provider support unconditional. It's standardised, and every\ntarget system has it, even Windows. But Windows doesn't have\nuselocale().\n\n[1] https://www.postgresql.org/message-id/flat/CA%2BhUKGL7CmmzeRhoirzjECmOdABVFTn8fo6gEOaFRF1Oxey6Hw%40mail.gmail.com#aef2f2274b28ff8a36f9b8a598e3cec0\n\n\n",
"msg_date": "Mon, 3 Jul 2023 18:24:02 +1200",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Make uselocale protection more consistent"
},
{
"msg_contents": "On Mon Jul 3, 2023 at 1:24 AM CDT, Thomas Munro wrote:\n> On Mon, Jul 3, 2023 at 6:13 PM Peter Eisentraut <[email protected]> wrote:\n> > On 27.06.23 17:02, Tristan Partin wrote:\n> > > This is a patch which implements an issue discussed in bug #17946[0]. It\n> > > doesn't fix the overarching issue of the bug, but merely a consistency\n> > > issue which was found while analyzing code by Heikki. I had originally\n> > > submitted the patch within that thread, but for visibility and the\n> > > purposes of the commitfest, I have re-sent it in its own thread.\n> > >\n> > > [0]: https://www.postgresql.org/message-id/[email protected]\n> >\n> > I notice that HAVE_USELOCALE was introduced much later than\n> > HAVE_LOCALE_T, and at the time the code was already using uselocale(),\n> > so perhaps the introduction of HAVE_USELOCALE was unnecessary and should\n> > be reverted.\n> >\n> > I think it would be better to keep HAVE_LOCALE_T as encompassing any of\n> > the various locale_t-using functions, rather than using HAVE_USELOCALE\n> > as a proxy for them. Otherwise you create weird situations like having\n> > #ifdef HAVE_WCSTOMBS_L inside #ifdef HAVE_USELOCALE, which doesn't make\n> > sense, I think.\n>\n> I propose[1] that we get rid of HAVE_LOCALE_T completely and make\n> \"libc\" provider support unconditional. It's standardised, and every\n> target system has it, even Windows. But Windows doesn't have\n> uselocale().\n>\n> [1] https://www.postgresql.org/message-id/flat/CA%2BhUKGL7CmmzeRhoirzjECmOdABVFTn8fo6gEOaFRF1Oxey6Hw%40mail.gmail.com#aef2f2274b28ff8a36f9b8a598e3cec0\n\nI think keeping HAVE_USELOCALE is important for the Windows case as\nmentioned. I need it for my localization work where I am ripping out\nsetlocale() on non-Windows.\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Mon, 03 Jul 2023 08:21:50 -0500",
"msg_from": "\"Tristan Partin\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Make uselocale protection more consistent"
},
{
"msg_contents": "On 03.07.23 08:24, Thomas Munro wrote:\n> I propose[1] that we get rid of HAVE_LOCALE_T completely and make\n> \"libc\" provider support unconditional. It's standardised, and every\n> target system has it, even Windows. But Windows doesn't have\n> uselocale().\n> \n> [1] https://www.postgresql.org/message-id/flat/CA%2BhUKGL7CmmzeRhoirzjECmOdABVFTn8fo6gEOaFRF1Oxey6Hw%40mail.gmail.com#aef2f2274b28ff8a36f9b8a598e3cec0\n\nOk, it appears your patch is imminent, so let's wait for that and see if \nthis patch here is still required afterwards.\n\n\n\n",
"msg_date": "Mon, 3 Jul 2023 16:15:18 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Make uselocale protection more consistent"
},
{
"msg_contents": "On 03.07.23 15:21, Tristan Partin wrote:\n>>> I think it would be better to keep HAVE_LOCALE_T as encompassing any of\n>>> the various locale_t-using functions, rather than using HAVE_USELOCALE\n>>> as a proxy for them. Otherwise you create weird situations like having\n>>> #ifdef HAVE_WCSTOMBS_L inside #ifdef HAVE_USELOCALE, which doesn't make\n>>> sense, I think.\n>> I propose[1] that we get rid of HAVE_LOCALE_T completely and make\n>> \"libc\" provider support unconditional. It's standardised, and every\n>> target system has it, even Windows. But Windows doesn't have\n>> uselocale().\n>>\n>> [1]https://www.postgresql.org/message-id/flat/CA%2BhUKGL7CmmzeRhoirzjECmOdABVFTn8fo6gEOaFRF1Oxey6Hw%40mail.gmail.com#aef2f2274b28ff8a36f9b8a598e3cec0\n> I think keeping HAVE_USELOCALE is important for the Windows case as\n> mentioned. I need it for my localization work where I am ripping out\n> setlocale() on non-Windows.\n\nThe current code is structured\n\n#ifdef HAVE_LOCALE_T\n#ifdef HAVE_WCSTOMBS_L\n wcstombs_l(...);\n#else\n uselocale(...);\n#endif\n#else\n elog(ERROR);\n#endif\n\nIf you just replace HAVE_LOCALE_T with HAVE_USELOCALE, then this would \npenalize a platform that has wcstombs_l(), but not uselocale(). I think \nthe correct structure would be\n\n#if defined(HAVE_WCSTOMBS_L)\n wcstombs_l(...);\n#elif defined(HAVE_USELOCALE)\n uselocale(...);\n#else\n elog(ERROR);\n#endif\n\n\n\n",
"msg_date": "Mon, 3 Jul 2023 16:21:04 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Make uselocale protection more consistent"
},
{
"msg_contents": "On Mon Jul 3, 2023 at 9:21 AM CDT, Peter Eisentraut wrote:\n> On 03.07.23 15:21, Tristan Partin wrote:\n> >>> I think it would be better to keep HAVE_LOCALE_T as encompassing any of\n> >>> the various locale_t-using functions, rather than using HAVE_USELOCALE\n> >>> as a proxy for them. Otherwise you create weird situations like having\n> >>> #ifdef HAVE_WCSTOMBS_L inside #ifdef HAVE_USELOCALE, which doesn't make\n> >>> sense, I think.\n> >> I propose[1] that we get rid of HAVE_LOCALE_T completely and make\n> >> \"libc\" provider support unconditional. It's standardised, and every\n> >> target system has it, even Windows. But Windows doesn't have\n> >> uselocale().\n> >>\n> >> [1]https://www.postgresql.org/message-id/flat/CA%2BhUKGL7CmmzeRhoirzjECmOdABVFTn8fo6gEOaFRF1Oxey6Hw%40mail.gmail.com#aef2f2274b28ff8a36f9b8a598e3cec0\n> > I think keeping HAVE_USELOCALE is important for the Windows case as\n> > mentioned. I need it for my localization work where I am ripping out\n> > setlocale() on non-Windows.\n>\n> The current code is structured\n>\n> #ifdef HAVE_LOCALE_T\n> #ifdef HAVE_WCSTOMBS_L\n> wcstombs_l(...);\n> #else\n> uselocale(...);\n> #endif\n> #else\n> elog(ERROR);\n> #endif\n>\n> If you just replace HAVE_LOCALE_T with HAVE_USELOCALE, then this would \n> penalize a platform that has wcstombs_l(), but not uselocale(). I think \n> the correct structure would be\n>\n> #if defined(HAVE_WCSTOMBS_L)\n> wcstombs_l(...);\n> #elif defined(HAVE_USELOCALE)\n> uselocale(...);\n> #else\n> elog(ERROR);\n> #endif\n\nThat makes sense to me. I gave it some more thought. Maybe it makes more\nsense to just completely drop HAVE_USELOCALE as mentioned, and protect\ncalls to it with #ifdef WIN32 or whatever the macro is. HAVE_USELOCALE\nmight be more descriptive, but I don't really care that much either way.\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Mon, 03 Jul 2023 09:49:21 -0500",
"msg_from": "\"Tristan Partin\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Make uselocale protection more consistent"
}
] |
[
{
"msg_contents": "Hi,\n\n>I finished writing the code patch for transformation \"Or\" expressions to\n>\"Any\" expressions. I didn't see any problems in regression tests, even\n>when I changed the constant at which the minimum or expression is\n>replaced by any at 0. I ran my patch on sqlancer and so far the code has\n>never fallen.\nThanks for working on this.\n\nI took the liberty of making some modifications to the patch.\nI didn't compile or test it.\nPlease feel free to use them.\n\nregards,\nRanier Vilela",
"msg_date": "Tue, 27 Jun 2023 15:55:23 -0300",
"msg_from": "Ranier Vilela <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: POC, WIP: OR-clause support for indexes"
},
{
"msg_contents": "On 6/27/23 20:55, Ranier Vilela wrote:\n> Hi,\n> \n>>I finished writing the code patch for transformation \"Or\" expressions to\n>>\"Any\" expressions. I didn't see any problems in regression tests, even\n>>when I changed the constant at which the minimum or expression is\n>>replaced by any at 0. I ran my patch on sqlancer and so far the code has\n>>never fallen.\n> Thanks for working on this.\n> \n> I took the liberty of making some modifications to the patch.\n> I didn't compile or test it.\n> Please feel free to use them.\n> \n\nI don't want to be rude, but this doesn't seem very helpful.\n\n- You made some changes, but you don't even attempt to explain what you\nchanged or why you changed it.\n\n- You haven't even tried to compile the code, nor tested it. If it\nhappens to compile, wow could others even know it actually behaves the\nway you wanted?\n\n- You responded in a way that breaks the original thread, so it's not\nclear which message you're responding to.\n\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 28 Jun 2023 23:45:36 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: POC, WIP: OR-clause support for indexes"
},
{
"msg_contents": "Em qua., 28 de jun. de 2023 às 18:45, Tomas Vondra <\[email protected]> escreveu:\n\n> On 6/27/23 20:55, Ranier Vilela wrote:\n> > Hi,\n> >\n> >>I finished writing the code patch for transformation \"Or\" expressions to\n> >>\"Any\" expressions. I didn't see any problems in regression tests, even\n> >>when I changed the constant at which the minimum or expression is\n> >>replaced by any at 0. I ran my patch on sqlancer and so far the code has\n> >>never fallen.\n> > Thanks for working on this.\n> >\n> > I took the liberty of making some modifications to the patch.\n> > I didn't compile or test it.\n> > Please feel free to use them.\n> >\n>\n> I don't want to be rude, but this doesn't seem very helpful.\n>\nSorry, It was not my intention to cause interruptions.\n\n\n> - You made some changes, but you don't even attempt to explain what you\n> changed or why you changed it.\n>\n1. Reduce scope\n2. Eliminate unnecessary variables\n3. Eliminate unnecessary expressions\n\n\n>\n> - You haven't even tried to compile the code, nor tested it. If it\n> happens to compile, wow could others even know it actually behaves the\n> way you wanted?\n>\nAttached v2 with make check pass all tests.\nUbuntu 64 bits\ngcc 64 bits\n\n\n> - You responded in a way that breaks the original thread, so it's not\n> clear which message you're responding to.\n>\nIt was a pretty busy day.\n\nSorry for the noise, I hope I was of some help.\n\nregards,\nRanier Vilela\n\nP.S.\n0001-Replace-clause-X-N1-OR-X-N2-.-with-X-ANY-N1-N2-on.patch fails with 4\ntests.",
"msg_date": "Wed, 28 Jun 2023 22:36:48 -0300",
"msg_from": "Ranier Vilela <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: POC, WIP: OR-clause support for indexes"
},
{
"msg_contents": "Hi!\n\nOn 29.06.2023 04:36, Ranier Vilela wrote:\n>\n> I don't want to be rude, but this doesn't seem very helpful.\n>\n> Sorry, It was not my intention to cause interruptions.\n>\n>\n> - You made some changes, but you don't even attempt to explain\n> what you\n> changed or why you changed it.\n>\n> 1. Reduce scope\n> 2. Eliminate unnecessary variables\n> 3. Eliminate unnecessary expressions\n>\n>\n> - You haven't even tried to compile the code, nor tested it. If it\n> happens to compile, wow could others even know it actually behaves the\n> way you wanted?\n>\nSorry I didn't answer right away. I will try not to do this in the \nfuture thank you for your participation and help.\n\nYes, the scope of this patch may be small, but I am sure that it will \nsolve the worst case of memory consumption with large numbers of \"or\" \nexpressions or reduce execution and planning time. As I have already \nsaid, I conducted a launch on a database with 20 billion data generated \nusing a benchmark. Unfortunately, at that time I sent a not quite \ncorrect picture: the execution time, not the planning time, increases \nwith the number of \"or\" expressions (execution_time.png). x is the \nnumber of or expressions, y is the execution/scheduling time.\n\nI also throw memory consumption at 50,000 \"or\" expressions collected by \nHeapTrack (where memory consumption was recorded already at the \ninitialization stage of the 1.27GB pic3.png). I think such a \ntransformation will allow just the same to avoid such a worst case, \nsince in comparison with ANY memory is much less and takes little time.\n\nSELECT FORMAT('prepare x %s AS SELECT * FROM pgbench_accounts a WHERE %s',\n '(' || string_agg('int',',') ||')',\n string_agg(FORMAT('aid = $%s', g.id),' or ')\n ) AS cmd\n FROM generate_series(1, 50000) AS g(id)\n\\gexec\n\nSELECT FORMAT('execute x %s;','(' || string_agg(g.id::text,',') ||')') AS cmd\n FROM generate_series(1, 50000) AS g(id)\n\\gexec\n\nI tried to add a transformation at the path formation stage before we \nform indexes (set_plain_rel_pathlist function) and at the stage when we \nhave preprocessing of \"or\" expressions (getting rid of duplicates or \nuseless conditions), but everywhere there was a problem of incorrect \nselectivity estimation.\n\nCREATE TABLE tenk1 (unique1int, unique2int, tenint, hundredint);\ninsert into tenk1 SELECT x,x,x,x FROM generate_series(1,50000) as x;\nCREATE INDEX a_idx1 ON tenk1(unique1);\nCREATE INDEX a_idx2 ON tenk1(unique2);\nCREATE INDEX a_hundred ON tenk1(hundred);\n\npostgres=# explain analyze\nselect * from tenk1 a join tenk1 b on\n ((a.unique2 = 3 or a.unique2 = 7)) or (a.unique1 = 1);\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=0.00..140632434.50 rows=11250150000 width=32) (actual time=0.077..373.279 rows=1350000 loops=1)\n -> Seq Scan on tenk1 b (cost=0.00..2311.00 rows=150000 width=16) (actual time=0.037..13.941 rows=150000 loops=1)\n -> Materialize (cost=0.00..3436.01 rows=75001 width=16) (actual time=0.000..0.001 rows=9 loops=150000)\n -> Seq Scan on tenk1 a (cost=0.00..3061.00 rows=75001 width=16) (actual time=0.027..59.174 rows=9 loops=1)\n Filter: ((unique2 = ANY (ARRAY[3, 7])) OR (unique1 = 1))\n Rows Removed by Filter: 149991\n Planning Time: 0.438 ms\n Execution Time: 407.144 ms\n(8 rows)\n\nOnly by converting the expression at this stage, we do not encounter \nthis problem.\n\npostgres=# set enable_bitmapscan 
='off';\nSET\npostgres=# explain analyze\nselect * from tenk1 a join tenk1 b on\n a.unique2 = 3 or a.unique2 = 7 or a.unique1 = 1;\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=0.00..22247.02 rows=1350000 width=32) (actual time=0.094..373.627 rows=1350000 loops=1)\n -> Seq Scan on tenk1 b (cost=0.00..2311.00 rows=150000 width=16) (actual time=0.051..14.667 rows=150000 loops=1)\n -> Materialize (cost=0.00..3061.05 rows=9 width=16) (actual time=0.000..0.001 rows=9 loops=150000)\n -> Seq Scan on tenk1 a (cost=0.00..3061.00 rows=9 width=16) (actual time=0.026..42.389 rows=9 loops=1)\n Filter: ((unique2 = ANY ('{3,7}'::integer[])) OR (unique1 = 1))\n Rows Removed by Filter: 149991\n Planning Time: 0.414 ms\n Execution Time: 409.154 ms\n(8 rows)\n\nI compiled my original patch and there were no problems with regression \ntests. The only time there was a problem when I set the \nconst_transform_or_limit variable to 0 (I have 15), as you have in the \npatch. To be honest, diff appears there because you had a different \nplan, specifically the expressions \"or\" are replaced by ANY (see \nregression.diffs).\nUnfortunately, your patch version did not apply immediately, I did not \nunderstand the reasons, I applied it manually.\nAt the moment, I'm not sure that the constant is the right number for \napplying transformations, so I'm in search of it, to be honest. I will \npost my observations on this issue later. If you don't mind, I'll leave \nthe constant equal to 15 for now.\n\nSorry, I don't understand well enough what is meant by points \"Eliminate \nunnecessary variables\" and \"Eliminate unnecessary expressions\". Can you \nexplain in more detail?\n\n\nRegarding the patch, there was a Warning at the compilation stage.\n\nIn file included from ../../../src/include/nodes/bitmapset.h:21,\n\n from ../../../src/include/nodes/parsenodes.h:26,\n\n from ../../../src/include/catalog/objectaddress.h:17,\n\n from ../../../src/include/catalog/pg_aggregate.h:24,\n\n from parse_expr.c:18:\n\nparse_expr.c: In function ‘transformBoolExprOr’:\n\n../../../src/include/nodes/nodes.h:133:66: warning: ‘expr’ is used uninitialized [-Wuninitialized]\n\n 133 | #define nodeTag(nodeptr) (((const Node*)(nodeptr))->type)\n\n | ^~\n\nparse_expr.c:116:29: note: ‘expr’ was declared here\n\n 116 | BoolExpr *expr;\n\n | ^~~~\n\nI couldn't figure out how to fix it and went back to my original \nversion. To be honest, I don't think anything needs to be changed here.\n\nUnfortunately, I didn't understand the reasons why, with the available \nor expressions, you don't even try to convert to ANY by calling \ntransformBoolExpr, as I saw. I went back to my version.\n\nI think it's worth checking whether the or_statement variable is positive.\n\nI think it's worth leaving the use of the or_statement variable in its \noriginal form.\n\n switch (expr->boolop)\n {\n case AND_EXPR:\n opname = \"AND\";\n break;\n case OR_EXPR:\n opname = \"OR\";\n or_statement = true;\n break;\n case NOT_EXPR:\n opname = \"NOT\";\n break;\n default:\n elog(ERROR, \"unrecognized boolop: %d\", (int) expr->boolop);\n opname = NULL; /* keep compiler quiet */\n break;\n }\n\n if (!or_statement || list_length(expr->args) < const_transform_or_limit)\n return transformBoolExpr(pstate, (BoolExpr *)expr_orig);\n\nThe current version of the patch also works and all tests pass.\n\n-- \nRegards,\nAlena Rybakina\nPostgres Professional",
"msg_date": "Thu, 29 Jun 2023 08:50:21 +0300",
"msg_from": "Alena Rybakina <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: POC, WIP: OR-clause support for indexes"
},
{
"msg_contents": "Sorry for the possible duplicate. I have a suspicion that the previous \nemail was not sent.\n\nHi!\n\nOn 29.06.2023 04:36, Ranier Vilela wrote:\n> Em qua., 28 de jun. de 2023 às 18:45, Tomas Vondra \n> <[email protected]> escreveu:\n>\n> On 6/27/23 20:55, Ranier Vilela wrote:\n> > Hi,\n> >\n> >>I finished writing the code patch for transformation \"Or\"\n> expressions to\n> >>\"Any\" expressions. I didn't see any problems in regression\n> tests, even\n> >>when I changed the constant at which the minimum or expression is\n> >>replaced by any at 0. I ran my patch on sqlancer and so far the\n> code has\n> >>never fallen.\n> > Thanks for working on this.\n> >\n> > I took the liberty of making some modifications to the patch.\n> > I didn't compile or test it.\n> > Please feel free to use them.\n> >\n>\n> I don't want to be rude, but this doesn't seem very helpful.\n>\n> Sorry, It was not my intention to cause interruptions.\n>\n>\n> - You made some changes, but you don't even attempt to explain\n> what you\n> changed or why you changed it.\n>\n> 1. Reduce scope\n> 2. Eliminate unnecessary variables\n> 3. Eliminate unnecessary expressions\n>\n>\n> - You haven't even tried to compile the code, nor tested it. If it\n> happens to compile, wow could others even know it actually behaves the\n> way you wanted?\n>\n> Attached v2 with make check pass all tests.\n> Ubuntu 64 bits\n> gcc 64 bits\n>\n>\n> - You responded in a way that breaks the original thread, so it's not\n> clear which message you're responding to.\n>\n> It was a pretty busy day.\n>\n> Sorry for the noise, I hope I was of some help.\n>\n> regards,\n> Ranier Vilela\n>\n> P.S.\n> 0001-Replace-clause-X-N1-OR-X-N2-.-with-X-ANY-N1-N2-on.patch fails \n> with 4 tests.\n\nSorry I didn't answer right away. I will try not to do this in the \nfuture thank you for your participation and help.\n\nYes, the scope of this patch may be small, but I am sure that it will \nsolve the worst case of memory consumption with large numbers of \"or\" \nexpressions or reduce execution and planning time. As I have already \nsaid, I conducted a launch on a database with 20 billion data generated \nusing a benchmark. Unfortunately, at that time I sent a not quite \ncorrect picture: the execution time, not the planning time, increases \nwith the number of \"or\" expressions \n(https://www.dropbox.com/s/u7gt81blbv2adpi/execution_time.png?dl=0). x \nis the number of or expressions, y is the execution/scheduling time.\n\nI also throw memory consumption at 50,000 \"or\" expressions collected by \nHeapTrack (where memory consumption was recorded already at the \ninitialization stage of the 1.27GB \nhttps://www.dropbox.com/s/vb827ya0193dlz0/pic3.png?dl=0). 
I think such a \ntransformation will allow just the same to avoid such a worst case, \nsince in comparison with ANY memory is much less and takes little time.\n\nSELECT FORMAT('prepare x %s AS SELECT * FROM pgbench_accounts a WHERE %s',\n '(' || string_agg('int',',') ||')',\n string_agg(FORMAT('aid = $%s', g.id),' or ')\n ) AS cmd\n FROM generate_series(1, 50000) AS g(id)\n\\gexec\n\nSELECT FORMAT('execute x %s;','(' || string_agg(g.id::text,',') ||')') AS cmd\n FROM generate_series(1, 50000) AS g(id)\n\\gexec\n\nI tried to add a transformation at the path formation stage before we \nform indexes (set_plain_rel_pathlist function) and at the stage when we \nhave preprocessing of \"or\" expressions (getting rid of duplicates or \nuseless conditions), but everywhere there was a problem of incorrect \nselectivity estimation.\n\nCREATE TABLE tenk1 (unique1int, unique2int, tenint, hundredint);\ninsert into tenk1 SELECT x,x,x,x FROM generate_series(1,50000) as x;\nCREATE INDEX a_idx1 ON tenk1(unique1);\nCREATE INDEX a_idx2 ON tenk1(unique2);\nCREATE INDEX a_hundred ON tenk1(hundred);\n\npostgres=# explain analyze\nselect * from tenk1 a join tenk1 b on\n ((a.unique2 = 3 or a.unique2 = 7)) or (a.unique1 = 1);\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=0.00..140632434.50 rows=11250150000 width=32) (actual time=0.077..373.279 rows=1350000 loops=1)\n -> Seq Scan on tenk1 b (cost=0.00..2311.00 rows=150000 width=16) (actual time=0.037..13.941 rows=150000 loops=1)\n -> Materialize (cost=0.00..3436.01 rows=75001 width=16) (actual time=0.000..0.001 rows=9 loops=150000)\n -> Seq Scan on tenk1 a (cost=0.00..3061.00 rows=75001 width=16) (actual time=0.027..59.174 rows=9 loops=1)\n Filter: ((unique2 = ANY (ARRAY[3, 7])) OR (unique1 = 1))\n Rows Removed by Filter: 149991\n Planning Time: 0.438 ms\n Execution Time: 407.144 ms\n(8 rows)\n\nOnly by converting the expression at this stage, we do not encounter \nthis problem.\n\npostgres=# set enable_bitmapscan ='off';\nSET\npostgres=# explain analyze\nselect * from tenk1 a join tenk1 b on\n a.unique2 = 3 or a.unique2 = 7 or a.unique1 = 1;\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=0.00..22247.02 rows=1350000 width=32) (actual time=0.094..373.627 rows=1350000 loops=1)\n -> Seq Scan on tenk1 b (cost=0.00..2311.00 rows=150000 width=16) (actual time=0.051..14.667 rows=150000 loops=1)\n -> Materialize (cost=0.00..3061.05 rows=9 width=16) (actual time=0.000..0.001 rows=9 loops=150000)\n -> Seq Scan on tenk1 a (cost=0.00..3061.00 rows=9 width=16) (actual time=0.026..42.389 rows=9 loops=1)\n Filter: ((unique2 = ANY ('{3,7}'::integer[])) OR (unique1 = 1))\n Rows Removed by Filter: 149991\n Planning Time: 0.414 ms\n Execution Time: 409.154 ms\n(8 rows)\n\nI compiled my original patch and there were no problems with regression \ntests. The only time there was a problem when I set the \nconst_transform_or_limit variable to 0 (I have 15), as you have in the \npatch. 
To be honest, diff appears there because you had a different \nplan, specifically the expressions \"or\" are replaced by ANY (see \nregression.diffs).\nUnfortunately, your patch version did not apply immediately, I did not \nunderstand the reasons, I applied it manually.\nAt the moment, I'm not sure that the constant is the right number for \napplying transformations, so I'm in search of it, to be honest. I will \npost my observations on this issue later. If you don't mind, I'll leave \nthe constant equal to 15 for now.\n\nSorry, I don't understand well enough what is meant by points \"Eliminate \nunnecessary variables\" and \"Eliminate unnecessary expressions\". Can you \nexplain in more detail?\n\n\nRegarding the patch, there was a Warning at the compilation stage.\n\nIn file included from ../../../src/include/nodes/bitmapset.h:21,\n\n from ../../../src/include/nodes/parsenodes.h:26,\n\n from ../../../src/include/catalog/objectaddress.h:17,\n\n from ../../../src/include/catalog/pg_aggregate.h:24,\n\n from parse_expr.c:18:\n\nparse_expr.c: In function ‘transformBoolExprOr’:\n\n../../../src/include/nodes/nodes.h:133:66: warning: ‘expr’ is used uninitialized [-Wuninitialized]\n\n 133 | #define nodeTag(nodeptr) (((const Node*)(nodeptr))->type)\n\n | ^~\n\nparse_expr.c:116:29: note: ‘expr’ was declared here\n\n 116 | BoolExpr *expr;\n\n | ^~~~\n\nI couldn't figure out how to fix it and went back to my original \nversion. To be honest, I don't think anything needs to be changed here.\n\nUnfortunately, I didn't understand the reasons why, with the available \nor expressions, you don't even try to convert to ANY by calling \ntransformBoolExpr, as I saw. I went back to my version.\n\nI think it's worth checking whether the or_statement variable is positive.\n\nI think it's worth leaving the use of the or_statement variable in its \noriginal form.\n\n switch (expr->boolop)\n {\n case AND_EXPR:\n opname = \"AND\";\n break;\n case OR_EXPR:\n opname = \"OR\";\n or_statement = true;\n break;\n case NOT_EXPR:\n opname = \"NOT\";\n break;\n default:\n elog(ERROR, \"unrecognized boolop: %d\", (int) expr->boolop);\n opname = NULL; /* keep compiler quiet */\n break;\n }\n\n if (!or_statement || list_length(expr->args) < const_transform_or_limit)\n return transformBoolExpr(pstate, (BoolExpr *)expr_orig);\n\nThe current version of the patch also works and all tests pass.\n\n-- \nRegards,\nAlena Rybakina\nPostgres Professional",
"msg_date": "Thu, 29 Jun 2023 09:10:42 +0300",
"msg_from": "Alena Rybakina <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: POC, WIP: OR-clause support for indexes"
},
{
"msg_contents": "Em qui., 29 de jun. de 2023 às 02:50, Alena Rybakina <\[email protected]> escreveu:\n\n> Hi!\n> On 29.06.2023 04:36, Ranier Vilela wrote:\n>\n> I don't want to be rude, but this doesn't seem very helpful.\n>>\n> Sorry, It was not my intention to cause interruptions.\n>\n>\n>> - You made some changes, but you don't even attempt to explain what you\n>> changed or why you changed it.\n>>\n> 1. Reduce scope\n> 2. Eliminate unnecessary variables\n> 3. Eliminate unnecessary expressions\n>\n>\n>>\n>> - You haven't even tried to compile the code, nor tested it. If it\n>> happens to compile, wow could others even know it actually behaves the\n>> way you wanted?\n>\n> Sorry I didn't answer right away. I will try not to do this in the future\n> thank you for your participation and help.\n>\nThere's no need to apologize.\n\n\n> Yes, the scope of this patch may be small, but I am sure that it will\n> solve the worst case of memory consumption with large numbers of \"or\"\n> expressions or reduce execution and planning time.\n>\nYeah, I also believe it will help performance.\n\n> As I have already said, I conducted a launch on a database with 20 billion\n> data generated using a benchmark. Unfortunately, at that time I sent a not\n> quite correct picture: the execution time, not the planning time, increases\n> with the number of \"or\" expressions (execution_time.png). x is the number\n> of or expressions, y is the execution/scheduling time.I also throw memory\n> consumption at 50,000 \"or\" expressions collected by HeapTrack (where memory\n> consumption was recorded already at the initialization stage of the 1.27GB\n> pic3.png). I think such a transformation will allow just the same to avoid\n> such a worst case, since in comparison with ANY memory is much less and\n> takes little time.\n>\nSELECT FORMAT('prepare x %s AS SELECT * FROM pgbench_accounts a WHERE %s',\n> '(' || string_agg('int', ',') || ')',\n> string_agg(FORMAT('aid = $%s', g.id), ' or ')\n> ) AS cmd\n> FROM generate_series(1, 50000) AS g(id)\n> \\gexec\n>\n> SELECT FORMAT('execute x %s;', '(' || string_agg(g.id::text, ',') || ')') AS cmd\n> FROM generate_series(1, 50000) AS g(id)\n> \\gexec\n>\n> I tried to add a transformation at the path formation stage before we form\n> indexes (set_plain_rel_pathlist function) and at the stage when we have\n> preprocessing of \"or\" expressions (getting rid of duplicates or useless\n> conditions), but everywhere there was a problem of incorrect selectivity\n> estimation.\n>\n> CREATE TABLE tenk1 (unique1 int, unique2 int, ten int, hundred int);\n> insert into tenk1 SELECT x,x,x,x FROM generate_series(1,50000) as x;\n> CREATE INDEX a_idx1 ON tenk1(unique1);\n> CREATE INDEX a_idx2 ON tenk1(unique2);\n> CREATE INDEX a_hundred ON tenk1(hundred);\n>\n> postgres=# explain analyze\n> select * from tenk1 a join tenk1 b on\n> ((a.unique2 = 3 or a.unique2 = 7)) or (a.unique1 = 1);\n> QUERY PLAN\n> ----------------------------------------------------------------------------------------------------------------------\n> Nested Loop (cost=0.00..140632434.50 rows=11250150000 width=32) (actual time=0.077..373.279 rows=1350000 loops=1)\n> -> Seq Scan on tenk1 b (cost=0.00..2311.00 rows=150000 width=16) (actual time=0.037..13.941 rows=150000 loops=1)\n> -> Materialize (cost=0.00..3436.01 rows=75001 width=16) (actual time=0.000..0.001 rows=9 loops=150000)\n> -> Seq Scan on tenk1 a (cost=0.00..3061.00 rows=75001 width=16) (actual time=0.027..59.174 rows=9 loops=1)\n> Filter: ((unique2 = ANY (ARRAY[3, 
7])) OR (unique1 = 1))\n> Rows Removed by Filter: 149991\n> Planning Time: 0.438 ms\n> Execution Time: 407.144 ms\n> (8 rows)\n>\n> Only by converting the expression at this stage, we do not encounter this\n> problem.\n>\n> postgres=# set enable_bitmapscan ='off';\n> SET\n> postgres=# explain analyze\n> select * from tenk1 a join tenk1 b on\n> a.unique2 = 3 or a.unique2 = 7 or a.unique1 = 1;\n> QUERY PLAN\n> ----------------------------------------------------------------------------------------------------------------------\n> Nested Loop (cost=0.00..22247.02 rows=1350000 width=32) (actual time=0.094..373.627 rows=1350000 loops=1)\n> -> Seq Scan on tenk1 b (cost=0.00..2311.00 rows=150000 width=16) (actual time=0.051..14.667 rows=150000 loops=1)\n> -> Materialize (cost=0.00..3061.05 rows=9 width=16) (actual time=0.000..0.001 rows=9 loops=150000)\n> -> Seq Scan on tenk1 a (cost=0.00..3061.00 rows=9 width=16) (actual time=0.026..42.389 rows=9 loops=1)\n> Filter: ((unique2 = ANY ('{3,7}'::integer[])) OR (unique1 = 1))\n> Rows Removed by Filter: 149991\n> Planning Time: 0.414 ms\n> Execution Time: 409.154 ms\n> (8 rows)\n>\n> I compiled my original patch and there were no problems with regression\n> tests. The only time there was a problem when I set the\n> const_transform_or_limit variable to 0 (I have 15), as you have in the\n> patch. To be honest, diff appears there because you had a different plan,\n> specifically the expressions \"or\" are replaced by ANY (see\n> regression.diffs).\n>\nYou are right. The v3 attached shows the same diff.\n\nUnfortunately, your patch version did not apply immediately, I did not\n> understand the reasons, I applied it manually.\n>\nSorry.\n\n\n> At the moment, I'm not sure that the constant is the right number for\n> applying transformations, so I'm in search of it, to be honest. I will post\n> my observations on this issue later. If you don't mind, I'll leave the\n> constant equal to 15 for now.\n>\nIt's hard to predict. Perhaps accounting for time on each benchmark could\nhelp decide.\n\n\n> Sorry, I don't understand well enough what is meant by points \"Eliminate\n> unnecessary variables\" and \"Eliminate unnecessary expressions\". Can you\n> explain in more detail?\n>\nOne example is array_type.\nAs you can see in v2 and v3 it no longer exists.\n\n\n>\n> Regarding the patch, there was a Warning at the compilation stage.\n>\n> In file included from ../../../src/include/nodes/bitmapset.h:21,\n>\n> from ../../../src/include/nodes/parsenodes.h:26,\n>\n> from ../../../src/include/catalog/objectaddress.h:17,\n>\n> from ../../../src/include/catalog/pg_aggregate.h:24,\n>\n> from parse_expr.c:18:\n>\n> parse_expr.c: In function ‘transformBoolExprOr’:\n>\n> ../../../src/include/nodes/nodes.h:133:66: warning: ‘expr’ is used uninitialized [-Wuninitialized]\n>\n> 133 | #define nodeTag(nodeptr) (((const Node*)(nodeptr))->type)\n>\n> | ^~\n>\n> parse_expr.c:116:29: note: ‘expr’ was declared here\n>\n> 116 | BoolExpr *expr;\n>\n> | ^~~~\n>\n> Sorry, this error did not appear in my builds\n\n> I couldn't figure out how to fix it and went back to my original version.\n> To be honest, I don't think anything needs to be changed here.\n>\n> Unfortunately, I didn't understand the reasons why, with the available or\n> expressions, you don't even try to convert to ANY by calling\n> transformBoolExpr, as I saw. 
I went back to my version.\n>\n> I think it's worth checking whether the or_statement variable is positive.\n>\n> I think it's worth leaving the use of the or_statement variable in its\n> original form.\n>\n> switch (expr->boolop)\n> {\n> case AND_EXPR:\n> opname = \"AND\";\n> break;\n> case OR_EXPR:\n> opname = \"OR\";\n> or_statement = true;\n> break;\n> case NOT_EXPR:\n> opname = \"NOT\";\n> break;\n> default:\n> elog(ERROR, \"unrecognized boolop: %d\", (int) expr->boolop);\n> opname = NULL; /* keep compiler quiet */\n> break;\n> }\n>\n> if (!or_statement || list_length(expr->args) < const_transform_or_limit)\n> return transformBoolExpr(pstate, (BoolExpr *)expr_orig);\n>\nYou are right, the v3 this way.\n\nAs I said earlier, these are just suggestions.\nBut thinking about it now, I think they can be classified as bad early\noptimizations.\n\nregards,\nRanier Vilela",
"msg_date": "Thu, 29 Jun 2023 09:25:20 -0300",
"msg_from": "Ranier Vilela <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: POC, WIP: OR-clause support for indexes"
},
{
"msg_contents": "Hi!\n>\n> At the moment, I'm not sure that the constant is the right number\n> for applying transformations, so I'm in search of it, to be\n> honest. I will post my observations on this issue later. If you\n> don't mind, I'll leave the constant equal to 15 for now.\n>\n> It's hard to predict. Perhaps accounting for time on each benchmark \n> could help decide.\n>\nI will try to test on JOB [1] (because queries are difficult for the \noptimizer due to the significant number of joins and correlations \ncontained in the dataset) and\ntpcds [2] (the benchmark I noticed contains a sufficient number of \nqueries with \"or\" expressions).\n\n> Sorry, I don't understand well enough what is meant by points\n> \"Eliminate unnecessary variables\" and \"Eliminate unnecessary\n> expressions\". Can you explain in more detail?\n>\n> One example is array_type.\n> As you can see in v2 and v3 it no longer exists.\n>\nI get it. Honestly, I was guided by the example of converting \"IN\" to \n\"ANY\" (transformAExprIn), at least the part of the code when we \nspecifically convert the expression to ScalarArrayOpExpr.\n\nBoth there and here, we first look for a common type for the collected \nconstants, and if there is one, then we try to find the type for the \narray structure.\n\nOnly I think in my current patch it is also worth returning to the \noriginal version in this place, since if it is not found, the \nScalarArrayOpExpr generation function will be processed incorrectly and\nthe request may not be executed at all, referring to the error that it \nis impossible to determine the type of node (ERROR: unrecognized node \ntype. )\n\nAt the same time we are trying to do this transformation for each group. \nThe group here implies that these are combined \"or\" expressions on the \ncommon left side, and at the same time we consider\nonly expressions that contain a constant and only equality.\n\nWhat else should be taken into account is that we are trying to do this \nprocessing before forming a BoolExpr expression (if you notice, then \nafter any outcome we call the makeBoolExpr function,\nwhich just forms the \"Or\" expression, as in the original version, \nregardless of what type of expressions it combines.\n\n>\n> As I said earlier, these are just suggestions.\n> But thinking about it now, I think they can be classified as bad early \n> optimizations.\nThank you again for your interest in this problem and help. Yes, I think \nso too)\n\n\n1. https://github.com/gregrahn/join-order-benchmark\n\n2. \nhttps://github.com/Alena0704/s64da-benchmark-toolkit/tree/master/benchmarks/tpcds\n\n-- \nRegards,\nAlena Rybakina\nPostgres Professional",
"msg_date": "Thu, 29 Jun 2023 18:15:05 +0300",
"msg_from": "Alena Rybakina <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: POC, WIP: OR-clause support for indexes"
}
] |
[
{
"msg_contents": "Hi,\n\nThis discussion started at https://postgr.es/m/[email protected]\nbut isn't really related to the bug.\n\n\nOn 2023-06-27 17:44:57 -0400, Tom Lane wrote:\n> Andres Freund <[email protected]> writes:\n> > That's not going to help you / the reporter, but to make issues like this\n> > easier to debug in the future, I think we should\n> > a) install an error context in load_libraries() printing the GUC name and\n> > b) install an error context in internal_load_library() printing the name of\n> > the shared library name\n>\n> +1. I'm not sure this specific issue justifies it, but it seems like that\n> might make it easier to diagnose other shared-library-load-time issues\n> as well.\n\nYea, this bug alone wouldn't have made me suggest it, but I've looked at\nenough issues where I regretted not knowing what library caused and error\nand/or what caused a library to be loaded that I think it's worth it.\n\n\nI first added an error context to both the places mentioned above, but that\nleads to annoyingly redundant messages. And I realized that it'd also be\nuseful to print a message when loading a library due to\nload_external_function() - I've definitely wondered about errors triggered\nbelow that in the past.\n\n\nI ended up adding a reason enum and a detail const char* to\ninternal_load_library(). I don't like that approach a whole lot, but couldn't\nreally come up with something better.\n\n\n\nExample errors that now have a context:\n\nError in _PG_init() of library called via shared_preload_libraries (error added by me):\n\n FATAL: not today\n CONTEXT: in \"_PG_init()\" callback of library \"/tmp/meson-install/lib/x86_64-linux-gnu/postgresql/auto_explain.so\"\n library load for \"shared_preload_libraries\" parameter\n\nor\n\n FATAL: could not access file \"dont_exist\": No such file or directory\n CONTEXT: library load for \"shared_preload_libraries\" parameter\n\n\n\nCreating a C function referencing a library that needs to be loaded with\nshared_preload_libraries:\n\n =# CREATE FUNCTION frak()\n RETURNS text IMMUTABLE STRICT\n AS '/srv/dev/build/m/src/test/modules/test_slru/test_slru.so' LANGUAGE C;\n ERROR: XX000: cannot load \"test_slru\" after startup\n DETAIL: \"test_slru\" must be loaded with shared_preload_libraries.\n CONTEXT: in \"_PG_init()\" callback of library \"/srv/dev/build/m/src/test/modules/test_slru/test_slru.so\"\n library load for C function \"frak\"\n\n\nLOAD of a non-postgres library:\n\n =# LOAD '/usr/lib/libarmadillo.so.11';\n ERROR: XX000: incompatible library \"/usr/lib/libarmadillo.so.11\": missing magic block\n HINT: Extension libraries are required to use the PG_MODULE_MAGIC macro.\n CONTEXT: library load for LOAD statement\n\n\nNote that here the errcontext callback prints the reason for the library being\nloaded, but not the library name. I made it so that the library name is only\nprinted during _PG_init(), otherwise it's always duplicating the primary error\nmessage. 
Which looks messy - but perhaps it's more important to be\n\"predictable\"?\n\n\nI don't love \"library load for ...\" and played around with a few other\nvariants, but I didn't come up with anything particularly satisfying.\n\n\nI was tempted to invent a separate \"library load reason\" for\nfmgr_info_C_lang() and other uses of load_external_function(), but concluded\nthat that reaches diminishing-returns territory.\n\n\nIs it worth adding tests for:\n1) an error during shared_preload_libraries, local_preload_libraries, session_preload_libraries\n2) loading a non-postgres library and hitting \"missing magic block\"\n3) local_preload_libraries not being allowed to load libraries outside of plugins/?\n4) session_preload_libraries being allowed to load libraries outside of plugins/?\n?\n\nGreetings,\n\nAndres Freund",
"msg_date": "Tue, 27 Jun 2023 17:03:07 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "Add error context during loading of libraries"
}
] |
[
{
"msg_contents": "Hi, hackers\n\nThere has $subject that introduced by commit 6b4d23feef6. When we reset the entries\nif all parameters are avaiable, non-top-level entries removed first, then top-level\nentries.\n\n\n\n\n-- \nRegrads,\nJapin Li.",
"msg_date": "Wed, 28 Jun 2023 10:52:50 +0800",
"msg_from": "Japin Li <[email protected]>",
"msg_from_op": true,
"msg_subject": "Another incorrect comment for pg_stat_statements"
},
{
"msg_contents": "On Wed, Jun 28, 2023 at 10:53 AM Japin Li <[email protected]> wrote:\n\n>\n> Hi, hackers\n>\n> There has $subject that introduced by commit 6b4d23feef6. When we reset\n> the entries\n> if all parameters are avaiable, non-top-level entries removed first, then\n> top-level\n> entries.\n\n\nI did not see the diffs. Maybe uploaded the wrong attachment?\n\nThanks\nRichard\n\nOn Wed, Jun 28, 2023 at 10:53 AM Japin Li <[email protected]> wrote:\nHi, hackers\n\nThere has $subject that introduced by commit 6b4d23feef6. When we reset the entries\nif all parameters are avaiable, non-top-level entries removed first, then top-level\nentries.I did not see the diffs. Maybe uploaded the wrong attachment?ThanksRichard",
"msg_date": "Wed, 28 Jun 2023 11:22:55 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Another incorrect comment for pg_stat_statements"
},
{
"msg_contents": "On Wed, 28 Jun 2023 at 11:22, Richard Guo <[email protected]> wrote:\n> On Wed, Jun 28, 2023 at 10:53 AM Japin Li <[email protected]> wrote:\n>\n>>\n>> Hi, hackers\n>>\n>> There has $subject that introduced by commit 6b4d23feef6. When we reset\n>> the entries\n>> if all parameters are avaiable, non-top-level entries removed first, then\n>> top-level\n>> entries.\n>\n>\n> I did not see the diffs. Maybe uploaded the wrong attachment?\n>\n\nMy bad! Here is the patch. Thanks!\n\n\n\n\n\n-- \nRegrads,\nJapin Li.",
"msg_date": "Wed, 28 Jun 2023 12:15:47 +0800",
"msg_from": "Japin Li <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Another incorrect comment for pg_stat_statements"
},
{
"msg_contents": "On Wed, Jun 28, 2023 at 12:15:47PM +0800, Japin Li wrote:\n> -\t\t/* Remove the key if it exists, starting with the top-level entry */\n> +\t\t/* Remove the key if it exists, starting with the non-top-level entry */\n> \t\tkey.toplevel = false;\n> \t\tentry = (pgssEntry *) hash_search(pgss_hash, &key, HASH_REMOVE, NULL);\n> \t\tif (entry)\t\t\t\t/* found */\n\nNice catch. That's indeed wrong. Will fix.\n--\nMichael",
"msg_date": "Wed, 28 Jun 2023 16:04:48 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Another incorrect comment for pg_stat_statements"
},
{
"msg_contents": "On Wed, Jun 28, 2023 at 3:04 PM Michael Paquier <[email protected]> wrote:\n\n> On Wed, Jun 28, 2023 at 12:15:47PM +0800, Japin Li wrote:\n> > - /* Remove the key if it exists, starting with the\n> top-level entry */\n> > + /* Remove the key if it exists, starting with the\n> non-top-level entry */\n> > key.toplevel = false;\n> > entry = (pgssEntry *) hash_search(pgss_hash, &key,\n> HASH_REMOVE, NULL);\n> > if (entry) /* found */\n>\n> Nice catch. That's indeed wrong. Will fix.\n\n\n+1. To nitpick, how about we remove the blank line just before removing\nthe key for top level entry?\n\n- /* Also remove entries for top level statements */\n+ /* Also remove entries if exist for top level statements */\n key.toplevel = true;\n-\n- /* Remove the key if exists */\n entry = (pgssEntry *) hash_search(pgss_hash, &key, HASH_REMOVE, NULL);\n\nThanks\nRichard\n\nOn Wed, Jun 28, 2023 at 3:04 PM Michael Paquier <[email protected]> wrote:On Wed, Jun 28, 2023 at 12:15:47PM +0800, Japin Li wrote:\n> - /* Remove the key if it exists, starting with the top-level entry */\n> + /* Remove the key if it exists, starting with the non-top-level entry */\n> key.toplevel = false;\n> entry = (pgssEntry *) hash_search(pgss_hash, &key, HASH_REMOVE, NULL);\n> if (entry) /* found */\n\nNice catch. That's indeed wrong. Will fix.+1. To nitpick, how about we remove the blank line just before removingthe key for top level entry?- /* Also remove entries for top level statements */+ /* Also remove entries if exist for top level statements */ key.toplevel = true;-- /* Remove the key if exists */ entry = (pgssEntry *) hash_search(pgss_hash, &key, HASH_REMOVE, NULL);ThanksRichard",
"msg_date": "Wed, 28 Jun 2023 15:09:55 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Another incorrect comment for pg_stat_statements"
},
{
"msg_contents": "On Wed, Jun 28, 2023 at 03:09:55PM +0800, Richard Guo wrote:\n> +1. To nitpick, how about we remove the blank line just before removing\n> the key for top level entry?\n> \n> - /* Also remove entries for top level statements */\n> + /* Also remove entries if exist for top level statements */\n> key.toplevel = true;\n> -\n> - /* Remove the key if exists */\n> entry = (pgssEntry *) hash_search(pgss_hash, &key, HASH_REMOVE, NULL);\n\nWhy not if it improves the overall situation. Could you send a patch\nwith everything you have in mind?\n--\nMichael",
"msg_date": "Wed, 28 Jun 2023 16:36:29 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Another incorrect comment for pg_stat_statements"
},
{
"msg_contents": "On Wed, Jun 28, 2023 at 3:36 PM Michael Paquier <[email protected]> wrote:\n\n> On Wed, Jun 28, 2023 at 03:09:55PM +0800, Richard Guo wrote:\n> > +1. To nitpick, how about we remove the blank line just before removing\n> > the key for top level entry?\n> >\n> > - /* Also remove entries for top level statements */\n> > + /* Also remove entries if exist for top level statements */\n> > key.toplevel = true;\n> > -\n> > - /* Remove the key if exists */\n> > entry = (pgssEntry *) hash_search(pgss_hash, &key, HASH_REMOVE, NULL);\n>\n> Why not if it improves the overall situation. Could you send a patch\n> with everything you have in mind?\n\n\nHere is the patch. I don't have too much in mind, so the patch just\nremoves the blank line and revises the comment a bit.\n\nThanks\nRichard",
"msg_date": "Wed, 28 Jun 2023 16:27:00 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Another incorrect comment for pg_stat_statements"
},
{
"msg_contents": "\nOn Wed, 28 Jun 2023 at 16:27, Richard Guo <[email protected]> wrote:\n> On Wed, Jun 28, 2023 at 3:36 PM Michael Paquier <[email protected]> wrote:\n>\n>> On Wed, Jun 28, 2023 at 03:09:55PM +0800, Richard Guo wrote:\n>> > +1. To nitpick, how about we remove the blank line just before removing\n>> > the key for top level entry?\n>> >\n>> > - /* Also remove entries for top level statements */\n>> > + /* Also remove entries if exist for top level statements */\n>> > key.toplevel = true;\n>> > -\n>> > - /* Remove the key if exists */\n>> > entry = (pgssEntry *) hash_search(pgss_hash, &key, HASH_REMOVE, NULL);\n>>\n>> Why not if it improves the overall situation. Could you send a patch\n>> with everything you have in mind?\n>\n>\n> Here is the patch. I don't have too much in mind, so the patch just\n> removes the blank line and revises the comment a bit.\n>\n\n+1. LGTM.\n\n-- \nRegrads,\nJapin Li.\n\n\n",
"msg_date": "Wed, 28 Jun 2023 21:26:02 +0800",
"msg_from": "Japin Li <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Another incorrect comment for pg_stat_statements"
},
{
"msg_contents": "On Wed, Jun 28, 2023 at 09:26:02PM +0800, Japin Li wrote:\n> +1. LGTM.\n\nNothing much to add, so applied with the initial comment fix.\n--\nMichael",
"msg_date": "Thu, 29 Jun 2023 09:19:15 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Another incorrect comment for pg_stat_statements"
},
{
"msg_contents": "\nOn Thu, 29 Jun 2023 at 08:19, Michael Paquier <[email protected]> wrote:\n> On Wed, Jun 28, 2023 at 09:26:02PM +0800, Japin Li wrote:\n>> +1. LGTM.\n>\n> Nothing much to add, so applied with the initial comment fix.\n\nThanks!\n\n-- \nRegrads,\nJapin Li.\n\n\n",
"msg_date": "Thu, 29 Jun 2023 08:44:54 +0800",
"msg_from": "Japin Li <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Another incorrect comment for pg_stat_statements"
},
{
"msg_contents": "On Thu, Jun 29, 2023 at 8:19 AM Michael Paquier <[email protected]> wrote:\n\n> Nothing much to add, so applied with the initial comment fix.\n\n\nThanks for pushing it!\n\nThanks\nRichard\n\nOn Thu, Jun 29, 2023 at 8:19 AM Michael Paquier <[email protected]> wrote:\nNothing much to add, so applied with the initial comment fix.Thanks for pushing it!ThanksRichard",
"msg_date": "Thu, 29 Jun 2023 11:25:53 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Another incorrect comment for pg_stat_statements"
}
] |
[
{
"msg_contents": "While looking at something unrelated, I noticed that the vacuumdb docs\nmention the following:\n\n\tvacuumdb might need to connect several times to the PostgreSQL server,\n\tasking for a password each time.\n\nIIUC this has been fixed since 83dec5a from 2015 (which was superceded by\nff402ae), so I think this note (originally added in e0a77f5 from 2002) can\nnow be removed.\n\nI also found that neither clusterdb nor reindexdb uses the\nallow_password_reuse parameter in connectDatabase(), and the reindexdb\ndocumentation contains the same note about repeatedly asking for a\npassword (originally added in 85e9a5a from 2005). IMO we should allow\npassword reuse for all three programs, and we should remove the\naforementioned notes in the docs, too. This is what the attached patch\ndoes.\n\nThoughts?\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Tue, 27 Jun 2023 21:57:41 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": true,
"msg_subject": "harmonize password reuse in vacuumdb, clusterdb, and reindexdb"
},
{
"msg_contents": "On Tue, Jun 27, 2023 at 9:57 PM Nathan Bossart <[email protected]> wrote:\n>\n> While looking at something unrelated, I noticed that the vacuumdb docs\n> mention the following:\n>\n> vacuumdb might need to connect several times to the PostgreSQL server,\n> asking for a password each time.\n>\n> IIUC this has been fixed since 83dec5a from 2015 (which was superceded by\n> ff402ae), so I think this note (originally added in e0a77f5 from 2002) can\n> now be removed.\n>\n> I also found that neither clusterdb nor reindexdb uses the\n> allow_password_reuse parameter in connectDatabase(), and the reindexdb\n> documentation contains the same note about repeatedly asking for a\n> password (originally added in 85e9a5a from 2005). IMO we should allow\n> password reuse for all three programs, and we should remove the\n> aforementioned notes in the docs, too. This is what the attached patch\n> does.\n>\n> Thoughts?\n\nThe comment on top of connect_utils.c:connectDatabase() seems pertinent:\n\n> (Callers should not pass\n> * allow_password_reuse=true unless reconnecting to the same database+user\n> * as before, else we might create password exposure hazards.)\n\nThe callers of {cluster|reindex}_one_database() (which in turn call\nconnectDatabase()) clearly pass different database names in successive\ncalls to these functions. So the patch seems to be in conflict with\nthe recommendation in the comment.\n\nI'm not sure if the concern raised in that comment is a legitimate\none, though. I mean, if the password is reused to connect to a\ndifferent database in the same cluster/instance, which I think is\nalways the case with these utilities, the password will exposed in the\nserver logs (if at all). And since the admins of the instance already\nhave full control over the passwords of the user, I don't think this\npatch will give them any more information than what they can get\nanyways.\n\nIt is a valid concern, though, if the utility connects to a different\ninstance in the same run/invocation, and hence exposes the password\nfrom the first instance to the admins of the second cluster.\n\nNitpicking: The patch seems to have Windows line endings, which\nexplains why my `patch` complained so loudly.\n\n$ patch -p1 < v1-0001-harmonize-....patch\n(Stripping trailing CRs from patch; use --binary to disable.)\npatching file doc/src/sgml/ref/reindexdb.sgml\n(Stripping trailing CRs from patch; use --binary to disable.)\npatching file doc/src/sgml/ref/vacuumdb.sgml\n(Stripping trailing CRs from patch; use --binary to disable.)\npatching file src/bin/scripts/clusterdb.c\n(Stripping trailing CRs from patch; use --binary to disable.)\npatching file src/bin/scripts/reindexdb.c\n\n$ file v1-0001-harmonize-password-reuse-in-vacuumdb-clusterdb-an.patch\nv1-0001-harmonize-....patch: unified diff output text, ASCII text,\nwith CRLF line terminators\n\nBest regards,\nGurjeet\nhttp://Gurje.et\n\n\n",
"msg_date": "Wed, 28 Jun 2023 21:20:03 -0700",
"msg_from": "Gurjeet Singh <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: harmonize password reuse in vacuumdb, clusterdb, and reindexdb"
},
{
"msg_contents": "On Wed, Jun 28, 2023 at 09:20:03PM -0700, Gurjeet Singh wrote:\n> The comment on top of connect_utils.c:connectDatabase() seems pertinent:\n> \n>> (Callers should not pass\n>> * allow_password_reuse=true unless reconnecting to the same database+user\n>> * as before, else we might create password exposure hazards.)\n> \n> The callers of {cluster|reindex}_one_database() (which in turn call\n> connectDatabase()) clearly pass different database names in successive\n> calls to these functions. So the patch seems to be in conflict with\n> the recommendation in the comment.\n> \n> I'm not sure if the concern raised in that comment is a legitimate\n> one, though. I mean, if the password is reused to connect to a\n> different database in the same cluster/instance, which I think is\n> always the case with these utilities, the password will exposed in the\n> server logs (if at all). And since the admins of the instance already\n> have full control over the passwords of the user, I don't think this\n> patch will give them any more information than what they can get\n> anyways.\n> \n> It is a valid concern, though, if the utility connects to a different\n> instance in the same run/invocation, and hence exposes the password\n> from the first instance to the admins of the second cluster.\n\nThe same commit that added this comment (ff402ae) also set the\nallow_password_reuse parameter to true in vacuumdb's connectDatabase()\ncalls. I found a message from the corresponding thread that provides some\nadditional detail [0]. I wonder if this comment should instead recommend\nagainst using the allow_password_reuse flag unless reconnecting to the same\nhost/port/user target. Connecting to different databases with the same\nhost/port/user information seems okay. Maybe I am missing something... \n\n> Nitpicking: The patch seems to have Windows line endings, which\n> explains why my `patch` complained so loudly.\n> \n> $ patch -p1 < v1-0001-harmonize-....patch\n> (Stripping trailing CRs from patch; use --binary to disable.)\n> patching file doc/src/sgml/ref/reindexdb.sgml\n> (Stripping trailing CRs from patch; use --binary to disable.)\n> patching file doc/src/sgml/ref/vacuumdb.sgml\n> (Stripping trailing CRs from patch; use --binary to disable.)\n> patching file src/bin/scripts/clusterdb.c\n> (Stripping trailing CRs from patch; use --binary to disable.)\n> patching file src/bin/scripts/reindexdb.c\n> \n> $ file v1-0001-harmonize-password-reuse-in-vacuumdb-clusterdb-an.patch\n> v1-0001-harmonize-....patch: unified diff output text, ASCII text,\n> with CRLF line terminators\n\nHuh. I didn't write it on a Windows machine. I'll look into it.\n\n[0] https://postgr.es/m/15139.1447357263%40sss.pgh.pa.us\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 28 Jun 2023 22:24:09 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: harmonize password reuse in vacuumdb, clusterdb, and reindexdb"
},
{
"msg_contents": "On Wed, Jun 28, 2023 at 10:24:09PM -0700, Nathan Bossart wrote:\n> On Wed, Jun 28, 2023 at 09:20:03PM -0700, Gurjeet Singh wrote:\n>> Nitpicking: The patch seems to have Windows line endings, which\n>> explains why my `patch` complained so loudly.\n>> \n>> $ patch -p1 < v1-0001-harmonize-....patch\n>> (Stripping trailing CRs from patch; use --binary to disable.)\n>> patching file doc/src/sgml/ref/reindexdb.sgml\n>> (Stripping trailing CRs from patch; use --binary to disable.)\n>> patching file doc/src/sgml/ref/vacuumdb.sgml\n>> (Stripping trailing CRs from patch; use --binary to disable.)\n>> patching file src/bin/scripts/clusterdb.c\n>> (Stripping trailing CRs from patch; use --binary to disable.)\n>> patching file src/bin/scripts/reindexdb.c\n>> \n>> $ file v1-0001-harmonize-password-reuse-in-vacuumdb-clusterdb-an.patch\n>> v1-0001-harmonize-....patch: unified diff output text, ASCII text,\n>> with CRLF line terminators\n> \n> Huh. I didn't write it on a Windows machine. I'll look into it.\n\nI couldn't reproduce this with the patch available in the archives:\n\n\t$ patch -p1 < v1-0001-harmonize-password-reuse-in-vacuumdb-clusterdb-an.patch \n\tpatching file doc/src/sgml/ref/reindexdb.sgml\n\tpatching file doc/src/sgml/ref/vacuumdb.sgml\n\tpatching file src/bin/scripts/clusterdb.c\n\tpatching file src/bin/scripts/reindexdb.c\n\t$ file v1-0001-harmonize-password-reuse-in-vacuumdb-clusterdb-an.patch \n\tv1-0001-harmonize-password-reuse-in-vacuumdb-clusterdb-an.patch: unified diff output, ASCII text\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 29 Jun 2023 14:05:28 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: harmonize password reuse in vacuumdb, clusterdb, and reindexdb"
},
{
"msg_contents": "On Wed, Jun 28, 2023 at 10:24:09PM -0700, Nathan Bossart wrote:\n> On Wed, Jun 28, 2023 at 09:20:03PM -0700, Gurjeet Singh wrote:\n>> The comment on top of connect_utils.c:connectDatabase() seems pertinent:\n>> \n>>> (Callers should not pass\n>>> * allow_password_reuse=true unless reconnecting to the same database+user\n>>> * as before, else we might create password exposure hazards.)\n>> \n>> The callers of {cluster|reindex}_one_database() (which in turn call\n>> connectDatabase()) clearly pass different database names in successive\n>> calls to these functions. So the patch seems to be in conflict with\n>> the recommendation in the comment.\n>> \n>> I'm not sure if the concern raised in that comment is a legitimate\n>> one, though. I mean, if the password is reused to connect to a\n>> different database in the same cluster/instance, which I think is\n>> always the case with these utilities, the password will exposed in the\n>> server logs (if at all). And since the admins of the instance already\n>> have full control over the passwords of the user, I don't think this\n>> patch will give them any more information than what they can get\n>> anyways.\n>> \n>> It is a valid concern, though, if the utility connects to a different\n>> instance in the same run/invocation, and hence exposes the password\n>> from the first instance to the admins of the second cluster.\n> \n> The same commit that added this comment (ff402ae) also set the\n> allow_password_reuse parameter to true in vacuumdb's connectDatabase()\n> calls. I found a message from the corresponding thread that provides some\n> additional detail [0]. I wonder if this comment should instead recommend\n> against using the allow_password_reuse flag unless reconnecting to the same\n> host/port/user target. Connecting to different databases with the same\n> host/port/user information seems okay. Maybe I am missing something... \n\nHere is a new version of the patch in which I've updated this comment as\nproposed. Gurjeet, do you have any other concerns about this patch?\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Mon, 17 Jul 2023 13:47:44 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: harmonize password reuse in vacuumdb, clusterdb, and reindexdb"
},
{
"msg_contents": "On Mon, Jul 17, 2023 at 1:47 PM Nathan Bossart <[email protected]> wrote:\n>\n> Here is a new version of the patch in which I've updated this comment as\n> proposed. Gurjeet, do you have any other concerns about this patch?\n\nWith the updated comment, the patch looks good to me.\n\nBest regards,\nGurjeet\nhttp://Gurje.et\n\n\n",
"msg_date": "Tue, 18 Jul 2023 10:05:50 -0700",
"msg_from": "Gurjeet Singh <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: harmonize password reuse in vacuumdb, clusterdb, and reindexdb"
},
{
"msg_contents": "HI,\n\nOn Jun 29, 2023 at 13:24 +0800, Nathan Bossart <[email protected]>, wrote:\n>\n> Connecting to different databases with the same\n> host/port/user information seems okay.\nHave a look, yeah, cluster_all_databases/vacuum_all_databases/reindex_all_databases will get there.\n\nLGTM.\n\nRegards,\nZhang Mingli\n\n\n\n\n\n\n\nHI,\n\nOn Jun 29, 2023 at 13:24 +0800, Nathan Bossart <[email protected]>, wrote:\n\nConnecting to different databases with the same\nhost/port/user information seems okay.\nHave a look, yeah, cluster_all_databases/vacuum_all_databases/reindex_all_databases will get there.\n\nLGTM.\n\n\n\nRegards,\nZhang Mingli",
"msg_date": "Wed, 19 Jul 2023 11:41:11 +0800",
"msg_from": "Zhang Mingli <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: harmonize password reuse in vacuumdb, clusterdb, and\n reindexdb"
},
{
"msg_contents": "On Wed, Jun 28, 2023 at 10:24:09PM -0700, Nathan Bossart wrote:\n> On Wed, Jun 28, 2023 at 09:20:03PM -0700, Gurjeet Singh wrote:\n>> The comment on top of connect_utils.c:connectDatabase() seems pertinent:\n>> \n>>> (Callers should not pass\n>>> * allow_password_reuse=true unless reconnecting to the same database+user\n>>> * as before, else we might create password exposure hazards.)\n>> \n>> The callers of {cluster|reindex}_one_database() (which in turn call\n>> connectDatabase()) clearly pass different database names in successive\n>> calls to these functions. So the patch seems to be in conflict with\n>> the recommendation in the comment.\n>> \n>> [ ... ]\n> \n> The same commit that added this comment (ff402ae) also set the\n> allow_password_reuse parameter to true in vacuumdb's connectDatabase()\n> calls. I found a message from the corresponding thread that provides some\n> additional detail [0]. I wonder if this comment should instead recommend\n> against using the allow_password_reuse flag unless reconnecting to the same\n> host/port/user target. Connecting to different databases with the same\n> host/port/user information seems okay. Maybe I am missing something... \n\nI added Tom here since it looks like he was the original author of this\ncomment. Tom, do you have any concerns with updating the comment for\nconnectDatabase() in src/fe_utils/connect_utils.c like this?\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 19 Jul 2023 10:43:11 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: harmonize password reuse in vacuumdb, clusterdb, and reindexdb"
},
{
"msg_contents": "On Wed, Jul 19, 2023 at 10:43:11AM -0700, Nathan Bossart wrote:\n> On Wed, Jun 28, 2023 at 10:24:09PM -0700, Nathan Bossart wrote:\n>> The same commit that added this comment (ff402ae) also set the\n>> allow_password_reuse parameter to true in vacuumdb's connectDatabase()\n>> calls. I found a message from the corresponding thread that provides some\n>> additional detail [0]. I wonder if this comment should instead recommend\n>> against using the allow_password_reuse flag unless reconnecting to the same\n>> host/port/user target. Connecting to different databases with the same\n>> host/port/user information seems okay. Maybe I am missing something... \n> \n> I added Tom here since it looks like he was the original author of this\n> comment. Tom, do you have any concerns with updating the comment for\n> connectDatabase() in src/fe_utils/connect_utils.c like this?\n\nI went ahead and committed this. I'm happy to revisit if there are\nconcerns.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 28 Jul 2023 10:14:29 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: harmonize password reuse in vacuumdb, clusterdb, and reindexdb"
}
] |
[
{
"msg_contents": "Dear Postgres Hackers,\n\nI hope this email finds you well. I am currently facing an issue while\nperforming an upgrade using the pg_upgrade utility with the --link option.\nI was under the impression that the --link option would create hard links\nbetween the old and new cluster's data files, but it appears that the\nentire old cluster data was copied to the new cluster, resulting in a\nsignificant increase in the new cluster's size.\n\nHere are the details of my scenario:\n- PostgreSQL version: [Old Version: Postgres 11.4 | New Version: Postgres\n14.0]\n- Command used for pg_upgrade:\n[~/pg_upgrade_testing/postgres_14/bin/pg_upgrade -b\n~/pg_upgrade_testing/postgres_11.4/bin -B\n~/pg_upgrade_testing/postgres_14/bin -d\n~/pg_upgrade_testing/postgres_11.4/replica_db2 -D\n~/pg_upgrade_testing/postgres_14/new_pg -r -k\n- Paths to the old and new data directories:\n[~/pg_upgrade_testing/postgres_11.4/replica_db2]\n[~/pg_upgrade_testing/postgres_14/new_pg]\n- OS information: [Ubuntu 22.04.2 linux]\n\nHowever, after executing the pg_upgrade command with the --link option, I\nobserved that the size of the new cluster is much larger than expected. I\nexpected the --link option to create hard links instead of duplicating the\ndata files.\n\nI am seeking assistance to understand the following:\n1. Is my understanding of the --link option correct?\n2. Is there any additional configuration or step required to properly\nutilize the --link option?\n3. Are there any limitations or considerations specific to my PostgreSQL\nversion or file system that I should be aware of?\n\nAny guidance, clarification, or troubleshooting steps you can provide would\nbe greatly appreciated. I want to ensure that I am utilizing the --link\noption correctly and optimize the upgrade process.\n\nBest regards,\nPradeep Kumar\n\nDear Postgres Hackers,I hope this email finds you well. I am currently facing an issue while performing an upgrade using the pg_upgrade utility with the --link option. I was under the impression that the --link option would create hard links between the old and new cluster's data files, but it appears that the entire old cluster data was copied to the new cluster, resulting in a significant increase in the new cluster's size.Here are the details of my scenario:- PostgreSQL version: [Old Version: Postgres 11.4 | New Version: Postgres 14.0]- Command used for pg_upgrade: [~/pg_upgrade_testing/postgres_14/bin/pg_upgrade -b ~/pg_upgrade_testing/postgres_11.4/bin -B ~/pg_upgrade_testing/postgres_14/bin -d ~/pg_upgrade_testing/postgres_11.4/replica_db2 -D ~/pg_upgrade_testing/postgres_14/new_pg -r -k - Paths to the old and new data directories: [~/pg_upgrade_testing/postgres_11.4/replica_db2] [~/pg_upgrade_testing/postgres_14/new_pg]- OS information: [Ubuntu 22.04.2 linux]However, after executing the pg_upgrade command with the --link option, I observed that the size of the new cluster is much larger than expected. I expected the --link option to create hard links instead of duplicating the data files.I am seeking assistance to understand the following:1. Is my understanding of the --link option correct?2. Is there any additional configuration or step required to properly utilize the --link option?3. Are there any limitations or considerations specific to my PostgreSQL version or file system that I should be aware of?Any guidance, clarification, or troubleshooting steps you can provide would be greatly appreciated. 
I want to ensure that I am utilizing the --link option correctly and optimize the upgrade process.Best regards,Pradeep Kumar",
"msg_date": "Wed, 28 Jun 2023 11:49:43 +0530",
"msg_from": "Pradeep Kumar <[email protected]>",
"msg_from_op": true,
"msg_subject": "Assistance Needed: Issue with pg_upgrade and --link option"
},
{
"msg_contents": "On Wed, 2023-06-28 at 11:49 +0530, Pradeep Kumar wrote:\n> I was under the impression that the --link option would create hard links between the\n> old and new cluster's data files, but it appears that the entire old cluster data was\n> copied to the new cluster, resulting in a significant increase in the new cluster's size.\n\nPlease provide some numbers, ideally\n\n du -sk <old_data_directory> <new_data_directory>\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Wed, 28 Jun 2023 08:24:45 +0200",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Assistance Needed: Issue with pg_upgrade and --link option"
},
{
"msg_contents": "On 28.06.23 08:24, Laurenz Albe wrote:\n> On Wed, 2023-06-28 at 11:49 +0530, Pradeep Kumar wrote:\n>> I was under the impression that the --link option would create hard links between the\n>> old and new cluster's data files, but it appears that the entire old cluster data was\n>> copied to the new cluster, resulting in a significant increase in the new cluster's size.\n> \n> Please provide some numbers, ideally\n> \n> du -sk <old_data_directory> <new_data_directory>\n\nI don't think you can observe the effects of the --link option this way. \n It would just give you the full size count for both directories, even \nthough the point to the same underlying inodes.\n\nTo see the effect, you could perhaps use `df` to see how much overall \ndisk space the upgrade step eats up.\n\n\n\n",
"msg_date": "Wed, 28 Jun 2023 11:44:17 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Assistance Needed: Issue with pg_upgrade and --link option"
},
{
"msg_contents": "Sure,\ndu -sk ~/pradeep_test/pg_upgrade_testing/postgres_11.4/master\n~/pradeep_test/pg_upgrade_testing/postgres_14/new_pg\n11224524 /home/test/pradeep_test/pg_upgrade_testing/postgres_11.4/master\n41952 /home/test/pradeep_test/pg_upgrade_testing/postgres_14/new_pg\n\nOn Wed, Jun 28, 2023 at 11:54 AM Laurenz Albe <[email protected]>\nwrote:\n\n> On Wed, 2023-06-28 at 11:49 +0530, Pradeep Kumar wrote:\n> > I was under the impression that the --link option would create hard\n> links between the\n> > old and new cluster's data files, but it appears that the entire old\n> cluster data was\n> > copied to the new cluster, resulting in a significant increase in the\n> new cluster's size.\n>\n> Please provide some numbers, ideally\n>\n> du -sk <old_data_directory> <new_data_directory>\n>\n> Yours,\n> Laurenz Albe\n>\n\nSure,du -sk ~/pradeep_test/pg_upgrade_testing/postgres_11.4/master ~/pradeep_test/pg_upgrade_testing/postgres_14/new_pg11224524\t/home/test/pradeep_test/pg_upgrade_testing/postgres_11.4/master41952\t/home/test/pradeep_test/pg_upgrade_testing/postgres_14/new_pgOn Wed, Jun 28, 2023 at 11:54 AM Laurenz Albe <[email protected]> wrote:On Wed, 2023-06-28 at 11:49 +0530, Pradeep Kumar wrote:\n> I was under the impression that the --link option would create hard links between the\n> old and new cluster's data files, but it appears that the entire old cluster data was\n> copied to the new cluster, resulting in a significant increase in the new cluster's size.\n\nPlease provide some numbers, ideally\n\n du -sk <old_data_directory> <new_data_directory>\n\nYours,\nLaurenz Albe",
"msg_date": "Wed, 28 Jun 2023 15:40:37 +0530",
"msg_from": "Pradeep Kumar <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Assistance Needed: Issue with pg_upgrade and --link option"
},
{
"msg_contents": "This is my numbers.\n df ~/pradeep_test/pg_upgrade_testing/postgres_11.4/master\n~/pradeep_test/pg_upgrade_testing/postgres_14/new_pg\nFilesystem 1K-blocks Used Available Use% Mounted on\n/dev/mapper/nvme0n1p4_crypt 375161856 102253040 270335920 28% /home\n/dev/mapper/nvme0n1p4_crypt 375161856 102253040 270335920 28% /home\n\nOn Wed, Jun 28, 2023 at 3:14 PM Peter Eisentraut <[email protected]>\nwrote:\n\n> On 28.06.23 08:24, Laurenz Albe wrote:\n> > On Wed, 2023-06-28 at 11:49 +0530, Pradeep Kumar wrote:\n> >> I was under the impression that the --link option would create hard\n> links between the\n> >> old and new cluster's data files, but it appears that the entire old\n> cluster data was\n> >> copied to the new cluster, resulting in a significant increase in the\n> new cluster's size.\n> >\n> > Please provide some numbers, ideally\n> >\n> > du -sk <old_data_directory> <new_data_directory>\n>\n> I don't think you can observe the effects of the --link option this way.\n> It would just give you the full size count for both directories, even\n> though the point to the same underlying inodes.\n>\n> To see the effect, you could perhaps use `df` to see how much overall\n> disk space the upgrade step eats up.\n>\n>\n\nThis is my numbers. df ~/pradeep_test/pg_upgrade_testing/postgres_11.4/master ~/pradeep_test/pg_upgrade_testing/postgres_14/new_pgFilesystem 1K-blocks Used Available Use% Mounted on/dev/mapper/nvme0n1p4_crypt 375161856 102253040 270335920 28% /home/dev/mapper/nvme0n1p4_crypt 375161856 102253040 270335920 28% /homeOn Wed, Jun 28, 2023 at 3:14 PM Peter Eisentraut <[email protected]> wrote:On 28.06.23 08:24, Laurenz Albe wrote:\n> On Wed, 2023-06-28 at 11:49 +0530, Pradeep Kumar wrote:\n>> I was under the impression that the --link option would create hard links between the\n>> old and new cluster's data files, but it appears that the entire old cluster data was\n>> copied to the new cluster, resulting in a significant increase in the new cluster's size.\n> \n> Please provide some numbers, ideally\n> \n> du -sk <old_data_directory> <new_data_directory>\n\nI don't think you can observe the effects of the --link option this way. \n It would just give you the full size count for both directories, even \nthough the point to the same underlying inodes.\n\nTo see the effect, you could perhaps use `df` to see how much overall \ndisk space the upgrade step eats up.",
"msg_date": "Wed, 28 Jun 2023 15:49:44 +0530",
"msg_from": "Pradeep Kumar <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Assistance Needed: Issue with pg_upgrade and --link option"
},
{
"msg_contents": "On Wed, 2023-06-28 at 15:40 +0530, Pradeep Kumar wrote:\n> > > I was under the impression that the --link option would create hard links between the\n> > > old and new cluster's data files, but it appears that the entire old cluster data was\n> > > copied to the new cluster, resulting in a significant increase in the new cluster's size.\n> > \n> > Please provide some numbers, ideally\n> > \n> > du -sk <old_data_directory> <new_data_directory>\n>\n> du -sk ~/pradeep_test/pg_upgrade_testing/postgres_11.4/master ~/pradeep_test/pg_upgrade_testing/postgres_14/new_pg\n> 11224524 /home/test/pradeep_test/pg_upgrade_testing/postgres_11.4/master\n> 41952 /home/test/pradeep_test/pg_upgrade_testing/postgres_14/new_pg\n\nThat looks fine. The files exist only once, and the 41MB that only exist in\nthe new data directory are catalog data and other stuff that is different\non the new cluster.\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Wed, 28 Jun 2023 12:46:58 +0200",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Assistance Needed: Issue with pg_upgrade and --link option"
},
{
"msg_contents": "On 28.06.23 12:46, Laurenz Albe wrote:\n> On Wed, 2023-06-28 at 15:40 +0530, Pradeep Kumar wrote:\n>>>> I was under the impression that the --link option would create hard links between the\n>>>> old and new cluster's data files, but it appears that the entire old cluster data was\n>>>> copied to the new cluster, resulting in a significant increase in the new cluster's size.\n>>>\n>>> Please provide some numbers, ideally\n>>>\n>>> du -sk <old_data_directory> <new_data_directory>\n>>\n>> du -sk ~/pradeep_test/pg_upgrade_testing/postgres_11.4/master ~/pradeep_test/pg_upgrade_testing/postgres_14/new_pg\n>> 11224524 /home/test/pradeep_test/pg_upgrade_testing/postgres_11.4/master\n>> 41952 /home/test/pradeep_test/pg_upgrade_testing/postgres_14/new_pg\n> \n> That looks fine. The files exist only once, and the 41MB that only exist in\n> the new data directory are catalog data and other stuff that is different\n> on the new cluster.\n\nInteresting, so it actually does count files with multiple hardlinks \nonly once.\n\n\n\n",
"msg_date": "Wed, 28 Jun 2023 18:31:01 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Assistance Needed: Issue with pg_upgrade and --link option"
}
] |
[
{
"msg_contents": "This email has no patch yet -- it's more of a placeholder to gather some\nissues into one place. During previous work on replacing vacuum's bsearch'd\narray for TID storage with a radix tree, Andres suggested [1] that the hash\ntable in tidbitmap.c should also be replaced. This will hopefully\nilluminate how to get there.\n\n* Current limitations\n\nPage table entries are of fixed-size. At PGCon, Mark Dilger and Andres\ndiscussed the problem that bitmap scans assume a hard-coded limit on the\noffset of a TID, one that's particular to heap AM. That's not a requirement\nof hash tables in general, but that's the current state of the\nimplementation. Using a radix tree would smooth the way to allowing\nvariable-length page table entries. I have briefly talked about it with\ncolleagues offlist as well.\n\nThe radix tree implementation that Masahiko and I have worked on does still\ncurrently assume fixed-sized values. (not absolutely, but fixed at compile\ntime for a given use), but last month I did some refactoring that would\nmake variable-sized values fairly straightforward, at least no big\nchallenge. There would of course also be some extra complexity in doing TBM\nunion/intersection operations etc. Recent work I did also went in the\ndirection of storing small-enough values in the last-level pointer, saving\nmemory (as well as time spent accessing it). That seems important, since\ntrees do have some space overhead compared to arrays.\n\n* Iteration/ordering\n\nThere are also now some unpleasant consequences that stem from hashed\nblocknumbers:\n- To get them ready for the executor the entries need to be sorted by\nblocknumber, and \"random\" is a strenuous sorting case, because of cache\nmisses and branch mispredicts.\n- Pages get lossified (when necessary) more-or-less at random\n\nRadix trees maintain logical ordering, allowing for ordered iteration, so\nthat solves the sorting problem, and should help give a performance boost.\n\nOne hurdle is that shared iteration must work so that each worker can have\na disjoint subset of the input. The radix tree does have shared memory\nsupport, but not yet shared iteration since there hasn't been a concrete\nuse case. Also, DSA has a noticeable performance cost. A good interim\ndevelopment step is to use a local-mem radix tree for the index scan, and\nthen move everything out to the current array for the executor, in shmem if\nthe heap scan will be parallel. (I have coded some steps in that direction,\nnot ready to share.) That keeps that part of the interface the same,\nsimplifying testing. It's possible this much would work even for varlen\nbitmaps: the iteration array could use a \"tall skinny\" page table entry\nformat, like\n\n{ blockno; <metadata>; wordnum; bitmapword; }\n\n...which would save space in many cases. Long term, we will want to move to\nshared memory for the radix tree, at least as a prerequisite for parallel\nbitmap index scan. The concurrency scheme is likely too coarse to make that\nworthwhile now, but that will hopefully change at some point.\n\n* Possible simplification\n\nSome of the above adds complexity, but I see a possible simplification:\nMany places in tidbitmap.c need to know if we have a single entry, to keep\nfrom creating the hash table. That was added before simplehash.h existed. I\nsuspect most of the overhead now in creating the hash table is in zeroing\nthe backing array (correct me if I'm wrong). 
The radix tree wouldn't do\nthat, but it would create about half a dozen memory contexts, and inserting\na single entry would allocate one or two context blocks. Neither of these\nare free either. If the single-entry case is still worth optimizing, it\ncould be pushed down inside inside the radix tree as a template option that\nlazily creates memory contexts etc.\n\n* Multiple use\n\nVacuum concerns started this in the first place, so it'll have to be kept\nin mind as we proceed. At the very least, vacuum will need a boolean to\ndisallow lossifying pages, but the rest should work about the same.\n\nThere are some other things left out, like memory management and lossy\nentries to work out, but this is enough to give a sense of what's involved.\n\n[1]\nhttps://www.postgresql.org/message-id/20230216164408.bcatntzzxj3jqn3q%40awork3.anarazel.de\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com",
"msg_date": "Wed, 28 Jun 2023 14:29:39 +0700",
"msg_from": "John Naylor <[email protected]>",
"msg_from_op": true,
"msg_subject": "removing limitations from bitmap index scan"
}
] |
[
{
"msg_contents": "A quick scan of the archives doesn't turn up anyone who has volunteered in\nadvance to run the upcoming commitfest. Is anyone keen at trying their hand at\nthis very important community work? The July CF is good for anyone doing this\nfor the first time IMHO as it's usually less stressful than the ones later in\nthe cycle.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Wed, 28 Jun 2023 09:45:12 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": true,
"msg_subject": "Commitfest manager for July"
},
{
"msg_contents": "> On 28 Jun 2023, at 09:45, Daniel Gustafsson <[email protected]> wrote:\n> \n> A quick scan of the archives doesn't turn up anyone who has volunteered in\n> advance to run the upcoming commitfest. Is anyone keen at trying their hand at\n> this very important community work? The July CF is good for anyone doing this\n> for the first time IMHO as it's usually less stressful than the ones later in\n> the cycle.\n\nSince this didn't get any takers, and we are in July AoE since a few days ago,\nI guess I'll assume the role this time in the interest of moving things along.\nI've switched the 2023-07 CF to in-progress and 2023-09 to open, let's try to\nclose patches!\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Mon, 3 Jul 2023 12:26:44 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Commitfest manager for July"
},
{
"msg_contents": "On Mon, Jul 03, 2023 at 12:26:44PM +0200, Daniel Gustafsson wrote:\n> Since this didn't get any takers, and we are in July AoE since a few days ago,\n> I guess I'll assume the role this time in the interest of moving things along.\n> I've switched the 2023-07 CF to in-progress and 2023-09 to open, let's try to\n> close patches!\n\nThanks, Daniel!\n--\nMichael",
"msg_date": "Tue, 4 Jul 2023 09:03:10 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Commitfest manager for July"
},
{
"msg_contents": "On Tue, Jul 4, 2023 at 5:03 AM Michael Paquier <[email protected]> wrote:\n\n> On Mon, Jul 03, 2023 at 12:26:44PM +0200, Daniel Gustafsson wrote:\n> > Since this didn't get any takers, and we are in July AoE since a few\n> days ago,\n> > I guess I'll assume the role this time in the interest of moving things\n> along.\n> > I've switched the 2023-07 CF to in-progress and 2023-09 to open, let's\n> try to\n> > close patches!\n>\n> Thanks, Daniel!\n> --\n> Michael\n>\nIf nobody taking that, I can take the responsibility.--\nIbrar Ahmed\n\nOn Tue, Jul 4, 2023 at 5:03 AM Michael Paquier <[email protected]> wrote:On Mon, Jul 03, 2023 at 12:26:44PM +0200, Daniel Gustafsson wrote:\n> Since this didn't get any takers, and we are in July AoE since a few days ago,\n> I guess I'll assume the role this time in the interest of moving things along.\n> I've switched the 2023-07 CF to in-progress and 2023-09 to open, let's try to\n> close patches!\n\nThanks, Daniel!\n--\nMichael\nIf nobody taking that, I can take the responsibility.-- Ibrar Ahmed",
"msg_date": "Tue, 4 Jul 2023 07:27:19 +0500",
"msg_from": "Ibrar Ahmed <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Commitfest manager for July"
}
] |
[
{
"msg_contents": "Hi,\n\n\nMini repo\n\ncreate table t1(c1 int, c2 int);\nCREATE TABLE\ncreate table t2(c1 int, c2 int);\nCREATE TABLE\nexplain with cte1 as (insert into t2 values (1, 2) returning *) select * from cte1 join t1 using(c1);\n QUERY PLAN\n----------------------------------------------------------------\n Hash Join (cost=0.04..41.23 rows=11 width=12)\n Hash Cond: (t1.c1 = cte1.c1)\n CTE cte1\n -> Insert on t2 (cost=0.00..0.01 rows=1 width=8)\n -> Result (cost=0.00..0.01 rows=1 width=8)\n -> Seq Scan on t1 (cost=0.00..32.60 rows=2260 width=8)\n -> Hash (cost=0.02..0.02 rows=1 width=8)\n -> CTE Scan on cte1 (cost=0.00..0.02 rows=1 width=8)\n(8 rows)\n\nwith cte1 as (insert into t2 values (1, 2) returning *) select * from cte1 join t1 using(c1);\n c1 | c2 | c2\n----+----+----\n(0 rows)\n\ntruncate t2;\nTRUNCATE TABLE\nwith cte1 as (insert into t2 values (1, 2) returning *) select cte1.*, t1.* from cte1 join t1 using(c1);\n c1 | c2 | c1 | c2\n----+----+----+----\n(0 rows)\n\nTable t1 and t2 both has 2 columns: c1, c2, when CTE join select *, the result target list seems to lost one’s column c1.\nBut it looks good when select cte1.* and t1.* explicitly .\n\nIs it a bug?\n\nRegards,\nZhang Mingli\n\n\n\n\n\n\n\nHi,\n\n\nMini repo\n\ncreate table t1(c1 int, c2 int);\nCREATE TABLE\ncreate table t2(c1 int, c2 int);\nCREATE TABLE\nexplain with cte1 as (insert into t2 values (1, 2) returning *) select * from cte1 join t1 using(c1);\n QUERY PLAN\n----------------------------------------------------------------\n Hash Join (cost=0.04..41.23 rows=11 width=12)\n Hash Cond: (t1.c1 = cte1.c1)\n CTE cte1\n -> Insert on t2 (cost=0.00..0.01 rows=1 width=8)\n -> Result (cost=0.00..0.01 rows=1 width=8)\n -> Seq Scan on t1 (cost=0.00..32.60 rows=2260 width=8)\n -> Hash (cost=0.02..0.02 rows=1 width=8)\n -> CTE Scan on cte1 (cost=0.00..0.02 rows=1 width=8)\n(8 rows)\n\nwith cte1 as (insert into t2 values (1, 2) returning *) select * from cte1 join t1 using(c1);\n c1 | c2 | c2\n----+----+----\n(0 rows)\n\ntruncate t2;\nTRUNCATE TABLE\nwith cte1 as (insert into t2 values (1, 2) returning *) select cte1.*, t1.* from cte1 join t1 using(c1);\n c1 | c2 | c1 | c2\n----+----+----+----\n(0 rows)\n\nTable t1 and t2 both has 2 columns: c1, c2, when CTE join select *, the result target list seems to lost one’s column c1.\nBut it looks good when select cte1.* and t1.* explicitly .\n\nIs it a bug?\n\n\nRegards,\nZhang Mingli",
"msg_date": "Wed, 28 Jun 2023 16:52:34 +0800",
"msg_from": "Zhang Mingli <[email protected]>",
"msg_from_op": true,
"msg_subject": "Targetlist lost when CTE join <targetlist lost when CTE join>"
},
{
"msg_contents": "Hi,\n\nExplain verbose, seems HashJoin node drop that column.\n\n\ngpadmin=# explain(verbose) with cte1 as (insert into t2 values (1, 2) returning *) select * from cte1 join t1 using(c1);\n QUERY PLAN\n-------------------------------------------------------------------\n Hash Join (cost=0.04..41.23 rows=11 width=12)\n Output: cte1.c1, cte1.c2, t1.c2\n Hash Cond: (t1.c1 = cte1.c1)\n CTE cte1\n -> Insert on public.t2 (cost=0.00..0.01 rows=1 width=8)\n Output: t2.c1, t2.c2\n -> Result (cost=0.00..0.01 rows=1 width=8)\n Output: 1, 2\n -> Seq Scan on public.t1 (cost=0.00..32.60 rows=2260 width=8)\n Output: t1.c1, t1.c2\n -> Hash (cost=0.02..0.02 rows=1 width=8)\n Output: cte1.c1, cte1.c2\n -> CTE Scan on cte1 (cost=0.00..0.02 rows=1 width=8)\n Output: cte1.c1, cte1.c2\n(14 rows)\n\n\nRegards,\nZhang Mingli\n\n\n\n\n\n\n\nHi,\n\nExplain verbose, seems HashJoin node drop that column.\n\n\ngpadmin=# explain(verbose) with cte1 as (insert into t2 values (1, 2) returning *) select * from cte1 join t1 using(c1);\n QUERY PLAN\n-------------------------------------------------------------------\n Hash Join (cost=0.04..41.23 rows=11 width=12)\n Output: cte1.c1, cte1.c2, t1.c2\n Hash Cond: (t1.c1 = cte1.c1)\n CTE cte1\n -> Insert on public.t2 (cost=0.00..0.01 rows=1 width=8)\n Output: t2.c1, t2.c2\n -> Result (cost=0.00..0.01 rows=1 width=8)\n Output: 1, 2\n -> Seq Scan on public.t1 (cost=0.00..32.60 rows=2260 width=8)\n Output: t1.c1, t1.c2\n -> Hash (cost=0.02..0.02 rows=1 width=8)\n Output: cte1.c1, cte1.c2\n -> CTE Scan on cte1 (cost=0.00..0.02 rows=1 width=8)\n Output: cte1.c1, cte1.c2\n(14 rows)\n\n\n\nRegards,\nZhang Mingli",
"msg_date": "Wed, 28 Jun 2023 16:55:09 +0800",
"msg_from": "Zhang Mingli <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Targetlist lost when CTE join <targetlist lost when CTE\n join>"
},
{
"msg_contents": "Hi,\n\nOn Wed, Jun 28, 2023 at 04:52:34PM +0800, Zhang Mingli wrote:\n>\n> Mini repo\n>\n> create table t1(c1 int, c2 int);\n> CREATE TABLE\n> create table t2(c1 int, c2 int);\n> CREATE TABLE\n> explain with cte1 as (insert into t2 values (1, 2) returning *) select * from cte1 join t1 using(c1);\n> QUERY PLAN\n> ----------------------------------------------------------------\n> Hash Join (cost=0.04..41.23 rows=11 width=12)\n> Hash Cond: (t1.c1 = cte1.c1)\n> CTE cte1\n> -> Insert on t2 (cost=0.00..0.01 rows=1 width=8)\n> -> Result (cost=0.00..0.01 rows=1 width=8)\n> -> Seq Scan on t1 (cost=0.00..32.60 rows=2260 width=8)\n> -> Hash (cost=0.02..0.02 rows=1 width=8)\n> -> CTE Scan on cte1 (cost=0.00..0.02 rows=1 width=8)\n> (8 rows)\n>\n> with cte1 as (insert into t2 values (1, 2) returning *) select * from cte1 join t1 using(c1);\n> c1 | c2 | c2\n> ----+----+----\n> (0 rows)\n>\n> truncate t2;\n> TRUNCATE TABLE\n> with cte1 as (insert into t2 values (1, 2) returning *) select cte1.*, t1.* from cte1 join t1 using(c1);\n> c1 | c2 | c1 | c2\n> ----+----+----+----\n> (0 rows)\n>\n> Table t1 and t2 both has 2 columns: c1, c2, when CTE join select *, the result target list seems to lost one’s column c1.\n> But it looks good when select cte1.* and t1.* explicitly .\n>\n> Is it a bug?\n\nThis is working as intended. When using a USING clause you \"merge\" both\ncolumns so the final target list only contain one version of the merged\ncolumns, which doesn't happen if you use e.g. ON instead. I'm assuming that\nwhat the SQL standard says, but I don't have a copy to confirm.\n\n\n",
"msg_date": "Wed, 28 Jun 2023 17:17:14 +0800",
"msg_from": "Julien Rouhaud <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Targetlist lost when CTE join <targetlist lost when CTE join>"
},
{
"msg_contents": "Hi\n\nRegards,\nZhang Mingli\nOn Jun 28, 2023, 17:17 +0800, Julien Rouhaud <[email protected]>, wrote:\n> This is working as intended. When using a USING clause you \"merge\" both\n> columns so the final target list only contain one version of the merged\n> columns, which doesn't happen if you use e.g. ON instead. I'm assuming that\n> what the SQL standard says, but I don't have a copy to confirm.\n\nThanks. You’r right.\n\nHave a test:\n\ngpadmin=# with cte1 as (insert into t2 values (1, 2) returning *) select * from cte1 join t1 on t1.c1 = cte1.c1;\n c1 | c2 | c1 | c2\n----+----+----+----\n(0 rows)\n\n\n\n\n\n\n\nHi\n\n\nRegards,\nZhang Mingli\n\n\nOn Jun 28, 2023, 17:17 +0800, Julien Rouhaud <[email protected]>, wrote:\nThis is working as intended. When using a USING clause you \"merge\" both\ncolumns so the final target list only contain one version of the merged\ncolumns, which doesn't happen if you use e.g. ON instead. I'm assuming that\nwhat the SQL standard says, but I don't have a copy to confirm.\n\nThanks. You’r right.\n \nHave a test:\n\ngpadmin=# with cte1 as (insert into t2 values (1, 2) returning *) select * from cte1 join t1 on t1.c1 = cte1.c1;\n c1 | c2 | c1 | c2\n----+----+----+----\n(0 rows)",
"msg_date": "Wed, 28 Jun 2023 17:23:59 +0800",
"msg_from": "Zhang Mingli <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Targetlist lost when CTE join <targetlist lost when CTE\n join>"
},
{
"msg_contents": "On Wed, Jun 28, 2023 at 05:17:14PM +0800, Julien Rouhaud wrote:\n> >\n> > Table t1 and t2 both has 2 columns: c1, c2, when CTE join select *, the result target list seems to lost one’s column c1.\n> > But it looks good when select cte1.* and t1.* explicitly .\n> >\n> > Is it a bug?\n>\n> This is working as intended. When using a USING clause you \"merge\" both\n> columns so the final target list only contain one version of the merged\n> columns, which doesn't happen if you use e.g. ON instead. I'm assuming that\n> what the SQL standard says, but I don't have a copy to confirm.\n\nI forgot to mention that this is actually documented:\n\nhttps://www.postgresql.org/docs/current/queries-table-expressions.html\n\nFurthermore, the output of JOIN USING suppresses redundant columns: there is no\nneed to print both of the matched columns, since they must have equal values.\nWhile JOIN ON produces all columns from T1 followed by all columns from T2,\nJOIN USING produces one output column for each of the listed column pairs (in\nthe listed order), followed by any remaining columns from T1, followed by any\nremaining columns from T2.\n\n\n",
"msg_date": "Wed, 28 Jun 2023 17:26:10 +0800",
"msg_from": "Julien Rouhaud <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Targetlist lost when CTE join <targetlist lost when CTE join>"
},
{
"msg_contents": "HI,\n\nOn Jun 28, 2023, 17:26 +0800, Julien Rouhaud <[email protected]>, wrote:\n> On Wed, Jun 28, 2023 at 05:17:14PM +0800, Julien Rouhaud wrote:\n> > >\n> > > Table t1 and t2 both has 2 columns: c1, c2, when CTE join select *, the result target list seems to lost one’s column c1.\n> > > But it looks good when select cte1.* and t1.* explicitly .\n> > >\n> > > Is it a bug?\n> >\n> > This is working as intended. When using a USING clause you \"merge\" both\n> > columns so the final target list only contain one version of the merged\n> > columns, which doesn't happen if you use e.g. ON instead. I'm assuming that\n> > what the SQL standard says, but I don't have a copy to confirm.\n>\n> I forgot to mention that this is actually documented:\n>\n> https://www.postgresql.org/docs/current/queries-table-expressions.html\n>\n> Furthermore, the output of JOIN USING suppresses redundant columns: there is no\n> need to print both of the matched columns, since they must have equal values.\n> While JOIN ON produces all columns from T1 followed by all columns from T2,\n> JOIN USING produces one output column for each of the listed column pairs (in\n> the listed order), followed by any remaining columns from T1, followed by any\n> remaining columns from T2.\n\nThanks for your help.\n\nRegards,\nZhang Mingli\n\n\n\n\n\n\n\nHI, \n\nOn Jun 28, 2023, 17:26 +0800, Julien Rouhaud <[email protected]>, wrote:\nOn Wed, Jun 28, 2023 at 05:17:14PM +0800, Julien Rouhaud wrote:\n\n\nTable t1 and t2 both has 2 columns: c1, c2, when CTE join select *, the result target list seems to lost one’s column c1.\nBut it looks good when select cte1.* and t1.* explicitly .\n\nIs it a bug?\n\nThis is working as intended. When using a USING clause you \"merge\" both\ncolumns so the final target list only contain one version of the merged\ncolumns, which doesn't happen if you use e.g. ON instead. I'm assuming that\nwhat the SQL standard says, but I don't have a copy to confirm.\n\nI forgot to mention that this is actually documented:\n\nhttps://www.postgresql.org/docs/current/queries-table-expressions.html\n\nFurthermore, the output of JOIN USING suppresses redundant columns: there is no\nneed to print both of the matched columns, since they must have equal values.\nWhile JOIN ON produces all columns from T1 followed by all columns from T2,\nJOIN USING produces one output column for each of the listed column pairs (in\nthe listed order), followed by any remaining columns from T1, followed by any\nremaining columns from T2.\n\nThanks for your help.\n\n\nRegards,\nZhang Mingli",
"msg_date": "Wed, 28 Jun 2023 17:32:37 +0800",
"msg_from": "Zhang Mingli <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Targetlist lost when CTE join <targetlist lost when CTE\n join>"
}
] |
[
{
"msg_contents": "Hi,\n\nThis is a WIP patch to add WAL write and fsync stats to pg_stat_io\nview. There is a track_io_timing variable to control pg_stat_io\ntimings and a track_wal_io_timing variable to control WAL timings. I\ncouldn't decide on which logic to enable WAL timings on pg_stat_io.\nFor now, both pg_stat_io and track_wal_io_timing are needed to be\nenabled to track WAL timings in pg_stat_io.\n\nAlso, if you compare WAL stats in pg_stat_wal and pg_stat_io; you can\ncome across differences. These differences are caused by the\nbackground writer's WAL stats not being flushed. Because of that,\nbackground writer's WAL stats are not seen in pg_stat_wal but in\npg_stat_io. I already sent a patch [1] to fix that.\n\n[1] https://www.postgresql.org/message-id/CAN55FZ2FPYngovZstr%3D3w1KSEHe6toiZwrurbhspfkXe5UDocg%40mail.gmail.com\n\nAny kind of feedback would be appreciated.\n\nRegards,\nNazir Bilal Yavuz\nMicrosoft",
"msg_date": "Wed, 28 Jun 2023 13:09:14 +0300",
"msg_from": "Nazir Bilal Yavuz <[email protected]>",
"msg_from_op": true,
"msg_subject": "Show WAL write and fsync stats in pg_stat_io"
},
{
"msg_contents": "On Wed, Jun 28, 2023 at 6:09 AM Nazir Bilal Yavuz <[email protected]> wrote:\n> This is a WIP patch to add WAL write and fsync stats to pg_stat_io\n> view.\n\nThanks for working on this! I have some feedback on the content of the\npatch as well as some items that I feel are missing.\n\nI think it would be good to count WAL reads even though they are not\ncurrently represented in pg_stat_wal. Here is a thread discussing this\n[1].\n\nEventually, the docs will need an update as well. You can wait until a\nlater version of the patch to do this, but I would include it in a list\nof the remaining TODOs in your next version.\n\nI think we will also want to add an IOContext for WAL initialization.\nThen we can track how long is spent doing WAL init (including filling\nthe WAL file with zeroes). XLogFileInitInternal() is likely where we\nwould want to add it. And op_bytes for this would likely be\nwal_segment_size. I thought I heard about someone proposing adding WAL\ninit to pg_stat_wal, but I can't find the thread.\n\nI think there is also an argument for counting WAL files recycled as\nIOOP_REUSES. We should start thinking about how to interpret the\ndifferent IOOps within the two IOContexts and discussing what would be\nuseful to count. For example, should removing a logfile count as an\nIOOP_EVICT? Maybe it is not directly related to \"IO\" enough or even an\ninteresting statistic, but we should think about what kinds of\nIO-related WAL statistics we want to track.\n\nAny that we decide not to count for now should be \"banned\" in\npgstat_tracks_io_op() for clarity. For example, if we create a separate\nIOContext for WAL file init, I'm not sure what would count as an\nIOOP_EXTEND in IOCONTEXT_NORMAL for IOOBJECT_WAL.\n\nAlso, I think there are some backend types which will not generate WAL\nand we should determine which those are and skip those rows in\npgstat_tracks_io_object().\n\ndiff --git a/src/backend/access/transam/xlog.c\nb/src/backend/access/transam/xlog.c\nindex 8b0710abe6..2ee6c21398 100644\n--- a/src/backend/access/transam/xlog.c\n+++ b/src/backend/access/transam/xlog.c\n@@ -2207,6 +2207,10 @@ XLogWrite(XLogwrtRqst WriteRqst, TimeLineID\ntli, bool flexible)\n\nI think we should likely follow the pattern of using\npgstat_prepare_io_time() and pgstat_count_io_op_time() as it is done\nelsewhere. You could pass the IOObject as a parameter to\npgstat_prepare_io_time() in order to determine if we should check\ntrack_io_timing or track_wal_io_timing. And we'll want to check\ntrack_wal_io_timing if IOObject is IOOBJECT_WAL in\npgstat_count_io_op_time().\n\n INSTR_TIME_SET_CURRENT(duration);\n\nINSTR_TIME_ACCUM_DIFF(PendingWalStats.wal_write_time, duration,\nstart);\n\n+ pgstat_count_io_op_time(IOOBJECT_WAL,\nIOCONTEXT_NORMAL, IOOP_WRITE, start, 1);\n+ } else\n+ {\n\nOther users of pgstat_count_io_op_time()/io_op_n() which write multiple\npages at a time pass the number of pages in as the cnt parameter. (see\nExtendBufferedRelLocal() as an example). I think we want to do that for\nWAL also. In this case, it would be the local variable \"npages\" and we\ncan do it outside of this loop.\n\nIt is true that the existing WAL stats count wal_writes here. However,\nthis is essentially counting write system calls, which is probably not\nwhat we want for pg_stat_io. See [2] for a discussion about whether to\ncount blocks written back or writeback system calls for a previous\npg_stat_io feature. 
All of the other block-based IO statistics in\npg_stat_io count the number of blocks.\n\nThis being said, we probably want to just leave\nPendingWalStats.wal_write++ here. We would normally move it into\npg_stat_io like we have with pgBufferUsage and the db IO stats that are\nupdated in pgstat_count_io_op_time(). This consolidation makes it easier\nto eventually reduce the duplication. However, in this case, it seems\nwal_write counts something we don't count in pg_stat_io, so it can\nprobably be left here. I would still move the\nPendingWalStats.wal_write_time into pgstat_count_io_op_time(), since\nthat seems like it is the same as what will be in pg_stat_io.\n\nAlso, op_bytes for IOOBJECT_WAL/IOCONTEXT_NORMAL should be XLOG_BLCKSZ\n(see comment in pg_stat_get_io() in pgstatfuncs.c). Those default to the\nsame value but can be made to be different.\n\n\n+ pgstat_count_io_op_n(IOOBJECT_WAL,\nIOCONTEXT_NORMAL, IOOP_WRITE, 1);\n }\n\n PendingWalStats.wal_write++;\n\n@@ -8233,6 +8237,10 @@ issue_xlog_fsync(int fd, XLogSegNo segno, TimeLineID tli)\n\n INSTR_TIME_SET_CURRENT(duration);\n INSTR_TIME_ACCUM_DIFF(PendingWalStats.wal_sync_time, duration, start);\n+ pgstat_count_io_op_time(IOOBJECT_WAL, IOCONTEXT_NORMAL,\nIOOP_FSYNC, start, 1);\n\nI would wrap this line and check other lines to make sure they are not\ntoo long.\n\n+ } else\n+ {\n+ pgstat_count_io_op_n(IOOBJECT_WAL, IOCONTEXT_NORMAL, IOOP_FSYNC, 1);\n }\n\n PendingWalStats.wal_sync++;\n\nSame feedback as above about using the prepare/count pattern used for\npg_stat_io elsewhere. In this case, you should be able to move\nPendingWalStats.wal_sync into there as well.\n\ndiff --git a/src/backend/utils/activity/pgstat_io.c\nb/src/backend/utils/activity/pgstat_io.c\n@@ -350,6 +352,11 @@ pgstat_tracks_io_object(BackendType bktype,\nIOObject io_object,\n if (!pgstat_tracks_io_bktype(bktype))\n return false;\n\n+\n+ if (io_context != IOCONTEXT_NORMAL &&\n+ io_object == IOOBJECT_WAL)\n+ return false;\n\nWe should add more restrictions. See the top of my email for details.\n\n> There is a track_io_timing variable to control pg_stat_io\n> timings and a track_wal_io_timing variable to control WAL timings. I\n> couldn't decide on which logic to enable WAL timings on pg_stat_io.\n> For now, both pg_stat_io and track_wal_io_timing are needed to be\n> enabled to track WAL timings in pg_stat_io.\n\nHmm. I could see a case where someone doesn't want to incur the\noverhead of track_io_timing for regular IO but does want to do so for\nWAL because they are interested in a specific issue. I'm not sure\nthough. I could be convinced otherwise (based on relative overhead,\netc).\n\n> Also, if you compare WAL stats in pg_stat_wal and pg_stat_io; you can\n> come across differences. These differences are caused by the\n> background writer's WAL stats not being flushed. Because of that,\n> background writer's WAL stats are not seen in pg_stat_wal but in\n> pg_stat_io. I already sent a patch [1] to fix that.\n\nCool! Thanks for doing that.\n\n- Melanie\n\n[1] https://www.postgresql.org/message-id/flat/20230216191138.jotc73lqb7xhfqbi%40awork3.anarazel.de#eb4a641427fa1eb013e9ecdd8648e640\n[2] https://www.postgresql.org/message-id/20230504165738.4e2hfoddoels542c%40awork3.anarazel.de\n\n\n",
"msg_date": "Fri, 21 Jul 2023 18:30:06 -0400",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Show WAL write and fsync stats in pg_stat_io"
},
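To make the suggested pgstat_prepare_io_time()/pgstat_count_io_op_time() pattern concrete, a minimal sketch of how it could wrap the WAL write loop in XLogWrite() follows. It reuses the pg_stat_io API names already quoted in the review, but the exact placement and local variable names are assumptions, not taken from the actual patch:

    instr_time  io_start;

    /*
     * Start timing the write for stats; this returns a zeroed time if
     * neither track_io_timing nor track_wal_io_timing is enabled.
     */
    io_start = pgstat_prepare_io_time();

    pgstat_report_wait_start(WAIT_EVENT_WAL_WRITE);
    written = pg_pwrite(openLogFile, from, nleft, startoffset);
    pgstat_report_wait_end();

    /* count whole WAL blocks written, not write() system calls */
    pgstat_count_io_op_time(IOOBJECT_WAL, IOCONTEXT_NORMAL, IOOP_WRITE,
                            io_start, npages);

Counting npages once per flush keeps the unit of IOOP_WRITE a WAL block rather than a write() call, which is the distinction drawn in the review above.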
{
"msg_contents": "Hi,\n\nThanks for the review!\n\nCurrent status of the patch is:\n- 'WAL read' stats in xlogrecovery.c are added to pg_stat_io.\n- IOCONTEXT_INIT is added to count 'WAL init'. 'WAL init' stats are\nadded to pg_stat_io.\n- pg_stat_io shows different op_bytes for the IOOBJECT_WAL operations.\n- Working on which 'BackendType / IOContext / IOOp' should be banned\nin pg_stat_io.\n- Working on adding 'WAL read' to the xlogreader.c and walsender.c.\n- PendingWalStats.wal_sync and\nPendingWalStats.wal_write_time/PendingWalStats.wal_sync_time are moved\nto pgstat_count_io_op_n()/pgstat_count_io_op_time() respectively.\n\nTODOs:\n- Documentation.\n- Thinking about how to interpret the different IOOps within the two\nIOContexts and discussing what would be useful to count.\n- Decide which 'BackendType / IOContext / IOOp' should not be tracked.\n- Adding 'WAL read' to the xlogreader.c and walsender.c. (This could\nbe an another patch)\n- Adding WAIT_EVENT_WAL_COPY_* operations to pg_stat_io if needed.\n(This could be an another patch)\n\nOn Sat, 22 Jul 2023 at 01:30, Melanie Plageman\n<[email protected]> wrote:\n> I think it would be good to count WAL reads even though they are not\n> currently represented in pg_stat_wal. Here is a thread discussing this\n> [1].\n\nI used the same implementation in the thread link [1]. I added 'WAL\nread' to only xlogrecovery.c for now. I didn't add 'WAL read' to\nxlogreader.c and walsender.c because they cause some failures on:\n'!pgStatLocal.shmem->is_shutdown' asserts. I will spend more time on\nthese. Also, I added Bharath to CC. I have a question about 'WAL\nread':\n1. There are two places where 'WAL read' happens.\na. In WALRead() in xlogreader.c, it reads 'count' bytes, most of the\ntime count is equal to XLOG_BLCKSZ but there are some cases it is not.\nFor example\n- in XLogSendPhysical() in walsender.c WALRead() is called by nbytes\n- in WALDumpReadPage() in pg_waldump.c WALRead() is called by count\nThese nbytes and count variables could be different from XLOG_BLCKSZ.\n\nb. in XLogPageRead() in xlogreader.c, it reads exactly XLOG_BLCKSZ bytes:\npg_pread(readFile, readBuf, XLOG_BLCKSZ, (off_t) readOff);\n\nSo, what should op_bytes be set to for 'WAL read' operations?\n\n> Eventually, the docs will need an update as well. You can wait until a\n> later version of the patch to do this, but I would include it in a list\n> of the remaining TODOs in your next version.\n\nDone. I shared TODOs at the top.\n\n> I think we will also want to add an IOContext for WAL initialization.\n> Then we can track how long is spent doing 'WAL init' (including filling\n> the WAL file with zeroes). XLogFileInitInternal() is likely where we\n> would want to add it. And op_bytes for this would likely be\n> wal_segment_size. I thought I heard about someone proposing adding WAL\n> init to pg_stat_wal, but I can't find the thread.\n\nDone. I created a new IOCONTEXT_INIT IOContext for the 'WAL init'. I\nhave a question there:\n1. Some of the WAL processes happens at initdb (standalone backend\nIOCONTEXT_NORMAL/(IOOP_READ & IOOP_WRITE) and\nIOCONTEXT_INIT/(IOOP_WRITE & IOOP_FSYNC)). Since this happens at the\ninitdb, AFAIK there is no way to set 'track_wal_io_timing' and\n'track_io_timing' variables there. So, their timings appear as 0.\nShould I use IsBootstrapProcessingMode() to enable WAL io timings at\nthe initdb or are they not that much important?\n\n> I think there is also an argument for counting WAL files recycled as\n> IOOP_REUSES. 
We should start thinking about how to interpret the\n> different IOOps within the two IOContexts and discussing what would be\n> useful to count. For example, should removing a logfile count as an\n> IOOP_EVICT? Maybe it is not directly related to \"IO\" enough or even an\n> interesting statistic, but we should think about what kinds of\n> IO-related WAL statistics we want to track.\n\nI added that to TODOs.\n\n> Any that we decide not to count for now should be \"banned\" in\n> pgstat_tracks_io_op() for clarity. For example, if we create a separate\n> IOContext for WAL file init, I'm not sure what would count as an\n> IOOP_EXTEND in IOCONTEXT_NORMAL for IOOBJECT_WAL.\n>\n> Also, I think there are some backend types which will not generate WAL\n> and we should determine which those are and skip those rows in\n> pgstat_tracks_io_object().\n\nI agree, I am working on this. I have a couple of questions:\n1. Can client backend and background worker do IOCONTEXT_NORMAL/IOOP_READ?\n2. Is there an easy way to check if 'BackendType / IOOBJECT_WAL' does\nspecific IOOp operations?\n\n> diff --git a/src/backend/access/transam/xlog.c\n> b/src/backend/access/transam/xlog.c\n> index 8b0710abe6..2ee6c21398 100644\n> --- a/src/backend/access/transam/xlog.c\n> +++ b/src/backend/access/transam/xlog.c\n> @@ -2207,6 +2207,10 @@ XLogWrite(XLogwrtRqst WriteRqst, TimeLineID\n> tli, bool flexible)\n>\n> I think we should likely follow the pattern of using\n> pgstat_prepare_io_time() and pgstat_count_io_op_time() as it is done\n> elsewhere. You could pass the IOObject as a parameter to\n> pgstat_prepare_io_time() in order to determine if we should check\n> track_io_timing or track_wal_io_timing. And we'll want to check\n> track_wal_io_timing if IOObject is IOOBJECT_WAL in\n> pgstat_count_io_op_time().\n\nDone. Instead of passing parameters to pgstat_prepare_io_time(), I\nused a slightly different implementation. I return the current time if\nthere is a chance that any 'time' can be tracked.\n\n> INSTR_TIME_SET_CURRENT(duration);\n>\n> INSTR_TIME_ACCUM_DIFF(PendingWalStats.wal_write_time, duration,\n> start);\n>\n> + pgstat_count_io_op_time(IOOBJECT_WAL,\n> IOCONTEXT_NORMAL, IOOP_WRITE, start, 1);\n> + } else\n> + {\n>\n> Other users of pgstat_count_io_op_time()/io_op_n() which write multiple\n> pages at a time pass the number of pages in as the cnt parameter. (see\n> ExtendBufferedRelLocal() as an example). I think we want to do that for\n> WAL also. In this case, it would be the local variable \"npages\" and we\n> can do it outside of this loop.\n>\n> It is true that the existing WAL stats count wal_writes here. However,\n> this is essentially counting write system calls, which is probably not\n> what we want for pg_stat_io. See [2] for a discussion about whether to\n> count blocks written back or writeback system calls for a previous\n> pg_stat_io feature. All of the other block-based IO statistics in\n> pg_stat_io count the number of blocks.\n>\n> This being said, we probably want to just leave\n> PendingWalStats.wal_write++ here. We would normally move it into\n> pg_stat_io like we have with pgBufferUsage and the db IO stats that are\n> updated in pgstat_count_io_op_time(). This consolidation makes it easier\n> to eventually reduce the duplication. However, in this case, it seems\n> wal_write counts something we don't count in pg_stat_io, so it can\n> probably be left here. 
I would still move the\n> PendingWalStats.wal_write_time into pgstat_count_io_op_time(), since\n> that seems like it is the same as what will be in pg_stat_io.\n\nDone. I moved PendingWalStats.wal_sync and\nPendingWalStats.wal_write_time/PendingWalStats.wal_sync_time to\npgstat_count_io_op_n()/pgstat_count_io_op_time() respectively. Because\nof this change, pg_stat_wal's and pg_stat_io's\nIOOBJECT_WAL/IOCONTEXT_NORMAL/IOOP_WRITE counts are different but the\nrest are the same.\n\n> Also, op_bytes for IOOBJECT_WAL/IOCONTEXT_NORMAL should be XLOG_BLCKSZ\n> (see comment in pg_stat_get_io() in pgstatfuncs.c). Those default to the\n> same value but can be made to be different.\n\nDone.\n\n> I would wrap this line and check other lines to make sure they are not\n> too long.\n\nDone.\n\n>\n> + } else\n> + {\n> + pgstat_count_io_op_n(IOOBJECT_WAL, IOCONTEXT_NORMAL, IOOP_FSYNC, 1);\n> }\n>\n> PendingWalStats.wal_sync++;\n>\n> Same feedback as above about using the prepare/count pattern used for\n> pg_stat_io elsewhere. In this case, you should be able to move\n> PendingWalStats.wal_sync into there as well.\n\nDone.\n\n> > There is a track_io_timing variable to control pg_stat_io\n> > timings and a track_wal_io_timing variable to control WAL timings. I\n> > couldn't decide on which logic to enable WAL timings on pg_stat_io.\n> > For now, both pg_stat_io and track_wal_io_timing are needed to be\n> > enabled to track WAL timings in pg_stat_io.\n>\n> Hmm. I could see a case where someone doesn't want to incur the\n> overhead of track_io_timing for regular IO but does want to do so for\n> WAL because they are interested in a specific issue. I'm not sure\n> though. I could be convinced otherwise (based on relative overhead,\n> etc).\n\nDone. IOOBJECT_WAL uses track_wal_io_timing regardless of\ntrack_io_timing for now.\n\n> [1] https://www.postgresql.org/message-id/flat/20230216191138.jotc73lqb7xhfqbi%40awork3.anarazel.de#eb4a641427fa1eb013e9ecdd8648e640\n> [2] https://www.postgresql.org/message-id/20230504165738.4e2hfoddoels542c%40awork3.anarazel.de\n\nIn addition to these, are WAIT_EVENT_WAL_COPY_* operations needed to\nbe added to pg_stat_io? If the answer is yes, should I add them to the\ncurrent patch?\n\nAny kind of feedback would be appreciated.\n\nRegards,\nNazir Bilal Yavuz\nMicrosoft",
"msg_date": "Thu, 3 Aug 2023 16:38:41 +0300",
"msg_from": "Nazir Bilal Yavuz <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Show WAL write and fsync stats in pg_stat_io"
},
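For the new IOCONTEXT_INIT context described above, a rough sketch of where the counters could sit in XLogFileInitInternal() follows; pg_pwrite_zeros() and the wait events are existing APIs, but this simplified fragment (error handling elided) is an assumption about placement, not code from the patch:

    instr_time  io_start;

    /* start timing the zero-fill write for stats */
    io_start = pgstat_prepare_io_time();
    pgstat_report_wait_start(WAIT_EVENT_WAL_INIT_WRITE);
    (void) pg_pwrite_zeros(fd, wal_segment_size, 0);
    pgstat_report_wait_end();

    /* one IOOP_WRITE per new segment; op_bytes here is wal_segment_size */
    pgstat_count_io_op_time(IOOBJECT_WAL, IOCONTEXT_INIT, IOOP_WRITE,
                            io_start, 1);

    /* start timing the fsync for stats */
    io_start = pgstat_prepare_io_time();
    pgstat_report_wait_start(WAIT_EVENT_WAL_INIT_SYNC);
    (void) pg_fsync(fd);
    pgstat_report_wait_end();

    pgstat_count_io_op_time(IOOBJECT_WAL, IOCONTEXT_INIT, IOOP_FSYNC,
                            io_start, 1);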
{
"msg_contents": "On Thu, Aug 03, 2023 at 04:38:41PM +0300, Nazir Bilal Yavuz wrote:\n> Current status of the patch is:\n> - 'WAL read' stats in xlogrecovery.c are added to pg_stat_io.\n> - IOCONTEXT_INIT is added to count 'WAL init'. 'WAL init' stats are\n> added to pg_stat_io.\n> - pg_stat_io shows different op_bytes for the IOOBJECT_WAL operations.\n> - Working on which 'BackendType / IOContext / IOOp' should be banned\n> in pg_stat_io.\n> - Working on adding 'WAL read' to the xlogreader.c and walsender.c.\n> - PendingWalStats.wal_sync and\n> PendingWalStats.wal_write_time/PendingWalStats.wal_sync_time are moved\n> to pgstat_count_io_op_n()/pgstat_count_io_op_time() respectively.\n\nCool! Thanks for the summary and for continuing to work on this.\n\n> TODOs:\n> - Documentation.\n> - Thinking about how to interpret the different IOOps within the two\n> IOContexts and discussing what would be useful to count.\n> - Decide which 'BackendType / IOContext / IOOp' should not be tracked.\n> - Adding 'WAL read' to the xlogreader.c and walsender.c. (This could\n> be an another patch)\n\nYes, I would be explicit that you are not including WAL IO done exclusively in\nthe context of replication.\n\n> - Adding WAIT_EVENT_WAL_COPY_* operations to pg_stat_io if needed.\n> (This could be an another patch)\n\nYes, I think it makes sense as another patch.\n\n> \n> On Sat, 22 Jul 2023 at 01:30, Melanie Plageman\n> <[email protected]> wrote:\n> > I think it would be good to count WAL reads even though they are not\n> > currently represented in pg_stat_wal. Here is a thread discussing this\n> > [1].\n> \n> I used the same implementation in the thread link [1]. I added 'WAL\n> read' to only xlogrecovery.c for now. I didn't add 'WAL read' to\n> xlogreader.c and walsender.c because they cause some failures on:\n> '!pgStatLocal.shmem->is_shutdown' asserts. I will spend more time on\n> these. Also, I added Bharath to CC. I have a question about 'WAL\n> read':\n> 1. There are two places where 'WAL read' happens.\n> a. In WALRead() in xlogreader.c, it reads 'count' bytes, most of the\n> time count is equal to XLOG_BLCKSZ but there are some cases it is not.\n> For example\n> - in XLogSendPhysical() in walsender.c WALRead() is called by nbytes\n> - in WALDumpReadPage() in pg_waldump.c WALRead() is called by count\n> These nbytes and count variables could be different from XLOG_BLCKSZ.\n> \n> b. in XLogPageRead() in xlogreader.c, it reads exactly XLOG_BLCKSZ bytes:\n> pg_pread(readFile, readBuf, XLOG_BLCKSZ, (off_t) readOff);\n> \n> So, what should op_bytes be set to for 'WAL read' operations?\n\nIf there is any combination of BackendType and IOContext which will\nalways read XLOG_BLCKSZ bytes, we could use XLOG_BLCKSZ for that row's\nop_bytes. For other cases, we may have to consider using op_bytes 1 and\ntracking reads and write IOOps in number of bytes (instead of number of\npages). I don't actually know if there is a clear separation by\nBackendType for these different cases.\n\nThe other alternative I see is to use XLOG_BLCKSZ as the op_bytes and\ntreat op_bytes * number of reads as an approximation of the number of\nbytes read. I don't actually know what makes more sense. I don't think I\nwould like having a number for bytes that is not accurate.\n\n> > I think we will also want to add an IOContext for WAL initialization.\n> > Then we can track how long is spent doing 'WAL init' (including filling\n> > the WAL file with zeroes). XLogFileInitInternal() is likely where we\n> > would want to add it. 
And op_bytes for this would likely be\n> > wal_segment_size. I thought I heard about someone proposing adding WAL\n> > init to pg_stat_wal, but I can't find the thread.\n> \n> Done. I created a new IOCONTEXT_INIT IOContext for the 'WAL init'. I\n> have a question there:\n> 1. Some of the WAL processes happens at initdb (standalone backend\n> IOCONTEXT_NORMAL/(IOOP_READ & IOOP_WRITE) and\n> IOCONTEXT_INIT/(IOOP_WRITE & IOOP_FSYNC)). Since this happens at the\n> initdb, AFAIK there is no way to set 'track_wal_io_timing' and\n> 'track_io_timing' variables there. So, their timings appear as 0.\n> Should I use IsBootstrapProcessingMode() to enable WAL io timings at\n> the initdb or are they not that much important?\n\nI don't have an opinion about this. I can see an argument for doing it\neither way. We do track other IO during initdb in pg_stat_io.\n\n> > Any that we decide not to count for now should be \"banned\" in\n> > pgstat_tracks_io_op() for clarity. For example, if we create a separate\n> > IOContext for WAL file init, I'm not sure what would count as an\n> > IOOP_EXTEND in IOCONTEXT_NORMAL for IOOBJECT_WAL.\n> >\n> > Also, I think there are some backend types which will not generate WAL\n> > and we should determine which those are and skip those rows in\n> > pgstat_tracks_io_object().\n> \n> I agree, I am working on this. I have a couple of questions:\n> 1. Can client backend and background worker do IOCONTEXT_NORMAL/IOOP_READ?\n\nI don't know the answer to this.\n\n> 2. Is there an easy way to check if 'BackendType / IOOBJECT_WAL' does\n> specific IOOp operations?\n\nI don't think there is a general answer to this. You'll have to look at\nthe code and think about specific things that backend might do that\nwould require WAL. I think we'll definitely need other community members\nto check our work for the valid combinations.\n\nCompleting the matrix of valid combinations of BackendType, IOOp, and\nIOContext and defining each one is the biggest area where we could use\nhelp from community members.\n\nAs an additional TODO, I would explore adding some tests to prevent\naccidental removal of the pg_stat_io WAL tracking.\n\nI think we can easily test IOCONTEXT_NORMAL WAL writes in\nsrc/test/regress/sql/stats.sql (perhaps it is worth checking that\nsynchronous_commit is on in the test). IOCONTEXT_NORMAL WAL fsyncs\nshould again be easy to test if synchronous_commit is on and fsync is\non.\n\nI'm not sure how to reliably test WAL reads (given timing). Logically,\nyou can sum WAL reads before a crash is initiated in one of the tests in\nthe recovery suite, and then sum them after the db has restarted and\nthere should definitely be an increase in WAL reads, but I don't know if\nwe need to do something to guarantee that there will have been WAL reads\n(to avoid test flakes).\n\nI'm also not sure how to reliably test any IOCONTEXT_INIT operations. We\nneed a before and after and I can't think of a cheap operation to ensure\na new WAL segment is written to or fsyncd in between a before and after\nfor the purposes of testing.\n\n> > diff --git a/src/backend/access/transam/xlog.c\n> > b/src/backend/access/transam/xlog.c\n> > index 8b0710abe6..2ee6c21398 100644\n> > --- a/src/backend/access/transam/xlog.c\n> > +++ b/src/backend/access/transam/xlog.c\n> > @@ -2207,6 +2207,10 @@ XLogWrite(XLogwrtRqst WriteRqst, TimeLineID\n> > tli, bool flexible)\n> >\n> > I think we should likely follow the pattern of using\n> > pgstat_prepare_io_time() and pgstat_count_io_op_time() as it is done\n> > elsewhere. 
You could pass the IOObject as a parameter to\n> > pgstat_prepare_io_time() in order to determine if we should check\n> > track_io_timing or track_wal_io_timing. And we'll want to check\n> > track_wal_io_timing if IOObject is IOOBJECT_WAL in\n> > pgstat_count_io_op_time().\n> \n> Done. Instead of passing parameters to pgstat_prepare_io_time(), I\n> used a slightly different implementation. I return the current time if\n> there is a chance that any 'time' can be tracked.\n\nCool!\n\n> From 574fdec6ed8073dbc49053e6933db0310c7c62f5 Mon Sep 17 00:00:00 2001\n> From: Nazir Bilal Yavuz <[email protected]>\n> Date: Thu, 3 Aug 2023 16:11:16 +0300\n> Subject: [PATCH v2] Show WAL stats on pg_stat_io\n> \n> This patch aims to showing WAL stats per backend on pg_stat_io view.\n> \n> With this patch, it can be seen how many WAL operations it makes, their\n> context, types and total timings per backend in pg_stat_io view.\n\nIn the commit message, I would describe what kinds of WAL IO this\npatchset currently covers -- i.e. not streaming replication WAL IO.\n\n> ---\n> diff --git a/src/backend/access/transam/xlog.c b/src/backend/access/transam/xlog.c\n> index 60c0b7ec3af..ee7b85e18ca 100644\n> --- a/src/backend/access/transam/xlog.c\n> +++ b/src/backend/access/transam/xlog.c\n> @@ -2245,6 +2229,9 @@ XLogWrite(XLogwrtRqst WriteRqst, TimeLineID tli, bool flexible)\n> \t\t\t\tstartoffset += written;\n> \t\t\t} while (nleft > 0);\n> \n\nI'm not sure if the right location is here or in\npgstat_count_io_op_time(), but I would explain why you did not move\nPendingWalStats.wal_writes counter into pg_stat_io code (and why you did\nmove the other PendingWalStats counters there.\n\n> +\t\t\tpgstat_count_io_op_time(IOOBJECT_WAL, IOCONTEXT_NORMAL,\n> +\t\t\t\t\t\t\t\t\tIOOP_WRITE, io_start, npages);\n> +\n> \t\t\tnpages = 0;\n> \n> \t\t\t/*\n> @@ -2938,6 +2925,7 @@ XLogFileInitInternal(XLogSegNo logsegno, TimeLineID logtli,\n> \tint\t\t\tfd;\n> \tint\t\t\tsave_errno;\n> \tint\t\t\topen_flags = O_RDWR | O_CREAT | O_EXCL | PG_BINARY;\n> +\tinstr_time\tio_start;\n> \n> \tAssert(logtli != 0);\n> \n> @@ -2981,6 +2969,8 @@ XLogFileInitInternal(XLogSegNo logsegno, TimeLineID logtli,\n> \t\t\t\t(errcode_for_file_access(),\n> \t\t\t\t errmsg(\"could not create file \\\"%s\\\": %m\", tmppath)));\n> \n\nSince you have two calls to pgstat_prepare_io_time() in this function, I\nthink it would be nice to have a comment above each to the effect of\n\"start timing writes for stats\" and \"start timing fsyncs for stats\"\n\n> +\tio_start = pgstat_prepare_io_time();\n> +\n> \tpgstat_report_wait_start(WAIT_EVENT_WAL_INIT_WRITE);\n\n> diff --git a/src/backend/access/transam/xlogrecovery.c b/src/backend/access/transam/xlogrecovery.c\n> index becc2bda62e..ee850af5514 100644\n> --- a/src/backend/access/transam/xlogrecovery.c\n> +++ b/src/backend/access/transam/xlogrecovery.c\n> @@ -1587,6 +1587,7 @@ PerformWalRecovery(void)\n> \tXLogRecord *record;\n> \tbool\t\treachedRecoveryTarget = false;\n> \tTimeLineID\treplayTLI;\n> +\tuint32\t\tpgstat_report_wal_frequency = 0;\n> \n> \t/*\n> \t * Initialize shared variables for tracking progress of WAL replay, as if\n> @@ -1745,6 +1746,16 @@ PerformWalRecovery(void)\n> \t\t\t */\n> \t\t\tApplyWalRecord(xlogreader, record, &replayTLI);\n> \n> +\t\t\t/*\n> +\t\t\t * Report pending statistics to the cumulative stats system once\n> +\t\t\t * every PGSTAT_REPORT_FREQUENCY times to not hinder performance.\n> +\t\t\t */\n> +\t\t\tif (pgstat_report_wal_frequency++ == PGSTAT_REPORT_FREQUENCY)\n> +\t\t\t{\n> 
+\t\t\t\tpgstat_report_wal(false);\n> +\t\t\t\tpgstat_report_wal_frequency = 0;\n> +\t\t\t}\n> +\n\nIs the above needed for your patch to work? What does it do? It should\nprobably be in a separate commit and should definitely have an\nexplanation.\n\n> --- a/src/backend/utils/activity/pgstat_io.c\n> +++ b/src/backend/utils/activity/pgstat_io.c\n> @@ -87,17 +87,25 @@ pgstat_count_io_op_n(IOObject io_object, IOContext io_context, IOOp io_op, uint3\n> \tAssert((unsigned int) io_op < IOOP_NUM_TYPES);\n> \tAssert(pgstat_tracks_io_op(MyBackendType, io_object, io_context, io_op));\n\nI would add a comment here explaining that pg_stat_wal doesn't count WAL\ninit or WAL reads.\n\n> +\tif(io_object == IOOBJECT_WAL && io_context == IOCONTEXT_NORMAL &&\n> +\t io_op == IOOP_FSYNC)\n> +\t\tPendingWalStats.wal_sync += cnt;\n> +\n> \tPendingIOStats.counts[io_object][io_context][io_op] += cnt;\n> \n> \thave_iostats = true;\n> }\n\n> +/*\n> + * Prepares io_time for pgstat_count_io_op_time() function. It needs to return\n> + * current time if there is a chance that any 'time' can be tracked.\n> + */\n> instr_time\n> pgstat_prepare_io_time(void)\n> {\n> \tinstr_time\tio_start;\n> \n> -\tif (track_io_timing)\n> +\tif(track_io_timing || track_wal_io_timing)\n> \t\tINSTR_TIME_SET_CURRENT(io_start);\n> \telse\n> \t\tINSTR_TIME_SET_ZERO(io_start);\n\nSince you asked me off-list why we had to do INSTR_TIME_SET_ZERO() and I\ncouldn't remember, it is probably worth a comment.\n\n> pgstat_count_io_op_time(IOObject io_object, IOContext io_context, IOOp io_op,\n> \t\t\t\t\t\tinstr_time start_time, uint32 cnt)\n> {\n> -\tif (track_io_timing)\n> +\tif (pgstat_should_track_io_time(io_object, io_context))\n> \t{\n> \t\tinstr_time\tio_time;\n> \n> @@ -124,6 +148,9 @@ pgstat_count_io_op_time(IOObject io_object, IOContext io_context, IOOp io_op,\n> \t\t\tpgstat_count_buffer_write_time(INSTR_TIME_GET_MICROSEC(io_time));\n\nNow that we are adding more if statements to this function, I think we\nshould start adding more comments.\n\nWe should explain what the different counters here are for e.g.\npgBufferUsage for EXPLAIN, PendingWalStats for pg_stat_wal.\n\nWe should also explain what is tracked for each and why it differs --\ne.g. some track time and some don't, some track only reads or writes,\netc.\n\nAlso we should mention why we are consolidating them here. That is, we\nwant to eventually deduplicate these counters, so we are consolidating\nthem first. This also makes it easy to compare what is tracked for which\nstats or instrumentation purpose.\n\nAnd for those IO counters that we haven't moved here, we should mention\nit is because they track at a different level of granularity or at a\ndifferent point in the call stack.\n\n> \t\t\tif (io_object == IOOBJECT_RELATION)\n> \t\t\t\tINSTR_TIME_ADD(pgBufferUsage.blk_write_time, io_time);\n> +\t\t\t/* Track IOOBJECT_WAL/IOCONTEXT_NORMAL times on PendingWalStats */\n> +\t\t\telse if (io_object == IOOBJECT_WAL && io_context == IOCONTEXT_NORMAL)\n> +\t\t\t\tINSTR_TIME_ADD(PendingWalStats.wal_write_time, io_time);\n> \t\t}\n\n\nAlso, I would reorder the if statements to be in order of the enum\nvalues (e.g. 
FSYNC, READ, WRITE).\n\n> \t\telse if (io_op == IOOP_READ)\n> \t\t{\n> @@ -131,6 +158,12 @@ pgstat_count_io_op_time(IOObject io_object, IOContext io_context, IOOp io_op,\n> \t\t\tif (io_object == IOOBJECT_RELATION)\n> \t\t\t\tINSTR_TIME_ADD(pgBufferUsage.blk_read_time, io_time);\n> \t\t}\n> +\t\telse if (io_op == IOOP_FSYNC)\n> +\t\t{\n> +\t\t\t/* Track IOOBJECT_WAL/IOCONTEXT_NORMAL times on PendingWalStats */\n\nI wouldn't squeeze this comment here like this. It is hard to read\n\n> +\t\t\tif (io_object == IOOBJECT_WAL && io_context == IOCONTEXT_NORMAL)\n> +\t\t\t\tINSTR_TIME_ADD(PendingWalStats.wal_sync_time, io_time);\n\n\n> + * op_bytes can change according to IOObject and IOContext.\n> + * Return BLCKSZ as default.\n> + */\n> +int\n> +pgstat_get_io_op_btyes(IOObject io_object, IOContext io_context)\n> +{\n\nSmall typo in function name:\npgstat_get_io_op_btyes -> pgstat_get_io_op_bytes\nI'd also mention why BLCKSZ is the default\n\n> +\tif (io_object == IOOBJECT_WAL)\n> +\t{\n> +\t\tif (io_context == IOCONTEXT_NORMAL)\n> +\t\t\treturn XLOG_BLCKSZ;\n> +\t\telse if (io_context == IOCONTEXT_INIT)\n> +\t\t\treturn wal_segment_size;\n> +\t}\n> +\n> +\treturn BLCKSZ;\n> +}\n\n> @@ -350,6 +405,15 @@ pgstat_tracks_io_object(BackendType bktype, IOObject io_object,\n> \tif (!pgstat_tracks_io_bktype(bktype))\n> \t\treturn false;\n> \n> +\t/*\n> +\t * Currently, IO on IOOBJECT_WAL IOObject can only occur in the\n> +\t * IOCONTEXT_NORMAL and IOCONTEXT_INIT IOContext.\n> +\t */\n> +\tif (io_object == IOOBJECT_WAL &&\n> +\t\t(io_context != IOCONTEXT_NORMAL &&\n\nLittle bit of errant whitespace here.\n\n> \t/*\n> \t * Currently, IO on temporary relations can only occur in the\n> \t * IOCONTEXT_NORMAL IOContext.\n> @@ -439,6 +503,14 @@ pgstat_tracks_io_op(BackendType bktype, IOObject io_object,\n> \tif (io_context == IOCONTEXT_BULKREAD && io_op == IOOP_EXTEND)\n> \t\treturn false;\n\nI would expand on the comment to explain what NORMAL is for WAL -- what\nwe consider normal to be and why. And why it is different than INIT.\n\n> \n> +\tif(io_object == IOOBJECT_WAL && io_context == IOCONTEXT_INIT &&\n> +\t !(io_op == IOOP_WRITE || io_op == IOOP_FSYNC))\n> +\t return false;\n> +\n> +\tif(io_object == IOOBJECT_WAL && io_context == IOCONTEXT_NORMAL &&\n> +\t !(io_op == IOOP_WRITE || io_op == IOOP_READ || io_op == IOOP_FSYNC))\n> +\t return false;\n\nThese are the first \"bans\" that we have for an IOOp for a specific\ncombination of io_context and io_object. We should add a new comment for\nthis and perhaps consider what ordering makes most sense. I tried to\norganize the bans from most broad to most specific at the bottom.\n\n> \n> --- a/src/backend/utils/adt/pgstatfuncs.c\n> +++ b/src/backend/utils/adt/pgstatfuncs.c\n> @@ -1409,7 +1410,8 @@ pg_stat_get_io(PG_FUNCTION_ARGS)\n> \t\t\t\t * and constant multipliers, once non-block-oriented IO (e.g.\n> \t\t\t\t * temporary file IO) is tracked.\n> \t\t\t\t */\n> -\t\t\t\tvalues[IO_COL_CONVERSION] = Int64GetDatum(BLCKSZ);\n\nThere's a comment above this in the code that says this is hard-coded to\nBLCKSZ. That comment needs to be updated or removed (in lieu of the\ncomment in your pgstat_get_io_op_bytes() function).\n\n\n> +\t\t\t\top_bytes = pgstat_get_io_op_btyes(io_obj, io_context);\n> +\t\t\t\tvalues[IO_COL_CONVERSION] = Int64GetDatum(op_bytes);\n> \n\n> +extern PGDLLIMPORT bool track_wal_io_timing;\n> +extern PGDLLIMPORT int wal_segment_size;\n\nThese shouldn't be in two places (i.e. they are already in xlog.h and\nyou added them in pgstat.h. 
pg_stat_io.c includes bufmgr.h for\ntrack_io_timing, so you can probably justify including xlog.h.\n\n\n- Melanie\n\n\n",
"msg_date": "Wed, 9 Aug 2023 14:52:33 -0400",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Show WAL write and fsync stats in pg_stat_io"
},
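As a way to picture the "op_bytes = 1" option for WAL reads mentioned above, here is a sketch of what counting bytes (rather than XLOG_BLCKSZ pages) could look like inside WALRead(); the wait event and pg_stat_io calls are real, but the local names and the elided error handling are simplifying assumptions:

    instr_time  io_start;

    io_start = pgstat_prepare_io_time();

    pgstat_report_wait_start(WAIT_EVENT_WAL_READ);
    readbytes = pg_pread(wal_fd, p, segbytes, (off_t) startoff);
    pgstat_report_wait_end();

    /* with op_bytes set to 1 for this row, cnt is simply bytes read */
    if (readbytes > 0)
        pgstat_count_io_op_time(IOOBJECT_WAL, IOCONTEXT_NORMAL, IOOP_READ,
                                io_start, readbytes);

The trade-off is that op_bytes stops being a block size for that row and "reads" effectively becomes a byte counter.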
{
"msg_contents": "Hi,\n\nThanks for the review!\n\nCurrent status of the patch is:\n- IOOBJECT_WAL / IOCONTEXT_NORMAL read, write and fsync stats are added.\n- IOOBJECT_WAL / IOCONTEXT_NORMAL write and fsync tests are added.\n- IOOBJECT_WAL / IOCONTEXT_INIT stats are added.\n- pg_stat_io shows different op_bytes for the IOOBJECT_WAL operations.\n- Working on which 'BackendType / IOContext / IOOp' should be banned in\npg_stat_io.\n- PendingWalStats.wal_sync and PendingWalStats.wal_write_time /\nPendingWalStats.wal_sync_time are moved to pgstat_count_io_op_n() /\npgstat_count_io_op_time() respectively.\n\nTODOs:\n- Documentation.\n- Try to set op_bytes for BackendType / IOContext.\n- Decide which 'BackendType / IOContext / IOOp' should not be tracked.\n- Add IOOBJECT_WAL / IOCONTEXT_NORMAL read tests.\n- Add IOOBJECT_WAL / IOCONTEXT_INIT tests.\n\nI am adding tracking of BackendType / IOContext / IOOp as tables, empty\ncell means it is not decided yet:\n\nIOCONTEXT_NORMAL / Backend / IOOp table:\n\n╔═════════════════════╦═══════╦═══════╦═══════╗\n║ IOCONTEXT_NORMAL ║ read ║ write ║ fsync ║\n╠═════════════════════╬═══════╬═══════╬═══════╣\n║ autovacuum launcher ║ FALSE ║ FALSE ║ FALSE ║\n╠═════════════════════╬═══════╬═══════╬═══════╣\n║ autovacuum worker ║ FALSE ║ TRUE ║ TRUE ║\n╠═════════════════════╬═══════╬═══════╬═══════╣\n║ client backend ║ ║ TRUE ║ TRUE ║\n╠═════════════════════╬═══════╬═══════╬═══════╣\n║ background worker ║ ║ ║ ║\n╠═════════════════════╬═══════╬═══════╬═══════╣\n║ background writer ║ ║ TRUE ║ TRUE ║\n╠═════════════════════╬═══════╬═══════╬═══════╣\n║ checkpointer ║ ║ TRUE ║ TRUE ║\n╠═════════════════════╬═══════╬═══════╬═══════╣\n║ standalone backend ║ TRUE ║ TRUE ║ TRUE ║\n╠═════════════════════╬═══════╬═══════╬═══════╣\n║ startup ║ TRUE ║ ║ ║\n╠═════════════════════╬═══════╬═══════╬═══════╣\n║ walreceiver ║ ║ ║ ║\n╠═════════════════════╬═══════╬═══════╬═══════╣\n║ walsender ║ ║ ║ ║\n╠═════════════════════╬═══════╬═══════╬═══════╣\n║ walwriter ║ ║ TRUE ║ TRUE ║\n╚═════════════════════╩═══════╩═══════╩═══════╝\n\n\nIOCONTEXT_WAL_INIT / Backend / IOOp table:\n\n╔═════════════════════╦═══════╦═══════╗\n║ IOCONTEXT_WAL_INIT ║ write ║ fsync ║\n╠═════════════════════╬═══════╬═══════╣\n║ autovacuum launcher ║ ║ ║\n╠═════════════════════╬═══════╬═══════╣\n║ autovacuum worker ║ ║ ║\n╠═════════════════════╬═══════╬═══════╣\n║ client backend ║ TRUE ║ TRUE ║\n╠═════════════════════╬═══════╬═══════╣\n║ background worker ║ ║ ║\n╠═════════════════════╬═══════╬═══════╣\n║ background writer ║ ║ ║\n╠═════════════════════╬═══════╬═══════╣\n║ checkpointer ║ ║ ║\n╠═════════════════════╬═══════╬═══════╣\n║ standalone backend ║ TRUE ║ TRUE ║\n╠═════════════════════╬═══════╬═══════╣\n║ startup ║ ║ ║\n╠═════════════════════╬═══════╬═══════╣\n║ walreceiver ║ ║ ║\n╠═════════════════════╬═══════╬═══════╣\n║ walsender ║ ║ ║\n╠═════════════════════╬═══════╬═══════╣\n║ walwriter ║ ║ ║\n╚═════════════════════╩═══════╩═══════╝\n\n\nOn Wed, 9 Aug 2023 at 21:52, Melanie Plageman <[email protected]>\nwrote:\n>\n> > On Sat, 22 Jul 2023 at 01:30, Melanie Plageman\n> > <[email protected]> wrote:\n> > > I think it would be good to count WAL reads even though they are not\n> > > currently represented in pg_stat_wal. Here is a thread discussing this\n> > > [1].\n> >\n> > I used the same implementation in the thread link [1]. I added 'WAL\n> > read' to only xlogrecovery.c for now. 
I didn't add 'WAL read' to\n> > xlogreader.c and walsender.c because they cause some failures on:\n> > '!pgStatLocal.shmem->is_shutdown' asserts. I will spend more time on\n> > these. Also, I added Bharath to CC. I have a question about 'WAL\n> > read':\n> > 1. There are two places where 'WAL read' happens.\n> > a. In WALRead() in xlogreader.c, it reads 'count' bytes, most of the\n> > time count is equal to XLOG_BLCKSZ but there are some cases it is not.\n> > For example\n> > - in XLogSendPhysical() in walsender.c WALRead() is called by nbytes\n> > - in WALDumpReadPage() in pg_waldump.c WALRead() is called by count\n> > These nbytes and count variables could be different from XLOG_BLCKSZ.\n> >\n> > b. in XLogPageRead() in xlogreader.c, it reads exactly XLOG_BLCKSZ\nbytes:\n> > pg_pread(readFile, readBuf, XLOG_BLCKSZ, (off_t) readOff);\n> >\n> > So, what should op_bytes be set to for 'WAL read' operations?\n>\n> If there is any combination of BackendType and IOContext which will\n> always read XLOG_BLCKSZ bytes, we could use XLOG_BLCKSZ for that row's\n> op_bytes. For other cases, we may have to consider using op_bytes 1 and\n> tracking reads and write IOOps in number of bytes (instead of number of\n> pages). I don't actually know if there is a clear separation by\n> BackendType for these different cases.\n\nI agree. I will edit that later, added to TODOs.\n\n>\n> The other alternative I see is to use XLOG_BLCKSZ as the op_bytes and\n> treat op_bytes * number of reads as an approximation of the number of\n> bytes read. I don't actually know what makes more sense. I don't think I\n> would like having a number for bytes that is not accurate.\n\nYes, the prior one makes more sense to me.\n\n>\n> > Should I use IsBootstrapProcessingMode() to enable WAL io timings at\n> > the initdb or are they not that much important?\n>\n> I don't have an opinion about this. I can see an argument for doing it\n> either way. We do track other IO during initdb in pg_stat_io.\n\nI didn't add it for now. It is an easy change, it could be added later.\n\n>\n> As an additional TODO, I would explore adding some tests to prevent\n> accidental removal of the pg_stat_io WAL tracking.\n>\n> I think we can easily test IOCONTEXT_NORMAL WAL writes in\n> src/test/regress/sql/stats.sql (perhaps it is worth checking that\n> synchronous_commit is on in the test). IOCONTEXT_NORMAL WAL fsyncs\n> should again be easy to test if synchronous_commit is on and fsync is\n> on.\n>\n> I'm not sure how to reliably test WAL reads (given timing). Logically,\n> you can sum WAL reads before a crash is initiated in one of the tests in\n> the recovery suite, and then sum them after the db has restarted and\n> there should definitely be an increase in WAL reads, but I don't know if\n> we need to do something to guarantee that there will have been WAL reads\n> (to avoid test flakes).\n>\n> I'm also not sure how to reliably test any IOCONTEXT_INIT operations. We\n> need a before and after and I can't think of a cheap operation to ensure\n> a new WAL segment is written to or fsyncd in between a before and after\n> for the purposes of testing.\n\nIOOBJECT_WAL / IOCONTEXT_NORMAL write and fsync tests are added.\nFor the IOCONTEXT_NORMAL reads and IOCONTEXT_INIT tests, I couldn't find a\nway to avoid test flakes. I am open to suggestions. 
I added these to TODOs.\n\n>\n> > ---\n> > diff --git a/src/backend/access/transam/xlog.c\nb/src/backend/access/transam/xlog.c\n> > index 60c0b7ec3af..ee7b85e18ca 100644\n> > --- a/src/backend/access/transam/xlog.c\n> > +++ b/src/backend/access/transam/xlog.c\n> > @@ -2245,6 +2229,9 @@ XLogWrite(XLogwrtRqst WriteRqst, TimeLineID tli,\nbool flexible)\n> > startoffset += written;\n> > } while (nleft > 0);\n> >\n>\n> I'm not sure if the right location is here or in\n> pgstat_count_io_op_time(), but I would explain why you did not move\n> PendingWalStats.wal_writes counter into pg_stat_io code (and why you did\n> move the other PendingWalStats counters there.\n>\n> > + pgstat_count_io_op_time(IOOBJECT_WAL,\nIOCONTEXT_NORMAL,\n> > +\nIOOP_WRITE, io_start, npages);\n> > +\n> > npages = 0;\n> >\n> > /*\n> > @@ -2938,6 +2925,7 @@ XLogFileInitInternal(XLogSegNo logsegno,\nTimeLineID logtli,\n> > int fd;\n> > int save_errno;\n> > int open_flags = O_RDWR | O_CREAT | O_EXCL |\nPG_BINARY;\n> > + instr_time io_start;\n> >\n> > Assert(logtli != 0);\n> >\n> > @@ -2981,6 +2969,8 @@ XLogFileInitInternal(XLogSegNo logsegno,\nTimeLineID logtli,\n> > (errcode_for_file_access(),\n> > errmsg(\"could not create file \\\"%s\\\":\n%m\", tmppath)));\n> >\n>\n> Since you have two calls to pgstat_prepare_io_time() in this function, I\n> think it would be nice to have a comment above each to the effect of\n> \"start timing writes for stats\" and \"start timing fsyncs for stats\"\n\nDone.\n\n>\n> > + io_start = pgstat_prepare_io_time();\n> > +\n> > pgstat_report_wait_start(WAIT_EVENT_WAL_INIT_WRITE);\n>\n> > diff --git a/src/backend/access/transam/xlogrecovery.c\nb/src/backend/access/transam/xlogrecovery.c\n> > index becc2bda62e..ee850af5514 100644\n> > --- a/src/backend/access/transam/xlogrecovery.c\n> > +++ b/src/backend/access/transam/xlogrecovery.c\n> > @@ -1587,6 +1587,7 @@ PerformWalRecovery(void)\n> > XLogRecord *record;\n> > bool reachedRecoveryTarget = false;\n> > TimeLineID replayTLI;\n> > + uint32 pgstat_report_wal_frequency = 0;\n> >\n> > /*\n> > * Initialize shared variables for tracking progress of WAL\nreplay, as if\n> > @@ -1745,6 +1746,16 @@ PerformWalRecovery(void)\n> > */\n> > ApplyWalRecord(xlogreader, record, &replayTLI);\n> >\n> > + /*\n> > + * Report pending statistics to the cumulative\nstats system once\n> > + * every PGSTAT_REPORT_FREQUENCY times to not\nhinder performance.\n> > + */\n> > + if (pgstat_report_wal_frequency++ ==\nPGSTAT_REPORT_FREQUENCY)\n> > + {\n> > + pgstat_report_wal(false);\n> > + pgstat_report_wal_frequency = 0;\n> > + }\n> > +\n>\n> Is the above needed for your patch to work? What does it do? 
It should\n> probably be in a separate commit and should definitely have an\n> explanation.\n\nDone, I omit that part.\n\n>\n> > --- a/src/backend/utils/activity/pgstat_io.c\n> > +++ b/src/backend/utils/activity/pgstat_io.c\n> > @@ -87,17 +87,25 @@ pgstat_count_io_op_n(IOObject io_object, IOContext\nio_context, IOOp io_op, uint3\n> > Assert((unsigned int) io_op < IOOP_NUM_TYPES);\n> > Assert(pgstat_tracks_io_op(MyBackendType, io_object, io_context,\nio_op));\n>\n> I would add a comment here explaining that pg_stat_wal doesn't count WAL\n> init or WAL reads.\n\nDone.\n\n>\n> > + if(io_object == IOOBJECT_WAL && io_context == IOCONTEXT_NORMAL &&\n> > + io_op == IOOP_FSYNC)\n> > + PendingWalStats.wal_sync += cnt;\n> > +\n> > PendingIOStats.counts[io_object][io_context][io_op] += cnt;\n> >\n> > have_iostats = true;\n> > }\n>\n> > +/*\n> > + * Prepares io_time for pgstat_count_io_op_time() function. It needs\nto return\n> > + * current time if there is a chance that any 'time' can be tracked.\n> > + */\n> > instr_time\n> > pgstat_prepare_io_time(void)\n> > {\n> > instr_time io_start;\n> >\n> > - if (track_io_timing)\n> > + if(track_io_timing || track_wal_io_timing)\n> > INSTR_TIME_SET_CURRENT(io_start);\n> > else\n> > INSTR_TIME_SET_ZERO(io_start);\n>\n> Since you asked me off-list why we had to do INSTR_TIME_SET_ZERO() and I\n> couldn't remember, it is probably worth a comment.\n\nDone.\n\n>\n> > pgstat_count_io_op_time(IOObject io_object, IOContext io_context, IOOp\nio_op,\n> > instr_time start_time,\nuint32 cnt)\n> > {\n> > - if (track_io_timing)\n> > + if (pgstat_should_track_io_time(io_object, io_context))\n> > {\n> > instr_time io_time;\n> >\n> > @@ -124,6 +148,9 @@ pgstat_count_io_op_time(IOObject io_object,\nIOContext io_context, IOOp io_op,\n> >\npgstat_count_buffer_write_time(INSTR_TIME_GET_MICROSEC(io_time));\n>\n> Now that we are adding more if statements to this function, I think we\n> should start adding more comments.\n>\n> We should explain what the different counters here are for e.g.\n> pgBufferUsage for EXPLAIN, PendingWalStats for pg_stat_wal.\n>\n> We should also explain what is tracked for each and why it differs --\n> e.g. some track time and some don't, some track only reads or writes,\n> etc.\n>\n> Also we should mention why we are consolidating them here. That is, we\n> want to eventually deduplicate these counters, so we are consolidating\n> them first. This also makes it easy to compare what is tracked for which\n> stats or instrumentation purpose.\n>\n> And for those IO counters that we haven't moved here, we should mention\n> it is because they track at a different level of granularity or at a\n> different point in the call stack.\n\nDone.\n\n>\n> > if (io_object == IOOBJECT_RELATION)\n> >\nINSTR_TIME_ADD(pgBufferUsage.blk_write_time, io_time);\n> > + /* Track IOOBJECT_WAL/IOCONTEXT_NORMAL times on\nPendingWalStats */\n> > + else if (io_object == IOOBJECT_WAL && io_context\n== IOCONTEXT_NORMAL)\n> > +\nINSTR_TIME_ADD(PendingWalStats.wal_write_time, io_time);\n> > }\n>\n>\n> Also, I would reorder the if statements to be in order of the enum\n> values (e.g. 
FSYNC, READ, WRITE).\n\nDone.\n\n>\n> > else if (io_op == IOOP_READ)\n> > {\n> > @@ -131,6 +158,12 @@ pgstat_count_io_op_time(IOObject io_object,\nIOContext io_context, IOOp io_op,\n> > if (io_object == IOOBJECT_RELATION)\n> >\nINSTR_TIME_ADD(pgBufferUsage.blk_read_time, io_time);\n> > }\n> > + else if (io_op == IOOP_FSYNC)\n> > + {\n> > + /* Track IOOBJECT_WAL/IOCONTEXT_NORMAL times on\nPendingWalStats */\n>\n> I wouldn't squeeze this comment here like this. It is hard to read\n\nDone.\n\n>\n> > + if (io_object == IOOBJECT_WAL && io_context ==\nIOCONTEXT_NORMAL)\n> > +\nINSTR_TIME_ADD(PendingWalStats.wal_sync_time, io_time);\n>\n>\n> > + * op_bytes can change according to IOObject and IOContext.\n> > + * Return BLCKSZ as default.\n> > + */\n> > +int\n> > +pgstat_get_io_op_btyes(IOObject io_object, IOContext io_context)\n> > +{\n>\n> Small typo in function name:\n> pgstat_get_io_op_btyes -> pgstat_get_io_op_bytes\n> I'd also mention why BLCKSZ is the default\n\nDone.\n\n>\n> > + if (io_object == IOOBJECT_WAL)\n> > + {\n> > + if (io_context == IOCONTEXT_NORMAL)\n> > + return XLOG_BLCKSZ;\n> > + else if (io_context == IOCONTEXT_INIT)\n> > + return wal_segment_size;\n> > + }\n> > +\n> > + return BLCKSZ;\n> > +}\n>\n> > @@ -350,6 +405,15 @@ pgstat_tracks_io_object(BackendType bktype,\nIOObject io_object,\n> > if (!pgstat_tracks_io_bktype(bktype))\n> > return false;\n> >\n> > + /*\n> > + * Currently, IO on IOOBJECT_WAL IOObject can only occur in the\n> > + * IOCONTEXT_NORMAL and IOCONTEXT_INIT IOContext.\n> > + */\n> > + if (io_object == IOOBJECT_WAL &&\n> > + (io_context != IOCONTEXT_NORMAL &&\n>\n> Little bit of errant whitespace here.\n\nDone.\n\n>\n> > /*\n> > * Currently, IO on temporary relations can only occur in the\n> > * IOCONTEXT_NORMAL IOContext.\n> > @@ -439,6 +503,14 @@ pgstat_tracks_io_op(BackendType bktype, IOObject\nio_object,\n> > if (io_context == IOCONTEXT_BULKREAD && io_op == IOOP_EXTEND)\n> > return false;\n>\n> I would expand on the comment to explain what NORMAL is for WAL -- what\n> we consider normal to be and why. And why it is different than INIT.\n\nDone.\n\n>\n> >\n> > + if(io_object == IOOBJECT_WAL && io_context == IOCONTEXT_INIT &&\n> > + !(io_op == IOOP_WRITE || io_op == IOOP_FSYNC))\n> > + return false;\n> > +\n> > + if(io_object == IOOBJECT_WAL && io_context == IOCONTEXT_NORMAL &&\n> > + !(io_op == IOOP_WRITE || io_op == IOOP_READ || io_op ==\nIOOP_FSYNC))\n> > + return false;\n>\n> These are the first \"bans\" that we have for an IOOp for a specific\n> combination of io_context and io_object. We should add a new comment for\n> this and perhaps consider what ordering makes most sense. I tried to\n> organize the bans from most broad to most specific at the bottom.\n\nDone.\n\n>\n> >\n> > --- a/src/backend/utils/adt/pgstatfuncs.c\n> > +++ b/src/backend/utils/adt/pgstatfuncs.c\n> > @@ -1409,7 +1410,8 @@ pg_stat_get_io(PG_FUNCTION_ARGS)\n> > * and constant multipliers, once\nnon-block-oriented IO (e.g.\n> > * temporary file IO) is tracked.\n> > */\n> > - values[IO_COL_CONVERSION] =\nInt64GetDatum(BLCKSZ);\n>\n> There's a comment above this in the code that says this is hard-coded to\n> BLCKSZ. 
That comment needs to be updated or removed (in lieu of the\n> comment in your pgstat_get_io_op_bytes() function).\n\nDone.\n\n>\n>\n> > + op_bytes = pgstat_get_io_op_btyes(io_obj,\nio_context);\n> > + values[IO_COL_CONVERSION] =\nInt64GetDatum(op_bytes);\n> >\n>\n> > +extern PGDLLIMPORT bool track_wal_io_timing;\n> > +extern PGDLLIMPORT int wal_segment_size;\n>\n> These shouldn't be in two places (i.e. they are already in xlog.h and\n> you added them in pgstat.h. pg_stat_io.c includes bufmgr.h for\n> track_io_timing, so you can probably justify including xlog.h.\n\nDone.\n\nAny kind of feedback would be appreciated.\n\nRegards,\nNazir Bilal Yavuz\nMicrosoft",
"msg_date": "Wed, 20 Sep 2023 10:57:48 +0300",
"msg_from": "Nazir Bilal Yavuz <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Show WAL write and fsync stats in pg_stat_io"
},
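Expressed as code, the fully decided rows of the IOCONTEXT_NORMAL table above could look roughly like this inside pgstat_tracks_io_op(); the BackendType values are the existing enum members, and the mapping is illustrative only since most rows were still open:

    /* inside pgstat_tracks_io_op(), after the generic WAL checks */
    if (io_object == IOOBJECT_WAL && io_context == IOCONTEXT_NORMAL)
    {
        switch (bktype)
        {
            case B_AUTOVAC_LAUNCHER:
                return false;   /* all cells FALSE in the table above */
            case B_AUTOVAC_WORKER:
                /* writes and fsyncs expected, reads ruled out */
                return io_op == IOOP_WRITE || io_op == IOOP_FSYNC;
            case B_STANDALONE_BACKEND:
                return io_op == IOOP_READ || io_op == IOOP_WRITE ||
                       io_op == IOOP_FSYNC;
            default:
                break;          /* remaining rows still undecided */
        }
    }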
{
"msg_contents": "On Wed, Sep 20, 2023 at 10:57:48AM +0300, Nazir Bilal Yavuz wrote:\n> Any kind of feedback would be appreciated.\n\nThis was registered in the CF, so I have given it a look. Note that\n0001 has a conflict with pgstat_count_io_op_time(), so it cannot be\napplied.\n\n+pgstat_should_track_io_time(IOObject io_object, IOContext io_context)\n+{\n+\t/*\n+\t * io times of IOOBJECT_WAL IOObject needs to be tracked when\n+\t * 'track_wal_io_timing' is set regardless of 'track_io_timing'.\n+\t */\n+\tif (io_object == IOOBJECT_WAL)\n+\t\treturn track_wal_io_timing;\n+\n+\treturn track_io_timing;\n\nI can see the temptation to do that, but I have mixed feelings about\nthe approach of mixing two GUCs in a code path dedicated to pg_stat_io\nwhere now we only rely on track_io_timing. The result brings\nconfusion, while making pg_stat_io, which is itself only used for\nblock-based operations, harder to read.\n\nThe suggestion I am seeing here to have a pg_stat_io_wal (with a SRF)\nis quite tempting, actually, creating a neat separation between the\nexisting pg_stat_io and pg_stat_wal (not a SRF), with a third view\nthat provides more details about the contexts and backend types for\nthe WAL stats with its relevant fields:\nhttps://www.postgresql.org/message-id/CAAKRu_bM55pj3pPRW0nd_-paWHLRkOU69r816AeztBBa-N1HLA@mail.gmail.com\n\nAnd perhaps just putting that everything that calls\npgstat_count_io_op_time() under track_io_timing is just natural?\nWhat's the performance regression you would expect if both WAL and\nblock I/O are controlled by that, still one would expect only one of\nthem?\n\nOn top of that pg_stat_io is now for block-based I/O operations, so\nthat does not fit entirely in the picture, though I guess that Melanie\nhas thought more on the matter than me. That may be also a matter of\ntaste.\n\n+ /* Report pending statistics to the cumulative stats system */\n+ pgstat_report_wal(false);\n\nThis is hidden in 0001, still would be better if handled as a patch on\nits own and optionally backpatch it as we did for the bgwriter with\ne64c733bb1?\n\nSide note: I think that we should spend more efforts in documenting\nwhat IOContext and IOOp mean. Not something directly related to this\npatch, still this patch or things similar make it a bit harder which\npart of it is used for what by reading pgstat.h.\n--\nMichael",
"msg_date": "Thu, 26 Oct 2023 15:28:32 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Show WAL write and fsync stats in pg_stat_io"
},
{
"msg_contents": "Hi,\n\nThank you for the feedback!\n\nOn Thu, 26 Oct 2023 at 09:28, Michael Paquier <[email protected]> wrote:\n>\n> On Wed, Sep 20, 2023 at 10:57:48AM +0300, Nazir Bilal Yavuz wrote:\n> > Any kind of feedback would be appreciated.\n>\n> This was registered in the CF, so I have given it a look. Note that\n> 0001 has a conflict with pgstat_count_io_op_time(), so it cannot be\n> applied.\n>\n> +pgstat_should_track_io_time(IOObject io_object, IOContext io_context)\n> +{\n> + /*\n> + * io times of IOOBJECT_WAL IOObject needs to be tracked when\n> + * 'track_wal_io_timing' is set regardless of 'track_io_timing'.\n> + */\n> + if (io_object == IOOBJECT_WAL)\n> + return track_wal_io_timing;\n> +\n> + return track_io_timing;\n>\n> I can see the temptation to do that, but I have mixed feelings about\n> the approach of mixing two GUCs in a code path dedicated to pg_stat_io\n> where now we only rely on track_io_timing. The result brings\n> confusion, while making pg_stat_io, which is itself only used for\n> block-based operations, harder to read.\n>\n> The suggestion I am seeing here to have a pg_stat_io_wal (with a SRF)\n> is quite tempting, actually, creating a neat separation between the\n> existing pg_stat_io and pg_stat_wal (not a SRF), with a third view\n> that provides more details about the contexts and backend types for\n> the WAL stats with its relevant fields:\n> https://www.postgresql.org/message-id/CAAKRu_bM55pj3pPRW0nd_-paWHLRkOU69r816AeztBBa-N1HLA@mail.gmail.com\n>\n> And perhaps just putting that everything that calls\n> pgstat_count_io_op_time() under track_io_timing is just natural?\n> What's the performance regression you would expect if both WAL and\n> block I/O are controlled by that, still one would expect only one of\n> them?\n\nI will check these and I hope I will come back with something meaningful.\n\n>\n> + /* Report pending statistics to the cumulative stats system */\n> + pgstat_report_wal(false);\n>\n> This is hidden in 0001, still would be better if handled as a patch on\n> its own and optionally backpatch it as we did for the bgwriter with\n> e64c733bb1?\n\nI thought about it again and found the use of\n'pgstat_report_wal(false);' here wrong. This was mainly for flushing\nWAL stats because of the WAL reads but pg_stat_wal doesn't have WAL\nread stats, so there is no need to flush WAL stats here. I think this\nshould be replaced with 'pgstat_flush_io(false);'.\n\n>\n> Side note: I think that we should spend more efforts in documenting\n> what IOContext and IOOp mean. Not something directly related to this\n> patch, still this patch or things similar make it a bit harder which\n> part of it is used for what by reading pgstat.h.\n\nI agree.\n\nRegards,\nNazir Bilal Yavuz\nMicrosoft\n\n\n",
"msg_date": "Tue, 31 Oct 2023 16:57:57 +0300",
"msg_from": "Nazir Bilal Yavuz <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Show WAL write and fsync stats in pg_stat_io"
},
{
"msg_contents": "Hi,\r\n\r\nOn Tue, 31 Oct 2023 at 16:57, Nazir Bilal Yavuz <[email protected]> wrote:\r\n> On Thu, 26 Oct 2023 at 09:28, Michael Paquier <[email protected]> wrote:\r\n> >\r\n> > And perhaps just putting that everything that calls\r\n> > pgstat_count_io_op_time() under track_io_timing is just natural?\r\n> > What's the performance regression you would expect if both WAL and\r\n> > block I/O are controlled by that, still one would expect only one of\r\n> > them?\r\n>\r\n> I will check these and I hope I will come back with something meaningful.\r\n\r\nI applied the patches on upstream postgres and then run pgbench for each\r\navailable clock sources couple of times:\r\n# Set fsync = off and track_io_timing = on\r\n# pgbench -i -s 100 test\r\n# pgbench -M prepared -c16 -j8 -f <( echo \"SELECT\r\npg_logical_emit_message(true, \\:client_id::text, '1234567890');\") -T60 test\r\n\r\nResults are:\r\n\r\n╔═════════╦═══════════════════════════════╦════════╗\r\n║ ║ track_wal_io_timing ║ ║\r\n╠═════════╬═══════════════╦═══════════════╬════════╣\r\n║ clock ║ on ║ off ║ change ║\r\n║ sources ║ ║ ║ ║\r\n╠═════════╬═══════════════╬═══════════════╬════════╣\r\n║ tsc ║ ║ ║ ║\r\n║ ║ 514814.459170 ║ 519826.284139 ║ %1 ║\r\n╠═════════╬═══════════════╬═══════════════╬════════╣\r\n║ hpet ║ ║ ║ ║\r\n║ ║ 132116.272121 ║ 141820.548447 ║ %7 ║\r\n╠═════════╬═══════════════╬═══════════════╬════════╣\r\n║ acpi_pm ║ ║ ║ ║\r\n║ ║ 394793.092255 ║ 403723.874719 ║ %2 ║\r\n╚═════════╩═══════════════╩═══════════════╩════════╝\r\n\r\nRegards,\r\nNazir Bilal Yavuz\r\nMicrosoft\r\n\nHi,On Tue, 31 Oct 2023 at 16:57, Nazir Bilal Yavuz <[email protected]> wrote:> On Thu, 26 Oct 2023 at 09:28, Michael Paquier <[email protected]> wrote:> >> > And perhaps just putting that everything that calls> > pgstat_count_io_op_time() under track_io_timing is just natural?> > What's the performance regression you would expect if both WAL and> > block I/O are controlled by that, still one would expect only one of> > them?>> I will check these and I hope I will come back with something meaningful.I applied the patches on upstream postgres and then run pgbench for each available clock sources couple of times:# Set fsync = off and track_io_timing = on# pgbench -i -s 100 test# pgbench -M prepared -c16 -j8 -f <( echo \"SELECT pg_logical_emit_message(true, \\:client_id::text, '1234567890');\") -T60 testResults are:╔═════════╦═══════════════════════════════╦════════╗║ ║ track_wal_io_timing ║ ║╠═════════╬═══════════════╦═══════════════╬════════╣║ clock ║ on ║ off ║ change ║║ sources ║ ║ ║ ║╠═════════╬═══════════════╬═══════════════╬════════╣║ tsc ║ ║ ║ ║║ ║ 514814.459170 ║ 519826.284139 ║ %1 ║╠═════════╬═══════════════╬═══════════════╬════════╣║ hpet ║ ║ ║ ║║ ║ 132116.272121 ║ 141820.548447 ║ %7 ║╠═════════╬═══════════════╬═══════════════╬════════╣║ acpi_pm ║ ║ ║ ║║ ║ 394793.092255 ║ 403723.874719 ║ %2 ║╚═════════╩═══════════════╩═══════════════╩════════╝Regards,Nazir Bilal YavuzMicrosoft",
"msg_date": "Mon, 6 Nov 2023 15:35:01 +0300",
"msg_from": "Nazir Bilal Yavuz <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Show WAL write and fsync stats in pg_stat_io"
},
{
"msg_contents": "On Mon, Nov 06, 2023 at 03:35:01PM +0300, Nazir Bilal Yavuz wrote:\n> Results are:\n> \n> ╔═════════╦═══════════════════════════════╦════════╗\n> ║ ║ track_wal_io_timing ║ ║\n> ╠═════════╬═══════════════╦═══════════════╬════════╣\n> ║ clock ║ on ║ off ║ change ║\n> ║ sources ║ ║ ║ ║\n> ╠═════════╬═══════════════╬═══════════════╬════════╣\n> ║ tsc ║ ║ ║ ║\n> ║ ║ 514814.459170 ║ 519826.284139 ║ %1 ║\n> ╠═════════╬═══════════════╬═══════════════╬════════╣\n> ║ hpet ║ ║ ║ ║\n> ║ ║ 132116.272121 ║ 141820.548447 ║ %7 ║\n> ╠═════════╬═══════════════╬═══════════════╬════════╣\n> ║ acpi_pm ║ ║ ║ ║\n> ║ ║ 394793.092255 ║ 403723.874719 ║ %2 ║\n> ╚═════════╩═══════════════╩═══════════════╩════════╝\n\nThanks for the tests. That's indeed noticeable under this load.\nBetter to keep track_io_timing and track_wal_io_timing as two\nseparated beasts, at least that's clear.\n--\nMichael",
"msg_date": "Tue, 7 Nov 2023 12:25:27 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Show WAL write and fsync stats in pg_stat_io"
},
{
"msg_contents": "Hi,\n\nOn 2023-10-26 15:28:32 +0900, Michael Paquier wrote:\n> On top of that pg_stat_io is now for block-based I/O operations, so\n> that does not fit entirely in the picture, though I guess that Melanie\n> has thought more on the matter than me. That may be also a matter of\n> taste.\n\nI strongly disagree. A significant part of the design of pg_stat_io was to\nmake it possible to collect multiple sources of IO in a single view, so that\nsysadmins don't have to look in dozens of places to figure out what is causing\nwhat kind of IO.\n\nWe should over time collect all sources of IO in pg_stat_io. For some things\nwe might want to also have more detailed information in other views (e.g. it\ndoesn't make sense to track FPIs in pg_stat_io, but does make sense in\npg_stat_wal) - but that should be in addition, not instead of.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 7 Nov 2023 15:30:48 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Show WAL write and fsync stats in pg_stat_io"
},
{
"msg_contents": "On Tue, Nov 07, 2023 at 03:30:48PM -0800, Andres Freund wrote:\n> I strongly disagree. A significant part of the design of pg_stat_io was to\n> make it possible to collect multiple sources of IO in a single view, so that\n> sysadmins don't have to look in dozens of places to figure out what is causing\n> what kind of IO.\n\nOkay. Point taken.\n\n> We should over time collect all sources of IO in pg_stat_io. For some things\n> we might want to also have more detailed information in other views (e.g. it\n> doesn't make sense to track FPIs in pg_stat_io, but does make sense in\n> pg_stat_wal) - but that should be in addition, not instead of.\n\nSure. I understand here that you mean the number of FPIs counted when\na record is inserted, different from the path where we decide to write\nand/or flush WAL. The proposed patch seems to be a bit inconsistent\nregarding wal_sync_time, by the way.\n\nBy the way, if the write/sync quantities and times begin to be tracked\nby pg_stat_io, I'd see a pretty good argument in removing the\nequivalent columns in pg_stat_wal. It looks like this would reduce\nthe confusion related to the handling of PendingWalStats added in\npgstat_io.c, for one.\n--\nMichael",
"msg_date": "Wed, 8 Nov 2023 09:52:16 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Show WAL write and fsync stats in pg_stat_io"
},
{
"msg_contents": "Hi,\n\nOn 2023-11-08 09:52:16 +0900, Michael Paquier wrote:\n> By the way, if the write/sync quantities and times begin to be tracked\n> by pg_stat_io, I'd see a pretty good argument in removing the\n> equivalent columns in pg_stat_wal. It looks like this would reduce\n> the confusion related to the handling of PendingWalStats added in\n> pgstat_io.c, for one.\n\nAnother approach would be to fetch the relevant columns from pg_stat_io in the\npg_stat_wal view. That'd avoid double accounting and breaking existing\nmonitoring.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 7 Nov 2023 17:19:28 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Show WAL write and fsync stats in pg_stat_io"
},
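A minimal sketch of that view-level approach on the reporting side: pg_stat_get_wal() would fill its timing columns from the pg_stat_io accumulators instead of a separately maintained counter. pg_stat_get_io_time() is an assumed helper returning the accumulated time in milliseconds for one (object, context, op) combination, and the column positions are illustrative:

    /* In pg_stat_get_wal(): timing columns sourced from pg_stat_io data */
    values[6] = Float8GetDatum(pg_stat_get_io_time(IOOBJECT_WAL,
                                                   IOCONTEXT_NORMAL,
                                                   IOOP_WRITE));
    values[7] = Float8GetDatum(pg_stat_get_io_time(IOOBJECT_WAL,
                                                   IOCONTEXT_NORMAL,
                                                   IOOP_FSYNC));

The wal_write_time and wal_sync_time columns keep their names and units, so existing monitoring keeps working, while the underlying timing is collected only once.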
{
"msg_contents": "On Tue, Nov 07, 2023 at 05:19:28PM -0800, Andres Freund wrote:\n> Another approach would be to fetch the relevant columns from pg_stat_io in the\n> pg_stat_wal view. That'd avoid double accounting and breaking existing\n> monitoring.\n\nYep, I'd be OK with that as well to maintain compatibility.\n--\nMichael",
"msg_date": "Wed, 8 Nov 2023 10:27:44 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Show WAL write and fsync stats in pg_stat_io"
},
{
"msg_contents": "On Wed, Nov 08, 2023 at 10:27:44AM +0900, Michael Paquier wrote:\n> Yep, I'd be OK with that as well to maintain compatibility.\n\nBy the way, note that the patch is failing to apply, and that I've\nswitched it as waiting on author on 10/26.\n--\nMichael",
"msg_date": "Wed, 8 Nov 2023 14:59:07 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Show WAL write and fsync stats in pg_stat_io"
},
{
"msg_contents": "On Wed, Sep 20, 2023 at 1:28 PM Nazir Bilal Yavuz <[email protected]> wrote:\n>\n> Hi,\n>\n> Thanks for the review!\n>\n> Current status of the patch is:\n> - IOOBJECT_WAL / IOCONTEXT_NORMAL read, write and fsync stats are added.\n> - IOOBJECT_WAL / IOCONTEXT_NORMAL write and fsync tests are added.\n> - IOOBJECT_WAL / IOCONTEXT_INIT stats are added.\n> - pg_stat_io shows different op_bytes for the IOOBJECT_WAL operations.\n> - Working on which 'BackendType / IOContext / IOOp' should be banned in pg_stat_io.\n> - PendingWalStats.wal_sync and PendingWalStats.wal_write_time / PendingWalStats.wal_sync_time are moved to pgstat_count_io_op_n() / pgstat_count_io_op_time() respectively.\n>\n> TODOs:\n> - Documentation.\n> - Try to set op_bytes for BackendType / IOContext.\n> - Decide which 'BackendType / IOContext / IOOp' should not be tracked.\n> - Add IOOBJECT_WAL / IOCONTEXT_NORMAL read tests.\n> - Add IOOBJECT_WAL / IOCONTEXT_INIT tests.\n\nThis patchset currently covers:\n- IOOBJECT_WAL / IOCONTEXT_NORMAL read, write and fsync.\n- IOOBJECT_WAL / IOCONTEXT_INIT write and fsync.\n\ndoesn't cover:\n- Streaming replication WAL IO.\n\nIs there any plan to account for WAL read stats in the WALRead()\nfunction which will cover walsenders i.e. WAL read by logical and\nstreaming replication, WAL read by pg_walinspect and so on? I see the\npatch already covers WAL read stats by recovery in XLogPageRead(), but\nnot other page_read callbacks which will end up in WALRead()\neventually. If added, the feature at\nhttps://www.postgresql.org/message-id/CALj2ACXKKK%3DwbiG5_t6dGao5GoecMwRkhr7GjVBM_jg54%2BNa%3DQ%40mail.gmail.com\ncan then extend it to cover WAL read from WAL buffer stats.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 8 Nov 2023 13:04:37 +0530",
"msg_from": "Bharath Rupireddy <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Show WAL write and fsync stats in pg_stat_io"
},
{
"msg_contents": "Hi,\n\nThanks for all the feedback!\n\nOn Wed, 8 Nov 2023 at 08:59, Michael Paquier <[email protected]> wrote:\n>\n> By the way, note that the patch is failing to apply, and that I've\n> switched it as waiting on author on 10/26.\n\nHere is an updated patchset in attachment. Rebased on the latest HEAD\nand changed 'pgstat_report_wal(false)' to 'pgstat_flush_io(false)' in\nxlogrecovery.c. I will share the new version of the patchset once I\naddress the feedback.\n\nRegards,\nNazir Bilal Yavuz\nMicrosoft",
"msg_date": "Thu, 9 Nov 2023 12:35:46 +0300",
"msg_from": "Nazir Bilal Yavuz <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Show WAL write and fsync stats in pg_stat_io"
},
{
"msg_contents": "Hi,\n\nOn Wed, 8 Nov 2023 at 04:19, Andres Freund <[email protected]> wrote:\n>\n> Hi,\n>\n> On 2023-11-08 09:52:16 +0900, Michael Paquier wrote:\n> > By the way, if the write/sync quantities and times begin to be tracked\n> > by pg_stat_io, I'd see a pretty good argument in removing the\n> > equivalent columns in pg_stat_wal. It looks like this would reduce\n> > the confusion related to the handling of PendingWalStats added in\n> > pgstat_io.c, for one.\n>\n> Another approach would be to fetch the relevant columns from pg_stat_io in the\n> pg_stat_wal view. That'd avoid double accounting and breaking existing\n> monitoring.\n\nThere are some differences between pg_stat_wal and pg_stat_io while\ncollecting WAL stats. For example in the XLogWrite() function in the\nxlog.c file, pg_stat_wal counts wal_writes as write system calls. This\nis not something we want for pg_stat_io since pg_stat_io counts the\nnumber of blocks rather than the system calls, so instead incremented\npg_stat_io by npages.\n\nCould that cause a problem since pg_stat_wal's behaviour will be\nchanged? Of course, as an alternative we could change pg_stat_io's\nbehaviour but in the end either pg_stat_wal's or pg_stat_io's\nbehaviour will be changed.\n\nRegards,\nNazir Bilal Yavuz\nMicrosoft\n\n\n",
"msg_date": "Thu, 9 Nov 2023 14:39:26 +0300",
"msg_from": "Nazir Bilal Yavuz <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Show WAL write and fsync stats in pg_stat_io"
},
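A minimal sketch of the accounting difference described above, at the point in XLogWrite() where one write system call has just pushed out a batch of WAL pages; the variable names (npages, nbytes, startoffset, openLogFile) are illustrative rather than the exact patch text:

    /* one pg_pwrite() covering npages WAL blocks */
    written = pg_pwrite(openLogFile, from, nbytes, startoffset);

    /* pg_stat_wal: counts the system call */
    PendingWalStats.wal_write++;

    /* pg_stat_io: counts npages operations of op_bytes = XLOG_BLCKSZ each */
    pgstat_count_io_op_n(IOOBJECT_WAL, IOCONTEXT_NORMAL, IOOP_WRITE, npages);

Whichever convention pg_stat_io settles on, the two counters can no longer be assumed to move in lockstep.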
{
"msg_contents": "On Thu, Nov 09, 2023 at 02:39:26PM +0300, Nazir Bilal Yavuz wrote:\n> There are some differences between pg_stat_wal and pg_stat_io while\n> collecting WAL stats. For example in the XLogWrite() function in the\n> xlog.c file, pg_stat_wal counts wal_writes as write system calls. This\n> is not something we want for pg_stat_io since pg_stat_io counts the\n> number of blocks rather than the system calls, so instead incremented\n> pg_stat_io by npages.\n> \n> Could that cause a problem since pg_stat_wal's behaviour will be\n> changed? Of course, as an alternative we could change pg_stat_io's\n> behaviour but in the end either pg_stat_wal's or pg_stat_io's\n> behaviour will be changed.\n\nYep, that could be confusing for existing applications that track the\ninformation of pg_stat_wal. The number of writes is not something\nthat can be correctly shared between both. The timings for the writes\nand the syncs could be shared at least, right?\n\nThis slightly relates to pgstat_count_io_op_n() in your latest patch,\nwhere it feels a bit weird to see an update of\nPendingWalStats.wal_sync sit in the middle of a routine dedicated to\npg_stat_io.. I am not completely sure what's the right balance here,\nbut I would try to implement things so as pg_stat_io paths does not\nneed to know about PendingWalStats.\n--\nMichael",
"msg_date": "Mon, 20 Nov 2023 16:47:19 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Show WAL write and fsync stats in pg_stat_io"
},
{
"msg_contents": "Hi,\n\nThanks for the feedback.\n\nOn Mon, 20 Nov 2023 at 10:47, Michael Paquier <[email protected]> wrote:\n>\n> On Thu, Nov 09, 2023 at 02:39:26PM +0300, Nazir Bilal Yavuz wrote:\n> > There are some differences between pg_stat_wal and pg_stat_io while\n> > collecting WAL stats. For example in the XLogWrite() function in the\n> > xlog.c file, pg_stat_wal counts wal_writes as write system calls. This\n> > is not something we want for pg_stat_io since pg_stat_io counts the\n> > number of blocks rather than the system calls, so instead incremented\n> > pg_stat_io by npages.\n> >\n> > Could that cause a problem since pg_stat_wal's behaviour will be\n> > changed? Of course, as an alternative we could change pg_stat_io's\n> > behaviour but in the end either pg_stat_wal's or pg_stat_io's\n> > behaviour will be changed.\n>\n> Yep, that could be confusing for existing applications that track the\n> information of pg_stat_wal. The number of writes is not something\n> that can be correctly shared between both. The timings for the writes\n> and the syncs could be shared at least, right?\n\nYes, the timings for the writes and the syncs should work. Another\nquestion I have in mind is the pg_stat_reset_shared() function. When\nwe call it with 'io' it will reset pg_stat_wal's timings and when we\ncall it with 'wal' it won't reset them, right?\n\n>\n> This slightly relates to pgstat_count_io_op_n() in your latest patch,\n> where it feels a bit weird to see an update of\n> PendingWalStats.wal_sync sit in the middle of a routine dedicated to\n> pg_stat_io.. I am not completely sure what's the right balance here,\n> but I would try to implement things so as pg_stat_io paths does not\n> need to know about PendingWalStats.\n\nWrite has block vs system calls differentiation but it is the same for\nsync. Because of that I put PendingWalStats.wal_sync to pg_stat_io but\nI agree that it looks a bit weird.\n\n-- \nRegards,\nNazir Bilal Yavuz\nMicrosoft\n\n\n",
"msg_date": "Mon, 20 Nov 2023 17:43:17 +0300",
"msg_from": "Nazir Bilal Yavuz <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Show WAL write and fsync stats in pg_stat_io"
},
{
"msg_contents": "On Mon, Nov 20, 2023 at 05:43:17PM +0300, Nazir Bilal Yavuz wrote:\n> Yes, the timings for the writes and the syncs should work. Another\n> question I have in mind is the pg_stat_reset_shared() function. When\n> we call it with 'io' it will reset pg_stat_wal's timings and when we\n> call it with 'wal' it won't reset them, right?\n\npg_stat_reset_shared() with a target is IMO a very edge case, so I'm\nOK with the approach of resetting timings in pg_stat_wal even if 'io'\nwas implied because pg_stat_wal would feed partially from pg_stat_io.\nI'd take that as a side-cost in favor of compatibility while making\nthe stats gathering cheaper overall. I'm OK as well if people\ncounter-argue on this point, though that would mean to keep entirely\nseparate views with duplicated fields that serve the same purpose,\nimpacting all deployments because it would make the stats gathering\nheavier for all.\n--\nMichael",
"msg_date": "Tue, 21 Nov 2023 09:26:00 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Show WAL write and fsync stats in pg_stat_io"
},
{
"msg_contents": "Hi,\n\nOn Wed, 8 Nov 2023 at 10:34, Bharath Rupireddy\n<[email protected]> wrote:\n>\n> Is there any plan to account for WAL read stats in the WALRead()\n> function which will cover walsenders i.e. WAL read by logical and\n> streaming replication, WAL read by pg_walinspect and so on? I see the\n> patch already covers WAL read stats by recovery in XLogPageRead(), but\n> not other page_read callbacks which will end up in WALRead()\n> eventually. If added, the feature at\n> https://www.postgresql.org/message-id/CALj2ACXKKK%3DwbiG5_t6dGao5GoecMwRkhr7GjVBM_jg54%2BNa%3DQ%40mail.gmail.com\n> can then extend it to cover WAL read from WAL buffer stats.\n\nYes, I am planning to create a patch for that after this patch is\ndone. Thanks for informing!\n\n-- \nRegards,\nNazir Bilal Yavuz\nMicrosoft\n\n\n",
"msg_date": "Fri, 1 Dec 2023 11:30:08 +0300",
"msg_from": "Nazir Bilal Yavuz <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Show WAL write and fsync stats in pg_stat_io"
},
{
"msg_contents": "Hi,\n\nThanks for all the feedback. I am sharing the new version of the patchset.\n\nCurrent status of the patchset is:\n- IOOBJECT_WAL / IOCONTEXT_NORMAL / read, write, fsync stats and their\ntests are added.\n- IOOBJECT_WAL / IOCONTEXT_INIT stats and their tests are added.\n- Documentation is updated.\n- pg_stat_io shows different op_bytes for the IOOBJECT_WAL operations.\n- PendingWalStats.wal_sync and PendingWalStats.wal_write_time /\nPendingWalStats.wal_sync_time are moved to pgstat_count_io_op_n() /\npgstat_count_io_op_time() respectively.\n\nUpdates & Discussion items:\n- Try to set op_bytes for BackendType / IOContext: I think we don't\nneed this now, we will need this when we add streaming replication WAL\nIOs.\n\n- Decide which 'BackendType / IOContext / IOOp' should not be tracked:\n-- IOOBJECT_WAL / IOCONTEXT_INIT + IOCONTEXT_NORMAL / write and fsync\nIOs can be done on every backend that tracks IO statistics. Because of\nthat and since we have a pgstat_tracks_io_bktype(bktype) check, I\ndidn't add another check for this.\n-- I found that only the standalone backend and startup backend do\nIOOBJECT_WAL / IOCONTEXT_NORMAL / read IOs. So, I added a check for\nthat but I am not sure if there are more backends that do WAL reads on\nWAL recovery.\n\n- For the IOOBJECT_WAL / IOCONTEXT_INIT and IOOBJECT_WAL /\nIOCONTEXT_NORMAL / read tests, I used initial WAL IOs to check these\nstats. I am not sure if that is the correct way or enough to test\nthese stats.\n\n- To not calculate WAL timings on pg_stat_wal and pg_stat_io view,\npg_stat_wal view's WAL timings are fetched from pg_stat_io. Since\nthese timings are fetched from pg_stat_io, pg_stat_reset_shared('io')\nwill reset pg_stat_wal's timings too.\n\n- I didn't move 'PendingWalStats.wal_sync' out from the\n'pgstat_count_io_op_n' function because they count the same thing\n(block vs system calls) but I agree that this doesn't look good.\n\nAny kind of feedback would be appreciated.\n\n-- \nRegards,\nNazir Bilal Yavuz\nMicrosoft",
"msg_date": "Fri, 1 Dec 2023 12:02:05 +0300",
"msg_from": "Nazir Bilal Yavuz <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Show WAL write and fsync stats in pg_stat_io"
},
{
"msg_contents": "On Fri, Dec 01, 2023 at 12:02:05PM +0300, Nazir Bilal Yavuz wrote:\n> Thanks for all the feedback. I am sharing the new version of the patchset.\n> \n> - I didn't move 'PendingWalStats.wal_sync' out from the\n> 'pgstat_count_io_op_n' function because they count the same thing\n> (block vs system calls) but I agree that this doesn't look good.\n\n- if (io_op == IOOP_WRITE || io_op == IOOP_EXTEND)\n+ if (io_op == IOOP_EXTEND || io_op == IOOP_WRITE)\n\nUnrelated diff.\n\n+ if (io_object == IOOBJECT_WAL && io_context == IOCONTEXT_NORMAL &&\n+ io_op == IOOP_FSYNC)\n+ PendingWalStats.wal_sync += cnt;\n\nNah, I really don't think that adding this dependency within\npg_stat_io is a good idea.\n\n- PendingWalStats.wal_sync++;\n+ pgstat_count_io_op_time(IOOBJECT_WAL, IOCONTEXT_NORMAL, IOOP_FSYNC,\n+ io_start, 1);\n\nThis is the only caller where this matters, and the count is always 1.\n\n+\tno_wal_normal_read = bktype == B_AUTOVAC_LAUNCHER ||\n+\t\tbktype == B_AUTOVAC_WORKER || bktype == B_BACKEND ||\n+\t\tbktype == B_BG_WORKER || bktype == B_BG_WRITER ||\n+\t\tbktype == B_CHECKPOINTER || bktype == B_WAL_RECEIVER ||\n+\t\tbktype == B_WAL_SENDER || bktype == B_WAL_WRITER;\n+\n+\tif (no_wal_normal_read &&\n+\t\t(io_object == IOOBJECT_WAL &&\n+\t\t io_op == IOOP_READ))\n+\t\treturn false;\n\nThis may be more readable if an enum is applied, without a default\nclause so as it would not be forgotten if a new type is added, perhaps\nin its own little routine.\n\n- if (track_io_timing)\n+ if (track_io_timing || track_wal_io_timing)\n INSTR_TIME_SET_CURRENT(io_start);\n else\n\nThis interface from pgstat_prepare_io_time() is not really good,\nbecause we could finish by setting io_start in the existing code paths\ncalling this routine even if track_io_timing is false when\ntrack_wal_io_timing is true. Why not changing this interface a bit\nand pass down a GUC (track_io_timing or track_wal_io_timing) as an\nargument of the function depending on what we expect to trigger the\ntimings?\n\n-\t/* Convert counters from microsec to millisec for display */\n-\tvalues[6] = Float8GetDatum(((double) wal_stats->wal_write_time) / 1000.0);\n-\tvalues[7] = Float8GetDatum(((double) wal_stats->wal_sync_time) / 1000.0);\n+\t/*\n+\t * There is no need to calculate timings for both pg_stat_wal and\n+\t * pg_stat_io. So, fetch timings from pg_stat_io to make stats gathering\n+\t * cheaper. Note that, since timings are fetched from pg_stat_io;\n+\t * pg_stat_reset_shared('io') will reset pg_stat_wal's timings too.\n+\t *\n+\t * Convert counters from microsec to millisec for display\n+\t */\n+\tvalues[6] = Float8GetDatum(pg_stat_get_io_time(IOOBJECT_WAL,\n+\t\t\t\t\t\t\t\t\t\t\t\t IOCONTEXT_NORMAL,\n+\t\t\t\t\t\t\t\t\t\t\t\t IOOP_WRITE));\n+\tvalues[7] = Float8GetDatum(pg_stat_get_io_time(IOOBJECT_WAL,\n+\t\t\t\t\t\t\t\t\t\t\t\t IOCONTEXT_NORMAL,\n+\t\t\t\t\t\t\t\t\t\t\t\t IOOP_FSYNC));\n\nPerhaps it is simpler to remove these columns from pg_stat_get_wal()\nand plug an SQL upgrade to the view definition of pg_stat_wal?\n\n+int\n+pgstat_get_io_op_bytes(IOObject io_object, IOContext io_context) \n\nThis interface looks like a good idea even if there is only one\ncaller.\n\nFinding a good balance between the subroutines, the two GUCs, the\ncontexts, the I/O operation type and the objects is the tricky part of\nthis patch. If the dependency to PendingWalStats is removed and if\nthe interface of pgstat_prepare_io_time is improved, things are a bit\ncleaner, but it feels like we could do more.. Nya.\n--\nMichael",
"msg_date": "Tue, 5 Dec 2023 15:16:02 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Show WAL write and fsync stats in pg_stat_io"
},
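For the pgstat_prepare_io_time() point, a sketch of the suggested interface change, with the caller deciding which GUC governs the timing; the final signature in the applied patch may differ slightly:

    instr_time
    pgstat_prepare_io_time(bool track_io_guc)
    {
        instr_time  io_start;

        if (track_io_guc)
            INSTR_TIME_SET_CURRENT(io_start);
        else
            INSTR_TIME_SET_ZERO(io_start);  /* avoid uninitialized use */

        return io_start;
    }

    /* block I/O call sites pass the existing GUC ... */
    io_start = pgstat_prepare_io_time(track_io_timing);

    /* ... while WAL call sites pass track_wal_io_timing */
    io_start = pgstat_prepare_io_time(track_wal_io_timing);

This way a WAL call site never starts a timer just because track_io_timing happens to be on, and vice versa.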
{
"msg_contents": "Hi,\n\nThanks for the feedback! The new version of the patch is attached.\n\nOn Tue, 5 Dec 2023 at 09:16, Michael Paquier <[email protected]> wrote:\n>\n> - if (io_op == IOOP_WRITE || io_op == IOOP_EXTEND)\n> + if (io_op == IOOP_EXTEND || io_op == IOOP_WRITE)\n>\n> Unrelated diff.\n\nDone.\n\n>\n> + if (io_object == IOOBJECT_WAL && io_context == IOCONTEXT_NORMAL &&\n> + io_op == IOOP_FSYNC)\n> + PendingWalStats.wal_sync += cnt;\n>\n> Nah, I really don't think that adding this dependency within\n> pg_stat_io is a good idea.\n>\n> - PendingWalStats.wal_sync++;\n> + pgstat_count_io_op_time(IOOBJECT_WAL, IOCONTEXT_NORMAL, IOOP_FSYNC,\n> + io_start, 1);\n>\n> This is the only caller where this matters, and the count is always 1.\n\nI reverted that, pgstat_count_io_op_n doesn't count\nPendingWalStats.wal_sync now.\n\n>\n> + no_wal_normal_read = bktype == B_AUTOVAC_LAUNCHER ||\n> + bktype == B_AUTOVAC_WORKER || bktype == B_BACKEND ||\n> + bktype == B_BG_WORKER || bktype == B_BG_WRITER ||\n> + bktype == B_CHECKPOINTER || bktype == B_WAL_RECEIVER ||\n> + bktype == B_WAL_SENDER || bktype == B_WAL_WRITER;\n> +\n> + if (no_wal_normal_read &&\n> + (io_object == IOOBJECT_WAL &&\n> + io_op == IOOP_READ))\n> + return false;\n>\n> This may be more readable if an enum is applied, without a default\n> clause so as it would not be forgotten if a new type is added, perhaps\n> in its own little routine.\n\nDone.\n\n>\n> - if (track_io_timing)\n> + if (track_io_timing || track_wal_io_timing)\n> INSTR_TIME_SET_CURRENT(io_start);\n> else\n>\n> This interface from pgstat_prepare_io_time() is not really good,\n> because we could finish by setting io_start in the existing code paths\n> calling this routine even if track_io_timing is false when\n> track_wal_io_timing is true. Why not changing this interface a bit\n> and pass down a GUC (track_io_timing or track_wal_io_timing) as an\n> argument of the function depending on what we expect to trigger the\n> timings?\n\nDone in 0001.\n\n>\n> - /* Convert counters from microsec to millisec for display */\n> - values[6] = Float8GetDatum(((double) wal_stats->wal_write_time) / 1000.0);\n> - values[7] = Float8GetDatum(((double) wal_stats->wal_sync_time) / 1000.0);\n> + /*\n> + * There is no need to calculate timings for both pg_stat_wal and\n> + * pg_stat_io. So, fetch timings from pg_stat_io to make stats gathering\n> + * cheaper. Note that, since timings are fetched from pg_stat_io;\n> + * pg_stat_reset_shared('io') will reset pg_stat_wal's timings too.\n> + *\n> + * Convert counters from microsec to millisec for display\n> + */\n> + values[6] = Float8GetDatum(pg_stat_get_io_time(IOOBJECT_WAL,\n> + IOCONTEXT_NORMAL,\n> + IOOP_WRITE));\n> + values[7] = Float8GetDatum(pg_stat_get_io_time(IOOBJECT_WAL,\n> + IOCONTEXT_NORMAL,\n> + IOOP_FSYNC));\n>\n> Perhaps it is simpler to remove these columns from pg_stat_get_wal()\n> and plug an SQL upgrade to the view definition of pg_stat_wal?\n\nDone in 0003 but I am not sure if that is what you expected.\n\n> Finding a good balance between the subroutines, the two GUCs, the\n> contexts, the I/O operation type and the objects is the tricky part of\n> this patch. If the dependency to PendingWalStats is removed and if\n> the interface of pgstat_prepare_io_time is improved, things are a bit\n> cleaner, but it feels like we could do more.. Nya.\n\nI agree. The patch is not logically complicated but it is hard to\nselect the best way.\n\nAny kind of feedback would be appreciated.\n\n--\nRegards,\nNazir Bilal Yavuz\nMicrosoft",
"msg_date": "Tue, 12 Dec 2023 14:29:03 +0300",
"msg_from": "Nazir Bilal Yavuz <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Show WAL write and fsync stats in pg_stat_io"
},
{
"msg_contents": "On Tue, Dec 12, 2023 at 02:29:03PM +0300, Nazir Bilal Yavuz wrote:\n> On Tue, 5 Dec 2023 at 09:16, Michael Paquier <[email protected]> wrote:\n>> This interface from pgstat_prepare_io_time() is not really good,\n>> because we could finish by setting io_start in the existing code paths\n>> calling this routine even if track_io_timing is false when\n>> track_wal_io_timing is true. Why not changing this interface a bit\n>> and pass down a GUC (track_io_timing or track_wal_io_timing) as an\n>> argument of the function depending on what we expect to trigger the\n>> timings?\n> \n> Done in 0001.\n\nOne thing that 0001 missed is an update of the header where the\nfunction is declared. I've edited a few things, and applied it to\nstart on this stuff. The rest will have to wait a bit more..\n--\nMichael",
"msg_date": "Sat, 16 Dec 2023 20:20:57 +0100",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Show WAL write and fsync stats in pg_stat_io"
},
{
"msg_contents": "On Sat, Dec 16, 2023 at 08:20:57PM +0100, Michael Paquier wrote:\n> One thing that 0001 missed is an update of the header where the\n> function is declared. I've edited a few things, and applied it to\n> start on this stuff. The rest will have to wait a bit more..\n\nI have been reviewing the whole, and spotted a couple of issues.\n\n+\t * At the end of the if case, accumulate time for the pg_stat_io.\n+\t */\n+\tif (pgstat_should_track_io_time(io_object, io_context))\n\nThere was a bug here. WAL operations can do IOOP_WRITE or IOOP_READ,\nand this would cause pgstat_count_buffer_read_time() and\npgstat_count_buffer_write_time() to be called, incrementing\npgStatBlock{Read,Write}Time, which would be incorrect when it comes to\na WAL page or a WAL segment. I was wondering what to do here first,\nbut we could just avoid calling these routines when working on an\nIOOBJECT_WAL as that's the only object not doing a buffer operation.\n\nA comment at the top of pgstat_tracks_io_bktype() is incorrect,\nbecause this patch adds the WAL writer sender in the I/O tracking.\n\n+ case B_WAL_RECEIVER:\n+ case B_WAL_SENDER:\n+ case B_WAL_WRITER:\n+ return false;\n\npgstat_tracks_io_op() now needs B_WAL_SUMMARIZER.\n\npgstat_should_track_io_time() is used only in pgstat_io.c, so it can\nbe static rather than published in pgstat.h.\n\npgstat_tracks_io_bktype() does not look correct to me. Why is the WAL\nreceiver considered as something correct in the list of backend types,\nwhile the intention is to *not* add it to pg_stat_io? I have tried to\nswitche to the correct behavior of returning false for a\nB_WAL_RECEIVER, to notice that pg_rewind's test 002_databases.pl\nfreezes on its shutdown sequence. Something weird is going on here.\nCould you look at it? See the XXX comment in the attached, which is\nthe same behavior as v6-0002. It looks to me that the patch has\nintroduced an infinite loop tweaking pgstat_tracks_io_bktype() in an\nincorrect way to avoid the root issue.\n\nI have also spent more time polishing the rest, touching a few things\nwhile reviewing. Not sure that I see a point in splitting the tests\nfrom the main patch.\n--\nMichael",
"msg_date": "Mon, 25 Dec 2023 15:20:58 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Show WAL write and fsync stats in pg_stat_io"
},
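A sketch of the "little routine" with a switch and no default clause, so a newly added BackendType forces a decision here at compile time. The function name pgstat_wal_normal_read_possible() is made up, and the split between true and false cases follows the hunk quoted above; it is illustrative, not exhaustive (the remaining auxiliary types, and B_WAL_SUMMARIZER mentioned above, would need their own cases):

    static bool
    pgstat_wal_normal_read_possible(BackendType bktype)
    {
        switch (bktype)
        {
            case B_STANDALONE_BACKEND:
            case B_STARTUP:
                return true;    /* WAL recovery reads in IOCONTEXT_NORMAL */

            case B_AUTOVAC_LAUNCHER:
            case B_AUTOVAC_WORKER:
            case B_BACKEND:
            case B_BG_WORKER:
            case B_BG_WRITER:
            case B_CHECKPOINTER:
            case B_WAL_RECEIVER:
            case B_WAL_SENDER:
            case B_WAL_WRITER:
                return false;

                /* no default: new backend types must be classified here */
        }

        return false;           /* keep the compiler quiet */
    }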
{
"msg_contents": "On Mon, Dec 25, 2023 at 03:20:58PM +0900, Michael Paquier wrote:\n> pgstat_tracks_io_bktype() does not look correct to me. Why is the WAL\n> receiver considered as something correct in the list of backend types,\n> while the intention is to *not* add it to pg_stat_io? I have tried to\n> switche to the correct behavior of returning false for a\n> B_WAL_RECEIVER, to notice that pg_rewind's test 002_databases.pl\n> freezes on its shutdown sequence. Something weird is going on here.\n> Could you look at it? See the XXX comment in the attached, which is\n> the same behavior as v6-0002. It looks to me that the patch has\n> introduced an infinite loop tweaking pgstat_tracks_io_bktype() in an\n> incorrect way to avoid the root issue.\n\nAh, that's because it would trigger an assertion failure:\nTRAP: failed Assert(\"pgstat_tracks_io_op(MyBackendType, io_object,\n io_context, io_op)\"), File: \"pgstat_io.c\", Line: 89, PID: 6824\npostgres: standby_local: walreceiver\n(ExceptionalCondition+0xa8)[0x560d1b4dd38a]\n\nAnd the backtrace just tells that this is the WAL receiver\ninitializing a WAL segment:\n#5 0x0000560d1b3322c8 in pgstat_count_io_op_n\n(io_object=IOOBJECT_WAL, io_context=IOCONTEXT_INIT, io_op=IOOP_WRITE,\ncnt=1) at pgstat_io.c:89\n#6 0x0000560d1b33254a in pgstat_count_io_op_time\n(io_object=IOOBJECT_WAL, io_context=IOCONTEXT_INIT, io_op=IOOP_WRITE,\nstart_time=..., cnt=1) at pgstat_io.c:181\n#7 0x0000560d1ae7f932 in XLogFileInitInternal (logsegno=3, logtli=1,\nadded=0x7ffd2733c6eb, path=0x7ffd2733c2e0 \"pg_wal/00000001\", '0'\n<repeats 15 times>, \"3\") at xlog.c:3115\n#8 0x0000560d1ae7fc4e in XLogFileInit (logsegno=3, logtli=1) at\nxlog.c:3215\n\nWouldn't it be simpler to just bite the bullet in this case and handle\nWAL receivers in the IO tracking?\n--\nMichael",
"msg_date": "Mon, 25 Dec 2023 15:40:17 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Show WAL write and fsync stats in pg_stat_io"
},
{
"msg_contents": "Hi,\n\nThanks for the review and feedback on your previous reply!\n\nOn Mon, 25 Dec 2023 at 09:40, Michael Paquier <[email protected]> wrote:\n>\n> On Mon, Dec 25, 2023 at 03:20:58PM +0900, Michael Paquier wrote:\n> > pgstat_tracks_io_bktype() does not look correct to me. Why is the WAL\n> > receiver considered as something correct in the list of backend types,\n> > while the intention is to *not* add it to pg_stat_io? I have tried to\n> > switche to the correct behavior of returning false for a\n> > B_WAL_RECEIVER, to notice that pg_rewind's test 002_databases.pl\n> > freezes on its shutdown sequence. Something weird is going on here.\n> > Could you look at it? See the XXX comment in the attached, which is\n> > the same behavior as v6-0002. It looks to me that the patch has\n> > introduced an infinite loop tweaking pgstat_tracks_io_bktype() in an\n> > incorrect way to avoid the root issue.\n>\n> Ah, that's because it would trigger an assertion failure:\n> TRAP: failed Assert(\"pgstat_tracks_io_op(MyBackendType, io_object,\n> io_context, io_op)\"), File: \"pgstat_io.c\", Line: 89, PID: 6824\n> postgres: standby_local: walreceiver\n> (ExceptionalCondition+0xa8)[0x560d1b4dd38a]\n>\n> And the backtrace just tells that this is the WAL receiver\n> initializing a WAL segment:\n> #5 0x0000560d1b3322c8 in pgstat_count_io_op_n\n> (io_object=IOOBJECT_WAL, io_context=IOCONTEXT_INIT, io_op=IOOP_WRITE,\n> cnt=1) at pgstat_io.c:89\n> #6 0x0000560d1b33254a in pgstat_count_io_op_time\n> (io_object=IOOBJECT_WAL, io_context=IOCONTEXT_INIT, io_op=IOOP_WRITE,\n> start_time=..., cnt=1) at pgstat_io.c:181\n> #7 0x0000560d1ae7f932 in XLogFileInitInternal (logsegno=3, logtli=1,\n> added=0x7ffd2733c6eb, path=0x7ffd2733c2e0 \"pg_wal/00000001\", '0'\n> <repeats 15 times>, \"3\") at xlog.c:3115\n> #8 0x0000560d1ae7fc4e in XLogFileInit (logsegno=3, logtli=1) at\n> xlog.c:3215\n\nCorrect.\n\n>\n> Wouldn't it be simpler to just bite the bullet in this case and handle\n> WAL receivers in the IO tracking?\n\nThere is one problem and I couldn't decide how to solve it. We need to\nhandle read IO in WALRead() in xlogreader.c. How many bytes the\nWALRead() function will read is controlled by a variable and it can be\ndifferent from XLOG_BLCKSZ. This is a problem because pg_stat_io's\nop_bytes column is a constant.\n\nHere are all WALRead() function calls:\n\n1- read_local_xlog_page_guts() in xlogutils.c => WALRead(XLOG_BLCKSZ)\n=> always reads XLOG_BLCKSZ.\n\n2- summarizer_read_local_xlog_page() in walsummarizer.c =>\nWALRead(XLOG_BLCKSZ) => always reads XLOG_BLCKSZ.\n\n3- logical_read_xlog_page() in walsender.c => WALRead(XLOG_BLCKSZ) =>\nalways reads XLOG_BLCKSZ.\n\n4- XLogSendPhysical() in walsender.c => WALRead(nbytes) => nbytes can\nbe different from XLOG_BLCKSZ.\n\n5- WALDumpReadPage() in pg_waldump.c => WALRead(count) => count can be\ndifferent from XLOG_BLCKSZ.\n\n4 and 5 are the problematic calls.\n\nMelanie's answer to this problem on previous discussions:\n\nOn Wed, 9 Aug 2023 at 21:52, Melanie Plageman <[email protected]> wrote:\n>\n> If there is any combination of BackendType and IOContext which will\n> always read XLOG_BLCKSZ bytes, we could use XLOG_BLCKSZ for that row's\n> op_bytes. For other cases, we may have to consider using op_bytes 1 and\n> tracking reads and write IOOps in number of bytes (instead of number of\n> pages). 
I don't actually know if there is a clear separation by\n> BackendType for these different cases.\n\nUsing op_bytes as 1 solves this problem but since it will be different\nfrom the rest of the pg_stat_io view it could be hard to understand.\nThere is no clear separation by backends as it can be seen from the walsender.\n\n>\n> The other alternative I see is to use XLOG_BLCKSZ as the op_bytes and\n> treat op_bytes * number of reads as an approximation of the number of\n> bytes read. I don't actually know what makes more sense. I don't think I\n> would like having a number for bytes that is not accurate.\n\nAlso, we have a similar problem in XLogPageRead() in xlogrecovery.c.\npg_pread() call tries to read XLOG_BLCKSZ but it is not certain and we\ndon't count IO if it couldn't read XLOG_BLCKSZ. IMO, this is not as\nimportant as the previous problem but it still is a problem.\n\nI would be glad to hear opinions on these problems.\n\n--\nRegards,\nNazir Bilal Yavuz\nMicrosoft\n\n\n",
"msg_date": "Mon, 25 Dec 2023 16:09:34 +0300",
"msg_from": "Nazir Bilal Yavuz <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Show WAL write and fsync stats in pg_stat_io"
},
{
"msg_contents": "On Mon, Dec 25, 2023 at 04:09:34PM +0300, Nazir Bilal Yavuz wrote:\n> On Wed, 9 Aug 2023 at 21:52, Melanie Plageman <[email protected]> wrote:\n>> If there is any combination of BackendType and IOContext which will\n>> always read XLOG_BLCKSZ bytes, we could use XLOG_BLCKSZ for that row's\n>> op_bytes. For other cases, we may have to consider using op_bytes 1 and\n>> tracking reads and write IOOps in number of bytes (instead of number of\n>> pages). I don't actually know if there is a clear separation by\n>> BackendType for these different cases.\n> \n> Using op_bytes as 1 solves this problem but since it will be different\n> from the rest of the pg_stat_io view it could be hard to understand.\n> There is no clear separation by backends as it can be seen from the walsender.\n\nI find the use of 1 in this context a bit confusing, because when\nreferring to a counter at N, then it can be understood as doing N\ntimes a operation, but it would be much less than that. Another\nsolution would be to use NULL (as a synonym of \"I don't know\") and\nthen document that in this case all the bigint counters of pg_stat_io\ntrack the number of bytes rather than the number of operations?\n\n>> The other alternative I see is to use XLOG_BLCKSZ as the op_bytes and\n>> treat op_bytes * number of reads as an approximation of the number of\n>> bytes read. I don't actually know what makes more sense. I don't think I\n>> would like having a number for bytes that is not accurate.\n> \n> Also, we have a similar problem in XLogPageRead() in xlogrecovery.c.\n> pg_pread() call tries to read XLOG_BLCKSZ but it is not certain and we\n> don't count IO if it couldn't read XLOG_BLCKSZ. IMO, this is not as\n> important as the previous problem but it still is a problem.\n> \n> I would be glad to hear opinions on these problems.\n\nCorrectness matters a lot for monitoring, IMO.\n--\nMichael",
"msg_date": "Tue, 26 Dec 2023 09:06:04 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Show WAL write and fsync stats in pg_stat_io"
},
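On the reporting side, the NULL idea could look roughly like this inside pg_stat_get_io(); IO_COL_OP_BYTES, io_op_size_is_fixed() and op_bytes_for() are illustrative names, not existing code:

    /* op_bytes is NULL when the operation size is not fixed ... */
    if (!io_op_size_is_fixed(io_object, io_context))
        nulls[IO_COL_OP_BYTES] = true;
    else
        values[IO_COL_OP_BYTES] =
            Int64GetDatum(op_bytes_for(io_object, io_context));

    /*
     * ... and the documentation would then state that, for such rows, the
     * reads/writes/extends counters track bytes instead of operations.
     */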
{
"msg_contents": "Hi,\n\nOn Tue, 26 Dec 2023 at 03:06, Michael Paquier <[email protected]> wrote:\n>\n> On Mon, Dec 25, 2023 at 04:09:34PM +0300, Nazir Bilal Yavuz wrote:\n> > On Wed, 9 Aug 2023 at 21:52, Melanie Plageman <[email protected]> wrote:\n> >> If there is any combination of BackendType and IOContext which will\n> >> always read XLOG_BLCKSZ bytes, we could use XLOG_BLCKSZ for that row's\n> >> op_bytes. For other cases, we may have to consider using op_bytes 1 and\n> >> tracking reads and write IOOps in number of bytes (instead of number of\n> >> pages). I don't actually know if there is a clear separation by\n> >> BackendType for these different cases.\n> >\n> > Using op_bytes as 1 solves this problem but since it will be different\n> > from the rest of the pg_stat_io view it could be hard to understand.\n> > There is no clear separation by backends as it can be seen from the walsender.\n>\n> I find the use of 1 in this context a bit confusing, because when\n> referring to a counter at N, then it can be understood as doing N\n> times a operation, but it would be much less than that. Another\n> solution would be to use NULL (as a synonym of \"I don't know\") and\n> then document that in this case all the bigint counters of pg_stat_io\n> track the number of bytes rather than the number of operations?\n\nYes, that makes sense.\n\nMaybe it is better to create a pg_stat_io_wal view like you said\nbefore. We could remove unused columns and add op_bytes for each\nwrites and reads. Also, we can track both the number of bytes and the\nnumber of the operations. This doesn't fully solve the problem but it\nwill be easier to modify it to meet our needs.\n\n-- \nRegards,\nNazir Bilal Yavuz\nMicrosoft\n\n\n",
"msg_date": "Tue, 26 Dec 2023 11:27:16 +0300",
"msg_from": "Nazir Bilal Yavuz <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Show WAL write and fsync stats in pg_stat_io"
},
{
"msg_contents": "On Tue, Dec 26, 2023 at 11:27:16AM +0300, Nazir Bilal Yavuz wrote:\n> Maybe it is better to create a pg_stat_io_wal view like you said\n> before. We could remove unused columns and add op_bytes for each\n> writes and reads. Also, we can track both the number of bytes and the\n> number of the operations. This doesn't fully solve the problem but it\n> will be easier to modify it to meet our needs.\n\nI am not sure while the whole point of the exercise is to have all the\nI/O related data in a single view. Something that I've also found a\nbit disturbing yesterday while looking at your patch is the fact that\nthe operation size is guessed from the context and object type when\nquerying the view because now everything is tied to BLCKSZ. This\npatch extends it with two more operation sizes, and there are even\ncases where it may be a variable. Could it be a better option to\nextend pgstat_count_io_op_time() so as callers can themselves give the\nsize of the operation?\n\nThe whole patch is kind of itself complicated enough, so I'd be OK to\ndiscard the case of the WAL receiver for now. Now, if we do so, the\ncode stack of pgstat_io.c should handle WAL receivers as something\nentirely disabled until all the known issues are solved. There is\nstill a lot of value in tracking WAL data associated to the WAL\nwriter, normal backends and WAL senders.\n--\nMichael",
"msg_date": "Tue, 26 Dec 2023 19:10:41 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Show WAL write and fsync stats in pg_stat_io"
},
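A sketch of what "callers give the size of the operation themselves" could look like; the extra bytes argument is an assumption about a possible interface, not the shape of any posted patch:

    /* possible extended counter interface (signature illustrative) */
    void pgstat_count_io_op_time(IOObject io_object, IOContext io_context,
                                 IOOp io_op, instr_time start_time,
                                 uint32 cnt, uint64 bytes);

    /* fixed-size caller: one WAL block read during recovery */
    pgstat_count_io_op_time(IOOBJECT_WAL, IOCONTEXT_NORMAL, IOOP_READ,
                            io_start, 1, XLOG_BLCKSZ);

    /* variable-size caller: a physical walsender WALRead() of nbytes */
    pgstat_count_io_op_time(IOOBJECT_WAL, IOCONTEXT_NORMAL, IOOP_READ,
                            io_start, 1, nbytes);

With the byte count supplied by the caller, a fixed op_bytes no longer has to describe call sites such as XLogSendPhysical() or pg_waldump, which read variable amounts of WAL per call.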
{
"msg_contents": "Hi,\n\nOn Tue, 26 Dec 2023 at 13:10, Michael Paquier <[email protected]> wrote:\n>\n> On Tue, Dec 26, 2023 at 11:27:16AM +0300, Nazir Bilal Yavuz wrote:\n> > Maybe it is better to create a pg_stat_io_wal view like you said\n> > before. We could remove unused columns and add op_bytes for each\n> > writes and reads. Also, we can track both the number of bytes and the\n> > number of the operations. This doesn't fully solve the problem but it\n> > will be easier to modify it to meet our needs.\n>\n> I am not sure while the whole point of the exercise is to have all the\n> I/O related data in a single view. Something that I've also found a\n> bit disturbing yesterday while looking at your patch is the fact that\n> the operation size is guessed from the context and object type when\n> querying the view because now everything is tied to BLCKSZ. This\n> patch extends it with two more operation sizes, and there are even\n> cases where it may be a variable. Could it be a better option to\n> extend pgstat_count_io_op_time() so as callers can themselves give the\n> size of the operation?\n\nDo you mean removing the op_bytes column and tracking the number of\nbytes in reads, writes, and extends? If so, that makes sense to me but\nI don't want to remove the number of operations; I believe that has a\nvalue too. We can extend the pgstat_count_io_op_time() so it can both\ntrack the number of bytes and the number of operations.\nAlso, it is not directly related to this patch but vectored IO [1] is\ncoming soon; so the number of operations could be wrong since vectored\nIO could merge a couple of operations.\n\n>\n> The whole patch is kind of itself complicated enough, so I'd be OK to\n> discard the case of the WAL receiver for now. Now, if we do so, the\n> code stack of pgstat_io.c should handle WAL receivers as something\n> entirely disabled until all the known issues are solved. There is\n> still a lot of value in tracking WAL data associated to the WAL\n> writer, normal backends and WAL senders.\n\nWhy can't we add comments and leave it as it is? Is it because this\ncould cause misunderstandings?\n\nIf we want to entirely disable it, we can add\n\nif (MyBackendType == B_WAL_RECEIVER && io_object == IOOBJECT_WAL)\n return;\n\nto the top of the pgstat_count_io_op_time() since all IOOBJECT_WAL\ncalls are done by this function, then we can disable it at\npgstat_tracks_io_bktype().\n\n[1] https://www.postgresql.org/message-id/CA%2BhUKGJkOiOCa%2Bmag4BF%2BzHo7qo%3Do9CFheB8%3Dg6uT5TUm2gkvA%40mail.gmail.com\n\n--\nRegards,\nNazir Bilal Yavuz\nMicrosoft\n\n\n",
"msg_date": "Tue, 26 Dec 2023 15:35:52 +0300",
"msg_from": "Nazir Bilal Yavuz <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Show WAL write and fsync stats in pg_stat_io"
},
{
"msg_contents": "On Tue, Dec 26, 2023 at 03:35:52PM +0300, Nazir Bilal Yavuz wrote:\n> On Tue, 26 Dec 2023 at 13:10, Michael Paquier <[email protected]> wrote:\n>> I am not sure while the whole point of the exercise is to have all the\n>> I/O related data in a single view. Something that I've also found a\n>> bit disturbing yesterday while looking at your patch is the fact that\n>> the operation size is guessed from the context and object type when\n>> querying the view because now everything is tied to BLCKSZ. This\n>> patch extends it with two more operation sizes, and there are even\n>> cases where it may be a variable. Could it be a better option to\n>> extend pgstat_count_io_op_time() so as callers can themselves give the\n>> size of the operation?\n> \n> Do you mean removing the op_bytes column and tracking the number of\n> bytes in reads, writes, and extends? If so, that makes sense to me but\n> I don't want to remove the number of operations; I believe that has a\n> value too. We can extend the pgstat_count_io_op_time() so it can both\n> track the number of bytes and the number of operations.\n\nApologies if my previous wording sounded confusing. The idea I had in\nmind was to keep op_bytes in pg_stat_io, and extend it so as a value\nof NULL (or 0, or -1) is a synonym as \"writes\", \"extends\" and \"reads\"\nas a number of bytes.\n\n> Also, it is not directly related to this patch but vectored IO [1] is\n> coming soon; so the number of operations could be wrong since vectored\n> IO could merge a couple of operations.\n\nHmm. I have not checked this patch series so I cannot say for sure,\nbut we'd likely just want to track the number of bytes if a single\noperation has a non-equal size rather than registering in pg_stat_io N\nrows with different op_bytes, no? I am looping in Thomas Munro in CC\nfor comments.\n\n>> The whole patch is kind of itself complicated enough, so I'd be OK to\n>> discard the case of the WAL receiver for now. Now, if we do so, the\n>> code stack of pgstat_io.c should handle WAL receivers as something\n>> entirely disabled until all the known issues are solved. There is\n>> still a lot of value in tracking WAL data associated to the WAL\n>> writer, normal backends and WAL senders.\n> \n> Why can't we add comments and leave it as it is? Is it because this\n> could cause misunderstandings?\n> \n> If we want to entirely disable it, we can add\n> \n> if (MyBackendType == B_WAL_RECEIVER && io_object == IOOBJECT_WAL)\n> return;\n> \n> to the top of the pgstat_count_io_op_time() since all IOOBJECT_WAL\n> calls are done by this function, then we can disable it at\n> pgstat_tracks_io_bktype().\n\nYeah, a limitation like that may be acceptable for now. Tracking the\nWAL writer and WAL sender activities can be relevant in a lot of cases\neven if we don't have the full picture for the WAL receiver yet.\n--\nMichael",
"msg_date": "Sun, 31 Dec 2023 09:58:33 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Show WAL write and fsync stats in pg_stat_io"
},
{
"msg_contents": "Hi,\n\nOn Sun, 31 Dec 2023 at 03:58, Michael Paquier <[email protected]> wrote:\n>\n> On Tue, Dec 26, 2023 at 03:35:52PM +0300, Nazir Bilal Yavuz wrote:\n> > On Tue, 26 Dec 2023 at 13:10, Michael Paquier <[email protected]> wrote:\n> >> I am not sure while the whole point of the exercise is to have all the\n> >> I/O related data in a single view. Something that I've also found a\n> >> bit disturbing yesterday while looking at your patch is the fact that\n> >> the operation size is guessed from the context and object type when\n> >> querying the view because now everything is tied to BLCKSZ. This\n> >> patch extends it with two more operation sizes, and there are even\n> >> cases where it may be a variable. Could it be a better option to\n> >> extend pgstat_count_io_op_time() so as callers can themselves give the\n> >> size of the operation?\n> >\n> > Do you mean removing the op_bytes column and tracking the number of\n> > bytes in reads, writes, and extends? If so, that makes sense to me but\n> > I don't want to remove the number of operations; I believe that has a\n> > value too. We can extend the pgstat_count_io_op_time() so it can both\n> > track the number of bytes and the number of operations.\n>\n> Apologies if my previous wording sounded confusing. The idea I had in\n> mind was to keep op_bytes in pg_stat_io, and extend it so as a value\n> of NULL (or 0, or -1) is a synonym as \"writes\", \"extends\" and \"reads\"\n> as a number of bytes.\n\nOh, I understand it now. Yes, that makes sense.\nI thought removing op_bytes completely ( as you said \"This patch\nextends it with two more operation sizes, and there are even cases\nwhere it may be a variable\" ) from pg_stat_io view then adding\nsomething like {read | write | extend}_bytes and {read | write |\nextend}_calls could be better, so that we don't lose any information.\n\n> > Also, it is not directly related to this patch but vectored IO [1] is\n> > coming soon; so the number of operations could be wrong since vectored\n> > IO could merge a couple of operations.\n>\n> Hmm. I have not checked this patch series so I cannot say for sure,\n> but we'd likely just want to track the number of bytes if a single\n> operation has a non-equal size rather than registering in pg_stat_io N\n> rows with different op_bytes, no?\n\nYes, that is correct.\n\n> I am looping in Thomas Munro in CC for comments.\n\nThanks for doing that.\n\n> > If we want to entirely disable it, we can add\n> >\n> > if (MyBackendType == B_WAL_RECEIVER && io_object == IOOBJECT_WAL)\n> > return;\n> >\n> > to the top of the pgstat_count_io_op_time() since all IOOBJECT_WAL\n> > calls are done by this function, then we can disable it at\n> > pgstat_tracks_io_bktype().\n>\n> Yeah, a limitation like that may be acceptable for now. Tracking the\n> WAL writer and WAL sender activities can be relevant in a lot of cases\n> even if we don't have the full picture for the WAL receiver yet.\n\nI added that and disabled B_WAL_RECEIVER backend with comments\nexplaining why. v8 is attached.\n\n--\nRegards,\nNazir Bilal Yavuz\nMicrosoft",
"msg_date": "Wed, 3 Jan 2024 16:10:58 +0300",
"msg_from": "Nazir Bilal Yavuz <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Show WAL write and fsync stats in pg_stat_io"
},
{
"msg_contents": "On Wed, Jan 03, 2024 at 04:10:58PM +0300, Nazir Bilal Yavuz wrote:\n> On Sun, 31 Dec 2023 at 03:58, Michael Paquier <[email protected]> wrote:\n>> Apologies if my previous wording sounded confusing. The idea I had in\n>> mind was to keep op_bytes in pg_stat_io, and extend it so as a value\n>> of NULL (or 0, or -1) is a synonym as \"writes\", \"extends\" and \"reads\"\n>> as a number of bytes.\n> \n> Oh, I understand it now. Yes, that makes sense.\n> I thought removing op_bytes completely ( as you said \"This patch\n> extends it with two more operation sizes, and there are even cases\n> where it may be a variable\" ) from pg_stat_io view then adding\n> something like {read | write | extend}_bytes and {read | write |\n> extend}_calls could be better, so that we don't lose any information.\n\nBut then you'd lose the possibility to analyze correlations between\nthe size and the number of the operations, which is something that\nmatters for more complex I/O scenarios. This does not need to be\ntackled in this patch, which is useful on its own, though I am really\nwondering if this is required for the recent work done by Thomas.\nPerhaps Andres, Thomas or Melanie could comment on that?\n\n>> Yeah, a limitation like that may be acceptable for now. Tracking the\n>> WAL writer and WAL sender activities can be relevant in a lot of cases\n>> even if we don't have the full picture for the WAL receiver yet.\n> \n> I added that and disabled B_WAL_RECEIVER backend with comments\n> explaining why. v8 is attached.\n\nI can see that's what you have been adding here, which should be OK:\n\n> - if (track_io_timing)\n> + /*\n> + * B_WAL_RECEIVER backend does IOOBJECT_WAL IOObject & IOOP_READ IOOp IOs\n> + * but these IOs are not countable for now because IOOP_READ IOs' op_bytes\n> + * (number of bytes per unit of I/O) might not be the same all the time.\n> + * The current implementation requires that the op_bytes must be the same\n> + * for the same IOObject, IOContext and IOOp. To avoid confusion, the\n> + * B_WAL_RECEIVER backend & IOOBJECT_WAL IOObject IOs are disabled for\n> + * now.\n> + */\n> + if (MyBackendType == B_WAL_RECEIVER && io_object == IOOBJECT_WAL)\n> + return;\n\nThis could be worded better, but that's one of these nits from me I\nusually tweak when committing stuff.\n\n> +/*\n> + * Decide if IO timings need to be tracked. 
Timings associated to\n> + * IOOBJECT_WAL objects are tracked if track_wal_io_timing is enabled,\n> + * else rely on track_io_timing.\n> + */\n> +static bool\n> +pgstat_should_track_io_time(IOObject io_object)\n> +{\n> + if (io_object == IOOBJECT_WAL)\n> + return track_wal_io_timing;\n> +\n> + return track_io_timing;\n> +}\n\nOne thing I was also considering is if eliminating this routine would\nmake pgstat_count_io_op_time() more readable the result, but I cannot\nget to that.\n\n> if (io_op == IOOP_WRITE || io_op == IOOP_EXTEND)\n> {\n> - pgstat_count_buffer_write_time(INSTR_TIME_GET_MICROSEC(io_time));\n> + if (io_object != IOOBJECT_WAL)\n> + pgstat_count_buffer_write_time(INSTR_TIME_GET_MICROSEC(io_time));\n> +\n> if (io_object == IOOBJECT_RELATION)\n> INSTR_TIME_ADD(pgBufferUsage.shared_blk_write_time, io_time);\n> else if (io_object == IOOBJECT_TEMP_RELATION)\n> @@ -139,7 +177,9 @@ pgstat_count_io_op_time(IOObject io_object, IOContext io_context, IOOp io_op,\n> }\n> else if (io_op == IOOP_READ)\n> {\n> - pgstat_count_buffer_read_time(INSTR_TIME_GET_MICROSEC(io_time));\n> + if (io_object != IOOBJECT_WAL)\n> + pgstat_count_buffer_read_time(INSTR_TIME_GET_MICROSEC(io_time));\n> +\n> if (io_object == IOOBJECT_RELATION)\n> INSTR_TIME_ADD(pgBufferUsage.shared_blk_read_time, io_time);\n> else if (io_object == IOOBJECT_TEMP_RELATION)\n\nA second thing is if this would be better with more switch/cases, say:\nswitch (io_op):\n{\n case IOOP_EXTEND:\n case IOOP_WRITE:\n switch (io_object):\n\t{\n\t case WAL:\n /* do nothing */\n break;\n\t case RELATION:\n\t case TEMP:\n\t .. blah .. \n\t}\n break;\n case IOOP_READ:\n switch (io_object):\n\t{\n\t .. blah .. \n\t}\n break;\n}\n\nOr just this one to make it clear that nothing happens for WAL\nobjects:\nswitch (io_object):\n{\n case WAL:\n /* do nothing */\n break;\n case RELATION:\n switch (io_op):\n {\n case IOOP_EXTEND:\n\t case IOOP_WRITE:\n\t .. blah ..\n\t case IOOP_READ:\n\t .. blah ..\n }\n break;\n case TEMP:\n /* same switch as RELATION */\n break;\n}\n\nThis duplicates a bit things, but at least in the second case it's\nclear which counters are updated when I/O timings are tracked. It's\nOK by me if people don't like this suggestion, but that would avoid\nbugs like the one I found upthread.\n--\nMichael",
"msg_date": "Wed, 10 Jan 2024 14:24:59 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Show WAL write and fsync stats in pg_stat_io"
},
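Filling in the ".. blah .." parts, the object-first variant of the timing accumulation could read as below. io_time is assumed to already hold the elapsed time, and the pgBufferUsage field names follow the hunks quoted earlier in the thread (the local_blk_* counterparts are assumed to mirror the shared ones):

    switch (io_object)
    {
        case IOOBJECT_WAL:
            /* WAL timings feed pg_stat_io only, never the buffer counters */
            break;

        case IOOBJECT_RELATION:
            if (io_op == IOOP_WRITE || io_op == IOOP_EXTEND)
            {
                pgstat_count_buffer_write_time(INSTR_TIME_GET_MICROSEC(io_time));
                INSTR_TIME_ADD(pgBufferUsage.shared_blk_write_time, io_time);
            }
            else if (io_op == IOOP_READ)
            {
                pgstat_count_buffer_read_time(INSTR_TIME_GET_MICROSEC(io_time));
                INSTR_TIME_ADD(pgBufferUsage.shared_blk_read_time, io_time);
            }
            break;

        case IOOBJECT_TEMP_RELATION:
            if (io_op == IOOP_WRITE || io_op == IOOP_EXTEND)
            {
                pgstat_count_buffer_write_time(INSTR_TIME_GET_MICROSEC(io_time));
                INSTR_TIME_ADD(pgBufferUsage.local_blk_write_time, io_time);
            }
            else if (io_op == IOOP_READ)
            {
                pgstat_count_buffer_read_time(INSTR_TIME_GET_MICROSEC(io_time));
                INSTR_TIME_ADD(pgBufferUsage.local_blk_read_time, io_time);
            }
            break;
    }

The duplication between the two relation cases is the cost mentioned above; in exchange it is obvious at a glance that WAL I/O never touches pgBufferUsage or the pgstat_count_buffer_*_time() counters.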
{
"msg_contents": "Hi,\n\nOn Wed, 10 Jan 2024 at 08:25, Michael Paquier <[email protected]> wrote:\n>\n> On Wed, Jan 03, 2024 at 04:10:58PM +0300, Nazir Bilal Yavuz wrote:\n> >\n> > I thought removing op_bytes completely ( as you said \"This patch\n> > extends it with two more operation sizes, and there are even cases\n> > where it may be a variable\" ) from pg_stat_io view then adding\n> > something like {read | write | extend}_bytes and {read | write |\n> > extend}_calls could be better, so that we don't lose any information.\n>\n> But then you'd lose the possibility to analyze correlations between\n> the size and the number of the operations, which is something that\n> matters for more complex I/O scenarios. This does not need to be\n> tackled in this patch, which is useful on its own, though I am really\n> wondering if this is required for the recent work done by Thomas.\n> Perhaps Andres, Thomas or Melanie could comment on that?\n\nYes, you are right.\n\n> >> Yeah, a limitation like that may be acceptable for now. Tracking the\n> >> WAL writer and WAL sender activities can be relevant in a lot of cases\n> >> even if we don't have the full picture for the WAL receiver yet.\n> >\n> > I added that and disabled B_WAL_RECEIVER backend with comments\n> > explaining why. v8 is attached.\n>\n> I can see that's what you have been adding here, which should be OK:\n>\n> > - if (track_io_timing)\n> > + /*\n> > + * B_WAL_RECEIVER backend does IOOBJECT_WAL IOObject & IOOP_READ IOOp IOs\n> > + * but these IOs are not countable for now because IOOP_READ IOs' op_bytes\n> > + * (number of bytes per unit of I/O) might not be the same all the time.\n> > + * The current implementation requires that the op_bytes must be the same\n> > + * for the same IOObject, IOContext and IOOp. To avoid confusion, the\n> > + * B_WAL_RECEIVER backend & IOOBJECT_WAL IOObject IOs are disabled for\n> > + * now.\n> > + */\n> > + if (MyBackendType == B_WAL_RECEIVER && io_object == IOOBJECT_WAL)\n> > + return;\n>\n> This could be worded better, but that's one of these nits from me I\n> usually tweak when committing stuff.\n\nThanks for doing that! Do you have any specific comments that can help\nimprove it?\n\n> > +/*\n> > + * Decide if IO timings need to be tracked. Timings associated to\n> > + * IOOBJECT_WAL objects are tracked if track_wal_io_timing is enabled,\n> > + * else rely on track_io_timing.\n> > + */\n> > +static bool\n> > +pgstat_should_track_io_time(IOObject io_object)\n> > +{\n> > + if (io_object == IOOBJECT_WAL)\n> > + return track_wal_io_timing;\n> > +\n> > + return track_io_timing;\n> > +}\n>\n> One thing I was also considering is if eliminating this routine would\n> make pgstat_count_io_op_time() more readable the result, but I cannot\n> get to that.\n\nI could not think of a way to eliminate pgstat_should_track_io_time()\nroute without causing performance regressions. What do you think about\nmoving inside of 'pgstat_should_track_io_time(io_object) if check' to\nanother function and call this function from\npgstat_count_io_op_time()? 
This does not change anything but IMO it\nincreases the readability.\n\n> > if (io_op == IOOP_WRITE || io_op == IOOP_EXTEND)\n> > {\n> > - pgstat_count_buffer_write_time(INSTR_TIME_GET_MICROSEC(io_time));\n> > + if (io_object != IOOBJECT_WAL)\n> > + pgstat_count_buffer_write_time(INSTR_TIME_GET_MICROSEC(io_time));\n> > +\n> > if (io_object == IOOBJECT_RELATION)\n> > INSTR_TIME_ADD(pgBufferUsage.shared_blk_write_time, io_time);\n> > else if (io_object == IOOBJECT_TEMP_RELATION)\n> > @@ -139,7 +177,9 @@ pgstat_count_io_op_time(IOObject io_object, IOContext io_context, IOOp io_op,\n> > }\n> > else if (io_op == IOOP_READ)\n> > {\n> > - pgstat_count_buffer_read_time(INSTR_TIME_GET_MICROSEC(io_time));\n> > + if (io_object != IOOBJECT_WAL)\n> > + pgstat_count_buffer_read_time(INSTR_TIME_GET_MICROSEC(io_time));\n> > +\n> > if (io_object == IOOBJECT_RELATION)\n> > INSTR_TIME_ADD(pgBufferUsage.shared_blk_read_time, io_time);\n> > else if (io_object == IOOBJECT_TEMP_RELATION)\n>\n> A second thing is if this would be better with more switch/cases, say:\n> switch (io_op):\n> {\n> case IOOP_EXTEND:\n> case IOOP_WRITE:\n> switch (io_object):\n> {\n> case WAL:\n> /* do nothing */\n> break;\n> case RELATION:\n> case TEMP:\n> .. blah ..\n> }\n> break;\n> case IOOP_READ:\n> switch (io_object):\n> {\n> .. blah ..\n> }\n> break;\n> }\n>\n> Or just this one to make it clear that nothing happens for WAL\n> objects:\n> switch (io_object):\n> {\n> case WAL:\n> /* do nothing */\n> break;\n> case RELATION:\n> switch (io_op):\n> {\n> case IOOP_EXTEND:\n> case IOOP_WRITE:\n> .. blah ..\n> case IOOP_READ:\n> .. blah ..\n> }\n> break;\n> case TEMP:\n> /* same switch as RELATION */\n> break;\n> }\n>\n> This duplicates a bit things, but at least in the second case it's\n> clear which counters are updated when I/O timings are tracked. It's\n> OK by me if people don't like this suggestion, but that would avoid\n> bugs like the one I found upthread.\n\nI am more inclined towards the second one because it is more likely\nthat a new io_object will be introduced rather than a new io_op. So, I\nthink the second one is a bit more future proof.\n\n-- \nRegards,\nNazir Bilal Yavuz\nMicrosoft\n\n\n",
"msg_date": "Wed, 10 Jan 2024 15:59:24 +0300",
"msg_from": "Nazir Bilal Yavuz <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Show WAL write and fsync stats in pg_stat_io"
},
{
"msg_contents": "I have code review feedback as well, but I've saved that for my next email.\n\nOn Wed, Jan 3, 2024 at 8:11 AM Nazir Bilal Yavuz <[email protected]> wrote:\n>\n> On Sun, 31 Dec 2023 at 03:58, Michael Paquier <[email protected]> wrote:\n> >\n> > On Tue, Dec 26, 2023 at 03:35:52PM +0300, Nazir Bilal Yavuz wrote:\n> > > On Tue, 26 Dec 2023 at 13:10, Michael Paquier <[email protected]> wrote:\n> > >> I am not sure while the whole point of the exercise is to have all the\n> > >> I/O related data in a single view. Something that I've also found a\n> > >> bit disturbing yesterday while looking at your patch is the fact that\n> > >> the operation size is guessed from the context and object type when\n> > >> querying the view because now everything is tied to BLCKSZ. This\n> > >> patch extends it with two more operation sizes, and there are even\n> > >> cases where it may be a variable. Could it be a better option to\n> > >> extend pgstat_count_io_op_time() so as callers can themselves give the\n> > >> size of the operation?\n> > >\n> > > Do you mean removing the op_bytes column and tracking the number of\n> > > bytes in reads, writes, and extends? If so, that makes sense to me but\n> > > I don't want to remove the number of operations; I believe that has a\n> > > value too. We can extend the pgstat_count_io_op_time() so it can both\n> > > track the number of bytes and the number of operations.\n> >\n> > Apologies if my previous wording sounded confusing. The idea I had in\n> > mind was to keep op_bytes in pg_stat_io, and extend it so as a value\n> > of NULL (or 0, or -1) is a synonym as \"writes\", \"extends\" and \"reads\"\n> > as a number of bytes.\n>\n> Oh, I understand it now. Yes, that makes sense.\n> I thought removing op_bytes completely ( as you said \"This patch\n> extends it with two more operation sizes, and there are even cases\n> where it may be a variable\" ) from pg_stat_io view then adding\n> something like {read | write | extend}_bytes and {read | write |\n> extend}_calls could be better, so that we don't lose any information.\n\nForgive me as I catch up on this thread.\n\nUpthread, Michael says:\n\n> I find the use of 1 in this context a bit confusing, because when\n> referring to a counter at N, then it can be understood as doing N\n> times a operation,\n\nI didn't understand this argument, so I'm not sure if I agree or\ndisagree with it.\n\nI think these are the three proposals for handling WAL reads:\n\n1) setting op_bytes to 1 and the number of reads is the number of bytes\n2) setting op_bytes to XLOG_BLCKSZ and the number of reads is the\nnumber of calls to pg_pread() or similar\n3) setting op_bytes to NULL and the number of reads is the number of\ncalls to pg_pread() or similar\n\nLooking at the patch, I think it is still doing 2.\n\nIt would be good to list all our options with pros and cons (if only\nbecause they are a bit spread throughout the thread now).\n\nFor an unpopular idea: we could add separate [IOOp]_bytes columns for\nall those IOOps for which it would be relevant. It kind of stinks but\nit would give us the freedom to document exactly what a single IOOp\nmeans for each combination of BackendType, IOContext, IOObject, and\nIOOp (as relevant) and still have an accurate number in the *bytes\ncolumns. Everyone will probably hate us if we do that, though.\nEspecially because having bytes for the existing IOObjects is an\nexisting feature.\n\nA separate question: suppose [1] goes in (to read WAL from WAL buffers\ndirectly). 
Now, WAL reads are not from permanent storage anymore. Are\nwe only tracking permanent storage I/O in pg_stat_io? I also had this\nquestion for some of the WAL receiver functions. Should we track any\nI/O other than permanent storage I/O? Or did I miss this being\naddressed upthread?\n\n> > > Also, it is not directly related to this patch but vectored IO [1] is\n> > > coming soon; so the number of operations could be wrong since vectored\n> > > IO could merge a couple of operations.\n> >\n> > Hmm. I have not checked this patch series so I cannot say for sure,\n> > but we'd likely just want to track the number of bytes if a single\n> > operation has a non-equal size rather than registering in pg_stat_io N\n> > rows with different op_bytes, no?\n>\n> Yes, that is correct.\n\nI do not like the idea of having basically GROUP BY op_bytes in the\nview (if that is the suggestion).\n\nIn terms of what I/O we should track in a streaming/asynchronous\nworld, the options would be:\n\n1) track read/write syscalls\n2) track blocks of BLCKSZ submitted to the kernel\n3) track bytes submitted to the kernel\n4) track merged I/Os (after doing any merging in the application)\n\nI think the debate was largely between 2 and 4. There was some\ndisagreement, but I think we landed on 2 because there is merging that\ncan happen at many levels in the storage stack (even the storage\ncontroller). Distinguishing between whether or not Postgres submitted\n2 32k I/Os or 8 8k I/Os could be useful while you are developing AIO,\nbut I think it might be confusing for the Postgres user trying to\ndetermine why their query is slow. It probably makes the most sense to\nstill track in block size.\n\nNo matter what solution we pick, you should get a correct number if\nyou multiply op_bytes by an IOOp (assuming nothing is NULL). Or,\nrather, there should be some way of getting an accurate number in\nbytes of the amount of a particular kind of I/O that has been done.\n\n- Melanie\n\n\n",
"msg_date": "Wed, 10 Jan 2024 19:24:50 -0500",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Show WAL write and fsync stats in pg_stat_io"
},
{
"msg_contents": "On Wed, Jan 10, 2024 at 07:24:50PM -0500, Melanie Plageman wrote:\n> I have code review feedback as well, but I've saved that for my next email.\n\nAh, cool.\n\n> On Wed, Jan 3, 2024 at 8:11 AM Nazir Bilal Yavuz <[email protected]> wrote:\n>> On Sun, 31 Dec 2023 at 03:58, Michael Paquier <[email protected]> wrote:\n>> Oh, I understand it now. Yes, that makes sense.\n>> I thought removing op_bytes completely ( as you said \"This patch\n>> extends it with two more operation sizes, and there are even cases\n>> where it may be a variable\" ) from pg_stat_io view then adding\n>> something like {read | write | extend}_bytes and {read | write |\n>> extend}_calls could be better, so that we don't lose any information.\n> \n> Upthread, Michael says:\n> \n>> I find the use of 1 in this context a bit confusing, because when\n>> referring to a counter at N, then it can be understood as doing N\n>> times a operation,\n> \n> I didn't understand this argument, so I'm not sure if I agree or\n> disagree with it.\n\nNazir has mentioned upthread one thing: what should we do for the case\nwhere a combination of (io_object,io_context) does I/O with a\n*variable* op_bytes, because that may be the case for the WAL\nreceiver? For this case, he has mentioned that we should set op_bytes\nto 1, but that's something I find confusing because it would mean that\nwe are doing read, writes or extends 1 byte at a time. My suggestion\nwould be to use op_bytes = -1 or NULL for the variable case instead,\nwith reads, writes and extends referring to a number of bytes rather\nthan a number of operations.\n\n> I think these are the three proposals for handling WAL reads:\n> \n> 1) setting op_bytes to 1 and the number of reads is the number of bytes\n> 2) setting op_bytes to XLOG_BLCKSZ and the number of reads is the\n> number of calls to pg_pread() or similar\n> 3) setting op_bytes to NULL and the number of reads is the number of\n> calls to pg_pread() or similar\n\n3) could be a number of bytes, actually.\n\n> Looking at the patch, I think it is still doing 2.\n\nThe patch disables stats for the WAL receiver, while the startup\nprocess reads WAL with XLOG_BLCKSZ, so yeah that's 2) with a trick to\ndiscard the variable case.\n\n> For an unpopular idea: we could add separate [IOOp]_bytes columns for\n> all those IOOps for which it would be relevant. It kind of stinks but\n> it would give us the freedom to document exactly what a single IOOp\n> means for each combination of BackendType, IOContext, IOObject, and\n> IOOp (as relevant) and still have an accurate number in the *bytes\n> columns. Everyone will probably hate us if we do that, though.\n> Especially because having bytes for the existing IOObjects is an\n> existing feature.\n\nAn issue I have with this one is that having multiple tuples for\neach (object,context) if they have multiple op_bytes leads to\npotentially a lot of bloat in the view. That would be up to 8k extra\ntuples just for the sake of op_byte's variability.\n\n> A separate question: suppose [1] goes in (to read WAL from WAL buffers\n> directly). Now, WAL reads are not from permanent storage anymore. Are\n> we only tracking permanent storage I/O in pg_stat_io? I also had this\n> question for some of the WAL receiver functions. Should we track any\n> I/O other than permanent storage I/O? Or did I miss this being\n> addressed upthread?\n\nThat's a good point. I guess that this should just be a different\nIOOp? That's not a IOOP_READ. 
A IOOP_HIT is also different.\n\n> In terms of what I/O we should track in a streaming/asynchronous\n> world, the options would be:\n> \n> 1) track read/write syscalls\n> 2) track blocks of BLCKSZ submitted to the kernel\n> 3) track bytes submitted to the kernel\n> 4) track merged I/Os (after doing any merging in the application)\n> \n> I think the debate was largely between 2 and 4. There was some\n> disagreement, but I think we landed on 2 because there is merging that\n> can happen at many levels in the storage stack (even the storage\n> controller). Distinguishing between whether or not Postgres submitted\n> 2 32k I/Os or 8 8k I/Os could be useful while you are developing AIO,\n> but I think it might be confusing for the Postgres user trying to\n> determine why their query is slow. It probably makes the most sense to\n> still track in block size.\n> \n> No matter what solution we pick, you should get a correct number if\n> you multiply op_bytes by an IOOp (assuming nothing is NULL). Or,\n> rather, there should be some way of getting an accurate number in\n> bytes of the amount of a particular kind of I/O that has been done.\n\nYeah, coming back to op_bytes = -1/NULL as a tweak to mean that reads,\nwrites or extends are counted as bytes, because we don't have a fixed\noperation size for some (object,context) cases.\n--\nMichael",
"msg_date": "Thu, 11 Jan 2024 14:00:54 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Show WAL write and fsync stats in pg_stat_io"
},
{
"msg_contents": "Hi,\n\nOn Thu, 11 Jan 2024 at 08:01, Michael Paquier <[email protected]> wrote:\n>\n> On Wed, Jan 10, 2024 at 07:24:50PM -0500, Melanie Plageman wrote:\n> > I have code review feedback as well, but I've saved that for my next email.\n>\n> Ah, cool.\n>\n> > On Wed, Jan 3, 2024 at 8:11 AM Nazir Bilal Yavuz <[email protected]> wrote:\n> >> On Sun, 31 Dec 2023 at 03:58, Michael Paquier <[email protected]> wrote:\n> >> Oh, I understand it now. Yes, that makes sense.\n> >> I thought removing op_bytes completely ( as you said \"This patch\n> >> extends it with two more operation sizes, and there are even cases\n> >> where it may be a variable\" ) from pg_stat_io view then adding\n> >> something like {read | write | extend}_bytes and {read | write |\n> >> extend}_calls could be better, so that we don't lose any information.\n> >\n> > Upthread, Michael says:\n> >\n> >> I find the use of 1 in this context a bit confusing, because when\n> >> referring to a counter at N, then it can be understood as doing N\n> >> times a operation,\n> >\n> > I didn't understand this argument, so I'm not sure if I agree or\n> > disagree with it.\n>\n> Nazir has mentioned upthread one thing: what should we do for the case\n> where a combination of (io_object,io_context) does I/O with a\n> *variable* op_bytes, because that may be the case for the WAL\n> receiver? For this case, he has mentioned that we should set op_bytes\n> to 1, but that's something I find confusing because it would mean that\n> we are doing read, writes or extends 1 byte at a time. My suggestion\n> would be to use op_bytes = -1 or NULL for the variable case instead,\n> with reads, writes and extends referring to a number of bytes rather\n> than a number of operations.\n\nI agree but we can't do this only for the *variable* cases since\nB_WAL_RECEIVER and other backends use the same\npgstat_count_io_op_time(IOObject, IOContext, ...) call. What I mean\nis, if two backends use the same pgstat_count_io_op_time() function\ncall in the code; they must count the same thing (number of calls,\nbytes, etc.). It could be better to count the number of bytes for all\nthe IOOBJECT_WAL IOs.\n\n> > I think these are the three proposals for handling WAL reads:\n> >\n> > 1) setting op_bytes to 1 and the number of reads is the number of bytes\n> > 2) setting op_bytes to XLOG_BLCKSZ and the number of reads is the\n> > number of calls to pg_pread() or similar\n> > 3) setting op_bytes to NULL and the number of reads is the number of\n> > calls to pg_pread() or similar\n>\n> 3) could be a number of bytes, actually.\n\nOne important point is that we can't change only reads, if we decide\nto count the number of bytes for the reads; writes and extends should\nbe counted as a number of bytes as well.\n\n> > Looking at the patch, I think it is still doing 2.\n>\n> The patch disables stats for the WAL receiver, while the startup\n> process reads WAL with XLOG_BLCKSZ, so yeah that's 2) with a trick to\n> discard the variable case.\n>\n> > For an unpopular idea: we could add separate [IOOp]_bytes columns for\n> > all those IOOps for which it would be relevant. It kind of stinks but\n> > it would give us the freedom to document exactly what a single IOOp\n> > means for each combination of BackendType, IOContext, IOObject, and\n> > IOOp (as relevant) and still have an accurate number in the *bytes\n> > columns. 
Everyone will probably hate us if we do that, though.\n> > Especially because having bytes for the existing IOObjects is an\n> > existing feature.\n>\n> An issue I have with this one is that having multiple tuples for\n> each (object,context) if they have multiple op_bytes leads to\n> potentially a lot of bloat in the view. That would be up to 8k extra\n> tuples just for the sake of op_byte's variability.\n\nYes, that doesn't seem applicable to me.\n\n> > A separate question: suppose [1] goes in (to read WAL from WAL buffers\n> > directly). Now, WAL reads are not from permanent storage anymore. Are\n> > we only tracking permanent storage I/O in pg_stat_io? I also had this\n> > question for some of the WAL receiver functions. Should we track any\n> > I/O other than permanent storage I/O? Or did I miss this being\n> > addressed upthread?\n>\n> That's a good point. I guess that this should just be a different\n> IOOp? That's not a IOOP_READ. A IOOP_HIT is also different.\n\nI think different IOContext rather than IOOp suits better for this.\n\n> > In terms of what I/O we should track in a streaming/asynchronous\n> > world, the options would be:\n> >\n> > 1) track read/write syscalls\n> > 2) track blocks of BLCKSZ submitted to the kernel\n> > 3) track bytes submitted to the kernel\n> > 4) track merged I/Os (after doing any merging in the application)\n> >\n> > I think the debate was largely between 2 and 4. There was some\n> > disagreement, but I think we landed on 2 because there is merging that\n> > can happen at many levels in the storage stack (even the storage\n> > controller). Distinguishing between whether or not Postgres submitted\n> > 2 32k I/Os or 8 8k I/Os could be useful while you are developing AIO,\n> > but I think it might be confusing for the Postgres user trying to\n> > determine why their query is slow. It probably makes the most sense to\n> > still track in block size.\n> >\n> > No matter what solution we pick, you should get a correct number if\n> > you multiply op_bytes by an IOOp (assuming nothing is NULL). Or,\n> > rather, there should be some way of getting an accurate number in\n> > bytes of the amount of a particular kind of I/O that has been done.\n>\n> Yeah, coming back to op_bytes = -1/NULL as a tweak to mean that reads,\n> writes or extends are counted as bytes, because we don't have a fixed\n> operation size for some (object,context) cases.\n\nCan't we use 2 and 3 together? For example, use 3 for the IOOBJECT_WAL\nIOs and 2 for the other IOs.\n\n-- \nRegards,\nNazir Bilal Yavuz\nMicrosoft\n\n\n",
"msg_date": "Thu, 11 Jan 2024 14:18:54 +0300",
"msg_from": "Nazir Bilal Yavuz <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Show WAL write and fsync stats in pg_stat_io"
},
{
"msg_contents": "On Thu, Jan 11, 2024 at 6:19 AM Nazir Bilal Yavuz <[email protected]> wrote:\n>\n> On Thu, 11 Jan 2024 at 08:01, Michael Paquier <[email protected]> wrote:\n> >\n> > On Wed, Jan 10, 2024 at 07:24:50PM -0500, Melanie Plageman wrote:\n> > > On Wed, Jan 3, 2024 at 8:11 AM Nazir Bilal Yavuz <[email protected]> wrote:\n> > >> On Sun, 31 Dec 2023 at 03:58, Michael Paquier <[email protected]> wrote:\n> > >> Oh, I understand it now. Yes, that makes sense.\n> > >> I thought removing op_bytes completely ( as you said \"This patch\n> > >> extends it with two more operation sizes, and there are even cases\n> > >> where it may be a variable\" ) from pg_stat_io view then adding\n> > >> something like {read | write | extend}_bytes and {read | write |\n> > >> extend}_calls could be better, so that we don't lose any information.\n> > >\n> > > Upthread, Michael says:\n> > >\n> > >> I find the use of 1 in this context a bit confusing, because when\n> > >> referring to a counter at N, then it can be understood as doing N\n> > >> times a operation,\n> > >\n> > > I didn't understand this argument, so I'm not sure if I agree or\n> > > disagree with it.\n> >\n> > Nazir has mentioned upthread one thing: what should we do for the case\n> > where a combination of (io_object,io_context) does I/O with a\n> > *variable* op_bytes, because that may be the case for the WAL\n> > receiver? For this case, he has mentioned that we should set op_bytes\n> > to 1, but that's something I find confusing because it would mean that\n> > we are doing read, writes or extends 1 byte at a time. My suggestion\n> > would be to use op_bytes = -1 or NULL for the variable case instead,\n> > with reads, writes and extends referring to a number of bytes rather\n> > than a number of operations.\n>\n> I agree but we can't do this only for the *variable* cases since\n> B_WAL_RECEIVER and other backends use the same\n> pgstat_count_io_op_time(IOObject, IOContext, ...) call. What I mean\n> is, if two backends use the same pgstat_count_io_op_time() function\n> call in the code; they must count the same thing (number of calls,\n> bytes, etc.). It could be better to count the number of bytes for all\n> the IOOBJECT_WAL IOs.\n\nI'm a bit confused by this. pgstat_count_io_op_time() can check\nMyBackendType. 
In fact, you do this to ban the wal receiver already.\nIt is true that you would need to count all wal receiver normal\ncontext wal object IOOps in the variable way, but I don't see how\npgstat_count_io_op_time() is the limiting factor as long as the\ncallsite is always doing either the number of bytes or the number of\ncalls.\n\n> > > I think these are the three proposals for handling WAL reads:\n> > >\n> > > 1) setting op_bytes to 1 and the number of reads is the number of bytes\n> > > 2) setting op_bytes to XLOG_BLCKSZ and the number of reads is the\n> > > number of calls to pg_pread() or similar\n> > > 3) setting op_bytes to NULL and the number of reads is the number of\n> > > calls to pg_pread() or similar\n> >\n> > 3) could be a number of bytes, actually.\n>\n> One important point is that we can't change only reads, if we decide\n> to count the number of bytes for the reads; writes and extends should\n> be counted as a number of bytes as well.\n\nYes, that is true.\n\n> > > Looking at the patch, I think it is still doing 2.\n> >\n> > The patch disables stats for the WAL receiver, while the startup\n> > process reads WAL with XLOG_BLCKSZ, so yeah that's 2) with a trick to\n> > discard the variable case.\n> >\n> > > For an unpopular idea: we could add separate [IOOp]_bytes columns for\n> > > all those IOOps for which it would be relevant. It kind of stinks but\n> > > it would give us the freedom to document exactly what a single IOOp\n> > > means for each combination of BackendType, IOContext, IOObject, and\n> > > IOOp (as relevant) and still have an accurate number in the *bytes\n> > > columns. Everyone will probably hate us if we do that, though.\n> > > Especially because having bytes for the existing IOObjects is an\n> > > existing feature.\n> >\n> > An issue I have with this one is that having multiple tuples for\n> > each (object,context) if they have multiple op_bytes leads to\n> > potentially a lot of bloat in the view. That would be up to 8k extra\n> > tuples just for the sake of op_byte's variability.\n>\n> Yes, that doesn't seem applicable to me.\n\nMy suggestion (again not sure it is a good idea) was actually that we\nremove op_bytes and add \"write_bytes\", \"read_bytes\", and\n\"extend_bytes\". AFAICT, this would add columns not rows. In this\nschema, read bytes for wal receiver could be counted in one way and\nwrites in another. We could document that, for wal receiver, the reads\nare not always done in units of the same size, so the read_bytes /\nreads could be thought of as an average size of read.\n\nEven if we made a separate view for WAL I/O stats, we would still have\nthis issue of variable sized I/O vs block sized I/O and would probably\nend up solving it with separate columns for the number of bytes and\nnumber of operations.\n\n> > > A separate question: suppose [1] goes in (to read WAL from WAL buffers\n> > > directly). Now, WAL reads are not from permanent storage anymore. Are\n> > > we only tracking permanent storage I/O in pg_stat_io? I also had this\n> > > question for some of the WAL receiver functions. Should we track any\n> > > I/O other than permanent storage I/O? Or did I miss this being\n> > > addressed upthread?\n> >\n> > That's a good point. I guess that this should just be a different\n> > IOOp? That's not a IOOP_READ. 
A IOOP_HIT is also different.\n>\n> I think different IOContext rather than IOOp suits better for this.\n\nThat makes sense to me.\n\n> > > In terms of what I/O we should track in a streaming/asynchronous\n> > > world, the options would be:\n> > >\n> > > 1) track read/write syscalls\n> > > 2) track blocks of BLCKSZ submitted to the kernel\n> > > 3) track bytes submitted to the kernel\n> > > 4) track merged I/Os (after doing any merging in the application)\n> > >\n> > > I think the debate was largely between 2 and 4. There was some\n> > > disagreement, but I think we landed on 2 because there is merging that\n> > > can happen at many levels in the storage stack (even the storage\n> > > controller). Distinguishing between whether or not Postgres submitted\n> > > 2 32k I/Os or 8 8k I/Os could be useful while you are developing AIO,\n> > > but I think it might be confusing for the Postgres user trying to\n> > > determine why their query is slow. It probably makes the most sense to\n> > > still track in block size.\n> > >\n> > > No matter what solution we pick, you should get a correct number if\n> > > you multiply op_bytes by an IOOp (assuming nothing is NULL). Or,\n> > > rather, there should be some way of getting an accurate number in\n> > > bytes of the amount of a particular kind of I/O that has been done.\n> >\n> > Yeah, coming back to op_bytes = -1/NULL as a tweak to mean that reads,\n> > writes or extends are counted as bytes, because we don't have a fixed\n> > operation size for some (object,context) cases.\n>\n> Can't we use 2 and 3 together? For example, use 3 for the IOOBJECT_WAL\n> IOs and 2 for the other IOs.\n\nWe can do this. One concern I have is that much of WAL I/O is done in\nXLOG_BLCKSZ, so it feels kind of odd for all WAL I/O to appear as if\nit is being done in random chunks of bytes. We anticipated other\nuniformly non-block-based I/O types where having 1 in op_bytes would\nbe meaningful. I didn't realize at the time that there would be\nvariable-sized and block-sized I/O mixed together for the same backend\ntype, io object, and io context.\n\n- Melanie\n\n\n",
"msg_date": "Thu, 11 Jan 2024 09:27:53 -0500",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Show WAL write and fsync stats in pg_stat_io"
},
{
"msg_contents": "Hi,\n\nOn Thu, 11 Jan 2024 at 17:28, Melanie Plageman\n<[email protected]> wrote:\n>\n> On Thu, Jan 11, 2024 at 6:19 AM Nazir Bilal Yavuz <[email protected]> wrote:\n> >\n> > On Thu, 11 Jan 2024 at 08:01, Michael Paquier <[email protected]> wrote:\n> > >\n> > > On Wed, Jan 10, 2024 at 07:24:50PM -0500, Melanie Plageman wrote:\n> > > > On Wed, Jan 3, 2024 at 8:11 AM Nazir Bilal Yavuz <[email protected]> wrote:\n> > > >> On Sun, 31 Dec 2023 at 03:58, Michael Paquier <[email protected]> wrote:\n> > > >> Oh, I understand it now. Yes, that makes sense.\n> > > >> I thought removing op_bytes completely ( as you said \"This patch\n> > > >> extends it with two more operation sizes, and there are even cases\n> > > >> where it may be a variable\" ) from pg_stat_io view then adding\n> > > >> something like {read | write | extend}_bytes and {read | write |\n> > > >> extend}_calls could be better, so that we don't lose any information.\n> > > >\n> > > > Upthread, Michael says:\n> > > >\n> > > >> I find the use of 1 in this context a bit confusing, because when\n> > > >> referring to a counter at N, then it can be understood as doing N\n> > > >> times a operation,\n> > > >\n> > > > I didn't understand this argument, so I'm not sure if I agree or\n> > > > disagree with it.\n> > >\n> > > Nazir has mentioned upthread one thing: what should we do for the case\n> > > where a combination of (io_object,io_context) does I/O with a\n> > > *variable* op_bytes, because that may be the case for the WAL\n> > > receiver? For this case, he has mentioned that we should set op_bytes\n> > > to 1, but that's something I find confusing because it would mean that\n> > > we are doing read, writes or extends 1 byte at a time. My suggestion\n> > > would be to use op_bytes = -1 or NULL for the variable case instead,\n> > > with reads, writes and extends referring to a number of bytes rather\n> > > than a number of operations.\n> >\n> > I agree but we can't do this only for the *variable* cases since\n> > B_WAL_RECEIVER and other backends use the same\n> > pgstat_count_io_op_time(IOObject, IOContext, ...) call. What I mean\n> > is, if two backends use the same pgstat_count_io_op_time() function\n> > call in the code; they must count the same thing (number of calls,\n> > bytes, etc.). It could be better to count the number of bytes for all\n> > the IOOBJECT_WAL IOs.\n>\n> I'm a bit confused by this. pgstat_count_io_op_time() can check\n> MyBackendType. In fact, you do this to ban the wal receiver already.\n> It is true that you would need to count all wal receiver normal\n> context wal object IOOps in the variable way, but I don't see how\n> pgstat_count_io_op_time() is the limiting factor as long as the\n> callsite is always doing either the number of bytes or the number of\n> calls.\n\nApologies for not being clear. 
Let me try to explain this by giving examples:\n\nLet's assume that there are 3 different pgstat_count_io_op_time()\ncalls in the code base and they are labeled as 1, 2 and 3.\n\nAnd let's' assume that: WAL receiver uses 1st and 2nd\npgstat_count_io_op_time(), autovacuum uses 2nd and 3rd\npgstat_count_io_op_time() and checkpointer uses 3rd\npgstat_count_io_op_time() to count IOs.\n\nThe 1st one is the only pgstat_count_io_op_time() call that must count\nthe number of bytes because of the variable cases and the others count\nthe number of calls or blocks.\n\na) WAL receiver uses both 1st and 2nd => 1st and 2nd\npgstat_count_io_op_time() must count the same thing => 2nd\npgstat_count_io_op_time() must count the number of bytes as well.\n\nb) 2nd pgstat_count_io_op_time() started to count the number of bytes\n=> Autovacuum will start to count the number of bytes => 2nd and 3rd\nboth are used by autocavuum => 3rd pgstat_count_io_op_time() must\ncount the number of bytes as well.\n\nc) 3rd pgstat_count_io_op_time() started to count the number of bytes\n=> Checkpointer will start to count the number of bytes.\n\nThe list goes on like this and if we don't have [write | read |\nextend]_bytes, this effect will be multiplied.\n\n> > > > I think these are the three proposals for handling WAL reads:\n> > > >\n> > > > 1) setting op_bytes to 1 and the number of reads is the number of bytes\n> > > > 2) setting op_bytes to XLOG_BLCKSZ and the number of reads is the\n> > > > number of calls to pg_pread() or similar\n> > > > 3) setting op_bytes to NULL and the number of reads is the number of\n> > > > calls to pg_pread() or similar\n> > >\n> > > 3) could be a number of bytes, actually.\n> >\n> > One important point is that we can't change only reads, if we decide\n> > to count the number of bytes for the reads; writes and extends should\n> > be counted as a number of bytes as well.\n>\n> Yes, that is true.\n>\n> > > > Looking at the patch, I think it is still doing 2.\n> > >\n> > > The patch disables stats for the WAL receiver, while the startup\n> > > process reads WAL with XLOG_BLCKSZ, so yeah that's 2) with a trick to\n> > > discard the variable case.\n> > >\n> > > > For an unpopular idea: we could add separate [IOOp]_bytes columns for\n> > > > all those IOOps for which it would be relevant. It kind of stinks but\n> > > > it would give us the freedom to document exactly what a single IOOp\n> > > > means for each combination of BackendType, IOContext, IOObject, and\n> > > > IOOp (as relevant) and still have an accurate number in the *bytes\n> > > > columns. Everyone will probably hate us if we do that, though.\n> > > > Especially because having bytes for the existing IOObjects is an\n> > > > existing feature.\n> > >\n> > > An issue I have with this one is that having multiple tuples for\n> > > each (object,context) if they have multiple op_bytes leads to\n> > > potentially a lot of bloat in the view. That would be up to 8k extra\n> > > tuples just for the sake of op_byte's variability.\n> >\n> > Yes, that doesn't seem applicable to me.\n>\n> My suggestion (again not sure it is a good idea) was actually that we\n> remove op_bytes and add \"write_bytes\", \"read_bytes\", and\n> \"extend_bytes\". AFAICT, this would add columns not rows. In this\n> schema, read bytes for wal receiver could be counted in one way and\n> writes in another. 
We could document that, for wal receiver, the reads\n> are not always done in units of the same size, so the read_bytes /\n> reads could be thought of as an average size of read.\n\nThat looks like one of the best options to me. I suggested something\nsimilar upthread and Michael's answer was:\n\n> But then you'd lose the possibility to analyze correlations between\n> the size and the number of the operations, which is something that\n> matters for more complex I/O scenarios. This does not need to be\n> tackled in this patch, which is useful on its own, though I am really\n> wondering if this is required for the recent work done by Thomas.\n> Perhaps Andres, Thomas or Melanie could comment on that?\n\n\n> Even if we made a separate view for WAL I/O stats, we would still have\n> this issue of variable sized I/O vs block sized I/O and would probably\n> end up solving it with separate columns for the number of bytes and\n> number of operations.\n\nYes, I think it is more about flexibility and not changing the already\npublished pg_stat_io view.\n\n> > > > A separate question: suppose [1] goes in (to read WAL from WAL buffers\n> > > > directly). Now, WAL reads are not from permanent storage anymore. Are\n> > > > we only tracking permanent storage I/O in pg_stat_io? I also had this\n> > > > question for some of the WAL receiver functions. Should we track any\n> > > > I/O other than permanent storage I/O? Or did I miss this being\n> > > > addressed upthread?\n> > >\n> > > That's a good point. I guess that this should just be a different\n> > > IOOp? That's not a IOOP_READ. A IOOP_HIT is also different.\n> >\n> > I think different IOContext rather than IOOp suits better for this.\n>\n> That makes sense to me.\n>\n> > > > In terms of what I/O we should track in a streaming/asynchronous\n> > > > world, the options would be:\n> > > >\n> > > > 1) track read/write syscalls\n> > > > 2) track blocks of BLCKSZ submitted to the kernel\n> > > > 3) track bytes submitted to the kernel\n> > > > 4) track merged I/Os (after doing any merging in the application)\n> > > >\n> > > > I think the debate was largely between 2 and 4. There was some\n> > > > disagreement, but I think we landed on 2 because there is merging that\n> > > > can happen at many levels in the storage stack (even the storage\n> > > > controller). Distinguishing between whether or not Postgres submitted\n> > > > 2 32k I/Os or 8 8k I/Os could be useful while you are developing AIO,\n> > > > but I think it might be confusing for the Postgres user trying to\n> > > > determine why their query is slow. It probably makes the most sense to\n> > > > still track in block size.\n> > > >\n> > > > No matter what solution we pick, you should get a correct number if\n> > > > you multiply op_bytes by an IOOp (assuming nothing is NULL). Or,\n> > > > rather, there should be some way of getting an accurate number in\n> > > > bytes of the amount of a particular kind of I/O that has been done.\n> > >\n> > > Yeah, coming back to op_bytes = -1/NULL as a tweak to mean that reads,\n> > > writes or extends are counted as bytes, because we don't have a fixed\n> > > operation size for some (object,context) cases.\n> >\n> > Can't we use 2 and 3 together? For example, use 3 for the IOOBJECT_WAL\n> > IOs and 2 for the other IOs.\n>\n> We can do this. One concern I have is that much of WAL I/O is done in\n> XLOG_BLCKSZ, so it feels kind of odd for all WAL I/O to appear as if\n> it is being done in random chunks of bytes. 
We anticipated other\n> uniformly non-block-based I/O types where having 1 in op_bytes would\n> be meaningful. I didn't realize at the time that there would be\n> variable-sized and block-sized I/O mixed together for the same backend\n> type, io object, and io context.\n\nCorrect. What is the lowest level that can use two different options?\nI mean, could we use 3 for the WAL receiver, IOOP_READ, IOOBJECT_WAL,\nIOCONTEXT_NORMAL and the 2 for the rest?\n\n-- \nRegards,\nNazir Bilal Yavuz\nMicrosoft\n\n\n",
"msg_date": "Fri, 12 Jan 2024 16:23:26 +0300",
"msg_from": "Nazir Bilal Yavuz <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Show WAL write and fsync stats in pg_stat_io"
},
{
"msg_contents": "On Fri, Jan 12, 2024 at 04:23:26PM +0300, Nazir Bilal Yavuz wrote:\n> On Thu, 11 Jan 2024 at 17:28, Melanie Plageman <[email protected]> wrote:\n>> Even if we made a separate view for WAL I/O stats, we would still have\n>> this issue of variable sized I/O vs block sized I/O and would probably\n>> end up solving it with separate columns for the number of bytes and\n>> number of operations.\n> \n> Yes, I think it is more about flexibility and not changing the already\n> published pg_stat_io view.\n\nI don't know. Adding more columns or changing op_bytes with an extra\nmode that reflects on what the other columns mean is kind of the same\nthing to me: we want pg_stat_io to report more modes so as all I/O can\nbe evaluated from a single view, but the complication is now that\neverything is tied to BLCKSZ.\n\nIMHO, perhaps we'd better put this patch aside until we are absolutely\n*sure* of what we want to achieve when it comes to WAL, and I am\nafraid that this cannot happen until we're happy with the way we\nhandle WAL reads *and* writes, including WAL receiver or anything that\nhas the idea of pulling its own page callback with\nXLogReaderAllocate() in the backend. Well, writes should be\nrelatively \"easy\" as things happen with XLOG_BLCKSZ, mainly, but\nreads are the unknown part.\n\nThat also seems furiously related to the work happening with async I/O\nor the fact that we may want to have in the view a separate meaning\nfor cached pages or pages read directly from disk. The worst thing\nthat we would do is rush something into the tree and then have to deal\nwith the aftermath of what we'd need to deal with in terms of\ncompatibility depending on the state of the other I/O related work\nwhen the new view is released. That would not be fun for the users\nand any hackers who would have to deal with that (aka mainly me if I\nwere to commit something), because pg_stat_io could mean something in\nversion N, still mean something entirely different in version N+1.\n--\nMichael",
"msg_date": "Mon, 15 Jan 2024 15:27:20 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Show WAL write and fsync stats in pg_stat_io"
},
{
"msg_contents": "Hi,\n\nOn Mon, 15 Jan 2024 at 09:27, Michael Paquier <[email protected]> wrote:\n>\n> On Fri, Jan 12, 2024 at 04:23:26PM +0300, Nazir Bilal Yavuz wrote:\n> > On Thu, 11 Jan 2024 at 17:28, Melanie Plageman <[email protected]> wrote:\n> >> Even if we made a separate view for WAL I/O stats, we would still have\n> >> this issue of variable sized I/O vs block sized I/O and would probably\n> >> end up solving it with separate columns for the number of bytes and\n> >> number of operations.\n> >\n> > Yes, I think it is more about flexibility and not changing the already\n> > published pg_stat_io view.\n>\n> I don't know. Adding more columns or changing op_bytes with an extra\n> mode that reflects on what the other columns mean is kind of the same\n> thing to me: we want pg_stat_io to report more modes so as all I/O can\n> be evaluated from a single view, but the complication is now that\n> everything is tied to BLCKSZ.\n>\n> IMHO, perhaps we'd better put this patch aside until we are absolutely\n> *sure* of what we want to achieve when it comes to WAL, and I am\n> afraid that this cannot happen until we're happy with the way we\n> handle WAL reads *and* writes, including WAL receiver or anything that\n> has the idea of pulling its own page callback with\n> XLogReaderAllocate() in the backend. Well, writes should be\n> relatively \"easy\" as things happen with XLOG_BLCKSZ, mainly, but\n> reads are the unknown part.\n>\n> That also seems furiously related to the work happening with async I/O\n> or the fact that we may want to have in the view a separate meaning\n> for cached pages or pages read directly from disk. The worst thing\n> that we would do is rush something into the tree and then have to deal\n> with the aftermath of what we'd need to deal with in terms of\n> compatibility depending on the state of the other I/O related work\n> when the new view is released. That would not be fun for the users\n> and any hackers who would have to deal with that (aka mainly me if I\n> were to commit something), because pg_stat_io could mean something in\n> version N, still mean something entirely different in version N+1.\n\nI agree with your points. While the other I/O related work is\nhappening we can discuss what we should do in the variable op_byte\ncases. Also, this is happening only for WAL right now but if we try to\nextend pg_stat_io in the future, that problem possibly will rise\nagain. So, it could be good to come up with a general solution, not\nonly for WAL.\n\n--\nRegards,\nNazir Bilal Yavuz\nMicrosoft\n\n\n",
"msg_date": "Wed, 17 Jan 2024 15:20:39 +0300",
"msg_from": "Nazir Bilal Yavuz <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Show WAL write and fsync stats in pg_stat_io"
},
{
"msg_contents": "On Wed, Jan 17, 2024 at 03:20:39PM +0300, Nazir Bilal Yavuz wrote:\n> I agree with your points. While the other I/O related work is\n> happening we can discuss what we should do in the variable op_byte\n> cases. Also, this is happening only for WAL right now but if we try to\n> extend pg_stat_io in the future, that problem possibly will rise\n> again. So, it could be good to come up with a general solution, not\n> only for WAL.\n\nOkay, I've marked the patch as RwF for this CF.\n--\nMichael",
"msg_date": "Thu, 18 Jan 2024 10:22:26 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Show WAL write and fsync stats in pg_stat_io"
},
{
"msg_contents": "Hi,\n\nOn Thu, 18 Jan 2024 at 04:22, Michael Paquier <[email protected]> wrote:\n>\n> On Wed, Jan 17, 2024 at 03:20:39PM +0300, Nazir Bilal Yavuz wrote:\n> > I agree with your points. While the other I/O related work is\n> > happening we can discuss what we should do in the variable op_byte\n> > cases. Also, this is happening only for WAL right now but if we try to\n> > extend pg_stat_io in the future, that problem possibly will rise\n> > again. So, it could be good to come up with a general solution, not\n> > only for WAL.\n>\n> Okay, I've marked the patch as RwF for this CF.\n\nI wanted to inform you that the 73f0a13266 commit changed all WALRead\ncalls to read variable bytes, only the WAL receiver was reading\nvariable bytes before.\n\n-- \nRegards,\nNazir Bilal Yavuz\nMicrosoft\n\n\n",
"msg_date": "Mon, 19 Feb 2024 10:28:05 +0300",
"msg_from": "Nazir Bilal Yavuz <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Show WAL write and fsync stats in pg_stat_io"
},
{
"msg_contents": "Hi,\n\nOn Mon, 19 Feb 2024 at 10:28, Nazir Bilal Yavuz <[email protected]> wrote:\n>\n> Hi,\n>\n> On Thu, 18 Jan 2024 at 04:22, Michael Paquier <[email protected]> wrote:\n> >\n> > On Wed, Jan 17, 2024 at 03:20:39PM +0300, Nazir Bilal Yavuz wrote:\n> > > I agree with your points. While the other I/O related work is\n> > > happening we can discuss what we should do in the variable op_byte\n> > > cases. Also, this is happening only for WAL right now but if we try to\n> > > extend pg_stat_io in the future, that problem possibly will rise\n> > > again. So, it could be good to come up with a general solution, not\n> > > only for WAL.\n> >\n> > Okay, I've marked the patch as RwF for this CF.\n>\n> I wanted to inform you that the 73f0a13266 commit changed all WALRead\n> calls to read variable bytes, only the WAL receiver was reading\n> variable bytes before.\n\nI want to start working on this again if possible. I will try to\nsummarize the current status:\n\n* With the 73f0a13266 commit, the WALRead() function started to read\nvariable bytes in every case. Before, only the WAL receiver was\nreading variable bytes.\n\n* With the 91f2cae7a4 commit, WALReadFromBuffers() is merged. We were\ndiscussing what we have to do when this is merged. It is decided that\nWALReadFromBuffers() does not call pgstat_report_wait_start() because\nthis function does not perform any IO [1]. We may follow the same\nlogic by not including these to pg_stat_io?\n\n* With the b5a9b18cd0 commit, streaming I/O is merged but AFAIK this\ndoes not block anything related to putting WAL stats in pg_stat_io.\n\nIf I am not missing any new changes, the only problem is reading\nvariable bytes now. We have discussed a couple of solutions:\n\n1- Change op_bytes to something like -1, 0, 1, NULL etc. and document\nthat this means some variable byte I/O is happening.\n\nI kind of dislike this solution because if the *only* read I/O is\nhappening in variable bytes, it will look like write and extend I/Os\nare happening in variable bytes as well. As a solution, it could be\ndocumented that only read I/Os could happen in variable bytes for now.\n\n2- Use op_bytes_[read | write | extend] columns instead of one\nop_bytes column, also use the first solution.\n\nThis can solve the first solution's weakness but it introduces two\nmore columns. This is more future proof compared to the first solution\nif there is a chance that some variable I/O could happen in write and\nextend calls as well in the future.\n\n3- Create a new pg_stat_io_wal view to put WAL I/Os here instead of pg_stat_io.\n\npg_stat_io could remain untouchable and we will have flexibility to\nedit this new view as much as we want. But the original aim of the\npg_stat_io is evaluating all I/O from a single view and adding a new\nview breaks this aim.\n\nI hope that I did not miss anything and my explanations are clear.\n\nAny kind of feedback would be appreciated.\n\n\n[1] https://www.postgresql.org/message-id/CAFiTN-sE7CJn-ZFj%2B-0Wv6TNytv_fp4n%2BeCszspxJ3mt77t5ig%40mail.gmail.com\n\n--\nRegards,\nNazir Bilal Yavuz\nMicrosoft\n\n\n",
"msg_date": "Fri, 19 Apr 2024 11:01:54 +0300",
"msg_from": "Nazir Bilal Yavuz <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Show WAL write and fsync stats in pg_stat_io"
},
{
"msg_contents": "Hi,\n\nOn Fri, 19 Apr 2024 at 11:01, Nazir Bilal Yavuz <[email protected]> wrote:\n> > On Thu, 18 Jan 2024 at 04:22, Michael Paquier <[email protected]> wrote:\n> > >\n> > > On Wed, Jan 17, 2024 at 03:20:39PM +0300, Nazir Bilal Yavuz wrote:\n> > > > I agree with your points. While the other I/O related work is\n> > > > happening we can discuss what we should do in the variable op_byte\n> > > > cases. Also, this is happening only for WAL right now but if we try to\n> > > > extend pg_stat_io in the future, that problem possibly will rise\n> > > > again. So, it could be good to come up with a general solution, not\n> > > > only for WAL.\n> > >\n> > > Okay, I've marked the patch as RwF for this CF.\n\nSince the last commitfest entry was returned with feedback, I created\na new commitfest entry: https://commitfest.postgresql.org/48/4950/\n\n-- \nRegards,\nNazir Bilal Yavuz\nMicrosoft\n\n\n",
"msg_date": "Wed, 24 Apr 2024 11:37:21 +0300",
"msg_from": "Nazir Bilal Yavuz <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Show WAL write and fsync stats in pg_stat_io"
},
{
"msg_contents": "On Fri, Apr 19, 2024 at 1:32 PM Nazir Bilal Yavuz <[email protected]> wrote:\n>\n> > I wanted to inform you that the 73f0a13266 commit changed all WALRead\n> > calls to read variable bytes, only the WAL receiver was reading\n> > variable bytes before.\n>\n> I want to start working on this again if possible. I will try to\n> summarize the current status:\n\nThanks for working on this.\n\n> * With the 73f0a13266 commit, the WALRead() function started to read\n> variable bytes in every case. Before, only the WAL receiver was\n> reading variable bytes.\n>\n> * With the 91f2cae7a4 commit, WALReadFromBuffers() is merged. We were\n> discussing what we have to do when this is merged. It is decided that\n> WALReadFromBuffers() does not call pgstat_report_wait_start() because\n> this function does not perform any IO [1]. We may follow the same\n> logic by not including these to pg_stat_io?\n\nRight. WALReadFromBuffers doesn't do any I/O.\n\nWhoever reads WAL from disk (backends, walsenders, recovery process)\nusing pg_pread (XLogPageRead, WALRead) needs to be tracked in\npg_stat_io or some other view. If it were to be in pg_stat_io,\nalthough we may not be able to distinguish WAL read stats at a backend\nlevel (like how many times/bytes a walsender or recovery process or a\nbackend read WAL from disk), but it can help understand overall impact\nof WAL read I/O at a cluster level. With this approach, the WAL I/O\nstats are divided up - WAL read I/O and write I/O stats are in\npg_stat_io and pg_stat_wal respectively.\n\nThis makes me think if we need to add WAL read I/O stats also to\npg_stat_wal. Then, we can also add WALReadFromBuffers stats\nhits/misses there. With this approach, pg_stat_wal can be a one-stop\nview for all the WAL related stats. If needed, we can join info from\npg_stat_wal to pg_stat_io in system_views.sql so that the I/O stats\nare emitted to the end-user via pg_stat_io.\n\n> * With the b5a9b18cd0 commit, streaming I/O is merged but AFAIK this\n> does not block anything related to putting WAL stats in pg_stat_io.\n>\n> If I am not missing any new changes, the only problem is reading\n> variable bytes now. We have discussed a couple of solutions:\n>\n> 1- Change op_bytes to something like -1, 0, 1, NULL etc. and document\n> that this means some variable byte I/O is happening.\n>\n> I kind of dislike this solution because if the *only* read I/O is\n> happening in variable bytes, it will look like write and extend I/Os\n> are happening in variable bytes as well. As a solution, it could be\n> documented that only read I/Os could happen in variable bytes for now.\n\nYes, read I/O for relation and WAL can happen in variable bytes. I\nthink this idea seems reasonable and simple yet useful to know the\ncluster-wide read I/O.\n\nHowever, another point here is how the total number of bytes read is\nrepresented with existing pg_stat_io columns 'reads' and 'op_bytes'.\nIt is known now with 'reads' * 'op_bytes', but with variable bytes,\nhow is read bytes calculated? Maybe add new columns\nread_bytes/write_bytes?\n\n> 2- Use op_bytes_[read | write | extend] columns instead of one\n> op_bytes column, also use the first solution.\n>\n> This can solve the first solution's weakness but it introduces two\n> more columns. 
This is more future proof compared to the first solution\n> if there is a chance that some variable I/O could happen in write and\n> extend calls as well in the future.\n\n-1 as more columns impact the readability and usability.\n\n> 3- Create a new pg_stat_io_wal view to put WAL I/Os here instead of pg_stat_io.\n>\n> pg_stat_io could remain untouchable and we will have flexibility to\n> edit this new view as much as we want. But the original aim of the\n> pg_stat_io is evaluating all I/O from a single view and adding a new\n> view breaks this aim.\n\n-1 as it defeats the very purpose of one-stop view pg_stat_io for all\nkinds of I/O. PS: see my response above about adding both WAL write\nI/O and read I/O stats to pg_stat_wal.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 13 May 2024 19:42:11 +0530",
"msg_from": "Bharath Rupireddy <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Show WAL write and fsync stats in pg_stat_io"
},
{
"msg_contents": "On Mon, May 13, 2024 at 7:42 PM Bharath Rupireddy\n<[email protected]> wrote:\n>\n> On Fri, Apr 19, 2024 at 1:32 PM Nazir Bilal Yavuz <[email protected]> wrote:\n> >\n> > > I wanted to inform you that the 73f0a13266 commit changed all WALRead\n> > > calls to read variable bytes, only the WAL receiver was reading\n> > > variable bytes before.\n> >\n> > I want to start working on this again if possible. I will try to\n> > summarize the current status:\n>\n> Thanks for working on this.\n>\n> > * With the 73f0a13266 commit, the WALRead() function started to read\n> > variable bytes in every case. Before, only the WAL receiver was\n> > reading variable bytes.\n> >\n> > * With the 91f2cae7a4 commit, WALReadFromBuffers() is merged. We were\n> > discussing what we have to do when this is merged. It is decided that\n> > WALReadFromBuffers() does not call pgstat_report_wait_start() because\n> > this function does not perform any IO [1]. We may follow the same\n> > logic by not including these to pg_stat_io?\n>\n> Right. WALReadFromBuffers doesn't do any I/O.\n>\n> Whoever reads WAL from disk (backends, walsenders, recovery process)\n> using pg_pread (XLogPageRead, WALRead) needs to be tracked in\n> pg_stat_io or some other view. If it were to be in pg_stat_io,\n> although we may not be able to distinguish WAL read stats at a backend\n> level (like how many times/bytes a walsender or recovery process or a\n> backend read WAL from disk), but it can help understand overall impact\n> of WAL read I/O at a cluster level. With this approach, the WAL I/O\n> stats are divided up - WAL read I/O and write I/O stats are in\n> pg_stat_io and pg_stat_wal respectively.\n>\n> This makes me think if we need to add WAL read I/O stats also to\n> pg_stat_wal. Then, we can also add WALReadFromBuffers stats\n> hits/misses there. With this approach, pg_stat_wal can be a one-stop\n> view for all the WAL related stats.\n>\n\nIf possible, let's have all the I/O stats (even for WAL) in\npg_stat_io. Can't we show the WAL data we get from buffers in the hits\ncolumn and then have read_bytes or something like that to know the\namount of data read?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 28 May 2024 06:18:40 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Show WAL write and fsync stats in pg_stat_io"
},
{
"msg_contents": "> If possible, let's have all the I/O stats (even for WAL) in\n> pg_stat_io. Can't we show the WAL data we get from buffers in the hits\n> column and then have read_bytes or something like that to know the\n> amount of data read?\n\nThe ‘hits’ column in ‘pg_stat_io’ is a vital indicator for adjusting a\ndatabase. It signifies the count of cache hits, or in other words, the\ninstances where data was located in the ‘shared_buffers’. As a result,\nkeeping an eye on the ‘hits’ column in ‘pg_stat_io’ can offer useful\nknowledge about the buffer cache’s efficiency and assist users in\nmaking educated choices when fine-tuning their database. However, if\nwe include the hit count of WAL buffers in this, it may lead to\nmisleading interpretations for database tuning. If there’s something\nI’ve overlooked that’s already been discussed, please feel free to\ncorrect me.\n\n\nBest Regards,\nNitin Jadhav\nAzure Database for PostgreSQL\nMicrosoft\n\nOn Tue, May 28, 2024 at 6:18 AM Amit Kapila <[email protected]> wrote:\n>\n> On Mon, May 13, 2024 at 7:42 PM Bharath Rupireddy\n> <[email protected]> wrote:\n> >\n> > On Fri, Apr 19, 2024 at 1:32 PM Nazir Bilal Yavuz <[email protected]> wrote:\n> > >\n> > > > I wanted to inform you that the 73f0a13266 commit changed all WALRead\n> > > > calls to read variable bytes, only the WAL receiver was reading\n> > > > variable bytes before.\n> > >\n> > > I want to start working on this again if possible. I will try to\n> > > summarize the current status:\n> >\n> > Thanks for working on this.\n> >\n> > > * With the 73f0a13266 commit, the WALRead() function started to read\n> > > variable bytes in every case. Before, only the WAL receiver was\n> > > reading variable bytes.\n> > >\n> > > * With the 91f2cae7a4 commit, WALReadFromBuffers() is merged. We were\n> > > discussing what we have to do when this is merged. It is decided that\n> > > WALReadFromBuffers() does not call pgstat_report_wait_start() because\n> > > this function does not perform any IO [1]. We may follow the same\n> > > logic by not including these to pg_stat_io?\n> >\n> > Right. WALReadFromBuffers doesn't do any I/O.\n> >\n> > Whoever reads WAL from disk (backends, walsenders, recovery process)\n> > using pg_pread (XLogPageRead, WALRead) needs to be tracked in\n> > pg_stat_io or some other view. If it were to be in pg_stat_io,\n> > although we may not be able to distinguish WAL read stats at a backend\n> > level (like how many times/bytes a walsender or recovery process or a\n> > backend read WAL from disk), but it can help understand overall impact\n> > of WAL read I/O at a cluster level. With this approach, the WAL I/O\n> > stats are divided up - WAL read I/O and write I/O stats are in\n> > pg_stat_io and pg_stat_wal respectively.\n> >\n> > This makes me think if we need to add WAL read I/O stats also to\n> > pg_stat_wal. Then, we can also add WALReadFromBuffers stats\n> > hits/misses there. With this approach, pg_stat_wal can be a one-stop\n> > view for all the WAL related stats.\n> >\n>\n> If possible, let's have all the I/O stats (even for WAL) in\n> pg_stat_io. Can't we show the WAL data we get from buffers in the hits\n> column and then have read_bytes or something like that to know the\n> amount of data read?\n>\n> --\n> With Regards,\n> Amit Kapila.\n>\n>\n\n\n",
"msg_date": "Sun, 9 Jun 2024 20:35:08 +0530",
"msg_from": "Nitin Jadhav <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Show WAL write and fsync stats in pg_stat_io"
},
{
"msg_contents": "Hi,\n\nThank you for looking into this! And, sorry for the late answer.\n\nOn Mon, 13 May 2024 at 17:12, Bharath Rupireddy\n<[email protected]> wrote:\n>\n> On Fri, Apr 19, 2024 at 1:32 PM Nazir Bilal Yavuz <[email protected]> wrote:\n> >\n> > > I wanted to inform you that the 73f0a13266 commit changed all WALRead\n> > > calls to read variable bytes, only the WAL receiver was reading\n> > > variable bytes before.\n> >\n> > I want to start working on this again if possible. I will try to\n> > summarize the current status:\n>\n> Thanks for working on this.\n>\n> > * With the 73f0a13266 commit, the WALRead() function started to read\n> > variable bytes in every case. Before, only the WAL receiver was\n> > reading variable bytes.\n> >\n> > * With the 91f2cae7a4 commit, WALReadFromBuffers() is merged. We were\n> > discussing what we have to do when this is merged. It is decided that\n> > WALReadFromBuffers() does not call pgstat_report_wait_start() because\n> > this function does not perform any IO [1]. We may follow the same\n> > logic by not including these to pg_stat_io?\n>\n> Right. WALReadFromBuffers doesn't do any I/O.\n>\n> Whoever reads WAL from disk (backends, walsenders, recovery process)\n> using pg_pread (XLogPageRead, WALRead) needs to be tracked in\n> pg_stat_io or some other view. If it were to be in pg_stat_io,\n> although we may not be able to distinguish WAL read stats at a backend\n> level (like how many times/bytes a walsender or recovery process or a\n> backend read WAL from disk), but it can help understand overall impact\n> of WAL read I/O at a cluster level. With this approach, the WAL I/O\n> stats are divided up - WAL read I/O and write I/O stats are in\n> pg_stat_io and pg_stat_wal respectively.\n>\n> This makes me think if we need to add WAL read I/O stats also to\n> pg_stat_wal. Then, we can also add WALReadFromBuffers stats\n> hits/misses there. With this approach, pg_stat_wal can be a one-stop\n> view for all the WAL related stats. If needed, we can join info from\n> pg_stat_wal to pg_stat_io in system_views.sql so that the I/O stats\n> are emitted to the end-user via pg_stat_io.\n\nI agree that the ultimate goal is seeing WAL I/O stats from one place.\nThere is a reply to this from Amit:\n\nOn Tue, 28 May 2024 at 03:48, Amit Kapila <[email protected]> wrote:\n>\n> If possible, let's have all the I/O stats (even for WAL) in\n> pg_stat_io. Can't we show the WAL data we get from buffers in the hits\n> column and then have read_bytes or something like that to know the\n> amount of data read?\n\nI think it is better to have all the I/O stats in pg_stat_io like Amit\nsaid. And, it makes sense to me to show 'WAL data we get from buffers'\nin the hits column. Since, basically instead of doing I/O from disk;\nwe get data directly from WAL buffers. I think that fits the\nexplanation of the hits column in pg_stat_io, which is 'The number of\ntimes a desired block was found in a shared buffer.' [1].\n\n> > * With the b5a9b18cd0 commit, streaming I/O is merged but AFAIK this\n> > does not block anything related to putting WAL stats in pg_stat_io.\n> >\n> > If I am not missing any new changes, the only problem is reading\n> > variable bytes now. We have discussed a couple of solutions:\n> >\n> > 1- Change op_bytes to something like -1, 0, 1, NULL etc. 
and document\n> > that this means some variable byte I/O is happening.\n> >\n> > I kind of dislike this solution because if the *only* read I/O is\n> > happening in variable bytes, it will look like write and extend I/Os\n> > are happening in variable bytes as well. As a solution, it could be\n> > documented that only read I/Os could happen in variable bytes for now.\n>\n> Yes, read I/O for relation and WAL can happen in variable bytes. I\n> think this idea seems reasonable and simple yet useful to know the\n> cluster-wide read I/O.\n\nI agree.\n\n> However, another point here is how the total number of bytes read is\n> represented with existing pg_stat_io columns 'reads' and 'op_bytes'.\n> It is known now with 'reads' * 'op_bytes', but with variable bytes,\n> how is read bytes calculated? Maybe add new columns\n> read_bytes/write_bytes?\n>\n> > 2- Use op_bytes_[read | write | extend] columns instead of one\n> > op_bytes column, also use the first solution.\n> >\n> > This can solve the first solution's weakness but it introduces two\n> > more columns. This is more future proof compared to the first solution\n> > if there is a chance that some variable I/O could happen in write and\n> > extend calls as well in the future.\n>\n> -1 as more columns impact the readability and usability.\n\nI did not understand the overall difference between what you suggested\n(adding read_bytes/write_bytes columns) and my suggestion (adding\nop_bytes_[read | write | extend] columns). They both introduce new\ncolumns. Could you please explain what you suggested in more detail?\n\n> > 3- Create a new pg_stat_io_wal view to put WAL I/Os here instead of pg_stat_io.\n> >\n> > pg_stat_io could remain untouchable and we will have flexibility to\n> > edit this new view as much as we want. But the original aim of the\n> > pg_stat_io is evaluating all I/O from a single view and adding a new\n> > view breaks this aim.\n>\n> -1 as it defeats the very purpose of one-stop view pg_stat_io for all\n> kinds of I/O. PS: see my response above about adding both WAL write\n> I/O and read I/O stats to pg_stat_wal.\n\nI agree.\n\n[1] https://www.postgresql.org/docs/current/monitoring-stats.html#MONITORING-PG-STAT-IO-VIEW\n\n-- \nRegards,\nNazir Bilal Yavuz\nMicrosoft\n\n\n",
"msg_date": "Thu, 13 Jun 2024 11:51:59 +0300",
"msg_from": "Nazir Bilal Yavuz <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Show WAL write and fsync stats in pg_stat_io"
},
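
(Illustrative aside, not part of the original thread.) The byte accounting discussed above currently works because every operation moves a fixed number of bytes, so read volume can be derived as reads * op_bytes; variable-size WAL reads are exactly what would break that assumption.

-- How total read volume is derived today from the fixed op_bytes column.
SELECT backend_type, object, context,
       reads,
       op_bytes,                        -- fixed size of a single I/O operation
       reads * op_bytes AS bytes_read   -- only meaningful while op_bytes is fixed
FROM pg_stat_io
WHERE reads > 0
ORDER BY bytes_read DESC;
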
{
"msg_contents": "Hi,\n\nThank you for looking into this!\n\nOn Sun, 9 Jun 2024 at 18:05, Nitin Jadhav <[email protected]> wrote:\n>\n> > If possible, let's have all the I/O stats (even for WAL) in\n> > pg_stat_io. Can't we show the WAL data we get from buffers in the hits\n> > column and then have read_bytes or something like that to know the\n> > amount of data read?\n>\n> The ‘hits’ column in ‘pg_stat_io’ is a vital indicator for adjusting a\n> database. It signifies the count of cache hits, or in other words, the\n> instances where data was located in the ‘shared_buffers’. As a result,\n> keeping an eye on the ‘hits’ column in ‘pg_stat_io’ can offer useful\n> knowledge about the buffer cache’s efficiency and assist users in\n> making educated choices when fine-tuning their database. However, if\n> we include the hit count of WAL buffers in this, it may lead to\n> misleading interpretations for database tuning. If there’s something\n> I’ve overlooked that’s already been discussed, please feel free to\n> correct me.\n\nI think counting them as a hit makes sense. We read data from WAL\nbuffers instead of reading them from disk. And, WAL buffers are stored\nin shared memory so I believe they can be counted as hits in the\nshared buffers. Could you please explain how this change can 'lead to\nmisleading interpretations for database tuning' a bit more?\n\n-- \nRegards,\nNazir Bilal Yavuz\nMicrosoft\n\n\n",
"msg_date": "Thu, 13 Jun 2024 12:24:36 +0300",
"msg_from": "Nazir Bilal Yavuz <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Show WAL write and fsync stats in pg_stat_io"
},
{
"msg_contents": "On Thu, Jun 13, 2024 at 5:24 AM Nazir Bilal Yavuz <[email protected]> wrote:\n>\n> On Sun, 9 Jun 2024 at 18:05, Nitin Jadhav <[email protected]> wrote:\n> >\n> > > If possible, let's have all the I/O stats (even for WAL) in\n> > > pg_stat_io. Can't we show the WAL data we get from buffers in the hits\n> > > column and then have read_bytes or something like that to know the\n> > > amount of data read?\n> >\n> > The ‘hits’ column in ‘pg_stat_io’ is a vital indicator for adjusting a\n> > database. It signifies the count of cache hits, or in other words, the\n> > instances where data was located in the ‘shared_buffers’. As a result,\n> > keeping an eye on the ‘hits’ column in ‘pg_stat_io’ can offer useful\n> > knowledge about the buffer cache’s efficiency and assist users in\n> > making educated choices when fine-tuning their database. However, if\n> > we include the hit count of WAL buffers in this, it may lead to\n> > misleading interpretations for database tuning. If there’s something\n> > I’ve overlooked that’s already been discussed, please feel free to\n> > correct me.\n>\n> I think counting them as a hit makes sense. We read data from WAL\n> buffers instead of reading them from disk. And, WAL buffers are stored\n> in shared memory so I believe they can be counted as hits in the\n> shared buffers. Could you please explain how this change can 'lead to\n> misleading interpretations for database tuning' a bit more?\n\nPerhaps Nitin was thinking of a scenario in which WAL hits are counted\nas hits on the same IOObject as shared buffer hits. Since this thread\nhas been going on for awhile and we haven't recently had a schema\noverview, I could understand if there was some confusion. For clarity,\nI will restate that the current proposal is to count WAL buffer hits\nfor IOObject WAL, which means they will not be mixed in with shared\nbuffer hits.\n\nAnd I think it makes sense to count WAL IOObject hits since increasing\nwal_buffers can lead to more hits, right?\n\n- Melanie\n\n\n",
"msg_date": "Mon, 17 Jun 2024 10:53:27 -0400",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Show WAL write and fsync stats in pg_stat_io"
},
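
(Hypothetical sketch, not part of the original thread and not existing behaviour.) If the proposal above were adopted, with a separate 'wal' IOObject whose buffer hits are counted in hits and whose disk reads are counted in reads, WAL buffer efficiency could be checked roughly like this; the 'wal' object value is an assumption taken from the discussion.

-- Hypothetical: WAL buffer hit ratio under the proposed 'wal' IOObject.
SELECT backend_type, context,
       hits  AS wal_buffer_hits,   -- reads satisfied from WAL buffers
       reads AS wal_disk_reads,    -- reads that went to the WAL segment files
       round(100.0 * hits / NULLIF(hits + reads, 0), 1) AS wal_buffer_hit_pct
FROM pg_stat_io
WHERE object = 'wal';
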
{
"msg_contents": "> Perhaps Nitin was thinking of a scenario in which WAL hits are counted\n> as hits on the same IOObject as shared buffer hits. Since this thread\n> has been going on for awhile and we haven't recently had a schema\n> overview, I could understand if there was some confusion\n\nYes. I was considering a scenario where WAL hits are counted as hits\non the same IOObject as shared buffer hits.\n\n> For clarity,\n> I will restate that the current proposal is to count WAL buffer hits\n> for IOObject WAL, which means they will not be mixed in with shared\n> buffer hits.\n>\n> And I think it makes sense to count WAL IOObject hits since increasing\n> wal_buffers can lead to more hits, right?\n\nThank you for the clarification. I agree with the proposal to count\nWAL buffer hits for IOObject WAL separately from shared buffer hits.\nThis distinction will provide a more accurate representation.\n\nBest Regards,\nNitin Jadhav\nAzure Database for PostgreSQL\nMicrosoft\n\nOn Mon, Jun 17, 2024 at 8:23 PM Melanie Plageman\n<[email protected]> wrote:\n>\n> On Thu, Jun 13, 2024 at 5:24 AM Nazir Bilal Yavuz <[email protected]> wrote:\n> >\n> > On Sun, 9 Jun 2024 at 18:05, Nitin Jadhav <[email protected]> wrote:\n> > >\n> > > > If possible, let's have all the I/O stats (even for WAL) in\n> > > > pg_stat_io. Can't we show the WAL data we get from buffers in the hits\n> > > > column and then have read_bytes or something like that to know the\n> > > > amount of data read?\n> > >\n> > > The ‘hits’ column in ‘pg_stat_io’ is a vital indicator for adjusting a\n> > > database. It signifies the count of cache hits, or in other words, the\n> > > instances where data was located in the ‘shared_buffers’. As a result,\n> > > keeping an eye on the ‘hits’ column in ‘pg_stat_io’ can offer useful\n> > > knowledge about the buffer cache’s efficiency and assist users in\n> > > making educated choices when fine-tuning their database. However, if\n> > > we include the hit count of WAL buffers in this, it may lead to\n> > > misleading interpretations for database tuning. If there’s something\n> > > I’ve overlooked that’s already been discussed, please feel free to\n> > > correct me.\n> >\n> > I think counting them as a hit makes sense. We read data from WAL\n> > buffers instead of reading them from disk. And, WAL buffers are stored\n> > in shared memory so I believe they can be counted as hits in the\n> > shared buffers. Could you please explain how this change can 'lead to\n> > misleading interpretations for database tuning' a bit more?\n>\n> Perhaps Nitin was thinking of a scenario in which WAL hits are counted\n> as hits on the same IOObject as shared buffer hits. Since this thread\n> has been going on for awhile and we haven't recently had a schema\n> overview, I could understand if there was some confusion. For clarity,\n> I will restate that the current proposal is to count WAL buffer hits\n> for IOObject WAL, which means they will not be mixed in with shared\n> buffer hits.\n>\n> And I think it makes sense to count WAL IOObject hits since increasing\n> wal_buffers can lead to more hits, right?\n>\n> - Melanie\n\n\n",
"msg_date": "Sat, 6 Jul 2024 12:58:56 +0530",
"msg_from": "Nitin Jadhav <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Show WAL write and fsync stats in pg_stat_io"
}
] |
[
{
"msg_contents": "Here are a few patches related to attstattarget:\n\n- 0001: Change type of pg_statistic_ext.stxstattarget, to match \nattstattarget. Maybe this should go into PG16, for consistency?\n\n- 0002: Add macro for maximum statistics target, instead of hardcoding \nit everywhere.\n\n- 0003: Take pg_attribute out of VacAttrStats. This simplifies some \ncode, especially for extended statistics, which had to have weird \nworkarounds.",
"msg_date": "Wed, 28 Jun 2023 16:52:27 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "several attstattarget-related improvements"
},
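
(Illustrative aside, not part of the original thread.) Both targets touched by patch 0001 are set through the same style of command and share the same upper limit of 10000, which is the consistency argument for matching the column types and for the maximum-target macro in 0002. The commands below are standard syntax; the table, column, and statistics object names are made up.

-- Per-column target, stored in pg_attribute.attstattarget.
ALTER TABLE measurements ALTER COLUMN reading SET STATISTICS 1000;

-- Per-statistics-object target, stored in pg_statistic_ext.stxstattarget.
CREATE STATISTICS measurements_stats (dependencies, ndistinct)
    ON sensor_id, reading FROM measurements;
ALTER STATISTICS measurements_stats SET STATISTICS 1000;
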
{
"msg_contents": "On 6/28/23 16:52, Peter Eisentraut wrote:\n> Here are a few patches related to attstattarget:\n> \n> - 0001: Change type of pg_statistic_ext.stxstattarget, to match\n> attstattarget. Maybe this should go into PG16, for consistency?\n> \n> - 0002: Add macro for maximum statistics target, instead of hardcoding\n> it everywhere.\n> \n> - 0003: Take pg_attribute out of VacAttrStats. This simplifies some\n> code, especially for extended statistics, which had to have weird\n> workarounds.\n\n+1 to all three patches\n\nNot sure about 0001 vs PG16, it'd require catversion bump.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 28 Jun 2023 23:30:11 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: several attstattarget-related improvements"
},
{
"msg_contents": "Tomas Vondra <[email protected]> writes:\n> On 6/28/23 16:52, Peter Eisentraut wrote:\n>> - 0001: Change type of pg_statistic_ext.stxstattarget, to match\n>> attstattarget. Maybe this should go into PG16, for consistency?\n\n> Not sure about 0001 vs PG16, it'd require catversion bump.\n\nYeah, past beta1 I think we should be conservative about bumping\ncatversion. Suggest you hold this for now, and if we find some\nmore-compelling reason for a catversion bump in v16, we can sneak\nit in at that time. Otherwise, I won't cry if it waits for v17.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 28 Jun 2023 18:10:09 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: several attstattarget-related improvements"
},
{
"msg_contents": "On 28.06.23 23:30, Tomas Vondra wrote:\n> On 6/28/23 16:52, Peter Eisentraut wrote:\n>> Here are a few patches related to attstattarget:\n>>\n>> - 0001: Change type of pg_statistic_ext.stxstattarget, to match\n>> attstattarget. Maybe this should go into PG16, for consistency?\n>>\n>> - 0002: Add macro for maximum statistics target, instead of hardcoding\n>> it everywhere.\n>>\n>> - 0003: Take pg_attribute out of VacAttrStats. This simplifies some\n>> code, especially for extended statistics, which had to have weird\n>> workarounds.\n> \n> +1 to all three patches\n> \n> Not sure about 0001 vs PG16, it'd require catversion bump.\n\ncommitted (to PG17 for now)\n\n\n\n",
"msg_date": "Mon, 3 Jul 2023 07:26:12 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: several attstattarget-related improvements"
}
] |
[
{
"msg_contents": "The MergeAttributes() and related code in and around tablecmds.c is huge \nand ancient, with many things bolted on over time, and difficult to deal \nwith. I took some time to make careful incremental updates and \nrefactorings to make the code easier to follow, more compact, and more \nmodern in appearance. I also found several pieces of obsolete code \nalong the way. This resulted in the attached long patch series. Each \npatch tries to make a single change and can be considered incrementally. \n At the end, the code is shorter, better factored, and I hope easier to \nunderstand. There shouldn't be any change in behavior.",
"msg_date": "Wed, 28 Jun 2023 18:30:14 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "tablecmds.c/MergeAttributes() cleanup"
},
{
"msg_contents": "On 2023-Jun-28, Peter Eisentraut wrote:\n\n> The MergeAttributes() and related code in and around tablecmds.c is huge and\n> ancient, with many things bolted on over time, and difficult to deal with.\n> I took some time to make careful incremental updates and refactorings to\n> make the code easier to follow, more compact, and more modern in appearance.\n> I also found several pieces of obsolete code along the way. This resulted\n> in the attached long patch series. Each patch tries to make a single change\n> and can be considered incrementally. At the end, the code is shorter,\n> better factored, and I hope easier to understand. There shouldn't be any\n> change in behavior.\n\nI request to leave this alone for now. I have enough things to juggle\nwith in the NOTNULLs patch; this patchset looks like it will cause me\nmessy merge conflicts. 0004 for instance looks problematic, as does\n0007 and 0016.\n\nFWIW for the most part that patch is working and I intend to re-submit\nshortly, but the relevant pg_upgrade code is really brittle, so it's\ntaken me much more than I expected to get it in good shape for all\ncases.\n\nThanks\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Thu, 29 Jun 2023 13:03:41 +0200",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: tablecmds.c/MergeAttributes() cleanup"
},
{
"msg_contents": "On 2023-Jun-28, Peter Eisentraut wrote:\n\n> The MergeAttributes() and related code in and around tablecmds.c is huge and\n> ancient, with many things bolted on over time, and difficult to deal with.\n> I took some time to make careful incremental updates and refactorings to\n> make the code easier to follow, more compact, and more modern in appearance.\n> I also found several pieces of obsolete code along the way. This resulted\n> in the attached long patch series. Each patch tries to make a single change\n> and can be considered incrementally. At the end, the code is shorter,\n> better factored, and I hope easier to understand. There shouldn't be any\n> change in behavior.\n\nI spent a few minutes doing a test merge of this to my branch with NOT\nNULL changes. Here's a quick review.\n\n> Subject: [PATCH 01/17] Remove obsolete comment about OID support\n\nObvious, trivial. +1\n\n> Subject: [PATCH 02/17] Remove ancient special case code for adding oid columns\n\nLGTM; deletes dead code.\n\n> Subject: [PATCH 03/17] Remove ancient special case code for dropping oid\n> columns\n\nHmm, interesting. Yay for more dead code removal. Didn't verify it.\n\n> Subject: [PATCH 04/17] Make more use of makeColumnDef()\n\nGood idea, +1. Some lines (almost all makeColumnDef callsites) end up\ntoo long. This is the first patch that actually conflicts with the NOT\nNULLs one, and the conflicts are easy to solve, so I won't complain if\nyou want to get it pushed soon.\n\n> Subject: [PATCH 05/17] Clean up MergeAttributesIntoExisting()\n\nI don't disagree with this in principle, but this one has more\nconflicts than the previous ones.\n\n\n> Subject: [PATCH 06/17] Clean up MergeCheckConstraint()\n\nLooks a reasonable change as far as this patch goes.\n\nHowever, reading it I noticed that CookedConstraint->inhcount is int\nand is tested for wraparound, but pg_constraint.coninhcount is int16.\nThis is probably bogus already. ColumnDef->inhcount is also int. These\nshould be narrowed to match the catalog definitions.\n\n\n> Subject: [PATCH 07/17] MergeAttributes() and related variable renaming\n\nI think this makes sense, but there's a bunch of conflicts to NOT NULLs.\nI guess we can come back to this one later.\n\n> Subject: [PATCH 08/17] Improve some catalog documentation\n> \n> Point out that typstorage and attstorage are never '\\0', even for\n> fixed-length types. This is different from attcompression. For this\n> reason, some of the handling of these columns in tablecmds.c etc. is\n> different. (catalogs.sgml already contained this information in an\n> indirect way.)\n\nI don't understand why we must point out that they're never '\\0'. I\nmean, if we're doing that, why not say that they can never be \\xFF?\nThe valid values are already listed. The other parts of this patch look\nokay.\n\n> Subject: [PATCH 09/17] Remove useless if condition\n> \n> This is useless because these fields are not set anywhere before, so\n> we can assign them unconditionally. This also makes this more\n> consistent with ATExecAddColumn().\n\nMakes sense.\n\n> Subject: [PATCH 10/17] Remove useless if condition\n> \n> We can call GetAttributeCompression() with a NULL argument. It\n> handles that internally already. 
This change makes all the callers of\n> GetAttributeCompression() uniform.\n\nI agree, +1.\n\n> From 2eda6bc9897d0995a5112e2851c51daf0c35656e Mon Sep 17 00:00:00 2001\n> From: Peter Eisentraut <[email protected]>\n> Date: Wed, 14 Jun 2023 17:51:31 +0200\n> Subject: [PATCH 11/17] Refactor ATExecAddColumn() to use\n> BuildDescForRelation()\n> \n> BuildDescForRelation() has all the knowledge for converting a\n> ColumnDef into pg_attribute/tuple descriptor. ATExecAddColumn() can\n> make use of that, instead of duplicating all that logic. We just pass\n> a one-element list of ColumnDef and we get back exactly the data\n> structure we need. Note that we don't even need to touch\n> BuildDescForRelation() to make this work.\n\nHmm, crazy. I'm not sure I like this, because it seems much too clever.\nThe number of lines that are deleted is alluring, though.\n\nMaybe it'd be better to create a separate routine that takes a single\nColumnDef and returns the Form_pg_attribute element for it; then use\nthat in both BuildDescForRelation and ATExecAddColumn.\n\n> Subject: [PATCH 12/17] Push attidentity and attgenerated handling into\n> BuildDescForRelation()\n> \n> Previously, this was handled by the callers separately, but it can be\n> trivially moved into BuildDescForRelation() so that it is handled in a\n> central place.\n\nLooks reasonable.\n\n\n\nI think the last few patches are the more innovative (interesting,\nuseful) of the bunch. Let's get the first few out of the way.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Tue, 11 Jul 2023 20:17:23 +0200",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: tablecmds.c/MergeAttributes() cleanup"
},
{
"msg_contents": "On 11.07.23 20:17, Alvaro Herrera wrote:\n> I spent a few minutes doing a test merge of this to my branch with NOT\n> NULL changes. Here's a quick review.\n> \n>> Subject: [PATCH 01/17] Remove obsolete comment about OID support\n> \n> Obvious, trivial. +1\n> \n>> Subject: [PATCH 02/17] Remove ancient special case code for adding oid columns\n> \n> LGTM; deletes dead code.\n> \n>> Subject: [PATCH 03/17] Remove ancient special case code for dropping oid\n>> columns\n> \n> Hmm, interesting. Yay for more dead code removal. Didn't verify it.\n\nI have committed these first three. I'll leave it at that for now.\n\n>> Subject: [PATCH 08/17] Improve some catalog documentation\n>>\n>> Point out that typstorage and attstorage are never '\\0', even for\n>> fixed-length types. This is different from attcompression. For this\n>> reason, some of the handling of these columns in tablecmds.c etc. is\n>> different. (catalogs.sgml already contained this information in an\n>> indirect way.)\n> \n> I don't understand why we must point out that they're never '\\0'. I\n> mean, if we're doing that, why not say that they can never be \\xFF?\n> The valid values are already listed. The other parts of this patch look\n> okay.\n\nWhile working through the storage and compression handling, which look \nsimilar, I was momentarily puzzled by this. While attcompression can be \n0 to mean, use default, this is not possible/allowed for attstorage, but \nit took looking around three corners to verify this. It could be more \nexplicit, I thought.\n\n\n\n",
"msg_date": "Wed, 12 Jul 2023 16:29:26 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: tablecmds.c/MergeAttributes() cleanup"
},
{
"msg_contents": "On 12.07.23 16:29, Peter Eisentraut wrote:\n> On 11.07.23 20:17, Alvaro Herrera wrote:\n>> I spent a few minutes doing a test merge of this to my branch with NOT\n>> NULL changes. Here's a quick review.\n>>\n>>> Subject: [PATCH 01/17] Remove obsolete comment about OID support\n>>\n>> Obvious, trivial. +1\n>>\n>>> Subject: [PATCH 02/17] Remove ancient special case code for adding \n>>> oid columns\n>>\n>> LGTM; deletes dead code.\n>>\n>>> Subject: [PATCH 03/17] Remove ancient special case code for dropping oid\n>>> columns\n>>\n>> Hmm, interesting. Yay for more dead code removal. Didn't verify it.\n> \n> I have committed these first three. I'll leave it at that for now.\n\nI have committed a few more patches from this series that were already \nagreed upon. The remaining ones are rebased and reordered a bit, attached.\n\nThere was some doubt about the patch \"Refactor ATExecAddColumn() to use \nBuildDescForRelation()\" (now 0009), whether it's too clever to build a \nfake one-item tuple descriptor. I am working on another patch, which I \nhope to post this week, that proposes to replace the use of tuple \ndescriptors there with a List of something. That would then allow maybe \nrewriting this in a less-clever way. That patch incidentally also wants \nto move BuildDescForRelation from tupdesc.c to tablecmds.c (patch 0007 \nhere).",
"msg_date": "Tue, 29 Aug 2023 10:43:39 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: tablecmds.c/MergeAttributes() cleanup"
},
{
"msg_contents": "On 2023-Aug-29, Peter Eisentraut wrote:\n\n\nRegarding this hunk in 0002,\n\n> @@ -3278,13 +3261,16 @@ MergeAttributes(List *schema, List *supers, char relpersistence,\n> *\n> * constraints is a list of CookedConstraint structs for previous constraints.\n> *\n> - * Returns true if merged (constraint is a duplicate), or false if it's\n> - * got a so-far-unique name, or throws error if conflict.\n> + * If the constraint is a duplicate, then the existing constraint's\n> + * inheritance count is updated. If the constraint doesn't match or conflict\n> + * with an existing one, a new constraint is appended to the list. If there\n> + * is a conflict (same name but different expression), throw an error.\n\nThis wording confused me:\n\n\"If the constraint doesn't match or conflict with an existing one, a new\nconstraint is appended to the list.\"\n\nI first read it as \"doesn't match or conflicts with ...\" (i.e., the\nnegation only applied to the first verb, not both) which would have been\nsurprising (== broken) behavior.\n\nI think it's clearer if you say \"doesn't match nor conflict\", but I'm\nnot sure if this is grammatically correct.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Tue, 29 Aug 2023 13:20:28 +0200",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: tablecmds.c/MergeAttributes() cleanup"
},
{
"msg_contents": "On 2023-Aug-29, Peter Eisentraut wrote:\n\n> From 471fda80c41fae835ecbe63ae8505526a37487a9 Mon Sep 17 00:00:00 2001\n> From: Peter Eisentraut <[email protected]>\n> Date: Wed, 12 Jul 2023 16:12:35 +0200\n> Subject: [PATCH v2 04/10] Add TupleDescGetDefault()\n> \n> This unifies some repetitive code.\n> \n> Note: I didn't push the \"not found\" error message into the new\n> function, even though all existing callers would be able to make use\n> of it. Using the existing error handling as-is would probably require\n> exposing the Relation type via tupdesc.h, which doesn't seem\n> desirable.\n\nNote that all errors here are elog(ERROR), not user-facing, so ISTM\nit would be OK to change them to report the relation OID rather than the\nname.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"In fact, the basic problem with Perl 5's subroutines is that they're not\ncrufty enough, so the cruft leaks out into user-defined code instead, by\nthe Conservation of Cruft Principle.\" (Larry Wall, Apocalypse 6)\n\n\n",
"msg_date": "Tue, 29 Aug 2023 14:09:31 +0200",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: tablecmds.c/MergeAttributes() cleanup"
},
{
"msg_contents": "On Tue, Aug 29, 2023 at 10:43:39AM +0200, Peter Eisentraut wrote:\n> I have committed a few more patches from this series that were already\n> agreed upon. The remaining ones are rebased and reordered a bit, attached.\n\nMy compiler is complaining about 1fa9241b:\n\n../postgresql/src/backend/commands/sequence.c: In function ‘DefineSequence’:\n../postgresql/src/backend/commands/sequence.c:196:21: error: ‘coldef’ may be used uninitialized in this function [-Werror=maybe-uninitialized]\n 196 | stmt->tableElts = lappend(stmt->tableElts, coldef);\n | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nThis went away when I added a default case that ERROR'd or initialized\ncoldef to NULL.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 29 Aug 2023 10:45:21 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: tablecmds.c/MergeAttributes() cleanup"
},
{
"msg_contents": "On 2023-Aug-29, Nathan Bossart wrote:\n\n> On Tue, Aug 29, 2023 at 10:43:39AM +0200, Peter Eisentraut wrote:\n> > I have committed a few more patches from this series that were already\n> > agreed upon. The remaining ones are rebased and reordered a bit, attached.\n> \n> My compiler is complaining about 1fa9241b:\n> \n> ../postgresql/src/backend/commands/sequence.c: In function ‘DefineSequence’:\n> ../postgresql/src/backend/commands/sequence.c:196:21: error: ‘coldef’ may be used uninitialized in this function [-Werror=maybe-uninitialized]\n> 196 | stmt->tableElts = lappend(stmt->tableElts, coldef);\n> | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n> \n> This went away when I added a default case that ERROR'd or initialized\n> coldef to NULL.\n\nMakes sense. However, maybe we should replace those ugly defines and\ntheir hardcoded values in DefineSequence with a proper array with their\nnames and datatypes.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"Los trabajadores menos efectivos son sistematicamente llevados al lugar\ndonde pueden hacer el menor daño posible: gerencia.\" (El principio Dilbert)\n\n\n",
"msg_date": "Tue, 29 Aug 2023 20:44:02 +0200",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: tablecmds.c/MergeAttributes() cleanup"
},
{
"msg_contents": "On Tue, Aug 29, 2023 at 08:44:02PM +0200, Alvaro Herrera wrote:\n> On 2023-Aug-29, Nathan Bossart wrote:\n>> My compiler is complaining about 1fa9241b:\n>> \n>> ../postgresql/src/backend/commands/sequence.c: In function ‘DefineSequence’:\n>> ../postgresql/src/backend/commands/sequence.c:196:21: error: ‘coldef’ may be used uninitialized in this function [-Werror=maybe-uninitialized]\n>> 196 | stmt->tableElts = lappend(stmt->tableElts, coldef);\n>> | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n>> \n>> This went away when I added a default case that ERROR'd or initialized\n>> coldef to NULL.\n> \n> Makes sense. However, maybe we should replace those ugly defines and\n> their hardcoded values in DefineSequence with a proper array with their\n> names and datatypes.\n\nThat might be an improvement, but IIUC you'd still need to enumerate all of\nthe columns or data types to make sure you use the right get-Datum\nfunction. It doesn't help that last_value uses Int64GetDatumFast and\nlog_cnt uses Int64GetDatum. I could be missing something, though.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 29 Aug 2023 13:40:08 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: tablecmds.c/MergeAttributes() cleanup"
},
{
"msg_contents": "On 29.08.23 19:45, Nathan Bossart wrote:\n> On Tue, Aug 29, 2023 at 10:43:39AM +0200, Peter Eisentraut wrote:\n>> I have committed a few more patches from this series that were already\n>> agreed upon. The remaining ones are rebased and reordered a bit, attached.\n> \n> My compiler is complaining about 1fa9241b:\n> \n> ../postgresql/src/backend/commands/sequence.c: In function ‘DefineSequence’:\n> ../postgresql/src/backend/commands/sequence.c:196:21: error: ‘coldef’ may be used uninitialized in this function [-Werror=maybe-uninitialized]\n> 196 | stmt->tableElts = lappend(stmt->tableElts, coldef);\n> | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n> \n> This went away when I added a default case that ERROR'd or initialized\n> coldef to NULL.\n\nfixed\n\n\n\n",
"msg_date": "Wed, 30 Aug 2023 16:22:03 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: tablecmds.c/MergeAttributes() cleanup"
},
{
"msg_contents": "On 2023-Aug-29, Nathan Bossart wrote:\n\n> On Tue, Aug 29, 2023 at 08:44:02PM +0200, Alvaro Herrera wrote:\n\n> > Makes sense. However, maybe we should replace those ugly defines and\n> > their hardcoded values in DefineSequence with a proper array with their\n> > names and datatypes.\n> \n> That might be an improvement, but IIUC you'd still need to enumerate all of\n> the columns or data types to make sure you use the right get-Datum\n> function. It doesn't help that last_value uses Int64GetDatumFast and\n> log_cnt uses Int64GetDatum. I could be missing something, though.\n\nWell, for sure I meant to enumerate everything that was needed,\nincluding the initializer for the value. Like in the attached patch.\n\nHowever, now that I've actually written it, I don't find it so pretty\nanymore, but maybe that's just because I don't know how to write the\narray assignment as a single statement instead of a separate statement\nfor each column.\n\nBut this should silence the warnings, anyway.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/",
"msg_date": "Wed, 30 Aug 2023 16:46:50 +0200",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: tablecmds.c/MergeAttributes() cleanup"
},
{
"msg_contents": "On 29.08.23 13:20, Alvaro Herrera wrote:\n> On 2023-Aug-29, Peter Eisentraut wrote:\n>> @@ -3278,13 +3261,16 @@ MergeAttributes(List *schema, List *supers, char relpersistence,\n>> *\n>> * constraints is a list of CookedConstraint structs for previous constraints.\n>> *\n>> - * Returns true if merged (constraint is a duplicate), or false if it's\n>> - * got a so-far-unique name, or throws error if conflict.\n>> + * If the constraint is a duplicate, then the existing constraint's\n>> + * inheritance count is updated. If the constraint doesn't match or conflict\n>> + * with an existing one, a new constraint is appended to the list. If there\n>> + * is a conflict (same name but different expression), throw an error.\n> \n> This wording confused me:\n> \n> \"If the constraint doesn't match or conflict with an existing one, a new\n> constraint is appended to the list.\"\n> \n> I first read it as \"doesn't match or conflicts with ...\" (i.e., the\n> negation only applied to the first verb, not both) which would have been\n> surprising (== broken) behavior.\n> \n> I think it's clearer if you say \"doesn't match nor conflict\", but I'm\n> not sure if this is grammatically correct.\n\nHere is an updated version of this patch set. I resolved some conflicts \nand addressed this comment of yours. I also dropped the one patch with \nsome catalog header edits that people didn't seem to particularly like.\n\nThe patches that are now 0001 through 0004 were previously reviewed and \njust held for the not-null constraint patches, I think, so I'll commit \nthem in a few days if there are no objections.\n\nPatches 0005 through 0007 are also ready in my opinion, but they haven't \nreally been reviewed, so this would be something for reviewers to focus \non. (0005 and 0007 are trivial, but they go to together with 0006.)\n\nThe remaining 0008 and 0009 were still under discussion and contemplation.",
"msg_date": "Tue, 19 Sep 2023 15:11:51 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: tablecmds.c/MergeAttributes() cleanup"
},
{
"msg_contents": "On 19.09.23 15:11, Peter Eisentraut wrote:\n> Here is an updated version of this patch set. I resolved some conflicts \n> and addressed this comment of yours. I also dropped the one patch with \n> some catalog header edits that people didn't seem to particularly like.\n> \n> The patches that are now 0001 through 0004 were previously reviewed and \n> just held for the not-null constraint patches, I think, so I'll commit \n> them in a few days if there are no objections.\n> \n> Patches 0005 through 0007 are also ready in my opinion, but they haven't \n> really been reviewed, so this would be something for reviewers to focus \n> on. (0005 and 0007 are trivial, but they go to together with 0006.)\n> \n> The remaining 0008 and 0009 were still under discussion and contemplation.\n\nI have committed through 0007, and I'll now close this patch set as \n\"Committed\", and I will (probably) bring back the rest (especially 0008) \nas part of a different patch set soon.\n\n\n\n",
"msg_date": "Thu, 5 Oct 2023 17:49:24 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: tablecmds.c/MergeAttributes() cleanup"
},
{
"msg_contents": "On 05.10.23 17:49, Peter Eisentraut wrote:\n> On 19.09.23 15:11, Peter Eisentraut wrote:\n>> Here is an updated version of this patch set. I resolved some \n>> conflicts and addressed this comment of yours. I also dropped the one \n>> patch with some catalog header edits that people didn't seem to \n>> particularly like.\n>>\n>> The patches that are now 0001 through 0004 were previously reviewed \n>> and just held for the not-null constraint patches, I think, so I'll \n>> commit them in a few days if there are no objections.\n>>\n>> Patches 0005 through 0007 are also ready in my opinion, but they \n>> haven't really been reviewed, so this would be something for reviewers \n>> to focus on. (0005 and 0007 are trivial, but they go to together with \n>> 0006.)\n>>\n>> The remaining 0008 and 0009 were still under discussion and \n>> contemplation.\n> \n> I have committed through 0007, and I'll now close this patch set as \n> \"Committed\", and I will (probably) bring back the rest (especially 0008) \n> as part of a different patch set soon.\n\nAfter playing with this for, well, 2 months, and considering various \nother approaches, I would like to bring back the remaining two patches \nin unchanged form.\n\nEspecially the (now) first patch \"Refactor ATExecAddColumn() to use \nBuildDescForRelation()\" would be very helpful for facilitating further \nrefactoring in this area, because it avoids having two essentially \nduplicate pieces of code responsible for converting a ColumnDef node \ninto internal form.\n\nOne of your (Álvaro's) comments about this earlier was\n\n > Hmm, crazy. I'm not sure I like this, because it seems much too clever.\n > The number of lines that are deleted is alluring, though.\n >\n > Maybe it'd be better to create a separate routine that takes a single\n > ColumnDef and returns the Form_pg_attribute element for it; then use\n > that in both BuildDescForRelation and ATExecAddColumn.\n\nwhich was also my thought at the beginning. However, this wouldn't \nquite work that way, for several reasons. For instance, \nBuildDescForRelation() also needs to keep track of the has_not_null \nproperty across all fields, and so if you split that function up, you \nwould have to somehow make that an output argument and have the caller \nkeep track of it. Also, the output of BuildDescForRelation() in \nATExecAddColumn() is passed into InsertPgAttributeTuples(), which \nrequires a tuple descriptor anyway, so splitting this up into a \nper-attribute function would then require ATExecAddColumn() to \nre-assemble a tuple descriptor anyway, so this wouldn't save anything. \nAlso note that ATExecAddColumn() could in theory be enhanced to add more \nthan one column in one go, so having this code structure in place isn't \ninconsistent with that.\n\nThe main hackish thing, I suppose, is that we have to fix up the \nattribute number after returning from BuildDescForRelation(). I suppose \nwe could address that by passing in a starting attribute number (or \nalternatively maximum existing attribute number) into \nBuildDescForRelation(). I think that would be okay; it would probably \nbe about a wash in terms of code added versus saved.\n\n\nThe (now) second patch is also still of interest to me, but I have since \nnoticed that I think [0] should be fixed first, but to make that fix \nsimpler, I would like the first patch from here.\n\n[0]: \nhttps://www.postgresql.org/message-id/flat/24656cec-d6ef-4d15-8b5b-e8dfc9c833a7%40eisentraut.org",
"msg_date": "Wed, 6 Dec 2023 09:23:07 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: tablecmds.c/MergeAttributes() cleanup"
},
{
"msg_contents": "On 2023-Dec-06, Peter Eisentraut wrote:\n\n> One of your (Álvaro's) comments about this earlier was\n> \n> > Hmm, crazy. I'm not sure I like this, because it seems much too clever.\n> > The number of lines that are deleted is alluring, though.\n> >\n> > Maybe it'd be better to create a separate routine that takes a single\n> > ColumnDef and returns the Form_pg_attribute element for it; then use\n> > that in both BuildDescForRelation and ATExecAddColumn.\n> \n> which was also my thought at the beginning. However, this wouldn't quite\n> work that way, for several reasons. For instance, BuildDescForRelation()\n> also needs to keep track of the has_not_null property across all fields, and\n> so if you split that function up, you would have to somehow make that an\n> output argument and have the caller keep track of it. Also, the output of\n> BuildDescForRelation() in ATExecAddColumn() is passed into\n> InsertPgAttributeTuples(), which requires a tuple descriptor anyway, so\n> splitting this up into a per-attribute function would then require\n> ATExecAddColumn() to re-assemble a tuple descriptor anyway, so this wouldn't\n> save anything.\n\nHmm. Well, if this state of affairs is useful to you, then I withdraw\nmy objection, because with this patch we're not really adding any new\nweirdness, just moving around already-existing weirdness. So let's\npress ahead with 0001. (I didn't look at 0002 this time, since\napparently you'd like to process the other patch first and then come\nback here.)\n\n\nIf you look closely at InsertPgAttributeTuples and accompanying code, it\nall looks a bit archaic. They seem to be treating TupleDesc as a\nglorified array of Form_pg_attribute elements in a convenient packaging.\nIt's probably cleaner to change these APIs so that they deal with a\nForm_pg_attribute array, and not TupleDesc anymore. But we can hack on\nthat some other day.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"Pensar que el espectro que vemos es ilusorio no lo despoja de espanto,\nsólo le suma el nuevo terror de la locura\" (Perelandra, C.S. Lewis)\n\n\n",
"msg_date": "Thu, 11 Jan 2024 13:41:49 +0100",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: tablecmds.c/MergeAttributes() cleanup"
},
{
"msg_contents": "On 2024-Jan-11, Alvaro Herrera wrote:\n\n> If you look closely at InsertPgAttributeTuples and accompanying code, it\n> all looks a bit archaic. They seem to be treating TupleDesc as a\n> glorified array of Form_pg_attribute elements in a convenient packaging.\n> It's probably cleaner to change these APIs so that they deal with a\n> Form_pg_attribute array, and not TupleDesc anymore. But we can hack on\n> that some other day.\n\nIn addition, it also occurs to me now that maybe it would make sense to\nchange the TupleDesc implementation to use a List of Form_pg_attribute\ninstead of an array, and do away with ->natts. This would let us change\nall \"for ( ... natts ...)\" loops into foreach_ptr() loops ... there are\nonly five hundred of them or so --rolls eyes--.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"El sudor es la mejor cura para un pensamiento enfermo\" (Bardia)\n\n\n",
"msg_date": "Fri, 12 Jan 2024 11:32:26 +0100",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: tablecmds.c/MergeAttributes() cleanup"
},
{
"msg_contents": "On Fri, Jan 12, 2024 at 5:32 AM Alvaro Herrera <[email protected]> wrote:\n> On 2024-Jan-11, Alvaro Herrera wrote:\n> > If you look closely at InsertPgAttributeTuples and accompanying code, it\n> > all looks a bit archaic. They seem to be treating TupleDesc as a\n> > glorified array of Form_pg_attribute elements in a convenient packaging.\n> > It's probably cleaner to change these APIs so that they deal with a\n> > Form_pg_attribute array, and not TupleDesc anymore. But we can hack on\n> > that some other day.\n>\n> In addition, it also occurs to me now that maybe it would make sense to\n> change the TupleDesc implementation to use a List of Form_pg_attribute\n> instead of an array, and do away with ->natts. This would let us change\n> all \"for ( ... natts ...)\" loops into foreach_ptr() loops ... there are\n> only five hundred of them or so --rolls eyes--.\n\nOpen-coding stuff like this can easily work out to a loss, and I\npersonally think we're overly dependent on List. It's not a\nparticularly good abstraction, IMHO, and if we do a lot of work to\nstart using it everywhere because a list is really an array, then what\nhappens when somebody decides that a list really ought to be a\nskip-list, or a hash table, or some other crazy thing?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 12 Jan 2024 10:09:41 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: tablecmds.c/MergeAttributes() cleanup"
},
{
"msg_contents": "Robert Haas <[email protected]> writes:\n> On Fri, Jan 12, 2024 at 5:32 AM Alvaro Herrera <[email protected]> wrote:\n>> In addition, it also occurs to me now that maybe it would make sense to\n>> change the TupleDesc implementation to use a List of Form_pg_attribute\n>> instead of an array, and do away with ->natts. This would let us change\n>> all \"for ( ... natts ...)\" loops into foreach_ptr() loops ... there are\n>> only five hundred of them or so --rolls eyes--.\n\n> Open-coding stuff like this can easily work out to a loss, and I\n> personally think we're overly dependent on List. It's not a\n> particularly good abstraction, IMHO, and if we do a lot of work to\n> start using it everywhere because a list is really an array, then what\n> happens when somebody decides that a list really ought to be a\n> skip-list, or a hash table, or some other crazy thing?\n\nWithout getting into opinions on whether List is a good abstraction,\nI'm -1 on this idea. It would be a large amount of code churn, with\nattendant back-patching pain, and I just don't see that there is\nmuch to be gained.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 12 Jan 2024 10:42:45 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: tablecmds.c/MergeAttributes() cleanup"
},
{
"msg_contents": "On Fri, Jan 12, 2024 at 10:09 AM Robert Haas <[email protected]> wrote:\n> Open-coding stuff like this can easily work out to a loss, and I\n> personally think we're overly dependent on List. It's not a\n> particularly good abstraction, IMHO, and if we do a lot of work to\n> start using it everywhere because a list is really an array, then what\n> happens when somebody decides that a list really ought to be a\n> skip-list, or a hash table, or some other crazy thing?\n\nThis paragraph was a bit garbled. I meant to say that open-coding can\nbe better than relying on a canned abstraction, but it came out the\nother way around.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 12 Jan 2024 10:46:50 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: tablecmds.c/MergeAttributes() cleanup"
},
{
"msg_contents": "On 06.12.23 09:23, Peter Eisentraut wrote:\n> The (now) second patch is also still of interest to me, but I have since \n> noticed that I think [0] should be fixed first, but to make that fix \n> simpler, I would like the first patch from here.\n> \n> [0]: \n> https://www.postgresql.org/message-id/flat/24656cec-d6ef-4d15-8b5b-e8dfc9c833a7%40eisentraut.org\n\nThe remaining patch in this series needed a rebase and adjustment.\n\nThe above precondition still applies.",
"msg_date": "Mon, 22 Jan 2024 13:43:38 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: tablecmds.c/MergeAttributes() cleanup"
},
{
"msg_contents": "Hi Peter,\n\nOn Mon, Jan 22, 2024 at 6:13 PM Peter Eisentraut <[email protected]> wrote:\n>\n> On 06.12.23 09:23, Peter Eisentraut wrote:\n> > The (now) second patch is also still of interest to me, but I have since\n> > noticed that I think [0] should be fixed first, but to make that fix\n> > simpler, I would like the first patch from here.\n> >\n> > [0]:\n> > https://www.postgresql.org/message-id/flat/24656cec-d6ef-4d15-8b5b-e8dfc9c833a7%40eisentraut.org\n>\n> The remaining patch in this series needed a rebase and adjustment.\n>\n> The above precondition still applies.\n\nWhile working on identity support and now while looking at the\ncompression problem you referred to, I found MergeAttribute() to be\nhard to read. It's hard to follow high level logic in that function\nsince the function is not modular. I took a stab at modularising a\npart of it. Attached is the resulting patch series.\n\n0001 is your patch as is\n0002 is pgindent fix and also fixing what I think is a typo/thinko\nfrom 0001. If you are fine with the changes, 0002 should be merged\ninto 0003.\n0003 separates the part of code merging a child attribute to the\ncorresponding inherited attribute into its own function.\n0004 does the same for code merging inherited attributes incrementally.\n\nI have kept 0003 and 0004 separate in case we pick one and not the\nother. But they can be committed as a single commit.\n\nThe two new functions have some common code and some differences.\nCommon code is not surprising since merging attributes whether from\nchild definition or from inheritance parents will have common rules.\nDifferences are expected in cases when child attributes need to be\ntreated differently. But the differences may point us to some\nyet-unknown bugs; compression being one of those differences. I think\nthe next step should combine these functions into one so that all the\nlogic to merge one attribute is at one place. I haven't attempted it;\nwanted to propose the idea first.\n\nI can see that this moduralization will lead to another and we will be\nable to reduce MergeAttribute() to a set of function calls reflecting\nits high level logic and push the detailed implementation into minion\nfunctions like this.\n\n-- \nBest Wishes,\nAshutosh Bapat",
"msg_date": "Wed, 24 Jan 2024 11:57:45 +0530",
"msg_from": "Ashutosh Bapat <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: tablecmds.c/MergeAttributes() cleanup"
},
{
"msg_contents": "On 24.01.24 07:27, Ashutosh Bapat wrote:\n> While working on identity support and now while looking at the\n> compression problem you referred to, I found MergeAttribute() to be\n> hard to read. It's hard to follow high level logic in that function\n> since the function is not modular. I took a stab at modularising a\n> part of it. Attached is the resulting patch series.\n> \n> 0001 is your patch as is\n> 0002 is pgindent fix and also fixing what I think is a typo/thinko\n> from 0001. If you are fine with the changes, 0002 should be merged\n> into 0003.\n> 0003 separates the part of code merging a child attribute to the\n> corresponding inherited attribute into its own function.\n> 0004 does the same for code merging inherited attributes incrementally.\n> \n> I have kept 0003 and 0004 separate in case we pick one and not the\n> other. But they can be committed as a single commit.\n\nI have committed all this. These are great improvements.\n\n(One little change I made to your 0003 and 0004 patches is that I kept \nthe check whether the new column matches an existing one by name in \nMergeAttributes(). I found that pushing that down made the logic in \nMergeAttributes() too hard to follow. But it's pretty much the same.)\n\n\n\n",
"msg_date": "Fri, 26 Jan 2024 14:42:17 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: tablecmds.c/MergeAttributes() cleanup"
},
{
"msg_contents": "Hello Peter,\n\n26.01.2024 16:42, Peter Eisentraut wrote:\n>\n> I have committed all this. These are great improvements.\n>\n\nPlease look at the segmentation fault triggered by the following query since\n4d969b2f8:\nCREATE TABLE t1(a text COMPRESSION pglz);\nCREATE TABLE t2(a text);\nCREATE TABLE t3() INHERITS(t1, t2);\nNOTICE: merging multiple inherited definitions of column \"a\"\nserver closed the connection unexpectedly\n This probably means the server terminated abnormally\n before or while processing the request.\n\nCore was generated by `postgres: law regression [local] CREATE TABLE '.\nProgram terminated with signal SIGSEGV, Segmentation fault.\n\n(gdb) bt\n#0 __strcmp_avx2 () at ../sysdeps/x86_64/multiarch/strcmp-avx2.S:116\n#1 0x00005606fbcc9d52 in MergeAttributes (columns=0x0, supers=supers@entry=0x5606fe293d30, relpersistence=112 'p', \nis_partition=false, supconstr=supconstr@entry=0x7fff4046d410, supnotnulls=supnotnulls@entry=0x7fff4046d418)\n at tablecmds.c:2811\n#2 0x00005606fbccd764 in DefineRelation (stmt=stmt@entry=0x5606fe26a130, relkind=relkind@entry=114 'r', ownerId=10, \nownerId@entry=0, typaddress=typaddress@entry=0x0,\n queryString=queryString@entry=0x5606fe2695c0 \"CREATE TABLE t3() INHERITS(t1, t2);\") at tablecmds.c:885\n...\n\nBest regards,\nAlexander\n\n\n",
"msg_date": "Sun, 28 Jan 2024 11:00:01 +0300",
"msg_from": "Alexander Lakhin <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: tablecmds.c/MergeAttributes() cleanup"
},
{
"msg_contents": "Hi Alexander,\n\nOn Sun, Jan 28, 2024 at 1:30 PM Alexander Lakhin <[email protected]> wrote:\n>\n> Hello Peter,\n>\n> 26.01.2024 16:42, Peter Eisentraut wrote:\n> >\n> > I have committed all this. These are great improvements.\n> >\n>\n> Please look at the segmentation fault triggered by the following query since\n> 4d969b2f8:\n> CREATE TABLE t1(a text COMPRESSION pglz);\n> CREATE TABLE t2(a text);\n> CREATE TABLE t3() INHERITS(t1, t2);\n> NOTICE: merging multiple inherited definitions of column \"a\"\n> server closed the connection unexpectedly\n> This probably means the server terminated abnormally\n> before or while processing the request.\n>\n> Core was generated by `postgres: law regression [local] CREATE TABLE '.\n> Program terminated with signal SIGSEGV, Segmentation fault.\n>\n> (gdb) bt\n> #0 __strcmp_avx2 () at ../sysdeps/x86_64/multiarch/strcmp-avx2.S:116\n> #1 0x00005606fbcc9d52 in MergeAttributes (columns=0x0, supers=supers@entry=0x5606fe293d30, relpersistence=112 'p',\n> is_partition=false, supconstr=supconstr@entry=0x7fff4046d410, supnotnulls=supnotnulls@entry=0x7fff4046d418)\n> at tablecmds.c:2811\n> #2 0x00005606fbccd764 in DefineRelation (stmt=stmt@entry=0x5606fe26a130, relkind=relkind@entry=114 'r', ownerId=10,\n> ownerId@entry=0, typaddress=typaddress@entry=0x0,\n> queryString=queryString@entry=0x5606fe2695c0 \"CREATE TABLE t3() INHERITS(t1, t2);\") at tablecmds.c:885\n\nThis bug existed even before the refactoring.Happens because strcmp()\nis called on NULL input (t2's compression is NULL). I already have a\nfix for this and will be posting it in [1].\n\n[1] https://www.postgresql.org/message-id/flat/24656cec-d6ef-4d15-8b5b-e8dfc9c833a7%40eisentraut.org\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Tue, 30 Jan 2024 11:52:43 +0530",
"msg_from": "Ashutosh Bapat <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: tablecmds.c/MergeAttributes() cleanup"
},
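
(Illustrative aside, not part of the original thread.) With t1 and t2 from the reproducer in place, the asymmetry Ashutosh describes is visible in the catalog: t1.a has an explicit compression method stored while t2.a has none, and that missing method is what reaches MergeAttributes() as a NULL string.

-- attcompression: 'p' = pglz, 'l' = lz4, '' = no explicit method set.
SELECT attrelid::regclass AS rel,
       attname,
       attcompression
FROM pg_attribute
WHERE attname = 'a'
  AND attrelid IN ('t1'::regclass, 't2'::regclass);
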
{
"msg_contents": "Hello,\n\n30.01.2024 09:22, Ashutosh Bapat wrote:\n>\n>> Please look at the segmentation fault triggered by the following query since\n>> 4d969b2f8:\n>> CREATE TABLE t1(a text COMPRESSION pglz);\n>> CREATE TABLE t2(a text);\n>> CREATE TABLE t3() INHERITS(t1, t2);\n>> NOTICE: merging multiple inherited definitions of column \"a\"\n>> server closed the connection unexpectedly\n>> This probably means the server terminated abnormally\n>> before or while processing the request.\n>>\n>> Core was generated by `postgres: law regression [local] CREATE TABLE '.\n>> Program terminated with signal SIGSEGV, Segmentation fault.\n>>\n>> (gdb) bt\n>> #0 __strcmp_avx2 () at ../sysdeps/x86_64/multiarch/strcmp-avx2.S:116\n>> #1 0x00005606fbcc9d52 in MergeAttributes (columns=0x0, supers=supers@entry=0x5606fe293d30, relpersistence=112 'p',\n>> is_partition=false, supconstr=supconstr@entry=0x7fff4046d410, supnotnulls=supnotnulls@entry=0x7fff4046d418)\n>> at tablecmds.c:2811\n>> #2 0x00005606fbccd764 in DefineRelation (stmt=stmt@entry=0x5606fe26a130, relkind=relkind@entry=114 'r', ownerId=10,\n>> ownerId@entry=0, typaddress=typaddress@entry=0x0,\n>> queryString=queryString@entry=0x5606fe2695c0 \"CREATE TABLE t3() INHERITS(t1, t2);\") at tablecmds.c:885\n> This bug existed even before the refactoring.Happens because strcmp()\n> is called on NULL input (t2's compression is NULL). I already have a\n> fix for this and will be posting it in [1].\n>\n> [1] https://www.postgresql.org/message-id/flat/24656cec-d6ef-4d15-8b5b-e8dfc9c833a7%40eisentraut.org\n>\n\nNow that that fix is closed with RwF [1], shouldn't this crash issue be\nadded to Open Items for v17?\n(I couldn't reproduce the crash on 4d969b2f8~1 nor on REL_16_STABLE.)\n\nhttps://commitfest.postgresql.org/47/4813/\n\nBest regards,\nAlexander\n\n\n",
"msg_date": "Sat, 20 Apr 2024 07:00:01 +0300",
"msg_from": "Alexander Lakhin <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: tablecmds.c/MergeAttributes() cleanup"
},
{
"msg_contents": "On Sat, Apr 20, 2024 at 9:30 AM Alexander Lakhin <[email protected]>\nwrote:\n\n> Hello,\n>\n> 30.01.2024 09:22, Ashutosh Bapat wrote:\n> >\n> >> Please look at the segmentation fault triggered by the following query\n> since\n> >> 4d969b2f8:\n> >> CREATE TABLE t1(a text COMPRESSION pglz);\n> >> CREATE TABLE t2(a text);\n> >> CREATE TABLE t3() INHERITS(t1, t2);\n> >> NOTICE: merging multiple inherited definitions of column \"a\"\n> >> server closed the connection unexpectedly\n> >> This probably means the server terminated abnormally\n> >> before or while processing the request.\n> >>\n> >> Core was generated by `postgres: law regression [local] CREATE TABLE\n> '.\n> >> Program terminated with signal SIGSEGV, Segmentation fault.\n> >>\n> >> (gdb) bt\n> >> #0 __strcmp_avx2 () at ../sysdeps/x86_64/multiarch/strcmp-avx2.S:116\n> >> #1 0x00005606fbcc9d52 in MergeAttributes (columns=0x0,\n> supers=supers@entry=0x5606fe293d30, relpersistence=112 'p',\n> >> is_partition=false, supconstr=supconstr@entry=0x7fff4046d410,\n> supnotnulls=supnotnulls@entry=0x7fff4046d418)\n> >> at tablecmds.c:2811\n> >> #2 0x00005606fbccd764 in DefineRelation (stmt=stmt@entry=0x5606fe26a130,\n> relkind=relkind@entry=114 'r', ownerId=10,\n> >> ownerId@entry=0, typaddress=typaddress@entry=0x0,\n> >> queryString=queryString@entry=0x5606fe2695c0 \"CREATE TABLE t3()\n> INHERITS(t1, t2);\") at tablecmds.c:885\n> > This bug existed even before the refactoring.Happens because strcmp()\n> > is called on NULL input (t2's compression is NULL). I already have a\n> > fix for this and will be posting it in [1].\n> >\n> > [1]\n> https://www.postgresql.org/message-id/flat/24656cec-d6ef-4d15-8b5b-e8dfc9c833a7%40eisentraut.org\n> >\n>\n> Now that that fix is closed with RwF [1], shouldn't this crash issue be\n> added to Open Items for v17?\n> (I couldn't reproduce the crash on 4d969b2f8~1 nor on REL_16_STABLE.)\n>\n> https://commitfest.postgresql.org/47/4813/\n\n\nYes please. Probably this issue surfaced again after we reverted\ncompression and storage fix? 
Please If that's the case, please add it to\nthe open items.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\nOn Sat, Apr 20, 2024 at 9:30 AM Alexander Lakhin <[email protected]> wrote:Hello,\n\n30.01.2024 09:22, Ashutosh Bapat wrote:\n>\n>> Please look at the segmentation fault triggered by the following query since\n>> 4d969b2f8:\n>> CREATE TABLE t1(a text COMPRESSION pglz);\n>> CREATE TABLE t2(a text);\n>> CREATE TABLE t3() INHERITS(t1, t2);\n>> NOTICE: merging multiple inherited definitions of column \"a\"\n>> server closed the connection unexpectedly\n>> This probably means the server terminated abnormally\n>> before or while processing the request.\n>>\n>> Core was generated by `postgres: law regression [local] CREATE TABLE '.\n>> Program terminated with signal SIGSEGV, Segmentation fault.\n>>\n>> (gdb) bt\n>> #0 __strcmp_avx2 () at ../sysdeps/x86_64/multiarch/strcmp-avx2.S:116\n>> #1 0x00005606fbcc9d52 in MergeAttributes (columns=0x0, supers=supers@entry=0x5606fe293d30, relpersistence=112 'p',\n>> is_partition=false, supconstr=supconstr@entry=0x7fff4046d410, supnotnulls=supnotnulls@entry=0x7fff4046d418)\n>> at tablecmds.c:2811\n>> #2 0x00005606fbccd764 in DefineRelation (stmt=stmt@entry=0x5606fe26a130, relkind=relkind@entry=114 'r', ownerId=10,\n>> ownerId@entry=0, typaddress=typaddress@entry=0x0,\n>> queryString=queryString@entry=0x5606fe2695c0 \"CREATE TABLE t3() INHERITS(t1, t2);\") at tablecmds.c:885\n> This bug existed even before the refactoring.Happens because strcmp()\n> is called on NULL input (t2's compression is NULL). I already have a\n> fix for this and will be posting it in [1].\n>\n> [1] https://www.postgresql.org/message-id/flat/24656cec-d6ef-4d15-8b5b-e8dfc9c833a7%40eisentraut.org\n>\n\nNow that that fix is closed with RwF [1], shouldn't this crash issue be\nadded to Open Items for v17?\n(I couldn't reproduce the crash on 4d969b2f8~1 nor on REL_16_STABLE.)\n\nhttps://commitfest.postgresql.org/47/4813/Yes please. Probably this issue surfaced again after we reverted compression and storage fix? Please If that's the case, please add it to the open items.-- Best Wishes,Ashutosh Bapat",
"msg_date": "Sat, 20 Apr 2024 09:46:41 +0530",
"msg_from": "Ashutosh Bapat <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: tablecmds.c/MergeAttributes() cleanup"
},
{
"msg_contents": "On Sat, Apr 20, 2024 at 12:17 AM Ashutosh Bapat\n<[email protected]> wrote:\n> Yes please. Probably this issue surfaced again after we reverted compression and storage fix? Please If that's the case, please add it to the open items.\n\nThis is still on the open items list and I'm not clear who, if anyone,\nis working on fixing it.\n\nIt would be good if someone fixed it. :-)\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 29 Apr 2024 09:15:52 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: tablecmds.c/MergeAttributes() cleanup"
},
{
"msg_contents": "On Mon, Apr 29, 2024 at 6:46 PM Robert Haas <[email protected]> wrote:\n\n> On Sat, Apr 20, 2024 at 12:17 AM Ashutosh Bapat\n> <[email protected]> wrote:\n> > Yes please. Probably this issue surfaced again after we reverted\n> compression and storage fix? Please If that's the case, please add it to\n> the open items.\n>\n> This is still on the open items list and I'm not clear who, if anyone,\n> is working on fixing it.\n>\n> It would be good if someone fixed it. :-)\n>\n\nHere's a patch fixing it.\n\nI have added the reproducer provided by Alexander as a test. I thought of\nimproving that test further to test the compression of the inherited table\nbut did not implement it since we haven't documented the behaviour of\ncompression with inheritance. Defining and implementing compression\nbehaviour for inherited tables was the goal\nof 0413a556990ba628a3de8a0b58be020fd9a14ed0, which has been reverted.\n\n-- \nBest Wishes,\nAshutosh Bapat",
"msg_date": "Tue, 30 Apr 2024 11:49:33 +0530",
"msg_from": "Ashutosh Bapat <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: tablecmds.c/MergeAttributes() cleanup"
},
{
"msg_contents": "On Tue, Apr 30, 2024 at 2:19 AM Ashutosh Bapat\n<[email protected]> wrote:\n> On Mon, Apr 29, 2024 at 6:46 PM Robert Haas <[email protected]> wrote:\n>> On Sat, Apr 20, 2024 at 12:17 AM Ashutosh Bapat\n>> <[email protected]> wrote:\n>> > Yes please. Probably this issue surfaced again after we reverted compression and storage fix? Please If that's the case, please add it to the open items.\n>>\n>> This is still on the open items list and I'm not clear who, if anyone,\n>> is working on fixing it.\n>>\n>> It would be good if someone fixed it. :-)\n>\n> Here's a patch fixing it.\n>\n> I have added the reproducer provided by Alexander as a test. I thought of improving that test further to test the compression of the inherited table but did not implement it since we haven't documented the behaviour of compression with inheritance. Defining and implementing compression behaviour for inherited tables was the goal of 0413a556990ba628a3de8a0b58be020fd9a14ed0, which has been reverted.\n\nI took a look at this patch. Currently this case crashes:\n\nCREATE TABLE cmdata(f1 text COMPRESSION pglz);\nCREATE TABLE cmdata3(f1 text);\nCREATE TABLE cminh() INHERITS (cmdata, cmdata3);\n\nThe patch makes this succeed, but I was initially unclear why it\ndidn't make it fail with an error instead: you can argue that cmdata\nhas pglz and cmdata3 has default and those are different. It seems\nthat prior precedent goes both ways -- we treat the absence of a\nSTORAGE specification as STORAGE EXTENDED and it conflicts with an\nexplicit storage specification on some other inheritance parent - but\non the other hand, we treat the absence of a default as compatible\nwith any explicit default, similar to what happens here. But I\neventually realized that you're just putting back behavior that we had\nin previous releases: pre-v17, the code already works the way this\npatch makes it do, and MergeChildAttribute() is already coded similar\nto this. As Alexander Lakhin said upthread, 4d969b2f8 seems to have\nbroken this.\n\nSo now I think this is committable, but I can't do it now because I\nwon't be around for the next few hours in case the buildfarm blows up.\nI can do it tomorrow, or perhaps Peter would like to handle it since\nit seems to have been his commit that introduced the issue.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 30 Apr 2024 15:48:27 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: tablecmds.c/MergeAttributes() cleanup"
},
{
"msg_contents": "On 30.04.24 21:48, Robert Haas wrote:\n> I took a look at this patch. Currently this case crashes:\n> \n> CREATE TABLE cmdata(f1 text COMPRESSION pglz);\n> CREATE TABLE cmdata3(f1 text);\n> CREATE TABLE cminh() INHERITS (cmdata, cmdata3);\n> \n> The patch makes this succeed, but I was initially unclear why it\n> didn't make it fail with an error instead: you can argue that cmdata\n> has pglz and cmdata3 has default and those are different. It seems\n> that prior precedent goes both ways -- we treat the absence of a\n> STORAGE specification as STORAGE EXTENDED and it conflicts with an\n> explicit storage specification on some other inheritance parent - but\n> on the other hand, we treat the absence of a default as compatible\n> with any explicit default, similar to what happens here.\n\nThe actual behavior here is arguably not ideal. It was the purpose of \nthe other thread mentioned upthread to improve that, but that was not \nsuccessful for the time being.\n\n> So now I think this is committable, but I can't do it now because I\n> won't be around for the next few hours in case the buildfarm blows up.\n> I can do it tomorrow, or perhaps Peter would like to handle it since\n> it seems to have been his commit that introduced the issue.\n\nI have committed it now.\n\n\n\n",
"msg_date": "Fri, 3 May 2024 11:17:35 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: tablecmds.c/MergeAttributes() cleanup"
},
{
"msg_contents": "On Fri, May 3, 2024 at 2:47 PM Peter Eisentraut <[email protected]>\nwrote:\n\n> On 30.04.24 21:48, Robert Haas wrote:\n> > I took a look at this patch. Currently this case crashes:\n> >\n> > CREATE TABLE cmdata(f1 text COMPRESSION pglz);\n> > CREATE TABLE cmdata3(f1 text);\n> > CREATE TABLE cminh() INHERITS (cmdata, cmdata3);\n> >\n> > The patch makes this succeed, but I was initially unclear why it\n> > didn't make it fail with an error instead: you can argue that cmdata\n> > has pglz and cmdata3 has default and those are different. It seems\n> > that prior precedent goes both ways -- we treat the absence of a\n> > STORAGE specification as STORAGE EXTENDED and it conflicts with an\n> > explicit storage specification on some other inheritance parent - but\n> > on the other hand, we treat the absence of a default as compatible\n> > with any explicit default, similar to what happens here.\n>\n> The actual behavior here is arguably not ideal. It was the purpose of\n> the other thread mentioned upthread to improve that, but that was not\n> successful for the time being.\n>\n> > So now I think this is committable, but I can't do it now because I\n> > won't be around for the next few hours in case the buildfarm blows up.\n> > I can do it tomorrow, or perhaps Peter would like to handle it since\n> > it seems to have been his commit that introduced the issue.\n>\n> I have committed it now.\n>\n>\nThanks Peter.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\nOn Fri, May 3, 2024 at 2:47 PM Peter Eisentraut <[email protected]> wrote:On 30.04.24 21:48, Robert Haas wrote:\n> I took a look at this patch. Currently this case crashes:\n> \n> CREATE TABLE cmdata(f1 text COMPRESSION pglz);\n> CREATE TABLE cmdata3(f1 text);\n> CREATE TABLE cminh() INHERITS (cmdata, cmdata3);\n> \n> The patch makes this succeed, but I was initially unclear why it\n> didn't make it fail with an error instead: you can argue that cmdata\n> has pglz and cmdata3 has default and those are different. It seems\n> that prior precedent goes both ways -- we treat the absence of a\n> STORAGE specification as STORAGE EXTENDED and it conflicts with an\n> explicit storage specification on some other inheritance parent - but\n> on the other hand, we treat the absence of a default as compatible\n> with any explicit default, similar to what happens here.\n\nThe actual behavior here is arguably not ideal. It was the purpose of \nthe other thread mentioned upthread to improve that, but that was not \nsuccessful for the time being.\n\n> So now I think this is committable, but I can't do it now because I\n> won't be around for the next few hours in case the buildfarm blows up.\n> I can do it tomorrow, or perhaps Peter would like to handle it since\n> it seems to have been his commit that introduced the issue.\n\nI have committed it now.\n\nThanks Peter.-- Best Wishes,Ashutosh Bapat",
"msg_date": "Fri, 3 May 2024 15:02:52 +0530",
"msg_from": "Ashutosh Bapat <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: tablecmds.c/MergeAttributes() cleanup"
}
] |
[
{
"msg_contents": "This patch adds a few examples to demonstrate the following:\n\n* The existence of the ctid column on every table\n* The utility of ctds in self joins\n* A practical usage of SKIP LOCKED\n\nThe reasoning for this is a bit long, but if you're interested, keep\nreading.\n\nIn the past, there has been a desire to see a LIMIT clause of some sort on\nUPDATE and DELETE statements. The reason for this usually stems from having\na large archive or backfill operation that if done in one single\ntransaction would overwhelm normal operations, either by the transaction\nfailing outright, locking too many rows, flooding the WAL causing replica\nlag, or starving other processes of limited I/O.\n\nThe reasons for not adding a LIMIT clause are pretty straightforward: it\nisn't in the SQL Standard, and UPDATE/DELETE operations are unordered\noperations, so updating 1000 rows randomly isn't a great idea. The people\nwanting the LIMIT clause were undeterred by this, because they know that\nthey intend to keep issuing updates until they run out of rows to update.\n\nGiven these limitations, I would write something like this:\n\nWITH doomed AS (\n SELECT t.id\n FROM my_table AS t\n WHERE t.expiration_date < :'some_archive_date'\n FOR UPDATE SKIP LOCKED\n LIMIT 1000 )\nDELETE FROM my_table\nWHERE id IN (SELECT id FROM doomed );\n\nThis wouldn't interfere with any other updates, so I felt good about it\nrunning when the system was not-too-busy. I'd then write a script to run\nthat in a loop, with sleeps to allow the replicas a chance to catch their\nbreath. Then, when the rowcount finally dipped below 1000, I'd issue the\nfinal\n\nDELETE FROM my_table WHERE expiration_date < :'some_archive_date';\n\nAnd this was ok, because at that point I have good reason to believe that\nthere are at most 1000 rows lingering out there, so waiting on locks for\nthose was no big deal.\n\nBut a query like this involves one scan along one index (or worse, a seq\nscan) followed by another scan, either index or seq. Either way, we're\ntaking up a lot of cache with rows we don't even care about.\n\nThen in v12, the query planner got hip to bitmap tidscans, allowing for\nthis optimization:\n\nWITH doomed AS (\n SELECT t.ctid AS tid\n FROM my_table AS t\n WHERE t.expiration_date < :'some_archive_date'\n FOR UPDATE SKIP LOCKED\n LIMIT 1000 )\nDELETE FROM my_table\nUSING doomed WHERE my_table.ctid = doomed.tid;\n\nAnd this works pretty well, especially if you set up a partial index to\nmeet the quals in the CTE. But we don't document this anywhere, and until\nUPDATE and DELETE get a LIMIT clause, we probably should document this\nworkaround.",
"msg_date": "Wed, 28 Jun 2023 14:20:35 -0400",
"msg_from": "Corey Huinker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Document efficient self-joins / UPDATE LIMIT techniques."
},
{
"msg_contents": "On Wed, Jun 28, 2023 at 2:20 PM Corey Huinker <[email protected]>\nwrote:\n\n> This patch adds a few examples to demonstrate the following:\n>\n\nBumping so CF app can see thread.\n\n>\n\nOn Wed, Jun 28, 2023 at 2:20 PM Corey Huinker <[email protected]> wrote:This patch adds a few examples to demonstrate the following:Bumping so CF app can see thread.",
"msg_date": "Thu, 31 Aug 2023 15:30:15 -0400",
"msg_from": "Corey Huinker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Document efficient self-joins / UPDATE LIMIT techniques."
},
{
"msg_contents": "Hi.\n-----------------------------------------\nIn cases where a DML operation involving many rows must be performed,\nand that table experiences numerous other simultaneous DML operations,\na FOR UPDATE clause used in conjunction with SKIP LOCKED can be useful\nfor performing partial DML operations:\n\nWITH mods AS (SELECT ctid FROM mytable\n WHERE status = 'active' AND retries > 10\n ORDER BY id FOR UPDATE SKIP LOCKED)\nUPDATE mytable SET status = 'failed'\nFROM mods WHERE mytable.ctid = mods.ctid\n\nThis allows the DML operation to be performed in parts, avoiding\nlocking, until such time as the set of rows that remain to be modified\nis small enough that the locking will not affect overall performance,\nat which point the same statement can be issued without the SKIP\nLOCKED clause to ensure that no rows were overlooked.\n----------------------------------\nmods found out the ctids to be updated, update mytable actually do the update.\nI didn't get \"This allows the DML operation to be performed in parts\".\n\nomit \"at which point\", the last sentence still makes sense. so I\ndidn't get \"at which point\"?\n\nI am not native english speaker.\n\n\n",
"msg_date": "Mon, 25 Sep 2023 14:04:02 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Document efficient self-joins / UPDATE LIMIT techniques."
},
{
"msg_contents": "On Wed, 2023-06-28 at 14:20 -0400, Corey Huinker wrote:\n> This patch adds a few examples to demonstrate the following:\n> \n> * The existence of the ctid column on every table\n> * The utility of ctds in self joins\n> * A practical usage of SKIP LOCKED\n\nI had a look at your patch, and I am in favor of the general idea.\n\nStyle considerations:\n---------------------\n\nI think the SQL statements should end with semicolons. Our SQL examples\nare usually written like that.\n\nOur general style with CTEs seems to be (according to\nhttps://www.postgresql.org/docs/current/queries-with.html):\n\n WITH quaxi AS (\n SELECT ...\n )\n SELECT ...;\n\nAbout the DELETE example:\n-------------------------\n\nThe text suggests that a single, big DELETE operation can consume\ntoo many resources. That may be true, but the sum of your DELETEs\nwill consume even more resources.\n\nIn my experience, the bigger problem with bulk deletes like that is\nthat you can run into deadlocks easily, so maybe that would be a\nbetter rationale to give. You could say that with this technique,\nyou can force the lock to be taken in a certain order, which will\navoid the possibility of deadlock with other such DELETEs.\n\nAbout the SELECT example:\n-------------------------\n\nThat example belongs to UPDATE, I'd say, because that is the main\noperation.\n\nThe reason you give (avoid excessive locking) is good.\nPerhaps you could mention that updating in batches also avoids\nexcessive bload (if you VACUUM between the batches).\n\nAbout the UPDATE example:\n-------------------------\n\nI think that could go, because it is pretty similar to the previous\none. You even use ctid in both examples.\n\nStatus set to \"waiting for author\".\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Wed, 04 Oct 2023 15:39:08 +0200",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Document efficient self-joins / UPDATE LIMIT techniques."
},
{
"msg_contents": ">\n>\n> I think the SQL statements should end with semicolons. Our SQL examples\n> are usually written like that.\n>\n\nok\n\n\n\n>\n> Our general style with CTEs seems to be (according to\n> https://www.postgresql.org/docs/current/queries-with.html):\n>\n> WITH quaxi AS (\n> SELECT ...\n> )\n> SELECT ...;\n>\n\ndone\n\n\n>\n> About the DELETE example:\n> -------------------------\n>\n> The text suggests that a single, big DELETE operation can consume\n> too many resources. That may be true, but the sum of your DELETEs\n> will consume even more resources.\n>\n> In my experience, the bigger problem with bulk deletes like that is\n> that you can run into deadlocks easily, so maybe that would be a\n> better rationale to give. You could say that with this technique,\n> you can force the lock to be taken in a certain order, which will\n> avoid the possibility of deadlock with other such DELETEs.\n>\n\nI've changed the wording to address your concerns:\n\n While doing this will actually increase the total amount of work\nperformed, it can break the work into chunks that have a more acceptable\nimpact on other workloads.\n\n\n\n>\n> About the SELECT example:\n> -------------------------\n>\n> That example belongs to UPDATE, I'd say, because that is the main\n> operation.\n>\n\nI'm iffy on that suggestion. A big part of putting it in SELECT was the\nfact that it shows usage of SKIP LOCKED and FOR UPDATE.\n\n\n>\n> The reason you give (avoid excessive locking) is good.\n> Perhaps you could mention that updating in batches also avoids\n> excessive bload (if you VACUUM between the batches).\n>\n\nI went with:\n\n This technique has the additional benefit that it can reduce the overal\nbloat of the updated table if the table can be vacuumed in between batch\nupdates.\n\n\n>\n> About the UPDATE example:\n> -------------------------\n>\n> I think that could go, because it is pretty similar to the previous\n> one. You even use ctid in both examples.\n>\n\nIt is similar, but the idea here is to aid in discovery. A user might miss\nthe technique for update if it's only documented in delete, and even if\nthey did see it there, they might not realize that it works for both UPDATE\nand DELETE. We could make reference links from one to the other, but that\nseems like extra work for the reader.",
"msg_date": "Tue, 31 Oct 2023 14:12:17 -0400",
"msg_from": "Corey Huinker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Document efficient self-joins / UPDATE LIMIT techniques."
},
{
"msg_contents": "On Tue, 2023-10-31 at 14:12 -0400, Corey Huinker wrote:\n> \n> \n> > About the SELECT example:\n> > -------------------------\n> > \n> > That example belongs to UPDATE, I'd say, because that is the main\n> > operation.\n> \n> I'm iffy on that suggestion. A big part of putting it in SELECT was the fact\n> that it shows usage of SKIP LOCKED and FOR UPDATE.\n\nI can accept that.\n\n> \n> > About the UPDATE example:\n> > -------------------------\n> > \n> > I think that could go, because it is pretty similar to the previous\n> > one. You even use ctid in both examples.\n> \n> It is similar, but the idea here is to aid in discovery. A user might miss the\n> technique for update if it's only documented in delete, and even if they did see\n> it there, they might not realize that it works for both UPDATE and DELETE.\n> We could make reference links from one to the other, but that seems like extra\n> work for the reader.\n\nI am talking about the similarity between the SELECT and the UPDATE example.\nI don't agree with bloating the documentation with redundant examples just\nto save a user a click.\n\nI like the idea of a link. Perhaps:\n\n If you need to perform a large UPDATE in batches to avoid excessive bloat,\n deadlocks or to reduce the load on the server, look at the example in <link>.\n\nOther observations:\n\n @@ -234,6 +234,35 @@ DELETE FROM films\n In some cases the join style is easier to write or faster to\n execute than the sub-select style.\n </para>\n + <para>\n + In situations where a single operation would consume too many resources,\n + either causing the operation to fail or negatively impacting other workloads,\n + it may be desirable to break up a large <command>DELETE</command> into\n + multiple separate commands. While doing this will actually increase the\n + total amount of work performed, it can break the work into chunks that have\n + a more acceptable impact on other workloads. The\n + <glossterm linkend=\"glossary-sql-standard\">SQL standard</glossterm> does\n + not define a <literal>LIMIT</literal> clause for <command>DELETE</command>\n + operations, but it is possible get the equivalent functionality through the\n + <literal>USING</literal> clause to a\n + <link linkend=\"queries-with\">Common Table Expression</link> which identifies\n + a subset of rows to be deleted, locks those rows, and returns their system\n + column <link linkend=\"ddl-system-columns-ctid\">ctid</link> values:\n\nI don't think that reducing the load on the server is such a great use case\nthat we should recommend it as \"best practice\" in the documentation (because,\nas your patch now mentions, it doesn't reduce the overall load).\n\nI also don't think we need a verbal description of what the following query does.\n\nHow about something like:\n\n\"If you have to delete lots of rows, it can make sense to perform the operation\n in several smaller batches to reduce the risk of deadlocks. 
The\n <glossterm linkend=\"glossary-sql-standard\">SQL standard</glossterm> does\n not define a <literal>LIMIT</literal> clause for <command>DELETE</command>,\n but it is possible to achieve a similar effect with a self-join on\n the system column <link linkend=\"ddl-system-columns-ctid\">ctid</link>:\"\n\n +<programlisting>\n +WITH delete_batch AS (\n + SELECT l.ctid\n + FROM user_logs AS l\n + WHERE l.status = 'archived'\n + ORDER BY l.creation_date\n + LIMIT 10000\n + FOR UPDATE\n +)\n +DELETE FROM user_logs AS ul\n +USING delete_branch AS del\n +WHERE ul.ctid = del.ctid;\n +</programlisting>\n + This allows for flexible search criteria within the CTE and an efficient self-join.\n + </para>\n\nThe last sentence is redundant, I'd say.\n\nBut you could add:\n\n\"An added benefit is that by using an <literal>ORDER BY</literal> clause in\n the subquery, you can determine the order in which the rows will be locked\n and deleted, which will prevent deadlocks with other statements that lock\n the rows in the same order.\"\n\nBut if you do that, you had better use \"ORDER BY id\" or something else that\nlooks more like a unique column.\n\n--- a/doc/src/sgml/ref/select.sgml\n+++ b/doc/src/sgml/ref/select.sgml\n@@ -1679,6 +1679,30 @@ SELECT * FROM (SELECT * FROM mytable FOR UPDATE) ss WHERE col1 = 5;\n condition is not textually within the sub-query.\n </para>\n\n+ <para>\n+ In cases where a <acronym>DML</acronym> operation involving many rows\n\nI think we should avoid using DML. Beginner might not know it, and it is\nnot an index term. My suggestion is \"data modification statement/operation\".\n\n+ must be performed, and that table experiences numerous other simultaneous\n+ <acronym>DML</acronym> operations, a <literal>FOR UPDATE</literal> clause\n+ used in conjunction with <literal>SKIP LOCKED</literal> can be useful for\n+ performing partial <acronym>DML</acronym> operations:\n+\n+<programlisting>\n+WITH mods AS (\n+ SELECT ctid FROM mytable\n+ WHERE status = 'active' AND retries > 10\n+ ORDER BY id FOR UPDATE SKIP LOCKED\n+)\n+UPDATE mytable SET status = 'failed'\n+FROM mods WHERE mytable.ctid = mods.ctid;\n+</programlisting>\n+\n+ This allows the <acronym>DML</acronym> operation to be performed in parts, avoiding locking,\n+ until such time as the set of rows that remain to be modified is small enough\n\n\"until such time as\" does not sound English to me. \"Until the number of rows that remain\"\nwould be better, in my opinion.\n\n+ that the locking will not affect overall performance, at which point the same\n\n\"that the locking\" --> \"that locking them\"\n\n+ statement can be issued without the <literal>SKIP LOCKED</literal> clause to ensure\n+ that no rows were overlooked. This technique has the additional benefit that it can reduce\n+ the overal bloat of the updated table if the table can be vacuumed in between batch updates.\n+ </para>\n\n\"overal\" --> \"overall\"\n\nI don't think you should use \"vacuum\" as a verb.\nSuggestion: \"if you perform <command>VACUUM</command> on the table between individual\nupdate batches\".\n\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Thu, 02 Nov 2023 14:58:22 +0100",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Document efficient self-joins / UPDATE LIMIT techniques."
},
{
"msg_contents": "On Tue, 31 Oct 2023 at 23:42, Corey Huinker <[email protected]> wrote:\n>>\n>>\n>> I think the SQL statements should end with semicolons. Our SQL examples\n>> are usually written like that.\n>\n>\n> ok\n>\n>\n>>\n>>\n>> Our general style with CTEs seems to be (according to\n>> https://www.postgresql.org/docs/current/queries-with.html):\n>>\n>> WITH quaxi AS (\n>> SELECT ...\n>> )\n>> SELECT ...;\n>\n>\n> done\n>\n>>\n>>\n>> About the DELETE example:\n>> -------------------------\n>>\n>> The text suggests that a single, big DELETE operation can consume\n>> too many resources. That may be true, but the sum of your DELETEs\n>> will consume even more resources.\n>>\n>> In my experience, the bigger problem with bulk deletes like that is\n>> that you can run into deadlocks easily, so maybe that would be a\n>> better rationale to give. You could say that with this technique,\n>> you can force the lock to be taken in a certain order, which will\n>> avoid the possibility of deadlock with other such DELETEs.\n>\n>\n> I've changed the wording to address your concerns:\n>\n> While doing this will actually increase the total amount of work performed, it can break the work into chunks that have a more acceptable impact on other workloads.\n>\n>\n>>\n>>\n>> About the SELECT example:\n>> -------------------------\n>>\n>> That example belongs to UPDATE, I'd say, because that is the main\n>> operation.\n>\n>\n> I'm iffy on that suggestion. A big part of putting it in SELECT was the fact that it shows usage of SKIP LOCKED and FOR UPDATE.\n>\n>>\n>>\n>> The reason you give (avoid excessive locking) is good.\n>> Perhaps you could mention that updating in batches also avoids\n>> excessive bload (if you VACUUM between the batches).\n>\n>\n> I went with:\n>\n> This technique has the additional benefit that it can reduce the overal bloat of the updated table if the table can be vacuumed in between batch updates.\n>\n>>\n>>\n>> About the UPDATE example:\n>> -------------------------\n>>\n>> I think that could go, because it is pretty similar to the previous\n>> one. You even use ctid in both examples.\n>\n>\n> It is similar, but the idea here is to aid in discovery. A user might miss the technique for update if it's only documented in delete, and even if they did see it there, they might not realize that it works for both UPDATE and DELETE. We could make reference links from one to the other, but that seems like extra work for the reader.\n\nI have changed the status of commitfest entry to \"Returned with\nFeedback\" as Laurenz's comments have not yet been resolved. Please\nhandle the comments and update the commitfest entry accordingly.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Sun, 14 Jan 2024 17:14:38 +0530",
"msg_from": "vignesh C <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Document efficient self-joins / UPDATE LIMIT techniques."
},
{
"msg_contents": ">\n> I have changed the status of commitfest entry to \"Returned with\n> Feedback\" as Laurenz's comments have not yet been resolved. Please\n> handle the comments and update the commitfest entry accordingly.\n>\n>\nHere's another attempt, applying Laurenz's feedback:\n\nI removed all changes to the SELECT documentation. That might seem strange\ngiven that the heavy lifting happens in the SELECT, but I'm working from\nthe assumption that people's greatest need for a ctid self-join will be\nbecause they are trying to find the LIMIT keyword on UPDATE/DELETE and\ncoming up empty.\n\nBecause the join syntax is subtly different between UPDATE and DELETE, I've\nkept code examples in both, but the detailed explanation is in UPDATE under\nthe anchor \"update-limit\" and the DELETE example links to it.",
"msg_date": "Sat, 3 Feb 2024 15:27:53 -0500",
"msg_from": "Corey Huinker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Document efficient self-joins / UPDATE LIMIT techniques."
},
{
"msg_contents": "On Sat, 2024-02-03 at 15:27 -0500, Corey Huinker wrote:\n> \n> Here's another attempt, applying Laurenz's feedback:\n\nI like this patch much better.\n\nSome comments:\n\n> --- a/doc/src/sgml/ref/delete.sgml\n> +++ b/doc/src/sgml/ref/delete.sgml\n> @@ -234,6 +234,24 @@ DELETE FROM films\n> In some cases the join style is easier to write or faster to\n> execute than the sub-select style.\n> </para>\n> + <para id=\"delete-limit\">\n> + While there is no <literal>LIMIT</literal> clause for\n> + <command>DELETE</command>, it is possible to get a similar effect\n> + using the method for <command>UPDATE</command> operations described\n> + <link linkend=\"update-limit\">in greater detail here</link>.\n> +<programlisting>\n> +WITH delete_batch AS (\n> + SELECT l.ctid\n> + FROM user_logs AS l\n> + WHERE l.status = 'archived'\n> + ORDER BY l.creation_date\n> + LIMIT 10000\n> + FOR UPDATE\n> +)\n> +DELETE FROM user_logs AS ul\n> +USING delete_branch AS del\n> +WHERE ul.ctid = del.ctid;\n> +</programlisting></para>\n> </refsect1>\n> \n> <refsect1>\n\n- About the style: there is usually an empty line between an ending </para>\n and the next starting <para>. It does not matter for correctness, but I\n think it makes the source easier to read.\n\n- I would rather have only \"here\" as link text rather than \"in greater details\n here\". Even better would be something that gives the reader a clue where\n the link will take her, like\n <link linkend=\"update-limit\">the documentation of <command>UPDATE</command></link>.\n\n- I am not sure if it is necessary to have the <programlisting> at all.\n I'd say that it is just a trivial variation of the UPDATE example.\n On the other hand, a beginner might find the example useful.\n Not sure.\n\nIf I had my way, I'd just keep the first paragraph, something like\n\n <para id=\"delete-limit\">\n While there is no <literal>LIMIT</literal> clause for\n <command>DELETE</command>, it is possible to get a similar effect\n using a self-join with a common table expression as described in the\n <link linkend=\"update-limit\"><command>UPDATE</command> examples</link>.\n </para>\n\n\n> diff --git a/doc/src/sgml/ref/update.sgml b/doc/src/sgml/ref/update.sgml\n> index 2ab24b0523..49e0dc29de 100644\n> --- a/doc/src/sgml/ref/update.sgml\n> +++ b/doc/src/sgml/ref/update.sgml\n> @@ -434,7 +434,6 @@ UPDATE wines SET stock = stock + 24 WHERE winename = 'Chateau Lafite 2003';\n> COMMIT;\n> </programlisting>\n> </para>\n> -\n> <para>\n> Change the <structfield>kind</structfield> column of the table\n> <structname>films</structname> in the row on which the cursor\n\nPlease don't.\n\n\nI'm mostly fine with the UPDATE example.\n\n> + it can make sense to perform the operation in smaller batches. Performing a\n> + <command>VACUUM</command> operation on the table in between batches can help\n> + reduce table bloat. The\n\nI think the \"in\" before between is unnecessary and had better be removed, but\nI'll defer to the native speaker.\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Thu, 08 Feb 2024 02:46:50 +0100",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Document efficient self-joins / UPDATE LIMIT techniques."
},
{
"msg_contents": ">\n>\n> - About the style: there is usually an empty line between an ending </para>\n> and the next starting <para>. It does not matter for correctness, but I\n> think it makes the source easier to read.\n>\n\nDone. I've seen them with spaces and without, and have no preference.\n\n\n>\n> - I would rather have only \"here\" as link text rather than \"in greater\n> details\n> here\". Even better would be something that gives the reader a clue where\n> the link will take her, like\n> <link linkend=\"update-limit\">the documentation of\n> <command>UPDATE</command></link>.\n>\n\nDone.\n\n>\n> - I am not sure if it is necessary to have the <programlisting> at all.\n> I'd say that it is just a trivial variation of the UPDATE example.\n> On the other hand, a beginner might find the example useful.\n> Not sure.\n>\n\nI think a beginner would find it useful. The join syntax for DELETE is\ndifferent from UPDATE in a way that has never made sense to me, and a\nperson with only the UPDATE example might try just replacing UPDATE WITH\nDELETE and eliminating the SET clause, and frustration would follow. We\nhave an opportunity to show the equivalent join in both cases, let's use it.\n\n\n\n> I think the \"in\" before between is unnecessary and had better be removed,\n> but\n> I'll defer to the native speaker.\n>\n\nThe \"in\" is more common when spoken. Removed.",
"msg_date": "Mon, 12 Feb 2024 11:45:26 -0500",
"msg_from": "Corey Huinker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Document efficient self-joins / UPDATE LIMIT techniques."
},
{
"msg_contents": "On Mon, 2024-02-12 at 11:45 -0500, Corey Huinker wrote:\n> \n> > - I am not sure if it is necessary to have the <programlisting> at all.\n> > I'd say that it is just a trivial variation of the UPDATE example.\n> > On the other hand, a beginner might find the example useful.\n> > Not sure.\n> \n> I think a beginner would find it useful. The join syntax for DELETE is different from\n> UPDATE in a way that has never made sense to me, and a person with only the UPDATE\n> example might try just replacing UPDATE WITH DELETE and eliminating the SET clause,\n> and frustration would follow. We have an opportunity to show the equivalent join in\n> both cases, let's use it.\n\nI think we can leave the decision to the committer.\n\n> > I think the \"in\" before between is unnecessary and had better be removed, but\n> > I'll defer to the native speaker.\n> \n> The \"in\" is more common when spoken. Removed.\n\nThe \"in\" is appropriate for intransitive use:\n\"I've been here and I've been there and I've been in between.\"\nBut: \"I have been between here and there.\"\n\nDo you plan to add it to the commitfest? If yes, I'd set it \"ready for committer\".\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Mon, 12 Feb 2024 17:54:33 +0100",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Document efficient self-joins / UPDATE LIMIT techniques."
},
{
"msg_contents": ">\n> Do you plan to add it to the commitfest? If yes, I'd set it \"ready for\n> committer\".\n>\n> Commitfest entry reanimated.\n\nDo you plan to add it to the commitfest? If yes, I'd set it \"ready for committer\".Commitfest entry reanimated.",
"msg_date": "Mon, 12 Feb 2024 12:24:46 -0500",
"msg_from": "Corey Huinker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Document efficient self-joins / UPDATE LIMIT techniques."
},
{
"msg_contents": "On Mon, 2024-02-12 at 12:24 -0500, Corey Huinker wrote:\n> > Do you plan to add it to the commitfest? If yes, I'd set it \"ready for committer\".\n> \n> Commitfest entry reanimated. \n\nTruly... you created a revenant in the already closed commitfest.\n\nI closed that again and added a new entry in the open commitfest.\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Tue, 13 Feb 2024 10:28:33 +0100",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Document efficient self-joins / UPDATE LIMIT techniques."
},
{
"msg_contents": "On Tue, Feb 13, 2024, at 10:28, Laurenz Albe wrote:\n> On Mon, 2024-02-12 at 12:24 -0500, Corey Huinker wrote:\n>> > Do you plan to add it to the commitfest? If yes, I'd set it \"ready for committer\".\n>> \n>> Commitfest entry reanimated. \n>\n> Truly... you created a revenant in the already closed commitfest.\n>\n> I closed that again and added a new entry in the open commitfest.\n>\n> Yours,\n> Laurenz Albe\n\nThis thread reminded me of the old discussion \"LIMIT for UPDATE and DELETE\" from 2014 [1].\n\nBack in 2014, it was considered a \"fringe feature\" by some. It is thought to be more commonplace today?\n\n/Joel\n\n[1] https://www.postgresql.org/message-id/flat/CADB9FDf-Vh6RnKAMZ4Rrg_YP9p3THdPbji8qe4qkxRuiOwm%3Dmg%40mail.gmail.com\n\n\n",
"msg_date": "Tue, 13 Feb 2024 17:51:20 +0100",
"msg_from": "\"Joel Jacobson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Document efficient self-joins / UPDATE LIMIT techniques."
},
{
"msg_contents": "On Tue, Feb 13, 2024 at 11:51 AM Joel Jacobson <[email protected]> wrote:\n\n> On Tue, Feb 13, 2024, at 10:28, Laurenz Albe wrote:\n> > On Mon, 2024-02-12 at 12:24 -0500, Corey Huinker wrote:\n> >> > Do you plan to add it to the commitfest? If yes, I'd set it \"ready\n> for committer\".\n> >>\n> >> Commitfest entry reanimated.\n> >\n> > Truly... you created a revenant in the already closed commitfest.\n> >\n> > I closed that again and added a new entry in the open commitfest.\n> >\n> > Yours,\n> > Laurenz Albe\n>\n> This thread reminded me of the old discussion \"LIMIT for UPDATE and\n> DELETE\" from 2014 [1].\n>\n> Back in 2014, it was considered a \"fringe feature\" by some. It is thought\n> to be more commonplace today?\n>\n> /Joel\n>\n> [1]\n> https://www.postgresql.org/message-id/flat/CADB9FDf-Vh6RnKAMZ4Rrg_YP9p3THdPbji8qe4qkxRuiOwm%3Dmg%40mail.gmail.com\n\n\nThis patch came out of a discussion at the last PgCon with the person who\nmade the \"fringe feature\" quote, who seemed quite supportive of documenting\nthe technique. The comment may have been in regards to actually\nimplementing a LIMIT clause on UPDATE and DELETE, which isn't in the SQL\nstandard and would be difficult to implement as the two statements have no\nconcept of ordering. Documenting the workaround would alleviate some\ninterest in implementing a nonstandard feature.\n\nAs for whether it's commonplace, when I was a consultant I had a number of\ncustomers that I had who bemoaned how large updates caused big replica lag,\nbasically punishing access to records they did care about in order to\nproperly archive or backfill records they don't care about. I used the\ntechnique a lot, putting the update/delete in a loop, and often running\nmultiple copies of the same script at times when I/O contention was low,\nbut if load levels rose it was trivial to just kill a few of the scripts\nuntil things calmed down.\n\nOn Tue, Feb 13, 2024 at 11:51 AM Joel Jacobson <[email protected]> wrote:On Tue, Feb 13, 2024, at 10:28, Laurenz Albe wrote:\n> On Mon, 2024-02-12 at 12:24 -0500, Corey Huinker wrote:\n>> > Do you plan to add it to the commitfest? If yes, I'd set it \"ready for committer\".\n>> \n>> Commitfest entry reanimated. \n>\n> Truly... you created a revenant in the already closed commitfest.\n>\n> I closed that again and added a new entry in the open commitfest.\n>\n> Yours,\n> Laurenz Albe\n\nThis thread reminded me of the old discussion \"LIMIT for UPDATE and DELETE\" from 2014 [1].\n\nBack in 2014, it was considered a \"fringe feature\" by some. It is thought to be more commonplace today?\n\n/Joel\n\n[1] https://www.postgresql.org/message-id/flat/CADB9FDf-Vh6RnKAMZ4Rrg_YP9p3THdPbji8qe4qkxRuiOwm%3Dmg%40mail.gmail.comThis patch came out of a discussion at the last PgCon with the person who made the \"fringe feature\" quote, who seemed quite supportive of documenting the technique. The comment may have been in regards to actually implementing a LIMIT clause on UPDATE and DELETE, which isn't in the SQL standard and would be difficult to implement as the two statements have no concept of ordering. Documenting the workaround would alleviate some interest in implementing a nonstandard feature.As for whether it's commonplace, when I was a consultant I had a number of customers that I had who bemoaned how large updates caused big replica lag, basically punishing access to records they did care about in order to properly archive or backfill records they don't care about. 
I used the technique a lot, putting the update/delete in a loop, and often running multiple copies of the same script at times when I/O contention was low, but if load levels rose it was trivial to just kill a few of the scripts until things calmed down.",
"msg_date": "Tue, 13 Feb 2024 17:56:51 -0500",
"msg_from": "Corey Huinker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Document efficient self-joins / UPDATE LIMIT techniques."
},
{
"msg_contents": "On Tue, Feb 13, 2024, at 23:56, Corey Huinker wrote:\n> This patch came out of a discussion at the last PgCon with the person \n> who made the \"fringe feature\" quote, who seemed quite supportive of \n> documenting the technique. The comment may have been in regards to \n> actually implementing a LIMIT clause on UPDATE and DELETE, which isn't \n> in the SQL standard and would be difficult to implement as the two \n> statements have no concept of ordering. Documenting the workaround \n> would alleviate some interest in implementing a nonstandard feature.\n\nThanks for sharing the background story.\n\n> As for whether it's commonplace, when I was a consultant I had a number \n> of customers that I had who bemoaned how large updates caused big \n> replica lag, basically punishing access to records they did care about \n> in order to properly archive or backfill records they don't care about. \n> I used the technique a lot, putting the update/delete in a loop, and \n> often running multiple copies of the same script at times when I/O \n> contention was low, but if load levels rose it was trivial to just kill \n> a few of the scripts until things calmed down.\n\nI've also used the technique quite a lot, but only using the PK,\ndidn't know about the ctid trick, so many thanks for documenting it.\n\n/Joel\n\n\n",
"msg_date": "Wed, 14 Feb 2024 17:55:07 +0100",
"msg_from": "\"Joel Jacobson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Document efficient self-joins / UPDATE LIMIT techniques."
},
{
"msg_contents": ">\n> > As for whether it's commonplace, when I was a consultant I had a number\n> > of customers that I had who bemoaned how large updates caused big\n> > replica lag, basically punishing access to records they did care about\n> > in order to properly archive or backfill records they don't care about.\n> > I used the technique a lot, putting the update/delete in a loop, and\n> > often running multiple copies of the same script at times when I/O\n> > contention was low, but if load levels rose it was trivial to just kill\n> > a few of the scripts until things calmed down.\n>\n> I've also used the technique quite a lot, but only using the PK,\n> didn't know about the ctid trick, so many thanks for documenting it.\n\n\ntid-scans only became a thing a few versions ago (12?). Prior to that, PK\nwas the only way to go.\n\n> As for whether it's commonplace, when I was a consultant I had a number \n> of customers that I had who bemoaned how large updates caused big \n> replica lag, basically punishing access to records they did care about \n> in order to properly archive or backfill records they don't care about. \n> I used the technique a lot, putting the update/delete in a loop, and \n> often running multiple copies of the same script at times when I/O \n> contention was low, but if load levels rose it was trivial to just kill \n> a few of the scripts until things calmed down.\n\nI've also used the technique quite a lot, but only using the PK,\ndidn't know about the ctid trick, so many thanks for documenting it.tid-scans only became a thing a few versions ago (12?). Prior to that, PK was the only way to go.",
"msg_date": "Thu, 15 Feb 2024 13:41:57 -0500",
"msg_from": "Corey Huinker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Document efficient self-joins / UPDATE LIMIT techniques."
},
{
"msg_contents": "Corey Huinker <[email protected]> writes:\n>> I've also used the technique quite a lot, but only using the PK,\n>> didn't know about the ctid trick, so many thanks for documenting it.\n\n> tid-scans only became a thing a few versions ago (12?). Prior to that, PK\n> was the only way to go.\n\nI think we had TID scans for awhile before it was possible to use\nthem in joins, although I don't recall the details of that.\nAnyway, pushed after some additional wordsmithing.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 07 Apr 2024 16:29:03 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Document efficient self-joins / UPDATE LIMIT techniques."
}
] |